[RFC] general support for Adaptive Pooling Ops (#2661)
Adaptive pooling ops can only be decomposed into their non-adaptive
counterparts in trivial cases.

For example, the current decomposition for AtenAdaptiveAvgPool1dOp in
DecomposeComplexOps.cpp supports only outSize = inSize (i.e., do literally
nothing) and outSize = 1 (i.e., take a batched average).

The reason adaptive pooling ops are difficult to lower to linalg is that
they do not have a constant stride. They take an input tensor of shape
(N, C, Hin) and an output size Hout, and compute the output tensor at
position (n, c, h) in the following way:

1. compute st(h) = (h * Hin) // Hout
2. compute en(h) = 1 + ((h + 1) * Hin - 1) // Hout
3. apply a reduction (max or avg) to the slice INPUT[n, c, st(h):en(h)]
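
For concreteness, here is a minimal NumPy sketch of this indexing scheme
(the function name and signature are illustrative only, not the
torch-mlir implementation):

import numpy as np

# Illustrative reference implementation of adaptive average pooling
# over the last dimension, following steps 1-3 above.
def adaptive_avg_pool1d(x: np.ndarray, out_size: int) -> np.ndarray:
    n, c, h_in = x.shape                 # input shape (N, C, Hin)
    out = np.empty((n, c, out_size), dtype=x.dtype)
    for h in range(out_size):
        st = (h * h_in) // out_size                # start of window
        en = 1 + ((h + 1) * h_in - 1) // out_size  # one past the end
        out[:, :, h] = x[:, :, st:en].mean(axis=-1)
    return out

Note that the window width en - st varies with h, which is what makes
the op non-constantly strided. The trivial cases above fall out
directly: out_size = Hin yields unit windows (a copy), and out_size = 1
yields one window spanning all of Hin (a batched average).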

The provided sample implementation (for ConvertAtenAdaptiveAvgPool1dOp)
uses tensor.extract to access the input tensor inside the payload of a
linalg generic op. This is likely an unattractive use of linalg generic
ops, which is why I am asking for some more targeted feedback on the
validity of this approach before attempting to support the many other
adaptive pooling ops.

Specifically:

- Is the performance of this implementation bad enough to warrant
targeting different dialects entirely (e.g., TMTensor, LinalgExt)?
- If the provided implementation performs acceptably for the community,
is it permissible to remove the adaptive pooling decompositions from
DecomposeComplexOps.cpp? Based on the current structure of the
-torch-decompose-complex-ops pass, it does not seem possible to
decompose the adaptive ops only in special cases (the pass appears to
get stuck in an infinite loop on a match failure). I would be happy to
instead incorporate the case logic into the conversion directly and
remove the decompositions once they are rendered completely obsolete.

As long as this approach is acceptable, I can clean up the
implementation with some helper functions, and quickly add support for
each of the remaining Adaptive pooling ops.

The Torch-MLIR Project

The Torch-MLIR project aims to provide first class compiler support from the PyTorch ecosystem to the MLIR ecosystem.

This project is participating in the LLVM Incubator process: as such, it is not part of any official LLVM release. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project is not yet endorsed as a component of LLVM.

PyTorch: An open source machine learning framework that facilitates the seamless transition from research and prototyping to production-level deployment.

MLIR: The MLIR project offers a novel approach to building extensible and reusable compiler architectures, one that addresses software fragmentation, reduces the cost of developing domain-specific compilers, improves compilation for heterogeneous hardware, and promotes compatibility between existing compilers.

Torch-MLIR: Several vendors have adopted MLIR as the middle layer in their systems, enabling them to map frameworks such as PyTorch, JAX, and TensorFlow into MLIR and subsequently lower them to their target hardware. We have observed half a dozen custom lowerings from PyTorch to MLIR; a shared lowering makes it easier for hardware vendors to focus on their unique value, rather than each needing to implement yet another PyTorch frontend for MLIR. The ultimate aim is for vendors to add support the way they add an LLVM target today, rather than each implementing Clang or a C++ frontend.


All the roads from PyTorch to Torch MLIR Dialect

We have a few paths to lower to the Torch MLIR Dialect.

[Simplified architecture diagram for README]

  • TorchScript: This is the most tested path down to the Torch MLIR Dialect; a minimal example follows this list.
  • LazyTensorCore: Read more details here.
  • TorchDynamo/PyTorch 2.0: We also have basic support; see our long-term roadmap and Thoughts on PyTorch 2.0 for more details.
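
As a concrete illustration of the TorchScript path, here is a minimal
sketch using the torch_mlir.compile Python API as it exists in this
snapshot (names and defaults may differ in other versions):

import torch
import torchvision
import torch_mlir

# Trace/script a ResNet18 and lower it into the Torch MLIR Dialect.
model = torchvision.models.resnet18(weights=None).eval()
example_input = torch.ones(1, 3, 224, 224)
module = torch_mlir.compile(model, example_input, output_type="torch")

# Other output types (e.g. "linalg-on-tensors", "tosa") lower further
# toward backends.
print(module)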

Project Communication

  • #torch-mlir channel on the LLVM Discord - this is the most active communication channel
  • GitHub issues here
  • torch-mlir section of LLVM Discourse

Meetings

Community Meeting / Developer Hour:

  • 1st and 3rd Monday of the month at 9 am PST
  • 2nd and 4th Monday of the month at 5 pm PST

Office Hours:

  • Every Thursday at 8:30 am PST

Meeting links can be found here.

Install torch-mlir snapshot

At the time of writing, we release pre-built snapshots of torch-mlir for Python 3.11 on Linux and macOS.

If you have Python 3.11, the following commands initialize a virtual environment.

python3.11 -m venv mlir_venv
source mlir_venv/bin/activate

Or, if you want to switch between multiple versions of Python using conda, you can create a conda environment with Python 3.11.

conda create -n torch-mlir python=3.11
conda activate torch-mlir
python -m pip install --upgrade pip

Then, we can install torch-mlir with the corresponding torch and torchvision nightlies.

pip install --pre torch-mlir torchvision \
  -f https://llvm.github.io/torch-mlir/package-index/ \
  --extra-index-url https://download.pytorch.org/whl/nightly/cpu
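
If the install succeeded, a quick sanity check is to import the package
(a generic Python one-liner, not an official verification step):

python -c "import torch_mlir; print(torch_mlir.__file__)"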

Demos

TorchScript ResNet18

Standalone script to convert a PyTorch ResNet18 model to MLIR and run it on the CPU backend:

# Get the latest example if you haven't checked out the code
wget https://raw.githubusercontent.com/llvm/torch-mlir/main/projects/pt1/examples/torchscript_resnet18.py

# Run ResNet18 as a standalone script.
python projects/pt1/examples/torchscript_resnet18.py

load image from https://upload.wikimedia.org/wikipedia/commons/2/26/YellowLabradorLooking_new.jpg
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/mlir/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100.0%
PyTorch prediction
[('Labrador retriever', 70.66319274902344), ('golden retriever', 4.956596374511719), ('Chesapeake Bay retriever', 4.195662975311279)]
torch-mlir prediction
[('Labrador retriever', 70.66320037841797), ('golden retriever', 4.956601619720459), ('Chesapeake Bay retriever', 4.195651531219482)]

Lazy Tensor Core

View examples here.

Repository Layout

The project follows the conventions of typical MLIR-based projects:

  • include/torch-mlir, lib structure for C++ MLIR compiler dialects/passes.
  • test for holding test code.
  • tools for torch-mlir-opt and such.
  • python top-level directory for Python code.

Developers

If you would like to develop and build torch-mlir from source, please look at the Development Notes.