torch-mlir/frontends/pytorch
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it consists mostly of mechanical changes
and renames. As a quick summary (we can change this down the road easily):
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract", which is cleaned-up IR (inlining, simplification,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.
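
Concretely, the entry point into this layering looks roughly like the
following from Python. This is a sketch only: it assumes the ModuleBuilder
and ClassAnnotator APIs currently exposed by the `torch_mlir` extension, and
the comments describe intent rather than exact pass names.

import torch
import torch_mlir

class MatmulModule(torch.nn.Module):
    def forward(self, lhs, rhs):
        return torch.mm(lhs, rhs)

scripted = torch.jit.script(MatmulModule())

mb = torch_mlir.ModuleBuilder()
# Import the TorchScript module as raw `torch` dialect IR. An empty
# ClassAnnotator is used here; real flows annotate argument shapes/dtypes.
mb.import_module(scripted._c, torch_mlir.ClassAnnotator())

# Cleanup passes (inlining, simplification, conversion to value tensors)
# then bring the IR to the "Torch backend contract" described above, which
# is where backend-specific lowering such as npcomp's TorchConversion
# picks up.
mb.module.operation.print()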

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
cmake/modules Rework the python build to a static assembly of MLIR+NPCOMP (#251) 2021-07-27 16:10:10 -07:00
csrc [torch-mlir earthmoving (1/N)] C/C++ code movement. 2021-09-10 21:44:37 -07:00
docs Add design sketch for aten fallback. 2020-11-24 18:13:35 -08:00
e2e_testing/torchscript Add basic support for lists. 2021-09-09 20:48:55 -07:00
examples Fix import in jupyter notebook. 2021-09-03 23:57:54 +00:00
python Add basic support for lists. 2021-09-09 20:48:55 -07:00
test Fix lowering of reduce ops 2021-09-08 15:30:15 -07:00
utils [cleanup] Put the root class type for exportPath first. 2021-04-01 18:40:03 -07:00
.gitignore Build packages for npcomp-torch. 2021-07-29 19:58:59 -07:00
CMakeLists.txt Rework the python build to a static assembly of MLIR+NPCOMP (#251) 2021-07-27 16:10:10 -07:00
LICENSE Add pytorch interface to ATen Dialect (#30) 2020-08-21 11:22:47 -07:00
README.md Update README. 2021-03-30 11:33:33 -07:00
setup.py Merge npcomp and mlir python namespaces. 2021-08-22 21:00:42 -07:00

README.md

NPComp - PyTorch frontend integration

This directory contains optional components for interfacing PyTorch to NPComp. Integration is targeted at multiple levels:

  • Via program capture with an ATen pseudo-device.
  • Via IR-level integration with PyTorch (via tracing or scripting interfaces).
  • Interfaces to facilitate checking against reference implementations and verification.

In all situations, the target dialects are maintained in the outer project, along with their lowerings to common intermediate dialects and backends. This directory should be purely about interfacing with the PyTorch/LibTorch components for extracting and executing programs.

The code in this directory is intended to integrate tightly with PyTorch and follows PyTorch's code style. See the overall documentation for frontends for further details about code layout and integration philosophy. In particular, this directory exists to provide a working frontend to an MLIR-based PyTorch compilation flow and is not intended to be contributed to the LLVM monorepo. If the project is successful, it will make more sense to either break it out as an independent project that depends on LLVM/MLIR/npcomp or contribute it upstream to PyTorch. However, since it will be quite some time before the components are in a state to support such a dependency, it is being carried in-tree in the interim.

Program capture via the ATen dispatcher.

Integration with a pseudo-device is typified by code like the following:

import torch
import torch_mlir

lhs = torch.rand(2, 3)
rhs = torch.rand(3, 4)

# Create a ModuleBuilder that owns the MLIR module being constructed.
mb = torch_mlir.ModuleBuilder()
# Every op dispatched inside this context is recorded into the module.
with mb.capture_function("mm", [lhs, rhs]) as f:
    result = torch.mm(lhs, rhs)
    f.returns([result])

# Print the captured MLIR.
mb.module.operation.print()

All operations that happen under the mb.capture_function context manager are intercepted via PyTorch's dispatcher, and an IR graph is constructed into the module held by the ModuleBuilder.

This technique has several advantages and disadvantages. For training use cases, it generates a backward path automatically using the same method that PyTorch natively uses. The resulting graph also tends to be simpler, since it does not reflect conditionals in the original Python code. Lastly, it is natural if MLIR is being used as a frontend target for an actual device of some sort: the MLIR can go through a device-specific lowering path and the resulting code run on a device. The implementation of this technique is largely modeled after pytorch/xla.
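
For the training point above, a minimal sketch of capturing a forward/backward step is shown below. How capture_function composes with autograd, and the gradient handling in particular, are assumptions for illustration rather than a documented recipe.

import torch
import torch_mlir

x = torch.rand(4, 3, requires_grad=True)
w = torch.rand(3, 2, requires_grad=True)

mb = torch_mlir.ModuleBuilder()
with mb.capture_function("train_step", [x, w]) as f:
    # Ops dispatched by the backward pass are intercepted the same way as
    # the forward ops, so the captured graph includes the backward path.
    loss = torch.mm(x, w).sum()
    loss.backward()
    # Returning the gradients here is illustrative only.
    f.returns([loss, x.grad, w.grad])

mb.module.operation.print()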