torch-mlir/frontends/pytorch
Sean Silva 179105ca3e Add basic MLP's to the e2e curriculum.
These tests pass on the reference backend.

- Add aten.linear op + shape xfer function + ATen->Linalg lowering.
 - Note: this needs to be more automated, and needs to cover more cases.
 - Currently unimplemented caveats:
  - size-1 broadcasting for bias vector (either static-size-1 or ? case)
  - higher-rank aten.linear ops (not produced by torch.nn.Linear though)
  - type promotion (still don't even know the exact rules here)
- Add folder for torch.derefine op. Now the inliner can clean it up as
  it inlines. (call boundaries are a main place we need to insert
  torch.derefine) This is brittle -- the other important case is control
  flow which will need to be handled via an extension to
  RefineTypes.cpp (as will more robust call handling). River has an
  in-flight patch to update it to the new dataflow framework so I didn't
  want to do anything intrusive here.
    - Also adjust torch.derefine syntax to use the keyword `to` instead of
      `->`, as most type-only, cast-like ops do.
2021-04-27 12:18:54 -07:00
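The aten.linear shape-transfer function and its caveats mentioned in the commit above can be sketched in plain Python as follows. This is a hypothetical illustration of the rule, not the actual RefineTypes.cpp implementation; the function name and error handling are invented for clarity. It covers only the rank-2 case and rejects exactly the cases the commit lists as not implemented (size-1 bias broadcasting, higher-rank inputs):

```python
def linear_shape(input_shape, weight_shape, bias_shape=None):
    """Hypothetical shape-transfer rule for aten.linear.

    input: (M, K), weight: (N, K), optional bias: (N,) -> output (M, N).
    Mirrors the commit's caveats: no size-1 bias broadcasting and no
    higher-rank inputs.
    """
    if len(input_shape) != 2 or len(weight_shape) != 2:
        raise NotImplementedError("higher-rank aten.linear not handled")
    m, k = input_shape
    n, k2 = weight_shape
    if k != k2:
        raise ValueError("contracting dimensions disagree")
    if bias_shape is not None and bias_shape != (n,):
        # Covers the static-size-1 and dynamic (?) broadcast cases.
        raise NotImplementedError("size-1 bias broadcasting not handled")
    return (m, n)
```

For example, an input of shape (2, 3) against a weight of shape (4, 3) with a (4,)-shaped bias yields an output of shape (2, 4), matching what torch.nn.Linear produces.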
csrc Miscellaneous changes while trying to work on ResNet18 2021-04-27 11:51:11 -07:00
docs Add design sketch for aten fallback. 2020-11-24 18:13:35 -08:00
e2e_testing/torchscript Add basic MLP's to the e2e curriculum. 2021-04-27 12:18:54 -07:00
examples Add npcomp-verify-backend-contract pass. 2021-04-20 12:00:35 -07:00
python Add basic MLP's to the e2e curriculum. 2021-04-27 12:18:54 -07:00
test Add basic MLP's to the e2e curriculum. 2021-04-27 12:18:54 -07:00
utils [cleanup] Put the root class type for exportPath first. 2021-04-01 18:40:03 -07:00
CMakeLists.txt Delete old PyTorch 1.3 type dispatch oriented code paths. 2020-11-12 22:27:05 -08:00
LICENSE Add pytorch interface to ATen Dialect (#30) 2020-08-21 11:22:47 -07:00
README.md Update README. 2021-03-30 11:33:33 -07:00

README.md

NPComp - PyTorch frontend integration

This directory contains optional components for interfacing PyTorch to NPComp. Integration is targeted at multiple levels:

  • Via program capture with an ATen pseudo-device.
  • Via IR-level integration with PyTorch (via tracing or scripting interfaces).
  • Interfaces to facilitate checking against reference implementations and verification.

In all situations, the target dialects are maintained in the outer project, along with their lowerings to common intermediate dialects and backends. This directory should be purely about interfacing with the PyTorch/LibTorch components for extracting and executing programs.

The code in this directory is intended to integrate tightly with PyTorch and follows the PyTorch code style. See the overall frontend documentation for further details about code layout and integration philosophy. In particular, this directory exists to provide a working frontend to an MLIR-based PyTorch compilation flow and is not intended to be contributed to the LLVM monorepo. If the project is successful, it makes more sense to either break it out as an independent project that depends on LLVM/MLIR/npcomp or to contribute it upstream to PyTorch. However, since it will be quite some time before the components are in a state to support such a dependency, it is being carried in-tree in the interim.

Program capture via ATen dispatch

Integration with a pseudo-device is typified by code like the following:

import torch
import torch_mlir

lhs = torch.rand(2, 3)
rhs = torch.rand(3, 4)

mb = torch_mlir.ModuleBuilder()
# Ops executed inside this context are intercepted and recorded
# into the module held by `mb`.
with mb.capture_function("mm", [lhs, rhs]) as f:
    result = torch.mm(lhs, rhs)
    f.returns([result])

# Print the captured MLIR module.
mb.module.operation.print()

All operations that happen under the mb.capture_function context manager are intercepted via PyTorch's dispatcher, and an IR graph is constructed into the module held by the ModuleBuilder.
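The interception idea can be sketched with a toy recorder in plain Python. This is purely illustrative: the real ModuleBuilder hooks PyTorch's C++ dispatcher, whereas this sketch just monkey-patches a namespace of functions inside a context manager, and all names here (`ops`, `GraphRecorder`, `capture`) are invented for the example:

```python
import contextlib
import types

# Toy arithmetic "backend" standing in for ATen ops (hypothetical).
ops = types.SimpleNamespace(
    mm=lambda a, b: [[sum(x * y for x, y in zip(row, col))
                      for col in zip(*b)] for row in a],
)

class GraphRecorder:
    """Accumulates one node name per intercepted call."""
    def __init__(self):
        self.nodes = []

@contextlib.contextmanager
def capture(recorder, namespace, op_names):
    """Temporarily wrap ops so each call is executed *and* recorded."""
    originals = {n: getattr(namespace, n) for n in op_names}

    def make_wrapper(name, fn):
        def wrapper(*args):
            recorder.nodes.append(name)
            return fn(*args)
        return wrapper

    for n in op_names:
        setattr(namespace, n, make_wrapper(n, originals[n]))
    try:
        yield recorder
    finally:
        # Restore the original functions on exit.
        for n, fn in originals.items():
            setattr(namespace, n, fn)

rec = GraphRecorder()
with capture(rec, ops, ["mm"]):
    out = ops.mm([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# rec.nodes == ["mm"]; out == [[19, 22], [43, 50]]
```

After the context exits, calls to ops.mm are no longer recorded, analogous to leaving the mb.capture_function scope.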

This technique has several advantages. For training use cases, it generates a backward path automatically, using the same machinery that PyTorch itself uses. The resulting graph also tends to be simpler, since it does not reflect conditionals in the original Python code. Lastly, it is a natural fit when MLIR is being used as a frontend target for an actual device of some sort: the captured MLIR can go through a device-specific lowering path and the resulting code can run on that device. The implementation of this technique is largely modeled after pytorch/xla.
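The point that a captured graph does not reflect Python conditionals can be illustrated with a toy operator-overloading tracer. This is a sketch of the general tracing idea only, not the dispatcher mechanism the ModuleBuilder actually uses, and the `Traced` class is invented for the example:

```python
class Traced:
    """Toy value wrapper that records arithmetic as it executes."""
    def __init__(self, value, trace):
        self.value = value
        self.trace = trace

    def _record(self, name, result):
        self.trace.append(name)
        return Traced(result, self.trace)

    def __mul__(self, other):
        return self._record("mul", self.value * other)

    def __add__(self, other):
        return self._record("add", self.value + other)

def f(x):
    # A Python-level conditional: the trace only ever sees the branch
    # actually taken for the given input.
    if x.value > 0:
        return x * 2
    return x + 10

pos_trace, neg_trace = [], []
pos_out = f(Traced(3, pos_trace))   # records only "mul"
neg_out = f(Traced(-3, neg_trace))  # records only "add"
```

Each trace is specialized to one path through the function, which is why captured graphs tend to be simpler than the source program.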