torch-mlir/frontends/pytorch
Sean Silva e749074bae Basic infra for annotate shapes and dtypes on arguments.
These allow users to annotate a known "type bound" on the argument,
which can seed shape/dtype inference. We don't rewrite the function
types as part of the import process (it will happen in a
yet-to-be-written pass) because:

1. We would need to interprocedurally rewrite all calls to keep the IR
   consistent. Currently, we have a place after GlobalizeObjectGraph but
   before we convert to tensors where this is convenient to do. Ideally,
   we would do this on the object graph representation.

2. We don't necessarily know that adjusting the function type is a legal
   calling convention change. The pass will have blessed knowledge (by
   the pass pipeline author) that adjusting the argument type based on
   the type bound is safe (which it frequently is).

3. Note that in principle, a type bound could be a fairly general thing
   (such as maximum sizes of dimensions, unions of multiple concrete
   types, etc.). The pass will in principle have logic to interpret the
   type bounds and to determine a suitable "best" (and legal) argument
   type.
2021-04-01 18:40:03 -07:00
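For illustration, a minimal sketch of how such an argument annotation might be attached from Python is shown below. It assumes the bindings expose a ClassAnnotator (per the "Add ability to annotate TorchScript classes" change listed further down) with a method for attaching per-argument shape/dtype bounds; the method name annotateShapesAndDtypes, the annotation format, and the import_module overload taking an annotator are assumptions for illustration, not confirmed by this page.

import torch
import torch_mlir

class MmModule(torch.nn.Module):
  def forward(self, lhs, rhs):
    return torch.mm(lhs, rhs)

scripted = torch.jit.script(MmModule())

mb = torch_mlir.ModuleBuilder()
annotator = torch_mlir.ClassAnnotator()
# Hypothetical call: attach a (shape, dtype) "type bound" to each argument
# of forward; None leaves the `self` argument unannotated. The bounds only
# seed later shape/dtype inference; function types are rewritten by a later pass.
annotator.annotateShapesAndDtypes(scripted._c._type(), ["forward"], [
    None,
    ((2, 3), torch.float32),
    ((3, 4), torch.float32),
])
mb.import_module(scripted._c, annotator)
mb.module.operation.print()
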
csrc            Basic infra for annotate shapes and dtypes on arguments. (2021-04-01)
docs            Add design sketch for aten fallback. (2020-11-24)
examples        Add support for "trailing_" and "out" variants of various ops. (2021-03-19)
python          Add support for "trailing_" and "out" variants of various ops. (2021-03-19)
test            Basic infra for annotate shapes and dtypes on arguments. (2021-04-01)
utils           Add ability to annotate TorchScript classes. (2021-02-25)
CMakeLists.txt  Delete old PyTorch 1.3 type dispatch oriented code paths. (2020-11-12)
LICENSE         Add pytorch interface to ATen Dialect (#30) (2020-08-21)
README.md       Update README. (2021-03-30)

README.md

NPComp - PyTorch frontend integration

This directory contains optional components for interfacing PyTorch to NPComp. Integration is targeted at multiple levels:

  • Via program capture with an ATen pseudo-device.
  • Via IR-level integration with PyTorch (via tracing or scripting interfaces).
  • Interfaces to facilitate checking against reference implementations and verification.

In all situations, the target dialects are maintained in the outer project, along with their lowerings to common intermediate dialects and backends. This directory should be purely about interfacing with the PyTorch/LibTorch components for extracting and executing programs.

The code in this directory is intended to integrate tightly with PyTorch and follows PyTorch's code style. See the overall frontend documentation for further details about code layout and integration philosophy. In particular, this directory exists to provide a working frontend to an MLIR-based PyTorch compilation flow and is not intended to be contributed to the LLVM monorepo. If the project is successful, it will make more sense to either break it out as an independent project that depends on LLVM/MLIR/npcomp or to contribute it upstream to PyTorch. However, since it will be quite some time before the components are in a state to support such a dependency, it is being carried in-tree in the interim.

Program capture via ATen dispatch interception

Integration with a pseudo-device is typified by code like the following:

import torch
import torch_mlir

lhs = torch.rand(2, 3)
rhs = torch.rand(3, 4)

mb = torch_mlir.ModuleBuilder()
with mb.capture_function("mm", [lhs, rhs]) as f:
  result = torch.mm(lhs, rhs)
  f.returns([result])

mb.module.operation.print()

All operations that happen under the mb.capture_function context manager are intercepted via PyTorch's dispatcher, and an IR graph is constructed into the module held by the ModuleBuilder.

This technique has both advantages and disadvantages. For training use cases, it generates the backward path automatically, using the same machinery that PyTorch uses natively. The resulting graph also tends to be simpler, since it does not reflect conditionals in the original Python code. Lastly, it is a natural fit when MLIR is being used as a frontend target for an actual device of some sort; in that case, the MLIR can go through a device-specific lowering path and the resulting code can run on the device. The implementation of this technique is largely modeled after pytorch/xla.
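The paragraph above implies that the autograd-generated backward ops can be captured in the same way as the forward ops. The following is a minimal sketch under that assumption; whether model parameters must also be listed explicitly as captured inputs depends on the capture implementation and is not documented here.

import torch
import torch_mlir

model = torch.nn.Linear(3, 4)
inputs = torch.rand(2, 3)

mb = torch_mlir.ModuleBuilder()
with mb.capture_function("train_step", [inputs]) as f:
  # Forward, loss, and backward all execute under the dispatcher here, so the
  # autograd-generated backward ops are recorded along with the forward ops
  # (assumption based on the description above).
  loss = model(inputs).sum()
  loss.backward()
  f.returns([loss])

mb.module.operation.print()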