This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.
Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681, so there
is not much reason to continue maintaining eager_mode.
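For context, a minimal sketch of the `__torch_dispatch__` mechanism that eager_mode was built around: a wrapper tensor subclass sees each ATen op as it executes and could hand it to an op-by-op compiler/executor. This is a generic illustration of the mechanism (the class name and logging are mine, and it uses PyTorch's internal `_make_wrapper_subclass` helper), not the removed eager_mode code.

```python
import torch
from torch.utils._pytree import tree_map

class OpByOpTensor(torch.Tensor):
    """Wrapper tensor subclass that intercepts every ATen op as it executes."""

    @staticmethod
    def __new__(cls, elem):
        # Wrapper-subclass pattern: the real data lives in `elem`.
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(), dtype=elem.dtype, device=elem.device)

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap our wrappers so the underlying kernel sees plain tensors.
        unwrap = lambda x: x.elem if isinstance(x, OpByOpTensor) else x
        # An op-by-op executor would compile/run `func` here instead of just
        # logging it and falling back to the default kernel.
        print("intercepted:", func)
        result = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        # Re-wrap outputs so subsequent ops are also intercepted.
        wrap = lambda x: OpByOpTensor(x) if isinstance(x, torch.Tensor) else x
        return tree_map(wrap, result)

x = OpByOpTensor(torch.randn(3))
y = x + x    # prints the intercepted aten add op before executing it
z = y.sum()  # prints the intercepted aten sum op
```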
Addresses a leftover comment from earlier PRs (#1254, #1265) to remove the `torch_dispatch` frontend. In addition, moves the main architecture diagram into the `docs/` directory for consistency.
Prior to this patch, the top-level README did not include the line for
running the Python regression tests in `//python/test`. This patch
fixes the problem by adding a line to run the `check-torch-mlir-python`
target.
- update diagram to use the name "Eager Mode" instead of
`torch.dispatch`, which wasn't a very accurate name
- rename `resnet_inference.ipynb` to
`torchscript_resnet_inference.ipynb` - this is in preparation for LTC
and Eager Mode versions
- remove mention of TorchFX - turns out that all TorchFX modules are
actually scriptable modules, so there is literally "zero code" difference
compared to using the TorchScript path
- remove LazyTensorCore example, and instead point at the current
in-development `torch_mlir_ltc_backend` branch.
Note: there were actually some pretty useful utilities built out in the
examples directory, but they now live in the Eager Mode file
`python/torch_mlir/eager_mode/ir_building.py` (and need to be rolled
into a proper home with the upcoming rewrite of our top-level
`torch_mlir.compile` API).
This is intended to explore support for non-structured ops that can't
be modeled by the Linalg dialect. `tm_tensor.scan` and `tm_tensor.scatter`
are added as the first such ops. The aim is to upstream the dialect in the
future.
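For intuition, here is a rough PyTorch-level sketch of the kind of computation these two ops model: a running reduction (scan) and index-based updates (scatter). This mirrors the semantics only; it is not the dialect's actual operand/region structure.

```python
import torch

# Scan: a running reduction along a dimension; roughly what torch.cumsum
# computes when "+" is the combining function.
x = torch.tensor([1, 2, 3, 4])
print(torch.cumsum(x, dim=0))  # tensor([ 1,  3,  6, 10])

# Scatter: write `updates` into `target` at data-dependent `indices`. The
# indexing is not expressible with Linalg's static affine indexing maps,
# which is why these ops need a separate dialect.
target = torch.zeros(6, dtype=torch.int64)
indices = torch.tensor([4, 1, 3])
updates = torch.tensor([10, 20, 30])
target[indices] = updates
print(target)  # tensor([ 0, 20,  0, 30, 10,  0])
```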
* Also adds a requirements.txt and updates the docs to reference it instead of a stringy `pip install`.
* Adds doc with instructions on creating a wheel.
Fixes #370
* Picks up Python configure changes (was pinned to a bad intermediate commit).
* Uses the new mlir_configure_python_dev_packages() to ensure CMake python is found consistently.
* Fixes the JIT importer to build as a MODULE vs SHARED (needed for linking to Python as a module, per config changes).
* Adds some notes to the README to help folks build a smaller set focused just on this project.
Implement the `lazytensor` python package for converting
lazy computations captured by the Lazy Tensor Core into MLIR.
This PR also fixes a few things with `torchfx` and its example.
This moves the bulk of the Python code (including the Torch interop)
from `frontends/pytorch` into `torch-mlir/TorchPlugin`. This also
required reconciling a bunch of other Python-related stuff, like the
`torch` dialects.
As I did this, it was simpler to just remove all the old numpy/basicpy
stuff: we were going to delete it anyway, and removing it was faster than
debugging an intermediate state that would only last O(days).
torch-mlir has two top-level python packages (built into the
`python_packages` directory):
- `torch_mlir_dialects`: `torch` dialect Python bindings (does not
depend on PyTorch). This also involves building the aggregate CAPI for
`torch-mlir`.
- `torch_mlir`: bindings to the part of the code that links against
PyTorch (or C++ code that transitively does).
Additionally, there remain two more Python packages in npcomp (but
outside `torch-mlir`):
- `npcomp_torch`: Contains the e2e test framework and testing configs
that plug into RefBackend and IREE.
- `npcomp_core`: Contains the low-level interfaces to RefBackend and
IREE that `npcomp_torch` uses, along with its own
`MLIR_PYTHON_PACKAGE_PREFIX=npcomp.` aggregation of the core MLIR
python bindings. (all other functionality has been stripped out)
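As a rough sanity check of the torch-mlir layering above, a sketch only: it assumes the built `python_packages` directory is on `PYTHONPATH` and relies just on the two top-level package names listed, not on any particular submodule layout.

```python
import importlib
import sys

# `torch_mlir_dialects` is documented above as not depending on PyTorch,
# so importing it should not drag `torch` into the process.
importlib.import_module("torch_mlir_dialects")
assert "torch" not in sys.modules, "torch_mlir_dialects unexpectedly imported PyTorch"

# `torch_mlir` is the package that is allowed to import/link against PyTorch.
importlib.import_module("torch_mlir")
```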
After all the basicpy/numpy deletions, the `npcomp` C++ code is now very
tiny. It basically just contains RefBackend and the `TorchConversion`
dialect/passes (e.g. `TorchToLinalg.cpp`).
Correspondingly, there are now 4 main testing targets paralleling the
Python layering (which is reflective of the deeper underlying dependency
structure):
- `check-torch-mlir`: Checks the `torch-mlir` pure MLIR C++ code.
- `check-torch-mlir-plugin`: Checks the code in `TorchPlugin` (e.g.
  TorchScript import).
- `check-frontends-pytorch`: Checks the little code we have in
  `frontends/pytorch` -- mainly things related to the e2e framework
  itself.
- `check-npcomp`: Checks the pure MLIR C++ code inside npcomp.
There is a target `check-npcomp-all` that runs all of them.
The `torch-mlir/build_standalone.sh` script does a standalone build of
`torch-mlir`.
The e2e tests (`tools/torchscript_e2e_test.sh`) are working too.
The update_torch_ods script now lives in
`torch-mlir/build_tools/update_torch_ods.sh` and expects a standalone
build.
This change also required a fix upstream related to cross-shlib Python
dependencies, so we also update llvm-project to
8dca953dd39c0cd8c80decbeb38753f58a4de580 to get
https://reviews.llvm.org/D109776 (no other fixes were needed for the
integrate, thankfully).
This completes most of the large source code changes. Next will be
bringing the CI/packaging/examples back to life.
The tests use the same (pure-Python) test framework as the
normal `torchscript_e2e_test.sh`, but the tests are added in
`build_tools/torchscript_e2e_heavydep_tests` instead of
`frontends/pytorch/e2e_testing/torchscript`. Any needed dependencies can
easily be configured in `generate_serialized_tests.sh`.
We add an initial machine translation model with a complex set of
dependencies to seed the curriculum there. I verified that this model
gets to the point of MLIR import (it fails there with a segfault due to
not being able to import the "Any" type).
This required moving a few files from the `torch_mlir` Python module
into multiple modules to isolate the code that depends on our C++
extensions (which now lives in `torch_mlir` and
`torch_mlir_torchscript_e2e_test_configs`) from the pure Python code
(which now lives in `torch_mlir_torchscript`). This is an entirely
mechanical change, and lots of imports needed to be updated.
The dependency graph is:
```
torch_mlir_torchscript_e2e_test_configs
     /                  |
    /                   |
   /                    |
  V                     V
torch_mlir_torchscript  torch_mlir
```
The `torch_mlir_torchscript_e2e_test_configs` are then dependency-injected
into the `torch_mlir_torchscript` modules to successfully assemble a
working test harness (the code was already structured this way, but this
new file organization allows the isolation from C++ code to actually
happen). This isolation is critical to allowing the serialized programs
to be transported across PyTorch versions and for the test harness to be
used seamlessly to generate the heavydep tests.
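To make the injection pattern concrete, here is a small sketch; all names are illustrative, not the actual classes in `torch_mlir_torchscript` or `torch_mlir_torchscript_e2e_test_configs`. The pure-Python harness programs against an abstract config, and the C++-dependent package supplies the concrete config at the call site.

```python
import abc

class TestConfig(abc.ABC):
    """Interface the pure-Python harness codes against (hypothetical name)."""

    @abc.abstractmethod
    def compile(self, program): ...

    @abc.abstractmethod
    def run(self, compiled, inputs): ...

def run_test(program, inputs, config: TestConfig):
    """Harness code: it never imports the C++ extensions itself."""
    compiled = config.compile(program)
    return config.run(compiled, inputs)

# A concrete config like this would live in the C++-dependent package and be
# passed in by the caller -- the dependency-injection step described above.
class ExampleNativeConfig(TestConfig):
    def compile(self, program):
        return program  # a real config would lower through torch-mlir here

    def run(self, compiled, inputs):
        return compiled(*inputs)
```

A test runner then calls something like `run_test(my_program, my_inputs, ExampleNativeConfig())`, keeping the serialized test programs free of any C++ dependency.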
Also:
- Extend the `_Tracer` class to support nested property (submodule) accesses.
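For illustration, a minimal sketch of what nested (submodule) attribute tracing looks like; this is the generic proxy pattern, not the actual `_Tracer` implementation.

```python
class _TracerSketch:
    """Records a chain of attribute accesses such as `model.encoder.layer`."""

    def __init__(self, path=()):
        self._path = path

    def __getattr__(self, name):
        # Each attribute access returns a new tracer with the extended path,
        # so nested submodule accesses compose naturally.
        return _TracerSketch(self._path + (name,))

    def __call__(self, *args, **kwargs):
        print("traced call:", ".".join(self._path), args, kwargs)

t = _TracerSketch()
t.encoder.layer.forward(1, 2)  # traced call: encoder.layer.forward (1, 2) {}
```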
Recommended review order:
- "user-level" docs in README.md
- code in `build_tools/torchscript_e2e_heavydep_tests`
- changes in `torch_mlir_torchscript/e2e_test/framework.py`
- misc mechanical changes.