Commit Graph

66 Commits (f77d88390a8e3c4bdc2172a0f0342d0df21c598d)

Author SHA1 Message Date
Aart Bik 491f4820f5
[torch-mlir][sparse] prepend named buffers to parameter list (#3178)
Weights, biases, and other model parameters appear in a separate data
structure from the traced graph, but they are needed when running the
MLIR-compiled code; this PR implements that extended functionality.
2024-04-17 14:44:05 -07:00
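A hedged sketch of the calling convention this enables (`invoke` is a hypothetical runner, not an API from this PR):

```python
import torch

# Model parameters live outside the traced graph, so they are passed
# explicitly, prepended to the regular inputs when invoking the
# MLIR-compiled code. `invoke` is hypothetical.
net = torch.nn.Linear(4, 2)
params = [p.detach().numpy() for p in net.parameters()]  # weight, bias
inputs = [torch.randn(1, 4).numpy()]
# result = invoke(*params, *inputs)
```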
penguin_wwy af5509c5d9
[FxImporter] Type conversion to resolve the mismatch between Py type and schema type (#3163) 2024-04-15 23:14:19 -07:00
Aart Bik 307f49f566
[torch-mlir][sparse] support sparse tensor output (#3152)
Sparse inputs and outputs are now fully supported! They always consist
of their constituent buffers, passed as numpy arrays. Sparse on!
2024-04-12 09:56:32 -07:00
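For illustration, what "constituent buffers" means for a COO tensor (buffer extraction shown here is eager PyTorch; the exact runtime ABI details are an assumption):

```python
import torch

dense = torch.tensor([[0.0, 1.0, 0.0], [2.0, 0.0, 3.0]])
sparse = dense.to_sparse().coalesce()  # COO layout

# A sparse COO tensor decomposes into an indices buffer plus a values
# buffer; these are what cross the boundary as numpy arrays.
indices = sparse.indices().numpy()  # shape (2, nnz)
values = sparse.values().numpy()    # shape (nnz,)
print(indices, values)
```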
Aart Bik 184d8c13f4
[torch-mlir][sparse] add ID-net example (#3127)
First sparse-in/sparse-out example; it will be used
to make actual sparse output work!
2024-04-09 11:21:30 -07:00
Aart Bik 5797d3aa57
[torch-mlir][sparse] add a COO test for 3-dim (#3119)
This tests COO for more than 2-dim. Note that sparsity should really
propagate into the relu activation and the output, but such cleverness
needs to wait for the pending work in the PyTorch tree.
2024-04-08 16:46:51 -07:00
Stella Laurenzo ffaaf08c31
[fx] Fix type inference for scalar/int types. (#3099)
This was discovered in a downstream test suite and was due to a control
flow nesting merge issue. In-tree test added and fixed.
2024-04-02 13:56:43 -07:00
Rob Suderman ec4cb8be44
Bump LLVM to llvm/llvm-project@0030fc4ac7 (#3079)
Co-authored-by: Peiming Liu <peiming@google.com>
2024-04-01 16:34:59 -07:00
Stella Laurenzo 826786bdd0
[fx] Support ExportedProgram buffer mutation. (#3080)
In the prior state when I supported mutation of user inputs by treating
them as mutable-tensor SSA values, I had left the case of buffer
mutation only vaguely implemented until a concrete use emerged.
    
This patch reworks this buffer mutation support by assuming that buffers
must be resolved via the hooks symbolically and treated with load/store
semantics. This is implied in the structure since we have no SSA value
that represents a buffer and we already assume that reading parameters
happens via such a mechanism.
2024-04-01 14:18:12 -07:00
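A sketch of the kind of program this covers, assuming PyTorch 2.3+ `torch.export`; the buffer gets no SSA value in the imported IR and is instead resolved symbolically with load/store semantics:

```python
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A buffer: module state read and written across calls.
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x):
        self.count.add_(1.0)   # buffer mutation -> a symbolic store
        return x + self.count  # buffer read -> a symbolic load

ep = torch.export.export(Counter(), (torch.randn(1),))
print(ep.graph_signature)  # categorizes `count` as a mutated buffer
```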
Stella Laurenzo 282e9b0e64
[fx] Fix type determination for multi-return ops and static `None` returns. (#3081)
In practice, this was caught by the way that AOT autograd traces
`convolution_backward`. For the unit test, we just repro it with a
custom op.
2024-04-01 09:39:38 -07:00
Stella Laurenzo e2343cf4ce
[fx] Implement auto_functionalized higher order op. (#3063)
* Also adds the basic scaffolding for handling more of these, which will
be needed for cond, while, etc.
* Refactors some of the support in the generic OpOverload emitter so it
can be shared with these other special forms.

This has been on my list for a while, but it just so happens that as
part of upgrading to PyTorch 2.3 and a pure upstream flow in Turbine, we
were using a feature that required integration with auto_functionalized.
This is perhaps the "weirdest" of the higher-order ops and a poor place
to start, but needs must. We have testing for this in Turbine.

Full support in Turbine has an entire custom ops facility. I've reduced
this down to a unit test in torch-mlir.
2024-03-26 17:06:05 -07:00
Stella Laurenzo 6ea857c644
[fx] Make the lift_fresh_copy -> clone special form use kwargs. (#3045)
At some point, this op became kwarg-only instead of arg/kwarg.
Discovered when upgrading to PyTorch 2.3.

Also adds a test as this was untested in-tree (was caught out of tree).
2024-03-21 15:34:40 -07:00
penguin_wwy 7616d637fd
Add stateless fx graph import (#3036) 2024-03-21 14:44:54 -07:00
penguin_wwy f34c187ac4
Normalize type hints to be compatible with multiple Python versions (#3028)
Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`
2024-03-15 08:29:48 -07:00
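The normalization in question, sketched (Python 3.9 introduced builtin generics, so 3.8 needs the `typing` aliases):

```python
# On Python 3.8, `list[int]` in an annotation raises
# "TypeError: 'type' object is not subscriptable" at definition time.
from typing import List, Optional

def first(xs: List[int]) -> Optional[int]:  # not list[int] / int | None
    return xs[0] if xs else None
```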
Sambhav Jain 0b2f9c89a2
Bring back `dynamic_shapes` constraints in fx importer API (#3026)
https://github.com/llvm/torch-mlir/pull/2992 dropped `constraints` from
the fx importer API,
[breaking](https://github.com/cruise-automation/mlir-tcp/actions/runs/8284385380/job/22669774071)
downstream AOT compile tests in `mlir-tcp` that use it. This knob has
been soft-deprecated for a while now, replaced by `dynamic_shapes` - a
more ergonomic interface. This PR brings back `dynamic_shapes`
constraints in the newly supported form. Also added a Python lit test
with dynamic shape annotations.
2024-03-14 10:26:34 -07:00
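A hedged sketch of the restored knob, assuming the `export_and_import` entry point in `torch_mlir.fx` and `torch.export`'s `Dim` API:

```python
import torch
from torch.export import Dim
from torch_mlir import fx

class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

# Mark dimension 0 of `x` as dynamic when exporting and importing.
batch = Dim("batch")
module = fx.export_and_import(
    Scale(),
    torch.randn(4, 3),
    dynamic_shapes={"x": {0: batch}},
)
print(module)
```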
Vivek Khandelwal 6e84752c39
build: manually update PyTorch version (#2992)
Set PyTorch and TorchVision version to nightly release 2024-03-07.
This commit also removes the deprecated constraints API:
342e7929b8

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-07 21:42:38 +05:30
Aart Bik f21b76b68a
[torch-mlir][sparse] fixed merge conflict (#2967) 2024-02-28 17:14:00 -08:00
Peiming Liu e85a2a87c5
[torch-mlir][sparse] support e2e sparse kernels with COO inputs. (#2939) 2024-02-28 16:08:37 -08:00
Rob Suderman e48fe45886
[onnx] Extend `onnx` import to pass remaining tests (#2951)
This finishes support for importing the vast majority of `onnx`
operations, including:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Aart Bik 30212547a9
[torch-mlir][sparse] add JIT test for block sparse SpMV (#2955)
This required adding a "decompose" pass to the torch lowering, since
torch.mv was not directly handled by the lowering to linalg.
2024-02-27 11:49:32 -08:00
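The shape of the test, as a hedged sketch (`sparse_jit` is the test utility from the earlier end-to-end commit; eager execution of BSR `torch.mv` is left commented out):

```python
import torch

# torch.mv has no direct linalg lowering, hence the "decompose" pass.
class SpMV(torch.nn.Module):
    def forward(self, mat, vec):
        return torch.mv(mat, vec)

mat = torch.eye(4).to_sparse_bsr((2, 2))  # block sparse matrix
vec = torch.ones(4)
# res = sparse_jit(SpMV(), mat, vec)
```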
Sambhav Jain 3cbe6c98ec
Expose `func_name` to the main fx import API (#2949)
As titled.
2024-02-26 10:08:14 -08:00
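Usage, as a hedged one-liner against the `torch_mlir.fx` entry point:

```python
import torch
from torch_mlir import fx

# Name the generated MLIR func instead of taking the default.
m = fx.export_and_import(torch.nn.Tanh(), torch.randn(3), func_name="tanh_main")
```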
Aart Bik 4147b280ce
[torch-mlir][sparse] add block sparsity to mlir lowering (#2942)
Also note that we are in the process of proposing SparseTensorMetadata
to PyTorch FX graph export (see
https://github.com/pytorch/pytorch/pull/117907). This will hopefully
eventually replace the current data structures in torch-mlir.
2024-02-23 11:57:20 -08:00
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic from importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import
anymore, as there exists an onnx-to-torch equivalent. Dropping the
special-case import code and adding support for resource conversion
substantially increases test coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Rob Suderman 13553d49c9
[onnx] Update the importer to create a `none` for missing operands (#2931)
Some operands are optional, so we require a placeholder for missing
operands. We invent an `onnx.None` operation as our placeholder.
2024-02-20 09:30:30 -08:00
Aart Bik c5d8c12469
[torch-mlir][sparse][NFC] fixed typo (#2917)
grammar police
2024-02-16 13:02:00 -08:00
Stella Laurenzo 5253282c55
[fx] Support mutation in ExportedProgram. (#2916)
As of https://github.com/pytorch/pytorch/pull/118969, `ExportedProgram`
has the long awaited fixes to correctly categorize various things
relating to parameters, buffers, mutated inputs and constants.

With this additional modeling, we are finally able to implement
(safely/soundly) the mutable semantics that were attempted on the
TorchScript path. The difference is that on that path, we had to
conservatively treat everything as mutable and run some dodgy heuristics
(which have been the cause of many bugs relating to
"MaximizeValueSemantics") to try to get back to an immutable state.

The new model supports mutability at the graph edges, allowing both user
inputs and buffers to be mutated (there is some more support than that,
but that is all I fully tracked through to implementation).

Therefore, when we receive programs like this, we now can selectively
enable mutation at the edges. This happens to be the mutability model
that IREE supports, which I expect to be a primary beneficiary. However,
there is nothing stopping anyone else from handling the `!torch.tensor`
types and the existing copy/overwrite ops that will be selectively
added.

Since this relies on API changes that will not release until 2.3, I'm
being a bit cautious about not refactoring existing facilities.
2024-02-16 09:46:30 -08:00
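A sketch of the graph-edge mutability described above, assuming PyTorch 2.3+ (`torch.export` records the mutation in the graph signature; the importer can then surface it as `!torch.tensor` plus copy/overwrite ops):

```python
import torch

class InPlace(torch.nn.Module):
    def forward(self, x):
        x.mul_(2.0)        # mutates a user input at the graph edge
        return x + 1.0

ep = torch.export.export(InPlace(), (torch.randn(3),))
print(ep.graph_signature)  # categorizes `x` as a mutated user input
```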
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
2024-02-15 10:17:13 -08:00
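The routing, sketched (the file name is illustrative; the reimport step uses the tool shipped with torch-mlir):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

# Step 1: route the torch test through ONNX.
torch.onnx.export(M(), (torch.randn(2, 3),), "m.onnx")

# Step 2: reimport to the torch dialect and lower to linalg as usual:
#   python -m torch_mlir.tools.import_onnx m.onnx
```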
Aart Bik 24c2fc0b5f
[torch-mlir][sparse] add JIT test to expose pending issues (#2906)
This test exposes issues that need fixing:
(1) propagate sparsity into the FX graph (over elt-wise ops)
(2) batched dimensions need a new "dense(batch)" format
2024-02-13 13:42:56 -08:00
Aart Bik b6f4ca512e
[torch-mlir][sparse] sparsity metadata refinement (#2901)
Various improvements on sparsity metadata:

(1) define a single data structure for all sparsity-related metadata
(2) handle batched dense dimensions, as well as dense subtensor
dimensions
(3) refine sparsity propagation for deeper networks
2024-02-12 16:10:57 -08:00
Aart Bik be8375d350
[torch-mlir][sparse] implement first sparse_jit end-to-end path (#2894)
This PR introduces a sparse_jit wrapper that can run simple models with
sparse tensor inputs end-to-end. The implementation shows all required
components on modifying sparse tensor types with a 1:N relation on the
call sites. Two tests show that the JIT runs end-to-end while computing
the correct results.

More details to follow (generalizing to COO and different ranks, as well
as support for *output* sparse tensors), but the general concepts are
all here now.

**_Update: Thanks to Rob, bump to proper LLVM/MLIR hash is done!_**

_**NOTE that all parameter passing changes are nicely done "downstream"
in MLIR, so very few changes are required in torch-mlir code
proper.**_

---------

Co-authored-by: Franz Haniel <77495327+frafranz@users.noreply.github.com>
Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-12 10:04:54 -08:00
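Modeled on the commit's tests, a hedged sketch of the wrapper's call shape (`sparse_jit` is the test utility introduced here; the 1:N buffer expansion happens behind the call):

```python
import torch

class MatMulNet(torch.nn.Module):
    def forward(self, x, y):
        return torch.matmul(x, y)

a = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse_csr()
b = torch.ones(2, 2)
# Each sparse argument expands 1:N into its constituent buffers at the
# call site of the compiled code.
# res = sparse_jit(MatMulNet(), a, b)
```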
saienduri bfcf93ea21
Rename torch_mlir.compile APIs and introduce FX based analogs (#2842)
Link to related RFC:
https://discourse.llvm.org/t/rfc-rename-torch-mlir-compile-apis-and-introduce-fx-based-analogs/76646
This commit updates the documentation, tests, CMake files, and API for
the proposed changes in the RFC. There is a new torch_mlir/fx.py for
user level APIs related to importing modules and a corresponding test
for this path can be found at test/python/fx_importer/basic_test.py.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-06 19:07:59 -08:00
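The FX-based analog, as exercised by `test/python/fx_importer/basic_test.py` (a minimal sketch):

```python
import torch
from torch_mlir import fx

class Basic(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

# FX-based analog of the old torch_mlir.compile entry point.
m = fx.export_and_import(Basic(), torch.randn(3, 4))
print(m)
```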
Daniel Garvey faf7d4aaa5
[fx_importer] Add support for 0D tensors (#2870)
Adds an escape hatch so that single-value tensors are imported as
DenseElementsAttr instead of DenseResourceElementsAttr.

For 0-d or single-element tensors, splats are better represented as
DenseElementsAttr; don't use DenseResourceElementsAttr for them.
2024-02-06 00:19:31 -06:00
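The distinction, sketched with the bundled MLIR Python bindings (assuming they are exposed as `torch_mlir.ir`):

```python
from torch_mlir import ir

with ir.Context(), ir.Location.unknown():
    f32 = ir.F32Type.get()
    scalar = ir.RankedTensorType.get([], f32)
    # 0-d / single-element: a splat DenseElementsAttr is the better fit.
    splat = ir.DenseElementsAttr.get_splat(scalar, ir.FloatAttr.get(f32, 3.0))
    print(splat)  # dense<3.000000e+00> : tensor<f32>
```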
Dave Liddell 04be6ba773
Make the onnx importer more robust for internal/external and large models (#2794)
Fix for https://github.com/llvm/torch-mlir/issues/2765

The onnx docs say that you can't do shape inference using the in-memory
API for models > 2 GB. This fix replaces that API with the file-based
API. Since the new API generates an intermediate file, this patch also
adds a --keep switch to retain that file, which is deleted by default.

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-31 21:58:43 -08:00
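For reference, a hedged sketch of the two onnx APIs involved (paths illustrative):

```python
import onnx
import onnx.shape_inference

# In-memory API: documented not to work for models > 2 GB.
model = onnx.load("model.onnx")
inferred = onnx.shape_inference.infer_shapes(model)

# File-based API the importer switched to; it writes an intermediate
# file, which the new --keep switch retains (deleted by default).
onnx.shape_inference.infer_shapes_path("model.onnx", "model.inferred.onnx")
```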
Aart Bik 105aad6f57
[torch-mlir] provide FX traced graph importer for sparse tensors (#2817)
Note that we are waiting for actual FX traced graph support for sparse
tensors. For details see

https://github.com/pytorch/pytorch/issues/117188

Until then, however, we provide this clever importer that builds the FX
traced graph for the dense case and then puts a sparse annotation
back on the parameters.

With import test.
2024-01-30 21:22:12 -08:00
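The workaround, sketched under a `torch.export`-style trace (illustrative only; the real importer re-attaches the layouts to the imported MLIR types internally):

```python
import torch

def import_with_sparse_annotations(net, *args):
    # Trace densely, since FX cannot yet trace sparse tensors...
    dense_args = tuple(
        a.to_dense() if a.layout != torch.strided else a for a in args
    )
    prog = torch.export.export(net, dense_args)
    # ...then put the sparse annotations back on the parameters.
    layouts = [a.layout for a in args]
    return prog, layouts
```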
Stella Laurenzo ccd469ca0d
[fx] Upstream the turbine FxImporter to torch-mlir. (#2681)
Changes made during upstreaming:

* Removed comments attributing some copied code back to torch-mlir
(since it is now repatriated).
* Re-organized imports.
* Inlined RefMapping/RefTracker and TypeSubclassMap from an external
utility module.
* Added FxImporter class comments.
* Updated stack trace extraction to be fail-safe.
* Added an entry-point for `import_frozen_exported_program` which uses
the shiny new upstream `torch.export.export()` API (versus the
lower-level/older API that Turbine is presently using). This
necessitated a small FX rewrite to line external state management up
with current conventions.
* Adapted one of Turbine's importer tests to go with this initial
submission. Turbine unfortunately has a lot of integration-style
tests, and I would like to extract those as unit tests of the importer
features and upstream them that way instead of copying them directly.
For now, one overall test with the initial submission gets us moving.

I acknowledge that there are some code quality things that could be
improved in this submission: this was authored over the course of many
months (and often via some trial and error). I would like to keep it
relatively converged with the downstream for the next few steps while
getting the test suite upstreamed. And then it will be easier to take a
hygienic pass through the code.

Including co-authors for contributors in the git log of the original
repository.

Co-authored-by: Ean Garvey <87458719+monorimet@users.noreply.github.com>
Co-authored-by: Avinash Sharma <aviator1994@gmail.com>
Co-authored-by: Arham Khan <arhammkhan@gmail.com>
Co-authored-by: brucekimrokcmu <kwangkyk@alumni.cmu.edu>
Co-authored-by: saienduri <77521230+saienduri@users.noreply.github.com>
2023-12-21 08:40:10 -08:00
Rik Huijzer 8328998172
Allow printing all IR in `torch_mlir.compile` (#2669)
This PR adds the `enable_ir_printing` option to `torch_mlir.compile`,
which can be used to print the IR for all intermediate passes.

When running the added test file via:
```shell
$ python test/python/compile.py 2> tiny.stderr
```
the file `tiny.stderr` is about 700 KB.
2023-12-20 15:08:21 -06:00
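Usage, per the commit (redirect stderr as shown above, since the dump is large):

```python
import torch
import torch_mlir

class Tiny(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

# Prints the IR after every intermediate pass to stderr.
compiled = torch_mlir.compile(
    Tiny(),
    torch.randn(3),
    output_type="linalg-on-tensors",
    enable_ir_printing=True,
)
```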
Stella Laurenzo ed4df38e8d
[onnx] Add torch-mlir-import-onnx tool. (#2637)
Simple Python console script to import an ONNX protobuf to the torch
dialect for additional processing.

For installed wheels, this can be used with something like:

```
torch-mlir-import-onnx test/python/onnx_importer/LeakyReLU.onnx
```

Or from a dev setup:

```
python -m torch_mlir.tools.import_onnx ...
```
2023-12-12 22:01:30 -08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2; part 2 will upstream the FX
importer. I started with ONNX because it forces some project layout
updates and is more self-contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks has
already opened up that namespace and it seemed to fit. Better to
have fewer things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
the `projects/` tree, adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
2023-12-12 19:02:51 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through but
roughly ~halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Ramiro Leal-Cavazos ff762100b8
Add handling of namespaces to library generator (#2391)
When using custom ops, sometimes PyTorch will insert namespaces into the
abstract interpretation function name in the format:
`__torch__.{namespace_1}.{namespace_2}...{op_name}`.  The extra
namespaces are not part of the abstract interpretation function name,
so it needs to be removed before generating the library of MLIR
snippets of abstract interpretation functions. This commit adds
support for removing the namespace information.
2023-08-11 09:56:19 -07:00
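A hedged sketch of the stripping described above (illustrative helper, not the exact code from the commit):

```python
def strip_namespaces(qualified_name: str) -> str:
    """Reduce `__torch__.{ns_1}.{ns_2}...{op_name}` to `{op_name}`."""
    prefix = "__torch__."
    if qualified_name.startswith(prefix):
        return qualified_name[len(prefix):].rsplit(".", 1)[-1]
    return qualified_name

assert strip_namespaces("__torch__.ns1.ns2.my_op") == "my_op"
```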
Gleb Kazantaev fb52a73cbe
LTC->MLIR Debug Info support (#1922)
* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops considers only
the filename, line number, and column number. Attaching the operation
name helps identify the type of computation just by looking at the
execution profile.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>
2023-08-02 10:29:11 -04:00
Ramiro Leal-Cavazos 96d662647f
Fix import of constant bool tensor parameters (#2047)
Bool tensors are represented in TorchScript as an array of
`int8_t`s. However, when importing them into Torch-MLIR, the importer
was assuming the array had `int32_t` elements, leading the importer to
read out-of-bounds memory. This commit fixes the
casting of the bool tensor.
2023-04-20 18:38:48 -07:00
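For reference, a quick way to see the one-byte-per-element storage the importer must honor:

```python
import torch

t = torch.tensor([True, False, True])
print(t.dtype)          # torch.bool
print(t.element_size()) # 1 byte per element, i.e. int8_t-sized storage;
                        # reading int32_t elements runs past the buffer
```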
Ramiro Leal-Cavazos e0f301c890
Add `extra_library` kwarg to `torch_mlir.compile` (#1986)
This commit adds the ability to specify extra abstract interpretation
functions in `torch_mlir.compile` to use during type refinement. This
allows users to easily add custom ops without having to interact with
MLIR or C++ directly.
2023-03-30 09:20:19 -07:00
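A hedged sketch of the knob; the `〇`/`〡` naming follows the abstract interpretation library convention, and the exact `extra_library` payload is an assumption:

```python
from typing import List

import torch
import torch_mlir

# Shape function for a hypothetical custom op `my_ns::identity`
# (〇 separates namespaces, 〡 marks the function kind).
def my_ns〇identity〡shape(x: List[int]) -> List[int]:
    return x

# compiled = torch_mlir.compile(model, example_input,
#                               extra_library=[my_ns〇identity〡shape])
```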
Maksim Levental 953ea39cb5
handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
Maksim Levental 2eddb3fde7
WIP: No PyTorch dep (#1854) 2023-02-13 14:21:06 -06:00
Gleb Kazantaev 708fa346a6
Fix Base Lazy Backend Type Conversion (#1412)
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend

* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types

* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions

* Test updates

* Move back module variable name

* Remove RefineTypes from TorchMlirLoweringContext::Build()

* Rename pass; Remove passes from base lazy backend

* Rename pass to VerifyBackendContractPass

* Aligned cmd pass name; Fixed TorchConversion passes registration
2022-10-04 15:53:28 -07:00
AmosLewis 940959589b [MLIR][TORCH] Add Byte and Char Dtype support 2022-09-30 13:19:31 +05:30
Ashay Rane 0b46462528
Miscellaneous fixes for Windows builds (#1376)
* test: allow spaces in path to Python executable

On Windows, the path to the Python binary may contain spaces, so this
patch adds quotes around the path to the python executable.

Thanks to @sstamenova for suggesting the fix!

* python: remove header file that causes Windows build failures

Similar to https://reviews.llvm.org/D125284, we can safely remove this
header file without affecting the build on Linux.  It is necessary to
remove this header file on Windows builds, since otherwise it causes
build errors.

* python: drop `TORCH_API` from function defined in Torch-MLIR

`TORCH_API` should apply to functions that are either exported by
libtorch.so or ones that are imported from libtorch.so by its downstream
consumers (like Torch-MLIR).  Neither case applies to the
`importJitFunctionAsFuncOp()` function, since it is defined in
Torch-MLIR (and thus outside libtorch.so).  This patch fixes the problem
by dropping `TORCH_API` from that function's declaration.

* python: make output of class annotations deterministic

The `class-annotator-repr.py` test checks for class annotations in a
specific order, but prior to this patch, the order was
non-deterministic, since the code iterated on an _unordered_ map.

This patch makes the iteration order deterministic through two changes:
1. using a sorted map
2. using the class qualified name instead of the address of the class in
memory

* test: use Python3_EXECUTABLE as interpreter path for consistency

This ensures that tests use the Python3 version that was detected using
CMake, instead of whichever python version happens to be in the
PATH variable when invoking the test.

* test: fix RUN string

The parenthesis syntax does not run on Windows (the shell interprets the
`(` character as part of the path).  Moreover, the ODR violation in the
comment no longer seems to apply.

* python: port parallel test framework to Windows

Since Windows does not support `fork` natively, Python's
`multiprocessing` module needs to use `spawn` on Windows.  However, to
use `spawn`, the multiprocessing module serializes (or pickles) the
worker function and its arguments.  Sadly, the multiprocessing module
(both the default one in Python and the one that is extended in PyTorch)
is unable to serialize lambda functions (see
https://stackoverflow.com/a/19985580 for details).

Unfortunately, given how our tests are structured, we require that the
function under test is passed as an argument to another function, so we
cannot sidestep our use of lambda functions.

To resolve this problem, this patch makes use of the `multiprocess` and
`dill` Python modules, which together offer a multiprocessing mechanism
that can serialize lambda functions.  The multiprocess module also
offers a process pool, which simplifies the code for our parallel
testing framework.
2022-09-29 12:07:43 -05:00
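The capability gap, demonstrated (`multiprocess` builds on dill, which can serialize lambdas where the stock pickler cannot):

```python
import pickle
import dill

f = lambda x: x + 1

try:
    pickle.dumps(f)  # stock pickler: fails on lambdas
except (pickle.PicklingError, AttributeError) as e:
    print("pickle failed:", e)

g = dill.loads(dill.dumps(f))  # dill round-trips the lambda
assert g(2) == 3
```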
Ashay Rane a9e1014fc7
python: add `CHECK-LABEL` statements to localize checks (#1363)
It seems as though an upstream change in PyTorch has caused the module
dump to include not just the module being tested, but also several
seemingly unrelated functions in the `torch._decomp.decompositions`
namespace.  The presence of these new functions caused lit to match
variables against incorrect statements (i.e.  statements in the
unrelated functions instead of the module under test).

This patch inserts `CHECK-LABEL` statements in the failing tests so that
lit ignores these unrelated functions and only checks the statements at
or after the test module definition.
2022-09-13 15:44:13 -05:00
Tanyo Kwok 290d7755fb
importer: add initial support for loading Float16 tensors (#1169)
Follow-up to #761:

    This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
    method to enable the creation of tensors whose base type is Float16.
    This patch also adds a test to validate the IR generation, and it
    updates the test for importing tensors of various types.
2022-08-08 12:37:31 +08:00
Jae Hoon (Antonio) Kim 425362263b Clean up Autogen (#1112)
* Remove unnecessary sed in autogen

* Remove .pyc files from VCS
2022-07-30 09:40:02 -04:00