Commit Graph

892 Commits (197ef4224bc41471acd4ccfd8694ed8e0842e716)

Author SHA1 Message Date
Stella Laurenzo ffaaf08c31
[fx] Fix type inference for scalar/int types. (#3099)
This was discovered in a downstream test suite and was due to a control
flow nesting merge issue. In-tree test added and fixed.
2024-04-02 13:56:43 -07:00
penguin_wwy 5325d3e6e6
[fx] Fix type hint for fx importer (#3066)
Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
2024-04-01 17:31:43 -07:00
Rob Suderman ec4cb8be44
Bump LLVM to llvm/llvm-project@0030fc4ac7 (#3079)
Co-authored-by: Peiming Liu <peiming@google.com>
2024-04-01 16:34:59 -07:00
Stella Laurenzo 826786bdd0
[fx] Support ExportedProgram buffer mutation. (#3080)
In the prior state, when I supported mutation of user inputs by treating
them as mutable-tensor SSA values, I had left the case of buffer mutation
only vaguely implemented until a concrete use emerged.
    
This patch reworks this buffer mutation support by assuming that buffers
must be resolved via the hooks symbolically and treated with load/store
semantics. This is implied in the structure since we have no SSA value
that represents a buffer and we already assume that reading parameters
happens via such a mechanism.
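
For illustration, a minimal program of the kind this enables, using only
current `torch.export` APIs (the hook-based buffer resolution itself lives
in the importer):

```python
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x):
        # In-place buffer mutation: the importer resolves `count`
        # symbolically via hooks and emits load/store semantics.
        self.count.add_(1)
        return x + self.count

ep = torch.export.export(Counter(), (torch.randn(3),))
```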
2024-04-01 14:18:12 -07:00
Stella Laurenzo 282e9b0e64
[fx] Fix type determination for multi-return ops and static `None` returns. (#3081)
In practice, this was caught by the way that AOT autograd traces
`convolution_backward`. For the unit test, we just repro it with a
custom op.
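
A minimal sketch of such a repro, assuming the `torch.library` API and a
hypothetical `demo::stats` op (the in-tree test's actual op differs):

```python
import torch
from torch.library import Library

lib = Library("demo", "DEF")  # hypothetical namespace for illustration
lib.define("stats(Tensor x) -> (Tensor, Tensor)")

def stats_impl(x):
    return x.mean(), x.std()

lib.impl("stats", stats_impl, "CompositeExplicitAutograd")

# Each result needs its own inferred type when the op is imported.
m, s = torch.ops.demo.stats(torch.randn(4))
```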
2024-04-01 09:39:38 -07:00
Stella Laurenzo e2343cf4ce
[fx] Implement auto_functionalized higher order op. (#3063)
* Also adds the basic scaffolding for handling more of these, which will
be needed for cond, while, etc.
* Refactors some of the support in the generic OpOverload emitter so it
can be shared with these other special forms.

This has been on my list for a while, but it just so happens that as
part of upgrading to PyTorch 2.3 and a pure upstream flow in Turbine, we
were using a feature that required integration with auto_functionalized.
This is perhaps the "weirdest" of the higher-order ops and a poor place
to start, but needs must. We have testing for this in Turbine.

Full support in Turbine has an entire custom ops facility. I've reduced
this down to a unit test in torch-mlir.
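
As a hedged sketch of how `auto_functionalized` arises, consider a
hypothetical mutable custom op (`mut::relu_`) whose call `torch.export`
wraps in the higher-order op:

```python
import torch
from torch.library import Library

lib = Library("mut", "DEF")  # hypothetical namespace for illustration
lib.define("relu_(Tensor(a!) x) -> ()")

def relu_impl(x):
    x.relu_()

lib.impl("relu_", relu_impl, "CompositeExplicitAutograd")

class M(torch.nn.Module):
    def forward(self, x):
        torch.ops.mut.relu_(x)  # mutating op with no functional variant
        return x + 1

ep = torch.export.export(M(), (torch.randn(3),))
print(ep.graph)  # shows torch.ops.higher_order.auto_functionalized(...)
```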
2024-03-26 17:06:05 -07:00
Stella Laurenzo 17eeac880a
[fx] Accept `func_visibility=` and return created func op. (#3054)
This is a partial landing of #3046 while waiting for an upstream change
for the rest of it.
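
A hedged usage sketch; `func_visibility=` is the kwarg this commit adds,
while the entry-point name and surrounding setup are assumptions for
illustration:

```python
import torch
from torch_mlir.extras.fx_importer import FxImporter

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

ep = torch.export.export(M(), (torch.randn(3),))
imp = FxImporter()
# The importer now returns the created func op for post-processing.
func_op = imp.import_frozen_program(ep, func_visibility="private")
```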
2024-03-25 16:48:06 -07:00
Stella Laurenzo 6ea857c644
[fx] Make the lift_fresh_copy -> clone special form use kwargs. (#3045)
At some point, this op became kwarg-only instead of arg/kwarg.
Discovered when upgrading to PyTorch 2.3.

Also adds a test as this was untested in-tree (was caught out of tree).
2024-03-21 15:34:40 -07:00
penguin_wwy 7616d637fd
Add stateless fx graph import (#3036) 2024-03-21 14:44:54 -07:00
Aart Bik fe59f1ee0d
[torch-mlir][sparse] higher dimension COO (#3042)
Lifts this from 2-dim only to n-dim for n >= 2.
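
For example, a 3-dim COO tensor of the kind this now covers (plain
PyTorch, for illustration):

```python
import torch

indices = torch.tensor([[0, 1], [2, 0], [1, 3]])  # shape (ndim, nnz)
values = torch.tensor([3.0, 4.0])
st = torch.sparse_coo_tensor(indices, values, (2, 3, 4))  # 3-dim COO
```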
2024-03-19 15:59:07 -07:00
penguin_wwy f34c187ac4
Normalize type hints to be compatible with multiple Python versions (#3028)
Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`
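
The culprit is PEP 585 built-in generics, which are only subscriptable at
runtime on Python >= 3.9. A portable spelling (illustrative names):

```python
# `def f(x: list[int]) -> dict[str, int]` raises the TypeError on 3.8.
from typing import Dict, List, Optional

def lookup(names: List[str]) -> Dict[str, Optional[int]]:
    return {n: None for n in names}
```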
2024-03-15 08:29:48 -07:00
Sambhav Jain 0b2f9c89a2
Bring back `dynamic_shapes` constraints in fx importer API (#3026)
https://github.com/llvm/torch-mlir/pull/2992 dropped `constraints` from
the fx importer API,
[breaking](https://github.com/cruise-automation/mlir-tcp/actions/runs/8284385380/job/22669774071)
downstream AOT compile tests in `mlir-tcp` that use it. This knob has
been soft-deprecated for a while now, replaced by `dynamic_shapes` - a
more ergonomic interface. This PR brings back dynamic_shapes constraints
in the new supported form. Also added a python lit test with dynamic
shaped annotations.
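
A hedged sketch of the supported form, assuming the `fx.export_and_import`
entry point forwards `dynamic_shapes` to `torch.export`:

```python
import torch
from torch.export import Dim
from torch_mlir import fx

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

batch = Dim("batch")  # dim 0 of `x` becomes dynamic in the imported IR
m = fx.export_and_import(M(), torch.randn(4, 8),
                         dynamic_shapes={"x": {0: batch}})
```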
2024-03-14 10:26:34 -07:00
Daniel Garvey 80c7bc3f7a
fximporter: support newer torch versions (#2999)
Uses version checking, since the attributes exist in both versions; the
only thing that changes is what we receive as an fx graph.
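
A hedged sketch of the version-gating pattern (the helper and the version
threshold are placeholders, not the commit's actual logic):

```python
import torch

_TORCH_GE_2_1 = tuple(int(p) for p in torch.__version__.split(".")[:2]) >= (2, 1)

def _legacy_fixup(fx_graph):
    # Hypothetical placeholder for the older-layout handling.
    return fx_graph

def normalize(fx_graph):
    # The same attributes exist either way; only the graph payload differs.
    return fx_graph if _TORCH_GE_2_1 else _legacy_fixup(fx_graph)
```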
2024-03-08 14:58:50 -06:00
Vivek Khandelwal 6e84752c39
build: manually update PyTorch version (#2992)
Set PyTorch and TorchVision version to nightly release 2024-03-07.
This commit also removes the deprecated constraints API:
342e7929b8

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-07 21:42:38 +05:30
Yuanqiang Liu 4d01b0f1a3
[FxImporter] remove dataclass slots to support python3.9 (#2974)
* Note that `dataclass`'s `slots` parameter is only supported on Python 3.10 and later.
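
A minimal illustration of the incompatibility and a gated workaround:

```python
import sys
from dataclasses import dataclass

if sys.version_info >= (3, 10):
    @dataclass(slots=True)  # TypeError on 3.9: unexpected keyword 'slots'
    class Node:
        name: str
else:
    @dataclass
    class Node:
        name: str
```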
2024-03-06 01:04:38 +08:00
Scott Todd e7d90a4b82
[onnx] Fix type on create_module() in onnx_importer.py. (#2968)
The type returned was changed in
https://github.com/llvm/torch-mlir/pull/2795. This led to errors in the
downstream IREE project: https://github.com/openxla/iree/pull/16622.
2024-02-29 13:01:13 -08:00
Peiming Liu e85a2a87c5
[torch-mlir][sparse] support e2e sparse kernels with COO inputs. (#2939) 2024-02-28 16:08:37 -08:00
Rob Suderman e48fe45886
[onnx] Improve `onnx` import to pass remaining tests (#2951)
Finishes support for importing the vast majority of `onnx` operations. This
includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Sambhav Jain 3cbe6c98ec
Expose `func_name` to the main fx import API (#2949)
As titled.
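
For example (illustrative module; the kwarg is the new part):

```python
import torch
from torch_mlir import fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

# Name the generated MLIR func instead of the default "main".
m = fx.export_and_import(M(), torch.randn(3), func_name="tanh_main")
```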
2024-02-26 10:08:14 -08:00
Stella Laurenzo 89e02c195b
Make a typing dependency that is not in older PyTorch backwards compatible. (#2948)
This was found in a downstream that is pegged to an older PyTorch
version.
2024-02-23 15:52:27 -08:00
Aart Bik 4147b280ce
[torch-mlir][sparse] add block sparsity to mlir lowering (#2942)
Also note that we are in the process of proposing SparseTensorMetadata
to PyTorch FX graph export (see
https://github.com/pytorch/pytorch/pull/117907). This will hopefully
eventually replace the current data structures in torch-mlir.
2024-02-23 11:57:20 -08:00
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic form importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import, as
an onnx-to-torch equivalent exists. Dropping the special-case import code
and adding support for resource conversion substantially increases test
coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Rob Suderman 13553d49c9
[onnx] Update the importer to create a `none` for missing operands (#2931)
Some operands are optional so we require a placeholder for missing
operands. We invent an `onnx.None` operation as our placeholder.
2024-02-20 09:30:30 -08:00
Stella Laurenzo 5253282c55
[fx] Support mutation in ExportedProgram. (#2916)
As of https://github.com/pytorch/pytorch/pull/118969, `ExportedProgram`
has the long awaited fixes to correctly categorize various things
relating to parameters, buffers, mutated inputs and constants.

With this additional modeling, we are finally able to implement
(safely/soundly) the mutable semantics that were attempted on the
TorchScript path. The difference is that on that path, we had to
conservatively treat everything as mutable and run some dodgy heuristics
(which have been the cause of many bugs relating to
"MaximizeValueSemantics") to try to get back to an immutable state.

The new model supports mutability at the graph edges, allowing both user
inputs and buffers to be mutated (there is some more support than that,
but that is all I fully tracked through to implementation).

Therefore, when we receive programs like this, we now can selectively
enable mutation at the edges. This happens to be the mutability model
that IREE supports, which I expect to be a primary beneficiary. However,
there is nothing stopping anyone else from handling the `!torch.tensor`
types and the existing copy/overwrite ops that will be selectively
added.

Since this relies on API changes that will not release until 2.3, I'm
being a bit cautious about not refactoring existing facilities.
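
For a concrete picture, a program with mutation at the graph edges that
`torch.export` now categorizes correctly (illustrative only):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("state", torch.zeros(3))

    def forward(self, x):
        x.mul_(2)           # user-input mutation at the edge
        self.state.add_(x)  # buffer mutation at the edge
        return self.state + 1

ep = torch.export.export(M(), (torch.randn(3),))
print(ep.graph_signature)  # records which inputs/buffers are mutated
```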
2024-02-16 09:46:30 -08:00
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
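
The routing looks roughly like this on the export side (a sketch; the test
harness handles the reimport and lowering steps):

```python
import io
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.leaky_relu(x)

buf = io.BytesIO()
torch.onnx.export(M(), (torch.randn(2, 3),), buf)
onnx_bytes = buf.getvalue()  # reimported, lowered to torch, then linalg
```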
2024-02-15 10:17:13 -08:00
saienduri 8e2e5eeae9
add support for decomposition (#2879)
This commit adds support for decomposing into the core aten operators
before importing the module from torch.

Also, this commit deals with the lifted tensor constants in
torch.export.export(). We don't want to add unnecessary placeholder
nodes in the graph (extra args in the block module) and should treat
them like the constants they are. The unnecessary clone is also
removed for efficiency.
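
With today's `torch.export` API, the equivalent decomposition step looks
like this (a sketch of the idea, not the commit's exact wiring):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.silu(x)

ep = torch.export.export(M(), (torch.randn(3),))
ep = ep.run_decompositions()  # lower to the core aten operator set
```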
2024-02-14 21:00:52 -08:00
Daniel Garvey 77b7550997
Add support for bfloat16 in fximporter (#2896)
This introduces an additional soft dependency on the `ml_dtypes` Python
package in order to support bfloat16.
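
The underlying trick, roughly: numpy has no native bfloat16, so the raw
16-bit payload is reinterpreted via the `ml_dtypes`-provided dtype (the
importer's exact mechanics may differ):

```python
import ml_dtypes
import torch

t = torch.randn(4, dtype=torch.bfloat16)
# Reinterpret the raw 16-bit payload as a numpy bfloat16 array.
a = t.view(torch.int16).numpy().view(ml_dtypes.bfloat16)
```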

Addresses #2843
2024-02-14 16:24:25 -06:00
Sambhav Jain 3e836d8dad
[fx_importer] Convert non-persistent buffers lifted as tensor constants (#2902)
The investigation is largely recorded in
https://github.com/llvm/torch-mlir/pull/2881, but this change allows us
to capture non-persistent buffers that were lifted as tensor constants
(after https://github.com/pytorch/pytorch/pull/118969 landed in upstream
PyTorch), and propagate them to `Torch` dialect as "frozen"
`torch.vtensor.literal`. I believe this patch should work with both
nightly and stable PyTorch, but will let CI confirm the same. Thanks
@stellaraccident for the valuable pointers and guidance.
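
For reference, a module that produces such a lifted tensor constant
(illustrative):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False: excluded from the state_dict, so torch.export
        # lifts it as a tensor constant -> frozen torch.vtensor.literal.
        self.register_buffer("scale", torch.tensor(2.0), persistent=False)

    def forward(self, x):
        return x * self.scale

ep = torch.export.export(M(), (torch.randn(3),))
```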

---------

Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-13 12:38:32 -08:00
Aart Bik b6f4ca512e
[torch-mlir][sparse] sparsity metadata refinement (#2901)
Various improvements on sparsity metadata:

(1) define single data structure for all sparsity related metadata 
(2) handle batched dense dimensions, as well as dense subtensor
dimensions
(3) refine sparsity propagation for deeper networks
2024-02-12 16:10:57 -08:00
Aart Bik be8375d350
[torch-mlir][sparse] implement first sparse_jit end-to-end path (#2894)
This PR introduces a sparse_jit wrapper that can run simple models with
sparse tensor inputs end-to-end. The implementation shows all required
components for modifying sparse tensor types with a 1:N relation at the
call sites. Two tests show that the JIT runs end-to-end while computing
the correct results.

More details to follow (generalizing to COO and different ranks, as well
as support for *output* sparse tensors), but the general concepts are
all here now.

**_Update: Thanks to Rob, bump to proper LLVM/MLIR hash is done!_**

_**NOTE that all parameter passing changes are nicely done "downstream"
in MLIR, so very few changes are required in torch-mlir code
proper**_

---------

Co-authored-by: Franz Haniel <77495327+frafranz@users.noreply.github.com>
Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-12 10:04:54 -08:00
saienduri bfcf93ea21
Rename torch_mlir.compile APIs and introduce FX based analogs (#2842)
Link to related RFC:
https://discourse.llvm.org/t/rfc-rename-torch-mlir-compile-apis-and-introduce-fx-based-analogs/76646
This commit updates the documentation, tests, CMake files, and API for
the proposed changes in the RFC. There is a new torch_mlir/fx.py for
user level APIs related to importing modules and a corresponding test
for this path can be found at test/python/fx_importer/basic_test.py.
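
Basic usage of the new FX path, per that test (module body illustrative):

```python
import torch
from torch_mlir import fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.sigmoid(x)

m = fx.export_and_import(M(), torch.randn(3))
print(m)  # torch-dialect MLIR module
```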

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-06 19:07:59 -08:00
Daniel Garvey faf7d4aaa5
[fx_importer] Add support for 0D tensors (#2870)
Adds an escape hatch so that single-value tensors are created as
DenseElementsAttr instead of DenseResourceElementsAttr.

For 0-d or single-element tensors, splats are better represented as
DenseElementsAttr, so don't use DenseResourceElementsAttr for them.
2024-02-06 00:19:31 -06:00
Dave Liddell 04be6ba773
Make the onnx importer more robust for internal/external and large models (#2794)
Fix for https://github.com/llvm/torch-mlir/issues/2765

The onnx docs say that you can't do shape inference using the in-memory
API for models > 2 GB. This fix replaces that API with the file-based
API. Since the new API generates an intermediate file, this also adds a
--keep switch to retain that file, which is deleted by default.
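
The file-based call is roughly as follows (paths are placeholders):

```python
import onnx

# Unlike in-memory infer_shapes, the path-based API handles models > 2 GB
# and writes the inferred model to an intermediate file.
onnx.shape_inference.infer_shapes_path("model.onnx", "model.inferred.onnx")
```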

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-31 21:58:43 -08:00
Rob Suderman 54e258792c
[onnx] Import `onnx` constants as `onnx.Constant` instead of literals (#2831)
To handle the conversion from raw bytes to `DenseElementsAttr` we need
to handle the endianness conversion during `torch-onnx-to-torch`.
Therefore when importing `onnx.Constant` it is better to represent using
the `onnx` constant operation so that only one location requires the
endianness correction.
2024-01-31 11:41:06 -08:00
Aart Bik 105aad6f57
[torch-mlir] provide FX traced graph importer for sparse tensors (#2817)
Note that we are waiting for actual FX traced graph support for sparse
tensors. For details see

https://github.com/pytorch/pytorch/issues/117188

Until then, however, we provide this clever importer that builds the FX
traced graph for the dense case and then puts a sparse annotation back on
the parameters.

With import test.
2024-01-30 21:22:12 -08:00
Stella Laurenzo 77c14ab22b
[ci] Upgrade to new runners and disable unsupported jobs. (#2818)
Per the RFC and numerous conversations on Discord, this rebuilds the
torch-mlir CI and discontinues the infra and coupling to the binary
releases
(https://discourse.llvm.org/t/rfc-discontinuing-pytorch-1-binary-releases/76371).

I iterated on this to get latency back to about what it was with the old
(much larger and non-ephemeral) runners: About 4m - 4.5m for an
incremental change.

Behind the scenes changes:

* Uses a new runner pool operated by AMD. It is currently set to manual
scaling and has two runners (32-core, 64GiB RAM) while we get some
traction. We can either fiddle with some auto-scaling or use a schedule
to give it an increase during certain high traffic hours.
* Builds are now completely isolated and cannot have run-to-run
interference like we were getting before (i.e. lock file/permissions
stuff).
* The GHA runner is installed directly into a manylinux 2.28 container
with upgraded dev tools. This eliminates the need to do sub-invocations
of docker on Linux in order to run on the same OS that is used to build
wheels.
* While not using it now, this setup was cloned from another project
that posts the built artifacts to the job and fans out testing. Might be
useful here later.
* Uses a special git cache that lets us have ephemeral runners and still
check out the repo and deps (incl. llvm) in ~13s.
* Running in an Azure VM Scale Set.

In-repo changes:

* Disables (but does not yet delete):
  * Old buildAndTest.yml jobs
  * releaseSnapshotPackage.yml
* Adds a new `ci.yml` pipeline and scripts the steps in `build_tools/ci`
(by decomposing the existing `build_linux_packages.sh` for in-tree
builds and modularizing it a bit better).
* Test framework changes:
* Adds a `TORCH_MLIR_TEST_CONCURRENCY` env var that can be used to bound
the multiprocess concurrency. Ended up not using this in the final
version but is useful to have as a knob.
* Changes the default concurrency to `nproc * 0.8 + 1` vs `nproc * 1.1`.
We're running on systems with significantly less virtual memory and I
did a bit of fiddling to find a good tradeoff.
* Changed multiprocess mode to spawn instead of fork. Otherwise, I was
getting instability (as discussed on discord).
* Added MLIR configuration to disable multithreaded contexts globally
for the project. Constantly spawning `nproc * nproc` threads (more than
that actually) was OOM'ing.
* Added a test timeout of 5 minutes. If a multiprocess worker crashes,
the framework can get wedged indefinitely (and then will just be reaped
after multiple hours). We should fix this, but this at least keeps the
CI pool from wedging with stuck jobs.

Functional changes needing followup:

* No matter what I did, I couldn't get the LTC tests to work, and I'm
not 100% sure they were being run in the old setup as the scripts were a
bit twisty. I disabled them and left a comment.
* Dropped out-of-tree build variants. These were not providing much
signal and increase CI needs by 50%.
* Dropped macOS and Windows builds. Now that we are "just a library" and
not building releases, there is less pressure to test these commit by
commit. Further, since we bump torch-mlir to known good commits on these
platforms, it has been a long time since either of these jobs have
provided much signal (and they take ~an hour+ to run). We can add them
back later post-submit if ever needed.
2024-01-27 18:35:45 -08:00
Yuanqiang Liu e73c5368fb
[FxImporter] make FxImporter fit Python <= 3.9 (#2802)
Torch with Python 3.9 is still widely used.
2024-01-26 09:01:47 +08:00
Dave Liddell d452c4f4c0
Fix onnx importer to treat Constant values as static (#2780)
Fixes  https://github.com/llvm/torch-mlir/issues/2764

In the case of OPT, there are ConstantOfShape ops whose input shape is
not static (that is, not an initializer) but rather comes from a Constant
op. The importer can't handle such non-static input shapes.

The fix here is to create initializers for a subset of Constant ops
(ones with "value" attributes), so that their outputs can be used
statically. Additionally, there was no case for creating a splat of
int64, so I added that as well.

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-22 13:00:05 -08:00
Rob Suderman 85b86b36a2
[onnx] Fix importer variable names to make `mlir` legal (#2690)
Some names for `onnx` identifiers are not legal in `mlir-ir`. Sanitize
so that the generated `ir` is legal.
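
A hedged sketch of the idea (the importer's actual sanitization rules may
differ):

```python
import re

def sanitize_name(name: str) -> str:
    # Replace characters that are not legal in MLIR identifiers.
    return re.sub(r"[^A-Za-z0-9_.]", "_", name)

print(sanitize_name("conv/1:out"))  # conv_1_out
```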
2023-12-21 17:05:18 -08:00
Stella Laurenzo ccd469ca0d
[fx] Upstream the turbine FxImporter to torch-mlir. (#2681)
Changes made during upstreaming:

* Removed comments attributing some copied code back to torch-mlir
(since it is now repatriated).
* Re-organized imports.
* Inlined RefMapping/RefTracker and TypeSubclassMap from an external
utility module.
* Added FxImporter class comments.
* Updated stack trace extraction to be fail safe.
* Added an entry-point for `import_frozen_exported_program` which uses
the shiny new upstream `torch.export.export()` API (versus the
lower-level/older API that Turbine is presently using). This
necessitated a small FX rewrite to line external state management up
with current conventions.
* Adapted one of Turbine's importer tests to go with this initial
submission. Turbine unfortunately has a lot of more-integration-ey
tests, and I would like to extract those as more of unit tests of the
importer features and upstream them that way vs trying to copy directly.
For now, one overall test with the initial submission gets us moving.

I acknowledge that there are some code quality things that could be
improved in this submission: this was authored over the course of many
months (and often via some trial and error). I would like to keep it
relatively converged with the downstream for the next few steps while
getting the test suite upstreamed. And then it will be easier to take a
hygienic pass through the code.
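
Putting the entry-point together, roughly (illustrative module; API names
per this commit):

```python
import torch
from torch_mlir.extras.fx_importer import FxImporter

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

ep = torch.export.export(M(), (torch.randn(3),))
imp = FxImporter()
imp.import_frozen_exported_program(ep)
print(imp.module_op)  # the imported torch-dialect module
```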

Including co-authors for contributors in the git log of the original
repository.

Co-authored-by: Ean Garvey <87458719+monorimet@users.noreply.github.com>
Co-authored-by: Avinash Sharma <aviator1994@gmail.com>
Co-authored-by: Arham Khan <arhammkhan@gmail.com>
Co-authored-by: brucekimrokcmu <kwangkyk@alumni.cmu.edu>
Co-authored-by: saienduri <77521230+saienduri@users.noreply.github.com>
2023-12-21 08:40:10 -08:00
Stella Laurenzo ed4df38e8d
[onnx] Add torch-mlir-import-onnx tool. (#2637)
Simple Python console script to import an ONNX protobuf to the torch
dialect for additional processing.

For installed wheels, this can be used with something like:

```
torch-mlir-import-onnx test/python/onnx_importer/LeakyReLU.onnx
```

Or from a dev setup:

```
python -m torch_mlir.tools.import_onnx ...
```
2023-12-12 22:01:30 -08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2, which will also include upstreaming the FX
importer. I started with ONNX because it forces some project layout
updates and is more self contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks had already
opened up that namespace and it seemed to fit. Better to have fewer
things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
the `projects/` adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
2023-12-12 19:02:51 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through, but it
roughly halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Zhekun(Josh) Zhang 88d4c475d3
[Torch] Fix mixed-precision case for non-value-semantic ops (#2540)
Non-value-semantic ops like `add_`, `div_`, etc. expect the result dtype
to be the same as the first input's. However, the current implementation
would infer the wrong result type for a case like:

```python
a = torch.randn(3, 3).half() # float16
b = torch.randn(3, 3) # float32
a += b # i.e. torch.ops.aten.add_(a, b)
```
torch expects `a` to be float16, but dtype refinement would infer
float32, since the op is replaced by `aten.add`.
2023-11-02 12:40:08 +08:00
Daniel Garvey 4901773f77
add uncovered cases in view lowering (#2524)
Removes unnecessary checks from empty_strided.
2023-11-01 21:56:44 -05:00
Yuanqiang Liu 365655ca29
[Torch Dialect] add canonicalize pattern for aten.floor with integer type (#2534)
2023-11-02 09:51:31 +08:00
saienduri a2e694df40
add e2e support for torch.eye operations (aten.eye, aten.eye.m) (#2478) 2023-11-01 11:23:28 -07:00
Daniel Garvey 1d41f7b6fe
Rework AtenEmptyStridedOp checks (#2537)
Now uses Values instead of ints, trading a compile-time failure for a
runtime assert.
2023-10-31 22:56:54 -05:00
xiaolou86 4199feffed
Fix typos in comments (#2539)
2023-10-31 20:10:47 -07:00
JianzheXiao e8706957c0
[Torch Dialect] Add Support for aten.unflatten.int (#2475)
As titled: add support for aten.unflatten.int, allowing dim to be
negative and one of the sizes' elements to be -1.
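
For example (plain PyTorch semantics of the op being supported):

```python
import torch

x = torch.randn(2, 12)
# dim may be negative; one size may be -1 and is inferred (here 4).
y = torch.ops.aten.unflatten(x, -1, [3, -1])
print(y.shape)  # torch.Size([2, 3, 4])
```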
2023-10-31 15:36:16 +08:00