Commit Graph

2434 Commits (3b85c70748ce7177f344ff0c8a99545f849c328a)

Author SHA1 Message Date
saienduri ee75e8d1ae
[MLIR][ONNX] Add OnnxToTorch support for Reshape Op (#2698)
This commit adds the OnnxToTorch support for Reshape op.
2023-12-26 10:20:13 -08:00
Vivek Khandelwal 0849fd0a06 [MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 16:36:21 +05:30
Vivek Khandelwal 9a72c6584e [MLIR][ONNX] Add OnnxToTorch support for BatchNormalization and Concat op.
This commit adds the OnnxToTorch support for BatchNormalization and Concat op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-22 11:25:33 +05:30
Rob Suderman 85b86b36a2
[onnx] Fix importer variable names to make `mlir` legal (#2690)
Some names for `onnx` identifiers are not legal in `mlir-ir`. Sanitize
so that the generated `ir` is legal.
2023-12-21 17:05:18 -08:00
Stella Laurenzo ccd469ca0d
[fx] Upstream the turbine FxImporter to torch-mlir. (#2681)
Changes made during upstreaming:

* Removed comments attributing some copied code back to torch-mlir
(since it is now repatriated).
* Re-organized imports.
* Inlined RefMapping/RefTracker and TypeSubclassMap from an external
utility module.
* Added FxImporter class comments.
* Updated stack trace extraction to be fail-safe.
* Added an entry-point for `import_frozen_exported_program` which uses
the shiny new upstream `torch.export.export()` API (versus the
lower-level/older API that Turbine is presently using). This
necessitated a small FX rewrite to line external state management up
with current conventions.
* Adapted one of Turbine's importer tests to go with this initial
submission. Turbine unfortunately has a lot of more integration-style
tests; I would like to extract those as unit tests of the importer
features and upstream them that way rather than copying them directly.
For now, one overall test with the initial submission gets us moving.

I acknowledge that there are some code quality things that could be
improved in this submission: this was authored over the course of many
months (and often via some trial and error). I would like to keep it
relatively converged with the downstream for the next few steps while
getting the test suite upstreamed. And then it will be easier to take a
hygienic pass through the code.

Including co-authors for contributors in the git log of the original
repository.

Co-authored-by: Ean Garvey <87458719+monorimet@users.noreply.github.com>
Co-authored-by: Avinash Sharma <aviator1994@gmail.com>
Co-authored-by: Arham Khan <arhammkhan@gmail.com>
Co-authored-by: brucekimrokcmu <kwangkyk@alumni.cmu.edu>
Co-authored-by: saienduri <77521230+saienduri@users.noreply.github.com>
2023-12-21 08:40:10 -08:00
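A hypothetical usage sketch of the new entry point described above; only `torch.export.export()` and the name `import_frozen_exported_program` come from the commit, while the import path and importer construction are guesses left commented out:

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x) * 2.0

# torch.export.export() is the upstream PyTorch 2.x export API the
# commit refers to.
exported = torch.export.export(TinyModel(), (torch.randn(4),))

# The commit adds an entry point named `import_frozen_exported_program`;
# the import path and constructor below are assumptions for illustration.
# from torch_mlir.extras.fx_importer import FxImporter
# importer = FxImporter()
# module = importer.import_frozen_exported_program(exported)
```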
John Wu 46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch (#2682)
The expression for HardSigmoid in ONNX
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html) is:

    max(0, min(1, alpha * x + beta))

This is inherently different from HardSigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html),
which is:

    0          if x < -3
    1          if x > 3
    x/6 + 1/2  otherwise

Given that, it was simplest to compute the entire expression when
translating the ONNX definition to Torch MLIR, which is what this PR
does. Some of the logic is shared with `DecomposeComplexOps`, so the
shared logic between `DecomposeComplexOps` and `DefaultDomainGToP` was
refactored into a `Utils` file.
2023-12-21 07:29:22 -08:00
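A minimal NumPy sketch of the two definitions above (an illustration, not the lowering code itself); note that choosing alpha = 1/6 and beta = 1/2 makes the ONNX form coincide with Torch's:

```python
import numpy as np

def onnx_hardsigmoid(x, alpha=0.2, beta=0.5):
    # ONNX definition: max(0, min(1, alpha * x + beta))
    return np.maximum(0.0, np.minimum(1.0, alpha * x + beta))

def torch_hardsigmoid(x):
    # Torch definition: 0 for x < -3, 1 for x > 3, else x/6 + 1/2
    return np.clip(x / 6.0 + 0.5, 0.0, 1.0)

x = np.linspace(-5.0, 5.0, 11)
assert np.allclose(onnx_hardsigmoid(x, alpha=1/6, beta=0.5),
                   torch_hardsigmoid(x))
```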
John Wu 779a141f8d
Mentioned helpful tooling to convert Onnx models to Torch MLIR (#2683)
- While going through the `#torch-mlir` channel on the `llvm` Discord, I
realized that there are some useful commands that would be extremely
helpful for creating ONNX lowerings to Torch MLIR. A lot of people seem
to be contributing to this effort, so I thought it would be good to add
this information to the docs.

These tools helped streamline the development of this PR:
https://github.com/llvm/torch-mlir/pull/2682
2023-12-21 07:26:20 -08:00
Vivek Khandelwal 3226241521 [MLIR][ONNX] Add OnnxToTorch support for Conv and ConvTranspose op.
This commit adds the OnnxToTorch support for Conv and ConvTranspose op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-21 11:12:14 +05:30
Stella Laurenzo d75cff6cd1 NFC: Remove unused variable causing a warning. 2023-12-20 19:23:27 -08:00
Rik Huijzer 8328998172
Allow printing all IR in `torch_mlir.compile` (#2669)
This PR adds the `enable_ir_printing` option to `torch_mlir.compile`,
which can be used to print the IR for all intermediate passes.

When running the added test file via:
```shell
$ python test/python/compile.py 2> tiny.stderr
```
the file `tiny.stderr` is about 700 KB.
2023-12-20 15:08:21 -06:00
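A short usage sketch of the new option; only `enable_ir_printing` comes from this PR, while the model and example input are placeholder assumptions:

```python
import torch
import torch_mlir

model = torch.nn.Linear(3, 2)
example = torch.randn(1, 3)

# Prints the IR after each intermediate pass to stderr, which is why the
# test above redirects stderr to a file (`2> tiny.stderr`).
compiled = torch_mlir.compile(model, example, enable_ir_printing=True)
```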
Rob Suderman 11cc92d4ab
[onnx] Lowerings from `onnx.tan` (#2642)
Started work on the `tan` lowering for ONNX to Torch. The lowering
expresses `tan` in terms of `sin` and `cos`.
2023-12-20 10:09:39 -08:00
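A one-line NumPy check of the identity the lowering relies on (illustration only):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 5)
# The lowering expresses tan(x) as sin(x) / cos(x).
assert np.allclose(np.tan(x), np.sin(x) / np.cos(x))
```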
Rob Suderman a24aadbfab
[aten] Make `torch.aten.matmul` to `linalg` work for non-broadcasting case (#2659)
Broadcasting for `torch.aten.matmul` is optional, so an MxN-by-NxK
matmul should be legalized to a `linalg.matmul`.
2023-12-20 10:09:10 -08:00
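A small PyTorch illustration of the non-broadcasting case this commit targets (the concrete shapes are placeholder assumptions):

```python
import torch

a = torch.randn(4, 5)  # MxN
b = torch.randn(5, 3)  # NxK
# Plain 2-D operands involve no broadcasting, so this case can map
# directly onto a single linalg.matmul producing an MxK result.
assert torch.matmul(a, b).shape == (4, 3)
```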
Rik Huijzer 8fa81d181b
Tweak development.md for more speed (#2667)
Adding the `--progress` flag shows the same output as what `git clone`
would show. This is very nice for slow connections. Without it, the
command may run for many minutes without providing any indication that
it is still doing something.

For `--depth=1`, I think it should be safe as most people have new
enough git versions nowadays, but let's be safe and make it an optional
suggestion. I ran all the tests fine with `--depth=1`, but I don't know
whether things will keep working when the submodules get updated for
systems with old git versions.
2023-12-20 09:34:50 +01:00
Sungsoon Cho 20ab882840
Fix typo in DecomposeBernoulli() match failure messages. (#2676) 2023-12-19 20:59:19 -08:00
Han-Chung Wang 869c25877a
Integrate llvm/llvm-project@99045b60b5 to fix bazel build. (#2677)
Commit be3e74b647 broke bazel in post-submit. This revision bumps LLVM
to include the bazel fix.
2023-12-19 18:07:23 -08:00
Han-Chung Wang be3e74b647
Integrate llvm/llvm-project@282d501476 (2023-12-19) (#2675) 2023-12-19 13:28:37 -08:00
Andreas Falkenberg ebaab4200f
[ONNX] ONNX -> TORCH for Erf (#2673)
Adds the TorchOnnxToTorch lowering for the Erf function.
2023-12-19 08:07:27 -08:00
Yinrun Lyu 89cfbe894d
Update PYTHONPATH in development.md (#2644)
Updates PYTHONPATH in the docs to point to the new directory layout.
2023-12-18 22:46:55 -08:00
Vivek Khandelwal 8649b84e3f
[MLIR][ONNX] Add OnnxToTorch support for AveragePool op. (#2672)
This commit adds the OnnxToTorch support for AveragePool op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-18 18:17:11 -06:00
saienduri 698ff3a736
[MLIR][ONNX] Add OnnxToTorch support for Reduction Ops (#2657)
This commit adds the OnnxToTorch support for ReduceSum, ReduceMean, and
ReduceMin ops.
2023-12-18 12:37:31 -08:00
John Wu deacb8ef38
[MLIR][ONNX] Add OnnxToTorch support for Gelu (#2647)
This commit adds the OnnxToTorch support for Gelu op.

---------

Co-authored-by: Rob Suderman <suderman@google.com>
2023-12-18 10:57:08 -08:00
Rob Suderman 791c666479
[torch] Lower `torch.aten.sinh` to `linalg` (#2662) 2023-12-18 09:15:12 -08:00
Sambhav Jain 9c655d0bfb
[Bazel] Add conversion targets for `TorchToTensor` (#2666)
Adapts bazel build per https://github.com/llvm/torch-mlir/pull/2648.

https://github.com/sjain-stanford/torch-mlir/actions/runs/7233207462/job/19708228888
2023-12-17 06:07:43 -08:00
Rob Suderman ae1a6e4a5a
[onnx] Lower `onnx.Gemm` to `torch` (#2663)
General lowering for `onnx.Gemm` to `torch`
2023-12-16 10:47:58 -08:00
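For reference, ONNX's Gemm computes `Y = alpha * op(A) @ op(B) + beta * C`, where `op` optionally transposes its operand; a NumPy sketch of those semantics (not the lowering itself):

```python
import numpy as np

def onnx_gemm(a, b, c=0.0, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    # Y = alpha * op(A) @ op(B) + beta * C, with optional transposes.
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    return alpha * (a @ b) + beta * c

y = onnx_gemm(np.ones((2, 3)), np.ones((3, 4)), np.zeros((2, 4)))
assert y.shape == (2, 4)
```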
Andreas Falkenberg cee8563060
[onnx] Support of onnx.Greater, onnx.Less, onnx.GreaterOrEqual to Torch (#2649)
This pull request adds the three remaining compare operations:
onnx.Greater
onnx.Less
onnx.GreaterOrEqual

This concludes a set of basic tensor compare functions.
2023-12-16 12:42:11 -05:00
Rob Suderman 61888690bb
[onnx] Add support for `onnx.sinh` (#2643)
Adds a lowering from `onnx.sinh` to `aten.sinh`. This includes adding
the `aten.sinh` operator.
2023-12-15 21:23:51 -08:00
Rob Suderman b3e94208a8
Bump LLVM version to aa165edca8545b212de084d5b18c3d30347f774a (#2658) 2023-12-15 16:41:45 -08:00
Rob Suderman 705ea958ae
[onnx] Lowerings from `onnx.transpose` (#2641)
Lowerings for `transpose` from ONNX to `aten`. The implementation
decomposes `onnx.transpose` into multiple `aten.transpose` operations,
each swapping one pair of dimensions; since `onnx.transpose` can permute
dimensions arbitrarily, several `aten.transpose` ops may be required.
2023-12-15 15:30:05 -08:00
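A NumPy sketch of the decomposition idea, assuming a greedy selection of pairwise swaps (the actual pass may order the swaps differently):

```python
import numpy as np

def transpose_via_swaps(x, perm):
    # Realize an arbitrary permutation as a sequence of pairwise axis
    # swaps, each corresponding to one aten.transpose.
    cur = list(range(x.ndim))  # cur[a] = original axis now at position a
    for i, want in enumerate(perm):
        j = cur.index(want)
        if j != i:
            x = np.swapaxes(x, i, j)  # one aten.transpose
            cur[i], cur[j] = cur[j], cur[i]
    return x

x = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(transpose_via_swaps(x, (2, 0, 1)),
                      np.transpose(x, (2, 0, 1)))
```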
Quinn Dawkins 030b0140d4
[TorchToLinalg] Lower aten.cat to tensor.concat (#2650)
This replaces the lowering of aten.cat with tensor.concat, allowing more
efficient handling of concatenations in downstream flows. The refbackend
populates concat decomposition patterns that can be used to recover the
previous lowering.
2023-12-15 15:45:32 -05:00
Rob Suderman 061af696ce
[onnx] Lowering for `onnx.shape` to `torch` and `tensor` (#2648)
Includes the lowering from the `aten` equivalent to `tensor` operations.
2023-12-15 11:37:49 -08:00
Sungsoon Cho 55e9401c5c
Implement lowering of aten.cosh op. (#2635) 2023-12-15 11:19:26 -08:00
Gaurav Shukla eb9249e601
[ONNX][MLIR] Add support for LeakyRelu and GatherElements op (#2655)
This commit adds support for the `LeakyRelu` and `GatherElements` ops in
the onnx pipeline.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-12-15 11:18:28 -08:00
Quinn Dawkins d9f4a80b10
Bump LLVM version to fcd54b368e6713acd236dc47401b5292755900d7 (#2654)
This bumps the llvm submodule to HEAD to pick up recent fixes.
2023-12-15 12:47:22 -05:00
saienduri f59c01fd2f
[MLIR][ONNX] Add OnnxToTorch support for q-z ops (specific ops in description) (#2601)
This commit adds the OnnxToTorch support for Reciprocal, Round,
ScatterElements, Sigmoid, Sin, Tanh, Sqrt, Sub, Sum, Where, Xor,
Squeeze, Unsqueeze ops.
For reviewers, the ops that weren't trivial and probably require extra
review are Sum, Squeeze, and Unsqueeze.
2023-12-15 09:36:18 -08:00
Andreas Falkenberg 4ec8b9fc02
[onnx] add support for onnx.LessOrEqual (#2639)
Added the less-or-equal operation (`onnx.LessOrEqual`) to OnnxToTorch.

---------

Co-authored-by: root <andreas.falkenberg@amd.com>
2023-12-14 22:23:23 -05:00
Sungsoon Cho 65f517b3d0
Bump LLVM version to 762964e97fd66ab7728ecc92aa153a61266fa9df. (#2645) 2023-12-14 12:43:21 -08:00
Rob Suderman 4857606ffe
[onnx] Lowerings from `onnx.selu` (#2634)
Lowerings for `selu` from ONNX to the corresponding torch
implementation. Torch's `selu` implementation has fewer features, so we
use the generalized `elu` with the input scale set to `1.0`.
2023-12-14 08:53:47 -08:00
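A NumPy sketch of the relationship, assuming a generalized `elu` with `scale` and `input_scale` parameters (this parameterization is my reading of `aten.elu`, not spelled out in this commit):

```python
import numpy as np

def generalized_elu(x, alpha=1.0, scale=1.0, input_scale=1.0):
    # Assumed generalized form: positive branch scaled by `scale`,
    # negative branch applies `input_scale` inside the exponential.
    return np.where(x > 0,
                    scale * x,
                    scale * alpha * (np.exp(input_scale * x) - 1.0))

def selu(x):
    # selu is the generalized elu with fixed constants and input_scale=1.0.
    return generalized_elu(x,
                           alpha=1.6732632423543772,
                           scale=1.0507009873554805,
                           input_scale=1.0)
```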
JianzheXiao 6ddeb1a6ef
[torch] Add support for aten.selu (#2640)
Add `aten.selu` operation to `torch` dialect.
2023-12-13 20:28:08 -08:00
John Wu 42392bc845
[MLIR][ONNX] Add OnnxToTorch support for matmul ops (#2629)
This commit adds the OnnxToTorch support for Matmul.
2023-12-13 09:35:32 -08:00
Stella Laurenzo ed4df38e8d
[onnx] Add torch-mlir-import-onnx tool. (#2637)
Simple Python console script to import an ONNX protobuf to the torch
dialect for additional processing.

For installed wheels, this can be used with something like:

```
torch-mlir-import-onnx test/python/onnx_importer/LeakyReLU.onnx
```

Or from a dev setup:

```
python -m torch_mlir.tools.import_onnx ...
```
2023-12-12 22:01:30 -08:00
JianzheXiao 7cf52ae73f
[Torch Dialect]Add Support for AtenGroupNormOp and AtenNativeGroupNormOp (#2591)
Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-12-13 11:05:12 +08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2, which will also include upstreaming the FX
importer. I started with ONNX because it forces some project layout
updates and is more self-contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks has already
opened up that namespace and it seemed to fit. Better to have fewer
things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
`projects/` adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
2023-12-12 19:02:51 -08:00
Eric Kunze f67249d34f
Sort the TOSA passing test list (#2630)
For easier tracking of issues, sort the TOSA passing list. It is still
significantly smaller than the XFAIL list would be.

Resolves #2620, at least until the xfail list gets smaller than the
passing list.

Signed-off-by: Eric Kunze <eric.kunze@arm.com>
2023-12-12 14:22:25 -08:00
Frederik Harwath 099e1f4cf5 Bump LLVM version to f7250179e22ce4aab96166493b27223fa28c2181 2023-12-12 10:52:02 +01:00
Frederik Harwath b656c674ee Implement e2e support for aten.acos op
This depends on a change in the LLVM core repository which adds acos
support to the MLIR Math dialect.
2023-12-12 10:52:02 +01:00
Sambhav Jain 7acabafd84
Remove folder from `AtenStackOp` for single element list inputs (#2626)
`AtenStackOp` defines this folder for a list operand containing a single
element:
```
OpFoldResult AtenStackOp::fold(FoldAdaptor adaptor) {
  auto list = getOperand(0).getDefiningOp<PrimListConstructOp>();
  if (!list || !list->hasOneUse() || list.getElements().size() != 1)
    return nullptr;
  return list.getElements()[0];
}
```
However, unlike `AtenCatOp`, `AtenStackOp` cannot be folded away for a
single-element list operand, because the result of a stack operation
contains an additional dimension (of size 1, like expand_shape).

This PR removes the `AtenStackOp::fold` method and adds an e2e test for
the single-element list input case, which fails on current `main` as
follows:
```
Unexpected outcome summary: (linalg)

****** Failed tests - 1 tests
    FAIL - "TensorsStackSingleElementListModule_basic"
        @ trace item #0 - call to "forward"
        @ output of call to "forward"
        ERROR: shape (torch.Size([10, 32])) is not equal to golden shape (torch.Size([10, 1, 32]))
```
Thanks Chris Lalau Keraly for the bug report.
2023-12-11 10:52:50 -08:00
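A quick PyTorch demonstration of why the fold is valid for `cat` but not for `stack` (the dim=1 case matches the golden shape in the failure above):

```python
import torch

t = torch.randn(10, 32)
# cat of a single-element list leaves the shape unchanged...
assert torch.cat([t]).shape == (10, 32)
# ...but stack inserts a new dimension, so folding it away is incorrect.
assert torch.stack([t], dim=1).shape == (10, 1, 32)
```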
Vivek Khandelwal 0b4422a253 [MLIR][ONNX] Add OnnxToTorch support for bitwise and math ops
This commit adds the OnnxToTorch support for the BitwiseXor, BitwiseOr, Div, Equal,
Cast, Ceil, Floor, Cos, and Clip ops.
It also adds the TorchToLinalg support for the aten.clamp.Tensor and aten.clamp_min.Tensor ops.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-11 19:36:01 +05:30
JianzheXiao 96fcde4d77
[Torch Dialect] Support Einsum Op (#2230)
As titled, this adds support for the torch.aten.einsum op.

Right now only static shapes are supported because of a known issue; the
proper fix is here: https://github.com/llvm/torch-mlir/pull/2154

Co-authored-by: Jiawei Wu
[wujiawei.aml@bytedance.com](mailto:wujiawei.aml@bytedance.com)
2023-12-10 12:30:37 +08:00
Vivek Khandelwal 07c3e11f56 [MLIR][TORCH] Add support for Short(si16) data type
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-09 16:52:23 +05:30
Felix Schneider fb21a85874
[TorchToLinalg] Lower grouped conv2d to linalg Op with correct dimension ordering (#2623)
The linalg Op `linalg.conv_2d_ngchw_fgchw` had a bug where

1. Weights were accessed as G,F,C,H,W instead of as F,G,C,H,W
2. Output was accessed as N,F,G,H,W instead of as N,G,F,H,W

Now this has been fixed in
https://github.com/llvm/llvm-project/pull/73855 which broke the
torch-mlir lowering to that Op.

This patch switches lowering in torch-mlir to the newly introduced
`linalg.conv_2d_ngchw_gfchw` op which accesses weights in an order that
is compatible with PyTorch's memory layout.

Fix https://github.com/llvm/torch-mlir/issues/2622
2023-12-08 14:18:23 +01:00
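A small PyTorch illustration of the weight-layout point above: grouped conv weights are stored group-major in the leading dimension, so splitting it yields the G,F,C,H,W order (the channel counts below are placeholder assumptions):

```python
import torch

groups, f_per_g, c_per_g = 4, 8, 3
conv = torch.nn.Conv2d(groups * c_per_g, groups * f_per_g,
                       kernel_size=3, groups=groups)

# PyTorch stores grouped weights as (out_channels, in_channels/groups, kH, kW);
# viewing the leading dimension as (G, F) gives the G,F,C,H,W order that
# linalg.conv_2d_ngchw_gfchw expects, unlike the old ..._fgchw op.
w_gfchw = conv.weight.view(groups, f_per_g, c_per_g, 3, 3)
assert w_gfchw.shape == (4, 8, 3, 3, 3)
```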