Commit Graph

246 Commits (39307f0462826cb1402703cf23ee7e24a2f51be6)

Author SHA1 Message Date
jinchen b64c22cfc1
Fix onnx sinh lowering (#3253)
IREE tests `test_sinh` and `test_sinh_example` pass.
2024-04-30 00:44:41 -07:00
Vivek Khandelwal b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op (#3221)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Stella Laurenzo 5d4b803914 [NFC reformat] Run pre-commit on all files and format misc.
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.

Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
penguin_wwy 6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
Like #3130, gradually replace the deprecated code

https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Rob Suderman 9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` (#3196)
[onnx] Support `onnx.OneHot` lowering to `torch`

Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
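
A minimal PyTorch sketch of this decomposition for the rank-1 index case (hypothetical `depth`, `axis`, and on/off values; the actual rewrite is a C++ pattern):
```
import torch

# onnx.OneHot(indices, depth, values, axis) expressed with one_hot,
# transpose, and where, mirroring the lowering described above.
indices = torch.tensor([0, 2, 1])
depth, axis = 3, 0                      # hypothetical attribute values
off_value, on_value = 0.0, 1.0

hot = torch.nn.functional.one_hot(indices, num_classes=depth)  # hot dim last
hot = hot.transpose(axis, -1)           # move the one-hot dim to `axis`
out = torch.where(hot.bool(), torch.tensor(on_value), torch.tensor(off_value))
print(out)
```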
2024-04-26 12:08:15 -07:00
Andreas Falkenberg cd33d8b011
[onnx] Update DefaultDomainGtoP.cpp gridsampler (#3228)
Gridsampler
In ONNX the interpolation mode is called 'linear', whereas in PyTorch it
is called 'bilinear'. This led to everything other than 'bilinear' being
rejected; the check needed to accept 'linear'.
2024-04-25 18:07:05 -07:00
Archana Ramalingam ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op (#3229)
This commit adds the OnnxToTorch support for ReduceLogSum op
2024-04-25 19:37:57 -04:00
Phaneesh Barwaria f77d88390a
[onnx] handle dynamic padSize tensor in onnx.Pad (#3214)
- Fix pad size to data_rank for dynamic paddingSize Tensor.
- This fix is in accordance with [input
specification](https://onnx.ai/onnx/operators/onnx__Pad.html#inputs) for
onnx.Pad
- Impl will need to be updated for dynamic padSize when support for
`axes` is added.
2024-04-24 11:31:37 +08:00
jinchen 09d42044b4
Support select_last_index attribute of onnx argmin op (#3212)
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compiled and the result values match, but there is a runtime dtype
mismatch between i and si.
2024-04-23 10:43:38 -07:00
jinchen 61e6312c87
Support select_last_index attribute of onnx argmax op (#3192)
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compiled, but hit a runtime dtype mismatch between i and si.
2024-04-23 10:16:08 -07:00
jinchen ddb29c2c02
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` (#3179)
All e2e IREE tests compiled, but they hit a runtime dtype mismatch like
the following:
```
expected:
1x1x2x2xsi32=[[[12 16][24 28]]]
actual:
1x1x2x2xi32=[[[12 16][24 28]]]
```
2024-04-23 09:42:02 -07:00
Vinayak Dev cff2f084d4
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` (#3175)
Adds OnnxToTorch lowering for the ReduceL2 op.
2024-04-23 02:03:05 -04:00
Vivek Khandelwal 3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops (#3193)
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike op.
2024-04-22 22:28:07 +05:30
Vivek Khandelwal 6abc7371c8
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsqueeze op (#2991)
This commit also cleans up the OnnxToTorch lowering for the Squeeze and
Unsqueeze ops and adds support for handling edge cases.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-22 08:52:42 +00:00
Rob Suderman 8222637159
[onnx] Extend op version number of `onnx.ScatterElements` (#3195)
The version number was set too high. Lowering it to support more cases
allows more tests to pass.

Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:32:18 -04:00
Rob Suderman 733cace1df
[onnx] Fix `onnx.split` by directly handling slicing (#3194)
The previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and
directly performs slicing rather than leveraging `aten.split.tensor`.
This is because `onnx` supports a fixed number of splits, making the
size computation more easily computable when lowering to `aten` than
when deferring to `aten.split.tensor`.
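
For intuition, a small Python sketch of the fixed-split computation (hypothetical helper; assumes the split dimension divides evenly):
```
import torch

# With num_outputs fixed, the slice size along `dim` is statically
# computable and each output is a direct slice -- no aten.split.tensor.
def split_fixed(t, num_outputs, dim=0):
    slice_size = t.shape[dim] // num_outputs   # assumes even divisibility
    return [t.narrow(dim, i * slice_size, slice_size)
            for i in range(num_outputs)]

x = torch.arange(12).reshape(6, 2)
for part in split_fixed(x, num_outputs=3):
    print(part.shape)   # three slices of shape (2, 2)
```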

---------

Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
Rob Suderman b01245c0e8
[onnx] Fix `onnx.Not` for non-bool inputs (#3187)
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
2024-04-19 11:32:24 -07:00
Andreas Falkenberg b66eabd492
[onnx][torch][linalg] Implementing align-corner modes for gridsampler (#3171)
Implements the align-corner modes, which select what the corners mean:
either the centers of the corner points or the edges of the edge points.
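
A quick PyTorch illustration of what the two modes mean at a corner (sketch only; the values come from torch's own `grid_sample`, not this lowering):
```
import torch
import torch.nn.functional as F

# align_corners=True: grid coord +1 maps to the center of the corner
# pixel; align_corners=False: it maps to the outer edge of the image.
img = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
grid = torch.tensor([[[[1.0, 1.0]]]])   # sample the bottom-right corner

for align in (True, False):
    out = F.grid_sample(img, grid, mode='bilinear', align_corners=align)
    print(align, out.item())            # 15.0 vs. 3.75 (edge + zero pad)
```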

---------

Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
2024-04-17 13:38:19 -07:00
Vinayak Dev a0232e9ebd
[MLIR][TORCH] Add OnnxToTorch lowering for ReduceL1 Op (#3146)
Adds OnnxToTorch Lowering for the ReduceL1 op.
2024-04-16 12:24:46 +05:30
zjgarvey 5e564b5864
Adds Some Quantization Support for AtenMatmulOp (#3147)
1. onnx.MatMulInteger now converts to aten.matmul instead of aten.mm
2. aten.matmul, for ranks >=2, now allows quantized inputs and will
lower to linalg::quantized_matmul or linalg::quantized_batch_matmul.
3. added AtenMatmulOp to the FuseQuantizeOps rewrite patterns
QuantizeOperands, QuantizeTransposedOperands, and QuantizeAccumulator
4. added several tests, including some to test AtenMmOp with varying
quantization signed-ness.
5. a quantized matmul mat-vec test is added to verify the failure to
lower to linalg; cleaned out some out-of-date code related to common
torch-mlir lowering xfails.
6. in debugging a real model with quantized matmuls, I found a bug in
the scalarize-shapes pass which resulted from the aten.full op folder
returning an incompatible result type. This is fixed by the small change
here to
[lib/Dialect/Torch/IR/TorchOps.cpp](https://github.com/llvm/torch-mlir/compare/main...zjgarvey:torch-mlir:MatMulIntegerFix?expand=1#diff-dc8ed165c207918e606490eee3984b1ad51d7034e6aac36fc046bf47f6f03f4f).
2024-04-15 16:06:47 -07:00
jinchen 83cba8c696
[onnx] Support for `onnx.EyeLike` via torch lowering (#2994) 2024-04-15 09:23:26 -07:00
jinchen 859f5d280f
Generalize getting index for onnx compress op (#3150) 2024-04-12 15:18:22 -07:00
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer functional style as the method style is deprecated
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
Xida Ren (Cedar) dd967eb199
[ONNX] Support onnx.LSTM (#2969)
This PR only includes a lit test. In lieu of an e2e test, https://github.com/nod-ai/SHARK-TestSuite/pull/142 made sure that the lowering works & the numbers check out.

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-08 12:23:33 -07:00
Vivek Khandelwal 1d6e4c3d77
[MLIR][TORCH] Add OnnxToTorch lowering for Einsum op (#3117)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-08 22:38:01 +05:30
Vivek Khandelwal af54d27820
[MLIR][TORCH] Fix Onnx.TopK lowering (#3103)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 22:12:48 +05:30
Vivek Khandelwal ce7d4f1660
[MLIR][TORCH] Fix Onnx.ReduceSum lowering for failing e2e tests (#3095)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 09:57:19 +05:30
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to de-couple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
2024-04-02 16:19:57 -07:00
Vivek Khandelwal d1f770c620
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for ReduceMean op (#3008)
This commit also cleans up the OnnxToTorch lowering for the ReduceMean
op and adds support for handling edge cases.

Signed-Off By: Vivek Khandelwal vivekkhandelwal1424@gmail.com
2024-04-02 16:54:04 +05:30
zjgarvey 532d297c46
[ONNX] Preliminary Work Towards Supporting QuantizedMLP_basic onnx e2e test (#3089)
See the related issues here:
[SHARK-Turbine#556](https://github.com/nod-ai/SHARK-Turbine/issues/556)

1. Adds uint8 casting to onnx.Cast op
2. Fixes an issue with onnx.DequantizeLinear when the scale comes with
shape [1].
3. Adds support for unsigned types in an AtenItemOp folder
4. Adds a simpler quantized model for easier debugging
5. Adds a fusion pass to convert [quant -> dequant -> transpose -> mm]
patterns to [transpose -> quant -> mm].
6. Moved some xfails that are still not passing, but for different
reasons than onnx.cast failures.
2024-04-01 16:21:05 -07:00
Vivek Khandelwal 6844c84702
[MLIR][Torch] Fix OnnxToLinalg lowering for AvgPool op (#3076)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-01 22:14:14 +05:30
Gaurav Shukla 129a79417a
[MLIR][ONNX] Fix onnx.gather_nd implementation (#3070)
The indices should be expanded before the torch.gather operation.

Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-04-01 20:17:09 +05:30
zjgarvey c19fc9ba47
[ONNX] Fixes Issue with Dynamic Dims in GlobalAveragePool -> Torch Conversion (#3053)
Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were
failing due to numerics. This was a result of passing -1 as the
kernel size in the lowering for the corresponding onnx op
GlobalAveragePool.
2024-03-28 09:43:09 -07:00
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dim in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
Vivek Khandelwal 9ae33e482e
[MLIR][TORCH] Add OnnxToTorch lowering for ops (#3049)
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-25 20:29:07 +05:30
zjgarvey 6aa481c204
[ONNX] LogSoftmax to Torch (#3024)
This PR adds support for onnx.LogSoftmax both for old versions (<13,
with axis >=0), and new versions (13).
2024-03-22 11:01:39 -07:00
Gaurav Shukla 50635dd509
[ONNX][MLIR] Add support for onnx.gather_nd (#2988)
Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-03-22 21:38:39 +05:30
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero-point minus 256 (if zp >= 128).
The output tensor seems to always return a tensor of zeros, which also
occurs when running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the output back to
uint8 on output; the second problem is resolved with PR #3018.
2024-03-20 10:58:25 -07:00
jinchen 9cf6c45a39
Add OnnxToTorch support for Compress op (#3025) 2024-03-20 17:12:08 +00:00
zjgarvey 7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 (#3041)
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
2024-03-19 13:35:05 -07:00
Pavani Chowdary c51e2130f2
[onnx] support for lowering mod op from onnx to torch (#2859)
nod-ai/Shark-Turbine#267

---------

Authored-by: boddu.pavani@research.iiit.ac.in
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-18 17:54:37 +05:30
Xinan Jiang(姜曦楠) d8a52e82c2
[onnx] Fix onnx.cast cases between int32 and int64 (#2982)
2 modifications:
1. torch.int64 is enum 4 in TORCH_DTYPE_TO_INT
2. add int32 support
2024-03-15 17:14:09 +00:00
aldesilv 6fa21bd8b1
OnnxToTorch lower celu op (#2920) 2024-03-13 20:34:10 +05:30
Rob Suderman 8fb28661f9
[onnx] Fix onnx.ReduceMean lowering (#3002)
Reduce mean lowerings did not successfully lower to `linalg` via torch.
There were two separate paths that could be consolidated into a single
simpler pass. This resulted in a significant improvement in test
coverage.
2024-03-11 11:32:53 -07:00
Rob Suderman bd7f1baa42
[onnx] Fix expand operation for dynamic shape max (#3001)
If the broadcast shape is length-1 at a dim while the input dim is `?`,
then we need to broadcast to the dynamic dim. This is equivalent to
taking the max of the two dimensions.
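
A sketch of the per-dimension rule (plain Python, hypothetical dim values):
```
# For onnx.Expand with a dynamic input dim: a target dim of 1 must not
# clobber the unknown input dim, so the result is max(input, target).
def expanded_dim(input_dim, target_dim):
    return input_dim if target_dim == 1 else target_dim  # == max here

print(expanded_dim(5, 1))   # -> 5, equivalent to max(5, 1)
print(expanded_dim(1, 7))   # -> 7
```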
2024-03-08 16:23:07 -08:00
Rob Suderman 0723584936
[torch] Add folder for torch.aten.*.Scalar comparisons (#3000)
This folds small versions of the tensor-scalar comparison operators, as
they are commonly used for shape computations. This includes le, lt, ge,
gt, eq, and ne.
2024-03-08 13:44:00 -08:00
Andreas Falkenberg 551a4e45f3
[onnx] Add support for `onnx.Gemm` with no bias (#2993)
The previous gemm version required a bias vector.
This provides an alternate path to `Torch::AtenMm`
with no bias operation.
2024-03-07 15:58:38 -08:00
Rob Suderman 1964208d19
[onnx] Fix constant pad for dynamic shape (#2989)
The current padding operation was not functional for dynamic shapes.
Updated and enabled tests so that onnx.pad tests pass.

Work TBD for reflection padding.
2024-03-07 13:29:50 -08:00
Scott Todd 7b18646def
[onnx] Handle optional arguments in Clip op pattern. (#2976)
Spec: https://onnx.ai/onnx/operators/onnx__Clip.html
2024-03-07 17:25:14 +00:00
Rob Suderman c15f1a2bd2
[onnx] Adding lowering for `onnx.Size` operation (#2985)
We can support `onnx.Size` by requesting the size of each dimension and
taking the product of the results, then packing it into a tensor.
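
The same computation in a short PyTorch sketch (editorial illustration):
```
import torch

# onnx.Size = product of all dimension sizes, packed into a 0-D tensor;
# this matches what numel() returns.
x = torch.zeros(2, 3, 4)
size = torch.tensor(1)
for d in range(x.dim()):
    size = size * x.shape[d]     # take the product of per-dim sizes
print(size, x.numel())           # tensor(24) 24
```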

---------

Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-06 17:01:05 -08:00
Rob Suderman a78659742a
[onnx] Migrate `onnx.ReduceMax` to match `onnx.ReduceMin` (#2981)
This mostly copy-pastes the reduce minimum implementation to reduce max
to improve test coverage. We also improve the aten lowering for min/max
dim for unsigned types.
2024-03-06 16:48:21 -08:00
Andreas Falkenberg ea76dd12ba
[onnx][torch] Gridsampler E2E test and corrections of gridsampler (#2987)
The e2e test itself is actually provided in the Shark-Testsuite; this
adds 2 test cases for the gridsampler e2e test. As intended, some items
were found that needed correction, so the Gridsampler op is changed as
well.
2024-03-06 10:56:58 -08:00
Rob Suderman 933db87a07
[onnx] Add support for constants of `i1`s (#2978)
`getRawBuffer` expects a densely packed vector of `i1` values; however,
`onnx` does not densely pack the values. Include code to handle the
packing / unpacking.
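
A numpy sketch of the byte-per-value vs. densely packed layouts (illustration only; the commit's handling is in C++):
```
import numpy as np

# onnx stores one i1 per byte; a densely packed buffer holds eight per
# byte. Converting between the layouts is a packbits/unpackbits pass.
sparse = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1], dtype=np.uint8)  # one/byte
packed = np.packbits(sparse)                 # dense: 8 values per byte
unpacked = np.unpackbits(packed)[:sparse.size]
assert (unpacked == sparse).all()
print(packed)                                # 2 bytes for 9 bits
```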
2024-03-05 13:55:13 -08:00
Chi_Liu 09875fabd1
[MLIR][ONNX] Add ONNX ReduceProd support (#2943)
Alternatives to https://github.com/llvm/torch-mlir/pull/2908

Fix https://github.com/nod-ai/SHARK-Turbine/issues/353
2024-03-04 11:07:03 -08:00
Rob Suderman d51e80b648
[onnx] Fix onnx.gather lowering for rank-0 indices (#2973)
We assumed the rank was at least 1; however, it can be rank-0, generating
an illegal pair of flatten / unflatten operations. Corrected this.
2024-03-04 08:25:19 -08:00
Vivek Khandelwal 579ac8b666
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for sub and sum op (#2954)
This commit adds the support for scalar conversion to byte. 
This commit also fixes the OnnxToLinalg lowering issue for Onnx.Sub and
Onnx.Sum op.
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/466 
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/467

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-29 21:48:46 +05:30
Andreas Falkenberg 5437f32193
[onnx][torch] Lower `onnx.grid_sampler` to the `torch` equivalents (#2952)
This is the lowering of gridsampler from onnx to torch using our prior
implementation of AtenGridSamplerOp. Several corner-case checks are
implemented here. We may decide to move part of these checks into
AtenGridSamplerOp instead of the onnx lowering portion.
2024-02-28 13:52:15 -08:00
Rob Suderman e48fe45886
[onnx] Import `onnx` import to pass remaining tests (#2951)
Finish supporting importing the vast majority of `onnx` operations. This
includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Rob Suderman 4a7a7d76f8
[onnx] Fix ReduceMean lowering to torch (#2956)
Torch lowering only supported the most recent version. Refactored the
lowering to more easily handle default values and optional operands /
attributes.
2024-02-27 22:48:07 -08:00
Vivek Khandelwal d81747eadb
[MLIR][TORCH] Extend support for OnnxToLinalg lowering for Dropout and Div op (#2938)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/451,
https://github.com/nod-ai/SHARK-Turbine/issues/452
2024-02-27 11:02:05 +05:30
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic form importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import,
as an onnx-to-torch equivalent exists. Dropping the importer code and
adding support for resource conversion substantially increases test
coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Rob Suderman cea51897a5
[onnx] Simplify onnx.slice lowering (#2919)
The onnx slice lowering needlessly used arange instead of directly
constructing the constant dimension values. This made lowerings to
linalg struggle, as multiple folders were required to recover what is a
constant index value.
2024-02-19 10:26:29 -08:00
aldesilv d29157b33f
OnnxToTorch support for onnx.InstanceNormalization op (#2710)
https://github.com/nod-ai/SHARK-Turbine/issues/327
2024-02-19 19:53:48 +05:30
Rob Suderman d65925a8b4
[onnx] Fix `onnx.sigmoid` for integer inputs/outputs (#2914)
Sample compilation crashes due to sigmoid with integer inputs/outputs.
This fix avoids crashing but still experiences an error.
2024-02-16 13:35:25 -08:00
Rob Suderman 7a0d0e954b
[onnx] Fix onnx.gather lowering to use torch.aten.index_select (#2913)
Onnx's gather maps directly to `torch.aten.index_select`. We should just
use that path.
2024-02-16 16:05:44 -05:00
Rob Suderman 468c533942
[onnx] Fix crash when negative transpose values exist (#2915)
We are crashing due to indexing into a negative shape. Updated the
lowering to avoid the crash.
2024-02-16 16:04:47 -05:00
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
2024-02-15 10:17:13 -08:00
Vivek Khandelwal d6d1a173dc
[MLIR][Torch] Add OnnxToTorch and TorchToLinalg support for trig ops (#2903)
This commit adds the OnnxToTorch lowering for cosh, acosh, asin, asinh,
and atanh op.
This commit also adds the TorchToLinalg lowering for acosh, asin, asinh,
and atanh op.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-14 11:58:09 +05:30
saienduri 9b967f6b5a
[MLIR][ONNX] Add OnnxToTorch support for Mean, IsInf, IsNaN, PRelu op (#2801)
This commit adds the OnnxToTorch support for Mean, IsInf, IsNaN, and
PRelu ops. All high-priority ops were taken, so we went with these. The
non-trivial ones are Mean and IsInf, which might require extra review.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-13 12:38:21 +05:30
Ashay Rane 21f070e95f
onnx: fix checks in TorchOnnxToTorch pass to match the ONNX spec (#2848)
This PR contains three commits to update the validation checks in the
ONNX -> Torch conversion pass for the AveragePool, Pad, and Slice operators:

> onnx: fix preconditions for lowering AveragePool ops
> 
> The `pads` attribute of the AveragePool operator specifies the value to
> pad at both the beginning as well as the end of the axis (see
> https://onnx.ai/onnx/operators/onnx__AveragePool.html#attributes), so
> the size of this attribute should be twice the rank of the input tensor.
> However, our TorchOnnxToTorch bails out early since it incorrectly
> compares the pads attribute with the rank (not twice the rank) of the
> input tensor.
> 
> This patch fixes the code to match the spec and adds a lit test.

> onnx: allow optional constant value for Pad operator
> 
> The `constant_value` input of the onnx.Pad operator is optional (see
> https://onnx.ai/onnx/operators/onnx__Pad.html#inputs), but the
existing
> logic for lowering the operator into the Torch dialect assumes that it
> is mandatory.
> 
> This patch makes the attribute optional and constructs a default value
> (a list of zeros the size of the input tensor) if the attribute was not
> specified.

> onnx: fix checks for axes and steps inputs of Slice operator
> 
> The ONNX Spec for the Slice operator allows the `starts` and `ends`
> inputs to have fewer indices than the dimensions of the `data` tensor
> (see https://onnx.ai/onnx/operators/onnx__Slice.html), but our code
> expects these inputs to be as many as the `data` tensor's dimensions.
> 
> More precisely, the spec requires that the `starts` and `ends` inputs
> are only as long as the `axes` input, but since the `axes` input is
> optional, the default type for the `axes` input has to match the type
> for the `starts` and `ends` inputs. Moreover, the number of indices in
> the `steps` input also has to match those in the `axes` input (instead
> of matching the dimensions of the `data` input).
> 
> This patch fixes the checks in the TorchOnnxToTorch conversion so that
> they match the ONNX spec.
2024-02-07 21:19:27 -08:00
Vivek Khandelwal 4df96616db
[MLIR][TORCH] Modify Onnx.Reshape lowering for static shape cases (#2852)
This commit modifies the OnnxToTorch lowering of Onnx.Reshape op by
creating the result shape list for the aten.reshape using the result
shape values inferred from the op's result shape.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-07 17:44:07 -08:00
Rob Suderman e3faef5224
[onnx] Convert `onnx.QLinearConv` to `torch` (#2851)
Leaning on the QDQ functionality in torch, we can support the QLinearConv
operation by piggybacking through `torch.Convolution`. This includes
some changes, such as allowing the `onnx` rewriter to run recursively.
Doing so allows `QLinearConv` to decompose to `onnx.Convolution`, which
is then lowered to `torch`.
2024-02-05 16:09:41 -08:00
Rob Suderman cb52c4b3cc
[onnx] Fix `onnx-to-torch` lowering for flatten shape (#2834)
The existing `flatten` lowering did not define what the intermediate
shape was. This could result in failures to lower further to linalg as
the intermediate shape was unknown. Added a shape refinement section.
2024-02-05 14:23:46 -08:00
Gaurav Shukla f4562a8eaa
[ONNX] Fix the lowering of onnx.expand op (#2861)
Signed-off-by: Gaurav Shukla <gauravshukla789@gmail.com>
2024-02-05 23:46:58 +05:30
Xida Ren (Cedar) 24b8c8672a
[torch] Add folders for `torch.fill`, `torch.ones`, `torch.zeros` and `aten.getItem` (#2849)
So that the CumSum Op in OPT can get the constant that it requires to be lowered to TMTensor

---------

Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-02 10:46:33 -08:00
Ben Vanik 962d514308
Fixing implicit double->float conversion warning. (#2850)
`[build]
D:\Dev\iree\third_party\torch-mlir\lib\Conversion\TorchOnnxToTorch\DefaultDomainGtoP.cpp(734):
warning C4305: 'argument': truncation from 'double' to 'float'`
2024-02-01 22:02:44 -08:00
Rob Suderman 29baa813bd
[onnx] Fix `pool` lowering for non-symmetric padding (#2837)
`torch` requires that padding be symmetric for pooling operations. To
support non-symmetric pad we need to separately materialize out the
padding operation.

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-02-01 14:35:21 -08:00
Rob Suderman 3500523f75
[onnx] Convert resources to denseattr for `onnx.constant` to `torch` (#2830)
`onnx` explicitly specifies that `raw_data` is stored in `little-endian`
layout. While converting to `torch` we need to convert from a known
endian format to an internal format of consistent layout. This means
endianness must be correct during the import of `onnx.Constant`.
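
A small Python sketch of why the byte order matters when decoding `raw_data` (editorial; `struct` used purely for illustration):
```
import struct

# onnx raw_data is little-endian; decoding must honor that regardless
# of host byte order. '<f' forces little-endian float32.
raw = struct.pack('<f', 3.14)        # bytes as they appear in raw_data
(value,) = struct.unpack('<f', raw)  # correct on any host
(wrong,) = struct.unpack('>f', raw)  # what a byte-order mixup yields
print(value, wrong)
```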

---------

Co-authored-by: Xida Ren (Cedar) <cedar.ren@gmail.com>
2024-01-31 11:40:53 -08:00
Stella Laurenzo 7301aa80fd
Enable -Werror in lib/ and LTC. (#2841)
Required some massaging of LTC to make it warning clean, and I had to
manually disable some warnings on the generated source files (which we
don't control).

The project is warning clean now.

The `-Werror` flag is disabled by default as we can't control everywhere
people will try to build/install. The CI enables it via
-DTORCH_MLIR_ENABLE_WERROR_FLAG=ON.
2024-01-30 23:33:21 -08:00
Stella Laurenzo 26c0ecd09c [nfc] Remove unused var causing error downstream 2024-01-30 22:18:13 -08:00
aldesilv eff325abc3
OnnxToTorch ReduceMax lowering (#2768)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/352
2024-01-30 11:44:48 +05:30
Quinn Dawkins 494089d53d
Clang format refresh (#2812)
After noticing a number of commits with unrelated formatting changes,
I think something was changed with clang-format at one point and we're
seeing a number of unrelated changes. Doing a refresh can help avoid
this.

The changes made here came from
```
find lib -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find include -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find projects -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
```
2024-01-29 12:59:33 -05:00
Rob Suderman d3fd754b93
[onnx] `onnx.MatMulInteger` lowering to `torch.mm` and `quint*` types (#2761)
Torch does not have an equivalent matmul operation for integers. Instead
it sidechannels the information via its quantized types. For this
lowering we set up these sidechannels and then invoke `torch.mm`.
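
Numerically, the sidechannel encodes the following integer-matmul semantics (PyTorch sketch with hypothetical zero points):
```
import torch

# onnx.MatMulInteger: (u)int8 inputs with zero points, accumulated in
# int32; the result equals (a - a_zp) @ (b - b_zp).
a = torch.randint(0, 256, (2, 3), dtype=torch.int32)     # stand-in uint8
b = torch.randint(-128, 128, (3, 4), dtype=torch.int32)  # stand-in int8
a_zp, b_zp = 128, 0                                      # hypothetical

out = (a - a_zp) @ (b - b_zp)        # int32 accumulation
print(out.dtype, out.shape)          # torch.int32 torch.Size([2, 4])
```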
2024-01-29 09:40:21 -08:00
Vivek Khandelwal da7c6d2c16
[MLIR][TORCH] Add support for dynamic shape for Onnx.Transpose op (#2803)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-26 09:46:54 -08:00
Phaneesh Barwaria 4964977e85
[ONNX][MLIR] support constantOfShape op (#2747) 2024-01-26 09:36:39 -08:00
Rob Suderman 60bf6c25af
[onnx] Lower `onnx.QLinearMatMul` lowering to `torch` operators (#2776)
We can plumb the linear matmul into pytorch using its quantized types
with side channel information. To handle the final int8 operation we
dequantize and requantize.
2024-01-24 12:28:48 -08:00
Vivek Khandelwal 894805dd5e
[MLIR][TORCH] Support for `onnx.LayerNormalization` (#2789)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-24 11:08:20 -08:00
Gaurav Shukla 12f123eff8
[ONNX][MLIR] Add support for pad op in the onnx pipeline (#2738)
This commit adds mapping from `onnx.pad` op to `torch.pad` op. Currently
it does not support the `axes` parameter of `onnx.pad` op.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-25 00:33:37 +05:30
Phaneesh Barwaria ac8975ea12
[MLIR] [ONNX] lowering for onnx tile op and sign op (#2725) 2024-01-24 22:56:21 +05:30
Chi_Liu 77ae56337d
[ONNX][MLIR] Add support for onnx.Exp op (#2792)
https://github.com/nod-ai/SHARK-Turbine/issues/312
2024-01-23 13:45:00 -08:00
James Newling dc056e58e6
[MLIR][TORCH] Add onnx.cast cases used by OPT-1.25M (#2787) 2024-01-23 21:06:25 +05:30
Gaurav Shukla b7a0329676
[ONNX][MLIR] Fix padding size constraint for onnx.maxpool op (#2782)
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2024-01-23 19:23:01 +05:30
Chi_Liu cad98e8113
[ONNX][TORCH-MLIR] Add TopK support (#2774)
https://github.com/nod-ai/SHARK-Turbine/issues/331
2024-01-22 12:56:39 -08:00
Ramiro Leal-Cavazos 5883ef0f21
Fix unused variable warnings (#2775) 2024-01-22 11:05:55 -08:00
Dave Liddell 2f4924015d
[onnx] Added flatten (#2760)
https://github.com/nod-ai/SHARK-Turbine/issues/328

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-19 16:18:16 -08:00
Gaurav Shukla 3b85c70748
[ONNX][MLIR] Add support for onnx.gather op (#2726)
This commit adds support for gather op in the onnx pipeline.
https://github.com/nod-ai/SHARK-Turbine/issues/242

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-19 21:58:29 +05:30
Andreas Falkenberg 4de4d38b87
Initial commit of NonZero op (#2766) 2024-01-18 15:23:13 -10:00
Rob Suderman b5387c0f29
[onnx] Lowering `onnx.dequantize_linear` to `torch` (#2759)
We can map the per-tensor version of the operation to the dequantize
operation by marking it with the make-quantized-tensor component. This
introduces the `qint*` and `quint*` tensor types that can be lowered to
the appropriate dequantization behavior during the torch-to-linalg
conversion.
2024-01-18 16:47:21 -08:00
Rob Suderman bd11877f6f
[onnx] Support lowering quantize linear to `torch` (#2751)
We can map the per_tensor case to the `torch.aten.quantize_per_linear`
operation. In this case we extract the `scale` and `zeropoint` values
and directly invoke the quantization, then return the integer
representation value.
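
The same extract-and-quantize flow in a PyTorch sketch (hypothetical scale/zero-point values):
```
import torch

# Per-tensor affine quantization: given scale and zero point, quantize
# and keep the integer representation, as described above.
x = torch.tensor([-1.0, 0.0, 0.5, 2.0])
scale, zero_point = 0.1, 10          # hypothetical values

q = torch.quantize_per_tensor(x, scale, zero_point, torch.quint8)
print(q.int_repr())                  # integer representation: 0, 10, 15, 30
print(q.dequantize())                # round-trip back to float
```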
2024-01-18 16:33:10 -08:00
Phaneesh Barwaria eed144bfbc
[ONNX][MLIR] add Identity op support (#2754) 2024-01-16 19:06:54 +05:30
kumardeepakamd 87389f0762
[ONNXToTorch] Add conversion for Onnx range (#2752)
Implemented ONNX.Range. The spec says the data types for start, limit,
and delta are 0-D and can be double, float, int16, int32, or int64. All
int types are mapped to !torch.int and all float types to !torch.float.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-15 14:26:46 -05:00
Rob Suderman 197b3b475c
[onnx] Convert `onnx.constant` to `torch` literal tensor (#2748)
Handles the multiple cases of `onnx` constant values and converts them
to `torch` literal tensors. This can include splats with a single
integer or floating point value, a set of explicit integer values, or
an elements array attr of values.
2024-01-15 09:31:22 -08:00
Chi_Liu c7452af4fa
[MLIR][ONNX] Add OnnxToTorch support for Maxpool Op (#2695)
Add Maxpool ONNX op support.
Add Utils.h/cpp files to create a constant int list for ONNX.
2024-01-12 14:54:38 -08:00
James Newling 47ffc90db4
signed/unsigned c++ compiler warning fixes (#2742) 2024-01-11 09:46:46 -08:00
Andreas Falkenberg 5862854bc8
[ONNX][TORCH-MLIR] LayerNorm (#2716)
Layer Normalization using the torch.aten.native_layer_norm 

https://github.com/nod-ai/SHARK-Turbine/issues/325
2024-01-11 14:27:04 +05:30
Xida Ren (Cedar) aee1fca251
Minor typo fix: in not implemented message for the exclusive and reverse attributes for cumsum (#2740) 2024-01-10 14:24:37 -08:00
kumardeepakamd 29569713f3
support for onnx.expand operator (#2729)
maps onnx.expand to torch aten broadcast_to, three tests added

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-10 13:05:37 -08:00
Vivek Khandelwal 208ae35583 [MLIR][ONNX] Add TorchToOnnx Support for DepthToSpace op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Vivek Khandelwal 4707d3bdc6 [MLIR][ONNX] Add OnnxToTorch support for Bernoulli and CastLike op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:24:06 +05:30
Vivek Khandelwal 35e8f86792 [MLIR][ONNX] Add OnnxToTorch support for Dropout and Elu op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:23:55 +05:30
Ben Vanik 4dd17f0b71
Fixing implicit double->float truncation warnings. (#2733)
Floating-point literals should use the correct type specifier.
2024-01-08 17:26:38 -05:00
Han-Chung Wang 6096fcb347
[OnnxToTorch] Delete unused variables. (#2728) 2024-01-04 17:30:05 -08:00
John Wu 4e5e34d215
[MLIR][ONNX] Add OnnxToTorch support for Slice Op (#2696) 2024-01-03 19:41:10 -08:00
Xida Ren (Cedar) 1778314620
add basic cumsum. this doesn't support the exclusive and reverse attrs (#2717)
fixes #2711
2024-01-03 09:52:59 -08:00
Xida Ren (Cedar) 9fc212ea9a
support Onnx opset 1-13 ReduceMean where axes is supplied as an attr (#2703)
(instead of an input)

Addresses part of #2689. fixes #2702
2023-12-28 09:31:41 -08:00
Xida Ren (Cedar) d560698e3d
Lower `onnx.split` to `torch.aten` (#2686) 2023-12-27 17:53:07 -08:00
aldesilv 2d796b7502
lower onnx max op to torch aten maximum op (#2618)
lower onnx min op to torch aten minimum op
2023-12-27 11:07:35 -08:00
aldesilv 336cfb64b5
OnnxToTorch support for onnx.Mul op (#2699) 2023-12-27 10:50:08 -08:00
Xida Ren (Cedar) 6847fc1fc6
Fix since-opset too high (#2701)
Addresses two of the ops from
https://github.com/llvm/torch-mlir/issues/2689

https://github.com/llvm/torch-mlir/issues/2700
2023-12-27 10:08:09 -08:00
aldesilv abc6b0a25a
onnx to torch pow support (#2656) 2023-12-27 09:34:48 -08:00
Vivek Khandelwal 4f252c88b4
[MLIR][ONNX] Add OnnxToTorch support for GlobalAveragePool op. (#2692)
This commit adds the OnnxToTorch support for GlobalAveragePool op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-26 10:25:31 -08:00
saienduri ee75e8d1ae
[MLIR][ONNX] Add OnnxToTorch support for Reshape Op (#2698)
This commit adds the OnnxToTorch support for Reshape op.
2023-12-26 10:20:13 -08:00
Vivek Khandelwal 0849fd0a06 [MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 16:36:21 +05:30
Vivek Khandelwal 9a72c6584e [MLIR][ONNX] Add OnnxToTorch support for BatchNormalization and Concat op.
This commit adds the OnnxToTorch support for BatchNormalization and Concat op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-22 11:25:33 +05:30
John Wu 46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch (#2682)
The expression for HardSigmoid in Onnx
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html) is
max(0, min(1, alpha * x + beta)),

which is inherently different from HardSigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html):
0 if x < -3; 1 if x > 3; else x/6 + 1/2.

That being said, it was just better to compute out the entire expression
when translating the Onnx expression to Torch mlir, which is done in
this PR. Some of the logic is shared from the files in
`DecomposeComplexOps`. Therefore, refactored some shared logic between
`DecomposeComplexOps` and `DefaultDomainGToP` and put it in a `Utils`
file.
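
A PyTorch sketch checking the equivalence for torch's own constants (editorial; alpha = 1/6 and beta = 1/2 here by assumption):
```
import torch

# Computing the ONNX expression directly and comparing against torch's
# built-in HardSigmoid, which fixes alpha = 1/6 and beta = 1/2.
x = torch.linspace(-5, 5, 11)
alpha, beta = 1.0 / 6.0, 0.5

onnx_style = torch.clamp(alpha * x + beta, min=0.0, max=1.0)
torch_style = torch.nn.functional.hardsigmoid(x)
print(torch.allclose(onnx_style, torch_style))  # True
```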
2023-12-21 07:29:22 -08:00
Vivek Khandelwal 3226241521 [MLIR][ONNX] Add OnnxToTorch support for Conv and ConvTranspose op.
This commit adds the OnnxToTorch support for Conv and ConvTranspose op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-21 11:12:14 +05:30
Stella Laurenzo d75cff6cd1 NFC: Remove unused variable causing a warning. 2023-12-20 19:23:27 -08:00
Rob Suderman 11cc92d4ab
[onnx] Lowerings from `onnx.tan` (#2642)
Started work on the `tan` lowerings for ONNX to Torch. Uses `sin` and
`cos` to represent a `tan`.
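
The identity the lowering relies on, as a one-line sketch:
```
import torch

# tan(x) == sin(x) / cos(x); the lowering builds tan from sin and cos.
x = torch.linspace(-1.0, 1.0, 5)
print(torch.allclose(torch.sin(x) / torch.cos(x), torch.tan(x)))  # True
```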
2023-12-20 10:09:39 -08:00
Andreas Falkenberg ebaab4200f
[ONNX] ONNX -> TORCH for Erf (#2673)
TorchOnnxToTorch lowering for the Erf function.
2023-12-19 08:07:27 -08:00
Vivek Khandelwal 8649b84e3f
[MLIR][ONNX] Add OnnxToTorch support for AveragePool op. (#2672)
This commit adds the OnnxToTorch support for AveragePool op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-18 18:17:11 -06:00
saienduri 698ff3a736
[MLIR][ONNX] Add OnnxToTorch support for Reduction Ops (#2657)
This commit adds the OnnxToTorch support for ReduceSum, ReduceMean, and
ReduceMin ops.
2023-12-18 12:37:31 -08:00
John Wu deacb8ef38
[MLIR][ONNX] Add OnnxToTorch support for Gelu (#2647)
This commit adds the OnnxToTorch support for Gelu op.

---------

Co-authored-by: Rob Suderman <suderman@google.com>
2023-12-18 10:57:08 -08:00
Rob Suderman ae1a6e4a5a
[onnx] Lower `onnx.Gemm` to `torch` (#2663)
General lowering for `onnx.Gemm` to `torch`
2023-12-16 10:47:58 -08:00
Andreas Falkenberg cee8563060
[onnx] Support of onnx.Greater, onnx.Less, onnx.GreaterOrEqual to Torch (#2649)
The three remaining compare operations
onnx.Greater 
onnx.Less 
onnx.GreaterOrEqual

Are also added with this push request. 
This concludes a set of basic tensor compare functions.
2023-12-16 12:42:11 -05:00
Rob Suderman 61888690bb
[onnx] Add support for `onnx.sinh` (#2643)
Adds a lowering from `onnx.sinh` to `aten.sinh`. This includes adding
the `aten.sinh` operator.
2023-12-15 21:23:51 -08:00
Rob Suderman 705ea958ae
[onnx] Lowerings from `onnx.transpose` (#2641)
Lowerings for `transpose` from ONNX to `aten`. The implementation
depends on making multiple `aten.transpose` operations that each swap a
pair of dimensions. As `onnx.transpose` can permute dimensions
arbitrarily, it may require constructing multiple `aten.transpose` ops.
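
A Python sketch of realizing an arbitrary permutation as pairwise swaps (hypothetical helper; the real lowering emits `aten.transpose` ops):
```
import torch

# Apply permutation `perm` using only pairwise transpose swaps.
def permute_via_transposes(t, perm):
    cur = list(range(t.dim()))           # tracks where each dim lives
    for i in range(len(perm)):
        j = cur.index(perm[i])           # find the wanted source dim
        if i != j:
            t = t.transpose(i, j)        # one pairwise swap
            cur[i], cur[j] = cur[j], cur[i]
    return t

x = torch.randn(2, 3, 4)
assert permute_via_transposes(x, (2, 0, 1)).equal(x.permute(2, 0, 1))
print(permute_via_transposes(x, (2, 0, 1)).shape)  # (4, 2, 3)
```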
2023-12-15 15:30:05 -08:00
Rob Suderman 061af696ce
[onnx] Lowering for `onnx.shape` to `torch` and `tensor` (#2648)
Includes the lowering from the `aten` equivalent to `tensor` operations.
2023-12-15 11:37:49 -08:00
Gaurav Shukla eb9249e601
[ONNX][MLIR] Add support for LeakyRelu and GatherElements op (#2655)
This commit adds support for `LeakyRelu and GatherElements` op in the
onnx pipeline.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-12-15 11:18:28 -08:00
saienduri f59c01fd2f
[MLIR][ONNX] Add OnnxToTorch support for q-z ops (specific ops in description) (#2601)
This commit adds the OnnxToTorch support for Reciprocal, Round,
ScatterElements, Sigmoid, Sin, Tanh, Sqrt, Sub, Sum, Where, Xor,
Squeeze, Unsqueeze ops.
For reviewers, the ops that weren't trivial and probably require extra
review are Sum, Squeeze, and Unsqueeze.
2023-12-15 09:36:18 -08:00
Andreas Falkenberg 4ec8b9fc02
[onnx] add support for onnx.LessOrEqual (#2639)
Added the less or equal operation to OnnxToTorch. 
onnx.LessOrEqual

---------

Co-authored-by: root <andreas.falkenberg@amd.com>
2023-12-14 22:23:23 -05:00
Rob Suderman 4857606ffe
[onnx] Lowerings from `onnx.selu` (#2634)
Lowerings for ONNX `selu` to the corresponding torch implementations.
Torch's `selu` implementation has fewer features, so we use the
generalized `elu` with the input scale set to `1.0`.
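
A PyTorch sketch of the substitution (fixed SELU constants; an input scale of 1.0 is implicit in `F.elu`):
```
import torch
import torch.nn.functional as F

# selu(x) == scale * elu(x, alpha) for the fixed SELU constants.
alpha = 1.6732632423543772
scale = 1.0507009873554805

x = torch.linspace(-2.0, 2.0, 9)
via_elu = scale * F.elu(x, alpha=alpha)
print(torch.allclose(via_elu, F.selu(x)))  # True
```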
2023-12-14 08:53:47 -08:00
John Wu 42392bc845
[MLIR][ONNX] Add OnnxToTorch support for matmul ops (#2629)
This commit adds the OnnxToTorch support for Matmul.
2023-12-13 09:35:32 -08:00
Frederik Harwath b656c674ee Implement e2e support for aten.acos op
This depends on a change in the LLVM core repository which adds acos
support to the MLIR Math dialect.
2023-12-12 10:52:02 +01:00
Vivek Khandelwal 0b4422a253 [MLIR][ONNX] Add OnnxToTorch support for bitwise and math ops
This commit adds the OnnxToTorch support for BitwiseXor, BitwiseOr, Div, Equal, Cast,
Ceil, Floor, Cos, and Clip op.
This commit also adds the TorchToLinalg support for aten.clamp.Tensor and aten.clamp_min.Tensor op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-11 19:36:01 +05:30
Vivek Khandelwal dc9ea08db5 [MLIR][ONNX] Add OnnxToTorch support for atan and bitwise ops
This commit adds the OnnxToTorch support for Atan, Bitshift, BitwiseAnd,
and BitwiseNot op.
This commit also adds the TorchToLinalg support for AtenBitwiseLeftShiftTensorOp.

Signed-Off By: vivekkhandelwal@nod-labs.com
2023-11-28 17:19:07 +05:30
Stella Laurenzo e06efc5136
Initial TorchOnnxToTorch conversion pipeline. (#2585)
Adds a pipeline to convert custom ops and metadata represented as
`torch.operator` custom ops to corresponding `torch` ops where possible.

This is part of a multi-part approach for building ONNX import in as a
regular feature of torch-mlir. It is focused on the conversions vs the
infra. We will end up maintaining a [pure-python
importer](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/importers/onnx_importer.py)
to go with this in torch-mlir, and we will also maintain test case
generation utilities derived from it.

I have left substantial documentation in the README of the conversion
directory, including the recommended approach that we will take to keep
building this out.

(note that this organizes the code to coincide with the refactoring in
#2442 versus the current flat arrangement)
2023-11-21 21:02:55 -08:00