Commit Graph

58 Commits (67d6a665a450eb0f69fbade238717f6dd3f56654)

Author SHA1 Message Date
Vinayak Dev 67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` (#3276)
Adds the OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
the common implementation shared by the window functions.
2024-05-03 12:04:57 -04:00
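
For reference, the window this lowering emits follows the standard Hann definition; a minimal C++ sketch of the formula (assuming the periodic form), not the lowering itself:

```
// Sketch of the Hann window math: w[n] = 0.5 - 0.5 * cos(2*pi*n / N).
// Symmetric windows divide by N - 1 instead of N.
#include <cmath>
#include <cstdio>
#include <vector>

const double kPi = 3.14159265358979323846;

std::vector<double> hannWindow(int size, bool periodic = true) {
  std::vector<double> w(size);
  double denom = periodic ? size : size - 1;
  for (int n = 0; n < size; ++n)
    w[n] = 0.5 - 0.5 * std::cos(2.0 * kPi * n / denom);
  return w;
}

int main() {
  for (double v : hannWindow(8))
    std::printf("%.4f ", v);
  std::printf("\n");
  return 0;
}
```
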
Vinayak Dev 05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function (#3181)
Implements the OnnxToTorch lowering for the BlackmanWindow function.
2024-04-30 12:21:27 -04:00
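
The common implementation factored out in #3276 above is, in effect, the generalized cosine window; a hypothetical helper sketching how one routine can serve both Hann and Blackman (the helper name is illustrative, the coefficients are the standard ones):

```
// Generalized cosine window: w[n] = a0 - a1*cos(2*pi*n/N) + a2*cos(4*pi*n/N).
// Blackman uses a0 = 0.42, a1 = 0.5, a2 = 0.08; Hann is a0 = a1 = 0.5, a2 = 0.
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

std::vector<double> cosineWindow(int size, double a0, double a1, double a2,
                                 bool periodic = true) {
  std::vector<double> w(size);
  double denom = periodic ? size : size - 1;
  for (int n = 0; n < size; ++n)
    w[n] = a0 - a1 * std::cos(2.0 * kPi * n / denom) +
           a2 * std::cos(4.0 * kPi * n / denom);
  return w;
}

// Blackman: cosineWindow(N, 0.42, 0.5, 0.08)
```
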
jinchen fbbad2d81e
Fix onnx atanh lowering (#3264)
The IREE tests `test_atanh` and `test_atanh_example` pass.
2024-04-30 00:50:08 -07:00
jinchen bf04b53b07
Fix onnx asinh lowering (#3263)
The IREE tests `test_asinh` and `test_asinh_example` pass.
2024-04-30 00:49:57 -07:00
jinchen fb499192df
Fix onnx acosh lowering (#3262)
The IREE tests `test_acosh` and `test_acosh_example` pass.
2024-04-30 00:49:44 -07:00
jinchen aa471f1d96
Fix onnx cosh lowering (#3254)
The IREE tests `test_cosh` and `test_cosh_example` pass.
2024-04-30 00:49:29 -07:00
penguin_wwy 6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
As in #3130, this gradually replaces the deprecated code.

https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
jinchen 09d42044b4
Support select_last_index attribute of onnx argmin op (#3212)
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compile and the result values match, but there is a runtime dtype
mismatch between `i` and `si` types.
2024-04-23 10:43:38 -07:00
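
A sketch of the `select_last_index` semantics added here (the argmax variant in #3192 below is analogous): on ties, the attribute selects the last occurrence of the extreme value instead of the default first one.

```
// select_last_index for argmin over a flat vector: "<=" keeps updating on
// ties, so the last minimal index wins.
#include <cstddef>
#include <vector>

size_t argmin(const std::vector<float> &x, bool selectLastIndex) {
  size_t best = 0;
  for (size_t i = 1; i < x.size(); ++i) {
    bool better = selectLastIndex ? x[i] <= x[best] : x[i] < x[best];
    if (better)
      best = i;
  }
  return best;
}

// argmin({3, 1, 1, 2}, /*selectLastIndex=*/false) == 1
// argmin({3, 1, 1, 2}, /*selectLastIndex=*/true)  == 2
```
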
jinchen 61e6312c87
Support select_last_index attribute of onnx argmax op (#3192)
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compile, but hit the same runtime dtype mismatch between `i` and `si` types.
2024-04-23 10:16:08 -07:00
jinchen ddb29c2c02
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` (#3179)
All e2e IREE tests compile, but they hit a runtime dtype mismatch like
the following:
```
expected:
1x1x2x2xsi32=[[[12 16][24 28]]]
actual:
1x1x2x2xi32=[[[12 16][24 28]]]
```
2024-04-23 09:42:02 -07:00
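
For context, `onnx.ConvInteger` subtracts the zero points from its operands and accumulates in 32-bit integers; the `si32`-vs-`i32` diff above is purely a signedness annotation on that accumulator type. A 1-D sketch of the computation (assuming no padding and stride 1):

```
// y[i] = sum_k (x[i+k] - xZp) * (w[k] - wZp), accumulated in int32.
#include <cstdint>
#include <vector>

std::vector<int32_t> convInteger1D(const std::vector<uint8_t> &x,
                                   const std::vector<uint8_t> &w,
                                   uint8_t xZp, uint8_t wZp) {
  // Assumes w.size() <= x.size().
  std::vector<int32_t> y(x.size() - w.size() + 1, 0);
  for (size_t i = 0; i < y.size(); ++i)
    for (size_t k = 0; k < w.size(); ++k)
      y[i] += (int32_t(x[i + k]) - xZp) * (int32_t(w[k]) - wZp);
  return y;
}
```
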
Vivek Khandelwal 3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops (#3193)
This commit adds the OnnxToTorch lowering for ONNX's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike ops.
2024-04-22 22:28:07 +05:30
jinchen 83cba8c696
[onnx] Support for `onnx.EyeLike` via torch lowering (#2994) 2024-04-15 09:23:26 -07:00
jinchen 859f5d280f
Generalize getting index for onnx compress op (#3150) 2024-04-12 15:18:22 -07:00
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer the functional style, as the method style is deprecated:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
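
The change is mechanical; a sketch of the before/after shape (assumes the usual MLIR headers and is illustrative rather than lifted from the patch):

```
#include "mlir/IR/BuiltinTypes.h"

void example(mlir::Type ty) {
  // Deprecated member style:
  //   auto tensorTy = ty.cast<mlir::RankedTensorType>();
  //   if (ty.isa<mlir::RankedTensorType>()) ...
  // Preferred functional style:
  if (mlir::isa<mlir::RankedTensorType>(ty)) {
    auto tensorTy = mlir::cast<mlir::RankedTensorType>(ty);
    (void)tensorTy;
  }
}
```
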
Vivek Khandelwal 1d6e4c3d77
[MLIR][TORCH] Add OnnxToTorch lowering for Einsum op (#3117)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-08 22:38:01 +05:30
zjgarvey 532d297c46
[ONNX] Preliminary Work Towards Supporting QuantizedMLP_basic onnx e2e test (#3089)
See the related issues here:
[SHARK-Turbine#556](https://github.com/nod-ai/SHARK-Turbine/issues/556)

1. Adds uint8 casting to the onnx.Cast op.
2. Fixes an issue with onnx.DequantizeLinear when the scale comes with
shape [1].
3. Adds support for unsigned types in an AtenItemOp folder.
4. Adds a simpler quantized model for easier debugging.
5. Adds a fusion pass to convert [quant -> dequant -> transpose -> mm]
patterns to [transpose -> quant -> mm] (see the sketch after this entry).
6. Moves some xfails that are still not passing, but for reasons other
than onnx.Cast failures.
2024-04-01 16:21:05 -07:00
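
The rewrite in item 5 is sound because per-tensor dequantization is elementwise and therefore commutes with transpose; a small numeric sketch of that identity (values arbitrary):

```
// transpose(dequant(q)) == dequant(transpose(q)) for a per-tensor scale/zp.
#include <cassert>
#include <cstdint>

int main() {
  int8_t q[2][2] = {{10, -3}, {7, 2}};
  const float scale = 0.5f;
  const int8_t zp = 1;
  float dq[2][2];
  for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 2; ++j)
      dq[i][j] = (q[i][j] - zp) * scale; // dequantize first
  for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 2; ++j)
      // transpose of the dequantized matrix vs. dequantize of the transpose
      assert(dq[j][i] == (q[j][i] - zp) * scale);
  return 0;
}
```
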
Vivek Khandelwal 6844c84702
[MLIR][Torch] Fix OnnxToLinalg lowering for AvgPool op (#3076)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-01 22:14:14 +05:30
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example, when
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128).
The output always seems to be a tensor of zeros, which also occurs when
running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the output back to
uint8 on output; the second problem is resolved with PR #3018.
2024-03-20 10:58:25 -07:00
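
For reference, the uint8 DynamicQuantizeLinear parameters follow the ONNX spec's range computation; a sketch of that math (assumes a non-empty input with a nonzero range), which also shows why a zero point of 128 or more is exposed to the signed reinterpretation described above (zp - 256):

```
// scale = (max - min) / 255 with the range extended to include zero;
// zeroPoint = clamp(round(-min / scale), 0, 255). Read back as int8,
// zero points >= 128 come out as zp - 256.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct QuantParams {
  float scale;
  uint8_t zeroPoint;
};

QuantParams dynamicQuantizeLinear(const std::vector<float> &x) {
  auto [lo, hi] = std::minmax_element(x.begin(), x.end());
  float mn = std::min(*lo, 0.0f); // the quantized range must include zero
  float mx = std::max(*hi, 0.0f);
  float scale = (mx - mn) / 255.0f;
  float zp = std::round(-mn / scale);
  return {scale, static_cast<uint8_t>(std::clamp(zp, 0.0f, 255.0f))};
}
```
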
jinchen 9cf6c45a39
Add OnnxToTorch support for Compress op (#3025) 2024-03-20 17:12:08 +00:00
zjgarvey 7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 (#3041)
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
2024-03-19 13:35:05 -07:00
Xinan Jiang(姜曦楠) d8a52e82c2
[onnx] Fix onnx.cast cases between int32 and int64 (#2982)
Two modifications:
1. torch.int64 is enum value 4 in TORCH_DTYPE_TO_INT.
2. Add int32 support.
2024-03-15 17:14:09 +00:00
aldesilv 6fa21bd8b1
OnnxToTorch lower celu op (#2920) 2024-03-13 20:34:10 +05:30
Rob Suderman bd7f1baa42
[onnx] Fix expand operation for dynamic shape max (#3001)
If the broadcast shape is length-1 at a dim while the input dim is
dynamic (`?`), then we need to broadcast to the dynamic dim. This is
equivalent to taking the max of the two dimensions.
2024-03-08 16:23:07 -08:00
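
A sketch of the dimension rule this fix implements (dynamic sizes written as -1 here):

```
// With a static broadcast dim of 1 against a dynamic input dim, the result
// must track the runtime size; statically this is max(inputDim, expandDim).
#include <algorithm>
#include <cstdint>

constexpr int64_t kDynamic = -1;

int64_t expandedDim(int64_t inputDim, int64_t expandDim) {
  if (inputDim == kDynamic && expandDim == 1)
    return kDynamic; // stays dynamic; the runtime max picks the real size
  return std::max(inputDim, expandDim); // ordinary broadcast otherwise
}
```
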
Scott Todd 7b18646def
[onnx] Handle optional arguments in Clip op pattern. (#2976)
Spec: https://onnx.ai/onnx/operators/onnx__Clip.html
2024-03-07 17:25:14 +00:00
Rob Suderman 933db87a07
[onnx] Add support for constants of `i1`s (#2978)
`getRawBuffer` expects a densely packed vector of `i1` values; however,
`onnx` does not densely pack the values. This includes code to handle the
packing/unpacking.
2024-03-05 13:55:13 -08:00
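
A sketch of the packing involved (the bit order within a byte is an assumption here; the point is eight `i1` values per byte versus onnx's one per byte):

```
// Pack one-bool-per-byte (onnx layout) into a dense i1 bit buffer.
#include <cstdint>
#include <vector>

std::vector<uint8_t> packI1(const std::vector<uint8_t> &bytePerBool) {
  std::vector<uint8_t> packed((bytePerBool.size() + 7) / 8, 0);
  for (size_t i = 0; i < bytePerBool.size(); ++i)
    if (bytePerBool[i])
      packed[i / 8] |= uint8_t(1u << (i % 8)); // LSB-first (assumed)
  return packed;
}
```
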
Vivek Khandelwal d81747eadb
[MLIR][TORCH] Extend support for OnnxToLinalg lowering for Dropout and Div op (#2938)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/451,
https://github.com/nod-ai/SHARK-Turbine/issues/452
2024-02-27 11:02:05 +05:30
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic from importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import,
as there exists an onnx-to-torch equivalent. Dropping the import
special-casing and adding support for resource conversion substantially
increases test coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Vivek Khandelwal d6d1a173dc
[MLIR][Torch] Add OnnxToTorch and TorchToLinalg support for trig ops (#2903)
This commit adds the OnnxToTorch lowering for the cosh, acosh, asin, asinh,
and atanh ops.
This commit also adds the TorchToLinalg lowering for the acosh, asin, asinh,
and atanh ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-14 11:58:09 +05:30
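
The inverse-hyperbolic ops here come down to standard log/sqrt identities; a sketch verifying them against libm (the identities, not the emitted IR):

```
//   acosh(x) = ln(x + sqrt(x^2 - 1)),  x >= 1
//   asinh(x) = ln(x + sqrt(x^2 + 1))
//   atanh(x) = 0.5 * ln((1 + x) / (1 - x)),  |x| < 1
#include <cassert>
#include <cmath>

int main() {
  double x = 0.5;
  assert(std::abs(std::atanh(x) - 0.5 * std::log((1 + x) / (1 - x))) < 1e-12);
  assert(std::abs(std::asinh(x) - std::log(x + std::sqrt(x * x + 1))) < 1e-12);
  double y = 1.5; // acosh is defined for x >= 1
  assert(std::abs(std::acosh(y) - std::log(y + std::sqrt(y * y - 1))) < 1e-12);
  return 0;
}
```
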
Ashay Rane 21f070e95f
onnx: fix checks in TorchOnnxToTorch pass to match the ONNX spec (#2848)
This PR contains three commits to update the validation checks in the
ONNX -> Torch conversion pass for the AveragePool, Pad, and Slice operators:

> onnx: fix preconditions for lowering AveragePool ops
> 
> The `pads` attribute of the AveragePool operator specifies the value to
> pad at both the beginning as well as the end of the axis (see
> https://onnx.ai/onnx/operators/onnx__AveragePool.html#attributes), so
> the size of this attribute should be twice the rank of the input tensor.
> However, our TorchOnnxToTorch lowering bails out early, since it incorrectly
> compares the size of the pads attribute with the rank (not twice the rank)
> of the input tensor (see the sketch after this entry).
> 
> This patch fixes the code to match the spec and adds a lit test.

> onnx: allow optional constant value for Pad operator
> 
> The `constant_value` input of the onnx.Pad operator is optional (see
> https://onnx.ai/onnx/operators/onnx__Pad.html#inputs), but the existing
> logic for lowering the operator into the Torch dialect assumes that it
> is mandatory.
> 
> This patch makes the attribute optional and constructs a default value
> (a list of zeros the size of the input tensor) if the attribute was not
> specified.

> onnx: fix checks for axes and steps inputs of Slice operator
> 
> The ONNX spec for the Slice operator allows the `starts` and `ends`
> inputs to have fewer indices than the dimensions of the `data` tensor
> (see https://onnx.ai/onnx/operators/onnx__Slice.html), but our code
> expects these inputs to be as many as the `data` tensor's dimensions.
> 
> More precisely, the spec requires that the `starts` and `ends` inputs
> are only as long as the `axes` input, but since the `axes` input is
> optional, the default type for the `axes` input has to match the type
> for the `starts` and `ends` inputs. Moreover, the number of indices in
> the `steps` input also has to match those in the `axes` input (instead
> of matching the dimensions of the `data` input).
> 
> This patch fixes the checks in the TorchOnnxToTorch conversion so that
> they match the ONNX spec.
2024-02-07 21:19:27 -08:00
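
A sketch of the corrected AveragePool precondition from the first commit (names illustrative): `pads` carries a begin and an end value per padded axis, so its size is checked against twice the rank rather than the rank itself.

```
#include <cstdint>
#include <vector>

// One begin and one end pad entry per axis, hence 2 * rank; the old check
// compared against rank alone and bailed out on valid models.
bool padsAttrIsValid(const std::vector<int64_t> &pads, int64_t rank) {
  return static_cast<int64_t>(pads.size()) == 2 * rank;
}
```
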
Rob Suderman e3faef5224
[onnx] Convert `onnx.QLinearConv` to `torch` (#2851)
Leaning on the QDQ functionality in torch, we can support the QLinearConv
operation by piggybacking through `torch.Convolution`. This includes
some changes, such as allowing the `onnx` rewriter to run recursively.
Doing so allows `QLinearConv` to decompose to `onnx.Convolution`, which
is then lowered to `torch`.
2024-02-05 16:09:41 -08:00
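
A scalar sketch of the QDQ route described here (helper names and shapes illustrative): dequantize the operands, compute in float, then requantize with the output scale and zero point.

```
#include <algorithm>
#include <cmath>
#include <cstdint>

float dequant(uint8_t q, float scale, uint8_t zp) {
  return (int32_t(q) - zp) * scale;
}

uint8_t requant(float v, float scale, uint8_t zp) {
  float q = std::round(v / scale) + zp;
  return static_cast<uint8_t>(std::clamp(q, 0.0f, 255.0f));
}

// The core of a quantized multiply-accumulate, as in QLinearConv:
uint8_t qlinearMac(uint8_t x, uint8_t w, float xS, uint8_t xZp,
                   float wS, uint8_t wZp, float yS, uint8_t yZp) {
  return requant(dequant(x, xS, xZp) * dequant(w, wS, wZp), yS, yZp);
}
```
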
Rob Suderman cb52c4b3cc
[onnx] Fix `onnx-to-torch` lowering for flatten shape (#2834)
The existing `flatten` lowering did not define what the intermediate
shape was. This could result in failures to lower further to linalg, as
the intermediate shape was unknown. This adds a shape refinement section.
2024-02-05 14:23:46 -08:00
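
The intermediate shape being pinned down is the standard one for `onnx.Flatten`; a sketch assuming static dims:

```
// Flatten(axis) reshapes to 2-D: [prod(dims[0:axis]), prod(dims[axis:])].
#include <array>
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

std::array<int64_t, 2> flattenShape(const std::vector<int64_t> &dims,
                                    size_t axis) {
  auto mul = [](auto first, auto last) {
    return std::accumulate(first, last, int64_t{1}, std::multiplies<int64_t>{});
  };
  return {mul(dims.begin(), dims.begin() + axis),
          mul(dims.begin() + axis, dims.end())};
}

// flattenShape({2, 3, 4, 5}, 2) == {6, 20}
```
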
Gaurav Shukla f4562a8eaa
[ONNX] Fix the lowering of onnx.expand op (#2861)
Signed-off-by: Gaurav Shukla <gauravshukla789@gmail.com>
2024-02-05 23:46:58 +05:30
Xida Ren (Cedar) 24b8c8672a
[torch] Add folders for `torch.fill`, `torch.ones`, `torch.zeros` and `aten.getItem` (#2849)
This allows the CumSum op in OPT to get the constant it requires to be lowered to TMTensor.

---------

Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-02 10:46:33 -08:00
Rob Suderman 3500523f75
[onnx] Convert resources to denseattr for `onnx.constant` to `torch` (#2830)
`onnx` explicitly specifies that `raw_data` is stored in `little-endian`
layout. While converting to `torch`, we need to convert from a known endian
format to an internal format with a consistent layout. This means endianness
must be handled correctly during the import of `onnx.Constant`.

---------

Co-authored-by: Xida Ren (Cedar) <cedar.ren@gmail.com>
2024-01-31 11:40:53 -08:00
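
A sketch of the endianness-safe decode this implies (int32 case, hypothetical helper names): the little-endian bytes of `raw_data` are assembled explicitly rather than reinterpreted through host memory.

```
#include <cstdint>
#include <vector>

int32_t readLittleEndianI32(const uint8_t *p) {
  return static_cast<int32_t>(uint32_t(p[0]) | uint32_t(p[1]) << 8 |
                              uint32_t(p[2]) << 16 | uint32_t(p[3]) << 24);
}

std::vector<int32_t> decodeRawData(const std::vector<uint8_t> &raw) {
  std::vector<int32_t> out;
  for (size_t i = 0; i + 4 <= raw.size(); i += 4)
    out.push_back(readLittleEndianI32(raw.data() + i));
  return out;
}
```
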
Phaneesh Barwaria 4964977e85
[ONNX][MLIR] support constantOfShape op (#2747) 2024-01-26 09:36:39 -08:00
Chi_Liu 77ae56337d
[ONNX][MLIR] Add support for onnx.Exp op (#2792)
https://github.com/nod-ai/SHARK-Turbine/issues/312
2024-01-23 13:45:00 -08:00
James Newling dc056e58e6
[MLIR][TORCH] Add onnx.cast cases used by OPT-1.25M (#2787) 2024-01-23 21:06:25 +05:30
Ramiro Leal-Cavazos 5883ef0f21
Fix unused variable warnings (#2775) 2024-01-22 11:05:55 -08:00
Dave Liddell 2f4924015d
[onnx] Added flatten (#2760)
https://github.com/nod-ai/SHARK-Turbine/issues/328

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-19 16:18:16 -08:00
Rob Suderman b5387c0f29
[onnx] Lowering `onnx.dequantize_linear` to `torch` (#2759)
We can map the per-tensor version of the operation to the dequantize
operation by marking the input with the make-quantized-tensor component.
This introduces the `qint*` and `quint*` tensor types, which can be
lowered to the appropriate dequantization behavior during the
torch-to-linalg conversion.
2024-01-18 16:47:21 -08:00
Rob Suderman 197b3b475c
[onnx] Convert `onnx.constant` to `torch` literal tensor (#2748)
Handles the multiple cases of `onnx` constant values and converts them
to `torch` literal tensors. This can include splats with a single
integer or floating point value, a set of explicit integer values, or
an elements array attr of values.
2024-01-15 09:31:22 -08:00
Xida Ren (Cedar) aee1fca251
Minor typo fix in the not-implemented message for the exclusive and reverse attributes for cumsum (#2740) 2024-01-10 14:24:37 -08:00
kumardeepakamd 29569713f3
support for onnx.expand operator (#2729)
Maps onnx.expand to torch aten broadcast_to; three tests added.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-10 13:05:37 -08:00
Vivek Khandelwal 208ae35583
[MLIR][ONNX] Add OnnxToTorch support for DepthToSpace op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Vivek Khandelwal 4707d3bdc6
[MLIR][ONNX] Add OnnxToTorch support for Bernoulli and CastLike op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:24:06 +05:30
Vivek Khandelwal 35e8f86792
[MLIR][ONNX] Add OnnxToTorch support for Dropout and Elu op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:23:55 +05:30
Ben Vanik 4dd17f0b71
Fixing implicit double->float truncation warnings. (#2733)
Floating-point literals should use the correct type specifier.
2024-01-08 17:26:38 -05:00
Xida Ren (Cedar) 1778314620
Add basic cumsum; this doesn't support the exclusive and reverse attrs (#2717)
fixes #2711
2024-01-03 09:52:59 -08:00
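
For context, a 1-D sketch of cumsum semantics, including the `exclusive` and `reverse` attributes this commit leaves unimplemented:

```
// Inclusive forward cumsum is out[i] = x[0] + ... + x[i]; `exclusive`
// drops the current element, `reverse` accumulates from the end.
#include <vector>

std::vector<float> cumsum(const std::vector<float> &x, bool exclusive = false,
                          bool reverse = false) {
  std::vector<float> out(x.size());
  float acc = 0.0f;
  for (size_t k = 0; k < x.size(); ++k) {
    size_t i = reverse ? x.size() - 1 - k : k;
    if (exclusive) {
      out[i] = acc;
      acc += x[i];
    } else {
      acc += x[i];
      out[i] = acc;
    }
  }
  return out;
}

// cumsum({1,2,3}) == {1,3,6}; exclusive: {0,1,3}; reverse: {6,5,3}
```
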
Xida Ren (Cedar) 6847fc1fc6
Fix since-opset too high (#2701)
Addresses two of the ops from
https://github.com/llvm/torch-mlir/issues/2689

https://github.com/llvm/torch-mlir/issues/2700
2023-12-27 10:08:09 -08:00
Vivek Khandelwal 0849fd0a06
[MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 16:36:21 +05:30