Commit Graph

77 Commits (d77bab37d1473dc48340e4807fd382d64c3cd5eb)

Author SHA1 Message Date
Vivek Khandelwal d35b6b412a
[ONNX] Add OnnxToTorch Lowering for Sequence Ops (#3425)
This commit adds the lowering for the SequenceAt, SequenceEmpty,
SequenceInsert, and SequenceErase ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-08 09:58:11 +05:30
aldesilv f794582b18
Add resize nearest modes round_prefer_floor, round_prefer_ceil, ceil (#3421) 2024-06-07 14:04:11 -05:00
Vivek Khandelwal 1a9c0a35a9
[Onnx] Add Onnx->Torch lowering for Onnx.Shrink Op (#3385)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-07 22:47:27 +05:30
Vivek Khandelwal 6382dbbcc0
[ONNX] Add OnnxToTorch lowering for SpaceToDepth op (#3393)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-03 20:29:39 +05:30
Rob Suderman 617b00b983
[NFC] Fix member cast change to global for landing collision (#3407)
A PR landed during the move away from a deprecated cast function. Updated
the corresponding lines so they pass.
2024-05-31 17:31:24 +00:00
Surya Jasper fc100a117d
[MLIR][ONNX] Add OnnxToTorch support for Scatter Op (#3400)
This PR adds OnnxToTorch support for the Scatter op.
2024-05-31 07:36:48 +00:00
Rob Suderman afca88a058
[NFC] Change to *cast instead of .*cast variants (#3405)
Member casts have been deprecated. This changes a number of the member
cast calls over to the global templated variants to remove deprecation
warnings.
2024-05-30 23:45:13 -07:00
zjgarvey 074098d20c
Modifies onnx resize lowering to fix numerical issues (#3381)
Updates:

- unsupported coordinate transformation modes now report a match failure.
- fixes a bug that was introduced in the last patch for resize (my
bad...)
- uses the actual x and y coordinates for computing weights in bilinear
interpolation, rather than eps-modified values (see the sketch after this entry)
- slightly simplifies the bilinear interpolation payload for readability
and performance
- passes coordinate transformation mode information from an onnx.Resize
op to the mode string for the aten._interpolate op. This allows us to
perform custom logic in the torch->linalg lowering to support
onnx.Resize options without losing the default behaviors of the
interpolate op.
2024-05-30 20:34:37 -04:00
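A minimal standalone sketch of the bilinear weight computation described above, assuming a half_pixel coordinate transform and a simple clamp at the borders (plain C++ illustrating the arithmetic, not the actual torch-mlir payload; all names here are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Neighboring input indices and their bilinear weights along one axis.
struct AxisSample {
  int64_t lo, hi;   // adjacent input indices
  double wLo, wHi;  // interpolation weights, wLo + wHi == 1
};

AxisSample bilinearAxisSample(int64_t outIdx, double scale, int64_t inSize) {
  // Map the output index back into input space (half_pixel transform),
  // then weight the two neighbors by their distance to the actual
  // coordinate rather than an eps-modified one.
  double x = (outIdx + 0.5) / scale - 0.5;
  double xc = std::clamp(x, 0.0, double(inSize - 1));
  int64_t lo = int64_t(std::floor(xc));
  int64_t hi = std::min(lo + 1, inSize - 1);
  double wHi = xc - double(lo);
  return {lo, hi, 1.0 - wHi, wHi};
}
```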
zjgarvey 27169dcda9
Replace some deprecated uses of cast (#3343)
Contributing towards #3299
2024-05-23 09:01:47 -07:00
RattataKing fcf48872b3
[ONNX] Implement Softsign op (#3373) 2024-05-21 12:10:26 -07:00
zjgarvey 297c270980
onnx.Resize and aten._interpolate : allow n spatial dims. (#3368)
The old lowering only had logic for 2d (i.e. images). This patch allows
interpolation for n spatial dims, which is required for some 3d vision
models such as

- onnx/models/pytorch-3dunet_vaiq_int8

which successfully compiles and runs with this patch.
2024-05-20 13:35:27 -07:00
Andrew Woloszyn 513d89c16d
Add support for the onnx.SequenceLength op. (#3362) 2024-05-17 12:17:43 -07:00
Andrew Woloszyn 72e38dcbbc
Add support for the onnx.SequenceConstruct op. (#3316) 2024-05-17 22:51:28 +05:30
zjgarvey 73b3065a94
[ONNX] Reduces Transpose Opset Version (#3302)
As mentioned in issue #3290, the difference between onnx.Transpose in
versions 1 and 13 is minimal, so both versions can be supported with the
same conversion pattern.
2024-05-14 21:38:56 +05:30
Archana Ramalingam 20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op (#3201)
This commit adds OnnxToTorch support for the ReduceLogSumExp op.
2024-05-14 09:54:26 +05:30
aldesilv ec6d7aa5d2
OnnxToTorch lowering resize op (#3013)
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from ONNX to linalg for bilinear and nearest resize, with
support for using either scales or sizes to compute the resize shape. Uses
the half_pixel coordinate transform for bilinear mode and asymmetric for
nearest mode; see
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize and the
sketch after this entry. Added two passes -- one for bilinear and the other
for nearest.
2024-05-08 21:35:03 +00:00
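The two coordinate transforms mentioned above differ only in how an output index is mapped back into input space; nearest mode then rounds the mapped coordinate to a single input index. A small illustrative sketch (plain C++; the function names are mine, and the round_prefer_floor nearest rounding is an assumption):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// half_pixel: treats samples as pixel centers (used for bilinear mode here).
double halfPixelToInput(int64_t outIdx, double scale) {
  return (outIdx + 0.5) / scale - 0.5;
}

// asymmetric: plain rescale from the origin (used for nearest mode here).
double asymmetricToInput(int64_t outIdx, double scale) {
  return double(outIdx) / scale;
}

// Nearest mode picks one input index from the mapped coordinate;
// ceil(x - 0.5) rounds to nearest with ties going toward the floor.
int64_t nearestIndex(double x, int64_t inSize) {
  int64_t idx = int64_t(std::ceil(x - 0.5));
  return std::clamp<int64_t>(idx, 0, inSize - 1);
}
```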
Vivek Khandelwal 17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op (#3278)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Rob Suderman 321b844df7
Revert hyperbolic trigonometric decompositions (#3271)
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Archana Ramalingam a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op (#3188)
This commit adds OnnxToTorch support for the ReduceSumSquare op.

---------

Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal 0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op (#3275) 2024-05-02 09:00:24 -07:00
Vinayak Dev 05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function (#3181)
Implements OnnxToTorch lowering for the BlackmanWindow Function.
2024-04-30 12:21:27 -04:00
jinchen b64c22cfc1
Fix onnx sinh lowering (#3253)
IREE tests `test_sinh` and `test_sinh_example` passed.
2024-04-30 00:44:41 -07:00
Vivek Khandelwal b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op (#3221)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
penguin_wwy 6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
Like #3130, gradually replace the deprecated code

https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Archana Ramalingam ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op (#3229)
This commit adds OnnxToTorch support for the ReduceLogSum op.
2024-04-25 19:37:57 -04:00
Vinayak Dev cff2f084d4
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` (#3175)
Adds OnnxToTorch lowering for the ReduceL2 op.
2024-04-23 02:03:05 -04:00
Vivek Khandelwal 3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops (#3193)
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike ops.
2024-04-22 22:28:07 +05:30
Vivek Khandelwal 6abc7371c8
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsqueeze op (#2991)
This commit also cleans up the OnnxToTorch lowering for the Squeeze and
Unsqueeze ops and adds support for handling edge cases.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-22 08:52:42 +00:00
Rob Suderman 8222637159
[onnx] Extend op version number of `onnx.ScatterElements` (#3195)
The version number was set too high. Lowering it to support more cases
allows more tests to pass.

Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:32:18 -04:00
Rob Suderman 733cace1df
[onnx] Fix `onnx.split` by directly handling slicing (#3194)
The previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and performs
slicing directly rather than leveraging `aten.split.tensor`. Because `onnx`
supports a fixed number of splits, the slice size is more easily computable
when lowering to `aten` directly than when deferring to `aten.split.tensor`
(see the sketch after this entry).

---------

Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
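Because this form of `onnx.Split` produces a fixed number of equal outputs, the slice size along the split axis is a simple division and each output becomes an explicit slice. A rough standalone sketch of that computation (plain C++; illustrative only, not the conversion pattern itself):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// For a dimension of size dimSize split into numOutputs equal pieces,
// return the [begin, end) range of each output along the split axis.
// Note that slice_size (dimSize / numOutputs) is not num_outputs.
std::vector<std::pair<int64_t, int64_t>> splitRanges(int64_t dimSize,
                                                     int64_t numOutputs) {
  int64_t sliceSize = dimSize / numOutputs;
  std::vector<std::pair<int64_t, int64_t>> ranges;
  for (int64_t i = 0; i < numOutputs; ++i)
    ranges.emplace_back(i * sliceSize, (i + 1) * sliceSize);
  return ranges;
}
```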
Vinayak Dev a0232e9ebd
[MLIR][TORCH] Add OnnxToTorch lowering for ReduceL1 Op (#3146)
Adds OnnxToTorch lowering for the ReduceL1 op.
2024-04-16 12:24:46 +05:30
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer the functional style, as the method style is deprecated
(see the sketch after this entry):
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
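The functional style referred to above replaces the deprecated member-style casts on MLIR types and attributes with the free-function templates. A small before/after illustration (assuming a value `ty` of type `mlir::Type`; a sketch only, not code from this repository):

```cpp
#include "mlir/IR/BuiltinTypes.h"

void castStyles(mlir::Type ty) {
  // Deprecated member style:
  //   auto tensorTy = ty.dyn_cast<mlir::RankedTensorType>();
  //   bool isFloat = ty.isa<mlir::FloatType>();
  // Preferred functional style:
  auto tensorTy = mlir::dyn_cast<mlir::RankedTensorType>(ty);
  bool isFloat = mlir::isa<mlir::FloatType>(ty);
  (void)tensorTy;
  (void)isFloat;
}
```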
Xida Ren (Cedar) dd967eb199
[ONNX] Support onnx.LSTM (#2969)
This PR only adds a lit test. In lieu of an e2e test, https://github.com/nod-ai/SHARK-TestSuite/pull/142 made sure that the lowering works and the numbers check out.

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-08 12:23:33 -07:00
Vivek Khandelwal af54d27820
[MLIR][TORCH] Fix Onnx.TopK lowering (#3103)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 22:12:48 +05:30
Vivek Khandelwal ce7d4f1660
[MLIR][TORCH] Fix Onnx.ReduceSum lowering for failing e2e tests (#3095)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 09:57:19 +05:30
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars, this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to decouple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
2024-04-02 16:19:57 -07:00
Vivek Khandelwal d1f770c620
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for ReduceMean op (#3008)
This commit also cleans up the OnnxToTorch lowering for the ReduceMean
op and adds the support for handling edge cases.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-02 16:54:04 +05:30
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dim in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts and
supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
Vivek Khandelwal 9ae33e482e
[MLIR][TORCH] Add OnnxToTorch lowering for ops (#3049)
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-25 20:29:07 +05:30
Rob Suderman 8fb28661f9
[onnx] Fix onnx.ReduceMean lowering (#3002)
Reduce mean lowerings did not successfully lower to `linalg` via `torch`.
There were two separate paths that could be consolidated to a single
simpler pass. This resulted in a significant improvement in test
coverage.
2024-03-11 11:32:53 -07:00
Rob Suderman c15f1a2bd2
[onnx] Adding lowering for `onnx.Size` operation (#2985)
We can support `onnx.Size` by requesting the size of each dimension and
taking the product of the results, then packing it into a tensor (see the
sketch after this entry).

---------

Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-06 17:01:05 -08:00
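Conceptually, the `onnx.Size` lowering above is just a running product over the per-dimension sizes, packed into a scalar tensor. A plain C++ illustration of that arithmetic (not the IR-building code itself):

```cpp
#include <cstdint>
#include <vector>

// onnx.Size is the total element count: the product of all dimension sizes.
// In the lowering each size is requested individually (so dynamic dims are
// handled) and the running product is accumulated with scalar multiplies.
int64_t tensorSize(const std::vector<int64_t> &dims) {
  int64_t size = 1;
  for (int64_t d : dims)
    size *= d;
  return size;
}
```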
Rob Suderman a78659742a
[onnx] Migrate `onnx.ReduceMax` to match `onnx.ReduceMin` (#2981)
This mostly copy-pastes the reduce minimum implementation to reduce max
to improve test coverage. We also improve the aten lowering for min/max
dim for unsigned types.
2024-03-06 16:48:21 -08:00
Chi_Liu 09875fabd1
[MLIR][ONNX] Add ONNX ReduceProd support (#2943)
Alternative to https://github.com/llvm/torch-mlir/pull/2908

Fix https://github.com/nod-ai/SHARK-Turbine/issues/353
2024-03-04 11:07:03 -08:00
Vivek Khandelwal 579ac8b666
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for sub and sum op (#2954)
This commit adds support for scalar conversion to byte.
This commit also fixes the OnnxToLinalg lowering issue for Onnx.Sub and
Onnx.Sum op.
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/466 
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/467

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-29 21:48:46 +05:30
Rob Suderman e48fe45886
[onnx] Import `onnx` import to pass remaining tests (#2951)
Finish supporting importing the vast majority of `onnx` operations. This
includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Rob Suderman 4a7a7d76f8
[onnx] Fix ReduceMean lowering to torch (#2956)
Torch lowering only supported the most recent version. Refactored the
lowering to more easily handle default values and optional operands /
attributes.
2024-02-27 22:48:07 -08:00
Rob Suderman cea51897a5
[onnx] Simplify onnx.slice lowering (#2919)
The Onnx slice lowering used arange needlessly instead of directly
constructing the constant dimension values. This made lowerings to
linalg struggle, as multiple folders were required to recover what is a
constant index value.
2024-02-19 10:26:29 -08:00
Rob Suderman d65925a8b4
[onnx] Fix `onnx.sigmoid` for integer inputs/outputs (#2914)
Sample compilation crashes due to sigmoid with integer inputs/outputs.
This fix avoids the crash but still produces an error.
2024-02-16 13:35:25 -08:00
Rob Suderman 468c533942
[onnx] Fix crash when negative transpose values exist (#2915)
We are crashing due to indexing into a negative shape. Updated the
lowering to avoid the crash.
2024-02-16 16:04:47 -05:00
Ashay Rane 21f070e95f
onnx: fix checks in TorchOnnxToTorch pass to match the ONNX spec (#2848)
This PR contains three commits to update the validation checks in the
ONNX -> Torch conversion pass for the AveragePool, Pad, and Slice operators
(see the sketch after this entry):

> onnx: fix preconditions for lowering AveragePool ops
> 
> The `pads` attribute of the AveragePool operator specifies the value to
> pad at both the beginning as well as the end of the axis (see
> https://onnx.ai/onnx/operators/onnx__AveragePool.html#attributes), so
> the size of this attribute should be twice the rank of the input tensor.
> However, our TorchOnnxToTorch conversion bails out early since it incorrectly
> compares the pads attribute with the rank (not twice the rank) of the
> input tensor.
> 
> This patch fixes the code to match the spec and adds a lit test.

> onnx: allow optional constant value for Pad operator
> 
> The `constant_value` input of the onnx.Pad operator is optional (see
> https://onnx.ai/onnx/operators/onnx__Pad.html#inputs), but the existing
> logic for lowering the operator into the Torch dialect assumes that it
> is mandatory.
> 
> This patch makes this input optional and constructs a default value
> (a list of zeros the size of the input tensor) if it was not
> specified.

> onnx: fix checks for axes and steps inputs of Slice operator
> 
> The ONNX Spec for the Slice operator allows the `starts` and `ends`
> inputs to have fewer indices than the dimensions of the `data` tensor
> (see https://onnx.ai/onnx/operators/onnx__Slice.html), but our code
> expects these inputs to be as many as the `data` tensor's dimensions.
> 
> More precisely, the spec requires that the `starts` and `ends` inputs
> are only as long as the `axes` input, but since the `axes` input is
> optional, the default type for the `axes` input has to match the type
> for the `starts` and `ends` inputs. Moreover, the number of indices in
> the `steps` input also has to match those in the `axes` input (instead
> of matching the dimensions of the `data` input).
> 
> This patch fixes the checks in the TorchOnnxToTorch conversion so that
> they match the ONNX spec.
2024-02-07 21:19:27 -08:00
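A hedged, standalone summary of the three corrected checks described above (plain C++ over plain vectors; the real logic operates on ONNX operands and attributes inside the TorchOnnxToTorch patterns, so the names and shapes here are illustrative):

```cpp
#include <cstdint>
#include <vector>

// AveragePool: `pads` carries a begin and an end padding per padded axis,
// so its length must be twice the rank it is checked against, not the rank.
bool averagePoolPadsValid(size_t numPads, int64_t rank) {
  return numPads == size_t(2 * rank);
}

// Pad: `constant_value` is optional; construct a default of zeros when it
// is absent instead of requiring it.
std::vector<int64_t> padValueOrDefault(const std::vector<int64_t> *given,
                                       size_t size) {
  return given ? *given : std::vector<int64_t>(size, 0);
}

// Slice: `starts`, `ends`, and `steps` must be as long as `axes`
// (and `axes` itself defaults to [0, 1, ..., rank-1] when omitted).
bool sliceInputsValid(size_t starts, size_t ends, size_t steps, size_t axes) {
  return starts == axes && ends == axes && steps == axes;
}
```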