Vivek Khandelwal
822d763308
[ONNX] Add OnnxToTorch lowering for Optional, OptionalGetElement op ( #3467 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-18 19:40:18 +05:30
Umang Yadav
59bade3376
[ONNX] Add missing "Abs" in GlobalLpPool ( #3460 )
...
Taking `abs` is required to mimic the same logic as onnx/onnxruntime.
Without `abs`, it would not produce correct results for negative values.
Reference code:
f5b6f6dc26/onnxruntime/core/providers/cpu/nn/pool_functors.h (L604)
375c161c67/onnx/reference/ops/op_lp_pool.py (L31)
2024-06-17 11:17:16 +05:30
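For context, a minimal NumPy sketch of the Lp pooling reduction with the `abs` in place, mirroring the reference code linked above (function name and call shape are illustrative, not taken from the patch):

```python
import numpy as np

def lp_reduce(window, p=2):
    # Lp pooling over one window: (sum |x|^p)^(1/p).
    # For odd p, dropping abs() lets negative inputs cancel terms (and the
    # fractional root of a negative sum is NaN), which is the bug this fixes.
    return np.power(np.sum(np.power(np.abs(window), p)), 1.0 / p)
```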
Manupa Karunaratne
d2b663ece7
Add onnx op LRN lowering ( #3432 )
...
This commit adds support for lowering the Onnx LRN op to aten.
2024-06-14 16:44:43 +00:00
Arham Khan
09c988046c
[ONNX] Add OnnxToTorch lowering for Onnx.NegativeLogLikelihoodLoss Op ( #3380 )
...
This implements the Onnx.NegativeLogLikelihoodLoss op using the
signature provided
[here](https://onnx.ai/onnx/operators/onnx__NegativeLogLikelihoodLoss.html)
by replacing it with an `NLLLossForward` op.
Additionally, I included a helper function `get_loss_reduction_enum` to
convert from a string `reduction` parameter to the corresponding
intended integer value since this is an operation that will be reused
for any loss function module. This differs from `get_reduction_enum` in
`TorchUpstream.cpp` which handles the `reduce` parameter from
`scatter_reduce` type operations.
2024-06-14 22:01:11 +05:30
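A minimal sketch of the conversion `get_loss_reduction_enum` performs, assuming the usual PyTorch reduction convention (none=0, mean=1, sum=2); the mapping values are an assumption, not copied from the patch:

```python
def get_loss_reduction_enum(reduction: str) -> int:
    # Assumed mapping following torch's Reduction enum: none=0, mean=1, sum=2.
    mapping = {"none": 0, "mean": 1, "sum": 2}
    if reduction not in mapping:
        raise ValueError(f"unsupported reduction: {reduction!r}")
    return mapping[reduction]
```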
Vivek Khandelwal
2ea2bc3948
[ONNX] Add OnnxToTorch Lowering for GroupNormalization op ( #3458 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-14 16:18:53 +00:00
Umang Yadav
04c6479350
[ONNX] Add onnx parser for LpPool operator ( #3449 )
...
Similar to https://github.com/llvm/torch-mlir/pull/3435
Solves https://github.com/nod-ai/SHARK-Turbine/issues/728
2024-06-14 21:41:18 +05:30
Vinayak Dev
39d882f7c9
[torch] Add OnnxToTorch lowering for the Col2Im op ( #3424 )
...
Adds OnnxToTorch lowering for the `onnx.Col2Im` op.
2024-06-13 08:42:06 +00:00
Surya Jasper
de7f058a0e
[MLIR][ONNX] Add OnnxToTorch support for MaxRoiPool Op ( #3395 )
...
This PR adds OnnxToTorch support for the MaxRoiPool op.
2024-06-13 10:46:14 +05:30
Umang Yadav
9b76a2e3eb
[ONNX] add onnx lowering for global lp pool operator ( #3435 )
...
Solves https://github.com/nod-ai/SHARK-Turbine/issues/727
Uses AvgPool to implement GlobalLpPool, similar to this:
https://github.com/onnx/onnx/blob/main/onnx/reference/ops/op_lp_pool.py
cc: @vivekkhandelwal1
2024-06-13 10:37:08 +05:30
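A minimal PyTorch sketch of the AvgPool-based idea described above (an illustration of the math only, not the generated IR):

```python
import torch
import torch.nn.functional as F

def global_lp_pool2d(x: torch.Tensor, p: int = 2) -> torch.Tensor:
    # avg_pool over the whole spatial extent yields mean(|x|^p); scaling by the
    # number of pooled elements recovers sum(|x|^p), then take the p-th root.
    n, c, h, w = x.shape
    mean_pow = F.avg_pool2d(x.abs().pow(p), kernel_size=(h, w))
    return mean_pow.mul(h * w).pow(1.0 / p)  # shape (n, c, 1, 1)
```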
zjgarvey
de28c8540b
[ONNX] add int16 quantization support ( #3446 )
...
There is currently no int16 quantization support in torch. This patch
adds a new mlir type to correspond to the missing "torch.qint16" type,
and enables lowering of quantization-related onnx ops using int16 types.
In follow-up patches, custom quantization logic for ops like
aten.matmul/aten.mm/aten.convolution may need to be revisited to allow
support for qint16. The passes in FuseQuantizedOps.cpp may also need
slight modifications.
2024-06-12 10:37:22 +05:30
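For reference, a generic affine-quantization sketch for the int16 case, following the usual QuantizeLinear/DequantizeLinear semantics; function names are illustrative, not from the patch:

```python
import numpy as np

def quantize_int16(x, scale, zero_point):
    # Round to the nearest integer, shift by the zero point, and saturate to
    # the signed 16-bit range that torch.qint16 is meant to represent.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -32768, 32767).astype(np.int16)

def dequantize_int16(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale
```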
Matthias Gehre
e07a0bfc54
onnx.resize: Add support for coordTfMode "half_pixel" ( #3441 )
...
half_pixel is also the default mode used by ONNX; see
https://onnx.ai/onnx/operators/onnx__Resize.html
2024-06-10 20:59:29 +02:00
Vivek Khandelwal
d35b6b412a
[ONNX] Add OnnxToTorch Lowering for Sequence Ops ( #3425 )
...
This commit adds the lowering for the SequenceAt, SequenceEmpty,
SequenceInsert, and SequenceErase ops.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-08 09:58:11 +05:30
Vivek Khandelwal
1a9c0a35a9
[Onnx] Add Onnx->Torch lowering for Onnx.Shrink Op ( #3385 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-07 22:47:27 +05:30
Suraj Sudhir
1c2778dd56
[ONNX] Conv op adds support for asymmetric padding. ( #3426 )
...
Supports asymmetric padding by applying a torch.nn.functional.pad to
the input before performing the convolution.
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-06-07 09:54:39 -07:00
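A minimal sketch of the pad-then-convolve approach in PyTorch terms; the assumed `pads` layout is the ONNX 2-D convention (top, left, bottom, right), and the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def conv2d_with_onnx_pads(x, weight, pads, bias=None, stride=1):
    # Assumed ONNX pads layout for a 2-D conv: (top, left, bottom, right).
    # F.pad on a 4-D tensor takes (w_left, w_right, h_top, h_bottom), so the
    # values are reordered before the explicit pad; the conv itself then runs
    # with zero padding.
    top, left, bottom, right = pads
    x = F.pad(x, (left, right, top, bottom))
    return F.conv2d(x, weight, bias=bias, stride=stride, padding=0)
```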
Vivek Khandelwal
35dd8c52cd
[ONNX] Add OnnxToTorch Lowering for MaxUnpool op ( #3413 )
...
This commit also adds the Torch declaration for aten.max_unpool2d and
aten.max_unpool3d op. The TorchToLinalg lowering for the same will be
added in a follow-up commit.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-04 21:09:53 +05:30
Xida Ren (Cedar)
11c3281a8a
Fix failing ReduceSum ONNX-to-linalg lowering lit test ( #3218 )
...
fixes https://github.com/nod-ai/SHARK-Turbine/issues/653
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-06-03 16:36:09 -04:00
Vivek Khandelwal
6382dbbcc0
[ONNX] Add OnnxToTorch lowering for SpaceToDepth op ( #3393 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-03 20:29:39 +05:30
Surya Jasper
fc100a117d
[MLIR][ONNX] Add OnnxToTorch support for Scatter Op ( #3400 )
...
This PR adds OnnxToTorch support for the Scatter op.
2024-05-31 07:36:48 +00:00
Vivek Khandelwal
d7b8f00d01
[ONNX] Add OnnxToTorch Lowering for LpNormalization op ( #3397 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-30 23:05:26 +05:30
Angel Zhang
52be4bdc18
[ONNX] Fix bugs for the `onnx.OneHot` operator ( #3334 )
...
This commit fixes the bugs for the `onnx.OneHot` operator by:
1) Converting negative indices to non-negative indices
2) Handling both `int` and `float` types for `off` and `on` values
3) Using the correct result type
It also includes a new unit test.
2024-05-22 08:32:00 -04:00
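A minimal sketch of the negative-index handling from item 1, following the ONNX OneHot convention that negative indices count back from `depth`:

```python
import numpy as np

def normalize_onehot_indices(indices, depth):
    # A negative index i selects position i + depth; others are left unchanged.
    indices = np.asarray(indices)
    return np.where(indices < 0, indices + depth, indices)
```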
RattataKing
fcf48872b3
[ONNX] Implement Softsign op ( #3373 )
2024-05-21 12:10:26 -07:00
lialan
99511cef82
Implement `onnx.Hardmax` lowering ( #3342 )
...
Co-authored-by: Ubuntu <xunli@wsno1.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
Co-authored-by: Hasekawa-Takumi <bewater.private476@passmail.net>
2024-05-20 20:56:24 +05:30
Andrew Woloszyn
513d89c16d
Add support for the onnx.SequenceLength op. ( #3362 )
2024-05-17 12:17:43 -07:00
Andrew Woloszyn
72e38dcbbc
Add support for the onnx.SequenceConstruct op. ( #3316 )
2024-05-17 22:51:28 +05:30
NeverRaR
26b78285bf
[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op ( #3232 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/658
---------
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-14 15:55:39 +05:30
Archana Ramalingam
20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op ( #3201 )
...
This commit adds OnnxToTorch support for the ReduceLogSumExp op.
2024-05-14 09:54:26 +05:30
Andreas Falkenberg
adafd51823
[onnx] Gridsampler addition of nearest mode ( #3320 )
...
Added nearest neighbor selection for onnx.Gridsampler
2024-05-10 11:42:10 -07:00
jinchen
4b24909427
Add attributes support for onnx cumsum op ( #3241 )
2024-05-11 02:09:01 +08:00
Angel Zhang
261074f594
[ONNX] Handle one-input case for Min ONNX operator ( #3326 )
...
This commit handles the one-input case for the "Min" ONNX operator. A
new unit test has also been added.
2024-05-10 22:04:03 +05:30
Angel Zhang
7c289d9522
[ONNX] Handle one-input case for `onnx.Max` operator ( #3325 )
...
This commit handles the one-input case for the "Max" ONNX operator. A
new unit test has also been added.
2024-05-10 08:58:46 -07:00
aldesilv
ec6d7aa5d2
OnnxToTorch lowering resize op ( #3013 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from ONNX to linalg for bilinear and nearest resize, with
support for using scales or sizes to get the resize shape. Uses the
half_pixel coordinate transform for bilinear mode and asymmetric for
nearest mode. See
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize. Added
two passes: one for bilinear and the other for nearest.
2024-05-08 21:35:03 +00:00
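The two coordinate transforms mentioned above map an output index back to an input coordinate; a minimal sketch of the formulas from the ONNX Resize spec:

```python
def half_pixel_to_input(x_out: float, scale: float) -> float:
    # half_pixel (used here for bilinear mode): pixel centers sit at +0.5.
    return (x_out + 0.5) / scale - 0.5

def asymmetric_to_input(x_out: float, scale: float) -> float:
    # asymmetric (used here for nearest mode): plain division by the scale.
    return x_out / scale
```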
Vinayak Dev
6f911ba3d7
[torch] Add OnnxToTorch lowering for `onnx.HammingWindow` ( #3283 )
...
Adds OnnxToTorch lowering for the `onnx.HammingWindow` op.
2024-05-06 10:21:45 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Rob Suderman
321b844df7
Revert hyperbolic trigonometric decompositions ( #3271 )
...
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Vinayak Dev
67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` ( #3276 )
...
Adds OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
common implementation between the window functions.
2024-05-03 12:04:57 -04:00
Archana Ramalingam
a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op ( #3188 )
...
This commit adds the OnnxToTorch support for ReduceSumSquare ops.
---------
Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal
0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op ( #3275 )
2024-05-02 09:00:24 -07:00
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
...
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp.
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
Vinayak Dev
05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function ( #3181 )
...
Implements OnnxToTorch lowering for the BlackmanWindow Function.
2024-04-30 12:21:27 -04:00
jinchen
fbbad2d81e
Fix onnx atanh lowering ( #3264 )
...
iree tests `test_atanh` and `test_atanh_example` passed
2024-04-30 00:50:08 -07:00
jinchen
bf04b53b07
Fix onnx asinh lowering ( #3263 )
...
iree tests `test_asinh` and `test_asinh_example` passed
2024-04-30 00:49:57 -07:00
jinchen
fb499192df
Fix onnx acosh lowering ( #3262 )
...
iree tests `test_acosh` and `test_acosh_example` passed
2024-04-30 00:49:44 -07:00
jinchen
aa471f1d96
Fix onnx cosh lowering ( #3254 )
...
iree tests `test_cosh` and `test_cosh_example` passed
2024-04-30 00:49:29 -07:00
jinchen
b64c22cfc1
Fix onnx sinh lowering ( #3253 )
...
iree tests `test_sinh` and `test_sinh_example` passed
2024-04-30 00:44:41 -07:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Stella Laurenzo
5d4b803914
[NFC reformat] Run pre-commit on all files and format misc.
...
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
Archana Ramalingam
ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op ( #3229 )
...
This commit adds OnnxToTorch support for the ReduceLogSum op.
2024-04-25 19:37:57 -04:00
jinchen
09d42044b4
Support select_last_index attribute of onnx argmin op ( #3212 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compiled and the result values match, but there is a runtime issue:
a dtype mismatch between `i` and `si` integer types.
2024-04-23 10:43:38 -07:00
jinchen
61e6312c87
Support select_last_index attribute of onnx argmax op ( #3192 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compiled, but there is a runtime issue: a dtype mismatch between
`i` and `si` integer types.
2024-04-23 10:16:08 -07:00
jinchen
ddb29c2c02
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` ( #3179 )
...
All e2e IREE tests compiled, but they fail at runtime with a dtype
mismatch like the following:
```
expected:
1x1x2x2xsi32=[[[12 16][24 28]]]
actual:
1x1x2x2xi32=[[[12 16][24 28]]]
```
2024-04-23 09:42:02 -07:00