Vivek Khandelwal
b870729efe
[torch] Fix `onnx.MaxPool` lowering ( #3133 )
This commit fixes the onnx.MaxPool op lowering, which lacked support for the indices result.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-21 21:05:32 +05:30
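For reference, the indices result added here corresponds to PyTorch's `return_indices` output. A minimal sketch of the op-level semantics in plain PyTorch (not the lowering itself):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
# max_pool2d can return both the pooled values and the indices of the
# max elements; the second result is what the lowering previously dropped.
values, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
print(values.shape, indices.shape)  # torch.Size([1, 3, 4, 4]) for both
```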
zjgarvey
297c270980
onnx.Resize and aten._interpolate : allow n spatial dims. ( #3368 )
The old lowering only had logic for 2d inputs (i.e. images). This patch allows
interpolation for n spatial dims, which is required for some 3d vision
models such as onnx/models/pytorch-3dunet_vaiq_int8, which successfully
compiles and runs with this patch.
2024-05-20 13:35:27 -07:00
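As a plain-PyTorch illustration of the n-dimensional case (assuming trilinear mode for three spatial dims, as used by volumetric models like 3D U-Net):

```python
import torch
import torch.nn.functional as F

# 5-D input: batch, channel, and three spatial dims (D, H, W).
x = torch.randn(1, 1, 8, 16, 16)
y = F.interpolate(x, scale_factor=2, mode="trilinear", align_corners=False)
print(y.shape)  # torch.Size([1, 1, 16, 32, 32])
```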
lialan
99511cef82
Implement `onnx.Hardmax` lowering ( #3342 )
Co-authored-by: Ubuntu <xunli@wsno1.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
Co-authored-by: Hasekawa-Takumi <bewater.private476@passmail.net>
2024-05-20 20:56:24 +05:30
Wu Yuan
cc28d566ff
[Stablehlo] Support AtenTrilOp ( #3359 )
1. Lower aten.tril to stablehlo, composed from iota, select, and other ops (see the sketch below).
2. Add related e2e test cases.
2024-05-20 15:49:24 +08:00
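A rough sketch of the decomposition in plain PyTorch terms (iota becomes arange, select becomes where; the names here are illustrative, not the actual stablehlo ops emitted):

```python
import torch

def tril_via_iota_select(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    rows, cols = x.shape[-2], x.shape[-1]
    # "iota" along each of the two trailing dims
    row_idx = torch.arange(rows).unsqueeze(-1)  # shape (rows, 1)
    col_idx = torch.arange(cols).unsqueeze(0)   # shape (1, cols)
    # "select" step: keep elements on or below the shifted diagonal
    mask = col_idx <= row_idx + diagonal
    return torch.where(mask, x, torch.zeros((), dtype=x.dtype))

x = torch.randn(4, 4)
assert torch.equal(tril_via_iota_select(x), torch.tril(x))
```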
zjgarvey
6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering ( #3351 )
Addresses [Shark-Turbine #196](https://github.com/nod-ai/SHARK-TestSuite/issues/196)
Related tracker [Shark-Turbine #566](https://github.com/nod-ai/SHARK-Turbine/issues/566)
Related onnx.Resize issues [Shark-Turbine #616](https://github.com/nod-ai/SHARK-Turbine/issues/616)
2024-05-17 12:18:57 -07:00
Andrew Woloszyn
513d89c16d
Add support for the onnx.SequenceLength op. ( #3362 )
2024-05-17 12:17:43 -07:00
Andrew Woloszyn
72e38dcbbc
Add support for the onnx.SequenceConstruct op. ( #3316 )
2024-05-17 22:51:28 +05:30
Xinyu Yang
28193fd985
[Stablehlo] index type uses i64 ( #3354 )
2024-05-16 15:33:23 +08:00
Peiming Liu
ccb772cd0f
[sparse] propagate sparsity properly when decomposing torch operations. ( #3318 )
2024-05-15 10:09:27 -07:00
Yuanqiang Liu
5928f68e60
[Stablehlo] refactor amax, max, max.dim's lowering to stablehlo ( #3348 )
* Do not decompose `aten.amax` on the `stablehlo` backend, because it can
be lowered to `stablehlo.reduce` directly.
* Lower `aten.max.dim` to `stablehlo.reduce` applying `max` when
`AtenMaxDimOp.getIndices()` has no users; this is simpler.
2024-05-16 00:05:19 +08:00
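The distinction at the op level, in plain PyTorch (a sketch of the semantics, not the stablehlo output):

```python
import torch

x = torch.randn(3, 5)

# aten.amax: values only -- maps directly onto a single max-reduction.
vals = torch.amax(x, dim=1)

# aten.max.dim: returns (values, indices). When the indices result is
# unused, the lowering can fall back to the same plain max-reduction.
vals2, idxs = torch.max(x, dim=1)
assert torch.equal(vals, vals2)
```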
zjgarvey
73b3065a94
[ONNX] Reduces Transpose Opset Version ( #3302 )
As mentioned in issue #3290, the difference between onnx.Transpose in
versions 1 and 13 is minimal, so both can be supported with the same
conversion pattern.
2024-05-14 21:38:56 +05:30
NeverRaR
26b78285bf
[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op ( #3232 )
https://github.com/nod-ai/SHARK-Turbine/issues/658
---------
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-14 15:55:39 +05:30
Archana Ramalingam
20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op ( #3201 )
This commit adds OnnxToTorch support for the ReduceLogSumExp op.
2024-05-14 09:54:26 +05:30
Yuanqiang Liu
0b7cbf5e60
[Stablehlo] fix aten.randn's lowering with f32 element type ( #3329 )
2024-05-11 17:40:04 +08:00
Yuanqiang Liu
5f7cb9e253
[Stablehlo] lowering aten.randn & aten.normal_functional to mhlo.rng NORMAL ( #3328 )
* split lowering of uniform, randn, normal from Basic.cpp into Rng.cpp
2024-05-11 15:33:37 +08:00
Stella Laurenzo
00efec0b73
[linalg] Implement strict mode lowering for aten.view. ( #3319 )
* Enables assume_strict_symbolic_shapes on fx_importer imported
programs, indicating strict shape semantics.
* Reworks the view->reshape lowering to take advantage of strict mode
and do one of:
* Collapse to 0D
* Flatten/Unflatten when there is an inferred dim.
* Fallback to tensor.reshape
* Splits some test cases up and adds an attribute to control the old
pattern (so new corners can be tested in strict mode in isolation).
* Dynamic inferred mode needs upstream work to generalize expand_shape
(so that case is suppressed here).
* Deletes the assert from the existing tensor.reshape lowering if strict
shape mode is enabled (since the condition it is dynamically asserting
cannot happen).
2024-05-10 13:45:50 -07:00
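As a plain-PyTorch illustration of the cases the lowering distinguishes (a sketch of op semantics; the actual pass works on the IR):

```python
import torch

x = torch.randn(2, 3, 4)

# Inferred-dim case: expressible as flatten + unflatten in strict mode.
y = x.view(6, -1)            # -1 is the inferred dim
print(y.shape)               # torch.Size([6, 4])

# Collapse-to-0D case.
z = torch.randn(1, 1).view(())
print(z.shape)               # torch.Size([])

# General case with no inferred dim: falls back to tensor.reshape.
w = x.view(4, 3, 2)
```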
Andreas Falkenberg
adafd51823
[onnx] Gridsampler addition of nearest mode ( #3320 )
Added nearest-neighbor selection for onnx.Gridsampler.
2024-05-10 11:42:10 -07:00
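At the op level, this mirrors PyTorch's grid_sample mode selection (a sketch; the ONNX op's semantics are assumed to match the F.grid_sample analogue):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
# Normalized sampling grid in [-1, 1], shape (N, H_out, W_out, 2).
grid = torch.rand(1, 2, 2, 2) * 2 - 1
# The newly supported mode: pick the nearest input pixel
# instead of bilinearly interpolating.
y = F.grid_sample(x, grid, mode="nearest", align_corners=False)
print(y.shape)  # torch.Size([1, 1, 2, 2])
```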
jinchen
4b24909427
Add attributes support for onnx cumsum op ( #3241 )
2024-05-11 02:09:01 +08:00
NeverRaR
1d4859699b
MaxPool1d lowering to linalg ( #3295 )
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
Angel Zhang
261074f594
[ONNX] Handle one-input case for Min ONNX operator ( #3326 )
This commit handles the one-input case for the "Min" ONNX operator. A
new unit test has also been added.
2024-05-10 22:04:03 +05:30
Angel Zhang
7c289d9522
[ONNX] Handle one-input case for `onnx.Max` operator ( #3325 )
This commit handles the one-input case for the "Max" ONNX operator. A
new unit test has also been added.
2024-05-10 08:58:46 -07:00
penguin_wwy
e0a87e543e
[NFC] Standardize the std::is_same compile-time expression ( #3321 )
2024-05-10 17:07:37 +08:00
penguin_wwy
afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op ( #3306 )
2024-05-10 02:20:06 +08:00
Aart Bik
a033bbfe6c
[torch-mlir][sparse] recognize to_dense primitive ( #3308 )
This maps simply to sparse_tensor.convert; the sparsity types do the rest!
2024-05-08 22:50:17 -07:00
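The primitive being recognized, at the PyTorch level:

```python
import torch

# A sparse COO tensor; in torch-mlir the sparsity is carried by the
# tensor's encoding, so to_dense maps to a plain sparse_tensor.convert.
i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([3.0, 4.0])
s = torch.sparse_coo_tensor(i, v, (2, 2))
print(s.to_dense())
# tensor([[0., 3.],
#         [4., 0.]])
```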
Yuanqiang Liu
5213557b87
[Stablehlo] fix lowering gelu(x, tanh) ( #3307 )
* Lower gelu("none") to erf
* Lower gelu("tanh") to tanh
2024-05-09 11:39:13 +08:00
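For reference, the two approximations being distinguished; a numeric sketch of the standard formulas (which is what such lowerings typically implement, not necessarily the exact IR emitted here):

```python
import math
import torch

x = torch.linspace(-3, 3, 7)

# gelu("none"): exact form via erf.
gelu_none = 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))

# gelu("tanh"): tanh approximation.
gelu_tanh = 0.5 * x * (1.0 + torch.tanh(
    math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

assert torch.allclose(gelu_none, torch.nn.functional.gelu(x))
assert torch.allclose(gelu_tanh,
                      torch.nn.functional.gelu(x, approximate="tanh"))
```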
penguin_wwy
0f0f57c960
[Linalg] Refactor compare scalar op ( #3294 )
2024-05-09 10:40:19 +08:00
aldesilv
ec6d7aa5d2
OnnxToTorch lowering resize op ( #3013 )
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from ONNX to linalg for bilinear and nearest resize, with
support for using either scales or sizes to compute the resize shape. Uses
the half_pixel coordinate transform for bilinear mode and asymmetric for
nearest mode (see the formulas below). See
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize for
details. Added two passes -- one for bilinear and the other for nearest.
2024-05-08 21:35:03 +00:00
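The two coordinate transforms referenced above, as scalar formulas per the ONNX Resize spec (mapping an output coordinate back to input space):

```python
def half_pixel(x_out: float, scale: float) -> float:
    # Used here for bilinear mode.
    return (x_out + 0.5) / scale - 0.5

def asymmetric(x_out: float, scale: float) -> float:
    # Used here for nearest mode.
    return x_out / scale

# Upscaling a length-4 axis to length 8 (scale = 2):
print([half_pixel(i, 2.0) for i in range(4)])  # [-0.25, 0.25, 0.75, 1.25]
print([asymmetric(i, 2.0) for i in range(4)])  # [0.0, 0.5, 1.0, 1.5]
```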
Benoit Jacob
bce800a3f4
Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 ( #3300 )
Co-authored-by: Jacques Pienaar <jpienaar@google.com>
2024-05-08 14:43:06 -04:00
Jiawei Wu
346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics ( #3071 )
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None indices in the indices argument (see the sketch below).
2024-05-08 22:44:57 +08:00
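The semantic difference, in PyTorch terms: aten.index_put allows None entries in indices (meaning "every position along that dim"), while hacked_twin requires every dim to be explicitly indexed. A sketch:

```python
import torch

x = torch.zeros(3, 4)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 3])
vals = torch.tensor([5.0, 6.0])

# Loose form: a None index means "all rows", like x[:, cols] = vals.
loose = x.index_put((None, cols), vals)

# Strict (hacked_twin-style) form: every dim explicitly indexed,
# so no None entries remain after decomposition.
strict = x.index_put((rows, cols), vals)
print(strict[0, 1].item(), strict[2, 3].item())  # 5.0 6.0
```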
Vinayak Dev
6f911ba3d7
[torch] Add OnnxToTorch lowering for `onnx.HammingWindow` ( #3283 )
Adds OnnxToTorch lowering for the `onnx.HammingWindow` op.
2024-05-06 10:21:45 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Rob Suderman
321b844df7
Revert hyperbolic trigonometric decompositions ( #3271 )
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Vinayak Dev
67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` ( #3276 )
Adds OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
the implementation common to the window functions (see the sketch below).
2024-05-03 12:04:57 -04:00
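Both windows are instances of the same generalized cosine form, which is presumably what the factored-out implementation exploits; a numeric sketch (the `cosine_window` helper is illustrative, not the actual shared code):

```python
import math
import torch

def cosine_window(size: int, a0: float, periodic: bool = True) -> torch.Tensor:
    # Hann: a0 = 0.5, Hamming: a0 = 0.54 (ONNX defaults to periodic windows).
    n = torch.arange(size, dtype=torch.float32)
    denom = size if periodic else size - 1
    return a0 - (1.0 - a0) * torch.cos(2.0 * math.pi * n / denom)

assert torch.allclose(cosine_window(16, 0.5), torch.hann_window(16))
assert torch.allclose(cosine_window(16, 0.54), torch.hamming_window(16))
```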
Archana Ramalingam
a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op ( #3188 )
This commit adds OnnxToTorch support for the ReduceSumSquare op.
---------
Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal
0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op ( #3275 )
2024-05-02 09:00:24 -07:00
Prashant Kumar
8c48135a42
[linalg] Fix bug for conversion of complex dtype ( #3269 )
The conversion of complex type wasn't supported or checked; the support
and required tests were added.
Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp.
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
zjgarvey
72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose ( #3240 )
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that pytorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and only leave the tests here which are of the form quantize
-> dequantize -> op.
2024-04-30 09:23:09 -07:00
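The test pattern mentioned (fake-quantize the inputs, then run the float op), sketched in plain PyTorch; the shapes and scales here are arbitrary:

```python
import torch

def fake_quant(t: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    # quantize -> dequantize: introduces quantization error but keeps
    # the rest of the computation in float.
    q = torch.quantize_per_tensor(t, scale, zero_point, torch.qint8)
    return q.dequantize()

x = fake_quant(torch.randn(1, 2, 8, 8), scale=0.05, zero_point=0)
w = fake_quant(torch.randn(2, 3, 3, 3), scale=0.05, zero_point=0)
y = torch.nn.functional.conv_transpose2d(x, w)
print(y.shape)  # torch.Size([1, 3, 10, 10])
```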
Vinayak Dev
05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function ( #3181 )
Implements OnnxToTorch lowering for the BlackmanWindow function.
2024-04-30 12:21:27 -04:00
Xinyu Yang
f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo ( #3259 )
1. Handle the case stride == None (see note below).
2. Add avgpool3d, maxpool1d, and maxpool3d lowerings.
2024-05-01 00:06:13 +08:00
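On the stride == None case: in PyTorch pooling ops, an omitted stride defaults to the kernel size, which is the behavior the lowering has to reproduce. For example:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 6, 9)
# stride=None (omitted) means stride = kernel_size.
a = F.max_pool2d(x, kernel_size=3)
b = F.max_pool2d(x, kernel_size=3, stride=3)
assert torch.equal(a, b)
```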
jinchen
fbbad2d81e
Fix onnx atanh lowering ( #3264 )
IREE tests `test_atanh` and `test_atanh_example` now pass.
2024-04-30 00:50:08 -07:00
jinchen
bf04b53b07
Fix onnx asinh lowering ( #3263 )
IREE tests `test_asinh` and `test_asinh_example` now pass.
2024-04-30 00:49:57 -07:00
jinchen
fb499192df
Fix onnx acosh lowering ( #3262 )
IREE tests `test_acosh` and `test_acosh_example` now pass.
2024-04-30 00:49:44 -07:00
jinchen
aa471f1d96
Fix onnx cosh lowering ( #3254 )
IREE tests `test_cosh` and `test_cosh_example` now pass.
2024-04-30 00:49:29 -07:00
jinchen
b64c22cfc1
Fix onnx sinh lowering ( #3253 )
IREE tests `test_sinh` and `test_sinh_example` now pass.
2024-04-30 00:44:41 -07:00
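For reference, these ops relate through the standard exp/log identities, which is the usual basis for such decompositions (a numeric check, not necessarily the exact form the fixes emit):

```python
import torch

x = torch.linspace(0.1, 0.9, 5)

assert torch.allclose(torch.sinh(x), (torch.exp(x) - torch.exp(-x)) / 2)
assert torch.allclose(torch.cosh(x), (torch.exp(x) + torch.exp(-x)) / 2)
assert torch.allclose(torch.asinh(x), torch.log(x + torch.sqrt(x * x + 1)))
assert torch.allclose(torch.atanh(x), 0.5 * torch.log((1 + x) / (1 - x)))

y = torch.linspace(1.1, 3.0, 5)  # acosh needs inputs >= 1
assert torch.allclose(torch.acosh(y), torch.log(y + torch.sqrt(y * y - 1)))
```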
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Stella Laurenzo
5d4b803914
[NFC reformat] Run pre-commit on all files and format misc.
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
penguin_wwy
6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa ( #3243 )
Like #3130, gradually replace the deprecated code
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
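A sketch of the composition in PyTorch terms (one_hot along the last dim, transpose the new axis into place, then select on/off values; the `onnx_onehot` helper is illustrative, not the emitted IR):

```python
import torch
import torch.nn.functional as F

def onnx_onehot(indices, depth, on_value, off_value, axis=-1):
    hot = F.one_hot(indices, num_classes=depth)  # new axis at the end
    if axis != -1:
        hot = hot.movedim(-1, axis)              # "transpose" step
    # "where" step: replace 1/0 with the requested on/off values.
    return torch.where(hot.bool(), on_value, off_value)

idx = torch.tensor([0, 2])
print(onnx_onehot(idx, depth=3, on_value=torch.tensor(5.0),
                  off_value=torch.tensor(-1.0)))
# tensor([[ 5., -1., -1.],
#         [-1., -1.,  5.]])
```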