zjgarvey
73b3065a94
[ONNX] Reduces Transpose Opset Version ( #3302 )
...
As mentioned in issue #3290 , the difference between onnx.Transpose in
versions 1 and 13 is minimal, so both can be supported with the same
conversion pattern.
2024-05-14 21:38:56 +05:30
NeverRaR
26b78285bf
[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op ( #3232 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/658
---------
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-14 15:55:39 +05:30
Archana Ramalingam
20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op ( #3201 )
...
This commit adds the OnnxToTorch support for ReduceLogSumExp op
2024-05-14 09:54:26 +05:30
Yuanqiang Liu
0b7cbf5e60
[Stablehlo] fix aten.randn's lowering with f32 element type ( #3329 )
2024-05-11 17:40:04 +08:00
Yuanqiang Liu
5f7cb9e253
[Stablehlo] lowering aten.randn & aten.normal_functional to mhlo.rng NORMAL ( #3328 )
...
* split lowering of uniform, randn, normal from Basic.cpp into Rng.cpp
2024-05-11 15:33:37 +08:00
Stella Laurenzo
00efec0b73
[linalg] Implement strict mode lowering for aten.view. ( #3319 )
...
* Enables assume_strict_symbolic_shapes on fx_importer imported
programs, indicating strict shape semantics.
* Reworks the view->reshape lowering to take advantage of strict mode
and do one of:
* Collapse to 0D
* Flatten/Unflatten when there is an inferred dim.
* Fallback to tensor.reshape
* Splits some test cases up and adds an attribute to control the old
pattern (so new corners can be tested in strict mode in isolation).
* Dynamic inferred mode needs upstream work to generalize expand_shape
(so that case is suppressed here).
* Deletes the assert from the existing tensor.reshape lowering if strict
shape mode is enabled (since the condition it is dynamically asserting
cannot happen).
2024-05-10 13:45:50 -07:00
Andreas Falkenberg
adafd51823
[onnx] Gridsampler addition of nearest mode ( #3320 )
...
Added nearest neighbor selection for onnx.Gridsampler
2024-05-10 11:42:10 -07:00
jinchen
4b24909427
Add attributes support for onnx cumsum op ( #3241 )
2024-05-11 02:09:01 +08:00
NeverRaR
1d4859699b
MaxPool1d lowering to linalg ( #3295 )
...
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
Angel Zhang
261074f594
[ONNX] Handle one-input case for Min ONNX operator ( #3326 )
...
This commit handles the one-input case for the "Min" ONNX operator. A
new unit test has also been added.
2024-05-10 22:04:03 +05:30
Angel Zhang
7c289d9522
[ONNX] Handle one-input case for `onnx.Max` operator ( #3325 )
...
This commit handles the one-input case for the "Max" ONNX operator. A
new unit test has also been added.
2024-05-10 08:58:46 -07:00
penguin_wwy
e0a87e543e
[NFC] Standardize the std::is_same compile-time expression ( #3321 )
2024-05-10 17:07:37 +08:00
penguin_wwy
afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op ( #3306 )
2024-05-10 02:20:06 +08:00
Aart Bik
a033bbfe6c
[torch-mlir][sparse] recognize to_dense primitive ( #3308 )
...
Also maps simply to sparse_tensor.convert;
the sparsity types do the rest!
2024-05-08 22:50:17 -07:00
Yuanqiang Liu
5213557b87
[Stablehlo] fix lowering gelu(x, tanh) ( #3307 )
...
* lowering gelu("none") to erf
* lowering gelu("tanh") to tanh
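For reference, the two lowerings correspond to the standard GELU definitions (the exact erf form and the tanh approximation, as in PyTorch's `approximate` modes):

```latex
% Exact GELU, lowered via erf:
\mathrm{GELU}(x) = x \cdot \Phi(x)
  = \frac{x}{2}\left(1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right)

% Tanh approximation:
\mathrm{GELU}_{\text{tanh}}(x)
  \approx \frac{x}{2}\left(1 + \tanh\!\left(\sqrt{\tfrac{2}{\pi}}\,\bigl(x + 0.044715\,x^{3}\bigr)\right)\right)
```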
2024-05-09 11:39:13 +08:00
penguin_wwy
0f0f57c960
[Linalg] Refactor compare scalar op ( #3294 )
2024-05-09 10:40:19 +08:00
aldesilv
ec6d7aa5d2
OnnxToTorch lowering resize op ( #3013 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from ONNX to linalg for bilinear and nearest resize, with
support for using scales or sizes to compute the resize shape. Uses the
half-pixel coordinate transform for bilinear mode and asymmetric for
nearest mode. See
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize . Added
two passes -- one for bilinear and the other for nearest.
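As a minimal sketch of the two coordinate transforms named above, following the ONNX Resize specification (the function names here are illustrative, not the pass's actual helpers):

```python
def half_pixel(x_resized: float, scale: float) -> float:
    # ONNX coordinate_transformation_mode="half_pixel" (used for bilinear mode)
    return (x_resized + 0.5) / scale - 0.5

def asymmetric(x_resized: float, scale: float) -> float:
    # ONNX coordinate_transformation_mode="asymmetric" (used for nearest mode)
    return x_resized / scale

# Upscaling by 2x: output pixel 3 samples from these input coordinates.
print(half_pixel(3, 2.0))   # 1.25
print(asymmetric(3, 2.0))   # 1.5
```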
2024-05-08 21:35:03 +00:00
Benoit Jacob
bce800a3f4
Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 ( #3300 )
...
Co-authored-by: Jacques Pienaar <jpienaar@google.com>
2024-05-08 14:43:06 -04:00
Jiawei Wu
346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics ( #3071 )
...
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None index in the indices argument.
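As a PyTorch-level sketch of what "no None index" means (this illustrates the equivalence, not the MLIR rewrite itself): an index list containing None can be rewritten so every axis has an explicit index tensor, with the None axis materialized as a broadcast arange.

```python
import torch

x = torch.arange(12).reshape(3, 4)
idx = torch.tensor([0, 2])
v = torch.zeros(2, 4, dtype=x.dtype)

# Form with an implicit None index (writes whole rows 0 and 2):
a = x.clone()
a[idx, :] = v

# "hacked_twin"-style form: every dimension gets an explicit index tensor,
# with the elided axis materialized as a broadcast arange.
rows = idx.unsqueeze(-1)             # shape (2, 1)
cols = torch.arange(4).unsqueeze(0)  # shape (1, 4)
b = x.clone()
b.index_put_((rows, cols), v)

assert torch.equal(a, b)
```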
2024-05-08 22:44:57 +08:00
Vinayak Dev
6f911ba3d7
[torch] Add OnnxToTorch lowering for `onnx.HammingWindow` ( #3283 )
...
Adds OnnxToTorch lowering for the `onnx.HammingWindow` op.
2024-05-06 10:21:45 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Rob Suderman
321b844df7
Revert hyperbolic trigonometric decompositions ( #3271 )
...
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Vinayak Dev
67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` ( #3276 )
...
Adds OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
common implementation between the window functions.
2024-05-03 12:04:57 -04:00
Archana Ramalingam
a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op ( #3188 )
...
This commit adds the OnnxToTorch support for ReduceSumSquare ops.
---------
Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal
0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op ( #3275 )
2024-05-02 09:00:24 -07:00
Prashant Kumar
8c48135a42
[linalg] Fix bug for conversion of complex dtype ( #3269 )
...
The conversion of complex type wasn't supported or checked; the support
and required tests were added.
Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
...
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
zjgarvey
72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose ( #3240 )
...
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that pytorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and leave only the tests here, which are of the form quantize
-> dequantize -> op.
2024-04-30 09:23:09 -07:00
Vinayak Dev
05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function ( #3181 )
...
Implements OnnxToTorch lowering for the BlackmanWindow Function.
2024-04-30 12:21:27 -04:00
Xinyu Yang
f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo ( #3259 )
...
1. Handle the case stride == None
2. Add avgpool3d, maxpool1d, and maxpool3d lowerings
2024-05-01 00:06:13 +08:00
jinchen
fbbad2d81e
Fix onnx atanh lowering ( #3264 )
...
IREE tests `test_atanh` and `test_atanh_example` passed
2024-04-30 00:50:08 -07:00
jinchen
bf04b53b07
Fix onnx asinh lowering ( #3263 )
...
IREE tests `test_asinh` and `test_asinh_example` passed
2024-04-30 00:49:57 -07:00
jinchen
fb499192df
Fix onnx acosh lowering ( #3262 )
...
IREE tests `test_acosh` and `test_acosh_example` passed
2024-04-30 00:49:44 -07:00
jinchen
aa471f1d96
Fix onnx cosh lowering ( #3254 )
...
IREE tests `test_cosh` and `test_cosh_example` passed
2024-04-30 00:49:29 -07:00
jinchen
b64c22cfc1
Fix onnx sinh lowering ( #3253 )
...
IREE tests `test_sinh` and `test_sinh_example` passed
2024-04-30 00:44:41 -07:00
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Stella Laurenzo
5d4b803914
[NFC reformat] Run pre-commit on all files and format misc.
...
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
penguin_wwy
6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa ( #3243 )
...
Like #3130 , this gradually replaces the deprecated code
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
...
[onnx] Support `onnx.OneHot` lowering to `torch`
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
Xinyu Yang
ac85338491
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcoshOp, AtenAtanhOp, Atan2Op ( #3233 )
2024-04-26 15:47:44 +08:00
penguin_wwy
122eb69a98
[stablehlo] add aten left/right shift op conversion support ( #3234 )
2024-04-26 09:20:49 +08:00
Andreas Falkenberg
cd33d8b011
[onnx] Update DefaultDomainGtoP.cpp gridsampler ( #3228 )
...
In ONNX the interpolation mode is called 'linear', whereas in PyTorch it
is called 'bilinear'. This led to everything other than 'bilinear' being
rejected; the mode check needed to be changed to 'linear'.
2024-04-25 18:07:05 -07:00
Archana Ramalingam
ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op ( #3229 )
...
This commit adds the OnnxToTorch support for ReduceLogSum op
2024-04-25 19:37:57 -04:00
Aart Bik
4361178caa
[torch-mlir][sparse] recognize sparse tensor conversion ( #3226 )
...
Sparse tensor conversions are represented by special aten operators.
This PR ensures the conversions are recognized (instead of failing the
full torch aten lowering to linalg).
2024-04-26 02:32:07 +08:00
Xinyu Yang
7030eacb76
[stablehlo] Support aten.any and aten.all lowering ( #3217 )
2024-04-25 11:15:52 +08:00
Avinash Sharma
678c03b762
Fix nan issue for fp16 torch.randn/randn_like in ConvertAtenUniformOp ( #3184 )
...
For ops that use ConvertAtenUniformOp (e.g. torch.randn/randn_like),
the fp16 datatype returns NaN values. Trying to lower [this
repro](https://gist.github.com/aviator19941/1c65e658241dea6906ca423f9abaee69 )
results in NaNs; this PR fixes the issue.
2024-04-24 12:28:08 +05:30
Xinyu Yang
e18bf42d0e
[stablehlo] Support ConstantPadNdOp in stablehlo ( #3211 )
...
as title
2024-04-24 14:15:11 +08:00
Phaneesh Barwaria
f77d88390a
[onnx] handle dynamic padSize tensor in onnx.Pad ( #3214 )
...
- Fix pad size to data_rank for dynamic paddingSize Tensor.
- This fix is in accordance with [input
specification](https://onnx.ai/onnx/operators/onnx__Pad.html#inputs ) for
onnx.Pad
- Impl will need to be updated for dynamic padSize when support for
`axes` is added.
2024-04-24 11:31:37 +08:00
Xinyu Yang
42b9eccdb3
[Stablehlo] Fix AtenSumDimIntListOp when dim==None ( #3216 )
...
as title
2024-04-24 11:25:46 +08:00