Jiawei Wu
346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics ( #3071 )
...
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None index in the indices argument.
2024-05-08 22:44:57 +08:00
Xinyu Yang
abef114c0c
[torch] emit aten.Softshrink and aten.Hardshrink ( #3248 )
...
as title
2024-05-08 15:20:45 +08:00
Vinayak Dev
6f911ba3d7
[torch] Add OnnxToTorch lowering for `onnx.HammingWindow` ( #3283 )
...
Adds OnnxToTorch lowering for the `onnx.HammingWindow` op.
2024-05-06 10:21:45 -07:00
Vivek Khandelwal
e60160d793
Revert "Decompose AtenNonzeroOp" ( #3289 )
...
Reverts llvm/torch-mlir#3281
2024-05-06 09:52:04 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Xida Ren (Cedar)
1af00e6040
Decompose AtenNonzeroOp ( #3281 )
...
This fixes some onnx lit tests not lowering to linalg in
https://github.com/nod-ai/SHARK-Turbine/issues/450
2024-05-05 21:59:25 +08:00
Rob Suderman
321b844df7
Revert hyperbolic trigonometric decompositions ( #3271 )
...
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Vinayak Dev
67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` ( #3276 )
...
Adds OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
common implementation between the window functions.
2024-05-03 12:04:57 -04:00
Archana Ramalingam
a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op ( #3188 )
...
This commit adds the OnnxToTorch support for ReduceSumSquare ops.
---------
Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal
0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op ( #3275 )
2024-05-02 09:00:24 -07:00
Ze Zhang
11cd7cd9e7
Folder and Canonicalizer for PrimsConvertElementTypeOp and AtenMaxPool2dWithIndicesOp ( #3272 )
...
While playing with TorchDynamo on ResNet18, I noticed the following issues:
- `prims.convert_element_type` can’t be canonicalized even if the input
and the output share the same type
- `aten.max_pool2d_with_indices` is always used instead of
`aten.max_pool2d`, even if the second returned output (indices) has no
user
This PR fixes the above issues by adding a folder to
PrimsConvertElementTypeOp and a canonicalizer to
AtenMaxPool2dWithIndicesOp.
Lit test:
`cmake --build build --target check-torch-mlir-all`
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-05-02 00:03:41 -07:00
Prashant Kumar
8c48135a42
[linalg] Fix bug for conversion of complex dtype ( #3269 )
...
The conversion of complex types wasn't supported or checked; the support
and required tests were added.
Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
...
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
Xida Ren (Cedar)
315dc6c3e3
[torch] `aten.eye` should use dynamic dims when no static dims are available ( #3202 )
...
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 17:41:03 +00:00
zjgarvey
72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose ( #3240 )
...
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that pytorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and only leave the tests here, which are of the form quantize
-> dequantize -> op.
2024-04-30 09:23:09 -07:00
Vinayak Dev
05f8b69bf6
[MLIR][TORCH] Add OnnxToTorch support for BlackmanWindow function ( #3181 )
...
Implements OnnxToTorch lowering for the BlackmanWindow Function.
2024-04-30 12:21:27 -04:00
Xinyu Yang
f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo ( #3259 )
...
1. Handle the case stride == None
2. Add avgpool3d, maxpool1d, and maxpool3d lowerings
2024-05-01 00:06:13 +08:00
jinchen
fbbad2d81e
Fix onnx atanh lowering ( #3264 )
...
iree tests `test_atanh` and `test_atanh_example` passed
2024-04-30 00:50:08 -07:00
jinchen
bf04b53b07
Fix onnx asinh lowering ( #3263 )
...
iree tests `test_asinh` and `test_asinh_example` passed
2024-04-30 00:49:57 -07:00
jinchen
fb499192df
Fix onnx acosh lowering ( #3262 )
...
iree tests `test_acosh` and `test_acosh_example` passed
2024-04-30 00:49:44 -07:00
jinchen
aa471f1d96
Fix onnx cosh lowering ( #3254 )
...
iree tests `test_cosh` and `test_cosh_example` passed
2024-04-30 00:49:29 -07:00
jinchen
b64c22cfc1
Fix onnx sinh lowering ( #3253 )
...
iree tests `test_sinh` and `test_sinh_example` passed
2024-04-30 00:44:41 -07:00
Rob Suderman
db6721084a
Integrate LLVM at llvm/llvm-project@593f6fdcb4 ( #3260 )
2024-04-29 12:01:40 -07:00
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Yuanqiang Liu
aed2cf3351
[Torch] emit aten.__contains__.str_list and add folder ( #3249 )
2024-04-29 10:51:17 +08:00
Xinyu Yang
5684dc0441
[Torch] emit aten.celu and decompose it ( #3247 )
...
CELU(x) = max(0, x) + min(0, α * (exp(x/α) - 1))
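The formula above can be sketched in scalar Python (a math-only illustration of the decomposition, not the actual torch-mlir rewrite):

```python
import math

def celu(x: float, alpha: float = 1.0) -> float:
    # CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

print(celu(2.0))             # -> 2.0 (identity for positive inputs)
print(round(celu(-1.0), 4))  # exp(-1) - 1 for negative inputs
```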
2024-04-28 17:23:40 +08:00
Yuanqiang Liu
46c0f3cad0
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) ( #3246 )
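The decomposition log_sigmoid(x) = log(sigmoid(x)) in scalar Python (illustrative only; a production kernel would use a numerically stable form for large negative x):

```python
import math

def log_sigmoid(x: float) -> float:
    # log(sigmoid(x)) = log(1 / (1 + exp(-x))) = -log(1 + exp(-x))
    return -math.log1p(math.exp(-x))

print(log_sigmoid(0.0))  # log(0.5), about -0.6931
```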
2024-04-28 11:47:43 +08:00
Stella Laurenzo
5d4b803914
[NFC reformat] Run pre-commit on all files and format misc.
...
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
penguin_wwy
6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa ( #3243 )
...
Like #3130, this gradually replaces the deprecated code:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Yuanqiang Liu
f173a06fa7
[Torch] emit aten.ne.str and add folder ( #3242 )
2024-04-28 00:58:50 +08:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
...
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
Xinyu Yang
ac85338491
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcoshOp, AtenAtanhOp, Atan2Op ( #3233 )
2024-04-26 15:47:44 +08:00
Yuanqiang Liu
634a796933
[Torch] fold aten.log ( #3223 )
2024-04-26 10:10:02 +08:00
penguin_wwy
122eb69a98
[stablehlo] add aten left/right shift op conversion support ( #3234 )
2024-04-26 09:20:49 +08:00
Andreas Falkenberg
cd33d8b011
[onnx] Update DefaultDomainGtoP.cpp gridsampler ( #3228 )
...
Gridsampler:
In ONNX the interpolation mode is called 'linear', whereas in PyTorch it
is called 'bilinear'. This meant that everything other than 'bilinear'
was rejected, so the lowering needed to map 'linear' accordingly.
2024-04-25 18:07:05 -07:00
Archana Ramalingam
ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op ( #3229 )
...
This commit adds the OnnxToTorch support for ReduceLogSum op
2024-04-25 19:37:57 -04:00
Aart Bik
2eac8a992f
[torch-mlir][sparse] sparse tensor dialect is a legal dialect ( #3227 )
2024-04-26 02:36:42 +08:00
Yuanqiang Liu
b0ba3def93
[Torch] support AtenScalarImplicitOp canonicalize with float ( #3231 )
2024-04-26 02:36:13 +08:00
Aart Bik
4361178caa
[torch-mlir][sparse] recognize sparse tensor conversion ( #3226 )
...
Sparse tensor conversions are represented by special aten operators.
This PR ensures the conversions are recognized (instead of failing the
full torch aten lowering to linalg).
2024-04-26 02:32:07 +08:00
Xinyu Yang
7030eacb76
[stablehlo] Support aten.any and aten.all lowering ( #3217 )
2024-04-25 11:15:52 +08:00
Avinash Sharma
678c03b762
Fix nan issue for fp16 torch.randn/randn_like in ConvertAtenUniformOp ( #3184 )
...
For ops that use ConvertAtenUniformOp (e.g. torch.randn/randn_like),
the fp16 datatype returns NaN values. Trying to lower [this
repro](https://gist.github.com/aviator19941/1c65e658241dea6906ca423f9abaee69 )
results in NaNs; this PR fixes the issue.
2024-04-24 12:28:08 +05:30
Yuanqiang Liu
fab2696489
[Torch] support aten.trunc ( #3219 )
...
decompose `trunc(x)` to `sign(x) * floor(abs(x))`
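The decomposition can be sketched in scalar Python (illustrative only, not the torch-mlir rewrite itself):

```python
import math

def trunc_decomposed(x: float) -> float:
    # trunc(x) = sign(x) * floor(abs(x))
    sign = (x > 0.0) - (x < 0.0)  # -1, 0, or 1
    return sign * math.floor(abs(x))

print(trunc_decomposed(2.7))   # -> 2.0
print(trunc_decomposed(-2.7))  # -> -2.0
```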
2024-04-24 14:32:33 +08:00
Xinyu Yang
e18bf42d0e
[stablehlo] Support ConstantPadNdOp in stablehlo ( #3211 )
...
as title
2024-04-24 14:15:11 +08:00
Phaneesh Barwaria
f77d88390a
[onnx] handle dynamic padSize tensor in onnx.Pad ( #3214 )
...
- Fix the pad size to data_rank for a dynamic paddingSize tensor.
- This fix is in accordance with [input
specification](https://onnx.ai/onnx/operators/onnx__Pad.html#inputs ) for
onnx.Pad
- Impl will need to be updated for dynamic padSize when support for
`axes` is added.
2024-04-24 11:31:37 +08:00
Xinyu Yang
42b9eccdb3
[Stablehlo] Fix AtenSumDimIntListOp when dim==None ( #3216 )
...
as title
2024-04-24 11:25:46 +08:00
Xinyu Yang
4da3d714cc
[Torch] Support AtenProdOp on linalg and stablehlo ( #3215 )
2024-04-24 11:14:04 +08:00
zjgarvey
a8ba865fca
[torch] Adds Quantization Support for `aten.relu` ( #3177 )
...
A choice was made to quantize the return type of Relu with a scale and
zero point copied from the input's quantization scheme. With this
choice, the torch-to-linalg conversion of quantized Relu essentially
computes max(input, zeroPoint) in the elementwise payload.
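A hedged Python sketch of that elementwise payload (the list-based representation and function name are illustrative, not the torch-to-linalg implementation):

```python
def quantized_relu(q_values, zero_point):
    """ReLU in the quantized integer domain: values at or below the zero
    point (i.e. dequantized values <= 0) clamp to the zero point; the
    output reuses the input's scale and zero point."""
    return [max(q, zero_point) for q in q_values]

# Unsigned 8-bit values with zero_point = 128
print(quantized_relu([120, 128, 200], 128))  # -> [128, 128, 200]
```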
2024-04-23 11:01:36 -07:00
jinchen
09d42044b4
Support select_last_index attribute of onnx argmin op ( #3212 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compiled, and the result values match, but there is a runtime
dtype mismatch between i and si integer types.
2024-04-23 10:43:38 -07:00
jinchen
61e6312c87
Support select_last_index attribute of onnx argmax op ( #3192 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compiled, but there is a runtime dtype mismatch between i and si
integer types.
2024-04-23 10:16:08 -07:00