zjgarvey
6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering ( #3351 )
...
Addresses [Shark-Turbine #196](https://github.com/nod-ai/SHARK-TestSuite/issues/196)
Related tracker [Shark-Turbine #566](https://github.com/nod-ai/SHARK-Turbine/issues/566)
Related onnx.Resize issues [Shark-Turbine #616](https://github.com/nod-ai/SHARK-Turbine/issues/616)
2024-05-17 12:18:57 -07:00
Suraj Sudhir
cba91a9b96
[ONNX][TOSA] Adds ONNX to TOSA e2e tests ( #3358 )
...
- Refactors OnnxBackend to be generic and consume any Torch backend.
---------
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-16 21:44:26 -07:00
Xinyu Yang
7faba75696
[Torch] Decompose AtenMaskedScatterOp ( #3353 )
...
Co-authored-by: Yuanqiang Liu <liuyuanqiang.yqliu@bytedance.com>
2024-05-16 15:27:25 +08:00
Suraj Sudhir
0ca88028cd
[FxImporter][TOSA] Enable FxImporter to TOSA e2e tests ( #3349 )
...
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-15 14:37:30 -07:00
NeverRaR
1d4859699b
MaxPool1d lowering to linalg ( #3295 )
...
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
Vivek Khandelwal
10db310460
build: manually update PyTorch version ( #3291 )
...
Set PyTorch and TorchVision version to nightly release 2024-05-05.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-10 21:45:06 +05:30
penguin_wwy
afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op ( #3306 )
2024-05-10 02:20:06 +08:00
Jiawei Wu
346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics ( #3071 )
...
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None indices in the indices argument.
2024-05-08 22:44:57 +08:00
Xinyu Yang
abef114c0c
[torch] emit aten.Softshrink and aten.Hardshrink ( #3248 )
...
As titled.
2024-05-08 15:20:45 +08:00
zjgarvey
9be6877c22
Temporarily remove QuantizedMLP_basic ( #3301 )
...
See issue #3298
2024-05-07 14:32:13 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Ze Zhang
11cd7cd9e7
Folder and Canonicalizer for PrimsConvertElementTypeOp and AtenMaxPool2dWithIndicesOp ( #3272 )
...
While playing with TorchDynamo on ResNet18, I noticed the following issues:
- `prims.convert_element_type` can’t be canonicalized even if the input
and the output share the same type
- `aten.max_pool2d_with_indices` is always used instead of
`aten.max_pool2d`, even if the second returned output (indices) has no
users
This PR fixes the above issues by adding a folder to
PrimsConvertElementTypeOp and a canonicalizer to
AtenMaxPool2dWithIndicesOp.
Lit test:
`cmake --build build --target check-torch-mlir-all`
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-05-02 00:03:41 -07:00
Prashant Kumar
8c48135a42
[linalg] Fix bug for conversion of complex dtype ( #3269 )
...
The conversion of complex dtypes wasn't supported or checked; this adds
the support and the required tests.
Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
...
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
zjgarvey
72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose ( #3240 )
...
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that PyTorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and leave only the tests here that are of the form quantize
-> dequantize -> op.
2024-04-30 09:23:09 -07:00
Xinyu Yang
f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo ( #3259 )
...
1. Handle the case stride == None (see the sketch below).
2. Add avgpool3d, maxpool1d, and maxpool3d lowerings.
2024-05-01 00:06:13 +08:00
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Xinyu Yang
5684dc0441
[Torch] emit aten.celu and decompose it ( #3247 )
...
CELU(x) = max(0, x) + min(0, α * (exp(x/α) − 1))
2024-04-28 17:23:40 +08:00
Yuanqiang Liu
46c0f3cad0
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) ( #3246 )
2024-04-28 11:47:43 +08:00
Stella Laurenzo
6877302504
[NFC reformat] Applies pre-commit formatting to Python files. ( #3244 )
...
This is a large change because prior to this point, Python files in the
project were not consistently formatted. This reformats them all with
black defaults.
Based on experience with prior projects, if you have a dev/long-term
branch with Python patches, you can minimize merge conflicts prior to
rebasing to include this commit by running `black` on your modified
Python files, squashing, and then rebasing/merging.
2024-04-27 14:16:31 -07:00
penguin_wwy
4fbe77a051
[dynamo] Verify the default value is passed by kwargs ( #2998 )
2024-04-28 02:18:33 +08:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
...
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
Xinyu Yang
ac85338491
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcoshOp, AtenAtanhOp, Atan2Op ( #3233 )
2024-04-26 15:47:44 +08:00
penguin_wwy
122eb69a98
[stablehlo] add aten left/right shift op conversion support ( #3234 )
2024-04-26 09:20:49 +08:00
Xinyu Yang
7030eacb76
[stablehlo] Support aten.any and aten.all lowering ( #3217 )
2024-04-25 11:15:52 +08:00
Yuanqiang Liu
fab2696489
[Torch] support aten.trunc ( #3219 )
...
decompose `trunc(x)` to `sign(x) * floor(abs(x))`
2024-04-24 14:32:33 +08:00
Xinyu Yang
e18bf42d0e
[stablehlo] Support ConstantPadNdOp in stablehlo ( #3211 )
...
As titled.
2024-04-24 14:15:11 +08:00
Xinyu Yang
42b9eccdb3
[Stablehlo] Fix AtenSumDimIntListOp when dim==None ( #3216 )
...
As titled.
2024-04-24 11:25:46 +08:00
Xinyu Yang
4da3d714cc
[Torch] Support AtenProdOp on linalg and stablehlo ( #3215 )
2024-04-24 11:14:04 +08:00
zjgarvey
a8ba865fca
[torch] Adds Quantization Support for `aten.relu` ( #3177 )
...
A choice was made to quantize the return type of Relu with a scale and
zero point copied from the input's quantization scheme. With this
choice, the torch-to-linalg conversion of quantized Relu essentially
computes max(input, zeroPoint) in the elementwise payload.
2024-04-23 11:01:36 -07:00
Yuanqiang Liu
db3842f2e8
[Stablehlo] support lowering sinh & cosh to stablehlo ( #3213 )
2024-04-23 19:54:58 +08:00
Xinyu Yang
c1967b607f
[Stablehlo] add AtenLog10Op, AtenLog2Op lowering to stablehlo ( #3208 )
2024-04-23 19:06:55 +08:00
Yuanqiang Liu
1f8123b5f0
[Stablehlo] support unary ops which promote to floating point ( #3209 )
...
* Promote the input to the output element type when lowering to stablehlo
so that it satisfies stablehlo's type constraints (see the sketch below).
* Split promote-to-fp unary ops from fp-only unary ops.
2024-04-23 17:57:12 +08:00
Yuanqiang Liu
797e4cd395
[Stablehlo] lowering asin, acos, atan ( #3207 )
...
* Lower asin, acos, and atan to chlo ops.
2024-04-23 16:24:53 +08:00
Vinayak Dev
cff2f084d4
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` ( #3175 )
...
Adds OnnxToTorch lowering for the ReduceL2 op.
2024-04-23 02:03:05 -04:00
Vivek Khandelwal
3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops ( #3193 )
...
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike ops.
2024-04-22 22:28:07 +05:30
Vivek Khandelwal
6abc7371c8
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsqueeze op ( #2991 )
...
This commit also cleans up the OnnxToTorch lowering for the Squeeze and
Unsqueeze ops and adds support for handling edge cases.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-22 08:52:42 +00:00
penguin_wwy
e5bdd71baf
[Torch] Emit and decompose prims.iota op ( #3132 )
2024-04-21 19:45:01 -07:00
penguin_wwy
a60e84e5ee
[stablehlo] add aten.expm1 op conversion support ( #3199 )
2024-04-21 19:20:49 -07:00
Rob Suderman
8222637159
[onnx] Extend op version number of `onnx.ScatterElements` ( #3195 )
...
The version number was set too high. Lowering it supports more cases and
allows more tests to pass.
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:32:18 -04:00
Rob Suderman
733cace1df
[onnx] Fix `onnx.split` by directly handling slicing ( #3194 )
...
The previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and directly
performs the slicing rather than leveraging `aten.split.tensor`. Since
`onnx` supports only a fixed number of splits, the slice size is easier to
compute when lowering to `aten` directly than by deferring to
`aten.split.tensor`.
---------
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
penguin_wwy
b6b01602d3
[stablehlo] add aten.fmod.Tensor op conversion support ( #3198 )
2024-04-21 08:39:36 +08:00
penguin_wwy
ea0ecb67be
[stablehlo] add aten.remainder.Tensor op conversion support ( #3197 )
2024-04-21 00:03:37 +08:00
Rob Suderman
b01245c0e8
[onnx] Fix `onnx.Not` for non-bool inputs ( #3187 )
...
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
2024-04-19 11:32:24 -07:00
Xinyu Yang
790a697245
[Torch] Add folder for AtenIntOp, AtenFloatOp ( #3189 )
...
See unit test below:
```
// CHECK-LABEL: func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
%none = torch.constant.none
%false = torch.constant.bool false
%float1.000000e01 = torch.constant.float 1.000000e+01
%67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
return %67 : !torch.vtensor<[],f32>
}
// CHECK-LABEL: func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
%none = torch.constant.none
%false = torch.constant.bool false
%int45 = torch.constant.int 45
%67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
return %67 : !torch.vtensor<[],si32>
}
```
2024-04-19 22:17:06 +08:00
penguin_wwy
5a98c72c7f
[StableHLO] Fix aten.clamp.Tensor in FxImporter2StableHLO ( #3190 )
...
The FX importer passes static shapes to the Torch dialect, so it needs to
generate StableHLO that satisfies shape inference.
2024-04-19 17:08:29 +08:00
penguin_wwy
0a6073414d
[FxImporter] Add fx importer to stablehlo e2e test config ( #3183 )
2024-04-18 21:29:17 -07:00
penguin_wwy
6c4f7deebb
[stablehlo] add aten.clamp.Tensor op conversion support ( #3185 )
2024-04-19 10:55:27 +08:00
Rob Suderman
be742a937d
[onnx] Update the failure triage for onnx ( #3186 )
...
Reclassifying the sources of failure for various bugs so we can
reprioritize the failures that are common.
2024-04-18 14:58:13 -07:00