Xinyu Yang
c7d52f63b4
[stablehlo] add aten::_int_mm lowering ( #3474 )
...
as title
2024-06-20 16:10:31 +08:00
Branko Trifkovic
676fa8cc09
Implement lowering of torch.aten.renorm ( #3388 )
...
Closes
[nod-ai/SHARK-Turbine/issues/689](https://github.com/nod-ai/SHARK-Turbine/issues/689 )
---------
Co-authored-by: Branko Trifkovic <branko.trifkovic@syrmia.com>
2024-06-17 10:40:57 -07:00
ptrifunovic98
4555629246
Implement lowering of torch.aten.kthvalue ( #3360 )
...
Closes
[nod-ai/SHARK-Turbine#620 ](https://github.com/nod-ai/SHARK-Turbine/issues/620 )
2024-06-15 11:18:39 +05:30
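As a reading aid for the entry above: a plain-Python sketch of torch.kthvalue's documented semantics (the k-th smallest element, 1-based, plus an index where it occurs). This is not the torch-mlir lowering itself, just the reference behavior the lowering must match.

```python
def kthvalue(xs, k):
    """Return the k-th smallest element of xs (1-based k) and one
    index at which it occurs. Reference semantics only; with ties,
    PyTorch does not guarantee which index is returned."""
    assert 1 <= k <= len(xs)
    value = sorted(xs)[k - 1]
    return value, xs.index(value)
```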
Xinyu Yang
6f94c7b0aa
[Torch] Add support for Meshgrid ( #3462 )
2024-06-14 23:59:08 +08:00
Wu Yuan
a02e14e971
[FxImporter] Add aten._scaled_dot_product_flash_attention_for_cpu to default decomposition table ( #3456 )
2024-06-14 10:52:09 +08:00
Phaneesh Barwaria
919b599ebe
onnx.MaxPool add atenMaxPool1d lowering support ( #3452 )
...
fixes #3422
2024-06-13 15:37:11 +05:30
Chi_Liu
ae6f5e8251
[ONNX] Fix AveragePool attributes support ( #3235 )
...
Issue was found here: https://github.com/nod-ai/SHARK-Turbine/issues/643
- [ONNX] Fix padding attributes for onnx.AveragePool
- [Linalg] Add countIncludePad false support for AtenAvgPool1/2dOp
- [Linalg] Add avg_pool2d countIncludePad=False e2e tests
- [Linalg] Fix conflict with AtenAvgPool3dOp
- [Linalg] Fix e2e crash with AtenAvgPool1dOp
- [Linalg] Add dynamic dim support for AtenAvgPool2dOp
- [Linalg] Fix AvgPool2dDivisorOverrideModule crash
2024-06-12 12:16:43 -07:00
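The countIncludePad distinction fixed above is easy to get wrong; here is a plain-Python sketch (stride 1, 1-D, zero padding) of the two divisor conventions. This is an illustration of the semantics, not the Linalg lowering.

```python
def avg_pool1d(xs, kernel, pad, count_include_pad):
    """1-D average pool, stride 1, zero padding of `pad` on each side.
    count_include_pad picks the divisor: the full kernel size, or only
    the number of real (non-padding) elements under the window."""
    padded = [0.0] * pad + list(xs) + [0.0] * pad
    out = []
    for i in range(len(padded) - kernel + 1):
        window = padded[i:i + kernel]
        # count how many positions in this window fall on real input
        real = sum(1 for j in range(i, i + kernel) if pad <= j < pad + len(xs))
        divisor = kernel if count_include_pad else real
        out.append(sum(window) / divisor)
    return out
```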
Xinyu Yang
431d98b405
[Stablehlo] Add lowering of GridSampler Op ( #3084 )
...
Inspired by PyTorch decompositions.py.
See
ec58f1f74e/torch/_decomp/decompositions.py (L3923-L4086)
Only paddingMode = 0 or 1 and interpolationMode = 0 or 1 are supported.
2024-06-07 16:06:07 +08:00
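For the grid-sampler entry above, a minimal plain-Python sketch of sampling one point, assuming align_corners=True and in-bounds coordinates (so paddingMode never triggers); interpolationMode 0 is bilinear and 1 is nearest, matching the modes the commit says are supported. This is a reference model of the op's semantics, not the stablehlo lowering.

```python
import math

def grid_sample_point(img, x, y, mode):
    """Sample img (list of rows) at normalized coords x, y in [-1, 1],
    align_corners=True convention. mode 0 = bilinear, 1 = nearest.
    Assumes the unnormalized point lands inside the image."""
    h, w = len(img), len(img[0])
    # unnormalize: -1 maps to index 0, +1 maps to index size-1
    fx = (x + 1) * (w - 1) / 2
    fy = (y + 1) * (h - 1) / 2
    if mode == 1:  # nearest
        return img[round(fy)][round(fx)]
    x0, y0 = math.floor(fx), math.floor(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    wx, wy = fx - x0, fy - y0
    top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
    bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
    return top * (1 - wy) + bot * wy
```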
Vivek Khandelwal
72837fbb3d
build: manually update PyTorch version ( #3340 )
...
Set PyTorch and TorchVision version to nightly release 2024-05-14.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-06 22:23:40 +05:30
penguin_wwy
d59d0b6e5a
[Linalg] Promote type for compare tensor op ( #3416 )
2024-06-04 16:05:39 -07:00
Vivek Khandelwal
661be2d5b0
[MLIR][Torch] Add TorchToLinalg lowering for AtenAvgPool3dOp ( #3030 )
...
This commit also fixes the average pool op's test failing for the
OnnxToLinalg lowering.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-04 22:12:34 +05:30
Yuanqiang Liu
50f7103098
[Stablehlo] support uint8 ( #3367 )
...
Support lowering unsigned integer type to stablehlo as discussed in
https://github.com/llvm/torch-mlir/pull/2184 .
The things I do in this PR:
1. create `setupBackendTypeConversionForStablehlo()`,
`createFuncBackendTypeConversionForStablehloPass` and
`createFinalizingBackendTypeConversionForStablehloPass`.
2. remove `InferTypeOpInterface` from `torch_c.to_builtin_tensor`,
because its result type differs between the linalg backend and the
stablehlo backend:
```
// linalg backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xi8>
  %0 = tensor.empty() : tensor<3xf32>
  %1 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]} ins(%c : tensor<3xi8>) outs(%0 : tensor<3xf32>) {
  ^bb0(%in: i8, %out: f32):
    %2 = arith.uitofp %in : i8 to f32
    linalg.yield %2 : f32
  } -> tensor<3xf32>
  return %1 : tensor<3xf32>
}
// stablehlo backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xui8>
  %0 = stablehlo.convert %c : (tensor<3xui8>) -> tensor<3xf32>
  return %0 : tensor<3xf32>
}
```
3. fix stablehlo and linalg's conversion
2024-06-04 09:04:59 +08:00
zjgarvey
8995c90879
[TorchToLinalg] add support for quantized group conv ( #3341 )
...
This addresses 7 of the model failures I'm seeing in the test suite. See
[Shark-Turbine issue
#566 ](https://github.com/nod-ai/SHARK-Turbine/issues/566 ).
Need the op `linalg.conv_2d_ngchw_gfchw_q` to be added upstream
before merging this. See [llvm-project PR #92136
](https://github.com/llvm/llvm-project/pull/92136 ).
A small additional expansion to operand quantization is included in this
patch to address a model failure that occurs when unblocking the
quantized group convolutions in one of these onnx models.
2024-06-03 21:57:44 +05:30
Xinyu Yang
285b087a5d
[Torch] Emit rrelu and decompose it ( #3250 )
...
as title
2024-06-03 19:25:52 +08:00
Xinyu Yang
267052df2a
[Torch] decompose AtenLerpTensorOp ( #3251 )
...
as title
2024-06-03 15:25:09 +08:00
Xinyu Yang
23b53050de
[Torch]Support conv_transpose1d and conv_transpose3d ( #3286 )
...
1. Support conv_transpose1d and conv_transpose3d
2. Fix bugs in the convertTransposedConv function in
lib/Conversion/TorchToStablehlo/Linear.cpp
2024-06-03 15:11:12 +08:00
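To make the transposed-convolution entry above concrete: a plain-Python sketch of single-channel, no-padding conv_transpose1d, where each input element scatters a scaled copy of the kernel into the output. This is the reference semantics, not the TorchToStablehlo implementation.

```python
def conv_transpose1d(xs, w, stride=1):
    """Single-channel 1-D transposed convolution with no padding:
    output length is (len(xs) - 1) * stride + len(w), and each input
    element adds x * w shifted by i * stride."""
    out = [0.0] * ((len(xs) - 1) * stride + len(w))
    for i, x in enumerate(xs):
        for j, wj in enumerate(w):
            out[i * stride + j] += x * wj
    return out
```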
Yuanqiang Liu
4e05e2cd1e
[Torch] support recompose of aten.split.with_sizes and aten.tensor_split.sections ( #3401 )
...
* support recompose to aten.split.with_sizes and
aten.tensor_split.sections
* fix recompose of aten.chunk
2024-05-31 09:56:47 +08:00
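The two split variants named in the entry above differ only in how chunk sizes are derived; a plain-Python sketch of their documented semantics (not the recompose pattern itself):

```python
def split_with_sizes(xs, sizes):
    """aten.split.with_sizes: consecutive chunks of the given sizes."""
    assert sum(sizes) == len(xs)
    out, start = [], 0
    for s in sizes:
        out.append(xs[start:start + s])
        start += s
    return out

def tensor_split_sections(xs, sections):
    """aten.tensor_split.sections: the first len(xs) % sections chunks
    get one extra element, so sizes need not divide evenly."""
    base, extra = divmod(len(xs), sections)
    sizes = [base + 1] * extra + [base] * (sections - extra)
    return split_with_sizes(xs, sizes)
```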
zjgarvey
074098d20c
Modifies onnx resize lowering to fix numerical issues ( #3381 )
...
Updates:
- unsupported coordinate transformation modes now report a match
failure
- fixes a bug that was introduced in the last patch for resize (my
bad...)
- uses actual x and y coordinates for computing weights in bilinear
interpolation (rather than eps modified values)
- slightly simplifies the bilinear interpolation payload for readability
and performance
- passes coordinate transformation mode information from an onnx.Resize
op to the mode string for the aten._interpolate op. This allows us to
perform custom logic in the torch->linalg lowering to support
onnx.Resize options without losing the default behaviors of the
interpolate op.
2024-05-30 20:34:37 -04:00
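The coordinate-transformation and "actual x and y" weight fixes above can be illustrated with a 1-D linear resize sketch in plain Python. The "half_pixel" mapping below follows the ONNX Resize definition; this is an assumption-laden reference model, not the lowering's code.

```python
import math

def resize_linear_1d(xs, out_len, half_pixel=True):
    """1-D linear resize. ONNX "half_pixel" maps output index i to
    x = (i + 0.5) * scale - 0.5; "asymmetric" maps it to x = i * scale.
    Weights come from the actual coordinate (clamped at the edges),
    not an eps-shifted value."""
    scale = len(xs) / out_len
    out = []
    for i in range(out_len):
        x = (i + 0.5) * scale - 0.5 if half_pixel else i * scale
        x0 = min(max(math.floor(x), 0), len(xs) - 1)
        x1 = min(x0 + 1, len(xs) - 1)
        w = min(max(x - x0, 0.0), 1.0)
        out.append(xs[x0] * (1 - w) + xs[x1] * w)
    return out
```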
penguin_wwy
e4be197efd
[FxImporter] Fix transpose rank zero ( #3382 )
2024-05-30 14:31:18 +08:00
penguin_wwy
a5d3b546f8
[FxImporter] Fix embedding bag ( #3387 )
2024-05-29 14:46:21 +08:00
Yuanqiang Liu
e0a5adb1db
[Torch] fix aten.linear's decomposition ( #3391 )
...
* support aten.linear with more rank.
2024-05-27 15:49:50 +08:00
Yuanqiang Liu
28aeb047c1
[Stablehlo] fix crashing on AtenEmbeddingBagSumExample_basic ( #3389 )
2024-05-26 12:34:56 +08:00
Yuanqiang Liu
5bb1a65ec9
[Stablehlo] refactor reduction lowering and support aten.amin ( #3383 )
...
* implement detailed lowering template pattern
`ConvertAtenReduceAllDimsOp` and `ConvertAtenReduceKeepDimOp`
* support `aten.amin`'s lowering.
2024-05-23 20:40:20 +08:00
penguin_wwy
d924d0047f
[FxImporter] Fix primitive type in return ( #3379 )
2024-05-23 09:55:33 +08:00
Yuanqiang Liu
f4bfe3f948
Bump llvm and stablehlo ( #3377 )
...
* bump llvm to 1e5f29af81a5f6fda308074f6345b9fba4faa71c
* bump stablehlo to c44d9af8d4879adccf1054cb61a53377ae5898cb
2024-05-22 23:28:45 +08:00
penguin_wwy
972d47b586
[FxImporter] Fix constant bool tensor ( #3375 )
2024-05-22 22:59:01 +08:00
penguin_wwy
c2c1c2cfa4
[FxImporter] Fix failed e2e case ( #3365 )
2024-05-22 00:20:54 +08:00
Vivek Khandelwal
b870729efe
[torch] Fix `onnx.MaxPool` lowering ( #3133 )
...
This commit fixes the onnx.MaxPool op lowering which was lacking the
indices result support.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-21 21:05:32 +05:30
Wu Yuan
cc28d566ff
[Stablehlo] Support AtenTrilOp ( #3359 )
...
1. lower aten.tril to stablehlo composed of iota, select, and so forth
2. add related e2e test cases
2024-05-20 15:49:24 +08:00
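The iota/select recipe in the entry above amounts to comparing row and column index grids and selecting between the input and zero; a plain-Python sketch of that semantics (not the stablehlo IR):

```python
def tril(m, diagonal=0):
    """Lower-triangular part of a nested-list matrix: keep m[i][j]
    where j - i <= diagonal (the iota-compare condition), else 0."""
    rows, cols = len(m), len(m[0])
    return [[m[i][j] if j - i <= diagonal else 0 for j in range(cols)]
            for i in range(rows)]
```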
Yuanqiang Liu
8814d0ae64
[Torch] emit aten.dot and canonicalize it to aten.matmul ( #3361 )
...
* canonicalize `aten.dot` to `aten.matmul`
2024-05-18 22:45:14 +08:00
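The canonicalization above is sound because matmul on two 1-D tensors is defined as exactly the dot-product reduction; a plain-Python statement of that shared semantics:

```python
def dot(a, b):
    """aten.dot on 1-D inputs: the sum of elementwise products. This is
    also what aten.matmul computes when both arguments are 1-D, which is
    why rewriting dot to matmul preserves the result."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))
```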
zjgarvey
6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering ( #3351 )
...
Addresses [Shark-Turbine
#196 ](https://github.com/nod-ai/SHARK-TestSuite/issues/196 )
Related tracker [Shark-Turbine
#566 ](https://github.com/nod-ai/SHARK-Turbine/issues/566 )
Related onnx.Resize issues [Shark-Turbine
#616 ](https://github.com/nod-ai/SHARK-Turbine/issues/616 )
2024-05-17 12:18:57 -07:00
Suraj Sudhir
cba91a9b96
[ONNX][TOSA] Adds ONNX to TOSA e2e tests ( #3358 )
...
- Refactors OnnxBackend to be generic and consume any Torch backend.
---------
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-16 21:44:26 -07:00
Xinyu Yang
7faba75696
[Torch] Decompose AtenMaskedScatterOp ( #3353 )
...
Co-authored-by: Yuanqiang Liu <liuyuanqiang.yqliu@bytedance.com>
2024-05-16 15:27:25 +08:00
Suraj Sudhir
0ca88028cd
[FxImporter][TOSA] Enable FxImporter to TOSA e2e tests ( #3349 )
...
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-15 14:37:30 -07:00
NeverRaR
1d4859699b
MaxPool1d lowering to linalg ( #3295 )
...
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
Vivek Khandelwal
10db310460
build: manually update PyTorch version ( #3291 )
...
Set PyTorch and TorchVision version to nightly release 2024-05-05.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-10 21:45:06 +05:30
penguin_wwy
afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op ( #3306 )
2024-05-10 02:20:06 +08:00
Jiawei Wu
346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics ( #3071 )
...
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None index in the indices argument.
2024-05-08 22:44:57 +08:00
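The key idea in the entry above is that a None entry in the indices list is materialized as the explicit full index range for that dimension, so every index is concrete. A plain-Python sketch for the 2-D case with a None row index (the helper name and shapes are illustrative, not torch-mlir API):

```python
def index_put_cols(tensor, col_indices, values):
    """Emulates t[:, cols] = values on a nested list: the implicit None
    row index is materialized as explicit indices 0..rows-1, and each
    selected column is overwritten in every row (broadcast assignment),
    mirroring the hacked_twin requirement of no None indices."""
    rows = list(range(len(tensor)))  # None -> explicit arange
    out = [row[:] for row in tensor]
    for r in rows:
        for c, v in zip(col_indices, values):
            out[r][c] = v
    return out
```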
Xinyu Yang
abef114c0c
[torch] emit aten.Softshrink and aten.Hardshrink ( #3248 )
...
as title
2024-05-08 15:20:45 +08:00
zjgarvey
9be6877c22
Temporarily remove QuantizedMLP_basic ( #3301 )
...
See issue #3298
2024-05-07 14:32:13 -07:00
Vivek Khandelwal
17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op ( #3278 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Ze Zhang
11cd7cd9e7
Folder and Canonicalizer for PrimsConvertElementTypeOp and AtenMaxPool2dWithIndicesOp ( #3272 )
...
While playing with TorchDynamo on ResNet18, I noticed the following issues:
- `prims.convert_element_type` can’t be canonicalized even if the input
and the output share the same type
- `aten.max_pool2d_with_indices` is always used instead of
`aten.max_pool2d`, even if the second returned output (indices) has no
user
This PR fixes the above issues by adding a folder to
PrimsConvertElementTypeOp and a canonicalizer to
AtenMaxPool2dWithIndicesOp.
Lit test:
`cmake --build build --target check-torch-mlir-all`
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-05-02 00:03:41 -07:00
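The two rewrites in the entry above can be stated as tiny decision rules; the toy model below (plain Python, not MLIR pattern-rewriter code) captures each one: a same-type convert folds to its operand, and a pooling op whose indices result is unused is replaced by the indices-free variant.

```python
def fold_convert_element_type(operand_dtype, result_dtype, operand):
    """Folder sketch for prims.convert_element_type: when input and
    output dtypes already match, the op is the identity, so fold to the
    operand; returning None means the op must stay."""
    return operand if operand_dtype == result_dtype else None

def canonicalize_max_pool(indices_has_users):
    """Canonicalizer sketch: aten.max_pool2d_with_indices whose second
    (indices) result has no users becomes plain aten.max_pool2d."""
    if indices_has_users:
        return "aten.max_pool2d_with_indices"
    return "aten.max_pool2d"
```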
Prashant Kumar
8c48135a42
[linalg] Fix bug for conversion of complex dtype ( #3269 )
...
The conversion of complex dtypes was neither supported nor checked;
this adds the support and the required tests.
Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar)
33eef15e42
Support onnx.If ( #2825 )
...
This is probably a decent PR for learning about blocks and regions.
If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp
While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py
---------
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
zjgarvey
72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose ( #3240 )
...
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that pytorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and only leave the tests here which are of the form quantize
-> dequantize -> op.
2024-04-30 09:23:09 -07:00
Xinyu Yang
f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo ( #3259 )
...
1. Handle case stride == None
2. add avgpool3d maxpool1d maxpool3d lowering
2024-05-01 00:06:13 +08:00
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Xinyu Yang
5684dc0441
[Torch] emit aten.celu and decompose it ( #3247 )
...
CELU(x)=max(0,x)+min(0,α∗(exp(x/α)−1))
2024-04-28 17:23:40 +08:00
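The decomposition formula quoted above translates directly to code; a plain-Python check of that identity (not the emitted decomposition itself):

```python
import math

def celu(x, alpha=1.0):
    """CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)):
    identity for positive x, a saturating exponential for negative x."""
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))
```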
Yuanqiang Liu
46c0f3cad0
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) ( #3246 )
2024-04-28 11:47:43 +08:00
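A plain-Python statement of the decomposition named in the entry above, log(sigmoid(x)); note this naive composition is only a semantic sketch, since a numerically robust kernel would compute it as -log1p(exp(-x)) to avoid overflow for large negative x.

```python
import math

def log_sigmoid(x):
    """aten.log_sigmoid as the literal composition log(sigmoid(x)).
    Semantic reference only; not numerically stable for very negative x."""
    return math.log(1.0 / (1.0 + math.exp(-x)))
```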