Commit Graph

282 Commits (linkerbench)

Author SHA1 Message Date
Yuanqiang Liu e0a5adb1db
[Torch] fix aten.linear's decomposition (#3391)
* support aten.linear with more ranks.
2024-05-27 15:49:50 +08:00
Yuanqiang Liu 05929f9171
enhance verbose option in e2e_testing (#3390)
so that `python3 e2e_testing/main.py -v` would print intermediate IR.
2024-05-27 08:01:07 +08:00
Yuanqiang Liu 28aeb047c1
[Stablehlo] fix crashing on AtenEmbeddingBagSumExample_basic (#3389) 2024-05-26 12:34:56 +08:00
Yuanqiang Liu 5bb1a65ec9
[Stablehlo] refactor reduction lowering and support aten.amin (#3383)
* implement detailed lowering template pattern
`ConvertAtenReduceAllDimsOp` and `ConvertAtenReduceKeepDimOp`
* support `aten.amin`'s lowering.
2024-05-23 20:40:20 +08:00
penguin_wwy d924d0047f
[FxImporter] Fix primitive type in return (#3379) 2024-05-23 09:55:33 +08:00
Yuanqiang Liu f4bfe3f948
Bump llvm and stablehlo (#3377)
* bump llvm to 1e5f29af81a5f6fda308074f6345b9fba4faa71c
* bump stablehlo to c44d9af8d4879adccf1054cb61a53377ae5898cb
2024-05-22 23:28:45 +08:00
penguin_wwy 972d47b586
[FxImporter] Fix constant bool tensor (#3375) 2024-05-22 22:59:01 +08:00
penguin_wwy c2c1c2cfa4
[FxImporter] Fix failed e2e case (#3365) 2024-05-22 00:20:54 +08:00
Vivek Khandelwal b870729efe
[torch] Fix `onnx.MaxPool` lowering (#3133)
This commit fixes the onnx.MaxPool op lowering, which was missing support
for the indices result.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-21 21:05:32 +05:30
Wu Yuan cc28d566ff
[Stablehlo] Support AtenTrilOp (#3359)
1. Lower aten.tril to stablehlo, composed of iota, select, and so forth (see the sketch below).
2. Add related e2e test cases.
2024-05-20 15:49:24 +08:00
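For intuition, here is a minimal Python/PyTorch sketch (not the actual StableHLO pattern) of the iota/compare/select composition the lowering above describes:

```python
import torch

def tril_via_iota_select(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    # Build row/column index grids (the "iota" part of the lowering).
    rows = torch.arange(x.shape[-2]).unsqueeze(-1)   # shape (H, 1)
    cols = torch.arange(x.shape[-1]).unsqueeze(0)    # shape (1, W)
    # Keep elements on or below the chosen diagonal, zero out the rest ("select").
    mask = cols <= rows + diagonal
    return torch.where(mask, x, torch.zeros((), dtype=x.dtype))

x = torch.randn(4, 5)
assert torch.equal(tril_via_iota_select(x), torch.tril(x))
```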
Yuanqiang Liu 8814d0ae64
[Torch] emit aten.dot and canonicalize it to aten.matmul (#3361)
* canonicalize `aten.dot` to `aten.matmul`
2024-05-18 22:45:14 +08:00
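A one-line illustration of why that canonicalization is valid: for 1-D tensors, torch.matmul is defined as the dot product.

```python
import torch

a = torch.randn(8)
b = torch.randn(8)
# For two 1-D tensors, matmul reduces to the dot product, which is what
# makes the aten.dot -> aten.matmul canonicalization valid.
assert torch.allclose(torch.dot(a, b), torch.matmul(a, b))
```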
zjgarvey 6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering (#3351)
Addresses [Shark-Turbine
#196](https://github.com/nod-ai/SHARK-TestSuite/issues/196)

Related tracker [Shark-Turbine
#566](https://github.com/nod-ai/SHARK-Turbine/issues/566)

Related onnx.Resize issues [Shark-Turbine
#616](https://github.com/nod-ai/SHARK-Turbine/issues/616)
2024-05-17 12:18:57 -07:00
Suraj Sudhir cba91a9b96
[ONNX][TOSA] Adds ONNX to TOSA e2e tests (#3358)
- Refactors OnnxBackend to be generic and consume any Torch backend.

---------

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-16 21:44:26 -07:00
Xinyu Yang 7faba75696
[Torch] Decompose AtenMaskedScatterOp (#3353)
Co-authored-by: Yuanqiang Liu <liuyuanqiang.yqliu@bytedance.com>
2024-05-16 15:27:25 +08:00
Suraj Sudhir 0ca88028cd
[FxImporter][TOSA] Enable FxImporter to TOSA e2e tests (#3349)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-15 14:37:30 -07:00
Peiming Liu ccb772cd0f
[sparse] propagate sparsity properly when decomposing torch operations. (#3318) 2024-05-15 10:09:27 -07:00
Yuanqiang Liu 5928f68e60
[Stablehlo] refactor amax, max, max.dim's lowering to stablehlo (#3348)
* do not decompose `aten.amax` on the `stablehlo` backend, because it can be
lowered to `stablehlo.reduce` directly.
* lower `aten.max.dim` to `stablehlo.reduce apply max` when
`AtenMaxDimOp.getIndices()` has no users; this is simpler.
2024-05-16 00:05:19 +08:00
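A quick PyTorch-level illustration of the relationship the lowering above relies on: aten.amax is just the values part of aten.max.dim, so when the indices result is unused both can map to a single max reduction (sketch only, not the conversion code):

```python
import torch

x = torch.randn(3, 4)
values, indices = torch.max(x, dim=1)   # aten.max.dim: values and indices
amax = torch.amax(x, dim=1)             # aten.amax: values only
# When the indices result has no users, max.dim carries no more information
# than amax and can be lowered to a plain max reduction.
assert torch.equal(values, amax)
```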
penguin_wwy 20d4d16d32
[FxImporter] Add an e2e test example for FxImporter (#3331) 2024-05-14 00:45:19 +08:00
Andreas Falkenberg adafd51823
[onnx] Gridsampler addition of nearest mode (#3320)
Added nearest neighbor selection for onnx.Gridsampler
2024-05-10 11:42:10 -07:00
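For reference, a minimal eager-PyTorch usage of the nearest mode this commit adds to the GridSample lowering (standard torch.nn.functional API, not the torch-mlir pattern itself):

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 1, 4, 4)
# Identity sampling grid in normalized [-1, 1] coordinates.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 4), torch.linspace(-1, 1, 4), indexing="ij")
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # shape (1, 4, 4, 2)
# Nearest-neighbor selection: exact grid points pick exact input pixels.
out = F.grid_sample(inp, grid, mode="nearest", align_corners=True)
assert torch.allclose(out, inp)
```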
NeverRaR 1d4859699b
MaxPool1d lowering to linalg (#3295)
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
penguin_wwy be20db0a0e
[NFC] Delete the deprecated example cases (#3323) 2024-05-11 00:28:58 +08:00
Vivek Khandelwal 10db310460
build: manually update PyTorch version (#3291)
Set PyTorch and TorchVision version to nightly release 2024-05-05.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-10 21:45:06 +05:30
penguin_wwy afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op (#3306) 2024-05-10 02:20:06 +08:00
Peiming Liu cff144b3ac
[sparse] fix double free due to incompatibility between buffer-deallo… (#3303)
…cation and sparse tensors.

**NOTE**: This PR _dodges_ the issue in the buffer-deallocation pass instead
of resolving it. In the future, we need to fix the bug in the
buffer-deallocation pass when handling code generated by the sparse
compiler.
2024-05-08 21:18:17 -07:00
aldesilv ec6d7aa5d2
OnnxToTorch lowering resize op (#3013)
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from onnx to linalg for bilinear and nearest resize, with
support for using either scales or sizes to compute the resize shape. Uses the
half_pixel coordinate transform for bilinear mode and asymmetric for nearest
mode. See
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize. Added
two passes -- one for bilinear and the other for nearest.
2024-05-08 21:35:03 +00:00
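For reference, the two coordinate transformation modes mentioned above, as defined in the ONNX Resize spec; a plain-Python sketch, not the linalg lowering:

```python
def half_pixel(out_coord: float, scale: float) -> float:
    # Used for bilinear mode: aligns pixel centers.
    return (out_coord + 0.5) / scale - 0.5

def asymmetric(out_coord: float, scale: float) -> float:
    # Used for nearest mode: aligns the origin (pixel corners).
    return out_coord / scale

# Upscaling an axis by scale = 2.0, output coordinates 0..3:
scale = 2.0
print([half_pixel(i, scale) for i in range(4)])   # [-0.25, 0.25, 0.75, 1.25]
print([asymmetric(i, scale) for i in range(4)])   # [0.0, 0.5, 1.0, 1.5]
```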
zjgarvey 0abc5868b5
[ONNX] Enables data propagation for onnx shape inference (#3280)
This small change seems to dramatically improve shape inference for
complex models, and consequently, improves onnx importer reliability.
2024-05-08 09:29:23 -07:00
Jiawei Wu 346a536c9f
[Torch Dialect] decompose all index_put-like op to aten.index_put.hacked_twin for stricter semantics (#3071)
This PR decomposes all index_put-like ops to aten.index_put.hacked_twin for stricter semantics, i.e., no None index in the indices argument.
2024-05-08 22:44:57 +08:00
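A PyTorch-level illustration of what removing None indices means: a None (full-slice) index can be replaced by an explicit, broadcastable index tensor, which is the stricter form the hacked_twin variant expects. A sketch, not the decomposition pattern itself:

```python
import torch

x1 = torch.zeros(3, 4)
x2 = torch.zeros(3, 4)
cols = torch.tensor([0, 2])

# Advanced indexing with an implicit None/full-slice index on dim 0 ...
x1[:, cols] = 9.0

# ... is equivalent to supplying an explicit index tensor for every dim,
# which is the stricter semantics required here.
rows = torch.arange(3).unsqueeze(1)          # shape (3, 1), broadcasts with cols
x2.index_put_((rows, cols.unsqueeze(0)), torch.tensor(9.0))

assert torch.equal(x1, x2)
```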
Xinyu Yang abef114c0c
[torch] emit aten.Softshrink and aten.Hardshrink (#3248)
as title
2024-05-08 15:20:45 +08:00
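For reference, the two activations emitted above have simple closed forms; a decomposition-style sketch checked against torch.nn.functional (assumed equivalences, not the torch-mlir decomposition code):

```python
import torch
import torch.nn.functional as F

def softshrink(x: torch.Tensor, lambd: float = 0.5) -> torch.Tensor:
    # x - lambd if x > lambd; x + lambd if x < -lambd; 0 otherwise.
    return torch.where(x > lambd, x - lambd,
           torch.where(x < -lambd, x + lambd, torch.zeros_like(x)))

def hardshrink(x: torch.Tensor, lambd: float = 0.5) -> torch.Tensor:
    # x if |x| > lambd; 0 otherwise.
    return torch.where(x.abs() > lambd, x, torch.zeros_like(x))

x = torch.randn(16)
assert torch.allclose(softshrink(x), F.softshrink(x, 0.5))
assert torch.allclose(hardshrink(x), F.hardshrink(x, 0.5))
```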
zjgarvey 9be6877c22
Temporarily remove QuantizedMLP_basic (#3301)
See issue #3298
2024-05-07 14:32:13 -07:00
penguin_wwy c3bd850951
[FxImporter] Add backend lowering to Fx API (#3288) 2024-05-07 20:58:50 +08:00
Vivek Khandelwal 17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op (#3278)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Ze Zhang 11cd7cd9e7
Folder and Canonicalizer for PrimsConvertElementTypeOp and AtenMaxPool2dWithIndicesOp (#3272)
While playing with TorchDynamo on ResNet18, I noticed the following issues:

- `prims.convert_element_type` can’t be canonicalized even if the input
and the output share the same type

- `aten.max_pool2d_with_indices` is always used instead of
`aten.max_pool2d`, even if the second returned output (indices) has no
user

This PR fixes the above issues by adding a folder to
PrimsConvertElementTypeOp and a canonicalizer to
AtenMaxPool2dWithIndicesOp.


Lit test:

`cmake --build build --target check-torch-mlir-all`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-05-02 00:03:41 -07:00
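A PyTorch-level illustration of the second canonicalization: when the indices output is unused, max_pool2d_with_indices computes exactly the same values as plain max_pool2d, so the former can be rewritten to the latter. Sketch only; the actual rewrite lives in the AtenMaxPool2dWithIndicesOp canonicalizer.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
values, indices = F.max_pool2d_with_indices(x, kernel_size=2, stride=2)
# If `indices` has no users, the op carries no extra information beyond
# plain max_pool2d, which is what the canonicalizer exploits.
assert torch.equal(values, F.max_pool2d(x, kernel_size=2, stride=2))
```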
Prashant Kumar 8c48135a42
[linalg] Fix bug for conversion of complex dtype (#3269)
The conversion of complex types wasn't supported or checked; support and the
required tests have been added.

Fixes:
https://github.com/iree-org/iree/issues/17226#issuecomment-2087779158
2024-05-01 12:06:53 +05:30
Xida Ren (Cedar) 33eef15e42
Support onnx.If (#2825)
This is probably a decent PR for learning about blocks and regions.

If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp

While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00
zjgarvey 72349f7522
[TorchToLinalg] Adds Quantization Support for ConvTranspose (#3240)
I spent a little while debugging numerics issues with some tests similar
to the ones in quantized_models.py, only to find that pytorch's
quantized conv transpose is catastrophically inaccurate. I'll upstream
the issue and leave only the tests here that are of the form quantize
-> dequantize -> op (see the sketch below).
2024-04-30 09:23:09 -07:00
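For context, a minimal eager-PyTorch sketch of that quantize -> dequantize -> op test form (illustrative shapes and scales, not the torch-mlir test harness):

```python
import torch

x = torch.randn(1, 2, 8, 8)
w = torch.randn(2, 2, 3, 3)

# Fake-quantize the operands (quantize -> dequantize), then run the float op.
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
wq = torch.quantize_per_tensor(w, scale=0.1, zero_point=0, dtype=torch.qint8)
out = torch.nn.functional.conv_transpose2d(xq.dequantize(), wq.dequantize())
print(out.shape)  # torch.Size([1, 2, 10, 10])
```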
Aart Bik 9442c66856
[torch-mlir][sparse] add a few missing passes to the ref pipeline (#3265)
For some sparse programs (and, I am sure, other as-yet-unseen corner cases for
dense), some passes were missing in the reference pipeline, eventually
resulting in e.g. an unresolved unrealized-cast issue. This PR adds some
very obvious missing passes to avoid this situation.
2024-04-30 09:21:39 -07:00
Xinyu Yang f32ada993d
[Stablehlo] Improve the lowering of pool op in stablehlo (#3259)
1. Handle the case where stride == None
2. Add avgpool3d, maxpool1d, and maxpool3d lowerings
2024-05-01 00:06:13 +08:00
Xinyu Yang 0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo (#3230) 2024-04-29 17:40:30 +08:00
Vivek Khandelwal b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op (#3221)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
penguin_wwy b2185195e8
[NFC] Update black version (#3256)
* Update black version to support 3.11/3.12
* Reformat code
2024-04-29 11:06:01 +08:00
Yuanqiang Liu aed2cf3351
[Torch] emit aten.__contains__.str_list and add folder (#3249) 2024-04-29 10:51:17 +08:00
Xinyu Yang 5684dc0441
[Torch] emit aten.celu and decompose it (#3247)
CELU(x) = max(0, x) + min(0, α ∗ (exp(x/α) − 1))
2024-04-28 17:23:40 +08:00
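A quick check of that decomposition formula against PyTorch's reference implementation (a sketch, assuming the default α = 1.0):

```python
import torch
import torch.nn.functional as F

def celu_decomposed(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    zero = torch.zeros_like(x)
    return torch.maximum(zero, x) + torch.minimum(zero, alpha * (torch.exp(x / alpha) - 1))

x = torch.randn(16)
assert torch.allclose(celu_decomposed(x), F.celu(x, alpha=1.0))
```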
Yuanqiang Liu 46c0f3cad0
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) (#3246) 2024-04-28 11:47:43 +08:00
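And the corresponding check for the log_sigmoid decomposition; note that F.logsigmoid is the fused, numerically stabler form, and log(sigmoid(x)) matches it on moderate inputs:

```python
import torch
import torch.nn.functional as F

x = torch.randn(16)
# Decomposed form: log(sigmoid(x)), checked against the fused reference.
assert torch.allclose(torch.log(torch.sigmoid(x)), F.logsigmoid(x))
```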
Stella Laurenzo 6877302504
[NFC reformat] Applies pre-commit formatting to Python files. (#3244)
This is a large change because prior to this point, Python files in the
project were not consistently formatted. This reformats them all with
black defaults.

Based on experience with prior projects, if you have a dev/long-term
branch with Python patches, you can minimize merge conflicts prior to
rebasing to include this commit by running `black` on your modified
Python files, squashing, and then rebasing/merging.
2024-04-27 14:16:31 -07:00
Stella Laurenzo 5d4b803914 [NFC reformat] Run pre-commit on all files and format misc.
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.

Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
Yuanqiang Liu 695458daea
Fix ArgAnnotation with boolean flag which instructs value semantics (#3238) 2024-04-28 02:24:55 +08:00
penguin_wwy 4fbe77a051
[dynamo] Verify the default value is passed by kwargs (#2998) 2024-04-28 02:18:33 +08:00
Yuanqiang Liu f173a06fa7
[Torch] emit aten.ne.str and add folder (#3242) 2024-04-28 00:58:50 +08:00
penguin_wwy 944a6df611
Extract the Python APIs in the pt1 dir back to the root (#3237) 2024-04-27 18:27:37 +08:00
Rob Suderman 9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` (#3196)

Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
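A PyTorch-level sketch of that composition: one_hot places the new axis last, a transpose-style move puts it at the requested ONNX axis, and a where maps the 0/1 mask onto the off/on values. Illustrative only; the parameter names follow the ONNX OneHot attributes, not the actual conversion code.

```python
import torch

def onnx_one_hot(indices: torch.Tensor, depth: int, off_value, on_value, axis: int = -1):
    # aten.one_hot produces a trailing one-hot axis ...
    oh = torch.nn.functional.one_hot(indices, num_classes=depth)
    # ... which a transpose-style move places at the requested axis ...
    oh = torch.movedim(oh, -1, axis if axis >= 0 else oh.dim() + axis)
    # ... and a where maps the 0/1 mask onto the off/on values.
    return torch.where(oh.bool(), torch.tensor(on_value), torch.tensor(off_value))

idx = torch.tensor([0, 2, 1])
print(onnx_one_hot(idx, depth=3, off_value=0.0, on_value=5.0, axis=0))
```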