Vivek Khandelwal
90e3d69c25
build: manually update PyTorch version ( #3034 )
...
Set PyTorch and TorchVision version to nightly release 2024-03-18.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-20 21:45:07 +05:30
Aart Bik
fe59f1ee0d
[torch-mlir][sparse] higher dimension COO ( #3042 )
...
Lift this from 2-dim only to n-dim for n>=2
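For illustration, a higher-dimension COO tensor at the torch level (a minimal sketch; the commit itself concerns torch-mlir's sparse support, not this API):
```
import torch

# A 3-dim COO tensor: indices form an (ndim, nnz) matrix, one row per dimension.
indices = torch.tensor([[0, 1], [0, 1], [0, 1]])  # two nonzeros on the diagonal
values = torch.tensor([1.0, 2.0])
st = torch.sparse_coo_tensor(indices, values, size=(2, 2, 2))
print(st.to_dense())
```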
2024-03-19 15:59:07 -07:00
Abhishek-TyRnT
df02692726
Dynamic size support for flatten ( #3005 )
...
Added support for dynamic shapes in the `flatten.using_ints` op lowering in the
TOSA dialect. As a result, some Argmax tests now pass.
This PR fixes this issue https://github.com/llvm/torch-mlir/issues/3004
The following tests pass after this PR
```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
2024-03-19 15:19:29 -07:00
zjgarvey
7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 ( #3041 )
...
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
version 7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.
See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl )
2024-03-19 13:35:05 -07:00
Yuanqiang Liu
8b96727d0d
[Stablehlo] lowering chlo to stablehlo in torch-to-stablehlo pipeline ( #3037 )
...
Stablehlo is a better boundary between the frontend compiler and the backend
compiler than chlo.
2024-03-19 21:18:54 +08:00
Xida Ren (Cedar)
895ea8663a
add llvm style guide
2024-03-18 18:25:22 +00:00
Pavani Chowdary
c51e2130f2
[onnx] support for lowering mod op from onnx to torch ( #2859 )
...
nod-ai/Shark-Turbine#267
---------
Authored-by: boddu.pavani@research.iiit.ac.in
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-18 17:54:37 +05:30
Xinan Jiang(姜曦楠)
d8a52e82c2
[onnx] Fix onnx.cast cases between int32 and int64 ( #2982 )
...
2 modifications:
1. torch.int64 is enum 4 in TORCH_DTYPE_TO_INT
2. add int32 support
2024-03-15 17:14:09 +00:00
penguin_wwy
f34c187ac4
Normalize type hints to be compatible with multiple Python versions ( #3028 )
...
Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`
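A minimal sketch of the normalization (hypothetical function, assuming the fix is to use `typing` generics, which work at runtime on 3.8):
```
from typing import List, Optional

# Portable on Python 3.8+: typing generics instead of builtin ones.
def get_dims(shape: Optional[List[int]]) -> List[int]:
    return list(shape or [])

# The 3.9+/3.10+ spellings that fail at runtime on a 3.8 interpreter:
#   def get_dims(shape: list[int] | None) -> list[int]: ...
#   -> TypeError: 'type' object is not subscriptable
```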
2024-03-15 08:29:48 -07:00
Yuanqiang Liu
4282eb9e76
[Torch Dialect] support aten.fake_quantize_per_tensor_affine ( #3014 )
2024-03-15 08:53:29 +08:00
Nithin Meganathan
798bfd7dff
Adds accumulator types in TorchToLinalg for `AtenMmOp` and `AtenConvolutionOp` ( #3027 )
2024-03-14 16:40:40 -07:00
Sambhav Jain
0b2f9c89a2
Bring back `dynamic_shapes` constraints in fx importer API ( #3026 )
...
https://github.com/llvm/torch-mlir/pull/2992 dropped `constraints` from
the fx importer API,
[breaking](https://github.com/cruise-automation/mlir-tcp/actions/runs/8284385380/job/22669774071 )
downstream AOT compile tests in `mlir-tcp` that use it. This knob has
been soft-deprecated for a while now, replaced by `dynamic_shapes` - a
more ergonomic interface. This PR brings back dynamic_shapes constraints
in the new supported form. Also added a python lit test with dynamic
shaped annotations.
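For reference, the `dynamic_shapes` form follows `torch.export`; a minimal sketch (the exact fx importer entry point is elided here, and the module is hypothetical):
```
import torch
from torch.export import Dim, export

class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

# Mark dim 0 of `x` as dynamic; the exported program (and hence the fx
# importer) carries the constraint, replacing the deprecated `constraints` knob.
batch = Dim("batch")
prog = export(Scale(), (torch.randn(3, 4),), dynamic_shapes={"x": {0: batch}})
```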
2024-03-14 10:26:34 -07:00
penguin_wwy
29ac23a790
Setuptools uses a separate build directory ( #3023 )
...
* prevent setuptools from stealing the build directory name
https://github.com/llvm/torch-mlir/pull/3021#issuecomment-1994447855
* support pre-built LLVM
* support the CMAKE_BUILD_TYPE env var
2024-03-13 20:41:48 -07:00
Yuanqiang Liu
870e63bc3c
[Torch Dialect] support decomposition of aten.linspace ( #3006 )
2024-03-14 08:28:33 +08:00
Yuanqiang Liu
43c6996a31
[Torch Dialect] add folder for aten.ceil and unify patterns of ceil, floor, round ( #3010 )
2024-03-14 07:41:58 +08:00
ptrifunovic98
524ff99216
Implement lowering of torch.aten.linalg_cross ( #2986 )
...
Closes
[nod-ai/SHARK-Turbine#497 ](https://github.com/nod-ai/SHARK-Turbine/issues/497 )
2024-03-13 12:17:22 -07:00
aldesilv
6fa21bd8b1
OnnxToTorch lower celu op ( #2920 )
2024-03-13 20:34:10 +05:30
Nithin Meganathan
5ecc1d5c0d
Align softmax accumulation types with Torch's CUDA implementation ( #2996 )
2024-03-12 15:07:45 -07:00
Yuanqiang Liu
ad6159c7cb
[Stablehlo] lowering aten.round to stablehlo.round_nearest_even ( #3011 )
2024-03-12 08:58:20 +08:00
Rob Suderman
e78c99e74e
[torch] Update folders for splat operators ( #3012 )
...
Splat operator folders required the output to be 1-D. This was not a necessary
restriction, and it has been loosened to 2-D.
2024-03-11 16:45:49 -04:00
Devjiu
4b1e87ce67
[TorchDynamo] Enable Elemtwise ops for Scalar arg ( #2744 )
...
This commit provides a stopgap solution to support elementwise operations
(mul, add) with a scalar argument, i.e. op(Tensor, Scalar).
It replaces `torch.aten.add.Tensor` with `torch.aten.add.Scalar`.
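A sketch of the rewrite at the Python level, assuming the `.Scalar` overload is a drop-in for a scalar right operand:
```
import torch

x = torch.randn(4)
# Before: the scalar is materialized as a tensor for the .Tensor overload.
y_tensor = torch.ops.aten.add.Tensor(x, torch.tensor(2.0))
# After: the .Scalar overload takes the Python scalar directly.
y_scalar = torch.ops.aten.add.Scalar(x, 2.0)
assert torch.equal(y_tensor, y_scalar)
```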
```
Unexpected outcome summary: (torchdynamo)
****** Unexpectedly Passed tests - 22 tests
XPASS - "AddCDivModule_basic"
XPASS - "BatchNorm1DModule_basic"
XPASS - "BatchNorm1DStaticShapeModule_basic"
XPASS - "BatchNorm1DWith2DInputModule_basic"
XPASS - "BatchNorm2DModule_basic"
XPASS - "BatchNorm3DModule_basic"
XPASS - "ElementwiseAddScalarInt64Module_basic"
XPASS - "ElementwiseAddScalarIntModule_basic"
XPASS - "ElementwiseMulScalarModule_basic"
XPASS - "ElementwiseMulScalarModule_float"
XPASS - "ElementwiseMulScalarModule_int"
XPASS - "GroupNormModule_basic"
XPASS - "GroupNormNoWeightAndBiasModule_basic"
XPASS - "MobilenetV3Module_basic"
XPASS - "NativeBatchNorm1DModule_basic"
XPASS - "NativeBatchNorm2DModule_basic"
XPASS - "NativeBatchNorm3DModule_basic"
XPASS - "NativeBatchNormNoneWeightModule_basic"
XPASS - "NativeGroupNormBackwardModule_basic"
XPASS - "NativeGroupNormModule_basic"
XPASS - "ResNet18Module_basic"
XPASS - "ResNet18StaticModule_basic"
```
There is also a segfault for the test
"ElementwiseAddScalar_TensorLiteralInt32_Module_basic": somehow this change
does not allow using tensors that are not forward arguments but are local
variables of the model,
e.g. `self.x = torch.tensor(..)`
See also: #2745
Signed-off-by: Dmitrii Makarenko <dmitrii.makarenko@intel.com>
2024-03-11 12:22:05 -07:00
Rob Suderman
8fb28661f9
[onnx] Fix onnx.ReduceMean lowering ( #3002 )
...
ReduceMean lowerings did not successfully lower to `linalg` via torch.
There were two separate paths that could be consolidated into a single,
simpler pass. This resulted in a significant improvement in test
coverage.
2024-03-11 11:32:53 -07:00
Yuanqiang Liu
229ca3a9e1
[Torch Dialect] emit aten::mul and add folder ( #3007 )
2024-03-11 19:59:34 +08:00
Yuanqiang Liu
a3fe130f73
[Torch Dialect] emit aten::warn ( #3003 )
...
* torch-mlir may not handle `aten.warn` itself, but it can be handled by
users' custom backends that incorporate torch-mlir.
2024-03-10 08:29:08 +08:00
Rob Suderman
bd7f1baa42
[onnx] Fix expand operation for dynamic shape max ( #3001 )
...
If the broadcast shape is length-1 at a dim while the input dim is `?`
(dynamic), then we need to broadcast to the dynamic dim. This is equivalent to
taking the max of the two dimensions.
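A scalar sketch of that rule, assuming static sizes are positive and `None` marks a dynamic (`?`) dim:
```
def expanded_dim(input_dim, target_dim):
    # A dynamic input dim against a length-1 target dim must stay dynamic,
    # which is exactly max(input_dim, 1); statically it is the max of the two.
    if input_dim is None:  # dynamic `?`
        return None if target_dim == 1 else target_dim
    return max(input_dim, target_dim)
```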
2024-03-08 16:23:07 -08:00
Rob Suderman
0723584936
[torch] Add folder for torch.aten.*.Scalar comparisons ( #3000 )
...
This folds the tensor-scalar comparison operators for small constant tensors,
as they are commonly used in shape computations. This includes le, lt, ge,
gt, eq, and ne.
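A Python-level illustration of the pattern these folders target: a small constant integer tensor (e.g. a materialized shape) compared against a scalar:
```
import torch

shape = torch.tensor([1, 4, 4])            # a small shape-like constant
mask = torch.ops.aten.ge.Scalar(shape, 2)  # foldable to a constant
assert torch.equal(mask, torch.tensor([False, True, True]))
```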
2024-03-08 13:44:00 -08:00
Daniel Garvey
80c7bc3f7a
fximporter: support newer torch versions ( #2999 )
...
Uses version checking, since the attributes exist in both versions; the only
thing that changes is what we receive as an fx graph.
2024-03-08 14:58:50 -06:00
Dmitry Babokin
6b3a7d07c2
Fix link to roadmap in README.md ( #2995 )
...
The file was renamed by PR https://github.com/llvm/torch-mlir/pull/2842 .
2024-03-07 20:26:53 -08:00
Andreas Falkenberg
551a4e45f3
[onnx] Add support for `onnx.Gemm` with no bias ( #2993 )
...
The previous Gemm lowering required a bias vector.
This provides an alternate path to `Torch::AtenMm`
when there is no bias operand.
2024-03-07 15:58:38 -08:00
Rob Suderman
1964208d19
[onnx] Fix constant pad for dynamic shape ( #2989 )
...
The current padding operation was not functional for dynamic shapes.
Updated and enabled tests so that onnx.pad tests pass.
Work TBD for reflection padding.
2024-03-07 13:29:50 -08:00
Scott Todd
7b18646def
[onnx] Handle optional arguments in Clip op pattern. ( #2976 )
...
Spec: https://onnx.ai/onnx/operators/onnx__Clip.html
2024-03-07 17:25:14 +00:00
Vivek Khandelwal
6e84752c39
build: manually update PyTorch version ( #2992 )
...
Set PyTorch and TorchVision version to nightly release 2024-03-07.
This commit also removes the deprecated constraints API:
342e7929b8
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-07 21:42:38 +05:30
penguin_wwy
d5693b3f51
[doc] fix broken links in documents ( #2990 )
...
Co-authored-by: wenyangwang <wenyangwang@tencent.com>
2024-03-06 19:52:34 -08:00
Rob Suderman
c15f1a2bd2
[onnx] Adding lowering for `onnx.Size` operation ( #2985 )
...
We can support `onnx.Size` by requesting the size of each dimension, taking
the product of the results, and packing it into a tensor.
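A minimal Python sketch of the semantics (`onnx_size` is a hypothetical helper, not code from this commit):
```
import math
import torch

def onnx_size(x: torch.Tensor) -> torch.Tensor:
    # onnx.Size: the product of all dimension sizes, packed into a
    # 0-d int64 tensor.
    return torch.tensor(math.prod(x.shape), dtype=torch.int64)

assert onnx_size(torch.ones(2, 3, 4)).item() == 24
```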
---------
Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-06 17:01:05 -08:00
Rob Suderman
a78659742a
[onnx] Migrate `onnx.ReduceMax` to match `onnx.ReduceMin` ( #2981 )
...
This mostly copy-pastes the reduce minimum implementation to reduce max
to improve test coverage. We also improve the aten lowering for min/max
dim for unsigned types.
2024-03-06 16:48:21 -08:00
Andreas Falkenberg
ea76dd12ba
[onnx][torch] Gridsampler E2E test and corrections of gridsampler ( #2987 )
...
The e2e test itself is provided in the Shark-Testsuite;
this adds 2 test cases for the gridsampler e2e test.
As intended, testing also surfaced some items that needed correction, so
the Gridsampler op is changed as well.
2024-03-06 10:56:58 -08:00
Rob Suderman
06292d9429
[torch] Rework `aten.repeat` to use flatten and unsqueeze ( #2984 )
...
The current implementation depends on `aten.view`, which has issues
inferring tensor collapse/expand operations during the lowering to
`linalg`. Using flatten and unsqueeze makes the later reshape behavior
easier to infer.
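A Python-level sketch of the decomposition (hypothetical helper, assuming `repeats` covers every output dim):
```
import torch

def repeat_via_flatten_unsqueeze(x: torch.Tensor, repeats) -> torch.Tensor:
    # Entries of `repeats` beyond x's rank add new leading dims.
    extra = len(repeats) - x.dim()
    x = x.reshape((1,) * extra + tuple(x.shape))
    for d, r in enumerate(repeats):
        x = x.unsqueeze(d)                               # insert axis before dim d
        x = x.expand(*x.shape[:d], r, *x.shape[d + 1:])  # broadcast it to r copies
        x = x.flatten(d, d + 1)                          # merge copies into dim d
    return x

x = torch.arange(6).reshape(2, 3)
assert torch.equal(repeat_via_flatten_unsqueeze(x, (2, 3)), x.repeat(2, 3))
```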
2024-03-06 10:19:18 -08:00
Ze Zhang
aa7c9a9653
e2e support aten.linalg_norm to aten.linalg_vector_norm ( #2953 )
...
Add e2e support for `aten.linalg_norm` by decomposing it to
`aten.linalg_vector_norm`.
Lowering to `aten.linalg_matrix_norm` is still unsupported.
To Test:
`python -m e2e_testing.main -v`
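At the Python level the decomposition corresponds to (a sketch for the default-`ord` case):
```
import torch

x = torch.randn(3, 4)
# With the default `ord`, linalg_norm is the 2-norm of the flattened
# input, i.e. linalg_vector_norm; matrix norms stay unsupported here.
assert torch.allclose(torch.linalg.norm(x), torch.linalg.vector_norm(x))
```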
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-03-05 16:31:01 -08:00
Rob Suderman
bc0527676b
[torch] Add support for `torch.split_with_sizes` via decompose ( #2979 )
...
Convert to individual slices and tuple them together as a list.
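A minimal sketch of the decomposition (hypothetical helper, not code from this commit):
```
import torch

def split_with_sizes_decomposed(x, split_sizes, dim=0):
    # Emit one slice per requested size and collect them into a list.
    pieces, offset = [], 0
    for size in split_sizes:
        pieces.append(x.narrow(dim, offset, size))
        offset += size
    return pieces

x = torch.arange(10)
parts = split_with_sizes_decomposed(x, [3, 3, 4])
assert all(torch.equal(p, q) for p, q in zip(parts, x.split([3, 3, 4])))
```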
---------
Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-05 15:01:21 -08:00
Rob Suderman
933db87a07
[onnx] Add support for constants of `i1`s ( #2978 )
...
`getRawBuffer` expects a densely packed vector of `i1` values; however,
`onnx` does not densely pack the values. Include code to handle the
packing/unpacking.
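A numpy sketch of the repacking, assuming little-endian bit order within each byte:
```
import numpy as np

# onnx stores one boolean per byte; a dense i1 buffer packs eight per byte.
onnx_bools = np.array([1, 0, 1, 1, 0, 0, 0, 1, 1], dtype=np.uint8)
packed = np.packbits(onnx_bools, bitorder="little")           # 9 bits -> 2 bytes
unpacked = np.unpackbits(packed, bitorder="little")[:onnx_bools.size]
assert (unpacked == onnx_bools).all()
```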
2024-03-05 13:55:13 -08:00
Yuanqiang Liu
4d01b0f1a3
[FxImporter] remove dataclass slots to support python3.9 ( #2974 )
...
* `dataclass`'s `slots` parameter is only supported on Python 3.10 and later.
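The incompatibility, in brief (a sketch with a hypothetical class):
```
from dataclasses import dataclass

# On Python < 3.10 the keyword does not exist:
#   @dataclass(slots=True)  ->  TypeError: dataclass() got an unexpected
#                               keyword argument 'slots'
@dataclass  # portable form used instead
class Node:
    name: str
```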
2024-03-06 01:04:38 +08:00
Rob Suderman
a86e89ecb5
[torch] Additional folders for shape computations ( #2972 )
...
A handful of operations are commonly used in shape calculations (slice,
concat, broadcast). Added these additional folders to better propagate
simple shape computations.
2024-03-04 11:46:49 -08:00
Chi_Liu
09875fabd1
[MLIR][ONNX] Add ONNX ReduceProd support ( #2943 )
...
Alternatives to https://github.com/llvm/torch-mlir/pull/2908
Fix https://github.com/nod-ai/SHARK-Turbine/issues/353
2024-03-04 11:07:03 -08:00
Rob Suderman
19d4888278
[torch] Make torch.aten.unflatten lower directly to linalg ( #2971 )
...
The existing lowering via aten.view does not work well for dynamic shapes,
as the lowering to tensor.expand must re-infer dynamic shape matching.
It is better to lower directly.
2024-03-04 10:17:42 -08:00
Rob Suderman
d51e80b648
[onnx] Fix onnx.gather lowering for rank-0 indices ( #2973 )
...
We assumed the rank was at least 1; however, it can be rank-0, generating an
illegal pair of flatten/unflatten operations. Corrected this.
2024-03-04 08:25:19 -08:00
Yuanqiang Liu
916554f270
[Stablehlo] add torch_to_stablehlo::getBackendTypeForScalarType ( #2975 )
2024-03-04 23:31:54 +08:00
Rob Suderman
61f0a5facf
[torch] Add an `aten.cat` length-0 canonicalization ( #2966 )
...
If an input is length-0 along the dimension being canonicalized, we can
remove that tensor from the list.
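A quick check of the equivalence at the Python level:
```
import torch

a, empty, b = torch.randn(2, 3), torch.randn(0, 3), torch.randn(1, 3)
# A length-0 input along the cat dimension contributes nothing.
assert torch.equal(torch.cat([a, empty, b], dim=0), torch.cat([a, b], dim=0))
```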
2024-03-01 21:41:12 -08:00
Rob Suderman
d030bffc62
[torch] Support `aten.view` rank-0 collapse ( #2965 )
...
Collapsing to a rank-0 tensor using `aten.view` was previously bailing
out. Added the special case.
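For example, at the Python level:
```
import torch

# A single-element tensor collapses to rank-0 via view:
x = torch.ones(1, 1)
assert x.view([]).dim() == 0
```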
2024-03-01 12:31:07 -08:00
Scott Todd
e7d90a4b82
[onnx] Fix type on create_module() in onnx_importer.py. ( #2968 )
...
The type returned was changed in
https://github.com/llvm/torch-mlir/pull/2795 . This led to errors in the
downstream IREE project: https://github.com/openxla/iree/pull/16622 .
2024-02-29 13:01:13 -08:00
Vivek Khandelwal
579ac8b666
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for sub and sum op ( #2954 )
...
This commit adds support for scalar conversion to byte.
It also fixes the OnnxToLinalg lowering issue for the Onnx.Sub and
Onnx.Sum ops.
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/466
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/467
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-29 21:48:46 +05:30