zjgarvey
50d6ce225f
Align Quantization Rounding Scheme with ONNX/Pytorch ( #3569 )
...
PyTorch and ONNX round to nearest, with ties going to even, but we were
using `math::round` for the torch-to-linalg conversion of
`quantize_per_tensor`, which rounds away from zero on ties.
2024-07-29 12:24:46 -07:00
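A minimal eager-mode sketch of the difference, using `torch.round` (ties-to-even) as a stand-in for the desired behavior; `round_half_away_from_zero` below is an illustrative helper mimicking `math::round`, not code from the commit:
```
import torch

def round_half_away_from_zero(x):
    # behavior of math::round: ties move away from zero
    return torch.sign(x) * torch.floor(torch.abs(x) + 0.5)

x = torch.tensor([0.5, 1.5, 2.5, -0.5, -1.5])
print(torch.round(x))                # ties to even:       [0., 2., 2., -0., -2.]
print(round_half_away_from_zero(x))  # ties away from zero: [1., 2., 3., -1., -2.]
```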
Vinayak Dev
30c4d2f2b8
[torch] Add OnnxToTorch lowering for Onnx.Unique op ( #3523 )
...
Adds OnnxToTorch Lowering for the `Onnx.Unique` op.
2024-07-29 17:32:44 +05:30
pdhirajkumarprasad
a211ccbcff
Implementation of SplitToSequence ops lowering ( #3509 )
...
Added support for SplitToSequence op lowering.
Added a test case with FileCheck.
2024-07-29 15:44:22 +05:30
Vivek Khandelwal
b6e4725259
[ONNX] Add OnnxToTorch lowering for NonMaxSuppression op ( #3501 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-26 21:01:27 +05:30
yyp0
ea60d72489
[Torch] Add AtenMaskedFillTensorOp support ( #3561 )
2024-07-26 15:32:13 +08:00
Vivek Khandelwal
15cf7106c4
[ONNX] Reduce Onnx.Flatten op version ( #3560 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-24 21:27:20 +05:30
Yuanqiang Liu
003b06dfa1
[Torch] enhance naryFolderHelper to support mixed dtypes ( #3559 )
...
* so that it can support mixed dtypes such as `i64 + f64 => f64`.
* also unifies `aten.log`'s folder code to use `naryFolderHelper`.
2024-07-24 17:54:59 +08:00
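For reference, the mixed-dtype promotion the folder now mirrors is ordinary PyTorch type promotion (plain torch usage, shown only for illustration):
```
import torch

a = torch.tensor([1, 2], dtype=torch.int64)
b = torch.tensor([0.5, 1.5], dtype=torch.float64)
print((a + b).dtype)  # torch.float64 -- i64 + f64 => f64
```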
Yuanqiang Liu
aad1604046
[Torch] enhance fold of aten.squeeze.dim ( #3558 )
2024-07-24 14:13:48 +08:00
Ze Zhang
d1e172f418
Register fake_quantize_cachemask ops and add their decompose patterns ( #3556 )
...
Test:
`cmake --build build --target check-torch-mlir-all`
2024-07-23 11:33:12 -07:00
Yuanqiang Liu
21ad890009
[Torch] enhance fold of aten.slice.Tensor ( #3557 )
...
so that it can fold slices with any static shape.
2024-07-23 22:53:03 +08:00
Yuanqiang Liu
78846425e2
[Torch] add constraints when decomposing aten.split_with_sizes ( #3555 )
2024-07-23 10:34:29 +08:00
Yuanqiang Liu
45c85c3b34
[Stablehlo] bump stablehlo to c28d55e91b4a5daaff18a33ce7e9bbd0f171256a ( #3554 )
2024-07-21 23:16:23 +08:00
Vivek Khandelwal
22c9008bb9
build: Update Roll PyTorch version ( #3548 )
...
This commit also updates the PyTorch and Torchvision nightly links since
they are now moved to a different location.
PyTorch Nightly: https://download.pytorch.org/whl/nightly/cpu/torch/
Torchvision Nightly:
https://download.pytorch.org/whl/nightly/cpu/torchvision/
Disables dtype checks for some ops, tracked by https://github.com/llvm/torch-mlir/issues/3552
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-19 21:38:57 +05:30
bosko-syrmia
2cdf3deae3
implement lowering of torch.aten._linalg_slogdet ( #3524 )
2024-07-19 11:24:43 +05:30
Branko Trifkovic
c7d972ed58
Implement lowering of torch.aten.tril_indices ( #3517 )
2024-07-18 18:38:12 +05:30
jinchen
f0ce1e94ce
[ONNX] Add OnnxToTorch support for SequenceMap ( #3535 )
2024-07-17 14:25:09 -07:00
pkapris-syrmia
fde286f491
Implement lowering for torch.aten.hann_window.periodic ( #3502 )
2024-07-17 18:21:23 +05:30
pkapris-syrmia
b59efc75f3
Implement lowering of torch.aten.atleast_1d ( #3498 )
...
This operator is necessary in order to implement torch.aten.vstack,
which will be added in a future PR.
2024-07-17 18:20:30 +05:30
Arham Khan
574143448b
[E2E][ONNX] torch.multinomial ( #3404 )
...
This PR adds a conversion in the TorchOnnxToTorch pass for the ONNX
Multinomial operation. It also adds a TorchToLinalg lowering for the
`aten.Multinomial` op and does a light refactor of some repeated code
that generates random floating point numbers in
`TorchToLinalg/Random.cpp`.
2024-07-16 23:09:39 +05:30
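For context, the PyTorch-level semantics being lowered (plain `torch.multinomial` usage, not code from the PR):
```
import torch

torch.manual_seed(0)
probs = torch.tensor([0.1, 0.2, 0.7])
# draw 5 category indices with replacement, proportionally to `probs`
samples = torch.multinomial(probs, num_samples=5, replacement=True)
print(samples)  # e.g. tensor([2, 2, 1, 2, 2])
```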
rohan-tan-bhowmik
0791a8860c
[Torch] Implements TorchToLinalg lowering of torch.ops.aten._weight_norm_interface ( #3538 )
...
Resolves https://github.com/nod-ai/SHARK-Turbine/issues/757 .
Adds TorchToLinalg lowering for `Aten_WeightNormInterfaceOp`.
---------
Co-authored-by: Ubuntu <rbhowmik@RohanBhowmikVM.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-07-16 23:09:12 +05:30
Yuanqiang Liu
714270a922
[Stablehlo] legalize deprecated ops to stablehlo ops ( #3543 )
2024-07-17 00:05:11 +08:00
Xinyu Yang
e5d1677894
[Torch] Eliminate getWithLeastStaticInformation in DecomposeAtenLinspaceOp and DecomposeAtenFakeQuantizePerTensorAffineOp ( #3539 )
...
as title
2024-07-15 10:02:36 +08:00
Matthew Francis-Landau
fe9db78120
Allow custom ops to return an array of tensors ( #3531 )
...
This PR adds support to `fx_importer.py` for handling custom ops that
return an array of tensors. As long as the length of the array is
consistent across runs (i.e., statically determined), this patch works.
The number of tensors returned does not need to be fixed by the op's
definition.
CC @sjain-stanford
2024-07-14 11:54:23 -07:00
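A hedged sketch of the kind of custom op this enables importing; the op name `mylib::split3` and the registration below are illustrative, not part of the PR. The key property is that the returned list has a statically consistent length:
```
import torch

# Register a custom op whose schema returns a list of tensors.
lib = torch.library.Library("mylib", "DEF")
lib.define("split3(Tensor x) -> Tensor[]")

def split3_impl(x):
    # always returns exactly three tensors, so the length is static
    return list(torch.chunk(x, 3))

lib.impl("split3", split3_impl, "CompositeExplicitAutograd")

outs = torch.ops.mylib.split3(torch.arange(6.0))
print([t.shape for t in outs])  # [torch.Size([2]), torch.Size([2]), torch.Size([2])]
```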
Sambhav Jain
7411ff2f69
[Symbolic Shapes] Test coverage for unbacked symint from data dependent ops ( #3542 )
...
We already support translating unbacked symbolic ints that arise
from data-dependent ops like `aten.nonzero`. This PR adds Python lit
test coverage for it.
2024-07-14 11:52:03 -07:00
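A minimal sketch of the kind of program covered: `torch.nonzero` has a data-dependent output size, which shows up as an unbacked symint during export. The `fx.export_and_import` entry point below is the repo's Python API as I understand it; treat the exact call as an assumption:
```
import torch
from torch_mlir import fx  # assumes the torch-mlir Python package is available

class NonZeroModule(torch.nn.Module):
    def forward(self, x):
        # output shape depends on the values in x -> unbacked symint in export
        return torch.nonzero(x)

m = fx.export_and_import(NonZeroModule(), torch.tensor([1, 0, 3, 0]))
print(m)  # torch dialect module with a data-dependent result dimension
```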
Sambhav Jain
cdbcf519f7
[NFC] Expose both raw Torch dialect and Torch dialect in backend form with Dynamo/FX ( #3541 )
...
This is a non-functional change. It merely allows intercepting the Torch
dialect during TorchDynamo export at two stages:
1. `OutputType.RAW`: This gets us the torch dialect as-imported from the
FX graph
2. `OutputType.TORCH`: This gets us the torch dialect after the raw
torch goes through DecomposeComplexOps and ReduceOpVariants.
Prior to this, there was no way of accessing the Torch dialect in
backend compliant form (right after running the
`torchdynamo-export-to-torch-backend-pipeline`) because both
[here](https://sourcegraph.com/github.com/llvm/torch-mlir@5e4f00acb13f3f849a05e5ac28ee39307a5fdbff/-/blob/python/torch_mlir/fx.py?L33 )
and
[here](https://sourcegraph.com/github.com/llvm/torch-mlir@5e4f00acb13f3f849a05e5ac28ee39307a5fdbff/-/blob/python/torch_mlir/compiler_utils.py?L138 )
the same `OutputType.TORCH` was used, meaning the second condition would
never be reached.
Since the default behavior is unchanged, this is an NFC.
2024-07-14 10:33:47 -07:00
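A hedged usage sketch of the two interception points; the keyword spelling and the `OutputType` import path below follow my reading of `fx.py` and `compiler_utils.py` and should be treated as assumptions:
```
import torch
from torch_mlir import fx
from torch_mlir.compiler_utils import OutputType

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

example = torch.randn(4)

# 1. Torch dialect exactly as imported from the FX graph
raw_module = fx.export_and_import(AddOne(), example, output_type=OutputType.RAW)

# 2. Torch dialect in backend-compliant form (after DecomposeComplexOps /
#    ReduceOpVariants via torchdynamo-export-to-torch-backend-pipeline)
torch_module = fx.export_and_import(AddOne(), example, output_type=OutputType.TORCH)
```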
Yuanqiang Liu
5e4f00acb1
[Torch] add support for aten.scatter_add ( #3534 )
2024-07-12 09:15:42 +08:00
zjgarvey
0fb8b017d8
Adds misc fixes for some padding related issues ( #3528 )
...
This patch adds a few misc pad op related changes:
1. Addresses issue <https://github.com/llvm/torch-mlir/issues/3457 >
2. Addresses issue <https://github.com/llvm/torch-mlir/issues/3442 >
3. Fixes the padding order for asymmetrically padded onnx.Conv ops
4. Enables passing quantization through those onnx.Conv op pre-paddings
5. Modifies the torch-to-linalg lowering of AtenReplicationPad2d op to
enable support for input rank != 4
Unfortunately, even with all of these changes, the e2e tests for the
ReplicationPad2d still fail the onnx config, since the torch export
procedure for rearranging the pad order is complicated enough that the
padding ints end up not being able to fold back to constants.
2024-07-11 20:01:45 -05:00
Yuanqiang Liu
b38585e077
[Torch Dialect] fix aten.nan_to_num's decomposition when inf=None ( #3530 )
...
also adds shape inference in the decomposition; see
https://github.com/llvm/torch-mlir/issues/3312
2024-07-11 08:46:40 +08:00
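For context, the eager-mode semantics the decomposition must match: when `posinf`/`neginf` are `None`, PyTorch substitutes the dtype's largest finite values (plain torch usage, not code from the PR):
```
import torch

x = torch.tensor([float("nan"), float("inf"), float("-inf"), 1.0])
y = torch.nan_to_num(x)  # defaults: nan=0.0, posinf=None, neginf=None
print(y)  # tensor([0.0000e+00, 3.4028e+38, -3.4028e+38, 1.0000e+00])
```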
Xida Ren (Cedar)
5342aa70cf
Support onnx.GRU and onnx.RNN ( #3447 )
2024-07-10 14:04:17 -04:00
Yuanqiang Liu
4bb7ddf601
[Stablehlo] enable stablehlo's python extension binding ( #3529 )
2024-07-10 13:00:13 +08:00
Yuanqiang Liu
5bee9aac63
[Stablehlo] simplify promoteType ( #3525 )
...
only provide `outElementType` when calling `promoteType`
2024-07-10 10:52:19 +08:00
zjgarvey
dcb48dd46c
[ONNX] Fix LpNormalization Lowering ( #3521 )
...
The LpNormalization lowering was previously just computing the norm,
which is incorrect. This change computes the norm and then divides the
input tensor by its norm.
I've tested this against some simple onnx models locally. I'll look into
adding a test case for this in an external test suite.
2024-07-09 15:42:26 -05:00
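A reference for the intended ONNX LpNormalization semantics (normalize along `axis`, default p=2), written as plain PyTorch rather than the actual lowering:
```
import torch

def lp_normalization(x, axis=-1, p=2):
    # ONNX LpNormalization: divide the input by its Lp norm along `axis`
    norm = torch.norm(x, p=p, dim=axis, keepdim=True)
    return x / norm

x = torch.randn(2, 5)
y = lp_normalization(x)
print(torch.norm(y, dim=-1))  # ~1.0 for every row
```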
Gaurav Shukla
0b46d1110a
[MLIR][ONNX] Add support for onnx.ScatterND ( #3479 )
...
This commit adds support for onnx.ScatterND op in the onnx pipeline.
Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-07-08 13:27:14 +05:30
Matthias Gehre
6ea6a6c2fe
TorchOnnxToTorch: Fix stack-use-after-free ( #3480 )
...
We used to move the SmallVector into an ArrayRef, after which the
SmallVector went out of scope.
Found by ASan.
2024-07-08 09:20:09 +02:00
Yuanqiang Liu
3225f20ab1
[Stablehlo] use index type as dim size, avoid to generate index_cast ( #3526 )
...
For example, the original IR is:
```
module attributes {torch.debug_module_name = "Matmul3D"} {
func.func @forward(%arg0: tensor<?x?x?xf32>, %arg1: tensor<?x?x?xf32>) -> tensor<?x?x?xf32> {
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c2 = arith.constant 2 : index
%dim = tensor.dim %arg1, %c0 : tensor<?x?x?xf32>
%0 = arith.index_cast %dim : index to i64
%dim_0 = tensor.dim %arg1, %c1 : tensor<?x?x?xf32>
%1 = arith.index_cast %dim_0 : index to i64
%dim_1 = tensor.dim %arg1, %c2 : tensor<?x?x?xf32>
%2 = arith.index_cast %dim_1 : index to i64
%from_elements = tensor.from_elements %0, %1, %2 : tensor<3xi64>
%3 = stablehlo.dynamic_broadcast_in_dim %arg1, %from_elements, dims = [0, 1, 2] : (tensor<?x?x?xf32>, tensor<3xi64>) -> tensor<?x?x?xf32>
%4 = stablehlo.dot_general %arg0, %3, batching_dims = [0] x [0], contracting_dims = [2] x [1] : (tensor<?x?x?xf32>, tensor<?x?x?xf32>) -> tensor<?x?x?xf32>
return %4 : tensor<?x?x?xf32>
}
}
```
After using IndexType, the IR is:
```
module attributes {torch.debug_module_name = "Matmul3D"} {
func.func @forward(%arg0: tensor<?x?x?xf32>, %arg1: tensor<?x?x?xf32>) -> tensor<?x?x?xf32> {
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c2 = arith.constant 2 : index
%dim = tensor.dim %arg1, %c0 : tensor<?x?x?xf32>
%dim_0 = tensor.dim %arg1, %c1 : tensor<?x?x?xf32>
%dim_1 = tensor.dim %arg1, %c2 : tensor<?x?x?xf32>
%from_elements = tensor.from_elements %dim, %dim_0, %dim_1 : tensor<3xindex>
%0 = stablehlo.dynamic_broadcast_in_dim %arg1, %from_elements, dims = [0, 1, 2] : (tensor<?x?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
%1 = stablehlo.dot_general %arg0, %0, batching_dims = [0] x [0], contracting_dims = [2] x [1] : (tensor<?x?x?xf32>, tensor<?x?x?xf32>) -> tensor<?x?x?xf32>
return %1 : tensor<?x?x?xf32>
}
}
```
The benefits of using IndexType for the shape tensor:
* simplifies the IR and avoids generating `arith.index_cast`
* gives the backend compiler a chance to decide the index width of the
shape tensor
* gives the stablehlo backend a chance to serialize dynamic-shape IR via
[shape_legalize_to_stablehlo](https://github.com/openxla/stablehlo/blob/main/stablehlo/tests/shape_legalize_to_stablehlo.mlir )
2024-07-07 18:03:03 +08:00
Ze Zhang
d466d5b809
Register fake_quantize related ops ( #3522 )
...
Register `aten.fake_quantize_per_channel_affine` and
`aten.fake_quantize_per_tensor_affine.tensor_qparams` ops
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-07-05 11:02:03 -07:00
Sagar Kulkarni
0fe74845da
[ONNX] Fix bug in ONNXToTorch PadOp's pads tensor rearrangement ( #3485 )
...
Fix the pad tensor rearrangement such that we change the representation
from [x1_begin, x2_begin, ..., x1_end, x2_end, ...] to [xn_begin, xn_end,
..., x2_begin, x2_end, x1_begin, x1_end], where x1, x2, ..., xn are the
dimensions of the pads tensor argument.
---------
Co-authored-by: zjgarvey <zjgarvey@gmail.com>
Co-authored-by: zjgarvey <47986913+zjgarvey@users.noreply.github.com>
2024-07-03 15:02:49 -05:00
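A small sketch of the rearrangement described above, going from the ONNX layout (all begins, then all ends) to the `[xn_begin, xn_end, ..., x1_begin, x1_end]` order that torch-style padding expects; the helper name is illustrative, not from the PR:
```
def rearrange_onnx_pads(pads):
    """[x1_begin, ..., xn_begin, x1_end, ..., xn_end]
       -> [xn_begin, xn_end, ..., x1_begin, x1_end]"""
    n = len(pads) // 2
    begins, ends = pads[:n], pads[n:]
    out = []
    for b, e in zip(reversed(begins), reversed(ends)):
        out.extend([b, e])
    return out

# 2-D example: x1 pads (1, 2), x2 pads (3, 4)
print(rearrange_onnx_pads([1, 3, 2, 4]))  # [3, 4, 1, 2]
```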
Scott Todd
ca0e906675
Fix `uint64_t` type. ( #3519 )
...
`u_int64_t` is nonstandard and does not exist in MSVC.
2024-07-02 16:06:20 +00:00
Yuanqiang Liu
f1e3701caf
[Stablehlo] fix compareOp with scalar's lowering ( #3518 )
...
* use the lhs tensor's element type as the compute type when the rhs is a scalar.
* previously `a != 1.0` (where `a` is an fp32 tensor) lowered to `%6 =
stablehlo.compare EQ, %4, %5, FLOAT : (tensor<2x5xf64>, tensor<2x5xf64>)
-> tensor<2x5xi1>`
* now it lowers to `%6 = stablehlo.compare EQ, %4, %5, FLOAT :
(tensor<2x5xf32>, tensor<2x5xf32>) -> tensor<2x5xi1>`
2024-07-02 15:31:06 +08:00
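The eager-mode promotion rule being matched: comparing an fp32 tensor with a Python float stays in fp32 (a quick check, not code from the PR):
```
import torch

a = torch.randn(2, 5, dtype=torch.float32)
print(torch.result_type(a, 1.0))  # torch.float32: the scalar adopts the tensor dtype
print((a != 1.0).dtype)           # torch.bool
```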
Yuanqiang Liu
e2fbded49c
[Torch Dialect] improve argmax/argmin's decomposition to support keepdim=True when dim=None ( #3514 )
2024-07-02 09:08:57 +08:00
Vivek Khandelwal
2f231f394e
Bump Onnx Version to 1.16.1 ( #3515 )
...
This commit adds support for the new data types uint4, int4, and uint8
tensor protos. It also moves some tests from failing to crashing.
Fixes https://github.com/llvm/torch-mlir/issues/3507
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-01 22:15:45 +05:30
Yuanqiang Liu
0e71a192d8
[Torch] support decomposition of aten.aminmax ( #3513 )
...
* unify the decomposition of `aten.amax` and `aten.amin`
* support `aten.amax` with `dim=()`
2024-06-29 21:44:05 +08:00
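A quick eager-mode check of the equivalence this decomposition relies on: `aten.aminmax` matches a pair of `amin`/`amax` reductions:
```
import torch

x = torch.randn(3, 4)
mn, mx = torch.aminmax(x, dim=1, keepdim=True)
assert torch.equal(mn, x.amin(dim=1, keepdim=True))
assert torch.equal(mx, x.amax(dim=1, keepdim=True))
```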
Yuanqiang Liu
f9fc741eef
[Stablehlo] support aten.any.dim, aten.min.dim ( #3500 )
...
* refactor `TorchToStablehlo/Reduction.cpp`
* add `ConvertAtenReduceWithIndicesOp` patterns
2024-06-29 16:53:33 +08:00
Yuanqiang Liu
73ba09c587
support both option -v and TORCH_MLIR_TEST_VERBOSE ( #3511 )
...
so that we can run `python3 -m e2e_testing.main -v` to specify
`verbose=True`
2024-06-29 10:43:31 +08:00
jinchen
3915db0a86
[ONNX] Add OnnxToTorch support for CenterCropPad ( #3496 )
2024-06-28 12:47:29 -07:00
Aart Bik
6fece25ff3
[torch-mlir][sparse] add decomposition features to sparse compiler ( #3505 )
...
Fixes https://github.com/llvm/torch-mlir/issues/3499
2024-06-28 10:18:36 -07:00
zjgarvey
af236dab66
Add support for multiple dynamic reassociation dims for unflatten.int ( #3504 )
...
Addresses an issue with onnx.Gather lowering to linalg:
<https://github.com/nod-ai/SHARK-Turbine/issues/242 >
The builder for tensor.expand_shape, without an explicitly provided
output shape, fails to infer an output shape in the case of multiple
dynamic reassociation dims. I tried adding the output shape explicitly
for tensor.expand_shape, but ran into compilation issues later on (see
<https://github.com/iree-org/iree/issues/17760 >).
This PR adds support by lowering this op to tensor.reshape when multiple
dynamic reassociation dims are provided.
2024-06-28 09:59:51 -07:00
Max191
a1c4089e71
Fix unused variable warning from assertion variable ( #3512 )
...
Inlines a variable that is not used elsewhere into an assertion, fixing
build warnings.
2024-06-28 12:20:29 -04:00
Jiawei Wu
f75cbb4df9
[torch dialect] emit aten.fmax/fmin and add decomposition patterns ( #3510 )
2024-06-29 00:07:55 +08:00
Phaneesh Barwaria
5a627c46b7
onnx.DFT basic support ( #3463 )
...
- adds support for DFT v20 on the FFT and IFFT path
- adds required skeleton code for IFFT ops to be recognised in TMlir
2024-06-28 20:08:43 +05:30