Commit Graph

1384 Commits (308c45e61ae14040ad7a046962b3017d3202de9b)

Author SHA1 Message Date
Xinyu Yang 308c45e61a
[Torch] Fix PrimListUnpackOp::getCanonicalizationPatterns (#3140)
Fix the case where PrimListUnpackOp's result count is not equal to the PrimList
length.
See the following example:
```python
    def forward(self, x):
        if len(x.shape) == 5:
            b0, t, c0, h0, w0 = x.shape
            b, c, h, w = torch.mul(b0, t), c0, h0, w0
        else:
            b1, c1, h1, w1 = x.shape
            b, c, h, w = b1, c1, h1, w1
        res = torch.reshape(x, [b, c, h, w])
        return res
```
Without this fix, the following error message will occur:
```
/root/torch-mlir/externals/llvm-project/mlir/lib/IR/PatternMatch.cpp:118: virtual void mlir::RewriterBase::replaceOp(mlir::Operation *, mlir::ValueRange): Assertion `op->getNumResults() == newValues.size() && "incorrect # of replacement values"' failed.
```
2024-04-11 19:48:49 +08:00
Xinyu Yang 6524838bcb
[Torch] Add general AdaptiveAvgPool2dOp decompose support (#3111)
Previously, it could only handle the cases where outputsize == (1, 1) or
outputsize == (input_H, input_W). Now it supports all cases where
input_H % output_H == 0 && input_W % output_W == 0.
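For reference, a minimal PyTorch-level sketch of the divisible case this decomposition now handles (illustration only, not the torch-mlir rewrite itself):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 12, 12)
# 12 % 4 == 0, so adaptive average pooling reduces to a plain avg_pool2d
# with kernel_size = stride = input_size // output_size
adaptive = F.adaptive_avg_pool2d(x, (4, 4))
plain = F.avg_pool2d(x, kernel_size=3, stride=3)
assert torch.allclose(adaptive, plain)
```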
2024-04-11 17:02:59 +08:00
Yuanqiang Liu 88533b1968
[Stablehlo] fix aten.arange's lowering to stablehlo (#3138)
* promote to f64 to do the division, avoiding integer (floor) division on i64
* refactor torch-to-stablehlo-pipeline
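In concrete terms, the element count of `arange` needs a true (non-floor) division; a small sketch of the numbers involved:
```python
import math

start, end, step = 0, 5, 2
# arange length is ceil((end - start) / step): torch.arange(0, 5, 2) yields
# 0, 2, 4, i.e. 3 elements; i64 floor division would give only 2
floor_len = (end - start) // step            # 2 (wrong)
true_len = math.ceil((end - start) / step)   # 3 (right)
```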
2024-04-11 15:55:56 +08:00
zjgarvey aa5e150313
Adds Some uint8 Quantization Fixes (#3122)
1. Changes the linalg lowering for dequantization ops to always sign
cast to float to prevent misrepresenting uint32 overflow on subtraction
with zero point.
2. Adds a basic quantized model test which only quantizes and
dequantizes and now passes with these changes in linalg and onnx
configs.
3. Changes the aten.mm lowering to allow mismatched quantized types. 
4. If a quantized matmul arg is uint8, we shift by 128 to faithfully
represent the quantization as a signed i8 quantization (see the sketch
below). This worked fine in the AtenMmOp lowering, but I'd be happy to
move it to a rewrite in FuseQuantizedOps.cpp instead if that seems more
appropriate.

With changes 3 and 4, the QuantizedMLP_basic and
QuantizedSingleLayer_basic e2e tests now pass with the onnx config.
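A small numeric sketch of the uint8-to-i8 shift described in item 4 (pure PyTorch, only to show why the dequantized value is unchanged):
```python
import torch

scale, zp_u8 = 0.1, 200
q_u8 = torch.tensor([0, 127, 255], dtype=torch.int32)
# shifting both the stored values and the zero point by 128 re-expresses a
# uint8 quantization as a signed i8 quantization; (q - zp) * scale is the
# same either way
q_i8, zp_i8 = q_u8 - 128, zp_u8 - 128
assert torch.allclose((q_u8 - zp_u8) * scale, (q_i8 - zp_i8) * scale)
```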
2024-04-10 12:36:58 -07:00
Xinyu Yang 5eb0cf9104
[Torch] Add decompose of AtenToPrimDeviceOp (#3131)
Since device information isn't relevant to torch-mlir, the op is decomposed away.
2024-04-10 22:26:48 +08:00
Yuanqiang Liu 8d5e2578b0
[Stablehlo] lowering aten.view to shape.num_elements + stablehlo.compute_reshape_shape (#3125)

since `aten.view` supports at most one `-1` in the dim list. The original
calculation of `numel` was wrong when there was a `-1` in the dim list.
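For reference, the `-1` inference that the `numel` calculation feeds, in plain PyTorch:
```python
import torch

x = torch.randn(2, 3, 4)   # numel == 24
# at most one -1 is allowed; it is inferred as numel // prod(known dims)
y = x.view(6, -1)          # -1 becomes 24 // 6 == 4
assert y.shape == (6, 4)
```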
2024-04-09 14:54:57 +08:00
Xinyu Yang 42a16fa912
[Torch] Support Aten_CastFloatOp. (#3115)
By canonicalizing Aten_CastFloatOp into AtenToDtypeOp.
2024-04-09 11:06:53 +08:00
Xida Ren (Cedar) dd967eb199
[ONNX] Support onnx.LSTM (#2969)
This PR only adds a lit test. In lieu of an e2e test, https://github.com/nod-ai/SHARK-TestSuite/pull/142 made sure that the lowering works and the numbers check out.

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-08 12:23:33 -07:00
Vivek Khandelwal 1d6e4c3d77
[MLIR][TORCH] Add OnnxToTorch lowering for Einsum op (#3117)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-08 22:38:01 +05:30
Xinyu Yang 84c24e5771
[Torch] Support Aten__And__ScalarOp (#3114) 2024-04-08 20:24:17 +08:00
Yuanqiang Liu 2c56ef9252
[Torch Dialect] canonicalize aten.sign to aten.sgn (#3112)
* `aten.sign` is a subset of `aten.sgn` (`aten.sgn` supports complex
types).
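The PyTorch-level relationship between the two ops, for reference:
```python
import torch

r = torch.tensor([-2.0, 0.0, 5.0])
assert torch.equal(torch.sign(r), torch.sgn(r))   # identical on real input

z = torch.tensor([3 + 4j, 0j])
torch.sgn(z)   # complex case: z / |z| -> tensor([0.6+0.8j, 0.+0.j])
```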
2024-04-08 20:05:42 +08:00
Yuanqiang Liu 43d54efd14
[cmake] link TorchMLIRTorchConversionPasses to TorchMLIRConversionPasses (#3113)
* `TorchMLIRTorchConversionPasses` was missing dependencies on
`TorchMLIRTorchToStablehlo` and `TorchMLIRTorchToTensor`.
* use `TorchMLIRConversionPasses` instead of scattered targets.
2024-04-08 14:44:34 +08:00
Yuanqiang Liu 498ab997cd
[Stablehlo] lowering aten.log1p to stablehlo.log_plus_one (#3110) 2024-04-07 17:01:58 +08:00
Yuanqiang Liu 0a00f38a7e
[Stablehlo] add stablehlo-aggressive-simplification in e2e test (#3109)
* so that more stablehlo e2e testcases would pass.
2024-04-07 10:48:11 +08:00
Rob Suderman 9d9a05366e
[torch] Fix aten.squeeze lowering to use result shape (#3106)
Squeezes can be ambiguous without the output shape information. For
instance, (1, 1, 256) squeezed can be either (1, 256) or (256). We need to
check the result shape to know which form is intended.
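The ambiguity in plain PyTorch terms:
```python
import torch

x = torch.zeros(1, 1, 256)
print(x.squeeze().shape)    # torch.Size([256])    -- every unit dim removed
print(x.squeeze(0).shape)   # torch.Size([1, 256]) -- only dim 0 removed
# without the result shape, the lowering cannot tell these two apart
```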
2024-04-04 09:43:12 -07:00
Vivek Khandelwal af54d27820
[MLIR][TORCH] Fix Onnx.TopK lowering (#3103)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 22:12:48 +05:30
Vivek Khandelwal 7e778e2179
build: manually update PyTorch version (#3094)
Set PyTorch and TorchVision version to nightly release 2024-04-01.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 10:48:37 +05:30
Vivek Khandelwal ce7d4f1660
[MLIR][TORCH] Fix Onnx.ReduceSum lowering for failing e2e tests (#3095)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 09:57:19 +05:30
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to de-couple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
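A hedged sketch of the kind of program this targets (the helper below is purely illustrative, not part of the pass):
```python
import torch

def reshape_via_shape_tensor(x: torch.Tensor) -> torch.Tensor:
    # the shape is materialized as a tensor and then scalarized for the
    # reshape; without value propagation, one dynamic entry can make every
    # output dim of the reshape look dynamic even when others are static
    s = torch.tensor(x.shape)
    return x.reshape(int(s[0]), int(s[1] * s[2]))
```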
2024-04-02 16:19:57 -07:00
zjgarvey 40e762ca42
Adds result types to a prelu decomp (#3098)
This adds explicit result types instead of relying on shape/dtype
computations.

Solves a regression issue with IREE: #3092
2024-04-02 11:41:56 -07:00
Yuanqiang Liu 6cbb2f7ae0
[Stablehlo] add stablehlo-canonicalize-dynamism when lowering (#3097)
so that many stablehlo e2e testcases could pass
2024-04-02 22:47:24 +08:00
Vivek Khandelwal d1f770c620
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for ReduceMean op (#3008)
This commit also cleans up the OnnxToTorch lowering for the ReduceMean
op and adds support for handling edge cases.

Signed-Off By: Vivek Khandelwal vivekkhandelwal1424@gmail.com
2024-04-02 16:54:04 +05:30
Xinyu Yang ac1cd3d78a
[Torch] Support AtenDivTensorModeOp with static int input for linalg and stablehlo backend (#3088) 2024-04-02 17:28:53 +08:00
Thomas Dietert d2432bbe5a
[MLIR][Torch] Do not convert bias tensor to element type if NoneType (#3072)
The `convertTensorToElementType` function expects its argument to have
a valid tensor type that is not `Torch::NoneType`. This PR checks that
the bias tensor is not of type `Torch::NoneType` before calling
`convertTensorToElementType` on the bias tensor argument in the
`matchAndRewrite` member function of the `ConvertAtenConvolutionOp`
class.
2024-04-02 14:19:26 +05:30
ptrifunovic98 1c8c47d483
Add complex support for aten.norm and similar operations (#3052)
Add support for complex-type input tensors for norm, vector norm, and
Frobenius norm operations.
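A quick PyTorch reference point: norms of complex tensors are computed from the element magnitudes.
```python
import torch

z = torch.tensor([3 + 4j, 0j])
print(torch.linalg.vector_norm(z))   # tensor(5.) -- uses |z|
print(torch.linalg.norm(z, ord=2))   # same value for the vector 2-norm
```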
2024-04-02 14:03:30 +05:30
zjgarvey 532d297c46
[ONNX] Preliminary Work Towards Supporting QuantizedMLP_basic onnx e2e test (#3089)
See the related issues here:
[SHARK-Turbine#556](https://github.com/nod-ai/SHARK-Turbine/issues/556)

1. Adds uint8 casting to onnx.Cast op
2. Fixes an issue with onnx.DequantizeLinear when the scale comes with
shape [1].
3. Adds support for unsigned types in an AtenItemOp folder
4. Adds a simpler quantized model for easier debugging
5. Adds a fusion pass to convert [quant -> dequant -> transpose -> mm]
patterns to [transpose -> quant -> mm].
6. Moved some xfails that are still not passing, but for different
reasons than onnx.cast failures.
2024-04-01 16:21:05 -07:00
Thomas Dietert 3c33dbd987
[MLIR][Torch] Canonicalize torch.from_i1 and torch.to_i1 (#3067)
When lowering `torch.aten.convolution`, it is expected that the
'transposed' argument is a torch.constant operation. In some cases, the
argument was a `from_i1` operation converting an `arith.constant`
operation into a torch.bool. This is not wrong semantically, but instead
of generalizing the legality of the `torch.aten.convolution` op, we
canonicalize `arith.constant` ops followed by `from_i1` ops to
`torch.bool` ops.

For example:
```
//===-------------------------------------------===//
Legalizing operation : 'torch.aten.convolution'(0x124705b90) {
  %33 = "torch.aten.convolution"(%arg0, %20, %21, %31, %29, %30, %19, %32, %0) : (!torch.vtensor<[1,1,28,28],f32>, !torch.vtensor<[10,1,5,5],f32>, !torch.vtensor<[10],f32>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int) -> !torch.vtensor<[1,10,24,24],f32>

  * Fold {
  } -> FAILURE : unable to fold

  * Pattern : 'torch.aten.convolution -> ()' {
    ** Failure : unimplemented: only constant transposed supported.      <-- Resolved by this PR
  } -> FAILURE : pattern failed to match

  * Pattern : 'torch.aten.convolution -> ()' {
    ** Failure : not a supported Scalar to Tensor like op
  } -> FAILURE : pattern failed to match

  * Pattern : 'torch.aten.convolution -> ()' {
    ** Failure : not a supported elementwise op
  } -> FAILURE : pattern failed to match

  * Pattern : 'torch.aten.convolution -> ()' {
    ** Failure : not a supported reduce op
  } -> FAILURE : pattern failed to match
} -> FAILURE : no matched legalization pattern
//===-------------------------------------------===//
<stdin>:21:11: error: failed to legalize operation 'torch.aten.convolution' that was explicitly marked illegal
    %17 = torch.operator "onnx.Conv"(%arg0, %0, %1) {torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [5 : si64, 5 : si64], torch.onnx.pads = [0 : si64, 0 : si64, 0 : si64, 0 : si64], torch.onnx.strides = [1 : si64, 1 : si64]} : (!torch.vtensor<[1,1,28,28],f32>, !torch.vtensor<[10,1,5,5],f32>, !torch.vtensor<[10],f32>) -> !torch.vtensor<[1,10,24,24],f32> 
          ^
<stdin>:21:11: note: see current operation: %33 = "torch.aten.convolution"(%arg0, %20, %21, %31, %29, %30, %19, %32, %0) : (!torch.vtensor<[1,1,28,28],f32>, !torch.vtensor<[10,1,5,5],f32>, !torch.vtensor<[10],f32>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int) -> !torch.vtensor<[1,10,24,24],f32>
```

Additionally, we require the canonicalization of `to_i1` operating on a
torch.constant bool to an `arith.constant ... : i1` for the e2e tests to
pass successfully.
2024-04-01 14:25:51 -07:00
penguin_wwy b98f7f75dc
[stablehlo] Reduce unnecessary template specialization code (#3047) 2024-04-01 14:18:49 -07:00
Xinan Jiang(姜曦楠) 1cdae6bc68
[MLIR][TORCH] Add support for lowering aten.Int.bool to arith (#3083)
Currently there is no lowering for `aten.Int.bool` in the
`convert-torch-to-arith` pass. This PR adds that support.

Below is the unit test:
```
func.func @torch.aten.Int.bool(%arg0: !torch.bool) -> !torch.int {
  %0 = torch.aten.Int.bool %arg0 : !torch.bool -> !torch.int
  return %0 : !torch.int
}
```
2024-04-01 10:05:08 -07:00
Vivek Khandelwal 6844c84702
[MLIR][Torch] Fix OnnxToLinalg lowering for AvgPool op (#3076)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-01 22:14:14 +05:30
Gaurav Shukla 129a79417a
[MLIR][ONNX] Fix onnx.gather_nd implementation (#3070)
The indices should be expanded before the torch.gather operation.

Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-04-01 20:17:09 +05:30
Xinyu Yang da88efad89
[Torch] Fix bug of DecomposeAtenSelectIntOp (#3087)
Fix a bug in DecomposeAtenSelectIntOp: it may use resultTy before
resultTy has been inferred.

```
    auto resultTy = op.getType().cast<BaseTensorType>();
    if (sliceTy.getSizes().size() == resultTy.getSizes().size()) {
      rewriter.replaceOp(op, slice);
      return success();
    }

```

So I added a restriction.
2024-04-01 21:25:02 +08:00
Jiawei Wu 76080936d4
[stablehlo] add aten.index_put and aten.scatter_add op conversion support (#3086) 2024-04-01 19:39:49 +08:00
Xinyu Yang 40008b025a
[Torch] Support prelu decomposition (#3069) 2024-03-29 08:05:00 +08:00
zjgarvey c19fc9ba47
[ONNX] Fixes Issue with Dynamic Dims in GlobalAveragePool -> Torch Conversion (#3053)
Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were
failing due to numerics. This was a result of passing -1 as the kernel
size in the lowering for the corresponding onnx op GlobalAveragePool.
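For clarity, the intended equivalence (the kernel must be the actual spatial extent rather than -1), sketched in PyTorch:
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8)
# GlobalAveragePool averages over the full spatial extent, so the avg_pool2d
# kernel must be the (possibly dynamic) spatial shape itself
global_avg = F.avg_pool2d(x, kernel_size=x.shape[-2:])
assert torch.allclose(global_avg, F.adaptive_avg_pool2d(x, 1))
```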
2024-03-28 09:43:09 -07:00
Xinyu Yang e6e7689a24
[Torch] support decompose aten.einsum with ellipsis slicing (#3056) 2024-03-27 12:42:10 -07:00
Xida Ren (Cedar) 5f325749f9
add lowerings for AtenLtIntOp and AtenLeIntOp (#3061)
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-27 10:06:43 -07:00
Yuanqiang Liu 0a581a97a7
[Torch Dialect] enhance aten.int.tensor's canonicalize (#3058)
Support folding with a literal vtensor.
Change it to a canonicalization because this pattern creates a new op.
2024-03-27 09:51:58 +08:00
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dim in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
Vivek Khandelwal 9ae33e482e
[MLIR][TORCH] Add OnnxToTorch lowering for ops (#3049)
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-25 20:29:07 +05:30
schnkmwt 1fcbfa87ec
Implement linalg lowering of diag_embed torch op (#2885)
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in https://github.com/nod-ai/SHARK-Turbine/issues/288
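For reference, the op's behavior at the PyTorch level:
```python
import torch

v = torch.tensor([1.0, 2.0, 3.0])
# diag_embed places the last dim of the input on the diagonal of the two
# appended trailing dims, producing a (batch of) square matrices
torch.diag_embed(v)
# tensor([[1., 0., 0.],
#         [0., 2., 0.],
#         [0., 0., 3.]])
```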

---------

Co-authored-by: sachink <sachink@xilinx.com>
2024-03-22 16:32:50 -07:00
zjgarvey 99b3a5f117
Converts all Adaptive Pooling Ops to Linalg (#2808)
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOp's.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions.
For example, applying adaptive_avg_pool2d to an input tensor of rank 3.
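In plain PyTorch, the rank-3 case looks like this:
```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 16, 16)             # rank 3: channels only, no batch dim
y = F.adaptive_avg_pool2d(x, (4, 4))   # result shape (3, 4, 4)
```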

After [pytorch #118162](https://github.com/pytorch/pytorch/pull/118162)
gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-03-22 11:05:20 -07:00
zjgarvey 6aa481c204
[ONNX] LogSoftmax to Torch (#3024)
This PR adds support for onnx.LogSoftmax both for old versions (<13,
with axis >=0), and new versions (13).
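A hedged sketch of the pre-13 semantics (the ONNX spec coerces the input to 2-D at `axis` before applying log-softmax; the helper name below is illustrative):
```python
import math
import torch
import torch.nn.functional as F

def logsoftmax_pre13(x: torch.Tensor, axis: int) -> torch.Tensor:
    # opset < 13: flatten to [prod(dims[:axis]), prod(dims[axis:])], apply
    # log_softmax over the trailing block, then restore the original shape
    lead = math.prod(x.shape[:axis])
    return F.log_softmax(x.reshape(lead, -1), dim=1).reshape(x.shape)
```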
2024-03-22 11:01:39 -07:00
Gaurav Shukla 50635dd509
[ONNX][MLIR] Add support for onnx.gather_nd (#2988)
Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-03-22 21:38:39 +05:30
Rob Suderman 3a56714bff
[torch] Fix clamp ranges on quantize_per_tensor on unsigned (#3018)
SExtValue was used for `int` and `uint` clamp values. This caused the
result to always be output as `zero`.
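A quick PyTorch reference for the unsigned behavior being matched:
```python
import torch

x = torch.tensor([-10.0, 0.6, 300.0])
q = torch.quantize_per_tensor(x, scale=1.0, zero_point=0, dtype=torch.quint8)
# unsigned quantization clamps to [0, 255]; sign-extending those bounds is
# what caused the old lowering to clamp everything to zero
print(q.int_repr())   # tensor([  0,   1, 255], dtype=torch.uint8)
```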
2024-03-20 13:37:47 -07:00
Xida Ren (Cedar) cb5cb506df
Fix SCF for loop failing to convert to linalg when a tensor argument is supplied to the loop block (#3040)
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-20 11:04:02 -07:00
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example,
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128).
The output tensor seems to always return a tensor of zeros, which also
occurs when running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting back to uint8 on
output; the second problem is resolved by PR #3018.
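A hedged sketch of the DynamicQuantizeLinear computation being lowered (following the ONNX spec; the helper is illustrative only and skips the degenerate all-zero input case):
```python
import torch

def dynamic_quantize_linear(x: torch.Tensor):
    # ONNX DynamicQuantizeLinear: uint8 quantization whose range always
    # includes zero; returns (quantized values, scale, zero point)
    qmin, qmax = 0.0, 255.0
    rmin = torch.minimum(x.min(), torch.tensor(0.0))
    rmax = torch.maximum(x.max(), torch.tensor(0.0))
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = torch.clamp(torch.round(qmin - rmin / scale), qmin, qmax)
    y = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return y.to(torch.uint8), scale, zero_point.to(torch.uint8)
```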
2024-03-20 10:58:25 -07:00
jinchen 9cf6c45a39
Add OnnxToTorch support for Compress op (#3025) 2024-03-20 17:12:08 +00:00
Abhishek-TyRnT df02692726
Dynamic size support for flatten (#3005)
Added support for dynamic shapes in the `flattenusingints` op in the tosa
dialect. Thanks to this, some Argmax tests now pass.
This PR fixes https://github.com/llvm/torch-mlir/issues/3004.

The following tests pass after this PR
 ```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
2024-03-19 15:19:29 -07:00
zjgarvey 7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 (#3041)
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
2024-03-19 13:35:05 -07:00