Commit Graph

755 Commits (42a16fa9121f5e9725786d5e3ef8a4da0cdc0d3c)

Author SHA1 Message Date
Xida Ren (Cedar) dd967eb199
[ONNX] Support onnx.LSTM (#2969)
This PR only adds a lit test. In lieu of an e2e test, https://github.com/nod-ai/SHARK-TestSuite/pull/142 made sure that the lowering works and the numbers check out.

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-08 12:23:33 -07:00
Vivek Khandelwal 1d6e4c3d77
[MLIR][TORCH] Add OnnxToTorch lowering for Einsum op (#3117)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-08 22:38:01 +05:30
Xinyu Yang 84c24e5771
[Torch] Support Aten__And__ScalarOp (#3114) 2024-04-08 20:24:17 +08:00
Yuanqiang Liu 498ab997cd
[Stablehlo] lowering aten.log1p to stablehlo.log_plus_one (#3110) 2024-04-07 17:01:58 +08:00
Rob Suderman 9d9a05366e
[torch] Fix aten.squeeze lowering to use result shape (#3106)
Squeezes can be ambiguous without the output shape information. For
instance (1, 1, 256) squeezed can be either (1, 256) or (256). We need
to check the resulting shape to know what the shape should look like.
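A minimal sketch (value names and shapes assumed) of why the result type must be consulted when the operand dims are dynamic:
```
// Hypothetical IR: either result type can be valid for the same dynamically shaped
// operand, so the lowering reads the annotated result shape instead of guessing.
%a = torch.aten.squeeze %x : !torch.vtensor<[?,?,256],f32> -> !torch.vtensor<[256],f32>
%b = torch.aten.squeeze %y : !torch.vtensor<[?,?,256],f32> -> !torch.vtensor<[?,256],f32>
```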
2024-04-04 09:43:12 -07:00
Vivek Khandelwal af54d27820
[MLIR][TORCH] Fix Onnx.TopK lowering (#3103)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 22:12:48 +05:30
Vivek Khandelwal ce7d4f1660
[MLIR][TORCH] Fix Onnx.ReduceSum lowering for failing e2e tests (#3095)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-03 09:57:19 +05:30
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars, this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to de-couple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
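A minimal sketch (value names and shapes assumed) of the decoupled form the pass aims to recover, where the reshape sizes are individual scalars rather than a shape tensor, so the static dims stay visible next to the single dynamic dim:
```
// Hypothetical IR: only dim 0 is dynamic; the remaining sizes are plain constants.
%int0 = torch.constant.int 0
%int64 = torch.constant.int 64
%dim0 = torch.aten.size.int %x, %int0 : !torch.vtensor<[?,4096],f32>, !torch.int -> !torch.int
%sizes = torch.prim.ListConstruct %dim0, %int64, %int64 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
%0 = torch.aten.view %x, %sizes : !torch.vtensor<[?,4096],f32>, !torch.list<int> -> !torch.vtensor<[?,64,64],f32>
```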
2024-04-02 16:19:57 -07:00
Vivek Khandelwal d1f770c620
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for ReduceMean op (#3008)
This commit also cleans up the OnnxToTorch lowering for the ReduceMean
op and adds support for handling edge cases.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-02 16:54:04 +05:30
Xinyu Yang ac1cd3d78a
[Torch] Support AtenDivTensorModeOp with static int input for linalg and stablehlo backend (#3088) 2024-04-02 17:28:53 +08:00
Thomas Dietert d2432bbe5a
[MLIR][Torch] Do not convert bias tensor to element type if NoneType (#3072)
The `convertTensorToElementType` function expects its argument to have
a valid tensor type that is not `Torch::NoneType`. This PR checks that
the bias tensor is not of type `Torch::NoneType` before calling
`convertTensorToElementType` on the bias tensor argument in the
`matchAndRewrite` member function of the `ConvertAtenConvolutionOp`
class.
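A minimal sketch (value names and shapes assumed) of the case the check guards against, where the bias operand is the torch `None` value rather than a tensor:
```
// Hypothetical IR: %none has type !torch.none, so it must not be passed to
// convertTensorToElementType.
%none = torch.constant.none
%int0 = torch.constant.int 0
%int1 = torch.constant.int 1
%false = torch.constant.bool false
%stride = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
%padding = torch.prim.ListConstruct %int0, %int0 : (!torch.int, !torch.int) -> !torch.list<int>
%dilation = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
%out_pad = torch.prim.ListConstruct %int0, %int0 : (!torch.int, !torch.int) -> !torch.list<int>
%0 = torch.aten.convolution %input, %weight, %none, %stride, %padding, %dilation, %false, %out_pad, %int1 : !torch.vtensor<[1,3,8,8],f32>, !torch.vtensor<[4,3,3,3],f32>, !torch.none, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[1,4,6,6],f32>
```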
2024-04-02 14:19:26 +05:30
ptrifunovic98 1c8c47d483
Add complex support for aten.norm and similar operations (#3052)
Add support for complex-type input tensors for norm, vector norm, and
Frobenius norm operations.
2024-04-02 14:03:30 +05:30
zjgarvey 532d297c46
[ONNX] Preliminary Work Towards Supporting QuantizedMLP_basic onnx e2e test (#3089)
See the related issues here:
[SHARK-Turbine#556](https://github.com/nod-ai/SHARK-Turbine/issues/556)

1. Adds uint8 casting to onnx.Cast op
2. Fixes an issue with onnx.DequantizeLinear when the scale comes with
shape [1].
3. Adds support for unsigned types in an AtenItemOp folder
4. Adds a simpler quantized model for easier debugging
5. Adds a fusion pass to convert [quant -> dequant -> transpose -> mm]
patterns to [transpose -> quant -> mm].
6. Moved some xfails that are still not passing, but for different
reasons than onnx.cast failures.
2024-04-01 16:21:05 -07:00
penguin_wwy b98f7f75dc
[stablehlo] Reduce unnecessary template specialization code (#3047) 2024-04-01 14:18:49 -07:00
Xinan Jiang(姜曦楠) 1cdae6bc68
[MLIR][TORCH] Add support for lowering aten.Int.bool to arith (#3083)
Currently there is no lowering for `aten.Int.bool` in the `convert-torch-to-arith`
pass. This PR adds that support.

Below is the UT.
```
func.func @torch.aten.Int.bool(%arg0: !torch.bool) -> !torch.int {
  %0 = torch.aten.Int.bool %arg0 : !torch.bool -> !torch.int
  return %0 : !torch.int
}
```
2024-04-01 10:05:08 -07:00
Vivek Khandelwal 6844c84702
[MLIR][Torch] Fix OnnxToLinalg lowering for AvgPool op (#3076)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-01 22:14:14 +05:30
Gaurav Shukla 129a79417a
[MLIR][ONNX] Fix onnx.gather_nd implementation (#3070)
The indices should be expanded before the torch.gather operation.

Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-04-01 20:17:09 +05:30
Jiawei Wu 76080936d4
[stablehlo] add aten.index_put and aten.scatter_add op conversion support (#3086) 2024-04-01 19:39:49 +08:00
zjgarvey c19fc9ba47
[ONNX] Fixes Issue with Dynamic Dims in GlobalAveragePool -> Torch Conversion (#3053)
Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were
failing due to numerics. This was a result of passing -1 as the
kernel size in the lowering for the corresponding onnx op
GlobalAveragePool.
2024-03-28 09:43:09 -07:00
Xida Ren (Cedar) 5f325749f9
add lowerings for AtenLtIntOp and AtenLeIntOp (#3061)
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-27 10:06:43 -07:00
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dims in the `torch.view` reshape sizes. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
Vivek Khandelwal 9ae33e482e
[MLIR][TORCH] Add OnnxToTorch lowering for ops (#3049)
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-25 20:29:07 +05:30
schnkmwt 1fcbfa87ec
Implement linalg lowering of diag_embed torch op (#2885)
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in https://github.com/nod-ai/SHARK-Turbine/issues/288

---------

Co-authored-by: sachink <sachink@xilinx.com>
2024-03-22 16:32:50 -07:00
zjgarvey 99b3a5f117
Converts all Adaptive Pooling Ops to Linalg (#2808)
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOp's.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions.
For example, applying adaptive_avg_pool2d to an input tensor of rank 3.
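A minimal sketch of that unbatched case (value names and shapes assumed):
```
// Hypothetical IR: a rank-3 input with no batch dimension.
%int7 = torch.constant.int 7
%out_size = torch.prim.ListConstruct %int7, %int7 : (!torch.int, !torch.int) -> !torch.list<int>
%0 = torch.aten.adaptive_avg_pool2d %x, %out_size : !torch.vtensor<[3,64,64],f32>, !torch.list<int> -> !torch.vtensor<[3,7,7],f32>
```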

After [pytorch #118162](https://github.com/pytorch/pytorch/pull/118162)
gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-03-22 11:05:20 -07:00
zjgarvey 6aa481c204
[ONNX] LogSoftmax to Torch (#3024)
This PR adds support for onnx.LogSoftmax both for old versions (<13,
with axis >=0), and new versions (13).
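A minimal sketch (value names, shapes, and attribute values assumed) of the opset-13 form with the default axis of -1:
```
// Hypothetical IR as produced by the onnx importer.
%0 = torch.operator "onnx.LogSoftmax"(%x) {torch.onnx.axis = -1 : si64} : (!torch.vtensor<[3,4],f32>) -> !torch.vtensor<[3,4],f32>
```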
2024-03-22 11:01:39 -07:00
Gaurav Shukla 50635dd509
[ONNX][MLIR] Add support for onnx.gather_nd (#2988)
Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-03-22 21:38:39 +05:30
Rob Suderman 3a56714bff
[torch] Fix clamp ranges on quantize_per_tensor on unsigned (#3018)
SExtValue was used for both `int` and `uint` clamp values. This caused the
result to always be output as `zero`.
2024-03-20 13:37:47 -07:00
Xida Ren (Cedar) cb5cb506df
Fix SCF Forloop fails to convert to linalg when a tensor argument is supplied to the loop block (#3040)
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-20 11:04:02 -07:00
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example, when
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128).
The output tensor seems to always return a tensor of zeros, which also
occurs when running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the output back to
uint8 on output; the second problem is resolved by PR #3018.
2024-03-20 10:58:25 -07:00
jinchen 9cf6c45a39
Add OnnxToTorch support for Compress op (#3025) 2024-03-20 17:12:08 +00:00
Abhishek-TyRnT df02692726
Dynamic size support for flatten (#3005)
Added support for dynamic shapes in the `flattenusingints` op in the tosa
dialect. Due to this, some Argmax tests now pass.
This PR fixes this issue: https://github.com/llvm/torch-mlir/issues/3004

The following tests pass after this PR:
 ```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
2024-03-19 15:19:29 -07:00
zjgarvey 7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 (#3041)
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
2024-03-19 13:35:05 -07:00
Pavani Chowdary c51e2130f2
[onnx] support for lowering mod op from onnx to torch (#2859)
nod-ai/Shark-Turbine#267

---------

Authored-by: boddu.pavani@research.iiit.ac.in
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-18 17:54:37 +05:30
Xinan Jiang(姜曦楠) d8a52e82c2
[onnx] Fix onnx.cast cases between int32 and int64 (#2982)
Two modifications:
1. torch.int64 is enum 4 in TORCH_DTYPE_TO_INT
2. Add int32 support (see the sketch below)
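A minimal sketch (value names and shapes assumed) of an int64-to-int32 cast; the `to` code 6 is onnx's INT32:
```
// Hypothetical IR as produced by the onnx importer.
%0 = torch.operator "onnx.Cast"(%x) {torch.onnx.to = 6 : si64} : (!torch.vtensor<[3],si64>) -> !torch.vtensor<[3],si32>
```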
2024-03-15 17:14:09 +00:00
Nithin Meganathan 798bfd7dff
Adds accumulator types in TorchToLinalg for `AtenMmOp` and `AtenConvolutionOp` (#3027) 2024-03-14 16:40:40 -07:00
aldesilv 6fa21bd8b1
OnnxToTorch lower celu op (#2920) 2024-03-13 20:34:10 +05:30
Yuanqiang Liu ad6159c7cb
[Stablehlo] lowering aten.round to stablehlo.round_nearest_even (#3011) 2024-03-12 08:58:20 +08:00
Rob Suderman 8fb28661f9
[onnx] Fix onnx.ReduceMean lowering (#3002)
Reduce mean lowerings did not successfully lower to `linalg` via torch.
There were two separate paths that could be consolidated to a single
simpler pass. This resulted in a significant improvement in test
coverage.
2024-03-11 11:32:53 -07:00
Rob Suderman bd7f1baa42
[onnx] Fix expand operation for dynamic shape max (#3001)
If the broadcast shape is length-1 at a dim while the input dim is `?`,
then we need to broadcast to the dynamic dim. This is equivalent to
taking the max of the two dimensions.
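A minimal sketch (value names and shapes assumed): dim 0 of the target shape is 1 while the input dim is dynamic, so the output keeps the dynamic dim:
```
// Hypothetical IR: the runtime shape operand is assumed to hold [1, 4].
%0 = torch.operator "onnx.Expand"(%x, %shape) : (!torch.vtensor<[?,1],f32>, !torch.vtensor<[2],si64>) -> !torch.vtensor<[?,4],f32>
```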
2024-03-08 16:23:07 -08:00
Rob Suderman 0723584936
[torch] Add folder for torch.aten.*.Scalar comparisons (#3000)
This folds small versions of the tensor-scalar comparison operators, as
they are commonly used for shape computations. This includes le, lt, ge,
gt, eq, and ne.
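A minimal sketch (value names and constants assumed) of the kind of comparison that can now fold away instead of reaching the backend:
```
// Hypothetical IR: a constant rank-0 tensor compared against a constant scalar.
%int4 = torch.constant.int 4
%c = torch.vtensor.literal(dense<2> : tensor<si64>) : !torch.vtensor<[],si64>
%0 = torch.aten.lt.Scalar %c, %int4 : !torch.vtensor<[],si64>, !torch.int -> !torch.vtensor<[],i1>
```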
2024-03-08 13:44:00 -08:00
Andreas Falkenberg 551a4e45f3
[onnx] Add support for `onnx.Gemm` with no bias (#2993)
The previous gemm lowering required a bias vector.
This provides an alternate path to `Torch::AtenMm`
when no bias operand is present.
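A minimal sketch (value names and shapes assumed) of a bias-less Gemm that can take the new path:
```
// Hypothetical IR: only two operands, so the lowering can target torch.aten.mm.
%0 = torch.operator "onnx.Gemm"(%a, %b) : (!torch.vtensor<[4,8],f32>, !torch.vtensor<[8,16],f32>) -> !torch.vtensor<[4,16],f32>
```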
2024-03-07 15:58:38 -08:00
Rob Suderman 1964208d19
[onnx] Fix constant pad for dynamic shape (#2989)
The current padding operation was not functional for dynamic shapes.
Updated and enabled tests so that onnx.pad tests pass.

Work TBD for reflection padding.
2024-03-07 13:29:50 -08:00
Scott Todd 7b18646def
[onnx] Handle optional arguments in Clip op pattern. (#2976)
Spec: https://onnx.ai/onnx/operators/onnx__Clip.html
2024-03-07 17:25:14 +00:00
Rob Suderman c15f1a2bd2
[onnx] Adding lowering for `onnx.Size` operation (#2985)
We can support `onnx.Size` by requesting the size of each dimension and
taking the product of the results, then packing it into a tensor.
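A minimal sketch (value names and shapes assumed): Size of a [2,3,4] tensor yields the element count 24 as a rank-0 tensor:
```
// Hypothetical IR as produced by the onnx importer.
%0 = torch.operator "onnx.Size"(%x) : (!torch.vtensor<[2,3,4],f32>) -> !torch.vtensor<[],si64>
```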

---------

Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-06 17:01:05 -08:00
Rob Suderman a78659742a
[onnx] Migrate `onnx.ReduceMax` to match `onnx.ReduceMin` (#2981)
This mostly copy-pastes the reduce minimum implementation to reduce max
to improve test coverage. We also improve the aten lowering for min/max
dim for unsigned types.
2024-03-06 16:48:21 -08:00
Andreas Falkenberg ea76dd12ba
[onnx][torch] Gridsampler E2E test and corrections of gridsampler (#2987)
The e2e test itself is actually provided in the Shark-Testsuite;
this adds 2 test cases for the gridsampler e2e test.
As intended, some items were found that needed correction, so
the Gridsampler op also has a change.
2024-03-06 10:56:58 -08:00
Rob Suderman 933db87a07
[onnx] Add support for constants of `i1`s (#2978)
`getRawBuffer` expects a densely packed (bit-packed) vector of `i1` values;
however, `onnx` does not densely pack the values. Include code to handle the
packing / unpacking.
2024-03-05 13:55:13 -08:00
Chi_Liu 09875fabd1
[MLIR][ONNX] Add ONNX ReduceProd support (#2943)
Alternatives to https://github.com/llvm/torch-mlir/pull/2908

Fix https://github.com/nod-ai/SHARK-Turbine/issues/353
2024-03-04 11:07:03 -08:00
Rob Suderman 19d4888278
[torch] Make torch.aten.unflatten lower directly to linalg (#2971)
The existing lowering via aten.view does not work as well for dynamic shapes,
as the lowering to tensor.expand must re-infer dynamic shape matching.
It is better to lower directly.
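A minimal sketch (value names and shapes assumed) of an unflatten that benefits from the direct lowering because the batch dim stays dynamic:
```
// Hypothetical IR: split dim 1 of a [?,12] tensor into [3, 4].
%int1 = torch.constant.int 1
%int3 = torch.constant.int 3
%int4 = torch.constant.int 4
%sizes = torch.prim.ListConstruct %int3, %int4 : (!torch.int, !torch.int) -> !torch.list<int>
%0 = torch.aten.unflatten.int %x, %int1, %sizes : !torch.vtensor<[?,12],f32>, !torch.int, !torch.list<int> -> !torch.vtensor<[?,3,4],f32>
```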
2024-03-04 10:17:42 -08:00
Rob Suderman d51e80b648
[onnx] Fix onnx.gather lowering for rank-0 indices (#2973)
We assumed the rank was at least 1; however, it can be rank-0, generating an
illegal pair of flatten / unflatten operations. Corrected this.
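A minimal sketch (value names and shapes assumed) of the rank-0 indices case:
```
// Hypothetical IR: indices are rank-0, so the gather output drops the gathered dim.
%0 = torch.operator "onnx.Gather"(%data, %index) {torch.onnx.axis = 0 : si64} : (!torch.vtensor<[4,5],f32>, !torch.vtensor<[],si64>) -> !torch.vtensor<[5],f32>
```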
2024-03-04 08:25:19 -08:00