Commit Graph

741 Commits (fe2fb9d9f5b440a89d109ca7514daf2c15eac854)

Author SHA1 Message Date
Xinan Jiang(姜曦楠) 1cdae6bc68
[MLIR][TORCH] Add support for lowering aten.Int.bool to arith (#3083)
Previously there was no lowering for `aten.Int.bool` in the
`convert-torch-to-arith` pass; this PR adds that support.

Below is the unit test:
```
func.func @torch.aten.Int.bool(%arg0: !torch.bool) -> !torch.int {
  %0 = torch.aten.Int.bool %arg0 : !torch.bool -> !torch.int
  return %0 : !torch.int
}
```
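For illustration, a hedged sketch of what the converted IR could look like (not the pass's verified output; treating the conversion as a zero-extension is an assumption):
```
// Sketch: bool-to-int as a zero-extension of i1 to i64.
func.func @int_bool(%arg0: i1) -> i64 {
  %0 = arith.extui %arg0 : i1 to i64  // true -> 1, false -> 0
  return %0 : i64
}
```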
2024-04-01 10:05:08 -07:00
Vivek Khandelwal 6844c84702
[MLIR][Torch] Fix OnnxToLinalg lowering for AvgPool op (#3076)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-01 22:14:14 +05:30
Gaurav Shukla 129a79417a
[MLIR][ONNX] Fix onnx.gather_nd implementation (#3070)
The indices should be expanded before the torch.gather operation.
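A hedged sketch of the intended order (op choice, shapes, and the `%data` / `%indices` value names are assumptions):
```
// Broadcast the indices up to the gather shape first...
%int1 = torch.constant.int 1
%int2 = torch.constant.int 2
%int3 = torch.constant.int 3
%false = torch.constant.bool false
%sizes = torch.prim.ListConstruct %int2, %int3 : (!torch.int, !torch.int) -> !torch.list<int>
%idx = torch.aten.broadcast_to %indices, %sizes : !torch.vtensor<[2,1],si64>, !torch.list<int> -> !torch.vtensor<[2,3],si64>
// ...then gather along the target dim.
%0 = torch.aten.gather %data, %int1, %idx, %false : !torch.vtensor<[2,3],f32>, !torch.int, !torch.vtensor<[2,3],si64>, !torch.bool -> !torch.vtensor<[2,3],f32>
```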

Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-04-01 20:17:09 +05:30
Jiawei Wu 76080936d4
[stablehlo] add aten.index_put and aten.scatter_add op conversion support (#3086) 2024-04-01 19:39:49 +08:00
zjgarvey c19fc9ba47
[ONNX] Fixes Issue with Dynamic Dims in GlobalAveragePool -> Torch Conversion (#3053)
Two e2e tests (AdaptiveAveragePool1/2dUnitOutputSizeDynamic) were
failing due to numerics. This was a result of passing -1 as the
kernel size in the lowering for the corresponding onnx op,
GlobalAveragePool.
2024-03-28 09:43:09 -07:00
Xida Ren (Cedar) 5f325749f9
add lowerings for AtenLtIntOp and AtenLeIntOp (#3061)
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-27 10:06:43 -07:00
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dims in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
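As an illustration (a hedged sketch with assumed shapes, not a test from this PR), static dims of a view can be matched directly so only the truly dynamic dims need inference:
```
func.func @view(%arg0: !torch.vtensor<[?,4],f32>, %dim0: !torch.int) -> !torch.vtensor<[?,4],f32> {
  %int4 = torch.constant.int 4
  // The static dim (4) matches positionally; only `?` stays dynamic.
  %shape = torch.prim.ListConstruct %dim0, %int4 : (!torch.int, !torch.int) -> !torch.list<int>
  %0 = torch.aten.view %arg0, %shape : !torch.vtensor<[?,4],f32>, !torch.list<int> -> !torch.vtensor<[?,4],f32>
  return %0 : !torch.vtensor<[?,4],f32>
}
```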
2024-03-26 12:41:40 -07:00
Vivek Khandelwal 9ae33e482e
[MLIR][TORCH] Add OnnxToTorch lowering for ops (#3049)
This commit adds the OnnxToTorch lowering for the Mish, Softplus,
HardSwish, Trilu, and ThresholdedRelu ops.
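For one of these, a hedged sketch of the rewrite (shapes and the softplus defaults beta=1, threshold=20 are assumptions, not taken from this PR):
```
// Imported ONNX op before conversion:
%0 = torch.operator "onnx.Softplus"(%arg0) : (!torch.vtensor<[3,4],f32>) -> !torch.vtensor<[3,4],f32>
// After OnnxToTorch, roughly the torch-level equivalent:
%int1 = torch.constant.int 1
%int20 = torch.constant.int 20
%1 = torch.aten.softplus %arg0, %int1, %int20 : !torch.vtensor<[3,4],f32>, !torch.int, !torch.int -> !torch.vtensor<[3,4],f32>
```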

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-25 20:29:07 +05:30
schnkmwt 1fcbfa87ec
Implement linalg lowering of diag_embed torch op (#2885)
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in https://github.com/nod-ai/SHARK-Turbine/issues/288

---------

Co-authored-by: sachink <sachink@xilinx.com>
2024-03-22 16:32:50 -07:00
zjgarvey 99b3a5f117
Converts all Adaptive Pooling Ops to Linalg (#2808)
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOp's.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions.
For example, applying adaptive_avg_pool2d to an input tensor of rank 3.
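A hedged sketch of that unbatched case (shapes and output size are assumptions for illustration):
```
// Rank-3 (C, H, W) input with no batch dimension.
%int7 = torch.constant.int 7
%out_hw = torch.prim.ListConstruct %int7, %int7 : (!torch.int, !torch.int) -> !torch.list<int>
%0 = torch.aten.adaptive_avg_pool2d %arg0, %out_hw : !torch.vtensor<[3,?,?],f32>, !torch.list<int> -> !torch.vtensor<[3,7,7],f32>
```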

After [pytorch #118162](https://github.com/pytorch/pytorch/pull/118162)
gets down to torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-03-22 11:05:20 -07:00
zjgarvey 6aa481c204
[ONNX] LogSoftmax to Torch (#3024)
This PR adds support for onnx.LogSoftmax, both for old versions (<13,
with axis >= 0) and for the new version (13).
2024-03-22 11:01:39 -07:00
Gaurav Shukla 50635dd509
[ONNX][MLIR] Add support for onnx.gather_nd (#2988)
Signed-off-by: Gaurav Shukla <gaurav@amd.com>
2024-03-22 21:38:39 +05:30
Rob Suderman 3a56714bff
[torch] Fix clamp ranges on quantize_per_tensor on unsigned (#3018)
`SExtValue` was used for both `int` and `uint` clamp values. This caused
the result to always be output as `zero`.
2024-03-20 13:37:47 -07:00
Xida Ren (Cedar) cb5cb506df
Fix SCF for-loop failing to convert to linalg when a tensor argument is supplied to the loop block (#3040)
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-03-20 11:04:02 -07:00
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example,
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128).
The output tensor seems to always return a tensor of zeros, which also
occurs when running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the output back to
uint8 on output; the second problem is resolved by PR #3018.
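For context, a hedged sketch of the op being converted (shapes are assumptions; the op produces a quantized tensor plus its scale and zero point):
```
// Imported form before OnnxToTorch; three results: y, y_scale, y_zero_point.
%y, %scale, %zp = torch.operator "onnx.DynamicQuantizeLinear"(%arg0) : (!torch.vtensor<[6],f32>) -> (!torch.vtensor<[6],ui8>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>)
```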
2024-03-20 10:58:25 -07:00
jinchen 9cf6c45a39
Add OnnxToTorch support for Compress op (#3025) 2024-03-20 17:12:08 +00:00
Abhishek-TyRnT df02692726
Dynamic size support for flatten (#3005)
Added support for dynamic shapes in the `flatten.using_ints` op
lowering in the tosa dialect. As a result, some Argmax tests now pass.
This PR fixes https://github.com/llvm/torch-mlir/issues/3004

The following tests pass after this PR:
```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
2024-03-19 15:19:29 -07:00
zjgarvey 7a9608bb69
[ONNX] Reduces onnx.Div sinceVersion to 7 (#3041)
The only difference between version 7 and newer versions is support for
different data types. We should allow this pattern to match as early as
7. Earlier versions have a more manual broadcast specification through
attributes, so I did not include those versions.

See: [onnx.Div
docs](https://onnx.ai/onnx/operators/onnx__Div.html#l-onnx-doc-divl)
2024-03-19 13:35:05 -07:00
Pavani Chowdary c51e2130f2
[onnx] support for lowering mod op from onnx to torch (#2859)
nod-ai/Shark-Turbine#267

---------

Authored-by: boddu.pavani@research.iiit.ac.in
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-18 17:54:37 +05:30
Xinan Jiang(姜曦楠) d8a52e82c2
[onnx] Fix onnx.cast cases between int32 and int64 (#2982)
Two modifications:
1. torch.int64 is enum value 4 in TORCH_DTYPE_TO_INT.
2. Add int32 support.
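A hedged illustration of the case in question (the attribute spelling and shapes are assumptions; ONNX data type 7 denotes int64):
```
// onnx.Cast from int32 to int64, as imported into the torch dialect.
%0 = torch.operator "onnx.Cast"(%arg0) {torch.onnx.to = 7 : si64} : (!torch.vtensor<[3],si32>) -> !torch.vtensor<[3],si64>
```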
2024-03-15 17:14:09 +00:00
Nithin Meganathan 798bfd7dff
Adds accumulator types in TorchToLinalg for `AtenMmOp` and `AtenConvolutionOp` (#3027) 2024-03-14 16:40:40 -07:00
aldesilv 6fa21bd8b1
OnnxToTorch lower celu op (#2920) 2024-03-13 20:34:10 +05:30
Yuanqiang Liu ad6159c7cb
[Stablehlo] lowering aten.round to stablehlo.round_nearest_even (#3011) 2024-03-12 08:58:20 +08:00
Rob Suderman 8fb28661f9
[onnx] Fix onnx.ReduceMean lowering (#3002)
ReduceMean lowerings did not successfully lower to `linalg` via torch.
There were two separate paths that could be consolidated to a single
simpler pass. This resulted in a significant improvement in test
coverage.
2024-03-11 11:32:53 -07:00
Rob Suderman bd7f1baa42
[onnx] Fix expand operation for dynamic shape max (#3001)
If the broadcast shape is length-1 at a dim while the input dim is `?`,
then we need to broadcast to the dynamic dim. This is equivalent to
taking the max of the two dimensions.
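Conceptually (a hedged sketch at the tensor/arith level; `%input` and `%bcast_dim` are assumed values):
```
// The output extent is the max of the dynamic input dim and the broadcast dim.
%c0 = arith.constant 0 : index
%in_dim = tensor.dim %input, %c0 : tensor<?xf32>
%out_dim = arith.maxsi %in_dim, %bcast_dim : index
```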
2024-03-08 16:23:07 -08:00
Rob Suderman 0723584936
[torch] Add folder for torch.aten.*.Scalar comparisons (#3000)
This adds folders for small versions of the tensor-scalar comparison
operators, as they are commonly used for shape computations. This
includes le, lt, ge, gt, eq, and ne.
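A hedged sketch of the kind of IR this lets fold (constant values and shapes are illustrative assumptions):
```
// A rank-0 constant compared against a scalar can fold to a constant bool tensor.
%cst = torch.vtensor.literal(dense<3> : tensor<si64>) : !torch.vtensor<[],si64>
%int4 = torch.constant.int 4
%0 = torch.aten.lt.Scalar %cst, %int4 : !torch.vtensor<[],si64>, !torch.int -> !torch.vtensor<[],i1>
// folds to a literal: dense<true> : tensor<i1>
```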
2024-03-08 13:44:00 -08:00
Andreas Falkenberg 551a4e45f3
[onnx] Add support for `onnx.Gemm` with no bias (#2993)
The previous Gemm lowering required a bias vector.
This provides an alternate path to `Torch::AtenMm`
when no bias operand is present.
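A hedged sketch of the bias-less case (shapes are assumptions; alpha/beta defaults are ignored for brevity):
```
// Two-operand Gemm as imported...
%0 = torch.operator "onnx.Gemm"(%a, %b) : (!torch.vtensor<[4,5],f32>, !torch.vtensor<[5,6],f32>) -> !torch.vtensor<[4,6],f32>
// ...can route to a plain matrix multiply:
%1 = torch.aten.mm %a, %b : !torch.vtensor<[4,5],f32>, !torch.vtensor<[5,6],f32> -> !torch.vtensor<[4,6],f32>
```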
2024-03-07 15:58:38 -08:00
Rob Suderman 1964208d19
[onnx] Fix constant pad for dynamic shape (#2989)
The current padding operation was not functional for dynamic shapes.
Updated and enabled tests so that onnx.pad tests pass.

Work TBD for reflection padding.
2024-03-07 13:29:50 -08:00
Scott Todd 7b18646def
[onnx] Handle optional arguments in Clip op pattern. (#2976)
Spec: https://onnx.ai/onnx/operators/onnx__Clip.html
2024-03-07 17:25:14 +00:00
Rob Suderman c15f1a2bd2
[onnx] Adding lowering for `onnx.Size` operation (#2985)
We can support `onnx.Size` by requesting the size of each dimension,
taking the product of the results, and then packing it into a tensor.
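A hedged sketch of the idea for a rank-2 input (shapes assumed):
```
%int0 = torch.constant.int 0
%int1 = torch.constant.int 1
%d0 = torch.aten.size.int %arg0, %int0 : !torch.vtensor<[?,3],f32>, !torch.int -> !torch.int
%d1 = torch.aten.size.int %arg0, %int1 : !torch.vtensor<[?,3],f32>, !torch.int -> !torch.int
// Total element count is the product of the dims; it then gets packed into a tensor.
%n = torch.aten.mul.int %d0, %d1 : !torch.int, !torch.int -> !torch.int
```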

---------

Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-06 17:01:05 -08:00
Rob Suderman a78659742a
[onnx] Migrate `onnx.ReduceMax` to match `onnx.ReduceMin` (#2981)
This mostly copy-pastes the reduce minimum implementation to reduce max
to improve test coverage. We also improve the aten lowering for min/max
dim for unsigned types.
2024-03-06 16:48:21 -08:00
Andreas Falkenberg ea76dd12ba
[onnx][torch] Gridsampler E2E test and corrections of gridsampler (#2987)
The e2e test itself is provided in the Shark-Testsuite; this PR adds 2
test cases for the gridsampler e2e test.
As intended, some items needing correction were found along the way, so
the Gridsampler op is also changed.
2024-03-06 10:56:58 -08:00
Rob Suderman 933db87a07
[onnx] Add support for constants of `i1`s (#2978)
`getRawBuffer` expects a densely packed vector of `i1` values; however,
`onnx` does not densely pack the values. This includes code to handle
the packing / unpacking.
2024-03-05 13:55:13 -08:00
Chi_Liu 09875fabd1
[MLIR][ONNX] Add ONNX ReduceProd support (#2943)
Alternatives to https://github.com/llvm/torch-mlir/pull/2908

Fix https://github.com/nod-ai/SHARK-Turbine/issues/353
2024-03-04 11:07:03 -08:00
Rob Suderman 19d4888278
[torch] Make torch.aten.unflatten lower directly to linalg (#2971)
The existing lowering via aten.view does not work as well for dynamic
shapes, as the lowering to tensor.expand must re-infer dynamic shape
matching. It is better to lower directly.
2024-03-04 10:17:42 -08:00
Rob Suderman d51e80b648
[onnx] Fix onnx.gather lowering for rank-0 indices (#2973)
We assumed the rank was at least 1; however, indices can be rank-0,
generating an illegal pair of flatten / unflatten operations. Corrected
this.
2024-03-04 08:25:19 -08:00
Yuanqiang Liu 916554f270
[Stablehlo] add torch_to_stablehlo::getBackendTypeForScalarType (#2975) 2024-03-04 23:31:54 +08:00
Rob Suderman d030bffc62
[torch] Support `aten.view` rank-0 collapse (#2965)
Collapsing to a rank-0 tensor using `aten.view` previously bailed out.
Added the special case.
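A hedged sketch of the newly supported case (shapes assumed):
```
// An empty size list collapses a single-element tensor to rank-0.
%sizes = torch.prim.ListConstruct : () -> !torch.list<int>
%0 = torch.aten.view %arg0, %sizes : !torch.vtensor<[1,1],f32>, !torch.list<int> -> !torch.vtensor<[],f32>
```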
2024-03-01 12:31:07 -08:00
Vivek Khandelwal 579ac8b666
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for sub and sum op (#2954)
This commit adds support for scalar conversion to byte.
It also fixes the OnnxToLinalg lowering issue for the Onnx.Sub and
Onnx.Sum ops.
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/466 
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/467

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-29 21:48:46 +05:30
mmakevic 76b81e0ccd
Implement lowering of torch.aten.fmod.Tensor (#2767)
Closing https://github.com/nod-ai/SHARK-Turbine/issues/351
2024-02-29 11:22:03 +05:30
Andreas Falkenberg 5437f32193
[onnx][torch] Lower `onnx.grid_sampler` to the `torch` equivalents (#2952)
This is the lowering of gridsampler from onnx to torch using our prior
implementation of AtenGridSamplerOp.
Several corner-case checks are implemented here. We may decide to move
part of these checks into AtenGridSamplerOp instead of the onnx
lowering portion.
2024-02-28 13:52:15 -08:00
Rob Suderman e48fe45886
[onnx] Improve `onnx` import to pass remaining tests (#2951)
This finishes support for importing the vast majority of `onnx`
operations. This includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Rob Suderman 6f3d62ab04
[torch] Fix folders and `cat` and `view` torch lowerings (#2963)
A bunch of small fixes are interlinked and trigger crashes if not
addressed as a group. This includes:

- `aten.view` when expanding from a rank-0 tensor
- slice folder with negative indices
- `aten._shape_as_tensor` folder on a rank-0 tensor
- `aten.cat` of a tensor with a length-0 tensor
2024-02-28 12:04:52 -08:00
Rob Suderman dd673cfa8d
[torch] Add edgecase for aten.shape_to_tensor for rank-0 input (#2962)
Currently the lowering uses `tensor.from_elements`, which does not
allow zero inputs. In this case we return a `tensor.empty` operation.
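A hedged sketch of the rank-0 edge case (types assumed):
```
// A rank-0 tensor has no dimensions, so its shape tensor holds zero elements;
// tensor.from_elements cannot build that, but tensor.empty can.
%shape = tensor.empty() : tensor<0xi64>
```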
2024-02-28 09:47:06 -08:00
Rob Suderman 08bc013fcd
[tosa] Fix TOSA batch matmul lowering to correct transpose ordering (#2959)
The corrective transpose at the end was computed incorrectly: it was
actually computing the inverse transpose. Inverting the permutations
fixes the issue.
2024-02-28 09:46:58 -08:00
Rob Suderman 4a7a7d76f8
[onnx] Fix ReduceMean lowering to torch (#2956)
The torch lowering only supported the most recent version. Refactored
the lowering to more easily handle default values and optional
operands / attributes.
2024-02-27 22:48:07 -08:00
Abhishek-TyRnT d541779f37
Add support for torch arange float module (#2749)
Added support for float dtype in torch.arange in the TOSA dialect.

This resolves the following issue:
https://github.com/llvm/torch-mlir/issues/2762

The following test cases are passing after this change

1. ArangeDtypeIntModule_basic
2. ArangeFloatModule_basic
3. ArangeNegativeStartFloatModule_basic
4. ArangeStartFloatModule_basic
5. ArangeStartNegativeStepFloatModule_basic
6. ArangeStartOutDtypeModule_basic
7. ArangeStartStepFloatModule_basic

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-02-27 13:40:55 -08:00
Rob Suderman e30a083aff
[torch] Rework lowering to tm_tensor.scatter to stop serialization (#2940)
We collapsed and broadcasted scatter indices to a single-element
version. We should instead use `tm_tensor.scatter`'s support for
multiple indices and the implicitly broadcasted behavior. This avoids
the serialization and avoids materializing a needlessly large indices
tensor.
2024-02-27 11:46:57 -08:00
Vivek Khandelwal d628b5fd06
[MLIR][TORCH] Add support for tanh approximation for Gelu op (#2941)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/461

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-27 19:26:01 +05:30
Vivek Khandelwal d81747eadb
[MLIR][TORCH] Extend support for OnnxToLinalg lowering for Dropout and Div op (#2938)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/451,
https://github.com/nod-ai/SHARK-Turbine/issues/452
2024-02-27 11:02:05 +05:30