Commit Graph

741 Commits (fe2fb9d9f5b440a89d109ca7514daf2c15eac854)

Author SHA1 Message Date
ptrifunovic98 c5a1da1910
Implement lowering of torch.aten.norm.Scalar (#2899)
Closes
[nod-ai/SHARK-Turbine#365](https://github.com/nod-ai/SHARK-Turbine/issues/365)
2024-02-26 08:46:56 -08:00
Andreas Falkenberg 55dc8deb92
[torch] GridSample TorchToLinalg lowering (#2883)
Lowers `torch.grid_sample` to the equivalent `linalg` representation.
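For reference, the semantics being lowered can be checked against eager PyTorch; an illustrative sketch, not code from the patch:

```python
import torch
import torch.nn.functional as F

# grid holds (x, y) sampling locations normalized to [-1, 1];
# an all-zero grid samples the spatial center of the input.
x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
grid = torch.zeros(1, 2, 2, 2)  # (N, H_out, W_out, 2)
y = F.grid_sample(x, grid, align_corners=False)
print(y.shape)  # torch.Size([1, 1, 2, 2])
```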
2024-02-23 09:14:38 -08:00
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic from importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import, as
there exists an onnx-to-torch equivalent. Dropping the special-case import
code and adding support for resource conversion substantially increases
test coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Rob Suderman df2aa1a369
[torch] Fixed edge conditions for strided slicing (#2929)
Strided slicing can occur with a negative stride. In these cases we need
to bound the end differently. This included removing a function that was
generating bad limits.
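The edge case is visible in plain Python slicing, whose semantics the bound computation mirrors; a sketch, not the patch's helper:

```python
# With a negative stride the exclusive end bound must sit *before* index 0,
# a position Python cannot express with a literal -1, since -1 wraps to the
# last element. This is why the end bound needs different handling.
xs = list(range(5))
print(xs[4::-1])    # [4, 3, 2, 1, 0] -- omitted end runs past index 0
print(xs[4:-1:-1])  # [] -- the -1 wraps around to index 4: empty slice
```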
2024-02-21 21:28:44 -08:00
Srinath Avadhanula 0f80e75c2e
allow tosa.cast to convert from f32 to f16 (#2934)
According to the [official TOSA
spec](https://www.mlplatform.org/tosa/tosa_spec.html#_cast), `tosa.cast`
allows a cast from `fp32` to `fp16`. We were not previously accounting
for this in the `TorchToTosa` lowering.

Also did a tiny bit of cleanup in the code to make it easier to spot
which conversions are currently allowed.

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
2024-02-20 14:22:38 -08:00
Rob Suderman 135c81a416
[torch] Add folder for `prim.NumToTensor.Scalar` (#2921)
Useful for `slice` lowerings that depend on tensors made form scalars.
2024-02-19 11:55:54 -08:00
Rob Suderman cea51897a5
[onnx] Simplify onnx.slice lowering (#2919)
Onnx slice lowering used arange needlessly instead of directly
constructing the constant dimension values. This made lowerings to
linalg struggle, as multiple folders were required to recover what is a
constant index value.
2024-02-19 10:26:29 -08:00
Rob Suderman fd08578bdb
[torch] Support dynamic step size for `torch.slice` (#2922)
For some reason we did not directly use the step size dynamically,
despite its being constructed from the dynamic value.
2024-02-19 10:26:21 -08:00
aldesilv d29157b33f
OnnxToTorch support for onnx.InstanceNormalization op (#2710)
https://github.com/nod-ai/SHARK-Turbine/issues/327
2024-02-19 19:53:48 +05:30
Rob Suderman d65925a8b4
[onnx] Fix `onnx.sigmoid` for integer inputs/outputs (#2914)
Sample compilation crashes due to sigmoid with integer inputs/outputs.
This fix avoids crashing but still experiences an error.
2024-02-16 13:35:25 -08:00
Rob Suderman 7a0d0e954b
[onnx] Fix onnx.gather lowering to use torch.aten.index_select (#2913)
Onnx's gather maps directly to `torch.aten.index_select`. We should just
use that path.
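For the common 1-D-indices case the equivalence is direct; a small eager-mode sketch:

```python
import torch

# onnx.Gather with 1-D indices along `axis` is exactly index_select.
data = torch.randn(3, 4)
indices = torch.tensor([2, 0])
out = torch.index_select(data, dim=0, index=indices)
print(out.shape)  # torch.Size([2, 4]) -- rows 2 and 0, in that order
```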
2024-02-16 16:05:44 -05:00
Rob Suderman 468c533942
[onnx] Fix crash when negative transpose values exist (#2915)
We are crashing due to indexing into a negative shape. Updated the
lowering to avoid the crash.
2024-02-16 16:04:47 -05:00
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
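A minimal sketch of the round trip (the torch-mlir re-import entry point is assumed and elided here):

```python
import io
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()

# Export the torch test through onnx...
buf = io.BytesIO()
torch.onnx.export(M(), (torch.randn(2, 3),), buf, opset_version=17)
onnx_bytes = buf.getvalue()
# ...then these bytes would be re-imported, lowered to the torch dialect,
# and compiled to linalg to validate the onnx path.
```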
2024-02-15 10:17:13 -08:00
Yuanqiang Liu f3e8199a6d
[Stablehlo] add refbackend (#2712) 2024-02-16 01:08:48 +08:00
Vivek Khandelwal d6d1a173dc
[MLIR][Torch] Add OnnxToTorch and TorchToLinalg support for trig ops (#2903)
This commit adds the OnnxToTorch lowering for the cosh, acosh, asin,
asinh, and atanh ops.
This commit also adds the TorchToLinalg lowering for the acosh, asin,
asinh, and atanh ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-14 11:58:09 +05:30
Rob Suderman e9cdd6cbc5
[torch] Fix tm_tensor.attention for end-to-end (#2907)
Some operations include a backend matcher for specialized operations. We
map these back to generics so they appropriately match the
high-performance versions. This is done for the attention operation.
2024-02-13 21:18:01 -08:00
saienduri 9b967f6b5a
[MLIR][ONNX] Add OnnxToTorch support for Mean, IsInf, IsNaN, PRelu op (#2801)
This commit adds the OnnxToTorch support for the Mean, IsInf, IsNaN, and
PRelu ops. All high-priority ops were taken, so we went with these. The
non-trivial ones are Mean and IsInf, which might require extra review.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-13 12:38:21 +05:30
Xida Ren (Cedar) bfb93cb99f
Fix test_add_uint8 failure to lower to linalg (#2893)
Fixed by updating the convertScalarToDtype invocation to pass the
original source and destination datatypes for the add op. Also fixes a
potential problem with the sub op.

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-12 09:19:39 -08:00
Rob Suderman d83b576c6e
Bump LLVM to llvm/llvm-project@bb180856ec (#2895)
Includes some minor fixes for `AffineMap::inferFromExprList`
2024-02-09 14:07:49 -08:00
Avinash Sharma 9659a436d1
Add lowering support for math::AbsIOp (#2875)
There is no lowering support for math::AbsIOp, so if the operand is an
integer type, it will fail to lower to math::AbsFOp since the op operand
#0 must be floating-point-like.
2024-02-08 14:53:40 -08:00
Ashay Rane 21f070e95f
onnx: fix checks in TorchOnnxToTorch pass to match the ONNX spec (#2848)
This PR contains three commits to update the validation checks in the
ONNX -> Torch conversion pass for the AveragePool, Pad, and Slice operators:

> onnx: fix preconditions for lowering AveragePool ops
> 
> The `pads` attribute of the AveragePool operator specifies the value to
> pad at both the beginning as well as the end of the axis (see
> https://onnx.ai/onnx/operators/onnx__AveragePool.html#attributes), so
> the size of this attribute should be twice the rank of the input tensor.
> However, our TorchOnnxToTorch bails out early since it incorrectly
> compares the pads attribute with the rank (not twice the rank) of the
> input tensor.
> 
> This patch fixes the code to match the spec and adds a lit test.

> onnx: allow optional constant value for Pad operator
> 
> The `constant_value` input of the onnx.Pad operator is optional (see
> https://onnx.ai/onnx/operators/onnx__Pad.html#inputs), but the existing
> logic for lowering the operator into the Torch dialect assumes that it
> is mandatory.
> 
> This patch makes the attribute optional and constructs a default value
> (a list of zeros the size of the input tensor) if the attribute was not
> specified.

> onnx: fix checks for axes and steps inputs of Slice operator
> 
> The ONNX Spec for the Slice operator allows the `starts` and `ends`
> inputs to have fewer indices than the dimensions of the `data` tensor
> (see https://onnx.ai/onnx/operators/onnx__Slice.html), but our code
> expects these inputs to have as many indices as the `data` tensor's dimensions.
> 
> More precisely, the spec requires that the `starts` and `ends` inputs
> are only as long as the `axes` input, but since the `axes` input is
> optional, the default type for the `axes` input has to match the type
> for the `starts` and `ends` inputs. Moreover, the number of indices in
> the `steps` input also has to match those in the `axes` input (instead
> of matching the dimensions of the `data` input).
> 
> This patch fixes the checks in the TorchOnnxToTorch conversion so that
> they match the ONNX spec.
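As a paraphrase of the third fix, the corrected Slice precondition looks roughly like this in hypothetical Python (the pass itself is C++):

```python
def check_slice_inputs(starts, ends, axes=None, steps=None):
    # axes is optional; its default must line up with starts/ends,
    # and steps must match axes rather than the rank of `data`.
    if axes is None:
        axes = list(range(len(starts)))
    if steps is None:
        steps = [1] * len(axes)
    return len(starts) == len(ends) == len(axes) == len(steps)
```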
2024-02-07 21:19:27 -08:00
Vivek Khandelwal 4df96616db
[MLIR][TORCH] Modify Onnx.Reshape lowering for static shape cases (#2852)
This commit modifies the OnnxToTorch lowering of Onnx.Reshape op by
creating the result shape list for the aten.reshape using the result
shape values inferred from the op's result shape.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-07 17:44:07 -08:00
mmakevic 32dbf99ce2
Implement lowering of torch.aten.all.dim (#2873)
Lowering of torch.aten.all.dim to linalg.

Per PyTorch documentation:

> This function matches the behaviour of NumPy in returning output of
dtype bool for all supported dtypes except uint8. For uint8 the dtype of
output is uint8 itself.

Since there is no support for ui8 in torch-mlir currently
(https://github.com/llvm/torch-mlir/pull/1384#issuecomment-1260011334)
implementation returns failure for that case.
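The quoted dtype rule is observable directly in eager PyTorch:

```python
import torch

x = torch.tensor([[1, 0], [1, 1]], dtype=torch.uint8)
print(torch.all(x, dim=1).dtype)        # torch.uint8 (the special case)
print(torch.all(x.int(), dim=1).dtype)  # torch.bool (all other dtypes)
```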
2024-02-07 12:34:52 -08:00
Rob Suderman 041a54ae0c
[torch] Supporting `torch.aten.mul.float` lowering to `arith` (#2833)
The simple scalar operation for multiplying floats was missing.
2024-02-05 16:23:04 -08:00
Rob Suderman e3faef5224
[onnx] Convert `onnx.QLinearConv` to `torch` (#2851)
Leaning on the QDQ functionality in torch we can support the QLinearConv
operation by piggybacking through `torch.Convolution`. This includes
some changes such as allowing the `onnx` rewriter to run recursively.
Doing so allows `QLinearConv` to decompose to `onnx.Convolution`, which
is then lowered to `torch`.
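Numerically, the QDQ piggybacking amounts to the following reference sketch (the names and the uint8 variant are assumptions, not the converter's code):

```python
import torch

def qlinear_conv_ref(x_q, x_scale, x_zp, w_q, w_scale, w_zp, y_scale, y_zp):
    x = (x_q.float() - x_zp) * x_scale    # dequantize input
    w = (w_q.float() - w_zp) * w_scale    # dequantize weight
    y = torch.nn.functional.conv2d(x, w)  # plain float convolution
    # requantize the result back to uint8
    return torch.clamp(torch.round(y / y_scale) + y_zp, 0, 255).to(torch.uint8)
```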
2024-02-05 16:09:41 -08:00
Rob Suderman cb52c4b3cc
[onnx] Fix `onnx-to-torch` lowering for flatten shape (#2834)
The existing `flatten` lowering did not define what the intermediate
shape was. This could result in failures to lower further to linalg as
the intermediate shape was unknown. Added a shape refinement section.
2024-02-05 14:23:46 -08:00
Gaurav Shukla f4562a8eaa
[ONNX] Fix the lowering of onnx.expand op (#2861)
Signed-off-by: Gaurav Shukla <gauravshukla789@gmail.com>
2024-02-05 23:46:58 +05:30
Xida Ren (Cedar) 24b8c8672a
[torch] Add folders for `torch.fill`, `torch.ones`, `torch.zeros` and `aten.getItem` (#2849)
So that the CumSum Op in OPT can get the constant that it requires to be lowered to TMTensor

---------

Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-02 10:46:33 -08:00
Ben Vanik 962d514308
Fixing implicit double->float conversion warning. (#2850)
`[build]
D:\Dev\iree\third_party\torch-mlir\lib\Conversion\TorchOnnxToTorch\DefaultDomainGtoP.cpp(734):
warning C4305: 'argument': truncation from 'double' to 'float'`
2024-02-01 22:02:44 -08:00
Rob Suderman 29baa813bd
[onnx] Fix `pool` lowering for non-symmetric padding (#2837)
`torch` requires that padding be symmetric for pooling operations. To
support non-symmetric pads we need to materialize the padding operation
separately.
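In eager terms the rewrite looks like this sketch (shapes and pad amounts are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)
# torch pooling pads symmetrically, so asymmetric onnx pads become an
# explicit pad first; -inf is the identity value for max pooling.
x = F.pad(x, (0, 1, 0, 1), value=float("-inf"))  # right/bottom only
y = F.max_pool2d(x, kernel_size=2, stride=2, padding=0)
```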

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-02-01 14:35:21 -08:00
Rob Suderman 34f6948533
[torch] Support `!countIncludePad` when unpadded for average pool (#2836)
We do not support average pool when `countIncludePad` is set to false.
However, if the input is unpadded then the setting of the boolean is
irrelevant. Extended support by checking whether the padding is zero
before rejecting the lowering.
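The equivalence the check exploits is easy to confirm in eager PyTorch:

```python
import torch
import torch.nn.functional as F

# With zero padding, count_include_pad has no effect on the result.
x = torch.randn(1, 1, 8, 8)
a = F.avg_pool2d(x, 3, stride=1, padding=0, count_include_pad=True)
b = F.avg_pool2d(x, 3, stride=1, padding=0, count_include_pad=False)
assert torch.equal(a, b)
```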
2024-01-31 15:09:36 -08:00
Rob Suderman 0114a570e3
[torch] Support lowering `torch.item` to `tensor.extract` (#2835)
Extracting scalar values from tensors can be implemented via a lowering
to tensor.extract.
2024-01-31 15:09:12 -08:00
Sambhav Jain 8a17c98b74
Bump stablehlo to openxla/stablehlo@fd52182f76 (#2821)
With the recent LLVM integrate and changes from
https://github.com/llvm/llvm-project/pull/78260, we hit this build error
in Stablehlo (which is quite old).
```
external/stablehlo/stablehlo/transforms/StablehloRefineShapes.cpp:1020:14: error: no member named 'startRootUpdate' in 'mlir::PatternRewriter'
    rewriter.startRootUpdate(op);
    ~~~~~~~~ ^
external/stablehlo/stablehlo/transforms/StablehloRefineShapes.cpp:1026:16: error: no member named 'finalizeRootUpdate' in 'mlir::PatternRewriter'
      rewriter.finalizeRootUpdate(op);
      ~~~~~~~~ ^
external/stablehlo/stablehlo/transforms/StablehloRefineShapes.cpp:1029:16: error: no member named 'cancelRootUpdate' in 'mlir::PatternRewriter'
      rewriter.cancelRootUpdate(op);
      ~~~~~~~~ ^
external/stablehlo/stablehlo/transforms/StablehloRefineShapes.cpp:1108:14: error: no member named 'updateRootInPlace' in 'mlir::PatternRewriter'
    rewriter.updateRootInPlace(op->getParentOp(), [&]() { return; });
    ~~~~~~~~ ^
4 errors generated.
Target @torch-mlir//:torch-mlir-opt failed to build
```

I'm still puzzled as to how this didn't fail with the CMake merge gating
CI (do we not test Stablehlo builds/tests?). In any case, bumping our
submodule to https://github.com/openxla/stablehlo/pull/1918 fixes it.

It exposes a new failing lit test in TorchToStablehlo, though, that I
have looped stablehlo developers in on
([here](https://discord.com/channels/999073994483433573/999074539138990131/1201235845391331419)).
```
bazel run @torch-mlir//test/Conversion:TorchToStablehlo/scatter.mlir.test 

...external/torch-mlir/test/Conversion/TorchToStablehlo/scatter.mlir
within split at <stdin>:1 offset :33:8: error: unexpected error: Expects non-empty reduction block for type inference                                                                               
  %0 = torch.aten.scatter.src %arg0, %int0, %arg1, %arg2 : !torch.vtensor<[?,?],si64>, !torch.int, !torch.vtensor<[?,?],si64>, !torch.vtensor<[?,?],si64> -> !torch.vtensor<[?,?],si64>             
       ^                                                                                                                                                                                            
LLVM ERROR: Failed to infer result type(s).               
```

Bazel CI:
https://github.com/sjain-stanford/torch-mlir/actions/runs/7732673480/job/21083102228
2024-01-31 14:21:17 -08:00
Rob Suderman 3500523f75
[onnx] Convert resources to denseattr for `onnx.constant` to `torch` (#2830)
`onnx` explicitly specifies that `raw_data` is stored in `little-endian`
layout. While converting to `torch` we need to convert from a known
endian format to an internal format of consistent layout. This means
endianness must be correct during the import of `onnx.Constant`.
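A sketch of the concern in Python terms (illustrative, not the importer's code):

```python
import numpy as np

# onnx raw_data is little-endian bytes, so decoding must say so explicitly
# rather than assume host byte order.
raw = (1024).to_bytes(4, "little")
vals = np.frombuffer(raw, dtype=np.dtype("<i4"))  # '<' forces little-endian
print(vals)  # [1024] on any host, big- or little-endian
```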

---------

Co-authored-by: Xida Ren (Cedar) <cedar.ren@gmail.com>
2024-01-31 11:40:53 -08:00
Stella Laurenzo 7301aa80fd
Enable -Werror in lib/ and LTC. (#2841)
Required some massaging of LTC to make it warning clean, and I had to
manually disable some warnings on the generated source files (which we
don't control).

The project is warning clean now.

The `-Werror` flag is disabled by default as we can't control everywhere
people will try to build/install. The CI enables it via
-DTORCH_MLIR_ENABLE_WERROR_FLAG=ON.
2024-01-30 23:33:21 -08:00
Stella Laurenzo 26c0ecd09c [nfc] Remove unused var causing error downstream 2024-01-30 22:18:13 -08:00
Yuanqiang Liu d778950f45
[Torch Dialect] add fold pattern for aten.clone (#2804) 2024-01-31 09:43:21 +08:00
Rob Suderman 25a5a22cbd
[torch] Support `torch.convolution` quantized lowering to `linalg` (#2811)
Linalg has quantized-specific operations. We can lower to these
operations when the zeropoint and scale are known. This allows the
`convolution` to occur at lower bitwidths, improving overall
performance.
2024-01-30 13:46:47 -08:00
aldesilv eff325abc3
OnnxToTorch ReduceMax lowering (#2768)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/352
2024-01-30 11:44:48 +05:30
Quinn Dawkins 494089d53d
Clang format refresh (#2812)
After noticing a number of commits with unrelated formatting changes,
I think something was changed with clang-format at one point and we're
seeing a number of unrelated changes. Doing a refresh can help avoid
this.

The changes made here came from
```
find lib -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
find include -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
find projects -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
```
2024-01-29 12:59:33 -05:00
Rob Suderman d3fd754b93
[onnx] `onnx.MatMulInteger` lowering to `torch.mm` and `quint*` types (#2761)
Torch does not have an equivalent matmul operation for integers. Instead
it side-channels the information via its quantized types. For this
lowering we set up these side channels and then invoke `torch.mm`.
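The reference semantics of `onnx.MatMulInteger` that the side channels encode, as an eager sketch (zero points are illustrative):

```python
import torch

a = torch.randint(0, 255, (2, 3), dtype=torch.uint8)
b = torch.randint(-128, 127, (3, 4), dtype=torch.int8)
a_zp, b_zp = 128, 0  # assumed zero points

# Subtract zero points and accumulate in a wider integer type.
y = (a.to(torch.int32) - a_zp) @ (b.to(torch.int32) - b_zp)
print(y.dtype)  # torch.int32
```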
2024-01-29 09:40:21 -08:00
Aart Bik 46a25d7241
[torch-mlir][sparse] preserve sparsity during lowering torch to linalg (#2809)
This preserves sparsity at the most obvious places of lowering TORCH
tensors to MLIR RankedTensorType tensors. Other places are marked for
audit. With some initial lowering tests.
2024-01-26 10:54:59 -08:00
Vivek Khandelwal da7c6d2c16
[MLIR][TORCH] Add support for dynamic shape for Onnx.Transpose op (#2803)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-26 09:46:54 -08:00
Phaneesh Barwaria 4964977e85
[ONNX][MLIR] support constantOfShape op (#2747) 2024-01-26 09:36:39 -08:00
Rob Suderman 2ef228328f
[torch] `torch.dequantize` for per channel tensors to` linalg` (#2769)
Supports a lowering of dequantization for per-channel tensors from the
`torch` dialect to a linalg decomposition. Tested via a numerical
`torch` test.
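Per-channel dequantization differs from per-tensor only in that scale and zero point are vectors broadcast along the channel axis; a reference sketch:

```python
import torch

def dequant_per_channel(t_q, scales, zero_points, axis):
    shape = [1] * t_q.dim()
    shape[axis] = -1  # broadcast parameters along the channel axis
    return (t_q.float() - zero_points.float().view(shape)) * scales.view(shape)

w_q = torch.randint(-128, 127, (4, 3, 3, 3), dtype=torch.int8)
w = dequant_per_channel(w_q, torch.rand(4), torch.zeros(4), axis=0)
```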
2024-01-25 16:40:21 -08:00
lonely eagle e581b33f96
[Stablehlo]fix CumsumInputDtypeInt32Module_basic on stablehlo backend. (#2797)
Code used for testing. For the location of CumsumInputDtypeInt32Module in
the repo, see
`projects/pt1/python/torch_mlir_e2e_test/test_suite/basic.py` (L4148) at
commit 311b6b0286.
```python
import torch
import torch_mlir

class CumsumInputDtypeInt32Module(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, val):
        return torch.ops.aten.cumsum(val, 1)
module = torch_mlir.compile(CumsumInputDtypeInt32Module(), [torch.randn(2, 7, 4).to(torch.int32)], output_type="stablehlo")
print(module.operation.get_asm())
```
After fixing the bugs:
```
module attributes {torch.debug_module_name = "CumsumInputDtypeInt32Module"} {
  func.func @forward(%arg0: tensor<2x7x4xi32>) -> tensor<2x7x4xi64> {
    %0 = stablehlo.constant dense<0> : tensor<i64>
    %1 = stablehlo.convert %arg0 : (tensor<2x7x4xi32>) -> tensor<2x7x4xi64>
    %2 = "stablehlo.reduce_window"(%1, %0) ({
    ^bb0(%arg1: tensor<i64>, %arg2: tensor<i64>):
      %3 = stablehlo.add %arg1, %arg2 : tensor<i64>
      stablehlo.return %3 : tensor<i64>
    }) {padding = dense<[[0, 0], [6, 0], [0, 0]]> : tensor<3x2xi64>, window_dilations = dense<1> : tensor<3xi64>, window_dimensions = dense<[1, 7, 1]> : tensor<3xi64>, window_strides = dense<1> : tensor<3xi64>} : (tensor<2x7x4xi64>, tensor<i64>) -> tensor<2x7x4xi64>
    return %2 : tensor<2x7x4xi64>
  }
}
```
2024-01-25 10:44:08 +08:00
Rob Suderman f6f890520b
[torch][quant] Quantized `torch.mm` for linalg with end-to-end test (#2750)
This includes custom op matching for decomposed operations and fusing
dequantization into dense operations. As a validation we compare
to the dequant+mm torch implementation.
2024-01-24 14:02:50 -08:00
Rob Suderman 60bf6c25af
[onnx] Lower `onnx.QLinearMatMul` lowering to `torch` operators (#2776)
We can plumb the linear matmul into pytorch using its quantized types
with side channel information. To handle the final int8 operation we
dequantize and requantize.
2024-01-24 12:28:48 -08:00
Vivek Khandelwal 894805dd5e
[MLIR][TORCH] Support for `onnx.LayerNormalization` (#2789)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-24 11:08:20 -08:00
Gaurav Shukla 12f123eff8
[ONNX][MLIR] Add support for pad op in the onnx pipeline (#2738)
This commit adds mapping from `onnx.pad` op to `torch.pad` op. Currently
it does not support `axes` parameter of `onnx.pad` op.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-25 00:33:37 +05:30
Phaneesh Barwaria ac8975ea12
[MLIR] [ONNX] lowering for onnx tile op and sign op (#2725) 2024-01-24 22:56:21 +05:30
zjgarvey c531f5495b
AtenAdaptiveMaxPool2d Conversion to Linalg (#2779)
The logic here is very similar to the conversion for AdaptiveAvgPool1d
#2661 with a few modifications:

1. buffVal = -inf instead of 0
2. the main linalg generic op accumulates a max, instead of a sum, to
the first output tensor
3. avg pooling requires dividing the sum pool by the kernel width, which
we stored as an auxiliary tensor (kSizeTensor). Here, the auxiliary
tensor will be recording the indices. Strangely enough, the only
signature available for this function is to return indices, and it
appears that they must be computed whether the user desires them or not.
See
[pytorch/torch/nn/functional.py](https://github.com/pytorch/pytorch/blob/main/torch/nn/functional.py#L1174).

Before writing other adaptive pooling conversions, the logic of this
decomposition should be rolled into a helper function that will work for
both max and avg pooling ops. Even the auxiliary tensor should likely be
automated. This code was written in a slightly more tedious way than
strictly necessary (often using loops to fill SmallVectors up to rank-2,
which is only two in this case), in order to more easily facilitate the
transition to a helper function.
2024-01-24 09:09:56 -08:00
Xida Ren (Cedar) ccaac85788
implement aten.conv1d, aten.conv3d, and aten.conv_tbc (#2757)
Implements convolution with [time, batch, channel] ordering, as opposed
to the default [batch, channel, time]. Currently implemented by
transposing the input and output, but it may need its own implementation
in the future because this op is supposed to give a speedup. It is used by fairseq
(https://github.com/facebookresearch/fairseq/issues/172).

(in case you were wondering like me, this is different from transposed
convolution. Transposed convolution has fractional strides).
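The layout relationship can be checked in eager PyTorch (this is ordinary conv1d on permuted tensors, mirroring the lowering; tolerances and shapes are illustrative):

```python
import torch

x_tbc = torch.randn(10, 2, 4)  # (time, batch, in_channels)
w = torch.randn(3, 4, 8)       # (kernel_width, in_channels, out_channels)
b = torch.zeros(8)

y = torch.conv_tbc(x_tbc, w, b)  # (8, 2, 8): time shrinks by kernel - 1

# Same result via conv1d on permuted tensors.
y_ref = torch.nn.functional.conv1d(
    x_tbc.permute(1, 2, 0), w.permute(2, 1, 0), b).permute(2, 0, 1)
assert torch.allclose(y, y_ref, atol=1e-5)
```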

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
Co-authored-by: Frederik Harwath <frederik.harwath@amd.com>
2024-01-23 21:30:03 -08:00
Chi_Liu 77ae56337d
[ONNX][MLIR] Add support for onnx.Exp op (#2792)
https://github.com/nod-ai/SHARK-Turbine/issues/312
2024-01-23 13:45:00 -08:00
James Newling dc056e58e6
[MLIR][TORCH] Add onnx.cast cases used by OPT-1.25M (#2787) 2024-01-23 21:06:25 +05:30
Gaurav Shukla b7a0329676
[ONNX][MLIR] Fix padding size constraint for onnx.maxpool op (#2782)
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2024-01-23 19:23:01 +05:30
Chi_Liu cad98e8113
[ONNX][TORCH-MLIR] Add TopK support (#2774)
https://github.com/nod-ai/SHARK-Turbine/issues/331
2024-01-22 12:56:39 -08:00
Ramiro Leal-Cavazos 5883ef0f21
Fix unused variable warnings (#2775) 2024-01-22 11:05:55 -08:00
Srinath Avadhanula 73b30604da
Do not try to legalize transposed convolution (#2721)
Currently transposed convolution is not handled correctly by
`TorchToTosa`. This PR allows transposed convolutions to pass through
the conversion so that they can be handled by other conversion passes
later in a pipeline.

An example input which produces a compilation error is:

```
func.func @forward(%input: !torch.vtensor<[1,64,1,100],f32>) -> !torch.vtensor<[1,64,2,200],f32> {
  %true = torch.constant.bool true
  %int1 = torch.constant.int 1
  %int2 = torch.constant.int 2
  %weight = torch.vtensor.literal(dense<0.0> : tensor<64x64x3x3xf32>) : !torch.vtensor<[64,64,3,3],f32>
  %bias = torch.vtensor.literal(dense<0.0> : tensor<64xf32>) : !torch.vtensor<[64],f32>
  %stride = torch.prim.ListConstruct %int2, %int2 : (!torch.int, !torch.int) -> !torch.list<int>
  %int1x1 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
  %output = torch.aten.convolution %input, %weight, %bias, %stride, %int1x1, %int1x1, %true, %int1x1, %int1 : !torch.vtensor<[1,64,1,100],f32>, !torch.vtensor<[64,64,3,3],f32>, !torch.vtensor<[64],f32>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[1,64,2,200],f32>
  return %output : !torch.vtensor<[1,64,2,200],f32>
}
```

This MLIR produces an error about a cast operation with a size mismatch
when passed through `torch-to-tosa`:

```
 error: 'tensor.cast' op operand type 'tensor<1x64x1x50xf32>' and result type 'tensor<1x64x2x200xf32>' are cast incompatible
```

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
2024-01-22 10:57:56 -08:00
Franz Haniel b9806cfa38
[TorchToLinalg] Add lowering for torch.aten.diagonal (#2632) 2024-01-22 12:47:13 -05:00
James Newling 50ac3b1912
g++ build fix (#2778)
Introduced in 704cfdaf08 by @wu-s-john

g++ compiler error: 

Pooling.cpp:177:13: error: explicit specialization in non-namespace
scope ‘class

Design looks good, g++ is just freaking out for no good reason.
Un-nesting the template classes fixes the error.

We don't have g++ CI. This hopefully happens infrequently enough that we
can just fix manually. My service to those folks who really like
building with g++... :)
2024-01-19 19:12:29 -08:00
Dave Liddell 2f4924015d
[onnx] Added flatten (#2760)
https://github.com/nod-ai/SHARK-Turbine/issues/328

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-19 16:18:16 -08:00
Gaurav Shukla 3b85c70748
[ONNX][MLIR] Add support for onnx.gather op (#2726)
This commit adds support for gather op in the onnx pipeline.
https://github.com/nod-ai/SHARK-Turbine/issues/242

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-19 21:58:29 +05:30
John Wu 704cfdaf08
Add aten.pool_max3d support to torch-to-linalg (#2735)
Added verification logic to abstract_interpreter_lib_gen.py.

Also added some unit tests.

Initially, I thought we could use `linalg::pooling_ndhwc_max` to
implement this. However, on a 5-dimensional matrix it does the
pooling on dimensions (2, 3, 4), which is not what we want. We want
pooling on dimensions (3, 4, 5).

To achieve this, we would need to lower our code using the `linalg`
dialect.


Turns out the pooling code in `linalg` looks like this.

```
func @max_pooling_ncdhw(%I: memref<?x?x?x?x?xf32>, %K: memref<3xindex>, %O: memref<?x?x?x?x?xf32>,
                        %strides: memref<3xindex>, %dilations: memref<3xindex>) {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %c2 = arith.constant 2 : index
    %c3 = arith.constant 3 : index
    %c4 = arith.constant 4 : index
    %N = memref.dim %I, %c0 : memref<?x?x?x?x?xf32>
    %C = memref.dim %I, %c1 : memref<?x?x?x?x?xf32>
    %D = memref.dim %I, %c2 : memref<?x?x?x?x?xf32>
    %H = memref.dim %I, %c3 : memref<?x?x?x?x?xf32>
    %W = memref.dim %I, %c4 : memref<?x?x?x?x?xf32>

    %kernel_d = memref.load %K[%c0] : memref<3xindex>
    %kernel_h = memref.load %K[%c1] : memref<3xindex>
    %kernel_w = memref.load %K[%c2] : memref<3xindex>
    %stride_d = memref.load %strides[%c0] : memref<3xindex>
    %stride_h = memref.load %strides[%c1] : memref<3xindex>
    %stride_w = memref.load %strides[%c2] : memref<3xindex>
    %dilation_d = memref.load %dilations[%c0] : memref<3xindex>
    %dilation_h = memref.load %dilations[%c1] : memref<3xindex>
    %dilation_w = memref.load %dilations[%c2] : memref<3xindex>

    linalg.generic {
        indexing_maps = [
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (n, c, d * %stride_d + kd * %dilation_d, h * %stride_h + kh * %dilation_h, w * %stride_w + kw * %dilation_w)>,  // Map for input tensor
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (kd, kh, kw)>,                                              // Map for kernel tensor
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (n, c, d, h, w)>                                            // Map for output tensor
        ],
        iterator_types = ["parallel", "parallel", "parallel", "parallel", "parallel", "reduction", "reduction", "reduction"],
        doc = "3D Max Pooling NCDHW with Strides, Dilations, and Kernel Size"
    } ins(%I, %K : memref<?x?x?x?x?xf32>, memref<3xindex>) outs(%O : memref<?x?x?x?x?xf32>) {
        ^bb0(%input_elem: f32, %kernel_elem: index, %output_elem: f32):
            %max_val = arith.maxf %input_elem, %output_elem : f32
            linalg.yield %max_val : f32
    }
    return
}

```

This was implemented based on its source code with the adjustments
mentioned above:

4ca1b5e094/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml (L5647)

Issues related to this can be found here

https://github.com/nod-ai/SHARK-Turbine/issues/324
2024-01-19 21:09:46 +05:30
Ilija Kalinić faa4517e83
Implement lowering of torch.aten.remainder.Tensor (#2763)
Closes nod-ai/SHARK-Turbine#349
2024-01-19 18:09:08 +05:30
Andreas Falkenberg 4de4d38b87
Initial commit of NonZero op (#2766) 2024-01-18 15:23:13 -10:00
Rob Suderman b5387c0f29
[onnx] Lowering `onnx.dequantize_linear` to `torch` (#2759)
We can map the per-tensor version of the operation to the dequantize
operation by marking it with the make-quantized-tensor component. This
introduces the `qint*` and `quint*` tensor types, which can be lowered
to the appropriate dequantization behavior during the torch-to-linalg
conversion.
2024-01-18 16:47:21 -08:00
Rob Suderman bd11877f6f
[onnx] Support lowering quantize linear to `torch` (#2751)
We can map the per_tensor case to the `torch.aten.quantize_per_tensor`
operation. In this case we extract the `scale` and `zeropoint` values
and directly invoke the quantization, then return the integer
representation value.
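The eager-mode equivalent of the per-tensor path described above:

```python
import torch

x = torch.tensor([0.0, 0.5, 1.0])
q = torch.quantize_per_tensor(x, scale=0.5, zero_point=0, dtype=torch.qint8)
print(q.int_repr())  # tensor([0, 1, 2], dtype=torch.int8)
```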
2024-01-18 16:33:10 -08:00
Ze Zhang 77a03f2069
torch-to-tosa lowering support for AtenLinalgVectorNormOp (#2734)
This PR adds torch-to-tosa lowering support for AtenLinalgVectorNormOp.

e2e test:
python -m e2e_testing.main --config=tosa

LIT tests:
cmake --build build --target tools/torch-mlir/all

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-01-18 12:32:23 -08:00
Phaneesh Barwaria eed144bfbc
[ONNX][MLIR] add Identity op support (#2754) 2024-01-16 19:06:54 +05:30
kumardeepakamd 87389f0762
[ONNXToTorch] Add conversion for Onnx range (#2752)
Implemented ONNX.Range. The spec says start, limit, and delta are 0-D
tensors that can be double, float, int16, int32, or int64. All int types
are mapped to !torch.int and all float types to !torch.float.
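The torch-side target of the conversion, for reference:

```python
import torch

# onnx.Range(start, limit, delta) maps onto torch.arange semantics.
print(torch.arange(0, 10, 3))        # tensor([0, 3, 6, 9])
print(torch.arange(0.0, 1.0, 0.25))  # tensor([0.0000, 0.2500, 0.5000, 0.7500])
```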

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-15 14:26:46 -05:00
lisaliu1 09421b1cf3
[TorchToLinalg] Add lowering for aten.replication_pad2d (#2715)
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2024-01-15 14:02:27 -05:00
Rob Suderman 197b3b475c
[onnx] Convert `onnx.constant` to `torch` literal tensor (#2748)
Handles the multiple cases of `onnx` constant values and converts them
to `torch` literal tensors. This can include splats with a single
integer or floating point value, a set of explicit integer values, or
an elements array attr of values.
2024-01-15 09:31:22 -08:00
Han-Chung Wang 10acea71be
Bump LLVM to llvm/llvm-project@0cb024b (#2753)
- Add fixes for
af78e5daf0
- Add fixes for
bb6d5c2200
2024-01-15 07:12:12 -08:00
Rob Suderman dc37616d67
[torch][quant] Support quantize and dequantize for torch (#2731)
Handle both `torch.dequantize` and `torch.quantize_per_tensor` including
the op based quantization parameter tracking. This includes adding
`qint32` to torch types as it was missing during the initial type
inclusion.

For testing we only have `torch.int8` and `torch.float` types on
function boundaries as the `qint8` types require passing the scale
and zero point quantization information which is not supported yet.
2024-01-12 19:11:14 -08:00
Chi_Liu c7452af4fa
[MLIR][ONNX] Add OnnxToTorch support for Maxpool Op (#2695)
Add Maxpool ONNX op support.
Add Utils.h/cpp files to create a constant int list for ONNX.
2024-01-12 14:54:38 -08:00
Ze Zhang 670a99ae19
Handle torch.none type in tosa.clamp op (#2739)
This PR updates the torch-to-tosa conversion with the following changes:

- Support torch.none as min/max input argument for tosa.clamp op
- Support negative value as start index for tosa.slice op
- Add tosa.logical_or lowering support

e2e test:
python -m e2e_testing.main --config=tosa

LIT tests:
cmake --build build --target tools/torch-mlir/all

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-01-11 10:36:48 -08:00
James Newling 47ffc90db4
signed/unsigned c++ compiler warning fixes (#2742) 2024-01-11 09:46:46 -08:00
Ilija Kalinić e1a86e480a
Implement lowering of torch.aten.logit (#2697)
Closes nod-ai/SHARK-Turbine#290
2024-01-11 20:25:42 +05:30
Andreas Falkenberg 5862854bc8
[ONNX][TORCH-MLIR] LayerNorm (#2716)
Layer Normalization using the torch.aten.native_layer_norm 

https://github.com/nod-ai/SHARK-Turbine/issues/325
2024-01-11 14:27:04 +05:30
Frederik Harwath 0860c41ee2 Implement aten.reflection_pad2d lowering to linalg 2024-01-10 21:32:22 -10:00
Xida Ren (Cedar) aee1fca251
Minor typo fix: in not implemented message for the exclusive and reverse attributes for cumsum (#2740) 2024-01-10 14:24:37 -08:00
kumardeepakamd 29569713f3
support for onnx.expand operator (#2729)
Maps onnx.expand to torch.aten.broadcast_to; three tests added.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-10 13:05:37 -08:00
Vivek Khandelwal 208ae35583 [MLIR][ONNX] Add OnnxToTorch Support for DepthToSpace op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Vivek Khandelwal 4707d3bdc6 [MLIR][ONNX] Add OnnxToTorch support for Bernoulli and CastLike op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:24:06 +05:30
Vivek Khandelwal 35e8f86792 [MLIR][ONNX] Add OnnxToTorch support for Dropout and Elu op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:23:55 +05:30
zjgarvey 07d0645f64
[RFC] general support for Adaptive Pooling Ops (#2661)
Adaptive pooling ops can only be decomposed into their non-adaptive
counterparts in trivial cases.

For example, the current decomposition for AtenAdaptiveAvgPool1dOp in
DecomposeComplexOps.cpp supports outSize = inSize (i.e., do literally
nothing), and outSize = 1 (i.e., do a batched average).

The reason adaptive pooling ops are difficult to lower to linalg is that
they are not constantly strided. They are computed by taking an input
tensor of shape (N, C, Hin), and an output size Hout, and computing the
output tensor at position (n,c, h) in the following way:

1. compute st(h) = (h*Hin)//Hout
2. compute en(h) = 1 + ((h+1)*Hin -1)//Hout
3. apply a computation (max or avg) to the slice: INPUT[n, c,
st(h):en(h)]
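These three steps can be checked against PyTorch's own adaptive pooling; a reference sketch of the 1-D average case (not the linalg lowering itself):

```python
import torch

def adaptive_avg_pool1d_ref(x, Hout):  # x: (N, C, Hin)
    Hin = x.shape[-1]
    cols = []
    for h in range(Hout):
        st = (h * Hin) // Hout
        en = 1 + ((h + 1) * Hin - 1) // Hout
        cols.append(x[..., st:en].mean(dim=-1))
    return torch.stack(cols, dim=-1)

x = torch.randn(2, 3, 10)
assert torch.allclose(adaptive_avg_pool1d_ref(x, 4),
                      torch.nn.functional.adaptive_avg_pool1d(x, 4))
```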

The provided sample implementation (for ConvertAtenAdaptiveAvgPool1dOp)
uses tensor.extract to access the input tensor inside the payload of a
linalg generic op. This is likely an unattractive use of linalg generic
ops, which is why I am asking for some more targeted feedback on the
validity of this approach before attempting to support the many other
adaptive pooling ops.

Specifically:

- Is the performance of this implementation bad enough to warrant
targeting different dialects entirely? e.g. TMtensor/linalg ext/ etc.
- If the provided implementation is of acceptable performance to the
community, then is it permissible to remove the Adaptive pooling
decompositions from DecomposeComplexOps.cpp? Based on the current
structure of the -torch-decompose-complex-ops pass, it does not seem
possible to only decompose the adaptive ops in special cases (it seems
to get stuck in an infinite loop on a match failure). I would be happy
to instead incorporate the case logic into the conversion directly, and
remove the decompositions once they are rendered completely obsolete.

As long as this approach is acceptable, I can clean up the
implementation with some helper functions, and quickly add support for
each of the remaining Adaptive pooling ops.
2024-01-09 11:14:10 -08:00
Ben Vanik 4dd17f0b71
Fixing implicit double->float truncation warnings. (#2733)
Floating-point literals should use the correct type specifier.
2024-01-08 17:26:38 -05:00
Rob Suderman 985e7796a4
[linalg] Added `aten.clamp` support with integers to `torch-to-linalg` (#2718)
The lowering for `aten.clamp` did not support integer types. Added
support for integer types including a signed integer test.
2024-01-05 15:16:49 -08:00
Han-Chung Wang 6096fcb347
[OnnxToTorch] Delete unused variables. (#2728) 2024-01-04 17:30:05 -08:00
John Wu 4e5e34d215
[MLIR][ONNX] Add OnnxToTorch support for Slice Op (#2696) 2024-01-03 19:41:10 -08:00
Xida Ren (Cedar) 1778314620
add basic cumsum. this doesn't support the exclusive and reverse attrs (#2717)
fixes #2711
2024-01-03 09:52:59 -08:00
kumardeepakamd 9adad9bc40
Add support for reflection_pad1d (#2706)
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR
https://github.com/llvm/torch-mlir/pull/2693.
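The op's semantics for reference (mirrored values exclude the edge element):

```python
import torch
import torch.nn.functional as F

x = torch.arange(4.0).reshape(1, 1, 4)
print(F.pad(x, (2, 1), mode="reflect"))
# tensor([[[2., 1., 0., 1., 2., 3., 2.]]])
```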

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-02 14:05:11 -05:00
Xida Ren (Cedar) 6660a26594
lower torch.aten.isinf to linalg (#2638)
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
2023-12-28 17:20:32 -08:00
Xida Ren (Cedar) 9fc212ea9a
support Onnx opset 1-13 ReduceMean where axes is supplied as an attr (#2703)
(instead of an input)

Addresses part of #2689. fixes #2702
2023-12-28 09:31:41 -08:00
Xida Ren (Cedar) d560698e3d
Lower `onnx.split` to `torch.aten` (#2686) 2023-12-27 17:53:07 -08:00
aldesilv 2d796b7502
lower onnx max op to torch aten maximum op (#2618)
lower onnx min op to torch aten minimum op
2023-12-27 11:07:35 -08:00
aldesilv 336cfb64b5
OnnxToTorch support for onnx.Mul op (#2699) 2023-12-27 10:50:08 -08:00
Xida Ren (Cedar) 6847fc1fc6
Fix since-opset too high (#2701)
Addresses two of the ops from
https://github.com/llvm/torch-mlir/issues/2689

https://github.com/llvm/torch-mlir/issues/2700
2023-12-27 10:08:09 -08:00
aldesilv abc6b0a25a
onnx to torch pow support (#2656) 2023-12-27 09:34:48 -08:00