Commit Graph

907 Commits (eb7bf78a9c1e250949cf0151628f35fb0ac06903)

Author SHA1 Message Date
Gaurav Shukla 0b46d1110a
[MLIR][ONNX] Add support for onnx.ScatterND (#3479)
This commit adds support for the onnx.ScatterND op in the onnx pipeline.
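As a rough illustration of the op's semantics, here is a hedged NumPy sketch (not the code added by this commit): ONNX ScatterND copies `data` and writes each slice of `updates` at the coordinate given by the corresponding row of `indices`.
```python
import numpy as np

def scatter_nd(data, indices, updates):
    # Copy the input, then overwrite the positions named by `indices`.
    out = data.copy()
    k = indices.shape[-1]                      # length of each coordinate tuple
    idx_rows = indices.reshape(-1, k)
    upd_rows = updates.reshape((-1,) + data.shape[k:])
    for idx, upd in zip(idx_rows, upd_rows):
        out[tuple(idx)] = upd
    return out

data = np.zeros((4, 4))
indices = np.array([[0], [2]])                 # scatter whole rows 0 and 2
updates = np.array([[1., 1., 1., 1.],
                    [2., 2., 2., 2.]])
print(scatter_nd(data, indices, updates))
```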

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-07-08 13:27:14 +05:30
Yuanqiang Liu 3225f20ab1
[Stablehlo] use index type as dim size, avoid to generate index_cast (#3526)
For example, the original IR is:
```
module attributes {torch.debug_module_name = "Matmul3D"} {
  func.func @forward(%arg0: tensor<?x?x?xf32>, %arg1: tensor<?x?x?xf32>) -> tensor<?x?x?xf32> {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %c2 = arith.constant 2 : index
    %dim = tensor.dim %arg1, %c0 : tensor<?x?x?xf32>
    %0 = arith.index_cast %dim : index to i64
    %dim_0 = tensor.dim %arg1, %c1 : tensor<?x?x?xf32>
    %1 = arith.index_cast %dim_0 : index to i64
    %dim_1 = tensor.dim %arg1, %c2 : tensor<?x?x?xf32>
    %2 = arith.index_cast %dim_1 : index to i64
    %from_elements = tensor.from_elements %0, %1, %2 : tensor<3xi64>
    %3 = stablehlo.dynamic_broadcast_in_dim %arg1, %from_elements, dims = [0, 1, 2] : (tensor<?x?x?xf32>, tensor<3xi64>) -> tensor<?x?x?xf32>
    %4 = stablehlo.dot_general %arg0, %3, batching_dims = [0] x [0], contracting_dims = [2] x [1] : (tensor<?x?x?xf32>, tensor<?x?x?xf32>) -> tensor<?x?x?xf32>
    return %4 : tensor<?x?x?xf32>
  }
}
```
After using IndexType, the IR is:
```
module attributes {torch.debug_module_name = "Matmul3D"} {
  func.func @forward(%arg0: tensor<?x?x?xf32>, %arg1: tensor<?x?x?xf32>) -> tensor<?x?x?xf32> {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %c2 = arith.constant 2 : index
    %dim = tensor.dim %arg1, %c0 : tensor<?x?x?xf32>
    %dim_0 = tensor.dim %arg1, %c1 : tensor<?x?x?xf32>
    %dim_1 = tensor.dim %arg1, %c2 : tensor<?x?x?xf32>
    %from_elements = tensor.from_elements %dim, %dim_0, %dim_1 : tensor<3xindex>
    %0 = stablehlo.dynamic_broadcast_in_dim %arg1, %from_elements, dims = [0, 1, 2] : (tensor<?x?x?xf32>, tensor<3xindex>) -> tensor<?x?x?xf32>
    %1 = stablehlo.dot_general %arg0, %0, batching_dims = [0] x [0], contracting_dims = [2] x [1] : (tensor<?x?x?xf32>, tensor<?x?x?xf32>) -> tensor<?x?x?xf32>
    return %1 : tensor<?x?x?xf32>
  }
}
```

The benefits of using IndexType for the shape tensor:
* simplifies the IR by avoiding the generation of `arith.index_cast`
* lets the backend compiler decide the index width of the shape tensor
* lets the stablehlo backend serialize dynamic-shape IR via
[shape_legalize_to_stablehlo](https://github.com/openxla/stablehlo/blob/main/stablehlo/tests/shape_legalize_to_stablehlo.mlir)
2024-07-07 18:03:03 +08:00
Ze Zhang d466d5b809
Register fake_quantize related ops (#3522)
Register `aten.fake_quantize_per_channel_affine` and
`aten.fake_quantize_per_tensor_affine.tensor_qparams` ops
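For reference, this is roughly what the two ops compute at the Python level (a hedged sketch with arbitrary values; exact qparams dtype requirements may vary across PyTorch versions):
```python
import torch

x = torch.randn(2, 3, 4)

# Per-tensor fake quantization: quantize/dequantize without changing dtype.
y = torch.fake_quantize_per_tensor_affine(x, 0.1, 0, -128, 127)

# Per-channel variant: one (scale, zero_point) pair per channel along axis 1.
scales = torch.full((3,), 0.1)
zero_points = torch.zeros(3, dtype=torch.int32)
z = torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, -128, 127)
print(y.shape, z.shape)
```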

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-07-05 11:02:03 -07:00
Sagar Kulkarni 0fe74845da
[ONNX] Fix bug in ONNXToTorch PadOp's pads tensor rearrangement (#3485)
Fix the pads tensor rearrangement such that the representation changes
from [x1_begin, x2_begin, ..., x1_end, x2_end, ...] to [xn_begin, xn_end,
..., x2_begin, x2_end, x1_begin, x1_end], where x1, x2, ..., xn are the
dimensions described by the pads tensor argument.
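A small illustration of that rearrangement (hedged sketch; `onnx_pads_to_torch` is a hypothetical helper name, not one from this patch):
```python
# ONNX orders pads as [x1_begin, ..., xn_begin, x1_end, ..., xn_end], while
# torch.nn.functional.pad expects (begin, end) pairs for the last dim first.
def onnx_pads_to_torch(pads):
    n = len(pads) // 2
    begins, ends = pads[:n], pads[n:]
    torch_pads = []
    for i in reversed(range(n)):               # xn first, x1 last
        torch_pads += [begins[i], ends[i]]
    return torch_pads

# 2-D example: 1 before / 2 after on dim 0, 3 before / 4 after on dim 1
print(onnx_pads_to_torch([1, 3, 2, 4]))        # -> [3, 4, 1, 2]
```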

---------

Co-authored-by: zjgarvey <zjgarvey@gmail.com>
Co-authored-by: zjgarvey <47986913+zjgarvey@users.noreply.github.com>
2024-07-03 15:02:49 -05:00
jinchen 3915db0a86
[ONNX] Add OnnxToTorch support for CenterCropPad (#3496) 2024-06-28 12:47:29 -07:00
Aart Bik 6fece25ff3
[torch-mlir][sparse] add decomposition features to sparse compiler (#3505)
Fixes https://github.com/llvm/torch-mlir/issues/3499
2024-06-28 10:18:36 -07:00
zjgarvey af236dab66
Add support for multiple dynamic reassociation dims for unflatten.int (#3504)
Addresses an issue with onnx.Gather lowering to linalg:
<https://github.com/nod-ai/SHARK-Turbine/issues/242>

The builder for tensor.expand_shape, without an explicitly provided
output shape, fails to infer an output shape in the case of multiple
dynamic reassociation dims. I tried adding the output shape explicitly
for tensor.expand_shape, but ran into compilation issues later on (see
<https://github.com/iree-org/iree/issues/17760>).

This PR adds support by lowering this op to tensor.reshape when multiple
dynamic reassociation dims are provided.
2024-06-28 09:59:51 -07:00
Phaneesh Barwaria 5a627c46b7
onnx.DFT basic support (#3463)
- adds support for DFT v20 on the FFT and IFFT paths
- adds the required skeleton code for IFFT ops to be recognised in torch-mlir
2024-06-28 20:08:43 +05:30
Christopher McGirr 7e6d76e997
[Torch] Fix torch.constant.int operation parsing (#3476)
Due to the custom operation parser, the printer and the parser expected
two different forms: one with the attribute dictionary before the value and
the other with it after. To match the other constant ops,
`torch.constant.int` now follows the `value attr-dict` format, and the
parser has been updated accordingly.
2024-06-28 16:06:52 +02:00
jinchen 6d0ca499e6
[ONNX] Add OnnxToTorch support for ReverseSequence (#3495) 2024-06-27 14:33:41 -07:00
Phaneesh Barwaria 39d1332008
add onnx loop support (#3408)
- Adds limited support for lowering onnx.Loop to primLoopOp.
- In the `torch-to-scf` lowering there is a check to see whether a loop is
for-like: a primLoopOp is for-like when its input condition is a
`trueBoolConstant`. To let the ONNX-to-Torch lowering take advantage of
this, the implementation checks for specific op patterns in the loopBody
region, decides whether the loop is for-like, and uses the right input
condition op.
- To adapt the onnxLoopBody to the torchLoopBody, we adapt the input
block arguments and set the correct output condition variable in the
loop body.
- scanOutput variables are currently not supported.
2024-06-27 17:08:44 +05:30
Matthias Gehre 6678e1a256
TorchToLinalg: Try folding shape computations to keep static shapes when possible (#3475)
Before this PR, a statically shaped aten.convolution would generate
dynamically shaped linalg IR, and even `-canonicalize` would not be able
to fold it back into static shapes. This PR ensures that shape
calculations are folded on construction, so that statically shaped linalg
IR is generated directly.

We achieve that by ensuring that `arith` ops involved in computing
shapes are created via `createOrFold`, so that later uses of
`getAsOpFoldResult` see constants instead of those ops.

For example
```
module {
  func.func @forward(%arg0: !torch.vtensor<[32,336,112,112],f32>,
                        %arg1: !torch.vtensor<[336,168,3,3],f32>, 
                        %arg2: !torch.vtensor<[336],f32>) 
                        -> !torch.vtensor<[32,336,56,56],f32> {
    %false = torch.constant.bool false
    %int2 = torch.constant.int 2
    %int1 = torch.constant.int 1
    %0 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
    %1 = torch.prim.ListConstruct %int2, %int2 : (!torch.int, !torch.int) -> !torch.list<int>
    %2 = torch.prim.ListConstruct  : () -> !torch.list<int>
    %3 = torch.aten.convolution %arg0, %arg1, %arg2, %1, %0, %0, %false, %2, %int2 
    : !torch.vtensor<[32,336,112,112],f32>, !torch.vtensor<[336,168,3,3],f32>, !torch.vtensor<[336],f32>, !torch.list<int>,
      !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int
   -> !torch.vtensor<[32,336,56,56],f32>
    return %3 : !torch.vtensor<[32,336,56,56],f32>
  }
}
```
would result in
```
[...]
  %padded = tensor.pad %2 low[%14, %15, %16, %17] high[%14, %15, %16, %17] {
    ^bb0(%arg3: index, %arg4: index, %arg5: index, %arg6: index):
      tensor.yield %cst : f32
    } : tensor<32x336x112x112xf32> to tensor<?x?x?x?xf32>
[...]
  %45 = linalg.conv_2d_ngchw_gfchw {dilations = dense<1> : vector<2xi64>, strides = dense<2> : vector<2xi64>}
    ins(%expanded, %expanded_37 : tensor<?x2x?x?x?xf32>, tensor<2x168x168x3x3xf32>)
    outs(%expanded_44 : tensor<32x2x168x?x?xf32>) -> tensor<32x2x168x?x?xf32>
[...]
```
and with this PR all shapes are static.
2024-06-27 08:43:10 +02:00
Suraj Sudhir 6eebe61bfe
[Tosa] Conversion from torch.__interpolate to tosa.resize() (#3488)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-06-26 09:10:14 -07:00
zjgarvey 368fabf0c1
[ONNX] Basic Support for DeformConv (#3469)
This adds a torchvision op to torch-mlir and a path from onnx.DeformConv
to torchvision.deform_conv2d.

I'm not implementing the torch->linalg lowering for the torchvision op
yet, but posting this PR to get feedback on some of the choices being
made here and to flesh out the onnx frontend a bit.
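For context, a hedged usage sketch of the torchvision op that onnx.DeformConv now maps to (shapes are arbitrary, and zero offsets reduce it to a plain convolution):
```python
import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 3, 8, 8)                    # NCHW input
weight = torch.randn(4, 3, 3, 3)               # (out_ch, in_ch, kh, kw)
# offsets: (N, 2 * kh * kw, out_h, out_w) for a single offset group
offset = torch.zeros(1, 2 * 3 * 3, 6, 6)
y = deform_conv2d(x, offset, weight)
print(y.shape)                                 # torch.Size([1, 4, 6, 6])
```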
2024-06-25 12:16:51 -05:00
zjgarvey e346c911f7
[ONNX] Add basic support for RoiAlign (#3493)
This adds an onnx->torch conversion for onnx.RoiAlign into
torchvision.roi_align or torchvision.roi_pool, and adds those two
torchvision ops to torch-mlir.
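A hedged usage sketch of torchvision.ops.roi_align, the main target of the conversion (values arbitrary):
```python
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 16, 32, 32)
# boxes: (K, 5) rows of [batch_index, x1, y1, x2, y2] in input coordinates
boxes = torch.tensor([[0., 4., 4., 20., 20.]])
pooled = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=1.0,
                   sampling_ratio=2)
print(pooled.shape)                            # torch.Size([1, 16, 7, 7])
```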
2024-06-25 11:02:45 -05:00
Vinayak Dev 02340408b7
[torch] Add OnnxToTorch lowering for Onnx.STFT op (#3492)
Adds OnnxToTorch lowering for `Onnx.STFT` op.
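For orientation, an illustrative torch.stft call, the torch-level op this lowering targets (parameters here are arbitrary):
```python
import torch

signal = torch.randn(1, 16000)                 # (batch, samples)
spec = torch.stft(signal, n_fft=400, hop_length=160,
                  window=torch.hann_window(400), return_complex=True)
print(spec.shape)                              # (1, 201, n_frames)
```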
2024-06-25 19:00:45 +05:30
Vivek Khandelwal 3c3fbe4680
[ONNX] Add OnnxToTorch lowering for Onnx.Upsample Op (#3371)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-25 12:58:31 +05:30
Sambhav Jain 09f502667b
`AtenTensorOp::fold` should not fold when result type is not fully specified (#3494)
In one of our downstreams, we encountered an internal assertion failure
in an intermediate pass from `AtenTensorOp::fold` invocation:
```
external/llvm-project/llvm/include/llvm/Support/Casting.h:650: decltype(auto) llvm::dyn_cast(const From &) [To = mlir::torch::Torch::NonValueTensorType, From = mlir::Type]: Assertion `detail::isPresent(Val) && "dyn_cast on a non-existent value"' failed.
```

for this snippet in the IR:
```
%arg1: !torch.tensor {torch.type_bound = !torch.vtensor<[1,1,15360],f32>}
...
    %218 = torch.aten.size %arg1 : !torch.tensor -> !torch.list<int>
    %219 = torch.aten.tensor %218, %none, %none, %false : !torch.list<int>, !torch.none, !torch.none, !torch.bool -> !torch.tensor
```

Turns out this was
[fixed](https://github.com/llvm/torch-mlir/pull/3189/files#diff-dc8ed165c207918e606490eee3984b1ad51d7034e6aac36fc046bf47f6f03f4fR3719)
eventually (and we were on an old hash of torch-mlir). This PR submits
just the lit test for test coverage on that specific change:
```c++
OpFoldResult AtenTensorOp::fold(FoldAdaptor adaptor) {
  auto resultTy = dyn_cast<ValueTensorType>(getType());
  // lit test this
  if (!resultTy || !resultTy.hasSizes() || !resultTy.hasDtype())
    return nullptr;
  ...
```
2024-06-24 15:22:50 -07:00
Vivek Khandelwal 83bfb6fb19
[ONNX] Add OnnxToTorch lowering for OptionalHasElement op (#3472)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-21 11:19:00 +05:30
Vivek Khandelwal d29ad4dfbd
[ONNX] Fix Onnx.Hardsigmoid lowering (#3239)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-21 11:18:14 +05:30
zjgarvey 694210f429
[TorchToLinalg] Fix Quantized Convolution Accumulator Type (#3459)
1. truncates zero-points to i32
2. modifies the default accumulator type for i8 from i64 to i32. 
3. now uses the input dtype to infer accumulator dtype.
2024-06-20 13:54:20 -07:00
Peiming Liu ba16bad8c7
[torch-mlir] bump stablehlo/llvm version (#3471)
Update to llvm/llvm-project@5207632f86
Update to openxla/stablehlo@d41390c3a7
2024-06-18 16:59:53 -07:00
Vivek Khandelwal 822d763308
[ONNX] Add OnnxToTorch lowering for Optional, OptionalGetElement op (#3467)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-18 19:40:18 +05:30
Umang Yadav 59bade3376
[ONNX] Add missing "Abs" in GlobalLpPool (#3460)
Taking `abs` is required to mimic the same logic as onnx/onnxruntime.
Without `abs`, it would not produce correct results for negative values.

Reference code : 

f5b6f6dc26/onnxruntime/core/providers/cpu/nn/pool_functors.h (L604)


375c161c67/onnx/reference/ops/op_lp_pool.py (L31)
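A quick numeric check of the point above (illustrative only): LpPool computes (sum |x|^p)^(1/p), and dropping the abs gives wrong answers for negative inputs whenever p is odd.
```python
import numpy as np

x = np.array([-2.0, -3.0])
p = 3
with_abs = np.sum(np.abs(x) ** p) ** (1.0 / p)
without_abs = np.sum(x ** p) ** (1.0 / p)      # fractional power of a negative sum -> nan
print(with_abs, without_abs)
```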
2024-06-17 11:17:16 +05:30
Andrea 🦈 51902ec2dc
Create MLIR functions for ONNX operators that are functions (#3409)
Resolves #3384.

Many ONNX operators are defined by functions and therefore could be
expanded into simpler ONNX operations during importing, avoiding the
need for tools downstream to support these operators directly.

This commit adds this capability to onnx_importer.py. When importing a
node, the schema for the node's operator is retrieved. If the schema
provides a function for the operator, a specialized version for the
node's types and attributes will be created and imported as an MLIR
function with private visibility. An MLIR function call will then be
emitted, instead of a normal operator node. Caching is used to avoid
generating redundant functions within the same module.

In order to avoid a disruptive change to the importer output for a
large number of operators that already have TorchOnnxToTorch support,
an allowlist strategy is used by default. With this commit, only one
operator is allowlisted for expansion, MeanVarianceNormalization.
However, many other operators can be correctly expanded by the current
code, so hopefully the allowlist can be gradually extended. It is
possible to disable the allowlist in the configuration, in which case
all functions are expanded (useful for testing).

Tools downstream of the importer may now need to do inlining when
consuming the output of the importer, e.g.:

  cat imported.mlir | torch-mlir-opt --inline --convert-onnx-to-torch

Explanations for subtle code changes:

- Looking up the correct schema and function for an operator requires
  knowing the opset version. NodeImporter retrieves this from the
  opset imports on the ModelProto retained by the GraphInfo. Previously,
  the model_proto field on GraphInfo was None when importing a subgraph
  in import_regions, but this conflicts with the new need for opset
  version info. Since the apparent purpose of setting it to None was to
  control how GraphInfo generates its input map, a new flag is added to
  GraphInfo (is_subgraph) to control this behavior, so that the actual
  ModelProto can now be provided without breaking this. This also turned
  out to be useful for getting the Config via ModelInfo via GraphInfo.
- Some operators' functions are context-dependent, which means the
  function definition depends on the types of the inputs. Therefore node
  importing now needs to look up the types of a node's inputs, not just
  its outputs as was the case previously. Consequently the operand to
  find_type_proto_for_name() may now be a graph input or initializer in
  some cases, so it has to be updated.
2024-06-14 10:11:26 -07:00
Manupa Karunaratne d2b663ece7
Add onnx op LRN lowering (#3432)
This commit adds support for lowering
Onnx LRN op to aten.
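At the PyTorch level the same computation is exposed as torch.nn.functional.local_response_norm, where ONNX's `bias` plays the role of `k`; a hedged illustrative call (not the lowering itself):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)
# ONNX LRN(size, alpha, beta, bias) ~ local_response_norm(size, alpha, beta, k=bias)
y = F.local_response_norm(x, size=5, alpha=1e-4, beta=0.75, k=1.0)
print(y.shape)
```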
2024-06-14 16:44:43 +00:00
Arham Khan 09c988046c
[ONNX] Add OnnxToTorch lowering for Onnx.NegativeLogLikelihoodLoss Op (#3380)
This implements the Onnx.NegativeLogLikelihoodLoss op using the
signature provided
[here](https://onnx.ai/onnx/operators/onnx__NegativeLogLikelihoodLoss.html)
by replacing it with a `NLLLossForward` op.

Additionally, I included a helper function `get_loss_reduction_enum` to
convert from a string `reduction` parameter to the corresponding
intended integer value since this is an operation that will be reused
for any loss function module. This differs from `get_reduction_enum` in
`TorchUpstream.cpp` which handles the `reduce` parameter from
`scatter_reduce` type operations.
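For orientation, a hedged example of the torch-level loss this maps onto, with the string `reduction` parameter that the new helper converts to an enum (values arbitrary):
```python
import torch
import torch.nn.functional as F

log_probs = torch.log_softmax(torch.randn(4, 10), dim=1)   # (N, C)
target = torch.randint(0, 10, (4,))
loss = F.nll_loss(log_probs, target, reduction="mean")
print(loss)
```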
2024-06-14 22:01:11 +05:30
Vivek Khandelwal 2ea2bc3948
[ONNX] Add OnnxToTorch Lowering for GroupNormalization op (#3458)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-14 16:18:53 +00:00
Umang Yadav 04c6479350
[ONNX] Add onnx parser for LpPool operator (#3449)
Similar to https://github.com/llvm/torch-mlir/pull/3435

Solves https://github.com/nod-ai/SHARK-Turbine/issues/728
2024-06-14 21:41:18 +05:30
Vinayak Dev 39d882f7c9
[torch] Add OnnxToTorch lowering for the Col2Im op (#3424)
Adds OnnxToTorch lowering for the `onnx.Col2Im` op.
2024-06-13 08:42:06 +00:00
Surya Jasper de7f058a0e
[MLIR][ONNX] Add OnnxToTorch support for MaxRoiPool Op (#3395)
This PR adds OnnxToTorch support for MaxRoiPool op
2024-06-13 10:46:14 +05:30
Umang Yadav 9b76a2e3eb
[ONNX] add onnx lowering for global lp pool operator (#3435)
Solves https://github.com/nod-ai/SHARK-Turbine/issues/727

Uses AvgPool to implement GlobalLpPool similar to this
https://github.com/onnx/onnx/blob/main/onnx/reference/ops/op_lp_pool.py

cc: @vivekkhandelwal1
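A quick numeric check of the identity this relies on (hedged sketch, not the lowering code): GlobalLpPool over the spatial dims equals an average pool rescaled by the element count.
```python
import numpy as np

x = np.random.randn(2, 3, 4, 4)
p = 2
n = x.shape[2] * x.shape[3]
via_avg = (np.mean(np.abs(x) ** p, axis=(2, 3)) * n) ** (1.0 / p)
direct = np.sum(np.abs(x) ** p, axis=(2, 3)) ** (1.0 / p)
print(np.allclose(via_avg, direct))            # True
```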
2024-06-13 10:37:08 +05:30
zjgarvey de28c8540b
[ONNX] add int16 quantization support (#3446)
There is currently no int16 quantization support in torch. This patch
adds a new mlir type to correspond to the missing "torch.qint16" type,
and enables lowering of quantization-related onnx ops using int16 types.

In follow-up patches, custom quantization logic for ops like
aten.matmul/aten.mm/aten.convolution may need to be revisited to allow
support for qint16. The passes in FuseQuantizedOps.cpp may also need
slight modifications.
2024-06-12 10:37:22 +05:30
zjgarvey 7cd3368b20
[ONNX] Fix resize ceil numerics and add half_pixel_symmetric support (#3443)
This patch fixes several failing tests in our [external test
suite](https://github.com/nod-ai/SHARK-TestSuite/tree/main/iree_tests/onnx/node/generated),
and addresses some of the issues discussed in #3420
2024-06-11 22:35:50 -05:00
Matthias Gehre e07a0bfc54
onnx.resize: Add support for coordTfMode "half_pixel" (#3441)
half_pixel is also the default mode used by ONNX, see
https://onnx.ai/onnx/operators/onnx__Resize.html
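For intuition, the half_pixel mapping from output to input coordinates as defined by the ONNX Resize spec (sketch only, not the lowering code):
```python
def half_pixel(x_out, scale):
    return (x_out + 0.5) / scale - 0.5

print([half_pixel(i, 2.0) for i in range(4)])  # [-0.25, 0.25, 0.75, 1.25]
```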
2024-06-10 20:59:29 +02:00
Aart Bik d77bab37d1
[torch-mlir][sparse] re-enable all sparse tests (#3444)
this fixes the following issue:

https://github.com/llvm/torch-mlir/issues/3418
2024-06-10 11:19:32 -07:00
Sambhav Jain 7e0e23c668
Test custom op import with symbolic shapes (#3431)
Tests the basic constructs of registering a custom op and its abstract
implementations (with FakeTensors) in python, going through TorchDynamo
export, followed by importing the shape expressions in the Torch
dialect.

Also fixes the importer, where previously the symbolic bind op insertion
was not gated in one place.
2024-06-09 00:32:49 -07:00
Vivek Khandelwal d35b6b412a
[ONNX] Add OnnxToTorch Lowering for Sequence Ops (#3425)
This commit adds the lowering for the SequenceAt, SequenceEmpty,
SequenceInsert, and SequenceErase ops.

Signed-Off By: Vivek Khandelwal<vivekkhandelwal1424@gmail.com>
2024-06-08 09:58:11 +05:30
aldesilv f794582b18
add resize nearest mode round_prefer_floor, round_prefer_ceil, ceil (#3421) 2024-06-07 14:04:11 -05:00
Vivek Khandelwal 1a9c0a35a9
[Onnx] Add Onnx->Torch lowering for Onnx.Shrink Op (#3385)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-07 22:47:27 +05:30
Suraj Sudhir 1c2778dd56
[ONNX] Conv op adds support for asymmetric padding. (#3426)
Supports asymmetric padding by performing a torch.nn.functional.pad on
the input before performing the convolution.
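A hedged illustration of the approach (arbitrary shapes): pad the input asymmetrically with F.pad first, then run the convolution without padding.
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 10, 10)
w = torch.randn(8, 3, 3, 3)
# pad last dim by (left=0, right=1) and second-to-last dim by (top=0, bottom=1)
x_padded = F.pad(x, [0, 1, 0, 1])
y = F.conv2d(x_padded, w, padding=0)
print(y.shape)                                 # torch.Size([1, 8, 9, 9])
```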

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-06-07 09:54:39 -07:00
Sambhav Jain d0a818a03e
Representing Symbolic Shape Expressions in Torch Dialect (#3372)
Torch Dialect with symbolic shape expressions:
```ll
module {                                                                                                                                                                                                     
  func.func @main(%arg0: !torch.vtensor<[?,?,3],f32>, %arg1: !torch.vtensor<[?,?,3],f32>) -> !torch.vtensor<[?,?,3],f32> {                                                                                   
    %0 = torch.symbolic_int "s0" {min_val = 5, max_val = 10} : !torch.int                                                                                                                                    
    %1 = torch.symbolic_int "s1" {min_val = 0, max_val = 100} : !torch.int                                                                                                                                   
    %2 = torch.symbolic_int "s3" {min_val = 0, max_val = 50} : !torch.int                                                                                                                                    
    
    torch.bind_symbolic_shape %arg0, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                          
    torch.bind_symbolic_shape %arg1, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                          
    
    %3 = torch.aten.tanh %arg0 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>                                                                                                                  
    torch.bind_symbolic_shape %3, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                             
    
    %4 = torch.aten.sigmoid %arg1 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>                                                                                                               
    torch.bind_symbolic_shape %4, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>                                                                                             
    
    %5 = torch.prim.ListConstruct %3, %3, %4 : (!torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>) -> !torch.list<vtensor>                                               
    %int1 = torch.constant.int 1                                                                                                                                                                             
    %6 = torch.aten.cat %5, %int1 : !torch.list<vtensor>, !torch.int -> !torch.vtensor<[?,?,3],f32>                                                                                                          
    torch.bind_symbolic_shape %6, [%0, %1, %2], #affine_map<()[s0, s1, s2] -> (s0, s1 * 2 + s2, 3)> : !torch.vtensor<[?,?,3],f32>                                                                            
    
    return %6 : !torch.vtensor<[?,?,3],f32>                                                                                                                                                                  
  }                                                                                                                                                                                                          
}              
```

For reference, this is the TorchDynamo exported program with symbolic
shape expressions that the above Torch dialect program is imported from:
```py
ExportedProgram:                                                                                                                                                                                             
    class GraphModule(torch.nn.Module):                                                                                                                                                                      
        def forward(self, x: "f32[s0, s1, 3]", y: "f32[s0, s3, 3]"):                                                                                                                                         
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:31 in forward, code: a = torch.tanh(x)                                        
            tanh: "f32[s0, s1, 3]" = torch.ops.aten.tanh.default(x);  x = None                                                                                                                               
                                                                                                                                                                                                             
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:32 in forward, code: b = torch.sigmoid(y)                                     
            sigmoid: "f32[s0, s3, 3]" = torch.ops.aten.sigmoid.default(y);  y = None                                                                                                                         
                                                                                                                                                                                                             
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:33 in forward, code: return torch.cat((a, a, b), dim=1)                       
            cat: "f32[s0, 2*s1 + s3, 3]" = torch.ops.aten.cat.default([tanh, tanh, sigmoid], 1);  tanh = sigmoid = None                                                                                      
            return (cat,)                                                                                                                                                                                    
                                                                                                                                                                                                             
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cat'), target=None)])                                               
Range constraints: {s0: ValueRanges(lower=5, upper=10, is_bool=False), s1: ValueRanges(lower=0, upper=100, is_bool=False), s3: ValueRanges(lower=0, upper=50, is_bool=False)} 
```

Huge credit to @stellaraccident for the inputs that helped evaluate the
various design options and arrive at the representation of choice.


- [x] Op definitions for symbolic_int and bind_symbolic_shape ops
- [x] fx_importer updates to import range constraints + create
symbolic_int ops
- [x] fx_importer changes for AffineMapAttr building + adding
bind_symbolic_shape ops
- [x] custom printer/parser for inlined AffineMap expressions in mlir
assembly
- [x] Dialect lit test
- [x] fx_importer python lit tests
- [ ] Cleanup pass to remove these ops (can add in a follow-on)
2024-06-07 04:04:03 -07:00
Vivek Khandelwal 72837fbb3d
build: manually update PyTorch version (#3340)
Set PyTorch and TorchVision version to nightly release 2024-05-14.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-06 22:23:40 +05:30
Vivek Khandelwal 35dd8c52cd
[ONNX] Add OnnxToTorch Lowering for MaxUnpool op (#3413)
This commit also adds the Torch declaration for aten.max_unpool2d and
aten.max_unpool3d op. The TorchToLinalg lowering for the same will be
added in a follow-up commit.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-04 21:09:53 +05:30
Xida Ren (Cedar) 11c3281a8a
Fix reducesum onnx lit test to linalg lowering fails (#3218)
fixes https://github.com/nod-ai/SHARK-Turbine/issues/653

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-06-03 16:36:09 -04:00
Vivek Khandelwal 6382dbbcc0
[ONNX] Add OnnxToTorch lowering for SpaceToDepth op (#3393)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-03 20:29:39 +05:30
Surya Jasper fc100a117d
[MLIR][ONNX] Add OnnxToTorch support for Scatter Op (#3400)
This PR adds OnnxToTorch support for Scatter op
2024-05-31 07:36:48 +00:00
zjgarvey 074098d20c
Modifies onnx resize lowering to fix numerical issues (#3381)
Updates:

- unsupported coordinate transformation modes now report a match failure.
- fixes a bug that was introduced in the last patch for resize (my
bad...)
- uses actual x and y coordinates for computing weights in bilinear
interpolation (rather than eps modified values)
- slightly simplifies the bilinear interpolation payload for readability
and performance
- passes coordinate transformation mode information from an onnx.Resize
op to the mode string for the aten._interpolate op. This allows us to
perform custom logic in the torch->linalg lowering to support
onnx.Resize options without losing the default behaviors of the
interpolate op.
2024-05-30 20:34:37 -04:00
Vivek Khandelwal d7b8f00d01
[ONNX] Add OnnxToTorch Lowering for LpNormalization op (#3397)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-30 23:05:26 +05:30
Angel Zhang 2e194e13d6
[Torch] Fix bugs for `Torch::AtenOneHotOp` (#3350)
This PR fixes the bugs for `Torch::AtenOneHotOp` by:

1) Using `Torch::kUnknownSize` as the default value for `numClasses` in
   the pattern matching stage in `DecomposeAtenOneHotOp`
2) Adding `AtenIntScalarOp` to the patterns in `TorchToArith`
3) Handling both `int` and `float` types for `off` and `on` values in
`TorchOnnxToTorch` conversion

It also includes:

1) A new test in `TorchToArith/basic.mlir`, for `torch.aten.Int.Scalar`,
and
2) A new test in `decompose-complex-ops.mlir`, for `torch.aten.one_hot`

**Dependencies**

This PR is dependent on #3334.
2024-05-22 17:19:08 +00:00
Yuanqiang Liu f4bfe3f948
Bump llvm and stablehlo (#3377)
* bump llvm to 1e5f29af81a5f6fda308074f6345b9fba4faa71c
* bump stablehlo to c44d9af8d4879adccf1054cb61a53377ae5898cb
2024-05-22 23:28:45 +08:00
Angel Zhang 52be4bdc18
[ONNX] Fix bugs for the `onnx.OneHot` operator (#3334)
This commit fixes the bugs for the `onnx.OneHot` operator by:

1) Converting negative indices to non-negative indices
2) Handling both `int` and `float` types for `off` and `on` values
3) Using the correct result type

It also includes a new unit test.
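A small sketch of fix (1): ONNX OneHot allows indices in [-depth, depth-1], and negative entries are wrapped by adding depth before the one-hot expansion (illustrative, not the conversion code):
```python
import torch
import torch.nn.functional as F

depth = 4
indices = torch.tensor([0, -1, 2])             # -1 means depth - 1 == 3
wrapped = torch.where(indices < 0, indices + depth, indices)
print(F.one_hot(wrapped, num_classes=depth))
```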
2024-05-22 08:32:00 -04:00
Aart Bik 560ca24771
[torch-mlir][sparse] replace xavier with ones initialization (#3374)
ensures stability of results between different setups
2024-05-21 17:12:55 -07:00
RattataKing fcf48872b3
[ONNX] Implement Softsign op (#3373) 2024-05-21 12:10:26 -07:00
Aart Bik c0e7d2667d
[torch-mlir][sparse] inference mode for sparse GCN test (#3369) 2024-05-20 19:52:16 -07:00
zjgarvey 297c270980
onnx.Resize and aten._interpolate : allow n spatial dims. (#3368)
The old lowering only had logic for 2d (i.e. images). This patch allows
interpolation for n spatial dims, which is required for some 3d vision
models such as

- onnx/models/pytorch-3dunet_vaiq_int8

which successfully compiles and runs with this patch.
2024-05-20 13:35:27 -07:00
lialan 99511cef82
Implement `onnx.Hardmax` lowering (#3342)
Co-authored-by: Ubuntu <xunli@wsno1.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
Co-authored-by: Hasekawa-Takumi <bewater.private476@passmail.net>
2024-05-20 20:56:24 +05:30
Wu Yuan cc28d566ff
[Stablehlo] Support AtenTrilOp (#3359)
1. lower aten.tril to stablehlo composed of iota, select, and so forth
2. add related e2e test cases
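Intuition for the iota/select decomposition (a NumPy sketch, not the StableHLO code itself): tril keeps element (i, j) exactly when j <= i + diagonal.
```python
import numpy as np

x = np.arange(16, dtype=np.float32).reshape(4, 4)
rows = np.arange(4)[:, None]                   # "iota" along dim 0
cols = np.arange(4)[None, :]                   # "iota" along dim 1
mask = cols <= rows                            # diagonal = 0
print(np.where(mask, x, 0.0))                  # same as np.tril(x)
```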
2024-05-20 15:49:24 +08:00
Aart Bik e80f072ba4
[torch-mlir][sparse] example of a sparse graph convolution (#3363) 2024-05-17 15:43:50 -07:00
zjgarvey 6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering (#3351)
Addresses [Shark-Turbine
#196](https://github.com/nod-ai/SHARK-TestSuite/issues/196)

Related tracker [Shark-Turbine
#566](https://github.com/nod-ai/SHARK-Turbine/issues/566)

Related onnx.Resize issues [Shark-Turbine
#616](https://github.com/nod-ai/SHARK-Turbine/issues/616)
2024-05-17 12:18:57 -07:00
Andrew Woloszyn 513d89c16d
Add support for the onnx.SequenceLength op. (#3362) 2024-05-17 12:17:43 -07:00
Andrew Woloszyn 72e38dcbbc
Add support for the onnx.SequenceConstruct op. (#3316) 2024-05-17 22:51:28 +05:30
Peiming Liu ccb772cd0f
[sparse] propagate sparsity properly when decompose torch operations. (#3318) 2024-05-15 10:09:27 -07:00
Aaron St George ba32b9cee7
Don't fold `aten.clone` if result isn't same type as input (#3347)
Similar to https://github.com/llvm/torch-mlir/pull/2824, we were seeing
some assertion failures after the checks around folders were tightened up
in LLVM: https://github.com/llvm/llvm-project/pull/75887. This PR
essentially moves the logic that used to be applied at the LLVM level into
the folder, which seems to be the suggested fix.
2024-05-16 00:07:45 +08:00
Aart Bik 44fa6c3afd
[torch-mlir][sparse] sparse diagonal feature scaling test (#3344) 2024-05-14 12:13:54 -07:00
Peiming Liu 8e74d64e8f
[sparse] convert to sparse before any use in sparse test. (#3337) 2024-05-14 09:10:36 -07:00
NeverRaR 26b78285bf
[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op (#3232)
https://github.com/nod-ai/SHARK-Turbine/issues/658

---------

Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-14 15:55:39 +05:30
Archana Ramalingam 20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op (#3201)
This commit adds the OnnxToTorch support for ReduceLogSumExp op
2024-05-14 09:54:26 +05:30
Aart Bik 667dfcbc5a
[torch-mlir][sparse] enable test on ReLu (#3336)
Downstream MLIR sparsifier has some (rudimentary) support for ReLU now,
and this test can now be enabled with correct end-to-end behavior.

Also see discussion at:

https://discourse.llvm.org/t/min-max-abs-relu-recognition-starter-project/78918
2024-05-13 15:34:26 -07:00
zjgarvey 75d1d72059
Generalize Operand Quantization in FuseQuantizeOps (#3327)
This change enables more customization with operand quantization, and
generalizes the patterns QuantizeOperands and QuantizeTransposeOperands
to QuantizeOperandsPastCommutingOps.

This allows for passing quantization through operations which are
functionally unaffected by quantization, such as view-like ops. The
purpose of this change is to address a myriad of quantization issues
seen in quantized onnx models that have some reshape-like operations
sandwiched in between a dequant and something like a matmul (whose other
operand is immediately quantizable).
2024-05-12 20:49:59 -07:00
Yuanqiang Liu 0b7cbf5e60
[Stablehlo] fix aten.randn's lowering with f32 element type (#3329) 2024-05-11 17:40:04 +08:00
Yuanqiang Liu 5f7cb9e253
[Stablehlo] lowering aten.randn & aten.normal_functional to mhlo.rng NORMAL (#3328)

* split lowering of uniform, randn, normal from Basic.cpp into Rng.cpp
2024-05-11 15:33:37 +08:00
Stella Laurenzo 00efec0b73
[linalg] Implement strict mode lowering for aten.view. (#3319)
* Enables assume_strict_symbolic_shapes on fx_importer imported
programs, indicating strict shape semantics.
* Reworks the view->reshape lowering to take advantage of strict mode
and do one of:
  * Collapse to 0D
  * Flatten/Unflatten when there is an inferred dim.
  * Fallback to tensor.reshape
* Splits some test cases up and adds an attribute to control the old
pattern (so new corners can be tested in strict mode in isolation).
* Dynamic inferred mode needs upstream work to generalize expand_shape
(so that case is suppressed here).
* Deletes the assert from the existing tensor.reshape lowering if strict
shape mode is enabled (since the condition it is dynamically asserting
cannot happen).
2024-05-10 13:45:50 -07:00
Andreas Falkenberg adafd51823
[onnx] Gridsampler addition of nearest mode (#3320)
Added nearest neighbor selection for onnx.Gridsampler
2024-05-10 11:42:10 -07:00
jinchen 4b24909427
Add attributes support for onnx cumsum op (#3241) 2024-05-11 02:09:01 +08:00
NeverRaR 1d4859699b
MaxPool1d lowering to linalg (#3295)
Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-10 22:05:26 +05:30
Angel Zhang 261074f594
[ONNX] Handle one-input case for Min ONNX operator (#3326)
This commit handles the one-input case for the "Min" ONNX operator. A
new unit test has also been added.
2024-05-10 22:04:03 +05:30
Angel Zhang 7c289d9522
[ONNX] Handle one-input case for `onnx.Max` operator (#3325)
This commit handles the one-input case for the "Max" ONNX operator. A
new unit test has also been added.
2024-05-10 08:58:46 -07:00
Peiming Liu 2c22087cab
[sparse] match fx node using target name instead of variables name (#3315) 2024-05-09 12:34:14 -07:00
penguin_wwy afe87d62b4
[Linalg] [Stablehlo] Promote type for compare scalar op (#3306) 2024-05-10 02:20:06 +08:00
Aart Bik 97a822de0a
[torch-mlir][sparse] minor tweaks in sparse tests (#3311)
(1) test full pytorch output for eltwise
(2) use "random" input for LIF, to get a general sparse tensor
(3) introduce a way to get true sparsity into the network (needs a backend
fix first)
2024-05-09 10:03:25 -07:00
Aart Bik a033bbfe6c
[torch-mlir][sparse] recognize to_dense primitive (#3308)
also maps simply to sparse_tensor.convert; the sparsity types do the rest!
2024-05-08 22:50:17 -07:00
Aart Bik 89bb7404c1
[torch-mlir][sparse] add a true network to our NN tests (#3305)
Objective: make the to_sparse work end-to-end!
2024-05-08 21:18:42 -07:00
Peiming Liu cff144b3ac
[sparse] fix double free due to incompatibility between buffer-deallocation and sparse tensors. (#3303)

**NOTE**: This PR _dodges_ the issue in the buffer-deallocation pass instead
of resolving it. In the future, we need to fix the bug in the
buffer-deallocation pass when handling code generated by the sparse
compiler.
2024-05-08 21:18:17 -07:00
Yuanqiang Liu 5213557b87
[Stablehlo] fix lowering gelu(x, tanh) (#3307)
* lowering gelu("none") to erf
* lowering gelu("tanh") to tanh
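Reference formulas for the two flavours (a hedged check against PyTorch, not the StableHLO lowering): the "none" variant uses erf, the "tanh" variant uses the tanh approximation.
```python
import math
import torch
import torch.nn.functional as F

x = torch.linspace(-2, 2, 5)
erf_form = 0.5 * x * (1 + torch.erf(x / math.sqrt(2)))
tanh_form = 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * x**3)))
print(torch.allclose(erf_form, F.gelu(x)))                            # True
print(torch.allclose(tanh_form, F.gelu(x, approximate="tanh")))       # True
```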
2024-05-09 11:39:13 +08:00
Aart Bik c4b28e8d9f
[torch-mlir][sparse] test for sparse "activation" (#3304)
Example of introducing sparsity into the forward pass. With a bespoke
propagation (but upstream PyTorch will support this).
2024-05-08 19:01:24 -07:00
aldesilv ec6d7aa5d2
OnnxToTorch lowering resize op (#3013)
https://github.com/nod-ai/SHARK-Turbine/issues/358
Adds a lowering from onnx to linalg for bilinear and nearest resize, with
support for using scales or sizes to get the resize shape. Uses the
half_pixel coordinate transform for bilinear mode and asymmetric for
nearest mode. See
https://github.com/onnx/onnx/blob/main/docs/Operators.md#Resize. Added
two passes -- one for bilinear and the other for nearest.
2024-05-08 21:35:03 +00:00
Benoit Jacob bce800a3f4
Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 (#3300)
Co-authored-by: Jacques Pienaar <jpienaar@google.com>
2024-05-08 14:43:06 -04:00
Aart Bik c77f3b559a
[torch-mlir][sparse] add simple sparsity "propagation" rules (#3297)
While waiting for the full resolution of feature request
https://github.com/pytorch/pytorch/issues/117188
(which will propagate sparsity the right way in upstream PyTorch for all
FX Graphs), this minor change allows us to start testing sparsity
"within" a network, rather than just the parameters. Feel free to add
your own rules for testing (but within reason for what will be done
upstream).

Note, two TODOs need to be addressed to work around some pending issues
to make the JIT execution work.
2024-05-07 15:27:36 -07:00
Vinayak Dev 6f911ba3d7
[torch] Add OnnxToTorch lowering for `onnx.HammingWindow` (#3283)
Adds OnnxToTorch lowering for the `onnx.HammingWindow` op.
2024-05-06 10:21:45 -07:00
Vivek Khandelwal e60160d793
Revert "Decompose AtenNonzeroOp" (#3289)
Reverts llvm/torch-mlir#3281
2024-05-06 09:52:04 -07:00
Vivek Khandelwal 17c3c15131
[ONNX] Add OnnxToTorch lowering for SoftmaxCrossEntropyLoss op (#3278)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-06 17:26:32 +05:30
Xida Ren (Cedar) 1af00e6040
Decompose AtenNonzeroOp (#3281)
This fixes some onnx lit tests not lowering to linalg in
https://github.com/nod-ai/SHARK-Turbine/issues/450
2024-05-05 21:59:25 +08:00
Yuanqiang Liu 53299eb224
[Stablehlo] Bump stablehlo to ab92adeda9119a6c3914cd42367b0a2b70765e91 (#3285) 2024-05-05 19:56:12 +08:00
Rob Suderman 321b844df7
Revert hyperbolic trigonometric decompositions (#3271)
We should be using the `torch` path and handling decomposition in the
`math` dialect.
2024-05-03 12:06:44 -04:00
Vinayak Dev 67d6a665a4
[torch] Add OnnxToTorch lowering for `onnx.HannWindow` (#3276)
Adds OnnxToTorch lowering for the `onnx.HannWindow` op. Also factors out
common implementation between the window functions.
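Illustrative formulas for the two window ops in their periodic form, which is what torch produces by default (hedged sketch; the shared helper presumably just parametrizes the two coefficients):
```python
import math
import torch

N = 8
n = torch.arange(N, dtype=torch.float32)
hann = 0.5 - 0.5 * torch.cos(2 * math.pi * n / N)
hamming = 0.54 - 0.46 * torch.cos(2 * math.pi * n / N)
print(torch.allclose(hann, torch.hann_window(N)))        # True
print(torch.allclose(hamming, torch.hamming_window(N)))  # True
```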
2024-05-03 12:04:57 -04:00
Archana Ramalingam a46fe2c9db
[MLIR][ONNX] Add OnnxToTorch support for ReduceSumSquare Op (#3188)
This commit adds the OnnxToTorch support for ReduceSumSquare ops.

---------

Co-authored-by: Ubuntu <archana@archana-cpu.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
2024-05-02 22:17:45 +05:30
Vivek Khandelwal 0bb62e4347
Revert Onnx.Selu lowering to corresponding Aten op (#3275) 2024-05-02 09:00:24 -07:00
Ze Zhang 11cd7cd9e7
Folder and Canonicalizer for PrimsConvertElementTypeOp and AtenMaxPool2dWithIndicesOp (#3272)
While playing with TorchDynamo on ResNet18, I noticed the following issues:

- `prims.convert_element_type` can’t be canonicalized even if the input
and the output share the same type

- `aten.max_pool2d_with_indices` is always used instead of
`aten.max_pool2d`, even if the second returned output (indices) has no
user

This PR fixes the above issues by adding a folder to the
PrimsConvertElementTypeOp and a canonicalizer to the
AtenMaxPool2dWithIndicesOp.


Lit test:

`cmake --build build --target check-torch-mlir-all`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-05-02 00:03:41 -07:00
Xida Ren (Cedar) 33eef15e42
Support onnx.If (#2825)
This is probably a decent PR for learning about blocks and regions.

If you're here to learn about that, consider also looking at
lib/Conversion/TorchToSCF/TorchToSCF.cpp

While this doesn't include an e2e test, it is tested downstream in
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/onnx/operators/If/model.py

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-04-30 18:36:40 +00:00