Commit Graph

777 Commits (da7c6d2c16439ff5a009bd660567d38158fc3049)

Author SHA1 Message Date
Rob Suderman 2ef228328f
[torch] `torch.dequantize` for per channel tensors to `linalg` (#2769)
Support lowering dequantization of per-channel tensors from the `torch`
dialect to a linalg decomposition. Tested via a numerical `torch` test.
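For illustration, a minimal eager-mode sketch of what per-channel dequantization computes; the scales, zero points, and shapes below are made up, not taken from this patch:

```
import torch

# Hypothetical per-channel quantization parameters along dim 0.
scales = torch.tensor([0.1, 0.2, 0.05])
zero_points = torch.tensor([0, 10, -5])

x = torch.randn(3, 4)
q = torch.quantize_per_channel(x, scales, zero_points, 0, torch.qint8)

# Dequantization recovers floats as (q_int - zero_point) * scale, per channel.
manual = (q.int_repr().to(torch.float32) - zero_points.view(-1, 1)) * scales.view(-1, 1)
assert torch.allclose(q.dequantize(), manual)
```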
2024-01-25 16:40:21 -08:00
Aart Bik e824fbc65c
[torch-mlir][torch] add encoding field to torch type (#2799)
This adds an encoding field to the torch type, using the interfaces for
printing, parsing, and verification. Note that although this change
prepares adding sparsity to the torch type (as illustrated by the round
trip and invalid tests), nothing in this change depends on the actual
contents of the encoding field!
2024-01-25 10:04:04 -08:00
Rob Suderman f6f890520b
[torch][quant] Quantized `torch.mm` for linalg with end-to-end test (#2750)
This includes custom op matching for decomposed operations and fusing
dequantization into dense operations. As validation, we compare against
the dequant+mm torch implementation.
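As a rough illustration of the validation idea (not the e2e test itself), the fused form can be checked against the dequantize-then-mm reference in eager PyTorch; all scales, zero points, and shapes below are made up:

```
import torch

a_scale, a_zp = 0.02, 3
b_scale, b_zp = 0.05, -1

a = torch.quantize_per_tensor(torch.randn(4, 8), a_scale, a_zp, torch.qint8)
b = torch.quantize_per_tensor(torch.randn(8, 5), b_scale, b_zp, torch.qint8)

# Reference: dequantize both operands, then do an ordinary mm.
ref = torch.mm(a.dequantize(), b.dequantize())

# Same result with the scales pulled out of the matmul, mirroring the
# fused dequant+mm pattern.
ai = a.int_repr().to(torch.float32) - a_zp
bi = b.int_repr().to(torch.float32) - b_zp
fused = (a_scale * b_scale) * torch.mm(ai, bi)

assert torch.allclose(ref, fused, atol=1e-5)
```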
2024-01-24 14:02:50 -08:00
zjgarvey c531f5495b
AtenAdaptiveMaxPool2d Conversion to Linalg (#2779)
The logic here is very similar to the conversion for AdaptiveAvgPool1d
#2661 with a few modifications:

1. buffVal = -inf instead of 0
2. the main linalg generic op accumulates a max, instead of a sum, to
the first output tensor
3. avg pooling requires dividing the sum pool by the kernel width, which
we stored as an auxiliary tensor (kSizeTensor). Here, the auxiliary
tensor will instead record the indices. Strangely enough, the only
signature available for this function returns indices, and it appears
that they must be computed whether the user wants them or not.
See
[pytorch/torch/nn/functional.py](https://github.com/pytorch/pytorch/blob/main/torch/nn/functional.py#L1174).

Before writing other adaptive pooling conversions, the logic of this
decomposition should be rolled into a helper function that will work for
both max and avg pooling ops. Even the auxiliary tensor should likely be
automated. This code was written in a slightly more tedious way than
strictly necessary (often using loops to fill SmallVectors up to rank-2,
which is only two in this case), in order to more easily facilitate the
transition to a helper function.
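For reference, a small eager-mode check of the signature discussed above (both outputs are produced even when only the values are needed); the shapes here are arbitrary:

```
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 7, 9)

# adaptive_max_pool2d can return both the pooled values and the flat
# indices of the max elements within each (H, W) plane.
vals, idx = F.adaptive_max_pool2d(x, output_size=(2, 2), return_indices=True)
print(vals.shape, idx.shape)  # torch.Size([1, 3, 2, 2]) twice

# The underlying aten.adaptive_max_pool2d op always returns the
# (values, indices) pair.
vals2, idx2 = torch.ops.aten.adaptive_max_pool2d(x, [2, 2])
assert torch.equal(vals, vals2) and torch.equal(idx, idx2)
```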
2024-01-24 09:09:56 -08:00
Xida Ren (Cedar) ccaac85788
implement aten.conv1d, aten.conv3d, and aten.conv_tbc (#2757)
Convolution with [time, batch, channel] ordering, as opposed to the
default [batch, channel, time]. Currently implemented by transposing
the input and output, but it may need its own implementation in the
future because this op is supposed to provide a speedup. It is used by
fairseq (https://github.com/facebookresearch/fairseq/issues/172).

(In case you were wondering, like me: this is different from transposed
convolution, which has fractional strides.)
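A small sketch of the transpose-based equivalence in eager PyTorch, assuming `torch.conv_tbc` is available; the sizes are made up:

```
import torch
import torch.nn.functional as F

time, batch, cin, cout, k = 10, 2, 4, 6, 3
x = torch.randn(time, batch, cin)   # [T, B, C] layout used by conv_tbc
w = torch.randn(k, cin, cout)       # conv_tbc weight: [kernel, C_in, C_out]
b = torch.randn(cout)

y_tbc = torch.conv_tbc(x, w, b, 1)  # pad = 1

# Same computation via the default [B, C, T] conv1d, with transposes
# around it (the approach taken by this lowering).
y_bct = F.conv1d(x.permute(1, 2, 0), w.permute(2, 1, 0), b, padding=1)
assert torch.allclose(y_tbc, y_bct.permute(2, 0, 1), atol=1e-5)
```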

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
Co-authored-by: Frederik Harwath <frederik.harwath@amd.com>
2024-01-23 21:30:03 -08:00
Franz Haniel b9806cfa38
[TorchToLinalg] Add lowering for torch.aten.diagonal (#2632) 2024-01-22 12:47:13 -05:00
John Wu 704cfdaf08
Add aten.pool_max3d support to torch-to-linalg (#2735)
Added verification logic to abstract_interpreter_lib_gen.py.

Also added some unit tests.

Initially, I thought we could use `linalg::pooling_ndhwc_max` to help
implement this. However, on a 5-dimensional tensor it pools over
dimensions (2, 3, 4), which is not what we want; we want pooling over
dimensions (3, 4, 5).

To achieve this, we would need to lower our code using the `linalg`
dialect.


It turns out the pooling code in `linalg` looks like this:

```
func @max_pooling_ncdhw(%I: memref<?x?x?x?x?xf32>, %K: memref<3xindex>, %O: memref<?x?x?x?x?xf32>,
                        %strides: memref<3xindex>, %dilations: memref<3xindex>) {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %c2 = arith.constant 2 : index
    %c3 = arith.constant 3 : index
    %c4 = arith.constant 4 : index
    %N = memref.dim %I, %c0 : memref<?x?x?x?x?xf32>
    %C = memref.dim %I, %c1 : memref<?x?x?x?x?xf32>
    %D = memref.dim %I, %c2 : memref<?x?x?x?x?xf32>
    %H = memref.dim %I, %c3 : memref<?x?x?x?x?xf32>
    %W = memref.dim %I, %c4 : memref<?x?x?x?x?xf32>

    %kernel_d = memref.load %K[%c0] : memref<3xindex>
    %kernel_h = memref.load %K[%c1] : memref<3xindex>
    %kernel_w = memref.load %K[%c2] : memref<3xindex>
    %stride_d = memref.load %strides[%c0] : memref<3xindex>
    %stride_h = memref.load %strides[%c1] : memref<3xindex>
    %stride_w = memref.load %strides[%c2] : memref<3xindex>
    %dilation_d = memref.load %dilations[%c0] : memref<3xindex>
    %dilation_h = memref.load %dilations[%c1] : memref<3xindex>
    %dilation_w = memref.load %dilations[%c2] : memref<3xindex>

    linalg.generic {
        indexing_maps = [
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (n, c, d * %stride_d + kd * %dilation_d, h * %stride_h + kh * %dilation_h, w * %stride_w + kw * %dilation_w)>,  // Map for input tensor
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (kd, kh, kw)>,                                              // Map for kernel tensor
            affine_map<(n, c, d, h, w, kd, kh, kw) -> (n, c, d, h, w)>                                            // Map for output tensor
        ],
        iterator_types = ["parallel", "parallel", "parallel", "parallel", "parallel", "reduction", "reduction", "reduction"],
        doc = "3D Max Pooling NCDHW with Strides, Dilations, and Kernel Size"
    } ins(%I, %K : memref<?x?x?x?x?xf32>, memref<3xindex>) outs(%O : memref<?x?x?x?x?xf32>) {
        ^bb0(%input_elem: f32, %kernel_elem: index, %output_elem: f32):
            %max_val = arith.maxf %input_elem, %output_elem : f32
            linalg.yield %max_val : f32
    }
    return
}

```

This was implemented based on its source code, with the adjustments
mentioned above:

4ca1b5e094/mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml (L5647)

Issues related to this can be found here:

https://github.com/nod-ai/SHARK-Turbine/issues/324
2024-01-19 21:09:46 +05:30
Ilija Kalinić faa4517e83
Implement lowering of torch.aten.remainder.Tensor (#2763)
Closes nod-ai/SHARK-Turbine#349
2024-01-19 18:09:08 +05:30
Sungsoon Cho a8538e1e3f
Decompose AtenNormalFunctionalOp into AtenRandn* and other arithmetic. (#2737) 2024-01-15 22:49:29 -08:00
lonely eagle f85e5c932b
[Torch Dialect] support aten.isneginf, aten.isposinf, aten.nan_to_num (#2743) 2024-01-16 14:29:34 +08:00
James Newling f78ec78ac8
Adjust bound check to be the same as PyTorch native (i.e. stricter) (#2755)
prims.expand expects the start and end dimensions to be strictly less
than the rank of the tensor.
2024-01-15 11:44:45 -08:00
lisaliu1 09421b1cf3
[TorchToLinalg] Add lowering for aten.replication_pad2d (#2715)
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2024-01-15 14:02:27 -05:00
Rob Suderman dc37616d67
[torch][quant] Support quantize and dequantize for torch (#2731)
Handle both `torch.dequantize` and `torch.quantize_per_tensor`, including
the op-based quantization parameter tracking. This includes adding
`qint32` to the torch types, as it was missing during the initial type
inclusion.

For testing we only have `torch.int8` and `torch.float` types on
function boundaries, as the `qint8` types require passing the scale
and zero-point quantization information, which is not supported yet.
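For illustration, a small eager-mode round trip showing the two ops and the parameters they carry; the scale and zero point below are arbitrary:

```
import torch

x = torch.randn(4, 4)
scale, zero_point = 0.1, 8

q = torch.quantize_per_tensor(x, scale, zero_point, torch.qint8)
print(q.dtype, q.q_scale(), q.q_zero_point())  # torch.qint8 0.1 8

# Dequantize maps the stored integers back to float:
# (int_repr - zero_point) * scale.
d = torch.dequantize(q)
assert torch.allclose(d, (q.int_repr().float() - zero_point) * scale)
```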
2024-01-12 19:11:14 -08:00
Ilija Kalinić e1a86e480a
Implement lowering of torch.aten.logit (#2697)
Closes nod-ai/SHARK-Turbine#290
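For reference, the math the lowering has to reproduce; a hedged eager-mode sketch with an arbitrary input:

```
import torch

x = torch.rand(8).clamp(1e-4, 1 - 1e-4)

# logit(x) = log(x / (1 - x)); the optional eps argument of aten.logit
# first clamps x into [eps, 1 - eps].
assert torch.allclose(torch.logit(x), torch.log(x / (1 - x)), atol=1e-6)
```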
2024-01-11 20:25:42 +05:30
Frederik Harwath 0860c41ee2 Implement aten.reflection_pad2d lowering to linalg 2024-01-10 21:32:22 -10:00
Vivek Khandelwal 208ae35583 [MLIR][ONNX] Add TorchToOnnx Support for DepthToSpace op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Kunwar Grover fb1dfa3126
Bump llvm-project to 6b65d79fbb4682468333cea42b62f15c2dffd8f3 (#2723)
Co-authored-by: hanhanW <hanhan0912@gmail.com>
2024-01-04 14:33:41 -08:00
kumardeepakamd 9adad9bc40
Add support for reflection_pad1d (#2706)
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR
https://github.com/llvm/torch-mlir/pull/2693.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-02 14:05:11 -05:00
Sungsoon Cho 8e389ff2ff
Implement lowering of torch.aten.exponential (#2680)
https://github.com/llvm/torch-mlir/issues/2646

Decompose aten.exponential() into: -log(1 - u) / lambda, where u is drawn from Uniform(0, 1).
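A short eager-mode sketch of this inverse-CDF sampling; the rate value is arbitrary:

```
import torch

torch.manual_seed(0)
lambd = 2.0

# If u ~ Uniform(0, 1), then -log(1 - u) / lambd follows an exponential
# distribution with rate lambd.
u = torch.rand(1_000_000)
samples = -torch.log1p(-u) / lambd
print(samples.mean().item())  # close to 1 / lambd = 0.5
```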
2023-12-27 20:33:18 -08:00
John Wu 46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch (#2682)
The expression for HardSigmoid in ONNX
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html) is
max(0, min(1, alpha * x + beta)),

which is inherently different from HardSigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html):
0 if x < -3, 1 if x > 3, and x/6 + 1/2 otherwise.

Given this difference, it was simpler to compute the full ONNX expression
directly when translating it to Torch MLIR, which is what this PR does.
Some of the logic is shared with the files in `DecomposeComplexOps`, so the
shared logic between `DecomposeComplexOps` and `DefaultDomainGToP` was
refactored into a `Utils` file.
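A hedged sketch of the resulting computation in eager PyTorch; alpha and beta below are the ONNX defaults, used only for illustration:

```
import torch
import torch.nn.functional as F

def onnx_hardsigmoid(x, alpha=0.2, beta=0.5):
    # ONNX HardSigmoid: max(0, min(1, alpha * x + beta)).
    return torch.clamp(alpha * x + beta, min=0.0, max=1.0)

x = torch.linspace(-6, 6, 25)
# With alpha = 1/6 and beta = 1/2 the ONNX form coincides with torch's
# Hardsigmoid.
assert torch.allclose(onnx_hardsigmoid(x, 1 / 6, 0.5), F.hardsigmoid(x), atol=1e-6)
```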
2023-12-21 07:29:22 -08:00
Rob Suderman 11cc92d4ab
[onnx] Lowerings from `onnx.tan` (#2642)
Started work on the `tan` lowering for ONNX to Torch. Uses `sin` and
`cos` to represent `tan`.
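That is, the lowering relies on the identity tan(x) = sin(x) / cos(x); a tiny sanity check:

```
import torch

# Stay away from the poles of tan at odd multiples of pi/2.
x = torch.linspace(-1.0, 1.0, 11)
assert torch.allclose(torch.tan(x), torch.sin(x) / torch.cos(x), atol=1e-6)
```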
2023-12-20 10:09:39 -08:00
Sungsoon Cho 20ab882840
Fix typo in DecomposeBernoulli() match failure messages. (#2676) 2023-12-19 20:59:19 -08:00
Han-Chung Wang be3e74b647
Integrate llvm/llvm-project@282d501476 (2023-12-19) (#2675) 2023-12-19 13:28:37 -08:00
Sungsoon Cho 55e9401c5c
Implement lowering of aten.cosh op. (#2635) 2023-12-15 11:19:26 -08:00
JianzheXiao 6ddeb1a6ef
[torch] Add support for aten.selu (#2640)
Add `aten.selu` operation to `torch` dialect.
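For reference, a hedged sketch of what `aten.selu` computes; the constants are the standard SELU values from the PyTorch docs:

```
import torch
import torch.nn.functional as F

alpha = 1.6732632423543772
scale = 1.0507009873554805

x = torch.randn(16)
# selu(x) = scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
manual = scale * (torch.clamp(x, min=0) +
                  torch.clamp(alpha * (torch.exp(x) - 1), max=0))
assert torch.allclose(F.selu(x), manual, atol=1e-6)
```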
2023-12-13 20:28:08 -08:00
JianzheXiao 7cf52ae73f
[Torch Dialect]Add Support for AtenGroupNormOp and AtenNativeGroupNormOp (#2591)
Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-12-13 11:05:12 +08:00
Frederik Harwath b656c674ee Implement e2e support for aten.acos op
This depends on a change in the LLVM core repository which adds acos
support to the MLIR Math dialect.
2023-12-12 10:52:02 +01:00
Sambhav Jain 7acabafd84
Remove folder from `AtenStackOp` for single element list inputs (#2626)
`AtenStackOp` defines this folder for a list operand containing a single
element:
```
OpFoldResult AtenStackOp::fold(FoldAdaptor adaptor) {
  auto list = getOperand(0).getDefiningOp<PrimListConstructOp>();
  if (!list || !list->hasOneUse() || list.getElements().size() != 1)
    return nullptr;
  return list.getElements()[0];
}
```
However, unlike `AtenCatOp`, `AtenStackOp` cannot be folded away for
single element list operand because the result from a stack operation
contains an additional dimension (of size 1, like expand_shape).
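A quick eager-mode illustration of why the fold is unsound for `stack` but would be fine for `cat`:

```
import torch

t = torch.randn(10, 32)

print(torch.cat([t]).shape)    # torch.Size([10, 32])    -- same as the input
print(torch.stack([t]).shape)  # torch.Size([1, 10, 32]) -- extra unit dimension
```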

This PR removes the `AtenStackOp::fold` method, and adds an e2e test for
single element list input case, which fails on current `main` as
follows:
```
Unexpected outcome summary: (linalg)                                                                                                                                                                   
                                                                                                                                                                                                       
****** Failed tests - 1 tests                                                                                                                                                                          
    FAIL - "TensorsStackSingleElementListModule_basic"                                                                                                                                                 
        @ trace item #0 - call to "forward"                                                                                                                                                            
        @ output of call to "forward"                                                                                                                                                                  
        ERROR: shape (torch.Size([10, 32])) is not equal to golden shape (torch.Size([10, 1, 32]))     
```
Thanks to Chris Lalau Keraly for the bug report.
2023-12-11 10:52:50 -08:00
Vivek Khandelwal 0b4422a253 [MLIR][ONNX] Add OnnxToTorch support for bitwise and math ops
This commit adds the OnnxToTorch support for the BitwiseXor, BitwiseOr, Div, Equal, Cast,
Ceil, Floor, Cos, and Clip ops.
This commit also adds the TorchToLinalg support for the aten.clamp.Tensor and aten.clamp_min.Tensor ops.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-11 19:36:01 +05:30
JianzheXiao 96fcde4d77
[Torch Dialect] Support Einsum Op (#2230)
As titled, support the torch.aten.einsum op.

Right now only static shapes are supported because of a known issue; the
fix is here: https://github.com/llvm/torch-mlir/pull/2154

Co-authored-by: Jiawei Wu
[wujiawei.aml@bytedance.com](mailto:wujiawei.aml@bytedance.com)
2023-12-10 12:30:37 +08:00
Vivek Khandelwal 07c3e11f56 [MLIR][TORCH] Add support for Short(si16) data type
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-09 16:52:23 +05:30
frafranz c0115706a0
Add a decomposition for torch.aten.argmin (#2613)
Adds a lowering for the torch.aten.argmin operator to linalg via decomposition into torch.aten.min.dim.
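A hedged eager-mode sketch of the decomposition idea (min.dim already yields the indices that argmin needs); the input shape is arbitrary:

```
import torch

x = torch.randn(4, 7)

# aten.min.dim returns both the minimum values and their indices along dim;
# argmin is just the indices component.
values, indices = torch.min(x, dim=1)
assert torch.equal(indices, torch.argmin(x, dim=1))
```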

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2023-12-06 09:45:30 -05:00
Frederik Harwath 6248216dca
Add aten.min.dim to linalg lowering (#2600) 2023-12-05 07:16:35 -08:00
Ramiro Leal-Cavazos e568f7e999
Move handling of integer signedness to the backend conversions (#2597)
The function `getTypeForScalarType` currently takes an argument to
specify the signedness of integer types. This is leakage of backend
specific requirements into the torch dialect world. Because
`getTypeForScalarType` is a utility function for the torch dialect, it
should only produce types that match the sign conventions used by
PyTorch (regular integers are signed and unsigned integers are
unsigned).

This commit removes the signedness argument from
`getTypeForScalarType`, and moves the backend specific handling of
integer types to the backend code.
2023-11-29 09:43:09 -08:00
Mi Jiazhi f7a92d346e
[Torch Dialect] Decompose AtenTriuOp (#2561)
Decompose as follows:
```
import torch

def my_triu(x, diag):
    rows = torch.ops.aten.size(x, -2)
    cols = torch.ops.aten.size(x, -1)

    row_indices = torch.ops.aten.arange(rows).unsqueeze(1)
    col_indices = torch.ops.aten.arange(cols).unsqueeze(0)

    cond = torch.ops.aten.ge(
        col_indices, torch.ops.aten.add(row_indices, diag))
    return torch.ops.aten.where(cond, x, 0)

x = torch.rand(5, 7)
assert torch.allclose(my_triu(x, 0), torch.triu(x, 0))
assert torch.allclose(my_triu(x, 1), torch.triu(x, 1))
assert torch.allclose(my_triu(x, 2), torch.triu(x, 2))
assert torch.allclose(my_triu(x, -1), torch.triu(x, -1))
```

---------

Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-11-29 10:35:26 +08:00
Vivek Khandelwal dc9ea08db5 [MLIR][ONNX] Add OnnxToTorch support for atan and bitwise ops
This commit adds the OnnxToTorch support for the Atan, Bitshift, BitwiseAnd,
and BitwiseNot ops.
This commit also adds the TorchToLinalg support for AtenBitwiseLeftShiftTensorOp.

Signed-Off By: vivekkhandelwal@nod-labs.com
2023-11-28 17:19:07 +05:30
James Newling 1b7d6f2af9
Improve decomposition of pixel_shuffle (support dynamic shapes) (#2590)
The aten.reshape ops in the decomposition are replaced with prims.collapse
and prims.split_dim ops, which avoids the cases where the torch-to-linalg
lowering of reshape is not supported.

Essentially, by using the collapse and split_dim ops instead of the reshape
ops, we do not "lose" the information that the reshapes never arbitrarily
mix dimensions, which makes the lowering easy.

3 additional tests added:
- fully dynamic,
- dynamic only in the spatial dimensions,
- dynamic only in the non-spatial dimensions.
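For reference, the shape bookkeeping that the decomposition has to preserve, written here with plain eager-mode view/permute ops rather than the prims-based lowering itself; the sizes are made up:

```
import torch

n, c, r, h, w = 2, 3, 2, 4, 5            # r is the upscale factor
x = torch.randn(n, c * r * r, h, w)

# pixel_shuffle splits the channel dim into (c, r, r), interleaves the two
# r-sized dims with the spatial dims, and merges them into (h*r, w*r).
y = (x.reshape(n, c, r, r, h, w)
      .permute(0, 1, 4, 2, 5, 3)
      .reshape(n, c, h * r, w * r))
assert torch.equal(y, torch.pixel_shuffle(x, r))
```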
2023-11-22 12:31:06 -08:00
Vivek Khandelwal d50d3aa5e7 [MLIR][TORCH] Add support for unsigned integer types
Refer: https://github.com/pytorch/pytorch/issues/58734
2023-11-21 21:57:26 +05:30
James Newling 03e8f99730
Lowering to linalg of prims split_dim op (#2576)
Adds support for lowering the prims split_dim op.

Similar design to collapse op lowering in 
https://github.com/llvm/torch-mlir/pull/2572, with some 
small differences, because the split_dim op (in pytorch) is
view-changing whereas the collapse is not. The difference 
means that 

1) it must be registered in the function Torch::isViewLikeOp
2) it must be added to the "expected fail" set for the torch dynamo backend.
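Semantically, split_dim replaces one dimension of length outer * inner with two dimensions (outer, inner); a rough reshape-based illustration of that semantics (not the prims API itself), with made-up sizes:

```
import torch

x = torch.randn(2, 12, 5)

# Splitting dim 1 (length 12) with outer_length = 3 yields shape (2, 3, 4, 5);
# unlike collapse, it is a view-like reinterpretation of the same data.
outer = 3
y = x.reshape(2, outer, x.shape[1] // outer, 5)
print(y.shape)  # torch.Size([2, 3, 4, 5])
```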
2023-11-21 07:56:09 -08:00
Zhekun(Josh) Zhang d67afa9e95
[Torch] Add fold rule for AtenMaskedFillTensorOp to AtenMaskedFillScalarOp (#2543) 2023-11-21 13:26:17 +08:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
2023-11-19 12:10:19 -08:00
Yuanqiang Liu facbe5d96b
[Torch Dialect] support AtenArangeStartOutOp in ReduceOpVariants like AtenBernoulli_FloatOp (#2563)

It fixes cases like: `%2110 = torch.aten.arange.start_out %int1,
%int1517, %int1, %2109 : !torch.int, !torch.int, !torch.int,
!torch.tensor -> !torch.tensor`.
`aten.arange.start_out` does not have value semantics either, meaning `%2110`
is an alias for `%2109`.
So I decompose it into `aten.arange.start` + `torch.contents.overwrite`.
The more complex decomposition logic targets cases like view and dtype cast,
which I added to the e2e tests.
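A quick eager-mode demonstration of the aliasing that makes the overwrite step necessary:

```
import torch

buf = torch.empty(5)
r = torch.arange(1, 6, 1, out=buf)

# The out= variant writes into (and returns) the provided buffer, so the
# result aliases it instead of being a fresh value.
assert r.data_ptr() == buf.data_ptr()
```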
2023-11-17 00:51:55 +08:00
James Newling dad1f012f6
Add verification for torch permute op (#2551)
- adds support for an optional verifier to the generated torch op
tablegen (GeneratedTorchOps.td)
- uses the above to add a verifier for the torch permute op. 

Motivation: I hit an unclear error from linalg while developing a
decomposition pass for pixel_shuffle. The error would have been clearer
if the problem had been detected earlier in the invalid aten.permute op.

Testing: new tests added. To run added tests, from the base directory
run

```
 ./build/bin/llvm-lit  test/Dialect/Torch/invalid.mlir
 ```
2023-11-15 11:47:54 -08:00
James Newling e81282ae8f
Support for prims collapse op (lowering to linalg) (#2572)
Steps taken:
1) add generator code to torch_ods_gen.py, run update_torch_ods.sh
2) add (custom) shape and type inference generator code to
abstract_interp_lib_gen.py, run update_abstract_interp_lib.sh
3) Implement lowering to tensor.collapse_shape. Requires the `start` and
`end` values to be constant, else lowering fails
4) Update xfail_sets.py (append to LTC_XFAIL_SET) after running
`/tools/e2e_test.sh --filter Collapse --verbose -c XX` for all supported
backends (XX).

Motivation: 
- Supporting the collapse operation will be useful for lowering of
pixel_shuffle (see Issue #2559)
2023-11-15 08:34:38 -08:00
Yuanqiang Liu 3ab790c50a
[Torch Dialect] add canonicalize for aten.numel (#2562) 2023-11-11 12:16:53 +08:00
Yuanqiang Liu 60effcee89
[Dtype Function] fix aten.div.Tensor_mode's dtype function (#2555) 2023-11-09 09:46:53 +08:00
James Newling b6e551c7b8
Decomposition of aten.pixel_shuffle with static input shape (#2550)
For static tests (that is, when the shape is known), for example:

 ```
 @annotate_args([None, ([3, 18, 2, 2], torch.float32, True)])
 ```
 
The e2e test passes, but only if the replacement op's return type is set as
undefined (the optional shape and type must be explicitly made unset);
otherwise there is an error about the function return type.
 
 For dynamic cases, for example if the above is replaced with 
 
  ```
 @annotate_args([None, ([-1, -1, -1, -1], torch.float32, True)])
 ```

There is a failure to lower to linalg from torch ("view op explicitly
labelled as illegal"). This seems to be because the support for lowering
from torch to linalg with dynamic shapes is limited.
2023-11-08 08:52:44 -05:00
JianzheXiao a42d4c18ff
[Torch Dialect]Support aten.cosine_similarity (#2364)
As titled, add support for aten.cosine_similarity, including broadcasting
inputA/inputB to the same shape.
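A rough eager-mode illustration of the computation, including the broadcasting of the two inputs; shapes and eps are made up:

```
import torch
import torch.nn.functional as F

a = torch.randn(5, 1, 8)
b = torch.randn(1, 3, 8)   # broadcasts against `a` to (5, 3, 8)

# cosine_similarity(a, b, dim) = sum(a * b, dim) / max(||a|| * ||b||, eps)
eps = 1e-8
num = (a * b).sum(dim=-1)
den = (a.norm(dim=-1) * b.norm(dim=-1)).clamp_min(eps)
assert torch.allclose(F.cosine_similarity(a, b, dim=-1), num / den, atol=1e-6)
```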
2023-11-08 15:28:30 +08:00
Jiawei Wu d5ee8ee73a
[Torch Dialect] emit aten.reshape_as op and add decomposition pattern. (#2553) 2023-11-05 11:38:36 +08:00
Yuanqiang Liu 0378da0abd
[Torch Dialect] support aten.isinf (#2544)
Also fix the linalg lowering from `UEQ` to `OEQ`.
I will check the other comparisons' lowerings later.
2023-11-04 22:26:01 +08:00