Commit Graph

101 Commits (9a12a093a6b22114cba7426a59470f4bcd17b7e2)

Author SHA1 Message Date
Yuanqiang Liu 634a796933
[Torch] fold aten.log (#3223) 2024-04-26 10:10:02 +08:00
Yuanqiang Liu fab2696489
[Torch] support aten.trunc (#3219)
decompose `trunc(x)` to `sign(x) * floor(abs(x))`
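A minimal PyTorch sketch of the same identity (illustrative only; the actual rewrite is an MLIR decomposition pattern):
```python
import torch

def trunc_decomposed(x: torch.Tensor) -> torch.Tensor:
    # trunc(x) == sign(x) * floor(abs(x))
    return torch.sign(x) * torch.floor(torch.abs(x))

x = torch.tensor([-1.7, -0.3, 0.0, 2.9])
assert torch.equal(trunc_decomposed(x), torch.trunc(x))
```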
2024-04-24 14:32:33 +08:00
Yuanqiang Liu 8a1dbbd597
[torchscript] export extra library file name to user (#3203)
* so that it can be specified by the user.
2024-04-24 11:34:02 +08:00
Xinyu Yang 4da3d714cc
[Torch] Support AtenProdOp on linalg and stablehlo (#3215) 2024-04-24 11:14:04 +08:00
Yuanqiang Liu db3842f2e8
[Stablehlo] support lowering sinh & cosh to stablehlo (#3213) 2024-04-23 19:54:58 +08:00
penguin_wwy e5bdd71baf
[Torch] Emit and decompose prims.iota op (#3132) 2024-04-21 19:45:01 -07:00
Xinyu Yang 790a697245
[Torch] Add folder for AtenIntOp, AtenFloatOp (#3189)
See unit test below:
```
// CHECK-LABEL:   func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %float1.000000e01 = torch.constant.float 1.000000e+01
  %67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
  return %67 : !torch.vtensor<[],f32>
}

// CHECK-LABEL:   func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
  %none = torch.constant.none
  %false = torch.constant.bool false 
  %int45 = torch.constant.int 45
  %67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
  return %67 : !torch.vtensor<[],si32>
}

```
2024-04-19 22:17:06 +08:00
Xinyu Yang d4313eed4a
[Torch] Add decomposition of RepeatInterleaveSelfInt Op (#3075)
Decompose RepeatInterleaveSelfInt with the following ops:
```python

def my_repeat_interleave(input, repeats, dim=None):
    if dim is None:
        # Flatten the input and then repeat
        return input.flatten().unsqueeze(-1).tile((1, repeats)).flatten()
    else:
        # Calculate the shape after repeat
        expanded_shape = list(input.shape)
        expanded_shape[dim] *= repeats
        # Repeat the tensor along the specified dimension
        repeat_shape = [1] * (input.dim() + 1)
        repeat_shape[dim + 1] = repeats
        input = input.unsqueeze(-1)

        # Tile and then reshape
        tiled = torch.tile(input, repeat_shape)
        # Rearrange and reshape
        repeated = tiled.reshape(*expanded_shape)
    return repeated

```

I passed the stablehlo and linalg tests. When testing onnx, something
strange happened.
In torch-mlir's CI **torch_nightly** and my own
environment (torch==2.4.0.dev20240318+cpu), the test **passes**.
In torch-mlir's CI **torch_stable**, it **fails**.
The test case is `RepeatInterleaveSelfIntNoDimModule_basic`; the result
shape should be [120].
```python
class RepeatInterleaveSelfIntNoDimModule(torch.nn.Module):

    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([3, 4, 5], torch.float32, True),
    ])
    def forward(self, x):
        return x.repeat_interleave(2)


@register_test_case(module_factory=lambda: RepeatInterleaveSelfIntNoDimModule())
def RepeatInterleaveSelfIntNoDimModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3, 4, 5))
```
The error log is as follows:
```
  Unexpected outcome summary: (onnx)
  
  ****** Failed tests - 1 tests
      FAIL - "RepeatInterleaveSelfIntNoDimModule_basic"
          @ trace item #0 - call to "forward"
          @ output of call to "forward"
          ERROR: shape (torch.Size([6, 4, 5])) is not equal to golden shape (torch.Size([120]))
```

@rsuderman 
Would you please help me check what's wrong with my PR? Thanks a lot.
2024-04-18 06:27:51 +08:00
Xinyu Yang d2ba956e69
[Torch] Support Aten_CastLongOp. (#3160)
By canonicalizing Aten_CastLongOp into AtenToDtypeOp.
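Conceptually (a PyTorch-level sketch, not the MLIR pattern itself), the cast is just a to-dtype conversion:
```python
import torch

x = torch.tensor([1.7, -2.3])
# _cast_Long(x) behaves like an explicit dtype conversion to int64.
assert torch.equal(x.long(), x.to(torch.int64))
```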
2024-04-17 21:58:32 +08:00
IanWood1 5708ee7ec9
Added 2 Ops: Floor divide scalar and Floor divide scalar mode (#3156)
- Added linalg lowering for `AtenFloorDivideScalarOp`
  - Needed `AtenDivScalarModeOp` for the decomp.
- Added linalg lowering for `AtenDivScalarModeOp`
- Moved the linalg payload logic to `createDivModePayload()` since the logic
was nearly identical for both `AtenDivScalarModeOp` and
`AtenDivTensorModeOp`; it is now a single templated function.
- Added `AtenDivScalarModeOp` lowering for stablehlo
 

PyTorch's
[`torch.floor_divide()`](https://pytorch.org/docs/stable/generated/torch.floor_divide.html)
in a previous version (for a reason unknown to me) performed a
truncation instead of a "floor". The already implemented
`AtenFloorDivideTensorOp` lowering predates this change, and the mismatch
wasn't caught because our test cases only tested positive floor division.
I changed this to floor as well and added a few test cases.
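The difference only shows up for operands with mixed signs; a quick illustration on a current PyTorch build (where `floor_divide` really floors):
```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
print(torch.floor_divide(a, b))                 # tensor([ 3., -4.])  -> floor
print(torch.div(a, b, rounding_mode="trunc"))   # tensor([ 3., -3.])  -> truncation
```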
2024-04-15 13:45:10 -07:00
Xinyu Yang 5eb0cf9104
[Torch] Add decompose of AtenToPrimDeviceOp (#3131)
Since device information isn't relevant to torch-mlir.
2024-04-10 22:26:48 +08:00
IanWood1 8ff28527cb
Add more descriptive error message to torch_ods_gen.py. (#3108)
Added error message when adding new torch op to
[torch_ods_gen.py](https://github.com/llvm/torch-mlir/compare/main...IanWood1:torch-mlir:ods_gen_error_message?expand=1#diff-889b60b904ed67a5065a14e8de6fc89e00e199577e4d2bfa134ac4d1c89832d2).


New message displays which op key is failing and possible matches in the
torch `Registry`.
```
Op does not match any Torch ops in Registry
Given op:
    "aten::hardtanh_wrong : (Tensor, Scalar) -> (Tensor)"
Possible matches:
    "aten::hardshrink : (Tensor, Scalar) -> (Tensor)"
    "aten::hardtanh_ : (Tensor, Scalar, Scalar) -> (Tensor)"
    "aten::hardtanh : (Tensor, Scalar, Scalar) -> (Tensor)"
    "aten::clamp_min : (Tensor, Scalar) -> (Tensor)"
    "aten::linalg_cond : (Tensor, Scalar?) -> (Tensor)"
```



Also ran black formatting on the file. Based on the LLVM style guide this seems to be correct, but I can revert the formatting if needed.
2024-04-09 09:50:34 -07:00
Xinyu Yang 42a16fa912
[Torch] Support Aten_CastFloatOp. (#3115)
By canonicalizing Aten_CastFloatOp into AtenToDtypeOp.
2024-04-09 11:06:53 +08:00
Xinyu Yang 84c24e5771
[Torch] Support Aten__And__ScalarOp (#3114) 2024-04-08 20:24:17 +08:00
Yuanqiang Liu 2c56ef9252
[Torch Dialect] canonicalize aten.sign to aten.sgn (#3112)
* `aten.sign` is a subset of `aten.sgn` (`aten.sgn` supports complex
types).
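A small PyTorch illustration of the relationship (assuming current `torch.sign`/`torch.sgn` semantics):
```python
import torch

x = torch.tensor([-2.5, 0.0, 3.0])
assert torch.equal(torch.sign(x), torch.sgn(x))  # identical on real dtypes

z = torch.tensor([3 + 4j])
print(torch.sgn(z))  # tensor([0.6000+0.8000j]); sgn(z) = z / |z|, which sign does not support
```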
2024-04-08 20:05:42 +08:00
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars, this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to decouple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
2024-04-02 16:19:57 -07:00
Stella Laurenzo 6d680ff445
[ods] Allow all tensor returns to be optional. (#3082)
This was found while tracing backwards graphs: the convolution_backwards
op will return None if the first result is not needed. Confirmed by
defining a custom op with a `Tensor` return signature and having its
meta kernel return None.
2024-03-29 23:09:34 -07:00
Yuanqiang Liu 0a581a97a7
[Torch Dialect] enhance aten.int.tensor's canonicalize (#3058)
Support folding with a literal vtensor.
Changed it to a canonicalization because this pattern creates a new op.
2024-03-27 09:51:58 +08:00
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching individual dimensions to
their corresponding dims in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
schnkmwt 1fcbfa87ec
Implement linalg lowering of diag_embed torch op (#2885)
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in https://github.com/nod-ai/SHARK-Turbine/issues/288
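For context, `torch.diag_embed` builds (batched) diagonal matrices from the last dimension of its input; a minimal illustration:
```python
import torch

v = torch.tensor([1.0, 2.0, 3.0])
print(torch.diag_embed(v))
# tensor([[1., 0., 0.],
#         [0., 2., 0.],
#         [0., 0., 3.]])
```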

---------

Co-authored-by: sachink <sachink@xilinx.com>
2024-03-22 16:32:50 -07:00
zjgarvey 99b3a5f117
Converts all Adaptive Pooling Ops to Linalg (#2808)
The previous conversions for AtenAdaptiveAvgPool1dOp and
AtenAdaptiveMaxPool2dOp are refactored into a general templated
conversion that works for all of the AtenAdaptive...PoolNdOp's.

New support is added for the following ops:

1. AtenAdaptiveMaxPool1d
2. AtenAdaptiveMaxPool3d
3. AtenAdaptiveAvgPool3d

Support is also provided for passing inputs without batch dimensions.
For example, applying adaptive_avg_pool2d to an input tensor of rank 3.
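A quick PyTorch illustration of the unbatched case:
```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 7, 9)                         # rank-3 input (C, H, W), no batch dim
y = F.adaptive_avg_pool2d(x, output_size=(3, 3))
print(y.shape)                                   # torch.Size([4, 3, 3])
```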

After [pytorch #118162](https://github.com/pytorch/pytorch/pull/118162)
lands in torch-mlir, I'll add a test for AdaptiveMaxPool1d with
return_indices (which will pass with that upstream fix).

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-03-22 11:05:20 -07:00
penguin_wwy 7616d637fd
Add stateless fx graph import (#3036) 2024-03-21 14:44:54 -07:00
penguin_wwy f34c187ac4
Normalize type hints to be compatible with multiple Python versions (#3028)
Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`
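A hypothetical before/after sketch of the annotation style involved (the names here are illustrative, not the actual failing sites):
```python
# Python 3.8: subscripting built-in types in annotations fails at definition time.
#   def widths(shapes: list[tuple[int, int]]) -> list[int]: ...
#   -> TypeError: 'type' object is not subscriptable
# Portable spelling that works on 3.8 and later:
from typing import List, Tuple

def widths(shapes: List[Tuple[int, int]]) -> List[int]:
    return [w for (_, w) in shapes]
```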
2024-03-15 08:29:48 -07:00
Yuanqiang Liu 4282eb9e76
[Torch Dialect] support aten.fake_quantize_per_tensor_affine (#3014) 2024-03-15 08:53:29 +08:00
Yuanqiang Liu 870e63bc3c
[Torch Dialect] support decomposition of aten.linspace (#3006) 2024-03-14 08:28:33 +08:00
Yuanqiang Liu 43c6996a31
[Torch Dialect] add folder for aten.ceil and unify patterns of ceil, … (#3010)
…floor, round
2024-03-14 07:41:58 +08:00
ptrifunovic98 524ff99216
Implement lowering of torch.aten.linalg_cross (#2986)
Closes
[nod-ai/SHARK-Turbine#497](https://github.com/nod-ai/SHARK-Turbine/issues/497)
2024-03-13 12:17:22 -07:00
Yuanqiang Liu 229ca3a9e1
[Torch Dialect] emit aten::mul and add folder (#3007) 2024-03-11 19:59:34 +08:00
Yuanqiang Liu a3fe130f73
[Torch Dialect] emit aten::warn (#3003)
* torch-mlir may not handle `aten.warn`, but it could be handled by a
user's custom backend that builds on torch-mlir.
2024-03-10 08:29:08 +08:00
Rob Suderman 0723584936
[torch] Add folder for torch.aten.*.Scalar comparisons (#3000)
This folds small versions of the tensor-scalar comparison operators, as
they are commonly used for shape computations. This includes le, lt, ge,
gt, eq, and ne.
2024-03-08 13:44:00 -08:00
Ze Zhang aa7c9a9653
e2e support aten.linalg_norm to aten.linalg_vector_norm (#2953)
Add e2e support for `aten.linalg_norm` by decomposing it to
`aten.linalg_vector_norm`.
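For the default `ord`, the two agree; a quick PyTorch check (illustrative only):
```python
import torch

x = torch.randn(3, 4)
# With the default ord, linalg_norm reduces over all elements, matching vector_norm.
assert torch.allclose(torch.linalg.norm(x), torch.linalg.vector_norm(x))
```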

Lowering to `aten.linalg_matrix_norm` is still unsupported.

To Test: 

`python -m e2e_testing.main -v`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-03-05 16:31:01 -08:00
Rob Suderman bc0527676b
[torch] Add support for `torch.split_with_sizes` via decompose (#2979)
Convert to individual slices and tuple them together as a list.
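A PyTorch-level sketch of the equivalence (the slicing loop is a hypothetical illustration, not the actual rewrite):
```python
import torch

x = torch.arange(10)
sizes = [3, 3, 4]

expected = torch.split(x, sizes)           # dispatches to split_with_sizes for a list of sizes
offset, slices = 0, []
for s in sizes:
    slices.append(x.narrow(0, offset, s))  # one slice per requested size
    offset += s

assert all(torch.equal(a, b) for a, b in zip(expected, slices))
```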

---------

Co-authored-by: Scott Todd <scott.todd0@gmail.com>
2024-03-05 15:01:21 -08:00
Rob Suderman 19d4888278
[torch] Make torch.aten.unflatten lower directly to linalg (#2971)
The existing lowering via aten.view does not work as well for dynamic
shapes, as the lowering to tensor.expand must re-infer dynamic shape
matching. It is better to lower directly.
2024-03-04 10:17:42 -08:00
Rob Suderman 61f0a5facf
[torch] Add an `aten.cat` length-0 canonicalization (#2966)
If an input is length-0 along the concatenation dimension, we can
remove that tensor from the list.
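Illustrative PyTorch semantics behind the canonicalization:
```python
import torch

a = torch.randn(2, 3)
empty = torch.empty(0, 3)
# A tensor that is length-0 along the cat dimension contributes nothing,
# so it can be dropped from the operand list.
assert torch.equal(torch.cat([a, empty], dim=0), a)
```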
2024-03-01 21:41:12 -08:00
mmakevic 76b81e0ccd
Implement lowering of torch.aten.fmod.Tensor (#2767)
Closing https://github.com/nod-ai/SHARK-Turbine/issues/351
2024-02-29 11:22:03 +05:30
Rob Suderman 6f3d62ab04
[torch] Fix folders and `cat` and `view` torch lowerings (#2963)
A bunch of small fixes are interlinked and trigger crashes if not
addressed as a group. This includes:

- aten view when expanding from a rank-0 tensor
- slice folder with negative indices
- `aten._shape_as_tensor` folder on a rank-0 tensor
- `aten.cat` of a tensor with a length-0 tensor
2024-02-28 12:04:52 -08:00
Vivek Khandelwal d81747eadb
[MLIR][TORCH] Extend support for OnnxToLinalg lowering for Dropout and Div op (#2938)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/451,
https://github.com/nod-ai/SHARK-Turbine/issues/452
2024-02-27 11:02:05 +05:30
ptrifunovic98 c5a1da1910
Implement lowering of torch.aten.norm.Scalar (#2899)
Closes
[nod-ai/SHARK-Turbine#365](https://github.com/nod-ai/SHARK-Turbine/issues/365)
2024-02-26 08:46:56 -08:00
Andreas Falkenberg 55dc8deb92
[torch] GridSample TorchToLinalg lowering (#2883)
Lowers `torch.grid_sample` to the equivalent `linalg` representation.
2024-02-23 09:14:38 -08:00
Aart Bik 534b266f2d
[torch-mlir][NFC] remove trailing whitespace (#2936) 2024-02-20 11:23:14 -08:00
Rob Suderman 135c81a416
[torch] Add folder for `prim.NumToTensor.Scalar` (#2921)
Useful for `slice` lowerings that depend on tensors made from scalars.
2024-02-19 11:55:54 -08:00
Rob Suderman e80054a3cc
[torch] Folders for `torch.aten.*.tensor` operators [add, sub, mul] (#2878)
Simple folders for limited-size aten tensor operations. This is primarily
useful for folding shape computations, as they unfortunately can use
`aten` operators. Add, sub, and mul are common examples of these folders.
2024-02-19 10:28:23 -08:00
aldesilv d29157b33f
OnnxToTorch support for onnx.InstanceNormalization op (#2710)
https://github.com/nod-ai/SHARK-Turbine/issues/327
2024-02-19 19:53:48 +05:30
Vivek Khandelwal d6d1a173dc
[MLIR][Torch] Add OnnxToTorch and TorchToLinalg support for trig ops (#2903)
This commit adds the OnnxToTorch lowering for the cosh, acosh, asin,
asinh, and atanh ops.
This commit also adds the TorchToLinalg lowering for the acosh, asin,
asinh, and atanh ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-14 11:58:09 +05:30
Rob Suderman c0f139be0f
[torch] Add `torch.aten.eq.Tensor` comparison folder (#2889)
Added a folder for the equals operator. This allows an equality
comparison to be folded, primarily for shape computations on small
tensors.
2024-02-09 15:02:20 -08:00
Rob Suderman 7d33ba69ac
[torch] Folder for torch.aten.select.int for splat cases (#2890)
If the input or result is a splat value, we can just constant-fold the
result. This is common for shape computations and can help with shape
inference.
2024-02-09 14:02:54 -08:00
Franz Haniel 4cc62aeb24
Implement trace (#2790)
The lowering decomposes AtenTraceOp into an AtenDiagonalOp followed by
AtenSumOp.
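The same identity at the PyTorch level (illustrative check only):
```python
import torch

x = torch.randn(4, 4)
# trace == diagonal followed by sum
assert torch.allclose(torch.trace(x), torch.diagonal(x).sum())
```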

The progress is tracked in
https://github.com/nod-ai/SHARK-Turbine/issues/333.

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-09 08:00:24 -08:00
Rob Suderman a8aad2a5ab
[torch] Add `torch.aten.where.*` folders (#2886)
The where operation can be statically computed when it involves splats
of known value. Added handling for these cases, with multiple tests.
2024-02-07 19:43:31 -05:00
Dave Liddell 23647ab2d1
[torch] aten.index_select folder (#2871)
Folds aten::index_select ops under the following conditions:

1. If the input and output are the same shape, the indexing operation is
a NOP, so just return the input.
2. If the input has shape <1x1x...xNx...x1> (all 1's except for one
dim), and the output shape is <1x1x...x1> (all 1's), then there is a
single index, so extract the single element value and return a tensor
with that value.
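A PyTorch illustration of the second condition (the shapes here are illustrative):
```python
import torch

x = torch.arange(4).reshape(1, 4, 1)           # all dims are 1 except one
idx = torch.tensor([2])
y = torch.index_select(x, dim=1, index=idx)    # output shape (1, 1, 1): a single element
print(y)                                       # tensor([[[2]]])
```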

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-02-07 16:17:15 -08:00
mmakevic 32dbf99ce2
Implement lowering of torch.aten.all.dim (#2873)
Lowering of torch.aten.all.dim to linalg.

Per PyTorch documentation:

> This function matches the behaviour of NumPy in returning output of
dtype bool for all supported dtypes except uint8. For uint8 the dtype of
output is uint8 itself.

Since there is no support for ui8 in torch-mlir currently
(https://github.com/llvm/torch-mlir/pull/1384#issuecomment-1260011334),
the implementation returns failure for that case.
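A small PyTorch illustration of the dtype behaviour quoted above:
```python
import torch

b = torch.tensor([[True, False], [True, True]])
print(torch.all(b, dim=1))   # tensor([False,  True]) -- bool output

u = torch.tensor([[1, 0], [2, 3]], dtype=torch.uint8)
print(torch.all(u, dim=1))   # tensor([0, 1], dtype=torch.uint8) -- uint8 stays uint8
```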
2024-02-07 12:34:52 -08:00