Commit Graph

1230 Commits (da7c6d2c16439ff5a009bd660567d38158fc3049)

Author SHA1 Message Date
Rob Suderman 985e7796a4
[linalg] Added `aten.clamp` support with integers to `torch-to-linalg` (#2718)
The lowering for `aten.clamp` did not support integer types. Added
support for integer types including a signed integer test.
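For illustration, a minimal PyTorch-level example of the newly covered case (signed-integer clamp); this is a sketch of the op's semantics, not the MLIR lowering itself:
```python
import torch

# Clamp on a signed integer tensor -- the case the new lowering handles.
x = torch.tensor([-7, -1, 0, 3, 9], dtype=torch.int32)
print(torch.clamp(x, min=-2, max=4))
# tensor([-2, -1,  0,  3,  4], dtype=torch.int32)
```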
2024-01-05 15:16:49 -08:00
Han-Chung Wang 6096fcb347
[OnnxToTorch] Delete unused variables. (#2728) 2024-01-04 17:30:05 -08:00
Kunwar Grover fb1dfa3126
Bump llvm-project to 6b65d79fbb4682468333cea42b62f15c2dffd8f3 (#2723)
Co-authored-by: hanhanW <hanhan0912@gmail.com>
2024-01-04 14:33:41 -08:00
John Wu 4e5e34d215
[MLIR][ONNX] Add OnnxToTorch support for Slice Op (#2696) 2024-01-03 19:41:10 -08:00
Xida Ren (Cedar) 1778314620
add basic cumsum. this doesn't support the exclusive and reverse attrs (#2717)
fixes #2711
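For reference, a small PyTorch-level sketch of the supported default behavior (exclusive=0, reverse=0 in ONNX terms); the exclusive/reverse variants mentioned above are out of scope:
```python
import torch

# Default (inclusive, forward) cumulative sum, matching what the basic lowering supports.
x = torch.tensor([1, 2, 3, 4])
print(torch.cumsum(x, dim=0))  # tensor([ 1,  3,  6, 10])
```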
2024-01-03 09:52:59 -08:00
kumardeepakamd 9adad9bc40
Add support for reflection_pad1d (#2706)
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR
https://github.com/llvm/torch-mlir/pull/2693.
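As a semantics illustration (not the Linalg lowering added here), reflection padding mirrors the border values without repeating the edge element:
```python
import torch
import torch.nn.functional as F

# reflection_pad1d semantics: mirror two elements on each side of the last dim.
x = torch.arange(5.0).reshape(1, 1, 5)   # (N, C, W)
print(F.pad(x, (2, 2), mode="reflect"))
# tensor([[[2., 1., 0., 1., 2., 3., 4., 3., 2.]]])
```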

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-02 14:05:11 -05:00
Xida Ren (Cedar) 6660a26594
lower torch.aten.isinf to linalg (#2638)
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
2023-12-28 17:20:32 -08:00
Xida Ren (Cedar) 9fc212ea9a
support Onnx opset 1-13 ReduceMean where axes is supplied as an attr (#2703)
(instead of an input)

Addresses part of #2689. fixes #2702
2023-12-28 09:31:41 -08:00
Sungsoon Cho 8e389ff2ff
Implement lowering of torch.aten.exponential (#2680)
https://github.com/llvm/torch-mlir/issues/2646

Decompose aten.exponential() into: -log(1-x)/lambda
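A minimal Python sketch of the inverse-transform sampling this decomposition describes (assuming a uniform sample feeds the formula); an illustration, not the pass's implementation:
```python
import torch

def exponential_via_uniform(shape, lambd=1.0):
    u = torch.rand(shape)            # u ~ Uniform[0, 1)
    return -torch.log1p(-u) / lambd  # -log(1 - u) / lambda

samples = exponential_via_uniform((100_000,), lambd=2.0)
print(samples.mean())  # roughly 1 / lambda = 0.5
```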
2023-12-27 20:33:18 -08:00
Xida Ren (Cedar) d560698e3d
Lower `onnx.split` to `torch.aten` (#2686) 2023-12-27 17:53:07 -08:00
aldesilv 2d796b7502
lower onnx max op to torch aten maximum op (#2618)
lower onnx min op to torch aten minimum op
2023-12-27 11:07:35 -08:00
aldesilv 336cfb64b5
OnnxToTorch support for onnx.Mul op (#2699) 2023-12-27 10:50:08 -08:00
Xida Ren (Cedar) 6847fc1fc6
Fix since-opset too high (#2701)
Addresses two of the ops from
https://github.com/llvm/torch-mlir/issues/2689

https://github.com/llvm/torch-mlir/issues/2700
2023-12-27 10:08:09 -08:00
aldesilv abc6b0a25a
onnx to torch pow support (#2656) 2023-12-27 09:34:48 -08:00
Vivek Khandelwal 4f252c88b4
[MLIR][ONNX] Add OnnxToTorch support for GlobalAveragePool op. (#2692)
This commit adds the OnnxToTorch support for GlobalAveragePool op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-26 10:25:31 -08:00
saienduri ee75e8d1ae
[MLIR][ONNX] Add OnnxToTorch support for Reshape Op (#2698)
This commit adds the OnnxToTorch support for Reshape op.
2023-12-26 10:20:13 -08:00
Vivek Khandelwal 0849fd0a06 [MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 16:36:21 +05:30
Vivek Khandelwal 9a72c6584e [MLIR][ONNX] Add OnnxToTorch support for BatchNormalization and Concat op.
This commit adds the OnnxToTorch support for BatchNormalization and Concat op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-22 11:25:33 +05:30
John Wu 46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch (#2682)
The expression for HardSigmoid in ONNX
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html) is:
max(0, min(1, alpha * x + beta))

This is inherently different from HardSigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html),
which is:
  0          if x < -3
  1          if x > 3
  x/6 + 1/2  otherwise

Given that, it was simpler to compute the entire expression directly
when translating the ONNX expression to Torch MLIR, which is what this
PR does. Some of the logic overlaps with `DecomposeComplexOps`, so the
shared logic between `DecomposeComplexOps` and `DefaultDomainGToP` was
refactored into a `Utils` file.
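For clarity, a PyTorch-level sketch of the ONNX expression being computed (alpha/beta are the ONNX attributes; ONNX defaults shown). With alpha=1/6 and beta=1/2 it coincides with Torch's hardsigmoid:
```python
import torch
import torch.nn.functional as F

def onnx_hardsigmoid(x, alpha=0.2, beta=0.5):
    # max(0, min(1, alpha * x + beta))
    return torch.clamp(alpha * x + beta, min=0.0, max=1.0)

x = torch.linspace(-5.0, 5.0, steps=11)
print(onnx_hardsigmoid(x))
print(torch.allclose(onnx_hardsigmoid(x, alpha=1/6, beta=0.5), F.hardsigmoid(x)))  # True
```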
2023-12-21 07:29:22 -08:00
Vivek Khandelwal 3226241521 [MLIR][ONNX] Add OnnxToTorch support for Conv and ConvTranspose op.
This commit adds the OnnxToTorch support for Conv and ConvTranspose op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-21 11:12:14 +05:30
Stella Laurenzo d75cff6cd1 NFC: Remove unused variable causing a warning. 2023-12-20 19:23:27 -08:00
Rob Suderman 11cc92d4ab
[onnx] Lowerings from `onnx.tan` (#2642)
Started work on the `tan` lowerings for ONNX to Torch. Uses `sin` and
`cos` to represent a `tan`.
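A one-line sketch of the expansion described above (tan expressed through sin and cos):
```python
import torch

x = torch.linspace(-1.0, 1.0, steps=9)
print(torch.allclose(torch.sin(x) / torch.cos(x), torch.tan(x)))  # True
```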
2023-12-20 10:09:39 -08:00
Rob Suderman a24aadbfab
[aten] Make `torch.aten.matmul` to `linalg` work for non-broadcasting case (#2659)
Broadcasting for `torch.aten.matmul` is optional, so an MxN-by-NxK
matmul should be legalized to a `linalg.matmul`.
2023-12-20 10:09:10 -08:00
Sungsoon Cho 20ab882840
Fix typo in DecomposeBernoulli() match failure messages. (#2676) 2023-12-19 20:59:19 -08:00
Han-Chung Wang be3e74b647
Integrate llvm/llvm-project@282d501476 (2023-12-19) (#2675) 2023-12-19 13:28:37 -08:00
Andreas Falkenberg ebaab4200f
[ONNX] ONNX -> TORCH for Erf (#2673)
Adds the TorchOnnxToTorch lowering for the Erf function.
2023-12-19 08:07:27 -08:00
Vivek Khandelwal 8649b84e3f
[MLIR][ONNX] Add OnnxToTorch support for AveragePool op. (#2672)
This commit adds the OnnxToTorch support for AveragePool op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-18 18:17:11 -06:00
saienduri 698ff3a736
[MLIR][ONNX] Add OnnxToTorch support for Reduction Ops (#2657)
This commit adds the OnnxToTorch support for ReduceSum, ReduceMean, and
ReduceMin ops.
2023-12-18 12:37:31 -08:00
John Wu deacb8ef38
[MLIR][ONNX] Add OnnxToTorch support for Gelu (#2647)
This commit adds the OnnxToTorch support for Gelu op.

---------

Co-authored-by: Rob Suderman <suderman@google.com>
2023-12-18 10:57:08 -08:00
Rob Suderman 791c666479
[torch] Lower `torch.aten.sinh` to `linalg` (#2662) 2023-12-18 09:15:12 -08:00
Rob Suderman ae1a6e4a5a
[onnx] Lower `onnx.Gemm` to `torch` (#2663)
General lowering for `onnx.Gemm` to `torch`
2023-12-16 10:47:58 -08:00
Andreas Falkenberg cee8563060
[onnx] Support of onnx.Greater, onnx.Less, onnx.GreaterOrEqual to Torch (#2649)
The three remaining compare operations (onnx.Greater, onnx.Less, and
onnx.GreaterOrEqual) are also added with this pull request.
This concludes a set of basic tensor compare functions.
2023-12-16 12:42:11 -05:00
Rob Suderman 61888690bb
[onnx] Add support for `onnx.sinh` (#2643)
Adds a lowering from `onnx.sinh` to `aten.sinh`. This includes adding
the `aten.sinh` operator.
2023-12-15 21:23:51 -08:00
Rob Suderman 705ea958ae
[onnx] Lowerings from `onnx.transpose` (#2641)
Lowerings for `transpose` from ONNX to `aten`. The implementation builds
multiple `aten.transpose` operations, each swapping a pair of dimensions.
Because `onnx.transpose` can permute dimensions arbitrarily, several
`aten.transpose` ops may be required.
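A Python sketch of the idea (not the conversion code): an arbitrary permutation realized as a sequence of pairwise dimension swaps, each corresponding to one `aten.transpose`:
```python
import torch

def permute_via_transposes(x, perm):
    src = list(range(x.dim()))       # original dim currently at each position
    for i, p in enumerate(perm):
        j = src.index(p)             # find where the wanted dim sits now
        if j != i:
            x = x.transpose(i, j)    # one pairwise swap (aten.transpose)
            src[i], src[j] = src[j], src[i]
    return x

t = torch.randn(2, 3, 4, 5)
perm = (3, 1, 0, 2)
print(torch.equal(permute_via_transposes(t, perm), t.permute(perm)))  # True
```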
2023-12-15 15:30:05 -08:00
Quinn Dawkins 030b0140d4
[TorchToLinalg] Lower aten.cat to tensor.concat (#2650)
This replaces the lowering of aten.cat with tensor.concat, allowing more
efficient handling of concatenations in downstream flows. The refbackend
populates concat decomposition patterns that can be used to recover the
previous lowering.
2023-12-15 15:45:32 -05:00
Rob Suderman 061af696ce
[onnx] Lowering for `onnx.shape` to `torch` and `tensor` (#2648)
Includes the lowering from the `aten` equivalent to `tensor` operations.
2023-12-15 11:37:49 -08:00
Sungsoon Cho 55e9401c5c
Implement lowering of aten.cosh op. (#2635) 2023-12-15 11:19:26 -08:00
Gaurav Shukla eb9249e601
[ONNX][MLIR] Add support for LeakyRelu and GatherElements op (#2655)
This commit adds support for the `LeakyRelu` and `GatherElements` ops in the
ONNX pipeline.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-12-15 11:18:28 -08:00
saienduri f59c01fd2f
[MLIR][ONNX] Add OnnxToTorch support for q-z ops (specific ops in description) (#2601)
This commit adds the OnnxToTorch support for Reciprocal, Round,
ScatterElements, Sigmoid, Sin, Tanh, Sqrt, Sub, Sum, Where, Xor,
Squeeze, Unsqueeze ops.
For reviewers, the ops that weren't trivial and probably require extra
review are Sum, Squeeze, and Unsqueeze.
2023-12-15 09:36:18 -08:00
Andreas Falkenberg 4ec8b9fc02
[onnx] add support for onnx.LessOrEqual (#2639)
Added the less-or-equal operation (onnx.LessOrEqual) to OnnxToTorch.

---------

Co-authored-by: root <andreas.falkenberg@amd.com>
2023-12-14 22:23:23 -05:00
Rob Suderman 4857606ffe
[onnx] Lowerings from `onnx.selu` (#2634)
Lowers `selu` from ONNX to the corresponding torch implementation.
Torch's `selu` implementation has fewer features, so we use the
generalized `elu` with the input scale set to `1.0`.
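As an illustration of the relationship (an assumption-level sketch, not the lowering code): torch's `selu` is a scaled `elu` with the standard SELU constants and an input scale of 1.0:
```python
import torch

alpha = 1.6732632423543772
scale = 1.0507009873554805

x = torch.randn(4, 4)
via_elu = torch.ops.aten.elu(x, alpha, scale, 1.0)  # (self, alpha, scale, input_scale)
print(torch.allclose(via_elu, torch.selu(x)))       # True
```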
2023-12-14 08:53:47 -08:00
JianzheXiao 6ddeb1a6ef
[torch] Add support for aten.selu (#2640)
Add `aten.selu` operation to `torch` dialect.
2023-12-13 20:28:08 -08:00
John Wu 42392bc845
[MLIR][ONNX] Add OnnxToTorch support for matmul ops (#2629)
This commit adds the OnnxToTorch support for Matmul.
2023-12-13 09:35:32 -08:00
JianzheXiao 7cf52ae73f
[Torch Dialect]Add Support for AtenGroupNormOp and AtenNativeGroupNormOp (#2591)
Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-12-13 11:05:12 +08:00
Frederik Harwath b656c674ee Implement e2e support for aten.acos op
This depends on a change in the LLVM core repository which adds acos
support to the MLIR Math dialect.
2023-12-12 10:52:02 +01:00
Sambhav Jain 7acabafd84
Remove folder from `AtenStackOp` for single element list inputs (#2626)
`AtenStackOp` defines this folder for list operand containing single
element:
```
OpFoldResult AtenStackOp::fold(FoldAdaptor adaptor) {
  auto list = getOperand(0).getDefiningOp<PrimListConstructOp>();
  if (!list || !list->hasOneUse() || list.getElements().size() != 1)
    return nullptr;
  return list.getElements()[0];
}
```
However, unlike `AtenCatOp`, `AtenStackOp` cannot be folded away for
single element list operand because the result from a stack operation
contains an additional dimension (of size 1, like expand_shape).

This PR removes the `AtenStackOp::fold` method, and adds an e2e test for
single element list input case, which fails on current `main` as
follows:
```
Unexpected outcome summary: (linalg)                                                                                                                                                                   
                                                                                                                                                                                                       
****** Failed tests - 1 tests                                                                                                                                                                          
    FAIL - "TensorsStackSingleElementListModule_basic"                                                                                                                                                 
        @ trace item #0 - call to "forward"                                                                                                                                                            
        @ output of call to "forward"                                                                                                                                                                  
        ERROR: shape (torch.Size([10, 32])) is not equal to golden shape (torch.Size([10, 1, 32]))     
```
Thanks Chris Lalau Keraly for the bug report.
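A small PyTorch illustration of why the fold is invalid (stacking along dim=1 to match the golden shape in the error above):
```python
import torch

t = torch.randn(10, 32)
print(torch.stack([t], dim=1).shape)  # torch.Size([10, 1, 32]) -- new size-1 dim
print(torch.cat([t], dim=1).shape)    # torch.Size([10, 32])    -- same as t
```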
2023-12-11 10:52:50 -08:00
Vivek Khandelwal 0b4422a253 [MLIR][ONNX] Add OnnxToTorch support for bitwise and math ops
This commit adds the OnnxToTorch support for BitwiseXor, BitwiseOr, Div, Equal, Cast,
Ceil, Floor, Cos, and Clip op.
This commit also adds the TorchToLinalg support for aten.clamp.Tensor and aten.clamp_min.Tensor op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-11 19:36:01 +05:30
JianzheXiao 96fcde4d77
[Torch Dialect] Support Einsum Op (#2230)
As titled, adds support for the torch.aten.einsum op.

Right now only static shapes are supported because of a known issue; the
fix is here: https://github.com/llvm/torch-mlir/pull/2154

Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
2023-12-10 12:30:37 +08:00
Vivek Khandelwal 07c3e11f56 [MLIR][TORCH] Add support for Short(si16) data type
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-09 16:52:23 +05:30
Felix Schneider fb21a85874
[TorchToLinalg] Lower grouped conv2d to linalg Op with correct dimension ordering (#2623)
The linalg Op `linalg.conv_2d_ngchw_fgchw` had a bug where

1. Weights were accessed as G,F,C,H,W instead of as F,G,C,H,W
2. Output was accessed as N,F,G,H,W instead of as N,G,F,H,W

Now this has been fixed in
https://github.com/llvm/llvm-project/pull/73855 which broke the
torch-mlir lowering to that Op.

This patch switches lowering in torch-mlir to the newly introduced
`linalg.conv_2d_ngchw_gfchw` op which accesses weights in an order that
is compatible with PyTorch's memory layout.

Fix https://github.com/llvm/torch-mlir/issues/2622
2023-12-08 14:18:23 +01:00
Stella Laurenzo 8252656b6d
Advance llvm-project and stablehlo. (#2619)
llvm-project: bbd2b08b95fe76bea138c1b03c1cd42ed3ee04df
stablehlo: ab709fe48de88c67717abfbd7ef17425eb95ddaf

These commits were chosen in order to account for an MLIR API break from
3dbac2c007
which required a patch to stablehlo. We integrate a bit beyond that
commit to deal with some revert/reapply cycles in the intervening range
which were discovered in another downstream.

Further, it requires adaptation to the stablehlo API breaks introduced
from https://github.com/openxla/stablehlo/pull/1872 which are along for
the ride.

Since some stablehlo builders were changed to directly take int64_t
array refs, also traced that up some call stacks to eliminate some
signed/unsigned mismatches that result.

Also adds a few TOSA tests to the passing set that seem to work now.
2023-12-07 23:13:42 -08:00
Quinn Dawkins 63505ad6b2
[TorchToLinalg] Drop constexpr from ifs in argmin/max.dim (#2617)
MSVC-19 does not support constexprs of lambda captured constexpr values
like this: https://godbolt.org/z/ej65rMzdr
Instead, this just drops the constexpr from the if statements.

See the discussion in
https://discord.com/channels/689900678990135345/1062405112292712499/1182338050664185999
2023-12-07 13:08:17 -05:00
Quinn Dawkins 141202bc01
[TorchToLinalg] Fix integer type handling for aten.mm (#2615)
Despite aten.mm requiring the input and output types match, we still opt
to maintain signedness semantics in case later passes try to do any sort
of integer type narrowing.
2023-12-07 00:13:53 -05:00
frafranz c0115706a0
Add a decomposition for torch.aten.argmin (#2613)
Adds a lowering for the torch.aten.argmin operator to linalg via decomposition into torch.aten.min.dim.
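A PyTorch-level sketch of the decomposition's idea: `min.dim` already returns indices, which argmin can reuse (ignoring tie-breaking subtleties):
```python
import torch

x = torch.randn(3, 5)
values, indices = torch.min(x, dim=1)  # aten.min.dim returns (values, indices)
print(torch.equal(indices, torch.argmin(x, dim=1)))  # True (no ties here)
```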

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2023-12-06 09:45:30 -05:00
Frederik Harwath 6248216dca
Add aten.min.dim to linalg lowering (#2600) 2023-12-05 07:16:35 -08:00
Quinn Dawkins 400752ca8d
[TorchToLinalg] NFC: Move Utils.h to an externally accessible location (#2603) 2023-12-01 19:38:21 -05:00
Ramiro Leal-Cavazos e568f7e999
Move handling of integer signedness to the backend conversions (#2597)
The function `getTypeForScalarType` currently takes an argument to
specify the signedness of integer types. This is leakage of backend
specific requirements into the torch dialect world. Because
`getTypeForScalarType` is a utility function for the torch dialect, it
should only produce types that match the sign conventions used by
PyTorch (regular integers are signed and unsigned integers are
unsigned).

This commit removes the signedness argument from
`getTypeForScalarType`, and moves the backend specific handling of
integer types to the backend code.
2023-11-29 09:43:09 -08:00
Mi Jiazhi f7a92d346e
[Torch Dialect] Decompose AtenTriuOp (#2561)
decompose like:
```
import torch

def my_triu(x, diag):
    rows = torch.ops.aten.size(x, -2)
    cols = torch.ops.aten.size(x, -1)

    row_indices = torch.ops.aten.arange(rows).unsqueeze(1)
    col_indices = torch.ops.aten.arange(cols).unsqueeze(0)

    cond = torch.ops.aten.ge(
        col_indices, torch.ops.aten.add(row_indices, diag))
    return torch.ops.aten.where(cond, x, 0)

x = torch.rand(5, 7)
assert torch.allclose(my_triu(x, 0), torch.triu(x, 0))
assert torch.allclose(my_triu(x, 1), torch.triu(x, 1))
assert torch.allclose(my_triu(x, 2), torch.triu(x, 2))
assert torch.allclose(my_triu(x, -1), torch.triu(x, -1))
```

---------

Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-11-29 10:35:26 +08:00
Vivek Khandelwal dc9ea08db5 [MLIR][ONNX] Add OnnxToTorch support for atan and bitwise ops
This commit adds the OnnxToTorch support for Atan, Bitshift, BitwiseAnd,
and BitwiseNot op.
This commit also adds the TorchToLinalg support for AtenBitwiseLeftShiftTensorOp.

Signed-Off By: vivekkhandelwal@nod-labs.com
2023-11-28 17:19:07 +05:30
James Newling 1b7d6f2af9
Improve decomposition of pixel_shuffle (support dynamic shapes) (#2590)
The aten.reshape ops in the decomposition are replaced with prims.collapse
and prims.split_dim ops, which avoids the cases where the lowering of
reshape from torch to linalg is not supported.

Essentially, by using the collapse and split_dim ops instead of the
reshape ops, we do not "lose" the information that the reshapes do not
arbitrarily mix dimensions, which makes the lowering easy.

3 additional tests added:
- fully dynamic,
- dynamic only in the spatial dimensions,
- dynamic only in the non-spatial dimensions.
2023-11-22 12:31:06 -08:00
Stella Laurenzo e06efc5136
Initial TorchOnnxToTorch conversion pipeline. (#2585)
Adds a pipeline to convert custom ops and metadata represented as
`torch.operator` custom ops to corresponding `torch` ops where possible.

This is part of a multi-part approach for building ONNX import in as a
regular feature of torch-mlir. It is focused on the conversions vs the
infra. We will end up maintaining a [pure-python
importer](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/importers/onnx_importer.py)
to go with this in torch-mlir, and we will also maintain test case
generation utilities derived from it.

I have left substantial documentation in the README of the conversion
directory, including the recommended approach that we will take to keep
building this out.

(note that this organizes the code to coincide with the refactoring in
#2442 versus the current flat arrangement)
2023-11-21 21:02:55 -08:00
Vivek Khandelwal d50d3aa5e7 [MLIR][TORCH] Add support for unsigned integer types
Refer: https://github.com/pytorch/pytorch/issues/58734
2023-11-21 21:57:26 +05:30
James Newling 03e8f99730
Lowering to linalg of prims split_dim op (#2576)
Adds support for lowering the prims split_dim op.

Similar design to collapse op lowering in 
https://github.com/llvm/torch-mlir/pull/2572, with some 
small differences, because the split_dim op (in pytorch) is
view-changing whereas the collapse is not. The difference 
means that 

1) it must be registered in the function Torch::isViewLikeOp
2) it must be added to the "expected fail" set for the torch dynamo backend.
2023-11-21 07:56:09 -08:00
Zhekun(Josh) Zhang d67afa9e95
[Torch] Add fold rule for AtenMaskedFillTensorOp to AtenMaskedFillScalarOp (#2543) 2023-11-21 13:26:17 +08:00
James Newling 647f2f5076
Additional tests for view lowering (#2584)
The logic for lowering the aten view op to linalg is fairly complex. 
In this PR I have tried to follow all non-failing paths through the 
lowering and add unit tests where they're missing.

There is 1 logical change to the lowering: redundant tensor.cast ops
(same source and destination type) are folded.
2023-11-20 17:35:25 -08:00
Yuanqiang Liu 7b94189e07
[E2E] add nan case in elementwise comparison e2e tests (#2575) 2023-11-20 11:27:08 +08:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
2023-11-19 12:10:19 -08:00
Yuanqiang Liu facbe5d96b
[Torch Dialect] support AtenArangeStartOutOp in ReduceOpVariants like AtenBernoulli_FloatOp (#2563)

It fixes cases like: `%2110 = torch.aten.arange.start_out %int1,
%int1517, %int1, %2109 : !torch.int, !torch.int, !torch.int,
!torch.tensor -> !torch.tensor`.
`aten.arange.start_out` also doesn't have value semantics, meaning `%2110`
is an alias for `%2109`.
So it is decomposed into `aten.arange.start` + `torch.contents.overwrite`.
The complex decomposition logic is there to handle cases like view and
dtype cast, which I add in the e2e tests.
2023-11-17 00:51:55 +08:00
James Newling dad1f012f6
Add verification for torch permute op (#2551)
- adds support for an optional verifier to the generated torch op
tablegen (GeneratedTorchOps.td)
- uses the above to add a verifier for the torch permute op. 

Motivation: I hit an unclear error from linalg while developing a
decomposition pass for pixel_shuffle. The error would have been clearer
if the problem had been detected earlier in the invalid aten.permute op.

Testing: new tests added. To run added tests, from the base directory
run

```
 ./build/bin/llvm-lit  test/Dialect/Torch/invalid.mlir
 ```
2023-11-15 11:47:54 -08:00
James Newling e81282ae8f
Support for prims collapse op (lowering to linalg) (#2572)
Steps taken:
1) add generator code to torch_ods_gen.py, run update_torch_ods.sh
2) add (custom) shape and type inference generator code to
abstract_interp_lib_gen.py, run update_abstract_interp_lib.sh
3) Implement lowering to tensor.collapse_dims. Requires the `start` and
`end` values to be constant, else lowering fails
4) Update xfail_sets.py (append to LTC_XFAIL_SET) after running
/tools/e2e_test.sh --filter Collapse --verbose -c XX for all supported
backends (XX).

Motivation: 
- Supporting the collapse operation will be useful for lowering of
pixel_shuffle (see Issue #2559)
2023-11-15 08:34:38 -08:00
Yuanqiang Liu 3ab790c50a
[Torch Dialect] add canonicalize for aten.numel (#2562) 2023-11-11 12:16:53 +08:00
Yuanqiang Liu 60effcee89
[Dtype Function] fix aten.div.Tensor_mode's dtype function (#2555) 2023-11-09 09:46:53 +08:00
saienduri ad18219820
Fix for unused variable failure when trying to bump torch-mlir in IREE (#2560)
Due to blob being an unused variable, we are not able to bump torch-mlir
in IREE. With this PR, we remove this unused variable.
2023-11-08 15:55:41 -08:00
James Newling b6e551c7b8
Decomposition of aten.pixel_shuffle with static input shape (#2550)
For static tests (that is, when the shape is known), for example:

 ```
 @annotate_args([None, ([3, 18, 2, 2], torch.float32, True)])
 ```
 
The e2e test passes, but only if the replacement op's return type is set as
undefined (the optional shape and type must be explicitly made unset);
otherwise there's an error about the function return type.
 
 For dynamic cases, for example if the above is replaced with 
 
  ```
 @annotate_args([None, ([-1, -1, -1, -1], torch.float32, True)])
 ```

There is a failure to lower to linalg from torch ("view op explicitly
labelled as illegal"). This seems to be because the support for lowering
from torch to linalg with dynamic shapes is limited.
2023-11-08 08:52:44 -05:00
JianzheXiao a42d4c18ff
[Torch Dialect]Support aten.cosine_similarity (#2364)
As titled, adds support for aten.cosine_similarity, with support for
broadcasting inputA/inputB to the same shape.
2023-11-08 15:28:30 +08:00
Jiawei Wu d5ee8ee73a
[Torch Dialect] emit aten.reshape_as op and add decomposition pattern. (#2553) 2023-11-05 11:38:36 +08:00
Yuanqiang Liu 0378da0abd
[Torch Dialect] support aten.isinf (#2544)
Also fix linalg lowering from `UEQ` to `OEQ`.  
I will check other comparisons' lowerings later.
2023-11-04 22:26:01 +08:00
saienduri 88adf384cc
torch-mlir change for dense resource implementation (#2513)
Co-authored-by: Avinash Sharma <avinash@nod-labs.com>
2023-11-03 11:44:07 -07:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through but
roughly ~halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Zhekun(Josh) Zhang 88d4c475d3
[Torch] Fix mixP case for non value semantic ops (#2540)
NonValueSemantic ops like Add_, div_, etc. expect the result dtype to be the
same as the first input. However, the current implementation would produce the
wrong result type for a case like:

```python
a = torch.randn(3, 3).half() # float16
b = torch.randn(3, 3) # float32
a += b # i.e. torch.ops.aten.add_(a, b)
```
torch expects `a` to be float16, but dtype refinement would infer
float32 type, since it's replaced by `aten.add`.
2023-11-02 12:40:08 +08:00
Daniel Garvey 4901773f77
add uncovered cases in view lowering (#2524)
removes unnecessary checks from empty strided
2023-11-01 21:56:44 -05:00
Yuanqiang Liu 365655ca29
[Torch Dialect] add canonicalize pattern for aten.floor with integer type (#2534)
2023-11-02 09:51:31 +08:00
saienduri a2e694df40
add e2e support for torch.eye operations (aten.eye, aten.eye.m) (#2478) 2023-11-01 11:23:28 -07:00
Daniel Garvey 1d41f7b6fe
Rework AtenEmptyStridedOp checks (#2537)
Now using Value instead of Ints. Trades compile failure for a runtime
assert
2023-10-31 22:56:54 -05:00
JianzheXiao e8706957c0
[Torch Dialect] Add Support for aten.unflatten.int (#2475)
As titled, adds support for aten.unflatten.int, allowing dim to be negative
and one of the sizes' elements to be -1.
2023-10-31 15:36:16 +08:00
Yuanqiang Liu e7282487ea
[Torch Dialect] support aten.glu (#2531) 2023-10-26 10:36:18 +08:00
Ze Zhang 7cb2db6279
Update dtype check in torch-to-tosa clamp op (#2529)
As titled.

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-23 17:04:30 -07:00
Ze Zhang 4279b750da
update AtenClampOp in torch-to-tosa to handle fp inputs (#2516)
As titled.

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-17 14:49:47 -07:00
Chi_Liu 14a4da923b
Update llvm-project to b44b3494f60296db6aca38a14cab061d9b747a0a (#2511)
The main purpose is to bring in the new mesh dialect change.
https://github.com/llvm/llvm-project/pull/68007
2023-10-16 19:29:48 -07:00
Ze Zhang f2c53b8ca5
Add aten.isclose support and its torch-to-tosa lowering (#2512)
Add aten.isclose op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests


To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-16 09:44:53 -07:00
Ze Zhang e649e06b7b
Add aten.unflatten.int support and its torch-to-tosa lowering (#2509)
Add aten.unflatten.int op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering:

`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-13 18:39:41 -07:00
Quinn Dawkins 6f81ad7293
[TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (#2505)
With strict symbolic shapes, we can assume numpy-style dynamic
broadcasts never occur. This improves the lowering in the presence of
this assumption.
2023-10-05 15:15:26 -04:00
Quinn Dawkins ae72eec224
Improve aten.broadcast_to folder when in strict symbol mode (#2504)
Strict symbolic shapes allow us to assume numpy-style dynamic broadcasts
never occur. This allows us to strengthen the folder for broadcasts to
cases where the rank is the same and all shapes match (including dynamic
sentinel values).
2023-10-05 09:02:10 -04:00
Ramiro Leal-Cavazos 2e5d65064c [linalg] Add handling for leading and trailing size-1 dims in ViewOp
This commit adds to the lowering of `aten.view` handling for the
following cases:

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
- `(a.size(i), ...)` -> `(1, ..., 1, a.size(i), ...)`
- `(1, ..., 1, a.size(i), ...)` -> `(a.size(i), ...)`
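For illustration, the listed patterns only insert or drop unit dimensions around a preserved `a.size(i)`; a hypothetical PyTorch-level example:
```python
import torch

a = torch.randn(4, 7)
print(a.view(4, 7, 1, 1).shape)  # (..., a.size(i)) -> (..., a.size(i), 1, 1)
print(a.view(1, 1, 4, 7).shape)  # (a.size(i), ...) -> (1, 1, a.size(i), ...)
```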
2023-10-03 23:04:52 +00:00
Ramiro Leal-Cavazos 1c508af0ba Revert "[linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474)"
This reverts commit 7c6b9d2445.
2023-10-03 23:04:52 +00:00
Vivek Khandelwal ca6ce8974f [MLIR][TORCH] Add support for int8 dtype for sub, add, and bitwise_and op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-03 22:12:31 +05:30
Vivek Khandelwal 9293326e1e [MLIR][TORCH] Add support for bitwise_right_shit and bitwise_and.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 13:06:59 +05:30
Vivek Khandelwal c434736ee9 [MLIR][TORCH] Add support for conversion to int8 dtype
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 09:48:46 +05:30
Vivek Khandelwal 71ac62f3a8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-28.

aten.baddbmm changes done because upstream PyTorch has now added
support for fp16 gemm on CPU.
Refer: 9399e0b1ff
2023-10-02 09:48:32 +05:30
Stella Laurenzo 860be09a39
Elide dynamic broadcast checks when in strict symbolic shapes mode. (#2496)
When importing dynamic shaped programs from Dynamo, via torch.compile or
torch.export, we can assume that strict symbolic shape checks have been
done prior to generating torch IR. Among other shape checking, this
eliminates the case where an unknown dimension can be dynamically '1' in
a way that signals a broadcast.

Adds a `isAssumingStrictSymbolicShapes` utility which consults a
`torch.assume_strict_symbolic_shapes` attribute on an enclosing scope
and returns true if present.

In the linalg pipeline, many runtime checks are elided when this returns
true.
2023-09-29 16:45:48 -07:00