Commit Graph

558 Commits (9ac90ec7b2710dda9a30450d2c0ee308b31ee4e0)

Author SHA1 Message Date
Sungsoon Cho a8538e1e3f
Decompose AtenNormalFunctionalOp into AtenRandn* and other arithmetic. (#2737) 2024-01-15 22:49:29 -08:00
lonely eagle f85e5c932b
[Torch Dialect] support aten.isneginf, aten.isposinf, aten.nan_to_num (#2743) 2024-01-16 14:29:34 +08:00
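For reference, a minimal eager-PyTorch sketch of the three ops this commit supports (by default, nan_to_num maps nan to 0.0 and ±inf to the dtype's finite extremes):
```
import torch

x = torch.tensor([float("-inf"), 1.0, float("nan"), float("inf")])
print(torch.isneginf(x))    # tensor([ True, False, False, False])
print(torch.isposinf(x))    # tensor([False, False, False,  True])
print(torch.nan_to_num(x))  # nan -> 0.0, +/-inf -> finite dtype extremes
```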
James Newling f78ec78ac8
Adjust bound check to be the same as PyTorch native (i.e. stricter) (#2755)
prims.expand expects the start and end dimensions to be strictly less
than the rank of the tensor.
2024-01-15 11:44:45 -08:00
lisaliu1 09421b1cf3
[TorchToLinalg] Add lowering for aten.replication_pad2d (#2715)
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2024-01-15 14:02:27 -05:00
Rob Suderman dc37616d67
[torch][quant] Support quantize and dequantize for torch (#2731)
Handle both `torch.dequantize` and `torch.quantize_per_tensor`, including
op-based quantization parameter tracking. This includes adding
`qint32` to torch types, as it was missing during the initial type
inclusion.

For testing, we only have `torch.int8` and `torch.float` types on
function boundaries, as the `qint8` types require passing the scale
and zero-point quantization information, which is not supported yet.
2024-01-12 19:11:14 -08:00
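A minimal eager-PyTorch sketch of the op pair being handled (the scale and zero point here are arbitrary illustrative values, not values from this patch):
```
import torch

x = torch.randn(4, dtype=torch.float32)
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=3, dtype=torch.qint32)
y = torch.dequantize(q)  # recovers (q.int_repr() - zero_point) * scale
print(q.int_repr(), y)
```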
Ilija Kalinić e1a86e480a
Implement lowering of torch.aten.logit (#2697)
Closes nod-ai/SHARK-Turbine#290
2024-01-11 20:25:42 +05:30
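The lowering follows the standard definition logit(x) = log(x / (1 - x)); a quick eager check of that identity:
```
import torch

x = torch.tensor([0.25, 0.5, 0.75])
assert torch.allclose(torch.logit(x), torch.log(x / (1 - x)))
```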
Frederik Harwath 0860c41ee2 Implement aten.reflection_pad2d lowering to linalg 2024-01-10 21:32:22 -10:00
Vivek Khandelwal 208ae35583 [MLIR][ONNX] Add TorchToOnnx Support for DepthToSpace op
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Kunwar Grover fb1dfa3126
Bump llvm-project to 6b65d79fbb4682468333cea42b62f15c2dffd8f3 (#2723)
Co-authored-by: hanhanW <hanhan0912@gmail.com>
2024-01-04 14:33:41 -08:00
kumardeepakamd 9adad9bc40
Add support for reflection_pad1d (#2706)
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR
https://github.com/llvm/torch-mlir/pull/2693.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-02 14:05:11 -05:00
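For context, reflection padding mirrors the border values without repeating the edge element itself (pad sizes must be smaller than the padded dimension); a small eager example:
```
import torch

x = torch.tensor([[[1.0, 2.0, 3.0]]])
y = torch.nn.functional.pad(x, (2, 1), mode="reflect")  # pad left by 2, right by 1
assert y.tolist() == [[[3.0, 2.0, 1.0, 2.0, 3.0, 2.0]]]
```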
Sungsoon Cho 8e389ff2ff
Implement lowering of torch.aten.exponential (#2680)
https://github.com/llvm/torch-mlir/issues/2646

Decompose aten.exponential() into: -log(1-x)/lambda (inverse transform sampling)
2023-12-27 20:33:18 -08:00
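This is inverse transform sampling: if u is uniform on [0, 1), then -log(1 - u)/lambda is exponentially distributed with rate lambda. A rough eager sketch of the idea:
```
import torch

lambd = 2.0
u = torch.rand(100000)
samples = -torch.log(1.0 - u) / lambd
print(samples.mean())  # approximately 1 / lambd = 0.5
```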
John Wu 46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch (#2682)
The expression for HardSigmoid in ONNX
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html), max(0, min(1,
alpha * x + beta)), is inherently different from HardSigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html),
which is: 0 if x < -3, 1 if x > 3, and x/6 + 1/2 otherwise.

Given that, it was simpler to compute out the entire expression when
translating the ONNX expression to Torch MLIR, which is what this PR
does. Some of the logic is shared with `DecomposeComplexOps`, so the
shared logic between `DecomposeComplexOps` and `DefaultDomainGToP` was
refactored into a `Utils` file.
2023-12-21 07:29:22 -08:00
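A quick eager check that the computed-out ONNX expression matches Torch's definition when alpha = 1/6 and beta = 1/2 (note the ONNX defaults are alpha = 0.2, beta = 0.5, so the conversion must keep alpha and beta symbolic):
```
import torch

x = torch.linspace(-5.0, 5.0, steps=11)
alpha, beta = 1.0 / 6.0, 0.5
onnx_style = torch.clamp(alpha * x + beta, min=0.0, max=1.0)
assert torch.allclose(onnx_style, torch.nn.functional.hardsigmoid(x))
```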
Rob Suderman 11cc92d4ab
[onnx] Lowerings from `onnx.tan` (#2642)
Started work on the `tan` lowerings for ONNX to Torch. Uses `sin` and
`cos` to represent `tan`.
2023-12-20 10:09:39 -08:00
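The rewrite relies on the identity tan(x) = sin(x)/cos(x); a quick eager check:
```
import torch

x = torch.linspace(-1.0, 1.0, steps=9)
assert torch.allclose(torch.sin(x) / torch.cos(x), torch.tan(x))
```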
Sungsoon Cho 20ab882840
Fix typo in DecomposeBernoulli() match failure messages. (#2676) 2023-12-19 20:59:19 -08:00
Han-Chung Wang be3e74b647
Integrate llvm/llvm-project@282d501476 (2023-12-19) (#2675) 2023-12-19 13:28:37 -08:00
Sungsoon Cho 55e9401c5c
Implement lowering of aten.cosh op. (#2635) 2023-12-15 11:19:26 -08:00
JianzheXiao 6ddeb1a6ef
[torch] Add support for aten.selu (#2640)
Add `aten.selu` operation to `torch` dialect.
2023-12-13 20:28:08 -08:00
JianzheXiao 7cf52ae73f
[Torch Dialect]Add Support for AtenGroupNormOp and AtenNativeGroupNormOp (#2591)
Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-12-13 11:05:12 +08:00
Frederik Harwath b656c674ee Implement e2e support for aten.acos op
This depends on a change in the LLVM core repository which adds acos
support to the MLIR Math dialect.
2023-12-12 10:52:02 +01:00
Vivek Khandelwal 0b4422a253 [MLIR][ONNX] Add OnnxToTorch support for bitwise and math ops
This commit adds the OnnxToTorch support for BitwiseXor, BitwiseOr, Div, Equal, Cast,
Ceil, Floor, Cos, and Clip op.
This commit also adds the TorchToLinalg support for aten.clamp.Tensor and aten.clamp_min.Tensor op.

Signed-Off By: vivekkhandelwal1424@gmail.com
2023-12-11 19:36:01 +05:30
JianzheXiao 96fcde4d77
[Torch Dialect] Support Einsum Op (#2230)
Support the torch.aten.einsum op.

Right now only static shapes are supported because of a known issue; the
fix is here: https://github.com/llvm/torch-mlir/pull/2154

Co-authored-by: Jiawei Wu
[wujiawei.aml@bytedance.com](mailto:wujiawei.aml@bytedance.com)
2023-12-10 12:30:37 +08:00
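For example, a matrix multiply expressed through the einsum op this PR supports:
```
import torch

a, b = torch.randn(2, 3), torch.randn(3, 4)
assert torch.allclose(torch.einsum("ij,jk->ik", a, b), a @ b, atol=1e-6)
```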
frafranz c0115706a0
Add a decomposition for torch.aten.argmin (#2613)
Adds a lowering for the torch.aten.argmin operator to linalg via decomposition into torch.aten.min.dim.

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2023-12-06 09:45:30 -05:00
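The decomposition uses the fact that argmin is the index component of min.dim; in eager terms:
```
import torch

x = torch.randn(4, 5)
values, indices = torch.min(x, dim=1)  # min.dim returns both values and indices
assert torch.equal(indices, torch.argmin(x, dim=1))
```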
Frederik Harwath 6248216dca
Add aten.min.dim to linalg lowering (#2600) 2023-12-05 07:16:35 -08:00
Mi Jiazhi f7a92d346e
[Torch Dialect] Decompose AtenTriuOp (#2561)
decompose like:
```
import torch

def my_triu(x, diag):
    rows = torch.ops.aten.size(x, -2)
    cols = torch.ops.aten.size(x, -1)

    row_indices = torch.ops.aten.arange(rows).unsqueeze(1)
    col_indices = torch.ops.aten.arange(cols).unsqueeze(0)

    cond = torch.ops.aten.ge(
        col_indices, torch.ops.aten.add(row_indices, diag))
    return torch.ops.aten.where(cond, x, 0)

x = torch.rand(5, 7)
assert torch.allclose(my_triu(x, 0), torch.triu(x, 0))
assert torch.allclose(my_triu(x, 1), torch.triu(x, 1))
assert torch.allclose(my_triu(x, 2), torch.triu(x, 2))
assert torch.allclose(my_triu(x, -1), torch.triu(x, -1))
```

---------

Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-11-29 10:35:26 +08:00
Vivek Khandelwal dc9ea08db5 [MLIR][ONNX] Add OnnxToTorch support for atan and bitwise ops
This commit adds the OnnxToTorch support for Atan, Bitshift, BitwiseAnd,
and BitwiseNot op.
This commit also adds the TorchToLinalg support for AtenBitwiseLeftShiftTensorOp.

Signed-Off By: vivekkhandelwal@nod-labs.com
2023-11-28 17:19:07 +05:30
James Newling 1b7d6f2af9
Improve decomposition of pixel_shuffle (support dynamic shapes) (#2590)
The aten.reshape ops in the decomposition are replaced with prims.collapse
and prims.split_dim ops, which avoids the cases where the lowering of
reshape from torch to linalg is not supported.

Essentially, by using the collapse and split_dim ops instead of the
reshape ops, we are not "losing" the information that the reshapes do not
arbitrarily mix dimensions, which makes lowering easy.

3 additional tests added: 
- fully dynamic, 
- dynamic only the spatial dimensions, 
- dynamic only in the non-spatial dimensions.
2023-11-22 12:31:06 -08:00
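For context, pixel_shuffle rearranges (N, C*r*r, H, W) into (N, C, H*r, W*r); the decomposition splits the channel dimension apart and collapses the pieces into the spatial dimensions rather than emitting arbitrary reshapes:
```
import torch

x = torch.randn(1, 18, 2, 2)  # C = 2, upscale factor r = 3
y = torch.nn.functional.pixel_shuffle(x, upscale_factor=3)
assert y.shape == (1, 2, 6, 6)
```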
James Newling 03e8f99730
Lowering to linalg of prims split_dim op (#2576)
Adds support for lowering the prims split_dim op.

Similar design to the collapse op lowering in
https://github.com/llvm/torch-mlir/pull/2572, with some
small differences, because the split_dim op (in pytorch) is
view-changing whereas the collapse is not. The difference
means that

1) it must be registered in the function Torch::isViewLikeOp
2) it must be added to the "expected fail" set for the torch dynamo backend.
2023-11-21 07:56:09 -08:00
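A minimal eager sketch of the op's semantics, assuming split_dim(a, dim, outer_length) turns one dimension into (outer_length, inner_length):
```
import torch

x = torch.randn(2, 12)
y = torch.ops.prims.split_dim(x, 1, 3)  # dim 1 becomes (3, 4)
assert y.shape == (2, 3, 4)
```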
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
2023-11-19 12:10:19 -08:00
Yuanqiang Liu facbe5d96b
[Torch Dialect] support AtenArangeStartOutOp in ReduceOpVariants like… (#2563)
… AtenBernoulli_FloatOp

It fixes cases like: `%2110 = torch.aten.arange.start_out %int1,
%int1517, %int1, %2109 : !torch.int, !torch.int, !torch.int,
!torch.tensor -> !torch.tensor`.
`aten.arange.start_out` also doesn't have value semantics, meaning `%2110`
is an alias for `%2109`.
So I decompose it to `aten.arange.start` + `torch.contents.overwrite`.
The complex decomposition logic is targeted at handling cases like view and
dtype cast, which I added in the e2e tests.
2023-11-17 00:51:55 +08:00
James Newling e81282ae8f
Support for prims collapse op (lowering to linalg) (#2572)
Steps taken:
1) add generator code to torch_ods_gen.py, run update_torch_ods.sh
2) add (custom) shape and type inference generator code to
abstract_interp_lib_gen.py, run update_abstract_interp_lib.sh
3) Implement lowering to tensor.collapse_dims. Requires the `start` and
`end` values to be constant, else lowering fails
4) Update xfail_sets.py (append to LTC_XFAIL_SET) after running
/tools/e2e_test.sh --filter Collapse --verbose -c XX for all supported
backends (XX).

Motivation: 
- Supporting the collapse operation will be useful for lowering of
pixel_shuffle (see Issue #2559)
2023-11-15 08:34:38 -08:00
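A minimal eager sketch of the op's semantics, assuming start and end are inclusive dimension indices (as the e2e tests here exercise):
```
import torch

x = torch.randn(2, 3, 4)
y = torch.ops.prims.collapse(x, 1, 2)  # merge dims 1..2 into one
assert y.shape == (2, 12)
```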
Yuanqiang Liu 60effcee89
[Dtype Function] fix aten.div.Tensor_mode's dtype function (#2555) 2023-11-09 09:46:53 +08:00
James Newling b6e551c7b8
Decomposition of aten.pixel_shuffle with static input shape (#2550)
For static tests (that is, when the shape is known), for example:

 ```
 @annotate_args([None, ([3, 18, 2, 2], torch.float32, True)])
 ```
 
The e2e test passes, but only if the replacement op's return type is set as
undefined (the optional shape and type must be explicitly made unset);
otherwise there's an error about the function return type.
 
 For dynamic cases, for example if the above is replaced with 
 
  ```
 @annotate_args([None, ([-1, -1, -1, -1], torch.float32, True)])
 ```

There is a failure to lower to linalg from torch ("view op explicitly
labelled as illegal"). This seems to be because the support for lowering
from torch to linalg with dynamic shapes is limited.
2023-11-08 08:52:44 -05:00
JianzheXiao a42d4c18ff
[Torch Dialect]Support aten.cosine_similarity (#2364)
Add support for aten.cosine_similarity, including support for broadcasting
inputA/inputB to a common shape.
2023-11-08 15:28:30 +08:00
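The op computes sum(a*b) / max(||a||*||b||, eps) along a dimension; a quick eager check of that formula (the eps clamp is omitted here since random inputs have non-tiny norms):
```
import torch

a, b = torch.randn(3, 4), torch.randn(3, 4)
manual = (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1))
assert torch.allclose(
    manual, torch.nn.functional.cosine_similarity(a, b, dim=1), atol=1e-6)
```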
Jiawei Wu d5ee8ee73a
[Torch Dialect] emit aten.reshape_as op and add decomposition pattern. (#2553) 2023-11-05 11:38:36 +08:00
Yuanqiang Liu 0378da0abd
[Torch Dialect] support aten.isinf (#2544)
Also fix linalg lowering from `UEQ` to `OEQ`.  
I will check other comparison's lowering later.
2023-11-04 22:26:01 +08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass-through, but it
roughly halves the number of build actions for the project, from the
high 4000s to the low 2000s.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Zhekun(Josh) Zhang 88d4c475d3
[Torch] Fix mixP case for non value semantic ops (#2540)
Non-value-semantic ops like add_, div_, etc. expect the result dtype to be
the same as the first input. However, the current implementation would
produce the wrong result type for cases like:

```python
a = torch.randn(3, 3).half() # float16
b = torch.randn(3, 3) # float32
a += b # i.e. torch.ops.aten.add_(a, b)
```
torch expects `a` to be float16, but dtype refinement would infer
float32 type, since it's replaced by `aten.add`.
2023-11-02 12:40:08 +08:00
saienduri a2e694df40
add e2e support for torch.eye operations (aten.eye, aten.eye.m) (#2478) 2023-11-01 11:23:28 -07:00
Daniel Garvey 1d41f7b6fe
Rework AtenEmptyStridedOp checks (#2537)
Now using Values instead of ints. This trades a compile-time failure for a
runtime assert.
2023-10-31 22:56:54 -05:00
JianzheXiao e8706957c0
[Torch Dialect] Add Support for aten.unflatten.int (#2475)
Add support for aten.unflatten.int, allowing dim to be negative and one
of the sizes' elements to be -1.
2023-10-31 15:36:16 +08:00
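An eager example covering both supported cases, a negative dim and a -1 size element:
```
import torch

x = torch.randn(2, 12)
y = x.unflatten(-1, (3, -1))  # dim -1 split into (3, 4); the -1 is inferred
assert y.shape == (2, 3, 4)
```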
Yuanqiang Liu e7282487ea
[Torch Dialect] support aten.glu (#2531) 2023-10-26 10:36:18 +08:00
Ze Zhang f2c53b8ca5
Add aten.isclose support and its torch-to-tosa lowering (#2512)
Add aten.isclose op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests


To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-16 09:44:53 -07:00
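For finite inputs, aten.isclose reduces to the elementwise test |a - b| <= atol + rtol*|b| (defaults rtol=1e-5, atol=1e-8); nan/inf handling is separate:
```
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.1, 3.0001])
manual = (a - b).abs() <= 1e-8 + 1e-5 * b.abs()
assert torch.equal(manual, torch.isclose(a, b))
```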
Ze Zhang e649e06b7b
Add aten.unflatten.int support and its torch-to-tosa lowering (#2509)
Add aten.unflatten.int op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering:

`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-13 18:39:41 -07:00
Vivek Khandelwal 9293326e1e [MLIR][TORCH] Add support for bitwise_right_shift and bitwise_and.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 13:06:59 +05:30
Vivek Khandelwal 71ac62f3a8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-28.

aten.baddbmm changes done because upstream PyTorch has now added
support for fp16 gemm on CPU.
Refer: 9399e0b1ff
2023-10-02 09:48:32 +05:30
Stella Laurenzo 860be09a39
Elide dynamic broadcast checks when in strict symbolic shapes mode. (#2496)
When importing dynamic shaped programs from Dynamo, via torch.compile or
torch.export, we can assume that strict symbolic shape checks have been
done prior to generating torch IR. Among other shape checking, this
eliminates the case where an unknown dimension can be dynamically '1' in
a way that signals a broadcast.

Adds a `isAssumingStrictSymbolicShapes` utility which consults a
`torch.assume_strict_symbolic_shapes` attribute on an enclosing scope
and returns true if present.

In the linalg pipeline, many runtime checks are elided when this returns
true.
2023-09-29 16:45:48 -07:00
saienduri 4e1dd3bf10
add e2e support for torch.log10 (#2479) 2023-09-28 10:17:03 -07:00
Vivek Khandelwal 7760bda8ee build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-26.

aten._convolution.deprecated changes done because upstream PyTorch has
now added support for fp16 native convolution on CPU.
Refer: 7c9052165a

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-27 16:24:58 +05:30
Bruce Kim a520d39f84
[MLIR][TORCH] Add device "cpu" support for aten.to.dtype_layout op (#2481)
This PR adds device="cpu" support for `aten.to_dtypeLayout` op and
corresponding e2e test suit.
(refer:  PR https://github.com/llvm/torch-mlir/pull/812/)
2023-09-25 10:00:19 -04:00
Bruce Kim 40913a36c2
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (redo PR) (#2459)
This remakes PR #2457, which I accidentally merged before the review was
complete (it has since been reverted).

Add the empty_strided decomposition.
As discussed in #1776, this decomposition only supports default stride
values, because when accessing or indexing the tensor, the indices are
determined by the strides. MLIR does not support this implicitly and
instead assumes default strides while iterating over the tensor.
2023-09-13 10:04:31 -07:00
Ramiro Leal-Cavazos 106b58597a
Revert "[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (#2457)" (#2458)
This reverts commit 97bec86a8b.
2023-09-12 13:57:47 -07:00
Bruce Kim 97bec86a8b
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (#2457)
* implemented e2e test case, shape, dtype func

* AtenEmptyStrided decompose op implemented

* xfailed test module in ltc
2023-09-12 13:37:02 -07:00
Arham Khan 82456eefed
[MLIR][TORCH] add E2E support for aten.new_full (#2425)
* implement aten.new_full

* remove extraneous tests
2023-09-12 09:29:08 -05:00
Yuanqiang Liu 5895b9f8ca
fix compile warning (#2453) 2023-09-12 09:31:47 +08:00
Yuanqiang Liu 1f20b7275d
[Torch Dialect] add canonicalize for aten.min.other (#2452) 2023-09-11 17:28:22 +08:00
Jiawei Wu b411a40b3d
[Torch Dialect] emit aten.__or__Tensor Op (#2437)
* emit aten.__or__TensorOp

* bug fix

* remove convert to stablehlo

* code style refinement
2023-09-06 14:21:51 +08:00
Stella Laurenzo fcb3b718a5 Properly guard clang-specific pragma.
Avoids unsupported pragma warning on GCC.
2023-09-06 00:43:50 -07:00
Jerin Philip 9cb5d38cd1
[MLIR][TORCH] Add E2E `torch.aten.prod_dim_int` (#2423)
Uses the existing reduction codepath, adding the modifications and branches
required for prod.
2023-09-05 13:38:51 -07:00
Vivek Khandelwal 3841fe3035 [MLIR][TORCH] Add StableHLO lowering for embedding_bag.padding_idx op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-05 21:32:23 +05:30
Jiawei Wu d62045f64d
emit aten.max.other op (#2436) 2023-09-05 10:52:32 +08:00
Yuanqiang Liu e9ab8ceb1c
[Torch Dialect] support aten.split_with_sizes (#2431)
* [Torch Dialect] support aten.split_with_sizes

* update
2023-09-04 09:59:26 +08:00
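In eager PyTorch, this op is what torch.split exercises when given a list of sizes:
```
import torch

x = torch.arange(10)
a, b, c = torch.split(x, [3, 3, 4])  # dispatches to aten.split_with_sizes
assert a.tolist() == [0, 1, 2] and c.numel() == 4
```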
Bruce Kim cd1c7df8be
[MLIR][TORCH] Add E2E support for view_as_real op (#2419)
* view_as_real test case, allow dtype in testutils.randn

* abstract python upstream func implemented

* fixed upstream dtype func, implemented view_as_real backend op

* formatted AtenViewAsRealOp, removed change in e2etest/framework

* removed test suite from reshape_like.py, because it's moved to basic.py

* implemented C-API wrapper for mlirComplexF128 type

* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict

* Changed IR input of aten fft_fft unit test

* code refactored

* code refactored and fixed ci test

* refactored: removed white spaces, and rolled back to having both input/output affine expr

* refactored: deleted output affine expr to reduce redundancy

* xfail ltc backend

* removed ComplexImag and ComplexReal from torchdynamo xfail set

* copied and pasted from main branch as there's no change to be made in this file

* refactored abstract_interp_lib_gen.py

* refactored: torchtypes.td, formatted, removed commented out code
2023-09-01 21:12:01 -07:00
Arham Khan 34a0897e1b
[MLIR][TORCH] add E2E support for aten.rand (#2424)
* impl decomposition for aten.rand

* remove stablehlo conversion for aten.rand
2023-09-01 13:13:58 -05:00
Vivek Khandelwal 729386c9d8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-01.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-01 22:07:51 +05:30
Vivek Khandelwal 5c43daa3bf [MLIR][TORCH] Add e2e support for aten.pow.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 21:43:24 +05:30
Arham Khan c42d2beb6e
[MLIR][TORCH] add E2E support for aten.min op (#2422)
* impl aten.min op

* remove extraneous test
2023-08-29 12:12:41 -05:00
Arham Khan bc6bba9077 add nondefault test case, add to illegal ops in backend contract 2023-08-28 10:52:16 +05:30
Arham Khan 8855fa3ace amend dtype function 2023-08-28 10:52:16 +05:30
Arham Khan 610d836fd2 impl aten.elu as decomposition 2023-08-28 10:52:16 +05:30
Arham Khan 12eadccc07 add e2e support for aten.elu 2023-08-28 10:52:16 +05:30
Jiawei Wu 4339c00f1b
[Torch Dialect][stablehlo] emit aten.rand op and add converter to stablehlo (#2413)
* [Torch Dialect] emit aten.rand op and add converter to stablehlo

* add failed tests for torchdynamo backend

* add failed test for linalg backend
2023-08-27 21:56:36 +08:00
Jiawei Wu b552d4ed95
[Torch Dialect] Fix small bugs in decompose-complex-ops pass, e.g. missing return sentence (#2409) 2023-08-22 09:56:11 +08:00
Jiawei Wu 60bad54f27
[Torch Dialect] replace none-index in aten.Index.Tensor's param by manually generating it (#2344)
* [Torch Dialect] replace none-index in aten.Index.Tensor's  param by manually generating it
Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
Co-authored-by: Jianzhe Xiao <jianzhe.xiao@bytedance.com>

* minor typo fix

* add new failed e2e tests for ltc

* fix typo

* Address comments

* Add more e2e tests

* add failed e2e tests for LTC

* address comments

* remove decomposition for AtenIndexTensorHackedTwinOp
2023-08-15 19:36:08 +08:00
Vivek Khandelwal e61ef1ee54 [MLIR][TORCH] Add support for aten._unsafe_index_put.hacked_twin op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-11 08:57:01 +05:30
Vivek Khandelwal f0a8f273f7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-08-10.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-10 21:59:20 +05:30
Vivek Khandelwal ee6c87ef5b [MLIR][TORCH] Add support for dtype arg for softmax.int op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-08 21:54:47 +05:30
JianzheXiao 38b049eb1a
[Torch Dialect] add support for adaptive_avgpool_1d (#2342)
* [MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.

update PyTorch version to 2.1.0.dev20230729 (#2354)

- torch version: 2.1.0.dev20230729
 - torch commit hash: b638df0afb83572724032c824c64e481bb4499a0
 - torchvision version: 0.16.0.dev20230729

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

update PyTorch version to 2.1.0.dev20230730 (#2356)

- torch version: 2.1.0.dev20230730
 - torch commit hash: 0ff243ff350268cc98fe03fa6364375ee2824742
 - torchvision version: 0.16.0.dev20230730

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

update PyTorch version to 2.1.0.dev20230731 (#2359)

- torch version: 2.1.0.dev20230731
 - torch commit hash: 6298ac688f8caafe30d71ff2ea2e20fbb32065c7
 - torchvision version: 0.16.0.dev20230731

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

LTC->MLIR Debug Info support (#1922)

* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops just considers
the filename, line number and column number. Attaching operation name
would help identify the type of computation by just looking at the
profile of execution.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>

build: update llvm tag to 41895843

Summary of changes:
- Update tags
  llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
  mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

update PyTorch version to 2.1.0.dev20230802 (#2366)

- torch version: 2.1.0.dev20230802
 - torch commit hash: c89b16917755c2abbef7b6420e340baf9ae8089e
 - torchvision version: 0.16.0.dev20230802

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

Change Python version from 3.10 to 3.11 in installation instructions (#2370)

Add CITATION file (#2371)

Add packaging as an install dependency (#2369)

Needed by `torch_mlir._version`. Resolves #2368.

[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)

* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op

update PyTorch version to 2.1.0.dev20230803 (#2372)

- torch version: 2.1.0.dev20230803
 - torch commit hash: f89c73be3a3e8274d025ac46a33a780853841c9e
 - torchvision version: 0.16.0.dev20230803

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

Prevent failed stable CI job from cancelling nightly jobs (#2373)

The CI jobs that use stable PyTorch are currently not required to pass
in order for a patch to get merged in `main`. This commit makes sure
that if a CI job for stable PyTorch fails, it does not cancel the
other required jobs.

[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355)

update

update xfail sets

update xfail_sets

update

fix xfail_sets

update:

update

update:

update

add support for adaptive_avgpool_1d

* update

---------

Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-08-05 07:48:09 +08:00
Jiawei Wu 20a2b68ed6
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355) 2023-08-04 09:05:34 +08:00
Jiawei Wu 6db92d1b14
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)
* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op
2023-08-03 16:21:14 +08:00
JianzheXiao 31ef08b63d
[Stablehlo]Add support for AvgPool1dOp (#2268)
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op (#2338)
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Alexandre Rames a20422ce65 Support `DerefineOp` in `RefinePublicReturn`. 2023-07-20 20:08:46 +02:00
Alexandre Rames 4847563bed Clean up verification of calling conventions.
The implementation here was a remnant from the time when the pipeline was
run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
2023-07-20 20:08:46 +02:00
Jiawei Wu 9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op (#2297) 2023-07-20 16:46:44 +08:00
Matthias Gehre 64d7626a52
Fixes for split tensor and slice (#2314)
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
Jiawei Wu 3f843c8fd9
[torch-dialect] fix aten.type_as op's folder (#2283)
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
Matthias Gehre 0c17997000
Don't crash when the input to aten.copy is unranked (#2307)
This can happen when the input comes from an unsupported operator.
2023-07-18 09:52:33 +02:00
Ramiro Leal-Cavazos 718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library (#2309)
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
Yuanqiang Liu 7f6b72aec8
[Torch Dialect] add runtime.assert to check constraint when recomposing complex ops (#2281) 2023-07-14 10:13:19 +08:00
Matthias Gehre c23a61f4b6
DecomposeComplexOps: Use static shape if available (#2289) 2023-07-12 10:07:30 +02:00
Sean Silva 8c87057f50
update PyTorch version to 2.1.0.dev20230704 (#2282)
- torch version: 2.1.0.dev20230704
 - torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
 - torchvision version: 0.16.0.dev20230704

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-04 08:23:00 -07:00
Chi_Liu ddd0c06970
[TORCH] Fix recompose off by -1 error (#2271) 2023-06-27 13:34:14 -07:00
Yuanqiang Liu 859885c1d3
[Torch Dialect] Support aten.native_dropout (#2259)
* [Torch Dialect] Support aten.native_dropout

* update
2023-06-27 14:19:33 +08:00
Sean Silva fbb5ed52cf
update PyTorch version to 2.1.0.dev20230623 (#2260)
- torch version: 2.1.0.dev20230623
 - torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d
 - torchvision version: 0.16.0.dev20230623

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-23 09:03:50 -07:00
Yuanqiang Liu 64afc08dab
[Torch Dialect] add missing one_hot dtype function (#2143)
* [Torch Dialect] add missing one_hot dtype function

* update

* update

* update
2023-06-23 16:11:33 +08:00
Yuanqiang Liu 39201a4be5
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'… (#2256)
* [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number

* update
2023-06-23 16:02:45 +08:00
Yuanqiang Liu 4fd4477e15
[Torch Dialect] require hasSizes when decompose aten.amax (#2248) 2023-06-22 11:26:51 +08:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
JianzheXiao e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp (#2170)
* add support for mhlo

* Add Test for torch.ne

* fix torch.ne shape/add static test case

* add support for static torch.ne

---------

Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00