Commit Graph

3082 Commits (cb6a499460ef95827ced4544803a2e71e06c7973)

Author SHA1 Message Date
Quinn Dawkins 141202bc01
[TorchToLinalg] Fix integer type handling for aten.mm (#2615)
Despite aten.mm requiring that the input and output types match, we still
opt to maintain signedness semantics in case later passes try to do any
sort of integer type narrowing.
2023-12-07 00:13:53 -05:00
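A minimal eager-mode illustration of the constraint mentioned above (a sketch for intuition only; the commit changes the TorchToLinalg conversion, not eager PyTorch):

```
import torch

# aten.mm keeps a single element type across inputs and output:
# an i64 x i64 matmul produces an i64 result.
a = torch.randint(-5, 5, (2, 3), dtype=torch.int64)
b = torch.randint(-5, 5, (3, 4), dtype=torch.int64)
c = torch.mm(a, b)
assert c.dtype == torch.int64
```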
frafranz c0115706a0
Add a decomposition for torch.aten.argmin (#2613)
Adds a lowering for the torch.aten.argmin operator to linalg via decomposition into torch.aten.min.dim.

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2023-12-06 09:45:30 -05:00
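A rough eager-mode sketch of the decomposition, for intuition (the in-tree pattern operates on torch-dialect ops; the no-dim handling here is assumed):

```
import torch

def argmin_via_min_dim(x, dim=None, keepdim=False):
    # argmin is the `indices` half of min.dim; with no dim given,
    # PyTorch reduces over the flattened tensor.
    if dim is None:
        return torch.min(x.flatten(), dim=0).indices
    return torch.min(x, dim=dim, keepdim=keepdim).indices

x = torch.rand(4, 5)
assert torch.equal(argmin_via_min_dim(x, dim=1), torch.argmin(x, dim=1))
assert torch.equal(argmin_via_min_dim(x), torch.argmin(x))
```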
Frederik Harwath 6244f301fb
Regenerate GeneratedTorchOps.td after recent change to torch_ods_gen.py (#2612)
Try to fix the error reported by @qingyunqu in #2609.
2023-12-05 08:04:32 -08:00
Frederik Harwath 6248216dca
Add aten.min.dim to linalg lowering (#2600) 2023-12-05 07:16:35 -08:00
Frederik Harwath d0b49a912e
Recommend update_torch_ods.sh for re-generating GeneratedTorchOps.td (#2609)
Fix #2608
2023-12-05 05:26:05 -08:00
Vivek Khandelwal 10b5432e7d build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-12-04.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-05 13:18:47 +05:30
Quinn Dawkins 400752ca8d
[TorchToLinalg] NFC: Move Utils.h to an externally accessible location (#2603) 2023-12-01 19:38:21 -05:00
srcarroll 7d0f5cc5a8
Update out of date docs (#2602)
Some of the docs referred to old file paths that no longer exist. This
patch updates the instructions that I happened to notice were out of
date; it is not a full update.
2023-12-01 16:29:37 -06:00
Ramiro Leal-Cavazos e568f7e999
Move handling of integer signedness to the backend conversions (#2597)
The function `getTypeForScalarType` currently takes an argument to
specify the signedness of integer types. This is leakage of backend-specific
requirements into the torch dialect world. Because
`getTypeForScalarType` is a utility function for the torch dialect, it
should only produce types that match the sign conventions used by
PyTorch (regular integers are signed and unsigned integers are
unsigned).

This commit removes the signedness argument from
`getTypeForScalarType` and moves the backend-specific handling of
integer types into the backend code.
2023-11-29 09:43:09 -08:00
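A hypothetical Python sketch of the division of responsibilities described above; all names below are invented for illustration:

```
import torch

# PyTorch's own conventions: regular ints are signed, unsigned ints
# are unsigned. The torch-dialect utility reports these as-is.
PYTORCH_INT_CONVENTIONS = {
    torch.uint8: ("unsigned", 8),
    torch.int8: ("signed", 8),
    torch.int32: ("signed", 32),
    torch.int64: ("signed", 64),
}

def type_for_scalar_type(dtype):
    # Torch-dialect view: no signedness argument for callers.
    return PYTORCH_INT_CONVENTIONS[dtype]

def backend_int_type(dtype):
    # Backend view: erase signedness to a signless integer of the
    # same width during backend conversion.
    _, width = PYTORCH_INT_CONVENTIONS[dtype]
    return ("signless", width)

assert type_for_scalar_type(torch.uint8) == ("unsigned", 8)
assert backend_int_type(torch.uint8) == ("signless", 8)
```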
Sambhav Jain 44f6942796
Bump LLVM and StableHLO (#2598)
Bump LLVM to `5e5a22caf88ac1ccfa8dc5720295fdeba0ad9372` and StableHLO to
`83f095e7217c897f1eccac5652600ceb944cb0e0`.

Bazel GHA:
https://github.com/sjain-stanford/torch-mlir/actions/runs/7027647674
2023-11-28 22:12:24 -08:00
Mi Jiazhi f7a92d346e
[Torch Dialect] Decompose AtenTriuOp (#2561)
decomposed as follows:
```
import torch

def my_triu(x, diag):
    rows = torch.ops.aten.size(x, -2)
    cols = torch.ops.aten.size(x, -1)

    row_indices = torch.ops.aten.arange(rows).unsqueeze(1)
    col_indices = torch.ops.aten.arange(cols).unsqueeze(0)

    cond = torch.ops.aten.ge(
        col_indices, torch.ops.aten.add(row_indices, diag))
    return torch.ops.aten.where(cond, x, 0)

x = torch.rand(5, 7)
assert torch.allclose(my_triu(x, 0), torch.triu(x, 0))
assert torch.allclose(my_triu(x, 1), torch.triu(x, 1))
assert torch.allclose(my_triu(x, 2), torch.triu(x, 2))
assert torch.allclose(my_triu(x, -1), torch.triu(x, -1))
```

---------

Co-authored-by: LiuYuanqiang <liuyuanqiang.yqliu@bytedance.com>
2023-11-29 10:35:26 +08:00
Sambhav Jain 49fdc1a8a6
Add bazel targets for TorchOnnxToTorch conversion passes (#2596)
Adapts to the TorchOnnxToTorch changes from
https://github.com/llvm/torch-mlir/pull/2585.
Also restores the bazel builds in post-merge CI that were disabled in
2148c4cd0d.

Bazel workflow:
https://github.com/sjain-stanford/torch-mlir/actions/runs/7023912962
2023-11-28 13:06:35 -08:00
Vivek Khandelwal dc9ea08db5 [MLIR][ONNX] Add OnnxToTorch support for atan and bitwise ops
This commit adds the OnnxToTorch support for the Atan, BitShift, BitwiseAnd,
and BitwiseNot ops.
This commit also adds the TorchToLinalg support for AtenBitwiseLeftShiftTensorOp.

Signed-Off By: vivekkhandelwal@nod-labs.com
2023-11-28 17:19:07 +05:30
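For reference, the eager-mode equivalents of the newly covered ops (illustrative only; the commit adds MLIR conversion patterns, not these calls):

```
import torch

x = torch.tensor([1, 4, 8], dtype=torch.int32)
y = torch.tensor([1, 2, 3], dtype=torch.int32)

print(torch.atan(torch.tensor([0.0, 1.0])))  # ONNX Atan
print(torch.bitwise_left_shift(x, y))        # ONNX BitShift (LEFT)
print(torch.bitwise_and(x, y))               # ONNX BitwiseAnd
print(torch.bitwise_not(x))                  # ONNX BitwiseNot
```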
Stella Laurenzo 53fc995639 Run CI on all main/postsubmit commits.
Prior to this, the concurrency rules for presubmits (which cancel eagerly) were being applied to main. The result was that landing a second patch would cancel the CI on the one prior.
2023-11-22 18:05:18 -08:00
Stella Laurenzo 2148c4cd0d Temporarily disable bazel build until fixed. 2023-11-22 18:00:39 -08:00
Stella Laurenzo 66b73edb28 Move TORCH_MLIR_USE_INSTALLED_PYTORCH to top-level.
This was unfortunately being initialized in a directory below its first use. This caused the first configure to mis-detect the ABI flags, which in turn caused type conversion failures at runtime.

Fixes #2298 and hardens some additional messages and checks to better make it clear when something goes awry.
2023-11-22 17:56:26 -08:00
James Newling 1b7d6f2af9
Improve decomposition of pixel_shuffle (support dynamic shapes) (#2590)
The aten.reshape ops in the decomposition are replaced with prims.collapse
and prims.split_dim ops, which avoids the unsupported cases in the
torch-to-linalg lowering of reshape.

Essentially, by using the collapse and split_dim ops instead of the
reshape ops, we do not "lose" the information that the reshapes never
arbitrarily mix dimensions, which makes the lowering easy.

3 additional tests added:
- fully dynamic,
- dynamic in only the spatial dimensions,
- dynamic in only the non-spatial dimensions.
2023-11-22 12:31:06 -08:00
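A rough eager-mode analogue of the reshape-free decomposition, using unflatten/flatten as stand-ins for prims.split_dim/prims.collapse (neither step mixes unrelated dimensions):

```
import torch

def pixel_shuffle_sketch(x, r):
    b, crr, h, w = x.shape           # assume a 4-D input for the sketch
    c = crr // (r * r)
    x = x.unflatten(1, (c, r, r))    # split C*r*r -> (C, r, r)
    x = x.permute(0, 1, 4, 2, 5, 3)  # (b, c, h, r, w, r)
    return x.flatten(4, 5).flatten(2, 3)  # collapse to (b, c, h*r, w*r)

x = torch.rand(3, 18, 2, 2)
assert torch.equal(pixel_shuffle_sketch(x, 3), torch.pixel_shuffle(x, 3))
```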
Stella Laurenzo e06efc5136
Initial TorchOnnxToTorch conversion pipeline. (#2585)
Adds a pipeline to convert custom ops and metadata represented as
`torch.operator` custom ops to corresponding `torch` ops where possible.

This is part of a multi-part approach for building ONNX import in as a
regular feature of torch-mlir. It is focused on the conversions vs the
infra. We will end up maintaining a [pure-python
importer](https://github.com/nod-ai/SHARK-Turbine/blob/main/python/shark_turbine/importers/onnx_importer.py)
to go with this in torch-mlir, and we will also maintain test case
generation utilities derived from it.

I have left substantial documentation in the README of the conversion
directory, including the recommended approach that we will take to keep
building this out.

(note that this organizes the code to coincide with the refactoring in
#2442 versus the current flat arrangement)
2023-11-21 21:02:55 -08:00
Vivek Khandelwal d50d3aa5e7 [MLIR][TORCH] Add support for unsigned integer types
Refer: https://github.com/pytorch/pytorch/issues/58734
2023-11-21 21:57:26 +05:30
James Newling 03e8f99730
Lowering to linalg of prims split_dim op (#2576)
Adds support for lowering the prims split_dim op.

Similar design to collapse op lowering in 
https://github.com/llvm/torch-mlir/pull/2572, with some 
small differences, because the split_dim op (in pytorch) is
view-changing whereas the collapse is not. The difference 
means that 

1) it must be registered in the function Torch::isViewLikeOp
2) it must be added to the "expected fail" set for the torch dynamo backend.
2023-11-21 07:56:09 -08:00
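A small sketch of the op's view semantics, assuming the prim is callable through torch.ops.prims as in recent PyTorch nightlies:

```
import torch

x = torch.arange(24).reshape(4, 6)
# View dim 1 (length 6) as two dims with outer_length=2.
y = torch.ops.prims.split_dim(x, 1, 2)
assert y.shape == (4, 2, 3)
# split_dim aliases its input's storage, which is why it must be
# registered in Torch::isViewLikeOp.
assert y.data_ptr() == x.data_ptr()
```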
Zhekun(Josh) Zhang d67afa9e95
[Torch] Add fold rule for AtenMaskedFillTensorOp to AtenMaskedFillScalarOp (#2543) 2023-11-21 13:26:17 +08:00
Vivek Khandelwal b26797c20b
Disable torch-mlir-core for release build (#2586) 2023-11-20 19:36:14 -08:00
James Newling 647f2f5076
Additional tests for view lowering (#2584)
The logic for lowering the aten view op to linalg is fairly complex. 
In this PR I have tried to follow all non-failing paths through the 
lowering and add unit tests where they're missing.

There is 1 logical change to the lowering: redundant tensor.cast ops
(same source and destination type) are folded.
2023-11-20 17:35:25 -08:00
Yuanqiang Liu 7b94189e07
[E2E] add nan case in elementwise comparison e2e tests (#2575) 2023-11-20 11:27:08 +08:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
2023-11-19 12:10:19 -08:00
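A migration sketch for the rename, assuming the importer re-exports the same symbols (e.g. ModuleBuilder) from its new location:

```
# Before (pre-#2582):
#   from torch_mlir.dialects.torch.importer.jit_ir import ModuleBuilder
# After:
from torch_mlir.jit_ir_importer import ModuleBuilder
```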
Yuanqiang Liu facbe5d96b
[Torch Dialect] support AtenArangeStartOutOp in ReduceOpVariants like AtenBernoulli_FloatOp (#2563)

It fixes cases like: `%2110 = torch.aten.arange.start_out %int1,
%int1517, %int1, %2109 : !torch.int, !torch.int, !torch.int,
!torch.tensor -> !torch.tensor`.
`aten.arange.start_out` also lacks value semantics, meaning `%2110`
is an alias for `%2109`.
So I decompose it into `aten.arange.start` + `torch.overwrite.tensor.contents`.
The more involved decomposition logic targets cases like views and
dtype casts, which I added in the e2e tests.
2023-11-17 00:51:55 +08:00
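An eager-mode sketch of the aliasing behavior that motivates the decomposition (out= variants return the destination buffer itself rather than a fresh tensor):

```
import torch

buf = torch.empty(5, dtype=torch.int64)
res = torch.arange(1, 6, 1, out=buf)  # arange.start_out
assert res is buf  # no value semantics: the result aliases `buf`
```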
James Newling dad1f012f6
Add verification for torch permute op (#2551)
- adds support for an optional verifier to the generated torch op
tablegen (GeneratedTorchOps.td)
- uses the above to add a verifier for the torch permute op. 

Motivation: I hit an unclear error from linalg while developing a
decomposition pass for pixel_shuffle. The error would have been clearer
if the problem had been detected earlier in the invalid aten.permute op.

Testing: new tests added. To run added tests, from the base directory
run

```
 ./build/bin/llvm-lit  test/Dialect/Torch/invalid.mlir
 ```
2023-11-15 11:47:54 -08:00
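An eager-mode analogue of the class of mistake the verifier now rejects at the IR level:

```
import torch

x = torch.rand(2, 3, 4)
print(torch.permute(x, (0, 2, 1)).shape)  # valid permutation of 3 dims
try:
    torch.permute(x, (0, 1))  # rank mismatch: rejected up front
except RuntimeError as e:
    print("caught:", e)
```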
James Newling e81282ae8f
Support for prims collapse op (lowering to linalg) (#2572)
Steps taken:
1) add generator code to torch_ods_gen.py, run update_torch_ods.sh
2) add (custom) shape and type inference generator code to
abstract_interp_lib_gen.py, run update_abstract_interp_lib.sh
3) Implement lowering to tensor.collapse_shape. Requires the `start` and
`end` values to be constant, else lowering fails
4) Update xfail_sets.py (append to LTC_XFAIL_SET) after running
/tools/e2e_test.sh --filter Collapse --verbose -c XX for all supported
backends (XX).

Motivation: 
- Supporting the collapse operation will be useful for lowering of
pixel_shuffle (see Issue #2559)
2023-11-15 08:34:38 -08:00
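A behavioral sketch for intuition: collapsing a contiguous dimension range computes the same shape as torch.flatten over that range (the in-tree lowering targets the prims op, not flatten):

```
import torch

x = torch.rand(2, 3, 4, 5)
y = torch.flatten(x, 1, 2)  # merge dims 1..2 into one
assert y.shape == (2, 12, 5)
```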
Stella Laurenzo 6be9789f9f
update PyTorch version to 2.2.0.dev20231115 (#2577)
torch version: 2.2.0.dev20231115
torch commit hash: a5a404865c01f86881f6b3ab0cd9a562d0b420de
torchvision version: 0.17.0.dev20231115

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-15 06:38:54 -08:00
Stella Laurenzo d734f7890c
update PyTorch version to 2.2.0.dev20231114 (#2574)
torch version: 2.2.0.dev20231114
torch commit hash: ec2f8fd2f1ac81996641848d9c7b904fddbbf9cf
torchvision version: 0.17.0.dev20231114

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-14 06:23:05 -08:00
Shehroze Khan dde66e66b0
add bool scalar type to int implicit cast (#2571)
[LTC] Add bool scalar type to int implicit cast
2023-11-14 08:56:12 -05:00
Stella Laurenzo c61f0bd5bb
update PyTorch version to 2.2.0.dev20231113 (#2570)
torch version: 2.2.0.dev20231113
torch commit hash: a45a8bf9e7e1530692f2703f8da430bc2825af7c
torchvision version: 0.17.0.dev20231113

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-13 07:19:30 -08:00
lzw dd759a24f2
Update readme to fit new project structure (#2548)
Co-authored-by: lanzongwei.lan <lanzongwei.lan@bytedance.com>
2023-11-12 21:19:18 -08:00
Stella Laurenzo 1a064cdf1a
update PyTorch version to 2.2.0.dev20231112 (#2569)
torch version: 2.2.0.dev20231112
torch commit hash: 63a5a14da9ef3ebd68ce0cebea4aa84e030a2cf8
torchvision version: 0.17.0.dev20231112

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-12 07:32:55 -08:00
Stella Laurenzo 2a99402796
update PyTorch version to 2.2.0.dev20231111 (#2568)
torch version: 2.2.0.dev20231111
torch commit hash: f40306d6c4b2613c18525a274d98feeda0036473
torchvision version: 0.17.0.dev20231111

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-11 07:00:47 -08:00
Yuanqiang Liu 3ab790c50a
[Torch Dialect] add canonicalize for aten.numel (#2562) 2023-11-11 12:16:53 +08:00
Stella Laurenzo b20daf5710
update PyTorch version to 2.2.0.dev20231110 (#2566)
torch version: 2.2.0.dev20231110
torch commit hash: edbf22fa03bafd1d2849ba41db75a2bea172dbcc
torchvision version: 0.17.0.dev20231110

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-10 06:23:58 -08:00
James Newling 98ee7fe548 Update E2E links 2023-11-09 13:55:37 -06:00
Ramiro Leal-Cavazos d082310bd8 Move Wiki to `docs/`
Currently the docs are split into two places, the `docs/` directory
and the Github Wiki of Torch-MLIR. This commit moves the wiki docs to
`docs/` to consolidate everything into one place. This has the added
benefit that users will get all the documentation when they clone the
repository.

Note: there are 4 files in the wiki, but only one is truly needed
- Torch-ops-E2E-implementation.md: only file needed
- Coding-Style.md: the contents of this file are already in
Torch-ops-E2E-implementation.md
- Weekly-LLVM-Update.md: this is outdated. We no longer have a weekly
schedule for LLVM updates
- Home.md: Contains links to talks and resources that are already
present in the documentation in `docs/` or in
Torch-ops-E2E-implementation.md

Co-authored-by: Yi Zhang <cathyzhyi@google.com>
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
Co-authored-by: Sean Silva <silvasean@google.com>
Co-authored-by: Daniel Ellis <1346302+dellis23@users.noreply.github.com>
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-11-09 13:55:37 -06:00
Stella Laurenzo a7b5dfb389
update PyTorch version to 2.2.0.dev20231109 (#2564)
torch version: 2.2.0.dev20231109
torch commit hash: 2c3ba6926e38dba05bda34f0af9c092a40cff5b7
torchvision version: 0.17.0.dev20231109

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-09 06:21:48 -08:00
Yuanqiang Liu 60effcee89
[Dtype Function] fix aten.div.Tensor_mode's dtype function (#2555) 2023-11-09 09:46:53 +08:00
saienduri ad18219820
Fix for unused variable failure when trying to bump torch-mlir in IREE (#2560)
Due to `blob` being an unused variable, we were not able to bump torch-mlir
in IREE. With this PR, we remove this unused variable.
2023-11-08 15:55:41 -08:00
Stella Laurenzo f3bfa81857
update PyTorch version to 2.2.0.dev20231106 (#2556)
torch version: 2.2.0.dev20231106
torch commit hash: a04dd794ad694baeb257c12329c3166c6a44ae50
torchvision version: 0.17.0.dev20231106

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-08 07:20:36 -08:00
James Newling b6e551c7b8
Decomposition of aten.pixel_shuffle with static input shape (#2550)
For static tests (that is, when the shape is known), for example:

```
@annotate_args([None, ([3, 18, 2, 2], torch.float32, True)])
```

the e2e test passes, but only if the replacement op's return type is set as
undefined (the optional shape and type must be explicitly made unset);
otherwise there is an error about the function return type.

For dynamic cases, for example if the above is replaced with

```
@annotate_args([None, ([-1, -1, -1, -1], torch.float32, True)])
```

there is a failure to lower from torch to linalg ("view op explicitly
labelled as illegal"). This seems to be because support for lowering
from torch to linalg with dynamic shapes is limited.
2023-11-08 08:52:44 -05:00
JianzheXiao a42d4c18ff
[Torch Dialect]Support aten.cosine_similarity (#2364)
As titled: add support for aten.cosine_similarity, including broadcasting
inputA/inputB to the same shape.
2023-11-08 15:28:30 +08:00
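A rough eager-mode sketch of one possible decomposition (the in-tree pattern works on torch-dialect ops and may handle eps differently):

```
import torch

def cosine_similarity_sketch(a, b, dim=1, eps=1e-8):
    # Broadcast to a common shape, then dot / (|a| * |b|).
    a, b = torch.broadcast_tensors(a, b)
    dot = (a * b).sum(dim)
    denom = (a.norm(2, dim) * b.norm(2, dim)).clamp_min(eps)
    return dot / denom

a, b = torch.rand(3, 5), torch.rand(5)
expected = torch.nn.functional.cosine_similarity(a, b, dim=1)
assert torch.allclose(cosine_similarity_sketch(a, b), expected)
```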
James Newling 026cb314da
Specify path of e2e_test.sh after directory change (#2557)
Is there a way to disable some of the CI for docs-only PRs?
2023-11-07 16:07:02 -08:00
Stella Laurenzo 4b9db995b5
update PyTorch version to 2.2.0.dev20231105 (#2554)
torch version: 2.2.0.dev20231105
torch commit hash: 2d7dd2e800dfd6332656074bfa208e4b25cfe907
torchvision version: 0.17.0.dev20231105

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-05 06:57:01 -08:00
Jiawei Wu d5ee8ee73a
[Torch Dialect] emit aten.reshape_as op and add decomposition pattern. (#2553) 2023-11-05 11:38:36 +08:00
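The decomposition amounts to a size query plus a reshape; a minimal eager-mode sketch:

```
import torch

x, other = torch.rand(6), torch.rand(2, 3)
assert torch.equal(x.reshape_as(other), x.reshape(other.shape))
```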
Stella Laurenzo 71ca529a62
update PyTorch version to 2.2.0.dev20231104 (#2552)
torch version: 2.2.0.dev20231104
torch commit hash: a89fef71845d0dbc2c4c4a4c7878f51f4968ab90
torchvision version: 0.17.0.dev20231104

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-11-04 09:01:26 -07:00
Yuanqiang Liu 0378da0abd
[Torch Dialect] support aten.isinf (#2544)
Also fix the linalg lowering from `UEQ` to `OEQ`.
I will check the other comparisons' lowerings later.
2023-11-04 22:26:01 +08:00
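An eager-mode sketch of why the ordered predicate matters, assuming isinf is computed as |x| == +inf: ordered equality (OEQ) makes NaN compare false, while unordered equality (UEQ) would wrongly classify NaN as infinite:

```
import math
import torch

x = torch.tensor([1.0, float("inf"), float("-inf"), float("nan")])
sketch = x.abs() == math.inf  # ordered equality: NaN == inf is False
assert torch.equal(sketch, torch.isinf(x))
```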