Commit Graph

2413 Commits (0860c41ee2a0bdec41f544f19eba170cf646c3ce)
 

Author SHA1 Message Date
Stella Laurenzo 6d74e8cccd
update PyTorch version to 2.2.0.dev20231023 (#2528)
torch version: 2.2.0.dev20231023
torch commit hash: 88eb6bbb1ab58d7cdb49349b64c02a04911be8f2
torchvision version: 0.17.0.dev20231023

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-23 09:32:21 -07:00
Stella Laurenzo 0f781ab4bf
update PyTorch version to 2.2.0.dev20231022 (#2526)
torch version: 2.2.0.dev20231022
torch commit hash: f468e74875f6b7f95b7b01ccf3b05c3917e2865d
torchvision version: 0.17.0.dev20231022

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-22 08:01:12 -07:00
Sarthak Gupta 7633619ed2
[torch] Implement stronger verifiers for non-value semantic ops (#2519)
Attempt to solve https://github.com/llvm/torch-mlir/issues/2490

Changes for non-value-semantic ops having the
`IsTrailingUnderscoreInplaceVariant` trait:
- AnyTorchTensorType -> Torch_NonValueTensorType
- AnyTorchOptionalTensorType -> AnyTorchOptionalNonValueTensorType
- AnyTorchListOfOptionalTensorType ->
AnyTorchListOfOptionalNonValueTensorType
- AnyTorchListOfTensorType -> AnyTorchListOfNonValueTensorType

Created three new tensor types for optional and list non-value tensors.
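
For orientation, a minimal PyTorch-level sketch (illustrative only, not part of the patch) of the trailing-underscore, in-place op variants these stronger verifiers target:

```python
import torch

# Trailing-underscore variants mutate their receiver in place, so at the torch
# dialect level they must take non-value (mutable) tensor types rather than
# immutable value tensors.
x = torch.ones(3)
y = torch.full((3,), 2.0)

x.add_(y)            # aten::add_ mutates x
print(x)             # tensor([3., 3., 3.])

z = torch.add(x, y)  # functional variant: returns a new tensor, x is untouched
print(z)             # tensor([5., 5., 5.])
```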
2023-10-21 09:09:55 -07:00
Stella Laurenzo 0acbb264d4
update PyTorch version to 2.2.0.dev20231021 (#2525)
torch version: 2.2.0.dev20231021
torch commit hash: 147ac6b312c4c71e89013be592dc519c81fcac4e
torchvision version: 0.17.0.dev20231021

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-21 08:57:42 -07:00
Stella Laurenzo caa533cc5f
update PyTorch version to 2.2.0.dev20231020 (#2522)
torch version: 2.2.0.dev20231020
torch commit hash: 6ffe31abcae7d580c451cea195bd52258c72ac81
torchvision version: 0.17.0.dev20231020

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-20 09:01:22 -07:00
Vivek Khandelwal 5bc2009332
build: manually update PyTorch version (#2521) 2023-10-19 07:03:00 -07:00
Sambhav Jain 52abae1526
Bump LLVM to get bazel fixes (#2517)
The last llvm bump in https://github.com/llvm/torch-mlir/pull/2511
pointed to
b44b3494f6,
however the bazel build upstream was not clean at this point:

```
ERROR: /root/.cache/bazel/_bazel_root/b89349c08f7224396763d14fe35cba11/external/llvm-project/mlir/BUILD.bazel:5837:18: TdGenerate
external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOpsInterface.h.inc failed: (Exit 1): mlir-tblgen failed: error executing command ...

external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Could not find include file 'mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td'
include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
        ^
external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Unexpected token at top level
include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
        ^
```

The bazel fixes followed in a subsequent commit at
28b27c1b10.
This PR bumps LLVM by a few more commits (to include the bazel fixes),
which restores Torch-MLIR's bazel build to 🟢.

GHA workflow to test bazel build:
https://github.com/sjain-stanford/torch-mlir/actions/runs/6555101471/job/17803082508
2023-10-17 22:00:26 -07:00
Ze Zhang 4279b750da
update AtenClampOp in torch-to-tosa to handle fp inputs (#2516)
As titled.
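
A tiny PyTorch-level illustration (assumed example, not from the patch) of the floating-point clamp case this lowering now handles:

```python
import torch

# aten.clamp on a floating-point input with float min/max bounds.
x = torch.tensor([-1.5, 0.25, 3.75])
print(torch.clamp(x, min=0.0, max=1.0))  # tensor([0.0000, 0.2500, 1.0000])
```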

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-17 14:49:47 -07:00
Chi_Liu 14a4da923b
Update llvm-project to b44b3494f60296db6aca38a14cab061d9b747a0a (#2511)
The main purpose is to bring in the new mesh dialect change.
https://github.com/llvm/llvm-project/pull/68007
2023-10-16 19:29:48 -07:00
Ze Zhang f2c53b8ca5
Add aten.isclose support and its torch-to-tosa lowering (#2512)
- Add aten.isclose op
- Add its torch-to-tosa lowering
- Update the TorchToTosa/basic.mlir tests


To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`
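
For reference, a short PyTorch-level sketch (illustrative, not from the patch) of the `aten.isclose` semantics being lowered; elementwise, the check is roughly |a - b| <= atol + rtol * |b|:

```python
import torch

a = torch.tensor([1.0, 2.0, float("nan")])
b = torch.tensor([1.0 + 1e-9, 2.1, float("nan")])

# Default tolerances: rtol=1e-05, atol=1e-08; NaNs compare unequal unless
# equal_nan=True is passed.
print(torch.isclose(a, b))                  # tensor([ True, False, False])
print(torch.isclose(a, b, equal_nan=True))  # tensor([ True, False,  True])
```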

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-16 09:44:53 -07:00
Ze Zhang e649e06b7b
Add aten.unflatten.int support and its torch-to-tosa lowering (#2509)
- Add aten.unflatten.int op
- Add its torch-to-tosa lowering
- Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering:

`python -m e2e_testing.main -v -c=tosa`
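
A brief PyTorch-level sketch (illustrative only) of `aten.unflatten.int`, which splits one dimension into several:

```python
import torch

x = torch.arange(12).reshape(3, 4)
# Split dim 1 (size 4) into (2, 2); the result has shape (3, 2, 2).
y = x.unflatten(1, (2, 2))
print(y.shape)  # torch.Size([3, 2, 2])
```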

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-13 18:39:41 -07:00
Ramiro Leal-Cavazos 9b5a4afadd
Update README to include new meeting schedule (#2503) 2023-10-10 09:54:54 -07:00
Stella Laurenzo 26ea13ddf5
update PyTorch version to 2.2.0.dev20231006 (#2507)
torch version: 2.2.0.dev20231006
torch commit hash: 20217d1426d99d0caa70e1473d89e0c834b7f35e
torchvision version: 0.17.0.dev20231006

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-06 07:27:45 -07:00
Quinn Dawkins 6f81ad7293
[TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (#2505)
With strict symbolic shapes, we can assume numpy-style dynamic
broadcasts never occur. This improves the lowering in the presence of
this assumption.
2023-10-05 15:15:26 -04:00
Stella Laurenzo 42b6c0a14a
update PyTorch version to 2.2.0.dev20231005 (#2506)
torch version: 2.2.0.dev20231005
torch commit hash: 439cba92777ff61b49d24096edfaf128fbd742ea
torchvision version: 0.17.0.dev20231005

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-05 09:45:53 -07:00
Quinn Dawkins ae72eec224
Improve aten.broadcast_to folder when in strict symbol mode (#2504)
Strict symbolic shapes allow us to assume numpy-style dynamic broadcasts
never occur. This allows us to strengthen the folder for broadcasts to
cases where the rank is the same and all shapes match (including dynamic
sentinel values).
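
A hedged PyTorch-level illustration of the case the strengthened folder targets: when the target shape has the same rank and every dimension matches the input, the broadcast is a no-op and can be folded away:

```python
import torch

def f(x):
    # Same rank, all dims equal to the input's -> the broadcast is the identity
    # and the folder can drop it.
    return torch.broadcast_to(x, x.shape) + 1.0

x = torch.randn(4, 8)
print(torch.equal(f(x), x + 1.0))  # True
```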
2023-10-05 09:02:10 -04:00
Stella Laurenzo 14e6da8588
update PyTorch version to 2.2.0.dev20231004 (#2502)
torch version: 2.2.0.dev20231004
torch commit hash: 56af607c0437ed7321da4b96a4dbccdbd8b5a98b
torchvision version: 0.17.0.dev20231004

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-04 07:55:21 -07:00
Ramiro Leal-Cavazos 2e5d65064c [linalg] Add handling for leading and trailing size-1 dims in ViewOp
This commit adds to the lowering of `aten.view` handling for the
following cases (see the sketch after the list):

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
- `(a.size(i), ...)` -> `(1, ..., 1, a.size(i), ...)`
- `(1, ..., 1, a.size(i), ...)` -> `(a.size(i), ...)`
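
A PyTorch-level sketch (illustrative only) of the leading/trailing size-1 cases listed above:

```python
import torch

a = torch.randn(5, 7)

# (..., a.size(i)) -> (..., a.size(i), 1, 1): append trailing size-1 dims.
print(a.view(5, 7, 1, 1).shape)  # torch.Size([5, 7, 1, 1])

# (1, ..., 1, a.size(i), ...) -> (a.size(i), ...): drop leading size-1 dims.
b = torch.randn(1, 1, 5, 7)
print(b.view(5, 7).shape)        # torch.Size([5, 7])
```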
2023-10-03 23:04:52 +00:00
Ramiro Leal-Cavazos 1c508af0ba Revert "[linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474)"
This reverts commit 7c6b9d2445.
2023-10-03 23:04:52 +00:00
Stella Laurenzo 4892ed433f
update PyTorch version to 2.2.0.dev20231003 (#2500)
torch version: 2.2.0.dev20231003
torch commit hash: 4e30fa82315208dcd38fa16a0ed9851fa8e98bc9
torchvision version: 0.17.0.dev20231003

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-03 10:02:55 -07:00
Vivek Khandelwal ca6ce8974f [MLIR][TORCH] Add support for int8 dtype for sub, add, and bitwise_and op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-03 22:12:31 +05:30
Jae Hoon (Antonio) Kim 32d9b20bde
Add linspace/cumprod/roll ops (#2498)
Add linspace/cumprod/roll ops to ODS and add shape inference functions
to make them work with LTC.

Also, add some tensor utils to LTC library for searching for non-detach
copy nodes.
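
For reference, a quick PyTorch-level illustration (not from the patch) of the three ops being added:

```python
import torch

print(torch.linspace(0.0, 1.0, steps=5))                    # 5 evenly spaced values in [0, 1]
print(torch.cumprod(torch.tensor([1.0, 2.0, 3.0]), dim=0))  # tensor([1., 2., 6.])
print(torch.roll(torch.arange(4), shifts=1, dims=0))        # tensor([3, 0, 1, 2])
```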
2023-10-03 11:01:07 -04:00
Vivek Khandelwal d10a86f51c Disable LTC for arm release
Also, revert https://github.com/llvm/torch-mlir/pull/2488.
Disabling LTC based on the discussion here:
https://discord.com/channels/636084430946959380/742573221882364009/1156272667813494824
2023-10-02 22:22:07 +05:30
Stella Laurenzo b75c208f4e
update PyTorch version to 2.2.0.dev20231002 (#2497)
torch version: 2.2.0.dev20231002
torch commit hash: 4dae8b49630d2784f6a5d8726db30923e2d1e077
torchvision version: 0.17.0.dev20231002

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-10-02 08:02:15 -07:00
Vivek Khandelwal 9293326e1e [MLIR][TORCH] Add support for bitwise_right_shift and bitwise_and.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 13:06:59 +05:30
Vivek Khandelwal c434736ee9 [MLIR][TORCH] Add support for conversion to int8 dtype
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 09:48:46 +05:30
Vivek Khandelwal 71ac62f3a8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-28.

The aten.baddbmm changes were made because upstream PyTorch has now added
support for fp16 gemm on CPU.
Refer: 9399e0b1ff
2023-10-02 09:48:32 +05:30
Stella Laurenzo 860be09a39
Elide dynamic broadcast checks when in strict symbolic shapes mode. (#2496)
When importing dynamic shaped programs from Dynamo, via torch.compile or
torch.export, we can assume that strict symbolic shape checks have been
done prior to generating torch IR. Among other shape checking, this
eliminates the case where an unknown dimension can be dynamically '1' in
a way that signals a broadcast.

Adds a `isAssumingStrictSymbolicShapes` utility which consults a
`torch.assume_strict_symbolic_shapes` attribute on an enclosing scope
and returns true if present.

In the linalg pipeline, many runtime checks are elided when this returns
true.
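
A hedged sketch of the import path described above: a dynamically shaped program compiled via torch.compile (the `dynamic=True` flag here is illustrative, not prescribed by this patch). Because Dynamo has already guarded the symbolic shapes, the generated torch IR can be lowered without the runtime broadcast checks that would otherwise guard against a dimension dynamically being 1:

```python
import torch

def add(a, b):
    # Under strict symbolic shapes the dynamic dims of a and b are assumed
    # equal, so the linalg lowering need not branch on "is this dim 1?".
    return a + b

compiled = torch.compile(add, dynamic=True)
print(compiled(torch.randn(4, 8), torch.randn(4, 8)).shape)  # torch.Size([4, 8])
```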
2023-09-29 16:45:48 -07:00
saienduri 4e1dd3bf10
add e2e support for torch.log10 (#2479) 2023-09-28 10:17:03 -07:00
Vivek Khandelwal 8abfa5b196
Use PyTorch nightly for Arm release build (#2488)
The LTC backend has drifted from being able to pass tests on the stable
PyTorch version, so we are pinning to the nightly build on ARM.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-27 09:40:32 -07:00
Ramiro Leal-Cavazos 7c6b9d2445
[linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474)
This commit adds to the lowering of `aten.view` handling for the
following cases:

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`

Fixes: https://github.com/llvm/torch-mlir/issues/2448
2023-09-27 09:09:30 -07:00
Stella Laurenzo e69266a936
update PyTorch version to 2.2.0.dev20230927 (#2489)
torch version: 2.2.0.dev20230927
torch commit hash: d7520d8668dc08f7bed27a64f006c909006e653a
torchvision version: 0.17.0.dev20230927

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-09-27 08:45:35 -07:00
Vivek Khandelwal 7760bda8ee build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-26.

The aten._convolution.deprecated changes were made because upstream PyTorch
has now added support for fp16 native convolution on CPU.
Refer: 7c9052165a

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-27 16:24:58 +05:30
Daniel Garvey ff7f8b21dc
update llvm-project to d13da154a7c7eff77df8686b2de1cfdfa7cc7029 (#2483) 2023-09-26 16:15:55 -05:00
Ramiro Leal-Cavazos c9fd78988e
[NFC] Clean-up `ConvertAtenViewOp` in linalg backend (#2470)
While trying to fix a bug in the `ConvertAtenViewOp` pattern in the
linalg backend, I realized that the pattern had become quite complex and
had accumulated some dead code, making it hard to reason about.

This commit simplifies the pattern quite a bit. The main changes are:
1. All the static helper functions in the `ConvertAtenViewOp` class have
been simplified, both in their signature and their body. Each one now
performs simple calculations on arrays, and take the least number of
arguments necessary.
2. The body of [the `while`
loop](9fce566b0c/lib/Conversion/TorchToLinalg/DataMovement.cpp (L407))
inside the main pattern has been changed to work on `MutableArrayRef`
slices, to avoid having to keep track of `start` and `end` indices for
the input and output shape arrays.
3. All the heuristics used to determine the mapping between the input
and output dimensions are now in [this relatively short `if-else`
section](9fce566b0c/lib/Conversion/TorchToLinalg/DataMovement.cpp (L428-L460)),
making it easy to see what is going on.
4. Dead code was eliminated, and some of the documentation comments were
updated.

This commit does not add any new functionality to the
`ConvertAtenViewOp` pattern.
2023-09-26 09:20:01 -07:00
Bruce Kim a520d39f84
[MLIR][TORCH] Add device "cpu" support for aten.to.dtype_layout op (#2481)
This PR adds device="cpu" support for the `aten.to.dtype_layout` op and a
corresponding e2e test suite.
(refer: PR https://github.com/llvm/torch-mlir/pull/812/)
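
An illustrative call (assumed form, not taken from the PR) that exercises this overload; with dtype and device given as keywords, `Tensor.to` is expected to resolve to the `aten.to.dtype_layout` overload when imported:

```python
import torch

def f(x):
    # Keyword dtype/device form of Tensor.to; assumed to map to the
    # aten.to.dtype_layout overload handled by this change.
    return x.to(device="cpu", dtype=torch.float64)

x = torch.randn(2, 3)
print(f(x).dtype, f(x).device)  # torch.float64 cpu
```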
2023-09-25 10:00:19 -04:00
Ashay Rane 5f772e8cb4
CI: reconcile differences between RollPyTorch and pre-merge checks (#2482) 2023-09-23 07:00:16 -07:00
Vivek Khandelwal 6699cbcc74
build: manually update PyTorch version (#2480)
Set PyTorch and TorchVision version to nightly release 2023-09-22.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-22 14:25:18 -07:00
Gleb Kazantaev 059041e0fe
[LTC] Support torch.ones/zeros/arange ops (#2440) 2023-09-21 13:25:14 -04:00
Ben Vanik b9847b1904
Fixing implicit double to float casts. (#2476)
MSVC (and other compilers with implicit narrowing warnings) don't like
this type mismatch.
2023-09-20 10:48:40 -07:00
David Gens 023fc90072
[Torch Dialect] add avg_pool 2d and 3d op variants (#2473)
Adds ODS for `avg_pool2d` and `avg_pool3d`, including their backward and
`adaptive_` variants.
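
A small PyTorch-level example (illustrative) of the pooling variants that are getting ODS definitions:

```python
import torch
import torch.nn.functional as F

x2d = torch.randn(1, 3, 8, 8)
x3d = torch.randn(1, 3, 4, 8, 8)

print(F.avg_pool2d(x2d, kernel_size=2).shape)  # torch.Size([1, 3, 4, 4])
print(F.avg_pool3d(x3d, kernel_size=2).shape)  # torch.Size([1, 3, 2, 4, 4])
```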
2023-09-20 13:47:08 -04:00
Stella Laurenzo 20ea1c9e91
Revert accidental change to submodule origin. (#2477) 2023-09-20 14:05:52 +08:00
Stella Laurenzo 278c41e938
Bump llvm-project to f66cd9e9556a53142a26a5c21a72e21f1579217c. (#2466)
Picks up DenseResourceElementsAttr python support and fixes minf/maxf
C++ rename.
2023-09-19 10:50:53 -07:00
Vivek Khandelwal b03efdf2e4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-18.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-19 21:14:05 +05:30
Boian Petkantchin 7a7be60dcf
Fix python package install instructions (#2464) 2023-09-14 10:23:44 -07:00
Sambhav Jain 3d974ed988
[Bazel] Replace mlir-hlo with stablehlo (#2463)
Aligns with https://github.com/llvm/torch-mlir/pull/2460 and fixes bazel
build.

GHA workflow:
https://github.com/sjain-stanford/torch-mlir/actions/runs/6178894329
2023-09-14 08:59:31 -07:00
Bruce Kim 40913a36c2
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (redo PR) (#2459)
This recreates PR #2457, which I accidentally merged before its review was complete and then reverted.

Add a decomposition for the empty_strided op.
As discussed in #1776, the decomposition only supports default stride values,
because when accessing or indexing the tensor the indices are determined by the
strides; MLIR does not support non-default strides implicitly and instead
assumes default strides while iterating over the tensor.
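
A hedged sketch of the default-stride case the decomposition supports: with contiguous (row-major) strides, `empty_strided` is equivalent to a plain `empty` of the same shape:

```python
import torch

size = (2, 3, 4)
default_strides = (12, 4, 1)  # contiguous, row-major strides for this size

x = torch.empty_strided(size, default_strides)
y = torch.empty(size)  # what the default-stride case effectively decomposes to
print(x.stride() == y.stride(), x.shape == y.shape)  # True True
```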
2023-09-13 10:04:31 -07:00
Vivek Khandelwal 4b4c38da46 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-13.
Ref: 464f9c3725

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-13 21:25:21 +05:30
Stella Laurenzo 107ed0dec9
Fix two CMake issues that were causing Windows compilation failures. (#2461)
At some point in the past month, stablehlo gained a number of patches that implement a non-trivial bit of threaded reference code, and it fails to compile on Windows in pretty catastrophic ways.

But this isn't the main problem: because of the way the MLIR CMake macros are used, if we include stablehlo before our code, we end up building the whole project, whether needed or not.
2023-09-12 20:51:45 -07:00
Stella Laurenzo 078d1e1a1d
Remove mlir-hlo (replace with stablehlo). (#2460)
We just have to do this: I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is completely unapparent how to do so given that the mlir-hlo repo is a read-only export sitting at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten a +1 from everyone that they are ok with taking the e2e testing hit if it becomes necessary: it is necessary, as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and subprocess call that in order to get the testing coverage back. I leave that as an exercise to the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
2023-09-12 19:10:02 -07:00