Commit Graph

77 Commits (18139994e807d262f52a13b2c8e1b3edfa45ffa0)

Author SHA1 Message Date
zjgarvey af236dab66
Add support for multiple dynamic reassociation dims for unflatten.int (#3504)
Addresses an issue with onnx.Gather lowering to linalg:
<https://github.com/nod-ai/SHARK-Turbine/issues/242>

The builder for tensor.expand_shape, without an explicitly provided
output shape, fails to infer an output shape in the case of multiple
dynamic reassociation dims. I tried adding the output shape explicitly
for tensor.expand_shape, but ran into compilation issues later on (see
<https://github.com/iree-org/iree/issues/17760>).

This PR adds support by lowering this op to tensor.reshape when multiple
dynamic reassociation dims are provided.
2024-06-28 09:59:51 -07:00
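For context on the fallback: `tensor.reshape` takes its target shape as an explicit operand, so it remains valid even when several result dims are dynamic. A minimal sketch of such a lowering, assuming a conversion-pattern setting; names and builder context are assumptions, not the PR's code:

```
#include "mlir/Dialect/Tensor/IR/Tensor.h"
using namespace mlir;

// Hypothetical sketch: when more than one dim in a reassociation group is
// dynamic, expand_shape cannot infer its output shape, so materialize the
// shape and use tensor.reshape instead.
static Value reshapeWithDynamicDims(OpBuilder &b, Location loc, Value input,
                                    RankedTensorType resultType,
                                    ValueRange outputDims) {
  // Target shape as a 1-D tensor of index values; tensor.reshape carries
  // the shape as an operand rather than inferring it.
  Value shape = b.create<tensor::FromElementsOp>(loc, outputDims);
  return b.create<tensor::ReshapeOp>(loc, resultType, input, shape);
}
```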
Aart Bik d77bab37d1
[torch-mlir][sparse] re-enable all sparse tests (#3444)
this fixes the following issue:

https://github.com/llvm/torch-mlir/issues/3418
2024-06-10 11:19:32 -07:00
Vivek Khandelwal 661be2d5b0
[MLIR][Torch] Add TorchToLinalg lowering for AtenAvgPool3dOp (#3030)
This commit also fixes the average pool op's test failing for the
OnnxToLinalg lowering.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-04 22:12:34 +05:30
Rob Suderman afca88a058
[NFC] Change to *cast instead of .*cast variants (#3405)
Member casts have been deprecated. This changes a number of the member
cast calls over to the global templated variants to remove the
deprecation warnings.
2024-05-30 23:45:13 -07:00
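The change is mechanical; a small before/after illustration (not taken from the diff itself):

```
#include "mlir/IR/BuiltinTypes.h"
using namespace mlir;

void castStyles(Type ty) {
  // Deprecated member style:  ty.cast<RankedTensorType>(), ty.isa<...>()
  // Preferred free-function style:
  auto ranked = cast<RankedTensorType>(ty);     // asserts if the cast fails
  auto maybe = dyn_cast<RankedTensorType>(ty);  // null if the cast fails
  bool isRanked = isa<RankedTensorType>(ty);    // predicate only
  (void)ranked; (void)maybe; (void)isRanked;
}
```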
penguin_wwy e4be197efd
[FxImporter] Fix transpose rank zero (#3382) 2024-05-30 14:31:18 +08:00
penguin_wwy 1f544c37d0
[NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
Gaurav Shukla 43f961eca4
[MLIR] Fix 64-bit product during aten.view lowering (#3378)
std::accumulate needs a 64-bit init value to perform 64-bit arithmetic on
a list of integers.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-05-23 08:59:28 +05:30
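To illustrate the pitfall: `std::accumulate` performs its arithmetic in the type of the init value, so a plain `int` init silently truncates a 64-bit product. A generic example, not the patched code:

```
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

int64_t numElements(const std::vector<int64_t> &dims) {
  // With a plain `1` the accumulator type is int and the product can
  // overflow; a 64-bit init value keeps the arithmetic in int64_t.
  return std::accumulate(dims.begin(), dims.end(), static_cast<int64_t>(1),
                         std::multiplies<int64_t>());
}
```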
Peiming Liu ccb772cd0f
[sparse] propagate sparsity properly when decomposing torch operations. (#3318) 2024-05-15 10:09:27 -07:00
Stella Laurenzo 00efec0b73
[linalg] Implement strict mode lowering for aten.view. (#3319)
* Enables assume_strict_symbolic_shapes on fx_importer imported
programs, indicating strict shape semantics.
* Reworks the view->reshape lowering to take advantage of strict mode
and do one of:
  * Collapse to 0D
  * Flatten/Unflatten when there is an inferred dim.
  * Fallback to tensor.reshape
* Splits some test cases up and adds an attribute to control the old
pattern (so new corners can be tested in strict mode in isolation).
* Dynamic inferred mode needs upstream work to generalize expand_shape
(so that case is suppressed here).
* Deletes the assert from the existing tensor.reshape lowering if strict
shape mode is enabled (since the condition it is dynamically asserting
cannot happen).
2024-05-10 13:45:50 -07:00
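A sketch of the strict-mode dispatch the bullets above describe, with hypothetical helper names standing in for the actual pattern code:

```
#include <cstdint>
#include "mlir/Support/LogicalResult.h"
using namespace mlir;

// Hypothetical helpers standing in for the real rewrite logic.
LogicalResult lowerCollapseToRank0();
LogicalResult lowerViaFlattenUnflatten();
LogicalResult lowerViaTensorReshape();

static LogicalResult lowerViewStrict(int64_t resultRank,
                                     int64_t numInferredDims) {
  if (resultRank == 0)
    return lowerCollapseToRank0();      // collapse to 0D
  if (numInferredDims <= 1)
    return lowerViaFlattenUnflatten();  // flatten/unflatten, inferred dim
  return lowerViaTensorReshape();       // general tensor.reshape fallback
}
```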
Aart Bik a033bbfe6c
[torch-mlir][sparse] recognize to_dense primitive (#3308)
This maps simply to sparse_tensor.convert;
the sparsity types do the rest!
2024-05-08 22:50:17 -07:00
penguin_wwy 6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
Like #3130, gradually replace the deprecated code

https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Aart Bik 4361178caa
[torch-mlir][sparse] recognize sparse tensor conversion (#3226)
Sparse tensor conversions are represented by special aten operators.
This PR ensures the conversions are recognized (instead of failing the
full torch aten lowering to linalg).
2024-04-26 02:32:07 +08:00
Rob Suderman 0e77de996a
[torch] Add support for `torch.view` with dynamic shapes (#3164)
We can map to `tensor.reshape` for handling multiple output dynamic
shapes. Later we can perform a more complex analysis for identifying
expand/collapse cases from the tensor.reshape.

Initially we planned to handle this identification at the `torch` level
however it will be easier to handle once converted to core
mlir-dialects.
2024-04-18 11:47:19 -07:00
Xinan Jiang(姜曦楠) 71d90788d3
[MLIR][TORCH] Support parallel dimensions expand/collapse (#3051)
This PR supports `aten.view` with a unique unknown dimension in both the
input and output shapes when the convert-torch-to-linalg pass lowers
`aten.view` to `tensor.collapse_shape` or `tensor.expand_shape`.

Below is an example:
```
func.func @test_reshape(%arg0: !torch.vtensor<[1,?,50,16],f32>) -> !torch.vtensor<[1,?,16],f32> attributes {torch.assume_strict_symbolic_shapes, torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 19 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
  %int1 = torch.constant.int 1
  %int-1 = torch.constant.int -1
  %int16 = torch.constant.int 16
  %0 = torch.prim.ListConstruct %int1, %int-1, %int16 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
  %1 = torch.aten.view %arg0, %0 : !torch.vtensor<[1,?,50,16],f32>, !torch.list<int> -> !torch.vtensor<[1,?,16],f32>
  return %1 : !torch.vtensor<[1,?,16],f32>
}
```
2024-04-11 10:43:03 -07:00
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer the functional style, as the method style is deprecated:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
Rob Suderman 9d9a05366e
[torch] Fix aten.squeeze lowering to use result shape (#3106)
Squeezes can be ambiguous without the output shape information. For
instance (1, 1, 256) squeezed can be either (1, 256) or (256). We need
to check the resulting shape to know what the shape should look like.
2024-04-04 09:43:12 -07:00
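In other words, only the result type disambiguates a squeeze. A generic sketch of consulting it, not the PR's exact code:

```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Operation.h"
using namespace mlir;

// tensor<1x1x256> may legally squeeze to tensor<1x256> or to tensor<256>;
// the operand alone cannot tell us which, so consult the op's result type.
static RankedTensorType getSqueezedType(Operation *squeezeOp) {
  return cast<RankedTensorType>(squeezeOp->getResult(0).getType());
}
```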
Rob Suderman f97cd4893f
[torch] Improve shape inference for dynamic shapes (#3091)
Shapes can be processed as tensors to represent the set of dimensions.
As reshapes take a list of scalars, this can result in a single dynamic
dimension blocking the adjacent static dimensions.

This pass attempts to decouple tensor computations related to shapes
and propagate values to better support lowering scalar tensor
computations.
2024-04-02 16:19:57 -07:00
schnkmwt 1fcbfa87ec
Implement linalg lowering of diag_embed torch op (#2885)
This PR adds lowering of diag_embed to the linalg dialect.
Tracked in https://github.com/nod-ai/SHARK-Turbine/issues/288

---------

Co-authored-by: sachink <sachink@xilinx.com>
2024-03-22 16:32:50 -07:00
Rob Suderman 19d4888278
[torch] Make torch.aten.unflatten lower directly to linalg (#2971)
The existing lowering via aten.view does not work well for dynamic
shapes, since the lowering to tensor.expand_shape must re-infer dynamic
shape matching. Better to lower directly.
2024-03-04 10:17:42 -08:00
Rob Suderman d030bffc62
[torch] Support `aten.view` rank-0 collapse (#2965)
Collapsing to a rank-0 tensor using `aten.view` was previously bailing
out. This adds the special case.
2024-03-01 12:31:07 -08:00
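The special case plausibly amounts to a collapse with an empty reassociation list; a minimal sketch under that assumption:

```
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Utils/ReshapeOpsUtils.h"
using namespace mlir;

// Sketch: with every source dim statically 1, an empty reassociation list
// collapses to rank 0, e.g.
//   tensor.collapse_shape %t [] : tensor<1x1xf32> into tensor<f32>
static Value collapseToRank0(OpBuilder &b, Location loc, Value input) {
  SmallVector<ReassociationIndices> emptyGroups;
  return b.create<tensor::CollapseShapeOp>(loc, input, emptyGroups);
}
```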
Rob Suderman 6f3d62ab04
[torch] Fix folders and `cat` and `view` torch lowerings (#2963)
A bunch of small fixes are interlinked and trigger crashes if not
addressed as a group. This includes:

- aten view when expand from a rank-0 tensor
- slice folder with negative indices
- `aten._shape_as_tensor` folder on a rank-0 tensor
- `aten.cat` of a tensor with a length-0 tensor
2024-02-28 12:04:52 -08:00
Rob Suderman df2aa1a369
[torch] Fixed edge conditions for strided slicing (#2929)
Strided slicing can occur with a negative stride. In these cases we need
to bound `end` differently. This included removing a function that was
generating bad limits.
2024-02-21 21:28:44 -08:00
Rob Suderman fd08578bdb
[torch] Support dynamic step size for `torch.slice` (#2922)
For some reason we did not use the step size dynamically, despite its
being constructed from the dynamic value.
2024-02-19 10:26:21 -08:00
Quinn Dawkins 494089d53d
Clang format refresh (#2812)
After noticing a number of commits with unrelated formatting changes, I
suspect clang-format was changed at some point, and we are now seeing
unrelated formatting churn as a result. Doing a refresh can help avoid
this.

The changes made here came from
```
find lib -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find include -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find projects -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
```
2024-01-29 12:59:33 -05:00
Aart Bik 46a25d7241
[torch-mlir][sparse] preserve sparsity during lowering torch to linalg (#2809)
This preserves sparsity at the most obvious places of lowering TORCH
tensors to MLIR RankedTensorType tensors. Other places are marked for
audit. With some initial lowering tests.
2024-01-26 10:54:59 -08:00
Franz Haniel b9806cfa38
[TorchToLinalg] Add lowering for torch.aten.diagonal (#2632) 2024-01-22 12:47:13 -05:00
Frederik Harwath 0860c41ee2 Implement aten.reflection_pad2d lowering to linalg 2024-01-10 21:32:22 -10:00
kumardeepakamd 9adad9bc40
Add support for reflection_pad1d (#2706)
Adds a lowering to Linalg for reflection_pad1d. Based on ideas/code from draft PR
https://github.com/llvm/torch-mlir/pull/2693.

---------

Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-02 14:05:11 -05:00
Quinn Dawkins 030b0140d4
[TorchToLinalg] Lower aten.cat to tensor.concat (#2650)
This replaces the lowering of aten.cat with tensor.concat, allowing more
efficient handling of concatenations in downstream flows. The refbackend
populates concat decomposition patterns that can be used to recover the
previous lowering.
2023-12-15 15:45:32 -05:00
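A sketch of the replacement, assuming the type-inferring `tensor.concat` builder:

```
#include "mlir/Dialect/Tensor/IR/Tensor.h"
using namespace mlir;

// Builder signature assumed: tensor.concat carries the concat dim directly,
//   tensor.concat dim(0) %a, %b
//     : (tensor<2x4xf32>, tensor<3x4xf32>) -> tensor<5x4xf32>
static Value lowerCatToConcat(OpBuilder &b, Location loc, int64_t dim,
                              ValueRange tensors) {
  return b.create<tensor::ConcatOp>(loc, dim, tensors);
}
```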
Quinn Dawkins 400752ca8d
[TorchToLinalg] NFC: Move Utils.h to an externally accessible location (#2603) 2023-12-01 19:38:21 -05:00
James Newling 647f2f5076
Additional tests for view lowering (#2584)
The logic for lowering the aten view op to linalg is fairly complex. 
In this PR I have tried to follow all non-failing paths through the 
lowering and add unit tests where they're missing.

There is 1 logical change to the lowering: redundant tensor.cast ops
(same source and destination type) are folded.
2023-11-20 17:35:25 -08:00
Daniel Garvey 4901773f77
add uncovered cases in view lowering (#2524)
removes unnecessary checks from empty strided
2023-11-01 21:56:44 -05:00
Quinn Dawkins 6f81ad7293
[TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (#2505)
With strict symbolic shapes, we can assume numpy-style dynamic
broadcasts never occur. This improves the lowering in the presence of
this assumption.
2023-10-05 15:15:26 -04:00
Ramiro Leal-Cavazos 2e5d65064c [linalg] Add handling for leading and trailing size-1 dims in ViewOp
This commit adds to the lowering of `aten.view` handling for the
following cases:

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
- `(a.size(i), ...)` -> `(1, ..., 1, a.size(i), ...)`
- `(1, ..., 1, a.size(i), ...)` -> `(a.size(i), ...)`
2023-10-03 23:04:52 +00:00
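Each of these cases pairs a `tensor.expand_shape`/`tensor.collapse_shape` whose reassociation groups the unit dims with the adjacent `a.size(i)`. An illustrative sketch (not the commit's code) for dropping one trailing size-1 dim:

```
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Utils/ReshapeOpsUtils.h"
using namespace mlir;

// Sketch: (d0, d1, 1) -> (d0, d1) groups the unit dim with its neighbor:
//   tensor.collapse_shape %t [[0], [1, 2]]
//     : tensor<?x8x1xf32> into tensor<?x8xf32>
static Value dropTrailingUnitDim(OpBuilder &b, Location loc, Value input) {
  SmallVector<ReassociationIndices> groups = {{0}, {1, 2}};
  return b.create<tensor::CollapseShapeOp>(loc, input, groups);
}
```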
Ramiro Leal-Cavazos 1c508af0ba Revert "[linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474)"
This reverts commit 7c6b9d2445.
2023-10-03 23:04:52 +00:00
Stella Laurenzo 860be09a39
Elide dynamic broadcast checks when in strict symbolic shapes mode. (#2496)
When importing dynamic shaped programs from Dynamo, via torch.compile or
torch.export, we can assume that strict symbolic shape checks have been
done prior to generating torch IR. Among other shape checking, this
eliminates the case where an unknown dimension can be dynamically '1' in
a way that signals a broadcast.

Adds a `isAssumingStrictSymbolicShapes` utility which consults a
`torch.assume_strict_symbolic_shapes` attribute on an enclosing scope
and returns true if present.

In the linalg pipeline, many runtime checks are elided when this returns
true.
2023-09-29 16:45:48 -07:00
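A sketch of how such a utility would gate a runtime check; the check emitter below is a hypothetical stand-in, and the overload shape of the utility is assumed:

```
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Utility named in the commit; declaration repeated for the sketch.
bool isAssumingStrictSymbolicShapes(OpBuilder &builder);
// Hypothetical stand-in for the runtime checks that strict mode elides.
void emitDynamicBroadcastCheck(PatternRewriter &rewriter, Location loc,
                               ValueRange lhsSizes, ValueRange rhsSizes);

static void maybeEmitBroadcastCheck(PatternRewriter &rewriter, Location loc,
                                    ValueRange lhsSizes, ValueRange rhsSizes) {
  if (isAssumingStrictSymbolicShapes(rewriter))
    return; // strict mode: an unknown dim can never silently be '1'
  emitDynamicBroadcastCheck(rewriter, loc, lhsSizes, rhsSizes);
}
```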
Ramiro Leal-Cavazos 7c6b9d2445
[linalg] Fix handling of trailing size-1 dimensions in aten.view (#2474)
This commit adds to the lowering of `aten.view` handling for the
following cases:

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`

Fixes: https://github.com/llvm/torch-mlir/issues/2448
2023-09-27 09:09:30 -07:00
Ramiro Leal-Cavazos c9fd78988e
[NFC] Clean-up `ConvertAtenViewOp` in linalg backend (#2470)
While trying to fix a bug in the `ConvertAtenViewOp` pattern in the
linalg backend, I realized that the pattern had become quite complex and
had accumulated some dead code, making it hard to reason about.

This commit simplifies the pattern quite a bit. The main changes are:
1. All the static helper functions in the `ConvertAtenViewOp` class have
been simplified, both in their signature and their body. Each one now
performs simple calculations on arrays and takes the fewest arguments
necessary.
2. The body of [the `while`
loop](9fce566b0c/lib/Conversion/TorchToLinalg/DataMovement.cpp (L407))
inside the main pattern has been changed to work on `MutableArrayRef`
slices, to avoid having to keep track of `start` and `end` indices for
the input and output shape arrays.
3. All the heuristics used to determine the mapping between the input
and output dimensions are now in [this relatively short `if-else`
section](9fce566b0c/lib/Conversion/TorchToLinalg/DataMovement.cpp (L428-L460)),
making it easy to see what is going on.
4. Dead code was eliminated, and some of the documentation comments were
updated.

This commit does not add any new functionality to the
`ConvertAtenViewOp` pattern.
2023-09-26 09:20:01 -07:00
Vivek Khandelwal 23b72244b1 [MLIR][TORCH] Add different dtype support for aten.bmm op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-12 12:38:46 +05:30
Bruce Kim cd1c7df8be
[MLIR][TORCH] Add E2E support for view_as_real op (#2419)
* view_as_real test case, allow dtype in testutils.randn

* abstract python upstream func implemented

* fixed upstream dtype func, implemented view_as_real backend op

* formatted AtenViewAsRealOp, removed change in e2etest/framework

* removed test suite from reshape_like.py, because it was moved to basic.py

* implemented C-API wrapper for mlirComplexF128 type

* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict

* Changed IR input of aten fft_fft unit test

* code refactored

* code refactored and fixed ci test

* refactored: removed white spaces, and rolled back to having both input/output affine expr

* refactored: deleted output affine expr to reduce redundancy

* xfail ltc backend

* removed ComplexImag and ComplexReal from torchdynamo xfail set

* copied and pasted from main branch as there's no change to be made in this file

* refactored abstract_interp_lib_gen.py

* refactored: torchtypes.td, formatted, removed commented out code
2023-09-01 21:12:01 -07:00
Ramiro Leal-Cavazos 41bafe13cc
[build] Update llvm tag to a3f2751f (#2397)
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:

llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009

Changes made:

- Rename `getSuccessorEntryOperands` with `getEntrySuccessorOperands`
and remove `operands` from
`getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` `const` (https://reviews.llvm.org/D157601)
2023-08-15 09:53:28 -07:00
Abhishek Varma 6c9ba4ce95
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op (#2174)
-- This commit adds support for dynamic dimensions in the BroadcastTo op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-07 10:01:51 -07:00
Prashant Kumar c47d3aab01 Fix torchdynamo fail test. 2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via linalg_backend.
2023-05-11 21:29:07 +05:30
Abhishek Varma 318fe13468 [MLIR][TORCH] Patch up Ops and their lowerings to deal with +ve `dim`
-- In Python we have the concept of negative dimension indexing.
-- We would want to normalize such dimensions to be +ve and within the
   expected range instead.
-- This commit takes care of a few remaining set of Ops and their
   lowerings by applying `toPositiveDim` and `isValidDim` to the
   extracted integer `dim` value.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
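The normalization idiom, with `toPositiveDim` and `isValidDim` as named in the commit (declarations repeated so the sketch is self-contained; the surrounding code is illustrative):

```
#include <cstdint>
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// torch-mlir helpers named above (declarations repeated for the sketch).
int64_t toPositiveDim(int64_t dim, int64_t inputRank);
bool isValidDim(int64_t dim, int64_t inputRank);

static LogicalResult normalizeAndCheckDim(PatternRewriter &rewriter,
                                          Operation *op, int64_t &dim,
                                          int64_t inputRank) {
  dim = toPositiveDim(dim, inputRank); // e.g. dim = -1, rank = 4 -> dim = 3
  if (!isValidDim(dim, inputRank))
    return rewriter.notifyMatchFailure(op, "dim out of range");
  return success();
}
```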
Vivek Khandelwal e90ea3d7ab [MLIR][TORCH] Extend implementation of aten._index_put_impl op.
This commit adds support for the following cases of index_put_op:
1.) where the index is a 2-d tensor.
2.) where indices is a list of tensors and `None`s, with exactly
two non-None tensors along consecutive dimensions.

This commit also adds a utility to compute the broadcast shape
given the two input tensors.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-05 14:04:30 +05:30
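The broadcast-shape computation follows the usual numpy rule: align shapes from the right, pad the shorter one with 1s, and require each dim pair to match or contain a 1. A standalone sketch of that rule over static dims, not the commit's utility:

```
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

std::vector<int64_t> broadcastShape(std::vector<int64_t> a,
                                    std::vector<int64_t> b) {
  if (a.size() < b.size())
    std::swap(a, b);
  b.insert(b.begin(), a.size() - b.size(), 1); // left-pad with 1s
  std::vector<int64_t> out(a.size());
  for (size_t i = 0; i < a.size(); ++i) {
    assert((a[i] == b[i] || a[i] == 1 || b[i] == 1) && "not broadcastable");
    out[i] = std::max(a[i], b[i]); // the non-1 size wins
  }
  return out;
}
```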
Zhekun Zhang 5758a0bfbb
[StableHLO] Support for slice_scatter (#1960)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-03-22 13:41:04 -07:00
Ziheng Jiang dca2b8a40a
[TORCH] Improve type refinement for aten.cat. (#1908)
* [TORCH] Fix type refinement for aten.cat.

* Add test.

* Address comments.

* Update.

* Update.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-03-09 16:17:35 -08:00
Ramiro Leal-Cavazos 6c86bec04f
build: update llvm tag to 9acc2f37 (#1828)
This commit makes the following changes:

- Update dialects to use fold API `kEmitFoldAdaptorFolder` and update
signature of `fold` methods (see PSA
https://discourse.llvm.org/t/psa-new-improved-fold-method-signature-has-landed-please-update-your-downstream-projects/67618)
- Replace `makeArrayRef` with `ArrayRef` (see
https://reviews.llvm.org/D140896)
- Remove `TypeRange{}` arg from `b.create<scf::IfOp>` since builder no
longer takes that argument
- Make `func`s in `Torch/invalid.mlir` private, since symbol
declarations cannot be public. (see https://discourse.llvm.org/t/rfc-symbol-definition-declaration-x-visibility-checks/2140)
2023-01-25 01:29:42 +00:00
Tanyo Kwok 577e38da58
build: update llvm tag to 7ccbb4df (#1736)
Summary of changes:

 - LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
   (although not all) places
   (https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
   This patch replaces the affected instances of `llvm::Optional` with
   `std::optional`.

 - In the usages of llvm::Optional that remain, llvm::Optional::value()
   is deprecated, so this patch replaces them with a dereference.
2022-12-20 18:17:27 +08:00
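The replacement pattern, in isolation (a minimal sketch):

```
#include <cstdint>
#include <optional>

// Was: llvm::Optional<int64_t> maybeDim = 3;
std::optional<int64_t> maybeDim = 3;
// Where llvm::Optional remains, the deprecated .value() becomes a
// dereference:
int64_t dim = *maybeDim;
```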