Commit Graph

2751 Commits (09d42044b4d9d2cd5399d9d5ff5ae97501314db5)
 

Author SHA1 Message Date
Rob Suderman e80054a3cc
[torch] Folders for `torch.aten.*.tensor` operators [add, sub, mul] (#2878)
Adds simple folders for limited-size aten tensor operations. This is primarily
useful for folding shape computations, which unfortunately can use
`aten` operators. Add, sub, and mul are common examples covered by these folders.
2024-02-19 10:28:23 -08:00
Rob Suderman cea51897a5
[onnx] Simplify onnx.slice lowering (#2919)
The ONNX slice lowering needlessly used arange instead of directly
constructing the constant dimension values. This makes lowering to
linalg struggle, as multiple folders are required to recover what is
actually a constant index value.
2024-02-19 10:26:29 -08:00
Rob Suderman fd08578bdb
[torch] Support dynamic step size for `torch.slice` (#2922)
For some reason we did not use the step size dynamically,
despite its being constructed from the dynamic value.
2024-02-19 10:26:21 -08:00
aldesilv d29157b33f
OnnxToTorch support for onnx.InstanceNormalization op (#2710)
https://github.com/nod-ai/SHARK-Turbine/issues/327
2024-02-19 19:53:48 +05:30
Aart Bik 78e10ff09b
[torch-mlir][sparse] inline sparse helper methods (#2918)
Even though the reference compiler is not about performance, inlining
the generated sparse helper methods has a rather big positive impact on
performance, leaving a much better first impression. Therefore, we added
this inlining pass (which leaves all other PyTorch modules unaffected,
since they tend to be one big main() method to start with).

testing:

$./tools/e2e_test.sh --config linalg

Summary:
    Passed: 1164
    Expectedly Failed: 8

$ python -m e2e_testing.main --config=torchdynamo

Summary:
    Passed: 976
    Expectedly Failed: 162
2024-02-16 20:56:42 -08:00
Rob Suderman d65925a8b4
[onnx] Fix `onnx.sigmoid` for integer inputs/outputs (#2914)
A sample compilation crashes due to sigmoid with integer inputs/outputs.
This fix avoids the crash, though the lowering still produces an error.
2024-02-16 13:35:25 -08:00
Rob Suderman 7a0d0e954b
[onnx] Fix onnx.gather lowering to use torch.aten.index_select (#2913)
Onnx's gather maps directly to `torch.aten.index_select`. We should just
use that path.
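For 1-D indices, ONNX's Gather along an axis matches `index_select` exactly; a small illustrative check (the tensors below are hypothetical, not from the change):

```
import torch

# For 1-D indices, gathering along a dimension is index_select on that dim.
x = torch.arange(12).reshape(3, 4)
idx = torch.tensor([2, 0])
gathered = torch.index_select(x, dim=1, index=idx)  # columns 2 and 0
assert torch.equal(gathered, x[:, idx])
```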
2024-02-16 16:05:44 -05:00
Rob Suderman 468c533942
[onnx] Fix crash when negative transpose values exist (#2915)
We were crashing due to indexing into the shape with a negative value.
The lowering was updated to avoid the crash.
2024-02-16 16:04:47 -05:00
Aart Bik c5d8c12469
[torch-mlir][sparse][NFC] fixed typo (#2917)
grammar police
2024-02-16 13:02:00 -08:00
Stella Laurenzo 5253282c55
[fx] Support mutation in ExportedProgram. (#2916)
As of https://github.com/pytorch/pytorch/pull/118969, `ExportedProgram`
has the long awaited fixes to correctly categorize various things
relating to parameters, buffers, mutated inputs and constants.

With this additional modeling, we are finally able to implement
(safely/soundly) the mutable semantics that were attempted on the
TorchScript path. The difference is that on that path, we had to
conservatively treat everything as mutable and run some dodgy heuristics
(which have been the cause of many bugs relating to
"MaximizeValueSemantics") to try to get back to an immutable state.

The new model supports mutability at the graph edges, allowing both user
inputs and buffers to be mutated (there is some more support than that,
but that is all I fully tracked through to implementation).

Therefore, when we receive programs like this, we now can selectively
enable mutation at the edges. This happens to be the mutability model
that IREE supports, which I expect to be a primary beneficiary. However,
there is nothing stopping anyone else from handling the `!torch.tensor`
types and the existing copy/overwrite ops that will be selectively
added.

Since this relies on API changes that will not release until 2.3, I'm
being a bit cautious about not refactoring existing facilities.
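A minimal sketch of the kind of program this enables, assuming a PyTorch build with the updated `torch.export` support; the module and names below are illustrative only:

```
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x):
        self.count.add_(1)  # buffer mutation, visible at the graph edge
        return x + self.count

# The exported program records the mutated buffer in its graph signature
# instead of hiding the mutation behind copies.
ep = torch.export.export(Counter(), (torch.randn(3),))
print(ep.graph_signature)
```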
2024-02-16 09:46:30 -08:00
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
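The routing itself can be sketched with stock PyTorch/ONNX tooling (the module name and file path below are illustrative):

```
import torch
import onnx

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Export the torch test module to ONNX, then reload it so it can be
# reimported, lowered to torch, and compiled to linalg for validation.
torch.onnx.export(AddOne(), (torch.randn(2, 3),), "add_one.onnx")
model = onnx.load("add_one.onnx")
onnx.checker.check_model(model)
```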
2024-02-15 10:17:13 -08:00
Yuanqiang Liu 49f63df068
[bazel] commit after run buildifier (#2912) 2024-02-16 01:56:09 +08:00
Yuanqiang Liu 5733c84443
[bazel] fix bazel with stablehlo refbackend and fix some typo (#2911) 2024-02-16 01:38:13 +08:00
Yuanqiang Liu f3e8199a6d
[Stablehlo] add refbackend (#2712) 2024-02-16 01:08:48 +08:00
saienduri 8e2e5eeae9
add support for decomposition (#2879)
This commit adds decomposition support into the core aten operators
before importing the module from torch.

Also, this commit deals with the lifted tensor constants in
torch.export.export(). We don't want to add unnecessary placeholder
nodes to the graph (extra args in the block module), and should treat
them like the constants that they are. The unnecessary clone is also
removed for efficiency.
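One way to perform the decomposition with stock `torch.export` APIs (a sketch; the exact mechanism used by the importer may differ):

```
import torch
from torch._decomp import core_aten_decompositions

class Norm(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.layer_norm(x, x.shape[-1:])

ep = torch.export.export(Norm(), (torch.randn(4, 8),))
# Decompose into core aten operators before importing the module.
ep = ep.run_decompositions(core_aten_decompositions())
print(ep.graph)
```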
2024-02-14 21:00:52 -08:00
Ze Zhang f3b38e5d12
DecomposeComplexOps: update parseEquation to skip space char for AtenEinsumOp op (#2910)
Just a minor update to skip the space char if included in the equation
string
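For reference, einsum equations may legally contain whitespace, which is why the parser has to skip it:

```
import torch

a, b = torch.randn(2, 3), torch.randn(3, 4)
# The spaced and unspaced equations must parse to the same contraction.
assert torch.allclose(torch.einsum("ij, jk -> ik", a, b),
                      torch.einsum("ij,jk->ik", a, b))
```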

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2024-02-14 18:18:11 -08:00
Daniel Garvey 77b7550997
Add support for bfloat16 in fximporter (#2896)
This introduces an additional soft dependency on the Python ml_dtypes
package in order to support bfloat16.

Addresses #2843
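A minimal sketch of why the soft dependency is needed: numpy has no native bfloat16, but ml_dtypes registers a numpy-compatible one that an importer can use to materialize bf16 data:

```
import numpy as np
import ml_dtypes  # optional soft dependency

# ml_dtypes provides a numpy-compatible bfloat16 dtype.
arr = np.array([1.0, 2.0, 3.0], dtype=ml_dtypes.bfloat16)
print(arr.dtype)  # bfloat16
```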
2024-02-14 16:24:25 -06:00
Ean Garvey e7a09440d3
Bump torch to pytorch/pytorch@b51e024 (#2909)
This version of pytorch includes a patch to enable dynamo support on
Windows, so I would like to sync on this torch version across
torch-mlir/shark-turbine for a seamless Windows import flow.
2024-02-14 13:31:37 -05:00
Vivek Khandelwal d6d1a173dc
[MLIR][Torch] Add OnnxToTorch and TorchToLinalg support for trig ops (#2903)
This commit adds the OnnxToTorch lowering for the cosh, acosh, asin, asinh,
and atanh ops.
This commit also adds the TorchToLinalg lowering for the acosh, asin, asinh,
and atanh ops.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-14 11:58:09 +05:30
Rob Suderman e9cdd6cbc5
[torch] Fix tm_tensor.attention for end-to-end (#2907)
Some operations include a backend matcher for specialized operations. We
map these back to generics so they appropriately match to the high
performance versions. This is done for the attention operation.
2024-02-13 21:18:01 -08:00
Scott Todd d6e1d836ca
Drop torch attributes at the end of backend conversion. (#2876)
Fixes https://github.com/llvm/torch-mlir/issues/2866

Some backends / downstream projects expect that a "fully converted"
program has no remaining ops or attributes from the original dialect(s).
2024-02-13 14:32:02 -08:00
Aart Bik 24c2fc0b5f
[torch-mlir][sparse] add JIT test to expose pending issues (#2906)
This test exposes issues that need fixing:
(1) sparsity needs to be propagated into the FX graph (over element-wise ops),
and (2) batched dimensions need a new "dense(batch)" format.
2024-02-13 13:42:56 -08:00
Sambhav Jain 3e836d8dad
[fx_importer] Convert non-persistent buffers lifted as tensor constants (#2902)
The investigation is largely recorded in
https://github.com/llvm/torch-mlir/pull/2881, but this change allows us
to capture non-persistent buffers that were lifted as tensor constants
(after https://github.com/pytorch/pytorch/pull/118969 landed in upstream
PyTorch), and propagate them to `Torch` dialect as "frozen"
`torch.vtensor.literal`. I believe this patch should work with both
nightly and stable PyTorch, but will let CI confirm the same. Thanks
@stellaraccident for the valuable pointers and guidance.
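A small illustrative module (names are hypothetical) showing the kind of non-persistent buffer that now gets captured as a frozen literal:

```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False: excluded from the state_dict, so export lifts
        # it as a tensor constant rather than a named buffer.
        self.register_buffer("table", torch.arange(4.0), persistent=False)

    def forward(self, x):
        return x + self.table

ep = torch.export.export(M(), (torch.randn(4),))
print(ep.graph_signature)
```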

---------

Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-13 12:38:32 -08:00
saienduri 9b967f6b5a
[MLIR][ONNX] Add OnnxToTorch support for Mean, IsInf, IsNaN, PRelu op (#2801)
This commit adds the OnnxToTorch support for Mean, IsInf, IsNaN, and
PRelu ops. All high-priority ops were already taken, so we went with these.
The non-trivial ones are Mean and IsInf, which might require extra review.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-13 12:38:21 +05:30
Aart Bik b6f4ca512e
[torch-mlir][sparse] sparsity metadata refinement (#2901)
Various improvements on sparsity metadata:

(1) define a single data structure for all sparsity-related metadata
(2) handle batched dense dimensions, as well as dense subtensor
dimensions
(3) refine sparsity propagation for deeper networks
2024-02-12 16:10:57 -08:00
Ashay Rane 370d6ac9a2
build: find Protobuf using config mode search (#2900)
This patch makes the Protobuf package mandatory in addition to forcing a
config mode search.  The (default) module mode search looks for the
CMake-provided FindProtobuf.cmake file, but this file does not list
Abseil as a dependency, causing linker issues like the one below:

```
ld: Undefined symbols:
  absl::lts_20230802::log_internal::LogMessageFatal::LogMessageFatal(char const*, int, std::__1::basic_string_view<char, std::__1::char_traits<char>>), referenced from:
      google::protobuf::RepeatedPtrField<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>::TypeHandler::Type const& google::protobuf::internal::RepeatedPtrFieldBase::Get<google::protobuf::RepeatedPtrField<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>::TypeHandler>(int) const (.cold.1) in OnnxImporter.cpp.o
```

By forcing a config mode search, CMake looks for the file that is
installed as part of the protobuf package and which does contain the
Abseil dependency.  This workaround is also mentioned in a GitHub issue
for Protobuf:
https://github.com/protocolbuffers/protobuf/issues/12292#issuecomment-1529680040.
2024-02-12 17:31:41 -06:00
Aart Bik be8375d350
[torch-mlir][sparse] implement first sparse_jit end-to-end path (#2894)
This PR introduces a sparse_jit wrapper that can run simple models with
sparse tensor inputs end-to-end. The implementation shows all required
components on modifying sparse tensor types with a 1:N relation on the
call sites. Two tests show that the JIT runs end-to-end while computing
the correct results.

More details to follow (generalizing to COO and different ranks, as well
as support for *output* sparse tensors), but the general concepts are
all here now.

**_Update: Thanks to Rob, bump to proper LLVM/MLIR hash is done!_**

_**NOTE that all parameter passing changes are nicely done "downstream"
in MLIR, so very few changes are required in torch-mlir code
proper**_

---------

Co-authored-by: Franz Haniel <77495327+frafranz@users.noreply.github.com>
Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-12 10:04:54 -08:00
Xida Ren (Cedar) bfb93cb99f
Fix test_add_uint8 failure to lower to linalg (#2893)
Updates the convertScalarToDtype invocation to pass the original source and
destination datatypes for the add op. Also fixes a potential problem
with the sub op.
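An illustration of why the original dtypes matter here (assuming standard PyTorch wraparound semantics for unsigned integer math):

```
import torch

a = torch.tensor([250], dtype=torch.uint8)
b = torch.tensor([10], dtype=torch.uint8)
# uint8 addition wraps modulo 256; widening the operands before the add
# would produce a different result than the original program.
print(a + b)  # tensor([4], dtype=torch.uint8)
```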

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-12 09:19:39 -08:00
Yuanqiang Liu b8c48cf283
Bump stablehlo to openxla/stablehlo@e191eb4c3c3f3144503a8a117d760de5d… (#2891)
…dcc7e89.  
* to include the `chlo-legalize-to-stablehlo` pass.
2024-02-12 01:05:00 +08:00
Rob Suderman c0f139be0f
[torch] Add `torch.aten.eq.Tensor` comparison folder (#2889)
Added a folder for the equals operator. This enables folding equality
comparisons, primarily for shape computations performed on small
tensors.
2024-02-09 15:02:20 -08:00
Rob Suderman d83b576c6e
Bump LLVM to llvm/llvm-project@bb180856ec (#2895)
Includes some minor fixes for `AffineMap::inferFromExprList`
2024-02-09 14:07:49 -08:00
Rob Suderman 7d33ba69ac
[torch] Folder for torch.aten.select.int for splat cases (#2890)
If the input or result is a splat value we can just constant fold the
result. This is common for shape computations and can help with shape
inference.
2024-02-09 14:02:54 -08:00
Franz Haniel 4cc62aeb24
Implement trace (#2790)
The lowering decomposes AtenTraceOp into an AtenDiagonalOp followed by
AtenSumOp.

The progress is tracked in
https://github.com/nod-ai/SHARK-Turbine/issues/333.
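The decomposition corresponds to the identity trace(x) = sum(diagonal(x)):

```
import torch

x = torch.randn(5, 5)
# AtenTraceOp decomposes into AtenDiagonalOp followed by AtenSumOp.
assert torch.allclose(torch.trace(x), torch.diagonal(x).sum())
```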

---------

Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-09 08:00:24 -08:00
Avinash Sharma 9659a436d1
Add lowering support for math::AbsIOp (#2875)
There is no lowering support for math::AbsIOp, so if the operand is an
integer type it will fail to lower, since math::AbsFOp requires operand
#0 to be floating-point-like.
2024-02-08 14:53:40 -08:00
Aart Bik 44f8f89826
[torch-mlir][sparse] add sparsification to linalg reference backend (#2887)
This adds a few passes that will ensure linalg with sparse tensors are
properly lowered to loops and can run using the ExecutionEngine for
testing (a few details on parameter passing from PyTorch still TBD)

Test results:

$ ./tools/e2e_test.sh --config linalg

Summary:
    Passed: 1144
    Expectedly Failed: 8

$ python -m e2e_testing.main --config=torchdynamo -v

Summary:
    Passed: 960
    Expectedly Failed: 163

Filed issue:
https://github.com/pytorch/pytorch/issues/119407
2024-02-08 09:37:31 -08:00
Ashay Rane 21f070e95f
onnx: fix checks in TorchOnnxToTorch pass to match the ONNX spec (#2848)
This PR contains three commits to update the validation checks in the
ONNX -> Torch conversion pass for the AveragePool, Pad, and Slice operators:

> onnx: fix preconditions for lowering AveragePool ops
> 
> The `pads` attribute of the AveragePool operator specifies the value to
> pad at both the beginning as well as the end of the axis (see
> https://onnx.ai/onnx/operators/onnx__AveragePool.html#attributes), so
> the size of this attribute should be twice the rank of the input tensor.
> However, our TorchOnnxToTorch bails out early since it incorrectly
> compares the pads attribute with the rank (not twice the rank) of the
> input tensor.
> 
> This patch fixes the code to match the spec and adds a lit test.

> onnx: allow optional constant value for Pad operator
> 
> The `constant_value` input of the onnx.Pad operator is optional (see
> https://onnx.ai/onnx/operators/onnx__Pad.html#inputs), but the existing
> logic for lowering the operator into the Torch dialect assumes that it
> is mandatory.
> 
> This patch makes the attribute optional and constructs a default value
> (a list of zeros the size of the input tensor) if the attribute was not
> specified.

> onnx: fix checks for axes and steps inputs of Slice operator
> 
> The ONNX Spec for the Slice operator allows the `starts` and `ends`
> inputs to have fewer indices than the dimensions of the `data` tensor
> (see https://onnx.ai/onnx/operators/onnx__Slice.html), but our code
> expects these inputs to be as many as the `data` tensor's dimensions.
> 
> More precisely, the spec requires that the `starts` and `ends` inputs
> are only as long as the `axes` input, but since the `axes` input is
> optional, the default type for the `axes` input has to match the type
> for the `starts` and `ends` inputs. Moreover, the number of indices in
> the `steps` input also has to match those in the `axes` input (instead
> of matching the dimensions of the `data` input).
> 
> This patch fixes the checks in the TorchOnnxToTorch conversion so that
> they match the ONNX spec.
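A rough sketch of the Slice semantics being checked, expressed in numpy terms (the values are illustrative):

```
import numpy as np

data = np.arange(24).reshape(2, 3, 4)
# starts/ends/steps only need one entry per axis listed in `axes`, which
# may be fewer entries than data's rank; here only axis 2 is sliced.
starts, ends, axes, steps = [1], [3], [2], [1]
slices = [slice(None)] * data.ndim
for s, e, a, st in zip(starts, ends, axes, steps):
    slices[a] = slice(s, e, st)
assert data[tuple(slices)].shape == (2, 3, 2)
```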
2024-02-07 21:19:27 -08:00
Vivek Khandelwal 4df96616db
[MLIR][TORCH] Modify Onnx.Reshape lowering for static shape cases (#2852)
This commit modifies the OnnxToTorch lowering of Onnx.Reshape op by
creating the result shape list for the aten.reshape using the result
shape values inferred from the op's result shape.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-07 17:44:07 -08:00
Rob Suderman a8aad2a5ab
[torch] Add `torch.aten.where.*` folders (#2886)
The where operation can be statically computed when it involves splats of
known value. Added handling for these cases, with multiple tests.
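For example, when all operands are known splats the result is itself a splat and can be folded to a constant (a sketch of the semantics, not the folder's actual code path):

```
import torch

cond = torch.full((4,), True)                  # splat condition
lhs, rhs = torch.full((4,), 2.0), torch.full((4,), 7.0)
# All operands are splats of known value, so the result is the constant
# splat 2.0 and the op can be folded away.
assert torch.equal(torch.where(cond, lhs, rhs), torch.full((4,), 2.0))
```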
2024-02-07 19:43:31 -05:00
Dave Liddell 23647ab2d1
[torch] aten.index_select folder (#2871)
Folds aten::index_select ops under the following conditions (illustrated in the sketch below):

1. If the input and output are the same shape, the indexing operation is
a NOP, so just return the input.
2. If the input has shape <1x1x...xNx...x1> (all 1's except for one
dim), and the output shape is <1x1x...x1> (all 1's), then there is a
single index, so extract the single element value and return a tensor
with that value.
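An illustrative sketch of the two cases (tensor shapes and values are hypothetical):

```
import torch

x = torch.randn(1, 1, 4, 1)

# Case 1: selecting every index in order reproduces the input, so an
# index_select whose output shape equals the input shape folds to the input.
assert torch.equal(torch.index_select(x, 2, torch.arange(4)), x)

# Case 2: all dims are 1 except one, and a single index is selected, so
# the fold extracts that one element into an all-1s-shaped tensor.
one = torch.index_select(x, 2, torch.tensor([3]))
assert one.shape == (1, 1, 1, 1) and one.item() == x[0, 0, 3, 0].item()
```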

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-02-07 16:17:15 -08:00
mmakevic 32dbf99ce2
Implement lowering of torch.aten.all.dim (#2873)
Lowering of torch.aten.all.dim to linalg.

Per PyTorch documentation:

> This function matches the behaviour of NumPy in returning output of
dtype bool for all supported dtypes except uint8. For uint8 the dtype of
output is uint8 itself.

Since there is no support for ui8 in torch-mlir currently
(https://github.com/llvm/torch-mlir/pull/1384#issuecomment-1260011334)
implementation returns failure for that case.
2024-02-07 12:34:52 -08:00
Xida Ren (Cedar) fc04bc7ee9
[torch] AtenSliceOp folder that produces splat results (#2869)
Includes `slice` folder and lit tests

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
2024-02-07 19:00:46 +00:00
James Newling 723b8b1d28
Fix dev docs error/typo (#2880)
Just a one line change in a .md file
2024-02-07 03:55:38 -08:00
saienduri bfcf93ea21
Rename torch_mlir.compile APIs and introduce FX based analogs (#2842)
Link to related RFC:
https://discourse.llvm.org/t/rfc-rename-torch-mlir-compile-apis-and-introduce-fx-based-analogs/76646
This commit updates the documentation, tests, CMake files, and API for
the proposed changes in the RFC. There is a new torch_mlir/fx.py for
user level APIs related to importing modules and a corresponding test
for this path can be found at test/python/fx_importer/basic_test.py.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
2024-02-06 19:07:59 -08:00
Xida Ren (Cedar) cc06391630
AtenSortOp Folder (#2864)
A chunk off

https://github.com/llvm/torch-mlir/pull/2856
https://github.com/llvm/torch-mlir/pull/2860

---------

Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
2024-02-06 21:12:12 +00:00
Daniel Garvey faf7d4aaa5
[fx_importer] Add support for 0D tensors (#2870)
Adds an escape hatch so that single-value tensors are created as
DenseElementsAttr instead of DenseResourceElementsAttr.

For 0-d or single-element tensors, splats are better represented as
DenseElementsAttr, so DenseResourceElementsAttr is not used for them.
2024-02-06 00:19:31 -06:00
Dave Liddell 1cb14f6879
Rob's atenTensor folder (#2867)
If a tensor is initialized by a list with a single constant integer,
this folder turns it into a torch.vtensor.literal

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-02-05 17:10:42 -08:00
Rob Suderman 041a54ae0c
[torch] Supporting `torch.aten.mul.float` lowering to `arith` (#2833)
The simple scalar operation for multiplying floats was missing.
2024-02-05 16:23:04 -08:00
Rob Suderman e3faef5224
[onnx] Convert `onnx.QLinearConv` to `torch` (#2851)
Leaning on the QDQ functionality in torch, we can support the QLinearConv
operation by piggybacking through `torch.Convolution`. This includes
some changes, such as allowing the `onnx` rewriter to run recursively.
Doing so allows `QLinearConv` to decompose to `onnx.Convolution`, which
is then lowered to `torch`.
2024-02-05 16:09:41 -08:00
Rob Suderman cb52c4b3cc
[onnx] Fix `onnx-to-torch` lowering for flatten shape (#2834)
The existing `flatten` lowering did not define what the intermediate
shape was. This could result in failures to lower further to linalg as
the intermediate shape was unknown. Added a shape refinement section.
2024-02-05 14:23:46 -08:00
Xida Ren (Cedar) b3a56c0711
Update add_ops to mention llvm-project/mlir/utils/generate-test-checks.py (#2862) 2024-02-05 12:13:43 -08:00