Commit Graph

2588 Commits (d51e80b648cf114165a466a36b237dd5e2949009)

Author SHA1 Message Date
Rob Suderman d51e80b648
[onnx] Fix onnx.gather lowering for rank-0 indices (#2973)
We assumed the rank was at least 1; however, it can be rank-0, generating an
illegal pair of flatten / unflatten operations. Corrected this.
2024-03-04 08:25:19 -08:00
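A small PyTorch sketch (data and index values are made up) of why rank-0 indices need no flatten/unflatten pair: a scalar index simply drops the gathered axis rather than keeping a size-1 dimension.

    import torch

    data = torch.arange(12).reshape(3, 4)
    idx = torch.tensor(1)   # rank-0 index

    # Gather(axis=0) with a scalar index is plain indexing; the gathered
    # axis disappears from the result instead of being flattened/unflattened.
    gathered = data[idx]    # shape: (4,)
    print(gathered.shape)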
Yuanqiang Liu 916554f270
[Stablehlo] add torch_to_stablehlo::getBackendTypeForScalarType (#2975) 2024-03-04 23:31:54 +08:00
Rob Suderman 61f0a5facf
[torch] Add an `aten.cat` length-0 canonicalization (#2966)
If an input has length 0 along the concatenation dimension, we can remove
that tensor from the list.
2024-03-01 21:41:12 -08:00
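A minimal sketch (in plain PyTorch, with made-up shapes) of the behavior this canonicalization relies on: a tensor that is length-0 along the cat dimension contributes nothing, so it can be dropped from the operand list.

    import torch

    a = torch.randn(2, 3)
    empty = torch.empty(0, 3)   # length-0 along dim 0

    # cat([a, empty], dim=0) equals a, so the empty operand can be removed.
    assert torch.equal(torch.cat([a, empty], dim=0), a)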
Rob Suderman d030bffc62
[torch] Support `aten.view` rank-0 collapse (#2965)
Collapsing to a rank-0 tensor using `aten.view` previously bailed out. Added
the special case.
2024-03-01 12:31:07 -08:00
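A small illustrative example of the rank-0 collapse (assuming a one-element input, which is what `view([])` requires):

    import torch

    x = torch.tensor([[[3.0]]])   # shape (1, 1, 1), a single element
    scalar = x.view([])           # collapse to a rank-0 tensor
    print(scalar.shape)           # torch.Size([])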
Scott Todd e7d90a4b82
[onnx] Fix type on create_module() in onnx_importer.py. (#2968)
The type returned was changed in
https://github.com/llvm/torch-mlir/pull/2795. This led to errors in the
downstream IREE project: https://github.com/openxla/iree/pull/16622.
2024-02-29 13:01:13 -08:00
Vivek Khandelwal 579ac8b666
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for sub and sum op (#2954)
This commit adds support for scalar conversion to byte.
This commit also fixes the OnnxToLinalg lowering issue for the Onnx.Sub and
Onnx.Sum ops.
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/466 
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/467

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-29 21:48:46 +05:30
mmakevic 76b81e0ccd
Implement lowering of torch.aten.fmod.Tensor (#2767)
Closing https://github.com/nod-ai/SHARK-Turbine/issues/351
2024-02-29 11:22:03 +05:30
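For reference, a hedged sketch of the `fmod` semantics the lowering needs to match (values are illustrative): the result takes the sign of the dividend, i.e. fmod(x, y) = x - y * trunc(x / y).

    import torch

    x = torch.tensor([ 5.0, -5.0,  5.0, -5.0])
    y = torch.tensor([ 3.0,  3.0, -3.0, -3.0])

    expected = x - y * torch.trunc(x / y)    # sign follows the dividend
    assert torch.allclose(torch.fmod(x, y), expected)
    print(torch.fmod(x, y))                  # tensor([ 2., -2.,  2., -2.])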
Aart Bik f21b76b68a
[torch-mlir][sparse] fixed merge conflict (#2967) 2024-02-28 17:14:00 -08:00
Peiming Liu e85a2a87c5
[torch-mlir][sparse] support e2e sparse kernels with COO inputs. (#2939) 2024-02-28 16:08:37 -08:00
Rob Suderman ed6e75908b
Bump LLVM to llvm/llvm-project@e5ed7b6e2f (#2964) 2024-02-28 14:13:26 -08:00
Andreas Falkenberg 5437f32193
[onnx][torch] Lower `onnx.grid_sampler` to the `torch` equivalents (#2952)
This is the lowering of gridsampler from onnx to torch using our prior
implementation of AtenGridSamplerOp.
Several checks for corner cases are implemented. We may decide to move some
of these checks into AtenGridSamplerOp instead of the onnx lowering portion.
2024-02-28 13:52:15 -08:00
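For context, a minimal `torch.nn.functional.grid_sample` call showing the operand shapes the lowering has to handle (the values and sizes are made up):

    import torch
    import torch.nn.functional as F

    inp = torch.randn(1, 3, 8, 8)             # (N, C, H, W)
    # Sampling grid with normalized coords in [-1, 1], shape (N, H_out, W_out, 2).
    grid = torch.rand(1, 4, 4, 2) * 2 - 1

    out = F.grid_sample(inp, grid, mode="bilinear",
                        padding_mode="zeros", align_corners=False)
    print(out.shape)                           # torch.Size([1, 3, 4, 4])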
Rob Suderman e48fe45886
[onnx] Import `onnx` import to pass remaining tests (#2951)
Finish supporting importing the vast majority of `onnx` operations. This
includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support
2024-02-28 12:18:02 -08:00
Rob Suderman 6f3d62ab04
[torch] Fix folders and `cat` and `view` torch lowerings (#2963)
A bunch of small fixes are interlinked and trigger crashes if not
addressed as a group. This includes:

- aten view when expanding from a rank-0 tensor
- slice folder with negative indices
- `aten._shape_as_tensor` folder on a rank-0 tensor
- `aten.cat` of a tensor with a length-0 tensor
2024-02-28 12:04:52 -08:00
Rob Suderman 73b6df9007
[torch] Fix DecomposeAtenInstanceNorm decomposition (#2960)
The decomposition only supported an NCHW lowering; however, the operation can
support arbitrary spatial dimensions. Updated the lowering to better
support spatial dimensions.
2024-02-28 10:27:19 -08:00
Rob Suderman dd673cfa8d
[torch] Add edgecase for aten.shape_to_tensor for rank-0 input (#2962)
The current lowering uses `tensor.from_elements`, which does not allow zero
inputs. In this case we return a `tensor.empty` operation instead.
2024-02-28 09:47:06 -08:00
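A small sketch of why the rank-0 case is special (plain torch is used here only for illustration): a rank-0 tensor has an empty shape, so a shape tensor built from its elements would need zero inputs.

    import torch

    x = torch.tensor(7.0)                  # rank-0 tensor
    print(x.shape)                          # torch.Size([]) -- no dimensions
    shape_tensor = torch.tensor(list(x.shape), dtype=torch.int64)
    print(shape_tensor.numel())             # 0 elements, hence tensor.empty in the lowering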
Rob Suderman 08bc013fcd
[tosa] Fix TOSA batch matmul lowering to correct transpose ordering (#2959)
The corrective transpose at the end was computed incorrectly; it was
actually computing the inverse transpose. Inverting the permutations
fixes the issue.
2024-02-28 09:46:58 -08:00
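A hedged worked example of the mix-up (the permutation values are made up): applying a permutation is not the same as applying its inverse, so using one where the other is needed produces the wrong layout.

    import numpy as np

    perm = [2, 0, 1]                                   # example permutation
    inv = [perm.index(i) for i in range(len(perm))]    # inverse: [1, 2, 0]

    x = np.random.rand(2, 3, 4)
    # Transposing by the inverse undoes transposing by perm; swapping the
    # two gives a differently shaped (wrong) result.
    assert np.transpose(np.transpose(x, perm), inv).shape == x.shape
    print(inv)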
Rob Suderman 4a7a7d76f8
[onnx] Fix ReduceMean lowering to torch (#2956)
The torch lowering only supported the most recent version. Refactored the
lowering to more easily handle default values and optional operands /
attributes.
2024-02-27 22:48:07 -08:00
Abhishek-TyRnT d541779f37
Add support for torch arange float module (#2749)
Added support for float dtype in torch.arange in the TOSA dialect.

This resolves the following issue:
https://github.com/llvm/torch-mlir/issues/2762

The following test cases are passing after this change

1. ArangeDtypeIntModule_basic
2. ArangeFloatModule_basic
3. ArangeNegativeStartFloatModule_basic
4. ArangeStartFloatModule_basic
5. ArangeStartNegativeStepFloatModule_basic
6. ArangeStartOutDtypeModule_basic
7. ArangeStartStepFloatModule_basic

---------

Co-authored-by: James Newling <james.newling@gmail.com>
2024-02-27 13:40:55 -08:00
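For reference, a tiny example of the float `arange` behavior that the TOSA lowering now covers (the bounds and step are arbitrary):

    import torch

    # Float start/end/step produce a float result.
    print(torch.arange(0.0, 2.0, 0.5))         # tensor([0.0000, 0.5000, 1.0000, 1.5000])
    print(torch.arange(0.0, 2.0, 0.5).dtype)   # torch.float32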
Aart Bik 30212547a9
[torch-mlir][sparse] add JIT test for block sparse SpMV (#2955)
This required adding a "decompose" pass to the torch lowering, since
torch.mv was not directly handled by the lowering to linalg.
2024-02-27 11:49:32 -08:00
Rob Suderman e30a083aff
[torch] Rework lowering to tm_tensor.scatter to stop serialization (#2940)
We previously collapsed and broadcast scatter indices to a single-element
version. We should instead use `tm_tensor.scatter`'s support for
multiple indices and its implicitly broadcast behavior. This avoids
the serialization and avoids materializing a needlessly large indices tensor.
2024-02-27 11:46:57 -08:00
Vivek Khandelwal d628b5fd06
[MLIR][TORCH] Add support for tanh approximation for Gelu op (#2941)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/461

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-27 19:26:01 +05:30
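A minimal sketch of the tanh approximation being added (this is the standard formulation; exact lowering details may differ): gelu(x) ≈ 0.5 · x · (1 + tanh(sqrt(2/π) · (x + 0.044715 · x³))).

    import math
    import torch
    import torch.nn.functional as F

    x = torch.randn(8)
    approx = 0.5 * x * (1.0 + torch.tanh(
        math.sqrt(2.0 / math.pi) * (x + 0.044715 * x.pow(3))))

    # PyTorch exposes the same approximation via the `approximate` flag.
    assert torch.allclose(F.gelu(x, approximate="tanh"), approx, atol=1e-6)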
Vivek Khandelwal d81747eadb
[MLIR][TORCH] Extend support for OnnxToLinalg lowering for Dropout and Div op (#2938)
Fixes https://github.com/nod-ai/SHARK-Turbine/issues/451,
https://github.com/nod-ai/SHARK-Turbine/issues/452
2024-02-27 11:02:05 +05:30
Sambhav Jain 3cbe6c98ec
Expose `func_name` to the main fx import API (#2949)
As titled.
2024-02-26 10:08:14 -08:00
ptrifunovic98 c5a1da1910
Implement lowering of torch.aten.norm.Scalar (#2899)
Closes
[nod-ai/SHARK-Turbine#365](https://github.com/nod-ai/SHARK-Turbine/issues/365)
2024-02-26 08:46:56 -08:00
Stella Laurenzo 89e02c195b
Make a typing dependency that is not in older PyTorch backwards compatible. (#2948)
This was found in a downstream that is pegged to an older PyTorch
version.
2024-02-23 15:52:27 -08:00
Rob Suderman ec2b80b433
[ci] Fix mpmath 1.4.0 error by forcing 1.3.0 (#2946)
`mpmath 1.4.0` changes some import locations, breaking `torch`. Pinning
to `1.3.0` to avoid breaking on `python 3.11`.
2024-02-23 13:13:54 -08:00
Aart Bik 4147b280ce
[torch-mlir][sparse] add block sparsity to mlir lowering (#2942)
Also note that we are in the process of proposing SparseTensorMetadata
to PyTorch FX graph export (see
https://github.com/pytorch/pytorch/pull/117907). This will hopefully
eventually replace the current data structures in torch-mlir.
2024-02-23 11:57:20 -08:00
Andreas Falkenberg 55dc8deb92
[torch] GridSample TorchToLinalg lowering (#2883)
Lowers `torch.grid_sample` to the equivalent `linalg` representation.
2024-02-23 09:14:38 -08:00
Vivek Khandelwal 5af249566b
build: manually update PyTorch version (#2933)
Set PyTorch and TorchVision version to nightly release 2024-02-20.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-02-22 21:16:53 +05:30
Rob Suderman 53f6d06ab8
[onnx] Drop `ConstantOfShape` logic form importer, fix torch lowering (#2930)
There is no reason to treat `ConstantOfShape` as a specialized import, as
there exists an onnx-to-torch equivalent. Dropping the special import
handling and adding support for resource conversion substantially
increases test coverage for dynamically shaped tests.
2024-02-21 21:34:43 -08:00
Rob Suderman df2aa1a369
[torch] Fixed edge conditions for strided slicing (#2929)
Strided slicing can occur with a negative stride. In these cases we need
to bound the end differently. This included removing a function that was
generating bad limits.
2024-02-21 21:28:44 -08:00
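A small worked example of the edge condition (pure Python slicing, shown only for intuition): with a negative step, the effective end bound clamps to "just before index 0" rather than to the length.

    x = list(range(10))

    # Positive step: an oversized end clamps to len(x).
    print(x[2:100:2])    # [2, 4, 6, 8]

    # Negative step: iteration runs toward the front, so the end must be
    # bounded differently -- an out-of-range negative end means "include index 0".
    print(x[8::-3])      # [8, 5, 2]
    print(x[8:-100:-3])  # [8, 5, 2]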
Srinath Avadhanula 0f80e75c2e
allow tosa.cast to convert from f32 to f16 (#2934)
According to the [official TOSA
spec](https://www.mlplatform.org/tosa/tosa_spec.html#_cast), `tosa.cast`
allows a cast from `fp32` to `fp16`. We were not previously accounting
for this in the `TorchToTosa` lowering.

Also did a tiny bit of cleanup in the code to make it easier to spot
which conversions are currently allowed.

---------

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
2024-02-20 14:22:38 -08:00
Aart Bik 534b266f2d
[torch-mlir][NFC] remove trailing whitespace (#2936) 2024-02-20 11:23:14 -08:00
Rob Suderman 13113df33e
[onnx] Enable crashing tests (#2928)
Crashing tests no longer crash; enable them as either passing or xfail tests.

Co-authored-by: Xida Ren (Cedar) <cedar.ren@gmail.com>
2024-02-20 18:34:21 +00:00
Rob Suderman 13553d49c9
[onnx] Update the importer to create a `none` for missing operands (#2931)
Some operands are optional so we require a placeholder for missing
operands. We invent an `onnx.None` operation as our placeholder.
2024-02-20 09:30:30 -08:00
Stella Laurenzo 4446fa00d8
Migrate passes in TorchConversion to use FunctionOpInterface. (#2935)
This enables better re-use in downstreams that use different func
implementations and should have no impact on those that don't, except in
opt pipelines using the old form. With interfaces, explicit pipelines
via `--pass-pipeline=` must be used.
2024-02-20 08:54:02 -08:00
Rob Suderman 135c81a416
[torch] Add folder for `prim.NumToTensor.Scalar` (#2921)
Useful for `slice` lowerings that depend on tensors made from scalars.
2024-02-19 11:55:54 -08:00
Rob Suderman e80054a3cc
[torch] Folders for `torch.aten.*.tensor` operators [add, sub, mul] (#2878)
Simple folders for limited-size aten tensor operations. This is primarily
useful for shape-computation folding, as shape computations unfortunately
can use `aten` operators. Add, sub, and mul are common examples of these folders.
2024-02-19 10:28:23 -08:00
Rob Suderman cea51897a5
[onnx] Simplify onnx.slice lowering (#2919)
The onnx slice lowering used arange needlessly instead of directly
constructing the constant dimension values. This made lowerings to
linalg struggle, as multiple folders were required to recover what is a
constant index value.
2024-02-19 10:26:29 -08:00
Rob Suderman fd08578bdb
[torch] Support dynamic step size for `torch.slice` (#2922)
For some reason we did not directly use the step size dynamically,
despite it being constructed from the dynamic value.
2024-02-19 10:26:21 -08:00
aldesilv d29157b33f
OnnxToTorch support for onnx.InstanceNormalization op (#2710)
https://github.com/nod-ai/SHARK-Turbine/issues/327
2024-02-19 19:53:48 +05:30
Aart Bik 78e10ff09b
[torch-mlir][sparse] inline sparse helper methods (#2918)
Even though the reference compiler is not about performance, inlining
the generated sparse helper methods has a rather big positive impact on
performance, leaving a much better first impression. Therefore, we added
this inlining pass (which leaves all other PyTorch modules unaffected,
since they tend to be one big main() method to start with).

testing:

$ ./tools/e2e_test.sh --config linalg

Summary:
    Passed: 1164
    Expectedly Failed: 8

$ python -m e2e_testing.main --config=torchdynamo

Summary:
    Passed: 976
    Expectedly Failed: 162
2024-02-16 20:56:42 -08:00
Rob Suderman d65925a8b4
[onnx] Fix `onnx.sigmoid` for integer inputs/outputs (#2914)
A sample compilation crashes due to sigmoid with integer inputs/outputs.
This fix avoids the crash but still produces an error.
2024-02-16 13:35:25 -08:00
Rob Suderman 7a0d0e954b
[onnx] Fix onnx.gather lowering to use torch.aten.index_select (#2913)
Onnx's gather maps directly to `torch.aten.index_select`. We should just
use that path.
2024-02-16 16:05:44 -05:00
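A small illustration of the mapping (shapes and values are made up): ONNX `Gather` with a rank-1 indices tensor is exactly `torch.index_select` along the given axis.

    import torch

    data = torch.arange(12.0).reshape(3, 4)
    idx = torch.tensor([2, 0])

    # Gather(axis=0, indices=idx) on `data` is index_select along dim 0.
    assert torch.equal(torch.index_select(data, 0, idx), data[idx])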
Rob Suderman 468c533942
[onnx] Fix crash when negative transpose values exist (#2915)
We are crashing due to indexing into a negative shape. Updated the
lowering to avoid the crash.
2024-02-16 16:04:47 -05:00
Aart Bik c5d8c12469
[torch-mlir][sparse][NFC] fixed typo (#2917)
grammar police
2024-02-16 13:02:00 -08:00
Stella Laurenzo 5253282c55
[fx] Support mutation in ExportedProgram. (#2916)
As of https://github.com/pytorch/pytorch/pull/118969, `ExportedProgram`
has the long awaited fixes to correctly categorize various things
relating to parameters, buffers, mutated inputs and constants.

With this additional modeling, we are finally able to implement
(safely/soundly) the mutable semantics that were attempted on the
TorchScript path. The difference is that on that path, we had to
conservatively treat everything as mutable and run some dodgy heuristics
(which have been the cause of many bugs relating to
"MaximizeValueSemantics") to try to get back to an immutable state.

The new model supports mutability at the graph edges, allowing both user
inputs and buffers to be mutated (there is some more support than that,
but that is all I fully tracked through to implementation).

Therefore, when we receive programs like this, we now can selectively
enable mutation at the edges. This happens to be the mutability model
that IREE supports, which I expect to be a primary beneficiary. However,
there is nothing stopping anyone else from handling the `!torch.tensor`
types and the existing copy/overwrite ops that will be selectively
added.

Since this relies on API changes that will not release until 2.3, I'm
being a bit cautious about not refactoring existing facilities.
2024-02-16 09:46:30 -08:00
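A hedged sketch of the kind of program this enables (PyTorch 2.2+ behavior; the exact signature attribute shown is an assumption): a module that mutates a registered buffer in `forward`, which `torch.export` now models explicitly instead of rejecting it.

    import torch

    class Counter(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer("count", torch.zeros(1))

        def forward(self, x):
            self.count.add_(1)          # in-place buffer mutation
            return x + self.count

    ep = torch.export.export(Counter(), (torch.ones(3),))
    # The graph signature records which outputs write back into buffers
    # (attribute name assumed from recent PyTorch releases).
    print(ep.graph_signature.buffers_to_mutate)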
Rob Suderman 074f112d6a
[onnx] Add testing using the `onnx` compilation using torch tests (#2795)
We can route the torch tests via `onnx` using the `torch.onnx.export`
tooling. We can then reimport, lower to torch, and compile to linalg to
validate the onnx path is working correctly.

The current implementation exposes some failures in the `onnx` path so
we cannot enable the onnx test suite yet due to segmentation faults.
2024-02-15 10:17:13 -08:00
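A minimal sketch of the first half of that round trip (the module and file name are made up); the resulting `.onnx` file is what the importer then reimports, lowers to torch, and compiles to linalg:

    import torch

    class AddOne(torch.nn.Module):
        def forward(self, x):
            return x + 1.0

    # Export a trivial module to ONNX for reimport through the onnx path.
    torch.onnx.export(AddOne(), (torch.randn(2, 3),), "add_one.onnx")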
Yuanqiang Liu 49f63df068
[bazel] commit after run buildifier (#2912) 2024-02-16 01:56:09 +08:00
Yuanqiang Liu 5733c84443
[bazel] fix bazel with stablehlo refbackend and fix some typo (#2911) 2024-02-16 01:38:13 +08:00