Attempt to solve https://github.com/llvm/torch-mlir/issues/2490
Changes for non-value-semantic ops that have the
`IsTrailingUnderscoreInplaceVariant` trait:
- AnyTorchTensorType -> Torch_NonValueTensorType
- AnyTorchOptionalTensorType -> AnyTorchOptionalNonValueTensorType
- AnyTorchListOfOptionalTensorType -> AnyTorchListOfOptionalNonValueTensorType
- AnyTorchListOfTensorType -> AnyTorchListOfNonValueTensorType
Created three new tensor types for optional and list non-value tensors.
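For context, a minimal PyTorch illustration of why trailing-underscore in-place variants need non-value (mutable) tensor types:

```python
import torch

x = torch.ones(3)
y = x.add(1.0)  # value semantics: returns a fresh tensor; x is unchanged
x.add_(1.0)     # trailing-underscore in-place variant: mutates x itself
# In torch-mlir, the mutated operand must therefore be modeled with a
# non-value tensor type (e.g. Torch_NonValueTensorType) rather than a
# value tensor type.
```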
Add aten.isclose op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests
To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`
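As a reference for the semantics the lowering must match, torch.isclose compares elementwise using `|a - b| <= atol + rtol * |b|`; a minimal sketch:

```python
import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([1.0 + 1e-9, 2.5])
# elementwise check: |a - b| <= atol + rtol * |b|
print(torch.isclose(a, b, rtol=1e-05, atol=1e-08))  # tensor([ True, False])
```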
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
Add aten.unflatten.int op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests
To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`
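For reference, `aten.unflatten.int` expands a single dimension into several; a minimal sketch of the expected behavior:

```python
import torch

x = torch.randn(2, 12)
y = x.unflatten(1, (3, 4))   # dim 1 of size 12 becomes (3, 4)
print(y.shape)               # torch.Size([2, 3, 4])
z = x.unflatten(1, (3, -1))  # a single -1 is inferred, also (2, 3, 4)
```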
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
Add linspace/cumprod/roll ops to ODS, and add shape inference functions
to make them work with LTC.
Also, add some tensor utils to the LTC library for searching for
non-detach copy nodes.
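A quick illustration of the shape rules the new inference functions encode (illustrative only, not the actual LTC code):

```python
import torch

torch.linspace(0, 1, steps=5).shape       # [5]: output shape is [steps]
torch.randn(2, 3).cumprod(dim=1).shape    # [2, 3]: same shape as the input
torch.randn(2, 3).roll(1, dims=0).shape   # [2, 3]: same shape as the input
```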
Set PyTorch and TorchVision version to nightly release 2023-09-28.
The aten.baddbmm changes were needed because upstream PyTorch has now
added support for fp16 gemm on CPU.
Refer: 9399e0b1ff
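For reference, baddbmm computes `beta * input + alpha * (batch1 @ batch2)` batched over the leading dimension; a minimal fp16 CPU sketch, assuming the nightly above:

```python
import torch

inp = torch.randn(2, 3, 5, dtype=torch.float16)
b1 = torch.randn(2, 3, 4, dtype=torch.float16)
b2 = torch.randn(2, 4, 5, dtype=torch.float16)
# beta * inp + alpha * (b1 @ b2), batched over dim 0; fp16 gemm on CPU
# requires the nightly referenced above
out = torch.baddbmm(inp, b1, b2, beta=1.0, alpha=1.0)
```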
This commit extends the lowering of `aten.view` to handle the
following cases (see the sketch after the list):
- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
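A minimal sketch of the two patterns, appending and dropping trailing unit dimensions:

```python
import torch

a = torch.randn(2, 3)
b = a.view(a.size(0), a.size(1), 1, 1)  # (..., a.size(i)) -> (..., a.size(i), 1, 1)
c = b.view(a.size(0), a.size(1))        # and back: trailing unit dims are dropped
```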
Fixes: https://github.com/llvm/torch-mlir/issues/2448
Set PyTorch and TorchVision version to nightly release 2023-09-26.
The aten._convolution.deprecated changes were needed because upstream
PyTorch has now added support for fp16 native convolution on CPU.
Refer: 7c9052165a
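A minimal sketch of what the upstream change enables (assuming the nightly above):

```python
import torch

x = torch.randn(1, 3, 8, 8, dtype=torch.float16)
w = torch.randn(4, 3, 3, 3, dtype=torch.float16)
# fp16 native convolution on CPU, enabled by the nightly referenced above
y = torch.nn.functional.conv2d(x, w)
```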
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
This recreates #2457; I accidentally merged that PR before the review was complete, so it was reverted.
Add a decomposition for the empty_strided op.
As discussed in #1776, this decomposition supports only default stride values, because indexing into a tensor is determined by its strides.
MLIR does not implicitly support arbitrary strides; it assumes default strides while iterating over a tensor.
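A small sketch of what "default strides" means here, i.e. the contiguous strides implied by the sizes (hypothetical helper, not code from this patch):

```python
# Contiguous (default) strides: strides[i] = prod(sizes[i+1:])
def default_strides(sizes):
    strides = [1] * len(sizes)
    for i in reversed(range(len(sizes) - 1)):
        strides[i] = strides[i + 1] * sizes[i + 1]
    return strides

print(default_strides([2, 3, 4]))  # [12, 4, 1]
```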
We just have to do this. I ran into an issue today where I needed a one-line patch to stablehlo to work around a compiler issue, and it is completely unapparent how to do so, given that the mlir-hlo repo is a read-only export sitting at the tail end of a multi-week integration chain from the open-source stablehlo repo.
We've discussed this often enough, and everyone has given a +1 on taking the e2e testing hit if it becomes necessary. It is now necessary: the current situation is unmanageable.
Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and subprocess-call it to get the testing coverage back. I leave that as an exercise to the users of this part of the stack, and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.
Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
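A hypothetical sketch of the subprocess approach suggested above; the binary name and CLI here are assumptions, not an existing tool:

```python
import subprocess

def run_stablehlo_module(module_path: str) -> str:
    # Hypothetical runner binary built from the stablehlo interpreter;
    # the name and flags are placeholders, not a real tool.
    result = subprocess.run(
        ["stablehlo-interpreter-runner", module_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```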
* view_as_real test case, allow dtype in testutils.randn
* abstract python upstream func implemented
* fixed upstream dtype func, implemented view_as_real backend op (see the sketch after this list)
* formatted AtenViewAsRealOp, removed change in e2etest/framework
* removed test suite from reshape_like.py, because it moved to basic.py
* implemented C-API wrapper for mlirComplexF128 type
* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict
* Changed IR input of aten fft_fft unit test
* code refactored
* code refactored and fixed ci test
* refactored: removed whitespace, and rolled back to having both input/output affine expr
* refactored: deleted output affine expr to reduce redundancy
* xfail ltc backend
* removed ComplexImag and ComplexReal from torchdynamo xfail set
* copied and pasted from main branch as there's no change to be made in this file
* refactored abstract_interp_lib_gen.py
* refactored: torchtypes.td, formatted, removed commented-out code
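A minimal sketch of view_as_real semantics, for reference:

```python
import torch

c = torch.randn(2, 3, dtype=torch.complex64)
r = torch.view_as_real(c)
print(r.shape, r.dtype)  # torch.Size([2, 3, 2]) torch.float32
# the trailing dim holds (real, imag), and the view shares storage with c
```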
* Tensor[]? operand type support using partial codegen
* aten.index.Tensor support via partial codegen
* Add torch.index_put tracing support (see the sketch after this list)
* Added optional tensor list type support for LTC/TorchMLIR lowering
* Added comments
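For reference, a minimal sketch of index_put, whose indices argument is the optional-tensor list (Tensor?[]) type the partial codegen handles:

```python
import torch

x = torch.zeros(3, 3)
# indices is a tuple of (optional) index tensors, one per indexed dim
x.index_put_((torch.tensor([0, 2]), torch.tensor([1, 1])), torch.tensor(5.0))
print(x[0, 1], x[2, 1])  # tensor(5.) tensor(5.)
```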
Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>