torch-mlir/lib/Conversion/TorchToLinalg
Vivek Khandelwal f6721e5999
[MLIR][TORCH] Add support for negative step in aten.slice.Tensor op (#3763)
This commit adds support for negative step values in the
aten.slice.Tensor op. Although PyTorch itself does not allow a negative
step for its slice op, the Onnx.Slice op does, and it eventually lowers
to torch.aten.slice.Tensor. Hence, handling for such values is added to
the Torch->Linalg lowering of aten.slice.Tensor (a sketch of the
equivalent index arithmetic follows below).

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-10-08 10:34:27 +05:30
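
To make the negative-step semantics concrete, here is a minimal Python/PyTorch sketch, not the C++ lowering in this directory, of how a slice with step < 0 can be rewritten as a flip of the sliced dimension followed by a positive-step slice. The helper name negative_step_slice and its clamping rules are illustrative assumptions based on standard Python/ONNX slicing, not code from this commit.

```python
import torch

def negative_step_slice(x: torch.Tensor, dim: int, start: int, end: int, step: int) -> torch.Tensor:
    """Hypothetical helper: emulate x[start:end:step] along `dim` for step < 0
    by flipping `dim` and then slicing with the positive step -step."""
    assert step < 0, "this sketch only covers the negative-step case"
    size = x.size(dim)
    # Resolve from-the-end indices and clamp the way Python/ONNX slicing does
    # for a negative step: both bounds end up in [-1, size - 1].
    if start < 0:
        start += size
    if end < 0:
        end += size
    start = min(max(start, -1), size - 1)
    end = min(max(end, -1), size - 1)
    # In the flipped tensor, index i corresponds to original index size-1-i,
    # so [start:end:step] becomes [size-1-start : size-1-end : -step].
    flipped = torch.flip(x, dims=[dim])
    idx = [slice(None)] * x.dim()
    idx[dim] = slice(size - 1 - start, size - 1 - end, -step)
    return flipped[tuple(idx)]

# Usage check: every other element in reverse matches plain negative-step indexing.
t = torch.arange(10)
assert torch.equal(negative_step_slice(t, 0, 8, 1, -2), t[8:1:-2])
```

The flip-then-positive-step rewrite is one common way to express negative-step slicing in terms of ops that only accept positive strides; whether the actual lowering uses this or direct index arithmetic, the resulting elements must match the semantics shown above.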
CMakeLists.txt Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00
DataMovement.cpp [MLIR][TORCH] Add support for negative step in aten.slice.Tensor op (#3763) 2024-10-08 10:34:27 +05:30
IndirectDataMovement.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
Linear.cpp [MLIR][TORCH] Add support for negative step in aten.slice.Tensor op (#3763) 2024-10-08 10:34:27 +05:30
Pooling.cpp [TorchToLinalg] Support lowering MaxPool3dWithIndices (#3652) 2024-08-27 14:14:25 -05:00
PopulatePatterns.h Re-enable custom op support 2022-08-16 22:49:08 +05:30
Random.cpp [TorchToLinalg] address a dtype mismatch in `aten.multinomial` lowering (#3630) 2024-08-20 15:14:48 -05:00
Reduction.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
TensorConstructors.cpp Adds misc fixes for some padding related issues (#3528) 2024-07-11 20:01:45 -05:00
TensorScalarInterop.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
TorchToLinalg.cpp [TorchToLinalg][ONNX] Add Basic Determinant Support (#3481) 2024-06-25 13:34:19 -05:00
Uncategorized.cpp Revert "[TorchToLinalg] perform rank0 elementwise computations outside linalg generic ops (#3762)" (#3767) 2024-10-04 14:48:02 -07:00
Utils.cpp [MLIR][TORCH] Add support for negative step in aten.slice.Tensor op (#3763) 2024-10-08 10:34:27 +05:30