torch-mlir/test/Conversion/TorchToLinalg
Latest commit: 58489faf7f by jinchen, 2024-10-08 10:37:31 -07:00
torch.aten.squeeze.dim lowering with dynamic dims (#3749)

Addresses https://github.com/nod-ai/SHARK-ModelDev/issues/846. The lowering assumes the dynamically sized squeezed dimension has extent 1.
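A minimal sketch of the kind of test case this lowering targets; the function name, shapes, and RUN invocation here are illustrative assumptions, not the contents of squeeze.mlir below:

```mlir
// Hypothetical example: squeeze a dynamically sized leading dimension.
// RUN: torch-mlir-opt <%s -convert-torch-to-linalg -split-input-file | FileCheck %s

// CHECK-LABEL: func.func @torch.aten.squeeze.dim$dynamic
func.func @torch.aten.squeeze.dim$dynamic(%arg0: !torch.vtensor<[?,?,?],f32>) -> !torch.vtensor<[?,?],f32> {
  %int0 = torch.constant.int 0
  // The extent of dim 0 is unknown at compile time; per the commit above,
  // the lowering assumes it is 1 so the dimension can be dropped.
  %0 = torch.aten.squeeze.dim %arg0, %int0 : !torch.vtensor<[?,?,?],f32>, !torch.int -> !torch.vtensor<[?,?],f32>
  return %0 : !torch.vtensor<[?,?],f32>
}
```

Since the squeezed extent cannot be verified statically, the conversion drops the dimension and relies on the runtime value actually being 1.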
basic.mlir: [TorchToLinalg] Use `linalg.transpose` instead of `generic` when lowering `aten.T` (#3660), 2024-09-07 08:09:10 +02:00
broadcast.mlir: [TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (#2505), 2023-10-05 15:15:26 -04:00
convolution.mlir: [TorchToLinalg][test] Add test for ConvertAtenConvolutionOp (#3679), 2024-08-30 09:51:50 +00:00
elementwise.mlir: Revert "[TorchToLinalg] perform rank0 elementwise computations outside linalg generic ops (#3762)" (#3767), 2024-10-04 14:48:02 -07:00
flatten.mlir: Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 (#3300), 2024-05-08 14:43:06 -04:00
gridsampler.mlir: [TorchToLinalg] remove `extract_slice` grid_sample lowering (#3483), 2024-08-20 14:23:43 -07:00
pooling.mlir: TorchToLinalg: Try folding shape computations to keep static shapes when possible (#3475), 2024-06-27 08:43:10 +02:00
resize.mlir: [TorchToLinalg] Fix possible OOB access in Interpolate lowering (#3570), 2024-08-02 13:55:37 -05:00
sparse.mlir: [torch-mlir] bump stablehlo/llvm version (#3471), 2024-06-18 16:59:53 -07:00
squeeze.mlir: torch.aten.squeeze.dim lowering with dynamic dims (#3749), 2024-10-08 10:37:31 -07:00
unsqueeze.mlir: Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 (#3300), 2024-05-08 14:43:06 -04:00
view.mlir: Add support for multiple dynamic reassociation dims for unflatten.int (#3504), 2024-06-28 09:59:51 -07:00
view_strict.mlir: TorchToLinalg: Try folding shape computations to keep static shapes when possible (#3475), 2024-06-27 08:43:10 +02:00