torch-mlir/test/Conversion/TorchToLinalg
Latest commit aca33f1742 by Felix Schneider:
[TorchToLinalg] Use Op with native channel order for quantized conv2d (#3807)
I've upstreamed the necessary quantized linalg Op for 2d convolution, using the "channel-first" ordering that torch expects (https://github.com/llvm/llvm-project/pull/107740).

This patch changes the lowering of the quantized 2d case of `aten.convolution` accordingly, which saves three transpositions per convolution (input, weights, result) and removes the need for downstream passes to optimize those transposes away.
Committed 2024-10-22 20:26:16 +02:00
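For context, below is a minimal sketch of the kind of IR this lowering can now produce for the quantized case, assuming the `linalg.conv_2d_nchw_fchw_q` op introduced in llvm/llvm-project#107740. It is not taken from convolution.mlir; the function name and tensor shapes are made up for illustration. The point is that the input, weights, and result stay in torch's NCHW/FCHW layout, whereas the previous lowering had to transpose to a channel-last layout to reuse the existing NHWC quantized op and transpose the result back.

```mlir
// Illustrative only: hypothetical shapes, i8 operands, i32 accumulator.
func.func @quantized_conv2d(%input: tensor<1x3x32x32xi8>,
                            %weight: tensor<8x3x3x3xi8>,
                            %input_zp: i32, %weight_zp: i32) -> tensor<1x8x30x30xi32> {
  // Zero-initialize the accumulator.
  %c0 = arith.constant 0 : i32
  %empty = tensor.empty() : tensor<1x8x30x30xi32>
  %acc = linalg.fill ins(%c0 : i32) outs(%empty : tensor<1x8x30x30xi32>) -> tensor<1x8x30x30xi32>
  // Quantized convolution directly in the channel-first layout: no transposes
  // of the input, weights, or result are needed.
  %res = linalg.conv_2d_nchw_fchw_q
           {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
           ins(%input, %weight, %input_zp, %weight_zp
               : tensor<1x3x32x32xi8>, tensor<8x3x3x3xi8>, i32, i32)
           outs(%acc : tensor<1x8x30x30xi32>) -> tensor<1x8x30x30xi32>
  return %res : tensor<1x8x30x30xi32>
}
```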
| File | Last commit | Date |
| --- | --- | --- |
| basic.mlir | [TorchToLinalg] Use `linalg.transpose` instead of `generic` when lowering `aten.T` (#3660) | 2024-09-07 08:09:10 +02:00 |
| broadcast.mlir | [TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (#2505) | 2023-10-05 15:15:26 -04:00 |
| convolution.mlir | [TorchToLinalg] Use Op with native channel order for quantized conv2d (#3807) | 2024-10-22 20:26:16 +02:00 |
| elementwise.mlir | Revert "[TorchToLinalg] perform rank0 elementwise computations outside linalg generic ops (#3762)" (#3767) | 2024-10-04 14:48:02 -07:00 |
| embeddingBag.mlir | Remove checking for training specific parameters in EmbeddingBag lowering (#3782) | 2024-10-15 09:37:26 -04:00 |
| flatten.mlir | Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 (#3300) | 2024-05-08 14:43:06 -04:00 |
| gridsampler.mlir | [TorchToLinalg] remove `extract_slice` grid_sample lowering (#3483) | 2024-08-20 14:23:43 -07:00 |
| pooling.mlir | TorchToLinalg: Try folding shape computations to keep static shapes when possible (#3475) | 2024-06-27 08:43:10 +02:00 |
| resize.mlir | [TorchToLinalg] Fix possible OOB access in Interpolate lowering (#3570) | 2024-08-02 13:55:37 -05:00 |
| sparse.mlir | [torch-mlir] bump stablehlo/llvm version (#3471) | 2024-06-18 16:59:53 -07:00 |
| squeeze.mlir | torch.aten.squeeze.dim lowering with dynamic dims (#3749) | 2024-10-08 10:37:31 -07:00 |
| unsqueeze.mlir | Integrate llvm-project at dabdec1001dc368373dd581cf72f37a440873ce3 (#3300) | 2024-05-08 14:43:06 -04:00 |
| view.mlir | Add support for multiple dynamic reassociation dims for unflatten.int (#3504) | 2024-06-28 09:59:51 -07:00 |
| view_strict.mlir | TorchToLinalg: Try folding shape computations to keep static shapes when possible (#3475) | 2024-06-27 08:43:10 +02:00 |