mirror of https://github.com/llvm/torch-mlir
fb21a85874
The linalg op `linalg.conv_2d_ngchw_fgchw` had a bug where:

1. Weights were accessed as G,F,C,H,W instead of F,G,C,H,W.
2. Output was accessed as N,F,G,H,W instead of N,G,F,H,W.

This has now been fixed in https://github.com/llvm/llvm-project/pull/73855, which in turn broke the torch-mlir lowering to that op. This patch switches the torch-mlir lowering to the newly introduced `linalg.conv_2d_ngchw_gfchw` op, which accesses weights in an order compatible with PyTorch's memory layout.

Fixes https://github.com/llvm/torch-mlir/issues/2622
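Why the G,F,C,H,W (`gfchw`) ordering lines up with PyTorch can be sketched with a small NumPy example (shapes and variable names here are illustrative, not taken from the patch). PyTorch stores grouped-convolution weights as (out_channels, in_channels/groups, kH, kW), with the output channels of each group contiguous in the leading dimension, so splitting that dimension into (G, F) is a pure reshape with no data movement, while an F,G,C,H,W layout would require an actual transpose:

```python
import numpy as np

# Illustrative sizes: 2 groups, 3 filters per group, 4 input channels
# per group, 3x3 kernels.
G, F, C, kH, kW = 2, 3, 4, 3, 3

# PyTorch-style grouped conv weight: (out_channels, in_channels/groups, kH, kW),
# where out_channels = G * F and channels of the same group are contiguous.
w = np.arange(G * F * C * kH * kW, dtype=np.float32).reshape(G * F, C, kH, kW)

# G,F,C,H,W (gfchw): just a view of the same buffer -- no data movement.
w_gfchw = w.reshape(G, F, C, kH, kW)

# F,G,C,H,W (fgchw): needs a genuine transpose of the group/filter axes;
# naively reshaping to (F, G, ...) would pair the wrong filters with groups.
w_fgchw = w_gfchw.transpose(1, 0, 2, 3, 4)
```

Filter `f` of group `g` sits at flat index `g * F + f` in the PyTorch layout, which is exactly what the `gfchw` view exposes; the buggy `fgchw` interpretation reads the buffer as if the flat index were `f * G + g` instead.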
Files:

- CMakeLists.txt
- DataMovement.cpp
- IndirectDataMovement.cpp
- Linear.cpp
- Pooling.cpp
- PopulatePatterns.h
- Random.cpp
- Reduction.cpp
- TensorConstructors.cpp
- TensorScalarInterop.cpp
- TorchToLinalg.cpp
- Uncategorized.cpp
- Utils.cpp