mirror of https://github.com/llvm/torch-mlir
Commit 5e564b5864:
1. `onnx.MatMulInteger` now converts to `aten.matmul` instead of `aten.mm`.
2. `aten.matmul`, for ranks >= 2, now allows quantized inputs and will lower to `linalg::quantized_matmul` or `linalg::quantized_batch_matmul` (see the semantics sketch after this list).
3. Added `AtenMatmulOp` to the FuseQuantizeOps rewrite patterns `QuantizeOperands`, `QuantizeTransposedOperands`, and `QuantizeAccumulator`.
4. Added several tests, including some that exercise `AtenMmOp` with varying quantization signedness.
5. Added a quantized matmul mat-vec test to verify the failure to lower to linalg; cleaned out out-of-date code related to common torch-mlir lowering xfails.
6. While debugging a real model with quantized matmuls, I found a bug in the scalarize-shapes pass that resulted from the `aten.full` op folder returning an incompatible result type. This is fixed by a small change to [lib/Dialect/Torch/IR/TorchOps.cpp](https://github.com/llvm/torch-mlir/compare/main...zjgarvey:torch-mlir:MatMulIntegerFix?expand=1#diff-dc8ed165c207918e606490eee3984b1ad51d7034e6aac36fc046bf47f6f03f4f).
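For items 1 and 2, here is a minimal NumPy sketch of the integer-matmul semantics the new lowering path has to preserve, following the ONNX `MatMulInteger` spec (subtract each operand's zero point, accumulate in int32). The function name `matmul_integer` and the shapes are illustrative only, not taken from the patch:

```python
import numpy as np

def matmul_integer(a, b, a_zero_point=0, b_zero_point=0):
    """Reference semantics of onnx.MatMulInteger: widen to int32,
    subtract each operand's zero point, then matmul with int32
    accumulation. np.matmul batches over leading dimensions, which
    mirrors how the rank-2 case maps to linalg::quantized_matmul
    and the batched case to linalg::quantized_batch_matmul."""
    a32 = a.astype(np.int32) - np.int32(a_zero_point)
    b32 = b.astype(np.int32) - np.int32(b_zero_point)
    return np.matmul(a32, b32)

# Rank-2 case (quantized_matmul): uint8 x int8 -> int32,
# with mixed signedness as in the AtenMmOp tests mentioned above.
a = np.random.randint(0, 255, size=(4, 8), dtype=np.uint8)
b = np.random.randint(-128, 127, size=(8, 16), dtype=np.int8)
c = matmul_integer(a, b, a_zero_point=128, b_zero_point=0)
assert c.shape == (4, 16) and c.dtype == np.int32

# Batched case (quantized_batch_matmul): leading batch dimension.
a3 = np.random.randint(0, 255, size=(3, 4, 8), dtype=np.uint8)
b3 = np.random.randint(-128, 127, size=(3, 8, 16), dtype=np.int8)
c3 = matmul_integer(a3, b3)
assert c3.shape == (3, 4, 16)
```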
Directory listing:

- TorchConversionToMLProgram
- TorchOnnxToTorch
- TorchToArith
- TorchToLinalg
- TorchToSCF
- TorchToStablehlo
- TorchToTensor
- TorchToTosa