torch-mlir/lib
Rob Suderman b5387c0f29
[onnx] Lowering `onnx.dequantize_linear` to `torch` (#2759)
We can lower the per-tensor version of the operation to the dequantize
operation by first marking the tensor as quantized via the make-quantized-tensor
component. This introduces the `qint*` and `quint*` tensor types, which can be
lowered to the appropriate dequantization behavior during the torch-to-linalg
conversion.
2024-01-18 16:47:21 -08:00
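The commit above describes routing `onnx.dequantize_linear` through a make-quantized-tensor step followed by dequantization. As a rough illustration only (not the conversion code itself), the eager-mode PyTorch sequence below sketches the equivalent per-tensor behavior; the integer values, scale, and zero point are made up for the example:

```python
import torch

# Raw integer data plus scale/zero-point, as onnx.dequantize_linear receives them.
int_repr = torch.tensor([0, 10, 20, 255], dtype=torch.uint8)
scale = 0.1
zero_point = 10

# Wrap the raw integers into a quantized (quint8) tensor, then dequantize.
q = torch._make_per_tensor_quantized_tensor(int_repr, scale, zero_point)
result = q.dequantize()  # computes (int_repr.float() - zero_point) * scale

print(result)  # tensor([-1.0000,  0.0000,  1.0000, 24.5000])
```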
CAPI Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00
Conversion [onnx] Lowering `onnx.dequantize_linear` to `torch` (#2759) 2024-01-18 16:47:21 -08:00
Dialect Decompose AtenNormalFunctionalOp into AtenRandn* and other arithmetic. (#2737) 2024-01-15 22:49:29 -08:00
RefBackend [TorchToLinalg] Lower aten.cat to tensor.concat (#2650) 2023-12-15 15:45:32 -05:00
CMakeLists.txt Initial TorchOnnxToTorch conversion pipeline. (#2585) 2023-11-21 21:02:55 -08:00
InitAll.cpp Initial TorchOnnxToTorch conversion pipeline. (#2585) 2023-11-21 21:02:55 -08:00