torch-mlir/projects/ltc
Rob Suderman dc37616d67
[torch][quant] Support quantize and dequantize for torch (#2731)
Handle both `torch.dequantize` and `torch.quantize_per_tensor`, including
the op-based tracking of quantization parameters. This also adds
`qint32` to the torch types, as it was missing from the initial set of
types.
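
At the PyTorch level the two ops look like the following; this is a
minimal eager-mode sketch, the scale and zero-point values are arbitrary
illustrations, and `torch.qint32` is the dtype this change adds coverage for:

```python
import torch

x = torch.randn(4, 4)

# Per-tensor quantization: the scale and zero point are the quantization
# parameters tracked per op; torch.qint32 is the newly covered dtype.
q = torch.quantize_per_tensor(x, 0.1, 0, torch.qint32)

# torch.dequantize recovers an ordinary float tensor.
y = torch.dequantize(q)
print(y.dtype)  # torch.float32
```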

For testing we only use `torch.int8` and `torch.float` types on
function boundaries, since the `qint8` types would require passing the
scale and zero-point quantization information, which is not yet supported.
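
A hedged sketch of what that boundary constraint implies for a test
module: a plain `int8` tensor crosses the function boundary, while the
scale and zero point are baked into the module body. The module name,
the parameter values, and the use of the private helper
`torch._make_per_tensor_quantized_tensor` are illustrative assumptions,
not the repository's actual tests:

```python
import torch

class DequantizeInt8Module(torch.nn.Module):
    # Only a plain torch.int8 tensor crosses the function boundary; the
    # qint8 value (with its scale/zero point) exists only inside the graph.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed helper usage: wrap the raw int8 data as a qint8 tensor
        # with a fixed scale and zero point, then dequantize to float.
        q = torch._make_per_tensor_quantized_tensor(x, 0.05, 10)
        return torch.dequantize(q)

int8_input = torch.randint(-128, 128, (3, 4), dtype=torch.int8)
out = DequantizeInt8Module()(int8_input)  # out.dtype == torch.float32
```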
2024-01-12 19:11:14 -08:00
csrc/base_lazy_backend | [torch][quant] Support quantize and dequantize for torch (#2731) | 2024-01-12 19:11:14 -08:00
CMakeLists.txt | Breakup python pytorch deps (#2582) | 2023-11-19 12:10:19 -08:00