mirror of https://github.com/llvm/torch-mlir
dc37616d67
Handle both `torch.dequantize` and `torch.quantize_per_tensor`, including op-based quantization parameter tracking. This includes adding `qint32` to torch types, as it was missing from the initial type inclusion. For testing we only have `torch.int8` and `torch.float` types on function boundaries, since the `qint8` types require passing the scale and zero-point quantization information, which is not supported yet.
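For context, the affine quantization semantics these ops implement can be sketched in plain Python (a minimal illustration, not the torch-mlir lowering itself; the function names and `int8` range here are assumptions for the example):

```python
def quantize_per_tensor(x, scale, zero_point, qmin=-128, qmax=127):
    """Affine quantization: q = clamp(round(x / scale) + zero_point).

    qmin/qmax default to the signed int8 range, matching the qint8 case
    discussed above.
    """
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in x]


def dequantize(q, scale, zero_point):
    """Inverse mapping back to float: x = (q - zero_point) * scale."""
    return [(v - zero_point) * scale for v in q]


values = [0.0, 0.1, -0.1]
q = quantize_per_tensor(values, scale=0.05, zero_point=0)
restored = dequantize(q, scale=0.05, zero_point=0)
```

This is why the scale and zero point must travel with the quantized tensor: without them the integer values cannot be mapped back to floats, which is the function-boundary limitation noted above.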