mirror of https://github.com/llvm/torch-mlir
Commit de28c8540b
There is currently no int16 quantization support in torch. This patch adds a new MLIR type corresponding to the missing "torch.qint16" type and enables lowering of quantization-related ONNX ops that use int16 types. In follow-up patches, the custom quantization logic for ops like aten.matmul/aten.mm/aten.convolution may need to be revisited to support qint16, and the passes in FuseQuantizedOps.cpp may also need slight modifications.
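As a rough illustration of what the patch describes (not its actual code), the following C++ sketch shows how a singleton quantized type can be declared with MLIR's `Type::TypeBase` pattern. The class name, namespaces, and placement here are assumptions for illustration; torch-mlir most likely defines the type in TableGen alongside the existing qint8/quint8 types and regenerates the TorchTypes.h declarations from there.

```cpp
// Hypothetical sketch of a singleton !torch.qint16 type, modeled on how
// MLIR singleton types are declared with Type::TypeBase. Illustrative only;
// the real patch likely goes through TableGen (TorchTypes.td) instead.
#include "mlir/IR/Types.h"

namespace mlir {
namespace torch {
namespace Torch {

// Models a signed 16-bit quantized integer scalar type, by analogy with
// the existing qint8/quint8 types in the Torch dialect.
class QInt16Type
    : public Type::TypeBase<QInt16Type, Type, TypeStorage> {
public:
  using Base::Base;
  // Recent MLIR requires a unique name on each type for registration
  // and diagnostics.
  static constexpr StringLiteral name = "torch.qint16";
};

} // namespace Torch
} // namespace torch
} // namespace mlir
```

To become usable, such a type would also need to be registered in the dialect (e.g. via `addTypes<QInt16Type>()` in the dialect initializer) and wired into the type parser/printer so that `!torch.qint16` round-trips in textual IR.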
Dialects.h
Registration.h
TorchOps.h
TorchTypes.h
Transforms.h