torch-mlir/include/torch-mlir-c
Latest commit de28c8540b by zjgarvey
[ONNX] add int16 quantization support (#3446)
There is currently no int16 quantization support in torch. This patch
adds a new MLIR type corresponding to the missing "torch.qint16" type
and enables lowering of quantization-related ONNX ops that use int16 types.

In follow-up patches, the custom quantization logic for ops such as
aten.matmul, aten.mm, and aten.convolution may need to be revisited to
support qint16, and the passes in FuseQuantizedOps.cpp may also need
slight modifications.
2024-06-12 10:37:22 +05:30
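As a rough illustration of how this change surfaces in the C API, the sketch below constructs the new quantized type through torch-mlir-c. The qint16 entry points are assumed names, patterned after the existing qint8 functions in TorchTypes.h (torchMlirTorchQInt8TypeGet / torchMlirTypeIsATorchQInt8); verify them against the header before relying on them.

```c
// Minimal sketch of exercising the new !torch.qint16 type via the
// torch-mlir C API. The qint16 getter/predicate names below are
// assumptions mirroring the existing qint8 functions in TorchTypes.h.
#include <stdio.h>

#include "mlir-c/IR.h"
#include "torch-mlir-c/Registration.h"
#include "torch-mlir-c/TorchTypes.h"

int main(void) {
  MlirContext ctx = mlirContextCreate();
  // Make the Torch dialect (and its types) available in this context.
  torchMlirRegisterAllDialects(ctx);

  // Construct the new quantized type (assumed name, mirroring
  // torchMlirTorchQInt8TypeGet).
  MlirType qint16 = torchMlirTorchQInt16TypeGet(ctx);

  // The matching predicate (assumed name) should recognize it.
  printf("is !torch.qint16: %d\n",
         (int)torchMlirTypeIsATorchQInt16(qint16));

  mlirContextDestroy(ctx);
  return 0;
}
```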
Dialects.h Clang format refresh (#2812) 2024-01-29 12:59:33 -05:00
Registration.h Enable -Werror in lib/ and LTC. (#2841) 2024-01-30 23:33:21 -08:00
TorchOps.h Add tracing support to `torch_mlir.compile`. 2022-05-03 09:08:40 -07:00
TorchTypes.h [ONNX] add int16 quantization support (#3446) 2024-06-12 10:37:22 +05:30
Transforms.h Fix Base Lazy Backend Type Conversion (#1412) 2022-10-04 15:53:28 -07:00