torch-mlir/test
zjgarvey de28c8540b
[ONNX] add int16 quantization support (#3446)
There is currently no int16 quantization support in PyTorch. This patch
adds a new MLIR type corresponding to the missing "torch.qint16" type,
and enables lowering of quantization-related ONNX ops that use int16 types.

In follow-up patches, the custom quantization logic for ops like
aten.matmul/aten.mm/aten.convolution may need to be revisited to
support qint16. The passes in FuseQuantizedOps.cpp may also need
slight modifications.
2024-06-12 10:37:22 +05:30
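
As a hedged illustration (not taken from the patch's test suite), the IR below sketches what the new type makes expressible: a quantize/dequantize round trip through a !torch.qint16 tensor. The function name and the dtype integer code are placeholders, not values confirmed by the patch.

    func.func @qint16_roundtrip(%arg0: !torch.vtensor<[4],f32>) -> !torch.vtensor<[4],f32> {
      %scale = torch.constant.float 0.5
      %zero_point = torch.constant.int 0
      // Placeholder dtype code: the actual integer assigned to qint16 is defined by the patch.
      %dtype = torch.constant.int 27
      // Quantize an f32 tensor into the new 16-bit quantized element type.
      %q = torch.aten.quantize_per_tensor %arg0, %scale, %zero_point, %dtype
          : !torch.vtensor<[4],f32>, !torch.float, !torch.int, !torch.int
          -> !torch.vtensor<[4],!torch.qint16>
      // Dequantize back to f32 using the scale/zero point carried by the quantized type.
      %dq = torch.aten.dequantize.self %q : !torch.vtensor<[4],!torch.qint16> -> !torch.vtensor<[4],f32>
      return %dq : !torch.vtensor<[4],f32>
    }

ONNX-level entry points such as onnx.QuantizeLinear with int16 operands would be expected to lower through IR of roughly this shape.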
Name                 Last commit                                                                                 Date
CAPI                 [NFC reformat] Run pre-commit on all files and format misc.                                 2024-04-27 14:08:09 -07:00
Conversion           [ONNX] add int16 quantization support (#3446)                                               2024-06-12 10:37:22 +05:30
Dialect              [ONNX] add int16 quantization support (#3446)                                               2024-06-12 10:37:22 +05:30
RefBackend           Re-organize project structure to separate PyTorch dependencies from core project. (#2542)   2023-11-02 19:45:55 -07:00
python               [torch-mlir][sparse] re-enable all sparse tests (#3444)                                     2024-06-10 11:19:32 -07:00
CMakeLists.txt       [NFC reformat] Run pre-commit on all files and format misc.                                 2024-04-27 14:08:09 -07:00
lit.cfg.py           [NFC reformat] Applies pre-commit formatting to Python files. (#3244)                       2024-04-27 14:16:31 -07:00
lit.site.cfg.py.in   Re-organize project structure to separate PyTorch dependencies from core project. (#2542)   2023-11-02 19:45:55 -07:00