torch-mlir/lib
Latest commit d466d5b809 by Ze Zhang (2024-07-05 11:02:03 -07:00)

Register fake_quantize related ops (#3522)

Register the `aten.fake_quantize_per_channel_affine` and
`aten.fake_quantize_per_tensor_affine.tensor_qparams` ops.

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
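The newly registered ops correspond to PyTorch's eager-mode fake-quantization API. Below is a minimal, hypothetical sketch (not part of the commit) of the calls that should map to these aten ops when a model is imported through torch-mlir; the shapes, scales, and zero points are illustrative assumptions.

```python
import torch

x = torch.randn(2, 3, 4)

# Per-tensor fake quantization with tensor qparams: passing scale and
# zero_point as tensors (rather than Python scalars) should correspond to
# aten.fake_quantize_per_tensor_affine.tensor_qparams.
scale = torch.tensor(0.1)
zero_point = torch.tensor(0, dtype=torch.int32)
y = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, 0, 255)

# Per-channel fake quantization along axis 1, with one (scale, zero_point)
# pair per channel; this is the aten.fake_quantize_per_channel_affine op.
ch_scale = torch.full((3,), 0.05)
ch_zero_point = torch.zeros(3, dtype=torch.int32)
z = torch.fake_quantize_per_channel_affine(x, ch_scale, ch_zero_point, 1, 0, 255)

print(y.shape, z.shape)  # both torch.Size([2, 3, 4])
```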
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| CAPI | [ONNX] add int16 quantization support (#3446) | 2024-06-12 10:37:22 +05:30 |
| Conversion | [ONNX] Fix bug in ONNXToTorch PadOp's pads tensor rearrangement (#3485) | 2024-07-03 15:02:49 -05:00 |
| Dialect | Register fake_quantize related ops (#3522) | 2024-07-05 11:02:03 -07:00 |
| RefBackend | [NFC] Change to *cast instead of .*cast variants (#3405) | 2024-05-30 23:45:13 -07:00 |
| CMakeLists.txt | Link necessary op interface implementations (#3364) | 2024-06-03 19:43:28 -05:00 |
| InitAll.cpp | [Stablehlo] support uint8 (#3367) | 2024-06-04 09:04:59 +08:00 |