mirror of https://github.com/llvm/torch-mlir
6ff71b40c8
This adds support for converting DynamicQuantizeLinear from the torch-onnx dialect to torch. I could not get an e2e test to pass, since there seem to be some issues with uint8 casting lower in the pipeline. For example, when compiling with IREE for llvm-cpu, I would get either the correct zero point (if zp < 128) or the correct zero point minus 256 (if zp >= 128). The output tensor always seems to be all zeros, which also occurs when running uint8 examples through QuantizeLinear. Edit: the first problem can be resolved by casting back to uint8 on output, and the second problem is resolved by PR #3018.
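For reference, the operator being converted computes a scale and a uint8 zero point from the input's value range, then quantizes with them. The following NumPy sketch (illustration only, not the torch-mlir lowering itself) follows the ONNX DynamicQuantizeLinear semantics and assumes a float32 input with a nonzero value range:

```python
import numpy as np

def dynamic_quantize_linear(x):
    """Reference semantics of ONNX DynamicQuantizeLinear (uint8 output).

    A NumPy sketch for illustration; assumes x is a float32 array
    with a nonzero value range.
    """
    qmin, qmax = 0.0, 255.0
    # The quantization range is widened to always include 0, so that
    # real zero maps exactly onto an integer zero point.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = np.uint8(np.clip(round(qmin - x_min / scale), qmin, qmax))
    y = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(scale), zero_point

# Example: range [-1, 3] gives scale = 4/255 and zero point 64.
y, scale, zp = dynamic_quantize_linear(np.array([-1.0, 0.0, 1.0, 3.0],
                                                dtype=np.float32))
print(y.tolist(), int(zp))  # [0, 64, 128, 255] 64
```

Note that the result tensor is uint8; if a consumer reinterprets it (or the zero point) as a signed type, values of 128 and above change meaning, which is the symptom described above.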
CAPI
Conversion
Dialect
RefBackend
python
CMakeLists.txt
lit.cfg.py
lit.site.cfg.py.in
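The "correct zero point minus 256 when zp >= 128" symptom from the commit message is consistent with the uint8 zero point being reinterpreted as a signed byte somewhere in the pipeline: any uint8 value v >= 128 reads as v - 256 in two's complement. A minimal NumPy illustration of that reinterpretation:

```python
import numpy as np

# A uint8 zero point of 200 has bit pattern 0b11001000; viewed as a
# signed int8, the same bits read as 200 - 256 = -56.
zp = np.array([200], dtype=np.uint8)
print(int(zp.view(np.int8)[0]))  # -56

# Values below 128 are unaffected, matching the "correct if zp < 128" case.
print(int(np.array([100], dtype=np.uint8).view(np.int8)[0]))  # 100
```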