torch-mlir/test
zjgarvey 6ff71b40c8
[ONNX] onnx.DynamicQuantizeLinear to Torch (#3009)
This adds support for converting DynamicQuantizeLinear from torch-onnx
to torch.
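
For reference, a minimal NumPy sketch of the computation this lowering needs to express, following the ONNX spec for DynamicQuantizeLinear (uint8 output). The function name and the use of NumPy are illustrative only; the actual conversion emits Torch dialect ops rather than this code:

```python
import numpy as np

def dynamic_quantize_linear(x: np.ndarray):
    """Reference computation for ONNX DynamicQuantizeLinear (uint8)."""
    qmin, qmax = 0.0, 255.0
    # The data range is adjusted so it always includes 0.
    x_min = min(float(x.min()), 0.0)
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    # The zero point is the quantized value representing real 0,
    # rounded and saturated into the uint8 range.
    zero_point = np.clip(np.round(qmin - x_min / scale), qmin, qmax).astype(np.uint8)
    # Quantize: scale, shift by the zero point, saturate, cast to uint8.
    y = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return y, np.float32(scale), zero_point
```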

I could not get an e2e test to pass, since there seem to be some issues
with uint8 casting somewhere lower in the pipeline. For example, when
compiling with IREE for llvm-cpu, I would get either the correct zero
point (if zp < 128) or the correct zero point minus 256 (if zp >= 128).
The output tensor always comes back as all zeros, which also occurs when
running uint8 examples through QuantizeLinear.

Edit: the first problem can be resolved by casting the result back to
uint8 on output; the second problem is resolved by PR #3018.
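
As a purely hypothetical illustration of the zero-point symptom described above (not the actual pipeline code): the observed zp-minus-256 values are what you would see if the uint8 zero point were reinterpreted as a signed 8-bit value somewhere downstream:

```python
import numpy as np

# Reinterpreting uint8 bits as int8: values below 128 survive unchanged,
# values >= 128 wrap around to zp - 256 (two's-complement wraparound).
zps = np.array([42, 200], dtype=np.uint8)
print(zps.view(np.int8))  # [ 42 -56]  (-56 == 200 - 256)
```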
2024-03-20 10:58:25 -07:00
CAPI Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00
Conversion [ONNX] onnx.DynamicQuantizeLinear to Torch (#3009) 2024-03-20 10:58:25 -07:00
Dialect [torch] Add folder for torch.aten.*.Scalar comparisons (#3000) 2024-03-08 13:44:00 -08:00
RefBackend Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00
python Normalize type hints to be compatible with multiple Python versions (#3028) 2024-03-15 08:29:48 -07:00
CMakeLists.txt Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00
lit.cfg.py [onnx] Add torch-mlir-import-onnx tool. (#2637) 2023-12-12 22:01:30 -08:00
lit.site.cfg.py.in Re-organize project structure to separate PyTorch dependencies from core project. (#2542) 2023-11-02 19:45:55 -07:00