torch-mlir/test/python/onnx_importer
Dave Liddell 04be6ba773
Make the onnx importer more robust for internal/external and large models (#2794)
Fix for https://github.com/llvm/torch-mlir/issues/2765

The ONNX docs say that shape inference via the in-memory API does not work
for models larger than 2 GB. This fix replaces that API with the file-based
API. Since the file-based API generates an intermediate file, this change
also adds a --keep switch to preserve that file; it is deleted by default.
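The workflow described above can be sketched as follows. This is a minimal, dependency-free illustration, not the importer's actual code: the helper name `infer_shapes_via_file` is hypothetical, and the shape-inference step (in the real fix, `onnx.shape_inference.infer_shapes_path`, which reads and writes model files on disk so it also handles models over 2 GB) is stubbed out as a file copy.

```python
import os
import tempfile


def infer_shapes_via_file(model_path: str, keep_temp: bool = False):
    """Hypothetical sketch of the file-based shape-inference flow.

    Writes the inferred model to an intermediate file, loads it back,
    and deletes the intermediate file unless keep_temp (the --keep
    switch) is set. Returns (model_bytes, intermediate_path).
    """
    fd, inferred_path = tempfile.mkstemp(suffix=".onnx")
    os.close(fd)
    try:
        # Stand-in for the real call, roughly:
        #   onnx.shape_inference.infer_shapes_path(model_path, inferred_path)
        # Here we just copy the bytes so the sketch stays runnable.
        with open(model_path, "rb") as src, open(inferred_path, "wb") as dst:
            dst.write(src.read())
        with open(inferred_path, "rb") as f:
            return f.read(), inferred_path
    finally:
        # Default behavior: clean up the intermediate file.
        # --keep (keep_temp=True) leaves it on disk for inspection.
        if not keep_temp and os.path.exists(inferred_path):
            os.remove(inferred_path)
```

Keeping the intermediate file is useful when debugging a large model whose inferred shapes need to be inspected separately from the import itself.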

---------

Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-31 21:58:43 -08:00
.gitignore Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00
LeakyReLU.onnx [onnx] Add torch-mlir-import-onnx tool. (#2637) 2023-12-12 22:01:30 -08:00
_torch_mlir_config.py Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00
command_line_test.py Make the onnx importer more robust for internal/external and large models (#2794) 2024-01-31 21:58:43 -08:00
import_onnx_tool.runlit [onnx] Add torch-mlir-import-onnx tool. (#2637) 2023-12-12 22:01:30 -08:00
import_smoke_test.py Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00
lit.local.cfg Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00