torch-mlir/python/test
Sean Silva 7731211d02 Remove eager_mode
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make robust.
Op-by-op execution is now straightforward to implement robustly with the
PyTorch 2.0 stack, so we no longer need eager_mode.
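As a hedged illustration of that claim (this is not torch-mlir code; the
backend name and example function are made up), an op-by-op executor on
the PyTorch 2.0 stack can be as small as a TorchDynamo backend that
replays the captured FX graph through `torch.fx.Interpreter`:

    # Minimal sketch, assuming PyTorch >= 2.0; names are illustrative.
    import torch
    import torch.fx

    def op_by_op_backend(gm: torch.fx.GraphModule, example_inputs):
        # TorchDynamo hands us an FX graph; torch.fx.Interpreter runs it
        # one node (one op) at a time instead of as a fused whole.
        return torch.fx.Interpreter(gm).run

    @torch.compile(backend=op_by_op_backend)
    def f(x):
        return torch.relu(x) + 1.0

    print(f(torch.randn(4)))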

Downstream users relied on eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681, so there is
not much reason to continue maintaining eager_mode.
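For context, here is a hedged sketch of the lockstep idea (not the actual
PR 1681 implementation; the class and backend names are invented): a
TorchDynamo backend that replays the FX graph node by node and logs
per-op output statistics, so two runs can be diffed op by op:

    # Hedged sketch, not torch-mlir code; assumes PyTorch >= 2.0.
    import torch
    import torch.fx

    class LockstepInterpreter(torch.fx.Interpreter):
        def run_node(self, n):
            result = super().run_node(n)
            if isinstance(result, torch.Tensor):
                # Log a per-op fingerprint; diff these logs across two
                # runs/backends to localize where numerics diverge.
                print(f"{n.op} {n.target}: "
                      f"mean={result.float().mean().item():.6f} "
                      f"max_abs={result.abs().max().item():.6f}")
            return result

    def lockstep_backend(gm: torch.fx.GraphModule, example_inputs):
        return LockstepInterpreter(gm).run

    @torch.compile(backend=lockstep_backend)
    def f(x):
        return torch.tanh(x) @ x.T

    f(torch.randn(3, 3))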
2022-12-09 03:50:00 -08:00
Name                  Last commit message                                         Last commit date
compile_api           Automatically strip overloads for FX-based models.          2022-11-29 22:19:09 -05:00
debug                 [torchdynamo] Add "lockstep" numerical accuracy debugger.   2022-12-06 07:57:45 -08:00
lazy_backend          Fix LTC lib_torch_mlir_ltc.so import error (#1283)          2022-08-25 18:25:01 -04:00
torchscript_e2e_test  Remove "torchscript" association from the e2e framework.    2022-08-29 14:10:03 -07:00
CMakeLists.txt        Move external/torch-mlir to the root of the repo.           2021-09-27 17:11:08 -07:00
annotations-sugar.py  Remove "torchscript" association from the e2e framework.    2022-08-29 14:10:03 -07:00
lit.cfg.py            Miscellaneous fixes for Windows builds (#1376)               2022-09-29 12:07:43 -05:00
lit.site.cfg.py.in    Dual license the torch-mlir project.                         2021-10-01 10:46:08 -07:00