mirror of https://github.com/llvm/torch-mlir
af9e8a5e63
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275), use `torch._dynamo.optimizations.training.aot_autograd` instead of raw `make_fx`. This is more future-proof and gives us the backward pass and functionalization. We don't currently get functionalization because of https://github.com/pytorch/pytorch/issues/90759. This also incidentally fixes source location handling, which makes `lockstep_basic.py` report accurate source locations.
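The idea above can be sketched with AOTAutograd's public wrapper. Note this is a minimal illustration, not the torch-mlir integration itself: the module path in the commit message (`torch._dynamo.optimizations.training.aot_autograd`) is a private API that has moved between PyTorch releases, so this sketch uses `functorch.compile.aot_function`, which exposes the same mechanism. The function `f` and the pass-through compiler are hypothetical examples.

```python
import torch
from functorch.compile import aot_function

def my_compiler(fx_module, example_inputs):
    # AOTAutograd hands us a functionalized FX graph for the forward
    # (and, separately, the backward). A real backend would lower this
    # graph; here we just return it unchanged.
    return fx_module

def f(x):
    return torch.sin(x) * 2.0

# Unlike raw make_fx, aot_function also captures the backward pass
# and routes it through bw_compiler.
compiled = aot_function(f, fw_compiler=my_compiler, bw_compiler=my_compiler)

x = torch.randn(4, requires_grad=True)
y = compiled(x)
y.sum().backward()  # backward graph also goes through my_compiler
```

The key difference from `make_fx` is that the compiler callbacks see both the forward and backward graphs, so a backend gets training support without tracing the autograd engine itself.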
Directory contents:

- compile_api
- debug
- lazy_backend
- torchscript_e2e_test
- CMakeLists.txt
- annotations-sugar.py
- lit.cfg.py
- lit.site.cfg.py.in