mirror of https://github.com/llvm/torch-mlir
af9e8a5e63
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275), use `torch._dynamo.optimizations.training.aot_autograd` instead of raw `make_fx`. This is more future-proof and gives us the backward pass and, eventually, functionalization; we don't currently get functionalization because of https://github.com/pytorch/pytorch/issues/90759. This also incidentally fixes the source location handling, which makes `lockstep_basic.py` report accurate source locations!
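As a rough illustration of the approach the commit message describes, the sketch below (not the actual torch-mlir code) drives a trivial compiler backend through `aot_autograd`, which hands the backend separate forward and backward graphs. The import path named in the commit belongs to the PyTorch nightly of that era and was later moved, so the sketch tries both locations; `print_graph_compiler` and `f` are hypothetical names for this example.

```python
import torch

try:
    # Path used at the time of this commit (PyTorch nightly, late 2022).
    from torch._dynamo.optimizations.training import aot_autograd
except ImportError:
    # Later PyTorch releases moved the helper here.
    from torch._dynamo.backends.common import aot_autograd

def print_graph_compiler(gm: torch.fx.GraphModule, example_inputs):
    # torch-mlir would lower `gm` to MLIR here; this sketch just returns
    # the graph module unchanged so the program still runs.
    return gm

# aot_autograd wraps the per-graph compiler so it is invoked once for the
# forward graph and once for the backward graph -- this is why switching
# from raw make_fx also yields the backward pass.
my_backend = aot_autograd(fw_compiler=print_graph_compiler,
                          bw_compiler=print_graph_compiler)

@torch._dynamo.optimize(my_backend)
def f(x):
    return torch.tanh(x) * 2

x = torch.randn(3, requires_grad=True)
f(x).sum().backward()  # exercises both the forward and backward graphs
```

Since `aot_autograd` traces through autograd, the compiled function's gradients match eager mode: here `x.grad` equals `2 * (1 - tanh(x)**2)`.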
Top-level contents:

- _torch_mlir_custom_op_example
- cmake/modules
- csrc
- dialects
- __init__.py
- compiler_utils.py
- dynamo.py