torch-mlir/python
Sean Silva af9e8a5e63 [torchdynamo] Move to aot_autograd instead of raw make_fx
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. (We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759.)

This also incidentally fixes source location handling, so
`lockstep_basic.py` now reports an accurate source location!
2022-12-15 01:55:50 -08:00
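
For context, here is a minimal sketch of the pattern this commit adopts, not torch-mlir's actual backend: `my_compiler` and `f` are hypothetical names, and the import path is the private pre-2.0 location the commit message names (the helper moved in later PyTorch releases, e.g. to `torch._dynamo.backends.common`). The payoff over raw `make_fx` is that Dynamo handles graph capture and AOTAutograd handles autograd, so the same callback sees both the forward and backward graphs.

```python
# Minimal sketch, assuming the pre-2.0 import path named in the commit
# message; `my_compiler` and `f` are hypothetical, not torch-mlir code.
import torch
from functorch.compile import make_boxed_func
from torch._dynamo.optimizations.training import aot_autograd


def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    # aot_autograd hands this callback an already-traced FX graph for one
    # pass; a real backend would lower `gm` here instead of printing it.
    gm.graph.print_tabular()
    # aot_autograd prefers a "boxed" callable (one taking a list of args).
    return make_boxed_func(gm.forward)


# Unlike raw make_fx, this traces through autograd, so the same callback
# is invoked for the forward and the backward graph.
backend = aot_autograd(fw_compiler=my_compiler, bw_compiler=my_compiler)


@torch._dynamo.optimize(backend)
def f(x):
    return torch.tanh(x) * 2


f(torch.randn(3, requires_grad=True)).sum().backward()
```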
Name                 Last commit                                                              Date
test                 [torchdynamo] Move to aot_autograd instead of raw make_fx                2022-12-15 01:55:50 -08:00
torch_mlir           [torchdynamo] Move to aot_autograd instead of raw make_fx                2022-12-15 01:55:50 -08:00
torch_mlir_e2e_test  [MLIR][TORCH] Add e2e support for aten.var_mean op                       2022-12-12 15:46:54 +05:30
CMakeLists.txt       [custom op] Generalize shape library logic to work with dtypes (#1594)   2022-12-13 08:25:41 -08:00
TorchMLIRModule.cpp  Miscellaneous fixes for Windows builds (#1376)                            2022-09-29 12:07:43 -05:00