torch-mlir/python/torch_mlir
Sean Silva af9e8a5e63 [torchdynamo] Move to aot_autograd instead of raw make_fx
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. (We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759.)

This also incidentally fixes source location handling, so
`lockstep_basic.py` now reports accurate source locations!
2022-12-15 01:55:50 -08:00
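
For context, here is a minimal sketch of a TorchDynamo backend built on `aot_autograd` in the style this commit adopts, assuming the PyTorch of this era. `my_compiler` and `f` are hypothetical placeholders, not torch-mlir's actual lowering step:

```python
# Minimal sketch: a dynamo backend built on aot_autograd instead of raw
# make_fx. aot_autograd traces the function and hands the forward and
# backward FX graphs to the compilers we pass in.
import torch
from torch._dynamo.optimizations.training import aot_autograd

def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    # A real backend would lower `gm` here (e.g. through Torch-MLIR);
    # returning gm.forward just executes the traced graph eagerly.
    return gm.forward

# aot_autograd wraps the per-graph compiler into a dynamo backend, so the
# same compiler sees both the forward and the backward graph.
my_backend = aot_autograd(fw_compiler=my_compiler, bw_compiler=my_compiler)

@torch._dynamo.optimize(my_backend)
def f(x):
    return torch.tanh(x) * 2.0

print(f(torch.randn(3, requires_grad=True)))
```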
| Name | Last commit | Date |
| --- | --- | --- |
| `_torch_mlir_custom_op_example` | [custom op] Generalize shape library logic to work with dtypes (#1594) | 2022-12-13 08:25:41 -08:00 |
| `cmake/modules` | Add initial LTC backend (#610) | 2022-07-30 09:40:02 -04:00 |
| `csrc` | Extended TorchMLIRLoweringContext with virtual CreateComputation method (#1699) | 2022-12-08 15:57:07 -05:00 |
| `dialects` | Add aten.slice.Tensor & aten.cat folders (#1691) | 2022-12-13 13:02:47 -08:00 |
| `__init__.py` | build: manually update PyTorch version | 2022-12-05 22:44:32 +05:30 |
| `compiler_utils.py` | Update torch-mlir-opt error message. | 2022-10-05 15:02:10 -04:00 |
| `dynamo.py` | [torchdynamo] Move to aot_autograd instead of raw make_fx | 2022-12-15 01:55:50 -08:00 |