Commit Graph

6 Commits (578d0ec292e629c967553fa454383defc5642e65)

Author SHA1 Message Date
Maksim Levental d46f169c1a
Fix kwarg annotation in eager (#747) 2022-04-11 17:35:42 -05:00
Maksim Levental 66de821eaf
small framework plus build_script_function (#745) 2022-04-11 16:53:52 -05:00
Maksim Levental 18ef40acaf
Fixes a bug in the use of upstream `normalize_function` in our `normalize_args_kwargs` (in eager mode) and introduces unit tests. (#740)
NB: `shouldnt_normalize2` and `shouldnt_normalize3` currently XPASS, i.e., args *will* successfully normalize despite being incorrect, due to an [upstream bug](https://github.com/pytorch/pytorch/issues/75342) (see the sketch below).
2022-04-11 16:17:44 -05:00
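
For context, here is a minimal sketch of what argument normalization means here, using upstream `torch.fx.operator_schemas.normalize_function`; the op and tensors are illustrative examples, not taken from the commit's unit tests.

```python
import torch
from torch.fx.operator_schemas import normalize_function

# Rewrite the call torch.add(x, y) so that every argument is keyed by its
# parameter name. normalize_function returns an ArgsKwargsPair, or None when
# the overload cannot be resolved unambiguously.
x, y = torch.ones(2), torch.ones(2)
pair = normalize_function(torch.add, (x, y), {}, normalize_to_only_use_kwargs=True)
if pair is not None:
    print(pair.kwargs.keys())  # roughly: dict_keys(['input', 'other', ...])
```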
Sean Silva 14cf87633c
Add link to forum post describing `__torch_dispatch__` 2022-04-01 10:10:43 -07:00
Maksim Levental 3e999beaea
Small bug fixes in eager mode (#691) 2022-03-28 13:31:07 -05:00
max fe8ac57e6d
This PR implements an eager mode backend for PyTorch through the torch-mlir framework. This is accomplished by overriding the `__torch_dispatch__` class method on the wrapper subclass `TorchMLIRTensor(torch.Tensor)` (a minimal sketch of the dispatch hook appears after this entry).
Effectively, this mode works by compiling op by op as the NN is eagerly executed by PyTorch. That compilation entails building a representation of the op that can be `torch.jit.script`ed, importing it with `ModuleBuilder`, and then executing it (e.g., with `RefBackendLinalgOnTensorsBackend`). This mode includes a fallback to conventional PyTorch if anything in the torch-mlir compilation process fails (e.g., an unsupported op).

Currently, all e2e tests pass except for two that involve an upstream PyTorch bug (https://github.com/pytorch/pytorch/issues/74400).

High priority next steps:

1. A compile cache to speed up reruns of the same NN.
2. Integration with IREE (though not in this repo).
3. Integration with `torch.distributed`.
2022-03-22 14:42:57 -07:00
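
Below is a minimal, illustrative sketch of the dispatch hook described above. The helper `compile_and_run_with_torch_mlir` is hypothetical and stubbed out; in the real backend it would build a `torch.jit.script`-able representation of the op, import it with `ModuleBuilder`, and execute it on a backend such as `RefBackendLinalgOnTensorsBackend`.

```python
import torch

def compile_and_run_with_torch_mlir(func, args, kwargs):
    # Hypothetical stand-in for the torch-mlir path (build a scriptable
    # representation of `func`, import it with ModuleBuilder, execute it on a
    # backend). Stubbed out so this sketch exercises the fallback branch.
    raise NotImplementedError("torch-mlir compilation not wired up in this sketch")

class TorchMLIRTensor(torch.Tensor):
    # Wrapper subclass: every op on a TorchMLIRTensor is routed through
    # __torch_dispatch__, which tries the torch-mlir path first.
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        try:
            return compile_and_run_with_torch_mlir(func, args, kwargs)
        except Exception:
            # Fallback to conventional PyTorch if anything in the compilation
            # process fails (e.g., an unsupported op).
            return func(*args, **kwargs)
```

In practice a wrapper subclass like this usually also stores an inner `torch.Tensor` (e.g., created via `torch.Tensor._make_wrapper_subclass`) and unwraps/rewraps it around each dispatched call; that plumbing is omitted here for brevity.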