Commit Graph

40 Commits (5ead1d549e9abe4cd5f1ed6d54b2b051581d4068)

Author SHA1 Message Date
Matthias Gehre 0959b502ae
Print name of the backend when tests fail to help debugging issues in CI (#2210)
* Print name of the backend when tests fail to help debugging issues in CI

* Extended test python/test/torchscript_e2e_test/compilation_failure.py
2023-06-09 10:47:07 +02:00
Maksim Levental c3cd7471b4
Pure-Python FX importer. (#2098)
Co-authored-by: Sean Silva <silvasean@google.com>
2023-05-12 00:46:33 -05:00
Maksim Levental 953ea39cb5
handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Ramiro Leal-Cavazos 3260a1ea6e
Allow passing traced `torch.nn.Module`s into `torch_mlir.compile` (#1743)
This commit adds support for passing to `torch_mlir.compile` the
result of running `torch.jit.trace` on a model by relaxing the
condition that checks if the model is already in JIT IR to allow any
`torch.jit.ScriptModule`.

Fixes https://github.com/llvm/torch-mlir/issues/1739
2022-12-22 08:39:55 -08:00
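For illustration of the change above, a minimal sketch of the pattern it enables (the module and shapes are hypothetical; the commit itself only guarantees that a `torch.jit.trace` result is accepted by `torch_mlir.compile`):

```python
import torch
import torch_mlir

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Trace the model ourselves; the result is a torch.jit.ScriptModule, which
# torch_mlir.compile now accepts directly instead of rejecting it.
traced = torch.jit.trace(AddOne(), torch.ones(2, 3))
module = torch_mlir.compile(traced, torch.ones(2, 3))
print(module)
```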
Sean Silva af9e8a5e63 [torchdynamo] Move to aot_autograd instead of raw make_fx
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759

This also incidentally fixes the source location handling, which makes
`lockstep_basic.py` give an accurate source location!
2022-12-15 01:55:50 -08:00
Sean Silva 7731211d02 Remove eager_mode
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.

Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
2022-12-09 03:50:00 -08:00
Sean Silva 485c18bb2f [torchdynamo] Add "lockstep" numerical accuracy debugger.
Thanks to TorchDynamo's great layering and design, this is only about
100 lines of code for a basic lockstep debugger.

This should allow us to deprecate eager_mode, since AFAIK the only
interesting use case it was really supporting was downstream users
writing lockstep debuggers.

NOTE: The exact reporting and interface here is subject to change. Please
try it out and provide feedback (or patches :) ).
- make_fx should not drop source locations: https://github.com/pytorch/pytorch/issues/90276
- Report tensors better (huge tensors should be summarized)
- Maybe don't abort, but just warn?
- Allow customizing atol/rtol.
- How best to print the failing node? And include surrounding graph
context?
2022-12-06 07:57:45 -08:00
Daniel Ellis e2de20575f
Automatically strip overloads for FX-based models. 2022-11-29 22:19:09 -05:00
Sean Silva 3695ca83e6 [torch_mlir.compile] Handle the case of already-scripted models better
Closes #1582
2022-11-16 10:47:13 -08:00
Sean Silva cc468d2d16 [cleanup] Be consistent about apostrophe 2022-11-10 07:42:15 -08:00
Sean Silva 64914603fa [torch_mlir.compile] Add support for multiple exported methods
For AoT deployments, models often have multiple exported methods.
This patch enables something like this:

```
class TwoMethodsModule(torch.nn.Module):
    def sin(self, x):
        return torch.ops.aten.sin(x)

    def cos(self, x):
        return torch.ops.aten.cos(x)

example_args = torch_mlir.ExampleArgs()
example_args.add_method("sin", torch.ones(2, 3))
example_args.add_method("cos", torch.ones(2, 4))
print(torch_mlir.compile(TwoMethodsModule(), example_args))
```

In the
[long-term](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md#tools-for-advanced-aot-deployments)
we will need to reconcile this with our story for stateful models and the
backend contract being purely functional. For now, this provides some basic
infra that seems harmless. Arguably, we could tighten up the backend contract
even more to only allow a single compiled function which would prohibit this or
require building out a layer above.

Fixes #1557
2022-11-10 02:10:22 -08:00
Sean Silva 6403c0e56f torch_mlir.compile: allow custom backend_legal_ops set
Allow customizing `backend_legal_ops` for the "torch" output type, since we
don't know which backend will be used (it might be a custom backend).
We don't allow customizing the `backend_legal_ops` for the other output
types (Linalg, TOSA, MHLO) since those backends control their set of
legal ops directly.

Fixes #1418
2022-10-12 04:21:22 -07:00
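For illustration of the knob above, a rough sketch (the module and the specific op name passed to `backend_legal_ops` are hypothetical choices; only the parameter itself and the "torch" output type come from the commit):

```python
import torch
import torch_mlir

class Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.flatten(x, 1)

# With output_type="torch", keep the chosen op "legal" so a custom backend
# can lower it itself rather than having Torch-MLIR decompose it.
module = torch_mlir.compile(
    Flatten(),
    torch.ones(2, 3, 4),
    output_type="torch",
    backend_legal_ops=["aten.flatten.using_ints"],  # hypothetical op name
)
print(module)
```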
Daniel Ellis 4d47f1671a Reject dictionary inputs when tracing.
The underlying error message was misleading.  See https://github.com/llvm/torch-mlir/issues/1425
2022-09-30 16:02:35 -04:00
Ashay Rane 0b46462528
Miscellaneous fixes for Windows builds (#1376)
* test: allow spaces in path to Python executable

On Windows, the path to the Python binary may contain spaces, so this
patch adds quotes around the path to the python executable.

Thanks to @sstamenova for suggesting the fix!

* python: remove header file that causes Windows build failures

Similar to https://reviews.llvm.org/D125284, we can safely remove this
header file without affecting the Linux build.  Removing it is necessary
on Windows builds, since it otherwise causes build errors.

* python: drop `TORCH_API` from function defined in Torch-MLIR

`TORCH_API` should apply to functions that are either exported by
libtorch.so or ones that are imported from libtorch.so by its downstream
consumers (like Torch-MLIR).  Neither case applies to the
`importJitFunctionAsFuncOp()` function, since it is defined in
Torch-MLIR (and thus outside libtorch.so).  This patch fixes the problem
by dropping `TORCH_API` from that function's declaration.

* python: make output of class annotations deterministic

The `class-annotator-repr.py` test checks for class annotations in a
specific order, but prior to this patch, the order was
non-deterministic, since the code iterated on an _unordered_ map.

This patch makes the iteration order deterministic through two changes:
1. using a sorted map
2. using the class qualified name instead of the address of the class in
memory

* test: use Python3_EXECUTABLE as interpreter path for consistency

This ensures that tests use the Python3 version that was detected using
CMake, instead of whichever python version that happens to be in the
PATH variable when invoking the test.

* test: fix RUN string

The parenthesis syntax does not run on Windows (the shell interprets the
`(` character as part of the path).  Moreover, the ODR violation in the
comment no longer seems to apply.

* python: port parallel test framework to Windows

Since Windows does not support `fork` natively, Python's
`multiprocessing` module needs to use `spawn` on Windows.  However, to
use `spawn`, the multiprocessing module serializes (or pickles) the
worker function and its arguments.  Sadly, the multiprocessing module
(both the default one in Python and the one that is extended in PyTorch)
is unable to serialize lambda functions (see
https://stackoverflow.com/a/19985580 for details).

Unfortunately, given how our tests are structured, we require that the
function under test is passed as an argument to another function, so we
cannot sidestep our use of lambda functions.

To resolve this problem, this patch makes use of the `multiprocess` and
`dill` Python modules, which together offer a multiprocessing mechanism
that can serialize lambda functions.  The multiprocess module also
offers a process pool, which simplifies the code for our parallel
testing framework.
2022-09-29 12:07:43 -05:00
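A standalone sketch of the mechanism described in the last bullet above (this is not Torch-MLIR's test framework, just the `multiprocess`/`dill` behavior it relies on):

```python
# Unlike the stdlib multiprocessing module, the third-party `multiprocess`
# package pickles with `dill`, so it can ship lambdas to spawned workers
# (the "spawn" start method is mandatory on Windows, which has no fork()).
import multiprocess as mp

def main():
    with mp.Pool(processes=2) as pool:
        # A lambda like this would fail to pickle under stdlib multiprocessing
        # with the "spawn" start method.
        results = pool.map(lambda x: x * x, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]

if __name__ == "__main__":
    main()
```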
Sean Silva e16b43e20b Remove "torchscript" association from the e2e framework.
We use it for more than TorchScript testing now. This is a purely
mechanical change to adjust some file paths to remove "torchscript".

The most perceptible change here is that now e2e tests are run with

```
./tools/e2e_test.sh
```

instead of:

```
./tools/torchscript_e2e_test.sh
```
2022-08-29 14:10:03 -07:00
Henry Tu e869e68559
Fix LTC lib_torch_mlir_ltc.so import error (#1283)
* Build LTC to _mlir_libs directory

* Update CMakeLists.txt
2022-08-25 18:25:01 -04:00
Jae Hoon (Antonio) Kim 0af55781ae
Propagate device data names (#1157)
* Propagate device data names

* Address PR comment

* Add example usage

* Add test for device data names

* Make TorchMlirComputation fields protected

* Add lazy backend device data name unit tests

* Disable lazy backend tests if LTC is disabled

* Add comments
2022-08-16 09:30:22 -04:00
Sean Silva 8ce5d3f12c E2E framework: Report tensor dtype in summary
This helps to triage issues related to backends that don't support all
dtypes.
2022-08-05 10:05:18 -07:00
Sean Silva 31727f81d8 torch_mlir.compile: Allow ignoring traced shapes
In some cases, users know that a traced graph is valid for a wider set
of shapes than they originally traced it with. Provide an option for
users to ignore the shapes in the traced graph when they know it is
legal.

Fixes #997
2022-08-04 10:18:34 -07:00
Jae Hoon (Antonio) Kim 425362263b Clean up Autogen (#1112)
* Remove unnecessary sed in autogen

* Remove .pyc files from VCS
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 1bde00c73d Fix LTC Decoupling (#815)
* Initial changes

* Fix up native functions

* Further fix decoupling

* Remove unnecessary ops

* Formatting and copyright banners

* Add pytorch submodule
2022-07-30 09:40:02 -04:00
Sean Silva 93f1c3138b torch_mlir.compile: Allow OutputType as a string.
A lot of code was super verbose with `torch_mlir.OutputType.XYZ`. Now,
you can simply do `"xyz"`. I updated a few examples.
2022-07-08 17:37:27 -07:00
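For illustration of the shorthand above, the two equivalent spellings (the module and input are hypothetical):

```python
import torch
import torch_mlir

class TanhModule(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

example = torch.ones(3, 4)

# The verbose enum spelling...
m1 = torch_mlir.compile(TanhModule(), example,
                        output_type=torch_mlir.OutputType.TOSA)
# ...and the equivalent string shorthand added by this commit.
m2 = torch_mlir.compile(TanhModule(), example, output_type="tosa")
```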
powderluv cd79538a0c
Update test to pass with newer versions of tanh (#990) 2022-06-28 20:28:13 -07:00
Sean Silva ccc858f531 torch_mlir.compile: Fix API footgun
use_tracing=True was behaving unexpectedly because the handling of
single arguments was happening after the torch.jit.trace call.

This also fixes the check to specifically test for a torch.Tensor or
TensorPlaceholder so that both lists and tuples would be correctly
handled.
2022-06-05 18:10:07 -07:00
Maksim Levental cec5aeedb0
add ci tests (#754) 2022-05-25 14:59:59 -05:00
Sean Silva 2af53ce434 torch_mlir.compile: Add OutputType.RAW
This can help with development and reporting bugs.
2022-05-19 03:41:43 -07:00
Sean Silva ef9e4c95f2 torch_mlir.compile: add support for dynamic sizes.
We do this by introducing a TensorPlaceholder class, which can be used to
specify dynamic sizes. Internally, we canonicalize all example inputs
to TensorPlaceholders.

This commit also adds some basic testing, which was missing before.
2022-05-17 07:02:32 -07:00
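A rough sketch of the TensorPlaceholder usage described above (the module is hypothetical, and a `TensorPlaceholder(shape, dtype)` constructor with -1 marking a dynamic dimension is assumed):

```python
import torch
import torch_mlir

class Double(torch.nn.Module):
    def forward(self, x):
        return x + x

# Mark the first (batch) dimension as dynamic; the second stays fixed at 4.
placeholder = torch_mlir.TensorPlaceholder([-1, 4], torch.float32)
module = torch_mlir.compile(Double(), placeholder)
print(module)
```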
Maksim Levental d46f169c1a
Fix kwarg annotation in eager (#747) 2022-04-11 17:35:42 -05:00
Maksim Levental 66de821eaf
small framework plus build_script_function (#745) 2022-04-11 16:53:52 -05:00
Maksim Levental 18ef40acaf
Fixes a bug in use of upstream `normalize_function` in our `normalize_args_kwargs` (in eager mode) and introduces unit tests. (#740)
NB: `shouldnt_normalize2` and `shouldnt_normalize3` currently XPASS, i.e., args *will* successfully normalize despite being incorrect, due to an [upstream bug](https://github.com/pytorch/pytorch/issues/75342).
2022-04-11 16:17:44 -05:00
Sean Silva c46d48f9f5 Make error reporting a bit better.
- Split out TOSA in the CI.
- Add summary of unexpected test outcomes. This works better when there
  are many XFAIL'ing tests, as it only prints out the error_str on
  FAIL, not on XFAIL. Example here:
  https://gist.github.com/silvasean/c7886ec7b3d35c21563cb09f7c3407da
2021-10-28 13:20:16 -07:00
Sean Silva 5b6902e31c Dual license the torch-mlir project.
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.

The standard file comment is now:

```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```

See `LICENSE` in the project root for the terms of both licenses.
2021-10-01 10:46:08 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva 404bd74ddf Port the bulk of the remaining code to torch-mlir
This leaves no real code outside torch-mlir.

This also renames the "npcomp backend contract" to "linalg on tensors
backend contract" as the name of the abstraction layer that RefBackend
(IREE too) accepts.
2021-09-27 12:48:33 -07:00
Yi Zhang cd7053dfde Add runtime check 2021-09-24 12:01:36 -04:00
Sean Silva 01c6c54dd8 Fix dependency. 2021-09-23 21:39:31 -07:00
Sean Silva 6d8e7f1bb1 Implement Python relayout from #311
Fixes https://github.com/llvm/mlir-npcomp/issues/311

The key change is that TorchPlugin is folded into
`torch_mlir.dialects.torch.importer.jit_ir` (it imports the PyTorch
JIT's IR, so that's a good, scoped name for it).
The CMake option `-DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=OFF` disables it,
which allows building without a PyTorch native dependency.
2021-09-21 09:29:40 -07:00
Sean Silva 5f3b1ce0b8 Fold torch_mlir_dialects python package into `torch_mlir`.
After this change, there are now just two subdirectories in the
`python_packages` directory in our combined build:
- `npcomp_core` with all the npcomp stuff
- `torch_mlir` with all the `torch-mlir` stuff.

The combined `torch_mlir` build will be packaged for use by `pip`.
There isn't anything super useful for wider use in `npcomp_core` so for
now we aren't going to package that one.
2021-09-17 09:27:49 -07:00
Sean Silva 0eb767ea45 Remove frontends/pytorch directory.
It just contained the e2e testing framework. We now fold it into the
main project to reduce complexity.

- `frontends/pytorch/python/` -> `python/torch_support`
- `frontends/pytorch/e2e_testing -> e2e_testing`
- `frontends/pytorch/examples -> examples`
- `frontends/pytorch/test` -> `python/test`
- `torch_mlir_torchscript` python module -> `npcomp_torchscript`
- `torch_mlir_torchscript_e2e_test_configs` python module ->
  `npcomp_torchscript_e2e_test_configs`

This also changes the license of a handful of files from the
"pytorch-style" license to the regular LLVM/npcomp license. The only
people who committed to those files were myself and Yi.
2021-09-17 09:27:49 -07:00