We just have to do this: I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is completely unclear how to do so, given that the mlir-hlo repo is a read-only export that sits at the tail end of a multi-week integration chain from the open-source stablehlo repo.
We've discussed this often enough, and everyone has given a +1 to taking the e2e testing hit if it becomes necessary. It is now necessary, as the current situation is unmanageable.
Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and call it via subprocess in order to get the testing coverage back. I leave that as an exercise for the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.
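A minimal sketch of such a harness, assuming a hypothetical `stablehlo-runner` binary built from the stablehlo interpreter (the binary name and its CLI are illustrative, not an existing tool):

```python
import subprocess

def run_stablehlo_module(mlir_path: str, runner: str = "stablehlo-runner") -> str:
    """Run a serialized StableHLO module through the interpreter binary."""
    result = subprocess.run(
        [runner, mlir_path],
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the interpreter fails
    )
    return result.stdout
```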
Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
* Tensor[]? operand type support using partial codegen
* aten.index.Tensor support via partial codegen
* Add torch.index_put tracing support
* Added optional tensor list type support for LTC/TorchMLIR lowering
* Added comments
Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
* LTC/TorchMLIR multi-output operations support
* Update torch-mlir jit lowering to support ops with dynamic number of outputs
* Added support for aten::split_copy, aten::split_with_sizes_copy
* Fix native function for aten::split; cleanup code
* Fix TorchMlirTensorList lowering
* Remove xfails
* feat: split pytorch requirements into stable and nightly
* fix: add true to tests to see full output
* refactor: add comments to explain true statement
* feat: move some tests to experimental mode
* refactor: refactor pipeline into more fine-grained differences
* feat: add version differentiation for some tests
* feat: activate more configs
* refactor: change implementation to use fewer requirement files
* refactor: remove constraints used for testing
* fix: revert some requirement file names
* refactor: remove unnecessary ninja install
* fix: fix version parsing
* refactor: remove dependency on torchvision in main requirements file
* refactor: remove index url
* style: remove unnecessary line switch
* fix: re-add index url
Creates a build_linux_arm64 job that builds the release on an arm64 self-hosted runner.
Drop Python 3.10 support
Pass TM_TORCH_VERSION to choose the stable PyTorch version (since arm64 doesn't have nightly builds)
Borrows the nightly/stable PyTorch switch from the WIP PR:
https://github.com/llvm/torch-mlir/pull/2038
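A hedged sketch of the switch in use (the value format and the script path here are assumptions; check the script itself for the exact interface):

```shell
# Illustrative only: select a stable PyTorch release instead of the nightly.
TM_TORCH_VERSION=stable ./build_linux_packages.sh
```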
This patch, by itself, doesn't fix caching on Windows, but once a new
release of ccache is available, caching for Windows builds should start
working again (validated by building ccache from source and using it
with LLVM builds).
Ccache rejects caching when either the `/Zi` or `/ZI` flags are used
during compilation on Windows, since these flags tell the compiler to
embed debug information in a PDB file (separate from the object file
produced by the compiler). In particular, our CI builds add the `/Zi`
flag, making ccache mark these compiler invocations as uncacheable.
But what caused our CI to add debug flags, especially when we specified
`-DCMAKE_BUILD_TYPE=Release`? On Windows, unless we specify the
`--config Release` flag during the CMake build step, CMake assumes a
debug build. So all this while, we had been producing debug builds of
torch-mlir for every PR! No wonder it took so long to build the Windows binaries.
The reason for having to specify the configuration during the _build_
step (as opposed to the _configure_ step) of CMake on Windows is that
CMake's Visual Studio generators will produce _both_ Release and Debug
profiles during the CMake configure step (thus requiring a build-time
value that tells CMake whether to build in Release or Debug mode).
Luckily, on Linux and macOS, the `--config` flag seems to be simply
ignored, instead of causing build errors.
Strangely, based on cursory tests, it seems like on Windows we need to specify the Release configuration both as `-DCMAKE_BUILD_TYPE=Release` and as `--config Release`. Dropping either made my build switch to a Debug configuration.
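For reference, a hedged sketch of the pair of invocations that produced a Release build in those tests (generator and other options elided; adapt to the actual CI script):

```shell
# On Windows, both the configure-time and the build-time Release settings
# appear to be necessary; dropping either produced a Debug build.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```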
Additionally, there is a bug in ccache v4.8 (although this is addressed
in trunk) that causes ccache to reject caching if the compiler
invocation includes any flag that starts with `/Z`, including `/Zc`,
which is added by LLVM's HandleLLVMOptions.cmake and which isn't related
to debug info or PDB files. The next release of ccache should include
the fix, which is to reject caching only for `/Zi` and `/ZI` flags and
not all flags that start with `/Z`.
As a side note, debugging this problem was possible because of ccache's
log file, which is enabled by: `ccache --set-config="log_file=log.txt"`.
We want to ensure that the pip packages required for building torch-mlir are included in the dependencies of torch-mlir, but we don't want the pip packages required for _testing_ torch-mlir to be included among the dependencies. To be able to specify and install one set of
dependencies and not the other, this patch separates the pip packages
into two files: build-requirements.txt and test-requirements.txt.
This patch also updates references to the requirements.txt file so that
CI builds that run end-to-end tests install test-related pip
dependencies while everything else (including WHL builds) sticks to just
the build-related pip dependencies.
Despite this change, this patch should not affect a torch-mlir
developer's workflow. More precisely, since this patch makes the
top-level requirements.txt file refer to both build-requirements.txt and
test-requirements.txt files, a torch-mlir developer should be able to
continue referring to the requirements.txt file without any impact.
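Under this scheme, the top-level requirements.txt can simply include the two new files via pip's standard `-r` mechanism, so `pip install -r requirements.txt` keeps working unchanged (a sketch; the actual file may contain additional entries):

```
# requirements.txt (top level): delegates to the two split files.
-r build-requirements.txt
-r test-requirements.txt
```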
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.
This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
Previously, torchvision had not released WHL files for Python v3.8,
causing failures in torch-mlir python package builds, so we had disabled
building for Python v3.8.
Now that the WHL files are back, this patch re-enables v3.8 builds.
- Use v3 of actions/checkout, since the version we use (v2) uses
Node.js 12, which is deprecated by GitHub.
- Source the PowerShell venv script (instead of the bash script) since
the calling script is a PowerShell script. Without this, the build
doesn't use venv at all.
- Make the build dependencies in whl-requirements.txt (used by
setup.py) match those in requirements.txt. To that end, this patch
creates a build-requirements.txt that is referenced by
requirements.txt and whl-requirements.txt.
* [custom op] Generalize shape library logic to work with dtypes
This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.
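As a rough illustration (the function names and signatures below are hypothetical; see the doc above for the real conventions), a shape function and a dtype function for an elementwise op could look like this:

```python
from typing import List

# Hypothetical sketch: real torch-mlir functions follow the naming and
# signature conventions documented in adding_a_shape_and_dtype_function.md.
def aten_tanh_shape(self: List[int]) -> List[int]:
    # Elementwise op: the result shape equals the input shape.
    return self

def aten_tanh_dtype(self_dtype: int) -> int:
    # Elementwise op: the result dtype equals the input dtype.
    return self_dtype
```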
For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.
Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
This more accurately reflects what it is. The previous name was
conflating the use of RefBackend (which `linalg`, `tosa`, and `mhlo`
configs all use) with the use of the linalg backend (e.g. TorchToLinalg).
This conflation was artificially giving the linalg backend a "privileged" position, which we want to avoid. We still keep it as the default backend, and it remains the most complete, but at least there's no artificial boosting.
This adds a basic e2e Config for TorchDynamo using
Linalg-on-Tensors/RefBackend.
But TorchDynamo is pretty orthogonal to
various other pieces, so it should compose nicely with variations like:
- Switching out all the backends (Linalg-on-Tensors, TOSA, MHLO)
- PyTorch functionalization and decompositions
- Taking the example inputs and compiling with all dynamic or all static
shapes without duplicating tests.
This adds it to the CI, but there are still a lot of XFAIL's.
This also adds a helper `from torch_mlir.dynamo import
make_simple_dynamo_backend` which simplifies some of the steps for
making a Torch-MLIR-based TorchDynamo backend. We include "simple" in
the name because we are going to be exploring various things next from
the long-term roadmap.
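A hedged usage sketch (only the import is taken from this patch; the callback signature is an assumption based on TorchDynamo's backend convention):

```python
import torch
import torch._dynamo as dynamo
from torch_mlir.dynamo import make_simple_dynamo_backend

@make_simple_dynamo_backend
def my_backend(gm: torch.fx.GraphModule, example_inputs):
    # Compile `gm` with Torch-MLIR here; returning it unchanged just runs
    # the FX graph eagerly, which is handy for smoke-testing the plumbing.
    return gm

@dynamo.optimize(my_backend)
def f(x):
    return torch.tanh(x)

f(torch.randn(3))
```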
The next steps are:
- Burn down all the XFAIL's.
- Start working on the pieces from the [long-term roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md).
- Add functionalization/decompositions into the TorchDynamo flow and
remove reliance on the current Torch-MLIR "frontend".
- Write a pure-Python direct FX->MLIR importer.
- Hook up the new PyTorch symbolic shape stuff.
- Explore PrimTorch decompositions for simplifying backends.
We want each build to be reproducible regardless of prior builds and
prior package installations, but pip, by default, uses cached packages
from previous invocations of `pip install`. As a result, the incorrect
dependencies downloaded in the RollPyTorch workflow in the main
repository cannot be reproduced in private forks of the repository. To
resolve this problem, this patch adds a `--no-cache-dir` flag to pip, so
that it fetches and inspects each requested package independent of prior installations.
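For example:

```shell
# Bypass pip's package cache so each dependency is fetched and inspected fresh.
pip install --no-cache-dir -r requirements.txt
```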
Before this patch, the update_shape_lib.sh and update_torch_ods.sh
scripts only worked on in-tree builds, which implied that the
RollPyTorch action was forced to run the longer-running in-tree build.
As a result of this patch, we should be able to run through the basic
checks in the RollPyTorch action faster, while running the full suite of
tests off the critical path.
The key change in this patch is that the update scripts now pick whichever of the in-tree and out-of-tree build directories was modified most recently. The change also correctly handles the case when only
one of the two directories exists.
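A hedged sketch of that selection logic (the directory variables are placeholders; the real scripts encode their own paths):

```shell
# Pick whichever build directory was modified most recently; fall back
# to the one that exists if only one is present.
if [ -d "$IN_TREE" ] && [ -d "$OUT_OF_TREE" ]; then
  if [ "$IN_TREE" -nt "$OUT_OF_TREE" ]; then
    build_dir="$IN_TREE"
  else
    build_dir="$OUT_OF_TREE"
  fi
elif [ -d "$IN_TREE" ]; then
  build_dir="$IN_TREE"
else
  build_dir="$OUT_OF_TREE"
fi
```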
This patch makes a few small, but key, changes to enable ccache on
Windows. First, it replaces the hendrikmuhs/ccache-action action with
command-line invocations of the ccache binary, since the action has two bugs: one causes CI to refer to different ccache artifacts before versus after the build on Windows, while the other can sometimes cause the action to incorrectly infer that the cache is empty.
Second, this patch slightly alters the cache key, so that our old cache
artifacts, which have grown too big, are eventually discarded in favor
of the new, smaller cache artifacts. Along the way, this patch also
keeps the RollPyTorch's cache artifact separate from the regular build's
cache artifact so as to keep these artifacts small, and also because the
RollPyTorch action is off the critical path for most contributors.
Finally, this patch makes small changes to the CMake file so that on
Windows, the ccache binary is added as a prefix, as recommended on the
[ccache Wiki](https://github.com/ccache/ccache/wiki/MS-Visual-Studio).
This patch updates the build_linux_packages.sh script so that when
PyTorch needs to be built from source, it is built _before_ building
LLVM and before building Torch-MLIR. The rationale behind this change
is that previously, when the PyTorch build was triggered through the
Torch-MLIR build, the PyTorch compilation added more entries to the
ccache artifacts. However, since we cache the PyTorch _binary_ (i.e.
the WHL file), there is no need to add the PyTorch compilation to the
ccache artifacts. By removing the PyTorch compilation files, we keep
the ccache artifact size small, thus reducing the number of evictions
when we exceed GitHub's allowed limit.
This commit removes almost all of the valsem ops, since the value
semantics version of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
Whether or not the PyTorch build is cached should not affect the success
of the Torch-MLIR build, but based on the existing code, a build may
fail if the `TM_PYTORCH_INSTALL_WITHOUT_REBUILD` variable was set but
the build cache doesn't exist.
Although that variable is set by CI upon a cache hit, nuances of GitHub's caching behavior can create situations where the coupling between `TM_PYTORCH_INSTALL_WITHOUT_REBUILD` and the cache lookup fails. Specifically, a branch other than our default branch (`main`) may create the cache entry, but because GitHub doesn't share this cache entry with builds running on the `main` branch, the `main` branch build tries to create its own cache entry. However, since cache identifiers are
unique and because caches are immutable, the caching step running in the
`main` branch appears to create an invalid cache entry (of 233 bytes,
instead of the expected ~60 MB).
Consequently, subsequent builds observe a cache "hit", since caches
created by the `main` branch are shared with all other branches, but
because this cache entry is invalid (since it doesn't actually contain
the ~60 MB PyTorch WHL file), the builds fail.
One workaround would be to let only the `main` branch create caches, but
in doing so, we would also prevent other branches from _reading_ the
cache, making the builds in those branches terribly slow.
So this patch uses a different workaround, which is to check whether the
PyTorch WHL file exists, even if the build observed a cache hit. If the
file doesn't exist, even on a purported cache hit, the code builds PyTorch from source, which is probably the intuitive behavior anyway.
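A hedged sketch of the check (the wheel path variable and the build function are placeholders, not the script's actual names):

```shell
# Trust the cache hit only if the PyTorch wheel actually exists on disk.
if [ "$TM_PYTORCH_INSTALL_WITHOUT_REBUILD" = "true" ] && [ -f "$TORCH_WHL" ]; then
  pip install "$TORCH_WHL"
else
  # Purported cache hit but no wheel present: rebuild PyTorch from source.
  build_pytorch_from_source
fi
```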
A longer term fix will follow, after a discussion with the wider team.
Without this patch, CI logs contained the line:
-- Linker detection: GNU ld
GNU ld is notoriously slow at linking large binaries, so this patch
swaps GNU ld with the LLVM linker.
Since the linker invocation is driven through the compiler, perhaps the
best way to use the LLVM linker is to tell the compiler which linker
binary to use. This patch adds the `-fuse-ld=lld` flag to all Linux
builds of Torch-MLIR in CI to make it use lld.
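One hedged way to wire the flag through CMake (the CI may pass it differently, e.g. via compiler flags or LLVM's `-DLLVM_USE_LINKER=lld`):

```shell
cmake -G Ninja -S . -B build \
  -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=lld" \
  -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=lld"
```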