Most of this can't actually work given the token/security issues, so even if we re-introduce it later, we are better off fetching it from history rather than carrying code that will lead folks astray.
This PR adds a check to the CI right after checking out the Torch-MLIR
repository to make sure that the changes in the PR don't require any
`git clang-format` modifications.
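A minimal sketch of such a check, assuming the PR is checked out with enough history to diff against the base branch (the `BASE_REF` variable and the exact wiring into the workflow are illustrative):
```
# Reformat only the lines this PR touches relative to the base branch,
# then fail if git clang-format had to change anything.
git clang-format "${BASE_REF:-origin/main}"
git diff --exit-code || {
  echo "Please run 'git clang-format' on your changes and re-push."
  exit 1
}
```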
Per the RFC and numerous conversations on Discord, this rebuilds the
torch-mlir CI and discontinues the infra and coupling to the binary
releases
(https://discourse.llvm.org/t/rfc-discontinuing-pytorch-1-binary-releases/76371).
I iterated on this to get latency back to about what it was with the old
(much larger and non-ephemeral) runners: about 4 to 4.5 minutes for an
incremental change.
Behind the scenes changes:
* Uses a new runner pool operated by AMD. It is currently set to manual
scaling and has two runners (32-core, 64GiB RAM) while we get some
traction. We can either fiddle with auto-scaling or use a schedule to
scale it up during high-traffic hours.
* Builds are now completely isolated and cannot have run-to-run
interference like we were getting before (i.e. lock file/permissions
stuff).
* The GHA runner is installed directly into a manylinux 2.28 container
with upgraded dev tools. This eliminates the need to do sub-invocations
of docker on Linux in order to run on the same OS that is used to build
wheels.
* While we are not using it now, this setup was cloned from another
project that posts the built artifacts to the job and fans out testing.
That might be useful here later.
* Uses a special git cache that lets us have ephemeral runners and still
check out the repo and deps (incl. llvm) in ~13s (see the sketch after
this list for the general idea).
* Running in an Azure VM Scale Set.
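Not necessarily the exact mechanism used here, but the general idea of a runner-local git cache can be sketched with reference clones; the cache path and refresh policy below are assumptions:
```
# A bare mirror kept on the runner image/disk and refreshed out of band
# (assumed path /git-cache):
git clone --bare https://github.com/llvm/torch-mlir.git /git-cache/torch-mlir.git

# Per-job checkouts borrow objects from the local cache, so only new
# commits go over the network:
git clone --reference-if-able /git-cache/torch-mlir.git \
    https://github.com/llvm/torch-mlir.git torch-mlir
cd torch-mlir && git submodule update --init --depth=1
```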
In-repo changes:
* Disables (but does not yet delete):
* Old buildAndTest.yml jobs
* releaseSnapshotPackage.yml
* Adds a new `ci.yml` pipeline and scripts the steps in `build_tools/ci`
(by decomposing the existing `build_linux_packages.sh` for in-tree
builds and modularizing it a bit better).
* Test framework changes:
* Adds a `TORCH_MLIR_TEST_CONCURRENCY` env var that can be used to bound
the multiprocess concurrency (see the usage sketch after this list). I
ended up not using it in the final version, but it is a useful knob to have.
* Changes the default concurrency from `nproc * 1.1` to `nproc * 0.8 + 1`.
We're running on systems with significantly less virtual memory, and I
did a bit of fiddling to find a good tradeoff.
* Changed multiprocess mode to spawn instead of fork; otherwise, I was
getting instability (as discussed on Discord).
* Added MLIR configuration to disable multithreaded contexts globally
for the project. Constantly spawning `nproc * nproc` threads (more than
that actually) was OOM'ing.
* Added a test timeout of 5 minutes. If a multiprocess worker crashes,
the framework can get wedged indefinitely (and then will just be reaped
after multiple hours). We should fix this, but this at least keeps the
CI pool from wedging with stuck jobs.
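As referenced above, the concurrency knob is just an environment variable on the test invocation; the entry point and flags below are illustrative rather than exact:
```
# Cap the e2e test framework at 8 worker processes instead of the
# default (roughly nproc * 0.8 + 1 after this change).
TORCH_MLIR_TEST_CONCURRENCY=8 python -m e2e_testing.main --config=linalg -v
```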
Functional changes needing followup:
* No matter what I did, I couldn't get the LTC tests to work, and I'm
not 100% sure they were being run in the old setup as the scripts were a
bit twisty. I disabled them and left a comment.
* Dropped out-of-tree build variants. These were not providing much
signal and increased CI needs by 50%.
* Dropped macOS and Windows builds. Now that we are "just a library" and
not building releases, there is less pressure to test these commit by
commit. Further, since we bump torch-mlir to known-good commits on these
platforms, it has been a long time since either of these jobs has
provided much signal (and they take over an hour to run). We can add them
back later post-submit if ever needed.
Prior to this, the concurrency rules for presubmits (which cancel eagerly) were also being applied to main. The result was that landing a second patch would cancel the CI run for the prior one.
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.
That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.
The primary non-NFC change to the API is:
* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.
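Downstream code can usually migrate with a mechanical rewrite of the import path; a rough one-liner (the directories to search are illustrative, and GNU sed/xargs are assumed):
```
# Rewrite the old import path to the new top-level package.
grep -rl 'torch_mlir\.dialects\.torch\.importer\.jit_ir' python/ test/ \
  | xargs -r sed -i 's/torch_mlir\.dialects\.torch\.importer\.jit_ir/torch_mlir.jit_ir_importer/g'
```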
The rest is source code layering so that we can make the pt1 project
optional without losing the other features.
Progress on #2546.
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20
There are two primary goals:
1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.
Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.
This also takes steps toward a couple of other layering enhancements:
* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This was just an opportunistic pass, but it roughly
halves the number of build actions for the project, from the high 4000s
to the low 2000s.
It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. On reflection, this will better accommodate the follow-on
code movement.
Once things are roughly in place and the CI is passing, followups will
focus on more in-situ fixes and cleanups.
The LTC backend has drifted from being able to pass tests on the stable
PyTorch version, so this pins to nightly PyTorch on ARM.
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
The CI jobs that use stable PyTorch are currently not required to pass
in order for a patch to get merged in `main`. This commit makes sure
that if a CI job for stable PyTorch fails, it does not cancel the
other required jobs.
This patch removes the mock commands, so that once the Build And Test
workflow has successfully completed on the RollPyTorch action, the PR is
merged and the branch is deleted.
This patch adds a new workflow that runs when an update to the
rollpytorch branch by silvasean (in whose name the RollPyTorch action
runs) causes the regular CI build to complete without errors. Upon
execution, this workflow currently just prints the PR number(s) of the
PR created by the RollPyTorch action, but once this is working as
expected, we will add the step to merge the PR changes.
* CI: Skip (redundant) libtorch build when using stable PyTorch version
When we use PyTorch stable builds, there is no need to build libtorch
from source, making the stable-pytorch-with-torch-binary-OFF
configuration redundant with stable-pytorch-with-torch-binary-ON. This
patch drops the redundant configuration from CI.
* CI: Simplify guard conditions for creating and using libtorch cache
Whether libtorch is enabled or not is predicated on a host of conditions
such as the platform, in-tree versus out-of-tree build, and stable
versus nightly PyTorch builds. Instead of repeating these conditions to
guard whether to create or use the libtorch cache artifacts (and almost
getting them wrong), this patch predicates the relevant pipeline
steps to whether libtorch is enabled, thus making the conditions far
simpler.
* feat: split pytorch requirements into stable and nightly
* fix: add true to tests to see full output
* refactor: add comments to explain true statement
* feat: move some tests to experimental mode
* refactor: refactor pipeline into more fine-grained differences
* feat: add version differentiation for some tests
* feat: activate more configs
* refactor: change implementation to use fewer requirements files
* refactor: remove constraints used for testing
* fix: revert some requirement file names
* refactor: remove unnecessary ninja install
* fix: fix version parsing
* refactor: remove dependency on torchvision in main requirements file
* refactor: remove index url
* style: remove unnecessary line switch
* fix: re-add index url
* Creates a build_linux_arm64 job that builds the release on an arm64 self-hosted runner.
* Drops Python 3.10 support.
* Passes TM_TORCH_VERSION to choose the stable PyTorch version, since arm64 doesn't have nightly builds (see the sketch below).
* Borrows the nightly/stable PyTorch switch from the WIP
https://github.com/llvm/torch-mlir/pull/2038.
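A hedged example of that switch, assuming the existing `build_tools/python_deploy/build_linux_packages.sh` entry point (the accepted values of `TM_TORCH_VERSION` are defined by the build scripts):
```
# Build against the stable PyTorch release instead of a nightly,
# since nightlies are not published for arm64.
TM_TORCH_VERSION=stable ./build_tools/python_deploy/build_linux_packages.sh
```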
This patch adds a (default-true) input called `cache-enabled` to the
setup-build action, so that when the input is false, ccache is not set up
on the host machine. This patch also sets the input to false for the
release builds.
Since PRs created by the GitHub action bot cannot trigger workflows (and
thus build tests), this patch uses the token for a GitHub app that was
specifically created for the RollPyTorch action.
We previously used a fork of the actions/cache repository for the PyTorch
cache since the actions/cache repo did not support read-only caches.
Now that actions/cache supports separate read and write steps, this
patch switches back to the actions/cache repo.
This patch, by itself, doesn't fix caching on Windows, but once a new
release of ccache is available, caching for Windows builds should start
working again (validated by building ccache from source and using it
with LLVM builds).
Ccache rejects caching when either the `/Zi` or `/ZI` flags are used
during compilation on Windows, since these flags tell the compiler to
embed debug information in a PDB file (separate from the object file
produced by the compiler). In particular, our CI builds add the `/Zi`
flag, making ccache mark these compiler invocations as uncacheable.
But what caused our CI to add debug flags, especially when we specified
`-DCMAKE_BUILD_TYPE=Release`? On Windows, unless we specify the
`--config Release` flag during the CMake build step, CMake assumes a
debug build. So all this while, we had been producing debug builds of
torch-mlir for every PR! No wonder it took so long to build the Windows
binaries.
The reason for having to specify the configuration during the _build_
step (as opposed to the _configure_ step) of CMake on Windows is that
CMake's Visual Studio generators will produce _both_ Release and Debug
profiles during the CMake configure step (thus requiring a build-time
value that tells CMake whether to build in Release or Debug mode).
Luckily, on Linux and macOS, the `--config` flag seems to be simply
ignored, instead of causing build errors.
Strangely, based on cursory tests, it seems that on Windows we need to
specify the Release configuration both as `-DCMAKE_BUILD_TYPE=Release`
and as `--config Release`; dropping either made my build switch to a
Debug configuration.
Additionally, there is a bug in ccache v4.8 (although this is addressed
in trunk) that causes ccache to reject caching if the compiler
invocation includes any flag that starts with `/Z`, including `/Zc`,
which is added by LLVM's HandleLLVMOptions.cmake and which isn't related
to debug info or PDB files. The next release of ccache should include
the fix, which is to reject caching only for `/Zi` and `/ZI` flags and
not all flags that start with `/Z`.
As a side note, debugging this problem was possible because of ccache's
log file, which is enabled by: `ccache --set-config="log_file=log.txt"`.
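For reference, the commands discussed above, with illustrative source and build directories:
```
# Enable ccache's log file to debug cache misses.
ccache --set-config="log_file=log.txt"

# On Windows (multi-config Visual Studio generators), both the cache
# variable at configure time and --config at build time are needed to
# actually get a Release build.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```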
The GitHub action for creating the PR expects that either the changes
are not committed (in which case it commits them with the specified
commit message) or that the commit exists but that it is also pushed to
remote.
Prior to this patch, we created the commit but did not push it to
remote, causing failures. This patch leaves the changes uncommitted so
that they're committed and pushed to remote as part of the PR creation.
Currently, we run just the Linux in-tree tests when the RollPyTorch
workflow runs, but this is insufficient since WHL files for macOS or
Windows are sometimes not uploaded by PyTorch, causing the RollPyTorch
action to pass but all subsequent torch-mlir CI tests to fail because of
the broken build.
The easiest way to validate the RollPyTorch action on all platforms is
to run the standard set of tests that we run for each submitted PR, so
this patch makes the RollPyTorch action submit a PR instead of
committing the changes to the main branch directly. The PR is assigned
to a handful of folks for review, although this can be changed in the
future.
Despite using sudo to delete the workspace directory, we still
occasionally run into checkout errors. This patch thus drops the
deletion of the workspace prior to checkout. It also restricts the
number of parallel jobs in the submodule fetch step to just one, to try
to resolve the checkout issue ("index.lock: File exists.").
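Illustratively, the restricted fetch corresponds to something like the following (the exact flags in the workflow may differ):
```
# Serialize submodule checkout to avoid racing on .git/.../index.lock.
git submodule update --init --force --depth=1 --jobs=1
```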
We have recently started seeing errors like:
```
Synchronizing submodule url for 'externals/llvm-project'
Synchronizing submodule url for 'externals/mlir-hlo'
/usr/bin/git -c protocol.version=2 submodule update --init --force --depth=1
Error: fatal: Unable to create '/home/anush/actions-runner/_work/torch-mlir/torch-mlir/.git/modules/externals/llvm-project/index.lock': File exists.
```
As a workaround, this patch removes the workspace directory before the
checkout step.