Commit Graph

41 Commits (31ffb815e454416f8aa924798c459e8aec8cca9b)

Author SHA1 Message Date
Ashay Rane aefa7be6fa
CI: build torch-mlir python package for Python v3.8 (#1827)
Previously, torchvision had not released WHL files for Python v3.8,
causing failures in torch-mlir python package builds, so we had disabled
building for Python v3.8.

Now that the WHL files are back, this patch re-enables v3.8 builds.
2023-01-23 17:58:01 -06:00
powderluv 4dbdc179b7
Bump to Python 3.8 (#1756) 2022-12-29 19:04:54 -08:00
Daniel Ellis 07a65961dd
Disable pypi publishing.
See https://github.com/llvm/torch-mlir/issues/1709
2022-12-13 11:45:41 -05:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
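For illustration only, here is a minimal Python sketch of what a paired
shape rule and dtype rule for a simple unary op could look like under this
mechanism. The function names and exact signatures are hypothetical; the
real conventions are described in the docs referenced above.

    from typing import List, Tuple

    def unary_op_shape(self_shape: List[int]) -> List[int]:
        # Shape rule: the result has the same shape as the input.
        return self_shape

    def unary_op_dtype(self_rank_dtype: Tuple[int, int]) -> int:
        # Dtype rule: the result has the same dtype as the input; rank and
        # dtype are assumed here to be passed around as plain integers.
        _rank, dtype = self_rank_dtype
        return dtype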
2022-12-13 08:25:41 -08:00
Sean Silva 7731211d02 Remove eager_mode
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.

Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
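As a rough sketch of the general idea (not the removed eager_mode code),
op-by-op interception with `__torch_dispatch__` can look like the
following, assuming a PyTorch build that provides `TorchDispatchMode`:

    import torch
    from torch.utils._python_dispatch import TorchDispatchMode

    class OpLogger(TorchDispatchMode):
        # Intercept every ATen op as it executes and log it -- the kind of
        # hook a lockstep numerical-accuracy debugger builds on.
        def __torch_dispatch__(self, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            out = func(*args, **kwargs)
            print(f"{func}: output shape {tuple(getattr(out, 'shape', ()))}")
            return out

    with OpLogger():
        x = torch.randn(2, 3)
        y = torch.tanh(x) + 1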
2022-12-09 03:50:00 -08:00
Sean Silva 29c8823464 [e2e tests] Rename default config from "refbackend" to "linalg"
This more accurately reflects what it is. The previous name was
conflating the use of RefBackend (which `linalg`, `tosa`, and `mhlo`
configs all use) with the use of the linalg backend (e.g. TorchToLinalg).

This conflation was artificially giving the linalg backend a "privileged"
position, which we want to avoid. We still keep it as the default
backend, and it remains the most complete, but at least there's no
artificial boosting.
2022-12-08 01:34:46 -08:00
Daniel Ellis 98d80a642a
Publish releases to PyPI after build 2022-12-07 10:01:55 -05:00
Sean Silva 28957adaac [torchdynamo] Initial TorchDynamo support
This adds a basic e2e Config for TorchDynamo using
Linalg-on-Tensors/RefBackend.
But TorchDynamo is pretty orthogonal to
various other pieces, so it should compose nicely with variations like:
- Switching out all the backends (Linalg-on-Tensors, TOSA, MHLO)
- PyTorch functionalization and decompositions
- Taking the example inputs and compiling with all dynamic or all static
  shapes without duplicating tests.

This adds it to the CI, but there are still a lot of XFAIL's.

This also adds a helper `from torch_mlir.dynamo import
make_simple_dynamo_backend` which simplifies some of the steps for
making a Torch-MLIR-based TorchDynamo backend. We include "simple" in
the name because we are going to be exploring various things next from
the long-term roadmap.
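As a hedged sketch of how this helper might be used: the backend body
below is a placeholder that just runs the FX graph eagerly (a real backend
would lower it through Torch-MLIR), and the decorator usage plus the
`torch._dynamo.optimize` call reflect the TorchDynamo API of this era,
which may differ in later releases.

    import torch
    from torch_mlir.dynamo import make_simple_dynamo_backend

    @make_simple_dynamo_backend
    def my_backend(fx_graph: torch.fx.GraphModule, example_inputs):
        # Placeholder: a real backend would compile fx_graph (e.g. via
        # Linalg-on-Tensors + RefBackend) and return the compiled callable;
        # returning the GraphModule itself just runs it eagerly.
        return fx_graph

    def model(x):
        return torch.tanh(x) + 1

    compiled = torch._dynamo.optimize(my_backend)(model)
    print(compiled(torch.randn(3)))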

The next steps are:
- Burn down all the XFAIL's.
- Start working on the pieces from the [long-term roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md).
  - Add functionalization/decompositions into the TorchDynamo flow and
    remove reliance on the current Torch-MLIR "frontend".
  - Write a pure-Python direct FX->MLIR importer.
  - Hook up the new PyTorch symbolic shape stuff.
  - Explore PrimTorch decompositions for simplifying backends.
2022-11-24 04:10:25 -08:00
Tanyo Kwok a9fb0c5459
fix mhlo e2e ci crashes (#1620)
* fix mhlo e2e ci crashes

* add passed tests

* calc dynamic positive dim
2022-11-21 21:50:35 +08:00
Ashay Rane eec9a7e022
ci: make pip skip cached packages while installing dependencies (#1570)
We want each build to be reproducible regardless of prior builds and
prior package installations, but pip, by default, uses cached packages
from previous invocations of `pip install`.  As a result, the incorrect
dependencies downloaded in the RollPyTorch workflow in the main
repository cannot be reproduced in private forks of the repository.  To
resolve this problem, this patch adds a `--no-cache-dir` flag to pip, so
that it fetches and inspects each requested package independent of prior
installations.
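For instance (a sketch, not the CI code itself -- the workflow invokes pip
directly from shell), forcing pip to bypass its package cache looks like:

    import subprocess
    import sys

    # --no-cache-dir makes pip fetch and inspect each requested package
    # independent of prior installations, as described above.
    subprocess.run([sys.executable, "-m", "pip", "install", "--no-cache-dir",
                    "-r", "requirements.txt"], check=True)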
2022-11-11 20:31:38 -06:00
Ashay Rane 79871040c9
Revert "ci: build PyTorch before building Torch-MLIR (#1542)" (#1545)
This reverts commit 805d728194.
2022-11-01 20:40:09 -05:00
Ashay Rane 805d728194
ci: build PyTorch before building Torch-MLIR (#1542)
This patch updates the build_linux_packages.sh script so that when
PyTorch needs to be built from source, it is built _before_ building
LLVM and before building Torch-MLIR.  The rationale behind this change
is that previously, when the PyTorch build was triggered through the
Torch-MLIR build, the PyTorch compilation added more entries to the
ccache artifacts.  However, since we cache the PyTorch _binary_ (i.e.
the WHL file), there is no need to add the PyTorch compilation to the
ccache artifacts.  By removing the PyTorch compilation files, we keep
the ccache artifact size small, thus reducing the number of evictions
when we exceed GitHub's allowed limit.
2022-11-01 17:03:58 -05:00
powderluv 1c579c8c39
Drop 3.9 binaries to keep under 6hrs build (#1533) 2022-10-28 06:14:08 -07:00
Ashay Rane b86ec38541
ci: use the LLVM linker instead of GNU ld (#1501)
Without this patch, CI logs contained the line:

    -- Linker detection: GNU ld

GNU ld is notoriously slow at linking large binaries, so this patch
swaps GNU ld with the LLVM linker.

Since the linker invocation is driven through the compiler, perhaps the
best way to use the LLVM linker is to tell the compiler which linker
binary to use.  This patch adds the `-fuse-ld=lld` flag to all Linux
builds of Torch-MLIR in CI to make it use lld.
2022-10-18 00:43:04 -05:00
Ashay Rane a9942f343a
Cache PyTorch source builds to reduce CI time (#1500)
* ci: cache PyTorch source builds

This patch reduces the time spent in regular CI builds by caching
PyTorch source builds.  Specifically, this patch:

1. Makes CI look up the cache entry for the PyTorch commit hash in
   pytorch-version.txt
2. If lookup was successful, CI fetches the previously-generated WHL
   file into the build_tools/python/wheelhouse directory
3. CI sets the `TM_PYTORCH_INSTALL_WITHOUT_REBUILD` variable to `true`
4. The build_libtorch.sh script then uses the downloaded WHL file
   instead of rebuilding PyTorch

* ci: warm up PyTorch source cache during daily RollPyTorch action

This patch makes the RollPyTorch action write the updated WHL file to
the cache, so that it can be later retrieved by CI that runs for each
PR.  We deliberately add the caching step to the end of the action since
the RollPyTorch action never needs to read from the cache, although
executing this step earlier in the process should not cause problems
either.
2022-10-18 00:42:42 -05:00
Daniel Ellis c085da148a Publish Python 3.7 packages.
This is the runtime Colab uses.
2022-10-12 08:50:12 -04:00
Jae Hoon (Antonio) Kim 3e08f5a779
Fix `fromIntArrayRef` call (#1479)
* Fix fromSymint call

* Update PyTorch requirement

* Re-enable LTC
2022-10-11 13:29:07 -04:00
Ashay Rane aefbf65e27
Disable LTC and update PyTorch (#1472)
* build: disable LTC again so that we can bump PyTorch version

When built using PyTorch's master branch, the LTC code has been failing
to build for a few days.  As a result, the PyTorch version referenced by
Torch-MLIR has been stuck at the one from October 4th.

In an effort to advance the PyTorch version, this patch disables LTC, and
a subsequent patch will advance the PyTorch version.

* update PyTorch version to 1.14.0.dev20221010

Also disables the `UpSampleNearest2dDynamicFactor_basic` e2e test, since
the (PyTorch) oracle differs from the computed value for both the
refbackend and the eager_mode backends.
2022-10-10 23:05:40 -05:00
Ashay Rane 760cb13be0
build: switch to the correct directory before updating ODS (#1452) 2022-10-04 11:24:32 -05:00
Ashay Rane 8a8e779529
Disable auto-update of PyTorch version until CI script stabilizes (#1456)
Instead of letting the auto-update script either fail because of script
errors or commit bad versions, this patch makes the update process
manual, for now.  Once the script stabilizes, I will re-enable its
periodic execution.
2022-10-04 03:02:44 -05:00
Ashay Rane da02390188
build: update ODS and shape library when updating PyTorch (#1450)
Updating the PyTorch version may break the Torch-MLIR build, as it did
recently, since the PyTorch update caused the shape library to change,
but the shape library was not updated in the commit for updating
PyTorch.

This patch introduces a new default-off environment variable to the
build_linux_packages.sh script called `TM_UPDATE_ODS_AND_SHAPE_LIB`
which instructs the script to run the update_torch_ods.sh and
update_shape_lib.sh scripts.

However, running these scripts requires an in-tree build and the tests
that run for an in-tree build of Torch-MLIR are more comprehensive than
those that run for an out-of-tree build, so this patch also swaps out
the out-of-tree build for an in-tree build.
2022-10-02 18:02:34 -05:00
Ashay Rane 95ffa27733
release: pin PyTorch version in release requirements (#1435)
Prior to this patch, the release process (`pip wheel`) retrieved
dependencies from the pyproject.toml file, which specified a version of
PyTorch that defaulted to the most recent nightly release.  Instead, we
want the release process to use the same pinned PyTorch version as the
development build of PyTorch.

Since TOML files can't reference the pytorch-requirements.txt file, this
patch puts the dependencies from pyproject.toml into
whl-requirements.txt, which references pytorch-requirements.txt.
2022-09-29 14:09:31 -05:00
Ramiro Leal-Cavazos 2509641cab
Add `--no-index` to CI's git-diff check on generated files (#1428)
`git diff` does not work by default on untracked files. Since the
function `_check_file_not_changed_by` stores the new generated file in
an untracked file, `git diff` was not catching any modifications in
the new generated file. This commit adds the flag `--no-index` to make
`git diff` work with untracked files.
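A sketch of the pattern being fixed (with a hypothetical helper name; the
real `_check_file_not_changed_by` differs): the freshly generated file
lives at an untracked path, so `--no-index` is needed for `git diff` to
compare it against the checked-in file.

    import subprocess
    import tempfile

    def check_file_not_changed_by(generate, checked_in_path: str) -> bool:
        # Write the newly generated contents to an untracked temp file.
        with tempfile.NamedTemporaryFile("w", suffix=".generated",
                                         delete=False) as f:
            f.write(generate())
            temp_path = f.name
        # Without --no-index, git diff ignores the untracked temp file.
        result = subprocess.run(["git", "diff", "--no-index", "--exit-code",
                                 checked_in_path, temp_path])
        return result.returncode == 0  # 0 means no differences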
2022-09-29 10:31:40 -07:00
Ashay Rane 53e76b8ab6
build: create RollPyTorch to update PyTorch version in Torch-MLIR (#1419)
This patch fetches the most recent nightly (binary) build of PyTorch,
before pinning it in pytorch-requirements.txt, which is referenced in
the top-level requirements.txt file.  This way, end users will continue
to be able to run `pip install -r requirements.txt` without worrying whether
doing so will break their Torch-MLIR build.

This patch also fetches the git commit hash that corresponds to the
nightly release, and this hash is passed to the out-of-tree build so
that it can build PyTorch from source.

If we were to sort the torch versions as numbers (in the usual
descending order), then 1.9 appears before 1.13.  To fix this problem,
we use the `--version-sort` flag (along with `--reverse` for specifying
a descending order).  We also filter out lines that don't contain
version numbers by only considering lines that start with a digit.
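To illustrate the sorting pitfall in Python (the script itself uses
`sort --version-sort --reverse`; the `packaging` dependency here is only
for the example):

    from packaging.version import Version

    versions = ["1.9.0", "1.13.0.dev20220928", "1.12.1"]

    # Plain string sort: "1.9" incorrectly lands ahead of "1.13".
    print(sorted(versions, reverse=True))
    # ['1.9.0', '1.13.0.dev20220928', '1.12.1']

    # Version-aware sort, equivalent in spirit to --version-sort --reverse.
    print(sorted(versions, key=Version, reverse=True))
    # ['1.13.0.dev20220928', '1.12.1', '1.9.0']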

For clarity, this patch also renames the variable
`torch_from_src` to `torch_from_bin`, since that variable is initialized
to `TM_USE_PYTORCH_BINARY`.

Co-authored-by: powderluv <powderluv@users.noreply.github.com>
2022-09-28 15:38:30 -05:00
Ashay Rane 78bfbf2474
build: re-enable TOSA tests after upstream LLVM rollback (#1417) 2022-09-27 07:35:33 -05:00
Jae Hoon (Antonio) Kim 3e27aa2be3
Fix as_strided/slice symint (#1401)
* Fix as_strided symint

* Re-enable LTC tests

* Re-enable LTC

* Add hardtanh shape inference function

* Fix slice symint
2022-09-26 12:16:49 -04:00
Sean Silva 7a77f9fe3d Add a way to turn off crashing tests
This adds a very long and obnoxious option to disable crashing tests.
The right fix here is to use the right multiprocessing techniques to
ensure that segfaulting tests can be XFAILed like normal tests, but we
currently don't know how to implement "catch a segfault" in Python
(patches or even just ideas welcome).

Motivated by #1361, where we ended up removing two tests from *all*
backends due to a failure in one backend, which is undesirable.
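One possible shape of the multiprocessing approach alluded to above (a
sketch, not the project's test-runner code): run the test body in a child
process so that a segfault surfaces as a negative exit code, which could
then be reported as an expected failure.

    import multiprocessing

    def _passing_test():
        pass

    def run_isolated(test_fn) -> str:
        # A segfault kills only the child process; the killing signal shows
        # up as a negative exit code on the Process object.
        p = multiprocessing.Process(target=test_fn)
        p.start()
        p.join()
        if p.exitcode is not None and p.exitcode < 0:
            return f"CRASHED (signal {-p.exitcode})"  # could be XFAILed
        return "PASSED" if p.exitcode == 0 else "FAILED"

    if __name__ == "__main__":
        print(run_isolated(_passing_test))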
2022-09-23 05:01:39 -07:00
Sean Silva 566234f97a
Disable LTC again (#1400)
https://github.com/llvm/torch-mlir/issues/1396
2022-09-22 17:49:13 -05:00
Jae Hoon (Antonio) Kim 8967463980
Fix symint ops and blacklist `lift_fresh_copy` (#1373)
* Add symint to native functions yaml

* Re-enable LTC

* Fix new_empty_strided and narrow_copy
2022-09-20 10:16:04 -04:00
Sean Silva 7fa31817c5
Fix generated file checks (#1338)
No idea how this slipped by. Sorry about that.

Fixes #1334
2022-09-02 12:12:42 -07:00
powderluv 234b2f2bd4
Fix release builds to only build release (#1333)
We were defaulting to building Release and running tests. Tests are spawned separately.
2022-09-02 03:37:57 -07:00
powderluv 729609831c
Remove setting ulimit for docker runs (#1325)
We added both ipc=host and explicit ulimits. This _may_ be causing slowdowns on GHA. Removing the ulimit setting still passes all the CI tests locally. `--ipc=host` is still required.
2022-08-31 20:37:53 -07:00
powderluv 9dbe41a85c
Drop Python3.8 binary releases. Still builds from source. (#1329)
Shows low download count and we can add it back if people ask for it. Should save release artifacts space and Release build time.
2022-08-31 20:30:01 -07:00
Sean Silva a924de3e1a Slightly tweak generated file checks
The new logic has the following benefits:
1. It does not clobber the working tree state. We expect testing to not
   change the work tree.
2. It correctly handles the case where a user has changes to the
   generated files, but hasn't checked them in yet (this happens
   frequently when adding new ops).
2022-08-31 20:03:25 -07:00
powderluv 928c815ce2
Add shapelib and Torch ODS gen tests (#1318) 2022-08-31 15:01:59 -07:00
powderluv 9f061ea97d
Dockerize CI + Release builds (#1234)
Gets both CI and Release builds integrated in one workflow.
Mounts ccache and pip cache as required for fast iterative builds.
Current Release docker builds still run with root perms; fix them
in the future to run as the same user.

There may be some corner cases left especially when switching
build types etc.

Docker build TEST plan:

tl;dr:
Build everything: Releases (Python 3.8, 3.9, 3.10) and CIs.
  TM_PACKAGES="torch-mlir out-of-tree in-tree"
  2.57s user 2.49s system 0% cpu 30:33.11 total

Out of Tree + PyTorch binaries:

  Fresh build (purged cache):
    TM_PACKAGES="out-of-tree"
    0.47s user 0.51s system 0% cpu 5:24.99 total

  Incremental with ccache:
    TM_PACKAGES="out-of-tree"
    0.09s user 0.08s system 0% cpu 34.817 total

Out of Tree + PyTorch from source

  Incremental
    TM_PACKAGES="out-of-tree" TM_USE_PYTORCH_BINARY=OFF
    1.58s user 1.81s system 2% cpu 1:59.61 total

In-Tree + PyTorch binaries:

  Fresh build and tests: (purge ccache)
  TM_PACKAGES="in-tree"
  0.53s user 0.49s system 0% cpu 6:23.35 total

Fresh build but with prior ccache
  TM_PACKAGES="in-tree"
  0.45s user 0.66s system 0% cpu 3:57.47 total

  Incremental in-tree with all tests and regression tests
  TM_PACKAGES="in-tree"
  0.16s user 0.09s system 0% cpu 2:18.52 total

In-Tree + PyTorch from source

  Fresh build and tests: (purge ccache)
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  2.03s user 2.28s system 0% cpu 11:11.86 total

Fresh build but with prior ccache
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.58s user 1.88s system 1% cpu 4:53.15 total

  Incremental in-tree with all tests and regression tests
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.09s user 1.10s system 1% cpu 3:29.84 total

  Incremental without tests
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF TM_SKIP_TESTS=ON
  1.52s user 1.42s system 3% cpu 1:15.82 total

In-tree + out-of-tree + PyTorch binaries
  TM_PACKAGES="out-of-tree in-tree"
  0.25s user 0.18s system 0% cpu 3:01.91 total

To clear all artifacts:
rm -rf build build_oot llvm-build libtorch docker_venv
externals/pytorch/build
2022-08-30 11:07:25 -07:00
powderluv 0d1aa43764
Drop Python 3.7x from the nightly binary builds (#1246) 2022-08-18 16:34:12 -07:00
Ashay Rane 874fdb7e42
build: improve robustness of cmake and shell scripts (#1018)
On my local machine, `unzip` didn't exist (producing a "command not
found" error), but CMake ignored the error.  Although the build did
succeed (because it found a previously-built version of libtorch), it
seems better to abort builds on such failures, so this patch checks the
return code of all external process invocations.

Along similar lines, this patch also updates the shell scripts in
`build_tools` to extensively use double-quoting to prevent unintentional
word splitting or globbing.  Since some of the scripts execute `rm`
while using shell variables, this patch also adds the preamble `set -u`
to abort execution if an undefined variable is referenced, so that we
reduce the chances of executing `rm -rf /` if the path expression
happens to refer to an undefined variable.
2022-07-06 14:39:30 -07:00
powderluv 6d09c98b2f
Fix version information in Release builds (#788)
Env vars seem to be lost in manylinux docker.
Use a version file like IREE does.
2022-04-25 14:13:17 -07:00
powderluv 4ef61aa27f
Minor buildsystem fixes (#778)
Sets up auto-pinning of latest torch-nightly
2022-04-21 15:53:00 -07:00
powderluv cc3a4a58ef
Add oneshot release snapshot for test/ondemand (#768)
* Add oneshot release snapshot for test/ondemand

Add some build scripts to test new release flow based on IREE.
Won't affect current builds; once this works well we can plumb it
in.

Build with manylinux docker

* Fixes a few issues found when debugging powderluv's setup.

* It is optional to link against Python3_LIBRARIES. Check that and don't do it if they don't exist for this config.
* Clean and auditwheel need to operate on sanitized package names. So "torch_mlir" vs "torch-mlir".
* Adds a pyproject.toml file that pins the build dependencies needed to detect both Torch and Python (the MLIR Python build was failing to detect because Numpy wasn't in the pip venv).
* Commented out auditwheel: These wheels are not PyPI compliant since they weakly link to libtorch at runtime. However, they should be fine to deploy to users.
* Adds the --extra-index-url to the pip wheel command, allowing PyTorch to be found.
* Hack setup.py to remove the _mlir_libs dir before building. This keeps back-to-back versions from accumulating in the wheels for subsequent versions. IREE has a more principled way of doing this, but what I have here should work.

Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
2022-04-21 02:19:12 -07:00