Commit Graph

1782 Commits (67ab708b636d300db003fbec902411315e222bfb)
 

Author SHA1 Message Date
Yuanqiang Liu 2793a2bd41
fix TorchToMhlo Conversion cmake dependency (#1549) 2022-11-09 18:34:53 -06:00
Sean Silva ec4e01c321
Add Suraj to TorchToTOSA owners (#1566) 2022-11-09 14:55:13 -08:00
Jae Hoon (Antonio) Kim 2ec4b06bbb
Remove MakeView from IR Builder (#1552)
* Remove MakeView from IR Builder

* Update PyTorch requirements
2022-11-09 13:46:34 -05:00
Ashay Rane 9a73b9e6c7
build: un-pin the ninja pip package version (#1562)
Now that the ninja pip package issue has been resolved, this patch
removes the pinned version from requirements.txt so that we can go back
to using the most recent version of ninja.
2022-11-06 14:12:28 -06:00
Ashay Rane e16ccce373
ci: re-add powershell script for windows release builds (#1561)
This file was removed as part of the PR that added build caching for
Windows.
2022-11-06 12:48:38 -06:00
Roll PyTorch Action e78e9cd782 update PyTorch version to 1.14.0.dev20221105 2022-11-06 14:04:59 +00:00
Ashay Rane 27d8d47022
build: pin ninja pip version temporarily to resolve build failure (#1558)
Going from ninja v1.10.2 to v1.11.1, there is a change that breaks the
CI builds with the following error:

```
CMake Error at CMakeLists.txt:47 (project):
  Running
   '/main_checkout/torch-mlir/docker_venv/bin/ninja' '--version'
  failed with:
CMake Error: CMAKE_ASM_COMPILER not set, after EnableLanguage
```

Ostensibly, the reason for the error about the ASM compiler is that
llvm-project/llvm/CMakeLists.txt includes ASM among the list of
languages used in the LLVM project. Adding `-DCMAKE_ASM_COMPILER=clang`
does not resolve the error.

Until we figure out why the new version of ninja causes the build
failures, this patch pins the ninja version to the one that worked.
2022-11-05 12:20:56 -05:00
Roll PyTorch Action 5ee20e70a1 update PyTorch version to 1.14.0.dev20221104 2022-11-04 22:01:57 +00:00
Ashay Rane d99b2ddb1b
importer: fix usage after PyTorch update (#1555)
Unless requested otherwise, PyTorch no longer installs most of the
header files under the caffe2 directory (see
https://github.com/pytorch/pytorch/pull/87986).  This breaks our
importer code, which relies on the `MakeGuard()` function from those
headers to execute statements in the event of exceptions.

To fix this issue, this patch implements a rudimentary version of
PyTorch's ScopeGuard, where once an instance of the class goes out of
scope, it executes a predefined method.
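
The scope-guard idea is simply "run a predefined cleanup when control leaves a
scope, even via an exception."  The actual patch is C++ (the cleanup runs in a
destructor); the following is only a rough Python analogue of the concept, with
all names hypothetical:

```
from contextlib import contextmanager

@contextmanager
def scope_guard(cleanup):
    # Run `cleanup` when the enclosing `with` block exits, whether it
    # exits normally or via an exception.
    try:
        yield
    finally:
        cleanup()

# Hypothetical usage: reset a flag even if the guarded code raises.
state = {"importing": False}
with scope_guard(lambda: state.update(importing=False)):
    state["importing"] = True
    # ... work that may raise goes here ...
assert state["importing"] is False
```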
2022-11-04 15:02:23 -05:00
Vivek Khandelwal fedf8c0640 [MLIR][TORCH] Add E2E support for aten.upsample_nearest2d_backward.vec op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-11-04 22:10:07 +05:30
Ashay Rane db5a496eb4
build: enable update scripts to work with out-of-tree builds (#1553)
Before this patch, the update_shape_lib.sh and update_torch_ods.sh
scripts only worked on in-tree builds, which implied that the
RollPyTorch action was forced to run the longer-running in-tree build.
As a result of this patch, we should be able to run through the basic
checks in the RollPyTorch action faster, while running the full suite of
tests off the critical path.

The key change in this patch is that the update scripts now pick
whichever of the in-tree and out-of-tree build directories was modified
most recently.  The change also correctly handles the case when only
one of the two directories exists.
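
The actual scripts are shell, but the selection logic boils down to comparing
modification times; a minimal Python sketch of the same idea (the directory
names are assumptions for illustration, not the repository's real paths):

```
import os

def pick_build_dir(in_tree="build", out_of_tree="llvm-build"):
    """Return whichever build directory was modified most recently."""
    candidates = [d for d in (in_tree, out_of_tree) if os.path.isdir(d)]
    if not candidates:
        raise FileNotFoundError("no build directory found")
    # If only one directory exists, max() simply returns it.
    return max(candidates, key=os.path.getmtime)
```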
2022-11-04 08:13:02 -05:00
Sean Silva de4bcbfe9b [docs] Centralize all images in docs/images/ 2022-11-04 03:12:17 -07:00
Ashay Rane 2846776897
ci: enable ccache on Windows (#1548)
This patch makes a few small, but key, changes to enable ccache on
Windows.  First, it replaces the hendrikmuhs/ccache-action action with
direct command-line invocations of the ccache binary, since the action
has two bugs: one causes CI to refer to different ccache artifacts
before versus after the build on Windows, and the other can sometimes
cause the action to incorrectly infer that the cache is empty.

Second, this patch slightly alters the cache key, so that our old cache
artifacts, which have grown too big, are eventually discarded in favor
of the new, smaller cache artifacts.  Along the way, this patch also
keeps the RollPyTorch action's cache artifact separate from the regular build's
cache artifact so as to keep these artifacts small, and also because the
RollPyTorch action is off the critical path for most contributors.

Finally, this patch makes small changes to the CMake file so that on
Windows, the ccache binary is added as a prefix, as recommended on the
[ccache Wiki](https://github.com/ccache/ccache/wiki/MS-Visual-Studio).
2022-11-03 12:17:22 -05:00
Ashay Rane f847642495
CI script improvements (#1547)
* ci: update versions of external actions

Node.js 12 actions are deprecated and will eventually go away, so this
patch bumps the old actions to their latest versions that use Node.js
16.

* ci: replace deprecated action with bash commands

The llvm/actions/install-ninja action uses Node.js 12, which is
deprecated.  Since that action has not been updated to work with Node.js 16,
this patch replaces that action with equivalent bash commands to install
Ninja.

* ci: use smaller ccache artifacts to reduce evictions

Over time, our ccache artifacts have grown quite large (some as large as
1.3 GB), which results in us routinely exceeding GitHub's limits, thus
triggering frequent cache evictions.  As a result, cache downloads and
uploads take unnecessarily long, and fewer cache entries are available.

Based on experiments on a clean cache state, it appears that we need
less than 300 MB of (compressed) ccache artifacts for each build type.
Anything larger than that will accrue changes from the past that aren't
needed.

To alleviate the cache burden, this patch sets the maximum ccache size
to 300 MB.  This change should not affect the success or failure of
our builds.  I will monitor the build times to check whether this change
causes any performance degradation.

* ci: use consistent platform identifiers

Prior to this patch, some of our builds ran on `ubuntu-latest`, while
others ran on `ubuntu-20.04` or `ubuntu-22.04`, with similar
inconsistencies for macOS and Windows.  This patch instead sets all
Linux builds to run on `ubuntu-latest`, all macOS builds to run on
`macos-latest`, and all Windows builds to run on `windows-latest`, to
make debugging future CI failures a little easier.
2022-11-02 21:37:01 -05:00
Sean Silva 2162253401 [docs] Add long-term roadmap
Add a roadmap covering expected project evolution over the next 1-2
years.
2022-11-02 03:25:52 -07:00
Ashay Rane 031d127940
ci: introduce read-only and read-write PyTorch build caches (#1546)
Until recently, we had to either risk feature branches creating PyTorch
build caches (which were unusable by the main branch or other parallel
feature branches because of GitHub's rules around sharing caches among
branches) or limit the PyTorch build caches to only the main
branch, causing CI runs on feature branches to be terribly slow because
they had to rebuild PyTorch each time.

This patch enables the best of both worlds by using a fork
(github.com/ashay/cache) of GitHub's cache action, where the fork
adds an option (called `save`) which, when set, uploads a new cache
entry.  We thus set this `save` flag only when we're building PyTorch
from source in Torch-MLIR's main branch, whereas all other builds set
this `save` flag to `false`.

The ability to conditionally update the cache has been an oft-requested
feature on the original (github.com/actions/cache) repository and
multiple unmerged PRs exist to allow conditional cache updates, so it is
likely that using the fork is only a temporary solution.
2022-11-01 23:26:17 -07:00
Ashay Rane 79871040c9
Revert "ci: build PyTorch before building Torch-MLIR (#1542)" (#1545)
This reverts commit 805d728194.
2022-11-01 20:40:09 -05:00
Ashay Rane 805d728194
ci: build PyTorch before building Torch-MLIR (#1542)
This patch updates the build_linux_packages.sh script so that when
PyTorch needs to be built from source, it is built _before_ building
LLVM and before building Torch-MLIR.  The rationale behind this change
is that previously, when the PyTorch build was triggered through the
Torch-MLIR build, the PyTorch compilation added more entries to the
ccache artifacts.  However, since we cache the PyTorch _binary_ (i.e.
the WHL file), there is no need to add the PyTorch compilation to the
ccache artifacts.  By removing the PyTorch compilation files, we keep
the ccache artifact size small, thus reducing the number of evictions
when we exceed GitHub's allowed limit.
2022-11-01 17:03:58 -05:00
Ashay Rane 0409595ccc
mlir: add missing dependency on TableGen targets (#1537)
lib/Dialect/Torch/Utils/Utils.cpp includes TorchOps.h, which, by way of
included header files, refers to both TorchOps.h.inc and
TorchTypes.h.inc.  However, the build rules do not specify the
dependency of the `TorchMLIRTorchUtils` target on the TableGen-generated
header files, causing spurious build errors.

This patch fixes the problem by adding `MLIRTorchOpsIncGen` and
`MLIRTorchTypesIncGen` to the list of dependencies of
`TorchMLIRTorchUtils`.
2022-11-01 14:59:11 -05:00
powderluv 1a33577860
remove spurious ref in publish pages (#1536)
We don't need to pass in optional tag information.
2022-11-01 09:42:21 -07:00
Tanyo Kwok 17bc7c89cc
build: update llvm tag to 74fb770d (#1539)
* build: update llvm tag to 74fb770d

This commit makes the following changes needed to bump LLVM:

+ Replace usages of `tensor::createPadScalarOp` (see https://reviews.llvm.org/D136493)
+ Update file checks
2022-11-01 15:27:09 +08:00
Ashay Rane a8970101dc
pytorch: rename pytorch-version.txt to pytorch-hash.txt (#1541)
This patch is part of a larger set of improvements to the CI/build
system.  In the code, "version" refers to the string that contains the
release identifier, such as 1.14.0.dev20221028, so calling the file that
contains the commit hash pytorch-version.txt creates confusion.  For the
sake of simplicity, this patch renames that file to pytorch-hash.txt.
2022-10-31 22:03:05 -05:00
Ashay Rane 2cf1092d4d
ci: restrict PyTorch cache to just the main branch (#1540)
If PyTorch build caches are created on a branch other than the main
branch, then GitHub does not share those caches with the main branch,
making the CI run for each PR slow.  This patch resolves the
problem by letting only the main branch create and use PyTorch build
caches.
2022-10-31 15:14:53 -05:00
Jae Hoon (Antonio) Kim 0701464c47
Remove view ops from IR builder (#1534)
* Remove view ops from IR builder

* Update PyTorch requirements
2022-10-30 21:42:44 -04:00
xndcn 759057cbdd [MLIR][TORCH] Fix wrong parameter name "supportFPInputOnly"
The parameter "supportFPInputOnly" of function createPoolingOp() is
supposed to be "supportNonFPInput", which was added to distinguish
between "MaxPool2d" and "AvgPool2d" op in #718
2022-10-30 23:18:08 +08:00
Vivek Khandelwal c86177730d [MLIR][TORCH] Add E2E support for aten.fill.Tensor op
This commit adds the decomposition for `aten.fill.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-30 18:40:47 +05:30
powderluv 87ab714ed6
Update buildRelease.yml (#1535) 2022-10-30 00:14:54 -07:00
Ramiro Leal-Cavazos b723186983
Remove all but one of valsem ops + move fill.Scalar to elementwise (#1531)
This commit removes almost all of the valsem ops, since the
value-semantics versions of these ops now exist in PyTorch. The only op
still missing is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
2022-10-28 15:06:11 +00:00
powderluv 1c579c8c39
Drop 3.9 binaries to keep under 6hrs build (#1533) 2022-10-28 06:14:08 -07:00
Vivek Khandelwal ea602127b6 [MLIR][TORCH] Add E2E support for aten.addcmul_ and aten.addcdiv_ op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-28 16:07:50 +05:30
Roll PyTorch Action 5d5aa47cdf update PyTorch version to 1.14.0.dev20221027 2022-10-27 16:35:00 +00:00
Ashay Rane a11ea93877
build: update llvm tag to f8b84268 (#1528)
The only change required was to update a test to reflect the changes
in https://reviews.llvm.org/D136541.
2022-10-26 15:33:53 -05:00
Ahmed S. Taei 8da8d971c8
[Bazel] Use glob instead of explicit files (#1529) 2022-10-26 13:28:00 -07:00
Roll PyTorch Action ce01c4f9a7 update PyTorch version to 1.14.0.dev20221026 2022-10-26 15:29:42 +00:00
powderluv bbde4e163f
Add Windows Builder (#1521)
Add a PowerShell script to build Windows .whl packages.
Disable LTC as it doesn't build on Windows.
Add GHA hooks.
Use Python 3.10.8.
2022-10-25 16:13:31 -07:00
Ashay Rane 801452b2f4
ci: make RollPyTorch run only on the Torch-MLIR repo (#1516) 2022-10-25 17:56:59 -05:00
Ahmed S. Taei d865c1de7a
[Bazel] Use glob instead of explicit files (#1520) 2022-10-25 12:23:24 -07:00
Daniel Ellis 3e199aaf11
Add better error message for single-tensor tuple returns. 2022-10-25 12:48:55 -04:00
Vivek Khandelwal ca87033d2f [MLIR][TORCH] Add E2E support for aten.mse_loss op
This commit adds decomposition for the `aten.mse_loss` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-25 21:06:58 +05:30
Roll PyTorch Action 2f097d3976 update PyTorch version to 1.14.0.dev20221025 2022-10-25 13:36:46 +00:00
Jae Hoon (Antonio) Kim 2f300935bf
Reference lazy graph executor (#1507)
* Add LazyGraphExecutor registration

* Update PyTorch version to 1.14.0.dev20221024

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2022-10-24 17:15:11 -04:00
powderluv 3f883802e8
Relax the need for only CPU versions of PyTorch (#1505)
* Relax the need for only CPU versions of PyTorch

This allows installing the corresponding PyTorch CUDA / ROCm versions and using torch-mlir.

* Remove obsolete comments
2022-10-24 13:46:31 -07:00
Roll PyTorch Action 470a2f62f3 update PyTorch version to 1.14.0.dev20221021 2022-10-21 15:25:28 +00:00
Sean Silva efbebf2001 [docs] Initial code_owners.md
As discussed in #1506, this should help to distribute the review load
and ensure timely, high-quality reviews.

Closes #1506
2022-10-21 04:53:00 -07:00
Sean Silva 0dab31666e Fix old reference to !numpy.ndarray 2022-10-21 02:10:18 -07:00
Ashay Rane 4a776be156
build: make PyTorch caching more robust (#1510)
Whether or not the PyTorch build is cached should not affect the success
of the Torch-MLIR build, but based on the existing code, a build may
fail if the `TM_PYTORCH_INSTALL_WITHOUT_REBUILD` variable is set but
the build cache doesn't exist.

Although that variable is set by CI upon a cache hit, nuances of
GitHub's caching behavior can create situations where the coupling
between `TM_PYTORCH_INSTALL_WITHOUT_REBUILD` and the cache lookup fails.

Specifically, a branch other than our default branch (`main`) may create
the cache entry, but because GitHub doesn't share this cache entry with
builds running on the `main` branch, the `main` branch build tries to
create its own cache entry.  However, since cache identifiers are
unique and because caches are immutable, the caching step running in the
`main` branch appears to create an invalid cache entry (of 233 bytes,
instead of the expected ~60 MB).

Consequently, subsequent builds observe a cache "hit", since caches
created by the `main` branch are shared with all other branches, but
because this cache entry is invalid (since it doesn't actually contain
the ~60 MB PyTorch WHL file), the builds fail.

One workaround would be to let only the `main` branch create caches, but
in doing so, we would also prevent other branches from _reading_ the
cache, making the builds in those branches terribly slow.

So this patch uses a different workaround: check whether the PyTorch
WHL file actually exists, even if the build observed a cache hit.  If
the file doesn't exist despite the purported cache hit, the code builds
PyTorch from source, which is the intuitive fallback.
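
In other words, a cache hit is treated as a hint rather than a guarantee.  The
build scripts are shell, but a minimal Python sketch of that fallback logic
(the wheel path and the rebuild command are assumptions, not the repository's
actual names):

```
import glob
import subprocess

def ensure_pytorch_wheel(cache_hit: bool) -> str:
    """Trust a cache hit only if a PyTorch wheel is actually present."""
    wheels = glob.glob("wheelhouse/torch-*.whl")  # illustrative path
    if cache_hit and wheels:
        return wheels[0]
    # Purported cache hit but no wheel (or no cache at all): build from source.
    subprocess.run(["./build_pytorch.sh"], check=True)  # hypothetical script
    wheels = glob.glob("wheelhouse/torch-*.whl")
    if not wheels:
        raise RuntimeError("PyTorch build did not produce a wheel")
    return wheels[0]
```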

A longer-term fix will follow after a discussion with the wider team.
2022-10-20 08:50:18 -05:00
Roll PyTorch Action 724d8d183a update PyTorch version to 1.14.0.dev20221020 2022-10-20 13:38:23 +00:00
Roll PyTorch Action c97df38e3e update PyTorch version to 1.14.0.dev20221019 2022-10-19 15:27:42 +00:00
Ashay Rane 1d28098c3c
Revert "update PyTorch version to 1.14.0.dev20221018" (#1504)
The upstream PyTorch nightly page
[https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html]
somehow dropped the link for torch-1.14.0.dev20221018 for macOS but not
for Linux or Windows, whereas our RollPyTorch action assumes that if the
nightly version is available for Linux, it is also available for macOS.
This reverts the commit that changed the PyTorch version.
2022-10-18 13:51:26 -05:00
Chi_Liu ad6f5848cb
[MLIR][TORCH] Add TorchToTosa lowering for aten.where.self op (#1454) 2022-10-18 09:39:39 -07:00