Commit Graph

2537 Commits (8e2e5eeae991c825496e22470e3d3fb766d54a66)
 

Author SHA1 Message Date
Vivek Khandelwal ab8b23e767 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.
This commit removes the test `BaddbmmDifferentDtypesModule_basic`
since PyTorch expects all operands to have the same dtype.
Ref: 2abad0c184

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-15 17:53:16 +05:30
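(A minimal eager-mode sketch, not part of the commit, illustrating why the mixed-dtype test was dropped: PyTorch now requires all three `baddbmm` operands to share a dtype.)

```python
import torch

# baddbmm computes input + batch1 @ batch2 over a batch dimension.
inp = torch.rand(2, 3, 5)
b1 = torch.rand(2, 3, 4)
b2 = torch.rand(2, 4, 5)
print(torch.baddbmm(inp, b1, b2).shape)  # torch.Size([2, 3, 5])

# The removed test mixed operand dtypes (e.g. a float64 bias with
# float32 batches), which recent nightlies reject with a RuntimeError.
```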
Yuanqiang Liu bba0f5891b
[Stablehlo] add conversion for AtenFlipOp (#2163) 2023-06-15 10:27:34 +08:00
Yuanqiang Liu 7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda (#2231) 2023-06-14 09:56:39 +08:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu ddea56a832
[Torch Dialect] fix torch.uint8's dtype infer (#2227) 2023-06-13 10:38:20 +08:00
Sean Silva dd5992514d
update PyTorch version to 2.1.0.dev20230612 (#2229)
- torch version: 2.1.0.dev20230612
- torch commit hash: 8aee9489c907eeae8af1b6df6962f3a4414c984a
- torchvision version: 0.16.0.dev20230612

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-12 07:40:35 -07:00
Christopher McGirr b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion (#2200)
Check the return type of the division to determine whether to use the
floating-point or the integer implementation of division.

The issue arose from the fact that the inputs are all integers while the
result is cast to floating point. The conversion then chose the integer
implementation of division, which is not legal in TOSA once all the
inputs are cast to floating point.

fix(TorchToLinalg): AtenDivScalarOp

Upcast the self operand as well if applicable; the self operand must
also be cast to float, since it can be an integer.
2023-06-12 11:18:38 +02:00
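(A minimal sketch, assuming standard `torch.div` semantics: integer inputs under true division produce a floating-point result, which is why the lowering must select the float implementation.)

```python
import torch

a = torch.tensor([3, 5], dtype=torch.int32)
b = torch.tensor([2, 2], dtype=torch.int32)

# Integer operands, floating-point result: the TOSA lowering must cast
# the operands to float and use float division, not an integer divide.
print(torch.div(a, b))  # tensor([1.5000, 2.5000])
```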
Tiago Trevisan Jost cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. (#2204) 2023-06-12 11:17:25 +02:00
Sean Silva bfb565143f
update PyTorch version to 2.1.0.dev20230611 (#2226)
- torch version: 2.1.0.dev20230611
- torch commit hash: ec23ae5ad407ee6719b18fc374f231225d027cf0
- torchvision version: 0.16.0.dev20230611

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-11 07:31:52 -07:00
Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
Sean Silva 5ead1d549e
update PyTorch version to 2.1.0.dev20230610 (#2225)
- torch version: 2.1.0.dev20230610
- torch commit hash: dd69d6251ace7e9bed1c09e7613eaa9f3404912e
- torchvision version: 0.16.0.dev20230610

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-10 07:40:16 -07:00
Ashay Rane c202cb5263
CI: Checkout repo so that gh knows where to look for the PR (#2223)
Without this patch, the gh command (for merging the PR) doesn't know
which repo we're referring to.
2023-06-09 21:50:19 -05:00
Sean Silva 45c0bd76a4
update PyTorch version to 2.1.0.dev20230609 (#2222)
- torch version: 2.1.0.dev20230609
- torch commit hash: b6ab7791119b08a6ce80c7810f9baa1fb893c28d
- torchvision version: 0.16.0.dev20230609

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-09 12:41:31 -05:00
Matthias Gehre 27a3d09917
Torch: Fold RuntimeAssertOp when condition is true (#2198) 2023-06-09 19:06:25 +08:00
Matthias Gehre 0959b502ae
Print the name of the backend when tests fail, to help debug issues in CI (#2210)
* Print the name of the backend when tests fail, to help debug issues in CI

* Extend the test python/test/torchscript_e2e_test/compilation_failure.py
2023-06-09 10:47:07 +02:00
Ashay Rane 33ac7c3ad1
CI: Use GitHub token when calling gh for merging RollPyTorch PR (#2220) 2023-06-08 15:07:43 -05:00
Sean Silva 39d82a49bb
update PyTorch version to 2.1.0.dev20230608 (#2219)
- torch version: 2.1.0.dev20230608
- torch commit hash: c1406a99df2df9c06e8c7029e2eac41d5b2240cf
- torchvision version: 0.16.0.dev20230608

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-08 08:59:06 -05:00
Ashay Rane 3c1a796f7e
CI: Merge RollPyTorch PR upon successful completion (#2218)
This patch removes the mock commands, so that once the Build And Test
workflow has successfully completed on the RollPyTorch action, the PR is
merged and the branch is deleted.
2023-06-07 14:06:50 -05:00
Sean Silva 44d5cf6d32
update PyTorch version to 2.1.0.dev20230607 (#2216)
- torch version: 2.1.0.dev20230607
- torch commit hash: 6226b7d098fbc093c7e6e514a5ff7a256b7447fe
- torchvision version: 0.16.0.dev20230607

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-07 09:08:22 -05:00
Yuanqiang Liu 5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point (#2194)
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point

* implement as fold

* add lit test
2023-06-07 17:05:31 +08:00
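(A hedged sketch of what the fold enables: when the input dtype is statically known, `aten.is_floating_point` reduces to a compile-time constant matching this eager behavior.)

```python
import torch

print(torch.ones(2, dtype=torch.float32).is_floating_point())  # True
print(torch.ones(2, dtype=torch.int64).is_floating_point())    # False
```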
Matthias Gehre 816880774b
Fix version comparison against stable (#2209) 2023-06-07 10:19:38 +02:00
Tanyo Kwok 3a1b92c463
Update code_owners.md (#2197) 2023-06-07 12:16:35 +08:00
JianzheXiao e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp (#2170)
* add support for mhlo

* Add Test for torch.ne

* fix torch.ne shape/add static test case

* add support for static torch.ne

---------

Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
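(For reference, the eager semantics of the op being lowered; example assumed, not taken from the PR.)

```python
import torch

x = torch.tensor([1.0, float("nan"), float("inf")])
print(torch.isnan(x))  # tensor([False,  True, False])
```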
Ashay Rane 2480cb7a51
CI: Update script to (mock) merge RollPyTorch PRs (#2213)
Before enabling the actual merge, this patch dumps to the console the
bash commands that it plans to execute.
2023-06-06 12:38:16 -05:00
Yuanqiang Liu faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack (#2192) 2023-06-07 01:38:04 +08:00
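(A sketch of the recomposed pattern, assuming TorchScript conventions: a `torch.split` whose list result is immediately unpacked lowers to `aten.split.Tensor` followed by `prim.ListUnpack`, the pair this PR recomposes.)

```python
import torch

# aten.split.Tensor yields a list of tensors; tuple-unpacking it in
# TorchScript emits prim.ListUnpack.
a, b = torch.split(torch.arange(6), 3)
print(a, b)  # tensor([0, 1, 2]) tensor([3, 4, 5])
```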
Roll PyTorch Action e29c5e8003 update PyTorch version to 2.1.0.dev20230606
- torch version: 2.1.0.dev20230606
- torch commit hash: 4d648e450b8e1386c0079f22c38aebc14fb93872
- torchvision version: 0.16.0.dev20230606
2023-06-06 19:11:12 +05:30
Vivek Khandelwal da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op (#2202)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ashay Rane 173050ec8a
CI: Fix yaml syntax in merge-rollpytorch.yml (#2201)
This patch fixes the indentation in the yaml file.
2023-06-05 09:43:00 -05:00
Sean Silva c732b7031e
update PyTorch version to 2.1.0.dev20230605 (#2199)
- torch version: 2.1.0.dev20230605
- torch commit hash: 7a5da818220cc4c950128db5ea65ec98dece559e
- torchvision version: 0.16.0.dev20230605

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-05 08:48:52 -05:00
Ashay Rane c804dac925
CI: Introduce workflow to auto-merge RollPyTorch updates (#2196)
This patch adds a new workflow that runs when an update to the
rollpytorch branch by silvasean (in whose name the RollPyTorch action
runs) causes the regular CI build to complete without errors.  Upon
execution, this workflow currently just prints the number(s) of the
PR(s) created by the RollPyTorch action, but once this is working as
expected, we will add the step that merges the PR changes.
2023-06-05 08:48:20 -05:00
Sean Silva 75bc6cb119
update PyTorch version to 2.1.0.dev20230604 (#2195)
- torch version: 2.1.0.dev20230604
- torch commit hash: 810edae5137bdc0cd25ac2f133d6633d6146b1e9
- torchvision version: 0.16.0.dev20230604

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-04 09:29:15 -05:00
Sean Silva 4f323ec352
update PyTorch version to 2.1.0.dev20230603 (#2193)
- torch version: 2.1.0.dev20230603
- torch commit hash: 7726721661ea114acb81a860519d0a1501d88fca
- torchvision version: 0.16.0.dev20230603

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-03 09:27:10 -05:00
Sean Silva 4659c6c8f0
update PyTorch version to 2.1.0.dev20230602 (#2191)
- torch version: 2.1.0.dev20230602
- torch commit hash: 52c7a761c5cb6ae94acf2298827309fba3dbc0f4
- torchvision version: 0.16.0.dev20230602

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-02 09:18:26 -05:00
Ashay Rane 755d0c46da
CI: Spot fixes related to nightly and stable PyTorch builds (#2190)
* CI: Skip (redundant) libtorch build when using stable PyTorch version

When we use PyTorch stable builds, there is no need to build libtorch
from source, making the stable-pytorch-with-torch-binary-OFF
configuration redundant with stable-pytorch-with-torch-binary-ON.  This
patch drops the redundant configuration from CI.

* CI: Simplify guard conditions for creating and using libtorch cache

Whether libtorch is enabled is predicated on a host of conditions, such
as the platform, in-tree versus out-of-tree build, and stable versus
nightly PyTorch builds.  Instead of repeating these conditions to guard
whether to create or use the libtorch cache artifacts (and nearly
getting them wrong), this patch predicates the relevant pipeline steps
on whether libtorch is enabled, thus making the conditions far simpler.
2023-06-01 22:58:25 -07:00
Ramiro Leal-Cavazos a46b5c6af2 Fix types + off-by-1 error, clamp `end` in slice+copy_ recomposition
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op used to
replace the `copy_` op operates on the entire tensor being mutated.
This means that the result type of the `index_put` should be the type
of the input to `index_put`, and we need to make sure that `copy_` does
not have users before replacing it, to avoid type conflicts.

This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.

Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
2023-06-01 11:14:53 -07:00
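(A minimal, assumed example of the slice + `copy_` pattern that `RecomposeSliceCopy_` rewrites into an `index_put` on the whole tensor; note the out-of-range `end`, which must be clamped to the dimension size when building the indices.)

```python
import torch

t = torch.zeros(5)
# Slicing with end=100 on a size-5 dim: eager PyTorch clamps the bound,
# and the recomposition must clamp `end` the same way.
t[1:100].copy_(torch.ones(4))
print(t)  # tensor([0., 1., 1., 1., 1.])
```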
Ramiro Leal-Cavazos 281dccc681 [LINALG] Add dynamic support for `PrimMinIntOp` 2023-06-01 11:14:53 -07:00
Sean Silva 1eb992876c
update PyTorch version to 2.1.0.dev20230601 (#2189) 2023-06-01 07:46:03 -07:00
Zhekun Zhang 8af3e50662
[Torch Dialect] Add support for AtenScalarTensorOp (#2085)
* add scalar_tensor op

* add dynamo pass test; needs PR2062

* try to fix

* Empty commit, trigger test

* Empty commit, trigger test

* address comments

* use dtype function

* fix decompose rule

* remove unused include

* Empty commit, trigger test

* fix test

* disable ltc

* fix dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-06-01 11:38:50 +08:00
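(Eager semantics of the newly supported op, as a sketch: `aten.scalar_tensor` wraps a Python scalar into a 0-d tensor.)

```python
import torch

t = torch.scalar_tensor(3.5, dtype=torch.float32)
print(t.shape, t.item())  # torch.Size([]) 3.5
```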
Sean Silva 7ab16d38cf
update PyTorch version to 2.1.0.dev20230531 (#2188)
- torch version: 2.1.0.dev20230531
- torch commit hash: 48552338649ccc467060f5f93dbe19e2acbc4d1a
- torchvision version: 0.16.0.dev20230531

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-05-31 10:51:17 -07:00
Yuanqiang Liu 72b8070e57
[Importer] import constant tuple (#2132)
* [Importer] import constant tuple

* update

* update

* update
2023-05-31 14:14:14 +08:00
Ramiro Leal-Cavazos 479b2175ef
Add `ReadOnly` trait to `copy.to_vtensor` (#2179)
Before inlining a global slot, the users of the global slot are
checked to see if they are `ReadOnly` or `MemoryEffectFree` to make
sure that the global slot is not being mutated. Because the op
`copy.to_vtensor` currently does not have the `ReadOnly` trait, if a
global slot is passed to `copy.to_vtensor`, the pass
`InlineGlobalSlots` will fail.

The op `copy.to_vtensor` is `ReadOnly`, since it does not modify the
contents of the input tensor; it simply makes a new copy. This commit
adds the trait as well as an e2e test that generates the case of a
global slot being passed to a `copy.to_vtensor`.
2023-05-30 21:40:36 +00:00
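(A hypothetical module of the kind the new e2e test exercises; names are assumed. The buffer becomes a global slot, and because `forward` only reads it through a value-semantics copy, `InlineGlobalSlots` can now safely inline it.)

```python
import torch

class ReadBuffer(torch.nn.Module):  # hypothetical example
    def __init__(self):
        super().__init__()
        # A module buffer is imported as a global slot in the Torch dialect.
        self.register_buffer("w", torch.ones(3))

    def forward(self, x):
        # Read-only use: the buffer's value is consumed, never mutated.
        return x + self.w
```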
maxbartel db3f2e3fde
Add Stable PyTorch CI Pipeline (#2038)
* feat: split pytorch requirements into stable and nightly

* fix: add true to tests to see full output

* refactor: add comments to explain true statement

* feat: move some tests to experimental mode

* refactor: refactor pipeline into more fine-grained differences

* feat: add version differentiation for some tests

* feat: activate more configs

* refactor: change implementation to use less requirement files

* refactor: remove constraints used for testing

* fix: revert some requirement file names

* refactor: remove unnecessary ninja install

* fix: fix version parsing

* refactor: remove dependency on torchvision in main requirements file

* refactor: remove index url

* style: remove unnecessary line switch

* fix: readd index url
2023-05-30 12:16:24 -07:00
Vivek Khandelwal 959f4f48d5 [MLIR][TORCH] Add support for the total_weight for aten.nll_loss_forward op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-30 20:29:27 +05:30
Gaurav Shukla 552887783a [TM_TENSOR] Add `aten.scatter.[src|value]` op
This commit adds support for the `aten.scatter.src` and
`aten.scatter.value` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-05-29 12:35:53 +05:30
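(Eager-mode sketch of the two variants, example assumed: `scatter.src` takes a tensor of source values, while `scatter.value` takes a single scalar.)

```python
import torch

idx = torch.tensor([0, 2, 4])
src = torch.tensor([10.0, 20.0, 30.0])

print(torch.zeros(5).scatter(0, idx, src))  # aten.scatter.src
print(torch.zeros(5).scatter(0, idx, 7.0))  # aten.scatter.value
```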
George Petterson b9d29dc055 Add correct type checking for tm_tensor.attention 2023-05-27 05:51:14 +05:30
Yuanqiang Liu 5223f990df
[Stablehlo] Enable Stablehlo backend with arith dialect (#2139) 2023-05-26 22:57:57 +08:00
Sean Silva 4216c7d622
update PyTorch version to 2.1.0.dev20230526 (#2175)
- torch version: 2.1.0.dev20230526
- torch commit hash: 10b46f7c7f69f9bf705d2b6ea53efb9c59145685
- torchvision version: 0.16.0.dev20230526

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-05-26 09:10:20 -05:00
powderluv 2f02ae1ebe
Delete another spurious pip (#2173) 2023-05-26 00:02:21 -07:00
powderluv 9b7909b599
Add ARM64 release builds (#2159)
Creates a build_linux_arm64 job that builds the release on an arm64 self-hosted runner.
Drops Python 3.10 support.
Passes TM_TORCH_VERSION to choose the stable PyTorch version (since arm64 doesn't have nightly builds).

Borrows nightly / stable Pytorch switch from the WIP
https://github.com/llvm/torch-mlir/pull/2038
2023-05-25 20:39:19 -07:00
Zhekun Zhang 69e993b03f
[Torch Op] Add AtenChunkOp support (#2152)
* add chunkOp support

* update LTC xfail list

* address comments

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-26 10:05:19 +08:00
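(For reference, the eager behavior of the op added; sketch assumed: `torch.chunk` splits a tensor into roughly equal pieces, with the last chunk smaller when the size doesn't divide evenly.)

```python
import torch

parts = torch.arange(10).chunk(3)  # aten.chunk
print([p.tolist() for p in parts])  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```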