Sean Silva
fbb5ed52cf
update PyTorch version to 2.1.0.dev20230623 ( #2260 )
...
- torch version: 2.1.0.dev20230623
- torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d
- torchvision version: 0.16.0.dev20230623
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-23 09:03:50 -07:00
Yuanqiang Liu
64afc08dab
[Torch Dialect] add missing one_hot dtype function ( #2143 )
...
* [Torch Dialect] add missing one_hot dtype function
* update
* update
* update
2023-06-23 16:11:33 +08:00
Yuanqiang Liu
39201a4be5
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'… ( #2256 )
...
* [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number
* update
2023-06-23 16:02:45 +08:00
Ramiro Leal-Cavazos
6f2bf31291
Fix single-element tuple construction in abstract interp library ( #2258 )
...
Single-element tuples in Python need a comma after the
element. However, the `registry.py` file, which generates the expected
abstract interpretation function signatures, was not inserting the
comma. This commit changes the expected signature generator to add a
comma after the last element in any non-empty default tuple argument.
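For context, a minimal Python sketch of the pitfall the generator now avoids; the helper name is hypothetical and not taken from `registry.py`.

```python
# A parenthesized single element without a trailing comma is NOT a tuple;
# the trailing comma is what makes it one.
not_a_tuple = (0)      # just the int 0
a_tuple = (0,)         # the tuple (0,)
assert not isinstance(not_a_tuple, tuple) and isinstance(a_tuple, tuple)

def default_tuple_text(elements):
    """Render a default tuple argument for a generated signature, adding the
    trailing comma that a single-element (or any non-empty) tuple needs."""
    if not elements:
        return "()"
    return "(" + ", ".join(repr(e) for e in elements) + ",)"

assert default_tuple_text([1]) == "(1,)"        # valid single-element tuple
assert default_tuple_text([1, 2]) == "(1, 2,)"  # trailing comma is harmless
```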
2023-06-22 11:27:40 -07:00
Yuanqiang Liu
96b14e952e
[Torch Dialect] Support aten.device.with_index ( #2254 )
2023-06-23 01:07:14 +08:00
Yuanqiang Liu
4fd4477e15
[Torch Dialect] require hasSizes when decompose aten.amax ( #2248 )
2023-06-22 11:26:51 +08:00
Sean Silva
c91c67e53d
update PyTorch version to 2.1.0.dev20230621 ( #2247 )
...
- torch version: 2.1.0.dev20230621
- torch commit hash: e4cf441a4ba770dc869433d876e73051ed9800b2
- torchvision version: 0.16.0.dev20230621
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-21 08:12:45 -07:00
Abhishek Varma
a0d2789840
[MLIR][TORCH] Add e2e support for aten.alias
...
-- This commit adds e2e support for aten.alias op.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Maksim Levental
0244f540a7
Add typeids to CAPI. ( #2253 )
2023-06-20 22:06:43 -05:00
Abhishek Varma
ebda611100
[build] Update llvm tag to 3f8d8c1a
...
This patch updates the submodules to:
- llvm: 3f8d8c1aac3086f603ad73f18fe2bd4fb91fa10a
- mhlo: 4384a47b03dc377d651523037867899a340b0e96
The only change made is calling `registerAllExtensions` during dialect
registration. See: https://reviews.llvm.org/D120368
2023-06-20 15:45:52 -07:00
Sean Silva
860a2d4bbf
update PyTorch version to 2.1.0.dev20230619 ( #2245 )
...
- torch version: 2.1.0.dev20230619
- torch commit hash: 5beeb400ca3487d55629cbf8b87f9b637a7b657f
- torchvision version: 0.16.0.dev20230619
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-19 07:52:21 -07:00
Sean Silva
9b4e369671
update PyTorch version to 2.1.0.dev20230618 ( #2244 )
...
- torch version: 2.1.0.dev20230618
- torch commit hash: 59c654a6ad8d256b89123dda536052e98cd5e399
- torchvision version: 0.16.0.dev20230618
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-18 09:14:33 -07:00
Sean Silva
145055bdb6
update PyTorch version to 2.1.0.dev20230617 ( #2241 )
...
- torch version: 2.1.0.dev20230617
- torch commit hash: a522f9aedd9c9aaebba5997f201cc23119696578
- torchvision version: 0.16.0.dev20230617
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-17 08:46:28 -07:00
Vivek Khandelwal
f6a6cfea4e
[MLIR][TORCH] Add support for negative index values for index.Tensor op ( #2233 )
...
This commit adds support for the index.Tensor op when the index values
are negative. It wraps negative index values around by checking their
values at run time.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
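As a rough Python sketch of the wrapping semantics described above (not the actual lowering code), assuming standard PyTorch indexing behavior:

```python
import torch

def wrap_index(idx: int, dim_size: int) -> int:
    """Map a possibly negative index to its non-negative equivalent,
    e.g. -1 -> dim_size - 1, mirroring PyTorch indexing semantics."""
    return idx + dim_size if idx < 0 else idx

t = torch.arange(5)                       # tensor([0, 1, 2, 3, 4])
neg = torch.tensor([-1, -3])              # negative indices
pos = torch.tensor([wrap_index(i, t.shape[0]) for i in neg.tolist()])
assert torch.equal(t[neg], t[pos])        # both select tensor([4, 2])
```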
2023-06-16 14:21:04 -05:00
Matthias Gehre
6f420019cb
TorchToTosa: Cast float constants to correct type to support bfloat16 ( #2239 )
2023-06-16 09:51:24 +02:00
Sean Silva
45e2188615
update PyTorch version to 2.1.0.dev20230615 ( #2238 )
...
- torch version: 2.1.0.dev20230615
- torch commit hash: 0d4f9aee900596cd8ed55725f75a5792b6df6de1
- torchvision version: 0.16.0.dev20230615
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-15 09:59:20 -07:00
Vivek Khandelwal
ab8b23e767
build: manually update PyTorch version
...
Set PyTorch and TorchVision version to nightly release 2023-05-16.
This commit removes the test `BaddbmmDifferentDtypesModule_basic`
since PyTorch expects all operands to have the same dtype.
Ref: 2abad0c184
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
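For reference, a small PyTorch sketch of the same-dtype requirement that made the mixed-dtype test invalid; the exact error type and message may vary across versions.

```python
import torch

inp = torch.zeros(2, 3, 5)           # float32
b1 = torch.rand(2, 3, 4)             # float32
b2 = torch.rand(2, 4, 5)             # float32
ok = torch.baddbmm(inp, b1, b2)      # fine: all operands share a dtype

try:
    torch.baddbmm(inp, b1, b2.double())   # float32 mixed with float64
except RuntimeError as err:               # recent PyTorch rejects mixed dtypes
    print("mixed dtypes rejected:", err)
```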
2023-06-15 17:53:16 +05:30
Yuanqiang Liu
bba0f5891b
[Stablehlo] add conversion for AtenFlipOp ( #2163 )
2023-06-15 10:27:34 +08:00
Yuanqiang Liu
7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda ( #2231 )
2023-06-14 09:56:39 +08:00
Maksim Levental
0caaf8d32a
Bump LLVM ( #2176 )
...
* Bump LLVM
---------
Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu
ddea56a832
[Torch Dialect] fix torch.uint8's dtype infer ( #2227 )
2023-06-13 10:38:20 +08:00
Sean Silva
dd5992514d
update PyTorch version to 2.1.0.dev20230612 ( #2229 )
...
- torch version: 2.1.0.dev20230612
- torch commit hash: 8aee9489c907eeae8af1b6df6962f3a4414c984a
- torchvision version: 0.16.0.dev20230612
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-12 07:40:35 -07:00
Christopher McGirr
b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion ( #2200 )
...
Check the return type of the division to decide whether to use the
floating-point or the integer implementation. The issue arose because the
inputs were all integers while the result was cast to floating point; the
conversion then chose the integer implementation of division, which is not
legal in TOSA once all the inputs are cast to floating point.
fix(TorchToLinalg): AtenDivScalarOp
Upcast the self operand as well if applicable; since it can be an integer,
it must also be cast to float.
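A hedged Python sketch of the selection logic described above; the real conversion is C++ in TorchToTosa.cpp, and the names here are illustrative only.

```python
FLOAT_TYPES = {"f16", "bf16", "f32", "f64"}

def choose_div_lowering(lhs_dtype: str, rhs_dtype: str, result_dtype: str) -> str:
    """Pick the division flavor from the *result* type, as the fix does:
    a floating-point result means the (possibly integer) inputs are cast up
    and floating-point division is used; integer division is only chosen
    when the result itself is integral."""
    if result_dtype in FLOAT_TYPES:
        return f"cast inputs to {result_dtype}, then use float division"
    return "use integer division on the integral inputs"

# Integer inputs with a float result must NOT lower to integer division.
print(choose_div_lowering("i32", "i32", "f32"))  # float path
print(choose_div_lowering("i32", "i32", "i64"))  # integer path
```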
2023-06-12 11:18:38 +02:00
Tiago Trevisan Jost
cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. ( #2204 )
2023-06-12 11:17:25 +02:00
Sean Silva
bfb565143f
update PyTorch version to 2.1.0.dev20230611 ( #2226 )
...
- torch version: 2.1.0.dev20230611
- torch commit hash: ec23ae5ad407ee6719b18fc374f231225d027cf0
- torchvision version: 0.16.0.dev20230611
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-11 07:31:52 -07:00
Matthias Gehre
4e2ba2e0af
Support aten.sign ( #2205 )
2023-06-10 20:45:35 +02:00
Sean Silva
5ead1d549e
update PyTorch version to 2.1.0.dev20230610 ( #2225 )
...
- torch version: 2.1.0.dev20230610
- torch commit hash: dd69d6251ace7e9bed1c09e7613eaa9f3404912e
- torchvision version: 0.16.0.dev20230610
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-10 07:40:16 -07:00
Ashay Rane
c202cb5263
CI: Checkout repo so that gh knows where to look for the PR ( #2223 )
...
Without this patch, the gh command (for merging the PR) doesn't know
which repo we're referring to.
2023-06-09 21:50:19 -05:00
Sean Silva
45c0bd76a4
update PyTorch version to 2.1.0.dev20230609 ( #2222 )
...
- torch version: 2.1.0.dev20230609
- torch commit hash: b6ab7791119b08a6ce80c7810f9baa1fb893c28d
- torchvision version: 0.16.0.dev20230609
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-09 12:41:31 -05:00
Matthias Gehre
27a3d09917
Torch: Fold RuntimeAssertOp when condition is true ( #2198 )
2023-06-09 19:06:25 +08:00
Matthias Gehre
0959b502ae
Print name of the backend when tests fail to help debugging issues in CI ( #2210 )
...
* Print name of the backend when tests fail to help debugging issues in CI
* Extended test python/test/torchscript_e2e_test/compilation_failure.py
2023-06-09 10:47:07 +02:00
Ashay Rane
33ac7c3ad1
CI: Use GitHub token when calling gh for merging RollPyTorch PR ( #2220 )
2023-06-08 15:07:43 -05:00
Sean Silva
39d82a49bb
update PyTorch version to 2.1.0.dev20230608 ( #2219 )
...
- torch version: 2.1.0.dev20230608
- torch commit hash: c1406a99df2df9c06e8c7029e2eac41d5b2240cf
- torchvision version: 0.16.0.dev20230608
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-08 08:59:06 -05:00
Ashay Rane
3c1a796f7e
CI: Merge RollPyTorch PR upon successful completion ( #2218 )
...
This patch removes the mock commands, so that once the Build And Test
workflow has successfully completed on the RollPyTorch action, the PR is
merged and the branch is deleted.
2023-06-07 14:06:50 -05:00
Sean Silva
44d5cf6d32
update PyTorch version to 2.1.0.dev20230607 ( #2216 )
...
- torch version: 2.1.0.dev20230607
- torch commit hash: 6226b7d098fbc093c7e6e514a5ff7a256b7447fe
- torchvision version: 0.16.0.dev20230607
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-07 09:08:22 -05:00
Yuanqiang Liu
5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point ( #2194 )
...
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point
* implement as fold
* add lit test
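For context, the constant such a fold produces matches what eager PyTorch reports once the dtype is statically known; a tiny sketch:

```python
import torch

# aten.is_floating_point is a compile-time constant once the input dtype is
# known, which is exactly what a fold can exploit.
assert torch.tensor([1.0]).is_floating_point()       # float32 -> True
assert not torch.tensor([1]).is_floating_point()     # int64   -> False
```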
2023-06-07 17:05:31 +08:00
Matthias Gehre
816880774b
Fix version comparison against stable ( #2209 )
2023-06-07 10:19:38 +02:00
Tanyo Kwok
3a1b92c463
Update code_owners.md ( #2197 )
2023-06-07 12:16:35 +08:00
JianzheXiao
e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp ( #2170 )
...
* add support for mhlo
* Add Test for torch.ne
* fix torch.ne shape/add static test case
* add support for static torch.ne
---------
Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
Ashay Rane
2480cb7a51
CI: Update script to (mock) merge of RollPyTorch PRs ( #2213 )
...
Before enabling the actual merge, this patch dumps to the console the
bash commands that it plans to execute.
2023-06-06 12:38:16 -05:00
Yuanqiang Liu
faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack ( #2192 )
2023-06-07 01:38:04 +08:00
Roll PyTorch Action
e29c5e8003
update PyTorch version to 2.1.0.dev20230606
...
- torch version: 2.1.0.dev20230606
- torch commit hash: 4d648e450b8e1386c0079f22c38aebc14fb93872
- torchvision version: 0.16.0.dev20230606
2023-06-06 19:11:12 +05:30
Vivek Khandelwal
da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op ( #2202 )
...
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ashay Rane
173050ec8a
CI: Fix yaml syntax in merge-rollpytorch.yml ( #2201 )
...
This patch fixes the indentation in the yaml file.
2023-06-05 09:43:00 -05:00
Sean Silva
c732b7031e
update PyTorch version to 2.1.0.dev20230605 ( #2199 )
...
- torch version: 2.1.0.dev20230605
- torch commit hash: 7a5da818220cc4c950128db5ea65ec98dece559e
- torchvision version: 0.16.0.dev20230605
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-05 08:48:52 -05:00
Ashay Rane
c804dac925
CI: Introduce workflow to auto-merge RollPyTorch updates ( #2196 )
...
This patch adds a new workflow that runs when an update to the
rollpytorch branch by silvasean (in whose name the RollPyTorch action
runs) causes the regular CI build to complete without errors. Upon
execution, this workflow currently just prints the number(s) of the PR(s)
created by the RollPyTorch action; once this is working as expected, we
will add the step that merges the PR changes.
2023-06-05 08:48:20 -05:00
Sean Silva
75bc6cb119
update PyTorch version to 2.1.0.dev20230604 ( #2195 )
...
- torch version: 2.1.0.dev20230604
- torch commit hash: 810edae5137bdc0cd25ac2f133d6633d6146b1e9
- torchvision version: 0.16.0.dev20230604
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-04 09:29:15 -05:00
Sean Silva
4f323ec352
update PyTorch version to 2.1.0.dev20230603 ( #2193 )
...
- torch version: 2.1.0.dev20230603
- torch commit hash: 7726721661ea114acb81a860519d0a1501d88fca
- torchvision version: 0.16.0.dev20230603
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-03 09:27:10 -05:00
Sean Silva
4659c6c8f0
update PyTorch version to 2.1.0.dev20230602 ( #2191 )
...
- torch version: 2.1.0.dev20230602
- torch commit hash: 52c7a761c5cb6ae94acf2298827309fba3dbc0f4
- torchvision version: 0.16.0.dev20230602
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-02 09:18:26 -05:00
Ashay Rane
755d0c46da
CI: Spot fixes related to nightly and stable PyTorch builds ( #2190 )
...
* CI: Skip (redundant) libtorch build when using stable PyTorch version
When we use PyTorch stable builds, there is no need to build libtorch
from source, making the stable-pytorch-with-torch-binary-OFF
configuration redundant with stable-pytorch-with-torch-binary-ON. This
patch drops the redundant configuration from CI.
* CI: Simplify guard conditions for creating and using libtorch cache
Whether libtorch is enabled is predicated on a host of conditions, such
as the platform, in-tree versus out-of-tree build, and stable versus
nightly PyTorch builds. Instead of repeating these conditions to guard the
creation and use of the libtorch cache artifacts (and nearly getting them
wrong), this patch predicates the relevant pipeline steps on whether
libtorch is enabled, making the conditions far simpler.
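A hypothetical Python sketch (the real logic lives in GitHub Actions conditions, not Python) of the consolidation described above: derive a single "libtorch enabled" predicate once and gate every cache step on it.

```python
def libtorch_enabled(platform: str, in_tree_build: bool, stable_pytorch: bool) -> bool:
    """Hypothetical predicate combining the kinds of conditions the patch
    mentions: platform, in-tree vs. out-of-tree, stable vs. nightly PyTorch.
    The exact combination here is illustrative, not the CI's actual rule."""
    return in_tree_build and not stable_pytorch and platform.startswith("linux")

enabled = libtorch_enabled("linux", in_tree_build=True, stable_pytorch=False)

# Each cache step checks the single flag instead of re-deriving
# (and risking getting slightly wrong) the underlying conditions.
for step in ("create libtorch cache", "use libtorch cache"):
    if enabled:
        print("run step:", step)
```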
2023-06-01 22:58:25 -07:00