Commit Graph

2419 Commits (dc37616d6773acc55c7452c242c7f13e838362f4)

Author SHA1 Message Date
Yuanqiang Liu c7c59b540e
[Stablehlo] support dynamic shape when convert aten.fill.Scalar (#2349) 2023-07-27 18:35:25 +08:00
Sean Silva 991eba2b51
update PyTorch version to 2.1.0.dev20230726 (#2348)
- torch version: 2.1.0.dev20230726
 - torch commit hash: 964a13b3dfbe583fa213fdca12b4a1732b1bb4e6
 - torchvision version: 0.16.0.dev20230726

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-26 11:12:21 -07:00
Sean Silva c9f2e8366b
update PyTorch version to 2.1.0.dev20230725 (#2341)
- torch version: 2.1.0.dev20230725
 - torch commit hash: 153afbda4b53928e5531f065c02fde1a29f2040a
 - torchvision version: 0.16.0.dev20230725

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-25 09:43:31 -07:00
Gaurav Shukla 398fa0ef5a build: update llvm tag to 4592543a01609fe
- update llvm tag to 4592543a01609feb4b3c19e81a9d54743e15e329
- mhlo now points to f6615343fdab2c74bebd23c78366cf097f9a72df

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-07-25 21:15:44 +05:30
Matthias Gehre 0a67411719
test/CAPI/CMakeLists.txt: Depend on FileCheck (#2329)
I saw tests failing when FileCheck wasn't already built
2023-07-25 10:11:55 +02:00
Matthias Gehre c56cb531d5
Ignore constants in the legality error (#2328) 2023-07-25 10:11:40 +02:00
JianzheXiao 31ef08b63d
[Stablehlo]Add support for AvgPool1dOp (#2268)
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op (#2338)
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Ramiro Leal-Cavazos 4a96e716c0
Use `register_buffer` to make `Add_Module` test work on lazy tensor (#2332)
Doing `module.to('lazy')` only moves the module member tensors to the
device if they are created with `self.register_buffer` or
`self.register_parameter`. Since the `self.tensor` tensor in
`Add_Module` test is currently not created using the `self.register_*`
methods, it is not being moved from CPU to lazy device, which is
causing the test to fail on LTC backend. This commit uses
`self.register_buffer` to fix the test on LTC backend.

This commit also seems to fix the test for torchdynamo.
2023-07-24 09:07:13 -07:00
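
A minimal sketch of the behavior described in the commit above, assuming a simplified stand-in for the `Add_Module` test (the class name, shapes, and device are illustrative, not the test's actual code):

```python
import torch

class AddModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Registered as a buffer so that module.to(device) moves it along with
        # the module; a plain attribute (self.tensor = torch.ones(3)) would not
        # be moved and would stay on the CPU.
        self.register_buffer("tensor", torch.ones(3))

    def forward(self, x):
        return x + self.tensor

m = AddModule()
m.to("cpu")  # with the LTC backend this would be m.to("lazy")
print(m(torch.zeros(3)))
```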
Sean Silva ef11a77315
update PyTorch version to 2.1.0.dev20230724 (#2339)
- torch version: 2.1.0.dev20230724
 - torch commit hash: ba1da8199b3077b77a78a78e7f0dad166435182f
 - torchvision version: 0.16.0.dev20230724

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-24 07:53:09 -07:00
Yuanqiang Liu 238c0501da
fix cmake torch-mlir-capi linking and bazel build (#2336) 2023-07-24 12:38:56 +08:00
Jiawei Wu 026e8db2e4
[Stablehlo] add converter for aten.scatter.src op (#2295) 2023-07-24 10:14:45 +08:00
Sean Silva dd0e91b466
update PyTorch version to 2.1.0.dev20230723 (#2335)
- torch version: 2.1.0.dev20230723
 - torch commit hash: a060bf3cf05c09906e78d7299efc8184568ea2e1
 - torchvision version: 0.16.0.dev20230723

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-23 07:52:27 -07:00
Sean Silva f0d8b6218b
update PyTorch version to 2.1.0.dev20230722 (#2333)
- torch version: 2.1.0.dev20230722
 - torch commit hash: b5222f140da05e40ac90ff42bd1db6564343daff
 - torchvision version: 0.16.0.dev20230722

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-22 07:53:00 -07:00
Sean Silva fb4c54fbef
update PyTorch version to 2.1.0.dev20230721 (#2331)
- torch version: 2.1.0.dev20230721
 - torch commit hash: f228c8b8cac3db634516c7101dee077cbaa026ab
 - torchvision version: 0.16.0.dev20230721

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-21 12:23:29 -07:00
Matthias Gehre 3ca35b4f3c
TorchToTosa: aten.embedding: Allow indices with any rank (#2327)
It's fine not to check the rank of the indices, because the conversion flattens the index tensor to (1, numElements) before applying tosa::gather, and then reshapes the output tensor to the output shape of the aten.embedding.
2023-07-21 08:54:19 +02:00
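
A small PyTorch-level illustration of the flatten, gather, and reshape equivalence the commit above relies on (the shapes and the use of plain indexing in place of tosa::gather are illustrative only):

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 4)                # embedding table (numEmbeddings, embDim)
indices = torch.randint(0, 10, (2, 3, 5))  # indices of arbitrary rank

direct = F.embedding(indices, weight)

flat = indices.reshape(1, -1)              # (1, numElements), as in the lowering
gathered = weight[flat.squeeze(0)]         # plain indexing standing in for tosa::gather
reshaped = gathered.reshape(*indices.shape, weight.size(1))

assert torch.equal(direct, reshaped)
```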
Alexandre Rames 1e468e8294 Fix canonicalization of `torch.prim.TupleUnpack`. 2023-07-20 20:08:46 +02:00
Alexandre Rames a20422ce65 Support `DerefineOp` in `RefinePublicReturn`. 2023-07-20 20:08:46 +02:00
Alexandre Rames 4847563bed Clean up verification of calling conventions.
The implementation here was a remnant of the time when the pipeline was
run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
2023-07-20 20:08:46 +02:00
Sean Silva 91a9baa3e7
update PyTorch version to 2.1.0.dev20230720 (#2326)
- torch version: 2.1.0.dev20230720
 - torch commit hash: a16c87a767b22dbfa9e9435b1efe699db377ebf5
 - torchvision version: 0.16.0.dev20230720

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-20 08:03:47 -07:00
Jiawei Wu 9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op (#2297) 2023-07-20 16:46:44 +08:00
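
A hedged illustration of the decomposition named in the entry above: `aten.narrow.Tensor` carries its start index as a 0-d tensor, so it can be rewritten as `aten.narrow` once that scalar is extracted (the values below are arbitrary):

```python
import torch

x = torch.arange(10)
start = torch.tensor(2)   # 0-d start tensor, as in aten.narrow.Tensor
length = 3

# Extract the scalar and fall back to the plain aten.narrow form.
assert torch.equal(x.narrow(0, int(start), length), x[2:5])
```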
Matthias Gehre 64d7626a52
Fixes for split tensor and slice (#2314)
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
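
A tiny sketch of the slice fold listed in the entry above: a slice running from 0 to int_max along a dimension selects the whole tensor, so it can be folded away (the tensor here is arbitrary):

```python
import sys
import torch

x = torch.arange(12).reshape(3, 4)

# A 0:int_max slice is a no-op, which is what makes the fold legal.
assert torch.equal(x[0:sys.maxsize], x)
```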
max 0650efe7c0 Conform to Python custom exception API 2023-07-19 21:00:55 -05:00
Jiawei Wu 3f843c8fd9
[torch-dialect] fix aten.type_as op's folder (#2283)
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
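
A minimal check of the equivalence behind the decomposition mentioned above: `x.type_as(y)` behaves like `x.to(y.dtype)` (the dtypes below are chosen arbitrarily):

```python
import torch

x = torch.arange(4, dtype=torch.int64)
y = torch.zeros(2, dtype=torch.float32)

# type_as casts x to y's dtype, i.e. prim.dtype followed by aten.to.dtype.
assert torch.equal(x.type_as(y), x.to(y.dtype))
```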
Sean Silva c9add6b7d8
update PyTorch version to 2.1.0.dev20230719 (#2323)
- torch version: 2.1.0.dev20230719
 - torch commit hash: 82e03ad95768645f27100929366530f5d62deffe
 - torchvision version: 0.16.0.dev20230719

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-19 08:08:15 -07:00
AyaanShah2204 a308a54255
Fixes Windows DLL crash (#2321)
* explicit inliner extension

* fixed import formatting
2023-07-18 19:12:46 -07:00
Sean Silva 3b56f97f6f
update PyTorch version to 2.1.0.dev20230718 (#2318)
- torch version: 2.1.0.dev20230718
 - torch commit hash: 5e128c4fa1f1217e30c7179aeb5eb5eb95d4dd70
 - torchvision version: 0.16.0.dev20230718

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-18 08:21:40 -07:00
Matthias Gehre 0c17997000
Don't crash when the input to aten.copy is unranked (#2307)
This can happen when the input comes from an unsupported operator
2023-07-18 09:52:33 +02:00
Ramiro Leal-Cavazos 718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library (#2309)
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
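
A hypothetical helper (not the library's actual abstract interpretation function) just to show the `Union[int, float, complex]` treatment of `!torch.number` described above; the dtype choices are illustrative defaults:

```python
from typing import Union

import torch

def scalar_to_dtype(scalar: Union[int, float, complex]) -> torch.dtype:
    # bool is a subclass of int in Python, so check it first.
    if isinstance(scalar, bool):
        return torch.bool
    if isinstance(scalar, int):
        return torch.int64
    if isinstance(scalar, float):
        return torch.float64
    return torch.complex128  # the complex case the fix accounts for

print(scalar_to_dtype(3), scalar_to_dtype(3.0), scalar_to_dtype(3 + 4j))
```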
Chi_Liu 5706697e0b
[TOSA] Add aten._index_put_impl support (#2031)
Add e2e support by adding "tosa-to-scf"
2023-07-17 09:51:24 -07:00
Sean Silva ba24a46910
update PyTorch version to 2.1.0.dev20230717 (#2315)
- torch version: 2.1.0.dev20230717
 - torch commit hash: c437a4b1e0da5c00c15c983fecfeedb81b2355f5
 - torchvision version: 0.16.0.dev20230717

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-17 07:48:34 -07:00
Matthias Gehre 06c9bd08e0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of comparisons where the input type is bool (#2304) 2023-07-17 09:49:04 +02:00
Sean Silva d69b6bd587
update PyTorch version to 2.1.0.dev20230716 (#2312)
- torch version: 2.1.0.dev20230716
 - torch commit hash: c69b6e5da6f5892c2b2bd5fbf28dd5b568de362f
 - torchvision version: 0.16.0.dev20230716

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-16 07:51:23 -07:00
Sean Silva 27455500c3
update PyTorch version to 2.1.0.dev20230715 (#2311)
- torch version: 2.1.0.dev20230715
 - torch commit hash: 6db8e8b9b7ae2232c3ab0eb7fe19830357695c7d
 - torchvision version: 0.16.0.dev20230715

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-15 09:26:34 -07:00
Sean Silva bcbfeecae0
update PyTorch version to 2.1.0.dev20230714 (#2308)
- torch version: 2.1.0.dev20230714
 - torch commit hash: d257917ad4e5bb1b848f7857026191b61efb2294
 - torchvision version: 0.16.0.dev20230714

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-14 08:29:41 -07:00
Tiago Trevisan Jost 48383554da
TorchToTosa: Legalization for torch.aten.sqrt (#2234) 2023-07-14 08:23:10 +02:00
Yuanqiang Liu 7f6b72aec8
[Torch Dialect] add runtime.assert to check constraint when recomposing complex ops (#2281) 2023-07-14 10:13:19 +08:00
Sean Silva 50f5b658b6
update PyTorch version to 2.1.0.dev20230713 (#2303)
- torch version: 2.1.0.dev20230713
 - torch commit hash: fccac344dff905c235681c7eb1b567d45f45edb6
 - torchvision version: 0.16.0.dev20230713

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-13 10:49:33 -07:00
Matthias Gehre f8e75f659d
Add make_fx_tosa variant to end2end tests (#2240)
* Add make_fx_tosa variant to end2end tests

* e2e_testing/xfail_sets.py: Add make_fx_tosa xfail for stable
2023-07-13 15:07:54 +02:00
nithinsubbiah 91c6454618 Filter out empty strings while generating function signature 2023-07-13 13:51:54 +05:30
Matthias Gehre c23a61f4b6
DecomposeComplexOps: Use static shape if available (#2289) 2023-07-12 10:07:30 +02:00
Sean Silva bbd3094c2f
update PyTorch version to 2.1.0.dev20230711 (#2299)
- torch version: 2.1.0.dev20230711
 - torch commit hash: 927dc662386af052018212c7d01309a506fc94cd
 - torchvision version: 0.16.0.dev20230711

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-11 12:34:22 -07:00
Sean Silva 17669391b3
update PyTorch version to 2.1.0.dev20230710 (#2296)
- torch version: 2.1.0.dev20230710
 - torch commit hash: 69565763c841e4e8d07fd338c9bf6515005b3880
 - torchvision version: 0.16.0.dev20230710

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-10 06:54:40 -07:00
Zhekun Zhang 6a072d4f4a
[Stablehlo] AtenEmptyMemoryFormat remove device cpu check (#2288)
* remove cpu check

* update dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-07-10 15:36:21 +08:00
Sean Silva 05920f9159
update PyTorch version to 2.1.0.dev20230709 (#2293)
- torch version: 2.1.0.dev20230709
 - torch commit hash: 9b5a84f5443c8e3b9db5511a4f58d727b4fade40
 - torchvision version: 0.16.0.dev20230709

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-09 07:56:04 -07:00
Sean Silva 2fdfa0410d
update PyTorch version to 2.1.0.dev20230708 (#2292)
- torch version: 2.1.0.dev20230708
 - torch commit hash: 3a919e00b8237a76ad6faa6040c00b425a96f1f3
 - torchvision version: 0.16.0.dev20230708

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-08 08:14:37 -07:00
Sean Silva 6ac85ee662
update PyTorch version to 2.1.0.dev20230707 (#2290)
- torch version: 2.1.0.dev20230707
 - torch commit hash: 760dafbb05853f5f57f1a6869179df2efbc2cf6b
 - torchvision version: 0.16.0.dev20230707

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-07 10:44:02 -07:00
Abhishek Varma 6c9ba4ce95
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op (#2174)
-- This commit adds support for dynamic dimensions in the BroadcastTo op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-07 10:01:51 -07:00
Sean Silva 7f4084b570
update PyTorch version to 2.1.0.dev20230705 (#2284)
- torch version: 2.1.0.dev20230705
 - torch commit hash: 758c84d41f55f90f210e6d7d02e05cda4a13c728
 - torchvision version: 0.16.0.dev20230705

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-05 09:33:10 -07:00
Sean Silva 8c87057f50
update PyTorch version to 2.1.0.dev20230704 (#2282)
- torch version: 2.1.0.dev20230704
 - torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
 - torchvision version: 0.16.0.dev20230704

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-04 08:23:00 -07:00