Commit Graph

2379 Commits (46f2cb50dca5e789d1114b127d9a4312fbb8e3d9)

Author SHA1 Message Date
Sean Silva ca1f0158b3
update PyTorch version to 2.1.0.dev20230802 (#2366)
- torch version: 2.1.0.dev20230802
- torch commit hash: c89b16917755c2abbef7b6420e340baf9ae8089e
- torchvision version: 0.16.0.dev20230802

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-02 09:37:14 -07:00
Vivek Khandelwal a374c39106 build: update llvm tag to 41895843
Summary of changes:
- Update tags
  llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
  mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-02 21:18:14 +05:30
Gleb Kazantaev fb52a73cbe
LTC->MLIR Debug Info support (#1922)
* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops just considers
the filename, line number and column number. Attaching operation name
would help identify the type of computation by just looking at the
profile of execution.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>
2023-08-02 10:29:11 -04:00
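The commit above attaches the operation name to each op's location info using a `{scope}/{op_name}` format. A minimal plain-Python sketch of that naming scheme (the helper name is hypothetical; the real logic lives in the Lazy->JIT->MLIR lowering, not here):

```python
def qualified_loc_name(scope: str, op_name: str) -> str:
    """Build a '{scope}/{op_name}' location name so that execution
    profiles identify the kind of computation, not just file/line/col."""
    # Hypothetical standalone model of the default naming format.
    return f"{scope}/{op_name}" if scope else op_name

print(qualified_loc_name("model.layer1", "aten::conv2d"))
# model.layer1/aten::conv2d
```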
Sean Silva 4c24472dea
update PyTorch version to 2.1.0.dev20230731 (#2359)
- torch version: 2.1.0.dev20230731
- torch commit hash: 6298ac688f8caafe30d71ff2ea2e20fbb32065c7
- torchvision version: 0.16.0.dev20230731

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-31 07:54:05 -07:00
Sean Silva 5be26a7e0b
update PyTorch version to 2.1.0.dev20230730 (#2356)
- torch version: 2.1.0.dev20230730
- torch commit hash: 0ff243ff350268cc98fe03fa6364375ee2824742
- torchvision version: 0.16.0.dev20230730

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-30 07:53:47 -07:00
Sean Silva fbdcf1e3c1
update PyTorch version to 2.1.0.dev20230729 (#2354)
- torch version: 2.1.0.dev20230729
- torch commit hash: b638df0afb83572724032c824c64e481bb4499a0
- torchvision version: 0.16.0.dev20230729

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-29 07:38:05 -07:00
Jiawei Wu 16923fdbd2
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.
2023-07-29 21:55:49 +08:00
Vivek Khandelwal 0109bf705b
[MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-07-28 09:45:12 -07:00
Sean Silva c19fda4f17
update PyTorch version to 2.1.0.dev20230728 (#2353)
- torch version: 2.1.0.dev20230728
- torch commit hash: eb5cb724fec897b866fd3a05b0c67ab9b23eeb96
- torchvision version: 0.16.0.dev20230728

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-28 07:55:48 -07:00
Sean Silva 6aeb1f112f
update PyTorch version to 2.1.0.dev20230727 (#2352)
- torch version: 2.1.0.dev20230727
- torch commit hash: 8a24a912a5f545d18059b59629aa3598f3783f25
- torchvision version: 0.16.0.dev20230727

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-27 08:37:51 -07:00
Yuanqiang Liu c7c59b540e
[Stablehlo] support dynamic shape when convert aten.fill.Scalar (#2349) 2023-07-27 18:35:25 +08:00
Sean Silva 991eba2b51
update PyTorch version to 2.1.0.dev20230726 (#2348)
- torch version: 2.1.0.dev20230726
- torch commit hash: 964a13b3dfbe583fa213fdca12b4a1732b1bb4e6
- torchvision version: 0.16.0.dev20230726

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-26 11:12:21 -07:00
Sean Silva c9f2e8366b
update PyTorch version to 2.1.0.dev20230725 (#2341)
- torch version: 2.1.0.dev20230725
- torch commit hash: 153afbda4b53928e5531f065c02fde1a29f2040a
- torchvision version: 0.16.0.dev20230725

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-25 09:43:31 -07:00
Gaurav Shukla 398fa0ef5a build: update llvm tag to 4592543a01609fe
- update llvm tag to 4592543a01609feb4b3c19e81a9d54743e15e329
- mhlo now points to f6615343fdab2c74bebd23c78366cf097f9a72df

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-07-25 21:15:44 +05:30
Matthias Gehre 0a67411719
test/CAPI/CMakeLists.txt: Depend on FileCheck (#2329)
I saw tests failing when FileCheck wasn't already built.
2023-07-25 10:11:55 +02:00
Matthias Gehre c56cb531d5
Ignore constants in the legality error (#2328) 2023-07-25 10:11:40 +02:00
JianzheXiao 31ef08b63d
[Stablehlo]Add support for AvgPool1dOp (#2268)
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op (#2338)
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Ramiro Leal-Cavazos 4a96e716c0
Use `register_buffer` to make `Add_Module` test work on lazy tensor (#2332)
Doing `module.to('lazy')` only moves the module member tensors to the
device if they are created with `self.register_buffer` or
`self.register_parameter`. Since the `self.tensor` tensor in
`Add_Module` test is currently not created using the `self.register_*`
methods, it is not being moved from CPU to lazy device, which is
causing the test to fail on LTC backend. This commit uses
`self.register_buffer` to fix the test on LTC backend.

This commit also seems to fix the test for torchdynamo.
2023-07-24 09:07:13 -07:00
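The `register_buffer` behavior described in the commit above can be sketched in plain Python (a toy stand-in, not torch: `MiniModule` and its `to()` are hypothetical names modeling why an unregistered attribute is left behind on device moves):

```python
def tensor(data, device="cpu"):
    """Toy tensor: just data plus a device tag."""
    return {"data": data, "device": device}

class MiniModule:
    def __init__(self):
        self._buffers = {}
        self.plain = tensor([1, 2, 3])            # plain attribute: NOT moved by to()
        self.register_buffer("buf", tensor([4, 5, 6]))

    def register_buffer(self, name, value):
        self._buffers[name] = value

    def to(self, device):
        # Only registered buffers (and parameters) follow the module.
        for t in self._buffers.values():
            t["device"] = device
        return self

m = MiniModule().to("lazy")
print(m._buffers["buf"]["device"])  # lazy
print(m.plain["device"])            # cpu -- the failure mode in Add_Module
```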
Sean Silva ef11a77315
update PyTorch version to 2.1.0.dev20230724 (#2339)
- torch version: 2.1.0.dev20230724
- torch commit hash: ba1da8199b3077b77a78a78e7f0dad166435182f
- torchvision version: 0.16.0.dev20230724

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-24 07:53:09 -07:00
Yuanqiang Liu 238c0501da
fix cmake torch-mlir-capi linking and bazel build (#2336) 2023-07-24 12:38:56 +08:00
Jiawei Wu 026e8db2e4
[Stablehlo] add converter for aten.scatter.src op (#2295) 2023-07-24 10:14:45 +08:00
Sean Silva dd0e91b466
update PyTorch version to 2.1.0.dev20230723 (#2335)
- torch version: 2.1.0.dev20230723
- torch commit hash: a060bf3cf05c09906e78d7299efc8184568ea2e1
- torchvision version: 0.16.0.dev20230723

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-23 07:52:27 -07:00
Sean Silva f0d8b6218b
update PyTorch version to 2.1.0.dev20230722 (#2333)
- torch version: 2.1.0.dev20230722
- torch commit hash: b5222f140da05e40ac90ff42bd1db6564343daff
- torchvision version: 0.16.0.dev20230722

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-22 07:53:00 -07:00
Sean Silva fb4c54fbef
update PyTorch version to 2.1.0.dev20230721 (#2331)
- torch version: 2.1.0.dev20230721
- torch commit hash: f228c8b8cac3db634516c7101dee077cbaa026ab
- torchvision version: 0.16.0.dev20230721

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-21 12:23:29 -07:00
Matthias Gehre 3ca35b4f3c
TorchToTosa: aten.embedding: Allow indices with any rank (#2327)
It's actually fine not to check the rank of the indices: the conversion flattens the index tensor to shape (1, numElements) before applying tosa::gather, and then reshapes the result to the output shape of the aten.embedding anyway.
2023-07-21 08:54:19 +02:00
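The flatten/gather/reshape argument in the commit above can be sketched with plain Python lists (hypothetical helper names; a simplified model of the TorchToTosa lowering, shown for rank-2 indices only):

```python
def flatten(nested):
    """Flatten an arbitrarily nested index list -- why rank doesn't matter."""
    out = []
    for x in nested:
        out.extend(flatten(x) if isinstance(x, list) else [x])
    return out

def embedding_any_rank(weight, indices, shape):
    flat = flatten(indices)                  # indices as (1, numElements)
    gathered = [weight[i] for i in flat]     # tosa::gather analogue
    rows, cols = shape                       # reshape back to the aten.embedding
    return [gathered[r * cols:(r + 1) * cols] for r in range(rows)]

weight = [[0.0, 0.1], [1.0, 1.1], [2.0, 2.1]]
indices = [[2, 0], [1, 1]]                   # rank-2 indices work fine
print(embedding_any_rank(weight, indices, (2, 2)))
```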
Alexandre Rames 1e468e8294 Fix canonicalization of `torch.prim.TupleUnpack`. 2023-07-20 20:08:46 +02:00
Alexandre Rames a20422ce65 Support `DerefineOp` in `RefinePublicReturn`. 2023-07-20 20:08:46 +02:00
Alexandre Rames 4847563bed Clean up verification of calling conventions.
The implementation at this place was a remnant of the times when the pipeline was run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
2023-07-20 20:08:46 +02:00
Sean Silva 91a9baa3e7
update PyTorch version to 2.1.0.dev20230720 (#2326)
- torch version: 2.1.0.dev20230720
- torch commit hash: a16c87a767b22dbfa9e9435b1efe699db377ebf5
- torchvision version: 0.16.0.dev20230720

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-20 08:03:47 -07:00
Jiawei Wu 9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op (#2297) 2023-07-20 16:46:44 +08:00
Matthias Gehre 64d7626a52
Fixes for split tensor and slice (#2314)
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
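The last fold in the commit above (slices spanning 0:int_max are the identity) can be modeled as a standalone rule; the function below is a hypothetical plain-Python sketch, not the actual AtenSliceTensorOp::fold implementation:

```python
INT64_MAX = 2**63 - 1  # torch encodes an open slice end as int64 max

def fold_slice(start, end, step):
    """Fold rule sketch: slice(0, int_max, 1) selects everything,
    so the op can be replaced by its input."""
    if start == 0 and end in (INT64_MAX, None) and step == 1:
        return "identity"   # fold away: result is the input tensor
    return "keep"           # bounds actually restrict the tensor

print(fold_slice(0, INT64_MAX, 1))  # identity
print(fold_slice(1, INT64_MAX, 1))  # keep
```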
max 0650efe7c0 Conform to Python custom exception API 2023-07-19 21:00:55 -05:00
Jiawei Wu 3f843c8fd9
[torch-dialect] fix aten.type_as op's folder (#2283)
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
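The decomposition named in the commit above (`aten.type_as` -> `prim.dtype` + `aten.to.dtype`) can be sketched with plain Python pairs of data and dtype tag (all names here are illustrative stand-ins, not torch-mlir ops):

```python
def prim_dtype(t):
    """prim.dtype analogue: read the dtype of a toy tensor."""
    return t["dtype"]

def aten_to_dtype(t, dtype):
    """aten.to.dtype analogue: cast the toy tensor's elements."""
    caster = {"int": int, "float": float}[dtype]
    return {"data": [caster(v) for v in t["data"]], "dtype": dtype}

def type_as(x, y):
    # The folder's decomposition: x.type_as(y) == x.to(dtype=y.dtype)
    return aten_to_dtype(x, prim_dtype(y))

x = {"data": [1, 2, 3], "dtype": "int"}
y = {"data": [0.5], "dtype": "float"}
print(type_as(x, y))  # {'data': [1.0, 2.0, 3.0], 'dtype': 'float'}
```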
Sean Silva c9add6b7d8
update PyTorch version to 2.1.0.dev20230719 (#2323)
- torch version: 2.1.0.dev20230719
- torch commit hash: 82e03ad95768645f27100929366530f5d62deffe
- torchvision version: 0.16.0.dev20230719

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-19 08:08:15 -07:00
AyaanShah2204 a308a54255
Fixes Windows DLL crash (#2321)
* explicit inliner extension

* fixed import formatting
2023-07-18 19:12:46 -07:00
Sean Silva 3b56f97f6f
update PyTorch version to 2.1.0.dev20230718 (#2318)
- torch version: 2.1.0.dev20230718
- torch commit hash: 5e128c4fa1f1217e30c7179aeb5eb5eb95d4dd70
- torchvision version: 0.16.0.dev20230718

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-18 08:21:40 -07:00
Matthias Gehre 0c17997000
Don't crash when the input to aten.copy is unranked (#2307)
This can happen when the input comes from an unsupported operator
2023-07-18 09:52:33 +02:00
Ramiro Leal-Cavazos 718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library (#2309)
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
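The type mismatch described in the commit above comes down to the signature of the dtype functions; a minimal sketch of the fixed shape, with a hypothetical `result_dtype` function standing in for the abstract interpretation library's real functions:

```python
from typing import Union

# PyTorch's NumberType is Union[int, float, complex]; accepting only
# Union[int, float] caused mismatches when reifying dtype functions.
TorchNumber = Union[int, float, complex]

def result_dtype(value: TorchNumber) -> str:
    """Hypothetical dtype function for a `!torch.number` operand."""
    if isinstance(value, complex):
        return "complex64"
    if isinstance(value, float):
        return "float32"
    return "int64"

print(result_dtype(2j))   # complex64 -- previously not representable
print(result_dtype(1.5))  # float32
```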
Chi_Liu 5706697e0b
[TOSA] Add aten._index_put_impl support (#2031)
Add e2e support by adding "tosa-to-scf".
2023-07-17 09:51:24 -07:00
Sean Silva ba24a46910
update PyTorch version to 2.1.0.dev20230717 (#2315)
- torch version: 2.1.0.dev20230717
- torch commit hash: c437a4b1e0da5c00c15c983fecfeedb81b2355f5
- torchvision version: 0.16.0.dev20230717

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-17 07:48:34 -07:00
Matthias Gehre 06c9bd08e0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of comparisons where the input type is bool (#2304) 2023-07-17 09:49:04 +02:00
Sean Silva d69b6bd587
update PyTorch version to 2.1.0.dev20230716 (#2312)
- torch version: 2.1.0.dev20230716
- torch commit hash: c69b6e5da6f5892c2b2bd5fbf28dd5b568de362f
- torchvision version: 0.16.0.dev20230716

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-16 07:51:23 -07:00
Sean Silva 27455500c3
update PyTorch version to 2.1.0.dev20230715 (#2311)
- torch version: 2.1.0.dev20230715
- torch commit hash: 6db8e8b9b7ae2232c3ab0eb7fe19830357695c7d
- torchvision version: 0.16.0.dev20230715

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-15 09:26:34 -07:00
Sean Silva bcbfeecae0
update PyTorch version to 2.1.0.dev20230714 (#2308)
- torch version: 2.1.0.dev20230714
- torch commit hash: d257917ad4e5bb1b848f7857026191b61efb2294
- torchvision version: 0.16.0.dev20230714

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-14 08:29:41 -07:00
Tiago Trevisan Jost 48383554da
TorchToTosa: Legalization for torch.aten.sqrt (#2234) 2023-07-14 08:23:10 +02:00
Yuanqiang Liu 7f6b72aec8
[Torch Dialect] add runtime.assert to check constraint when recomposing complex ops (#2281) 2023-07-14 10:13:19 +08:00
Sean Silva 50f5b658b6
update PyTorch version to 2.1.0.dev20230713 (#2303)
- torch version: 2.1.0.dev20230713
- torch commit hash: fccac344dff905c235681c7eb1b567d45f45edb6
- torchvision version: 0.16.0.dev20230713

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-13 10:49:33 -07:00
Matthias Gehre f8e75f659d
Add make_fx_tosa variant to end2end tests (#2240)
* Add make_fx_tosa variant to end2end tests

* e2e_testing/xfail_sets.py: Add make_fx_tosa xfail for stable
2023-07-13 15:07:54 +02:00
nithinsubbiah 91c6454618 Filter out empty strings while generating function signature 2023-07-13 13:51:54 +05:30