Commit Graph

424 Commits (pre_fixup_20240110)

Author SHA1 Message Date
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2; part 2 will upstream the FX importer. I started
with ONNX because it forces some project-layout updates and is more
self-contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, since Maks had
already opened up that namespace and it seemed to fit. Better to
have fewer things at that level.
* Set up the build so that the root project contains only MLIR Python and
pure-Python deps (like the importers); this can be augmented with
`projects/` adding more, depending on which features are enabled.
* The default build continues to build everything, whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode it builds a
`torch-mlir-core` wheel with only the pure contents.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which can also generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup, once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively whether
the tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
2023-12-12 19:02:51 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically, LTC is identified as likely ultimately living in
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this patch with respect to build layering or
options. These can be updated in a followup without commingling
directory-structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass-through, but it
roughly halves the number of build actions for the project, from the
high 4000s to the low 2000s.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. On reflection, this will better accommodate the follow-on
code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Yuanqiang Liu 365655ca29
[Torch Dialect] add canonicalize pattern for aten.floor with integer type (#2534)
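
For intuition, a minimal sketch (not the actual rewrite pattern) of why this
canonicalization is sound: floor is the identity on integer values, so
`aten.floor` applied to an integer-typed tensor can be replaced by its operand.

```python
# Not the Torch-MLIR pattern itself; just the fact the fold relies on:
# floor(x) == x whenever x is an integer.
import math

for v in (-3, 0, 7):
    assert math.floor(v) == v
```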
2023-11-02 09:51:31 +08:00
saienduri a2e694df40
add e2e support for torch.eye operations (aten.eye, aten.eye.m) (#2478) 2023-11-01 11:23:28 -07:00
JianzheXiao e8706957c0
[Torch Dialect] Add Support for aten.unflatten.int (#2475)
As titled: add support for aten.unflatten.int, allowing dim to be negative
and one of the sizes' elements to be -1.
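
For reference, a minimal PyTorch-level sketch of the semantics being added
(the shapes below are illustrative, not taken from the test suite):

```python
import torch

x = torch.randn(2, 12)
# unflatten splits one dimension into several; dim may be negative and
# one of the target sizes may be -1 (inferred from the rest).
y = x.unflatten(-1, (3, -1))  # last dim of size 12 -> (3, 4)
assert y.shape == (2, 3, 4)
```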
2023-10-31 15:36:16 +08:00
Yuanqiang Liu e7282487ea
[Torch Dialect] support aten.glu (#2531) 2023-10-26 10:36:18 +08:00
Sarthak Gupta 7633619ed2
[torch] Implement stronger verifiers for non-value semantic ops (#2519)
Attempt to solve https://github.com/llvm/torch-mlir/issues/2490

Changes for non-value-semantic ops having the
`IsTrailingUnderscoreInplaceVariant` trait:
- AnyTorchTensorType -> Torch_NonValueTensorType
- AnyTorchOptionalTensorType -> AnyTorchOptionalNonValueTensorType
- AnyTorchListOfOptionalTensorType ->
AnyTorchListOfOptionalNonValueTensorType
- AnyTorchListOfTensorType -> AnyTorchListOfNonValueTensorType

Created three new tensor types for optional and list non-value tensors.
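
For intuition only, a PyTorch-level illustration of what the
trailing-underscore (in-place) variants do, which is why their operands must
be non-value tensor types in the Torch dialect:

```python
import torch

x = torch.ones(2)
x.add_(1)          # trailing-underscore op mutates x in place
assert x.tolist() == [2.0, 2.0]
```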
2023-10-21 09:09:55 -07:00
Ze Zhang f2c53b8ca5
Add aten.isclose support and its torch-to-tosa lowering (#2512)
Add aten.isclose op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests


To test e2e tosa lowering:
`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-16 09:44:53 -07:00
Ze Zhang e649e06b7b
Add aten.unflatten.int support and its torch-to-tosa lowering (#2509)
Add aten.unflatten.int op
Add its torch-to-tosa lowering
Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering:

`python -m e2e_testing.main -v -c=tosa`

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-10-13 18:39:41 -07:00
Jae Hoon (Antonio) Kim 32d9b20bde
Add linspace/cumprod/roll ops (#2498)
Add linspace/cumprod/roll ops to ODS and add shape inference functions
to make them work with LTC.

Also, add some tensor utils to the LTC library for searching for non-detach
copy nodes.
2023-10-03 11:01:07 -04:00
Vivek Khandelwal 9293326e1e [MLIR][TORCH] Add support for bitwise_right_shift and bitwise_and.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-10-02 13:06:59 +05:30
Vivek Khandelwal 71ac62f3a8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-28.

The aten.baddbmm changes were made because upstream PyTorch has now added
support for fp16 gemm on CPU.
Refer: 9399e0b1ff
2023-10-02 09:48:32 +05:30
saienduri 4e1dd3bf10
add e2e support for torch.log10 (#2479) 2023-09-28 10:17:03 -07:00
Vivek Khandelwal 7760bda8ee build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-26.

The aten._convolution.deprecated changes were made because upstream PyTorch
has now added support for fp16 native convolution on CPU.
Refer: 7c9052165a

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-27 16:24:58 +05:30
Gleb Kazantaev 059041e0fe
[LTC] Support torch.ones/zeros/arange ops (#2440) 2023-09-21 13:25:14 -04:00
David Gens 023fc90072
[Torch Dialect] add avg_pool 2d and 3d op variants (#2473)
Adds ODS for `avg_pool2d` and `avg_pool3d`, including their backward and
`adaptive_` variants.
2023-09-20 13:47:08 -04:00
Bruce Kim 40913a36c2
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (redo PR) (#2459)
This is the same PR as #2457; I accidentally thought the review was already complete and merged it, so it was reverted.

Add a decomposition for the empty_strided op.
Referring to #1776, this decomposition only supports default stride values,
because when accessing or indexing the tensor, the indices are determined by
the strides. In MLIR this is not supported implicitly; the lowering assumes
the strides are the defaults while iterating over the tensor.
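
A small sketch of what "default stride values" means here, assuming the usual
row-major contiguous layout (the helper name is illustrative, not from the patch):

```python
def default_strides(sizes):
    # For a contiguous tensor, stride[i] is the product of all later sizes.
    strides, running = [], 1
    for size in reversed(sizes):
        strides.append(running)
        running *= size
    return list(reversed(strides))

assert default_strides([2, 3, 4]) == [12, 4, 1]
```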
2023-09-13 10:04:31 -07:00
Vivek Khandelwal 4b4c38da46 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-13.
Ref: 464f9c3725

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-13 21:25:21 +05:30
Ramiro Leal-Cavazos 106b58597a
Revert "[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (#2457)" (#2458)
This reverts commit 97bec86a8b.
2023-09-12 13:57:47 -07:00
Bruce Kim 97bec86a8b
[MLIR][TORCH] Add E2E support for aten.empty_strided decomposition op (#2457)
* implemented e2e test case, shape, dtype func

* AtenEmptyStrided decompose op implemented

* xfailed test module in ltc
2023-09-12 13:37:02 -07:00
Arham Khan 82456eefed
[MLIR][TORCH] add E2E support for aten.new_full (#2425)
* implement aten.new_full

* remove extraneous tests
2023-09-12 09:29:08 -05:00
Yuanqiang Liu 1f20b7275d
[Torch Dialect] add canonicalize for aten.min.other (#2452) 2023-09-11 17:28:22 +08:00
Jiawei Wu b411a40b3d
[Torch Dialect] emit aten.__or__Tensor Op (#2437)
* emit aten.__or__TensorOp

* bug fix

* remove convert to stablehlo

* code style refinement
2023-09-06 14:21:51 +08:00
Stella Laurenzo fcb3b718a5 Properly guard clang-specific pragma.
Avoids unsupported pragma warning on GCC.
2023-09-06 00:43:50 -07:00
Jerin Philip 9cb5d38cd1
[MLIR][TORCH] Add E2E `torch.aten.prod_dim_int` (#2423)
Uses the existing reduction codepath, adding the modifications and branches
required for prod.
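
For reference, an illustrative PyTorch-level example of the reduction being
lowered (the values are made up):

```python
import torch

x = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
# aten.prod.dim_int corresponds to a product reduction along one dimension.
assert torch.prod(x, dim=1).tolist() == [6, 120]
```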
2023-09-05 13:38:51 -07:00
Vivek Khandelwal 3841fe3035 [MLIR][TORCH] Add StableHLO lowering for embedding_bag.padding_idx op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-05 21:32:23 +05:30
Jiawei Wu d62045f64d
emit aten.max.other op (#2436) 2023-09-05 10:52:32 +08:00
Yuanqiang Liu e9ab8ceb1c
[Torch Dialect] support aten.split_with_sizes (#2431)
* [Torch Dialect] support aten.split_with_sizes

* update
2023-09-04 09:59:26 +08:00
Bruce Kim cd1c7df8be
[MLIR][TORCH] Add E2E support for view_as_real op (#2419)
* view_as_real test case, allow dtype in testutils.randn

* abstract python upstream func implemented

* fixed upstream dtype func, implemented view_as_real backend op

* formatted AtenViewAsRealOp, removed change in e2etest/framework

* removed test suite from reshape_like.py, because it was moved to basic.py

* implemented C-API wrapper for mlirComplexF128 type

* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict

* Changed IR input of aten fft_fft unit test

* code refactored

* code refactored and fixed ci test

* refactored: removed white spaces, and rolled back to having both input/output affine expr

* refactored: deleted output affine expr to reduce redundancy

* xfail ltc backend

* removed ComplexImag and ComplexReal from torchdynamo xfail set

* copied and pasted from main branch as there's no change to be made in this file

* refactored abstract_interp_lib_gen.py

* refactored: torchtypes.td, formatted, removed commented out code
2023-09-01 21:12:01 -07:00
Quinn Dawkins 1fc4314b62
Add folder for aten.broadcast_to on unchanged static shapes (#2421) 2023-09-01 14:50:34 -04:00
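
Regarding the `aten.broadcast_to` folder above, a PyTorch-level illustration of
why broadcasting to the input's own static shape is a no-op (example values are
illustrative):

```python
import torch

x = torch.randn(2, 3)
y = torch.broadcast_to(x, (2, 3))  # target shape equals the static input shape
assert y.shape == x.shape and torch.equal(x, y)
```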
Vivek Khandelwal 729386c9d8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-01.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-01 22:07:51 +05:30
Vivek Khandelwal 5c43daa3bf [MLIR][TORCH] Add e2e support for aten.pow.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 21:43:24 +05:30
JianzheXiao 17d02811d5
[Torch Dialect] add folder for aten.any.bool (#2388)
2023-08-30 17:29:03 +08:00
Arham Khan c42d2beb6e
[MLIR][TORCH] add E2E support for aten.min op (#2422)
* impl aten.min op

* remove extraneous test
2023-08-29 12:12:41 -05:00
Zhekun(Josh) Zhang 5282324c68
[Importer] fix has value semantic return type (#2404)
* fix value semantic return

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-08-29 10:14:09 +08:00
David Gens ca34b9c4fc
add max_pool3d (#2386) 2023-08-28 19:01:55 -04:00
Arham Khan 8855fa3ace amend dtype function 2023-08-28 10:52:16 +05:30
Arham Khan a80bc42521 dtype test case 2023-08-28 10:52:16 +05:30
Arham Khan 610d836fd2 impl aten.elu as decomposition 2023-08-28 10:52:16 +05:30
Arham Khan 12eadccc07 add e2e support for aten.elu 2023-08-28 10:52:16 +05:30
Jiawei Wu 4339c00f1b
[Torch Dialect][stablehlo] emit aten.rand op and add converter to stablehlo (#2413)
* [Torch Dialect] emit aten.rand op and add converter to stablehlo

* add failed tests for torchdynamo backend

* add failed test for linalg backend
2023-08-27 21:56:36 +08:00
Gleb Kazantaev 3dd29f9d5d
Update Torch ODS list with new ops (#2361)
* [LTC] Add shape_inference_(add|uniform)

* Add torch.multinomial op.

* Update ods gen; add normal_functional and erfinv ops support

* New TorchMLIR ops: clamp_min.Tensor, clamp_max.Tensor, xlogy, binary_cross_entropy, log_sigmoid_forward, sigmoid_backward, cosine_embedding_loss, scatter.reduce

* Improve the shape inference logic of whereOp

- Infer the result tensor according to the broadcasting semantics

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Added aten::sgn

* Add shape inference logic for hardtanh_backward op

* Added new Torch-MLIR ops

Co-authored-by: GlebKazantaev <gleb.nnstu@gmail.com>

* Add support for elu lowering

* Add support for elu_backward lowering

* Support fmod, remainder, and floor_divide

Emit generated op defs for the remainder.Tensor and fmod.Tensor

Add shape inference implementations for remainder.Scalar, fmod.Scalar
and floor_divide.Tensor

* Add shape inference logic for im2col

- pytorch.nn.unfold gets decomposed into im2col

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Add aten::eye and aten::eye.m support

* Add tracing for linalg_qr

* Update GeneratedTorchOps.td

* Update xfails

* Fix unbound variable issue in torch_ods_gen

---------

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: zihaoc-cerebras <zihao.chen@cerebras.net>
Co-authored-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Gokul Ramakrishnan <gokul.ramakrishnan@cerebras.net>
Co-authored-by: glebk-cerebras <111300564+glebk-cerebras@users.noreply.github.com>
Co-authored-by: Behzad Abghari <behzad.abghari@gmail.com>
Co-authored-by: Ahmed Elkoushy <ahmed.elkoushy@cerebras.net>
2023-08-21 06:36:39 -04:00
Gleb Kazantaev 5743b6d4ac
LTC multi-output operations support (#2362)
* LTC/TorchMLIR multi-output operations support

* Update torch-mlir jit lowering to support ops with dynamic number of outputs

* Added support for aten::split_copy, aten::split_with_sizes_copy

* Fix native function for aten::split; cleanup code

* Fix TorchMlirTensorList lowering

* Remove xfails
2023-08-20 16:32:11 -04:00
Jiawei Wu 60bad54f27
[Torch Dialect] replace none-index in aten.Index.Tensor's param by manually generating it (#2344)
* [Torch Dialect] replace none-index in aten.Index.Tensor's  param by manually generating it
Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
Co-authored-by: Jianzhe Xiao <jianzhe.xiao@bytedance.com>

* minor typo fix

* add new failed e2e tests for ltc

* fix typo

* Address comments

* Add more e2e tests

* add failed e2e tests for LTC

* address comments

* remove decomposition for AtenIndexTensorHackedTwinOp
2023-08-15 19:36:08 +08:00
Ramiro Leal-Cavazos ff762100b8
Add handling of namespaces to library generator (#2391)
When using custom ops, sometimes PyTorch will insert namespaces into the
abstract interpretation function name, in the format
`__torch__.{namespace_1}.{namespace_2}...{op_name}`.  The extra
namespaces are not part of the abstract interpretation function name,
so they need to be removed before generating the library of MLIR
snippets of abstract interpretation functions. This commit adds
support for removing the namespace information.
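
A hypothetical sketch of the kind of normalization described (the helper name
and exact logic are illustrative, not the library generator's actual
implementation):

```python
def strip_namespaces(qualified_name: str) -> str:
    # "__torch__.{namespace_1}.{namespace_2}...{op_name}" -> "{op_name}"
    parts = qualified_name.split(".")
    if parts[0] == "__torch__":
        return parts[-1]
    return qualified_name

assert strip_namespaces("__torch__.my_ns.my_custom_op") == "my_custom_op"
```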
2023-08-11 09:56:19 -07:00
Vivek Khandelwal e61ef1ee54 [MLIR][TORCH] Add support for aten._unsafe_index_put.hacked_twin op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-11 08:57:01 +05:30
Vivek Khandelwal f0a8f273f7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-08-10.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-10 21:59:20 +05:30
JianzheXiao 38b049eb1a
[Torch Dialect] add support for adaptive_avgpool_1d (#2342)
* add support for adaptive_avgpool_1d

* update xfail sets

---------

Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-08-05 07:48:09 +08:00
Jiawei Wu 20a2b68ed6
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355) 2023-08-04 09:05:34 +08:00
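
Regarding the `aten.tile` decomposition above, a PyTorch-level illustration of
the tile/repeat relationship it relies on (example values are illustrative):

```python
import torch

x = torch.tensor([1, 2])
# When len(dims) >= x.dim(), tile and repeat agree.
assert torch.equal(torch.tile(x, (2,)), x.repeat(2))  # both give [1, 2, 1, 2]
```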
Jiawei Wu 6db92d1b14
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)
* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op
2023-08-03 16:21:14 +08:00