Commit Graph

858 Commits (rm_obsolete_build_automation)

Author SHA1 Message Date
Vivek Khandelwal 729386c9d8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-01.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-01 22:07:51 +05:30
Vivek Khandelwal 5c43daa3bf [MLIR][TORCH] Add e2e support for aten.pow.Scalar op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 21:43:24 +05:30
Vivek Khandelwal aa15f0d4ca build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-08-30.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 16:23:34 +05:30
Gleb Kazantaev 6b02e9a926
[LTC] Tensor[]? operands type support using partial codegen (#2410)
* Tensor[]? operands type support using partial codegen

* aten.index.Tensor support via partial codegen

* Add torch.index_put tracing support

* Added optional tensor list type support for LTC/TorchMLIR lowering

* Added comments

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
2023-08-30 06:29:39 -04:00
JianzheXiao 17d02811d5
[Torch Dialect] add folder for aten.any.bool (#2388)
* update
2023-08-30 17:29:03 +08:00
Arham Khan c42d2beb6e
[MLIR][TORCH] add E2E support for aten.min op (#2422)
* impl aten.min op

* remove extraneous test
2023-08-29 12:12:41 -05:00
Zhekun(Josh) Zhang 5282324c68
[Importer] fix value-semantic return type (#2404)
* fix value semantic return

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-08-29 10:14:09 +08:00
David Gens ca34b9c4fc
add max_pool3d (#2386) 2023-08-28 19:01:55 -04:00
Arham Khan bc6bba9077 add nondefault test case, add to illegal ops in backend contract 2023-08-28 10:52:16 +05:30
Arham Khan 8855fa3ace amend dtype function 2023-08-28 10:52:16 +05:30
Arham Khan a80bc42521 dtype test case 2023-08-28 10:52:16 +05:30
Arham Khan 610d836fd2 impl aten.elu as decomposition 2023-08-28 10:52:16 +05:30
Arham Khan 12eadccc07 add e2e support for aten.elu 2023-08-28 10:52:16 +05:30
Jiawei Wu 4339c00f1b
[Torch Dialect][stablehlo] emit aten.rand op and add converter to stablehlo (#2413)
* [Torch Dialect] emit aten.rand op and add converter to stablehlo

* add failed tests for torchdynamo backend

* add failed test for linalg backend
2023-08-27 21:56:36 +08:00
Gleb Kazantaev 3dd29f9d5d
Update Torch ODS list with new ops (#2361)
* [LTC] Add shape_inference_(add|uniform)

* Add torch.multinomial op.

* Update ods gen; add normal_functional and erfinv ops support

* New TorchMLIR ops: clamp_min.Tensor, clamp_max.Tensor, xlogy, binary_cross_entropy, log_sigmoid_forward, sigmoid_backward, cosine_embedding_loss, scatter.reduce

* Improve the shape inference logic of whereOp

- Infer the result tensor according to the broadcasting semantics

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Added aten::sgn

* Add shape inference logic for hardtanh_backward op

* Added new Torch-MLIR ops

Co-authored-by: GlebKazantaev <gleb.nnstu@gmail.com>

* Add support for elu lowering

* Add support for elu_backward lowering

* Support fmod, remainder, and floor_divide

Emit generated op defs for the remainder.Tensor and fmod.Tensor

Add shape inference implementations for remainder.Scalar, fmod.Scalar
and floor_divide.Tensor

* Add shape inference logic for im2col

- torch.nn.Unfold gets decomposed into im2col

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Add aten::eye and aten::eye.m support

* Add tracing for linalg_qr

* Update GeneratedTorchOps.td

* Update xfails

* Fix unbound variable issue in torch_ods_gen

---------

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: zihaoc-cerebras <zihao.chen@cerebras.net>
Co-authored-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Gokul Ramakrishnan <gokul.ramakrishnan@cerebras.net>
Co-authored-by: glebk-cerebras <111300564+glebk-cerebras@users.noreply.github.com>
Co-authored-by: Behzad Abghari <behzad.abghari@gmail.com>
Co-authored-by: Ahmed Elkoushy <ahmed.elkoushy@cerebras.net>
2023-08-21 06:36:39 -04:00
Gleb Kazantaev 5743b6d4ac
LTC multi-output operations support (#2362)
* LTC/TorchMLIR multi-output operations support

* Update torch-mlir jit lowering to support ops with dynamic number of outputs

* Added support for aten::split_copy, aten::split_with_sizes_copy

* Fix native function for aten::split; cleanup code

* Fix TorchMlirTensorList lowering

* Remove xfails
2023-08-20 16:32:11 -04:00
Simon Camphausen d77b9cf7ae
[TOSA] Fix conversion for depthwise convolutions (#2398)
* [TOSA] Fix conversion for depthwise convolutions

* Add e2e tests for depthwise and grouped convolutions

Co-authored-by: Lucas Camphausen <lucas.camphausen@iml.fraunhofer.de>
2023-08-18 08:15:54 -07:00
Jiawei Wu 60bad54f27
[Torch Dialect] replace none-index in aten.index.Tensor's param by manually generating it (#2344)
* [Torch Dialect] replace none-index in aten.index.Tensor's param by manually generating it
Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
Co-authored-by: Jianzhe Xiao <jianzhe.xiao@bytedance.com>

* minor typo fix

* add new failed e2e tests for ltc

* fix typo

* Address comments

* Add more e2e tests

* add failed e2e tests for LTC

* address comments

* remove decomposition for AtenIndexTensorHackedTwinOp
2023-08-15 19:36:08 +08:00
Ramiro Leal-Cavazos ff762100b8
Add handling of namespaces to library generator (#2391)
When using custom ops, sometimes PyTorch will insert namespaces into the
abstract interpretation function name, in the format:
`__torch__.{namespace_1}.{namespace_2}...{op_name}`. The extra
namespaces are not part of the abstract interpretation function name,
so they need to be removed before generating the library of MLIR
snippets of abstract interpretation functions. This commit adds
support for removing the namespace information.
2023-08-11 09:56:19 -07:00
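For reference, a minimal sketch of the stripping described above (the helper below is illustrative, not the actual library-generator code):

    # Hypothetical helper: keeps only the op name from a qualified name like
    # "__torch__.{namespace_1}.{namespace_2}...{op_name}".
    def strip_namespaces(qualified_name: str) -> str:
        return qualified_name.split(".")[-1]

    assert strip_namespaces("__torch__.my_ns.sub_ns.my_op") == "my_op"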
Vivek Khandelwal e61ef1ee54 [MLIR][TORCH] Add support for aten._unsafe_index_put.hacked_twin op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-11 08:57:01 +05:30
Vivek Khandelwal f0a8f273f7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-08-10.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-10 21:59:20 +05:30
Vivek Khandelwal ee6c87ef5b [MLIR][TORCH] Add support for dtype arg for softmax.int op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-08 21:54:47 +05:30
JianzheXiao 38b049eb1a
[Torch Dialect] add support for adaptive_avgpool_1d (#2342)
* [MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)

[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.

update PyTorch version to 2.1.0.dev20230729 (#2354)

- torch version: 2.1.0.dev20230729
 - torch commit hash: b638df0afb83572724032c824c64e481bb4499a0
 - torchvision version: 0.16.0.dev20230729

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

update PyTorch version to 2.1.0.dev20230730 (#2356)

- torch version: 2.1.0.dev20230730
 - torch commit hash: 0ff243ff350268cc98fe03fa6364375ee2824742
 - torchvision version: 0.16.0.dev20230730

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

update PyTorch version to 2.1.0.dev20230731 (#2359)

- torch version: 2.1.0.dev20230731
 - torch commit hash: 6298ac688f8caafe30d71ff2ea2e20fbb32065c7
 - torchvision version: 0.16.0.dev20230731

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

LTC->MLIR Debug Info support (#1922)

* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops only considers
the filename, line number, and column number. Attaching the operation
name as well would help identify the type of computation just by
looking at the execution profile.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>

build: update llvm tag to 41895843

Summary of changes:
- Update tags
  llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
  mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>

update PyTorch version to 2.1.0.dev20230802 (#2366)

- torch version: 2.1.0.dev20230802
 - torch commit hash: c89b16917755c2abbef7b6420e340baf9ae8089e
 - torchvision version: 0.16.0.dev20230802

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

Change Python version from 3.10 to 3.11 in installation instructions (#2370)

Add CITATION file (#2371)

Add packaging as an install dependency (#2369)

Needed by `torch_mlir._version`. Resolves #2368.

[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)

* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op

update PyTorch version to 2.1.0.dev20230803 (#2372)

- torch version: 2.1.0.dev20230803
 - torch commit hash: f89c73be3a3e8274d025ac46a33a780853841c9e
 - torchvision version: 0.16.0.dev20230803

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>

Prevent failed stable CI job from cancelling nightly jobs (#2373)

The CI jobs that use stable PyTorch are currently not required to pass
in order for a patch to get merged in `main`. This commit makes sure
that if a CI job for stable PyTorch fails, it does not cancel the
other required jobs.

[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355)

update

update xfail sets

update xfail_sets

update

fix xfail_sets

update:

update

update:

update

add support for adaptive_avgpool_1d

update xfail sets

update xfail_sets

update

fix xfail_sets

update:

update:

* update

---------

Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-08-05 07:48:09 +08:00
Jiawei Wu 20a2b68ed6
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355) 2023-08-04 09:05:34 +08:00
Jiawei Wu 6db92d1b14
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)
* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op
2023-08-03 16:21:14 +08:00
Vivek Khandelwal a374c39106 build: update llvm tag to 41895843
Summary of changes:
- Update tags
  llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
  mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-02 21:18:14 +05:30
Gleb Kazantaev fb52a73cbe
LTC->MLIR Debug Info support (#1922)
* LTC->MLIR Debug Info support

* SW-95317 Propagate Lazy->Jit->MLIR scope name.

* Enhance location information based on op names

Currently, the location information attached to the ops only considers
the filename, line number, and column number. Attaching the operation
name as well would help identify the type of computation just by
looking at the execution profile.

* Update locations logic; updated debug-info.py test

* Use {scope}/{op_name} format to track names by default

---------

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>
2023-08-02 10:29:11 -04:00
Vivek Khandelwal 0109bf705b
[MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-07-28 09:45:12 -07:00
JianzheXiao 31ef08b63d
[Stablehlo] Add support for AvgPool1dOp (#2268)
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op (#2338)
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Ramiro Leal-Cavazos 4a96e716c0
Use `register_buffer` to make `Add_Module` test work on lazy tensor (#2332)
Doing `module.to('lazy')` only moves the module member tensors to the
device if they are created with `self.register_buffer` or
`self.register_parameter`. Since the `self.tensor` tensor in
`Add_Module` test is currently not created using the `self.register_*`
methods, it is not being moved from CPU to lazy device, which is
causing the test to fail on LTC backend. This commit uses
`self.register_buffer` to fix the test on LTC backend.

This commit also seems to fix the test for torchdynamo.
2023-07-24 09:07:13 -07:00
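For context, a sketch of the distinction the commit above relies on (the module below is illustrative, not the actual test):

    import torch

    class AddModule(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # A plain attribute like `self.tensor = torch.randn(3)` is NOT
            # moved by `module.to(device)`; a registered buffer is.
            self.register_buffer("tensor", torch.randn(3))

        def forward(self, x):
            return x + self.tensor

    m = AddModule()
    # With an LTC build, m.to('lazy') now moves the buffer to the lazy device.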
Alexandre Rames 1e468e8294 Fix canonicalization of `torch.prim.TupleUnpack`. 2023-07-20 20:08:46 +02:00
Jiawei Wu 9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op (#2297) 2023-07-20 16:46:44 +08:00
Matthias Gehre 64d7626a52
Fixes for split tensor and slice (#2314)
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
max 0650efe7c0 Conform to Python custom exception api 2023-07-19 21:00:55 -05:00
Jiawei Wu 3f843c8fd9
[torch-dialect] fix aten.type_as op's folder (#2283)
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
Ramiro Leal-Cavazos 718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library (#2309)
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
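In Python terms, the fix widens the scalar type accepted by the dtype functions; a schematic signature (not the library's exact naming convention):

    from typing import Union

    # PyTorch's NumberType is Union[int, float, complex], not Union[int, float].
    Number = Union[int, float, complex]

    # Hypothetical abstract interpretation function for an op that takes a
    # !torch.number operand:
    def pow_scalar_dtype(self_dtype: int, exponent: Number) -> int:
        ...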
Chi_Liu 5706697e0b
[TOSA] Add aten._index_put_impl support (#2031)
Add e2e support by adding "tosa-to-scf"
2023-07-17 09:51:24 -07:00
Matthias Gehre 06c9bd08e0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of comparisons where the input type is bool (#2304) 2023-07-17 09:49:04 +02:00
Matthias Gehre f8e75f659d
Add make_fx_tosa variant to end2end tests (#2240)
* Add make_fx_tosa variant to end2end tests

* e2e_testing/xfail_sets.py: Add make_fx_tosa xfail for stable
2023-07-13 15:07:54 +02:00
nithinsubbiah 91c6454618 Filter out empty strings while generating function signature 2023-07-13 13:51:54 +05:30
Abhishek Varma 6c9ba4ce95
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op (#2174)
-- This commit adds support for dynamic dimensions in the BroadcastTo op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-07 10:01:51 -07:00
Jiawei Wu c7fa42b7d3
[Torch Dialect] Add canonicalizer for aten.to.other op (#2273)
Canonicalize aten.to.other to prim.device + prim.dtype + aten.to.device
Co-authored-by: wujiawei.aml <wujiawei.aml@bytedance.com>
2023-06-30 09:43:08 +08:00
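The equivalence behind the canonicalization above, sketched in eager PyTorch:

    import torch

    x = torch.randn(2, 2)
    other = torch.zeros(2, 2, dtype=torch.float64)

    # aten.to.other ...
    y1 = x.to(other)
    # ... is equivalent to reading the target's device/dtype and using to.device:
    y2 = x.to(device=other.device, dtype=other.dtype)

    assert y1.dtype == y2.dtype and y1.device == y2.device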
Yuanqiang Liu 449cfb8375
[Torch Dialect] add more scalar op folders (#2265) 2023-06-29 10:37:13 +08:00
Yuanqiang Liu 859885c1d3
[Torch Dialect] Support aten.native_dropout (#2259)
* [Torch Dialect] Support aten.native_dropout

* update
2023-06-27 14:19:33 +08:00
Yuanqiang Liu 1ea2b57ab7
[Torch Dialect] add folder for aten.add (#2264)
* [Torch Dialect] add folder for aten.add

* update

* update

* update
2023-06-27 10:55:28 +08:00
Yuanqiang Liu 64afc08dab
[Torch Dialect] add missing one_hot dtype function (#2143)
* [Torch Dialect] add missing one_hot dtype function

* update

* update

* update
2023-06-23 16:11:33 +08:00
Ramiro Leal-Cavazos 6f2bf31291
Fix single-element tuple construction in abstract interp library (#2258)
Single element tuples in Python need a comma after the
element. However, the `registry.py` file, which generates the expected
abstract interpretation function signatures, was not inserting the
comma. This commit changes the expected signature generator to add a
comma after the last element in any non-empty default tuple argument.
2023-06-22 11:27:40 -07:00
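The Python pitfall being fixed, for reference:

    # Parentheses alone do not create a tuple in Python; the comma does.
    not_a_tuple = (1)   # just the int 1
    a_tuple = (1,)      # a single-element tuple

    assert not isinstance(not_a_tuple, tuple)
    assert isinstance(a_tuple, tuple)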
Yuanqiang Liu 96b14e952e
[Torch Dialect] Support aten.device.with_index (#2254) 2023-06-23 01:07:14 +08:00
Abhishek Varma a0d2789840 [MLIR][TORCH] Add e2e support for aten.alias
-- This commit adds e2e support for aten.alias op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Vivek Khandelwal f6a6cfea4e
[MLIR][TORCH] Add support for negative index values for index.Tensor op (#2233)
This commit adds support for the index.Tensor op when the index values
are negative. The index values are wrapped around by checking them at
run time.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-16 14:21:04 -05:00
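A sketch of the wrap-around rule described above (illustrative only, not the lowering itself):

    import torch

    x = torch.arange(5)           # tensor([0, 1, 2, 3, 4])
    idx = torch.tensor([-1, -5])  # negative values index from the end

    # Wrap index i on a dimension of size n: i if i >= 0 else i + n.
    wrapped = torch.where(idx < 0, idx + x.size(0), idx)

    assert torch.equal(x[idx], x[wrapped])   # both are tensor([4, 0])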
Vivek Khandelwal ab8b23e767 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.
This commit removes the test `BaddbmmDifferentDtypesModule_basic`
since PyTorch expects all operands to have the same dtype.
Ref: 2abad0c184

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-15 17:53:16 +05:30
Yuanqiang Liu bba0f5891b
[Stablehlo] add conversion for AtenFlipOp (#2163) 2023-06-15 10:27:34 +08:00
Yuanqiang Liu 7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda (#2231) 2023-06-14 09:56:39 +08:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Christopher McGirr b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion (#2200)
Check the return type of the division to figure out whether to use
the floating-point or the integer implementation.

The issue arose from the fact that the inputs are all integer but the
result was cast to floating point. The conversion then chose the
integer implementation of division, which is not legal in TOSA when
all the inputs get cast to floating point.

fix(TorchToLinalg): AtenDivScalarOp

Upcast the self operand as well if applicable; the self operand must
also be cast to float, as it can be an integer.
2023-06-12 11:18:38 +02:00
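The PyTorch behavior the TOSA fix above must match, for reference:

    import torch

    a = torch.tensor([7, 3], dtype=torch.int32)
    b = torch.tensor([2, 2], dtype=torch.int32)

    # Even with all-integer inputs, aten.div performs true division and
    # produces a floating-point result, so the lowering must cast the inputs
    # and take the floating-point division path.
    out = torch.div(a, b)
    assert out.dtype == torch.float32   # tensor([3.5000, 1.5000])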
Tiago Trevisan Jost cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. (#2204) 2023-06-12 11:17:25 +02:00
Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
Matthias Gehre 0959b502ae
Print name of the backend when tests fail to help debugging issues in CI (#2210)
* Print name of the backend when tests fail to help debugging issues in CI

* Extended test python/test/torchscript_e2e_test/compilation_failure.py
2023-06-09 10:47:07 +02:00
Yuanqiang Liu 5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point (#2194)
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point

* implement as fold

* add lit test
2023-06-07 17:05:31 +08:00
Matthias Gehre 816880774b
Fix version comparison against stable (#2209) 2023-06-07 10:19:38 +02:00
JianzheXiao e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp (#2170)
* add support for mhlo

* Add Test for torch.ne

* fix torch.ne shape/add static test case

* add support for static torch.ne

---------

Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
Yuanqiang Liu faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack (#2192) 2023-06-07 01:38:04 +08:00
Vivek Khandelwal da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op (#2202)
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ramiro Leal-Cavazos a46b5c6af2 Fix types + off-by-1 error, clamp `end` in slice+copy_ recomposition
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op being
used to replace the `copy_` op operates on the entire tensor being
mutated. This means that the result type of the `index_put` should be
the type of the input to `index_put` and we need to make sure that
`copy_` does not have users before replacing to avoid type conflicts.

This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.

Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
2023-06-01 11:14:53 -07:00
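The mutation pattern being recomposed, sketched in eager PyTorch (shapes illustrative):

    import torch

    dest = torch.zeros(5)
    src = torch.ones(2)

    # `copy_` through a slice mutates only the sliced view ...
    dest[1:3].copy_(src)

    # ... while the replacement `index_put` consumes and returns the whole
    # tensor, so its result type must be the type of `dest`, and an oversized
    # `end` (e.g. dest[1:100]) must first be clamped to the dimension size.
    indices = torch.arange(1, 3)
    whole = torch.zeros(5).index_put((indices,), src)

    assert torch.equal(dest, whole)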
Ramiro Leal-Cavazos 281dccc681 [LINALG] Add dynamic support for `PrimMinIntOp` 2023-06-01 11:14:53 -07:00
Zhekun Zhang 8af3e50662
[Torch Dialect] Add support for AtenScalarTensorOp (#2085)
* add scalar_tensor op

* add dynamo pass test; needs PR2062

* try to fix

* Empty commit, trigger test

* Empty commit, trigger test

* address comments

* use dtype function

* fix decompose rule

* remove unused include

* Empty commit, trigger test

* fix test

* disable ltc

* fix dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-06-01 11:38:50 +08:00
Yuanqiang Liu 72b8070e57
[Importer] import constant tuple (#2132)
* [Importer] import constant tuple

* update

* update

* update
2023-05-31 14:14:14 +08:00
Ramiro Leal-Cavazos 479b2175ef
Add `ReadOnly` trait to `copy.to_vtensor` (#2179)
Before inlining a global slot, the users of the global slot are
checked to see if they are `ReadOnly` or `MemoryEffectFree` to make
sure that the global slot is not being mutated. Because the op
`copy.to_vtensor` currently does not have the `ReadOnly` trait, if a
global slot is passed to `copy.to_vtensor`, the pass
`InlineGlobalSlots` will fail.

The op `copy.to_vtensor` is `ReadOnly`, since it does not modify the
contents of the input tensor; it simply makes a new copy. This commit
adds the trait as well as an e2e test that generates the case of a
global slot being passed to a `copy.to_vtensor`.
2023-05-30 21:40:36 +00:00
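Roughly the e2e shape of the failing case (the module below is an illustrative guess, not the exact test added by the commit):

    import torch

    class GlobalSlotModule(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # Imported as a global slot; it is only ever read, never mutated.
            self.register_buffer("t", torch.ones(3))

        def forward(self):
            # Reading the slot goes through copy.to_vtensor during import;
            # marking that op ReadOnly lets InlineGlobalSlots inline the slot.
            return self.t + 1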
maxbartel db3f2e3fde
Add Stable PyTorch CI Pipeline (#2038)
* feat: split pytorch requirements into stable and nightly

* fix: add true to tests to see full output

* refactor: add comments to explain true statement

* feat: move some tests to experimental mode

* refactor: refactor pipeline into more fine-grained stages

* feat: add version differentiation for some tests

* feat: activate more configs

* refactor: change implementation to use less requirement files

* refactor: remove contraints used for testing

* fix: revert some requirement file names

* refactor: remove unnecessary ninja install

* fix: fix version parsing

* refactor: remove dependency on torchvision in main requirements file

* refactor: remove index url

* style: remove unnecessary line switch

* fix: readd index url
2023-05-30 12:16:24 -07:00
Vivek Khandelwal 959f4f48d5 [MLIR][TORCH] Add support for the total_weight for aten.nll_loss_forward op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-30 20:29:27 +05:30
Gaurav Shukla 552887783a [TM_TENSOR] Add `aten.scatter.[src|value]` op
This commit adds support for `aten.scatter.src` and `aten.scatter.value`
ops.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-05-29 12:35:53 +05:30
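For reference, the two variants in eager PyTorch:

    import torch

    t = torch.zeros(2, 4)
    index = torch.tensor([[0, 1], [2, 3]])

    # aten.scatter.src: replacement values come from a tensor.
    src = torch.ones(2, 2)
    a = t.scatter(1, index, src)

    # aten.scatter.value: a single scalar is broadcast to all positions.
    b = t.scatter(1, index, 2.0)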
Zhekun Zhang 69e993b03f
[Torch Op] Add AtenChunkOp support (#2152)
* add chunkOp support

* update LTC xfail list

* address comments

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-26 10:05:19 +08:00
Zhekun Zhang a426363b7d
[Torch Dialect] Add split.tensor support + recompose rules (#2102)
* add split.tensor support + recompose rules

* add e2e test

* address comments

* address comments

* erase op in recomposeOp

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-23 12:43:33 -07:00
Prateek Gupta 938a489e74 [TORCH-MLIR] Add ODS for aten.sign op.
This commit adds ODS for the aten.sign op.

Signed-off-by: Prateek Gupta <prateek.gupta2@cerebras.net>
2023-05-23 11:06:42 +05:30
Zhekun Zhang aa97c8383e
[Torch Op] Add unbind.int support with ListUnpack (#2058)
* add unbind int

* reformat

* use unpack canonicalize

* address comments

* Empty commit, trigger test

* add ltc blacklist

* clean up

* address comments

* check permute list

* erase in recompose

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-18 19:07:58 -07:00
Vivek Khandelwal 5698893ae4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-18 21:30:11 +05:30
Yuanqiang Liu 6f7d9e83df
[Stablehlo] add e2e test for aten.batch_norm (#2129) 2023-05-17 09:04:40 -07:00
gpetters94 0302cf1d92
Add TMTensor::Attention and lower ScaledDotProductAttentionOp to it (#2027) 2023-05-16 15:17:45 -04:00
David Gens 17db2aafa3
add mse_loss_backward (#2111) 2023-05-12 14:29:13 -07:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
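Schematically, a dtype function computes the result dtype from each operand's rank and dtype; a simplified sketch (the real library's naming and type-promotion logic differ):

    from typing import Tuple

    # Hypothetical dtype function for a dtype-preserving elementwise op.
    def relu_dtype(self_rank_dtype: Tuple[int, int]) -> int:
        _rank, dtype = self_rank_dtype
        return dtype

    # Ops like aten.cumsum instead promote integer inputs; rules like that
    # are what the removed RefineTypes pass previously hard-coded in C++.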
Maksim Levental c3cd7471b4
Pure-Python FX importer. (#2098)
Co-authored-by: Sean Silva <silvasean@google.com>
2023-05-12 00:46:33 -05:00
Zhekun Zhang 1eb18dd8b5
Add AtenFillScalarOp Stablehlo support (#2108)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-11 16:41:46 -07:00
Prashant Kumar c47d3aab01 Fix failing torchdynamo test. 2023-05-11 21:29:07 +05:30
Prashant Kumar 8eb0c7e656 torch.complex to builtin complex types matching.
The right approach would be to create our own !torch.complex type
and use that during import, rather than have a pass that converts to
the MLIR complex types.
2023-05-11 21:29:07 +05:30
Ramiro Leal-Cavazos ab694dfbc1 Add complex dtype support on refbackend 2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via linalg_backend.
2023-05-11 21:29:07 +05:30
rahul shrivastava 86429d9656 Add e2e native_group_norm test-cases
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
rahul shrivastava 40a2c501a1 Add ODS for group_norm
- Add ODS for native_group_norm/backward.
- Add shape inference for native_group_norm/backward.

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
yifei410 86718cb203
[TOSA] lowering support for aten cat (#2039)
Add support for lowering torch.aten.cat to tosa.concat

* add support for aten cat to tosa

---------

Co-authored-by: yifei <y.zhou@xilinx.com>
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-05-10 08:25:58 -07:00
Zhekun Zhang fc62b8e9ab
[StableHlo] Fix AtenWhereSelfOp convert rule (#2093)
* fix whereself convert rule

* use int to test promotion

* add dynamo failing test

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-05 15:21:55 -07:00
Vivek Khandelwal 378860f51b [MLIR][TORCH] Add E2E support for aten.topk op
This commit adds the decomposition for the aten.topk op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
Zhekun Zhang 1eceb84899
add stablehlo support for pow.tensor_tensor (#2086)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-04 09:55:03 -07:00
Zhekun Zhang 0cf9ee340b
[Torch Dialect] Add to.dtype_layout canonicalize patterns (#2062)
* add to.dtype_layout canonicalize patterns

* update comment

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-02 20:06:02 -07:00
Yuanqiang Liu c596d11b98
[Torch Dialect] add canonicalize pattern for prim.device (#2066) 2023-05-02 20:05:46 -07:00
Maksim Levental c9fba95642
[Dynamo] turn on `no_python=True` for dynamo tests (#2040) 2023-04-28 18:05:17 -05:00
Ze Zhang 7b73e0cfaf
Add e2e linalg support for aten.atan (#2070)
* new atan op

* update shape

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
rahul shrivastava a58442b50d Add ODS for aten.pow.Scalar
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-27 22:09:45 +05:30
Vivek Khandelwal 491ae5eda4 [MLIR][TORCH] Add E2E support for aten.var_mean.dim op
This commit adds the decomposition for the aten.var_mean.dim op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
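What the decomposed op computes, for reference:

    import torch

    x = torch.randn(3, 4)
    # aten.var_mean.dim returns the variance and mean over the given dims.
    var, mean = torch.var_mean(x, dim=1)

    assert var.shape == (3,) and mean.shape == (3,)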
Ramiro Leal-Cavazos c8e062fb4e
Fix default value of `stride` in 2d pooling ops in linalg and tosa (#2065)
When the user does not specify the `stride` value in 2d pooling ops,
`stride` is given the value of an empty list. However, the current
lowerings for pooling ops assumed that the `stride` operand would
always be a list of two ints, leading to crashes when that was not the
case. This commit fixes the crashes by setting the value of `stride`
to `kernel_size` when `stride` is the empty list, since this is the
default `stride` value specified in PyTorch docs. See:
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
2023-04-27 08:31:36 -07:00
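The PyTorch default that the fix above restores, for reference:

    import torch

    x = torch.randn(1, 1, 4, 4)

    # When `stride` is omitted, PyTorch defaults it to `kernel_size`.
    a = torch.nn.functional.max_pool2d(x, kernel_size=2)
    b = torch.nn.functional.max_pool2d(x, kernel_size=2, stride=2)

    assert torch.equal(a, b)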