Commit Graph

2848 Commits (0b7cbf5e601cb9b2b646df7ab19957ba4293d6c7)

Author SHA1 Message Date
Stella Laurenzo fcb3b718a5 Properly guard clang-specific pragma.
Avoids unsupported pragma warning on GCC.
2023-09-06 00:43:50 -07:00
Stella Laurenzo 29fdc3833c Fix GCC warning recommending parens.
Found with a more strict set of warning flags on GCC 9.
2023-09-06 00:23:23 -07:00
Jerin Philip 9cb5d38cd1
[MLIR][TORCH] Add E2E `torch.aten.prod_dim_int` (#2423)
Uses the existing reduction codepath, adding the modifications and branches
required for prod.
2023-09-05 13:38:51 -07:00
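
For reference, the behavior exercised end-to-end by the commit above, in plain PyTorch (values are illustrative; torch.prod with a dim argument maps to the aten.prod.dim_int overload):

    import torch

    x = torch.arange(1.0, 7.0).reshape(2, 3)   # [[1., 2., 3.], [4., 5., 6.]]
    assert torch.equal(torch.prod(x, dim=1), torch.tensor([6.0, 120.0]))
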
Jiawei Wu c93c6970e8
[stablehlo] add dtype conversion when converting AtenScalarImplicitOp (#2439) 2023-09-06 01:57:15 +08:00
Vivek Khandelwal 3841fe3035 [MLIR][TORCH] Add StableHLO lowering for embedding_bag.padding_idx op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-05 21:32:23 +05:30
Jiawei Wu d62045f64d
emit aten.max.other op (#2436) 2023-09-05 10:52:32 +08:00
Matthias Gehre a3ac4513e4
build_tools/python_deploy/build_linux_packages.sh: Disable dynamo testing for stable pytorch (#2426) 2023-09-04 10:02:07 +02:00
Jiawei Wu 30510f8cf7
[stablehlo] add AtenScalarImplicitOp's reverter to stablehlo backend (#2434)
* add ScalarImplicitOp's reverter to stablehlo backend

* add new passed test case for stablehlo backend
2023-09-04 14:04:09 +08:00
Yuanqiang Liu e9ab8ceb1c
[Torch Dialect] support aten.split_with_sizes (#2431)
* [Torch Dialect] support aten.split_with_sizes

* update
2023-09-04 09:59:26 +08:00
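
A small PyTorch-level illustration of the op supported in the commit above (section sizes are illustrative); passing a list of sizes to torch.split dispatches to aten.split_with_sizes:

    import torch

    x = torch.arange(10)
    parts = torch.split(x, [2, 3, 5])   # list of sizes -> aten.split_with_sizes
    assert [p.numel() for p in parts] == [2, 3, 5]
    assert torch.equal(parts[1], torch.tensor([2, 3, 4]))
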
Bruce Kim cd1c7df8be
[MLIR][TORCH] Add E2E support for view_as_real op (#2419)
* view_as_real test case, allow dtype in testutils.randn

* abstract python upstream func implemented

* fixed upstream dtype func, implemented view_as_real backend op

* formatted AtenViewAsRealOp, removed change in e2etest/framework

* removed test suit from reshape_like.py, because it's moved to basic.py

* implemented C-API wrapper for mlirComplexF128 type

* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict

* Changed IR input of aten fft_fft unit test

* code refactored

* code refactored and fixed ci test

* refactored: removed white spaces, and rolled back to having both input/output affine expr

* refactored: deleted output affine expr to reduce redundancy

* xfail ltc backend

* removed ComplexImag and ComplexReal from torchdynamo xfail set

* copied and pasted from main branch as there's no change to be made in this file

* refactored abstract_interp_lib_gen.py

* refactored: torchtypes.td, formatted, removed commented out code
2023-09-01 21:12:01 -07:00
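
For context on the commit above, the upstream semantics of view_as_real: a complex tensor is viewed as a real tensor with a trailing dimension of size 2 holding (real, imag) pairs (values below are illustrative):

    import torch

    z = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j], dtype=torch.complex64)
    r = torch.view_as_real(z)
    # complex64 -> float32, shape (2,) -> (2, 2)
    assert r.dtype == torch.float32 and r.shape == (2, 2)
    assert torch.equal(r, torch.tensor([[1.0, 2.0], [3.0, -4.0]]))
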
Quinn Dawkins 1fc4314b62
Add folder for aten.broadcast_to on unchanged static shapes (#2421) 2023-09-01 14:50:34 -04:00
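
The case targeted by the folder above, seen at the PyTorch level: broadcasting a tensor to its own static shape is an identity, so the op can be folded away (shape is illustrative):

    import torch

    x = torch.randn(2, 3)
    assert torch.equal(torch.broadcast_to(x, (2, 3)), x)   # same static shape: a no-op
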
Arham Khan 34a0897e1b
[MLIR][TORCH] add E2E support for aten.rand (#2424)
* impl decomposition for aten.rand

* remove stablehlo conversion for aten.rand
2023-09-01 13:13:58 -05:00
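
A sketch of one way the decomposition mentioned above can be expressed at the PyTorch level (an illustrative equivalence, not the actual pass): aten.rand samples uniformly on [0, 1), which can be built from an empty tensor plus an in-place uniform fill.

    import torch

    def rand_via_uniform(size, dtype=None, device=None):
        # Illustrative decomposition: rand == empty(...).uniform_(0, 1).
        return torch.empty(size, dtype=dtype, device=device).uniform_(0.0, 1.0)

    x = rand_via_uniform((2, 3))
    assert x.shape == (2, 3)
    assert ((x >= 0.0) & (x <= 1.0)).all()
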
Vivek Khandelwal 729386c9d8 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-09-01.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-09-01 22:07:51 +05:30
Roll PyTorch Action f83ee83604 update PyTorch version to 2.1.0.dev20230831
- torch version: 2.1.0.dev20230831
 - torch commit hash: b5b99fe13b890232bb61155a46239922661f4695
 - torchvision version: 0.16.0.dev20230831
2023-09-01 10:55:49 +05:30
Vivek Khandelwal 5c43daa3bf [MLIR][TORCH] Add e2e support for aten.pow.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 21:43:24 +05:30
Vivek Khandelwal aa15f0d4ca build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-08-30.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-08-31 16:23:34 +05:30
Gleb Kazantaev 6b02e9a926
[LTC] Support Tensor[]? operand types using partial codegen (#2410)
* Support Tensor[]? operand types using partial codegen

* aten.index.Tensor support via partial codegen

* Add torch.index_put tracing support

* Added optional tensor list type support for LTC/TorchMLIR lowering

* Added comments

Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
2023-08-30 06:29:39 -04:00
JianzheXiao 17d02811d5
[Torch Dialect] add folder for aten.any.bool (#2388)
* update

* update

* update

* update

* update

* update

* update
2023-08-30 17:29:03 +08:00
jinchen62 1682b540bf
Prototype passes for lowering quantized group matmul (#2402)
* Support brevitas custom op (#2320)

* f16 change for brevitas

* Adapt the change of brevitas quant custom op name

* Add unit tests

* Make brevitas conversions isolated

* Address the comments

---------

Co-authored-by: dan <danimal197@gmail.com>
2023-08-29 21:25:45 -07:00
Arham Khan c42d2beb6e
[MLIR][TORCH] add E2E support for aten.min op (#2422)
* impl aten.min op

* remove extraneous test
2023-08-29 12:12:41 -05:00
Zhekun(Josh) Zhang 5282324c68
[Importer] fix has value semantic return type (#2404)
* fix value semantic return

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-08-29 10:14:09 +08:00
David Gens ca34b9c4fc
add max_pool3d (#2386) 2023-08-28 19:01:55 -04:00
Arham Khan bc6bba9077 add nondefault test case, add to illegal ops in backend contract 2023-08-28 10:52:16 +05:30
Arham Khan 8855fa3ace amend dtype function 2023-08-28 10:52:16 +05:30
Arham Khan a80bc42521 dtype test case 2023-08-28 10:52:16 +05:30
Arham Khan 5138148f5c update passing test sets 2023-08-28 10:52:16 +05:30
Arham Khan 610d836fd2 impl aten.elu as decomposition 2023-08-28 10:52:16 +05:30
Arham Khan 12eadccc07 add e2e support for aten.elu 2023-08-28 10:52:16 +05:30
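
A sketch of the standard elu decomposition referenced in the commits above (default alpha only; an illustration, not the actual rewrite pattern):

    import torch

    def elu_decomposed(x, alpha=1.0):
        # elu(x) = x for x > 0, alpha * (exp(x) - 1) otherwise.
        return torch.where(x > 0, x, alpha * (torch.exp(x) - 1))

    x = torch.tensor([-2.0, 0.0, 3.0])
    assert torch.allclose(elu_decomposed(x), torch.nn.functional.elu(x))
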
Jiawei Wu 4339c00f1b
[Torch Dialect][stablehlo] emit aten.rand op and add converter to stablehlo (#2413)
* [Torch Dialect] emit aten.rand op and add converter to stablehlo

* add failed tests for torchdynamo backend

* add failed test for linalg backend
2023-08-27 21:56:36 +08:00
Ashay Rane 8f28d933e1
CI: disable LTC e2e tests in stable PyTorch builds (#2414)
This way, we can keep CI green without being forced to ignore _all_
errors that arise in stable PyTorch builds.
2023-08-23 11:11:17 -05:00
Jiawei Wu b552d4ed95
[Torch Dialect] Fix small bugs in decompose-complex-ops pass, e.g. missing return statement (#2409) 2023-08-22 09:56:11 +08:00
Jiawei Wu 4c9d234b01
revert canonicalizer for PrimListConstructOp (#2408) 2023-08-22 09:18:39 +08:00
Gleb Kazantaev 3dd29f9d5d
Update Torch ODS list with new ops (#2361)
* [LTC] Add shape_inference_(add|uniform)

* Add torch.multinomial op.

* Update ods gen; add normal_functional and erfinv ops support

* New TorchMLIR ops: clamp_min.Tensor, clamp_max.Tensor, xlogy, binary_cross_entropy, log_sigmoid_forward, sigmoid_backward, cosine_embedding_loss, scatter.reduce

* Improve the shape inference logic of whereOp

- Infer the result tensor according to the broadcasting semantics

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Added aten::sgn

* Add shape inference logic for hardtanh_backward op

* Added new Torch-MLIR ops

Co-authored-by: GlebKazantaev <gleb.nnstu@gmail.com>

* Add support for elu lowering

* Add support for elu_backward lowering

* Support fmod, remainder, and floor_divide

Emit generated op defs for the remainder.Tensor and fmod.Tensor

Add shape inference implementations for remainder.Scalar, fmod.Scalar
and floor_divide.Tensor

* Add shape inference logic for im2col

- pytorch.nn.unfold gets decomposed into im2col

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>

* Add aten::eye and aten::eye.m support

* Add tracing for linalg_qr

* Update GeneratedTorchOps.td

* Update xfails

* Fix unbound variable issue in torch_ods_gen

---------

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: zihaoc-cerebras <zihao.chen@cerebras.net>
Co-authored-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
Co-authored-by: Gokul Ramakrishnan <gokul.ramakrishnan@cerebras.net>
Co-authored-by: glebk-cerebras <111300564+glebk-cerebras@users.noreply.github.com>
Co-authored-by: Behzad Abghari <behzad.abghari@gmail.com>
Co-authored-by: Ahmed Elkoushy <ahmed.elkoushy@cerebras.net>
2023-08-21 06:36:39 -04:00
Yuanqiang Liu b636e0c40c
[Stablehlo Dialect] fix lowering batch_norm with mixed types (#2383)
* [Stablehlo Dialect] fix lowering bn inference with mixed types

* update
2023-08-21 17:36:56 +08:00
Stella Laurenzo 8ffe5d17da Add Sean Silva to code owners as emeritus.
Per request from #2403.
2023-08-20 18:06:07 -07:00
Gleb Kazantaev 5743b6d4ac
LTC multi-output operations support (#2362)
* LTC/TorchMLIR multi-output operations support

* Update torch-mlir jit lowering to support ops with dynamic number of outputs

* Added support for aten::split_copy, aten::split_with_sizes_copy

* Fix native function for aten::split; cleanup code

* Fix TorchMlirTensorList lowering

* Remove xfails
2023-08-20 16:32:11 -04:00
Sean Silva aa007da5ac
update PyTorch version to 2.1.0.dev20230820 (#2406)
- torch version: 2.1.0.dev20230820
 - torch commit hash: 4ce227bfb953d1f64c4d86cc913144ee2a210e57
 - torchvision version: 0.16.0.dev20230820

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-20 07:59:45 -07:00
Sean Silva 822353fa7f
update PyTorch version to 2.1.0.dev20230819 (#2405)
- torch version: 2.1.0.dev20230819
 - torch commit hash: 668af075012c0857053a7cdf7ca764bb3569c6f1
 - torchvision version: 0.16.0.dev20230819

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-19 08:00:37 -07:00
Stella Laurenzo 6648ad91e7
Per request, swap Sean Silva for Stella Laurenzo in code owners. (#2403)
Sean has decided to move on to other ventures and has requested that I help him disengage by resuming top level accountability for the project.
2023-08-18 12:52:00 -07:00
Simon Camphausen d77b9cf7ae
[TOSA] Fix conversion for depthwise convolutions (#2398)
* [TOSA] Fix conversion for depthwise convolutions

* Add e2e tests for depthwise and grouped convolutions

Co-authored-by: Lucas Camphausen <lucas.camphausen@iml.fraunhofer.de>
2023-08-18 08:15:54 -07:00
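
For context on the fix above: in PyTorch terms, a depthwise convolution is a grouped convolution whose group count equals the number of input channels (channel counts in this sketch are illustrative):

    import torch
    import torch.nn as nn

    x = torch.randn(1, 8, 16, 16)
    # Depthwise: one filter per input channel (groups == in_channels).
    depthwise = nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=8)
    # Grouped: channels split into 4 groups of 2.
    grouped = nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=4)
    assert depthwise(x).shape == grouped(x).shape == (1, 8, 16, 16)
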
Sean Silva 594a1fa471
update PyTorch version to 2.1.0.dev20230817 (#2401)
- torch version: 2.1.0.dev20230817
 - torch commit hash: 3522f2a7b7f73e928a8366cb7bd62ab3883dbe75
 - torchvision version: 0.16.0.dev20230817

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-17 07:59:32 -07:00
Sean Silva efe69eb5e3
update PyTorch version to 2.1.0.dev20230816 (#2400)
- torch version: 2.1.0.dev20230816
 - torch commit hash: 3af011b858f5e5c40fd8e9d41fa7f31a928b3b47
 - torchvision version: 0.16.0.dev20230816

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-16 08:12:05 -07:00
Ramiro Leal-Cavazos 41bafe13cc
[build] Update llvm tag to a3f2751f (#2397)
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:

llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009

Changes made:

- Rename `getSuccessorEntryOperands` with `getEntrySuccessorOperands`
and remove `operands` from
`getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` a `const` (https://reviews.llvm.org/D157601)
2023-08-15 09:53:28 -07:00
Sean Silva 94f7593c9b
update PyTorch version to 2.1.0.dev20230815 (#2399)
- torch version: 2.1.0.dev20230815
 - torch commit hash: e4d5143f8c73014521f44c3e9b46c642a300dd2f
 - torchvision version: 0.16.0.dev20230815

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-15 07:39:04 -07:00
Jiawei Wu 60bad54f27
[Torch Dialect] replace none-index in aten.Index.Tensor's param by manually generating it (#2344)
* [Torch Dialect] replace none-index in aten.Index.Tensor's  param by manually generating it
Co-authored-by: Jiawei Wu <wujiawei.aml@bytedance.com>
Co-authored-by: Jianzhe Xiao <jianzhe.xiao@bytedance.com>

* minor typo fix

* add new failed e2e tests for ltc

* fix typo

* Address comments

* Add more e2e tests

* add failed e2e tests for LTC

* address comments

* remove decomposition for AtenIndexTensorHackedTwinOp
2023-08-15 19:36:08 +08:00
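
A PyTorch-level sketch of the rewrite named in the commit above (shapes and indices are illustrative): a None entry in aten.index.Tensor's index list can be replaced by an explicitly generated index tensor for that dimension.

    import torch

    x = torch.randn(3, 4)
    idx = torch.tensor([0, 2])

    a = x[:, idx]   # roughly aten.index.Tensor(x, [None, idx]): dim 0 left untouched

    # Same result with the None replaced by generated row indices that broadcast
    # against idx.
    rows = torch.arange(x.shape[0]).unsqueeze(-1)   # shape (3, 1)
    b = x[rows, idx]

    assert torch.equal(a, b)
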
Sean Silva e0f0c5d6ea
update PyTorch version to 2.1.0.dev20230814 (#2396)
- torch version: 2.1.0.dev20230814
 - torch commit hash: 53551b5c87ca582d71d4bbaf82050d05c3c2f534
 - torchvision version: 0.16.0.dev20230814

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-14 07:46:27 -07:00
Sean Silva 98e75af2a8
update PyTorch version to 2.1.0.dev20230813 (#2394)
- torch version: 2.1.0.dev20230813
 - torch commit hash: 3748ee4a8c4032dac08bd2de0ebf039ad22e0d1e
 - torchvision version: 0.16.0.dev20230813

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-13 08:03:59 -07:00
Sean Silva 17f1300dde
update PyTorch version to 2.1.0.dev20230812 (#2393)
- torch version: 2.1.0.dev20230812
 - torch commit hash: c9397a7bc833cdfdf64aa023631ae5e1c7e9cee4
 - torchvision version: 0.16.0.dev20230812

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-12 07:57:42 -07:00
Ramiro Leal-Cavazos ff762100b8
Add handling of namespaces to library generator (#2391)
When using custom ops, PyTorch will sometimes insert namespaces into the
abstract interpretation function name, in the format
`__torch__.{namespace_1}.{namespace_2}...{op_name}`. The extra
namespaces are not part of the abstract interpretation function name,
so they need to be removed before generating the library of MLIR
snippets of abstract interpretation functions. This commit adds
support for removing the namespace information.
2023-08-11 09:56:19 -07:00
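
A minimal sketch of the prefix handling described above (the helper name is illustrative and assumes the op name itself contains no dots; this is not the actual library-generator code):

    def strip_torch_namespaces(qualified_name: str) -> str:
        # Names arrive as "__torch__.{namespace_1}.{namespace_2}...{op_name}";
        # only the trailing op_name identifies the abstract interpretation function.
        if qualified_name.startswith("__torch__."):
            return qualified_name.rsplit(".", 1)[-1]
        return qualified_name

    assert strip_torch_namespaces("__torch__.pkg.module.my_shape_fn") == "my_shape_fn"
    assert strip_torch_namespaces("my_shape_fn") == "my_shape_fn"
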
Sean Silva 23d7821afa
update PyTorch version to 2.1.0.dev20230811 (#2392)
- torch version: 2.1.0.dev20230811
 - torch commit hash: 422297f87fc25191bb392486c4bb8d25c4785d15
 - torchvision version: 0.16.0.dev20230811

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-08-11 09:45:18 -07:00