Commit Graph

661 Commits (67ab708b636d300db003fbec902411315e222bfb)

Author SHA1 Message Date
Maksim Levental 2eddb3fde7
WIP: No PyTorch dep (#1854) 2023-02-13 14:21:06 -06:00
Yuanqiang Liu 6ab990e1e8
[Torch Dialect] add folder for aten.Int.float (#1863) 2023-02-10 13:59:03 -08:00
Ziheng Jiang f1b8d5e581
[MHLO] Support AtenMaskedFillScalar (#1839)
* [MHLO] Support MaskedFillScalar.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-02-10 13:58:39 -08:00
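For context, `AtenMaskedFillScalar` corresponds to `torch.Tensor.masked_fill` with a scalar fill value; a minimal sketch of the op's semantics in plain PyTorch:

```python
import torch

x = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True],
                     [False, True, False]])
# masked_fill writes the scalar wherever the mask is True.
y = x.masked_fill(mask, 5.0)
# tensor([[5., 0., 5.],
#         [0., 5., 0.]])
```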
Yuanqiang Liu 2f6fdb7f0b
[Torch Dialect] add folder for prim.min.int (#1864) 2023-02-10 13:58:15 -08:00
Zachary Cetinic 2a4a61f98f
Add aten.scatter_reduce op definition (#1846) 2023-02-07 21:59:07 +00:00
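For context, `aten.scatter_reduce` mirrors `torch.Tensor.scatter_reduce`, which reduces `src` values into `self` at the positions given by `index`; a minimal sketch of the semantics:

```python
import torch

src = torch.tensor([1., 2., 3., 4., 5., 6.])
index = torch.tensor([0, 1, 0, 1, 2, 1])
# Sum all src values that target the same output index.
out = torch.zeros(3).scatter_reduce(0, index, src, reduce="sum")
# tensor([ 4., 12.,  5.])
```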
Vivek Khandelwal c957cebd03 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-05.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-06 13:23:28 +05:30
Zachary Cetinic 2c2009a13d
Add in-place variant of torch.scatter_add (#1836) 2023-02-03 17:54:28 +00:00
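The in-place variant follows PyTorch's usual trailing-underscore convention; a minimal sketch:

```python
import torch

x = torch.zeros(3)
# scatter_add_ mutates x in place, accumulating src values at the given indices.
x.scatter_add_(0, torch.tensor([0, 1, 1]), torch.tensor([1., 2., 3.]))
# x is now tensor([1., 5., 0.])
```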
Jiahao Li f58ba19448
Add aten.bucketize op and its decomposition (#1834) 2023-02-03 10:20:47 +08:00
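For context, `aten.bucketize` matches `torch.bucketize`, which returns the index of the bucket each value falls into, given sorted boundaries; a minimal sketch:

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
v = torch.tensor([3, 6, 9])
# With right=False (the default), values equal to a boundary go to its left bucket.
torch.bucketize(v, boundaries)
# tensor([1, 3, 4])
```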
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Vivek Khandelwal ed9d8d1fb7 [MLIR][TORCH] Add support for clone op with channels last memory format
Fixes https://github.com/llvm/torch-mlir/issues/1829

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-02 16:04:42 +05:30
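The case being added is `clone` with an explicit memory format; a minimal sketch of the behavior in plain PyTorch:

```python
import torch

x = torch.randn(1, 3, 8, 8)
# clone can change the memory layout, not just copy the data.
y = x.clone(memory_format=torch.channels_last)
assert y.is_contiguous(memory_format=torch.channels_last)
```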
Sean Silva 72fbf316b4 Update LLVM and MHLO submodules.
Week of 01/30/2023:

Green LLVM commit: e31ee6417c33a6e2f0e8440b1a86d5365279ad68
Green MHLO commit: c2a6f4064d426567b9ef7b0d29d5ab86dc7b2b02 (branch greencommit/2023-01-30-e31ee641)
2023-01-31 06:08:21 -08:00
Jiahao Li f5b689e12f
[MHLO] Support aten.cumsum op in mhlo backend (#1825) 2023-01-29 21:38:27 -08:00
Matthias Gehre adaf05f03e
[TorchToLinalg] Lower AtenRoundOp to math::RoundEvenOp (Fixes #1811) (#1823)
2023-01-25 08:51:29 +01:00
Gleb Kazantaev 3930588a7e
Enable VerifyBackendContract in LTC backend (#1798)
* Enable VerifyBackendContract in LTC backend

* Update VerifyBackendContract pass

* Move convert_scalar_implicit to jit_utils

* Rename VerifyBackendContract to VerifyBackendContractNoDecompositions

* Update verify-backend-contract-error.mlir test
2023-01-24 22:14:17 -05:00
Gleb Kazantaev aa3a88c8d9
Fix JIT schema matching for when ListType is used (#1826) 2023-01-23 21:43:18 -05:00
Vivek Khandelwal 23aa6903f7 [torchdynamo] Add default decomposition for ops in the dynamo backend
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-23 13:33:50 +05:30
Chi_Liu c5ac42a198
[TOSA] Add aten.view shape -1 support (#1815) 2023-01-20 11:56:26 -08:00
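For context, a `-1` in `aten.view`'s shape argument asks PyTorch to infer that dimension from the element count; a minimal sketch:

```python
import torch

x = torch.arange(12)
# The -1 dimension is inferred: 12 elements / 6 columns = 2 rows.
y = x.view(-1, 6)
assert y.shape == (2, 6)
```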
Chi_Liu 2587b3f583
[TOSA] Add aten.Index.Tensor support (#1771) 2023-01-19 21:19:00 -08:00
Vivek Khandelwal abf4f207cd [MLIR][TORCH] Add canonicalizer for aten.new_empty_strided op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-19 13:37:32 +05:30
Vivek Khandelwal f9d59eb500 [MLIR][TORCH] Add decomposition for aten.randn_like op
This commit decomposes aten.randn_like op into aten.randn.generator op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 12:09:27 +05:30
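The decomposition amounts to re-expressing `randn_like` as a fresh `randn` over the input's shape; a minimal Python sketch of the semantics (not the actual rewrite pattern):

```python
import torch

def randn_like_decomposed(x: torch.Tensor) -> torch.Tensor:
    # aten.randn_like(x) becomes aten.randn with x's shape, dtype, and
    # device (the .generator variant in the actual decomposition).
    return torch.randn(x.shape, dtype=x.dtype, device=x.device)

assert randn_like_decomposed(torch.empty(2, 3)).shape == (2, 3)
```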
Vivek Khandelwal 999fd9036b [torchdynamo] Add native_group_norm and split op to the decomp list
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 10:40:46 +05:30
Jiahao Li e2698433db
Fix empty tensor when selecting -1 (#1787) 2023-01-17 10:14:14 -08:00
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Gleb Kazantaev c8b867b876
Added support for aten::norm.ScalarOpt_dim (#1774)
* Added support for aten::norm.ScalarOpt_dim

* Disable NormalizeModule_basic for linalg
2023-01-10 13:08:25 -05:00
Jiahao Li 8dc5d985eb
Add e2e support for aten logical or/and/xor/not ops (#1761) 2023-01-03 18:11:25 -08:00
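For context, these ops match PyTorch's elementwise logical operators; a minimal sketch:

```python
import torch

a = torch.tensor([True, True, False])
b = torch.tensor([True, False, False])
torch.logical_and(a, b)  # tensor([ True, False, False])
torch.logical_or(a, b)   # tensor([ True,  True, False])
torch.logical_xor(a, b)  # tensor([False,  True, False])
torch.logical_not(a)     # tensor([False, False,  True])
```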
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue until
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used far less
in large models.
2023-01-03 14:18:26 -08:00
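A hedged sketch of what such a dtype function computes, written as ordinary Python (the name, helper logic, and signature here are illustrative; the real functions live in torch-mlir's Python-sourced abstract-interpretation library):

```python
import torch

def expm1_dtype(self_rank: int, self_dtype: torch.dtype) -> torch.dtype:
    # expm1 on an integer tensor promotes to the default float dtype;
    # floating-point and complex inputs keep their dtype.
    if not (self_dtype.is_floating_point or self_dtype.is_complex):
        return torch.float32
    return self_dtype

assert expm1_dtype(2, torch.int64) == torch.float32
assert expm1_dtype(2, torch.float16) == torch.float16
```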
Srirammaswamy a88e3766e8
Add E2E support for LeakyRelu and LeakyReluBackward ops (#1733)
Co-authored-by: srirammaswamy <srirammaswamy@gmail.com>
2023-01-03 08:30:16 -08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Shivam Gupta 2f45959f0d
Prelu lowering to linalg (#1712)
2022-12-28 08:51:33 +05:30
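For context, the elementwise definition a Linalg lowering implements is `prelu(x) = x` for `x >= 0` and `a * x` otherwise; a minimal sketch:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5)
a = torch.tensor([0.25])
# PReLU keeps positive values and scales negative ones by the learned slope a.
ref = torch.where(x >= 0, x, a * x)
assert torch.allclose(F.prelu(x, a), ref)
```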
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Ramiro Leal-Cavazos 3260a1ea6e
Allow passing traced `torch.nn.Module`s into `torch_mlir.compile` (#1743)
This commit adds support for passing to `torch_mlir.compile` the
result of running `torch.jit.trace` on a model by relaxing the
condition that checks if the model is already in JIT IR to allow any
`torch.jit.ScriptModule`.

Fixes https://github.com/llvm/torch-mlir/issues/1739
2022-12-22 08:39:55 -08:00
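A hedged usage sketch of the newly allowed path (the module and the chosen output type here are illustrative):

```python
import torch
import torch_mlir

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

example = torch.ones(2, 3)
# torch.jit.trace produces a ScriptModule, which torch_mlir.compile
# now accepts directly instead of insisting on scripting the model itself.
traced = torch.jit.trace(AddOne(), example)
module = torch_mlir.compile(traced, example, output_type="torch")
```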
Jiahao Li 60a139271d
Add aten.std.correction op and its decomposition (#1731) 2022-12-21 21:02:40 -08:00
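For context, the `correction` argument generalizes Bessel's correction, dividing by `N - correction` rather than `N - 1`; a minimal sketch (on a recent PyTorch that exposes the keyword):

```python
import torch

x = torch.randn(8)
correction = 2
# std.correction divides the squared deviations by (N - correction).
ref = torch.sqrt(((x - x.mean()) ** 2).sum() / (x.numel() - correction))
assert torch.allclose(torch.std(x, correction=correction), ref)
```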
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
Jae Hoon (Antonio) Kim 1d695239ff
Unrevert #1724 (#1737)
* Unrevert #1724

* Update pytorch requirements.txt
2022-12-20 11:17:21 -05:00
Abhishek Varma 66d7a412cb [RefineTypes] Fix knowledge dtype for `aten.embedding` op
-- The dtype of the result of `aten.embedding` should match that of
   the `weight` operand (operand[0]) instead of being hardcoded to f32.
-- This commit fixes that.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-20 19:56:12 +05:30
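The fixed behavior is easy to check from Python; a minimal sketch:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 4, dtype=torch.float16)
indices = torch.tensor([1, 2, 7])
# The result dtype follows the weight operand rather than a hardcoded f32.
assert F.embedding(indices, weight).dtype == torch.float16
```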
Ashay Rane dd1cf578a6
build: fix LTC code after upstream PyTorch change (#1727)
pytorch/pytorch@140a3139 reverted a change from yesterday, causing the
RollPyTorch action to break.  This patch reverts the corresponding
change in the torch-mlir LTC code.

This patch also re-enables tests that were previously marked as XFAIL.
2022-12-16 13:07:38 -06:00
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add Complex number dtype support to torch tensors. Add
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
Jae Hoon (Antonio) Kim a2a93891ea
Replace asIntArrayRefSlow with macro (#1724)
* Replace asIntArrayRefSlow with macro

* Update pytorch requirements.txt
2022-12-15 11:52:41 -05:00
Prashant Kumar 8ba77ae2a5 Yapf Format `refbackend.py`. 2022-12-15 21:19:52 +05:30
Prashant Kumar 564403e3a1 Add float16 support in the refbackend.
This requires the patch https://reviews.llvm.org/D139121 to land.
2022-12-15 21:19:52 +05:30
Sean Silva af9e8a5e63 [torchdynamo] Move to aot_autograd instead of raw make_fx
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759

This also incidentally fixes the source location handling, which makes
`lockstep_basic.py` give an accurate source location!
2022-12-15 01:55:50 -08:00
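A hedged sketch of the pattern described above, using the import path as it existed on PyTorch nightlies of that era (it has since moved; `my_compiler` is a placeholder backend):

```python
import torch
import torch._dynamo
# Import path referenced in the commit; later PyTorch releases relocate it.
from torch._dynamo.optimizations.training import aot_autograd

def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    # A real backend would lower gm (e.g., through torch-mlir) here.
    return gm.forward

# aot_autograd wraps the compiler so TorchDynamo also provides the backward
# graph and functionalization, instead of calling make_fx directly.
backend = aot_autograd(fw_compiler=my_compiler)
opt_fn = torch._dynamo.optimize(backend)(lambda x: x.cos())
```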
Ahmed S. Taei b1f6832849
Add aten.slice.Tensor & aten.cat folders (#1691) 2022-12-13 13:02:47 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that are imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Ashay Rane 430737b820
[cleanup] fix naming of private variable according to the style guide (#1704) 2022-12-12 09:04:46 -06:00
Vivek Khandelwal d4862ec611 [MLIR][TORCH] Add e2e support for aten.var_mean op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-12-12 15:46:54 +05:30
Vivek Khandelwal f783e19dcb Revert "[MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs"
This reverts commit 55c7e66aa7.
2022-12-09 19:30:46 +05:30
Sean Silva 7731211d02 Remove eager_mode
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.

Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
2022-12-09 03:50:00 -08:00