Commit Graph

394 Commits (28bb8662601b472b8d942368554ee0b34c22401c)

Author SHA1 Message Date
Prashant Kumar 8eb0c7e656 Match torch.complex to builtin complex types.
The right approach would be to create our own !torch.complex type
and use it during import, rather than having a pass that converts to the MLIR
complex types.
2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via the linalg backend.
2023-05-11 21:29:07 +05:30
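
For reference, a minimal PyTorch illustration of the two ops this commit lowers (the op semantics, not the lowering itself):

```python
import torch

# aten.real / aten.imag extract the component parts of a complex tensor.
t = torch.tensor([1 + 2j, 3 - 4j], dtype=torch.complex64)
print(torch.real(t))  # tensor([1., 3.])
print(torch.imag(t))  # tensor([ 2., -4.])
```
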
Yuanqiang Liu 9f1ed4b2ba
[Torch Dialect] typo fix for RefineTypes (#2087) 2023-05-05 15:22:14 -07:00
Vivek Khandelwal 378860f51b [MLIR][TORCH] Add E2E support for aten.topk op
This commit adds the decomposition for the aten.topk op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
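
A minimal sketch of the aten.topk semantics the decomposition must reproduce (values and indices of the k largest elements):

```python
import torch

values, indices = torch.topk(torch.tensor([1.0, 4.0, 2.0, 3.0]), k=2)
# values  -> tensor([4., 3.]), indices -> tensor([1, 3])
```
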
Ze Zhang 7b73e0cfaf
Add e2e linalg support for aten.atan (#2070)
* new atan op

* update shape

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
Vivek Khandelwal 491ae5eda4 [MLIR][TORCH] Add E2E support for aten.var_mean.dim op
This commit adds the decomposition for the aten.var_mean.dim op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
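
A sketch of the behavior being decomposed; the reference computation below assumes the default unbiased (correction=1) variance, since the commit does not spell out the exact expansion:

```python
import torch

x = torch.randn(3, 4)
var, mean = torch.var_mean(x, dim=1)

# Reference expansion, assuming the default unbiased variance:
mean_ref = x.mean(dim=1)
var_ref = ((x - mean_ref.unsqueeze(1)) ** 2).sum(dim=1) / (x.size(1) - 1)
assert torch.allclose(var, var_ref) and torch.allclose(mean, mean_ref)
```
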
Ramiro Leal-Cavazos f85f5799e4
Fix creation of empty tensor in decomposition for randn ops (#2043)
The current decomposition for `aten.randn.generator` does not specify
the `dtype` argument of the empty tensors created to store the random
values. This leads to invalid IR when the output type of the `randn`
op is not the default PyTorch dtype.
2023-04-19 08:25:39 -07:00
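
A Python analogue of the bug: the intermediate buffer must be created with the requested dtype so the decomposition stays type-correct when the output is not the default dtype.

```python
import torch

# The buffer holding the random values must carry the requested dtype;
# omitting it here would silently produce the default dtype instead.
out = torch.empty(4, dtype=torch.float64)
out.normal_()  # fill with samples from N(0, 1)
assert out.dtype == torch.float64
```
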
Vivek Khandelwal ed56e614b7 [MLIR][TORCH] Add E2E support for cross entropy lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-18 08:00:20 +05:30
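
The commit message does not detail the lowering; a common decomposition of cross entropy (not necessarily the exact one used here) is log_softmax followed by nll_loss, shown as a PyTorch sketch:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5)
target = torch.tensor([1, 3])
# Reference expansion: cross_entropy == nll_loss(log_softmax(...)).
ref = F.nll_loss(F.log_softmax(logits, dim=1), target)
assert torch.allclose(F.cross_entropy(logits, target), ref)
```
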
Roll PyTorch Action 811f330283 update PyTorch version to 2.1.0.dev20230414 2023-04-14 17:10:36 +00:00
Abhishek Varma 318fe13468 [MLIR][TORCH] Patch up Ops and their lowerings to deal with positive `dim`
-- In Python we have the concept of negative dimension indexing.
-- We want to normalize such dimensions to be positive and within the
   expected range instead.
-- This commit takes care of a few remaining set of Ops and their
   lowerings by applying `toPositiveDim` and `isValidDim` to the
   extracted integer `dim` value.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
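
A Python sketch of the normalization this commit applies; the names mirror the Torch-MLIR helpers `toPositiveDim` and `isValidDim`, but the actual implementation is C++:

```python
def to_positive_dim(dim: int, rank: int) -> int:
    # Negative dims index from the end, as in Python sequences.
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    return 0 <= dim < rank

assert to_positive_dim(-1, 4) == 3 and is_valid_dim(3, 4)
```
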
Abhishek Varma a13d301356 [MLIR][TORCH] Add e2e support for aten.sort op
-- This commit adds e2e support for the aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-13 12:59:43 +05:30
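
For reference, the aten.sort semantics being routed through tm_tensor.sort:

```python
import torch

values, indices = torch.sort(torch.tensor([3.0, 1.0, 2.0]))
# values -> tensor([1., 2., 3.]), indices -> tensor([1, 2, 0])
```
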
Yuanqiang Liu 72c3326097
[Torch Dialect] support for aten.one_hot (#1852) 2023-04-11 01:02:28 -07:00
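
A minimal illustration of the aten.one_hot semantics added here:

```python
import torch
import torch.nn.functional as F

F.one_hot(torch.tensor([0, 2]), num_classes=3)
# tensor([[1, 0, 0],
#         [0, 0, 1]])
```
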
Vivek Khandelwal 98747d09a8 [MLIR][TORCH] Add support for prims::view_of op
This op does nothing and simply returns its input operand as the
result.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-11 07:58:10 +05:30
Chi_Liu 4df1d8ae2f
[MLIR] Fold aten select and fill_ pattern (#2000) 2023-04-06 21:16:51 -07:00
Abhishek Varma 5337944ddb [MLIR][TORCH] Add e2e support for aten.randint
-- This commit adds e2e support for aten.randint by decomposing it into
   an aten.randint.low op with low set to 0.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-07 00:13:56 +05:30
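
The decomposition in PyTorch terms: the one-argument form is the two-argument form with low fixed to 0.

```python
import torch

torch.manual_seed(0)
a = torch.randint(10, (3,))      # aten.randint, implicit low
torch.manual_seed(0)
b = torch.randint(0, 10, (3,))   # aten.randint.low with low=0
assert torch.equal(a, b)
```
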
Vivek Khandelwal e90ea3d7ab [MLIR][TORCH] Extend implementation of aten._index_put_impl op.
This commit adds support for the following index_put op cases:
1.) where the index is a 2-d tensor.
2.) where the indices are a list of tensors and None values, with exactly
two non-None tensors along consecutive dimensions.

This commit also adds a utility to compute the broadcast shape
given the two input tensors.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-05 14:04:30 +05:30
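
One of the newly supported patterns, expressed through PyTorch advanced indexing (which routes through aten._index_put_impl_ with a None entry in the indices list):

```python
import torch

x = torch.zeros(2, 3, 4)
# Two non-None index tensors on consecutive dims; the leading slice
# becomes a None entry in the op's indices list.
x[:, torch.tensor([0, 1]), torch.tensor([1, 2])] = 1.0
```
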
Vivek Khandelwal 788efc3180 [MLIR][TORCH] Add support for non-unit stride for conv backward
This commit also adds the support for non-unit output padding in the
case of transposed convolution.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
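
A runnable example of the newly supported configuration, shown via the PyTorch functional API: transposed convolution with non-unit stride and non-zero output padding.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 5)   # (batch, in_channels, length)
w = torch.randn(2, 3, 3)   # (in_channels, out_channels, kernel)
y = F.conv_transpose1d(x, w, stride=2, output_padding=1)
# output length = (5 - 1) * 2 + 3 + 1 = 12
assert y.shape == (1, 3, 12)
```
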
Vivek Khandelwal 5e9582b055 [MLIR][TORCH] Add e2e support for the aten.movedim.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
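
For reference, aten.movedim.int moves a single dimension to a new position:

```python
import torch

x = torch.randn(2, 3, 4)
assert torch.movedim(x, 1, -1).shape == (2, 4, 3)
```
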
Vivek Khandelwal 82fb9c7fb8 [MLIR][TORCH] Add decomposition for prims::squeeze op
This commit adds the decomposition for the prims.squeeze op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-01 21:45:58 +05:30
Ramiro Leal-Cavazos 42d780dde0
Remove convolution_overrideable, convolution_backward_overrideable (#1984)
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
2023-03-29 15:05:56 -07:00
Ramiro Leal-Cavazos 0103c55e55
Add `RecomposeComplexOps` declaration + fix typos in pass name (#1950)
The `RecomposeComplexOps` pass currently does not have a TableGen
declaration and it is using the base class of `DecomposeComplexOps`,
which causes `--mlir-print-ir-after-all` to create wrong pass
labels. This commit fixes that as well as some minor typos in the name
of the pass.
2023-03-28 11:07:47 -07:00
Ramiro Leal-Cavazos d803ab4eeb
Cast `number` to `float` when shape function takes Scalar arg (#1978)
To keep things simple in shape functions, `Scalar` inputs are
considered `float`s. This means that when inserting the shape
functions into the IR, we must cast any `!torch.number`s into `float`s
so that the operand type matches the expected type in the shape
function. This commit adds the cast from `Scalar` to `float`.
2023-03-28 09:30:31 -07:00
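
A hypothetical shape-function signature illustrating the convention (the name and signature below are invented for illustration): the `Scalar` operand is typed `float`, so a `!torch.number` operand must be cast before the call.

```python
from typing import List

def add_scalar_shape(self_shape: List[int], other: float) -> List[int]:
    # The scalar operand does not affect the result shape.
    return self_shape

assert add_scalar_shape([2, 3], 1.0) == [2, 3]  # an int Scalar arrives cast to float
```
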
Maksim Levental 953ea39cb5
handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
Ramiro Leal-Cavazos a7449785ec
Use upstream shape functions when available (#1952)
There are several ops that have their shape function upstream and had
not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
2023-03-24 09:13:43 -07:00
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines a dtype function for
the convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
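
A sketch of the new calling convention under the stated design; the signature below is illustrative, not the exact Torch-MLIR definition. Each tensor operand becomes a `(rank, dtype)` tuple, so a None optional tensor simply stays None.

```python
from typing import Optional, Tuple

def conv_dtype(input_rank_dtype: Tuple[int, int],
               weight_rank_dtype: Tuple[int, int],
               bias_rank_dtype: Optional[Tuple[int, int]]) -> int:
    # One op operand <-> one dtype-function operand, so None is unambiguous.
    rank, dtype = input_rank_dtype
    return dtype  # illustrative: result dtype follows the input

FLOAT32 = 6  # PyTorch ScalarType code for float32
conv_dtype((4, FLOAT32), (4, FLOAT32), None)
```
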
Sean Silva c319a20828 Update to LLVM 029313cc979ae71877b65794b1063d4e51184cc8
- mergeBlockBefore -> inlineBlockBefore
- move tosa-to-tensor pass ordering

https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1476217922
2023-03-21 04:16:20 -07:00
Matthias Gehre aa5bcb3cf2
LowerToBackendContract: Explicitly error out on unimplemented operator (#1947)
* LowerToBackendContract: Explicitly error out on unimplemented operator

But only reject torch.operator when results are invalid.
Otherwise it might be a custom op that the backend supports.
2023-03-20 16:27:08 +01:00
Ramiro Leal-Cavazos 18035021f0
RefineTypes aten.sum: check dtype exists before accessing methods (#1944)
This commit adds a check that `defaultDtype` exists in the RefineTypes
handling of `AtenSumOp` before accessing the method `isInteger`, which
crashes the program if `defaultDtype` is null.

The handling of `defaultDtype` is the same as the one used for the
`AtenSumDimIntListOp`.
2023-03-16 08:35:49 -07:00
Jiahao Li 4912c3937d
Support aten.stack op and decompose it into unsqueeze & cat (#1747) 2023-03-11 09:25:25 +08:00
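
The decomposition named in the title, written out in PyTorch:

```python
import torch

xs = [torch.randn(2, 3) for _ in range(4)]
# aten.stack == unsqueeze each input along dim, then cat:
ref = torch.cat([x.unsqueeze(0) for x in xs], dim=0)
assert torch.equal(torch.stack(xs, dim=0), ref)
```
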
gpetters94 66b1045a80
Add a new RecomposeComplexOps pass, fold slice+copy_ into index_put_ (#1901) 2023-03-10 16:42:11 -05:00
Ziheng Jiang dca2b8a40a
[TORCH] Improve type refinement for aten.cat. (#1908)
* [TORCH] Fix type refinement for aten.cat.

* Add test.

* Address comments.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-03-09 16:17:35 -08:00
Roll PyTorch Action 40c25cecc4 update PyTorch version to 2.1.0.dev20230308 2023-03-08 15:25:16 +00:00
Priya Savithiri c2ef5f4165
Add HardtanhBackward TOSA and LINALG support (#1721) 2023-03-06 10:16:37 -08:00
Ramiro Leal-Cavazos d30af8772b
Handle uninitialized lattice elements in RefineTypes (#1911)
The data-flow analysis does not always propagate information to the
entire graph. This results in some lattice elements being
uninitialized. Currently the lattice elements are not checked to see
if they are uninitialized before rewriting the graph, potentially
resulting in invalid IR (see
https://github.com/llvm/torch-mlir/issues/1896).

This commit adds handling for uninitialized lattice elements.
2023-03-03 08:55:58 -08:00
Vivek Khandelwal a32840ffd7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowering for the aten.add and aten.Float.Scalar ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-28 22:43:39 +05:30
Zachary Cetinic e7111d473b
[Torch Dialect] Scatter reduce lowering (#1884)
- Lowers torch.scatter_reduce to the linalg_on_tensors dialect.
- Includes support for "sum", "prod", "amax", "amin" and "mean".
2023-02-21 23:05:55 +00:00
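
A small example of the op being lowered, using the "sum" reduction:

```python
import torch

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])
out = torch.zeros(2).scatter_reduce(0, index, src, reduce="sum")
# out -> tensor([3., 7.])
```
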
Yuanqiang Liu eb74014dd8
[Torch] decompose aten.norm.ScalarOpt_dim to aten.linalg_vector_norm (#1849) 2023-02-20 20:08:29 -08:00
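
The equivalence the decomposition relies on, checked in PyTorch for p=2:

```python
import torch

x = torch.randn(3, 4)
assert torch.allclose(torch.norm(x, 2, dim=1),
                      torch.linalg.vector_norm(x, 2, dim=1))
```
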
Vivek Khandelwal b17d4d4f08
[MLIR][TORCH] Add decomposition for aten.bernoulli.p op (#1882)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 22:36:29 +05:30
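
The commit does not spell out the expansion; one standard way to express Bernoulli sampling with a fixed probability p (not necessarily the commit's exact decomposition) is to threshold uniform noise:

```python
import torch

x = torch.empty(3, 4)
p = 0.3
# Each element is 1 with probability p, 0 otherwise:
sample = (torch.rand_like(x) < p).to(x.dtype)
```
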
Ziheng Jiang f1b8d5e581
[MHLO] Support AtenMaskedFillScalar (#1839)
* [MHLO] Support MaskedFillScalar.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-02-10 13:58:39 -08:00
Jiahao Li f58ba19448
Add aten.bucketize op and its decomposition (#1834) 2023-02-03 10:20:47 +08:00
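
For reference, aten.bucketize returns, for each input value, the index of the bucket it falls into relative to the sorted boundaries:

```python
import torch

boundaries = torch.tensor([1.0, 3.0, 5.0])
torch.bucketize(torch.tensor([0.5, 2.0, 6.0]), boundaries)
# tensor([0, 1, 3])
```
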
Gleb Kazantaev 3930588a7e
Enable VerifyBackendContract in LTC backend (#1798)
* Enable VerifyBackendContract in LTC backend

* Update VerifyBackendContract pass

* Move convert_scalar_implicit to jit_utils

* Rename VerifyBackendContract to VerifyBackendContractNoDecompositions

* Update verify-backend-contract-error.mlir test
2023-01-24 22:14:17 -05:00
Ramiro Leal-Cavazos 6c86bec04f
build: update llvm tag to 9acc2f37 (#1828)
This commit makes the following changes:

- Update dialects to use fold API `kEmitFoldAdaptorFolder` and update
signature of `fold` methods (see PSA
https://discourse.llvm.org/t/psa-new-improved-fold-method-signature-has-landed-please-update-your-downstream-projects/67618)
- Replace `makeArrayRef` with `ArrayRef` (see
https://reviews.llvm.org/D140896)
- Remove `TypeRange{}` arg from `b.create<scf::IfOp>` since builder no
longer takes that argument
- Make `func`s in `Torch/invalid.mlir` private, since symbol
declarations cannot be public. (see https://discourse.llvm.org/t/rfc-symbol-definition-declaration-x-visibility-checks/2140)
2023-01-25 01:29:42 +00:00
Eric Kunze 95bdfaa9bf
update llvm to d23516e9ad477527a9db4d06b1fa9566680ac67c (#1812)
Rename BlockAndValueMapping to IRMapping
Moved PrimTupleConstructOp type validation to its own verifier as the
tablegen version does not work for a combination of variadic input and
non-variadic output.
2023-01-23 16:34:22 -08:00
Ramiro Leal-Cavazos d849cbad14
Make `getTypeForScalarType` safer by returning `FailureOr<Type>` (#1814)
One of the potential values for a `torch_upstream::ScalarType` is
`Undefined`. This means that conversion of a `ScalarType` to another
type is a computation that can fail. To enforce handling of the
failure case, this commit makes the two helper functions that convert
`ScalarType`s into other types return `failure()` when the
`ScalarType` is `Undefined`.
2023-01-20 18:40:13 +00:00
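
A Python rendering of the pattern; the real helpers are C++ returning `FailureOr<Type>`, and this sketch just models failure with None (the mapping shown is an illustrative subset):

```python
from typing import Optional

SCALAR_TYPE_TO_MLIR = {"Float": "f32", "Long": "i64"}  # illustrative subset

def type_for_scalar_type(scalar_type: str) -> Optional[str]:
    # "Undefined" has no corresponding type, so conversion can fail;
    # callers must handle the None (failure) case explicitly.
    return SCALAR_TYPE_TO_MLIR.get(scalar_type)

assert type_for_scalar_type("Undefined") is None
```
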
Vivek Khandelwal abf4f207cd [MLIR][TORCH] Add canonicalizer for aten.new_empty_strided op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-19 13:37:32 +05:30
Vivek Khandelwal f9d59eb500 [MLIR][TORCH] Add decomposition for aten.randn_like op
This commit decomposes aten.randn_like op into aten.randn.generator op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 12:09:27 +05:30
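
The shape of the decomposition in PyTorch terms (modulo the generator plumbing):

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)
# randn_like(x) draws fresh values with x's size, dtype, and device:
y = torch.randn(x.shape, dtype=x.dtype, device=x.device)
assert y.shape == x.shape and y.dtype == x.dtype
```
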
Jiahao Li e2698433db
Fix empty tensor when select -1 (#1787) 2023-01-17 10:14:14 -08:00
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
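
A minimal demonstration of the integer bitwise ops covered:

```python
import torch

a = torch.tensor([0b1100], dtype=torch.int32)
b = torch.tensor([0b1010], dtype=torch.int32)
torch.bitwise_and(a, b)  # tensor([8]):  0b1100 & 0b1010 == 0b1000
torch.bitwise_or(a, b)   # tensor([14]): 0b1110
torch.bitwise_xor(a, b)  # tensor([6]):  0b0110
```
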
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt ops
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30