Commit Graph

244 Commits (102c497c4cc94b0dbae1c786b657715d8cc7b010)

Author SHA1 Message Date
Prashant Kumar 102c497c4c Add decomposition of _log_softmax op.
Decompose _log_softmax into log(softmax(x)).
2022-02-10 23:17:26 +05:30
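
A minimal PyTorch sketch of the decomposition above (the tensor and dim are illustrative):

```
import torch

x = torch.randn(2, 5)

# The decomposition: _log_softmax(x, dim) -> log(softmax(x, dim)).
decomposed = torch.log(torch.softmax(x, dim=-1))
reference = torch.log_softmax(x, dim=-1)
assert torch.allclose(decomposed, reference)
```
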
Prateek Gupta 318946a650 [TORCH][MLIR] Add E2E support for `aten._unsafe_view` op.
This commit adds decomposition of `aten._unsafe_view` op into
`aten.view` op.

Signed-off-by: Prateek Gupta <prateek@nod-labs.com>
2022-02-10 22:28:58 +05:30
Gaurav Shukla bd177bdfc7 [TORCH][MLIR] Add run-time assert support in Torch-dialect
- This commit adds the `aten.assert` op to the Torch dialect.
- The `aten.assert` op is lowered to the `mlir::Assert` op.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-09 12:03:01 -05:00
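
For reference, the Python-level `torch._assert` helper is one way to exercise such a run-time assert (the shapes here are illustrative):

```
import torch

def f(x):
    # A run-time assertion on the input, rather than a Python-only check.
    torch._assert(x.shape[0] > 0, "expected a non-empty batch dimension")
    return x * 2

print(f(torch.ones(3)))
```
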
Anup Gangwar f9f97ea184 * [tosa] Support for AtenNativeLayerNormOp
* [tosa] Support for AtenPermuteOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-04 14:46:31 -05:00
Prashant Kumar 68acc8696e Modify softmax decomposition to be more numerically stable.
The softmax decomposition is modified according to
https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.py
to account for numerical stability. Also modified the aten.argmax lowering
to handle negative dimensions.
2022-02-03 21:20:36 +05:30
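
A sketch of the max-subtraction trick the referenced decomposition uses: subtracting the row-wise max leaves the result unchanged (softmax is shift-invariant along `dim`) but prevents overflow in `exp`:

```
import torch

def stable_softmax(x, dim=-1):
    # exp(x - max) cannot overflow, and the shift cancels in the ratio.
    shifted = x - x.amax(dim=dim, keepdim=True)
    unnormalized = torch.exp(shifted)
    return unnormalized / unnormalized.sum(dim=dim, keepdim=True)

x = torch.tensor([[1000.0, 1001.0, 1002.0]])  # naive exp(x) overflows to inf
assert torch.allclose(stable_softmax(x), torch.softmax(x, dim=-1))
```
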
Suraj Sudhir 1b505cbac5
RefineTypes fixes for TOSA backend (#557)
Handles Linear, Adaptive_AvgPool2D and FlattenUsingInts
Adds ResNet18 static model for TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-02-01 14:08:54 -08:00
Yi Zhang 0cb216a1ad [Torch][Linalg] Add basic support for RNG
This PR includes the following pieces:
- Add the torch `Generator` type. The `Generator` type is converted to i64 in
the refbackend type converter.
- Add seed management support for the default global generator. The
`torch_c.getNextSeed` op is used to get the seed. On refbackend,
`torch_c.getNextSeed` is lowered to a load/store from [0] of the global
variable `default_generator` memref<i64> in the `InsertRngGlobals` pass.
- Add `aten.uniform_` and testing as an example RNG op. Add the
`torch.pseudo.aten.uniform` op. It has the same operands and return as
`aten.uniform_` from the op registry, except for value semantics.
2022-01-31 18:56:42 -05:00
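
A rough Python model of the seed plumbing described above; `default_generator` and `get_next_seed` here are illustrative stand-ins for the global `default_generator` memref and the `torch_c.getNextSeed` op, not real APIs:

```
import torch

default_generator = [42]  # models the global memref<i64> holding the seed

def get_next_seed():
    # Models torch_c.getNextSeed: load the current seed from [0] of the
    # global, store an updated seed back, and return the loaded value.
    seed = default_generator[0]
    default_generator[0] = (seed * 6364136223846793005 + 1442695040888963407) % 2**64
    return seed

def uniform_(t, low=0.0, high=1.0):
    g = torch.Generator().manual_seed(get_next_seed())
    return t.uniform_(low, high, generator=g)

print(uniform_(torch.empty(3)))
```
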
Yi Zhang 5d9a15263a [TORCH] Add aten.std e2e support 2022-01-31 15:17:49 -05:00
Prashant Kumar e58b66bc3b Add lowering of `aten.max.dim` op.
Lowering of `aten.max.dim` op has been added.
2022-01-31 21:41:22 +05:30
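
`aten.max.dim` returns both the maximum values and their indices, which the lowering has to materialize as two results (illustrative tensor):

```
import torch

x = torch.tensor([[1.0, 5.0, 3.0],
                  [4.0, 2.0, 6.0]])
values, indices = torch.max(x, dim=1)  # aten.max.dim yields two results
print(values)   # tensor([5., 6.])
print(indices)  # tensor([1, 2])
```
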
Anup Gangwar 454fa9d123
* [tosa] Support for AtenFlattenUsingIntsOp (#548) 2022-01-28 21:38:56 -08:00
Liam Fitzpatrick 8bc028af05 Fold __is__ and unchecked_cast of derefine
The added e2e maxpool testcase from #545 was not getting a static shape
due to an unfolded prim.If when RefineTypes was called. This was because
of unfolded torch.aten.__is__ and torch.prim.unchecked_cast operators
with torch.derefine operands.
2022-01-28 17:54:40 -05:00
Anup Gangwar 7a5736facd
* [tosa] Support for AtenReshapeOp (#543)
* [tosa] Support for AtenBatchNormOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-27 14:38:59 -08:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Anup Gangwar f8080bd1c5
* [tosa] Support for AtenRsubScalarOp for scalar constants (#531)
* [tosa] Support for AtenCeilOp and AtenReciprocalOp
* [tosa] Support for comparator ops, Aten[Gt|Lt|Eq][Tensor|Scalar]Op with scalar constant
* [tosa] Support for Scalar variants of Aten[Mul|Div|Add|Sub] Ops with scalar constants

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-20 10:58:30 -08:00
Vivek Khandelwal 6fe70c7794 [MLIR][TORCH] Add E2E support for aten.index.Tensor op
This commit adds lowering of the `aten.index.Tensor` op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-19 13:37:56 +05:30
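
`aten.index.Tensor` implements advanced indexing with tensor indices; a small example of the semantics being lowered:

```
import torch

x = torch.arange(10.0)
idx = torch.tensor([0, 3, 7])
print(x[idx])  # dispatches to aten.index.Tensor -> tensor([0., 3., 7.])
```
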
dan 3745f54489 Update external/llvm-project
- Add `qualified` to the ODS because of
https://reviews.llvm.org/D113873 and https://reviews.llvm.org/D116905
- Needed to revert https://github.com/llvm/torch-mlir/pull/520 as it
was based on an old torch version.
https://github.com/llvm/torch-mlir/pull/527 will bring this back with
a better design.
- Change ConvertAtenCatOp to use more accurate tensor shape info and
as much static info as possible to pass `tensor.insert_slice`
verification code added by https://reviews.llvm.org/D114715
- Other minor fixes
2022-01-18 13:25:42 -05:00
Anup Gangwar d69d29b7a6 * [tosa] Support for AtenPowTensorScalarOp with constant Scalar as input
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-11 22:55:54 -05:00
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for other
operations had to be added/improved:
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
2022-01-11 10:25:25 -05:00
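
`constant_pad_nd` pads the trailing dimensions of a tensor with a constant value; `torch.nn.functional.pad` in constant mode dispatches to it (shapes illustrative):

```
import torch
import torch.nn.functional as F

x = torch.ones(2, 3)
# Pad the last dim by (left=1, right=2) and the second-to-last by
# (top=0, bottom=1); constant mode dispatches to aten::constant_pad_nd.
y = F.pad(x, (1, 2, 0, 1), mode="constant", value=0.0)
print(y.shape)  # torch.Size([3, 6])
```
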
Vivek Khandelwal 35cf8d18f7 Add support for two return values
This commit adds support for two return values, of type
memref<f32> and i64.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-11 11:07:10 +05:30
Yi Zhang 732a76f45c Make broadcasting result shape more static
This involves the following two parts:
- Change refine type to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor when converting to linalg.
2022-01-06 18:39:27 -05:00
Liam Fitzpatrick ccfdfd1b80 Refine static shapes for conv2d and maxpool2d 2022-01-03 11:09:23 -06:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of `aten.arange.start_step` op.
This commit decomposes `aten.arange` and `aten.arange.start` into
`aten.arange.start_step` op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
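
A sketch of the decomposition: the simpler `arange` variants are rewritten into the full start/step form:

```
import torch

end = 5
# aten.arange(end)              -> aten.arange.start_step(0, end, 1)
# aten.arange.start(start, end) -> aten.arange.start_step(start, end, 1)
assert torch.equal(torch.arange(end), torch.arange(0, end, 1))
assert torch.equal(torch.arange(2, end), torch.arange(2, end, 1))
```
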
Gaurav Shukla a83004c806 [TORCH][MLIR] Fold trivial cases of `aten.to.dtype` and `aten.view` op
- It folds `aten.to.dtype` when the input tensor type and result type
  are exactly the same.
- It folds `aten.view` when both the input tensor type and the result
  type have rank one.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-24 13:32:34 +05:30
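
Both folds rewrite the op to its input when it is a no-op; in PyTorch terms (illustrative shapes):

```
import torch

x = torch.randn(4, dtype=torch.float32)

# aten.to.dtype is a no-op when the input and result types match exactly.
assert x.to(torch.float32) is x

# aten.view is a no-op when both the input and the result have rank one.
assert torch.equal(x.view(4), x)
```
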
xndcn 5eed562e19 add aten.sub.int/aten.mul.int lowering in TorchToStd 2021-12-17 10:35:15 -08:00
Anup Gangwar a6c3050dd0 * [tosa] Support for Maximum and Minimum
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-15 11:58:19 -08:00
Prashant Kumar ab81f871e4 Add aten.tensor.int and aten.tensor.float op lowerings.
Add the required lowerings and correct the test cases.
These ops produce zero-d tensors; refine types previously described them
incorrectly as producing a 1-d tensor of size 1.
2021-12-15 17:21:34 +05:30
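
The fix reflects that these ops build zero-dimensional (rank-0) tensors, not rank-1 tensors of size 1:

```
import torch

print(torch.tensor(3).shape)    # torch.Size([])  -- rank 0 (aten.tensor.int)
print(torch.tensor(3.0).shape)  # torch.Size([])  -- rank 0 (aten.tensor.float)
print(torch.tensor([3]).shape)  # torch.Size([1]) -- the rank-1 case, by contrast
```
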
Anup Gangwar cce490d71d
* [tosa] Support for Rsqrt legalization (#480)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-14 10:03:58 -08:00
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of the `aten.squeeze.dim` op into the
`linalg.TensorCollapseShape` op. Here, the `dim`-th dimension of the
input tensor must not be dynamic.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
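
`aten.squeeze.dim` collapses the given dimension only when its size is 1, which is why the lowering needs that dimension to be static:

```
import torch

x = torch.zeros(2, 1, 3)
print(torch.squeeze(x, dim=1).shape)  # torch.Size([2, 3]): size-1 dim collapsed
print(torch.squeeze(x, dim=0).shape)  # torch.Size([2, 1, 3]): non-1 dim kept
```
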
Suraj Sudhir c9c9b68d1f [tosa] Add Torch reduction operators
- Supports variants with multiple dims, one dim, all dims
- Leverages legalize_common and legalize_utils code from
TensorFlow-TOSA work

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-03 09:01:48 -08:00
Daniel Garvey a52aded0b9
Add lowering for slice and selectInt (#398) 2021-12-02 22:09:21 -06:00
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of the `aten.squeeze` op into the
`linalg.TensorCollapseShape` op. Dynamic dimensions of size 1 are not
handled as part of this commit.

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
Yi Zhang 5d28549c2c Add folder for torch.aten.Int.Tensor
This is to fold the common pattern from BERT inference like:
```
%111 = torch.prim.NumToTensor.Scalar %110 : !torch.int ->
    !torch.vtensor<[],si64>
%112 = torch.aten.Int.Tensor %111 : !torch.vtensor<[],si64> ->
    !torch.int
```
2021-11-30 21:55:48 +05:30
dan 03fdf56f21 add aten.add.int lowering in TorchToStd 2021-11-29 13:22:50 -05:00
Yi Zhang 0fe70994e5 Add support for multiple return values
This change unblocks work on backprop ops that return more than one
tensor. We will need a more scalable approach in the future if more
flexible combinations of return types are needed.
2021-11-16 21:07:45 -05:00
Suraj Sudhir 628a21bb13
[mlir][tosa] Refactor conversions to use templates (#416)
- Remove use of conversion construction macros
- Add mul and div op conversions
- Add corresponding tests

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 16:15:58 -08:00
Suraj Sudhir 1019ddf5a0 [tosa] Add structure for eltwise ops
Add a bunch of op legalizations.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 11:03:24 -08:00
Yi Zhang 3bd9d2a4c7 Add e2e support for aten._softmax_backward_data.
Decompose aten._softmax_backward_data into aten math ops. Also decompose
`aten.size` to facilitate decomposing _softmax_backward_data.
2021-11-09 13:09:30 +05:30
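
A sketch of the math such a decomposition implements; for `y = softmax(x, dim)` and incoming gradient `g`, the backward is `y * (g - sum(g * y, dim, keepdim=True))`, checked here against autograd:

```
import torch

x = torch.randn(2, 5, requires_grad=True)
y = torch.softmax(x, dim=-1)
g = torch.randn_like(y)

# _softmax_backward_data expressed with elementwise aten math ops.
grad_input = y * (g - (g * y).sum(dim=-1, keepdim=True))

(reference,) = torch.autograd.grad(y, x, grad_outputs=g)
assert torch.allclose(grad_input, reference, atol=1e-6)
```
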
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp`, `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing.
- Added i32 memref return/arg type to support e2e testing.
2021-11-08 17:50:52 -05:00
George Petterson f41958037a Add NumToTensor 2021-11-08 15:56:52 -05:00
Prateek Gupta 18e8806b14 [TORCH][MLIR] Add E2E support for aten::to.dtype.
This commit adds end-to-end support for AtenToDtypeOp from aten
to linalg.

Signed-off-by: Prateek Gupta <prateek@nod-labs.com>
2021-11-08 12:56:03 -05:00
Prashant Kumar fd505db2c6 Adding support for returning elemental types.
Support for returning elemental types. Previously, only memref types
were supported as return types. This removes the hacky ways of writing
tests that return elemental types.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-08 22:20:48 +05:30
Prashant Kumar 53b4275ef5 Add lowering of `aten.Int.Tensor` op.
The lowering of `aten.Int.Tensor` op has been added.
The changes has been made as a part of `convert-torch-to-linalg` pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-01 21:58:08 +05:30
Yi Zhang 752abc8d01 Add type promotion code to refine types.
The types have different categories, where
complex > floating > integral > boolean (">" means the left-hand side
has the higher category).

The operands have different priorities, where:
dimensioned tensor > 0-dim tensor > scalar == wrapped 0-dim tensor.
This is represented by `ResultTypeState.dimResult`,
`ResultTypeState.zeroResult` and `ResultTypeState.wrappedResult` in
the source code.

For operands of the same priority, the result type is the highest
category with sufficient width to hold all operands.

By default, only the highest priority operands participate in the type
promotion logic. Lower priority operands participate if they are in
a higher category than any higher priority operands.

For example, a <[],f32> tensor (lower priority) and a <[1],si64> tensor
result in a <[?],f32> tensor because floating > integral. As another
example, a <[],f64> tensor (lower priority) and a <[1],f32> tensor
result in a <[?],f32> tensor because f32 and f64 are in the same category.

The ScalarType enum definition, type promotion table, ResultTypeState
struct definition and some helpers are copied from
aten/src/ATen/native/TypeProperties.*
Other references:
- https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc
- https://github.com/pytorch/pytorch/issues/9515

Other minor changes:
1. Fix `visitExpandLikeOp` to consider cases where the given sizes list
size is larger than the input rank.
2. Add back the somehow deleted `torch.aten.softmax.int` tests in
decompose-complex-ops.mlir.
2021-10-29 11:17:39 -04:00
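
The promotion examples above can be checked directly against PyTorch:

```
import torch

# 0-dim f32 (lower priority) + rank-1 si64: floating beats integral, so f32 wins.
a = torch.zeros((), dtype=torch.float32)
b = torch.zeros(1, dtype=torch.int64)
assert (a + b).dtype == torch.float32

# 0-dim f64 (lower priority) + rank-1 f32: same category, so the
# higher-priority dimensioned operand's f32 wins despite f64's width.
c = torch.zeros((), dtype=torch.float64)
d = torch.zeros(1, dtype=torch.float32)
assert (c + d).dtype == torch.float32
```
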
Sean Silva eb6996d557 Update llvm-project to 6f9c25167d16acff3ff8e4f54a8c14a2a175fc59
- Changes to dialect conversion that result in no-op materializations
  not being created.
2021-10-28 17:43:04 -07:00
Suraj Sudhir 7e4ef74774
[tosa] Add Torch.sigmoid fp32 to TOSA (#386)
* [tosa] Add Torch.sigmoid fp32 to TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-10-28 10:09:12 -07:00
Prashant Kumar 5009cbf55c Add lowering of aten.matmul op.
Lowering of the `aten.matmul` op is added, from the torch dialect to the
linalg dialect.
The different cases correspond to
https://pytorch.org/docs/stable/generated/torch.matmul.html.
TODO: Broadcasting in case of batch-matmul is yet to be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-10-26 12:45:09 -04:00
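
`torch.matmul` covers several shape cases, each needing its own lowering path (the batched broadcasting case is the noted TODO):

```
import torch

print(torch.matmul(torch.ones(3), torch.ones(3)).shape)              # dot: []
print(torch.matmul(torch.ones(2, 3), torch.ones(3)).shape)           # mat-vec: [2]
print(torch.matmul(torch.ones(2, 3), torch.ones(3, 4)).shape)        # mat-mat: [2, 4]
print(torch.matmul(torch.ones(5, 2, 3), torch.ones(5, 3, 4)).shape)  # batched: [5, 2, 4]
```
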
Yi Zhang abfaf8c577 Add aten.ne.bool to make CI pass 2021-10-21 14:45:41 -04:00
Yi Zhang a459e09ab7 E2e support for aten.softmax.int and aten.embedding
- Added a DecomposeComplexOps pass to decompose complex torchOps.
- Refactored `visitAtenArgmaxOp` and `visitAtenAnyDimOp` to
`visitReductionAlongDimIntOp`.
- Moved some helper functions into
torch-mlir/Dialect/Torch/Utils/Utils.h to be shared by multiple files.
- Added support for f64 tensor as argument and return types.
2021-10-18 17:57:45 -04:00
Yi Zhang 0902438882 Update llvm-project to a54f4eae0e1d0ef5adccdcf9f6c2b518dc1101aa
This brings in https://reviews.llvm.org/D110797. PRs that are in
progress will need to use scripts provided by
https://llvm.discourse.group/t/psa-removed-arithmetic-ops-from-standard/4455.
2021-10-18 13:36:42 -04:00
Sean Silva 19e9fc4ef1 Bring some more order to the e2e error reporting situation.
- Move `run_pipeline_with_repro_report` to a more common place, and use it
  consistently
- Attach a `torch.debug_module_name` to the enclosing `builtin.module`
  op to allow for self-contained error reporting (not needing to pass
  the names around).
- Remove redundant error reporting in linalg_on_tensors_backend.py and
  tosa_backend.py (their respective backend abstract base classes now
  take care of the error reports themselves)
- Save off original value of sys.stderr, rather than always resetting to
  `sys.__stderr__`. This is just more hygienic, and allows nesting if
  desired.
2021-10-08 13:00:12 -07:00