Commit Graph

113 Commits (d4a30b7e6757753ecd04b5675766dde9be914b0b)

Author SHA1 Message Date
Raghavan Raman 0979df6589
Fix unsqueeze in Torch to Tosa conversion (#1780) 2023-01-10 11:09:58 -08:00
Chi_Liu 9dc09ac8c5
[TOSA] Add aten.gather support for tosa (#1680) 2022-12-21 11:04:07 -08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
Tanyo Kwok 577e38da58
build: update llvm tag to 7ccbb4df (#1736)
Summary of changes:

 - LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
   (although not all) places
   (https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
   This patch replaces the affected instances of `llvm::Optional` with
   `std::optional`.

 - In the usages of llvm::Optional that remain, llvm::Optional::value()
   is deprecated, so this patch replaces them with a dereference.
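A minimal, self-contained sketch of the substitution (the helper name here is hypothetical, not from the patch):

```
#include <cstdint>
#include <iostream>
#include <optional>

// Hypothetical helper standing in for code that previously returned
// llvm::Optional<int64_t>.
std::optional<int64_t> getFoldedDim(bool known) {
  return known ? std::optional<int64_t>(2) : std::nullopt;
}

int main() {
  // Before: llvm::Optional<int64_t> dim = ...; int64_t v = dim.value();
  std::optional<int64_t> dim = getFoldedDim(true);
  if (dim)
    std::cout << *dim << "\n"; // a dereference replaces the deprecated value()
  return 0;
}
```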
2022-12-20 18:17:27 +08:00
Chi_Liu 163d19cce6
[TOSA] Add aten.add/sub.Scalar/Tensor si64 type support (#1604) 2022-12-12 12:13:07 -08:00
Sambhav Jain f8a2592905
[Bazel] Resolve circular dependency and add targets for conversion to MLProgram dialect (#1694)
A circular dependency was introduced in e7edcc62fd. 

Specifically, the `makeShapeLLVMCompatible` and `makeShapeTorchCompatible` utilities were being called from `lib/Dialect/Torch/IR/TorchTypes.cpp` and `lib/Dialect/Torch/IR/TorchOps.cpp`, defined under the `:TorchMLIRTorchDialect` bazel target, leading it to take a dependency on `:TorchMLIRConversionUtils`, which already depends on `:TorchMLIRTorchDialect`, hence creating a circular dependency.

This commit resolves the same by moving said utilities from `lib/Conversion/Utils/Utils.cpp` to `lib/Dialect/Torch/Utils/Utils.cpp`. Please LMK if there's a better way to fix this and I will update the code.

This commit also adds the required targets to support building the new conversions from Torch to ML Program dialect that was introduced in f416953600.

Bazel build GHA triggered manually to verify: https://github.com/sjain-stanford/torch-mlir/actions/runs/3645944517
2022-12-08 09:49:54 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which leaves every `CopyToValueTensorOp` user with an
  operand of type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
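A sketch of the corrected ordering under these assumptions (the helper is illustrative; op names and accessors follow the Torch dialect's generated API):

```
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

using namespace mlir;
using namespace mlir::torch::Torch;

// Replace each CopyToValueTensorOp with its operand first, then replace
// all uses of the CopyToNonValueTensorOp with its operand, matching the
// order described above.
static void replaceCopySubgraph(PatternRewriter &rewriter,
                                CopyToNonValueTensorOp copy,
                                ArrayRef<CopyToValueTensorOp> users) {
  for (CopyToValueTensorOp user : users)
    rewriter.replaceOp(user, user.getOperand());
  rewriter.replaceOp(copy, copy.getOperand());
}
```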
2022-12-07 12:20:41 -08:00
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Fix handling of dynamic dims, since the
  `kDynamicSize` value changed from `-1` to
  `std::numeric_limits<int64_t>::min()` in LLVM
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes to remain consistent with the
  Torch and MLIR semantics (see the sketch after this list)
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5
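A hedged sketch of what such helpers might look like; the Torch-side `-1` sentinel here is an assumption for illustration, not necessarily the project's actual constant:

```
#include <cstdint>
#include <limits>
#include <vector>

constexpr int64_t kTorchUnknownSize = -1; // assumed sentinel for illustration
constexpr int64_t kLLVMDynamicSize = std::numeric_limits<int64_t>::min();

std::vector<int64_t> makeShapeLLVMCompatible(std::vector<int64_t> shape) {
  for (int64_t &dim : shape)
    if (dim == kTorchUnknownSize)
      dim = kLLVMDynamicSize;
  return shape;
}

std::vector<int64_t> makeShapeTorchCompatible(std::vector<int64_t> shape) {
  for (int64_t &dim : shape)
    if (dim == kLLVMDynamicSize)
      dim = kTorchUnknownSize;
  return shape;
}
```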

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `kDynamicSize` value changed from `-1` to
  `std::numeric_limits<int64_t>::min()`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Chi_Liu 29c8f47723
[TOSA] Add aten.clamp op tosa support (#1609)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2022-11-18 13:32:13 -08:00
Ramiro Leal-Cavazos 09ca07bca0
`m_TorchConstant{Int/Bool}List` -> `m_TorchListOfConstant{Int/Bool}s` (#1601)
This commit renames the patterns used to match on lists of constant
values to `m_TorchListOfConstant{valueType}s`. This is needed to avoid
ambiguity when `valueType` contains `Optional`. In particular, it
makes it clear whether the values in the list are optional or the list
itself is optional.
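A hedged usage sketch of the renamed matcher inside a conversion helper; `AtenViewOp`, its `getSize()` accessor, and the matcher signature are assumed from torch-mlir's conventions:

```
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

using namespace mlir;
using namespace mlir::torch::Torch;

// Bind the elements of a constant int list, or report why the match failed.
static LogicalResult matchSizes(AtenViewOp op, PatternRewriter &rewriter,
                                SmallVectorImpl<int64_t> &sizes) {
  if (!matchPattern(op.getSize(), m_TorchListOfConstantInts(sizes)))
    return rewriter.notifyMatchFailure(op, "size list is not constant ints");
  return success();
}
```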
2022-11-16 20:33:12 +00:00
Chi_Liu dfe7513a45
[MLIR][TORCH] Fix aten.unsqueeze op (#1578)
The valid range of the unsqueeze dim is [-input.dim() - 1, input.dim() + 1); the bug was that the + 1 was missing.
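A minimal sketch of the corrected bound check (the helper name is illustrative):

```
#include <cassert>
#include <cstdint>

// For unsqueeze, dim is valid in [-rank - 1, rank + 1); a negative dim
// wraps by rank + 1 (the output rank), not by rank.
int64_t normalizeUnsqueezeDim(int64_t dim, int64_t rank) {
  assert(dim >= -rank - 1 && dim < rank + 1 && "unsqueeze dim out of range");
  return dim < 0 ? dim + rank + 1 : dim;
}
```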
2022-11-14 09:09:15 -08:00
Ramiro Leal-Cavazos b723186983
Remove all but one of valsem ops + move fill.Scalar to elementwise (#1531)
This commit removes almost all of the valsem ops, since the value
semantics version of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
2022-10-28 15:06:11 +00:00
Chi_Liu ad6f5848cb
[MLIR][TORCH] Add TorchToTosa lowering for aten.where.self op (#1454) 2022-10-18 09:39:39 -07:00
Ramiro Leal-Cavazos 82a3860e25
build: update llvm tag to 4546397e (#1502)
This commit makes the following changes needed to bump LLVM:

- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)
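A hedged sketch of building a rank-preserving result type for the op; the builder and axis-attribute type are assumptions based on typical TOSA usage:

```
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// tosa.reduce_sum keeps the input rank; the reduced axis becomes 1.
static Value createRankPreservingReduceSum(PatternRewriter &rewriter,
                                           Location loc, Value input,
                                           int64_t axis) {
  auto inputType = input.getType().cast<RankedTensorType>();
  SmallVector<int64_t> resultShape = llvm::to_vector(inputType.getShape());
  resultShape[axis] = 1; // rank preserved, unlike a rank-reducing reduction
  auto resultType =
      RankedTensorType::get(resultShape, inputType.getElementType());
  return rewriter.create<tosa::ReduceSumOp>(
      loc, resultType, input, rewriter.getI64IntegerAttr(axis));
}
```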

Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
2022-10-18 04:22:53 +00:00
Vivek Khandelwal d3cc3f1aff [tosa] Add lowering for aten.to.dtype and aten._to_copy op
This commit adds the TorchToTosa lowering for `aten.to.dtype` and
`aten._to_copy` op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Vivek Khandelwal 56f9a9b5de [tosa] Add TorchToTosa lowering for torch.prim.NumToTensor.Scalar op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Ashay Rane faa9a78e38
build: update llvm tag to 6f46ff37 (#1448)
Summary of changes:
 - Updated references to the Arith dialect
   (https://reviews.llvm.org/D134762)
 - Switched to prefixed accessors for MemRef dialect
   (https://reviews.llvm.org/D134995)
 - Fixed warnings about signed/unsigned comparisons, ignored return
   values, and unused variables
2022-10-05 08:28:06 -05:00
Vivek Khandelwal 9dd5ae8239
[tosa] Add TorchToTosa lowering for aten.arange.start_step op (#1442) 2022-09-30 07:33:41 -07:00
Vivek Khandelwal 6db513c51d
[tosa] Add support for some cases of aten.broadcast_to op (#1429)
This commit adds support for TorchToTosa lowering of
`aten.broadcast_to` op for cases:
1.) When the rank of input and output tensor is equal.
2.) When the rank of input tensor is zero.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 09:40:56 -07:00
Vivek Khandelwal bce00c8ed1 [tosa] Fix torch.vtensor.literal lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 17:03:10 +05:30
Eric Kunze cb1b8796a2
Convert torch si literals into signless for TOSA (#1421) 2022-09-26 16:54:27 -07:00
Vivek Khandelwal 4ef6e69ed4
[MLIR][TORCH] Add TorchToTosa lowering for aten.clone op (#1388)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

Co-authored-by: Suraj Sudhir <16977902+sjarus@users.noreply.github.com>
2022-09-20 15:07:46 -07:00
Vivek Khandelwal 1ffd42bbde
[MLIR][TORCH] Add TorchToTosa lowering for aten.broadcast_to op (#1386)
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-20 10:04:51 -07:00
long.chen 797feaf129
[torch-mlir][Tosa] fix the reshape's new-shape attribute mismatching the reshape's result type when lowering torch.max.dim to TOSA (#1378) 2022-09-16 21:29:56 -07:00
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
Jacques Pienaar 247dd64a66
Change to notifyMatchFailure (#1073)
emitError is intended for error cases, not for pattern match failures.
notifyMatchFailure is intended for when a pattern reports its reason
for not matching.

Op verification should also not happen inside patterns but as part of
the op's verifier; the checks that were obviously verification were
left as emitError inside patterns to keep this change small.
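A minimal sketch of the preferred style (the pattern and its check are illustrative, not from the commit):

```
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

struct ExamplePattern : public RewritePattern {
  using RewritePattern::RewritePattern;
  LogicalResult matchAndRewrite(Operation *op,
                                PatternRewriter &rewriter) const override {
    // Report the reason for not matching instead of emitting a diagnostic.
    if (op->getNumResults() != 1)
      return rewriter.notifyMatchFailure(op, "expected a single result");
    // ... the actual rewrite would go here ...
    return success();
  }
};
```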
2022-07-17 18:39:54 -07:00
Suraj Sudhir 5e2012c7dd
[tosa] aten.max.dim, aten.slice.tensor ops (#1027)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-13 10:10:18 -07:00
Suraj Sudhir d38f2cae5b
[tosa] aten.transpose.int support (#1017)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-07 13:05:33 -07:00
Andrew Cain 6885f1ed8a
fix: Broaden range of tosa.matmul outputs that don't need to be reshaped (#1015)
Co-authored-by: Andrew Cain <acain@d-matrix.ai>
2022-07-06 17:24:16 -07:00
Suraj Sudhir bb576c2cb3
[tosa] aten.embedding op support (#991)
Enables BERT legalization.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-06-30 13:13:52 -07:00
Gaurav Shukla 1be604bfd3 [LINALG] Lower `aten.Matmul` to `linalg.BatchMatmul`
This commit lowers `aten.matmul` to `linalg.BatchMatmul` under the
following conditions:
1. The result of matrix multiplication must have batch dimensions,
   i.e., rank greater than 2.
2. The resultant matrix must have at most 1 dynamic batch dimension.

It also handles broadcasting of batch dimensions when batch dimensions
of the matrices are broadcastable.
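An illustrative helper (not from the commit) showing how batch dims broadcast, with the trailing two matrix dims excluded:

```
#include <cassert>
#include <cstdint>
#include <vector>

// Broadcast the batch dims of two matmul operands, e.g. [1, 8] and [6, 1]
// broadcast to [6, 8], so aten.matmul on [1, 8, 4, 5] x [6, 1, 5, 7]
// yields [6, 8, 4, 7].
std::vector<int64_t> broadcastBatchDims(std::vector<int64_t> a,
                                        std::vector<int64_t> b) {
  if (a.size() < b.size())
    a.insert(a.begin(), b.size() - a.size(), 1);
  else
    b.insert(b.begin(), a.size() - b.size(), 1);
  std::vector<int64_t> out(a.size());
  for (size_t i = 0; i < a.size(); ++i) {
    assert((a[i] == b[i] || a[i] == 1 || b[i] == 1) && "not broadcastable");
    out[i] = (a[i] == 1) ? b[i] : a[i];
  }
  return out;
}
```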

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-06-25 10:58:06 +05:30
Vivek Khandelwal 06750815d1 [tosa] Support for AtenAvgPool2d op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
Anup Gangwar ccf924d3df
[tosa] Support for Aten[Gelu|GeluBackward] ops (#720)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-03-30 17:00:55 -07:00
Anup Gangwar 5d7a6c2976
[tosa] Support for Aten[Unsqueeze|Contiguous|Dropout|Reshape|View] ops (#700) 2022-03-25 14:15:07 -07:00
Anup Gangwar c60468f141
[tosa] Support for Aten[Zeros|Ones|Fill_Scalar] ops (#604)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-16 09:53:51 -08:00
Anup Gangwar dfc07d11d7
Fix compiler warning introduced in PR575 (#593) 2022-02-14 12:45:19 -08:00
Anup Gangwar 756b75fb2d
[tosa] Support for some ops and fix for Issue #532 (#575)
* [tosa] Support for AtenNe[Tensor|Scalar]Op, AtenLog2Op,
AtenBitwiseAndTensorOp, AtenSquareOp and AtenThresholdOp
* Fix for Issue #532 - Mixed input types for a few ops, and updated a few
tests to use i32 instead of i64

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-11 12:30:02 -08:00
Anup Gangwar f9f97ea184 * [tosa] Support for AtenNativeLayerNormOp
* [tosa] Support for AtenPermuteOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-04 14:46:31 -05:00
Suraj Sudhir 0f083e770a
[tosa] Add maxpool2d and adaptive_avgpool2d support (#550)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-31 13:34:09 -08:00
Anup Gangwar 454fa9d123
* [tosa] Support for AtenFlattenUsingIntsOp (#548) 2022-01-28 21:38:56 -08:00
Anup Gangwar 7a5736facd
* [tosa] Support for AtenReshapeOp (#543)
* [tosa] Support for AtenBatchNormOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-27 14:38:59 -08:00
Suraj Sudhir eb06d21765
[tosa] Implement conv2d support (#541)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-26 19:16:13 -08:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Suraj Sudhir cadea678e5
[tosa] Implement torch.linear support. (#535)
Refactor matmul into a separate class and derive variants:
- matmul
- mm, bmm
- linear

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-25 08:48:58 -08:00
Anup Gangwar f8080bd1c5
* [tosa] Support for AtenRsubScalarOp for scalar constants (#531)
* [tosa] Support for AtenCeilOp and AtenReciprocalOp
* [tosa] Support for comparator ops, Aten[Gt|Lt|Eq][Tensor|Scalar]Op with scalar constant
* [tosa] Support for Scalar variants of Aten[Mul|Div|Add|Sub] Ops with scalar constants

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-20 10:58:30 -08:00
Suraj Sudhir 0188ca5498
[tosa] Implement matmul, mm and bmm support (#526)
- Also handles broadcasting of n-D tensors and dynamic shapes

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-18 13:37:32 -08:00
Anup Gangwar d69d29b7a6 * [tosa] Support for AtenPowTensorScalarOp with constant Scalar as input
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-11 22:55:54 -05:00
Suraj Sudhir d6b6c0268c
[tosa] Add missing `override`s to fix compiler warnings (#514)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-07 10:57:54 -08:00
Suraj Sudhir b4842d9863
[tosa] Implement squeeze.dim support (#511)
Templated variants for squeeze and squeeze.dim
2022-01-06 08:31:29 -08:00
Suraj Sudhir 0cd95b5c68
[tosa] Support for Torch.squeeze (#487) 2021-12-15 21:40:29 -08:00
Anup Gangwar a6c3050dd0 * [tosa] Support for Maximum and Minimum
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-15 11:58:19 -08:00
Suraj Sudhir 829cf8afc3
[tosa] Implement Argmax support (#485)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-15 11:01:01 -08:00
Anup Gangwar cce490d71d
* [tosa] Support for Rsqrt legalization (#480)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-14 10:03:58 -08:00
Suraj Sudhir c9c9b68d1f [tosa] Add Torch reduction operators
- Supports variants with multiple dims, one dim, and all dims
- Leverages legalize_common and legalize_utils code from
TensorFlow-TOSA work

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-03 09:01:48 -08:00
Yi Zhang 53733933a4 Update llvm upstream to 0b17336f793108a7b10c3fa913039144ef1d0f61
Update AsmPrinter/Parser and MatchAndRewrite
2021-11-16 13:04:51 -05:00
Suraj Sudhir 628a21bb13
[mlir][tosa] Refactor conversions to use templates (#416)
- Remove use of conversion construction macros
- Add mul and div op conversions
- Add corresponding tests

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 16:15:58 -08:00
Suraj Sudhir 1019ddf5a0 [tosa] Add structure for eltwise ops
Add a bunch of op legalizations.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 11:03:24 -08:00
Suraj Sudhir 7e4ef74774
[tosa] Add Torch.sigmoid fp32 to TOSA (#386)
* [tosa] Add Torch.sigmoid fp32 to TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-10-28 10:09:12 -07:00
Sean Silva 0c5c84d63d Add a basic TOSA E2E backend.
We lower through linalg-on-tensors and use RefBackend to run it.
This adds enough support for a "tanh" op. Adding more ops should be
fairly mechanical now that things are wired up. Run with:
```
./tools/torchscript_e2e_test.sh -c tosa
```

The backend structure is very similar to linalg-on-tensors based E2E
backends and is a nice parallel (see `tosa_backend.py`). Actually, this
forced a nice refactoring to the layering here. We removed
`torchscript-module-to-linalg-on-tensors-backend-pipeline` and instead
require separately running
```
torchscript-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline
```
This highlights that the step lowering to the "torch backend contract"
of cleaned-up `torch` dialect ops is a critical step in the overall lowering.
Going forward, that is the key load-bearing contract of the torch-mlir
project, not the linalg-on-tensors backend contract.

Recommended review order:
- `TorchToTosa.cpp` / `TorchToTosa/basic.mlir`
- `python/torch_mlir_e2e_test/torchscript/configs/tosa_backend.py` and
  the new `utils.py` file there.
- `python/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py` and
  `abc.py` in that directory for the TOSA backend e2e interface.
- other misc mechanical changes
2021-10-08 09:59:45 -07:00