Commit Graph

87 Commits (bfb565143fe2fce7ff94349214e7da6c374279c0)

Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
Yuanqiang Liu faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack (#2192) 2023-06-07 01:38:04 +08:00
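
As a rough illustration (not part of the commit), the pattern being recomposed corresponds to calling `torch.split` and immediately unpacking the resulting list; the tensor and chunk size below are made-up examples:

```python
import torch

x = torch.arange(6)
# torch.split produces aten.split.Tensor; unpacking the result becomes prim.ListUnpack.
a, b, c = torch.split(x, 2)
print(a, b, c)  # tensor([0, 1]) tensor([2, 3]) tensor([4, 5])
```
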
TatWai Chong ed4ecb072f
[tosa] support lowering basic torch binary ops with mixed dtypes (#2122)
Lowering torch operations that allow different but compatible data types
in their operands to tosa ends up generating invalid tosa IR with mixed
data types. In the TOSA spec, certain operations (generally element-wise
operations) require all operands to have the same data type.

Add wrapper functions for those element-wise tosa ops to perform op
creation with type conversion if necessary.
2023-05-18 17:12:18 -07:00
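
For context (not part of the commit), a minimal PyTorch-level sketch of the promotion behaviour the wrappers have to reproduce; the operand dtypes here are arbitrary examples:

```python
import torch

a = torch.tensor([1, 2, 3], dtype=torch.int32)
b = torch.tensor([0.5, 1.5, 2.5], dtype=torch.float32)
# PyTorch silently promotes the int32 operand, so the result is float32.
# TOSA element-wise ops require matching operand dtypes, so the lowering
# must insert an explicit cast to mirror this promotion.
c = torch.add(a, b)
print(c.dtype)  # torch.float32
```
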
yifei410 86718cb203
[TOSA] lowering support for aten cat (#2039)
Add support for lowering torch.aten.cat to tosa.concat

* add support for aten cat to tosa

---------

Co-authored-by: yifei <y.zhou@xilinx.com>
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-05-10 08:25:58 -07:00
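
A small illustrative example of the op being lowered (shapes and the concatenation dim are arbitrary assumptions):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(4, 3)
# torch.cat (aten.cat) joins tensors along an existing dimension;
# the commit maps this onto tosa.concat.
print(torch.cat([a, b], dim=0).shape)  # torch.Size([6, 3])
```
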
Ramiro Leal-Cavazos c8e062fb4e
Fix default value of `stride` in 2d pooling ops in linalg and tosa (#2065)
When the user does not specify the `stride` value in 2d pooling ops,
`stride` is given the value of an empty list. However, the current
lowerings for pooling ops assumed that the `stride` operand would
always be a list of two ints, leading to crashes when that was not the
case. This commit fixes the crashes by setting the value of `stride`
to `kernel_size` when `stride` is the empty list, since this is the
default `stride` value specified in PyTorch docs. See:
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
2023-04-27 08:31:36 -07:00
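
A minimal sketch of the default-stride behaviour the fix relies on (illustrative only; any kernel size would do):

```python
import torch

pool = torch.nn.MaxPool2d(kernel_size=3)  # stride omitted
# Per the PyTorch docs, an omitted stride defaults to kernel_size.
print(pool.stride)                        # 3
x = torch.randn(1, 1, 9, 9)
print(pool(x).shape)                      # torch.Size([1, 1, 3, 3])
```
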
Chi_Liu 8d25dd454f
[TOSA] Add torch.prim.NumToTensor.Scalar float support (#1802) 2023-04-18 13:36:57 -07:00
Chi_Liu f3d1eda09f
[TOSA] Add aten.abs support (#2032) 2023-04-14 08:43:39 -07:00
Chi_Liu ad36e61040
[TOSA] Add aten.le.tensor support (#2033) 2023-04-14 08:43:14 -07:00
Abhishek Varma 318fe13468 [MLIR][TORCH] Patch up Ops and their lowerings to deal with +ve `dim`
-- In Python we have the concept of negative dimension indexing.
-- We want to normalize such dimensions to be positive and within the
   expected range.
-- This commit takes care of a few remaining Ops and their
   lowerings by applying `toPositiveDim` and `isValidDim` to the
   extracted integer `dim` value.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
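
A rough Python analogue of the normalization the commit applies (the helper below only mirrors the idea behind `toPositiveDim`; it is a sketch, not the actual implementation):

```python
import torch

def to_positive_dim(dim: int, rank: int) -> int:
    # Map a possibly-negative dimension index into [0, rank).
    return dim + rank if dim < 0 else dim

x = torch.randn(2, 3, 4)
# dim=-1 and dim=2 address the same axis on a rank-3 tensor.
assert torch.equal(x.sum(dim=-1), x.sum(dim=to_positive_dim(-1, x.dim())))
```
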
Chi_Liu 8dcd0b2e76
[TOSA] Add support for sliceOp end < 0 (#2011) 2023-04-06 16:19:00 -07:00
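
For reference (illustrative, not from the commit), a negative `end` in a slice counts from the back of the dimension:

```python
import torch

x = torch.arange(6)
# end = -1 is equivalent to end = x.size(0) - 1, dropping the last element.
print(x[0:-1])  # tensor([0, 1, 2, 3, 4])
```
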
Chi_Liu 6bb9965a41
[TOSA] Add support for AtenZerosOp 0/strided layout (#1983) 2023-03-30 07:08:20 -07:00
Michael Feliz 2389729fb9
Add support for aten_remainder in TorchToTosa (#1966) 2023-03-23 17:55:58 -07:00
lisaliu1 d632afce31
Max pool2d ceil mode to tosa (#1957)
* implemented ceil_mode == true support for lowering aten.max_pool2d to tosa
* add e2e test for lowering aten.max_pool2d to tosa with ceil_mode=true

---------

Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-03-21 10:17:39 -07:00
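
A small sketch of the ceil_mode effect (input shape and pooling parameters are illustrative assumptions):

```python
import torch

x = torch.randn(1, 1, 7, 7)
floor_pool = torch.nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=False)
ceil_pool = torch.nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)
print(floor_pool(x).shape)  # torch.Size([1, 1, 3, 3])
print(ceil_pool(x).shape)   # torch.Size([1, 1, 4, 4]), the partial window at the edge is kept
```
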
lisaliu1 7d711b9f9f
Constant pad nd to tosa (#1933)
* implemented lowering torch.aten.constant_pad_nd to tosa
* add constant_pad_nd e2e tests to TOSA_PASS_SET
* add PadModule_basic & PadWithNoneValModule_basic to TOSA_PASS_SET
---------

Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-03-15 08:42:15 -07:00
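
An illustrative call to the op being lowered (padding amounts and fill value are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.ones(2, 3)
# F.pad with mode="constant" dispatches to aten.constant_pad_nd:
# pad the last dim by 1 on the left and 2 on the right with value 0.5.
y = F.pad(x, (1, 2), mode="constant", value=0.5)
print(y.shape)  # torch.Size([2, 6])
```
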
Priya Savithiri c2ef5f4165
Add HardtanhBackward TOSA and LINALG support (#1721) 2023-03-06 10:16:37 -08:00
Chi_Liu 8a7340dfb5
[TOSA] aten.index.tensor multiple indexes support (#1868) 2023-02-13 23:07:15 -08:00
Chi_Liu cc819e73dd
[TOSA] Fix broadcast_to input and output different shape support (#1855) 2023-02-09 09:15:14 -08:00
Vivek Khandelwal ed9d8d1fb7 [MLIR][TORCH] Add support for clone op with channels last memory format
Fixes https://github.com/llvm/torch-mlir/issues/1829

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-02 16:04:42 +05:30
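
A minimal sketch of the clone-with-memory-format case the commit handles (the tensor shape is an arbitrary example):

```python
import torch

x = torch.randn(1, 3, 4, 5)  # NCHW, default contiguous layout
y = x.clone(memory_format=torch.channels_last)
print(y.is_contiguous(memory_format=torch.channels_last))  # True
print(torch.equal(x, y))  # True, same values but a different physical layout
```
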
Ramiro Leal-Cavazos 6c86bec04f
build: update llvm tag to 9acc2f37 (#1828)
This commit makes the following changes:

- Update dialects to use fold API `kEmitFoldAdaptorFolder` and update
signature of `fold` methods (see PSA
https://discourse.llvm.org/t/psa-new-improved-fold-method-signature-has-landed-please-update-your-downstream-projects/67618)
- Replace `makeArrayRef` with `ArrayRef` (see
https://reviews.llvm.org/D140896)
- Remove `TypeRange{}` arg from `b.create<scf::IfOp>` since builder no
longer takes that argument
- Make `func`s in `Torch/invalid.mlir` private, since symbol
declarations cannot be public. (see https://discourse.llvm.org/t/rfc-symbol-definition-declaration-x-visibility-checks/2140)
2023-01-25 01:29:42 +00:00
Chi_Liu c5ac42a198
[TOSA] Add aten.view shape -1 support (#1815) 2023-01-20 11:56:26 -08:00
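
For context (illustrative example, not from the commit), a -1 entry in the target shape of `view` is inferred from the remaining dimensions:

```python
import torch

x = torch.randn(2, 3, 4)     # 24 elements
# The -1 is inferred as 24 / 4 = 6.
print(x.view(-1, 4).shape)   # torch.Size([6, 4])
```
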
Chi_Liu 2587b3f583
[TOSA] Add aten.Index.Tensor support (#1771) 2023-01-19 21:19:00 -08:00
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
Ashay Rane 4e4a571104
[TOSA] Add LeakyReLU conversion pass (#1790)
* feat(TorchToTOSA): LeakyReLU legalization

* test(LeakyReLU): Add LIT test and enable e2e test

Co-authored-by: Philipp Braun <philipp.braun@amd.com>
2023-01-10 21:42:07 -08:00
Ashay Rane 0faba6d2fc
build: update llvm tag to de3f0f7f (#1789)
Credit to @vivekkhandelwal1 for finding the necessary changes.

Summary of changes:

 - Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.

 - Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)

 - Add dependency on MhloPasses when MHLO is enabled

 - Specify result type when using mhlo::DotOp
2023-01-10 17:07:19 -06:00
Raghavan Raman 0979df6589
Fix unsqueeze in Torch to Tosa conversion (#1780) 2023-01-10 11:09:58 -08:00
Chi_Liu 9dc09ac8c5
[TOSA] Add aten.gather support for tosa (#1680) 2022-12-21 11:04:07 -08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
Tanyo Kwok 577e38da58
build: update llvm tag to 7ccbb4df (#1736)
Summary of changes:

 - LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
   (although not all) places
   (https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
   This patch replaces the affected instances of `llvm::Optional` with
   `std::optional`.

 - In the usages of llvm::Optional that remain, llvm::Optional::value()
   is deprecated, so this patch replaces them with a dereference.
2022-12-20 18:17:27 +08:00
Chi_Liu 163d19cce6
[TOSA] Add aten.add/sub.Scalar/Tensor si64 type support (#1604) 2022-12-12 12:13:07 -08:00
Sambhav Jain f8a2592905
[Bazel] Resolve circular dependency and add targets for conversion to MLProgram dialect (#1694)
A circular dependency was introduced in e7edcc62fd. 

Specifically, the `makeShapeLLVMCompatible` and `makeShapeTorchCompatible` utilities were being called from `lib/Dialect/Torch/IR/TorchTypes.cpp` and `lib/Dialect/Torch/IR/TorchOps.cpp` defined under the `:TorchMLIRTorchDialect` bazel target, leading it to take a dependency on `:TorchMLIRConversionUtils` which already depends on `:TorchMLIRTorchDialect`, hence creating a circular dependency.

This commit resolves the same by moving said utilities from `lib/Conversion/Utils/Utils.cpp` to `lib/Dialect/Torch/Utils/Utils.cpp`. Please LMK if there's a better way to fix this and I will update the code.

This commit also adds the required targets to support building the new conversions from Torch to ML Program dialect that was introduced in f416953600.

Bazel build GHA triggered manually to verify: https://github.com/sjain-stanford/torch-mlir/actions/runs/3645944517
2022-12-08 09:49:54 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having their operand have type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
2022-12-07 12:20:41 -08:00
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Make fix for the dynamic dims, since
  `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1` in llvm
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes in order to remain consistent
  with the Torch and MLIR semantics.
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Chi_Liu 29c8f47723
[TOSA] Add aten.clamp op tosa support (#1609)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2022-11-18 13:32:13 -08:00
Ramiro Leal-Cavazos 09ca07bca0
`m_TorchConstant{Int/Bool}List` -> `m_TorchListOfConstant{Int/Bool}s` (#1601)
This commit renames the patterns used to match on lists of constant
values to `m_TorchListOfConstant{valueType}s`. This is needed to avoid
ambiguity for when `valueType` has `Optional` in it. In particular, it
makes it clear whether the values in the list are optional or the list
itself is optional.
2022-11-16 20:33:12 +00:00
Chi_Liu dfe7513a45
[MLIR][TORCH] Fix aten.unsqueeze op (#1578)
The valid range of the unsqueeze dim is [-input.dim() - 1, input.dim() + 1); the bug was that the lowering forgot to add 1.
2022-11-14 09:09:15 -08:00
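
A quick illustration of that range on a rank-2 tensor (values assumed for the example):

```python
import torch

x = torch.randn(2, 3)          # input.dim() == 2, so valid dims are [-3, 3)
print(x.unsqueeze(2).shape)    # torch.Size([2, 3, 1]), dim == input.dim() is allowed
print(x.unsqueeze(-3).shape)   # torch.Size([1, 2, 3])
```
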
Ramiro Leal-Cavazos b723186983
Remove all but one of valsem ops + move fill.Scalar to elementwise (#1531)
This commit removes almost all of the valsem ops, since the value
semantics version of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
2022-10-28 15:06:11 +00:00
Chi_Liu ad6f5848cb
[MLIR][TORCH] Add TorchToTosa lowering for aten.where.self op (#1454) 2022-10-18 09:39:39 -07:00
Ramiro Leal-Cavazos 82a3860e25
build: update llvm tag to 4546397e (#1502)
This commit makes the following changes needed to bump LLVM:

- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)

Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
2022-10-18 04:22:53 +00:00
Vivek Khandelwal d3cc3f1aff [tosa] Add lowering for aten.to.dtype and aten._to_copy op
This commit adds the TorchToTosa lowering for `aten.to.dtype` and
`aten._to_copy` ops.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
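
An illustrative use of the ops being lowered (the dtype choice is arbitrary):

```python
import torch

x = torch.arange(4, dtype=torch.int64)
# Tensor.to with a dtype is what reaches aten.to.dtype / aten._to_copy.
y = x.to(torch.float32)
print(y.dtype)  # torch.float32
```
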
Vivek Khandelwal 56f9a9b5de [tosa] Add TorchToTosa lowering for torch.prim.NumToTensor.Scalar op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Ashay Rane faa9a78e38
build: update llvm tag to 6f46ff37 (#1448)
Summary of changes:
 - Updated references to the Arith dialect
   (https://reviews.llvm.org/D134762)
 - Switched to prefixed accessors for MemRef dialect
   (https://reviews.llvm.org/D134995)
 - Fixed warnings about signed/unsigned comparisons, ignored return
   values, and unused variables
2022-10-05 08:28:06 -05:00
Vivek Khandelwal 9dd5ae8239
[tosa] Add TorchToTosa lowering for aten.arange.start_step op (#1442) 2022-09-30 07:33:41 -07:00
Vivek Khandelwal 6db513c51d
[tosa] Add support for some cases of aten.broadcast_to op (#1429)
This commit adds support for TorchToTosa lowering of
`aten.broadcast_to` op for cases:
1.) When the rank of input and output tensor is equal.
2.) When the rank of input tensor is zero.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 09:40:56 -07:00
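
A short sketch of the two supported cases (shapes are arbitrary examples):

```python
import torch

# Case 1: input and output have the same rank.
a = torch.randn(1, 3)
print(torch.broadcast_to(a, (4, 3)).shape)  # torch.Size([4, 3])

# Case 2: the input is a rank-zero (scalar) tensor.
b = torch.tensor(7.0)
print(torch.broadcast_to(b, (2, 2)).shape)  # torch.Size([2, 2])
```
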
Vivek Khandelwal bce00c8ed1 [tosa] Fix torch.vtensor.literal lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 17:03:10 +05:30
Eric Kunze cb1b8796a2
Convert torch si literals into signless for TOSA (#1421) 2022-09-26 16:54:27 -07:00
Vivek Khandelwal 4ef6e69ed4
[MLIR][TORCH] Add TorchToTosa lowering for aten.clone op (#1388)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

Co-authored-by: Suraj Sudhir <16977902+sjarus@users.noreply.github.com>
2022-09-20 15:07:46 -07:00
Vivek Khandelwal 1ffd42bbde
[MLIR][TORCH] Add TorchToTosa lowering for aten.broadcast_to op (#1386)
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-20 10:04:51 -07:00
long.chen 797feaf129
[torch-mlir][Tosa] fix mismatch between the reshape op's new-shape attribute and its result type when lowering torch.max.dim to tosa (#1378) 2022-09-16 21:29:56 -07:00