Commit Graph

858 Commits (6c86bec04f36291eb45d850718e837ceefbac670)

Author SHA1 Message Date
Ramiro Leal-Cavazos a54b334578
Allow running DecomposeComplexOps more than once (#1671)
The current implementation of `DecomposeComplexOps` fails if an op
expected to be decomposed does not get decomposed in the first
iteration of the `createTorchSimplificationPipeline` in
`LowerToBackendContractPass`. However, some graphs require multiple
iterations of `createTorchSimplificationPipeline` to fully propagate
all statically knowable information, such as dtypes and shapes, to the
entire graph, sometimes resulting in the need to run
`DecomposeComplexOps` more than once.

This commit changes `DecomposeComplexOps` to use a greedy algorithm
for pattern application and moves the legalization check of ops to the
`LowerToBackendContractPass` to allow for the `DecomposeComplexOps` to
run more than once.
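For readers unfamiliar with the greedy driver, a minimal sketch (the `populateDecomposePatterns` helper is assumed for illustration and is not an actual torch-mlir API) of what greedy pattern application looks like: ops whose decomposition pattern cannot fire yet are simply left in place rather than failing the pass, and the legality check now lives in `LowerToBackendContractPass`.

```cpp
#include "mlir/IR/Operation.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

// Sketch only: apply decomposition patterns greedily. An op whose pattern
// cannot fire yet (e.g. shapes/dtypes still unknown) is left untouched, so
// the pass can safely run again on a later pipeline iteration.
LogicalResult runDecompositions(Operation *root, MLIRContext *ctx) {
  RewritePatternSet patterns(ctx);
  // populateDecomposePatterns(patterns); // assumed helper for this sketch
  return applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```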
2022-12-08 09:26:38 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having an operand of type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
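A conceptual sketch of the corrected replacement order (the op classes are the real torch-mlir ones, but the function and its structure are illustrative, not the pass implementation):

```cpp
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

using namespace mlir;
using namespace mlir::torch::Torch;

// Sketch: fold away the CopyToValueTensorOp sinks first, so no remaining op
// is ever left holding an operand of the wrong (!torch.vtensor vs.
// !torch.tensor) type, and only then forward the operand of the
// CopyToNonValueTensorOp source.
void rewriteForwardSlice(PatternRewriter &rewriter,
                         CopyToNonValueTensorOp source,
                         ArrayRef<CopyToValueTensorOp> sinks) {
  for (CopyToValueTensorOp sink : sinks)
    rewriter.replaceOp(sink, sink.getOperand());   // step 1: sinks first
  rewriter.replaceOp(source, source.getOperand()); // step 2: then the source
}
```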
2022-12-07 12:20:41 -08:00
Vivek Khandelwal 3e4bb2bd8e [MLIR][TORCH] Add E2E support for randn and randn.generator op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-06 22:41:24 +05:30
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit replaces the `InsertRngGlobalsPass` with the `TorchConversionToMLProgram`
pass. This commit also adds the `MLProgramBufferize` pass for the
bufferization of ml_program dialect ops to run on refbackend.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Eric Kunze 3fc27cf6ca
Update LLVM Tag to 2c1fa734 (#1670)
Summary of changes:
 - Change ShapedType::kDynamicSize -> ShapedType::kDynamic
 - llvm::NoneType has been deprecated, change convertScalarToDtype to use llvm::None
2022-12-01 20:38:28 -08:00
Ramiro Leal-Cavazos b4b92c990e
Replace LCG algorithm with squares64 algorithm in AtenUniformOp (#1633)
This commit replaces the LCG algorithm that was being used by the
`TorchToLinalg` lowering of `AtenUniformOp` to generate random numbers
with the `squares64` algorithm, for the LCG algorithm was producing
tensors that were highly correlated with one another.

Squares64 algorithm: https://arxiv.org/abs/2004.06278

Closes https://github.com/llvm/torch-mlir/issues/1608
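For reference, a self-contained sketch of the squares64 generator from the paper linked above (variable naming follows the paper; how the counter and key map to tensor elements and seeds is an assumption made for illustration):

```cpp
#include <cstdint>

// Counter-based "squares" RNG, 64-bit variant (Widynski, arXiv:2004.06278).
// `ctr` would be a per-element counter (e.g. the linear index) and `key` the
// seed, so each element's output is computed independently of the others.
uint64_t squares64(uint64_t ctr, uint64_t key) {
  uint64_t t, x, y, z;
  y = x = ctr * key;
  z = y + key;
  x = x * x + y; x = (x >> 32) | (x << 32);     // round 1
  x = x * x + z; x = (x >> 32) | (x << 32);     // round 2
  x = x * x + y; x = (x >> 32) | (x << 32);     // round 3
  t = x = x * x + z; x = (x >> 32) | (x << 32); // round 4
  return t ^ ((x * x + y) >> 32);               // round 5
}
```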
2022-12-01 08:30:10 -08:00
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Fix handling of the dynamic dims, since the
  `kDynamicSize` value changed from `-1` to
  `std::numeric_limits<int64_t>::min()` in llvm
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes in order to remain consistent
  with the Torch and MLIR semantics.
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
Abhishek Varma 47f67853ac [RefineTypes] Add Float16Type dtype knowledge support for trivial ops
-- This commit adds Float16Type dtype knowledge support for trivial ops.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-01 10:22:43 +05:30
Ramiro Leal-Cavazos 0983a7f93a
Fix modulus calculation in LCG algorithm of refbackend (#1658)
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm for the chosen
multiplier and increment should be `temp % 2^64` (equivalently, masking
with `2^64 - 1`). However, because we are dealing with i64 values, the
modulus operation happens automatically, so it is not needed.

See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
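For context, a minimal sketch of one LCG step with Knuth's MMIX constants from that page; on a 64-bit unsigned type the `mod 2^64` is just the natural wraparound, which is the point made above:

```cpp
#include <cstdint>

// Knuth's MMIX LCG: multiplier 6364136223846793005, increment
// 1442695040888963407, modulus 2^64. Unsigned 64-bit arithmetic wraps
// around, so the modulus needs no explicit mask; masking with 127 (as the
// old code did) collapses the state to 7 bits.
uint64_t lcgNextSeed(uint64_t seed) {
  return seed * 6364136223846793005ULL + 1442695040888963407ULL; // mod 2^64 implicitly
}
```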
2022-11-30 08:46:52 -08:00
Abhishek Varma c27c1791f1 [MLIR][TORCH] Add e2e support for `aten.amax` op
-- This commit adds e2e support for the `aten.amax` op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Abhishek Varma 2c643adcb9 [TORCH][DECOMPOSE] Fix bug in computeReductionType API
-- This commit fixes a bug in computeReductionType API.
-- The bug pertains to removal of `dim` from the `sizes` array.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Tanyo Kwok bbcdb38d99
Revert "Decompose torch.slice_scatter (#1622)" (#1659)
This reverts commit f3f2f10030.
2022-11-30 12:47:13 +08:00
Sean Silva ecb09c2fc3 [torchdynamo] Fix output size computation for upsample_nearest2d 2022-11-29 01:46:29 -08:00
Abhishek Varma bb259f918a [MLIR][TORCH] Add lowering for `aten._softmax` when `half_to_float=True`
-- This commit adds decompose logic for `aten._softmax` when
   `half_to_float` is `True`.
-- An e2e test case will be added once support for half to float conversion for
   `aten._softmax` is added upstream.
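For intuition, a scalar sketch (illustrative only, not the decompose pattern itself) of what a softmax decomposition computes along one reduction slice; with `half_to_float=True` the inputs are widened from half to float before these steps and the result stays in float:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Numerically stable softmax over a single (non-empty) slice.
std::vector<float> softmaxSlice(std::vector<float> x) {
  float maxVal = *std::max_element(x.begin(), x.end()); // subtract max for stability
  float sum = 0.0f;
  for (float &v : x) {
    v = std::exp(v - maxVal);
    sum += v;
  }
  for (float &v : x)
    v /= sum;
  return x;
}
```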

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-28 22:32:00 +05:30
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Tanyo Kwok 14f1260ac4
Add more mhlo basic converters (#1628)
* Add more mhlo basic converters

* remove unused pinnedMemory constraints

* refine naming
2022-11-24 14:28:34 +08:00
Tanyo Kwok f3f2f10030
Decompose torch.slice_scatter (#1622)
* Decompose torch.slice_scatter

* fix compilation error

* update file check

* fix ci

* fix i64 torch.tensor dtype
2022-11-23 18:14:12 +08:00
Vivek Khandelwal da8fdc9f96 [MLIR][TORCH] Fix refine types crash
This commit fixes https://github.com/llvm/torch-mlir/issues/1599.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-23 15:17:37 +05:30
Tanyo Kwok 4aad5ccf39
fix #1626 return type mismatch (#1634) 2022-11-23 15:02:41 +08:00
Vivek Khandelwal 68f568b704 [MLIR][TORCH] Add E2E support for prims.convert_element_type op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 09:36:36 +05:30
Vivek Khandelwal 55c7e66aa7 [MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs
This commit fixes the aten.mean and aten.mean.dim op decomposition
for supporting large-sized inputs.
This commit also fixes the formatting for the file stats.py

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 08:38:51 +05:30
Tanyo Kwok a9fb0c5459
fix mhlo e2e ci crashes (#1620)
* fix mhlo e2e ci crashes

* add passed tests

* calc dynamic positive dim
2022-11-21 21:50:35 +08:00
Vivek Khandelwal 4cbd3927d7 [MLIR][TORCH] Add aten.sort.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-20 19:00:41 +05:30
Chi_Liu 29c8f47723
[TOSA] Add aten.clamp op tosa support (#1609)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2022-11-18 13:32:13 -08:00
Abhishek Varma 1d949f3ac2 [MLIR][TORCH] Fix aten.upsample_nearest2d op
-- The aten.upsample_nearest2d.vec op is not present
   owing to https://github.com/pytorch/pytorch/pull/85638.
-- So this commit adds a lowering for aten.upsample_nearest2d instead.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-18 13:41:47 +05:30
Sean Silva 39de4d6265 [cleanup] Make diagnostics better
Also remove some unused imports.
2022-11-17 02:09:54 -08:00
Vivek Khandelwal 5f7177da35 [MLIR][TORCH] Add decomposition for aten.var_mean.correction op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-17 13:00:09 +05:30
Gaurav Shukla 0d209998d1
llvm: update tag to e864ac6945 (#1600)
Summary of changes:
1. Replace `string` iterator types by `IteratorType` enum.
(e6598b053d)
2. Update `includes` wrt new directory layout of MLIR HLO codebase.
(9fd8d251a8)
3. Update tags
   llvm: e864ac694540342d5e59f59c525c5082f2594fb8
   MHLO: eab364ba2a66bd0613efb94f8a738c1c97aaee92

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-11-16 14:40:36 -08:00
Ramiro Leal-Cavazos 09ca07bca0
`m_TorchConstant{Int/Bool}List` -> `m_TorchListOfConstant{Int/Bool}s` (#1601)
This commit renames the patterns used to match on lists of constant
values to `m_TorchListOfConstant{valueType}s`. This is needed to avoid
ambiguity for when `valueType` has `Optional` in it. In particular, it
makes it clear whether the values in the list are optional or the list
itself is optional.
2022-11-16 20:33:12 +00:00
Vivek Khandelwal a1d3afdba9 [MLIR][TORCH] Add E2E support for aten.randint.low op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-16 09:54:18 +05:30
AmosLewis 22a5067242 [TOSA] Add more tosa::cast type support 2022-11-16 09:53:10 +05:30
George Petterson 92f385bd9f [MLIR][TORCH] Add E2E support aten.convolution_backward op
This commit adds the decomposition for the `aten.convolution_backward`
and `aten.convolution_backward_overrideable` op.
2022-11-15 07:38:26 +05:30
Chi_Liu dfe7513a45
[MLIR][TORCH] Fix aten.unsqueeze op (#1578)
The valid range of the unsqueeze dim is [-input.dim() - 1, input.dim() + 1); the previous implementation forgot to add the 1.
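A small sketch of the corrected bound (the helper name is illustrative):

```cpp
#include <cstdint>
#include <optional>

// For an input of rank r, unsqueeze accepts dims in [-r - 1, r + 1); e.g. a
// rank-3 input accepts -4..3. The "+ 1" on the bound is what the original
// check was missing.
std::optional<int64_t> normalizeUnsqueezeDim(int64_t dim, int64_t inputRank) {
  int64_t bound = inputRank + 1;
  if (dim < -bound || dim >= bound)
    return std::nullopt; // out of range
  return dim < 0 ? dim + bound : dim;
}
```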
2022-11-14 09:09:15 -08:00
Vivek Khandelwal a558034c1a [MLIR][TORCH] Fix aten.upsample_nearest2d_backward op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-12 00:05:36 +05:30
Yuanqiang Liu 2793a2bd41
fix TorchToMhlo Conversion cmake dependency (#1549) 2022-11-09 18:34:53 -06:00
Vivek Khandelwal fedf8c0640 [MLIR][TORCH] Add E2E support for aten.upsample_nearest2d_backward.vec op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-04 22:10:07 +05:30
Ashay Rane 0409595ccc
mlir: add missing dependency on TableGen targets (#1537)
lib/Dialect/Torch/Utils/Utils.cpp includes TorchOps.h, which, by way of
included header files, refers to both TorchOps.h.inc as well as
TorchTypes.h.inc.  However, the build rules do not specify the
dependency of the `TorchMLIRTorchUtils` target on the TableGen generated
header files, causing spurious build errors.

This patch fixes the problem by adding `MLIRTorchOpsIncGen` and
`MLIRTorchTypesIncGen` to the list of dependencies of
`TorchMLIRTorchUtils`.
2022-11-01 14:59:11 -05:00
Tanyo Kwok 17bc7c89cc
build: update llvm tag to 74fb770d (#1539)
* build: update llvm tag to 74fb770d

This commit makes the following changes needed to bump LLVM:

+ replace usages of `tensor::createPadScalarOp`, see https://reviews.llvm.org/D136493
+ Update file checks
2022-11-01 15:27:09 +08:00
xndcn 759057cbdd [MLIR][TORCH] Fix wrong parameter name "supportFPInputOnly"
The parameter "supportFPInputOnly" of function createPoolingOp() is
supposed to be "supportNonFPInput", which was added to distinguish
between "MaxPool2d" and "AvgPool2d" op in #718
2022-10-30 23:18:08 +08:00
Vivek Khandelwal c86177730d [MLIR][TORCH] Add E2E support for aten.fill.Tensor op
This commit adds the decomposition for `aten.fill.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-30 18:40:47 +05:30
Ramiro Leal-Cavazos b723186983
Remove all but one of valsem ops + move fill.Scalar to elementwise (#1531)
This commit removes almost all of the valsem ops, since the value
semantics versions of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
2022-10-28 15:06:11 +00:00
Daniel Ellis 3e199aaf11
Add better error message for single-tensor tuple returns. 2022-10-25 12:48:55 -04:00
Vivek Khandelwal ca87033d2f [MLIR][TORCH] Add E2E support for aten.mse_loss op
This commit adds decomposition for the `aten.mse_loss` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-25 21:06:58 +05:30
Chi_Liu ad6f5848cb
[MLIR][TORCH] Add TorchToTosa lowering for aten.where.self op (#1454) 2022-10-18 09:39:39 -07:00
Ramiro Leal-Cavazos 82a3860e25
build: update llvm tag to 4546397e (#1502)
This commit makes the following changes needed to bump LLVM:

- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)
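On the last point, a small sketch of the shape rule being respected (the helper name is illustrative): `tosa.reduce_sum` keeps the input rank, with the reduced axis retained as size 1.

```cpp
#include <cstdint>
#include <vector>

// tosa.reduce_sum preserves rank: the reduced axis stays in the result shape
// with extent 1 rather than being dropped.
std::vector<int64_t> reduceSumResultShape(std::vector<int64_t> inputShape,
                                          int64_t axis) {
  inputShape[axis] = 1;
  return inputShape;
}
```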

Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
2022-10-18 04:22:53 +00:00
Prashant Kumar 3a2cd23380 [LINALG] Add lowering for aten::round op.
-- Added the lowering for aten::round op.
-- Added the folding for integer cases.
2022-10-13 02:41:26 +05:30
Ramiro Leal-Cavazos 8f76c74be9
Remove unused input tensor from linalg.generic in aten.convolution (#1487)
This commit removes the `weight` tensor from the inputs of one of the
`linalg.generic` ops generated by the `aten.convolution` linalg
lowering, since the indexed values are not actually used by the body
of the `linalg.generic`. Moreover, in general the `weight` tensor does
not have the same shape as the output tensor of the `linalg.generic`,
so both tensors being indexed by the same indexing maps is wrong.
2022-10-12 14:01:24 -07:00
Abhishek Varma 61db1b5c4d
[MLIR][TORCH] Add e2e support for `aten.Mish` op (#1470)
-- This commit adds e2e support for `aten.Mish` op.
-- The `aten.Mish` op is decomposed as follows:
    Mish(x) = x * Tanh(Softplus(x))
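A scalar sketch of that decomposition (illustrative only):

```cpp
#include <cmath>

// Mish(x) = x * tanh(Softplus(x)), with Softplus(x) = ln(1 + e^x).
// log1p(exp(x)) gives a slightly more accurate softplus for small x.
double mish(double x) {
  double softplus = std::log1p(std::exp(x));
  return x * std::tanh(softplus);
}
```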

Signed-off-by: Abhishek Varma <avarma094@gmail.com>
2022-10-11 14:03:10 -07:00
Gaurav Shukla da90a25f90 [MLIR][TORCH] Add E2E support for `aten.[div.int|bitwise_or.Tensor]` ops
This commit adds lowering of `aten.div.int` and `aten.bitwise_or.Tensor`
ops. Both these ops are required in order to support bloom_560m model.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-10-10 22:28:51 +05:30
Vivek Khandelwal d3cc3f1aff [tosa] Add lowering for aten.to.dtype and aten._to_copy op
This commit adds the TorchToTosa lowering for `aten.to.dtype` and
`aten._to_copy` op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Vivek Khandelwal 56f9a9b5de [tosa] Add TorchToTosa lowering for torch.prim.NumToTensor.Scalar op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Ramiro Leal-Cavazos 8201e7b067
[LINALG] Make `AtenMaxDimOp` use `arith.maxf` to calculate maximum (#1466)
This commit updates the linalg conversion of `AtenMaxDimOp` to use
`arith.maxf` instead of `arith.select` to calculate the maximum. This
allows better vectorization further downstream, since the operation
can be converted to a simple max reduction when the `indices` result
is not used. See: https://github.com/iree-org/iree/issues/10666.
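A scalar sketch of the difference (names are illustrative, and arith.maxf's NaN semantics are glossed over here):

```cpp
#include <cmath>

// Before: compare + select inside the reduction body (arith.cmpf + arith.select).
float maxViaSelect(float acc, float v) { return v > acc ? v : acc; }

// After: a direct float maximum (arith.maxf), which is easier to recognize as
// a plain max reduction when the `indices` result is unused.
float maxViaMaxf(float acc, float v) { return std::fmax(acc, v); }
```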
2022-10-05 18:22:59 -07:00
Ashay Rane faa9a78e38
build: update llvm tag to 6f46ff37 (#1448)
Summary of changes:
 - Updated references to the Arith dialect
   (https://reviews.llvm.org/D134762)
 - Switched to prefixed accessors for MemRef dialect
   (https://reviews.llvm.org/D134995)
 - Fixed warnings about signed/unsigned comparisons, ignored return
   values, and unused variables
2022-10-05 08:28:06 -05:00
Gleb Kazantaev 708fa346a6
Fix Base Lazy Backend Type Conversion (#1412)
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend

* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types

* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions

* Test updates

* Move back module variable name

* Remove RefineTypes from TorchMlirLoweringContext::Build()

* Rename pass; Remove passes from base lazy backend

* Rename pass to VerifyBackendContractPass

* Aligned cmd pass name; Fixed TorchConversion passes registration
2022-10-04 15:53:28 -07:00
Daniel Ellis 2ba71af651 Add support for mv decomposition. 2022-10-04 11:34:45 -04:00
Prashant Kumar 6777a9484d [LINALG] Add lowering for the aten.upsample_nearest2d op. 2022-10-04 17:20:29 +05:30
Ashay Rane 855d267c57
build: update shape library after PyTorch version update (#1449)
The auto-update of the PyTorch version broke the Torch-MLIR build
because it did not update the shape library.  Going forward, we should
add the shape library update to the PyTorch version update action.
2022-10-02 14:05:53 -05:00
Vivek Khandelwal 9dd5ae8239
[tosa] Add TorchToTosa lowering for aten.arange.start_step op (#1442) 2022-09-30 07:33:41 -07:00
AmosLewis 940959589b [MLIR][TORCH] Add Byte and Char Dtype support 2022-09-30 13:19:31 +05:30
Vivek Khandelwal 6db513c51d
[tosa] Add support for some cases of aten.broadcast_to op (#1429)
This commit adds support for TorchToTosa lowering of
`aten.broadcast_to` op for cases:
1.) When the rank of input and output tensor is equal.
2.) When the rank of input tensor is zero.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 09:40:56 -07:00
Ramiro Leal-Cavazos 0f15b3a594
Bump shape library (#1427) 2022-09-29 09:02:28 -07:00
Vivek Khandelwal bce00c8ed1 [tosa] Fix torch.vtensor.literal lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 17:03:10 +05:30
JakopinA 8ef0c874c2
Implement Expand/Collapse Functionality for Aten.View (#1353) 2022-09-27 11:08:14 -07:00
Eric Kunze cb1b8796a2
Convert torch si literals into signless for TOSA (#1421) 2022-09-26 16:54:27 -07:00
武家伟 c03aa63325
[MLIR] Add canonicalizer for aten.slice.t op (#1413)
* [MLIR] Add canonicalizer for aten.slice.t op

* Add mlir tests and strengthen the canonicalizer

* rename variable

Co-authored-by: Vremold <xremold@gamil.com>
2022-09-26 14:35:50 -07:00
Ashay Rane a60acf272d
build: update llvm tag to bebc9695 (#1415)
Summary of changes:
 - Renamed OptionalArrayRefParameter since the name conflicts with an
   upstream symbol that has a different meaning
   (https://reviews.llvm.org/D133819)
 - Removed extraneous dependency between TorchMLIRTorchToMhlo and
   ChloOps, since the existing dependency on MhloDialect is sufficient
 - Fixed code to prevent warnings related to comparisons between signed
   and unsigned values
2022-09-26 11:44:54 -05:00
武家伟 ab7aa01b1e
[MHLO] Add torch-to-mhlo e2e support for aten.gather op (#1410)
* Add torch-to-mhlo e2e support for aten.gather op 

* Add more e2e tests for torch.aten.gather op
2022-09-25 22:07:46 +08:00
Quinn Dawkins 53bf09ceef
Fix iterator types for embedding bag sum mode (#1371) 2022-09-23 13:13:47 -04:00
Ashay Rane b0b2b3a199
build: add missing dependency on MLIRTorchTypesIncGen (#1405) 2022-09-23 08:08:16 -05:00
Tanyo Kwok 16dd7e2e5f
Fix dynamic shapes type verifications (#1409)
* Fix dynamic shapes type verifications
2022-09-23 20:50:29 +08:00
Tanyo Kwok 72e422b589
Add relu6 and binary broadcasts (#1408)
* Add relu6 and binary broadcasts
2022-09-23 20:39:15 +08:00
Tanyo Kwok 061a97c3f2
Replace empty_like && empty_memory_format with full/full_like (#1398)
* Replace empty_like && empty_memory_format with full/full_like

* fix broadcast rank0 tensor
2022-09-23 10:24:36 +08:00
Vivek Khandelwal 4ef6e69ed4
[MLIR][TORCH] Add TorchToTosa lowering for aten.clone op (#1388)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>

Co-authored-by: Suraj Sudhir <16977902+sjarus@users.noreply.github.com>
2022-09-20 15:07:46 -07:00
Vivek Khandelwal 1ffd42bbde
[MLIR][TORCH] Add TorchToTosa lowering for aten.broadcast_to op (#1386)
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-20 10:04:51 -07:00
武家伟 0e2e94d542
Add torch-to-mhlo e2e support for AtenArangeStartStepOp (#1385)
Co-authored-by: Vremold <xremold@gamil.com>
2022-09-20 22:31:24 +08:00
武家伟 4f3cd236dd
Strengthen the shape inference for aten.arange-like op (#1367)
Strengthen the shape inference for aten.arange-like ops by
1. registering the aten.sub and aten.ceil.Scalar ops and designing folders for them.
2. registering a new constant-like op, Torch::ConstantNumberOp, and designing a canonicalizer for it.
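A minimal sketch of the static size those folders unlock (the helper is illustrative): once start, end, and step fold to constants, the arange-like result length is known.

```cpp
#include <cmath>
#include <cstdint>

// Number of elements of an arange-like op with known scalar start/end/step.
int64_t arangeNumel(double start, double end, double step) {
  return static_cast<int64_t>(std::ceil((end - start) / step));
}
```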
2022-09-20 12:40:19 +08:00
Sambhav Jain bb47b36eac
Add a `AllowedInModuleInitializer` trait to denote ops that are permitted in the module initializer (#1379)
This PR adds an `AllowedInModuleInitializer` trait to keep track of ops that are permitted in the module initializer. We have a handful of such ops that are produced by the IValue importer, so this change avoids maintaining a list of ops in `TorchOps.cpp` that could lead to spurious merge conflicts, and it helps us integrate torch-mlir into our downstream compiler better. Please let me know if you'd prefer a better name for the trait itself. Feedback is welcome!
2022-09-19 14:56:35 -07:00
long.chen 797feaf129
[torch-mlir][Tosa] fix the reshape's new shape attribute mismatching the reshape's result type when lowering torch.max.dim to tosa (#1378) 2022-09-16 21:29:56 -07:00
Vivek Khandelwal 04f3a4ffce [MLIR][TORCH] Add support for bool element type for aten.sum[.dim_IntList] op
This commit adds bool element type support for `aten.sum` and
`aten.sum.dim_IntList` op.
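Conceptually (a scalar sketch, not the lowering code), summing a bool tensor counts its true elements while accumulating into a wider integer type:

```cpp
#include <cstdint>
#include <vector>

// Each true element contributes 1; the accumulator is i64.
int64_t sumBool(const std::vector<bool> &values) {
  int64_t acc = 0;
  for (bool v : values)
    acc += v ? 1 : 0;
  return acc;
}
```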

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-17 09:18:34 +05:30
Ashay Rane 1895b581c4
shape-lib: generate string as multiple lines to work with MSVC (#1370)
As @oroppas identified, literal strings that are over 16,380 characters
cause the MSVC compiler to throw an error (C2026), eventually causing
the Windows build of Torch-MLIR to fail because the length of the
generated MLIR for the shape library crosses the allowed threshold.

This patch fixes the problem by making the Python script generate one
literal string per line to satisfy the MSVC compiler.
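For illustration (the variable name and the string contents below are placeholders, not the real generated shape library), the trick relies on adjacent C++ string literals being concatenated by the compiler, so each individual literal stays far below the 16,380-character limit:

```cpp
// One short literal per line; the compiler concatenates adjacent literals
// into a single constant, so the combined string can be arbitrarily long
// while no single literal triggers MSVC error C2026.
static const char kShapeLibrary[] =
    "// placeholder line 1 of the generated MLIR\n"
    "// placeholder line 2 of the generated MLIR\n";
```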

Thanks to @oroppas for the bulk of the effort required to resolve this!
2022-09-16 15:16:01 -05:00
武家伟 b316918947
Add AtenClampOp conversion pattern to MHLO (#1356)
Add AtenClampOp conversion pattern to MHLO
2022-09-16 15:09:21 +08:00
Sean Silva 851ce0c940 Remove TorchLoweringPipelineOptions from TorchConversion pipelines
TorchLoweringPipelineOptions only applies to the frontend lowering
pipeline.
2022-09-14 11:20:29 -07:00
Ashay Rane 2bb5f4d8fe
build: update llvm tag to 4d4ca6c9 (#1359)
Summary of changes:
 - Updated emitAccessorPrefix since the default value has changed
   (https://reviews.llvm.org/D133179)
 - Updated RefineTypes pass since Lattice::isUninitialized() is removed
   (https://reviews.llvm.org/D132800)
 - Updated MHLO tag so that it builds with the updated LLVM tag
 - Disabled two tests that cause segfaults in the TOSA backend (see Issue
   #1361)
2022-09-13 21:24:43 -05:00
gpetters94 48418b9c22
Fold away type_as (#1358) 2022-09-12 18:59:12 -04:00
Tanyo Kwok 7f63a17a46
[MHLO] add new options to pipeline (#1331) 2022-09-12 10:27:41 -07:00
Vivek Khandelwal 71b1f0dd7a [MLIR][TORCH] Add E2E support for aten.index.Tensor_hacked_twin op
This commit adds lowering of `index.Tensor_hacked_twin` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-12 21:47:18 +05:30
George Petterson a12b9c4492 Add lowering for aten::cumsum 2022-09-12 09:28:07 +05:30
Vivek Khandelwal 326f21229e [MLIR][TORCH] Fix shape calculation for aten::pow.Tensor_Tensor op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 21:14:12 +05:30
Vivek Khandelwal e35741fb1d [MLIR][TORCH] Add E2E support for aten.bitwise_not op
This commit adds lowering of `aten.bitwise_not` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 17:52:12 +05:30
Vivek Khandelwal 7dfadc2498 [MLIR][TORCH] Add E2E support for aten.lift_fresh_copy op
This commit adds lowering of `aten.lift_fresh_copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 12:32:16 +05:30
Vivek Khandelwal c19fccfca2 [MLIR][TORCH] Add E2E support for aten.pow.Tensor_Tensor op
This commit adds lowering of `aten.pow.Tensor_Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 10:01:42 +05:30
武家伟 6a1893a517
[MLIR][MHLO] Add AtenFrobeniusNormDimOp and add its conversion pattern to MHLO and linalg (#1306)
* Add aten.frobenius_norm.dim op and init its conversion pattern to linalg and MHLO.
* Run symbolic-shape-optimization before hlo-legalize-to-linalg to fit more mhlo e2e tests.
2022-09-08 10:15:36 +08:00
Ashay Rane 93f7c0ceb5
build: update llvm tag to d2613d5b (#1343)
Summary of changes:
 - Update the dataflow analysis in RefineTypes.cpp
 - Add tosa-to-arith pass after tosa-to-linalg pass, since
   tosa-to-linalg (and canonicalizations) can produce tosa.const() ops
 - Fixed warning about not marking `matchAndRewrite` as override
2022-09-07 14:35:14 -05:00
Gaurav Shukla 99093d0623 [TORCH] Add decomposition of `aten.linear` op
This commit adds decomposition of `aten.linear` op. Due to limited
support at tosa backend in case of dynamic dimensions, this
decomposition is currently disabled for tosa backend.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-09-07 16:58:27 +05:30
Quinn Dawkins cc86cc0f02
Revert "Implement Non-Expand/Collapse Functionality for Aten.View (#1309)" (#1347)
Reverting commit a6a48ba233 to revise unit tests and address dynamic shape handling based on comments in #1309
2022-09-07 01:38:11 -04:00
JakopinA a6a48ba233
Implement Non-Expand/Collapse Functionality for Aten.View (#1309)
Focuses on statically sized cases such as [2, 3] -> [3, 2].
2022-09-06 14:46:04 -04:00
Tanyo Kwok 37f57a9828
Delete ConvertAtenNativeLayerNormOp from TorchToLinalg (#1336)
The ConvertAtenNativeLayerNormOp pattern is deleted because we already have a
decomposition; see https://github.com/llvm/torch-mlir/pull/1332
2022-09-05 10:19:20 +08:00
Tanyo Kwok 512f2d9c23
Add decomposition to aten.native_layer_norm (#1332)
* Add decomposition to aten.native_layer_norm

* fix ci error
2022-09-02 09:29:22 +08:00
Tanyo Kwok 57d8ec151f
[MHLO] add VerifyMhloBackendContract (#1321)
* [MHLO] add VerifyMhloBackendContract

* guard with macro
2022-09-01 17:08:17 +08:00