Commit Graph

845 Commits (fd236b2c89158fa8cf4598ab4ca77c82da681f14)

Author SHA1 Message Date
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Ashay Rane 4e4a571104
[TOSA] Add LeakyReLU conversion pass (#1790)
* feat(TorchToTOSA): LeakyReLU legalization

* test(LeakyReLU): Add LIT test and enable e2e test

Co-authored-by: Philipp Braun <philipp.braun@amd.com>
2023-01-10 21:42:07 -08:00
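For reference, the semantics this legalization must implement, as a minimal scalar C++ sketch (the actual lowering builds the equivalent TOSA tensor ops):

```cpp
// LeakyReLU reference semantics: identity for non-negative inputs,
// scaling by `alpha` for negative inputs.
float leaky_relu(float x, float alpha) {
  return x >= 0.0f ? x : alpha * x;
}
```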
Ashay Rane 0faba6d2fc
build: update llvm tag to de3f0f7f (#1789)
Credit to @vivekkhandelwal1 for finding the necessary changes.

Summary of changes:

 - Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.

 - Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)

 - Add dependency on MhloPasses when MHLO is enabled

 - Specify result type when using mhlo::DotOp
2023-01-10 17:07:19 -06:00
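A before/after sketch of the attribute switch, assuming an OpBuilder `b` is in scope:

```cpp
// Before: a generic ArrayAttr of IntegerAttrs.
// ArrayAttr kernel = b.getI64ArrayAttr({3, 3});

// After: DenseI64ArrayAttr stores the raw i64 values directly.
DenseI64ArrayAttr kernel = b.getDenseI64ArrayAttr({3, 3});
```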
Raghavan Raman 0979df6589
Fix unsqueeze in Torch to Tosa conversion (#1780) 2023-01-10 11:09:58 -08:00
Jiahao Li 8dc5d985eb
Add e2e support for aten logical or/and/xor/not ops (#1761) 2023-01-03 18:11:25 -08:00
Ramiro Leal-Cavazos d44bdd2728
Add `hasDtype` checks everywhere dtypes are used in decompositions (#1750)
There are several decompositions that assume the operands of the op
have dtypes available; however, the only time dtypes are guaranteed to
be present is when the graph has reached the backend contract. In
general, every pass that happens before reaching the backend contract
should not assume dtypes are available and should use `hasDtype` to
check first.

This commit adds `hasDtype` checks to every decomposition that uses
dtypes.
2023-01-03 14:19:18 -08:00
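A minimal sketch of the guard this commit adds, with `AtenFooOp` as a hypothetical stand-in for any decomposed op:

```cpp
// Hypothetical decomposition pattern showing the hasDtype guard.
LogicalResult matchAndRewrite(AtenFooOp op,
                              PatternRewriter &rewriter) const override {
  auto resultType = op.getType().cast<BaseTensorType>();
  if (!resultType.hasDtype())
    return rewriter.notifyMatchFailure(
        op, "expected result type to have a dtype");
  // Only past this point is resultType.getDtype() safe to call.
  // ... rest of the decomposition ...
  return success();
}
```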
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
2023-01-03 14:18:26 -08:00
Srirammaswamy a88e3766e8
Add E2E support for LeakyRelu and LeakyReluBackward ops (#1733)
Co-authored-by: srirammaswamy <srirammaswamy@gmail.com>
2023-01-03 08:30:16 -08:00
powderluv 3d50d3d9fe
Revert "rebase llvm: 5f24f893cac7aaea292c70f8aa83b021499114be (#1760)" (#1765)
This reverts commit fa356cce50.
2023-01-01 10:56:06 -08:00
Xiafei Qiu fa356cce50
rebase llvm: 5f24f893cac7aaea292c70f8aa83b021499114be (#1760) 2022-12-31 00:07:54 +08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, blocking subsequent
PRs.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Shivam Gupta 2f45959f0d
Prelu lowering to linalg (#1712)
2022-12-28 08:51:33 +05:30
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Jiahao Li 49071f86e6
[MHLO] Evaluate RuntimeAssertOp at compile time (#1732) 2022-12-22 17:12:52 +08:00
Tanyo Kwok 297fd3aa47
Revert "reimplement linear lowering torchToMhlo (#1524)" (#1744)
This reverts commit 50b524546f.
2022-12-21 21:24:07 -08:00
Jiahao Li 60a139271d
Add aten.std.correction op and its decomposition (#1731) 2022-12-21 21:02:40 -08:00
zzp_miracle 50b524546f
reimplement linear lowering torchToMhlo (#1524) 2022-12-22 10:15:16 +08:00
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
Chi_Liu 9dc09ac8c5
[TOSA] Add aten.gather support for tosa (#1680) 2022-12-21 11:04:07 -08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
pranavmulticore 0f6008c802
Added GeluBackward: MHLO support (#1725) 2022-12-21 20:09:43 +08:00
Abhishek Varma 66d7a412cb [RefineTypes] Fix knowledge dtype for `aten.embedding` op
-- The dtype of the result of `aten.embedding` should match that of
   the `weight` operand (operand[0]) instead of being hardcoded to f32.
-- This commit fixes that.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-20 19:56:12 +05:30
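A hedged sketch of the transfer-function change in `RefineTypes` (exact lattice accessors may differ from what is shown):

```cpp
// Derive the result dtype from operand 0 (the weight tensor) instead of
// hardcoding f32.
auto knowledge =
    ValueKnowledge::getTensorPessimisticValueState(op->getContext());
knowledge.dtype = operands[0]->getValue().dtype; // follow `weight`
// Previously (wrong): knowledge.dtype = Float32Type::get(op->getContext());
```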
Tanyo Kwok 577e38da58
build: update llvm tag to 7ccbb4df (#1736)
Summary of changes:

 - LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
   (although not all) places
   (https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
   This patch replaces the affected instances of `llvm::Optional` with
   `std::optional`.

 - In the remaining usages of llvm::Optional, llvm::Optional::value()
   is deprecated, so this patch replaces those calls with a dereference.
2022-12-20 18:17:27 +08:00
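A before/after sketch of the second change, assuming a variable `maybeDim` of type `llvm::Optional<int64_t>`:

```cpp
// Before (deprecated): int64_t dim = maybeDim.value();
// After: a plain dereference.
int64_t dim = *maybeDim;
```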
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add Complex number dtype support to torch tensors. Add
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
Ramiro Leal-Cavazos 211cf8fc36
Add `report_fatal_error` to `getTypeForScalarType` (#1722)
Functions like `getTypeForScalarType` that map one set of types to
another should not fail, and when they do, it should be obvious to
the developer that the function has an unhandled case.

Instead of silently failing when encountering an unsupported type,
this commit adds a `report_fatal_error` at the end, similar to other
type translation functions in this file.
2022-12-15 08:33:14 -08:00
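A hedged sketch of the resulting pattern, with the case list abbreviated (the real function in torch-mlir's type utilities takes additional parameters):

```cpp
#include "llvm/Support/ErrorHandling.h"

// Total mapping: every enumerator is either handled explicitly or hits
// report_fatal_error, so an unhandled case can no longer fail silently.
Type getTypeForScalarType(MLIRContext *context,
                          torch_upstream::ScalarType dtypeInt) {
  switch (dtypeInt) {
  case torch_upstream::ScalarType::Float:
    return Float32Type::get(context);
  case torch_upstream::ScalarType::Double:
    return Float64Type::get(context);
  // ... remaining handled cases elided ...
  default:
    llvm::report_fatal_error("unhandled type for getTypeForScalarType");
  }
}
```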
Ramiro Leal-Cavazos 60db793feb
Pass op legality info to `verifyBackendContractPass` (#1705)
In order to verify if a given IR satisfies the backend contract, the
verifier needs to know if decompositions took place, and if so, which
ops were decomposed and which were not.

This commit adds two arguments to `verifyBackendContractPass` to
specify if decompositions took place and which ops to consider backend
legal, similar to the arguments of `LowerToBackendContractPass`.
2022-12-15 08:32:52 -08:00
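A hypothetical signature sketch of the two new arguments (names assumed, mirroring `LowerToBackendContractPass`):

```cpp
// `decompose` records whether decompositions ran; `backendLegalOps` lists
// ops the backend accepts as-is, so the verifier won't flag them.
std::unique_ptr<OperationPass<ModuleOp>>
createVerifyBackendContractPass(bool decompose,
                                ArrayRef<std::string> backendLegalOps);
```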
Prashant Kumar 564403e3a1 Add float16 support in the refbackend.
This will require the https://reviews.llvm.org/D139121 patch to go through.
2022-12-15 21:19:52 +05:30
Sean Silva b60da34f84 [cleanup] Fix a few more llvm::None -> std::nullopt 2022-12-14 05:59:49 -08:00
Ashay Rane f63bb9f86c
build: update llvm tag to 3a020527 (#1717)
Summary of changes:

 - Replace `llvm::None` with `std::nullopt`, since the former is deprecated
   (https://reviews.llvm.org/D139763)

 - Use setter for symbol visibility instead of passing string attribute when
   creating FuncOp
2022-12-14 02:06:39 -06:00
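The `std::nullopt` migration in one line:

```cpp
// Before (deprecated): std::optional<int64_t> stride = llvm::None;
std::optional<int64_t> stride = std::nullopt;
```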
Ahmed S. Taei b1f6832849
Add aten.slice.Tensor & aten.cat folders (#1691) 2022-12-13 13:02:47 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Chi_Liu 163d19cce6
[TOSA] Add aten.add/sub.Scalar/Tensor si64 type support (#1604) 2022-12-12 12:13:07 -08:00
Ramiro Leal-Cavazos 73bd32d06c
Make `getTensorRank` safer by changing return to `Optional<unsigned>` (#1707)
Currently `getTensorRank` returns -1 if it is unable to get the rank
of the tensor. However, not every use in the codebase was checking the
return value, and in some cases the return value was cast to
`unsigned`, leading to infinite loops when an unranked tensor reached
a decomposition.

This commit changes the return of `getTensorRank` to
`Optional<unsigned>` to make it clear to the user that the function
can fail.

This commit also changes a couple of for loops that iterate a vector
in reverse order that can potentially become infinite loops into
range-based for loops.
2022-12-12 08:56:28 -08:00
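A hedged sketch of the safer interface and the check it forces on callers (the in-tree helper may use `llvm::Optional` and different accessors):

```cpp
std::optional<unsigned> getTensorRank(Value tensor) {
  auto tensorType = tensor.getType().dyn_cast<BaseTensorType>();
  if (!tensorType || !tensorType.hasSizes())
    return std::nullopt; // unranked: caller must handle this explicitly
  return tensorType.getSizes().size();
}

// Call sites can no longer ignore failure:
// std::optional<unsigned> maybeRank = getTensorRank(self);
// if (!maybeRank)
//   return rewriter.notifyMatchFailure(op, "expected a ranked tensor");
```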
Vivek Khandelwal d4862ec611 [MLIR][TORCH] Add e2e support for aten.var_mean op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-12-12 15:46:54 +05:30
Vivek Khandelwal f783e19dcb Revert "[MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs"
This reverts commit 55c7e66aa7.
2022-12-09 19:30:46 +05:30
Sambhav Jain f8a2592905
[Bazel] Resolve circular dependency and add targets for conversion to MLProgram dialect (#1694)
A circular dependency was introduced in e7edcc62fd. 

Specifically, the `makeShapeLLVMCompatible` and `makeShapeTorchCompatible` utilities were being called from `lib/Dialect/Torch/IR/TorchTypes.cpp` and `lib/Dialect/Torch/IR/TorchOps.cpp` defined under the `:TorchMLIRTorchDialect` bazel target, leading it to take a dependency on `:TorchMLIRConversionUtils`, which already depends on `:TorchMLIRTorchDialect`, hence creating a circular dependency.

This commit resolves this by moving said utilities from `lib/Conversion/Utils/Utils.cpp` to `lib/Dialect/Torch/Utils/Utils.cpp`. Please let me know if there's a better way to fix this and I will update the code.

This commit also adds the required targets to support building the new conversions from Torch to ML Program dialect that was introduced in f416953600.

Bazel build GHA triggered manually to verify: https://github.com/sjain-stanford/torch-mlir/actions/runs/3645944517
2022-12-08 09:49:54 -08:00
Ramiro Leal-Cavazos a54b334578
Allow running DecomposeComplexOps more than once (#1671)
The current implementation of `DecomposeComplexOps` fails if an op
expected to be decomposed does not get decomposed in the first
iteration of the `createTorchSimplificationPipeline` in
`LowerToBackendContractPass`. However, some graphs require multiple
iterations of `createTorchSimplificationPipeline` to fully propagate
all statically knowable information, such as dtypes and shapes, to the
entire graph, sometimes resulting in the need to run
`DecomposeComplexOps` more than once.

This commit changes `DecomposeComplexOps` to use a greedy algorithm
for pattern application and moves the legalization check of ops to the
`LowerToBackendContractPass` to allow for the `DecomposeComplexOps` to
run more than once.
2022-12-08 09:26:38 -08:00
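A hedged sketch of the greedy-application change (`DecomposeAtenFooOp` is a hypothetical pattern):

```cpp
void runOnOperation() {
  MLIRContext *context = &getContext();
  RewritePatternSet patterns(context);
  patterns.add<DecomposeAtenFooOp>(context); // hypothetical pattern
  // The greedy driver iterates to a fixed point, so decompositions that
  // only become applicable after other rewrites still fire.
  if (failed(applyPatternsAndFoldGreedily(getOperation(),
                                          std::move(patterns))))
    return signalPassFailure();
  // Legality of the surviving ops is now checked by
  // LowerToBackendContractPass, not here.
}
```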
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having their operand have type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
2022-12-07 12:20:41 -08:00
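A sketch of the corrected replacement order described in the last bullet:

```cpp
// Replace the CopyToValueTensorOp sinks first, then the
// CopyToNonValueTensorOp source, so no user is ever left holding an
// operand of the wrong tensor type.
for (CopyToValueTensorOp copy : copyToValueTensorOps)
  rewriter.replaceOp(copy, copy.getOperand());
rewriter.replaceOp(copyToNonValueTensorOp,
                   copyToNonValueTensorOp.getOperand());
```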
Vivek Khandelwal 3e4bb2bd8e [MLIR][TORCH] Add E2E support for randn and randn.generator op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-12-06 22:41:24 +05:30
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit replaces the `InsertRngGlobalsPass` with a
`TorchConversionToMLProgram` pass. It also adds the `MLProgramBufferize`
pass, which bufferizes ml_program dialect ops so they can run on the
refbackend.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Eric Kunze 3fc27cf6ca
Update LLVM Tag to 2c1fa734 (#1670)
Summary of changes:
 - Change ShapedType::kDynamicSize -> ShapedType::kDynamic
 - llvm::NoneType has been deprecated; change convertScalarToDtype to use llvm::None
2022-12-01 20:38:28 -08:00
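The rename in one line:

```cpp
// Before: bool isDyn = dimSize == ShapedType::kDynamicSize;
bool isDyn = dimSize == ShapedType::kDynamic;
```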
Ramiro Leal-Cavazos b4b92c990e
Replace LCG algorithm with squares64 algorithm in AtenUniformOp (#1633)
This commit replaces the LCG algorithm that was being used by the
`TorchToLinalg` lowering of `AtenUniformOp` to generate random numbers
with the `squares64` algorithm, for the LCG algorithm was producing
tensors that were highly correlated with one another.

Squares64 algorithm: https://arxiv.org/abs/2004.06278

Closes https://github.com/llvm/torch-mlir/issues/1608
2022-12-01 08:30:10 -08:00
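For reference, a standalone C++ sketch of `squares64` as given in the cited paper; the lowering emits the equivalent arithmetic in IR:

```cpp
#include <cstdint>

// Counter-based RNG (Widynski, arXiv:2004.06278): five rounds of
// squaring with 32-bit word rotations. `ctr` increments per sample;
// `key` is a fixed constant chosen per the paper.
uint64_t squares64(uint64_t ctr, uint64_t key) {
  uint64_t t, x, y, z;
  y = x = ctr * key;
  z = y + key;
  x = x * x + y; x = (x >> 32) | (x << 32);     // round 1
  x = x * x + z; x = (x >> 32) | (x << 32);     // round 2
  x = x * x + y; x = (x >> 32) | (x << 32);     // round 3
  t = x = x * x + z; x = (x >> 32) | (x << 32); // round 4
  return t ^ ((x * x + y) >> 32);               // round 5
}
```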
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Fix handling of dynamic dims, since the
  `kDynamicSize` value changed from `-1` to
  `std::numeric_limits<int64_t>::min()` in LLVM
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes in order to remain consistent
  with the Torch and MLIR semantics.
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
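A hedged per-dimension sketch of the two conversion utilities (the in-tree versions operate on whole shapes):

```cpp
// Torch dialect keeps its own dynamic-dim sentinel (kUnknownSize), while
// MLIR now uses ShapedType::kDynamic; these helpers translate between them.
int64_t makeDimLLVMCompatible(int64_t torchDim) {
  return torchDim == Torch::kUnknownSize ? ShapedType::kDynamic : torchDim;
}
int64_t makeDimTorchCompatible(int64_t mlirDim) {
  return mlirDim == ShapedType::kDynamic ? Torch::kUnknownSize : mlirDim;
}
```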
Abhishek Varma 47f67853ac [RefineTypes] Add Float16Type dtype knowledge support for trivial ops
-- This commit adds Float16Type dtype knowledge support for trivial ops.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-01 10:22:43 +05:30
Ramiro Leal-Cavazos 0983a7f93a
Fix modulus calculation in LCG algorithm of refbackend (#1658)
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm, for the multiplier
and increment chosen, should be `temp mod 2^64` (equivalently, masking
with `2^64 - 1`). However, because we are dealing with i64 values, the
mod-2^64 reduction happens automatically through wrapping arithmetic,
so no explicit step is needed.

See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
2022-11-30 08:46:52 -08:00
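For context, Knuth's MMIX LCG with the constants referenced above; in wrapping 64-bit arithmetic the mod-2^64 step is implicit:

```cpp
#include <cstdint>

// Knuth's MMIX LCG. Unsigned 64-bit overflow wraps, which is exactly the
// mod-2^64 reduction, so no masking step is needed.
uint64_t lcgNext(uint64_t seed) {
  return seed * 6364136223846793005ULL + 1442695040888963407ULL;
}
```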
Abhishek Varma c27c1791f1 [MLIR][TORCH] Add e2e support for `aten.amax` op
-- This commit adds e2e support for the `aten.amax` op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Abhishek Varma 2c643adcb9 [TORCH][DECOMPOSE] Fix bug in computeReductionType API
-- This commit fixes a bug in the `computeReductionType` API.
-- The bug pertains to the removal of `dim` from the `sizes` array.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Tanyo Kwok bbcdb38d99
Revert "Decompose torch.slice_scatter (#1622)" (#1659)
This reverts commit f3f2f10030.
2022-11-30 12:47:13 +08:00
Sean Silva ecb09c2fc3 [torchdynamo] Fix output size computation for upsample_nearest2d 2022-11-29 01:46:29 -08:00