Commit Graph

714 Commits (401869e31dc49692e21edd3069072016d46469e2)

Author SHA1 Message Date
Yuanqiang Liu 6ab990e1e8
[Torch Dialect] add folder for aten.Int.float (#1863) 2023-02-10 13:59:03 -08:00
Yuanqiang Liu 2f6fdb7f0b
[Torch Dialect] add folder for prim.min.int (#1864) 2023-02-10 13:58:15 -08:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Chi_Liu 00fc14a6e1
[TOSA] Add to.dtype i1 to i64 (#1830) 2023-01-27 09:21:06 -08:00
Gleb Kazantaev 3930588a7e
Enable VerifyBackendContract in LTC backend (#1798)
* Enable VerifyBackendContract in LTC backend

* Update VerifyBackendContract pass

* Move convert_scalar_implicit to jit_utils

* Rename VerifyBackendContract to VerifyBackendContractNoDecompositions

* Update verify-backend-contract-error.mlir test
2023-01-24 22:14:17 -05:00
Ramiro Leal-Cavazos 6c86bec04f
build: update llvm tag to 9acc2f37 (#1828)
This commit makes the following changes:

- Update dialects to use fold API `kEmitFoldAdaptorFolder` and update
signature of `fold` methods (see PSA
https://discourse.llvm.org/t/psa-new-improved-fold-method-signature-has-landed-please-update-your-downstream-projects/67618)
- Replace `makeArrayRef` with `ArrayRef` (see
https://reviews.llvm.org/D140896)
- Remove `TypeRange{}` arg from `b.create<scf::IfOp>` since builder no
longer takes that argument
- Make `func`s in `Torch/invalid.mlir` private, since symbol
declarations cannot be public. (see https://discourse.llvm.org/t/rfc-symbol-definition-declaration-x-visibility-checks/2140)
2023-01-25 01:29:42 +00:00
Maksim Levental 8696752eb6
Expose metadata of torch-mlir types (plus verify DictType key and value types). (#1785) 2023-01-16 10:25:02 -06:00
Ashay Rane 4e4a571104
[TOSA] Add LeakyReLU conversion pass (#1790)
* feat(TorchToTOSA): LeakyReLU legalization

* test(LeakyReLU): Add LIT test and enable e2e test

Co-authored-by: Philipp Braun <philipp.braun@amd.com>
2023-01-10 21:42:07 -08:00
Ashay Rane 0faba6d2fc
build: update llvm tag to de3f0f7f (#1789)
Credit to @vivekkhandelwal1 for finding the necessary changes.

Summary of changes:

 - Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.

 - Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)

 - Add dependency on MhloPasses when MHLO is enabled

 - Specify result type when using mhlo::DotOp
2023-01-10 17:07:19 -06:00
Raghavan Raman 0979df6589
Fix unsqueeze in Torch to Tosa conversion (#1780) 2023-01-10 11:09:58 -08:00
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
2023-01-03 14:18:26 -08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Jiahao Li 49071f86e6
[MHLO] Evaluate RuntimeAssertOp at compile time (#1732) 2022-12-22 17:12:52 +08:00
Tanyo Kwok 297fd3aa47
Revert "reimplement linear lowering torchToMhlo (#1524)" (#1744)
This reverts commit 50b524546f.
2022-12-21 21:24:07 -08:00
zzp_miracle 50b524546f
reimplement linear lowering torchToMhlo (#1524) 2022-12-22 10:15:16 +08:00
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
Chi_Liu 9dc09ac8c5
[TOSA] Add aten.gather support for tosa (#1680) 2022-12-21 11:04:07 -08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add Complex number dtype support to torch tensors. Add
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
Ramiro Leal-Cavazos 60db793feb
Pass op legality info to `verifyBackendContractPass` (#1705)
In order to verify if a given IR satisfies the backend contract, the
verifier needs to know if decompositions took place, and if so, which
ops were decomposed and which were not.

This commit adds two arguments to `verifyBackendContractPass` to
specify if decompositions took place and which ops to consider backend
legal, similar to the arguments of `LowerToBackendContractPass`.
2022-12-15 08:32:52 -08:00
Ahmed S. Taei b1f6832849
Add aten.slice.Tensor & aten.cat folders (#1691) 2022-12-13 13:02:47 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that are imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
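An editor's sketch of what such a pair of Python functions might look like for a pointwise op like `expm1`; the function names and signatures here are illustrative assumptions, not the actual entries in `shape_lib_gen.py`.

```
from typing import List

def expm1_shape(self: List[int]) -> List[int]:
    # Pointwise op: the result shape equals the input shape.
    return self

def expm1_dtype(self_dtype: int) -> int:
    # Pointwise float op: the result dtype equals the input dtype.
    # Dtypes travel through the library as integer torch.ScalarType codes.
    return self_dtype
```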
Chi_Liu 163d19cce6
[TOSA] Add aten.add/sub.Scalar/Tensor si64 type support (#1604) 2022-12-12 12:13:07 -08:00
Ramiro Leal-Cavazos a54b334578
Allow running DecomposeComplexOps more than once (#1671)
The current implementation of `DecomposeComplexOps` fails if an op
expected to be decomposed does not get decomposed in the first
iteration of the `createTorchSimplificationPipeline` in
`LowerToBackendContractPass`. However, some graphs require multiple
iterations of `createTorchSimplificationPipeline` to fully propagate
all statically knowable information, such as dtypes and shapes, to the
entire graph, sometimes resulting in the need to run
`DecomposeComplexOps` more than once.

This commit changes `DecomposeComplexOps` to use a greedy algorithm
for pattern application and moves the legalization check of ops to the
`LowerToBackendContractPass` to allow for the `DecomposeComplexOps` to
run more than once.
2022-12-08 09:26:38 -08:00
Ramiro Leal-Cavazos 76190e8a3f
Remove unnecessary decompose-complex-ops tests (#1693)
This commit removes lit tests from the `decompose-complex-ops` that
are essentially testing a macro expansion, in accordance with
https://github.com/llvm/torch-mlir/blob/main/docs/architecture.md#dos-and-donts-for-unit-vs-end-to-end-testing .
2022-12-08 08:22:08 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having their operand have type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
2022-12-07 12:20:41 -08:00
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit replaces the `InsertRngGlobalsPass` with the `TorchConversionToMLProgram`
pass. This commit also adds the `MLProgramBufferize` pass for the
bufferization of ml_program dialect ops to run on refbackend.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Make fix for the dynamic dims, since
  `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1` in llvm
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes in order to remain consistent
  with the Torch and MLIR semantics.
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
Ramiro Leal-Cavazos 0983a7f93a
Fix modulus calculation in LCG algorithm of refbackend (#1658)
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm for the multiplier
and increment chosen should be `temp % 2^64 = temp & ((1 << 64) - 1)`.
However, because we are dealing with i64 values, the 2^64 modulus
happens automatically through wraparound, so it is not needed.

See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
2022-11-30 08:46:52 -08:00
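As a cross-check of the fix above, a minimal Python model of the generator using Knuth's MMIX multiplier and increment from the linked page; the mask makes explicit the `temp % 2^64` step that i64 wraparound performs for free.

```
MULTIPLIER = 6364136223846793005  # Knuth's MMIX LCG multiplier
INCREMENT = 1442695040888963407   # Knuth's MMIX LCG increment
MASK64 = (1 << 64) - 1            # temp % 2**64; i64 arithmetic does this implicitly

def lcg_next(seed: int) -> int:
    temp = seed * MULTIPLIER + INCREMENT
    return temp & MASK64          # not `temp & 127`, which kept only 7 bits

seed = 1
for _ in range(3):
    seed = lcg_next(seed)
    print(seed)
```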
Tanyo Kwok bbcdb38d99
Revert "Decompose torch.slice_scatter (#1622)" (#1659)
This reverts commit f3f2f10030.
2022-11-30 12:47:13 +08:00
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Tanyo Kwok f3f2f10030
Decompose torch.slice_scatter (#1622)
* Decompose torch.slice_scatter

* fix compilation error

* update file check

* fix ci

* fix i64 torch.tensor dtype
2022-11-23 18:14:12 +08:00
Vivek Khandelwal da8fdc9f96 [MLIR][TORCH] Fix refine types crash
This commit fixes https://github.com/llvm/torch-mlir/issues/1599.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-23 15:17:37 +05:30
Tanyo Kwok 4aad5ccf39
fix #1626 return type mismatch (#1634) 2022-11-23 15:02:41 +08:00
Vivek Khandelwal 55c7e66aa7 [MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs
This commit fixes the aten.mean and aten.mean.dim op decomposition
for supporting large-sized inputs.
This commit also fixes the formatting for the file stats.py

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 08:38:51 +05:30
Vivek Khandelwal 4cbd3927d7 [MLIR][TORCH] Add aten.sort.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-20 19:00:41 +05:30
Chi_Liu 29c8f47723
[TOSA] Add aten.clamp op tosa support (#1609)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2022-11-18 13:32:13 -08:00
Sean Silva 39de4d6265 [cleanup] Make diagnostics better
Also remove some unused imports.
2022-11-17 02:09:54 -08:00
Sambhav Jain fc4c8d4ed9
Enable torch-mlir LIT tests in Bazel (#1585)
Adds support to run `.mlir` LIT tests in bazel. 

```
bazel test @torch-mlir//test/...
```

Follow-on PR will contain these updates:
- Add tests to GHA CI workflow
- Add `.py` LIT tests to bazel
2022-11-15 14:02:19 -08:00
Chi_Liu dfe7513a45
[MLIR][TORCH] Fix aten.unsqueeze op (#1578)
The valid range of the unsqueeze dim is [-input.dim() - 1, input.dim() + 1); the bug was that the code forgot to add 1.
2022-11-14 09:09:15 -08:00
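The range is easy to confirm against PyTorch itself; `dim == input.dim()` is legal, which is exactly why the upper bound needs the `+ 1`.

```
import torch

x = torch.zeros(2, 3)         # x.dim() == 2
# Valid dims lie in [-x.dim() - 1, x.dim() + 1) == [-3, 3).
print(x.unsqueeze(2).shape)   # torch.Size([2, 3, 1]); dim == x.dim() is legal
print(x.unsqueeze(-3).shape)  # torch.Size([1, 2, 3])
```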
Daniel Ellis a7ac0def45
Move single-tensor-tuple-return test to mlir unit test.
Also, add multiple return test.
2022-11-10 09:23:53 -05:00
Xiafei Qiu 4f173c6e0f
update llvm tag to a2620e00. (#1567)
- also update MHLO to 57ba12a2 (branch greencommit/2022-11-07-a2620e00)
- change -pass-pipeline format to make tests pass.
2022-11-10 18:39:28 +08:00
Tanyo Kwok 17bc7c89cc
build: update llvm tag to 74fb770d (#1539)
* build: update llvm tag to 74fb770d

This commit makes the following changes needed to bump LLVM:

+ replace usages of `tensor::createPadScalarOp`, see https://reviews.llvm.org/D136493
+ Update file checks
2022-11-01 15:27:09 +08:00
Ramiro Leal-Cavazos b723186983
Remove all but one of valsem ops + move fill.Scalar to elementwise (#1531)
This commit removes almost all of the valsem ops, since the value
semantics version of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit also simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
2022-10-28 15:06:11 +00:00
Ashay Rane a11ea93877
build: update llvm tag to f8b84268 (#1528)
The only change required was to update a test to reflect the changes
in https://reviews.llvm.org/D136541.
2022-10-26 15:33:53 -05:00
Vivek Khandelwal ca87033d2f [MLIR][TORCH] Add E2E support for aten.mse_loss op
This commit adds decomposition for the `aten.mse_loss` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-10-25 21:06:58 +05:30
Chi_Liu ad6f5848cb
[MLIR][TORCH] Add TorchToTosa lowering for aten.where.self op (#1454) 2022-10-18 09:39:39 -07:00
Ramiro Leal-Cavazos 82a3860e25
build: update llvm tag to 4546397e (#1502)
This commit makes the following changes needed to bump LLVM:

- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)

Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>

Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
2022-10-18 04:22:53 +00:00
Gaurav Shukla da90a25f90 [MLIR][TORCH] Add E2E support for `aten.[div.int|bitwise_or.Tensor]` ops
This commit adds lowering of `aten.div.int` and `aten.bitwise_or.Tensor`
ops. Both these ops are required in order to support bloom_560m model.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-10-10 22:28:51 +05:30
Vivek Khandelwal d3cc3f1aff [tosa] Add lowering for aten.to.dtype and aten._to_copy op
This commit adds the TorchToTosa lowering for `aten.to.dtype` and
`aten._to_copy` op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Vivek Khandelwal 56f9a9b5de [tosa] Add TorchToTosa lowering for torch.prim.NumToTensor.Scalar op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Gleb Kazantaev 708fa346a6
Fix Base Lazy Backend Type Conversion (#1412)
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend

* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types

* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions

* Test updates

* Move back module variable name

* Remove RefineTypes from TorchMlirLoweringContext::Build()

* Rename pass; Remove passes from base lazy backend

* Rename pass to VerifyBackendContractPass

* Aligned cmd pass name; Fixed TorchConversion passes registration
2022-10-04 15:53:28 -07:00
Vivek Khandelwal 9dd5ae8239
[tosa] Add TorchToTosa lowering for aten.arange.start_step op (#1442) 2022-09-30 07:33:41 -07:00
AmosLewis 940959589b [MLIR][TORCH] Add Byte and Char Dtype support 2022-09-30 13:19:31 +05:30
Ashay Rane 0b46462528
Miscellaneous fixes for Windows builds (#1376)
* test: allow spaces in path to Python executable

On Windows, the path to the Python binary may contain spaces, so this
patch adds quotes around the path to the python executable.

Thanks to @sstamenova for suggesting the fix!

* python: remove header file that causes Windows build failures

Similar to https://reviews.llvm.org/D125284, we can safely remove this
header file without affecting the Linux build.  It is
necessary to remove this header file on Windows builds since otherwise
it causes build errors.

* python: drop `TORCH_API` from function defined in Torch-MLIR

`TORCH_API` should apply to functions that are either exported by
libtorch.so or ones that are imported from libtorch.so by its downstream
consumers (like Torch-MLIR).  Neither case applies to the
`importJitFunctionAsFuncOp()` function, since it is defined in
Torch-MLIR (and thus outside libtorch.so).  This patch fixes the problem
by dropping `TORCH_API` from that function's declaration.

* python: make output of class annotations deterministic

The `class-annotator-repr.py` test checks for class annotations in a
specific order, but prior to this patch, the order was
non-deterministic, since the code iterated on an _unordered_ map.

This patch makes the iteration order deterministic through two changes:
1. using a sorted map
2. using the class qualified name instead of the address of the class in
memory

* test: use Python3_EXECUTABLE as interpreter path for consistency

This ensures that tests use the Python3 version that was detected using
CMake, instead of whichever python version that happens to be in the
PATH variable when invoking the test.

* test: fix RUN string

The parenthesis syntax does not run on Windows (the shell interprets the
`(` character as part of the path).  Moreover, the ODR violation in the
comment no longer seems to apply.

* python: port parallel test framework to Windows

Since Windows does not support `fork` natively, Python's
`multiprocessing` module needs to use `spawn` on Windows.  However, to
use `spawn`, the multiprocessing module serializes (or pickles) the
worker function and its arguments.  Sadly, the multiprocessing module
(both the default one in Python and the one that is extended in PyTorch)
is unable to serialize lambda functions (see
https://stackoverflow.com/a/19985580 for details).

Unfortunately, given how our tests are structured, we require that the
function under test is passed as an argument to another function, so we
cannot sidestep our use of lambda functions.

To resolve this problem, this patch makes use of the `multiprocess` and
`dill` Python modules, which together offer a multiprocessing mechanism
that can serialize lambda functions.  The multiprocess module also
offers a process pool, which simplifies the code for our parallel
testing framework.
2022-09-29 12:07:43 -05:00
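A standalone illustration of why `multiprocess`/`dill` help: the stock `pickle` module cannot serialize a lambda, while `dill` can, so a `multiprocess.Pool` can ship lambdas to spawned workers.

```
import multiprocess  # pip install multiprocess (serializes with dill)

def main():
    with multiprocess.Pool(2) as pool:
        # Under `spawn`, the stock multiprocessing module raises a
        # PicklingError for this lambda; dill serializes it fine.
        print(pool.map(lambda x: x * x, range(4)))  # [0, 1, 4, 9]

if __name__ == "__main__":  # required under spawn-based process creation
    main()
```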
Vivek Khandelwal bce00c8ed1 [tosa] Fix torch.vtensor.literal lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 17:03:10 +05:30
JakopinA 8ef0c874c2
Implement Expand/Collapse Functionality for Aten.View (#1353) 2022-09-27 11:08:14 -07:00
Eric Kunze cb1b8796a2
Convert torch si literals into signless for TOSA (#1421) 2022-09-26 16:54:27 -07:00
武家伟 c03aa63325
[MLIR] Add canonicalizer for aten.slice.t op (#1413)
* [MLIR] Add canonicalizer for aten.slice.t op

* Add mlir tests and strengthen the canonicalizer

* rename variable

Co-authored-by: Vremold <xremold@gamil.com>
2022-09-26 14:35:50 -07:00
Tanyo Kwok 16dd7e2e5f
Fix dynamic shapes type verifications (#1409)
* Fix dynamic shapes type verifications
2022-09-23 20:50:29 +08:00
Tanyo Kwok 061a97c3f2
Replace empty_like && empty_memory_format with full/full_like (#1398)
* Replace empty_like && empty_memory_format with full/full_like

* fix broadcast rank0 tensor
2022-09-23 10:24:36 +08:00
long.chen 797feaf129
[torch-mlir][Tosa] fix mismatch between reshape's new-shape attribute and its result type when lowering torch.max.dim to TOSA (#1378) 2022-09-16 21:29:56 -07:00
Ashay Rane a9e1014fc7
python: add `CHECK-LABEL` statements to localize checks (#1363)
It seems as though an upstream change in PyTorch has caused the module
dump to include not just the module being tested, but also several
seemingly unrelated functions in the `torch._decomp.decompositions`
namespace.  The presence of these new functions caused lit to match
variables against incorrect statements (i.e.  statements in the
unrelated functions instead of the module under test).

This patch inserts `CHECK-LABEL` statements in the failing tests so that
lit ignores these unrelated functions and only checks the statements at
or after the test module definition.
2022-09-13 15:44:13 -05:00
gpetters94 48418b9c22
Fold away type_as (#1358) 2022-09-12 18:59:12 -04:00
Ashay Rane e52e886845
build: update llvm tag to 00d648bd (#1307)
- Update MHLO commit to build with LLVM commit hash 00d648bd
- Update TorchToMhlo code to work with StableHLO
- Re-enable two failing TOSA tests, thus resolving GitHub Issue #1231
2022-08-30 14:44:00 -05:00
Sean Silva 0e3ddbac91 Remove VerifyInvariantsBeforeBackendLowering
LowerToBackendContract now checks all this consistently.
2022-08-26 10:24:43 -07:00
Tanyo Kwok 3d0e18bbe7
Add decomposition for aten.roll (#1170)
* Add decomposition for aten.roll

* add e2e unittest

* refine type of torch.roll

* fix aten::cat output type
2022-08-24 08:36:05 +08:00
Tanyo Kwok 9176b5ed29
Add decomposition for aten.flatten.using_ints (#1161) 2022-08-23 11:52:54 +08:00
Sean Silva 01290d134a Add a way for backends to control which ops are legal for them.
We were already hitting many cases where backends differed in terms of
the legal ops that they wanted. This caused unnecessary coupling between
the backends. Examples:
- https://github.com/llvm/torch-mlir/pull/1161
- https://github.com/llvm/torch-mlir/pull/862

This PR centralizes all compilation to go through `torch_mlir.compile`
so that we can keep the logic centralized there. We should move these
lists closer to each backend. Especially cases like
https://github.com/llvm/torch-mlir/pull/862 where blocking a
decomposition is necessary to avoid a crash emphasize that the set of
decompositions is tightly coupled to the backend, and should be
"controlled by the backend" and not something arbitrarily tweakable.

Also:
- Fix a small bug in the way we passed through the backendLegalOps
  option.
- Add better error messages in `torch_mlir.compile` for import errors.
2022-08-22 14:16:13 -07:00
武家伟 99fb4c8637
Add folder for ToF64Op and FromF64Op (#1257) 2022-08-22 09:49:39 +08:00
Vivek Khandelwal 65d811e267 [MLIR][TORCH] Fix dynamic cases for aten.index.Tensor 2022-08-19 12:13:20 +05:30
武家伟 7bd173a1c4
[MHLO] Eliminate explicit dynamic output shape generation when converting AtenSliceTensorOp (#1245)
2022-08-19 10:14:57 +08:00
Ramiro Leal-Cavazos 9bc606c384
Add support for returning more than one copy of the same tensor (#1228)
One of the simplifications made by the pass `RefinePublicReturn`
currently only happens if the tensor in question only has one
user. However, the current method of checking this does not correctly
handle the case of a user having multiple uses of the same
tensor. This commit makes sure only unique users are considered.
2022-08-18 22:41:45 +00:00
Sean Silva 283e0f141a Add a concept of "backend legal ops".
This is a first step towards formalizing the set of ops in our backend
contract. The goal is to eventually formalize `torch` dialect ops into 3
categories:
1. Legal in backend contract
2. Illegal in backend contract
3. Conditionally legal in backend contract

The "conditionally legal" set are the ops that we can optionally
decompose for backends.

This patch adds relevant pass options for this throughout the compiler,
in preparation for a new set of traits which will formalize this
classification.
2022-08-18 11:46:50 -07:00
Sean Silva 57681f7947 Iteratively run the main simplification pipeline.
This introduces a new pass LowerToBackendContract (better name very
welcome) which performs the bulk of the simplifications that we do,
such as
- shape refinement
- dtype refinement
- maximizing value semantics
- inlining global slots
- decomposing complex ops

The key difference from before is that it iterates the set of
transformations, which can help to break a number of "catch-22" issues
where one simplification depends on another, the latest example being
here:
https://github.com/llvm/torch-mlir/issues/1131

This also exposed that RefineTypes was sometimes crashing/asserting for
certain inputs. This commit hardens it a bit.
2022-08-17 14:54:33 -07:00
Yan Xu 9be8997536
Revert "add native_dropout and related ops pattern (#1211)" (#1230)
This reverts commit c935795086.
2022-08-17 13:48:10 +08:00
武家伟 11a5b5ac52
[MHLO] Add AtenRSubScalarOp conversion pattern to MHLO (#1233)
* [MHLO] Add AtenRSubScalarOp conversion pattern
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-17 09:07:36 +08:00
Ashay Rane 84d345c650
build: update llvm tag to 2dde4ba6 (#1229)
Summary of changes:
 - Tensor dialect now sets `emitAccessorPrefix` to prefixed, thus
   requiring updates to methods that retrieve arguments
   [https://reviews.llvm.org/D131361]
 - Update MHLO to build with LLVM commit hash 2dde4ba6
 - Replace `AbsOp` with `AbsFOp` [https://reviews.llvm.org/D131325]
 - Replace deprecated `getValue()` with `value()`
   [https://reviews.llvm.org/D131349]
 - Remove `AnalysisState::defaultInitialize()`
   [https://reviews.llvm.org/D131746]
 - Update MHLO MLIR tests to use the updated assembly format
 - Disabled two failing TOSA tests (Github Issue link:
   https://github.com/llvm/torch-mlir/issues/1231)
2022-08-15 23:54:45 -07:00
武家伟 3b3cb99ef8
Generalize canonicalization pattern for more aten.sub/div/mul/add op (#1209)
Generalize the canonicalization pattern to cover more sub/div/mul/add ops; for AtenDivTensorModeOp in 'trunc' rounding mode, we try to fold it.
2022-08-16 13:24:08 +08:00
Yan Xu c935795086
add native_dropout and related ops pattern (#1211) 2022-08-15 09:28:47 +08:00
Ramana Radhakrishnan 738f4fe96a
Rename TorchToStd pass as TorchToArith (#1163)
All the converters in this pass appear to create ops from the
arith dialect. Hence the full rename.

Fix GH Issue #409.
2022-08-10 20:12:51 +01:00
武家伟 87562773f8
[MHLO] Add AtenCatOp conversion pattern to MHLO (#1208)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
Co-authored-by: Vremold <xremold@gamil.com>
2022-08-09 22:12:34 -07:00
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
武家伟 351f15424e
[MHLO] Add transposed convolution conversion pattern (#1171)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-09 09:50:07 +08:00
Sean Silva 504de5e701 Rework how global slot initializers work.
Rather than a per-global-slot initializer region, we now have one for
the whole module. For example, it might look like this:

```
torch.global_slot "private" @tensor : !torch.tensor
torch.global_slot "private" @list : !torch.list<tensor>
torch.global_slot.module_initializer {
  %0 = torch.tensor.literal(dense<0.0> : tensor<f32>) : !torch.tensor
  %1 = torch.prim.ListConstruct %0 : (!torch.tensor) -> !torch.list<tensor>
  torch.initialize.global_slots [
    @tensor(%0 : !torch.tensor)
    @list(%1 : !torch.list<tensor>)
  ]
}
```

This new structure allows GlobalizeObjectGraph to create the initializer in a
much simpler way, avoiding the need to reason about whether different slots
alias each other. Reasoning about whether slots alias each other now is the
responsibility of InlineGlobalSlots, which has to do a much more complicated
analysis, implemented using MLIR's dataflow analysis framework.

Recommended review order:
- Check out the new IR constructs in the .mlir files of various passes
- Op definitions (*.td)
- Changes to GlobalizeObjectGraph pass.
- InlineGlobalSlots pass (~total rewrite)
- Misc changes:
  - Moving torchMlirAdjustStaticInformation for sharing with C++ code.
  - EraseModuleInitializer pass

To make this a bit nicer, it would be good to have a `torch.module` op
with an initializer region attached. That would be more invasive though.

This change has highlighted certain aspects of our project layering
which are worth calling out. None of our backends can handle global
slots, so we enforce that there are no global slots before backend
lowering. At an earlier stage in the project, we had aspirations of
transparently handling mutable global state and such, but for reasons
described below, that is no longer a goal. So really global slots should
be seen as a progressive lowering step as part of inlining all the
IValue's in the original program (GlobalizeObjectGraph is also one such
step).

Over time, with insights from work like IREE-JAX, it has become clear
that there isn't a reliable programming model we can compile for users
where we just transparently handle mutable global state (and some other
things, like lists and dictionaries). There is a need for an "outer
program" that orchestrates more restricted subroutines of the kind we
can handle in our compile flow here. The benefit of that is that it
decouples considerations like shapes, dtypes, etc. from the program
constructs used in the outer program. As long as the outer program can
efficiently invoke (pipelining/async/etc.) high-performance
data-parallel numerical subroutines of the kind we compile in our flow
here, then there is a complete programming model. This is also
consistent with the direction of upstream PyTorch which is becoming more
tracing-based (which inherently loses a lot of program structure, which
then has to be applied back with an "outer program" orchestrating the
traced subroutines).
2022-08-08 18:12:06 -07:00
Tanyo Kwok 290d7755fb
importer: add initial support for loading Float16 tensors (#1169)
Follow-up to #761:

    This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
    method to enable the creation of tensors whose base type is Float16.
    This patch also adds a test to validate the IR generation, and it
    updates the test for importing tensors of various types.
2022-08-08 12:37:31 +08:00
Tanyo Kwok 1ee865983b
[MHLO] fix tensor mode aten.div op pattern (#1160)
* [MHLO] fix tensor mode aten.div op pattern

See RFC #999
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-06 23:38:06 +08:00
武家伟 c94431f71c
[MHLO] Add convolution op pattern (#1152)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-04 00:41:35 -07:00
武家伟 d030591df9
[MHLO] Init MHLO pooling-like op conversion (#1141)
* [MHLO] Init MHLO pooling-like op conversion and remove 'op' suffix in filenames

Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

See RFC #999
2022-08-04 12:34:22 +08:00
Tanyo Kwok f0a24f59f6
[MHLO] Init MHLO linear op patterns (#1132)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-08-03 19:10:54 -07:00
武家伟 636f5acb10
[MHLO] Init MHLO reduce-like op conversion (#1133)
* [MHLO] init reduce-like op conversion from Torch to MHLO
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-03 10:47:52 +08:00
Tanyo Kwok 0b23af27d3
[MHLO] support non-constant torch scalar in BasicOps (#1134)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-08-03 08:16:31 +08:00
Yan Xu 704efdc259
[MHLO] add aten::gelu op pattern (#1127)
add aten::gelu op pattern and move some unit tests from basic.mlir to elementwise.mlir
2022-08-02 15:01:30 +08:00
武家伟 76c976682c
[MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect (#1123)
* [MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [MHLO] Support I32 as shape tensor dtype

* [NFC] Add a 'TODO' annotation
2022-08-02 12:53:24 +08:00
Jae Hoon (Antonio) Kim 425362263b Clean up Autogen (#1112)
* Remove unnecessary sed in autogen

* Remove .pyc files from VCS
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 1bde00c73d Fix LTC Decoupling (#815)
* Initial changes

* Fix up native functions

* Further fix decoupling

* Remove unnecessary ops

* Formatting and copyright banners:

* Add pytorch submodule
2022-07-30 09:40:02 -04:00
PhaneeshB 8b5631d4c5 [MLIR][TORCH] Add decomposition for aten.std.dim Op
Signed-Off By: Phaneesh Barwaria <phaneesh@nod-labs.com>
2022-07-29 23:52:54 +05:30
Vivek Khandelwal d386b8f9e5 [MLIR][TORCH] Add decomposition for aten.var.correction op
This commit adds the decomposition for `aten.var.correction` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Vivek Khandelwal 7247c6a3a7 [MLIR][TORCH] Add E2E support for aten.ge.int op
This commit adds lowering of `aten.ge.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
 - Only offers support for statically sized index tensors when there is more than one indexing tensor
 - Dynamic shape support remains for single indexing tensors
2022-07-28 19:00:02 -04:00
武家伟 052d2f84dc
[MHLO] Init MHLO basic op conversion (#1092)
* [MHLO] Init MHLO basic Op Conversion
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [NFC] Remove 'from @llvm-project' annotation

Co-authored-by: wujiawei.jw <wujiawei.jw@bytedance.com>
2022-07-27 13:07:51 +08:00
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
2022-07-25 21:25:57 +05:30
Tanyo Kwok 44ead68772
[MHLO] Init MHLO gather op patterns (#1104)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-25 23:47:46 +08:00
Tanyo Kwok f50d7013cd
[MHLO] Add [un]squeeze op patterns (#1099)
* [MHLO] Add [un]squeeze op patterns

* Conform to llvm coding standard

* minor update
2022-07-25 23:28:48 +08:00
Tanyo Kwok b80ce79b9f
[MHLO] Init MHLO view like op patterns (#1090)
* [MHLO] Init MHLO view like op patterns

See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com

* update filecheck test cases

* rebase, remove chlo and clang-format
2022-07-22 15:18:18 +08:00
Tanyo Kwok a02dbb2d5e
[MHLO] Init MHLO slice like op patterns (#1091)
See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-22 11:32:45 +08:00
Ramiro Leal-Cavazos f271e6a88c
Add verifiers for ToBuiltinTensorOp and FromBuiltinTensorOp (#1089)
This commit adds verifiers to the ops `ToBuiltinTensorOp` and
`FromBuiltinTensorOp` that make sure that the input and output have
the same shape and data type.
2022-07-21 21:41:45 +00:00
powderluv 31fd812acf
Add linux and macOS source builds in CI (#1070)
This enables building Pytorch from source in the CI.
The build should mostly hit the ccache.
Release builds will follow once we have some runtime on the CI.
2022-07-21 14:16:03 -07:00
Ashay Rane 72dd04cdb3
Revert "python: trim registration and loading of dialects and passes" (#1093)
This reverts commit ad283c1043, since it's
causing nightly build failures for all platforms.
2022-07-21 09:35:42 -07:00
Ashay Rane ad283c1043
python: trim registration and loading of dialects and passes (#1084)
In the interest of merging upstream LLVM quickly, a previous patch
(7f08169) updated the torch-mlir build to register all dialects and
passes through Python bindings.  This patch limits the dialects and
passes to only those that are used in torch-mlir.

Key to this change are the removal of
`MLIRPythonExtension.RegisterEverything` and the introduction of a new
Python module (`_mlir_libs/_site_initialize_0.py`), where we register
the dialects and passes used by torch-mlir.
2022-07-20 18:34:17 -07:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Vivek Khandelwal 4c25878e64 [MLIR][TORCH] Add canonicalization pattern for prim.ListUnpack op
This commit adds the canonicalization pattern for the `prim.ListUnpack` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-18 13:51:25 +05:30
Vivek Khandelwal 3589134d31 [MLIR][TORCH] Add decomposition for aten.var.dim op
This commit adds the decomposition for `aten.var.dim` op.
This commit also make changes in the decomposition for `aten.var` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-15 09:53:42 +05:30
Ashay Rane 29bc48aedb
torch: add pass to catch non-value tensors (#1052)
This patch adds a new pass `torch-verify-conversion-to-value-semantics`,
which looks for non-value semantics tensors to catch such tensors early
during compilation.

This pass requires `torch-refine-public-return` pass to ensure that
return operations are updated to use value tensors, followed by the
canonicalize pass to remove any dead ops that may use or produce
non-value tensors.
2022-07-13 17:11:15 -07:00
Ashay Rane 64c04bd5f6
canonicalizer: [nfc] update LIT variable names for consistency (#1051)
A previous patch used lowercase names for LIT variables.  This patch
replaces them with uppercase names to maintain consistency with other
variables.
2022-07-13 12:28:25 -07:00
Ashay Rane ac4d7d10e0
canonicalizer: propagate type information across copy and cast ops (#1030)
Prior to this patch, the canonicalizers for `AtenSizeOp` and
`AtenSizeIntOp` succeeded only if the tensor operand's type information
included the size of the requested dimension(s).  We can extend the set
of optimizable cases by propagating types across operations whose result
type matches the input tensor type.

Specifically, this patch enables the canonicalizers for `AtenSizeOp` and
`AtenSizeIntOp` to see past `tensor_static_info_cast`,
`copy.to_vtensor`, and `copy.to_tensor` ops until it reaches the first
op whose result type contains size information for the requested
dimensions, with a maximum bound of 6 parent lookups to avoid indefinite
compilation times.  All other encountered ops cause the canonicalizer to
give up.
2022-07-12 12:38:37 -07:00
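A rough Python sketch of that bounded walk; the accessors (`has_sizes`, `defining_op`) are invented for illustration, and the real logic lives in the C++ canonicalizers.

```
# Ops whose result type matches the input tensor type, so size
# information can be propagated through them.
TYPE_PRESERVING = {"torch.tensor_static_info_cast",
                   "torch.copy.to_vtensor", "torch.copy.to_tensor"}

def find_type_with_sizes(value, max_hops=6):
    """Walk up to `max_hops` defining ops looking for a sized tensor type."""
    for _ in range(max_hops):
        if value.type.has_sizes():    # hypothetical accessor
            return value.type
        op = value.defining_op        # hypothetical accessor
        if op is None or op.name not in TYPE_PRESERVING:
            return None               # give up, as the canonicalizer does
        value = op.operands[0]
    return None
```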
Sean Silva e5e11e214b GlobalizeObjectGraph: Clean up handling of unused slots
The way we did it previously still created the slot and copied the
initializer even if unused.
2022-07-12 10:47:28 -07:00
Ashay Rane 9017be9e9e
torch: copy uses to prevent iterator invalidation (#1033)
Prior to this patch, the code in the `torch-simplify-shape-calculations`
pass iterated on the uses of an op's result while also modifying the
value.  This caused the iterator to get invalidated, thus terminating
the loop early and producing incorrect IR.  This patch makes use of
`llvm::make_early_inc_range()` to ensure that the iterator is not
invalidated while executing the loop body.
2022-07-11 18:47:04 -07:00
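The same hazard reproduces in plain Python; `llvm::make_early_inc_range()` plays the role of the `list(...)` snapshot below.

```
items = [1, 2, 2, 3]
for x in items:            # buggy: mutating the sequence being iterated
    if x % 2 == 0:
        items.remove(x)    # shifts elements under the iterator, so the
                           # second 2 is skipped -- the analogue of the
                           # invalidated use-iterator in the pass
print(items)               # [1, 2, 3], not the intended [1, 3]

items = [1, 2, 2, 3]
for x in list(items):      # fix: iterate over a snapshot, like
    if x % 2 == 0:         # llvm::make_early_inc_range()
        items.remove(x)
print(items)               # [1, 3]
```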
Ramiro Leal-Cavazos 11148e60d6
Undo shape lib changes + update function signature of sum + zero (#1035)
This commit does three things:
  1. Reverts some of the shape lib changes merged in
  https://github.com/llvm/torch-mlir/pull/844
  2. Updates the signature of `aten.sum_dim_IntList` that was recently
  updated in
  23bdb570cf
  3. Replaces `aten.zero.functional` with `aten.zero`, updated in 960758b0b7
2022-07-11 10:56:12 -07:00
Prateek Gupta 2d75654b2c [TORCH][MLIR] Add lowering of `aten.slice_scatter` and
`aten.select_scatter` op.

This commit adds:
1.  Lowering of `aten.slice_scatter` op into `tensor.insert_slice`
op.
2. Decomposes the `aten.select_scatter` op into `aten.slice_scatter`
op.

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-11 14:07:21 +05:30
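The PyTorch-level semantics, and the select-to-slice reduction the decomposition relies on, can be sketched as follows (a behavioral illustration, not the MLIR pattern itself):

```
import torch

x = torch.zeros(4, 4)

# aten.slice_scatter: write `src` over rows [1, 3) of `x`.
src = torch.ones(2, 4)
out = torch.slice_scatter(x, src, dim=0, start=1, end=3)

# aten.select_scatter reduces to a one-element slice_scatter.
row = torch.full((4,), 2.0)
a = torch.select_scatter(x, row, dim=0, index=2)
b = torch.slice_scatter(x, row.unsqueeze(0), dim=0, start=2, end=3)
assert torch.equal(a, b)
```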
Ashay Rane 340d8af28a
torch: handle `torch.prim.dtype` ops during type refinement (#1013)
The canonicalizer converts `torch.prim.dtype` ops into integer constants
for valid types, but the type may not be known until type refinement is
complete.  However, type refinement cannot make progress until
`torch.prim.dtype` ops have been resolved to their corresponding integer
constants, thus creating a circular dependency.

This patch creates a tight coupling between type refinement and the
lowering of `torch.prim.dtype` ops by handling such ops as they are
encountered during type refinement.  The unit test in this patch aims to
check whether the type refinement pass can now handle chains of
operations that alternate between type construction and type refinement.
2022-07-08 16:38:51 -07:00
Ramiro Leal-Cavazos 6a72ab4502
Add basic support for list of optional tensors in reduce-op-variants (#971)
This commit adds support for lists of type `list<optional<tensor>>`
where each element in the list is either a `!torch.tensor` or a
`!torch.none`.
2022-07-08 11:12:15 -07:00
Quinn Dawkins f0c3b5a7ed
Add E2E support for aten.len.str (#969) 2022-07-07 10:41:55 -07:00
Ashay Rane 88316b3b4e
torch: fold prim.dtype(bf16) to integer constant 15 (#1012)
A prior patch (63538de2) that added support for bfloat16 type did not
add the canonicalization pattern to fold `torch.prim.dtype` operations
on bfloat16 tensors into the integer constant 15.  This patch fixes the
problem.
2022-07-06 18:21:43 -07:00
Tanyo Kwok d4f1f41435
[MLIR][TORCH] Add decomposition of aten.repeat (#932)
* [MLIR][TORCH] Add decomposition of aten.repeat

* refine & rebase

* refine static shapes

* add e2e test

* Rebase and Refine naming style
2022-07-01 13:02:31 +08:00
Ashay Rane f947443f98
python: lower `prim::{Load,Store,Enter,Exit}` nodes to torch dialect (#983)
TorchScript nodes like `prim::Load` and `prim::Store` aren't supported
in torch-mlir because they can't be lowered to backends, but such nodes
can occur in the TorchScript IR.

This patch adds a rudimentary translation from such nodes to
corresponding ops in the Torch dialect.  Since we expected such nodes to
go away during lowering because of the SymbolDCE pass, this patch does
not add code to lower these ops beyond the Torch dialect.
2022-06-30 13:17:35 -07:00
Sean Silva 227dea7b2e Add support for ScalarType::QUInt8
I ran into this while poking around at
https://github.com/llvm/torch-mlir/issues/959
2022-06-29 15:33:28 -07:00
Ashay Rane 163fa57cde
torch: allow torch dialect ops after running drop-shape pass (#979)
In the `pyhpc_turbulent_kinetic_energy` TorchBench benchmark, the shape
calculation occurs inside loops, but because `DropShapeCalculationsPass`
does not explicitly mark the Torch dialect as legal, the pass execution
fails.

This patch adds Torch to the list of legal dialects, and adds a test to
validate the translation.
2022-06-25 07:27:47 -07:00
Ashay Rane 234fc7fe0c
linalg: lower `aten.triu` op to `linalg.generic` (#965)
Prior to this patch, the torch dialect included `AtenTriuOp` for
computing the upper triangular part of the input matrix, but there was
no code for lowering the op to the linalg dialect.

This patch adds code to generate a `linalg.generic` operation that
compares indices (computed using `linalg.index`) to choose between zero
or the original value (using `arith.select`).  The lowering fails if the
number of dimensions is less than two.  This patch also adds a few
end-to-end tests.
2022-06-23 22:45:48 -07:00
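A behavioral sketch of that lowering in PyTorch terms: compare row/column indices (the job of `linalg.index`) and select zero or the original value (the job of `arith.select`).

```
import torch

def triu_via_select(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    rows = torch.arange(x.shape[-2]).unsqueeze(-1)  # index along dim -2
    cols = torch.arange(x.shape[-1])                # index along dim -1
    keep = cols - rows >= diagonal                  # on or above the diagonal
    return torch.where(keep, x, torch.zeros((), dtype=x.dtype))

x = torch.randn(3, 3)
assert torch.equal(triu_via_select(x), torch.triu(x))
```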
Tanyo Kwok 143a7bcb76
[MLIR][TORCH] Add folder for torch_c.from_i64 & torch_c.to_i64 (#933)
* [MLIR][TORCH] Add folder for torch_c.from_i64 & torch_c.to_i64

* add unit tests for each individual fold

* fix failure of NumelZeroRankModule & TestMultipleTensorAndPrimitiveTypesReturn
2022-06-24 09:34:39 +08:00
erman-gurses 5cff40c88a Add canonicalization for aten.add.tensor op 2022-06-23 17:24:59 -04:00
Maksim Levental 829717c96e
Bump LLVM (#958) 2022-06-22 22:23:46 -05:00
Vivek Khandelwal 77ab31641f [MLIR][TORCH] Add decomposition of aten.numpy_T op
This commit adds the decomposition of `aten.numpy_T` op into
`aten.t` or `aten.permute` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:01:22 +05:30
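`aten.numpy_T` (i.e. `Tensor.T`) reverses all dimensions, so the decomposition is `aten.t` for rank <= 2 and a `permute` with reversed dims otherwise; a quick sanity check of that equivalence:

```
import torch

def numpy_T_decomposed(x: torch.Tensor) -> torch.Tensor:
    if x.dim() <= 2:
        return x.t()                                     # aten.t
    return x.permute(list(range(x.dim() - 1, -1, -1)))   # reversed dims

x = torch.randn(2, 3, 4)
assert torch.equal(numpy_T_decomposed(x), x.permute(2, 1, 0))
```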
Vivek Khandelwal 33fa8e7761 [MLIR][TORCH] Add decomposition of aten.floor_divide op
This commit adds the decomposition of `aten.floor_divide` op into
`aten.div.Tensor_mode` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-14 08:56:25 +05:30
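In PyTorch terms the decomposition is the `rounding_mode='floor'` variant of `aten.div.Tensor_mode` (the equality check assumes a recent PyTorch, where `torch.floor_divide` actually floors rather than truncates):

```
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

decomposed = torch.div(a, b, rounding_mode="floor")  # aten.div.Tensor_mode
assert torch.equal(decomposed, torch.floor_divide(a, b))
print(decomposed)  # tensor([ 3., -4.])
```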
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:28 +05:30
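The decomposition follows baddbmm's definition, out = beta * input + alpha * (batch1 @ batch2), using exactly the three ops named above; a sketch:

```
import torch

def baddbmm_decomposed(inp, batch1, batch2, beta=1.0, alpha=1.0):
    prod = torch.bmm(batch1, batch2)                # aten.bmm
    scaled = torch.mul(prod, alpha)                 # aten.mul.Scalar
    return torch.add(scaled, torch.mul(inp, beta))  # aten.add.Tensor

inp = torch.randn(2, 3, 5)
b1, b2 = torch.randn(2, 3, 4), torch.randn(2, 4, 5)
assert torch.allclose(baddbmm_decomposed(inp, b1, b2, 0.5, 2.0),
                      torch.baddbmm(inp, b1, b2, beta=0.5, alpha=2.0))
```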
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes `aten.clamp_min` and `aten.clamp_max` op
into `aten.clamp` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-06 11:52:29 +05:30
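Both decompositions are one-liners over `aten.clamp` with the other bound left unset:

```
import torch

x = torch.tensor([-2.0, 0.5, 3.0])
assert torch.equal(torch.clamp_min(x, 0.0), torch.clamp(x, min=0.0))  # aten.clamp_min
assert torch.equal(torch.clamp_max(x, 1.0), torch.clamp(x, max=1.0))  # aten.clamp_max
```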
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Vivek Khandelwal 06750815d1 [tosa] Support for AtenAvgPool2d op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of `aten.adaptive_avg_pool2d` op into
`aten.avg_pool2d` op. The current decomposition only supports cases where
input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
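In the supported case the size math gives kernel_size = stride = 1, so the adaptive pool degenerates to a 1x1 average pool; a quick check of that equivalence (an inference from the stated constraint, not the pass source):

```
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
adaptive = F.adaptive_avg_pool2d(x, output_size=(8, 8))  # output size == input size
pooled = F.avg_pool2d(x, kernel_size=1, stride=1)        # degenerate 1x1 pool
assert torch.equal(adaptive, pooled)
```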
Vivek Khandelwal 56e77d4213 [MLIR][TORCH] Add E2E support for aten.Bool.[float|int] op
This commit adds lowering of `aten.Bool.float` and `aten.Bool.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 21:18:34 +05:30
Vivek Khandelwal 014a6d16c7 [MLIR][TORCH] Add E2E support for aten.any.bool op
This commit adds lowering of `aten.any.bool` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 17:24:28 +05:30
Vivek Khandelwal bc9b2156e3 [MLIR][TORCH] Add E2E support for aten.sqrt.int op
This commit adds lowering of `aten.sqrt.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 16:50:39 +05:30
Sean Silva 3fb54cba4c torch.prim.TupleIndex: Adjust tensor types when folding.
In cases where a refinement/derefinement was needed, we didn't fold.

Fixes https://github.com/llvm/torch-mlir/issues/863
2022-05-19 09:36:27 -07:00
Ashay Rane bb52a460cb
mlir: bump llvm tag to 5380e3 (#856)
In addition to updating the llvm-project submodule, this patch also:

1. updates shape functions and tests so that `func` and `call`
   operations refer to the `func` dialect
2. avoids duplicate registration of dialects
2022-05-16 12:54:35 -07:00
Ramiro Leal-Cavazos 96f90efd16
Add shape info to `rand_like` + support for `dtype` flag (#851)
The op `aten.rand_like` was missing a shape function, unit tests, and
the `dtype` argument was being ignored in its decomposition. This
commit fixes all three things.
2022-05-12 16:00:59 -07:00
Yi Zhang 28be6511d2 Fix type promotion code for scalar-only operations
Fix the type promotion code for scalar-only operations to return the
Torch type, which is the type tracked in ValueKnowledge.scalarType.

- Fix `getPromotedResultScalarType` to return Torch type.
- Add `getBuiltInTypeForTorchScalar` helper to convert scalar type
to builtin type before passing to the next level type promotion
helper `updateResultTypeState`.
- Add `setScalarType` helper to make setting ValueKnowledge.scalarType
  easier.
2022-05-07 10:37:21 -04:00
Vivek Khandelwal 96fabc0036 [MLIR][TORCH] E2E support for [ge|ceil].float, [ge|ne|gt].float_int op
This commit adds lowering of `aten.ge.float`, `aten.ge.float_int`,
`aten.ne.float_int`, `aten.gt.float_int` and `aten.ceil.float` op.
This commit also fixes formatting for the files scalar.py and scalar_comparison.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-05 21:48:35 +05:30
Yi Zhang 9f7264a7a4 Add support for scalar type propagation
The main changes are:
- Added `ValueKnowledge.scalarType` to track scalar type information.
- Added `ValueKnowledge.kind` to indicate the value kind.
- Modified the meet and join helper functions. The ValueKnowledge has
slightly more complicated state now so the meet and join function need
to look at the `kind` field in addition to just the type field.
2022-05-04 16:57:56 -04:00
Sean Silva 32159c4e54 Fix TupleIndex canonicalizer.
It would change the result type.
2022-05-03 09:08:49 -07:00
Sean Silva ab5ad7af09 Add tracing support to `torch_mlir.compile`.
This also has a fix for the adjustment of types of TupleConstruct
inputs, which I found when using this new functionality on a model.

Some scenarios in tracing create situations where the output of
TupleConstruct has a more refined type than the inputs.

This introduces a helper `adjustStaticInformationForValues` which
subsumes the `derefineValues` helper and the tensor static information
adjustment we were doing.
2022-05-03 09:08:40 -07:00
Vivek Khandelwal c0634bc996 [MLIR][TORCH] Add E2E support for aten.to.dtype_layout op
This commit decomposes `aten.to.dtype_layout` op into `aten.to.dtype` op.
This commit also fixes the formatting for the file type_conversion.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-03 12:48:58 +05:30
Prateek Gupta 81ee5bb58c [TORCH][MLIR] Fix ConstantPad2dStaticModule test.
This commit fixes the `ConstantPad2dStaticModule` test case by adding
the lowering of `aten.pad` operation. Previously the test case
mapped to `aten.constant_pad_nd` operation.
The `aten.pad` now decomposes into `aten.constant_pad_nd` operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 21:57:01 +05:30
Ashay Rane 809f240f01
importer: add initial support for loading BFloat16 tensors (#761)
This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
method to enable the creation of tensors whose base type is BFloat16.
This patch also adds a test to validate the IR generation, and it
updates the test for importing tensors of various types.
2022-04-29 09:01:49 -07:00
Prateek Gupta e1db318a3c [TORCH][MLIR]Add lowering for control flow operations.
1. This commit adds lowering of "while-like" prim loop to scf.while
operation.
2. Adds lowering of "for-like" prim loops to scf.for operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 16:25:58 +05:30
Sean Silva 44c7b181d3 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float op"
This reverts commit 564734b2d7.
2022-04-28 07:49:58 -07:00
Sean Silva eff144c0b7 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float_int op"
This reverts commit 1f102cc400.
2022-04-28 07:49:58 -07:00
Sean Silva 7669ee4e4a Revert "[MLIR][TORCH] Add E2E support for aten.ne.float_int op"
This reverts commit 51dd462592.
2022-04-28 07:49:58 -07:00
Sean Silva 5ef9f501fa Revert "[MLIR][TORCH] Add E2E support for aten.ceil.float op"
This reverts commit 78f5747568.
2022-04-28 07:49:58 -07:00
Vivek Khandelwal 78f5747568 [MLIR][TORCH] Add E2E support for aten.ceil.float op
This commit adds lowering of `aten.ceil.float` op.
This commit also fixes formatting for the file scalar.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 11:49:35 +05:30
Vivek Khandelwal 51dd462592 [MLIR][TORCH] Add E2E support for aten.ne.float_int op
This commit adds lowering of `aten.ne.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 1f102cc400 [MLIR][TORCH] Add E2E support for aten.ge.float_int op
This commit adds lowering of `aten.ge.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 564734b2d7 [MLIR][TORCH] Add E2E support for aten.ge.float op
This commit adds lowering of `aten.ge.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal f5b6c4b601 [MLIR][TORCH] Add E2E support for aten.div.float op
This commit adds lowering of `aten.div.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Sean Silva 73cc2ac152 Ensure that imported function input type and block arg types are consistent.
I wasn't able to find exactly what frontend situation created it, but
`torch.jit.trace` will sometimes create functions where the
`jit::Block`'s param node has refined tensor types. So we need to adjust
the function's formal param types to those refined types.
2022-04-27 08:01:23 -07:00
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00
Ashay Rane 9ec4712516
types: allow bf16 as result type for various tensor ops (#798)
Prior to this patch, the result type for several tensor operations could
only be float32, float64, or null.  This patch adds bf16 to the list of
allowed result types.
2022-04-26 11:55:58 -07:00
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
Ashay Rane a893c7d5cf
Add shape transfer function and lowering to linalg for aten.neg (#759)
* shape: add shape transfer function for aten.neg

Prior to this patch, the list of shape transfer functions did not
include `aten.neg`, which resulted in errors like below.

```
error: unsupported by backend lowering: tensor with unknown rank or dtype
note: see current operation: %0 = "torch.aten.neg"(%arg0) :
  (!torch.vtensor<[256,256],f32>) -> !torch.vtensor<*,f32>
note: this is likely due to a missing shape transfer function in shape_lib_gen.py
```

This patch fixes the problem by adding a shape transfer function to
reflect the point-wise nature of this operation.

* linalg: add translation of aten.neg operation

This patch adds a translation rule to lower `aten.neg` operations on
tensors to an `arith.negf` operation wrapped inside a `linalg.generic`
operation.  This patch also adds a rudimentary test.
2022-04-15 11:11:22 -07:00
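For a point-wise op like `aten.neg`, the shape transfer function is just the identity on the input shape. A hypothetical sketch in the shape_lib_gen.py style (the real library has its own naming conventions and helpers):
```
from typing import List

def aten_neg_shape(self: List[int]) -> List[int]:
    # Point-wise: the output shape equals the input shape.
    return self

assert aten_neg_shape([256, 256]) == [256, 256]
```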
Sean Silva e7721fb784 Fix error message.
RefineTypes doesn't handle shape refinement anymore.
2022-04-07 14:46:44 -07:00
Sean Silva c17c0a6ba2 Fix for 0-size dim inferred incorrectly.
The issue was in the canonicalizer for torch.aten.ge.int -- in cases
where the operands were swapped, it would miscompile. This issue is
fixed and folding support generalized to `torch.aten.size.int < 0` as
well.

Fixes #716
2022-03-30 16:36:15 -07:00
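The generalized fold rests on one invariant: `aten.size.int` is always non-negative, so comparing a size against 0 is statically decidable. A sketch of that reasoning (the helper name is hypothetical):
```
from typing import Optional

def fold_size_lt_zero(lhs_is_size: bool, rhs_is_zero: bool) -> Optional[bool]:
    # size(x, d) < 0 is always False, since tensor dims are never negative.
    if lhs_is_size and rhs_is_zero:
        return False
    # The original bug: aten.ge.int was folded as if the size operand were
    # always on the left, miscompiling when the operands were swapped.
    return None  # not statically foldable
```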
Gaurav Shukla 969785d1b6 [LINALG] Add E2E support for `aten.where.[Scalar|ScalarSelf|ScalarOther]` ops
This commit decomposes different variants of `aten.where.*` op into
`aten.where.Self` op. It covers `aten.where.Scalar`,
`aten.where.ScalarSelf` and `aten.where.ScalarOther` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-30 20:36:48 +05:30
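In eager terms, each scalar variant reduces to `aten.where.self` by materializing the scalar operand as a tensor:
```
import torch

cond = torch.tensor([True, False, True])
x = torch.tensor([1.0, 2.0, 3.0])

# aten.where.ScalarOther(cond, self, scalar) == aten.where.self with the
# scalar broadcast as a 0-d tensor (and symmetrically for ScalarSelf).
a = torch.ops.aten.where.ScalarOther(cond, x, -1.0)
b = torch.where(cond, x, torch.scalar_tensor(-1.0))
assert torch.equal(a, b)
```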
Vivek Khandelwal 2597c481f6 [MLIR][TORCH] Add E2E support for aten.new_empty op
This commit decomposes `aten.new_empty` op into `aten.empty.memory_format` op.

This commit also made a dtype fix to the constant tensor allocation like ops.
Earlier the dtype for the result was inferred from the result type; now, it's
being evaluated as per the original definition of the op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-30 13:21:01 +05:30
Sean Silva 140babd952 Add minimal support for Union types.
A recent PyTorch commit made ConstantPad2d call a helper function with a
`Union[int, float]` type annotated. This commit adds minimal support for
representing and dealing with that.
https://github.com/pytorch/pytorch/pull/73287

Changes:
- Adding support for `!torch.union<T1, T2, T3>`/`Torch::UnionType`,
  along with the importer and CAPI code.
- Add support in isValidSubtype for union types.
- Adding a canonicalizer for `torch.derefine` to help simplify some code
  that derefines to a UnionType (this also fixes #664).

There is still more work to do for really supporting UnionType well,
such as canonicalizing UnionType's so that they can be compared with
pointer equality.
2022-03-29 17:45:48 -07:00
Liam Fitzpatrick f2269ced80
Improve list index normalization SimplifyShapeCalculations. (#710)
The reified code to compute the shape of torch.aten.constant_pad_nd
uses negative indices when setting list elements. This was not
converted to a positive offset in one place in SimplifyShapeCalculations
which prevented computation of the static shape.
2022-03-29 22:21:47 +02:00
Maksim Levental 25ba51b2af
This commit decomposes aten._reshape_alias op into aten.view op. (#690) 2022-03-28 23:54:28 -05:00
Sean Silva 776426ea4e [SimplifyShapeCalculations] Fix AbstractlyInterpretListOpsWithinABlock
The logic in the rewriting phase had a bug in case of a read-only op
coming before mutation ops. The logic would use the op itself as the
"latest literal", but that is not correct, because later on we replace
the op itself with the *final* "latest literal", assuming that all uses
of the op have been rewritten -- that was working in general, except for
any read-only ops at the beginning.

Big thanks to @ljfitz for the tiny reproducer!

Fixes #704
2022-03-28 13:18:35 -07:00
Anup Gangwar 5d7a6c2976
[tosa] Support for Aten[Unsqueeze|Contiguous|Dropout|Reshape|View] ops (#700) 2022-03-25 14:15:07 -07:00
Gaurav Shukla 02b6d04eb4 [LINALG] Add E2E support for `aten.zero_` op
This commit adds decomposition of `aten.zero_` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-25 12:46:50 +05:30
Gaurav Shukla 7c3ba25238 [LINALG] Add decomposition of `aten.dropout` op
- This commit adds decomposition of `aten.dropout` op. It also covers the
  training mode of the same op.
- It also adds lowering of `aten.sub.float` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-22 13:14:49 +05:30
Vivek Khandelwal 5b9bdfaf3f [MLIR][TORCH] Add E2E support for aten._to_copy op
This commit decomposes `aten._to_copy` op into
`valsem.aten.copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 4c0cd5c23d [MLIR][TORCH] Add E2E support for aten.expand_as op
This commit decomposes `aten.expand_as` op into `aten.broadcast_to` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 12:47:39 +05:30
Vigilans 63fb1e5aad Bump LLVM at 8361c5da30588d3d4a48eae648f53be1feb5cfad 2022-03-18 13:16:14 -04:00
Ramiro Leal-Cavazos 218b4875d5
Make conditions for type refinement of static cast less strict (#680)
This commit adds support for type refinement when
`torch.tensor_static_info_cast`s are involved, even when there are
users of the casted tensor that don't allow type refinements.

Originally the canonicalization pattern for
`torch.tensor_static_info_cast` would check if all the users of the
casted tensor allowed type refinements before making any changes. This
means that if at least one of the users did not allow type
refinements, the pattern would fail. This becomes an issue when doing
shape calculations because the calculations need the shape information
of each input tensor to be available before the calculation can be
simplified.
2022-03-18 09:10:12 -07:00
Vivek Khandelwal 8da7d90611 [MLIR][TORCH] Add E2E support for aten.index_put op
This commit decomposes `aten.index_put` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-16 22:02:02 +05:30
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Ramiro Leal-Cavazos 0bcc6d1075
Add maximize-value-semantics support for multiple non-value tensor inputs (#659)
This commit adds value semantics support for ops such as
`aten.view_as` and `aten.expand_as` that take two non-value 
tensors as input.
2022-03-15 18:13:45 -07:00
Sean Silva 92da4988f0 Improve "pseudo" op terminology.
The term "pseudo" is very vague and was getting confusing (I felt I had
to explain it in every comment referencing it). Instead, rework the
"pseudo" ops to instead be named:

- MLIR Syntax: `torch.valsem.*`
- C++ / ODS: `ValsemVariant*Op`

This makes it clear what the concept is, and avoids confusion with other
things that might be called "pseudo", since these are very specific and
should be 100% consistently named w.r.t. the non-valsem-variant ops that
they correspond to.
2022-03-15 17:57:52 -07:00
Sean Silva 84a9693006 Elide `!torch.` prefix in nested dialect types.
This leads to much more succinct types in many cases:

```
!torch.list<!torch.int>
!torch.list<int>

!torch.tuple<!torch.list<!torch.int>, !torch.list<!torch.int>>
!torch.tuple<list<int>, list<int>>

!torch.optional<!torch.list<!torch.int>>
!torch.optional<list<int>>

!torch.list<list<list<tensor>>>
!torch.list<!torch.list<!torch.list<!torch.tensor>>>
```

I would like to take this further and allow omitting the `!torch.`
prefix in all cases, but that's harder -- for example, we currently use
`FuncOp` for functions, and so I don't think we can customize the
printing there. It seems like it will be a longer road to getting that
level of customization.
2022-03-15 17:24:08 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
1. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
1. Code and tests for SimplifyShapeCalculations.
1. shape_lib_gen.py
1. Code and tests for new RefineTypes pass.
1. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
1. New ReadOnly trait inferred from the registry.
1. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
2022-03-15 12:41:58 -07:00
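To illustrate the style, a hypothetical shape function written as ordinary TorchScript'able Python over `List[int]` (the real library ships its own helpers for common patterns like broadcasting):
```
from typing import List

def broadcast_shape(a: List[int], b: List[int]) -> List[int]:
    # Elementwise-binary shape function: right-align the two shapes and
    # apply the usual broadcasting rule dimension by dimension.
    result: List[int] = []
    for i in range(max(len(a), len(b))):
        da = a[len(a) - 1 - i] if i < len(a) else 1
        db = b[len(b) - 1 - i] if i < len(b) else 1
        assert da == db or da == 1 or db == 1, "shapes are not broadcastable"
        result.insert(0, max(da, db))
    return result

assert broadcast_shape([2, 1, 4], [3, 4]) == [2, 3, 4]
```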
Ramiro Leal-Cavazos 51e267aa37
Combine maximize-value-semantics rewrite patterns into one pattern (#642)
This commit replaces the two rewrite patterns of
maximize-value-semantics with a single pattern that captures the
behavior of both as well as other edge cases previously not
supported. The new pattern works by first performing alias analysis on
a subgraph to see if pattern is applicable, then rewriting all
non-value tensors to value tensors in a single go.
2022-03-10 09:36:52 -08:00
Gaurav Shukla e57d3f9774 [LINALG] Fix `aten.bernoulli` op lowering
- This commit adds E2E support for `aten.rand_like` and
  `aten.bernoulli_.Tensor` ops.
- The `aten.bernoulli(x)` was implemented as:
  `aten.bernoulli(x) = rand_like(x) < 0.5`, assuming 0.5 as default
  probability, whereas according to the pytorch documentation:
  https://pytorch.org/docs/stable/generated/torch.bernoulli.html#torch.bernoulli
  the input x in `aten.bernoulli(x)` is itself a tensor containing
  probabilities to be used for drawing the binary random number.
- So this commit fixes the `aten.bernoulli(x)` implementation as:
  `aten.bernoulli(x) = rand_like(x) < x`.
- It also fixes the case where the input to `aten.bernoulli_.float` is
  an integer tensor. In this case the input must be casted to float type
  before passing it as operand to `aten.rand_like` op.
  `aten.bernoulli_.float(x, p) = rand_like(float(x)) < p`.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-05 09:38:22 +05:30
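A sketch of the corrected decomposition in eager PyTorch:
```
import torch

def bernoulli_decomposed(x: torch.Tensor) -> torch.Tensor:
    # Corrected form: each element of x is its own probability, so compare
    # a uniform sample against x itself (not against a fixed 0.5).
    return (torch.rand_like(x) < x).to(x.dtype)

probs = torch.tensor([0.0, 0.25, 1.0])
sample = bernoulli_decomposed(probs)
# p=0.0 always yields 0 and p=1.0 always yields 1.
assert sample[0] == 0.0 and sample[2] == 1.0
```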
Vivek Khandelwal af551bd9cd [MLIR][TORCH] Add E2E support for aten.full_like op
This commit decomposes `aten.full_like` op into `aten.empty_like`
and `aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Vivek Khandelwal d61ae92eee [MLIR][TORCH] Add E2E support for aten.full op
This commit decomposes `aten.full` op into `aten.empty` and
`aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Ramiro Leal-Cavazos 9ce62473f9
Add static type information support to `aten.bmm` (#636)
This commit adds static type information support to `aten.bmm`. This
is needed for the forward pass of Bert training.
2022-03-03 13:01:17 -08:00
Ramiro Leal-Cavazos 5ec70c175d
[LINALG] Add torch-to-linalg lowering for `TensorStaticInfoCastOp` (#634)
This commit adds a lowering for `TensorStaticInfoCastOp` that simply
replaces the op with `tensor::CastOp`.
2022-03-02 13:35:26 -08:00
Yi Zhang 1d285f0153 Add aten.hardtanh e2e support. 2022-03-02 12:28:06 -05:00
Prashant Kumar 819f29316f Decompose aten.silu op
Decomposition of the aten.silu op is added as silu(x) = x * sigmoid(x).
2022-03-01 23:24:19 +05:30
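The decomposition, checked against the builtin in eager mode:
```
import torch

def silu_decomposed(x: torch.Tensor) -> torch.Tensor:
    # silu(x) = x * sigmoid(x)
    return x * torch.sigmoid(x)

x = torch.randn(4)
assert torch.allclose(silu_decomposed(x), torch.nn.functional.silu(x))
```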
Vivek Khandelwal ddd45d6068 [MLIR][TORCH] Add E2E support for aten.new_zeros, aten.new_ones op
This commit adds lowering of `aten.new_zeros` and `aten.new_ones` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-01 22:09:47 +05:30
Prashant Kumar 7c637eebc3 [LINALG] Decompose aten_hardswish op.
`aten.hardswish` op is decomposed into (x/6) * Relu6(x+3).
2022-02-25 21:59:27 +05:30
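The same decomposition written out and checked in eager mode:
```
import torch

def hardswish_decomposed(x: torch.Tensor) -> torch.Tensor:
    # hardswish(x) = (x / 6) * relu6(x + 3)
    return (x / 6) * torch.nn.functional.relu6(x + 3)

x = torch.randn(4)
assert torch.allclose(hardswish_decomposed(x), torch.nn.functional.hardswish(x))
```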
Gaurav Shukla 056cd2078d Revert "[LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`"
This reverts commit 442ff4605c.
2022-02-25 15:46:55 +05:30
Ramiro Leal-Cavazos ba29d4f250
Add operand type invariant to `torch.overwrite.tensor.contents` (#606)
This commit adds the invariant to the op `torch.overwrite.tensor.contents` that
both of its operands have the same shape and size. In order to
maintain the invariant, special handling of this op is added to the
`RefineTypes` pass.
2022-02-22 11:41:46 -08:00
Ramiro Leal-Cavazos ea371a9bf2
Fix handling of view-like ops in `maximize-value-semantics` (#611)
This commit adds handling to the `maximize-value-semantics` pass for
the case where a view-like op depends on a tensor that has been
overwritten by a value tensor. The approach for removing the
dependency is to change the input to the view-like op to be a copy of
the value tensor that is being used to overwrite.

This commit also removes `AtenFill_ScalarOp` and
`AtenBernoulli_FloatOp` from the list of view-like ops, since these
ops now have a corresponding op with value semantics into which they
get converted in the `reduce-op-variants` pass.
2022-02-18 10:19:07 -08:00
Ramiro Leal-Cavazos 2823277f7c
Add static type information support to `aten.mm` (#602)
This commit adds static type information support to `aten.mm`. This is
needed for the forward pass of Bert training.
2022-02-18 09:56:48 -08:00
Nirvedh f8cb32faf0 LLVM bump
Major changes: opTrait changed to Trait, selectOp moved to the arith dialect,
assertOp moved to the cf dialect.
2022-02-16 15:28:13 -05:00
Gaurav Shukla 442ff4605c [LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`
- This commit decomposes the `aten.batch_norm` op into the
  `aten.native_batch_norm` op, instead of lowering it to the
  `linalg.generic` op.
- It also adds run-time asserts in the `aten.native_batch_norm` lowering
  to make sure that the shape of the weight, bias, running_mean, and
  running_var must match the num of features.
- Since the `aten.native_batch_norm` op is not supported at TOSA backend,
  all the modules that are dependent on the `aten.native_batch_norm` op
  will fail and therefore they should be removed from the TOSA `passing`
  set.
- It also moves `checkNotNone` to utility.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 23:41:38 +05:30
Anup Gangwar c60468f141
[tosa] Support for Aten[Zeros|Ones|Fill_Scalar] ops (#604)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-16 09:53:51 -08:00
Prashant Kumar 8b79b5f48f Modify aten._log_softmax op decomposition for numerical stability.
`aten.log_softmax` is decomposed to be more numerically stable.
2022-02-16 12:26:17 +05:30
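The stable form subtracts the row max before exponentiating; a sketch:
```
import torch

def log_softmax_stable(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Subtracting the max keeps exp() from overflowing; the result is
    # mathematically identical to log(softmax(x)).
    m = x.max(dim=dim, keepdim=True).values
    shifted = x - m
    return shifted - shifted.exp().sum(dim=dim, keepdim=True).log()

x = torch.tensor([[1000.0, 1001.0, 1002.0]])  # naive exp() would overflow here
assert torch.allclose(log_softmax_stable(x, -1), torch.log_softmax(x, -1))
```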
Gaurav Shukla cd21dda867 [LINALG] Add E2E support for `aten.Hardsigmoid` op
This commit adds lowering of `aten.Hardsigmoid` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 02:35:18 +05:30
Ramiro Leal-Cavazos 00a6e9c1bb
[LINALG] Add value tensor variant to `fill_.Scalar` (#600)
This commit adds the op `PseudoAtenFillScalarOp` that represents
`AtenFill_ScalarOp` without the underscore. The approach is the same
as in commit dd998fa4d4.

Adding this op allows for a simpler and more consistent version of the
`empty` and `empty_like` op e2e tests.
2022-02-15 11:58:03 -08:00
Ramiro Leal-Cavazos 413e6000d2
[LINALG] Add value tensor variant to `bernoulli_.float` (#597)
This commit adds the op `PseudoAtenBernoulliFloatOp` that represents
`AtenBernoulli_FloatOp` without the underscore. This is needed to make
sure that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value semantics
correctly.
2022-02-14 18:58:48 -08:00
Gaurav Shukla 78c7844c6c [LINALG] Add E2E support for `aten.eq.int` op
- This commit adds lowering of `aten.eq.int` op as a part of
  `convert-torch-to-std` pass.
- It also refactors the code for binary comparison ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 01:37:35 +05:30
Gaurav Shukla f00d1686c8 [LINALG] Add E2E support for `aten.[Bool.Tensor|Float.Tensor]` op
- This commit adds lowering of `aten.Bool.Tensor` and
  `aten.Float.Tensor` op as a part of `convert-torch-to-linalg` pass.
- It also adds support for returning bool types.
- It also fixes lowering of the `aten.Int.Tensor` op for non-zero rank
  input tensors.
- If a scalar number is converted to a 0-d tensor and passed on to the
  `aten.Float.Tensor` op, it folds to the scalar number.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-14 23:09:20 +05:30
Yi Zhang 9e7b6cab08 Add folder for aten.gt/lt.float 2022-02-14 12:34:01 -05:00
Henry Tu 73ac9a7e2e Added support for importing node prim::Constant with list type
Prior to this commit, importing a `prim::Constant` node with list type would result in an error since it was not supported. `ivalue_importer::importIValue` was modified to return the MlirValue corresponding to the root so its parent operation could be extracted.
2022-02-11 20:54:06 -05:00
Yi Zhang ce4d6d1f83 Remove hacky aten.select.int lowering code 2022-02-11 18:14:58 -05:00
Anup Gangwar 756b75fb2d
[tosa] Support for some ops and fix for Issue #532 (#575)
* [tosa] Support for AtenNe[Tensor|Scalar]Op, AtenLog2Op,
AtenBitwiseAndTensorOp, AtenSquareOp and AtenThresholdOp
* Fix for Issue #532 - Mixed input types for a few ops and updated a few
tests to use i32 instead of i64

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-11 12:30:02 -08:00
Prashant Kumar 258660deb6 Add aten.bernoulli decomposition.
aten.bernoulli is decomposed to aten.gt.Tensor(aten.uniform(x), x).
2022-02-11 00:35:33 +05:30
Prashant Kumar 102c497c4c Add decomposition of _log_softmax op.
Decompose _log_softmax into log(softmax(x)).
2022-02-10 23:17:26 +05:30
Prateek Gupta 318946a650 [TORCH][MLIR] Add E2E support for `aten._unsafe_view` op.
This commit adds decomposition of `aten._unsafe_view` op into
`aten.view` op.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2022-02-10 22:28:58 +05:30
Gaurav Shukla bd177bdfc7 [TORCH][MLIR] Add run-time assert support in Torch-dialect
- This commit adds `aten.assert` op in the Torch dialect.
- The `aten.assert` op is lowered to `mlir::Assert` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-09 12:03:01 -05:00
Anup Gangwar f9f97ea184 * [tosa] Support for AtenNativeLayerNormOp
* [tosa] Support for AtenPermuteOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-04 14:46:31 -05:00
Prashant Kumar 68acc8696e Modify softmax decomposition to be more numerically stable.
The softmax decomposition is modified according to
https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.py
to account for numerical stability. Also, the aten.argmax lowering is
modified to handle negative dimensions.
2022-02-03 21:20:36 +05:30
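A sketch of the max-subtraction trick the modified decomposition uses:
```
import torch

def softmax_stable(x: torch.Tensor, dim: int) -> torch.Tensor:
    # exp(x - max(x)) never overflows, and the shift cancels out in the
    # normalization, so this equals softmax(x) exactly.
    shifted = x - x.max(dim=dim, keepdim=True).values
    e = shifted.exp()
    return e / e.sum(dim=dim, keepdim=True)

x = torch.tensor([[100.0, 200.0, 300.0]])
assert torch.allclose(softmax_stable(x, -1), torch.softmax(x, -1))
```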
Suraj Sudhir 1b505cbac5
RefineTypes fixes for TOSA backend (#557)
Handles Linear, Adaptive_AvgPool2D and FlattenUsingInts
Adds ResNet18 static model for TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-02-01 14:08:54 -08:00
Yi Zhang 0cb216a1ad [Torch][Linalg] Add basic support for RNG
This PR include the following pieces:
- Add torch `Generator` type. `Generator` type is converted to i64 in
refbackend type converter.
- Add seed management support for the default global generator.
`torch_c.getNextSeed` op is used to get the seed. On refbackend, the
`torch_c.getNextSeed` is lowered to a load/store at index [0] of the global
variable `default_generator` memref<i64> in the `InsertRngGlobals` pass.
- Add `aten.uniform_` and testing as an example op for RNG ops. Add
`torch.pseudo.aten.uniform` op. It has the same operands and return as
the `aten.uniform_` from the op registry except for value semantics.
2022-01-31 18:56:42 -05:00
Yi Zhang 5d9a15263a [TORCH] Add aten.std e2e support 2022-01-31 15:17:49 -05:00
Prashant Kumar e58b66bc3b Add lowering of `aten.max.dim` op.
Lowering of `aten.max.dim` op has been added.
2022-01-31 21:41:22 +05:30
Anup Gangwar 454fa9d123
* [tosa] Support for AtenFlattenUsingIntsOp (#548) 2022-01-28 21:38:56 -08:00
Liam Fitzpatrick 8bc028af05 Fold __is__ and unchecked_cast of derefine
The added e2e maxpool testcase from #545 was not getting a static shape
due to an unfolded prim.If when RefineTypes was called. This was because
of unfolded torch.aten.__is__ and torch.prim.unchecked_cast operators
with torch.derefine operands.
2022-01-28 17:54:40 -05:00
Anup Gangwar 7a5736facd
* [tosa] Support for AtenReshapeOp (#543)
* [tosa] Support for AtenBatchNormOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-27 14:38:59 -08:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Anup Gangwar f8080bd1c5
* [tosa] Support for AtenRsubScalarOp for scalar constants (#531)
* [tosa] Support for AtenCeilOp and AtenReciprocalOp
* [tosa] Support for comparator ops, Aten[Gt|Lt|Eq][Tensor|Scalar]Op with scalar constant
* [tosa] Support for Scalar variants of Aten[Mul|Div|Add|Sub] Ops with scalar constants

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-20 10:58:30 -08:00
Vivek Khandelwal 6fe70c7794 [MLIR][TORCH] Add E2E support for aten.index.Tensor op
This commit adds lowering of `aten.index.Tensor` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-19 13:37:56 +05:30
dan 3745f54489 Update external/llvm-project
- Add `qualified` to ods because of
https://reviews.llvm.org/D113873 and https://reviews.llvm.org/D116905
- Needed to revert https://github.com/llvm/torch-mlir/pull/520 as it
was based on an old torch version.
https://github.com/llvm/torch-mlir/pull/527 will bring this back with
a better design.
- Change ConvertAtenCatOp to use more accurate tensor shape info and
as much static info as possible to pass `tensor.insert_slice`
verification code added by https://reviews.llvm.org/D114715
- Other minor fixes
2022-01-18 13:25:42 -05:00
Anup Gangwar d69d29b7a6 * [tosa] Support for AtenPowTensorScalarOp with constant Scalar as input
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-11 22:55:54 -05:00
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for other
operations had to be added/improved:
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
2022-01-11 10:25:25 -05:00
Vivek Khandelwal 35cf8d18f7 Add support for two return values
This commit adds support for two return value types:
memref of f32 and i64.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-11 11:07:10 +05:30
Yi Zhang 732a76f45c Make broadcasting result shape more static
This involves the following two parts:
- Change refine types to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor when converting to linalg.
2022-01-06 18:39:27 -05:00
Liam Fitzpatrick ccfdfd1b80 Refine static shapes for conv2d and maxpool2d 2022-01-03 11:09:23 -06:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of `aten.arange.start_step` op.
This commit decomposes `aten.arange` and `aten.arange.start` into
`aten.arange.start_step` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
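The two shorter variants are expressible through `aten.arange.start_step` with the defaults filled in:
```
import torch

# arange(end)        == arange.start_step(0, end, 1)
# arange(start, end) == arange.start_step(start, end, 1)
assert torch.equal(torch.arange(5), torch.arange(0, 5, 1))
assert torch.equal(torch.arange(2, 5), torch.arange(2, 5, 1))
```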
Gaurav Shukla a83004c806 [TORCH][MLIR] Fold trivial cases of `aten.to.dtype` and `aten.view` op
- It folds `aten.to.dtype` when the input tensor type and result type
  are exactly the same.
- It folds `aten.view` when the rank of both the input tensor type and
  result type is unity.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-24 13:32:34 +05:30
xndcn 5eed562e19 add aten.sub.int/aten.mul.int lowering in TorchToStd 2021-12-17 10:35:15 -08:00
Anup Gangwar a6c3050dd0 * [tosa] Support for Maximum and Minimum
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-15 11:58:19 -08:00
Prashant Kumar ab81f871e4 Add aten.tensor.int and aten.tensor.float op lowerings.
Add the required lowerings and correct test cases.
These ops produce zero-d tensors; refine types previously said,
incorrectly, that they produce a 1-d tensor of size 1.
2021-12-15 17:21:34 +05:30
Anup Gangwar cce490d71d
* [tosa] Support for Rsqrt legalization (#480)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-14 10:03:58 -08:00
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of `aten.squeeze.dim` op into
`linalg.TensorCollapseShape` op. Here, the dim-th dimension of the
input tensor is not supposed to be dynamic.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
Suraj Sudhir c9c9b68d1f [tosa] Add Torch reduction operators
- Supports variants with multiple dims, one dim, all dims
- Leverages legalize_common and legalize_utils code from
TensorFlow-TOSA work

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-03 09:01:48 -08:00
Daniel Garvey a52aded0b9
Add lowering for slice and selectInt (#398) 2021-12-02 22:09:21 -06:00
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of `aten.squeeze` op into
`linalg.TensorCollapseShape` op. The size 1 dynamic dimensions are not
handled as a part of this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
Yi Zhang 5d28549c2c Add folder for torch.aten.Int.Tensor
This is to fold the common pattern from Bert inference like:
```
%111 = torch.prim.NumToTensor.Scalar %110 : !torch.int ->
    !torch.vtensor<[],si64>
%112 = torch.aten.Int.Tensor %111 : !torch.vtensor<[],si64> ->
    !torch.int
```
2021-11-30 21:55:48 +05:30
dan 03fdf56f21 add aten.add.int lowering in TorchToStd 2021-11-29 13:22:50 -05:00
Yi Zhang 0fe70994e5 Add support for multiple return values
This change unblocks work on backprop ops that return more than one
tensor. We will need to think of a more scalable approach
in the future if more flexible return types combinations are needed.
2021-11-16 21:07:45 -05:00
Suraj Sudhir 628a21bb13
[mlir][tosa] Refactor conversions to use templates (#416)
- Remove use of conversion construction macros
- Add mul and div op conversions
- Add corresponding tests

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 16:15:58 -08:00
Suraj Sudhir 1019ddf5a0 [tosa] Add structure for eltwise ops
Add a bunch of op legalizations.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 11:03:24 -08:00
Yi Zhang 3bd9d2a4c7 Add e2e support for aten._softmax_backward_data.
Decompose aten._softmax_backward_data into aten math ops. Also decompose
`aten.size` to facilitate decomposing _softmax_backward_data.
2021-11-09 13:09:30 +05:30
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in the TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp`, `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing.
- Added i32 memref return/arg type to support e2e testing.
2021-11-08 17:50:52 -05:00
George Petterson f41958037a Add NumToTensor 2021-11-08 15:56:52 -05:00
Prateek Gupta 18e8806b14 [TORCH][MLIR] Add E2E support for aten::to.dtype.
This commit adds end to end support for AtenToDtypeOp from aten
to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-08 12:56:03 -05:00
Prashant Kumar fd505db2c6 Adding support for returning elemental types.
Support for returning elemental types. Previously, only
memref types were supported as return types. All the hacky ways
of writing tests that return elemental types are now taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-08 22:20:48 +05:30
Prashant Kumar 53b4275ef5 Add lowering of `aten.Int.Tensor` op.
The lowering of `aten.Int.Tensor` op has been added.
The changes have been made as a part of the `convert-torch-to-linalg` pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-01 21:58:08 +05:30
Yi Zhang 752abc8d01 Add type promotion code to refine types.
The types have different levels of categories, where
complex > floating > integral > boolean (> means the left-hand
side has the higher category).

The operands have different levels of priorities where:
dimensioned tensor > 0-dim tensor > scalar == wrapped 0-dim tensor.
This is represented by the `ResultTypeState.dimResult`,
`ResultTypeState.zeroResult` and `ResultTypeState.wrappedResult` in
the source code.

For operands of the same priorities, the result type should be the
highest categories with sufficient width to hold all operands.

By default, only the highest priority operands participate in the type
promotion logic. Lower priority operands participate if they are in
a higher category than any higher priority operands.

For example, <[],f32> (lower priority) and <[1], si64> tensor would
result in <[?],f32> tensor because floating > integral. Another example:
<[],f64> (lower priority) and <[1], f32> tensor would result in
<[?], f32> tensor because f32 and f64 are the same category.

The ScalarType enum definition, type promotion table, ResultTypeState
struct definition and some helpers are copied from
aten/src/ATen/native/TypeProperties.*
Other references:
- https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc
- https://github.com/pytorch/pytorch/issues/9515

Other minor changes:
1. Fix `visitExpandLikeOp` to consider cases where the given sizes list
size is larger than the input rank.
2. Add back the somehow deleted `torch.aten.softmax.int` tests in
decompose-complex-ops.mlir.
2021-10-29 11:17:39 -04:00
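The priority/category rules can be observed directly through `torch.result_type`, which implements the same logic:
```
import torch

# Dimensioned tensors outrank 0-d tensors, which outrank scalars; a lower
# priority operand only influences the result if its *category* is higher.
zero_dim_f32 = torch.tensor(1.0)                       # 0-d float32
one_dim_si64 = torch.tensor([1], dtype=torch.int64)    # 1-d int64
print(torch.result_type(zero_dim_f32, one_dim_si64))   # float32: floating > integral

zero_dim_f64 = torch.tensor(1.0, dtype=torch.float64)
one_dim_f32 = torch.tensor([1.0])
print(torch.result_type(zero_dim_f64, one_dim_f32))    # float32: same category,
                                                       # higher priority wins
```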
Sean Silva eb6996d557 Update llvm-project to 6f9c25167d16acff3ff8e4f54a8c14a2a175fc59
- Changes to dialect conversion that result in no-op materializations
  not being created.
2021-10-28 17:43:04 -07:00
Suraj Sudhir 7e4ef74774
[tosa] Add Torch.sigmoid fp32 to TOSA (#386)
* [tosa] Add Torch.sigmoid fp32 to TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-10-28 10:09:12 -07:00
Prashant Kumar 5009cbf55c Add lowering of aten.matmul op.
Lowering of `aten.matmul` op is added from torch to linalg dialect.
The different cases correspond to
https://pytorch.org/docs/stable/generated/torch.matmul.html.
TODO: Broadcasting in case of batch-matmul is yet to be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-10-26 12:45:09 -04:00
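The cases the lowering distinguishes, in eager terms:
```
import torch

# torch.matmul dispatches on operand rank; the linalg lowering handles
# each case separately.
v = torch.randn(3)
m = torch.randn(3, 4)
b = torch.randn(2, 3, 4)

torch.matmul(v, v)        # 1-d x 1-d: dot product (0-d result)
torch.matmul(m.T, m)      # 2-d x 2-d: ordinary matrix multiply
torch.matmul(v, m)        # 1-d x 2-d: vector-matrix product
torch.matmul(b, m.T @ m)  # batched x 2-d: broadcasts the 2-d operand
```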
Yi Zhang abfaf8c577 Add aten.ne.bool to make CI pass 2021-10-21 14:45:41 -04:00
Yi Zhang a459e09ab7 E2e support for aten.softmax.int and aten.embedding
- Added a DecomposeComplexOps pass to decompose complex torchOps.
- Refactored `visitAtenArgmaxOp` and `visitAtenAnyDimOp` to
`visitReductionAlongDimIntOp`.
- Moved some helper functions into
torch-mlir/Dialect/Torch/Utils/Utils.h to be shared by multiple files.
- Added support for f64 tensor as argument and return types.
2021-10-18 17:57:45 -04:00
Yi Zhang 0902438882 Update llvm-project to a54f4eae0e1d0ef5adccdcf9f6c2b518dc1101aa
This brings in https://reviews.llvm.org/D110797. PRs that are in
progress will need to use scripts provided by
https://llvm.discourse.group/t/psa-removed-arithmetic-ops-from-standard/4455.
2021-10-18 13:36:42 -04:00
Sean Silva 19e9fc4ef1 Bring some more order to the e2e error reporting situation.
- Move `run_pipeline_with_repro_report` to a more common place, and use it
  consistently
- Attach a `torch.debug_module_name` to the enclosing `builtin.module`
  op to allow for self-contained error reporting (not needing to pass
  the names around).
- Remove redundant error reporting in linalg_on_tensors_backend.py and
  tosa_backend.py (their respective backend abstract base classes now
  take care of the error reports themselves)
- Save off original value of sys.stderr, rather than always resetting to
  `sys.__stderr__`. This is just more hygienic, and allows nesting if
  desired.
2021-10-08 13:00:12 -07:00
Sean Silva 0c5c84d63d Add a basic TOSA E2E backend.
We lower through linalg-on-tensors and use RefBackend to run it.
This adds enough support for a "tanh" op. Adding more ops should be
fairly mechanical now that things are wired up. Run with:
```
./tools/torchscript_e2e_test.sh -c tosa
```

The backend structure is very similar to linalg-on-tensors based E2E
backends and is a nice parallel (see `tosa_backend.py`). Actually, this
forced a nice refactoring to the layering here. We removed
`torchscript-module-to-linalg-on-tensors-backend-pipeline` and instead
require separately running
```
torchscript-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline
```
This highlights that the step that lowers to the "torch backend contract"
of cleaned up `torch` dialect ops is a critical one in the lowering.
Going forward, that is the key load-bearing contract of the torch-mlir
project, not the linalg-on-tensors backend contract.

Recommended review order:
- `TorchToTosa.cpp` / `TorchToTosa/basic.mlir`
- `python/torch_mlir_e2e_test/torchscript/configs/tosa_backend.py` and
  the new `utils.py` file there.
- `python/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py` and
  `abc.py` in that directory for the TOSA backend e2e interface.
- other misc mechanical changes
2021-10-08 09:59:45 -07:00
Stephen Neuendorffer 00d42ccaee Add tool substitutions to support out-of-tree builds 2021-10-07 21:16:43 -07:00
dan 2e1498ad11 add i64 support to refbackend 2021-10-05 15:12:44 -04:00
Sean Silva 5b6902e31c Dual license the torch-mlir project.
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.

The standard file comment is now:

```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```

See `LICENSE` in the project root for the terms of both licenses.
2021-10-01 10:46:08 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva a99cbeeb7e Move TorchConversion dialect and TorchTo* into torch-mlir 2021-09-23 21:39:31 -07:00
Sean Silva 2213584c4f VerifyBackendContract -> VerifyLinalgOnTensorsBackendContract
This moves it into TorchConversion since it is only needed there.

This removes the Backend/ directory.
2021-09-23 21:39:31 -07:00
Yi Zhang 603e068e45 E2e implementation for `aten.cat`, `aten.gather`, `aten.bmm`
Also contains the following changes:
- Remove derefineOp canonicalizer because it's not safe.
- Support for optional tensors and lists of tensors in reduceOpVariant. This
only works for some specially detected, easy-to-handle cases. For lists, it
covers the case where the list comes from a `ListConstruct`. For optionals,
it covers the case where the optional is constructed from a `DerefineOp`.
- Remove the `inferReturnTypes` for `FromBuiltinTensorOp` because it's
not safe to deduce types from the input. For example, a built-in tensor
of i8 could be converted to si8 or ui8. It's better to let the user
specify the return type explicitly.
2021-09-22 19:15:01 -04:00
Sean Silva 1a0b953ea7 Eliminate almost all mentions of IREE.
A few remain in examples/docs that will be naturally be updated in due
time.

This regresses the list support and the general direction of more widely
supported control flow, lists/dicts/globals that we were going for with
the TorchScript path. The idea is that we are deferring that work to
make torch-mlir a very clean standalone thing. We will reboot it,
probably using some of the tools of iree_pydm to make it simpler, and in
a more natural place (such as an iree-torch repo that depends on IREE and
torch-mlir to build a working PyTorch frontend solution for IREE -- it
was really weird that npcomp depended on IREE).
2021-09-22 16:06:38 -07:00
Sean Silva a25163fbfa Remove old RefBackend
It is superseded by the new one.
2021-09-22 15:33:28 -07:00
Sean Silva f9c48d0b89 Bring up new RefBackend.
`tools/torchscript_e2e_test.sh` is all green.

This needs a few passes I put into torch-mlir/lib/RefBackend (not to be
confused with `npcomp/lib/RefBackend`, which will soon be deleted).

For the sake of review, since this brings together a lot of things, I
split this into its own commit. I temporarily commented out some "list"
stuff that we are going to remove as part of the torch-mlir refocus.
2021-09-22 14:20:22 -07:00
Sean Silva 68fefe7e1f Remove NPCOMP_ENABLE_IREE CMake flag.
Our new dependency management solution relies:
- on the C++ side with the public iree-dialects project, which we
  include and are using as representative of some missing upstream
  ops (so we treat them "as if" they were upstream, with the hope of
  upstreaming them after some codevelopment has happened)
- on the Python side, with simple PYTHONPATH manipulation or installed
  Python packages. No CMake stuff required.
2021-09-17 09:27:49 -07:00
Sean Silva b6be96d722 [torch-mlir earthmoving (2/N)] Python code movement.
This moves the bulk of the Python code (including the Torch interop)
from `frontends/pytorch` into `torch-mlir/TorchPlugin`. This also
required reconciling a bunch of other Python-related stuff, like the
`torch` dialects.

As I did this, it was simpler to just remove all the old numpy/basicpy
stuff because we were going to delete it anyway and it was faster than
debugging an intermediate state that would only last O(days) anyway.

torch-mlir has two top-level python packages (built into the
`python_packages` directory):

- `torch_mlir_dialects`: `torch` dialect Python bindings (does not
  depend on PyTorch). This also involves building the aggregate CAPI for
  `torch-mlir`.
- `torch_mlir`: bindings to the part of the code that links against
  PyTorch (or C++ code that transitively does).

Additionally, there remain two more Python packages in npcomp (but
outside `torch-mlir`):

- `npcomp_torch`: Contains the e2e test framework and testing configs
  that plug into RefBackend and IREE.
- `npcomp_core`: Contains the low-level interfaces to RefBackend and
  IREE that `npcomp_torch` uses, along with its own
  `MLIR_PYTHON_PACKAGE_PREFIX=npcomp.` aggregation of the core MLIR
  python bindings. (all other functionality has been stripped out)

After all the basicpy/numpy deletions, the `npcomp` C++ code is now very
tiny. It basically just contains RefBackend and the `TorchConversion`
dialect/passes (e.g. `TorchToLinalg.cpp`).

Correspondingly, there are now 4 main testing targets paralleling the
Python layering (which is reflective of the deeper underlying dependency
structure)

- `check-torch-mlir`: checks the `torch-mlir` pure MLIR C++ code.
- `check-torch-mlir-plugin`: checks the code in `TorchPlugin` (e.g.
  TorchScript import)
- `check-frontends-pytorch`: Checks the little code we have in
  `frontends/pytorch` -- mainly things related to the e2e framework
  itself.
- `check-npcomp`: Checks the pure MLIR C++ code inside npcomp.

There is a target `check-npcomp-all` that runs all of them.
The `torch-mlir/build_standalone.sh` script does a standalone build of
`torch-mlir`.

The e2e tests (`tools/torchscript_e2e_test.sh`) are working too.

The update_torch_ods script now lives in
`torch-mlir/build_tools/update_torch_ods.sh` and expects a standalone
build.

This change also required a fix upstream related to cross-shlib Python
dependencies, so we also update llvm-project to
8dca953dd39c0cd8c80decbeb38753f58a4de580 to get
https://reviews.llvm.org/D109776 (no other fixes were needed for the
integrate, thankfully).

This completes most of the large source code changes. Next will be
bringing the CI/packaging/examples back to life.
2021-09-15 13:40:30 -07:00
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it is mostly mechanical changes and
renames. As a quick summary (we can change this down the road easily)
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract" which is cleaned up IR (inlining, simplifcation,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
Sean Silva a7252f9a06 Add basic support for lists.
This plumbs through a vertical slice of support for lists.

The main chunk of new code here is AnnotateABIPass which captures the
program signature at the Torch backend contract layer, right before we
start `TorchConversion`. The `TorchConversion` lowering process is lossy
w.r.t. types, so it's necessary to do this for all targets in general.
Like using `!iree.list` directly, we use IREE's ABI annotation
representation for this, although there is nothing very IREE-specific
about it (see
https://github.com/google/iree/blob/main/docs/developers/design_docs/function_abi.md)

We change `ListLiteralModule_basic` to use `!torch.int` because IREE
doesn't support f64 yet (and we don't yet have a way for users to say
that they want `!torch.float` to lower as f32).

Recommended review order:
- AnnotateABIPass and tests
- Arg marshaling in npcomp_backend.py and `iree.py`
- Updates to `list_programs.py` / `xfail_sets.py`
- Moving DeleteDeadIREEListsPass to Backend/Common, so that backends
  that don't support lists can use it. RefBackend uses that pass, for
  example.
2021-09-09 20:48:55 -07:00
Yi Zhang 73d553e168 MT model compilation minor changes
This contains the following changes:
 - Fix optional knowledge propagation. The initial knowledge should
 always be NotNone for the operations we implemented.
 - Add Folder for `prim.dtype`
2021-09-09 19:02:48 -04:00
Sean Silva ed2afe43e7 Fix TorchToIREE lowering.
We needed to resize the list, not just reserve capacity.
2021-09-03 23:57:54 +00:00
Sean Silva 1dec561cfd Update llvm-project to 830c0b9023cd0cf91955900e0d96283e7a8c3711
- builder.getSymbolRefAttr is gone.
- OpAsmOpInterface's getAsmResultNames method needs explicit override
- a bunch of churn for builtin.func needing to be made explicit (and
  sometimes implicit?)
- operation printers no longer need to print the operation name
  themselves.
- snuck in a beneficial, trivial addition to TmpDeleteDeadIREEListsPass to
  test a particular upstream change e2e with my local patchset.
2021-09-03 14:16:38 -07:00
Yi Zhang 3b0e5910a8 Refine types continue.
This should cover all the ops that are left in MT.
2021-09-02 14:39:28 -04:00
Sean Silva 29e1b2fe89 Delete RestrictedCanonicalizer
It doesn't work properly with the new dialect registration framework.
This was latent and only was exposed when running through npcomp-opt.
Not worth investing the brainpower to fix now.
2021-08-27 19:09:29 +00:00
Yi Zhang d6b9709fa5 Changes to refine types
- Add `!torch.optional` knowledge tracking
- Changes to improve type propagation for branches and terminators. See
examples in `refine-types-branch.mlir`
- Refactor to separate handling of different ops from `visitOperation`
- Add refine types for a few new ops
2021-08-27 11:42:00 -04:00
Yi Zhang bc5eae41ca Add more folders to fold away branches
Added folders to a few binary computing ops, `TupleUnpack`,
`__contains__.str` and `__getitem__.Dict_str`.
2021-08-26 17:37:49 -04:00
Stella Laurenzo 4148f88576 Merge npcomp and mlir python namespaces.
* Now the parts of the MLIR API are directly exported under the npcomp module (i.e. `npcomp.ir`, etc).
* Has required fixes for https://reviews.llvm.org/D108489
* Deletes npcomp.tracing vs fixing it because it was a very early experiment that will not be carried forward.
* This makes the npcomp python distribution completely standalone and separate from an mlir installation.
* Makes most of npcomp itself relocatable for future use as a library.
* Most things are a namespace package now. In the future we can s/torch_mlir/npcomp.frontends.torch/ and have it layer properly.
2021-08-22 21:00:42 -07:00
Sean Silva cab8d922ec Add TorchToIREE and factor out TorchConversion dialect.
This converts a basic list op (torch.prim.ListConstruct) to the IREE
dialect.

```
    def forward(self, x: float):
            return [x, x]
```

turns into:

```
builtin.func @forward(%arg0: !torch.float) -> !torch.list<!torch.float> {
  %0 = torch.prim.ListConstruct %arg0, %arg0 : (!torch.float, !torch.float) -> !torch.list<!torch.float>
  return %0 : !torch.list<!torch.float>
}
```

which turns into:

```
builtin.func @forward(%arg0: f64) -> !iree.list<f64> {
  %c1 = constant 1 : index
  %c0 = constant 0 : index
  %c2 = constant 2 : index
  %0 = iree.list.create %c2 : !iree.list<f64>
  iree.list.set %0[%c0], %arg0 : !iree.list<f64>, f64
  iree.list.set %0[%c1], %arg0 : !iree.list<f64>, f64
  return %0 : !iree.list<f64>
}
```

As part of doing this, I realized that it was time to formalize the IR
form that we reach right before running TorchTo{Linalg,Std,...}. We now
call it the "Torch backend contract". We then lower the "Torch backend
contract" to the "npcomp backend contract", which involves the new
TorchConversion (`torch_c`) dialect, which holds ops that need to
operate on both the npcomp backend types (e.g. builtin tensors, i1, IREE
list, etc.) and the `!torch` types.

This made more sense, as I realized that if I didn't factor out
`torch_c` then the Torch dialect would have a dependency on IREE
dialect (we previously didn't notice this was an issue because we only
depended on `builtin` types), which seemed wrong to me.

Recommended review order:
- TorchToIREE.cpp / `TorchToIREE/basic.mlir`
- Look at the new structure of createTorchScriptToNpcompBackendPipeline.
  It now lives in TorchConversion/Transforms/Passes.cpp and cleanly
  calls into `Torch::createTorchScriptToTorchBackendPipeline` for the
  frontend lowering to the Torch backend contract.
- Mechanical change extracting
  `torch_c.{to,from}_{i1,i64,f64,builtin_tensor,iree_list}` into a new
  TorchConversion dialect, and a few passes specific to the lowering
  from the Torch backend contract to the npcomp backend contract.
- Minor fixes to TorchToLinalg.cpp to use unconverted operands (now that
  we convert lists as part of operand materialization, we need to use
  the original operands). Also added test for AtenMaxPool2dOp and fixed
  m_TorchConstantIntList.
- TmpDeleteDeadIREELists pass. Temporary pass for deleting dead IREE lists that
  are created as part of operand materialization for conv/max pool/avg pool ops
  in TorchToLinalg.
2021-08-16 15:01:58 -07:00
Yi Zhang 85ff8b692b Fix compilation errors from MT model
With the following changes, the compilation can continue until the
RefineTypes pass:

- Add operators without ODS into `torch_ods_gen.py`
- Add some new optional and list types in `TorchTypes.td`
- Add some folders for aten int type comparator ops
- Modify GlobalizeObjectGraph.cpp. For global slots that are not used,
don't check if an aliased value is stored in more than one global
slot. This works around a failure where the same tensor is stored
in multiple "version" slots which are not used.
2021-08-16 16:37:23 -04:00
Sean Silva a3bfd115ee Remove npcomp-iree-backend-lower-linkage pass.
This is no longer needed by IREE.
2021-08-09 15:28:02 -07:00
Yi Zhang 0342b73bf1 Add torch.aten.flatten.using_ints and aten.MaxPool2d linalg lowering
- torch.aten.flatten.using_ints to linalg lowering
- torch.aten.max_pool2d to linalg lowering
- Support torch.aten.conv2d for more flexible dilation and strides values
2021-08-04 12:00:43 -04:00
Sean Silva 496051163f Rename npcomp-run-mlir to refback-run
This better represents its limited scope. This was causing confusion --
people were feeding it higher level ops that require frontend lowering.
2021-08-03 18:24:24 -07:00
Sean Silva f168cacd6d Remove TCF and TCP.
These were legacy concepts that are now superseded by direct Torch to
linalg-on-tensors lowering. These were based on some very early thinking
related to the layering of frontends vs codegen, which is now obsolete
because:
- We expected a lot more centralization at the frontend (TCF) level. It
  turns out that frontend needs really vary a lot, and there is no grand
  unifying TCF dialect plausible. The additional layer isn't worth it.
- Linalg-on-tensors obsoletes the primary need for TCP. There are still
  a few things not representable with linalg-on-tensors, but the support
  is growing and the whole "not included in linalg-on-tensors" direction
  needs to be rethought. Our TCP dialect didn't cover any of the
  actually important things in this space (such as sort, FFT, top-k,
  etc.).

See historical [slides](https://drive.google.com/file/d/1iljcpTQ5NPaMfGpoPDFml1XkYxjK_6A4/view) / [recording](https://drive.google.com/file/d/1jSPa8TwPKUt0WuLquGc8OgSUVYJHMvWZ/view)
for more details on the origin story here.

Their presence was confusing users too
[bug](https://github.com/llvm/mlir-npcomp/issues/248).

Also,
- Trim down npcomp-run-mlir testing. It was testing TCF to TCP
  lowering for the most part. The essential stuff is retained and
  rephrased with linalg-on-tensors. (we should probably rename it
  "refback-run" or something, as it is just a way to invoke RefBackend)
- test/Python/Backend/RefJIT/simple_invoke_numpy.py is XFAIL'ed. Our
  "anti-framework" direction seems to be the likely future path.
2021-08-02 12:08:39 -07:00
Stella Laurenzo 445472c51e Build packages for npcomp-torch.
* Adds a minimal setup.py for frontends/pytorch
* Makes npcomp-core export its headers and libraries
* Adds a script to build packages.
* Adds CI step to package and smoke test.
* Will need some more tweaks and coordination prior to deploying (version locking etc).
2021-07-29 19:58:59 -07:00