Commit Graph

298 Commits (1bde00c73d58d98e83d3a9d3f51d3495611067e8)

Author SHA1 Message Date
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Ramiro Leal-Cavazos 0bcc6d1075
Add maximize-value-semantics support for multiple non-value tensor inputs (#659)
This commit adds value semantics support for ops such as
`aten.view_as` and `aten.expand_as` that take two non-value 
tensors as input.
2022-03-15 18:13:45 -07:00
Sean Silva 92da4988f0 Improve "pseudo" op terminology.
The term "pseudo" is very vague and was getting confusing (I felt I had
to explain it in every comment referencing it). Instead, rework the
"pseudo" ops to instead be named:

- MLIR Syntax: `torch.valsem.*`
- C++ / ODS: `ValsemVariant*Op`

This makes it clear what the concept is, and avoids confusion with other
things that might be called "pseudo", since these are very specific and
should be 100% consistently named w.r.t. the non-valsem-variant ops that
they correspond to.
2022-03-15 17:57:52 -07:00
Sean Silva 7ea50a537a Avoid `using` the `torch_upstream` namespace.
This is code that we always want to treat as "foreign" and not get too
comfortable using in many functions. One way to accomplish that is to
make it a bit clunkier to use.

Also, fix Utils.cpp to match the LLVM/MLIR coding conventions (don't
define functions inside namespaces -- prefer `using` and explicit
qualification).
2022-03-15 17:24:17 -07:00
Sean Silva 84a9693006 Elide `!torch.` prefix in nested dialect types.
This leads to much more succinct types in many cases:

```
!torch.list<!torch.int>
!torch.list<int>

!torch.tuple<!torch.list<!torch.int>, !torch.list<!torch.int>>
!torch.tuple<list<int>, list<int>>

!torch.optional<!torch.list<!torch.int>>
!torch.optional<list<int>>

!torch.list<!torch.list<!torch.list<!torch.tensor>>>
!torch.list<list<list<tensor>>>
```

I would like to take this further and allow omitting the `!torch.`
prefix in all cases, but that's harder -- for example, we currently use
`FuncOp` for functions, and so I don't think we can customize the
printing there. It seems like it will be a longer road to getting that
level of customization.
2022-03-15 17:24:08 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
2. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
3. Code and tests for SimplifyShapeCalculations.
4. shape_lib_gen.py
5. Code and tests for new RefineTypes pass.
6. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
7. New ReadOnly trait inferred from the registry.
8. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
2022-03-15 12:41:58 -07:00
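
A rough, hypothetical sketch of what a shape function in the new library can look like (plain TorchScript'able Python over `List[int]` shapes; the function names here are illustrative, not the actual registry API):

```
from typing import List

# Hypothetical shape function for an elementwise unary op such as aten.tanh:
# the result shape equals the input shape.
def unary_shape(self: List[int]) -> List[int]:
    return list(self)

# Hypothetical shape function for aten.mm: check the contraction dimension
# and return the result shape.
def mm_shape(self: List[int], mat2: List[int]) -> List[int]:
    assert len(self) == 2 and len(mat2) == 2, "mm expects rank-2 operands"
    assert self[1] == mat2[0], "mismatched contraction dimension"
    return [self[0], mat2[1]]
```
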
Ramiro Leal-Cavazos 51e267aa37
Combine maximize-value-semantics rewrite patterns into one pattern (#642)
This commit replaces the two rewrite patterns of
maximize-value-semantics with a single pattern that captures the
behavior of both as well as other edge cases previously not
supported. The new pattern works by first performing alias analysis on
a subgraph to see if the pattern is applicable, then rewriting all
non-value tensors to value tensors in a single go.
2022-03-10 09:36:52 -08:00
Prateek Gupta 3d9ba5e525 [MLIR][TORCH] Add E2E support for aten.erf op.
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-09 22:22:03 +05:30
Vivek Khandelwal 1a2a9e066f [MLIR][TORCH] Add TorchToTMTensor pass
This pass is added to lower ops that cannot be lowered
via the TorchToLinalg pass, such as the `torch.bincount` op.
This pass also uses torch-mlir's TMTensor dialect to lower the
complex ops.

Also add the torch.bincount op lowering with the help of the TMTensor dialect.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
Gaurav Shukla e57d3f9774 [LINALG] Fix `aten.bernoulli` op lowering
- This commit adds E2E support for `aten.rand_like` and
  `aten.bernoulli_.Tensor` ops.
- The `aten.bernoulli(x)` was implemented as:
  `aten.bernoulli(x) = rand_like(x) < 0.5`, assuming 0.5 as default
  probability, whereas according to the pytorch documentation:
  https://pytorch.org/docs/stable/generated/torch.bernoulli.html#torch.bernoulli
  the input x in `aten.bernoulli(x)` is itself a tensor containing
  probabilities to be used for drawing the binary random number.
- So this commit fixes the `aten.bernoulli(x)` implementation as:
  `aten.bernoulli(x) = rand_like(x) < x`.
- It also fixes the case where the input to `aten.bernoulli_.float` is
  an integer tensor. In this case the input must be cast to float type
  before passing it as an operand to the `aten.rand_like` op:
  `aten.bernoulli_.float(x, p) = rand_like(float(x)) < p`.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-05 09:38:22 +05:30
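
A minimal PyTorch sketch of the corrected `aten.bernoulli(x)` semantics described in the commit above, assuming `x` holds per-element probabilities:

```
import torch

# aten.bernoulli(x) = rand_like(x) < x: each element is 1 with probability x[i].
def bernoulli_like(x: torch.Tensor) -> torch.Tensor:
    return (torch.rand_like(x) < x).to(x.dtype)

probs = torch.tensor([0.0, 0.25, 1.0])
print(bernoulli_like(probs))  # first element is always 0, last is always 1
```
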
Vivek Khandelwal af551bd9cd [MLIR][TORCH] Add E2E support for aten.full_like op
This commit decomposes `aten.full_like` op into `aten.empty_like`
and `aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Vivek Khandelwal d61ae92eee [MLIR][TORCH] Add E2E support for aten.full op
This commit decomposes `aten.full` op into `aten.empty` and
`aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
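
A rough Python equivalent of the two decompositions above, with the in-place `fill_` standing in for `aten.fill`:

```
import torch

# aten.full      -> aten.empty      + aten.fill
# aten.full_like -> aten.empty_like + aten.fill
def full(size, fill_value, dtype=None):
    return torch.empty(size, dtype=dtype).fill_(fill_value)

def full_like(x, fill_value):
    return torch.empty_like(x).fill_(fill_value)

print(full((2, 3), 7.0))
print(full_like(torch.zeros(2, 2, dtype=torch.int64), 5))
```
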
Ramiro Leal-Cavazos 9ce62473f9
Add static type information support to `aten.bmm` (#636)
This commit adds static type information support to `aten.bmm`. This
is needed for the forward pass of Bert training.
2022-03-03 13:01:17 -08:00
Yi Zhang 1d285f0153 Add aten.hardtanh e2e support. 2022-03-02 12:28:06 -05:00
Prashant Kumar 819f29316f Decompose aten.silu op
Decomposition of the aten.silu op is added as silu(x) = x * sigmoid(x).
2022-03-01 23:24:19 +05:30
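
A quick sanity-check sketch of the decomposition above in PyTorch terms:

```
import torch

# silu(x) = x * sigmoid(x)
def silu(x: torch.Tensor) -> torch.Tensor:
    return x * torch.sigmoid(x)

t = torch.randn(8)
assert torch.allclose(silu(t), torch.nn.functional.silu(t))
```
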
Vivek Khandelwal ddd45d6068 [MLIR][TORCH] Add E2E support for aten.new_zeros, aten.new_ones op
This commit adds lowering of `aten.new_zeros` and `aten.new_ones` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-01 22:09:47 +05:30
Prashant Kumar 7c637eebc3 [LINALG] Decompose aten_hardswish op.
`aten.hardswish` op is decomposed into (x/6) * Relu6(x+3).
2022-02-25 21:59:27 +05:30
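
The same decomposition sketched in PyTorch, with relu6 written as a clamp:

```
import torch

# hardswish(x) = (x / 6) * relu6(x + 3)
def hardswish(x: torch.Tensor) -> torch.Tensor:
    return (x / 6) * torch.clamp(x + 3, min=0, max=6)

t = torch.randn(8)
assert torch.allclose(hardswish(t), torch.nn.functional.hardswish(t))
```
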
Gaurav Shukla 056cd2078d Revert "[LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`"
This reverts commit 442ff4605c.
2022-02-25 15:46:55 +05:30
Ramiro Leal-Cavazos ba29d4f250
Add operand type invariant to `torch.overwrite.tensor.contents` (#606)
This commit adds the invariant to the op `torch.overwrite.tensor.contents` that
both of its operands have the same shape and size. In order to
maintain the invariant, special handling of this op is added to the
`RefineTypes` pass.
2022-02-22 11:41:46 -08:00
Ramiro Leal-Cavazos ea371a9bf2
Fix handling of view-like ops in `maximize-value-semantics` (#611)
This commit adds handling to the `maximize-value-semantics` pass for
the case where a view-like op depends on a tensor that has been
overwritten by a value tensor. The approach for removing the
dependency is to change the input to the view-like op to be a copy of
the value tensor that is being used to overwrite.

This commit also removes `AtenFill_ScalarOp` and
`AtenBernoulli_FloatOp` from the list of view-like ops, since these
ops now have a corresponding op with value semantics into which they
get converted in the `reduce-op-variants` pass.
2022-02-18 10:19:07 -08:00
Ramiro Leal-Cavazos 2823277f7c
Add static type information support to `aten.mm` (#602)
This commit adds static type information support to `aten.mm`. This is
needed for the forward pass of Bert training.
2022-02-18 09:56:48 -08:00
Prashant Kumar ed9bd556b3 Fix bug for aten_nll_loss op in the refine types pass
The check for `self.hasSizes` was missing before performing `.size()`
operation.
2022-02-17 19:02:12 +05:30
Nirvedh f8cb32faf0 LLVM bump
Major changes: opTrait changed to Trait, selectOp moved to the arith dialect,
assertOp moved to the cf dialect.
2022-02-16 15:28:13 -05:00
Gaurav Shukla 442ff4605c [LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`
- This commit decomposes the `aten.batch_norm` op into the
  `aten.native_batch_norm` op, instead of lowering it to the
  `linalg.generic` op.
- It also adds run-time asserts in the `aten.native_batch_norm` lowering
  to make sure that the shapes of the weight, bias, running_mean, and
  running_var match the number of features.
- Since the `aten.native_batch_norm` op is not supported by the TOSA backend,
  all the modules that depend on the `aten.native_batch_norm` op
  will fail and therefore should be removed from the TOSA `passing`
  set.
- It also moves `checkNotNone` to utility.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 23:41:38 +05:30
Prashant Kumar 8b79b5f48f Modify aten._log_softmax op decomposition for numerical stability.
`aten.log_softmax` is decomposed to be more numerically stable.
2022-02-16 12:26:17 +05:30
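
The standard stabilization sketched in Python: shift by the per-row max before exponentiating, so `log_softmax(x) = (x - max) - log(sum(exp(x - max)))` and `exp` never overflows:

```
import torch

def stable_log_softmax(x: torch.Tensor, dim: int) -> torch.Tensor:
    shifted = x - x.max(dim, keepdim=True).values
    return shifted - shifted.exp().sum(dim, keepdim=True).log()

t = torch.tensor([[1000.0, 1001.0, 1002.0]])
naive = (t.exp() / t.exp().sum(-1, keepdim=True)).log()  # exp overflows -> nan
print(naive)                      # tensor([[nan, nan, nan]])
print(stable_log_softmax(t, -1))  # finite values
```
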
Gaurav Shukla cd21dda867 [LINALG] Add E2E support for `aten.Hardsigmoid` op
This commit adds lowering of `aten.Hardsigmoid` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 02:35:18 +05:30
Ramiro Leal-Cavazos 00a6e9c1bb
[LINALG] Add value tensor variant to `fill_.Scalar` (#600)
This commit adds the op `PseudoAtenFillScalarOp` that represents
`AtenFill_ScalarOp` without the underscore. The approach is the same
as in commit dd998fa4d4.

Adding this op allows for a simpler and more consistent version of the
`empty` and `empty_like` op e2e tests.
2022-02-15 11:58:03 -08:00
Gaurav Shukla 41acde599b [LINALG] Add E2E support for `aten.[le|ge].Scalar` ops
- This commit adds lowering of `aten.le.Scalar` and `aten.ge.Scalar` ops
  as a part of `convert-torch-to-linalg` pass.
- It also creates a new test script `elementwise_comparison.py` for all
  element-wise comparison ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 12:21:09 +05:30
Ramiro Leal-Cavazos 413e6000d2
[LINALG] Add value tensor variant to `bernoulli_.float` (#597)
This commit adds the op `PseudoAtenBernoulliFloatOp` that represents
`AtenBernoulli_FloatOp` without the underscore. This is needed to make
sure that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value semantics
correctly.
2022-02-14 18:58:48 -08:00
Gaurav Shukla f00d1686c8 [LINALG] Add E2E support for `aten.[Bool.Tensor|Float.Tensor]` op
- This commit adds lowering of `aten.Bool.Tensor` and
  `aten.Float.Tensor` op as a part of `convert-torch-to-linalg` pass.
- It also adds support for returning bool types.
- It also fixes lowering of the `aten.Int.Tensor` op for non-zero rank
  input tensors.
- If a scalar number is converted to a 0-d tensor and passed on to the
  `aten.Float.Tensor` op, it folds to the scalar number.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-14 23:09:20 +05:30
Yi Zhang 9e7b6cab08 Add folder for aten.gt/lt.float 2022-02-14 12:34:01 -05:00
Yi Zhang ce4d6d1f83 Remove hacky aten.select.int lowering code 2022-02-11 18:14:58 -05:00
Ramiro Leal-Cavazos c1167853db
Fix error in RefineTypes for constant alloc ops (#579)
This commit fixes an error in the refine types pass for constant
allocation ops. The function used to set the dtype,
`fillInDtypeGivenDtypeAndDataType`, takes two torch types as arguments,
but a torch type and a standard MLIR type were being passed into it.

This commit also fixes the way the dtype was calculated in
`visitAtenToDtypeOp`. This op was also passing a standard MLIR type as
an argument to the `fillInDtypeGivenDtypeAndDataType`
function. Moreover, since the dtype argument of the op `aten.to.dtype`
is not optional, all that is needed is to match
against the int value to extract the dtype.
2022-02-10 18:02:18 -08:00
Prashant Kumar 258660deb6 Add aten.bernoulli decomposition.
aten.bernoulli is decomposed to aten.gt.Tensor(aten.uniform(x), x).
2022-02-11 00:35:33 +05:30
Prashant Kumar 102c497c4c Add decomposition of _log_softmax op.
Decompose _log_softmax into log(softmax(x)).
2022-02-10 23:17:26 +05:30
Prateek Gupta 318946a650 [TORCH][MLIR] Add E2E support for `aten._unsafe_view` op.
This commit adds decomposition of `aten._unsafe_view` op into
`aten.view` op.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2022-02-10 22:28:58 +05:30
Ramiro Leal-Cavazos 9b89f8eb3f
[TORCH][MLIR] Add E2E support for aten.clone (#571)
This commit adds support for the aten.clone op.
2022-02-09 19:31:03 -08:00
Gaurav Shukla 2fefe68ffd [TORCH][MLIR] Add E2E support for `aten.native_batch_norm` op
- This commit adds support for `aten.native_batch_norm` operation.
- The current implementation only supports inference mode of
  `aten.native_batch_norm` op.

Signed-Off-By: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-08 02:54:03 +05:30
Prashant Kumar ccf546f14c Add aten::nll_loss_backward op
The lowering of the aten::nll_loss_backward op has been added
from the torch dialect to the linalg dialect. The changes have been made as
part of the -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar prashant@nod-labs.com
2022-02-04 21:57:53 +05:30
Prashant Kumar 68acc8696e Modify softmax decomposition to be more numerically stable.
The softmax decomposition is modified according to https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.pytorch
to account for numerical stability. The aten.argmax lowering is also modified
to handle negative dimensions.
2022-02-03 21:20:36 +05:30
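
The same max-subtraction trick for the forward softmax, sketched in Python (softmax is invariant to subtracting a per-row constant, so shifting by the row max keeps `exp` bounded); the argmax line illustrates the negative-dimension handling also mentioned above:

```
import torch

def stable_softmax(x: torch.Tensor, dim: int) -> torch.Tensor:
    shifted = x - x.max(dim, keepdim=True).values
    e = shifted.exp()
    return e / e.sum(dim, keepdim=True)

t = torch.tensor([[100.0, 200.0, 300.0]])
print(stable_softmax(t, -1))    # ~[0, 0, 1], no overflow
print(torch.argmax(t, dim=-1))  # negative dims resolve to the last dim
```
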
Gaurav Shukla 0079901039 [TORCH][MLIR] Add E2E support for `aten.reshape` op
This commit decomposes `aten.reshape` into `aten.view` op in the case of
value tensor type operand.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-02 20:41:47 +05:30
Suraj Sudhir 1b505cbac5
RefineTypes fixes for TOSA backend (#557)
Handles Linear, Adaptive_AvgPool2D and FlattenUsingInts
Adds ResNet18 static model for TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-02-01 14:08:54 -08:00
Yi Zhang 0cb216a1ad [Torch][Linalg] Add basic support for RNG
This PR includes the following pieces:
- Add torch `Generator` type. `Generator` type is converted to i64 in
refbackend type converter.
- Add seed management support for the default global generator. The
`torch_c.getNextSeed` op is used to get the seed. On refbackend,
`torch_c.getNextSeed` is lowered to a load/store from [0] of the global
variable `default_generator` memref<i64> in the `InsertRngGlobals` pass.
- Add `aten.uniform_` and its tests as an example of RNG ops. Add the
`torch.pseudo.aten.uniform` op. It has the same operands and return as
`aten.uniform_` from the op registry, except with value semantics.
2022-01-31 18:56:42 -05:00
Yi Zhang 5d9a15263a [TORCH] Add aten.std e2e support 2022-01-31 15:17:49 -05:00
Prashant Kumar e58b66bc3b Add lowering of `aten.max.dim` op.
Lowering of `aten.max.dim` op has been added.
2022-01-31 21:41:22 +05:30
Liam Fitzpatrick 8bc028af05 Fold __is__ and unchecked_cast of derefine
The added e2e maxpool test case from #545 was not getting a static shape
due to an unfolded prim.If when RefineTypes was called. This was because
of unfolded torch.aten.__is__ and torch.prim.unchecked_cast operators
with torch.derefine operands.
2022-01-28 17:54:40 -05:00
stephenneuendorffer 52ed3313b4
Bump LLVM to 84fe34a0b7fdd7bbf179981d1583693d5d5ec68b (#544)
* external/llvm-project 881ff4e4ebe8...84fe34a0b7fd (466):
  > [MLIR] Workaround for python detection problems.
2022-01-27 17:21:09 -08:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Yi Zhang ad4b9e0369 Minor fixes 2022-01-24 19:21:15 -05:00
Vivek Khandelwal 6fe70c7794 [MLIR][TORCH] Add E2E support for aten.index.Tensor op
This commit adds lowering of `aten.index.Tensor` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-19 13:37:56 +05:30
dan 3745f54489 Update external/llvm-project
- Add `qualified` to ods because of
https://reviews.llvm.org/D113873 and https://reviews.llvm.org/D116905
- Needed to revert https://github.com/llvm/torch-mlir/pull/520 as it
was based on an old torch version.
https://github.com/llvm/torch-mlir/pull/527 will bring this back with
a better design.
- Change ConvertAtenCatOp to use more accurate tensor shape info and
as much static info as possible to pass `tensor.insert_slice`
verification code added by https://reviews.llvm.org/D114715
- Other minor fixes
2022-01-18 13:25:42 -05:00
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for other
operations had to be added/improved:
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
2022-01-11 10:25:25 -05:00
Vivek Khandelwal ca662dc9cc [MLIR][TORCH] Add E2E support for aten.threshold, aten.threshold_backward op
This commit adds lowering of `aten.threshold` op
This commit adds lowering of `aten.threshold_backward` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-10 11:56:56 +05:30
Yi Zhang 732a76f45c Make broadcasting result shape more static
This involves the following two parts:
- Change refine type to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor when converting to linalg.
2022-01-06 18:39:27 -05:00
Gaurav Shukla 3c40539b34 [TORCH][MLIR] Add E2E support for `aten.[ones_like|zeros_like]`
- This commit adds E2E support for `aten.ones_like` and
  `aten.zeros_like` ops.
- Adds support for non-None `dtype` argument of `aten.empty_like` op.
- All the unit test cases related to constant tensor allocation like ops
  are moved to a different file named `constant_alloc.py`.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-01-06 20:24:40 +05:30
Liam Fitzpatrick ccfdfd1b80 Refine static shapes for conv2d and maxpool2d 2022-01-03 11:09:23 -06:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of `aten.arange.start_step` op.
This commit decomposes `aten.arange` and `aten.arange.start` into
`aten.arange.start_step` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
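
In effect, the decomposition above relies on these equivalences:

```
import torch

# aten.arange(end)              -> aten.arange.start_step(0, end, 1)
# aten.arange.start(start, end) -> aten.arange.start_step(start, end, 1)
assert torch.equal(torch.arange(5), torch.arange(0, 5, 1))
assert torch.equal(torch.arange(2, 7), torch.arange(2, 7, 1))
```
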
Gaurav Shukla a83004c806 [TORCH][MLIR] Fold trivial cases of `aten.to.dtype` and `aten.view` op
- It folds `aten.to.dtype` when the input tensor type and result type
  are exactly the same.
- It folds `aten.view` when the rank of both the input tensor type and
  result type is unity.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-24 13:32:34 +05:30
Nirvedh 3cb46cecef Added aten::t() Op 2021-12-22 10:57:10 -05:00
xndcn 5eed562e19 add aten.sub.int/aten.mul.int lowering in TorchToStd 2021-12-17 10:35:15 -08:00
Gaurav Shukla bc9abbc1c9 [TORCH][MLIR] Add E2E support for `aten.empty_like` op
This commit adds decomposition of `aten.empty_like` into `aten.empty`
op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-16 20:17:39 +05:30
Gaurav Shukla eddc09aa55 [TORCH][MLIR] Add E2E support for `aten.eq` and `aten.lt` ops
- Added E2E support for `aten.eq.Tensor` and `aten.lt.Tensor` ops. Both
  the operands are expected to be of the same type, i.e., type promotion
  is not addressed as a part of this commit.
- Added E2E support for `aten.eq.Scalar` and `aten.lt.Scalar` ops.
  Tensor operand type to Scalar operand type promotion has not been
  handled in this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-16 18:47:22 +05:30
Prashant Kumar ab81f871e4 Add aten.tensor.int and aten.tensor.float op lowerings.
Add the required lowerings and correct test cases.
These ops produce zero-d tensors, which were incorrectly reported in
refine types as 1-d tensors of size 1.
2021-12-15 17:21:34 +05:30
Prashant Kumar 528354de84 Add `aten.gt.Tensor` op
The `aten.gt.Tensor` op has been added to the torch dialect and
its lowering to the linalg dialect has been implemented.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-13 00:08:52 +05:30
Gaurav Shukla a778f990e9 [TORCH][MLIR] Add E2E support for `aten.ceil` op
This commit adds lowering of `aten.ceil` op as a part of element-wise
ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-12 01:15:47 +05:30
Prateek Gupta cfc8de36f8
[MLIR][TORCH] Add E2E support for `aten.native_layer_norm`. (#470)
This commit adds support for the aten.native_layer_norm operation. Here
the previous code for aten.layer_norm is tweaked a little bit to
accommodate both mean and variance values along with the layer norm
value. This commit also adds decomposition of aten.layer_norm into
aten.native_layer_norm, which was previously getting lowered directly
to linalg.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2021-12-10 19:06:19 +05:30
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of `aten.squeeze.dim` op into
`linalg.TensorCollapseShape` op. Here, the dim(th) dimension of the
input tensor is not supposed to be dynamic.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
Gaurav Shukla f34eb66124 [TORCH][MLIR] Add E2E support for [`aten.gt.Scalar`|`aten.where.self`]
This commit adds lowering of `aten.gt.Scalar` and `aten.where.self` as a
part of element-wise ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-09 12:47:10 +05:30
Vivek Khandelwal 9958cf08b6 [MLIR][TORCH] Add E2E support for aten.zeros op
This commit adds lowering of `aten.zeros` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-08 22:42:33 +05:30
Prashant Kumar 977b1b03ea Add aten::nll_loss_forward op lowering.
The op lowering has been added as a part of `torch-lower-to-linalg`
pass. This takes care of ignore_index, but the weight and reduction
operands are still to be accounted for.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-07 17:11:08 +05:30
Prashant Kumar 5c7ce45c4e Update external llvm to 966b72098363d44adf2882b9c34
The external llvm is updated to point to
https://reviews.llvm.org/rG966b72098363d44adf2882b9c34fcdbe344ff913.
Some of the changes w.r.t. NamedAttr have been addressed.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-06 23:33:58 +05:30
Suraj Sudhir c9c9b68d1f [tosa] Add Torch reduction operators
- Supports variants with multiple dims, one dim, and all dims
- Leverages legalize_common and legalize_utils code from
TensorFlow-TOSA work

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-03 09:01:48 -08:00
Prashant Kumar ab6211184f Bug fixes that pop up when updating the generated aten ops td
There is an op name change that requires trivial changes.
Also, some of the warnings have been fixed.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-03 22:18:18 +05:30
Yi Zhang 24bc06fc8d Fix compilation warnings. 2021-12-03 11:44:32 -05:00
Daniel Garvey a52aded0b9
Add lowering for slice and selectInt (#398) 2021-12-02 22:09:21 -06:00
Vivek Khandelwal 46a2189a41 [MLIR][TORCH] Add E2E support for aten.bitwise_and.tensor op
This commit adds lowering of `aten.bitwise_and.tensor` op.

Signed-Off By: Vivek Khandelwal vivek@nod-labs.com
2021-12-02 21:06:15 +05:30
Vivek Khandelwal 46a0668b3b [MLIR][TORCH] Add E2E support for aten.mean and aten.numel op.
This commit adds lowering of `aten.mean` and `aten.numel` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-02 11:51:13 +05:30
Ramiro Leal-Cavazos e6675a50d3 Add support for dtype argument in reduction ops
Many reduction ops take as an argument an optional output dtype that
can change the type of the input tensor before the reduction is
performed. This commit adds support for the optional dtype flag that
had been previously ignored.

Test:
/tools/torchscript_e2e_test.sh -f 'ReduceSumDtype'
/tools/torchscript_e2e_test.sh -f 'ReduceSumDImIntListDtype'
2021-11-30 12:53:59 -05:00
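
For reference, the behavior being supported: the optional `dtype` converts the input before the reduction runs, e.g.

```
import torch

x = torch.ones(4, dtype=torch.int32)
print(torch.sum(x).dtype)                        # torch.int64 (default integer accumulation)
print(torch.sum(x, dtype=torch.float32).dtype)   # torch.float32
print(torch.sum(x, dim=0, dtype=torch.float64))  # tensor(4., dtype=torch.float64)
```
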
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of the `aten.squeeze` op into the
`linalg.TensorCollapseShape` op. Size-1 dynamic dimensions are not
handled as part of this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
ds1231h 9ad5954e41 aten.abs and aten.reciprocal to linalg 2021-11-30 11:31:55 -05:00
Yi Zhang 5d28549c2c Add folder for torch.aten.Int.Tensor
This is to fold the common pattern from Bert inference like:
```
%111 = torch.prim.NumToTensor.Scalar %110 : !torch.int ->
    !torch.vtensor<[],si64>
%112 = torch.aten.Int.Tensor %111 : !torch.vtensor<[],si64> ->
    !torch.int
```
2021-11-30 21:55:48 +05:30
Prashant Kumar 36afa4a4d3 Add aten.fill.Scalar op lowering
The lowering of aten.fill.Scalar has been added.
The changes have been made as part of the -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-30 21:12:15 +05:30
Daniel Garvey 539511c19b
Add dropout op (#436)
Co-authored-by: dan <dan@nod-labs.com>
2021-11-29 12:30:03 -06:00
dan 03fdf56f21 add aten.add.int lowering in TorchToStd 2021-11-29 13:22:50 -05:00
Liam Fitzpatrick 7616d28ce1 Add leakyrelu support 2021-11-27 23:04:46 +05:30
Prateek Gupta f461a7ebce
[TORCH][MLIR] Add E2E support for aten._softmax operation. (#431)
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-25 11:19:02 +05:30
nodlabs 67ce816fca lowered addcmul and addcdiv to linalg 2021-11-24 17:26:47 -05:00
Ramiro Leal-Cavazos 56c6e3676b Fix bug in NumToTensor handling of float values
This commit fixes a type promotion bug when NumToTensor was given a
float as an argument. In particular, the rules for type promotion of a
scalar vary depending on whether the scalar is part of a tensor op or
not. NumToTensor falls under the second category, but it was being
treated as part of the first category.
2021-11-23 11:47:44 -05:00
Prashant Kumar 1dc374014b Refactor to share code in DecomposeComplexOps pass
Share code in `log_softmax_backward` and `softmax_backward` ops.
2021-11-20 00:39:34 +05:30
Prashant Kumar ea7a30f9b9 Add e2e test for aten.log_softmax_back_data op
aten.log_softmax_back_data op lowering and the required
tests have been added. Some NFC changes have also been made.

Signed-off-by: Prashant Kumar prashant@nod-labs.com
2021-11-19 00:08:28 +05:30
Gaurav Shukla 663fc1ef51 [MLIR][TORCH] Add E2E support for [`aten.mul.Scalar`|`aten.addmm`]
This commit adds lowering of `aten.mul.Scalar` and also adds
decomposition of `aten.addmm` to `aten.mul.Scalar`, `aten.add.Tensor`
and `aten.mm` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-18 22:26:41 +05:30
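
A quick Python check of the decomposition above (`aten.addmm` expressed via `mul.Scalar`, `mm`, and `add.Tensor`):

```
import torch

# addmm(input, mat1, mat2, beta, alpha) = beta * input + alpha * (mat1 @ mat2)
def addmm(inp, mat1, mat2, beta=1.0, alpha=1.0):
    return beta * inp + alpha * torch.mm(mat1, mat2)

a, b, c = torch.randn(3, 4), torch.randn(3, 5), torch.randn(5, 4)
assert torch.allclose(addmm(a, b, c, beta=0.5, alpha=2.0),
                      torch.addmm(a, b, c, beta=0.5, alpha=2.0))
```
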
Prateek Gupta ecf78b9849
[TORCH][MLIR] Add E2E support for `aten.gelu_backward` operation. (#418)
This commit adds new operation `aten.gelu_backward` in the aten
dialect and adds lowering of this operation from aten to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-17 14:59:38 +05:30
Yi Zhang 0fe70994e5 Add support for multiple return values
This change is to unblock the work on some backprop ops returning more
than one tensor. We will need to think of a more scalable approach
in the future if more flexible return types combinations are needed.
2021-11-16 21:07:45 -05:00
Yi Zhang 53733933a4 Update llvm upstream to 0b17336f793108a7b10c3fa913039144ef1d0f61
Update AsmPrinter/Parser and MatchAndRewrite
2021-11-16 13:04:51 -05:00
Prashant Kumar 909f7d7171 Add e2e testing for aten_tanh_backward op.
The e2e testing for aten_tanh_backward op has been added.
The testing is done for ref_backend.
2021-11-09 11:28:49 -05:00
George Petterson 2764e86f02 Add Rsqrt 2021-11-09 11:08:28 -05:00
Yi Zhang 3bd9d2a4c7 Add e2e support for aten._softmax_backward_data.
Decompose aten._softmax_backward_data into aten math ops. Also decompose
`aten.size` to facilitate decomposing _softmax_backward_data.
2021-11-09 13:09:30 +05:30
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in the TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp`, `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing.
- Added i32 memref return/arg type to support e2e testing.
2021-11-08 17:50:52 -05:00
George Petterson e23cabf3a9 Add log2 2021-11-08 16:19:59 -05:00
George Petterson f41958037a Add NumToTensor 2021-11-08 15:56:52 -05:00
Prateek Gupta 18e8806b14 [TORCH][MLIR] Add E2E support for aten::to.dtype.
This commit adds end to end support for AtenToDtypeOp from aten
to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-08 12:56:03 -05:00
Wang Kangyu 4bb9b44775 Add lowering of "aten.pow.Tensor_Scalar" op
Add e2e support for torch.pow(Tensor, Float)
2021-11-08 09:19:50 -08:00
Wang Kangyu b33543af85 Add lowering of aten.floor op 2021-11-06 17:31:44 -04:00
nodlabs 5ff823ace9 lowered Sqrt to linalg
reused clang-format, as changes got deleted
2021-11-06 11:29:46 -04:00
Gaurav Shukla 2ce47dc8e4 [TORCH][MLIR] Add E2E support for aten.expand
This commit adds decomposition of `aten.Expand` to `aten.BroadcastTo`
op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-03 23:58:59 +05:30
Prashant Kumar ef897dbb19 Add lowering of `aten.log_softmax` op.
The `aten.log_softmax` is decomposed into `aten.softmax` and
`aten.log` op.
2021-11-03 22:10:05 +05:30
Prashant Kumar 127c7d8e27 Add lowering of `torch.log` op
The lowering of `torch.log` op has been added.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-02 21:18:00 +05:30
George Petterson 6dde5b347e Add rsub 2021-11-02 09:56:48 -04:00
Gaurav Shukla 69eaf9a154 [MLIR][TORCH] Add E2E support for `torch.aten.view`
- This commit adds lowering of `aten.View` to `linalg.TensorExpandShape`.
- This lowering will be successful only when one or more static
  dimensions are expanded.
- It also fixes a typo in `ConvertAtenFlattenUsingIntsOp` conversion
  pattern.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-10-29 22:33:10 +05:30
Yi Zhang 752abc8d01 Add type promotion code to refine types.
The types fall into different categories, where
complex > floating > integral > boolean (> means the left-hand
side has a higher category).

The operands have different levels of priorities where:
dimensioned tensor > 0-dim tensor > scalar == wrapped 0-dim tensor.
This is represented by `ResultTypeState.dimResult`,
`ResultTypeState.zeroResult` and `ResultTypeState.wrappedResult` in
the source code.

For operands of the same priorities, the result type should be the
highest categories with sufficient width to hold all operands.

By default, only the highest priority operands participate in the type
promotion logic. Lower priority operands participate if they are in
a higher category than any higher priority operands.

For example, <[],f32> (lower priority) and <[1], si64> tensor would
result in <[?],f32> tensor because floating > integral. Another example:
<[],f64> (lower priority) and <[1], f32> tensor would result in
<[?], f32> tensor because f32 and f64 are the same category.

The ScalarType enum definition, type promotion table, ResultTypeState
struct definition and some helpers are copied from
aten/src/ATen/native/TypeProperties.*
Other references:
- https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc
- https://github.com/pytorch/pytorch/issues/9515

Other minor changes:
1. Fix `visitExpandLikeOp` to consider cases where the given sizes list
size is larger than the input rank.
2. Add back the somehow deleted `torch.aten.softmax.int` tests in
decompose-complex-ops.mlir.
2021-10-29 11:17:39 -04:00
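
The two examples from the message above, reproduced with PyTorch's own promotion rules:

```
import torch

# 0-dim f32 (lower priority) + dimensioned si64 tensor -> f32,
# because floating beats integral even at lower priority.
print((torch.ones(1, dtype=torch.int64) + torch.tensor(1.0)).dtype)  # torch.float32

# 0-dim f64 (lower priority) + dimensioned f32 tensor -> f32,
# because both are floating and the dimensioned tensor wins.
print((torch.ones(1, dtype=torch.float32)
       + torch.tensor(1.0, dtype=torch.float64)).dtype)              # torch.float32
```
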
George Petterson 2ea2ab518b Add contiguous 2021-10-29 11:11:50 -04:00
Prateek Gupta c33a2ca952 [TORCH][MLIR] Add E2E support for aten.permute.
This commit adds lowering of aten.permute to linalg.generic operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-10-28 10:25:26 -04:00
Sean Silva 30df2ec71b Add min/max/clamp support.
Part of #380

Also
- BoolType is not considered as Scalar
- e2e framework fixes for nan handling
- `tu.rand(..., low=, high=)` support
- delete unused variable (fix warning)
- Add IouOfModule from #380 to e2e test suite (this is a common
  calculation in vision models)
2021-10-27 13:29:21 -07:00
Prashant Kumar 5009cbf55c Add lowering of aten.matmul op.
Lowering of `aten.matmul` op is added from torch to linalg dialect.
The different cases correspond to
https://pytorch.org/docs/stable/generated/torch.matmul.html.
TODO: Broadcasting in case of batch-matmul is yet to be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-10-26 12:45:09 -04:00
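
The case split referenced above, following `torch.matmul` semantics:

```
import torch

print(torch.matmul(torch.randn(3), torch.randn(3)).shape)           # 1-d x 1-d: dot product, shape []
print(torch.matmul(torch.randn(2, 3), torch.randn(3, 4)).shape)     # 2-d x 2-d: mm, shape [2, 4]
print(torch.matmul(torch.randn(3), torch.randn(3, 4)).shape)        # 1-d x 2-d: vec-mat, shape [4]
print(torch.matmul(torch.randn(5, 2, 3), torch.randn(3, 4)).shape)  # batched: shape [5, 2, 4]
```
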
Boian Petkantchin e276dbbaa6
Add aten::gelu lowering (#374)
* Print more exception info on error during test execution

* Fix formatting

* Add aten::gelu lowering

Co-authored-by: Boian Petkantchin <boian@nod-labs.com>
2021-10-25 16:16:01 -07:00
Yi Zhang abfaf8c577 Add aten.ne.bool to make CI pass 2021-10-21 14:45:41 -04:00
George Petterson 7c47b9a0c8 Formatting fix 2021-10-19 13:33:31 -04:00
George Petterson 8853dfbc74 Add broadcast 2021-10-19 13:33:31 -04:00
Yi Zhang a459e09ab7 E2e support for aten.softmax.int and aten.embedding
- Added a DecomposeComplexOps pass to decompose complex torchOps.
- Refactored `visitAtenArgmaxOp` and `visitAtenAnyDimOp` to
`visitReductionAlongDimIntOp`.
- Moved some helper functions into
torch-mlir/Dialect/Torch/Utils/Utils.h to be shared by multiple files.
- Added support for f64 tensor as argument and return types.
2021-10-18 17:57:45 -04:00
Yi Zhang 0902438882 Update llvm-project to a54f4eae0e1d0ef5adccdcf9f6c2b518dc1101aa
This brings in https://reviews.llvm.org/D110797. PRs that are in
progress will need to use scripts provided by
https://llvm.discourse.group/t/psa-removed-arithmetic-ops-from-standard/4455.
2021-10-18 13:36:42 -04:00
dan 7750d2173a add argmax lowering
Add argmax lowering from torch to linalg
2021-10-13 14:31:16 -04:00
Sean Silva 0c5c84d63d Add a basic TOSA E2E backend.
We lower through linalg-on-tensors and use RefBackend to run it.
This adds enough support for a "tanh" op. Adding more ops should be
fairly mechanical now that things are wired up. Run with:
```
./tools/torchscript_e2e_test.sh -c tosa
```

The backend structure is very similar to linalg-on-tensors based E2E
backends and is a nice parallel (see `tosa_backend.py`). Actually, this
forced a nice refactoring to the layering here. We removed
`torchscript-module-to-linalg-on-tensors-backend-pipeline` and instead
require separately running
```
torchscript-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline
```
This highlights that the step lowering to the "torch backend contract"
of cleaned-up `torch` dialect ops is a critical step in the lowering.
Going forward, that is the key load-bearing contract of the torch-mlir
project, not the linalg-on-tensors backend contract.

Recommended review order:
- `TorchToTosa.cpp` / `TorchToTosa/basic.mlir`
- `python/torch_mlir_e2e_test/torchscript/configs/tosa_backend.py` and
  the new `utils.py` file there.
- `python/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py` and
  `abc.py` in that directory for the TOSA backend e2e interface.
- other misc mechanical changes
2021-10-08 09:59:45 -07:00
Yi Zhang 98ba255288 E2e support for layernorm. 2021-10-04 14:15:13 -04:00
Sean Silva 5b6902e31c Dual license the torch-mlir project.
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.

The standard file comment is now:

```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```

See `LICENSE` in the project root for the terms of both licenses.
2021-10-01 10:46:08 -07:00
Yi Zhang 89225b0cd8 Add BertSequenceClassification model to e2e
Use torch tracing to get the module because the original model is not
TorchScriptable out of the box.
2021-09-30 13:30:29 -04:00
Ramiro Leal-Cavazos b59f2cb673
Implement the lazytensor package (#331)
Implement the `lazytensor` python package for converting
lazy computations captured by the Lazy Tensor Core into MLIR.
This PR also fixes a few things with `torchfx` and its example
2021-09-28 17:25:06 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it is mostly mechanical changes and
renames. As a quick summary (we can change this down the road easily)
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract" which is cleaned up IR (inlining, simplifcation,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
Yi Zhang 73d553e168 MT model compilation minor changes
This contains the following changes:
 - Fix optional knowledge propagation. The initial knowledge should
 always be NotNone for the operations we implemented.
 - Add Folder for `prim.dtype`
2021-09-09 19:02:48 -04:00
Ramiro Leal-Cavazos 6724de7692 Added sum lowering
Added lowering of torch.sum into linalg
2021-09-03 17:37:06 -07:00
Sean Silva 1dec561cfd Update llvm-project to 830c0b9023cd0cf91955900e0d96283e7a8c3711
- builder.getSymbolRefAttr is gone.
- OpAsmOpInterface's getAsmResultNames method needs explicit override
- a bunch of churn for builtin.func needing to be made explicit (and
  sometimes implicit?)
- operation printers no longer need to print the operation name
  themselves.
- snuck in beneficial trivial addition to TmpDeleteDeadIREEListsPass to
  test a particular upstream change e2e with my local patchset.
2021-09-03 14:16:38 -07:00
Yi Zhang 3b0e5910a8 Refine types continue.
This should cover all the ops that are left in MT.
2021-09-02 14:39:28 -04:00
dan d9df4bfc95 Add sigmoid lowering
Follows existing conventions for activation functions
2021-08-30 17:32:23 -04:00
Yi Zhang d6b9709fa5 Changes to refine types
- Add `!torch.optional` knowledge tracking
- Changes to improve type propagation for branches and terminators. See
examples in `refine-types-branch.mlir`
- Refactor to separate handling of different ops from `visitOperation`
- Add refine types for a few new ops
2021-08-27 11:42:00 -04:00
Yi Zhang bc5eae41ca Add more folders to fold away branches
Added folders to a few binary computing ops, `TupleUnpack`,
`__contains__.str` and `__getitem__.Dict_str`.
2021-08-26 17:37:49 -04:00
Stella Laurenzo 80ff744c56 Add a few missing deps exposed by stricter linking with BFD. 2021-08-22 11:56:48 -07:00
Sean Silva cab8d922ec Add TorchToIREE and factor out TorchConversion dialect.
This converts a basic list op (torch.prim.ListConstruct) to the IREE
dialect.

```
    def forward(self, x: float):
            return [x, x]
```

turns into:

```
builtin.func @forward(%arg0: !torch.float) -> !torch.list<!torch.float> {
  %0 = torch.prim.ListConstruct %arg0, %arg0 : (!torch.float, !torch.float) -> !torch.list<!torch.float>
  return %0 : !torch.list<!torch.float>
}
```

which turns into:

```
builtin.func @forward(%arg0: f64) -> !iree.list<f64> {
  %c1 = constant 1 : index
  %c0 = constant 0 : index
  %c2 = constant 2 : index
  %0 = iree.list.create %c2 : !iree.list<f64>
  iree.list.set %0[%c0], %arg0 : !iree.list<f64>, f64
  iree.list.set %0[%c1], %arg0 : !iree.list<f64>, f64
  return %0 : !iree.list<f64>
}
```

As part of doing this, I realized that it was time to formalize the IR
form that we reach right before running TorchTo{Linalg,Std,...}. We now
call it the "Torch backend contract". We then lower the "Torch backend
contract" to the "npcomp backend contract", which involves the new
TorchConversion (`torch_c`) dialect, which holds ops that need to
operate on both the npcomp backend types (e.g. builtin tensors, i1, IREE
list, etc.) and the `!torch` types.

This made more sense, as I realized that if I didn't factor out
`torch_c` then the Torch dialect would have a dependency on IREE
dialect (we previously didn't notice this was an issue because we only
depended on `builtin` types), which seemed wrong to me.

Recommended review order:
- TorchToIREE.cpp / `TorchToIREE/basic.mlir`
- Look at the new structure of createTorchScriptToNpcompBackendPipeline.
  It now lives in TorchConversion/Transforms/Passes.cpp and cleanly
  calls into `Torch::createTorchScriptToTorchBackendPipeline` for the
  frontend lowering to the Torch backend contract.
- Mechanical change extracting
  `torch_c.{to,from}_{i1,i64,f64,builtin_tensor,iree_list}` into a new
  TorchConversion dialect, and a few passes specific to the lowering
  from the Torch backend contract to the npcomp backend contract.
- Minor fixes to TorchToLinalg.cpp to use unconverted operands (now that
  we convert lists as part of operand materialization, we need to use
  the original operands). Also added test for AtenMaxPool2dOp and fixed
  m_TorchConstantIntList.
- TmpDeleteDeadIREELists pass. Temporary pass for deleting dead IREE lists that
  are created as part of operand materialization for conv/max pool/avg pool ops
  in TorchToLinalg.
2021-08-16 15:01:58 -07:00
Yi Zhang 85ff8b692b Fix compilation errors from MT model
With the following changes the compilation can continue until
RefineTypes pass:

- Add operators without ODS into `torch_ods_gen.py`
- Add some new optional and list types in `TorchTypes.td`
- Add some folders for aten int type comparator ops
- Modify GlobalizeObjectGraph.cpp. For global slots that are not used,
don't check if an aliased value is stored in more than one global
slot. This works around a failure where the same tensor is stored
in multiple "version" slots which are not used.
2021-08-16 16:37:23 -04:00
Yi Zhang bfc3ee35c6 Import Machine Translation model to MLIR.
This includes the following changes to import the MT model into MLIR. There
is still a lot of work to do for actual compilation.
- Add `torch.dict<>`, `torch.any`, `torch.number` types
- Add `torch.prim.DictConstruct` op
- Fix `torch.prim.TupleConstruct` op assembly format to include resulting types
2021-08-10 15:22:06 -04:00
Yi Zhang 0342b73bf1 Add torch.aten.flatten.using_ints and aten.MaxPool2d linalg lowering
- torch.aten.flatten.using_ints to linalg lowering
- torch.aten.max_pool2d to linalg lowering
- Support torch.aten.conv2d for more flexible dilation and strides values
2021-08-04 12:00:43 -04:00
Yi Zhang 89d4931324 Linalg lowering for aten.conv2d and aten.AdaptiveAvgPool2d
1. Add m_TorchConstantIntList
2. Lowering for aten.conv2d
3. Lowering aten.AdaptiveAvgPool2d
2021-07-09 15:04:29 -07:00
Sean Silva 83b5b5456d Bump llvm-project to da289a174fc6617c7be37be2947480510fd4f02a
- Build adjustments for `.cpp.inc` dialect files.
- Renaming of `memref.dim` to `tensor.dim` for tensor case.

Minor changes:
- Renaming of `mlir::linalg::ReassociationIndices` to
  `mlir::ReassociationIndices`.
- Adjust command line option parsing in npcomp-run-mlir.
2021-07-07 13:57:29 -07:00
Sean Silva 79928cd2dd Generalize support for elementwise ops.
We plumb through e2e a fair number of interesting cases:
- unary, binary, ternary elementwise ops
- ops like `torch.aten.add.Tensor` that also take a scalar parameter
- static size-1 broadcasting

We allow the static size-1 broadcasting case, but emit a runtime error
in the case of dynamic size-1 broadcasting. This seems like a sweet spot
subset of things that can be lowered directly to linalg, while not being
overly constraining to users. This is consistent with what IREE is doing
for CHLO->Linalg lowering as well
([code](50bf7a87e4/iree/compiler/InputConversion/MHLO/BroadcastingToLinalgPatterns.cpp (L1))).

To test the static size-1 case, we added support for the
`torch.aten.unsqueeze` op and lowering for it through
`linalg.tensor_expand_shape`. This involved generalizing
`MaximizeValueSemantics` to be able to handle it (the solution there also
works for `torch.aten.flatten.using_ints`, which we need for ResNet
anyway).

Also, a few minor additional changes:
- Add `VerifyInvariantsBeforeBackendLowering` pass, which catches a
  large class of errors before we get to backend lowering (now that we
  are doing dialect conversion, the errors are way nicer if we just emit
  them up front rather than in the guts of a random pattern).
- Minor change to RefBackend to allow `linalg.tensor_expand_shape`.

Recommended review order:
- e2e tests in elementwise.py
- `ConvertElementwiseOp` in TorchToLinalg.cpp + elementwise.mlir test
- `ConvertAtenUnsqueezeOp` in TorchToLinalg.cpp + unsqueeze.mlir test
- RefineTypes.cpp + tests
- MaximizeValueSemantics changes + test
- VerifyInvariantsBeforeBackendLowering pass + test
2021-06-28 13:28:38 -07:00
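
For reference, the shape behavior being exercised: static size-1 broadcasting on an elementwise op, plus `unsqueeze` inserting the size-1 dimension that is later expanded:

```
import torch

a = torch.randn(4, 1, 5)
b = torch.randn(4, 3, 5)
print((a + b).shape)                         # torch.Size([4, 3, 5]); the size-1 dim broadcasts
print(torch.randn(4, 5).unsqueeze(1).shape)  # torch.Size([4, 1, 5])
```
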
Sean Silva 90c6c64fd6 Make torch.constant.float print a little nicer.
This printing is chosen to be similar to how MLIR prints the values by
default.
2021-06-23 08:07:45 -07:00
Sean Silva 60a947b4a7 Add CastOpInterface to torch.prim.unchecked_cast.
This allows it to fold away in trivial cases.
2021-06-23 08:07:45 -07:00
Yi Zhang 45f2edfc7a Add TorchToSCF pass.
1. Add TorchToSCF pass.
2. Convert prim.If and prim.If.yield.
2021-06-23 08:06:43 -07:00
Yi Zhang 5ad144c4fe More folding for aten.gt.int, aten.ne.int and Aten__Getitem__TOp.
- Fold more for aten.gt.int, aten.ne.int and Aten__Getitem__TOp
- Some format cleaning up
2021-06-23 08:06:37 -07:00
Sean Silva 79aade33da Make MaximizeValueSemantics a bit smarter.
This adds a pattern to MaximizeValueSemantics which does a simple
abstract interpretation within a block, which handles simple cases of
`torch.overwrite_tensor`, enough to remove all the unnecessary uses of
non-value tensors in ResNet right now.

Before/after IR:
[gist](https://gist.github.com/silvasean/a3e1ef625b19dfc63579f73cd3b543b6)

Also,
- Split `torch.copy.tensor` into `torch.copy.to_tensor` and
  `torch.copy.to_vtensor` which convert between value and non-value
  semantic tensors. This is a much cleaner factorization as they have
  very separate use cases and properties (e.g. different side effects)
- Remove the various canonicalization patterns they had, which were
  confusing because they resulted in limited forms of maximizing value
  semantics throughout the pipeline. We should structure our compilation
  pipeline such that only MaximizeValueSemantics should be maximizing
  value semantics.
- Adjust pass pipeline to only run MaximizeValueSemantics once.
- Make OverwriteTensorOp `$value` always be a value tensor and
  `$overwritten` be a non-value tensor.
2021-06-22 16:48:57 -07:00
Sean Silva 78d2cc0818 Make `torch.copy.tensor` canonicalization a bit smarter.
This removes most of the trivial cases that MaximizeValueSemantics needs
to handle, making it easier to see the nontrivial cases.
2021-06-17 18:11:58 -07:00
Sean Silva 40369c54dc Adjust pass pipeline for changes to `dim` canonicalization.
This results in cleaner IR. In particular, Mlp2LayerModule e2e test has
a dim op that is eliminated by this change:
https://gist.github.com/silvasean/734f11a291ae6236c955f65cffae285f
2021-06-17 16:59:55 -07:00