Commit Graph

488 Commits (45e2188615711a0db70cb7ad0ca92b95a46687e2)

Author SHA1 Message Date
Vivek Khandelwal 7247c6a3a7 [MLIR][TORCH] Add E2E support for aten.ge.int op
This commit adds lowering of `aten.ge.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
 - Only offers support for statically sized index tensors when there is more than one
 - Dynamic shape support remains for single indexing tensors
2022-07-28 19:00:02 -04:00
Quinn Dawkins 3c9addf19c Add e2e support for aten.expm1 2022-07-27 12:31:35 +05:30
武家伟 052d2f84dc
[MHLO] Init MHLO basic op conversion (#1092)
* [MHLO] Init MHLO basic Op Conversion
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [NFC] Remove 'from @llvm-project' annotation

Co-authored-by: wujiawei.jw <wujiawei.jw@bytedance.com>
2022-07-27 13:07:51 +08:00
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
2022-07-25 21:25:57 +05:30
Tanyo Kwok 44ead68772
[MHLO] Init MHLO gather op patterns (#1104)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-25 23:47:46 +08:00
Tanyo Kwok f50d7013cd
[MHLO] Add [un]squeeze op patterns (#1099)
* [MHLO] Add [un]squeeze op patterns

* Conform to llvm coding standard

* minor update
2022-07-25 23:28:48 +08:00
Tanyo Kwok b80ce79b9f
[MHLO] Init MHLO view like op patterns (#1090)
* [MHLO] Init MHLO view like op patterns

See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com

* update filecheck test cases

* rebase, remove chlo and clang-format
2022-07-22 15:18:18 +08:00
Tanyo Kwok a02dbb2d5e
[MHLO] Init MHLO slice like op patterns (#1091)
See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-22 11:32:45 +08:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Quinn Dawkins 647e75e029
Allow expanding and collapsing in aten::view (#1082)
- Supports cases where the view op expands and collapses dims
simultaneously. This does not handle the case where it is neither
expanding nor collapsing (e.g. [2, 3] -> [3, 2])
 - Additionally fixes a previous bug with adding 1-sized dims on both
sides of a tensor with aten.view
2022-07-20 17:35:51 -04:00
Quinn Dawkins c73a39e40a Add support for index.Tensor on dimensions other than the first
This patch still only supports a single indexing tensor.
2022-07-19 11:36:52 +05:30
Vivek Khandelwal df0b1e77a4 [MLIR][TORCH] Add negative dim support for aten.cat and aten.slice op
This commit adds the support for negative dim cases for `aten.cat`,
`aten.slice.Tensor` and `aten.slice_scatter` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-18 14:01:33 +05:30
Jacques Pienaar 247dd64a66
Change to notifyMatchFailure (#1073)
emitError is intended for error cases and not for match failures of
patterns; notifyMatchFailure is intended for when a pattern reports the
reason it did not match.

Op verification should also not happen inside patterns but as part of
the op's verifier; uses that were obviously verification were left as
emitError inside patterns to keep this change small.
2022-07-17 18:39:54 -07:00
Ramiro Leal-Cavazos afdaa60dd4
Fix typo in `inputRank` check of `AtenBatchNormOp` (#1046)
The original conversion pattern for `AtenBatchNormOp` required that
the input rank be greater than 2; however, the only
expectation in the conversion pattern and in Pytorch is that the input
rank is greater than 1, since the second dimension of the input must
match the size of the `weight`, `bias`, `runningMean`, and
`runningVar` inputs. This commit fixes the `inputRank` check.
2022-07-15 09:35:59 -07:00
Suraj Sudhir 5e2012c7dd
[tosa] aten.max.dim , aten.slice.tensor ops (#1027)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-13 10:10:18 -07:00
Prateek Gupta 3592e0ba7f [TORCH][MLIR] Fix some comments in slice_scatter/select_scatter
lowering.

This commit addresses the remaining comments on lowering of
slice_scatter and select_scatter.

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-13 09:40:06 +05:30
Prateek Gupta 2d75654b2c [TORCH][MLIR] Add lowering of `aten.slice_scatter` and
`aten.select_scatter` op.

This commit adds:
1. Lowering of `aten.slice_scatter` op into `tensor.insert_slice`
op.
2. Decomposition of the `aten.select_scatter` op into the `aten.slice_scatter`
op.

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-11 14:07:21 +05:30
Suraj Sudhir d38f2cae5b
[tosa] aten.transpose.int support (#1017)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-07 13:05:33 -07:00
Andrew Cain 6885f1ed8a
fix: Broaden range of tosa.matmul outputs that don't need to be reshaped (#1015)
Co-authored-by: Andrew Cain <acain@d-matrix.ai>
2022-07-06 17:24:16 -07:00
Ramiro Leal-Cavazos f204210266
[LINALG] Fix handling of size-1 dims in `aten.view` again. (#992)
A previous fix to the handling of size-1 dims in
`aten.view` (https://github.com/llvm/torch-mlir/pull/962) resulted in
the wrong grouping of dimensions when size-1 dims were between two
dims of size greater than 1. This commit fixes that.
2022-06-30 16:39:25 -07:00
Suraj Sudhir bb576c2cb3
[tosa] aten.embedding op support (#991)
Enables BERT legalization.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-06-30 13:13:52 -07:00
Gaurav Shukla 1be604bfd3 [LINALG] Lower `aten.Matmul` to `linalg.BatchMatmul`
This commit lowers `aten.matmul` to `linalg.BatchMatmul` under the
following conditions:
1. The result of matrix multiplication must have batch dimensions,
   i.e., rank greater than 2.
2. The resultant matrix must have at most 1 dynamic batch dimension.

It also handles broadcasting of batch dimensions when batch dimensions
of the matrices are broadcastable.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-06-25 10:58:06 +05:30
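For reference, the batch-dimension broadcasting described in the commit above mirrors `torch.matmul` semantics; a quick illustrative check in Python (not the lowering code itself):

```python
import torch

lhs = torch.randn(2, 1, 3, 4)   # batch dims (2, 1)
rhs = torch.randn(5, 4, 6)      # batch dim (5,)
out = torch.matmul(lhs, rhs)    # batch dims broadcast to (2, 5); matmul over (3, 4) x (4, 6)
assert out.shape == (2, 5, 3, 6)
```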
Ramiro Leal-Cavazos 400fecc1e5
[LINALG] Fix shape function of index.Tensor + support N-rank inputs (#972)
This commit fixes the shape function for `index.Tensor`, adding
support for multiple index tensors and `None`s in the indices
list. This commit also adds support for input tensors of rank greater
than 1. The lowering for `index.Tensor` still has the limitation
that only a single index tensor along the first dimension of the input
tensor is supported.
2022-06-24 09:45:44 -07:00
Ashay Rane 234fc7fe0c
linalg: lower `aten.triu` op to `linalg.generic` (#965)
Prior to this patch, the torch dialect included `AtenTriuOp` for
computing the upper triangular part of the input matrix, but there was
no code for lowering the op to the linalg dialect.

This patch adds code to generate a `linalg.generic` operation that
compares indices (computed using `linalg.index`) to choose between zero
or the original value (using `arith.select`).  The lowering fails if the
number of dimensions is less than two.  This patch also adds a few
end-to-end tests.
2022-06-23 22:45:48 -07:00
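A minimal Python sketch of the index-comparison logic described in the commit above (illustrative only; the actual lowering emits a `linalg.generic` with `linalg.index` and `arith.select`):

```python
import torch

def triu_reference(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    """Keep x[i, j] when j - i >= diagonal, otherwise select zero."""
    assert x.dim() >= 2, "lowering fails for fewer than two dimensions"
    rows = torch.arange(x.size(-2)).unsqueeze(-1)   # index along dim -2
    cols = torch.arange(x.size(-1)).unsqueeze(0)    # index along dim -1
    keep = (cols - rows) >= diagonal                # mirrors the select condition
    return torch.where(keep, x, torch.zeros_like(x))

x = torch.arange(9.0).reshape(3, 3)
assert torch.equal(triu_reference(x, 1), torch.triu(x, 1))
```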
Maksim Levental 829717c96e
Bump LLVM (#958) 2022-06-22 22:23:46 -05:00
Ramiro Leal-Cavazos 8b94759303
[LINALG] Fix handling of size-1 dims in `aten.view` (#962)
This commit adds support for several size-1 dims in a row in an
expansion using `aten.view`.
2022-06-22 15:58:40 -07:00
Maksim Levental a34dad2e07
Fix `verifyLinalgCompatibleTypes` which currently doesn't successfully catch `torch.tensor`. (#947) 2022-06-15 18:21:36 -05:00
Vivek Khandelwal 4605dc9c99 [MLIR][TORCH] Add support for bool type in convertScalarToDtype utility
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:00:47 +05:30
Albert Sandru 708a51ae2e Add E2E support for aten.is_floating_point 2022-06-15 11:54:00 -05:00
Ramiro Leal-Cavazos 246c2df65a
[LINALG] Fix typo in conversion pattern of `aten.embedding` (#942) 2022-06-15 09:45:10 -07:00
Vivek Khandelwal aed5517fda [MLIR][TORCH] Add integer dtype support for aten.rsub.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-15 16:46:28 +05:30
Bob Adolf b90837ee24
Temporarily revert support for custom op extensions. (#944)
The macOS builders are having linking trouble with the extension library.
Until it's fixed, all support for op extensions is disabled. It should be
easy to restore once the issue is resolved.
2022-06-14 18:24:40 -07:00
Ramiro Leal-Cavazos 93f6d8e776
[LINALG] Add 0-rank case for `aten.permute` (#940)
The function `AffineMap::inferFromExprList` does not work if the first
vector of expressions is empty, because it uses these expressions to
obtain the context. This prevented `aten.permute` from working for
inputs of 0-rank. This commit adds support for 0-rank inputs.
2022-06-14 12:50:46 -07:00
Bob Adolf 0a7ba62438
Allow torch-mlir to support PyTorch extensions. (#895)
PyTorch allows new operators to be registered dynamically in modules.
Torch-mlir already makes it fairly straightforward to add support for
new operators, and this commit just extends that support to allow new
PyTorch ops to come from an external module.

This does *not* allow ops to be dynamically loaded into torch-mlir.
Torch-mlir must still be compiled with support built-in.

Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which
registers a demonstration op. It will not be imported by default when
importing torch_mlir. It's strictly for testing and documentation.

Adds an end-to-end test for the `torch_mlir_custom_op_example::identity` op.

With all these changes, we should now be actively testing PyTorch extension
support with all future patches.
2022-06-13 14:51:30 -07:00
Maksim Levental 5c85ac3100
Handle `nn.Linear(..., bias=False)` case for TorchToLinalg (#919) 2022-06-08 21:13:43 -05:00
Vivek Khandelwal b95b3d844d [MLIR][TORCH] Add E2E support for aten.div.Tensor_mode op
This commit adds lowering of `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:44 +05:30
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Vidush Singhal 0a913bc904
Add E2E support for AtenAllBoolOp (#874) 2022-06-01 18:20:25 -07:00
Vivek Khandelwal 06750815d1 [tosa] Support for AtenAvgPool2d op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of `aten.adaptive_avg_pool2d` op into
`aten.avg_pool2d` op. The current decomposition only supports cases where
input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Ashay Rane 029cd54327
build: fix code so that the compiler does not emit warnings (#871)
When compiling without assertions (i.e. in `NDEBUG` mode), a handful of
statements turn into NOPs, which results in warnings such as a missing
return statement or unused variables and functions. This patch replaces
such statements with `llvm_unreachable()`, which informs the compiler
about program termination regardless of the `NDEBUG` mode. This also
enables torch-mlir to be compiled using the flags `-Wall`, `-Wextra`,
`-Wpedantic`, and `-Werror`.
2022-05-25 14:04:59 -07:00
Vivek Khandelwal 56e77d4213 [MLIR][TORCH] Add E2E support for aten.Bool.[float|int] op
This commit adds lowering of `aten.Bool.float` and `aten.Bool.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 21:18:34 +05:30
Vivek Khandelwal 014a6d16c7 [MLIR][TORCH] Add E2E support for aten.any.bool op
This commit adds lowering of `aten.any.bool` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 17:24:28 +05:30
Vivek Khandelwal bc9b2156e3 [MLIR][TORCH] Add E2E support for aten.sqrt.int op
This commit adds lowering of `aten.sqrt.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 16:50:39 +05:30
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
2022-05-19 15:48:15 -07:00
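For reference, the reduction being lowered is the standard vector p-norm; a small Python sketch (illustrative, not the torch-mlir implementation) that also spells out the L1/L2 simplifications the commit above mentions as future work:

```python
import torch

def vector_norm_reference(x: torch.Tensor, ord: float = 2.0) -> torch.Tensor:
    # General case: raise to p, sum, take the 1/p root.
    # ord = 0, +inf, and -inf are noted above as unsupported for now.
    if ord == 1.0:
        return x.abs().sum()                 # L1: no need to raise elements to the power 1.0
    if ord == 2.0:
        return x.abs().pow(2).sum().sqrt()   # L2: candidate for strength reduction
    return x.abs().pow(ord).sum().pow(1.0 / ord)

x = torch.randn(8)
assert torch.allclose(vector_norm_reference(x, 2.0), torch.linalg.vector_norm(x, 2.0))
```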
Vivek Khandelwal f15d257aac [MLIR][TORCH] Add support for ceil_mode = true for pooling ops
This commit adds support for aten.max_pool2d, aten.max_pool2d_with_indices,
and aten.avg_pool2d op for the cases where ceil_mode = true.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-11 12:52:47 +05:30
Vivek Khandelwal c69a1e5688 [MLIR][TORCH] Add E2E support for ScalarImplicit, Int.Scalar op
This commit adds lowering of `aten.ScalarImplicit` and `aten.Int.Scalar` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-10 22:40:49 +05:30
Prashant Kumar 12b3af70d3 [TORCH] Add folding of aten.detach op.
`aten.detach` op is folded and returns the first operand, since it's
essentially an identity function (it only removes the has_grad attribute).
2022-05-10 21:54:45 +05:30
Prashant Kumar 2b1b0f6e19 [LINALG] Add support for preserve memory format in aten_empty_like op.
The preserve memory format specifies that `If any of the input tensors is in channels_last format,
operator output should be in channels_last format`, and hence it can be
added as is in the aten_empty_like op.
2022-05-10 09:37:55 +05:30
Vivek Khandelwal 96fabc0036 [MLIR][TORCH] E2E support for [ge|ceil].float, [ge|ne|gt].float_int op
This commit adds lowering of `aten.ge.float`, `aten.ge.float_int`,
`aten.ne.float_int`, `aten.gt.float_int` and `aten.ceil.float` op.
This commit also fixes formatting for the files scalar.py and scalar_comparison.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-05 21:48:35 +05:30
gpetters94 c4dcdd1e34
Add aten.flip (#817) 2022-05-02 16:01:15 -04:00
Vivek Khandelwal 8a06419980 [MLIR][TORCH] Add E2E support for aten.masked_fill.Scalar op
This commit adds lowering of `aten.masked_fill.Scalar` op.
This commit also fixes the formatting of the file constant_alloc.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 22:27:33 +05:30
Vivek Khandelwal 4b11284440 [MLIR][TORCH] Add E2E support for aten.avg_pool2d op
This commit adds lowering of `aten.avg_pool2d` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 12:31:44 +05:30
Prateek Gupta e1db318a3c [TORCH][MLIR]Add lowering for control flow operations.
1. This commit adds lowering of "while-like" prim loop to scf.while
operation.
2. Adds lowering of "for-like" prim loops to scf.for operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 16:25:58 +05:30
Sean Silva 44c7b181d3 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float op"
This reverts commit 564734b2d7.
2022-04-28 07:49:58 -07:00
Sean Silva eff144c0b7 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float_int op"
This reverts commit 1f102cc400.
2022-04-28 07:49:58 -07:00
Sean Silva 7669ee4e4a Revert "[MLIR][TORCH] Add E2E support for aten.ne.float_int op"
This reverts commit 51dd462592.
2022-04-28 07:49:58 -07:00
Sean Silva 5ef9f501fa Revert "[MLIR][TORCH] Add E2E support for aten.ceil.float op"
This reverts commit 78f5747568.
2022-04-28 07:49:58 -07:00
Vivek Khandelwal 78f5747568 [MLIR][TORCH] Add E2E support for aten.ceil.float op
This commit adds lowering of `aten.ceil.float` op.
This commit also fixes formatting for the file scalar.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 11:49:35 +05:30
Vivek Khandelwal 51dd462592 [MLIR][TORCH] Add E2E support for aten.ne.float_int op
This commit adds lowering of `aten.ne.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 1f102cc400 [MLIR][TORCH] Add E2E support for aten.ge.float_int op
This commit adds lowering of `aten.ge.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 564734b2d7 [MLIR][TORCH] Add E2E support for aten.ge.float op
This commit adds lowering of `aten.ge.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal f5b6c4b601 [MLIR][TORCH] Add E2E support for aten.div.float op
This commit adds lowering of `aten.div.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00
Prashant Kumar 5cdef0213d [LINALG] Bug fix i64 vs i32 type comparison.
Comparing index type instead of integer types solves the problem.
2022-04-22 08:09:58 +05:30
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
Ashay Rane a893c7d5cf
Add shape transfer function and lowering to linalg for aten.neg (#759)
* shape: add shape transfer function for aten.neg

Prior to this patch, the list of shape transfer functions did not
include `aten.neg`, which resulted in errors like below.

```
error: unsupported by backend lowering: tensor with unknown rank or dtype
note: see current operation: %0 = "torch.aten.neg"(%arg0) :
  (!torch.vtensor<[256,256],f32>) -> !torch.vtensor<*,f32>
note: this is likely due to a missing shape transfer function in shape_lib_gen.py
```

This patch fixes the problem by adding a shape transfer function to
reflect the point-wise nature of this operation.

* linalg: add translation of aten.neg operation

This patch adds a translation rule to lower `aten.neg` operations on
tensors to an `arith.negf` operation wrapped inside a `linalg.generic`
operation.  This patch also adds a rudimentary test.
2022-04-15 11:11:22 -07:00
Vivek Khandelwal 1bccb4fc8a [MLIR][TORCH] Add E2E support for aten::max_pool2d_with_indices_backward op
This commit adds lowering of `aten::max_pool2d_with_indices_backward` op.

This commit also fixes formatting issues in basic.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-14 21:46:47 +05:30
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
Prashant Kumar 1d5b5a89e8 [LINALG] Add torch.layout information
torch.layout information has been added.
2022-04-07 20:47:49 +05:30
Prashant Kumar fb8cb0c5f3 [LINALG] Add the lowering of `aten.ne.Scalar` op
The lowering of `aten.ne.Scalar` op has been added to
the linalg backend.
2022-04-05 21:07:28 +05:30
Ramiro Leal-Cavazos 5620fe030e
Add 1D, weight, and reduction support to nll_loss_backward (#729)
This commit adds the following support to the op `nll_loss_backward`:
- `input` tensor can be rank-1
- `weight` parameter
- `reduction` parameter
- `target`, `grad_output`, `total_weight` can be rank-0
- Checks that input tensors are of the expected type
2022-04-04 10:57:49 -07:00
Ramiro Leal-Cavazos 51d4d55f8a
Add support for multi-dim input to `index_put_impl` (#722)
This commit adds support for multi-dimensional tensors as input to the
`_index_put_impl_` op. The support was to some degree already there,
since `ScatterOp` already supports multi-dimensional tensors. This
commit also adds a bit more error checking to `index_put` and
refactors the code for creating `ScatterOp`s to mimic the way one
would make a `Linalg::GenericOp`.
2022-03-31 09:27:21 -07:00
Anup Gangwar ccf924d3df
[tosa] Support for Aten[Gelu|GeluBackward] ops (#720)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-03-30 17:00:55 -07:00
Vivek Khandelwal 2597c481f6 [MLIR][TORCH] Add E2E support for aten.new_empty op
This commit decomposes `aten.new_empty` op into `aten.empty.memory_format` op.

This commit also made a dtype fix to the constant tensor allocation like ops.
Earlier the dtype for the result was inferred from the result type; now, it's
being evaluated as per the original definition of the op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-30 13:21:01 +05:30
Sean Silva 520725cdc5 Fix bad rename from "pseudo" to "valsem". 2022-03-28 20:40:42 +00:00
Anup Gangwar 5d7a6c2976
[tosa] Support for Aten[Unsqueeze|Contiguous|Dropout|Reshape|View] ops (#700) 2022-03-25 14:15:07 -07:00
Vivek Khandelwal 88c216da13 [MLIR][TORCH] Add support for same input and output shapes for view op
This commit adds support for the cases of view op where the rank and
the shapes of the input and result are equal.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-25 22:26:10 +05:30
Ramiro Leal-Cavazos e966112c8d
Add final cast to TorchToLinalg conversions missing it (#692)
In order to make sure that the TorchToLinalg conversions leave the
graph in a valid state, the final result of the conversion has to be
casted to the result type of the op. This commit adds this cast to ops
that did not have it.
2022-03-23 13:52:32 -07:00
Qiang Fu f7c7bb800c
Add non-default dtype support for a few elementwise math ops. (#687)
* fix type inference
* fix Torch2Linalg conversion
* add test cases
2022-03-23 13:35:43 -07:00
Gaurav Shukla 7c3ba25238 [LINALG] Add decomposition of `aten.dropout` op
- This commit adds decomposition of `aten.dropout` op. It also covers the
  training mode of the same op.
- It also adds lowering of `aten.sub.float` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-22 13:14:49 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vigilans 63fb1e5aad Bump LLVM at 8361c5da30588d3d4a48eae648f53be1feb5cfad 2022-03-18 13:16:14 -04:00
Prateek Gupta 7256c9e395 [TORCH][MLIR] Fix the return types of `aten.native_layer_norm`.
This commit fixes the 2nd and 3rd return types of the `aten.native_layer_norm`.
Previously the mean and rSTD were returned with reduction dims removed.
This commit fixes this and keeps the reduction dims of the results.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-17 12:08:32 +05:30
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Sean Silva 92da4988f0 Improve "pseudo" op terminology.
The term "pseudo" is very vague and was getting confusing (I felt I had
to explain it in every comment referencing it). Instead, rework the
"pseudo" ops to instead be named:

- MLIR Syntax: `torch.valsem.*`
- C++ / ODS: `ValsemVariant*Op`

This makes it clear what the concept is, and avoids confusion with other
things that might be called "pseudo", since these are very specific and
should be 100% consistently named w.r.t. the non-valsem-variant ops that
they correspond to.
2022-03-15 17:57:52 -07:00
Sean Silva 5d9222383c Split up TorchToLinalg.cpp
This helps keep things organized and also exposes more parallelism to
the build system. It seems, though, that most of the compile time is
actually spent in the headers, so the wall time doesn't decrease
as much as I had hoped (and now that the headers are being included
multiple times, the cpu time actually increases a lot, sadly -- will try
to dig into this).
2022-03-14 10:19:41 -07:00
Prateek Gupta 3d9ba5e525 [MLIR][TORCH] Add E2E support for aten.erf op.
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-09 22:22:03 +05:30
Vivek Khandelwal 1a2a9e066f [MLIR][TORCH] Add TorchToTMTensor pass
This pass is added to lower ops that cannot be lowered
via the TorchToLinalg pass, such as the `torch.bincount` op.
This pass also uses torch-mlir's TMTensor Dialect to lower the
complex ops.

Also add torch.bincount op lowering with the help of TMTensor dialect

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
Vivek Khandelwal b2952b12dd [MLIR][TORCH] Move common helper functions to Utils.cpp
This commit moves the helper functions that are common across
different torch-mlir conversion passes into a common directory,
Utils.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
Vivek Khandelwal bf463d1f36 [MLIR][TORCH]Add support for integer-type inputs for sum and max op
This commit adds support for integer type inputs for
`AtenMaxOp`, `AtenSumOp`, `AtenSumDimIntListOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
Ramiro Leal-Cavazos 5ec70c175d
[LINALG] Add torch-to-linalg lowering for `TensorStaticInfoCastOp` (#634)
This commit adds a lowering for `TensorStaticInfoCastOp` that simply
replaces the op with the `tensor::CastOp`.
2022-03-02 13:35:26 -08:00
Ramiro Leal-Cavazos 298eeb79ca
[LINALG] Add handling of unknown dimension in size list of `view` op (#633)
The view op allows for the new shape argument to have a -1 value for
one of the dimensions, and the op is expected to deduce the size of
that dimension by looking at the sizes of the other dimensions and
comparing it to the total number of elements in the original
tensor. This commit adds this functionality.
2022-03-02 13:35:01 -08:00
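A minimal sketch of the deduction described in the commit above (plain Python, not the C++ conversion code): the unknown dimension is the total element count divided by the product of the known sizes.

```python
from functools import reduce
import operator

def infer_view_shape(input_shape, new_shape):
    """Replace a single -1 in new_shape with the size deduced from input_shape."""
    numel = reduce(operator.mul, input_shape, 1)
    known = reduce(operator.mul, (d for d in new_shape if d != -1), 1)
    assert numel % known == 0, "view shape is not compatible with input size"
    return [numel // known if d == -1 else d for d in new_shape]

assert infer_view_shape([2, 3, 4], [6, -1]) == [6, 4]
```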
Yi Zhang 1d285f0153 Add aten.hardtanh e2e support. 2022-03-02 12:28:06 -05:00
Ramiro Leal-Cavazos 1dba4fcbd7
[LINALG] Support for contiguous memory format in `clone` and `empty` (#628)
This commit adds support for the contiguous memory format for the ops
`AtenCloneOp` and `AtenEmptyMemoryFormatOp`.
2022-02-28 13:58:04 -08:00
Ramiro Leal-Cavazos 58abec5c0a
Add `reduction` support to `torch.nll_loss_forward` (#624)
This commit does a couple of things. First, it fixes a bug in the
`linalg.generic` body of the `nll_loss_forward` lowering where the
`ignoreIndex` was being compared with the loop index rather than the
current element of the `target` tensor. This was not being caught by
the tests because they were not testing the case where `ignoreIndex`
actually corresponds to a value in `target`. This has been fixed.

Second, this commit adds support for the `reduction` argument in
`torch.nll_loss_forward` as well as support for 1-D inputs. In order
to simplify the lowering code, I've refactored the code that creates
the `linalg.generic` ops for elementwise and reduction ops into static
functions, to avoid having boilerplate code for indexing maps, etc.,
that can be very error-prone.

Note: The function `convertScalarToDtype` was moved to before all the
conversion patterns, but nothing in it was modified.
2022-02-28 11:01:23 -08:00
Gaurav Shukla 056cd2078d Revert "[LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`"
This reverts commit 442ff4605c.
2022-02-25 15:46:55 +05:30
Prashant Kumar abbde7d439 [TORCH] The torch definition related to aten.gelu has changed.
A new str argument for approximation has been added.
2022-02-18 21:57:46 +05:30
Nirvedh f8cb32faf0 LLVM bump
Major changes: opTrait changed to Trait, selectOp moved to the arith dialect,
assertOp moved to the cf dialect.
2022-02-16 15:28:13 -05:00
Gaurav Shukla 442ff4605c [LINALG] Decompose `aten.batch_norm` into `aten.native_batch_norm`
- This commit decomposes the `aten.batch_norm` op into the
  `aten.native_batch_norm` op, instead of lowering it to the
  `linalg.generic` op.
- It also adds run-time asserts in the `aten.native_batch_norm` lowering
  to make sure that the shape of the weight, bias, running_mean, and
  running_var must match the number of features.
- Since the `aten.native_batch_norm` op is not supported in the TOSA backend,
  all the modules that are dependent on the `aten.native_batch_norm` op
  will fail and therefore they should be removed from the TOSA `passing`
  set.
- It also moves `checkNotNone` to utility.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 23:41:38 +05:30
Anup Gangwar c60468f141
[tosa] Support for Aten[Zeros|Ones|Fill_Scalar] ops (#604)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-16 09:53:51 -08:00
Ramiro Leal-Cavazos 00a6e9c1bb
[LINALG] Add value tensor variant to `fill_.Scalar` (#600)
This commit adds the op `PseudoAtenFillScalarOp` that represents
`AtenFill_ScalarOp` without the underscore. The approach is the same
as in commit dd998fa4d4.

Adding this op allows for a simpler and more consistent version of the
`empty` and `empty_like` op e2e tests.
2022-02-15 11:58:03 -08:00
Gaurav Shukla 41acde599b [LINALG] Add E2E support for `aten.[le|ge].Scalar` ops
- This commit adds lowering of `aten.le.Scalar` and `aten.ge.Scalar` ops
  as a part of `convert-torch-to-linalg` pass.
- It also creates a new test script `elementwise_comparison.py` for all
  element-wise comparison ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 12:21:09 +05:30
Anup Gangwar dfc07d11d7
Fix compiler warning introduced in PR575 (#593) 2022-02-14 12:45:19 -08:00
Gaurav Shukla 78c7844c6c [LINALG] Add E2E support for `aten.eq.int` op
- This commit adds lowering of `aten.eq.int` op as a part of
  `convert-torch-to-std` pass.
- It also refactors the code for binary comparison ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 01:37:35 +05:30
Gaurav Shukla f00d1686c8 [LINALG] Add E2E support for `aten.[Bool.Tensor|Float.Tensor]` op
- This commit adds lowering of `aten.Bool.Tensor` and
  `aten.Float.Tensor` op as a part of `convert-torch-to-linalg` pass.
- It also adds support for returning bool types.
- It also fixes lowering of the `aten.Int.Tensor` op for non-zero rank
  input tensors.
- If a scalar number is converted to a 0-d tensor and passed on to the
  `aten.Float.Tensor` op, it folds to the scalar number.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-14 23:09:20 +05:30
Ramiro Leal-Cavazos 3dc7847348
[LINALG] Fix linalg generic result type argument in TorchToLinalg (#588)
Some of the lowerings use the result type obtained from the op itself
to tell the `linalg::GenericOp` what the type of the result should be
rather than using the type of the result tensor given to the
`linalg::GenericOp`. This becomes a problem when the result type of
the op has static size information and the result tensor used in
`linalg::GenericOp` has dynamic dimensions, for `linalg::GenericOp`
expects the result type to be equal to the type of the output tensor.

This commit replaces the use of the result type from the op itself
with the type of the result tensor passed to `linalg::GenericOp`.

In order to not create too many dynamic/static versions of the same
e2e test, e2e tests have only been added to the ops that currently
fail when used with static sizes.
2022-02-11 19:42:18 -08:00
Yi Zhang ce4d6d1f83 Remove hacky aten.select.int lowering code 2022-02-11 18:14:58 -05:00
Anup Gangwar 756b75fb2d
[tosa] Support for some ops and fix for Issue #532 (#575)
* [tosa] Support for AtenNe[Tensor|Scalar]Op, AtenLog2Op,
AtenBitwiseAndTensorOp, AtenSquareOp and AtenThresholdOp
* Fix for Issue #532 - Mixed input types for few ops and updated few
tests to use i32 instead of i64

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-11 12:30:02 -08:00
Ramiro Leal-Cavazos 9b89f8eb3f
[TORCH][MLIR] Add E2E support for aten.clone (#571)
This commit adds support for the aten.clone op.
2022-02-09 19:31:03 -08:00
Gaurav Shukla bd177bdfc7 [TORCH][MLIR] Add run-time assert support in Torch-dialect
- This commit adds `aten.assert` op in the Torch dialect.
- The `aten.assert` op is lowered to `mlir::Assert` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-09 12:03:01 -05:00
Prashant Kumar d4ea39b616 Convert bool to float or integer type.
Conversion of torch.bool tensor type to float and integer type is
handled.
2022-02-07 21:22:22 +05:30
Anup Gangwar f9f97ea184 * [tosa] Support for AtenNativeLayerNormOp
* [tosa] Support for AtenPermuteOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-02-04 14:46:31 -05:00
Prashant Kumar ccf546f14c Add aten::nll_loss_backward op
The lowering of aten::nll_loss_backward op has been added
from torch to the linalg dialect. The changes have been made as
a part of the -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar prashant@nod-labs.com
2022-02-04 21:57:53 +05:30
Prashant Kumar 68acc8696e Modify softmax decomposition to be more numerically stable.
The softmax decomposition is modified according to https://github.com/pytorch/functorch/blob/main/functorch/_src/decompositions.pytorch
to account for numerical stability. Also, modified aten.argmax lowering
to handle negative dimension.
2022-02-03 21:20:36 +05:30
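The numerically stable formulation subtracts the row maximum before exponentiating; a hedged Python sketch of the math the decomposition above follows (the real decomposition emits Torch dialect ops, not Python):

```python
import torch

def stable_softmax(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Subtracting the max keeps exp() from overflowing without changing the result,
    # since softmax is invariant to adding a constant along the reduced dimension.
    x_max = x.max(dim, keepdim=True).values
    exps = (x - x_max).exp()
    return exps / exps.sum(dim, keepdim=True)

x = torch.tensor([[1000.0, 1001.0, 1002.0]])  # naive exp(x) would overflow to inf
assert torch.allclose(stable_softmax(x, -1), torch.softmax(x, -1))
```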
Yi Zhang 0cb216a1ad [Torch][Linalg] Add basic support for RNG
This PR include the following pieces:
- Add torch `Generator` type. `Generator` type is converted to i64 in
refbackend type converter.
- Add seed management support for the default global generator.
`torch_c.getNextSeed` op is used to get the seed. On refbackend, the
`torch_c.getNextSeed` is lowered to load/store from [0] of global
variable `default_generator` memref<i64> in `InsertRngGlobals` pass.
- Add `aten.uniform_` and testing as an example op for RNG ops. Add
`torch.pseudo.aten.uniform` op. It has the same operands and return as
the `aten.uniform_` from the op registry except for value semantics.
2022-01-31 18:56:42 -05:00
Suraj Sudhir 0f083e770a
[tosa] Add maxpool2d and adaptive_avgpool2d support (#550)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-31 13:34:09 -08:00
Prashant Kumar e58b66bc3b Add lowering of `aten.max.dim` op.
Lowering of `aten.max.dim` op has been added.
2022-01-31 21:41:22 +05:30
Anup Gangwar 454fa9d123
* [tosa] Support for AtenFlattenUsingIntsOp (#548) 2022-01-28 21:38:56 -08:00
Anup Gangwar 7a5736facd
* [tosa] Support for AtenReshapeOp (#543)
* [tosa] Support for AtenBatchNormOp

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-27 14:38:59 -08:00
Suraj Sudhir eb06d21765
[tosa] Implement conv2d support (#541)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-26 19:16:13 -08:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Suraj Sudhir cadea678e5
[tosa] Implement torch.linear support. (#535)
Refactor matmul into separate class and derive variants:
- matmul
- mm, bmm
- linear

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-25 08:48:58 -08:00
Anup Gangwar f8080bd1c5
* [tosa] Support for AtenRsubScalarOp for scalar constants (#531)
* [tosa] Support for AtenCeilOp and AtenReciprocalOp
* [tosa] Support for comparator ops, Aten[Gt|Lt|Eq][Tensor|Scalar]Op with scalar constant
* [tosa] Support for Scalar variants of Aten[Mul|Div|Add|Sub] Ops with scalar constants

Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-20 10:58:30 -08:00
Vivek Khandelwal 6fe70c7794 [MLIR][TORCH] Add E2E support for aten.index.Tensor op
This commit adds lowering of `aten.index.Tensor` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-19 13:37:56 +05:30
Suraj Sudhir 0188ca5498
[tosa] Implement matmul, mm and bmm support (#526)
- Also handles broadcasting n-D tensors, dynamic shapes

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-18 13:37:32 -08:00
dan 3745f54489 Update external/llvm-project
- Add `qualified` to ods because of
https://reviews.llvm.org/D113873 and https://reviews.llvm.org/D116905
- Needed to revert https://github.com/llvm/torch-mlir/pull/520 as it
was based on an old torch version.
https://github.com/llvm/torch-mlir/pull/527 will bring this back with
a better design.
- Change ConvertAtenCatOp to use more accurate tensor shape info and
as much static info as possible to pass `tensor.insert_slice`
verification code added by https://reviews.llvm.org/D114715
- Other minor fixes
2022-01-18 13:25:42 -05:00
Suraj Sudhir edf4a0e729
[tosa] Add more common utility functions (#525)
- Common code shared with the TF repository, being moved to MLIR core.
- Will support further legalizations, to be published later.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-14 13:57:27 -08:00
Anup Gangwar d69d29b7a6 * [tosa] Support for AtenPowTensorScalarOp with constant Scalar as input
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2022-01-11 22:55:54 -05:00
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for other
operations had to be added/improved:
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
2022-01-11 10:25:25 -05:00
Vivek Khandelwal ca662dc9cc [MLIR][TORCH] Add E2E support for aten.threshold, aten.threshold_backward op
This commit adds lowering of the `aten.threshold` and `aten.threshold_backward` ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-10 11:56:56 +05:30
Yi Zhang 7cf7b91664 [MLIR][TORCH] Fix tensor literal int elem type to be signless
The element type of a tensor literal should be signless when converted to
builtin tensor types.
2022-01-07 16:34:24 -05:00
Suraj Sudhir d6b6c0268c
[tosa] Add missing override-s to fix compiler warnings (#514)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-07 10:57:54 -08:00
Yi Zhang 732a76f45c Make broadcasting result shape more static
This involves the following 2 parts:
- Change refine type to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor when converting to linalg.
2022-01-06 18:39:27 -05:00
Suraj Sudhir b4842d9863
[tosa] Implement squeeze.dim support (#511)
Templated variants for squeeze and squeeze.dim
2022-01-06 08:31:29 -08:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of `aten.arange.start_step` op.
This commit decomposes `aten.arange` and `aten.arange.start` into
`aten.arange.start_step` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
Prashant Kumar 9e1ecf2c0b Add Add and Sub scalar op conversions.
`aten.add.Scalar` and `aten.sub.Scalar` op conversions have been added.
The changes have been made as a part of `-convert-torch-to-linalg` pass.
2021-12-22 21:41:49 +05:30
xndcn 5eed562e19 add aten.sub.int/aten.mul.int lowering in TorchToStd 2021-12-17 10:35:15 -08:00
Yi Zhang d8ba68119e Lower aten::view with linalg.collapse and linalg.expand
We only handle the expanding OR collapsing cases; we do not handle
expanding and collapsing happening at the same time, or cases where
it's neither collapsing nor expanding, like a view of [2,3] for a
3x2 tensor.

It's assumed that if a shape list element is obtained from
`aten.size(tensor, dim)`, the corresponding dim is not split or
collapsed. This assumption makes it easier to deal with dynamic shapes.
2021-12-16 17:58:20 -05:00
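To make the three cases in the commit above concrete, a small illustrative example (not the lowering itself):

```python
import torch

t = torch.arange(24).reshape(6, 4)
expanded = t.view(2, 3, 4)       # pure expansion: dim 6 splits into 2 x 3
collapsed = expanded.view(6, 4)  # pure collapse: dims 2 x 3 merge back into 6
neither = torch.arange(6).reshape(2, 3).view(3, 2)  # neither a split nor a merge of dims;
                                                    # not expressible as one expand/collapse
```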
Gaurav Shukla eddc09aa55 [TORCH][MLIR] Add E2E support for `aten.eq` and `aten.lt` ops
- Added E2E support for `aten.eq.Tensor` and `aten.lt.Tensor` ops. Both
  the operands are expected to be of the same type, i.e., type promotion
  is not addressed as a part of this commit.
- Added E2E support for `aten.eq.Scalar` and `aten.lt.Scalar` ops.
  Tensor operand type to Scalar operand type promotion has not been
  handled in this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-16 18:47:22 +05:30
Suraj Sudhir 0cd95b5c68
[tosa] Support for Torch.squeeze (#487) 2021-12-15 21:40:29 -08:00
Daniel Garvey 396ab35c9d
Small fixes for slice edge cases (#476) 2021-12-15 15:54:41 -06:00
Anup Gangwar a6c3050dd0 * [tosa] Support for Maximum and Minimum
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-15 11:58:19 -08:00
Suraj Sudhir 829cf8afc3
[tosa] Implement Argmax support (#485)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-15 11:01:01 -08:00
Gaurav Shukla d13bb0e5c1 [TORCH][MLIR] Fix C++17 extension warning
The existing implementation of `ConvertConstantTensorAllocOp<>` requires
a C++17 feature `if constexpr ()`. This commit removes the use of that
feature to support the implementation even for lower C++ versions.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-15 23:35:06 +05:30
Prashant Kumar ab81f871e4 Add aten.tensor.int and aten.tensor.float op lowerings.
Add the required lowerings and correct test cases.
These ops produce zero-d tensors; it was incorrectly mentioned in
refine types that they produce a 1-d tensor of size 1.
2021-12-15 17:21:34 +05:30
Anup Gangwar cce490d71d
* [tosa] Support for Rsqrt legalization (#480)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2021-12-14 10:03:58 -08:00
Prashant Kumar 6dabf185f5 Add support for int types in gtScalar op.
Support for integer types in the gtScalar op has been added.
The code shares the same logic with the gtTensor op and can be merged,
which has been added as a TODO.
2021-12-14 01:29:52 +05:30
Gaurav Shukla 8d4879feb0 [TORCH][MLIR] Add and templatize lowering of [`aten.zeros|aten.ones|aten.empty`] ops
- Templatize `aten.zeros` and `aten.ones` ops lowering.
- Add E2E support for `aten.empty` op.
- Add Integer type support in `aten.mul.Scalar` op lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-14 00:07:11 +05:30
Prashant Kumar 528354de84 Add `aten.gt.Tensor` op
`aten.gt.Tensor` op has been added in torch dialect and the
lowering of the op has been done to the linalg dialect.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-13 00:08:52 +05:30
Gaurav Shukla a778f990e9 [TORCH][MLIR] Add E2E support for `aten.ceil` op
This commit adds lowering of `aten.ceil` op as a part of element-wise
ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-12 01:15:47 +05:30
Prateek Gupta cfc8de36f8
[MLIR][TORCH] Add E2E support for `aten.native_layer_norm`. (#470)
This commit adds support for aten.native_layer_norm operation. Here
the previous code for aten.layer_norm is tweaked a little bit to
accommodate both mean and variance values along with the layer norm
value. This commit also adds decomposition of aten.layer_norm into
aten.native_layer_norm, which was previously getting lowered directly
to linalg.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2021-12-10 19:06:19 +05:30
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of `aten.squeeze.dim` op into
`linalg.TensorCollapseShape` op. Here, the dim(th) dimension of the
input tensor is not supposed to be dynamic.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
Vivek Khandelwal 8130354c09 [MLIR][TORCH] Add E2E support for aten.index_select op
This commit adds lowering of `aten.index_select` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-09 23:13:36 +05:30
Vivek Khandelwal 0a0a1b4476 [MLIR][Torch] Resolve styling issues related to aten zeros/ones op
https://github.com/llvm/torch-mlir/pull/464#discussion_r765065092

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-09 17:42:28 +05:30
Gaurav Shukla f34eb66124 [TORCH][MLIR] Add E2E support for [`aten.gt.Scalar`|`aten.where.self`]
This commit adds lowering of `aten.gt.Scalar` and `aten.where.self` as a
part of element-wise ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-09 12:47:10 +05:30
Liam Fitzpatrick 2414bdb1f0 Linalg lowering for aten.conv2d(bias=True)
Previously aten.conv2d was only lowered if there was no bias.
Here lowering is extended to support bias.
2021-12-08 14:44:36 -08:00
Vivek Khandelwal 9958cf08b6 [MLIR][TORCH] Add E2E support for aten.zeros op
This commit adds lowering of `aten.zeros` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-08 22:42:33 +05:30
Prashant Kumar 977b1b03ea Add aten::nll_loss_forward op lowering.
The op lowering has been added as a part of `torch-lower-to-linalg`
pass. This takes care of ignore_index, but the weight and reduction
operands are still to be accounted for.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-07 17:11:08 +05:30
Daniel Garvey b0cb49ca93
Add scalar type promotion for mul and div (#454) 2021-12-03 13:51:25 -06:00
Suraj Sudhir c9c9b68d1f [tosa] Add Torch reduction operators
- Supports variants with multiple dims, one dim, all dims
- Leverages legalize_common and legalize_utils code from
TensorFlow-TOSA work

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-12-03 09:01:48 -08:00
Yi Zhang 24bc06fc8d Fix compilation warnings. 2021-12-03 11:44:32 -05:00
Daniel Garvey a52aded0b9
Add lowering for slice and selectInt (#398) 2021-12-02 22:09:21 -06:00
Vivek Khandelwal 46a2189a41 [MLIR][TORCH] Add E2E support for aten.bitwise_and.tensor op
This commit adds lowering of `aten.bitwise_and.tensor` op.

Signed-Off By: Vivek Khandelwal vivek@nod-labs.com
2021-12-02 21:06:15 +05:30
Vivek Khandelwal 46a0668b3b [MLIR][TORCH] Add E2E support for aten.mean and aten.numel op.
This commit adds lowering of `aten.mean` and `aten.numel` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-02 11:51:13 +05:30
Ramiro Leal-Cavazos e6675a50d3 Add support for dtype argument in reduction ops
Many reduction ops take as an argument an optional output dtype that
can change the type of the input tensor before the reduction is
performed. This commit adds support for the optional dtype flag that
had been previously ignored.

Test:
/tools/torchscript_e2e_test.sh -f 'ReduceSumDtype'
/tools/torchscript_e2e_test.sh -f 'ReduceSumDImIntListDtype'
2021-11-30 12:53:59 -05:00
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of `aten.squeeze` op into
`linalg.TensorCollapseShape` op. The size 1 dynamic dimensions are not
handled as a part of this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
ds1231h 9ad5954e41 aten.abs and aten.reciprocal to linalg 2021-11-30 11:31:55 -05:00
Prashant Kumar 36afa4a4d3 Add aten.fill.Scalar op lowering
The lowering of aten.fill.Scalar has been added.
The changes have been made as a part of -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-30 21:12:15 +05:30
Daniel Garvey 539511c19b
Add dropout op (#436)
Co-authored-by: dan <dan@nod-labs.com>
2021-11-29 12:30:03 -06:00
dan 03fdf56f21 add aten.add.int lowering in TorchToStd 2021-11-29 13:22:50 -05:00
Liam Fitzpatrick 7616d28ce1 Add leakyrelu support 2021-11-27 23:04:46 +05:30
Vivek Khandelwal 8d8d2c2fb8 [MLIR][TORCH] Add E2E support for aten.div.Scalar
This commit adds lowering of `aten.div.Scalar`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-11-24 11:17:40 +05:30
Gaurav Shukla 663fc1ef51 [MLIR][TORCH] Add E2E support for [`aten.mul.Scalar`|`aten.addmm`]
This commit adds lowering of `aten.mul.Scalar` and also adds
decomposition of `aten.addmm` to `aten.mul.Scalar`, `aten.add.Tensor`
and `aten.mm` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-18 22:26:41 +05:30
Prashant Kumar f8ff6d84f4 Support aten::linear with rank 3 inputs
Now, aten::linear supports rank 3 inputs. This is a fix
for the upcoming bert-inference task. The correct way should be
to support broadcasting in the `aten.matmul` op and decompose
`aten.linear` into the right ops.
2021-11-18 22:15:04 +05:30
Prateek Gupta 146f109152 [NFC] Cleanup code for aten.gelu_backward operation.
This commit adds minor non functional changes to the aten.gelu_backward
operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-18 11:24:04 -05:00
Prateek Gupta ecf78b9849
[TORCH][MLIR] Add E2E support for `aten.gelu_backward` operation. (#418)
This commit adds new operation `aten.gelu_backward` in the aten
dialect and adds lowering of this operation from aten to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-17 14:59:38 +05:30
Yi Zhang 53733933a4 Update llvm upstream to 0b17336f793108a7b10c3fa913039144ef1d0f61
Update AsmPrinter/Parser and MatchAndRewrite
2021-11-16 13:04:51 -05:00
Ramiro Leal-Cavazos a2392a0f19 Fix bug in handling of pin_memory in AtenOnesOp conversion
This commit fixes a bug with the way ConvertAtenOnesOp was matching on
the pin_memory bool argument, which always resulted in a failed match.
2021-11-12 11:38:25 -05:00
Suraj Sudhir 628a21bb13
[mlir][tosa] Refactor conversions to use templates (#416)
- Remove use of conversion construction macros
- Add mul and div op conversions
- Add corresponding tests

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 16:15:58 -08:00
Suraj Sudhir 1019ddf5a0 [tosa] Add structure for eltwise ops
Add a bunch of op legalizations.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-11-11 11:03:24 -08:00
George Petterson 2764e86f02 Add Rsqrt 2021-11-09 11:08:28 -05:00
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in the TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp`, `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing.
- Added i32 memref return/arg type to support e2e testing.
2021-11-08 17:50:52 -05:00
George Petterson e23cabf3a9 Add log2 2021-11-08 16:19:59 -05:00
George Petterson f41958037a Add NumToTensor 2021-11-08 15:56:52 -05:00
Prateek Gupta 18e8806b14 [TORCH][MLIR] Add E2E support for aten::to.dtype.
This commit adds end to end support for AtenToDtypeOp from aten
to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-08 12:56:03 -05:00
Wang Kangyu 4bb9b44775 Add lowering of "aten.pow.Tensor_Scalar" op
Add e2e support for torch.pow(Tensor, Float)
2021-11-08 09:19:50 -08:00
Wang Kangyu b33543af85 Add lowering of aten.floor op 2021-11-06 17:31:44 -04:00
nodlabs 5ff823ace9 lowered Sqrt to linalg
reused clang-format, as changes got deleted
2021-11-06 11:29:46 -04:00
Prashant Kumar 127c7d8e27 Add lowering of `torch.log` op
The lowering of `torch.log` op has been added.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-02 21:18:00 +05:30
George Petterson 6dde5b347e Add rsub 2021-11-02 09:56:48 -04:00
Prashant Kumar 53b4275ef5 Add lowering of `aten.Int.Tensor` op.
The lowering of `aten.Int.Tensor` op has been added.
The changes have been made as a part of `convert-torch-to-linalg` pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-01 21:58:08 +05:30
Gaurav Shukla 69eaf9a154 [MLIR][TORCH] Add E2E support for `torch.aten.view`
- This commit adds lowering of `aten.View` to `linalg.TensorExpandShape`.
- This lowering will be successful only when one or more static
  dimensions are expanded.
- It also fixes a typo in `ConvertAtenFlattenUsingIntsOp` conversion
  pattern.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-10-29 22:33:10 +05:30
Yi Zhang 752abc8d01 Add type promotion code to refine types.
The types have different levels of categories, where
complex > floating > integral > boolean (> means the left-hand
side has a higher category).

The operands have different levels of priorities where:
dimensioned tensor > 0-dim tensor > scalar == wrapped 0-dim tensor.
This is represented by the `ResultTypeState.dimResult`,
`ResultTypeState.zeroResult` and `ResultTypeState.wrappedResult` in
the source code.

For operands of the same priorities, the result type should be the
highest categories with sufficient width to hold all operands.

By default, only the highest priority operands participate in the type
promotion logic. Lower priority operands participate if they are in
a higher category than any higher priority operands.

For example, <[],f32> (lower priority) and a <[1], si64> tensor would
result in a <[?],f32> tensor because floating > integral. As another example,
<[],f64> (lower priority) and a <[1], f32> tensor would result in a
<[?], f32> tensor because f32 and f64 are in the same category.

The ScalarType enum definition, type promotion table, ResultTypeState
struct definition and some helpers are copied from
aten/src/ATen/native/TypeProperties.*
Other references:
- https://pytorch.org/docs/stable/tensor_attributes.html#type-promotion-doc
- https://github.com/pytorch/pytorch/issues/9515

Other minor changes:
1. Fix `visitExpandLikeOp` to consider cases where the given sizes list
size is larger than the input rank.
2. Add back the somehow deleted `torch.aten.softmax.int` tests in
decompose-complex-ops.mlir.
2021-10-29 11:17:39 -04:00
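The two examples in the commit above can be cross-checked against PyTorch's own promotion rules via the public `torch.result_type` (illustrative only; the pass copies the C++ helpers rather than calling this API):

```python
import torch

zero_dim_f32 = torch.tensor(1.0)                      # 0-dim, lower priority
ranked_si64 = torch.tensor([1], dtype=torch.int64)    # dimensioned, higher priority
# floating beats integral, so the lower-priority f32 operand participates:
assert torch.result_type(zero_dim_f32, ranked_si64) == torch.float32

zero_dim_f64 = torch.tensor(1.0, dtype=torch.float64) # 0-dim, lower priority
ranked_f32 = torch.tensor([1.0])                      # dimensioned, higher priority
# same category (floating), so the dimensioned operand's dtype wins:
assert torch.result_type(zero_dim_f64, ranked_f32) == torch.float32
```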
George Petterson 2ea2ab518b Add contiguous 2021-10-29 11:11:50 -04:00
Suraj Sudhir 7e4ef74774
[tosa] Add Torch.sigmoid fp32 to TOSA (#386)
* [tosa] Add Torch.sigmoid fp32 to TOSA

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2021-10-28 10:09:12 -07:00
Prateek Gupta c33a2ca952 [TORCH][MLIR] Add E2E support for aten.permute.
This commit adds lowering of aten.permute to linalg.generic operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-10-28 10:25:26 -04:00
Sean Silva 30df2ec71b Add min/max/clamp support.
Part of #380

Also
- BoolType is not considered as Scalar
- e2e framework fixes for nan handling
- `tu.rand(..., low=, high=)` support
- delete unused variable (fix warning)
- Add IouOfModule from #380 to e2e test suite (this is a common
  calculation in vision models)

2021-10-27 13:29:21 -07:00
Prashant Kumar 5009cbf55c Add lowering of aten.matmul op.
Lowering of `aten.matmul` op is added from torch to linalg dialect.
The different cases correspond to
https://pytorch.org/docs/stable/generated/torch.matmul.html.
TODO: Broadcasting in case of batch-matmul is yet to be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-10-26 12:45:09 -04:00
Boian Petkantchin e276dbbaa6
Add aten::gelu lowering (#374)
* Print more exception info on error during test execution

* Fix formatting

* Add aten::gelu lowering

Co-authored-by: Boian Petkantchin <boian@nod-labs.com>
2021-10-25 16:16:01 -07:00
Ramiro Leal-Cavazos 8bfb819d35 Fix bug with transpose of negative dims
Summary:
This commit fixes an off-by-one error in how negative dimensions were
being handled in the lowering of transpose. This commit also adds
tests to transpose and unsqueeze to test negative dimensions.
2021-10-25 15:50:55 -04:00
George Petterson 22aeb967c5 Add ones 2021-10-21 14:46:59 -04:00
George Petterson 8853dfbc74 Add broadcast 2021-10-19 13:33:31 -04:00
Yi Zhang a459e09ab7 E2e support for aten.softmax.int and aten.embedding
- Added a DecomposeComplexOps pass to decompose complex torchOps.
- Refactored `visitAtenArgmaxOp` and `visitAtenAnyDimOp` to
`visitReductionAlongDimIntOp`.
- Moved some helper functions into
torch-mlir/Dialect/Torch/Utils/Utils.h to be shared by multiple files.
- Added support for f64 tensor as argument and return types.
2021-10-18 17:57:45 -04:00
Yi Zhang 0902438882 Update llvm-project to a54f4eae0e1d0ef5adccdcf9f6c2b518dc1101aa
This brings in https://reviews.llvm.org/D110797. PRs that are in
progress will need to use scripts provided by
https://llvm.discourse.group/t/psa-removed-arithmetic-ops-from-standard/4455.
2021-10-18 13:36:42 -04:00
dan 7750d2173a add argmax lowering
Add argmax lowering from torch to linalg
2021-10-13 14:31:16 -04:00
Sean Silva 0c5c84d63d Add a basic TOSA E2E backend.
We lower through linalg-on-tensors and use RefBackend to run it.
This adds enough support for a "tanh" op. Adding more ops should be
fairly mechanical now that things are wired up. Run with:
```
./tools/torchscript_e2e_test.sh -c tosa
```

The backend structure is very similar to linalg-on-tensors based E2E
backends and is a nice parallel (see `tosa_backend.py`). Actually, this
forced a nice refactoring to the layering here. We removed
`torchscript-module-to-linalg-on-tensors-backend-pipeline` and instead
require separately running
```
torchscript-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline
```
This highlights that the step that lowers to the "torch backend contract"
of cleaned-up `torch` dialect ops is a critical step in the lowering.
Going forward, that is the key load-bearing contract of the torch-mlir
project, not the linalg-on-tensors backend contract.

Recommended review order:
- `TorchToTosa.cpp` / `TorchToTosa/basic.mlir`
- `python/torch_mlir_e2e_test/torchscript/configs/tosa_backend.py` and
  the new `utils.py` file there.
- `python/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py` and
  `abc.py` in that directory for the TOSA backend e2e interface.
- other misc mechanical changes
2021-10-08 09:59:45 -07:00
Yi Zhang 98ba255288 E2e support for layernorm. 2021-10-04 14:15:13 -04:00
Sean Silva 5b6902e31c Dual license the torch-mlir project.
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.

The standard file comment is now:

```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```

See `LICENSE` in the project root for the terms of both licenses.
2021-10-01 10:46:08 -07:00
Sean Silva 8b2c099914 Update llvm-project to 204d301bb1921431a853c0bfba32007c018df1d5
This brings in the fix for the obscure RefBackend bug we were hitting.
2021-09-28 17:38:10 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva a99cbeeb7e Move TorchConversion dialect and TorchTo* into torch-mlir 2021-09-23 21:39:31 -07:00
Yi Zhang 603e068e45 E2e implementation for `aten.cat`,`aten.gather`, `aten.bmm`
Also contains the following changes:
- Remove derefineOp canonicalizer because it's not safe.
- Support for optional tensors and tensor lists in reduceOpVariant. This
only works for some special, easily detected cases. For lists, it covers
the case where the list comes from a `ListConstruct`. For optionals, it
covers the case where the optional is constructed from a `DerefineOp`.
- Remove the `inferReturnTypes` for `FromBuiltinTensorOp` because it's
not safe to deduce types from the input. For example, a built-in tensor
of i8 could be converted to si8 or ui8. It's better to let the user
specify the return type explicitly.
2021-09-22 19:15:01 -04:00
Sean Silva 1a0b953ea7 Eliminate almost all mentions of IREE.
A few remain in examples/docs that will naturally be updated in due
time.

This regresses the list support and the general direction of more widely
supported control flow, lists/dicts/globals that we were going for with
the TorchScript path. The idea is that we are deferring that work to
make torch-mlir a very clean standalone thing. We will reboot it,
probably using some of the tools of iree_pydm to make it simpler, and in
a more natural place (such as an iree-torch repo that depends on IREE and
torch-mlir to build a working PyTorch frontend solution for IREE -- it
was really weird that npcomp depended on IREE).
2021-09-22 16:06:38 -07:00
George Petterson ecc334123c Added transpose lowering 2021-09-19 20:28:27 -04:00
Sean Silva b6be96d722 [torch-mlir earthmoving (2/N)] Python code movement.
This moves the bulk of the Python code (including the Torch interop)
from `frontends/pytorch` into `torch-mlir/TorchPlugin`. This also
required reconciling a bunch of other Python-related stuff, like the
`torch` dialects.

As I did this, it was simpler to just remove all the old numpy/basicpy
stuff because we were going to delete it anyway and it was faster than
debugging an intermediate state that would only last O(days) anyway.

torch-mlir has two top-level python packages (built into the
`python_packages` directory):

- `torch_mlir_dialects`: `torch` dialect Python bindings (does not
  depend on PyTorch). This also involves building the aggregate CAPI for
  `torch-mlir`.
- `torch_mlir`: bindings to the part of the code that links against
  PyTorch (or C++ code that transitively does).

Additionally, there remain two more Python packages in npcomp (but
outside `torch-mlir`):

- `npcomp_torch`: Contains the e2e test framework and testing configs
  that plug into RefBackend and IREE.
- `npcomp_core`: Contains the low-level interfaces to RefBackend and
  IREE that `npcomp_torch` uses, along with its own
  `MLIR_PYTHON_PACKAGE_PREFIX=npcomp.` aggregation of the core MLIR
  python bindings. (all other functionality has been stripped out)

After all the basicpy/numpy deletions, the `npcomp` C++ code is now very
tiny. It basically just contains RefBackend and the `TorchConversion`
dialect/passes (e.g. `TorchToLinalg.cpp`).

Correspondingly, there are now 4 main testing targets paralleling the
Python layering (which is reflective of the deeper underlying dependency
structure)

- `check-torch-mlir`: checks the `torch-mlir` pure MLIR C++ code.
- `check-torch-mlir-plugin`: checks the code in `TorchPlugin` (e.g.
  TorchScript import)
- `check-frontends-pytorch`: Checks the little code we have in
  `frontends/pytorch` -- mainly things related to the e2e framework
  itself.
- `check-npcomp`: Checks the pure MLIR C++ code inside npcomp.

There is a target `check-npcomp-all` that runs all of them.
The `torch-mlir/build_standalone.sh` script does a standalone build of
`torch-mlir`.

The e2e tests (`tools/torchscript_e2e_test.sh`) are working too.

The update_torch_ods script now lives in
`torch-mlir/build_tools/update_torch_ods.sh` and expects a standalone
build.

This change also required a fix upstream related to cross-shlib Python
dependencies, so we also update llvm-project to
8dca953dd39c0cd8c80decbeb38753f58a4de580 to get
https://reviews.llvm.org/D109776 (no other fixes were needed for the
integrate, thankfully).

This completes most of the large source code changes. Next will be
bringing the CI/packaging/examples back to life.
2021-09-15 13:40:30 -07:00
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it is mostly mechanical changes and
renames. As a quick summary (we can change this down the road easily)
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract" which is cleaned up IR (inlining, simplifcation,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
Sean Silva 5f3eb637c4 Fix lowering of reduce ops
We were not filling the `outs` with the neutral element of the
reduction, which resulted in reading uninitialized values (we were
getting lucky that sometimes the uninitialized buffers were all zeros).
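
For illustration only, the same bug class in plain C++ rather than linalg: the accumulator must start at the reduction's neutral element.

```
#include <limits>
#include <vector>

float reduceSum(const std::vector<float> &xs) {
  float acc = 0.0f; // neutral element of addition
  for (float x : xs)
    acc += x;
  return acc;
}

float reduceMax(const std::vector<float> &xs) {
  float acc = std::numeric_limits<float>::lowest(); // neutral element of max
  for (float x : xs)
    acc = acc < x ? x : acc;
  return acc;
}
```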

Also,
- Slight tweak to error messages in the e2e framework.
2021-09-08 15:30:15 -07:00
Ramiro Leal-Cavazos 6724de7692 Added sum lowering
Added lowering of torch.sum to linalg
2021-09-03 17:37:06 -07:00
Sean Silva ed2afe43e7 Fix TorchToIREE lowering.
We needed to resize the list, not just reserve capacity.
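A plain C++ analogy of the fix, with `std::vector` standing in for the IREE list:

```
#include <cassert>
#include <vector>

int main() {
  // reserve() only allocates capacity, so writing elements by index is still
  // out of bounds; resize() actually creates the elements that will be written.
  std::vector<double> reservedOnly;
  reservedOnly.reserve(2);
  // reservedOnly[0] = 1.0;   // would be undefined behavior: size() is still 0

  std::vector<double> resized;
  resized.resize(2);
  resized[0] = 1.0;           // fine: the two elements exist
  resized[1] = 2.0;

  assert(reservedOnly.size() == 0 && resized.size() == 2);
  return 0;
}
```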
2021-09-03 23:57:54 +00:00
dan d9df4bfc95 Add sigmoid lowering
Follows existing conventions for activation functions
2021-08-30 17:32:23 -04:00
Stella Laurenzo 32f56c67f4 Integrate llvm-project at a8de667af092c9b4b3b4a95827a521602ebf14ed.
* Requires patch https://reviews.llvm.org/D108527
2021-08-22 18:59:59 -07:00
Sean Silva cab8d922ec Add TorchToIREE and factor out TorchConversion dialect.
This converts a basic list op (torch.prim.ListConstruct) to the IREE
dialect.

```
    def forward(self, x: float):
            return [x, x]
```

turns into:

```
builtin.func @forward(%arg0: !torch.float) -> !torch.list<!torch.float> {
  %0 = torch.prim.ListConstruct %arg0, %arg0 : (!torch.float, !torch.float) -> !torch.list<!torch.float>
  return %0 : !torch.list<!torch.float>
}
```

which turns into:

```
builtin.func @forward(%arg0: f64) -> !iree.list<f64> {
  %c1 = constant 1 : index
  %c0 = constant 0 : index
  %c2 = constant 2 : index
  %0 = iree.list.create %c2 : !iree.list<f64>
  iree.list.set %0[%c0], %arg0 : !iree.list<f64>, f64
  iree.list.set %0[%c1], %arg0 : !iree.list<f64>, f64
  return %0 : !iree.list<f64>
}
```

As part of doing this, I realized that it was time to formalize the IR
form that we reach right before running TorchTo{Linalg,Std,...}. We now
call it the "Torch backend contract". We then lower the "Torch backend
contract" to the "npcomp backend contract", which involves the new
TorchConversion (`torch_c`) dialect, which holds ops that need to
operate on both the npcomp backend types (e.g. builtin tensors, i1, IREE
list, etc.) and the `!torch` types.

This made more sense, as I realized that if I didn't factor out
`torch_c` then the Torch dialect would have a dependency on IREE
dialect (we previously didn't notice this was an issue because we only
depended on `builtin` types), which seemed wrong to me.

Recommended review order:
- TorchToIREE.cpp / `TorchToIREE/basic.mlir`
- Look at the new structure of createTorchScriptToNpcompBackendPipeline.
  It now lives in TorchConversion/Transforms/Passes.cpp and cleanly
  calls into `Torch::createTorchScriptToTorchBackendPipeline` for the
  frontend lowering to the Torch backend contract.
- Mechanical change extracting
  `torch_c.{to,from}_{i1,i64,f64,builtin_tensor,iree_list}` into a new
  TorchConversion dialect, and a few passes specific to the lowering
  from the Torch backend contract to the npcomp backend contract.
- Minor fixes to TorchToLinalg.cpp to use unconverted operands (now that
  we convert lists as part of operand materialization, we need to use
  the original operands). Also added test for AtenMaxPool2dOp and fixed
  m_TorchConstantIntList.
- TmpDeleteDeadIREELists pass. Temporary pass for deleting dead IREE lists that
  are created as part of operand materialization for conv/max pool/avg pool ops
  in TorchToLinalg.
2021-08-16 15:01:58 -07:00
Yi Zhang 0342b73bf1 Add torch.aten.flatten.using_ints and aten.MaxPool2d linalg lowering
- torch.aten.flatten.using_ints to linalg lowering
- torch.aten.max_pool2d to linalg lowering
- Support torch.aten.conv2d for more flexible dilation and strides values
2021-08-04 12:00:43 -04:00
Sean Silva f168cacd6d Remove TCF and TCP.
These were legacy concepts that are now superseded by direct Torch to
linalg-on-tensors lowering. These were based on some very early thinking
related to the layering of frontends vs codegen, which is now obsolete
because:
- We expected a lot more centralization at the frontend (TCF) level. It
  turns out that frontend needs really vary a lot, and there is no grand
  unifying TCF dialect plausible. The additional layer isn't worth it.
- Linalg-on-tensors obsoletes the primary need for TCP. There are still
  a few things not representable with linalg-on-tensors, but the support
  is growing and the whole "not included in linalg-on-tensors" direction
  needs to be rethought. Our TCP dialect didn't cover any of the
  actually important things in this space (such as sort, FFT, top-k,
  etc.).

See historical [slides](https://drive.google.com/file/d/1iljcpTQ5NPaMfGpoPDFml1XkYxjK_6A4/view) / [recording](https://drive.google.com/file/d/1jSPa8TwPKUt0WuLquGc8OgSUVYJHMvWZ/view)
for more details on the origin story here.

Their presence was confusing users too
[bug](https://github.com/llvm/mlir-npcomp/issues/248).

Also,
- Trim down npcomp-run-mlir testing. It was testing TCF to TCP
  lowering for the most part. The essential stuff is retained and
  rephrased with linalg-on-tensors. (we should probably rename it
  "refback-run" or something, as it is just a way to invoke RefBackend)
- test/Python/Backend/RefJIT/simple_invoke_numpy.py is XFAIL'ed. Our
  "anti-framework" direction seems to be the likely future path.
2021-08-02 12:08:39 -07:00
Stella Laurenzo 2dbab50444
Rework the python build to a static assembly of MLIR+NPCOMP (#251)
* Adapt to python build system updates.

* Bump llvm to 310c9496d80961188e8d8f8ad306cdf44bd7541f (includes python build updates)
* Adds refback C-API.
* Re-layers all python builds.
* Rework CI.
2021-07-27 16:10:10 -07:00
Stella Laurenzo 2ecbcbf8c7
Bump llvm-project to a085c23aa3c8f91866d7f4588d4f683407dc775d. (#250)
* Added additional *ToLLVM conversion patterns (they were disaggregated from standard).
* Misc renames.
* Spelling change on ConvNCHW op, and it now expects strides and dilations attributes.
2021-07-23 14:13:19 -07:00
Yi Zhang 89d4931324 Linalg lowering for aten.conv2d and aten.AdaptiveAvgPool2d
1. Add m_TorchConstantIntList
2. Lowering for aten.conv2d
3. Lowering aten.AdaptiveAvgPool2d
2021-07-09 15:04:29 -07:00
Sean Silva 83b5b5456d Bump llvm-project to da289a174fc6617c7be37be2947480510fd4f02a
- Build adjustments for `.cpp.inc` dialect files.
- Renaming of `memref.dim` to `tensor.dim` for tensor case.

Minor changes:
- Renaming of `mlir::linalg::ReassociationIndices` to
  `mlir::ReassociationIndices`.
- Adjust command line option parsing in npcomp-run-mlir.
2021-07-07 13:57:29 -07:00
Sean Silva 79928cd2dd Generalize support for elementwise ops.
We plumb through e2e a fair number of interesting cases:
- unary, binary, ternary elementwise ops
- ops like `torch.aten.add.Tensor` that also take a scalar parameter
- static size-1 broadcasting

We allow the static size-1 broadcasting case, but emit a runtime error
in the case of dynamic size-1 broadcasting. This seems like a sweet spot
subset of things that can be lowered directly to linalg, while not being
overly constraining to users. This is consistent with what IREE is doing
for CHLO->Linalg lowering as well
([code](50bf7a87e4/iree/compiler/InputConversion/MHLO/BroadcastingToLinalgPatterns.cpp (L1))).
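
A rough sketch of that policy as a per-dimension classification (hypothetical helper, not the actual `ConvertElementwiseOp` code):

```
constexpr long kDynamicSize = -1;

enum class DimBroadcast {
  StaticOk,        // sizes known and compatible (equal, or input is a static 1)
  RuntimeCheck,    // input size unknown: assert at runtime that it matches
  StaticMismatch   // sizes known and incompatible: reject at compile time
};

DimBroadcast classifyDim(long inputSize, long resultSize) {
  if (inputSize == 1)
    return DimBroadcast::StaticOk; // static size-1 broadcast, lowered directly
  if (inputSize == kDynamicSize)
    return DimBroadcast::RuntimeCheck; // dynamic size-1 broadcasting becomes a runtime error
  if (resultSize != kDynamicSize && inputSize != resultSize)
    return DimBroadcast::StaticMismatch;
  return DimBroadcast::StaticOk;
}
```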

To test the static size-1 case, we added support for the
`torch.aten.unsqueeze` op and lowering for it through
`linalg.tensor_expand_shape`. This involved a generalization of
`MaximizeValueSemantics` able to handle it (the solution there also
works for `torch.aten.flatten.using_ints` which we need for ResNet
anyway)

Also, a few minor additional changes:
- Add `VerifyInvariantsBeforeBackendLowering` pass, which catches a
  large class of errors before we get to backend lowering (now that we
  are doing dialect conversion, the errors are way nicer if we just emit
  them up front rather than in the guts of a random pattern).
- Minor change to RefBackend to allow `linalg.tensor_expand_shape`.

Recommended review order:
- e2e tests in elementwise.py
- `ConvertElementwiseOp` in TorchToLinalg.cpp + elementwise.mlir test
- `ConvertAtenUnsqueezeOp` in TorchToLinalg.cpp + unsqueeze.mlir test
- RefineTypes.cpp + tests
- MaximizeValueSemantics changes + test
- VerifyInvariantsBeforeBackendLowering pass + test
2021-06-28 13:28:38 -07:00
Sean Silva 145d4ae23c Bump llvm-project to a37cf17834d39411ed1d669098b428f8374c5b45
Changes:
- Change to operand ordering of `linalg.fill`.
2021-06-23 10:03:29 -07:00
Yi Zhang 45f2edfc7a Add TorchToSCF pass.
1. Add TorchToSCF pass.
2. Convert prim.If and prim.If.yield.
2021-06-23 08:06:43 -07:00
Yi Zhang 6dddb4d4fe Add torch.aten.batch_norm Linalg lowering support
1. Added a simplified version of torch.aten.batch_norm which only handles
inference and assumes the weight, bias, running_mean, and running_var are not
None (a scalar sketch of this inference-mode formula appears after this list).

2. Removed the primitive types check in verifyLinalgCompatibleTypes check
since now we have proper type converter to handle torch types conversion.
The checks for RankedTensorType are kept because the type converter
doesn't guarantee the converted builtin tensor type is ranked. A
separate verification pass to verify the invariant expected by later
passes will need to be added before those can be removed as well.
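
The scalar formula behind the simplified inference-mode lowering mentioned in item 1, assuming the weight, bias, and running statistics are all present (illustrative only):

```
#include <cmath>

// Inference-mode batch norm applied to one element.
float batchNormInference(float x, float mean, float var, float weight,
                         float bias, float eps = 1e-5f) {
  return (x - mean) / std::sqrt(var + eps) * weight + bias;
}
```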
2021-06-22 16:45:21 -07:00
Yi Zhang e6adecac83 Convert Torch constant ops to std.constant 2021-06-18 12:22:47 -07:00
Sean Silva 333e07a74e Add `torch.vtensor.literal` op.
This op is much better behaved than the `torch.tensor.literal` op
(which is the new name of the `torch.tensor` op). In particular
`torch.vtensor.literal`:
- always has a maximally refined type.
- always has value semantics.
- can be constant folded / CSE'd.

ReduceOpVariants is changed to perform the transformation from
`torch.tensor.literal` to `torch.vtensor.literal` (which in general
involves static information casts and copies).

This new op also allowed tightening up `torch.tensor.literal` to only
accept NonValueTensorType (instead of any tensor type).

This new ".literal" name is more descriptive. It was getting too
confusing seeing an op called just `torch.tensor` (we originally called
it that because that's the name of the similar function in the Torch
Python API, but it just doesn't fit here).
2021-06-17 14:37:04 -07:00
Sean Silva 4a0eb44d17 Add a !torch.float type.
This removes the dependence of the `torch` dialect on the low-level
builtin types.
Now the `torch` dialect is a standalone layer, suitable for targeting
from higher-level Python abstractions without any premature lowering to
primitive types.
2021-06-17 09:24:18 -07:00
Sean Silva f49ebf1690 Add `!torch.int` type.
This replaces the ad-hoc use of `i64` throughout the Torch layer, and
helps to keep it crystal clear the distinction between `!torch.int`
(which is modeling the Python `int` type) and the various types that
serve as dtypes of tensors, which are a totally different type universe.

Changes:
- `!torch.int` type and C bindings.
- Change `torch.constant.int` parser to not need the `: i64` at the end.
- `m_TorchConstantInt` matcher to aid with matching constants.
- BackendTypeConversion changes for `!torch.int` -> `i64` type
  conversion.
- Refactor finalizing patterns in FinalizingBackendTypeConversionPass
  (they were getting very repetitive).
- Mechanical rewriting of `!torch.int` to `i64` in all the tests, and
  `AnyTorchIntType` to `Torch_IntType` in the `.td` files.
2021-06-17 07:28:23 -07:00
Sean Silva 784156a998 Add `!torch.bool` type.
This finishes removing the dependence on the basicpy dialect!

Changes:
- Add `!torch.bool` type and replace use of `!basicpy.BoolType` in
  Torch-related code.
- Rename BuiltinTensorize to BackendTypeConversion since now it handles
  bool conversions (and, when we add !torch.int and !torch.float, it
  will handle those as well), and generalize the related utilities (I
  also moved them to Torch/Transforms since they aren't really part of
  Torch/IR).
  - Add `torch.to_i1` and `torch.from_i1` ops for materializations
- [cleanup] Reorganize `torch.constant.*` ops in TorchOps.td
- Remove dependency of `torch` dialect on `basicpy` dialect and also
  `std` dialect. For `std`, we use some call related ops, but the
  `torch` dialect itself never produces them (we have passes that do
  though).

This is fairly mechanical. Recommended review order:
- New stuff in Torch/IR
- New BuiltinTypeConversion files.
- Mechanical fixups elsewhere.
2021-06-16 13:22:00 -07:00
Yi Zhang 7b7c9c5d3d Add aten.relu Linalg lowering support 2021-06-16 08:18:14 -07:00
Sean Silva 370e3270ab Introduce `!torch.tensor` / `!torch.vtensor` types.
This removes our reliance on the numpy dialect and avoids our off-label
use of the builtin tensor type for modeling unknown dtypes. The
`!torch.vtensor` (`ValueTensorType`) type is a value-semantic tensor.
The `!torch.tensor` (`NonValueTensorType`) type is a non-value-semantic
tensor. The new types look as follows syntactically:

```
// Least-static-information, non-value-semantic tensor.
!torch.tensor
// Explicit form of least-static-information variant.
!torch.tensor<*,unk>
// Least-static-information, value-semantic tensor.
!torch.vtensor
// Explicit form of least-static-information variant.
!torch.vtensor<*,unk>
// Fixed-set of allowable element types, with first-class support for
// Torch's frontend signedness semantics.
!torch.tensor<*,si32>
// First-class support for unknown dtypes.
!torch.tensor<[?,?,?],unk>
// Standard MLIR representation of `?` for unknown dimensions.
!torch.tensor<[?,2,?,4],unk>
// Statically shaped / dtyped example.
!torch.vtensor<[1,2,3,4],f32>
```

This required fairly significant changes throughout the compiler, but
overall it is a big cleanup. We now have a much clearer layering of "the
Torch frontend lowering" vs "lowering to std + linalg + etc.".

At the C++ level, there is `ValueTensorType`, `NonValueTensorType`.
We also have a helper `BaseTensorType` (kind of like ShapedType) which
interoperates with those two.

Included changes:
- New `torch.tensor(dense<0.0> : tensor<5xf32>) : !torch.tensor` op for
  creating torch tensor literals in the frontend.
- Consistently use signedness for the types (except i1 which I didn't
  touch -- we need to sort out the situation with !basicpy.BoolType
  there anyway so will be attending to that soon)
- Frontend can annotate whether an argument to the function has value
  semantics. We currently require this, as our backend contract does not
  currently allow us to even model the non-value-semantic case. Before,
  the value-semantic assumption was randomly injected in the middle of
  the pass pipeline.
- Move ArrayToTensor (now called MaximizeValueSemantics) and
  RefinePublicReturn passes to torch dialect.
- The TorchToStd and TorchToLinalg passes are now type conversions from
  `!torch.vtensor` to `tensor` and use the dialect conversion infra.
  The overall conversion pipeline is set up following the best practices
  of the "Type Conversions the Not-So-Hard Way" talk. This required
  introducing `torch-func-builtin-tensorize` and
  `torch-finalizing-builtin-tensorize` passes analogous to the upstream
  bufferization passes with the corresponding names (mostly just
  copypasta from there).
- Misc Torch-level canonicalizations -- we now cleanly layer the
  lowering to std later in the pipeline, so we are gradually lessening
  our reliance on random std constant folding before we get to that
  point.

Recommended review order:
- New types in TorchTypes.td/TorchTypes.h/TorchDialect.cpp
- New ops in TorchOps.td / TorchOps.cpp
- Less important / more mechanical stuff
  - Frontend changes.
  - Pass changes/additions in `Torch/Transforms` and `Conversion/`
2021-06-10 10:56:48 -07:00
Sean Silva 2efda323ff Significantly restructure torch/aten import design.
This is a really major and invasive restructuring of the way we get
torch operators (`torch::jit::Operator` / `c10::OperatorHandle`) into
MLIR. Please forgive the challenging review, but due to the sheer
invasiveness, it wasn't really practical to do it in sane, smaller
pieces.

This fully replaces everything that was already working on the
TorchScript path (actually, more -- we added tanh support to
TorchToLinalg in order to delete the older code paths). Additionally,
I've kept the lights on for the acap path too, including what little e2e
stuff was working before (for expediency I made a few tiny compromises
along the way that will be easy to undo when we give that path proper
attention).

Overview of the new design:
- The torch operator `somens::someunqualname.someoverloadname` is
  imported as `torch.somens.someunqualname.someoverloadname` (skip the
  last dotted part if the overload name is empty), OR, if we don't have
  such an op registered, it is imported as
  `torch.operator "somens.someunqualname.someoverloadname" (...) : ...` (a
  small sketch of this naming scheme appears after this list).
  - The addition of the "overload name" is a critical element here, as
    the `(ns,unqual,overload)` triple is unique, which solves a lot of
    problems we were having.
  - This involves having separate MLIR ops for the `trailing_` and
    `.out` variants and all the different overloads. This seemed
    necessary, because the set of overloads is so wild and varied and
    unstructured. The previous design was leaning into some underlying
    structure that just isn't there -- the default situation is
    the "random overload that we want to manage on the MLIR side",
    rather than that being an exception. E.g.  `aten::ne` (not-equal)
    has 21 overloads, only 4 of which are c10 dispatcher ops (see this
    [gist](https://gist.github.com/silvasean/190ba918c550c956260e21254e1b8aa1)),
    and the "out" variant is really called `.Tensor_out` instead of
    `.out` as it frequently is for other ops.
  - Rationale for all being in `torch` namespace: the set of operators
    are so varied and unstructured that "dialect per namespace"
    doesn't result in anything resembling the typical MLIR dialect
    boundary expectations. We could maybe draw the boundary at
    dispatcher ops vs non-dispatcher ops, but that doesn't seem to
    really result in very much useful structure at this point in time.
  - Note: within the torch operator registry, we effectively have a
    mini-basicpy subdialect (already type-resolved), which is reasonably
    structured.
  - The existing Torch op interfaces are also removed -- now that we
    track the overload name, we can losslessly find the original
    operator.
- Instead of `ATenRecognizeKernelsPass`, we now have a
  `ReduceOpVariantsPass` that keys off certain traits (and perhaps
  eventually interfaces) to reduce variants of ops to a smaller set,
  ideally operating on immutable tensors and using surrounding ops to
  model the mutability/aliasing aspects.
  - Note: `torch.ns.unqual.overload` ops allow both immutable and
    mutable tensors (unlike the previous hard distinction in the common
    case). This is a premonition for a future change that will introduce a
    bona fide `!torch.tensor` type that will clean up a bunch of stuff.
- `TorchToLinalg` / `TorchToStd` supersede the existing
  "ATen->TCF->TCP->Linalg" path.
- The new `torch_ods_gen.py` supersedes `torch_signature_ods_gen.py`.
  It should look somewhat familiar, but the benefit of hindsight has
  allowed a lot of simplifications.
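
A small, hypothetical helper that captures the naming scheme described near the top of this list (this is not code from the importer):

```
#include <string>

// The (namespace, unqualified name, overload) triple maps to a single op name,
// and the trailing segment is dropped when the overload name is empty.
std::string torchOpName(const std::string &ns, const std::string &unqual,
                        const std::string &overload) {
  std::string name = "torch." + ns + "." + unqual;
  if (!overload.empty())
    name += "." + overload;
  return name; // e.g. ("aten", "ne", "Tensor_out") -> "torch.aten.ne.Tensor_out"
}
```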

The overall trend seems to be to make the `torch` dialect a nice layer
independent of anything else. It feels like as a natural result of
various future changes we will be removing the reliance on basicpy+numpy
dialects and have a nice self-contained type system too that properly
models the TorchScript type system (including proper subtyping,
mutable/immutable tensors, optional dtype, etc.).

Recommended review order:
- Start at some of the new import IR, e.g. in
  `frontends/pytorch/test/node_import/prim.py`,
  `frontends/pytorch/test/acap_export/test_export_add3.py`, and other
  tests.
- `frontends/pytorch/python/torch_mlir_utils/codegen/torch_ods_gen.py`
  and associated generated files:
  - `include/npcomp/Dialect/Torch/IR/GeneratedAtenOps.td`
  - `include/npcomp/Dialect/Torch/IR/GeneratedPrimOps.td`
- Inspect `ReduceOpVariants.cpp` / `reduce-op-variants.mlir` and the new
  traits in `include/npcomp/Dialect/Torch/IR/TorchTraits.h`
- Various code changes in the import path in
  `frontends/pytorch/csrc/builder`. Probably most interesting is the new
  code in `torch_to_mlir_utils.cpp` that has the logic to create the
  `torch.operator` ops or `torch.ns.unqual.overload` ops.

This is the [new ResNet IR](https://gist.github.com/silvasean/5407aafb710d07612b7b5b92eabecebe),
just to be able to look at a substantial sample of IR in the new style.
2021-05-19 13:37:39 -07:00
Sean Silva 122cae2ee3 Add aten::len.t, aten::size, and aten::gt.int primitive ops
Also add some canonicalizations that finally reduce ResNet down to a
single block.
2021-04-30 10:57:02 -07:00
Sean Silva 55c3cc6624 Add recognition/folder/lowering for aten::__is__, aten::ne.int, and aten::dim
Interestingly, TorchScript has its own op (`torch::jit::Operator`)
registry separate from the dispatcher (it is a superset of the
dispatcher).

This is where the "prim" ops and some "aten" ops (that should probably
be renamed to "prim") live. In particular, `aten::__is__` is in that
latter category of "aten but really prim". This registry is also the
source of truth for what the TorchScript interpreter calls into when it
executes.

The bulk of the "not part of the dispatcher" ops live in
09feb5f579/torch/csrc/jit/runtime/register_prim_ops.cpp (L82)

And the registry itself lives in:
09feb5f579/torch/csrc/jit/runtime/operator.cpp (L196)

This fold further reduces the IR of ResNet by folding away some
more not-taken branches. These not-taken branches in ResNet require
first-class handling of the list type which we don't yet have on any
backend.
2021-04-30 10:57:02 -07:00
Sean Silva 642482429c Bump llvm-project to 12011b5217929ef8a56c2099c6f3233934ea4fbc
- Rename FrozenRewritePatternList -> FrozenRewritePatternSet
2021-04-27 13:12:33 -07:00
Sean Silva 179105ca3e Add basic MLPs to the e2e curriculum.
These tests pass on the reference backend.

- Add aten.linear op + shape xfer function + ATen->Linalg lowering.
 - Note: this needs to be more automated, and needs to cover more cases.
 - Currently unimplemented caveats:
  - size-1 broadcasting for bias vector (either static-size-1 or ? case)
  - higher-rank aten.linear ops (not produced by torch.nn.Linear though)
  - type promotion (still don't even know the exact rules here)
- Add folder for torch.derefine op. Now the inliner can clean it up as
  it inlines. (call boundaries are a main place we need to insert
  torch.derefine) This is brittle -- the other important case is control
  flow which will need to be handled via an extension to
  RefineTypes.cpp (as will more robust call handling). River has an
  in-flight patch to update it to the new dataflow framework so I didn't
  want to do anything intrusive here.
    - Also adjust torch.derefine syntax to use the keyword `to` instead of
      `->`, as most type-only, cast-like ops do.
2021-04-27 12:18:54 -07:00
Sean Silva 544cb4ef54 Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d
- add_mlir_doc arg order
- fix some dependent dialects on passes that were now causing errors
- "encoding" attribute on mlirRankedTensorTypeGetChecked
2021-04-22 18:12:55 -07:00
Sean Silva f5dfa02523 Add `aten.mm` to linalg lowering.
This is our first op with error semantics, and stresses the system.

There are a few design notes of special interest:
- RefineTypes.cpp's note about shape inference in the presence of code
  that dynamically produces an error, and where that is provable statically.
- ATenToLinalg.cpp's notes about future automation of the ATen->linalg
  path.
- The notes in Passes.td about using low-tech `std.assert` ops instead
  of `shape.assuming`.

Note: Doesn't work on IREE yet due to the `std.assert` op (needs to be
lowered to `vm.fail` on the IREE side).
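
As an illustration of the error semantics (hypothetical helper, not the actual RefineTypes/ATenToLinalg code): the contracting dimensions must agree, and when that is not statically provable it becomes a runtime assertion, in the spirit of the low-tech `std.assert` approach noted above.

```
#include <cassert>

void checkMmShapes(long lhsRows, long lhsCols, long rhsRows, long rhsCols,
                   long *resultRows, long *resultCols) {
  // The runtime check an aten.mm lowering needs when shapes are dynamic.
  assert(lhsCols == rhsRows && "mismatching contracting dimension for aten.mm");
  *resultRows = lhsRows;
  *resultCols = rhsCols;
}

int main() {
  long rows = 0, cols = 0;
  checkMmShapes(2, 3, 3, 4, &rows, &cols); // 2x3 * 3x4 -> 2x4
  assert(rows == 2 && cols == 4);
  return 0;
}
```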
2021-04-16 12:03:31 -07:00
Aaron J Arthurs f9d9518f6e Declare TCP dialect dependency in TCFToTCP conversion 2021-04-07 14:23:56 -07:00
Sean Silva c6d56fed8a Add unary tanh lowering. 2021-03-30 16:39:49 -07:00