Commit Graph

50 Commits (7247c6a3a76482f6c9c4589c8c4f42872e7a1e1a)

Author SHA1 Message Date
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
- Only offers support for statically sized index tensors when there is more than one
- Dynamic shape support remains for single indexing tensors
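
For reference, a PyTorch-level sketch of the multi-index case this commit enables (shapes are illustrative):

```
import torch

x = torch.rand(4, 5, 6)
# Two statically sized index tensors: both are broadcast together and
# applied to dims 0 and 1 of the input.
i = torch.tensor([0, 2, 3])
j = torch.tensor([1, 0, 4])
print(x[i, j].shape)  # torch.Size([3, 6])
```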
2022-07-28 19:00:02 -04:00
Quinn Dawkins 3c9addf19c Add e2e support for aten.expm1 2022-07-27 12:31:35 +05:30
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
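
A quick PyTorch illustration of the two unary ops (semantics only, not the linalg lowering):

```
import torch

x = torch.tensor([0.0, 1.0, 2.0])
# softplus(x) = log(1 + exp(x)); log1p(x) = log(1 + x)
print(torch.nn.functional.softplus(x))
print(torch.log1p(x))
```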
2022-07-25 21:25:57 +05:30
Vivek Khandelwal 3589134d31 [MLIR][TORCH] Add decomposition for aten.var.dim op
This commit adds the decomposition for `aten.var.dim` op.
This commit also makes changes to the decomposition of the `aten.var` op.
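
A minimal sketch of the `aten.var.dim` semantics at the PyTorch level:

```
import torch

x = torch.rand(3, 4, 5)
# aten.var.dim: variance over the given dim, optionally keeping it.
print(torch.var(x, dim=1, keepdim=True).shape)  # torch.Size([3, 1, 5])
```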

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-15 09:53:42 +05:30
Ramiro Leal-Cavazos 11148e60d6
Undo shape lib changes + update function signature of sum + zero (#1035)
This commit does three things:
  1. Reverts some of the shape lib changes merged in
  https://github.com/llvm/torch-mlir/pull/844
  2. Updates the signature of `aten.sum_dim_IntList` that was recently
  updated in
  23bdb570cf
  3. Replaces `aten.zero.functional` with `aten.zero`, updated in 960758b0b7
2022-07-11 10:56:12 -07:00
Prateek Gupta 2d75654b2c [TORCH][MLIR] Add lowering of `aten.slice_scatter` and
`aten.select_scatter` op.

This commit adds:
1. Lowering of the `aten.slice_scatter` op into the `tensor.insert_slice`
op.
2. Decomposition of the `aten.select_scatter` op into the `aten.slice_scatter`
op.
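
A PyTorch-level sketch of the two scatter ops (illustrative values):

```
import torch

x = torch.zeros(8)
src = torch.ones(4)
# slice_scatter embeds `src` into the slice x[0:8:2].
print(torch.slice_scatter(x, src, dim=0, start=0, end=8, step=2))

y = torch.zeros(2, 3)
# select_scatter embeds a row at index 1 (a slice of size one).
print(torch.select_scatter(y, torch.ones(3), dim=0, index=1))
```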

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-11 14:07:21 +05:30
George Petterson a08ff0d7f2 Add lowering for _convolution 2022-07-11 11:03:03 +05:30
Tanyo Kwok d4f1f41435
[MLIR][TORCH] Add decomposition of aten.repeat (#932)
* [MLIR][TORCH] Add decomposition of aten.repeat

* refine & rebase

* refine static shapes

* add e2e test

* Rebase and Refine naming style
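
Semantics of `aten.repeat`, sketched in PyTorch:

```
import torch

x = torch.tensor([[1, 2], [3, 4]])
# aten.repeat tiles the input along each dimension.
print(x.repeat(2, 3).shape)  # torch.Size([4, 6])
```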
2022-07-01 13:02:31 +08:00
Ramiro Leal-Cavazos 400fecc1e5
[LINALG] Fix shape function of index.Tensor + support N-rank inputs (#972)
This commit fixes the shape function for `index.Tensor`, adding
support for multiple index tensors and `None`s in the indices
list. This commit also adds support for input tensors of rank greater
than 1. The lowering for `index.Tensor` still has the limitation
that only a single index tensor along the first dimension of the input
tensor is supported.
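
The indexing cases this shape function now covers, sketched in PyTorch (`:` plays the role of a `None` entry in the indices list):

```
import torch

x = torch.rand(3, 4, 5)
idx = torch.tensor([0, 2])
print(x[idx].shape)     # indices [idx]: torch.Size([2, 4, 5])
print(x[:, idx].shape)  # indices [None, idx]: torch.Size([3, 2, 5])
```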
2022-06-24 09:45:44 -07:00
Ashay Rane 234fc7fe0c
linalg: lower `aten.triu` op to `linalg.generic` (#965)
Prior to this patch, the torch dialect included `AtenTriuOp` for
computing the upper triangular part of the input matrix, but there was
no code for lowering the op to the linalg dialect.

This patch adds code to generate a `linalg.generic` operation that
compares indices (computed using `linalg.index`) to choose between zero
or the original value (using `arith.select`).  The lowering fails if the
input has fewer than two dimensions.  This patch also adds a few
end-to-end tests.
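
A plain-PyTorch mirror of the lowering strategy described above (index comparison plus select):

```
import torch

x = torch.arange(9.0).reshape(3, 3)
rows = torch.arange(3).unsqueeze(1)
cols = torch.arange(3).unsqueeze(0)
# Compare indices, then select zero or the original value.
manual = torch.where(cols >= rows, x, torch.zeros_like(x))
assert torch.equal(manual, torch.triu(x))
```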
2022-06-23 22:45:48 -07:00
Ramiro Leal-Cavazos 189afa82c5
Update shape library with LLVM bump changes (#973) 2022-06-23 18:13:03 -07:00
Vivek Khandelwal 77ab31641f [MLIR][TORCH] Add decomposition of aten.numpy_T op
This commit adds the decomposition of `aten.numpy_T` op into
`aten.t` or `aten.permute` op.
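
The decomposition target, sketched in PyTorch:

```
import torch

x = torch.rand(2, 3, 4)
# numpy-style transpose: a permute with the dimension order reversed.
print(x.permute(*reversed(range(x.dim()))).shape)  # torch.Size([4, 3, 2])
```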

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:01:22 +05:30
Bob Adolf b90837ee24
Temporarily revert support for custom op extensions. (#944)
The MacOS builders are having linking trouble with the extension library.
Until it's fixed, all support for op extensions is disabled. It should be
easy to restore once the issue is resolved.
2022-06-14 18:24:40 -07:00
Vivek Khandelwal 33fa8e7761 [MLIR][TORCH] Add decomposition of aten.floor_divide op
This commit adds the decomposition of `aten.floor_divide` op into
`aten.div.Tensor_mode` op.
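
The decomposition, sketched at the PyTorch level:

```
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
# div.Tensor_mode with rounding_mode="floor" implements floor division.
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3., -4.])
```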

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-14 08:56:25 +05:30
Bob Adolf 0a7ba62438
Allow torch-mlir to support PyTorch extensions. (#895)
PyTorch allows new operators to be registered dynamically in modules.
Torch-mlir already makes it fairly straightforward to add support for
new operators, and this commit just extends that support to allow new
PyTorch ops to come from an external module.

This does *not* allow ops to be dynamically loaded into torch-mlir.
Torch-mlir must still be compiled with support built-in.

Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which
registers a demonstration op. It will not be imported by default when
importing torch_mlir. It's strictly for testing and documentation.

Adds an end-to-end test for the `torch_mlir_custom_op_example::identity` op.

With all these changes, we should now be actively testing PyTorch extension
support with all future patches.
2022-06-13 14:51:30 -07:00
Sean Silva e1b38e74dd Use upstream shape functions directly.
Now that upstream exposes them nicely, we can use them.

I noticed that we had added stuff into the upstream_shape_helpers.py
file (which was supposed to stay pristine), so some more shape functions
need to be upstreamed.

Going forward, all shape functions should be upstreamed similar to
https://github.com/pytorch/pytorch/pull/76889 instead of added in this
file.
2022-06-07 11:15:03 -07:00
Vivek Khandelwal b95b3d844d [MLIR][TORCH] Add E2E support for aten.div.Tensor_mode op
This commit adds lowering of `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.
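
The rounding modes of the op, illustrated in PyTorch:

```
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3., -3.])
print(torch.div(a, b, rounding_mode=None))     # true division: tensor([ 3.5000, -3.5000])
```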

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:44 +05:30
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.
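
The decomposition, written out in PyTorch:

```
import torch

inp = torch.rand(2, 3, 5)
b1, b2 = torch.rand(2, 3, 4), torch.rand(2, 4, 5)
beta, alpha = 0.5, 2.0
# baddbmm(inp, b1, b2) == beta * inp + alpha * bmm(b1, b2)
manual = beta * inp + alpha * torch.bmm(b1, b2)
assert torch.allclose(manual, torch.baddbmm(inp, b1, b2, beta=beta, alpha=alpha))
```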

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:28 +05:30
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes `aten.clamp_min` and `aten.clamp_max` op
into `aten.clamp` op.
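
The decomposition, checked in PyTorch:

```
import torch

x = torch.tensor([-2.0, 0.0, 2.0])
assert torch.equal(torch.clamp_min(x, 0.0), torch.clamp(x, min=0.0))
assert torch.equal(torch.clamp_max(x, 1.0), torch.clamp(x, max=1.0))
```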

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-06 11:52:29 +05:30
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Ashay Rane 7fdc1cff02
build: remove manual changes to ShapeLibrary.cpp (#894)
The patch that bumped up the LLVM tag also made manual fixes to the code in
`ShapeLibrary.cpp`.  However, since that file is generated by the
`update_shape_lib.sh` script, its contents were reverted each time the
script was run.  This patch fixes the problem by removing the manual
changes to that file.
2022-06-01 14:11:29 -07:00
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
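
The supported cases, illustrated in PyTorch (per the note above, ord = 0 and ±inf are not yet handled):

```
import torch

x = torch.tensor([3.0, -4.0])
print(torch.linalg.vector_norm(x, ord=1))  # tensor(7.)
print(torch.linalg.vector_norm(x, ord=2))  # tensor(5.)
```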
2022-05-19 15:48:15 -07:00
Ashay Rane bb52a460cb
mlir: bump llvm tag to 5380e3 (#856)
In addition to updating the llvm-project submodule, this patch also:

1. updates shape functions and tests so that `func` and `call`
   operations refer to the `func` dialect
2. avoids duplicate registration of dialects
2022-05-16 12:54:35 -07:00
Ramiro Leal-Cavazos 96f90efd16
Add shape info to `rand_like` + support for `dtype` flag (#851)
The op `aten.rand_like` was missing a shape function, unit tests, and
the `dtype` argument was being ignored in its decomposition. This
commit fixes all three things.
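
The fixed behavior, sketched in PyTorch (the `dtype` argument now takes effect):

```
import torch

x = torch.zeros(2, 3, dtype=torch.float64)
y = torch.rand_like(x, dtype=torch.float32)
print(y.shape, y.dtype)  # torch.Size([2, 3]) torch.float32
```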
2022-05-12 16:00:59 -07:00
Prashant Kumar 12b3af70d3 [TORCH] Add folding of aten.detach op.
The `aten.detach` op is folded to return its input operand, since it is
effectively an identity function (it only drops the gradient-tracking attribute).
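
The folded semantics, illustrated in PyTorch:

```
import torch

x = torch.rand(3, requires_grad=True)
y = x.detach()
# Same values, but gradient tracking is dropped.
assert torch.equal(x, y) and not y.requires_grad
```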
2022-05-10 21:54:45 +05:30
Vivek Khandelwal c0634bc996 [MLIR][TORCH] Add E2E support for aten.to.dtype_layout op
This commit decomposes `aten.to.dtype_layout` op into `aten.to.dtype` op.
This commit also fixes the formatting for the file type_conversion.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-03 12:48:58 +05:30
gpetters94 c4dcdd1e34
Add aten.flip (#817) 2022-05-02 16:01:15 -04:00
Vivek Khandelwal 8a06419980 [MLIR][TORCH] Add E2E support for aten.masked_fill.Scalar op
This commit adds lowering of `aten.masked_fill.Scalar` op.
This commit also fixes the formatting of the file constant_alloc.py.
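
Op semantics, sketched in PyTorch:

```
import torch

x = torch.zeros(2, 2)
mask = torch.tensor([[True, False], [False, True]])
print(x.masked_fill(mask, 3.0))
```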

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 22:27:33 +05:30
Vivek Khandelwal 4b11284440 [MLIR][TORCH] Add E2E support for aten.avg_pool2d op
This commit adds lowering of `aten.avg_pool2d` op.
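
A minimal PyTorch example of the op:

```
import torch

x = torch.rand(1, 1, 4, 4)
# Average pooling with a 2x2 window halves each spatial dim here.
print(torch.nn.functional.avg_pool2d(x, kernel_size=2).shape)  # (1, 1, 2, 2)
```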

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 12:31:44 +05:30
Prateek Gupta 81ee5bb58c [TORCH][MLIR] Fix ConstantPad2dStaticModule test.
This commit fixes the `ConstantPad2dStaticModule` test case by adding
the lowering of the `aten.pad` operation. Previously the test case
mapped to the `aten.constant_pad_nd` operation.
The `aten.pad` op now decomposes into the `aten.constant_pad_nd` operation.
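
The decomposition path, sketched in PyTorch (`torch.nn.functional.pad` in constant mode):

```
import torch

x = torch.ones(2, 2)
# Constant-mode pad: pads the last dim by one on each side.
print(torch.nn.functional.pad(x, (1, 1), mode="constant", value=0.0))
```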

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 21:57:01 +05:30
Vivek Khandelwal e57e1968bc [MLIR][TORCH] Add E2E support for aten.index_put.hacked_twin op
This commit decomposes `aten.index_put.hacked_twin` op into
`valsem.aten.index_put_impl` op.
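
Op semantics, sketched via the out-of-place `index_put` at the PyTorch level:

```
import torch

x = torch.zeros(5)
idx = torch.tensor([1, 3])
vals = torch.tensor([1.0, 2.0])
# Writes vals at the given indices, out of place.
print(x.index_put((idx,), vals))  # tensor([0., 1., 0., 2., 0.])
```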

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 13:41:47 +05:30
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.
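
A PyTorch sketch of the op:

```
import torch

x = torch.rand(1, 1, 4, 4)
# return_indices=True routes through max_pool2d_with_indices.
out, idx = torch.nn.functional.max_pool2d(x, 2, return_indices=True)
print(out.shape, idx.shape)  # both torch.Size([1, 1, 2, 2])
```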

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
Ashay Rane a893c7d5cf
Add shape transfer function and lowering to linalg for aten.neg (#759)
* shape: add shape transfer function for aten.neg

Prior to this patch, the list of shape transfer functions did not
include `aten.neg`, which resulted in errors like below.

```
error: unsupported by backend lowering: tensor with unknown rank or dtype
note: see current operation: %0 = "torch.aten.neg"(%arg0) :
  (!torch.vtensor<[256,256],f32>) -> !torch.vtensor<*,f32>
note: this is likely due to a missing shape transfer function in shape_lib_gen.py
```

This patch fixes the problem by adding a shape transfer function to
reflect the point-wise nature of this operation.

* linalg: add translation of aten.neg operation

This patch adds a translation rule to lower `aten.neg` operations on
tensors to an `arith.negf` operation wrapped inside a `linalg.generic`
operation.  This patch also adds a rudimentary test.
2022-04-15 11:11:22 -07:00
Vivek Khandelwal 1bccb4fc8a [MLIR][TORCH] Add E2E support for aten::max_pool2d_with_indices_backward op
This commit adds lowering of `aten::max_pool2d_with_indices_backward` op.

This commit also fixes formatting issues in basic.py.
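
A PyTorch sketch that should exercise this backward op via autograd (assuming the usual dispatch through `max_pool2d_with_indices`):

```
import torch

x = torch.rand(1, 1, 4, 4, requires_grad=True)
out = torch.nn.functional.max_pool2d(x, 2)
out.sum().backward()  # runs the max-pool backward kernel
print(x.grad.shape)   # torch.Size([1, 1, 4, 4])
```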

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-14 21:46:47 +05:30
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
Prashant Kumar fb8cb0c5f3 [LINALG] Add the lowering of `aten.ne.Scalar` op
The lowering of `aten.ne.Scalar` op has been added to
the linalg backend.
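
Op semantics in PyTorch:

```
import torch

x = torch.tensor([0, 1, 2])
# Elementwise "not equal to scalar".
print(torch.ne(x, 1))  # tensor([ True, False,  True])
```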
2022-04-05 21:07:28 +05:30
Gaurav Shukla 969785d1b6 [LINALG] Add E2E support for `aten.where.[Scalar|ScalarSelf|ScalarOther]` ops
This commit decomposes different variants of `aten.where.*` op into
`aten.where.Self` op. It covers `aten.where.Scalar`,
`aten.where.ScalarSelf` and `aten.where.ScalarOther` ops.
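
The scalar variants, sketched in PyTorch:

```
import torch

cond = torch.tensor([True, False, True])
x = torch.tensor([1.0, 2.0, 3.0])
# where.ScalarOther: a scalar in the "other" position.
print(torch.where(cond, x, 0.0))  # tensor([1., 0., 3.])
```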

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-30 20:36:48 +05:30
Vivek Khandelwal 2597c481f6 [MLIR][TORCH] Add E2E support for aten.new_empty op
This commit decomposes `aten.new_empty` op into `aten.empty.memory_format` op.

This commit also makes a dtype fix to the constant-tensor-allocation-like ops.
Earlier, the dtype for the result was inferred from the result type; now it is
computed as per the original definition of the op.
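
Op semantics, sketched in PyTorch (`new_empty` inherits dtype unless overridden):

```
import torch

x = torch.rand(2, 3, dtype=torch.float64)
y = x.new_empty((4, 2))
print(y.shape, y.dtype)  # torch.Size([4, 2]) torch.float64
```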

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-30 13:21:01 +05:30
Maksim Levental 25ba51b2af
This commit decomposes aten._reshape_alias op into aten.view op. (#690) 2022-03-28 23:54:28 -05:00
Gaurav Shukla 02b6d04eb4 [LINALG] Add E2E support for `aten.zero_` op
This commit adds decomposition of `aten.zero_` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-25 12:46:50 +05:30
Qiang Fu f7c7bb800c
Add non-default dtype support for a few elementwise math ops. (#687)
* fix type inference
* fix Torch2Linalg conversion
* add test cases
2022-03-23 13:35:43 -07:00
Vivek Khandelwal 5b9bdfaf3f [MLIR][TORCH] Add E2E support for aten._to_copy op
This commit decomposes `aten._to_copy` op into
`valsem.aten.copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 4c0cd5c23d [MLIR][TORCH] Add E2E support for aten.expand_as op
This commit decomposes `aten.expand_as` op into `aten.broadcast_to` op.
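
The decomposition, checked in PyTorch:

```
import torch

x = torch.tensor([[1.0], [2.0]])
y = torch.zeros(2, 3)
assert torch.equal(x.expand_as(y), torch.broadcast_to(x, y.shape))
```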

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 12:47:39 +05:30
Prateek Gupta 7256c9e395 [TORCH][MLIR] Fix the return types of `aten.native_layer_norm`.
This commit fixes the 2nd and 3rd return types of the `aten.native_layer_norm`.
Previously the mean and rSTD were returned with reduction dims removed.
This commit fixes this and keeps the reduction dims of the results.
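
A sketch of the fixed return shapes, assuming the `torch.native_layer_norm` binding is exposed in your build:

```
import torch

x = torch.rand(2, 3, 4)
out, mean, rstd = torch.native_layer_norm(x, [4], None, None, 1e-5)
# Mean and rstd keep the reduction dims (size 1), per this fix.
print(mean.shape, rstd.shape)  # torch.Size([2, 3, 1]) for both
```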

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-17 12:08:32 +05:30
Vivek Khandelwal 8da7d90611 [MLIR][TORCH] Add E2E support for aten.index_put op
This commit decomposes `aten.index_put` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-16 22:02:02 +05:30
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Sean Silva 84a9693006 Elide `!torch.` prefix in nested dialect types.
This leads to much more succinct types in many cases:

```
!torch.list<!torch.int>
!torch.list<int>

!torch.tuple<!torch.list<!torch.int>, !torch.list<!torch.int>>
!torch.tuple<list<int>, list<int>>

!torch.optional<!torch.list<!torch.int>>
!torch.optional<list<int>>

!torch.list<!torch.list<!torch.list<!torch.tensor>>>
!torch.list<list<list<tensor>>>
```

I would like to take this further and allow omitting the `!torch.`
prefix in all cases, but that's harder -- for example, we currently use
`FuncOp` for functions, and so I don't think we can customize the
printing there. It seems like it will be a longer road to getting that
level of customization.
2022-03-15 17:24:08 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
1. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
1. Code and tests for SimplifyShapeCalculations.
1. shape_lib_gen.py
1. Code and tests for new RefineTypes pass.
1. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
1. New ReadOnly trait inferred from the registry.
1. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
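
To make the design concrete, a hypothetical sketch of what a shape function looks like in this style (TorchScript'able Python; the names are illustrative, not actual registry entries from shape_lib_gen.py):

```
from typing import List

def unary(self: List[int]) -> List[int]:
    # Pointwise ops: the output shape equals the input shape.
    return list(self)

print(unary([2, 3, 4]))  # [2, 3, 4]
```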
2022-03-15 12:41:58 -07:00