Commit Graph

90 Commits (40efd2cb8e7b8160db106c97175001d2f5fc8931)

Author SHA1 Message Date
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for the following
operations had to be added or improved (a brief usage sketch follows the list):
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
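
For reference, a minimal eager-mode sketch of what `aten::constant_pad_nd` computes, assuming standard PyTorch padding semantics (this illustrates the op only, not the folding or lowering code):

```python
import torch

# aten::constant_pad_nd pads a tensor with a constant value; the pad list
# gives (left, right) pairs starting from the last dimension.
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
padded = torch.ops.aten.constant_pad_nd(x, [1, 1, 2, 0], 0.0)
print(padded.shape)  # torch.Size([4, 5]): two rows added on top, one column on each side
```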
2022-01-11 10:25:25 -05:00
Vivek Khandelwal ca662dc9cc [MLIR][TORCH] Add E2E support for aten.threshold, aten.threshold_backward op
This commit adds lowering of the `aten.threshold` and
`aten.threshold_backward` ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-10 11:56:56 +05:30
Yi Zhang 732a76f45c Make broadcasting result shape more static
This involves the following two parts:
- Change refine types to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor during conversion to linalg.
2022-01-06 18:39:27 -05:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of the `aten.arange.start_step` op and
decomposes `aten.arange` and `aten.arange.start` into the
`aten.arange.start_step` op.
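
As an illustration of the decomposition, a hedged eager-mode sketch of the equivalence (not the conversion code itself):

```python
import torch

# aten.arange and aten.arange.start are special cases of aten.arange.start_step:
# arange(end) == arange(0, end) == arange(0, end, 1).
a = torch.arange(5)        # aten.arange
b = torch.arange(0, 5)     # aten.arange.start
c = torch.arange(0, 5, 1)  # aten.arange.start_step
assert torch.equal(a, b) and torch.equal(b, c)
```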

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
Prashant Kumar 9e1ecf2c0b Add Add and Sub scalar op conversions.
`aten.add.Scalar` and `aten.sub.Scalar` op conversions have been added.
The changes have been made as part of the `-convert-torch-to-linalg` pass.
2021-12-22 21:41:49 +05:30
Yi Zhang d8ba68119e Lower aten::view with linalg.collapse and linalg.expand
We only handle the expanding OR collapsing cases; we do not handle
expanding AND collapsing happening at the same time, or cases that are
neither collapsing nor expanding, like a view of [2,3] for a
3x2 tensor.

It is assumed that if a shape list element is obtained from
`aten.size(tensor, dim)`, the corresponding dim is not split or
collapsed. This assumption makes it easier to deal with dynamic shapes.
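
A hedged eager-mode illustration of the three cases described above (the lowering handles the first two reassociation patterns; the third is rejected):

```python
import torch

flat = torch.arange(6)        # shape [6]
expanded = flat.view(2, 3)    # [6] -> [2, 3]: pure expand
collapsed = expanded.view(6)  # [2, 3] -> [6]: pure collapse

# [3, 2] -> [2, 3] is neither a pure expand nor a pure collapse; eager
# PyTorch accepts it, but this lowering does not.
neither = torch.arange(6).view(3, 2).view(2, 3)
```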
2021-12-16 17:58:20 -05:00
Gaurav Shukla eddc09aa55 [TORCH][MLIR] Add E2E support for `aten.eq` and `aten.lt` ops
- Added E2E support for `aten.eq.Tensor` and `aten.lt.Tensor` ops. Both
  the operands are expected to be of the same type, i.e., type promotion
  is not addressed as a part of this commit.
- Added E2E support for `aten.eq.Scalar` and `aten.lt.Scalar` ops.
  Tensor-to-Scalar operand type promotion has not been handled in this
  commit (see the sketch after this list).
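
A minimal sketch of the op variants added here, assuming matching dtypes as stated above:

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([3.0, 2.0, 1.0])
torch.eq(a, b)    # aten.eq.Tensor; operands share the same dtype (no promotion)
torch.lt(a, b)    # aten.lt.Tensor
torch.eq(a, 2.0)  # aten.eq.Scalar
torch.lt(a, 2.0)  # aten.lt.Scalar
```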

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-16 18:47:22 +05:30
Daniel Garvey 396ab35c9d
Small fixes for slice edge cases (#476) 2021-12-15 15:54:41 -06:00
Gaurav Shukla d13bb0e5c1 [TORCH][MLIR] Fix C++17 extension warning
The existing implementation of `ConvertConstantTensorAllocOp<>` requires
the C++17 feature `if constexpr ()`. This commit removes the use of that
feature so that the implementation also builds with earlier C++ standards.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-15 23:35:06 +05:30
Prashant Kumar ab81f871e4 Add aten.tensor.int and aten.tensor.float op lowerings.
Add the required lowerings and correct the test cases.
These ops produce zero-d tensors; refine types incorrectly marked them
as producing a 1-d tensor of size 1.
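
The zero-d vs. 1-d distinction the fix is about, illustrated in eager mode (a sketch of tensor semantics only, not the refine-types code):

```python
import torch

# aten.tensor.int / aten.tensor.float build zero-d (rank-0) tensors,
# not 1-d tensors of size 1.
torch.tensor(3).shape    # torch.Size([])  -> zero-d
torch.tensor(3.0).shape  # torch.Size([])  -> zero-d
torch.tensor([3]).shape  # torch.Size([1]) -> 1-d of size 1 (a different thing)
```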
2021-12-15 17:21:34 +05:30
Prashant Kumar 6dabf185f5 Add support for int types in gtScalar op.
Support for integer types in the gtScalar op has been added.
The code shares the same logic as the gtTensor op and could be merged,
which is noted as a TODO.
2021-12-14 01:29:52 +05:30
Gaurav Shukla 8d4879feb0 [TORCH][MLIR] Add and templatize lowering of [`aten.zeros|aten.ones|aten.empty`] ops
- Templatize `aten.zeros` and `aten.ones` ops lowering.
- Add E2E support for `aten.empty` op.
- Add Integer type support in `aten.mul.Scalar` op lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-14 00:07:11 +05:30
Prashant Kumar 528354de84 Add `aten.gt.Tensor` op
The `aten.gt.Tensor` op has been added to the torch dialect, and the
lowering of the op to the linalg dialect has been implemented.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-13 00:08:52 +05:30
Gaurav Shukla a778f990e9 [TORCH][MLIR] Add E2E support for `aten.ceil` op
This commit adds lowering of `aten.ceil` op as a part of element-wise
ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-12 01:15:47 +05:30
Prateek Gupta cfc8de36f8
[MLIR][TORCH] Add E2E support for `aten.native_layer_norm`. (#470)
This commit adds support for the aten.native_layer_norm operation. Here
the previous code for aten.layer_norm is tweaked a little bit to
accommodate both the mean and variance values along with the layer norm
value. This commit also adds decomposition of aten.layer_norm into
aten.native_layer_norm, which was previously getting lowered directly
to linalg.
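
For reference, a hedged sketch of the op's multi-result contract in eager mode (normalized output plus the statistics mentioned above; exact return semantics follow upstream PyTorch):

```python
import torch

x = torch.randn(2, 5, 10)
weight = torch.ones(10)
bias = torch.zeros(10)

# aten.native_layer_norm returns the normalized tensor together with the
# per-sample statistics computed over the normalized dimensions.
out, mean, rstd = torch.ops.aten.native_layer_norm(x, [10], weight, bias, 1e-5)
print(out.shape, mean.shape, rstd.shape)
```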

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-12-10 19:06:19 +05:30
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of the `aten.squeeze.dim` op into the
`linalg.TensorCollapseShape` op. Here, the dim-th dimension of the
input tensor must not be dynamic.
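
A brief eager-mode sketch of the op semantics being lowered, assuming the static-dimension restriction noted above:

```python
import torch

x = torch.zeros(2, 1, 3)
torch.squeeze(x, 1).shape  # torch.Size([2, 3]): the size-1 dim is collapsed away
torch.squeeze(x, 0).shape  # torch.Size([2, 1, 3]): a non-unit dim is left untouched
```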

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
Vivek Khandelwal 8130354c09 [MLIR][TORCH] Add E2E support for aten.index_select op
This commit adds lowering of `aten.index_select` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-09 23:13:36 +05:30
Vivek Khandelwal 0a0a1b4476 [MLIR][Torch] Resolve styling issues related to aten zeros/ones op
https://github.com/llvm/torch-mlir/pull/464#discussion_r765065092

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-09 17:42:28 +05:30
Gaurav Shukla f34eb66124 [TORCH][MLIR] Add E2E support for [`aten.gt.Scalar`|`aten.where.self`]
This commit adds lowering of `aten.gt.Scalar` and `aten.where.self` as a
part of element-wise ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-09 12:47:10 +05:30
Liam Fitzpatrick 2414bdb1f0 Linalg lowering for aten.conv2d(bias=True)
Previously, aten.conv2d was only lowered if there was no bias.
The lowering is now extended to support bias.
2021-12-08 14:44:36 -08:00
Vivek Khandelwal 9958cf08b6 [MLIR][TORCH] Add E2E support for aten.zeros op
This commit adds lowering of `aten.zeros` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-08 22:42:33 +05:30
Prashant Kumar 977b1b03ea Add aten::nll_loss_forward op lowering.
The op lowering has been added as part of the `torch-lower-to-linalg`
pass. It takes care of ignore_index, but the weight and reduction
operands are still to be accounted for.
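
A hedged eager-mode sketch of the behavior covered so far (ignore_index is honored; weight and non-default reduction are the parts noted as still pending):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)
log_probs = F.log_softmax(logits, dim=1)
target = torch.tensor([0, 2, 1, 2])

# Entries whose target equals ignore_index do not contribute to the loss.
loss = F.nll_loss(log_probs, target, ignore_index=2)
```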

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-07 17:11:08 +05:30
Daniel Garvey b0cb49ca93
Add scalar type promotion for mul and div (#454) 2021-12-03 13:51:25 -06:00
Yi Zhang 24bc06fc8d Fix compilation warnings. 2021-12-03 11:44:32 -05:00
Daniel Garvey a52aded0b9
Add lowering for slice and selectInt (#398) 2021-12-02 22:09:21 -06:00
Vivek Khandelwal 46a2189a41 [MLIR][TORCH] Add E2E support for aten.bitwise_and.tensor op
This commit adds lowering of `aten.bitwise_and.tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-02 21:06:15 +05:30
Vivek Khandelwal 46a0668b3b [MLIR][TORCH] Add E2E support for aten.mean and aten.numel op.
This commit adds lowering of `aten.mean` and `aten.numel` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-02 11:51:13 +05:30
Ramiro Leal-Cavazos e6675a50d3 Add support for dtype argument in reduction ops
Many reduction ops take as an argument an optional output dtype that
can change the type of the input tensor before the reduction is
performed. This commit adds support for the optional dtype flag that
had been previously ignored.

Test:
/tools/torchscript_e2e_test.sh -f 'ReduceSumDtype'
/tools/torchscript_e2e_test.sh -f 'ReduceSumDImIntListDtype'
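
What the dtype flag does, sketched in eager mode:

```python
import torch

x = torch.ones(4, dtype=torch.int32)
torch.sum(x).dtype                       # torch.int64 (default integer accumulation)
torch.sum(x, dtype=torch.float64).dtype  # torch.float64: input is cast before reducing
```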
2021-11-30 12:53:59 -05:00
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of the `aten.squeeze` op into the
`linalg.TensorCollapseShape` op. Size-1 dynamic dimensions are not
handled as a part of this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
ds1231h 9ad5954e41 aten.abs and aten.reciprocal to linalg 2021-11-30 11:31:55 -05:00
Prashant Kumar 36afa4a4d3 Add aten.fill.Scalar op lowering
The lowering of aten.fill.Scalar has been added.
The changes have been made as part of the -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-30 21:12:15 +05:30
Daniel Garvey 539511c19b
Add dropout op (#436)
Co-authored-by: dan <dan@nod-labs.com>
2021-11-29 12:30:03 -06:00
Liam Fitzpatrick 7616d28ce1 Add leakyrelu support 2021-11-27 23:04:46 +05:30
Vivek Khandelwal 8d8d2c2fb8 [MLIR][TORCH] Add E2E support for aten.div.Scalar
This commit adds lowering of `aten.div.Scalar`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-11-24 11:17:40 +05:30
Gaurav Shukla 663fc1ef51 [MLIR][TORCH] Add E2E support for [`aten.mul.Scalar`|`aten.addmm`]
This commit adds lowering of `aten.mul.Scalar` and also adds
decomposition of `aten.addmm` to `aten.mul.Scalar`, `aten.add.Tensor`
and `aten.mm` ops.
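
The decomposition described above corresponds to the standard addmm identity; a hedged eager-mode sketch:

```python
import torch

bias = torch.randn(2, 4)
m1 = torch.randn(2, 3)
m2 = torch.randn(3, 4)
alpha, beta = 2.0, 0.5

# aten.addmm(bias, m1, m2, beta, alpha) == beta * bias + alpha * (m1 @ m2),
# i.e. aten.mul.Scalar + aten.mm + aten.add.Tensor.
ref = beta * bias + alpha * torch.mm(m1, m2)
out = torch.addmm(bias, m1, m2, beta=beta, alpha=alpha)
assert torch.allclose(out, ref)
```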

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-18 22:26:41 +05:30
Prashant Kumar f8ff6d84f4 Support aten::linear with rank 3 inputs
Now aten::linear supports rank-3 inputs. This is a fix
for the upcoming BERT inference task. The correct way would be
to support broadcasting in the `aten.matmul` op and decompose
`aten.linear` into the right ops.
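
A minimal sketch of the rank-3 case now accepted (the BERT-style [batch, seq, features] input):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 4, 8)     # rank-3 input: [batch, seq, in_features]
weight = torch.randn(16, 8)  # [out_features, in_features]
bias = torch.randn(16)

out = F.linear(x, weight, bias)
print(out.shape)             # torch.Size([2, 4, 16])
```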
2021-11-18 22:15:04 +05:30
Prateek Gupta 146f109152 [NFC] Cleanup code for aten.gelu_backward operation.
This commit adds minor non-functional changes to the aten.gelu_backward
operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-18 11:24:04 -05:00
Prateek Gupta ecf78b9849
[TORCH][MLIR] Add E2E support for `aten.gelu_backward` operation. (#418)
This commit adds new operation `aten.gelu_backward` in the aten
dialect and adds lowering of this operation from aten to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-17 14:59:38 +05:30
Yi Zhang 53733933a4 Update llvm upstream to 0b17336f793108a7b10c3fa913039144ef1d0f61
Update AsmPrinter/Parser and MatchAndRewrite
2021-11-16 13:04:51 -05:00
Ramiro Leal-Cavazos a2392a0f19 Fix bug in handling of pin_memory in AtenOnesOp conversion
This commit fixes a bug with the way ConvertAtenOnesOp was matching on
the pin_memory bool argument, which always resulted in a failed match.
2021-11-12 11:38:25 -05:00
George Petterson 2764e86f02 Add Rsqrt 2021-11-09 11:08:28 -05:00
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp` and `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing (sketched below).
- Added i32 memref return/arg type to support e2e testing.
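
The kind of type promotion now exercised by the e2e tests, sketched in eager mode (the helper itself is C++; this only illustrates the expected numerics):

```python
import torch

a = torch.ones(3, dtype=torch.int32)
b = torch.ones(3, dtype=torch.float32)

# aten.add.Tensor / aten.sub.Tensor promote mixed operand dtypes;
# int32 combined with float32 yields float32.
(a + b).dtype  # torch.float32
(a - b).dtype  # torch.float32
```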
2021-11-08 17:50:52 -05:00
George Petterson e23cabf3a9 Add log2 2021-11-08 16:19:59 -05:00
George Petterson f41958037a Add NumToTensor 2021-11-08 15:56:52 -05:00
Prateek Gupta 18e8806b14 [TORCH][MLIR] Add E2E support for aten::to.dtype.
This commit adds end-to-end support for AtenToDtypeOp from aten
to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-08 12:56:03 -05:00
Wang Kangyu 4bb9b44775 Add lowering of "aten.pow.Tensor_Scalar" op
Add e2e support for torch.pow(Tensor, Float)
2021-11-08 09:19:50 -08:00
Wang Kangyu b33543af85 Add lowering of aten.floor op 2021-11-06 17:31:44 -04:00
nodlabs 5ff823ace9 lowered Sqrt to linalg
reused clang-format, as changes got deleted
2021-11-06 11:29:46 -04:00
Prashant Kumar 127c7d8e27 Add lowering of `torch.log` op
The lowering of `torch.log` op has been added.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-02 21:18:00 +05:30
George Petterson 6dde5b347e Add rsub 2021-11-02 09:56:48 -04:00