Commit Graph

411 Commits (3db2197ef57a379558fad8c9195e3440bcf82393)

Author SHA1 Message Date
Jae Hoon (Antonio) Kim fe784fd900
Add Support for aten::scatter_add (#906) 2022-06-06 15:02:45 -04:00
Jae Hoon (Antonio) Kim 8a1839a17e
Add support for aten::arange.start_out (#905) 2022-06-06 15:02:27 -04:00
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes the `aten.clamp_min` and `aten.clamp_max` ops
into the `aten.clamp` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-06 11:52:29 +05:30
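For context on the decomposition above, the PyTorch-level semantics can be summarized with a small illustrative check (plain PyTorch, not the torch-mlir rewrite pattern itself):

```python
import torch

x = torch.randn(4, 4)

# aten.clamp_min(x, min) behaves like aten.clamp(x, min=min) with no max.
assert torch.equal(torch.clamp_min(x, -0.5), torch.clamp(x, min=-0.5))

# aten.clamp_max(x, max) behaves like aten.clamp(x, max=max) with no min.
assert torch.equal(torch.clamp_max(x, 0.5), torch.clamp(x, max=0.5))
```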
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Henry Tu 650f5a5008
Added support for native_dropout_backward (#892) 2022-06-03 14:08:51 -04:00
Henry Tu b7082a8d4e
Added support for native_dropout (#891) 2022-06-03 14:05:57 -04:00
Henry Tu a635fd2287
Added support for native_batch_norm_backward (#890) 2022-06-03 13:49:02 -04:00
Henry Tu bfe8ff4b42
Added support for embedding_dense_backward (#889) 2022-06-03 13:33:43 -04:00
Henry Tu a29903dfc8
Added support for native_layer_norm_backward (#888) 2022-06-03 13:15:23 -04:00
Vidush Singhal 0a913bc904
Add E2E support for AtenAllBoolOp (#874) 2022-06-01 18:20:25 -07:00
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of the `aten.adaptive_avg_pool2d` op into
the `aten.avg_pool2d` op. The current decomposition only supports cases where
the input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
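Since the decomposition above only handles the case where the requested output size matches the input's spatial size, the covered behavior reduces to an average pool with a 1x1 window; a minimal PyTorch-level illustration of that equivalence:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 7, 7)

# When output size == input spatial size, the equivalent avg_pool2d kernel is
# input_size // output_size == 1 with stride 1, i.e. an identity-like pool.
out_adaptive = F.adaptive_avg_pool2d(x, (7, 7))
out_avg = F.avg_pool2d(x, kernel_size=1, stride=1)
assert torch.allclose(out_adaptive, out_avg)
```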
Ramiro Leal-Cavazos b76c8c82dc
Emit `aten.unsqueeze` with mutating variants (#873)
The op `aten.unsqueeze` has a mutating variant. This commit adds
support for that variant.
2022-05-26 19:19:38 -05:00
Vivek Khandelwal 56e77d4213 [MLIR][TORCH] Add E2E support for aten.Bool.[float|int] op
This commit adds lowering of the `aten.Bool.float` and `aten.Bool.int` ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 21:18:34 +05:30
Vivek Khandelwal 014a6d16c7 [MLIR][TORCH] Add E2E support for aten.any.bool op
This commit adds lowering of `aten.any.bool` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 17:24:28 +05:30
Vivek Khandelwal bc9b2156e3 [MLIR][TORCH] Add E2E support for aten.sqrt.int op
This commit adds lowering of `aten.sqrt.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 16:50:39 +05:30
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
2022-05-19 15:48:15 -07:00
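For reference, the reduction being lowered is the usual vector p-norm: for finite, nonzero `ord`, raise the absolute values to the power p, sum, and take the p-th root. A small PyTorch sketch of that semantic (the commit notes that ord = 0 and +/-inf are not yet supported):

```python
import torch

x = torch.randn(5, 4)
p = 3.0

reference = torch.linalg.vector_norm(x, ord=p)
# Equivalent reduction for finite, nonzero ord: (sum(|x|^p))^(1/p).
manual = x.abs().pow(p).sum().pow(1.0 / p)
assert torch.allclose(reference, manual)
```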
Vivek Khandelwal c69a1e5688 [MLIR][TORCH] Add E2E support for ScalarImplicit, Int.Scalar op
This commit adds lowering of the `aten.ScalarImplicit` and `aten.Int.Scalar` ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-10 22:40:49 +05:30
Prashant Kumar 12b3af70d3 [TORCH] Add folding of aten.detach op.
The `aten.detach` op is folded to its first operand, since it is effectively an
identity function (it only drops the gradient-tracking `requires_grad` attribute).
2022-05-10 21:54:45 +05:30
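The folding relies on `detach` being a value-level identity; only autograd tracking changes, as this small PyTorch check illustrates:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x.detach()

# Same values (and, in eager PyTorch, shared storage)...
assert torch.equal(x, y)
# ...but the detached result no longer tracks gradients.
assert x.requires_grad and not y.requires_grad
```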
Yi Zhang 28be6511d2 Fix type promotion code for scalar only operations
Fix the type promotion code for scalar-only operations to return a
Torch type, which is the type tracked in ValueKnowledge.scalarType.

- Fix `getPromotedResultScalarType` to return a Torch type.
- Add `getBuiltInTypeForTorchScalar` helper to convert a scalar type
to a builtin type before passing it to the next-level type promotion
helper `updateResultTypeState`.
- Add `setScalarType` helper to make setting ValueKnowledge.scalarType
  easier.
2022-05-07 10:37:21 -04:00
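The behavior being modeled here is ordinary scalar type promotion: an `int` scalar combined with a `float` scalar promotes to `float`. A minimal TorchScript example of that rule (illustrative only, not the ValueKnowledge code itself):

```python
import torch

@torch.jit.script
def scalar_add(a: int, b: float) -> float:
    # Mixing an int scalar with a float scalar promotes the result to float.
    return a + b

result = scalar_add(2, 0.5)
assert isinstance(result, float) and result == 2.5
```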
Vivek Khandelwal b20679e1b8 [MLIR][TORCH] Modify aten::dropout op description
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-06 11:15:52 +05:30
Vivek Khandelwal 96fabc0036 [MLIR][TORCH] E2E support for [ge|ceil].float, [ge|ne|gt].float_int op
This commit adds lowering of the `aten.ge.float`, `aten.ge.float_int`,
`aten.ne.float_int`, `aten.gt.float_int`, and `aten.ceil.float` ops.
This commit also fixes formatting for the files scalar.py and scalar_comparison.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-05 21:48:35 +05:30
Kristof Denolf e682b1d0f3 changed option name to decompose-complex-ops 2022-05-05 00:38:51 -07:00
Kristof Denolf 5243638e33 add no-decompose option 2022-05-05 00:38:51 -07:00
Yi Zhang 9f7264a7a4 Add support for scalar type propagation
The main changes are:
- Added `ValueKnowledge.scalarType` to track scalar type information.
- Added `ValueKnowledge.kind` to indicate the value kind.
- Modified the meet and join helper functions. The ValueKnowledge now has
slightly more complicated state, so the meet and join functions need
to look at the `kind` field in addition to the type field.
2022-05-04 16:57:56 -04:00
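To illustrate the idea behind tracking both a kind and a type (a hypothetical Python sketch, not the actual C++ `ValueKnowledge` implementation), a meet over such a lattice has to reconcile the kind first and only then the tracked type:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Knowledge:
    kind: str                # e.g. "Tensor" or "Scalar" (illustrative names)
    dtype: Optional[str]     # known dtype/scalar type, or None if unknown

def meet(a: Knowledge, b: Knowledge) -> Optional[Knowledge]:
    # Incompatible kinds cannot be reconciled.
    if a.kind != b.kind:
        return None
    # An unknown type combines with a known one to the more precise value.
    if a.dtype is None or a.dtype == b.dtype:
        return Knowledge(a.kind, b.dtype if a.dtype is None else a.dtype)
    if b.dtype is None:
        return Knowledge(a.kind, a.dtype)
    return None  # conflicting concrete types

assert meet(Knowledge("Scalar", None), Knowledge("Scalar", "int")) == Knowledge("Scalar", "int")
assert meet(Knowledge("Scalar", "int"), Knowledge("Tensor", "int")) is None
```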
Sean Silva ab5ad7af09 Add tracing support to `torch_mlir.compile`.
This also has a fix for the adjustment of types of TupleConstruct
inputs, which I found when using this new functionality on a model.

Some scenarios in tracing create situations where the output of
TupleConstruct has a more refined type than the inputs.

This introduces a helper `adjustStaticInformationForValues` which
subsumes the `derefineValues` helper and the tensor static information
adjustment we were doing.
2022-05-03 09:08:40 -07:00
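A hedged usage sketch of the tracing path added here (the `use_tracing` flag and call shape are as assumed for this era of the `torch_mlir.compile` API):

```python
import torch
import torch_mlir

class TwoOutputs(torch.nn.Module):
    def forward(self, x):
        # A tuple output exercises the TupleConstruct type-adjustment fix
        # mentioned in the commit message.
        return x + 1, x * 2

example_input = torch.ones(2, 3)
# use_tracing=True routes through torch.jit.trace instead of torch.jit.script.
module = torch_mlir.compile(TwoOutputs(), example_input, use_tracing=True)
print(module)
```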
Vivek Khandelwal c0634bc996 [MLIR][TORCH] Add E2E support for aten.to.dtype_layout op
This commit decomposes `aten.to.dtype_layout` op into `aten.to.dtype` op.
This commit also fixes the formatting for the file type_conversion.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-03 12:48:58 +05:30
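The decomposition above relies on the fact that, for the strided tensors torch-mlir handles, the extra layout argument of `aten.to.dtype_layout` adds nothing beyond the dtype conversion. An illustrative TorchScript check (scripting is used here because the `layout` keyword resolves to the `aten.to.dtype_layout` overload):

```python
import torch

@torch.jit.script
def convert(x: torch.Tensor) -> torch.Tensor:
    # Resolves to aten.to.dtype_layout; for a strided tensor the layout
    # argument is a no-op beyond the dtype cast.
    return x.to(dtype=torch.float64, layout=torch.strided)

x = torch.randn(2, 2)
assert torch.equal(convert(x), x.to(torch.float64))
```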
gpetters94 c4dcdd1e34
Add aten.flip (#817) 2022-05-02 16:01:15 -04:00
Vivek Khandelwal 4b11284440 [MLIR][TORCH] Add E2E support for aten.avg_pool2d op
This commit adds lowering of `aten.avg_pool2d` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 12:31:44 +05:30
Prateek Gupta 81ee5bb58c [TORCH][MLIR] Fix ConstantPad2dStaticModule test.
This commit fixes the `ConstantPad2dStaticModule` test case by adding
the lowering of `aten.pad` operation. Previously the test case
mapped to `aten.constant_pad_nd` operation.
The `aten.pad` op now decomposes into the `aten.constant_pad_nd` operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 21:57:01 +05:30
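The mapping described above mirrors PyTorch's own behavior, where constant-mode `F.pad` corresponds to `constant_pad_nd`; a small illustrative check:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3)
pad = [1, 1, 2, 0]  # (left, right) for the last dim, (top, bottom) for the next

# F.pad with mode="constant" corresponds to aten.constant_pad_nd.
a = F.pad(x, pad, mode="constant", value=0.5)
b = torch.ops.aten.constant_pad_nd(x, pad, 0.5)
assert torch.equal(a, b)
```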
Sean Silva 44c7b181d3 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float op"
This reverts commit 564734b2d7.
2022-04-28 07:49:58 -07:00
Sean Silva eff144c0b7 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float_int op"
This reverts commit 1f102cc400.
2022-04-28 07:49:58 -07:00
Sean Silva 7669ee4e4a Revert "[MLIR][TORCH] Add E2E support for aten.ne.float_int op"
This reverts commit 51dd462592.
2022-04-28 07:49:58 -07:00
Sean Silva 5ef9f501fa Revert "[MLIR][TORCH] Add E2E support for aten.ceil.float op"
This reverts commit 78f5747568.
2022-04-28 07:49:58 -07:00
Vivek Khandelwal e57e1968bc [MLIR][TORCH] Add E2E support for aten.index_put.hacked_twin op
This commit decomposes `aten.index_put.hacked_twin` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 13:41:47 +05:30
Vivek Khandelwal 78f5747568 [MLIR][TORCH] Add E2E support for aten.ceil.float op
This commit adds lowering of `aten.ceil.float` op.
This commit also fixes formatting for the file scalar.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 11:49:35 +05:30
Yi Zhang 86eb493a44 Change to AnyTorch* except for Torch_X ones 2022-04-27 14:18:52 -04:00
Vivek Khandelwal 51dd462592 [MLIR][TORCH] Add E2E support for aten.ne.float_int op
This commit adds lowering of `aten.ne.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 1f102cc400 [MLIR][TORCH] Add E2E support for aten.ge.float_int op
This commit adds lowering of `aten.ge.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 564734b2d7 [MLIR][TORCH] Add E2E support for aten.ge.float op
This commit adds lowering of `aten.ge.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal f5b6c4b601 [MLIR][TORCH] Add E2E support for aten.div.float op
This commit adds lowering of `aten.div.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00
Prashant Kumar 5cdef0213d [LINALG] Bug fix i64 vs i32 type comparison.
Comparing index type instead of integer types solves the problem.
2022-04-22 08:09:58 +05:30
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
Vivek Khandelwal 1bccb4fc8a [MLIR][TORCH] Add E2E support for aten::max_pool2d_with_indices_backward op
This commit adds lowering of `aten::max_pool2d_with_indices_backward` op.

This commit also fixes formatting issues in basic.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-14 21:46:47 +05:30
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
gpetters94 fa0b24a73c
Rename optional list types (#643) 2022-04-07 18:15:51 -04:00
Prashant Kumar 1d5b5a89e8 [LINALG] Add torch.layout information
torch.layout information has been added.
2022-04-07 20:47:49 +05:30
Ramiro Leal-Cavazos 51d4d55f8a
Add support for multi-dim input to `index_put_impl` (#722)
This commit adds support for multi-dimensional tensors as input to the
`_index_put_impl_` op. The support was to some degree already there,
since `ScatterOp` already supports multi-dimensional tensors. This
commit also adds a bit more error checking to `index_put` and
refactors the code for creating `ScatterOp`s to mimic the way one
would make a `Linalg::GenericOp`.
2022-03-31 09:27:21 -07:00
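The behavior being extended corresponds to PyTorch's in-place `index_put_`, which `_index_put_impl_` ultimately models; a small multi-dimensional example of the semantics:

```python
import torch

x = torch.zeros(3, 4)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 3])
values = torch.tensor([10.0, 20.0])

# Scatter `values` into x at positions (0, 1) and (2, 3).
x.index_put_((rows, cols), values, accumulate=False)
assert x[0, 1] == 10.0 and x[2, 3] == 20.0
```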
Gaurav Shukla 969785d1b6 [LINALG] Add E2E support for `aten.where.[Scalar|ScalarSelf|ScalarOther]` ops
This commit decomposes the different variants of the `aten.where.*` op into
the `aten.where.Self` op. It covers the `aten.where.Scalar`,
`aten.where.ScalarSelf`, and `aten.where.ScalarOther` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-30 20:36:48 +05:30
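The decomposition above rewrites the scalar variants in terms of the tensor-only `where` form by materializing the scalar operand; a hedged PyTorch-level illustration (scalar arguments to `torch.where` assume a recent PyTorch release):

```python
import torch

cond = torch.tensor([True, False, True])
other = torch.tensor([1.0, 2.0, 3.0])

# aten.where.ScalarSelf: a scalar "self" operand...
scalar_variant = torch.where(cond, 5.0, other)
# ...is equivalent to the tensor-only form with the scalar broadcast to a tensor.
tensor_variant = torch.where(cond, torch.full_like(other, 5.0), other)
assert torch.equal(scalar_variant, tensor_variant)
```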