Commit Graph

244 Commits (f85ae9c685d6c9f2d43fe60fdad2ad5d3183ec52)

Author SHA1 Message Date
Tanyo Kwok 1ee865983b
[MHLO] fix tensor mode aten.div op pattern (#1160)

See RFC #999
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-06 23:38:06 +08:00
Vivek Khandelwal c129a6de93 [MLIR][TORCH] Add support for dim=None to Aten[Var|Std]DimOp
PyTorch recently added support for `dim=None` in the `torch.var`
(5ca9b2b6fa)
and `torch.std` op (eb0e30e0bc).
This commit adds the corresponding support in torch-mlir.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-08-05 20:28:56 +05:30
Ramiro Leal-Cavazos a7af1fd873
Add support for `dim=None` to `AtenMeanDimOp` (#1129)
PyTorch recently added support for `dim=None` in the `torch.mean`
op (2bfae07a79). This
commit adds the corresponding support in torch-mlir.
2022-08-02 16:08:06 +00:00
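
Both entries above concern `dim=None` reductions. A minimal PyTorch-level sketch of the semantics being supported (requires a recent PyTorch; illustrative only, not the torch-mlir lowering itself):

```python
import torch

x = torch.randn(2, 3, 4)

# dim=None reduces over all dimensions, matching the full reduction.
assert torch.allclose(torch.mean(x, dim=None), x.mean())
assert torch.allclose(torch.var(x, dim=None), x.var())
assert torch.allclose(torch.std(x, dim=None), x.std())
```
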
Quinn Dawkins 38d8498b21
add e2e support for aten.atan2 (#1117)
- Includes math-to-libm pass in refbackend for math::atan2 support
2022-08-02 11:39:41 -04:00
Vidush Singhal ed13ebfd8d
E2E support for AtenEmbeddingBagPaddingIdxOp SUM Mode (#1066) 2022-08-01 16:44:11 -04:00
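
A small PyTorch-level sketch of the op gaining e2e support here; the table, indices, and offsets below are illustrative:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)            # embedding table: 10 embeddings of size 3
input = torch.tensor([1, 2, 4, 5, 4])  # indices into the table
offsets = torch.tensor([0, 2])         # two bags: [1, 2] and [4, 5, 4]

# SUM mode with a padding_idx: embeddings at index 4 are skipped entirely.
out = F.embedding_bag(input, weight, offsets, mode="sum", padding_idx=4)
print(out.shape)  # torch.Size([2, 3])
```
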
Alec 554570f3ab Implemented a decomposition of aten::narrow 2022-08-01 18:32:14 +05:30
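
The `aten::narrow` decomposition is conceptually a slice; a one-line illustrative equivalence:

```python
import torch

x = torch.arange(10)
# narrow(x, dim, start, length) selects `length` elements starting at `start`.
assert torch.equal(torch.narrow(x, 0, 2, 4), x[2:6])
```
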
PhaneeshB 8b5631d4c5 [MLIR][TORCH] Add decomposition for aten.std.dim Op
Signed-Off By: Phaneesh Barwaria <phaneesh@nod-labs.com>
2022-07-29 23:52:54 +05:30
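
A plausible reading of this decomposition, which the message does not spell out: `std.dim` as the square root of `var.dim`. Illustrative check:

```python
import torch

x = torch.randn(4, 5)
# std over a dim is the square root of var over the same dim.
assert torch.allclose(torch.std(x, dim=1), torch.sqrt(torch.var(x, dim=1)))
```
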
Vivek Khandelwal c681c3497a [MLIR][TORCH] Fix empty dim cases for the .dim ops
This commit fixes the shape calculation for:
1.) aten.mean.dim
2.) aten.var.dim
3.) aten.sum.dim_IntList op

Also, it fixes the lowering of `aten.mean.dim` and
`aten.sum.dim_IntList` to handle the case of an empty dim list.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
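
The shape rule at issue treats an empty dim list as a reduction over all dimensions. A self-contained sketch of that rule (the helper name and structure are hypothetical, not torch-mlir's actual shape-library code):

```python
def reduced_shape(shape, dims, keepdim):
    """Result shape of a dim-list reduction; an empty dim list means all dims."""
    dims = list(range(len(shape))) if len(dims) == 0 else [d % len(shape) for d in dims]
    if keepdim:
        return [1 if i in dims else s for i, s in enumerate(shape)]
    return [s for i, s in enumerate(shape) if i not in dims]

assert reduced_shape([2, 3, 4], [], keepdim=False) == []        # full reduction
assert reduced_shape([2, 3, 4], [1], keepdim=True) == [2, 1, 4]
```
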
Vivek Khandelwal d386b8f9e5 [MLIR][TORCH] Add decomposition for aten.var.correction op
This commit adds the decomposition for `aten.var.correction` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
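
`aten.var.correction` generalizes Bessel's correction: the divisor becomes `N - correction`. An illustrative check (the `correction` keyword requires a recent PyTorch):

```python
import torch

x = torch.randn(3, 5)
n = x.shape[1]
mean = x.mean(dim=1, keepdim=True)
manual = ((x - mean) ** 2).sum(dim=1) / (n - 2)   # divisor is N - correction
assert torch.allclose(torch.var(x, dim=1, correction=2), manual)
```
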
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
- Only offers support for statically sized index tensors when there is more than one
- Dynamic shape support remains for single indexing tensors
2022-07-28 19:00:02 -04:00
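
At the PyTorch level, `aten.index.Tensor` with multiple indexing tensors is advanced indexing: each output element gathers one index from every index tensor. Illustrative:

```python
import torch

x = torch.arange(12).reshape(3, 4)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 3])
# x[rows, cols] gathers x[0, 1] and x[2, 3].
assert torch.equal(x[rows, cols], torch.tensor([1, 11]))
```
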
Quinn Dawkins 3c9addf19c Add e2e support for aten.expm1 2022-07-27 12:31:35 +05:30
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
2022-07-25 21:25:57 +05:30
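
These two entries add elementwise unary ops. Their defining formulas, checked at the PyTorch level (the linalg lowerings compute the same functions; numerical-stability details aside):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8)
assert torch.allclose(torch.expm1(x), torch.exp(x) - 1)              # expm1(x) = e^x - 1
assert torch.allclose(torch.log1p(x.abs()), torch.log(1 + x.abs()))  # log1p(x) = log(1 + x)
assert torch.allclose(F.softplus(x), torch.log(1 + torch.exp(x)))    # softplus(x) = log(1 + e^x)
```
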
Ashay Rane e06ee08506
torch: [nfc] use `WalkResult::isInterrupted()` instead of booleans (#1081)
An upstream MLIR bug (that was recently fixed) caused the result to be
ignored for Region- and Block-visitor functions.  Now that the bug is
fixed, we don't need an auxiliary boolean to track whether the visitor
function has succeeded.
2022-07-19 10:17:57 -07:00
Sean Silva 85858d2743 Bump LLVM to 889c6f3996769a991a24da957f597e7500d158e7
The biggest change here is to upgrade RefineTypes to the new sparse
dataflow framework.

Smaller changes:
- minor changes to type parsing
- suppress warnings in e2e tests
2022-07-15 13:36:04 -07:00
Vivek Khandelwal 3589134d31 [MLIR][TORCH] Add decomposition for aten.var.dim op
This commit adds the decomposition for `aten.var.dim` op.
This commit also makes changes in the decomposition for `aten.var` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-15 09:53:42 +05:30
Ashay Rane 29bc48aedb
torch: add pass to catch non-value tensors (#1052)
This patch adds a new pass, `torch-verify-conversion-to-value-semantics`,
which looks for tensors with non-value semantics and catches them early
during compilation.

This pass requires `torch-refine-public-return` pass to ensure that
return operations are updated to use value tensors, followed by the
canonicalize pass to remove any dead ops that may use or produce
non-value tensors.
2022-07-13 17:11:15 -07:00
Sean Silva e5e11e214b GlobalizeObjectGraph: Clean up handling of unused slots
Previously, we still created the slot and copied the initializer even
when they were unused.
2022-07-12 10:47:28 -07:00
Ashay Rane 9017be9e9e
torch: copy uses to prevent iterator invalidation (#1033)
Prior to this patch, the code in the `torch-simplify-shape-calculations`
pass iterated on the uses of an op's result while also modifying the
value.  This caused the iterator to get invalidated, thus terminating
the loop early and producing incorrect IR.  This patch makes use of
`llvm::make_early_inc_range()` to ensure that the iterator is not
invalidated while executing the loop body.
2022-07-11 18:47:04 -07:00
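
The pitfall is language-agnostic: mutating a collection while iterating over it invalidates the traversal. A Python analogy of the fix (snapshot the sequence up front, which is in spirit what `llvm::make_early_inc_range()` does):

```python
uses = [1, 2, 3, 4]

# Buggy: `for u in uses: uses.remove(u)` skips every other element,
# leaving [2, 4] behind, because each removal shifts the iteration index.

# Fixed: iterate over a snapshot so mutation cannot disturb the traversal.
for u in list(uses):
    uses.remove(u)
assert uses == []
```
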
Ramiro Leal-Cavazos 11148e60d6
Undo shape lib changes + update function signature of sum + zero (#1035)
This commit does three things:
  1. Reverts some of the shape lib changes merged in
  https://github.com/llvm/torch-mlir/pull/844
  2. Updates the signature of `aten.sum_dim_IntList` that was recently
  updated in
  23bdb570cf
  3. Replaces `aten.zero.functional` with `aten.zero`, updated in 960758b0b7
2022-07-11 10:56:12 -07:00
Prateek Gupta 2d75654b2c [TORCH][MLIR] Add lowering of `aten.slice_scatter` and
`aten.select_scatter` op.

This commit adds:
1.  Lowering of `aten.slice_scatter` op into `tensor.insert_slice`
op.
2. Decomposition of the `aten.select_scatter` op into the
`aten.slice_scatter` op.

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-11 14:07:21 +05:30
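
PyTorch exposes both ops directly, which makes the relationship visible: `select_scatter` behaves like a single-index `slice_scatter` with the sliced dimension dropped from `src`. Illustrative:

```python
import torch

x = torch.zeros(3, 4)
src = torch.ones(1, 4)
# Overwrite rows [1, 2) of x with src.
y = torch.slice_scatter(x, src, dim=0, start=1, end=2)

# select_scatter embeds a rank-reduced src at a single index along dim.
z = torch.select_scatter(x, torch.ones(4), dim=0, index=1)
assert torch.equal(y, z)
```
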
George Petterson a08ff0d7f2 Add lowering for _convolution 2022-07-11 11:03:03 +05:30
Ashay Rane 340d8af28a
torch: handle `torch.prim.dtype` ops during type refinement (#1013)
The canonicalizer converts `torch.prim.dtype` ops into integer constants
for valid types, but the type may not be known until type refinement is
complete.  However, type refinement cannot make progress until
`torch.prim.dtype` ops have been resolved to their corresponding integer
constants, thus creating a circular dependency.

This patch creates a tight coupling between type refinement and the
lowering of `torch.prim.dtype` ops by handling such ops as they are
encountered during type refinement.  The unit test in this patch aims to
check whether the type refinement pass can now handle chains of
operations that alternate between type construction and type refinement.
2022-07-08 16:38:51 -07:00
Ramiro Leal-Cavazos 6a72ab4502
Add basic support for list of optional tensors in reduce-op-variants (#971)
This commit adds support for lists of type `list<optional<tensor>>`
where each element in the list is either a `!torch.tensor` or a
`!torch.none`.
2022-07-08 11:12:15 -07:00
Ramiro Leal-Cavazos bbb648410e
Fix compilation warning Wsign-compare (#1003) 2022-07-06 09:06:10 -07:00
Tanyo Kwok d4f1f41435
[MLIR][TORCH] Add decomposition of aten.repeat (#932)

* refine & rebase

* refine static shapes

* add e2e test

* Rebase and Refine naming style
2022-07-01 13:02:31 +08:00
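
One standard way to decompose `aten.repeat`, sketched here as an idea rather than the exact pattern from the PR: insert unit dimensions, `expand` along them, then `reshape` to tile:

```python
import torch

x = torch.arange(6).reshape(2, 3)
r0, r1 = 2, 3  # repeat factors per dimension

tiled = (x.reshape(1, 2, 1, 3)       # insert a unit dim before each axis
          .expand(r0, 2, r1, 3)      # broadcast (no copy) along the unit dims
          .reshape(r0 * 2, r1 * 3))  # flatten into the tiled layout
assert torch.equal(tiled, x.repeat(r0, r1))
```
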
Ashay Rane 163fa57cde
torch: allow torch dialect ops after running drop-shape pass (#979)
In the `pyhpc_turbulent_kinetic_energy` TorchBench benchmark, the shape
calculation occurs inside loops, but because `DropShapeCalculationsPass`
does not explicitly mark the Torch dialect as legal, the pass execution
fails.

This patch adds Torch to the list of legal dialects, and adds a test to
validate the translation.
2022-06-25 07:27:47 -07:00
Ramiro Leal-Cavazos 400fecc1e5
[LINALG] Fix shape function of index.Tensor + support N-rank inputs (#972)
This commit fixes the shape function for `index.Tensor`, adding
support for multiple index tensors and `None`s in the indices
list. This commit also adds support for input tensors of rank greater
than 1. The lowering for `index.Tensor` still has the limitation
that only a single index tensor along the first dimension of the input
tensor is supported.
2022-06-24 09:45:44 -07:00
Ashay Rane 234fc7fe0c
linalg: lower `aten.triu` op to `linalg.generic` (#965)
Prior to this patch, the torch dialect included `AtenTriuOp` for
computing the upper triangular part of the input matrix, but there was
no code for lowering the op to the linalg dialect.

This patch adds code to generate a `linalg.generic` operation that
compares indices (computed using `linalg.index`) to choose between zero
or the original value (using `arith.select`).  The lowering fails if the
number of dimensions is less than two.  This patch also adds a few
end-to-end tests.
2022-06-23 22:45:48 -07:00
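
The described lowering strategy, restated in tensor terms: compare row and column indices and select zero below the chosen diagonal. An illustrative PyTorch rendering of the same index-compare-select idea:

```python
import torch

x = torch.randn(4, 5)
diagonal = 1
rows = torch.arange(x.shape[0]).unsqueeze(1)  # column vector of row indices
cols = torch.arange(x.shape[1])               # row vector of column indices

# Keep x[i, j] where j >= i + diagonal, otherwise zero (= triu).
result = torch.where(cols >= rows + diagonal, x, torch.zeros((), dtype=x.dtype))
assert torch.equal(result, torch.triu(x, diagonal=diagonal))
```
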
Ramiro Leal-Cavazos 189afa82c5
Update shape library with LLVM bump changes (#973) 2022-06-23 18:13:03 -07:00
Vivek Khandelwal 77ab31641f [MLIR][TORCH] Add decomposition of aten.numpy_T op
This commit adds the decomposition of `aten.numpy_T` op into
`aten.t` or `aten.permute` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:01:22 +05:30
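
`numpy_T` reverses the dimension order, so rank-2 inputs can use `aten.t` while higher ranks need a full `aten.permute`. Illustrative:

```python
import torch

x = torch.randn(2, 3, 4)
reversed_dims = list(range(x.dim()))[::-1]   # [2, 1, 0]
# numpy-style transpose (aten.numpy_T) is a permute with the dims reversed.
assert torch.equal(x.permute(*reversed_dims), x.transpose(0, 2))
```
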
Bob Adolf b90837ee24
Temporarily revert support for custom op extensions. (#944)
The MacOS builders are having linking trouble with the extension library.
Until it's fixed, all support for op extensions is disabled. It should be
easy to restore once the issue is resolved.
2022-06-14 18:24:40 -07:00
Vivek Khandelwal 33fa8e7761 [MLIR][TORCH] Add decomposition of aten.floor_divide op
This commit adds the decomposition of `aten.floor_divide` op into
`aten.div.Tensor_mode` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-14 08:56:25 +05:30
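
The decomposition checked at the PyTorch level; the `floor` rounding mode matters for negative operands (recent PyTorch, where `floor_divide` actually floors):

```python
import torch

a = torch.tensor([7., -7.])
b = torch.tensor([2., 2.])
# Floor division rounds toward negative infinity: [3., -4.], not [3., -3.].
assert torch.equal(torch.floor_divide(a, b), torch.div(a, b, rounding_mode="floor"))
```
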
Bob Adolf 0a7ba62438
Allow torch-mlir to support PyTorch extensions. (#895)
PyTorch allows new operators to be registered dynamically in modules.
Torch-mlir already makes it fairly straightforward to add support for
new operators, and this commit just extends that support to allow new
PyTorch ops to come from an external module.

This does *not* allow ops to be dynamically loaded into torch-mlir.
Torch-mlir must still be compiled with support built-in.

Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which
registers a demonstration op. It will not be imported by default when
importing torch_mlir. It's strictly for testing and documentation.

Adds an end-to-end test for the `torch_mlir_custom_op_example::identity` op.

With all these changes, we should now be actively testing PyTorch extension
support with all future patches.
2022-06-13 14:51:30 -07:00
Sean Silva e1b38e74dd Use upstream shape functions directly.
Now that upstream exposes them nicely, we can use them.

I noticed that we had added stuff into the upstream_shape_helpers.py
file (which was supposed to stay pristine), so some more shape functions
need to be upstreamed.

Going forward, all shape functions should be upstreamed similar to
https://github.com/pytorch/pytorch/pull/76889 instead of added in this
file.
2022-06-07 11:15:03 -07:00
Vivek Khandelwal b95b3d844d [MLIR][TORCH] Add E2E support for aten.div.Tensor_mode op
This commit adds lowering of `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:44 +05:30
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:28 +05:30
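
The decomposition as a formula: `baddbmm(input, b1, b2, beta, alpha) = beta * input + alpha * bmm(b1, b2)`. Illustrative check:

```python
import torch

inp = torch.randn(2, 3, 5)
b1, b2 = torch.randn(2, 3, 4), torch.randn(2, 4, 5)
beta, alpha = 0.5, 2.0

# Batched matmul, scaled and accumulated into the input.
expected = beta * inp + alpha * torch.bmm(b1, b2)
assert torch.allclose(torch.baddbmm(inp, b1, b2, beta=beta, alpha=alpha), expected)
```
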
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes `aten.clamp_min` and `aten.clamp_max` op
into `aten.clamp` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-06 11:52:29 +05:30
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Ashay Rane 7fdc1cff02
build: remove manual changes to ShapeLibrary.cpp (#894)
The patch that bumped up the LLVM tag also made manual fixes to the code in
`ShapeLibrary.cpp`.  However, since that file is generated by the
`update_shape_lib.sh` script, its contents were reverted each time the
script was run.  This patch fixes the problem by removing the manual
changes to that file.
2022-06-01 14:11:29 -07:00
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of the `aten.adaptive_avg_pool2d` op into
the `aten.avg_pool2d` op. The current decomposition only supports cases where
the input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
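
When the output size equals the input size, every adaptive pooling window covers exactly one element, so the op reduces to a 1x1 average pool (effectively the identity). Illustrative:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 5, 7)
# Output size == input spatial size: each window averages a single element.
out = F.adaptive_avg_pool2d(x, output_size=x.shape[-2:])
assert torch.allclose(out, F.avg_pool2d(x, kernel_size=1, stride=1))
```
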
Ashay Rane 029cd54327
build: fix code so that the compiler does not emit warnings (#871)
When compiling without assertions (i.e. in `NDEBUG` mode), a handful of
statements turn into NOPs, which results in warnings such as missing
return statements or unused variables and functions. This patch replaces
such statements with `llvm_unreachable()`, which informs the compiler
about program termination regardless of the `NDEBUG` mode. This also
enables torch-mlir to be compiled using the flags `-Wall`, `-Wextra`,
`-Wpedantic`, and `-Werror`.
2022-05-25 14:04:59 -07:00
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
2022-05-19 15:48:15 -07:00
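
The op computes `(sum |x|^p)^(1/p)`; per the note above, the L1/L2 special cases are not yet strength-reduced. An illustrative check of the general formula:

```python
import torch

x = torch.randn(6)
p = 3.0
# vector_norm(x, p) = (sum_i |x_i|^p)^(1/p)
manual = (x.abs() ** p).sum() ** (1.0 / p)
assert torch.allclose(torch.linalg.vector_norm(x, ord=p), manual)
```
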
Ashay Rane bb52a460cb
mlir: bump llvm tag to 5380e3 (#856)
In addition to updating the llvm-project submodule, this patch also:

1. updates shape functions and tests so that `func` and `call`
   operations refer to the `func` dialect
2. avoids duplicate registration of dialects
2022-05-16 12:54:35 -07:00
Ramiro Leal-Cavazos 96f90efd16
Add shape info to `rand_like` + support for `dtype` flag (#851)
The op `aten.rand_like` was missing a shape function and unit tests, and
the `dtype` argument was being ignored in its decomposition. This
commit fixes all three things.
2022-05-12 16:00:59 -07:00
Vivek Khandelwal c69a1e5688 [MLIR][TORCH] Add E2E support for ScalarImplicit, Int.Scalar op
This commit adds lowering of the `aten.ScalarImplicit` and `aten.Int.Scalar` ops.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-10 22:40:49 +05:30
Prashant Kumar 12b3af70d3 [TORCH] Add folding of aten.detach op.
The `aten.detach` op is folded and returns its first operand, since it is
effectively an identity function (identity in the sense that it only
removes the has_grad attribute).
2022-05-10 21:54:45 +05:30
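
Why the fold is sound: on values, `detach` is the identity; it only severs autograd tracking. Illustrative:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x.detach()
assert torch.equal(y, x)    # same values ...
assert not y.requires_grad  # ... minus gradient tracking
```
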
Yi Zhang 28be6511d2 Fix type promotion code for scalar only operations
Fix the type promotion code for scalar-only operations to return a
Torch type, which is the type tracked in ValueKnowledge.scalarType.

- Fix `getPromotedResultScalarType` to return a Torch type.
- Add a `getBuiltInTypeForTorchScalar` helper to convert a scalar type
to a builtin type before passing it to the next-level type promotion
helper `updateResultTypeState`.
- Add a `setScalarType` helper to make setting ValueKnowledge.scalarType
  easier.
2022-05-07 10:37:21 -04:00
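
PyTorch's promotion rules treat tensor and scalar operands differently; `torch.result_type` exposes the lattice these helpers mirror. Illustrative:

```python
import torch

t = torch.tensor([1, 2], dtype=torch.int32)
# A Python float scalar promotes an integer tensor to the default float dtype.
assert torch.result_type(t, 1.5) == torch.float32
# Two zero-dimensional (scalar-like) operands promote among themselves.
assert torch.result_type(torch.tensor(1), torch.tensor(2.0)) == torch.float32
```
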
Kristof Denolf e682b1d0f3 changed the option name to decompose-complex-ops 2022-05-05 00:38:51 -07:00
Kristof Denolf 5243638e33 add no decompose option 2022-05-05 00:38:51 -07:00