Commit Graph

674 Commits (738f4fe96a5416191858d105039977f1d91100ff)

Ramana Radhakrishnan 738f4fe96a
Rename TorchToStd pass as TorchToArith (#1163)
All the converters in this pass appear to create ops from the
arith dialect. Hence the full rename.

Fix GH Issue #409.
2022-08-10 20:12:51 +01:00
武家伟 87562773f8
[MHLO] Add AtenCatOp conversion pattern to MHLO (#1208)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
Co-authored-by: Vremold <xremold@gamil.com>
2022-08-09 22:12:34 -07:00
Marius Brehler 202076c6e3
Add CMake dep to Func dialect (#1196)
The Torch dialect has an include to `mlir/Dialect/Func/IR/FuncOps.h` and
should therefore have a CMake dependency to the MLIRFuncDialect.
Otherwise, the build can fail since it may occur that
`mlir/Dialect/Func/IR/FuncOps.h.inc` isn't generated yet.
2022-08-09 06:54:30 -07:00
Yan Xu f83a905856
[MHLO] Fix lowering failure on reduction ops with i32 shape (#1185)
Fixed the lowering failure for torch::max.dim when the shape type is i32.
2022-08-09 17:02:50 +08:00
powderluv e55fc4deb5
Revert "E2E support for AtenRemainderScalarOp (#1119)" (#1190)
This reverts commit 34e207eeb5.
2022-08-08 22:59:57 -07:00
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
武家伟 351f15424e
[MHLO] Add transposed convolution conversion pattern (#1171)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-09 09:50:07 +08:00
Sean Silva 504de5e701 Rework how global slot initializers work.
Rather than a per-global-slot initializer region, we now have one for
the whole module. For example, it might look like this:

```
torch.global_slot "private" @tensor : !torch.tensor
torch.global_slot "private" @list : !torch.list<tensor>
torch.global_slot.module_initializer {
  %0 = torch.tensor.literal(dense<0.0> : tensor<f32>) : !torch.tensor
  %1 = torch.prim.ListConstruct %0 : (!torch.tensor) -> !torch.list<tensor>
  torch.initialize.global_slots [
    @tensor(%0 : !torch.tensor)
    @list(%1 : !torch.list<tensor>)
  ]
}
```

This new structure allows GlobalizeObjectGraph to create the initializer in a
much simpler way, avoiding the need to reason about whether different slots
alias each other. Reasoning about whether slots alias each other now is the
responsibility of InlineGlobalSlots, which has to do a much more complicated
analysis, implemented using MLIR's dataflow analysis framework.

Recommended review order:
- Check out the new IR constructs in the .mlir files of various passes
- Op definitions (*.td)
- Changes to GlobalizeObjectGraph pass.
- InlineGlobalSlots pass (~total rewrite)
- Misc changes:
  - Moving torchMlirAdjustStaticInformation for sharing with C++ code.
  - EraseModuleInitializer pass

To make this a bit nicer, it would be good to have a `torch.module` op
with an initializer region attached. That would be more invasive though.

This change has highlighted certain aspects of our project layering
which are worth calling out. None of our backends can handle global
slots, so we enforce that there are no global slots before backend
lowering. At an earlier stage in the project, we had aspirations of
transparently handling mutable global state and such, but for reasons
described below, that is no longer a goal. So really global slots should
be seen as a progressive lowering step as part of inlining all the
IValue's in the original program (GlobalizeObjectGraph is also one such
step).

Over time, with insights from work like IREE-JAX, it has become clear
that there isn't a reliable programming model we can compile for users
where we just transparently handle mutable global state (and some other
things, like lists and dictionaries). There is a need for an "outer
program" that orchestrates more restricted subroutines of the kind we
can handle in our compile flow here. The benefit of that is that it
decouples considerations like shapes, dtypes, etc. from the program
constructs used in the outer program. As long as the outer program can
efficiently invoke (pipelining/async/etc.) high-performance
data-parallel numerical subroutines of the kind we compile in our flow
here, then there is a complete programming model. This is also
consistent with the direction of upstream PyTorch which is becoming more
tracing-based (which inherently loses a lot of program structure, which
then has to be applied back with an "outer program" orchestrating the
traced subroutines).
2022-08-08 18:12:06 -07:00
Vidush Singhal 34e207eeb5
E2E support for AtenRemainderScalarOp (#1119)
* E2E support for AtenRemainderScalarOp
2022-08-08 20:02:52 -04:00
Vidush Singhal b70548edff
Add decomposition and E2E support for Aten_EmbeddingBag (#1137)
* Add decomposition and E2E support for Aten_EmbeddingBag
2022-08-08 18:56:49 -04:00
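
A minimal PyTorch sketch of the embedding_bag semantics the decomposition above targets (the tensors and the segment-sum comparison here are illustrative, not code from the commit):

```python
import torch
import torch.nn.functional as F

# embedding_bag gathers rows of `weight` for each index and reduces them
# per bag; `offsets` marks where each bag starts in the flat `input`.
weight = torch.arange(12, dtype=torch.float32).reshape(6, 2)  # 6 embeddings of size 2
input = torch.tensor([0, 2, 1, 5])   # flat index list
offsets = torch.tensor([0, 2])       # bag 0 = input[0:2], bag 1 = input[2:4]

out = F.embedding_bag(input, weight, offsets, mode="sum")

# Equivalent "decomposed" computation: gather rows, then sum each bag.
expected = torch.stack([weight[input[0:2]].sum(0), weight[input[2:4]].sum(0)])
assert torch.allclose(out, expected)
```
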
Tanyo Kwok 290d7755fb
importer: add initial support for loading Float16 tensors (#1169)
Follow-up to #761:

    This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
    method to enable the creation of tensors whose base type is Float16.
    This patch also adds a test to validate the IR generation, and it
    updates the test for importing tensors of various types.
2022-08-08 12:37:31 +08:00
Tanyo Kwok 1ee865983b
[MHLO] fix tensor mode aten.div op pattern (#1160)
* [MHLO] fix tensor mode aten.div op pattern

See RFC #999
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-06 23:38:06 +08:00
Vivek Khandelwal c129a6de93 [MLIR][TORCH] Add support for dim=None to Aten[Var|Std]DimOp
PyTorch recently added support for `dim=None` in the `torch.var`
(5ca9b2b6fa)
and `torch.std` op (eb0e30e0bc).
This commit adds the corresponding support in torch-mlir.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-08-05 20:28:56 +05:30
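
A small illustration of the `dim=None` behavior described in the commit above (example values are mine): passing `dim=None` reduces over every dimension, matching the overload that takes no `dim` at all.

```python
import torch

x = torch.randn(3, 4, 5)

# With dim=None, torch.var/torch.std reduce over every dimension,
# the same as calling them with no dim argument.
assert torch.allclose(torch.var(x, dim=None), torch.var(x))
assert torch.allclose(torch.std(x, dim=None), torch.std(x))
```
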
武家伟 c94431f71c
[MHLO] Add convolution op pattern (#1152)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-04 00:41:35 -07:00
gpetters94 08fc2d89bb
Add non-unit groups support to aten.convolution (#858) 2022-08-04 02:18:38 -04:00
武家伟 d030591df9
[MHLO] Init MHLO pooling-like op conversion (#1141)
* [MHLO] Init MHLO pooling-like op conversion and remove 'op' suffix in filenames

Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

See RFC #999
2022-08-04 12:34:22 +08:00
Tanyo Kwok f0a24f59f6
[MHLO] Init MHLO linear op patterns (#1132)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-08-03 19:10:54 -07:00
武家伟 636f5acb10
[MHLO] Init MHLO reduce-like op conversion (#1133)
* [MHLO] init reduce-like op conversion from Torch to MHLO
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-03 10:47:52 +08:00
Tanyo Kwok 0b23af27d3
[MHLO] support non-constant torch scalar in BasicOps (#1134)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-08-03 08:16:31 +08:00
Ramiro Leal-Cavazos a7af1fd873
Add support for `dim=None` to `AtenMeanDimOp` (#1129)
PyTorch recently added support for `dim=None` in the `torch.mean`
op (2bfae07a79). This
commit adds the corresponding support in torch-mlir.
2022-08-02 16:08:06 +00:00
Quinn Dawkins 38d8498b21
add e2e support for aten.atan2 (#1117)
- Includes math-to-libm pass in refbackend for math::atan2 support
2022-08-02 11:39:41 -04:00
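
For context on the op added above, a brief sketch (my example, not from the patch) of why `atan2` is its own op: unlike `atan(y / x)`, it uses the signs of both inputs to recover the correct quadrant.

```python
import math
import torch

y = torch.tensor([1.0, -1.0])
x = torch.tensor([-1.0, -1.0])

# atan2 returns angles in (-pi, pi], distinguishing quadrants that
# atan(y / x) cannot (the two rows below collapse to the same atan value).
print(torch.atan2(y, x))  # ≈ [ 2.3562, -2.3562]
print(torch.atan(y / x))  # ≈ [-0.7854,  0.7854]
assert math.isclose(torch.atan2(y, x)[0].item(), 3 * math.pi / 4, rel_tol=1e-5)
```
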
Yan Xu 704efdc259
[MHLO] add aten::gelu op pattern (#1127)
Adds the aten::gelu op pattern and moves some unit tests from basic.mlir to elementwise.mlir.
2022-08-02 15:01:30 +08:00
武家伟 76c976682c
[MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect (#1123)
* [MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [MHLO] Support I32 as shape tensor dtype

* [NFC] Add a 'TODO' annotation
2022-08-02 12:53:24 +08:00
Tanyo Kwok 3772e0bd91
[NFC][MHLO] move util funcs to MhloLegalizeUtils.h/cpp (#1128)
See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-08-02 09:21:37 +08:00
Vidush Singhal ed13ebfd8d
E2E support for AtenEmbeddingBagPaddingIdxOp SUM Mode (#1066) 2022-08-01 16:44:11 -04:00
Alec 554570f3ab Implemented a decomposition of aten::narrow 2022-08-01 18:32:14 +05:30
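
A small sketch of what a decomposition of `aten::narrow` amounts to (illustrative example, not the commit's code): narrow is a slice of `length` elements starting at `start` along `dim`.

```python
import torch

x = torch.arange(24).reshape(4, 6)

# narrow(x, dim, start, length) == slice [start : start + length) along dim
narrowed = torch.narrow(x, dim=1, start=2, length=3)
sliced = x[:, 2:5]  # the equivalent slice view
assert torch.equal(narrowed, sliced)
```
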
Henry Tu 70395de197 Resolve CI testing failure for Lazy Tensor Core (#1088)
* Xfail unsupported ops

* Register FuncDialect

* Include dynamic_ir in build

* Code reformat

* Enable LTC tests for macOS and Source Build
2022-07-30 09:40:02 -04:00
Henry Tu 0c35e607b3 Add static shape for scalar tensors (#833)
* Assume zero rank tensors are scalar

* Run RefineTypes pass on JIT Graph

* Rollback assumption that zero rank tensors are scalar

* Set numSizes to -1 for non-ranked tensors

* Rename RefineTypes to RefineTupleTypes
2022-07-30 09:40:02 -04:00
PhaneeshB 8b5631d4c5 [MLIR][TORCH] Add decomposition for aten.std.dim Op
Signed-Off By: Phaneesh Barwaria <phaneesh@nod-labs.com>
2022-07-29 23:52:54 +05:30
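
A quick numerical check (my own sketch) of the identity such a decomposition relies on: `std.dim` is the square root of `var.dim` taken over the same dimensions.

```python
import torch

x = torch.randn(2, 3, 4)

# aten.std.dim can be expressed as sqrt(aten.var.dim) with matching
# dim / unbiased / keepdim arguments.
lhs = torch.std(x, dim=[1, 2], unbiased=True, keepdim=True)
rhs = torch.sqrt(torch.var(x, dim=[1, 2], unbiased=True, keepdim=True))
assert torch.allclose(lhs, rhs)
```
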
Vivek Khandelwal c681c3497a [MLIR][TORCH] Fix empty dim cases for the .dim ops
This commit fixes the shape calculation for:
1.) aten.mean.dim
2.) aten.var.dim
3.) aten.sum.dim_IntList op

Also, it fixes the lowering of `aten.mean.dim` and
`aten.sum.dim_IntList` for handling the cases of empty dim list.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Vivek Khandelwal d386b8f9e5 [MLIR][TORCH] Add decomposition for aten.var.correction op
This commit adds the decomposition for `aten.var.correction` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
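
An illustrative sketch (not the actual decomposition code) of the arithmetic behind `aten.var.correction`: squared deviations from the mean are summed and divided by `N - correction` rather than `N`.

```python
import torch

x = torch.randn(5, 7)

# Reference: unbiased variance, i.e. aten.var.correction with correction=1.
ref = torch.var(x, dim=1, unbiased=True)

# What the decomposition computes: squared deviations from the mean,
# summed and divided by N - correction instead of N.
correction = 1
mean = x.mean(dim=1, keepdim=True)
n = x.shape[1]
manual = ((x - mean) ** 2).sum(dim=1) / (n - correction)
assert torch.allclose(ref, manual)
```
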
Vivek Khandelwal 7247c6a3a7 [MLIR][TORCH] Add E2E support for aten.ge.int op
This commit adds lowering of `aten.ge.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t` needed for successfully lowering the shape function
 - Only offers support for statically sized index tensors when there is more than one
 - Dynamic shape support remains for single indexing tensors
2022-07-28 19:00:02 -04:00
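
A small example (values are mine) of what multiple indexing tensors mean for `aten.index.Tensor`: each index tensor selects along one dimension, and they are matched element-wise.

```python
import torch

x = torch.arange(12).reshape(3, 4)

rows = torch.tensor([0, 1, 2])
cols = torch.tensor([1, 3, 0])

# x[rows, cols] lowers to aten.index.Tensor with two index tensors;
# the result gathers x[0, 1], x[1, 3], x[2, 0].
out = x[rows, cols]
assert torch.equal(out, torch.tensor([1, 7, 8]))
```
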
Quinn Dawkins 3c9addf19c Add e2e support for aten.expm1 2022-07-27 12:31:35 +05:30
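
For reference, a quick sketch (my example) of the `aten.expm1` semantics: it computes `exp(x) - 1` while staying accurate for small `x`.

```python
import torch

x = torch.tensor([1e-8, 1.0])

# expm1(x) == exp(x) - 1, but it stays accurate for tiny x:
# in float32, exp(1e-8) rounds to exactly 1.0, so exp(x) - 1 collapses to 0.
print(torch.expm1(x))      # ≈ [1e-08, 1.7183]
print(torch.exp(x) - 1.0)  # ≈ [0.0,   1.7183]
```
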
武家伟 052d2f84dc
[MHLO] Init MHLO basic op conversion (#1092)
* [MHLO] Init MHLO basic Op Conversion
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [NFC] Remove 'from @llvm-project' annotation

Co-authored-by: wujiawei.jw <wujiawei.jw@bytedance.com>
2022-07-27 13:07:51 +08:00
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
2022-07-25 21:25:57 +05:30
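
A short sketch (mine, following the commit's description of these as unary elementwise ops) of the formulas the lowerings implement:

```python
import torch
import torch.nn.functional as F

x = torch.rand(4)  # values in [0, 1), inside softplus' non-linear branch

# softplus(x) = log(1 + exp(x)), a smooth approximation of relu
assert torch.allclose(F.softplus(x), torch.log(1 + torch.exp(x)), atol=1e-6)

# log1p(x) = log(1 + x), kept as a separate op for accuracy near x == 0
assert torch.allclose(torch.log1p(x), torch.log(1 + x), atol=1e-6)
```
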
Tanyo Kwok 44ead68772
[MHLO] Init MHLO gather op patterns (#1104)
See RFC https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-25 23:47:46 +08:00
Tanyo Kwok f50d7013cd
[MHLO] Add [un]squeeze op patterns (#1099)
* [MHLO] Add [un]squeeze op patterns

* Conform to llvm coding standard

* minor update
2022-07-25 23:28:48 +08:00
Tanyo Kwok b80ce79b9f
[MHLO] Init MHLO view like op patterns (#1090)
* [MHLO] Init MHLO view like op patterns

See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com

* update filecheck test cases

* rebase, remove chlo and clang-format
2022-07-22 15:18:18 +08:00
Tanyo Kwok a02dbb2d5e
[MHLO] Init MHLO slice like op patterns (#1091)
See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi yibairen.byron@bytedance.com
Co-authored-by: Jiawei Wu xremold@gmail.com
Co-authored-by: Tianyou Guo tianyou.gty@alibaba-inc.com
Co-authored-by: Xu Yan yancey.yx@alibaba-inc.com
Co-authored-by: Ziheng Jiang ziheng.jiang@bytedance.com
2022-07-22 11:32:45 +08:00
Ramiro Leal-Cavazos f271e6a88c
Add verifiers for ToBuiltinTensorOp and FromBuiltinTensorOp (#1089)
This commit adds verifiers to the ops `ToBuiltinTensorOp` and
`FromBuiltinTensorOp` that make sure that the input and output have
the same shape and data type.
2022-07-21 21:41:45 +00:00
Sean Silva c0ef192865
Improve error message
The unknown dtype case can come from RefineTypes.
2022-07-21 13:52:24 -07:00
Ashay Rane 72dd04cdb3
Revert "python: trim registration and loading of dialects and passes" (#1093)
This reverts commit ad283c1043, since it's
causing nightly build failures for all platforms.
2022-07-21 09:35:42 -07:00
Ashay Rane ad283c1043
python: trim registration and loading of dialects and passes (#1084)
In the interest of merging upstream LLVM quickly, a previous patch
(7f08169) updated the torch-mlir build to register all dialects and
passes through Python bindings.  This patch limits the dialects and
passes to only those that are used in torch-mlir.

Key to this change are the removal of
`MLIRPythonExtension.RegisterEverything` and the introduction of a new
Python module (`_mlir_libs/_site_initialize_0.py`), where we register
the dialects and passes used by torch-mlir.
2022-07-20 18:34:17 -07:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Quinn Dawkins 647e75e029
Allow expanding and collapsing in aten::view (#1082)
- Supports cases where the view op expands and collapses dims
simultaneously. This does not handle the case where it is neither
expanding nor collapsing (e.g. [2, 3] -> [3, 2])
 - Additionally fixes a previous bug with adding 1-sized dims on both
sides of a tensor with aten.view
2022-07-20 17:35:51 -04:00
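
An illustrative example (shapes are mine) of the kind of `view` the change above enables: a single call that collapses the leading dims while expanding a trailing one.

```python
import torch

x = torch.randn(2, 3, 4)

# Collapse (2, 3) -> 6 while expanding 4 -> (2, 2) in the same view:
y = x.view(6, 2, 2)
assert y.shape == (6, 2, 2)

# The case the commit notes as still unsupported is a pure reordering with
# no expand/collapse, e.g. viewing shape (2, 3) as (3, 2).
```
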
Ashay Rane e06ee08506
torch: [nfc] use `WalkResult::isInterrupted()` instead of booleans (#1081)
An upstream MLIR bug (that was recently fixed) caused the result to be
ignored for Region- and Block-visitor functions.  Now that the bug is
fixed, we don't need an auxiliary boolean to track whether the visitor
function has succeeded.
2022-07-19 10:17:57 -07:00
Quinn Dawkins c73a39e40a Add support for index.Tensor on dimensions other than the first
This patch still only supports a single indexing tensor.
2022-07-19 11:36:52 +05:30
Vivek Khandelwal df0b1e77a4 [MLIR][TORCH] Add negative dim support for aten.cat and aten.slice op
This commit adds the support for negative dim cases for `aten.cat`,
`aten.slice.Tensor` and `aten.slice_scatter` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-18 14:01:33 +05:30
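
A quick illustration (my example) of the negative-dim handling added above: a negative `dim` counts from the end, so `dim=-1` on a rank-2 tensor is `dim=1`.

```python
import torch

a = torch.ones(2, 3)
b = torch.zeros(2, 3)

# A negative dim counts from the end: dim=-1 on rank-2 tensors is dim=1.
# The commit adds the same normalization for aten.cat, aten.slice.Tensor
# and aten.slice_scatter in torch-mlir.
assert torch.equal(torch.cat([a, b], dim=-1), torch.cat([a, b], dim=1))
```
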
Vivek Khandelwal 4c25878e64 [MLIR][TORCH] Add canonicalization pattern for prim.ListUnpack op
This commit adds the canonicalization pattern for the `prim.ListUnpack` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-18 13:51:25 +05:30
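
A TorchScript-flavored sketch (assumptions mine) of the pattern such a canonicalization targets: a list that is constructed and immediately unpacked can be folded away, wiring the unpacked values straight to the original operands.

```python
import torch

def f(a, b):
    # Scripting this emits prim.ListConstruct followed by prim.ListUnpack;
    # the canonicalization pattern can forward a and b directly to x and y.
    lst = [a, b]
    x, y = lst
    return x + y

print(torch.jit.script(f).graph)  # shows the ListConstruct/ListUnpack pair
```
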