Commit Graph

710 Commits (234b2f2bd4f657da300163dbd01757fcbc5d04d7)

Tanyo Kwok 512f2d9c23
Add decomposition to aten.native_layer_norm (#1332)
* Add decomposition to aten.native_layer_norm

* fix ci error
2022-09-02 09:29:22 +08:00
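For readers unfamiliar with decompositions: below is a rough Python sketch of the arithmetic a native_layer_norm decomposition expands to (illustrative only; the actual pass emits Torch dialect ops, and the function here is not from the commit).

```
import torch

def native_layer_norm_decomposed(x, normalized_shape, weight, bias, eps=1e-5):
    # Reduce over the trailing `normalized_shape` dimensions, keeping them
    # so the statistics broadcast back against `x`.
    dims = [x.dim() - 1 - i for i in range(len(normalized_shape))]
    mean = x.mean(dim=dims, keepdim=True)
    var = x.var(dim=dims, unbiased=False, keepdim=True)
    rstd = torch.rsqrt(var + eps)
    out = (x - mean) * rstd * weight + bias
    return out, mean, rstd  # aten.native_layer_norm returns all three

x = torch.randn(2, 4, 8)
out, _, _ = native_layer_norm_decomposed(x, (8,), torch.ones(8), torch.zeros(8))
assert torch.allclose(out, torch.nn.functional.layer_norm(x, (8,)), atol=1e-6)
```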
Tanyo Kwok 57d8ec151f
[MHLO] add VerifyMhloBackendContract (#1321)
* [MHLO] add VerifyMhloBackendContract

* guard with macro
2022-09-01 17:08:17 +08:00
Tanyo Kwok 29cafdbb61
[MHLO] refactor pass configurations (#1315)
Related to https://github.com/llvm/torch-mlir/issues/1227

1. Reduce MHLO #ifdefs
2. Eliminate compilation warnings
2022-09-01 10:36:02 +08:00
Ashay Rane e52e886845
build: update llvm tag to 00d648bd (#1307)
 - Update MHLO commit to build with LLVM commit hash 00d648bd
 - Update TorchToMhlo code to work with Stablehlo
 - Re-enable two failing TOSA tests, thus resolving GitHub Issue #1231
2022-08-30 14:44:00 -05:00
Sean Silva 51ef1b141c Add some missing dependencies.
Caught in the wild here:
https://github.com/llvm/torch-mlir/runs/8046660640?check_suite_focus=true

It is common for a missing dependency to only surface as an issue on the
CI machines, since their fewer cores prevent the fortunate "race" in
which the dependency happens to be built before its dependent.
2022-08-30 11:52:30 -07:00
Sean Silva bcccf41d96 Add CI for generated files.
This ensures that they are always up to date.

This also updates the shape lib to make the new CI actually pass :)
2022-08-29 12:07:16 -07:00
Sean Silva 26231853ab Rename an outdated class name
We used to have "immutable" tensors rather than "value-semantic"
tensors.
2022-08-29 10:08:59 -07:00
Sean Silva 0e3ddbac91 Remove VerifyInvariantsBeforeBackendLowering
LowerToBackendContract now checks all this consistently.
2022-08-26 10:24:43 -07:00
Sean Silva b1fa7a2b9d Fix a few build warnings 2022-08-26 10:24:22 -07:00
Ashay Rane 1d9d925f6e
mlir: fix replacement of `OpaqueElementsAttr` (#1274)
An earlier patch (bb47c166) incorrectly replaced the now-dropped
`OpaqueElementsAttr` with `SparseElementsAttr` in one place and with
`DenseElementsAttr` in another.  This patch fixes the problem by making
both replacements use the dense-equivalent type.
2022-08-24 17:10:40 -05:00
gpetters94 f012279fa2
Add transposed case for at::convolution (#917)
Also adds a decomposition for aten::conv_transpose2d.input.
2022-08-24 12:19:35 -04:00
Tanyo Kwok 3d0e18bbe7
Add decomposition for aten.roll (#1170)
* Add decomposition for aten.roll

* add e2e unittest

* refine type of torch.roll

* fix aten::cat output type
2022-08-24 08:36:05 +08:00
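The slice-and-concatenate shape of this decomposition, sketched in Python for a single dimension (illustrative; the pass itself emits slice and cat ops, which is why the aten::cat output type mattered):

```
import torch

def roll_decomposed(x, shift, dim):
    # Rolling one dimension is two slices swapped and re-concatenated.
    shift = shift % x.size(dim)
    front = x.narrow(dim, 0, x.size(dim) - shift)
    back = x.narrow(dim, x.size(dim) - shift, shift)
    return torch.cat([back, front], dim=dim)

x = torch.arange(6).reshape(2, 3)
assert torch.equal(roll_decomposed(x, 2, 1), torch.roll(x, 2, 1))
```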
Tanyo Kwok 2374098d71
[MHLO] Init end to end unit tests (#1223) 2022-08-23 16:47:21 +08:00
Vivek Khandelwal 8cad02f87e [MLIR][TORCH] Add torch.Device type to backend contract scalar types
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-08-23 10:50:09 +05:30
Tanyo Kwok 9176b5ed29
Add decomposition for aten.flatten.using_ints (#1161) 2022-08-23 11:52:54 +08:00
Sean Silva 01290d134a Add a way for backends to control which ops are legal for them.
We were already hitting many cases where backends differed in terms of
the legal ops that they wanted. This caused unnecessary coupling between
the backends. Examples:
- https://github.com/llvm/torch-mlir/pull/1161
- https://github.com/llvm/torch-mlir/pull/862

This PR centralizes all compilation to go through `torch_mlir.compile`
so that we can keep the logic centralized there. We should move these
lists closer to each backend. Especially cases like
https://github.com/llvm/torch-mlir/pull/862 where blocking a
decomposition is necessary to avoid a crash emphasize that the set of
decompositions is tightly coupled to the backend, and should be
"controlled by the backend" and not something arbitrarily tweakable.

Also:
- Fix a small bug in the way we passed through the backendLegalOps
  option.
- Add better error messages in `torch_mlir.compile` for import errors.
2022-08-22 14:16:13 -07:00
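A hedged sketch of what going through the central entry point looks like; the `backend_legal_ops` keyword shown commented out is hypothetical here, so check the `torch_mlir.compile` signature in your checkout before relying on it.

```
import torch
import torch_mlir

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.softmax(x, dim=-1)

# All compilation flows through torch_mlir.compile, which owns the
# decomposition/legality logic described above.
module = torch_mlir.compile(
    Model(),
    torch.randn(3, 4),
    output_type="linalg-on-tensors",
    # Hypothetical: ops the target backend handles natively, so the
    # decomposition machinery would leave them intact.
    # backend_legal_ops=["aten.softmax.int"],
)
print(module)
```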
Alex Tsao c38308f3ef
Add lowering for _convolution.deprecated (#1259)
* Add lowering for _convolution.deprecated
2022-08-22 11:17:36 +08:00
武家伟 99fb4c8637
Add folder for ToF64Op and FromF64Op (#1257) 2022-08-22 09:49:39 +08:00
Vivek Khandelwal 65d811e267 [MLIR][TORCH] Fix dynamic cases for aten.index.Tensor 2022-08-19 12:13:20 +05:30
武家伟 7bd173a1c4
[MHLO] Eliminate explicit dynamic output shape generating in converting AtenSliceTensorOp (#1245)
[MHLO] Eliminate explicit dynamic output shape generating in converting AtenSliceTensorOp
2022-08-19 10:14:57 +08:00
Ramiro Leal-Cavazos 9bc606c384
Add support for returning more than one copy of the same tensor (#1228)
One of the simplifications made by the pass `RefinePublicReturn`
currently happens only if the tensor in question has a single
user. However, the existing check did not correctly handle the case of
a user having multiple uses of the same tensor. This commit makes sure
only unique users are considered.
2022-08-18 22:41:45 +00:00
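A toy illustration of the distinction the fix hinges on, in plain Python: one user (operation) can account for several uses of the same value, so counting uses over-counts users.

```
# Each use pairs a user op with an operand index; returning the same
# tensor twice is a single user with two uses.
uses = [("func.return", 0), ("func.return", 1)]
unique_users = {user for user, _ in uses}
assert len(uses) == 2 and len(unique_users) == 1
```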
Sean Silva 283e0f141a Add a concept of "backend legal ops".
This is a first step towards formalizing the set of ops in our backend
contract. The goal is to eventually formalize `torch` dialect ops into 3
categories:
1. Legal in backend contract
2. Illegal in backend contract
3. Conditionally legal in backend contract

The "conditionally legal" set are the ops that we can optionally
decompose for backends.

This patch adds relevant pass options for this throughout the compiler,
in preparation for a new set of traits which will formalize this
classification.
2022-08-18 11:46:50 -07:00
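One way to picture the intended three-way classification, as a Python sketch (the enum and helper are illustrative, not the actual traits or pass options):

```
from enum import Enum, auto

class ContractLegality(Enum):
    LEGAL = auto()        # always allowed past the backend contract
    ILLEGAL = auto()      # must be decomposed or eliminated
    CONDITIONAL = auto()  # decomposed unless the backend marks it legal

def is_legal(op_name, kind, backend_legal_ops):
    if kind is ContractLegality.CONDITIONAL:
        return op_name in backend_legal_ops
    return kind is ContractLegality.LEGAL

# A backend that handles softmax natively keeps it past the contract:
assert is_legal("aten.softmax.int", ContractLegality.CONDITIONAL, {"aten.softmax.int"})
```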
Ramiro Leal-Cavazos f07f7d20f9
Clean up shape functions that use `sum_mean_dim` (#1217)
I recently fixed the handling of the `dim` argument in
`sum_mean_dim` (59fccab857). Therefore,
the checks that the `dim` input is `None` or `[]` are no longer needed.
2022-08-18 08:23:43 -07:00
Sean Silva 57681f7947 Iteratively run the main simplification pipeline.
This introduces a new pass LowerToBackendContract (better name very
welcome) which performs the bulk of the simplifications that we do,
such as
- shape refinement
- dtype refinement
- maximizing value semantics
- inlining global slots
- decomposing complex ops

The key difference from before is that it iterates the set of
transformations, which can help to break a number of "catch-22" issues
where one simplification depends on another, the latest example being
here:
https://github.com/llvm/torch-mlir/issues/1131

This also exposed that RefineTypes was sometimes crashing/asserting for
certain inputs. This commit hardens it a bit.
2022-08-17 14:54:33 -07:00
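The iteration itself is just a fixed-point loop; a minimal Python sketch of the idea (the real pass is C++, and these function names are illustrative):

```
def lower_to_backend_contract(module, simplifications, max_iterations=10):
    for _ in range(max_iterations):
        before = str(module)
        for simplify in simplifications:
            simplify(module)  # shape refinement, decomposition, etc.
        if str(module) == before:
            return module  # fixed point: no simplification made progress
    raise RuntimeError("did not converge to the backend contract")
```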
Yan Xu 9be8997536
Revert "add native_dropout and related ops pattern (#1211)" (#1230)
This reverts commit c935795086.
2022-08-17 13:48:10 +08:00
Quinn Dawkins 85f383ce0b
Bump the shape lib to match the upstream functions currently in PyTorch (#1236)
Bumps the shape library:
 - Updates the function signature for aten.arange.start_step
 - upstream_shape_functions.mean_dim -> upstream_shape_functions.sum_mean_dim
2022-08-17 00:11:04 -04:00
武家伟 11a5b5ac52
[MHLO] Add AtenRSubScalarOp conversion pattern to MHLO (#1233)
* [MHLO] Add AtenRSubScalarOp conversion pattern
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-17 09:07:36 +08:00
nithinsubbiah fde390c766 Re-enable custom op support 2022-08-16 22:49:08 +05:30
Ashay Rane 84d345c650
build: update llvm tag to 2dde4ba6 (#1229)
Summary of changes:
 - Tensor dialect now sets `emitAccessorPrefix` to prefixed, thus
   requiring updates to methods that retrieve arguments
   [https://reviews.llvm.org/D131361]
 - Update MHLO to build with LLVM commit hash 2dde4ba6
 - Replace `AbsOp` with `AbsFOp` [https://reviews.llvm.org/D131325]
 - Replace deprecated `getValue()` with `value()`
   [https://reviews.llvm.org/D131349]
 - Remove `AnalysisState::defaultInitialize()`
   [https://reviews.llvm.org/D131746]
 - Update MHLO MLIR tests to use the updated assembly format
 - Disable two failing TOSA tests (GitHub Issue link:
   https://github.com/llvm/torch-mlir/issues/1231)
2022-08-15 23:54:45 -07:00
武家伟 3b3cb99ef8
Generalize canonicalization pattern for more aten.sub/div/mul/add ops (#1209)
Generalize the canonicalization pattern to more sub/div/mul/add ops; for AtenDivTensorModeOp in 'trunc' rounding mode, we instead try to fold it.
2022-08-16 13:24:08 +08:00
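For reference, 'trunc' rounding truncates the quotient toward zero, which is what makes constant operands of AtenDivTensorModeOp foldable at compile time:

```
import torch

a, b = torch.tensor([7.0, -7.0]), torch.tensor([2.0, 2.0])
# Truncation is toward zero: -7/2 -> -3, unlike floor's -4.
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3., -3.])
```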
Ramiro Leal-Cavazos 9d6ee48661
Fix unused-variables warnings about EmbeddingBag ops (#1220)
According to the documentation for
`torch.embedding_bag` (https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html),
the default value for `scale_grad_by_freq` is False.
2022-08-15 09:43:55 -07:00
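A quick check of that default, per the PyTorch docs cited above:

```
import torch
import torch.nn.functional as F

weight = torch.randn(10, 3)
inputs = torch.tensor([1, 2, 4, 5])
offsets = torch.tensor([0, 2])
# Omitting scale_grad_by_freq is the same as passing False explicitly.
assert torch.equal(
    F.embedding_bag(inputs, weight, offsets),
    F.embedding_bag(inputs, weight, offsets, scale_grad_by_freq=False),
)
```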
Yan Xu c935795086
add native_dropout and related ops pattern (#1211) 2022-08-15 09:28:47 +08:00
Prashant Kumar b1a506624c Add decomposition of `aten.masked_fill.Tensor` op.
The `aten.masked_fill.Tensor` op has been decomposed to the `aten.masked_fill.Scalar` op.
2022-08-11 07:48:04 +05:30
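In Python terms, the Tensor variant reduces to the Scalar variant by extracting the (necessarily 0-d) fill value; a small sketch of that equivalence:

```
import torch

x = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True], [False, True, False]])
value = torch.tensor(5.0)  # the Tensor variant requires a 0-d fill value

# Extracting the scalar reproduces the Tensor variant exactly.
assert torch.equal(x.masked_fill(mask, value), x.masked_fill(mask, value.item()))
```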
Yan Xu d96ec64be1
remove torch dialect from legal list (#1192) 2022-08-11 09:22:41 +08:00
Vidush Singhal dd2da5a038
E2E support for AtenRemainderScalarOp (#1200) 2022-08-10 20:02:06 -04:00
gpetters94 79b9cf9468
Add lowering for aten.to.device (#1107) 2022-08-10 19:24:02 -04:00
Ramana Radhakrishnan 738f4fe96a
Rename TorchToStd pass as TorchToArith (#1163)
All the converters in this pass appear to create ops from the
arith dialect. Hence the full rename.

Fix GH Issue #409.
2022-08-10 20:12:51 +01:00
武家伟 87562773f8
[MHLO] Add AtenCatOp conversion pattern to MHLO (#1208)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
Co-authored-by: Vremold <xremold@gamil.com>
2022-08-09 22:12:34 -07:00
Marius Brehler 202076c6e3
Add CMake dep to Func dialect (#1196)
The Torch dialect has an include of `mlir/Dialect/Func/IR/FuncOps.h` and
should therefore have a CMake dependency on MLIRFuncDialect.
Otherwise, the build can fail because
`mlir/Dialect/Func/IR/FuncOps.h.inc` may not have been generated yet.
2022-08-09 06:54:30 -07:00
Yan Xu f83a905856
[MHLO] Fix lowering failure on reduction ops with i32 shape (#1185)
Fixes a lowering failure for torch::max.dim when the shape type is i32.
2022-08-09 17:02:50 +08:00
powderluv e55fc4deb5
Revert "E2E support for AtenRemainderScalarOp (#1119)" (#1190)
This reverts commit 34e207eeb5.
2022-08-08 22:59:57 -07:00
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
武家伟 351f15424e
[MHLO] Add transposed convolution conversion pattern (#1171)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-09 09:50:07 +08:00
Sean Silva 504de5e701 Rework how global slot initializers work.
Rather than a per-global-slot initializer region, we now have one for
the whole module. For example, it might look like this:

```
torch.global_slot "private" @tensor : !torch.tensor
torch.global_slot "private" @list : !torch.list<tensor>
torch.global_slot.module_initializer {
  %0 = torch.tensor.literal(dense<0.0> : tensor<f32>) : !torch.tensor
  %1 = torch.prim.ListConstruct %0 : (!torch.tensor) -> !torch.list<tensor>
  torch.initialize.global_slots [
    @tensor(%0 : !torch.tensor)
    @list(%1 : !torch.list<tensor>)
  ]
}
```

This new structure allows GlobalizeObjectGraph to create the initializer in a
much simpler way, avoiding the need to reason about whether different slots
alias each other. Reasoning about whether slots alias each other now is the
responsibility of InlineGlobalSlots, which has to do a much more complicated
analysis, implemented using MLIR's dataflow analysis framework.

Recommended review order:
- Check out the new IR constructs in the .mlir files of various passes
- Op definitions (*.td)
- Changes to GlobalizeObjectGraph pass.
- InlineGlobalSlots pass (~total rewrite)
- Misc changes:
  - Moving torchMlirAdjustStaticInformation for sharing with C++ code.
  - EraseModuleInitializer pass

To make this a bit nicer, it would be good to have a `torch.module` op
with an initializer region attached. That would be more invasive though.

This change has highlighted certain aspects of our project layering
which are worth calling out. None of our backends can handle global
slots, so we enforce that there are no global slots before backend
lowering. At an earlier stage in the project, we had aspirations of
transparently handling mutable global state and such, but for reasons
described below, that is no longer a goal. So really global slots should
be seen as a progressive lowering step as part of inlining all the
IValue's in the original program (GlobalizeObjectGraph is also one such
step).

Over time, with insights from work like IREE-JAX, it has become clear
that there isn't a reliable programming model we can compile for users
where we just transparently handle mutable global state (and some other
things, like lists and dictionaries). There is a need for an "outer
program" that orchestrates more restricted subroutines of the kind we
can handle in our compile flow here. The benefit of that is that it
decouples considerations like shapes, dtypes, etc. from the program
constructs used in the outer program. As long as the outer program can
efficiently invoke (pipelining/async/etc.) high-performance
data-parallel numerical subroutines of the kind we compile in our flow
here, then there is a complete programming model. This is also
consistent with the direction of upstream PyTorch which is becoming more
tracing-based (which inherently loses a lot of program structure, which
then has to be applied back with an "outer program" orchestrating the
traced subroutines).
2022-08-08 18:12:06 -07:00
Vidush Singhal 34e207eeb5
E2E support for AtenRemainderScalarOp (#1119)
* E2E support for AtenRemainderScalarOp
2022-08-08 20:02:52 -04:00
Vidush Singhal b70548edff
Add decomposition and E2E support for Aten_EmbeddingBag (#1137)
* Add decomposition and E2E support for Aten_EmbeddingBag
2022-08-08 18:56:49 -04:00
Tanyo Kwok 290d7755fb
importer: add initial support for loading Float16 tensors (#1169)
Follow-up to #761:

    This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
    method to enable the creation of tensors whose base type is Float16.
    This patch also adds a test to validate the IR generation, and it
    updates the test for importing tensors of various types.
2022-08-08 12:37:31 +08:00
Tanyo Kwok 1ee865983b
[MHLO] fix tensor mode aten.div op pattern (#1160)
* [MHLO] fix tensor mode aten.div op pattern

See RFC #999
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-06 23:38:06 +08:00
Vivek Khandelwal c129a6de93 [MLIR][TORCH] Add support for dim=None to Aten[Var|Std]DimOp
PyTorch recently added support for `dim=None` in the `torch.var`
(5ca9b2b6fa)
and `torch.std` op (eb0e30e0bc).
This commit adds the corresponding support in torch-mlir.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-08-05 20:28:56 +05:30
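With a PyTorch build that includes the commits above, the new overloads behave like the dim-less reductions:

```
import torch

t = torch.randn(3, 4)
# dim=None reduces over all elements, matching the overload without dim.
assert torch.allclose(torch.var(t, dim=None), torch.var(t))
assert torch.allclose(torch.std(t, dim=None), torch.std(t))
```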
武家伟 c94431f71c
[MHLO] Add convolution op pattern (#1152)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-08-04 00:41:35 -07:00