//===-- Passes.td - Pass definition file -------------------*- tablegen -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//

#ifndef NPCOMP_CONVERSION_PASSES
#define NPCOMP_CONVERSION_PASSES

include "mlir/Pass/PassBase.td"

//===----------------------------------------------------------------------===//
// Torch conversions
//===----------------------------------------------------------------------===//

def ConvertTorchToStd : Pass<"convert-torch-to-std", "FuncOp"> {
  let summary = "Convert recognized Torch ops to Std ops";
  let constructor = "mlir::NPCOMP::createConvertTorchToStdPass()";
}
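
// Note: as with every def in this file, the declaration above is consumed by
// mlir-tblgen (via the PassBase.td include at the top) to generate the pass
// declaration and registration boilerplate. As an illustrative example only
// (the exact tool name and flags are assumptions, not prescribed by this
// file), such a pass is typically exercised as:
//   npcomp-opt -convert-torch-to-std input.mlir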

def ConvertTorchToSCF : Pass<"convert-torch-to-scf", "FuncOp"> {
  let summary = "Convert recognized Torch ops to SCF ops";
  let constructor = "mlir::NPCOMP::createConvertTorchToSCFPass()";
}

def ConvertTorchToLinalg : Pass<"convert-torch-to-linalg", "FuncOp"> {
  let summary = "Convert recognized Torch ops to Linalg ops";
  let description = [{
    Convert ATen ops to linalg ops.

    This pass's main responsibility is to bridge the world between ops
    that safely terminate the program in case of operand shape mismatches
    (ATen) and ops where such mismatches are undefined behavior (linalg).

    To model the termination of the program for implementing error guards,
    we use the `std.assert` op. This design decision is at variance with
    other passes in npcomp, such as `convert-tcf-to-std` and
    `convert-tcf-to-linalg`, which use the `shape` dialect's witness system
    (the `shape.cstr_*` family of ops feeding into `shape.assuming` regions);
    those passes will eventually be subsumed by this one. The reasons for
    this change are heuristic, but boil down to:
    1. The modeling of `shape.assuming` is odd, as it uses a region, which is
       not a good fit for modeling error guards. Regions mark a "start" and
       an "end" (that is their nesting property), but modeling assertions in
       the program doesn't fit into that. For assertions, only the "start"
       matters (once tested, a predicate remains true "forever" -- it doesn't
       end at the "yield" of the region).
       Thus, having regions places arbitrary "end"s that just add IR
       structure with no semantic value for modeling this problem! (And to
       make things worse, the "end"s, which we don't need, are what require
       "yielding" values, which interrupts use-def chains.) Consider the
       different structural properties of regions:
       a. IsolatedFromAbove region:
          - "start" interrupts use-def chains,
          - "end" interrupts use-def chains
          - structurally protects from intra-block upward and downward
            code motion
       b. Capturing region (like `shape.assuming`):
          - "start" does not interrupt use-def chains,
          - "end" interrupts use-def chains
          - structurally protects from intra-block upward and downward
            code motion
       c. What we "ideally" want:
          - "start" interrupts use-def chains (can be pruned though)
          - no "end" IR structure!
          - structurally protects from intra-block upward code motion
            (but not downward code motion!)
       - Observation: We probably can't get all of this, but overall this
         problem is much better suited to a "MemorySSA"-like abstraction --
         call it "EffectSSA" -- that is constructed on-demand based on MLIR's
         effect modeling system. By contrast, `shape.assuming` only covers
         the effects the IR creator explicitly encoded; with
         witnesses/`shape.assuming` it is easy to forget to handle effects
         other than those encoded in the witness structure.
    2. The presence of `shape.assuming` regions tends to create highly nested
       IR structures, which don't interoperate well with any other IR
       structures, and creates very bulky IR (and IR creation code). In
       general, if we are going to do anything with anything (e.g.
       canonicalize), we end up needing to either:
       a. Flatten the `shape.assuming` IR (defeating the purpose of having
          it).
       b. Do some sort of `shape.assuming` "region merging".
       c. Have special patterns that handle a subset of special cases
          (looking through "yields" and such) and don't generalize.
    3. Witnesses tend to encourage non-scalable peephole transformations,
       which tend to make analyses/transformations non-robust to the presence
       of control flow and side effecting ops (easy to forget to handle side
       effects other than those modeled by the witness system).
    4. All this code operates on ranked tensors, for which using individual
       SSA values for sizes (rather than a "shape type") seems to work really
       well at this level of abstraction, based on prior experience in IREE.
       (Unranked code tends to benefit from having a discrete "shape type" to
       model shapes.)

    We will see if we end up needing something like `shape.assuming`, but for
    now, it seems likely we can do something simpler and just bypass it. The
    design of having an EffectSSA that is constructed on-demand seems very
    compelling for modeling effects more broadly.
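
    As a rough illustrative sketch (not verbatim output of this pass; op
    spellings, value names, and the exact guard structure are approximate),
    a guarded lowering of a matrix multiplication in this style looks
    something like:

      %c0 = constant 0 : index
      %c1 = constant 1 : index
      %lhs_dim1 = memref.dim %lhs, %c1 : tensor<?x?xf32>
      %rhs_dim0 = memref.dim %rhs, %c0 : tensor<?x?xf32>
      %ok = cmpi eq, %lhs_dim1, %rhs_dim0 : index
      assert %ok, "mismatching contracting dimension"
      // Linalg ops consuming %lhs and %rhs follow here and are free to
      // assume the guarded property.

    Note that the guard is straight-line IR: once the `assert` has executed,
    the predicate holds for the rest of the program, so no "end" marker (and
    no yielding of values out of a region) is required.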
  }];
  let constructor = "mlir::NPCOMP::createConvertTorchToLinalgPass()";
}

//===----------------------------------------------------------------------===//
// Basicpy conversions
//===----------------------------------------------------------------------===//

def ConvertBasicpyToStd : Pass<"convert-basicpy-to-std", "FuncOp"> {
  let summary = "Convert representable Basicpy ops to std";
  let constructor = "mlir::NPCOMP::createConvertBasicpyToStdPass()";
}

//===----------------------------------------------------------------------===//
// Numpy conversions
//===----------------------------------------------------------------------===//

def ConvertNumpyToTCF : Pass<"convert-numpy-to-tcf", "FuncOp"> {
  let summary = "Convert the numpy dialect to supported TCF ops";
  let constructor = "mlir::NPCOMP::createConvertNumpyToTCFPass()";
}

//===----------------------------------------------------------------------===//
// TCFToLinalg
//===----------------------------------------------------------------------===//

def ConvertTCFToLinalg : Pass<"convert-tcf-to-linalg", "FuncOp"> {
  let summary = "Convert TCF to Linalg";
  let description = [{
    The intention is for this pass to convert mainly to linalg named ops.

    Because linalg is at the "TCP" layer of abstraction, this pass has to
    concern itself with generating guards for error cases.
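
    For contrast with `convert-torch-to-linalg` above, the guards here are
    expressed with the `shape` dialect's witness system. A rough illustrative
    sketch (op spellings and value names are approximate, not verbatim output
    of this pass):

      %ok = cmpi eq, %lhs_dim1, %rhs_dim0 : index
      %witness = shape.cstr_require %ok, "mismatching contracting dimension"
      %result = shape.assuming %witness -> (tensor<?x?xf32>) {
        // %init is assumed to be an appropriately sized init tensor
        // (e.g. from linalg.init_tensor).
        %matmul = linalg.matmul
            ins(%lhs, %rhs : tensor<?x?xf32>, tensor<?x?xf32>)
            outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
        shape.assuming_yield %matmul : tensor<?x?xf32>
      }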
  }];
  let constructor = "mlir::NPCOMP::createConvertTCFToLinalgPass()";
}

//===----------------------------------------------------------------------===//
// TCFToStd
//===----------------------------------------------------------------------===//

def ConvertTCFToStd : Pass<"convert-tcf-to-std", "FuncOp"> {
  let summary = "Convert TCF to Std";
  let constructor = "mlir::NPCOMP::createConvertTCFToStdPass()";
}

//===----------------------------------------------------------------------===//
// TCFToTCP
//===----------------------------------------------------------------------===//

def ConvertTCFToTCP : Pass<"convert-tcf-to-tcp", "FuncOp"> {
  let summary = "Convert TCF to TCP";
  let constructor = "mlir::NPCOMP::createConvertTCFToTCPPass()";
}

#endif // NPCOMP_CONVERSION_PASSES