torch-mlir/include/npcomp/Dialect/TCP/IR/TCPOps.td

//===-------------------------------------------------------*- tablegen -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef TCP_OPS
#define TCP_OPS
include "npcomp/Dialect/TCP/IR/TCPBase.td"
include "mlir/Dialect/Shape/IR/ShapeBase.td"
include "mlir/Interfaces/SideEffectInterfaces.td"
include "mlir/Interfaces/InferTypeOpInterface.td"
include "mlir/Interfaces/ControlFlowInterfaces.td"
include "mlir/IR/SymbolInterfaces.td"

class TCP_Op<string mnemonic, list<OpTrait> traits = []>
    : Op<TCP_Dialect, mnemonic, traits> {
}

def TCP_BroadcastToOp : TCP_Op<"broadcast_to"> {
  let summary = "Broadcasts an operand to a given shape.";
  let description = [{
    Broadcasts `operand` to the shape `shape`. It is undefined behavior if
    such a broadcast is not legal.
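
    Example (an illustrative sketch; the element types and ranks shown are
    assumptions, not requirements of the op):

      %result = tcp.broadcast_to %operand, %shape
          : (tensor<?xf32>, tensor<?xindex>) -> tensor<?x?xf32>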
  }];
  let arguments = (ins AnyRankedTensor:$operand, Shape_ExtentTensorType:$shape);
  let results = (outs AnyRankedTensor:$result);
  let assemblyFormat = "$operand `,` $shape attr-dict `:` functional-type(operands, results)";
}

def TCP_SplattedOp : TCP_Op<"splatted"> {
  let summary = "Creates a tensor filled with a particular scalar value.";
  let description = [{
    Creates a tensor of shape `shape` with all elements filled with `splatVal`.

    This op is somewhat redundant with tcp.broadcast_to. However,
    tcp.broadcast_to handles degenerate "size-1" broadcasting, which
    structurally cannot happen with this op. To avoid losing that information,
    we keep this op separate.

    NOTE: The name "splatted" distinguishes this op from std.splat, which
    currently only handles statically shaped memrefs.

    TODO: Improve std.splat to take dynamic shapes.
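
    Example (an illustrative sketch; the scalar and result types shown are
    assumptions, not requirements of the op):

      %result = tcp.splatted %splatVal, %shape
          : (f32, tensor<?xindex>) -> tensor<?x?xf32>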
  }];
  let arguments = (ins AnyType:$splatVal, Shape_ExtentTensorType:$shape);
  let results = (outs AnyRankedTensor:$result);
  let assemblyFormat = "$splatVal `,` $shape attr-dict `:` functional-type(operands, results)";
}

def TCP_PadOp : TCP_Op<"pad"> {
  let summary = "Pads a tensor with a fill value.";
  let description = [{
    Pads a tensor with `fillVal` along the borders of each dimension according
    to `lowerExpansion` and `upperExpansion`. Note that this op is unmanaged,
    meaning that it assumes its operands and their shapes are valid.

    The tensors have dimensions:
    - operand: [D1, D2, ..., DN]
    - lowerExpansion: [L1, L2, ..., LN]
    - upperExpansion: [U1, U2, ..., UN]
    - fillVal: scalar
    - result: [D1+L1+U1, D2+L2+U2, ..., DN+LN+UN]
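
    Example (an illustrative sketch; the element type and rank shown are
    assumptions, not requirements of the op):

      %result = tcp.pad %operand, %lowerExpansion, %upperExpansion, %fillVal
          : (tensor<?xf32>, tensor<?xindex>, tensor<?xindex>, f32)
            -> tensor<?xf32>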
  }];
  let arguments = (ins
      AnyRankedTensor:$operand,
      Shape_ExtentTensorType:$lowerExpansion,
      Shape_ExtentTensorType:$upperExpansion,
      AnyType:$fillVal);
  let results = (outs AnyRankedTensor:$result);
  let assemblyFormat = "$operand `,` $lowerExpansion `,` $upperExpansion `,` $fillVal attr-dict `:` functional-type(operands, results)";
}
#endif // TCP_OPS