torch-mlir/test/refback-run/basic.mlir

// RUN: refback-run %s \
// RUN: -invoke forward \
// RUN: -arg-value="dense<[[1.0, 2.0], [3.0, 4.0]]> : tensor<2x2xf32>" \
// RUN: -arg-value="dense<[10.0, 20.0]> : tensor<2xf32>" \
// RUN: -shared-libs=%npcomp_runtime_shlib 2>&1 \
// RUN: | FileCheck %s
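
// A brief note on what this test exercises (my summary, not in the original
// file): refback-run is just a driver for invoking RefBackend. It compiles
// this module, calls @forward with the two -arg-value constants above, and
// prints each result, which FileCheck matches against the CHECK line below.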
// CHECK{LITERAL}: output #0: dense<[[1.100000e+01, 2.200000e+01], [1.300000e+01, 2.400000e+01]]> : tensor<2x2xf32>
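
// #map0 is the identity map over the 2-D iteration space; #map1 indexes the
// 1-D operand by the inner dimension only, implementing the broadcast.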
#map0 = affine_map<(d0, d1) -> (d0, d1)>
#map1 = affine_map<(d0, d1) -> (d1)>
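// Broadcasted elementwise add: result[d0, d1] = %arg0[d0, d1] + %arg1[d1].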
builtin.func @forward(%arg0: tensor<?x?xf32>, %arg1: tensor<?xf32>) -> tensor<?x?xf32> {
  %c1 = constant 1 : index
  %c0 = constant 0 : index
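  // Check at runtime that the inner dimension of %arg0 matches the size of
  // %arg1; the broadcast below is only valid when they agree.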
  %0 = tensor.dim %arg0, %c0 : tensor<?x?xf32>
  %1 = tensor.dim %arg0, %c1 : tensor<?x?xf32>
  %2 = tensor.dim %arg1, %c0 : tensor<?xf32>
  %3 = cmpi eq, %1, %2 : index
  assert %3, "mismatched size for broadcast"
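  // linalg.init_tensor materializes a tensor with the output shape; the
  // linalg.generic then computes the broadcasted add into it.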
  %4 = linalg.init_tensor [%0, %1] : tensor<?x?xf32>
  %5 = linalg.generic {indexing_maps = [#map0, #map1, #map0], iterator_types = ["parallel", "parallel"]} ins(%arg0, %arg1 : tensor<?x?xf32>, tensor<?xf32>) outs(%4 : tensor<?x?xf32>) {
  ^bb0(%arg2: f32, %arg3: f32, %arg4: f32):  // no predecessors
    %6 = addf %arg2, %arg3 : f32
    linalg.yield %6 : f32
  } -> tensor<?x?xf32>
  return %5 : tensor<?x?xf32>
}