# refback-run is always linked dynamically as we want to distribute the
# binaries with the python packages for hacking/debugging.

get_property(dialect_libs GLOBAL PROPERTY NPCOMP_DIALECT_LIBS)
get_property(conversion_libs GLOBAL PROPERTY NPCOMP_CONVERSION_LIBS)
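
The two `get_property` calls read global properties that the project's library-creation macros are expected to populate as each dialect/conversion library is declared. A minimal sketch of how such a macro could do that registration, modeled on LLVM's `MLIR_DIALECT_LIBS` convention (the macro body and any target names here are illustrative assumptions, not code from this repository):

```cmake
# Hypothetical sketch: an add_npcomp_dialect_library-style wrapper that
# appends each declared library to the global property which this file
# later reads back with get_property(... NPCOMP_DIALECT_LIBS).
function(add_npcomp_dialect_library name)
  add_mlir_library(${name} ${ARGN})
  set_property(GLOBAL APPEND PROPERTY NPCOMP_DIALECT_LIBS ${name})
endfunction()
```

This pattern lets a tool like `refback-run` link "all dialect libraries" without maintaining an explicit list, at the cost of the coarse dependency the TODO below complains about.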
add_npcomp_executable(refback-run
  refback-run.cpp
)

llvm_update_compile_flags(refback-run)
target_link_libraries(refback-run PRIVATE
  NPCOMPCAPI
  NPCOMPInitAll
  MLIRAnalysis
  MLIRIR
  MLIRJitRunner
  MLIRParser
  MLIRSupport
  NPCOMPRefBackendJITHelpers
  TorchMLIRInitAll

  # TODO: Remove these in favor of interface deps.
  ${conversion_libs}
  ${dialect_libs}
)

add_dependencies(refback-run
  NPCOMPCompilerRuntimeShlib
)
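
The comment at the top of this file ties the dynamic-linking choice to shipping the binary alongside the Python packages. One plausible way that packaging step could be expressed in CMake (the `DESTINATION` path and `COMPONENT` name are illustrative guesses, not this project's actual layout):

```cmake
# Hypothetical sketch: install refback-run next to the Python packages so it
# can be distributed with them for hacking/debugging. The destination path
# and component name are assumptions, not taken from this repository.
install(TARGETS refback-run
  RUNTIME DESTINATION python_packages/npcomp/bin
  COMPONENT NPCOMPPythonModules)
```

Note also the `add_dependencies` on `NPCOMPCompilerRuntimeShlib` above: since the runtime shared library is loaded at JIT time rather than linked into the executable, `target_link_libraries` would not express the needed build ordering, so a plain build-order dependency is used instead.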