* Forks a subset of my shell functions into docker_shell_funcs.sh, specifically those needed to create docker images that run as yourself.
* Extends the readme with the three-command bootstrap to get a dev container running.
* Step-by-step instructions for configuring VSCode for Intellisense in either npcomp or LLVM.
* Changes LLVM config options to enable tests. This setup is now suitable for making upstream changes as well, without rebuilding.
This is the first in a patch series that is refactoring the
constellation of things variously called or associated with "E2E",
"RefE2E", "npcomprt", and "TCP" into a more cleanly layered result.
Concretely, this first patch fixes the fact that TCP was basically
acting like a dumping ground needed by the reference backend. This
splits it out, which is fairly mechanical, but touches a lot of lines of
code (basically replacing `tcp` with `refback` and `TCP` with
`RefBackend`).
Now, the RefBackend dialect is that dumping ground, which
is slightly better, as it starts allowing TCP to become a nice clean
middle layer that is not related per se to the reference backend.
The previous name RefE2E or "reference e2e flow" was super confusing.
Now that we are seeing more clearly where the "backend" distinction
lies, the [RefBackend] commit tag is born :)
* Had to stop short of modifying the function return signature because of a missing C-API upstream.
* What's committed here is good enough for a test; the various TODOs about upstream APIs will be resolved next.
This unblocks backend progress while the PyTorch frontend work is coming
online. Hopefully we can delete this soon.
See tools/mnist-playground/README.md for more context on what this tool
is for, next steps, and current status.
I was seeing some miscompiles due to the uninitialized data read here
before. Interestingly, this was masked in some of our previous test
cases, since the uninitialized data "always" was so small that it would
present as a rounding error for the 1.0-10.0 sized values that the
matmul was computing on.
* Adds at::Tensor -> MlirValue tracking.
* Adds conversions for tensor and scalar types to MLIR types (the dtype mapping is sketched below).
* Adds npcomp C APIs for constructing custom types.
* Reworks pybind include so as to get Torch pybind helpers (needed to pass at::Tensor type from Python->C++).
* Uses the MLIR-C API since that will save us a lot of grief down the road (i.e. will give PyTorch and libMLIR/libNPCOMP the ability to skew version-wise).
* Quite a few TODOs and not yet populating the function in any way.
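As a rough illustration of the type mapping above, here is a minimal sketch using the present-day MLIR-C builtin type API (the exact entry points have shifted over time, and this is not the actual binding code); only a few dtypes are shown and error handling is elided.

```cpp
// Sketch only: map an at::Tensor to an MLIR ranked tensor type via the
// MLIR-C API. Only a few dtypes are handled; real code reports errors.
#include <ATen/ATen.h>
#include "mlir-c/BuiltinTypes.h"
#include "mlir-c/IR.h"

static MlirType elementTypeFor(MlirContext ctx, at::ScalarType dtype) {
  switch (dtype) {
  case at::kFloat:
    return mlirF32TypeGet(ctx);
  case at::kDouble:
    return mlirF64TypeGet(ctx);
  case at::kLong:
    return mlirIntegerTypeSignedGet(ctx, 64);
  default:
    return {nullptr}; // unsupported dtype
  }
}

static MlirType tensorTypeFor(MlirContext ctx, const at::Tensor &tensor) {
  at::IntArrayRef sizes = tensor.sizes();
  return mlirRankedTensorTypeGet(sizes.size(), sizes.data(),
                                 elementTypeFor(ctx, tensor.scalar_type()),
                                 /*encoding=*/mlirAttributeGetNull());
}
```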
* Uses the new dispatcher API (the capture hook is sketched below).
* Just prints to the console for the moment when an op is captured.
* Executes the op through the existing implementation.
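A rough sketch of what such a capture hook looks like, assuming the standard boxed-fallback registration that the PyTorch dispatcher provides (this is not the actual npcomp code, and the dispatch key is illustrative only):

```cpp
// Sketch only: a boxed fallback that logs each dispatched op, then runs the
// existing implementation. The PrivateUse1 key is illustrative.
#include <ATen/core/dispatch/Dispatcher.h>
#include <c10/core/impl/LocalDispatchKeySet.h>
#include <torch/library.h>

#include <iostream>

static void captureFallback(const c10::OperatorHandle &op,
                            torch::jit::Stack *stack) {
  // Just print to the console for the moment when an op is captured.
  std::cout << "captured: " << op.schema().name() << "\n";
  // Drop below our key so the op executes through the existing
  // implementation instead of recursing back into this fallback.
  c10::impl::ExcludeDispatchKeyGuard guard(c10::DispatchKey::PrivateUse1);
  op.callBoxed(stack);
}

TORCH_LIBRARY_IMPL(_, PrivateUse1, m) {
  m.fallback(torch::CppFunction::makeFromBoxedFunction<&captureFallback>());
}
```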
Now that we upstreamed our pass, we can remove it.
The final pass that landed upstream doesn't do the shape.assuming
canonicalization needed to legalize that op away, so this adds a
restricted-canonicalizer pass that allows running just the shape dialect
canonicalizations, which delete the shape.assuming.
The pass ended up kind of ugly. See the TODOs on it for some potential
cleaner directions.
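For reference, the idea behind the restricted canonicalizer is roughly the following, written against present-day MLIR APIs (a sketch, not the pass as it landed):

```cpp
// Sketch only: gather canonicalization patterns just from ops in the shape
// dialect and apply them greedily; applying these is enough to delete the
// shape.assuming regions.
#include "mlir/IR/MLIRContext.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

using namespace mlir;

namespace {
struct ShapeOnlyCanonicalizer
    : PassWrapper<ShapeOnlyCanonicalizer, OperationPass<>> {
  void runOnOperation() override {
    MLIRContext *context = &getContext();
    RewritePatternSet patterns(context);
    for (RegisteredOperationName opName : context->getRegisteredOperations())
      if (opName.getDialectNamespace() == "shape")
        opName.getCanonicalizationPatterns(patterns, context);
    (void)applyPatternsAndFoldGreedily(getOperation(), std::move(patterns));
  }
};
} // namespace
```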
Date: Fri Sep 18 13:55:52 2020 -0700
- Update to linalg syntax
- New generated builders are better. Custom builder for
tcp.shaped_results is now redundant.
* Exposes the op registry via a get_registered_ops method.
* Moves the aten dialect generation scripts in prep for integrating them with this facility.
This cleans up the lowering pipeline to easily allow extending to
multiple binary ops. It looks fairly repetitive at multiple levels, but
I don't want to prematurely generalize. I think that in principle we
could derive a large swath of TCF + TCP from a single linalg-style
specification. Another direction is to use an OpInterface (something
like "buildLinalgGenericBody"). I'm keeping my eye on it.
In a subsequent commit, I'll mechanically add a set of binary ops
modeled off of the std arithmetic ops.
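A minimal sketch of the "buildLinalgGenericBody" idea, purely hypothetical and not code in the repo: each TCP elementwise op would emit only its scalar payload, and one shared lowering would supply the surrounding linalg.generic (indexing maps, iterator types, output buffer).

```cpp
// Hypothetical interface method, not code in the repo. (npcomp dialect
// headers omitted.)
using namespace mlir;

// The scalar payload for an elementwise add; a shared lowering would wrap
// this in the linalg.generic boilerplate.
Value tcp::AddOp::buildLinalgGenericBody(OpBuilder &b, Location loc,
                                         ValueRange payloadArgs) {
  return b.create<AddFOp>(loc, payloadArgs[0], payloadArgs[1]);
}
```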
* Still need to add a systematic mechanism for discovering gradient ops.
* Work needed on the various _ suffixed inplace ops.
* Other randoms still not mapped.
* Outside of this commit, I do have enough commented out/reworked to roughly build, but it will take another handful of commits to get going.
It was previously going through this awkward route that prematurely
created linalg.generic ops, which was an annoying layering problem since
we can't compute a shape transfer function for linalg.generic in the
general case. Now we pass it through the same path as tcp.matmul, with
the shape transfer function being defined for tcp.add.
This also removed the need for TCPToLinalg (now deleted). The equivalent
of that is happening in lower-shaped-results-to-memref. One interesting
outcome of this: we're basically using linalg as a "Buffer TCP". We
might want to look into using named structured ops for more of TCP, but
that would be a big velocity hit since then any change to the ODS /
verification for those ops would be a change to the upstream structured
op ODS generator. After we have more experience defining this manually,
we should re-evaluate rebasing TCP on generated named linalg ops.
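For concreteness, a rough sketch of the shape transfer function for tcp.add in the style used by BypassShapes (simplified, and the accessor names are assumptions): since both operands are already broadcast to a common shape by tcp.broadcast_to, the result shape is just the shape of either operand.

```cpp
// Sketch only; accessor names are assumptions. (npcomp headers omitted.)
using namespace mlir;

static SmallVector<Value, 6> bypassResultShapes(Operation &op,
                                                OpBuilder &builder) {
  if (auto add = dyn_cast<tcp::AddOp>(&op)) {
    // Operands are pre-broadcast to a common shape, so the result shape is
    // just the shape of either operand.
    Value shape = builder.create<shape::ShapeOfOp>(op.getLoc(), add.lhs());
    return {shape};
  }
  // ... tcp.matmul, tcp.broadcast_to, etc. are handled similarly.
  return {};
}
```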
I'm pretty happy with how this turned out. It looks pretty much like it
should -- one change at each layer. This particular op bottoms out on
linalg which takes care of the rest.
- Add tcf.matmul
- Add tcp.matmul
- Add TCF->TCP lowering
- Add tcp.matmul shape transfer function (BypassShapes.cpp)
- Add tcp.matmul -> linalg.matmul lowering (LowerShapedResultsToMemref.cpp); a rough sketch is at the end of this note
- Add support to LowerShapeConstraints for lowering the new
shape.cstr_require
This matmul op is pretty limited in its capabilities. There is no
batching and no multidimensional contraction. Certainly more design work
will be needed to find the right abstractions that aren't too general
but also help to canonicalize many cases from frontends. This is mainly
to show that adding a new op needn't be very "scary" once we have the
e2e infra in place.
Also,
- this clears out some exploratory cruft from the TCF dialect now that
this is starting to become real.
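A rough sketch of the tcp.matmul -> linalg.matmul step listed above. The real pattern lives in LowerShapedResultsToMemref.cpp and obtains the output buffer from the enclosing tcp.shaped_results machinery; here that is stood in for by a hypothetical allocateResultBuffer helper, and the operand accessors are assumptions.

```cpp
// Sketch only; allocateResultBuffer is a hypothetical stand-in for the
// buffer allocation driven by tcp.shaped_results. (npcomp headers omitted.)
using namespace mlir;

class LowerMatmulOp : public OpConversionPattern<tcp::MatmulOp> {
public:
  using OpConversionPattern::OpConversionPattern;
  LogicalResult
  matchAndRewrite(tcp::MatmulOp op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // `operands` are the already-bufferized (memref) lhs and rhs.
    Value resultBuffer = allocateResultBuffer(rewriter, op); // hypothetical
    rewriter.create<linalg::MatmulOp>(
        op.getLoc(),
        /*inputs=*/ValueRange{operands[0], operands[1]},
        /*outputs=*/ValueRange{resultBuffer});
    rewriter.replaceOp(op, resultBuffer);
    return success();
  }
};
```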
* Includes pybind11 directly (for some reason using the pytorch helper header for this depends on a source file not in the image).
* Installs nnpack into the image.
* Installs a newer clang and LLD and configures the environment to use them (otherwise, link time is terrible).
* Fixes a gcc compile error (on the off chance you build with the default gcc compiler).
* Tests are failing based on some dialect registration stuff that must not have been factored correctly. Will follow up with a fix.
This now gets the overall "RefE2E" compilation stack to a point that I'm
fairly happy with. We simplify it by mostly embracing the "descriptor"
view of the world.
The overall flow is best understood by reading through the
createE2ELoweringPipeline function in lib/E2E/E2E.cpp (a rough sketch of how it assembles the pass pipeline is included at the end of this note).
That function creates a pass pipeline that lowers from "TCF" (which is
~numpy level of abstraction) down to LLVM IR.
A brief high-level summary of what happens there:
1. TCF to TCP conversion. This involves reifying error handling in the
form of shape constraints. See test/Conversion/TCFToTCP/basic.mlir
2. Lowering shape constraints. This converts shape constraints into
eager error-handling code. See test/E2E/lower-shape-constraints.mlir
This pass will soon go upstream.
Because this lowers to std.assert, some later passes like
LowerToNpcomprtABI and LowerToLLVM are updated to properly plumb this
through e2e.
See test/npcomp-run-mlir/invalid-broadcast.mlir for an execution test
that properly aborts in case of an error.
3. Lowering tensors to memrefs. This is done via a series of passes
rather than a single mega conversion. Unlike the previous code that
mixed in the npcomprt ABI stuff here, it's now a very clean "pure
memref" conversion.
See test/E2E/lower-*-to-memref.mlir and
lib/E2E/TensorToMemref/
Most of the changes are concentrated here.
4. As part of the above, we use the upstream ConvertShapeToStandard for
lowering shapes.
5. We lower linalg to loops and lower loops to CFG using upstream
passes.
6. Rewrite the "ABI" boundaries of the program to npcomprt data
structures (LowerToNpcomprtABI). This mainly affects ABI boundaries and
how global tensor constants are represented. One of the major
improvements in this commit is that now it's a very clean rewrite that
just replaces memrefs on ABI boundaries with !npcomprt.tensor (previously
there was a get_extent function, which is no longer needed).
See test/E2E/lower-to-npcomprt-abi.mlir
7. Lower to LLVM with upstream mlir patterns + some patterns for the
npcomprt lowerings.
One aspect here that is still a remnant of a non-descriptor-based tensor
to memref flow is the BypassShapes + LowerShapedResultsToMemref.
BypassShapes wraps the "tensor compute" ops in a tcp.shaped_results
(basically a "tie_shape" kind of op), and then
LowerShapedResultsToMemref uses those annotations to allocate output
buffers while lowering the "tensor compute ops". Note that there are
very few "tensor compute" ops currently supported (tcp.add +
tcp.broadcast_to), so we just hardcode them in both passes.
Realistically, I expect this to go away as we fully embrace the
descriptor-based approach for simplicity, so don't look too deep into
it.
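To make the flow above easier to see at a glance, here is a rough sketch of how such a pipeline is assembled; the create*Pass factory names are assumptions rather than the literal contents of lib/E2E/E2E.cpp, but the ordering follows the summary above.

```cpp
// Sketch only: pass factory names are assumptions; ordering mirrors the
// summary above. The real function is in lib/E2E/E2E.cpp.
#include "mlir/Pass/PassManager.h"

void createE2ELoweringPipeline(mlir::OpPassManager &pm) {
  // 1. TCF -> TCP, reifying error handling as shape constraints.
  pm.addPass(createConvertTCFToTCPPass());
  // 2. Lower shape constraints into eager error-handling code (std.assert).
  pm.addPass(createLowerShapeConstraintsPass());
  // 3. Tensors -> memrefs, as a series of small "pure memref" passes,
  //    using the tcp.shaped_results annotations from BypassShapes.
  pm.addPass(createBypassShapesPass());
  pm.addPass(createLowerShapedResultsToMemrefPass());
  // 4. Lower shapes with the upstream ConvertShapeToStandard.
  pm.addPass(mlir::createConvertShapeToStandardPass());
  // 5. Linalg -> loops -> CFG, using upstream passes.
  pm.addPass(mlir::createConvertLinalgToLoopsPass());
  pm.addPass(mlir::createLowerToCFGPass());
  // 6. Rewrite ABI boundaries to npcomprt data structures.
  pm.addPass(createLowerToNpcomprtABIPass());
  // 7. Lower to LLVM (upstream patterns + npcomprt lowerings).
  pm.addPass(createLowerToLLVMPass());
}
```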
* Add a new python script to auto-generate ATen op ODS definitions.
* There is still some work on some of the ops to annotate correct types.
* The ODS is not actually included into the dialect yet, but I'd like to commit it so that we can track changes.
* Will reconcile this with the ops produced by the existing script in a followup. Still need to do some more iteration to reach parity.
* llvm-project: b5924a8e27536d19dd5c4d302db29fb6163d5faa
* mhlo: 848ca244d20f045b7921da55a98a04d95ef94f0e
* Multiple breakages that need to be fixed.
Fixes:
* Refactor dialect registration
* Remove all kindof methods (Casting functionality has been added upstream and is implicitly
available, see https://llvm.discourse.group/t/removing-kinds-from-attributes-and-types/1547.)
* Update dialect registration to comply with https://reviews.llvm.org/D85495.
* Remove type kinds and update some changed dialect signatures.
* Upgrade ATen dialect to match upstream needs.
* Move dialect registration to tablegen.
* Register the ListType in tablegen.
* Change dialect initialization signature.
* Use TypeSwitch in MlirIr location printer.
* Remove global registry depends from npcomp-opt.
* Change LowerToLLVM to pass an MLIRContext vs an LLVMDialect for type creation.
* Remove dep on MLIREDSCInterface that is removed upstream.
* Thread through the DialectRegistry for opt and python-like tools (see the sketch below).
* Modernize pass registration (This was forced because the GEN_PASS_REGISTRATION code now generates inline functions vs literal pass registration statements)
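As a reference point for the registry threading, this is roughly what an opt-like entry point looks like after https://reviews.llvm.org/D85495 (a sketch against the MLIR APIs of that era; the npcomp-specific registration helper is hypothetical):

```cpp
// Sketch only; registerNpcompDialects is a hypothetical project helper.
#include "mlir/IR/Dialect.h"
#include "mlir/InitAllDialects.h"
#include "mlir/Support/MlirOptMain.h"

int main(int argc, char **argv) {
  mlir::DialectRegistry registry;
  // Populate the registry instead of the removed global registry; dialects
  // are loaded into the context on demand.
  mlir::registerAllDialects(registry);
  // registerNpcompDialects(registry); // hypothetical npcomp helper
  return mlir::failed(mlir::MlirOptMain(argc, argv, "npcomp-opt driver\n",
                                        registry,
                                        /*preloadDialectsInContext=*/false));
}
```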
Co-authored-by: Marius Brehler <marius.brehler@iml.fraunhofer.de>