Commit Graph

130 Commits (83ad70ef548a188277d7c77f6874ee38d5bf7df5)

Author SHA1 Message Date
Sean Silva 83ad70ef54 [RefBackend] Move runtime related code under npcomp/RefBackend/
Other than the dialect definitions (which will live in the standard Dialect/
subdirectory), the goal here is to keep RefBackend-related code nested
in {include/npcomp,lib,test}/RefBackend.
2020-10-08 09:07:00 -07:00
Sean Silva 21255d5f8e [RefBackend] Rename "E2E" to RefBackend. 2020-10-07 10:29:48 -07:00
Sean Silva 03846ed8e7 Rename a couple of CMake targets.
NPCOMPFoo to NPCOMPFooDialect for consistency with others.
2020-10-07 10:29:48 -07:00
Sean Silva 5017430dc7 [RefBackend] Split out RefBackend (refback) dialect from TCP.
This is the first in a patch series that is refactoring the
constellation of things variously called or associated with "E2E",
"RefE2E", "npcomprt", and "TCP" into a more cleanly layered result.

Concretely, this first patch fixes the fact that TCP was basically
acting like a dumping ground needed by the reference backend. This
splits it out, which is fairly mechanical, but touches a lot of lines of
code (basically replacing `tcp` with `refback` and `TCP` with
`RefBackend`).

Now, the RefBackend dialect is that dumping ground, which
is slightly better, as it starts allowing TCP to become a nice clean
middle layer that is not related per se to the reference backend.

The previous name RefE2E or "reference e2e flow" was super confusing.
Now that we are seeing more clearly where the "backend" distinction
lies, the [RefBackend] commit tag is born :)
2020-10-07 10:29:48 -07:00
Stella Laurenzo 58b6033537 Bump llvm to ed46e84c7aaffd847656ac559acb06089096ec33.
* Minor change of MLIRStandardOps -> MLIRStandard
2020-10-06 22:02:57 -07:00
Sean Silva 8022dfaf1a [RefE2E] Initialize the linalg matmul accumulator buffer.
I was seeing some miscompiles due to the uninitialized data being read
here before this fix. Interestingly, this was masked in some of our
previous test cases, since the uninitialized data "always" happened to
be so small that it presented as a rounding error next to the 1.0-10.0
sized values that the matmul was computing on.
2020-10-02 16:24:52 -07:00
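
For illustration, a minimal sketch of the shape of this fix: zero-fill the accumulator buffer before the matmul accumulates into it. Op syntax here is assumed for the MLIR snapshot of the time, not quoted from the patch:

```mlir
// Hypothetical sketch: initialize the accumulator so the matmul never
// reads uninitialized memory.
%zero = constant 0.0 : f32
linalg.fill(%acc, %zero) : memref<?x?xf32>, f32
linalg.matmul ins(%lhs, %rhs : memref<?x?xf32>, memref<?x?xf32>)
              outs(%acc : memref<?x?xf32>)
```
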
Stella Laurenzo e5433e314f Add capture of function arguments.
* Adds at::Tensor -> MlirValue tracking.
* Adds conversions for tensor and scalar types to MLIR types.
* Adds npcomp C APIs for constructing custom types.
* Reworks the pybind includes so as to get the Torch pybind helpers (needed to pass the at::Tensor type from Python->C++).
2020-10-01 18:59:58 -07:00
Stella Laurenzo 3d74337be0 Add a torch.kernel_call op and associated predicates. 2020-09-29 15:10:38 -07:00
Stella Laurenzo ba03ecc652 Add public API for constructing a module/function to capture PyTorch ops.
* Uses the MLIR-C API since that will save us a lot of grief down the road (i.e. will give PyTorch and libMLIR/libNPCOMP the ability to skew version-wise).
* Quite a few TODOs remain, and the function is not yet populated in any way.
2020-09-29 14:23:22 -07:00
Stella Laurenzo 2c9ca79c89 Add boilerplate for Torch dialect. 2020-09-28 15:26:17 -07:00
Stella Laurenzo fb895173f2 Run format_sources.sh. 2020-09-28 12:04:24 -07:00
Stella Laurenzo b5f010284f Add boilerplate to do device capture (pytorch 1.6).
* Uses the new dispatcher API.
* Just prints to the console for the moment when an op is captured.
* Executes the op through the existing implementation.
2020-09-28 10:30:54 -07:00
Sean Silva 16c26ef57e [RefE2E] Use upstream shape constraint conversion pass.
Now that we upstreamed our pass, we can remove it.
The final pass that landed upstream doesn't do the shape.assuming
canonicalization to legalize that op away, so this adds a
restricted-canonicalizer pass that allows running just the shape dialect
canonicalizations, which delete the shape.assuming.
The pass ended up kind of ugly. See the TODOs on it for some potential
cleaner directions.
2020-09-28 09:34:44 -07:00
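
As a rough sketch of what gets legalized away (shape dialect syntax assumed): once the witness folds to shape.const_witness true, canonicalization inlines the region and deletes the shape.assuming:

```mlir
// Before shape-dialect canonicalizations:
%w = shape.const_witness true
%0 = shape.assuming %w -> (tensor<?xf32>) {
  %1 = "some.op"() : () -> tensor<?xf32>
  shape.assuming_yield %1 : tensor<?xf32>
}
// After: the body is inlined and the shape.assuming is deleted.
%0 = "some.op"() : () -> tensor<?xf32>
```
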
Sean Silva 6ea37cfed6 Bump llvm-project to 9ed1e5873c19eb817fb9e36d0262c7effee5d35e
Date:   Fri Sep 18 13:55:52 2020 -0700

- Update to linalg syntax
- New generated builders are better. Custom builder for
tcp.shaped_results is now redundant.
2020-09-28 09:34:44 -07:00
Sean Silva f9b37c55b7 [RefE2E] Add support for unary ops exp and tanh
This is fairly mechanical.
2020-09-24 18:41:30 -07:00
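
For reference, a sketch of the new ops at the TCF level (exact syntax assumed):

```mlir
// Hypothetical example of the new unary ops.
%0 = tcf.exp %arg0 : tensor<?xf32>
%1 = tcf.tanh %0 : tensor<?xf32>
```
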
Sean Silva 6b69beae6a [NFC] Remove stray .dump() that snuck in. 2020-09-24 18:41:30 -07:00
Sean Silva c69e9fabc5 [RefE2E] Add support for "max".
This cleans up the lowering pipeline to easily allow extending to
multiple binary ops. It looks fairly repetitive at multiple levels, but
I don't want to prematurely generalize. I think that in principle we
could derive a large swath of TCF + TCP from a single linalg-style
specification. Another direction is to use an OpInterface (something
like "buildLinalgGenericBody"). I'm keeping my eye on it.

In a subsequent commit, I'll mechanically add a set of binary ops
modeled off of the std arithmetic ops.
2020-09-22 18:38:32 -07:00
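
To make the "buildLinalgGenericBody" idea concrete, here is a hedged sketch of the scalar region a "max" lowering might emit inside a linalg.generic (std dialect syntax of that era assumed):

```mlir
// Hypothetical scalar body for a "max" linalg.generic.
^bb0(%lhs: f32, %rhs: f32):
  %pred = cmpf "ogt", %lhs, %rhs : f32
  %max = select %pred, %lhs, %rhs : f32
  linalg.yield %max : f32
```
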
Marius Brehler 681c4e1d4a Inject missing dialects in E2E passes 2020-09-22 08:52:23 +02:00
Sean Silva 7b7f35744b [RefE2E] Add interesting control flow example.
This also required adding a lowering for ForOp in our tensor->memref
conversion.
2020-09-21 12:25:24 -07:00
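
A sketch of the kind of loop this enables (names, shapes, and syntax assumed): an scf.for carrying a tensor iter_arg, which the tensor->memref conversion must now handle:

```mlir
// Hypothetical tensor-valued loop lowered by the new ForOp support.
func @loop(%init: tensor<?xf32>, %delta: tensor<?xf32>, %n: index) -> tensor<?xf32> {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  %result = scf.for %i = %c0 to %n step %c1
      iter_args(%acc = %init) -> (tensor<?xf32>) {
    %next = tcf.add %acc, %delta : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
    scf.yield %next : tensor<?xf32>
  }
  return %result : tensor<?xf32>
}
```
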
Sean Silva dc8afc9271 [RefE2E] Refactor how tcf.add is lowered.
It was previously going through this awkward route that prematurely
created linalg.generic ops, which was an annoying layering problem since
we can't compute a shape transfer function for linalg.generic in the
general case. Now we pass it through the same path as tcp.matmul, with
the shape transfer function being defined for tcp.add.

This also removed the need for TCPToLinalg (now deleted). The equivalent
of that is happening in lower-shaped-results-to-memref. One interesting
outcome of this: we're basically using linalg as a "Buffer TCP". We
might want to look into using named structured ops for more of TCP, but
that would be a big velocity hit since then any change to the ODS /
verification for those ops would be a change to the upstream structured
op ODS generator. After we have more experience defining this manually,
we should re-evaluate rebasing TCP on generated named linalg ops.
2020-09-18 15:03:53 -07:00
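
A hedged sketch of the path tcf.add now takes (op syntax assumed, not quoted from the tests): the shape transfer function materializes the result shape, and tcp.shaped_results ties it to the computation:

```mlir
// Hypothetical IR after BypassShapes for an add whose result shape
// equals its operand shape.
%shape = shape.shape_of %lhs : tensor<?xf32> -> tensor<?xindex>
%sum = tcp.shaped_results %shape {
  %0 = tcp.add %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
  tcp.yield %0 : tensor<?xf32>
} : tensor<?xindex> -> tensor<?xf32>
```
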
Sean Silva d8675f8ad2 [RefE2E] Add support for matmul.
I'm pretty happy with how this turned out. It looks pretty much like it
should -- one change at each layer. This particular op bottoms out on
linalg which takes care of the rest.

- Add tcf.matmul
- Add tcp.matmul
- Add TCF->TCP lowering
- Add tcp.matmul shape transfer function (BypassShapes.cpp)
- Add tcp.matmul -> linalg.matmul lowering (LowerShapedResultsToMemref.cpp)
- Add support to LowerShapeConstraints for lowering the new
shape.cstr_require

This matmul op is pretty limited in its capabilities. There is no
batching and no multidimensional contraction. Certainly more design work
will be needed to find the right abstractions that aren't too general
but also help to canonicalize many cases from frontends. This is mainly
to show that adding a new op needn't be very "scary" once we have the
e2e infra in place.

Also,
- this clears out some exploratory cruft from the TCF dialect now that
this is starting to become real.
2020-09-18 11:31:01 -07:00
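
To illustrate the TCF->TCP step (syntax assumed): the contracting-dimension check is reified as a witness via the new shape.cstr_require, which guards the tcp.matmul:

```mlir
// Hypothetical tcf.matmul -> tcp.matmul lowering with reified errors.
%c0 = constant 0 : index
%c1 = constant 1 : index
%k_lhs = dim %lhs, %c1 : tensor<?x?xf32>
%k_rhs = dim %rhs, %c0 : tensor<?x?xf32>
%eq = cmpi "eq", %k_lhs, %k_rhs : index
%witness = shape.cstr_require %eq, "mismatched contracting dimension"
%result = shape.assuming %witness -> (tensor<?x?xf32>) {
  %0 = tcp.matmul %lhs, %rhs
      : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>
  shape.assuming_yield %0 : tensor<?x?xf32>
}
```
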
Sean Silva 62738d3641 [RefE2E] Fix nul-termination bug.
I was seeing some of the error messages come out with some garbage at
the end. This fixes it.
2020-09-18 11:31:01 -07:00
Stella Laurenzo 678989a321
Update docker, instructions and some fixes for the pytorch 1.3 build. (#45)
* Includes pybind11 directly (for some reason using the pytorch helper header for this depends on a source file not in the image).
* Installs nnpack into the image.
* Installs a newer clang and LLD and configures the environment to use them (otherwise, link time is terrible).
* Fixes a gcc compile error (in the off chance you build with default gcc compiler).
* Tests are failing based on some dialect registration stuff that must not have been factored correctly. Will follow up with a fix.
2020-09-16 21:57:46 -07:00
Sean Silva 75f57b461e
Totally rework RefE2E tensor to memref flow. (#42)
This now gets the overall "RefE2E" compilation stack to a point that I'm
fairly happy with. We simplify it by mostly embracing the "descriptor"
view of the world.

The overall flow is best understood by reading through the
createE2ELoweringPipeline function in lib/E2E/E2E.cpp
That function creates a pass pipeline that lowers from "TCF" (which is
~numpy level of abstraction) down to LLVM IR.

A brief high-level summary of what happens there:

1. TCF to TCP conversion. This involves reifying error handling in the
form of shape constraints. See test/Conversion/TCFToTCP/basic.mlir

2. Lowering shape constraints. This converts shape constraints into
eager error-handling code. See test/E2E/lower-shape-constraints.mlir
This pass will soon go upstream.
Because this lowers to std.assert, some later passes like
LowerToNpcomprtABI and LowerToLLVM are updated to properly plumb this
through e2e.
See test/npcomp-run-mlir/invalid-broadcast.mlir for an execution test
that properly aborts in case of an error.

3. Lowering tensors to memrefs. This is done via a series of passes
rather than a single mega-conversion. Unlike the previous code that
mixed in the npcomprt ABI stuff here, it's now a very clean "pure
memref" conversion.
See test/E2E/lower-*-to-memref.mlir and
lib/E2E/TensorToMemref/
Most of the changes are concentrated here.

4. As part of the above, we use the upstream ConvertShapeToStandard for
lowering shapes.

5. We lower linalg to loops and lower loops to CFG using upstream
passes.

6. Rewrite the "ABI" boundaries of the program to npcomprt data
structures (LowerToNpcomprtABI). This mainly affects ABI boundaries and
how global tensor constants are represented. One of the major
improvements in this commit is that now it's a very clean rewrite that
just replaces memrefs on ABI boundaries with !npcomprt.tensor (before,
there was a get_extent function that is no longer needed); a sketch of
the rewritten boundary follows this entry.
See test/E2E/lower-to-npcomprt-abi.mlir

7. Lower to LLVM with upstream mlir patterns + some patterns for the
npcomprt lowerings.

One aspect here that is still a remnant of a non-descriptor-based tensor
to memref flow is the BypassShapes + LowerShapedResultsToMemref.
BypassShapes wraps the "tensor compute" ops in a tcp.shaped_results
(basically a "tie_shape" kind of op), and then
LowerShapedResultsToMemref uses those annotations to allocate output
buffers while lowering the "tensor compute ops". Note that there are
very few "tensor compute" ops currently supported (tcp.add +
tcp.broadcast_to), so we just hardcode them in both passes.
Realistically, I expect this to go away as we fully embrace the
descriptor-based approach for simplicity, so don't look too deep into
it.
2020-09-16 17:31:40 -07:00
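
As promised above, a sketch of the rewritten ABI boundary from step 6 (npcomprt op names per this commit; exact syntax assumed):

```mlir
// Hypothetical public function after LowerToNpcomprtABI: the boundary
// trades in !npcomprt.tensor, converted to/from memrefs internally.
func @identity(%arg0: !npcomprt.tensor) -> !npcomprt.tensor {
  %m = npcomprt.to_memref %arg0 : memref<*xf32>
  %r = npcomprt.from_memref %m : memref<*xf32>
  return %r : !npcomprt.tensor
}
```
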
Marius Brehler d62f8227c2
Bump LLVM to @7d1ed69 and fix namespace handling changed upstream.
* Bump LLVM to llvm/llvm-project@7d1ed69
* Bump MLIR-HLO to tensorflow/mlir-hlo@1880f87
* Adapt to MLIR's changed namespace handling
2020-09-16 15:52:15 -07:00
Stella Laurenzo dd9172fd75 Run clang-format on files that do not comply. 2020-09-15 17:54:58 -07:00
Marius Brehler 843448cde9 Register dialects in E2E passes 2020-09-11 09:33:44 +02:00
Marius Brehler a2fb68059f Remove unused include 2020-09-11 09:33:44 +02:00
Marius Brehler 124bc65a70 Register dialects in ATen lowering pass 2020-09-09 21:55:17 -07:00
Marius Brehler fb2d1a1559 Register dialects in conversion passes 2020-09-09 21:55:17 -07:00
Stella Laurenzo 81dd571c23 Integrate upstream LLVM at 8d9c13f37d2081c11186718ae8b5aef8b507d152.
* mlir-hlo: 062a3ac4a0671d15b5199ed2cd3a9ce02a5bf077

Fixes:

* numInputs() just returns an int instead of requiring a call to .getLimitedValue()
2020-09-08 20:34:31 -07:00
Stella Laurenzo 97d83f786a Bump submodule versions.
* llvm-project: b5924a8e27536d19dd5c4d302db29fb6163d5faa
* mhlo: 848ca244d20f045b7921da55a98a04d95ef94f0e
* Multiple breakages that need to be fixed.

Fixes:
* Refactor dialect registration
* Remove all kindof methods (Casting functionality has been added upstream and is implicitly
available, see https://llvm.discourse.group/t/removing-kinds-from-attributes-and-types/1547.)
* Update dialect registration to comply with https://reviews.llvm.org/D85495.
* Remove type kinds and update some changed dialect signatures.
* Upgrade ATen dialect to match upstream needs.
  * Move dialect registration to tablegen.
  * Register the ListType in tablegen.
  * Change dialect initialization signature.
* Use TypeSwitch in MlirIr location printer.
* Remove global registry depends from npcomp-opt.
* Change LowerToLLVM to pass an MLIRContext vs an LLVMDialect for type creation.
* Remove dep on MLIREDSCInterface that is removed upstream.
* Thread through the DialectRegistry for opt and python-like tools.
* Modernize pass registration (This was forced because the GEN_PASS_REGISTRATION code now generates inline functions vs literal pass registration statements)

Co-authored-by: Marius Brehler <marius.brehler@iml.fraunhofer.de>
2020-09-08 13:26:42 -07:00
Stella Laurenzo fc4f374345 Format sources. 2020-08-27 14:47:49 -07:00
Stella Laurenzo 69cda404ef NFC: Fix extra namespace declaration.
* Was causing build break on GCC9.
2020-08-20 16:22:41 -07:00
stephenneuendorffer bb668e6e26
Add ATen Dialect (#16)
This patch adds a dialect intended to be used as a frontend dialect
to facilitate lowering from "A Tensor Library" in torch/pytorch.

This patch includes several passes that are useful in conjunction with the
dialect:

--aten-layer-name: Generates layer names for each operation, which are not
  present in the original pytorch.
--aten-to-std: Lower the ATen dialect into standard dialect function calls.
--return-elimination-pass: Convert functions (primarily the toplevel function)
  to pass return values by reference. This simplifies pytorch integration.
--aten-op-report: Generate a textual report about the model.
--liveness-report

Future patches will implement actual integration with the pytorch jit to
intercept and generate MLIR in this dialect, then lower the resulting MLIR
into function calls through aten-layer-name -> aten-to-std ->
return-elimination -> std-to-llvm. The result would then be JITed using the
LLVM jit, linked against a runtime library which makes calls back into
pytorch to implement all the layers.

Co-authored-by: Jeff Fifield <jeff.fifield@xilinx.com>
2020-08-12 19:28:04 -07:00
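
For a rough feel of --aten-to-std (op and callee names hypothetical), an ATen dialect op becomes a standard dialect call that a runtime library can resolve back into pytorch:

```mlir
// Hypothetical before/after for --aten-to-std.
// Before:
%0 = "aten.relu"(%arg0) : (tensor<1x10xf32>) -> tensor<1x10xf32>
// After (roughly): a call into a runtime-provided function.
%1 = call @relu(%arg0) : (tensor<1x10xf32>) -> tensor<1x10xf32>
```
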
stephenneuendorffer 111ba12e7f
Fix build error (#12)
This debug rule only works with add_mlir_library.
2020-08-05 16:55:42 -07:00
stephenneuendorffer 146ea0a781
Update LLVM to c89e46e76... (#10)
Requires a fixup because BroadcastOp now has a configurable return type.
2020-08-05 14:51:02 -07:00
stephenneuendorffer 44af7a6d30
[cmake] Updates for basic shared library support (#7)
Mostly this is CMake cleanup.  Several library dependencies are missing, which
is often revealed with shared library builds.  Also, it's generally bad to
link directly against LLVM libraries because it fails when using
LLVM_LINK_LLVM_DYLIB.  MLIR will pull in libLLVM.so, and there will be
duplicate linkage with the explicit libraries.  There may need to be more
refactoring here.
2020-08-05 14:49:18 -07:00
Stella Laurenzo 3efbbe8735 Misc fixes to enable building/testing on manylinux2014 images.
* Since the manylinux images do not hard-link against python libs (resolving them at runtime), the module must be built without resolving undefined references.
* For some reason, builds on this platform are stricter, exposing dependency ordering issues.
* The CMake bits to build the extension are still somewhat of a mess. I have better versions both upstream and in IREE and will be reconciling. For now, don't look too closely.
2020-08-04 17:26:15 -07:00
Stella Laurenzo fc484d1bd8 Rework reference shape lowering based on upstream shape dialect changes.
* Primarily, the upstream shape dialect now uses tensor<?xindex> for non-erroring, immediate shape calculations (and will return this for shape_of of a tensor or memref).
* In addition, upstream passes do not yet exist for fully lowering to standard ops, so the passes here need to be extended to handle this new convention.
* This should be seen as an intermediate state, necessary to integrate a new LLVM version and needs more work and cleanup for generality.
* There is a good deal of awkwardness in these conversions. The hope is that additional upstream work will yield better defined conversion paths once out of this intermediate state.
2020-08-03 13:43:49 -07:00
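
A sketch of the new convention (syntax assumed for this snapshot): shape_of of a tensor now yields a tensor<?xindex> that can be indexed directly for immediate, non-erroring shape computation:

```mlir
// Hypothetical immediate shape calculation under the new convention.
%shape = shape.shape_of %arg0 : tensor<?x?xf32> -> tensor<?xindex>
%c0 = constant 0 : index
%dim0 = extract_element %shape[%c0] : tensor<?xindex>
```
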
Stella Laurenzo 9d5d802cc8 Fix compilation issues due to llvm-project version bump.
* Redundant infer type implementations removed.
* Update to the linalg GenericOp build calls.
2020-08-01 15:23:57 -07:00
Sean Silva a9d7610f9d Cleanup after going to `llvm` dialect.
This reduces IR size a lot, which can help when staring at it.
2020-07-13 16:15:42 -07:00
Sean Silva d0f15d2cec Add -optimize flag to npcomp-run-mlir so that it runs optimizations.
My main interest in this is that tweaking the default of this flag is a
quick way to check for miscompiling canonicalizations / op definitions
not annotated properly (e.g. marked NoSideEffect when in fact it is not
safe to do so).
2020-07-13 16:07:44 -07:00
Stella Laurenzo 0356f65dcd Wire through codegen and runtime dependencies.
* Enables e2e test.
* With what I've learned upstream about test directory layout, I can consolidate most of the separate directories we have for these things. Will do that in a follow-up.
* Not pleased with the LLVM global initialization dependencies, but serviceable for now.
2020-07-10 22:57:26 -07:00
Stella Laurenzo 9e4a62fc71 Allow JITModule passes to be built separately.
* Re-introduces the frontend/backend split.
* Adds a (very) trivial shape refinement pass.
2020-07-10 22:57:26 -07:00
Stella Laurenzo aea05d68d7 Initial python plumbing to interface with the refjit backend. 2020-07-10 22:57:26 -07:00
Sean Silva df0d3fcaff Consolidate LLVM definitions of runtime data structures.
This required making module descriptors hold a FuncDescriptor* instead
of a pointer to an array of FuncDescriptors as they previously did, which is
innocuous (just requires an llvm.bitcast after the llvm.mlir.addressof).
2020-07-10 17:50:55 -07:00
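
A hedged sketch of that fixup (LLVM dialect type syntax approximate for that era, and the struct fields elided to a placeholder): take the address of the one-element descriptor array, then bitcast to a bare pointer:

```mlir
// Hypothetical: addressof the [1 x FuncDescriptor] global, bitcast to
// FuncDescriptor* for storage in the module descriptor.
%0 = llvm.mlir.addressof @g_func_descriptors : !llvm<"[1 x { i32 }]*">
%1 = llvm.bitcast %0 : !llvm<"[1 x { i32 }]*"> to !llvm<"{ i32 }*">
```
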
Sean Silva e228aa4b11 npcomprt: add support for constants
- create tcp.global + tcp.get_global_memref
- create npcomprt.global + npcomprt.get_global
- LLVM lowering for new npcomprt ops
- Runtime:
 - GlobalDescriptor struct emitted by LLVM lowering
 - implement __npcomp_compiler_rt_get_global

Also,
- cleanly isolate all runtime data structure definitions shared by the
compiler and runtime into lib/runtime/CompilerDataStructures.h
2020-07-10 17:31:24 -07:00
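
A sketch of the two new levels at the TCP layer (op syntax assumed); the npcomprt pair mirrors this at the ABI level:

```mlir
// Hypothetical constant global and its accessor.
tcp.global @cst dense<[1.0, 2.0]> : tensor<2xf32>
func @get_cst() -> memref<2xf32> {
  %0 = tcp.get_global_memref @cst : memref<2xf32>
  return %0 : memref<2xf32>
}
```
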
Stella Laurenzo efbcf0aa44 Add NumpyPublicFunctionsToTensor pass.
* Rewrites public function signatures to operate on tensors (vs ndarray).
* Most of our backends presume immutable tensors at public function boundaries.
2020-07-08 22:51:54 -07:00
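
Illustrative before/after for the signature rewrite (type syntax assumed):

```mlir
// Hypothetical public signature before the pass:
func @predict(%arg0: !numpy.ndarray<[4]:f32>) -> !numpy.ndarray<[4]:f32>
// and after: immutable tensors at the boundary.
func @predict(%arg0: tensor<4xf32>) -> tensor<4xf32>
```
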
Stella Laurenzo 5ceb37c19b Add NumpyToTCF conversion.
* Just for numpy.add right now.
2020-07-08 21:03:57 -07:00
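
A sketch of the single pattern so far (op syntax assumed):

```mlir
// Hypothetical before/after for the numpy.add -> tcf.add pattern.
%0 = numpy.builtin_ufunc_call<"numpy.add"> (%lhs, %rhs)
    : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
// becomes:
%1 = tcf.add %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
```
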