Commit Graph

372 Commits (73ba09c58738504869e65a5cf11e946facb61b92)

Author SHA1 Message Date
Sean Silva c69e9fabc5 [RefE2E] Add support for "max".
This cleans up the lowering pipeline to easily allow extending to
multiple binary ops. It looks fairly repetitive at multiple levels, but
I don't want to prematurely generalize. I think that in principle we
could derive a large swath of TCF + TCP from a single linalg-style
specification. Another direction is to use an OpInterface (something
like "buildLinalgGenericBody"). I'm keeping my eye on it.

In a subsequent commit, I'll mechanically add a set of binary ops
modeled off of the std arithmetic ops.
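
For illustration, a hypothetical sketch of the two levels (the op syntax here is assumed, not copied from the actual dialect definitions):

    // Hypothetical syntax throughout this sketch.
    // TCF level: implicit broadcasting, error handling is implicit.
    %0 = tcf.max %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>

    // TCP level: constraints are reified; tcp.max has "pure or undefined
    // behavior" semantics inside the guarded region.
    %w = shape.cstr_broadcastable %lhsShape, %rhsShape : tensor<?xindex>, tensor<?xindex>
    %1 = shape.assuming %w -> (tensor<?xf32>) {
      %2 = tcp.max %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
      shape.assuming_yield %2 : tensor<?xf32>
    }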
2020-09-22 18:38:32 -07:00
Sean Silva 276f5b80ea [RefE2E] Add assemblyFormat for TCF and TCP ops and tidy up. 2020-09-18 15:03:53 -07:00
Sean Silva d8675f8ad2 [RefE2E] Add support for matmul.
I'm pretty happy with how this turned out. It looks pretty much like it
should -- one change at each layer. This particular op bottoms out on
linalg which takes care of the rest.

- Add tcf.matmul
- Add tcp.matmul
- Add TCF->TCP lowering
- Add tcp.matmul shape transfer function (BypassShapes.cpp)
- Add tcp.matmul -> linalg.matmul lowering (LowerShapedResultsToMemref.cpp)
- Add support to LowerShapeConstraints for lowering the new
shape.cstr_require

This matmul op is pretty limited in its capabilities. There is no
batching and no multidimensional contraction. Certainly more design work
will be needed to find the right abstractions that aren't too general
but also help to canonicalize many cases from frontends. This is mainly
to show that adding a new op needn't be very "scary" once we have the
e2e infra in place.

Also,
- this clears out some exploratory cruft from the TCF dialect now that
this is starting to become real.
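
As a hedged sketch of the TCF->TCP step for matmul (the op spellings and the exact constraint plumbing are assumptions):

    // TCF level: numpy-ish matmul, no explicit error handling.
    %0 = tcf.matmul %lhs, %rhs : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>

    // After TCF->TCP: the "contraction dims must agree" check is reified
    // via shape.cstr_require; tcp.matmul later lowers to linalg.matmul.
    %c0 = constant 0 : index
    %c1 = constant 1 : index
    %k_lhs = dim %lhs, %c1 : tensor<?x?xf32>
    %k_rhs = dim %rhs, %c0 : tensor<?x?xf32>
    %eq = cmpi "eq", %k_lhs, %k_rhs : index
    %w = shape.cstr_require %eq, "mismatching contracting dimension for matmul"
    %1 = shape.assuming %w -> (tensor<?x?xf32>) {
      %2 = tcp.matmul %lhs, %rhs : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>
      shape.assuming_yield %2 : tensor<?x?xf32>
    }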
2020-09-18 11:31:01 -07:00
Sean Silva 75f57b461e Totally rework RefE2E tensor to memref flow. (#42)
This now gets the overall "RefE2E" compilation stack to a point that I'm
fairly happy with. We simplify it by mostly embracing the "descriptor"
view of the world.

The overall flow is best understood by reading through the
createE2ELoweringPipeline function in lib/E2E/E2E.cpp.
That function creates a pass pipeline that lowers from "TCF" (which is
~numpy level of abstraction) down to LLVM IR.

A brief high-level summary of what happens there:

1. TCF to TCP conversion. This involves reifying error handling in the
form of shape constraints. See test/Conversion/TCFToTCP/basic.mlir

2. Lowering shape constraints. This converts shape constraints into
eager error-handling code (a sketch follows this list). See test/E2E/lower-shape-constraints.mlir
This pass will soon go upstream.
Because this lowers to std.assert, some later passes like
LowerToNpcomprtABI and LowerToLLVM are updated to properly plumb this
through e2e.
See test/npcomp-run-mlir/invalid-broadcast.mlir for an execution test
that properly aborts in case of an error.

3. Lowering tensors to memrefs. This is done via a series of passes
rather than a single mega conversion. Unlike the previous code that
mixed in the npcomprt ABI stuff here, it's now a very clean "pure
memref" conversion.
See test/E2E/lower-*-to-memref.mlir and
lib/E2E/TensorToMemref/
Most of the changes are concentrated here.

4. As part of the above, we use the upstream ConvertShapeToStandard for
lowering shapes.

5. We lower linalg to loops and lower loops to CFG using upstream
passes.

6. Rewrite the "ABI" boundaries of the program to npcomprt data
structures (LowerToNpcomprtABI). This mainly affects ABI boundaries and
how global tensor constants are represented. One of the major
improvements in this commit is that now it's a very clean rewrite that
just replaces memrefs on ABI boundaries with !npcomprt.tensor (before
there was a get_extent function that was not needed).
See test/E2E/lower-to-npcomprt-abi.mlir

7. Lower to LLVM with upstream mlir patterns + some patterns for the
npcomprt lowerings.
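
A rough sketch of what steps 1 and 2 produce (op spellings assumed, not lifted from the actual tests):

    // After step 1 (TCF->TCP): the constraint is reified and guards the
    // compute. (Hypothetical syntax.)
    %w = shape.cstr_broadcastable %lhsShape, %rhsShape : tensor<?xindex>, tensor<?xindex>
    %r = shape.assuming %w -> (tensor<?xf32>) {
      %0 = tcp.add %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
      shape.assuming_yield %0 : tensor<?xf32>
    }

    // After step 2: the constraint becomes an eager std.assert (here
    // %broadcastable is an i1 computed from the operand shapes) and the
    // shape.assuming region is inlined.
    assert %broadcastable, "required broadcastable shapes"
    %r2 = tcp.add %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>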

One aspect here that is still a remnant of a non-descriptor-based tensor
to memref flow is the BypassShapes + LowerShapedResultsToMemref pair of passes.
BypassShapes wraps the "tensor compute" ops in a tcp.shaped_results
(basically a "tie_shape" kind of op), and then
LowerShapedResultsToMemref uses those annotations to allocate output
buffers while lowering the "tensor compute ops". Note that there are
very few "tensor compute" ops currently supported (tcp.add +
tcp.broadcast_to), so we just hardcode them in both passes.
Realistically, I expect this to go away as we fully embrace the
descriptor-based approach for simplicity, so don't look too deep into
it.
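
A guessed shape of the BypassShapes output (the actual assembly format of tcp.shaped_results may differ):

    // The result shape is computed "in bypass" of the compute op, and
    // tcp.shaped_results ties it to the op, tie_shape-style. The lowering
    // then allocates the output buffer from %shape.
    %shape = shape.shape_of %lhs : tensor<?xf32> -> tensor<?xindex>
    %r = tcp.shaped_results %shape {
      %0 = tcp.add %lhs, %rhs : (tensor<?xf32>, tensor<?xf32>) -> tensor<?xf32>
      tcp.yield %0 : tensor<?xf32>
    } : tensor<?xindex> -> tensor<?xf32>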
2020-09-16 17:31:40 -07:00
stephenneuendorffer bb668e6e26 Add ATen Dialect (#16)
This patch adds a dialect intended to be used as a frontend dialect
to facilitate lowering from "A Tensor Library" in torch/pytorch.

This patch includes several passes that are useful in conjunction with the
dialect:

--aten-layer-name: Generates layer names for each operation, which are not
  present in the original pytorch.
--aten-to-std: Lowers the ATen dialect into standard dialect function calls (sketched below).
--return-elimination-pass: Converts functions (primarily the toplevel function)
  to pass return values by reference. This simplifies pytorch integration.
--aten-op-report: Generates a textual report about the model.
--liveness-report
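
For a flavor of --aten-to-std, a hypothetical before/after (the op and callee names are illustrative, not the dialect's actual spellings):

    // Before: an ATen dialect op (generic form).
    %0 = "aten.relu"(%arg0) : (tensor<1x32xf32>) -> tensor<1x32xf32>

    // After --aten-to-std: a standard dialect call that a runtime library
    // is expected to implement.
    %1 = call @aten_relu(%arg0) : (tensor<1x32xf32>) -> tensor<1x32xf32>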

Future patches will implement actual integration with the pytorch jit to
intercept and generate MLIR in this dialect, then lower the resulting MLIR
into function calls through aten-layer-name -> aten-to-std ->
return-elimination -> std-to-llvm. The result would then be JITed using the LLVM
jit, linked against a runtime library which makes calls back into pytorch to
implement all the layers.

Co-authored-by: Jeff Fifield <jeff.fifield@xilinx.com>
2020-08-12 19:28:04 -07:00
Sean Silva e228aa4b11 npcomprt: add support for constants
- create tcp.global + tcp.get_global_memref
- create npcomprt.global + npcomprt.get_global
- LLVM lowering for new npcomprt ops
- Runtime:
 - GlobalDescriptor struct emitted by LLVM lowering
 - implement __npcomp_compiler_rt_get_global

Also,
- cleanly isolate all runtime data structure definitions shared by the
compiler and runtime into lib/runtime/CompilerDataStructures.h
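
A minimal sketch of the new ops (assembly format assumed, not taken from the dialect definition):

    // Module-level constant, plus materialization as a memref.
    tcp.global @cst dense<[1.0, 2.0]> : tensor<2xf32>
    func @use() -> memref<2xf32> {
      %0 = tcp.get_global_memref @cst : memref<2xf32>
      return %0 : memref<2xf32>
    }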
2020-07-10 17:31:24 -07:00
Stella Laurenzo efbcf0aa44 Add NumpyPublicFunctionsToTensor pass.
* Rewrites public function signatures to operate on tensors (vs ndarray).
* Most of our backends presume immutable tensors at public function boundaries.
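
Roughly (the ndarray type syntax is approximate):

    // Before: public signature traffics in mutable ndarrays.
    func @predict(!numpy.ndarray<[4]:f32>) -> !numpy.ndarray<[4]:f32>
    // After: immutable tensors at the public boundary.
    func @predict(tensor<4xf32>) -> tensor<4xf32>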
2020-07-08 22:51:54 -07:00
Sean Silva b4f0cea8fa Rework e2e flow to use new "npcomprt"
This ~totally reworks the existing "runtime" stuff to be more
principled and usable, such as from Python. It's still not fully
production-quality, mainly in the department of memory management (e.g.
it currently leaks memory; we need to figure out "who frees memrefs" +
the analysis and transformation needed to do that (maybe use upstream
buffer allocation pass?)).

The user API is in include/npcomp/runtime/UserAPI.h, though
include/npcomp/JITRuntime/JITModule.h is a friendlier wrapper.

The stuff under {include,lib}/runtime is totally firewalled from the
compiler and tiny (<6kB, though no attention has gone into optimizing
that size). For example, we don't link in libSupport into the runtime,
instead having our own bare bones replacements for basics like ArrayRef
(the JITRuntime helps with bridging that gap, since it *can* depend on
all common LLVM utilities).

The overall feature of npcomprt is that it exposes a module with
multiple function entry points. Each function has arguments and
results that are tensor-valued, and npcomprt::Tensor is the runtime type
that is used to interact with that (and a npcomprt::Ref<T>
reference-counting wrapper is provided to wrap npcomprt::Tensor in the
common case).

From an implementation perspective, an npcomprt module at the
LLVM/object/binary level exposes a single module descriptor struct that
has pointers to other metadata (currently just a list of function
metadata descriptors). All interactions with the npcomp runtime are
keyed off of that module descriptor, including function lookups and
dispatching. This is done to dodge platform ABI issues and also allow
enough reflection to e.g. verify provided arguments.

Most of the compiler-side work here was in LowerToNpcomprtABI and
LowerToLLVM.

Also,
- Rename npcomp_rt/NpcompRt to npcomprt/Npcomprt; it was getting
annoying to type the underscores/caps.
- misc improvements to bash_helpers.sh
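
A hypothetical identity function at the npcomprt ABI boundary (op names assumed; the point is that memrefs exist only inside the function):

    func @forward(%arg0: !npcomprt.tensor) -> !npcomprt.tensor {
      %m = npcomprt.to_memref %arg0 : memref<*xf32>
      %r = npcomprt.from_memref %m : memref<*xf32>
      return %r : !npcomprt.tensor
    }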
2020-07-08 19:36:19 -07:00
Stella Laurenzo 5aa2f0f9f6 Add a trivial copy elision canonicalization on ndarray->tensor.
* This elides the very common code the compiler adds for chaining otherwise tensor-related numpy ops together.
* More aggressive canonicalizations would require more advanced analysis.
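
The elided pattern, sketched with assumed op names from the numpy dialect:

    // The compiler chains tensor-related ops through an ndarray...
    %arr = numpy.create_array_from_tensor %t : (tensor<4xf32>) -> !numpy.ndarray<[4]:f32>
    %t2 = numpy.copy_to_tensor %arr : (!numpy.ndarray<[4]:f32>) -> tensor<4xf32>
    // ...and this canonicalization replaces uses of %t2 with %t directly.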
2020-07-05 18:09:43 -07:00
Stella Laurenzo fae15ec5e7 Allow the ndarray type to carry a shape. 2020-07-05 17:34:03 -07:00
Stella Laurenzo 046751254f Refactor old tracing tests and remove deprecated ops.
* Old doctests to run under lit.
* Old custom filecheck tests -> pytest directory (under lit).
* Rename some old ufunc ops in the tracer.
2020-06-29 16:19:03 -07:00
Stella Laurenzo b2708e4687 Add test case for !numpy.ndarray. 2020-06-28 17:41:21 -07:00
Stella Laurenzo 7bd5733d38 Add "template function" ops and importer code.
* This starts to lay down the infra for reasoning about calls.
* Adds the importer code to generate IR for function calls of compiler-recognized static functions.
2020-06-26 18:36:36 -07:00
Stella Laurenzo 529873d13c Wire up IREE compilation and runtime in a new backend test.
* Adds python bindings for invoking flow, HAL, and VM lowering pipelines.
* Adds python bindings for translating to VM module flatbuffer.
* Adds a new backend_test/iree directory and configures lit to find the IREE python rt bindings.
* Open code a simple_invoke.py that exercises the whole pipeline (need real APIs for a lot of this).
* Fails when invoking the function because I never implemented argument marshaling for scalars :(
* Plenty of stuff to do tomorrow.
2020-06-19 00:30:34 -07:00
Sean Silva e8b1a07ef4 Initial NpcompRt (npcomp_rt) dialect boilerplate. 2020-06-01 19:07:53 -07:00
Sean Silva 1b48d0d80b Remove the present tcp.island.
The idea was half-baked and after some deep thought felt like a solution
looking for a problem. What we had here (and is removed in this patch)
just wasn't pulling its weight.

I cannot think of anything we would want to do with tcp.island as
removed here beyond just sinking and merging islands within a basic block,
such that the witness argument is kind of pointless (only matters for
hoisting).

TCP compute ops like tcp.add and tcp.broadcast_to have the strong
invariant of "pure or undefined behavior", which means they are always
safe to sink. The island concept as removed here conferred no benefit.

Also, I'll note that "islands" are a trick you can only play once in a
system (unless they strictly nest). I have some early-stage thoughts on
having an island concept that helps with modeling tensor shapes
robustly, which seems promising (the island would serve a similar role to
tie_shape).
2020-05-14 15:19:37 -07:00
Sean Silva e29aef855b Initial TCF/TCP E2E seed.
Very much WIP.

This is enough to get tcf.add down to approximately the "linalg.generic
on buffers" level of abstraction (but there are nuances).
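
Roughly the "linalg.generic on buffers" form tcf.add bottoms out at (illustrative; the upstream syntax of this era differs in detail):

    #map = affine_map<(d0) -> (d0)>
    linalg.generic {indexing_maps = [#map, #map, #map],
                    iterator_types = ["parallel"]}
        ins(%lhs, %rhs : memref<?xf32>, memref<?xf32>)
        outs(%out : memref<?xf32>) {
      ^bb0(%a: f32, %b: f32, %o: f32):
        %sum = addf %a, %b : f32
        linalg.yield %sum : f32
    }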
2020-05-08 20:20:41 -07:00
Stella Laurenzo bc5ef81d68 Add basicpy.SlotObject type and ops to create/index into it.
* This is intended to provide low-level modeling for built-in objects.
* It is now possible to trace slice tuples (which are tuples of NoneType|EllipsisType|SlotObjectType<slice, ...>).
2020-05-05 18:16:01 -07:00
Stella Laurenzo d3632af675 Add !numpy.any_dtype dialect type. 2020-04-29 18:20:42 -07:00
Stella Laurenzo b4425fe1d2 Add numpy.ufunc_call op. 2020-04-29 17:49:56 -07:00
Stella Laurenzo e845db8a20 Add builtin_ufunc and generic_ufunc ops. 2020-04-28 23:51:54 -07:00
Stella Laurenzo 953ef89a30 Add npcomp-opt and lit runner. 2020-04-26 17:55:15 -07:00