Commit Graph

527 Commits (2a0c56741834f5bb433d6d7c90494dd10b96bd0b)

Author SHA1 Message Date
Sean Silva 1fed1cb016 Update llvm-project to 753a21928413f8a7e76978cb1354e09150e114e0 2020-05-21 13:09:06 -07:00
Sean Silva 87aa561c69 Remove RtGetTensorExtentOp.
It is unused now, and will be superseded by a proper runtime dialect.
2020-05-21 10:17:49 -07:00
Sean Silva 1d3dbd9d5c Lower to LLVM dialect.
With this commit, we finish conversion to the LLVM dialect, and should be
ready for subsequent commits to convert to an LLVM module and let LLVM
codegen it to native machine code.

This required a custom "lower to LLVM" pass to support lowering
tcp.abort_if to a runtime call. In the future, this pass will grow to do
type conversions for our own runtime types as we add those.
2020-05-20 18:56:10 -07:00
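As a rough illustration of the lowering described in the commit above, an error guard like tcp.abort_if could end up as a conditional call into the runtime in the LLVM dialect. The runtime symbol name and the exact op forms below are assumptions, not the actual npcomp output:

    llvm.func @__npcomp_abort()   // assumed runtime entry point

    llvm.func @guard(%err: !llvm.i1) {
      // If the error condition holds, call into the runtime and trap;
      // otherwise continue with normal execution.
      llvm.cond_br %err, ^abort, ^continue
    ^abort:
      llvm.call @__npcomp_abort() : () -> ()
      llvm.unreachable
    ^continue:
      llvm.return
    }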
Sean Silva be1971c4fc Rename tcp.abort_if to tcp.shape_observe_error
This more clearly captures its semantics as a structural "observer" of
code: the op is currently marked NoSideEffect, but it eventually lowers to
eager error-handling code.

Also, update LowerRankedShapes to erase it, now that the layering here
is clear. That pass reifies the eager error handling code, so the dummy
op is no longer needed to keep things alive.

With this change, we are now ready to start lowering to LLVM!
This is the current print-ir-after-all from e2e-lowering-pipeline:
https://reviews.llvm.org/P8221
2020-05-18 13:38:47 -07:00
Sean Silva 9191efdfc1 Fix typo in LowerLinalgLoopDimOps
The legality condition was reversed. It's unclear to me why this didn't
cause "failed to legalize".
2020-05-18 12:53:31 -07:00
Sean Silva 836a8d4bec Lower tcp.alloc_memref ops to tcp.get_extent + std.alloc.
- tcp.get_extent will be eliminated while lowering shapes
- std.alloc is supported by the upstream LLVM lowering.
2020-05-18 12:53:31 -07:00
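A rough sketch of the lowering described in the commit above, written in generic op form with assumed operand/attribute shapes for the tcp ops:

    // Before: allocate a buffer directly from a !shape.shape value.
    %buf = "tcp.alloc_memref"(%shape) : (!shape.shape) -> memref<?xf32>

    // After: read each extent off the shape, then use a plain std.alloc.
    %e0 = "tcp.get_extent"(%shape) {dim = 0 : i64} : (!shape.shape) -> index
    %buf = alloc(%e0) : memref<?xf32>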
Sean Silva 9eaab7537c Add a comment about how IREE would layer in.
Thanks to Stella for probing me on this.
2020-05-15 17:22:47 -07:00
Sean Silva 993338a12d Lower to the upstream memref ABI.
Specifically, we use unranked memrefs which get passed as a fixed-size
set of arguments/returns. One big caveat about this is that returning
results isn't going to work. See TODO in LowerTensorLoadOp.

This is far from enough runtime-wise, but it starts to demarcate a
plausible layering. Notice for example how this removes the
runtime-dependence from LowerRankedShapes.

Eventually, we want to have an `npcomp_rt` or `npcomp_hal` dialect with
its own set of runtime types that will supersede this.

See comments in LowerTensorLoadOp for more direction about where this is
going to evolve.
2020-05-15 17:19:57 -07:00
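The key property of the ABI described in the commit above is that an unranked memref lowers to a fixed-size value (a rank plus a pointer to a ranked descriptor), so a function signature does not depend on the rank of its arguments. A minimal sketch, with an illustrative function name:

    // Each memref<*xf32> argument lowers to a fixed-size pair:
    // (rank : i64, pointer to a ranked memref descriptor).
    func @compute(%lhs: memref<*xf32>, %rhs: memref<*xf32>) {
      return
    }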
Sean Silva 1b48d0d80b Remove the present tcp.island.
The idea was half-baked and after some deep thought felt like a solution
looking for a problem. What we had here (and is removed in this patch)
just wasn't pulling its weight.

I cannot think of anything we would want to do with tcp.island, as
removed here, beyond just sinking and merging islands within a basic
block, so the witness argument is kind of pointless (it only matters for
hoisting).

TCP compute ops like tcp.add and tcp.broadcast_to have the strong
invariant of "pure or undefined behavior", which means they are always
safe to sink. The island concept as removed here conferred no benefit.

Also, I'll note that "islands" are a trick you can only play once in a
system (unless they strictly nest). I have some early-stage thoughts on
having an island concept that helps with modeling tensor shapes
robustly, which seems promising (the island would serve a role similar to
tie_shape).
2020-05-14 15:19:37 -07:00
Sean Silva 98a38c3527 Add some anonymous namespaces.
This brings this code in line with the LLVM style guide and avoids
potential ODR issues.
2020-05-14 15:02:46 -07:00
Sean Silva eaeb4011e6 Lower !shape.shape to SSA values.
This uses an approach inspired by what is done in IREE. See comments on
LowerRankedShapes.cpp for how it works.

The basic gist is that we have an op that creates a !shape.shape from a
set of SSA values representing the extents, and then iteratively replace
any op producing a !shape.shape with instances of that op.
2020-05-13 17:20:23 -07:00
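A minimal sketch of that idea; the "from extents" op name and its syntax here are assumptions:

    // Extents live as plain SSA index values...
    %d0 = constant 2 : index
    %d1 = constant 3 : index
    // ...and are recombined into a !shape.shape only where one is required.
    %shape = "tcp.shape_from_extents"(%d0, %d1)
        : (index, index) -> !shape.shape

Ops that previously produced a !shape.shape are then rewritten, one by one, into instances of this op until no other producers remain.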
Sean Silva ef25428fe3 Add lowering from linalg to loops.
This also adds a small pass to clean up the `dim` ops that linalg
introduces. For now, it only has a trivial pattern that looks for a
`tcp.alloc_memref(%shape)` op to get the shape, as we currently have an
invariant that all memrefs are the result of such ops.

But eventually this will need to look through view ops and any other
shape-ish stuff that linalg introduces as it lowers to loops, along with
any slicing ops introduced by buffer allocation.
2020-05-11 18:54:52 -07:00
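A rough sketch of the cleanup pattern described in the commit above (op syntax assumed): a `dim` of a memref that came from a tcp.alloc_memref can be rewritten to read the corresponding extent off the shape directly.

    // Before:
    %buf = "tcp.alloc_memref"(%shape) : (!shape.shape) -> memref<?xf32>
    %d = dim %buf, 0 : memref<?xf32>

    // After (the alloc stays; the dim is replaced):
    %d = "tcp.get_extent"(%shape) {dim = 0 : i64} : (!shape.shape) -> index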
Sean Silva 174ab19c5f Add some cleanup passes.
This makes the IR more presentable before going to the next phase of
lowering (ops on memref/buffers -> LLVM ops).
2020-05-11 15:28:21 -07:00
Sean Silva 83db558db9 Update llvm-project to 310d32cb80a611e6384a921e85607fea05841f26 2020-05-11 15:12:47 -07:00
Sean Silva 53c17dbed9 "Finish" tensor -> memref conversion.
There are a lot of details to flesh out here, but the basic approach seems
promising (see comments in createE2ELoweringPipeline).

This approach will be put to the test when we try to do our first
fusions since that tickles some of the nasty phase ordering issues
involved here.

But we're not there yet.
2020-05-11 15:00:12 -07:00
Sean Silva fec2ee0072 Avoid introducing DimOp's in LowerBroadcastToToLoops.
This makes sure we stay reasonably canonical in using the shape machinery.
(In fact, DimOp should probably be in the shape dialect, since it hides a
`shape.shape_of` call.)
2020-05-11 13:12:16 -07:00
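The observation in the parenthetical above, as a sketch (the op names and syntax here are assumptions): a `dim` on a tensor is roughly a shape query plus an extent read.

    %d = dim %t, 0 : tensor<?xf32>
    // is roughly equivalent to:
    %s = shape.shape_of %t : tensor<?xf32>
    %d = "tcp.get_extent"(%s) {dim = 0 : i64} : (!shape.shape) -> index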
Stella Laurenzo 950ba12426 Bump llvm-project to 3af85fa8f06220b43f03f26de216a67be4568fe7. 2020-05-08 20:42:40 -07:00
Sean Silva e29aef855b Initial TCF/TCP E2E seed.
Very much WIP.

This is enough to get tcf.add down to approximately the "linalg.generic
on buffers" level of abstraction (but there are nuances).
2020-05-08 20:20:41 -07:00
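For orientation, a very rough sketch of the "linalg.generic on buffers" level of abstraction mentioned above: an elementwise add written as a linalg.generic over memrefs. The syntax is approximate and simplified, not the exact IR this commit produces:

    #map = affine_map<(d0) -> (d0)>
    linalg.generic {indexing_maps = [#map, #map, #map],
                    iterator_types = ["parallel"]}
        ins(%lhs, %rhs : memref<?xf32>, memref<?xf32>)
        outs(%result : memref<?xf32>) {
    ^bb0(%l: f32, %r: f32, %out: f32):
      %sum = addf %l, %r : f32
      linalg.yield %sum : f32
    }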
Stella Laurenzo a91b0bfbe1 Add numpy.get_slice op and wire it up to the tracer. 2020-05-08 16:04:58 -07:00
Stella Laurenzo bc5ef81d68 Add basicpy.SlotObject type and ops to create/index into it.
* This is intended to provide low-level modeling for built-in objects.
* It is now possible to trace slice tuples (which are tuples of NoneType|EllipsisType|SlotObjectType<slice, ...>).
2020-05-05 18:16:01 -07:00
Stella Laurenzo 502ef8f195 Create skeleton for 'Basicpy' dialect.
* It is time to start adding more python mechanisms.
* Running into this for materializing slice() objects.
2020-05-04 17:48:02 -07:00
Stella Laurenzo d3632af675 Add !numpy.any_dtype dialect type. 2020-04-29 18:20:42 -07:00
Stella Laurenzo c4a192d5c9 Rename from npcomp::NUMPY to NPCOMP::numpy to follow IREE convention. 2020-04-29 17:10:10 -07:00
Stella Laurenzo e845db8a20 Add builtin_ufunc and generic_ufunc ops. 2020-04-28 23:51:54 -07:00
Stella Laurenzo 953ef89a30 Add npcomp-opt and lit runner. 2020-04-26 17:55:15 -07:00
Stella Laurenzo d3b6e1767a Add stub numpy dialect. 2020-04-26 17:20:58 -07:00
Stella Laurenzo 9ee2f6ff7f Initial commit of python boiler-plate. 2020-04-26 15:50:23 -07:00