Commit Graph

49 Commits (85ff8b692b75f1b3b59ef9e52dcac47499465585)

Author SHA1 Message Date
Sean Silva f168cacd6d Remove TCF and TCP.
These were legacy concepts that are now superseded by direct Torch to
linalg-on-tensors lowering. These were based on some very early thinking
related to the layering of frontends vs codegen, which is now obsolete
because:
- We expected a lot more centralization at the frontend (TCF) level. It
  turns out that frontend needs really vary a lot, and there is no grand
  unifying TCF dialect plausible. The additional layer isn't worth it.
- Linalg-on-tensors obsoletes the primary need for TCP. There are still
  a few things not representable with linalg-on-tensors, but the support
  is growing and the whole "not included in linalg-on-tensors" direction
  needs to be rethought. Our TCP dialect didn't cover any of the
  actually important things in this space (such as sort, FFT, top-k,
  etc.).

See historical [slides](https://drive.google.com/file/d/1iljcpTQ5NPaMfGpoPDFml1XkYxjK_6A4/view) / [recording](https://drive.google.com/file/d/1jSPa8TwPKUt0WuLquGc8OgSUVYJHMvWZ/view)
for more details on the origin story here.

Their presence was also confusing users
([bug](https://github.com/llvm/mlir-npcomp/issues/248)).

Also,
- Trim down npcomp-run-mlir testing. It was testing TCF to TCP
  lowering for the most part. The essential stuff is retained and
  rephrased with linalg-on-tensors. (We should probably rename it
  "refback-run" or something, as it is just a way to invoke RefBackend.)
- test/Python/Backend/RefJIT/simple_invoke_numpy.py is XFAIL'ed. Our
  "anti-framework" direction seems to be the likely future path.
2021-08-02 12:08:39 -07:00
Stella Laurenzo ec611c1e6f
Misc fixes for MacOS. (#255)
* Change aligned_alloc -> malloc. It can fail (and does on MacOS) and is a bit of an over-aggressive optimization for a reference backend (see the sketch after this list).
* Fixed a fragile test that prints -0.0 on MacOS.
* Fail the test (not the framework) on failure to trace (Torch on MacOS is missing features).
* Fix .so -> .dylib for compiler runtime.
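
A minimal sketch of the failure mode, assuming nothing about the actual RefBackend allocation site: C11/C++17 `aligned_alloc` requires `size` to be an integral multiple of `alignment` and may return null (macOS also only ships it from 10.15 onward), whereas `malloc` has no such preconditions.

```cpp
// Illustrative only: the precondition that makes aligned_alloc easy to
// misuse, versus plain malloc.
#include <cstdio>
#include <cstdlib>

int main() {
  void *a = std::aligned_alloc(64, 100); // 100 % 64 != 0: may yield null
  std::printf("aligned_alloc(64, 100) -> %p\n", a);
  std::free(a); // free(nullptr) is a no-op

  void *b = std::malloc(100); // no alignment precondition; suitably aligned
  if (!b)                     // for any scalar type, which suffices for a
    return 1;                 // reference backend
  std::free(b);
  return 0;
}
```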
2021-07-27 17:48:47 -07:00
Stella Laurenzo 2dbab50444
Rework the python build to a static assembly of MLIR+NPCOMP (#251)
* Adapt to python build system updates.

* Bump llvm to 310c9496d80961188e8d8f8ad306cdf44bd7541f (includes python build updates)
* Adds refback C-API.
* Re-layers all python builds.
* Rework CI.
2021-07-27 16:10:10 -07:00
Stella Laurenzo 2ecbcbf8c7
Bump llvm-project to a085c23aa3c8f91866d7f4588d4f683407dc775d. (#250)
* Added additional *ToLLVM conversion patterns (they were disaggregated from standard).
* Misc renames.
* Spelling change on ConvNCHW op, and it now expects strides and dilations attributes.
2021-07-23 14:13:19 -07:00
Sean Silva 79928cd2dd Generalize support for elementwise ops.
We plumb through e2e a fair number of interesting cases:
- unary, binary, ternary elementwise ops
- ops like `torch.aten.add.Tensor` that also take a scalar parameter
- static size-1 broadcasting

We allow the static size-1 broadcasting case, but emit a runtime error
in the case of dynamic size-1 broadcasting. This seems like a sweet spot
subset of things that can be lowered directly to linalg, while not being
overly constraining to users. This is consistent with what IREE is doing
for CHLO->Linalg lowering as well
([code](50bf7a87e4/iree/compiler/InputConversion/MHLO/BroadcastingToLinalgPatterns.cpp#L1)).
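
A minimal sketch of that policy with hypothetical names (the real logic lives in the `ConvertElementwiseOp` pattern): a statically known size-1 dimension broadcasts, statically known equal dimensions lower directly, and any dynamic dimension gets a runtime equality guard, since dynamic size-1 broadcasting is rejected.

```cpp
// Hypothetical classification of one dimension pair, mirroring the
// broadcasting policy above (not the actual TorchToLinalg code).
#include <cassert>
#include <cstdint>

constexpr int64_t kDynamic = -1; // stand-in for MLIR's dynamic-dim sentinel

enum class DimLowering {
  Direct,          // static sizes match: plain linalg indexing
  StaticBroadcast, // static size-1: fold the dim in the indexing map
  RuntimeGuard,    // dynamic: emit a check; runtime error if sizes differ
};

DimLowering classifyDim(int64_t lhs, int64_t rhs) {
  if (lhs == 1 || rhs == 1)
    return DimLowering::StaticBroadcast;
  if (lhs == kDynamic || rhs == kDynamic)
    return DimLowering::RuntimeGuard;
  assert(lhs == rhs && "static shape mismatch is a verifier error");
  return DimLowering::Direct;
}
```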

To test the static size-1 case, we added support for the
`torch.aten.unsqueeze` op and lowering for it through
`linalg.tensor_expand_shape`. This involved generalizing
`MaximizeValueSemantics` to handle it (the solution there also
works for `torch.aten.flatten.using_ints`, which we need for ResNet
anyway).

Also, a few minor additional changes:
- Add `VerifyInvariantsBeforeBackendLowering` pass, which catches a
  large class of errors before we get to backend lowering (now that we
  are doing dialect conversion, the errors are way nicer if we just emit
  them up front rather than in the guts of a random pattern).
- Minor change to RefBackend to allow `linalg.tensor_expand_shape`.

Recommended review order:
- e2e tests in elementwise.py
- `ConvertElementwiseOp` in TorchToLinalg.cpp + elementwise.mlir test
- `ConvertAtenUnsqueezeOp` in TorchToLinalg.cpp + unsqueeze.mlir test
- RefineTypes.cpp + tests
- MaximizeValueSemantics changes + test
- VerifyInvariantsBeforeBackendLowering pass + test
2021-06-28 13:28:38 -07:00
Sean Silva 544cb4ef54 Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d
- add_mlir_doc arg order
- fix some dependent dialects on passes that were now causing errors
- "encoding" attribute on mlirRankedTensorTypeGetChecked
2021-04-22 18:12:55 -07:00
Sean Silva 464feacba9 Bump llvm-project to 223dcdcfbe23affdf17ada7f023ee1872fd76160
- ModuleOp no longer has a terminator.
2021-04-05 17:56:35 -07:00
Sean Silva 641098be54 Clean up some compiler warnings on my machine. 2021-03-23 14:29:05 -07:00
Sean Silva 99178a167d Bump llvm-project to 0524a09cc7e1a0797982feacf505825231efbee7
- renames of OwningRewritePatternList -> RewritePatternSet
  - also `insert` to `add`
- RewritePatternSet holds a context now
- memref dialect split from std
2021-03-23 14:29:05 -07:00
Bryce Arden 4591884d06 [refbackrt] Scalar arg support
* Adds f32 scalar argument support across the ABI boundary (sketched below).
* Adds support for passing input type / shape information
  across the ABI boundary
* Adds support for parsing / creating input FloatAttr's in
  `npcomp-run-mlir`
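
A hedged sketch of what crossing the ABI boundary with a scalar looks like; `InvokeArg`, `packArg`, and the tag names are illustrative, not the actual refbackrt types.

```cpp
// Illustrative tagged argument for an invoke() that accepts more than
// tensors; the real refbackrt types differ.
#include <cstdint>

enum class ArgType { kTensor, kF32 };

struct InvokeArg {
  ArgType type;
  union {
    void *tensor; // opaque tensor handle in the real runtime
    float f32;    // f32 scalar passed by value
  };
};

// JIT'ed entry points typically take an array of type-erased pointers: a
// scalar crosses the boundary as a pointer to its storage, a tensor as a
// pointer to its descriptor.
void packArg(InvokeArg &arg, void **slot) {
  *slot = (arg.type == ArgType::kF32) ? (void *)&arg.f32 : arg.tensor;
}
```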
2021-03-23 13:16:44 -07:00
Bairen Yi 5fed296904 Address missing default label in switch statement
Signed-off-by: Bairen Yi <yibairen.byron@bytedance.com>
2021-03-11 11:55:59 -08:00
Bryce Arden e7a8fd76e2
[refbackrt] Update Invoke API to support more than just Tensors (#181) 2021-03-10 15:39:26 -08:00
Bairen Yi 53b01cb9ba Bump llvm-project to e31c77b1827fa4dd3511f21af11cfab18ecf6d38
Signed-off-by: Bairen Yi <yibairen.byron@bytedance.com>
2021-03-10 11:01:16 -08:00
Sean Silva c424c24ed8 Bump llvm-project to c68d2895a1f4019b387c69d1e5eec31b0eb5e7b0
- dialect registration
- StringAttr::get: order of context arg
- math dialect
- LogicalResult nodiscard
- error message for invalid broadcast
2021-02-22 12:23:24 -08:00
Sean Silva 6351474382 Bump llvm-project to bc556e5685c0f97e79fb7b3c6f15cc5062db8e36
- `let typeDescription` -> `let description`
- LLVMIntegerType -> IntegerType
2021-01-08 14:18:09 -08:00
Sean Silva 97d6d04d41 Bump llvm-project to 16c6e9c58e9ae50a775945e6b407f1891f353d2f
Changes:
- linalg init tensor change (outs+init -> just outs)
- IntegerType::get and other builtin types now take the context as the
  first arg
- LLVMType::* is gone. Now LLVM Types are just regular Type's.
2021-01-05 16:12:11 -08:00
powderluv 4237172bbf
Fix OSX builds. (#143)
--version_script doesn't work on OSX.
Shared libs are .dylibs on OSX.

TEST=Build on iMac Pro. M1 has other issues that will be fixed later.

Change-Id: I2bda46349a878b8265e273c05d8db6b46c0df633
2020-12-28 01:30:45 -08:00
Aaron Arthurs 85898aaf10
Add TCF convolutional op with bias addition (#137) 2020-12-15 12:53:12 -08:00
Sean Silva d818043986 Bump llvm-project to d50d7c37a159802c89454a6c53c0ec2e7949d84a
Fixes:
- use `op->(method on Operation)`
- update for MlirIdentifier in signature of mlirNamedAttributeGet
2020-12-14 14:30:51 -08:00
Sean Silva b2077738ca Bump llvm-project to 444822d77a7fea28aa49edf24533c987efa1b2ee
Fixes:
- renames StandardTypes -> BuiltinTypes
- std.extract_element -> tensor.extract
2020-12-11 14:43:38 -08:00
Sean Silva 251aa6e435 Bump llvm-project to 774f1d3ffd458d6cb82d5039758ef1cf6370957f
Date:   Mon Nov 30 15:20:30 2020 -0800

Changes:
- finalizing-bufferize is stricter now, and we need to pull in a DimOp
  bufferization which was previously working by happenstance. The
  offending DimOp's are actually created by the linalg bufferization
  (which creates dim ops on the original tensor values, not the
  converted memrefs), so the fix is moving std-bufferize after
  linalg-bufferize (see the sketch below).
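
A sketch of the reordering in MLIR's textual pipeline syntax, with pass names as spelled in this message; treat the snippet as illustrative rather than buildable against any particular MLIR revision.

```cpp
#include "mlir/Pass/PassManager.h"
#include "mlir/Pass/PassRegistry.h"

// Before, std-bufferize ran ahead of linalg-bufferize, so the dim ops that
// linalg-bufferize creates on the *original tensor* values were never
// bufferized, tripping the now-stricter finalizing-bufferize.
mlir::LogicalResult buildBufferizePipeline(mlir::OpPassManager &pm) {
  return mlir::parsePassPipeline(
      "func(linalg-bufferize,std-bufferize,finalizing-bufferize)", pm);
}
```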
2020-11-30 18:40:13 -08:00
Sean Silva f9b32a99fc Bump llvm-project to 164410324d8bf3b5a99e39f7dfe3c6d6972dab30
Date:   Mon Nov 30 12:44:35 2020 -0800

Fixes:
- func-bufferize is no longer finalizing, so we need to add
  finalizing-bufferize.
2020-11-30 13:58:13 -08:00
Sean Silva 955fd3eeda Add some much-needed comments around refbackrt::invoke.
This code is really tricky, and was not commented.
2020-11-25 15:39:41 -08:00
Sean Silva 46aa6d0a24 [RefBackend] Fix leaks related to ABI boundaries.
As best I can tell (e.g. from LeakSanitizer), this fixes all the leaks
except for those due to buffers created internally to the codegenned
code itself (up next I'll add the buffer deallocation pass to fix
those).

The main change is that instead of attempting to pass `refbackrt::Tensor`
to the codegenned function directly, we make all the ABI types be
UnrankedMemRef which gets passed awkwardly (but workably) as a
`{size_t rank, void *ptrToDescriptor}` on the ABI. The reason why
refbackrt::Tensor wasn't workable is that MLIR doesn't really have a
way to manage the lifetime of unranked memref descriptors created
inside the function, which is inevitably what would happen with the old
code that emitted runtime calls to
`refbackrt.to_memref`/`refbackrt.from_memref` to convert back and forth
to `refbackrt::Tensor` inside the codegenned code.
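
For concreteness, a sketch of that ABI shape, following MLIR's standard lowering of unranked and ranked memrefs (illustrative declarations, not the ones in Runtime.cpp):

```cpp
#include <cstddef>
#include <cstdint>

// What crosses the ABI: a rank plus an opaque pointer to a ranked
// descriptor whose trailing arrays are sized by that rank.
struct UnrankedMemRef {
  std::size_t rank;
  void *ptrToDescriptor; // points at a RankedMemRefDescriptor<rank>
};

template <int64_t N> struct RankedMemRefDescriptor {
  void *allocatedPtr;  // backing buffer; what deallocation would free
  void *alignedPtr;    // data pointer actually loaded/stored through
  int64_t offset;      // element offset of the view into the buffer
  int64_t sizes[N];    // extent of each dimension
  int64_t strides[N];  // element stride of each dimension
};
```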

So, instead of `refbackrt.to_memref`/`refbackrt.from_memref` with no
sound basis for valid lifetime management, we now have a lovely
piece of code in `refbackrt::invoke` in `Runtime.cpp` that just barely
seems to be sound. We rely on the codegenned code having these
properties, which it seems to have:

- it won't free memref descriptors or their backing buffer for arguments
  of UnrankedMemRef type.

- it will allocate a separate memref descriptor for each result
  UnrankedMemRef (which is ensured by having a separate memref_cast for
  each)

- we can sniff the `allocatedPtr`s (i.e. the backing buffer pointers)
  to avoid double-freeing in the case of aliasing of the backing buffer
  (including backing buffers for arguments feeding into results); see
  the sketch after this list

- to catch the case of statically allocated data (which we need to
  avoid passing to `free`), check if the `allocatedPtr` is (no joke)
  equal to `0xDEADBEEF`, because there is otherwise no way to
  distinguish statically allocated from malloc'ed data... (std.global_memref
  lowering to LLVM by happenstance sets the allocatedPtr equal to
  `0xDEADBEEF`, presumably mainly as a debugging thing)
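
A sketch of the resulting deallocation rule, as a hypothetical helper (the real logic is inline in `refbackrt::invoke`): free each result's backing buffer exactly once, skipping buffers that alias an argument or that are statically allocated.

```cpp
#include <cstdlib>
#include <set>

// std.global_memref's lowering happens to stamp this into allocatedPtr.
void *const kStaticSentinel = (void *)0xDEADBEEF;

void freeResults(void *const *resultAllocatedPtrs, int numResults,
                 const std::set<void *> &argAllocatedPtrs) {
  std::set<void *> freed;
  for (int i = 0; i < numResults; ++i) {
    void *p = resultAllocatedPtrs[i];
    if (p == kStaticSentinel)      // statically allocated data: never free.
      continue;
    if (argAllocatedPtrs.count(p)) // aliases an argument buffer: caller owns.
      continue;
    if (!freed.insert(p).second)   // two results sharing one buffer:
      continue;                    // only free the first time we see it.
    std::free(p);
  }
}
```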

Even with all this, we *still* need to (internally to refbackrt::invoke)
make copies of all inputs/outputs! And the details of how the LLVM-level
ABI gets laid out for e.g. function arguments/returns is still super
tricky.

This really highlights how deficient memref is as the general runtime
type for our use case. It's stewing in my mind how best to improve the
situation. My general gut feeling is that IREE's abstractions for this
are "right", but I need to think more how to distill those aspects of
IREE's design in a "reference" way for RefBackend.

Some implementation notes:

- In terms of how this is implemented, this did catch a bug in our ABI
  wrapper functions in LowerToLLVM.cpp, which I had to fix (it happened to
  work before through some combination of npcomprt::Tensor being passed as
  a single pointer + probably me infinite-monkey-ing it until it worked)

- This actually removes 2 out of the 3 compiler runtime functions (the
  only one left is "abort_if"). (Most of the memref descriptor code moved
  from CompilerRuntime.cpp to Runtime.cpp.)

  - this also means deleting `refbackrt.from_memref` and
  `refbackrt.to_memref`
2020-11-25 13:09:58 -08:00
Sean Silva 0b7c443256 [RefBackend] Properly initialize refbackrt::Tensor refcount.
Although `refCount` is initialized as `std::atomic<int> refCount{0};` in
the definition of Tensor, our tail-allocating malloc would ignore it,
resulting in bogus values that led to leaks.
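
A sketch of the bug class and the fix pattern, with a hypothetical `allocTensor` helper (the real allocation code differs): an in-class initializer only runs if the object is actually constructed, which a raw tail-allocating malloc never does.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

struct Tensor {
  std::atomic<int> refCount{0};
  // ... tail-allocated payload follows the struct in memory ...
};

Tensor *allocTensor(std::size_t payloadBytes) {
  void *mem = std::malloc(sizeof(Tensor) + payloadBytes);
  if (!mem)
    return nullptr;
  // Placement-new runs the in-class initializer; casting `mem` straight
  // to `Tensor *` would leave refCount as allocator garbage -- the bug.
  Tensor *t = new (mem) Tensor();
  assert(t->refCount.load() >= 0 && "garbage refcount");
  return t;
}
```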

Caught with LeakSanitizer, but I added an assertion that the refcount is
non-negative to begin with, which should catch this bug in the future
fairly consistently (assuming the garbage refcount is negative half the
time).
2020-11-24 12:01:35 -08:00
Sean Silva 64a7e83184 [RefBackend] Add refback-tcf-to-tcp-pipeline
This allows invoking TCF to TCP-level conversion more easily, and starts
us towards a path of factoring it out of the RefBackend.
2020-11-17 12:33:37 -08:00
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent in the middle of
the bufferization pipeline, as it required running convert-shape-to-std
at an awkward place. To make that cleaner, just open-code the
extract_element ops that shape.get_extent expands into.

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, the train has long since left the station on
treating shapes as some special data type that should only be operated
on with shape ops.

Also,
- reorder tensor constant bufferize (which is a module pass) to bracket
  all the bufferization function passes, to make the parallelism
  opportunities there clearer. Now we have a very clean little
  bufferization segment of our pipeline construction.
2020-11-17 11:00:38 -08:00
Sean Silva 5227d52c26 [RefBackend] Use std.global_memref instead of homegrown thing
This vastly simplifies our code, allowing deleting multiple ops,
simplifying multiple passes, and removing a whole pass.

Now `refback` dialect is down to one op (refback.alloc_memref, which
simplifies allocations to just take a shape instead of individual
extents).
2020-11-13 18:43:50 -08:00
Sean Silva 32388d938b Make some passes run on FuncOp so they can run in parallel. 2020-11-13 16:12:18 -08:00
Stella Laurenzo b4c7ae1e0c Repurpose numpy-compiler compiler/runtime flow for PyTorch.
* A bit gross because I took the chance to upgrade all of the backend bits to the new MLIR Python bindings and we still co-mingle the old and new for now.
* Since the Python created PassManagers are configured for explicit nesting, I had to upgrade some of the pass pipelines to be explicit.
* The demo in mul_maximum_e2e.py now compiles, runs through PyTorch and through the JIT, prints and asserts the same results.
* I am not claiming that this is the prettiest API in this patch: consider that this is just directly using low-level APIs and there should be an intervening high level API.
2020-11-11 10:38:13 -08:00
Sean Silva 1c7c362e29 [TCP] Replace tcp.matmul with linalg.matmul.
This involved adding a `tcp.splatted` op to splat a dynamically sized
init tensor. See rationale in TCPOps.td docs.

One interesting observation is that when lowering tcf.matmul to
linalg.matmul, we need to both 1) create the error checks and 2)
calculate a shape transfer function to create the init tensors.
Previously, 2) was deferred to bufferizing tcp.matmul later. I'm not
sure if this is a conflation of concerns or not. For now, it's not a big
burden.
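
A sketch of those two steps as plain values (the real lowering emits IR for both, and the init tensor comes from `tcp.splatted`): 1) the error check that the contracting dimensions agree, and 2) the shape transfer function that sizes the init tensor.

```cpp
#include <array>
#include <cstdint>
#include <stdexcept>

std::array<int64_t, 2> matmulResultShape(std::array<int64_t, 2> lhs,  // [M, K]
                                         std::array<int64_t, 2> rhs) {// [K, N]
  if (lhs[1] != rhs[0]) // 1) error check, emitted as a runtime guard
    throw std::invalid_argument("mismatching contracting dimension");
  return {lhs[0], rhs[1]}; // 2) shape transfer: result is [M, N]
}
```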
2020-11-10 18:58:28 -08:00
Sean Silva 0427aacb0b [TCP] Replace elementwise ops with std elementwise ops. 2020-11-10 18:58:28 -08:00
Sean Silva 57e58b9272 [RefBackend] Use upstream func-bufferize pass.
Now, the only bufferization we have left is lowering tensor constants to
memref, which will hopefully proceed soon after Rahul's new
std.global_memref lands + the lowering to LLVM IR. Then I'll port
LowerConstantTensorsToMemref to upstream and we'll be 100% upstream
bufferization, except for our local TCP dialect (which will probably go
away and be replaced by std elementwise + linalg named ops on tensors :)
).
2020-11-02 17:38:33 -08:00
Sean Silva 1874bf5eb1 NFC: Clean up some minor nits
- Remove GreedyPatternRewriteDriver.h from files that don't need it
- fix typo shouldBeCloned -> wouldBeCloned
2020-10-30 18:48:25 -07:00
Sean Silva f9c2f8eb0d [RefBackend] Use upstream SCF bufferization pass. 2020-10-30 18:12:41 -07:00
Stella Laurenzo a3f4db9fe8 Bump llvm-project to c8c07b76b2cf2ada8e7ec132f7f57b97d76743cf.
* Several NFC changes to signatures/includes.
2020-10-29 15:25:55 -07:00
Stella Laurenzo 9618c2dbf7 NFC: Re-organize ATen directory structure and fix warnings.
* Still some more work to do on the Transforms tree to bring it in line with the others (will do that as I add things).
2020-10-22 14:13:26 -07:00
Sean Silva 14470f9ff6 [RefBackend] Use upstream std bufferization.
It now subsumes the one we had.
2020-10-21 16:46:56 -07:00
Sean Silva b6ae53b312 [RefBackend] Use new upstream SCF type conversions. 2020-10-21 16:46:56 -07:00
Stella Laurenzo fe5ceed18d NFC: Format a file that had not been. 2020-10-21 12:47:12 -07:00
Sean Silva 06a8ba6900 [RefBackend] Use more idiomatic bufferize pattern for TCP.
The time has come for BypassShapes/LowerShapedResultsToMemref to go away :(
For the reference backend, being consistent with upstream conventions is
the name of the game now.

This is a step down in a number of ways, e.g. test clarity and
separation of concerns. But it is fewer files and fewer tests, and
*does* address the "TODO: This is really fragile". It also eliminates two
more ops from the refback dialect (sadly, they are the
shaped_results/yield that we were getting kind of fond of, but alas).
2020-10-15 20:15:53 -07:00
Sean Silva b6bdc8cc4f [RefBackend] Use upstream BufferizeTypeConverter
Now that it has grown source/target materialization capabilities
(spelled with ops tensor_load/tensor_to_memref), we can use it. We can
also now delete refback.memref_to_tensor/refback.tensor_to_memref.

This is also a first step to reducing the downstream functionality
needed in the refback dialect.
2020-10-15 15:58:51 -07:00
Sean Silva 93fc21dad0 [RefBackend] Split out TCF->TCP conversion.
Now the reference backend cleanly accepts "TCP" + scalar ops.

We introduce tcf-refback-lowering-pipeline which also does TCF->TCP
conversion for convenience until we have a "target interface".
2020-10-12 11:56:39 -07:00
Stella Laurenzo af4edb63ae Start reworking towards a shared library build.
* Need to have a dag of shared library deps in order to interop across python extensions (as presented in ODM).
* Introduced add_npcomp_library and friends to mirror the MLIR setup.
* Adds a libNPCOMP.so shared library.
* Redirects tools and extensions to link against libNPCOMP.so (instead of static libs).
* Moves all libraries to lib/, all binaries to bin/ and all python extensions to python/. The invariant is that the rpaths are setup to have a one level directory structure.
* Reworks the _torch_mlir extension to build like the others (still need to come up with a consolidated rule to do this instead of open coded).
* Includes an upstream version bump to pick up needed changes.

Sizes with dynamic linking (stripped, release, asserts enabled):
  libNPCOMP.so: 43M (includes much of the underlying LLVM codegen deps)
  libMLIR.so: 31M
  _npcomp.so: 1.6M (python extension)
  _torch_mlir.so: 670K (python extension)
  npcomp-capi-ir-test: 6.3K
  npcomp-opt: 351K
  npcomp-run-mlir: 461K
  mnist-playground: 530K

Still more can be done to normalize and optimize but this gets us structurally to the starting point.
2020-10-09 16:02:58 -07:00
Sean Silva 631c8070df [RefBackend] Put JITModule in refback namespace. 2020-10-08 09:07:00 -07:00
Sean Silva 7edb5f3641 [RefBackend] Rename RefBackend dialect to Refback
I now realize that VerboseCamelCase is not the best choice for dialect
directory/file names and C++ identifiers (take e.g. "Linalg", "Basicpy",
etc. as prior art here; not LinearAlgebra or BasicPython). If I had to
name the convention it seems to be "Shortword" (or of course just
acronym dialects like LLVM, SCF, etc.).

This rename also has the side benefit of differentiating RefBackend
directories, which now refer to the actual backend itself, from
Refback/Refbackrt, which are the dialects which happen to be used by
that backend.
2020-10-08 09:07:00 -07:00
Sean Silva bf99a82832 [RefBackend] Rename Npcomprt dialect to Refbackrt. 2020-10-08 09:07:00 -07:00
Sean Silva 83ad70ef54 [RefBackend] Move runtime related code under npcomp/RefBackend/
Other than the dialect definitions (which will live in standard Dialect/
subdirectory), the goal here is to keep RefBackend-related code nested
in {include/npcomp,lib,test}/RefBackend.
2020-10-08 09:07:00 -07:00
Sean Silva 21255d5f8e [RefBackend] Rename "E2E" to RefBackend. 2020-10-07 10:29:48 -07:00