Commit Graph

710 Commits (a75ae8253094f9da543b8a6060abc35c27e73819)
 

Author SHA1 Message Date
Stella Laurenzo b0623b7793 Bump LLVM version to 4f5355ee73626f8b8fe6bf0dd6d167fea7628a2c.
* Incorporates changes around LLVM StringRef.
* Ports fix in upstream pybind11 detection.
* Disables CI hack due to broken pybind detection.
2020-11-24 13:12:04 -08:00
meadowlark@google.com 959c0a79cb Expand pytype coverage for torch_signature_ods_gen.py 2020-11-24 12:42:32 -08:00
Sean Silva 0b7c443256 [RefBackend] Properly initialize refbackrt::Tensor refcount.
Although `refCount` is initialized as `std::atomic<int> refCount{0};` in
the definition of Tensor, our tail-allocating malloc would ignore it,
resulting in bogus values that led to leaks.

Caught with LeakSanitizer, but I added an assertion that the refcount is
non-negative to begin with, which should catch this bug in the future
fairly consistently (assuming the garbage refcount is negative half the
time).
2020-11-24 12:01:35 -08:00
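For illustration, a minimal C++ sketch of the pitfall described in the commit above; the struct layout and the `allocTensor` helper are hypothetical stand-ins, not the actual refbackrt sources. The point is that an in-class initializer such as `std::atomic<int> refCount{0};` only runs when a constructor runs, so storage obtained from a tail-allocating `malloc` must set the field explicitly.

```cpp
#include <atomic>
#include <cstdlib>
#include <new>

// Hypothetical stand-in for a tail-allocated tensor header.
struct Tensor {
  std::atomic<int> refCount{0};  // initializer only runs via a constructor
  // ...payload bytes are tail-allocated immediately after this header...
};

Tensor *allocTensor(std::size_t payloadBytes) {
  // malloc returns uninitialized memory and no constructor runs here, so
  // refCount would hold garbage without the explicit initialization below.
  auto *t = static_cast<Tensor *>(std::malloc(sizeof(Tensor) + payloadBytes));
  new (&t->refCount) std::atomic<int>(0);  // the kind of fix the commit describes
  return t;
}
```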
Stella Laurenzo f13994fdf7 NFC: Remove TODO about creating an mlirOperationStateDestroy (unnecessary). 2020-11-23 15:01:51 -08:00
Stella Laurenzo 9ffd2556ab Add TorchScript import tests missed in previous change. 2020-11-23 14:43:42 -08:00
Stella Laurenzo 78a3c90758 Add TorchScript graph importer.
* Does not handle all features yet but should conservatively fail on unsupported things.
* Location tracking is still somewhat mismatched between what TorchScript and MLIR do. Likely need a better heuristic for tracking locations from defs for nodes that do not carry location.
* Sets the ground-work for a specialized/generic split but only implements the generic side.
* Had some evidence that this requires a recent bump of PT nightly (within the last month) to pick up pybind11 2.6, which includes some cross-module symbol fixes (vs the previously sync'd version). No source changes, but older versions fail to cast function types at runtime.
2020-11-23 14:20:09 -08:00
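For context, a hedged sketch of the kind of TorchScript graph such an importer walks, using only the public libtorch C++ API (the file name and the loop are illustrative, not the npcomp importer itself). Node kinds such as `aten::mm` become dialect ops, and nodes without a source range are where the location heuristic mentioned above would apply.

```cpp
#include <torch/script.h>
#include <iostream>
#include <memory>

int main() {
  // Assumes a module previously saved via torch.jit.script(...).save("scripted.pt").
  torch::jit::Module module = torch::jit::load("scripted.pt");
  std::shared_ptr<torch::jit::Graph> graph = module.get_method("forward").graph();
  for (torch::jit::Node *node : graph->nodes()) {
    // Each node carries an operator kind; an importer maps these to MLIR ops,
    // attaching a location when the node actually has one.
    std::cout << node->kind().toQualString() << "\n";
  }
  return 0;
}
```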
Stella Laurenzo 2021d3609e
Make CMAKE_PREFIX_PATH explicit for now (#125)
* Installs numpy as well.
2020-11-22 16:23:36 -08:00
Stella Laurenzo 31d80064a9 Update CI to be verbose about pybind11 detection logic. 2020-11-22 13:42:44 -08:00
Stella Laurenzo f03225b1f1 Bump llvm-project to f4f8a67aaf13bc66a2b7d55561b14a3724a5e0de.
* Incorporates source fixes.
* Uses upstream pybind11 detection logic.
* Patches CI.
* This may break the CI, which will need to be fixed manually in a followup.
2020-11-22 13:14:44 -08:00
Sean Silva 45ca371129 cmake_configure.sh: Add mlir native modules to PYTHONPATH
Also, update README.md to use the canonical .env file written by
`cmake_configure.sh`.
2020-11-20 17:29:57 -08:00
Sean Silva 1dfcfa9cd1 Add aten.mm op and "test" it e2e.
Note that unlike aten.matmul which has dynamic behavior
depending on the argument ranks (can do matrix-matrix, matrix-vector,
batch matmul, etc.), aten.mm is just a vanilla matrix
multiply, which can be lowered precisely to tcf.matmul.

The "test" is really just an example that I stared at while getting my
feet wet with this. We probably want something that actually tests this
as part of `ninja check-npcomp`.
2020-11-20 17:21:24 -08:00
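A small libtorch sketch of the distinction described above (illustrative only, not part of the change): `mm` is strictly a 2-D by 2-D multiply, while `matmul` dispatches on the argument ranks.

```cpp
#include <torch/torch.h>

int main() {
  auto a = torch::randn({4, 8});
  auto b = torch::randn({8, 2});

  // aten::mm: vanilla 2-D matrix multiply; maps cleanly onto tcf.matmul.
  auto c = torch::mm(a, b);            // shape [4, 2]

  // aten::matmul: rank-dependent (matrix-vector, batched matmul, ...).
  auto batched = torch::randn({3, 4, 8});
  auto d = torch::matmul(batched, b);  // broadcasts to shape [3, 4, 2]
  return 0;
}
```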
Sean Silva ec1336a8a3 Make pytorch/backend/refjit.py a bit tidier
- Print out initial PyTorch IR.
- Rename ambiguous "frontend IR" to "TCF IR".
- Add newlines to prints
- Rename FRONTEND_PASSES to TORCH_TO_TCF_PASSES
2020-11-20 17:21:24 -08:00
Sean Silva 32b2dc6ce7 Revert "Bump llvm-project to 369c51a74b5327464e27e0749ca7ac59ac1349ce"
This reverts commit c60d7b4aae.

It seems to have tickled some sort of pybind version issue:
https://github.com/llvm/mlir-npcomp/runs/1433414550?check_suite_focus=true
2020-11-20 15:09:18 -08:00
Sean Silva c60d7b4aae Bump llvm-project to 369c51a74b5327464e27e0749ca7ac59ac1349ce 2020-11-20 13:03:24 -08:00
harsh-nod 67d6694fdc
Update PYTHON cmake variables to Python3 (#121)
After the recent change of cmake variables
from PYTHON_INCLUDE_DIRS to Python3_INCLUDE_DIRS
and PYTHON_LIBRARIES to Python3_LIBRARIES, there were
a few files that still had references to the old
variables. This patch fixes that.
2020-11-17 16:04:14 -08:00
Sean Silva 64a7e83184 [RefBackend] Add refback-tcf-to-tcp-pipeline
This allows invoking TCF to TCP-level conversion more easily, and starts
us towards a path of factoring it out of the RefBackend.
2020-11-17 12:33:37 -08:00
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent in the middle of
the bufferization pipeline, as it required running convert-shape-to-std
at an awkward place. To make that cleaner, just open-code the
extract_element ops that shape.get_extent expands into.

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, the ship has long since sailed on treating
shapes as some special data type that should only be operated on with
shape ops.

Also,
- reorder tensor constant bufferize (which is a module pass) to bracket
all the bufferization function passes, to make the parallelism
opportunities there clearer. Now we have a very clean little
bufferization segment of our pipeline construction.
2020-11-17 11:00:38 -08:00
Stella Laurenzo 4f9c9ecda0 Fix optional Torch package lookup.
* Days since CMake-is-not-a-language failure: 0
2020-11-16 21:41:59 -08:00
Stella Laurenzo a7ff87a922 Sever C++-level dependency on IREE and rebase on the exe and Python interface.
* IREE doesn't have proper install support, so there is some temporary hokey hacking in our CMakeLists.txt to shuttle some symlinks around.
* Reworked the original numpy e2e with IREE test to pipe through iree-translate.
* Removed all of the C++-level dependencies.
* Will generalize and apply to the PyTorch backend in a followup.
2020-11-16 21:32:56 -08:00
Sean Silva 5227d52c26 [RefBackend] Use std.global_memref instead of homegrown thing
This vastly simplifies our code, allowing deleting multiple ops,
simplifying multiple passes, and removing a whole pass.

Now `refback` dialect is down to one op (refback.alloc_memref, which
simplifies allocations to just take a shape instead of individual
extents).
2020-11-13 18:43:50 -08:00
Stella Laurenzo 6850295ec5 Teach cmake how to find the installed PyTorch.
* In most situations, this eliminates the need to explicitly set a path to the Torch cmake files.
* Also upgrades to new Python3 find package. (should eliminate 2.x mismatches)
* Since PyTorch is located by asking Python where it is, this eliminates a lot of causes of mismatch. (one source of truth)
2020-11-13 17:19:25 -08:00
Sean Silva 32388d938b Make some passes run on FuncOp so they can run in parallel. 2020-11-13 16:12:18 -08:00
Sean Silva 482791fa4a Bump llvm-project to 703ef17e7a0a0f51e1d000bb1f71ad437a9933e4
Date:   Fri Nov 13 15:27:29 2020 -0800
2020-11-13 16:12:18 -08:00
Stella Laurenzo 47ac80491c Delete old PyTorch 1.3 type dispatch oriented code paths.
* We aren't quite at e2e parity, but we aren't going back and the old path is bit-rotted.
2020-11-12 22:27:05 -08:00
Stella Laurenzo e359167562 Fix dispatch of arange.
* Fixes #107
* I wouldn't say I love what had to be done here. Worth a conversation with the PT devs (probably as part of a rollup of a bunch of this stuff).
2020-11-12 22:07:23 -08:00
Stella Laurenzo b4c7ae1e0c Repurpose numpy-compiler compiler/runtime flow for PyTorch.
* A bit gross because I took the chance to upgrade all of the backend bits to the new MLIR Python bindings and we still co-mingle the old and new for now.
* Since the Python-created PassManagers are configured for explicit nesting, I had to upgrade some of the pass pipelines to be explicit.
* The demo in mul_maximum_e2e.py now compiles, runs through PyTorch and through the JIT, prints and asserts the same results.
* I am not claiming that the API in this patch is the prettiest: it directly uses low-level APIs, and there should eventually be an intervening high-level API.
2020-11-11 10:38:13 -08:00
Stella Laurenzo d1488c8572 Move existing npcomp.compiler -> npcomp.compiler.numpy.
* Makes room for the pytorch compiler.
* Some common things can be hoisted from the numpy side but some more consolidation needs to happen first.
2020-11-10 19:26:40 -08:00
Sean Silva 1c7c362e29 [TCP] Replace tcp.matmul with linalg.matmul.
This involved adding a `tcp.splatted` op to splat a dynamically sized
init tensor. See rationale in TCPOps.td docs.

One interesting observation is that when lowering tcf.matmul to
linalg.matmul, we need to both 1) create the error checks and 2)
calculate a shape transfer function to create the init tensors.
Previously, 2) was deferred to bufferizing tcp.matmul later. I'm not
sure if this is a conflation of concerns or not. For now, it's not a big
burden.
2020-11-10 18:58:28 -08:00
Sean Silva 0427aacb0b [TCP] Replace elementwise ops with std elementwise ops. 2020-11-10 18:58:28 -08:00
Sean Silva ceab22cf90 Bump llvm-project to 53a0d45db6d0f33dfbb724c99ce2560ae25473c2
Date:   Wed Oct 28 13:25:48 2020 -0700

- fixup for func syntax change
2020-11-10 15:22:46 -08:00
Stella Laurenzo 36d750ca89 Default to -DLLVM_LINK_LLVM_DYLIB=ON.
* We're building libLLVM.so anyway. Saves a lot of time/space to link tools against it.
* MLIR tools do not yet respect this (but it doesn't seem to hurt).
2020-11-09 14:13:42 -08:00
Stella Laurenzo 4b10fe94fe Bump llvm-project to head.
* Incorporates a dep on the new MLIRPublicAPI shared library.
* More work is needed to further separate npcomp between public API and impl libraries, but amalgamating them will hold until then.
2020-11-08 17:31:03 -08:00
Stella Laurenzo 966253fb11 Bump llvm-project to pick up python extension install fix. 2020-11-06 16:49:42 -08:00
Stella Laurenzo e60dc2470e Add aten.maximum op and conversions from aten->tcf.
* Conversions are very simple, supporting mul, maximum and add (alpha=1 only).
* Example added with the pass pipeline needed to run it.
* Much of the golden path is still missing, but this is sufficient for such simple cases.
2020-11-04 17:20:54 -08:00
Stella Laurenzo 6c702b149f Add a number of kernels and new patterns.
* convolution, convolution_backward, _log_softmax, _log_softmax_backward_data, nll_loss_forward, nll_loss_backward, nll_loss2d_forward, nll_loss2d_backward, copy_
* Extends the recognition logic and metadata for handling inplace transformations, optional tensors, ints, lists and dropped args.
* The kernel_calls generated by test_conv_nllloss_grads.py now convert to ATen.
* The result *almost* comes out as a pure tensor program, with the exception of the copy_ op, which I will deal with in followup work.
* More progress on #97
2020-11-04 14:36:59 -08:00
Sean Silva 3dab9056f0 Bump llvm-project to eb8d386d513bf4243d0adb814d862af25b8c4e2f
Two changes:
- no more "verifyPasses" constructor arg for PassManager
- OpPassManager defaults to requiring explicit "nest" calls when created
via the C++ API. The behavior upstream for mlir-opt still obeys the
"implicit" mode, so I just slapped that onto all our pass managers.

I pinged https://reviews.llvm.org/D90671 to get a signal for whether we
are expected to migrate to explicit mode. If so, I'll do that too later.
2020-11-04 14:14:46 -08:00
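A hedged sketch of the two nesting modes in the MLIR C++ pass-manager API of roughly that era; header paths and the home of `FuncOp` have moved around since, so treat the includes as approximate rather than authoritative.

```cpp
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

void buildPipelines(mlir::MLIRContext &context) {
  // Explicit nesting: function-level passes must be added under a nested
  // pass manager rather than directly on the module-level one.
  mlir::PassManager explicitPM(&context);
  mlir::OpPassManager &funcPM = explicitPM.nest<mlir::FuncOp>();
  funcPM.addPass(mlir::createCanonicalizerPass());

  // Implicit nesting (the mode "slapped onto" the npcomp pass managers above):
  // op-specific passes added at the top level get nested automatically,
  // matching mlir-opt's default behavior.
  mlir::PassManager implicitPM(&context, mlir::PassManager::Nesting::Implicit);
  (void)implicitPM;
}
```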
Stella Laurenzo 59b7c559f4 Tweak build flags for efficiency and document building without a container.
* Enables -gsplit-dwarf for both LLVM and NPCOMP, reducing the occurrence of the ~GB scale binaries.
* CMake shared linking seems incompatible with this, so shared objects are still "too big" but there are few of them.
* Reduces disk thrash on clean/install of everything.
2020-11-03 13:46:46 -08:00
Sean Silva 57e58b9272 [RefBackend] Use upstream func-bufferize pass.
Now, the only bufferization we have left is lowering tensor constants to
memref, which will hopefully proceed soon after Rahul's new
std.global_memref lands + the lowering to LLVM IR. Then I'll port
LowerConstantTensorsToMemref to upstream and we'll be 100% upstream
bufferization, except for our local TCP dialect (which will probably go
away and be replaced by std elementwise + linalg named ops on tensors :)
).
2020-11-02 17:38:33 -08:00
Sean Silva 94bee9ec23 Bump llvm-project to 773ad135a30dbe0f969086e3ed518ab17502e9f5 2020-11-02 17:38:33 -08:00
Harsh Menon c2d3820e48 Fix insertion point bug #102
The current code was inserting all build_list ops
after the last constant op since it was assuming that all
elements being passed in were constants.

This patch replaces that logic with a new function that
inserts the build_list ops before the terminator.

Also modifies test_export_conv2d_fwd.py since its output
no longer matches.

TEST: Added test_export_cat.py which is the code in #102
2020-11-02 16:41:26 -08:00
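A hedged C++ sketch of the general fix described above; the helper is hypothetical and not the actual importer code, but `OpBuilder::setInsertionPoint` and `Block::getTerminator` are the relevant MLIR hooks.

```cpp
#include "mlir/IR/Block.h"
#include "mlir/IR/Builders.h"

// Hypothetical helper: position the builder so newly created ops (e.g. the
// build_list ops mentioned above) are emitted immediately before the block
// terminator, after all of their operands have been defined.
void setInsertBeforeTerminator(mlir::OpBuilder &builder, mlir::Block *block) {
  builder.setInsertionPoint(block->getTerminator());
  // Equivalent shortcut: builder = mlir::OpBuilder::atBlockTerminator(block);
}
```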
Stella Laurenzo 0c73c535d6 Capture backward conv and copy_ kernels.
* This is sufficient to capture the forward and backward pass and gradients of a convolutional model with an nllloss.
* As with the forward conv, the backward conv is a special case wrapped in an enigma on the PyTorch side. There aren't many like it, so special casing is just what we do.
* When I traced this, I found that the copy_ op is not yet boxing compatible so I had to map it manually. If there are many more like this, I'll probably do something a bit more clever to reduce duplication.
* This exposes new signature patterns that will need to be handled by the ATen lowering. Will take care of that next: It will be nice to have an e2e of a non-trivial case with full gradients.
* Fixes #97.
2020-10-30 22:59:26 -07:00
Sean Silva 1874bf5eb1 NFC: Clean up some minor nits
- Remove GreedyPatternRewriteDriver.h from files that don't need it
- fix typo shouldBeCloned -> wouldBeCloned
2020-10-30 18:48:25 -07:00
Sean Silva f9c2f8eb0d [RefBackend] Use upstream SCF bufferization pass. 2020-10-30 18:12:41 -07:00
Sean Silva 0761df9f58 Bump llvm-project to 72ddd559b8aafef402091f8e192e025022e4ebef
- Fixup to OpBuilderDAG
- Update for affine map naming
2020-10-30 18:12:41 -07:00
Aaron J Arthurs 29c715b6b1 Add TCP mul test 2020-10-30 15:11:52 -07:00
Stella Laurenzo 8d98dd4551 Support optional args/returns and other odds and ends.
* None's out Device? args.
* Emits bool tensors if needed.
* Adds some stderr tracing to better see what is going on.
* Test case that exercises NLLLoss.
* This test case emits something for backward calculations but there are some issues still to be worked out, so that part is left out of the test case.
* Progress on #97
2020-10-30 14:50:28 -07:00
Stella Laurenzo a3f4db9fe8 Bump llvm-project to c8c07b76b2cf2ada8e7ec132f7f57b97d76743cf.
* Several NFC changes to signatures/includes.
2020-10-29 15:25:55 -07:00
Marius Brehler 30adf9e6b0 Fix TCP_MulOp tablegen definition 2020-10-28 19:28:15 +01:00
Stella Laurenzo c08935a418 Rewrite ATen ODS code generator to be based on new op registry and new signature recognition system.
* Deletes prior code generator from previous attempt (moved some of it into this one).
* Renames old generated tablegen source to "Legacy".
* Generates ODS and import rules for most binary and unary arithmetic ops.
* Removes old generated ops and integration tests that were testing details of the prior setup.
2020-10-28 10:37:37 -07:00
Aaron J Arthurs 94ea6f7c92 [RefBackend] Support element-wise multiply op
Register the following for the multiply op:
- tcf.mul
- tcp.mul
- TCF->TCP lowering
- Shape transfer, broadcasted multiplicands
- Lower to standard `MulFOp` op
2020-10-27 19:41:23 -07:00