Commit Graph

64 Commits (e2f862cb8538846d205ac0a837faddb50d65174f)

Marius Brehler 1f1abda179
Don't explicitly set MLIR_PDLL_TABLEGEN_EXE (#1262)
With llvm/llvm-project@91b6f76, the variable `MLIR_PDLL_TABLEGEN_EXE` is
set as a cache variable in MLIR upstream.

Likely requires an update of externals/mlir-hlo to
tensorflow/mlir-hlo@4f2a00b or later.
2022-08-22 16:45:56 +02:00
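For context, a hedged sketch of the cache-variable pattern referenced above: MLIR upstream now declares the variable as a cache entry (exact form assumed; see the referenced LLVM commit), so external projects such as torch-mlir should not overwrite it with a plain `set()`:

```
# Roughly what MLIR upstream now provides (shape assumed):
set(MLIR_PDLL_TABLEGEN_EXE mlir-pdll CACHE STRING "Path to the mlir-pdll tool")
# A user (e.g. a cross-compiling build) can still override it at configure time:
#   cmake ... -DMLIR_PDLL_TABLEGEN_EXE=/path/to/host-build/bin/mlir-pdll
```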
Henry Tu ba17a4d6c0
Reenable LTC in out-of-tree build (for real this time) (#1205)
* Fix OOT LTC CI build failure

* Disable LTC during macOS package gen

* Add more details about static TorchMLIRJITIRImporter library
2022-08-19 15:25:00 -04:00
Jae Hoon (Antonio) Kim 0af55781ae
Propagate device data names (#1157)
* Propagate device data names

* Address PR comment

* Add example usage

* Add test for device data names

* Make TorchMlirComputation fields protected

* Add lazy backend device data name unit tests

* Disable lazy backend tests if LTC is disabled

* Add comments
2022-08-16 09:30:22 -04:00
Ashay Rane 606f4d2c0e
build: streamline options for enabling LTC and MHLO (#1221) 2022-08-12 23:49:28 -07:00
Marius Brehler 747356186f
Don't set MLIR_TABLEGEN_EXE (#1197)
With llvm/llvm-project@112499f landed, `MLIR_TABLEGEN_EXE` is given as a
cache variable in the MLIR core project. Other external projects, such
as torch-mlir, should not set the variable as this breaks
cross-compilation.
2022-08-09 16:06:12 +02:00
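A hedged illustration of the cross-compilation point above: when host and target differ, a host-built mlir-tblgen must be used, so the cache variable is supplied from outside rather than overwritten by an external project (the path is illustrative):

```
  -DMLIR_TABLEGEN_EXE=/path/to/host-build/bin/mlir-tblgen \
```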
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
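Regarding the C++17 switch in particular, a minimal sketch of how a CMake project typically raises the language standard; whether the torch-mlir/LLVM build uses exactly this mechanism is not recorded in the commit message:

```
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
```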
Henry Tu 2c3b3606d0 Resolve remaining LTC CI failures (#1110)
* Replace CHECK_EQ with TORCH_CHECK_EQ

* Check value of TORCH_MLIR_USE_INSTALLED_PYTORCH during LTC build

* Update LTC XFAIL with NewZerosModule ops

* Explicitly blacklist _like ops

* Automatically blacklist new_/_like ops

* Prune away unused Python dependencies from LTC

* Add flag to disable LTC

* Autogen dummy _REFERENCE_LAZY_BACKEND library when LTC is disabled

* Implement compute_shape_var

* Removed Var tests from XFAIL Set

* XFAIL tests using _local_scalar_dense or index.Tensor

* Add StdDim tests to XFAIL set

* Autogen aten::cat
2022-07-30 09:40:02 -04:00
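A hedged sketch of the configure-time knobs touched above: the installed-PyTorch check and a flag to turn LTC off. `TORCH_MLIR_USE_INSTALLED_PYTORCH` is named in the commit message; `TORCH_MLIR_ENABLE_LTC` is assumed here as the name of the disable flag, and the values are illustrative:

```
  -DTORCH_MLIR_USE_INSTALLED_PYTORCH=ON \
  -DTORCH_MLIR_ENABLE_LTC=OFF \
```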
Henry Tu 47bb38d180 Reference Lazy Backend (#1045)
* Changed Example MLIR backend to Reference MLIR backend

* Moved reference_ltc_backend into csrc

* Merged sys_utils.h

* Renamed reference_ltc_backend to reference_lazy_backend

* Addressed review comments

* Update docs with new library name

* Removed _REFERENCE_LAZY_BACKEND from .gitignore

* Added reference_lazy_backend to the TorchMLIRPythonModules dependency list

Fixed typo in `ltc_examples.md`

Missed instance where `ltc_backend` was used instead of `lazy_backend`.
2022-07-30 09:40:02 -04:00
Henry Tu a605fe279c Add example Torch MLIR LTC Backend (#725) 2022-07-30 09:40:02 -04:00
Tanyo Kwok b80ce79b9f
[MHLO] Init MHLO view like op patterns (#1090)
* [MHLO] Init MHLO view like op patterns

See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* update filecheck test cases

* rebase, remove chlo and clang-format
2022-07-22 15:18:18 +08:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Clément Fournier 886ad169e5
Fix out-of-tree build of torch-mlir-dialects (#726)
Follows up on #623 for out-of-tree builds of torch-mlir, which
added building `torch-mlir-dialects` as a subdirectory.

Our goal is to support both in-tree and out-of-tree builds of
`torch-mlir` with minimum hassle, for instance by using the same
variable names in both setups.

Specific changes to `externals/llvm-external-projects/torch-mlir-dialects/CMakeLists.txt`:
- We use `MLIR_FOUND` to detect that it is being built as a subdirectory
and the llvm+mlir cmake infrastructure is already set up (via
find_package in the parent build) as opposed to an in-tree build.
- For in-tree, the setting of variables and loading of llvm+mlir cmake
infrastructure is now conditionally performed.
- For in-tree, the names of cmake variables being defined are
adjusted to match those `llvm-project` makes available through
`find_package(MLIR REQUIRED CONFIG)`, under the assumption that those
are the more "standardized" names.

Co-authored-by: Clément Fournier <clement.fournier@amd.com>

Co-authored-by: Liam Fitzpatrick <liam.fitzpatrick@xilinx.com>
2022-04-04 11:37:28 +02:00
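A condensed, hedged sketch of the `MLIR_FOUND` branching described above (the real torch-mlir-dialects/CMakeLists.txt is more involved; variable values here are illustrative):

```
if(MLIR_FOUND)
  # Built as a subdirectory of an out-of-tree torch-mlir build: the parent
  # already ran find_package(MLIR REQUIRED CONFIG), so the llvm+mlir cmake
  # infrastructure is set up and its variables can be used directly.
  message(STATUS "torch-mlir-dialects: using MLIR from the parent build")
else()
  # In-tree build: define variables under the names that
  # find_package(MLIR REQUIRED CONFIG) would provide, then load the
  # llvm+mlir cmake modules.
  set(MLIR_MAIN_SRC_DIR "${LLVM_MAIN_SRC_DIR}/../mlir")
  set(MLIR_CMAKE_DIR "${MLIR_MAIN_SRC_DIR}/cmake/modules")
  list(APPEND CMAKE_MODULE_PATH "${MLIR_CMAKE_DIR}")
  include(TableGen)
  include(AddLLVM)
  include(AddMLIR)
endif()
```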
Ahmed S. Taei 8383497704
[NFC] Rename external -> externals (#699) 2022-03-26 09:12:27 -07:00
Stephen Neuendorffer 9b2613533b Ensure that torch-mlir-dialects is built when we're out of tree
It's unclear to me what the right layering is here: Are you expecting
torch-mlir-dialects to always get built with LLVM?  This is pretty
breaking for us, if so.
2022-02-28 11:41:31 -05:00
Stephen Neuendorffer 330042aa4c
Don't override MLIR_TABLEGEN_EXE (#622)
This should be set elsewhere depending on the build configuration.
In particular, we need to be careful when cross-compiling to pick
up the host mlir-tblgen.
2022-02-25 14:41:09 -08:00
Yi Zhang 869daf3c22 Add TMTensor dialect to torch-mlir
This is intended to explore support for non-structured ops that can't
be modeled by the Linalg dialect. `tm_tensor.scan` and `tm_tensor.scatter`
are added as the first such ops. The dialect should aim to be
upstreamed in the future.
2022-02-15 16:45:38 -05:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
stephenneuendorffer 614b889dc6
Enable python extensions when building out of tree (#363) 2021-10-27 17:04:12 -07:00
Stella Laurenzo fe69bb339c
Bump llvm-project to 3d92722f74993969243d1400bc3257ca3d03902f. (#369)
* Picks up Python configure changes (was pinned to a bad intermediate commit).
* Uses the new mlir_configure_python_dev_packages() to ensure CMake python is found consistently.
* Fixes the JIT importer to build as a MODULE vs SHARED (needed for linking to Python as a module, per config changes).
* Adds some notes to the README to help folks build a smaller set focused just on this project.
2021-10-21 21:09:00 -07:00
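On the MODULE-vs-SHARED point above: a Python extension has to be a CMake MODULE library that the interpreter loads at runtime, not a SHARED library that other targets link against. A minimal, hypothetical illustration (target and source names are not from the commit):

```
# Hypothetical Python extension target; building it as MODULE (not SHARED)
# is what makes it loadable as a Python module.
add_library(jit_ir_importer MODULE JITIRImporter.cpp)
# Python expects the bare module name, so drop the "lib" prefix.
set_target_properties(jit_ir_importer PROPERTIES PREFIX "")
```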
Stephen Neuendorffer fc5d03c081 post-commit review cleanup for #356
see https://github.com/llvm/torch-mlir/pull/356
2021-10-08 11:28:40 -07:00
stephenneuendorffer df12cc0c37
Add support for out-of-tree building (#356)
Currently I haven't attempted to get the python bits working.
Instead the python bits are forcibly disabled.
2021-10-07 21:44:15 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva d8f603a4e5 Remove old stuff in prep for move-to-root. 2021-09-27 17:11:08 -07:00
Sean Silva 404bd74ddf Port the bulk of the remaining code to torch-mlir
This leaves no real code outside torch-mlir.

This also renames the "npcomp backend contract" to "linalg on tensors
backend contract" as the name of the abstraction layer that RefBackend
(IREE too) accepts.
2021-09-27 12:48:33 -07:00
Sean Silva a99cbeeb7e Move TorchConversion dialect and TorchTo* into torch-mlir 2021-09-23 21:39:31 -07:00
Sean Silva 1a0b953ea7 Eliminate almost all mentions of IREE.
A few remain in examples/docs that will naturally be updated in due
time.

This regresses the list support and the general direction of more widely
supported control flow, lists/dicts/globals that we were going for with
the TorchScript path. The idea is that we are deferring that work to
make torch-mlir a very clean standalone thing. We will reboot it,
probably using some of the tools of iree_pydm to make it simpler, and in
a more natural place (such as an iree-torch repo that depends on IREE and
torch-mlir to build a working PyTorch frontend solution for IREE -- it
was really weird that npcomp depended on IREE).
2021-09-22 16:06:38 -07:00
Sean Silva a25163fbfa Remove old RefBackend
It is superseded by the new one.
2021-09-22 15:33:28 -07:00
Sean Silva 6d8e7f1bb1 Implement Python relayout from #311
Fixes https://github.com/llvm/mlir-npcomp/issues/311

The key change is that TorchPlugin is folded into
`torch_mlir.dialects.torch.importer.jit_ir` (it imports the PyTorch
JIT's IR, so that's a good, scoped name for it).
The CMake option `-DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=OFF` disables it,
which allows building without a PyTorch native dependency.
2021-09-21 09:29:40 -07:00
Sean Silva 68fefe7e1f Remove NPCOMP_ENABLE_IREE CMake flag.
Our new dependency management solution relies:
- on the C++ side with the public iree-dialects project, which we
  include and are using as representative of some missing upstream
  ops (so we treat them "as if" they were upstream, with the hope of
  upstreaming them after some codevelopment has happened)
- on the Python side, with simple PYTHONPATH manipulation or installed
  Python packages. No CMake stuff required.
2021-09-17 09:27:49 -07:00
Sean Silva 0eb767ea45 Remove frontends/pytorch directory.
It just contained the e2e testing framework. We now fold it into the
main project to reduce complexity.

- `frontends/pytorch/python/` -> `python/torch_support`
- `frontends/pytorch/e2e_testing -> e2e_testing`
- `frontends/pytorch/examples -> examples`
- `frontends/pytorch/test` -> `python/test`
- `torch_mlir_torchscript` python module -> `npcomp_torchscript`
- `torch_mlir_torchscript_e2e_test_configs` python module ->
  `npcomp_torchscript_e2e_test_configs`

This also changes the license of a handful of files from the
"pytorch-style" license to the regular LLVM/npcomp license. The only
people who committed to those files were myself and Yi.
2021-09-17 09:27:49 -07:00
Sean Silva b6be96d722 [torch-mlir earthmoving (2/N)] Python code movement.
This moves the bulk of the Python code (including the Torch interop)
from `frontends/pytorch` into `torch-mlir/TorchPlugin`. This also
required reconciling a bunch of other Python-related stuff, like the
`torch` dialects.

As I did this, it was simpler to just remove all the old numpy/basicpy
stuff because we were going to delete it anyway and it was faster than
debugging an intermediate state that would only last O(days) anyway.

torch-mlir has two top-level python packages (built into the
`python_packages` directory):

- `torch_mlir_dialects`: `torch` dialect Python bindings (does not
  depend on PyTorch). This also involves building the aggregate CAPI for
  `torch-mlir`.
- `torch_mlir`: bindings to the part of the code that links against
  PyTorch (or C++ code that transitively does).

Additionally, there remain two more Python packages in npcomp (but
outside `torch-mlir`):

- `npcomp_torch`: Contains the e2e test framework and testing configs
  that plug into RefBackend and IREE.
- `npcomp_core`: Contains the low-level interfaces to RefBackend and
  IREE that `npcomp_torch` uses, along with its own
  `MLIR_PYTHON_PACKAGE_PREFIX=npcomp.` aggregation of the core MLIR
  python bindings. (all other functionality has been stripped out)

After all the basicpy/numpy deletions, the `npcomp` C++ code is now very
tiny. It basically just contains RefBackend and the `TorchConversion`
dialect/passes (e.g. `TorchToLinalg.cpp`).

Correspondingly, there are now 4 main testing targets paralleling the
Python layering (which is reflective of the deeper underlying dependency
structure)

- `check-torch-mlir`: checks the `torch-mlir` pure MLIR C++ code.
- `check-torch-mlir-plugin`: checks the code in `TorchPlugin` (e.g.
  TorchScript import)
- `check-frontends-pytorch`: Checks the little code we have in
  `frontends/pytorch` -- mainly things related to the e2e framework
  itself.
- `check-npcomp`: Checks the pure MLIR C++ code inside npcomp.

There is a target `check-npcomp-all` that runs all of them.
The `torch-mlir/build_standalone.sh` script does a standalone build of
`torch-mlir`.

The e2e tests (`tools/torchscript_e2e_test.sh`) are working too.

The update_torch_ods script now lives in
`torch-mlir/build_tools/update_torch_ods.sh` and expects a standalone
build.

This change also required a fix upstream related to cross-shlib Python
dependencies, so we also update llvm-project to
8dca953dd39c0cd8c80decbeb38753f58a4de580 to get
https://reviews.llvm.org/D109776 (no other fixes were needed for the
integrate, thankfully).

This completes most of the large source code changes. Next will be
bringing the CI/packaging/examples back to life.
2021-09-15 13:40:30 -07:00
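After running `torch-mlir/build_standalone.sh` as described above, the new check targets can be driven through CMake; a hedged sketch, with the build directory path assumed rather than taken from the commit:

```
  cmake --build /path/to/torch-mlir/build --target check-torch-mlir
  cmake --build /path/to/torch-mlir/build --target check-torch-mlir-plugin
```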
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it is mostly mechanical changes and
renames. As a quick summary (we can change this down the road easily)
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract" which is cleaned up IR (inlining, simplifcation,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
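For reference, a hedged sketch of how such an LLVM_EXTERNAL_PROJECTS-compatible project is typically configured, analogous to the npcomp invocation shown further down this log; the project name and source path here are assumptions, not taken from the commit:

```
  -DLLVM_EXTERNAL_PROJECTS=torch-mlir \
  -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=/path/to/npcomp/external/torch-mlir \
```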
Sean Silva cab8d922ec Add TorchToIREE and factor out TorchConversion dialect.
This converts a basic list op (torch.prim.ListConstruct) to the IREE
dialect.

```
    def forward(self, x: float):
            return [x, x]
```

turns into:

```
builtin.func @forward(%arg0: !torch.float) -> !torch.list<!torch.float> {
  %0 = torch.prim.ListConstruct %arg0, %arg0 : (!torch.float, !torch.float) -> !torch.list<!torch.float>
  return %0 : !torch.list<!torch.float>
}
```

which turns into:

```
builtin.func @forward(%arg0: f64) -> !iree.list<f64> {
  %c1 = constant 1 : index
  %c0 = constant 0 : index
  %c2 = constant 2 : index
  %0 = iree.list.create %c2 : !iree.list<f64>
  iree.list.set %0[%c0], %arg0 : !iree.list<f64>, f64
  iree.list.set %0[%c1], %arg0 : !iree.list<f64>, f64
  return %0 : !iree.list<f64>
}
```

As part of doing this, I realized that it was time to formalize the IR
form that we reach right before running TorchTo{Linalg,Std,...}. We now
call it the "Torch backend contract". We then lower the "Torch backend
contract" to the "npcomp backend contract", which involves the new
TorchConversion (`torch_c`) dialect, which holds ops that need to
operate on both the npcomp backend types (e.g. builtin tensors, i1, IREE
list, etc.) and the `!torch` types.

This made more sense, as I realized that if I didn't factor out
`torch_c` then the Torch dialect would have a dependency on IREE
dialect (we previously didn't notice this was an issue because we only
depended on `builtin` types), which seemed wrong to me.

Recommended review order:
- TorchToIREE.cpp / `TorchToIREE/basic.mlir`
- Look at the new structure of createTorchScriptToNpcompBackendPipeline.
  It now lives in TorchConversion/Transforms/Passes.cpp and cleanly
  calls into `Torch::createTorchScriptToTorchBackendPipeline` for the
  frontend lowering to the Torch backend contract.
- Mechanical change extracting
  `torch_c.{to,from}_{i1,i64,f64,builtin_tensor,iree_list}` into a new
  TorchConversion dialect, and a few passes specific to the lowering
  from the Torch backend contract to the npcomp backend contract.
- Minor fixes to TorchToLinalg.cpp to use unconverted operands (now that
  we convert lists as part of operand materialization, we need to use
  the original operands). Also added test for AtenMaxPool2dOp and fixed
  m_TorchConstantIntList.
- TmpDeleteDeadIREELists pass. Temporary pass for deleting dead IREE lists that
  are created as part of operand materialization for conv/max pool/avg pool ops
  in TorchToLinalg.
2021-08-16 15:01:58 -07:00
M4tr1xt4ng 78fd07da5f Deal with CMP0116 2021-08-12 09:40:55 -07:00
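CMP0116 is the CMake policy (introduced in CMake 3.20) controlling how DEPFILEs from add_custom_command are transformed for the Ninja generator. A minimal, hypothetical way a project silences the warning; whether this commit pinned OLD or NEW behavior is not recorded here:

```
if(POLICY CMP0116)
  cmake_policy(SET CMP0116 OLD)
endif()
```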
Stella Laurenzo 2dbab50444
Rework the python build to a static assembly of MLIR+NPCOMP (#251)
* Adapt to python build system updates.

* Bump llvm to 310c9496d80961188e8d8f8ad306cdf44bd7541f (includes python build updates)
* Adds refback C-API.
* Re-layers all python builds.
* Rework CI.
2021-07-27 16:10:10 -07:00
mikeurbach 0f6a65a1c5
Enable building using LLVM_EXTERNAL_PROJECTS. (#152)
This allows building NPCOMP as an external project of LLVM, similar to
how CIRCT can be built: https://github.com/llvm/circt/pull/227.

The CMake options to use this build style look like this:

```
  -DLLVM_EXTERNAL_PROJECTS=npcomp \
  -DLLVM_EXTERNAL_NPCOMP_SOURCE_DIR=/path/to/mlir-npcomp \
```
2021-01-26 11:43:43 -07:00
Stella Laurenzo f6d7ee06ef Make torch_mlir compatible with binary PyTorch installations.
* This has been anticipated for a long time in that it is quite hard to keep C++ binary compatibility across a system landscape as diverse as PyTorch, LLVM, and this project. This is why we based the PyTorch extension on the MLIR and NPCOMP C APIs only: that is the only sane linkage story for the entire matrix.
* Removes the few LLVM'isms in torch_mlir that had snuck in, using either STL or PyTorch support utilities. The new rule here is that LLVM C++ includes are forbidden at this level and (as stated in the design), torch_mlir should use the PyTorch runtime and support libraries (not introduce an incidental C++ dependency on LLVM).
* Also deletes mnist-playground as it was proving impossible to keep the grid of PyTorch vs system ABI divisions functioning. I am open to a less drastic course here (optional/disabled by default?)
* This gets us pretty close to just using PyTorch's extension builder API, which will be nice for distribution (i.e. it integrates well with the PyTorch ecosystem for deployment). I ended up just simplifying the in-tree CMake support for now.
* Fixes #138
2020-12-14 09:51:00 -08:00
Stella Laurenzo f03225b1f1 Bump llvm-project to f4f8a67aaf13bc66a2b7d55561b14a3724a5e0de.
* Incorporates source fixes.
* Uses upstream pybind11 detection logic.
* Patches CI.
* This may break the CI, which will need to be fixed manually in a followup.
2020-11-22 13:14:44 -08:00
Stella Laurenzo 4f9c9ecda0 Fix optional Torch package lookup.
* Days since CMake-is-not-a-language failure: 0
2020-11-16 21:41:59 -08:00
Stella Laurenzo a7ff87a922 Sever C++ level depend on IREE and rebase on exe and python interface.
* IREE doesn't have proper install support, so there is some temporary hokey hacking in our CMakeLists.txt to shuttle some symlinks around.
* Reworked the original numpy e2e with IREE test to pipe through iree-translate.
* Removed all of the C++-level dependencies.
* Will generalize and apply to the PyTorch backend in a followup.
2020-11-16 21:32:56 -08:00
Stella Laurenzo 6850295ec5 Teach cmake how to find the installed PyTorch.
* In most situations, this eliminates the need to explicitly set a path to the Torch cmake files.
* Also upgrades to new Python3 find package. (should eliminate 2.x mismatches)
* Since PyTorch is located by asking Python where it is, this eliminates a lot of causes of mismatch. (one source of truth)
2020-11-13 17:19:25 -08:00
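A hedged sketch of the "ask Python where PyTorch lives" approach described above; `torch.utils.cmake_prefix_path` is the usual hook, and the variable names are illustrative rather than taken from the commit:

```
find_package(Python3 REQUIRED COMPONENTS Interpreter Development)
# Ask the active Python interpreter where its torch installation keeps the
# CMake package files, then let find_package(Torch) pick them up.
execute_process(
  COMMAND "${Python3_EXECUTABLE}" -c "import torch; print(torch.utils.cmake_prefix_path)"
  OUTPUT_VARIABLE TORCH_CMAKE_PREFIX_PATH
  OUTPUT_STRIP_TRAILING_WHITESPACE)
list(APPEND CMAKE_PREFIX_PATH "${TORCH_CMAKE_PREFIX_PATH}")
find_package(Torch REQUIRED)
```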
Stella Laurenzo 59b7c559f4 Tweak build flags for efficiency and document building without a container.
* Enables -gsplit-dwarf for both LLVM and NPCOMP, reducing the occurrence of the ~GB scale binaries.
* CMake shared linking seems incompatible with this, so shared objects are still "too big" but there are few of them.
* Reduces disk thrash on clean/install of everything.
2020-11-03 13:46:46 -08:00
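A hedged sketch of how split DWARF can be requested at configure time; LLVM has its own LLVM_USE_SPLIT_DWARF option, while other code would need the compiler flag passed explicitly. The exact variables this commit touched are not recorded in the message:

```
  -DLLVM_USE_SPLIT_DWARF=ON \
  -DCMAKE_CXX_FLAGS=-gsplit-dwarf \
```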
Stella Laurenzo af4edb63ae Start reworking towards a shared library build.
* Need to have a dag of shared library deps in order to interop across python extensions (as presented in ODM).
* Introduced add_npcomp_library and friends to mirror the MLIR setup.
* Adds a libNPCOMP.so shared library.
* Redirects tools and extensions to link against libNPCOMP.so (instead of static libs).
* Moves all libraries to lib/, all binaries to bin/ and all python extensions to python/. The invariant is that the rpaths are setup to have a one level directory structure.
* Reworks the _torch_mlir extension to build like the others (still need to come up with a consolidated rule to do this instead of open-coding it).
* Includes an upstream version bump to pick up needed changes.

Sizes with dynamic linking (stripped, release, asserts enabled):
  libNPCOMP.so: 43M (includes much of the underlying LLVM codegen deps)
  libMLIR.so: 31M
  _npcomp.so: 1.6M (python extension)
  _torch_mlir.so: 670K (python extension)
  npcomp-capi-ir-test: 6.3K
  npcomp-opt: 351K
  npcomp-run-mlir: 461K
  mnist-playground: 530K

Still more can be done to normalize and optimize but this gets us structurally to the starting point.
2020-10-09 16:02:58 -07:00
stephenneuendorffer 31b3041e88
Add pytorch interface to ATen Dialect (#30)
This patch adds a pytorch interface to npcomp.  This interface is modeled
after pytorch_xla and exposes the MLIR-based flow as a virtual device (similar
to a gpu device or the xla backend).  Usage is intended to be something like:

  dev = torch_mlir.mlir_device()
  t0 = torch.randn((4,4), device=dev)
  t1 = torch.randn((4,4), device=dev)
  t2 = t0 + t1
  t2_mlir = torch_mlir.get_mlir( t2 )
  t2_cpu = t2.to('cpu')

In this case t2_cpu would contain the result of the computation, and t2_mlir
contains the mlir description of the computation.  Note that this also
properly returns backward paths synthesized by pytorch.  There are several
parts of this:

1) A tensor type (implemented by tensor.* and tensor_impl.*)
2) The device modeling (aten_mlir_bridge.*, aten_mlir_device.*, aten_mlir_type*)
3) a temporary IR (implemented by ir.cpp)

There is also a reference lowering directly from the ATen dialect to C
function calls consisting of two parts:

1) The driver that uses the IR to generate MLIR, run Passes and compile the
result using mlir::ExecutionEngine (implemented by jit.cpp and
mlir_gen.cpp)
2) A runtime library implemented by lib/aten_ops.cpp.  Most of the operations
are implemented by callbacks into the torch C++ libraries.

Some aspects of this are known to be less than optimal, in particular:
1) There are some function definitions that don't live in the file corresponding
to their declaration.
2) More aspects of this (e.g. the IR) seem like they should be automatically
generated.
3) It's unclear to me how much of the 'IR' is actually necessary, or whether
MLIR could be created on the fly.

Note that this code is licensed in a way similar to pytorch, with the
intention that eventually (when npcomp reaches some maturity) it should be
pushed there.  (see frontends/pytorch/LICENSE)  The code is also structured
much closer to the pytorch coding style than the LLVM coding style.
2020-08-21 11:22:47 -07:00
stephenneuendorffer 44af7a6d30
[cmake] Updates for basic shared library support (#7)
Mostly this is CMake cleanup.  Several library dependencies are missing, which
is often revealed with shared library builds.  Also, it's generally bad to
link directly against LLVM libraries because it fails when using
LLVM_LINK_LLVM_DYLIB.  MLIR will pull in libLLVM.so, and there will be
duplicate linkage with the the explicit libraries.  There may need to be more
refactoring here.
2020-08-05 14:49:18 -07:00
Stella Laurenzo 3efbbe8735 Misc fixes to enable building/testing on manylinux2014 images.
* Since the manylinux images do not hard-link against python libs (resolving them at runtime), the module must be built without resolving undefined references.
* For some reason, builds on this platform are stricter, exposing dependency ordering issues.
* The CMake bits to build the extension are still somewhat of a mess. I have better versions both upstream and in IREE and will be reconciling. For now, don't look too closely.
2020-08-04 17:26:15 -07:00
Stella Laurenzo 38abe99805 Collapse python_native/ into python/.
* These were separated originally for layering reasons that no longer apply.
* Most of the python extension code is under lib/ with just the module setup in python/.
2020-08-03 17:46:34 -07:00
Stella Laurenzo 571c8b448a Collapse different top level test directories into test/.
* Uses local configs and unsupported annotation to disable optional tests.
* This separation was just an artifact of having initial trouble getting lit setup.
2020-08-03 17:41:16 -07:00
Stella Laurenzo aea05d68d7 Initial python plumbing to interface with the refjit backend. 2020-07-10 22:57:26 -07:00
Sean Silva b4f0cea8fa Rework e2e flow to use new "npcomprt"
This ~totally reworks the existing "runtime" stuff to be more
principled and usable, such as from Python. It's still not fully
production-quality, mainly in the department of memory management (e.g.
it currently leaks memory; we need to figure out "who frees memrefs" +
the analysis and transformation needed to do that (maybe use upstream
buffer allocation pass?)).

The user API is in include/npcomp/runtime/UserAPI.h, though
include/npcomp/JITRuntime/JITModule.h is a friendlier wrapper.

The stuff under {include,lib}/runtime is totally firewalled from the
compiler and tiny (<6kB, though no attention has gone into optimizing
that size). For example, we don't link in libSupport into the runtime,
instead having our own bare bones replacements for basics like ArrayRef
(the JITRuntime helps with bridging that gap, since it *can* depend on
all common LLVM utilities).

The overall feature of npcomprt is that it exposes a module
with multiple function entry points. Each function has arguments and
results that are tensor-valued, and npcomprt::Tensor is the runtime type
that is used to interact with that (and a npcomprt::Ref<T>
reference-counting wrapper is provided to wrap npcomprt::Tensor in the
common case).

From an implementation perspective, an npcomprt module at the
LLVM/object/binary level exposes a single module descriptor struct that
has pointers to other metadata (currently just a list of function
metadata descriptors). All interactions with the npcomp runtime are
keyed off of that module descriptor, including function lookups and
dispatching. This is done to dodge platform ABI issues and also allow
enough reflection to e.g. verify provided arguments.

Most of the compiler-side work here was in LowerToNpcomprtABI and
LowerToLLVM.

Also,
- Rename npcomp_rt/NpcompRt to npcomprt/Npcomprt; it was getting
annoying to type the underscores/caps.
- misc improvements to bash_helpers.sh
2020-07-08 19:36:19 -07:00