Commit Graph

80 Commits (c0634bc996b983f1bcf27510114c2dc72e124ac8)

Author SHA1 Message Date
powderluv fe1237b2a4
Provide a way to override MacOS target and arch (#818)
Useful when we are only building for one architecture.
2022-05-02 09:04:12 -07:00
Prashant Kumar 5192a4e9f3 Remove heavy_deps models that don't get serialized.
BART, BigBird, and GPT2 are not being serialized and are hence removed.
Also changed the script used to obtain the ResNeSt model.
2022-04-29 17:21:25 +05:30
powderluv ef546e1137
Add a script to build and upload M1 snapshots (#801)
Uses the latest snapshot tags and adds the releases to the same asset
directories, so it can be run as a cron job without a GH runner.
2022-04-28 14:50:58 -07:00
Vivek Khandelwal 4635d36efb [MLIR][TORCH] Add heavydep tests for torch benchmarks
This commit adds e2e heavydep tests for the torch benchmarks.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-26 13:22:08 +05:30
powderluv 6d09c98b2f
Fix version information in Release builds (#788)
Env vars seem to be lost in the manylinux Docker build.
Use a version file like IREE does.
2022-04-25 14:13:17 -07:00
powderluv 7d9138f497
Update build_macos_packages.sh (#787)
Set the environment variable and export it since it doesn't seem to get passed down.
2022-04-22 15:48:03 -07:00
Prashant Kumar e9c785b04b Generate backward graph via functorch-aot module
An example demonstrating the extraction of both the forward and the
backward graph via functorch's AOT module is added.
2022-04-22 20:58:35 +05:30
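The following is a minimal sketch (not the example script added in the commit above, whose exact usage may differ) of extracting both graphs via functorch's AOT machinery, assuming the `functorch` package is installed:

```
import torch
from functorch.compile import aot_function

def make_printer(name):
    # The compiler callback receives a torch.fx.GraphModule; returning it
    # unchanged keeps the execution semantics while letting us inspect it.
    def compiler(gm, example_inputs):
        print(f"--- {name} graph ---")
        print(gm.graph)
        return gm
    return compiler

def f(x, y):
    return (x * y).sin().sum()

compiled = aot_function(f, fw_compiler=make_printer("forward"),
                        bw_compiler=make_printer("backward"))
x = torch.randn(4, requires_grad=True)
y = torch.randn(4, requires_grad=True)
compiled(x, y).backward()  # the backward graph is traced and printed here
```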
powderluv 4ef61aa27f
Minor buildsystem fixes (#778)
Sets up auto-pinning of the latest torch-nightly.
2022-04-21 15:53:00 -07:00
powderluv b03eac4224
Enable OSX (Intel, Apple Silicon Builds) (#776)
Update the pinned PyTorch version. Will submit a follow-on PR to bump it.
Also update the artifacts directory.
2022-04-21 10:47:28 -07:00
powderluv cc3a4a58ef
Add oneshot release snapshot for test/ondemand (#768)
* Add oneshot release snapshot for test/ondemand

Add some build scripts to test new release flow based on IREE.
Won't affect current builds; once this works well we can plumb it
in.

Build with manylinux docker

* Fixes a few issues found when debugging powderluv's setup.

* It is optional to link against Python3_LIBRARIES. Check that and don't do it if they don't exist for this config.
* Clean and auditwheel need to operate on sanitized package names (i.e. "torch_mlir" vs "torch-mlir").
* Adds a pyproject.toml file that pins the build dependencies needed to detect both Torch and Python (the MLIR Python build was failing to detect because Numpy wasn't in the pip venv).
* Commented out auditwheel: these wheels are not PyPI-compliant since they weakly link to libtorch at runtime. However, they should be fine to deploy to users.
* Adds the --extra-index-url to the pip wheel command, allowing PyTorch to be found.
* Hack setup.py to remove the _mlir_libs dir before building. This keeps back-to-back versions from accumulating in the wheels for subsequent versions. IREE has a more principled way of doing this, but what I have here should work.

Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
2022-04-21 02:19:12 -07:00
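A hedged sketch of the setup.py cleanup step mentioned above; the package path and helper name are illustrative, not the actual torch-mlir layout:

```
import shutil
from pathlib import Path

def remove_stale_mlir_libs(package_root: Path) -> None:
    """Delete a leftover _mlir_libs directory from a previous build so old
    shared libraries don't get bundled into the next wheel."""
    stale = package_root / "torch_mlir" / "_mlir_libs"
    if stale.exists():
        shutil.rmtree(stale)
```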
Sean Silva b69db60f85 Pin the Python package to the exact PyTorch nightly.
This avoids issues where PyTorch version drift has made things
incompatible.

One caveat is that you will need to specify
`-f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
--pre` on the command line for pip to know where to find the nightly
packages (there is no way around this) -- this is easiest to do by
simultaneously passing `-r requirements.txt` on the pip command line.
2022-04-20 16:47:38 -07:00
powderluv 91d3e7ba15 Remove CCACHE settings and validate on OSX
Builds the whl package for OSX. Need to validate smoke tests next.
2022-04-14 01:32:49 -07:00
Sean Silva 3a96078571 Pin the CI to the latest working PyTorch.
I am investigating the breakage.

Also, fix "externals" rename in setup.py and some cases where we weren't
using `requirements.txt` consistently.

Also, fix a case where the packaging script would get confused due to
".." in the path name.
2022-03-29 15:02:17 -07:00
Sean Silva 52c330cca2 Fix some more uses of "e2e" that I missed in the last commit. 2022-03-28 19:09:56 +00:00
Sean Silva 0378c75b35 Centralize all test serialization logic. 2022-03-28 10:17:13 -07:00
Ahmed S. Taei 8383497704
[NFC] Rename external -> externals (#699) 2022-03-26 09:12:27 -07:00
Prashant Kumar 730cdcd071 Add hugging face `albert-base-v2` in torchscript_e2e_heavydep_tests
`albert-base-v2` for sequence classification is added to the e2e heavydep tests.
2022-03-24 17:43:24 +05:30
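A minimal sketch of how such a model can be constructed and traced, assuming the `transformers` package; the real test lives in `build_tools/torchscript_e2e_heavydep_tests` and may differ in detail:

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "albert-base-v2", torchscript=True).eval()
inputs = tokenizer("torch-mlir compiles PyTorch programs", return_tensors="pt")
# Trace (rather than script) the model so it can be serialized for the tests.
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
```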
Sean Silva 729402c3f4 Reduce compilation time for TorchOps.cpp.inc
The `assemblyFormat` stuff (which generates unrolled, per-op C++ code)
was taking up a lot of compile time, and all the ops are essentially
printed with the same logic. So this PR makes them all call the same
helper function. This is done by using
`let hasCustomAssemblyFormat = 1` and then implementing `FooOp::parse`
and `FooOp::print`.

Additionally, the `Generated*Ops.td` files are all collapsed into just
`GeneratedTorchOps.td` (there is no reason to have the files separate,
since the files are very large anyway so one is always having to search
within them -- editors don't care that the file to search is now a bit
bigger :) ).

This reduces TorchOpsODSGenerated.cpp compile time (which is now
GeneratedTorchOps.cpp) from 39 to 31 seconds on my machine. This is
actually less than I expected, but this PR is an overall cleanup to the
code anyway. The next step will be to introduce (better) functionality
upstream for sharding the TorchOps.cpp.inc file, so that we can truly
parallelize the O(#ops) costs. This is also necessary, because after
this PR, TorchDialect.cpp is now the slowest file to compile, due to the
`addOperations<... all the ops ...>` call, which needs to be sharded
too.
2022-03-21 14:42:26 -07:00
Sean Silva 3734f69119 Remove basic_mt from the heavydep tests
This was an aspirational goal at an earlier stage in the project where
the focus was heavily on programs with state, control flow, and
lists/dicts. We will circle back to such programs likely 2022H2 at some
point, but for now, having this test doesn't add much, since basically
nothing works or is being worked on.
2022-03-15 15:25:53 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
2. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
3. Code and tests for SimplifyShapeCalculations.
4. shape_lib_gen.py
5. Code and tests for new RefineTypes pass.
6. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
7. New ReadOnly trait inferred from the registry.
8. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
2022-03-15 12:41:58 -07:00
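Illustrative examples of what shape functions in this style look like: shapes are plain `List[int]` values and the functions are TorchScript'able Python. The names and helpers below are hypothetical; see `docs/shape_lib.md` and `shape_lib_gen.py` for the real conventions.

```
from typing import List

def unary_same_shape(self: List[int]) -> List[int]:
    # An elementwise unary op produces a tensor with the input's shape.
    return self

def mm_shape(self: List[int], mat2: List[int]) -> List[int]:
    # Shape function for a 2-D matrix multiply.
    assert len(self) == 2 and len(mat2) == 2
    assert self[1] == mat2[0]
    return [self[0], mat2[1]]
```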
Prashant Kumar 126dac3ded CMake build commands fix.
The external projects torch-mlir and torch-mlir-dialects should be
placed inside double quotes.
2022-02-16 20:46:53 +05:30
Yi Zhang 869daf3c22 Add TMTensor dialect to torch-mlir
This is intended to explore support for non-structured ops that can't
be modeled by the Linalg dialect. `tm_tensor.scan` and `tm_tensor.scatter`
are added as the first such ops. The dialect should aim to be
upstreamed in the future.
2022-02-15 16:45:38 -05:00
Sean Silva 4a8d05e4a5 Add torch_mlir snapshot packages.
This closely follows IREE's `schedule_snapshot_release.yml` workflow.

The snapshot releases can be installed with:
```
python -m pip install torch_mlir -f "https://github.com/llvm/torch-mlir/releases"
```
2021-10-06 14:50:31 -07:00
Sean Silva 712445eaa8 Bring back Python packaging.
Will add a CI job that builds and uploads snapshot packages next.
2021-10-05 13:33:30 -07:00
Sean Silva dcab39146f Remove the last mentions of npcomp from torch-mlir
These snuck through.
2021-10-05 20:17:23 +00:00
Yi Zhang fadd76e9b8 E2e for MiniLM-L6-H384-uncased-sst2
Replace the original BertSequenceClassification with this new one.
The ops that need to be supported are identical.
2021-10-05 12:45:19 -04:00
Sean Silva f0ed9e2d8d Fix update_torch_ods.sh 2021-10-01 17:47:25 +00:00
Sean Silva 5b6902e31c Dual license the torch-mlir project.
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.

The standard file comment is now:

```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```

See `LICENSE` in the project root for the terms of both licenses.
2021-10-01 10:46:08 -07:00
Yi Zhang 89225b0cd8 Add BertSequenceClassification model to e2e
Use torch tracing to get the module because the original model is not
TorchScriptable out of the box.
2021-09-30 13:30:29 -04:00
Sean Silva 8b2c099914 Update llvm-project to 204d301bb1921431a853c0bfba32007c018df1d5
This brings in the fix for the obscure RefBackend bug we were hitting.
2021-09-28 17:38:10 -07:00
powderluv b55baf508a
Updates to Readme.md (#334) 2021-09-28 13:50:25 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva 404bd74ddf Port the bulk of the remaining code to torch-mlir
This leaves no real code outside torch-mlir.

This also renames the "npcomp backend contract" to "linalg on tensors
backend contract" as the name of the abstraction layer that RefBackend
(IREE too) accepts.
2021-09-27 12:48:33 -07:00
Sean Silva 3dc9b4ee2f Remove some more old stray files. 2021-09-22 16:13:03 -07:00
Sean Silva 1a0b953ea7 Eliminate almost all mentions of IREE.
A few remain in examples/docs that will naturally be updated in due
time.

This regresses the list support and the general direction of more widely
supported control flow, lists/dicts/globals that we were going for with
the TorchScript path. The idea is that we are deferring that work to
make torch-mlir a very clean standalone thing. We will reboot it,
probably using some of the tools of iree_pydm to make it simpler, and in
a more natural place (such as an iree-torch repo that depends on IREE and
torch-mlir to build a working PyTorch frontend solution for IREE -- it
was really weird that npcomp depended on IREE).
2021-09-22 16:06:38 -07:00
Sean Silva 5f3b1ce0b8 Fold torch_mlir_dialects python package into `torch_mlir`.
After this change, there are now just two subdirectories in the
`python_packages` directory in our combined build:
- `npcomp_core` with all the npcomp stuff
- `torch_mlir` with all the `torch-mlir` stuff.

The combined `torch_mlir` build will be packaged for use by `pip`.
There isn't anything super useful for wider use in `npcomp_core` so for
now we aren't going to package that one.
2021-09-17 09:27:49 -07:00
Sean Silva 0eb767ea45 Remove frontends/pytorch directory.
It just contained the e2e testing framework. We now fold it into the
main project to reduce complexity.

- `frontends/pytorch/python/` -> `python/torch_support`
- `frontends/pytorch/e2e_testing -> e2e_testing`
- `frontends/pytorch/examples -> examples`
- `frontends/pytorch/test` -> `python/test`
- `torch_mlir_torchscript` python module -> `npcomp_torchscript`
- `torch_mlir_torchscript_e2e_test_configs` python module ->
  `npcomp_torchscript_e2e_test_configs`

This also changes the license of a handful of files from the
"pytorch-style" license to the regular LLVM/npcomp license. The only
people who committed to those files were myself and Yi.
2021-09-17 09:27:49 -07:00
Sean Silva b6be96d722 [torch-mlir earthmoving (2/N)] Python code movement.
This moves the bulk of the Python code (including the Torch interop)
from `frontends/pytorch` into `torch-mlir/TorchPlugin`. This also
required reconciling a bunch of other Python-related stuff, like the
`torch` dialects.

As I did this, it was simpler to just remove all the old numpy/basicpy
stuff because we were going to delete it anyway and it was faster than
debugging an intermediate state that would only last O(days) anyway.

torch-mlir has two top-level python packages (built into the
`python_packages` directory):

- `torch_mlir_dialects`: `torch` dialect Python bindings (does not
  depend on PyTorch). This also involves building the aggregate CAPI for
  `torch-mlir`.
- `torch_mlir`: bindings to the part of the code that links against
  PyTorch (or C++ code that transitively does).

Additionally, there remain two more Python packages in npcomp (but
outside `torch-mlir`):

- `npcomp_torch`: Contains the e2e test framework and testing configs
  that plug into RefBackend and IREE.
- `npcomp_core`: Contains the low-level interfaces to RefBackend and
  IREE that `npcomp_torch` uses, along with its own
  `MLIR_PYTHON_PACKAGE_PREFIX=npcomp.` aggregation of the core MLIR
  python bindings. (all other functionality has been stripped out)

After all the basicpy/numpy deletions, the `npcomp` C++ code is now very
tiny. It basically just contains RefBackend and the `TorchConversion`
dialect/passes (e.g. `TorchToLinalg.cpp`).

Correspondingly, there are now 4 main testing targets paralleling the
Python layering (which is reflective of the deeper underlying dependency
structure)

- `check-torch-mlir`: checks the `torch-mlir` pure MLIR C++ code.
- `check-torch-mlir-plugin`: checks the code in `TorchPlugin` (e.g.
  TorchScript import)
- `check-frontends-pytorch`: Checks the little code we have in
  `frontends/pytorch` -- mainly things related to the e2e framework
  itself.
- `check-npcomp`: Checks the pure MLIR C++ code inside npcomp.

There is a target `check-npcomp-all` that runs all of them.
The `torch-mlir/build_standalone.sh` script does a standalone build of
`torch-mlir`.

The e2e tests (`tools/torchscript_e2e_test.sh`) are working too.

The update_torch_ods script now lives in
`torch-mlir/build_tools/update_torch_ods.sh` and expects a standalone
build.

This change also required a fix upstream related to cross-shlib Python
dependencies, so we also update llvm-project to
8dca953dd39c0cd8c80decbeb38753f58a4de580 to get
https://reviews.llvm.org/D109776 (no other fixes were needed for the
integrate, thankfully).

This completes most of the large source code changes. Next will be
bringing the CI/packaging/examples back to life.
2021-09-15 13:40:30 -07:00
Sean Silva 28a7738189 [torch-mlir earthmoving (1/N)] C/C++ code movement.
This creates the `external/torch-mlir` directory as an
LLVM_EXTERNAL_PROJECTS-compatible project (analogous to
`iree-dialects`) and completes movement/rename of all pure MLIR C/C++
compiler code into there. The next step will be to move all the Python
code / code that links/includes PyTorch C++ code (which currently lives
in `frontends/pytorch`) into a subdirectory here.

I call this "earthmoving" because it is mostly mechanical changes and
renames. As a quick summary (we can change this down the road easily)
- C++ `mlir::NPCOMP::Torch -> mlir::torch::Torch`
- CAPI `npcompTorchListTypeGet -> torchMlirTorchListTypeGet`
- preprocessor `#ifndef NPCOMP_ -> #ifndef TORCHMLIR_`
- CMake `NPCOMPFoo -> TorchMLIRFoo`

The goal of this is to create a standalone project creating a center of
mass for entry into the MLIR ecosystem from PyTorch, suitable in scope
for eventual inclusion/ownership in PyTorch. The idea is that
`external/torch-mlir` will some day be pulled out into its own
repository, and then npcomp will simply pull it in as a submodule.

Layering-wise, what lives in `torch-mlir` lowers code from PyTorch
(currently TorchScript, but TorchFX or pytorch/xla-style tracing are
possible extensions) down to what we have been calling the "Torch
backend contract" which is cleaned up IR (inlining, simplifcation,
conversion to value tensors, ...) entirely in the `torch` dialect. This
is the branching off point for further lowering, of which npcomp takes
one opinion (outside `torch-mlir` of course!), namely the
`TorchConversion` dialect/transforms which lower to IR suitable for IREE
and other linalg-on-tensors based lower-level compilers.

Summary of changes:
- move `{include,lib,test}/Dialect/Torch` into `torch-mlir`
- move relevant parts of CAPI into `torch-mlir`.
- leave a few things related to the `torch-mlir` Python build commented
  out, which should be resolved in a subsequent change.
2021-09-10 21:44:37 -07:00
Sean Silva 0b7dbf5f81 Initial import of iree-dialects.
We plan on using these dialects "natively" as part of the npcomp backend
contract, and provide feedback to evolve them in IREE. Roughly speaking,
we can consider these dialects as "what's missing from upstream that we
think belongs in the general abstraction layer that npcomp's backend
contract targets".

We integrate them by just copying the relevant directory from the IREE
source tree (with `build_tools/update_iree_dialects.sh`). This avoids
adding IREE as a submodule, which is way too heavyweight (including
IREE itself, another copy of LLVM, TensorFlow, ...) and would give the
false impression of a source dependency rather than the lightweight (and
eventually versioned/stabilized) IR-level compatibility that we strive
for.
2021-08-11 13:00:04 -07:00
Yi Zhang bfc3ee35c6 Import Machine Translation model to MLIR.
This includes the following changes to import the MT model into MLIR. There
is still a lot of work to do for actual compilation.
- Add `torch.dict<>`, `torch.any`, `torch.number` types
- Add `torch.prim.DictConstruct` op
- Fix `torch.prim.TupleConstruct` op assembly format to include resulting types
2021-08-10 15:22:06 -04:00
Sean Silva 453e29ea05 Add E2E support for tests with heavy dependencies (heavydep tests).
The tests use the same (pure-Python) test framework as the
normal torchscript_e2e_test.sh, but the tests are added in
`build_tools/torchscript_e2e_heavydep_tests` instead of
`frontends/pytorch/e2e_testing/torchscript`. Any needed dependencies can
easily be configured in generate_serialized_tests.sh.

We add an initial machine translation model with a complex set of
dependencies to seed the curriculum there. I verified that this model
gets to the point of MLIR import (it fails there with a segfault due to
not being able to import the "Any" type).

This required moving a few files from the `torch_mlir` Python module
into multiple modules to isolate the code that depends on our C++
extensions (which now live in `torch_mlir` and
`torch_mlir_torchscript_e2e_test_configs`) from the pure Python code
(which now lives in `torch_mlir_torchscript`). This is an entirely
mechanical change, and lots of imports needed to be updated.

The dependency graph is:
```
       torch_mlir_torchscript_e2e_test_configs
                  /              |
                 /               |
                /                |
               V                 V
torch_mlir_torchscript       torch_mlir
```

The `torch_mlir_torchscript_e2e_test_configs` are then dependency-injected
into the `torch_mlir_torchscript` modules to successfully assemble a
working test harness (the code was already structured this way, but this
new file organization allows the isolation from C++ code to actually
happen).  This isolation is critical to allowing the serialized programs
to be transported across PyTorch versions and for the test harness to be
used seamlessly to generate the heavydep tests.

Also:
- Extend `_Tracer` class to support nested property (submodule) accesses.

Recommended review order:
- "user-level" docs in README.md
- code in `build_tools/torchscript_e2e_heavydep_tests`.
- changes in `torch_mlir_torchscript/e2e_test/framework.py`
- misc mechanical changes.
2021-08-03 14:09:56 -07:00
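A hedged sketch of the dependency-injection pattern described above; the actual interfaces live in `torch_mlir_torchscript/e2e_test/framework.py` and the names here are illustrative:

```
class TestConfig:
    """Interface implemented by the injected configs (RefBackend, IREE, ...)."""
    def compile(self, program): ...
    def run(self, artifact, trace): ...

def run_tests(tests, config: TestConfig):
    # The pure-Python harness never imports the C++ extensions directly; it
    # only talks to whatever config object the caller injects.
    results = []
    for test in tests:
        artifact = config.compile(test.program)
        results.append(config.run(artifact, test.trace))
    return results
```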
Stella Laurenzo 445472c51e Build packages for npcomp-torch.
* Adds a minimal setup.py for frontends/pytorch
* Makes npcomp-core export its headers and libraries
* Adds a script to build packages.
* Adds CI step to package and smoke test.
* Will need some more tweaks and coordination prior to deploying (version locking etc).
2021-07-29 19:58:59 -07:00
Yi Zhang 6fbf94f0b2 Update readme and scripts for setting the new PYTHONPATH
Add scripts for generating .env and update instructions in README.
2021-07-28 15:06:40 -04:00
Stella Laurenzo 2dbab50444
Rework the python build to a static assembly of MLIR+NPCOMP (#251)
* Adapt to python build system updates.

* Bump llvm to 310c9496d80961188e8d8f8ad306cdf44bd7541f (includes python build updates)
* Adds refback C-API.
* Re-layers all python builds.
* Rework CI.
2021-07-27 16:10:10 -07:00
Sean Silva 0b6516c7cc Bump llvm-project to cbd0054b9eb17ec48f0702e3828209646c8f5ebd
Changes:
- MLIR_BINDINGS_PYTHON_ENABLED -> MLIR_ENABLE_BINDINGS_PYTHON
- canonicalizer constant insertion order
- EDSC is gone now
2021-06-10 16:26:45 -07:00
Sean Silva 2efda323ff Significantly restructure torch/aten import design.
This is a really major and invasive restructuring of the way we get
torch operators (`torch::jit::Operator` / `c10::OperatorHandle`) into
MLIR. Please forgive the challenging review, but due to the sheer
invasiveness, it wasn't really practical do do it in sane smaller
pieces.

This fully replaces everything that was already working on the
TorchScript path (actually, more -- we added tanh support to
TorchToLinalg in order to delete the older code paths). Additionally,
I've kept the lights on for the acap path too, including what little e2e
stuff was working before (for expediency I made a few tiny compromises
along the way that will be easy to undo when we give that path proper
attention).

Overview of the new design:
- The torch operator `somens::someunqualname.someoverloadname` is
  imported as `torch.somens.someunqualname.someoverloadname` (skip the
  last dotted part if the overload name is empty), OR, if we don't have
  such an op registered, it is imported as
  `torch.operator "somens.someunqualname.someoverloadname" (...) : ...`.
  - The addition of the "overload name" is a critical element here, as
    the `(ns,unqual,overload)` triple is unique, which solves a lot of
    problems we were having.
  - This involves having separate MLIR ops for the `trailing_` and
    `.out` variants and all the different overloads. This seemed
    necessary, because the set of overloads is so wild and varied and
    unstructured. The previous design was leaning into some underlying
    structure that just isn't there -- the default situation is
    the "random overload that we want to manage on the MLIR side",
    rather than that being an exception. E.g.  `aten::ne` (not-equal)
    has 21 overloads, only 4 of which are c10 dispatcher ops (see this
    [gist](https://gist.github.com/silvasean/190ba918c550c956260e21254e1b8aa1)),
    and the "out" variant is really called `.Tensor_out` instead of
    `.out` as it frequently is for other ops.
  - Rationale for all being in `torch` namespace: the set of operators
    are so varied and unstructured that "dialect per namespace"
    doesn't result in anything resembling the typical MLIR dialect
    boundary expectations. We could maybe draw the boundary at
    dispatcher ops vs non-dispatcher ops, but that doesn't seem to
    really result in very much useful structure at this point in time.
  - Note: within the torch operator registry, we effectively have a
    mini-basicpy subdialect (already type-resolved), which is reasonably
    structured.
  - The existing Torch op interfaces are also removed -- now that we
    track the overload name, we can losslessly find the original
    operator.
- Instead of `ATenRecognizeKernelsPass`, we now have a
  `ReduceOpVariantsPass` that keys off certain traits (and perhaps
  eventually interfaces) to reduce variants of ops to a smaller set,
  ideally operating on immutable tensors and using surrounding ops to
  model the mutability/aliasing aspects.
  - Note: `torch.ns.unqual.overload` ops allow both immutable and
    mutable tensors (unlike the previous hard distinction in the common
    case). This is a premonition for a future change that will introduce a
    bona fide `!torch.tensor` type that will clean up a bunch of stuff.
- `TorchToLinalg` / `TorchToStd` supersede the existing
  "ATen->TCF->TCP->Linalg" path.
- The new `torch_ods_gen.py` supersedes `torch_signature_ods_gen.py`.
  It should look somewhat familiar, but the benefit of hindsight has
  allowed a lot of simplifications.

The overall trend seems to be to make the `torch` dialect a nice layer
independent of anything else. It feels like as a natural result of
various future changes we will be removing the reliance on basicpy+numpy
dialects and have a nice self-contained type system too that properly
models the TorchScript type system (including proper subtyping,
mutable/immutable tensors, optional dtype, etc.).

Recommended review order:
- Start at some of the new import IR, e.g. in
  `frontends/pytorch/test/node_import/prim.py`,
  `frontends/pytorch/test/acap_export/test_export_add3.py`, and other
  tests.
- `frontends/pytorch/python/torch_mlir_utils/codegen/torch_ods_gen.py`
  and associated generated files:
  - `include/npcomp/Dialect/Torch/IR/GeneratedAtenOps.td`
  - `include/npcomp/Dialect/Torch/IR/GeneratedPrimOps.td`
- Inspect `ReduceOpVariants.cpp` / `reduce-op-variants.mlir` and the new
  traits in `include/npcomp/Dialect/Torch/IR/TorchTraits.h`
- Various code changes in the import path in
  `frontends/pytorch/csrc/builder`. Probably most interesting is the new
  code in `torch_to_mlir_utils.cpp` that has the logic to create the
  `torch.operator` ops or `torch.ns.unqual.overload` ops.

This is the [new ResNet IR](https://gist.github.com/silvasean/5407aafb710d07612b7b5b92eabecebe),
just to be able to look at a substantial sample of IR in the new style.
2021-05-19 13:37:39 -07:00
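A small illustration (not code from this commit) of the naming scheme described above: the `(namespace, unqualified name, overload name)` triple maps to an MLIR op name, dropping the last segment when the overload name is empty.

```
def torch_mlir_op_name(ns: str, unqual: str, overload: str) -> str:
    base = f"torch.{ns}.{unqual}"
    return f"{base}.{overload}" if overload else base

assert torch_mlir_op_name("aten", "tanh", "") == "torch.aten.tanh"
assert torch_mlir_op_name("aten", "ne", "Tensor_out") == "torch.aten.ne.Tensor_out"
```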
Sean Silva c424c24ed8 Bump llvm-project to c68d2895a1f4019b387c69d1e5eec31b0eb5e7b0
- dialect registration
- StringAttr::get: order of context arg
- math dialect
- LogicalResult nodiscard
- error message for invalid broadcast
2021-02-22 12:23:24 -08:00
powderluv cecf1fbba5
Add a CI builder with latest pytorch CPU nightly. Also add AArch64 to the build (#166) 2021-02-21 13:36:06 -08:00
Stella Laurenzo 72f785c4b2 Update install_mlir.sh to take extra configure flags. 2021-01-22 16:30:23 -08:00