Commit Graph

72 Commits (1d6aca3823e33a026320dadfd4d680452758edfa)

Author SHA1 Message Date
Xida Ren (Cedar) 18669b38cb
Create add_ops.md (#2770) 2024-01-19 10:44:45 -08:00
John Wu 779a141f8d
Mentioned helpful tooling to convert Onnx models to Torch MLIR (#2683)
- Going through the `#torch-mlir` channel on the `llvm` discord, I
realized that there are some useful commands that would be extremely
helpful in creating ONNX lowerings to Torch-MLIR. It seems a lot of
people are contributing to this, so I thought it would be good to add
this information to the docs.

These tools helped streamline the development of this PR:
https://github.com/llvm/torch-mlir/pull/2682
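A minimal sketch of the kind of command involved, assuming a locally built `torch-mlir-opt` and the `convert-torch-onnx-to-torch` pass; the file names are placeholders:

```
# Lower ONNX-flavored Torch IR (torch.operator "onnx.*" ops) to core Torch ops.
torch-mlir-opt --convert-torch-onnx-to-torch model_onnx.mlir -o model_torch.mlir
```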
2023-12-21 07:26:20 -08:00
Rik Huijzer 8fa81d181b
Tweak development.md for more speed (#2667)
Adding the `--progress` flag shows the same output as what `git clone`
would show. This is very nice for slow connections. Without it, the
command may run for many minutes without providing any indication that
it is still doing something.

For `--depth=1`, I think it should be fine as most people have new
enough git versions nowadays, but to be safe it is included only as an
optional suggestion. I ran all the tests fine with `--depth=1`, but I
don't know whether things will keep working when the submodules get
updated on systems with old git versions.
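For illustration, the resulting clone steps might look like this (the repository URL is the standard torch-mlir one; treat the shallow submodule fetch as the optional suggestion described above):

```
git clone --progress https://github.com/llvm/torch-mlir.git
cd torch-mlir
# Optional: shallow submodule fetch; faster on slow connections, but may need a
# reasonably recent git when the submodules are later updated.
git submodule update --init --progress --depth=1
```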
2023-12-20 09:34:50 +01:00
Yinrun Lyu 89cfbe894d
Update PYTHONPATH in development.md (#2644)
Update PYTHONPATH in the docs to point to the new directory layout.
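As an illustration only (the exact directories are assumptions about the post-reorganization layout, not taken from this commit), the updated export could look roughly like:

```
# Hypothetical paths; docs/development.md has the authoritative values.
export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/projects/pt1/examples
```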
2023-12-18 22:46:55 -08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2, which will also include upstreaming the FX
importer. I started with ONNX because it forces some project layout
updates and is more self contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks has
already opened up that namespace and it seemed to fit. Better to
have fewer things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
the `projects/` adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
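A hedged sketch of configuring a pure "core" build with the renamed option (all other configure flags are elided and the exact invocation is an assumption, not the documented recipe):

```
# Build only the MLIR Python bindings and pure-Python importers, without the
# JitIR/LTC PyTorch extensions. Remaining cmake flags omitted for brevity.
cmake -GNinja -B build -DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=OFF <other flags...>
```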
2023-12-12 19:02:51 -08:00
srcarroll 7d0f5cc5a8
Update out of date docs (#2602)
Some of the docs referred to old file paths that no longer exist. This
patch updates some of the instructions that I happened to notice were
out of date. This is not a full update.
2023-12-01 16:29:37 -06:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
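For downstream code, the rename is a mechanical import-path update; a hedged one-liner along these lines (GNU sed assumed) covers it:

```
# Rewrite the old module path to the new one across a source tree.
grep -rl 'torch_mlir.dialects.torch.importer.jit_ir' . \
  | xargs sed -i 's/torch_mlir\.dialects\.torch\.importer\.jit_ir/torch_mlir.jit_ir_importer/g'
```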
2023-11-19 12:10:19 -08:00
James Newling 98ee7fe548 Update E2E links 2023-11-09 13:55:37 -06:00
Ramiro Leal-Cavazos d082310bd8 Move Wiki to `docs/`
Currently the docs are split into two places, the `docs/` directory
and the Github Wiki of Torch-MLIR. This commit moves the wiki docs to
`docs/` to consolidate everything into one place. This has the added
benefit that users will get all the documentation when they clone the
repository.

Note: there are 4 files in the wiki, but only one is truly needed
- Torch-ops-E2E-implementation.md: only file needed
- Coding-Style.md: the contents of this file are already in
Torch-ops-E2E-implementation.md
- Weekly-LLVM-Update.md: this is outdated. We no longer have a weekly
schedule for llvm updates
- Home.md: Contains links to talks and resources that are already
present in the documentation in `docs/` or in
Torch-ops-E2E-implementation.md

Co-authored-by: Yi Zhang <cathyzhyi@google.com>
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
Co-authored-by: Sean Silva <silvasean@google.com>
Co-authored-by: Daniel Ellis <1346302+dellis23@users.noreply.github.com>
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-11-09 13:55:37 -06:00
James Newling 026cb314da
Specify path of e2e_test.sh after directory change (#2557)
Is there a way to disable some of the CI for docs-only PRs?
2023-11-07 16:07:02 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically, LTC is identified as likely ultimately living in
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this patch with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through but
roughly halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Stella Laurenzo 078d1e1a1d
Remove mlir-hlo (replace with stablehlo). (#2460)
We just have to do this: I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is not at all apparent how to do so, given that the mlir-hlo repo is a read-only export and is at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten a +1 from everyone that they are OK with taking the e2e testing hit if it becomes necessary. It is necessary, as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and subprocess call that in order to get the testing coverage back. I leave that as an exercise to the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
2023-09-12 19:10:02 -07:00
Stella Laurenzo 8ffe5d17da Add Sean Silva to code owners as emeritus.
Per request from #2403.
2023-08-20 18:06:07 -07:00
Stella Laurenzo 6648ad91e7
Per request, swap Sean Silva for Stella Laurenzo in code owners. (#2403)
Sean has decided to move on to other ventures and has requested that I help him disengage by resuming top level accountability for the project.
2023-08-18 12:52:00 -07:00
Tanyo Kwok 3a1b92c463
Update code_owners.md (#2197) 2023-06-07 12:16:35 +08:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow-up patch.
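As a hedged pointer only (the script name is an assumption about the build_tools layout, not taken from this commit), the checked-in abstract interpretation library is typically regenerated after editing the Python dtype functions with something like:

```
# Regenerate the abstract interpretation library from the Python shape/dtype
# functions. Script name/location assumed; see build_tools/ for the real one.
./build_tools/update_abstract_interp_lib.sh
```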

Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
Sean Silva 4e82b30c88 Update long_term_roadmap.md 2023-03-27 12:34:07 -07:00
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines dtype function for
convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
Sean Silva a412c85fd7 [docs] Add changes to e2e testing to long-term roadmap 2023-03-20 11:38:13 -07:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Sean Silva 8c3774bb2a
Minor fixes for development.md
- Mention the rotation doc
- Fix minor typos / broken link
2022-12-14 02:55:51 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Sean Silva d52359a891 [docs] Add info about special e2e testing cases. 2022-12-07 12:53:07 +01:00
Sean Silva 9fb63ce9d9 Add link to e2e testing docs 2022-11-25 04:53:57 -08:00
Sean Silva 27a2a180d5 [cleanup] Remove docs/roadmaps.
This directory didn't have much and was generally out of date.
The [long-term
roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md)
supersedes this anyway.
2022-11-24 04:16:47 -08:00
Kazuaki Ishizaki 638a884e8c fix typos 2022-11-17 11:03:27 -08:00
Sambhav Jain ba5b90ee27
Enable bazel LIT tests in CI (#1596)
Bazel LIT test support was added in https://github.com/llvm/torch-mlir/pull/1585. This PR enables the tests in CI.

```
INFO: Build completed successfully, 254 total actions
@torch-mlir//test/Conversion:TorchToArith/basic.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/basic.mlir.test               PASSED in 0.5s
@torch-mlir//test/Conversion:TorchToLinalg/elementwise.mlir.test         PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/flatten.mlir.test             PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/pooling.mlir.test             PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/unsqueeze.mlir.test           PASSED in 0.2s
@torch-mlir//test/Conversion:TorchToLinalg/view.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/basic.mlir.test                 PASSED in 0.5s
@torch-mlir//test/Conversion:TorchToMhlo/elementwise.mlir.test           PASSED in 0.9s
@torch-mlir//test/Conversion:TorchToMhlo/gather.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/linear.mlir.test                PASSED in 0.6s
@torch-mlir//test/Conversion:TorchToMhlo/pooling.mlir.test               PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/reduction.mlir.test             PASSED in 0.4s
@torch-mlir//test/Conversion:TorchToMhlo/view_like.mlir.test             PASSED in 0.6s
@torch-mlir//test/Conversion:TorchToSCF/basic.mlir.test                  PASSED in 0.2s
@torch-mlir//test/Conversion:TorchToTosa/basic.mlir.test                 PASSED in 1.1s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/basic.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/error.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/free-functions.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/initializers.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/methods.mlir.test   PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/module-uses-error.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/module-uses.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances-error.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances-multiple-module-args.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/submodules.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/visibility.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/adjust-calling-conventions.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/canonicalize.mlir.test                   PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/decompose-complex-ops-legal.mlir.test    PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/decompose-complex-ops.mlir.test          PASSED in 0.9s
@torch-mlir//test/Dialect:Torch/drop-shape-calculations.mlir.test        PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/erase-module-initializer.mlir.test       PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/inline-global-slots-analysis.mlir.test   PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/inline-global-slots-transform.mlir.test  PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/invalid.mlir.test                        PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/lower-to-backend-contract-error.mlir.test PASSED in 17.3s
@torch-mlir//test/Dialect:Torch/maximize-value-semantics.mlir.test       PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/ops.mlir.test                            PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/prepare-for-globalize-object-graph.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/promote-types.mlir.test                  PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/reduce-op-variants-error.mlir.test       PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/reduce-op-variants.mlir.test             PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/refine-public-return.mlir.test           PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/refine-types-branch.mlir.test            PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/refine-types-ops.mlir.test               PASSED in 0.6s
@torch-mlir//test/Dialect:Torch/refine-types.mlir.test                   PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/reify-shape-calculations.mlir.test       PASSED in 2.9s
@torch-mlir//test/Dialect:Torch/simplify-shape-calculations.mlir.test    PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/torch-function-to-torch-backend-pipeline.mlir.test PASSED in 0.6s
@torch-mlir//test/Dialect:TorchConversion/canonicalize.mlir.test         PASSED in 0.2s
@torch-mlir//test/Dialect:TorchConversion/finalizing-backend-type-conversion.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/func-backend-type-conversion.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:TorchConversion/ops.mlir.test                  PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/verify-linalg-on-tensors-backend-contract.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/verify-tosa-backend-contract.mlir.test PASSED in 0.2s
@torch-mlir//test/RefBackend:insert-rng-globals.mlir.test                PASSED in 0.2s
INFO: Build completed successfully, 254 total actions
@torch-mlir//test/RefBackend:munge-calling-conventions.mlir.test         PASSED in 0.2s

Executed 59 out of 59 tests: 59 tests pass.
```

GHA workflow: https://github.com/sjain-stanford/torch-mlir/actions/runs/3476816449/jobs/5812368489
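A hedged sketch of reproducing the same LIT run locally, assuming the Bazel workspace lives under `utils/bazel` as in the existing Bazel docs (the target pattern is inferred from the labels in the log above):

```
cd utils/bazel
bazel test @torch-mlir//test/...
```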
2022-11-16 11:59:33 -08:00
Sambhav Jain 4032eeca64
Add Bazel buildifier to torch-mlir (#1586)
Formats bazel BUILD and .bzl files with a standard convention. 

Invoke using
```
bazel run @torch-mlir//:buildifier
```
2022-11-15 12:34:27 -08:00
Sambhav Jain b320f7fb77
Simplify Bazel build workflow (#1587)
Removes `run_bazel_build.sh`, simplifies docker's entrypoint to start the container in the `utils/bazel` directory, and updates the docs.
2022-11-15 08:34:43 -08:00
Sean Silva ec4e01c321
Add Suraj to TorchToTOSA owners (#1566) 2022-11-09 14:55:13 -08:00
Sean Silva de4bcbfe9b [docs] Centralize all images in docs/images/ 2022-11-04 03:12:17 -07:00
Sean Silva 2162253401 [docs] Add long-term roadmap
Add a roadmap covering expected project evolution over the next 1-2
years.
2022-11-02 03:25:52 -07:00
Sean Silva efbebf2001 [docs] Initial code_owners.md
As discussed in #1506, this should help to distribute the review load
and ensure timely, high quality reviews.

Closes #1506
2022-10-21 04:53:00 -07:00
powderluv 796c317aa9
doc: Use $PWD as cross platform way to get working dir (#1451)
Use "$PWD" as it works across Powershell / Windows, macOS and Ubuntu/Linux.
2022-10-02 14:08:43 -07:00
Ashay Rane ffea3de412
doc: add instructions for building and testing on Windows (#1441) 2022-10-02 08:27:53 -05:00
Daniel Ellis e193e4b9be Add environment instructions to development.md 2022-09-29 13:24:39 -04:00
Sean Silva 8a91e4c962 Make docker instructions a bit more scoped
I was helping an engineer the other day who was attempting to use the Docker flow for interactive development and ran into countless issues. Add a note that it is not recommended for interactive development, and also move the Docker section down to avoid positioning it as the "default" that people should be using.
2022-09-29 08:56:56 -07:00
Sambhav Jain e749831434
[Arch] Remove torch_dispatch frontend and move figure to docs/ dir (#1372)
Addresses a leftover comment from earlier PRs (#1254, #1265) to remove the `torch_dispatch` frontend. In addition, moves the main arch diagram into the `docs/` directory for consistency.
2022-09-14 23:00:38 -07:00
powderluv 9111b9ab21
Update Torch-MLIR architecture diagram (#1265) 2022-09-14 18:47:01 -07:00
Sean Silva 1c8b389ead Add more info about unit tests. 2022-09-13 12:14:22 -07:00
Jacques Pienaar 51c868f4c0 Avoid linking to specific time
The recording was linking to a specific time towards the end of the recording; changed to remove the specific time.
2022-09-02 10:32:08 -07:00
powderluv 9f061ea97d
Dockerize CI + Release builds (#1234)
Gets both CI and Release builds integrated in one workflow.
Mounts ccache and pip cache as required for fast iterative builds.
Current Release docker builds still run with root perms; fix it
in the future to run as the same user.

There may be some corner cases left especially when switching
build types etc.

Docker build TEST plan:

tl;dr:
Build everything: Releases (Python 3.8, 3.9, 3.10) and CIs.
  TM_PACKAGES="torch-mlir out-of-tree in-tree"
  2.57s user 2.49s system 0% cpu 30:33.11 total

Out of Tree + PyTorch binaries:

  Fresh build (purged cache):
    TM_PACKAGES="out-of-tree"
    0.47s user 0.51s system 0% cpu 5:24.99 total

  Incremental with ccache:
    TM_PACKAGES="out-of-tree"
    0.09s user 0.08s system 0% cpu 34.817 total

Out of Tree + PyTorch from source

  Incremental
    TM_PACKAGES="out-of-tree" TM_USE_PYTORCH_BINARY=OFF
    1.58s user 1.81s system 2% cpu 1:59.61 total

In-Tree + PyTorch binaries:

  Fresh build and tests: (purge ccache)
  TM_PACKAGES="in-tree"
  0.53s user 0.49s system 0% cpu 6:23.35 total

  Fresh build/ but with prior ccache
  TM_PACKAGES="in-tree"
  0.45s user 0.66s system 0% cpu 3:57.47 total

  Incremental in-tree with all tests and regression tests
  TM_PACKAGES="in-tree"
  0.16s user 0.09s system 0% cpu 2:18.52 total

In-Tree + PyTorch from source

  Fresh build and tests: (purge ccache)
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  2.03s user 2.28s system 0% cpu 11:11.86 total

  Fresh build/ but with prior ccache
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.58s user 1.88s system 1% cpu 4:53.15 total

  Incremental in-tree with all tests and regression tests
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF
  1.09s user 1.10s system 1% cpu 3:29.84 total

  Incremental without tests
  TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=OFF TM_SKIP_TESTS=ON
  1.52s user 1.42s system 3% cpu 1:15.82 total

In-tree+out-of-tree + Pytorch Binaries
  TM_PACKAGES="out-of-tree in-tree"
  0.25s user 0.18s system 0% cpu 3:01.91 total

To clear all artifacts:
rm -rf build build_oot llvm-build libtorch docker_venv
externals/pytorch/build
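A hedged example of invoking one of these configurations (the script path is an assumption based on torch-mlir's build_tools/python_deploy layout, not spelled out in this commit):

```
# In-tree build and tests against PyTorch binaries, via the docker flow.
TM_PACKAGES="in-tree" TM_USE_PYTORCH_BINARY=ON \
  ./build_tools/python_deploy/build_linux_packages.sh
```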
2022-08-30 11:07:25 -07:00
Sean Silva e16b43e20b Remove "torchscript" association from the e2e framework.
We use it for more than TorchScript testing now. This is a purely
mechanical change to adjust some file paths to remove "torchscript".

The most perceptible change here is that now e2e tests are run with

```
./tools/e2e_test.sh
```

instead of:

```
./tools/torchscript_e2e_test.sh
```
2022-08-29 14:10:03 -07:00
Sean Silva f402eb270e Move development.md to docs/ for consistency 2022-08-29 13:26:25 -07:00
Prashant Kumar f245613b71 Add reference to slides and presentation. 2022-08-27 09:01:57 +05:30
Ramiro Leal-Cavazos 493f45f939
Add documentation for adding E2E tests (#1269)
This commit adds a file explaining the structure of an E2E test, as
well as useful things to keep in mind when adding new tests.
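A hedged usage sketch for iterating on a single new test (the --filter/-v flags are assumptions about the runner's CLI; `./tools/e2e_test.sh --help` lists the real options):

```
# Run only the newly added test case, with verbose failure reporting.
./tools/e2e_test.sh --filter MyNewModule_basic -v
```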
2022-08-26 23:17:43 +00:00
Henry Tu 883c6b40dd
Add LTC architecture diagram (#1291)
* Add LTC architecture diagram

* Use PNG for diagrams

* Update diagram
2022-08-26 18:21:05 -04:00
Henry Tu e869e68559
Fix LTC lib_torch_mlir_ltc.so import error (#1283)
* Build LTC to _mlir_libs directory

* Update CMakeLists.txt
2022-08-25 18:25:01 -04:00
powderluv ef89dadf52
Update Torch-MLIR Architecture Diagram (#1254)
- Add MHLO path
- Add custom accelerator dialects
- Rename Torch Dialect back to the original Torch-MLIR Dialect
  (surrounding text still refers to the Torch-MLIR dialect)
- Check in the source for Excalidraw (https://excalidraw.com/)
  so anyone can use / update it using the open source version
2022-08-22 13:09:32 -07:00
Sean Silva f601435fdf Add white background to diagram.
This makes it easier to read in dark mode browsers.
2022-08-18 15:53:43 -07:00