Commit Graph

91 Commits (3f79a2982ad2f3b847b73999d1e415de964fba89)

Author SHA1 Message Date
Chi_Liu fbb0db17dc
Disable TORCH_MLIR_ENABLE_JIT_IR_IMPORTER and TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS by default (#3693)
They are only enabled in CI and for debugging, i.e. for update_abstract_interp_lib.sh and update_torch_ods.sh usage.
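
A hedged sketch of re-enabling these flags locally when the regeneration scripts are needed (the surrounding configure options are the usual in-tree build and may differ in your setup):

```
cmake -GNinja -B build externals/llvm-project/llvm \
  -DLLVM_ENABLE_PROJECTS=mlir \
  -DLLVM_EXTERNAL_PROJECTS=torch-mlir \
  -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR="$PWD" \
  -DMLIR_ENABLE_BINDINGS_PYTHON=ON \
  -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON \
  -DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=ON
```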
2024-09-09 22:58:27 -07:00
Marius Brehler 56a663690c
Update links to examples (#3641)
Closes #3440
2024-08-16 18:59:44 +02:00
Hacker1337 cb6a499460
Update architecture.md. Fixed broken link (#3565) 2024-08-14 16:38:51 +05:30
Aart Bik 1f73895f93
[torch-mlir] bump to llvm/llvm-project@9b78ddf3b2 (#3491)
This bump triggered an upstream assert. Includes a WAR for #3506.

Also includes several things I needed to do to repro:

* When TORCH_MLIR_TEST_CONCURRENCY=1, test runs will be printed.
* Added TORCH_MLIR_TEST_VERBOSE=1 handling to enable verbose mode
(useful on CI).

---------

Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
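
A hedged sketch of using the new test-runner variables (the e2e runner path is an assumption; adapt to your checkout):

```
TORCH_MLIR_TEST_CONCURRENCY=1 TORCH_MLIR_TEST_VERBOSE=1 ./projects/pt1/tools/e2e_test.sh
```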
2024-06-27 19:28:02 -07:00
Xida Ren (Cedar) 948981a773
Update development.md to use ld.lld (#3412)
@kuhar mentioned in the previous PR that we should use ld.lld. I kept
using ld because, with my LLD version, it worked.

After updating to a newer LLD version, switching to ld.lld became necessary.
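
A hedged sketch of pointing the CMake build at lld's ld.lld driver (the exact flag spelling used in development.md may differ):

```
cmake -GNinja -B build externals/llvm-project/llvm \
  -DCMAKE_EXE_LINKER_FLAGS_INIT="-fuse-ld=lld" \
  -DCMAKE_MODULE_LINKER_FLAGS_INIT="-fuse-ld=lld" \
  -DCMAKE_SHARED_LINKER_FLAGS_INIT="-fuse-ld=lld"
```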
2024-06-03 14:10:48 -04:00
Xida Ren (Cedar) 2937753070
[Documentation] Show faster build command first in docs/development.md (#3355) 2024-05-17 18:59:51 +00:00
Angel Zhang 2c9c763191
Update development.md (#3314)
Add a command for installing the `python-dev` package

---------

Co-authored-by: Jakub Kuderski <kubakuderski@gmail.com>
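
A hedged sketch of the added command (the commit calls the package `python-dev`; on current Debian/Ubuntu the header package is `python3-dev`):

```
sudo apt update && sudo apt install python3-dev
```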
2024-05-10 10:39:13 -04:00
Stella Laurenzo 5d4b803914 [NFC reformat] Run pre-commit on all files and format misc.
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.

Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
Stella Laurenzo fb8748bdfa
Switch to pre-commit for lint checks. (#3200)
Users can run via `pre-commit run` or set up a hook as described in the
instructions: https://pre-commit.com/

The CI is set to only run pre-commit on files changed in the patch. We
will run with `--all-files` in a separate patch.
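
The commands this boils down to, per the message above (the `install` step is an optional convenience, not stated in the commit):

```
python -m pip install pre-commit
pre-commit install        # optional: run the hooks automatically on each commit
pre-commit run            # check only the files staged for commit
pre-commit run --all-files
```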
2024-04-27 13:29:51 -07:00
zjgarvey 189b3f112f
Fix broken link in abstract_interp_lib.md (#2800) 2024-04-28 02:27:05 +08:00
Xida Ren (Cedar) 7be22bb260
Update add_ops.md to link the Torch-MLIR getting-started instructions prominently (#3222) 2024-04-24 17:03:41 +00:00
penguin_wwy 9ac90ec7b2
Refactor the parameters and usage instructions of the setup script. (#3162)
As this issue comment
(https://github.com/llvm/torch-mlir/pull/3021#issuecomment-2031248199)
suggests, `setup.py` should only be used for building Python packages, so:
* disable the develop command
* refactor the environment variable parameters
* add more documentation for the usage and environment variables of setup.py
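
A hedged sketch of the wheel-based flow that replaces `setup.py develop` (flags are common pip usage, not quoted from the new docs):

```
python -m pip wheel -v -w wheelhouse .   # build a wheel instead of `setup.py develop`
python -m pip install -v .               # or install straight into the active venv
```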
2024-04-14 10:40:25 -07:00
Xida Ren (Cedar) 895ea8663a
Add LLVM style guide 2024-03-18 18:25:22 +00:00
penguin_wwy d5693b3f51
[doc] fix broken links in documents (#2990)
Co-authored-by: wenyangwang <wenyangwang@tencent.com>
2024-03-06 19:52:34 -08:00
James Newling 723b8b1d28
Fix dev docs error/typo (#2880)
Just a one line change in a .md file
2024-02-07 03:55:38 -08:00
saienduri bfcf93ea21
Rename torch_mlir.compile APIs and introduce FX based analogs (#2842)
Link to related RFC:
https://discourse.llvm.org/t/rfc-rename-torch-mlir-compile-apis-and-introduce-fx-based-analogs/76646
This commit updates the documentation, tests, CMake files, and API for
the proposed changes in the RFC. There is a new torch_mlir/fx.py for
user level APIs related to importing modules and a corresponding test
for this path can be found at test/python/fx_importer/basic_test.py.

---------

Co-authored-by: MaheshRavishankar <mravisha@amd.com>
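
A hedged smoke check that the new FX-based entry point is importable (module name taken from the `torch_mlir/fx.py` mentioned above):

```
python -c "from torch_mlir import fx; print(fx.__file__)"
```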
2024-02-06 19:07:59 -08:00
Xida Ren (Cedar) b3a56c0711
Update add_ops to mention llvm-project/mlir/utils/generate-test-checks.py (#2862) 2024-02-05 12:13:43 -08:00
Aart Bik d1cd117998
[torch-mlir] remove trailing whitespace from md documentation (#2853) 2024-02-02 11:02:53 -08:00
James Newling 9d983161fc
Describe how to get --debug and --debug-only flags in dev notes (#2793)
The change should be visible at
https://github.com/newling/torch-mlir/blob/docs_update/docs/development.md
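
A hedged sketch of the flags being documented, applied to `torch-mlir-opt` (the pass and input file are placeholders):

```
torch-mlir-opt --debug --debug-only=dialect-conversion \
  -convert-torch-to-linalg input.mlir -o /dev/null
```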
2024-01-30 08:30:00 -08:00
Xida Ren (Cedar) 18669b38cb
Create add_ops.md (#2770) 2024-01-19 10:44:45 -08:00
John Wu 779a141f8d
Mentioned helpful tooling to convert Onnx models to Torch MLIR (#2683)
- Going through the `#torch-mlir` channel on the LLVM Discord, I
realized that there are some useful commands that would be extremely
helpful in creating ONNX lowerings to Torch-MLIR. It seems a lot of people are
contributing to this, so I thought it would be good to add this
information to the docs.

These tools helped streamline the development of this PR:
https://github.com/llvm/torch-mlir/pull/2682
2023-12-21 07:26:20 -08:00
Rik Huijzer 8fa81d181b
Tweak development.md for more speed (#2667)
Adding the `--progress` flag shows the same output as what `git clone`
would show. This is very nice for slow connections. Without it, the
command may run for many minutes without providing any indication that
it is still doing something.

For `--depth=1`, I think it should be safe as most people have new
enough git versions nowadays, but let's be safe and make it an optional
suggestion. I ran all the tests fine with `--depth=1`, but I don't know
whether things will keep working when the submodules get updated for
systems with old git versions.
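
The resulting clone commands, roughly (repository URL is the upstream project; `--depth=1` is the optional suggestion discussed above):

```
git clone --progress https://github.com/llvm/torch-mlir.git
cd torch-mlir
git submodule update --init --progress --depth=1
```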
2023-12-20 09:34:50 +01:00
Yinrun Lyu 89cfbe894d
Update PYTHONPATH in development.md (#2644)
Update the PYTHONPATH in the docs to point at the new directory layout.
2023-12-18 22:46:55 -08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2, which will also include upstreaming the FX
importer. I started with ONNX because it forces some project layout
updates and is more self contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Locating the `onnx_importer.py` into `torch_mlir.extras` as Maks
already has opened up that namespace and it seemed to fit. Better to
have fewer things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
the `projects/` adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a followup once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
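
A hedged smoke check of the upstreamed importer's new home and its optional test dependency (the `torch_mlir.extras` location is from the first bullet above):

```
python -m pip install onnx
python -c "from torch_mlir.extras import onnx_importer; print(onnx_importer.__file__)"
```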
2023-12-12 19:02:51 -08:00
srcarroll 7d0f5cc5a8
Update out of date docs (#2602)
Some of the docs referred to old file paths that no longer exist. This
patch updates some of the instructions that I happened to notice were
out of date; it is not a full update.
2023-12-01 16:29:37 -06:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side-effect of this layering, now
the "MLIR bits" (dialects, etc) are not commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
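
A hedged check of the renamed namespace called out above (the old `torch_mlir.dialects.torch.importer.jit_ir` path no longer applies):

```
python -c "import torch_mlir.jit_ir_importer"
```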
2023-11-19 12:10:19 -08:00
James Newling 98ee7fe548 Update E2E links 2023-11-09 13:55:37 -06:00
Ramiro Leal-Cavazos d082310bd8 Move Wiki to `docs/`
Currently the docs are split into two places, the `docs/` directory
and the GitHub Wiki of Torch-MLIR. This commit moves the wiki docs to
`docs/` to consolidate everything into one place. This has the added
benefit that users will get all the documentation when they clone the
repository.

Note: there are 4 files in the wiki, but only one is truly needed
- Torch-ops-E2E-implementation.md: only file needed
- Coding-Style.md: the contents of this file are already in
Torch-ops-E2E-implementation.md
- Weekly-LLVM-Update.md: this is outdated. We no longer have a weekly
schedule for llvm updates
- Home.md: Contains links to talks and resources that are already
present in the documentation in `docs/` or in
Torch-ops-E2E-implementation.md

Co-authored-by: Yi Zhang <cathyzhyi@google.com>
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
Co-authored-by: Sean Silva <silvasean@google.com>
Co-authored-by: Daniel Ellis <1346302+dellis23@users.noreply.github.com>
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-11-09 13:55:37 -06:00
James Newling 026cb314da
Specify path of e2e_test.sh after directory change (#2557)
Is there a way to disable some of the CI for docs-only PRs?
2023-11-07 16:07:02 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through but
roughly ~halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Stella Laurenzo 078d1e1a1d
Remove mlir-hlo (replace with stablehlo). (#2460)
We just have to do this: I ran into an issue today where I needed to make a one line patch to stablehlo to work around a compiler issue, and it is completely unapparent how to do so given that the mlir-hlo repo is a read-only export and is at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten +1 from everyone that they are ok with taking the e2e testing hit if it becomes necessary: It is necessary as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and subprocess call that in order to get the testing coverage back. I leave that as an exercise to the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
2023-09-12 19:10:02 -07:00
Stella Laurenzo 8ffe5d17da Add Sean Silva to code owners as emeritus.
Per request from #2403.
2023-08-20 18:06:07 -07:00
Stella Laurenzo 6648ad91e7
Per request, swap Sean Silva for Stella Laurenzo in code owners. (#2403)
Sean has decided to move on to other ventures and has requested that I help him disengage by resuming top level accountability for the project.
2023-08-18 12:52:00 -07:00
Tanyo Kwok 3a1b92c463
Update code_owners.md (#2197) 2023-06-07 12:16:35 +08:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
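
A hedged note on the contributor workflow this implies: after adding or changing a dtype function, regenerate the abstract interpretation library (script name appears elsewhere in this log; its path is assumed):

```
./build_tools/update_abstract_interp_lib.sh
```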
2023-05-12 13:40:45 -07:00
Sean Silva 4e82b30c88 Update long_term_roadmap.md 2023-03-27 12:34:07 -07:00
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines dtype function for
convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
Sean Silva a412c85fd7 [docs] Add changes to e2e testing to long-term roadmap 2023-03-20 11:38:13 -07:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
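
A hedged invocation of the renamed pass (the lit test path is an assumption based on the renamed directory):

```
torch-mlir-opt -convert-torch-to-stablehlo test/Conversion/TorchToStablehlo/basic.mlir
```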
2023-02-02 07:29:47 -06:00
Sean Silva 8c3774bb2a
Minor fixes for development.md
- Mention the rotation doc
- Fix minor typos / broken link
2022-12-14 02:55:51 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Sean Silva d52359a891 [docs] Add info about special e2e testing cases. 2022-12-07 12:53:07 +01:00
Sean Silva 9fb63ce9d9 Add link to e2e testing docs 2022-11-25 04:53:57 -08:00
Sean Silva 27a2a180d5 [cleanup] Remove docs/roadmaps.
This directory didn't have much and was generally out of date.
The [long-term
roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md)
supersedes this anyway.
2022-11-24 04:16:47 -08:00
Kazuaki Ishizaki 638a884e8c fix typos 2022-11-17 11:03:27 -08:00
Sambhav Jain ba5b90ee27
Enable bazel LIT tests in CI (#1596)
Bazel LIT test support was added in https://github.com/llvm/torch-mlir/pull/1585. This PR enables the tests in CI.

```
INFO: Build completed successfully, 254 total actions
@torch-mlir//test/Conversion:TorchToArith/basic.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/basic.mlir.test               PASSED in 0.5s
@torch-mlir//test/Conversion:TorchToLinalg/elementwise.mlir.test         PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/flatten.mlir.test             PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/pooling.mlir.test             PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToLinalg/unsqueeze.mlir.test           PASSED in 0.2s
@torch-mlir//test/Conversion:TorchToLinalg/view.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/basic.mlir.test                 PASSED in 0.5s
@torch-mlir//test/Conversion:TorchToMhlo/elementwise.mlir.test           PASSED in 0.9s
@torch-mlir//test/Conversion:TorchToMhlo/gather.mlir.test                PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/linear.mlir.test                PASSED in 0.6s
@torch-mlir//test/Conversion:TorchToMhlo/pooling.mlir.test               PASSED in 0.3s
@torch-mlir//test/Conversion:TorchToMhlo/reduction.mlir.test             PASSED in 0.4s
@torch-mlir//test/Conversion:TorchToMhlo/view_like.mlir.test             PASSED in 0.6s
@torch-mlir//test/Conversion:TorchToSCF/basic.mlir.test                  PASSED in 0.2s
@torch-mlir//test/Conversion:TorchToTosa/basic.mlir.test                 PASSED in 1.1s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/basic.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/error.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/free-functions.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/initializers.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/methods.mlir.test   PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/module-uses-error.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/module-uses.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances-error.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances-multiple-module-args.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/multiple-instances.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/submodules.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/GlobalizeObjectGraph/visibility.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/adjust-calling-conventions.mlir.test     PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/canonicalize.mlir.test                   PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/decompose-complex-ops-legal.mlir.test    PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/decompose-complex-ops.mlir.test          PASSED in 0.9s
@torch-mlir//test/Dialect:Torch/drop-shape-calculations.mlir.test        PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/erase-module-initializer.mlir.test       PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/inline-global-slots-analysis.mlir.test   PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/inline-global-slots-transform.mlir.test  PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/invalid.mlir.test                        PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/lower-to-backend-contract-error.mlir.test PASSED in 17.3s
@torch-mlir//test/Dialect:Torch/maximize-value-semantics.mlir.test       PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/ops.mlir.test                            PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/prepare-for-globalize-object-graph.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/promote-types.mlir.test                  PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/reduce-op-variants-error.mlir.test       PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/reduce-op-variants.mlir.test             PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/refine-public-return.mlir.test           PASSED in 0.2s
@torch-mlir//test/Dialect:Torch/refine-types-branch.mlir.test            PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/refine-types-ops.mlir.test               PASSED in 0.6s
@torch-mlir//test/Dialect:Torch/refine-types.mlir.test                   PASSED in 0.4s
@torch-mlir//test/Dialect:Torch/reify-shape-calculations.mlir.test       PASSED in 2.9s
@torch-mlir//test/Dialect:Torch/simplify-shape-calculations.mlir.test    PASSED in 0.3s
@torch-mlir//test/Dialect:Torch/torch-function-to-torch-backend-pipeline.mlir.test PASSED in 0.6s
@torch-mlir//test/Dialect:TorchConversion/canonicalize.mlir.test         PASSED in 0.2s
@torch-mlir//test/Dialect:TorchConversion/finalizing-backend-type-conversion.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/func-backend-type-conversion.mlir.test PASSED in 0.2s
@torch-mlir//test/Dialect:TorchConversion/ops.mlir.test                  PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/verify-linalg-on-tensors-backend-contract.mlir.test PASSED in 0.3s
@torch-mlir//test/Dialect:TorchConversion/verify-tosa-backend-contract.mlir.test PASSED in 0.2s
@torch-mlir//test/RefBackend:insert-rng-globals.mlir.test                PASSED in 0.2s
INFO: Build completed successfully, 254 total actions
@torch-mlir//test/RefBackend:munge-calling-conventions.mlir.test         PASSED in 0.2s

Executed 59 out of 59 tests: 59 tests pass.
```

GHA workflow: https://github.com/sjain-stanford/torch-mlir/actions/runs/3476816449/jobs/5812368489
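
To run one of these targets locally, a hedged sketch (working directory per the Bazel workflow commits below; the target label is copied from the log above):

```
cd utils/bazel
bazel test @torch-mlir//test/Conversion:TorchToLinalg/basic.mlir.test
```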
2022-11-16 11:59:33 -08:00
Sambhav Jain 4032eeca64
Add Bazel buildifier to torch-mlir (#1586)
Formats Bazel BUILD and .bzl files with a standard convention.

Invoke using
```
bazel run @torch-mlir//:buildifier
```
2022-11-15 12:34:27 -08:00
Sambhav Jain b320f7fb77
Simplify Bazel build workflow (#1587)
Remove `run_bazel_build.sh`, simplify the Docker entrypoint to start the container in the `utils/bazel` directory, and update the docs.
2022-11-15 08:34:43 -08:00
Sean Silva ec4e01c321
Add Suraj to TorchToTOSA owners (#1566) 2022-11-09 14:55:13 -08:00
Sean Silva de4bcbfe9b [docs] Centralize all images in docs/images/ 2022-11-04 03:12:17 -07:00