This commit adds a method to `TestUtils` that generates random integer
tensors, with an interface similar to that of `TestUtils.rand`. This
commit also replaces all test inputs previously generated with
`torch.randint` with calls to `tu.randint`.
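For context, a minimal sketch of what such a helper might look like (the exact defaults and body here are assumptions, not the committed code):

```python
import torch

class TestUtils:
    # Existing float helper, shown for comparison (signature assumed).
    def rand(self, *sizes, low=0.0, high=1.0):
        return torch.rand(*sizes) * (high - low) + low

    # Sketch of the new integer helper: the same *sizes-style
    # interface, but backed by torch.randint.
    def randint(self, *sizes, low=0, high=10):
        return torch.randint(low, high, sizes)
```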
Now that we depend on MLIR-HLO, we have a slightly more involved process
for updating the LLVM submodule. This patch updates the instructions in
`development.md`, covering both the rationale for the process and its
individual steps.
An earlier patch (bb47c166) incorrectly replaced the now-dropped
`OpaqueElementsAttr` with `SparseElementsAttr` in one place and with
`DenseElementsAttr` in another. This patch fixes the inconsistency by
using the dense equivalent, `DenseElementsAttr`, in both places.
We were already hitting many cases where backends differed in the set
of ops that they consider legal. This caused unnecessary coupling
between the backends. Examples:
- https://github.com/llvm/torch-mlir/pull/1161
- https://github.com/llvm/torch-mlir/pull/862
This PR centralizes all compilation to go through `torch_mlir.compile`
so that we can keep the logic in one place (a hedged sketch of the
resulting entry point follows the list below). We should move these
lists of legal ops closer to each backend. Cases like
https://github.com/llvm/torch-mlir/pull/862, where blocking a
decomposition is necessary to avoid a crash, emphasize that the set of
decompositions is tightly coupled to the backend and should be
"controlled by the backend" rather than arbitrarily tweakable.
Also:
- Fix a small bug in the way we passed through the backendLegalOps
option.
- Add better error messages in `torch_mlir.compile` for import errors.
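As promised above, here is a hedged sketch of compiling through the single entry point; the `backend_legal_ops` keyword and the exact op-name spelling are assumptions based on this description, not verified API:

```python
import torch
import torch_mlir

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.softmax(x, dim=-1)

# Compile through the one centralized entry point, asking that softmax
# be kept legal (i.e. not decomposed) for a backend that implements it
# natively. Keyword name and op-name spelling are assumptions.
module = torch_mlir.compile(
    Model(),
    torch.ones(3, 4),
    output_type="linalg-on-tensors",
    backend_legal_ops=["torch.aten.softmax.int"],
)
```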
- Add MHLO path.
- Add custom accelerator dialects.
- Rename "Torch Dialect" back to the original "Torch-MLIR Dialect"
  (the surrounding text still refers to the Torch-MLIR dialect).
- Check in the diagram source for Excalidraw (https://excalidraw.com/)
  so anyone can use and update it with the open-source version.
One of the simplifications made by the `RefinePublicReturn` pass
currently happens only if the tensor in question has a single
user. However, the existing check does not correctly handle the case
of one user having multiple uses of the same tensor. This commit makes
sure that only unique users are counted.
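A hypothetical reproducer of the users-versus-uses distinction (the module and names below are illustrative, not taken from the patch):

```python
import torch

class RepeatedUseModule(torch.nn.Module):
    # `cat` is the *single user* of `x` here, but it holds *two uses*
    # of the same tensor value, which a naive one-user check that
    # counts uses rather than users can miscount.
    def forward(self, x):
        return torch.cat([x, x], dim=0)
```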
This is a first step towards formalizing the set of ops in our backend
contract. The goal is to eventually sort `torch` dialect ops into 3
categories:
1. Legal in backend contract
2. Illegal in backend contract
3. Conditionally legal in backend contract
The "conditionally legal" set are the ops that we can optionally
decompose for backends.
This patch adds relevant pass options for this throughout the compiler,
in preparation for a new set of traits which will formalize this
classification.
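As a sketch of what exercising such a pass option could look like from Python: the pass name `torch-decompose-complex-ops` and the `legal-ops` option here are illustrative assumptions, not verified spellings.

```python
from torch_mlir import ir
from torch_mlir.passmanager import PassManager

# Parse a pipeline that tells the decomposition pass to leave one
# conditionally legal op alone. Pass/option names are illustrative.
with ir.Context():
    pm = PassManager.parse(
        "builtin.module(torch-decompose-complex-ops"
        "{legal-ops=torch.aten.softmax.int})"
    )
```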
I recently fixed the handling of the `dim` argument in
`sum_mean_dim` (59fccab857). Therefore,
the checks that the `dim` input is `None` or `[]` are no longer needed.
This fixes a cache-seeding issue with the [previous PR](https://github.com/llvm/torch-mlir/pull/1240): when the Bazel build's GHA cache does not exist to begin with, one of the commands (`chown`) fails on it. This should get the Bazel build back to green.
This introduces a new pass, `LowerToBackendContract` (a better name is
very welcome), which performs the bulk of the simplifications that we
do, such as:
- shape refinement
- dtype refinement
- maximizing value semantics
- inlining global slots
- decomposing complex ops
The key difference from before is that it iterates the set of
transformations, which helps break a number of "catch-22" issues
where one simplification depends on another, the latest example being
here:
https://github.com/llvm/torch-mlir/issues/1131
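In pseudocode, the iteration strategy looks roughly like the following; the helper names are hypothetical stand-ins, not the actual pass API:

```python
def run_simplifications(module):
    """Hypothetical stand-in for one run of the simplifications
    (shape/dtype refinement, value semantics, inlining, decomposition)."""
    ...

def satisfies_backend_contract(module):
    """Hypothetical stand-in for the backend-contract check."""
    ...

def lower_to_backend_contract(module, max_iterations=10):
    # Iterate the simplifications to a fixed point: rerunning the whole
    # set lets one simplification unblock another, breaking "catch-22"
    # dependencies between them.
    for _ in range(max_iterations):
        run_simplifications(module)
        if satisfies_backend_contract(module):
            return module
    raise RuntimeError(
        f"failed to satisfy the backend contract "
        f"after {max_iterations} iterations")
```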
This work also exposed that `RefineTypes` was sometimes crashing or
asserting on certain inputs. This commit hardens it a bit.
This PR adds:
- A minimal docker wrapper to the bazel GHA workflow to make it reproducible locally
- Bazel cache to speed up GHA workflows (down to ~5 minutes from ~40+ minutes)
This is a no-op for non-bazel workflows and an incremental improvement.
Bumps the shape library:
- Updates the function signature for `aten.arange.start_step`
- `upstream_shape_functions.mean_dim` -> `upstream_shape_functions.sum_mean_dim`
* Propagate device data names
* Address PR comment
* Add example usage
* Add test for device data names
* Make TorchMlirComputation fields protected
* Add lazy backend device data name unit tests
* Disable lazy backend tests if LTC is disabled
* Add comments
When we renamed the directory containing submodules from `external` to
`externals`, we accidentally left the original name in the GitHub
workflow. This patch fixes the problem.