In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
`NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the
inconsistency by having the abstract interpretation functions take a
`Union[int, float, complex]` input for the ops that take
`!torch.number` inputs.
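As a rough sketch, a dtype function for an op with a `!torch.number`
(Scalar) operand now annotates that operand with the full union (the
function and argument names below are illustrative, not the library's
exact naming scheme):
```
from typing import Union

def add_scalar_dtype(self_rank: int, self_dtype: int,
                     other: Union[int, float, complex]) -> int:
    # `other` reifies a !torch.number operand, so it may be complex.
    # A real dtype function would apply PyTorch's promotion rules here.
    return self_dtype
```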
Single-element tuples in Python need a comma after the
element. However, the `registry.py` file, which generates the expected
abstract interpretation function signatures, was not inserting the
comma. This commit changes the expected signature generator to add a
comma after the last element in any non-empty default tuple argument.
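The pitfall is plain Python semantics:
```
not_a_tuple = (4)   # parentheses only group; this is the int 4
a_tuple = (4,)      # the trailing comma makes it a one-element tuple
```
Without the trailing comma, a generated single-element default tuple
argument would silently be a scalar instead of a tuple.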
* Add support for MHLO
* Add test for torch.ne
* Fix torch.ne shape; add static test case
* Add support for static torch.ne
---------
Co-authored-by: root <root@n31-177-039.byted.org>
* feat: split pytorch requirements into stable and nightly
* fix: add true to tests to see full output
* refactor: add comments to explain true statement
* feat: move some tests to experimental mode
* refactor: refactor pipeline into more fine-grained differentiation
* feat: add version differentiation for some tests
* feat: activate more configs
* refactor: change implementation to use fewer requirements files
* refactor: remove constraints used for testing
* fix: revert some requirement file names
* refactor: remove unnecessary ninja install
* fix: fix version parsing
* refactor: remove dependency on torchvision in main requirements file
* refactor: remove index url
* style: remove unnecessary line switch
* fix: readd index url
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.
All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`
These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow-up patch.
Co-authored-by: Jiahao Li <liplus17@163.com>
Bool tensors are represented in TorchScript as an array of
`int8_t`s. However, when importing them into Torch-MLIR, the importer
assumed the array had `int32_t` elements, causing it to read
out-of-bounds memory. This commit fixes the casting of the bool
tensor.
-- This commit adds e2e support for the aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
-- This commit adds e2e support for aten.randint by decomposing it into
an aten.randint.low with low set to 0.
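The decomposition amounts to the following equivalence at the Python
level (illustrative, not the exact rewrite pattern):
```
import torch

torch.randint(10, (2, 3))        # aten.randint
torch.randint(0, 10, (2, 3))     # aten.randint.low with low=0
```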
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
This commit adds the ability to specify extra abstract interpretation
functions in `torch_mlir.compile` to use during type refinement. This
allows users to easily add custom ops without having to interact with
MLIR or C++ directly.
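A hedged sketch of what this enables (the `extra_library` keyword and
the function-naming convention here are assumptions about the
interface, not its exact spelling):
```
from typing import List, Tuple
import torch
import torch_mlir

def custom_relu_shape(x: List[int]) -> List[int]:
    return x  # elementwise: output shape equals input shape

def custom_relu_dtype(x_rank_dtype: Tuple[int, int]) -> int:
    _, dtype = x_rank_dtype
    return dtype  # elementwise: output dtype equals input dtype

model = torch.nn.ReLU()
compiled = torch_mlir.compile(model, torch.ones(2, 3),
                              extra_library=[custom_relu_shape,
                                             custom_relu_dtype])
```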
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
There are several ops whose shape functions exist upstream but which
had not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design, since each tensor
argument had to be turned into two separate arguments for the dtype
function.
This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.
To test the implementation, this commit defines a dtype function for
the convolution op, which takes one optional tensor as an argument.
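A sketch of the new calling convention for that convolution case
(signatures trimmed for brevity; the real function takes the op's full
operand list):
```
from typing import Optional, Tuple

def convolution_dtype(input_rank_dtype: Tuple[int, int],
                      weight_rank_dtype: Tuple[int, int],
                      bias_rank_dtype: Optional[Tuple[int, int]]) -> int:
    input_rank, input_dtype = input_rank_dtype
    # A None here now unambiguously means the optional bias tensor was
    # absent, since present tensors always arrive as a (rank, dtype) pair.
    if bias_rank_dtype is not None:
        bias_rank, bias_dtype = bias_rank_dtype
    return input_dtype
```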
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowerings for the aten.add and aten.Float.Scalar ops.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.
This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.
Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue until
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used much less
in large models.
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, blocking subsequent
PRs. Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result. This patch reverts the
commit to make the post-merge builds green again.
This commit adds support for passing to `torch_mlir.compile` the
result of running `torch.jit.trace` on a model by relaxing the
condition that checks if the model is already in JIT IR to allow any
`torch.jit.ScriptModule`.
Fixes https://github.com/llvm/torch-mlir/issues/1739
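For example (the module and shapes are illustrative):
```
import torch
import torch_mlir

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

traced = torch.jit.trace(AddOne(), torch.ones(2, 3))
print(torch_mlir.compile(traced, torch.ones(2, 3), output_type="torch"))
```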
pytorch/pytorch@140a3139 reverted a change from yesterday, causing the
RollPyTorch action to break. This patch reverts the corresponding
change in the torch-mlir LTC code.
This patch also re-enables tests that were previously marked as XFAIL.
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759
This also incidentally fixes the source location handling, which makes
`lockstep_basic.py` give an accurate source location!
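A hedged sketch of the suggested pattern (these modules were private
and unstable at the time, so the exact import path may have moved
since):
```
from torch._dynamo.optimizations.training import aot_autograd

def fw_compiler(fx_graph, example_inputs):
    # Lower the torch.fx.GraphModule with Torch-MLIR here; returning it
    # unchanged just runs the graph eagerly.
    return fx_graph

backend = aot_autograd(fw_compiler=fw_compiler)
```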
* [custom op] Generalize shape library logic to work with dtypes
This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that are imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.
For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
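As a flavor of what such a pair looks like (the library's actual
naming scheme for these functions differs):
```
from typing import List

def expm1_shape(self: List[int]) -> List[int]:
    return self  # elementwise op: shape is unchanged

def expm1_dtype(self_rank: int, self_dtype: int) -> int:
    # A real dtype function encodes PyTorch's type promotion rules;
    # returning the input dtype is only a placeholder.
    return self_dtype
```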
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.
Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
This adds a basic e2e Config for TorchDynamo using
Linalg-on-Tensors/RefBackend.
But TorchDynamo is pretty orthogonal to
various other pieces, so it should compose nicely with variations like:
- Switching out all the backends (Linalg-on-Tensors, TOSA, MHLO)
- PyTorch functionalization and decompositions
- Taking the example inputs and compiling with all dynamic or all static
shapes without duplicating tests.
This adds it to the CI, but there are still a lot of XFAIL's.
This also adds a helper `from torch_mlir.dynamo import
make_simple_dynamo_backend` which simplifies some of the steps for
making a Torch-MLIR-based TorchDynamo backend. We include "simple" in
the name because we are going to be exploring various things next from
the long-term roadmap.
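A minimal sketch of the helper in use (the backend body is
illustrative; a real backend would compile the FX graph with
Torch-MLIR):
```
import torch
import torch._dynamo
from torch_mlir.dynamo import make_simple_dynamo_backend

@make_simple_dynamo_backend
def my_backend(fx_graph: torch.fx.GraphModule, example_inputs):
    # Returning the GraphModule unchanged just executes it eagerly.
    return fx_graph

@torch._dynamo.optimize(my_backend)
def f(x):
    return torch.tanh(x)

f(torch.ones(3))
```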
The next steps are:
- Burn down all the XFAIL's.
- Start working on the pieces from the [long-term roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md).
- Add functionalization/decompositions into the TorchDynamo flow and
remove reliance on the current Torch-MLIR "frontend".
- Write a pure-Python direct FX->MLIR importer.
- Hook up the new PyTorch symbolic shape stuff.
- Explore PrimTorch decompositions for simplifying backends.
-- The aten.upsample_nearest2d.vec op is not present
owing to https://github.com/pytorch/pytorch/pull/85638.
-- So this commit adds a lowering for aten.upsample_nearest2d.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
For AoT deployments, models often have multiple exported methods.
This patch enables something like this:
```
class TwoMethodsModule(torch.nn.Module):
    def sin(self, x):
        return torch.ops.aten.sin(x)

    def cos(self, x):
        return torch.ops.aten.cos(x)

example_args = torch_mlir.ExampleArgs()
example_args.add_method("sin", torch.ones(2, 3))
example_args.add_method("cos", torch.ones(2, 4))
print(torch_mlir.compile(TwoMethodsModule(), example_args))
```
In the
[long-term](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md#tools-for-advanced-aot-deployments)
we will need to reconcile this with our story for stateful models and the
backend contract being purely functional. For now, this provides some basic
infra that seems harmless. Arguably, we could tighten up the backend contract
even more to only allow a single compiled function which would prohibit this or
require building out a layer above.
Fixes #1557
Unless requested otherwise, PyTorch no longer installs most of the
header files under the caffe2 directory (see
https://github.com/pytorch/pytorch/pull/87986). This breaks our
importer code since we need to use the `MakeGuard()` function to execute
statements in the event of exceptions.
To fix this issue, this patch implements a rudimentary version of
PyTorch's ScopeGuard, where once the class variable goes out of scope,
it executes a predefined method.
This commit removes almost all of the valsem ops, since the value
semantics version of the ops now exist in PyTorch. The only op missing
is `aten.bernoulli_.float`. In addition, this commit simplifies
the implementation of `aten.fill.Scalar` by moving it to the pattern
that converts elementwise ops.
* Add LazyGraphExecutor registration
* Update PyTorch version to 1.14.0.dev20221024
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
This commit makes the following changes needed to bump LLVM:
- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
Allow customizing `backend_legal_ops` for "torch" output type, since we
don't know which backend will be used (it might be a custom backend).
We don't allow customizing the `backend_legal_ops` for the other output
types (Linalg, TOSA, MHLO) since those backends control their set of
legal ops directly.
Fixes #1418
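A sketch of the resulting knob (the op name is just an example):
```
import torch
import torch_mlir

model = torch.nn.Flatten()
compiled = torch_mlir.compile(
    model,
    torch.ones(2, 3, 4),
    output_type="torch",
    backend_legal_ops=["aten.flatten.using_ints"],
)
```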