* add support for mhlo
* Add Test for torch.ne
* fix torch.ne shape/add static test case
* add support for static torch.ne
---------
Co-authored-by: root <root@n31-177-039.byted.org>
* feat: split pytorch requirements into stable and nightly
* fix: add true to tests to see full output
* refactor: add comments to explain true statement
* feat: move some tests to experimental mode
* refactor: refactor pipeline into more fine-grained differences
* feat: add version differentiation for some tests
* feat: activate more configs
* refactor: change implementation to use less requirement files
* refactor: remove constraints used for testing
* fix: revert some requirement file names
* refactor: remove unnecessary ninja install
* fix: fix version parsing
* refactor: remove dependency on torchvision in main requirements file
* refactor: remove index url
* style: remove unnecessary line switch
* fix: re-add index url
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.
All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`
These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.
Co-authored-by: Jiahao Li <liplus17@163.com>
Bool tensors are represented in TorchScript as an array of
`int8_t`s. However, when importing them into Torch-MLIR, the importer
was assuming the array had `int32_t` elements, leading to the importer
reading into memory that was out of bounds. This commit fixes the
casting of the bool tensor.
-- This commit adds e2e support for the aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
-- This commit adds e2e support for aten.randint by decomposing it into
aten.randint.low with low set to 0.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
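A minimal sketch of the decomposition idea in plain PyTorch (illustrative only; this is not the torch-mlir decomposition code itself):

```python
import torch

# aten.randint(high, size) draws values from [0, high); the decomposition
# expresses it as aten.randint.low with low fixed to 0.
size = [2, 3]
x = torch.ops.aten.randint(10, size)         # values in [0, 10)
y = torch.ops.aten.randint.low(0, 10, size)  # equivalent form with low=0
print(x.shape, y.shape)
```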
This commit adds the ability to specify extra abstract interpretation
functions in `torch_mlir.compile` to use during type refinement. This
allows users to easily add custom ops without having to interact with
MLIR or C++ directly.
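A rough usage sketch, assuming the keyword argument is named `extra_library` and using a placeholder function name (real abstract interpretation functions must follow the naming convention described in the torch-mlir documentation):

```python
import torch
import torch_mlir

# Placeholder shape function for a hypothetical custom op; a real function's
# name must encode the op it refers to per the documented convention.
def custom_op_shape_fn(self: list) -> list:
    return self  # elementwise ops keep the input shape

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

compiled = torch_mlir.compile(
    Model(),
    torch.ones(3, 4),
    output_type="torch",
    extra_library=[custom_op_shape_fn],  # assumed keyword name
)
```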
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
There are several ops whose shape functions exist upstream but had
not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature. This prevents
torch-mlir from knowing whether a None value came from an optional
tensor, which was crucial in the original design, since each tensor
argument had to be turned into two separate arguments for the dtype
function.
This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.
To test the implementation, this commit defines a dtype function for
the convolution op, which takes one optional tensor as an argument.
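A simplified sketch of this convention (function and parameter names are illustrative, not the exact torch-mlir signatures):

```python
from typing import Optional, Tuple

# Each tensor operand arrives as a (rank, dtype) tuple; an optional tensor
# that is None simply arrives as None, so the dtype function's operands line
# up one-to-one with the op's operands.
def example_convolution_dtype_fn(
    input_rank_dtype: Tuple[int, int],
    weight_rank_dtype: Tuple[int, int],
    bias_rank_dtype: Optional[Tuple[int, int]],
) -> int:
    input_rank, input_dtype = input_rank_dtype
    # Simplified rule for illustration: the result dtype follows the input.
    return input_dtype

# dtype codes are torch ScalarType integers (e.g. 6 is float32).
print(example_convolution_dtype_fn((4, 6), (4, 6), None))
```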
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowerings for the aten.add and aten.Float.Scalar ops.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.
This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.
Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, blocking subsequent
PRs. Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result. This patch reverts the
commit to make the post-merge builds green again.
This commit adds support for passing to `torch_mlir.compile` the
result of running `torch.jit.trace` on a model by relaxing the
condition that checks if the model is already in JIT IR to allow any
`torch.jit.ScriptModule`.
Fixes https://github.com/llvm/torch-mlir/issues/1739
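A minimal usage sketch (illustrative; `output_type` is any of the usual values accepted by `torch_mlir.compile`):

```python
import torch
import torch_mlir

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

example_input = torch.ones(3, 4)

# torch.jit.trace returns a torch.jit.ScriptModule subclass, which
# torch_mlir.compile now accepts directly instead of re-scripting the model.
traced = torch.jit.trace(Model(), example_input)
module = torch_mlir.compile(traced, example_input, output_type="torch")
print(module)
```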
pytorch/pytorch@140a3139 reverted a change from yesterday, causing the
RollPyTorch action to break. This patch reverts the corresponding
change in the torch-mlir LTC code.
This patch also re-enables tests that were previously marked as XFAIL.
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future-proof and gives us the backward pass and
functionalization. We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759
This also incidentally fixes the source location handling, which makes
`lockstep_basic.py` give an accurate source location!
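A rough sketch of the approach (hedged: the module path and `fw_compiler` keyword reflect the PyTorch nightlies of that period and have since moved):

```python
import torch
import torch._dynamo as dynamo
from torch._dynamo.optimizations.training import aot_autograd

# Toy backend callback: receives an FX graph and example inputs and returns a
# callable; a real backend would compile the graph instead of echoing it.
def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    gm.print_readable()
    return gm.forward

# Wrapping the callback with aot_autograd hands it forward and backward
# graphs, instead of calling make_fx directly.
backend = aot_autograd(fw_compiler=my_compiler)

@dynamo.optimize(backend)
def f(x):
    return torch.tanh(x) * 2

print(f(torch.ones(3)))
```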
* [custom op] Generalize shape library logic to work with dtypes
This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.
For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
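A minimal sketch of the style of these functions (illustrative, not actual library entries): shape functions are plain Python over `List[int]` shapes, and dtype rules are now expressed through the same mechanism.

```python
from typing import List

def example_unary_shape_fn(self: List[int]) -> List[int]:
    # Elementwise unary ops preserve the input shape.
    return self

def example_matmul_shape_fn(lhs: List[int], rhs: List[int]) -> List[int]:
    assert lhs[-1] == rhs[0], "contracting dimensions must match"
    return lhs[:-1] + rhs[1:]

print(example_matmul_shape_fn([2, 3], [3, 4]))  # prints [2, 4]
```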
This was an experimental attempt at rolling our own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.
Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.