In torch.index_put-like ops, `values` is only required to be
broadcastable to `input[indices]` rather than to match its shape
exactly. This patch fixes the problem by adding an additional
stablehlo.dynamic_broadcast_in_dim before creating the stablehlo.scatter
op. BTW, this patch also enhances the `getBroadcastResultShape` utility
in the hlo namespace.
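For reference, a minimal PyTorch-level illustration of the broadcasting
behavior this patch handles (shapes chosen arbitrarily):
```python
import torch

x = torch.zeros(3, 4)
indices = (torch.tensor([0, 2]),)
# x[indices] has shape (2, 4), but `values` only needs to be
# broadcastable to that shape, e.g. a scalar or a (4,) tensor.
x.index_put_(indices, torch.tensor(1.0))
x.index_put_(indices, torch.arange(4, dtype=torch.float))
```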
- Add Torch to TOSA legalization for aten.as_strided op
- Update xfail_sets with the following:
+ New aten.as_strided results
+ Changes from this commit:
7f9f99c6f8
+ Failed tests from new PyTorch version update
- Add new LIT test to basic.mlir
Change-Id: I6f471ea116ca47f2bf9537b62950fce75a2c624f
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
# Tracking
[Issue](https://github.com/nod-ai/SHARK-ModelDev/issues/848)
[TorchToLinalg Op
Support](https://github.com/nod-ai/SHARK-ModelDev/issues/347)
# Description
Aten_TrilinearOp is an implementation of a "trilinear einstein sum".
Essentially, it is just an einsum across 3 tensors.
There are a few inputs:
## Tensor Inputs
- i1, i2, i3 - The three input tensors for the _trilinear op.
## Expands
These inputs allow you to unsqueeze an input tensor at the specified
dims as a pre-processing step to make the shapes compatible for the rest
of the op:
- expand1: List[int], expand2: List[int], expand3: List[int]
## sumdim
- sumdim: List[int] - After applying element-wise multiplication, the
values in sumdim denote which dimensions to collapse by summing over them.
## unroll_dim
- unroll_dim: int - In the PyTorch implementation, this specifies a
dimension where you could slice the input tensors, multiply and sum
them, then concatenate the results in an output tensor. This complicates
the implementation significantly but doesn't change the result, so I
opted against it. Along with that, a previously accepted path for
solving this involved reusing the AtenEinsumOp, which would also ignore
this input.
# Solution
After trying a bunch of more complicated approaches, this op actually
ended up being quite simple: [See
_trilinear](https://dev-discuss.pytorch.org/t/defining-the-core-aten-opset/1464)
`_trilinear = (i1.unsqueeze(expand1) * i2.unsqueeze(expand2) *
i3.unsqueeze(expand3)).sum(sumdim)`
Wish I'd seen this earlier, but watcha gonna do: 🙃
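A runnable sketch of that decomposition (the `unsqueeze_all` helper and
the shapes below are illustrative, not part of the op):
```python
import torch

def unsqueeze_all(t, dims):
    # Unsqueeze `t` at each dim in ascending order (helper for illustration).
    for d in sorted(dims):
        t = t.unsqueeze(d)
    return t

i1, i2, i3 = [torch.rand(3, 3) for _ in range(3)]
expand1, expand2, expand3, sumdim = [0], [2], [1], [2]

ref = torch._trilinear(i1, i2, i3, expand1, expand2, expand3, sumdim)
out = (unsqueeze_all(i1, expand1)
       * unsqueeze_all(i2, expand2)
       * unsqueeze_all(i3, expand3)).sum(sumdim)
assert torch.allclose(ref, out)
```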
## Not Reusing AtenEinsumOp
Frankly, I found multiple cases where valid inputs would have numerical
mismatches for EinsumOp, even when running tests against EinsumOp
directly. I think it has something to do with the singleton dimensions.
I will need to look into this further, but once I realized the
simplified approach, it proved to be more reliable and much simpler.
Either way (credit to @zjgarvey), there are improvements to the einsum
op here. When I was originally trying to use the op, intermediate
tensors were being flattened properly, but then their 0th dimensions
were being cast from static dims to dynamic dims due to integers not
folding correctly in the MLIR. Figured it's worth keeping these
improvements for future reusers of EinsumOp.
# The zero'd out dim "bug"
For some reason, if you specify the same dimension in all `expands`, e.g.
```
[expand1=[0], expand2=[0], expand3=[0]]
[expand1=[1], expand2=[1], expand3=[1]]
```
the _trilinear op would specify `0` for that dimension in the output
shape, unless it was also included in `sumdim`. This goes against the
implementation of torch.einsum:
```
>>> a, b, c = [torch.rand(1, 3, 3, 3) for i in range(3)] # Simulate expand at dim=0 for all input tensors
>>> torch.einsum('abcd,abcd,abcd->abcd', a, b, c).shape
torch.Size([1, 3, 3, 3])
```
It is also just mathematically incorrect. I considered "replacing"
singleton dims with zeroed-out dims, but that seemed like carrying over
a bug. Instead, I included a test for the case, verified that the
singleton dimensions are handled the way torch.einsum handles them
rather than the way torch._trilinear does, and xfailed it with a note
as to why.
- Add/Extend Torch to TOSA legalization for the following ops:
+ Add aten.threshold_backward
+ Fix aten.threshold
+ Re-implement aten.broadcast_to using tosa.reshape and tosa.tile
+ Add support for rank 0 index for aten.index_select
+ Fix aten.index_put.hacked_twin
+ Add aten.uniform
+ Add aten.logical_and
- Update xfail_sets.py with new e2e results
- Add LIT tests to basic.mlir for newly added ops
Change-Id: I8910564a049d18293284fe2e55e82bc1d2cf10e3
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
This commit sets the PyTorch and TorchVision version to nightly release
2024-10-29.
This commit also fixes the CI failure after commit
54d9e24013
got merged. The issue was that the CI checks on that PR were run before
the previous RollPyTorch update, but the PR was actually merged after
that update. Hence, the failure was not caught before merging the PR.
While exporting the fx_graph through fx_importer for the `rrelu` and
`rrelu_with_noise` ops in train mode, the `aten.rrelu_with_noise` op is
decomposed based on the PyTorch decomposition, which is the default
behavior. However, the decomposition contains an input mutation,
specifically here:
9bbe4a67ad/torch/_decomp/decompositions.py (L325),
resulting in a runtime failure. This issue will probably be fixed by
https://github.com/pytorch/pytorch/pull/138503. Until then, the failing
tests are added to the xfail set.
Also, after the RollPyTorch update, the following tests started passing
for the fx_importer and fx_importer_stablehlo configs.
- "ElementwiseRreluTrainModule_basic"
- "ElementwiseRreluTrainStaticModule_basic"
- "ElementwiseRreluWithNoiseTrainModule_basic"
- "ElementwiseRreluWithNoiseTrainStaticModule_basic"
This commit also updates the dtype check for the `aten.linear` op since
the op now expects both the input tensors to have the same dtype.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
1. Negative indices for tensor indexing are handled by checking index
values at run time and wrapping negative values around. Without the fix,
there was a runtime error.
2. Added a LIT test to lock down the behavior.
3. Updated the `xfail_sets` for the `fx_importer_tosa` config to lock
down the behavior with an e2e test as well.
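For illustration, the wrap-around expressed in plain PyTorch (the helper
name is hypothetical; the lowering emits the equivalent arithmetic in
the target dialect):
```python
import torch

def wrap_index(index: torch.Tensor, dim_size: int) -> torch.Tensor:
    # Map negative indices into [0, dim_size): -1 becomes dim_size - 1, etc.
    return torch.where(index < 0, index + dim_size, index)

idx = torch.tensor([0, -1, -3])
print(wrap_index(idx, 4))  # tensor([0, 3, 1])
```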
"THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY."
The fx tracer does not support tracing "real" quantized tensors
currently. A "real" quantized tensor here means a tensor that is created
using a method like `torch.quantize_per_tensor()` and carries the
quantization parameters (scale, zero_point, scheme) in the object.
However, it seems like DQ/Q-type fake quantization is now commonly used
as a high-level representation of quantized operators and is only
lowered to native quantized ops (if available) in the respective
hardware backend. Quantization of floating-point modules in PyTorch is
now also commonly performed as a graph transformation after
exporting/tracing the original module.
```python
# Examples of "real"/native quantization
tens = torch.randint(-127, 127, (1,), dtype=torch.int8)
torch._make_per_tensor_quantized_tensor(tens, 1, 0)
# tensor([90.], size=(1,), dtype=torch.qint8,
# quantization_scheme=torch.per_tensor_affine, scale=1.0, zero_point=0)
tens = torch.rand((1,))
torch.quantize_per_tensor(tens, 1, 0, torch.qint8)
# tensor([1.], size=(1,), dtype=torch.qint8,
# quantization_scheme=torch.per_tensor_affine, scale=1.0, zero_point=0)
# Example of DQ/Q quantization
import torch.ao.quantization.fx._decomposed
tens = torch.rand((1,))
torch.ops.quantized_decomposed.quantize_per_tensor.default(tens, 1, 0, -128, 127, torch.int8)
# tensor([1], dtype=torch.int8)
```
This means that a typical import flow for a quantized network
into/through torch-mlir would look like this:
`torch.export() -> quantization transformations on fx graph ->
fx_importer`, where the tensors in the graph are normal float/int
tensors and the quantization parameters are carried by the DQ/Q ops.
These kinds of graphs can be traced without issues.
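As a rough sketch of that flow, assuming torch-mlir's
`fx.export_and_import` entry point (the quantization-transformation step
is elided here):
```python
import torch
from torch_mlir import fx

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

# In the quantized flow, the DQ/Q graph transformations would run on the
# exported fx graph before this import step.
module = fx.export_and_import(M(), torch.rand(1, 3))
print(module)
```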
Currently, our quantized convolution tests use "real" quantized
tensors. This means that with the retirement of the `jit_ir_importer`,
these tests can no longer be imported. In summary, I see no reason to
stick with "real" quantization in these tests, as both PyTorch 2.0 and
our linalg backend use DQ/Q quantization.
This patch updates our quantized convolution tests to use DQ/Q
quantization with the ops from `torch.ops.quantized_decomposed`.
Note: For future reference, there seems to be an ongoing consolidation
of the ops for the DQ/Q scheme on the PyTorch side
(https://github.com/pytorch/ao/issues/986#issuecomment-2390296826).
Set PyTorch and TorchVision versions to the nightly release 2024-10-15.
A tracker issue has been filed for the failing tests added to the xfail
set in this PR.
Issue: https://github.com/llvm/torch-mlir/issues/3796
This commit disables the failing sparse tensor tests since they are not
maintained on a day-to-day basis and block the RollPyTorch update for now.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- Add Torch to TOSA legalization for the following ops:
+ aten.empty.memory_format
+ aten.scatter.src
+ aten.slice_scatter
+ aten.diag_embed
- Update xfail_sets.py with new e2e results
- Update basic.mlir with new LIT tests
Change-Id: I817ecf207bcfcf97ca54f30c10c76c4f0f4145ae
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
# Description
Implementation of the op for `torch.aten.unfold`: [TorchToLinalg Op
Support #347](https://github.com/nod-ai/SHARK-ModelDev/issues/849)
Documentation of op can be found here: [PyTorch
Docs](https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html)
For this op, we apply a sliding window of some `size` along a single
`dimension`, with `step` in between iterations.
`Declaration: aten::unfold(Tensor(a) self, int dimension, int size, int
step) -> Tensor(a)`
The resulting `unfolded` tensor modifies the shape of `dimension` to be
equal to the number of blocks that the sliding window extracts/inserts,
with an additional dimension of `size` appended (the number of cols of
the output tensor directly translates from the size of the sliding
window).
So if we had a tensor of rank 3 (A x B x C), with dimension = 1, size =
2 and step = 2:
(A x B x C) |=> (A x ((B - size) // step + 1) x C x size)
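For example, a quick check of the shape formula above (values chosen
arbitrarily):
```python
import torch

t = torch.arange(24.).reshape(2, 4, 3)  # A=2, B=4, C=3
out = t.unfold(1, 2, 2)                 # dimension=1, size=2, step=2
print(out.shape)  # torch.Size([2, 2, 3, 2]): (4 - 2) // 2 + 1 = 2 blocks, size last
```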
After extracting the window from the input tensor, we insert the (1 x
size) slice into the output tensor. We can make this simpler by mapping
the output indices from the input indices, like they do in the official
implementation:
[PyTorch
Code](https://github.com/pytorch/pytorch/blob/main/torch/_inductor/lowering.py#L1694)
This commit adds support for 1-d depthwise convolution as a special
case of 1-d group convolution.
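For context, a depthwise convolution is the grouped case where `groups`
equals the number of input channels; a minimal PyTorch example:
```python
import torch

# 1-d depthwise convolution: groups == in_channels, so each input
# channel is convolved with its own dedicated filter.
conv = torch.nn.Conv1d(in_channels=4, out_channels=4, kernel_size=3, groups=4)
out = conv(torch.rand(1, 4, 16))
print(out.shape)  # torch.Size([1, 4, 14])
```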
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
- Add Torch to TOSA lowering for aten.fill.Scalar/Tensor, aten.flip, and
aten.round
- Fix torchScalarToTosaTensor function to correctly convert Torch scalar
input to TOSA tensor
- Update xfail_sets.py with new e2e results
- Update basic.mlir with LIT tests for new ops
Change-Id: If1e42c2e582710dd8ad0465eed29806fbcdbde41
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
- Add Torch to TOSA legalization for aten.index_select
- Fix createOneDimTfIndices function in TosaLegalizeCommon.cpp to
correctly convert Torch indices to TF-style indices, which is used in
convertGatherNdOp
- Update e2e tests in xfail_sets.py
- Update basic.mlir with new LIT test for aten.index_select
Change-Id: I52519246183949353a3cf22f0a685fe3df8ec8ff
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
- Add lowering from Torch to TOSA for aten.diagonal
- Clean up some code
- Update xfail_sets.py with the new e2e results
- Update basic.mlir with the new op LIT test
Change-Id: I99bed685455752d09ed96edd837c4dfbee152701
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
Current version does not work for a mixture of dynamic and static shaped
batch dimensions. Rework to grab the correct dynamic shapes.
---------
Co-authored-by: dan <danimal197@gmail.com>
- Add Torch to TOSA legalization for the following reduction ops:
+ aten.min.dim
+ aten.min
+ aten.max
+ aten.prod
+ aten.prod.dim_int
+ aten.all.dim
- Add dtype casting support for reduce sum and prod ops
- Extend aten.max.dim legalization to a template to support aten.min.dim
legalization
- Update end-to-end tests sets in xfail_sets.py
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
Change-Id: I854dd6c0c55e570c1fb7242f20c85cf64d6e7fe0
Signed-off-by: Justin Ngo <justin.ngo@arm.com>
Enabled mask and is_causal parameters for
torch.aten.scaled_dot_product_attention, plus relevant comments and
tests.
The tests added highlight the new capabilities introduced in this PR,
including:
- Attention with F16 mask
- Attention with Boolean mask
- Causal attention with same Q K V shapes
- Causal attention without same Q K V shapes
Made sure that one cannot input both mask and is_causal.
As titled, create a new decomposition for `aten.fmod.Tensor` into
`aten.div`, `aten.trunc`, `aten.mul`, and `aten.sub`. Note that we only
use `aten.trunc` for floating-point operations. This further gets
decomposed into `aten.where` etc. by other existing decompositions.
This decomposition now makes TOSA pass for a simple model with
`aten.fmod`, while it makes `stablehlo` fail. For now, we disallow this
decomposition for `stablehlo`.
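For reference, a float-only sketch of the arithmetic behind the
decomposition (plain PyTorch, not the actual rewrite pattern):
```python
import torch

def fmod_decomposed(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # fmod(a, b) = a - trunc(a / b) * b, expressed with div/trunc/mul/sub.
    return a - torch.trunc(a / b) * b

a = torch.tensor([5.5, -5.5, 7.0])
b = torch.tensor([2.0, 2.0, -3.0])
assert torch.allclose(fmod_decomposed(a, b), torch.fmod(a, b))
```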
---------
Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
This PR adds `floordiv` to the `PY_BUILTIN_TO_TORCH_OP` map. For the
`aten.mul.int` and `aten.floordiv.int` ops, we add new canonicalization
patterns as follows:
```
%1 = torch.aten.mul.int %input, %const-5
%2 = torch.aten.mul.int %1, %const-6
```
Will be replaced by
`torch.aten.mul.int %input, %const-30`
And
```
%1 = torch.aten.mul.int %input, %const-5
%2 = torch.aten.floordiv.int %1, %const-5
```
Will directly return `%input`
This PR also relaxes the `float` type constraint in TorchToTosa for the
`AtenRsubScalarOp` conversion.
To test:
`cmake --build build --target check-torch-mlir-all`
Previously we only had full-suite timeouts, making it impossible to
identify which specific tests were hanging. This patch adds:
1. Per-test timeout support in the test framework
2. A default 600s timeout for all tests
3. A deliberately slow test to verify the timeout mechanism works
The timeout is implemented using Python's signal module. Tests that
exceed their timeout are marked as failures with an appropriate error
message.
This should help catch and isolate problematic tests that enter infinite
loops, without needing to re-run the entire suite multiple times.
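A minimal sketch of the signal-based approach (POSIX-only; the names
here are illustrative rather than the framework's actual API):
```python
import signal

class TestTimeoutError(Exception):
    pass

def run_with_timeout(test_fn, timeout_seconds=600):
    # SIGALRM fires after `timeout_seconds`; the handler turns it into
    # an exception that the harness records as a test failure.
    def handler(signum, frame):
        raise TestTimeoutError(f"test exceeded {timeout_seconds}s timeout")

    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout_seconds)
    try:
        return test_fn()
    finally:
        signal.alarm(0)  # cancel the pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```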
Set PyTorch and TorchVision version to nightly release 2024-08-18.
This commit also updates the `scaled_dot_product_attention` op.
A new attribute `enable_gqa` has been added. As of now, only its
default value is supported.
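For context, a minimal call showing the new parameter (assuming the
default `enable_gqa=False` from recent PyTorch releases):
```python
import torch
import torch.nn.functional as F

q = k = v = torch.rand(2, 8, 16, 64)  # (batch, heads, seq, head_dim)
# Only the default enable_gqa=False is supported by the lowering for now.
out = F.scaled_dot_product_attention(q, k, v, enable_gqa=False)
```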
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>