Commit Graph

3170 Commits (58489faf7fdd3e3f20fb849fd89e7bfffe6540fe)

Author SHA1 Message Date
Phaneesh Barwaria 026dfade64
onnx.MelWeightMatrix TorchOnnxToTorch (#3503)
Just uploading what I have till now

[Gist](https://gist.github.com/PhaneeshB/761f75f5522d9f4a40ef949a328e93fe)
of the pytorch impl that I'm following to implement the OnnxToTorch lowering.

Additional Details - (also pasted as comment in gist)
[Op
Description](https://github.com/onnx/onnx/blob/main/docs/Operators.md#melweightmatrix)
in Onnx Documentation

[Example](https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-93)
Used the same example in this file; the expected output is shown in the
example.

[Reference Onnx
Impl](4c3ed5e08b/onnx/reference/ops/op_mel_weight_matrix.py (L13))
- This is the base for the above code.
2024-08-12 21:18:29 +05:30
Matthias Gehre 334633b738
e2e: Enable generate-runtime-verification pass (#3615)
This adds the `generate-runtime-verification` pass into the linalg
refbackend, and moves all tests that now abort at runtime into the crash
set, sorted by their respective errors.

I have fixed one set of errors found that way, which are mismatches
between the static dimensions we cast to and the actual dynamic
dimensions. This was caused by wrong annotations on the test cases, like
in
https://github.com/llvm/torch-mlir/pull/3615/files#diff-48bfbf41fcad5fa01b49197d251114f84a2b8de4f1d87ab938a061aedd1419b1R1931
2024-08-12 14:15:12 +02:00
Felix Schneider 0314188dbe
[torch] Basic support for per-channel quantized graphs (#3623)
This patch adds basic support for lowering graphs with per-channel
quantization. Per-channel quantized ops have to be excluded from
`FuseQuantizedOps` for now but can be used in QDQ quantized form.

Using this patch, we're able to import and execute (on the linalg
backend) graphs with per-channel quantization applied using the "new"
PyTorch 2.0 Export Quantization.
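
As an illustration (not taken from this patch), a per-channel QDQ round
trip can be written in eager PyTorch; the shapes and values here are
invented:

```python
import torch

x = torch.randn(3, 4)                   # per-channel params along axis 0
scales = torch.tensor([0.1, 0.2, 0.3])
zero_points = torch.zeros(3, dtype=torch.int64)

# QDQ form: quantize per channel, then immediately dequantize to float.
q = torch.quantize_per_channel(x, scales, zero_points, 0, torch.qint8)
dq = q.dequantize()
```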
2024-08-10 15:51:09 +02:00
Rob Suderman 44266ab0c4
[onnx] Support `fp8` for `onnx.QuantizeLinear` (#3619)
We need to directly decompose quantize linear for `fp8` types, as the
equivalent torch operations do not support `fp8`.
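
For reference, the decomposed arithmetic is roughly a scaled division
followed by a cast. A hedged sketch: numpy has no native fp8 dtype, so
`float16` stands in for the final cast, and float8 zero points are 0 per
the ONNX spec:

```python
import numpy as np

def quantize_linear_fp8_sketch(x: np.ndarray, scale: float) -> np.ndarray:
    # Scale in higher precision, then cast down; float16 is a stand-in
    # here for the fp8 element type used by the real lowering.
    return (x / scale).astype(np.float16)
```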
2024-08-09 12:32:46 -07:00
Rob Suderman 8358e8c255
[onnx] Add support for `fp8` `onnx.DequantizeLinear` (#3617)
Fp8 needs a slightly different path for dequantization as the `torch`
dequantize operation does not support `fp8` types.
2024-08-08 16:20:53 -07:00
Rob Suderman 880e64bbbb
[onnx] `onnx.Split` may not have `num_outputs` which can be inferred (#3608)
The attribute does not exist in all variants of the operation. It can be
inferred from the number of results, so we should just do that.
2024-08-08 16:17:38 -07:00
Rob Suderman fd98476f77
[torch] Unpacking sometimes misses shape inference (#3609)
It is possible that the unpacked tensor does not match the inferred
shapes. This is pretty common when ingesting from the `onnx` frontend.
2024-08-08 16:17:31 -07:00
Vivek Khandelwal f91f816336
Bump llvm to 585523750e2bbe374d1cb3bf4ff9d53de29b9593 (#3613)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-08-09 00:36:10 +08:00
Rob Suderman 4350672685
[torch] Add integer support for pooling operations (#3610)
If we pass an integer type to the pooling operation, we incorrectly pad
with an integer value, which causes downstream compilation failures.
2024-08-07 21:42:10 -07:00
zjgarvey 7f2a17e757
[ONNX] fix padding for `onnx.MaxPool` (#3611)
The saga of aligning onnx and torch padding conventions continues. 

```python
onnx_pads = [low_x, low_y, low_z, high_x, high_y, high_z]
torch_pads = [low_z, high_z, low_y, high_y, low_x, high_x]
```

So not only is the lexicographical ordering hierarchy swapped (low/high
x spatial-dim -> spatial-dim x low/high), but the ordering in the
spatial-dim specification is also reversed.

This patch properly reverses the pad ordering (and actually uses the
`shuffledPadding` to pad).
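
A minimal Python sketch of that reordering (the helper name is ours, not
from the patch):

```python
def onnx_to_torch_pads(onnx_pads):
    # onnx:  [low_x, low_y, low_z, high_x, high_y, high_z]
    # torch: [low_z, high_z, low_y, high_y, low_x, high_x]
    n = len(onnx_pads) // 2
    lows, highs = onnx_pads[:n], onnx_pads[n:]
    torch_pads = []
    for low, high in zip(reversed(lows), reversed(highs)):
        torch_pads += [low, high]
    return torch_pads

assert onnx_to_torch_pads([1, 2, 3, 4, 5, 6]) == [3, 6, 2, 5, 1, 4]
```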
2024-08-07 20:34:00 -07:00
Rob Suderman 6c33ab024e
[onnx] `onnx.CenterCropPad` used an incorrect type for toScalar (#3605)
`toScalar` should have a rank-0 tensor type, not rank-1 with length 1.
Changing this allows proper compilation.
2024-08-07 20:33:33 -07:00
Rob Suderman 59a4c6fda4
[onnx] Fix transposition code for `onnx.OneHot` (#3606)
The post-onehot transposition code was unexercised. Fixed the test and
the transformation so that this path is exercised.
2024-08-07 18:20:26 -07:00
zjgarvey c8efc201f4
[Onnx] expand support for constant matching (#3607)
The pattern `m_OnnxListOfConstantInts` previously only checked if the
attr inside an `onnx.Constant` op is a `DenseResourceElementsAttr`, but
didn't handle `ElementsAttr`s. This patch adds support for
`ElementsAttr` and provides an example of its use via a lit test for
`onnx.Unsqueeze`.
2024-08-07 19:35:34 -05:00
Marius Brehler 341f415b1e
[onnx] Fix lowering `onnx.Shrink` to Torch (#3603)
This fixes the result type of the `torch.aten.lt.Scalar` and
`torch.aten.ge.Scalar` ops created during the lowering of `onnx.Shrink`
to Torch.
2024-08-07 21:25:14 +02:00
Rob Suderman 18139994e8
[onnx] Fix edge condition for `onnx.ReduceMax` (#3598)
The length-0 case for `onnx.ReduceMax` was incorrect due to a copy-paste
error.
2024-08-07 10:32:28 -07:00
zjgarvey 8d95fe9eeb
[TorchToArith] Add a lowering for `torch.add.float_int` (#3594) 2024-08-07 11:55:27 -05:00
Chi_Liu a51b4e014a
[Torch] Disable 1-d quantized convolution (#3601)
To fix https://github.com/nod-ai/SHARK-Turbine/issues/253#issuecomment-2271815640
Prevent fusion for 1-d convolution ops and just do them as f32 convs,
since there isn't a linalg named op for quantized 1-d convolution yet.
This gets 24 onnx eca* models passing in iree-compile.
2024-08-07 09:01:16 -07:00
Branko Trifkovic 2d6bfb2dec
[LINALG] Added support for conversion from float to complex. (#3595) 2024-08-07 12:36:48 +05:30
Rob Suderman b48e55c2f7
[onnx] Handle negative indices for `onnx.GatherElements` (#3599)
Add a check for negative indices and offset appropriately for
`onnx.GatherElements`.
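
The adjustment amounts to the usual Python-style index normalization (a
sketch, not the actual IR):

```python
def normalize_index(idx: int, dim_size: int) -> int:
    # Offset negative indices so they land in [0, dim_size).
    return idx + dim_size if idx < 0 else idx

assert normalize_index(-1, 4) == 3
```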
2024-08-06 18:54:01 -07:00
Vivek Khandelwal 78d0fa8998
build: manually update PyTorch version (#3568)
Set PyTorch and TorchVision version to nightly release 2024-08-04.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-08-06 21:36:39 +05:30
Rob Suderman b1a232222f
[onnx] Fix `onnx.Shape` to include `start` and `end` processing (#3580)
`onnx.Shape` can select only a subset of indices using attributes. Add
support for these attributes.
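
Per the ONNX documentation, the attributes amount to a clamped slice of
the shape; a sketch:

```python
def onnx_shape(shape, start=0, end=None):
    # Negative start/end count from the back; both clamp to [0, rank].
    rank = len(shape)
    end = rank if end is None else end
    start = start + rank if start < 0 else start
    end = end + rank if end < 0 else end
    start = min(max(start, 0), rank)
    end = min(max(end, 0), rank)
    return list(shape[start:end])

assert onnx_shape((2, 3, 5, 7), start=1, end=-1) == [3, 5]
```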

---------

Co-authored-by: zjgarvey <47986913+zjgarvey@users.noreply.github.com>
2024-08-05 13:56:07 -07:00
Gaurav Shukla 839fe90f86
[MLIR][ONNX] Add support for onnx.scan op (#3516)
This commit lowers onnx.scan op to torch.prim.Loop op and adds the
lowering in the onnx pipeline.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-08-05 15:37:26 +05:30
Yuanqiang Liu 7030445c15
[e2e_testing] check process exitcode early in e2e (#3591)
It will exit immediately, so it doesn't need to wait 6 minutes.
2024-08-05 10:41:09 +08:00
Rob Suderman 7e7af67080
Avoid warnings-as-errors build failure (#3588)
Lambda needs a return value to avoid a build failure.
2024-08-02 12:27:31 -07:00
zjgarvey d0933b0eb6
[TorchToLinalg] Fix possible OOB access in Interpolate lowering (#3570)
Following up from the discussion in
<https://github.com/llvm/torch-mlir/pull/3550>, I've edited the lowering
to prevent OOB extracts in a more direct fashion (i.e., just clamping
directly).

I don't think this affects the lit tests at all, but I've tested the
changes in our external test suite at
<https://github.com/nod-ai/SHARK-TestSuite/tree/main/>. I found the
issue when I was unexpectedly getting `nan`'s along the output image
border for a resize test there.
2024-08-02 13:55:37 -05:00
zjgarvey 79ae0afc2f
[TorchToLinalg] Simplify QuantizePerTensor lowering (#3576)
Uses arith::MaximumFOp and arith::MinimumFOp instead of comparison and
select ops to improve readability of IR.
2024-08-02 13:40:52 -05:00
Rob Suderman f7b5c13870
Change linalg.matmul_unsigned to linalg.matmul with unsigned type_fn (#3587)
Change linalg.matmul_unsigned to linalg.matmul with unsigned type_fn

Signed-off-by: Max Dawkins <max.dawkins@gmail.com>
Co-authored-by: Max Dawkins <max.dawkins@gmail.com>
2024-08-02 11:32:24 -07:00
Rob Suderman d273bdfabf
[onnx] Fix default `alpha` for `onnx.Elu` (#3583)
We were defaulting to `0.0` for `onnx.Elu` when it is supposed to be
`1.0`.
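
For context, `alpha` scales the negative branch of ELU, so defaulting it
to `0.0` silently degenerates the op into ReLU (a worked sketch of the
ONNX definition):

```python
import math

def elu(x: float, alpha: float = 1.0) -> float:
    # ONNX Elu: x if x >= 0, else alpha * (exp(x) - 1).
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

assert elu(-1.0, alpha=0.0) == 0.0                      # wrong default: ReLU
assert abs(elu(-1.0) - (math.exp(-1.0) - 1.0)) < 1e-12  # correct default
```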
2024-08-02 09:29:17 -07:00
Rob Suderman 3d33c5a206
[onnx] Fix `onnx.ScatterElements` for negative indices (#3582)
We need to adjust for negative scatter index values. The in-bounds
adjustment is now materialized.
2024-08-02 09:01:10 -07:00
Rob Suderman 306ed62edd
[onnx][torch] Fix `onnx.SoftmaxCrossEntropyLoss` for ignore index (#3585)
There were two issues related to `ignore_index` being set:

(1) the onnx-to-linalg pass was not reading the value correctly, and (2)
the mean pass was not considering the `ignore_index` value.

For (2), when taking the mean we need to know how many of the values
were actually considered in the sum, and therefore we cannot divide by
the total number of elements. Summing the count of considered elements
and dividing by that corrects the issue (see the sketch below).
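
A sketch of the corrected mean reduction (names invented for
illustration):

```python
def mean_loss_with_ignore_index(losses, targets, ignore_index):
    # Average only over elements whose target is not ignored; dividing
    # by the total element count would skew the mean.
    kept = [l for l, t in zip(losses, targets) if t != ignore_index]
    return sum(kept) / len(kept)

assert mean_loss_with_ignore_index([1.0, 5.0, 3.0], [0, -100, 2], -100) == 2.0
```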
2024-08-02 09:00:56 -07:00
yyp0 22cd4441e7
[Torch] Add support for static uneven divisible AdaptiveAvgPool2d (#3566)
"Static uneven divisible AdaptiveAvgPool2d" means that although the
input size is not an integer multiple of the output size, the kernel and
stride sizes can still be fixed (not dynamic). The derivation of the
kernel and stride sizes is consistent with
torch/_decomp/decompositions.py:adaptive_avg_pool2d, as described below;
a condensed sketch follows the list.

1. Stride Size
First, derive the start index of each reduce operation from the output
size (`n`): `start_index = ([0, 1, ..., n - 1] * input_size) //
output_size`. For each index `k`, if `k * (input_size % output_size) <
output_size`, then the stride between the current and previous windows
stays `input_size // output_size`. So if `(n - 1) * (input_size %
output_size) < output_size`, the stride throughout the whole
AdaptiveAvgPool2d stays static at `input_size // output_size`.

2. Kernel Size
torch/_decomp/decompositions.py:adaptive_avg_pool2d computes a static
kernel size when the input/output sizes satisfy either of two
conditions: `input_size % output_size == 0` or `output_size %
(input_size % output_size) == 0`. If `input_size % output_size == 0`,
the kernel size equals `input_size // output_size`; otherwise it is
`input_size // output_size + 1`.
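
Condensed into a per-dimension sketch (the helper name is ours):

```python
def static_adaptive_avg_pool_params(input_size: int, output_size: int):
    # Returns (kernel, stride) when both are provably static per the
    # derivation above; raises otherwise.
    rem = input_size % output_size
    if (output_size - 1) * rem >= output_size:
        raise ValueError("stride is not static")
    stride = input_size // output_size
    if rem == 0:
        kernel = stride
    elif output_size % rem == 0:
        kernel = stride + 1
    else:
        raise ValueError("kernel size is not static")
    return kernel, stride

assert static_adaptive_avg_pool_params(10, 5) == (2, 2)
assert static_adaptive_avg_pool_params(5, 4) == (2, 1)
```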
2024-08-01 11:37:53 +08:00
Jiawei Wu 6f7a5db801
[FxImporter] small fixes for fx importer compatibility issues between different pytorch versions (#3577) 2024-08-01 10:52:41 +08:00
Jiawei Wu edc87fc577
[stablehlo] support dynamic-shaped index in stablehlo conversion for aten.index-like ops (#3322)
For now, at most one dynamic dim of the index tensors in
aten.index/aten.index_put-like ops is supported.
2024-08-01 10:41:09 +08:00
Rob Suderman 7f475e174e
Add extf-trunc f32-f64-f32 elision (#3579)
Torch represents all scalars as i64 and f64 types, which results in
extraneous trunc-extf operations. We can rework this by eliding the
widen-then-narrow cases.
2024-07-31 16:50:00 -07:00
Jiawei Wu 7b2902f6e2
[stablehlo]: fix aten.index_put_hacked_twin lowering to StableHlo (#3572)
The current StableHLO lowering strategy works well when the `src`
tensor's rank is no bigger than the `dst` tensor's. The new patch makes
it succeed in the other cases as well. The following is an example.
```
%190 = torch.prim.ListConstruct %arg4 : (!torch.vtensor<[1,1024],si64>) -> !torch.list<vtensor>
%191 = torch.aten.index_put.hacked_twin %189, %190, %186, %true : !torch.vtensor<[1024,768],f32>, !torch.list<vtensor>, !torch.vtensor<[1,1024,768],f32>, !torch.bool -> !torch.vtensor<[1024,768],f32>
```
2024-07-31 22:33:57 +08:00
yyp0 f49b9c14f1
[Torch] Add support for Aten__Or__BoolOp (#3574) 2024-07-31 17:23:53 +08:00
Suraj Sudhir d3efab984b
[TOSA] Fix Tensor.hacked_twin to support diff size indexes (#3547)
- Broadcasts index list tensors
- Adds torch.nn.Unfold test

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-07-30 14:32:05 -07:00
Ivan Butygin 8bd1b9751f
`max_unpool3d` linalg lowering (#3536)
An attempt at lowering `aten.max_unpool3d` to linalg.
There are known issues with this implementation (see the comment in the
code).
2024-07-30 20:59:17 +03:00
zjgarvey f1c74e1431
[TorchToLinalg] add support for depthwise qconv (#3564)
- Adds support for lowering depthwise + quantized convolution ops to
linalg::DepthwiseConv2DNhwcHwcQOp
- Changed the variable name for groupSize (which is really C/G) to the
more appropriate numGroups (G).
- Discovered in e2e testing that linalg does not accept (Cin = groups &&
Cout = K*groups for K>1) as a "depthwise" conv, so this also updates the
case-checking to reflect this issue (see the sketch below).
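
A sketch of the tightened case check (the helper name is ours):

```python
def is_depthwise_for_linalg(in_channels: int, out_channels: int, groups: int) -> bool:
    # linalg's DepthwiseConv2DNhwcHwcQOp has no channel multiplier, so
    # Cin == groups with Cout == K * groups for K > 1 must not match.
    return in_channels == groups and out_channels == groups
```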
2024-07-29 12:25:07 -07:00
zjgarvey 50d6ce225f
Align Quantization Rounding Scheme with ONNX/Pytorch (#3569)
PyTorch and ONNX apparently round to nearest with ties going to the
nearest even value, but we were using `math::round` for the
torch-to-linalg conversion of `quantize_per_tensor`, which rounds away
from zero on ties.
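
Python's built-in `round` happens to use the same round-half-to-even
rule, which makes the difference easy to see (illustration only):

```python
import math

# Round half to even (PyTorch/ONNX): ties go to the nearest even integer.
assert round(2.5) == 2 and round(3.5) == 4

# Rounding away from zero on ties (math::round-style, for positive x):
assert math.floor(2.5 + 0.5) == 3
```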
2024-07-29 12:24:46 -07:00
Vinayak Dev 30c4d2f2b8
[torch] Add OnnxToTorch lowering for Onnx.Unique op (#3523)
Adds OnnxToTorch Lowering for the `Onnx.Unique` op.
2024-07-29 17:32:44 +05:30
pdhirajkumarprasad a211ccbcff
Implementation of SplitToSequence ops lowering (#3509)
Added support for lowering SplitToSequence ops.
Added a test case with FileCheck.
2024-07-29 15:44:22 +05:30
Vivek Khandelwal b6e4725259
[ONNX] Add OnnxToTorch lowering for NonMaxSuppression op (#3501)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-26 21:01:27 +05:30
yyp0 ea60d72489
[Torch] Add AtenMaskedFillTensorOp support (#3561) 2024-07-26 15:32:13 +08:00
Vivek Khandelwal 15cf7106c4
[ONNX] Reduce Onnx.Flatten op version (#3560)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-24 21:27:20 +05:30
Yuanqiang Liu 003b06dfa1
[Torch] enhance naryFolderHelper to support mixed dtypes (#3559)
* so that it can support mixed-dtype folds like `i64 + f64 => f64`.
* also unifies `aten.log`'s folder code to use `naryFolderHelper`.
2024-07-24 17:54:59 +08:00
Yuanqiang Liu aad1604046
[Torch] enhance fold of aten.squeeze.dim (#3558) 2024-07-24 14:13:48 +08:00
Ze Zhang d1e172f418
Register fake_quantize_cachemask ops and add their decompose patterns (#3556)
Test:

`cmake --build build --target check-torch-mlir-all`
2024-07-23 11:33:12 -07:00
Yuanqiang Liu 21ad890009
[Torch] enhance fold of aten.slice.Tensor (#3557)
so that it can fold slices with any static shape.
2024-07-23 22:53:03 +08:00
Yuanqiang Liu 78846425e2
[Torch] add constraints when decomposing aten.split_with_sizes (#3555) 2024-07-23 10:34:29 +08:00