Commit Graph

253 Commits (638ef1451290d471830e9ad594c0a037dc861811)

Author SHA1 Message Date
Rob Suderman 6cf139687d
[onnx] Support for optional `axis` attribute for `onnx.Pad` (#3635)
The `axis` attribute is optional. Added support by computing the
pad based on the supplied axis values.
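A minimal sketch of the idea, assuming pads are given only for a subset
of axes (`expand_pads` and its arguments are hypothetical helpers, not
the actual lowering):

```python
import numpy as np

def expand_pads(pads, axes, rank):
    # `pads` holds [begin..., end...] only for the listed axes;
    # every other dimension gets zero padding (hypothetical helper).
    full = np.zeros(2 * rank, dtype=int)
    n = len(axes)
    for i, a in enumerate(axes):
        a = a % rank  # allow negative axis values
        full[a] = pads[i]             # begin pad for axis a
        full[a + rank] = pads[i + n]  # end pad for axis a
    return full

# Pads supplied only for axes 0 and 2 of a rank-3 tensor.
print(expand_pads([1, 2, 3, 4], [0, 2], 3))  # -> [1 0 2 3 0 4]
```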

---------

Signed-off-by: Rob Suderman <rob.suderman@gmail.com>
2024-08-24 11:41:08 -07:00
Phaneesh Barwaria 9a6fe58a02
onnx.MelWeightMatrix Onnx to Torch to Linalg (#3659)
- This PR adds a new (and equivalent) more tensorized impl of
MelWeightMatrix which lowers all the way to linalg.
- [Ref Pytorch
Impl](https://gist.github.com/PhaneeshB/4e6dfcded3007b1b686fbe28f07a67cd)
- Thanks to @rsuderman for pointing out the difficulties the [earlier
impl](#3503) posed during lowering to linalg and also for providing a
better numpy impl 🙏
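For reference, a rough numpy sketch of the op's semantics (triangular
mel filters over the positive DFT bins); this follows the linked gist in
spirit, not the exact ONNX kernel:

```python
import numpy as np

def hz_to_mel(hz):
    return 2595.0 * np.log10(1.0 + hz / 700.0)

def mel_weight_matrix(num_mel_bins, dft_length, sample_rate,
                      lower_edge_hertz, upper_edge_hertz):
    # Output shape: [dft_length // 2 + 1, num_mel_bins].
    bin_hz = np.linspace(0.0, sample_rate / 2.0, dft_length // 2 + 1)
    edges = np.linspace(hz_to_mel(lower_edge_hertz),
                        hz_to_mel(upper_edge_hertz), num_mel_bins + 2)
    mel = hz_to_mel(bin_hz)[:, None]
    lower, center, upper = edges[:-2], edges[1:-1], edges[2:]
    # Rising and falling slopes of each triangular filter.
    up = (mel - lower) / (center - lower)
    down = (upper - mel) / (upper - center)
    return np.maximum(0.0, np.minimum(up, down))

print(mel_weight_matrix(8, 16, 8192, 0.0, 4096.0).shape)  # -> (9, 8)
```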
2024-08-22 08:55:03 -07:00
Rob Suderman 78deb175b3
[onnx] Fix shortcircuit path (#3633)
The implementation was short-circuiting the second result. Updated to
guarantee we do not short-circuit.
2024-08-16 09:23:47 -07:00
Rob Suderman 3a599bec80
[onnx] Fix onnx.ThresholdedRelu crash (#3638)
The result type was not fetched, causing a crash on construction.
2024-08-16 09:23:38 -07:00
Vivek Khandelwal 4a0bed0ce0
[ONNX] Add training mode support for BatchNormalization op (#3597)
This commit extends the OnnxToTorch lowering for BatchNormalization op
for supporting the case when training=True.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-08-14 10:46:38 +05:30
Rob Suderman 2511cf46b4
[onnx] Fix `onnx.RNN` for layout attribute (#3620)
The `layout` attribute was not considered for the `onnx.RNN` operation.
Added support for the attribute to transpose the inputs / outputs of the
RNN when valid.
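In numpy terms, the `layout` handling amounts to a transpose of the time
and batch dimensions (a sketch under the ONNX RNN layout convention):

```python
import numpy as np

# layout == 0: X is [seq_length, batch_size, input_size] (the default);
# layout == 1: X is [batch_size, seq_length, input_size].
def to_seq_major(X, layout):
    return np.transpose(X, (1, 0, 2)) if layout == 1 else X

X = np.zeros((4, 2, 8))          # batch-major input, layout=1
print(to_seq_major(X, 1).shape)  # -> (2, 4, 8)
```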
2024-08-13 14:34:25 -07:00
Rob Suderman af67f9efb0
[onnx] Support integer types for `onnx.Pow` (#3626)
Integer `Pow` is not supported by the `torch` operator. Add casting for
integer types.
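A minimal sketch of the casting approach (eager PyTorch itself accepts
integer pow; the cast mirrors what the lowering needs):

```python
import torch

def integer_pow(base, exp):
    # Cast integer operands to float, apply pow, cast back.
    result = torch.pow(base.to(torch.float32), exp.to(torch.float32))
    return result.to(base.dtype)

print(integer_pow(torch.tensor([2, 3]), torch.tensor([3, 2])))  # tensor([8, 9])
```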
2024-08-13 09:39:04 -07:00
Rob Suderman 39307f0462
[onnx] Fix `onnx.Gather` for bad expansion (#3625)
A case where an unsqueeze was required was missed, causing compilation
failures.
2024-08-13 09:38:55 -07:00
aldesilv a4ba02eef5
[ONNX] add support for tfidfvectorizer (#3553)
Supports 1-d/2-d input and output.
Implemented based on the description and example test cases in
https://github.com/onnx/onnx/blob/main/docs/Operators.md#TfIdfVectorizer
and some notes from

https://github.com/onnx/onnx/blob/main/onnx/reference/ops/op_tfidf_vectorizer.py#L128

---------

Co-authored-by: zjgarvey <zjgarvey@gmail.com>
2024-08-12 18:10:11 -05:00
Rob Suderman d3695a97a0
[onnx] Fix `onnx.Hardmax` lowering to torch (#3624)
The lowering to torch makes assumptions about the dimensions / types of
reduce max and onehot. We need to correct for expected torch behavior.
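For reference, hardmax is a one-hot of the argmax along an axis (a plain
numpy sketch of the intended semantics):

```python
import numpy as np

def hardmax(x, axis=-1):
    # One-hot of the argmax along `axis`; ties go to the first maximum.
    idx = np.argmax(x, axis=axis)
    return np.moveaxis(np.eye(x.shape[axis], dtype=x.dtype)[idx], -1, axis)

print(hardmax(np.array([[1.0, 3.0, 2.0]])))  # -> [[0. 1. 0.]]
```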
2024-08-12 11:19:02 -07:00
Phaneesh Barwaria 026dfade64
onnx.MelWeightMatrix TorchOnnxToTorch (#3503)
Just uploading what I have till now

[Gist](https://gist.github.com/PhaneeshB/761f75f5522d9f4a40ef949a328e93fe)
of pytorch impl that I'm following to implement the OnnxToTorch lowering

Additional Details - (also pasted as comment in gist)
[Op
Description](https://github.com/onnx/onnx/blob/main/docs/Operators.md#melweightmatrix)
in Onnx Documentation

[Example](https://github.com/onnx/onnx/blob/main/docs/Operators.md#examples-93)
Used the same example in this file; the expected output is shown in the
example.

[Reference Onnx
Impl](4c3ed5e08b/onnx/reference/ops/op_mel_weight_matrix.py (L13))
- This is the base for the above code.
2024-08-12 21:18:29 +05:30
Rob Suderman 44266ab0c4
[onnx] Support `fp8` for `onnx.QuantizeLinear` (#3619)
We need to directly decompose quantize linear for `fp8` types, as the
equivalent torch operations do not support them.
2024-08-09 12:32:46 -07:00
Rob Suderman 8358e8c255
[onnx] Add support for `fp8` `onnx.DequantizeLinear` (#3617)
Fp8 needs a slightly different path for dequantization, as the `torch`
dequantize operation does not support `fp8` types.
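A minimal sketch of the direct decomposition, using float arrays to
stand in for `fp8` values:

```python
import numpy as np

def dequantize_linear(x, scale, zero_point):
    # Upcast, subtract the zero point, then scale: the decomposed
    # form used instead of the torch dequantize op.
    x = x.astype(np.float32)
    return (x - np.float32(zero_point)) * np.float32(scale)

print(dequantize_linear(np.array([2.0, 4.0]), 0.5, 0.0))  # -> [1. 2.]
```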
2024-08-08 16:20:53 -07:00
Rob Suderman 880e64bbbb
[onnx] `onnx.Split` may not have `num_outputs` which can be inferred (#3608)
The attribute does not exist in all variants of the operation. It can be
inferred from the number of results, so we should just do that.
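A minimal sketch of the inference, assuming an evenly divisible axis
(the helper name is hypothetical; uneven trailing chunks are omitted):

```python
def infer_split_sizes(dim_size, num_results):
    # `num_outputs` defaults to the number of results; the axis is
    # then divided evenly among them.
    assert dim_size % num_results == 0
    return [dim_size // num_results] * num_results

print(infer_split_sizes(12, 3))  # -> [4, 4, 4]
```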
2024-08-08 16:17:38 -07:00
zjgarvey 7f2a17e757
[ONNX] fix padding for `onnx.MaxPool` (#3611)
The saga of aligning onnx and torch padding conventions continues. 

```python
onnx_pads = [low_x, low_y, low_z, high_x, high_y, high_z]
torch_pads = [low_z, high_z, low_y, high_y, low_x, high_x]
```

So not only is the lexicographical ordering hierarchy swapped (low/high
x spatial-dim -> spatial-dim x low/high), but the ordering in the
spatial-dim specification is also reversed.

This patch properly reverses the pad ordering (and actually uses the
`shuffledPadding` to pad).
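A minimal sketch of the reordering (the function name is hypothetical;
the actual patch does this on IR values):

```python
def onnx_to_torch_pads(onnx_pads):
    # onnx:  [low_x, low_y, ..., high_x, high_y, ...]
    # torch: [low_last, high_last, ..., low_first, high_first]
    n = len(onnx_pads) // 2
    lows, highs = onnx_pads[:n], onnx_pads[n:]
    torch_pads = []
    for i in reversed(range(n)):
        torch_pads += [lows[i], highs[i]]
    return torch_pads

# [low_x, low_y, low_z, high_x, high_y, high_z]
print(onnx_to_torch_pads([1, 2, 3, 4, 5, 6]))
# -> [3, 6, 2, 5, 1, 4], i.e. [low_z, high_z, low_y, high_y, low_x, high_x]
```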
2024-08-07 20:34:00 -07:00
Rob Suderman 6c33ab024e
[onnx] `onnx.CenterCropPad` used an incorrect type for toScalar (#3605)
To scalar should receive a rank-0 tensor type, not rank-1 with length 1.
Changing this allows proper compilation.
2024-08-07 20:33:33 -07:00
Rob Suderman 59a4c6fda4
[onnx] Fix transposition code for `onnx.OneHot` (#3606)
The post-onehot transposition code was unexercised. Fixed the test and
the transformation so the path is exercised and checked.
2024-08-07 18:20:26 -07:00
Marius Brehler 341f415b1e
[onnx] Fix lowering `onnx.Shrink` to Torch (#3603)
This fixes the result type of the `torch.aten.lt.Scalar` and
`torch.aten.ge.Scalar` ops created during the lowering of `onnx.Shrink`
to Torch.
2024-08-07 21:25:14 +02:00
Rob Suderman 18139994e8
[onnx] Fix edge condition for `onnx.ReduceMax` (#3598)
The length-0 case for `onnx.ReduceMax` was incorrect due to a copy-paste
error.
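For reference, reducing a length-0 dimension needs the identity element
for max, the dtype's lowest value (a plain numpy illustration):

```python
import numpy as np

# Max over an empty axis falls back to the `initial` identity value.
x = np.zeros((0, 3), dtype=np.float32)
print(np.max(x, axis=0, initial=np.finfo(np.float32).min))
# -> [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
```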
2024-08-07 10:32:28 -07:00
Rob Suderman b48e55c2f7
[onnx] Handle negative indices for `onnx.GatherElements` (#3599)
Add a check for negative indices and offset appropriately for
`onnx.GatherElements`.
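A minimal sketch of the offset, the same adjustment the lowering
materializes on the index tensor:

```python
import numpy as np

def wrap_indices(indices, dim_size):
    # Offset negative indices into the valid [0, dim_size) range.
    return np.where(indices < 0, indices + dim_size, indices)

print(wrap_indices(np.array([0, -1, -2]), 4))  # -> [0 3 2]
```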
2024-08-06 18:54:01 -07:00
Rob Suderman b1a232222f
[onnx] Fix `onnx.Shape` to include `start` and `end` processing (#3580)
`onnx.Shape` can select only a subset of indices using attributes. Add
support for these attributes.
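A minimal numpy sketch of the attribute handling (negative values count
from the end, as in `onnx.Shape`):

```python
import numpy as np

def onnx_shape(x, start=0, end=None):
    # Select the [start, end) slice of the shape vector.
    rank = x.ndim
    end = rank if end is None else end
    start = start + rank if start < 0 else start
    end = end + rank if end < 0 else end
    return np.array(x.shape[start:end])

x = np.zeros((2, 3, 5, 7))
print(onnx_shape(x, start=1, end=-1))  # -> [3 5]
```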

---------

Co-authored-by: zjgarvey <47986913+zjgarvey@users.noreply.github.com>
2024-08-05 13:56:07 -07:00
Gaurav Shukla 839fe90f86
[MLIR][ONNX] Add support for onnx.scan op (#3516)
This commit lowers onnx.scan op to torch.prim.Loop op and adds the
lowering in the onnx pipeline.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-08-05 15:37:26 +05:30
Rob Suderman d273bdfabf
[onnx] Fix default `alpha` for `onnx.Elu` (#3583)
We were defaulting to `0.0` for `onnx.Elu` when it is supposed to be
`1.0`.
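The default matters because `alpha` scales the negative branch (a plain
numpy statement of the ONNX Elu definition):

```python
import numpy as np

def elu(x, alpha=1.0):
    # ONNX Elu: f(x) = x for x > 0, alpha * (exp(x) - 1) otherwise;
    # the spec's default alpha is 1.0, not 0.0.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-1.0, 2.0])))  # -> [-0.63212056  2.        ]
```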
2024-08-02 09:29:17 -07:00
Rob Suderman 3d33c5a206
[onnx] Fix `onnx.ScatterElements` for negative indices (#3582)
We need to adjust for negative scatter index values. Added
materialization of the in-bounds adjustment.
2024-08-02 09:01:10 -07:00
Rob Suderman 306ed62edd
[onnx][torch] Fix `onnx.SoftmaxCrossEntropyLoss` for ignore index (#3585)
There were two issues related to `ignore_index` being set:

(1) the onnx-to-linalg pass was not reading the value correctly, and (2)
the mean pass was not considering the `ignore_index` value.

For (2), when taking the mean we need to know how many of the values were
considered in the sum, and therefore we cannot divide by the total number
of elements. Adding a summation that counts the non-ignored elements
should correct this issue.
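A minimal sketch of the corrected mean reduction (names hypothetical):

```python
import numpy as np

def mean_with_ignore_index(losses, targets, ignore_index):
    # Ignored positions contribute neither to the sum nor to the
    # divisor, so dividing by the element count would be wrong.
    mask = targets != ignore_index
    return (losses * mask).sum() / mask.sum()

losses = np.array([1.0, 2.0, 3.0])
targets = np.array([0, -100, 2])
print(mean_with_ignore_index(losses, targets, -100))  # -> 2.0
```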
2024-08-02 09:00:56 -07:00
Vinayak Dev 30c4d2f2b8
[torch] Add OnnxToTorch lowering for Onnx.Unique op (#3523)
Adds OnnxToTorch Lowering for the `Onnx.Unique` op.
2024-07-29 17:32:44 +05:30
pdhirajkumarprasad a211ccbcff
Implementation of SplitToSequence ops lowering (#3509)
Added support for SplitToSequence ops lowering.
Added a test case with FileCheck.
2024-07-29 15:44:22 +05:30
Vivek Khandelwal b6e4725259
[ONNX] Add OnnxToTorch lowering for NonMaxSuppression op (#3501)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-26 21:01:27 +05:30
Vivek Khandelwal 15cf7106c4
[ONNX] Reduce Onnx.Flatten op version (#3560)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-07-24 21:27:20 +05:30
jinchen f0ce1e94ce
[ONNX] Add OnnxToTorch support for SequenceMap (#3535) 2024-07-17 14:25:09 -07:00
Arham Khan 574143448b
[E2E][ONNX] torch.multinomial (#3404)
This PR adds a conversion in the TorchOnnxToTorch pass for the ONNX
Multinomial operation. It also adds a TorchToLinalg lowering for the
`aten.Multinomial` op and does a light refactor of some repeated code
that generates random floating point numbers in
`TorchToLinalg/Random.cpp`.
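For reference, the torch-level op the conversion targets (arbitrary
example values):

```python
import torch

probs = torch.tensor([0.1, 0.2, 0.7])
# Draw 5 category indices with replacement, weighted by `probs`.
samples = torch.multinomial(probs, num_samples=5, replacement=True)
print(samples.shape)  # torch.Size([5])
```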
2024-07-16 23:09:39 +05:30
zjgarvey 0fb8b017d8
Adds misc fixes for some padding related issues (#3528)
This patch adds a few misc pad op related changes:

1. Addresses issue <https://github.com/llvm/torch-mlir/issues/3457>
2. Addresses issue <https://github.com/llvm/torch-mlir/issues/3442>
3. Fixes the padding order for asymmetrically padded onnx.Conv ops
4. Enables passing quantization through those onnx.Conv op pre-paddings
5. Modifies the torch-to-linalg lowering of AtenReplicationPad2d op to
enable support for input rank != 4

Unfortunately, even with all of these changes, the e2e tests for the
ReplicationPad2d still fail the onnx config, since the torch export
procedure for rearranging the pad order is complicated enough that the
padding ints end up not being able to fold back to constants.
2024-07-11 20:01:45 -05:00
Xida Ren (Cedar) 5342aa70cf
Support onnx.GRU and onnx.RNN (#3447) 2024-07-10 14:04:17 -04:00
zjgarvey dcb48dd46c
[ONNX] Fix LpNormalization Lowering (#3521)
The LpNormalization lowering was previously just computing the norm,
which is incorrect. This computes the norm and then divides the input
tensor by its norm.

I've tested this against some simple onnx models locally. I'll look into
adding a test case for this in an external test suite.
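A minimal numpy sketch of the corrected semantics:

```python
import numpy as np

def lp_normalization(x, p=2, axis=-1):
    # Divide the input by its norm rather than returning the norm.
    norm = np.sum(np.abs(x) ** p, axis=axis, keepdims=True) ** (1.0 / p)
    return x / norm

print(lp_normalization(np.array([[3.0, 4.0]])))  # -> [[0.6 0.8]]
```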
2024-07-09 15:42:26 -05:00
Gaurav Shukla 0b46d1110a
[MLIR][ONNX] Add support for onnx.ScatterND (#3479)
This commit adds support for onnx.ScatterND op in the onnx pipeline.

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-07-08 13:27:14 +05:30
Matthias Gehre 6ea6a6c2fe
TorchOnnxToTorch: Fix stack-use-after-free (#3480)
We used to move the SmallVector into an ArrayRef, and then the
SmallVector went out of scope.

Found by asan.
2024-07-08 09:20:09 +02:00
Sagar Kulkarni 0fe74845da
[ONNX] Fix bug in ONNXToTorch PadOp's pads tensor rearrangement (#3485)
Fix the pad tensor rearrangement such that we change the representation
from [x1_begin, x2_begin, ..., x1_end, x2_end, ...] to [xn_begin, xn_end,
..., x2_begin, x2_end, x1_begin, x1_end], where x1, x2, ..., xn are the
dimensions of the pads tensor argument.

---------

Co-authored-by: zjgarvey <zjgarvey@gmail.com>
Co-authored-by: zjgarvey <47986913+zjgarvey@users.noreply.github.com>
2024-07-03 15:02:49 -05:00
jinchen 3915db0a86
[ONNX] Add OnnxToTorch support for CenterCropPad (#3496) 2024-06-28 12:47:29 -07:00
Phaneesh Barwaria 5a627c46b7
onnx.DFT basic support (#3463)
- adds support for DFT v20 on the FFT and IFFT path
- adds required skeleton code for IFFT ops to be recognised in TMlir
2024-06-28 20:08:43 +05:30
jinchen 6d0ca499e6
[ONNX] Add OnnxToTorch support for ReverseSequence (#3495) 2024-06-27 14:33:41 -07:00
Phaneesh Barwaria 39d1332008
add onnx loop support (#3408)
- Adds limited support for lowering onnx.Loop to primLoopOp
- Lowering in the pipeline: in `torch-to-scf` there is a check to see if
the loop is for-like. A primLoopOp is for-like when the input condition
is a `trueBoolConstant`. To adapt the onnx to torch lowering to take
advantage of it, the implementation checks for specific op patterns in
the loopBody region, decides if the loop is for-like, and uses the right
input condition op.
- To adapt the onnxLoopBody to the torchLoopBody, we need to adapt the
input block arguments and set the correct output condition variable in
the loop body.
- scanOutput variables are currently not supported.
2024-06-27 17:08:44 +05:30
zjgarvey d2bc70f188
[TorchToLinalg][ONNX] Add Basic Determinant Support (#3481)
This adds support for a few ops:

- torch.linalg_det
- torch._linalg_det (if the LU and pivot returns are unused)
- onnx.Det

An scf loop is used, since the row reduction algorithm applied here has
some loop-carried dependencies.
The current support being added here is very basic, and only works if no
permutations are required during row reduction, and assumes the matrices
are non-singular.
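A minimal numpy sketch of the row-reduction approach under the same
assumptions (non-singular input, no pivoting needed):

```python
import numpy as np

def det_no_pivot(a):
    # Gaussian elimination without row swaps; the determinant is the
    # product of the pivots. Each step depends on the previous one,
    # hence the loop-carried dependency mentioned above.
    a = a.astype(float).copy()
    n = a.shape[0]
    det = 1.0
    for k in range(n):
        det *= a[k, k]
        a[k + 1:] -= np.outer(a[k + 1:, k] / a[k, k], a[k])
    return det

print(det_no_pivot(np.array([[2.0, 1.0], [1.0, 3.0]])))  # -> 5.0
```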
2024-06-25 13:34:19 -05:00
zjgarvey 368fabf0c1
[ONNX] Basic Support for DeformConv (#3469)
This adds a torchvision op to torch-mlir and a path from onnx.DeformConv
to torchvision.deform_conv2d.

I'm not implementing the torch->linalg lowering for the torchvision op
yet, but posting this PR to get feedback on some of the choices being
made here and to flesh out the onnx frontend a bit.
2024-06-25 12:16:51 -05:00
zjgarvey e346c911f7
[ONNX] Add basic support for RoiAlign (#3493)
This adds an onnx->torch conversion for onnx.RoiAlign into
torchvision.roi_align or torchvision.roi_pool, and adds those two
torchvision ops to torch-mlir.
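For reference, a usage sketch of the torchvision op being targeted
(arbitrary example values):

```python
import torch
from torchvision.ops import roi_align

feats = torch.randn(1, 3, 16, 16)
# Each box is [batch_index, x1, y1, x2, y2].
boxes = torch.tensor([[0.0, 0.0, 0.0, 8.0, 8.0]])
out = roi_align(feats, boxes, output_size=(4, 4), spatial_scale=1.0)
print(out.shape)  # torch.Size([1, 3, 4, 4])
```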
2024-06-25 11:02:45 -05:00
Vinayak Dev 02340408b7
[torch] Add OnnxToTorch lowering for Onnx.STFT op (#3492)
Adds OnnxToTorch lowering for `Onnx.STFT` op.
2024-06-25 19:00:45 +05:30
Vivek Khandelwal 3c3fbe4680
[ONNX] Add OnnxToTorch lowering for Onnx.Upsample Op (#3371)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-25 12:58:31 +05:30
Chi_Liu fc19709daa
[ONNX] Add averagepool dilations support (#3490)
- To fix dilations issue: https://github.com/llvm/torch-mlir/issues/3428
- Test by: https://github.com/nod-ai/SHARK-TestSuite/pull/268
2024-06-21 17:24:57 -07:00
Vivek Khandelwal 83bfb6fb19
[ONNX] Add OnnxToTorch lowering for OptionalHasElement op (#3472)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-21 11:19:00 +05:30
Vivek Khandelwal d29ad4dfbd
[ONNX] Fix Onnx.Hardsigmoid lowering (#3239)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-21 11:18:14 +05:30
Vivek Khandelwal 822d763308
[ONNX] Add OnnxToTorch lowering for Optional, OptionalGetElement op (#3467)
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-18 19:40:18 +05:30