Rob Suderman
60bf6c25af
[onnx] Lower `onnx.QLinearMatMul` to `torch` operators ( #2776 )
...
We can plumb the quantized matmul into PyTorch using its quantized tensor
types, carrying the scale and zero point as side-channel information. To
produce the final int8 result we dequantize and then requantize.
2024-01-24 12:28:48 -08:00
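A minimal Python sketch of the dequantize -> matmul -> requantize pattern this commit describes; all names and values are illustrative, not the actual conversion code:

```python
import torch

def qlinear_matmul(a_i8, a_scale, a_zp, b_i8, b_scale, b_zp, y_scale, y_zp):
    # Dequantize both operands using their side-channel scale/zero point.
    a = (a_i8.to(torch.float32) - a_zp) * a_scale
    b = (b_i8.to(torch.float32) - b_zp) * b_scale
    y = a @ b
    # Requantize the float result back to int8.
    return torch.clamp(torch.round(y / y_scale) + y_zp, -128, 127).to(torch.int8)
```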
Vivek Khandelwal
894805dd5e
[MLIR][TORCH] Support for `onnx.LayerNormalization` ( #2789 )
...
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-24 11:08:20 -08:00
Gaurav Shukla
12f123eff8
[ONNX][MLIR] Add support for pad op in the onnx pipeline ( #2738 )
...
This commit adds a mapping from the `onnx.pad` op to the `torch.pad` op.
It does not yet support the `axes` parameter of `onnx.pad`.
Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-25 00:33:37 +05:30
Phaneesh Barwaria
ac8975ea12
[MLIR] [ONNX] Lowering for onnx tile op and sign op ( #2725 )
2024-01-24 22:56:21 +05:30
Chi_Liu
77ae56337d
[ONNX][MLIR] Add support for onnx.Exp op ( #2792 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/312
2024-01-23 13:45:00 -08:00
James Newling
dc056e58e6
[MLIR][TORCH] Add onnx.cast cases used by OPT-1.25M ( #2787 )
2024-01-23 21:06:25 +05:30
Gaurav Shukla
b7a0329676
[ONNX][MLIR] Fix padding size constraint for onnx.maxpool op ( #2782 )
...
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2024-01-23 19:23:01 +05:30
Chi_Liu
cad98e8113
[ONNX][TORCH-MLIR] Add TopK support ( #2774 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/331
2024-01-22 12:56:39 -08:00
Ramiro Leal-Cavazos
5883ef0f21
Fix unused variable warnings ( #2775 )
2024-01-22 11:05:55 -08:00
Dave Liddell
2f4924015d
[onnx] Added flatten ( #2760 )
...
https://github.com/nod-ai/SHARK-Turbine/issues/328
---------
Co-authored-by: Dave Liddell <dliddell@xilinx.com>
2024-01-19 16:18:16 -08:00
Gaurav Shukla
3b85c70748
[ONNX][MLIR] Add support for onnx.gather op ( #2726 )
...
This commit adds support for gather op in the onnx pipeline.
https://github.com/nod-ai/SHARK-Turbine/issues/242
Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-01-19 21:58:29 +05:30
Andreas Falkenberg
4de4d38b87
Initial commit of NonZero op ( #2766 )
2024-01-18 15:23:13 -10:00
Rob Suderman
b5387c0f29
[onnx] Lowering `onnx.dequantize_linear` to `torch` ( #2759 )
...
We can map the per-tensor version of the operation to the dequantize
operation by first marking the raw integer data with a
make-quantized-tensor op. This introduces the `qint*` and `quint*` tensor
types, which can be lowered to the appropriate dequantization behavior
during the torch-to-linalg conversion.
2024-01-18 16:47:21 -08:00
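A sketch of the per-tensor path in eager PyTorch; `torch._make_per_tensor_quantized_tensor` is the private eager-mode analogue of the marking step described above, used here purely for illustration:

```python
import torch

# Mark raw int8 data as a quantized (qint8) tensor, then dequantize:
# the result is (raw - zero_point) * scale in float32.
raw = torch.tensor([-128, 0, 127], dtype=torch.int8)
q = torch._make_per_tensor_quantized_tensor(raw, 0.05, 0)
x = q.dequantize()
```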
Rob Suderman
bd11877f6f
[onnx] Support lowering quantize linear to `torch` ( #2751 )
...
We can map the per_tensor case to the `torch.aten.quantize_per_tensor`
operation. In this case we extract the `scale` and `zero_point` values,
directly invoke the quantization, and return the integer representation
value.
2024-01-18 16:33:10 -08:00
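A sketch of the per_tensor case in eager PyTorch, with an illustrative scale and zero point:

```python
import torch

# Quantize with the extracted scale/zero point, then return the integer
# representation: round(x / scale) + zero_point, clamped to the int8 range.
x = torch.tensor([0.0, 0.1, -0.2, 6.35])
q = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)
y = q.int_repr()
```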
Phaneesh Barwaria
eed144bfbc
[ONNX][MLIR] Add Identity op support ( #2754 )
2024-01-16 19:06:54 +05:30
kumardeepakamd
87389f0762
[ONNXToTorch] Add conversion for Onnx range ( #2752 )
...
Implemented onnx.Range. The spec says start, limit, and delta are 0-D
tensors that can be double, float, int16, int32, or int64. All int types
are mapped to !torch.int and all float types to !torch.float.
---------
Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-15 14:26:46 -05:00
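A sketch of the correspondence in eager PyTorch; values are illustrative:

```python
import torch

# onnx.Range(start, limit, delta) corresponds to torch.arange.
torch.arange(2, 11, 3)        # int operands  -> tensor([2, 5, 8])
torch.arange(0.0, 1.0, 0.25)  # float operands -> float tensor
```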
Rob Suderman
197b3b475c
[onnx] Convert `onnx.constant` to `torch` literal tensor ( #2748 )
...
Handles the multiple cases of `onnx` constant values and converts them
to `torch` literal tensors. This can include splats with a single
integer or floating point value, a set of explicit integer values, or
an elements array attr of values.
2024-01-15 09:31:22 -08:00
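The constant cases above, illustrated in eager PyTorch with made-up values:

```python
import torch

# Splat: a single value repeated across a shape.
splat = torch.full((2, 3), 7)
# Explicit element values.
explicit = torch.tensor([1, 2, 3, 4])
```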
Chi_Liu
c7452af4fa
[MLIR][ONNX] Add OnnxToTorch support for Maxpool Op ( #2695 )
...
Add Maxpool ONNX op support.
Add Utils.h/cpp files to create a constant int list for ONNX.
2024-01-12 14:54:38 -08:00
James Newling
47ffc90db4
Signed/unsigned C++ compiler warning fixes ( #2742 )
2024-01-11 09:46:46 -08:00
Andreas Falkenberg
5862854bc8
[ONNX][TORCH-MLIR] LayerNorm ( #2716 )
...
Layer normalization using `torch.aten.native_layer_norm`.
https://github.com/nod-ai/SHARK-Turbine/issues/325
2024-01-11 14:27:04 +05:30
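A minimal sketch of the target op in eager PyTorch (illustrative shapes); `torch.native_layer_norm` returns the output plus the mean and rstd statistics:

```python
import torch

# Lowering target: native_layer_norm(input, normalized_shape, weight, bias, eps).
x = torch.randn(2, 4)
weight, bias, eps = torch.ones(4), torch.zeros(4), 1e-5
out, mean, rstd = torch.native_layer_norm(x, [4], weight, bias, eps)
```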
Xida Ren (Cedar)
aee1fca251
Minor typo fix in the not-implemented message for the exclusive and reverse attributes of cumsum ( #2740 )
2024-01-10 14:24:37 -08:00
kumardeepakamd
29569713f3
Support for onnx.expand operator ( #2729 )
...
Maps onnx.expand to torch.aten.broadcast_to; three tests added.
---------
Co-authored-by: Kumar Deepak <kumar@xilinx.com>
2024-01-10 13:05:37 -08:00
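A one-line illustration of the mapping, with made-up shapes:

```python
import torch

# onnx.expand -> broadcast_to: stretch a (3, 1) tensor to (3, 4).
x = torch.tensor([[1], [2], [3]])
y = torch.broadcast_to(x, (3, 4))
```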
Vivek Khandelwal
208ae35583
[MLIR][ONNX] Add OnnxToTorch support for DepthToSpace op
...
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 17:50:47 +05:30
Vivek Khandelwal
4707d3bdc6
[MLIR][ONNX] Add OnnxToTorch support for Bernoulli and CastLike op
...
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:24:06 +05:30
Vivek Khandelwal
35e8f86792
[MLIR][ONNX] Add OnnxToTorch support for Dropout and Elu op
...
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-01-10 16:23:55 +05:30
Ben Vanik
4dd17f0b71
Fixing implicit double->float truncation warnings. ( #2733 )
...
Floating-point literals should use the correct type specifier.
2024-01-08 17:26:38 -05:00
Han-Chung Wang
6096fcb347
[OnnxToTorch] Delete unused variables. ( #2728 )
2024-01-04 17:30:05 -08:00
John Wu
4e5e34d215
[MLIR][ONNX] Add OnnxToTorch support for Slice Op ( #2696 )
2024-01-03 19:41:10 -08:00
Xida Ren (Cedar)
1778314620
Add basic cumsum; this doesn't support the exclusive and reverse attrs ( #2717 )
...
fixes #2711
2024-01-03 09:52:59 -08:00
Xida Ren (Cedar)
9fc212ea9a
Support ONNX opset 1-13 ReduceMean where axes is supplied as an attr ( #2703 )
...
(instead of as an input)
Addresses part of #2689. Fixes #2702.
2023-12-28 09:31:41 -08:00
Xida Ren (Cedar)
d560698e3d
Lower `onnx.split` to `torch.aten` ( #2686 )
2023-12-27 17:53:07 -08:00
aldesilv
2d796b7502
Lower onnx max op to torch aten maximum op ( #2618 )
...
Also lowers onnx min op to torch aten minimum op.
2023-12-27 11:07:35 -08:00
aldesilv
336cfb64b5
OnnxToTorch support for onnx.Mul op ( #2699 )
2023-12-27 10:50:08 -08:00
Xida Ren (Cedar)
6847fc1fc6
Fix since-opset too high ( #2701 )
...
Addresses two of the ops from
https://github.com/llvm/torch-mlir/issues/2689 and
https://github.com/llvm/torch-mlir/issues/2700
2023-12-27 10:08:09 -08:00
aldesilv
abc6b0a25a
ONNX to Torch pow support ( #2656 )
2023-12-27 09:34:48 -08:00
Vivek Khandelwal
4f252c88b4
[MLIR][ONNX] Add OnnxToTorch support for GlobalAveragePool op. ( #2692 )
...
This commit adds the OnnxToTorch support for GlobalAveragePool op.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-26 10:25:31 -08:00
saienduri
ee75e8d1ae
[MLIR][ONNX] Add OnnxToTorch support for Reshape Op ( #2698 )
...
This commit adds the OnnxToTorch support for Reshape op.
2023-12-26 10:20:13 -08:00
Vivek Khandelwal
0849fd0a06
[MLIR][ONNX] Fix onnx.conv lowering to handle bias tensor
...
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 16:36:21 +05:30
Vivek Khandelwal
9a72c6584e
[MLIR][ONNX] Add OnnxToTorch support for BatchNormalization and Concat op.
...
This commit adds the OnnxToTorch support for BatchNormalization and Concat op.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-22 11:25:33 +05:30
John Wu
46f2cb50dc
[onnx] Lower onnx.HardSigmoid to torch ( #2682 )
...
The expression for HardSigmoid in ONNX
(https://onnx.ai/onnx/operators/onnx__HardSigmoid.html), max(0, min(1,
alpha * x + beta)), is inherently different from Hardsigmoid in Torch
(https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html),
which is: 0 if x < -3, 1 if x > 3, and x/6 + 1/2 otherwise.
Given that, it was simpler to compute the full ONNX expression directly
when translating to Torch MLIR, which is what this PR does. Some of the
logic is shared with `DecomposeComplexOps`, so the shared pieces between
`DecomposeComplexOps` and `DefaultDomainGToP` were refactored into a
`Utils` file.
2023-12-21 07:29:22 -08:00
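A minimal PyTorch sketch of the ONNX expression computed directly, as described above; the alpha and beta defaults are taken from the ONNX spec, and this is not the actual conversion code:

```python
import torch

# ONNX HardSigmoid computed directly with torch ops:
# max(0, min(1, alpha * x + beta)); defaults per the ONNX spec.
def onnx_hardsigmoid(x: torch.Tensor, alpha: float = 0.2, beta: float = 0.5):
    return torch.clamp(alpha * x + beta, min=0.0, max=1.0)

# With alpha=1/6 and beta=1/2 this coincides with torch.nn.Hardsigmoid.
```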
Vivek Khandelwal
3226241521
[MLIR][ONNX] Add OnnxToTorch support for Conv and ConvTranspose op.
...
This commit adds the OnnxToTorch support for Conv and ConvTranspose op.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-21 11:12:14 +05:30
Stella Laurenzo
d75cff6cd1
NFC: Remove unused variable causing a warning.
2023-12-20 19:23:27 -08:00
Rob Suderman
11cc92d4ab
[onnx] Lowering for `onnx.tan` ( #2642 )
...
Started work on the `tan` lowering for ONNX to Torch. Uses `sin` and
`cos` to represent `tan`.
2023-12-20 10:09:39 -08:00
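A one-line sketch of the identity the lowering uses (illustrative, not the conversion code):

```python
import torch

# tan expressed via sin and cos, as the lowering above describes.
def tan_via_sin_cos(x: torch.Tensor) -> torch.Tensor:
    return torch.sin(x) / torch.cos(x)
```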
Andreas Falkenberg
ebaab4200f
[ONNX] ONNX -> TORCH for Erf ( #2673 )
...
TorchOnnxToTorch conversion for the Erf function.
2023-12-19 08:07:27 -08:00
Vivek Khandelwal
8649b84e3f
[MLIR][ONNX] Add OnnxToTorch support for AveragePool op. ( #2672 )
...
This commit adds the OnnxToTorch support for AveragePool op.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2023-12-18 18:17:11 -06:00
saienduri
698ff3a736
[MLIR][ONNX] Add OnnxToTorch support for Reduction Ops ( #2657 )
...
This commit adds the OnnxToTorch support for ReduceSum, ReduceMean, and
ReduceMin ops.
2023-12-18 12:37:31 -08:00
John Wu
deacb8ef38
[MLIR][ONNX] Add OnnxToTorch support for Gelu ( #2647 )
...
This commit adds the OnnxToTorch support for Gelu op.
---------
Co-authored-by: Rob Suderman <suderman@google.com>
2023-12-18 10:57:08 -08:00
Rob Suderman
ae1a6e4a5a
[onnx] Lower `onnx.Gemm` to `torch` ( #2663 )
...
General lowering for `onnx.Gemm` to `torch`
2023-12-16 10:47:58 -08:00
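A sketch of `onnx.Gemm` semantics per the ONNX spec (not this PR's code) expressed with torch ops; all parameter names are illustrative:

```python
import torch

# onnx.Gemm: Y = alpha * op(A) @ op(B) + beta * C,
# where op() optionally transposes per transA/transB.
def gemm(a, b, c, alpha=1.0, beta=1.0, trans_a=False, trans_b=False):
    a = a.t() if trans_a else a
    b = b.t() if trans_b else b
    return alpha * (a @ b) + beta * c
```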
Andreas Falkenberg
cee8563060
[onnx] Support lowering onnx.Greater, onnx.Less, onnx.GreaterOrEqual to Torch ( #2649 )
...
The three remaining compare operations, onnx.Greater, onnx.Less, and
onnx.GreaterOrEqual, are also added with this pull request. This
concludes a set of basic tensor compare functions.
2023-12-16 12:42:11 -05:00
Rob Suderman
61888690bb
[onnx] Add support for `onnx.sinh` ( #2643 )
...
Adds a lowering from `onnx.sinh` to `aten.sinh`. This includes adding
the `aten.sinh` operator.
2023-12-15 21:23:51 -08:00