jinchen
fb499192df
Fix onnx acosh lowering ( #3262 )
...
iree tests `test_acosh` and `test_acosh_example` passed
2024-04-30 00:49:44 -07:00
jinchen
aa471f1d96
Fix onnx cosh lowering ( #3254 )
...
iree tests `test_cosh` and `test_cosh_example` passed
2024-04-30 00:49:29 -07:00
jinchen
b64c22cfc1
Fix onnx sinh lowering ( #3253 )
...
iree tests `test_sinh` and `test_sinh_example` passed
2024-04-30 00:44:41 -07:00
Rob Suderman
db6721084a
Integrate LLVM at llvm/llvm-project@593f6fdcb4 ( #3260 )
2024-04-29 12:01:40 -07:00
Xinyu Yang
0a5ff68d9d
[stablehlo] Support PrimsCollapseOp and PrimsSplitDimOp in stablehlo ( #3230 )
2024-04-29 17:40:30 +08:00
Vivek Khandelwal
b1e2241479
[ONNX] Fix Onnx.Selu lowering and canonicalizer for IntImplicit op ( #3221 )
...
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-29 04:00:01 +00:00
Yuanqiang Liu
aed2cf3351
[Torch] emit aten.__contains__.str_list and add folder ( #3249 )
2024-04-29 10:51:17 +08:00
Xinyu Yang
5684dc0441
[Torch] emit aten.celu and decompose it ( #3247 )
...
CELU(x) = max(0, x) + min(0, α * (exp(x/α) − 1))
2024-04-28 17:23:40 +08:00
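The CELU decomposition above can be sketched as a scalar Python check (a minimal numeric sketch of the formula, not the actual torch-mlir decomposition pass; `celu` is an illustrative name):

```python
import math

def celu(x, alpha=1.0):
    # CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))
```

For positive inputs this reduces to the identity; for negative inputs only the `min` term contributes.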
Yuanqiang Liu
46c0f3cad0
[Torch] emit aten.log_sigmoid and decompose it to log(sigmoid) ( #3246 )
2024-04-28 11:47:43 +08:00
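The log(sigmoid) decomposition named in the commit above can be sketched in scalar Python (illustrative only, not the torch-mlir pass; `log_sigmoid` is an assumed name):

```python
import math

def log_sigmoid(x):
    # decompose aten.log_sigmoid as log(sigmoid(x)) = log(1 / (1 + exp(-x)))
    return math.log(1.0 / (1.0 + math.exp(-x)))
```

Since sigmoid(x) lies in (0, 1), the result is always negative.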
Stella Laurenzo
5d4b803914
[NFC reformat] Run pre-commit on all files and format misc.
...
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
2024-04-27 14:08:09 -07:00
penguin_wwy
6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa ( #3243 )
...
Like #3130, this gradually replaces uses of the deprecated API:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
Yuanqiang Liu
f173a06fa7
[Torch] emit aten.ne.str and add folder ( #3242 )
2024-04-28 00:58:50 +08:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
...
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
Xinyu Yang
ac85338491
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcoshOp, AtenAtanhOp, Atan2Op ( #3233 )
2024-04-26 15:47:44 +08:00
Yuanqiang Liu
634a796933
[Torch] fold aten.log ( #3223 )
2024-04-26 10:10:02 +08:00
penguin_wwy
122eb69a98
[stablehlo] add aten left/right shift op conversion support ( #3234 )
2024-04-26 09:20:49 +08:00
Andreas Falkenberg
cd33d8b011
[onnx] Update DefaultDomainGtoP.cpp gridsampler ( #3228 )
...
Gridsampler
In ONNX the interpolation mode is called 'linear', whereas in PyTorch it
is called 'bilinear'. This led to everything other than 'bilinear' being
rejected; the check needed to accept 'linear' as well.
2024-04-25 18:07:05 -07:00
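The mode mismatch described above comes down to a name translation between the two operator sets, sketched here as a lookup (illustrative only; `ONNX_TO_TORCH_MODE` and `to_torch_mode` are assumed names, and the actual lowering lives in DefaultDomainGtoP.cpp):

```python
ONNX_TO_TORCH_MODE = {
    "linear": "bilinear",   # ONNX's 'linear' is PyTorch's 'bilinear'
    "nearest": "nearest",
    "cubic": "bicubic",     # assumed pairing for the cubic modes
}

def to_torch_mode(onnx_mode):
    # Translate an ONNX GridSample mode string to its PyTorch equivalent.
    return ONNX_TO_TORCH_MODE[onnx_mode]
```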
Archana Ramalingam
ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op ( #3229 )
...
This commit adds the OnnxToTorch support for ReduceLogSum op
2024-04-25 19:37:57 -04:00
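ReduceLogSum is the composition of a sum reduction and a log, sketched here over a flat list (a minimal numeric sketch, not the OnnxToTorch lowering; `reduce_log_sum` is an illustrative name):

```python
import math

def reduce_log_sum(values):
    # ReduceLogSum(x) = log(sum(x)); reduction over all elements here
    return math.log(sum(values))
```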
Aart Bik
2eac8a992f
[torch-mlir][sparse] sparse tensor dialect is a legal dialect ( #3227 )
2024-04-26 02:36:42 +08:00
Yuanqiang Liu
b0ba3def93
[Torch] support AtenScalarImplicitOp canonicalize with float ( #3231 )
2024-04-26 02:36:13 +08:00
Aart Bik
4361178caa
[torch-mlir][sparse] recognize sparse tensor conversion ( #3226 )
...
Sparse tensor conversions are represented by special aten operators.
This PR ensures the conversions are recognized (instead of failing the
full torch aten lowering to linalg).
2024-04-26 02:32:07 +08:00
Xinyu Yang
7030eacb76
[stablehlo] Support aten.any and aten.all lowering ( #3217 )
2024-04-25 11:15:52 +08:00
Avinash Sharma
678c03b762
Fix nan issue for fp16 torch.randn/randn_like in ConvertAtenUniformOp ( #3184 )
...
For ops that use ConvertAtenUniformOp (e.g. torch.randn/randn_like),
the fp16 datatype returned NaN values. Lowering [this
repro](https://gist.github.com/aviator19941/1c65e658241dea6906ca423f9abaee69 )
resulted in NaNs; this PR fixes the issue.
2024-04-24 12:28:08 +05:30
Yuanqiang Liu
fab2696489
[Torch] support aten.trunc ( #3219 )
...
decompose `trunc(x)` to `sign(x) * floor(abs(x))`
2024-04-24 14:32:33 +08:00
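The trunc decomposition above can be checked against Python's `math.trunc` in a few lines (a numeric sketch of the formula, not the torch-mlir pass; `trunc` here is an illustrative name):

```python
import math

def trunc(x):
    # trunc(x) decomposed as sign(x) * floor(abs(x))
    sign = (x > 0) - (x < 0)
    return sign * math.floor(abs(x))
```

Rounding toward zero means floor for positive values and ceil for negative ones, which is exactly what flooring the absolute value and re-applying the sign achieves.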
Xinyu Yang
e18bf42d0e
[stablehlo] Support ConstantPadNdOp in stablehlo ( #3211 )
...
as title
2024-04-24 14:15:11 +08:00
Phaneesh Barwaria
f77d88390a
[onnx] handle dynamic padSize tensor in onnx.Pad ( #3214 )
...
- Fix pad size to data_rank for dynamic paddingSize Tensor.
- This fix is in accordance with [input
specification](https://onnx.ai/onnx/operators/onnx__Pad.html#inputs ) for
onnx.Pad
- Impl will need to be updated for dynamic padSize when support for
`axes` is added.
2024-04-24 11:31:37 +08:00
Xinyu Yang
42b9eccdb3
[Stablehlo] Fix AtenSumDimIntListOp when dim==None ( #3216 )
...
as title
2024-04-24 11:25:46 +08:00
Xinyu Yang
4da3d714cc
[Torch] Support AtenProdOp on linalg and stablehlo ( #3215 )
2024-04-24 11:14:04 +08:00
zjgarvey
a8ba865fca
[torch] Adds Quantization Support for `aten.relu` ( #3177 )
...
A choice was made to quantize the return type of Relu with a scale and
zero point copied from the input's quantization scheme. With this
choice, the torch-to-linalg conversion of quantized Relu essentially
computes max(input, zeroPoint) in the elementwise payload.
2024-04-23 11:01:36 -07:00
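The max(input, zeroPoint) payload described above can be sketched in plain Python (a stand-in for the elementwise linalg payload, not the actual conversion; `quantized_relu` is an illustrative name):

```python
def quantized_relu(q_input, zero_point):
    # Relu in the quantized domain: quantized values at or below the zero
    # point all represent real values <= 0, so they clamp to the zero point,
    # mirroring relu(real_value) = max(real_value, 0).
    return [max(q, zero_point) for q in q_input]
```

Because the output reuses the input's scale and zero point, no requantization is needed.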
jinchen
09d42044b4
Support select_last_index attribute of onnx argmin op ( #3212 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compiled and the result values match, but there is a runtime
dtype mismatch between i and si types.
2024-04-23 10:43:38 -07:00
jinchen
61e6312c87
Support select_last_index attribute of onnx argmax op ( #3192 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compiled, but there is a runtime dtype mismatch between i and si types.
2024-04-23 10:16:08 -07:00
jinchen
ddb29c2c02
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` ( #3179 )
...
All e2e IREE tests compiled, but they fail at runtime with a dtype
mismatch like the following:
```
expected:
1x1x2x2xsi32=[[[12 16][24 28]]]
actual:
1x1x2x2xi32=[[[12 16][24 28]]]
```
2024-04-23 09:42:02 -07:00
Yuanqiang Liu
db3842f2e8
[Stablehlo] support lowering sinh & cosh to stablehlo ( #3213 )
2024-04-23 19:54:58 +08:00
Xinyu Yang
c1967b607f
[Stablehlo] add AtenLog10Op, AtenLog2Op lowering to stablehlo ( #3208 )
2024-04-23 19:06:55 +08:00
Yuanqiang Liu
1f8123b5f0
[Stablehlo] support unary ops which promote to floating point ( #3209 )
...
* promote the input to the output element type when lowering to stablehlo, so
that it satisfies stablehlo's type constraints.
* split promote-to-fp unary ops from fp-only unary ops.
2024-04-23 17:57:12 +08:00
Yuanqiang Liu
797e4cd395
[Stablehlo] lowering asin, acos, atan ( #3207 )
...
* lowering asin, acos and atan to chlo ops.
2024-04-23 16:24:53 +08:00
Vinayak Dev
cff2f084d4
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` ( #3175 )
...
Adds OnnxToTorch lowering for the ReduceL2 op.
2024-04-23 02:03:05 -04:00
Vivek Khandelwal
3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops ( #3193 )
...
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike op.
2024-04-22 22:28:07 +05:30
Vivek Khandelwal
6abc7371c8
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsqueeze op ( #2991 )
...
This commit also cleans up the OnnxToTorch lowering for the Squeeze and
Unsqueeze op and adds the support for handling edge cases.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-22 08:52:42 +00:00
penguin_wwy
e5bdd71baf
[Torch] Emit and decompose prims.iota op ( #3132 )
2024-04-21 19:45:01 -07:00
penguin_wwy
a60e84e5ee
[stablehlo] add aten.expm1 op conversion support ( #3199 )
2024-04-21 19:20:49 -07:00
Rob Suderman
8222637159
[onnx] Extend op version number of `onnx.ScatterElements` ( #3195 )
...
The version number was set too high. Lowering it covers more cases and
allows more tests to pass.
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:32:18 -04:00
Rob Suderman
733cace1df
[onnx] Fix `onnx.split` by directly handling slicing ( #3194 )
...
The previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and performs
slicing directly rather than leveraging `aten.split.tensor`. Since `onnx`
supports a fixed number of splits, the size computation is more easily
done when lowering to `aten` than by deferring to `aten.split.tensor`.
---------
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
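Assuming the ONNX rule that Split with a `num_outputs` attribute produces equal chunks, with the last chunk smaller when the dimension is not evenly divisible, the slice-size computation fixed above can be sketched as (illustrative only, not the torch-mlir code; `split_sizes` is an assumed name):

```python
import math

def split_sizes(dim_size, num_outputs):
    # Equal chunks of ceil(dim_size / num_outputs); the last chunk
    # absorbs the shortfall when dim_size is not evenly divisible.
    chunk = math.ceil(dim_size / num_outputs)
    sizes = [chunk] * (num_outputs - 1)
    sizes.append(dim_size - chunk * (num_outputs - 1))
    return sizes
```

With a fixed number of outputs, these sizes are known when lowering, so each output can be produced by a direct slice.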
penguin_wwy
b6b01602d3
[stablehlo] add aten.fmod.Tensor op conversion support ( #3198 )
2024-04-21 08:39:36 +08:00
penguin_wwy
ea0ecb67be
[stablehlo] add aten.remainder.Tensor op conversion support ( #3197 )
2024-04-21 00:03:37 +08:00
Rob Suderman
b01245c0e8
[onnx] Fix `onnx.Not` for non-bool inputs ( #3187 )
...
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
2024-04-19 11:32:24 -07:00
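The cast-then-negate behavior described above can be sketched elementwise in Python (a minimal sketch of the semantics, not the actual lowering; `onnx_not` is an illustrative name):

```python
def onnx_not(values):
    # Cast each element to bool first so Not is well-defined on
    # non-bool inputs, then apply logical negation.
    return [not bool(v) for v in values]
```

Without the cast, a bitwise or arithmetic negation of an integer input would give the wrong answer for values other than 0 and 1.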
Xinyu Yang
790a697245
[Torch] Add folder for AtenIntOp, AtenFloatOp ( #3189 )
...
See unit test below:
```
// CHECK-LABEL: func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
%none = torch.constant.none
%false = torch.constant.bool false
%float1.000000e01 = torch.constant.float 1.000000e+01
%67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
return %67 : !torch.vtensor<[],f32>
}
// CHECK-LABEL: func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
%none = torch.constant.none
%false = torch.constant.bool false
%int45 = torch.constant.int 45
%67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
return %67 : !torch.vtensor<[],si32>
}
```
2024-04-19 22:17:06 +08:00
penguin_wwy
5a98c72c7f
[StableHLO] Fix aten.clamp.Tensor in FxImporter2StableHLO ( #3190 )
...
The FX importer will pass static shapes to the Torch dialect, so it
needs to generate a StableHLO that satisfies shape inference.
2024-04-19 17:08:29 +08:00
penguin_wwy
6c4f7deebb
[stablehlo] add aten.clamp.Tensor op conversion support ( #3185 )
2024-04-19 10:55:27 +08:00
Rob Suderman
0e77de996a
[torch] Add support for `torch.view` with dynamic shapes ( #3164 )
...
We can map to `tensor.reshape` to handle multiple dynamic output
shapes. Later we can perform a more complex analysis for identifying
expand/collapse cases from the tensor.reshape.
Initially we planned to handle this identification at the `torch` level
however it will be easier to handle once converted to core
mlir-dialects.
2024-04-18 11:47:19 -07:00