zjgarvey
189b3f112f
Fix broken link in abstract_interp_lib.md ( #2800 )
2024-04-28 02:27:05 +08:00
Yuanqiang Liu
695458daea
Fix ArgAnnotation with boolean flag which instructs value semantics ( #3238 )
2024-04-28 02:24:55 +08:00
penguin_wwy
4fbe77a051
[dynamo] Verify the default value is passed by kwargs ( #2998 )
2024-04-28 02:18:33 +08:00
Yuanqiang Liu
f173a06fa7
[Torch] emit aten.ne.str and add folder ( #3242 )
2024-04-28 00:58:50 +08:00
penguin_wwy
944a6df611
Extract the Python APIs in the pt1 dir back to the root ( #3237 )
2024-04-27 18:27:37 +08:00
Rob Suderman
9a12a093a6
[onnx] Support `onnx.OneHot` lowering to `torch` ( #3196 )
...
[onnx] Support `onnx.OneHot` lowering to `torch`
Leverage the `aten.onehot` implementation along with `aten.transpose`
and `aten.where.scalar`.
2024-04-26 12:08:15 -07:00
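A minimal PyTorch sketch of the recipe described above (the helper name and on/off handling are illustrative assumptions, not the converter's actual code):
```
import torch

def onehot_like_onnx(indices, depth, on_value=1.0, off_value=0.0, axis=-1):
    # One-hot along the last axis, as aten.one_hot produces.
    oh = torch.nn.functional.one_hot(indices, num_classes=depth)
    # Swap the one-hot axis into place, mirroring the aten.transpose step.
    if axis != -1:
        oh = oh.transpose(axis, -1)
    # Select on/off values, mirroring the aten.where step.
    return torch.where(oh.bool(), torch.tensor(on_value), torch.tensor(off_value))

print(onehot_like_onnx(torch.tensor([0, 2]), depth=3))
```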
Xinyu Yang
ac85338491
[Stablehlo] Support AtenPowScalarOp, AtenTanOp, AtenAsinhOp, AtenAcoshOp, AtenAtanhOp, Atan2Op ( #3233 )
2024-04-26 15:47:44 +08:00
Yuanqiang Liu
634a796933
[Torch] fold aten.log ( #3223 )
2024-04-26 10:10:02 +08:00
penguin_wwy
122eb69a98
[stablehlo] add aten left/right shift op conversion support ( #3234 )
2024-04-26 09:20:49 +08:00
Andreas Falkenberg
cd33d8b011
[onnx] Update DefaultDomainGtoP.cpp gridsampler ( #3228 )
...
Gridsampler
In ONNX the interpolation mode is called 'linear', whereas in PyTorch it is
called 'bilinear'. This led to everything other than 'bilinear' being
rejected, so the check needed to be changed to accept 'linear'.
2024-04-25 18:07:05 -07:00
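In PyTorch terms the fix is a mode-string translation before calling the equivalent op; a hedged sketch (the mode table is an assumption based on the commit message and the onnx.GridSample spec):
```
import torch
import torch.nn.functional as F

# ONNX mode names vs. the PyTorch grid_sample equivalents.
ONNX_TO_TORCH_MODE = {"linear": "bilinear", "nearest": "nearest", "cubic": "bicubic"}

inp = torch.randn(1, 1, 4, 4)
grid = torch.rand(1, 2, 2, 2) * 2 - 1  # sample locations in [-1, 1]
out = F.grid_sample(inp, grid, mode=ONNX_TO_TORCH_MODE["linear"], align_corners=False)
print(out.shape)  # torch.Size([1, 1, 2, 2])
```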
Archana Ramalingam
ac11ec796d
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSum Op ( #3229 )
...
This commit adds OnnxToTorch support for the ReduceLogSum op.
2024-04-25 19:37:57 -04:00
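ReduceLogSum is the log of a sum-reduction, so the lowering can be sketched in PyTorch as follows (a hedged sketch, not the converter's exact decomposition):
```
import torch

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
# onnx.ReduceLogSum(x, axes=[1], keepdims=1) ~= log(reduce_sum(x, dim=1))
out = torch.log(torch.sum(x, dim=1, keepdim=True))
print(out)
```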
Aart Bik
2eac8a992f
[torch-mlir][sparse] sparse tensor dialect is a legal dialect ( #3227 )
2024-04-26 02:36:42 +08:00
Yuanqiang Liu
b0ba3def93
[Torch] support AtenScalarImplicitOp canonicalize with float ( #3231 )
2024-04-26 02:36:13 +08:00
Aart Bik
4361178caa
[torch-mlir][sparse] recognize sparse tensor conversion ( #3226 )
...
Sparse tensor conversions are represented by special aten operators.
This PR ensures the conversions are recognized (instead of failing the
full torch aten lowering to linalg).
2024-04-26 02:32:07 +08:00
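At the PyTorch level such conversions are aten ops like to_sparse/to_dense; a small illustration (standard aten entry points, not a claim about this PR's exact coverage):
```
import torch

dense = torch.tensor([[0.0, 1.0], [2.0, 0.0]])
sp = dense.to_sparse()   # aten conversion op: dense -> sparse COO
back = sp.to_dense()     # aten conversion op: sparse -> dense
assert torch.equal(dense, back)
```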
Vivek Khandelwal
9e2fe47c5d
build: manually update PyTorch version ( #3210 )
...
Set PyTorch and TorchVision version to nightly release 2024-04-22.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-25 08:53:10 -07:00
Xinyu Yang
7030eacb76
[stablehlo] Support aten.any and aten.all lowering ( #3217 )
2024-04-25 11:15:52 +08:00
Xida Ren (Cedar)
7be22bb260
Update add_ops.md to link torch mlir get started instructions prominently ( #3222 )
2024-04-24 17:03:41 +00:00
Avinash Sharma
678c03b762
Fix nan issue for fp16 torch.randn/randn_like in ConvertAtenUniformOp ( #3184 )
...
For ops that use ConvertAtenUniformOp (e.g. torch.randn/randn_like), the
fp16 datatype returns NaN values. Trying to lower [this
repro](https://gist.github.com/aviator19941/1c65e658241dea6906ca423f9abaee69)
results in NaNs; this PR fixes the issue.
2024-04-24 12:28:08 +05:30
Yuanqiang Liu
fab2696489
[Torch] support aten.trunc ( #3219 )
...
decompose `trunc(x)` to `sign(x) * floor(abs(x))`
2024-04-24 14:32:33 +08:00
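The identity is easy to sanity-check numerically:
```
import torch

x = torch.tensor([-2.7, -0.5, 0.0, 0.5, 2.7])
decomposed = torch.sign(x) * torch.floor(torch.abs(x))
assert torch.equal(decomposed, torch.trunc(x))
```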
Xinyu Yang
e18bf42d0e
[stablehlo] Support ConstantPadNdOp in stablehlo ( #3211 )
...
As the title says.
2024-04-24 14:15:11 +08:00
Yuanqiang Liu
dc470e65c8
add torch.qint32 to dtype-spec in TorchTypes.td ( #3206 )
2024-04-24 11:49:26 +08:00
Yuanqiang Liu
8a1dbbd597
[torchscript] export extra library file name to user ( #3203 )
...
* so that it can be specified by the user.
2024-04-24 11:34:02 +08:00
Phaneesh Barwaria
f77d88390a
[onnx] handle dynamic padSize tensor in onnx.Pad ( #3214 )
...
- Fix the pad size to data_rank for a dynamic paddingSize tensor.
- This fix is in accordance with the [input
specification](https://onnx.ai/onnx/operators/onnx__Pad.html#inputs) for
onnx.Pad.
- The implementation will need to be updated for dynamic padSize when support
for `axes` is added.
2024-04-24 11:31:37 +08:00
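For reference, ONNX orders pads as all begins then all ends over data_rank dims, while torch's F.pad takes (begin, end) pairs starting from the last dim; a hedged sketch of the reordering (the helper name is illustrative):
```
import torch
import torch.nn.functional as F

def onnx_pads_to_torch(pads):
    # ONNX: [b_0, ..., b_{r-1}, e_0, ..., e_{r-1}] with r == data_rank.
    rank = len(pads) // 2
    torch_pad = []
    for dim in reversed(range(rank)):  # torch pads the last dim first
        torch_pad += [pads[dim], pads[dim + rank]]
    return torch_pad

x = torch.zeros(2, 3)
y = F.pad(x, onnx_pads_to_torch([1, 0, 1, 2]))  # dim0 padded (1, 1), dim1 (0, 2)
print(y.shape)  # torch.Size([4, 5])
```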
Xinyu Yang
42b9eccdb3
[Stablehlo] Fix AtenSumDimIntListOp when dim==None ( #3216 )
...
As the title says.
2024-04-24 11:25:46 +08:00
Xinyu Yang
4da3d714cc
[Torch] Support AtenProdOp on linalg and stablehlo ( #3215 )
2024-04-24 11:14:04 +08:00
zjgarvey
a8ba865fca
[torch] Adds Quantization Support for `aten.relu` ( #3177 )
...
A choice was made to quantize the return type of Relu with a scale and
zero point copied from the input's quantization scheme. With this
choice, the torch-to-linalg conversion of quantized Relu essentially
computes max(input, zeroPoint) in the elementwise payload.
2024-04-23 11:01:36 -07:00
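In integer arithmetic that payload is a clamp against the zero point; a minimal numeric sketch of the scheme described above (not the actual linalg payload):
```
import torch

scale, zero_point = 0.1, 5
q_in = torch.tensor([0, 3, 5, 9, 12], dtype=torch.int32)  # quantized input
# Relu in the quantized domain: max(input, zeroPoint); scale/zp are unchanged.
q_out = torch.clamp(q_in, min=zero_point)
print((q_out - zero_point).float() * scale)  # dequantizes to relu(x)
```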
jinchen
09d42044b4
Support select_last_index attribute of onnx argmin op ( #3212 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/648
all compile, and the result values match, but there is a runtime issue with
an i/si dtype mismatch.
2024-04-23 10:43:38 -07:00
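torch.argmin returns the first occurrence, so select_last_index semantics can be expressed by flipping, reducing, and remapping the index (a hedged sketch of the semantics, not necessarily this PR's lowering):
```
import torch

x = torch.tensor([3, 1, 2, 1])
dim = 0
first = torch.argmin(x, dim)                                       # tensor(1)
last = x.size(dim) - 1 - torch.argmin(torch.flip(x, [dim]), dim)   # tensor(3)
print(first.item(), last.item())
```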
jinchen
61e6312c87
Support select_last_index attribute of onnx argmax op ( #3192 )
...
The tests listed in https://github.com/nod-ai/SHARK-Turbine/issues/635
all compile, but there is a runtime issue with an i/si dtype mismatch.
2024-04-23 10:16:08 -07:00
jinchen
ddb29c2c02
[onnx] Add OnnxToTorch support for `onnx.ConvInteger` ( #3179 )
...
All e2e IREE tests compile, but they hit a runtime dtype mismatch like the
following:
```
expected:
1x1x2x2xsi32=[[[12 16][24 28]]]
actual:
1x1x2x2xi32=[[[12 16][24 28]]]
```
2024-04-23 09:42:02 -07:00
Yuanqiang Liu
db3842f2e8
[Stablehlo] support lowering sinh & cosh to stablehlo ( #3213 )
2024-04-23 19:54:58 +08:00
Xinyu Yang
c1967b607f
[Stablehlo] add AtenLog10Op, AtenLog2Op lowering to stablehlo ( #3208 )
2024-04-23 19:06:55 +08:00
Yuanqiang Liu
1f8123b5f0
[Stablehlo] support unary ops which promote to floating point ( #3209 )
...
* Promote the input to the output element type when lowering to stablehlo,
so that it satisfies stablehlo's type constraints.
* Split promote-to-fp unary ops from fp-only unary ops.
2024-04-23 17:57:12 +08:00
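The promote-to-fp group covers ops such as sin/exp whose results are floating point even for integer inputs; eager PyTorch shows the same promotion:
```
import torch

x = torch.tensor([0, 1, 2])  # integer input
y = torch.sin(x)             # promoted: the result dtype is floating point
print(y.dtype)  # torch.float32
```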
Yuanqiang Liu
797e4cd395
[Stablehlo] lowering asin, acos, atan ( #3207 )
...
* Lower asin, acos, and atan to chlo ops.
2024-04-23 16:24:53 +08:00
Vinayak Dev
cff2f084d4
[torch] Add OnnxToTorch lowering for `onnx.ReduceL2` ( #3175 )
...
Adds OnnxToTorch lowering for the ReduceL2 op.
2024-04-23 02:03:05 -04:00
Vivek Khandelwal
3c252cdd44
[onnx] Add `onnx-to-torch` lowering for random ops ( #3193 )
...
This commit adds the OnnxToTorch lowering for Onnx's RandomNormal, RandomNormalLike, RandomUniform, and RandomUniformLike ops.
2024-04-22 22:28:07 +05:30
Vivek Khandelwal
6abc7371c8
[MLIR][TORCH] Fix OnnxToLinalg lowering issue for Squeeze and Unsqueeze op ( #2991 )
...
This commit also cleans up the OnnxToTorch lowering for the Squeeze and
Unsqueeze ops and adds support for handling edge cases.
Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-22 08:52:42 +00:00
penguin_wwy
e5bdd71baf
[Torch] Emit and decompose prims.iota op ( #3132 )
2024-04-21 19:45:01 -07:00
penguin_wwy
a60e84e5ee
[stablehlo] add aten.expm1 op conversion support ( #3199 )
2024-04-21 19:20:49 -07:00
Rob Suderman
8222637159
[onnx] Extend op version number of `onnx.ScatterElements` ( #3195 )
...
The version number was set too high. Lowering it to support more cases
allows more tests to pass.
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:32:18 -04:00
Rob Suderman
733cace1df
[onnx] Fix `onnx.split` by directly handling slicing ( #3194 )
...
The previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and directly
performs slicing rather than leveraging `aten.split.tensor`. Because `onnx`
supports a fixed number of splits, the slice size is easier to compute when
lowering directly to `aten` than when deferring to `aten.split.tensor`.
---------
Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
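With a fixed number of equal splits the slice size and offsets are static; a hedged sketch of the direct-slicing approach described above (the helper name is illustrative):
```
import torch

def split_like_onnx(x, num_outputs, dim=0):
    # onnx.Split with a known output count: slice_size = dim_size / num_outputs,
    # so each output is a direct slice instead of a call to aten.split.tensor.
    slice_size = x.size(dim) // num_outputs
    return [x.narrow(dim, i * slice_size, slice_size) for i in range(num_outputs)]

x = torch.arange(12).reshape(6, 2)
for part in split_like_onnx(x, num_outputs=3):
    print(part.shape)  # torch.Size([2, 2]) each
```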
penguin_wwy
b6b01602d3
[stablehlo] add aten.fmod.Tensor op conversion support ( #3198 )
2024-04-21 08:39:36 +08:00
penguin_wwy
ea0ecb67be
[stablehlo] add aten.remainder.Tensor op conversion support ( #3197 )
2024-04-21 00:03:37 +08:00
Rob Suderman
b01245c0e8
[onnx] Fix `onnx.Not` for non-bool inputs ( #3187 )
...
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
2024-04-19 11:32:24 -07:00
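The fix amounts to casting to bool before negating; in eager PyTorch terms:
```
import torch

x = torch.tensor([0, 2, -1])             # non-bool input
y = torch.logical_not(x.to(torch.bool))  # cast to bool, then negate
print(y)  # tensor([ True, False, False])
```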
Xinyu Yang
790a697245
[Torch] Add folder for AtenIntOp, AtenFloatOp ( #3189 )
...
See unit test below:
```
// CHECK-LABEL: func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %float1.000000e01 = torch.constant.float 1.000000e+01
  %67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
  return %67 : !torch.vtensor<[],f32>
}

// CHECK-LABEL: func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %int45 = torch.constant.int 45
  %67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
  return %67 : !torch.vtensor<[],si32>
}
```
2024-04-19 22:17:06 +08:00
penguin_wwy
5a98c72c7f
[StableHLO] Fix aten.clamp.Tensor in FxImporter2StableHLO ( #3190 )
...
The FX importer passes static shapes to the Torch dialect, so it needs to
generate StableHLO that satisfies shape inference.
2024-04-19 17:08:29 +08:00
penguin_wwy
0a6073414d
[FxImporter] Add fx importer to stablehlo e2e test config ( #3183 )
2024-04-18 21:29:17 -07:00
penguin_wwy
6c4f7deebb
[stablehlo] add aten.clamp.Tensor op conversion support ( #3185 )
2024-04-19 10:55:27 +08:00
Rob Suderman
be742a937d
[onnx] Update the failure triage for onnx ( #3186 )
...
Reclassifying the sources of failure for various bugs so we can
reprioritize the most common failures.
2024-04-18 14:58:13 -07:00
Rob Suderman
0e77de996a
[torch] Add support for `torch.view` with dynamic shapes ( #3164 )
...
We can map to `tensor.reshape` to handle multiple dynamic output
shapes. Later we can perform a more complex analysis for identifying
expand/collapse cases from the tensor.reshape.
Initially we planned to handle this identification at the `torch` level;
however, it is easier to handle once converted to core MLIR dialects.
2024-04-18 11:47:19 -07:00
Rob Suderman
4c21e20caa
[torch] Support rank-0 index for torch index select ( #3182 )
...
Need to perform an expand in the case where the index is rank-0.
2024-04-18 11:32:31 -07:00
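A rank-0 index can be expanded to rank-1 before the select; a small illustration of the equivalence in eager PyTorch:
```
import torch

x = torch.tensor([[1, 2], [3, 4]])
idx = torch.tensor(1)                           # rank-0 index
out = torch.index_select(x, 0, idx.reshape(1))  # expanded to rank-1 first
print(out)  # tensor([[3, 4]])
```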