Jiawei Wu
16923fdbd2
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op ( #2340 )
...
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.
2023-07-29 21:55:49 +08:00
Vivek Khandelwal
0109bf705b
[MLIR][TORCH] Fix aten.cumsum lowering for int32 input ( #2351 )
...
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-07-28 09:45:12 -07:00
Yuanqiang Liu
c7c59b540e
[Stablehlo] support dynamic shape when convert aten.fill.Scalar ( #2349 )
2023-07-27 18:35:25 +08:00
Matthias Gehre
c56cb531d5
Ignore constants in the legality error ( #2328 )
2023-07-25 10:11:40 +02:00
JianzheXiao
31ef08b63d
[Stablehlo]Add support for AvgPool1dOp ( #2268 )
...
* Add support for AvgPool1d
* Update AbstractInterpLibrary
* support avgpool1d in linalg
* refactored code
* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu
d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op ( #2338 )
...
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Yuanqiang Liu
238c0501da
fix cmake torch-mlir-capi linking and bazel build ( #2336 )
2023-07-24 12:38:56 +08:00
Jiawei Wu
026e8db2e4
[Stablehlo] add converter for aten.scatter.src op ( #2295 )
2023-07-24 10:14:45 +08:00
Matthias Gehre
3ca35b4f3c
TorchToTosa: aten.embedding: Allow indices with any rank ( #2327 )
...
It's fine not to check the rank of the indices, because the conversion flattens the index tensor to (1, numElements) before applying tosa::gather anyway, and then reshapes the result to the output shape of the aten.embedding.
2023-07-21 08:54:19 +02:00
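The flatten-then-reshape argument in the commit above can be illustrated with a small NumPy sketch (a hypothetical analogue of the TOSA lowering, not the actual conversion code):

```python
import numpy as np

# Sketch of why the index rank doesn't matter for the embedding lowering:
# indices are flattened to (1, numElements) before the gather, and the
# result is reshaped back to the aten.embedding output shape.
weight = np.arange(12.0).reshape(4, 3)    # (vocab_size, embed_dim)
indices = np.array([[0, 2], [3, 1]])      # arbitrary rank is fine

flat = indices.reshape(1, -1)             # (1, numElements)
gathered = weight[flat[0]]                # analogue of tosa::gather
out = gathered.reshape(*indices.shape, 3) # back to the output shape

assert out.shape == (2, 2, 3)
assert (out[0, 1] == weight[2]).all()
```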
Alexandre Rames
1e468e8294
Fix canonicalization of `torch.prim.TupleUnpack`.
2023-07-20 20:08:46 +02:00
Alexandre Rames
a20422ce65
Support `DerefineOp` in `RefinePublicReturn`.
2023-07-20 20:08:46 +02:00
Alexandre Rames
4847563bed
Clean up verification of calling conventions.
...
The implementation here was a remnant of the time when the pipeline was
run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
2023-07-20 20:08:46 +02:00
Jiawei Wu
9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op ( #2297 )
2023-07-20 16:46:44 +08:00
Matthias Gehre
64d7626a52
Fixes for split tensor and slice ( #2314 )
...
* RecomposeComplexOps: Remove dead slice op
* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors
* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none
* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max
* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
Jiawei Wu
3f843c8fd9
[torch-dialect] fix aten.type_as op's folder ( #2283 )
...
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
AyaanShah2204
a308a54255
Fixes Windows DLL crash ( #2321 )
...
* explicit inliner extension
* fixed import formatting
2023-07-18 19:12:46 -07:00
Matthias Gehre
0c17997000
Don't crash when the input to aten.copy is unranked ( #2307 )
...
This can happen when the input comes from an unsupported operator.
2023-07-18 09:52:33 +02:00
Ramiro Leal-Cavazos
718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library ( #2309 )
...
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
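The type mismatch described in the commit above can be shown with a minimal Python sketch (illustrative only; the names `Number` and `is_number` are hypothetical, not torch-mlir identifiers):

```python
from typing import Union

# PyTorch's NumberType, per the commit above: Union[int, float, complex].
Number = Union[int, float, complex]

def is_number(x: object) -> bool:
    # A function typed as Union[int, float] would reject complex values
    # such as 1 + 2j, which is the mismatch the commit fixes.
    return isinstance(x, (int, float, complex))

assert is_number(3) and is_number(2.5) and is_number(1 + 2j)
```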
Chi_Liu
5706697e0b
[TOSA] Add aten._index_put_impl support ( #2031 )
...
Add e2e support by adding "tosa-to-scf"
2023-07-17 09:51:24 -07:00
Matthias Gehre
06c9bd08e0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of comparisons where the input type is bool ( #2304 )
2023-07-17 09:49:04 +02:00
Tiago Trevisan Jost
48383554da
TorchToTosa: Legalization for torch.aten.sqrt ( #2234 )
2023-07-14 08:23:10 +02:00
Yuanqiang Liu
7f6b72aec8
[Torch Dialect] add runtime.assert to check constraint when recomposing complex ops ( #2281 )
2023-07-14 10:13:19 +08:00
Matthias Gehre
c23a61f4b6
DecomposeComplexOps: Use static shape if available ( #2289 )
2023-07-12 10:07:30 +02:00
Zhekun Zhang
6a072d4f4a
[Stablehlo] AtenEmptyMemoryFormat remove device cpu check ( #2288 )
...
* remove cpu check
* update dtype
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-07-10 15:36:21 +08:00
Abhishek Varma
6c9ba4ce95
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op ( #2174 )
...
-- This commit adds support for dynamic dimensions in the BroadcastTo op.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-07 10:01:51 -07:00
Sean Silva
8c87057f50
update PyTorch version to 2.1.0.dev20230704 ( #2282 )
...
- torch version: 2.1.0.dev20230704
- torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
- torchvision version: 0.16.0.dev20230704
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-04 08:23:00 -07:00
Jiawei Wu
c7fa42b7d3
[Torch Dialect] Add canonicalizer for aten.to.other op ( #2273 )
...
Canonicalize aten.to.other to prim.device + prim.dtype + aten.to.device
Co-authored-by: wujiawei.aml <wujiawei.aml@bytedance.com>
2023-06-30 09:43:08 +08:00
Yuanqiang Liu
449cfb8375
[Torch Dialect] add more scalar op folders ( #2265 )
2023-06-29 10:37:13 +08:00
Chi_Liu
ddd0c06970
[TORCH] Fix recompose off by -1 error ( #2271 )
2023-06-27 13:34:14 -07:00
Yuanqiang Liu
859885c1d3
[Torch Dialect] Support aten.native_dropout ( #2259 )
...
* [Torch Dialect] Support aten.native_dropout
* update
2023-06-27 14:19:33 +08:00
Yuanqiang Liu
1ea2b57ab7
[Torch Dialect] add folder for aten.add ( #2264 )
...
* [Torch Dialect] add folder for aten.add
* update
* update
* update
2023-06-27 10:55:28 +08:00
Yuanqiang Liu
0548e2ef3b
[Stablehlo] fix promoteType() when input doesn't have DefiningOp ( #2262 )
2023-06-26 00:04:17 +08:00
Sean Silva
fbb5ed52cf
update PyTorch version to 2.1.0.dev20230623 ( #2260 )
...
- torch version: 2.1.0.dev20230623
- torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d
- torchvision version: 0.16.0.dev20230623
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-23 09:03:50 -07:00
Yuanqiang Liu
64afc08dab
[Torch Dialect] add missing one_hot dtype function ( #2143 )
...
* [Torch Dialect] add missing one_hot dtype function
* update
* update
* update
2023-06-23 16:11:33 +08:00
Yuanqiang Liu
39201a4be5
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'… ( #2256 )
...
* [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number
* update
2023-06-23 16:02:45 +08:00
Yuanqiang Liu
96b14e952e
[Torch Dialect] Support aten.device.with_index ( #2254 )
2023-06-23 01:07:14 +08:00
Yuanqiang Liu
4fd4477e15
[Torch Dialect] require hasSizes when decompose aten.amax ( #2248 )
2023-06-22 11:26:51 +08:00
Abhishek Varma
a0d2789840
[MLIR][TORCH] Add e2e support for aten.alias
...
-- This commit adds e2e support for aten.alias op.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Maksim Levental
0244f540a7
Add typeids to CAPI. ( #2253 )
2023-06-20 22:06:43 -05:00
Vivek Khandelwal
f6a6cfea4e
[MLIR][TORCH] Add support for negative index values for index.Tensor op ( #2233 )
...
This commit adds support for the index.Tensor op when the index values
are negative. It wraps the index values around by checking their
values at run time.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-16 14:21:04 -05:00
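The run-time wrapping described above follows the usual Python-style negative-indexing rule; a minimal sketch (the helper name `wrap_index` is hypothetical, not from the commit):

```python
def wrap_index(idx: int, dim_size: int) -> int:
    # Negative indices count from the end of the dimension,
    # so idx is shifted by dim_size when it is negative.
    return idx + dim_size if idx < 0 else idx

assert wrap_index(-1, 5) == 4  # last element
assert wrap_index(2, 5) == 2   # non-negative indices pass through
```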
Matthias Gehre
6f420019cb
TorchToTosa: Cast float constants to correct type to support bfloat16 ( #2239 )
2023-06-16 09:51:24 +02:00
Yuanqiang Liu
bba0f5891b
[Stablehlo] add conversion for AtenFlipOp ( #2163 )
2023-06-15 10:27:34 +08:00
Yuanqiang Liu
7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda ( #2231 )
2023-06-14 09:56:39 +08:00
Maksim Levental
0caaf8d32a
Bump LLVM ( #2176 )
...
* Bump LLVM
---------
Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu
ddea56a832
[Torch Dialect] fix torch.uint8's dtype infer ( #2227 )
2023-06-13 10:38:20 +08:00
Christopher McGirr
b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion ( #2200 )
...
Check the return type of the division to decide whether to use the
floating-point or the integer implementation. The issue arose because
the inputs are all integer but the result was cast to floating point;
the conversion then chose the integer implementation of division, which
is not legal in TOSA once all the inputs are cast to floating point.
fix(TorchToLinalg): AtenDivScalarOp
Upcast the self operand as well if applicable; the self operand must
also be cast to float, since it can be an integer.
2023-06-12 11:18:38 +02:00
Tiago Trevisan Jost
cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. ( #2204 )
2023-06-12 11:17:25 +02:00
Matthias Gehre
4e2ba2e0af
Support aten.sign ( #2205 )
2023-06-10 20:45:35 +02:00
Matthias Gehre
27a3d09917
Torch: Fold RuntimeAssertOp when condition is true ( #2198 )
2023-06-09 19:06:25 +08:00
Yuanqiang Liu
5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point ( #2194 )
...
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point
* implement as fold
* add lit test
2023-06-07 17:05:31 +08:00