Sean Silva
8c87057f50
update PyTorch version to 2.1.0.dev20230704 ( #2282 )
...
- torch version: 2.1.0.dev20230704
- torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
- torchvision version: 0.16.0.dev20230704
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-04 08:23:00 -07:00
Jiawei Wu
c7fa42b7d3
[Torch Dialect] Add canonicalizer for aten.to.other op ( #2273 )
...
Canonicalize aten.to.other to prim.device + prim.dtype + aten.to.device
Co-authored-by: wujiawei.aml <wujiawei.aml@bytedance.com>
2023-06-30 09:43:08 +08:00
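The rewrite above replaces `aten.to.other` (copy a tensor to another tensor's device and dtype) with explicit queries of the target's metadata. A minimal Python sketch of the same semantics, using hypothetical stand-in classes rather than the real dialect ops:

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    """Hypothetical stand-in for a tensor's metadata."""
    device: str
    dtype: str

def prim_device(t: Tensor) -> str:   # models prim.device
    return t.device

def prim_dtype(t: Tensor) -> str:    # models prim.dtype
    return t.dtype

def to_device(t: Tensor, device: str, dtype: str) -> Tensor:  # models aten.to.device
    return Tensor(device=device, dtype=dtype)

def to_other(self_t: Tensor, other: Tensor) -> Tensor:
    # aten.to.other canonicalized to prim.device + prim.dtype + aten.to.device
    return to_device(self_t, prim_device(other), prim_dtype(other))

z = to_other(Tensor("cpu", "f32"), Tensor("cuda:0", "f16"))
print(z.device, z.dtype)  # cuda:0 f16
```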
Yuanqiang Liu
449cfb8375
[Torch Dialect] add more scalar op folders ( #2265 )
2023-06-29 10:37:13 +08:00
Chi_Liu
ddd0c06970
[TORCH] Fix recompose off by -1 error ( #2271 )
2023-06-27 13:34:14 -07:00
Yuanqiang Liu
859885c1d3
[Torch Dialect] Support aten.native_dropout ( #2259 )
...
* [Torch Dialect] Support aten.native_dropout
* update
2023-06-27 14:19:33 +08:00
Yuanqiang Liu
1ea2b57ab7
[Torch Dialect] add folder for aten.add ( #2264 )
...
* [Torch Dialect] add folder for aten.add
* update
* update
* update
2023-06-27 10:55:28 +08:00
Sean Silva
fbb5ed52cf
update PyTorch version to 2.1.0.dev20230623 ( #2260 )
...
- torch version: 2.1.0.dev20230623
- torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d
- torchvision version: 0.16.0.dev20230623
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-23 09:03:50 -07:00
Yuanqiang Liu
64afc08dab
[Torch Dialect] add missing one_hot dtype function ( #2143 )
...
* [Torch Dialect] add missing one_hot dtype function
* update
* update
* update
2023-06-23 16:11:33 +08:00
Yuanqiang Liu
39201a4be5
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'… ( #2256 )
...
* [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number
* update
2023-06-23 16:02:45 +08:00
Yuanqiang Liu
96b14e952e
[Torch Dialect] Support aten.device.with_index ( #2254 )
2023-06-23 01:07:14 +08:00
Yuanqiang Liu
4fd4477e15
[Torch Dialect] require hasSizes when decompose aten.amax ( #2248 )
2023-06-22 11:26:51 +08:00
Abhishek Varma
a0d2789840
[MLIR][TORCH] Add e2e support for aten.alias
...
-- This commit adds e2e support for aten.alias op.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Yuanqiang Liu
7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda ( #2231 )
2023-06-14 09:56:39 +08:00
Maksim Levental
0caaf8d32a
Bump LLVM ( #2176 )
...
* Bump LLVM
---------
Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu
ddea56a832
[Torch Dialect] fix torch.uint8's dtype infer ( #2227 )
2023-06-13 10:38:20 +08:00
Matthias Gehre
4e2ba2e0af
Support aten.sign ( #2205 )
2023-06-10 20:45:35 +02:00
Matthias Gehre
27a3d09917
Torch: Fold RuntimeAssertOp when condition is true ( #2198 )
2023-06-09 19:06:25 +08:00
Yuanqiang Liu
5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point ( #2194 )
...
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point
* implement as fold
* add lit test
2023-06-07 17:05:31 +08:00
JianzheXiao
e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp ( #2170 )
...
* add support for mhlo
* Add Test for torch.ne
* fix torch.ne shape/add static test case
* add support for static torch.ne
---------
Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
Yuanqiang Liu
faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack ( #2192 )
2023-06-07 01:38:04 +08:00
Vivek Khandelwal
da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op ( #2202 )
...
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ramiro Leal-Cavazos
a46b5c6af2
Fix types + off-by-1 error, clamp `end` in slice+copy_ recomposition
...
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op being
used to replace the `copy_` op operates on the entire tensor being
mutated. This means that the result type of the `index_put` should be
the type of the input to `index_put` and we need to make sure that
`copy_` does not have users before replacing to avoid type conflicts.
This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.
Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
2023-06-01 11:14:53 -07:00
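The recomposition described above can be sketched on plain Python lists; the names here are illustrative, not the actual pass code. The key fix is that `end` is clamped to the dimension size before the indices (`arange(start, end, step)`) are built, and the result is a whole new "tensor" of the same type as the mutated input:

```python
def recompose_slice_copy(dest, start, end, step, src):
    """Sketch of replacing slice + copy_ with an index_put-style update."""
    end = min(end, len(dest))            # the clamp added by this commit
    indices = list(range(start, end, step))
    result = list(dest)                  # index_put yields the entire tensor
    for i, idx in enumerate(indices):
        result[idx] = src[i]
    return result

# end=100 is clamped to the dimension size 5, so indices are [1, 3]
out = recompose_slice_copy([0, 0, 0, 0, 0], 1, 100, 2, [7, 8])
print(out)  # [0, 7, 0, 8, 0]
```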
Zhekun Zhang
8af3e50662
[Torch Dialect] Add support for AtenScalarTensorOp ( #2085 )
...
* add scalar_tensor op
* add dynamo pass test; needs PR2062
* try to fix
* Empty commit, trigger test
* Empty commit, trigger test
* address comments
* use dtype function
* fix decompose rule
* remove unused include
* Empty commit, trigger test
* fix test
* disable ltc
* fix dtype
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-06-01 11:38:50 +08:00
Gaurav Shukla
552887783a
[TM_TENSOR] Add `aten.scatter.[src|value]` op
...
This commit adds support of `aten.scatter.src` and `aten.scatter.value`
ops.
Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-05-29 12:35:53 +05:30
Zhekun Zhang
69e993b03f
[Torch Op] Add AtenChunkOp support ( #2152 )
...
* add chunkOp support
* update LTC xfail list
* address comments
* address comments
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-26 10:05:19 +08:00
Ramiro Leal-Cavazos
dff3405d5a
Add alias analysis for cast-like ops to maximize-value-semantics ( #2160 )
...
When `use_tracing=True` is used to import a model into Torch-MLIR,
several casts get inserted in the IR to bridge the untyped inputs and
outputs with the typed body of the computation. These casts create
extra aliases of tensors that cause the current analysis in
`maximize-value-semantics` to fail.
In particular, the `maximize-value-semantics` analysis assumes that the
only valid alias right after an overwrite is the overwritten
alias. So, if there is a use of a casted version of the overwritten
alias after the overwrite, the analysis fails.
This commit improves the analysis by identifying all cast-like aliases
of the overwritten alias and allowing such aliases to be used after an
overwrite.
Because this issue only arises when using tracing, it cannot currently
be tested e2e, so only a lit test is added.
2023-05-25 17:05:41 +00:00
Zhekun Zhang
a426363b7d
[Torch Dialect] Add split.tensor support + recompose rules ( #2102 )
...
* add split.tensor support + recompose rules
* add e2e test
* address comments
* address comments
* erase op in recomposeOp
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-23 12:43:33 -07:00
Zhekun Zhang
5b63138d55
[Torch Dialect] Enforce signless attribute for ConstantIntOp ( #2078 )
...
* fix torch_c.to_i64
* restore dialect.cpp
* Empty commit, trigger test
* Empty commit, trigger test
* fix uint case
* address comments
* update error msg
* clean up
* use i64 for ConstantIntOp
* use I64Attr
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-22 19:21:34 -05:00
Ramiro Leal-Cavazos
588bdc1344
Fix sign-compare warning ( #2136 )
2023-05-22 09:15:33 -07:00
Zhekun Zhang
aa97c8383e
[Torch Op] Add unbind.int support with ListUnpack ( #2058 )
...
* add unbind int
* reformat
* use unpack canonicalize
* address comments
* Empty commit, trigger test
* add ltc blacklist
* clean up
* address comments
* check permute list
* erase in recompose
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-18 19:07:58 -07:00
Vivek Khandelwal
5698893ae4
build: manually update PyTorch version
...
Set PyTorch and TorchVision version to nightly release 2023-05-16.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-18 21:30:11 +05:30
Yuanqiang Liu
e98f2ba04a
[Torch Dialect] require dtype exists when decompose to aten.where.self ( #2094 )
...
* [Torch Dialect] require dtype exists when decompose to aten.where.self
* update
2023-05-17 09:04:26 -07:00
gpetters94
0302cf1d92
Add TMTensor::Attention and lower ScaledDotProductAttentionOp to it ( #2027 )
2023-05-16 15:17:45 -04:00
Ramiro Leal-Cavazos
de02b56e17
Replace RefineTypes with dtype functions ( #2105 )
...
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.
All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`
These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.
Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
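In the abstract interpretation library, each op's result dtype is computed by a small Python function over the operands' (rank, dtype) pairs. The sketch below only illustrates that style; the function name, string dtypes, and the promotion lattice are simplified assumptions, not the real library code:

```python
# Simplified promotion lattice (assumption for this sketch).
PROMOTION_ORDER = ["bool", "int32", "int64", "float32", "float64"]

def promote(a: str, b: str) -> str:
    """Return the higher of two dtypes in the lattice above."""
    return max(a, b, key=PROMOTION_ORDER.index)

def aten_add_Tensor_dtype(self_rank_dtype, other_rank_dtype):
    """Illustrative dtype function: ranks are ignored here, and the
    result dtype is the promotion of the operand dtypes."""
    _, self_dtype = self_rank_dtype
    _, other_dtype = other_rank_dtype
    return promote(self_dtype, other_dtype)

print(aten_add_Tensor_dtype((2, "int64"), (2, "float32")))  # float32
```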
Prashant Kumar
8eb0c7e656
torch.complex to builtin complex types matching.
...
The right approach would be to create our own !torch.complex type
and use that during import, rather than having a pass that converts
to the MLIR complex types.
2023-05-11 21:29:07 +05:30
Ramiro Leal-Cavazos
ab694dfbc1
Add complex dtype support on refbackend
2023-05-11 21:29:07 +05:30
Prashant Kumar
3cd91affbc
Add complex types support with basic complex ops.
...
Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via linalg_backend.
2023-05-11 21:29:07 +05:30
Sean Silva
d7614c261d
Integrate LLVM
...
LLVM: 26ee8947702d79ce2cab8e577f713685a5ca4a55
MHLO: 4805d8498dfb81566076f56f52273b426c1cc5bf
Per: https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1538492185
2023-05-09 10:14:27 -07:00
Yuanqiang Liu
9f1ed4b2ba
[Torch Dialect] typo fix for RefineTypes ( #2087 )
2023-05-05 15:22:14 -07:00
Vivek Khandelwal
378860f51b
[MLIR][TORCH] Add E2E support for aten.topk op
...
This commit adds the decomposition for the aten.topk op.
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
Zhekun Zhang
0cf9ee340b
[Torch Dialect] Add to.dtype_layout canonicalize patterns ( #2062 )
...
* add to.dtype_layout canonicalize patterns
* update comment
---------
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-02 20:06:02 -07:00
Yuanqiang Liu
c596d11b98
[Torch Dialect] add canonicalize pattern for prim.device ( #2066 )
2023-05-02 20:05:46 -07:00
Ze Zhang
7b73e0cfaf
Add e2e linalg support for aten.atan ( #2070 )
...
* new atan op
* update shape
---------
Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
Vivek Khandelwal
491ae5eda4
[MLIR][TORCH] Add E2E support for aten.var_mean.dim op
...
This commit adds the decomposition for the aten.var_mean.dim op.
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
Ramiro Leal-Cavazos
f85f5799e4
Fix creation of empty tensor in decomposition for randn ops ( #2043 )
...
The current decomposition for `aten.randn.generator` does not specify
the `dtype` argument of the empty tensors created to store the random
values. This leads to invalid IR when the output type of the `randn`
op is not the default PyTorch dtype.
2023-04-19 08:25:39 -07:00
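The failure mode above can be sketched with simplified stand-ins (the dict-based "tensor" and default-dtype constant are assumptions of this example, not the real decomposition code): creating the scratch tensor without a dtype falls back to the default, producing a type that disagrees with a non-default `randn` result.

```python
DEFAULT_DTYPE = "float32"  # models PyTorch's default dtype

def empty(shape, dtype=None):
    """Stand-in for the empty-tensor op used by the decomposition."""
    return {"shape": shape, "dtype": dtype or DEFAULT_DTYPE}

def decompose_randn(shape, result_dtype):
    # Before the fix: empty(shape) -- dtype silently defaulted to
    # float32, mismatching a float64 result type and yielding invalid IR.
    # After the fix: forward the op's result dtype explicitly.
    return empty(shape, dtype=result_dtype)

t = decompose_randn([2, 3], "float64")
print(t["dtype"])  # float64
```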
Yuanqiang Liu
4d98f76d4f
[Torch Dialect] fold aten.detach ( #2021 )
2023-04-18 08:59:14 -07:00
Vivek Khandelwal
ed56e614b7
[MLIR][TORCH] Add E2E support for cross entropy lowering
...
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-18 08:00:20 +05:30
Roll PyTorch Action
811f330283
update PyTorch version to 2.1.0.dev20230414
2023-04-14 17:10:36 +00:00
Abhishek Varma
318fe13468
[MLIR][TORCH] Patch up Ops and their lowerings to deal with +ve `dim`
...
-- In Python we have the concept of negative dimension indexing.
-- We would want to normalize such dimensions to be +ve and within the
expected range instead.
-- This commit takes care of a few remaining set of Ops and their
lowerings by applying `toPositiveDim` and `isValidDim` to the
extracted integer `dim` value.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
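The normalization described above can be sketched in a few lines; these mirror the behavior of the `toPositiveDim` and `isValidDim` helpers, though the Python names and signatures here are illustrative:

```python
def to_positive_dim(dim: int, rank: int) -> int:
    """Normalize a possibly negative dimension index to a
    non-negative one, as Python-style negative indexing allows."""
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    """True iff the (already normalized) dim lies in [0, rank)."""
    return 0 <= dim < rank

print(to_positive_dim(-1, 4))                    # 3
print(is_valid_dim(to_positive_dim(-5, 4), 4))   # False: out of range
```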
Abhishek Varma
a13d301356
[MLIR][TORCH] Add e2e support for aten.sort op
...
-- This commit adds e2e support for the aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-13 12:59:43 +05:30