Commit Graph

1227 Commits (e824fbc65cd9481a733af7dcf11ffe7e9eaa9bf0)

Author SHA1 Message Date
Jiawei Wu 20a2b68ed6
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355) 2023-08-04 09:05:34 +08:00
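A rough illustration of the equivalence this decomposition relies on (an eager-PyTorch sketch, not the pass itself): when `dims` has fewer entries than the tensor's rank, `torch.tile` pads it with leading 1s, after which it behaves like `Tensor.repeat`.

```python
import torch

# Sketch of the tile -> repeat equivalence; not the actual pass code.
x = torch.arange(6).reshape(2, 3)
dims = (2,)
padded = (1,) * (x.dim() - len(dims)) + dims  # pad with leading 1s -> (1, 2)
assert torch.equal(torch.tile(x, dims), x.repeat(*padded))
```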
Jiawei Wu 6db92d1b14
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)
* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op
2023-08-03 16:21:14 +08:00
Jiawei Wu 16923fdbd2
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.
2023-07-29 21:55:49 +08:00
Vivek Khandelwal 0109bf705b
[MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-07-28 09:45:12 -07:00
Yuanqiang Liu c7c59b540e
[Stablehlo] support dynamic shape when convert aten.fill.Scalar (#2349) 2023-07-27 18:35:25 +08:00
Matthias Gehre c56cb531d5
Ignore constants in the legality error (#2328) 2023-07-25 10:11:40 +02:00
JianzheXiao 31ef08b63d
[Stablehlo]Add support for AvgPool1dOp (#2268)
* Add support for AvgPool1d

* Update AbstractInterpLibrary

* support avgpool1d in linalg

* refactored code

* fix nit problem
2023-07-25 14:09:53 +08:00
Jiawei Wu d57f67e7f8
[Torch Dialect] emit aten.nonzero, aten.nonzero_numpy, aten.nonzero_static op (#2338)
By the way, this PR also adds the missing shape function for aten.masked_select.
2023-07-25 09:01:19 +08:00
Yuanqiang Liu 238c0501da
fix cmake torch-mlir-capi linking and bazel build (#2336) 2023-07-24 12:38:56 +08:00
Jiawei Wu 026e8db2e4
[Stablehlo] add converter for aten.scatter.src op (#2295) 2023-07-24 10:14:45 +08:00
Matthias Gehre 3ca35b4f3c
TorchToTosa: aten.embedding: Allow indices with any rank (#2327)
It's fine not to check the rank of the indices, because the conversion flattens the index tensor to (1, numElements) before applying tosa::gather, and then reshapes the output tensor to the output shape of the aten.embedding.
2023-07-21 08:54:19 +02:00
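The shape algebra described above can be checked in eager PyTorch (a sketch; plain indexing stands in for tosa::gather):

```python
import torch

weight = torch.randn(10, 4)
indices = torch.randint(0, 10, (2, 3, 5))        # indices of arbitrary rank
flat = indices.reshape(1, -1)                    # (1, numElements)
gathered = weight[flat.squeeze(0)]               # stands in for tosa::gather
result = gathered.reshape(*indices.shape, weight.shape[1])
assert torch.equal(result, torch.nn.functional.embedding(indices, weight))
```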
Alexandre Rames 1e468e8294 Fix canonicalization of `torch.prim.TupleUnpack`. 2023-07-20 20:08:46 +02:00
Alexandre Rames a20422ce65 Support `DerefineOp` in `RefinePublicReturn`. 2023-07-20 20:08:46 +02:00
Alexandre Rames 4847563bed Clean up verification of calling conventions.
The implementation at this place was a remnant of the times when the pipeline
was run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties. (e.g. `!torch.optional`).
2023-07-20 20:08:46 +02:00
Jiawei Wu 9535be7903
[Torch-Dialect] emit aten.narrow.Tensor op and decompose it to aten.narrow op (#2297) 2023-07-20 16:46:44 +08:00
Matthias Gehre 64d7626a52
Fixes for split tensor and slice (#2314)
* RecomposeComplexOps: Remove dead slice op

* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors

* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none

* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max

* More tests for aten.split.Tensor
2023-07-20 09:53:54 +02:00
Jiawei Wu 3f843c8fd9
[torch-dialect] fix aten.type_as op's folder (#2283)
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
2023-07-20 09:51:58 +08:00
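The identity behind the fix, sketched in eager PyTorch: `type_as` is a dtype conversion, so (on the same device) it can be expressed as `prim.dtype` of the target followed by `aten.to.dtype`.

```python
import torch

x = torch.arange(4, dtype=torch.int64)
y = torch.zeros(1, dtype=torch.float32)
# type_as(y) == to(prim.dtype(y)), assuming x and y share a device.
assert torch.equal(x.type_as(y), x.to(y.dtype))
```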
AyaanShah2204 a308a54255
Fixes Windows DLL crash (#2321)
* explicit inliner extension

* fixed import formatting
2023-07-18 19:12:46 -07:00
Matthias Gehre 0c17997000
Don't crash when the input to aten.copy is unranked (#2307)
This can happen when the input comes from an unsupported operator
2023-07-18 09:52:33 +02:00
Ramiro Leal-Cavazos 718f53ff8a
Fix handling of `!torch.number` in abstract interpretation library (#2309)
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
2023-07-17 09:52:04 -07:00
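A small eager-mode illustration of why the wider union matters: a Python `complex` is a legal torch Scalar, so dtype functions for `!torch.number` operands must accept it.

```python
import torch

x = torch.ones(2, dtype=torch.cfloat)
y = torch.add(x, 1j)  # complex scalar flowing through an aten Scalar operand
assert y.dtype == torch.cfloat
```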
Chi_Liu 5706697e0b
[TOSA] Add aten._index_put_impl support (#2031)
Add e2e support by adding "tosa-to-scf"
2023-07-17 09:51:24 -07:00
Matthias Gehre 06c9bd08e0
lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix legalization of comparisons where the input type is bool (#2304) 2023-07-17 09:49:04 +02:00
Tiago Trevisan Jost 48383554da
TorchToTosa: Legalization for torch.aten.sqrt (#2234) 2023-07-14 08:23:10 +02:00
Yuanqiang Liu 7f6b72aec8
[Torch Dialect] add runtime.assert to check constraint when recomposing complex ops (#2281) 2023-07-14 10:13:19 +08:00
Matthias Gehre c23a61f4b6
DecomposeComplexOps: Use static shape if available (#2289) 2023-07-12 10:07:30 +02:00
Zhekun Zhang 6a072d4f4a
[Stablehlo] AtenEmptyMemoryFormat remove device cpu check (#2288)
* remove cpu check

* update dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-07-10 15:36:21 +08:00
Abhishek Varma 6c9ba4ce95
[Torch-to-Linalg] Add dynamic dimension support for BroadcastTo op (#2174)
-- This commit adds support for dynamic dimensions in the BroadcastTo op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-07 10:01:51 -07:00
Sean Silva 8c87057f50
update PyTorch version to 2.1.0.dev20230704 (#2282)
- torch version: 2.1.0.dev20230704
 - torch commit hash: e5472fd3c324c5ecb343884e5399e0227cc30a6c
 - torchvision version: 0.16.0.dev20230704

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-07-04 08:23:00 -07:00
Jiawei Wu c7fa42b7d3
[Torch Dialect] Add canonicalizer for aten.to.other op (#2273)
Canonicalize aten.to.other to prim.device + prim.dtype + aten.to.device
Co-authored-by: wujiawei.aml <wujiawei.aml@bytedance.com>
2023-06-30 09:43:08 +08:00
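The semantics the canonicalization relies on, checked in eager PyTorch (a sketch):

```python
import torch

x = torch.arange(3, dtype=torch.int32)
other = torch.zeros(1, dtype=torch.float64)
# aten.to.other == aten.to.device with prim.device/prim.dtype of `other`.
assert torch.equal(x.to(other), x.to(device=other.device, dtype=other.dtype))
```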
Yuanqiang Liu 449cfb8375
[Torch Dialect] add more scalar op folders (#2265) 2023-06-29 10:37:13 +08:00
Chi_Liu ddd0c06970
[TORCH] Fix recompose off by -1 error (#2271) 2023-06-27 13:34:14 -07:00
Yuanqiang Liu 859885c1d3
[Torch Dialect] Support aten.native_dropout (#2259)
* [Torch Dialect] Support aten.native_dropout

* update
2023-06-27 14:19:33 +08:00
Yuanqiang Liu 1ea2b57ab7
[Torch Dialect] add folder for aten.add (#2264)
* [Torch Dialect] add folder for aten.add

* update

* update

* update
2023-06-27 10:55:28 +08:00
Yuanqiang Liu 0548e2ef3b
[Stablehlo] fix promoteType() when input doesn't have DefiningOp (#2262) 2023-06-26 00:04:17 +08:00
Sean Silva fbb5ed52cf
update PyTorch version to 2.1.0.dev20230623 (#2260)
- torch version: 2.1.0.dev20230623
 - torch commit hash: ad724c83fb0d94cb3bb2cec94e15d88023c64e0d
 - torchvision version: 0.16.0.dev20230623

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-06-23 09:03:50 -07:00
Yuanqiang Liu 64afc08dab
[Torch Dialect] add missing one_hot dtype function (#2143)
* [Torch Dialect] add missing one_hot dtype function

* update

* update

* update
2023-06-23 16:11:33 +08:00
Yuanqiang Liu 39201a4be5
[Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp'… (#2256)
* [Torch Dialect] avoid assertion failure when PrimNumToTensorScalarOp's input is torch.number

* update
2023-06-23 16:02:45 +08:00
Yuanqiang Liu 96b14e952e
[Torch Dialect] Support aten.device.with_index (#2254) 2023-06-23 01:07:14 +08:00
Yuanqiang Liu 4fd4477e15
[Torch Dialect] require hasSizes when decompose aten.amax (#2248) 2023-06-22 11:26:51 +08:00
Abhishek Varma a0d2789840 [MLIR][TORCH] Add e2e support for aten.alias
-- This commit adds e2e support for aten.alias op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Maksim Levental 0244f540a7
Add typeids to CAPI. (#2253) 2023-06-20 22:06:43 -05:00
Vivek Khandelwal f6a6cfea4e
[MLIR][TORCH] Add support for negative index values for index.Tensor op (#2233)
This commit adds support for the index.Tensor op when the index values
are negative. The index values are wrapped around by checking
their values at run time.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-16 14:21:04 -05:00
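A sketch of the wrap-around in eager PyTorch (illustrative; the lowering emits the equivalent check at run time):

```python
import torch

x = torch.arange(5.0)
idx = torch.tensor([-1, 2, -5])
wrapped = torch.where(idx < 0, idx + x.shape[0], idx)  # runtime wrap-around
assert torch.equal(x[idx], x[wrapped])
```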
Matthias Gehre 6f420019cb
TorchToTosa: Cast float constants to correct type to support bfloat16 (#2239) 2023-06-16 09:51:24 +02:00
Yuanqiang Liu bba0f5891b
[Stablehlo] add conversion for AtenFlipOp (#2163) 2023-06-15 10:27:34 +08:00
Yuanqiang Liu 7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda (#2231) 2023-06-14 09:56:39 +08:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu ddea56a832
[Torch Dialect] fix torch.uint8's dtype infer (#2227) 2023-06-13 10:38:20 +08:00
Christopher McGirr b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion (#2200)
Check the return type of the division to figure out whether to use
the floating-point or the integer implementation of division.

The issue arose from the fact that the inputs are all integer but the
result was cast to floating point. The conversion then chose to
use the integer implementation of division, which is not legal in tosa
when all the inputs get cast to floating point.

fix(TorchToLinalg): AtenDivScalarOp

upcast the self operand as well if applicable; the self operand must also
be cast to float since it can be an integer.
2023-06-12 11:18:38 +02:00
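The PyTorch behavior being matched (an eager sketch): integer inputs with a floating result type require true floating-point division.

```python
import torch

a, b = torch.tensor([7]), torch.tensor([2])
q = torch.div(a, b)
assert q.dtype == torch.float32  # result promoted to float...
assert q.item() == 3.5           # ...so fp division, not integer division
```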
Tiago Trevisan Jost cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. (#2204) 2023-06-12 11:17:25 +02:00
Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
Matthias Gehre 27a3d09917
Torch: Fold RuntimeAssertOp when condition is true (#2198) 2023-06-09 19:06:25 +08:00
Yuanqiang Liu 5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point (#2194)
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point

* implement as fold

* add lit test
2023-06-07 17:05:31 +08:00
JianzheXiao e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp (#2170)
* add support for mhlo

* Add Test for torch.ne

* fix torch.ne shape/add static test case

* add support for static torch.ne

---------

Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
Yuanqiang Liu faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack (#2192) 2023-06-07 01:38:04 +08:00
Vivek Khandelwal da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op (#2202)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ramiro Leal-Cavazos a46b5c6af2 Fix types + off-by-1 error, clamp `end` in slice+copy_ recomposition
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op being
used to replace the `copy_` op operates on the entire tensor being
mutated. This means that the result type of the `index_put` should be
the type of the input to `index_put` and we need to make sure that
`copy_` does not have users before replacing to avoid type conflicts.

This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.

Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
2023-06-01 11:14:53 -07:00
Ramiro Leal-Cavazos 281dccc681 [LINALG] Add dynamic support for `PrimMinIntOp` 2023-06-01 11:14:53 -07:00
Zhekun Zhang 8af3e50662
[Torch Dialect] Add support for AtenScalarTensorOp (#2085)
* add scalar_tensor op

* add dynamo pass test; needs PR2062

* try to fix

* Empty commit, trigger test

* Empty commit, trigger test

* address comments

* use dtype function

* fix decompose rule

* remove unused include

* Empty commit, trigger test

* fix test

* disable ltc

* fix dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-06-01 11:38:50 +08:00
Vivek Khandelwal 959f4f48d5 [MLIR][TORCH] Add support for the total_weight for aten.nll_loss_forward op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-30 20:29:27 +05:30
Gaurav Shukla 552887783a [TM_TENSOR] Add `aten.scatter.[src|value]` op
This commit adds support of `aten.scatter.src` and `aten.scatter.value`
ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-05-29 12:35:53 +05:30
Yuanqiang Liu 5223f990df
[Stablehlo] Enable Stablehlo backend with arith dialect (#2139) 2023-05-26 22:57:57 +08:00
Zhekun Zhang 69e993b03f
[Torch Op] Add AtenChunkOp support (#2152)
* add chunkOp support

* update LTC xfail list

* address comments

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-26 10:05:19 +08:00
Ramiro Leal-Cavazos dff3405d5a
Add alias analysis for cast-like ops to maximize-value-semantics (#2160)
When `use_tracing=True` is used to import a model into Torch-MLIR,
several casts get inserted in the IR to bridge the untyped inputs and
outputs with the typed body of the computation. These casts create
extra aliases of tensors that cause the current analysis in
`maximize-value-semantics` to fail.

In particular, the `maximize-value-semantics` analysis assumes that the
only valid alias right after an overwrite is the overwritten
alias. So, if there is a use of a casted version of the overwritten
alias after the overwrite, the analysis fails.

This commit improves the analysis by identifying all cast-like aliases
of the overwritten alias and allowing such aliases to be used after an
overwrite.

Because this issue only arises when using tracing, it cannot
currently be tested e2e, so only a lit test is added.
2023-05-25 17:05:41 +00:00
Zhekun Zhang f0b7b63be0
[Stablehlo] Add aten.uniform lowering (#2101)
* add uniform stablehlo lowering

* add unit test

* new line

* rm redundant file

* Empty commit, trigger test

* fix include

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-25 10:32:55 +08:00
Zhekun Zhang eb8f56aeb7
[Stablehlo] Add `AtenIndexTensor` StableHlo support (#2107)
* Add AtenIndexTensor StableHlo support

* clean up

* Empty commit, trigger test

* try to debug hanging test

* fix segfault

* fix bad include

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-24 11:13:57 -07:00
Zhekun Zhang a426363b7d
[Torch Dialect] Add split.tensor support + recompose rules (#2102)
* add split.tensor support + recompose rules

* add e2e test

* address comments

* address comments

* erase op in recomposeOp

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-23 12:43:33 -07:00
Zhekun Zhang 5b63138d55
[Torch Dialect] Enforce signless attribute for ConstantIntOp (#2078)
* fix torch_c.to_i64

* restore dialect.cpp

* Empty commit, trigger test

* Empty commit, trigger test

* fix uint case

* address comments

* update error msg

* clean up

* use i64 for ConstantIntOp

* use I64Attr

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-22 19:21:34 -05:00
Ramiro Leal-Cavazos 588bdc1344
Fix sign-compare warning (#2136) 2023-05-22 09:15:33 -07:00
Zhekun Zhang aa97c8383e
[Torch Op] Add unbind.int support with ListUnpack (#2058)
* add unbind int

* reformat

* use unpack canonicalize

* address comments

* Empty commit, trigger test

* add ltc blacklist

* clean up

* address comments

* check permute list

* erase in recompose

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-18 19:07:58 -07:00
Zhekun Zhang 1333674905
[StableHlo] Support AtenEmptyMemoryFormatOp (#2092)
* add empty conversion

* clean up

* add tests

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-18 19:07:35 -07:00
TatWai Chong ed4ecb072f
[tosa] support lowering basic torch binary ops with mixed dtypes (#2122)
Lowering torch operations that allow different compatible data types
in their operands to tosa ends up generating invalid tosa IR with mixed
data types. In the tosa spec, certain operations (generally element-wise
operations) require all operands to have the same data type.

Add wrapper functions for those element-wise tosa ops to perform op
creation with type conversion if necessary.
2023-05-18 17:12:18 -07:00
Vivek Khandelwal 5698893ae4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-18 21:30:11 +05:30
Yuanqiang Liu 6f7d9e83df
[Stablehlo] add e2e test for aten.batch_norm (#2129) 2023-05-17 09:04:40 -07:00
Yuanqiang Liu e98f2ba04a
[Torch Dialect] require dtype exists when decompose to aten.where.self (#2094)
* [Torch Dialect] require dtype exists when decompose to aten.where.self

* update
2023-05-17 09:04:26 -07:00
gpetters94 0302cf1d92
Add TMTensor::Attention and lower ScaledDotProductAttentionOp to it (#2027) 2023-05-16 15:17:45 -04:00
Maksim Levental c76a48308e
[CAPI] add isValidSubtype to CAPI (#2127) 2023-05-13 22:15:45 -05:00
Matthias Gehre 3a8196588f
TorchToTosa: Support casts from and to bf16 (#2118) 2023-05-12 15:18:23 -07:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
Zhekun Zhang 1eb18dd8b5
Add AtenFillScalarOp Stablehlo support (#2108)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-11 16:41:46 -07:00
Prashant Kumar c47d3aab01 Fix torchdynamo fail test. 2023-05-11 21:29:07 +05:30
Prashant Kumar 8eb0c7e656 torch.complex to builtin complex types matching.
The right approach would be to create our own !torch.complex type
and use that during import than have a pass that converts to the MLIR
complex types.
2023-05-11 21:29:07 +05:30
Ramiro Leal-Cavazos ab694dfbc1 Add complex dtype support on refbackend 2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via linalg_backend.
2023-05-11 21:29:07 +05:30
yifei410 86718cb203
[TOSA] lowering support for aten cat (#2039)
Add support for lowering torch.aten.cat to tosa.concat

* add support for aten cat to tosa

---------

Co-authored-by: yifei <y.zhou@xilinx.com>
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-05-10 08:25:58 -07:00
Sean Silva d7614c261d Integrate LLVM
LLVM: 26ee8947702d79ce2cab8e577f713685a5ca4a55
MHLO: 4805d8498dfb81566076f56f52273b426c1cc5bf

Per: https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1538492185
2023-05-09 10:14:27 -07:00
Chi_Liu 51e0a2c933
[Stablehlo] Add stablehlo support for aten.abs (#2068)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2023-05-08 22:13:00 -07:00
Yuanqiang Liu ef6dae6ae2
[Linalg] fix lowering reduce max with -inf (#2097) 2023-05-08 09:17:49 -07:00
Yuanqiang Liu 0096ceae2f
[Stablehlo] fix reduce max init_value with -inf (#2064)
* [Stablehlo] fix reduce max init_value with -inf

* update
2023-05-06 12:05:51 -07:00
Yuanqiang Liu 9f1ed4b2ba
[Torch Dialect] typo fix for RefineTypes (#2087) 2023-05-05 15:22:14 -07:00
Zhekun Zhang fc62b8e9ab
[StableHlo] Fix AtenWhereSelfOp convert rule (#2093)
* fix whereself convert rule

* use int to test promotion

* add dynamo failing test

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-05 15:21:55 -07:00
Vivek Khandelwal 378860f51b [MLIR][TORCH] Add E2E support for aten.topk op
This commit adds the decomposition for the aten.topk op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
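One plausible reading of the decomposition, sketched in eager PyTorch (the actual decomposition may differ in detail): sort descending, then slice the first k values and indices.

```python
import torch

x = torch.tensor([3.0, 1.0, 4.0, 1.0, 5.0])
k = 2
vals, idxs = torch.sort(x, descending=True)
assert torch.equal(vals[:k], torch.topk(x, k).values)
assert torch.equal(idxs[:k], torch.topk(x, k).indices)
```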
Zhekun Zhang 1eceb84899
add stablehlo support for pow.tensor_tensor (#2086)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-04 09:55:03 -07:00
Zhekun Zhang 0cf9ee340b
[Torch Dialect] Add to.dtype_layout canonicalize patterns (#2062)
* add to.dtype_layout canonicalize patterns

* update comment

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-02 20:06:02 -07:00
Yuanqiang Liu c596d11b98
[Torch Dialect] add canonicalize pattern for prim.device (#2066) 2023-05-02 20:05:46 -07:00
Ze Zhang 7b73e0cfaf
Add e2e linalg support for aten.atan (#2070)
* new atan op

* update shape

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
Vivek Khandelwal 491ae5eda4 [MLIR][TORCH] Add E2E support for aten.var_mean.dim op
This commit adds the decomposition for the aten.var_mean.dim op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
Ramiro Leal-Cavazos c8e062fb4e
Fix default value of `stride` in 2d pooling ops in linalg and tosa (#2065)
When the user does not specify the `stride` value in 2d pooling ops,
`stride` is given the value of an empty list. However, the current
lowerings for pooling ops assumed that the `stride` operand would
always be a list of two ints, leading to crashes when that was not the
case. This commit fixes the crashes by setting the value of `stride`
to `kernel_size` when `stride` is the empty list, since this is the
default `stride` value specified in PyTorch docs. See:
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
2023-04-27 08:31:36 -07:00
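The PyTorch default the fix encodes, checked in eager mode: an unspecified `stride` falls back to `kernel_size`.

```python
import torch

x = torch.randn(1, 1, 8, 8)
default = torch.nn.functional.max_pool2d(x, kernel_size=2)             # stride omitted
explicit = torch.nn.functional.max_pool2d(x, kernel_size=2, stride=2)  # == kernel_size
assert torch.equal(default, explicit)
```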
Eric Kunze 6a833e1922
Update to LLVM 3157f03a349cfc852cdd994675eaa9652caa2e3a (#2060)
New requirement to explicitly cast for interfaces https://reviews.llvm.org/D148493
2023-04-25 08:52:46 -07:00
Ramiro Leal-Cavazos f85f5799e4
Fix creation of empty tensor in decomposition for randn ops (#2043)
The current decomposition for `aten.randn.generator` does not specify
the `dtype` argument of the empty tensors created to store the random
values. This leads to invalid IR when the output type of the `randn`
op is not the default PyTorch dtype.
2023-04-19 08:25:39 -07:00
Chi_Liu 8d25dd454f
[TOSA] Add torch.prim.NumToTensor.Scalar float support (#1802) 2023-04-18 13:36:57 -07:00
Yuanqiang Liu 4d98f76d4f
[Torch Dialect] fold aten.detach (#2021) 2023-04-18 08:59:14 -07:00
Vivek Khandelwal ed56e614b7 [MLIR][TORCH] Add E2E support for cross entropy lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-18 08:00:20 +05:30
Roll PyTorch Action 811f330283 update PyTorch version to 2.1.0.dev20230414 2023-04-14 17:10:36 +00:00
Chi_Liu f3d1eda09f
[TOSA] Add aten.abs support (#2032) 2023-04-14 08:43:39 -07:00
Chi_Liu ad36e61040
[TOSA] Add aten.le.tensor support (#2033) 2023-04-14 08:43:14 -07:00
Abhishek Varma 318fe13468 [MLIR][TORCH] Patch up Ops and their lowerings to deal with +ve `dim`
-- In Python we have the concept of negative dimension indexing.
-- We want to normalize such dimensions to be positive and within the
   expected range instead (see the sketch after this entry).
-- This commit takes care of a few remaining set of Ops and their
   lowerings by applying `toPositiveDim` and `isValidDim` to the
   extracted integer `dim` value.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
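A minimal Python sketch of the normalization (the helper names mirror `toPositiveDim`/`isValidDim`; the real helpers are C++):

```python
def to_positive_dim(dim: int, rank: int) -> int:
    # Negative dims index from the end, as in Python sequences.
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    return 0 <= dim < rank

assert to_positive_dim(-1, 4) == 3
assert is_valid_dim(to_positive_dim(-1, 4), 4)
```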
Zhekun Zhang 1bd5747ca3
[StableHlo] Fix transposed convolution conversion (#2026)
* fix conv bwd

* fix

* fix group case

* clean up

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-04-13 11:24:39 -07:00
Abhishek Varma a13d301356 [MLIR][TORCH] Add e2e support for aten.sort op
-- This commit adds e2e support for the aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-13 12:59:43 +05:30
Alexandre Rames 224ee27610
Fix a few missing dependencies. (#2014)
`TorchToTMTensor` depends on `TorchMLIRTorchUtils` for
`mlir::torch::torch_upstream::get_reduction_enum`.

`TorchMLIRTorchConversionPasses` depends on multiple libs for both tblgen'd
headers and definitions. Test with `ninja TorchMLIRTorchConversionPasses` from
a clean build.
2023-04-11 11:18:49 -07:00
Yuanqiang Liu 72c3326097
[Torch Dialect] support for aten.one_hot (#1852) 2023-04-11 01:02:28 -07:00
Yuanqiang Liu 3e83a86354
[Torch Dialect] fix isValidSubtype with dynamic dim (#2018) 2023-04-11 01:02:18 -07:00
Vivek Khandelwal 98747d09a8 [MLIR][TORCH] Add support for prims::view_of op
This op does nothing and just returns the input operand as the
result of the op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-11 07:58:10 +05:30
Chi_Liu 975815216f
[TOSA] Add todtype f32 to i64 (#2012) 2023-04-07 09:40:56 -07:00
Chi_Liu 4df1d8ae2f
[MLIR] Fold aten select and fill_ pattern (#2000) 2023-04-06 21:16:51 -07:00
Chi_Liu 8dcd0b2e76
[TOSA] Add support for sliceOp end < 0 (#2011) 2023-04-06 16:19:00 -07:00
Abhishek Varma 5337944ddb [MLIR][TORCH] Add e2e support for aten.randint
-- This commit adds e2e support for aten.randint by decomposing it into
   an aten.randint.low by setting low=0.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-07 00:13:56 +05:30
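The decomposition's identity in eager PyTorch (a sketch, assuming both forms draw from the same RNG stream, which holds in eager mode):

```python
import torch

g1 = torch.Generator().manual_seed(0)
g2 = torch.Generator().manual_seed(0)
a = torch.randint(10, (4,), generator=g1)     # aten.randint
b = torch.randint(0, 10, (4,), generator=g2)  # aten.randint.low with low=0
assert torch.equal(a, b)
```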
Vivek Khandelwal e90ea3d7ab [MLIR][TORCH] Extend implementation of aten._index_put_impl op.
This commit adds support for the following cases of the index_put op:
1.) where the index is a 2-d tensor.
2.) where indices is a list of tensors and Nones, with exactly
2 non-None tensors along consecutive dimensions.

This commit also adds a utility to compute the broadcast shape
given the two input tensors.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-05 14:04:30 +05:30
Alexandre Rames d24fa71368
Minor fixes for `ConvertTorchConversionToMLProgram`. (#1991)
* Only create the global seed variable if it does not exist already.
* Make the pass a module pass. A func pass may not modify its parent op.
2023-04-04 09:09:58 -07:00
Vivek Khandelwal 788efc3180 [MLIR][TORCH] Add support for non-unit stride for conv backward
This commit also adds the support for non-unit output padding in the
case of transposed convolution.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
Vivek Khandelwal 5e9582b055 [MLIR][TORCH] Add e2e support aten.movedim.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
Vivek Khandelwal 82fb9c7fb8 [MLIR][TORCH] Add decomposition for prims::squeeze op
This commit adds the decomposition for the prims.squeeze op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-01 21:45:58 +05:30
Chi_Liu 6bb9965a41
[TOSA] Add support for AtenZerosOp 0/strided layout (#1983) 2023-03-30 07:08:20 -07:00
Ramiro Leal-Cavazos 42d780dde0
Remove convolution_overrideable, convolution_backward_overrideable (#1984)
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
2023-03-29 15:05:56 -07:00
Ramiro Leal-Cavazos 0103c55e55
Add `RecomposeComplexOps` declaration + fix typos in pass name (#1950)
The `RecomposeComplexOps` pass currently does not have a TableGen
declaration and it is using the base class of `DecomposeComplexOps`,
which causes `--mlir-print-ir-after-all` to create wrong pass
labels. This commit fixes that as well as some minor typos in the name
of the pass.
2023-03-28 11:07:47 -07:00
Ramiro Leal-Cavazos d803ab4eeb
Cast `number` to `float` when shape function takes Scalar arg (#1978)
To keep things simple in shape functions, `Scalar` inputs are
considered `float`s. This means that when inserting the shape
functions into the IR, we must cast any `!torch.number`s into `float`s
so that the operand type matches the expected type in the shape
function. This commit adds the cast from `Scalar` to `float`.
2023-03-28 09:30:31 -07:00
Ziheng Jiang 72bb902640
[STABLEHLO] Move utils.h to include/ (#1974) 2023-03-27 21:16:21 -07:00
Maksim Levental 953ea39cb5
handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
Ramiro Leal-Cavazos a7449785ec
Use upstream shape functions when available (#1952)
There are several ops that have their shape function upstream and had
not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape function. In addition, TODOs have been
added for shape functions that should be upstream but are not.
2023-03-24 09:13:43 -07:00
Michael Feliz 2389729fb9
Add support for aten_remainder in TorchToTosa (#1966) 2023-03-23 17:55:58 -07:00
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines dtype function for
convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
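A schematic of the new calling convention (names hypothetical, for illustration only): each tensor operand becomes a `(rank, dtype)` tuple, so an absent optional tensor is simply `None`.

```python
from typing import Optional, Tuple

TensorInfo = Tuple[int, int]  # (rank, dtype), one tuple per tensor operand

def aten_convolution_dtype(
    input: TensorInfo, weight: TensorInfo, bias: Optional[TensorInfo]
) -> int:
    _, input_dtype = input
    return input_dtype  # placeholder rule; real functions apply promotion
```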
Zhekun Zhang 5758a0bfbb
[StableHLO] Support for slice_scatter (#1960)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-03-22 13:41:04 -07:00
lisaliu1 d632afce31
Max pool2d ceil mode to tosa (#1957)
* implemented ceil_mode == true support for lowering aten.max_pool2d to tosa
* add e2e test for lowering aten.max_pool2d to tosa with ceil_mode=true

---------

Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-03-21 10:17:39 -07:00
Sean Silva c319a20828 Update to LLVM 029313cc979ae71877b65794b1063d4e51184cc8
- mergeBlockBefore -> inlineBlockBefore
- move tosa-to-tensor pass ordering

https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1476217922
2023-03-21 04:16:20 -07:00
Yuanqiang Liu 3698a95586
[MHLO] add conversion for aten.linalg_vector_norm (#1850) 2023-03-20 14:14:27 -07:00
Matthias Gehre aa5bcb3cf2
LowerToBackendContract: Explicitly error out on unimplemented operator (#1947)
* LowerToBackendContract: Explicitly error out on unimplemented operator

But only reject torch.operator when results are invalid.
Otherwise it might be a custom op that the backend supports.
2023-03-20 16:27:08 +01:00
Ramiro Leal-Cavazos 18035021f0
RefineTypes aten.sum: check dtype exists before accessing methods (#1944)
This commit adds a check that `defaultDtype` exists in the RefineTypes
handling of `AtenSumOp` before accessing the method `isInteger`, which
crashes the program if `defaultDtype` is null.

The handling of `defaultDtype` is the same as the one used for the
`AtenSumDimIntListOp`.
2023-03-16 08:35:49 -07:00
lisaliu1 7d711b9f9f
Constant pad nd to tosa (#1933)
* implemented lowering torch.aten.constant_pad_nd to tosa
* add constant_pad_nd e2e tests to TOSA_PASS_SET
* add PadModule_basic & PadWithNoneValModule_basic to TOSA_PASS_SET
---------

Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-03-15 08:42:15 -07:00
Jiahao Li 4912c3937d
Support aten.stack op and decompose it into unsqueeze & cat (#1747) 2023-03-11 09:25:25 +08:00
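The identity the decomposition uses, checked in eager PyTorch:

```python
import torch

xs = [torch.randn(2, 3) for _ in range(4)]
dim = 1
via_cat = torch.cat([x.unsqueeze(dim) for x in xs], dim)  # unsqueeze & cat
assert torch.equal(torch.stack(xs, dim), via_cat)
```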
Ramiro Leal-Cavazos d310bb12bd
Expand definition of tensor subtype to include shape/dtype info (#1929)
Currently, the op `torch.tensor_static_info_cast` will not get
canonicalized away if the result type has any shape or dtype
information. This is because `isValidSubtype` only returns true when
the tensor types being compared are exactly the same or the supertype
has no shape and dtype information. Being unable to canonicalize away
the `torch.tensor_static_info_cast` gets in the way of further
optimizations, such as shape propagation.

This commit improves `isValidSubtype` by adding logic that compares
the shapes and dtypes of the two tensor types to determine if one type
is indeed a valid subtype of the other.

Fixes https://github.com/llvm/torch-mlir/issues/1926
2023-03-10 16:43:57 -08:00
gpetters94 66b1045a80
Add a new RecomposeComplexOps pass, fold slice+copy_ into indeX_put_ (#1901) 2023-03-10 16:42:11 -05:00
Ramiro Leal-Cavazos 2be48c3a67
Fix deprecation warnings for `isOneValue` and `getAllOnesValue` (#1928)
The functions `isOneValue` and `getAllOnesValues` are
deprecated. `isOne` and `getAllOnes` should be used instead.
2023-03-10 09:50:56 -08:00
Ziheng Jiang dca2b8a40a
[TORCH] Improve type refinement for aten.cat. (#1908)
* [TORCH] Fix type refinement for aten.cat.

* Add test.

* Address comments.

* Update.

* Update.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-03-09 16:17:35 -08:00
Roll PyTorch Action 40c25cecc4 update PyTorch version to 2.1.0.dev20230308 2023-03-08 15:25:16 +00:00
Zhekun Zhang 1d3a7419c5
[Torch Dialect] add RSub, ScalarImplicit canonicalize (#1899)
* add rsub, scalarimplit canonicalizer

* reformat

* address comments

* fix bug

* fix test

* Update elementwise.py

* resolve merge conflict

* change to 3

* change to 3

* real fix

* fix name

* add torchdynamo fail test

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-03-06 17:38:27 -08:00
Priya Savithiri c2ef5f4165
Add HardtanhBackward TOSA and LINALG support (#1721) 2023-03-06 10:16:37 -08:00
Ramiro Leal-Cavazos 671be048fe
Fix handling of non-int tensors in `getScalarValue` (#1914)
The current implementation of `getScalarValue` does not check that the
input to a `ValueTensorLiteralOp` is an i64 before extracting the
value, and it does not check that the result type of the
`PrimNumToTensorScalarOp` is also an i64. This leads to crashes or
invalid IR generated when the `input` is something other than an i64
tensor or `!torch.int`.

This commit addresses those issues. In addition, the function
`getScalarValue` is renamed to `getScalarIntValue` to make it clear
that it *only* extracts scalar integers.
2023-03-06 10:12:58 -08:00
Ramiro Leal-Cavazos d30af8772b
Handle uninitialized lattice elements in RefineTypes (#1911)
The data-flow analysis does not always propagate information to the
entire graph. This results in some lattice elements being
uninitialized. Currently the lattice elements are not checked to see
if they are uninitialized before rewriting the graph, potentially
resulting in invalid IR (see
https://github.com/llvm/torch-mlir/issues/1896).

This commit adds handling for uninitialized lattice elements.
2023-03-03 08:55:58 -08:00
Yuanqiang Liu 7a8304f935
[Torch Dialect] add folder for aten.sub.float (#1871) 2023-03-02 09:07:33 -08:00
Yuanqiang Liu fc1e091d6a
[Torch Dialect] add aten.pow.int_float op and it's folder (#1872) 2023-02-28 09:36:05 -08:00
Vivek Khandelwal a32840ffd7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowering for aten.add and aten.Float.Scalar op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-28 22:43:39 +05:30
Zachary Cetinic e7111d473b
[Torch Dialect] Scatter reduce lowering (#1884)
- Lowers the torch.scatter_reduce to linalg_on_tensors dialect.
- Includes support for "sum", "prod", "amax", "amin" and "mean".
2023-02-21 23:05:55 +00:00
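A small eager-mode example of the op being lowered (here with the "sum" reduction; "prod", "amax", "amin", and "mean" work analogously):

```python
import torch

src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])
out = torch.zeros(2).scatter_reduce(0, index, src, reduce="sum")
assert torch.equal(out, torch.tensor([3.0, 7.0]))
```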
Yuanqiang Liu eb74014dd8
[Torch] decompose aten.norm.ScalarOpt_dim to aten.linalg_vector_norm (#1849) 2023-02-20 20:08:29 -08:00
Ziheng Jiang 38ed559398
[StableHLO] Add support for AtenPowTensorScalar. (#1883)
* [MHLO] Add support for AtenPowTensorScalar.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-02-16 20:26:46 -08:00
Chi_Liu e85def790c
[TOSA] Add todtype f64 and i1 to f32 support (#1845) 2023-02-16 09:10:11 -08:00
Vivek Khandelwal b17d4d4f08
[MLIR][TORCH] Add decomposition for aten.bernoulli.p op (#1882)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 22:36:29 +05:30
Vivek Khandelwal f6f2e4d040 [MLIR][TORCH] Add support for integer type input for max.dim op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 16:14:15 +05:30
Chi_Liu 8a7340dfb5
[TOSA] aten.index.tensor multiple indexes support (#1868) 2023-02-13 23:07:15 -08:00
Yuanqiang Liu 6ab990e1e8
[Torch Dialect] add folder for aten.Int.float (#1863) 2023-02-10 13:59:03 -08:00
Ziheng Jiang f1b8d5e581
[MHLO] Support AtenMaskedFillScalar (#1839)
* [MHLO] Support MaskedFillScalar.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-02-10 13:58:39 -08:00
Yuanqiang Liu 2f6fdb7f0b
[Torch Dialect] add folder for prim.min.int (#1864) 2023-02-10 13:58:15 -08:00
Chi_Liu cc819e73dd
[TOSA] Fix broadcast_to input and output different shape support (#1855) 2023-02-09 09:15:14 -08:00
Yuanqiang Liu 089018b658
[MHLO] move AtenTanhOp to ConvertAtenUnaryFPOnlyPatten and add sin/cos/ceil/floor pattern (#1847) 2023-02-06 11:14:26 -08:00
Jiahao Li f58ba19448
Add aten.bucketize op and its decomposition (#1834) 2023-02-03 10:20:47 +08:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all Stablehlo operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Vivek Khandelwal ed9d8d1fb7 [MLIR][TORCH] Add support for clone op with channels last memory format
Fixes https://github.com/llvm/torch-mlir/issues/1829

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-02 16:04:42 +05:30
Jiahao Li f5b689e12f
[MHLO] Support aten.cumsum op in mhlo backend (#1825) 2023-01-29 21:38:27 -08:00
Chi_Liu 00fc14a6e1
[TOSA] Add to.dtype i1 to i64 (#1830) 2023-01-27 09:21:06 -08:00
Matthias Gehre adaf05f03e
[TorchToLinalg] Lower AtenRoundOp to math::RoundEvenOp (Fixes #1811) (#1823)
[TorchToLinalg] Lower AtenRoundOp to math::RoundEvenOp (Fixes #1811)
2023-01-25 08:51:29 +01:00
Gleb Kazantaev 3930588a7e
Enable VerifyBackendContract in LTC backend (#1798)
* Enable VerifyBackendContract in LTC backend

* Update VerifyBackendContract pass

* Move convert_scalar_implicit to jit_utils

* Rename VerifyBackendContract to VerifyBackendContractNoDecompositions

* Update verify-backend-contract-error.mlir test
2023-01-24 22:14:17 -05:00
Ramiro Leal-Cavazos 6c86bec04f
build: update llvm tag to 9acc2f37 (#1828)
This commit makes the following changes:

- Update dialects to use fold API `kEmitFoldAdaptorFolder` and update
signature of `fold` methods (see PSA
https://discourse.llvm.org/t/psa-new-improved-fold-method-signature-has-landed-please-update-your-downstream-projects/67618)
- Replace `makeArrayRef` with `ArrayRef` (see
https://reviews.llvm.org/D140896)
- Remove `TypeRange{}` arg from `b.create<scf::IfOp>` since builder no
longer takes that argument
- Make `func`s in `Torch/invalid.mlir` private, since symbol
declarations cannot be public. (see https://discourse.llvm.org/t/rfc-symbol-definition-declaration-x-visibility-checks/2140)
2023-01-25 01:29:42 +00:00
Eric Kunze 95bdfaa9bf
update llvm to d23516e9ad477527a9db4d06b1fa9566680ac67c (#1812)
Rename BlockAndValueMapping to IRMapping
Moved PrimTupleConstructOp type validation to its own verifier as the
tablegen version does not work for a combination of variadic input and
non-variadic output.
2023-01-23 16:34:22 -08:00
Victor Guerra 1cf09a6a34
Using `ArrayRef` deduction guides. (#1809)
`llvm::makeArrayRef` is now deprecated and can be
replaced by the newly introduced `ArrayRef` deduction guides.

Fixes: #1808

Co-authored-by: Victor Guerra <vm.guerramoran@criteo.com>
2023-01-21 08:58:07 -06:00
Chi_Liu c5ac42a198
[TOSA] Add aten.view shape -1 support (#1815) 2023-01-20 11:56:26 -08:00
Ramiro Leal-Cavazos d849cbad14
Make `getTypeForScalarType` safer by returning `FailureOr<Type>` (#1814)
One of the potential values for a `torch_upstream::ScalarType` is
`Undefined`. This means that conversion of a `ScalarType` to another
type is a computation that can fail. To enforce handling of the
failure case, this commit makes the two helper functions that convert
`ScalarType`s into other types return `failure()` when the
`ScalarType` is `Undefined`.
2023-01-20 18:40:13 +00:00
Chi_Liu 2587b3f583
[TOSA] Add aten.Index.Tensor support (#1771) 2023-01-19 21:19:00 -08:00
Vivek Khandelwal abf4f207cd [MLIR][TORCH] Add canonicalizer for aten.new_empty_strided op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-19 13:37:32 +05:30
Vivek Khandelwal f9d59eb500 [MLIR][TORCH] Add decomposition for aten.randn_like op
This commit decomposes aten.randn_like op into aten.randn.generator op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 12:09:27 +05:30
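The contract of the decomposition, sketched in eager PyTorch (values differ since they come from the RNG; only shape and dtype are preserved):

```python
import torch

x = torch.empty(2, 3, dtype=torch.float64)
y = torch.randn_like(x)                  # aten.randn_like
z = torch.randn(x.shape, dtype=x.dtype)  # aten.randn.generator equivalent
assert y.shape == z.shape and y.dtype == z.dtype
```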
Yuanqiang Liu 0a85033780
[MHLO] simplify aten.frobenius_norm.dim's lowering (#1800) 2023-01-17 13:52:12 -08:00
Jiahao Li e2698433db
Fix empty tensor when select -1 (#1787) 2023-01-17 10:14:14 -08:00
Maksim Levental 8696752eb6
Expose metadata of torch-mlir types (plus verify DictType key and value types). (#1785) 2023-01-16 10:25:02 -06:00
Ramiro Leal-Cavazos 7714bebfe3
Fix sign-compare warnings in TosaLegalizeCommon.cpp (#1794) 2023-01-12 11:23:30 -08:00
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Ashay Rane 4e4a571104
[TOSA] Add LeakyReLU conversion pass (#1790)
* feat(TorchToTOSA): LeakyReLU legalization

* test(LeakyReLU): Add LIT test and enable e2e test

Co-authored-by: Philipp Braun <philipp.braun@amd.com>
2023-01-10 21:42:07 -08:00
Ashay Rane 0faba6d2fc
build: update llvm tag to de3f0f7f (#1789)
Credit to @vivekkhandelwal1 for finding the necessary changes.

Summary of changes:

 - Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.

 - Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)

 - Add dependency on MhloPasses when MHLO is enabled

 - Specify result type when using mhlo::DotOp
2023-01-10 17:07:19 -06:00
Raghavan Raman 0979df6589
Fix unsqueeze in Torch to Tosa conversion (#1780) 2023-01-10 11:09:58 -08:00
Jiahao Li 8dc5d985eb
Add e2e support for aten logical or/and/xor/not ops (#1761) 2023-01-03 18:11:25 -08:00
Ramiro Leal-Cavazos d44bdd2728
Add `hasDtype` checks everywhere dtypes are used in decompositions (#1750)
There are several decompositions that assume the operands of the op
have dtypes available; however, the only time dtypes are guaranteed to
be present is when the graph has reached the backend contract. In
general, every pass that happens before reaching the backend contract
should not assume dtypes are available and should use `hasDtype` to
check first.

This commit adds `hasDtype` checks to every decomposition that uses
dtypes.
2023-01-03 14:19:18 -08:00
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
2023-01-03 14:18:26 -08:00
Srirammaswamy a88e3766e8
Add E2E support for LeakyRelu and LeakyReluBackward ops (#1733)
Co-authored-by: srirammaswamy <srirammaswamy@gmail.com>
2023-01-03 08:30:16 -08:00
powderluv 3d50d3d9fe
Revert "rebase llvm: 5f24f893cac7aaea292c70f8aa83b021499114be (#1760)" (#1765)
This reverts commit fa356cce50.
2023-01-01 10:56:06 -08:00
Xiafei Qiu fa356cce50
rebase llvm: 5f24f893cac7aaea292c70f8aa83b021499114be (#1760) 2022-12-31 00:07:54 +08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Shivam Gupta 2f45959f0d
Prelu lowering to linalg (#1712)
Prelu lowering to linalg
2022-12-28 08:51:33 +05:30
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Jiahao Li 49071f86e6
[MHLO] Evaluate RuntimeAssertOp at compile time (#1732) 2022-12-22 17:12:52 +08:00
Tanyo Kwok 297fd3aa47
Revert "reimplement linear lowering torchToMhlo (#1524)" (#1744)
This reverts commit 50b524546f.
2022-12-21 21:24:07 -08:00
Jiahao Li 60a139271d
Add aten.std.correction op and its decomposition (#1731) 2022-12-21 21:02:40 -08:00
zzp_miracle 50b524546f
reimplement linear lowering torchToMhlo (#1524) 2022-12-22 10:15:16 +08:00
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
Chi_Liu 9dc09ac8c5
[TOSA] Add aten.gather support for tosa (#1680) 2022-12-21 11:04:07 -08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
pranavmulticore 0f6008c802
Added GeluBackward: MHLO support (#1725) 2022-12-21 20:09:43 +08:00
Abhishek Varma 66d7a412cb [RefineTypes] Fix knowledge dtype for `aten.embedding` op
-- The dtype of the result of `aten.embedding` should match that of
   the `weight` operand (operand[0]) instead of being hardcoded to f32.
-- This commit aims to provide a fix for the same.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-20 19:56:12 +05:30
Tanyo Kwok 577e38da58
build: update llvm tag to 7ccbb4df (#1736)
Summary of changes:

 - LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
   (although not all) places
   (https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
   This patch replaces the affected instances of `llvm::Optional` with
   `std::optional`.

 - In the usages of llvm::Optional that remain, llvm::Optional::value()
   is deprecated, so this patch replaces them with a dereference.
2022-12-20 18:17:27 +08:00
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add Complex number dtype support to torch tensors. Add
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
Ramiro Leal-Cavazos 211cf8fc36
Add `report_fatal_error` to `getTypeForScalarType` (#1722)
Functions like `getTypeForScalarType` that do a mapping from one set
of types to another should not fail, and if they do it
should be obvious to the developer that that function has an
unhandled case.

Instead of silently failing when encountering an unsupported type,
this commit adds a `report_fatal_error` at the end, similar to other
type translation functions in this file.
2022-12-15 08:33:14 -08:00
Ramiro Leal-Cavazos 60db793feb
Pass op legality info to `verifyBackendContractPass` (#1705)
In order to verify if a given IR satisfies the backend contract, the
verifier needs to know if decompositions took place, and if so, which
ops were decomposed and which were not.

This commit adds two arguments to `verifyBackendContractPass` to
specify if decompositions took place and which ops to consider backend
legal, similar to the arguments of `LowerToBackendContractPass`.
2022-12-15 08:32:52 -08:00
Prashant Kumar 564403e3a1 Add float16 support in the refbackend.
This will require https://reviews.llvm.org/D139121 patch to go through.
2022-12-15 21:19:52 +05:30
Sean Silva b60da34f84 [cleanup] Fix a few more llvm::None -> std::nullopt 2022-12-14 05:59:49 -08:00
Ashay Rane f63bb9f86c
build: update llvm tag to 3a020527 (#1717)
Summary of changes:

 - Replace `llvm::None` with `std::nullopt`, since the former is deprecated
   (https://reviews.llvm.org/D139763)

 - Use setter for symbol visibility instead of passing string attribute when
   creating FuncOp
2022-12-14 02:06:39 -06:00
Ahmed S. Taei b1f6832849
Add aten.slice.Tensor & aten.cat folders (#1691) 2022-12-13 13:02:47 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Chi_Liu 163d19cce6
[TOSA] Add aten.add/sub.Scalar/Tensor si64 type support (#1604) 2022-12-12 12:13:07 -08:00
Ramiro Leal-Cavazos 73bd32d06c
Make `getTensorRank` safer by changing return to `Optional<unsigned>` (#1707)
Currently `getTensorRank` returns -1 if it was unable to get the rank
of the tensor. However, not every use in the codebase was checking the
return value, and in some cases the return value was cast to unsigned,
leading to infinite loops when an unranked tensor reached
a decomposition.

This commit changes the return of `getTensorRank` to
`Optional<unsigned>` to make it clear to the user that the function
can fail.

This commit also changes a couple of for loops that iterate a vector
in reverse order that can potentially become infinite loops into
range-based for loops.
2022-12-12 08:56:28 -08:00
Vivek Khandelwal d4862ec611 [MLIR][TORCH] Add e2e support for aten.var_mean op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-12 15:46:54 +05:30
Vivek Khandelwal f783e19dcb Revert "[MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs"
This reverts commit 55c7e66aa7.
2022-12-09 19:30:46 +05:30
Sambhav Jain f8a2592905
[Bazel] Resolve circular dependency and add targets for conversion to MLProgram dialect (#1694)
A circular dependency was introduced in e7edcc62fd. 

Specifically, the `makeShapeLLVMCompatible` and `makeShapeTorchCompatible` utilities were being called from `lib/Dialect/Torch/IR/TorchTypes.cpp` and `lib/Dialect/Torch/IR/TorchOps.cpp` defined under the `:TorchMLIRTorchDialect` bazel target, leading it to take a dependency on `:TorchMLIRConversionUtils` which already depends on `:TorchMLIRTorchDialect`, hence creating a circular dependency.

This commit resolves the same by moving said utilities from `lib/Conversion/Utils/Utils.cpp` to `lib/Dialect/Torch/Utils/Utils.cpp`. Please LMK if there's a better way to fix this and I will update the code.

This commit also adds the required targets to support building the new conversions from Torch to ML Program dialect that was introduced in f416953600.

Bazel build GHA triggered manually to verify: https://github.com/sjain-stanford/torch-mlir/actions/runs/3645944517
2022-12-08 09:49:54 -08:00
Ramiro Leal-Cavazos a54b334578
Allow running DecomposeComplexOps more than once (#1671)
The current implementation of `DecomposeComplexOps` fails if an op
expected to be decomposed does not get decomposed in the first
iteration of the `createTorchSimplificationPipeline` in
`LowerToBackendContractPass`. However, some graphs require multiple
iterations of `createTorchSimplificationPipeline` to fully propagate
all statically knowable information, such as dtypes and shapes, to the
entire graph, sometimes resulting in the need to run
`DecomposeComplexOps` more than once.

This commit changes `DecomposeComplexOps` to use a greedy algorithm
for pattern application and moves the legalization check of ops to the
`LowerToBackendContractPass` to allow for the `DecomposeComplexOps` to
run more than once.
2022-12-08 09:26:38 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having their operand have type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
2022-12-07 12:20:41 -08:00
Vivek Khandelwal 3e4bb2bd8e [MLIR][TORCH] Add E2E support for randn and randn.generator op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-06 22:41:24 +05:30
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit changes the `InsertRngGlobalsPass` to `TorchConversionToMLProgram`
pass. This commit also adds the `MLProgramBufferize` pass for the
bufferization of ml_program dialect ops to run on refbackend.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Eric Kunze 3fc27cf6ca
Update LLVM Tag to 2c1fa734 (#1670)
Summary of changes:
 - Change ShapedType::kDynamicSize -> ShapedType::kDynamic
 - llvm::NoneType has been deprecated, change convertScalarToDtype to use llvm::None
2022-12-01 20:38:28 -08:00
Ramiro Leal-Cavazos b4b92c990e
Replace LCG algorithm with squares64 algorithm in AtenUniformOp (#1633)
This commit replaces the LCG algorithm that was being used by the
`TorchToLinalg` lowering of `AtenUniformOp` to generate random numbers
with the `squares64` algorithm, for the LCG algorithm was producing
tensors that were highly correlated with one another.

Squares64 algorithm: https://arxiv.org/abs/2004.06278

Closes https://github.com/llvm/torch-mlir/issues/1608
2022-12-01 08:30:10 -08:00
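For reference, a Python sketch of the 32-bit "squares" generator from the cited paper (the lowering uses the 64-bit variant of the same counter-based family; the constants and rounds here follow the paper, not the Torch-MLIR code):

```python
MASK64 = (1 << 64) - 1

def squares32(ctr: int, key: int) -> int:
    # Counter-based RNG (Widynski, arXiv:2004.06278), 32-bit variant.
    y = x = (ctr * key) & MASK64
    z = (y + key) & MASK64
    for w in (y, z, y):  # three middle-square rounds
        x = (x * x + w) & MASK64
        x = ((x >> 32) | (x << 32)) & MASK64
    return ((x * x + z) & MASK64) >> 32  # fourth round, keep high bits

# Example odd key; the paper supplies a key-generation utility.
print(squares32(0, 0x548C9DECBCE65297))
```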
Vivek Khandelwal e7edcc62fd build: update llvm tag to 147fe9de
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
  with `isMemoryEffectFree`.
- Make fix for the dynamic dims, since
  `kDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1` in llvm
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities convert shapes in order to remain consistent
  with the Torch and MLIR semantics.
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-01 13:36:50 +05:30
Abhishek Varma 47f67853ac [RefineTypes] Add Float16Type dtype knowledge support for trivial ops
-- This commit adds Float16Type dtype knowledge support for trivial ops.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-01 10:22:43 +05:30
Ramiro Leal-Cavazos 0983a7f93a
Fix modulus calculation in LCG algorithm of refbackend (#1658)
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm for the multiplier
and increment chosen should be `temp % 2^{64} = temp & (1 <<
63)`. However, because we are dealing with i64 values, the modulus
operation happens automatically, so it is not needed.

See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
2022-11-30 08:46:52 -08:00
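A sketch of the corrected step with Knuth's MMIX constants (the values on the cited Wikipedia page); in i64 arithmetic the `mod 2^64` is implicit, which is the point of the fix:

```python
A = 6364136223846793005  # Knuth MMIX multiplier
C = 1442695040888963407  # Knuth MMIX increment

def lcg_next(seed: int) -> int:
    return (A * seed + C) % (1 << 64)  # implicit for 64-bit machine integers

assert 0 <= lcg_next(42) < (1 << 64)
```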
Abhishek Varma c27c1791f1 [MLIR][TORCH] Add e2e support for `aten.amax` op
-- This commit adds e2e support for the `aten.amax` op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Abhishek Varma 2c643adcb9 [TORCH][DECOMPOSE] Fix bug in computeReductionType API
-- This commit fixes a bug in computeReductionType API.
-- The bug pertains to removal of `dim` from the `sizes` array.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Tanyo Kwok bbcdb38d99
Revert "Decompose torch.slice_scatter (#1622)" (#1659)
This reverts commit f3f2f10030.
2022-11-30 12:47:13 +08:00
Sean Silva ecb09c2fc3 [torchdynamo] Fix output size computation for upsample_nearest2d 2022-11-29 01:46:29 -08:00
Abhishek Varma bb259f918a [MLIR][TORCH] Add lowering for `aten._softmax` when `half_to_float=True`
-- This commit adds decompose logic for `aten._softmax` when
   `half_to_float` is `True`.
-- An e2e test case will be added once support for half to float conversion for
   `aten._softmax` is added upstream.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-28 22:32:00 +05:30
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `KDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Tanyo Kwok 14f1260ac4
Add more mhlo basic converters (#1628)
* Add more mhlo basic converters

* remove unused pinnedMemory constraints

* refine naming
2022-11-24 14:28:34 +08:00
Tanyo Kwok f3f2f10030
Decompose torch.slice_scatter (#1622)
* Decompose torch.slice_scatter

* fix compilation error

* update file check

* fix ci

* fix i64 torch.tensor dtype
2022-11-23 18:14:12 +08:00
Vivek Khandelwal da8fdc9f96 [MLIR][TORCH] Fix refine types crash
This commit fixes https://github.com/llvm/torch-mlir/issues/1599.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-23 15:17:37 +05:30
Tanyo Kwok 4aad5ccf39
fix #1626 return type mismatch (#1634) 2022-11-23 15:02:41 +08:00
Vivek Khandelwal 68f568b704 [MLIR][TORCH] Add E2E support for prims.convert_element_type op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 09:36:36 +05:30
Vivek Khandelwal 55c7e66aa7 [MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs
This commit fixes the aten.mean and aten.mean.dim op decomposition
for supporting large-sized inputs.
This commit also fixes the formatting for the file stats.py

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 08:38:51 +05:30
Tanyo Kwok a9fb0c5459
fix mhlo e2e ci crashes (#1620)
* fix mhlo e2e ci crashes

* add passed tests

* calc dynamic positive dim
2022-11-21 21:50:35 +08:00
Vivek Khandelwal 4cbd3927d7 [MLIR][TORCH] Add aten.sort.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-20 19:00:41 +05:30
Chi_Liu 29c8f47723
[TOSA] Add aten.clamp op tosa support (#1609)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2022-11-18 13:32:13 -08:00
Abhishek Varma 1d949f3ac2 [MLIR][TORCH] Fix aten.upsample_nearest2d op
-- aten.upsample_nearest2d.vec op is not present
   owing to https://github.com/pytorch/pytorch/pull/85638
-- So this commit adds a lowering on aten.upsample_nearest2d.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-18 13:41:47 +05:30
Sean Silva 39de4d6265 [cleanup] Make diagnostics better
Also remove some unused imports.
2022-11-17 02:09:54 -08:00
Vivek Khandelwal 5f7177da35 [MLIR][TORCH] Add decomposition for aten.var_mean.correction op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-17 13:00:09 +05:30
Gaurav Shukla 0d209998d1
llvm: update tag to e864ac6945 (#1600)
Summary of changes:
1. Replace `string` iterator types by `IteratorType` enum.
(e6598b053d)
2. Update `includes` wrt new directory layout of MLIR HLO codebase.
(9fd8d251a8)
3. Update tags
   llvm: e864ac694540342d5e59f59c525c5082f2594fb8
   MHLO: eab364ba2a66bd0613efb94f8a738c1c97aaee92

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-11-16 14:40:36 -08:00
Ramiro Leal-Cavazos 09ca07bca0
`m_TorchConstant{Int/Bool}List` -> `m_TorchListOfConstant{Int/Bool}s` (#1601)
This commit renames the patterns used to match on lists of constant
values to `m_TorchListOfConstant{valueType}s`. This is needed to avoid
ambiguity for when `valueType` has `Optional` in it. In particular, it
makes it clear whether the values in the list are optional or the list
itself is optional.
2022-11-16 20:33:12 +00:00
Vivek Khandelwal a1d3afdba9 [MLIR][TORCH] Add E2E support for aten.randint.low op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-16 09:54:18 +05:30