Commit Graph

913 Commits (d35b6b412aa7252eb377967f4feb2a753ec1a7fb)

Author SHA1 Message Date
Yuanqiang Liu 1ea2b57ab7
[Torch Dialect] add folder for aten.add (#2264)
* [Torch Dialect] add folder for aten.add

* update

* update

* update
2023-06-27 10:55:28 +08:00
Yuanqiang Liu 64afc08dab
[Torch Dialect] add missing one_hot dtype function (#2143)
* [Torch Dialect] add missing one_hot dtype function

* update

* update

* update
2023-06-23 16:11:33 +08:00
Ramiro Leal-Cavazos 6f2bf31291
Fix single-element tuple construction in abstract interp library (#2258)
Single element tuples in Python need a comma after the
element. However, the `registry.py` file, which generates the expected
abstract interpretation function signatures, was not inserting the
comma. This commit changes the expected signature generator to add a
comma after the last element in any non-empty default tuple argument.
2023-06-22 11:27:40 -07:00
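A minimal illustration of the Python rule the fix relies on (values are hypothetical):
```
# Parentheses alone do not create a tuple; the trailing comma does.
not_a_tuple = (5)
a_tuple = (5,)
assert not isinstance(not_a_tuple, tuple)  # just the int 5
assert isinstance(a_tuple, tuple)          # single-element tuple
```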
Yuanqiang Liu 96b14e952e
[Torch Dialect] Support aten.device.with_index (#2254) 2023-06-23 01:07:14 +08:00
Abhishek Varma a0d2789840 [MLIR][TORCH] Add e2e support for aten.alias
-- This commit adds e2e support for aten.alias op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-21 12:15:31 +05:30
Vivek Khandelwal f6a6cfea4e
[MLIR][TORCH] Add support for negative index values for index.Tensor op (#2233)
This commit adds the support for index.Tensor op when the index values
are negative. This commit wraps around the index values by checking
their values at run time.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-16 14:21:04 -05:00
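A sketch of the wrap-around described above, in plain PyTorch (the tensor values are illustrative; the actual op performs this normalization at run time during lowering):
```
import torch

t = torch.arange(5)            # tensor([0, 1, 2, 3, 4])
idx = torch.tensor([-1, -5])   # negative index values

# Normalize negative indices by adding the dimension size at run time.
wrapped = torch.where(idx < 0, idx + t.shape[0], idx)
assert torch.equal(t[wrapped], t[idx])  # matches PyTorch indexing semantics
```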
Vivek Khandelwal ab8b23e767 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.
This commit removes the test `BaddbmmDifferentDtypesModule_basic`
since PyTorch expects all operands to have the same dtype.
Ref: 2abad0c184

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-15 17:53:16 +05:30
Yuanqiang Liu bba0f5891b
[Stablehlo] add conversion for AtenFlipOp (#2163) 2023-06-15 10:27:34 +08:00
Yuanqiang Liu 7c6961bcbf
[Torch Dialect] Support aten.cuda and add canonicalizer for aten.cuda (#2231) 2023-06-14 09:56:39 +08:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Christopher McGirr b461daa06e
fix(TorchToTosa.cpp): adjust torch->tosa div conversion (#2200)
Check the return type of the division to decide whether to use the
floating-point implementation of division or the integer one.

The issue arose from the fact that the inputs were all integer but the
result was cast to floating point. The conversion then chose the
integer implementation of division, which is not legal in TOSA
once all the inputs have been cast to floating point.

fix(TorchToLinalg): AtenDivScalarOp

Upcast the `self` operand as well if applicable: the `self` operand
must also be cast to float, since it can be an integer.
2023-06-12 11:18:38 +02:00
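Why the result type matters, in a small Python analogy (illustrative numbers; the actual fix inspects the op's return type during conversion):
```
a, b = 7, 2
assert a // b == 3   # integer division
assert a / b == 3.5  # floating-point division
# If the op's result type is floating point, lowering to the integer
# implementation would silently compute 3 instead of 3.5.
```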
Tiago Trevisan Jost cc75557119
feat: support unchanged dimensions in torch.aten.broadcast_to operation. (#2204) 2023-06-12 11:17:25 +02:00
Matthias Gehre 4e2ba2e0af
Support aten.sign (#2205) 2023-06-10 20:45:35 +02:00
Matthias Gehre 0959b502ae
Print name of the backend when tests fail to help debugging issues in CI (#2210)
* Print name of the backend when tests fail to help debugging issues in CI

* Extended test python/test/torchscript_e2e_test/compilation_failure.py
2023-06-09 10:47:07 +02:00
Yuanqiang Liu 5a7bf4e4cb
[Torch Dialect] Add canonicalize pattern for aten.is_floating_point (#2194)
* [Torch Dialect] Add canonicalize pattern for aten.is_floating_point

* implement as fold

* add lit test
2023-06-07 17:05:31 +08:00
Matthias Gehre 816880774b
Fix version comparison against stable (#2209) 2023-06-07 10:19:38 +02:00
JianzheXiao e4f8fb1b8c
[Torch Dialect] add support for AtenIsnanOp (#2170)
* add support for mhlo

* Add Test for torch.ne

* fix torch.ne shape/add static test case

* add support for static torch.ne

---------

Co-authored-by: root <root@n31-177-039.byted.org>
2023-06-07 10:06:27 +08:00
Yuanqiang Liu faec8698ea
[Torch Dialect] Support recompose aten.split.Tensor + prim.ListUnpack (#2192) 2023-06-07 01:38:04 +08:00
Vivek Khandelwal da886280fe
[MLIR][TORCH] Add E2E support for aten.tril op (#2202)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-06-05 16:17:01 -07:00
Ramiro Leal-Cavazos a46b5c6af2 Fix types + off-by-1 error, clamp `end` in slice+copy_ recomposition
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op being
used to replace the `copy_` op operates on the entire tensor being
mutated. This means that the result type of the `index_put` should be
the type of the input to `index_put` and we need to make sure that
`copy_` does not have users before replacing to avoid type conflicts.

This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.

Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
2023-06-01 11:14:53 -07:00
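PyTorch itself silently clamps out-of-range slice bounds, which is the behavior the recomposition must reproduce (a minimal check):
```
import torch

t = torch.arange(4)
# An `end` past the dimension size is clamped to the dimension size.
assert torch.equal(t[1:100], t[1:4])
```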
Ramiro Leal-Cavazos 281dccc681 [LINALG] Add dynamic support for `PrimMinIntOp` 2023-06-01 11:14:53 -07:00
Zhekun Zhang 8af3e50662
[Torch Dialect] Add support for AtenScalarTensorOp (#2085)
* add scalar_tensor op

* add dynamo pass test; needs PR2062

* try to fix

* Empty commit, trigger test

* Empty commit, trigger test

* address comments

* use dtype function

* fix decompose rule

* remove unused include

* Empty commit, trigger test

* fix test

* disable ltc

* fix dtype

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-06-01 11:38:50 +08:00
Yuanqiang Liu 72b8070e57
[Importer] import constant tuple (#2132)
* [Importer] import constant tuple

* update

* update

* update
2023-05-31 14:14:14 +08:00
Ramiro Leal-Cavazos 479b2175ef
Add `ReadOnly` trait to `copy.to_vtensor` (#2179)
Before inlining a global slot, the users of the global slot are
checked to see if they are `ReadOnly` or `MemoryEffectFree` to make
sure that the global slot is not being mutated. Because the op
`copy.to_vtensor` currently does not have the `ReadOnly` trait, if a
global slot is passed to `copy.to_vtensor`, the pass
`InlineGlobalSlots` will fail.

The op `copy.to_vtensor` is `ReadOnly`, since it does not modify the
contents of the input tensor; it simply makes a new copy. This commit
adds the trait as well as an e2e test that generates the case of a
global slot being passed to a `copy.to_vtensor`.
2023-05-30 21:40:36 +00:00
maxbartel db3f2e3fde
Add Stable PyTorch CI Pipeline (#2038)
* feat: split pytorch requirements into stable and nightly

* fix: add true to tests to see full output

* refactor: add comments to explain true statement

* feat: move some tests to experimental mode

* refactor: refactor pipeline into more fine-grained differences

* feat: add version differentiation for some tests

* feat: activate more configs

* refactor: change implementation to use less requirement files

* refactor: remove constraints used for testing

* fix: revert some requirement file names

* refactor: remove unnecessary ninja install

* fix: fix version parsing

* refactor: remove dependency on torchvision in main requirements file

* refactor: remove index url

* style: remove unnecessary line switch

* fix: readd index url
2023-05-30 12:16:24 -07:00
Vivek Khandelwal 959f4f48d5 [MLIR][TORCH] Add support for the total_weight for aten.nll_loss_forward op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-30 20:29:27 +05:30
Gaurav Shukla 552887783a [TM_TENSOR] Add `aten.scatter.[src|value]` op
This commit adds support of `aten.scatter.src` and `aten.scatter.value`
ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-05-29 12:35:53 +05:30
Zhekun Zhang 69e993b03f
[Torch Op] Add AtenChunkOp support (#2152)
* add chunkOp support

* update LTC xfail list

* address comments

* address comments

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-26 10:05:19 +08:00
Zhekun Zhang a426363b7d
[Torch Dialect] Add split.tensor support + recompose rules (#2102)
* add split.tensor support + recompose rules

* add e2e test

* address comments

* address comments

* erase op in recomposeOp

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-23 12:43:33 -07:00
Prateek Gupta 938a489e74 [TORCH-MLIR] Add ODS for aten.sign op.
This commit adds ODS for the aten.sign op.

Signed-Off-By: Prateek Gupta <prateek.gupta2@cerebras.net>
2023-05-23 11:06:42 +05:30
Zhekun Zhang aa97c8383e
[Torch Op] Add unbind.int support with ListUnpack (#2058)
* add unbind int

* reformat

* use unpack canonicalize

* address comments

* Empty commit, trigger test

* add ltc blacklist

* clean up

* address comments

* check permute list

* erase in recompose

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-18 19:07:58 -07:00
Vivek Khandelwal 5698893ae4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-18 21:30:11 +05:30
Yuanqiang Liu 6f7d9e83df
[Stablehlo] add e2e test for aten.batch_norm (#2129) 2023-05-17 09:04:40 -07:00
gpetters94 0302cf1d92
Add TMTensor::Attention and lower ScaledDotProductAttentionOp to it (#2027) 2023-05-16 15:17:45 -04:00
David Gens 17db2aafa3
add mse_loss_backward (#2111) 2023-05-12 14:29:13 -07:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
Maksim Levental c3cd7471b4
Pure-Python FX importer. (#2098)
Co-authored-by: Sean Silva <silvasean@google.com>
2023-05-12 00:46:33 -05:00
Zhekun Zhang 1eb18dd8b5
Add AtenFillScalarOp Stablehlo support (#2108)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-11 16:41:46 -07:00
Prashant Kumar c47d3aab01 Fix torchdynamo fail test. 2023-05-11 21:29:07 +05:30
Prashant Kumar 8eb0c7e656 torch.complex to builtin complex types matching.
The right approach would be to create our own !torch.complex type
and use that during import, rather than have a pass that converts to
the MLIR complex types.
2023-05-11 21:29:07 +05:30
Ramiro Leal-Cavazos ab694dfbc1 Add complex dtype support on refbackend 2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add complex types support with basic complex types.
Add aten.imag and aten.real op lowering via linalg_backend.
2023-05-11 21:29:07 +05:30
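The op-level semantics being lowered, for reference (a minimal PyTorch check):
```
import torch

z = torch.tensor([1.0 + 2.0j])   # complex64 tensor
assert z.real.item() == 1.0      # aten.real
assert z.imag.item() == 2.0      # aten.imag
```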
rahul shrivastava 86429d9656 Add e2e native_group_norm test-cases
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
rahul shrivastava 40a2c501a1 Add ODS for group_norm
- Add ODS for native_group_norm/backward.
- Add shape inference for native_group_norm/backward.

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
yifei410 86718cb203
[TOSA] lowering support for aten cat (#2039)
Add support for lowering torch.aten.cat to tosa.concat

* add support for aten cat to tosa

---------

Co-authored-by: yifei <y.zhou@xilinx.com>
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-05-10 08:25:58 -07:00
Zhekun Zhang fc62b8e9ab
[StableHlo] Fix AtenWhereSelfOp convert rule (#2093)
* fix whereself convert rule

* use int to test promotion

* add dynamo failing test

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-05 15:21:55 -07:00
Vivek Khandelwal 378860f51b [MLIR][TORCH] Add E2E support for aten.topk op
This commit adds the decomposition for the aten.topk op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
Zhekun Zhang 1eceb84899
add stablehlo support for pow.tensor_tensor (#2086)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-04 09:55:03 -07:00
Zhekun Zhang 0cf9ee340b
[Torch Dialect] Add to.dtype_layout canonicalize patterns (#2062)
* add to.dtype_layout canonicalize patterns

* update comment

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-02 20:06:02 -07:00
Yuanqiang Liu c596d11b98
[Torch Dialect] add canonicalize pattern for prim.device (#2066) 2023-05-02 20:05:46 -07:00
Maksim Levental c9fba95642
[Dynamo] turn on `no_python=True` for dynamo tests (#2040) 2023-04-28 18:05:17 -05:00
Ze Zhang 7b73e0cfaf
Add e2e linalg support for aten.atan (#2070)
* new atan op

* update shape

---------

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
rahul shrivastava a58442b50d Add ODS for aten.pow.Scalar
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-27 22:09:45 +05:30
Vivek Khandelwal 491ae5eda4 [MLIR][TORCH] Add E2E support for aten.var_mean.dim op
This commit adds the decomposition for the aten.var_mean.dim op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
Ramiro Leal-Cavazos c8e062fb4e
Fix default value of `stride` in 2d pooling ops in linalg and tosa (#2065)
When the user does not specify the `stride` value in 2d pooling ops,
`stride` is given the value of an empty list. However, the current
lowerings for pooling ops assumed that the `stride` operand would
always be a list of two ints, leading to crashes when that was not the
case. This commit fixes the crashes by setting the value of `stride`
to `kernel_size` when `stride` is the empty list, since this is the
default `stride` value specified in PyTorch docs. See:
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
2023-04-27 08:31:36 -07:00
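The PyTorch default the fix adopts can be checked directly (input values are illustrative):
```
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)
# When `stride` is not specified, it defaults to `kernel_size`.
assert torch.equal(F.max_pool2d(x, kernel_size=2),
                   F.max_pool2d(x, kernel_size=2, stride=2))
```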
rahul shrivastava e3d876af42 Add aten.scatter.value Op ODS
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-25 11:40:19 +05:30
rahul shrivastava b0f166bb9a Add Nll_loss2d
- Add both forward and backward op
- Add end-to-end xfailed testcases

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-24 23:47:26 +05:30
rahul shrivastava 85916dab33 Add ODS for aten.scatter.src
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-24 23:46:35 +05:30
Ramiro Leal-Cavazos 96d662647f
Fix import of constant bool tensor parameters (#2047)
Bool tensors are represented in TorchScript as an array of
`int8_t`s. However, when importing them into Torch-MLIR, the importer
was assuming the array had `int32_t` elements, leading to
out-of-bounds memory reads. This commit fixes the
casting of the bool tensor.
2023-04-20 18:38:48 -07:00
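The storage fact the fix depends on can be verified from Python:
```
import torch

t = torch.tensor([True, False, True])
assert t.dtype == torch.bool
assert t.element_size() == 1  # one byte per element (int8_t), not four (int32_t)
```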
Ramiro Leal-Cavazos f85f5799e4
Fix creation of empty tensor in decomposition for randn ops (#2043)
The current decomposition for `aten.randn.generator` does not specify
the `dtype` argument of the empty tensors created to store the random
values. This leads to invalid IR when the output type of the `randn`
op is not the default PyTorch dtype.
2023-04-19 08:25:39 -07:00
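The invariant the decomposition must preserve, shown at the PyTorch level (the dtype is hypothetical):
```
import torch

g = torch.Generator().manual_seed(0)
out = torch.randn(2, 3, generator=g, dtype=torch.float64)
# The empty tensors created inside the decomposition must carry this same
# dtype, not the default float32, or the resulting IR is type-inconsistent.
assert out.dtype == torch.float64
```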
Yuanqiang Liu 4d98f76d4f
[Torch Dialect] fold aten.detach (#2021) 2023-04-18 08:59:14 -07:00
Vivek Khandelwal ed56e614b7 [MLIR][TORCH] Add E2E support for cross entropy lowering
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-18 08:00:20 +05:30
Abhishek Varma 318fe13468 [MLIR][TORCH] Patch up Ops and their lowerings to deal with +ve `dim`
-- In Python we have the concept of negative dimension indexing.
-- We would want to normalize such dimensions to be +ve and within the
   expected range instead.
-- This commit takes care of a few remaining set of Ops and their
   lowerings by applying `toPositiveDim` and `isValidDim` to the
   extracted integer `dim` value.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-14 13:12:56 +05:30
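A Python sketch of the `toPositiveDim`/`isValidDim` semantics named above (the real helpers are C++; the bodies here are illustrative):
```
def to_positive_dim(dim: int, rank: int) -> int:
    # Map a possibly negative dim to its non-negative equivalent.
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    # Check that a normalized dim lies within [0, rank).
    return 0 <= dim < rank

assert to_positive_dim(-1, 4) == 3 and is_valid_dim(3, 4)
```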
Abhishek Varma a13d301356 [MLIR][TORCH] Add e2e support for aten.sort op
-- This commit adds e2e support for aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-13 12:59:43 +05:30
rahuls-cerebras c2c96c430a
Add Shape inference for CopyOp for lazy tensor core backend (#2006)
- Add Shape inference for CopyOp for LTC backend
2023-04-12 09:37:03 -04:00
Yuanqiang Liu 72c3326097
[Torch Dialect] support for aten.one_hot (#1852) 2023-04-11 01:02:28 -07:00
Vivek Khandelwal 98747d09a8 [MLIR][TORCH] Add support for prims::view_of op
This op does nothing and just returns the input operand as the
result of the op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-11 07:58:10 +05:30
Abhishek Varma 5337944ddb [MLIR][TORCH] Add e2e support for aten.randint
-- This commit adds e2e support for aten.randint by decomposing it into
   an aten.randint.low by setting low=0.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-07 00:13:56 +05:30
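The decomposition, observed at the PyTorch level (seeds and shapes are illustrative):
```
import torch

g1 = torch.Generator().manual_seed(0)
g2 = torch.Generator().manual_seed(0)
# randint(high) is equivalent to randint(low=0, high).
a = torch.randint(10, (4,), generator=g1)
b = torch.randint(0, 10, (4,), generator=g2)
assert torch.equal(a, b)
```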
Vivek Khandelwal 2213ce0855 [TorchDynamo] Add aten.squeeze op to the decomposition list
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-06 22:21:25 +05:30
Vivek Khandelwal e90ea3d7ab [MLIR][TORCH] Extend implementation of aten._index_put_impl op.
This commit adds support for the following cases of index_put_op:
1.) where the index is a 2-d tensor.
2.) where indices is a list of tensors and None values, with exactly
2 non-None tensors along consecutive dimensions.

This commit also adds a utility to compute the broadcast shape
given the two input tensors.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-05 14:04:30 +05:30
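A Python sketch of the broadcast-shape utility mentioned above (the actual utility is C++; this mirrors standard right-aligned broadcasting rules):
```
def broadcast_shape(a, b):
    ra, rb = list(reversed(a)), list(reversed(b))
    out = []
    for i in range(max(len(ra), len(rb))):
        x = ra[i] if i < len(ra) else 1  # missing dims broadcast as size 1
        y = rb[i] if i < len(rb) else 1
        if x != 1 and y != 1 and x != y:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))
    return tuple(reversed(out))

assert broadcast_shape((2, 1, 4), (3, 1)) == (2, 3, 4)
```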
Vivek Khandelwal 788efc3180 [MLIR][TORCH] Add support for non-unit stride for conv backward
This commit also adds the support for non-unit output padding in the
case of transposed convolution.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
Vivek Khandelwal 5e9582b055 [MLIR][TORCH] Add e2e support aten.movedim.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
Vivek Khandelwal 82fb9c7fb8 [MLIR][TORCH] Add decomposition for prims::squeeze op
This commit adds the decomposition for the prims.squeeze op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-04-01 21:45:58 +05:30
Ramiro Leal-Cavazos e0f301c890
Add `extra_library` kwarg to `torch_mlir.compile` (#1986)
This commit adds the ability to specify extra abstract interpretation
functions in `torch_mlir.compile` to use during type refinement. This
allows users to easily add custom ops without having to interact with
MLIR or C++ directly.
2023-03-30 09:20:19 -07:00
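A usage sketch, under the assumption that `extra_library` accepts the user's abstract interpretation functions directly (the model, input, and function names are hypothetical):
```
import torch_mlir

# `my_custom_op_shape_fn` stands in for a user-written abstract
# interpretation function for a custom op (hypothetical).
module = torch_mlir.compile(model, example_input,
                            extra_library=[my_custom_op_shape_fn])
```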
Chi_Liu 6bb9965a41
[TOSA] Add support for AtenZerosOp 0/strided layout (#1983) 2023-03-30 07:08:20 -07:00
Ramiro Leal-Cavazos 42d780dde0
Remove convolution_overrideable, convolution_backward_overrideable (#1984)
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
2023-03-29 15:05:56 -07:00
Maksim Levental 953ea39cb5
handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
Ramiro Leal-Cavazos a7449785ec
Use upstream shape functions when available (#1952)
There are several ops that have their shape function upstream and had
not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
2023-03-24 09:13:43 -07:00
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines dtype function for
convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
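A sketch of the new interface in Python terms (the function and argument names are illustrative, not the generated signatures): each tensor operand arrives as a `(rank, dtype)` tuple of ints, and an optional tensor that is None simply arrives as None, so the ambiguity described above disappears.
```
from typing import Optional, Tuple

def conv_dtype_fn(input_rank_dtype: Tuple[int, int],
                  weight_rank_dtype: Tuple[int, int],
                  bias_rank_dtype: Optional[Tuple[int, int]]) -> int:
    input_rank, input_dtype = input_rank_dtype
    # Promotion logic would go here; for convolution the result dtype
    # follows the input tensor's dtype.
    return input_dtype
```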
Zhekun Zhang 5758a0bfbb
[StableHLO] Support for slice_scatter (#1960)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-03-22 13:41:04 -07:00
lisaliu1 d632afce31
Max pool2d ceil mode to tosa (#1957)
* implemented ceil_mode==true support for lowering aten.max_pool2d to tosa
* add e2e test for lowering aten.max_pool2d to tosa with ceil_mode=true

---------

Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-03-21 10:17:39 -07:00
Sean Silva c319a20828 Update to LLVM 029313cc979ae71877b65794b1063d4e51184cc8
- mergeBlockBefore -> inlineBlockBefore
- move tosa-to-tensor pass ordering

https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1476217922
2023-03-21 04:16:20 -07:00
Yuanqiang Liu 3698a95586
[MHLO] add conversion for aten.linalg_vector_norm (#1850) 2023-03-20 14:14:27 -07:00
Yuanqiang Liu b967469906
[e2e] fix stack e2e test typo (#1931) 2023-03-14 09:32:44 -07:00
Jiahao Li 4912c3937d
Support aten.stack op and decompose it into unsqueeze & cat (#1747) 2023-03-11 09:25:25 +08:00
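The decomposition named above, verified at the PyTorch level:
```
import torch

a, b = torch.randn(2, 3), torch.randn(2, 3)
# stack == unsqueeze each operand at `dim`, then cat along `dim`.
assert torch.equal(torch.stack([a, b], dim=0),
                   torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0))
```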
gpetters94 66b1045a80
Add a new RecomposeComplexOps pass, fold slice+copy_ into indeX_put_ (#1901) 2023-03-10 16:42:11 -05:00
Ziheng Jiang dca2b8a40a
[TORCH] Improve type refinement for aten.cat. (#1908)
* [TORCH] Fix type refinement for aten.cat.

* Add test.

* Address comments.

* Update.

* Update.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-03-09 16:17:35 -08:00
Eric Kunze 4c7e7ec116
Update LLVM tag to 21f4b84c (#1918)
Update PassManager C interface to use mlirPassManagerRunOnOp
Update python calls to PassManager to also use operation instead of
module
2023-03-06 22:53:26 -08:00
Zhekun Zhang 1d3a7419c5
[Torch Dialect] add RSub, ScalarImplicit canonicalize (#1899)
* add rsub, scalarimplicit canonicalizer

* reformat

* address comments

* fix bug

* fix test

* Update elementwise.py

* resolve merge conflict

* change to 3

* change to 3

* real fix

* fix name

* add torchdynamo fail test

---------

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-03-06 17:38:27 -08:00
Priya Savithiri c2ef5f4165
Add HardtanhBackward TOSA and LINALG support (#1721) 2023-03-06 10:16:37 -08:00
Ramiro Leal-Cavazos 671be048fe
Fix handling of non-int tensors in `getScalarValue` (#1914)
The current implementation of `getScalarValue` does not check that the
input to a `ValueTensorLiteralOp` is an i64 before extracting the
value, and it does not check that the result type of the
`PrimNumToTensorScalarOp` is also an i64. This leads to crashes or
invalid IR being generated when the `input` is something other than an i64
tensor or `!torch.int`.

This commit addresses those issues. In addition, the function
`getScalarValue` is renamed to `getScalarIntValue` to make it clear
that it *only* extracts scalar integers.
2023-03-06 10:12:58 -08:00
Yuanqiang Liu 7a8304f935
[Torch Dialect] add folder for aten.sub.float (#1871) 2023-03-02 09:07:33 -08:00
Yuanqiang Liu fc1e091d6a
[Torch Dialect] add aten.pow.int_float op and it's folder (#1872) 2023-02-28 09:36:05 -08:00
Vivek Khandelwal a32840ffd7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowering for aten.add and aten.Float.Scalar op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-28 22:43:39 +05:30
Prateek Gupta 207229297e
[TORCH-MLIR] Add ODS for aten.clamp.Tensor op. (#1894)
This commit adds the ODS definition for the aten.clamp.Tensor op.

Signed-off-by: Prateek Gupta <prateek.gupta2@cerebras.net>
2023-02-24 09:18:24 -08:00
Vivek Khandelwal 6a3438f672 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-20.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-23 11:33:41 +05:30
Zachary Cetinic e7111d473b
[Torch Dialect] Scatter reduce lowering (#1884)
- Lowers the torch.scatter_reduce to linalg_on_tensors dialect.
- Includes support for "sum", "prod", "amax", "amin" and "mean".
2023-02-21 23:05:55 +00:00
Ramiro Leal-Cavazos 52dbb160fc
Replace `torch.rand` and `torch.randn` in e2e tests with `tu.rand` (#1890)
Random tensors used in e2e tests should be created using the
`TestUtils` object passed to the registered test case to ensure that
the compiled module and the golden trace receive the same tensors as
input. This commit changes all the cases of `torch.rand` and
`torch.randn` to use the `TestUtils` instead.
2023-02-21 14:30:05 -08:00
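The pattern the commit enforces, sketched below (`MyModule` is illustrative; `tu` is the `TestUtils` instance passed to every registered test):
```
import torch

class MyModule(torch.nn.Module):  # illustrative test module
    def forward(self, x):
        return x + 1

# Registered via the suite's decorator (import elided here):
# @register_test_case(module_factory=lambda: MyModule())
def MyModule_basic(module, tu):
    # Inputs come from the TestUtils object, not torch.rand, so the
    # compiled module and the golden trace receive identical tensors.
    module.forward(tu.rand(3, 4))
```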
Yuanqiang Liu eb74014dd8
[Torch] decompose aten.norm.ScalarOpt_dim to aten.linalg_vector_norm (#1849) 2023-02-20 20:08:29 -08:00
Vivek Khandelwal b17d4d4f08
[MLIR][TORCH] Add decomposition for aten.bernoulli.p op (#1882)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 22:36:29 +05:30
Vivek Khandelwal f6f2e4d040 [MLIR][TORCH] Add support for integer type input for max.dim op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 16:14:15 +05:30
Chi_Liu 8a7340dfb5
[TOSA] aten.index.tensor multiple indexes support (#1868) 2023-02-13 23:07:15 -08:00
Maksim Levental 2eddb3fde7
WIP: No PyTorch dep (#1854) 2023-02-13 14:21:06 -06:00
Yuanqiang Liu 6ab990e1e8
[Torch Dialect] add folder for aten.Int.float (#1863) 2023-02-10 13:59:03 -08:00
Ziheng Jiang f1b8d5e581
[MHLO] Support AtenMaskedFillScalar (#1839)
* [MHLO] Support MaskedFillScalar.

* Update.

* Update.

* Update.

---------

Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2023-02-10 13:58:39 -08:00
Yuanqiang Liu 2f6fdb7f0b
[Torch Dialect] add folder for prim.min.int (#1864) 2023-02-10 13:58:15 -08:00
Zachary Cetinic 2a4a61f98f
Add aten.scatter_reduce op definition (#1846) 2023-02-07 21:59:07 +00:00
Vivek Khandelwal c957cebd03 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-05.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-06 13:23:28 +05:30
Zachary Cetinic 2c2009a13d
Add in-place variant of torch.scatter_add (#1836) 2023-02-03 17:54:28 +00:00
Jiahao Li f58ba19448
Add aten.bucketize op and its decomposition (#1834) 2023-02-03 10:20:47 +08:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Vivek Khandelwal ed9d8d1fb7 [MLIR][TORCH] Add support for clone op with channels last memory format
Fixes https://github.com/llvm/torch-mlir/issues/1829

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-02 16:04:42 +05:30
Sean Silva 72fbf316b4 Update LLVM and MHLO submodules.
Week of 01/30/2023:

Green LLVM commit: e31ee6417c33a6e2f0e8440b1a86d5365279ad68
Green MHLO commit: c2a6f4064d426567b9ef7b0d29d5ab86dc7b2b02 (branch greencommit/2023-01-30-e31ee641)
2023-01-31 06:08:21 -08:00
Jiahao Li f5b689e12f
[MHLO] Support aten.cumsum op in mhlo backend (#1825) 2023-01-29 21:38:27 -08:00
Matthias Gehre adaf05f03e
[TorchToLinalg] Lower AtenRoundOp to math::RoundEvenOp (Fixes #1811) (#1823)
[TorchToLinalg] Lower AtenRoundOp to math::RoundEvenOp (Fixes #1811)
2023-01-25 08:51:29 +01:00
Gleb Kazantaev 3930588a7e
Enable VerifyBackendContract in LTC backend (#1798)
* Enable VerifyBackendContract in LTC backend

* Update VerifyBackendContract pass

* Move convert_scalar_implicit to jit_utils

* Rename VerifyBackendContract to VerifyBackendContractNoDecompositions

* Update verify-backend-contract-error.mlir test
2023-01-24 22:14:17 -05:00
Gleb Kazantaev aa3a88c8d9
Fix JIT schema matching for when ListType is used (#1826) 2023-01-23 21:43:18 -05:00
Vivek Khandelwal 23aa6903f7 [torchdynamo] Add default decomposition for ops in the dynamo backend
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-23 13:33:50 +05:30
Chi_Liu c5ac42a198
[TOSA] Add aten.view shape -1 support (#1815) 2023-01-20 11:56:26 -08:00
Chi_Liu 2587b3f583
[TOSA] Add aten.Index.Tensor support (#1771) 2023-01-19 21:19:00 -08:00
Vivek Khandelwal abf4f207cd [MLIR][TORCH] Add canonicalizer for aten.new_empty_strided op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-19 13:37:32 +05:30
Vivek Khandelwal f9d59eb500 [MLIR][TORCH] Add decomposition for aten.randn_like op
This commit decomposes aten.randn_like op into aten.randn.generator op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 12:09:27 +05:30
Vivek Khandelwal 999fd9036b [torchdynamo] Add native_group_norm and split op to the decomp list
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 10:40:46 +05:30
Jiahao Li e2698433db
Fix empty tensor when select -1 (#1787) 2023-01-17 10:14:14 -08:00
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Gleb Kazantaev c8b867b876
Added support for aten::norm.ScalarOpt_dim (#1774)
* Added support for aten::norm.ScalarOpt_dim

* Disable NormalizeModule_basic for linalg
2023-01-10 13:08:25 -05:00
Jiahao Li 8dc5d985eb
Add e2e support for aten logical or/and/xor/not ops (#1761) 2023-01-03 18:11:25 -08:00
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
2023-01-03 14:18:26 -08:00
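For reference, `expm1(x)` computes `exp(x) - 1` (accurately for small `x`) and has the same dtype behavior as `tanh`, which is why the swap preserves the same level of dtype-function coverage:
```
import torch

x = torch.tensor([1e-4], dtype=torch.float64)
# Same mathematical result; better accuracy near zero than exp(x) - 1.
assert torch.allclose(torch.expm1(x), torch.exp(x) - 1)
assert torch.expm1(x).dtype == torch.tanh(x).dtype
```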
Srirammaswamy a88e3766e8
Add E2E support for LeakyRelu and LeakyReluBackward ops (#1733)
Co-authored-by: srirammaswamy <srirammaswamy@gmail.com>
2023-01-03 08:30:16 -08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Shivam Gupta 2f45959f0d
Prelu lowering to linalg (#1712)
Prelu lowering to linalg
2022-12-28 08:51:33 +05:30
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Ramiro Leal-Cavazos 3260a1ea6e
Allow passing traced `torch.nn.Module`s into `torch_mlir.compile` (#1743)
This commit adds support for passing to `torch_mlir.compile` the
result of running `torch.jit.trace` on a model by relaxing the
condition that checks if the model is already in JIT IR to allow any
`torch.jit.ScriptModule`.

Fixes https://github.com/llvm/torch-mlir/issues/1739
2022-12-22 08:39:55 -08:00
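What the relaxed check permits, sketched below (`MyModel` and the example input are illustrative):
```
import torch
import torch_mlir

class MyModel(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

example_input = torch.randn(2, 3)
traced = torch.jit.trace(MyModel(), example_input)

# A traced module is itself a torch.jit.ScriptModule, which is all the
# relaxed condition requires.
assert isinstance(traced, torch.jit.ScriptModule)
module = torch_mlir.compile(traced, example_input)
```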
Jiahao Li 60a139271d
Add aten.std.correction op and its decomposition (#1731) 2022-12-21 21:02:40 -08:00
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
Chi_Liu b2cefc0b64
[TOSA] Add aten.masked_fill.Tensor/Scalar support (#1735) 2022-12-21 08:56:07 -08:00
Jae Hoon (Antonio) Kim 1d695239ff
Unrevert #1724 (#1737)
* Unrevert #1724

* Update pytorch requirements.txt
2022-12-20 11:17:21 -05:00
Abhishek Varma 66d7a412cb [RefineTypes] Fix knowledge dtype for `aten.embedding` op
-- The dtype of the result of `aten.embedding` should match that of
   the `weight` operand (operand[0]) instead of being hardcoded to f32.
-- This commit aims to provide a fix for the same.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-12-20 19:56:12 +05:30
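The rule the fix implements, checked against PyTorch (the dtype is hypothetical):
```
import torch

weight = torch.randn(10, 4, dtype=torch.float64)
indices = torch.tensor([1, 3])
out = torch.nn.functional.embedding(indices, weight)
assert out.dtype == weight.dtype  # follows the weight, not a hardcoded f32
```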
Ashay Rane dd1cf578a6
build: fix LTC code after upstream PyTorch change (#1727)
pytorch/pytorch@140a3139 reverted a change from yesterday, causing the
RollPyTorch action to break.  This patch reverts the corresponding
change in the torch-mlir LTC code.

This patch also re-enables tests that were previously marked as XFAIL.
2022-12-16 13:07:38 -06:00
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add Complex number dtype support to torch tensors. Add
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
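The dtype behavior being exercised, as a minimal check:
```
import torch

x = torch.arange(4, dtype=torch.float32)
# aten.fft_fft promotes a real float32 input to a complex64 result.
assert torch.fft.fft(x).dtype == torch.complex64
```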
Jae Hoon (Antonio) Kim a2a93891ea
Replace asIntArrayRefSlow with macro (#1724)
* Replace asIntArrayRefSlow with macro

* Update pytorch requirements.txt
2022-12-15 11:52:41 -05:00
Prashant Kumar 8ba77ae2a5 Yapf Format `refbackend.py`. 2022-12-15 21:19:52 +05:30
Prashant Kumar 564403e3a1 Add float16 support in the refbackend.
This will require https://reviews.llvm.org/D139121 patch to go through.
2022-12-15 21:19:52 +05:30
Sean Silva af9e8a5e63 [torchdynamo] Move to aot_autograd instead of raw make_fx
As [@ezyang suggested](https://github.com/pytorch/pytorch/issues/90276#issuecomment-1339791275),
use `torch._dynamo.optimizations.training.aot_autograd` instead of raw
`make_fx`. This is more future proof and gives us the backward pass and
functionalization. We don't currently get functionalization because of
https://github.com/pytorch/pytorch/issues/90759

This also incidentally fixes the source location handling, which makes
`lockstep_basic.py` give an accurate source location!
2022-12-15 01:55:50 -08:00
Ahmed S. Taei b1f6832849
Add aten.slice.Tensor & aten.cat folders (#1691) 2022-12-13 13:02:47 -08:00
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
Ashay Rane 430737b820
[cleanup] fix naming of private variable according to the style guide (#1704) 2022-12-12 09:04:46 -06:00
Vivek Khandelwal d4862ec611 [MLIR][TORCH] Add e2e support for aten.var_mean op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-12 15:46:54 +05:30
Vivek Khandelwal f783e19dcb Revert "[MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs"
This reverts commit 55c7e66aa7.
2022-12-09 19:30:46 +05:30
Sean Silva 7731211d02 Remove eager_mode
This was an experimental attempt at rolling out own op-by-op executor
with `__torch_dispatch__`, but it proved difficult to make it robust.
Op-by-op execution is very easy to implement robustly now with the
PyTorch 2.0 stack, so we don't need eager_mode.

Downstream users were using eager_mode to implement lockstep numerical
accuracy debuggers. We implemented the same functionality with
TorchDynamo in https://github.com/llvm/torch-mlir/pull/1681 so now there
is not much reason to continue maintaining it.
2022-12-09 03:50:00 -08:00
Gleb Kazantaev 804f9f1f8f
Extended TorchMLIRLoweringContext with virtual CreateComputation method (#1699)
* Extended TorchMLIRLoweringContext with virtual CreateComputation method

* Fix device_data_cast return value
2022-12-08 15:57:07 -05:00
Sean Silva e8511840c3 [cleanup] Use a single function pipeline for TOSA->Linalg
This should run faster and is overall clearer.
2022-12-08 09:02:38 -08:00
Sean Silva 69171c246a [RefBackend] Add elementwise fusion and buffer deallocation
This gives some decent improvements to memory consumption and latency of
testing. I would have expected buffer-deallocation to actually make a
big difference to the final process RSS but it doesn't appear to. Also
running buffer-deallocation later in the pipeline results in
miscompiles. I didn't have the time or interest to dig in deeper, but
something is off.

(numbers below are taken from a single run, but I did do a few runs to make
sure that the variance wasn't that great)

- Linalg-on-Tensors shows memory consumption improvements and some slight speedups.
```
./tools/e2e_test.sh -s -v -c refbackend
fuse=0 dealloc=0
RSS: 3071.33 MB
real    3m58.204s
user    6m22.299s
sys     0m51.235s
fuse=1 dealloc=0
RSS: 2515.89 MB
real    3m34.797s
user    5m56.902s
sys     0m44.933s
fuse=1 dealloc=post-bufferize:
RSS: 2290.25 MB
real    3m42.242s
user    6m0.560s
sys     0m46.335s
```

- TOSA ResNet18 gets significantly faster and uses significantly less memory.
```
time ./tools/e2e_test.sh -s -v -c tosa -f ResNet18
fuse=0 dealloc=0
rss 1328.56 MB
real    0m50.303s
user    0m55.355s
sys     0m12.260s
fuse=1 dealloc=0
rss 859MB
real    0m30.454s
user    0m35.551s
sys     0m11.879s
fuse=1 dealloc=post-bufferize:
rss 851MB
real    0m30.313s
user    0m39.889s
sys     0m11.941s
```

Big thanks to Ramiro for the methodology here for measuring the RSS with
`psutil`:
https://gist.github.com/ramiro050/5b5c2501f7389c008d9029210772c3a8
2022-12-08 03:14:42 -08:00
Ramiro Leal-Cavazos dd35488da5
build: update llvm tag to 798fa4b4 (#1684)
- Support for non-prefixed accessors has been removed. See:
  https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
  name `operands` overlaps with a builtin method name. See:
  https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
  https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
  maximize-value-semantics.

  The current implementation of the `RewriteViewLikeSubgraph` pass in
  maximize-value-semantics creates temporarily invalid IR. In
  particular, given a forward slice starting from a
  `CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
  pass first replaces all uses of the `CopyToNonValueTensorOp` with
  its operand, which results in all the `CopyToValueTensorOp` users
  having their operand have type `!torch.vtensor`, which is invalid.

  The correct way to do things is to first replace all the
  `CopyToValueTensorOp`s with their operand, and then replace all uses
  of the `CopyToNonValueTensorOp` with its operand.

  This only started failing now because the generated accessor
  `getOperand` for the `CopyToValueTensorOp` now returns a
  `TypedValue<NonValueTensorType>`, which has an assert checking that
  the value returned is of the expected type.
2022-12-07 12:20:41 -08:00
Sean Silva b1f9e09f85 [torchdynamo] Add ResNet18 example with TorchDynamo
This is a minor variation on our other resnet18 examples swapping in
TorchDynamo.

We replicate the refbackend_torchdynamo_backend out of the e2e test
config to avoid making that appear like a public API.

Also, some minor cleanups to TorchDynamoTestConfig.
2022-12-07 09:25:27 -08:00
Sean Silva c956c39c86 [cleanup] Remove disabled e2e test
This test has been disabled a long time, and since RefBackend is so slow
we don't want to add this unnecessarily. I believe it is covered by
downstream testing such as the Shark Tank.
2022-12-07 06:36:48 -08:00
Vivek Khandelwal 3e4bb2bd8e [MLIR][TORCH] Add E2E support for randn and randn.generator op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-06 22:41:24 +05:30
Sean Silva 485c18bb2f [torchdynamo] Add "lockstep" numerical accuracy debugger.
Thanks to TorchDynamo's great layering and design, this is only about
100 lines of code for a basic lockstep debugger.

This should allow us to deprecate eager_mode, since AFAIK the only
interesting use case that it was really supporting is for downstream users to
write lockstep debuggers.

NOTE: The exact reporting and interface here is subject to change. Please
try it out and provide feedback (or patches :) ).
- make_fx should not drop source locations: https://github.com/pytorch/pytorch/issues/90276
- Report tensors better (huge tensors should be summarized)
- Maybe don't abort, but just warn?
- Allow customizing atol/rtol.
- How best to print the failing node? And include surrounding graph
context?
2022-12-06 07:57:45 -08:00
Vivek Khandelwal ef39b9ebb4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2022-12-05.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-05 22:44:32 +05:30
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit replaces the `InsertRngGlobalsPass` with the
`TorchConversionToMLProgram` pass. It also adds the `MLProgramBufferize` pass for the
bufferization of ml_program dialect ops to run on refbackend.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Sean Silva 88db99946b [torchdynamo] Use decompositions to support a few ops 2022-12-01 11:25:20 -08:00
Ramiro Leal-Cavazos b4b92c990e
Replace LCG algorithm with squares64 algorithm in AtenUniformOp (#1633)
This commit replaces the LCG algorithm that was being used by the
`TorchToLinalg` lowering of `AtenUniformOp` to generate random numbers
with the `squares64` algorithm, for the LCG algorithm was producing
tensors that were highly correlated with one another.

Squares64 algorithm: https://arxiv.org/abs/2004.06278

Closes https://github.com/llvm/torch-mlir/issues/1608
2022-12-01 08:30:10 -08:00
Ramiro Leal-Cavazos 0983a7f93a
Fix modulus calculation in LCG algorithm of refbackend (#1658)
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm for the multiplier
and increment chosen should be `temp % 2^64`, i.e. `temp &
((1 << 64) - 1)`. However, because we are dealing with i64 values, the modulus
operation happens automatically, so it is not needed.

See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
2022-11-30 08:46:52 -08:00
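A Python sketch of the corrected step, using Knuth's MMIX constants from the link above (the mask emulates the automatic i64 wrap-around):
```
MULT = 6364136223846793005   # Knuth's MMIX multiplier
INC = 1442695040888963407    # Knuth's MMIX increment
MASK = (1 << 64) - 1         # temp % 2**64 == temp & ((1 << 64) - 1)

def lcg_next(seed: int) -> int:
    # On i64 values the modulus happens automatically; Python ints need
    # the explicit mask.
    return (seed * MULT + INC) & MASK
```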
Abhishek Varma c27c1791f1 [MLIR][TORCH] Add e2e support for `aten.amax` op
-- This commit adds e2e support for `aten.amax` op.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-30 17:54:37 +05:30
Tanyo Kwok bbcdb38d99
Revert "Decompose torch.slice_scatter (#1622)" (#1659)
This reverts commit f3f2f10030.
2022-11-30 12:47:13 +08:00
Daniel Ellis e2de20575f
Automatically strip overloads for FX-based models. 2022-11-29 22:19:09 -05:00
Ramiro Leal-Cavazos a8cbfff95b
Reduce memory usage of e2e tests by reducing input sizes (#1653)
There are a few e2e tests that take several very large tensors as
input, which leads to the e2e test suite leaking too much
memory. Running things locally resulted in a total memory usage of
12.5 GB when running the suite sequentially on the refbackend.

Many of the tests that take large tensors don't actually need
such large tensors to pass, and some that take several large tensors
as input are just doing the same thing multiple times. This commit
reduces the size of some of the tensors and removes repetitive parts
of tests to reduce the memory usage to a total of 3 GB.
2022-11-29 10:03:36 -08:00
Sean Silva 5a488ff085 Remove deprecated np.bool
`np.bool is bool` and will never be returned as a dtype of an
`np.ndarray`, so we don't need to handle it here.

```
>>> a = np.ndarray([1], dtype=bool)
>>> a.dtype.type is np.bool_
True
```

More info here:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
2022-11-29 01:46:21 -08:00
Sean Silva 5a27f826b8 Fix multiprocessing for `--config=torchdynamo`
For reasons that I haven't yet fully tracked down, the TorchDynamo
TestConfig seems to result in tensors that cannot be pickled. They seem
to be holding some sort of weak handles to a `torch.fx.graph.Graph`.

Here is the object structure that leads to the unpickleable object:
```
(<function _rebuild_tensor_v2 at 0x7f56346d56c0>, <class 'torch.Tensor'>, ( 1.0...
{<object object at 0x7f557529e6b0>: <WeakKeyDictionary at 0x7f556a3efbb0>}
{'data': {<weakref at 0x7f5615372ed0; to 'PythonKeyTracer' at 0x7f556a3ee5c0>: _...
<class 'torch.fx.graph.Graph'>
<class 'torch._ops.OpOverloadPacket'>
TypeError("cannot pickle 'torch._C.FunctionSchema' object")
```

Upstream bug filed: https://github.com/pytorch/pytorch/issues/89626
2022-11-28 04:03:11 -08:00
Shivam Gupta 853fd5c965
Fix RuntimeError while running examples/eager_mode.py (#1647) 2022-11-25 10:21:56 -06:00
Vivek Khandelwal d9cbf01d1e Revert "build: update llvm tag to 147fe9de"
This reverts commit e45ad313d4.
2022-11-25 12:41:56 +05:30
Vivek Khandelwal 9cac480a18 Revert "[MLIR][TORCH] Fix indentation and spacing for E2E tests"
This reverts commit 3790a4270e.
2022-11-25 12:41:56 +05:30
Sean Silva 28957adaac [torchdynamo] Initial TorchDynamo support
This adds a basic e2e Config for TorchDynamo using
Linalg-on-Tensors/RefBackend.
But TorchDynamo is pretty orthogonal to
various other pieces, so it should compose nicely with variations like:
- Switching out all the backends (Linalg-on-Tensors, TOSA, MHLO)
- PyTorch functionalization and decompositions
- Taking the example inputs and compiling with all dynamic or all static
  shapes without duplicating tests.

This adds it to the CI, but there are still a lot of XFAIL's.

This also adds a helper `from torch_mlir.dynamo import
make_simple_dynamo_backend` which simplifies some of the steps for
making a Torch-MLIR-based TorchDynamo backend. We include "simple" in
the name because we are going to be exploring various things next from
the long-term roadmap.

The next steps are:
- Burn down all the XFAIL's.
- Start working on the pieces from the [long-term roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md).
  - Add functionalization/decompositions into the TorchDynamo flow and
    remove reliance on the current Torch-MLIR "frontend".
  - Write a pure-Python direct FX->MLIR importer.
  - Hook up the new PyTorch symbolic shape stuff.
  - Explore PrimTorch decompositions for simplifying backends.
2022-11-24 04:10:25 -08:00
Vivek Khandelwal 3790a4270e [MLIR][TORCH] Fix indentation and spacing for E2E tests
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Vivek Khandelwal e45ad313d4 build: update llvm tag to 147fe9de
Summary of changes:
- Update call to `hasNoEffect` utility
- `KDynamicSize` value changed to
  `std::numeric_limits<int64_t>::min()` from `-1`
- Update tags
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-24 12:44:43 +05:30
Tanyo Kwok 14f1260ac4
Add more mhlo basic converters (#1628)
* Add more mhlo basic converters

* remove unused pinnedMemory constraints

* refine naming
2022-11-24 14:28:34 +08:00
Maksim Levental bfcfd60d55
[MLIR][TORCH] Refix differentiable view (#1639)
* `BatchMlpLayerModule_basic` passes

* Fix https://github.com/llvm/torch-mlir/issues/1618 by stripping `requires_grad` from results of view ops.
2022-11-23 15:35:39 -06:00
Tanyo Kwok f3f2f10030
Decompose torch.slice_scatter (#1622)
* Decompose torch.slice_scatter

* fix compilation error

* update file check

* fix ci

* fix i64 torch.tensor dtype
2022-11-23 18:14:12 +08:00
Vivek Khandelwal 68f568b704 [MLIR][TORCH] Add E2E support for prims.convert_element_type op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 09:36:36 +05:30
Vivek Khandelwal 55c7e66aa7 [MLIR][TORCH] Fix mean and mean.dim op for large-sized inputs
This commit fixes the aten.mean and aten.mean.dim op decomposition
for supporting large-sized inputs.
This commit also fixes the formatting for the file stats.py

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-22 08:38:51 +05:30
Maksim Levental ed901094c1
Fix https://github.com/llvm/torch-mlir/issues/1618 by stripping `requires_grad` from results of view ops. (#1624) 2022-11-21 19:15:53 -06:00
Sean Silva 22307a1427 Clean up some parts of the test suite
The purpose of the test suite is to accelerate the development of the
compiler. However, we had various tests there that were not expected to
work, had no in-progress work being tested by the test, and nobody was
actively working on them. Having such tests in our test suite just adds
clutter and slows down development on the compiler.
2022-11-21 06:14:31 -08:00
Vivek Khandelwal 25ab8fcc1f [MLIR][TORCH] Fix numel tests for Roll PyTorch action
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-20 19:19:42 +05:30
Vivek Khandelwal 4cbd3927d7 [MLIR][TORCH] Add aten.sort.int op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-20 19:00:41 +05:30
Abhishek Varma 1d949f3ac2 [MLIR][TORCH] Fix aten.upsample_nearest2d op
-- aten.upsample_nearest2d.vec op is not present
   owing to https://github.com/pytorch/pytorch/pull/85638
-- So this commit adds a lowering for aten.upsample_nearest2d.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2022-11-18 13:41:47 +05:30
Sean Silva 39de4d6265 [cleanup] Make diagnostics better
Also remove some unused imports.
2022-11-17 02:09:54 -08:00
Vivek Khandelwal 5f7177da35 [MLIR][TORCH] Add decomposition for aten.var_mean.correction op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-17 13:00:09 +05:30
Sean Silva 3695ca83e6 [torch_mlir.compile] Handle the case of already-scripted models better
Closes #1582
2022-11-16 10:47:13 -08:00
Vivek Khandelwal a1d3afdba9 [MLIR][TORCH] Add E2E support for aten.randint.low op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-16 09:54:18 +05:30
George Petterson 92f385bd9f [MLIR][TORCH] Add E2E support aten.convolution_backward op
This commit adds the decomposition for the `aten.convolution_backward`
and `aten.convolution_backward_overrideable` op.
2022-11-15 07:38:26 +05:30
Gleb Kazantaev 6909eaf7fc
Update TorchMlirBackendImpl Methods (#1580)
* Fix LTC build

* Remove passing test from xfail set
2022-11-14 00:37:49 -05:00
Vivek Khandelwal a558034c1a [MLIR][TORCH] Fix aten.upsample_nearest2d_backward op
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-12 00:05:36 +05:30
Vivek Khandelwal d571d050fd [torch_mlir.compile] Fixes issue with the https://github.com/llvm/torch-mlir/issues/1557
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-11-11 18:05:15 +05:30
Sean Silva cc468d2d16 [cleanup] Be consistent about apostrophe 2022-11-10 07:42:15 -08:00
Daniel Ellis a7ac0def45
Move single-tensor-tuple-return test to mlir unit test.
Also, add multiple return test.
2022-11-10 09:23:53 -05:00
Xiafei Qiu 4f173c6e0f
update llvm tag to a2620e00. (#1567)
- also update MHLO to 57ba12a2(branch greencommit/2022-11-07-a2620e00)
- change -pass-pipeline format to make tests pass.
2022-11-10 18:39:28 +08:00
Sean Silva 64914603fa [torch_mlir.compile] Add support for multiple exported methods
For AoT deployments models often have multiple exported methods.
This patch enables something like this:

```
class TwoMethodsModule(torch.nn.Module):
    def sin(self, x):
        return torch.ops.aten.sin(x)

    def cos(self, x):
        return torch.ops.aten.cos(x)

example_args = torch_mlir.ExampleArgs()
example_args.add_method("sin", torch.ones(2, 3))
example_args.add_method("cos", torch.ones(2, 4))
print(torch_mlir.compile(TwoMethodsModule(), example_args))
```

In the
[long-term](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md#tools-for-advanced-aot-deployments)
we will need to reconcile this with our story for stateful models and the
backend contract being purely functional. For now, this provides some basic
infra that seems harmless. Arguably, we could tighten up the backend contract
even more to only allow a single compiled function which would prohibit this or
require building out a layer above.

Fixes #1557
2022-11-10 02:10:22 -08:00
Jae Hoon (Antonio) Kim 2ec4b06bbb
Remove MakeView from IR Builder (#1552)
* Remove MakeView from IR Builder

* Update PyTorch requirements
2022-11-09 13:46:34 -05:00