Commit Graph

29 Commits (7ae354340d496a82486e7462ec678724e031f127)

Author SHA1 Message Date
Abhishek Varma 5337944ddb [MLIR][TORCH] Add e2e support for aten.randint
-- This commit adds e2e support for aten.randint by decomposing it into
   aten.randint.low with the lower bound set to 0.

Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-04-07 00:13:56 +05:30
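For reference, a minimal PyTorch-level sketch of the equivalence this decomposition relies on (illustrative only, not the actual rewrite pattern):

```python
import torch

# aten.randint(high, size) draws from [0, high); aten.randint.low makes the
# lower bound explicit, so setting low=0 reproduces the same semantics.
a = torch.randint(10, (2, 3))         # implicit lower bound of 0
b = torch.randint(0, 10, (2, 3))      # explicit low=0, same distribution
assert a.shape == b.shape and a.dtype == b.dtype
```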
Vivek Khandelwal 5e9582b055 [MLIR][TORCH] Add e2e support for aten.movedim.int op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-04 17:53:27 +05:30
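A sketch of the op's semantics, assuming the usual reduction to a permutation (one plausible lowering path, not necessarily the one used here):

```python
import torch

# torch.movedim moves one dimension to a new position; the same result can
# be written as a permutation of the dimensions.
x = torch.randn(2, 3, 4)
moved = torch.movedim(x, 0, 2)   # shape (3, 4, 2)
perm = x.permute(1, 2, 0)        # hand-written equivalent
assert torch.equal(moved, perm)
```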
Vivek Khandelwal 82fb9c7fb8 [MLIR][TORCH] Add decomposition for prims::squeeze op
This commit adds the decomposition for the prims.squeeze op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-01 21:45:58 +05:30
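A minimal sketch of the idea behind such a decomposition, assuming the multi-dim squeeze is expressed as repeated single-dim squeezes:

```python
import torch

# prims.squeeze(a, dims) removes the given size-1 dimensions. Squeezing one
# dim at a time, highest index first, avoids re-indexing after each step.
x = torch.randn(1, 3, 1, 4)
dims = [0, 2]
y = x
for d in sorted(dims, reverse=True):
    y = torch.squeeze(y, d)
assert y.shape == (3, 4)
```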
Ramiro Leal-Cavazos 42d780dde0
Remove convolution_overrideable, convolution_backward_overrideable (#1984)
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
2023-03-29 15:05:56 -07:00
Ramiro Leal-Cavazos a7449785ec
Use upstream shape functions when available (#1952)
There are several ops whose shape functions exist upstream but had
not been updated in Torch-MLIR to use the upstream versions. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
2023-03-24 09:13:43 -07:00
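For context, a shape function computes an op's result shape from its operand shapes. A hand-rolled sketch of the simplest (unary) case; the name `unary_shape` is illustrative, not torch-mlir's actual registry naming:

```python
from typing import List

# Illustrative only: elementwise unary ops (tanh, expm1, ...) keep the
# input shape, which is the simplest kind of shape function.
def unary_shape(self: List[int]) -> List[int]:
    return list(self)

assert unary_shape([2, 3, 4]) == [2, 3, 4]
```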
Ramiro Leal-Cavazos eae3ff7f1c
Change dtype functions interface to take ints tuple for each tensor (#1965)
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature. This prevents
torch-mlir from knowing whether a None value came from an optional
tensor, a distinction that was crucial in the original design, since
each tensor argument had to be turned into two separate arguments for
the dtype function.

This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.

To test the implementation, this commit defines a dtype function for
the convolution op, which takes one optional tensor as an argument.
2023-03-23 11:05:39 -07:00
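A sketch of the interface described above; the function name and signature here are hypothetical stand-ins for the generated functions in torch-mlir's abstract interpretation library:

```python
from typing import Optional, Tuple

# Each tensor operand becomes a (rank, dtype) tuple, so an optional tensor
# that is None simply arrives as None and no type information is lost.
def conv_dtype(input_rank_dtype: Tuple[int, int],
               weight_rank_dtype: Tuple[int, int],
               bias_rank_dtype: Optional[Tuple[int, int]]) -> int:
    input_rank, input_dtype = input_rank_dtype
    # Simplified rule for illustration: the result takes the input's dtype.
    return input_dtype
```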
Jiahao Li 4912c3937d
Support aten.stack op and decompose it into unsqueeze & cat (#1747) 2023-03-11 09:25:25 +08:00
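The decomposition named in the title can be checked at the PyTorch level; a minimal sketch:

```python
import torch

# torch.stack(tensors, dim) is equivalent to unsqueezing each tensor at
# `dim` and concatenating along that dimension.
ts = [torch.randn(2, 3) for _ in range(4)]
stacked = torch.stack(ts, dim=1)                             # (2, 4, 3)
decomposed = torch.cat([t.unsqueeze(1) for t in ts], dim=1)
assert torch.equal(stacked, decomposed)
```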
Roll PyTorch Action 40c25cecc4 update PyTorch version to 2.1.0.dev20230308 2023-03-08 15:25:16 +00:00
Priya Savithiri c2ef5f4165
Add HardtanhBackward TOSA and LINALG support (#1721) 2023-03-06 10:16:37 -08:00
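A sketch of the backward semantics being lowered, assuming the standard elementwise form (helper name is illustrative):

```python
import torch

# hardtanh clamps to [min_val, max_val], so its gradient is 1 strictly
# inside the interval and 0 outside.
def hardtanh_backward(grad, x, min_val=-1.0, max_val=1.0):
    mask = (x > min_val) & (x < max_val)
    return grad * mask.to(grad.dtype)

out = hardtanh_backward(torch.ones(3), torch.tensor([-2.0, 0.0, 2.0]))
assert torch.equal(out, torch.tensor([0.0, 1.0, 0.0]))
```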
Vivek Khandelwal a32840ffd7 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowerings for the aten.add and aten.Float.Scalar ops.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-28 22:43:39 +05:30
Zachary Cetinic e7111d473b
[Torch Dialect] Scatter reduce lowering (#1884)
- Lowers torch.scatter_reduce to the linalg_on_tensors dialect.
- Includes support for "sum", "prod", "amax", "amin" and "mean".
2023-02-21 23:05:55 +00:00
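A usage sketch of the op being lowered, using the "sum" reduction:

```python
import torch

# Scatter src values into `x` along dim 0, combining values that land on
# the same index with the chosen reduction (include_self=True by default,
# so the zeros in `x` participate in the sum).
x = torch.zeros(3)
index = torch.tensor([0, 1, 1, 2])
src = torch.tensor([1.0, 2.0, 3.0, 4.0])
out = x.scatter_reduce(0, index, src, reduce="sum")
assert torch.equal(out, torch.tensor([1.0, 5.0, 4.0]))
```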
Yuanqiang Liu eb74014dd8
[Torch] decompose aten.norm.ScalarOpt_dim to aten.linalg_vector_norm (#1849) 2023-02-20 20:08:29 -08:00
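The decomposition rests on a scalar-p, dims-restricted norm being expressible as a vector norm over those dims; a quick PyTorch-level check:

```python
import torch

x = torch.randn(4, 5)
a = torch.norm(x, p=2, dim=1)
b = torch.linalg.vector_norm(x, ord=2, dim=1)
assert torch.allclose(a, b)
```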
Vivek Khandelwal b17d4d4f08
[MLIR][TORCH] Add decomposition for aten.bernoulli.p op (#1882)
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-02-15 22:36:29 +05:30
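A minimal sketch of the op's semantics, assuming the usual comparison-against-uniform-noise formulation (helper name is illustrative):

```python
import torch

# aten.bernoulli.p draws 1 with probability p per element; comparing
# uniform noise against p is the standard way to express this.
def bernoulli_p(x: torch.Tensor, p: float) -> torch.Tensor:
    return (torch.rand(x.shape) < p).to(x.dtype)

sample = bernoulli_p(torch.empty(4, 4), 0.5)
assert set(sample.unique().tolist()) <= {0.0, 1.0}
```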
Jiahao Li f58ba19448
Add aten.bucketize op and its decomposition (#1834) 2023-02-03 10:20:47 +08:00
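A usage sketch of the op:

```python
import torch

# torch.bucketize returns, per element, the index of the bucket the value
# falls into; `boundaries` must be sorted.
boundaries = torch.tensor([1.0, 3.0, 5.0])
values = torch.tensor([0.5, 3.0, 4.2, 9.0])
buckets = torch.bucketize(values, boundaries)
assert torch.equal(buckets, torch.tensor([0, 1, 2, 3]))
```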
Vivek Khandelwal abf4f207cd [MLIR][TORCH] Add canonicalizer for aten.new_empty_strided op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-19 13:37:32 +05:30
Vivek Khandelwal f9d59eb500 [MLIR][TORCH] Add decomposition for aten.randn_like op
This commit decomposes the aten.randn_like op into the aten.randn.generator op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-18 12:09:27 +05:30
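The decomposition works because randn_like only needs the input for its shape, dtype, and device; a PyTorch-level sketch:

```python
import torch

x = torch.empty(2, 3, dtype=torch.float32)
# Equivalent to torch.randn_like(x) up to the random values drawn.
y = torch.randn(x.shape, dtype=x.dtype, device=x.device)
assert y.shape == x.shape and y.dtype == x.dtype
```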
Jiahao Li 4f94831fed
[LINALG][TOSA][MHLO] Add e2e support for aten bitwise ops (#1753) 2023-01-11 14:40:03 -08:00
Vivek Khandelwal fd236b2c89 [MLIR][TORCH] Add decomposition for prims.var and prims.sqrt op
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Vivek Khandelwal b966733e04 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-01-08.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-01-11 17:39:10 +05:30
Jiahao Li 8dc5d985eb
Add e2e support for aten logical or/and/xor/not ops (#1761) 2023-01-03 18:11:25 -08:00
Ramiro Leal-Cavazos 273664ded6
[custom op] Replace `tanh` dtype function with `expm1` (#1769)
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.

Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
2023-01-03 14:18:26 -08:00
Srirammaswamy a88e3766e8
Add E2E support for LeakyRelu and LeakyReluBackward ops (#1733)
Co-authored-by: srirammaswamy <srirammaswamy@gmail.com>
2023-01-03 08:30:16 -08:00
Ashay Rane ac780529b4
Revert e2e support for aten logical or/and/xor/not ops (#1757)
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked.  Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result.  This patch reverts the
commit to make the post-merge builds green again.
2022-12-29 21:01:06 -06:00
Shivam Gupta 2f45959f0d
Prelu lowering to linalg (#1712)
2022-12-28 08:51:33 +05:30
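A sketch of the elementwise form that makes a linalg lowering straightforward:

```python
import torch

# PReLU is the identity for non-negative inputs and a learned slope `w`
# for negative ones.
x = torch.tensor([-2.0, 0.0, 3.0])
w = torch.tensor([0.25])
a = torch.nn.functional.prelu(x, w)
b = torch.where(x >= 0, x, w * x)
assert torch.equal(a, b)
```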
Jiahao Li eaab9be207
Add e2e support for aten logical or/and/xor/not ops (#1752) 2022-12-26 10:23:38 +08:00
Jiahao Li 60a139271d
Add aten.std.correction op and its decomposition (#1731) 2022-12-21 21:02:40 -08:00
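A quick check of the semantics this decomposition has to reproduce:

```python
import torch

# std with a `correction` divides the squared deviations by N - correction
# (correction=1 is Bessel's correction, the default).
x = torch.randn(10)
n = x.numel()
manual = torch.sqrt(((x - x.mean()) ** 2).sum() / (n - 1))
assert torch.allclose(manual, torch.std(x, correction=1))
```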
Jiahao Li 15b249777b
[Torch][MHLO] Decompose aten.copy op. Lower aten.rsqrt & sigmoid to mhlo. (#1734) 2022-12-22 10:13:59 +08:00
ataheridezfouli-groq 17ee643aeb
[TORCH] Add Complex Number support (#1673)
Add complex number dtype support to torch tensors. Add the
aten.fft_fft op to test complex numbers.
2022-12-15 21:40:01 +00:00
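A minimal example of why fft_fft exercises the new dtype support: its result is complex even for real input.

```python
import torch

x = torch.tensor([0.0, 1.0, 0.0, -1.0])   # real float32 input
spectrum = torch.fft.fft(x)
assert spectrum.dtype == torch.complex64
```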
Ramiro Leal-Cavazos a710237437
[custom op] Generalize shape library logic to work with dtypes (#1594)
* [custom op] Generalize shape library logic to work with dtypes

This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that are imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.

For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
2022-12-13 08:25:41 -08:00
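As a rough illustration of what a dtype rule computes (the helper name is hypothetical; real rules live alongside the shape functions in the library):

```python
import torch

# A dtype rule for a binary elementwise op can mirror PyTorch's own
# type promotion.
def add_result_dtype(lhs: torch.dtype, rhs: torch.dtype) -> torch.dtype:
    return torch.promote_types(lhs, rhs)

assert add_result_dtype(torch.int32, torch.float32) == torch.float32
```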