Commit Graph

213 Commits (f83a90585682c25367565fe8d612dd600e27ee04)

Author SHA1 Message Date
powderluv e55fc4deb5
Revert "E2E support for AtenRemainderScalarOp (#1119)" (#1190)
This reverts commit 34e207eeb5.
2022-08-08 22:59:57 -07:00
Vidush Singhal 34e207eeb5
E2E support for AtenRemainderScalarOp (#1119)
* E2E support for AtenRemainderScalarOp
2022-08-08 20:02:52 -04:00
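As a rough illustration of the op the two entries above add and then revert, here is a minimal sketch (not the exact test from #1119) of what an e2e case for `aten.remainder.Scalar` exercises:

```python
import torch

class RemainderScalarModule(torch.nn.Module):
    def forward(self, x):
        # A scalar divisor dispatches to aten::remainder.Scalar.
        return torch.remainder(x, 2.0)

print(RemainderScalarModule()(torch.tensor([-3.0, -1.0, 4.0, 7.0])))
# tensor([1., 1., 0., 1.])  -- the result takes the sign of the divisor
```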
Vidush Singhal b70548edff
Add decomposition and E2E support for Aten_EmbeddingBag (#1137)
* Add decomposition and E2E support for Aten_EmbeddingBag
2022-08-08 18:56:49 -04:00
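For context on `aten::_embedding_bag`, a small sketch of the functional form (the actual e2e test in #1137 may differ):

```python
import torch
import torch.nn.functional as F

weight = torch.arange(10, dtype=torch.float32).reshape(5, 2)  # 5 embeddings of dim 2
indices = torch.tensor([0, 2, 1, 4])
offsets = torch.tensor([0, 2])  # two bags: rows {0, 2} and rows {1, 4}
# mode="sum" reduces each bag; this routes through aten::_embedding_bag.
print(F.embedding_bag(indices, weight, offsets, mode="sum"))
# tensor([[ 4.,  6.],
#         [10., 12.]])
```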
Vivek Khandelwal c129a6de93 [MLIR][TORCH] Add support for dim=None to Aten[Var|Std]DimOp
PyTorch recently added support for `dim=None` in the `torch.var`
(5ca9b2b6fa)
and `torch.std` op (eb0e30e0bc).
This commit adds the corresponding support in torch-mlir.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-08-05 20:28:56 +05:30
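A quick sketch of the behaviour this adds support for (assuming a PyTorch build that already accepts `dim=None`):

```python
import torch

x = torch.randn(3, 4)
# dim=None reduces over all elements, matching the plain torch.var/torch.std calls.
print(torch.var(x, dim=None), torch.std(x, dim=None))
print(torch.var(x), torch.std(x))
```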
Vivek Khandelwal f2a0e32127 [MLIR][TORCH] Fix CI failure
This commit fixes the CI failure by temporarily adding the failing
test to xfail set.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-08-03 20:07:56 +05:30
Vidush Singhal fe3c9f5208
Add embedding Bag e2e case in xfail set (#1130) 2022-08-01 18:23:45 -04:00
Henry Tu 2c3b3606d0 Resolve remaining LTC CI failures (#1110)
* Replace CHECK_EQ with TORCH_CHECK_EQ

* Check value of TORCH_MLIR_USE_INSTALLED_PYTORCH during LTC build

* Update LTC XFAIL with NewZerosModule ops

* Explicitly blacklist _like ops

* Automatically blacklist new_/_like ops

* Prune away unused Python dependencies from LTC

* Add flag to disable LTC

* Autogen dummy _REFERENCE_LAZY_BACKEND library when LTC is disabled

* Implement compute_shape_var

* Removed Var tests from XFAIL Set

* XFAIL tests using _local_scalar_dense or index.Tensor

* Add StdDim tests to XFAIL set

* Autogen aten::cat
2022-07-30 09:40:02 -04:00
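A hypothetical sketch of the kind of name-based filtering the "blacklist new_/_like ops" bullet describes; the real autogen script's configuration and API may look different:

```python
def is_blacklisted(op_name: str) -> bool:
    # e.g. "new_zeros" from "new_zeros.out"; skip factory-style new_*/*_like ops.
    base = op_name.split(".")[0]
    return base.startswith("new_") or base.endswith("_like")

print([n for n in ["new_zeros", "empty_like", "add.Tensor"] if not is_blacklisted(n)])
# ['add.Tensor']
```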
Jae Hoon (Antonio) Kim 425362263b Clean up Autogen (#1112)
* Remove unnecessary sed in autogen

* Remove .pyc files from VCS
2022-07-30 09:40:02 -04:00
Henry Tu 70395de197 Resolve CI testing failure for Lazy Tensor Core (#1088)
* Xfail unsupported ops

* Register FuncDialect

* Include dynamic_ir in build

* Code reformat

* Enable LTC tests for macOS and Source Build
2022-07-30 09:40:02 -04:00
Henry Tu cec74b8d37 Blacklist _convolution op (#1048)
* Blacklist _convolution op in LTC

* Removed duplicate Torch_AtenSelectScatterOp instance from autogen .td

* Removed duplicate Torch_AtenSliceScatterOp instance from autogen .td
2022-07-30 09:40:02 -04:00
Henry Tu f5acad8512 Prune xfail e2e LTC tests & fix bugs from functionalization pass (#1044)
- Pruned number of xfailed e2e LTC tests from 305 to 134
  - Reviewed every failure to ensure the error genuinely warrants an xfail
- Fixed bug where non-tensor outputs of LTC computation had `.to('cpu')` called, which caused a failure and inflated the xfail count
- Fixed bug with `HBC_basic` test where a constant tensor was created in its constructor without being declared as a buffer, which prevented the device from being updated when the parent `torch.nn.Module` got moved to the `lazy` device
  - Note that this test is still xfail'd due to some unsupported ops. Left a comment about some potential issues that may arise if it gets reenabled in the future
- Updated autogen `GeneratedTorchOps.td` to reflect the latest set of supported ops
- Renamed `aten.zero.functionalization` to `aten.zero` to reflect upstream PyTorch changes
2022-07-30 09:40:02 -04:00
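The `HBC_basic` fix mentioned above follows the usual PyTorch pattern: constants a module needs at runtime should be registered as buffers so they move with the module. A minimal sketch (names are illustrative, not from the test):

```python
import torch

class ConstHolder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # A buffer (unlike a plain attribute) follows Module.to(device),
        # so it would also move when the module is placed on the lazy device.
        self.register_buffer("offsets", torch.tensor([0, 2, 4]))

m = ConstHolder().to("cpu")
print(m.offsets.device)
```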
Henry Tu dfcc26556a Added e2e LTC tests (#916)
* Added e2e LTC Torch MLIR tests

* Fix seed for reproducibility

* Check if computation is None before getting debug string

* Updated unit tests, and added numeric tests

* Print name of the model layer that fails numeric validation

* Run LTC e2e test with CI/CD

* Set seed in main function, instead of beginning of execution

* Add comment to specify number of digits of precision

* Fixed typo

* Remove tests for LTC example models

* Added LTC option to torchscript e2e

* Implement compile and run for LTC e2e test

* xfail all tests that use ops that aren't currently supported
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 1bde00c73d Fix LTC Decoupling (#815)
* Initial changes

* Fix up native functions

* Further fix decoupling

* Remove unnecessary ops

* Formatting and copyright banners

* Add pytorch submodule
2022-07-30 09:40:02 -04:00
Quinn Dawkins 647e75e029
Allow expanding and collapsing in aten::view (#1082)
- Supports cases where the view op expands and collapses dims
simultaneously. This does not handle the case where it is neither
expanding nor collapsing (e.g., [2, 3] -> [3, 2]).
 - Additionally fixes a previous bug with adding 1-sized dims on both
sides of a tensor with aten.view
2022-07-20 17:35:51 -04:00
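A small sketch of the newly supported case, where one `aten.view` collapses some dims while expanding others:

```python
import torch

x = torch.randn(2, 3, 4)
# Collapses (2, 3) -> 6 while expanding 4 -> (2, 2) in the same view.
print(x.view(6, 2, 2).shape)  # torch.Size([6, 2, 2])
```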
Sean Silva 85858d2743 Bump LLVM to 889c6f3996769a991a24da957f597e7500d158e7
The biggest change here is to upgrade RefineTypes to the new sparse
dataflow framework.

Smaller changes:
- minor changes to type parsing
- suppress warnings in e2e tests
2022-07-15 13:36:04 -07:00
Ramiro Leal-Cavazos afdaa60dd4
Fix typo in `inputRank` check of `AtenBatchNormOp` (#1046)
The original conversion pattern for `AtenBatchNormOp` required that
the input rank be greater than 2; however, the only
expectation in the conversion pattern and in Pytorch is that the input
rank is greater than 1, since the second dimension of the input must
match the size of the `weight`, `bias`, `runningMean`, and
`runningVar` inputs. This commit fixes the `inputRank` check.
2022-07-15 09:35:59 -07:00
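Illustration of the rank-2 case the relaxed check now admits; only dim 1 of the input has to match the size of the weight/bias/running stats:

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 4)  # rank-2 input: (N, C)
y = F.batch_norm(x, running_mean=torch.zeros(4), running_var=torch.ones(4),
                 weight=torch.ones(4), bias=torch.zeros(4), training=False)
print(y.shape)  # torch.Size([8, 4])
```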
Suraj Sudhir 5e2012c7dd
[tosa] aten.max.dim , aten.slice.tensor ops (#1027)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-13 10:10:18 -07:00
George Petterson a08ff0d7f2 Add lowering for _convolution 2022-07-11 11:03:03 +05:30
Suraj Sudhir d38f2cae5b
[tosa] aten.transpose.int support (#1017)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-07-07 13:05:33 -07:00
Ramiro Leal-Cavazos f204210266
[LINALG] Fix handling of size-1 dims in `aten.view` again. (#992)
A previous fix to the handling of size-1 dims in
`aten.view` (https://github.com/llvm/torch-mlir/pull/962) resulted in
the wrong grouping of dimensions when size-1 dims were between two
dims of size greater than 1. This commit fixes that.
2022-06-30 16:39:25 -07:00
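A sketch of the shape pattern this fixes, where a size-1 dim sits between two larger dims and must be grouped correctly when collapsing:

```python
import torch

x = torch.randn(2, 1, 3)
# The size-1 dim between the 2 and the 3 has to be grouped with one of its
# neighbours when the view collapses everything into a single dim.
print(x.view(6).shape)  # torch.Size([6])
```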
Suraj Sudhir bb576c2cb3
[tosa] aten.embedding op support (#991)
Enables BERT legalization.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-06-30 13:13:52 -07:00
Gaurav Shukla 1be604bfd3 [LINALG] Lower `aten.Matmul` to `linalg.BatchMatmul`
This commit lowers `aten.matmul` to `linalg.BatchMatmul` under the
following conditions:
1. The result of matrix multiplication must have batch dimensions,
   i.e., rank greater than 2.
2. The resultant matrix must have at most 1 dynamic batch dimension.

It also handles broadcasting of batch dimensions when batch dimensions
of the matrices are broadcastable.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-06-25 10:58:06 +05:30
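A sketch of an input that meets the conditions above (batched result, broadcastable batch dims):

```python
import torch

a = torch.randn(4, 1, 5, 6, 7)  # batch dims (4, 1, 5), matrix (6, 7)
b = torch.randn(3, 5, 7, 2)     # batch dims (3, 5),    matrix (7, 2)
# Batch dims broadcast to (4, 3, 5); the result has rank > 2, so the
# lowering described above produces a batch matmul.
print(torch.matmul(a, b).shape)  # torch.Size([4, 3, 5, 6, 2])
```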
Ramiro Leal-Cavazos 8b94759303
[LINALG] Fix handling of size-1 dims in `aten.view` (#962)
This commit adds support for several size-1 dims in a row in an
expansion using `aten.view`.
2022-06-22 15:58:40 -07:00
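For example, an expansion that inserts several size-1 dims in a row:

```python
import torch

x = torch.randn(6)
# Several consecutive size-1 dims appear in the expanded shape.
print(x.view(2, 1, 1, 3).shape)  # torch.Size([2, 1, 1, 3])
```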
Vivek Khandelwal 77ab31641f [MLIR][TORCH] Add decomposition of aten.numpy_T op
This commit adds the decomposition of `aten.numpy_T` op into
`aten.t` or `aten.permute` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:01:22 +05:30
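The decomposition mirrors the PyTorch semantics of `Tensor.T` (which maps to `aten::numpy_T`): rank-2 tensors use `aten.t`, higher ranks are a full dim reversal via `aten.permute`. A quick sketch:

```python
import torch

x = torch.randn(2, 3, 4)
print(x.T.shape)                             # torch.Size([4, 3, 2])
print(torch.equal(x.T, x.permute(2, 1, 0)))  # True
```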
Vivek Khandelwal aed5517fda [MLIR][TORCH] Add integer dtype support for aten.rsub.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-15 16:46:28 +05:30
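For reference, `aten.rsub.Scalar` computes `other - alpha * self`; the integer-dtype case this commit covers looks like:

```python
import torch

x = torch.tensor([1, 2, 3], dtype=torch.int64)
print(torch.ops.aten.rsub.Scalar(x, 10))           # tensor([9, 8, 7])
print(torch.ops.aten.rsub.Scalar(x, 10, alpha=2))  # tensor([8, 6, 4])
```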
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:28 +05:30
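A numerical sketch of the decomposition described above (`beta * input + alpha * bmm(batch1, batch2)`):

```python
import torch

inp = torch.randn(3, 4, 6)
b1, b2 = torch.randn(3, 4, 5), torch.randn(3, 5, 6)
beta, alpha = 0.5, 2.0

ref = torch.baddbmm(inp, b1, b2, beta=beta, alpha=alpha)
dec = torch.bmm(b1, b2) * alpha + inp * beta   # bmm + mul.Scalar + add.Tensor
print(torch.allclose(ref, dec, atol=1e-6))     # True
```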
Vivek Khandelwal 06750815d1 [tosa] Support for AtenAvgPool2d op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Maksim Levental cec5aeedb0
add ci tests (#754) 2022-05-25 14:59:59 -05:00
Yi Zhang ec0e9e0bc7 Add -s flag to run e2e tests sequentially
A user might want to avoid the extra layer of the multiprocessing library
for debugging purposes. In such cases, the -s flag can be used to force
sequential execution.
2022-05-11 21:16:41 -04:00
Prashant Kumar 33c9d256ea [REFBACKEND] Add support for returning multiple different return types.
Added dynamic registration of the return function to the execution
engine. This makes sure that different/multiple return types are supported.
Also, updated the .style.yapf indentation to 4.
2022-04-21 09:02:30 +05:30
Sean Silva 3b5310d6d2 Move COMMON_TORCH_MLIR_LOWERING_XFAILS into test_suite
That way, downstreams don't have to duplicate this list.

Also, remove "external config" feature, since it is subsumed by just
importing the test suite.
2022-04-19 14:32:58 -07:00
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
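The op being lowered is what `F.max_pool2d(..., return_indices=True)` dispatches to:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)
# return_indices=True dispatches to aten::max_pool2d_with_indices and returns
# both the pooled values and the (flattened per-plane) indices of the maxima.
values, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
print(values.shape, indices.shape)  # torch.Size([1, 1, 2, 2]) torch.Size([1, 1, 2, 2])
```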
Ashay Rane d3c08376af
test: add end-to-end test for aten.neg (#760) 2022-04-15 12:37:57 -07:00
Maksim Levental 24f9de7120
Fixes https://github.com/llvm/torch-mlir/issues/751 where `torch.bool` is parsed as signless `i1`. (#752) 2022-04-13 12:28:27 -05:00
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
Prashant Kumar fb8cb0c5f3 [LINALG] Add the lowering of `aten.ne.Scalar` op
The lowering of `aten.ne.Scalar` op has been added to
the linalg backend.
2022-04-05 21:07:28 +05:30
Anup Gangwar ccf924d3df
[tosa] Support for Aten[Gelu|GeluBackward] ops (#720)
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>

Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
2022-03-30 17:00:55 -07:00
Sean Silva 0378c75b35 Centralize all test serialization logic. 2022-03-28 10:17:13 -07:00
Anup Gangwar 5d7a6c2976
[tosa] Support for Aten[Unsqueeze|Contiguous|Dropout|Reshape|View] ops (#700) 2022-03-25 14:15:07 -07:00
Sean Silva 6b637a9fd9 Move e2e test definitions into the `torch_mlir_e2e_test` package
This is the first step to making the e2e framework convenient to use
by downstream backends.
2022-03-25 13:56:41 -07:00
Vivek Khandelwal 88c216da13 [MLIR][TORCH] Add support for same input and output shapes for view op
This commit adds support for the cases of view op where the rank and
the shapes of the input and result are equal.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-25 22:26:10 +05:30
Gaurav Shukla 02b6d04eb4 [LINALG] Add E2E support for `aten.zero_` op
This commit adds decomposition of `aten.zero_` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-25 12:46:50 +05:30
Qiang Fu f7c7bb800c
Add non-default dtype support for a few elementwise math ops. (#687)
* fix type inference
* fix Torch2Linalg conversion
* add test cases
2022-03-23 13:35:43 -07:00
max fe8ac57e6d This PR implements an eager mode backend for PyTorch through the torch-mlir framework. This is accomplished by overriding the `__torch_dispatch__` class method on wrapper subclass `TorchMLIRTensor(torch.Tensor)`.
Effectively, this mode works by compiling op by op as the NN is eagerly executed by PyTorch. That compilation entails building a representation of the op that can be `torch.jit.script`ed, importing it with `ModuleBuilder`, and then executing it (e.g., with `RefBackendLinalgOnTensorsBackend`). This mode includes a fallback to conventional PyTorch if anything in the torch-mlir compilation process fails (e.g., an unsupported op).

Currently, all e2e tests pass except for two that involve an upstream PyTorch bug (https://github.com/pytorch/pytorch/issues/74400).

High priority next steps:

1. A compile cache in order to speed up reruns of the same NN.
2. Integration with IREE (though not in this repo).
3. Integration with `torch.distributed`.
2022-03-22 14:42:57 -07:00
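A minimal, hypothetical sketch of the wrapper-subclass mechanism described above; the class name and fallback here are illustrative, not the actual `TorchMLIRTensor` implementation:

```python
import torch
from torch.utils._pytree import tree_map

class EagerCompileTensor(torch.Tensor):
    # Wraps a regular tensor and intercepts every aten op in __torch_dispatch__.
    @staticmethod
    def __new__(cls, elem):
        r = torch.Tensor._make_wrapper_subclass(cls, elem.size(),
                                                dtype=elem.dtype, device=elem.device)
        r.elem = elem
        return r

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        print("intercepted", func)  # a real backend would compile/run via torch-mlir here
        unwrap = lambda t: t.elem if isinstance(t, cls) else t
        # Fallback path: run the regular eager kernel on the wrapped tensors,
        # then rewrap so later ops keep being intercepted.
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
        return tree_map(lambda t: cls(t) if isinstance(t, torch.Tensor) else t, out)

y = EagerCompileTensor(torch.randn(3)) + 1  # prints the intercepted aten op
```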
Gaurav Shukla 7c3ba25238 [LINALG] Add decomposition of `aten.dropout` op
- This commit adds decomposition of `aten.dropout` op. It also covers the
  training mode of the same op.
- It also adds lowering of `aten.sub.float` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-22 13:14:49 +05:30
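A sketch of the training-mode decomposition: sample a Bernoulli keep-mask and rescale by `1/(1-p)` so the output matches the input in expectation (eval-mode dropout is the identity):

```python
import torch

x, p = torch.ones(4, 4), 0.25
mask = (torch.rand_like(x) > p).to(x.dtype)  # keep each element with prob 1 - p
out = x * mask / (1 - p)
print(out.mean())  # ~1.0 on average, matching dropout's expected value in training mode
```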
Vivek Khandelwal 5b9bdfaf3f [MLIR][TORCH] Add E2E support for aten._to_copy op
This commit decomposes `aten._to_copy` op into
`valsem.aten.copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 4c0cd5c23d [MLIR][TORCH] Add E2E support for aten.expand_as op
This commit decomposes `aten.expand_as` op into `aten.broadcast_to` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 12:47:39 +05:30
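The decomposition relies on `expand_as` being a pure broadcast to the other tensor's shape:

```python
import torch

x, y = torch.randn(1, 3), torch.randn(4, 3)
print(torch.equal(x.expand_as(y), torch.broadcast_to(x, y.shape)))  # True
```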
Prateek Gupta 7256c9e395 [TORCH][MLIR] Fix the return types of `aten.native_layer_norm`.
This commit fixes the 2nd and 3rd return types of the `aten.native_layer_norm`.
Previously the mean and rSTD were returned with reduction dims removed.
This commit fixes this and keeps the reduction dims of the results.

Signed-Off-By: Prateek Gupta <prateek@nord-labs.com>
2022-03-17 12:08:32 +05:30
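For illustration, the shapes involved (per upstream PyTorch, mean and rSTD keep the reduced dims as size 1):

```python
import torch

x = torch.randn(2, 3, 4)
out, mean, rstd = torch.ops.aten.native_layer_norm(x, [4], None, None, 1e-5)
# mean/rstd keep the reduced dim as size 1: (2, 3, 1) rather than (2, 3).
print(out.shape, mean.shape, rstd.shape)
```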
Vivek Khandelwal 8da7d90611 [MLIR][TORCH] Add E2E support for aten.index_put op
This commit decomposes `aten.index_put` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-16 22:02:02 +05:30
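For reference, the decomposed op corresponds to the functional `index_put`:

```python
import torch

x = torch.zeros(3, 3)
idx = (torch.tensor([0, 2]), torch.tensor([1, 1]))
# Functional form of Tensor.index_put_; accumulate=True would add instead of overwrite.
y = x.index_put(idx, torch.tensor([5.0, 7.0]))
print(y)
```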