Commit Graph

901 Commits (11cd7cd9e7705fd69f40fabdad2e0e5b5b738914)

Author SHA1 Message Date
Ramiro Leal-Cavazos 86095dd432
Replace linear transformation with `low` and `high` in test inputs (#1485)
This commit replaces test inputs that were being linearly transformed
by multiplying the input tensor and adding/subtracting constants with
inputs that use the `low` and `high` keyword arguments instead.
2022-10-14 18:52:07 +00:00
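A minimal sketch of the pattern change described in the commit above; `rand` here is a hypothetical stand-in for the test suite's helper, not its actual implementation:

```
import torch

# Hypothetical stand-in for the test suite's `tu.rand` helper.
def rand(*sizes, low=0.0, high=1.0):
    return torch.rand(*sizes) * (high - low) + low

# Before: linearly transforming the output of rand by hand.
old_input = 2 * torch.rand(3, 4) - 1            # values in [-1, 1)

# After: the same range expressed via keyword arguments.
new_input = rand(3, 4, low=-1.0, high=1.0)
```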
Gleb Kazantaev bdb5083d33
New ops support & enhancements (#1494)
* New ops support & enhancements

* Enabled xfail ltc tests
2022-10-14 10:28:21 -04:00
Prashant Kumar 3a2cd23380 [LINALG] Add lowering for aten::round op.
-- Added the lowering for aten::round op.
-- Added the folding for integer cases.
2022-10-13 02:41:26 +05:30
Sean Silva c8280d67bd Remove the heavydep tests
We originally added these to help bring up more complex models with
heavier dependencies. However, over time it has become clear that these
models usually require more than just heavier dependencies -- they often
require a nontrivial amount of "one-off" code to extract the relevant
parts of the model and compile them. This is not a good fit for a
component in the core Torch-MLIR repo.

However, in the community, nod.ai has developed the ["Shark
Tank"](https://github.com/nod-ai/SHARK/tree/main/tank) which has all the
appropriate code to wrangle these models and organize them. We intend to
more heavily lean on that as a community and improve the symbiosis
there to serve the role that these heavydep tests were meant to play.
2022-10-12 05:19:36 -07:00
Sean Silva 6403c0e56f torch_mlir.compile: allow custom backend_legal_ops set
Allow customizing `backend_legal_ops` for "torch" output type, since we
don't know which backend will be used (it might be a custom backend).
We don't allow customizing the `backend_legal_ops` for the other output
types (Linalg, TOSA, MHLO) since those backends control their set of
legal ops directly.

Fixes #1418
2022-10-12 04:21:22 -07:00
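A hedged usage sketch of the option described above; the toy module and the op-name spelling inside `backend_legal_ops` are assumptions, not confirmed API details:

```
import torch
import torch_mlir

class Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.flatten(x, start_dim=1)

# Keep aten.flatten.using_ints intact for a hypothetical custom backend to
# handle, instead of letting torch-mlir decompose it. Per the commit message,
# this is only customizable with the "torch" output type. The op-name
# spelling below is an assumption.
module = torch_mlir.compile(
    Flatten(), torch.ones(2, 3, 4),
    output_type="torch",
    backend_legal_ops=["torch.aten.flatten.using_ints"])
```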
Abhishek Varma 61db1b5c4d
[MLIR][TORCH] Add e2e support for `aten.Mish` op (#1470)
-- This commit adds e2e support for `aten.Mish` op.
-- The `aten.Mish` op is decomposed as follows:
    Mish(x) = x * Tanh(Softplus(x))

Signed-off-by: Abhishek Varma <avarma094@gmail.com>

Signed-off-by: Abhishek Varma <avarma094@gmail.com>
2022-10-11 14:03:10 -07:00
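The decomposition stated above can be checked numerically in plain PyTorch; a small sketch:

```
import torch
import torch.nn.functional as F

x = torch.randn(4, 5)
# Mish(x) = x * Tanh(Softplus(x)), as stated in the commit message.
decomposed = x * torch.tanh(F.softplus(x))
assert torch.allclose(F.mish(x), decomposed, atol=1e-6)
```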
Jae Hoon (Antonio) Kim 3e08f5a779
Fix `fromIntArrayRef` call (#1479)
* Fix fromSymint call

* Update PyTorch requirement

* Re-enable LTC
2022-10-11 13:29:07 -04:00
Ashay Rane aefbf65e27
Disable LTC and update PyTorch (#1472)
* build: disable LTC again so that we can bump PyTorch version

When built using PyTorch's master branch, the LTC code has been failing
to build for a few days.  As a result, the PyTorch version referenced by
Torch-MLIR is stalled to the one from October 4th.

In an effort to advance the PyTorch version, this patch disables LTC, and
a subsequent patch will advance the PyTorch version.

* update PyTorch version to 1.14.0.dev20221010

Also disables the `UpSampleNearest2dDynamicFactor_basic` e2e test, since
the (PyTorch) oracle differs from the computed value for both the
refbackend and the eager_mode backends.
2022-10-10 23:05:40 -05:00
Gaurav Shukla da90a25f90 [MLIR][TORCH] Add E2E support for `aten.[div.int|bitwise_or.Tensor]` ops
This commit adds lowering of `aten.div.int` and `aten.bitwise_or.Tensor`
ops. Both these ops are required in order to support bloom_560m model.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-10-10 22:28:51 +05:30
Vivek Khandelwal d3cc3f1aff [tosa] Add lowering for aten.to.dtype and aten._to_copy op
This commit adds the TorchToTosa lowering for `aten.to.dtype` and
`aten._to_copy` op.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-10-06 12:00:25 +05:30
Daniel Ellis e7b2b84a66 Update torch-mlir-opt error message. 2022-10-05 15:02:10 -04:00
Jae Hoon (Antonio) Kim c57d801260
Fix functionalize_aten_op calls for symint ops (#1459)
* Fix functionalize_aten_op calls for symint ops

* Update PyTorch version
2022-10-05 10:23:48 -04:00
Gleb Kazantaev 708fa346a6
Fix Base Lazy Backend Type Conversion (#1412)
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend

* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types

* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions

* Test updates

* Move back module variable name

* Remove RefineTypes from TorchMlirLoweringContext::Build()

* Rename pass; Remove passes from base lazy backend

* Rename pass to VerifyBackendContractPass

* Aligned cmd pass name; Fixed TorchConversion passes registration
2022-10-04 15:53:28 -07:00
Daniel Ellis 2ba71af651 Add support for mv decomposition. 2022-10-04 11:34:45 -04:00
Prashant Kumar 6777a9484d [LINALG] Add lowering for the aten.upsample_nearest2d op. 2022-10-04 17:20:29 +05:30
Daniel Ellis 4d47f1671a Reject dictionary inputs when tracing.
The underlying error message was misleading.  See https://github.com/llvm/torch-mlir/issues/1425
2022-09-30 16:02:35 -04:00
AmosLewis 940959589b [MLIR][TORCH] Add Byte and Char Dtype support 2022-09-30 13:19:31 +05:30
Ashay Rane 0b46462528
Miscellaneous fixes for Windows builds (#1376)
* test: allow spaces in path to Python executable

On Windows, the path to the Python binary may contain spaces, so this
patch adds quotes around the path to the python executable.

Thanks to @sstamenova for suggesting the fix!

* python: remove header file that causes Windows build failures

Similar to https://reviews.llvm.org/D125284, we can safely remove this
header file without affecting the build on either Linux or macOS.  It is
necessary to remove this header file on Windows builds since otherwise
it causes build errors.

* python: drop `TORCH_API` from function defined in Torch-MLIR

`TORCH_API` should apply to functions that are either exported by
libtorch.so or ones that are imported from libtorch.so by its downstream
consumers (like Torch-MLIR).  Neither case applies to the
`importJitFunctionAsFuncOp()` function, since it is defined in
Torch-MLIR (and thus outside libtorch.so).  This patch fixes the problem
by dropping `TORCH_API` from that function's declaration.

* python: make output of class annotations deterministic

The `class-annotator-repr.py` test checks for class annotations in a
specific order, but prior to this patch, the order was
non-deterministic, since the code iterated on an _unordered_ map.

This patch makes the iteration order deterministic through two changes:
1. using a sorted map
2. using the class qualified name instead of the address of the class in
memory

* test: use Python3_EXECUTABLE as interpreter path for consistency

This ensures that tests use the Python3 version that was detected using
CMake, instead of whichever python version that happens to be in the
PATH variable when invoking the test.

* test: fix RUN string

The parenthesis syntax does not run on Windows (the shell interprets the
`(` character as part of the path).  Moreover, the ODR violation in the
comment no longer seems to apply.

* python: port parallel test framework to Windows

Since Windows does not support `fork` natively, Python's
`multiprocessing` module needs to use `spawn` on Windows.  However, to
use `spawn`, the multiprocessing module serializes (or pickles) the
worker function and its arguments.  Sadly, the multiprocessing module
(both the default one in Python and the one that is extended in PyTorch)
is unable to serialize lambda functions (see
https://stackoverflow.com/a/19985580 for details).

Unfortunately, given how our tests are structured, we require that the
function under test is passed as an argument to another function, so we
cannot sidestep our use of lambda functions.

To resolve this problem, this patch makes use of the `multiprocess` and
`dill` Python modules, which together offer a multiprocessing mechanism
that can serialize lambda functions.  The multiprocess module also
offers a process pool, which simplifies the code for our parallel
testing framework.
2022-09-29 12:07:43 -05:00
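A minimal illustration of the pickling point above, assuming the third-party `multiprocess` package is installed; the stock `multiprocessing` module would fail to pickle the lambda under `spawn`:

```
# The stock multiprocessing module cannot pickle lambdas under "spawn",
# but the dill-backed `multiprocess` package can.
from multiprocess import Pool

if __name__ == "__main__":
    with Pool(4) as pool:
        # A lambda worker: unpicklable by the standard library's pickler.
        print(pool.map(lambda x: x * x, range(8)))
```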
Vivek Khandelwal 6db513c51d
[tosa] Add support for some cases of aten.broadcast_to op (#1429)
This commit adds support for TorchToTosa lowering of
`aten.broadcast_to` op for cases:
1.) When the rank of input and output tensor is equal.
2.) When the rank of input tensor is zero.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-29 09:40:56 -07:00
Jae Hoon (Antonio) Kim fa5a8e21a3
Propagate parameter names to TorchMlirComputation (#1420)
* Propagate parameter name to MLIR

* Add TorchMlirNode Constructor Hook

* Make func_op mutable

- Purpose of this is to allow modification of func_op by subclass
  backend

* Clean up unnecessary changes

* Remove unnecessary attribute case

* Address PR comments
2022-09-29 11:43:39 -04:00
JakopinA 8ef0c874c2
Implement Expand/Collapse Functionality for Aten.View (#1353) 2022-09-27 11:08:14 -07:00
武家伟 c03aa63325
[MLIR] Add canonicalizer for aten.slice.t op (#1413)
* [MLIR] Add canonicalizer for aten.slice.t op

* Add mlir tests and strengthen the canonicalizer

* rename variable

Co-authored-by: Vremold <xremold@gamil.com>
2022-09-26 14:35:50 -07:00
Jae Hoon (Antonio) Kim 3e27aa2be3
Fix as_strided/slice symint (#1401)
* Fix as_strided symint

* Re-enable LTC tests

* Re-enable LTC

* Add hardtanh shape inference function

* Fix slice symint
2022-09-26 12:16:49 -04:00
武家伟 ab7aa01b1e
[MHLO] Add torch-to-mhlo e2e support for aten.gather op (#1410)
* Add torch-to-mhlo e2e support for aten.gather op 

* Add more e2e tests for torch.aten.gather op
2022-09-25 22:07:46 +08:00
Vivek Khandelwal bc11e1aba6 [tosa] Add "-tosa-to-tensor" pass in the lowering pipeline
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-24 10:03:07 +05:30
Tanyo Kwok 72e422b589
Add relu6 and binary broadcasts (#1408)
* Add relu6 and binary broadcasts
2022-09-23 20:39:15 +08:00
Sean Silva 7a77f9fe3d Add a way to turn off crashing tests
This adds a very long and obnoxious option to disable crashing tests.
The right fix here is to use the right multiprocessing techniques to
ensure that segfaulting tests can be XFAILed like normal tests, but we
currently don't know how to implement "catch a segfault" in Python
(patches or even just ideas welcome).

Motivated by #1361, where we ended up removing two tests from *all*
backends due to a failure in one backend, which is undesirable.
2022-09-23 05:01:39 -07:00
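One possible shape of the "catch a segfault" idea solicited above, sketched with the standard library (not the project's implementation, and subject to the same pickling caveats noted in the Windows parallel-framework commit earlier in this log): run each test in a child process and inspect its exit code.

```
import multiprocessing

def run_isolated(test_fn):
    """Run test_fn in a child process so a segfault can't kill the runner."""
    proc = multiprocessing.Process(target=test_fn)
    proc.start()
    proc.join()
    if proc.exitcode is not None and proc.exitcode < 0:
        # A negative exit code is the killing signal, e.g. -11 for SIGSEGV.
        return f"CRASHED (signal {-proc.exitcode})"
    return "PASSED" if proc.exitcode == 0 else f"FAILED (exit {proc.exitcode})"
```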
Vivek Khandelwal 5090ac9359
[MLIR][TORCH] Add a test for sum.dim_IntList op working for tosa (#1387)
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>

Co-authored-by: Suraj Sudhir <16977902+sjarus@users.noreply.github.com>
2022-09-20 11:38:09 -07:00
Vivek Khandelwal 1ffd42bbde
[MLIR][TORCH] Add TorchToTosa lowering for aten.broadcast_to op (#1386)
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-09-20 10:04:51 -07:00
武家伟 0e2e94d542
Add torch-to-mhlo e2e support for AtenArangeStartStepOp (#1385)
Co-authored-by: Vremold <xremold@gamil.com>
2022-09-20 22:31:24 +08:00
Jae Hoon (Antonio) Kim 8967463980
Fix symint ops and blacklist `lift_fresh_copy` (#1373)
* Add symint to native functions yaml

* Re-enable LTC

* Fix new_empty_strided and narrow_copy
2022-09-20 10:16:04 -04:00
武家伟 4f3cd236dd
Strengthen the shape inference for aten.arange-like ops (#1367)
Strengthen the shape inference for aten.arange-like ops by
1. registering aten.sub and aten.ceil.Scalar ops and designing folders for them.
2. registering a new constant-like op, Torch::ConstantNumberOp, and designing a canonicalizer for it.
2022-09-20 12:40:19 +08:00
Vivek Khandelwal 04f3a4ffce [MLIR][TORCH] Add support for bool element type for aten.sum[.dim_IntList] op
This commit adds bool element type support for `aten.sum` and
`aten.sum.dim_IntList` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-17 09:18:34 +05:30
Ashay Rane 1895b581c4
shape-lib: generate string as multiple lines to work with MSVC (#1370)
As @oroppas identified, literal strings that are over 16,380 characters
cause the MSVC compiler to throw an error (C2026), eventually causing
the Windows build of Torch-MLIR to fail because the length of the
generated MLIR for the shape library crosses the allowed threshold.

This patch fixes the problem by making the Python script generate one
literal string per line to satisfy the MSVC compiler.

Thanks to @oroppas for the bulk of the effort required to resolve this!
2022-09-16 15:16:01 -05:00
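A hedged sketch of the generator-side fix described above; adjacent C++ string literals are concatenated at compile time, so emitting one literal per line keeps each under the MSVC limit:

```
def emit_cpp_literal(text: str) -> str:
    """Emit one C++ string literal per source line; adjacent literals are
    concatenated by the compiler, so no single literal exceeds MSVC's
    16,380-character limit (error C2026)."""
    out = []
    for line in text.splitlines(keepends=True):
        escaped = (line.replace("\\", "\\\\")
                       .replace('"', '\\"')
                       .replace("\n", "\\n"))
        out.append(f'"{escaped}"')
    return "\n".join(out)
```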
Ashay Rane 2bb5f4d8fe
build: update llvm tag to 4d4ca6c9 (#1359)
Summary of changes:
 - Updated emitAccessorPrefix since the default value has changed
   (https://reviews.llvm.org/D133179)
 - Updated RefineTypes pass since Lattice::isUninitialized() is removed
   (https://reviews.llvm.org/D132800)
 - Updated MHLO tag so that it builds with the updated LLVM tag
 - Disabled two tests that cause segfaults in the TOSA backend (see Issue
   #1361)
2022-09-13 21:24:43 -05:00
gpetters94 48418b9c22
Fold away type_as (#1358) 2022-09-12 18:59:12 -04:00
Vivek Khandelwal 71b1f0dd7a [MLIR][TORCH] Add E2E support for aten.index.Tensor_hacked_twin op
This commit adds lowering of `index.Tensor_hacked_twin` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-12 21:47:18 +05:30
George Petterson a12b9c4492 Add lowering for aten::cumsum 2022-09-12 09:28:07 +05:30
Vivek Khandelwal 326f21229e [MLIR][TORCH] Fix shape calculation for aten::pow.Tensor_Tensor op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 21:14:12 +05:30
Vivek Khandelwal e35741fb1d [MLIR][TORCH] Add E2E support for aten.bitwise_not op
This commit adds lowering of `aten.bitwise_not` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 17:52:12 +05:30
Vivek Khandelwal 7dfadc2498 [MLIR][TORCH] Add E2E support for aten.lift_fresh_copy op
This commit adds lowering of `aten.lift_fresh_copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 12:32:16 +05:30
Vivek Khandelwal c19fccfca2 [MLIR][TORCH] Add E2E support for aten.pow.Tensor_Tensor op
This commit adds lowering of `aten.pow.Tensor_Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-09-08 10:01:42 +05:30
武家伟 6a1893a517
[MLIR][MHLO] Add AtenFrobeniusNormDimOp and add its conversion pattern to MHLO and linalg (#1306)
* Add aten.frobenius_norm.dim op and init its conversion pattern to linalg and MHLO.
* Run symbolic-shape-optimization before hlo-legalize-to-linalg to fit more MHLO e2e tests.
2022-09-08 10:15:36 +08:00
Ashay Rane 93f7c0ceb5
build: update llvm tag to d2613d5b (#1343)
Summary of changes:
 - Update the dataflow analysis in RefineTypes.cpp
 - Add tosa-to-arith pass after tosa-to-linalg pass, since
   tosa-to-linalg (and canonicalizations) can produce tosa.const() ops
 - Fixed warning about not marking `matchAndRewrite` as override
2022-09-07 14:35:14 -05:00
Gaurav Shukla 99093d0623 [TORCH] Add decomposition of `aten.linear` op
This commit adds decomposition of `aten.linear` op. Due to limited
support for dynamic dimensions in the TOSA backend, this
decomposition is currently disabled for the TOSA backend.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-09-07 16:58:27 +05:30
Quinn Dawkins cc86cc0f02
Revert "Implement Non-Expand/Collapse Functionality for Aten.View (#1309)" (#1347)
Reverting commit a6a48ba233 to revise unit tests and address dynamic shape handling based on comments in #1309
2022-09-07 01:38:11 -04:00
JakopinA a6a48ba233
Implement Non-Expand/Collapse Functionality for Aten.View (#1309)
Focuses on statically sized cases such as [2, 3] -> [3, 2].
2022-09-06 14:46:04 -04:00
Tanyo Kwok 512f2d9c23
Add decomposition to aten.native_layer_norm (#1332)
* Add decomposition to aten.native_layer_norm

* fix ci error
2022-09-02 09:29:22 +08:00
Sean Silva 0f40d98009 Ensure that tests have unique names 2022-08-29 16:25:23 -07:00
Sean Silva 079bff33f1 Sort tests before anything else.
In the sequential case we weren't sorting, which was confusing.
2022-08-29 16:23:56 -07:00
Sean Silva e16b43e20b Remove "torchscript" association from the e2e framework.
We use it for more than TorchScript testing now. This is a purely
mechanical change to adjust some file paths to remove "torchscript".

The most perceptible change here is that now e2e tests are run with

```
./tools/e2e_test.sh
```

instead of:

```
./tools/torchscript_e2e_test.sh
```
2022-08-29 14:10:03 -07:00
Sean Silva a507ae498a Avoid cascading failures when compiler crashes
Change logic so that we never run the multiprocessing codepath with only
1 worker. That configuration was causing all subsequent tests to
spuriously fail if one test failed with a crash (this was easy to see
after sorting the tests). That configuration was the one used by the CI.

Also, sort tests to make output nicer.
Also, make verbose mode more verbose so that it is easy to see in `-s`
mode which test is crashing.
2022-08-26 16:54:00 -07:00
Jae Hoon (Antonio) Kim 8e880a2d00
Fix symint related functionalization ops (#1289)
* Fix symint related functionalization ops

* Remove zeros xfail from LTC tests
2022-08-26 16:13:28 -04:00
Ramiro Leal-Cavazos e153694c94
Add TestUtils.randint + replace torch.randint with tu.randint (#1276)
This commit adds a method to `TestUtils` that generates random integer
tensors with a similar interface to `TestUtils.rand`. This commit
also replaces all test inputs generated with `torch.randint` with
`tu.randint`.
2022-08-26 08:50:16 -07:00
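A sketch of what such a helper plausibly looks like; the real `TestUtils` signatures may differ:

```
import torch

class TestUtils:
    def rand(self, *sizes, low=0.0, high=1.0):
        return torch.rand(*sizes) * (high - low) + low

    def randint(self, *sizes, low=0, high=10):
        # Mirrors the `rand` interface but yields integer tensors.
        return torch.randint(low, high, sizes)

tu = TestUtils()
x = tu.randint(3, 4, low=-5, high=5)   # replaces torch.randint(-5, 5, (3, 4))
```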
Henry Tu e869e68559
Fix LTC lib_torch_mlir_ltc.so import error (#1283)
* Build LTC to _mlir_libs directory

* Update CMakeLists.txt
2022-08-25 18:25:01 -04:00
Henry Tu a1ace0657d
Revert updating mlir_native_functions.cpp signature (#1281)
* Revert updating mlir_native_functions.cpp signature, due to a7edf71360

* Restored NewZeros to LTC XFAIL set
2022-08-25 13:00:33 -04:00
Henry Tu e2f862cb85
Fix LTC build warnings (#1272)
* Resolved Wunused-variable

* Fix Wunneeded-internal-declaration

* Address review comment

* Update autogen_ltc_backend.py

* Update mlir_native_functions.cpp to work with updated PyTorch

* Remove NewZeros from LTC XFAIL set
2022-08-24 15:04:28 -04:00
gpetters94 f012279fa2
Add transposed case for at::convolution (#917)
Also adds a decomposition for aten::conv_transposed2d.input
2022-08-24 12:19:35 -04:00
Sean Silva d7d67979b2 [cleanup] Change OutputType enum values to strings
The use of numbers was arbitrary and was preventing the enum values from
being put in the natural order.
2022-08-23 17:59:39 -07:00
Tanyo Kwok 3d0e18bbe7
Add decomposition for aten.roll (#1170)
* Add decomposition for aten.roll

* add e2e unittest

* refine type of torch.roll

* fix aten::cat output type
2022-08-24 08:36:05 +08:00
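Conceptually, a roll along one dimension decomposes into two slices plus a concatenation; a PyTorch-level sketch of the idea (not the actual decomposition code):

```
import torch

def roll_one_dim(x, shift, dim):
    # roll == cat(last `shift` elements, first `size - shift` elements) on dim.
    shift = shift % x.size(dim)
    tail = x.narrow(dim, x.size(dim) - shift, shift)
    head = x.narrow(dim, 0, x.size(dim) - shift)
    return torch.cat([tail, head], dim=dim)

x = torch.arange(6).reshape(2, 3)
assert torch.equal(roll_one_dim(x, 1, dim=1), torch.roll(x, 1, dims=1))
```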
Tanyo Kwok 2374098d71
[MHLO] Init end to end unit tests (#1223) 2022-08-23 16:47:21 +08:00
Vivek Khandelwal 8cad02f87e [MLIR][TORCH] Add torch.Device type to backend contract scalar types
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-08-23 10:50:09 +05:30
Tanyo Kwok 9176b5ed29
Add decomposition for aten.flatten.using_ints (#1161) 2022-08-23 11:52:54 +08:00
Sean Silva 01290d134a Add a way for backends to control which ops are legal for them.
We were already hitting many cases where backends differed in terms of
the legal ops that they wanted. This caused unnecessary coupling between
the backends. Examples:
- https://github.com/llvm/torch-mlir/pull/1161
- https://github.com/llvm/torch-mlir/pull/862

This PR centralizes all compilation to go through `torch_mlir.compile`
so that we can keep the logic centralized there. We should move these
lists closer to each backend. Especially cases like
https://github.com/llvm/torch-mlir/pull/862 where blocking a
decomposition is necessary to avoid a crash emphasize that the set of
decompositions is tightly coupled to the backend, and should be
"controlled by the backend" and not something arbitrarily tweakable.

Also:
- Fix a small bug in the way we passed through the backendLegalOps
  option.
- Add better error messages in `torch_mlir.compile` for import errors.
2022-08-22 14:16:13 -07:00
Alex Tsao c38308f3ef
Add lowering for _convolution.deprecated (#1259)
* Add lowering for _convolution.deprecated
2022-08-22 11:17:36 +08:00
Henry Tu ba17a4d6c0
Reenable LTC in out-of-tree build (for real this time) (#1205)
* Fix OOT LTC CI build failure

* Disable LTC during macOS package gen

* Add more details about static TorchMLIRJITIRImporter library
2022-08-19 15:25:00 -04:00
Vivek Khandelwal 65d811e267 [MLIR][TORCH] Fix dynamic cases for aten.index.Tensor 2022-08-19 12:13:20 +05:30
Ramiro Leal-Cavazos f07f7d20f9
Clean up shape functions that use `sum_mean_dim` (#1217)
I recently fixed the handling of the `dim` argument in
`sum_mean_dim` (59fccab857). Therefore,
the checks that the `dim` input is `None` or `[]` are no longer needed.
2022-08-18 08:23:43 -07:00
Quinn Dawkins 85f383ce0b
Bump the shape lib to match the upstream functions currently in PyTorch (#1236)
Bumps the shape library:
 - Updates the function signature for aten.arange.start_step
 - upstream_shape_functions.mean_dim -> upstream_shape_functions.sum_mean_dim
2022-08-17 00:11:04 -04:00
nithinsubbiah fde390c766 Re-enable custom op support 2022-08-16 22:49:08 +05:30
Jae Hoon (Antonio) Kim 0af55781ae
Propagate device data names (#1157)
* Propagate device data names

* Address PR comment

* Add example usage

* Add test for device data names

* Make TorchMlirComputation fields protected

* Add lazy backend device data name unit tests

* Disable lazy backend tests if LTC is disabled

* Add comments
2022-08-16 09:30:22 -04:00
武家伟 3b3cb99ef8
Generalize canonicalization pattern for more aten.sub/div/mul/add op (#1209)
Generalize the canonicalization pattern for more sub/div/mul/add ops; for AtenDivTensorModeOp in 'trunc' rounding mode, we try to fold it.
2022-08-16 13:24:08 +08:00
Sambhav Jain 41aa562fb4
s/external/externals/g (#1222)
Fix remaining instances of `external/llvm-project`.
2022-08-13 07:13:56 -07:00
Prashant Kumar b1a506624c Add decomposition of `aten.masked.tensor` op.
The `aten.masked.tensor` op has been decomposed into the `aten.masked.scalar` op.
2022-08-11 07:48:04 +05:30
Vidush Singhal dd2da5a038
E2E support for AtenRemainderScalarOp (#1200) 2022-08-10 20:02:06 -04:00
gpetters94 79b9cf9468
Add lowering for aten.to.device (#1107) 2022-08-10 19:24:02 -04:00
powderluv 2342456356
mac m1 cross compile (#1204)
* mac m1 cross compile

Add support for M1 cross compile

* Remove redundant ExecutionEngine

It is registered as part of RegisterEverything

* nuke non-universal zstd

disable LTC
2022-08-10 08:48:39 -07:00
powderluv e55fc4deb5
Revert "E2E support for AtenRemainderScalarOp (#1119)" (#1190)
This reverts commit 34e207eeb5.
2022-08-08 22:59:57 -07:00
Henry Tu 3e97a33c80
Revert "Reenable LTC in out-of-tree build (#1177)" (#1183)
This reverts commit f85ae9c685.
2022-08-08 18:58:35 -07:00
Vidush Singhal 34e207eeb5
E2E support for AtenRemainderScalarOp (#1119)
* E2E support for AtenRemainderScalarOp
2022-08-08 20:02:52 -04:00
Vidush Singhal b70548edff
Add decomposition and E2E support for Aten_EmbeddingBag (#1137)
* Add decomposition and E2E support for Aten_EmbeddingBag
2022-08-08 18:56:49 -04:00
Henry Tu f85ae9c685
Reenable LTC in out-of-tree build (#1177) 2022-08-08 17:35:22 -04:00
Tanyo Kwok 290d7755fb
importer: add initial support for loading Float16 tensors (#1169)
Follow-up to #761:

    This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
    method to enable the creation of tensors whose base type is Float16.
    This patch also adds a test to validate the IR generation, and it
    updates the test for importing tensors of various types.
2022-08-08 12:37:31 +08:00
Sean Silva 5618890ca0 development.md: Avoid name collisions with PYTORCH_ variables 2022-08-05 19:41:08 -07:00
Henry Tu e322f6a878
Update LTC CMake hack documentation (#1155)
* Update CMakeLists.txt

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update CMakeLists.txt

* Update buildAndTest.yml

* Update setup.py

* Address review comments
2022-08-05 14:12:20 -04:00
Sean Silva 8ce5d3f12c E2E framework: Report tensor dtype in summary
This helps to triage issues related to backends that don't support all
dtypes.
2022-08-05 10:05:18 -07:00
Vivek Khandelwal c129a6de93 [MLIR][TORCH] Add support for dim=None to Aten[Var|Std]DimOp
PyTorch recently added support for `dim=None` in the `torch.var`
(5ca9b2b6fa)
and `torch.std` op (eb0e30e0bc).
This commit adds the corresponding support in torch-mlir.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-08-05 20:28:56 +05:30
Sean Silva 31727f81d8 torch_mlir.compile: Allow ignoring traced shapes
In some cases, users know that a traced graph is valid for a wider set
of shapes than they originally traced it with. Provide an option for
users to ignore the shapes in the traced graph when they know it is
legal.

Fixes #997
2022-08-04 10:18:34 -07:00
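A hedged usage sketch; the `ignore_traced_shapes` keyword name and its exact contract (for instance, whether it must be paired with `TensorPlaceholder` inputs) are assumptions here, not confirmed by the message:

```
import torch
import torch_mlir

model = torch.nn.ReLU()
example = torch.ones(4, 8)

# The traced graph is known to be valid for other shapes too, so ask the
# importer not to bake the example shapes in. The flag name is assumed.
module = torch_mlir.compile(
    model, example,
    output_type="torch",
    use_tracing=True,
    ignore_traced_shapes=True)
```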
Sean Silva 6484776a25 Make numerical stability test more perverse
To test the summation stability of `torch.aten.var`, add a large
constant to it, which increases the effective precision requirements.
2022-08-04 10:04:38 -07:00
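The reasoning above in miniature: a naive one-pass variance computes E[x^2] - E[x]^2, which cancels catastrophically once a large offset dominates the sums.

```
import torch

x = torch.rand(10_000)             # variance is about 1/12
shifted = x + 1.0e6                # large constant: same variance, harder sums

# One-pass formula: loses nearly all float32 precision on the shifted data...
naive = (shifted * shifted).mean() - shifted.mean() ** 2
# ...while a numerically stable computation stays close to 1/12.
stable = shifted.var(unbiased=False)

print(naive.item(), stable.item())
```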
gpetters94 08fc2d89bb
Add non-unit groups support to aten.convolution (#858) 2022-08-04 02:18:38 -04:00
Ramiro Leal-Cavazos a7af1fd873
Add support for `dim=None` to `AtenMeanDimOp` (#1129)
PyTorch recently added support for `dim=None` in the `torch.mean`
op (2bfae07a79). This
commit adds the corresponding support in torch-mlir.
2022-08-02 16:08:06 +00:00
Quinn Dawkins 38d8498b21
add e2e support for aten.atan2 (#1117)
- Includes math-to-libm pass in refbackend for math::atan2 support
2022-08-02 11:39:41 -04:00
Vidush Singhal ed13ebfd8d
E2E support for AtenEmbeddingBagPaddingIdxOp SUM Mode (#1066) 2022-08-01 16:44:11 -04:00
Alec 554570f3ab Implemented a decomposition of aten::narrow 2022-08-01 18:32:14 +05:30
Henry Tu 2c3b3606d0 Resolve remaining LTC CI failures (#1110)
* Replace CHECK_EQ with TORCH_CHECK_EQ

* Check value of TORCH_MLIR_USE_INSTALLED_PYTORCH during LTC build

* Update LTC XFAIL with NewZerosModule ops

* Explicitly blacklist _like ops

* Automatically blacklist new_/_like ops

* Prune away unused Python dependencies from LTC

* Add flag to disable LTC

* Autogen dummy _REFERENCE_LAZY_BACKEND library when LTC is disabled

* Implement compute_shape_var

* Removed Var tests from XFAIL Set

* XFAIL tests using _local_scalar_dense or index.Tensor

* Add StdDim tests to XFAIL set

* Autogen aten::cat
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 425362263b Clean up Autogen (#1112)
* Remove unnecessary sed in autogen

* Remove .pyc files from VCS
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 368963243e Export LTC Headers (#1108) 2022-07-30 09:40:02 -04:00
Henry Tu 70395de197 Resolve CI testing failure for Lazy Tensor Core (#1088)
* Xfail unsupported ops

* Register FuncDialect

* Include dynamic_ir in build

* Code reformat

* Enable LTC tests for macOS and Source Build
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 0d16a91656 Add support for lift_fresh op (#1101) 2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim e37891b997 Default Device Ordinal API (#1079)
* Add default device ordinal API

* Fix reference backend
2022-07-30 09:40:02 -04:00
Antonio Kim de6c135dc3 Fix LTC autogen for CI with nightly PyTorch
- Update llvm-project pin to match main
2022-07-30 09:40:02 -04:00
Henry Tu cec74b8d37 Blacklist _convolution op (#1048)
* Blacklist _convolution op in LTC

* Removed duplicate Torch_AtenSelectScatterOp instance from autogen .td

* Removed duplicate Torch_AtenSliceScatterOp instance from autogen .td
2022-07-30 09:40:02 -04:00
Henry Tu 47bb38d180 Reference Lazy Backend (#1045)
* Changed Example MLIR backend to Reference MLIR backend

* Moved reference_ltc_backend into csrc

* Merged sys_utils.h

* Renamed reference_ltc_backend to reference_lazy_backend

* Addressed review comments

* Update docs with new library name

* Removed _REFERENCE_LAZY_BACKEND from .gitignore

* Added reference_lazy_backend to the TorchMLIRPythonModules dependency list

Fixed typo in `ltc_examples.md`

Missed instance where `ltc_backend` was used instead of `lazy_backend`.
2022-07-30 09:40:02 -04:00
Henry Tu f5acad8512 Prune xfail e2e LTC tests & fix bugs from functionalization pass (#1044)
- Pruned number of xfailed e2e LTC tests from 305 to 134
  - Reviewed every failure to ensure the error genuinely warrants an xfail
- Fixed bug where non-tensor outputs of LTC computation had `.to('cpu')` called, which caused a failure and inflated the xfail count
- Fixed bug with `HBC_basic` test where a constant tensor was created in its constructor without being declared as a buffer, which prevented the device from being updated when the parent `torch.nn.Module` got moved to the `lazy` device
  - Note that this test is still xfail'd due to some unsupported ops. Left a comment about some potential issues that may arise if it gets reenabled in the future
- Updated autogen `GeneratedTorchOps.td` to reflect the latest set of supported ops
- Renamed `aten.zero.functionalization` to `aten.zero` to reflect upstream PyTorch changes
2022-07-30 09:40:02 -04:00
Henry Tu 9de06f3ebd Update Torch MLIR readme 2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim fb21c9e6cb Integrate Functionalization Pass (#998)
* Fix autogen build dir issue

* Got functionalization pass to compile

* Add slice/diagonal backwards functionalization

* Fix codegen invocation in CMakeLists.txt

* Add functionalization view ops

* Fix logsumexp out functionalization

* Fix ComputationPtr

* Blacklist new_empty op

* Add op comparison

* Remove unnecessary ops

Co-authored-by: Henry Tu <henry.tu@cerebras.net>
2022-07-30 09:40:02 -04:00
Henry Tu 1510eae75d Upstream native_batch_norm and native_batch_norm_backward shape inference functions (#978)
* Removed compute_shape_native_batch_norm

* Removed compute_shape_native_batch_norm_backward
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim a62d60829c Refactor autogen (#925) 2022-07-30 09:40:02 -04:00
Henry Tu dfcc26556a Added e2e LTC tests (#916)
* Added e2e LTC Torch MLIR tests

* Fix seed for reproducibility

* Check if computation is None before getting debug string

* Updated unit tests, and added numeric tests

* Print name of the model layer that fails numeric validation

* Run LTC e2e test with CI/CD

* Set seed in main function, instead of beginning of execution

* Add comment to specify number of digits of precision

* Fixed typo

* Remove tests for LTC example models

* Added LTC option to torchscript e2e

* Implement compile and run for LTC e2e test

* xfail all tests that use ops that aren't currently supported
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 8312fa535b Refactor Node Lowering (#914) 2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim d9aee0d7a7 E2E HuggingFace Bert using LTC Backend (#912)
* Update native function definitions

* Add ops to support bert lowering

- Add empty_strided and as_strided

- Restore zeros_like to op blacklist (Without this, tensors will be unintentionally created with a CPU device rather than lazy)

- Check for composite implicit ops and add device data IR

- Also fix codegen for functionalization

* Add autogen to CMakeList

* Remove PyTorch submodule

* Reduced BERT model size

* Print Mark Step status in Torch MLIR LTC debug string

* Apply fixes to work with latest upstream/main

- Pass importOptions into getMlirTypeFromTorchType during NodeImporter::importNode

  Without this, the tensor type created may have a mismatched type as ImportOptions may cause vtensor to be used instead of tensor

* Update shape inference functions

- Fixed compute_shape_native_batch_norm when mean and var are uninitialized

  Previously, the number of shapes returned would be <3 if either mean or var didn't exist. Instead, we now initialize them with a vector matching the number of channels.

- Implemented compute_shape_mul

- Fixed bug in reshape shape inference error message

* Get MLIR backend more consistent with TS backend

- Remove LazyNativeFunctions::_unsafe_view from autogen

- Blacklist ops to make JIT graph more like output of TS backend

- Print graph when SSA value has mismatch of types and results

- Remove normalize_index from LazyShapeInference

- Fix seeds for LTC example models

* Update and clean up shape inference functions

- Prune shape inference functions

- Add shape inference function for GenerateSlice

- Add shape inference function for GenerateCopy

Co-authored-by: Henry Tu <henry.tu@cerebras.net>
2022-07-30 09:40:02 -04:00
Henry Tu 0c35e607b3 Add static shape for scalar tensors (#833)
* Assume zero rank tensors are scalar

* Run RefineTypes pass on JIT Graph

* Rollback assumption that zero rank tensors are scalar

* Set numSizes to -1 for non-ranked tensors

* Rename RefineTypes to RefineTupleTypes
2022-07-30 09:40:02 -04:00
Henry Tu de5b380143 Bert example and relevant shape inference functions (#831) 2022-07-30 09:40:02 -04:00
Henry Tu 406d1e7538 Use JIT GraphExecutor for execution in example backend (#830)
* Update LazyShapeInference header

* Use JIT GraphExecutor for execution in example backend
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 1bde00c73d Fix LTC Decoupling (#815)
* Initial changes

* Fix up native functions

* Further fix decoupling

* Remove unnecessary ops

* Formatting and copyright banners:

* Add pytorch submodule
2022-07-30 09:40:02 -04:00
Henry Tu cca9fe126e Enable support for LTC Input/Output Mapping (#764)
* Save InputOutputAliases to TorchMlirComputation

* Implement GetResultShape for TorchMlirLoweringContext

* Use optional return type for GetResultShape

* Remove support for aten::detach

With this op enabled, tensors were being copied, which resulted in incorrect aliasing.

* Add newline before printing I/O alias mapping

* Changed printout to use "Input param" as label instead of "Input"

* Remove shape inference function for aten::detach

* Moved implementation of SetUpAlias to MlirLoweringContext

As part of this change, TorchMlirComputation has been moved to the end of mlir_lowering_context.h so that it can access some new structs in TorchMlirLoweringContext

* Use updated PyTorch API

* Remove GetResultShape

Complements this upstream PyTorch PR: pytorch/pytorch#75828

This PR adds support for mapping input and output tensors which alias each other (e.g., mapping an input weight tensor to the same tensor in the output after a training iteration).

MLIR:
```
func @graph(%arg0: !torch.vtensor<[1,5],f32>, %arg1: !torch.vtensor<[1],si64>, ..., %arg6: !torch.vtensor<[10,5],f32>, %arg7: !torch.vtensor<[10],f32>, ...) {
  ...
  return %arg0, %arg1, %17, %23, ... : !torch.vtensor<[1,5],f32>, !torch.vtensor<[1],si64>, !torch.vtensor<[10,5],f32>, !torch.vtensor<[10],f32>, ...
}
```

Input/Output Alias Mapping: 
Output: 0 -> Input: 0
Output: 1 -> Input: 1
Output: 2 -> Input: 6
Output: 3 -> Input: 7
The aten::detach op has also been disabled in this PR to fix the issue of tensors not aliasing properly due to copying.
2022-07-30 09:40:02 -04:00
Antonio Kim 615ff1d31c Generate MLIR with shape information via LTC frontend (#742) 2022-07-30 09:40:02 -04:00
Henry Tu a605fe279c Add example Torch MLIR LTC Backend (#725) 2022-07-30 09:40:02 -04:00
Henry Tu 3e9b1cbd36 Added JIT to MLIR lowering (#724)
* Added JIT to MLIR lowering

Lowering to JIT is performed in a way similar to how it's done in the TS LTC backend. After a jit::Graph is constructed, it gets converted to a jit::Function, which is fed into the existing utility to generate an MlirModule in torch-mlir.

* Renamed `csrc/backend` to `csrc/base_lazy_backend`
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 65cf1465ef Fix Torch-MLIR LTC Backend based off latest PyTorch master (#723)
* Changes as a result of the LTC TS backend decoupling

* Fix bugs in BackendImpl and codegen

* Fix based on latest PyTorch master
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim c3b20e444c Got LTC working until compile (#689) 2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 58338f79a1 Torch-MLIR LTC Backend Lowering Codegen (#621)
* Codegen and build LTC lowering

* Add LazyShapeInference header
2022-07-30 09:40:02 -04:00
Jae Hoon (Antonio) Kim 2f22e2ef40 Add initial LTC backend (#610)
* Add initial LTC backend skeleton

* Disable CI build and move TorchMLIRPyTorch.cmake
2022-07-30 09:40:02 -04:00
PhaneeshB 8b5631d4c5 [MLIR][TORCH] Add decomposition for aten.std.dim Op
Signed-Off By: Phaneesh Barwaria <phaneesh@nod-labs.com>
2022-07-29 23:52:54 +05:30
Vivek Khandelwal 9a1203c844 Fix CI failure due to upstream PyTorch change in aten.mean.dim op
Fixes https://github.com/llvm/torch-mlir/issues/1121

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-07-29 17:19:22 +05:30
Vivek Khandelwal c681c3497a [MLIR][TORCH] Fix empty dim cases for the .dim ops
This commit fixes the shape calculation for:
1.) aten.mean.dim
2.) aten.var.dim
3.) aten.sum.dim_IntList op

Also, it fixes the lowering of `aten.mean.dim` and
`aten.sum.dim_IntList` for handling the cases of empty dim list.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Vivek Khandelwal d386b8f9e5 [MLIR][TORCH] Add decomposition for aten.var.correction op
This commit adds the decomposition for `aten.var.correction` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Vivek Khandelwal 7247c6a3a7 [MLIR][TORCH] Add E2E support for aten.ge.int op
This commit adds lowering of `aten.ge.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-29 11:08:57 +05:30
Quinn Dawkins 11a8901078
[MLIR][TORCH] Add support for multiple indexing tensors for aten.index.Tensor (#1097)
- Includes a canonicalizer for `aten.add.t`needed for successfully lowering the shape function
 - Only offers support for statically sized index tensors when there is more than one
 - Dynamic shape support remains for single indexing tensors
2022-07-28 19:00:02 -04:00
Quinn Dawkins 3c9addf19c Add e2e support for aten.expm1 2022-07-27 12:31:35 +05:30
Kevin Kiningham e8f327cc00 Add lowering to linalg for softplus and log1p
Follows existing conventions for unary operators.
2022-07-25 21:25:57 +05:30
powderluv f424930a28
Add option to expose custom PyTorch repo/branch (#1103) 2022-07-24 20:08:48 -07:00
powderluv 31fd812acf
Add linux and macOS source builds in CI (#1070)
This enables building Pytorch from source in the CI.
The build should mostly hit the ccache.
Release builds will follow once we have some runtime on the CI.
2022-07-21 14:16:03 -07:00
Ashay Rane 72dd04cdb3
Revert "python: trim registration and loading of dialects and passes" (#1093)
This reverts commit ad283c1043, since it's
causing nightly build failures for all platforms.
2022-07-21 09:35:42 -07:00
Ashay Rane ad283c1043
python: trim registration and loading of dialects and passes (#1084)
In the interest of merging upstream LLVM quickly, a previous patch
(7f08169) updated the torch-mlir build to register all dialects and
passes through Python bindings.  This patch limits the dialects and
passes to only those that are used in torch-mlir.

Key to this change are the removal of
`MLIRPythonExtension.RegisterEverything` and the introduction of a new
Python module (`_mlir_libs/_site_initialize_0.py`), where we register
the dialects and passes used by torch-mlir.
2022-07-20 18:34:17 -07:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Quinn Dawkins 647e75e029
Allow expanding and collapsing in aten::view (#1082)
- Supports cases where the view op expands and collapses dims
simultaneously. This does not handle the case where it is neither
expanding nor collapsing (e.g. [2, 3] -> [3, 2])
 - Additionally fixes a previous bug with adding 1-sized dims on both
sides of a tensor with aten.view
2022-07-20 17:35:51 -04:00
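An illustration of the cases described in the commit above:

```
import torch

x = torch.randn(2, 3, 4)

# Now supported: collapse (2, 3) -> 6 while simultaneously expanding 4 -> (2, 2).
y = x.view(6, 2, 2)

# Still unsupported by this patch: neither expanding nor collapsing,
# e.g. regrouping [2, 3] -> [3, 2].
z = torch.randn(2, 3).view(3, 2)
```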
Kevin Kiningham 21f905afbe
Emit underscore version of aten.sqrt (#1072) 2022-07-18 23:57:47 -07:00
Quinn Dawkins c73a39e40a Add support for index.Tensor on dimensions other than the first
This patch still only supports a single indexing tensor.
2022-07-19 11:36:52 +05:30
Ashay Rane 7f08169380
bump llvm tag to 3580daa (#1078)
This patch makes some rudimentary changes to torch-mlir's use of MLIR
Python bindings to work with the most recent LLVM code.  We can perhaps
do better by being more selective in what we link against, instead of
using `MLIRPythonExtension.RegisterEverything`.
2022-07-18 16:49:03 -07:00
Vivek Khandelwal df0b1e77a4 [MLIR][TORCH] Add negative dim support for aten.cat and aten.slice op
This commit adds the support for negative dim cases for `aten.cat`,
`aten.slice.Tensor` and `aten.slice_scatter` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-18 14:01:33 +05:30
Sean Silva 795479a88d Remove HasValueSemantics from `is` ops. 2022-07-15 17:03:17 -07:00
Maksim Levental d70bb68c9e
Add named exception TorchMlirCompilerError. (#1064) 2022-07-15 16:32:36 -05:00
Ramiro Leal-Cavazos afdaa60dd4
Fix typo in `inputRank` check of `AtenBatchNormOp` (#1046)
The original conversion pattern for `AtenBatchNormOp` required that
the input rank be greater than 2; however, the only
expectation in the conversion pattern and in PyTorch is that the input
rank is greater than 1, since the second dimension of the input must
match the size of the `weight`, `bias`, `runningMean`, and
`runningVar` inputs. This commit fixes the `inputRank` check.
2022-07-15 09:35:59 -07:00
Vivek Khandelwal 3589134d31 [MLIR][TORCH] Add decomposition for aten.var.dim op
This commit adds the decomposition for `aten.var.dim` op.
This commit also make changes in the decomposition for `aten.var` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-07-15 09:53:42 +05:30
powderluv 479a8a8963
Remove libtorch downloads (#1058)
Remove all the libtorch downloads. If the user sets
-DTORCH_MLIR_USE_INSTALLED_PYTORCH=OFF, then we just build from source.

Doesn't change developer workflow since we still default to local
PyTorch versions.

TEST: Build and verify all tests (except one xfail quant) pass on linux
2022-07-14 17:16:51 -07:00
Maksim Levental 1bb990afc7
Speed up libtorch build. (#1031) 2022-07-11 20:46:49 -05:00
Ramiro Leal-Cavazos 11148e60d6
Undo shape lib changes + update function signature of sum + zero (#1035)
This commit does three things:
  1. Reverts some of the shape lib changes merged in
  https://github.com/llvm/torch-mlir/pull/844
  2. Updates the signature of `aten.sum_dim_IntList` that was recently
  updated in
  23bdb570cf
  3. Replaces `aten.zero.functional` with `aten.zero`, updated in 960758b0b7
2022-07-11 10:56:12 -07:00
Prateek Gupta 2d75654b2c [TORCH][MLIR] Add lowering of `aten.slice_scatter` and
`aten.select_scatter` op.

This commit adds:
1.  Lowering of `aten.slice_scatter` op into `tensor.insert_slice`
op.
2. Decomposes the `aten.select_scatter` op into the `aten.slice_scatter`
op.

Signed-Off-By: Prateek Gupta <gprateek93@gmail.com>
2022-07-11 14:07:21 +05:30
George Petterson a08ff0d7f2 Add lowering for _convolution 2022-07-11 11:03:03 +05:30
Sean Silva 93f1c3138b torch_mlir.compile: Allow OutputType as a string.
A lot of code was super verbose with `torch_mlir.OutputType.XYZ`. Now,
you can simply do `"xyz"`. I updated a few examples.
2022-07-08 17:37:27 -07:00
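Roughly the before/after from the message above; the model and the exact string spelling are illustrative:

```
import torch
import torch_mlir

model = torch.nn.Sigmoid()
example = torch.ones(2, 3)

# Before: the verbose enum spelling.
m1 = torch_mlir.compile(model, example,
                        output_type=torch_mlir.OutputType.LINALG_ON_TENSORS)

# After: a plain string is accepted and normalized internally.
m2 = torch_mlir.compile(model, example, output_type="linalg-on-tensors")
```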
Sean Silva 5bd9362c61 Remove mention of upstream_shape_helpers
There were some leftovers.
2022-07-08 14:43:55 -07:00
Henry Tu 3ad810a1fb
Update CMakeLists.txt (#1028) 2022-07-08 16:45:52 -04:00
powderluv f202ae0012
Revert to using local PyTorch binaries (#1024)
Temporarily revert to using PyTorch binaries until source builds
are ready to land.

TORCH_MLIR_USE_INSTALLED_PYTORCH can be turned to OFF if you want
to link against libtorch and/or source builds.
2022-07-07 15:42:08 -07:00
Quinn Dawkins f0c3b5a7ed
Add E2E support for aten.len.str (#969) 2022-07-07 10:41:55 -07:00
Ashay Rane 874fdb7e42
build: improve robustness of cmake and shell scripts (#1018)
On my local machine, `unzip` didn't exist (producing a "command not
found" error), but CMake ignored the error.  Although the build did
succeed (because it found a previously-built version of libtorch), it
seems better to abort builds on such failures, so this patch checks the
return code of all external process invocations.

Along similar lines, this patch also updates the shell scripts in
`build_tools` to extensively use double-quoting to prevent unintentional
word splitting or globbing.  Since some of the scripts execute `rm`
while using shell variables, this patch also adds the preamble `set -u`
to abort execution if an undefined variable is referenced, so that we
reduce the chances of executing `rm -rf /` if the path expression
happens to refer to an undefined variable.
2022-07-06 14:39:30 -07:00
powderluv 33bfeda4c5
Enable libtorch caching and source builds (#1004)
Add an option to cache libtorch/ releases if you don't want to
download the latest. Add an option to enable source builds.

TESTS:
macOS: verify with / without cache downloads
       verify source builds -- shared and static

Linux: Build Tests and Release builds
2022-07-05 10:25:43 -07:00
powderluv be3d14cf76
Fix multi-threaded tests on macOS (#1005)
Fixes https://github.com/llvm/torch-mlir/issues/994
2022-07-05 00:05:36 -07:00
Tanyo Kwok d4f1f41435
[MLIR][TORCH] Add decomposition of aten.repeat (#932)
* [MLIR][TORCH] Add decomposition of aten.repeat

* refine & rebase

* refine static shapes

* add e2e test

* Rebase and Refine naming style
2022-07-01 13:02:31 +08:00
Ramiro Leal-Cavazos f204210266
[LINALG] Fix handling of size-1 dims in `aten.view` again. (#992)
A previous fix to the handling of size-1 dims in
`aten.view` (https://github.com/llvm/torch-mlir/pull/962) resulted in
the wrong grouping of dimensions when size-1 dims were between two
dims of size greater than 1. This commit fixes that.
2022-06-30 16:39:25 -07:00
Ashay Rane f947443f98
python: lower `prim::{Load,Store,Enter,Exit}` nodes to torch dialect (#983)
TorchScript nodes like `prim::Load` and `prim::Store` aren't supported
in torch-mlir because they can't be lowered to backends, but such nodes
can occur in the TorchScript IR.

This patch adds a rudimentary translation from such nodes to
corresponding ops in the Torch dialect.  Since we expected such nodes to
go away during lowering because of the SymbolDCE pass, this patch does
not add code to lower these ops beyond the Torch dialect.
2022-06-30 13:17:35 -07:00
Suraj Sudhir bb576c2cb3
[tosa] aten.embedding op support (#991)
Enables BERT legalization.

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-06-30 13:13:52 -07:00
powderluv 2b52da951b
Link against libtorch (#955)
This moves torch-mlir to link against libtorch on macOS and linux

TESTS: Tests pass. Tested release builds on linux and macOS
2022-06-30 12:40:17 -07:00
Sean Silva 227dea7b2e Add support for ScalarType::QUInt8
I ran into this while poking around at
https://github.com/llvm/torch-mlir/issues/959
2022-06-29 15:33:28 -07:00
powderluv cd79538a0c
Update test to pass with newer versions of tanh (#990) 2022-06-28 20:28:13 -07:00
Tanyo Kwok 5fbf2a376c
fix export torch.literal on gpu (#10) (#985) 2022-06-29 10:10:34 +08:00
JakopinA 5888c4f7dc Added E2E support for torch::aten.__contains__int_list 2022-06-27 19:30:00 +05:30
Gaurav Shukla 1be604bfd3 [LINALG] Lower `aten.Matmul` to `linalg.BatchMatmul`
This commit lowers `aten.matmul` to `linalg.BatchMatmul` under the
following conditions:
1. The result of matrix multiplication must have batch dimensions,
   i.e., rank greater than 2.
2. The resultant matrix must have at most 1 dynamic batch dimension.

It also handles broadcasting of batch dimensions when batch dimensions
of the matrices are broadcastable.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-06-25 10:58:06 +05:30
Ramiro Leal-Cavazos 400fecc1e5
[LINALG] Fix shape function of index.Tensor + support N-rank inputs (#972)
This commit fixes the shape function for `index.Tensor`, adding
support for multiple index tensors and `None`s in the indices
list. This commit also adds support for input tensors of rank greater
than 1. The lowering for `index.Tensor` still has the limitation
that only a single index tensor along the first dimension of the input
tensor is supported.
2022-06-24 09:45:44 -07:00
Ashay Rane 234fc7fe0c
linalg: lower `aten.triu` op to `linalg.generic` (#965)
Prior to this patch, the torch dialect included `AtenTriuOp` for
computing the upper triangular part of the input matrix, but there was
no code for lowering the op to the linalg dialect.

This patch adds code to generate a `linalg.generic` operation that
compares indices (computed using `linalg.index`) to choose between zero
or the original value (using `arith.select`).  The lowering fails if the
number of dimensions are less than two.  This patch also adds a few
end-to-end tests.
2022-06-23 22:45:48 -07:00
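The index-comparison idea from the commit above, paraphrased at the PyTorch level (the actual lowering emits `linalg.generic` with `linalg.index` and `arith.select`):

```
import torch

def triu_by_index_compare(x, diagonal=0):
    rows = torch.arange(x.size(-2)).unsqueeze(-1)  # role of linalg.index, dim -2
    cols = torch.arange(x.size(-1)).unsqueeze(0)   # role of linalg.index, dim -1
    keep = cols - rows >= diagonal                 # comparison in the generic body
    zero = torch.zeros((), dtype=x.dtype)
    return torch.where(keep, x, zero)              # role of arith.select

x = torch.randn(3, 4)
assert torch.equal(triu_by_index_compare(x), torch.triu(x))
```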
erman-gurses 5cff40c88a Add canonicalization for aten.add.tensor op 2022-06-23 17:24:59 -04:00
Maksim Levental 829717c96e
Bump LLVM (#958) 2022-06-22 22:23:46 -05:00
Ramiro Leal-Cavazos 8b94759303
[LINALG] Fix handling of size-1 dims in `aten.view` (#962)
This commit adds support for several size-1 dims in a row in an
expansion using `aten.view`.
2022-06-22 15:58:40 -07:00
Vivek Khandelwal 77ab31641f [MLIR][TORCH] Add decomposition of aten.numpy_T op
This commit adds the decomposition of `aten.numpy_T` op into
`aten.t` or `aten.permute` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:01:22 +05:30
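The decomposition target in PyTorch terms: `aten.numpy_T` reverses all dimensions, which is `aten.t` for rank at most 2 and a full-reversal permute otherwise. A sketch:

```
import torch

def numpy_T_decomposed(x):
    # aten.numpy_T reverses all dimensions; for rank <= 2 this is aten.t.
    return x.permute(*reversed(range(x.dim())))

assert numpy_T_decomposed(torch.randn(2, 3, 4)).shape == (4, 3, 2)
v = torch.randn(5)
assert torch.equal(numpy_T_decomposed(v), v)   # rank 1: a no-op
```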
Vivek Khandelwal 4605dc9c99 [MLIR][TORCH] Add support for bool type in convertScalarToDtype utility
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-16 00:00:47 +05:30
Albert Sandru 708a51ae2e Add E2E support for aten.is_floating_point 2022-06-15 11:54:00 -05:00
Ramiro Leal-Cavazos 246c2df65a
[LINALG] Fix typo in conversion pattern of `aten.embedding` (#942) 2022-06-15 09:45:10 -07:00
Vivek Khandelwal aed5517fda [MLIR][TORCH] Add integer dtype support for aten.rsub.Scalar op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-15 16:46:28 +05:30
Bob Adolf b90837ee24
Temporarily revert support for custom op extensions. (#944)
The MacOS builders are having linking trouble with the extension library.
Until it's fixed, all support for op extensions is disabled. It should be
easy to restore once the issue is resolved.
2022-06-14 18:24:40 -07:00
powderluv 8fd084377d
Update CMakeLists.txt 2022-06-14 14:46:52 -07:00
powderluv dfc6f7c547
Update CMakeLists.txt
Emergency fix to unblock the nightly Release builder
2022-06-14 14:38:35 -07:00
Ramiro Leal-Cavazos 93f6d8e776
[LINALG] Add 0-rank case for `aten.permute` (#940)
The function `AffineMap::inferFromExprList` does not work if the first
vector of expressions is empty, because it uses these expressions to
obtain the context. This prevented `aten.permute` from working for
inputs of 0-rank. This commit adds support for 0-rank inputs.
2022-06-14 12:50:46 -07:00
Vivek Khandelwal 33fa8e7761 [MLIR][TORCH] Add decomposition of aten.floor_divide op
This commit adds the decomposition of `aten.floor_divide` op into
`aten.div.Tensor_mode` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-14 08:56:25 +05:30
Tanyo Kwok 0d4445eaf9
Fix: 0-size tensors being regarded as unknown rank (#923) 2022-06-14 09:58:50 +08:00
Bob Adolf 0a7ba62438
Allow torch-mlir to support PyTorch extensions. (#895)
PyTorch allows new operators to be registered dynamically in modules.
Torch-mlir already makes it fairly straightforward to add support for
new operators, and this commit just extends that support to allow new
PyTorch ops to come from an external module.

This does *not* allow ops to be dynamically loaded into torch-mlir.
Torch-mlir must still be compiled with support built-in.

Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which
registers a demonstration op. It will not be imported by default when
importing torch_mlir. It's strictly for testing and documentation.

Adds an end-to-end test for the `torch_mlir_custom_op_example::identity` op.

With all these changes, we should now be actively testing PyTorch extension
support with all future patches.
2022-06-13 14:51:30 -07:00
powderluv 02b917f769
Change to the real PackedParams.h location (#929)
Also update the PyTorch nightly URL
2022-06-10 14:43:52 -07:00
powderluv 4cdf4e7d47
Fix new location for PackedParams.h (#928)
Looks like they renamed its location
2022-06-10 14:30:31 -07:00
Tanyo Kwok e70d4f732d
Fix class_annotator_pybind.h header guard (#924)
merging to unblock builders
2022-06-10 11:58:26 -07:00
powderluv 6615add806
Fix the new header location (#926)
Seems to have moved in the latest nightly
2022-06-10 11:57:58 -07:00
Maksim Levental 5c85ac3100
Handle `nn.Linear(..., bias=False)` case for TorchToLinalg (#919) 2022-06-08 21:13:43 -05:00
Henry Tu 298d095acf
Use double quotes instead of single quotes (#918) 2022-06-08 15:00:56 -04:00
Henry Tu c1da9edcf0
Generate underscore variant of functional ops (#915)
* Generate underscore variant of functional ops

* Do not apply `IsTrailingUnderscoreInplaceVariant` trait to underscore variant of functional op
2022-06-08 14:27:36 -04:00
Tanyo Kwok bd53998da8
Remove pybind deps from importer and annotator (#903)
* Remove pybind deps from importer and annotator
* Rename files to class_annotator_pybind.cpp/.h
2022-06-08 10:12:05 +08:00
Sean Silva e1b38e74dd Use upstream shape functions directly.
Now that upstream exposes them nicely, we can use them.

I noticed that we had added stuff into the upstream_shape_helpers.py
file (which was supposed to stay pristine), so some more shape functions
need to be upstreamed.

Going forward, all shape functions should be upstreamed similar to
https://github.com/pytorch/pytorch/pull/76889 instead of added in this
file.
2022-06-07 11:15:03 -07:00
Ramiro Leal-Cavazos 22c0893ec6
Update debug options in compilation errors (#913)
The flag for printing the IR after each pass is now prefixed with
"mlir". This commit updates the flag in the error reporting for the
compiler.
2022-06-07 10:55:54 -07:00
Vivek Khandelwal b95b3d844d [MLIR][TORCH] Add E2E support for aten.div.Tensor_mode op
This commit adds lowering of `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:44 +05:30
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-07 22:26:28 +05:30
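The decomposition from the commit above, written out with the corresponding PyTorch ops; a sketch, not the pass's code:

```
import torch

def baddbmm_decomposed(inp, batch1, batch2, beta=1.0, alpha=1.0):
    # aten.baddbmm == beta * input + alpha * bmm(batch1, batch2),
    # built from aten.bmm, aten.mul.Scalar, and aten.add.Tensor.
    return torch.add(torch.mul(torch.bmm(batch1, batch2), alpha), inp, alpha=beta)

inp = torch.randn(4, 2, 5)
b1, b2 = torch.randn(4, 2, 3), torch.randn(4, 3, 5)
ref = torch.baddbmm(inp, b1, b2, beta=0.5, alpha=2.0)
assert torch.allclose(baddbmm_decomposed(inp, b1, b2, 0.5, 2.0), ref, atol=1e-5)
```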
Jae Hoon (Antonio) Kim fe784fd900
Add Support for aten::scatter_add (#906) 2022-06-06 15:02:45 -04:00
Jae Hoon (Antonio) Kim 8a1839a17e
Add support for aten::arange.start_out (#905) 2022-06-06 15:02:27 -04:00
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes `aten.clamp_min` and `aten.clamp_max` op
into `aten.clamp` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-06-06 11:52:29 +05:30
Sean Silva ccc858f531 torch_mlir.compile: Fix API footgun
use_tracing=True was behaving unexpectedly because the handling of
single arguments was happening after the torch.jit.trace call.

This also fixes the check to specifically test for a torch.Tensor or
TensorPlaceholder so that both lists and tuples would be correctly
handled.
2022-06-05 18:10:07 -07:00
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Henry Tu 650f5a5008
Added support for native_dropout_backward (#892) 2022-06-03 14:08:51 -04:00
Henry Tu b7082a8d4e
Added support for native_dropout (#891) 2022-06-03 14:05:57 -04:00
Henry Tu a635fd2287
Added support for native_batch_norm_backward (#890) 2022-06-03 13:49:02 -04:00
Henry Tu bfe8ff4b42
Added support for embedding_dense_backward (#889) 2022-06-03 13:33:43 -04:00
Henry Tu a29903dfc8
Added support for native_layer_norm_backward (#888) 2022-06-03 13:15:23 -04:00
Vidush Singhal 0a913bc904
Add E2E support for AtenAllBoolOp (#874) 2022-06-01 18:20:25 -07:00
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of `aten.adaptive_avg_pool2d` op into
`aten.avg_pool2d` op. The current decomposition only supports cases where
input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Ramiro Leal-Cavazos b76c8c82dc
Emit `aten.unsqueeze` with mutating variants (#873)
The op `aten.unsqueeze` has a mutating variant. This commit adds
support for that variant.
2022-05-26 19:19:38 -05:00
Maksim Levental cec5aeedb0
add ci tests (#754) 2022-05-25 14:59:59 -05:00
Vivek Khandelwal 56e77d4213 [MLIR][TORCH] Add E2E support for aten.Bool.[float|int] op
This commit adds lowering of `aten.Bool.float` and `aten.Bool.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 21:18:34 +05:30
Vivek Khandelwal 014a6d16c7 [MLIR][TORCH] Add E2E support for aten.any.bool op
This commit adds lowering of `aten.any.bool` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 17:24:28 +05:30
Vivek Khandelwal bc9b2156e3 [MLIR][TORCH] Add E2E support for aten.sqrt.int op
This commit adds lowering of `aten.sqrt.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 16:50:39 +05:30
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
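For reference, the reduction being lowered, restated in plain PyTorch
for a finite ord p > 0:

```
import torch

x = torch.randn(8)
p = 3.0
# vector_norm(x, p) = (sum_i |x_i|^p)^(1/p)
ref = torch.linalg.vector_norm(x, ord=p)
assert torch.allclose(ref, (x.abs() ** p).sum() ** (1.0 / p))
```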
2022-05-19 15:48:15 -07:00
Sean Silva 2af53ce434 torch_mlir.compile: Add OutputType.RAW
This can help with development and reporting bugs.
2022-05-19 03:41:43 -07:00
Sean Silva ef9e4c95f2 torch_mlir.compile: add support for dynamic sizes.
We do this by introducing a TensorPlaceholder class, which can be used to
specify dynamic sizes. Internally, we canonicalize all example inputs
to TensorPlaceholder's.

This commit also adds some basic testing, which was missing before.
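A minimal sketch of the intended usage (the exact spelling of the
`TensorPlaceholder` helper is assumed here for illustration):

```
import torch
import torch_mlir

example = torch.randn(4, 3)
# Mark the batch dimension as dynamic (helper spelling assumed).
placeholder = torch_mlir.TensorPlaceholder.like(example, dynamic_axes=[0])
module = torch_mlir.compile(torch.nn.Tanh(), placeholder)
```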
2022-05-17 07:02:32 -07:00
Ashay Rane bb52a460cb
mlir: bump llvm tag to 5380e3 (#856)
In addition to updating the llvm-project submodule, this patch also:

1. updates shape functions and tests so that `func` and `call`
   operations refer to the `func` dialect
2. avoid duplicate registration of dialects
2022-05-16 12:54:35 -07:00
Ramiro Leal-Cavazos 96f90efd16
Add shape info to `rand_like` + support for `dtype` flag (#851)
The op `aten.rand_like` was missing a shape function, unit tests, and
the `dtype` argument was being ignored in its decomposition. This
commit fixes all three things.
2022-05-12 16:00:59 -07:00
Yi Zhang ec0e9e0bc7 Add -s flag to run e2e tests sequentially
A user might want to avoid the extra layer of the multiprocessing
library for debugging purposes. In such cases, the -s flag can be used
to force sequential execution.
2022-05-11 21:16:41 -04:00
Vivek Khandelwal f15d257aac [MLIR][TORCH] Add support for ceil_mode = true for pooling ops
This commit adds support for aten.max_pool2d, aten.max_pool2d_with_indices,
and aten.avg_pool2d op for the cases where ceil_mode = true.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-11 12:52:47 +05:30
Vivek Khandelwal c69a1e5688 [MLIR][TORCH] Add E2E support for ScalarImplicit, Int.Scalar op
This commit adds lowering of `aten.ScalarImplicit` and `aten.Int.Scalar` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-10 22:40:49 +05:30
Prashant Kumar 12b3af70d3 [TORCH] Add folding of aten.detach op.
`aten.detach` op is folded and returns its first operand, since it is
essentially an identity function (it only drops the `requires_grad`
autograd tracking).
2022-05-10 21:54:45 +05:30
Prashant Kumar 2b1b0f6e19 [LINALG] Add support for preserve memory format in aten_empty_like op.
The preserve memory format specifies that `if any of the input tensors is in channels_last format,
the operator output should be in channels_last format`, and hence it can be
added as-is in the aten_empty_like op.
2022-05-10 09:37:55 +05:30
Yi Zhang 5a6210b35b Workaround to make CI pass 2022-05-09 12:56:20 -04:00
yuhao 2e6a9c084e Update torch_mlir_tensor.py
typo
2022-05-07 21:46:10 -04:00
Yi Zhang 28be6511d2 Fix type promotion code for scalar only operations
Fix the type promotion code for scalar-only operations to return a
Torch type, which is the type tracked in ValueKnowledge.scalarType.

- Fix `getPromotedResultScalarType` to return Torch type.
- Add `getBuiltInTypeForTorchScalar` helper to convert scalar type
to builtin type before passing to the next level type promotion
helper `updateResultTypeState`.
- Add `setScalarType` helper to make setting ValueKnowledge.scalarType
  easier.
2022-05-07 10:37:21 -04:00
Vivek Khandelwal b20679e1b8 [MLIR][TORCH] Modify aten::dropout op description
Signed-Off By: Vivek Khandelwal vivek@nod-labs.com
2022-05-06 11:15:52 +05:30
Yi Zhang 2ed90741eb Make e2e testing parallel
This change makes the e2e testing run in parallel using the
multiprocessing Python module.
2022-05-05 21:27:58 -04:00
Vivek Khandelwal 96fabc0036 [MLIR][TORCH] E2E support for [ge|ceil].float, [ge|ne|gt].float_int op
This commit adds lowering of `aten.ge.float`, `aten.ge.float_int`,
`aten.ne.float_int`, `aten.gt.float_int` and `aten.ceil.float` op.
This commit also fixes formatting for the files scalar.py and scalar_comparison.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-05 21:48:35 +05:30
Yi Zhang 9f7264a7a4 Add support for scalar type propagation
The main changes are:
- Added `ValueKnowledge.scalarType` to track scalar type information.
- Added `ValueKnowledge.kind` to indicate the value kind.
- Modified the meet and join helper functions. The ValueKnowledge has
slightly more complicated state now, so the meet and join functions need
to look at the `kind` field in addition to just the type field.
2022-05-04 16:57:56 -04:00
Gaurav Shukla 4b911ada40 [LINALG] Add E2E support for `aten.mean.dim` op
- This commit adds support for `aten.mean.dim` op.
- It also adds a new test script `stats.py` for statistics related ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-05-04 20:11:42 +05:30
Sean Silva ab5ad7af09 Add tracing support to `torch_mlir.compile`.
This also has a fix for the adjustment of types of TupleConstruct
inputs, which I found when using this new functionality on a model.

Some scenarios in tracing create situations where the output of
TupleConstruct has a more refined type than the inputs.

This introduces a helper `adjustStaticInformationForValues` which
subsumes the `derefineValues` helper and the tensor static information
adjustment we were doing.
2022-05-03 09:08:40 -07:00
Vivek Khandelwal c0634bc996 [MLIR][TORCH] Add E2E support for aten.to.dtype_layout op
This commit decomposes `aten.to.dtype_layout` op into `aten.to.dtype` op.
This commit also fixes the formatting for the file type_conversion.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-03 12:48:58 +05:30
gpetters94 c4dcdd1e34
Add aten.flip (#817) 2022-05-02 16:01:15 -04:00
Vivek Khandelwal 8a06419980 [MLIR][TORCH] Add E2E support for aten.masked_fill.Scalar op
This commit adds lowering of `aten.masked_fill.Scalar` op.
This commit also fixes the formatting of the file constant_alloc.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 22:27:33 +05:30
Vivek Khandelwal 4b11284440 [MLIR][TORCH] Add E2E support for aten.avg_pool2d op
This commit adds lowering of `aten.avg_pool2d` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-02 12:31:44 +05:30
Prateek Gupta 81ee5bb58c [TORCH][MLIR] Fix ConstantPad2dStaticModule test.
This commit fixes the `ConstantPad2dStaticModule` test case by adding
the lowering of `aten.pad` operation. Previously the test case
mapped to `aten.constant_pad_nd` operation.
The `aten.pad` now decomposes into `aten.constant_pad_nd` operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 21:57:01 +05:30
Ashay Rane 809f240f01
importer: add initial support for loading BFloat16 tensors (#761)
This patch updates the `torch_mlir::convertTensorToMlirElementsAttr()`
method to enable the creation of tensors whose base type is BFloat16.
This patch also adds a test to validate the IR generation, and it
updates the test for importing tensors of various types.
2022-04-29 09:01:49 -07:00
Prateek Gupta e1db318a3c [TORCH][MLIR]Add lowering for control flow operations.
1. This commit adds lowering of "while-like" prim loop to scf.while
operation.
2. Adds lowering of "for-like" prim loops to scf.for operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 16:25:58 +05:30
Yi Zhang 7be9783f16 Fix the input tensors inplace update issue for e2e tests
Fix the in-place tensor update issue we had, where the TorchScript
execution would update the input value in place, resulting in the
actual test not being able to see the original input value.
2022-04-28 11:43:54 -04:00
Sean Silva 44c7b181d3 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float op"
This reverts commit 564734b2d7.
2022-04-28 07:49:58 -07:00
Sean Silva eff144c0b7 Revert "[MLIR][TORCH] Add E2E support for aten.ge.float_int op"
This reverts commit 1f102cc400.
2022-04-28 07:49:58 -07:00
Sean Silva 7669ee4e4a Revert "[MLIR][TORCH] Add E2E support for aten.ne.float_int op"
This reverts commit 51dd462592.
2022-04-28 07:49:58 -07:00
Sean Silva 5ef9f501fa Revert "[MLIR][TORCH] Add E2E support for aten.ceil.float op"
This reverts commit 78f5747568.
2022-04-28 07:49:58 -07:00
Vivek Khandelwal ab0eafb617 [MLIR][TORCH] Add test cases for index_put op and fix formatting for index_put.py
This commit adds more test cases `aten::index_put` op.
This commit also fixes formatting issues with the test file index_put.py

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 13:41:47 +05:30
Vivek Khandelwal e57e1968bc [MLIR][TORCH] Add E2E support for aten.index_put.hacked_twin op
This commit decomposes `aten.index_put.hacked_twin` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 13:41:47 +05:30
Vivek Khandelwal 78f5747568 [MLIR][TORCH] Add E2E support for aten.ceil.float op
This commit adds lowering of `aten.ceil.float` op.
This commit also fixes formatting for the file scalar.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-28 11:49:35 +05:30
Yi Zhang 86eb493a44 Change to AnyTorch* except for Torch_X ones 2022-04-27 14:18:52 -04:00
Bob Adolf 0667a5b3ae
Expand checks against PyTorch C++ ABI settings. (#777)
Compiling torch-mlir against a source version of PyTorch or an official
wheel compiled with the new C++ stdlib ABI fails, as torch-mlir doesn't
know how to set compiler flags to remain compatible. This changes the
way torch-mlir looks at PyTorch and tries to more closely match the ABI
settings, regardless of whether it's the common official wheel or some
other version.
2022-04-27 10:44:46 -07:00
Vivek Khandelwal 51dd462592 [MLIR][TORCH] Add E2E support for aten.ne.float_int op
This commit adds lowering of `aten.ne.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 1f102cc400 [MLIR][TORCH] Add E2E support for aten.ge.float_int op
This commit adds lowering of `aten.ge.float_int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal 564734b2d7 [MLIR][TORCH] Add E2E support for aten.ge.float op
This commit adds lowering of `aten.ge.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Vivek Khandelwal f5b6c4b601 [MLIR][TORCH] Add E2E support for aten.div.float op
This commit adds lowering of `aten.div.float` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-27 21:16:48 +05:30
Sean Silva 73cc2ac152 Ensure that imported function input type and block arg types are consistent.
I wasn't able to find exactly what frontend situation created it, but
`torch.jit.trace` will sometimes create functions where the
`jit::Block`'s param node has refined tensor types. So we need to adjust
the function's formal param types to those refined types.
2022-04-27 08:01:23 -07:00
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00
Maksim Levental 693f79a2b6
Fix test fails due to upstream PyTorch change (#793)
* Add to eager tests to xfail while they are fixed.

Also XFAIL ConstantPad2dStaticModule_basic.

* Fix test fails due to upstream PyTorch change.
2022-04-25 12:34:32 -07:00
Prashant Kumar 5cdef0213d [LINALG] Fix bug in i64 vs i32 type comparison.
Comparing index types instead of integer types solves the problem.
2022-04-22 08:09:58 +05:30
powderluv cc3a4a58ef
Add oneshot release snapshot for test/ondemand (#768)
* Add oneshot release snapshot for test/ondemand

Add some build scripts to test new release flow based on IREE.
Won't affect current builds; once this works well we can plumb it
in.

Build with manylinux docker

* Fixes a few issues found when debugging powderluv's setup.

* It is optional to link against Python3_LIBRARIES. Check that and don't do it if they don't exist for this config.
* Clean and auditwheel need to operate on sanitized package names. So "torch_mlir" vs "torch-mlir".
* Adds a pyproject.toml file that pins the build dependencies needed to detect both Torch and Python (the MLIR Python build was failing to detect because Numpy wasn't in the pip venv).
* Commented out auditwheel: These wheels are not PyPi compliant since they weak link to libtorch at runtime. However, they should be fine to deploy to users.
* Adds the --extra-index-url to the pip wheel command, allowing PyTorch to be found.
* Hack setup.py to remove the _mlir_libs dir before building. This keeps back-to-back versions from accumulating in the wheels for subsequent versions. IREE has a more principled way of doing this, but what I have here should work.

Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
2022-04-21 02:19:12 -07:00
Prashant Kumar 33c9d256ea [REFBACKEND] Add support for returning multiple different return types.
Added dynamic registration of the return function to the execution
engine. This makes sure that different/multiple return types are supported.
Also, updated the .style.yapf indentation to 4.
2022-04-21 09:02:30 +05:30
Sean Silva 075464fa74 Add a new `torch_mlir.compile` method.
This makes it much easier to convert models and hides all the
ClassAnnotator complexity.
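A minimal sketch of calling the new method (the output-type spelling is
assumed here for illustration):

```
import torch
import torch_mlir

class Tanh(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

module = torch_mlir.compile(Tanh(), torch.randn(2, 3),
                            output_type="linalg-on-tensors")
print(module)
```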

This also adds a new example `torchscript_resnet18_all_output_types.py`
which shows the ResNet18 IR for all output types.

Also,

- This moves `run_pipeline_with_repro_report` to
  `torch_mlir.compiler_utils`.
2022-04-20 10:06:01 -07:00
Sean Silva 3b5310d6d2 Move COMMON_TORCH_MLIR_LOWERING_XFAILS into test_suite
That way, downstreams don't have to duplicate this list.

Also, remove "external config" feature, since it is subsumed by just
importing the test suite.
2022-04-19 14:32:58 -07:00
Vivek Khandelwal 769f3a8870 [MLIR][TORCH] Add E2E support for max_pool2d_with_indices op
This commit adds lowering of `max_pool2d_with_indices` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-18 21:05:19 +05:30
Ashay Rane d3c08376af
test: add end-to-end test for aten.neg (#760) 2022-04-15 12:37:57 -07:00
Ashay Rane a893c7d5cf
Add shape transfer function and lowering to linalg for aten.neg (#759)
* shape: add shape transfer function for aten.neg

Prior to this patch, the list of shape transfer functions did not
include `aten.neg`, which resulted in errors like below.

```
error: unsupported by backend lowering: tensor with unknown rank or dtype
note: see current operation: %0 = "torch.aten.neg"(%arg0) :
  (!torch.vtensor<[256,256],f32>) -> !torch.vtensor<*,f32>
note: this is likely due to a missing shape transfer function in shape_lib_gen.py
```

This patch fixes the problem by adding a shape transfer function to
reflect the point-wise nature of this operation.

* linalg: add translation of aten.neg operation

This patch adds a translation rule to lower `aten.neg` operations on
tensors to an `arith.negf` operation wrapped inside a `linalg.generic`
operation.  This patch also adds a rudimentary test.
2022-04-15 11:11:22 -07:00
Vivek Khandelwal 1bccb4fc8a [MLIR][TORCH] Add E2E support for aten::max_pool2d_with_indices_backward op
This commit adds lowering of `aten::max_pool2d_with_indices_backward` op.

This commit also fixes formatting issues in basic.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-04-14 21:46:47 +05:30
Maksim Levental 24f9de7120
Fixes https://github.com/llvm/torch-mlir/issues/751 where `torch.bool` is parsed as signless `i1`. (#752) 2022-04-13 12:28:27 -05:00
Maksim Levental d46f169c1a
Fix kwarg annotation in eager (#747) 2022-04-11 17:35:42 -05:00
Maksim Levental 66de821eaf
small framework plus build_script_function (#745) 2022-04-11 16:53:52 -05:00
Maksim Levental 18ef40acaf
Fixes a bug in use of upstream `normalize_function` in our `normalize_args_kwargs` (in eager mode) and introduces unit tests. (#740)
NB: `shouldnt_normalize2` and `shouldnt_normalize3` currently XPASS i.e., args *will* successfully normalize despite being incorrect due to an [upstream bug](https://github.com/pytorch/pytorch/issues/75342).
2022-04-11 16:17:44 -05:00
gpetters94 9ec0683e92
Add 2D case for convolution (#693) 2022-04-08 00:47:57 -04:00
gpetters94 fa0b24a73c
Rename optional list types (#643) 2022-04-07 18:15:51 -04:00
Prashant Kumar 1d5b5a89e8 [LINALG] Add torch.layout information
torch.layout information has been added.
2022-04-07 20:47:49 +05:30
Prashant Kumar fb8cb0c5f3 [LINALG] Add the lowering of `aten.ne.Scalar` op
The lowering of `aten.ne.Scalar` op has been added to
the linalg backend.
2022-04-05 21:07:28 +05:30
Ramiro Leal-Cavazos 5620fe030e
Add 1D, weight, and reduction support to nll_loss_backward (#729)
This commit adds the following support to the op `nll_loss_backward`:
- `input` tensor can be rank-1
- `weight` parameter
- `reduction` parameter
- `target`, `grad_output`, `total_weight` can be rank-0
- Checks that input tensors are of the expected type
2022-04-04 10:57:49 -07:00
Sean Silva 14cf87633c
Add link to forum post describing `__torch_dispatch__` 2022-04-01 10:10:43 -07:00
Ramiro Leal-Cavazos 51d4d55f8a
Add support for multi-dim input to `index_put_impl` (#722)
This commit adds support for multi-dimensional tensors as input to the
`_index_put_impl_` op. The support was to some degree already there,
since `ScatterOp` already supports multi-dimensional tensors. This
commit also adds a bit more error checking to `index_put` and
refactors the code for creating `ScatterOp`s to mimic the way one
would make a `Linalg::GenericOp`.
2022-03-31 09:27:21 -07:00
Sean Silva c17c0a6ba2 Fix for 0-size dim inferred incorrectly.
The issue was in the canonicalizer for torch.aten.ge.int -- in cases
where the operands were swapped, it would miscompile. This issue is
fixed and folding support generalized to `torch.aten.size.int < 0` as
well.

Fixes #716
2022-03-30 16:36:15 -07:00
Gaurav Shukla 969785d1b6 [LINALG] Add E2E support for `aten.where.[Scalar|ScalarSelf|ScalarOther]` ops
This commit decomposes different variants of `aten.where.*` op into
`aten.where.Self` op. It covers `aten.where.Scalar`,
`aten.where.ScalarSelf` and `aten.where.ScalarOther` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-30 20:36:48 +05:30
Vivek Khandelwal 2597c481f6 [MLIR][TORCH] Add E2E support for aten.new_empty op
This commit decomposes `aten.new_empty` op into `aten.empty.memory_format` op.

This commit also made a dtype fix to the constant tensor allocation like ops.
Earlier the dtype for the result was inferred from the result type; now, it's
being evaluated as per the original definition of the op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-30 13:21:01 +05:30
Sean Silva 140babd952 Add minimal support for Union types.
A recent PyTorch commit made ConstantPad2d call a helper function with a
`Union[int, float]` type annotation. This commit adds minimal support for
representing and dealing with that.
https://github.com/pytorch/pytorch/pull/73287

Changes:
- Adding support for `!torch.union<T1, T2, T3>`/`Torch::UnionType`,
  along with the importer and CAPI code.
- Add support in isValidSubtype for union types.
- Adding a canonicalizer for `torch.derefine` to help simplify some code
  that derefines to a UnionType (this also fixes #664).

There is still more work to do for really supporting UnionType well,
such as canonicalizing UnionType's so that they can be compared with
pointer equality.
2022-03-29 17:45:48 -07:00
Maksim Levental 25ba51b2af
This commit decomposes aten._reshape_alias op into aten.view op. (#690) 2022-03-28 23:54:28 -05:00
Maksim Levental 3e999beaea
Small bug fixes in eager mode (#691) 2022-03-28 13:31:07 -05:00
Sean Silva 0378c75b35 Centralize all test serialization logic. 2022-03-28 10:17:13 -07:00
Sean Silva 6b637a9fd9 Move e2e test definitions into the `torch_mlir_e2e_test` package
This is the first step to making the e2e framework convenient to use
by downstream backends.
2022-03-25 13:56:41 -07:00
Gaurav Shukla 02b6d04eb4 [LINALG] Add E2E support for `aten.zero_` op
This commit adds decomposition of `aten.zero_` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-25 12:46:50 +05:30
Sean Silva 94df096c11
Add note to not edit upstream_shape_helpers.py 2022-03-24 09:32:19 -07:00
Qiang Fu f7c7bb800c
Add non-default dtype support for a few elementwise math ops. (#687)
* fix type inference
* fix Torch2Linalg conversion
* add test cases
2022-03-23 13:35:43 -07:00
max fe8ac57e6d This PR implements an eager mode backend for PyTorch through the torch-mlir framework. This is accomplished by overriding the `__torch_dispatch__` class method on wrapper subclass `TorchMLIRTensor(torch.Tensor)`.
Effectively, this mode works by compiling op by op as the NN is eagerly executed by PyTorch. Entailed in that compilation is building a representation of the op that can be `torch.jit.script`ed, importing using `ModuleBuilder`, and then executing (e.g., with `RefBackendLinalgOnTensorsBackend`). This mode includes a fallback to conventional PyTorch if anything in the torch-mlir compilation process fails (e.g., unsupported op).
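A stripped-down sketch of the interception point itself (no compilation,
just logging and delegation; the wrapping/unwrapping details vary):

```
import torch

class InterceptingTensor(torch.Tensor):
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        print("dispatching", func)  # a real backend would compile here
        # Unwrap to plain tensors so the delegated call doesn't re-enter.
        args = [a.as_subclass(torch.Tensor) if isinstance(a, cls) else a
                for a in args]
        out = func(*args, **(kwargs or {}))
        return out.as_subclass(cls) if isinstance(out, torch.Tensor) else out

x = torch.randn(3).as_subclass(InterceptingTensor)
y = x * 2  # prints the dispatched ATen op, then runs it eagerly
```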

Currently, all e2e tests pass except for two that involve an upstream PyTorch bug (https://github.com/pytorch/pytorch/issues/74400).

High priority next steps:

1. A compile cache in order to speed up reruns of the same NN.
2. Integration with IREE (though not in this repo).
3. Integration with `torch.distributed`.
2022-03-22 14:42:57 -07:00
Gaurav Shukla 7c3ba25238 [LINALG] Add decomposition of `aten.dropout` op
- This commit adds decomposition of `aten.dropout` op. It also covers the
  training mode of the same op.
- It also adds lowering of `aten.sub.float` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-22 13:14:49 +05:30
Sean Silva 729402c3f4 Reduce compilation time for TorchOps.cpp.inc
The `assemblyFormat` stuff (which generates unrolled, per-op C++ code)
was taking up a lot of compile time, and all the ops are essentially
printed with the same logic. So this PR makes them all call the same
helper function. This is done by using
`let hasCustomAssemblyFormat = 1` and then implementing `FooOp::parse`
and `FooOp::print`.

Additionally, the `Generated*Ops.td` files are all collapsed into just
`GeneratedTorchOps.td` (there is no reason to have the files separate,
since the files are very large anyway so one is always having to search
within them -- editors don't care that the file to search is now a bit
bigger :) ).

This reduces TorchOpsODSGenerated.cpp compile time (which is now
GeneratedTorchOps.cpp) from 39 to 31 seconds on my machine. This is
actually less than I expected, but this PR is an overall cleanup to the
code anyway. The next step will be to introduce (better) functionality
upstream for sharding the TorchOps.cpp.inc file, so that we can truly
parallelize the O(#ops) costs. This is also necessary, because after
this PR, TorchDialect.cpp is now the slowest file to compile, due to the
`addOperations<... all the ops ...>` call, which needs to be sharded
too.
2022-03-21 14:42:26 -07:00
Vivek Khandelwal 5b9bdfaf3f [MLIR][TORCH] Add E2E support for aten._to_copy op
This commit decomposes `aten._to_copy` op into
`valsem.aten.copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 4c0cd5c23d [MLIR][TORCH] Add E2E support for aten.expand_as op
This commit decomposes `aten.expand_as` op into `aten.broadcast_to` op.
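The equivalence in plain PyTorch terms:

```
import torch

x = torch.randn(1, 4)
y = torch.randn(3, 4)
# expand_as(y) is just a broadcast to y's shape.
assert torch.equal(x.expand_as(y), x.broadcast_to(y.shape))
```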

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 12:47:39 +05:30
Vigilans 63fb1e5aad Bump LLVM at 8361c5da30588d3d4a48eae648f53be1feb5cfad 2022-03-18 13:16:14 -04:00
Prateek Gupta 7256c9e395 [TORCH][MLIR] Fix the return types of `aten.native_layer_norm`.
This commit fixes the 2nd and 3rd return types of the `aten.native_layer_norm`.
Previously the mean and rSTD were returned with reduction dims removed.
This commit fixes this and keeps the reduction dims of the results.
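A small check of the fixed convention (assuming the op is reachable as
torch.native_layer_norm):

```
import torch

x = torch.randn(2, 3, 4)
out, mean, rstd = torch.native_layer_norm(x, [4], None, None, 1e-5)
# Reduction dims are kept (as size 1) rather than removed.
assert mean.shape == rstd.shape == (2, 3, 1)
```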

Signed-Off-By: Prateek Gupta <prateek@nord-labs.com>
2022-03-17 12:08:32 +05:30
Vivek Khandelwal 8da7d90611 [MLIR][TORCH] Add E2E support for aten.index_put op
This commit decomposes `aten.index_put` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-16 22:02:02 +05:30
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Ramiro Leal-Cavazos 0bcc6d1075
Add maximize-value-semantics support for multiple non-value tensor inputs (#659)
This commit adds value semantics support for ops such as
`aten.view_as` and `aten.expand_as` that take two non-value 
tensors as input.
2022-03-15 18:13:45 -07:00
Sean Silva 92da4988f0 Improve "pseudo" op terminology.
The term "pseudo" is very vague and was getting confusing (I felt I had
to explain it in every comment referencing it). Instead, rework the
"pseudo" ops to instead be named:

- MLIR Syntax: `torch.valsem.*`
- C++ / ODS: `ValsemVariant*Op`

This makes it clear what the concept is, and avoids confusion with other
things that might be called "pseudo", since these are very specific and
should be 100% consistently named w.r.t. the non-valsem-variant ops that
they correspond to.
2022-03-15 17:57:52 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.
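For flavor, a pointwise shape function in that style might look like the
following (the naming and registration details here are illustrative, not
the actual library contents):

```
from typing import List

def unary_shape(self: List[int]) -> List[int]:
    # Pointwise ops: the output shape is the input shape, unchanged.
    return self
```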

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
1. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
1. Code and tests for SimplifyShapeCalculations.
1. shape_lib_gen.py
1. Code and tests for new RefineTypes pass.
1. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
1. New ReadOnly trait inferred from the registry.
1. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
2022-03-15 12:41:58 -07:00
Prashant Kumar b6d13301fc [TORCH] Fix the location of packed_params.
The location of packed_params.h has changed in the ATen source.
2022-03-14 17:52:19 +05:30
Prateek Gupta 3d9ba5e525 [MLIR][TORCH] Add E2E support for aten.erf op.
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-09 22:22:03 +05:30
Vivek Khandelwal 1a2a9e066f [MLIR][TORCH] Add TorchToTMTensor pass
This pass is added to lower ops, which can not be lowered
via the TorchToLinalg pass, such as `torch.bincount` op.
This pass also uses torch-mlir's TMTensor Dialect to lower the
complex ops.

Also add torch.bincount op lowering with the help of TMTensor dialect
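For context, torch.bincount has a data-dependent output length:

```
import torch

t = torch.tensor([1, 2, 2, 5])
# One count per value in [0, t.max()]:
print(torch.bincount(t))  # tensor([0, 1, 2, 0, 0, 1])
```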

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
Gaurav Shukla e57d3f9774 [LINALG] Fix `aten.bernoulli` op lowering
- This commit adds E2E support for `aten.rand_like` and
  `aten.bernoulli_.Tensor` ops.
- The `aten.bernoulli(x)` was implemented as:
  `aten.bernoulli(x) = rand_like(x) < 0.5`, assuming 0.5 as default
  probability, whereas according to the pytorch documentation:
  https://pytorch.org/docs/stable/generated/torch.bernoulli.html#torch.bernoulli
  the input x in `aten.bernoulli(x)` is itself a tensor containing
  probabilities to be used for drawing the binary random number.
- So this commit fixes the `aten.bernoulli(x)` implementation as:
  `aten.bernoulli(x) = rand_like(x) < x`.
- It also fixes the case where the input to `aten.bernoulli_.float` is
  an integer tensor. In this case the input must be casted to float type
  before passing it as operand to `aten.rand_like` op.
  `aten.bernoulli_.float(x, p) = rand_like(float(x)) < p`.
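The corrected semantics, restated as executable PyTorch:

```
import torch

x = torch.rand(3, 4)  # per-element probabilities in [0, 1]
# Each entry is 1 with probability x[i, j], as in torch.bernoulli(x).
samples = (torch.rand_like(x) < x).to(x.dtype)
```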

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-05 09:38:22 +05:30
Vivek Khandelwal af551bd9cd [MLIR][TORCH] Add E2E support for aten.full_like op
This commit decomposes `aten.full_like` op into `aten.empty_like`
and `aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Vivek Khandelwal d61ae92eee [MLIR][TORCH] Add E2E support for aten.full op
This commit decomposes `aten.full` op into `aten.empty` and
`aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Yi Zhang 486f95e84f Add bufferization pass for TMTensor ops
The pass is mostly borrowed from the BufferizeAnyLinalgOp pass in mlir
upstream with some minor changes. At a high level, it's a naive partial
bufferization pass which allocates new buffers for all the output
tensors. The initial value of an output buffer is copied from the
original buffer if there are uses of the original value.

One difference from the linalg bufferization pass is the way to tell if
the loop body uses the init value of the output operand. For TMTensor ops,
it differs from op to op because the payload region doesn't represent
the entire loop body.
2022-03-03 11:39:14 -05:00
Yi Zhang 1d285f0153 Add aten.hardtanh e2e support. 2022-03-02 12:28:06 -05:00
Prashant Kumar 819f29316f Decompose aten.silu op
Decomposition of the aten.silu op is added as silu(x) = x * sigmoid(x).
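The identity is easy to check numerically:

```
import torch
import torch.nn.functional as F

x = torch.randn(5)
assert torch.allclose(F.silu(x), x * torch.sigmoid(x))
```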
2022-03-01 23:24:19 +05:30
Vivek Khandelwal ddd45d6068 [MLIR][TORCH] Add E2E support for aten.new_zeros, aten.new_ones op
This commit adds lowering of `aten.new_zeros` and `aten.new_ones` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-01 22:09:47 +05:30
Prashant Kumar 7c637eebc3 [LINALG] Decompose aten_hardswish op.
`aten.hardswish` op is decomposed into (x/6) * Relu6(x+3).
2022-02-25 21:59:27 +05:30
Prashant Kumar abbde7d439 [TORCH] The torch definition related to aten.gelu has changed.
A new `str` argument `approximate` is added.
2022-02-18 21:57:46 +05:30
Nirvedh f8cb32faf0 LLVM bump
Major changes: opTrait changed to Trait, selectOp moved to the arith
dialect, assertOp moved to the cf dialect.
2022-02-16 15:28:13 -05:00
Gaurav Shukla cd21dda867 [LINALG] Add E2E support for `aten.Hardsigmoid` op
This commit adds lowering of `aten.Hardsigmoid` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 02:35:18 +05:30
Ramiro Leal-Cavazos 00a6e9c1bb
[LINALG] Add value tensor variant to `fill_.Scalar` (#600)
This commit adds the op `PseudoAtenFillScalarOp` that represents
`AtenFill_ScalarOp` without the underscore. The approach is the same
as in commit dd998fa4d4.

Adding this op allows for a simpler and more consistent version of the
`empty` and `empty_like` op e2e tests.
2022-02-15 11:58:03 -08:00
Gaurav Shukla 41acde599b [LINALG] Add E2E support for `aten.[le|ge].Scalar` ops
- This commit adds lowering of `aten.le.Scalar` and `aten.ge.Scalar` ops
  as a part of `convert-torch-to-linalg` pass.
- It also creates a new test script `elementwise_comparison.py` for all
  element-wise comparison ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 12:21:09 +05:30
Gaurav Shukla f00d1686c8 [LINALG] Add E2E support for `aten.[Bool.Tensor|Float.Tensor]` op
- This commit adds lowering of `aten.Bool.Tensor` and
  `aten.Float.Tensor` op as a part of `convert-torch-to-linalg` pass.
- It also adds support for returning bool types.
- It also fixes lowering of the `aten.Int.Tensor` op for non-zero rank
  input tensors.
- If a scalar number is converted to a 0-d tensor and passed on to the
  `aten.Float.Tensor` op, it folds to the scalar number.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-14 23:09:20 +05:30
Yi Zhang 9e7b6cab08 Add folder for aten.gt/lt.float 2022-02-14 12:34:01 -05:00
Henry Tu 73ac9a7e2e Added support for importing node prim::Constant with list type
Prior to this commit, importing a `prim::Constant` node with list type would result in an error since it was not supported. `ivalue_importer::importIValue` was modified to return the MlirValue corresponding to the root so its parent operation could be extracted.
2022-02-11 20:54:06 -05:00
Prashant Kumar 258660deb6 Add aten.bernoulli decomposition.
aten.bernoulli is decomposed to aten.gt.Tensor(aten.uniform(x), x).
2022-02-11 00:35:33 +05:30
Prashant Kumar 102c497c4c Add decomposition of _log_softmax op.
Decompose _log_softmax into log(softmax(x)).
2022-02-10 23:17:26 +05:30
Prateek Gupta 318946a650 [TORCH][MLIR] Add E2E support for `aten._unsafe_view` op.
This commit adds decomposition of `aten._unsafe_view` op into
`aten.view` op.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2022-02-10 22:28:58 +05:30
Ramiro Leal-Cavazos 9b89f8eb3f
[TORCH][MLIR] Add E2E support for aten.clone (#571)
This commit adds support for the aten.clone op.
2022-02-09 19:31:03 -08:00
Yi Zhang e09e2cbe70 Include IR dump options on e2e failure report 2022-02-09 11:19:34 -05:00
Gaurav Shukla 2fefe68ffd [TORCH][MLIR] Add E2E support for `aten.native_batch_norm` op
- This commit adds support for `aten.native_batch_norm` operation.
- The current implementation only supports inference mode of
  `aten.native_batch_norm` op.

Signed-Off-By: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-08 02:54:03 +05:30
Prashant Kumar ccf546f14c Add aten::nll_loss_backward op
The lowering of aten::nll_loss_backward op has been added
from the torch dialect to the linalg dialect. The changes have been made
as a part of the -torch-convert-to-linalg pass.

Signed-off-by: Prashant Kumar prashant@nod-labs.com
2022-02-04 21:57:53 +05:30
Yi Zhang 0cb216a1ad [Torch][Linalg] Add basic support for RNG
This PR include the following pieces:
- Add torch `Generator` type. `Generator` type is converted to i64 in
refbackend type converter.
- Add seed management support for the default global generator.
`torch_c.getNextSeed` op is used to get the seed. On refbackend, the
`torch_c.getNextSeed` is lowered to load/store from [0] of global
variable `default_generator` memref<i64> in `InsertRngGlobals` pass.
- Add `aten.uniform_` and testing as an example op for RNG ops. Add
`torch.pseudo.aten.uniform` op. It has the same operands and return as
the `aten.uniform_` from the op registry except for value semantics.
2022-01-31 18:56:42 -05:00
Yi Zhang 5d9a15263a [TORCH] Add aten.std e2e support 2022-01-31 15:17:49 -05:00
Prashant Kumar e58b66bc3b Add lowering of `aten.max.dim` op.
Lowering of `aten.max.dim` op has been added.
2022-01-31 21:41:22 +05:30
Liam Fitzpatrick 8bc028af05 Fold __is__ and unchecked_cast of derefine
The added e2e maxpool testcase from #545 was not getting a static shape
due to an unfolded prim.If when RefineTypes was called. This was because
of unfolded torch.aten.__is__ and torch.prim.unchecked_cast operators
with torch.derefine operands.
2022-01-28 17:54:40 -05:00
Yi Zhang e1b3e5bc92 Fix build failure 2022-01-28 13:21:36 -05:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
Yi Zhang ad4b9e0369 Minor fixes 2022-01-24 19:21:15 -05:00
Suraj Sudhir 5d6c4f48dc
[tosa] Enable tosa-to-linalg-named so Matmul works again (#530)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-19 12:10:04 -08:00
dan 3745f54489 Update external/llvm-project
- Add `qualified` to ods because of
https://reviews.llvm.org/D113873 and https://reviews.llvm.org/D116905
- Needed to revert https://github.com/llvm/torch-mlir/pull/520 as it
was based on an old torch version.
https://github.com/llvm/torch-mlir/pull/527 will bring this back with
a better design.
- Change ConvertAtenCatOp to use more accurate tensor shape info and
as much static info as possible to pass `tensor.insert_slice`
verification code added by https://reviews.llvm.org/D114715
- Other minor fixes
2022-01-18 13:25:42 -05:00
Yi Zhang 40efd2cb8e Revert "Add non-RNG aten ops to aten dialect."
This reverts commit c9a343267c.
2022-01-18 13:25:42 -05:00
Suraj Sudhir 5ded7d096f
[tosa] Add tosa-to-standard before tosa-to-linalg pass (#524)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2022-01-14 11:05:11 -08:00
Prateek Gupta c9a343267c Add non-RNG aten ops to aten dialect.
This commit adds the aten ops which do not require random number
support to the aten dialect. This commit also adds some of the missing
torch types.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-01-14 14:20:33 +05:30
Liam Fitzpatrick 077e55d756 Add support for constant_pad_nd
Note that to enable folding of the code coming from an example
like the ConstantPad2dStaticModule e2e test, support for other
operations had to be added/improved:
- aten::neg.int
- aten::eq.float
- aten::eq.str
- prim::Uninitialized
2022-01-11 10:25:25 -05:00
Vivek Khandelwal 35cf8d18f7 Add support for two return values
This commit adds support for two return values of type
memref f32 and i64.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-11 11:07:10 +05:30
Vivek Khandelwal ca662dc9cc [MLIR][TORCH] Add E2E support for aten.threshold, aten.threshold_backward op
This commit adds lowering of `aten.threshold` op
This commit adds lowering of `aten.threshold_backward` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-01-10 11:56:56 +05:30
Gaurav Shukla 3c40539b34 [TORCH][MLIR] Add E2E support for `aten.[ones_like|zeros_like]`
- This commit adds E2E support for `aten.ones_like` and
  `aten.zeros_like` ops.
- Adds support for non-None `dtype` argument of `aten.empty_like` op.
- All the unit test cases related to constant tensor allocation like ops
  are moved to a different file named `constant_alloc.py`.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-01-06 20:24:40 +05:30
Ramiro Leal-Cavazos 9afaacedbd Fix build error regarding missing types in torch::jit
This commit adds include statements of the file
`torch/csrc/jit/ir/ir.h` for files that use types from torch::jit.

Fixes https://github.com/llvm/torch-mlir/issues/506
2022-01-03 13:36:22 -06:00
Vivek Khandelwal 4486de5ef3 [MLIR][TORCH] Add E2E support for torch.arange op
This commit adds lowering of `aten.arange.start_step` op.
This commit decomposes `aten.arange` and `aten.arange.start` into
`aten.arange.start_step` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-27 22:45:48 +05:30
Gaurav Shukla a83004c806 [TORCH][MLIR] Fold trivial cases of `aten.to.dtype` and `aten.view` op
- It folds `aten.to.dtype` when the input tensor type and result type
are exactly the same.
- It folds `aten.view` when the rank of both the input tensor type and
  result type is unity.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-24 13:32:34 +05:30
Nirvedh 3cb46cecef Added aten::t() Op 2021-12-22 10:57:10 -05:00
Gaurav Shukla eddc09aa55 [TORCH][MLIR] Add E2E support for `aten.eq` and `aten.lt` ops
- Added E2E support for `aten.eq.Tensor` and `aten.lt.Tensor` ops. Both
  the operands are expected to be of the same type, i.e., type promotion
  is not addressed as a part of this commit.
- Added E2E support for `aten.eq.Scalar` and `aten.lt.Scalar` ops.
  Tensor operand type to Scalar operand type promotion has not been
  handled in this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-16 18:47:22 +05:30
Ramiro Leal-Cavazos 707c113463 Fix naming of results in ODS generator
This commit fixes the naming of results in the torch ODS generator
when dealing with multiple results. In particular, this commit appends
an index to each result name to guarantee that they are all unique.
2021-12-15 13:53:15 -06:00
Gaurav Shukla a778f990e9 [TORCH][MLIR] Add E2E support for `aten.ceil` op
This commit adds lowering of `aten.ceil` op as a part of element-wise
ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-12 01:15:47 +05:30
harsh 03b6edce68 Add where, gt, bucketize and reshape ops to Torch dialect
This patch adds the where, gt, bucketize and reshape
ops to the Torch dialect. These ops are present in the histogram
calibration module.
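Of these, bucketize is the least familiar; it maps each value to the
index of the bucket it falls into:

```
import torch

boundaries = torch.tensor([1.0, 2.0, 3.0])
values = torch.tensor([0.5, 1.5, 2.5, 3.5])
print(torch.bucketize(values, boundaries))  # tensor([0, 1, 2, 3])
```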

TEST: Successfully lowers ops to Torch dialect in histogram module.
2021-12-10 10:08:20 -08:00
Prateek Gupta cfc8de36f8
[MLIR][TORCH] Add E2E support for `aten.native_layer_norm`. (#470)
This commit adds support for aten.native_layer_norm operation. Here
the previous code for aten.layer_norm is tweaked a little bit to
accommodate both mean and variance values along with the layer norm
value. This commit also adds decomposition of aten.layer_norm into
aten.native_layer_norm, which was previously getting lowered directly
to linalg.

Signed-Off-By: Prateek Gupta<prateek@nod-labs.com>
2021-12-10 19:06:19 +05:30
Gaurav Shukla 5a47f92390 [TORCH][MLIR] Add E2E support for `aten.squeeze.dim` op
This commit adds lowering of `aten.squeeze.dim` op into
`linalg.TensorCollapseShape` op. Here, the dim(th) dimension of the
input tensor is not supposed to be dynamic.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-10 17:01:20 +05:30
Gaurav Shukla f34eb66124 [TORCH][MLIR] Add E2E support for [`aten.gt.Scalar`|`aten.where.self`]
This commit adds lowering of `aten.gt.Scalar` and `aten.where.self` as a
part of element-wise ops lowering.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-12-09 12:47:10 +05:30
Prashant Kumar c598e01529 Add support for passing & returning memref of bool types
Support for passing a memref of bool type as a function argument
and return value is added in the ref-backend.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-09 00:23:38 +05:30
Prashant Kumar 977b1b03ea Add aten::nll_loss_forward op lowering.
The op lowering has been added as a part of `torch-lower-to-linalg`
pass. This takes care of ignore_index but the weight and reduction
operands are still to be accounted for.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-12-07 17:11:08 +05:30
Vivek Khandelwal 46a2189a41 [MLIR][TORCH] Add E2E support for aten.bitwise_and.tensor op
This commit adds lowering of `aten.bitwise_and.tensor` op.

Signed-Off By: Vivek Khandelwal vivek@nod-labs.com
2021-12-02 21:06:15 +05:30
Vivek Khandelwal 46a0668b3b [MLIR][TORCH] Add E2E support for aten.mean and aten.numel op.
This commit adds lowering of `aten.mean` and `aten.numel` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2021-12-02 11:51:13 +05:30
Gaurav Shukla 73b27b32dc [MLIR][TORCH] Add E2E support for `aten.squeeze` op
This commit adds lowering of `aten.Squeeze` op into
`linalg.TensorCollapseShape` op. The size 1 dynamic dimensions are not
handled as a part of this commit.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-30 23:00:28 +05:30
ds1231h 9ad5954e41 aten.abs and aten.reciprocal to linalg 2021-11-30 11:31:55 -05:00
Yi Zhang 5d28549c2c Add folder for torch.aten.Int.Tensor
This is to fold the common pattern from Bert inference like:
```
%111 = torch.prim.NumToTensor.Scalar %110 : !torch.int ->
    !torch.vtensor<[],si64>
%112 = torch.aten.Int.Tensor %111 : !torch.vtensor<[],si64> ->
    !torch.int
```
2021-11-30 21:55:48 +05:30
Daniel Garvey 539511c19b
Add dropout op (#436)
Co-authored-by: dan <dan@nod-labs.com>
2021-11-29 12:30:03 -06:00
Liam Fitzpatrick 7616d28ce1 Add leakyrelu support 2021-11-27 23:04:46 +05:30
Prateek Gupta f461a7ebce
[TORCH][MLIR] Add E2E support for aten._softmax operation. (#431)
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-25 11:19:02 +05:30
nodlabs 67ce816fca lowered addcmul and addcdiv to linalg 2021-11-24 17:26:47 -05:00
Prashant Kumar ea7a30f9b9 Add e2e test for aten.log_softmax_back_data op
aten.log_softmax_back_data op lowering and required
tests have been added. Some NFC changes have also been made.

Signed-off-by: Prashant Kumar prashant@nod-labs.com
2021-11-19 00:08:28 +05:30
Gaurav Shukla 663fc1ef51 [MLIR][TORCH] Add E2E support for [`aten.mul.Scalar`|`aten.addmm`]
This commit adds lowering of `aten.mul.Scalar` and also adds
decomposition of `aten.addmm` to `aten.mul.Scalar`, `aten.add.Tensor`
and `aten.mm` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2021-11-18 22:26:41 +05:30
Prateek Gupta ecf78b9849
[TORCH][MLIR] Add E2E support for `aten.gelu_backward` operation. (#418)
This commit adds new operation `aten.gelu_backward` in the aten
dialect and adds lowering of this operation from aten to linalg.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-11-17 14:59:38 +05:30
Yi Zhang 0fe70994e5 Add support for multiple return values
This change is to unblock the work on some backprop ops returning more
than one tensor. We will need to think of a more scalable approach
in the future if more flexible return type combinations are needed.
2021-11-16 21:07:45 -05:00
Yi Zhang 53733933a4 Update llvm upstream to 0b17336f793108a7b10c3fa913039144ef1d0f61
Update AsmPrinter/Parser and MatchAndRewrite
2021-11-16 13:04:51 -05:00
Prashant Kumar 909f7d7171 Add e2e testing for aten_tanh_backward op.
The e2e testing for aten_tanh_backward op has been added.
The testing is done for ref_backend.
2021-11-09 11:28:49 -05:00
George Petterson 2764e86f02 Add Rsqrt 2021-11-09 11:08:28 -05:00
Yi Zhang 3bd9d2a4c7 Add e2e support for aten._softmax_backward_data.
Decompose aten._softmax_backward_data into aten math ops. Also decompose
`aten.size` to facilitate decomposing _softmax_backward_data.
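The math being decomposed, checked against autograd (1-D case for
brevity):

```
import torch

x = torch.randn(4, requires_grad=True)
y = torch.softmax(x, dim=0)
dy = torch.randn(4)
y.backward(dy)
# softmax backward: dx = (dy - sum(dy * y)) * y
yd = y.detach()
manual = (dy - (dy * yd).sum()) * yd
assert torch.allclose(x.grad, manual)
```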
2021-11-09 13:09:30 +05:30
Yi Zhang 05c4dd8e39 Add convertScalarToDtype helper.
This is to facilitate scalar type conversion in TorchToLinalg. As
part of adding the helper, this PR also:
- Updated `AtenAddTensorOp`, `AtenSubTensorOp` to use the helpers to
support more type variants.
- Added e2e type promotion testing.
- Added i32 memref return/arg type to support e2e testing.
2021-11-08 17:50:52 -05:00
George Petterson e23cabf3a9 Add log2 2021-11-08 16:19:59 -05:00
Wang Kangyu 4bb9b44775 Add lowering of "aten.pow.Tensor_Scalar" op
Add e2e support for torch.pow(Tensor, Float)
2021-11-08 09:19:50 -08:00
Prashant Kumar fd505db2c6 Adding support for returning elemental types.
Support for returning elemental types is added. Previously, only
memref types were supported as return types. All the hacky ways of
writing tests that return elemental types should now be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-08 22:20:48 +05:30
Wang Kangyu b33543af85 Add lowering of aten.floor op 2021-11-06 17:31:44 -04:00
nodlabs 5ff823ace9 lowered Sqrt to linalg
reused clang-format, as changes got deleted
2021-11-06 11:29:46 -04:00
Prashant Kumar ef897dbb19 Add lowering of `aten.log_softmax` op.
The `aten.log_softmax` op is decomposed into `aten.softmax` and
`aten.log` ops.
2021-11-03 22:10:05 +05:30
Prashant Kumar 127c7d8e27 Add lowering of `torch.log` op
The lowering of `torch.log` op has been added.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-02 21:18:00 +05:30
George Petterson 6dde5b347e Add rsub 2021-11-02 09:56:48 -04:00
Prashant Kumar 53b4275ef5 Add lowering of `aten.Int.Tensor` op.
The lowering of `aten.Int.Tensor` op has been added.
The changes have been made as a part of the `convert-torch-to-linalg` pass.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-11-01 21:58:08 +05:30
Sean Silva c46d48f9f5 Make error reporting a bit better.
- Split out TOSA in the CI.
- Add summary of unexpected test outcomes. This works better when there
  are many XFAIL'ing tests, as it only prints out the error_str on
  FAIL, not on XFAIL. Example here:
  https://gist.github.com/silvasean/c7886ec7b3d35c21563cb09f7c3407da
2021-10-28 13:20:16 -07:00
Sean Silva b02b65cf6e Fix for upstream Torch change.
After https://github.com/pytorch/pytorch/pull/65967 the `graph()` method
is only available on `torch::jit::GraphFunction`.

Fixes https://github.com/llvm/torch-mlir/issues/388
2021-10-28 11:12:05 -07:00
Prateek Gupta c33a2ca952 [TORCH][MLIR] Add E2E support for aten.permute.
This commit adds lowering of aten.permute to linalg.generic operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2021-10-28 10:25:26 -04:00
stephenneuendorffer 614b889dc6
Enable python extensions when building out of tree (#363) 2021-10-27 17:04:12 -07:00
Sean Silva 30df2ec71b Add min/max/clamp support.
Part of #380

Also
- BoolType is not considered as Scalar
- e2e framework fixes for nan handling
- `tu.rand(..., low=, high=)` support
- delete unused variable (fix warning)
- Add IouOfModule from #380 to e2e test suite (this is a common
  calculation in vision models)

2021-10-27 13:29:21 -07:00
Prashant Kumar 5009cbf55c Add lowering of aten.matmul op.
Lowering of `aten.matmul` op is added from torch to linalg dialect.
The different cases correspond to
https://pytorch.org/docs/stable/generated/torch.matmul.html.
TODO: Broadcasting in case of batch-matmul is yet to be taken care of.

Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
2021-10-26 12:45:09 -04:00
Boian Petkantchin e276dbbaa6
Add aten::gelu lowering (#374)
* Print more exception info on error during test execution

* Fix formatting

* Add aten::gelu lowering

Co-authored-by: Boian Petkantchin <boian@nod-labs.com>
2021-10-25 16:16:01 -07:00
Sean Silva a6943ef90c Rename `tosa-to-linalg-on-tensors` to `tosa-to-linalg`
The pass name changed upstream.
2021-10-25 20:43:54 +00:00
Stella Laurenzo a23d77100b Set some wheel building optimization options.
* Also adds a requirements.txt and updates docs to reference it versus stringy pip install.
* Adds doc with instructions on creating a wheel.

Fixes #370
2021-10-25 18:30:53 +00:00
Stella Laurenzo fe69bb339c
Bump llvm-project to 3d92722f74993969243d1400bc3257ca3d03902f. (#369)
* Picks up Python configure changes (was pinned to a bad intermediate commit).
* Uses the new mlir_configure_python_dev_packages() to ensure CMake python is found consistently.
* Fixes the JIT importer to build as a MODULE vs SHARED (needed for linking to Python as a module, per config changes).
* Adds some notes to the README to help folks build a smaller set focused just on this project.
2021-10-21 21:09:00 -07:00
Yi Zhang abfaf8c577 Add aten.ne.bool to make CI pass 2021-10-21 14:45:41 -04:00
George Petterson 8853dfbc74 Add broadcast 2021-10-19 13:33:31 -04:00
Yi Zhang a459e09ab7 E2e support for aten.softmax.int and aten.embedding
- Added a DecomposeComplexOps pass to decompose complex torchOps.
- Refactored `visitAtenArgmaxOp` and `visitAtenAnyDimOp` to
`visitReductionAlongDimIntOp`.
- Moved some helper functions into
torch-mlir/Dialect/Torch/Utils/Utils.h to be shared by multiple files.
- Added support for f64 tensor as argument and return types.
2021-10-18 17:57:45 -04:00
dan 7750d2173a add argmax lowering
Add argmax lowering from torch to linalg
2021-10-13 14:31:16 -04:00
Sean Silva 19e9fc4ef1 Bring some more order to the e2e error reporting situation.
- Move `run_pipeline_with_repro_report` to a more common place, and use it
  consistently
- Attach a `torch.debug_module_name` to the enclosing `builtin.module`
  op to allow for self-contained error reporting (not needing to pass
  the names around).
- Remove redundant error reporting in linalg_on_tensors_backend.py and
  tosa_backend.py (their respective backend abstract base classes now
  take care of the error reports themselves)
- Save off original value of sys.stderr, rather than always resetting to
  `sys.__stderr__`. This is just more hygienic, and allows nesting if
  desired.
2021-10-08 13:00:12 -07:00
Sean Silva 0c5c84d63d Add a basic TOSA E2E backend.
We lower through linalg-on-tensors and use RefBackend to run it.
This adds enough support for a "tanh" op. Adding more ops should be
fairly mechanical now that things are wired up. Run with:
```
./tools/torchscript_e2e_test.sh -c tosa
```

The backend structure is very similar to linalg-on-tensors based E2E
backends and is a nice parallel (see `tosa_backend.py`). Actually, this
forced a nice refactoring to the layering here. We removed
`torchscript-module-to-linalg-on-tensors-backend-pipeline` and instead
require separately running
```
torchscript-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline
```
This highlights that the step that lowers to the "torch backend contract"
of cleaned-up `torch` dialect ops is a critical step in the lowering.
Going forward, that is the key load-bearing contract of the torch-mlir
project, not the linalg-on-tensors backend contract.

Recommended review order:
- `TorchToTosa.cpp` / `TorchToTosa/basic.mlir`
- `python/torch_mlir_e2e_test/torchscript/configs/tosa_backend.py` and
  the new `utils.py` file there.
- `python/torch_mlir_e2e_test/tosa_backends/linalg_on_tensors.py` and
  `abc.py` in that directory for the TOSA backend e2e interface.
- other misc mechanical changes
2021-10-08 09:59:45 -07:00