Commit Graph

1144 Commits (a08ff0d7f2c4388689428b0acccc4bf8d77ced14)

Author SHA1 Message Date
powderluv 8fd084377d
Update CMakeLists.txt 2022-06-14 14:46:52 -07:00
powderluv dfc6f7c547
Update CMakeLists.txt
Emergency fix to unblock the nightly Release builder
2022-06-14 14:38:35 -07:00
Ramiro Leal-Cavazos 93f6d8e776
[LINALG] Add 0-rank case for `aten.permute` (#940)
The function `AffineMap::inferFromExprList` does not work if the first
vector of expressions is empty, because it uses these expressions to
obtain the context. This prevented `aten.permute` from working for
inputs of 0-rank. This commit adds support for 0-rank inputs.
2022-06-14 12:50:46 -07:00
Vivek Khandelwal 33fa8e7761 [MLIR][TORCH] Add decomposition of aten.floor_divide op
This commit adds the decomposition of `aten.floor_divide` op into
`aten.div.Tensor_mode` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
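A minimal sketch of the equivalence this decomposition relies on (assuming a recent PyTorch in which `floor_divide` actually floors rather than truncates):

```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

# floor_divide decomposes into div with a "floor" rounding mode.
assert torch.equal(torch.floor_divide(a, b),
                   torch.div(a, b, rounding_mode="floor"))
```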
2022-06-14 08:56:25 +05:30
Tanyo Kwok 0d4445eaf9
Fix: 0-size tensors being regarded as unknown rank (#923) 2022-06-14 09:58:50 +08:00
Bob Adolf 0a7ba62438
Allow torch-mlir to support PyTorch extensions. (#895)
PyTorch allows new operators to be registered dynamically in modules.
Torch-mlir already makes it fairly straightforward to add support for
new operators, and this commit just extends that support to allow new
PyTorch ops to come from an external module.

This does *not* allow ops to be dynamically loaded into torch-mlir.
Torch-mlir must still be compiled with support built-in.

Add a `_torch_mlir_custom_op_example` subpackage to `torch_mlir` which
registers a demonstration op. It will not be imported by default when
importing torch_mlir. It's strictly for testing and documentation.

Adds an end-to-end test for the `torch_mlir_custom_op_example::identity` op.

With all these changes, we should now be actively testing PyTorch extension
support with all future patches.
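A hypothetical usage sketch, assuming the subpackage and op names given above (the exact Python spelling of the import and call path is an assumption, not taken from the commit):

```python
import torch
# Not imported by default; importing it registers the demonstration op.
from torch_mlir import _torch_mlir_custom_op_example  # hypothetical import path

t = torch.ones(3)
# The commit registers the op as torch_mlir_custom_op_example::identity.
out = torch.ops.torch_mlir_custom_op_example.identity(t)
assert torch.equal(out, t)
```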
2022-06-13 14:51:30 -07:00
powderluv 4bf34523d7
Update README.md 2022-06-10 20:00:43 -07:00
powderluv 1adc0f1661
Revert requirements.txt (#930)
https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

is going to be deprecated as of pip 22 since it is not HTML5.
2022-06-10 15:23:12 -07:00
powderluv 02b917f769
Change to the real PackedParams.h location (#929)
Also update the PyTorch nightly URL
2022-06-10 14:43:52 -07:00
powderluv 4cdf4e7d47
Fix new location for PackedParams.h (#928)
Looks like it was moved to a new location.
2022-06-10 14:30:31 -07:00
Tanyo Kwok e70d4f732d
Fix class_annotator_pybind.h header guard (#924)
merging to unblock builders
2022-06-10 11:58:26 -07:00
powderluv 6615add806
Fix the new header location (#926)
Seems to have moved in the latest nightly
2022-06-10 11:57:58 -07:00
Maksim Levental 5c85ac3100
Handle `nn.Linear(..., bias=False)` case for TorchToLinalg (#919) 2022-06-08 21:13:43 -05:00
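A sketch of the case the commit above covers: with `bias=False`, the `aten.linear` call carries a `None` bias operand, which the TorchToLinalg lowering now handles (the `torch_mlir.compile` call is illustrative of the API of this era):

```python
import torch
import torch_mlir

model = torch.nn.Linear(4, 8, bias=False)  # linear op with no bias operand
x = torch.randn(2, 4)

module = torch_mlir.compile(
    model, x, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS)
```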
Henry Tu 298d095acf
Use double quotes instead of single quotes (#918) 2022-06-08 15:00:56 -04:00
Henry Tu c1da9edcf0
Generate underscore variant of functional ops (#915)
* Generate underscore variant of functional ops

* Do not apply `IsTrailingUnderscoreInplaceVariant` trait to underscore variant of functional op
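For illustration, the functional/in-place naming convention involved here, shown with the familiar `add`/`add_` pair (the commit's generator emits the trailing-underscore spelling for functional ops):

```python
import torch

t = torch.zeros(3)
u = t.add(1)   # functional variant: returns a new tensor, t is untouched
t.add_(1)      # trailing-underscore variant: mutates t in place
assert torch.equal(t, u)
```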
2022-06-08 14:27:36 -04:00
Tanyo Kwok bd53998da8
Remove pybind deps from importer and annotator (#903)
* Remove pybind deps from importer and annotator
* Rename files to class_annotator_pybind.cpp/.h
2022-06-08 10:12:05 +08:00
Sean Silva e1b38e74dd Use upstream shape functions directly.
Now that upstream exposes them nicely, we can use them.

I noticed that we had added stuff into the upstream_shape_helpers.py
file (which was supposed to stay pristine), so some more shape functions
need to be upstreamed.

Going forward, all shape functions should be upstreamed similar to
https://github.com/pytorch/pytorch/pull/76889 instead of added in this
file.
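For context, upstream shape functions are plain TorchScript-compatible Python mapping input shapes to output shapes; a minimal sketch (names illustrative):

```python
from typing import List

def unary(self: List[int]) -> List[int]:
    # Elementwise ops preserve the input shape.
    return list(self)
```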
2022-06-07 11:15:03 -07:00
Ramiro Leal-Cavazos 22c0893ec6
Update debug options in compilation errors (#913)
The flag for printing the IR after each pass is now prefixed with
"mlir". This commit updates the flag in the error reporting for the
compiler.
2022-06-07 10:55:54 -07:00
Vivek Khandelwal b95b3d844d [MLIR][TORCH] Add E2E support for aten.div.Tensor_mode op
This commit adds lowering of `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
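The `rounding_mode` argument is what distinguishes this overload; for example (illustrative, not from the commit):

```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3., -3.])
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3., -4.])
```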
2022-06-07 22:26:44 +05:30
Vivek Khandelwal a11ef674a7 [MLIR][TORCH] Add E2E support for aten.baddbmm op
This commit decomposes `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
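A sketch of the identity behind the decomposition:

```python
import torch

input = torch.randn(2, 3, 5)
batch1 = torch.randn(2, 3, 4)
batch2 = torch.randn(2, 4, 5)
beta, alpha = 0.5, 2.0

# baddbmm == beta * input + alpha * bmm(batch1, batch2)
expected = beta * input + alpha * torch.bmm(batch1, batch2)
actual = torch.baddbmm(input, batch1, batch2, beta=beta, alpha=alpha)
assert torch.allclose(actual, expected)
```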
2022-06-07 22:26:28 +05:30
Jae Hoon (Antonio) Kim fe784fd900
Add Support for aten::scatter_add (#906) 2022-06-06 15:02:45 -04:00
Jae Hoon (Antonio) Kim 8a1839a17e
Add support for aten::arange.start_out (#905) 2022-06-06 15:02:27 -04:00
Vivek Khandelwal 2718b4d838 [MLIR][TORCH] Add E2E support for aten.clamp_[min|max] op
This commit decomposes `aten.clamp_min` and `aten.clamp_max` op
into `aten.clamp` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
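The decomposed ops are just `clamp` with only one bound set; for example:

```python
import torch

t = torch.tensor([-2.0, 0.0, 2.0])
assert torch.equal(torch.ops.aten.clamp_min(t, 0.0), torch.clamp(t, min=0.0))
assert torch.equal(torch.ops.aten.clamp_max(t, 1.0), torch.clamp(t, max=1.0))
```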
2022-06-06 11:52:29 +05:30
Sean Silva ccc858f531 torch_mlir.compile: Fix API footgun
use_tracing=True was behaving unexpectedly because the handling of
single arguments was happening after the torch.jit.trace call.

This also fixes the check to specifically test for a torch.Tensor or
TensorPlaceholder so that both lists and tuples would be correctly
handled.
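A sketch of the now-working call pattern, assuming the `torch_mlir.compile` API of this era:

```python
import torch
import torch_mlir

class Net(torch.nn.Module):
    def forward(self, x):
        return x * 2

# A bare Tensor argument no longer needs to be wrapped in a list or
# tuple before the torch.jit.trace call.
module = torch_mlir.compile(Net(), torch.randn(3), use_tracing=True)
```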
2022-06-05 18:10:07 -07:00
powderluv 2f0b1d0b08
bump macOS builds to Python 3.10 2022-06-04 22:44:32 -07:00
powderluv b14c5d619d
Build the nightly package only once a day/night
No need to ship two releases a day; our set of supported packages and binaries has grown.
2022-06-04 22:40:53 -07:00
Vidush Singhal fc419b1e7d
Add E2E support for AtenLogicalOrOp. (#883) 2022-06-03 16:21:03 -07:00
Henry Tu abf5c94a1b
Replace valsem.aten.zero with aten.zero.functional (#893) 2022-06-03 16:27:31 -04:00
Henry Tu 650f5a5008
Added support for native_dropout_backward (#892) 2022-06-03 14:08:51 -04:00
Henry Tu b7082a8d4e
Added support for native_dropout (#891) 2022-06-03 14:05:57 -04:00
Henry Tu a635fd2287
Added support for native_batch_norm_backward (#890) 2022-06-03 13:49:02 -04:00
Henry Tu bfe8ff4b42
Added support for embedding_dense_backward (#889) 2022-06-03 13:33:43 -04:00
Henry Tu a29903dfc8
Added support for native_layer_norm_backward (#888) 2022-06-03 13:15:23 -04:00
Vidush Singhal 0a913bc904
Add E2E support for AtenAllBoolOp (#874) 2022-06-01 18:20:25 -07:00
Ashay Rane 7fdc1cff02
build: remove manual changes to ShapeLibrary.cpp (#894)
The patch that bumped up the LLVM tag made manual fixes to the code in
`ShapeLibrary.cpp`.  However, since that file is generated by the
`update_shape_lib.sh` script, its contents were reverted each time the
script was run.  This patch fixes the problem by removing the manual
changes to that file.
2022-06-01 14:11:29 -07:00
Vivek Khandelwal 06750815d1 [tosa] Support for AtenAvgPool2d op
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-27 07:56:37 +05:30
Vivek Khandelwal 6f548fc3ad [MLIR][TORCH] Add decomposition of aten.adaptive_avg_pool2d op
This commit adds the decomposition of `aten.adaptive_avg_pool2d` op into
`aten.avg_pool2d` op. The current decomposition only supports cases where
input size is equal to the output size.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
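A sketch of the supported case: when the requested output size equals the input's spatial size, adaptive pooling reduces to a trivial `avg_pool2d`:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
y = F.adaptive_avg_pool2d(x, (8, 8))        # output size == input size
assert torch.allclose(y, F.avg_pool2d(x, kernel_size=1))
```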
2022-05-27 07:56:37 +05:30
Ramiro Leal-Cavazos b76c8c82dc
Emit `aten.unsqueeze` with mutating variants (#873)
The op `aten.unsqueeze` has a mutating variant. This commit adds
support for that variant.
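The two variants, for illustration:

```python
import torch

t = torch.randn(3)
u = t.unsqueeze(0)   # functional: returns a new tensor of shape [1, 3]
t.unsqueeze_(0)      # mutating variant: t itself becomes shape [1, 3]
assert t.shape == u.shape
```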
2022-05-26 19:19:38 -05:00
Ashay Rane 029cd54327
build: fix code so that the compiler does not emit warnings (#871)
When compiling without assertions (i.e. in `NDEBUG` mode), a handful of
statements turn into NOPs, which results in warnings such as missing
return statements or unused variables and functions. This patch replaces
such statements with `llvm_unreachable()`, which informs the compiler
about program termination regardless of the `NDEBUG` mode. This also
enables torch-mlir to be compiled using the flags `-Wall`, `-Wextra`,
`-Wpedantic`, and `-Werror`.
2022-05-25 14:04:59 -07:00
Maksim Levental cec5aeedb0
add ci tests (#754) 2022-05-25 14:59:59 -05:00
powderluv 24e04d5729 Update development.md 2022-05-25 08:00:21 -07:00
Vivek Khandelwal 56e77d4213 [MLIR][TORCH] Add E2E support for aten.Bool.[float|int] op
This commit adds lowering of `aten.Bool.float` and `aten.Bool.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 21:18:34 +05:30
Vivek Khandelwal 014a6d16c7 [MLIR][TORCH] Add E2E support for aten.any.bool op
This commit adds lowering of `aten.any.bool` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 17:24:28 +05:30
Vivek Khandelwal bc9b2156e3 [MLIR][TORCH] Add E2E support for aten.sqrt.int op
This commit adds lowering of `aten.sqrt.int` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-05-24 16:50:39 +05:30
Sean Silva f791b2ecae development.md: Add details about testing.
There is more detail to be added here, but this is a start.
2022-05-23 04:19:04 -07:00
Sean Silva 30adb1e675
Add weekly meetings to the readme. 2022-05-20 06:21:04 -07:00
Ashay Rane f18b2be911
torch,linalg: add support for translating aten.linalg.vector_norm (#839)
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function.  It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.

There exist several opportunities to make this lowering optimal and
robust.  For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf.  For L1 norms, we don't need to raise
each element to the power 1.0.  Similarly, L2 norms could benefit from
strength reduction.  Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
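An illustrative call, matching the supported and unsupported `ord` values described above:

```python
import torch

t = torch.tensor([3.0, -4.0])
print(torch.linalg.vector_norm(t))          # default ord=2: tensor(5.)
print(torch.linalg.vector_norm(t, ord=1))   # L1 norm: tensor(7.)
# ord=0, +inf, and -inf are not yet supported by this lowering.
```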
2022-05-19 15:48:15 -07:00
Sean Silva 3fb54cba4c torch.prim.TupleIndex: Adjust tensor types when folding.
In cases where a refinement/derefinement was needed, we didn't fold.

Fixes https://github.com/llvm/torch-mlir/issues/863
2022-05-19 09:36:27 -07:00
Sean Silva 2af53ce434 torch_mlir.compile: Add OutputType.RAW
This can help with development and reporting bugs.
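A sketch, assuming the `OutputType` enum on `torch_mlir.compile`; `RAW` returns the imported IR before any lowering pipeline runs:

```python
import torch
import torch_mlir

class Net(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

module = torch_mlir.compile(Net(), torch.randn(3),
                            output_type=torch_mlir.OutputType.RAW)
print(module)  # raw imported IR, handy for bug reports
```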
2022-05-19 03:41:43 -07:00
Prashant Kumar 10c8e3c593 Add simple neural_net and bert_training scripts.
1. With the help of `make_fx` we are able to get the full training graph
   with weight updates.
2. NeuralNet_training passes. Bert_training passes after cherry-picking
   https://github.com/llvm/torch-mlir/pull/844.
3. TODO: Remove the functorch's dependency after make_fx moves to
   pytorch core.
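A minimal sketch of the `make_fx` flow described above, using functorch's `grad` (the toy training step is illustrative, not from the scripts):

```python
import torch
from functorch import grad, make_fx

def loss_fn(w, x):
    return (x @ w).sum()

def train_step(w, x):
    g = grad(loss_fn)(w, x)   # gradient w.r.t. the weights
    return w - 0.1 * g        # SGD-style weight update

w = torch.randn(4, 1)
x = torch.randn(8, 4)
# Traces the full step, including the weight update, into one FX graph.
graph = make_fx(train_step)(w, x)
print(graph.code)
```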
2022-05-19 06:18:42 +05:30