Now that upstream exposes these shape functions nicely, we can use them.
I noticed that we had added local helpers to the upstream_shape_helpers.py
file (which was supposed to stay pristine), so some more shape functions
need to be upstreamed.
Going forward, all shape functions should be upstreamed as in
https://github.com/pytorch/pytorch/pull/76889 instead of being added to
this file.
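For context, a shape function maps an op's input shapes to its output shape. A minimal standalone sketch of the pattern (the `broadcast` helper and the function name below are illustrative reimplementations, not the actual upstream helpers):

```python
from typing import List

def broadcast(a: List[int], b: List[int]) -> List[int]:
    # Numpy-style broadcast of two shapes (illustrative reimplementation).
    out: List[int] = []
    for i in range(max(len(a), len(b))):
        da = a[-1 - i] if i < len(a) else 1
        db = b[-1 - i] if i < len(b) else 1
        assert da == db or da == 1 or db == 1, "shapes are not broadcastable"
        out.append(max(da, db))
    return list(reversed(out))

# A shape function for an elementwise binary op just delegates to the helper:
def aten_add_Tensor(self: List[int], other: List[int], alpha: float = 1) -> List[int]:
    return broadcast(self, other)

assert aten_add_Tensor([2, 1, 4], [3, 1]) == [2, 3, 4]
```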
This commit adds lowering of the `aten.div.Tensor_mode` op.
This commit also fixes formatting for the test file elementwise.py.
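For reference, `aten.div.Tensor_mode` is ordinary tensor division plus an optional `rounding_mode`; a quick eager-mode sketch of the semantics the lowering must match:

```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

print(torch.div(a, b))                         # tensor([ 3.5000, -3.5000])
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3., -3.])  rounds toward zero
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3., -4.])  rounds toward -inf
```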
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
This commit decomposes the `aten.baddbmm` op into `aten.bmm`,
`aten.mul.Scalar`, and `aten.add.Tensor` ops.
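One way to express the decomposition in eager-mode terms (a sketch; the exact operand ordering in the pass may differ):

```python
import torch

def baddbmm_decomposed(inp, batch1, batch2, beta=1.0, alpha=1.0):
    bmm = torch.bmm(batch1, batch2)            # aten.bmm
    scaled = torch.mul(bmm, alpha)             # aten.mul.Scalar
    return torch.add(scaled, inp, alpha=beta)  # aten.add.Tensor

inp = torch.randn(2, 3, 5)
b1, b2 = torch.randn(2, 3, 4), torch.randn(2, 4, 5)
expected = torch.baddbmm(inp, b1, b2, beta=0.5, alpha=2.0)
assert torch.allclose(baddbmm_decomposed(inp, b1, b2, 0.5, 2.0), expected, atol=1e-5)
```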
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
use_tracing=True was behaving unexpectedly because the handling of
single arguments was happening after the torch.jit.trace call.
This also fixes the check to test specifically for a torch.Tensor or
TensorPlaceholder so that both lists and tuples of arguments are handled
correctly.
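A minimal sketch of the corrected ordering (the names below are illustrative, not the exact torch_mlir internals):

```python
import torch
from torch_mlir import TensorPlaceholder

def canonicalize_example_args(example_args):
    # Wrap a bare tensor/placeholder in a tuple *before* torch.jit.trace is
    # called, so that a single argument, a list, or a tuple all take the
    # same path through tracing.
    if isinstance(example_args, (torch.Tensor, TensorPlaceholder)):
        return (example_args,)
    return tuple(example_args)
```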
An earlier patch that bumped up the LLVM tag also made manual fixes to
the code in `ShapeLibrary.cpp`. However, since that file is generated by
the `update_shape_lib.sh` script, its contents were reverted each time
the script was run. This patch fixes the problem by removing the manual
changes to that file.
This commit adds the decomposition of the `aten.adaptive_avg_pool2d` op
into the `aten.avg_pool2d` op. The current decomposition only supports
cases where the input size is equal to the output size.
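In that case every pooling window covers exactly one element, so the adaptive op reduces to `aten.avg_pool2d` with a 1x1 kernel; a quick eager-mode check:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
out_adaptive = F.adaptive_avg_pool2d(x, output_size=(8, 8))  # output size == input size
out_pooled = F.avg_pool2d(x, kernel_size=1, stride=1)
assert torch.allclose(out_adaptive, out_pooled)
```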
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
When compiling without assertions (i.e. in `NDEBUG` mode), a handful of
statements turn into no-ops, which results in warnings such as missing
return statements or unused variables and functions. This patch replaces
such statements with `llvm_unreachable()`, which informs the compiler
about program termination regardless of the `NDEBUG` mode. This also
enables torch-mlir to be compiled using the flags `-Wall`, `-Wextra`,
`-Wpedantic`, and `-Werror`.
This patch adds support for the torch.linalg.vector_norm op to the torch
dialect, including the necessary shape function. It also extends the
conversion of reduction operators to support lowering of
AtenLinalgVectorNormOp, in addition to adding a handful of end-to-end
tests to validate the lowering.
There exist several opportunities to make this lowering more efficient
and robust. For instance, in its current form, the translation does not
support ord = 0, +inf, or -inf. For L1 norms, we don't need to raise
each element to the power 1.0. Similarly, L2 norms could benefit from
strength reduction. Since the canonicalization pass is not able to
apply these optimizations, we should consider applying them during the
linalg lowering itself.
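For reference, `torch.linalg.vector_norm(x, ord=p)` computes `(sum(|x|^p))^(1/p)`; the L1 and L2 simplifications mentioned above are easy to confirm in eager mode:

```python
import torch

x = torch.randn(10)
p = 3.0
ref = x.abs().pow(p).sum().pow(1.0 / p)
assert torch.allclose(torch.linalg.vector_norm(x, ord=p), ref)

# L1: raising each element to the power 1.0 is redundant.
assert torch.allclose(torch.linalg.vector_norm(x, ord=1), x.abs().sum())
# L2: pow(2) can be strength-reduced to a self-multiplication.
assert torch.allclose(torch.linalg.vector_norm(x, ord=2), (x * x).sum().sqrt())
```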
1. With the help of `make_fx`, we are able to get the full training
graph with weight updates (see the sketch after this list).
2. NeuralNet_training passes. Bert_training passes after cherry-picking
https://github.com/llvm/torch-mlir/pull/844.
3. TODO: Remove the functorch dependency after `make_fx` moves to
PyTorch core.
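A minimal sketch of capturing a training step with `make_fx` (the tiny model and plain SGD update below are illustrative, not the NeuralNet/Bert configurations mentioned above):

```python
import torch
from functorch import make_fx

def train_step(w, x, target):
    pred = x @ w
    loss = ((pred - target) ** 2).mean()
    grad, = torch.autograd.grad(loss, w)
    return w - 0.01 * grad  # SGD weight update, captured in the graph

w = torch.randn(4, 1, requires_grad=True)
x, target = torch.randn(8, 4), torch.randn(8, 1)
graph = make_fx(train_step)(w, x, target)  # a torch.fx.GraphModule
print(graph.code)  # forward, backward, and the weight update in one graph
```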
We do this by introducing a TensorPlaceholder class, which can be used
to specify dynamic sizes. Internally, we canonicalize all example inputs
to TensorPlaceholders.
This commit also adds some basic testing, which was missing before.
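A sketch of the intended usage, assuming the `torch_mlir.compile` API of this era, where -1 marks a dynamic dimension:

```python
import torch
import torch_mlir

class TanhModule(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

# -1 marks the batch dimension as dynamic; a concrete torch.Tensor example
# input works too, since all example inputs are canonicalized to
# TensorPlaceholders internally.
placeholder = torch_mlir.TensorPlaceholder([-1, 1024], torch.float32)
compiled = torch_mlir.compile(TanhModule(), placeholder)
```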
In addition to updating the llvm-project submodule, this patch also:
1. updates shape functions and tests so that `func` and `call`
operations refer to the `func` dialect
2. avoids duplicate registration of dialects
The op `aten.rand_like` was missing a shape function and unit tests, and
its decomposition ignored the `dtype` argument. This commit fixes all
three issues.
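A quick eager-mode check of the behavior the fixed decomposition and shape function must honor:

```python
import torch

x = torch.zeros(2, 3, dtype=torch.float32)
y = torch.rand_like(x, dtype=torch.float64)
assert y.dtype == torch.float64  # dtype argument must override the input's dtype
assert y.shape == x.shape        # shape function: same shape as the input
```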