This enables building PyTorch from source in the CI.
The build should mostly hit the ccache.
Release builds will follow once we have some runtime on the CI.
In the interest of merging upstream LLVM quickly, a previous patch
(7f08169) updated the torch-mlir build to register all dialects and
passes through Python bindings. This patch limits the dialects and
passes to only those that are used in torch-mlir.
Key to this change are the removal of
`MLIRPythonExtension.RegisterEverything` and the introduction of a new
Python module (`_mlir_libs/_site_initialize_0.py`), where we register
the dialects and passes used by torch-mlir.
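A minimal sketch of the shape of such a module (illustrative only; the hook and helper names here are assumptions, not the exact torch-mlir code):

```python
# _mlir_libs/_site_initialize_0.py -- illustrative sketch.
# The MLIR Python runtime probes for modules named _site_initialize_<N>
# and, if present, invokes hooks such as register_dialects().

def register_dialects(registry):
    # Hypothetical native helper that registers only the dialects and
    # passes torch-mlir uses, replacing RegisterEverything.
    from ._torchMlirRegistration import register_required_dialects
    register_required_dialects(registry)
```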
- Supports cases where the view op expands and collapses dims
simultaneously (see the PyTorch-level sketch after this list). This
does not handle the case where it is neither expanding nor collapsing
(e.g. [2, 3] -> [3, 2]).
- Additionally fixes a previous bug with adding 1-sized dims on both
sides of a tensor with aten.view
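A PyTorch-level sketch of the cases above (shapes are illustrative):

```python
import torch

t = torch.rand(2, 3, 4)
# Collapses [2, 3] -> [6] and expands [4] -> [2, 2] simultaneously:
u = t.view(6, 2, 2)

# Neither expanding nor collapsing -- valid in PyTorch, but not yet
# handled by this lowering:
v = torch.rand(2, 3).view(3, 2)
```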
An upstream MLIR bug (that was recently fixed) caused the result to be
ignored for Region- and Block-visitor functions. Now that the bug is
fixed, we don't need an auxiliary boolean to track whether the visitor
function has succeeded.
This patch makes some rudimentary changes to torch-mlir's use of MLIR
Python bindings to work with the most recent LLVM code. We can perhaps
do better by being more selective in what we link against, instead of
using `MLIRPythonExtension.RegisterEverything`.
This commit adds support for negative dim cases for the `aten.cat`,
`aten.slice.Tensor`, and `aten.slice_scatter` ops.
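For illustration, the PyTorch-level semantics being supported (shapes are examples):

```python
import torch

x = torch.rand(2, 3)
y = torch.rand(2, 5)
# Negative dims count from the end: dim=-1 normalizes to dim + rank = 1.
z = torch.cat([x, y], dim=-1)    # shape [2, 8]
# Negative slice bounds lower to aten.slice.Tensor with negative
# start/end, which need the same normalization:
s = z[:, -3:]                    # shape [2, 3]
```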
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
`emitError` is intended for genuine error cases, not for match failures
of patterns; `notifyMatchFailure` is the mechanism by which a pattern
reports the reason it did not match.
Op verification should also not happen inside patterns but as part of
the op's verifier; however, checks that were obviously verification have
been left as `emitError` inside patterns to keep this change small.
The biggest change here is to upgrade RefineTypes to the new sparse
dataflow framework.
Smaller changes:
- minor changes to type parsing
- suppress warnings in e2e tests
The original conversion pattern for `AtenBatchNormOp` required that
the input rank be greater than 2; however, the only
expectation in the conversion pattern and in PyTorch is that the input
rank is greater than 1, since the second dimension of the input must
match the size of the `weight`, `bias`, `runningMean`, and
`runningVar` inputs. This commit fixes the `inputRank` check.
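For example, PyTorch itself accepts a rank-2 input (a sketch, not the commit's test case):

```python
import torch

x = torch.rand(4, 8)             # [N, C]: rank 2
bn = torch.nn.BatchNorm1d(8)     # weight/bias/running stats of size C=8
bn.eval()                        # use running stats, as in inference
y = bn(x)                        # was rejected by the old inputRank > 2 check
```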
This commit adds the decomposition for `aten.var.dim` op.
This commit also makes changes in the decomposition for the `aten.var` op.
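The identity such a decomposition rests on, sketched for the
unbiased=False case (the commit's actual decomposition may differ in
details such as bias correction):

```python
import torch

x = torch.rand(3, 4, 5)
dim = 1
# var(x, dim) == mean((x - mean(x, dim))**2, dim)
mean = x.mean(dim=dim, keepdim=True)
var = ((x - mean) ** 2).mean(dim=dim)
assert torch.allclose(var, x.var(dim=dim, unbiased=False))
```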
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
Remove all the libtorch downloads. If the user sets
-DTORCH_MLIR_USE_INSTALLED_PYTORCH=OFF, we just build from source.
This doesn't change the developer workflow, since we still default to
local PyTorch versions.
TEST: Build and verify that all tests (except one xfail quant) pass on Linux.
A previous patch added a new file
("VerifyConversionToValueSemantics.cpp") to the build, but it did not
add it to the list of files known to Bazel. This patch fixes the problem.
Found while trying to build torch-mlir on an AArch64 Linux VM; worth
adding a belt-and-braces check to prevent such cases.
Change-Id: I89c6fccb62e666dbda0d9acac2d0ee43c2899e9b
This patch adds a new pass, `torch-verify-conversion-to-value-semantics`,
which looks for non-value-semantics tensors so that they are caught
early during compilation.
This pass requires the `torch-refine-public-return` pass to ensure that
return operations are updated to use value tensors, followed by the
canonicalize pass to remove any dead ops that may use or produce
non-value tensors.
This commit addresses the remaining comments on the lowering of
slice_scatter and select_scatter.
Signed-off-by: Prateek Gupta <gprateek93@gmail.com>
Prior to this patch, the canonicalizers for `AtenSizeOp` and
`AtenSizeIntOp` succeeded only if the tensor operand's type information
included the size of the requested dimension(s). We can extend the set
of optimizable cases by propagating types across operations whose result
type matches the input tensor type.
Specifically, this patch enables the canonicalizers for `AtenSizeOp` and
`AtenSizeIntOp` to see past `tensor_static_info_cast`,
`copy.to_vtensor`, and `copy.to_tensor` ops until they reach the first
op whose result type contains size information for the requested
dimensions, with a maximum bound of 6 parent lookups to avoid indefinite
compilation times. All other encountered ops cause the canonicalizer to
give up.
Prior to this patch, the code in the `torch-simplify-shape-calculations`
pass iterated over the uses of an op's result while also modifying the
value. This caused the iterator to get invalidated, thus terminating
the loop early and producing incorrect IR. This patch makes use of
`llvm::make_early_inc_range()` to ensure that the iterator is not
invalidated while executing the loop body.
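A Python analogue of the hazard, with snapshotting standing in for what
`llvm::make_early_inc_range()` provides in C++ (illustrative, not the
patched code):

```python
uses = ["use0", "use1", "use2", "use3"]

# Buggy: removing elements while iterating `uses` directly makes the
# iterator skip elements -- the Python counterpart of invalidation.
#   for u in uses:
#       uses.remove(u)     # visits only every other element

# Fixed: iterate over a snapshot so mutating `uses` is safe.
for u in list(uses):
    uses.remove(u)
assert uses == []
```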
This commit does three things:
1. Reverts some of the shape lib changes merged in
https://github.com/llvm/torch-mlir/pull/844
2. Updates the signature of `aten.sum_dim_IntList`, which was recently
changed in 23bdb570cf
3. Replaces `aten.zero.functional` with `aten.zero`, updated in 960758b0b7
This commit adds:
1. Lowering of `aten.slice_scatter` op into `tensor.insert_slice`
op.
2. Decomposition of the `aten.select_scatter` op into the
`aten.slice_scatter` op (see the sketch below).
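A PyTorch-level sketch of the relationship item 2 relies on (shapes and
values are illustrative):

```python
import torch

x = torch.zeros(3, 4)
src = torch.ones(4)
dim, index = 0, 1
# select_scatter is a width-1 slice_scatter with `dim` re-inserted on src:
a = torch.select_scatter(x, src, dim, index)
b = torch.slice_scatter(x, src.unsqueeze(dim), dim, index, index + 1)
assert torch.equal(a, b)
```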
Signed-off-by: Prateek Gupta <gprateek93@gmail.com>
The canonicalizer converts `torch.prim.dtype` ops into integer constants
for valid types, but the type may not be known until type refinement is
complete. However, type refinement cannot make progress until
`torch.prim.dtype` ops have been resolved to their corresponding integer
constants, thus creating a circular dependency.
This patch creates a tight coupling between type refinement and the
lowering of `torch.prim.dtype` ops by handling such ops as they are
encountered during type refinement. The unit test in this patch aims to
check whether the type refinement pass can now handle chains of
operations that alternate between type construction and type refinement.
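A hypothetical function of the kind the unit test targets (not the
literal test):

```python
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    # x.dtype becomes a torch.prim.dtype op; the type of `y` is known
    # only after refinement has processed `x`, and the result type in
    # turn depends on `y` -- a chain that alternates between type
    # construction and type refinement.
    y = torch.ones(2, 2, dtype=x.dtype)
    return torch.ones(2, 2, dtype=y.dtype)
```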
Temporarily revert to using PyTorch binaries until source builds
are ready to land.
TORCH_MLIR_USE_INSTALLED_PYTORCH can be set to OFF if you want
to link against libtorch and/or use source builds.
This patch replaces the use of raw integers like 6, 4, etc. (that
represent PyTorch's scalar types) with named values from the ScalarType
enum (e.g. `ScalarType::Float`, `ScalarType::Long`, etc.) in code for
folding `prim.dtype` ops into numeric constants.
This patch isn't strictly a non-functional change, since its use of
`Torch::getScalarTypeForType()` implies that the input type has to be
one among the supported types, otherwise compilation will abort, whereas
previously, compilation proceeded without folding the unsupported data
type into a numeric constant.
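For reference, the correspondence between dtypes and the ScalarType
codes mentioned here (values from PyTorch's c10/core/ScalarType.h):

```python
import torch

# Integer codes that folded prim.dtype ops produce:
SCALAR_TYPE_CODES = {
    torch.int64: 4,       # ScalarType::Long
    torch.float32: 6,     # ScalarType::Float
    torch.bfloat16: 15,   # ScalarType::BFloat16
}
```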
A prior patch (63538de2) that added support for the bfloat16 type did not
add the canonicalization pattern to fold `torch.prim.dtype` operations
on bfloat16 tensors into the integer constant 15. This patch fixes the
problem.
On my local machine, `unzip` didn't exist (producing a "command not
found" error), but CMake ignored the error. Although the build did
succeed (because it found a previously-built version of libtorch), it
seems better to abort builds on such failures, so this patch checks the
return code of all external process invocations.
Along similar lines, this patch also updates the shell scripts in
`build_tools` to extensively use double-quoting to prevent unintentional
word splitting or globbing. Since some of the scripts execute `rm`
while using shell variables, this patch also adds the preamble `set -u`
to abort execution if an undefined variable is referenced, so that we
reduce the chances of executing `rm -rf /` if the path expression
happens to refer to an undefined variable.
Add an option to cache libtorch/ releases if you don't want to
download the latest. Add an option to enable source builds.
TESTS:
- macOS: verify with/without cached downloads; verify source builds
  (shared and static)
- Linux: build tests and release builds