A non-persistent buffer is not part of a module's `state_dict`. Hence,
when `experimental_support_mutation=True` is set and the module has a
non-persistent buffer, the current fx importer fails to retrieve a
value from `state_dict` and produces `torch.constant.none` to represent
the buffer. This fix retrieves the value of a non-persistent buffer
from the module's `constants` instead.
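For context, a minimal sketch of how such a buffer arises (the module
itself is hypothetical):
```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False keeps the buffer out of state_dict() ...
        self.register_buffer("mask", torch.ones(4), persistent=False)

m = M()
assert "mask" not in m.state_dict()
# ... but it is still reachable through the module's buffers/constants:
assert "mask" in dict(m.named_buffers())
```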
---------
Co-authored-by: Dixin Zhou <dzhou@vdi-ahddp-020.dhcp.mathworks.com>
Set PyTorch and TorchVision version to nightly release 2024-10-15.
Tracker issue for the failing tests added to the xfail_set in this PR:
Issue: https://github.com/llvm/torch-mlir/issues/3796
This commit disables the failing sparse tensor tests since they are not
maintained on a day-to-day basis and block the RollPyTorch update for now.
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
A new sympy type was introduced to represent integer infinity in the
upstream PyTorch repo. Subsequently, `sympy.oo` is no longer used to
represent an infinite upper bound for dynamic dimensions whose upper
bound is unknown; `int_oo` is used instead. This commit updates the
`_sympy_int_to_int` utility in light of this change.
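A minimal sketch of the updated utility, assuming `int_oo` is
importable from `torch.utils._sympy.numbers`; the int64 clamping below
is illustrative, not the exact upstream behavior:
```python
import sympy
from torch.utils._sympy.numbers import int_oo  # new integer infinity

def _sympy_int_to_int(val: sympy.Expr) -> int:
    # Unbounded dynamic dims now use integer infinity (int_oo) rather
    # than sympy.oo; clamp both spellings to the int64 range.
    if val in (int_oo, sympy.oo):
        return (1 << 63) - 1
    if val in (-int_oo, -sympy.oo):
        return -(1 << 63)
    return int(val)
```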
Now that the PyDev feature request pytorch/pytorch#117188 has been
completed, we can remove all the ad-hoc code that propagates sparsity
metadata and replace it with the built-in PyDev metadata for sparse
tensors. This removes a lot of code and also ensures sparsity is
consistent with the torch.sparse package for all cases.
This adds the `generate-runtime-verification` pass to the linalg
refbackend and moves all tests that now abort at runtime into the crash
set, sorted by their respective errors.
I have fixed one set of errors found that way: mismatches between the
static dimensions we cast to and the actual dynamic dimensions. These
were caused by wrong annotations on the test cases, like in
https://github.com/llvm/torch-mlir/pull/3615/files#diff-48bfbf41fcad5fa01b49197d251114f84a2b8de4f1d87ab938a061aedd1419b1R1931
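For illustration, a hypothetical test with this class of wrong
annotation (shapes and names invented): the annotation claims a static
shape while the test feeds a differently-shaped input, and the
resulting static cast now aborts under runtime verification.
```python
import torch
from torch_mlir_e2e_test.annotations import annotate_args, export

class StaticShapeModule(torch.nn.Module):
    @export
    @annotate_args([
        None,
        ([3, 4], torch.float32, True),  # claims a static 3x4 input...
    ])
    def forward(self, x):
        return x + x

# ...but the test invokes it with, e.g., a 5x4 tensor, so the cast from
# the dynamic runtime shape to the annotated static shape fails the
# generated runtime verification.
```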
This PR adds support to `fx_importer.py` for handling custom ops that
return an array of tensors. As long as the length of the array is
consistent across runs (i.e., statically determined), this patch will
work. It does not require that the number of tensors returned be
determined by the op's definition.
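A hedged sketch of the kind of op this enables; the namespace and op
are hypothetical, and the fixed list length (3) is the property the
importer relies on:
```python
import torch

torch.library.define("mylib::split3", "(Tensor x) -> Tensor[]")

@torch.library.impl("mylib::split3", "default")
def split3(x):
    return list(torch.chunk(x, 3))

@torch.library.impl_abstract("mylib::split3")
def split3_abstract(x):
    # The list always has length 3 across runs, even though the schema's
    # Tensor[] return type does not itself pin the count.
    return [torch.empty_like(c) for c in torch.chunk(x, 3)]
```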
CC @sjain-stanford
We do have support for translating unbacked symbolic ints that arise
from data-dependent ops like `aten.nonzero`. This PR adds Python lit
test coverage for them.
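The covered pattern looks roughly like this (a sketch, not the actual
lit test):
```python
import torch

class NonzeroModule(torch.nn.Module):
    def forward(self, x):
        # The number of nonzero rows is data-dependent, so the exported
        # graph carries an unbacked symbolic int for that dimension.
        return torch.nonzero(x)

ep = torch.export.export(NonzeroModule(), (torch.tensor([0, 1, 0, 2]),))
```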
Resolves #3384.
Many ONNX operators are defined by functions and therefore could be
expanded into simpler ONNX operations during importing, avoiding the
need for tools downstream to support these operators directly.
This commit adds this capability to onnx_importer.py. When importing a
node, the schema for the node's operator is retrieved. If the schema
provides a function for the operator, a specialized version for the
node's types and attributes will be created and imported as an MLIR
function with private visibility. An MLIR function call will then be
emitted, instead of a normal operator node. Caching is used to avoid
generating redundant functions within the same module.
In order to avoid a disruptive change to the importer output for a
large number of operators that already have TorchOnnxToTorch support,
an allowlist strategy is used by default. With this commit, only one
operator, MeanVarianceNormalization, is allowlisted for expansion.
However, many other operators can be correctly expanded by the current
code, so hopefully the allowlist can be gradually extended. It is
possible to disable the allowlist in the configuration, in which case
all functions are expanded (useful for testing).
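Usage sketch; the attribute name on Config is an assumption based on
the description above:
```python
from torch_mlir.extras.onnx_importer import Config

config = Config()
# Default behavior: only allowlisted operators (currently just
# MeanVarianceNormalization) are expanded into private MLIR functions.
# Disabling the allowlist expands every operator whose schema provides
# a function definition (useful for testing):
config.function_expansion_allowlists = None  # attribute name assumed
```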
Tools downstream of the importer may now need to run inlining when
consuming its output, e.g.:
`cat imported.mlir | torch-mlir-opt --inline --convert-onnx-to-torch`
Explanations for subtle code changes:
- Looking up the correct schema and function for an operator requires
knowing the opset version. NodeImporter retrieves this from the
opset imports on the ModelProto retained by the GraphInfo. Previously,
the model_proto field on GraphInfo was None when importing a subgraph
in import_regions, but this conflicts with the new need for opset
version info. Since the apparent purpose of setting it to None was to
control how GraphInfo generates its input map, a new flag
(is_subgraph) is added to GraphInfo to control this behavior, so that
the actual ModelProto can now be provided without breaking anything
(see the sketch after this list). This also turned out to be useful
for reaching the Config via ModelInfo via GraphInfo.
- Some operators' functions are context-dependent, which means the
function definition depends on the types of the inputs. Therefore node
importing now needs to look up the types of a node's inputs, not just
its outputs, as was previously the case. Consequently, the operand to
find_type_proto_for_name() may now be a graph input or an initializer
in some cases, so that function has been updated accordingly.
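A hedged sketch of the GraphInfo change referenced above (field and
parameter names approximate):
```python
class GraphInfo:
    def __init__(self, model_info, graph_proto, is_subgraph=False):
        self.model_info = model_info    # retained even for subgraphs,
        self.graph_proto = graph_proto  # so opset imports stay reachable
        # Replaces the old convention of signaling a subgraph by passing
        # model_proto=None, which also hid the opset version needed for
        # schema/function lookup.
        self.is_subgraph = is_subgraph
```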
Tests the basic constructs of registering a custom op and its abstract
implementations (with FakeTensors) in Python, going through TorchDynamo
export, and then importing the resulting shape expressions into the
Torch dialect.
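A minimal sketch of the pattern under test, assuming PyTorch's
torch.library custom-op API (PyTorch >= 2.4); the op itself is made up:
```python
import torch

@torch.library.custom_op("mylib::scale", mutates_args=())
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

@scale.register_fake
def _(x, factor):
    # Evaluated with FakeTensors during TorchDynamo export; its output
    # shape supplies the shape expressions that are later imported into
    # the Torch dialect.
    return torch.empty_like(x)

class M(torch.nn.Module):
    def forward(self, x):
        return scale(x, 2.0)

ep = torch.export.export(M(), (torch.randn(4),))
```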
Also fixes the importer, where the symbolic bind op insertion was
previously not gated in one place.
(1) test full PyTorch output for eltwise
(2) use "random" input for LIF, to get a general sparse tensor
(3) introduce a way to get true sparsity into the network (needs a
backend fix first)
…cation and sparse tensors.
**NOTE**: This PR _dodges_ the issue in the buffer-deallocation pass
instead of resolving it. In the future, we need to fix the bug in the
buffer-deallocation pass when handling code generated by the sparse
compiler.
While waiting for the full resolution of feature request
https://github.com/pytorch/pytorch/issues/117188
(which will propagate sparsity the right way in upstream PyTorch for all
FX Graphs), this minor change allows us to start testing sparsity
"within" a network, rather than just the parameters. Feel free to add
your own rules for testing (but within reason for what will be done
upstream).
Note that two TODOs, which work around some pending issues, still need
to be addressed to make the JIT execution work.
This scenario was uncovered in a downstream test that failed with a
previous snapshot of torch-mlir. See
https://github.com/cruise-automation/mlir-tcp/actions/runs/8605480116/job/23581829102?pr=65.
```
File "/home/runner/.cache/bazel/_bazel_runner/ce288f117ee4ca92dc028a6a28476a3d/sandbox/processwrapper-sandbox/2380/execroot/mlir-tcp/bazel-out/k8-opt-exec-2B5CBBC6/bin/test/AotCompile/broadcast_unit_dim_to_dynamic_with_unchanged_dim_dynamic_torch_exporter.runfiles/pip_deps_torch_mlir/site-packages/torch_mlir/extras/fx_importer.py", line 969, in value_info_to_type
raise NotImplementedError(
NotImplementedError: Could not deduce type from value info: tensor_meta=None, val=s1, sparsity=None
```
It seems to have been resolved on current HEAD. This adds a test to
ensure coverage in the future.
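A sketch of the scenario (shapes and dynamic-dim names are invented):
broadcasting a unit dimension to a dynamic size while another dynamic
dimension is left unchanged.
```python
import torch
from torch.export import Dim, export

class Broadcast(torch.nn.Module):
    def forward(self, x, y):
        return x * y  # x: (1, d) broadcast against y: (n, d)

d, n = Dim("d"), Dim("n")
ep = export(
    Broadcast(),
    (torch.randn(1, 3), torch.randn(4, 3)),
    dynamic_shapes={"x": {1: d}, "y": {0: n, 1: d}},
)
```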
This is a large change because prior to this point, Python files in the
project were not consistently formatted. This reformats them all with
black defaults.
Based on experience with prior projects, if you have a dev/long-term
branch with Python patches, you can minimize merge conflicts prior to
rebasing to include this commit by running `black` on your modified
Python files, squashing, and then rebasing/merging.
This is part 1 of ~3, formatting all miscellaneous text files and CPP files matched by a first run of pre-commit. These tend to be low change-traffic and are likely not disruptive.
Subsequent patches will format Python files and remaining CPP files.
Weights, biases, and other model parameters appear in a data structure
separate from the traced graph, but are needed when running the
MLIR-compiled code; this PR implements that extended functionality.
This tests COO for more than two dimensions. Note that sparsity should
really propagate into the ReLU activation and the output, but such
cleverness needs to wait for the pending work in the PyTorch tree.
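For reference, a 3-dim COO tensor of the kind exercised here:
```python
import torch

indices = torch.tensor([[0, 1],   # dim 0 coordinates
                        [1, 0],   # dim 1 coordinates
                        [2, 3]])  # dim 2 coordinates
values = torch.tensor([1.0, 2.0])
st = torch.sparse_coo_tensor(indices, values, size=(2, 2, 4))
```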
In the prior state when I supported mutation of user inputs by treating
them as mutable-tensor SSA values, I had left the case of buffer
mutation only vaguely implemented until a concrete use emerged.
This patch reworks this buffer mutation support by assuming that buffers
must be resolved via the hooks symbolically and treated with load/store
semantics. This is implied in the structure since we have no SSA value
that represents a buffer and we already assume that reading parameters
happens via such a mechanism.
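The kind of program this supports, sketched below: a module whose
forward both loads and stores a registered buffer, with the buffer
resolved symbolically rather than represented as an SSA value.
```python
import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x):
        self.count.add_(1)      # buffer store (load/store semantics)
        return x + self.count   # buffer load
```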
* Also adds the basic scaffolding for handling more of these, which will
be needed for cond, while, etc.
* Refactors some of the support in the generic OpOverload emitter so it
can be shared with these other special forms.
This has been on my list for a while, but it just so happens that as
part of upgrading to PyTorch 2.3 and a pure upstream flow in Turbine, we
were using a feature that required integration with auto_functionalized.
This is perhaps the "weirdest" of the higher-order ops and a poor place
to start, but needs must. We have testing for this in Turbine.
Full support in Turbine has an entire custom ops facility. I've reduced
this down to a unit test in torch-mlir.
At some point, this op became kwarg-only instead of arg/kwarg.
Discovered when upgrading to PyTorch 2.3.
Also adds a test as this was untested in-tree (was caught out of tree).
Set PyTorch and TorchVision version to nightly release 2024-03-07.
This commit also removes the deprecated constraints API:
342e7929b8
Signed-off-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
Finish supporting importing the vast majority of `onnx` operations. This
includes:
- region support
- region value inheritance
- `torch.string` support
- `torch.list` support
- `torch.optional` support