torch-mlir/test/python/onnx_importer
Andrea 🦈 51902ec2dc
Create MLIR functions for ONNX operators that are functions (#3409)
Resolves #3384.

Many ONNX operators are defined by functions and can therefore be
expanded into simpler ONNX operations during import, avoiding the
need for downstream tools to support these operators directly.

This commit adds this capability to onnx_importer.py. When importing a
node, the schema for the node's operator is retrieved. If the schema
provides a function for the operator, a version specialized to the
node's types and attributes is created and imported as an MLIR
function with private visibility, and a call to that function is
emitted in place of a normal operator node. Caching is used to avoid
generating redundant functions within the same module.
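
For illustration, the ONNX Python API exposes this schema information
roughly as follows; operator_has_function is a made-up helper for this
sketch, not the importer's actual code:

  import onnx
  from onnx import defs

  def operator_has_function(node: onnx.NodeProto, opset_version: int) -> bool:
      """Return True if the node's operator is defined by a function.

      Some function bodies are fixed per opset; others are
      "context-dependent" and vary with the node's input types and
      attributes.
      """
      schema = defs.get_schema(node.op_type, opset_version, node.domain)
      return schema.has_function or schema.has_context_dependent_function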

In order to avoid a disruptive change to the importer output for a
large number of operators that already have TorchOnnxToTorch support,
an allowlist strategy is used by default. With this commit, only one
operator is allowlisted for expansion, MeanVarianceNormalization.
However, many other operators can be correctly expanded by the current
code, so hopefully the allowlist can be gradually extended. It is
possible to disable the allowlist in the configuration, in which case
all functions are expanded (useful for testing).
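
For example, disabling the allowlist might look like this; the Config
attribute name below is an assumption and should be checked against
onnx_importer.py:

  from torch_mlir.extras import onnx_importer

  config = onnx_importer.Config()
  # Assumed attribute name: None is taken to mean "expand every operator
  # that has a function definition".
  config.function_expansion_allowlists = None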

Tools downstream of the importer may now need to run inlining when
consuming its output, e.g.:

  cat imported.mlir | torch-mlir-opt --inline --convert-torch-onnx-to-torch

Explanations for subtle code changes:

- Looking up the correct schema and function for an operator requires
  knowing the opset version. NodeImporter retrieves this from the opset
  imports on the ModelProto retained by the GraphInfo. Previously, the
  model_proto field on GraphInfo was None when importing a subgraph in
  import_regions, which conflicts with the new need for opset version
  info. Since the apparent purpose of setting it to None was to control
  how GraphInfo generates its input map, a new is_subgraph flag is added
  to GraphInfo to control that behavior instead, so the actual ModelProto
  can now always be provided. This also turned out to be useful for
  reaching the Config via ModelInfo via GraphInfo.
- Some operators' functions are context-dependent, meaning the function
  definition depends on the types of the node's inputs. Node importing
  therefore now needs to look up the types of a node's inputs, not just
  its outputs as before. Consequently, the name passed to
  find_type_proto_for_name() may now refer to a graph input or an
  initializer in some cases, so that function has been updated to handle
  them. A sketch of both lookups follows this list.
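
As a rough sketch of both lookups using the ONNX Python API
(find_opset_version and get_expanded_function are illustrative helpers,
not the importer's actual code, and the exact context-dependent call
should be checked against the installed onnx version):

  import onnx
  from onnx import defs

  def find_opset_version(model: onnx.ModelProto, domain: str) -> int:
      """Opset version declared for `domain` in the model's opset imports
      (the empty string is the default ONNX domain)."""
      for opset in model.opset_import:
          if opset.domain == domain:
              return opset.version
      raise KeyError(f"no opset import for domain {domain!r}")

  def get_expanded_function(model, node, input_type_protos):
      """Fetch the function body for a node, if its operator has one."""
      version = find_opset_version(model, node.domain)
      schema = defs.get_schema(node.op_type, version, node.domain)
      if schema.has_function:
          return schema.function_body
      if schema.has_context_dependent_function:
          # Context-dependent: the body depends on the node and its input
          # types, so both are passed in serialized form.
          raw = schema.get_context_dependent_function(
              node.SerializeToString(),
              [t.SerializeToString() for t in input_type_protos],
          )
          return onnx.FunctionProto.FromString(raw)
      return None
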
2024-06-14 10:11:26 -07:00
function_expansion Create MLIR functions for ONNX operators that are functions (#3409) 2024-06-14 10:11:26 -07:00
.gitignore Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00
LeakyReLU.onnx [onnx] Add torch-mlir-import-onnx tool. (#2637) 2023-12-12 22:01:30 -08:00
_torch_mlir_config.py [NFC reformat] Applies pre-commit formatting to Python files. (#3244) 2024-04-27 14:16:31 -07:00
command_line_test.py [NFC reformat] Applies pre-commit formatting to Python files. (#3244) 2024-04-27 14:16:31 -07:00
import_onnx_tool.runlit [onnx] Add torch-mlir-import-onnx tool. (#2637) 2023-12-12 22:01:30 -08:00
import_smoke_test.py [NFC reformat] Applies pre-commit formatting to Python files. (#3244) 2024-04-27 14:16:31 -07:00
lit.local.cfg Upstream the ONNX importer. (#2636) 2023-12-12 19:02:51 -08:00