Tidy up README and examples

- update diagram to use the name "Eager Mode" instead of
  `torch.dispatch`, which wasn't a very accurate name
- rename `resnet_inference.ipynb` to
  `torchscript_resnet_inference.ipynb` - this is in preparation for LTC
  and Eager Mode versions
- remove mention of TorchFX - it turns out that all TorchFX modules are
  actually scriptable modules, so there is literally "zero code" needed
  beyond using the TorchScript path
- remove LazyTensorCore example, and instead point at the current
  in-development `torch_mlir_ltc_backend` branch.

Note: there were actually some pretty useful utilities built out in the
examples directory, but they now live in the Eager Mode file
`python/torch_mlir/eager_mode/ir_building.py` (and need to be rolled
into a proper home with the upcoming rewrite of our top-level
`torch_mlir.compile` API).
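
For illustration only, a rough sketch of the kind of top-level entry point the `torch_mlir.compile` rewrite is aiming at; the signature and behavior here are assumptions, not the final design:

```python
# Hypothetical sketch of the future top-level API; names may change.
import torch
import torch_mlir

class TanhModule(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

# Assumed shape of the API: an nn.Module plus example inputs in,
# an MLIR module using the `torch` dialect out.
compiled = torch_mlir.compile(TanhModule(), torch.ones(2, 3))
print(compiled)
```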
pull/709/head
Sean Silva 2022-03-25 23:36:57 +00:00
parent 8383497704
commit e59a91620a
12 changed files with 9 additions and 952 deletions

@@ -1,4 +1,4 @@
-# torch-mlir
+# The Torch-MLIR Project
The Torch-MLIR project aims to provide first class compiler support from the [PyTorch](https://pytorch.org) ecosystem to the MLIR ecosystem.
@@ -14,7 +14,7 @@ An open source machine learning framework that accelerates the path from researc
The MLIR project is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain specific compilers, and aid in connecting existing compilers together.
[Torch-MLIR](https://github.com/llvm/torch-mlir)
-Multiple Vendors use MLIR as the middle layer mapping from platform frameworks like PyTorch, JAX, TensorFlow onto MLIR and then progressively lower down to their target hardware. We have seen half a dozen custom lowerings from PyTorch to MLIR. Having canonical lowerings from the PyTorch ecosystem to the MLIR ecosystem would provide much needed relief to hardware vendors to focus on their unique value rather than implementing another PyTorch frontend for MLIR. It would be similar to current hardware vendors adding LLVM target support instead of each one also implementing the Clang/C++ frontend.
+Multiple Vendors use MLIR as the middle layer, mapping from platform frameworks like PyTorch, JAX, and TensorFlow into MLIR and then progressively lowering down to their target hardware. We have seen half a dozen custom lowerings from PyTorch to MLIR. Having canonical lowerings from the PyTorch ecosystem to the MLIR ecosystem would provide much needed relief to hardware vendors to focus on their unique value rather than implementing yet another PyTorch frontend for MLIR. The goal is to be similar to current hardware vendors adding LLVM target support instead of each one also implementing Clang / a C++ frontend.
## All the roads from PyTorch to Torch MLIR Dialect
@@ -22,13 +22,10 @@ We have few paths to lower down to the Torch MLIR Dialect.
![Torch Lowering Architectures](Torch-MLIR.png)
-- Torchscript
-  This is the most tested path down to Torch MLIR Dialect.
-- TorchFX
-  This provides a path to lower from TorchFX down to MLIR. This a functional prototype that we expect to mature as TorchFX matures
-- Lazy Tensor Core (Based on lazy-tensor-core [staging branch](https://github.com/pytorch/pytorch/tree/lazy_tensor_staging/lazy_tensor_core))
-  This path provides the upcoming LTC path of capture. It is based of an unstable devel branch but is the closest way for you to adapt any existing torch_xla derivatives.
-- “ACAP” - Deprecated torch_xla based capture Mentioned here for completeness.
+- TorchScript
+  This is the most tested path down to the Torch MLIR Dialect, and the PyTorch ecosystem is converging on using TorchScript IR as a lingua franca.
+- LazyTensorCore (Based on the PyTorch [`lazy_tensor_staging` branch](https://github.com/pytorch/pytorch/tree/lazy_tensor_staging/lazy_tensor_core))
+  This path provides the upcoming LTC path of capture. It is based on an unstable development branch, but is the closest way for you to adapt any existing `torch/xla` derivatives.
## Project Communication
@@ -129,39 +126,10 @@ python -m ipykernel install --user --name=torch-mlir --env PYTHONPATH "$PYTHONPA
jupyter notebook
```
-### TorchFX
+### LazyTensorCore
-The `examples` folder includes the Python package `torchfx`, which is a functional prototype of a TorchFX to MLIR pipeline. The main entry point into the `torchfx` package is the `torchfx.builder` module, which includes a function for converting the output of a TorchFX trace into MLIR. Currently, the number of PyTorch operations supported is very limited, but will be expanded in the future.
-#### Example usage of `torchfx`
-The `examples` folder includes scripts `torchfx_*.py` showing how to use the TorchFX to MLIR pipeline. In order to run the examples, make sure you've setup your `PYTHONPATH` by following the [Setup Python Environment](#setup-python-environment) instructions.
-Then, run
-```shell
-python torchfx_example_name.py
-```
-replacing `torchfx_example_name.py` with the actual `torchfx` example you want to run.
-### Lazy Tensor Core
-The `examples` folder includes the Python package `lazytensor`, which implements a Lazy Tensor Core (LTC) to MLIR pipeline. The main entry point into the `lazytensor` package is the `lazytensor.builder`, which includes the function `build_module` that takes a computation captured and converted to TorchScript IR by LTC, and converts it to MLIR.
-#### Example usage of `lazytensor`
-The `examples` folder includes scripts `lazytensor_*.py` showing how to use the Lazy Tensor to MLIR pipeline. The examples depend on the Lazy Tensor Core (LTC) of PyTorch. For information on how to obtain LTC, see [here](https://github.com/pytorch/pytorch/blob/lazy_tensor_staging/lazy_tensor_core/QUICKSTART.md).
-In order to run the examples, make sure you've setup your `PYTHONPATH` by following the [Setup Python Environment](#setup-python-environment) instructions, and also add `/path/to/pytorch/lazy_tensor_core` to your `PYTHONPATH` as shown below:
-```shell
-export PYTHONPATH=$PYTHONPATH:`/replace/with/path/to/pytorch/lazy_tensor_core`
-python lazytensor_example_name.py
-```
-replacing `lazytensor_example_name.py` with the actual `lazytensor` example you want to run.
+The LazyTensorCore integration is still in progress, and is being built on the
+[`torch_mlir_ltc_backend` branch](https://github.com/llvm/torch-mlir/tree/torch_mlir_ltc_backend).
## Repository Layout

Binary file changed (the Torch-MLIR.png architecture diagram): 216 KiB before, 250 KiB after; image not shown.

@@ -1,24 +0,0 @@
# Future Work for Lazy Tensor Core
In the last part of the section [Understand The Metrics Report](https://github.com/pytorch/pytorch/blob/lazy_tensor_staging/lazy_tensor_core/TROUBLESHOOTING.md#understand-the-metrics-report), it is mentioned that after running the metrics report,
> If you see `aten::` ops other than `nonzero` and `_local_scalar_dense`, that usually means a missing lowering in the accelerator plugin.
Looking at the sample [output](https://github.com/ramiro050/lazy-tensor-samples/blob/main/lazytensor_resnet18_example_output.txt) and the sample [output](https://github.com/ramiro050/lazy-tensor-samples/blob/main/lazytensor_maskrcnn_example_output.txt) produced by running a [ResNet18](https://github.com/ramiro050/lazy-tensor-samples/blob/main/lazytensor_resnet18_example.py) model and a [MaskRCNN](https://github.com/ramiro050/lazy-tensor-samples/blob/main/lazytensor_maskrcnn_example.py) model, respectively, on the Lazy Tensor Core using the TorchScript backend, the following operations are needed and not yet supported by the backend:
- `aten::convolution_overrideable`
- `aten::max_pool2d_with_indices`
- `aten::mean.out`
- `aten::sort`
- `aten::arange.start_out`
- `aten::bitwise_and.Tensor_out`
- `aten::clamp.out`
- `aten::exp.out`
- `aten::index.Tensor`
- `aten::nonzero`
- `aten::rsqrt.out`
- `aten::sigmoid.out`
- `aten::topk.values`
- `aten::upsample_nearest2d.out`
**Note:** This list is incomplete because currently the MaskRCNN example crashes halfway through when run on LTC. The output error can also be found in the MaskRCNN sample [output](https://github.com/ramiro050/lazy-tensor-samples/blob/main/lazytensor_maskrcnn_example_output.txt).

@@ -1,65 +0,0 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
"""
Translator from torch.jit.ScriptFunction to MLIR.
The following defines a function that takes a torch.jit.ScriptFunction
and converts it into an MLIR module.
The expected use for this module is to use the function
`build_module(jit_function: torch.jit.ScriptFunction,
annotation: Annotation) -> ir.Module`
to convert the TorchScript function into MLIR using the `torch`
dialect.
"""
from typing import Optional
from torch.jit import ScriptFunction
from torch_mlir.dialects.torch.importer.jit_ir import ModuleBuilder
from torch_mlir.dialects.builtin import FuncOp
from torch_mlir import ir
from utils.annotator import AnnotationConverter as ac
from utils.annotator import Annotation
def _get_func_op_with_name(module: ir.Module, name: str) -> Optional[FuncOp]:
with module.context:
name_attr = ir.StringAttr.get(name)
for op in module.body.operations:
if isinstance(op, FuncOp) and op.name == name_attr:
return op
return None
def build_module(jit_function: ScriptFunction,
annotation: Annotation) -> ir.Module:
"""
Translate input function into an MLIR module in the `torch` dialect.
Parameters
----------
jit_function: ScriptFunction
Function in TorchScript IR to turn into MLIR.
annotation: Annotation
Annotation object representing the types of
the operands of `jit_function`.
Returns
-------
ir.Module
Translation of the input module into an MLIR module
"""
mb = ModuleBuilder()
mb.import_function(jit_function)
func_op = _get_func_op_with_name(mb.module, jit_function.name)
assert func_op is not None, 'Unable to find FuncOp in new module. Make sure function was imported correctly into ModuleBuilder'
arg_attrs = ac.to_mlir_array_attr(annotation, mb.context)
func_op.attributes['arg_attrs'] = arg_attrs
return mb.module

@@ -1,82 +0,0 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
"""
Example of taking a Lazy Tensor computation and compiling it using torch-mlir.
This example depends on the Lazy Tensor Core (LTC) of PyTorch. For information
on how to obtain LTC, see here:
https://github.com/pytorch/pytorch/blob/lazy_tensor_staging/lazy_tensor_core/QUICKSTART.md
To run the example, make sure the following are in your PYTHONPATH:
1. /path/to/torch-mlir/examples
2. /path/to/pytorch/lazy_tensor_core
3. /path/to/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir
then, simply call `python lazytensor_tanh.py`.
"""
import numpy as np
import torch
import lazy_tensor_core as ltc
from torch._C import CompilationUnit
from torch_mlir_e2e_test.linalg_on_tensors_backends.refbackend \
import RefBackendLinalgOnTensorsBackend
from torch_mlir.passmanager import PassManager
from utils.annotator import Annotation
from utils.torch_mlir_types import TorchTensorType
from lazytensor.builder import build_module
ltc._LAZYC._ltc_init_ts_backend()
device = 'lazy'
dtype = torch.float32
shape = (2, 3)
x = torch.randn(shape, device=device, dtype=dtype)
y = torch.randn(shape, device=device, dtype=dtype)
def computation(x, y):
return y * x.tanh()
# Capture lazy computation and convert to TorchScript IR
graph_str = ltc._LAZYC._get_ltc_tensors_backend([computation(x, y)])
print("LAZY GRAPH")
print(graph_str)
graph = torch._C.parse_ir(graph_str)
# Create a torch.jit.ScriptFunction out of the graph
cu = CompilationUnit()
func_name = 'my_method'
script_function = cu.create_function(func_name, graph)
# `build_module` takes the torch.jit.ScriptFunction and the
# annotation on the operand types, and outputs an `ir.Module`
# with a single function representing the ScriptFunction in
# the torch MLIR dialect
func_annotation = Annotation([TorchTensorType(shape=shape, dtype=torch.float),
TorchTensorType(shape=shape, dtype=torch.float)])
mlir_module = build_module(script_function, func_annotation)
print("MLIR")
mlir_module.dump()
# Compile the torch MLIR and execute the compiled program
with mlir_module.context:
pm = PassManager.parse('torch-function-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline')
pm.run(mlir_module)
print("BEFORE LINALG-ON-TENSORS BACKEND PIPELINE")
print(mlir_module)
backend = RefBackendLinalgOnTensorsBackend()
compiled = backend.compile(mlir_module)
jit_module = backend.load(compiled)
print("\n\nRunning Example Calculation")
print("Compiled result:")
print(jit_module.my_method(x.cpu().numpy(), y.cpu().numpy()))
print("Expected result:")
print(computation(x, y))

@@ -1,438 +0,0 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
"""
Translator from Torch.FX to MLIR.
The following defines a set of classes that take a module
generated by the `torch.fx.experimental.fx_acc.acc_tracer` function
and converts parts of it into an MLIR module.
The expected use for this module is to use the function
`build_module(py_module: torch.fx.GraphModule) -> ir.Module`
to convert the output from the tracer into MLIR using the `torch`
dialect.
"""
# pylint: disable=no-member, no-name-in-module, invalid-name, fixme
from typing import MutableMapping, Mapping, Optional, List, Callable, Any
import abc
from itertools import chain
from torch_mlir import ir
import torch_mlir.dialects.torch as torch_d
from torch_mlir.dialects import builtin, std, func
import torch.fx
from torch.fx.experimental.fx_acc import acc_ops
from utils.torch_mlir_types import TorchTensorType, PythonType, \
TorchNnModuleType
Environment = MutableMapping[torch.fx.Node, ir.Value]
class _Builder(abc.ABC):
"""
Abstract class for an MLIR builder.
A builder is an object that takes a torch.fx.GraphModule and
an ir.Module, and inserts information into the ir.Module
using information from the torch.fx.GraphModule.
The builders are expected to modify the ir.Module in place.
This means that using a builder on the same ir.Module
twice will result in duplicated information in the ir.Module
that is returned.
The builders should not modify the torch.fx.GraphModule.
The expected use of the builders is quite simple.
1. Initialize builder
2. call `to_mlir` method to get the updated ir.Module
Parameters
----------
py_module: torch.fx.GraphModule
GraphModule produced by the `acc_tracer` from
`torch.fx.experimental.fx_acc`.
mlir_module: ir.Module
`ir.Module` that will be modified to include the
MLIR generated by this builder.
Attributes
----------
py_module: torch.fx.GraphModule
mlir_module: ir.Module
context: ir.Context
Context used by the `mlir_module`.
module_ip: ir.InsertionPoint
Insertion point for the body of the `mlir_module`.
loc: ir.Location
Used to keep track of source code location information.
class_type_name: str
Qualified name of the class given by the type of `py_module`.
Methods
-------
to_mlir() -> ir.Module
Insert into `mlir_module` the MLIR produced by the builder.
"""
def __init__(self, py_module: torch.fx.GraphModule,
mlir_module: ir.Module):
self.py_module = py_module
self.mlir_module = mlir_module
self.context = mlir_module.context
self.module_ip = ir.InsertionPoint(mlir_module.body)
# TODO: find a way to get a real location
self.loc = ir.Location.unknown(self.context)
# TODO: is qualified module name necessary?
self.class_type_name = type(py_module).__name__
@abc.abstractmethod
def to_mlir(self) -> ir.Module:
"""
Insert into `mlir_module` the MLIR produced by the builder.
Returns
-------
ir.Module
Modified `mlir_module` with the new MLIR produced.
"""
class _ClassDeclAndInitBuilder(_Builder):
"""
Builder for creating a class in MLIR with attributes initialized.
This builder performs the following two tasks:
1. Create an MLIR class declaration based on the public
attributes of the `py_module` as well as the `forward` method.
2. Create MLIR that initializes each attribute of the declaration.
Parameters
----------
py_module: torch.fx.GraphModule
GraphModule produced by the `acc_tracer` from
`torch.fx.experimental.fx_acc`.
mlir_module: ir.Module
`ir.Module` that will be modified to include the
MLIR generated by this builder.
Attributes
----------
class_type_ip : Optional[ir.InsertionPoint]
Insertion point for `torch_d.ClassTypeOp`.
nn_module_ip : Optional[ir.InsertionPoint]
Insertion point for `torch_d.NnModuleOp`.
Methods
-------
to_mlir() -> ir.Module
Insert into `mlir_module` the MLIR produced by the builder.
"""
def __init__(self, py_module: torch.fx.GraphModule,
mlir_module: ir.Module):
super().__init__(py_module, mlir_module)
self.class_type_ip: Optional[ir.InsertionPoint] = None
self.nn_module_ip: Optional[ir.InsertionPoint] = None
def to_mlir(self) -> ir.Module:
with self.context:
class_name_attr = ir.StringAttr.get(self.class_type_name)
class_type_op = torch_d.ClassTypeOp(class_name_attr,
loc=self.loc,
ip=self.module_ip)
new_class_block = class_type_op.regions[0].blocks.append()
self.class_type_ip = ir.InsertionPoint(new_class_block)
module_type = TorchNnModuleType(self.class_type_name
).to_mlir(self.context)
nn_module_op = torch_d.NnModuleOp(module_type,
loc=self.loc, ip=self.module_ip)
new_nn_module_block = nn_module_op.regions[0].blocks.append()
self.nn_module_ip = ir.InsertionPoint(new_nn_module_block)
self._insert_attr_declarations_and_definitions()
self._insert_forward_method_declaration()
torch_d.ClassTypeTerminatorOp(loc=self.loc, ip=self.class_type_ip)
torch_d.NnModuleTerminatorOp(loc=self.loc, ip=self.nn_module_ip)
return self.mlir_module
def _insert_attr_declarations_and_definitions(self):
# TODO: not sure how good this definition is for unhidden vars
unhidden_vars = filter(lambda v: not v[0].startswith('_'),
self.py_module.__dict__.items())
# TODO: is anything else needed? There are some hidden
# attributes that get added by the torch.jit.script
# compilation pipeline, such as:
# torch.attr private "_is_full_backward_hook"
# that are not being added here
attrs = chain(unhidden_vars, self.py_module.named_parameters())
for name, value in attrs:
type_attr: Optional[ir.TypeAttr] = None
operand: Optional[ir.OpResult] = None
# TODO: this should be meta-programmable
if isinstance(value, bool):
with self.context:
bool_type = PythonType(bool).to_mlir(self.context)
type_attr = ir.TypeAttr.get(bool_type)
bool_attr = ir.BoolAttr.get(value)
operand = torch_d.ConstantBoolOp(
bool_type,
bool_attr,
loc=self.loc,
ip=self.module_ip).result
else:
err = f'Unsupported attribute type: {type(value)}'
raise NotImplementedError(err)
assert type_attr is not None and operand is not None, \
'Each clause must specify a value for`type_attr` and `operand`'
with self.context:
name_attr = ir.StringAttr.get(name)
# TODO: don't hardcode `private` field in `AttrOp`
torch_d.AttrOp(name_attr, type_attr, True,
loc=self.loc, ip=self.class_type_ip)
torch_d.SlotOp(name_attr, operand, loc=self.loc,
ip=self.nn_module_ip)
def _insert_forward_method_declaration(self):
if not hasattr(self.py_module, 'forward'):
return
with self.context:
method_name = 'forward'
name_attr = ir.StringAttr.get(method_name)
qualified_name = f'{self.class_type_name}.{method_name}'
# TODO: is there a nice python binding for this?
function_attr = ir.Attribute.parse(f'@{qualified_name}')
# TODO: don't hardcode `private` field in `AttrOp`
torch_d.MethodOp(name_attr, function_attr, False,
loc=self.loc, ip=self.class_type_ip)
class _ForwardFunctionBuilderError(Exception):
def __init__(self, value: str):
super().__init__()
self.value = value
def __str__(self) -> str:
return self.value
class _ForwardFunctionBuilder(_Builder):
"""
Builder for converting the forward method into MLIR.
    This builder traverses the `torch.fx.Graph` of the
`py_module`, and translates the operations into MLIR.
Parameters
----------
py_module: torch.fx.GraphModule
GraphModule produced by the `acc_tracer` from
`torch.fx.experimental.fx_acc`.
mlir_module: ir.Module
`ir.Module` that will be modified to include the
MLIR generated by this builder.
Attributes
----------
func_ip : Optional[ir.InsertionPoint]
Insertion point for `torch_d.FuncOp` representing the forward method.
env : Environment
Used to keep track of the `ir.Value` corresponding to each
`torch.fx.Node` that has already been handled.
Methods
-------
to_mlir() -> ir.Module
Insert into `mlir_module` the MLIR produced by the builder.
"""
def __init__(self, py_module: torch.fx.GraphModule,
mlir_module: ir.Module):
super().__init__(py_module, mlir_module)
self.func_ip: Optional[ir.InsertionPoint] = None
self.env: Environment = {}
def to_mlir(self) -> ir.Module:
tensor_type = TorchTensorType().to_mlir(self.context)
module_type = TorchNnModuleType(self.class_type_name
).to_mlir(self.context)
# TODO: currently I am assuming that forward always returns a tensor
func_type = ([module_type] + self._get_arg_types(), [tensor_type])
with self.context:
# TODO: Don't hardcode method name
# TODO: is visibility always private?
func_op = builtin.FuncOp(f'{self.class_type_name}.forward',
func_type, visibility='private',
loc=self.loc, ip=self.module_ip)
func_op.add_entry_block()
self.func_ip = ir.InsertionPoint(func_op.entry_block)
self._initialize_environment(func_op.entry_block.arguments)
for node in self.py_module.graph.nodes:
if node.op == 'call_function':
result = self._insert_function_call(node)
self.env[node] = result
elif node.op == 'output':
func.ReturnOp([self.env[node_arg] for node_arg in node.args],
loc=self.loc, ip=self.func_ip)
elif node.op == 'placeholder':
continue
elif node.op == 'call_module':
err = f'Unsupported node.op type: {node.op}'
raise NotImplementedError(err)
elif node.op == 'get_attr':
err = f'Unsupported node.op type: {node.op}'
raise NotImplementedError(err)
else:
err = f'Unsupported node.op type: {node.op}'
raise NotImplementedError(err)
return self.mlir_module
def _initialize_environment(self, arg_list: ir.BlockArgumentList) -> None:
placeholders = filter(lambda node: node.op == 'placeholder',
self.py_module.graph.nodes)
self_type = TorchNnModuleType(self.class_type_name
).to_mlir(self.context)
non_self_args = filter(lambda arg: arg.type != self_type,
arg_list)
self.env.update(zip(placeholders, non_self_args))
def _get_arg_types(self) -> List[ir.Type]:
operands = filter(lambda node: node.op == 'placeholder',
self.py_module.graph.nodes)
types = []
for operand in operands:
type_ = operand.kwargs.get('torch_mlir_type')
types.append(type_.to_mlir(self.context))
return types
def _insert_function_call(self, f_node: torch.fx.Node) -> ir.OpResult:
assert f_node.op == 'call_function'
args: MutableMapping[str, ir.Value] = {}
for name, arg_node in f_node.kwargs.items():
if isinstance(arg_node, torch.fx.Node):
args[name] = self.env[arg_node]
if isinstance(f_node.target, str):
            err = f'f_node.target = {f_node.target} must be of type \
Callable[..., Any], not str. Make sure the torch.fx.Graph has been \
normalized to using torch.fx.experimental.fx_acc.acc_ops'
raise _ForwardFunctionBuilderError(err)
handler = ACC_OP_HANDLERS.get(f_node.target)
if handler is not None:
return handler(self, args)
raise NotImplementedError(f'Unsupported function: {f_node.target}')
_AccOpHandler = Callable[[_ForwardFunctionBuilder, Mapping[str, ir.Value]],
ir.OpResult]
_AccOpHandlerTable = MutableMapping[Callable[..., Any], _AccOpHandler]
ACC_OP_HANDLERS: _AccOpHandlerTable = {}
def _add_handler(table: _AccOpHandlerTable, acc_op: Callable[..., Any]):
def decorator(f: _AccOpHandler):
table[acc_op] = f
return f
return decorator
# TODO: these handlers should be meta-programmed
@_add_handler(ACC_OP_HANDLERS, acc_ops.sigmoid)
def _sigmoid_handler(func_builder: _ForwardFunctionBuilder,
args: Mapping[str, ir.Value]) -> ir.OpResult:
input_arg = args.get('input')
assert input_arg is not None, 'A call to this handler must include \
an argument named `input`'
tensor_type = TorchTensorType().to_mlir(func_builder.context)
result = torch_d.AtenSigmoidOp(tensor_type,
input_arg,
loc=func_builder.loc,
ip=func_builder.func_ip).result
return result
@_add_handler(ACC_OP_HANDLERS, acc_ops.tanh)
def _tanh_handler(func_builder: _ForwardFunctionBuilder,
args: Mapping[str, ir.Value]) -> ir.OpResult:
input_arg = args.get('input')
assert input_arg is not None, 'A call to this handler must include \
an argument named `input`'
tensor_type = TorchTensorType().to_mlir(func_builder.context)
result = torch_d.AtenTanhOp(tensor_type,
input_arg,
loc=func_builder.loc,
ip=func_builder.func_ip).result
return result
@_add_handler(ACC_OP_HANDLERS, acc_ops.add)
def _add_tensor_handler(func_builder: _ForwardFunctionBuilder,
args: Mapping[str, ir.Value]) -> ir.OpResult:
input_arg = args.get('input')
other_arg = args.get('other')
assert input_arg is not None and other_arg is not None, \
'A call to this handler must include an argument named `input` \
and an argument named `other`'
tensor_type = TorchTensorType().to_mlir(func_builder.context)
torch_int_type = PythonType(int).to_mlir(func_builder.context)
int_type = ir.Type.parse("i64", context=func_builder.context)
int_attr = ir.IntegerAttr.get(int_type, 1)
alpha_arg = torch_d.ConstantIntOp(torch_int_type,
int_attr,
loc=func_builder.loc,
ip=func_builder.func_ip).result
result = torch_d.AtenAddTensorOp(tensor_type,
input_arg,
other_arg,
alpha_arg,
loc=func_builder.loc,
ip=func_builder.func_ip).result
return result
def build_module(py_module: torch.fx.GraphModule) -> ir.Module:
"""
Translate input module into an MLIR module in the `torch` dialect.
Parameters
----------
py_module: torch.fx.GraphModule
GraphModule produced by the `acc_tracer` from
`torch.fx.experimental.fx_acc`.
Returns
-------
ir.Module
Translation of the input module into an MLIR module
"""
with ir.Context():
loc = ir.Location.unknown()
empty_mlir_module = ir.Module.create(loc)
torch_d.register_dialect(empty_mlir_module.context)
mlir_module = _ClassDeclAndInitBuilder(py_module,
empty_mlir_module).to_mlir()
return _ForwardFunctionBuilder(py_module, mlir_module).to_mlir()

@@ -1,48 +0,0 @@
# -*- Python -*-
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
#
# pylint: disable=no-member, no-name-in-module, invalid-name, missing-function-docstring, fixme
from typing import Mapping
import inspect
import ast
import torch.fx
class Annotation:
def __init__(self, name: str, row: int, col: int):
self.name = name
self.row = row
self.col = col
Annotations = Mapping[torch.fx.Node, Annotation]
class LocInspector:
#TODO: type of module?
def __init__(self, graph: torch.fx.Graph, module: torch.nn.Module):
self.annotations = {}
self.graph = graph
self.module = module
module_lines, self.module_start_lineno = \
inspect.getsourcelines(type(module))
module_src = "".join(module_lines)
self.src_file = inspect.getsourcefile(type(module))
self.module_ast = ast.parse(module_src)
def __str__(self):
newline = "\n\n"
values = ["Annotations: ", str(self.annotations), newline,
"Src File: ", self.src_file, newline,
"Module AST: ", ast.dump(self.module_ast)]
return "".join(values)
def annotate_defs(self) -> None:
for node in ast.walk(self.module_ast):
if isinstance(node, (ast.ClassDef,
ast.FunctionDef)):
# subtract 1 because lineno's begin on 1
lineno = node.lineno + self.module_start_lineno - 1
self.annotations[node.name] = (self.src_file, lineno,
node.col_offset)
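
No driver script for this `LocInspector` prototype survives in this diff; the following is a minimal hypothetical sketch of how it might have been exercised (the traced module and the `symbolic_trace` call are assumptions, and the module class must live in a real source file for `inspect.getsourcelines` to work):

```python
# Hypothetical driver for the LocInspector prototype above.
import torch
import torch.fx

class Simple(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)

module = Simple()
graph = torch.fx.symbolic_trace(module).graph
inspector = LocInspector(graph, module)  # class defined above
inspector.annotate_defs()
# Maps 'Simple' and 'forward' to (source file, line number, column) tuples.
print(inspector.annotations)
```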

@@ -1,68 +0,0 @@
# -*- Python -*-
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
"""
Example of taking a module traced by TorchFX and compiling it using torch-mlir.
To run the example, make sure the following are in your PYTHONPATH:
1. /path/to/torch-mlir/examples
2. /path/to/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir
then, simply call `python torchfx_add_tanh_sigmoid.py`.
"""
import torch
import numpy as np
from torch.fx.experimental.fx_acc import acc_tracer
from torch_mlir_e2e_test.linalg_on_tensors_backends.refbackend \
import RefBackendLinalgOnTensorsBackend
from torch_mlir.passmanager import PassManager
from torchfx.builder import build_module
from utils.annotator import annotate_forward_args
from utils.torch_mlir_types import TorchTensorType
class MyModule(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x, y):
# TODO: Debug issue with RefBackend
#return torch.tanh(x) + torch.sigmoid(y)
return torch.tanh(x)
module = MyModule()
traced_module = acc_tracer.trace(module, [torch.Tensor(2,2),
torch.Tensor(2,2)])
print("TRACE")
arg_type = TorchTensorType(shape=[None, None], dtype=torch.float)
traced_module = annotate_forward_args(traced_module, [arg_type, arg_type])
print(traced_module.graph)
mlir_module = build_module(traced_module)
print("\n\nTORCH MLIR")
mlir_module.dump()
print(mlir_module.operation.verify())
with mlir_module.context:
pm = PassManager.parse('torchscript-module-to-torch-backend-pipeline,torch-backend-to-linalg-on-tensors-backend-pipeline')
pm.run(mlir_module)
print("\n\nLOWERED MLIR")
mlir_module.dump()
backend = RefBackendLinalgOnTensorsBackend()
compiled = backend.compile(mlir_module)
jit_module = backend.load(compiled)
print("\n\nRunning Forward Function")
np_t = np.random.rand(2, 2).astype(dtype=np.float32)
t = torch.tensor(np_t, dtype=torch.float)
print("Compiled result:\n", jit_module.forward(np_t, np_t))
print("\nExpected result:\n", module.forward(t, t))

@@ -1,59 +0,0 @@
# -*- Python -*-
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
#
# pylint: disable=no-member, no-name-in-module, invalid-name, missing-function-docstring, fixme
from typing import Iterable, Union
from torch.fx import GraphModule
from torch_mlir import ir
from torch_mlir.dialects import builtin
from .torch_mlir_types import TorchTensorType, PythonType
class Annotation:
def __init__(self, types: Iterable[Union[TorchTensorType, type]]):
self.types = list(map(lambda t:
PythonType(t) if isinstance(t, type) else t,
types))
def __str__(self):
result = f'Annotation instance with {len(self.types)} types\n'
for e, type_ in enumerate(self.types):
result += f' Type of argument {e + 1}: {str(type_)}\n'
return result
def __iter__(self):
return iter(self.types)
class AnnotationConverter:
@staticmethod
def to_mlir_array_attr(annotation: Annotation,
context: ir.Context) -> ir.ArrayAttr:
dict_attrs = []
for type_ in annotation:
if not isinstance(type_, TorchTensorType):
dict_attrs.append(ir.DictAttr.get({}, context=context))
continue
ir_type = type_.to_mlir(context)
with context:
type_attr = ir.TypeAttr.get(ir_type)
dict_attr = ir.DictAttr.get({'torch.type_bound': type_attr})
dict_attrs.append(dict_attr)
return ir.ArrayAttr.get(dict_attrs, context=context)
def annotate_forward_args(module: GraphModule,
types: Iterable[Union[TorchTensorType, type]]
) -> GraphModule:
operands = filter(lambda node: node.op == 'placeholder', module.graph.nodes)
for operand, type_ in zip(operands, types):
if isinstance(type_, type):
type_ = PythonType(type_)
operand.update_kwarg('torch_mlir_type', type_)
return module

@@ -1,127 +0,0 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
# Also available under a BSD-style license. See LICENSE.
#
# pylint: disable=no-member, no-name-in-module, invalid-name, missing-function-docstring, fixme
"""
The following defines a set of classes for converting
types used by Python and PyTorch into MLIR types from the
`torch` dialect.
The expected use of this module is to create an instance
of one of the classes below, and then calling the
`to_mlir` method to generate the MLIR representation
of the type.
Information about what types are supported by each class
can be found in docstrings of each of the classes.
"""
import abc
from typing import Any, Optional, Iterable
import torch
from torch_mlir import ir
class TorchMlirType(abc.ABC):
"""
A `TorchMlirType` is an object that produces MLIR
types in the `torch` dialect. The only requirement
for a class to be a subclass of `TorchMlirType` is
to define a `to_mlir(self, ir.Context) -> ir.Type`.
Each class is allowed to have different types of
__init__ methods depending on the information they
require to produce the given MLIR representation.
"""
@abc.abstractmethod
def to_mlir(self, context: ir.Context) -> ir.Type:
pass
class TorchTensorTypeError(Exception):
def __init__(self, value: str):
super().__init__()
self.value = value
def __str__(self) -> str:
return self.value
class TorchTensorType(TorchMlirType):
"""
This class is used to generate types of the form
!torch.tensor and !torch.vtensor<SHAPE, DTYPE>,
where SHAPE is a list representing the shape of the tensor,
and DTYPE is an MLIR data type.
"""
def __init__(self, *, shape: Optional[Iterable[Optional[int]]] = None,
dtype: Optional[torch.dtype] = None):
self.shape = shape
self.dtype = dtype
if dtype is None and shape is not None:
err = "If shape is specified, dtype must also be specified"
raise TorchTensorTypeError(err)
def __str__(self):
return f'Torch Tensor (shape={self.shape}, dtype={self.dtype})'
def to_mlir(self, context: ir.Context) -> ir.Type:
if self.dtype is None:
return ir.Type.parse('!torch.tensor', context=context)
shape_asm = self._shape_to_mlir_asm()
dtype_asm = self._dtype_to_mlir_asm()
return ir.Type.parse(f'!torch.vtensor<{shape_asm},{dtype_asm}>',
context=context)
def _shape_to_mlir_asm(self) -> str:
if self.shape is None:
return '*'
str_sizes = map(lambda x: '?' if x is None else str(x), self.shape)
return f'[{",".join(str_sizes)}]'
def _dtype_to_mlir_asm(self) -> str:
if self.dtype in [torch.float, torch.float32]:
return 'f32'
raise NotImplementedError(f'Unsupported dtype: {self.dtype}')
class TorchNnModuleType(TorchMlirType):
"""This class is used to generate types for `!torch.nn.Module`s."""
def __init__(self, module_name: str):
self.module_name = module_name
def __str__(self):
return "torch.nn.Module"
def to_mlir(self, context: ir.Context) -> ir.Type:
return ir.Type.parse(f'!torch.nn.Module<"{self.module_name}">',
context=context)
class PythonType(TorchMlirType):
"""
This class is used to convert regular Python types
into their corresponding `torch` dialect representation.
The list of supported types can be found in the dictionary
`_type_to_asm_dict`.
"""
_type_to_asm_dict = {
bool: '!torch.bool',
int: '!torch.int',
type(None): '!torch.none',
}
def __init__(self, type_: Any):
self.type_ = type_
def __str__(self):
return str(self.type_)
def to_mlir(self, context: ir.Context) -> ir.Type:
asm = self._type_to_asm_dict.get(self.type_)
if asm is None:
raise NotImplementedError(f'Unsupported type: {self.type_}')
return ir.Type.parse(asm, context=context)
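
For reference, a minimal sketch of what these deleted helpers produced, assuming a torch-mlir Python build on `PYTHONPATH`; this mirrors how the builders above used them, but the snippet itself is illustrative:

```python
# Render a shaped, dtyped tensor type to its `torch` dialect ASM form.
import torch
import torch_mlir.dialects.torch as torch_d
from torch_mlir import ir

ctx = ir.Context()
torch_d.register_dialect(ctx)  # required so `!torch.*` types parse
t = TorchTensorType(shape=(2, 3), dtype=torch.float)  # class defined above
print(t.to_mlir(ctx))  # expected: !torch.vtensor<[2,3],f32>
```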