# Torch-MLIR Lazy Tensor Core Backend Examples
Refer to the main documentation [here](ltc_backend.md).
## Example Usage
```python
import torch
import torch._lazy
import torch_mlir._mlir_libs._REFERENCE_LAZY_BACKEND as lazy_backend

# Register the example LTC backend.
lazy_backend._initialize()

device = 'lazy'

# Create some tensors and perform operations.
inputs = torch.tensor([[1, 2, 3, 4, 5]], dtype=torch.float32, device=device)
outputs = torch.tanh(inputs)

# Mark end of training/evaluation iteration and lower traced graph.
torch._lazy.mark_step()
print('Results:', outputs)

# Optionally dump MLIR graph generated from LTC trace.
computation = lazy_backend.get_latest_computation()
if computation:
    print(computation.debug_string())
```
```
Received 1 computation instances at Compile!
Received 1 arguments, and returned 2 results during ExecuteCompile!

Results: tensor([[0.7616, 0.9640, 0.9951, 0.9993, 0.9999]], device='lazy:0')

JIT Graph:
graph(%p0 : Float(1, 5)):
  %1 : Float(1, 5) = aten::tanh(%p0)
  return (%p0, %1)

MLIR:
func.func @graph(%arg0: !torch.vtensor<[1,5],f32>) -> (!torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>) {
  %0 = torch.aten.tanh %arg0 : !torch.vtensor<[1,5],f32> -> !torch.vtensor<[1,5],f32>
  return %arg0, %0 : !torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>
}

Input/Output Alias Mapping:
Output: 0 -> Input param: 0
In Mark Step: true
```
## Example Models
There are also examples of a [HuggingFace BERT](../projects/pt1/examples/ltc_backend_bert.py) and [MNIST](../projects/pt1/examples/ltc_backend_mnist.py) model running on the example LTC backend.
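Those scripts follow the same pattern as the snippet above, extended to a training loop: run ordinary PyTorch code on the `lazy` device and call `torch._lazy.mark_step()` once per iteration so the traced graph is lowered and executed. The sketch below illustrates that pattern only; the `TinyMnistModel` class, its hyperparameters, and the random stand-in data are hypothetical placeholders, not the actual code in those example files.

```python
import torch
import torch._lazy
import torch_mlir._mlir_libs._REFERENCE_LAZY_BACKEND as lazy_backend

# Hypothetical model standing in for the real MNIST example.
class TinyMnistModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(28 * 28, 128)
        self.fc2 = torch.nn.Linear(128, 10)

    def forward(self, x):
        x = torch.relu(self.fc1(x.view(x.shape[0], -1)))
        return self.fc2(x)

# Register the example LTC backend and place everything on the lazy device.
lazy_backend._initialize()
device = 'lazy'

model = TinyMnistModel().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# Random stand-in data; the real example iterates over the MNIST dataset.
images = torch.randn(8, 1, 28, 28, device=device)
labels = torch.randint(0, 10, (8,), device=device)

for _ in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    # Mark the end of the iteration so the traced graph is lowered and executed.
    torch._lazy.mark_step()
    print('loss:', loss.item())
```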