Add LTC architecture diagram (#1291)

* Add LTC architecture diagram

* Use PNG for diagrams

* Update diagram
Henry Tu 2022-08-26 18:21:05 -04:00 committed by GitHub
parent 8e880a2d00
commit 883c6b40dd

## Architecture

![LTC Diagram](ltc_images/ltc_architecture.png)

### Tracing LTC graph

The journey begins with a tensor in PyTorch on the `lazy` device, which may undergo a number of operations during its lifetime.
Next, `LazyNativeFunctions::tanh` from `LazyNativeFunctions.cpp` is called, which creates a `Tanh` node, a subclass of `TorchMlirNode` and `torch::lazy::Node` defined in `LazyIr.h`.
These nodes are then tracked internally by LTC as the computation graph is traced out.

![Tracing Tensors](ltc_images/tracing_tensors.png)
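The tracing flow can be sketched in miniature as a toy Python class. This is illustrative only: `Node` and `LazyTensor` below are simplified stand-ins for `torch::lazy::Node` and the real lazy tensor, not the actual LTC API.

```python
# Toy sketch of lazy tracing (illustrative only; not the real LTC API).
# Each op on a LazyTensor records a Node instead of computing eagerly.

class Node:
    def __init__(self, op, inputs):
        self.op = op          # e.g. "tanh"
        self.inputs = inputs  # upstream Nodes

class LazyTensor:
    def __init__(self, node):
        self.node = node

    def tanh(self):
        # Analogous to LazyNativeFunctions::tanh creating a Tanh node
        return LazyTensor(Node("tanh", [self.node]))

x = LazyTensor(Node("parameter", []))
y = x.tanh().tanh()
# No math has run yet; only a graph of nodes has been traced.
```

The key point mirrored here is that applying an op returns a new lazy tensor backed by a new node, so the graph grows as the program runs.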
### Syncing Tensors
Next, `TorchMlirLoweringContext::Build` is executed and the final `jit::Graph` is sent to `torch_mlir::importJitFunctionAsFuncOp` to generate MLIR using the existing infrastructure from Torch-MLIR.
At this point, a `TorchMlirComputation` is created containing the final `mlir::FuncOp`.

![Syncing Tensors](ltc_images/syncing_tensors.png)
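As a rough mental model of the build step, the sketch below flattens a traced graph into SSA-style text, much as the `jit::Graph` is converted into an `mlir::FuncOp`. The nested `(op, inputs)` tuples and the `torch.aten.` prefix are stand-ins of my own, only suggestive of the real node classes and generated MLIR.

```python
# Toy analogue of TorchMlirLoweringContext::Build: post-order walk over a
# traced graph, emitting one SSA-style line per node. Purely illustrative.

def lower(node, out, names):
    # Reuse the name if this node was already emitted (shared subgraphs).
    if id(node) in names:
        return names[id(node)]
    op, inputs = node
    args = [lower(i, out, names) for i in inputs]
    name = f"%{len(names)}"
    names[id(node)] = name
    out.append(f"{name} = torch.aten.{op}({', '.join(args)})")
    return name

x = ("parameter", [])
graph = ("tanh", [("tanh", [x])])  # tanh(tanh(x)), as traced above
lines = []
lower(graph, lines, {})
# lines now holds a flat, textual form of the traced graph
```

Memoizing on node identity mirrors why lowering emits each shared subgraph once rather than duplicating work per use.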
### Final Compilation and Execution
Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteComputation` to be executed on the vendor device, which produces some results to be sent back to PyTorch.

![Vendor Execution](ltc_images/vendor_execution.png)
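For intuition, execution can be pictured as interpreting the compiled computation over concrete values. The sketch below is a toy stand-in of my own: a real vendor backend runs compiled device code, not a Python loop over op names.

```python
import math

# Toy analogue of TorchMlirBackendImpl::ExecuteComputation: apply a
# "compiled" sequence of unary ops to a concrete input value.

def execute(ops, value):
    for op in ops:
        value = getattr(math, op)(value)  # e.g. math.tanh
    return value

result = execute(["tanh", "tanh"], 1.0)  # the traced tanh(tanh(x)) program
```

The resulting value plays the role of the device results that LTC copies back into the PyTorch tensors on the `lazy` device.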
## Implementing a custom backend
