[docs] Centralize all images in docs/images/

pull/1555/head snapshot-20221104.647
Sean Silva 2022-11-03 12:46:10 +00:00
parent 2846776897
commit de4bcbfe9b
10 changed files with 6 additions and 6 deletions


@@ -22,7 +22,7 @@ Multiple Vendors use MLIR as the middle layer, mapping from platform frameworks
 We have few paths to lower down to the Torch MLIR Dialect.
-![Torch Lowering Architectures](docs/Torch-MLIR.png)
+![Simplified Architecture Diagram for README](docs/images/readme_architecture_diagram.png)
 - TorchScript
 This is the most tested path down to Torch MLIR Dialect, and the PyTorch ecosystem is converging on using TorchScript IR as a lingua franca.
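For reference, the TorchScript path mentioned in the context lines above can be exercised end to end from Python. A minimal sketch, assuming a torch-mlir build from around this snapshot that exposes `torch_mlir.compile` and `torch_mlir.OutputType` (API names may differ in other releases):

```python
import torch
import torch_mlir  # assumed Python package name for torch-mlir


class TanhModule(torch.nn.Module):
    def forward(self, x):
        return torch.tanh(x)


# torch_mlir.compile is assumed to script the module via TorchScript and then
# lower the TorchScript IR into the `torch` dialect (the backend-contract form).
compiled = torch_mlir.compile(
    TanhModule(), torch.randn(3, 4), output_type=torch_mlir.OutputType.TORCH
)
print(compiled)  # emits the MLIR module in the `torch` dialect
```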


@@ -15,7 +15,7 @@ halves interface at an abstraction layer that we call the "backend contract",
 which is a subset of the `torch` dialect with certain properties appealing for
 backends to lower from.
-![Torch-MLIR Architecture](Torch-MLIR_Architecture.png)
+![Torch-MLIR Architecture](images/architecture.png)
 The frontend of Torch-MLIR is concerned with interfacing to PyTorch itself, and
 then normalizing the program to the "backend contract". This part involves build
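The "backends to lower from" phrase in the hunk above is visible through the same entry point: requesting a backend-legal output type asks the frontend to normalize to the backend contract and then lower past it. A hedged sketch, again assuming the `torch_mlir.compile`/`OutputType` API of this era:

```python
import torch
import torch_mlir  # assumed Python package name for torch-mlir

model = torch.nn.Linear(4, 2)
example = torch.randn(1, 4)

# Assumption: LINALG_ON_TENSORS is one of the backend lowerings offered by
# OutputType in this snapshot; TOSA is another commonly listed option.
linalg_module = torch_mlir.compile(
    model, example, output_type=torch_mlir.OutputType.LINALG_ON_TENSORS
)
print(linalg_module)  # MLIR lowered from the backend contract to linalg-on-tensors
```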

[Image files renamed into docs/images/ with no content changes; the before/after previews (34 KiB, 98 KiB, 169 KiB, 126 KiB, 129 KiB, and 262 KiB) are identical.]

@@ -76,7 +76,7 @@ Generated files are created in this directory, which is ignored by version contr
 ## Architecture
-![LTC Diagram](ltc_images/ltc_architecture.png)
+![LTC Diagram](images/ltc_architecture.png)
 ### Tracing LTC graph
@@ -93,7 +93,7 @@ previously registered in `RegisterLazy.cpp`.
 Next, `LazyNativeFunctions::tanh` from `LazyNativeFunctions.cpp` is called, which triggers the creation of a `Tanh` node, which is a subclass of `TorchMlirNode` and `torch::lazy::Node`, defined in `LazyIr.h`.
 These nodes are then tracked internally by LTC as the computation graph is traced out.
-![Tracing Tensors](ltc_images/tracing_tensors.png)
+![Tracing Tensors](images/ltc_tracing_tensors.png)
 ### Syncing Tensors
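As background for the hunk above, the tracing it describes is what happens when an op runs on a lazy tensor. A rough sketch from the Python side; the commented-out backend registration is a hypothetical placeholder (the real module path depends on how torch-mlir's reference LTC backend is built), while `torch._lazy` and the `lazy` device are standard PyTorch Lazy Tensor Core pieces:

```python
import torch
import torch._lazy as lazy  # PyTorch's Lazy Tensor Core Python entry point

# Hypothetical placeholder: register torch-mlir's reference LTC backend first.
# import torch_mlir.reference_lazy_backend as ltc_backend
# ltc_backend._initialize()

x = torch.randn(3, 4, device="lazy")  # ops on "lazy" tensors are recorded, not executed
y = torch.tanh(x)  # dispatches to LazyNativeFunctions::tanh and records a Tanh node
lazy.mark_step()   # ends the trace; the recorded graph is handed to the backend
```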
@@ -109,7 +109,7 @@ creates an instance of `TorchMlirLoweringContext`. Here, the `TorchMlirNode`s ar
 Next, `TorchMlirLoweringContext::Build` is executed and the final `jit::Graph` is sent to `torch_mlir::importJitFunctionAsFuncOp` to generate MLIR using the existing infrastructure from Torch-MLIR.
 At this point, a `TorchMlirComputation` is created containing the final `mlir::FuncOp`.
-![Syncing Tensors](ltc_images/syncing_tensors.png)
+![Syncing Tensors](images/ltc_syncing_tensors.png)
 ### Final Compilation and Execution
@@ -117,7 +117,7 @@ The `TorchMlirComputation` is sent to the vendor specific implementation of `Tor
 Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteComputation` to be executed on the vendor device, which produces some results to be send back to PyTorch.
-![Vendor Execution](ltc_images/vendor_execution.png)
+![Vendor Execution](images/ltc_vendor_execution.png)
 ## Implementing a custom backend
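Continuing the earlier tracing sketch, the execution step described in this last hunk is what ultimately hands data back to the user: once the traced graph has been compiled and run via `TorchMlirBackendImpl::ExecuteComputation`, reading a lazy tensor's values yields an ordinary eager tensor. A hedged illustration under the same assumptions as before:

```python
# After lazy.mark_step(), materializing the tensor forces any pending
# execution on the backend and copies the results back to the CPU.
result = y.to("cpu")
print(result)
```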