diff --git a/README.md b/README.md
index 867cec0e5..22e07dffe 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ Multiple Vendors use MLIR as the middle layer, mapping from platform frameworks
 
 We have few paths to lower down to the Torch MLIR Dialect.
 
-![Torch Lowering Architectures](docs/Torch-MLIR.png)
+![Simplified Architecture Diagram for README](docs/images/readme_architecture_diagram.png)
 
 - TorchScript
   This is the most tested path down to Torch MLIR Dialect, and the PyTorch ecosystem is converging on using TorchScript IR as a lingua franca.
diff --git a/docs/architecture.md b/docs/architecture.md
index 3b19cf37d..b6bc86f91 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -15,7 +15,7 @@ halves interface at an abstraction layer that we call the "backend contract",
 which is a subset of the `torch` dialect with certain properties appealing for
 backends to lower from.
 
-![Torch-MLIR Architecture](Torch-MLIR_Architecture.png)
+![Torch-MLIR Architecture](images/architecture.png)
 
 The frontend of Torch-MLIR is concerned with interfacing to PyTorch itself, and
 then normalizing the program to the "backend contract". This part involves build
diff --git a/docs/Torch-MLIR_Architecture.png b/docs/images/architecture.png
similarity index 100%
rename from docs/Torch-MLIR_Architecture.png
rename to docs/images/architecture.png
diff --git a/docs/ltc_images/ltc_architecture.png b/docs/images/ltc_architecture.png
similarity index 100%
rename from docs/ltc_images/ltc_architecture.png
rename to docs/images/ltc_architecture.png
diff --git a/docs/ltc_images/syncing_tensors.png b/docs/images/ltc_syncing_tensors.png
similarity index 100%
rename from docs/ltc_images/syncing_tensors.png
rename to docs/images/ltc_syncing_tensors.png
diff --git a/docs/ltc_images/tracing_tensors.png b/docs/images/ltc_tracing_tensors.png
similarity index 100%
rename from docs/ltc_images/tracing_tensors.png
rename to docs/images/ltc_tracing_tensors.png
diff --git a/docs/ltc_images/vendor_execution.png b/docs/images/ltc_vendor_execution.png
similarity index 100%
rename from docs/ltc_images/vendor_execution.png
rename to docs/images/ltc_vendor_execution.png
diff --git a/docs/Torch-MLIR.excalidraw b/docs/images/readme_architecture_diagram.excalidraw
similarity index 100%
rename from docs/Torch-MLIR.excalidraw
rename to docs/images/readme_architecture_diagram.excalidraw
diff --git a/docs/Torch-MLIR.png b/docs/images/readme_architecture_diagram.png
similarity index 100%
rename from docs/Torch-MLIR.png
rename to docs/images/readme_architecture_diagram.png
diff --git a/docs/ltc_backend.md b/docs/ltc_backend.md
index 58c0e8de2..ae3cc887c 100644
--- a/docs/ltc_backend.md
+++ b/docs/ltc_backend.md
@@ -76,7 +76,7 @@ Generated files are created in this directory, which is ignored by version contr
 
 ## Architecture
 
-![LTC Diagram](ltc_images/ltc_architecture.png)
+![LTC Diagram](images/ltc_architecture.png)
 
 ### Tracing LTC graph
 
@@ -93,7 +93,7 @@ previously registered in `RegisterLazy.cpp`.
 Next, `LazyNativeFunctions::tanh` from `LazyNativeFunctions.cpp` is called, which triggers the creation of a `Tanh` node, which is a subclass of `TorchMlirNode` and `torch::lazy::Node`, defined in `LazyIr.h`.
 These nodes are then tracked internally by LTC as the computation graph is traced out.
 
-![Tracing Tensors](ltc_images/tracing_tensors.png)
+![Tracing Tensors](images/ltc_tracing_tensors.png)
 
 ### Syncing Tensors
 
@@ -109,7 +109,7 @@ creates an instance of `TorchMlirLoweringContext`. Here, the `TorchMlirNode`s ar
 Next, `TorchMlirLoweringContext::Build` is executed and the final `jit::Graph` is sent to `torch_mlir::importJitFunctionAsFuncOp` to generate MLIR using the existing infrastructure from Torch-MLIR.
 At this point, a `TorchMlirComputation` is created containing the final `mlir::FuncOp`.
 
-![Syncing Tensors](ltc_images/syncing_tensors.png)
+![Syncing Tensors](images/ltc_syncing_tensors.png)
 
 ### Final Compilation and Execution
 
@@ -117,7 +117,7 @@ The `TorchMlirComputation` is sent to the vendor specific implementation of `Tor
 
 Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteComputation` to be executed on the vendor device, which produces some results to be send back to PyTorch.
 
-![Vendor Execution](ltc_images/vendor_execution.png)
+![Vendor Execution](images/ltc_vendor_execution.png)
 
 ## Implementing a custom backend