torch-mlir/CMakeLists.txt

#-------------------------------------------------------------------------------
# Project setup and globals
#-------------------------------------------------------------------------------
cmake_minimum_required(VERSION 3.12)
if(POLICY CMP0068)
  cmake_policy(SET CMP0068 NEW)
  set(CMAKE_BUILD_WITH_INSTALL_NAME_DIR ON)
endif()
if(POLICY CMP0075)
  cmake_policy(SET CMP0075 NEW)
endif()
if(POLICY CMP0077)
  cmake_policy(SET CMP0077 NEW)
endif()
if(POLICY CMP0116)
  cmake_policy(SET CMP0116 OLD)
endif()
project(torch-mlir LANGUAGES CXX C)
set(CMAKE_C_STANDARD 11)
set(CMAKE_CXX_STANDARD 17)
#-------------------------------------------------------------------------------
# Project options
#-------------------------------------------------------------------------------
option(TORCH_MLIR_ENABLE_REFBACKEND "Enable reference backend" ON)
if(TORCH_MLIR_ENABLE_REFBACKEND)
  add_definitions(-DTORCH_MLIR_ENABLE_REFBACKEND)
endif()
option(TORCH_MLIR_ENABLE_STABLEHLO "Add stablehlo dialect" ON)
if(TORCH_MLIR_ENABLE_STABLEHLO)
  add_definitions(-DTORCH_MLIR_ENABLE_STABLEHLO)
endif()
option(TORCH_MLIR_OUT_OF_TREE_BUILD "Specifies an out of tree build" OFF)
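# Illustrative only: these options can be toggled at configure time, e.g.
#   cmake -G Ninja -DTORCH_MLIR_ENABLE_STABLEHLO=OFF -DTORCH_MLIR_ENABLE_REFBACKEND=ON <src>
# When ON, the REFBACKEND and STABLEHLO options also add the matching compile
# definition via add_definitions() above.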
# PT1 options.
option(TORCH_MLIR_ENABLE_PROJECT_PT1 "Enables the PyTorch1 project under projects/pt1" OFF)
# TODO: Rename/scope these. They use historic names for now to ease migration
# burden.
option(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER "Enables JIT IR Importer" ON)
option(TORCH_MLIR_ENABLE_LTC "Enables LTC backend" OFF)
option(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS "Build Torch dialect MLIR Python bindings but neither JIT IR Importer nor LTC backend" OFF)
if(TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS)
  set(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OFF)
  set(TORCH_MLIR_ENABLE_LTC OFF)
endif()
# Force-enable the PT1 project if either the JIT IR importer or the LTC backend
# is enabled, since both live under projects/pt1.
if(NOT TORCH_MLIR_ENABLE_PROJECT_PT1)
  if(TORCH_MLIR_ENABLE_JIT_IR_IMPORTER OR TORCH_MLIR_ENABLE_LTC)
    message(STATUS "Enabling projects/pt1 because features requiring it are enabled")
    set(TORCH_MLIR_ENABLE_PROJECT_PT1 ON)
  endif()
endif()
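# Note that with the defaults above (JIT IR importer ON), projects/pt1 is
# therefore enabled unless TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS is ON.
# Illustrative bindings-only configuration:
#   cmake -DTORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=ON <src>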
#-------------------------------------------------------------------------------
# Configure out-of-tree vs in-tree build
#-------------------------------------------------------------------------------
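# A rough sketch of the two configurations (paths below are placeholders, not a
# supported recipe):
#   Out-of-tree: configure this directory against a built/installed MLIR, e.g.
#     cmake -G Ninja -DMLIR_DIR=/path/to/llvm-build/lib/cmake/mlir <this-dir>
#   In-tree: configure llvm-project with torch-mlir as an external project, e.g.
#     cmake -G Ninja <llvm-project>/llvm \
#       -DLLVM_ENABLE_PROJECTS=mlir \
#       -DLLVM_EXTERNAL_PROJECTS=torch-mlir \
#       -DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=<this-dir>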
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR OR TORCH_MLIR_OUT_OF_TREE_BUILD)
message(STATUS "Torch-MLIR out-of-tree build.")
# Out-of-tree build
#-------------------------------------------------------------------------------
# MLIR/LLVM Configuration
#-------------------------------------------------------------------------------
find_package(MLIR REQUIRED CONFIG)
message(STATUS "Using MLIRConfig.cmake in: ${MLIR_DIR}")
message(STATUS "Using LLVMConfig.cmake in: ${LLVM_DIR}")
set(LLVM_RUNTIME_OUTPUT_INTDIR ${CMAKE_BINARY_DIR}/bin)
set(LLVM_LIBRARY_OUTPUT_INTDIR ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
# Define the default arguments to use with 'lit', and an option for the user to
# override.
set(LIT_ARGS_DEFAULT "-sv")
if (MSVC_IDE OR XCODE)
set(LIT_ARGS_DEFAULT "${LIT_ARGS_DEFAULT} --no-progress-bar")
endif()
set(LLVM_LIT_ARGS "${LIT_ARGS_DEFAULT}" CACHE STRING "Default options for lit")
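  # Illustrative override at configure time (the extra lit flag is just an example):
  #   cmake ... -DLLVM_LIT_ARGS="-sv --timeout=300"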
  list(APPEND CMAKE_MODULE_PATH "${MLIR_CMAKE_DIR}")
  list(APPEND CMAKE_MODULE_PATH "${LLVM_CMAKE_DIR}")

  include(TableGen)
  include(AddLLVM)
  include(AddMLIR)
  include(HandleLLVMOptions)
  # Enable the MLIR Python bindings for this standalone build.
  set(MLIR_ENABLE_BINDINGS_PYTHON ON)
  set(TORCH-MLIR_BUILT_STANDALONE ON)
  set(BACKEND_PACKAGE_STRING "LLVM ${LLVM_PACKAGE_VERSION}")
  add_subdirectory(externals/llvm-external-projects/torch-mlir-dialects)
else()
message(STATUS "Torch-MLIR in-tree build.")
# In-tree build with LLVM_EXTERNAL_PROJECTS=torch-mlir
option(MLIR_ENABLE_BINDINGS_PYTHON "Enables MLIR Python Bindings" OFF)
# TODO: Fix this upstream so that global include directories are not needed.
set(MLIR_MAIN_SRC_DIR ${LLVM_MAIN_SRC_DIR}/../mlir)
set(MLIR_INCLUDE_DIR ${LLVM_MAIN_SRC_DIR}/../mlir/include)
set(MLIR_GENERATED_INCLUDE_DIR ${LLVM_BINARY_DIR}/tools/mlir/include)
set(MLIR_INCLUDE_DIRS "${MLIR_INCLUDE_DIR};${MLIR_GENERATED_INCLUDE_DIR}")
endif()
if (TORCH_MLIR_ENABLE_STABLEHLO)
  include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo)
endif()
set(TORCH_MLIR_SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}")
set(TORCH_MLIR_BINARY_DIR "${CMAKE_CURRENT_BINARY_DIR}")
message(STATUS "Building torch-mlir project at ${TORCH_MLIR_SOURCE_DIR} (into ${TORCH_MLIR_BINARY_DIR})")
include_directories(${LLVM_INCLUDE_DIRS})
include_directories(${MLIR_INCLUDE_DIRS})
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
include_directories(${CMAKE_CURRENT_BINARY_DIR}/include)
function(torch_mlir_target_includes target)
  set(_dirs
    $<BUILD_INTERFACE:${MLIR_INCLUDE_DIRS}>
    $<BUILD_INTERFACE:${TORCH_MLIR_SOURCE_DIR}/include>
    $<BUILD_INTERFACE:${TORCH_MLIR_BINARY_DIR}/include>
  )
  # In LLVM parlance, the actual target may just be an interface and may not
  # be responsible for actually compiling anything. The corresponding obj.
  # target, when present, is just used for compilation and does not
  # contribute to the interface properties.
  # TODO: Normalize this upstream.
  target_include_directories(${target} PUBLIC ${_dirs})
  if(TARGET obj.${target})
    target_include_directories(obj.${target} PRIVATE ${_dirs})
  endif()
endfunction()
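# Illustrative usage of the helper above (the library name is hypothetical and
# not defined in this file):
#   add_mlir_library(TorchMLIRExampleDialect ExampleDialect.cpp)
#   torch_mlir_target_includes(TorchMLIRExampleDialect)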
# Configure CMake module paths and pull in the LLVM/MLIR build helpers.
list(APPEND CMAKE_MODULE_PATH ${MLIR_MAIN_SRC_DIR}/cmake/modules)
list(APPEND CMAKE_MODULE_PATH ${LLVM_MAIN_SRC_DIR}/cmake)
include(TableGen)
include(AddLLVM)
include(AddMLIR)
################################################################################
# Setup python.
################################################################################
if(MLIR_ENABLE_BINDINGS_PYTHON)
  include(MLIRDetectPythonEnv)
  mlir_configure_python_dev_packages()
endif()
add_subdirectory(include)
add_subdirectory(lib)
add_subdirectory(tools)
add_custom_target(check-torch-mlir-all)
add_dependencies(check-torch-mlir-all check-torch-mlir)
if(MLIR_ENABLE_BINDINGS_PYTHON)
  # If parent projects want to configure where to place the python packages,
  # respect that.
  if(NOT TORCH_MLIR_PYTHON_PACKAGES_DIR)
    set(TORCH_MLIR_PYTHON_PACKAGES_DIR "${CMAKE_CURRENT_BINARY_DIR}/python_packages")
  endif()
endif()
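# Illustrative only: a parent project embedding torch-mlir could pre-set the
# packages directory before adding this directory, e.g.
#   set(TORCH_MLIR_PYTHON_PACKAGES_DIR "${CMAKE_BINARY_DIR}/my_python_packages")
#   add_subdirectory(torch-mlir)
# (both the path and the subdirectory name are placeholders).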
add_subdirectory(test)
if (NOT LLVM_INSTALL_TOOLCHAIN_ONLY)
  install(DIRECTORY include/torch-mlir include/torch-mlir-c
          DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
          COMPONENT torch-mlir-headers
          FILES_MATCHING
          PATTERN "*.def"
          PATTERN "*.h"
          PATTERN "*.inc"
          PATTERN "*.td"
          PATTERN "LICENSE.TXT"
          )

  install(DIRECTORY ${TORCH_MLIR_BINARY_DIR}/include/torch-mlir
          DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
          COMPONENT torch-mlir-headers
          FILES_MATCHING
          PATTERN "*.def"
          PATTERN "*.h"
          PATTERN "*.gen"
          PATTERN "*.inc"
          PATTERN "*.td"
          PATTERN "CMakeFiles" EXCLUDE
          PATTERN "config.h" EXCLUDE
          )

  if (NOT LLVM_ENABLE_IDE)
    add_llvm_install_targets(install-torch-mlir-headers
                             DEPENDS torch-mlir-headers
                             COMPONENT torch-mlir-headers)
  endif()
endif()
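# Illustrative only: with the torch-mlir-headers component defined above, the
# headers alone can be installed from a build tree via either of:
#   cmake --build <build-dir> --target install-torch-mlir-headers
#   cmake --install <build-dir> --component torch-mlir-headers
# (the --component form requires CMake >= 3.15; the install target is only
# defined for non-IDE generators, per the check above).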
# Important: If loading StableHLO in this fashion, it must come last,
# after all of our libraries and test targets have been defined.
# Both this project and StableHLO appear to abuse upstream CMake macros
# that accumulate properties, so the ordering matters.
# Getting this wrong results in building large parts of the stablehlo
# project that we don't actually depend on. Further, some of those parts
# do not even compile on all platforms.
if (TORCH_MLIR_ENABLE_STABLEHLO)
  set(STABLEHLO_BUILD_EMBEDDED ON)
  add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo
    ${CMAKE_CURRENT_BINARY_DIR}/stablehlo
    EXCLUDE_FROM_ALL)
  include_directories(${CMAKE_CURRENT_SOURCE_DIR}/externals/stablehlo)
endif()
#-------------------------------------------------------------------------------
# Sub-projects
#-------------------------------------------------------------------------------
if(TORCH_MLIR_ENABLE_PROJECT_PT1)
  add_subdirectory(projects/pt1)
endif()