torch-mlir/lib/CMakeLists.txt

add_subdirectory(Backend)
add_subdirectory(CAPI)
add_subdirectory(Conversion)
add_subdirectory(Dialect)
add_subdirectory(Interfaces)
################################################################################
# Setup the initialization target.
# This includes conditional dependencies based on whether features are enabled.
################################################################################
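# Collect the dialect and conversion libraries that have accumulated on these
# global properties so that NPCOMPInitAll below links against all of them.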
get_property(mlir_dialect_libs GLOBAL PROPERTY MLIR_DIALECT_LIBS)
get_property(mlir_conversion_libs GLOBAL PROPERTY MLIR_CONVERSION_LIBS)
get_property(npcomp_dialect_libs GLOBAL PROPERTY NPCOMP_DIALECT_LIBS)
get_property(npcomp_conversion_libs GLOBAL PROPERTY NPCOMP_CONVERSION_LIBS)
message(STATUS "NPCOMP Dialect libs: ${npcomp_dialect_libs}")
message(STATUS "NPCOMP Conversion libs: ${npcomp_conversion_libs}")
add_npcomp_library(NPCOMPInitAll
  InitAll.cpp

  LINK_LIBS
  PUBLIC
    # Local depends
    NPCOMPCommonBackend
    TorchMLIRTorchDialect
    NPCOMPTorchConversionDialect
    NPCOMPConversionPasses

    # TODO: We shouldn't need npcomp_conversion_libs here, but we have
    # some dialect transform libraries accumulating into that property.
    ${npcomp_conversion_libs}
    ${npcomp_dialect_libs}
    ${mlir_dialect_libs}
    ${mlir_conversion_libs}
)
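
# Illustration only (kept commented out, so it does not affect the build): a
# minimal, assumed sketch of how a downstream tool target could link
# NPCOMPInitAll to pull in all registered dialects and passes. The target and
# source names below are hypothetical and not defined in this project.
#
#   add_llvm_executable(example-opt example-opt.cpp)
#   target_link_libraries(example-opt PRIVATE NPCOMPInitAll MLIROptLib)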