While working on https://github.com/openxla/iree/pull/14917, I noticed that it is somewhat hard to take a dependency on torch-mlir such that one only builds deps for the target(s) of interest (in this case Linalg). Some ifdef'ey optionality was added for stablehlo, but it was not mirrored for the other targets, and the switching happens very deep in the dependency graph instead of at top-level directories and defines that gate entire features.

In addition, a lot of things in the Linalg path were broken down to a fine level of detail but were not actually shared/shareable outside of that target. I opted to clump these together into TorchToLinalg; it is easy enough to "promote" them to common with this new organization if the need arises.
General approach:
* Isolate each conversion target in one of TorchToLinalg, TorchToStablehlo, TorchToTosa.
* Gate each by top-level CMake flags and defines.
* Common conversions go in a Common/ directory (currently Arith and SCF).
* Pull target-specific conversions out of TorchConversion/Transforms and put them in their top-level directories.
* General maintenance on the build graph and registration stuff that had bitrotted and was blocking progress.
The main functional change for people taking a source dep is that `#include "torch-mlir/Conversion/Passes.h"` is no longer a one-stop shop: for optional conversions, you have to include each target's dedicated `Passes.h` and take a dep on its library. See `InitAll.cpp`, which does it right (and *is* still a one-stop shop).
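For illustration, here is a sketch of what per-target includes and registration might look like in a downstream consumer. The header paths, the macro spelling, and the `register*` entry points are assumptions rather than the exact names in the tree; `InitAll.cpp` is the authoritative example.

```cpp
// Hypothetical downstream registration under the new layout. Header paths,
// the TORCH_MLIR_ENABLE_STABLEHLO define, and the register* functions below
// are illustrative; check InitAll.cpp for the real spellings.
#include "torch-mlir/Conversion/TorchToLinalg/Passes.h"
#ifdef TORCH_MLIR_ENABLE_STABLEHLO
#include "torch-mlir/Conversion/TorchToStablehlo/Passes.h"
#endif

void registerMyConversionPasses() {
  mlir::torch::registerTorchToLinalgPasses();     // illustrative entry point
#ifdef TORCH_MLIR_ENABLE_STABLEHLO
  mlir::torch::registerTorchToStablehloPasses();  // illustrative entry point
#endif
}
```

Each gated target also needs an explicit library dependency in the consumer's build graph, alongside the include.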
* view_as_real test case, allow dtype in testutils.randn
* abstract python upstream func implemented
* fixed upstream dtype func, implemented view_as_real backend op
* formatted AtenViewAsRealOp, removed change in e2etest/framework
* removed test suite from reshape_like.py, because it was moved to basic.py
* implemented C-API wrapper for mlirComplexF128 type
* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict
* Changed IR input of aten fft_fft unit test
* code refactored
* code refactored and fixed ci test
* refactored: removed stray whitespace and rolled back to having both input/output affine exprs
* refactored: deleted output affine expr to reduce redundancy
* xfail ltc backend
* removed ComplexImag and ComplexReal from torchdynamo xfail set
* copied and pasted from main branch as there's no change to be made in this file
* refactored abstract_interp_lib_gen.py
* refactored: torchtypes.td, formatted, removed commented out code
* Tensor[]? operand type support using partial codegen
* aten.index.Tensor support via partial codegen
* Add torch.index_put tracing support
* Added optional tensor list type support for LTC/TorchMLIR lowering
* Added comments
Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
* Support brevitas custom op (#2320)
* f16 change for brevitas
* Adapt to the brevitas quant custom op name change
* Add unit tests
* Make brevitas conversions isolated
* Address the comments
---------
Co-authored-by: dan <danimal197@gmail.com>
* LTC/TorchMLIR multi-output operations support
* Update torch-mlir jit lowering to support ops with dynamic number of outputs
* Added support for aten::split_copy, aten::split_with_sizes_copy
* Fix native function for aten::split; clean up code
* Fix TorchMlirTensorList lowering
* Remove xfails
* [TOSA] Fix conversion for depthwise convolutions
* Add e2e tests for depthwise and grouped convolutions
Co-authored-by: Lucas Camphausen <lucas.camphausen@iml.fraunhofer.de>
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:
llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009
Changes made:
- Rename `getSuccessorEntryOperands` to `getEntrySuccessorOperands` and remove `operands` from `getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` `const` (https://reviews.llvm.org/D157601)
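For downstream code, the first item is mostly a mechanical rename plus dropping the `operands` argument from `getSuccessorRegions` overrides. The second mainly means const-qualifying `TypeConverter` references; below is a minimal sketch of a conversion pattern under the const-qualified API, using an illustrative pattern class that is not taken from this repo.

```cpp
// Sketch only: shows const-qualified TypeConverter usage after
// https://reviews.llvm.org/D157601. The pattern class and its logic are
// illustrative, not code from torch-mlir.
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

struct ExamplePattern : public ConversionPattern {
  // The TypeConverter handed to patterns is now const-qualified.
  ExamplePattern(const TypeConverter &converter, MLIRContext *ctx)
      : ConversionPattern(converter, MatchAnyOpTypeTag(), /*benefit=*/1, ctx) {}

  LogicalResult
  matchAndRewrite(Operation *op, ArrayRef<Value> operands,
                  ConversionPatternRewriter &rewriter) const override {
    // getTypeConverter() likewise returns a const pointer now.
    const TypeConverter *converter = getTypeConverter();
    SmallVector<Type> newResultTypes;
    if (failed(converter->convertTypes(op->getResultTypes(), newResultTypes)))
      return failure();
    // ... build the replacement op using newResultTypes ...
    return failure();
  }
};
```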
When using custom ops, PyTorch will sometimes insert namespaces into the abstract interpretation function name, in the format `__torch__.{namespace_1}.{namespace_2}...{op_name}`. The extra namespaces are not part of the abstract interpretation function name, so they need to be removed before generating the library of MLIR snippets of abstract interpretation functions. This commit adds support for removing the namespace information.
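A minimal sketch of the string transformation described here, assuming the goal is to keep the `__torch__.` prefix and the trailing op name while dropping the intermediate namespaces; the function below is illustrative, not the actual torch-mlir implementation.

```cpp
// Illustrative only: strip intermediate namespaces from a name of the form
// "__torch__.{namespace_1}.{namespace_2}...{op_name}". Assumes the desired
// result keeps the "__torch__." prefix and the final op name.
#include <string>

std::string stripExtraNamespaces(const std::string &qualifiedName) {
  const std::string prefix = "__torch__.";
  // Leave names that do not carry the prefix untouched.
  if (qualifiedName.rfind(prefix, 0) != 0)
    return qualifiedName;
  // Find the dot that precedes the final op name.
  std::size_t lastDot = qualifiedName.rfind('.');
  // No intermediate namespaces: the only dot is the one in the prefix.
  if (lastDot < prefix.size())
    return qualifiedName;
  return prefix + qualifiedName.substr(lastDot + 1);
}
```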