mirror of https://github.com/llvm/torch-mlir
be8375d350
This PR introduces a sparse_jit wrapper that can run simple models with sparse tensor inputs end-to-end. The implementation shows all the components required to map sparse tensor types onto a 1:N relation of buffers at the call sites. Two tests show that the JIT runs end-to-end and computes the correct results. More details to follow (generalizing to COO and other ranks, as well as support for *output* sparse tensors), but the general concepts are all here now.

**_Update: Thanks to Rob, the bump to the proper LLVM/MLIR hash is done!_**

_**NOTE that all parameter-passing changes are done "downstream" in MLIR, so very few changes are required in torch-mlir code proper.**_

---------

Co-authored-by: Franz Haniel <77495327+frafranz@users.noreply.github.com>
Co-authored-by: Franz Haniel <franz.haniel@amd.com>
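To illustrate the 1:N relation mentioned above, here is a hedged sketch (not the actual torch-mlir code) of a sparse_jit-style wrapper: each sparse argument is expanded into several dense buffers before the underlying compiled function is invoked, while dense arguments pass through 1:1. The toy COO representation `(shape, coords, values)` and the helper names are assumptions for illustration only.

```python
# Hypothetical sketch of the 1:N call-site expansion: one logical
# sparse tensor argument becomes N dense buffers for the backend.
# The (shape, coords, values) triple is a toy COO stand-in, not the
# real torch-mlir sparse type.

def flatten_arg(arg):
    """Expand one logical argument into its backend buffers."""
    if isinstance(arg, tuple) and len(arg) == 3:
        shape, coords, values = arg
        return [coords, values]  # 1:N — one tensor yields two buffers
    return [arg]                 # dense arguments pass through 1:1

def sparse_jit(fn):
    """Wrap fn so callers can pass toy sparse tensors directly."""
    def wrapper(*args):
        flat = []
        for a in args:
            flat.extend(flatten_arg(a))
        return fn(*flat)
    return wrapper

@sparse_jit
def sum_values(coords, values):
    # Stand-in for a compiled kernel that receives the expanded buffers.
    return sum(values)
```

With this sketch, calling `sum_values(((3, 3), [(0, 0), (1, 1)], [2.0, 5.0]))` passes one sparse argument at the call site, which the wrapper expands into the two buffers the kernel actually receives.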
torch_mlir/
  CMakeLists.txt
  TorchMLIRModule.cpp