mirror of https://github.com/llvm/torch-mlir
Commit 1b769f7841:
This happens in practice. With this, we can globalize slots for the non-trivial classifier layer obtained from https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb. This also adds support for tuple return types, which were needed by that model.
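For context, here is a minimal sketch of the kind of TorchScript module the commit message describes: a classifier with submodule attributes (object-graph slots that a slot-globalization pass would lift to globals) whose forward returns a tuple, which is why tuple return types were needed. The module, names, and shapes below are illustrative assumptions, not taken from the NeMo notebook or from this commit.

```python
from typing import Tuple

import torch
import torch.nn as nn


class JointHead(nn.Module):
    """Hypothetical joint intent/slot classifier head (illustrative only)."""

    def __init__(self, hidden: int, num_intents: int, num_slots: int):
        super().__init__()
        # Submodule attributes ("slots" in the TorchScript object graph)
        # that a globalization pass would turn into module-level globals.
        self.intent_head = nn.Linear(hidden, num_intents)
        self.slot_head = nn.Linear(hidden, num_slots)

    def forward(self, hidden_states: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        # Tuple return: (per-sequence intent logits, per-token slot logits).
        intent_logits = self.intent_head(hidden_states[:, 0])
        slot_logits = self.slot_head(hidden_states)
        return intent_logits, slot_logits


# Scripting preserves the tuple return type in the TorchScript IR that the
# compiler then has to handle.
scripted = torch.jit.script(JointHead(hidden=16, num_intents=3, num_slots=5))
intents, slots = scripted(torch.randn(2, 8, 16))
print(intents.shape, slots.shape)  # torch.Size([2, 3]) torch.Size([2, 8, 5])
```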
Top-level directories and files:

- Backend/Iree
- CAPI
- Conversion
- Dialect
- Python
- RefBackend
- npcomp-run-mlir
- CMakeLists.txt
- lit.cfg.py
- lit.site.cfg.py.in