mirror of https://github.com/llvm/torch-mlir
dc8afc9271
Lowering for tcp.add was previously going through an awkward route that prematurely created linalg.generic ops, which was an annoying layering problem since we can't compute a shape transfer function for linalg.generic in the general case. Now we pass it through the same path as tcp.matmul, with the shape transfer function being defined for tcp.add.

This also removed the need for TCPToLinalg (now deleted). The equivalent of that is happening in lower-shaped-results-to-memref.

One interesting outcome of this: we're basically using linalg as a "Buffer TCP". We might want to look into using named structured ops for more of TCP, but that would be a big velocity hit, since then any change to the ODS / verification for those ops would be a change to the upstream structured op ODS generator. After we have more experience defining this manually, we should re-evaluate rebasing TCP on generated named linalg ops.
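To make the idea of a shape transfer function concrete: for an elementwise op like tcp.add, the result shape is just the broadcast of the operand shapes, which can be computed purely at the shape level without ever materializing a linalg.generic. Below is a minimal Python sketch of that computation under NumPy-style broadcasting rules, using -1 for a dynamic dimension; the names are illustrative and this is not npcomp's actual implementation.

```python
from itertools import zip_longest

def broadcast_shapes(lhs, rhs):
    """Compute the broadcasted result shape of two ranked shapes.

    Shapes are tuples of ints; -1 stands for a dynamic (unknown) dim.
    Hypothetical sketch -- not npcomp's real shape transfer code.
    """
    result = []
    # Walk both shapes from the trailing dimension, padding the
    # shorter one with 1s, NumPy-style.
    for l, r in zip_longest(reversed(lhs), reversed(rhs), fillvalue=1):
        if l == 1:
            result.append(r)
        elif r == 1 or l == r:
            result.append(l)
        elif l == -1 or r == -1:
            # Exactly one side is dynamic here; for broadcasting to
            # succeed at runtime it must be 1 or equal to the other
            # side, so the static result is the known dimension.
            result.append(max(l, r))
        else:
            raise ValueError(f"incompatible dims {l} and {r}")
    return tuple(reversed(result))

def add_shape_transfer(lhs_shape, rhs_shape):
    # For an elementwise add, the result shape is just the broadcast
    # of the two operand shapes.
    return broadcast_shapes(lhs_shape, rhs_shape)

if __name__ == "__main__":
    # e.g. broadcasting a (4, 1, 8) operand against a (3, 8) operand
    print(add_shape_transfer((4, 1, 8), (3, 8)))  # -> (4, 3, 8)
```

Because this function operates only on shapes, it can be evaluated during shape lowering, before any decision about how (or whether) to emit a linalg.generic for the actual computation.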
Directory listing:

- Backend/Iree
- Conversion
- Dialect
- E2E
- Python
- npcomp-run-mlir
- CMakeLists.txt
- lit.cfg.py
- lit.site.cfg.py.in