mirror of https://github.com/llvm/torch-mlir
358159a6eb
It was annoying that we were creating shape.get_extent in the middle of the bufferization pipeline, as it required running convert-shape-to-std at an awkward place. To make that cleaner, just open-code the extract_element ops that shape.get_extent expands into. This is a little gross, but it helps with the macroscopic pipeline ordering issues. Anyway, the train of treating shapes as a special data type that should only be operated on with shape ops has long since left the station.

Also:
- Reorder tensor constant bufferize (which is a module pass) to bracket all the bufferization function passes, to make the parallelism opportunities there clearer.

Now we have a very clean little bufferization segment of our pipeline construction.
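As a rough sketch of the change the message describes (the shape operand type and constants here are illustrative, not taken from the actual patch): instead of emitting shape.get_extent, which later forces a convert-shape-to-std run, the lowering emits the extract_element op it would expand into anyway.

```mlir
// Before: querying an extent via the shape dialect, which requires
// convert-shape-to-std to run later in the pipeline.
%c0 = constant 0 : index
%extent = shape.get_extent %shape, %c0 : tensor<?xindex>, index -> index

// After: open-code the extraction that shape.get_extent expands into,
// so no shape-dialect lowering is needed at this point in the pipeline.
%extent2 = extract_element %shape[%c0] : tensor<?xindex>
```

The trade-off named in the message: the open-coded form is less abstract ("a little gross"), but it removes the awkward mid-pipeline dependency on the shape-to-std conversion.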