mirror of https://github.com/llvm/torch-mlir
358159a6eb
It was annoying that we were creating shape.get_extent in the middle of the bufferization pipeline, since it required running convert-shape-to-std at an awkward place. To clean that up, just open-code the extract_element ops that shape.get_extent expands into. This is a little gross, but it helps with the macroscopic pipeline ordering issues. In any case, the ship has long since sailed on treating shapes as a special data type that should only be operated on with shape ops.

Also, reorder tensor constant bufferize (which is a module pass) to bracket all the bufferization function passes, to make the parallelism opportunities there clearer. Now we have a very clean little bufferization segment of our pipeline construction.
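The open-coding described above amounts to emitting the lowered form directly rather than the shape op. A rough before/after IR sketch (the operand names and types here are illustrative, not taken from the actual patch):

```mlir
// Before: emitting shape.get_extent during bufferization forced
// convert-shape-to-std to run at an awkward later point in the pipeline.
%extent = shape.get_extent %shape, %c0 : tensor<?xindex>, index -> index

// After: open-code the extract_element that shape.get_extent would have
// expanded into, removing the pipeline-ordering dependency.
%extent = extract_element %shape[%c0] : tensor<?xindex>
```

The trade-off is that the bufferization patterns now know about the shape representation directly, but the pipeline no longer needs a shape-lowering pass wedged into the middle of the bufferization passes.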
e2e-basic.mlir
e2e-constants.mlir
e2e-mixed-ranks.mlir
lower-alloc-memref-ops.mlir
lower-to-llvm.mlir
lower-to-refbackrt-abi.mlir
restricted-canonicalize.mlir