torch-mlir/test/RefBackend
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent in the middle of
the bufferization pipeline, as it required running convert-shape-to-std
at an awkward place. To make that cleaner, just open-code the
extract_element ops that shape.get_extent expands into.
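
For reference, roughly what the open-coding amounts to (a sketch,
assuming the tensor<?xindex> extent-tensor form of the shape and the
std-dialect spellings of the time; value names are illustrative):

  // Before: query the extent through the shape dialect, which forced
  // convert-shape-to-std to run in the middle of bufferization.
  %c0 = constant 0 : index
  %extent = shape.get_extent %shape, %c0 : tensor<?xindex>, index -> index

  // After: the same extent, open-coded as the extract_element op that
  // shape.get_extent expands into.
  %extent = extract_element %shape[%c0] : tensor<?xindex>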

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, the ship has long since sailed on treating
shapes as some special data type that should only be operated on with
shape ops.

Also:
- reorder the tensor constant bufferize pass (which is a module pass) to
  bracket all of the function-level bufferization passes, making the
  parallelism opportunities there clearer. Now we have a very clean
  little bufferization segment in our pipeline construction.
2020-11-17 11:00:38 -08:00
e2e-basic.mlir [RefBackend] Split out TCF->TCP conversion. 2020-10-12 11:56:39 -07:00
e2e-constants.mlir [RefBackend] Split out TCF->TCP conversion. 2020-10-12 11:56:39 -07:00
e2e-mixed-ranks.mlir [RefBackend] Split out TCF->TCP conversion. 2020-10-12 11:56:39 -07:00
lower-alloc-memref-ops.mlir [RefBackend] Open-code shape.get_extent as extract_element 2020-11-17 11:00:38 -08:00
lower-to-llvm.mlir [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
lower-to-refbackrt-abi.mlir [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
restricted-canonicalize.mlir [RefBackend] Rename test/E2E. 2020-10-07 15:52:11 -07:00