torch-mlir/lib/Dialect
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent ops in the middle
of the bufferization pipeline, as that required running
convert-shape-to-std at an awkward place. To make this cleaner, just
open-code the extract_element ops that shape.get_extent expands into.

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, that train has long since left the station: we
no longer treat shapes as some special data type that should only be
operated on with shape ops.
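
For illustration only (this is not code from the commit), a minimal C++
sketch of the open-coding idea, assuming MLIR's circa-2020 standard
dialect ops (ConstantIndexOp, ExtractElementOp); the helper name is
hypothetical:

    #include "mlir/Dialect/Shape/IR/Shape.h"
    #include "mlir/Dialect/StandardOps/IR/Ops.h"
    #include "mlir/IR/Builders.h"

    using namespace mlir;

    // Hypothetical helper: given `shape` of type tensor<?xindex>, materialize
    // the extent at position `dim` directly as a std.extract_element op,
    // instead of emitting shape.get_extent and then having to run
    // convert-shape-to-std later in the pipeline.
    static Value openCodedExtent(OpBuilder &b, Location loc, Value shape,
                                 int64_t dim) {
      Value idx = b.create<ConstantIndexOp>(loc, dim);
      // Before: b.create<shape::GetExtentOp>(loc, shape, idx);
      return b.create<ExtractElementOp>(loc, shape, ValueRange{idx});
    }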

Also:
- Reorder the tensor constant bufferize pass (which is a module pass) to
  bracket all the bufferization function passes, to make the parallelism
  opportunities there clearer (see the pipeline sketch below). Now we
  have a very clean little bufferization segment of our pipeline
  construction.
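
Again for illustration only, a sketch of that pipeline-ordering point;
the pass-creation function names below are hypothetical, not necessarily
the ones in this repository. Module-level passes sit at the edges of the
segment, and the function passes stay contiguous in between so the pass
manager can run them in parallel across functions.

    #include "mlir/Pass/PassManager.h"
    // (headers for FuncOp and the individual pass constructors omitted)

    using namespace mlir;

    // Hypothetical bufferization segment of the pipeline.
    static void addBufferizationSegment(OpPassManager &pm) {
      pm.addPass(createTensorConstantBufferizePass());      // module pass
      pm.addNestedPass<FuncOp>(createTCPBufferizePass());   // function pass
      pm.addNestedPass<FuncOp>(createStdBufferizePass());   // function pass
      pm.addPass(createFuncBufferizePass());                // module pass
    }
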
2020-11-17 11:00:38 -08:00
ATen Add a number of kernels and new patterns. 2020-11-04 14:36:59 -08:00
Basicpy Add remaining pieces to capture full example models. 2020-10-19 22:16:59 -07:00
Numpy More progress on PyTorch acap device capture. 2020-10-15 21:43:21 -07:00
Refback [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
Refbackrt [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
TCF Start reworking towards a shared library build. 2020-10-09 16:02:58 -07:00
TCP [RefBackend] Open-code shape.get_extent as extract_element 2020-11-17 11:00:38 -08:00
Torch Expose signature metadata to ops and implement ATenRecognizeKernelsPass pass. 2020-10-26 20:31:45 -07:00
CMakeLists.txt [RefBackend] Rename RefBackend dialect to Refback 2020-10-08 09:07:00 -07:00