torch-mlir/lib
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent ops in the middle of
the bufferization pipeline, as it required running convert-shape-to-std at
an awkward point. To make that cleaner, just open-code the extract_element
ops that shape.get_extent expands into (see the sketch below).

This is a little gross, but it helps with the macroscopic pipeline-ordering
issues. In any case, the ship has long since sailed on treating shapes as a
special data type that should only be operated on with shape ops.

Also:
- Reorder tensor-constant-bufferize (which is a module pass) so that it
  brackets all the bufferization function passes, making the parallelism
  opportunities there clearer. We now have a very clean little
  bufferization segment in our pipeline construction (see the pipeline
  sketch below).
2020-11-17 11:00:38 -08:00
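
A minimal sketch of what the open-coding amounts to, using the MLIR C++ builder
API of that era (the std-dialect ExtractElementOp and ConstantIndexOp that
shape.get_extent lowered to); the helper name below is hypothetical and is not
the actual RefBackend code:

    #include "mlir/Dialect/StandardOps/IR/Ops.h" // ExtractElementOp, ConstantIndexOp
    #include "mlir/IR/Builders.h"                // OpBuilder

    // Hypothetical helper (not the actual RefBackend code): given a shape
    // materialized as a tensor<?xindex>, fetch a single extent. Rather than
    // emitting shape.get_extent (which would force convert-shape-to-std to
    // run later in the pipeline), directly emit the extract_element op that
    // shape.get_extent would have lowered to anyway.
    static mlir::Value getExtentOpenCoded(mlir::OpBuilder &b, mlir::Location loc,
                                          mlir::Value shapeTensor, int64_t dim) {
      mlir::Value dimIndex = b.create<mlir::ConstantIndexOp>(loc, dim);
      return b.create<mlir::ExtractElementOp>(loc, shapeTensor,
                                              mlir::ValueRange{dimIndex});
    }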
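And a rough sketch of the bracketing idea in the pass pipeline, assuming MLIR's
OpPassManager API; the pass-constructor names below are placeholders for
illustration, not the actual MLIR or npcomp pass-creation helpers:

    #include <memory>

    #include "mlir/IR/Function.h"      // FuncOp
    #include "mlir/Pass/Pass.h"
    #include "mlir/Pass/PassManager.h" // OpPassManager

    // Placeholder pass constructors, standing in for the real module- and
    // function-level bufferization passes.
    std::unique_ptr<mlir::Pass> createModuleScopedConstantBufferizePass();
    std::unique_ptr<mlir::Pass> createPerFunctionBufferizePass();
    std::unique_ptr<mlir::Pass> createModuleScopedFinalizingBufferizePass();

    // Module-scoped passes sit only at the edges; everything in between is
    // nested on FuncOp, so the pass manager can run that stretch across
    // functions in parallel.
    void buildBufferizationSegment(mlir::OpPassManager &pm) {
      pm.addPass(createModuleScopedConstantBufferizePass());            // module pass
      pm.addNestedPass<mlir::FuncOp>(createPerFunctionBufferizePass()); // function passes
      pm.addPass(createModuleScopedFinalizingBufferizePass());          // module pass
    }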
Name | Last commit | Last updated
Backend/RefJIT | Sever C++ level depend on IREE and rebase on exe and python interface. | 2020-11-16 21:32:56 -08:00
CAPI | Add remaining pieces to capture full example models. | 2020-10-19 22:16:59 -07:00
Conversion | Sever C++ level depend on IREE and rebase on exe and python interface. | 2020-11-16 21:32:56 -08:00
Dialect | [RefBackend] Open-code shape.get_extent as extract_element | 2020-11-17 11:00:38 -08:00
Python | Add missing dependency on NPCOMPCAPI from NPCOMPPythonCommon | 2020-10-22 22:44:18 -07:00
RefBackend | [RefBackend] Open-code shape.get_extent as extract_element | 2020-11-17 11:00:38 -08:00
Typing | Start reworking towards a shared library build. | 2020-10-09 16:02:58 -07:00
CMakeLists.txt | Sever C++ level depend on IREE and rebase on exe and python interface. | 2020-11-16 21:32:56 -08:00
InitAll.cpp | Sever C++ level depend on IREE and rebase on exe and python interface. | 2020-11-16 21:32:56 -08:00