torch-mlir/include/npcomp
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent in the middle of
the bufferization pipeline, as it required running convert-shape-to-std
at an awkward place. To make that cleaner, just open-code the
extract_element ops that shape.get_extent expands into.
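
Concretely, the open-coding amounts to something like the following minimal sketch (a hypothetical helper, not the actual npcomp code), assuming the MLIR C++ API as of late 2020, when extract_element was a Standard-dialect op (it later became tensor.extract):

    #include "mlir/Dialect/StandardOps/IR/Ops.h"
    #include "mlir/IR/Builders.h"

    using namespace mlir;

    // Read extent `i` of a shape already materialized as an extent tensor
    // (tensor<?xindex>), without creating shape.get_extent and therefore
    // without needing convert-shape-to-std later in the pipeline.
    static Value getExtentOpenCoded(OpBuilder &builder, Location loc,
                                    Value extentTensor, int64_t i) {
      Value index = builder.create<ConstantIndexOp>(loc, i);
      // This is the extract_element op that shape.get_extent expands into.
      return builder.create<ExtractElementOp>(loc, extentTensor, index);
    }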

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, that train left the station long ago: shapes
are no longer treated as some special data type that should only be
operated on with shape ops.

Also,
- reorder tensor constant bufferize (which is a module pass) to bracket
  all the bufferization function passes, to make the parallelism
  opportunities there clearer (sketched below). Now we have a very clean
  little bufferization segment of our pipeline construction.
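
A rough sketch of what that ordering could look like at the OpPassManager level, assuming MLIR's pass infrastructure circa late 2020; the pass factory names below are illustrative placeholders rather than npcomp's actual pipeline:

    #include <memory>

    #include "mlir/IR/Function.h" // FuncOp (moved to BuiltinOps.h in later MLIR)
    #include "mlir/Pass/Pass.h"
    #include "mlir/Pass/PassManager.h"

    using namespace mlir;

    // Hypothetical placeholders standing in for the real bufferization passes.
    std::unique_ptr<Pass> createTensorConstantBufferizePass(); // module-scoped
    std::unique_ptr<Pass> createSomeBufferizePass();           // function-scoped
    std::unique_ptr<Pass> createFinalizingBufferizePass();     // function-scoped

    void buildBufferizationSegment(OpPassManager &pm) {
      // The module-scoped pass sits outside the function passes instead of
      // interleaving with them.
      pm.addPass(createTensorConstantBufferizePass());
      // Contiguous run of function passes: with no module pass in the
      // middle, the pass manager can run them in parallel across functions.
      pm.addNestedPass<FuncOp>(createSomeBufferizePass());
      pm.addNestedPass<FuncOp>(createFinalizingBufferizePass());
    }

The point is purely the ordering: keeping the module-scoped pass out of the middle of the function-pass run is what makes the per-function parallelism opportunities visible.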
2020-11-17 11:00:38 -08:00
Backend/RefJIT Sever C++-level dependency on IREE and rebase on exe and python interface. 2020-11-16 21:32:56 -08:00
Conversion Sever C++-level dependency on IREE and rebase on exe and python interface. 2020-11-16 21:32:56 -08:00
Dialect [RefBackend] Open-code shape.get_extent as extract_element 2020-11-17 11:00:38 -08:00
Python Repurpose numpy-compiler compiler/runtime flow for PyTorch. 2020-11-11 10:38:13 -08:00
RefBackend [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
Typing Bump submodule versions. 2020-09-08 13:26:42 -07:00
CMakeLists.txt [RefBackend] Rename "E2E" to RefBackend. 2020-10-07 10:29:48 -07:00
InitAll.h Bump submodule versions. 2020-09-08 13:26:42 -07:00