torch-mlir/test
Sean Silva 358159a6eb [RefBackend] Open-code shape.get_extent as extract_element
It was annoying that we were creating shape.get_extent in the middle of
the bufferization pipeline, as it required running convert-shape-to-std
at an awkward place. To make that cleaner, just open-code the
extract_element ops that shape.get_extent expands into.
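
A minimal sketch of the open-coding, assuming the shape is materialized as a tensor<?xindex> value and using the std-dialect builder API of that era; the helper name and variable names are made up for illustration:

    #include "mlir/Dialect/StandardOps/IR/Ops.h"
    #include "mlir/IR/Builders.h"

    using namespace mlir;

    // Hypothetical helper: fetch one extent of a shape materialized as a
    // tensor<?xindex>, open-coding the extract_element that shape.get_extent
    // would otherwise expand into, so convert-shape-to-std is not needed here.
    static Value getExtentOpenCoded(OpBuilder &builder, Location loc,
                                    Value shapeTensor, int64_t dim) {
      Value dimIndex = builder.create<ConstantIndexOp>(loc, dim);
      return builder.create<ExtractElementOp>(loc, shapeTensor, dimIndex);
    }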

This is a little gross, but it helps with the macroscopic pipeline
ordering issues. Anyway, the train has long since left the station on
treating shapes as some special data type that should only be operated
on with shape ops.

Also:
- Reorder tensor constant bufferize (which is a module pass) to bracket
all the bufferization function passes, making the parallelism
opportunities there clearer. Now we have a very clean little
bufferization segment of our pipeline construction (see the sketch
below).
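
A minimal sketch of that bufferization segment, assuming `pm` is an mlir::OpPassManager; the pass-creation calls and their exact order are illustrative assumptions, not the project's actual pass list:

    // Module pass first: bufferize tensor constants (e.g. to globals).
    pm.addPass(createTensorConstantBufferizePass());
    // Function passes in the middle: each touches a single function, so the
    // pass manager can run them in parallel across functions.
    pm.addNestedPass<FuncOp>(createStdBufferizePass());
    pm.addNestedPass<FuncOp>(createLinalgBufferizePass());
    // Module pass last: rewrite function signatures and calls to memref types.
    pm.addPass(createFuncBufferizePass());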
2020-11-17 11:00:38 -08:00
Backend/Iree Sever the C++-level dependency on IREE and rebase on the exe and Python interface. 2020-11-16 21:32:56 -08:00
CAPI Start reworking towards a shared library build. 2020-10-09 16:02:58 -07:00
Conversion [TCP] Replace tcp.matmul with linalg.matmul. 2020-11-10 18:58:28 -08:00
Dialect [RefBackend] Use std.global_memref instead of a homegrown thing 2020-11-13 18:43:50 -08:00
Python Move existing npcomp.compiler -> npcomp.compiler.numpy. 2020-11-10 19:26:40 -08:00
RefBackend [RefBackend] Open-code shape.get_extent as extract_element 2020-11-17 11:00:38 -08:00
npcomp-run-mlir [RefBackend] Support element-wise multiply op 2020-10-27 19:41:23 -07:00
CMakeLists.txt Sever the C++-level dependency on IREE and rebase on the exe and Python interface. 2020-11-16 21:32:56 -08:00
lit.cfg.py Update test configuration to import mlir from LLVM install location. 2020-10-12 15:25:07 -07:00
lit.site.cfg.py.in Collapse different top level test directories into test/. 2020-08-03 17:41:16 -07:00