torch-mlir/include/npcomp
Sean Silva dc8afc9271 [RefE2E] Refactor how tcf.add is lowered.
It was previously going through an awkward route that prematurely
created linalg.generic ops, which was an annoying layering problem
since we can't compute a shape transfer function for linalg.generic in
the general case. Now tcf.add goes through the same path as
tcp.matmul, with the shape transfer function defined for tcp.add.

This also removes the need for TCPToLinalg, which is now deleted; the
equivalent conversion happens in lower-shaped-results-to-memref. One
interesting outcome of this: we are basically using linalg as a
"Buffer TCP". We might want to look into using named structured ops
for more of TCP, but that would be a big velocity hit, since any
change to the ODS / verification for those ops would then also be a
change to the upstream structured op ODS generator. Once we have more
experience defining these ops manually, we should re-evaluate rebasing
TCP on generated named linalg ops.
2020-09-18 15:03:53 -07:00
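
As a concrete illustration of the "shape transfer function" idea in the
commit message, here is a minimal, self-contained C++ sketch of such a
rule for an elementwise add with broadcasting. It is an illustrative
assumption, not the npcomp API: the name addShapeTransfer, the
std::vector<int64_t> shape representation, and the use of -1 for dynamic
dimensions are made up for this sketch (the -1 convention mirrors MLIR's
ShapedType).

// Hypothetical, self-contained sketch (not the actual npcomp code): a
// per-op shape transfer function for an elementwise add with
// numpy-style broadcasting. -1 marks a dynamic dimension.
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

constexpr int64_t kDynamic = -1;

// Returns the broadcasted result shape of lhs + rhs, or std::nullopt if
// the static dimensions are provably incompatible.
std::optional<std::vector<int64_t>> addShapeTransfer(
    const std::vector<int64_t> &lhs, const std::vector<int64_t> &rhs) {
  std::vector<int64_t> result(std::max(lhs.size(), rhs.size()), 1);
  // Walk the dimensions from the trailing end, as broadcasting does.
  for (size_t i = 0; i < result.size(); ++i) {
    int64_t l = i < lhs.size() ? lhs[lhs.size() - 1 - i] : 1;
    int64_t r = i < rhs.size() ? rhs[rhs.size() - 1 - i] : 1;
    int64_t &out = result[result.size() - 1 - i];
    if (l == 1)
      out = r;              // lhs broadcasts along this dimension.
    else if (r == 1)
      out = l;              // rhs broadcasts along this dimension.
    else if (l == kDynamic || r == kDynamic)
      out = std::max(l, r); // Take the static size if one side has it,
                            // otherwise the result stays dynamic.
    else if (l == r)
      out = l;              // Static sizes agree.
    else
      return std::nullopt;  // Static mismatch: not broadcastable.
  }
  return result;
}

A rule of this shape is what lets tcp.add ride the same shaped-results
path as tcp.matmul, with linalg.generic only appearing later, on buffers.
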
Name            Last commit                                                            Date
Backend         Initial python plumbing to interface with the refjit backend.          2020-07-10 22:57:26 -07:00
Conversion      [RefE2E] Refactor how tcf.add is lowered.                              2020-09-18 15:03:53 -07:00
Dialect         [RefE2E] Add support for matmul.                                       2020-09-18 11:31:01 -07:00
E2E             Totally rework RefE2E tensor to memref flow. (#42)                     2020-09-16 17:31:40 -07:00
JITRuntime      Add -optimize flag to npcomp-run-mlir so that it runs optimizations.   2020-07-13 16:07:44 -07:00
Python          Bump submodule versions.                                               2020-09-08 13:26:42 -07:00
Typing          Bump submodule versions.                                               2020-09-08 13:26:42 -07:00
runtime         Rework e2e flow to use new "npcomprt"                                  2020-07-08 19:36:19 -07:00
CMakeLists.txt  Introduce a type interface for mapping to CPA types.                   2020-07-02 13:56:27 -07:00
InitAll.h       Bump submodule versions.                                               2020-09-08 13:26:42 -07:00