Commit Graph

5 Commits (703428eff42310714878f331442e6e912d1fe461)

Sean Silva 3f4161635c Bump llvm-project to be7352c00d51f4358db3a23ed6a077f7cb48eafd
- TensorFromElementsOp -> tensor::FromElementsOp
- `cmpi "eq", ...` -> `cmpi eq, ...`. Same for `cmpf`
- syntax change for private func ops
- some changes to the python bindings
2021-01-21 11:16:55 -08:00
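
A minimal before/after sketch of the two syntax changes called out above (value and symbol names are placeholders, not from the commit):

```mlir
// Before: quoted predicate attribute; visibility as a discardable attribute.
//   %0 = cmpi "eq", %lhs, %rhs : i32
//   func @helper() attributes {sym_visibility = "private"}

// After: bare predicate keyword; visibility printed inline.
%0 = cmpi eq, %lhs, %rhs : i32
func private @helper()
```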
Sean Silva 97d6d04d41 Bump llvm-project to 16c6e9c58e9ae50a775945e6b407f1891f353d2f
Changes:
- linalg init tensor change (outs+init -> just outs); see the sketch below
- IntegerType::get and the other builtin type getters now take the context
  as the first arg
- LLVMType::* is gone; LLVM types are now regular `Type`s.
2021-01-05 16:12:11 -08:00
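
A rough sketch of the linalg init tensor change above (types and value names are illustrative, not from the commit):

```mlir
// Before: the init tensor was a separate init(...) operand.
//   %0 = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
//                      init(%C : tensor<?x?xf32>) -> tensor<?x?xf32>

// After: the init tensor is passed through outs(...).
%0 = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
                   outs(%C : tensor<?x?xf32>) -> tensor<?x?xf32>
```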
Aaron Arthurs 85898aaf10 Add TCF convolutional op with bias addition (#137) 2020-12-15 12:53:12 -08:00
Sean Silva 32388d938b Make some passes run on FuncOp so they can run in parallel. 2020-11-13 16:12:18 -08:00
Sean Silva 1c7c362e29 [TCP] Replace tcp.matmul with linalg.matmul.
This involved adding a `tcp.splatted` op to splat a dynamically sized
init tensor. See rationale in TCPOps.td docs.

One interesting observation: when lowering tcf.matmul to linalg.matmul,
we need to both 1) create the error checks and 2) calculate a shape
transfer function to create the init tensors (see the sketch below).
Previously, 2) was deferred until tcp.matmul was bufferized. I'm not
sure whether this is a conflation of concerns. For now, it's not a big
burden.
2020-11-10 18:58:28 -08:00
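
A rough sketch of that lowering, using the post-rename syntax from the later commits above; value names are placeholders, and the tcp.splatted operand list here is hypothetical (the real definition is in TCPOps.td):

```mlir
// Before: a single shape-agnostic op.
//   %result = tcf.matmul %lhs, %rhs
//       : (tensor<?x?xf32>, tensor<?x?xf32>) -> tensor<?x?xf32>

// After: error checks plus an explicitly splatted init tensor.
%c0 = constant 0 : index
%c1 = constant 1 : index
%cst = constant 0.0 : f32

// 1) Error checks: the contraction dimensions must agree.
%k_lhs = dim %lhs, %c1 : tensor<?x?xf32>
%k_rhs = dim %rhs, %c0 : tensor<?x?xf32>
%ok = cmpi eq, %k_lhs, %k_rhs : index
// ... report a runtime error when %ok is false (elided) ...

// 2) Shape transfer function: the result shape is %m x %n.
%m = dim %lhs, %c0 : tensor<?x?xf32>
%n = dim %rhs, %c1 : tensor<?x?xf32>
// Hypothetical tcp.splatted syntax: splat %cst to a dynamically sized init tensor.
%init = tcp.splatted %cst, %m, %n : (f32, index, index) -> tensor<?x?xf32>
%result = linalg.matmul
    ins(%lhs, %rhs : tensor<?x?xf32>, tensor<?x?xf32>)
    outs(%init : tensor<?x?xf32>) -> tensor<?x?xf32>
```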