torch-mlir/lib/Dialect
Sean Silva 9ba77c6e13 Add InlineGlobalSlots pass.
This pass inlines global slots where possible, which allows them to
participate in folding, canonicalization, shape inference, etc.

Example use cases:
- inlining weights and biases that are read-only during inference
- inlining the "training" bool so that code guarded by it can fold away

For training use cases (especially an internal training loop), we will need
something smarter to get good performance. That would look like an "SSA
formation" that promotes the global slots to tensors in the program,
flushing them back to the slots at the minimal number of necessary
places. We might want to leave that transformation to backends, though.
This also interacts with shape inference (type bounds on the slots are
needed to even lower them to backends in the first place).
2021-04-27 12:18:54 -07:00
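
For reference, a rough before/after sketch of what InlineGlobalSlots does. This
is illustrative only, not IR copied from the repo: the global-slot op spellings
(torch.global_slot, torch.global_slot.init, torch.global_slot.get) follow the
Torch dialect, but the exact types and constant syntax at this revision may differ.

    // Before (schematic): the weight sits behind a global slot created by
    // GlobalizeObjectGraph, so nothing downstream can see its value.
    torch.global_slot "private" @self.weight : tensor<2x2xf32> {
      %cst = constant dense<1.0> : tensor<2x2xf32>
      torch.global_slot.init %cst : tensor<2x2xf32>
    }
    func @forward() -> tensor<2x2xf32> {
      %w = torch.global_slot.get @self.weight : tensor<2x2xf32>
      return %w : tensor<2x2xf32>
    }

    // After InlineGlobalSlots (schematic): the slot is never written, so the
    // read is replaced by the initial value, which can now fold, canonicalize,
    // and participate in shape inference.
    func @forward() -> tensor<2x2xf32> {
      %w = constant dense<1.0> : tensor<2x2xf32>
      return %w : tensor<2x2xf32>
    }
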
ATen            Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Basicpy         Bump llvm-project to 0524a09cc7e1a0797982feacf505825231efbee7  2021-03-23 14:29:05 -07:00
Numpy           Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Refback         [RefBackend] Use std.global_memref instead of homegrown thing  2020-11-13 18:43:50 -08:00
Refbackrt       [refbackrt] Scalar arg support  2021-03-23 13:16:44 -07:00
TCF             Bump llvm-project to 16c6e9c58e9ae50a775945e6b407f1891f353d2f  2021-01-05 16:12:11 -08:00
TCP             Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Torch           Add InlineGlobalSlots pass.  2021-04-27 12:18:54 -07:00
CMakeLists.txt  [RefBackend] Rename RefBackend dialect to Refback  2020-10-08 09:07:00 -07:00