torch-mlir/include/npcomp/Dialect
Sean Silva 9ba77c6e13 Add InlineGlobalSlots pass.
This pass inlines global slots where possible, allowing them to participate
in folding, canonicalization, shape inference, etc.

Example use cases:
- inlining weights and biases that are read-only during inference
- inlining the "training" bool so that training-only code paths fold away
  (a sketch of this case follows the list)
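
To make the second use case concrete, below is a rough before/after sketch
of the kind of rewrite the pass performs. The op, type, and pass-flag
spellings (torch.global_slot, torch.global_slot.get, basicpy.bool_constant,
!basicpy.BoolType, -torch-inline-global-slots) are approximated from the
npcomp Torch dialect of this era and may not match the exact syntax:

    // Before: a private slot holding the "training" flag, only ever read.
    torch.global_slot "private" @training : !basicpy.BoolType {
      %init = basicpy.bool_constant true
      torch.global_slot.init %init : !basicpy.BoolType
    }

    func @forward() -> !basicpy.BoolType {
      %0 = torch.global_slot.get @training : !basicpy.BoolType
      return %0 : !basicpy.BoolType
    }

    // After (approximately) -torch-inline-global-slots: the slot is gone
    // and the constant is materialized at its use, where it can now fold.
    func @forward() -> !basicpy.BoolType {
      %0 = basicpy.bool_constant true
      return %0 : !basicpy.BoolType
    }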

For training use cases (especially the internal training loop), we will need
something smarter to get good performance. That would look like an "SSA
formation" pass that promotes the global slots to tensors flowing through the
program and flushes them back to the slots at the minimal number of places
necessary (a hypothetical sketch of this follows below). We might want to
leave that transformation to backends, though. This also interacts with shape
inference: we need type bounds on the slots even to lower them to backends in
the first place.
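
To make the "SSA formation" idea concrete, here is a purely hypothetical
sketch (no such pass exists in the tree; @weight, @apply_grad, and
@log_weights are made-up names, and the op/type spellings are approximated
as in the sketch above):

    !ndarray = type !numpy.ndarray<*:!numpy.any_dtype>
    func private @apply_grad(!ndarray) -> !ndarray
    func private @log_weights(!ndarray)
    // (definition of the @weight slot elided)

    // Before: the training step round-trips the weight through its slot.
    func @train_step() {
      %w0 = torch.global_slot.get @weight : !ndarray
      %w1 = call @apply_grad(%w0) : (!ndarray) -> !ndarray
      torch.global_slot.set @weight = %w1 : !ndarray
      %w2 = torch.global_slot.get @weight : !ndarray
      call @log_weights(%w2) : (!ndarray) -> ()
      return
    }

    // After SSA formation: the re-read is forwarded from the SSA value,
    // and the slot is flushed back only once.
    func @train_step() {
      %w0 = torch.global_slot.get @weight : !ndarray
      %w1 = call @apply_grad(%w0) : (!ndarray) -> !ndarray
      call @log_weights(%w1) : (!ndarray) -> ()
      torch.global_slot.set @weight = %w1 : !ndarray
      return
    }
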
2021-04-27 12:18:54 -07:00
ATen            Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Basicpy         Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Numpy           Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Refback         Bump llvm-project to 444822d77a7fea28aa49edf24533c987efa1b2ee  2020-12-11 14:43:38 -08:00
Refbackrt       [refbackrt] Scalar arg support                                 2021-03-23 13:16:44 -07:00
TCF             Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
TCP             Bump llvm-project to 484b6648fdd4b104eaf7a2504dd07b60af2c9f8d  2021-04-22 18:12:55 -07:00
Torch           Add InlineGlobalSlots pass.                                    2021-04-27 12:18:54 -07:00
CMakeLists.txt  [RefBackend] Rename RefBackend dialect to Refback              2020-10-08 09:07:00 -07:00