mirror of https://github.com/llvm/torch-mlir
8022dfaf1a
I was seeing some miscompiles here before due to the uninitialized data read. Interestingly, this was masked in some of our previous test cases, since the uninitialized data "always" happened to be so small that it presented as a rounding error against the 1.0-10.0 sized values that the matmul was computing on.
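The masking effect described above can be sketched numerically (a hypothetical NumPy illustration, not the actual torch-mlir code): when the leftover "garbage" in an uninitialized output buffer is tiny relative to the 1.0-10.0 sized operands, the corrupted result passes an approximate-equality check even though it is not bit-exact.

```python
import numpy as np

# Operands in the 1.0-10.0 range, as in the masked test cases.
a = np.full((4, 4), 3.0)
b = np.full((4, 4), 5.0)
expected = a @ b  # every entry is 4 * 3.0 * 5.0 = 60.0

# Simulate an uninitialized accumulator: instead of starting from
# zeros, the output buffer holds tiny leftover values (hypothetical
# magnitude chosen for illustration).
garbage = np.full((4, 4), 1e-9)
buggy = garbage + a @ b  # accumulation started from garbage, not 0.0

# The miscompile is masked: results agree within rounding tolerance...
assert np.allclose(buggy, expected)
# ...but the output is still not exactly correct.
assert not np.array_equal(buggy, expected)
```

With larger or differently-scaled garbage values the `allclose` check would fail, which is consistent with the bug only surfacing in some test cases.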
bypass-shapes.mlir
e2e-basic.mlir
e2e-constants.mlir
e2e-mixed-ranks.mlir
lower-alloc-memref-ops.mlir
lower-constant-tensors-to-memref.mlir
lower-shaped-results-to-memref.mlir
lower-std-to-memref.mlir
lower-structural-to-memref.mlir
lower-to-llvm-global.mlir
lower-to-llvm.mlir
lower-to-npcomprt-abi.mlir
restricted-canonicalize.mlir