torch-mlir/lib
zjgarvey 295bf418a4
Add a canonicalization pattern for `aten.unflatten.int` (#3656)
Addresses an issue in <https://github.com/llvm/torch-mlir/issues/3651>
where some unflatten ops generated from ONNX models weren't propagating
static shape information. It may be necessary to add further
optimizations for the more general case, where some static information is
present in the unflatten (or possibly reshape/view) op's `sizes` list
but is not reflected in the output shape. These ops only successfully
infer shapes when the `sizes` list is built from constant ints
(with at most one -1). A common case where this fails is when some
of the `sizes` come from `aten.size.int` ops on dynamic
tensors while the other `sizes` are known statically.
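
For illustration, here is a minimal PyTorch-level sketch of the kind of program that produces such a mixed `sizes` list; the function and tensor names are hypothetical and not taken from the linked issue.

```python
import torch

def mixed_unflatten(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # The first entry of the sizes list depends on another tensor's dynamic
    # dimension (imported as an aten.size.int op), while the second entry is
    # a constant int. Because the list is not built purely from constant ints,
    # shape inference does not propagate the static 4 into the output shape,
    # even though that part of the shape is known.
    return w.unflatten(0, (x.size(0), 4))

print(mixed_unflatten(torch.randn(3), torch.randn(12)).shape)  # torch.Size([3, 4])
```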

This PR includes:
- a canonicalizer for `aten.unflatten.int` which converts it to
`aten.unsqueeze` when it expands a single dim into two and one of the new
dims is statically 1 (see the sketch after this list).
- an improvement to the folder for `aten.__or__.bool` so that it no longer
requires *both* operands to be statically known (also sketched below).
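
As a rough illustration of both changes, here is a small PyTorch-level sketch. It only demonstrates the op-level equivalences, not the C++ implementation; `fold_or_bool` is a hypothetical stand-in for the folder logic.

```python
import torch
from typing import Optional

# Unflattening one dim into two, where one of the new dims is statically 1,
# just inserts a unit dim, so it matches an unsqueeze at the right position.
x = torch.randn(2, 6, 5)
assert torch.equal(x.unflatten(1, (1, 6)), x.unsqueeze(1))  # unit dim in front
assert torch.equal(x.unflatten(1, (6, 1)), x.unsqueeze(2))  # unit dim behind

# Folding behavior analogous to the improved aten.__or__.bool folder: a
# single statically-known True operand is enough to fold to True, even if
# the other operand is unknown.
def fold_or_bool(lhs: Optional[bool], rhs: Optional[bool]) -> Optional[bool]:
    if lhs is True or rhs is True:
        return True
    if lhs is False and rhs is False:
        return False
    return None  # neither operand is a known True; cannot fold to a constant

assert fold_or_bool(None, True) is True
assert fold_or_bool(True, True) is True
assert fold_or_bool(None, False) is None
```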
2024-09-03 16:38:20 -07:00
CAPI            [ONNX] add int16 quantization support (#3446)  2024-06-12 10:37:22 +05:30
Conversion      Add canonicalize pattern for aten.mul.int and aten.floordiv.int (#3680)  2024-09-03 09:13:59 -07:00
Dialect         Add a canonicalization pattern for `aten.unflatten.int` (#3656)  2024-09-03 16:38:20 -07:00
RefBackend      Add missing dependency to TorchMLIRRefBackend target (#3107)  2024-08-14 23:41:51 +08:00
CMakeLists.txt  Link necessary op interface implementations (#3364)  2024-06-03 19:43:28 -05:00
InitAll.cpp     [Stablehlo] legalize deprecated ops to stablehlo ops (#3543)  2024-07-17 00:05:11 +08:00