mirror of https://github.com/llvm/torch-mlir
224afb186e
This fixes a "regression" on ResNet where we weren't folding away all the control flow. For now, our policy is to "optimize hard enough" to make that control flow go away, because we don't yet have a way to lower to the backend the constructs guarded by it (RaiseException, string operations, etc.). It remains to be seen how much optimization we decide to do at this level in the fullness of time -- the torch op set is not particularly well-designed (at least not idiomatically for MLIR) for general optimization. Ideally, with really good backend support for the various features, all the heavy optimization will happen at that layer on `std` ops and `scf` control flow. But I have a suspicion we might end up needing more optimization earlier in the pipeline.
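The "fold away the control flow" idea above can be sketched in plain Python. This is a hypothetical mini-IR for illustration only, not torch-mlir's actual data structures or API: an `If` whose condition has been proven constant (e.g. a shape check on ResNet's fixed input rank) is replaced by just its taken branch, so the `RaiseException` guard disappears before lowering.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Op:
    """A generic op in the toy IR (e.g. "RaiseException", "aten.conv2d")."""
    name: str

@dataclass
class If:
    """A conditional guarding two regions. cond=None means unknown statically."""
    cond: Optional[bool]
    then_ops: List
    else_ops: List

def fold_if(node):
    """If the condition is a known constant, return only the taken branch;
    otherwise leave the node untouched."""
    if isinstance(node, If) and node.cond is not None:
        return node.then_ops if node.cond else node.else_ops
    return [node]

def fold_block(ops):
    """Apply the fold across a block of ops."""
    out = []
    for op in ops:
        out.extend(fold_if(op))
    return out
```

For example, once a shape check is proven true, the exception-raising branch folds to nothing and only the real compute remains:

```python
guard = If(cond=True, then_ops=[], else_ops=[Op("RaiseException")])
fold_block([guard, Op("aten.conv2d")])  # -> [Op("aten.conv2d")]
```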
Top-level directories: `npcomp`, `npcomp-c`