mirror of https://github.com/llvm/torch-mlir
Commit 703428eff4
We already had the `promoteTrailingOutTensor` flag, but weren't using it. An `inplaceVariantKernelName` flag needed to be added.

This change is a little dissatisfying, as the conversions done by the RecognizeKernelsPass are currently non-orthogonal. In particular, `kDropResultAndAliasArg0` probably won't work as intended if mixed with these (we probably need to promote `kDropResultAndAliasArg0` to not be an arg-level thing anyway, as we have done with `promoteTrailingOutTensor`).

This involved adding a new op, `numpy.overwrite_array`:

```
numpy.overwrite_array %arg2 overwrites %arg0 : tensor<2x3xf32>, !numpy.ndarray<[2,3]:f32>
```

This op models the destructive-update behavior. Note that in the above op, we cannot simply RAUW %arg0 with a suitably converted %arg2 (for example, %arg0 might have uses that are not dominated by %arg2, or might have an alias relation with some other array in the program). In general, we need a pass analogous to "SSA formation" which knows how to see through these destructive updates to uncover an underlying tensor program.

Also, add tanh_out_e2e.py/div_inplace_e2e.py and fix some bitrot in refjit.py, which is the running example I'm trying to get working.
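To make the dominance problem concrete, here is a minimal IR sketch; the `test.*` ops and the value names are hypothetical placeholders for arbitrary computation, not real ops from this repo. `%use` reads `%arr` before `%new` is even computed, so replacing all uses of `%arr` with (a converted) `%new` would leave a use that its definition does not dominate:

```
func @sketch(%arr: !numpy.ndarray<[2,3]:f32>) {
  // %use consumes %arr *before* %new exists in program order, so a plain
  // RAUW of %arr -> %new would make %use refer to a not-yet-defined value.
  %use = "test.read"(%arr) : (!numpy.ndarray<[2,3]:f32>) -> tensor<2x3xf32>
  %new = "test.compute"(%use) : (tensor<2x3xf32>) -> tensor<2x3xf32>
  // Only the explicit overwrite op can express the destructive update here;
  // recovering a pure tensor program from this requires the SSA-formation-like
  // analysis described above.
  numpy.overwrite_array %new overwrites %arr : tensor<2x3xf32>, !numpy.ndarray<[2,3]:f32>
  return
}
```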
Top-level directories:

- ATen
- Basicpy
- Numpy
- Refback
- Refbackrt
- TCF
- TCP
- Torch