mirror of https://github.com/llvm/torch-mlir
bb668e6e26
This patch adds a dialect intended to be used as a frontend dialect to facilitate lowering from "A Tensor Library" (ATen) in torch/pytorch. This patch also includes several passes that are useful in conjunction with the dialect:

--aten-layer-name: Generates layer names for each operation, which are not present in the original pytorch.
--aten-to-std: Lower the ATen dialect into standard dialect function calls.
--return-elimination-pass: Convert functions (primarily the toplevel function) to pass return values by reference. This simplifies pytorch integration.
--aten-op-report: Generate a textual report about the model.
--liveness-report

Future patches will implement actual integration with the pytorch JIT to intercept and generate MLIR in this dialect, then lower the resulting MLIR into function calls through aten-layer-name -> aten-to-std -> return-elimination -> std-to-llvm. The result would then be JITed using the LLVM JIT, linked against a runtime library that makes calls back into pytorch to implement all the layers.

Co-authored-by: Jeff Fifield <jeff.fifield@xilinx.com>
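As a rough illustration of the intended flow, the sketch below shows how a single ATen add might look in the frontend dialect and what the lowering passes would do to it. The op name "aten.add", the callee name "aten_add", the function name, and the tensor types are assumptions made for this sketch, not taken from the actual tests.

```mlir
// Hypothetical sketch of a function in the aten dialect, before lowering.
func @forward(%arg0: tensor<1x4xf32>, %arg1: tensor<1x4xf32>) -> tensor<1x4xf32> {
  // Generic-form op from the frontend dialect; exact op name is an assumption.
  %0 = "aten.add"(%arg0, %arg1) : (tensor<1x4xf32>, tensor<1x4xf32>) -> tensor<1x4xf32>
  return %0 : tensor<1x4xf32>
}

// After --aten-to-std, the op above would be rewritten into an ordinary
// standard-dialect call into the runtime library, e.g. (callee name assumed):
//   %0 = call @aten_add(%arg0, %arg1) : (tensor<1x4xf32>, tensor<1x4xf32>) -> tensor<1x4xf32>
//
// After --return-elimination-pass, the toplevel function would instead take an
// output buffer argument and return nothing, passing its result by reference.
```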
aten_add.mlir
aten_addmm.mlir
aten_as_strided.mlir
aten_batchnorm.mlir
aten_conv2d.mlir
aten_conv2d_back.mlir
aten_maxpool2d.mlir
aten_relu.mlir
aten_resA.mlir
lenet_fwd.mlir