torch-mlir/test/Dialect
Sean Silva 6431b0f11f Add primitive ArrayToTensor (numpy-array-to-tensor) pass.
The current implementation is just sufficient to do a unary aten.tanh
from the e2e spike, and only applies some local rewrite patterns. I've
sketched out a fuller explanation of where this pass eventually
needs to go in the pass docs.
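
For orientation, here is a rough sketch of the shape of IR involved: a single
unary aten.tanh bridged between the mutable array domain and the value-semantic
tensor domain that the pass rewrites toward. The bridging op names, the
`!numpy.ndarray` type syntax, and the assembly formats are assumptions based on
the numpy dialect of this era, not excerpts from the pass's tests.

```mlir
// Sketch only (op names and assembly assumed).
func @unary_tanh(%arr: !numpy.ndarray<*:!numpy.any_dtype>) -> !numpy.ndarray<*:!numpy.any_dtype> {
  // Bridge from the mutable array domain into value-semantic tensors.
  %t = numpy.copy_to_tensor %arr : (!numpy.ndarray<*:!numpy.any_dtype>) -> tensor<*x!numpy.any_dtype>
  // The compute itself is a plain unary op on tensors (generic op syntax).
  %r = "aten.tanh"(%t) : (tensor<*x!numpy.any_dtype>) -> tensor<*x!numpy.any_dtype>
  // Bridge the result back into the array domain.
  %out = numpy.create_array_from_tensor %r : (tensor<*x!numpy.any_dtype>) -> !numpy.ndarray<*:!numpy.any_dtype>
  return %out : !numpy.ndarray<*:!numpy.any_dtype>
}
```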

Adding this required adding `numpy.tensor_static_info_cast`, which is
the tensor analog of `numpy.static_info_cast`. This op encapsulates the
same numpy-specific "no runtime code" casting semantics, in particular
the interpretation of `!numpy.any_dtype`. The
`numpy.tensor_static_info_cast` ops I see in practice now are "information
erasing" and will be removed by a later pass that exploits the fact that
aten ops are agnostic to the static info in the operand types (so
substituting a type with more static info is fine).
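
A minimal sketch of the "information erasing" form described above; the op and
`!numpy.any_dtype` come from this change, but the exact assembly format here is
an assumption:

```mlir
// Sketch (assembly format assumed): erase static shape and dtype with no
// runtime code, per the numpy-specific casting semantics described above.
%erased = numpy.tensor_static_info_cast %0 : tensor<2x3xf32> to tensor<*x!numpy.any_dtype>
```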

Side note: we *need* to do dtype and rank inference before aten->tcf
(which will eventually mostly be aten->linalg+guards), because each aten
op is idiosyncratically overloaded based on dtype and rank. Without
copying that idiosyncratic overloading into lower layers (a layering
violation), we cannot really lower aten ops to anything until that
inference has run.
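
To make that concrete, a hedged sketch in generic op syntax (the types are
assumed for illustration): before inference there is nothing for an aten->tcf
pattern to key on; after inference the refined types select the overload
locally, without re-encoding aten's overloading rules in lower layers.

```mlir
// Before dtype/rank inference: rank and element type are unknown.
%a = "aten.tanh"(%x) : (tensor<*x!numpy.any_dtype>) -> tensor<*x!numpy.any_dtype>
// After inference: rank and dtype are known, so a lowering for this particular
// overload can be chosen.
%b = "aten.tanh"(%y) : (tensor<?x?xf32>) -> tensor<?x?xf32>
```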
2021-04-05 17:56:35 -07:00
ATen Add support for "trailing_" and "out" variants of various ops. 2021-03-19 10:34:50 -07:00
Basicpy Add initial TorchScript module importer 2021-01-28 11:55:17 -08:00
Numpy Add primitive ArrayToTensor (numpy-array-to-tensor) pass. 2021-04-05 17:56:35 -07:00
Refback [RefBackend] Use std.global_memref instead of homegrown thing 2020-11-13 18:43:50 -08:00
Refbackrt [refbackrt] Scalar arg support 2021-03-23 13:16:44 -07:00
TCF Add TCF convolutional op with bias addition (#137) 2020-12-15 12:53:12 -08:00
TCP Bump llvm-project to 0524a09cc7e1a0797982feacf505825231efbee7 2021-03-23 14:29:05 -07:00
Torch Add torch-adjust-calling-conventions pass. 2021-04-05 17:56:35 -07:00