torch-mlir/lib/Conversion/TorchToLinalg

Latest commit: AtenAdaptiveMaxPool2d Conversion to Linalg (#2779) by zjgarvey (c531f5495b)
The logic here is very similar to the AdaptiveAvgPool1d conversion in #2661, with a few modifications (a reference sketch of the resulting semantics follows the list):

1. buffVal = -inf instead of 0.
2. The main linalg generic op accumulates a max, instead of a sum, into the first output tensor.
3. Avg pooling requires dividing the sum pool by the kernel width, which we stored in an auxiliary tensor (kSizeTensor). Here, the auxiliary tensor records the indices instead. Strangely enough, the only signature available for this op returns the indices, and it appears they must be computed whether the user wants them or not. See
[pytorch/torch/nn/functional.py](https://github.com/pytorch/pytorch/blob/main/torch/nn/functional.py#L1174).
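Below is a minimal plain-C++ sketch of the semantics the decomposition implements, not the actual MLIR lowering in Pooling.cpp. The single-plane (H x W) layout and all names (`adaptiveMaxPool2dRef`, `outH`, `outW`, etc.) are illustrative assumptions; it just shows the -inf initialization, the max accumulation, and the auxiliary index recording described above.

```cpp
// Reference sketch of adaptive max pooling semantics (single H x W plane).
// Illustrative only; names and layout are assumptions, not the pass code.
#include <cstdint>
#include <limits>
#include <vector>

void adaptiveMaxPool2dRef(const std::vector<float> &input, int64_t h, int64_t w,
                          int64_t outH, int64_t outW,
                          std::vector<float> &output,
                          std::vector<int64_t> &indices) {
  // buffVal = -inf: the output buffer starts at -infinity instead of 0.
  output.assign(outH * outW, -std::numeric_limits<float>::infinity());
  // Auxiliary tensor: records the flattened argmax index for each output.
  indices.assign(outH * outW, 0);
  for (int64_t oh = 0; oh < outH; ++oh) {
    // Adaptive window: start = floor(oh*h/outH), end = ceil((oh+1)*h/outH).
    int64_t hStart = (oh * h) / outH;
    int64_t hEnd = ((oh + 1) * h + outH - 1) / outH;
    for (int64_t ow = 0; ow < outW; ++ow) {
      int64_t wStart = (ow * w) / outW;
      int64_t wEnd = ((ow + 1) * w + outW - 1) / outW;
      for (int64_t ih = hStart; ih < hEnd; ++ih) {
        for (int64_t iw = wStart; iw < wEnd; ++iw) {
          float v = input[ih * w + iw];
          if (v > output[oh * outW + ow]) { // accumulate a max, not a sum
            output[oh * outW + ow] = v;
            indices[oh * outW + ow] = ih * w + iw; // flattened input index
          }
        }
      }
    }
  }
}
```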

Before writing other adaptive pooling conversions, the logic of this
decomposition should be rolled into a helper function that works for both
max and avg pooling ops. Even the handling of the auxiliary tensor should
likely be automated. This code was written in a slightly more tedious way
than strictly necessary (often using loops to fill SmallVectors with
rank - 2 elements, which is only 2 in this case) to ease the transition
to such a helper function.
Committed 2024-01-24 09:09:56 -08:00
| File | Latest commit | Date |
| --- | --- | --- |
| CMakeLists.txt | Re-organize project structure to separate PyTorch dependencies from core project. (#2542) | 2023-11-02 19:45:55 -07:00 |
| DataMovement.cpp | [TorchToLinalg] Add lowering for torch.aten.diagonal (#2632) | 2024-01-22 12:47:13 -05:00 |
| IndirectDataMovement.cpp | [TorchToLinalg] NFC: Move Utils.h to an externally accessible location (#2603) | 2023-12-01 19:38:21 -05:00 |
| Linear.cpp | implement aten.conv1d, aten.conv3d, and aten.conv_tbc (#2757) | 2024-01-23 21:30:03 -08:00 |
| Pooling.cpp | AtenAdaptiveMaxPool2d Conversion to Linalg (#2779) | 2024-01-24 09:09:56 -08:00 |
| PopulatePatterns.h | Re-enable custom op support | 2022-08-16 22:49:08 +05:30 |
| Random.cpp | [TorchToLinalg] NFC: Move Utils.h to an externally accessible location (#2603) | 2023-12-01 19:38:21 -05:00 |
| Reduction.cpp | [TorchToLinalg] Drop constexpr from ifs in argmin/max.dim (#2617) | 2023-12-07 13:08:17 -05:00 |
| TensorConstructors.cpp | Fix unused variable warnings (#2775) | 2024-01-22 11:05:55 -08:00 |
| TensorScalarInterop.cpp | Elide dynamic broadcast checks when in strict symbolic shapes mode. (#2496) | 2023-09-29 16:45:48 -07:00 |
| TorchToLinalg.cpp | Add complex types support with basic complex ops. | 2023-05-11 21:29:07 +05:30 |
| Uncategorized.cpp | Implement lowering of torch.aten.remainder.Tensor (#2763) | 2024-01-19 18:09:08 +05:30 |
| Utils.cpp | [torch][quant] Support quantize and dequantize for torch (#2731) | 2024-01-12 19:11:14 -08:00 |