torch-mlir/lib/Conversion/TorchToTosa
Abhishek-TyRnT df02692726
Dynamic size support for flatten (#3005)
Added support for dynamic shapes in the `aten.flatten.using_ints` op lowering to the TOSA dialect. As a result, several Argmax e2e tests now pass.
This PR fixes https://github.com/llvm/torch-mlir/issues/3004

The following tests pass after this PR:
```
1. "ArgmaxIntModule_basic"
2. "ArgmaxIntModule_multiple_maxs"
3. "ArgmaxModule_basic"
```
2024-03-19 15:19:29 -07:00
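
For context (not part of the PR), a minimal PyTorch sketch of why the Argmax tests depend on this flatten lowering: `torch.argmax` with no `dim` argument reduces over the flattened tensor, so the lowering flattens the input first (the `aten.flatten.using_ints` step) before reducing, and with dynamically shaped inputs the flattened length is only known at runtime. The shapes below are concrete for illustration; the e2e tests mark them as dynamic.

```python
import torch

# torch.argmax with no `dim` reduces over the flattened tensor, so the
# compiler flattens the input (aten.flatten.using_ints) and then takes
# the argmax of the 1-D result. When the input shape is dynamic, the
# flattened length is unknown at compile time, which is the case this
# PR handles in the TOSA path.
x = torch.randint(-100, 100, (3, 4))  # the e2e tests treat these dims as dynamic
flat = torch.flatten(x, 0, -1)        # what aten.flatten.using_ints computes
print(torch.argmax(x))                # index into the flattened tensor
print(torch.argmax(flat))             # same value as above
```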
| File | Last commit | Date |
|------|-------------|------|
| CMakeLists.txt | Re-organize project structure to separate PyTorch dependencies from core project. (#2542) | 2023-11-02 19:45:55 -07:00 |
| TorchToTosa.cpp | Dynamic size support for flatten (#3005) | 2024-03-19 15:19:29 -07:00 |
| TosaLegalizeCommon.cpp | Clang format refresh (#2812) | 2024-01-29 12:59:33 -05:00 |
| TosaLegalizeUtils.cpp | allow tosa.cast to convert from f32 to f16 (#2934) | 2024-02-20 14:22:38 -08:00 |