mirror of https://github.com/llvm/torch-mlir
46f2cb50dc
The expression for HardSigmoid in ONNX (https://onnx.ai/onnx/operators/onnx__HardSigmoid.html), max(0, min(1, alpha * x + beta)), is inherently different from HardSigmoid in Torch (https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html), which is: if x < -3 -> 0; elif x > 3 -> 1; else -> x/6 + 1/2. Given that difference, it was simpler to compute the entire ONNX expression directly when translating it to Torch MLIR, which is what this PR does. Some of the logic is shared with the files in `DecomposeComplexOps`; therefore, the logic shared between `DecomposeComplexOps` and `DefaultDomainGToP` was refactored into a `Utils` file.
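To make the difference concrete, here is a minimal scalar sketch of the two definitions (not code from this PR; `onnx_hardsigmoid` and `torch_hardsigmoid` are illustrative names). Torch's Hardsigmoid coincides with the ONNX formula only for alpha = 1/6 and beta = 1/2, while ONNX defaults to alpha = 0.2, which is why the lowering computes the full ONNX expression rather than reusing `torch.nn.Hardsigmoid` directly.

```python
def onnx_hardsigmoid(x, alpha=0.2, beta=0.5):
    # ONNX definition: max(0, min(1, alpha * x + beta)).
    # Note the ONNX default slope is alpha = 0.2.
    return max(0.0, min(1.0, alpha * x + beta))

def torch_hardsigmoid(x):
    # Torch definition: a piecewise function with fixed slope 1/6.
    if x < -3:
        return 0.0
    if x > 3:
        return 1.0
    return x / 6 + 0.5

# The two agree when ONNX is given alpha = 1/6, beta = 1/2:
for x in (-4.0, -1.0, 0.0, 2.0, 5.0):
    assert onnx_hardsigmoid(x, alpha=1 / 6, beta=0.5) == torch_hardsigmoid(x)
```

With the ONNX default alpha = 0.2, the outputs diverge inside the linear region (e.g. at x = 2, ONNX gives 0.9 while Torch gives ~0.833), so the conversion must honor the op's alpha and beta attributes.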