mirror of https://github.com/llvm/torch-mlir
f8ff6d84f4
`aten::linear` now supports rank-3 inputs. This is a fix for the upcoming BERT inference task. The correct long-term approach would be to support broadcasting in the `aten.matmul` op and decompose `aten.linear` into the right ops.
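The decomposition the commit message suggests can be sketched in NumPy (an illustrative model, not torch-mlir code): a linear layer applied to a rank-3 input of shape `(batch, seq, in_features)` is a matmul that broadcasts over the leading batch dimension, followed by a broadcasted bias add.

```python
import numpy as np

# Illustrative sketch: aten.linear decomposed into a broadcasting matmul
# plus a bias add, for a rank-3 input (batch, seq, in_features) -- the
# shape a BERT-style model feeds to its linear layers.
def linear(x, weight, bias):
    # weight: (out_features, in_features); np.matmul broadcasts over the
    # leading batch dimension of x.
    return np.matmul(x, weight.T) + bias  # bias (out_features,) broadcasts

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 8))   # rank-3 input
w = rng.standard_normal((16, 8))
b = rng.standard_normal(16)
y = linear(x, w, b)
print(y.shape)  # (2, 4, 16)
```

With broadcasting handled in the matmul itself, the rank of the input no longer needs special-casing in the linear op.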
argmax.py
backprop.py
basic.py
batchnorm.py
conv.py
elementwise.py
main.py
matmul.py
mlp.py
quantized_models.py
reduction.py
type_conversion.py
type_promotion.py
view.py
vision_models.py
xfail_sets.py