To make it easy to keep track of the dtype functions that have been
moved to `abstract_interp_lib_gen.py`, and to make adding new ones
easier, this commit groups all the dtype functions together rather than
leaving them interspersed among the shape functions.
Credit to @vivekkhandelwal1 for finding the necessary changes.
Summary of changes:
- Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.
- Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)
- Add dependency on MhloPasses when MHLO is enabled.
- Specify result type when using mhlo::DotOp.
- Use v3 of actions/checkout, since the version we use (v2) uses
Node.js 12, which is deprecated by GitHub.
- Source the PowerShell venv script (instead of the Bash script), since
the calling script is a PowerShell script. Without this, the build
doesn't use the venv at all.
- Make the build dependencies in whl-requirements.txt (used by
setup.py) match those in requirements.txt. To that end, this patch
creates a build-requirements.txt that is referenced by both
requirements.txt and whl-requirements.txt (see the sketch below).
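As a rough sketch of the resulting layout (the actual package pins are
omitted and the contents shown are illustrative; the `-r` include
directive is standard pip behavior):

```
# build-requirements.txt -- shared build-time dependencies (illustrative)
setuptools
wheel

# requirements.txt
-r build-requirements.txt
# ...runtime and test dependencies...

# whl-requirements.txt (used by setup.py)
-r build-requirements.txt
```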
After a recent LLVM tag update, the Windows Release build failed even
though pre- and post-merge Windows CI builds passed. This patch makes
the Windows Release build use the same flags as the Windows CI build to
try to address the Release build failures in the future.
There are several decompositions that assume the operands of the op
being decomposed have dtypes available; however, the only time dtypes
are guaranteed to be present is when the graph has reached the backend
contract. In
general, every pass that happens before reaching the backend contract
should not assume dtypes are available and should use `hasDtype` to
check first.
This commit adds `hasDtype` checks to every decomposition that uses
dtypes.
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.
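As a rough illustration of why the swap preserves coverage (the real
functions live in `abstract_interp_lib_gen.py` and follow its own naming
conventions, which are not reproduced here), both `tanh` and `expm1`
reduce to the same elementwise floating-point promotion rule, sketched
below assuming float32 as the default float dtype:

```python
from typing import Tuple
import torch

# Illustrative stand-in for the rule shared by the `tanh` and `expm1` dtype
# functions: floating-point inputs keep their dtype, while integer and bool
# inputs promote to the default float dtype (assumed to be float32 here).
def elementwise_float_op_dtype(self_rank_dtype: Tuple[int, torch.dtype]) -> torch.dtype:
    _rank, dtype = self_rank_dtype
    return dtype if dtype.is_floating_point else torch.float32

# Both ops would use the same rule, which is why swapping one for the other
# keeps the same level of testing.
```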
Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches the tested dtype function from `tanh` to `expm1`, since
the latter appears much less frequently in large models.
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, blocking subsequent PRs.
Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result. This patch reverts the
commit to make the post-merge builds green again.
This commit adds support for passing the result of running
`torch.jit.trace` on a model to `torch_mlir.compile` by relaxing the
condition that checks whether the model is already in JIT IR to accept
any `torch.jit.ScriptModule`.
Fixes https://github.com/llvm/torch-mlir/issues/1739
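A usage sketch (the `output_type` value is just an example, and the call
assumes the `torch_mlir.compile` signature at the time of this change):

```python
import torch
import torch_mlir

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

example_input = torch.ones(3, 4)
# torch.jit.trace returns a torch.jit.ScriptModule subclass; with this change
# it can be handed to torch_mlir.compile directly, without torch.jit.script.
traced = torch.jit.trace(AddOne(), example_input)
compiled = torch_mlir.compile(traced, example_input, output_type="linalg-on-tensors")
print(compiled)
```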