This commit makes the following changes needed to bump LLVM:
- Replace `linalg.init_tensor` with `tensor.empty` (see:
https://reviews.llvm.org/D135129)
- Replace `NoSideEffect` with `Pure` (see
https://reviews.llvm.org/D135505)
- Replace `body` region accessor for `ReduceOp` and `ReduceWindowOp`
with `getBody`
- Fix incorrect use of `tosa::ReduceSumOp` in `AtenNativeLayerNormOp`
conversion pattern. The result type of `tosa::ReduceSumOp` must have
the same rank as the input type. (see:
https://www.mlplatform.org/tosa/tosa_spec.html#_reduce_sum)
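For intuition, the rank-preserving behavior mirrors `keepdim=True` in
PyTorch; a minimal sketch (illustrative only, not part of the change):

```python
import torch

x = torch.randn(2, 3, 4)

# TOSA's reduce_sum keeps the reduced axis as size 1, so the result has
# the same rank as the input -- analogous to keepdim=True in PyTorch.
rank_preserving = torch.sum(x, dim=1, keepdim=True)   # shape: (2, 1, 4)
rank_reducing = torch.sum(x, dim=1, keepdim=False)    # shape: (2, 4)

assert rank_preserving.dim() == x.dim()
assert rank_reducing.dim() == x.dim() - 1
```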
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
This commit replaces test inputs that were built by linearly
transforming a random tensor (multiplying it by a constant and
adding/subtracting an offset) with inputs that use the `low` and `high`
keyword arguments instead, as sketched below.
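A minimal before/after sketch; the local `rand` helper below is a
stand-in for the e2e-test utility that accepts `low`/`high` (its exact
name and signature are assumptions here):

```python
import torch

# Before: linearly transform a [0, 1) uniform sample into [-2, 2).
old_style = 4 * torch.rand(3, 4) - 2

# Stand-in for the test utility's range-aware random helper.
def rand(*sizes, low=0.0, high=1.0):
    return torch.rand(*sizes) * (high - low) + low

# After: request the desired range directly.
new_style = rand(3, 4, low=-2, high=2)
```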
This commit removes the `weight` tensor from the inputs of one of the
`linalg.generic` ops generated by the `aten.convolution` linalg
lowering, since the indexed values are not actually used by the body
of the `linalg.generic`. Moreover, in general the `weight` tensor does
not have the same shape as the output tensor of the `linalg.generic`,
so both tensors being indexed by the same indexing maps is wrong.
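A small numeric illustration of the shape mismatch (using the
PyTorch-level convolution for clarity; values are arbitrary):

```python
import torch

x = torch.randn(1, 3, 16, 16)     # input:  (N, C_in, H, W)
weight = torch.randn(8, 3, 3, 3)  # weight: (C_out, C_in, KH, KW)

out = torch.nn.functional.conv2d(x, weight)

# The weight shape (8, 3, 3, 3) differs from the output shape
# (1, 8, 14, 14), so indexing both with the same indexing maps inside a
# linalg.generic over the output iteration space cannot be correct.
assert weight.shape != out.shape
```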
We originally added these to help bring up more complex models with
heavier dependencies. However, over time it has become clear that these
models usually require more than just heavier dependencies -- they often
require a nontrivial amount of "one-off" code to extract the relevant
parts of the model and compile them. This is not a good fit for a
component in the core Torch-MLIR repo.
However, in the community, nod.ai has developed the ["Shark
Tank"](https://github.com/nod-ai/SHARK/tree/main/tank) which has all the
appropriate code to wrangle these models and organize them. We intend to
more heavily lean on that as a community and improve the symbiosis
there to serve the role that these heavydep tests were meant to play.
Allow customizing `backend_legal_ops` for "torch" output type, since we
don't know which backend will be used (it might be a custom backend).
We don't allow customizing the `backend_legal_ops` for the other output
types (Linalg, TOSA, MHLO) since those backends control their set of
legal ops directly.
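A hypothetical usage sketch, assuming the Python `torch_mlir.compile`
entry point exposes these knobs (the op name in `backend_legal_ops` is
purely illustrative):

```python
import torch
import torch_mlir

class Flatten(torch.nn.Module):
    def forward(self, x):
        return torch.flatten(x, start_dim=1)

# Keep `aten.flatten.using_ints` legal in the "torch" output so that a
# downstream custom backend can handle it directly.
module = torch_mlir.compile(
    Flatten(),
    torch.ones(2, 3, 4),
    output_type="torch",
    backend_legal_ops=["aten.flatten.using_ints"],
)
print(module)
```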
Fixes #1418
-- This commit adds e2e support for the `aten.Mish` op.
-- The `aten.Mish` op is decomposed as follows:
Mish(x) = x * Tanh(Softplus(x))
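A quick numerical check of the decomposition against PyTorch's
reference implementation (illustrative only):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 5)

# Mish(x) = x * Tanh(Softplus(x))
decomposed = x * torch.tanh(F.softplus(x))
reference = F.mish(x)

assert torch.allclose(decomposed, reference, atol=1e-6)
```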
Signed-off-by: Abhishek Varma <avarma094@gmail.com>
* build: disable LTC again so that we can bump PyTorch version
When built using PyTorch's master branch, the LTC code has been failing
to build for a few days. As a result, the PyTorch version referenced by
Torch-MLIR has been stuck at the one from October 4th.
In an effort to advance the PyTorch version, this patch disables LTC, and
a subsequent patch will advance the PyTorch version.
* update PyTorch version to 1.14.0.dev20221010
Also disables the `UpSampleNearest2dDynamicFactor_basic` e2e test, since
the (PyTorch) oracle differs from the computed value for both the
refbackend and the eager_mode backends.
This commit adds lowering of `aten.div.int` and `aten.bitwise_or.Tensor`
ops. Both these ops are required in order to support bloom_560m model.
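A minimal sketch of PyTorch code that exercises these ops when scripted
(the exact ops emitted depend on the model; this is illustrative):

```python
import torch

class Example(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor, a: int, b: int):
        # Dividing two Python ints maps to `aten.div.int` in TorchScript.
        ratio = a / b
        # Elementwise OR of integer tensors maps to `aten.bitwise_or.Tensor`.
        mask = torch.bitwise_or(x, y)
        return mask, ratio

scripted = torch.jit.script(Example())
mask, ratio = scripted(torch.tensor([1, 2, 4]), torch.tensor([4, 2, 1]), 7, 2)
print(mask, ratio)  # tensor([5, 2, 5]) 3.5
```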
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
This commit updates the linalg conversion of `AtenMaxDimOp` to use
`arith.maxf` instead of `arith.select` to calculate the maximum. This
allows better vectorization further downstream, since the operation
can be converted to a simple max reduction when the `indices` result
is not used. See: https://github.com/iree-org/iree/issues/10666.
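At the PyTorch level, `aten.max.dim` returns both the maxima and their
indices; a brief sketch of the case this change targets (illustrative
only):

```python
import torch

x = torch.randn(8, 128)

# `aten.max.dim` produces both values and indices.
values, indices = torch.max(x, dim=1)

# When only `values` is consumed downstream, reducing with `arith.maxf`
# (rather than an index-tracking compare/select) lets later passes turn
# this into a plain, vectorizable max reduction.
assert torch.equal(values, x.amax(dim=1))
```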
Summary of changes:
- Updated references to the Arith dialect
(https://reviews.llvm.org/D134762)
- Switched to prefixed accessors for MemRef dialect
(https://reviews.llvm.org/D134995)
- Fixed warnings about signed/unsigned comparisons, ignored return
values, and unused variables
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend
* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types
* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions
* Test updates
* Move back module variable name
* Remove RefineTypes from TorchMlirLoweringContext::Build()
* Rename pass; Remove passes from base lazy backend
* Rename pass to VerifyBackendContractPass
* Aligned cmd pass name; Fixed TorchConversion passes registration
Instead of letting the auto-update script either fail because of script
errors or letting it commit bad versions, this patch makes the update
process manual, for now. Once the script stabilizes, I will re-enable
its periodic execution.
Updating the PyTorch version may break the Torch-MLIR build, as it did
recently, since the PyTorch update caused the shape library to change,
but the shape library was not updated in the commit for updating
PyTorch.
This patch introduces a new default-off environment variable to the
build_linux_packages.sh script called `TM_UPDATE_ODS_AND_SHAPE_LIB`
which instructs the script to run the update_torch_ods.sh and
update_shape_lib.sh scripts.
However, running these scripts requires an in-tree build and the tests
that run for an in-tree build of Torch-MLIR are more comprehensive than
those that run for an out-of-tree build, so this patch also swaps out
the out-of-tree build for an in-tree build.
A bug in the CI script caused the entire script to fail if the command
that compares against the existing hash returned a non-zero exit
status. A non-zero status from this comparison does not imply failed
execution; it only indicates that the hash has changed.
The auto-update of the PyTorch version broke the Torch-MLIR build
because it did not update the shape library. Going forward, we should
add the shape library update to the PyTorch version update action.
* build: push directly from CI to main branch
This avoids the need to create, approve, and merge a separate PR, in
addition to avoiding unnecessary CI runs for the PyTorch version update.
* build: schedule cronjob to run RollPyTorch action
This patch schedules the RollPyTorch action to be run at noon UTC, which
roughly corresponds to 4am Pacific Time. We pick this time since the
commit for PyTorch nightly releases is picked just after midnight
Pacific Time and the nightly release artifacts are produced about 2 to
3 hours after the commit is picked.