* [tosa] Support for AtenNe[Tensor|Scalar]Op, AtenLog2Op,
AtenBitwiseAndTensorOp, AtenSquareOp and AtenThresholdOp (PyTorch-level
forms are sketched below)
* Fix for Issue #532: mixed input types for a few ops, and updated a few
tests to use i32 instead of i64
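As a hedged illustration, these are the usual PyTorch-level spellings of
the listed ATen ops (shapes, values, and the i32 dtype below are arbitrary
examples, not taken from the actual tests):
```
import torch

x = torch.randn(3, 4)
y = torch.randn(3, 4)
x != y                                      # aten.ne.Tensor
x != 1.0                                    # aten.ne.Scalar
torch.log2(x.abs() + 1.0)                   # aten.log2
torch.square(x)                             # aten.square
torch.nn.functional.threshold(x, 0.5, 0.0)  # aten.threshold
xi = torch.randint(0, 8, (3, 4), dtype=torch.int32)
yi = torch.randint(0, 8, (3, 4), dtype=torch.int32)
torch.bitwise_and(xi, yi)                   # aten.bitwise_and.Tensor
```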
Signed-off-by: Anup Gangwar <anup.gangwar@arm.com>
Co-authored-by: Anup Gangwar <anup.gangwar@arm.com>
This involves the following two parts:
- Change type refinement to propagate more static shape info.
- Get as much static shape info as possible when creating the result
tensor during conversion to linalg.
- It folds `aten.to.dtype` when the input tensor type and result type
are exactly the same.
- It folds `aten.view` when both the input tensor type and the result
type have rank 1 (both folds are illustrated below).
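A minimal sketch of the two folded patterns, written as plain PyTorch
(the shape and dtype are arbitrary examples):
```
import torch

# Fold 1: aten.to.dtype where the result type equals the input type.
x = torch.rand(4, dtype=torch.float32)
y = x.to(torch.float32)  # same dtype and shape, so the op folds to its input
# Fold 2: aten.view where both the input and the result have rank 1.
z = x.view(4)            # rank-1 to rank-1, so the op folds to its input
```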
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
- Added E2E support for the `aten.eq.Tensor` and `aten.lt.Tensor` ops.
Both operands are expected to be of the same type; type promotion is
not addressed as part of this commit.
- Added E2E support for the `aten.eq.Scalar` and `aten.lt.Scalar` ops.
Tensor-to-Scalar operand type promotion has not been handled in this
commit (the new ops are sketched below).
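A hedged PyTorch-level sketch of the four comparison forms; both tensor
operands share a dtype because type promotion is not handled yet (values
are arbitrary):
```
import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)
a == b    # aten.eq.Tensor
a < b     # aten.lt.Tensor
a == 0.5  # aten.eq.Scalar
a < 0.5   # aten.lt.Scalar
```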
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
- Templatize `aten.zeros` and `aten.ones` ops lowering.
- Add E2E support for `aten.empty` op.
- Add Integer type support in `aten.mul.Scalar` op lowering.
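For reference, the ops touched by this commit at the PyTorch level (shapes
and the int64 dtype are arbitrary examples):
```
import torch

torch.zeros(2, 3)   # aten.zeros
torch.ones(2, 3)    # aten.ones
torch.empty(2, 3)   # aten.empty (contents are uninitialized)
xi = torch.arange(6, dtype=torch.int64).view(2, 3)
xi * 3              # aten.mul.Scalar on an integer tensor
```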
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
The `aten.gt.Tensor` op has been added to the torch dialect, and its
lowering to the linalg dialect has been implemented.
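A small PyTorch-level illustration of the new op (values are arbitrary):
```
import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)
a > b  # aten.gt.Tensor, lowered to an elementwise comparison in linalg
```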
Signed-off-by: Prashant Kumar <prashant@nod-labs.com>
This commit adds lowering of `aten.gt.Scalar` and `aten.where.self` as
part of the element-wise ops lowering.
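PyTorch-level forms of the two new ops, as a quick illustration (values
are arbitrary):
```
import torch

a = torch.randn(3, 4)
mask = a > 0.5                             # aten.gt.Scalar
torch.where(mask, a, torch.zeros_like(a))  # aten.where.self
```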
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
This commit adds lowering of `aten.mul.Scalar` and also adds
decomposition of `aten.addmm` to `aten.mul.Scalar`, `aten.add.Tensor`
and `aten.mm` ops.
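The decomposition, sketched as plain PyTorch rather than the actual
rewrite pattern: addmm(input, mat1, mat2, beta, alpha) is computed as
beta * input + alpha * mm(mat1, mat2), i.e. mul.Scalar + mm + add.Tensor.
```
import torch

def addmm_decomposed(inp, mat1, mat2, beta=1.0, alpha=1.0):
    # beta * inp and alpha * (...) are the aten.mul.Scalar pieces,
    # torch.mm is aten.mm, and the + is aten.add.Tensor.
    return beta * inp + alpha * torch.mm(mat1, mat2)

inp = torch.randn(2, 4)
mat1, mat2 = torch.randn(2, 3), torch.randn(3, 4)
assert torch.allclose(addmm_decomposed(inp, mat1, mat2),
                      torch.addmm(inp, mat1, mat2))
```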
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
Part of #380
Also:
- BoolType is not considered a Scalar
- e2e framework fixes for NaN handling
- `tu.rand(..., low=, high=)` support (see the sketch after this list)
- delete an unused variable (fixes a warning)
- Add IouOfModule from #380 to the e2e test suite (IoU is a common
calculation in vision models)
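A hypothetical stand-in for what `tu.rand(..., low=, high=)` provides,
assuming the usual "scale a uniform [0, 1) sample into [low, high)"
approach; the real TensorUtils helper may differ:
```
import torch

def rand(*sizes, low=0.0, high=1.0):
    # Draw uniform samples in [0, 1) and rescale them into [low, high).
    return torch.rand(*sizes) * (high - low) + low
```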
* Print more exception info on error during test execution
* Fix formatting
* Add aten::gelu lowering
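For reference, GELU in its standard erf form is
gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))); a plain-PyTorch check
(not the lowering itself) looks like:
```
import math
import torch

def gelu_ref(x):
    # Exact (erf-based) GELU.
    return 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))

x = torch.randn(8)
assert torch.allclose(gelu_ref(x), torch.nn.functional.gelu(x), atol=1e-6)
```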
Co-authored-by: Boian Petkantchin <boian@nod-labs.com>
Summary:
This commit fixes an off-by-one error in how negative dimensions were
being handled in the lowering of transpose. It also adds
negative-dimension tests for transpose and unsqueeze.
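A sketch of the intended negative-dimension handling (not the exact C++
code): for a rank-r tensor, a dimension index d < 0 refers to dimension
d + r.
```
def normalize_dim(dim, rank):
    # Map a possibly-negative dimension index into [0, rank).
    return dim + rank if dim < 0 else dim

# transpose(x, 0, -1) on a rank-3 tensor swaps dimensions 0 and 2:
assert normalize_dim(-1, 3) == 2
```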
This commit (with approval from all contributors) dual licenses
the torch-mlir project under both the standard LLVM license and the
standard PyTorch license. This will facilitate moving code between
torch-mlir and the two upstream projects.
The standard file comment is now:
```
// This file is licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
// Also available under a BSD-style license. See LICENSE.
```
See `LICENSE` in the project root for the terms of both licenses.
This leaves no real code outside torch-mlir.
This also renames the "npcomp backend contract" to the "linalg on tensors
backend contract" as the name of the abstraction layer that RefBackend
(and IREE) accepts.
It just contained the e2e testing framework. We now fold it into the
main project to reduce complexity.
- `frontends/pytorch/python/` -> `python/torch_support`
- `frontends/pytorch/e2e_testing` -> `e2e_testing`
- `frontends/pytorch/examples` -> `examples`
- `frontends/pytorch/test` -> `python/test`
- `torch_mlir_torchscript` python module -> `npcomp_torchscript`
- `torch_mlir_torchscript_e2e_test_configs` python module ->
`npcomp_torchscript_e2e_test_configs`
This also changes the license of a handful of files from the
"pytorch-style" license to the regular LLVM/npcomp license. The only
people who committed to those files were myself and Yi.