Strided slicing can occur with a negative stride. In these cases we need
to bound the end differently. This includes removing a function that was
generating bad limits.
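A small Python illustration of the slicing semantics involved (plain list
slicing, used here only to show why the end bound differs):
```
x = list(range(6))
# With a positive stride the end is clamped to len(x); with a negative stride
# the slice walks downward, so "include everything down to index 0" needs an
# end below 0 conceptually, which Python expresses by omitting the bound.
print(x[4:1:-1])   # [4, 3, 2]  (end is exclusive)
print(x[4::-1])    # [4, 3, 2, 1, 0]
```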
Simple folders for limited-size aten tensor operations. These are primarily
useful for folding shape computations, which unfortunately can use
`aten` operators. Add, sub, and mul are common examples of these folders.
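As a rough PyTorch illustration of the kind of computation being folded
(shape arithmetic on tiny constant integer tensors):
```
import torch

# Shape computations in traced graphs often appear as aten.add/sub/mul on
# small constant integer tensors; a folder can evaluate them at compile time
# and replace the op with the resulting literal.
a = torch.tensor([2, 3, 4], dtype=torch.int64)
b = torch.tensor([1, 1, 2], dtype=torch.int64)
print(a + b)   # tensor([3, 4, 6])
print(a * b)   # tensor([2, 3, 8])
```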
Folds aten::index_select ops under the following conditions:
1. If the input and output are the same shape, the indexing operation is
a NOP, so just return the input.
2. If the input has shape <1x1x...xNx...x1> (all 1's except for one
dim), and the output shape is <1x1x...x1> (all 1's), then there is a
single index, so extract the single element value and return a tensor
with that value.
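A small PyTorch sketch of the two cases (condition 1 shown with an identity
index):
```
import torch

x = torch.randn(1, 1, 4, 1)
# Condition 1: output shape equals input shape (identity index), so the op is a NOP.
same = torch.index_select(x, 2, torch.arange(4))
print(torch.equal(same, x))   # True
# Condition 2: a single index into the only non-unit dim gives an all-ones shape,
# so the fold can return a literal holding that single element value.
one = torch.index_select(x, 2, torch.tensor([1]))
print(one.shape)              # torch.Size([1, 1, 1, 1])
```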
---------
Co-authored-by: Dave Liddell <dliddell@xilinx.com>
If a tensor is initialized by a list with a single constant integer,
this folder turns it into a torch.vtensor.literal
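For reference, the Python-level pattern this corresponds to (a tensor built
from a single constant integer, which is itself a compile-time constant):
```
import torch

# A tensor constructed from a one-element constant integer list; the folder
# rewrites the equivalent IR into a torch.vtensor.literal.
t = torch.tensor([7])
print(t, t.dtype)   # tensor([7]) torch.int64
```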
---------
Co-authored-by: Dave Liddell <dliddell@xilinx.com>
This change allows the CumSum op in OPT to get the constant that it requires to be lowered to TMTensor.
---------
Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
Co-authored-by: Xida Ren <xida.ren.dev@gmail.com>
We were seeing some assertion failures after some checks around folders
were tightened up in LLVM:
https://github.com/llvm/llvm-project/pull/75887. This PR essentially
moves the logic that used to be applied at the LLVM level into the
folder, which seems to be the suggested fix.
I'm not sure if the IR that caused issues for us _should_ be valid?
```
%1 = torch.aten.detach %arg0 : !torch.tensor<[1],f32> -> !torch.tensor
```
A better fix might be to create a verifier ensuring the result of
`aten.detach` has the same type as its operand.
---------
Co-authored-by: aaron-stgeorge <aaron.stgeorge@getcruise.com>
After noticing a number of commits with unrelated formatting changes,
I suspect the clang-format configuration changed at some point, which is
why those spurious diffs keep appearing. Doing a full formatting refresh
can help avoid this.
The changes made here came from
```
find lib -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
find include -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
find projects -iname '*.h' -o -iname '*.cpp' | xargs clang-format -i --style=llvm
```
This preserves sparsity at the most obvious places where Torch tensors are
lowered to MLIR RankedTensorType tensors. Other places are marked for
audit. Initial lowering tests are included.
This adds an encoding field to the torch type, using the interfaces for
printing, parsing, and verification. Note that although this change
prepares for adding sparsity to the torch type (as illustrated by the
round-trip and invalid tests), nothing in this change depends on the
actual contents of the encoding field!
Handle both `torch.dequantize` and `torch.quantize_per_tensor`, including
the op-based quantization parameter tracking. This includes adding
`qint32` to the torch types, as it was missing during the initial type
inclusion.
For testing we only use `torch.int8` and `torch.float` types on
function boundaries, as the `qint8` types require passing the scale
and zero-point quantization information, which is not supported yet.
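A short PyTorch illustration of the ops and of the scale/zero-point
parameters being tracked:
```
import torch

x = torch.tensor([0.5, 1.0, 1.5])
# quantize_per_tensor carries the scale and zero point that the lowering must
# track; dequantize recovers a float tensor from the quantized values.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
print(q.int_repr())         # tensor([ 5, 10, 15], dtype=torch.int8)
print(torch.dequantize(q))  # tensor([0.5000, 1.0000, 1.5000])
```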
`AtenStackOp` defines this folder for list operand containing single
element:
```
OpFoldResult AtenStackOp::fold(FoldAdaptor adaptor) {
  auto list = getOperand(0).getDefiningOp<PrimListConstructOp>();
  if (!list || !list->hasOneUse() || list.getElements().size() != 1)
    return nullptr;
  return list.getElements()[0];
}
```
However, unlike `AtenCatOp`, `AtenStackOp` cannot be folded away for a
single-element list operand, because the result of a stack operation
contains an additional dimension (of size 1, like expand_shape).
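A quick PyTorch check of the shape difference:
```
import torch

x = torch.randn(10, 32)
# cat of a single-element list is the identity, but stack inserts a new
# size-1 dimension, so folding stack([x]) to x would change the result shape.
print(torch.cat([x]).shape)    # torch.Size([10, 32])
print(torch.stack([x]).shape)  # torch.Size([10, 1, 32])
```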
This PR removes the `AtenStackOp::fold` method, and adds an e2e test for
the single-element list input case, which fails on current `main` as
follows:
```
Unexpected outcome summary: (linalg)
****** Failed tests - 1 tests
FAIL - "TensorsStackSingleElementListModule_basic"
@ trace item #0 - call to "forward"
@ output of call to "forward"
ERROR: shape (torch.Size([10, 32])) is not equal to golden shape (torch.Size([10, 1, 32]))
```
Thanks Chris Lalau Keraly for the bug report.
- adds support for an optional verifier to the generated torch op
tablegen (GeneratedTorchOps.td)
- uses the above to add a verifier for the torch permute op.
Motivation: I hit an unclear error from linalg while developing a
decomposition pass for pixel_shuffle. The error would have been clearer
if the problem had been detected earlier, at the invalid aten.permute op.
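For reference, the kind of invalid permutation the verifier now rejects up
front (shown here at the PyTorch level, where eager mode also rejects it):
```
import torch

x = torch.randn(2, 3, 4)
# A permutation must mention each dimension exactly once; a duplicate dim
# like this is the sort of error that used to surface only later in linalg.
try:
    x.permute(0, 1, 1)
except RuntimeError as e:
    print(e)
```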
Testing: new tests added. To run added tests, from the base directory
run
```
./build/bin/llvm-lit test/Dialect/Torch/invalid.mlir
```
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20
There are two primary goals:
1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.
Very little changes in this path with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.
This also takes steps toward a couple of other layering enhancements:
* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass-through, but it
roughly halves the number of build actions for the project, from the
high 4000's to the low 2000's.
It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.
Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
Strict symbolic shapes allow us to assume numpy-style dynamic broadcasts
never occur. This allows us to strengthen the folder for broadcasts to
cases where the rank is the same and all shapes match (including dynamic
sentinel values).
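A minimal PyTorch illustration of the case the strengthened folder now
removes (same rank, identical shape):
```
import torch

x = torch.randn(4, 8)
# Broadcasting to an identical shape is the identity, so under strict symbolic
# shapes the broadcast op can simply be folded away.
y = torch.broadcast_to(x, (4, 8))
print(torch.equal(x, y))   # True
```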
Corresponding commits:
* mlir-hlo: 16886a108eff5197f816ca0f1950cc5ff1b078d9
* stablehlo: 77a59815a82b34f7b08ed2d42a711d9920682d0e
* llvm-project: 4acc3ffbb0af5631bc7916aeff3570f448899647
* Adapt to ByteCodeOpInterface changes.
* Adapt to RegionBranchPoint changes: https://reviews.llvm.org/D159116
* Adapt inferReturnTypes to get the value from properties.
* Adapt invalid.mlir to properties syntax
* [TOSA] Align with custom assembly format change.
* [TOSA] Handle change of axis to int32 type
* [TOSA] Restore improper convert to i32
Landing with Windows broken (it cannot be fixed because of the way the mlir-hlo dep is inserted). Will follow up with an untangling.
---------
Co-authored-by: TatWai Chong <tatwai.chong@arm.com>
Co-authored-by: Eric Kunze <eric.kunze@arm.com>
* view_as_real test case, allow dtype in testutils.randn
* abstract python upstream func implemented
* fixed upstream dtype func, implemented view_as_real backend op
* formatted AtenViewAsRealOp, removed change in e2etest/framework
* removed test suite from reshape_like.py, because it was moved to basic.py
* implemented C-API wrapper for mlirComplexF128 type
* fixed torch.complex dtype width in MLIR and Torch MLIR, deleted float16 dtype dict
* Changed IR input of aten fft_fft unit test
* code refactored
* code refactored and fixed ci test
* refactored: removed whitespace, and rolled back to having both input/output affine expr
* refactored: deleted output affine expr to reduce redundancy
* xfail ltc backend
* removed ComplexImag and ComplexReal from torchdynamo xfail set
* copied and pasted from main branch as there's no change to be made in this file
* refactored abstract_interp_lib_gen.py
* refactored: torchtypes.td, formatted, removed commented out code
* Support brevitas custom op (#2320)
* f16 change for brevitas
* Adapt the change of brevitas quant custom op name
* Add unit tests
* Make brevitas conversions isolated
* Address the comments
---------
Co-authored-by: dan <danimal197@gmail.com>
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:
llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009
Changes made:
- Rename `getSuccessorEntryOperands` with `getEntrySuccessorOperands`
and remove `operands` from
`getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` `const` (https://reviews.llvm.org/D157601)
* RecomposeComplexOps: Remove dead slice op
* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors
* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none
* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max
* More tests for aten.split.Tensor
-- In Python we have the concept of negative dimension indexing.
-- We want to normalize such dimensions to be positive and within the
expected range instead.
-- This commit takes care of a few remaining Ops and their
lowerings by applying `toPositiveDim` and `isValidDim` to the
extracted integer `dim` value.
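A small PyTorch example of the normalization being applied (a negative `dim`
is rewritten as `dim + rank` and then range-checked):
```
import torch

x = torch.randn(2, 3, 4)
# dim=-1 refers to the last dimension; normalization rewrites it as
# dim + rank (= 2 here) and then checks the result lies in [0, rank).
print(torch.sum(x, dim=-1).shape)  # torch.Size([2, 3])
print(torch.sum(x, dim=2).shape)   # torch.Size([2, 3])
```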
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
Currently, the op `torch.tensor_static_info_cast` will not get
canonicalized away if the result type has any shape or dtype
information. This is because `isValidSubtype` only returns true when
the tensor types being compared are exactly the same or the supertype
has no shape and dtype information. Being unable to canonicalize away
the `torch.tensor_static_info_cast` gets in the way of further
optimizations, such as shape propagation.
This commit improves `isValidSubtype` by adding logic that compares
the shapes and dtypes of the two tensor types to determine if one type
is indeed a valid subtype of the other.
Fixes https://github.com/llvm/torch-mlir/issues/1926
The current implementation of `getScalarValue` does not check that the
input to a `ValueTensorLiteralOp` is an i64 before extracting the
value, and it does not check that the result type of the
`PrimNumToTensorScalarOp` is also an i64. This leads to crashes or
invalid IR being generated when the `input` is something other than an i64
tensor or `!torch.int`.
This commit addresses those issues. In addition, the function
`getScalarValue` is renamed to `getScalarIntValue` to make it clear
that it *only* extracts scalar integers.
Rename BlockAndValueMapping to IRMapping
Moved PrimTupleConstructOp type validation to its own verifier as the
tablegen version does not work for a combination of variadic input and
non-variadic output.
Summary of changes:
- LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
(although not all) places
(https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
This patch replaces the affected instances of `llvm::Optional` with
`std::optional`.
- In the usages of llvm::Optional that remain, llvm::Optional::value()
is deprecated, so this patch replaces them with a dereference.
Summary of changes:
- Replace `llvm::None` with `std::nullopt`, since the former is deprecated
(https://reviews.llvm.org/D139763)
- Use setter for symbol visibility instead of passing string attribute when
creating FuncOp
* [custom op] Generalize shape library logic to work with dtypes
This commit generalizes the shape library logic, so that dtype rules
for ops can also be expressed using the same mechanism. In other
words, each op can now have a shape function and a dtype function
specified in Python that is imported during lowering to calculate the
shapes and dtypes throughout a program. For more information about how
to specify a dtype function, see the updated
`docs/adding_a_shape_and_dtype_function.md`.
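A schematic sketch of the idea (hypothetical function names, not the exact
signatures used by the shape library):
```
from typing import List

# Each op gets a shape function mapping operand shapes to the result shape and,
# with this change, a dtype function mapping operand dtypes to the result dtype.
def tanh_shape(self: List[int]) -> List[int]:
    # elementwise op: the result shape equals the input shape
    return self

def tanh_dtype(self_dtype: int) -> int:
    # elementwise float op: the result dtype equals the input dtype
    return self_dtype

print(tanh_shape([2, 3]), tanh_dtype(6))
```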
For those not familiar with how the shape library works, the file
`docs/calculations_lib.md` provides an overview.
Currently `getTensorRank` returns -1 if it is unable to get the rank
of the tensor. However, not every use in the codebase was checking the
return value, and in some cases the return value was cast to
unsigned, leading to infinite loops when an unranked tensor reached
a decomposition.
This commit changes the return of `getTensorRank` to
`Optional<unsigned>` to make it clear to the user that the function
can fail.
This commit also changes a couple of for loops that iterate a vector
in reverse order, and that could potentially become infinite loops, into
range-based for loops.
A circular dependency was introduced in e7edcc62fd.
Specifically, the `makeShapeLLVMCompatible` and `makeShapeTorchCompatible` utilities were being called from `lib/Dialect/Torch/IR/TorchTypes.cpp` and `lib/Dialect/Torch/IR/TorchOps.cpp` defined under the `:TorchMLIRTorchDialect` bazel target, leading it to take a dependency on `:TorchMLIRConversionUtils` which already depends on `:TorchMLIRTorchDialect`, hence creating a circular dependency.
This commit resolves the same by moving said utilities from `lib/Conversion/Utils/Utils.cpp` to `lib/Dialect/Torch/Utils/Utils.cpp`. Please LMK if there's a better way to fix this and I will update the code.
This commit also adds the required targets to support building the new conversions from Torch to ML Program dialect that was introduced in f416953600.
Bazel build GHA triggered manually to verify: https://github.com/sjain-stanford/torch-mlir/actions/runs/3645944517
- Support for non-prefixed accessors has been removed. See:
https://reviews.llvm.org/D136727
- Rename `operands` to `methodOperands` in `prim.CallMethod` since the
name `operands` overlaps with a builtin method name. See:
https://reviews.llvm.org/D136727
- Add passes in refbackend to lower memref.subview. See:
https://reviews.llvm.org/D136377
- Replace `CopyToValueTensorOps` first in `RewriteViewLikeSubgraph` in
maximize-value-semantics.
The current implementation of the `RewriteViewLikeSubgraph` pass in
maximize-value-semantics creates temporarily invalid IR. In
particular, given a forward slice starting from a
`CopyToNonValueTensorOp` and ending in `CopyToValueTensorOp`s, the
pass first replaces all uses of the `CopyToNonValueTensorOp` with
its operand, which results in all the `CopyToValueTensorOp` users
having an operand of type `!torch.vtensor`, which is invalid.
The correct way to do things is to first replace all the
`CopyToValueTensorOp`s with their operand, and then replace all uses
of the `CopyToNonValueTensorOp` with its operand.
This only started failing now because the generated accessor
`getOperand` for the `CopyToValueTensorOp` now returns a
`TypedValue<NonValueTensorType>`, which has an assert checking that
the value returned is of the expected type.
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
with `isMemoryEffectFree`.
- Fix handling of dynamic dims, since the
`kDynamicSize` value changed from `-1` to
`std::numeric_limits<int64_t>::min()` in llvm
- `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
utilities convert shapes in order to remain consistent
with the Torch and MLIR semantics (see the sketch after this list).
- Update tags
llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5
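A rough Python sketch of what these utilities do, based on the sentinel
values described above (the real implementations are C++):
```
# Torch uses -1 as the unknown-dimension sentinel, while MLIR's kDynamicSize is
# now std::numeric_limits<int64_t>::min(); shapes are translated at the boundary.
TORCH_UNKNOWN_SIZE = -1
MLIR_DYNAMIC_SIZE = -(2**63)

def make_shape_llvm_compatible(shape):
    return [MLIR_DYNAMIC_SIZE if d == TORCH_UNKNOWN_SIZE else d for d in shape]

def make_shape_torch_compatible(shape):
    return [TORCH_UNKNOWN_SIZE if d == MLIR_DYNAMIC_SIZE else d for d in shape]

print(make_shape_llvm_compatible([2, -1, 4]))   # [2, -9223372036854775808, 4]
```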
Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
This commit renames the patterns used to match on lists of constant
values to `m_TorchListOfConstant{valueType}s`. This is needed to avoid
ambiguity for when `valueType` has `Optional` in it. In particular, it
makes it clear whether the values in the list are optional or the list
itself is optional.
This commit adds lowering of `aten.div.int` and `aten.bitwise_or.Tensor`
ops. Both these ops are required in order to support the bloom_560m model.
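The semantics of the two ops, illustrated at the PyTorch/Python level:
```
import torch

# aten.div.int divides two ints and produces a float (Python's true division),
# while aten.bitwise_or.Tensor is the elementwise bitwise or of two tensors.
print(7 / 2)                                                          # 3.5
print(torch.bitwise_or(torch.tensor([1, 4]), torch.tensor([2, 4])))   # tensor([3, 4])
```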
Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
Strengthen the shape inference for aten.arange-like ops by:
1. registering the aten.sub and aten.ceil.Scalar ops and designing folders for them.
2. registering a new constant-like op, Torch::ConstantNumberOp, and designing a canonicalizer for it.
This PR adds an `AllowedInModuleInitializer` trait to keep track of ops that are permitted in the module initializer. We have a handful of such ops that are produced by the IValue importer, and so this change avoids maintaining a list of ops in `TorchOps.cpp` that could lead to spurious merge conflicts, and helps us integrate torch-mlir into our downstream compiler better. Please let me know if you'd prefer a better name for the trait itself. Feedback is welcome!
The Torch dialect has an include of `mlir/Dialect/Func/IR/FuncOps.h` and
should therefore have a CMake dependency on the MLIRFuncDialect.
Otherwise, the build can fail, since it may happen that
`mlir/Dialect/Func/IR/FuncOps.h.inc` has not been generated yet.
Summary of changes:
- Switch to C++17 (similar to https://reviews.llvm.org/D131348)
- Update MHLO to build with LLVM commit hash 061e0189
- Replace deprecated `hasValue()` and `getValue()` with `has_value()`
and `value()` respectively (https://reviews.llvm.org/D131349)
- Use `TypedAttr` (https://reviews.llvm.org/D130092)
- Use updated assembly format of `mhlo.compare` op (commit
d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)