* Support brevitas custom op (#2320)
* f16 change for brevitas
* Adapt the change of brevitas quant custom op name
* Add unit tests
* Make brevitas conversions isolated
* Address the comments
---------
Co-authored-by: dan <danimal197@gmail.com>
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:
llvm-project: a3f2751f782f3cdc6ba4790488ec20163a40ac37
mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009
Changes made:
- Rename `getSuccessorEntryOperands` to `getEntrySuccessorOperands`
and remove `operands` from
`getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` `const` (https://reviews.llvm.org/D157601)
* [MLIR][TORCH] Fix aten.cumsum lowering for int32 input (#2351)
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op (#2340)
[Stablehlo] Add converter to stablehlo for aten.(Int,Float,Bool).Tensor op and configure crashing e2e sets for stablehlo backend.
update PyTorch version to 2.1.0.dev20230729 (#2354)
- torch version: 2.1.0.dev20230729
- torch commit hash: b638df0afb83572724032c824c64e481bb4499a0
- torchvision version: 0.16.0.dev20230729
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
update PyTorch version to 2.1.0.dev20230730 (#2356)
- torch version: 2.1.0.dev20230730
- torch commit hash: 0ff243ff350268cc98fe03fa6364375ee2824742
- torchvision version: 0.16.0.dev20230730
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
update PyTorch version to 2.1.0.dev20230731 (#2359)
- torch version: 2.1.0.dev20230731
- torch commit hash: 6298ac688f8caafe30d71ff2ea2e20fbb32065c7
- torchvision version: 0.16.0.dev20230731
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
LTC->MLIR Debug Info support (#1922)
* LTC->MLIR Debug Info support
* SW-95317 Propagate Lazy->Jit->MLIR scope name.
* Enhance location information based on op names
Currently, the location information attached to the ops just considers
the filename, line number, and column number. Attaching the operation name
would help identify the type of computation by just looking at the
profile of execution.
* Update locations logic; updated debug-info.py test
* Use {scope}/{op_name} format to track names by default
---------
Co-authored-by: Gleb Kazantaev <gleb.kazantaev@cerebras.net>
Co-authored-by: Mark Browning <mark@cerebras.net>
Co-authored-by: Vimal Patel <vimal@polymagelabs.com>
build: update llvm tag to 41895843
Summary of changes:
- Update tags
llvm: 41895843b5915bb78e9d02aa711fa10f7174db43
mhlo: 4726d31f7025da66de0dea709bd56c462edb83c2
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
update PyTorch version to 2.1.0.dev20230802 (#2366)
- torch version: 2.1.0.dev20230802
- torch commit hash: c89b16917755c2abbef7b6420e340baf9ae8089e
- torchvision version: 0.16.0.dev20230802
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Change Python version from 3.10 to 3.11 in installation instructions (#2370)
Add CITATION file (#2371)
Add packaging as an install dependency (#2369)
Needed by `torch_mlir._version`. Resolves #2368.
[Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op (#2358)
* [Torch Dialect] emit aten.masked_scatter and aten.masked_scatter_ op
update PyTorch version to 2.1.0.dev20230803 (#2372)
- torch version: 2.1.0.dev20230803
- torch commit hash: f89c73be3a3e8274d025ac46a33a780853841c9e
- torchvision version: 0.16.0.dev20230803
Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
Prevent failed stable CI job from cancelling nightly jobs (#2373)
The CI jobs that use stable PyTorch are currently not required to pass
in order for a patch to get merged in `main`. This commit makes sure
that if a CI job for stable PyTorch fails, it does not cancel the
other required jobs.
[Torch Dialect] emit aten.tile op and decompose it into aten.repeat (#2355)
update and fix xfail_sets
add support for adaptive_pool_id
* update
---------
Co-authored-by: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
The implementation at this place was a remnant of the time when the pipeline was
run only once.
Rely instead on the backend verification, after optimizations have had an
opportunity to resolve some uncertainties (e.g. `!torch.optional`).
* RecomposeComplexOps: Remove dead slice op
* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors
* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none
* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max
* More tests for aten.split.Tensor
In PyTorch, the `NumberType` is equal to `Union[int, float,
complex]`. However, the abstract interpretation library was treating
the `NumberType` as `Union[int, float]`, resulting in type mismatches
when reifying certain dtype functions. This commit fixes the type
inconsistency by having the abstract interpretation functions take as
an input a `Union[int, float, complex]` for the ops that take
`!torch.number` inputs.
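A minimal sketch of what the corrected typing looks like on the dtype-function side (hypothetical function name; dtypes stand in as plain ints, as in the abstract interpretation library):

```python
from typing import Union

# PyTorch's NumberType corresponds to Union[int, float, complex]; treating it
# as Union[int, float] is what caused the reification type mismatches.
Number = Union[int, float, complex]

def example_scalar_op_dtype(self_dtype: int, other: Number) -> int:
    # Hypothetical dtype function for an op with a !torch.number operand.
    # The scalar only participates in promotion; this sketch simply passes the
    # tensor dtype through.
    return self_dtype

# A complex scalar is now a valid argument rather than a type mismatch.
example_scalar_op_dtype(6, 1 + 2j)  # 6 standing in for some dtype code
```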
* add support for mhlo
* Add Test for torch.ne
* fix torch.ne shape/add static test case
* add support for static torch.ne
---------
Co-authored-by: root <root@n31-177-039.byted.org>
The `copy_` op being replaced by `RecomposeSliceCopy_` operates on a
subset of the tensor being mutated, while the `index_put` op being
used to replace the `copy_` op operates on the entire tensor being
mutated. This means that the result type of the `index_put` should be
the type of the input to `index_put` and we need to make sure that
`copy_` does not have users before replacing to avoid type conflicts.
This commit also fixes the result type used for the
`AtenArangeStartStepOp`, and an off-by-1 error when creating the
indices vector.
Lastly, this commit also clamps the `end` value from the slice to the
size of the dimension.
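In PyTorch terms, the recomposition described above corresponds roughly to the equivalence below (a sketch for the 1-D, dim-0 case, not the pass's actual code):

```python
import torch

t = torch.zeros(8)
src = torch.ones(3)

# Mutating a slice in place...
a = t.clone()
a[2:5] = src                        # slice + copy_

# ...is equivalent to an index_put over the *whole* tensor, which is why the
# replacement op's result type must be the type of the full input tensor.
b = t.clone()
b.index_put_((torch.arange(2, 5),), src)

assert torch.equal(a, b)
```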
When `use_tracing=True` is used to import a model into Torch-MLIR,
several casts get inserted in the IR to bridge the untyped inputs and
outputs with the typed body of the computation. These casts create
extra aliases of tensors that cause the current analysis in
`maximize-value-semantics` to fail.
In particular, the `maximize-value-semantics` analysis assumes that the
only valid alias right after an overwrite is the overwritten
alias. So, if there is a use of a casted version of the overwritten
alias after the overwrite, the analysis fails.
This commit improves the analysis by identifying all cast-like aliases
of the overwritten alias and allowing such aliases to be used after an
overwrite.
Because this issue only arises when using tracing, it cannot be
currently tested e2e, so only lit test is added.
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.
All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`
These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow up patch.
Co-authored-by: Jiahao Li <liplus17@163.com>
The current decomposition for `aten.randn.generator` does not specify
the `dtype` argument of the empty tensors created to store the random
values. This leads to invalid IR when the output type of the `randn`
op is not the default PyTorch dtype.
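A PyTorch-level sketch of the fix: the intermediate buffer that the decomposition fills must be created with the op's result dtype rather than the default one.

```python
import torch

g = torch.Generator().manual_seed(0)

# Without an explicit dtype the buffer silently becomes float32, which does
# not match a randn.generator op whose result type is, say, f64 ...
buf_default = torch.empty(4)
assert buf_default.dtype == torch.float32

# ... so the decomposition must thread the requested dtype through.
buf = torch.empty(4, dtype=torch.float64)
out = buf.normal_(generator=g)
assert out.dtype == torch.float64
```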
-- In Python we have the concept of negative dimension indexing.
-- We would want to normalize such dimensions to be positive and within the
expected range instead.
-- This commit takes care of a few remaining ops and their
lowerings by applying `toPositiveDim` and `isValidDim` to the
extracted integer `dim` value.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
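The normalization and validity check described above amount to the following (a Python sketch of the same logic as `toPositiveDim`/`isValidDim`):

```python
def to_positive_dim(dim: int, rank: int) -> int:
    # Map a possibly-negative Python-style dimension index into [0, rank).
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    return 0 <= dim < rank

assert to_positive_dim(-1, 4) == 3
assert is_valid_dim(to_positive_dim(-1, 4), 4)
assert not is_valid_dim(to_positive_dim(-5, 4), 4)
```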
-- This commit adds e2e support for aten.sort op.
-- 1. Adds aten.sort op in torch dialect.
-- 2. Adds tm_tensor.sort op in TMTensor dialect.
-- 3. Adds lowering of aten.sort -> tm_tensor.sort.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
`TorchToTMTensor` depends on `TorchMLIRTorchUtils` for
`mlir::torch::torch_upstream::get_reduction_enum`.
`TorchMLIRTorchConversionPasses` depends on multiple libs for both tblgen'd
headers and definitions. Test with `ninja TorchMLIRTorchConversionPasses` from
a clean build.
-- This commit adds e2e support for aten.randint by decomposing it into
aten.randint.low with low=0.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
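At the PyTorch level, the decomposition above corresponds to making the implicit low bound explicit (sketch):

```python
import torch

# aten.randint(high, size) is the low=0 case of aten.randint.low(low, high, size).
a = torch.randint(10, (4,))       # two-argument form: high, size
b = torch.randint(0, 10, (4,))    # explicit low=0
assert a.shape == b.shape and a.dtype == b.dtype
assert bool(((b >= 0) & (b < 10)).all())
```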
This commit adds support for the following cases of the index_put op:
1.) where the index is a 2-d tensor.
2.) where indices is a list of tensors and `None` values, with exactly
two non-`None` tensors along consecutive dimensions.
This commit also adds a utility to compute the broadcast shape
given the two input tensors.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
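The broadcast-shape utility mentioned above follows the usual NumPy/PyTorch right-aligned broadcasting rule; a standalone Python sketch (hypothetical helper, not the actual C++ code):

```python
from typing import List

def broadcast_shape(a: List[int], b: List[int]) -> List[int]:
    # Align the shapes from the trailing dimension; each pair of sizes must be
    # equal or one of them must be 1.
    result = []
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        if da != db and da != 1 and db != 1:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        result.append(max(da, db))
    return list(reversed(result))

assert broadcast_shape([3, 1, 5], [4, 5]) == [3, 4, 5]
```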
This commit also adds the support for non-unit output padding in the
case of transposed convolution.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
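For reference, the per-dimension output size of a transposed convolution, including a possibly non-unit output padding, follows the standard PyTorch formula (sketch):

```python
def conv_transpose_out_size(in_size: int, stride: int, padding: int,
                            kernel: int, output_padding: int,
                            dilation: int = 1) -> int:
    # Standard PyTorch formula for ConvTranspose output size along one dim.
    return (in_size - 1) * stride - 2 * padding \
        + dilation * (kernel - 1) + output_padding + 1

# e.g. a 4-wide input, stride 2, kernel 3, output_padding 1 -> width 10
assert conv_transpose_out_size(4, 2, 0, 3, 1) == 10
```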
The ops `aten.convolution_overrideable` and
`aten.convolution_backward_overrideable` are currently not e2e tested
in Torch-MLIR. Moreover, there is no way to add e2e tests for them
because the ops cannot be called using the CPU backend (this also
prevents adding tested dtype functions for these ops). Since these two
ops are not expected to ever appear in PyTorch traces obtained through
standard means (https://github.com/pytorch/pytorch/issues/97481),
Torch-MLIR should not have to worry about them.
The `RecomposeComplexOps` pass currently does not have a TableGen
declaration and it is using the base class of `DecomposeComplexOps`,
which causes `--mlir-print-ir-after-all` to create wrong pass
labels. This commit fixes that as well as some minor typos in the name
of the pass.
To keep things simple in shape functions, `Scalar` inputs are
considered `float`s. This means that when inserting the shape
functions into the IR, we must cast any `!torch.number`s into `float`s
so that the operand type matches the expected type in the shape
function. This commit adds the cast from `Scalar` to `float`.
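A hypothetical sketch of that convention: the shape function models the `Scalar` operand as a `float`, so a `!torch.number` value has to be cast before the reified call so the argument types line up.

```python
from typing import List, Union

def example_add_scalar_shape(self_shape: List[int], other: float) -> List[int]:
    # In the shape library a Scalar (!torch.number) operand is modelled as
    # float; for an elementwise tensor-scalar op the scalar does not affect
    # the result shape.
    return self_shape

def reify_call(self_shape: List[int], other: Union[int, float]) -> List[int]:
    # Caller side: cast the number to float so it matches the shape
    # function's expected parameter type.
    return example_add_scalar_shape(self_shape, float(other))

assert reify_call([2, 3], 5) == [2, 3]
```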
There are several ops that have their shape function upstream and had
not been updated in Torch-MLIR to use the upstream version. This
commit updates those shape functions. In addition, TODOs have been
added for shape functions that should be upstream but are not.
The original design for the dtype functions outlined in
https://github.com/llvm/torch-mlir/issues/1462 was unable to properly
handle ops that take optional tensors as an input when the optional
tensor has a value of None. By the time the op gets imported into
torch-mlir, if an optional value is None, all information about the
original type is lost from the op type signature, preventing
torch-mlir from knowing if a value of None was from an optional tensor
or not, which was crucial in the original design since each tensor
argument must be turned into two separate arguments for the dtype
function.
This commit changes the interface to dtype functions such that each
tensor turns into a tuple of two ints, the first representing the rank
of the tensor and the second the dtype of the tensor. Since now there
is a one-to-one correspondence between the operands of an op and the
operands of its dtype function, there is no ambiguity about which
operand of the op corresponds with which operand of the dtype
function.
To test the implementation, this commit defines dtype function for
convolution op, which takes one optional tensor as an argument.
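A hypothetical sketch of the new calling convention: every tensor operand of the op becomes one `(rank, dtype)` tuple operand of the dtype function, and an optional tensor that is `None` simply arrives as `None` (torch dtype objects are used here only to keep the sketch self-contained; the real library passes dtype codes):

```python
from typing import Optional, Tuple

import torch

def conv_like_dtype(input_rank_dtype: Tuple[int, torch.dtype],
                    weight_rank_dtype: Tuple[int, torch.dtype],
                    bias_rank_dtype: Optional[Tuple[int, torch.dtype]]) -> torch.dtype:
    _, input_dtype = input_rank_dtype
    # The optional bias tensor arrives as a single (rank, dtype) tuple or
    # None, so no information is lost when the bias is absent.
    if bias_rank_dtype is not None:
        _, bias_dtype = bias_rank_dtype
        assert bias_dtype == input_dtype
    return input_dtype

assert conv_like_dtype((4, torch.float32), (4, torch.float32), None) == torch.float32
```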
* LowerToBackendContract: Explicitly error out on unimplemented operator
But only reject torch.operator when results are invalid.
Otherwise it might be a custom op that the backend supports.
This commit adds a check that `defaultDtype` exists in the RefineTypes
handling of `AtenSumOp` before accessing the method `isInteger`, which
crashes the program if `defaultDtype` is null.
The handling of `defaultDtype` is the same as the one used for the
`AtenSumDimIntListOp`.
Currently, the op `torch.tensor_static_info_cast` will not get
canonicalized away if the result type has any shape or dtype
information. This is because `isValidSubtype` only returns true when
the tensor types being compared are exactly the same or the supertype
has no shape and dtype information. Being unable to canonicalize away
the `torch.tensor_static_info_cast` gets in the way of further
optimizations, such as shape propagation.
This commit improves `isValidSubtype` by adding logic that compares
the shapes and dtypes of the two tensor types to determine whether one type
is indeed a valid subtype of the other.
Fixes https://github.com/llvm/torch-mlir/issues/1926
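A toy Python model of the improved check (hypothetical names; the real logic lives in the C++ type utilities): a type carrying more static shape/dtype information is a valid subtype of one carrying less, as long as the parts both types know agree.

```python
from typing import List, Optional

def is_valid_subtype(sub_shape: Optional[List[Optional[int]]], sub_dtype: Optional[str],
                     super_shape: Optional[List[Optional[int]]], super_dtype: Optional[str]) -> bool:
    # The supertype's known dtype must be matched by the subtype.
    if super_dtype is not None and super_dtype != sub_dtype:
        return False
    # Every statically known dimension of the supertype must agree.
    if super_shape is not None:
        if sub_shape is None or len(sub_shape) != len(super_shape):
            return False
        for sub_d, super_d in zip(sub_shape, super_shape):
            if super_d is not None and sub_d != super_d:
                return False
    return True

# [2, 3] x f32 is a valid subtype of [2, ?] x f32, so the cast can fold away.
assert is_valid_subtype([2, 3], "f32", [2, None], "f32")
```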
The current implementation of `getScalarValue` does not check that the
input to a `ValueTensorLiteralOp` is an i64 before extracting the
value, and it does not check that the result type of the
`PrimNumToTensorScalarOp` is also an i64. This leads to crashes or
invalid IR being generated when the `input` is something other than an i64
tensor or `!torch.int`.
This commit addresses those issues. In addition, the function
`getScalarValue` is renamed to `getScalarIntValue` to make it clear
that it *only* extracts scalar integers.
The data-flow analysis does not always propagate information to the
entire graph. This results in some lattice elements being
uninitialized. Currently the lattice elements are not checked to see
if they are uninitialized before rewriting the graph, potentially
resulting in invalid IR (see
https://github.com/llvm/torch-mlir/issues/1896).
This commit adds handling for uninitialized lattice elements.
Set PyTorch and TorchVision version to nightly release 2023-02-27.
This commit also adds the lowering for aten.add and aten.Float.Scalar op.
Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.
This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
Rename BlockAndValueMapping to IRMapping
Moved PrimTupleConstructOp type validation to its own verifier as the
tablegen version does not work for a combination of variadic input and
non-variadic output.
One of the potential values for a `torch_upstream::ScalarType` is
`Undefined`. This means that conversion of a `ScalarType` to another
type is a computation that can fail. To enforce handling of the
failure case, this commit makes the two helper functions that convert
`ScalarType`s into other types return `failure()` when the
`ScalarType` is `Undefined`.
Credit to @vivekkhandelwal1 for finding the necessary changes.
Summary of changes:
- Switch Tosa_IntArrayAttr[N], Tosa_IntArrayAttrUpto[N] to DenseI64ArrayAttr.
- Replace kNoIterationLimit with kNoLimit. (https://reviews.llvm.org/D140525)
- Add dependency on MhloPasses when MHLO is enabled
- Specify result type when using mhlo::DotOp
There are several decompositions that assume the operands of the op
have dtypes available; however, the only time dtypes are guaranteed to
be present is when the graph has reached the backend contract. In
general, every pass that happens before reaching the backend contract
should not assume dtypes are available and should use `hasDtype` to
check first.
This commit adds `hasDtype` checks to every decomposition that uses
dtypes.
This commit replaces the `tanh` dtype function, which was being used
to test the implementation of dtype functions in
a710237437, with a dtype function for
`expm1`. The dtype function for `expm1` is identical to the `tanh`
one, so the same level of testing is maintained.
Currently, there are ops getting dtype information from the
`RefineTypes` pass and ops getting dtype information from the
`TorchDtypeRefinementPipeline`. Since each pass can only propagate
dtype information for the ops it knows how to handle, some models with
many ops handled in both passes require the two dtype propagation
passes to execute many times, reaching the iteration limit set in the
`LowerToBackendContractPass`. To temporarily avoid this issue while
the migration to `TorchDtypeRefinementPipeline` is finished, this
commit switches `tanh` to `expm1`, since the latter is used a lot less
in large models.
This reverts commit eaab9be207, since it
is causing the post-merge CI tests to fail, causing subsequent PRs to be
blocked. Specifically, the tests
`ElementwiseAtenLogicalAndOpPromoteBroadcastModule_basic` and
`ElementwiseAtenLogicalXorOpPromoteBroadcastModule_basic` fail because
the oracle does not match the computed result. This patch reverts the
commit to make the post-merge builds green again.
-- The dtype of the result of `aten.embedding` should match that of
the `weight` operand (operand[0]) instead of being hardcoded to f32.
-- This commit provides a fix for the same.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
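The expected behavior, stated in PyTorch terms:

```python
import torch
import torch.nn.functional as F

weight = torch.randn(10, 4, dtype=torch.float64)
indices = torch.tensor([1, 3, 5])

# aten.embedding's result dtype follows the weight operand, not a fixed f32.
assert F.embedding(indices, weight).dtype == torch.float64
```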
Summary of changes:
- LLVM now includes <optional> instead of "llvm/ADT/Optional.h" in most
(although not all) places
(https://reviews.llvm.org/rG541ef3d61e9341cd38420c0dbca9250c4d0ea04c).
This patch replaces the affected instances of `llvm::Optional` with
`std::optional`.
- In the usages of llvm::Optional that remain, llvm::Optional::value()
is deprecated, so this patch replaces them with a dereference.