Commit Graph

3150 Commits (3ae2c11f42791c344d411dc4cd830d78ed9c2ae4)
 

Author SHA1 Message Date
penguin_wwy e4be197efd
[FxImporter] Fix transpose rank zero (#3382) 2024-05-30 14:31:18 +08:00
penguin_wwy 1f544c37d0
[NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
Xida Ren (Cedar) 23d2d66a59
Fix error when attempting to read elided onnx constants (#3398)
Co-authored-by: zjgarvey <zjgarvey@gmail.com>
2024-05-29 16:56:23 -07:00
penguin_wwy a5d3b546f8
[FxImporter] Fix embedding bag (#3387) 2024-05-29 14:46:21 +08:00
Yuanqiang Liu e0a5adb1db
[Torch] fix aten.linear's decomposition (#3391)
* support aten.linear with higher-rank inputs.
2024-05-27 15:49:50 +08:00
Yuanqiang Liu 05929f9171
enhance verbose option in e2e_testing (#3390)
so that `python3 e2e_testing/main.py -v` prints the intermediate IR.
2024-05-27 08:01:07 +08:00
Yuanqiang Liu 28aeb047c1
[Stablehlo] fix crashing on AtenEmbeddingBagSumExample_basic (#3389) 2024-05-26 12:34:56 +08:00
zjgarvey 27169dcda9
Replace some deprecated uses of cast (#3343)
Contributing towards #3299
2024-05-23 09:01:47 -07:00
Yuanqiang Liu 5bb1a65ec9
[Stablehlo] refactor reduction lowering and support aten.amin (#3383)
* implement detailed lowering template patterns
`ConvertAtenReduceAllDimsOp` and `ConvertAtenReduceKeepDimOp`
* support `aten.amin`'s lowering.
2024-05-23 20:40:20 +08:00
Gaurav Shukla 43f961eca4
[MLIR] Fix 64-bit product during aten.view lowering (#3378)
std::accumulate needs a 64-bit init value to perform 64-bit arithmetic on
a list of integers.
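A minimal standalone sketch of the pitfall, using a hypothetical shape whose element count exceeds 32 bits (this is not the actual lowering code):

```cpp
#include <cstdint>
#include <functional>
#include <numeric>
#include <vector>

int main() {
  // Hypothetical shape whose element count does not fit in 32 bits.
  std::vector<int> dims = {1024, 1024, 1024, 8};
  // 32-bit accumulation: the plain `1` init value makes the running product
  // an `int`, so the multiplication overflows.
  int overflowed =
      std::accumulate(dims.begin(), dims.end(), 1, std::multiplies<int>());
  // 64-bit accumulation: an int64_t init value makes std::accumulate carry
  // the product in 64 bits, as the fix above describes.
  int64_t product = std::accumulate(dims.begin(), dims.end(),
                                    static_cast<int64_t>(1),
                                    std::multiplies<int64_t>());
  (void)overflowed;
  (void)product;
  return 0;
}
```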

Signed-off-by: Gaurav Shukla <gaurav.shukla@amd.com>
2024-05-23 08:59:28 +05:30
penguin_wwy d924d0047f
[FxImporter] Fix primitive type in return (#3379) 2024-05-23 09:55:33 +08:00
Angel Zhang 2e194e13d6
[Torch] Fix bugs for `Torch::AtenOneHotOp` (#3350)
This PR fixes the bugs for `Torch::AtenOneHotOp` by:

1) Using `Torch::kUnknownSize` as the default value for `numClasses` in
   the pattern matching stage in `DecomposeAtenOneHotOp`
2) Adding `AtenIntScalarOp` to the patterns in `TorchToArith`
3) Handling both `int` and `float` types for `off` and `on` values in
`TorchOnnxToTorch` conversion

It also includes:

1) A new test in `TorchToArith/basic.mlir`, for `torch.aten.Int.Scalar`,
and
2) A new test in `decompose-complex-ops.mlir`, for `torch.aten.one_hot`

**Dependencies**

This PR is dependent on #3334.
2024-05-22 17:19:08 +00:00
Yuanqiang Liu f4bfe3f948
Bump llvm and stablehlo (#3377)
* bump llvm to 1e5f29af81a5f6fda308074f6345b9fba4faa71c
* bump stablehlo to c44d9af8d4879adccf1054cb61a53377ae5898cb
2024-05-22 23:28:45 +08:00
Xinyu Yang 4d7cdba4bf
[Torch] eliminate "getWithLeastStaticInformation" in DecomposeAtenTriuOp (#3330)
I am trying to eliminate `getWithLeastStaticInformation` in
DecomposeAtenTriuOp. Could you provide me with some suggestions?
@qingyunqu @zjgarvey 
See issue https://github.com/llvm/torch-mlir/issues/3312
2024-05-22 23:16:57 +08:00
penguin_wwy 972d47b586
[FxImporter] Fix constant bool tensor (#3375) 2024-05-22 22:59:01 +08:00
Angel Zhang 52be4bdc18
[ONNX] Fix bugs for the `onnx.OneHot` operator (#3334)
This commit fixes the bugs for the `onnx.OneHot` operator by:

1) Converting negative indices to non-negative indices
2) Handling both `int` and `float` types for `off` and `on` values
3) Using the correct result type

It also includes a new unit test.
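A minimal standalone sketch of the negative-index handling in point (1), following the ONNX OneHot convention that indices lie in [-depth, depth-1]; the helper name is hypothetical and this is not the actual conversion code:

```cpp
#include <cassert>
#include <cstdint>

// Shift a possibly-negative OneHot index into the [0, depth) range.
int64_t normalizeOneHotIndex(int64_t index, int64_t depth) {
  assert(depth > 0 && "depth must be positive");
  return index < 0 ? index + depth : index;
}

int main() {
  assert(normalizeOneHotIndex(-1, 10) == 9); // last class
  assert(normalizeOneHotIndex(3, 10) == 3);  // already non-negative
  return 0;
}
```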
2024-05-22 08:32:00 -04:00
Sambhav Jain 6e485574e5
[Pipeline] Use dedicated simplification pipeline for TorchDynamo frontend (#3376)
Discord Thread:
https://discord.com/channels/636084430946959380/1238330633328005243

## Context: 

[This](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/fx.py#L61)
was updated to support e2e tests for the TorchDynamo frontend in
Torch-MLIR, where we run FX decompositions and import the FX IR to
generate Torch dialect, followed by
`torch-function-to-torch-backend-pipeline`, skipping only the shape/type
refinement for now. However, we should be able to skip many of the torch
simplification passes, as depicted in the [frontend
roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/images/roadmap_frontend.png).

Based on IREE's TorchDynamo
[pipeline](https://github.com/iree-org/iree/blob/main/compiler/plugins/input/Torch/InputConversion/Passes.cpp#L29),
the only two passes we seem to require are: `ReduceOpVariantsPass` and
`DecomposeComplexOpsPass`. This is in line with our findings from initial
exploration as well.

This PR creates a dedicated frontend simplification pipeline for
TorchDynamo / FX Importer which calls only `ReduceOpVariantsPass` and
`DecomposeComplexOpsPass`. We rely on the e2e fx_importer tests to
ensure we're not regressing by removing many of the passes that were
historically needed for TorchScript.
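A hedged C++ sketch of such a dedicated pipeline; the builder name is illustrative and the pass-constructor arguments are assumptions, not the exact upstream registration:

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/PassManager.h"
#include "torch-mlir/Dialect/Torch/Transforms/Passes.h"

// Illustrative pipeline builder containing only the two passes named above.
void buildFxSimplifyPipelineSketch(mlir::OpPassManager &pm) {
  // Rewrite op variants (e.g. in-place, trailing-underscore ops) into
  // value-semantic forms.
  pm.addNestedPass<mlir::func::FuncOp>(
      mlir::torch::Torch::createReduceOpVariantsPass(/*extraLibrary=*/""));
  // Decompose complex ops into simpler primitives understood by backends.
  pm.addNestedPass<mlir::func::FuncOp>(
      mlir::torch::Torch::createDecomposeComplexOpsPass(/*legalOps=*/{}));
}
```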

One notable change here is that we do not call the
`LowerToBackendContractPass` anymore, which used to call
`TorchSimplificationPipeline` iteratively until VerifyBackendContract
was clean. Some of this was required for the shape/type refinement to
converge, which seems to be a non-issue for the Dynamo frontend. Do we
anticipate this (the iterative invocation of TorchSimplificationPipeline
followed by VerifyBackendContract) being worth retaining in the Dynamo
frontend pipeline? If so, I can make those changes; please let me know.
2024-05-22 05:23:18 -07:00
Aart Bik 560ca24771
[torch-mlir][sparse] replace xavier with ones initialization (#3374)
ensures stability of results between different setups
2024-05-21 17:12:55 -07:00
RattataKing fcf48872b3
[ONNX] Implement Softsign op (#3373) 2024-05-21 12:10:26 -07:00
penguin_wwy c2c1c2cfa4
[FxImporter] Fix failed e2e case (#3365) 2024-05-22 00:20:54 +08:00
Vivek Khandelwal b870729efe
[torch] Fix `onnx.MaxPool` lowering (#3133)
This commit fixes the onnx.MaxPool op lowering, which was lacking support
for the indices result.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-05-21 21:05:32 +05:30
Aart Bik c0e7d2667d
[torch-mlir][sparse] inference mode for sparse GCN test (#3369) 2024-05-20 19:52:16 -07:00
zjgarvey 297c270980
onnx.Resize and aten._interpolate : allow n spatial dims. (#3368)
The old lowering only had logic for 2d (i.e. images). This patch allows
interpolation for n spatial dims, which is required for some 3d vision
models such as

- onnx/models/pytorch-3dunet_vaiq_int8

which successfully compiles and runs with this patch.
2024-05-20 13:35:27 -07:00
lialan 99511cef82
Implement `onnx.Hardmax` lowering (#3342)
Co-authored-by: Ubuntu <xunli@wsno1.judsoscro3wupi0qm4bjlj5m3b.bx.internal.cloudapp.net>
Co-authored-by: Hasekawa-Takumi <bewater.private476@passmail.net>
2024-05-20 20:56:24 +05:30
Wu Yuan cc28d566ff
[Stablehlo] Support AtenTrilOp (#3359)
1. lower aten.tril to stablehlo, composed of iota, select, and so forth
2. add related e2e test cases
2024-05-20 15:49:24 +08:00
Yuanqiang Liu 8814d0ae64
[Torch] emit aten.dot and canonicalize it to aten.matmul (#3361)
* canonicalize `aten.dot` to `aten.matmul`
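A hedged sketch of what such a canonicalization pattern could look like, assuming the generated `AtenDotOp` accessors from the aten::dot(self, tensor) schema; this is illustrative, not the exact upstream pattern:

```cpp
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

// aten.dot on two 1-D tensors computes the same result as aten.matmul on
// those operands, so the dot op can be rewritten in place.
struct SketchCanonicalizeDotToMatmul
    : public mlir::OpRewritePattern<mlir::torch::Torch::AtenDotOp> {
  using OpRewritePattern::OpRewritePattern;
  mlir::LogicalResult
  matchAndRewrite(mlir::torch::Torch::AtenDotOp op,
                  mlir::PatternRewriter &rewriter) const override {
    rewriter.replaceOpWithNewOp<mlir::torch::Torch::AtenMatmulOp>(
        op, op.getType(), op.getSelf(), op.getTensor());
    return mlir::success();
  }
};
```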
2024-05-18 22:45:14 +08:00
Aart Bik e80f072ba4
[torch-mlir][sparse] example of a sparse graph convolution (#3363) 2024-05-17 15:43:50 -07:00
zjgarvey 6cba93b16e
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering (#3351)
Addresses [Shark-Turbine
#196](https://github.com/nod-ai/SHARK-TestSuite/issues/196)

Related tracker [Shark-Turbine
#566](https://github.com/nod-ai/SHARK-Turbine/issues/566)

Related onnx.Resize issues [Shark-Turbine
#616](https://github.com/nod-ai/SHARK-Turbine/issues/616)
2024-05-17 12:18:57 -07:00
Andrew Woloszyn 513d89c16d
Add support for the onnx.SequenceLength op. (#3362) 2024-05-17 12:17:43 -07:00
Xida Ren (Cedar) 2937753070
[Documentation] Show faster build command first in docs/development.md (#3355) 2024-05-17 18:59:51 +00:00
Andrew Woloszyn 72e38dcbbc
Add support for the onnx.SequenceConstruct op. (#3316) 2024-05-17 22:51:28 +05:30
Sambhav Jain 706efaf57c
[Bazel] Add SparseTensorDialect deps (#3357)
Required after https://github.com/llvm/torch-mlir/pull/3318 landed.

GHA:
https://github.com/sjain-stanford/torch-mlir/actions/runs/9120607050/job/25078271790
2024-05-16 21:44:46 -07:00
Suraj Sudhir cba91a9b96
[ONNX][TOSA] Adds ONNX to TOSA e2e tests (#3358)
- Refactors OnnxBackend to be generic and consume any Torch backend.

---------

Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-16 21:44:26 -07:00
Xinyu Yang 28193fd985
[Stablehlo] use i64 for index type (#3354) 2024-05-16 15:33:23 +08:00
Xinyu Yang 7faba75696
[Torch] Decompose AtenMaskedScatterOp (#3353)
Co-authored-by: Yuanqiang Liu <liuyuanqiang.yqliu@bytedance.com>
2024-05-16 15:27:25 +08:00
Xinyu Yang a9edefb3cf
[Torch] Fix AtenSliceTensorOp::fold (#3345) 2024-05-16 11:42:43 +08:00
penguin_wwy 405f884522
[stablehlo] verify stablehlo backend contract (#3338) 2024-05-16 11:03:43 +08:00
Suraj Sudhir 0ca88028cd
[FxImporter][TOSA] Enable FxImporter to TOSA e2e tests (#3349)
Signed-off-by: Suraj Sudhir <suraj.sudhir@arm.com>
2024-05-15 14:37:30 -07:00
Peiming Liu ccb772cd0f
[sparse] propagate sparsity properly when decomposing torch operations. (#3318) 2024-05-15 10:09:27 -07:00
Aaron St George ba32b9cee7
Don't fold `aten.clone` if result isn't same type as input (#3347)
Similar to https://github.com/llvm/torch-mlir/pull/2824, we were seeing
some assertion failures after the checks around folders were tightened
up in LLVM: https://github.com/llvm/llvm-project/pull/75887.
This PR essentially moves the logic that used to be applied at the LLVM
level into the folder, which seems to be the suggested fix.
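A hedged sketch of the kind of type-guarded folder described here; it is illustrative only, and the real folder presumably also checks other preconditions (such as the memory_format operand):

```cpp
// Only fold aten.clone away when doing so is a true no-op, i.e. the result
// type matches the input type exactly; otherwise the fold would change types,
// which the tightened LLVM folder checks now reject.
mlir::OpFoldResult AtenCloneOp::fold(FoldAdaptor adaptor) {
  if (getSelf().getType() != getResult().getType())
    return {};
  return getSelf();
}
```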
2024-05-16 00:07:45 +08:00
Yuanqiang Liu 5928f68e60
[Stablehlo] refactor amax, max, max.dim's lowering to stablehlo (#3348)
* do not decompose `aten.amax` on the `stablehlo` backend, because it can
be lowered to `stablehlo.reduce` directly.
* lower `aten.max.dim` to `stablehlo.reduce` applying max when
`AtenMaxDimOp.getIndices()` has no users. This is simpler.
2024-05-16 00:05:19 +08:00
Xinyu Yang 6b95dd461d
[Torch] Fix PrimNumToTensorScalarOp::fold (#3339)
During constant folding, a new constant op is created according to the
original op's result type.

See the code in TorchDialect.cpp.

```cpp
Operation *TorchDialect::materializeConstant(OpBuilder &builder,
                                             Attribute value, Type type,
                                             Location loc) {
  if (auto integerType = dyn_cast<Torch::IntType>(type))
    return builder.create<Torch::ConstantIntOp>(loc, cast<IntegerAttr>(value));

  if (auto floatType = dyn_cast<Torch::FloatType>(type))
    return builder.create<Torch::ConstantFloatOp>(loc, cast<FloatAttr>(value));

  if (auto numberType = dyn_cast<Torch::NumberType>(type)) {
    if (auto floatValue = dyn_cast<mlir::FloatAttr>(value)) {
      return builder.create<Torch::ConstantNumberOp>(loc, floatValue);
    } else if (auto intValue = dyn_cast<mlir::IntegerAttr>(value)) {
      return builder.create<Torch::ConstantNumberOp>(loc, intValue);
    }
  }

  if (isa<Torch::BoolType>(type)) {
    return builder.create<Torch::ConstantBoolOp>(loc, cast<IntegerAttr>(value));
  }

  if (isa<Torch::NoneType>(type))
    return builder.create<ConstantNoneOp>(loc);

  if (auto stringAttr = dyn_cast<StringAttr>(value))
    return builder.create<ConstantStrOp>(loc, stringAttr);

  if (auto elementsAttr = dyn_cast<ElementsAttr>(value)) {
    // Only !torch.vtensor can be constant folded. !torch.tensor has
    // non-trivial aliasing semantics which prevent deduplicating it.
    assert(isa<ValueTensorType>(type) && "should be a vtensor type!");
    return builder.create<ValueTensorLiteralOp>(loc, elementsAttr);
  }

  return nullptr;
}
```
So when the op has a tensor result type, it must be a `ValueTensorType`
due to the **assert** statement. However, many fold methods in
TorchOps.cpp only check for `BaseTensorType`.
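A hedged illustration of the guard this implies for such folders (not the actual patch): bail out unless the result is a value tensor, since `materializeConstant` above asserts `isa<ValueTensorType>` for tensor constants.

```cpp
// Inside a hypothetical fold method that wants to produce a tensor constant:
// require a !torch.vtensor result before returning an ElementsAttr, otherwise
// materializeConstant would hit the assert shown above.
if (!mlir::isa<mlir::torch::Torch::ValueTensorType>(getType()))
  return nullptr; // plain !torch.tensor result: do not constant-fold.
```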
2024-05-15 20:54:19 +08:00
Aart Bik 44fa6c3afd
[torch-mlir][sparse] sparse diagonal feature scaling test (#3344) 2024-05-14 12:13:54 -07:00
Peiming Liu 8e74d64e8f
[sparse] convert to sparse before any use in sparse test. (#3337) 2024-05-14 09:10:36 -07:00
zjgarvey 73b3065a94
[ONNX] Reduces Transpose Opset Version (#3302)
As mentioned in issue #3290, the difference between onnx.Transpose in
opset versions 1 and 13 is minimal, so both can be supported with the
same conversion pattern.
2024-05-14 21:38:56 +05:30
NeverRaR 26b78285bf
[MLIR][ONNX] Add OnnxToTorch support for GlobalMaxPool Op (#3232)
https://github.com/nod-ai/SHARK-Turbine/issues/658

---------

Co-authored-by: root <root@i32b01216.sqa.eu95>
2024-05-14 15:55:39 +05:30
Archana Ramalingam 20f312853c
[MLIR][ONNX] Add OnnxToTorch support for ReduceLogSumExp Op (#3201)
This commit adds OnnxToTorch support for the ReduceLogSumExp op
2024-05-14 09:54:26 +05:30
Aart Bik 667dfcbc5a
[torch-mlir][sparse] enable test on ReLu (#3336)
The downstream MLIR sparsifier now has some (rudimentary) support for ReLU,
so this test can be enabled with correct end-to-end behavior.

Also see discussion at:

https://discourse.llvm.org/t/min-max-abs-relu-recognition-starter-project/78918
2024-05-13 15:34:26 -07:00
Aart Bik 08355be5d0
[torch-mlir] bump to llvm@70e227a404e51f9248c7ad5d79953805b2afacb4 (#3335) 2024-05-13 14:52:25 -07:00
zjgarvey 911e723581
Expands Q Commuting Ops (#3332)
After running the model tests in SHARK-TestSuite, I noticed a few model
failures due to half-fusion.

Notably, RDN_pytorch_vaiq_int8 had a depth=5 convolution chain with
multiple `AtenViewOp`s.
2024-05-13 11:01:53 -07:00