Commit Graph

2788 Commits (b2185195e8fecb3568d53a97a502fc77a22a6daf)
 

Author SHA1 Message Date
Rob Suderman 733cace1df
[onnx] Fix `onnx.split` by directly handling slicing (#3194)
Previous implementation erroneously mixed up num_outputs with
slice_size. The new version correctly computes the slice size and directly
performs the slicing rather than leveraging `aten.split.tensor`. Because
`onnx` supports only a fixed number of splits, the size computation is
easier to perform when lowering directly to `aten` than when deferring to
`aten.split.tensor`.
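
A rough sketch of the per-output slice computation (a hypothetical helper, assuming the ONNX convention that the last chunk may be smaller; the actual lowering emits the slice ops directly):
```python
import math

# Hypothetical sketch: per-output slice bounds for onnx.Split given num_outputs.
# Assumes the ONNX convention that the last chunk may be smaller when the
# dimension does not divide evenly.
def split_slices(dim_size, num_outputs):
    slice_size = math.ceil(dim_size / num_outputs)
    slices = []
    for i in range(num_outputs):
        start = i * slice_size
        end = min(start + slice_size, dim_size)
        slices.append((start, end))
    return slices

# e.g. split_slices(7, 3) -> [(0, 3), (3, 6), (6, 7)]
```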

---------

Co-authored-by: Robert Suderman <rsuderman@Roberts-MacBook-Pro.local>
2024-04-21 12:31:56 -04:00
penguin_wwy b6b01602d3
[stablehlo] add aten.fmod.Tensor op conversion support (#3198) 2024-04-21 08:39:36 +08:00
penguin_wwy ea0ecb67be
[stablehlo] add aten.remainder.Tensor op conversion support (#3197) 2024-04-21 00:03:37 +08:00
Rob Suderman b01245c0e8
[onnx] Fix `onnx.Not` for non-bool inputs (#3187)
Need to perform a bool cast to support `onnx.Not` on non-bool inputs.
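
Roughly the equivalent behavior in PyTorch terms (illustrative sketch only; the lowering inserts the cast in the Torch dialect):
```python
import torch

# Illustrative: onnx.Not on a non-bool input behaves like a cast to bool
# followed by logical negation.
def onnx_not(x: torch.Tensor) -> torch.Tensor:
    return torch.logical_not(x.to(torch.bool))

print(onnx_not(torch.tensor([0, 1, 2])))  # tensor([ True, False, False])
```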
2024-04-19 11:32:24 -07:00
Xinyu Yang 790a697245
[Torch] Add folder for AtenIntOp, AtenFloatOp (#3189)
See unit test below:
```
// CHECK-LABEL:   func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %float1.000000e01 = torch.constant.float 1.000000e+01
  %67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
  return %67 : !torch.vtensor<[],f32>
}

// CHECK-LABEL:   func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
  %none = torch.constant.none
  %false = torch.constant.bool false 
  %int45 = torch.constant.int 45
  %67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
  return %67 : !torch.vtensor<[],si32>
}

```
2024-04-19 22:17:06 +08:00
penguin_wwy 5a98c72c7f
[StableHLO] Fix aten.clamp.Tensor in FxImporter2StableHLO (#3190)
The FX importer passes static shapes to the Torch dialect, so the
conversion needs to generate StableHLO that satisfies shape inference.
2024-04-19 17:08:29 +08:00
penguin_wwy 0a6073414d
[FxImporter] Add fx importer to stablehlo e2e test config (#3183) 2024-04-18 21:29:17 -07:00
penguin_wwy 6c4f7deebb
[stablehlo] add aten.clamp.Tensor op conversion support (#3185) 2024-04-19 10:55:27 +08:00
Rob Suderman be742a937d
[onnx] Update the failure triage for onnx (#3186)
Reclassifying the sources of failure for various bugs so we can
reprioritize the most common failures.
2024-04-18 14:58:13 -07:00
Rob Suderman 0e77de996a
[torch] Add support for `torch.view` with dynamic shapes (#3164)
We can map to `tensor.reshape` to handle multiple dynamic output
shapes. Later we can perform a more complex analysis for identifying
expand/collapse cases from the `tensor.reshape`.

Initially we planned to handle this identification at the `torch` level;
however, it will be easier to handle once converted to the core
mlir dialects.
2024-04-18 11:47:19 -07:00
Rob Suderman 4c21e20caa
[torch] Support rank-0 index for torch index select (#3182)
Need to perform an expand in the case where the index is rank-0.
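
A shape-level sketch of the handling (illustrative, not the actual lowering code):
```python
import torch

# Illustrative: when the index is rank-0, expand it to rank-1 so the usual
# index_select lowering applies, then drop the added unit dimension.
x = torch.arange(12).reshape(3, 4)
idx = torch.tensor(2)                      # rank-0 index
expanded = idx.reshape(1)                  # rank-1: shape [1]
out = torch.index_select(x, 0, expanded)   # shape [1, 4]
result = out.squeeze(0)                    # shape [4]
```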
2024-04-18 11:32:31 -07:00
Vivek Khandelwal 6e5630d696
build: manually update PyTorch version (#3170)
Set PyTorch and TorchVision version to nightly release 2024-04-16.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-18 12:55:10 +05:30
Xinyu Yang d4313eed4a
[Torch] Add decomposition of RepeatInterleaveSelfInt Op (#3075)
Decompose RepeatInterleaveSelfInt with the following ops:
```python
import torch

def my_repeat_interleave(input, repeats, dim=None):
    if dim is None:
        # Flatten the input and then repeat
        return input.flatten().unsqueeze(-1).tile((1, repeats)).flatten()
    else:
        # Calculate the shape after repeat
        expanded_shape = list(input.shape)
        expanded_shape[dim] *= repeats
        # Repeat the tensor along the specified dimension
        repeat_shape = [1] * (input.dim() + 1)
        repeat_shape[dim + 1] = repeats
        input = input.unsqueeze(-1)

        # Tile and then reshape
        tiled = torch.tile(input, repeat_shape)
        # Rearrange and reshape
        repeated = tiled.reshape(*expanded_shape)
    return repeated

```

The stablehlo and linalg tests pass. When testing onnx, something strange
happens: in torch-mlir's CI **torch_nightly** configuration and in my own
environment (torch==2.4.0.dev20240318+cpu), the test **passes**, but in
torch-mlir's CI **torch_stable** it **fails**.
The test case is `RepeatInterleaveSelfIntNoDimModule_basic`; the result
shape should be [120].
```python
class RepeatInterleaveSelfIntNoDimModule(torch.nn.Module):

    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([3, 4, 5], torch.float32, True),
    ])
    def forward(self, x):
        return x.repeat_interleave(2)


@register_test_case(module_factory=lambda: RepeatInterleaveSelfIntNoDimModule())
def RepeatInterleaveSelfIntNoDimModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3, 4, 5))
```
The error log is as follows:
```
  Unexpected outcome summary: (onnx)
  
  ****** Failed tests - 1 tests
      FAIL - "RepeatInterleaveSelfIntNoDimModule_basic"
          @ trace item #0 - call to "forward"
          @ output of call to "forward"
          ERROR: shape (torch.Size([6, 4, 5])) is not equal to golden shape (torch.Size([120]))
```

@rsuderman 
Would you please help me check what's wrong with my PR? Thanks a lot.
2024-04-18 06:27:51 +08:00
Aart Bik 491f4820f5
[torch-mlir][sparse] pre-pend named buffers to parameter list (#3178)
weights, biases, and other model parameters appear as a data structure
separate from the traced graph, but are needed when running the
MLIR-compiled code; this PR implements that extended functionality
2024-04-17 14:44:05 -07:00
Andreas Falkenberg b66eabd492
[onnx][torch][linalg] Implementing align-corner modes for gridsampler (#3171)
Implements the align-corner modes, which select what the corners mean:
either the centers of the corner points or the edges of the edge points.
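
For reference, the usual unnormalization of a grid coordinate under the two modes (a sketch of the standard convention; names are illustrative):
```python
# Sketch of the standard unnormalization of a grid coordinate x in [-1, 1]
# to a pixel coordinate, under the two align-corner modes.
def unnormalize(x, size, align_corners):
    if align_corners:
        # -1 and +1 refer to the centers of the corner pixels
        return (x + 1.0) / 2.0 * (size - 1)
    else:
        # -1 and +1 refer to the outer edges of the corner pixels
        return ((x + 1.0) * size - 1.0) / 2.0

print(unnormalize(1.0, 4, True))    # 3.0 (center of the last pixel)
print(unnormalize(1.0, 4, False))   # 3.5 (outer edge of the last pixel)
```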

---------

Co-authored-by: Rob Suderman <rob.suderman@gmail.com>
2024-04-17 13:38:19 -07:00
penguin_wwy 3aa81f78d8
[FxImporter] Replace local_scalar_dense in fx_importer (#3180) 2024-04-17 22:45:47 +08:00
Xinyu Yang d2ba956e69
[Torch] Support Aten_CastLongOp. (#3160)
By canonicalizing Aten_CastLongOp into AtenToDtypeOp
2024-04-17 21:58:32 +08:00
penguin_wwy e4b11a0ab4
[FxImporter] Fix fx importer test config and clean xfail set (#3176) 2024-04-16 22:36:07 -07:00
penguin_wwy 398aeeec87
[FxImporter] Fix kwarg operands in fx importer (#3166)
Remove the `kwarg_only` limitation, for example
```
torch.add(x, 3.0, alpha=2)
```
compiled to
```
%0 = torch.aten.add.Scalar %arg0, %float3.000000e00, %int1
```
fix to
```
%0 = torch.aten.add.Scalar %arg0, %float3.000000e00, %int2
```
2024-04-16 13:17:05 -07:00
zjgarvey 7a1ad0d7c0
[TorchToLinalg] Adds Support for Remaining Quantized Matmul Cases (#3167)
The new cases added for quantized matmuls are:

1. vec-vec
2. vec-mat
3. mat-vec

each of which is now lowered to expand(s), quantized_matmul, and
collapse.
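
A shape-level sketch of the vec-mat case (illustrative; the real lowering wraps `linalg.quantized_matmul` in expand/collapse ops, with a plain matmul standing in here):
```python
import torch

# Shape-level illustration of the vec-mat case: expand the vector to a
# one-row matrix, run the (quantized) matmul, then collapse the unit dim.
vec = torch.randn(7)          # shape [7]
mat = torch.randn(7, 5)       # shape [7, 5]

expanded = vec.unsqueeze(0)   # expand: [7] -> [1, 7]
prod = expanded @ mat         # matmul: [1, 7] x [7, 5] -> [1, 5]
result = prod.squeeze(0)      # collapse: [1, 5] -> [5]

assert result.shape == torch.Size([5])
```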
2024-04-16 09:28:28 -07:00
Vinayak Dev a0232e9ebd
[MLIR][TORCH] Add OnnxToTorch lowering for ReduceL1 Op (#3146)
Adds OnnxToTorch Lowering for the ReduceL1 op.
2024-04-16 12:24:46 +05:30
penguin_wwy af5509c5d9
[FxImporter] Type conversion to resolve the mismatch between Py type and schema type (#3163) 2024-04-15 23:14:19 -07:00
penguin_wwy 10b6062d41
[CI] Enable the tests for fx_importer in the CI (#3168)
Replace the torchdynamo e2e with the fx_importer e2e
2024-04-15 21:20:23 -07:00
Xinyu Yang ae4724763a
[Stablehlo] Enhance broadcast pattern in matmul Ops (#3161)
To pass test "MatmulStaticBroadcast_basic" in stablehlo:
```python
class MatmulStaticBroadcast(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([4, 1, 6, 7], torch.float32, True),
        ([8, 1, 5, 7, 6], torch.float32, True),
    ])
    def forward(self, lhs, rhs):
        return torch.matmul(lhs, rhs)


@register_test_case(module_factory=lambda: MatmulStaticBroadcast())
def MatmulStaticBroadcast_basic(module, tu: TestUtils):
    module.forward(tu.rand(4, 1, 6, 7), tu.rand(8, 1, 5, 7, 6))
```
2024-04-16 10:10:36 +08:00
zjgarvey 5e564b5864
Adds Some Quantization Support for AtenMatmulOp (#3147)
1. onnx.MatMulInteger now converts to aten.matmul instead of aten.mm
2. aten.matmul, for ranks >=2, now allows quantized inputs and will
lower to linalg::quantized_matmul or linalg::quantized_batch_matmul.
3. added AtenMatmulOp to the FuseQuantizeOps rewrite patterns
QuantizeOperands, QuantizeTransposedOperands, and QuantizeAccumulator
4. added several tests, including some to test AtenMmOp with varying
quantization signed-ness.
5. a quantized matmul mat-vec test is added to verify the failure to
lower to linalg; cleaned up out-of-date code related to common
torch-mlir lowering xfails.
6. in debugging a real model with quantized matmuls, I found a bug in
the scalarize-shapes pass which resulted from the aten.full op folder
returning an incompatible result type. This is fixed by the small change
here to
[lib/Dialect/Torch/IR/TorchOps.cpp](https://github.com/llvm/torch-mlir/compare/main...zjgarvey:torch-mlir:MatMulIntegerFix?expand=1#diff-dc8ed165c207918e606490eee3984b1ad51d7034e6aac36fc046bf47f6f03f4f).
2024-04-15 16:06:47 -07:00
IanWood1 5708ee7ec9
Added 2 Ops: Floor divide scalar and Floor divide scalar mode (#3156)
- Added linalg lowering for `AtenFloorDivideScalarOp`
  - Needed `AtenDivScalarModeOp` for the decomp.
- Added linalg lowering for `AtenDivScalarModeOp`
- Moved the linalg payload logic to `createDivModePayload()` since the logic
was nearly identical for both `AtenDivScalarModeOp` and
`AtenDivTensorModeOp`; it is now just a template function.
- Added `AtenDivScalarModeOp` lowering for stablehlo
 

PyTorch's
[`torch.floor_divide()`](https://pytorch.org/docs/stable/generated/torch.floor_divide.html)
in a previous version (for a reason unknown to me) performed
truncation instead of flooring. The already-implemented
`AtenFloorDivideTensorOp` predates this change, and the discrepancy
wasn't caught because our test cases only exercised positive floor
division. I changed the lowering to floor and added a few test cases; see the example below.
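
A small example of the difference, which only shows up with mixed signs (why positive-only tests missed it):
```python
import torch

a = torch.tensor([7.0, -7.0])
b = torch.tensor([2.0, 2.0])

print(torch.floor_divide(a, b))                 # tensor([ 3., -4.])  floor
print(torch.div(a, b, rounding_mode="trunc"))   # tensor([ 3., -3.])  truncation
# Both agree for positive operands, so positive-only tests could not catch it.
```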
2024-04-15 13:45:10 -07:00
jinchen 83cba8c696
[onnx] Support for `onnx.EyeLike` via torch lowering (#2994) 2024-04-15 09:23:26 -07:00
penguin_wwy 9ac90ec7b2
Refactor the parameters and usage instructions of the setup script. (#3162)
As this
issue comment (https://github.com/llvm/torch-mlir/pull/3021#issuecomment-2031248199)
suggests, `setup.py` should only be used for building Python packages,
so:
* disable the develop command
* refactor the environment variable parameters
* add more documentation for the usage and environment variables of setup.py
2024-04-14 10:40:25 -07:00
penguin_wwy 45eaeaaf36
[FxImporter] Add FxImporter config in e2e-test (#3151) 2024-04-12 16:07:56 -07:00
jinchen 859f5d280f
Generalize getting index for onnx compress op (#3150) 2024-04-12 15:18:22 -07:00
Xida Ren (Cedar) ed163f49e8
Delete .github/workflows/buildAndTest.yml to solve CI error messages on every push (#3155)

fixes #3144
2024-04-12 12:11:48 -07:00
zjgarvey 197ef4224b
Avoid Type Mismatch in Slice Folder (#3154)
Fixes issue #3153
2024-04-12 11:43:45 -07:00
Aart Bik 307f49f566
[torch-mlir][sparse] support sparse tensor output (#3152)
Sparse inputs and outputs are now fully supported! They always consist
of their constituent buffers, passed as numpy arrays. Sparse on!
2024-04-12 09:56:32 -07:00
Aart Bik a7302a68a7
[torch-mlir] Bump LLVM to llvm/llvm-project@a952c12 (#3149) 2024-04-11 23:15:08 -07:00
Xinan Jiang(姜曦楠) 71d90788d3
[MLIR][TORCH] Support parallel dimemsions expand/collapse (#3051)
This PR supports `aten.view` with a unique unknown dimension in both the
input shape and the output shape in the convert-torch-to-linalg pass,
which lowers `aten.view` to `tensor.collapse_shape` or `tensor.expand_shape`.

Below is an example
```
func.func @test_reshape(%arg0: !torch.vtensor<[1,?,50,16],f32>) -> !torch.vtensor<[1,?,16],f32> attributes {torch.assume_strict_symbolic_shapes, torch.onnx_meta.ir_version = 9 : si64, torch.onnx_meta.opset_version = 19 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
  %int1 = torch.constant.int 1
  %int-1 = torch.constant.int -1
  %int16 = torch.constant.int 16
  %0 = torch.prim.ListConstruct %int1, %int-1, %int16 : (!torch.int, !torch.int, !torch.int) -> !torch.list<int>
  %1 = torch.aten.view %arg0, %0 : !torch.vtensor<[1,?,50,16],f32>, !torch.list<int> -> !torch.vtensor<[1,?,16],f32>
  return %1 : !torch.vtensor<[1,?,16],f32>
}
```
2024-04-11 10:43:03 -07:00
Rob Suderman a1fe307a76
[torch] Support implicit batch for index_put (#3128)
If only a single value is scattered, there can be an implicit batch
dimension. This change adds a check for the implicit batch dimension when
reshaping the update tensor and includes an e2e test to verify
correctness.
2024-04-11 10:18:03 -07:00
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer the functional style, as the method style is deprecated:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
Xinyu Yang 308c45e61a
[Torch] Fix PrimListUnpackOp::getCanonicalizationPatterns (#3140)
Fix the case where PrimListUnpackOp's number of results is not equal to the
PrimList length.
See the following example:
```python
    def forward(self, x):
        if len(x.shape) == 5:
            b0, t, c0, h0, w0 = x.shape
            b, c, h, w = torch.mul(b0, t), c0, h0, w0
        else:
            b1, c1, h1, w1 = x.shape
            b, c, h, w = b1, c1, h1, w1
        res = torch.reshape(x, [b, c, h, w])
        return res
```
Without this fix, the following error message will occur:
```
/root/torch-mlir/externals/llvm-project/mlir/lib/IR/PatternMatch.cpp:118: virtual void mlir::RewriterBase::replaceOp(mlir::Operation *, mlir::ValueRange): Assertion `op->getNumResults() == newValues.size() && "incorrect # of replacement values"' failed.
```
2024-04-11 19:48:49 +08:00
Xinyu Yang 6524838bcb
[Torch] Add general AdaptiveAvgPool2dOp decompose support (#3111)
Previously, it could only handle the situations where outputsize == (1,
1) or outputsize == (input_H, input_W). Now it supports all situations
where input_H % output_H == 0 && input_W % output_W == 0.
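
A sketch of the equivalence the decomposition relies on when the output size divides the input size evenly (an illustrative check, not the pass itself):
```python
import torch
import torch.nn.functional as F

# When input_H % output_H == 0 and input_W % output_W == 0, adaptive average
# pooling matches avg_pool2d with kernel == stride == input // output.
x = torch.randn(1, 3, 12, 8)
out_h, out_w = 4, 2
kh, kw = x.shape[-2] // out_h, x.shape[-1] // out_w   # 3, 4

adaptive = F.adaptive_avg_pool2d(x, (out_h, out_w))
regular = F.avg_pool2d(x, kernel_size=(kh, kw), stride=(kh, kw))
assert torch.allclose(adaptive, regular)
```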
2024-04-11 17:02:59 +08:00
Yuanqiang Liu 88533b1968
[Stablehlo] fix aten.arange's lowering to stablehlo (#3138)
* promote to f64 to do the division, avoiding integer (floor) division on i64; see the sketch below
* refactor torch-to-stablehlo-pipeline
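
A sketch of why the promotion matters when computing the arange length (illustrative arithmetic, not the pass's actual code):
```python
import math

# Illustrative: arange(start, end, step) has ceil((end - start) / step)
# elements. Computing the quotient with integer (floor) division undercounts;
# promoting to float first does not.
start, end, step = 0, 10, 3

float_len = math.ceil((end - start) / step)   # ceil(3.33...) = 4  (correct)
floor_len = (end - start) // step             # 3                  (too small)

assert list(range(start, end, step)) == [0, 3, 6, 9]   # 4 elements
```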
2024-04-11 15:55:56 +08:00
zjgarvey aa5e150313
Adds Some uint8 Quantization Fixes (#3122)
1. Changes the linalg lowering for dequantization ops to always sign
cast to float to prevent misrepresenting uint32 overflow on subtraction
with zero point.
2. Adds a basic quantized model test which only quantizes and
dequantizes and now passes with these changes in linalg and onnx
configs.
3. Changes the aten.mm lowering to allow mismatched quantized types. 
4. If a quantized matmul arg is uint8, we shift by 128 to faithfully
represent the quantization as a signed i8 quantization (see the sketch
below). This worked fine in the AtenMmOp lowering, but I'd be happy to
move it to a rewrite in FuseQuantizedOps.cpp instead if that seems more
appropriate.

With changes 3 and 4, the QuantizedMLP_basic and
QuantizedSingleLayer_basic e2e tests now pass with the onnx config.
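
A sketch of the shift described in item 4 (illustrative values; the actual change adjusts the quantized tensor and its zero point together in the lowering):
```python
import numpy as np

# Illustrative: a uint8 quantized value q with zero point z represents
# (q - z) * scale. Shifting both q and z by 128 preserves the value while
# landing in the signed i8 range.
scale = 0.05
q_u8 = np.array([0, 128, 255], dtype=np.uint8)
z_u8 = 130

q_i8 = (q_u8.astype(np.int16) - 128).astype(np.int8)   # shift values by 128
z_i8 = z_u8 - 128                                       # shift zero point by 128

orig = (q_u8.astype(np.int32) - z_u8) * scale
shifted = (q_i8.astype(np.int32) - z_i8) * scale
assert np.allclose(orig, shifted)
```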
2024-04-10 12:36:58 -07:00
Vivek Khandelwal 3b84a7162b
build: manually update PyTorch version (#3116)
Set PyTorch and TorchVision version to nightly release 2024-04-08.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-04-10 21:16:34 +05:30
Xinyu Yang 5eb0cf9104
[Torch] Add decompose of AtenToPrimDeviceOp (#3131)
As device information isn't relevant to torch-mlir
2024-04-10 22:26:48 +08:00
Jae Hoon (Antonio) Kim 8951a8cc23
Replace c10::optional with std::optional (#3126)
They replaced all `c10::optional` usages with `std::optional` in
torchgen'd code in
fb90b4d4b2
causing the LTC build to break.

Replacing all usages of `c10::optional` with `std::optional` in
`projects/ltc` has fixed the issue.

Issue: #3120
2024-04-09 18:38:33 +00:00
Aart Bik 184d8c13f4
[torch-mlir][sparse] add ID-net example (#3127)
first sparse-in/sparse-out example; it will be used
to make actual sparse output work!
2024-04-09 11:21:30 -07:00
IanWood1 8ff28527cb
Add more descriptive error message to torch_ods_gen.py. (#3108)
Added error message when adding new torch op to
[torch_ods_gen.py](https://github.com/llvm/torch-mlir/compare/main...IanWood1:torch-mlir:ods_gen_error_message?expand=1#diff-889b60b904ed67a5065a14e8de6fc89e00e199577e4d2bfa134ac4d1c89832d2).


New message displays which op key is failing and possible matches in the
torch `Registry`.
```
Op does not match any Torch ops in Registry
Given op: 
    "aten::hardtanh_wrong : (Tensor, Scalar) -> (Tensor)" 
Possible matches: 
    "aten::hardshrink : (Tensor, Scalar) -> (Tensor)" 
    "aten::hardtanh_ : (Tensor, Scalar, Scalar) -> (Tensor)" 
    "aten::hardtanh : (Tensor, Scalar, Scalar) -> (Tensor)"
    "aten::clamp_min : (Tensor, Scalar) -> (Tensor)" 
    "aten::linalg_cond : (Tensor, Scalar?) -> (Tensor)"```



Also, ran black formatting on file. Based on LLVM style guides this seems to be correct, but I can revert the formatting if needed.
2024-04-09 09:50:34 -07:00
Yuanqiang Liu 8d5e2578b0
[Stablehlo] lowering aten.view to shape.num_elements + stablehlo.compute_reshape_shape (#3125)

as `aten.view` supports at most one `-1` in the dim list. The original
calculation of `numel` was wrong when there was a `-1` in the dim list.
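
A sketch of how a single `-1` entry is resolved from the element count (the same arithmetic the `shape.num_elements`-based lowering expresses; the Python here is purely illustrative):
```python
import math

# Illustrative: with at most one -1 in the dim list, the missing extent is
# numel divided by the product of the known extents.
def resolve_view_shape(numel, dims):
    known = math.prod(d for d in dims if d != -1)
    return [numel // known if d == -1 else d for d in dims]

print(resolve_view_shape(2 * 3 * 4, [2, -1]))   # [2, 12]
print(resolve_view_shape(2 * 3 * 4, [-1, 6]))   # [4, 6]
```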
2024-04-09 14:54:57 +08:00
Xinyu Yang 42a16fa912
[Torch] Support Aten_CastFloatOp. (#3115)
By canonicalizing Aten_CastFloatOp into AtenToDtypeOp
2024-04-09 11:06:53 +08:00
Sambhav Jain 04aeb4913f
Disable LTC from release builds to bypass linker issue (#3121)
Workaround for https://github.com/llvm/torch-mlir/issues/3120. This
should bring prebuilt releases back to green:
https://github.com/llvm/torch-mlir-release/actions.
2024-04-08 18:23:46 -07:00
Aart Bik 5797d3aa57
[torch-mlir][sparse] add a COO test for 3-dim (#3119)
This tests COO for more than two dimensions. Note that sparsity should really
propagate into the relu activation and the output, but such cleverness
needs to wait for the pending work in the PyTorch tree.
2024-04-08 16:46:51 -07:00