Commit Graph

81 Commits (6679728c56d27e96795133240ec700cf73d1ea74)

Author SHA1 Message Date
penguin_wwy 6679728c56
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243)
Like #3130, this gradually replaces the deprecated code

https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
2024-04-27 14:00:56 -07:00
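A minimal sketch of what this migration looks like at a call site (illustrative helper, not code from the patch):

```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"

using namespace mlir;

// Illustrative helper. Deprecated member style being phased out:
//   auto ty = v.getType().dyn_cast<RankedTensorType>();
// Preferred free-function style:
Type elementTypeOrSelf(Value v) {
  if (auto ty = dyn_cast<RankedTensorType>(v.getType()))
    return ty.getElementType();
  return v.getType();
}
```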
Aart Bik 2eac8a992f
[torch-mlir][sparse] sparse tensor dialect is a legal dialect (#3227) 2024-04-26 02:36:42 +08:00
penguin_wwy d4a30b7e67
Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3130)
We should prefer the functional style, as the method style is deprecated:
https://github.com/llvm/mlir-www/blob/main/website/content/deprecation/_index.md#deprecated
(https://mlir.llvm.org/deprecation/)
2024-04-11 06:47:35 -07:00
Yuanqiang Liu 88533b1968
[Stablehlo] fix aten.arange's lowering to stablehlo (#3138)
* promote to f64 to do the division, avoiding division on i64 (which floors)
* refactor torch-to-stablehlo-pipeline
2024-04-11 15:55:56 +08:00
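A standalone illustration (assumed values, not code from the patch) of the floor-division pitfall the first bullet avoids:

```
#include <cmath>
#include <cstdint>

// Illustrative: computing arange's length in i64 floor-divides and can
// undercount; promoting to f64 and taking the ceiling gives the right count.
int64_t arangeLength(double start, double end, double step) {
  return static_cast<int64_t>(std::ceil((end - start) / step));
}
// e.g. start=0, end=10, step=3:
//   i64: (10 - 0) / 3 == 3 (floored) -> one element short
//   f64: ceil(10.0 / 3.0) == 4       -> correct: {0, 3, 6, 9}
```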
Yuanqiang Liu 43d54efd14
[cmake] link TorchMLIRTorchConversionPasses to TorchMLIRConversionPasses (#3113)
* `TorchMLIRTorchConversionPasses` was missing dependencies on
`TorchMLIRTorchToStablehlo` and `TorchMLIRTorchToTensor`.
* use `TorchMLIRConversionPasses` instead of scattered targets.
2024-04-08 14:44:34 +08:00
Yuanqiang Liu 6cbb2f7ae0
[Stablehlo] add stablehlo-canonicalize-dynamism when lowering (#3097)
so that many stablehlo e2e test cases can pass
2024-04-02 22:47:24 +08:00
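Roughly how such a pass lands in the lowering pipeline; a sketch assuming stablehlo's generated pass constructor, not the exact torch-mlir code:

```
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Pass/PassManager.h"
#include "stablehlo/transforms/Passes.h"

using namespace mlir;

// Sketch; assumes stablehlo's generated createStablehloCanonicalizeDynamismPass().
void addStablehloCleanup(OpPassManager &pm) {
  // Canonicalize away trivial dynamism (e.g. shape operands that are in
  // fact constant) so more test cases reach fully static StableHLO.
  pm.addNestedPass<func::FuncOp>(
      stablehlo::createStablehloCanonicalizeDynamismPass());
}
```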
Rob Suderman 14b548f968
[torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055)
Reshaping tensors depends on directly matching each individual dimension to
its corresponding dim in the `torch.view` reshape dimensions. This
involves decoupling dynamic dimensions from their static counterparts
and supporting cleanup / canonicalization.
2024-03-26 12:41:40 -07:00
Yuanqiang Liu 8b96727d0d
[Stablehlo] lowering chlo to stablehlo in torch-to-stablehlo pipeline (#3037)
since stablehlo is a better boundary than chlo between the frontend
compiler and the backend compiler.
2024-03-19 21:18:54 +08:00
Rob Suderman 4a7a7d76f8
[onnx] Fix ReduceMean lowering to torch (#2956)
The torch lowering only supported the most recent version. Refactored the
lowering so it can more easily handle default values and optional operands /
attributes.
2024-02-27 22:48:07 -08:00
Rob Suderman e30a083aff
[torch] Rework lowering to tm_tensor.scatter to stop serialization (#2940)
We collapsed and broadcasted scatter indices to a single-element
version. We should instead use `tm_tensor.scatter`'s support for
multiple indices and its implicitly broadcasted behavior. This avoids
the serialization and the materialization of a needlessly large indices tensor.
2024-02-27 11:46:57 -08:00
Stella Laurenzo 4446fa00d8
Migrate passes in TorchConversion to use FunctionOpInterface. (#2935)
This enables better reuse in downstreams that use different func
implementations, and it should have no impact on those that don't, except
in opt pipelines using the old form: with interfaces, explicit pipelines
via `--pass-pipeline=` must be used.
2024-02-20 08:54:02 -08:00
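A sketch of the shape this migration gives a pass (hypothetical pass name; assumes the usual `PassWrapper`/`InterfacePass` pattern and the header path of current MLIR):

```
#include "mlir/Interfaces/FunctionInterfaces.h"
#include "mlir/Pass/Pass.h"

using namespace mlir;

namespace {
// Hypothetical pass: runs on any op implementing FunctionOpInterface
// (func.func or a downstream's custom function op), not just func::FuncOp.
struct ExampleFuncPass
    : PassWrapper<ExampleFuncPass, InterfacePass<FunctionOpInterface>> {
  StringRef getArgument() const override { return "example-func-pass"; }
  void runOnOperation() override {
    FunctionOpInterface func = getOperation();
    (void)func; // rewrite the function body here
  }
};
} // namespace
```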
Scott Todd d6e1d836ca
Drop torch attributes at the end of backend conversion. (#2876)
Fixes https://github.com/llvm/torch-mlir/issues/2866

Some backends / downstream projects expect that a "fully converted"
program has no remaining ops or attributes from the original dialect(s).
2024-02-13 14:32:02 -08:00
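A sketch of this kind of cleanup (the attribute prefix and helper are illustrative, not the patch's code):

```
#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/BuiltinOps.h"

using namespace mlir;

// Strip leftover discardable attributes from the source dialect so a
// "fully converted" module carries no residue; "torch." is illustrative.
void dropTorchAttributes(ModuleOp module) {
  module.walk([](Operation *op) {
    // Copy first: we must not mutate the attribute list while iterating.
    for (NamedAttribute attr : llvm::to_vector(op->getDiscardableAttrs()))
      if (attr.getName().getValue().starts_with("torch."))
        op->removeDiscardableAttr(attr.getName());
  });
}
```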
Rob Suderman 0114a570e3
[torch] Support lowering `torch.item` to `tensor.extract` (#2835)
Extracting scalar values from tensors can be implemented via a lowering
to tensor.extract.
2024-01-31 15:09:12 -08:00
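The core of the lowering, sketched with upstream builders (a sketch assuming the input has already been reduced to a rank-0 tensor):

```
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Sketch: torch.item yields the single scalar a tensor holds; on the
// linalg path that is tensor.extract with no indices on a rank-0 tensor.
Value extractScalar(OpBuilder &b, Location loc, Value rank0Tensor) {
  return b.create<tensor::ExtractOp>(loc, rank0Tensor, ValueRange{});
}
```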
Quinn Dawkins 494089d53d
Clang format refresh (#2812)
After noticing a number of commits with unrelated formatting changes, I
suspect clang-format was changed at some point, and that churn is now
leaking into unrelated commits. Doing a refresh can help avoid
this.

The changes made here came from
```
find lib -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find include -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
find projects -iname *.h -o -iname *.cpp  | xargs clang-format -i --style=llvm
```
2024-01-29 12:59:33 -05:00
Rob Suderman f6f890520b
[torch][quant] Quantized `torch.mm` for linalg with end-to-end test (#2750)
This includes custom op matching for decomposed operations and fusing
dequantization into dense operations. As a validation we compare
to the dequant+mm torch implementation.
2024-01-24 14:02:50 -08:00
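For reference, the dequantization being fused is the standard affine scheme (a sketch; names are generic, not from the patch):

```
#include <cstdint>

// Affine dequantization: real = (quantized - zero_point) * scale.
// Fusing this into the matmul lets the dense op consume the i8 values
// directly instead of first materializing a dequantized f32 tensor.
float dequantize(int8_t q, int32_t zeroPoint, float scale) {
  return static_cast<float>(static_cast<int32_t>(q) - zeroPoint) * scale;
}
```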
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from that
(specifically LTC is identified as likely ultimately living in a
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this patch with respect to build layering or
options. These can be updated in a followup without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass through but
roughly ~halves the number of build actions for the project from the
high 4000's to the low 2000's.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. As I was thinking about it, this will better accommodate
the follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Stella Laurenzo 078d1e1a1d
Remove mlir-hlo (replace with stablehlo). (#2460)
We just have to do this. I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is completely unapparent how to do so, given that the mlir-hlo repo is a read-only export sitting at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten a +1 from everyone that they are ok with taking the e2e testing hit if it becomes necessary: it is now necessary, as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and subprocess call that in order to get the testing coverage back. I leave that as an exercise to the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
2023-09-12 19:10:02 -07:00
Yuanqiang Liu 5895b9f8ca
fix compile warning (#2453) 2023-09-12 09:31:47 +08:00
jinchen62 1682b540bf
Prototype passes for lowering quantized group matmul (#2402)
* Support brevitas custom op (#2320)

* f16 change for brevitas

* Adapt the change of brevitas quant custom op name

* Add unit tests

* Make brevitas conversions isolated

* Address the comments

---------

Co-authored-by: dan <danimal197@gmail.com>
2023-08-29 21:25:45 -07:00
Maksim Levental 0caaf8d32a
Bump LLVM (#2176)
* Bump LLVM

---------

Co-authored-by: Matthias Gehre <matthias.gehre@xilinx.com>
2023-06-13 16:17:23 +02:00
Yuanqiang Liu 5223f990df
[Stablehlo] Enable Stablehlo backend with arith dialect (#2139) 2023-05-26 22:57:57 +08:00
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via the linalg backend.
2023-05-11 21:29:07 +05:30
Eric Kunze 6a833e1922
Update to LLVM 3157f03a349cfc852cdd994675eaa9652caa2e3a (#2060)
New requirement to explicitly cast for interfaces https://reviews.llvm.org/D148493
2023-04-25 08:52:46 -07:00
Alexandre Rames 224ee27610
Fix a few missing dependencies. (#2014)
`TorchToTMTensor` depends on `TorchMLIRTorchUtils` for
`mlir::torch::torch_upstream::get_reduction_enum`.

`TorchMLIRTorchConversionPasses` depends on multiple libs for both tblgen'd
headers and definitions. Test with `ninja TorchMLIRTorchConversionPasses` from
a clean build.
2023-04-11 11:18:49 -07:00
Alexandre Rames d24fa71368
Minor fixes for `ConvertTorchConversionToMLProgram`. (#1991)
* Only create the global seed variable if it does not exist already.
* Make the pass a module pass. A func pass may not modify its parent op.
2023-04-04 09:09:58 -07:00
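A sketch of the first fix (the global's name is illustrative; assumes the seed lives in an `ml_program.global`):

```
#include "mlir/Dialect/MLProgram/IR/MLProgram.h"
#include "mlir/IR/BuiltinOps.h"

using namespace mlir;

// Being a module pass, the pass may create the global itself, but it must
// first check that an earlier run has not already created it.
// "global_seed" is an illustrative symbol name.
bool needsSeedGlobal(ModuleOp module) {
  return !module.lookupSymbol<ml_program::GlobalOp>("global_seed");
}
```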
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all Stablehlo operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
2023-02-02 07:29:47 -06:00
Eric Kunze 95bdfaa9bf
update llvm to d23516e9ad477527a9db4d06b1fa9566680ac67c (#1812)
Rename BlockAndValueMapping to IRMapping
Moved PrimTupleConstructOp type validation to its own verifier as the
tablegen version does not work for a combination of variadic input and
non-variadic output.
2023-01-23 16:34:22 -08:00
Ashay Rane f63bb9f86c
build: update llvm tag to 3a020527 (#1717)
Summary of changes:

 - Replace `llvm::None` with `std::nullopt`, since the former is deprecated
   (https://reviews.llvm.org/D139763)

 - Use setter for symbol visibility instead of passing string attribute when
   creating FuncOp
2022-12-14 02:06:39 -06:00
Vivek Khandelwal f416953600 [MLIR][TORCH] Add TorchConversionToMLProgram and MLProgramBufferize pass
This commit replaces the `InsertRngGlobalsPass` with the `TorchConversionToMLProgram`
pass. It also adds the `MLProgramBufferize` pass for the
bufferization of ml_program dialect ops so they can run on refbackend.

Signed-Off By: Vivek Khandelwal<vivek@nod-labs.com>
2022-12-02 13:20:46 +05:30
Gaurav Shukla 0d209998d1
llvm: update tag to e864ac6945 (#1600)
Summary of changes:
1. Replace `string` iterator types with the `IteratorType` enum.
(e6598b053d)
2. Update `includes` wrt new directory layout of MLIR HLO codebase.
(9fd8d251a8)
3. Update tags
   llvm: e864ac694540342d5e59f59c525c5082f2594fb8
   MHLO: eab364ba2a66bd0613efb94f8a738c1c97aaee92

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-11-16 14:40:36 -08:00
Ashay Rane faa9a78e38
build: update llvm tag to 6f46ff37 (#1448)
Summary of changes:
 - Updated references to the Arith dialect
   (https://reviews.llvm.org/D134762)
 - Switched to prefixed accessors for MemRef dialect
   (https://reviews.llvm.org/D134995)
 - Fixed warnings about signed/unsigned comparisons, ignored return
   values, and unused variables
2022-10-05 08:28:06 -05:00
Gleb Kazantaev 708fa346a6
Fix Base Lazy Backend Type Conversion (#1412)
* Fix c10::prim::Constant conversion; Added CAPI for passes; Added passes to base lazy backend

* Update ivalue_importer to use ImportOptions; Added tests for non-value/value tensor types

* Added tests for scalar Constant import; Updated MB::importFunction to use ImportOptions

* Test updates

* Move back module variable name

* Remove RefineTypes from TorchMlirLoweringContext::Build()

* Rename pass; Remove passes from base lazy backend

* Rename pass to VerifyBackendContractPass

* Aligned cmd pass name; Fixed TorchConversion passes registration
2022-10-04 15:53:28 -07:00
Tanyo Kwok 72e422b589
Add relu6 and binary broadcasts (#1408)
2022-09-23 20:39:15 +08:00
Sean Silva 851ce0c940 Remove TorchLoweringPipelineOptions from TorchConversion pipelines
TorchLoweringPipelineOptions only applies to the frontend lowering
pipeline.
2022-09-14 11:20:29 -07:00
Tanyo Kwok 7f63a17a46
[MHLO] add new options to pipeline (#1331) 2022-09-12 10:27:41 -07:00
Tanyo Kwok 57d8ec151f
[MHLO] add VerifyMhloBackendContract (#1321)
* guard with macro
2022-09-01 17:08:17 +08:00
Sean Silva 0e3ddbac91 Remove VerifyInvariantsBeforeBackendLowering
LowerToBackendContract now checks all this consistently.
2022-08-26 10:24:43 -07:00
Sean Silva 57681f7947 Iteratively run the main simplification pipeline.
This introduces a new pass LowerToBackendContract (better name very
welcome) which performs the bulk of the simplifications that we do,
such as
- shape refinement
- dtype refinement
- maximizing value semantics
- inlining global slots
- decomposing complex ops

The key difference from before is that it iterates the set of
transformations, which can help to break a number of "catch-22" issues
where one simplification depends on another, the latest example being
here:
https://github.com/llvm/torch-mlir/issues/1131

This also exposed that RefineTypes was sometimes crashing/asserting for
certain inputs. This commit hardens it a bit.
2022-08-17 14:54:33 -07:00
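The iteration is the heart of it; a sketch of the fixed-point loop (hook names are stand-ins, not the pass's real API):

```
#include "mlir/IR/BuiltinOps.h"
#include "mlir/Pass/PassManager.h"

using namespace mlir;

// Stand-ins for the pass's real hooks (illustrative declarations only).
bool satisfiesBackendContract(ModuleOp module);
void buildSimplificationPipeline(PassManager &pm);

// Iterate the simplification pipeline to a fixed point: each round can
// unlock the next (e.g. a decomposition enabling shape refinement),
// breaking the "catch-22" ordering issues of a single linear run.
LogicalResult lowerToBackendContract(ModuleOp module, int maxIterations) {
  PassManager pm(module.getContext());
  buildSimplificationPipeline(pm);
  for (int i = 0; i < maxIterations; ++i) {
    if (failed(pm.run(module)))
      return failure();
    if (satisfiesBackendContract(module))
      return success();
  }
  return failure(); // did not converge within the iteration budget
}
```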
Yan Xu 9be8997536
Revert "add native_dropout and related ops pattern (#1211)" (#1230)
This reverts commit c935795086.
2022-08-17 13:48:10 +08:00
Yan Xu c935795086
add native_dropout and related ops pattern (#1211) 2022-08-15 09:28:47 +08:00
Ramana Radhakrishnan 738f4fe96a
Rename TorchToStd pass as TorchToArith (#1163)
All the converters in this pass appear to create ops from the
arith dialect. Hence the full rename.

Fix GH Issue #409.
2022-08-10 20:12:51 +01:00
Sean Silva 504de5e701 Rework how global slot initializers work.
Rather than a per-global-slot initializer region, we now have one for
the whole module. For example, it might look like this:

```
torch.global_slot "private" @tensor : !torch.tensor
torch.global_slot "private" @list : !torch.list<tensor>
torch.global_slot.module_initializer {
  %0 = torch.tensor.literal(dense<0.0> : tensor<f32>) : !torch.tensor
  %1 = torch.prim.ListConstruct %0 : (!torch.tensor) -> !torch.list<tensor>
  torch.initialize.global_slots [
    @tensor(%0 : !torch.tensor)
    @list(%1 : !torch.list<tensor>)
  ]
}
```

This new structure allows GlobalizeObjectGraph to create the initializer in a
much simpler way, avoiding the need to reason about whether different slots
alias each other. Reasoning about whether slots alias each other now is the
responsibility of InlineGlobalSlots, which has to do a much more complicated
analysis, implemented using MLIR's dataflow analysis framework.

Recommended review order:
- Check out the new IR constructs in the .mlir files of various passes
- Op definitions (*.td)
- Changes to GlobalizeObjectGraph pass.
- InlineGlobalSlots pass (~total rewrite)
- Misc changes:
  - Moving torchMlirAdjustStaticInformation for sharing with C++ code.
  - EraseModuleInitializer pass

To make this a bit nicer, it would be good to have a `torch.module` op
with an initializer region attached. That would be more invasive though.

This change has highlighted certain aspects of our project layering
which are worth calling out. None of our backends can handle global
slots, so we enforce that there are no global slots before backend
lowering. At an earlier stage in the project, we had aspirations of
transparently handling mutable global state and such, but for reasons
described below, that is no longer a goal. So really global slots should
be seen as a progressive lowering step as part of inlining all the
IValue's in the original program (GlobalizeObjectGraph is also one such
step).

Over time, with insights from work like IREE-JAX, it has become clear
that there isn't a reliable programming model we can compile for users
where we just transparently handle mutable global state (and some other
things, like lists and dictionaries). There is a need for an "outer
program" that orchestrates more restricted subroutines of the kind we
can handle in our compile flow here. The benefit of that is that it
decouples considerations like shapes, dtypes, etc. from the program
constructs used in the outer program. As long as the outer program can
efficiently invoke (pipelining/async/etc.) high-performance
data-parallel numerical subroutines of the kind we compile in our flow
here, then there is a complete programming model. This is also
consistent with the direction of upstream PyTorch which is becoming more
tracing-based (which inherently loses a lot of program structure, which
then has to be applied back with an "outer program" orchestrating the
traced subroutines).
2022-08-08 18:12:06 -07:00
武家伟 76c976682c
[MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect (#1123)
* [MHLO] Support for dynamic shape in basic op conversion by introducing CHLO dialect
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* [MHLO] Support I32 as shape tensor dtype

* [NFC] Add a 'TODO' annotation
2022-08-02 12:53:24 +08:00
Sean Silva c0ef192865
Improve error message
The unknown dtype case can come from RefineTypes.
2022-07-21 13:52:24 -07:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Ashay Rane e06ee08506
torch: [nfc] use `WalkResult::isInterrupted()` instead of booleans (#1081)
An upstream MLIR bug (that was recently fixed) caused the result to be
ignored for Region- and Block-visitor functions.  Now that the bug is
fixed, we don't need an auxiliary boolean to track whether the visitor
function has succeeded.
2022-07-19 10:17:57 -07:00
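What the cleanup looks like in practice (a sketch; the predicate is a stand-in, and the accessor in current MLIR is spelled `wasInterrupted()`):

```
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

using namespace mlir;

// With the upstream fix, the walk's own result is trustworthy, so no
// auxiliary boolean is needed: interrupt at the first offending op and
// query the result afterwards.
bool hasUnsupportedOp(Operation *root) {
  WalkResult result = root->walk([](Operation *op) {
    if (op->getNumRegions() > 1) // stand-in for the real check
      return WalkResult::interrupt();
    return WalkResult::advance();
  });
  return result.wasInterrupted();
}
```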
Tanyo Kwok 143a7bcb76
[MLIR][TORCH] Add folder for torch_c.from_i64 & torch_c.to_i64 (#933)
* [MLIR][TORCH] Add folder for torch_c.from_i64 & torch_c.to_i64

* add unit tests for each individual fold

* fix failure of NumelZeroRankModule & TestMultipleTensorAndPrimitiveTypesReturn
2022-06-24 09:34:39 +08:00
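The folds are the classic round-trip cancellations; a sketch of one direction as a free helper (the real change implements the ops' fold hooks):

```
#include "torch-mlir/Dialect/TorchConversion/IR/TorchConversionOps.h"

using namespace mlir;
using namespace mlir::torch::TorchConversion;

// Sketch helper: torch_c.to_i64(torch_c.from_i64(x)) -> x; the mirrored
// fold handles torch_c.from_i64(torch_c.to_i64(x)) -> x.
Value foldToI64RoundTrip(ToI64Op op) {
  if (auto from = op.getOperand().getDefiningOp<FromI64Op>())
    return from.getOperand();
  return nullptr; // no fold applies
}
```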
Maksim Levental 829717c96e
Bump LLVM (#958) 2022-06-22 22:23:46 -05:00
Prateek Gupta e1db318a3c [TORCH][MLIR]Add lowering for control flow operations.
1. This commit adds lowering of "while-like" prim loops to the scf.while
operation.
2. It also adds lowering of "for-like" prim loops to the scf.for operation.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-04-29 16:25:58 +05:30
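A sketch of the "for-like" direction using bare upstream builders with modern header paths (the real pattern also runs operands and types through the conversion type converter):

```
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/SCF/IR/SCF.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Sketch: build the scf.for skeleton a "for-like" torch.prim.Loop lowers
// to: count from 0 to the trip count by 1, threading loop-carried values
// through iter_args just as the prim loop threads its block arguments.
scf::ForOp buildForSkeleton(OpBuilder &b, Location loc, Value tripCount,
                            ValueRange initArgs) {
  Value zero = b.create<arith::ConstantIndexOp>(loc, 0);
  Value one = b.create<arith::ConstantIndexOp>(loc, 1);
  return b.create<scf::ForOp>(loc, zero, tripCount, one, initArgs);
}
```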
Ashay Rane 9208bf0eb6
llvm: bump tag to e1318078 (#781)
The updated LLVM code includes a patch to create bfloat16 array
attributes, thus enabling a different patch to torch-mlir to flesh out
support for the bfloat16 type.
2022-04-26 12:27:51 -07:00