Commit Graph

1967 Commits (5698893ae424ffaf30923a38256b8bd4a1dcd344)

Author SHA1 Message Date
Vivek Khandelwal 5698893ae4 build: manually update PyTorch version
Set PyTorch and TorchVision version to nightly release 2023-05-16.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-18 21:30:11 +05:30
Yuanqiang Liu 6f7d9e83df
[Stablehlo] add e2e test for aten.batch_norm (#2129) 2023-05-17 09:04:40 -07:00
Yuanqiang Liu e98f2ba04a
[Torch Dialect] require dtype to exist when decomposing to aten.where.self (#2094)
2023-05-17 09:04:26 -07:00
gpetters94 0302cf1d92
Add TMTensor::Attention and lower ScaledDotProductAttentionOp to it (#2027) 2023-05-16 15:17:45 -04:00
Maksim Levental c76a48308e
[CAPI] add isValidSubtype to CAPI (#2127) 2023-05-13 22:15:45 -05:00
Ashay Rane 19a08d51f3
CI: [nfc] Use actions/cache instead of modified fork (#2124)
We previously used a fork of the actions/cache repository for the PyTorch
cache since the actions/cache repo did not support read-only caches.
Now that actions/cache supports separate read and write steps, this
patch switches back to the actions/cache repo.
2023-05-12 23:25:17 -05:00
Matthias Gehre 3a8196588f
TorchToTosa: Support casts from and to bf16 (#2118) 2023-05-12 15:18:23 -07:00
David Gens 17db2aafa3
add mse_loss_backward (#2111) 2023-05-12 14:29:13 -07:00
Ramiro Leal-Cavazos de02b56e17
Replace RefineTypes with dtype functions (#2105)
This commit adds dtype functions for all the torch ops that did not
previously have one and removes the pass `RefineTypes`, since the
abstract interpretation library now takes care of all the dtype
propagation.

All dtype functions added are tested except for
- `aten.embedding`
- `aten._embedding_bag`
- `aten.embedding_bag`

These functions need a change to the testing framework to allow
specifying the actual data inside the tensor used for testing. I will
fix this in a follow-up patch.

Co-authored-by: Jiahao Li <liplus17@163.com>
2023-05-12 13:40:45 -07:00
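To illustrate what a dtype function computes, here is a toy sketch: given the operand dtypes of an op, it returns the result dtype following PyTorch's promotion rules. The function name and signature below are illustrative only, not the abstract interpretation library's actual API.

```python
import torch

# Toy sketch of a dtype function: compute an op's result dtype from its
# operand dtypes using PyTorch's promotion rules. Illustrative only; the
# library's real dtype functions use a different signature.
def add_dtype_fn(lhs_dtype: torch.dtype, rhs_dtype: torch.dtype) -> torch.dtype:
    return torch.promote_types(lhs_dtype, rhs_dtype)

assert add_dtype_fn(torch.int32, torch.float32) == torch.float32
assert add_dtype_fn(torch.int64, torch.int32) == torch.int64
```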
Ashay Rane 28bb866260
CI: prepare CI for ccache updates for MSVC/Windows (#2120)
This patch, by itself, doesn't fix caching on Windows, but once a new
release of ccache is available, caching for Windows builds should start
working again (validated by building ccache from source and using it
with LLVM builds).

Ccache rejects caching when either the `/Zi` or `/ZI` flags are used
during compilation on Windows, since these flags tell the compiler to
embed debug information in a PDB file (separate from the object file
produced by the compiler).  In particular, our CI builds add the `/Zi`
flag, making ccache mark these compiler invocations as uncacheable.

But what caused our CI to add debug flags, especially when we specified
`-DCMAKE_BUILD_TYPE=Release`?  On Windows, unless we specify the
`--config Release` flag during the CMake build step, CMake assumes a
debug build.  So all this while, we had been producing debug builds of
torch-mlir for every PR!  No wonder it took so long to build the Windows
binaries.

The reason for having to specify the configuration during the _build_
step (as opposed to the _configure_ step) of CMake on Windows is that
CMake's Visual Studio generators will produce _both_ Release and Debug
profiles during the CMake configure step (thus requiring a build-time
value that tells CMake whether to build in Release or Debug mode).
Luckily, on Linux and macOS, the `--config` flag seems to be simply
ignored, instead of causing build errors.

Strangely, based on cursory tests, it seems like on Windows we need to
specify the Release configuration both as `-DCMAKE_BUILD_TYPE=Release`
and as `--config Release`.  Dropping either switched my build to a
Debug configuration.

Additionally, there is a bug in ccache v4.8 (although this is addressed
in trunk) that causes ccache to reject caching if the compiler
invocation includes any flag that starts with `/Z`, including `/Zc`,
which is added by LLVM's HandleLLVMOptions.cmake and which isn't related
to debug info or PDB files.  The next release of ccache should include
the fix, which is to reject caching only for `/Zi` and `/ZI` flags and
not all flags that start with `/Z`.

As a side note, debugging this problem was possible because of ccache's
log file, which is enabled by: `ccache --set-config="log_file=log.txt"`.
2023-05-12 12:45:01 -05:00
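For reference, here is a minimal sketch of the two-step invocation described above, wrapped in Python; the source and build paths are illustrative, not the CI's actual ones.

```python
import subprocess

# Configure step: request a Release build (sufficient on Linux/macOS).
subprocess.run(
    ["cmake", "-S", ".", "-B", "build", "-DCMAKE_BUILD_TYPE=Release"],
    check=True,
)

# Build step: with multi-config generators (Visual Studio on Windows),
# the configuration must also be selected here, or CMake builds Debug.
subprocess.run(
    ["cmake", "--build", "build", "--config", "Release"],
    check=True,
)
```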
github-actions[bot] c26e96e9a6
update PyTorch version to 2.1.0.dev20230512 (#2119)
- torch version: 2.1.0.dev20230512
 - torch commit hash: 1a3d3669efa55e3360060c9b81f87900ae0c906c
 - torchvision version: 0.16.0.dev20230512

Co-authored-by: Roll PyTorch Action <torch-mlir@users.noreply.github.com>
2023-05-12 12:09:07 -05:00
Maksim Levental c3cd7471b4
Pure-Python FX importer. (#2098)
Co-authored-by: Sean Silva <silvasean@google.com>
2023-05-12 00:46:33 -05:00
Ashay Rane e161f2511a
CI: let GitHub action create commit (#2114)
The GitHub action for creating the PR expects either that the changes
are not committed (in which case it commits them with the specified
commit message) or that the commit exists and has also been pushed to
remote.

Prior to this patch, we created the commit but did not push it to
remote, causing failures.  This patch leaves the changes uncommitted so
that they're committed and pushed to remote as part of the PR creation.
2023-05-11 19:19:32 -05:00
Zhekun Zhang 1eb18dd8b5
Add AtenFillScalarOp Stablehlo support (#2108)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-11 16:41:46 -07:00
Prashant Kumar c47d3aab01 Fix failing torchdynamo test. 2023-05-11 21:29:07 +05:30
Prashant Kumar 8eb0c7e656 Match torch.complex to builtin complex types.
The right approach would be to create our own !torch.complex type
and use that during import, rather than have a pass that converts to the
MLIR complex types.
2023-05-11 21:29:07 +05:30
Ramiro Leal-Cavazos ab694dfbc1 Add complex dtype support on refbackend 2023-05-11 21:29:07 +05:30
Prashant Kumar 3cd91affbc Add complex types support with basic complex ops.
Add aten.imag and aten.real op lowering via the linalg backend.
2023-05-11 21:29:07 +05:30
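For context, torch.real and torch.imag are the user-facing entry points for the aten.real and aten.imag ops lowered here; a quick example:

```python
import torch

# torch.real / torch.imag correspond to the aten.real / aten.imag ops
# that this change lowers through the linalg backend.
z = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j], dtype=torch.complex64)
print(torch.real(z))  # tensor([1., 3.])
print(torch.imag(z))  # tensor([ 2., -4.])
```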
rahul shrivastava 86429d9656 Add e2e native_group_norm test-cases
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
rahul shrivastava 40a2c501a1 Add ODS for group_norm
- Add ODS for native_group_norm/backward.
- Add shape inference for native_group_norm/backward.

Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-05-11 21:21:12 +05:30
yifei410 86718cb203
[TOSA] lowering support for aten cat (#2039)
Add support for lowering torch.aten.cat to tosa.concat

Co-authored-by: yifei <y.zhou@xilinx.com>
Co-authored-by: Lisa Liu <lingl@xilinx.com>
2023-05-10 08:25:58 -07:00
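For context, a minimal example of the op being lowered; torch.cat corresponds to torch.aten.cat:

```python
import torch

# torch.cat corresponds to the torch.aten.cat op lowered to tosa.concat.
a, b = torch.ones(2, 3), torch.zeros(2, 3)
print(torch.cat([a, b], dim=0).shape)  # torch.Size([4, 3])
```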
Ashay Rane 377720af87
CI: create PR for RollPyTorch updates (#2106)
Currently, we run just the Linux in-tree tests when the RollPyTorch
workflow runs, but this is insufficient since WHL files for macOS or
Windows are sometimes not uploaded by PyTorch, causing the RollPyTorch
action to pass but all subsequent torch-mlir CI tests to fail because of
the broken build.

The easiest way to validate the RollPyTorch action on all platforms is
to run the standard set of tests that we run for each submitted PR, so
this patch makes the RollPyTorch action submit a PR instead of
committing the changes to the main branch directly.  The PR is assigned
to a handful of folks for review, although this can be changed in the
future.
2023-05-10 09:25:59 -05:00
Sean Silva d7614c261d Integrate LLVM
LLVM: 26ee8947702d79ce2cab8e577f713685a5ca4a55
MHLO: 4805d8498dfb81566076f56f52273b426c1cc5bf

Per: https://github.com/llvm/torch-mlir/issues/1178#issuecomment-1538492185
2023-05-09 10:14:27 -07:00
Chi_Liu 51e0a2c933
[Stablehlo] Add stablehlo support for aten.abs (#2068)
Co-authored-by: AmosLewis <Amos_Lewsi@foxmail.com>
2023-05-08 22:13:00 -07:00
Maksim Levental c7a24c4d21
[CMake] Add headers to install target (#2100) 2023-05-08 14:27:00 -05:00
Yuanqiang Liu ef6dae6ae2
[Linalg] fix lowering reduce max with -inf (#2097) 2023-05-08 09:17:49 -07:00
Roll PyTorch Action 11a91b9d14 update PyTorch version to 2.1.0.dev20230508 2023-05-08 13:19:18 +00:00
Yuanqiang Liu 0096ceae2f
[Stablehlo] fix reduce max init_value with -inf (#2064)
2023-05-06 12:05:51 -07:00
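The underlying issue: the identity element for a max reduction is -inf, not 0. A small sketch of why the init value matters (pure illustration, unrelated to the StableHLO code itself):

```python
import torch

# Seeding a max reduction with 0 silently breaks all-negative inputs;
# -inf is the correct identity element.
x = torch.tensor([-3.0, -1.0, -2.0])
acc = float("-inf")
for v in x.tolist():
    acc = max(acc, v)
assert acc == torch.max(x).item()  # -1.0, which an init value of 0 would miss
```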
Roll PyTorch Action 0d0366c319 update PyTorch version to 2.1.0.dev20230506 2023-05-06 13:18:36 +00:00
Yuanqiang Liu 9f1ed4b2ba
[Torch Dialect] typo fix for RefineTypes (#2087) 2023-05-05 15:22:14 -07:00
Zhekun Zhang fc62b8e9ab
[StableHlo] Fix AtenWhereSelfOp convert rule (#2093)
* fix whereself convert rule

* use int to test promotion

* add dynamo failing test

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-05 15:21:55 -07:00
Roll PyTorch Action eaaaeb6ff1 update PyTorch version to 2.1.0.dev20230505 2023-05-05 13:26:09 +00:00
Vivek Khandelwal 378860f51b [MLIR][TORCH] Add E2E support for aten.topk op
This commit adds the decomposition for the aten.topk op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-05-05 15:50:33 +05:30
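A minimal sketch of one way aten.topk can be decomposed (sort, then slice); the decomposition added in this commit may differ in its details:

```python
import torch

# One possible decomposition of aten.topk: sort along dim, then slice
# off the first k entries. Illustrative; the in-tree decomposition may
# be structured differently.
def topk_via_sort(x, k, dim=-1, largest=True):
    values, indices = torch.sort(x, dim=dim, descending=largest)
    return values.narrow(dim, 0, k), indices.narrow(dim, 0, k)

x = torch.tensor([1.0, 4.0, 2.0, 3.0])
assert torch.equal(topk_via_sort(x, 2)[0], torch.topk(x, 2).values)
```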
Zhekun Zhang 1eceb84899
add stablehlo support for pow.tensor_tensor (#2086)
Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-04 09:55:03 -07:00
Zhekun Zhang 0cf9ee340b
[Torch Dialect] Add to.dtype_layout canonicalize patterns (#2062)

Co-authored-by: zhekun.zhang <zhekun.zhang@bytedance.com>
2023-05-02 20:06:02 -07:00
Yuanqiang Liu c596d11b98
[Torch Dialect] add canonicalize pattern for prim.device (#2066) 2023-05-02 20:05:46 -07:00
Ramiro Leal-Cavazos f9c2b46e62
[build] Update llvm tag to 68754241 (#2079)
This commit updates the `llvm-project` and `mlir-hlo` submodules to
commits:

- llvm-project: 6875424135312aeb26ab8e0358ba7f9e6e80e741
- mlir-hlo: 92fd33a4bacbeb93ab276a49f38bdebd5f9d7487

The calls to `mlir::MlirOptMain` are updated to no longer specify the
flag `preloadDialectInContext` that has been removed (see:
https://reviews.llvm.org/D149039).
2023-05-02 09:13:54 -07:00
Maksim Levental 100cb46baa
[Dynamo] fix TORCHDYNAMO_CRASHING_SET (#2082) 2023-05-01 21:35:26 -05:00
Ramiro Leal-Cavazos 55caaf6dda
Revert "update PyTorch version to 2.1.0.dev20230430" (#2080)
This reverts commit eab1930c49.

The PyTorch version does not yet exist for macOS.
2023-05-01 15:57:54 -07:00
Roll PyTorch Action eab1930c49 update PyTorch version to 2.1.0.dev20230430 2023-04-30 13:16:50 +00:00
Maksim Levental 2b56ee46e1
Missing ";" between includes in MHLOTargets.cmake (#2074)
Co-authored-by: Tilmann Bartsch <info@tebartsch.ai>
2023-04-29 18:05:20 -05:00
Roll PyTorch Action 2994ba00f2 update PyTorch version to 2.1.0.dev20230429 2023-04-29 13:17:28 +00:00
Maksim Levental c9fba95642
[Dynamo] turn on `no_python=True` for dynamo tests (#2040) 2023-04-28 18:05:17 -05:00
Roll PyTorch Action 61a8142d23 update PyTorch version to 2.1.0.dev20230428 2023-04-28 13:31:17 +00:00
Ze Zhang 7b73e0cfaf
Add e2e linalg support for aten.atan (#2070)

Co-authored-by: Ze Zhang <ze.zhang@getcruise.com>
2023-04-28 00:04:58 -07:00
rahul shrivastava a58442b50d Add ODS for aten.pow.Scalar
Signed-off-by: rahul shrivastava <rahul.shrivastava@cerebras.net>
2023-04-27 22:09:45 +05:30
Vivek Khandelwal 491ae5eda4 [MLIR][TORCH] Add E2E support for aten.var_mean.dim op
This commit adds the decomposition for the aten.var_mean.dim op.

Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
2023-04-27 22:00:44 +05:30
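A rough sketch of what decomposing aten.var_mean.dim amounts to (compute the mean, then the variance from it); the actual decomposition may be structured differently:

```python
import torch

# Sketch of a var_mean decomposition: mean first, then variance with
# Bessel's correction when unbiased. Illustrative only.
def var_mean_dim(x, dim, unbiased=True, keepdim=False):
    mean = x.mean(dim, keepdim=True)
    n = x.shape[dim]
    var = ((x - mean) ** 2).sum(dim, keepdim=keepdim) / (n - 1 if unbiased else n)
    return var, mean if keepdim else mean.squeeze(dim)

x = torch.randn(3, 5)
v, m = var_mean_dim(x, dim=1)
v_ref, m_ref = torch.var_mean(x, dim=1)
assert torch.allclose(v, v_ref) and torch.allclose(m, m_ref)
```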
Ramiro Leal-Cavazos c8e062fb4e
Fix default value of `stride` in 2d pooling ops in linalg and tosa (#2065)
When the user does not specify the `stride` value in 2d pooling ops,
`stride` is given the value of an empty list. However, the current
lowerings for pooling ops assumed that the `stride` operand would
always be a list of two ints, leading to crashes when that was not the
case. This commit fixes the crashes by setting the value of `stride`
to `kernel_size` when `stride` is the empty list, since this is the
default `stride` value specified in PyTorch docs. See:
https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
2023-04-27 08:31:36 -07:00
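A quick check of the documented behavior; when stride is omitted, PyTorch uses kernel_size:

```python
import torch

# Per the PyTorch docs linked above, an unspecified stride defaults to
# kernel_size, which is the behavior the fixed lowerings now follow.
x = torch.randn(1, 1, 9, 9)
implicit = torch.nn.MaxPool2d(kernel_size=3)(x)
explicit = torch.nn.MaxPool2d(kernel_size=3, stride=3)(x)
assert torch.equal(implicit, explicit)
```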
Roll PyTorch Action 24381abea4 update PyTorch version to 2.1.0.dev20230427 2023-04-27 13:32:54 +00:00
Roll PyTorch Action 4f1e8f66ae update PyTorch version to 2.1.0.dev20230426 2023-04-26 13:32:25 +00:00