Commit Graph

34 Commits (79ae0afc2fc1a7b3bc25060de45f4de53444247b)

Author SHA1 Message Date
Aart Bik 6fece25ff3
[torch-mlir][sparse] add decomposition features to sparse compiler (#3505)
Fixes https://github.com/llvm/torch-mlir/issues/3499
2024-06-28 10:18:36 -07:00
Aart Bik d77bab37d1
[torch-mlir][sparse] re-enable all sparse tests (#3444)
this fixes the following issue:

https://github.com/llvm/torch-mlir/issues/3418
2024-06-10 11:19:32 -07:00
Vivek Khandelwal 72837fbb3d
build: manually update PyTorch version (#3340)
Set PyTorch and TorchVision version to nightly release 2024-05-14.

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-06-06 22:23:40 +05:30
Aart Bik 560ca24771
[torch-mlir][sparse] replace xavier with ones initialization (#3374)
ensures stability of results between different setups
2024-05-21 17:12:55 -07:00
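The swap is easy to picture. A minimal sketch (the layer and sizes are illustrative, not the repo's actual test code):

```python
import torch
import torch.nn as nn

# Xavier initialization draws weights from a random distribution, so test
# outputs can drift between runs and platforms:
#     nn.init.xavier_uniform_(layer.weight)
# All-ones initialization is fully deterministic, keeping results stable:
layer = nn.Linear(4, 4, bias=False)
nn.init.ones_(layer.weight)
```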
Aart Bik c0e7d2667d
[torch-mlir][sparse] inference mode for sparse GCN test (#3369) 2024-05-20 19:52:16 -07:00
Aart Bik e80f072ba4
[torch-mlir][sparse] example of a sparse graph convolution (#3363) 2024-05-17 15:43:50 -07:00
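For context, a single graph-convolution layer computes `A @ X @ W` with a (sparse) adjacency matrix `A`. A minimal sketch under that assumption (class and names are illustrative, not the repo's test):

```python
import torch
import torch.nn as nn

class TinyGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(in_dim, out_dim))

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Sparse-dense matmul aggregates neighbor features.
        return torch.sparse.mm(adj, x @ self.weight)

adj = torch.eye(3).to_sparse()  # trivial graph, adjacency in COO form
out = TinyGCNLayer(4, 2)(adj, torch.randn(3, 4))
```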
Peiming Liu ccb772cd0f
[sparse] propagate sparsity properly when decomposing torch operations. (#3318) 2024-05-15 10:09:27 -07:00
Aart Bik 44fa6c3afd
[torch-mlir][sparse] sparse diagonal feature scaling test (#3344) 2024-05-14 12:13:54 -07:00
Peiming Liu 8e74d64e8f
[sparse] convert to sparse before any use in sparse test. (#3337) 2024-05-14 09:10:36 -07:00
Aart Bik 667dfcbc5a
[torch-mlir][sparse] enable test on ReLu (#3336)
The downstream MLIR sparsifier now has some (rudimentary) support for ReLU,
so this test can be enabled with correct end-to-end behavior.

Also see discussion at:

https://discourse.llvm.org/t/min-max-abs-relu-recognition-starter-project/78918
2024-05-13 15:34:26 -07:00
Peiming Liu 2c22087cab
[sparse] match fx node using target name instead of variables name (#3315) 2024-05-09 12:34:14 -07:00
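To see why target-based matching is more robust: FX variable names are auto-generated and unstable, while `node.target` pins down the operation itself. A small self-contained illustration:

```python
import torch
from torch.fx import symbolic_trace

def f(x):
    y = torch.relu(x)  # the name "y" is arbitrary and may be renamed
    return y

gm = symbolic_trace(f)
for node in gm.graph.nodes:
    # node.name reflects (unstable) variable naming; node.target
    # identifies the actual callee, which is what we should match on.
    if node.op == "call_function" and node.target is torch.relu:
        print(f"matched relu via target; node.name={node.name!r}")
```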
Aart Bik 97a822de0a
[torch-mlir][sparse] minor tweaks in sparse tests (#3311)
(1) test full PyTorch output for eltwise
(2) use "random" input for LIF, to get a general sparse tensor
(3) introduce a way to get true sparsity into the network (needs backend
fix first)
2024-05-09 10:03:25 -07:00
Aart Bik 89bb7404c1
[torch-mlir][sparse] add a true network to our NN tests (#3305)
Objective: make `to_sparse` work end-to-end!
2024-05-08 21:18:42 -07:00
Peiming Liu cff144b3ac
[sparse] fix double free due to incompatibility between buffer-deallocation and sparse tensors. (#3303)

**NOTE**: This PR _dodges_ the issue in the buffer-deallocation pass instead
of resolving it. In the future, we need to fix the bug in the
buffer-deallocation pass when handling code generated by the sparse
compiler.
2024-05-08 21:18:17 -07:00
Aart Bik c4b28e8d9f
[torch-mlir][sparse] test for sparse "activation" (#3304)
Example of introducing sparsity into the forward pass, with a bespoke
propagation (upstream PyTorch will support this in the future).
2024-05-08 19:01:24 -07:00
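A hedged sketch of what such a sparse "activation" can look like (the class is illustrative, not the repo's actual test):

```python
import torch
import torch.nn as nn

class SparseActivation(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dense = torch.relu(x)     # ReLU zeroes out many entries...
        return dense.to_sparse()  # ...so carry a COO tensor forward

out = SparseActivation()(torch.randn(4, 4))
assert out.is_sparse
```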
Aart Bik c77f3b559a
[torch-mlir][sparse] add simple sparsity "propagation" rules (#3297)
While waiting for the full resolution of feature request
https://github.com/pytorch/pytorch/issues/117188
(which will propagate sparsity the right way in upstream PyTorch for all
FX Graphs), this minor change allows us to start testing sparsity
"within" a network, rather than just the parameters. Feel free to add
your own rules for testing (but within reason for what will be done
upstream).

Note, two TODOs need to be addressed to work around some pending issues
to make the JIT execution work.
2024-05-07 15:27:36 -07:00
Stella Laurenzo 6877302504
[NFC reformat] Applies pre-commit formatting to Python files. (#3244)
This is a large change because prior to this point, Python files in the
project were not consistently formatted. This reformats them all with
black defaults.

Based on experience with prior projects, if you have a dev/long-term
branch with Python patches, you can minimize merge conflicts prior to
rebasing to include this commit by running `black` on your modified
Python files, squashing, and then rebasing/merging.
2024-04-27 14:16:31 -07:00
Aart Bik 491f4820f5
[torch-mlir][sparse] pre-pend named buffers to parameter list (#3178)
Weights, biases, and other model parameters appear as a separate data
structure from the traced graph, but are needed when running the
MLIR-compiled code; this PR implements that extended functionality.
2024-04-17 14:44:05 -07:00
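A rough sketch of the calling convention this implies (the helper is hypothetical; the real logic lives in the importer/JIT utilities):

```python
import torch.nn as nn

def gather_inputs(model: nn.Module):
    # Buffers and parameters live outside the traced graph but must be
    # fed to the MLIR-compiled entry point; buffers come first, matching
    # the "pre-pend" in the title.
    buffers = [b.detach().numpy() for _, b in model.named_buffers()]
    params = [p.detach().numpy() for _, p in model.named_parameters()]
    return buffers + params

model = nn.BatchNorm1d(3)  # has buffers (running stats) and parameters
inputs = gather_inputs(model)
```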
Aart Bik 307f49f566
[torch-mlir][sparse] support sparse tensor output (#3152)
Sparse inputs and outputs are now fully supported! They always consist
of their constituent buffers, passed as numpy arrays. Sparse on!
2024-04-12 09:56:32 -07:00
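What "constituent buffers" means in practice, as a small runnable illustration:

```python
import torch

# A sparse COO tensor is really a bundle of dense arrays, which is how
# sparse values cross the numpy boundary described above.
t = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse().coalesce()
indices = t.indices().numpy()  # 2 x nnz coordinate array
values = t.values().numpy()    # nnz values

# The tensor can be rebuilt from its constituents:
rebuilt = torch.sparse_coo_tensor(indices, values, tuple(t.shape))
```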
Aart Bik 184d8c13f4
[torch-mlir][sparse] add ID-net example (#3127)
First sparse-in/sparse-out example; it will be used
to make actual sparse output work!
2024-04-09 11:21:30 -07:00
Aart Bik 5797d3aa57
[torch-mlir][sparse] add a COO test for 3-dim (#3119)
This tests COO for more than two dimensions. Note that sparsity should really
propagate into the relu activation and the output, but such cleverness
needs to wait for the pending work in the PyTorch tree.
2024-04-08 16:46:51 -07:00
Rob Suderman ec4cb8be44
Bump LLVM to llvm/llvm-project@0030fc4ac7 (#3079)
Co-authored-by: Peiming Liu <peiming@google.com>
2024-04-01 16:34:59 -07:00
penguin_wwy f34c187ac4
Normalize type hints to be compatible with multiple Python versions (#3028)
Although we provide a wheel package for Python 3.8, it may actually
throw the following exception:
`TypeError: 'type' object is not subscriptable`
2024-03-15 08:29:48 -07:00
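The failure mode and one portable spelling (the commit may also use other equivalent fixes, e.g. `from __future__ import annotations`):

```python
# On Python 3.8, subscripting builtin types in eagerly-evaluated
# annotations fails at definition time:
#     def f(xs: list[int]) -> dict[str, int]: ...
#     TypeError: 'type' object is not subscriptable
from typing import Dict, List

def f(xs: List[int]) -> Dict[str, int]:  # works on 3.8 and later
    return {str(x): x for x in xs}
```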
Vivek Khandelwal 6e84752c39
build: manually update PyTorch version (#2992)
Set PyTorch and TorchVision version to nightly release 2024-03-07.
This commit also removes the deprecated constraints API:
342e7929b8

Signed-Off By: Vivek Khandelwal <vivekkhandelwal1424@gmail.com>
2024-03-07 21:42:38 +05:30
Aart Bik f21b76b68a
[torch-mlir][sparse] fixed merge conflict (#2967) 2024-02-28 17:14:00 -08:00
Peiming Liu e85a2a87c5
[torch-mlir][sparse] support e2e sparse kernels with COO inputs. (#2939) 2024-02-28 16:08:37 -08:00
Aart Bik 30212547a9
[torch-mlir][sparse] add JIT test for block sparse SpMV (#2955)
This required adding a "decompose" pass to the torch lowering, since
torch.mv was not directly handled by the lowering to linalg.
2024-02-27 11:49:32 -08:00
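The flavor of rewrite involved (the exact decomposition in the pass may differ):

```python
import torch

A = torch.randn(3, 4)
x = torch.randn(4)

# torch.mv has no direct linalg lowering here, but it can be expressed
# via matmul, which does:
direct = torch.mv(A, x)
decomposed = torch.matmul(A, x.unsqueeze(1)).squeeze(1)
assert torch.allclose(direct, decomposed)
```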
Aart Bik 4147b280ce
[torch-mlir][sparse] add block sparsity to mlir lowering (#2942)
Also note that we are in the process of proposing SparseTensorMetadata
to PyTorch FX graph export (see
https://github.com/pytorch/pytorch/pull/117907). This will hopefully
eventually replace the current data structures in torch-mlir.
2024-02-23 11:57:20 -08:00
Aart Bik c5d8c12469
[torch-mlir][sparse][NFC] fixed typo (#2917)
grammar police
2024-02-16 13:02:00 -08:00
Stella Laurenzo 5253282c55
[fx] Support mutation in ExportedProgram. (#2916)
As of https://github.com/pytorch/pytorch/pull/118969, `ExportedProgram`
has the long awaited fixes to correctly categorize various things
relating to parameters, buffers, mutated inputs and constants.

With this additional modeling, we are finally able to implement
(safely/soundly) the mutable semantics that were attempted on the
TorchScript path. The difference is that on that path, we had to
conservatively treat everything as mutable and run some dodgy heuristics
(which have been the cause of many bugs relating to
"MaximizeValueSemantics") to try to get back to an immutable state.

The new model supports mutability at the graph edges, allowing both user
inputs and buffers to be mutated (there is some more support than that,
but that is all I fully tracked through to implementation).

Therefore, when we receive programs like this, we now can selectively
enable mutation at the edges. This happens to be the mutability model
that IREE supports, which I expect to be a primary beneficiary. However,
there is nothing stopping anyone else from handling the `!torch.tensor`
types and the existing copy/overwrite ops that will be selectively
added.

Since this relies on API changes that will not release until 2.3, I'm
being a bit cautious about not refactoring existing facilities.
2024-02-16 09:46:30 -08:00
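A minimal example of the buffer mutation being modeled, assuming a sufficiently recent PyTorch (2.3+ per the note above):

```python
import torch
import torch.nn as nn
from torch.export import export

class Counter(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x):
        self.count.add_(1)  # in-place buffer mutation at the graph edge
        return x + self.count

ep = export(Counter(), (torch.randn(2),))
# The signature records which buffers the program mutates.
print(ep.graph_signature.buffers_to_mutate)
```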
Aart Bik 24c2fc0b5f
[torch-mlir][sparse] add JIT test to expose pending issues (#2906)
This test exposes issues that need fixing:
(1) propagate sparsity into the FX graph (over elt-wise)
(2) batched dimensions need a new "dense(batch)" format
2024-02-13 13:42:56 -08:00
Aart Bik b6f4ca512e
[torch-mlir][sparse] sparsity metadata refinement (#2901)
Various improvements on sparsity metadata:

(1) define a single data structure for all sparsity-related metadata
(2) handle batched dense dimensions, as well as dense subtensor
dimensions
(3) refine sparsity propagation for deeper networks
2024-02-12 16:10:57 -08:00
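A hypothetical sketch of what "a single data structure" for sparsity metadata might look like (field names are illustrative, not the actual torch-mlir definition):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SparsityMeta:
    layout: str                  # e.g. "coo", "csr", "block"
    batch_dim: int               # leading batched dense dimensions
    sparse_dim: int              # number of sparse dimensions
    dense_dim: int               # trailing dense subtensor dimensions
    blocksize: Optional[Tuple[int, int]] = None  # for block sparsity
```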
Aart Bik be8375d350
[torch-mlir][sparse] implement first sparse_jit end-to-end path (#2894)
This PR introduces a sparse_jit wrapper that can run simple models with
sparse tensor inputs end-to-end. The implementation shows all required
components for modifying sparse tensor types with a 1:N relation at the
call sites. Two tests show that the JIT runs end-to-end while computing
the correct results.

More details to follow (generalizing to COO and different ranks, as well
as support for *output* sparse tensors), but the general concepts are
all here now.

**_Update: Thanks to Rob, bump to proper LLVM/MLIR hash is done!_**

_**NOTE that all parameter passing changes are nicely done "downstream"
in MLIR, so very little changes are required in torch-mlir code
proper**_

---------

Co-authored-by: Franz Haniel <77495327+frafranz@users.noreply.github.com>
Co-authored-by: Franz Haniel <franz.haniel@amd.com>
2024-02-12 10:04:54 -08:00
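A sketch of that 1:N call-site expansion, at the torch level (the helper name is illustrative; the real wrapper does this under the hood, and the actual MLIR ABI uses positions/coordinates/values buffers):

```python
import torch

def flatten_args(*args):
    # Each sparse argument expands into its constituent buffers before
    # being handed to the compiled entry point.
    flat = []
    for a in args:
        if a.is_sparse:
            a = a.coalesce()
            flat += [a.indices().numpy(), a.values().numpy()]
        else:
            flat.append(a.numpy())
    return flat

buffers = flatten_args(torch.randn(4, 4), torch.eye(4).to_sparse())
```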
Aart Bik 105aad6f57
[torch-mlir] provide FX traced graph importer for sparse tensors (#2817)
Note that we are waiting for actual FX traced graph support for sparse
tensors. For details see

https://github.com/pytorch/pytorch/issues/117188

Until then, however, we provide this clever importer that builds the FX
traced graph for the dense case and then puts a sparse annotation back on
the parameters.

With an import test.
2024-01-30 21:22:12 -08:00
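A hedged sketch of that workaround (the helper is hypothetical; the real importer lives in torch-mlir's FX importer code):

```python
import torch
from torch.export import export

def trace_with_sparse_annotations(model, sparse_input):
    # Trace with a *dense* stand-in, since FX tracing of sparse inputs
    # is not yet supported upstream...
    ep = export(model, (sparse_input.to_dense(),))
    # ...then record which input was sparse, so the importer can put the
    # sparse annotation back on the corresponding MLIR argument.
    annotations = {0: sparse_input.layout}  # e.g. torch.sparse_coo
    return ep, annotations
```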