Fix the in-place tensor update issue we had, where TorchScript
execution would update the input value in place, leaving the actual
test unable to see the original input value.
This commit adds more test cases for the `aten::index_put` op.
This commit also fixes formatting issues in the test file `index_put.py`.
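For reference, a minimal sketch of the in-place vs. out-of-place
behavior the tests need to observe (illustrative only, not the actual
test case):

```python
import torch

# `aten::index_put` writes `values` into `input` at `indices`.
# The functional form returns a new tensor and leaves `input` intact;
# the in-place variant (`index_put_`) mutates `input` directly, which
# is exactly the mutation the tests must not let leak into their inputs.
input = torch.zeros(3)
indices = (torch.tensor([0, 2]),)
values = torch.tensor([1.0, 2.0])

out = torch.index_put(input, indices, values)  # `input` is unchanged
input.index_put_(indices, values)              # `input` is mutated in place
```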
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
This commit adds lowering of the `aten.ceil.float` op.
This commit also fixes formatting for the file `scalar.py`.
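For context, a minimal TorchScript function that exercises this op (a
sketch; the actual tests live in `scalar.py`):

```python
import math
import torch

@torch.jit.script
def ceil_float(x: float) -> int:
    # In TorchScript, math.ceil on a Python float compiles to the
    # `aten.ceil.float` op in the graph.
    return math.ceil(x)

print(ceil_float(2.3))  # 3
```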
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
Compiling torch-mlir against a source build of PyTorch, or against an
official wheel compiled with the new C++ stdlib ABI, fails because
torch-mlir doesn't know how to set compiler flags to remain compatible.
This changes the way torch-mlir inspects PyTorch and tries to match its
ABI settings more closely, regardless of whether it's the common
official wheel or some other build.
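As a rough illustration of the probing involved (a sketch, not the
actual build logic):

```python
# PyTorch records which C++ stdlib ABI it was built with; a build can
# read that flag and pass a matching -D_GLIBCXX_USE_CXX11_ABI to the
# compiler instead of assuming the official-wheel default.
import torch

abi = 1 if torch.compiled_with_cxx11_abi() else 0
print(f"-D_GLIBCXX_USE_CXX11_ABI={abi}")
```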
I wasn't able to find exactly which frontend situation creates this,
but `torch.jit.trace` will sometimes create functions where the
`jit::Block`'s param node has refined tensor types. So we need to
adjust the function's formal param types to those refined types.
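A small demonstration of the phenomenon (tracing is one way to observe
it, though the exact frontend path that triggers the issue is unknown,
per above):

```python
import torch

def f(x):
    return x + 1

# Tracing records concrete tensor information, so the traced graph's
# input can carry a refined type like Float(2, 3) rather than the
# generic Tensor type.
traced = torch.jit.trace(f, torch.ones(2, 3))
print(traced.graph)
```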
The updated LLVM code includes a patch that makes it possible to create
bfloat16 array attributes, enabling a separate torch-mlir patch to
flesh out support for the bfloat16 type.
Prior to this patch, the result type for several tensor operations could
only be float32, float64, or null. This patch adds bf16 to the list of
allowed result types.
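For reference, bf16 result types arise from ordinary tensor ops in
PyTorch, which is the case these type lists now have to model:

```python
import torch

t = torch.ones(2, 2, dtype=torch.bfloat16)
# The result of an elementwise op on bf16 operands is itself bf16,
# a result type the patch now allows.
print((t + t).dtype)  # torch.bfloat16
```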
Prior to this patch, the top-level README did not include the line for
running the Python regression tests in `//python/test`. This patch
fixes the problem by adding a line to run the `check-torch-mlir-python`
target.
* Add oneshot release snapshot for test/ondemand
Add some build scripts to test the new release flow based on IREE.
Won't affect current builds; once this works well we can plumb it in.
Builds with the manylinux Docker image.
* Fixes a few issues found when debugging powderluv's setup.
* Linking against Python3_LIBRARIES is optional. Check for them and skip linking if they don't exist for this config.
* Clean and auditwheel need to operate on sanitized package names (i.e. "torch_mlir" rather than "torch-mlir").
* Adds a pyproject.toml file that pins the build dependencies needed to detect both Torch and Python (the MLIR Python build was failing detection because NumPy wasn't in the pip venv).
* Commented out auditwheel: these wheels are not PyPI compliant since they weakly link to libtorch at runtime. However, they should be fine to deploy to users.
* Adds the --extra-index-url to the pip wheel command, allowing PyTorch to be found.
* Hack setup.py to remove the _mlir_libs dir before building (see the sketch after this list). This keeps back-to-back versions from accumulating in the wheels for subsequent versions. IREE has a more principled way of doing this, but what I have here should work.
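A rough sketch of that hack (the directory layout below is an
assumption, not the actual one):

```python
# In setup.py, before invoking the build: drop any stale _mlir_libs
# directory so artifacts from a previous version don't accumulate in
# the next wheel. (Hypothetical path; adjust to the real layout.)
import os
import shutil

mlir_libs = os.path.join("python_packages", "torch_mlir", "torch_mlir", "_mlir_libs")
if os.path.isdir(mlir_libs):
    shutil.rmtree(mlir_libs)
```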
Co-authored-by: Stella Laurenzo <stellaraccident@gmail.com>
Added dynamic registration of the return function to the execution
engine. This makes sure that different/multiple return types are
supported.
Also updated the `.style.yapf` indentation to 4.
This avoids issues where PyTorch version drift has made things
incompatible.
One caveat is that you will need to specify
`-f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
--pre` on the command line for pip to know where to find the nightly
packages (there is no way around this). This is easiest to do by
passing `-r requirements.txt` on the same pip command line.
This makes it much easier to convert models and hides all the
ClassAnnotator complexity.
This also adds a new example `torchscript_resnet18_all_output_types.py`
which shows the ResNet18 IR for all output types.
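A hedged sketch of the simplified flow (assuming the convenience entry
point is `torch_mlir.compile`; exact names and output-type spellings
may differ):

```python
import torch
import torchvision.models as models
import torch_mlir

# One call converts the model; the ClassAnnotator plumbing is hidden
# behind the API.
resnet18 = models.resnet18(pretrained=True).eval()
module = torch_mlir.compile(resnet18, torch.ones(1, 3, 224, 224),
                            output_type="linalg-on-tensors")
print(module)
```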
Also, this moves `run_pipeline_with_repro_report` to
`torch_mlir.compiler_utils`.
Since they run in distinct jobs, using the same ccache would
cause one job to overwrite the cache of the other.
See https://github.com/ljfitz/torch-mlir/pull/16 for proof that this
works: the first build takes a long time, but ccache takes over in the
dummy commit.