Thanks to TorchDynamo's great layering and design, a basic lockstep
debugger takes only about 100 lines of code.
This should allow us to deprecate eager_mode, since AFAIK the only
interesting use case it was really supporting was letting downstream
users write lockstep debuggers.
NOTE: The exact reporting and interface here are subject to change. Please
try it out and provide feedback (or patches :) ).
- make_fx should not drop source locations: https://github.com/pytorch/pytorch/issues/90276
- Report tensors better (huge tensors should be summarized)
- Maybe don't abort, but just warn?
- Allow customizing atol/rtol.
- How best to print the failing node? And include surrounding graph
context?
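To give a feel for the mechanics, here is a minimal sketch of a lockstep
debugger built on `torch.fx.Interpreter` (the class name, the
`run_op_on_backend` hook, and the error reporting are illustrative
assumptions, not the actual interface):
```python
import torch
import torch.fx

class LockstepDebugger(torch.fx.Interpreter):
    """Illustrative sketch: execute the graph node by node, sending each
    op through a backend-under-test and comparing against eager PyTorch."""

    def __init__(self, gm: torch.fx.GraphModule, run_op_on_backend,
                 atol: float = 1e-5, rtol: float = 1e-5):
        super().__init__(gm)
        self.run_op_on_backend = run_op_on_backend  # hypothetical hook
        self.atol = atol
        self.rtol = rtol

    def run_node(self, node: torch.fx.Node):
        golden = super().run_node(node)  # eager PyTorch result
        if node.op == "call_function":
            args, kwargs = self.fetch_args_kwargs_from_env(node)
            actual = self.run_op_on_backend(node.target, args, kwargs)
            if isinstance(golden, torch.Tensor) and not torch.allclose(
                    actual, golden, atol=self.atol, rtol=self.rtol):
                raise AssertionError(f"Mismatch at: {node.format_node()}")
        return golden
```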
This commit replaces the `InsertRngGlobalsPass` with a
`TorchConversionToMLProgram` pass. It also adds an `MLProgramBufferize`
pass that bufferizes ml_program dialect ops so they can run on
RefBackend.
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
Summary of changes:
- Change `ShapedType::kDynamicSize` -> `ShapedType::kDynamic`
- `llvm::NoneType` has been deprecated; change `convertScalarToDtype` to
  use `llvm::None`
This commit replaces the LCG algorithm that was being used by the
`TorchToLinalg` lowering of `AtenUniformOp` to generate random numbers
with the `squares64` algorithm, because the LCG algorithm was producing
tensors that were highly correlated with one another.
Squares64 algorithm: https://arxiv.org/abs/2004.06278
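For reference, a Python transcription of the generator from the paper
(the masks make the implicit mod-2^64 arithmetic explicit; consult the
paper for the authoritative C version):
```python
MASK64 = (1 << 64) - 1  # all arithmetic below is mod 2^64

def squares64(ctr: int, key: int) -> int:
    y = x = (ctr * key) & MASK64
    z = (y + key) & MASK64
    x = (x * x + y) & MASK64; x = ((x >> 32) | (x << 32)) & MASK64  # round 1
    x = (x * x + z) & MASK64; x = ((x >> 32) | (x << 32)) & MASK64  # round 2
    x = (x * x + y) & MASK64; x = ((x >> 32) | (x << 32)) & MASK64  # round 3
    t = x = (x * x + z) & MASK64                                    # round 4
    x = ((x >> 32) | (x << 32)) & MASK64
    return t ^ (((x * x + y) & MASK64) >> 32)                       # round 5
```
Being counter-based, each distinct counter value yields an
independent-looking sample, which is what removes the correlation
between tensors.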
Closes https://github.com/llvm/torch-mlir/issues/1608
Summary of changes:
- Replace call to `MemoryEffectOpInterface::hasNoEffect`
with `isMemoryEffectFree`.
- Fix handling of dynamic dims, since the `kDynamicSize` value changed
  in llvm from `-1` to `std::numeric_limits<int64_t>::min()`
- Add `makeShapeLLVMCompatible` and `makeShapeTorchCompatible`
  utilities that convert shapes between the Torch and MLIR semantics
  (see the sketch after this list)
- Update tags:
  llvm: 147fe9de29dc13c14835127b35280c4d95c8e8ba
  mhlo: 1944b5fa6062ec4c065d726c9c5d64f1487ee8c5
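Roughly, the two utilities swap the dynamic-dimension sentinels between
the two conventions (a Python sketch of C++ utilities; the constants
come from the bullets above):
```python
TORCH_UNKNOWN_SIZE = -1     # Torch semantics for a dynamic dim
MLIR_KDYNAMIC = -(2 ** 63)  # std::numeric_limits<int64_t>::min()

def make_shape_llvm_compatible(shape):
    return [MLIR_KDYNAMIC if d == TORCH_UNKNOWN_SIZE else d for d in shape]

def make_shape_torch_compatible(shape):
    return [TORCH_UNKNOWN_SIZE if d == MLIR_KDYNAMIC else d for d in shape]
```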
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
The current implementation sets the `nextSeed` value to `temp & 127`,
which is wrong. The last step of the LCG algorithm for the multiplier
and increment chosen should be `temp % 2^64`, i.e. `temp & ((1 << 64) -
1)`. However, because we are dealing with i64 values, the modulus
operation happens automatically, so it is not needed.
See Donald Knuth's values for LCG here:
https://en.wikipedia.org/wiki/Linear_congruential_generator
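For reference, one step of the LCG with Knuth's MMIX parameters from
that table looks like this (a Python sketch; the mask plays the role of
the automatic i64 wraparound):
```python
MMIX_MULTIPLIER = 6364136223846793005  # Knuth's MMIX multiplier a
MMIX_INCREMENT = 1442695040888963407   # Knuth's MMIX increment c
MASK64 = (1 << 64) - 1                 # modulus m = 2^64

def lcg_next(seed: int) -> int:
    # nextSeed = (a * seed + c) mod 2^64; in i64 arithmetic the modulus
    # happens automatically via wraparound.
    return (MMIX_MULTIPLIER * seed + MMIX_INCREMENT) & MASK64
```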
-- This commit fixes a bug in the `computeReductionType` API.
-- The bug pertains to the removal of `dim` from the `sizes` array.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
There are a few e2e tests that take several very large tensors as
input, which leads to the e2e test suite leaking too much
memory. Running the suite sequentially on the refbackend locally
resulted in a total memory usage of 12.5 GB.
Many of the tests that take large tensors don't actually need
such large tensors to pass, and some that take several large tensors
as input are just doing the same thing multiple times. This commit
reduces the size of some of the tensors and removes repetitive parts
of tests to reduce the memory usage to a total of 3 GB.
Set PyTorch and TorchVision version to nightly release 2022-11-22.
Add failing tests to the xfail set.
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
`np.bool is bool` and will never be returned as a dtype of an
`np.ndarray`, so we don't need to handle it here.
```
>>> import numpy as np
>>> a = np.ndarray([1], dtype=bool)
>>> a.dtype.type is np.bool_
True
```
More info here:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
-- This commit adds decomposition logic for `aten._softmax` when
`half_to_float` is `True`.
-- An e2e test case will be added once support for half to float conversion for
`aten._softmax` is added upstream.
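The decomposition follows the usual numerically stable softmax, with the
input cast to float up front when `half_to_float` is `True` (a sketch of
the assumed semantics, not the exact lowering):
```python
import torch

def softmax_half_to_float(x: torch.Tensor, dim: int) -> torch.Tensor:
    x = x.to(torch.float32)                      # the half-to-float cast
    shifted = x - x.amax(dim=dim, keepdim=True)  # subtract max for stability
    exp = shifted.exp()
    return exp / exp.sum(dim=dim, keepdim=True)  # float32 result
```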
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
For reasons that I haven't yet fully tracked down, the TorchDynamo
TestConfig seems to result in tensors that cannot be pickled. They seem
to be holding some sort of weak handles to a `torch.fx.graph.Graph`.
Here is the object structure that leads to the unpickleable object:
```
(<function _rebuild_tensor_v2 at 0x7f56346d56c0>, <class 'torch.Tensor'>, ( 1.0...
{<object object at 0x7f557529e6b0>: <WeakKeyDictionary at 0x7f556a3efbb0>}
{'data': {<weakref at 0x7f5615372ed0; to 'PythonKeyTracer' at 0x7f556a3ee5c0>: _...
<class 'torch.fx.graph.Graph'>
<class 'torch._ops.OpOverloadPacket'>
TypeError("cannot pickle 'torch._C.FunctionSchema' object")
```
Upstream bug filed: https://github.com/pytorch/pytorch/issues/89626
This adds a basic e2e Config for TorchDynamo using
Linalg-on-Tensors/RefBackend.
But TorchDynamo is pretty orthogonal to
various other pieces, so it should compose nicely with variations like:
- Switching out all the backends (Linalg-on-Tensors, TOSA, MHLO)
- PyTorch functionalization and decompositions
- Taking the example inputs and compiling with all dynamic or all static
shapes without duplicating tests.
This adds it to the CI, but there are still a lot of XFAILs.
This also adds a helper `from torch_mlir.dynamo import
make_simple_dynamo_backend` which simplifies some of the steps for
making a Torch-MLIR-based TorchDynamo backend. We include "simple" in
the name because we are going to be exploring various alternatives next
as part of the long-term roadmap.
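A hypothetical sketch of how a backend built on this helper might be
wired up (the `torch_mlir.compile` call and the decorator usage reflect
the API of this era; treat the details as assumptions):
```python
import torch
import torch_mlir
from torch_mlir.dynamo import make_simple_dynamo_backend

@make_simple_dynamo_backend
def my_torch_mlir_backend(gm: torch.fx.GraphModule, example_inputs):
    # Import the FX graph into MLIR at the Linalg-on-Tensors level.
    module = torch_mlir.compile(gm, example_inputs,
                                output_type="linalg-on-tensors")
    # ... hand `module` to RefBackend and wrap the result ...
    return gm.forward  # placeholder: fall back to eager execution
```
The resulting function can then be used like any other TorchDynamo
backend (e.g. via `torch._dynamo.optimize`).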
The next steps are:
- Burn down all the XFAILs.
- Start working on the pieces from the [long-term roadmap](https://github.com/llvm/torch-mlir/blob/main/docs/long_term_roadmap.md).
- Add functionalization/decompositions into the TorchDynamo flow and
remove reliance on the current Torch-MLIR "frontend".
- Write a pure-Python direct FX->MLIR importer.
- Hook up the new PyTorch symbolic shape stuff.
- Explore PrimTorch decompositions for simplifying backends.
Until recently, the metadata file in the torchvision package included
the nightly version of the torch package, but since that is no longer
the case, our RollPyTorch workflow is broken.
As a workaround, this patch uses the `pip download` command's ability to
fetch the dependent torch package for the specified version of
torchvision, before peeking into the WHL file for the torch package to
determine the release version and the commit hash.
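For illustration, peeking into the wheel for the version information
might look like this (a sketch; the `torch/version.py` layout inside the
wheel is the assumption):
```python
import zipfile

def torch_version_info(whl_path: str) -> str:
    # torch/version.py inside the wheel defines __version__ (the release
    # version) and git_version (the commit hash).
    with zipfile.ZipFile(whl_path) as whl:
        return whl.read("torch/version.py").decode("utf-8")
```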
The upload timestamp of the nightly torchvision package has drifted
beyond the scheduled time of the RollPyTorch action because of the
daylight saving time change. As a result, the RollPyTorch action now
picks up the torchvision package from a day earlier instead of the most
recent package.
This patch schedules the RollPyTorch action to start one hour later than
before so that it continues to pick the most recent nightly package.
This commit fixes the aten.mean and aten.mean.dim op decompositions to
support large-sized inputs.
It also fixes the formatting of the file stats.py.
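The basic shape of the decomposition is a sum followed by a division by
the reduced element count (a sketch; the dtype handling this commit
fixes is not reproduced here):
```python
import torch

def decomposed_mean_dim(x: torch.Tensor, dim: int) -> torch.Tensor:
    # aten.mean.dim as sum / count over the reduced dimension.
    return x.sum(dim=dim) / x.shape[dim]
```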
Signed-off-by: Vivek Khandelwal <vivek@nod-labs.com>
The purpose of the test suite is to accelerate the development of the
compiler. However, we had various tests there that were not expected to
work, that exercised no in-progress work, and that nobody was actively
working on. Having such tests in our test suite just adds clutter and
slows down development on the compiler.