torch-mlir/lib/Dialect/Torch/Transforms
zjgarvey 140cad5659
Add More Scalarize Shapes Patterns (#3810)
### new patterns:

1. Propagates `aten.broadcast_to` ops of a single value to an
`aten.full` op (see the first sketch after this list).
2. Propagates arithmetic operations through a templated pattern class
that associates certain tensor arithmetic ops with their integer-scalar
counterparts (second sketch below). These ops are a major blocker right
now, since some models do a bunch of rank-0 arithmetic with tensor ops.
See the lit test for an interesting example that pads an input to the
smallest shape whose `dim0` is divisible by twelve. If you think this is
convoluted, you haven't been staring at ONNX-generated IR long enough.
3. Adds a stronger folder for `aten.eq.int` to fold `size.int == 0` to
`false` (third sketch below). See the comment in that conversion pattern
for more justification as to why this assumption is acceptable there.
This is another major blocker for models, since this missing fold
propagates to a lack of folding for subsequent `where.self` operations.
4. Adds `AtenSqueezeDim` to the existing `FoldAtenSqueezeOpPattern`.
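
To make the first pattern concrete, here is a minimal sketch of the idea, assuming the standard MLIR pattern API. It is not the committed code, which recovers the splat value through the pass's literal/scalar-tensor utilities and handles more input shapes than the rank-0 case matched here:

```cpp
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Dialect/Torch/IR/TorchOps.h"

using namespace mlir;
using namespace mlir::torch::Torch;

namespace {
// A broadcast of a single known value is just a splat, so it can be
// rewritten as aten.full over the same size list.
struct BroadcastToFull : public OpRewritePattern<AtenBroadcastToOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(AtenBroadcastToOp op,
                                PatternRewriter &rewriter) const override {
    auto selfTy = dyn_cast<ValueTensorType>(op.getSelf().getType());
    // Only a rank-0 input is guaranteed to be a pure splat.
    if (!selfTy || !selfTy.hasSizes() || !selfTy.getSizes().empty())
      return rewriter.notifyMatchFailure(op, "expected a rank-0 input");
    Location loc = op.getLoc();
    // Read the single element back out as a !torch.number scalar.
    Value fill = rewriter.create<AtenItemOp>(
        loc, rewriter.getType<NumberType>(), op.getSelf());
    Value none = rewriter.create<ConstantNoneOp>(loc);
    // aten.full(size, fill_value, dtype, layout, device, pin_memory);
    // the trailing operands stay none.
    rewriter.replaceOpWithNewOp<AtenFullOp>(op, op.getType(), op.getSize(),
                                            fill, none, none, none, none);
    return success();
  }
};
} // namespace
```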
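
The second pattern is roughly the following shape (same includes and namespaces as above; simplified, e.g. the committed version also handles the `alpha` operand of `add.Tensor`/`sub.Tensor` and more operand forms than rank-0):

```cpp
namespace {
// One pattern templated on a (tensor op, scalar op) pair: rewrite rank-0
// integer tensor arithmetic into its Torch scalar counterpart, then wrap
// the scalar back up as a rank-0 tensor so existing uses still type-check.
template <typename TensorOp, typename ScalarOp>
struct ScalarizeArith : public OpRewritePattern<TensorOp> {
  using OpRewritePattern<TensorOp>::OpRewritePattern;
  LogicalResult matchAndRewrite(TensorOp op,
                                PatternRewriter &rewriter) const override {
    auto isRank0Int = [](Value v) {
      auto ty = dyn_cast<ValueTensorType>(v.getType());
      return ty && ty.hasSizes() && ty.getSizes().empty() && ty.hasDtype() &&
             isa<IntegerType>(ty.getDtype());
    };
    if (!isRank0Int(op.getSelf()) || !isRank0Int(op.getOther()))
      return rewriter.notifyMatchFailure(op, "expected rank-0 int operands");
    Location loc = op.getLoc();
    Type intTy = rewriter.getType<IntType>();
    Value lhs = rewriter.create<AtenItemOp>(loc, intTy, op.getSelf());
    Value rhs = rewriter.create<AtenItemOp>(loc, intTy, op.getOther());
    Value scalar = rewriter.create<ScalarOp>(loc, intTy, lhs, rhs);
    rewriter.replaceOpWithNewOp<PrimNumToTensorScalarOp>(op, op.getType(),
                                                         scalar);
    return success();
  }
};
} // namespace

// Instantiated once per supported pairing, e.g.:
//   patterns.add<ScalarizeArith<AtenAddTensorOp, AtenAddIntOp>,
//                ScalarizeArith<AtenSubTensorOp, AtenSubIntOp>,
//                ScalarizeArith<AtenMulTensorOp, AtenMulIntOp>,
//                ScalarizeArith<AtenDivTensorOp, AtenFloordivIntOp>>(context);
```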
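
And the `aten.eq.int` strengthening in item 3 amounts to something like this sketch (the committed pattern carries the full justification in its comment):

```cpp
namespace {
// Fold `aten.size.int(...) == 0` to false. The assumption (argued in the
// real pattern's comment) is that the shape code feeding these comparisons
// never actually sees a zero-sized dim.
struct FoldSizeEqZero : public OpRewritePattern<AtenEqIntOp> {
  using OpRewritePattern::OpRewritePattern;
  LogicalResult matchAndRewrite(AtenEqIntOp op,
                                PatternRewriter &rewriter) const override {
    auto isSizeVsZero = [](Value size, Value zero) {
      int64_t cst;
      return size.getDefiningOp<AtenSizeIntOp>() &&
             matchPattern(zero, m_TorchConstantInt(&cst)) && cst == 0;
    };
    // Match either operand order.
    if (!isSizeVsZero(op.getA(), op.getB()) &&
        !isSizeVsZero(op.getB(), op.getA()))
      return rewriter.notifyMatchFailure(op, "not a size.int == 0 compare");
    rewriter.replaceOpWithNewOp<ConstantBoolOp>(op, false);
    return success();
  }
};
} // namespace
```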

### other changes:
 
1. Adds two new anchor ops: `AtenArangeStartStepOp` and
`Torch::RuntimeAssertOp`. I've checked all possible sources of the
runtime assert ops, and they are always shape-related. The Arange op
only takes int inputs, and these are all shape-related. Also adds a size
check when getting a list from literal ops.
2. Improves the folders for int arithmetic ops to fold some common
patterns (illustrated after this list).
3. Adds the ability for `getListFromTensor` to retrieve some values from
scalar tensor ops.
4. Further cleans up `getListFromTensor` for readability.
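
For a flavor of item 2, below is the kind of identity a stronger folder can catch even when only one operand is constant (same includes as the sketches above). This is illustrative only, not the exact set of folds added:

```cpp
// Illustrative sketch of a stronger integer-arithmetic fold: `x + 0 -> x`
// even when `x` is not a constant. Fold hooks receive any constant
// operands as attributes via the adaptor.
OpFoldResult AtenAddIntOp::fold(FoldAdaptor adaptor) {
  if (auto rhs = dyn_cast_if_present<IntegerAttr>(adaptor.getB()))
    if (rhs.getValue().isZero())
      return getA();
  return nullptr;
}
```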

### points to scrutinize:

1. I made the choice to scalarize `div.Tensor` (int dtype result) to
`floordiv.int`. This is because our shape computations involving this
kind of arithmetic are never negative in practice, and we don't have a
"round towards zero" scalar int divide counterpart (floor and truncating
division only disagree on mixed signs; see the sketch after this list).
2. Anchoring on `RuntimeAssertOp` sounds really suspicious, and if
someone happens to add a runtime assert in the future that doesn't boil
down to shapes, it would add to the worklist considerably. We might be
able to get around this by adding `NoMemoryEffect` to ops which are
"ReadOnly", so that the inputs for the runtime asserts get CSE'd with
existing elements of the worklist before we even get to this pass.
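
A standalone check of the rounding claim in point 1: floor division and C-style round-towards-zero division agree everywhere except on mixed-sign operands, which shape arithmetic should never produce.

```cpp
#include <cassert>

// Floor division built on C++'s round-towards-zero integer '/'.
long long floordiv(long long a, long long b) {
  long long q = a / b; // truncates toward zero
  if (a % b != 0 && (a < 0) != (b < 0))
    --q; // adjust downward when operand signs differ
  return q;
}

int main() {
  assert(floordiv(7, 2) == 7 / 2); // 3 == 3: identical on non-negative input
  assert(floordiv(-7, 2) == -4);   // floor(-3.5) = -4 ...
  assert(-7 / 2 == -3);            // ... but trunc(-3.5) = -3
  return 0;
}
```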
2024-10-21 19:42:39 -05:00
AbstractInterpLibrary.cpp build: manually update PyTorch version (#3727) 2024-10-18 13:32:14 +05:30
AdjustCallingConventions.cpp Update to llvm/llvm-project@27ac46e6be (2024-6-12) (#3454) 2024-06-12 19:34:01 -07:00
CMakeLists.txt Add RestructureNonConstantAxes pass to address reduce op tests failing on non constant axes (#3600) 2024-08-26 14:06:06 -07:00
DecomposeComplexOps.cpp [Torch] support adaptive_max_pool1d when return_indices equals False (#3783) 2024-10-11 23:42:15 +08:00
DropAbstractInterpCalculations.cpp Update to LLVM 029313cc979ae71877b65794b1063d4e51184cc8 2023-03-21 04:16:20 -07:00
EraseModuleInitializer.cpp [NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
FuseQuantizedOps.cpp [torch] Basic support for per-channel quantized graphs (#3623) 2024-08-10 15:51:09 +02:00
GlobalizeObjectGraph.cpp [NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
InlineGlobalSlots.cpp Integrate LLVM at llvm/llvm-project@c13f806 (#3789) 2024-10-14 15:00:45 +02:00
LowerToBackendContract.cpp Add Decomposition for `Aten_SafeSoftmaxOp` (#3708) 2024-09-12 16:58:10 -05:00
MatchQuantizedOps.cpp [torch] Basic support for per-channel quantized graphs (#3623) 2024-08-10 15:51:09 +02:00
MaximizeValueSemantics.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
PassDetail.h llvm: bump tag to e1318078 (#781) 2022-04-26 12:27:51 -07:00
Passes.cpp [MLIR][TORCH] Add torch-onnx-to-torch-backend pipeline (#3801) 2024-10-21 11:20:44 -05:00
PrepareForGlobalizeObjectGraph.cpp [NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
RecomposeComplexOps.cpp [Torch] Add support for Meshgrid (#3462) 2024-06-14 23:59:08 +08:00
ReduceOpVariants.cpp build: manually update PyTorch version (#3627) 2024-08-19 12:03:56 +05:30
RefinePublicReturn.cpp [NFC] Remove unused header files (#3386) 2024-05-30 14:30:36 +08:00
ReifyAbstractInterpCalculationsUtils.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
ReifyAbstractInterpCalculationsUtils.h handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) 2023-03-24 21:50:01 -05:00
ReifyDtypeCalculations.cpp Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243) 2024-04-27 14:00:56 -07:00
ReifyShapeCalculations.cpp Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243) 2024-04-27 14:00:56 -07:00
RestructureNonConstantAxes.cpp Add RestructureNonConstantAxes pass to address reduce op tests failing on non constant axes (#3600) 2024-08-26 14:06:06 -07:00
ScalarizeShapes.cpp Add More Scalarize Shapes Patterns (#3810) 2024-10-21 19:42:39 -05:00
SimplifyAbstractInterpCalculationsUtils.cpp Fix deprecated uses of cast/dyn_cast/dyn_cast_or_null/isa (#3243) 2024-04-27 14:00:56 -07:00
SimplifyAbstractInterpCalculationsUtils.h Replace RefineTypes with dtype functions (#2105) 2023-05-12 13:40:45 -07:00
SimplifyDtypeCalculations.cpp [NFC] Change to *cast instead of .*cast variants (#3405) 2024-05-30 23:45:13 -07:00
SimplifyShapeCalculations.cpp Add AtenSliceTOp Canonicalization to SimplifyShapeCalculations pass (#3791) 2024-10-14 14:41:31 -05:00