Commit Graph

276 Commits (5620fe030e75215f65bca2b2ac5a72b87a3adf9d)

Author SHA1 Message Date
Ramiro Leal-Cavazos 5620fe030e
Add 1D, weight, and reduction support to nll_loss_backward (#729)
This commit adds the following support to the op `nll_loss_backward`:
- `input` tensor can be rank-1
- `weight` parameter
- `reduction` parameter
- `target`, `grad_output`, `total_weight` can be rank-0
- Checks that input tensors are of the expected type
2022-04-04 10:57:49 -07:00
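As an illustration (not part of the commit), a small PyTorch snippet exercising the newly supported cases, reaching `nll_loss_backward` through autograd; this assumes a PyTorch build that accepts no-batch-dim (rank-1) inputs to `nll_loss`:

```python
import torch
import torch.nn.functional as F

# Rank-1 input (C,), rank-0 target, explicit weight and reduction.
logits = torch.randn(5, requires_grad=True)
target = torch.tensor(2)
weight = torch.rand(5)
loss = F.nll_loss(F.log_softmax(logits, dim=0), target,
                  weight=weight, reduction="mean")
loss.backward()           # autograd invokes nll_loss_backward under the hood
print(logits.grad.shape)  # torch.Size([5])
```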
Sean Silva 14cf87633c
Add link to forum post describing `__torch_dispatch__` 2022-04-01 10:10:43 -07:00
Ramiro Leal-Cavazos 51d4d55f8a
Add support for multi-dim input to `index_put_impl` (#722)
This commit adds support for multi-dimensional tensors as input to the
`_index_put_impl_` op. The support was to some degree already there,
since `ScatterOp` already supports multi-dimensional tensors. This
commit also adds a bit more error checking to `index_put` and
refactors the code for creating `ScatterOp`s to mimic the way one
would make a `Linalg::GenericOp`.
2022-03-31 09:27:21 -07:00
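For illustration, a minimal eager-mode example of the kind of multi-dimensional `index_put_` call this enables (the public `index_put_` routes to `aten._index_put_impl_` internally):

```python
import torch

x = torch.zeros(3, 4)        # multi-dimensional input
idx = torch.tensor([0, 2])   # rows to overwrite
vals = torch.ones(2, 4)
x.index_put_((idx,), vals)   # lowers to aten._index_put_impl_
print(x)                     # rows 0 and 2 are now all ones
```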
Sean Silva c17c0a6ba2 Fix for 0-size dim inferred incorrectly.
The issue was in the canonicalizer for torch.aten.ge.int -- in cases
where the operands were swapped, it would miscompile. This issue is
fixed and folding support generalized to `torch.aten.size.int < 0` as
well.

Fixes #716
2022-03-30 16:36:15 -07:00
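A rough Python sketch of the generalized folding rule (the helper names here are hypothetical stand-ins; the real code is a C++ canonicalizer/folder in TorchOps.cpp): a tensor dimension size is always non-negative, so comparisons against non-positive constants fold to constants regardless of which side the size operand is on.

```python
# Sketch only: is_size_value and const_int are hypothetical helpers.
def fold_int_compare(op, lhs, rhs):
    c = const_int(rhs)
    if is_size_value(lhs) and c is not None and c <= 0:
        if op == "ge":
            return True   # size >= c (c <= 0) is always true
        if op == "lt":
            return False  # size < c (c <= 0) is always false
    return None           # not foldable
```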
Gaurav Shukla 969785d1b6 [LINALG] Add E2E support for `aten.where.[Scalar|ScalarSelf|ScalarOther]` ops
This commit decomposes different variants of `aten.where.*` op into
`aten.where.Self` op. It covers `aten.where.Scalar`,
`aten.where.ScalarSelf` and `aten.where.ScalarOther` ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-30 20:36:48 +05:30
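A minimal eager-mode sketch of the decomposition, assuming a PyTorch build whose `torch.where` accepts Python scalars (the `aten.where.ScalarOther` overload): the scalar operand is materialized as a tensor so that `aten.where.Self` can be used.

```python
import torch

cond = torch.tensor([True, False, True])
x = torch.tensor([1.0, 2.0, 3.0])
out = torch.where(cond, x, 0.5)                      # aten.where.ScalarOther
ref = torch.where(cond, x, torch.full_like(x, 0.5))  # decomposed form
assert torch.equal(out, ref)
```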
Vivek Khandelwal 2597c481f6 [MLIR][TORCH] Add E2E support for aten.new_empty op
This commit decomposes `aten.new_empty` op into `aten.empty.memory_format` op.

This commit also fixes the dtype handling of the constant-tensor-allocation-like ops.
Previously, the result dtype was inferred from the result type; now it is
computed according to the op's original definition.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-30 13:21:01 +05:30
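Illustrative example (not from the commit): `new_empty` inherits dtype and device from `self`, which is what the decomposition to `aten.empty.memory_format` must preserve.

```python
import torch

x = torch.randn(2, 3, dtype=torch.float64)
y = x.new_empty((4, 4))  # decomposes to aten.empty.memory_format
# Contents are uninitialized, so only the metadata is checked.
assert y.shape == (4, 4) and y.dtype == x.dtype and y.device == x.device
```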
Sean Silva 140babd952 Add minimal support for Union types.
A recent PyTorch commit made ConstantPad2d call a helper function with a
`Union[int, float]` type annotation. This commit adds minimal support for
representing and dealing with that.
https://github.com/pytorch/pytorch/pull/73287

Changes:
- Adding support for `!torch.union<T1, T2, T3>`/`Torch::UnionType`,
  along with the importer and CAPI code.
- Add support in isValidSubtype for union types.
- Adding a canonicalizer for `torch.derefine` to help simplify some code
  that derefines to a UnionType (this also fixes #664).

There is still more work to do for really supporting UnionType well,
such as canonicalizing UnionType's so that they can be compared with
pointer equality.
2022-03-29 17:45:48 -07:00
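For context, the kind of TorchScript'able signature this enables the importer to handle (an illustrative function, not the upstream helper, assuming a PyTorch version with TorchScript Union support):

```python
from typing import Union

import torch

@torch.jit.script
def pad_value(v: Union[int, float]) -> float:
    # The importer models this parameter as !torch.union<int, float>.
    return float(v)
```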
Maksim Levental 25ba51b2af
This commit decomposes aten._reshape_alias op into aten.view op. (#690) 2022-03-28 23:54:28 -05:00
Maksim Levental 3e999beaea
Small bug fixes in eager mode (#691) 2022-03-28 13:31:07 -05:00
Sean Silva 0378c75b35 Centralize all test serialization logic. 2022-03-28 10:17:13 -07:00
Sean Silva 6b637a9fd9 Move e2e test definitions into the `torch_mlir_e2e_test` package
This is the first step to making the e2e framework convenient to use
by downstream backends.
2022-03-25 13:56:41 -07:00
Gaurav Shukla 02b6d04eb4 [LINALG] Add E2E support for `aten.zero_` op
This commit adds decomposition of `aten.zero_` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-25 12:46:50 +05:30
Sean Silva 94df096c11
Add note to not edit upstream_shape_helpers.py 2022-03-24 09:32:19 -07:00
Qiang Fu f7c7bb800c
Add non-default dtype support for a few elementwise math ops. (#687)
* fix type inference
* fix Torch2Linalg conversion
* add test cases
2022-03-23 13:35:43 -07:00
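An eager-mode example of the behavior being matched (illustrative): integer inputs to floating-point elementwise ops promote to the default float dtype, which the type inference and Torch-to-Linalg conversion must reproduce.

```python
import torch

x = torch.tensor([1, 2, 3])  # int64 input
y = torch.log(x)             # promoted to a floating-point result
print(y.dtype)               # torch.float32 (the default dtype)
```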
max fe8ac57e6d This PR implements an eager mode backend for PyTorch through the torch-mlir framework. This is accomplished by overriding the `__torch_dispatch__` class method on wrapper subclass `TorchMLIRTensor(torch.Tensor)`.
Effectively, this mode works by compiling op by op as the NN is eagerly executed by PyTorch. That compilation entails building a representation of the op that can be `torch.jit.script`ed, importing it using `ModuleBuilder`, and then executing it (e.g., with `RefBackendLinalgOnTensorsBackend`). This mode includes a fallback to conventional PyTorch if anything in the torch-mlir compilation process fails (e.g., an unsupported op).

Currently, all e2e tests pass except for two that involve an upstream PyTorch bug (https://github.com/pytorch/pytorch/issues/74400).

High priority next steps:

1. A compile cache in order to speed up reruns of the same NN.
2. Integration with IREE (though not in this repo).
3. Integration with `torch.distributed`.
2022-03-22 14:42:57 -07:00
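A stripped-down sketch of the `__torch_dispatch__` interception pattern the PR builds on (illustrative only, and far simpler than the real `TorchMLIRTensor`; the print stands in for the script/import/compile/execute step, and plain PyTorch serves as the fallback):

```python
import torch

class EagerCompileTensor(torch.Tensor):
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        unwrap = lambda v: (v.as_subclass(torch.Tensor)
                            if isinstance(v, EagerCompileTensor) else v)
        print(f"would compile: {func}")  # torch-mlir compiles the op here
        out = func(*[unwrap(a) for a in args],
                   **{k: unwrap(v) for k, v in kwargs.items()})
        return (out.as_subclass(EagerCompileTensor)
                if isinstance(out, torch.Tensor) else out)

x = torch.randn(2, 2).as_subclass(EagerCompileTensor)
y = x + x  # prints the intercepted aten op
```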
Gaurav Shukla 7c3ba25238 [LINALG] Add decomposition of `aten.dropout` op
- This commit adds decomposition of `aten.dropout` op. It also covers the
  training mode of the same op.
- It also adds lowering of `aten.sub.float` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-22 13:14:49 +05:30
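A functional sketch of the training-mode decomposition (illustrative; it mirrors the standard inverted-dropout formulation rather than the exact generated IR):

```python
import torch

def dropout_decomposed(x: torch.Tensor, p: float, train: bool) -> torch.Tensor:
    if not train or p == 0.0:
        return x  # eval mode: identity
    keep = (torch.rand_like(x) > p).to(x.dtype)  # Bernoulli keep-mask
    return x * keep / (1.0 - p)                  # rescale to preserve the mean
```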
Sean Silva 729402c3f4 Reduce compilation time for TorchOps.cpp.inc
The `assemblyFormat` stuff (which generates unrolled, per-op C++ code)
was taking up a lot of compile time, and all the ops are essentially
printed with the same logic. So this PR makes them all call the same
helper function. This is done by using
`let hasCustomAssemblyFormat = 1` and then implementing `FooOp::parse`
and `FooOp::print`.

Additionally, the `Generated*Ops.td` files are all collapsed into just
`GeneratedTorchOps.td` (there is no reason to have the files separate,
since the files are very large anyway so one is always having to search
within them -- editors don't care that the file to search is now a bit
bigger :) ).

This reduces TorchOpsODSGenerated.cpp compile time (which is now
GeneratedTorchOps.cpp) from 39 to 31 seconds on my machine. This is
actually less than I expected, but this PR is an overall cleanup to the
code anyway. The next step will be to introduce (better) functionality
upstream for sharding the TorchOps.cpp.inc file, so that we can truly
parallelize the O(#ops) costs. This is also necessary, because after
this PR, TorchDialect.cpp is now the slowest file to compile, due to the
`addOperations<... all the ops ...>` call, which needs to be sharded
too.
2022-03-21 14:42:26 -07:00
Vivek Khandelwal 5b9bdfaf3f [MLIR][TORCH] Add E2E support for aten._to_copy op
This commit decomposes `aten._to_copy` op into
`valsem.aten.copy` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
Vivek Khandelwal 13383b03b8 [MLIR][TORCH] Add value tensor variant to aten::copy_ op
This commit adds the op `ValsemVariantAtenCopyOp` that represents
`AtenCopy_Op` without the underscore. This is needed to make sure
that the `ReduceOpVariants` pass turns the in-place op into an op
that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenCopyOp`.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 19:12:37 +05:30
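A rough functional model of the value-semantic variant (`valsem_copy` is a hypothetical helper name; the real op is `ValsemVariantAtenCopyOp`): instead of mutating `dst` in place, a fresh value with `dst`'s shape and dtype is produced.

```python
import torch

def valsem_copy(dst: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
    # Value-semantic aten.copy_: returns a new tensor, leaves dst untouched.
    return src.to(dst.dtype).broadcast_to(dst.shape).clone()
```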
Vivek Khandelwal 4c0cd5c23d [MLIR][TORCH] Add E2E support for aten.expand_as op
This commit decomposes `aten.expand_as` op into `aten.broadcast_to` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-21 12:47:39 +05:30
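The decomposition, checked as an eager-mode identity (illustrative):

```python
import torch

x = torch.tensor([[1.0], [2.0]])
y = torch.randn(2, 3)
assert torch.equal(x.expand_as(y), torch.broadcast_to(x, y.shape))
```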
Vigilans 63fb1e5aad Bump LLVM at 8361c5da30588d3d4a48eae648f53be1feb5cfad 2022-03-18 13:16:14 -04:00
Prateek Gupta 7256c9e395 [TORCH][MLIR] Fix the return types of `aten.native_layer_norm`.
This commit fixes the 2nd and 3rd return types of `aten.native_layer_norm`.
Previously, the mean and rstd were returned with the reduction dims
removed; this commit keeps the reduction dims in the results.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-17 12:08:32 +05:30
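For illustration, the shapes being fixed, assuming the `torch.native_layer_norm` binding is available in your build:

```python
import torch

x = torch.randn(2, 3, 4)
out, mean, rstd = torch.native_layer_norm(x, [4], None, None, 1e-5)
# The reduction dim is kept (as size 1) in the mean and rstd results.
print(mean.shape, rstd.shape)  # torch.Size([2, 3, 1]) torch.Size([2, 3, 1])
```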
Vivek Khandelwal 8da7d90611 [MLIR][TORCH] Add E2E support for aten.index_put op
This commit decomposes `aten.index_put` op into
`valsem.aten.index_put_impl` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-16 22:02:02 +05:30
Vivek Khandelwal 3d95c3d6c9 [MLIR][TORCH] Add value tensor variant to aten::_index_put_impl_
This commit adds the op `ValsemVariantAtenIndexPutImplOp` that represents
`Aten_IndexPutImpl_Op` without the underscore. This is needed to
make sure that the `ReduceOpVariants` pass turns the in-place op
into an op that takes value tensors as inputs, otherwise the
`MaximizeValueSemantics` pass will not be able to add value
semantics correctly.

This commit also adds the lowering of `ValsemVariantAtenIndexPutImplOp` op.

This commit also updates the `torch.bincount` op test cases.
2022-03-16 22:02:02 +05:30
Ramiro Leal-Cavazos 0bcc6d1075
Add maximize-value-semantics support for multiple non-value tensor inputs (#659)
This commit adds value semantics support for ops such as
`aten.view_as` and `aten.expand_as` that take two non-value 
tensors as input.
2022-03-15 18:13:45 -07:00
Sean Silva 92da4988f0 Improve "pseudo" op terminology.
The term "pseudo" is very vague and was getting confusing (I felt I had
to explain it in every comment referencing it). Instead, rework the
"pseudo" ops to instead be named:

- MLIR Syntax: `torch.valsem.*`
- C++ / ODS: `ValsemVariant*Op`

This makes it clear what the concept is, and avoids confusion with other
things that might be called "pseudo", since these are very specific and
should be 100% consistently named w.r.t. the non-valsem-variant ops that
they correspond to.
2022-03-15 17:57:52 -07:00
Sean Silva a5fe0cf063 Introduce new shape library design.
See the documentation in `docs/shape_lib.md` and
`docs/adding_a_shape_function.md` for an overview of the system.

This completely overhauls how we represent shape functions. In
particular, RefineTypes does not infer shapes anymore (only dtypes).
Shape functions are now written in (TorchScript'able) Python.

Recommended review order:

1. Read `docs/shape_lib.md` and `docs/adding_a_shape_function.md`.
1. Code and tests for ReifyShapeCalculations, DropShapeCalculations.
1. Code and tests for SimplifyShapeCalculations.
1. shape_lib_gen.py
1. Code and tests for new RefineTypes pass.
1. Random folders/canonicalizers in TorchOps.cpp and associated test in
   `canonicalize.mlir`.
1. New ReadOnly trait inferred from the registry.
1. Any miscellaneous remaining stuff.

Example `-print-ir-after-all` for ElementwiseUnaryModule:
[IR lowering dump](https://gist.github.com/silvasean/e4dc8cbc8d00aac7819602e3cbd8e212).

Example `-print-ir-after-all` for ElementwiseBinaryModule:
[IR lowering dump](https://gist.github.com/silvasean/daf6860ecced732af3568af6b1899113).
2022-03-15 12:41:58 -07:00
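A minimal sketch of what a shape function looks like under the new design (an illustrative function with made-up names, not an entry from the actual shape library): ordinary TorchScript'able Python that maps input shapes to the output shape.

```python
from typing import List

def matmul_shape(lhs: List[int], rhs: List[int]) -> List[int]:
    # Shape transfer function for a 2-D matmul: (m, k) @ (k, n) -> (m, n).
    assert len(lhs) == 2 and len(rhs) == 2 and lhs[1] == rhs[0]
    return [lhs[0], rhs[1]]
```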
Prashant Kumar b6d13301fc [TORCH] Fix the location of packed_params.
The location of packed_params.h is changed in aten src.
2022-03-14 17:52:19 +05:30
Prateek Gupta 3d9ba5e525 [MLIR][TORCH] Add E2E support for aten.erf op.
Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-03-09 22:22:03 +05:30
Vivek Khandelwal 1a2a9e066f [MLIR][TORCH] Add TorchToTMTensor pass
This pass is added to lower ops that cannot be lowered via the
TorchToLinalg pass, such as the `torch.bincount` op. It uses
torch-mlir's TMTensor dialect to lower these complex ops.

Also adds `torch.bincount` op lowering with the help of the TMTensor
dialect.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-08 22:52:34 +05:30
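For context (illustrative), `torch.bincount` is hard to express in plain Linalg because it scatters into an output whose size depends on the data:

```python
import torch

x = torch.tensor([1, 1, 3])
print(torch.bincount(x))  # tensor([0, 2, 0, 1]); length depends on max(x)
```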
Gaurav Shukla e57d3f9774 [LINALG] Fix `aten.bernoulli` op lowering
- This commit adds E2E support for `aten.rand_like` and
  `aten.bernoulli_.Tensor` ops.
- `aten.bernoulli(x)` was implemented as
  `aten.bernoulli(x) = rand_like(x) < 0.5`, assuming a default
  probability of 0.5, whereas according to the PyTorch documentation
  (https://pytorch.org/docs/stable/generated/torch.bernoulli.html#torch.bernoulli),
  the input x in `aten.bernoulli(x)` is itself a tensor containing the
  probabilities used for drawing the binary random numbers.
- So this commit fixes the `aten.bernoulli(x)` implementation as:
  `aten.bernoulli(x) = rand_like(x) < x`.
- It also fixes the case where the input to `aten.bernoulli_.float` is
  an integer tensor. In this case the input must be casted to float type
  before passing it as operand to `aten.rand_like` op.
  `aten.bernoulli_.float(x, p) = rand_like(float(x)) < p`.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-03-05 09:38:22 +05:30
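The fixed decomposition as an eager-mode sketch (illustrative):

```python
import torch

def bernoulli_decomposed(x: torch.Tensor) -> torch.Tensor:
    # Each element of x is a probability; draw 1 with that probability.
    return (torch.rand_like(x) < x).to(x.dtype)
```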
Vivek Khandelwal af551bd9cd [MLIR][TORCH] Add E2E support for aten.full_like op
This commit decomposes `aten.full_like` op into `aten.empty_like`
and `aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
Vivek Khandelwal d61ae92eee [MLIR][TORCH] Add E2E support for aten.full op
This commit decomposes `aten.full` op into `aten.empty` and
`aten.fill` op.

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-04 21:58:23 +05:30
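Both of the preceding decompositions follow the same allocate-then-fill pattern; an eager-mode sketch (illustrative, using the in-place `fill_` where the generated IR uses the value-semantic `aten.fill`):

```python
import torch

x = torch.randn(2, 3)
assert torch.equal(torch.full((2, 3), 5.0), torch.empty(2, 3).fill_(5.0))
assert torch.equal(torch.full_like(x, 5.0), torch.empty_like(x).fill_(5.0))
```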
Yi Zhang 486f95e84f Add bufferization pass for TMTensor ops
The pass is mostly borrowed from the BufferizeAnyLinalgOp pass in mlir
upstream with some minor changes. At a high level, it's a naive partial
bufferization pass that allocates new buffers for all the output
tensors. The initial value of an output buffer is copied from the
original buffer if there are uses of the original value.

One difference from linalg bufferization pass is the way to tell if
the loop body uses the init value of output operand. For TMTensor ops,
it differs from op to op because the payload region doesn't represent
the entire loop body.
2022-03-03 11:39:14 -05:00
Yi Zhang 1d285f0153 Add aten.hardtanh e2e support. 2022-03-02 12:28:06 -05:00
Prashant Kumar 819f29316f Decompose aten.silu op
Decomposition of the `aten.silu` op is added as silu(x) = x * sigmoid(x).
2022-03-01 23:24:19 +05:30
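The identity, checked in eager mode (illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
assert torch.allclose(F.silu(x), x * torch.sigmoid(x))
```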
Vivek Khandelwal ddd45d6068 [MLIR][TORCH] Add E2E support for aten.new_zeros, aten.new_ones op
This commit adds lowering of `aten.new_zeros` and `aten.new_ones` op

Signed-Off By: Vivek Khandelwal <vivek@nod-labs.com>
2022-03-01 22:09:47 +05:30
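Illustrative example: like `new_empty`, these ops inherit dtype and device from `self`, which the lowering must preserve.

```python
import torch

x = torch.randn(2, 2, dtype=torch.float64)
z = x.new_zeros(3)
assert z.dtype == torch.float64
assert torch.equal(z, torch.zeros(3, dtype=torch.float64))
```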
Prashant Kumar 7c637eebc3 [LINALG] Decompose aten_hardswish op.
The `aten.hardswish` op is decomposed as hardswish(x) = (x/6) * Relu6(x+3).
2022-02-25 21:59:27 +05:30
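Checked as an eager-mode identity (illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(5)
assert torch.allclose(F.hardswish(x), (x / 6) * F.relu6(x + 3))
```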
Prashant Kumar abbde7d439 [TORCH] The torch definition related to aten.gelu has changed.
A new str argument, `approximate`, is added.
2022-02-18 21:57:46 +05:30
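On sufficiently recent PyTorch builds, the new argument looks like this (illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
y_exact = F.gelu(x)                     # approximate="none" (the default)
y_tanh = F.gelu(x, approximate="tanh")  # the new str argument
```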
Nirvedh f8cb32faf0 LLVM bump
Major changes: opTrait changed to Trait, selectOp moved to the arith dialect,
assertOp moved to the cf dialect
2022-02-16 15:28:13 -05:00
Gaurav Shukla cd21dda867 [LINALG] Add E2E support for `aten.Hardsigmoid` op
This commit adds lowering of `aten.Hardsigmoid` op.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-16 02:35:18 +05:30
Ramiro Leal-Cavazos 00a6e9c1bb
[LINALG] Add value tensor variant to `fill_.Scalar` (#600)
This commit adds the op `PseudoAtenFillScalarOp` that represents
`AtenFill_ScalarOp` without the underscore. The approach is the same
as in commit dd998fa4d4.

Adding this op allows for a simpler and more consistent version of the
`empty` and `empty_like` op e2e tests.
2022-02-15 11:58:03 -08:00
Gaurav Shukla 41acde599b [LINALG] Add E2E support for `aten.[le|ge].Scalar` ops
- This commit adds lowering of `aten.le.Scalar` and `aten.ge.Scalar` ops
  as a part of `convert-torch-to-linalg` pass.
- It also creates a new test script `elementwise_comparison.py` for all
  element-wise comparison ops.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-15 12:21:09 +05:30
Gaurav Shukla f00d1686c8 [LINALG] Add E2E support for `aten.[Bool.Tensor|Float.Tensor]` op
- This commit adds lowering of `aten.Bool.Tensor` and
  `aten.Float.Tensor` op as a part of `convert-torch-to-linalg` pass.
- It also adds support for returning bool types.
- It also fixes lowering of the `aten.Int.Tensor` op for non-zero rank
  input tensors.
- If a scalar number is converted to a 0-d tensor and passed on to the
  `aten.Float.Tensor` op, it folds to the scalar number.

Signed-Off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-02-14 23:09:20 +05:30
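For context (illustrative), these ops correspond to converting 0-d tensors to Python numbers, which is what TorchScript emits for `float(t)` / `bool(t)`:

```python
import torch

t = torch.tensor(3.5)
print(float(t))               # aten.Float.Tensor on a 0-d tensor -> 3.5
print(bool(torch.tensor(1)))  # aten.Bool.Tensor -> True
```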
Yi Zhang 9e7b6cab08 Add folder for aten.gt/lt.float 2022-02-14 12:34:01 -05:00
Henry Tu 73ac9a7e2e Added support for importing node prim::Constant with list type
Prior to this commit, importing a `prim::Constant` node with list type would result in an error, since it was not supported. `ivalue_importer::importIValue` was modified to return the MlirValue corresponding to the root, so that its parent operation could be extracted.
2022-02-11 20:54:06 -05:00
Prashant Kumar 258660deb6 Add aten.bernoulli decomposition.
aten.bernoulli is decomposed to aten.gtTensor(aten.uniform(x), x).
2022-02-11 00:35:33 +05:30
Prashant Kumar 102c497c4c Add decomposition of _log_softmax op.
Decompose _log_softmax into log(softmax(x)).
2022-02-10 23:17:26 +05:30
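Checked as an eager-mode identity (illustrative; the fused form is numerically preferable, but the decomposition is mathematically equivalent):

```python
import torch

x = torch.randn(3)
assert torch.allclose(torch.log_softmax(x, dim=0),
                      torch.log(torch.softmax(x, dim=0)))
```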
Prateek Gupta 318946a650 [TORCH][MLIR] Add E2E support for `aten._unsafe_view` op.
This commit adds decomposition of `aten._unsafe_view` op into
`aten.view` op.

Signed-Off-By: Prateek Gupta <prateek@nod-labs.com>
2022-02-10 22:28:58 +05:30
Ramiro Leal-Cavazos 9b89f8eb3f
[TORCH][MLIR] Add E2E support for aten.clone (#571)
This commit adds support for the aten.clone op.
2022-02-09 19:31:03 -08:00