Add an end-to-end test to the [end-to-end test suite](https://github.com/llvm/torch-mlir/blob/main/docs/adding_an_e2e_test.md). Ideally there is an existing file that your op fits into; if not, you can create a new file.
We generally recommend testing by invoking `torch.ops.aten.someop` from Python -- this gives a precise test for the individual Torch operator you are implementing, since calling `torch.ops.aten.someop` from Python always lowers to the MLIR `torch.aten.someop` operation.
The end-to-end test is important to check the correctness of the other steps.
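For instance, a new test case might look roughly like the following sketch, which follows the conventions of the existing test-suite files (here `aten.tanh` stands in for the op you are adding, and the exact Python module paths may differ depending on your checkout):

```python
import torch

from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import annotate_args, export


class MyNewOpModule(torch.nn.Module):
    @export
    @annotate_args([
        None,
        ([-1, -1], torch.float32, True),  # dynamically-shaped 2-D float input
    ])
    def forward(self, x):
        # Invoke the raw aten op so exactly `torch.aten.tanh` is exercised.
        return torch.ops.aten.tanh(x)


@register_test_case(module_factory=lambda: MyNewOpModule())
def MyNewOpModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3, 4))
```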
Update [torch_ods_gen.py](https://github.com/llvm/torch-mlir/blob/main/projects/pt1/python/torch_mlir/dialects/torch/importer/jit_ir/build_tools/torch_ods_gen.py) with the new op and run [update_torch_ods.sh](https://github.com/llvm/torch-mlir/blob/main/build_tools/update_torch_ods.sh) to generate the ODS. Running `update_torch_ods.sh` also dumps all registered operators and their schemas into `JITOperatorRegistryDump.txt`, which is a convenient place to look up op signatures and operand names.
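The registration in `torch_ods_gen.py` is typically a one-line `emit` call placed in the section for the corresponding op category. A sketch of what that fragment looks like, using a hypothetical `aten::someop` (copy the exact schema string from `JITOperatorRegistryDump.txt`):

```python
# Fragment that goes inside the appropriate emit section of torch_ods_gen.py,
# next to ops of the same category. The schema string must match the
# registered JIT operator schema exactly.
emit("aten::someop : (Tensor) -> (Tensor)")
# Ops that also have an in-place (trailing-underscore) variant generally use
# the emit_with_mutating_variants helper instead of plain emit.
```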
The new op must implement shape and dtype inference. See [abstract_interp_lib](https://github.com/llvm/torch-mlir/blob/main/docs/abstract_interp_lib.md) for information on adding shape and dtype inference.
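As a rough illustration of the pattern used by the abstract interpretation library (see the doc above for the authoritative workflow), the functions for a simple elementwise op look approximately like this; the `〇`/`〡` characters are part of the real naming convention, and `aten〇tanh` is used here only as an example:

```python
from typing import List, Tuple

def aten〇tanh〡shape(self: List[int]) -> List[int]:
    # An elementwise unary op produces the same shape as its input.
    return self

def aten〇tanh〡dtype(self_rank_dtype: Tuple[int, int]) -> int:
    # The result keeps the input's dtype.
    self_rank, self_dtype = self_rank_dtype
    return self_dtype
```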
If your op can be decomposed into other supported ops, you can add a decomposition pattern to [DecomposeComplexOps](https://github.com/llvm/torch-mlir/blob/8d3ca887df5ac5126fa3fc2ec3546c6322a4d066/lib/Dialect/Torch/Transforms/DecomposeComplexOps.cpp#L1).
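The actual patterns in `DecomposeComplexOps.cpp` are written in C++ against the Torch dialect, but conceptually a decomposition just re-expresses the op in terms of ops that already have lowerings. In plain PyTorch terms, a softmax-style decomposition looks something like the following sketch (illustrative only, not the exact rewrite used in the pass):

```python
import torch

def softmax_decomposed(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Re-express softmax with simpler ops that already lower:
    # max (for numerical stability), sub, exp, sum, and div.
    x_max = torch.amax(x, dim=dim, keepdim=True)
    unnorm = torch.exp(x - x_max)
    return unnorm / torch.sum(unnorm, dim=dim, keepdim=True)

x = torch.randn(2, 5)
assert torch.allclose(softmax_decomposed(x, 1), torch.softmax(x, dim=1))
```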
The `Torch` dialect needs to be lowered to the [Linalg](https://mlir.llvm.org/docs/Dialects/Linalg/) dialect, which can serve as input IR for backends. [Here](https://mlir.llvm.org/docs/Dialects/Linalg/#high-level-description-of-linalg-opsa-namelinalg_opsa) is a high-level introduction to Linalg ops. The building block is the `linalg.generic` op, which consists of indexing maps, iterator types, input/output tensors, and a compute payload; you will want to get familiar with the concept of an [affine map](https://mlir.llvm.org/docs/Dialects/Affine/#affine-expressions). The `linalg.generic` op anatomy [tutorial](https://www.youtube.com/watch?v=PdQPlPudT90&list=PLHPjgRtRcfTpVGFMrLP2KQyXhvtSQiiai&index=1) covers the basics of `linalg.generic` from a user's perspective.
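As a mental model (a rough Python analogy, not MLIR), `linalg.generic` walks an iteration space, uses one indexing map per operand to turn loop indices into operand coordinates, and applies the scalar payload at each point:

```python
import numpy as np

def generic(loop_bounds, indexing_maps, payload, inputs, init):
    """Toy model of linalg.generic with all-parallel iterators."""
    out = init.copy()
    *in_maps, out_map = indexing_maps  # one map per operand, output last
    for idx in np.ndindex(*loop_bounds):
        scalars = [t[m(idx)] for t, m in zip(inputs, in_maps)]
        out[out_map(idx)] = payload(*scalars)
    return out

# Elementwise add of two 2-D tensors: every indexing map is the identity,
# mirroring `affine_map<(d0, d1) -> (d0, d1)>`.
a, b = np.ones((2, 3)), np.full((2, 3), 2.0)
identity = lambda idx: idx
c = generic((2, 3), [identity, identity, identity],
            lambda x, y: x + y, [a, b], np.empty((2, 3)))
assert (c == 3.0).all()
```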
You can find an [example PR here](https://github.com/llvm/torch-mlir/pull/294).
1. The codebase follows [LLVM's coding conventions](https://llvm.org/docs/CodingStandards.html). The following items are among the most frequently used rules:
2. Try to refactor and reuse existing code/helpers when working on the RefineTypes and TorchToLinalg lowerings; this makes the code easier to maintain, test, and read. Avoid copying and pasting existing code.
3. Squash all the commits into one, including the commits addressing review comments.
4. Use `git clang-format HEAD~1` to automatically format your commit.