torch-mlir/test/Dialect/Torch
zjgarvey de28c8540b
[ONNX] add int16 quantization support (#3446)
PyTorch currently has no int16 quantization support. This patch adds a new MLIR type corresponding to the missing "torch.qint16" type and enables lowering of quantization-related ONNX ops using int16 types.

In follow-up patches, the custom quantization logic for ops such as aten.matmul/aten.mm/aten.convolution may need to be revisited to support qint16. The passes in FuseQuantizedOps.cpp may also need slight modifications.
2024-06-12 10:37:22 +05:30
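For context, here is a minimal sketch of what an int16 quantization round-trip case in ops.mlir could look like, assuming the new type is spelled `!torch.qint16` and flows through `torch.aten.quantize_per_tensor` and `torch.aten.int_repr` the same way the existing qint8/quint8 cases do. The function name, tensor shape, scale/zero-point values, and the integer dtype code are hypothetical placeholders, not taken from the patch.

```mlir
// Hypothetical round-trip: quantize an f32 tensor to the new qint16 type
// and recover its signed 16-bit integer representation.
func.func @quantize_per_tensor_qint16(%arg0: !torch.vtensor<[4],f32>) -> !torch.vtensor<[4],si16> {
  %scale = torch.constant.float 1.000000e-01
  %zero_point = torch.constant.int 0
  // Integer code identifying the qint16 dtype; the value used here is an
  // assumption, not the one assigned by the patch.
  %dtype = torch.constant.int 27
  %quant = torch.aten.quantize_per_tensor %arg0, %scale, %zero_point, %dtype
      : !torch.vtensor<[4],f32>, !torch.float, !torch.int, !torch.int -> !torch.vtensor<[4],!torch.qint16>
  %repr = torch.aten.int_repr %quant
      : !torch.vtensor<[4],!torch.qint16> -> !torch.vtensor<[4],si16>
  return %repr : !torch.vtensor<[4],si16>
}
```

Cases in ops.mlir are typically just round-tripped through torch-mlir-opt, so a test along these lines mainly checks that the parser and printer accept the new type.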
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| GlobalizeObjectGraph | [torch-mlir][test] cleanup trailing whitespace in mlir files (#2806) | 2024-01-25 14:24:13 -08:00 |
| adjust-calling-conventions.mlir | Clean up verification of calling conventions. | 2023-07-20 20:08:46 +02:00 |
| canonicalize.mlir | Representing Symbolic Shape Expressions in Torch Dialect (#3372) | 2024-06-07 04:04:03 -07:00 |
| decompose-complex-ops-legal.mlir | handles 2,3,4 from https://github.com/llvm/torch-mlir/issues/1963 (#1964) | 2023-03-24 21:50:01 -05:00 |
| decompose-complex-ops.mlir | [Torch] Fix bugs for `Torch::AtenOneHotOp` (#3350) | 2024-05-22 17:19:08 +00:00 |
| drop-abstract-interp-calculations.mlir | [custom op] Generalize shape library logic to work with dtypes (#1594) | 2022-12-13 08:25:41 -08:00 |
| erase-module-initializer.mlir | Iteratively run the main simplification pipeline. | 2022-08-17 14:54:33 -07:00 |
| fuse-quantized-ops.mlir | Generalize Operand Quantization in FuseQuantizeOps (#3327) | 2024-05-12 20:49:59 -07:00 |
| inline-global-slots-analysis.mlir | Rework how global slot initializers work. | 2022-08-08 18:12:06 -07:00 |
| inline-global-slots-transform.mlir | Rework how global slot initializers work. | 2022-08-08 18:12:06 -07:00 |
| invalid.mlir | Representing Symbolic Shape Expressions in Torch Dialect (#3372) | 2024-06-07 04:04:03 -07:00 |
| lower-to-backend-contract-error.mlir | Allow running DecomposeComplexOps more than once (#1671) | 2022-12-08 09:26:38 -08:00 |
| match-quantized-customs-ops.mlir | [torch-mlir][test] cleanup trailing whitespace in mlir files (#2806) | 2024-01-25 14:24:13 -08:00 |
| maximize-value-semantics.mlir | Add alias analysis for cast-like ops to maximize-value-semantics (#2160) | 2023-05-25 17:05:41 +00:00 |
| ops.mlir | [ONNX] add int16 quantization support (#3446) | 2024-06-12 10:37:22 +05:30 |
| prepare-for-globalize-object-graph.mlir | mlir: bump llvm tag to 5380e3 (#856) | 2022-05-16 12:54:35 -07:00 |
| reduce-op-variants-error.mlir | mlir: bump llvm tag to 5380e3 (#856) | 2022-05-16 12:54:35 -07:00 |
| reduce-op-variants.mlir | [torch] Fix tm_tensor.attention for end-to-end (#2907) | 2024-02-13 21:18:01 -08:00 |
| refine-public-return.mlir | Support `DerefineOp` in `RefinePublicReturn`. | 2023-07-20 20:08:46 +02:00 |
| reify-dtype-calculations.mlir | Breakup python pytorch deps (#2582) | 2023-11-19 12:10:19 -08:00 |
| reify-shape-calculations.mlir | Cast `number` to `float` when shape function takes Scalar arg (#1978) | 2023-03-28 09:30:31 -07:00 |
| scalarize-shapes.mlir | [torch] Improve shape inference for `torch-to-linalg` path for reshapes (#3055) | 2024-03-26 12:41:40 -07:00 |
| simplify-dtype-calculations.mlir | [MLIR][TORCH] Add E2E support for view_as_real op (#2419) | 2023-09-01 21:12:01 -07:00 |
| simplify-shape-calculations.mlir | [torch-mlir][test] cleanup trailing whitespace in mlir files (#2806) | 2024-01-25 14:24:13 -08:00 |
| torch-function-to-torch-backend-pipeline.mlir | [Torch Dialect] fix torch.uint8's dtype infer (#2227) | 2023-06-13 10:38:20 +08:00 |
| torch-nary-canonicalize.mlir | [torch] Folders for `torch.aten.*.tensor` operators [add, sub, mul] (#2878) | 2024-02-19 10:28:23 -08:00 |
| verify-backend-contract-error.mlir | Clean up verification of calling conventions. | 2023-07-20 20:08:46 +02:00 |
| verify-backend-contract-unimplemented-op.mlir | LowerToBackendContract: Explicitly error out on unimplemented operator (#1947) | 2023-03-20 16:27:08 +01:00 |