torch-mlir/projects/pt1/python/torch_mlir

Commit 790a697245 by Xinyu Yang:
[Torch] Add folder for AtenIntOp, AtenFloatOp (#3189)

See the unit test below:
```mlir
// CHECK-LABEL:   func.func @torch.aten.tensor.float(
// CHECK-NEXT: torch.vtensor.literal(dense<1.000000e+01> : tensor<f32>) : !torch.vtensor<[],f32>
func.func @torch.aten.tensor.float() -> !torch.vtensor<[],f32> {
  %none = torch.constant.none
  %false = torch.constant.bool false
  %float1.000000e01 = torch.constant.float 1.000000e+01
  %67 = torch.aten.tensor.float %float1.000000e01, %none, %none, %false : !torch.float, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],f32>
  return %67 : !torch.vtensor<[],f32>
}

// CHECK-LABEL:   func.func @torch.aten.tensor.int(
// CHECK-NEXT: torch.vtensor.literal(dense<45> : tensor<si32>) : !torch.vtensor<[],si32>
func.func @torch.aten.tensor.int() -> !torch.vtensor<[],si32> {
  %none = torch.constant.none
  %false = torch.constant.bool false 
  %int45 = torch.constant.int 45
  %67 = torch.aten.tensor.int %int45, %none, %none, %false : !torch.int, !torch.none, !torch.none, !torch.bool -> !torch.vtensor<[],si32>
  return %67 : !torch.vtensor<[],si32>
}

```
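The effect the CHECK lines above expect can be sketched in Python. This is a hypothetical illustration, not the torch-mlir C++ folder itself: when the scalar operand of `torch.aten.tensor.float` / `torch.aten.tensor.int` is a compile-time constant, the op folds into a rank-0 dense literal tensor (`dense<1.000000e+01> : tensor<f32>`, `dense<45> : tensor<si32>`). The function names below are invented for the sketch.

```python
import numpy as np

def fold_tensor_float(value: float) -> np.ndarray:
    """Hypothetical sketch: fold a constant float scalar into a
    rank-0 f32 literal tensor, mirroring dense<...> : tensor<f32>."""
    return np.array(value, dtype=np.float32)

def fold_tensor_int(value: int) -> np.ndarray:
    """Hypothetical sketch: fold a constant int scalar into a
    rank-0 si32 literal tensor, mirroring dense<...> : tensor<si32>."""
    return np.array(value, dtype=np.int32)

# The folded results are 0-d (scalar) tensors holding the constant value,
# as in the two test cases above.
print(fold_tensor_float(10.0).shape)  # ()
print(fold_tensor_int(45))            # 45
```

The point of such a folder is that downstream passes see a `torch.vtensor.literal` instead of a runtime op, so the constant can participate in further constant folding and lowering.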
Committed: 2024-04-19 22:17:06 +08:00
| File | Last commit | Date |
|------|-------------|------|
| `_torch_mlir_custom_op_example` | Re-organize project structure to separate PyTorch dependencies from core project. (#2542) | 2023-11-02 19:45:55 -07:00 |
| `csrc` | Clang format refresh (#2812) | 2024-01-29 12:59:33 -05:00 |
| `jit_ir_importer` | [Torch] Add folder for AtenIntOp, AtenFloatOp (#3189) | 2024-04-19 22:17:06 +08:00 |
| `_dynamo_fx_importer.py` | Add stateless fx graph import (#3036) | 2024-03-21 14:44:54 -07:00 |
| `_version.py` | Re-organize project structure to separate PyTorch dependencies from core project. (#2542) | 2023-11-02 19:45:55 -07:00 |
| `compiler_utils.py` | [torch-mlir][NFC] remove trailing whitespace (#2936) | 2024-02-20 11:23:14 -08:00 |
| `dynamo.py` | [torch-mlir][NFC] remove trailing whitespace (#2936) | 2024-02-20 11:23:14 -08:00 |
| `torchscript.py` | Converts all Adaptive Pooling Ops to Linalg (#2808) | 2024-03-22 11:05:20 -07:00 |