// RUN: torch-mlir-opt -torch-verify-linalg-on-tensors-backend-contract -split-input-file -verify-diagnostics -allow-unregistered-dialect %s | FileCheck %s
// CHECK: func.func @mm
func.func @mm(%arg0: tensor<?x?xf32>, %arg1: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %cst = arith.constant 0.000000e+00 : f32
  %0 = tensor.dim %arg0, %c0 : tensor<?x?xf32>
  %1 = tensor.dim %arg0, %c1 : tensor<?x?xf32>
  %2 = tensor.dim %arg1, %c0 : tensor<?x?xf32>
  %3 = tensor.dim %arg1, %c1 : tensor<?x?xf32>
  %4 = arith.cmpi eq, %1, %2 : index
  cf.assert %4, "mismatching contracting dimension for aten.mm"
  %5 = tensor.empty(%0, %3) : tensor<?x?xf32>
  %6 = linalg.fill ins(%cst : f32) outs(%5 : tensor<?x?xf32>) -> tensor<?x?xf32>
  %7 = linalg.matmul ins(%arg0, %arg1 : tensor<?x?xf32>, tensor<?x?xf32>) outs(%6 : tensor<?x?xf32>) -> tensor<?x?xf32>
  return %7 : tensor<?x?xf32>
}
// -----
// Basic check of error reporting.
// expected-error@+1 {{Module does not conform to the linalg-on-tensors backend contract.}}
module {
  func.func @disallowed() {
    // expected-error@+1 {{failed to legalize operation 'unknown_dialect.unknown_op'}}
    "unknown_dialect.unknown_op"() : () -> ()
    return
  }
}
// -----
// TODO: Improve these errors to give more exact reporting.
//
// The reporting we inherit from dialect conversion is not precise.
// For example, we want it to explicitly call out that
// `!torch.tensor` is the problem here, which suggests
// that type inference didn't succeed, or insufficient type information
// was available.
//
// Ultimately, the output of this pass needs to be conveyed to the user
// in an understandable way, such as suggesting a particular place where
// a shape annotation is needed.
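//
// As a purely illustrative sketch (not part of what this test checks), the
// usual user-side remedy is to give the offending value a fully-refined
// value-tensor type instead of the opaque `!torch.tensor`, e.g. a hypothetical
// `!torch.vtensor<[2,3],f32>`, so that the earlier lowering passes have the
// shape and dtype information needed to produce a builtin `tensor` before
// this verification runs. The concrete shape used here is made up.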
// expected-error@+1 {{Module does not conform to the linalg-on-tensors backend contract.}}
module {
  func.func @disallowed(%arg0: !torch.tensor) -> !torch.tensor {
    // expected-error@+1 {{failed to legalize operation 'func.return'}}
    return %arg0 : !torch.tensor
  }
}