# RUN: %PYTHON %s | FileCheck %s --dump-input=fail
# TODO: Rebase this path on linalg-on-tensors or Torch dialect.
# XFAIL: *
import numpy as np

from npcomp.compiler.numpy.backend import refjit
from npcomp.compiler.numpy.frontend import *
from npcomp.compiler.numpy import test_config
from npcomp.compiler.numpy.target import *
from npcomp.compiler.utils import logging

logging.enable()

def compile_function(f):
  fe = ImportFrontend(config=test_config.create_test_config(
      target_factory=GenericTarget32))
  fe.import_global_function(f)
  compiler = refjit.CompilerBackend()
  blob = compiler.compile(fe.ir_module)
  loaded_m = compiler.load(blob)
  return loaded_m[f.__name__]

global_data = (np.zeros((2, 3)) + [1.0, 2.0, 3.0] * np.reshape([1.0, 2.0],
                                                               (2, 1)))
a = np.asarray([1.0, 2.0], dtype=np.float32)
b = np.asarray([3.0, 4.0], dtype=np.float32)

@compile_function
def global_add():
  return np.add(a, np.add(b, a))

# Make sure we aren't accidentally invoking the python function :)
assert global_add.__isnpcomp__
# CHECK: GLOBAL_ADD: [5. 8.]
result = global_add()
print("GLOBAL_ADD:", result)