Commit Graph

84 Commits (3aa81f78d8819e275c630f4d05bd0937664bde07)

Author SHA1 Message Date
Stella Laurenzo 7301aa80fd
Enable -Werror in lib/ and LTC. (#2841)
Required some massaging of LTC to make it warning clean, and I had to
manually disable some warnings on the generated source files (which we
don't control).

The project is warning clean now.

The `-Werror` flag is disabled by default, since we can't control every
environment in which people will build or install. The CI enables it via
-DTORCH_MLIR_ENABLE_WERROR_FLAG=ON.
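
As a rough sketch of turning the same check on locally (only the
`TORCH_MLIR_ENABLE_WERROR_FLAG` option is taken from this message; the
generator, build directory, and any other required options are assumed):

```sh
# Hypothetical configure/build; project-specific options (LLVM/MLIR paths,
# Python settings, etc.) are elided.
cmake -GNinja -S . -B build -DTORCH_MLIR_ENABLE_WERROR_FLAG=ON
cmake --build build
```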
2024-01-30 23:33:21 -08:00
Stella Laurenzo 1b40b6384e
[onnx] Add torch-mlir-import-onnx native port as an optional tool/library. (#2694)
As noted in the plan when this work started, we need to produce an ORT
EP plugin for a downstream project, and this will necessitate a C-based
ONNX importer (as opposed to the existing Python one). Because this
comes with dependencies that we do not want to impart on various
projects, this is optional in torch-mlir. It is also factored so that it
can be used as standalone sources in downstreams that need it. Since it
only depends on public C APIs on the MLIR side, this will keep build
coupling much looser (no C++ dependency on the compiler is needed, and it
is trivial to load dynamically).

Our original plan was just to maintain this fork off to the side in our
ORT plugin, but once work started, it seemed better to write it clean
and contribute it upstream for anyone to use. We expect that for non-ORT
use, the Python importer will have better ergonomics for most folks.

I will follow up with a test suite refactor so that we can drive either
the Python or the C importer.

This is a relatively mechanical port from Python to C, borrowing some
scaffolding from the old JitIR importer. It does attempt to lay some
groundwork for external data, which will need to be implemented on the
Python side as well.
2023-12-27 12:13:34 -08:00
Stella Laurenzo 74f7a0c9d6
Upstream the ONNX importer. (#2636)
This is part 1 of 2, which will also include upstreaming the FX
importer. I started with ONNX because it forces some project layout
updates and is more self contained/easier as a first step.

Deviating somewhat from the RFCs on project layout, I made the following
decisions:

* Located `onnx_importer.py` in `torch_mlir.extras`, as Maks had already
opened up that namespace and it seemed to fit; better to have fewer
things at that level.
* Set up the build so that the root project only contains MLIR Python and
pure Python deps (like the importers), but this can be augmented with
the `projects/` adding more depending on which features are enabled.
* The default build continues to build everything whereas in
`TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=1` mode, it builds a
`torch-mlir-core` wheel with the pure contents only.

`onnx_importer.py` and `importer_smoke_test.py` are almost verbatim
copies from SHARK-Turbine. I made some minor local alterations to adapt
to paths and generalize the way they interact with the outer project. I
expect I can copy these back to Turbine verbatim from here. I also
updated the license boilerplate (they have the same license but slightly
different project norms for the headers) but retained the correct
copyright.

Other updates:

* Added the ONNX importer unit test (which also can generate test data)
in lit, conditioned on the availability of the Python `onnx` package. In
a follow-up, once I know everything is stable, I'll add another env var
that the CI can set to always enable this so we know conclusively if
tests pass.
* Moved the ONNX conversion readme to `docs/`.
* Renamed CMake option `TORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS` ->
`TORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS` and inverted the sense. Made the
JitIR importer and LTC options `cmake_dependent_options` for robustness.
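
Roughly, the rename/inversion means the two invocations below express the
same "core only" intent; the flag names come from the bullet above, while
the invocations themselves are illustrative:

```sh
# Before this change: opt in to the MLIR-only build explicitly.
cmake -S . -B build -DTORCH_MLIR_ENABLE_ONLY_MLIR_PYTHON_BINDINGS=ON
# After this change: the option is renamed and inverted, so the equivalent
# is to turn the PyTorch extensions off (presumably defaulting to on, given
# that the default build builds everything).
cmake -S . -B build -DTORCH_MLIR_ENABLE_PYTORCH_EXTENSIONS=OFF
```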
2023-12-12 19:02:51 -08:00
Stella Laurenzo 66b73edb28 Move TORCH_MLIR_USE_INSTALLED_PYTORCH to top-level.
This was unfortunately being initialized in a directory below its first use. That caused the first configure to mis-detect the ABI flags, which in turn caused type conversion failures at runtime.

Fixes #2298 and hardens some additional messages and checks to make it clearer when something goes awry.
2023-11-22 17:56:26 -08:00
Stella Laurenzo 5eae0adff1
Breakup python pytorch deps (#2582)
This lifts the core of the jit_ir_importer and ltc out of the pt1
project, making them peers to it. As a side effect of this layering, the
"MLIR bits" (dialects, etc.) are no longer commingled with the various
parts of the pt1 project, allowing pt1 and ltc to overlay cleanly onto a
more fundamental "just MLIR" Python core. Prior to this, the Python
namespace was polluted to the point that this could not happen.

That "just MLIR" Python core will be introduced in a followup, which
will create the space to upstream the FX and ONNX pure Python importers.

The primary non-NFC change to the API is:

* `torch_mlir.dialects.torch.importer.jit_ir` ->
`torch_mlir.jit_ir_importer`.
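
A hypothetical smoke check of the rename (the module paths are from the
bullet above; the commands themselves are illustrative):

```sh
# New location after this change:
python -c "import torch_mlir.jit_ir_importer"
# Old location, which this change retires:
python -c "import torch_mlir.dialects.torch.importer.jit_ir"
```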

The rest is source code layering so that we can make the pt1 project
optional without losing the other features.

Progress on #2546.
2023-11-19 12:10:19 -08:00
Stella Laurenzo 6961f0a247
Re-organize project structure to separate PyTorch dependencies from core project. (#2542)
This is a first step towards the structure we discussed here:
https://gist.github.com/stellaraccident/931b068aaf7fa56f34069426740ebf20

There are two primary goals:

1. Separate the core project (C++ dialects and conversions) from the
hard PyTorch dependencies. We move all such things into projects/pt1 as
a starting point since they are presently entangled with PT1-era APIs.
Additional work can be done to disentangle components from there
(specifically, LTC is identified as likely ultimately living in
`projects/ltc`).
2. Create space for native PyTorch2 Dynamo-based infra to be upstreamed
without needing to co-exist with the original TorchScript path.

Very little changes in this patch with respect to build layering or
options. These can be updated in a follow-up without commingling
directory structure changes.

This also takes steps toward a couple of other layering enhancements:

* Removes the llvm-external-projects/torch-mlir-dialects sub-project,
collapsing it into the main tree.
* Audits and fixes up the core C++ build to account for issues found
while moving things. This is just an opportunistic pass-through, but it
roughly halves the number of build actions for the project, from the
high 4000s to the low 2000s.

It deviates from the discussed plan by having a `projects/` tree instead
of `compat/`. On reflection, this should better accommodate the
follow-on code movement.

Once things are roughly in place and the CI passing, followups will
focus on more in-situ fixes and cleanups.
2023-11-02 19:45:55 -07:00
Stella Laurenzo 107ed0dec9
Fix two CMake issues that were causing Windows compilation failures. (#2461)
At some point in the past month, stablehlo gained a number of patches that implement a non-trivial bit of threaded reference code. It fails to compile on Windows in pretty catastrophic ways.

But this isn't the main problem: because of the way the MLIR CMake macros work, if we include stablehlo before our code, we end up building the whole project whether it is needed or not.
2023-09-12 20:51:45 -07:00
Stella Laurenzo 078d1e1a1d
Remove mlir-hlo (replace with stablehlo). (#2460)
We just have to do this: I ran into an issue today where I needed to make a one-line patch to stablehlo to work around a compiler issue, and it is entirely unclear how to do so, given that the mlir-hlo repo is a read-only export and sits at the tail end of a multi-week integration chain from the open-source stablehlo repo.

We've discussed this often enough and gotten a +1 from everyone that they are OK with taking the e2e testing hit if it becomes necessary; it has become necessary, as the current situation is unmanageable.

Looking at it, I expect it wouldn't actually be very difficult to build a little runner binary out of the stablehlo interpreter and call it as a subprocess in order to get the testing coverage back. I leave that as an exercise for the users of this part of the stack and recommend following the breadcrumbs from the deleted python/torch_mlir_e2e_test/stablehlo_backends/linalg_on_tensors.py file and the main.py changes.

Note that I am pointing us at a stablehlo fork for the moment until it is apparent that we don't need to carry any local patches to it. We can update this in a few days if everything is clear.
2023-09-12 19:10:02 -07:00
Ashay Rane 28bb866260
CI: prepare CI for ccache updates for MSVC/Windows (#2120)
This patch, by itself, doesn't fix caching on Windows, but once a new
release of ccache is available, caching for Windows builds should start
working again (validated by building ccache from source and using it
with LLVM builds).

Ccache rejects caching when either the `/Zi` or `/ZI` flags are used
during compilation on Windows, since these flags tell the compiler to
embed debug information in a PDB file (separate from the object file
produced by the compiler).  In particular, our CI builds add the `/Zi`
flag, making ccache mark these compiler invocations as uncacheable.

But what caused our CI to add debug flags, especially when we specified
`-DCMAKE_BUILD_TYPE=Release`?  On Windows, unless we specify the
`--config Release` flag during the CMake build step, CMake assumes a
debug build.  So all this while, we had been producing debug builds of
torch-mlir for every PR!  No wonder it took so long to build the Windows
binaries.

The reason for having to specify the configuration during the _build_
step (as opposed to the _configure_ step) of CMake on Windows is that
CMake's Visual Studio generators will produce _both_ Release and Debug
profiles during the CMake configure step (thus requiring a build-time
value that tells CMake whether to build in Release or Debug mode).
Luckily, on Linux and macOS, the `--config` flag seems to be simply
ignored, instead of causing build errors.

Strangely, based on cursory tests, it seems like on Windows we need to
specify the Release configuration both as `-DCMAKE_BUILD_TYPE=Release`
and as `--config Release`.  Dropping either made my build switch to a
Debug configuration.
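
Concretely, the working combination described above looks roughly like
this (directory names and any other options are assumptions):

```sh
# Configure, asking for Release...
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
# ...and build, where the Visual Studio generator still needs to be told
# which of its configurations to build; omitting this falls back to Debug.
cmake --build build --config Release
```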

Additionally, there is a bug in ccache v4.8 (although this is addressed
in trunk) that causes ccache to reject caching if the compiler
invocation includes any flag that starts with `/Z`, including `/Zc`,
which is added by LLVM's HandleLLVMOptions.cmake and which isn't related
to debug info or PDB files.  The next release of ccache should include
the fix, which is to reject caching only for `/Zi` and `/ZI` flags and
not all flags that start with `/Z`.

As a side note, debugging this problem was possible because of ccache's
log file, which is enabled by: `ccache --set-config="log_file=log.txt"`.
2023-05-12 12:45:01 -05:00
Maksim Levental c7a24c4d21
[CMake] Add headers to install target (#2100) 2023-05-08 14:27:00 -05:00
Maksim Levental 2eddb3fde7
WIP: No PyTorch dep (#1854) 2023-02-13 14:21:06 -06:00
Ashay Rane 711646d095
mhlo: migrate conversion to stablehlo (#1840)
This patch replaces all MHLO operations with their StableHLO
counterparts and adds a validation pass to ensure that no MHLO operations
remain before translating all StableHLO operations to the MHLO dialect
for further lowering to the Linalg dialect.

This patch also updates all lit tests so that they refer to the
`convert-torch-to-stablehlo` pass and so that they check for StableHLO
operations.
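
As a sketch of exercising the renamed pass from the command line (the
tool invocation and input file are assumptions; only the pass name is
taken from the message above):

```sh
# Run the Torch-to-StableHLO conversion on a Torch-dialect module.
torch-mlir-opt -convert-torch-to-stablehlo input.mlir
```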
2023-02-02 07:29:47 -06:00
Maksim Levental 8696752eb6
Expose metadata of torch-mlir types (plus verify DictType key and value types). (#1785) 2023-01-16 10:25:02 -06:00
Gaurav Shukla 0d209998d1
llvm: update tag to e864ac6945 (#1600)
Summary of changes:
1. Replace `string` iterator types with the `IteratorType` enum.
(e6598b053d)
2. Update `includes` with respect to the new directory layout of the MLIR HLO codebase.
(9fd8d251a8)
3. Update tags
   llvm: e864ac694540342d5e59f59c525c5082f2594fb8
   MHLO: eab364ba2a66bd0613efb94f8a738c1c97aaee92

Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-11-16 14:40:36 -08:00
Ashay Rane 2846776897
ci: enable ccache on Windows (#1548)
This patch makes a few small, but key, changes to enable ccache on
Windows.  First, it replaces the hendrikmuhs/ccache-action action with
command line invocations to the ccache binary, since the action has two
bugs, one of which causes CI to refer to different ccache artifacts
before versus after the build on Windows whereas the other bug can
sometimes cause the action to incorrectly infer that the cache is empty.

Second, this patch slightly alters the cache key, so that our old cache
artifacts, which have grown too big, are eventually discarded in favor
of the new, smaller cache artifacts.  Along the way, this patch also
keeps the RollPyTorch action's cache artifact separate from the regular build's
cache artifact so as to keep these artifacts small, and also because the
RollPyTorch action is off the critical path for most contributors.

Finally, this patch makes small changes to the CMake file so that on
Windows, the ccache binary is added as a prefix, as recommended on the
[ccache Wiki](https://github.com/ccache/ccache/wiki/MS-Visual-Studio).
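
One common way to express such a prefix in CMake is via the compiler
launcher variables; this is an assumption about the mechanism, not a
quote of the actual change:

```sh
# Hypothetical configure with ccache prefixed onto every compiler invocation.
cmake -GNinja -S . -B build \
      -DCMAKE_C_COMPILER_LAUNCHER=ccache \
      -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```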
2022-11-03 12:17:22 -05:00
Jae Hoon (Antonio) Kim fa5a8e21a3
Propagate parameter names to TorchMlirComputation (#1420)
* Propagate parameter name to MLIR

* Add TorchMlirNode Constructor Hook

* Make func_op mutable

- The purpose of this is to allow modification of func_op by a subclass
  backend

* Clean up unnecessary changes

* Remove unnecessary attribute case

* Address PR comments
2022-09-29 11:43:39 -04:00
Quinn Dawkins 41d45400be
Change out-of-tree build flag name (#1416) 2022-09-26 11:58:05 -04:00
Quinn Dawkins 8f2bf4ce10
Add build flag for out-of-tree builds (#1403) 2022-09-22 22:01:37 -04:00
Tanyo Kwok 29cafdbb61
[MHLO] refactor pass configurations (#1315)
Related to https://github.com/llvm/torch-mlir/issues/1227

1. Reduce MHLO #ifdefs
2. Address compilation warnings
2022-09-01 10:36:02 +08:00
powderluv c0630da678
Disable LTC by default until upstream revert relands (#1303)
* Disable LTC by default until upstream revert relands

Tracked with the WIP https://github.com/llvm/torch-mlir/pull/1292

* Disable LTC e2e tests temporarily

* Update setup.py

Disable LTC in setup.py temporarily until upstream is fixed.
2022-08-28 19:11:40 -07:00
Marius Brehler 1f1abda179
Don't explicitly set MLIR_PDLL_TABLEGEN_EXE (#1262)
With llvm/llvm-project@91b6f76, the variable `MLIR_PDLL_TABLEGEN_EXE` is
set as a cache variable in MLIR upstream.

Likely requires an update of externals/mlir-hlo to
tensorflow/mlir-hlo@4f2a00b or later.
2022-08-22 16:45:56 +02:00
Henry Tu ba17a4d6c0
Reenable LTC in out-of-tree build (for real this time) (#1205)
* Fix OOT LTC CI build failure

* Disable LTC during macOS package gen

* Add more details about static TorchMLIRJITIRImporter library
2022-08-19 15:25:00 -04:00
Jae Hoon (Antonio) Kim 0af55781ae
Propagate device data names (#1157)
* Propagate device data names

* Address PR comment

* Add example usage

* Add test for device data names

* Make TorchMlirComputation fields protected

* Add lazy backend device data name unit tests

* Disable lazy backend tests if LTC is disabled

* Add comments
2022-08-16 09:30:22 -04:00
Ashay Rane 606f4d2c0e
build: streamline options for enabling LTC and MHLO (#1221) 2022-08-12 23:49:28 -07:00
Marius Brehler 747356186f
Don't set MLIR_TABLEGEN_EXE (#1197)
With llvm/llvm-project@112499f landed, `MLIR_TABLEGEN_EXE` is given as a
cache variable in the MLIR core project. Other external projects, such
as torch-mlir, should not set the variable, as this breaks
cross-compilation.
2022-08-09 16:06:12 +02:00
Ashay Rane bb47c166a0
llvm: update tag to 061e0189 (#1180)
Summary of changes:
 - Switch to C++17 (similar to https://reviews.llvm.org/D131348)
 - Update MHLO to build with LLVM commit hash 061e0189
 - Replace deprecated `hasValue()` and `getValue()` with `has_value()`
   and `value()` respectively (https://reviews.llvm.org/D131349)
 - Use `TypedAttr` (https://reviews.llvm.org/D130092)
 - Use updated assembly format of `mhlo.compare` op (commit
   d03ef01e70fbf9afd0fa1976fbb7ed31838929b3 in MHLO repo)
2022-08-08 20:17:35 -07:00
Henry Tu 2c3b3606d0 Resolve remaining LTC CI failures (#1110)
* Replace CHECK_EQ with TORCH_CHECK_EQ

* Check value of TORCH_MLIR_USE_INSTALLED_PYTORCH during LTC build

* Update LTC XFAIL with NewZerosModule ops

* Explicitly blacklist _like ops

* Automatically blacklist new_/_like ops

* Prune away unused Python dependencies from LTC

* Add flag to disable LTC

* Autogen dummy _REFERENCE_LAZY_BACKEND library when LTC is disabled

* Implement compute_shape_var

* Removed Var tests from XFAIL Set

* XFAIL tests using _local_scalar_dense or index.Tensor

* Add StdDim tests to XFAIL set

* Autogen aten::cat
2022-07-30 09:40:02 -04:00
Henry Tu 47bb38d180 Reference Lazy Backend (#1045)
* Changed Example MLIR backend to Reference MLIR backend

* Moved reference_ltc_backend into csrc

* Merged sys_utils.h

* Renamed reference_ltc_backend to reference_lazy_backend

* Addressed review comments

* Update docs with new library name

* Removed _REFERENCE_LAZY_BACKEND from .gitignore

* Added reference_lazy_backend to the TorchMLIRPythonModules dependency list

Fixed typo in `ltc_examples.md`

Missed instance where `ltc_backend` was used instead of `lazy_backend`.
2022-07-30 09:40:02 -04:00
Henry Tu a605fe279c Add example Torch MLIR LTC Backend (#725) 2022-07-30 09:40:02 -04:00
Tanyo Kwok b80ce79b9f
[MHLO] Init MHLO view like op patterns (#1090)
* [MHLO] Init MHLO view like op patterns

See RFC: https://github.com/llvm/torch-mlir/issues/999

Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>

* update filecheck test cases

* rebase, remove chlo and clang-format
2022-07-22 15:18:18 +08:00
Ziheng Jiang c61c99e887
[MHLO] Init MHLO integration. (#1083)
Co-authored-by: Bairen Yi <yibairen.byron@bytedance.com>
Co-authored-by: Jiawei Wu <xremold@gmail.com>
Co-authored-by: Tianyou Guo <tianyou.gty@alibaba-inc.com>
Co-authored-by: Xu Yan <yancey.yx@alibaba-inc.com>
Co-authored-by: Ziheng Jiang <ziheng.jiang@bytedance.com>
2022-07-20 16:18:16 -07:00
Clément Fournier 886ad169e5
Fix out-of-tree build of torch-mlir-dialects (#726)
Follows up on #623 for out-of-tree builds of torch-mlir, which
added building `torch-mlir-dialects` as a subdirectory.

Our goal is to support both in-tree and out-of-tree builds of
`torch-mlir` with minimum hassle, for instance by using the same
variable names in both setups.

Specific changes to `externals/llvm-external-projects/torch-mlir-dialects/CMakeLists.txt`:
- We use `MLIR_FOUND` to detect that it is being built as a subdirectory
and the llvm+mlir cmake infrastructure is already set up (via
find_package in the parent build) as opposed to an in-tree build.
- For in-tree, the setting of variables and loading of llvm+mlir cmake
infrastructure is now conditionally performed.
- For in-tree, the names of the cmake variables being defined are
adjusted to match those `llvm-project` makes available through
`find_package(MLIR REQUIRED CONFIG)`, under the assumption that those
are the more "standardized" names.
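
As a rough sketch of the out-of-tree configure this supports (the install
path is a placeholder; `MLIR_DIR` is what feeds the
`find_package(MLIR REQUIRED CONFIG)` call named above):

```sh
# Hypothetical out-of-tree configure against a pre-built LLVM/MLIR install.
cmake -GNinja -S . -B build \
      -DMLIR_DIR=/path/to/llvm-install/lib/cmake/mlir
```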

Co-authored-by: Clément Fournier <clement.fournier@amd.com>

Co-authored-by: Liam Fitzpatrick <liam.fitzpatrick@xilinx.com>
2022-04-04 11:37:28 +02:00
Ahmed S. Taei 8383497704
[NFC] Rename external -> externals (#699) 2022-03-26 09:12:27 -07:00
Stephen Neuendorffer 9b2613533b Ensure that torch-mlir-dialects is built when we're out of tree
It's unclear to me what the right layering is here: are you expecting
torch-mlir-dialects to always get built with LLVM?  This is pretty
breaking for us, if so.
2022-02-28 11:41:31 -05:00
Stephen Neuendorffer 330042aa4c
Don't override MLIR_TABLEGEN_EXE (#622)
This should be set elsewhere depending on the build configuration.
In particular, we need to be careful when cross-compiling to pick
up the host mlir-tblgen.
2022-02-25 14:41:09 -08:00
Yi Zhang 869daf3c22 Add TMTensor dialect to torch-mlir
This is intended to explore support for non-structured ops that can't
be modeled by the Linalg dialect. `tm_tensor.scan` and `tm_tensor.scatter`
are added as the first such ops. The dialect should aim to be
upstreamed in the future.
2022-02-15 16:45:38 -05:00
stephenneuendorffer 3fd9b7789e
Bump LLVM to 881ff4e4ebe8cc0cc045c7c167cffb01f94f27f8 (#539) 2022-01-25 22:16:30 -08:00
stephenneuendorffer 614b889dc6
Enable python extensions when building out of tree (#363) 2021-10-27 17:04:12 -07:00
Stella Laurenzo fe69bb339c
Bump llvm-project to 3d92722f74993969243d1400bc3257ca3d03902f. (#369)
* Picks up Python configure changes (was pinned to a bad intermediate commit).
* Uses the new mlir_configure_python_dev_packages() to ensure CMake python is found consistently.
* Fixes the JIT importer to build as a MODULE vs SHARED (needed for linking to Python as a module, per config changes).
* Adds some notes to the README to help folks build a smaller set focused just on this project.
2021-10-21 21:09:00 -07:00
Stephen Neuendorffer fc5d03c081 post-commit review cleanup for #356
see https://github.com/llvm/torch-mlir/pull/356
2021-10-08 11:28:40 -07:00
stephenneuendorffer df12cc0c37
Add support for out-of-tree building (#356)
Currently, I haven't attempted to get the Python bits working;
instead, they are forcibly disabled.
2021-10-07 21:44:15 -07:00
Sean Silva 4fad753073 Move external/torch-mlir to the root of the repo. 2021-09-27 17:11:08 -07:00
Sean Silva d8f603a4e5 Remove old stuff in prep for move-to-root. 2021-09-27 17:11:08 -07:00
Sean Silva 404bd74ddf Port the bulk of the remaining code to torch-mlir
This leaves no real code outside torch-mlir.

This also renames the "npcomp backend contract" to "linalg on tensors
backend contract" as the name of the abstraction layer that RefBackend
(IREE too) accepts.
2021-09-27 12:48:33 -07:00
Sean Silva a99cbeeb7e Move TorchConversion dialect and TorchTo* into torch-mlir 2021-09-23 21:39:31 -07:00
Sean Silva 1a0b953ea7 Eliminate almost all mentions of IREE.
A few remain in examples/docs that will naturally be updated in due
time.

This regresses the list support and the general direction of more widely
supported control flow, lists/dicts/globals that we were going for with
the TorchScript path. The idea is that we are deferring that work to
make torch-mlir a very clean standalone thing. We will reboot it,
probably using some of the tools of iree_pydm to make it simpler, and in
a more natural place (such as an iree-torch repo that depends on IREE and
torch-mlir to build a working PyTorch frontend solution for IREE -- it
was really weird that npcomp depended on IREE).
2021-09-22 16:06:38 -07:00
Sean Silva a25163fbfa Remove old RefBackend
It is superseded by the new one.
2021-09-22 15:33:28 -07:00
Sean Silva 6d8e7f1bb1 Implement Python relayout from #311
Fixes https://github.com/llvm/mlir-npcomp/issues/311

The key change is that TorchPlugin is folded into
`torch_mlir.dialects.torch.importer.jit_ir` (it imports the PyTorch
JIT's IR, so that's a good, scoped name for it).
The CMake option `-DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=OFF` disables it,
which allows building without a PyTorch native dependency.
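
A hypothetical invocation of that option (only the flag is from the
message; the rest of the configure line is illustrative):

```sh
# Build without the native PyTorch dependency by turning the importer off.
cmake -S . -B build -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=OFF
```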
2021-09-21 09:29:40 -07:00
Sean Silva 68fefe7e1f Remove NPCOMP_ENABLE_IREE CMake flag.
Our new dependency management solution relies:
- on the C++ side, on the public iree-dialects project, which we
  include and are using as representative of some missing upstream
  ops (so we treat them "as if" they were upstream, with the hope of
  upstreaming them after some co-development has happened)
- on the Python side, on simple PYTHONPATH manipulation or installed
  Python packages. No CMake stuff required.
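
A hypothetical example of the Python-side mechanism (the path is a
placeholder):

```sh
# Point Python at the extra packages instead of wiring them through CMake.
export PYTHONPATH=/path/to/extra/python/packages:$PYTHONPATH
```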
2021-09-17 09:27:49 -07:00
Sean Silva 0eb767ea45 Remove frontends/pytorch directory.
It just contained the e2e testing framework. We now fold it into the
main project to reduce complexity.

- `frontends/pytorch/python/` -> `python/torch_support`
- `frontends/pytorch/e2e_testing -> e2e_testing`
- `frontends/pytorch/examples -> examples`
- `frontends/pytorch/test` -> `python/test`
- `torch_mlir_torchscript` python module -> `npcomp_torchscript`
- `torch_mlir_torchscript_e2e_test_configs` python module ->
  `npcomp_torchscript_e2e_test_configs`

This also changes the license of a handful of files from the
"pytorch-style" license to the regular LLVM/npcomp license. The only
people who committed to those files were myself and Yi.
2021-09-17 09:27:49 -07:00