torch-mlir/.gitignore

*.swp
.cache/
.vscode
.ccache
.env
*.code-workspace
.ipynb_checkpoints
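# Python virtual environments and local PyTorch/libtorch checkouts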
*.venv/
mlir_venv/
externals/pytorch/
libtorch*
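# Build output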
/build/
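# Python bytecode and pytype caches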
__pycache__
*.pyc
.pytype
# Pip artifacts.
*.egg-info
*.whl
/wheelhouse
# Bazel
bazel-*
# Autogenerated files
/python/torch_mlir/csrc/base_lazy_backend/generated
# Docker builds
build_oot/
docker_venv/
llvm-build/