* download_hash.py: generalized and data-driven
The script is currently limited to one hardcoded URL for kubernetes-related
binaries and to a fixed set of architectures.
The solution is three-fold (see the sketch after this list):
1. Use a URL template dictionary for each download -> this makes it easy to
add support for new downloads.
2. Source the architectures to search from the existing data
3. Enumerate the existing versions in the data and start searching from
the last one until no newer version is found (newer in the version
order sense, irrespective of actual age)
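A minimal sketch of the idea, with hypothetical component names, URL templates
and helper names (in the script, the architectures and known versions would
come from the existing checksum data):

```python
# Sketch only: the templates and names below are illustrative, not the actual
# content of download_hash.py.
import requests
from packaging.version import Version

# One URL template per download; supporting a new download is just a new entry.
HASH_URLS = {
    "kubectl": "https://dl.k8s.io/release/v{version}/bin/linux/{arch}/kubectl.sha256",
    "runc": "https://github.com/opencontainers/runc/releases/download/v{version}/runc.sha256sum",
}

def find_new_versions(component, arch, known_versions, session):
    """Probe successive patch releases after the newest known version until
    upstream has none (newest in version ordering, not in actual age)."""
    latest = max(Version(v) for v in known_versions)
    found = []
    while True:
        candidate = f"{latest.major}.{latest.minor}.{latest.micro + 1}"
        url = HASH_URLS[component].format(version=candidate, arch=arch)
        if session.get(url).status_code != 200:
            break
        found.append(candidate)
        latest = Version(candidate)
    return found
```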
* download_hash.py: support for 'multi-hash' file + runc
runc upstream does not provide one hash file per asset in their
releases, but one file with all the hashes.
To handle this (and/or any arbitrary format from upstreams), add a
dictionary mapping the name of the download to a lambda function which
transforms the file provided by upstream into a dictionary of hashes,
keyed by architecture (sketched below).
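A hedged sketch of such a mapping, assuming runc's hash file consists of
`<digest>  runc.<arch>` lines (the actual parsing in the script may differ):

```python
# Illustrative sketch; the real extraction functions may look different.
HASH_EXTRACTORS = {
    # runc: a single file listing every asset, e.g. "<digest>  runc.amd64".
    # The lambda turns that whole file into {architecture: digest}.
    "runc": lambda content: {
        filename.rsplit(".", 1)[-1]: digest
        for digest, filename in (
            line.split() for line in content.splitlines() if line.strip()
        )
    },
}

# Usage: HASH_EXTRACTORS["runc"](downloaded_text) -> {"amd64": "...", "arm64": "..."}
```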
* download_hash: argument handling with argparse
Allow the script to be called with a list of components, to only
download checksums of new versions for those.
By default, we get checksums of new versions for all components
supported by the script (see the sketch below).
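A minimal sketch of the argument handling, with a hypothetical list of
supported components:

```python
import argparse

# Hypothetical list; the real script derives this from its download data.
SUPPORTED = ["kubectl", "kubeadm", "kubelet", "runc"]

parser = argparse.ArgumentParser(
    description="Fetch checksums for new versions of the selected components."
)
parser.add_argument(
    "components",
    nargs="*",
    metavar="COMPONENT",
    help="components to update (default: all supported components)",
)
args = parser.parse_args()

components = args.components or SUPPORTED
unknown = set(components) - set(SUPPORTED)
if unknown:
    parser.error(f"unsupported components: {', '.join(sorted(unknown))}")
```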
* download_hash: propagate new patch versions to all archs
* download_hash: add support for 'simple hash' components
* download_hash: support 'multi-hash' components
* download_hash: document missing support
* download_hash: use persistent session
This allows reusing HTTP connections and is more efficient.
From rough measurements it saves around 25-30% of execution time. A
minimal illustration follows.
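A minimal illustration, assuming requests is used (the URLs are only
examples):

```python
import requests

# A single Session reuses the underlying TCP/TLS connection (HTTP keep-alive)
# across requests instead of opening a new connection each time.
session = requests.Session()

urls = [
    "https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl.sha256",
    "https://dl.k8s.io/release/v1.29.0/bin/linux/arm64/kubectl.sha256",
]

for url in urls:
    response = session.get(url)
    response.raise_for_status()
    print(url, response.text.strip())
```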
* download_hash: cache requests for 'multi-hash' files
This avoids re-downloading the same file for different architectures and
re-parsing it. One possible approach is sketched below.
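One way to do it, sketched with functools.lru_cache keyed on the URL (the
script's actual caching may be implemented differently):

```python
import functools

import requests

session = requests.Session()

@functools.lru_cache(maxsize=None)
def fetch_hash_file(url):
    """Download a hash file once per URL; subsequent calls for the same URL
    (e.g. for other architectures) return the cached content."""
    response = session.get(url)
    response.raise_for_status()
    return response.text
```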
* download_hash: document usage
---------
Co-authored-by: Max Gautier <mg@max.gautier.name>
* CI: reduce VM resource requests to improve scheduling
* CI: Reduce default jobs; add labels (ci-full/extended) to run more tests
* CI: use job dependencies instead of stages
* CI: run pre-commit hooks in one job
* CI: Use Kubevirt VM to run Molecule and Vagrant jobs
---------
Co-authored-by: ant31 <2t.antoine@gmail.com>
- Require a 'lgtm' or 'ok-to-test' label for running CI after the
moderator stage
Signed-off-by: ant31 <2t.antoine@gmail.com>
Co-authored-by: ant31 <2t.antoine@gmail.com>
* Use alternate self-sufficient shellcheck pre-commit hook
This pre-commit hook does not require prerequisites on the host, making
it easier to run in CI workflows.
* Switch to upstream ansible-lint pre-commit hook
This way, the hook is self-contained and does not depend on a previous
virtualenv installation.
* pre-commit: fix hooks dependencies
- ansible-syntax-check
- tox-inventory-builder
- jinja-syntax-check
* Fix ci-matrix pre-commit hook
- Remove the dependency on pydblite, which fails to set up on recent
Python versions
- Discard the shell script and put everything into the pre-commit hook
* pre-commit: apply autofix hooks and fix the rest manually
- markdownlint (manual fix)
- end-of-file-fixer
- requirements-txt-fixer
- trailing-whitespace
* Convert check_typo to pre-commit + use maintained version
client9/misspell is unmaintained, and has been forked by the golangci
team, see https://github.com/client9/misspell/issues/197#issuecomment-1596318684.
They haven't yet added a pre-commit config, so use my fork with the
pre-commit hook config until the pull request is merged.
* Convert collection-build-install to pre-commit
* Run pre-commit hooks in dynamic pipeline
Use the gitlab dynamic child pipelines feature to have one source of
truth for the pre-commit jobs: the pre-commit config file (see the
sketch below).
Use one cache per pre-commit hook. This should reduce the time spent in
the "fetching cache" steps in gitlab-ci, since each job will have a
separate cache with only its hook installed.
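A hypothetical generator for such a child pipeline, reading the hooks from
.pre-commit-config.yaml and emitting one job with its own cache per hook (an
illustration only, not the actual pipeline generator):

```python
# Hypothetical sketch: one GitLab CI job per pre-commit hook, derived from the
# single source of truth, .pre-commit-config.yaml.
import yaml

with open(".pre-commit-config.yaml") as f:
    config = yaml.safe_load(f)

jobs = {}
for repo in config["repos"]:
    for hook in repo.get("hooks", []):
        hook_id = hook["id"]
        jobs[f"pre-commit-{hook_id}"] = {
            "image": "python:3.11",  # illustrative image
            "variables": {"PRE_COMMIT_HOME": ".cache/pre-commit"},
            # One cache per hook, so a job only fetches its own hook environment.
            "cache": {"key": f"pre-commit-{hook_id}", "paths": [".cache/pre-commit"]},
            "script": [
                "pip install pre-commit",
                f"pre-commit run --all-files {hook_id}",
            ],
        }

with open("child-pipeline.yml", "w") as f:
    yaml.safe_dump(jobs, f)
```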
* Remove gitlab-ci job now handled by pre-commit
* pre-commit: adjust markdownlint defaults, md fixes
Use a style file as recommended by upstream. This makes for only one
source of truth.
Keep the previous upstream default for MD007 (the upstream default changed
in https://github.com/markdownlint/markdownlint/pull/373)
* Update pre-commit hooks
---------
Co-authored-by: Max Gautier <mg@max.gautier.name>