master->main

Signed-off-by: David Galloway <dgallowa@redhat.com>
pull/7198/head
David Galloway 2022-05-24 14:40:00 -04:00 committed by Guillaume Abrioux
parent 41d62596fc
commit bcedff95bd
54 changed files with 129 additions and 129 deletions

View File

@@ -41,11 +41,11 @@ This will prevent the engine merging your pull request.
 ### Backports (maintainers only)
-If you wish to see your work from 'master' being backported to a stable branch you can ping a maintainer
-so he will set the backport label on your PR. Once the PR from master is merged, a backport PR will be created by mergify,
+If you wish to see your work from 'main' being backported to a stable branch you can ping a maintainer
+so he will set the backport label on your PR. Once the PR from main is merged, a backport PR will be created by mergify,
 if there is a cherry-pick conflict you must resolv it by pulling the branch.
-**NEVER** push directly into a stable branch, **unless** the code from master has diverged so much that the files don't exist in the stable branch.
+**NEVER** push directly into a stable branch, **unless** the code from main has diverged so much that the files don't exist in the stable branch.
 If that happens, inform the maintainers of the reasons why you pushed directly into a stable branch, if the reason is invalid, maintainers will immediatly close your pull request.
 ## Good to know
@@ -77,8 +77,8 @@ You must run `./generate_group_vars_sample.sh` before you commit your changes so
 ### Keep your branch up-to-date
-Sometimes, a pull request can be subject to long discussion, reviews and comments, meantime, `master`
-moves forward so let's try to keep your branch rebased on master regularly to avoid huge conflict merge.
+Sometimes, a pull request can be subject to long discussion, reviews and comments, meantime, `main`
+moves forward so let's try to keep your branch rebased on main regularly to avoid huge conflict merge.
 A rebased branch is more likely to be merged easily & shorter.
 ### Organize your commits
@@ -100,4 +100,4 @@ If you've got commits fixing typos or other problems introduced by previous comm
 If you are new to Git, these links might help:
 - [https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History)
 - [http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html)
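
The "keep your branch up-to-date" hunk above boils down to a routine rebase. A minimal sketch, assuming your clone has the ceph/ceph-ansible repository configured as the `origin` remote and your work sits on a feature branch (the branch name is a placeholder):

    # Refresh remote refs and replay your commits on top of the current main.
    git fetch origin
    git rebase origin/main my-feature-branch
    # After resolving any conflicts, update the pull request:
    git push --force-with-lease

`--force-with-lease` is used instead of a plain force-push so the update is refused if someone else pushed to the branch in the meantime.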

View File

@@ -101,7 +101,7 @@ tag:
 echo "$(SHORTCOMMIT) on $(BRANCH) is already tagged as $(TAG)"; \
 exit 1; \
 fi
-if [[ "$(BRANCH)" != "master" ]] && \
+if [[ "$(BRANCH)" != "master" || "$(BRANCH)" != "main" ]] && \
 ! [[ "$(BRANCH)" =~ ^stable- ]]; then \
 echo Cannot tag $(BRANCH); \
 exit 1; \
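
One caveat worth noting on this hunk: the added guard joins the two comparisons with `||`, and since any branch name differs from at least one of "master" and "main", that test is always true; the `tag` target would then refuse every branch that is not `stable-*`, including `main` itself. A plain-shell sketch of the presumably intended check, using a shell variable in place of the Makefile's `$(BRANCH)`:

    # BRANCH is assumed to hold the current branch name.
    BRANCH=$(git rev-parse --abbrev-ref HEAD)

    # Refuse to tag only when the branch is neither main/master nor a stable-* branch,
    # which requires AND between the two comparisons rather than OR.
    if [[ "$BRANCH" != "master" && "$BRANCH" != "main" ]] && \
       ! [[ "$BRANCH" =~ ^stable- ]]; then
        echo "Cannot tag $BRANCH"
        exit 1
    fi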

View File

@@ -119,7 +119,7 @@ A secondary zone pulls a realm in order to sync data to it.
 Finally, The variable `rgw_zone` is set to "default" to enable compression for clusters configured without rgw multi-site.
 If multisite is configured `rgw_zone` should not be set to "default".
-For more detail information on multisite please visit: <https://docs.ceph.com/docs/master/radosgw/multisite/>.
+For more detail information on multisite please visit: <https://docs.ceph.com/docs/main/radosgw/multisite/>.
 ## Deployment Scenario #1: Single Realm & Zonegroup with Multiple Ceph Clusters

View File

@@ -7,5 +7,5 @@ Ansible playbooks for Ceph, the distributed filesystem.
 Please refer to our hosted documentation here: https://docs.ceph.com/projects/ceph-ansible/en/latest/
-You can view documentation for our ``stable-*`` branches by substituting ``master`` in the link
+You can view documentation for our ``stable-*`` branches by substituting ``main`` in the link
 above for the name of the branch. For example: https://docs.ceph.com/projects/ceph-ansible/en/stable-6.0/

View File

@@ -22,7 +22,7 @@ verify_commit () {
 for com in ${commit//,/ }; do
 if [[ $(git cat-file -t "$com" 2>/dev/null) != commit ]]; then
 echo "$com does not exist in your tree"
-echo "Run 'git fetch origin master && git pull origin master'"
+echo "Run 'git fetch origin main && git pull origin main'"
 exit 1
 fi
 done
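
For readers less familiar with the two bash idioms in this helper: `${commit//,/ }` rewrites every comma in `$commit` as a space so a comma-separated list of SHAs can be iterated with `for`, and `git cat-file -t <object>` prints the object's type, which is the string `commit` only when the SHA resolves to a commit present in the local clone. A tiny sketch with a hypothetical input value:

    commit="1a2b3c4,5d6e7f8"        # hypothetical comma-separated SHAs
    echo ${commit//,/ }             # prints: 1a2b3c4 5d6e7f8
    git cat-file -t 1a2b3c4         # prints "commit" when that SHA exists locally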

View File

@@ -22,7 +22,7 @@ function check_existing_remote {
 }
 function pull_origin {
-git pull origin master
+git pull origin main
 }
 function reset_hard_origin {
@@ -30,7 +30,7 @@ function reset_hard_origin {
 git checkout "$LOCAL_BRANCH"
 git fetch origin --prune
 git fetch --tags
-git reset --hard origin/master
+git reset --hard origin/main
 }
 function check_git_status {
@@ -79,9 +79,9 @@ for ROLE in $ROLES; do
 REMOTE=$ROLE
 check_existing_remote "$REMOTE"
 reset_hard_origin
-# First we filter branches by rewriting master with the content of roles/$ROLE
+# First we filter branches by rewriting main with the content of roles/$ROLE
 # this gives us a new commit history
-for BRANCH in $(git branch --list --remotes "origin/stable-*" "origin/master" "origin/ansible-1.9" | cut -d '/' -f2); do
+for BRANCH in $(git branch --list --remotes "origin/stable-*" "origin/main" "origin/ansible-1.9" | cut -d '/' -f2); do
 git checkout -B "$BRANCH" origin/"$BRANCH"
 # use || true to avoid exiting in case of 'Found nothing to rewrite'
 git filter-branch -f --prune-empty --subdirectory-filter roles/"$ROLE" || true

View File

@@ -86,9 +86,9 @@ If a change should be backported to a ``stable-*`` Git branch:
 - Create a new pull request against the ``stable-5.0`` branch.
 - Ensure that your pull request's title has the prefix "backport:", so it's clear
 to reviewers what this is about.
-- Add a comment in your backport pull request linking to the original (master) pull request.
-All changes to the stable branches should land in master first, so we avoid
+- Add a comment in your backport pull request linking to the original (main) pull request.
+All changes to the stable branches should land in main first, so we avoid
 regressions.
 Once this is done, one of the project maintainers will tag the tip of the
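
When the mergify-driven backport does not apply cleanly and the change has to be prepared by hand, the flow described above looks roughly like this (branch name, fork remote and SHA are placeholders):

    # Start the backport from the target stable branch, stable-5.0 as in the example above.
    git fetch origin
    git checkout -b backport-stable-5.0-my-fix origin/stable-5.0

    # Cherry-pick the commit that already landed on main; -x records the original SHA in the message.
    git cherry-pick -x <sha-merged-into-main>

    # Push to your fork and open a PR against stable-5.0 with a "backport:" title,
    # linking the original (main) pull request in a comment.
    git push my-fork backport-stable-5.0-my-fix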

View File

@@ -20,7 +20,7 @@ You can install directly from the source on GitHub by following these steps:
 $ git clone https://github.com/ceph/ceph-ansible.git
 - Next, you must decide which branch of ``ceph-ansible`` you wish to use. There
-are stable branches to choose from or you could use the master branch:
+are stable branches to choose from or you could use the main branch:
 .. code-block:: console
@@ -79,7 +79,7 @@ Releases
 The following branches should be used depending on your requirements. The ``stable-*``
 branches have been QE tested and sometimes receive backport fixes throughout their lifecycle.
-The ``master`` branch should be considered experimental and used with caution.
+The ``main`` branch should be considered experimental and used with caution.
 - ``stable-3.0`` Supports Ceph versions ``jewel`` and ``luminous``. This branch requires Ansible version ``2.4``.
@@ -93,7 +93,7 @@ The ``master`` branch should be considered experimental and used with caution.
 - ``stable-6.0`` Supports Ceph version ``pacific``. This branch requires Ansible version ``2.9``.
-- ``master`` Supports the master branch of Ceph. This branch requires Ansible version ``2.10``.
+- ``main`` Supports the main branch of Ceph. This branch requires Ansible version ``2.10``.
 .. NOTE:: ``stable-3.0`` and ``stable-3.1`` branches of ceph-ansible are deprecated and no longer maintained.
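
Put together, the install-from-source steps being renamed here amount to cloning the repository and then checking out whichever release line fits your Ceph and Ansible versions, for example:

    git clone https://github.com/ceph/ceph-ansible.git
    cd ceph-ansible
    # A QE-tested release line, e.g. pacific with Ansible 2.9:
    git checkout stable-6.0
    # or the experimental branch tracking Ceph's own main branch:
    # git checkout main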

View File

@@ -44,7 +44,7 @@ Dev repository
 ~~~~~~~~~~~~~~
 If ``ceph_repository`` is set to ``dev``, packages you will be by default installed from https://shaman.ceph.com/, this can not be tweaked.
-You can obviously decide which branch to install with the help of ``ceph_dev_branch`` (defaults to 'master').
+You can obviously decide which branch to install with the help of ``ceph_dev_branch`` (defaults to 'main').
 Additionally, you can specify a SHA1 with ``ceph_dev_sha1``, defaults to 'latest' (as in latest built).
 Custom repository
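
As an illustration of these variables in use (the inventory path is a placeholder, and the same values can just as well live in group_vars):

    # Deploy with packages built from shaman's dev repos, pinned to a branch and sha1.
    ansible-playbook -vv -i <inventory> site.yml.sample \
        -e ceph_repository=dev \
        -e ceph_dev_branch=main \
        -e ceph_dev_sha1=latest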

View File

@@ -27,13 +27,13 @@ The following environent variables are available for use:
 * ``CEPH_DOCKER_IMAGE_TAG``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_docker_image_name``.
-* ``CEPH_DEV_BRANCH``: (default: ``master``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
+* ``CEPH_DEV_BRANCH``: (default: ``main``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
 like to install from shaman.ceph.com.
 * ``CEPH_DEV_SHA1``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_dev_sha1`` which defines which sha1 we'd like
 to install from shaman.ceph.com.
-* ``UPDATE_CEPH_DEV_BRANCH``: (default: ``master``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
+* ``UPDATE_CEPH_DEV_BRANCH``: (default: ``main``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
 like to update to from shaman.ceph.com.
 * ``UPDATE_CEPH_DEV_SHA1``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_dev_sha1`` which defines which sha1 we'd like
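
A typical way to exercise these variables is to export them before invoking the test suite; the tox environment name below is a placeholder:

    # Point the functional tests at the main-branch dev builds from shaman
    # and at the matching container image tag.
    export CEPH_DEV_BRANCH=main
    export CEPH_DEV_SHA1=latest
    export CEPH_DOCKER_IMAGE_TAG=latest-main
    tox -e <testenv-name>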

View File

@@ -208,14 +208,14 @@ dummy:
 #
 # Enabled when ceph_repository == 'dev'
 #
-#ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
+#ceph_dev_branch: main # development branch you would like to use e.g: main, wip-hack
 #ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)
 #nfs_ganesha_dev: false # use development repos for nfs-ganesha
 # Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman
-# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous
-#nfs_ganesha_flavor: "ceph_master"
+# flavors so far include: ceph_main, ceph_jewel, ceph_kraken, ceph_luminous
+#nfs_ganesha_flavor: "ceph_main"
 #ceph_iscsi_config_dev: true # special repo for deploying iSCSI gateways
@@ -561,7 +561,7 @@ dummy:
 # DOCKER #
 ##########
 #ceph_docker_image: "ceph/daemon"
-#ceph_docker_image_tag: latest-master
+#ceph_docker_image_tag: latest-main
 #ceph_docker_registry: quay.io
 #ceph_docker_registry_auth: false
 #ceph_docker_registry_username:
@@ -694,7 +694,7 @@ dummy:
 #grafana_uid: 472
 #grafana_datasource: Dashboard
 #grafana_dashboards_path: "/etc/grafana/dashboards/ceph-dashboard"
-#grafana_dashboard_version: master
+#grafana_dashboard_version: main
 #grafana_dashboard_files:
 # - ceph-cluster.json
 # - cephfs-overview.json

View File

@@ -208,14 +208,14 @@ ceph_rhcs_version: 5
 #
 # Enabled when ceph_repository == 'dev'
 #
-#ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
+#ceph_dev_branch: main # development branch you would like to use e.g: main, wip-hack
 #ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)
 #nfs_ganesha_dev: false # use development repos for nfs-ganesha
 # Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman
-# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous
-#nfs_ganesha_flavor: "ceph_master"
+# flavors so far include: ceph_main, ceph_jewel, ceph_kraken, ceph_luminous
+#nfs_ganesha_flavor: "ceph_main"
 ceph_iscsi_config_dev: false
@@ -694,7 +694,7 @@ grafana_container_image: registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:5
 #grafana_uid: 472
 #grafana_datasource: Dashboard
 #grafana_dashboards_path: "/etc/grafana/dashboards/ceph-dashboard"
-#grafana_dashboard_version: master
+#grafana_dashboard_version: main
 #grafana_dashboard_files:
 # - ceph-cluster.json
 # - cephfs-overview.json

View File

@@ -84,7 +84,7 @@ EXAMPLES = '''
 cephadm_adopt:
 name: mon.foo
 style: legacy
-image: quay.ceph.io/ceph/daemon-base:latest-master-devel
+image: quay.ceph.io/ceph/daemon-base:latest-main-devel
 pull: false
 firewalld: false
@@ -93,7 +93,7 @@ EXAMPLES = '''
 name: mon.foo
 style: legacy
 environment:
-CEPHADM_IMAGE: quay.ceph.io/ceph/daemon-base:latest-master-devel
+CEPHADM_IMAGE: quay.ceph.io/ceph/daemon-base:latest-main-devel
 '''
 RETURN = '''# '''
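
For orientation, the module examples above wrap what cephadm itself does on the host; a hedged sketch of the roughly equivalent direct invocation, with the CLI flags assumed from cephadm's adopt subcommand:

    # Adopt a legacy-deployed monitor with a specific container image,
    # either via the global --image flag or the CEPHADM_IMAGE environment variable.
    cephadm --image quay.ceph.io/ceph/daemon-base:latest-main-devel adopt --style legacy --name mon.foo

    # equivalently:
    CEPHADM_IMAGE=quay.ceph.io/ceph/daemon-base:latest-main-devel cephadm adopt --style legacy --name mon.foo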

View File

@@ -124,7 +124,7 @@ EXAMPLES = '''
 cephadm_bootstrap:
 mon_ip: 192.168.42.1
 fsid: 3c9ba63a-c7df-4476-a1e7-317dfc711f82
-image: quay.ceph.io/ceph/daemon-base:latest-master-devel
+image: quay.ceph.io/ceph/daemon-base:latest-main-devel
 dashboard: false
 monitoring: false
 firewalld: false
@@ -133,7 +133,7 @@ EXAMPLES = '''
 cephadm_bootstrap:
 mon_ip: 192.168.42.1
 environment:
-CEPHADM_IMAGE: quay.ceph.io/ceph/daemon-base:latest-master-devel
+CEPHADM_IMAGE: quay.ceph.io/ceph/daemon-base:latest-main-devel
 '''
 RETURN = '''# '''

View File

@@ -1,4 +1,4 @@
-# These are Python requirements needed to run ceph-ansible master
+# These are Python requirements needed to run ceph-ansible main
 ansible>=2.10,<2.11,!=2.9.10
 netaddr
 six

View File

@@ -1,7 +1,7 @@
 ---
-# These are Ansible requirements needed to run ceph-ansible master
+# These are Ansible requirements needed to run ceph-ansible main
 collections:
 - name: https://opendev.org/openstack/ansible-config_template
 version: 1.2.1
 type: git
 - name: ansible.utils
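
These two manifests, one for Python packages and one for Ansible collections, are consumed in the usual way; the file names below are assumed, since the diff view does not show paths:

    # Python dependencies (ansible, netaddr, six) for the main branch
    pip install -r requirements.txt

    # Ansible collections (ansible-config_template, ansible.utils, ...)
    ansible-galaxy collection install -r requirements.yml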

View File

@@ -200,14 +200,14 @@ ceph_obs_repo: "https://download.opensuse.org/repositories/filesystems:/ceph:/{{
 #
 # Enabled when ceph_repository == 'dev'
 #
-ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
+ceph_dev_branch: main # development branch you would like to use e.g: main, wip-hack
 ceph_dev_sha1: latest # distinct sha1 to use, defaults to 'latest' (as in latest built)
 nfs_ganesha_dev: false # use development repos for nfs-ganesha
 # Set this to choose the version of ceph dev libraries used in the nfs-ganesha packages from shaman
-# flavors so far include: ceph_master, ceph_jewel, ceph_kraken, ceph_luminous
-nfs_ganesha_flavor: "ceph_master"
+# flavors so far include: ceph_main, ceph_jewel, ceph_kraken, ceph_luminous
+nfs_ganesha_flavor: "ceph_main"
 ceph_iscsi_config_dev: true # special repo for deploying iSCSI gateways
@@ -553,7 +553,7 @@ ceph_tcmalloc_max_total_thread_cache: 134217728
 # DOCKER #
 ##########
 ceph_docker_image: "ceph/daemon"
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 ceph_docker_registry: quay.io
 ceph_docker_registry_auth: false
 #ceph_docker_registry_username:
@@ -686,7 +686,7 @@ grafana_container_memory: 4
 grafana_uid: 472
 grafana_datasource: Dashboard
 grafana_dashboards_path: "/etc/grafana/dashboards/ceph-dashboard"
-grafana_dashboard_version: master
+grafana_dashboard_version: main
 grafana_dashboard_files:
 - ceph-cluster.json
 - cephfs-overview.json

View File

@@ -29,7 +29,7 @@
 block:
 - name: ceph-iscsi dependency repositories
 get_url:
-url: "https://shaman.ceph.com/api/repos/tcmu-runner/master/latest/{{ ansible_facts['distribution'] | lower }}/{{ ansible_facts['distribution_major_version'] }}/repo?arch={{ ansible_facts['architecture'] }}"
+url: "https://shaman.ceph.com/api/repos/tcmu-runner/main/latest/{{ ansible_facts['distribution'] | lower }}/{{ ansible_facts['distribution_major_version'] }}/repo?arch={{ ansible_facts['architecture'] }}"
 dest: '/etc/yum.repos.d/tcmu-runner-dev.repo'
 force: true
 register: result
@@ -37,7 +37,7 @@
 - name: ceph-iscsi development repository
 get_url:
-url: "https://shaman.ceph.com/api/repos/{{ item }}/master/latest/{{ ansible_facts['distribution'] | lower }}/{{ ansible_facts['distribution_major_version'] }}/repo"
+url: "https://shaman.ceph.com/api/repos/{{ item }}/main/latest/{{ ansible_facts['distribution'] | lower }}/{{ ansible_facts['distribution_major_version'] }}/repo"
 dest: '/etc/yum.repos.d/{{ item }}-dev.repo'
 force: true
 register: result
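
These tasks resolve a yum repo file from shaman's REST API, whose path follows the pattern /api/repos/<project>/<branch>/<sha1>/<distro>/<distro-major-version>/repo as seen in the URLs above. The same endpoint can be checked by hand; the distro values below are only illustrative:

    # Fetch the repo file shaman generates for tcmu-runner's main branch on CentOS 8 x86_64.
    curl -L "https://shaman.ceph.com/api/repos/tcmu-runner/main/latest/centos/8/repo?arch=x86_64" \
         -o /etc/yum.repos.d/tcmu-runner-dev.repo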

View File

@@ -29,4 +29,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,4 +29,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,4 +29,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,4 +29,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,4 +29,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -31,4 +31,4 @@ rgw_bucket_default_quota_max_objects: 1638400
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -45,4 +45,4 @@ lvm_volumes:
 db_vg: journals
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -38,7 +38,7 @@ dashboard_admin_password: $sX!cD$rYU6qR^B!
 grafana_admin_password: +xFRe+RES@7vg24n
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 node_exporter_container_image: "quay.ceph.io/prometheus/node-exporter:v0.17.0"
 prometheus_container_image: "quay.ceph.io/prometheus/prometheus:v2.7.2"
 alertmanager_container_image: "quay.ceph.io/prometheus/alertmanager:v0.16.2"

View File

@@ -7,4 +7,4 @@ ganesha_conf_overrides: |
 }
 nfs_ganesha_stable: true
 nfs_ganesha_dev: false
-nfs_ganesha_flavor: "ceph_master"
+nfs_ganesha_flavor: "ceph_main"

View File

@@ -5,5 +5,5 @@ cluster_network: "192.168.31.0/24"
 dashboard_admin_password: $sX!cD$rYU6qR^B!
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon-base
-ceph_docker_image_tag: latest-master-devel
+ceph_docker_image_tag: latest-main-devel
 containerized_deployment: true

View File

@@ -27,7 +27,7 @@ dashboard_admin_user_ro: true
 grafana_admin_password: +xFRe+RES@7vg24n
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 node_exporter_container_image: "quay.ceph.io/prometheus/node-exporter:v0.17.0"
 prometheus_container_image: "quay.ceph.io/prometheus/prometheus:v2.7.2"
 alertmanager_container_image: "quay.ceph.io/prometheus/alertmanager:v0.16.2"

View File

@@ -35,7 +35,7 @@ dashboard_admin_password: $sX!cD$rYU6qR^B!
 grafana_admin_password: +xFRe+RES@7vg24n
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 node_exporter_container_image: "quay.ceph.io/prometheus/node-exporter:v0.17.0"
 prometheus_container_image: "quay.ceph.io/prometheus/prometheus:v2.7.2"
 alertmanager_container_image: "quay.ceph.io/prometheus/alertmanager:v0.16.2"

View File

@@ -39,4 +39,4 @@ fsid: 40358a87-ab6e-4bdc-83db-1d909147861c
 generate_fsid: false
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -25,4 +25,4 @@ handler_health_mon_check_delay: 10
 handler_health_osd_check_delay: 10
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,4 +29,4 @@ handler_health_mon_check_delay: 10
 handler_health_osd_check_delay: 10
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -31,4 +31,4 @@ handler_health_mon_check_delay: 10
 handler_health_osd_check_delay: 10
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -39,4 +39,4 @@ openstack_pools:
 - "{{ openstack_cinder_pool }}"
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -11,7 +11,7 @@ all:
 rgw_keystone_url: 'http://192.168.95.10:5000', rgw_s3_auth_use_keystone: 'true', rgw_keystone_revocation_interval: 0}
 cluster: mycluster
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 ceph_docker_registry: quay.ceph.io
 cephfs_data_pool:
 name: 'manila_data'

View File

@@ -34,7 +34,7 @@ dashboard_admin_password: $sX!cD$rYU6qR^B!
 grafana_admin_password: +xFRe+RES@7vg24n
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 node_exporter_container_image: "quay.ceph.io/prometheus/node-exporter:v0.17.0"
 prometheus_container_image: "quay.ceph.io/prometheus/prometheus:v2.7.2"
 alertmanager_container_image: "quay.ceph.io/prometheus/alertmanager:v0.16.2"

View File

@@ -30,4 +30,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -30,4 +30,4 @@ ceph_conf_overrides:
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -18,4 +18,4 @@ dashboard_enabled: False
 copy_admin_key: True
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -17,4 +17,4 @@ openstack_config: False
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -17,4 +17,4 @@ openstack_config: False
 dashboard_enabled: False
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -18,4 +18,4 @@ dashboard_enabled: False
 copy_admin_key: True
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -17,4 +17,4 @@ dashboard_enabled: False
 copy_admin_key: True
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -19,4 +19,4 @@ dashboard_enabled: False
 copy_admin_key: True
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main

View File

@@ -29,7 +29,7 @@ dashboard_admin_password: $sX!cD$rYU6qR^B!
 grafana_admin_password: +xFRe+RES@7vg24n
 ceph_docker_registry: quay.ceph.io
 ceph_docker_image: ceph-ci/daemon
-ceph_docker_image_tag: latest-master
+ceph_docker_image_tag: latest-main
 node_exporter_container_image: "quay.ceph.io/prometheus/node-exporter:v0.17.0"
 prometheus_container_image: "quay.ceph.io/prometheus/prometheus:v2.7.2"
 alertmanager_container_image: "quay.ceph.io/prometheus/alertmanager:v0.16.2"

View File

@@ -7,4 +7,4 @@ ganesha_conf_overrides: |
 }
 nfs_ganesha_stable: true
 nfs_ganesha_dev: false
-nfs_ganesha_flavor: "ceph_master"
+nfs_ganesha_flavor: "ceph_main"

View File

@@ -4,7 +4,7 @@ import ca_test_common
 import cephadm_bootstrap
 fake_fsid = '0f1e0605-db0b-485c-b366-bd8abaa83f3b'
-fake_image = 'quay.ceph.io/ceph/daemon-base:latest-master-devel'
+fake_image = 'quay.ceph.io/ceph/daemon-base:latest-main-devel'
 fake_ip = '192.168.42.1'
 fake_registry = 'quay.ceph.io'
 fake_registry_user = 'foo'

View File

@@ -39,10 +39,10 @@ commands=
 # configure lvm
 ansible-playbook -vv -i {changedir}/inventory/hosts {toxinidir}/tests/functional/lvm_setup.yml
-non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch=master ceph_dev_sha1=latest" --tags "vagrant_setup"
+non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch=main ceph_dev_sha1=latest" --tags "vagrant_setup"
 ansible-playbook -vv -i {changedir}/inventory/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --limit 'all:!clients' --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch=master \
+ceph_dev_branch=main \
 ceph_dev_sha1=latest \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -56,7 +56,7 @@ commands=
 fsid=40358a87-ab6e-4bdc-83db-1d909147861c \
 external_cluster_mon_ips=192.168.31.10,192.168.31.11,192.168.31.12 \
 generate_fsid=false \
-ceph_dev_branch=master \
+ceph_dev_branch=main \
 ceph_dev_sha1=latest \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -70,7 +70,7 @@ commands=
 fsid=40358a87-ab6e-4bdc-83db-1d909147861c \
 external_cluster_mon_ips=192.168.31.10,192.168.31.11,192.168.31.12 \
 generate_fsid=false \
-ceph_dev_branch=master \
+ceph_dev_branch=main \
 ceph_dev_sha1=latest \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \

View File

@@ -30,7 +30,7 @@ setenv=
 non_container: PLAYBOOK = site.yml.sample
 non_container: DEV_SETUP = True
-CEPH_DOCKER_IMAGE_TAG = latest-master
+CEPH_DOCKER_IMAGE_TAG = latest-main
 deps= -r{toxinidir}/tests/requirements.txt
 changedir={toxinidir}/tests/functional/filestore-to-bluestore{env:CONTAINER_DIR:}
@@ -40,7 +40,7 @@ commands=
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/setup.yml
-ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/lvm_setup.yml --limit 'osd0:osd1'
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/lvm_setup.yml --limit 'osd3:osd4' --tags partitions
@@ -48,7 +48,7 @@ commands=
 # deploy the cluster
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -56,7 +56,7 @@ commands=
 "
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/filestore-to-bluestore.yml --limit osds --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
 "

View File

@@ -69,9 +69,9 @@ setenv=
 container: PURGE_PLAYBOOK = purge-container-cluster.yml
 non_container: PLAYBOOK = site.yml.sample
-CEPH_DOCKER_IMAGE_TAG = latest-master
-CEPH_DOCKER_IMAGE_TAG_BIS = latest-bis-master
-UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-master
+CEPH_DOCKER_IMAGE_TAG = latest-main
+CEPH_DOCKER_IMAGE_TAG_BIS = latest-bis-main
+UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-main
 deps= -r{toxinidir}/tests/requirements.txt
 changedir=
@@ -80,7 +80,7 @@ changedir=
 commands=
-ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
 bash {toxinidir}/tests/scripts/vagrant_up.sh --no-provision {posargs:--provider=virtualbox}
 bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}
@@ -92,7 +92,7 @@ commands=
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -109,7 +109,7 @@ commands=
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/lvm_setup.yml
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --limit osds --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -118,4 +118,4 @@ commands=
 # retest to ensure OSDs are well redeployed
 py.test --reruns 5 --reruns-delay 1 -n 8 --durations=0 --sudo -v --connection=ansible --ansible-inventory={changedir}/{env:INVENTORY} --ssh-config={changedir}/vagrant_ssh_config {toxinidir}/tests/functional/tests
 vagrant destroy --force

View File

@@ -28,8 +28,8 @@ setenv=
 container: PLAYBOOK = site-container.yml.sample
 non_container: PLAYBOOK = site.yml.sample
-UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-master
-UPDATE_CEPH_DEV_BRANCH = master
+UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-main
+UPDATE_CEPH_DEV_BRANCH = main
 UPDATE_CEPH_DEV_SHA1 = latest
 ROLLING_UPDATE = True
 deps= -r{toxinidir}/tests/requirements.txt
@@ -41,10 +41,10 @@ commands=
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/setup.yml
-non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -55,7 +55,7 @@ commands=
 # mon1
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --limit mon1 --tags=mons --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -64,7 +64,7 @@ commands=
 # mon0 and mon2
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --limit 'mons:!mon1' --tags=mons --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -73,7 +73,7 @@ commands=
 # upgrade mgrs
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --tags=mgrs --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -82,7 +82,7 @@ commands=
 # upgrade osd1
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --limit=osd1 --tags=osds --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -91,7 +91,7 @@ commands=
 # upgrade remaining osds (serially)
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --limit='osds:!osd1' --tags=osds --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -100,7 +100,7 @@ commands=
 # upgrade rgws
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --tags=rgws --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -109,7 +109,7 @@ commands=
 # post upgrade actions
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --tags=post_upgrade --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \

View File

@@ -28,8 +28,8 @@ setenv=
 container: PLAYBOOK = site-container.yml.sample
 non_container: PLAYBOOK = site.yml.sample
-UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-master
-UPDATE_CEPH_DEV_BRANCH = master
+UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-main
+UPDATE_CEPH_DEV_BRANCH = main
 UPDATE_CEPH_DEV_SHA1 = latest
 ROLLING_UPDATE = True
 deps= -r{toxinidir}/tests/requirements.txt
@@ -43,10 +43,10 @@ commands=
 # configure lvm, we exclude osd2 given this node uses lvm batch scenario (see corresponding inventory host file)
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/lvm_setup.yml --limit 'osds:!osd2'
-non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup=True change_dir={changedir} ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
 delegate_facts_host={env:DELEGATE_FACTS_HOST:True} \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -55,7 +55,7 @@ commands=
 ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/rolling_update.yml --extra-vars "\
 ireallymeanit=yes \
-ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:UPDATE_CEPH_DEV_BRANCH:main} \
 ceph_dev_sha1={env:UPDATE_CEPH_DEV_SHA1:latest} \
 ceph_docker_registry_auth=True \
 ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \

tox.ini
View File

@ -45,7 +45,7 @@ commands=
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/rbd_map_devices.yml --extra-vars "\ ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/rbd_map_devices.yml --extra-vars "\
ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:quay.ceph.io} \ ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:quay.ceph.io} \
ceph_docker_image={env:CEPH_DOCKER_IMAGE:ceph-ci/daemon} \ ceph_docker_image={env:CEPH_DOCKER_IMAGE:ceph-ci/daemon} \
ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-master} \ ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-main} \
" "
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/{env:PURGE_PLAYBOOK:purge-cluster.yml} --extra-vars "\ ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/{env:PURGE_PLAYBOOK:purge-cluster.yml} --extra-vars "\
@ -53,7 +53,7 @@ commands=
remove_packages=yes \ remove_packages=yes \
ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:quay.ceph.io} \ ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:quay.ceph.io} \
ceph_docker_image={env:CEPH_DOCKER_IMAGE:ceph-ci/daemon} \ ceph_docker_image={env:CEPH_DOCKER_IMAGE:ceph-ci/daemon} \
ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-master} \ ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-main} \
" "
# re-setup lvm, we exclude osd2 given this node uses lvm batch scenario (see corresponding inventory host file) # re-setup lvm, we exclude osd2 given this node uses lvm batch scenario (see corresponding inventory host file)
@ -61,7 +61,7 @@ commands=
# set up the cluster again # set up the cluster again
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars @ceph-override.json --extra-vars "\ ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars @ceph-override.json --extra-vars "\
ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \ ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \ ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \ ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \ ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -76,12 +76,12 @@ commands=
ireallymeanit=yes \
ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:quay.ceph.io} \
ceph_docker_image={env:CEPH_DOCKER_IMAGE:ceph-ci/daemon} \
-ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-master} \
+ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-main} \
"
# set up the cluster again
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars @ceph-override.json --extra-vars "\
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -104,7 +104,7 @@ commands=
# set up the cluster again
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
"
# test that the cluster can be redeployed in a healthy state
@@ -155,7 +155,7 @@ commands=
commands=
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml --extra-vars "\
ireallymeanit=yes \
-ceph_docker_image_tag=latest-master-devel \
+ceph_docker_image_tag=latest-main-devel \
ceph_docker_registry=quay.ceph.io \
ceph_docker_image=ceph-ci/daemon \
ceph_docker_registry_auth=True \
@@ -174,7 +174,7 @@ commands=
ansible-playbook -vv -i {changedir}/hosts-2 --limit mon1 {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i {changedir}/hosts-2 {toxinidir}/infrastructure-playbooks/add-mon.yml --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
"
py.test --reruns 5 --reruns-delay 1 -n 8 --durations=0 --sudo -v --connection=ansible --ansible-inventory={changedir}/hosts-2 --ssh-config={changedir}/vagrant_ssh_config {toxinidir}/tests/functional/tests
@@ -184,7 +184,7 @@ commands=
ansible-playbook -vv -i {changedir}/hosts-2 --limit mgrs {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i {changedir}/hosts-2 --limit mgrs {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -197,7 +197,7 @@ commands=
ansible-playbook -vv -i {changedir}/hosts-2 --limit mdss {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i {changedir}/hosts-2 --limit mdss {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -210,7 +210,7 @@ commands=
ansible-playbook -vv -i {changedir}/hosts-2 --limit rbdmirrors {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i {changedir}/hosts-2 --limit rbdmirrors {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -223,7 +223,7 @@ commands=
ansible-playbook -vv -i {changedir}/hosts-2 --limit rgws {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i {changedir}/hosts-2 --limit rgws {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -236,21 +236,21 @@ commands=
bash -c "cd {changedir}/secondary && bash {toxinidir}/tests/scripts/vagrant_up.sh --no-provision {posargs:--provider=virtualbox}"
bash -c "cd {changedir}/secondary && bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}/secondary"
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/tests/functional/setup.yml
-ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir}/secondary ceph_dev_branch={env:CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir}/secondary ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/tests/functional/lvm_setup.yml
# ensure the rule isn't already present
ansible -i localhost, all -c local -b -m iptables -a 'chain=FORWARD protocol=tcp source=192.168.0.0/16 destination=192.168.0.0/16 jump=ACCEPT action=insert rule_num=1 state=absent'
ansible -i localhost, all -c local -b -m iptables -a 'chain=FORWARD protocol=tcp source=192.168.0.0/16 destination=192.168.0.0/16 jump=ACCEPT action=insert rule_num=1 state=present'
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ireallymeanit=yes \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
ceph_docker_registry_password={env:DOCKER_HUB_PASSWORD} \
"
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --limit rgws --extra-vars "\
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -267,7 +267,7 @@ commands=
[storage-inventory]
commands=
ansible-playbook -vv -i {changedir}/hosts {toxinidir}/infrastructure-playbooks/storage-inventory.yml --extra-vars "\
-ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-master} \
+ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG:latest-main} \
"
[cephadm-adopt]
@@ -317,11 +317,11 @@ setenv=
shrink_rbdmirror: RBDMIRROR_TO_KILL = rbd-mirror0
shrink_rgw: RGW_TO_KILL = rgw0.rgw0
-CEPH_DOCKER_IMAGE_TAG = latest-master
-CEPH_DOCKER_IMAGE_TAG_BIS = latest-bis-master
-UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-master
-switch_to_containers: CEPH_DOCKER_IMAGE_TAG = latest-master-devel
+CEPH_DOCKER_IMAGE_TAG = latest-main
+CEPH_DOCKER_IMAGE_TAG_BIS = latest-bis-main
+UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-main
+switch_to_containers: CEPH_DOCKER_IMAGE_TAG = latest-main-devel
deps= -r{toxinidir}/tests/requirements.txt
changedir=
@@ -355,7 +355,7 @@ changedir=
commands=
ansible-galaxy install -r {toxinidir}/requirements.yml -v
rhcs: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/rhcs_setup.yml --extra-vars "change_dir={changedir}" --tags "vagrant_setup"
-non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
+non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
bash {toxinidir}/tests/scripts/vagrant_up.sh --no-provision {posargs:--provider=virtualbox}
bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}
@@ -370,7 +370,7 @@ commands=
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
deploy_secondary_zones=False \
-ceph_dev_branch={env:CEPH_DEV_BRANCH:master} \
+ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
@@ -387,7 +387,7 @@ commands=
all_daemons,collocation: py.test --reruns 20 --reruns-delay 3 -n 8 --durations=0 --sudo -v --connection=ansible --ansible-inventory={changedir}/{env:INVENTORY} --ssh-config={changedir}/vagrant_ssh_config {toxinidir}/tests/functional/tests
# handlers/idempotency test
-all_daemons,all_in_one,collocation: ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "delegate_facts_host={env:DELEGATE_FACTS_HOST:True} ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG_BIS:latest-bis-master} ceph_dev_branch={env:CEPH_DEV_BRANCH:master} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --extra-vars @ceph-override.json
+all_daemons,all_in_one,collocation: ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "delegate_facts_host={env:DELEGATE_FACTS_HOST:True} ceph_docker_image_tag={env:CEPH_DOCKER_IMAGE_TAG_BIS:latest-bis-main} ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --extra-vars @ceph-override.json
purge: {[purge]commands}
purge_dashboard: {[purge-dashboard]commands}