rbd-mirror: major refactor

- Use the config-key store to add the cluster peer.
- Support mirroring multiple pools.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Guillaume Abrioux 2022-05-12 17:22:54 +02:00
parent 3a8daafbe8
commit b74ff6e22c
28 changed files with 859 additions and 170 deletions


@@ -300,6 +300,16 @@ ceph-ansible provides a set of playbook in ``infrastructure-playbooks`` director
day-2/purge
day-2/upgrade
RBD Mirroring
-------------
Ceph-ansible provides the role ``ceph-rbd-mirror`` that can set up RBD mirror replication.
.. toctree::
:maxdepth: 1
rbdmirror/index
Contribution
============


@@ -0,0 +1,60 @@
RBD Mirroring
=============
There is not much to do on the primary cluster side to set up RBD mirror replication.
``ceph_rbd_mirror_configure`` has to be set to ``true`` to make ceph-ansible create the mirrored pool
defined in ``ceph_rbd_mirror_pool`` and the keyring that will be used to add the rbd mirror peer.
group_vars from the primary cluster:

.. code-block:: yaml

   ceph_rbd_mirror_configure: true
   ceph_rbd_mirror_pool: rbd
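
This refactor also adds support for mirroring several pools at once. Instead of
``ceph_rbd_mirror_pool`` you can define a list in ``ceph_rbd_mirror_pools``; when it is
undefined, the role builds it from ``ceph_rbd_mirror_pool``. A minimal sketch (the
``images`` pool name and the extra attributes such as ``pg_num`` are illustrative,
taken from what the role's pool creation task accepts):

.. code-block:: yaml

   ceph_rbd_mirror_pools:
     - name: rbd
     - name: images
       pg_num: 32
       application: rbd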
Optionally, you can tell ceph-ansible to set the name and the secret of the keyring you want to create:

.. code-block:: yaml

   ceph_rbd_mirror_local_user: client.rbd-mirror-peer # 'client.rbd-mirror-peer' is the default value.
   ceph_rbd_mirror_local_user_secret: AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw==
This secret will be needed to add the rbd mirror peer from the secondary cluster.
If you do not enforce it as shown above, you can retrieve it from a monitor by running
``ceph auth get {{ ceph_rbd_mirror_local_user }}``:

.. code-block:: shell

   $ sudo ceph auth get client.rbd-mirror-peer
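
This prints the full keyring, including the secret to reuse on the secondary cluster
(illustrative output, assuming the secret enforced above and the rbd-mirror caps the
role applies to this user):

.. code-block:: shell

   [client.rbd-mirror-peer]
           key = AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw==
           caps mon = "profile rbd-mirror"
           caps osd = "profile rbd"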
Once your variables are defined, you can run the playbook (you might want to use the ``--limit`` option):

.. code-block:: shell

   $ ansible-playbook -vv -i hosts site-container.yml --limit rbdmirror0
Strictly speaking, the configuration of the RBD mirror replication is done on the secondary cluster:
the rbd-mirror daemon pulls the data from the primary cluster, so this is where the rbd mirror peer
addition has to be done. The configuration is similar to what was done on the primary cluster; it just
needs a few additional variables:

``ceph_rbd_mirror_remote_user``: This user must match the name defined in the variable ``ceph_rbd_mirror_local_user`` from the primary cluster.

``ceph_rbd_mirror_remote_mon_hosts``: This must be a comma-separated list of the monitor addresses from the primary cluster.

``ceph_rbd_mirror_remote_key``: This must be the same value as the keyring secret of the user (``{{ ceph_rbd_mirror_local_user }}``) from the primary cluster.
group_vars from the secondary cluster:

.. code-block:: yaml

   ceph_rbd_mirror_configure: true
   ceph_rbd_mirror_pool: rbd
   ceph_rbd_mirror_remote_user: client.rbd-mirror-peer # This must match the value defined in {{ ceph_rbd_mirror_local_user }} on the primary cluster.
   ceph_rbd_mirror_remote_mon_hosts: 1.2.3.4
   ceph_rbd_mirror_remote_key: AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw== # This must match the secret of the registered keyring of the user defined in {{ ceph_rbd_mirror_local_user }} on the primary cluster.
Once your variables are defined, you can run the playbook (you might want to use the ``--limit`` option):

.. code-block:: shell

   $ ansible-playbook -vv -i hosts site-container.yml --limit rbdmirror0
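
Once both playbooks have run, you can verify the replication. A quick sanity check from
the secondary cluster, assuming the mirrored pool is named ``rbd`` as in the examples
above; the daemon health and the image states should eventually report ``OK`` and
``up+replaying``:

.. code-block:: shell

   $ sudo rbd --cluster ceph mirror pool status rbd --verbose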


@@ -18,29 +18,15 @@ dummy:
# valid for Luminous and later releases.
#copy_admin_key: false
# NOTE: deprecated generic local user id for pre-Luminous releases
#ceph_rbd_mirror_local_user: "admin"
#################
# CONFIGURATION #
#################
#ceph_rbd_mirror_local_user: client.rbd-mirror-peer
#ceph_rbd_mirror_configure: false
#ceph_rbd_mirror_pool: ""
#ceph_rbd_mirror_mode: pool
# NOTE (leseb): the following variable needs the name of the remote cluster.
# The name of this cluster must be different than your local cluster simply
# because we need to have both keys and ceph.conf inside /etc/ceph.
# Thus if cluster names are identical we can not have them under /etc/ceph
#ceph_rbd_mirror_remote_cluster: ""
# NOTE: the rbd-mirror daemon needs a user to authenticate with the
# remote cluster. By default, this key should be available under
# /etc/ceph/<remote_cluster>.client.<remote_user>.keyring
#ceph_rbd_mirror_remote_user: ""
#ceph_rbd_mirror_remote_cluster: remote
##########
# DOCKER #


@@ -10,29 +10,15 @@
# valid for Luminous and later releases.
copy_admin_key: false
# NOTE: deprecated generic local user id for pre-Luminous releases
ceph_rbd_mirror_local_user: "admin"
#################
# CONFIGURATION #
#################
ceph_rbd_mirror_local_user: client.rbd-mirror-peer
ceph_rbd_mirror_configure: false
ceph_rbd_mirror_pool: ""
ceph_rbd_mirror_mode: pool
# NOTE (leseb): the following variable needs the name of the remote cluster.
# The name of this cluster must be different than your local cluster simply
# because we need to have both keys and ceph.conf inside /etc/ceph.
# Thus if cluster names are identical we can not have them under /etc/ceph
ceph_rbd_mirror_remote_cluster: ""
# NOTE: the rbd-mirror daemon needs a user to authenticate with the
# remote cluster. By default, this key should be available under
# /etc/ceph/<remote_cluster>.client.<remote_user>.keyring
ceph_rbd_mirror_remote_user: ""
ceph_rbd_mirror_remote_cluster: remote
##########
# DOCKER #


@@ -1,50 +0,0 @@
---
- name: get keys from monitors
ceph_key:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _rbd_mirror_keys
with_items:
- { name: "client.bootstrap-rbd-mirror", path: "/var/lib/ceph/bootstrap-rbd-mirror/{{ cluster }}.keyring", copy_key: true }
- { name: "client.admin", path: "/etc/ceph/{{ cluster }}.client.admin.keyring", copy_key: "{{ copy_admin_key }}" }
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
when:
- cephx | bool
- item.copy_key | bool
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key(s) if needed
copy:
dest: "{{ item.item.path }}"
content: "{{ item.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
with_items: "{{ _rbd_mirror_keys.results }}"
when:
- cephx | bool
- item.item.copy_key | bool
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: create rbd-mirror keyring
ceph_key:
name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
cluster: "{{ cluster }}"
user: client.bootstrap-rbd-mirror
user_key: "/var/lib/ceph/bootstrap-rbd-mirror/{{ cluster }}.keyring"
caps:
mon: "profile rbd-mirror"
osd: "profile rbd"
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring"
import_key: false
owner: ceph
group: ceph
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
when: not containerized_deployment | bool


@@ -1,18 +1,161 @@
---
- name: cephx tasks
when:
- cephx | bool
block:
- name: get client.bootstrap-rbd-mirror from ceph monitor
ceph_key:
name: client.bootstrap-rbd-mirror
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _bootstrap_rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key(s)
copy:
dest: "/var/lib/ceph/bootstrap-rbd-mirror/{{ cluster }}.keyring"
content: "{{ _bootstrap_rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: create rbd-mirror keyrings
ceph_key:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
user: client.admin
user_key: "/etc/ceph/{{ cluster }}.client.admin.keyring"
caps:
mon: "profile rbd-mirror"
osd: "profile rbd"
dest: "{{ item.dest }}"
secret: "{{ item.secret | default(omit) }}"
import_key: true
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
loop:
- { name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}",
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring" }
- { name: "{{ ceph_rbd_mirror_local_user }}",
dest: "/etc/ceph/{{ cluster }}.{{ ceph_rbd_mirror_local_user }}.keyring",
secret: "{{ ceph_rbd_mirror_local_user_secret | default('') }}" }
- name: get "client.rbd-mirror.{{ ansible_facts['hostname'] }}" from ceph monitor
ceph_key:
name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key
copy:
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring"
content: "{{ _rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: false
- name: start and add the rbd-mirror service instance
service:
name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_facts['hostname'] }}"
state: started
enabled: yes
masked: no
changed_when: false
when:
- not containerized_deployment | bool
- ceph_rbd_mirror_remote_user is defined
- name: set_fact ceph_rbd_mirror_pools
set_fact:
ceph_rbd_mirror_pools:
- name: "{{ ceph_rbd_mirror_pool }}"
when: ceph_rbd_mirror_pools is undefined
- name: create pool if it doesn't exist
ceph_pool:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
pg_num: "{{ item.pg_num | default(omit) }}"
pgp_num: "{{ item.pgp_num | default(omit) }}"
size: "{{ item.size | default(omit) }}"
min_size: "{{ item.min_size | default(omit) }}"
pool_type: "{{ item.type | default('replicated') }}"
rule_name: "{{ item.rule_name | default(omit) }}"
erasure_profile: "{{ item.erasure_profile | default(omit) }}"
pg_autoscale_mode: "{{ item.pg_autoscale_mode | default(omit) }}"
target_size_ratio: "{{ item.target_size_ratio | default(omit) }}"
application: "{{ item.application | default('rbd') }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ ceph_rbd_mirror_pools }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
- name: enable mirroring on the pool
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool enable {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode }}"
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool enable {{ item.name }} {{ ceph_rbd_mirror_mode }}"
register: result
changed_when: false
retries: 90
retries: 60
delay: 1
until: result is succeeded
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: list mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool info {{ ceph_rbd_mirror_pool }}"
changed_when: false
register: mirror_peer
- name: add mirroring peer
when: ceph_rbd_mirror_remote_user is defined
block:
- name: list mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool info {{ item.name }}"
changed_when: false
register: mirror_peer
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add a mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool peer add {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }}"
changed_when: false
when: ceph_rbd_mirror_remote_user not in mirror_peer.stdout
- name: create a temporary file
tempfile:
path: /etc/ceph
state: file
suffix: _ceph-ansible
register: tmp_file
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: write secret to temporary file
copy:
dest: "{{ tmp_file.path }}"
content: "{{ ceph_rbd_mirror_remote_key }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add a mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool peer add {{ item.item.name }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }} --remote-mon-host {{ ceph_rbd_mirror_remote_mon_hosts }} --remote-key-file {{ tmp_file.path }}"
changed_when: false
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ mirror_peer.results }}"
when: ceph_rbd_mirror_remote_user not in item.stdout
- name: rm temporary file
file:
path: "{{ tmp_file.path }}"
state: absent
delegate_to: "{{ groups[mon_group_name][0] }}"


@@ -1,26 +1,52 @@
---
- name: include pre_requisite.yml
include_tasks: pre_requisite.yml
when: not containerized_deployment | bool
- name: include common.yml
include_tasks: common.yml
when: cephx | bool
- name: tasks for non-containerized deployment
include_tasks: start_rbd_mirror.yml
when: not containerized_deployment | bool
- name: tasks for containerized deployment
when: containerized_deployment | bool
- name: non-containerized related tasks
when:
- not containerized_deployment | bool
- ceph_rbd_mirror_remote_user is defined
block:
- name: set_fact container_exec_cmd
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-rbd-mirror-{{ ansible_facts['hostname'] }}"
- name: install dependencies
package:
name: rbd-mirror
state: present
register: result
until: result is succeeded
tags: package-install
- name: include start_container_rbd_mirror.yml
include_tasks: start_container_rbd_mirror.yml
- name: ensure systemd service override directory exists
file:
state: directory
path: "/etc/systemd/system/ceph-rbd-mirror@.service.d/"
when:
- ceph_rbd_mirror_systemd_overrides is defined
- ansible_facts['service_mgr'] == 'systemd'
- name: add ceph-rbd-mirror systemd service overrides
openstack.config_template.config_template:
src: "ceph-rbd-mirror.service.d-overrides.j2"
dest: "/etc/systemd/system/ceph-rbd-mirror@.service.d/ceph-rbd-mirror-systemd-overrides.conf"
config_overrides: "{{ ceph_rbd_mirror_systemd_overrides | default({}) }}"
config_type: "ini"
when:
- ceph_rbd_mirror_systemd_overrides is defined
- ansible_facts['service_mgr'] == 'systemd'
- name: enable ceph-rbd-mirror.target
systemd:
name: "ceph-rbd-mirror.target"
state: started
enabled: yes
masked: no
changed_when: false
- name: set_fact rbd_cmd
set_fact:
rbd_cmd: "{{ container_binary + ' run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z -v /var/run/ceph:/var/run/ceph:z --entrypoint=rbd ' + ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else 'rbd' }}"
- name: include start_container_rbd_mirror.yml
include_tasks: start_container_rbd_mirror.yml
when:
- containerized_deployment | bool
- ceph_rbd_mirror_remote_user is defined
- name: include configure_mirroring.yml
include_tasks: configure_mirroring.yml
when: ceph_rbd_mirror_configure | bool
include_tasks: configure_mirroring.yml


@@ -1,10 +0,0 @@
---
- name: install dependencies
# XXX Determine what RH repository this will belong to so that it can be
# properly checked and errored if the repository is not enabled.
package:
name: rbd-mirror
state: present
register: result
until: result is succeeded
tags: package-install


@@ -1,41 +0,0 @@
---
- name: ensure systemd service override directory exists
file:
state: directory
path: "/etc/systemd/system/ceph-rbd-mirror@.service.d/"
when:
- ceph_rbd_mirror_systemd_overrides is defined
- ansible_facts['service_mgr'] == 'systemd'
- name: add ceph-rbd-mirror systemd service overrides
openstack.config_template.config_template:
src: "ceph-rbd-mirror.service.d-overrides.j2"
dest: "/etc/systemd/system/ceph-rbd-mirror@.service.d/ceph-rbd-mirror-systemd-overrides.conf"
config_overrides: "{{ ceph_rbd_mirror_systemd_overrides | default({}) }}"
config_type: "ini"
when:
- ceph_rbd_mirror_systemd_overrides is defined
- ansible_facts['service_mgr'] == 'systemd'
- name: stop and remove the generic rbd-mirror service instance
service:
name: "ceph-rbd-mirror@{{ ceph_rbd_mirror_local_user }}"
state: stopped
enabled: no
changed_when: false
- name: enable ceph-rbd-mirror.target
systemd:
name: "ceph-rbd-mirror.target"
state: started
enabled: yes
masked: no
changed_when: false
- name: start and add the rbd-mirror service instance
service:
name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_facts['hostname'] }}"
state: started
enabled: yes
masked: no
changed_when: false


@@ -7,9 +7,6 @@
- name: ensure ceph_rbd_mirror_remote_cluster is set
fail:
msg: "ceph_rbd_mirror_remote_cluster needs to be provided"
when: ceph_rbd_mirror_remote_cluster | default("") | length == 0
- name: ensure ceph_rbd_mirror_remote_user is set
fail:
msg: "ceph_rbd_mirror_remote_user needs to be provided"
when: ceph_rbd_mirror_remote_user | default("") | length == 0
when:
- ceph_rbd_mirror_remote_cluster | default("") | length == 0
- ceph_rbd_mirror_remote_user | default("") | length > 0


@@ -0,0 +1,32 @@
---
- hosts: mon0
gather_facts: True
become: True
tasks:
- name: import_role ceph-defaults
import_role:
name: ceph-defaults
- name: import_role ceph-facts
include_role:
name: ceph-facts
tasks_from: "container_binary.yml"
- name: set_fact rbd_cmd
set_fact:
rbd_cmd: "{{ container_binary + ' run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z -v /var/run/ceph:/var/run/ceph:z --entrypoint=rbd ' + ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else 'rbd' }}"
- name: create an image in rbd mirrored pool
command: "{{ rbd_cmd }} create foo --size 1024 --pool {{ ceph_rbd_mirror_pool }} --image-feature exclusive-lock,journaling"
changed_when: false
tags: primary
- name: check the image is replicated
command: "{{ rbd_cmd }} --pool {{ ceph_rbd_mirror_pool }} ls --format json"
register: rbd_ls
changed_when: false
tags: secondary
retries: 30
delay: 1
until: "'foo' in (rbd_ls.stdout | default('{}') | from_json)"


@@ -0,0 +1 @@
../../../Vagrantfile


@@ -0,0 +1 @@
../../../../Vagrantfile


@@ -0,0 +1,32 @@
---
docker: True
containerized_deployment: true
ceph_origin: repository
ceph_repository: community
cluster: ceph
public_network: "192.168.144.0/24"
cluster_network: "192.168.145.0/24"
monitor_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
radosgw_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
journal_size: 100
osd_objectstore: "bluestore"
# test-volume is created by tests/functional/lvm_setup.yml from /dev/sdb
lvm_volumes:
- data: data-lv1
data_vg: test_group
- data: data-lv2
data_vg: test_group
db: journal1
db_vg: journals
os_tuning_params:
- { name: fs.file-max, value: 26234859 }
ceph_conf_overrides:
global:
mon_allow_pool_size_one: true
mon_warn_on_pool_no_redundancy: false
osd_pool_default_size: 1
mon_max_pg_per_osd: 512
dashboard_enabled: False
ceph_docker_registry: quay.ceph.io
ceph_docker_image: ceph-ci/daemon
ceph_docker_image_tag: latest-main


@@ -0,0 +1,11 @@
[mons]
mon0
[mgrs]
mon0
[osds]
osd0
[rbdmirrors]
osd0


@@ -0,0 +1 @@
../../../../../Vagrantfile


@@ -0,0 +1,32 @@
---
docker: True
containerized_deployment: true
ceph_origin: repository
ceph_repository: community
cluster: ceph
public_network: "192.168.146.0/24"
cluster_network: "192.168.147.0/24"
monitor_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
radosgw_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
journal_size: 100
osd_objectstore: "bluestore"
# test-volume is created by tests/functional/lvm_setup.yml from /dev/sdb
lvm_volumes:
- data: data-lv1
data_vg: test_group
- data: data-lv2
data_vg: test_group
db: journal1
db_vg: journals
os_tuning_params:
- { name: fs.file-max, value: 26234859 }
ceph_conf_overrides:
global:
mon_allow_pool_size_one: true
mon_warn_on_pool_no_redundancy: false
osd_pool_default_size: 1
mon_max_pg_per_osd: 512
dashboard_enabled: False
ceph_docker_registry: quay.ceph.io
ceph_docker_image: ceph-ci/daemon
ceph_docker_image_tag: latest-main


@@ -0,0 +1,12 @@
[mons]
mon0
[mgrs]
mon0
[osds]
osd0
[rbdmirrors]
osd0


@@ -0,0 +1,71 @@
---
# DEPLOY CONTAINERIZED DAEMONS
docker: true
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
grafana_server_vms: 0
rbd_mirror_vms: 0
client_vms: 0
iscsi_gw_vms: 0
mgr_vms: 0
# INSTALL SOURCE OF CEPH
# valid values are 'stable' and 'dev'
ceph_install_source: stable
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.146
cluster_subnet: 192.168.147
# MEMORY
# set 1024 for CentOS
memory: 1024
# Ethernet interface name
# use eth1 for libvirt and ubuntu precise, enp0s8 for CentOS and ubuntu xenial
eth: 'eth1'
# Disks
# For libvirt use disks: "[ '/dev/vdb', '/dev/vdc' ]"
# For CentOS7 use disks: "[ '/dev/sda', '/dev/sdb' ]"
disks: "[ '/dev/sdb', '/dev/sdc' ]"
# VAGRANT BOX
# Ceph boxes are *strongly* suggested. They are under better control and will
# not get updated frequently unless required for build systems. These are (for
# now):
#
# * ceph/ubuntu-xenial
#
# Ubuntu: ceph/ubuntu-xenial bento/ubuntu-16.04 or ubuntu/trusty64 or ubuntu/wily64
# CentOS: bento/centos-7.1 or puppetlabs/centos-7.0-64-puppet
# libvirt CentOS: centos/7
# parallels Ubuntu: parallels/ubuntu-14.04
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
vagrant_box: centos/atomic-host
#ssh_private_key_path: "~/.ssh/id_rsa"
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
#vagrant_sync_dir: /home/vagrant/sync
vagrant_sync_dir: /vagrant
# Disables synced folder creation. Not needed for testing, will skip mounting
# the vagrant directory on the remote box regardless of the provider.
vagrant_disable_synced_folder: true
# VAGRANT URL
# This is a URL to download an image from an alternate location. vagrant_box
# above should be set to the filename of the image.
# Fedora virtualbox: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Fedora libvirt: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
# vagrant_box_url: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
os_tuning_params:
- { name: fs.file-max, value: 26234859 }


@@ -0,0 +1,71 @@
---
# DEPLOY CONTAINERIZED DAEMONS
docker: true
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
grafana_server_vms: 0
rbd_mirror_vms: 0
client_vms: 0
iscsi_gw_vms: 0
mgr_vms: 0
# INSTALL SOURCE OF CEPH
# valid values are 'stable' and 'dev'
ceph_install_source: stable
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.144
cluster_subnet: 192.168.145
# MEMORY
# set 1024 for CentOS
memory: 1024
# Ethernet interface name
# use eth1 for libvirt and ubuntu precise, enp0s8 for CentOS and ubuntu xenial
eth: 'eth1'
# Disks
# For libvirt use disks: "[ '/dev/vdb', '/dev/vdc' ]"
# For CentOS7 use disks: "[ '/dev/sda', '/dev/sdb' ]"
disks: "[ '/dev/sdb', '/dev/sdc' ]"
# VAGRANT BOX
# Ceph boxes are *strongly* suggested. They are under better control and will
# not get updated frequently unless required for build systems. These are (for
# now):
#
# * ceph/ubuntu-xenial
#
# Ubuntu: ceph/ubuntu-xenial bento/ubuntu-16.04 or ubuntu/trusty64 or ubuntu/wily64
# CentOS: bento/centos-7.1 or puppetlabs/centos-7.0-64-puppet
# libvirt CentOS: centos/7
# parallels Ubuntu: parallels/ubuntu-14.04
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
vagrant_box: centos/atomic-host
#ssh_private_key_path: "~/.ssh/id_rsa"
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
#vagrant_sync_dir: /home/vagrant/sync
vagrant_sync_dir: /vagrant
# Disables synced folder creation. Not needed for testing, will skip mounting
# the vagrant directory on the remote box regardless of the provider.
vagrant_disable_synced_folder: true
# VAGRANT URL
# This is a URL to download an image from an alternate location. vagrant_box
# above should be set to the filename of the image.
# Fedora virtualbox: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Fedora libvirt: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
# vagrant_box_url: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
os_tuning_params:
- { name: fs.file-max, value: 26234859 }


@@ -0,0 +1,27 @@
---
ceph_origin: repository
ceph_repository: community
cluster: ceph
public_network: "192.168.140.0/24"
cluster_network: "192.168.141.0/24"
monitor_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
radosgw_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
journal_size: 100
osd_objectstore: "bluestore"
# test-volume is created by tests/functional/lvm_setup.yml from /dev/sdb
lvm_volumes:
- data: data-lv1
data_vg: test_group
- data: data-lv2
data_vg: test_group
db: journal1
db_vg: journals
os_tuning_params:
- { name: fs.file-max, value: 26234859 }
ceph_conf_overrides:
global:
mon_allow_pool_size_one: true
mon_warn_on_pool_no_redundancy: false
osd_pool_default_size: 1
mon_max_pg_per_osd: 512
dashboard_enabled: False


@@ -0,0 +1,12 @@
[mons]
mon0
[mgrs]
mon0
[osds]
osd0
[rbdmirrors]
osd0


@@ -0,0 +1 @@
../../../../Vagrantfile


@@ -0,0 +1,27 @@
---
ceph_origin: repository
ceph_repository: community
cluster: ceph
public_network: "192.168.142.0/24"
cluster_network: "192.168.143.0/24"
monitor_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
radosgw_interface: "{{ 'eth1' if ansible_facts['distribution'] == 'CentOS' else 'ens6' }}"
journal_size: 100
osd_objectstore: "bluestore"
# test-volume is created by tests/functional/lvm_setup.yml from /dev/sdb
lvm_volumes:
- data: data-lv1
data_vg: test_group
- data: data-lv2
data_vg: test_group
db: journal1
db_vg: journals
os_tuning_params:
- { name: fs.file-max, value: 26234859 }
ceph_conf_overrides:
global:
mon_allow_pool_size_one: true
mon_warn_on_pool_no_redundancy: false
osd_pool_default_size: 1
mon_max_pg_per_osd: 512
dashboard_enabled: False


@@ -0,0 +1,12 @@
[mons]
mon0
[mgrs]
mon0
[osds]
osd0
[rbdmirrors]
osd0


@@ -0,0 +1,71 @@
---
# DEPLOY CONTAINERIZED DAEMONS
docker: false
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
grafana_server_vms: 0
rbd_mirror_vms: 0
client_vms: 0
iscsi_gw_vms: 0
mgr_vms: 0
# INSTALL SOURCE OF CEPH
# valid values are 'stable' and 'dev'
ceph_install_source: stable
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.142
cluster_subnet: 192.168.143
# MEMORY
# set 1024 for CentOS
memory: 1024
# Ethernet interface name
# use eth1 for libvirt and ubuntu precise, enp0s8 for CentOS and ubuntu xenial
eth: 'eth1'
# Disks
# For libvirt use disks: "[ '/dev/vdb', '/dev/vdc' ]"
# For CentOS7 use disks: "[ '/dev/sda', '/dev/sdb' ]"
disks: "[ '/dev/sdb', '/dev/sdc' ]"
# VAGRANT BOX
# Ceph boxes are *strongly* suggested. They are under better control and will
# not get updated frequently unless required for build systems. These are (for
# now):
#
# * ceph/ubuntu-xenial
#
# Ubuntu: ceph/ubuntu-xenial bento/ubuntu-16.04 or ubuntu/trusty64 or ubuntu/wily64
# CentOS: bento/centos-7.1 or puppetlabs/centos-7.0-64-puppet
# libvirt CentOS: centos/7
# parallels Ubuntu: parallels/ubuntu-14.04
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
vagrant_box: centos/stream8
#ssh_private_key_path: "~/.ssh/id_rsa"
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
#vagrant_sync_dir: /home/vagrant/sync
vagrant_sync_dir: /vagrant
# Disables synced folder creation. Not needed for testing, will skip mounting
# the vagrant directory on the remote box regardless of the provider.
vagrant_disable_synced_folder: true
# VAGRANT URL
# This is a URL to download an image from an alternate location. vagrant_box
# above should be set to the filename of the image.
# Fedora virtualbox: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Fedora libvirt: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
# vagrant_box_url: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
os_tuning_params:
- { name: fs.file-max, value: 26234859 }


@@ -0,0 +1,71 @@
---
# DEPLOY CONTAINERIZED DAEMONS
docker: false
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
grafana_server_vms: 0
rbd_mirror_vms: 0
client_vms: 0
iscsi_gw_vms: 0
mgr_vms: 0
# INSTALL SOURCE OF CEPH
# valid values are 'stable' and 'dev'
ceph_install_source: stable
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.140
cluster_subnet: 192.168.141
# MEMORY
# set 1024 for CentOS
memory: 1024
# Ethernet interface name
# use eth1 for libvirt and ubuntu precise, enp0s8 for CentOS and ubuntu xenial
eth: 'eth1'
# Disks
# For libvirt use disks: "[ '/dev/vdb', '/dev/vdc' ]"
# For CentOS7 use disks: "[ '/dev/sda', '/dev/sdb' ]"
disks: "[ '/dev/sdb', '/dev/sdc' ]"
# VAGRANT BOX
# Ceph boxes are *strongly* suggested. They are under better control and will
# not get updated frequently unless required for build systems. These are (for
# now):
#
# * ceph/ubuntu-xenial
#
# Ubuntu: ceph/ubuntu-xenial bento/ubuntu-16.04 or ubuntu/trusty64 or ubuntu/wily64
# CentOS: bento/centos-7.1 or puppetlabs/centos-7.0-64-puppet
# libvirt CentOS: centos/7
# parallels Ubuntu: parallels/ubuntu-14.04
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
vagrant_box: centos/stream8
#ssh_private_key_path: "~/.ssh/id_rsa"
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
#vagrant_sync_dir: /home/vagrant/sync
vagrant_sync_dir: /vagrant
# Disables synced folder creation. Not needed for testing, will skip mounting
# the vagrant directory on the remote box regardless of the provider.
vagrant_disable_synced_folder: true
# VAGRANT URL
# This is a URL to download an image from an alternate location. vagrant_box
# above should be set to the filename of the image.
# Fedora virtualbox: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Fedora libvirt: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
# vagrant_box_url: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
os_tuning_params:
- { name: fs.file-max, value: 26234859 }

tox-rbdmirror.ini 100644

@@ -0,0 +1,97 @@
[tox]
envlist = centos-{container,non_container}-rbdmirror
skipsdist = True
[testenv]
allowlist_externals =
vagrant
bash
git
pip
passenv=*
setenv=
ANSIBLE_SSH_ARGS = -F {changedir}/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey
ANSIBLE_CONFIG = {toxinidir}/ansible.cfg
ANSIBLE_CALLBACK_WHITELIST = profile_tasks
ANSIBLE_CACHE_PLUGIN = memory
ANSIBLE_GATHERING = implicit
# only available for ansible >= 2.5
ANSIBLE_KEEP_REMOTE_FILES = 1
ANSIBLE_STDOUT_CALLBACK = yaml
# non_container: DEV_SETUP = True
# Set the vagrant box image to use
centos-non_container: CEPH_ANSIBLE_VAGRANT_BOX = centos/stream8
centos-container: CEPH_ANSIBLE_VAGRANT_BOX = centos/stream8
INVENTORY = {env:_INVENTORY:hosts}
container: CONTAINER_DIR = /container
container: PLAYBOOK = site-container.yml.sample
non_container: PLAYBOOK = site.yml.sample
container: CEPH_RBD_MIRROR_REMOTE_MON_HOSTS = 192.168.144.10
non_container: CEPH_RBD_MIRROR_REMOTE_MON_HOSTS = 192.168.140.10
UPDATE_CEPH_DOCKER_IMAGE_TAG = latest-main
UPDATE_CEPH_DEV_BRANCH = main
UPDATE_CEPH_DEV_SHA1 = latest
ROLLING_UPDATE = True
deps= -r{toxinidir}/tests/requirements.txt
changedir={toxinidir}/tests/functional/rbdmirror{env:CONTAINER_DIR:}
commands=
ansible-galaxy install -r {toxinidir}/requirements.yml -v
bash {toxinidir}/tests/scripts/vagrant_up.sh --no-provision {posargs:--provider=virtualbox}
bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}
non_container: ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir} ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/setup.yml
# configure lvm
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/tests/functional/lvm_setup.yml
ansible-playbook -vv -i {changedir}/{env:INVENTORY} {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
ceph_rbd_mirror_configure=true \
ceph_rbd_mirror_pool=rbd \
ceph_rbd_mirror_local_user_secret=AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw== \
yes_i_know=true \
ireallymeanit=yes \
ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
ceph_docker_registry_password={env:DOCKER_HUB_PASSWORD} \
"
bash -c "cd {changedir}/secondary && bash {toxinidir}/tests/scripts/vagrant_up.sh --no-provision {posargs:--provider=virtualbox}"
bash -c "cd {changedir}/secondary && bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}/secondary"
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/tests/functional/setup.yml
ansible-playbook -vv -i "localhost," -c local {toxinidir}/tests/functional/dev_setup.yml --extra-vars "dev_setup={env:DEV_SETUP:False} change_dir={changedir}/secondary ceph_dev_branch={env:CEPH_DEV_BRANCH:main} ceph_dev_sha1={env:CEPH_DEV_SHA1:latest}" --tags "vagrant_setup"
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/tests/functional/lvm_setup.yml
# ensure the rule isn't already present
ansible -i localhost, all -c local -b -m iptables -a 'chain=FORWARD protocol=tcp source=192.168.0.0/16 destination=192.168.0.0/16 jump=ACCEPT action=insert rule_num=1 state=absent'
ansible -i localhost, all -c local -b -m iptables -a 'chain=FORWARD protocol=tcp source=192.168.0.0/16 destination=192.168.0.0/16 jump=ACCEPT action=insert rule_num=1 state=present'
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/{env:PLAYBOOK:site.yml.sample} --extra-vars "\
yes_i_know=true \
ceph_rbd_mirror_configure=true \
ceph_rbd_mirror_pool=rbd \
ceph_rbd_mirror_remote_user=client.rbd-mirror-peer \
ceph_rbd_mirror_remote_mon_hosts={env:CEPH_RBD_MIRROR_REMOTE_MON_HOSTS} \
ceph_rbd_mirror_remote_key=AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw== \
ceph_rbd_mirror_remote_cluster=remote \
ireallymeanit=yes \
ceph_dev_branch={env:CEPH_DEV_BRANCH:main} \
ceph_dev_sha1={env:CEPH_DEV_SHA1:latest} \
ceph_docker_registry_auth=True \
ceph_docker_registry_username={env:DOCKER_HUB_USERNAME} \
ceph_docker_registry_password={env:DOCKER_HUB_PASSWORD} \
"
ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/rbdmirror.yml --skip-tags=secondary --extra-vars "\
ceph_rbd_mirror_pool=rbd \
"
ansible-playbook --ssh-common-args='-F {changedir}/secondary/vagrant_ssh_config -o ControlMaster=auto -o ControlPersist=600s -o PreferredAuthentications=publickey' -vv -i {changedir}/secondary/hosts {toxinidir}/tests/functional/rbdmirror.yml --skip-tags=primary -e 'ceph_rbd_mirror_pool=rbd'
vagrant destroy --force
bash -c "cd {changedir}/secondary && vagrant destroy --force"
# clean rule after the scenario is complete
ansible -i localhost, all -c local -b -m iptables -a 'chain=FORWARD protocol=tcp source=192.168.0.0/16 destination=192.168.0.0/16 jump=ACCEPT action=insert rule_num=1 state=absent'