ceph-ansible/infrastructure-playbooks
Dimitri Savineau ee43555148 switch2container: disable ceph-osd enabled-runtime
When deploying the Ceph OSDs via packages, the ceph-osd@.service
unit is configured as enabled-runtime.
This means that each ceph-osd service inherits that state.
The enabled-runtime systemd state doesn't survive a reboot.
For non-containerized deployments the OSDs still start after a
reboot because the ceph-volume@.service and/or ceph-osd.target
units are doing the job.

$ systemctl list-unit-files|egrep '^ceph-(volume|osd)'|column -t
ceph-osd@.service     enabled-runtime
ceph-volume@.service  enabled
ceph-osd.target       enabled
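The enabled vs enabled-runtime distinction can be reproduced on any
systemd host. A minimal sketch (the ceph-osd@0.service instance name is
illustrative):

```shell
# Runtime enablement creates the symlink under /run/systemd/system,
# which is a tmpfs, so the state is lost on reboot.
systemctl enable --runtime ceph-osd@0.service
systemctl is-enabled ceph-osd@0.service   # reports: enabled-runtime

# Persistent enablement creates the symlink under /etc/systemd/system
# and therefore survives a reboot.
systemctl enable ceph-osd@0.service
systemctl is-enabled ceph-osd@0.service   # reports: enabled
```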

When switching to a containerized deployment we stop/disable
ceph-osd@XX.service, ceph-volume and ceph.target and then remove the
systemd unit files.
But the new systemd units for the containerized ceph-osd services
still inherit from the ceph-osd@.service unit file.

As a consequence, if an OSD host reboots after the playbook execution,
the ceph-osd services won't come back because they aren't enabled at
boot.
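In shell terms, the remediation amounts to something like the following
(a sketch only; the instance name ceph-osd@0.service and the exact
sequencing are illustrative, not lifted from the playbook):

```shell
# Drop the old package-based unit's enabled-runtime state.
systemctl disable ceph-osd@0.service

# Reload systemd so it picks up the new containerized unit files.
systemctl daemon-reload

# Enable the containerized unit persistently (symlink under /etc),
# so the OSD comes back after a reboot.
systemctl enable --now ceph-osd@0.service
```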

This patch also adds a reboot and testinfra run after running the switch
to container playbook.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1881288

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit fa2bb3af86)
2020-11-12 17:04:30 -05:00
untested-by-ci ansible: use 'bool' filter on boolean conditionals 2019-06-06 10:21:17 +02:00
vars osd: remove variable osd_scenario 2019-04-11 11:57:02 -04:00
README.md doc: update infra playbooks statements 2020-02-25 15:27:52 +01:00
add-mon.yml facts: explicitly disable facter and ohai 2020-07-03 06:37:08 +02:00
ceph-keys.yml ceph_key: set state as optional 2020-09-14 15:37:56 -04:00
cephadm-adopt.yml monitor: use quorum_status instead of ceph status 2020-11-03 14:32:09 +01:00
cephadm.yml monitor: use quorum_status instead of ceph status 2020-11-03 14:32:09 +01:00
docker-to-podman.yml ceph-crash: introduce new role ceph-crash 2020-07-22 18:47:01 -04:00
filestore-to-bluestore.yml fs2bs: support `osd_auto_discovery` scenario 2020-09-29 16:28:43 +02:00
gather-ceph-logs.yml global: add newline at end of file 2019-08-23 15:56:47 +02:00
lv-create.yml lv-create: fix a typo 2019-09-26 11:35:24 +02:00
lv-teardown.yml improve coding style 2019-04-23 15:37:07 +02:00
purge-cluster.yml common: drop `fetch_directory` feature 2020-10-21 18:28:25 -04:00
purge-container-cluster.yml common: drop `fetch_directory` feature 2020-10-21 18:28:25 -04:00
purge-iscsi-gateways.yml purge-iscsi-gateways: don't run all ceph-facts 2020-01-10 15:46:15 +01:00
rgw-add-users-buckets.yml Example ceph_add_users_buckets playbook 2018-12-20 14:23:25 +01:00
rolling_update.yml rolling_update: fix mgr start with mon collocation 2020-11-03 14:32:42 +01:00
shrink-mds.yml infrastructure: consume ceph_fs module 2020-11-03 14:32:25 +01:00
shrink-mgr.yml shrink-mgr: fix systemd condition 2020-03-03 10:32:15 +01:00
shrink-mon.yml monitor: use quorum_status instead of ceph status 2020-11-03 14:32:09 +01:00
shrink-osd.yml common: don't enable debug log on ceph-volume calls by default 2020-08-12 22:57:10 +02:00
shrink-rbdmirror.yml rgw/rbdmirror: use service dump instead of ceph -s 2020-11-03 14:32:09 +01:00
shrink-rgw.yml rgw/rbdmirror: use service dump instead of ceph -s 2020-11-03 14:32:09 +01:00
storage-inventory.yml common: don't enable debug log on ceph-volume calls by default 2020-08-12 22:57:10 +02:00
switch-from-non-containerized-to-containerized-ceph-daemons.yml switch2container: disable ceph-osd enabled-runtime 2020-11-12 17:04:30 -05:00
take-over-existing-cluster.yml remove ceph-agent role and references 2019-06-03 13:35:50 +02:00

README.md

Infrastructure playbooks

This directory contains a variety of playbooks that can be used independently of the Ceph roles we have. They perform infrastructure-related tasks that help with managing a Ceph cluster or with certain operational tasks.

To use them, run ansible-playbook infrastructure-playbooks/<playbook>.
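For example, to run the rolling update playbook against a specific
inventory (the inventory path here is illustrative):

```shell
# Run one of the infrastructure playbooks against your cluster inventory.
ansible-playbook -i /etc/ansible/hosts \
    infrastructure-playbooks/rolling_update.yml
```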