mirror of https://github.com/ceph/ceph-ansible.git
Commit fa2bb3af86
When deploying Ceph OSDs via packages, the ceph-osd@.service unit is configured as enabled-runtime, so each ceph-osd service inherits that state. The enabled-runtime systemd state doesn't survive a reboot. For non-containerized deployments the OSDs still come back after a reboot because the ceph-volume@.service and/or ceph-osd.target units do that job.

    $ systemctl list-unit-files | egrep '^ceph-(volume|osd)' | column -t
    ceph-osd@.service     enabled-runtime
    ceph-volume@.service  enabled
    ceph-osd.target       enabled

When switching to a containerized deployment we stop/disable ceph-osd@XX.service, ceph-volume and ceph.target, and then remove the systemd unit files. But the new systemd units for the containerized ceph-osd services still inherit from the ceph-osd@.service unit file. As a consequence, if an OSD host reboots after the playbook execution, the ceph-osd services won't come back because they aren't enabled at boot.

This patch also adds a reboot and testinfra run after running the switch-to-containers playbook.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1881288
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
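As a rough illustration of the unit states involved, the shell sketch below (run on an OSD host, with OSD id 0 used purely as an example) shows how a per-OSD unit could be checked and enabled so that it survives a reboot. It is only a sketch of the idea, not the actual change made by this patch.

```
# Check whether the containerized per-OSD unit will start at boot.
# "enabled-runtime" does not survive a reboot; "enabled" does.
systemctl is-enabled ceph-osd@0.service

# Persist the per-OSD unit and the umbrella target across reboots.
systemctl enable ceph-osd@0.service
systemctl enable ceph.target

# Confirm the resulting states (arguments are shell-style glob patterns).
systemctl list-unit-files 'ceph-osd@*' 'ceph.target'
```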
Directory contents:

untested-by-ci
vars
README.md
add-mon.yml
ceph-keys.yml
cephadm-adopt.yml
cephadm.yml
docker-to-podman.yml
filestore-to-bluestore.yml
gather-ceph-logs.yml
lv-create.yml
lv-teardown.yml
purge-cluster.yml
purge-container-cluster.yml
purge-iscsi-gateways.yml
rgw-add-users-buckets.yml
rolling_update.yml
shrink-mds.yml
shrink-mgr.yml
shrink-mon.yml
shrink-osd.yml
shrink-rbdmirror.yml
shrink-rgw.yml
storage-inventory.yml
switch-from-non-containerized-to-containerized-ceph-daemons.yml
take-over-existing-cluster.yml
README.md
Infrastructure playbooks
This directory contains a variety of playbooks that can be used independently of the Ceph roles we have. They aim to perform infrastructure-related tasks that help with managing a Ceph cluster or with performing certain operational tasks.
To use them, run ansible-playbook infrastructure-playbooks/<playbook>.
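For example, a hypothetical invocation (the inventory path below is a placeholder; the playbook name is taken from the listing above) might look like:

```
# Example only: replace the inventory path with your own.
ansible-playbook -i /path/to/inventory infrastructure-playbooks/rolling_update.yml
```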