ceph-ansible/roles/ceph-osd/tasks
Guillaume Abrioux 873fc8ec0f osd: ensure /var/lib/ceph/osd/{cluster}-{id} is present
This commit ensures that the `/var/lib/ceph/osd/{{ cluster }}-{{ osd_id }}` directory is
present before starting OSDs.

This is needed specifically when redeploying an OSD after an OS upgrade
failure.
Since the Ceph data are still present on the OSD's devices, the node can be
redeployed; however, those directories are missing because they are
initially created by ceph-volume. They could be recreated manually, but
for a better user experience ceph-ansible can recreate them.

NOTE:
this only works for OSDs that were deployed with ceph-volume.
For OSDs deployed with ceph-disk, those directories have to be recreated
manually.
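
The change described above can be sketched as an Ansible task like the
following. This is a hypothetical illustration based on the commit message,
not the actual code from the commit: the task name, the `_osd_ids` loop
variable, and the ownership/mode settings are all assumptions — see
start_osds.yml in ceph-ansible for the real implementation.

```yaml
# Hypothetical sketch: recreate the per-OSD data directories before
# starting the OSD services, so a redeployed node (e.g. after a failed
# OS upgrade) can start OSDs whose directories ceph-volume had
# originally created.
- name: ensure /var/lib/ceph/osd/{{ cluster }}-{{ osd_id }} is present
  ansible.builtin.file:
    path: "/var/lib/ceph/osd/{{ cluster }}-{{ item }}"
    state: directory
    owner: "ceph"            # assumed ownership; matches typical ceph packaging
    group: "ceph"
    mode: "0755"
  loop: "{{ _osd_ids }}"     # assumed variable holding this node's OSD ids
```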

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1898486

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-11-19 09:20:28 +01:00
File                          Latest commit                                                Date
scenarios                     add missing boolean filter                                   2020-09-28 20:45:01 +02:00
common.yml                    keyring: use ceph_key module for auth get command            2020-11-02 17:17:29 +01:00
container_options_facts.yml   ceph-osd: set container objectstore env variables            2020-01-20 13:59:44 -05:00
crush_rules.yml               osd: use default crush rule name when needed                 2020-03-31 14:49:38 -04:00
main.yml                      osds: use ceph osd stat instead of ceph status               2020-11-03 09:05:33 +01:00
openstack_config.yml          common: follow up on #5948                                   2020-11-02 20:16:36 -05:00
start_osds.yml                osd: ensure /var/lib/ceph/osd/{cluster}-{id} is present      2020-11-19 09:20:28 +01:00
system_tuning.yml             ceph-osd: fix fs.aio-max-nr sysctl condition                 2019-11-07 13:51:48 +01:00
systemd.yml                   ceph-osd: remove ceph-osd-run.sh script                      2020-06-18 17:51:13 +02:00