ceph-ansible/roles/ceph-osd/tasks
Commit eba80adb1a by David Waiting: ensure at least one osd is up
The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.

The problem is that if none of the OSDs have been discovered yet, both counts are zero and the check passes trivially (num_osds = 0, num_up_osds = 0), so the task exits immediately and subsequent pool creation fails.

This is related to Bugzilla 1578086.

In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.

Signed-off-by: David Waiting <david_waiting@comcast.com>
(cherry picked from commit 3930791cb7)
2019-02-19 19:02:16 +00:00
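
For reference, the fix described above amounts to adding a second condition to the wait loop so that an empty osdmap can no longer satisfy it. The sketch below is illustrative only and not the exact task shipped in openstack_config.yml: the cluster variable, retry and delay values, and the JSON path into the `ceph -s -f json` output (which differs between Ceph releases, and which containerized deployments would reach through a container exec prefix) are assumptions.

```yaml
# Minimal sketch of the guarded wait loop; names and values are assumed,
# not the task as merged into openstack_config.yml.
- name: wait for all osds to be up
  command: "ceph --cluster {{ cluster | default('ceph') }} -s -f json"
  register: ceph_status
  changed_when: false
  retries: 60
  delay: 10
  until:
    # New guard: at least one OSD must have been discovered,
    # otherwise 0 == 0 would end the loop immediately.
    - ((ceph_status.stdout | from_json).osdmap.num_osds | int) > 0
    # Existing check: every discovered OSD is also up.
    - ((ceph_status.stdout | from_json).osdmap.num_osds | int) == ((ceph_status.stdout | from_json).osdmap.num_up_osds | int)
```

With both conditions in the `until` list, the task keeps retrying while the osdmap is empty and only succeeds once at least one OSD exists and every known OSD reports up, which is the behaviour the commit message describes.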
scenarios                        add support for rocksdb and wal on the same partition in non-collocated (2018-12-20 14:21:14 +01:00)
activate_osds.yml                osd: Add support for multipath disks (2018-02-09 18:06:25 +01:00)
build_devices.yml                Revert "osd: generate device list for osd_auto_discovery on rolling_update" (2018-08-16 11:13:12 +02:00)
ceph_disk_cli_options_facts.yml  remove jewel support (2018-10-12 23:38:17 +00:00)
check_gpt.yml                    osd: fix check gpt (2017-12-20 17:42:45 +01:00)
common.yml                       Add ceph_keyring_permissions variable to control permissions for (2018-06-28 15:48:39 +00:00)
copy_configs.yml                 Expose /var/run/ceph (2018-04-20 15:48:32 +02:00)
main.yml                         osd: commonize start_osd code (2018-11-28 23:11:46 +01:00)
openstack_config.yml             ensure at least one osd is up (2019-02-19 19:02:16 +00:00)
start_osds.yml                   osd: expose udev into the container (2019-02-06 00:37:11 +00:00)
system_tuning.yml                Allow os_tuning_params to overwrite fs.aio-max-nr (2018-05-11 10:49:37 +01:00)