ceph-ansible/roles/ceph-osd/tasks
Dimitri Savineau f4212b20e5 ceph-volume: Set max open files limit on container
The ceph-volume lvm list command takes ages to complete when there are
a lot of LV devices on a containerized deployment.
For instance, with 25 OSDs on a node it takes 3 mins 44s to list the
OSDs.
Adding a max open files limit to the container engine cli when
executing the ceph-volume command improves the execution time
significantly, down to ~30s.
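
The change the commit describes can be sketched as an Ansible task, assuming a
docker/podman-style --ulimit nofile flag and ceph-ansible's usual container
variables (container_binary, ceph_docker_registry, ceph_docker_image,
ceph_docker_image_tag); the limit value and volume mounts are illustrative,
not the literal patch:

```yaml
# Hypothetical sketch, not the exact ceph-ansible task: run ceph-volume
# inside the container engine with a raised open-files limit so that
# "lvm list" does not crawl when many LVs are present.
- name: list OSD devices via ceph-volume (containerized)
  command: >
    {{ container_binary }} run --rm --privileged
    --ulimit nofile=1024:4096
    -v /dev:/dev
    -v /etc/ceph:/etc/ceph:z
    -v /var/lib/ceph:/var/lib/ceph:z
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    ceph-volume lvm list --format json
  register: ceph_volume_lvm_list
  changed_when: false
```

Without the --ulimit option the container inherits the engine's default
nofile limit, which is what made the per-LV enumeration so slow here.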

This was impacting the OSDs creation with ceph-volume (both filestore
and bluestore) when using multiple LV devices.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b987534881)
2019-06-20 20:01:13 -04:00
scenarios osd: set default bluestore_wal_devices empty 2019-04-25 07:13:38 +00:00
activate_osds.yml osd: Add support for multipath disks 2018-02-09 18:06:25 +01:00
build_devices.yml Revert "osd: generate device list for osd_auto_discovery on rolling_update" 2018-08-16 11:13:12 +02:00
ceph_disk_cli_options_facts.yml remove jewel support 2018-10-12 23:38:17 +00:00
check_gpt.yml osd: fix check gpt 2017-12-20 17:42:45 +01:00
common.yml Add ceph_keyring_permissions variable to control permissions for 2018-06-28 15:48:39 +00:00
copy_configs.yml Expose /var/run/ceph 2018-04-20 15:48:32 +02:00
main.yml ceph-osd: Ensure lvm2 is installed 2019-03-20 22:59:28 +00:00
openstack_config.yml remove all NBSPs char in stable-3.2 branch 2019-04-10 13:27:48 +02:00
start_osds.yml ceph-volume: Set max open files limit on container 2019-06-20 20:01:13 -04:00
system_tuning.yml Allow os_tuning_params to overwrite fs.aio-max-nr 2018-05-11 10:49:37 +01:00