Commit Graph

491 Commits (4e3301d3614bb9e9ee40d531fcd4421ccf5ea18b)

Author SHA1 Message Date
Dimitri Savineau 4e3301d361 ceph-osd: exit gracefully when no data partition
When using the collocated or non-collocated osd_scenario (ceph-disk) and
determining the OSD_DEVICE from the OSD_ID passed to the systemd unit, we
can end up in a situation where the OSD ID exists but the OSD hasn't been
activated.
This means the data partition isn't in the activated state and the
ceph-disk list command won't show the OSD ID for the data partition.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1850377

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-07 18:18:14 +02:00
Guillaume Abrioux 90f3f61548 infra: introduce docker to podman playbook
This isn't backported from master because there are too many changes
between stable-3.2 and other newer branches.

NOTE:
This playbook *doesn't* add podman support in stable-3.2 at all.
This is a TripleO-dedicated playbook which is intended to be run
early in the FFU workflow in order to prepare the OS upgrade.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1853457

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-07 12:11:09 -04:00
Guillaume Abrioux 8b8fa74db7 switch_to_containers: don't set noup flag
We shouldn't set this flag when running the switch_to_containers playbook.
Otherwise the playbook fails while waiting for pgs to be clean.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1843569

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b91d60d384)
2020-06-29 15:25:01 +02:00
Guillaume Abrioux 693e534ee9 Revert "switch_to_containers: don't set noup flag"
This reverts commit b7ec4a995b.

We need to provide a tag for RHCS 3.3z6 without this commit.
2020-06-25 17:07:25 +02:00
Dimitri Savineau a2556f084d docker: Add Requires on docker service
When using the docker container engine, the systemd unit scripts only
declare a dependency on the docker daemon via the After parameter.
But if docker is restarted on a live system, the ceph systemd units
should also wait for the docker daemon to be fully restarted.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1846830

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit bd22f1d1ec)
2020-06-22 21:08:13 -04:00
Guillaume Abrioux b7ec4a995b switch_to_containers: don't set noup flag
We shouldn't set this flag when running the switch_to_containers playbook.
Otherwise the playbook fails while waiting for pgs to be clean.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1843569

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b91d60d384)
2020-06-18 09:56:28 +02:00
Guillaume Abrioux 8ccf91c1f0 add-osd: unset noup flag after last osd is deployed
This commit fixes a bug in the `add-osd.yml` playbook.
The `noup` flag is set early but never gets unset before the "wait for pgs
clean" check, so the playbook always fails because the OSDs are never
seen as UP.
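
A minimal sketch of the kind of task this implies (the exec prefix,
delegation target, and variable names are assumptions, not the actual patch):

```
- name: unset noup flag once the last OSD is deployed
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd unset noup"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  changed_when: false
```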

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816023

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-04-07 11:19:53 -04:00
Guillaume Abrioux d4ffe21225 osd: support changing default rule even when osd_crush_location isn't defined
Creating crush rules even with no crush hierarchy configuration is a
valid scenario, so we shouldn't be bound to the result of the first task
(which configures the crush hierarchy) to be able to add new crush rules.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816989

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5b0476385c)
2020-03-31 23:04:03 +02:00
Dimitri Savineau db8902d444 ceph-{mon,osd}: move default crush variables
Since ed36a11 we moved the crush rules creation code from the ceph-mon
role to the ceph-osd role.
To keep backward compatibility we kept the possibility to set the crush
variables on the mons side, but we didn't move the default values.
As a result, when crush_rule_config is set to true and the default values
for crush_rules are expected, the crush rule creation task fails:

"msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute
'crush_rules'"

This patch moves the default crush variables from the ceph-mon role to
the ceph-osd role, and also uses those default values when nothing is
defined on the mons side.
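
Illustratively, the creation task can fall back to the osd-side defaults
when the mons define nothing; the exact lookup below is an assumption:

```
- name: create configured crush rules
  command: >
    {{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }}
    osd crush rule create-simple {{ item.name }} {{ item.root }} {{ item.type }}
  loop: "{{ hostvars[groups[mon_group_name][0]]['crush_rules'] | default(crush_rules) }}"
  run_once: true
  changed_when: false
```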

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798864

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1fc6b33714)
2020-02-17 16:23:33 +01:00
Dimitri Savineau aea4257807 ceph-osd: wait for all osds once
cf8c6a3 moved the 'wait for all osds' task from openstack_config to the
main tasks list.
But the openstack_config code was executed only on the last OSD node.
We don't need to run this check on every OSD node, so we set run_once to
true on that task.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5bd1cf40eb)
2020-01-13 16:54:01 +01:00
Dimitri Savineau 9a42fe580f ceph-osd: wait for all osd before crush rules
When creating crush rules with the device class parameter we need to be
sure that all OSDs are up and running, because the device class list is
populated with this information.
This is now enabled for all scenarios, not only openstack_config.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit cf8c6a3849)
2020-01-13 16:54:01 +01:00
Dimitri Savineau 8b2659bf6d rolling_update: create crush rule after osd play
When upgrading from jewel to luminous we can execute the crush rule tasks
only once the 'osd require-osd-release luminous' command has been run.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-01-13 16:54:01 +01:00
Dimitri Savineau af57597df6 ceph-osd: add device class to crush rules
This adds device class support to crush rules when the class key is used
in the rule dict, via the create-replicated subcommand.
If the class key isn't specified then we use the create-simple subcommand
for backward compatibility.
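
A sketch of the two code paths (the ceph CLI subcommands are upstream; the
surrounding task layout is an assumption):

```
- name: create crush rule(s) with a device class
  command: "ceph --cluster {{ cluster }} osd crush rule create-replicated {{ item.name }} {{ item.root }} {{ item.type }} {{ item.class }}"
  loop: "{{ crush_rules }}"
  when: item.class is defined
  run_once: true

- name: create crush rule(s) without a device class (backward compatible)
  command: "ceph --cluster {{ cluster }} osd crush rule create-simple {{ item.name }} {{ item.root }} {{ item.type }}"
  loop: "{{ crush_rules }}"
  when: item.class is not defined
  run_once: true
```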

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636508

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ef2cb99f73)
2020-01-13 16:54:01 +01:00
Dimitri Savineau 0ac43d83f4 move crush rule creation from mon to osd role
If we want to create crush rules with the create-replicated subcommand
and a device class, then we need to have the OSDs created before the
crush rules, otherwise the device classes won't exist.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ed36a11eab)
2020-01-13 16:54:01 +01:00
Dimitri Savineau 56a7537f48 ceph-osd: update systemd unit script
The systemd unit script wasn't updated with the new container name
format (without the hostname).
We now have the same start/stop docker commands for all scenarios.
During the device-to-id OSD migration we need to be sure that the
old containers with the hostname in their name are stopped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1780688

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-12-10 23:59:13 +01:00
Andrew Schoen 690860affc ceph-facts: generate devices when osd_auto_discovery is true
This task used to live in ceph-osd, but we need it defined here so that
ceph-config can use it when trying to determine the number of osds.
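
A rough sketch of what such a fact-gathering task can look like (the
filters and skip conditions are assumptions):

```
- name: resolve devices when osd_auto_discovery is true
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when:
    - osd_auto_discovery | bool
    - item.value.removable == "0"
    - item.value.partitions | length == 0
    - item.value.holders | length == 0
```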

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 88eda479a9)
2019-12-09 09:32:55 +01:00
Noah Watkins 146d144045 Remove outdated documentation
Fixes BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1640525

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2019-11-13 16:04:55 +01:00
Dimitri Savineau b47f7763fc ceph-osd: fix fs.aio-max-nr sysctl condition
[1] introduced a regression in the fs.aio-max-nr sysctl value condition.
The enable key isn't a boolean but a string, because the expression isn't
evaluated.
The string "(osd_objectstore == 'bluestore')" is always truthy because the
item.enable condition only checks for a non-empty string, so the sysctl
value was applied for both the filestore and bluestore backends.

[2] added the bool filter to the condition, but that filter always returns
false on a string, so the sysctl value wasn't applied at all.

This commit fixes the enable key by evaluating the expression instead
of using the literal string.

[1] https://github.com/ceph/ceph-ansible/commit/08a2b58
[2] https://github.com/ceph/ceph-ansible/commit/ab54fe2
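
Roughly, the difference is whether the expression is templated or left as
a literal string (the item layout and sysctl value are illustrative):

```
# before: 'enable' holds a literal, always-truthy string
- { name: fs.aio-max-nr, value: 1048576, enable: "(osd_objectstore == 'bluestore')" }

# after: the expression is templated, so 'enable' is a real boolean
- { name: fs.aio-max-nr, value: 1048576, enable: "{{ osd_objectstore == 'bluestore' }}" }
```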

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit ece46d33be)
2019-11-07 20:38:33 +01:00
Dimitri Savineau 4cd53bfbe5 ceph-osd: Remove ulimit nofile on container start
Even if this improves ceph-disk/ceph-volume performance, it also
impacts the ceph-osd process.
The ceph-osd process shouldn't use the 1024:4096 value for the max open
files limit.
This removes the ulimit option from the container engine invocation; the
change is done on the container side instead [1].

[1] https://github.com/ceph/ceph-container/pull/1497

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a996aef7f)
2019-10-31 14:42:41 -04:00
Dimitri Savineau f3fc97caa0 openstack_config: fix docker exec command
container_exec_cmd should be replaced by docker_exec_cmd.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1765110

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-10-24 14:13:52 -04:00
Guillaume Abrioux 8a1bda6d91 osd: refact 'wait for all osd to be up' task
Let's use `until` instead of doing the test in bash with a python
one-liner. Also, use `command` instead of `shell`.
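
A minimal sketch of the resulting task (the JSON field names and retry
values are assumptions):

```
- name: waiting for all osds to be up
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd stat -f json"
  register: osd_stat
  changed_when: false
  retries: 60
  delay: 10
  until: >
    (osd_stat.stdout | from_json).num_osds | int > 0 and
    (osd_stat.stdout | from_json).num_osds | int == (osd_stat.stdout | from_json).num_up_osds | int
```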

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c76cd5ad84)
2019-10-04 04:25:20 +02:00
Dimitri Savineau 211dd2fcf6 ceph-osd: handle loop devices with containers
Since we changed the way the OSD containers are run (using the ID instead
of the device name), we lost the ability to use loop devices.
Loop devices behave like nvme or cciss devices: their partitions are
referenced with an extra 'p' before the partition number (e.g.
/dev/loop0p1, as opposed to /dev/sdb1).

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749097

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-09-25 16:11:29 +02:00
Dimitri Savineau 7d2b29d0eb ceph-osd: Add ulimit nofile on container start
In containerized deployments, the OSD entrypoint runs some ceph-volume
commands (lvm/simple scan and/or activate) which perform badly without
the ulimit option.
This option was added for all previous ceph-volume commands but not for
the ceph-osd container startup.
Also update the hard limit value to 4096 to reflect the default
bare-metal value.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1744390

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a4ac46d19)
2019-08-27 20:52:58 +02:00
Guillaume Abrioux 81906344ee osd: copy systemd-device-to-id.sh on all osd nodes before running it
Otherwise it fails when running the rolling_update.yml playbook because
of the `serial: 1` usage.
The task which copies the script runs only against the current node being
played, whereas the task which runs the script runs against all nodes in
a loop; it ends up with the typical error:

```
2019-08-08 17:47:05,115 p=14905 u=ubuntu |  failed: [magna023 -> magna030] (item=magna030) => {
    "changed": true,
    "cmd": [
        "/usr/bin/env",
        "bash",
        "/tmp/systemd-device-to-id.sh"
    ],
    "delta": "0:00:00.004339",
    "end": "2019-08-08 17:46:59.059670",
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/bin/env bash /tmp/systemd-device-to-id.sh",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": "magna030",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2019-08-08 17:46:59.055331",
    "stderr": "bash: /tmp/systemd-device-to-id.sh: No such file or directory",
    "stderr_lines": [
        "bash: /tmp/systemd-device-to-id.sh: No such file or directory"
    ],
    "stdout": "",
    "stdout_lines": []
}
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1739209

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-08-12 21:57:29 +02:00
Dimitri Savineau 4dffcfb429 ceph-osd: check container engine rc for pools
When creating OpenStack pools, we only check whether the return code from
the pool list command isn't 0 (i.e. the pool doesn't exist). In that case,
the return code will be 2. That's why the next condition is rc != 0 for
the pool creation.
But in containerized deployments, the return code can be different if
there's a failure on the container engine command (like the container not
running). In that case, the return code can be either 1 (docker) or
125 (podman), so we should fail at this point and not in the next tasks.
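
Roughly, the check becomes explicit about which return codes are
acceptable (the pool lookup command and variable names are assumptions):

```
- name: check if an openstack pool already exists
  command: "{{ docker_exec_cmd }} ceph --cluster {{ cluster }} osd pool get {{ item.name }} size"
  register: openstack_pool_check
  loop: "{{ openstack_pools }}"
  changed_when: false
  # rc 2 only means the pool doesn't exist yet; 1 (docker) or 125 (podman)
  # means the container engine call itself failed, so stop here.
  failed_when: openstack_pool_check.rc not in [0, 2]
```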

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1732157

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d549fffdd2)
2019-07-31 14:08:22 -04:00
Dimitri Savineau bedc0ab69d ceph-osd: use OSD id with systemd ceph-disk
When using containerized deployment we have to create the systemd
service unit based on a template.
The current implementation with ceph-disk uses the device name
as the parameter to the systemd service and for the container name too.
$ systemctl start ceph-osd@sdb
$ docker ps --filter 'name=ceph-osd-*'
CONTAINER ID IMAGE                        NAMES
065530d0a27f ceph/daemon:latest-luminous  ceph-osd-strg0-sdb

This is the only scenario (compared to non-containerized or
ceph-volume based deployments) that isn't using the OSD id.

$ systemctl start ceph-osd@0
$ docker ps --filter 'name=ceph-osd-*'
CONTAINER ID IMAGE                        NAMES
d34552ec157e ceph/daemon:latest-luminous  ceph-osd-0

Also, if the device mapping doesn't persist across a system reboot (i.e.
sdb might be remapped to sde), then the OSD service won't come back after
the reboot.

This patch allows using the OSD id with the ceph-osd systemd service,
but requires activating the OSD manually with ceph-disk first in
order to assign the ID to that OSD.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1670734

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-07-26 16:07:22 -04:00
Dimitri Savineau d08af0a654 ceph-disk: Set max open files limit on container
Same behaviour as ceph-volume (b987534). The ceph-disk command runs
faster when using ulimit nofile with the container CLI.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-24 10:06:11 +02:00
Dimitri Savineau f4212b20e5 ceph-volume: Set max open files limit on container
The ceph-volume lvm list command takes ages to complete when there are
a lot of LV devices in a containerized deployment.
For instance, with 25 OSDs on a node it takes 3 min 44 s to list the
OSDs.
Adding the max open files limit to the container engine CLI when
executing the ceph-volume command improves the execution time a lot,
down to ~30 s.

This was impacting OSD creation with ceph-volume (both filestore
and bluestore) when using multiple LV devices.
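
Illustratively, the container invocation gains a nofile limit (the
mounts, flags, and image variables shown are assumptions):

```
- name: list ceph-volume lvm OSDs from a container
  command: >
    docker run --rm --net=host --privileged
    --ulimit nofile=1024:1024
    -v /dev:/dev
    -v /etc/ceph:/etc/ceph:z
    -v /var/lib/ceph:/var/lib/ceph:z
    --entrypoint=ceph-volume
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    lvm list --format json
  register: ceph_volume_lvm_list
  changed_when: false
```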

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b987534881)
2019-06-20 20:01:13 -04:00
Guillaume Abrioux f29366b848 ceph-osd: do not relabel /run/udev in containerized context
Otherwise content in /run/udev is mislabeled and prevents some services
like NetworkManager from starting.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 80875adba7)
2019-06-19 23:46:46 +02:00
Dimitri Savineau 0b653ee5b4 update default rhcs values and docs
The RHCS documentation mentioned in the default values and the
group_vars directory refers to RHCS 2.x while it should refer to 3.x.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1702732

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-04 14:18:23 +02:00
Dimitri Savineau 2fa8099fa7 osd: set default bluestore_wal_devices empty
We only need to set a dedicated WAL device when three tiers of storage
are used.
Currently the block.wal partition is also created on the same device
as block.db.
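
In group_vars terms (the description of the previous behaviour in the
comment is an assumption):

```
# previously this effectively defaulted to the block.db devices, so the
# block.wal partition landed next to block.db; leave it empty unless a
# third, faster tier of storage exists
bluestore_wal_devices: []
```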

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1685253

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-25 07:13:38 +00:00
Dimitri Savineau 54128db5cd ceph-osd: Fix merge conflict from mergify
PR #3916 was merged automatically by mergify even though there was a
conflict in the ceph-osd-run.sh.j2 template.
This commit resolves the conflict.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-24 12:41:23 -04:00
Dimitri Savineau 3ae2a687ed ceph-osd: Increase cpu limit to 4
In containerized deployments the default OSD CPU quota is too low
for production environments using NVMe devices.
This causes performance degradation compared to bare metal.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1695880

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit c17106874c)

# Conflicts:
#	roles/ceph-osd/templates/ceph-osd-run.sh.j2
2019-04-24 16:02:28 +00:00
Guillaume Abrioux c5c354a61a remove all NBSPs char in stable-3.2 branch
These can cause issues; let's replace all of these characters with real
spaces.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-10 13:27:48 +02:00
Dimitri Savineau efa0083f3c ceph-osd: Drop memory flag with bluestore
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit dc1c0dcee2)
2019-04-09 13:26:20 +00:00
Dimitri Savineau e4a71eabd9 ceph-osd: Ensure lvm2 is installed
When using the lvm osd_scenario, we never check whether the lvm2 package
is present on the host.
When using a containerized deployment with docker on CentOS/RedHat this
package is automatically installed as a dependency, but not on Ubuntu.
OSDs deployed via ceph-volume require the lvmetad.socket to be active
and running.
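
A minimal sketch of the corresponding task (condition and placement are
assumptions):

```
- name: install lvm2 package
  package:
    name: lvm2
    state: present
  when: osd_scenario == 'lvm'
```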

Resolves: #3728

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 179fdfbc19)
2019-03-20 22:59:28 +00:00
Guillaume Abrioux d3f6556041 osd: backward compatibility with old disk_list.sh location
Since all files in the container image have moved to `/opt/ceph-container`,
this check must look for the new AND the old path so it's backward
compatible. Otherwise it could end up templating an inconsistent
`ceph-osd-run.sh`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 987bdac963)
2019-03-18 21:56:53 +00:00
Dimitri Savineau 2f3206abeb ceph-osd: Install numactl package when needed
With 3e32dce we can run OSD containers with numactl support.
When using the numactl command in a containerized deployment we need to
be sure that the corresponding package is installed on the host.
The package installation is only executed when the
ceph_osd_numactl_opts variable isn't empty.
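
Something along these lines (the exact condition is an assumption):

```
- name: install numactl when numactl options are set
  package:
    name: numactl
    state: present
  when: ceph_osd_numactl_opts | length > 0
```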

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit b7f4e3e7c7)
2019-03-12 08:14:47 +00:00
Guillaume Abrioux 34086ec233 osd: support numactl options on OSD activate
This commit adds numactl support when activating OSD containers.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1684146

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b3eb9206fa)
2019-03-11 09:50:29 +00:00
Guillaume Abrioux de3465b6a3 osd: add ipc=host in systemd template for containers
in addition to 15812970f0

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d5be83e504)
2019-02-28 13:48:39 +00:00
Guillaume Abrioux d15b055854 osd: make the 'wait for all osd to be up' task configurable
Introduce two new variables to make the 'wait for all osd to be up'
check configurable.
For some deployments, OSDs can take longer to be seen as UP and IN.
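
In defaults terms, something like the following (variable names and
values are assumptions):

```
# how many times to poll, and how long to wait between polls,
# when waiting for all OSDs to be reported up
nb_retry_wait_osd_up: 60
delay_wait_osd_up: 10
```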

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1676763

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 21e5db8982)
2019-02-20 16:53:06 +00:00
David Waiting eba80adb1a ensure at least one osd is up
The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.

The problem is that if none of the OSDs have been discovered yet, the task will exit immediately and subsequent pool creation will fail (num_osds = 0, num_up_osds = 0).

This is related to Bugzilla 1578086.

In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.

Signed-off-by: David Waiting <david_waiting@comcast.com>
(cherry picked from commit 3930791cb7)
2019-02-19 19:02:16 +00:00
Sébastien Han 7db797d8df osd: expose udev into the container
In order to be able to retrieve udev information, we must expose its
socket. As per https://github.com/ceph/ceph/pull/25201, ceph-volume will
start consuming udev output.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 997667a873)
2019-02-06 00:37:11 +00:00
Guillaume Abrioux 303cc85754 osd: bind mount /var/run/udev/
without this, the command `ceph-volume lvm list --format json` hangs and
takes a very long time to complete.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 7ade032807)
2019-02-06 00:37:11 +00:00
Noah Watkins e57e2d98a1 start_osds: use list instead of keys (re-introduce)
The python3 issue fixed by:

  https://github.com/ceph/ceph-ansible/pull/3346

was reintroduced a few days later by:

  82a6b5adec

and this patch fixes it again :)

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
(cherry picked from commit 3cf5fd2c3e)
2019-01-16 15:48:35 +00:00
Kai Wembacher e2852eb40e add support for rocksdb and wal on the same partition in non-collocated
Signed-off-by: Kai Wembacher <kai@ktwe.de>
(cherry picked from commit a273ed7f60)
2018-12-20 14:21:14 +01:00
Sébastien Han 8d1c67beb2 osd: discover osd_objectstore on the fly
Applying and passing OSD_BLUESTORE/FILESTORE on the fly is wrong for
existing clusters as their config would be changed.

Typically, if an OSD was prepared with ceph-disk on filestore and we
change the default objectstore to bluestore, the activation will fail.
The flag osd_objectstore should only be used for the preparation, not
the activation. The activation in this case detects the OSD objectstore,
which prevents failures like the one described above.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 4c51130198)
2018-12-04 09:01:50 +00:00
Sébastien Han 1151521784 ceph-osd: change jinja condition
If an existing cluster runs this config and has ceph-disk OSDs, the
`expose_partitions` call won't be expected by jinja since it's inside the
'old' if. We need it as part of the osd_scenario != 'lvm' condition.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1640273
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit bef522627e)
2018-12-04 09:01:50 +00:00
Sébastien Han 452069cb3a osd: manage legacy ceph-disk non-container startup
The code is now able (again) to start osds that were configured with
ceph-disk in a non-container scenario.

Closes: https://github.com/ceph/ceph-ansible/issues/3388
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-29 23:30:21 +01:00
Guillaume Abrioux f0195e97ed refact osd pool size customization
Add a real default value for osd pool size customization.
Ceph itself defaults `osd_pool_default_size` to `3`.

If users don't specify a pool size in the various pool definitions within
ceph-ansible, we should default to `3`.

By the way, this kind of condition isn't really clear:
```
when:
  - rbd_pool_size | default ("")
```

We should instead try to get the customized value, default to what is in
`osd_pool_default_size` (whose default value points to
`ceph_osd_pool_default_size` (`3`) as well), and compare it to
`ceph_osd_pool_default_size`.
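
Illustratively, for the rbd pool, treating `osd_pool_default_size` as a
resolvable value (the condition shape is an assumption):

```
- name: customize rbd pool size
  command: >
    {{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }}
    osd pool set rbd size {{ rbd_pool_size | default(osd_pool_default_size) }}
  run_once: true
  changed_when: false
  when: rbd_pool_size | default(osd_pool_default_size) | int != ceph_osd_pool_default_size | int
```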

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 7774069d45)
2018-11-29 01:49:05 +00:00