For a newly created cluster, the command `ceph --cluster {{ cluster }} osd
pool get rbd size` does not respond properly.
We only want to check whether the rbd pool exists, so we now use an
`ls | grep` approach.
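A sketch of what the `ls | grep` check can look like as a task (the task
name and registered variable are illustrative, not the exact playbook code):

```yaml
- name: check if rbd pool exists
  shell: ceph --cluster {{ cluster }} osd pool ls | grep -sq '^rbd$'
  register: rbd_pool_exists
  failed_when: false
  changed_when: false
```

Later tasks can then key off `rbd_pool_exists.rc == 0` instead of querying
pool properties on a cluster that may not have created the pool yet.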
Closes: https://github.com/ceph/ceph-ansible/issues/1547
Signed-off-by: Sébastien Han <seb@redhat.com>
Rewrite check_pgs to use JSON parsing instead of a complex regexp to parse
the `ceph -s` output.
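A minimal sketch of the JSON-based check (retries/delay values are
illustrative; the `pgmap` fields match the `ceph -s --format json` output of
that era):

```yaml
- name: waiting for clean pgs...
  command: ceph --cluster {{ cluster }} -s --format json
  register: ceph_status
  changed_when: false
  # done when "active+clean" is the only state reported by the pg map
  until: >
    ((ceph_status.stdout | from_json).pgmap.pgs_by_state | length) == 1
    and
    (ceph_status.stdout | from_json).pgmap.pgs_by_state.0.state_name == "active+clean"
  retries: 10
  delay: 10
```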
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a default value for `ceph_docker_on_openstack` to avoid a
conditional check error for the task `pause after docker install before starting` in
`roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml`
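A hedged sketch of the fix, a plain default so the `when:` evaluation never
sees an undefined variable (`false` is the assumed safe default):

```yaml
# roles/ceph-docker-common/defaults/main.yml
# only pause after the docker install when running on openstack vms
ceph_docker_on_openstack: false
```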
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We ship ceph-iscsi-gw in a separate repo downstream and do not package
it with ceph-ansible. Including the play for ceph-iscsi-gw in
site.yml.sample makes the playbook fail when using the downstream
packages.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1454945
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
The fact `['ansible_$interface']['ipv4']` is a dictionary, whereas
`['ansible_$interface']['ipv6']` is a list. If we use
`ansible_default_ipv6|ipv4` instead, it is always a dictionary, which lets
us get the ipv6 or ipv4 address without adding more complexity to the
template.
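For illustration, a sketch of the single lookup this enables (the
`set_fact` task and `monitor_address` name are hypothetical; `ip_version`
is the existing variable holding 'ipv4' or 'ipv6'):

```yaml
- name: pick the default address for either ip version
  set_fact:
    # ansible_default_ipv4 and ansible_default_ipv6 are both dictionaries
    # exposing an 'address' key, so one expression covers both cases
    monitor_address: "{{ hostvars[inventory_hostname]['ansible_default_' + ip_version]['address'] }}"
```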
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Currently we cannot install the ceph-iscsi-ansible RPM on a node where
the ceph-ansible RPM is already installed.
ceph-iscsi-ansible should install on top of the ceph-ansible environment
without issues.
We need to include `ceph_docker_registry` when removing containers/images
because if we don't, Docker assumes `docker.io`, which is not always where
the image originated, and the playbook fails.
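A sketch of the purge task with the registry included (`ceph_docker_image`
and `ceph_docker_image_tag` are the usual ceph-ansible variables, assumed
here):

```yaml
- name: remove ceph images
  docker_image:
    # prefix with the registry, otherwise docker assumes docker.io
    name: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}"
    tag: "{{ ceph_docker_image_tag }}"
    state: absent
```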
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
At some point we changed the PG check, and it turned out to be dangerous:
the current check can be satisfied as long as a single PG is active+clean,
even if every other PG is not.
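A sketch of the stricter condition: count the active+clean PGs and require
the count to equal the total, instead of merely matching one active+clean
entry (field names as in `ceph -s --format json`):

```yaml
- name: wait until all pgs are active+clean
  command: ceph --cluster {{ cluster }} -s --format json
  register: ceph_pgs
  changed_when: false
  # sum the pgs reported as active+clean and compare with the total pg count
  until: >
    ((ceph_pgs.stdout | from_json).pgmap.pgs_by_state
      | selectattr('state_name', 'equalto', 'active+clean')
      | map(attribute='count') | list | sum | int)
    ==
    ((ceph_pgs.stdout | from_json).pgmap.num_pgs | int)
  retries: 10
  delay: 10
```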
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2`, there is no need to add `-e
CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` as a default value in the defaults vars.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When we purge a containerized cluster we need to use the correct
playbook when redeploying the cluster.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
If we're purging a containerized cluster that did not use the
`raw_multi_journal` OSD scenario, then `raw_journal_devices` will not be
defined, which causes the playbook to fail.
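A sketch of the guard: defaulting to an empty list keeps the zap step a
no-op for scenarios that never defined the variable (the task itself is
illustrative):

```yaml
- name: zap raw journal devices
  command: ceph-disk zap {{ item }}
  with_items: "{{ raw_journal_devices | default([]) }}"
```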
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1455187
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
`ceph-docker-common`:
At the moment there are a lot of duplicated tasks in each
`./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored into
`./roles/ceph-docker-common/tasks/main.yml`.
`*_containerized_deployment` variables:
All `*_containerized_deployment` variables have been refactored into a
single variable, `containerized_deployment`.
Duplicate `cephx` variables in `group_vars/*` have been removed.
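Sketch of the consolidation; the per-role "before" names follow the
`*_containerized_deployment` pattern and are shown as examples:

```yaml
# before: one switch per role, e.g.
#   mon_containerized_deployment: true
#   osd_containerized_deployment: true
# after: a single switch, typically in group_vars/all.yml
containerized_deployment: true
```

Tasks and templates can then gate on the one variable, e.g.
`when: containerized_deployment`.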
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We only check for everything except 'distro' because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.
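A hedged sketch of such a validation; the `ceph_rhcs_*` names are
assumptions used for illustration, the point being that 'distro' bypasses
the repo checks:

```yaml
- name: make sure an installation source was chosen
  fail:
    msg: "choose a cdn or iso installation source for rhcs"
  when:
    # 'distro' is valid as-is: the repos are already present on the nodes
    - ceph_origin != 'distro'
    - not (ceph_rhcs_cdn_install | default(false) or ceph_rhcs_iso_install | default(false))
```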
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we could end up in a situation where we would install a package
on a machine that does not have the right repo enabled. Because the
conditions were joined with OR, we weren't pinning to a particular set of
hosts, just to an installation condition. Let's say someone sets
`ceph_origin == "distro"`: this would try to install OSD packages on
Monitors.
Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND then apply one of the installation
conditions.
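A sketch of the pinned condition for the monitor packages
(`mon_group_name` is the existing ceph-ansible variable; the installation
condition shown is just one of several):

```yaml
- name: install ceph-mon package
  package:
    name: ceph-mon
    state: present
  when:
    # first pin to the set of hosts...
    - mon_group_name in group_names
    # ...then apply the installation condition
    - ceph_origin == 'distro'
```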
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we delegate the set/unset flag to a monitor node but try to call
an OSD container.
Solution: use the right container name.
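A sketch of the corrected call: since the command is delegated to a
monitor, the container name must be the monitor's (`ceph-mon-<hostname>`
is the naming convention used by the ceph container images):

```yaml
- name: set osd flags
  command: >
    docker exec ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}
    ceph --cluster {{ cluster }} osd set {{ item }}
  with_items:
    - noout
    - noscrub
    - nodeep-scrub
  delegate_to: "{{ groups[mon_group_name][0] }}"
```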
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: deploying a containerized Ceph cluster with ipv6 fails.
Solution: do not hardcode ipv4 when bootstrapping the container.
Now use `ip_version: ipv6` to get a containerized cluster deployed with
ipv6.
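For reference, the knob as it would appear in group_vars (ipv4 is the
assumed default):

```yaml
# group_vars/all.yml
ip_version: ipv6
```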
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451786
Signed-off-by: Sébastien Han <seb@redhat.com>
To ease the backport, I wrote a quick script.
Usage: ./backport.sh stable-2.2 6892670d31 my-work
We can also pass multiple commits.
Following up on @ktdreyer's write-up here:
https://github.com/ceph/ceph-ansible/pull/1529
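The script itself is not part of this message; a minimal sketch of what it
could look like, assuming the arguments after the stable branch are commit
shas and that an `origin` remote carries the stable branches:

```sh
#!/bin/bash
# backport.sh: cherry-pick one or more commits onto a stable branch
# usage: ./backport.sh <stable-branch> <commit> [<commit>...]
set -e
branch=$1; shift
git fetch origin
git checkout -b "backport-${branch}" "origin/${branch}"
for commit in "$@"; do
  git cherry-pick -x "$commit"   # -x records the original sha in the message
done
```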
Signed-off-by: Sébastien Han <seb@redhat.com>