Some users purge their environments and leave them in a sub-optimal
state, e.g. packages are still installed but /etc/ceph and /var/lib/ceph
no longer exist. This results in multiple failures across the play,
sometimes hard to detect. Populating these directories "just in case"
should help us solve these problems.
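A minimal sketch of the kind of pre-flight task this refers to
(ownership and mode here are assumptions, not necessarily what the
playbook uses):

  - name: recreate the ceph directories in case they were purged
    file:
      path: "{{ item }}"
      state: directory
      owner: ceph
      group: ceph
      mode: "0755"
    with_items:
      - /etc/ceph
      - /var/lib/ceph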
Closes: #1253
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 1149825f8f)
Doing this causes all the daemons to go down at the same time. In a
scenario where we colocate a monitor and an OSD, the OSDs will take
some time to go down, which will make the 'umount' task fail.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit d5dd658cfa)
On systems running docker there is an issue with lxcfs that results in
the find command returning 1 even though it actually did the job.
e.g. on a system with docker running, 'find /var' will give us the
following error:
find:
'/var/lib/lxcfs/cgroup/devices/lxc/x1/system.slice/systemd-update-utmp.service/devices.deny':
Permission denied
find:
'/var/lib/lxcfs/cgroup/devices/lxc/x1/system.slice/dev-random.mount/devices.allow':
Permission denied
...
...
However, the ceph files got deleted, so we ignore the error.
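A hedged sketch of how the cleanup task can tolerate that return code
(the task name and paths are illustrative):

  - name: remove any remaining ceph files under /var
    shell: find /var -name '*ceph*' -delete
    register: purge_find
    # find may exit 1 only because of unreadable lxcfs paths even though
    # the ceph files were removed, so treat rc 0 and 1 as success
    failed_when: purge_find.rc not in [0, 1]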
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit cb57a359ba)
We now rely on the CLI tool ceph-detect-init, which tells us the init
system in use on the distribution. We do this instead of the previous
lookup for systemd unit files to call the right task depending on the
init system.
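A hedged sketch of the pattern (the included file names are
hypothetical):

  - name: detect the init system
    command: ceph-detect-init
    register: init_system
    changed_when: false

  - include: stop_daemons_systemd.yml
    when: init_system.stdout == 'systemd'

  - include: stop_daemons_sysvinit.yml
    when: init_system.stdout == 'sysvinit'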
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit e371bd591c)
with_items is evaluated before the when condition, so in a second run
where the variable is empty it will fail with "'dict object' has no
attribute 'stdout_lines'". To fix this we add a default array so
with_items does not fail and the task is skipped by the when.
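Illustrative sketch of that fix (the 'partitions' register and the
command are hypothetical):

  - name: wipe the partitions found earlier
    command: wipefs --all "{{ item }}"
    # default([]) keeps with_items from blowing up on a second run where
    # the registered variable has no stdout_lines; the when then skips it
    with_items: "{{ partitions.stdout_lines | default([]) }}"
    when: partitions.stdout_lines is defined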
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 0e2e270ab2)
Sometimes, for testing, users tend to delete the whole /var/lib/ceph
and then run ansible again. The OSDs will never come up if we do not
create their directory.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 6f53774ee9)
The purge_dmcrypt scenario also tests centos7, so change this one to
xenial to get more test coverage.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 2a87c13f17)
Because the purge-cluster.yml playbook does not have access to the
roles' default vars, we cannot be sure that raw_multi_journal is
defined. For example, if this was purging a dmcrypt journal, then
raw_multi_journal might not be defined at all in group_vars/all.yml or
group_vars/osds.yml.
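A hedged sketch of guarding the task so a missing variable does not
abort the play (the device list variable is illustrative):

  - name: zap the raw journal devices
    command: sgdisk --zap-all "{{ item }}"
    with_items: "{{ raw_journal_devices | default([]) }}"
    when: raw_multi_journal | default(false) | bool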
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit d3cb8dba4e)
This also removes the purge_cluster_collocated scenario, as it is no
longer needed now that purge_cluster exists.
Moving all the purge commands into their own section allows for ease of
reuse when creating new purge scenarios.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit e05df64fd0)
This patch makes sure we set the proper pool size on the rbd pool.
Usually, during bootstrap, the rbd pool size is not honoured, so we
need to add this workaround.
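A hedged sketch of such a task (the size value, run_once and the
variable name are assumptions):

  - name: set the replica count on the rbd pool
    command: ceph osd pool set rbd size {{ osd_pool_default_size | default(3) }}
    run_once: true
    changed_when: false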
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit e35070f6ce)
When running encrypted OSDs, encrypted device mappers (created by the
cryptsetup tool) are used. So before attempting to remove all the
partitions on a device we must first delete all the encrypted device
mappers, then we can delete all the partitions.
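A hedged sketch of that ordering (how the list of mappers is gathered
is left out here, and the variable names are illustrative):

  - name: remove the encrypted device mappers first
    command: dmsetup remove "{{ item }}"
    with_items: "{{ encrypted_mappers | default([]) }}"
    failed_when: false

  - name: then zap the partitions on the underlying devices
    command: sgdisk --zap-all "{{ item }}"
    with_items: "{{ devices | default([]) }}"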
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 73ca1a7a00)
Resolves: backport#1235
The name of this variable was a bit confusing since enabling it would
zap all the block devices no matter which OSD scenario we are using.
Removing this variable and applying a condition based on the OSD
scenario instead is now feasible and easier since we import the
group_vars variable files for OSDs.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit adeb3decf3)
Resolves: backport#1235
Just applying our writing syntax convention in the playbook.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit b7fcbe5ca2)
Resolves: backport#1235
This allows the user to set ip_version to either ipv4 or ipv6. This
resolves a bug where monitor_address is set to an ipv6 address, but the
template fails to render because it's hardcoded to look for an 'ipv4'
key in the ansible facts.
See: https://bugzilla.redhat.com/show_bug.cgi?id=1416010
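A hedged sketch of the idea; the real facts layout differs slightly
between the ipv4 and ipv6 families, so the template expression below is
simplified:

  # group_vars/all.yml: choose which address family the templates use
  ip_version: ipv6

  # in the config template, key the fact lookup on ip_version instead of
  # hardcoding 'ipv4', roughly:
  #   {{ hostvars[host]['ansible_' + monitor_interface][ip_version]['address'] }}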
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Resolves: bz#1416010
(cherry picked from commit 03cb803bd1)
It is not enough to check that the mds group exists, because it
actually always does since we declare the variable. So we need to make
sure that there is at least one mds host.
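A hedged sketch of the stronger condition (the task itself is only a
placeholder):

  - name: run the mds-related setup
    command: /bin/true   # placeholder for the real task
    when:
      - groups[mds_group_name] is defined
      - groups[mds_group_name] | length > 0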
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 90648e7518)
Since we introduced config_overrides we removed a lot of options from
the default template. In some cases, like the mds pool, openstack
pools, etc., we need to know the number of PGs required. The idea here
is to skip the task if ceph_conf_overrides.global.osd_pool_default_pg_num
is not defined in your `group_vars/all.yml`.
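A hedged sketch of such a guard (the pool-creation task shown is
illustrative):

  - name: create the openstack pools
    command: ceph osd pool create {{ item }} {{ ceph_conf_overrides.global.osd_pool_default_pg_num }}
    with_items: "{{ openstack_pools | default([]) }}"
    when:
      - ceph_conf_overrides is defined
      - ceph_conf_overrides.global is defined
      - ceph_conf_overrides.global.osd_pool_default_pg_num is defined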
Closes: #1145
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ddac3a1fb5)
This allows the role to be used with ansible-galaxy and fixes the
includes in all the roles' meta/main.yml files.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 3713824b79)
We can use this to share common variables and tasks needed for every
containerized deployment.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit f770780dda)
There is an Ansible bug which makes the playbook fail when we run it
from a directory other than the git root. The real problem is that the
ansible.cfg is not honoured and we end up including variables from
roles/<role>/defaults/main.yml.
The fix is to copy the purge cluster playbook to the git root directory
and execute it from there.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 48ac9579b6)
When purging OSDs we do not need to include these defaults, as nothing
in the following tasks uses them. Also, including them has the side
effect of overwriting, with the default values, any variables defined
in group_vars files relative to the inventory you are using. That
behavior was causing the CI tests to fail.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit dd8389cdf7)
This scenario brings up a 1 mon 1 osd cluster using journal collocation,
purges the cluster and then verifies it can redeploy the cluster.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 0ce18daa49)
In my testing, zapping the OSD disks deleted the journal partitions,
making the 'zap ceph journal partitions' task fail because the
partitions it found previously no longer existed.
This moves the task that finds the journal partitions to after
'zap osd disks', to catch any partitions ceph-disk might have missed.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 321cea8ba9)
Using failed_when will still throw an exception and stop the playbook if
the file you're trying to include doesn't exist.
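A hedged sketch of a more robust pattern, using the first_found lookup
with skip so a missing file is simply skipped instead of raising (the
paths are illustrative):

  - name: include the optional group_vars files when they exist
    include_vars: "{{ item }}"
    with_first_found:
      - files:
          - "{{ playbook_dir }}/group_vars/osds.yml"
          - "{{ playbook_dir }}/group_vars/all.yml"
        skip: true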
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit c9e5914377)
Prior to this change, a playbook run with '--tags' or '--skip-tags'
would fail, because the ceph-common role would not include the
release.yml task, and this file defines critical things like
ceph_release.
Thanks to Andrew Schoen <aschoen@redhat.com> for help with the fix.
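A hedged sketch of the usual guard, i.e. making sure the include that
carries those definitions always runs regardless of tag filtering (the
file path is illustrative):

  # always include the file that defines ceph_release, even when the
  # play is run with --tags or --skip-tags
  - include: release.yml
    tags:
      - always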
(cherry picked from commit 63e5b5c406)
Prior to this patch we had several ways to run containers: we could use
ansible's docker module on some distros, while on container distros we
were using systemd. We strongly believe treating containers as services
with systemd is the right approach, so this patch generalizes it to all
the distros. These days most distros run systemd, so this is a fair
assumption.
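A hedged sketch of the systemd approach, assuming a unit such as
ceph-mon@.service wrapping the 'docker run' invocation has already been
deployed by a template task:

  - name: enable and start the containerized mon through systemd
    systemd:
      name: "ceph-mon@{{ ansible_hostname }}"
      state: started
      enabled: yes
      daemon_reload: yes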
Signed-off-by: Sébastien Han <seb@redhat.com>