This fixes #845 for containerized deployments. We now also bind mount
/etc/localtime into the containers in order to keep the container
timezone in sync with the host timezone.
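For illustration, the bind mount looks roughly like the sketch below; the
container name, image variable and module options are examples, not the
exact ones used by the playbooks:

    # Sketch only: names and variables here are illustrative.
    - name: run the ceph mon docker image with the host timezone
      docker:
        name: "ceph-{{ ansible_hostname }}"
        image: "{{ ceph_mon_docker_image }}"
        state: running
        volumes:
          - "/etc/localtime:/etc/localtime:ro"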
Signed-off-by: Ivan Font <ivan.font@redhat.com>
Prior to this change, each Ceph cluster node would end up with several
"qemu-client-$pid.log" files owned by root. The [client] section would
capture *all* client activity (for example the "ceph health" command),
not just librbd-in-qemu.
Restrict this section to libvirt clients only so that we don't generate
these spurious log files for other Ceph client traffic.
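For illustration, the idea is to scope the logging options to a
libvirt-specific client section instead of the generic one; the log file
path below is an example, not necessarily the exact value rendered by the
role:

    [client.libvirt]
    log file = /var/log/ceph/qemu-guest-$pid.log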
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
Deployment fails when ``secure_cluster`` is false:
TASK [ceph-mon : secure the cluster] *******************************************
fatal: [saceph-mon.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
fatal: [saceph-mon2.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
fatal: [saceph-mon3.vm.ceph.asheplyakov]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
A conditional include applies the (additional) conditional to every
included task [1]. Thus all tasks from `secure_cluster.yml' are always
evaluated, each with an additional 'when: secure_cluster' condition.
The `secure the cluster' task iterates over ``ceph_pools.stdout_lines``
even if ``secure_cluster`` is false: in loops Ansible applies the
conditional to every item (by design) [2]. However, the `collect all the
pools' task is skipped if that very same condition evaluates to false,
which leaves ``ceph_pools`` undefined, so the `secure the cluster' task
fails. Provide a default (empty) list to avoid the problem.
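A minimal sketch of the fix; the command and the second loop list are
illustrative, the point is the ``default([])`` fallback:

    # Sketch: fall back to an empty list when `collect all the pools'
    # was skipped, so the loop simply has nothing to iterate over.
    - name: secure the cluster
      command: ceph osd pool set {{ item[0] }} {{ item[1] }} true
      with_nested:
        - "{{ ceph_pools.stdout_lines | default([]) }}"
        - "{{ secure_cluster_flags }}"
      when: secure_cluster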
[1] http://docs.ansible.com/ansible/playbooks_conditionals.html#applying-when-to-roles-and-includes
[2] http://docs.ansible.com/ansible/playbooks_conditionals.html#loops-and-conditionals
Closes: #913
Signed-off-by: Alexey Sheplyakov <asheplyakov@mirantis.com>
Update each role's task to use the respective role's username, image
name, and image tag when checking whether a container is already running.
Previously we were not matching any running containers, which caused
false failures and subsequently ran checks.yml to check the status of
cluster files left behind.
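For example, the check for the mon role now looks roughly like this
sketch (each role uses its own variables; the names below are
illustrative):

    # Sketch only: variable and task names differ per role.
    - name: check if a ceph mon container is already running
      shell: docker ps | grep -sq "{{ ceph_mon_docker_username }}/{{ ceph_mon_docker_imagename }}:{{ ceph_mon_docker_image_tag }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false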
Signed-off-by: Ivan Font <ivan.font@redhat.com>
The journal size is not mandatory anymore; a default of 5GB is being
added. A simple warning message will show up if the size is set to
something below 5GB.
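A rough sketch of the behaviour, assuming ``journal_size`` is expressed
in MB (the variable name and message wording are illustrative):

    # defaults sketch: 5120 MB == 5 GB
    journal_size: 5120

    # warning sketch
    - name: warn about a journal size smaller than 5GB
      debug:
        msg: "WARNING: journal_size is smaller than 5GB, this is not recommended"
      when: journal_size < 5120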
Signed-off-by: Sébastien Han <seb@redhat.com>
This change allows keys in INI files to be any case.
Python's ConfigParser module lowercases keys by default,
however in some cases keys need to be upper and/or
mixed case.
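For reference, the ConfigParser behaviour being worked around is shown in
the standalone sketch below (not the plugin code itself); overriding
``optionxform`` preserves key case:

    # Sketch: by default optionxform lowercases option names; overriding
    # it with str keeps them exactly as written.
    try:
        import ConfigParser as configparser  # Python 2
    except ImportError:
        import configparser                  # Python 3

    parser = configparser.RawConfigParser()
    parser.optionxform = str  # keep keys exactly as written
    parser.add_section('Defaults')
    parser.set('Defaults', 'MixedCaseKey', 'value')
    print(parser.options('Defaults'))  # ['MixedCaseKey'], not ['mixedcasekey']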
Change-Id: I4e0dedb1b73ee596929bd425af6b0aaefd3a6c27
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
(cherry picked from commit f946160dd0)
- Update plays to use *_group_name variables that the user can override
  on the command line to use whatever group names are desired (see the
  sketch after this list)
- Only gather_facts when necessary, to greatly increase playbook speed
- Prompt the user for removing packages on non-atomic hosts only if desired
- Remove unnecessary package removal tasks
- Zap OSD devices twice to avoid lingering partitions
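A minimal sketch of the group-name override, assuming a
``mon_group_name`` variable (the play layout, group name and playbook
name are illustrative):

    # Play sketch: the targeted group can be overridden on the command line,
    # e.g. `ansible-playbook <playbook>.yml -e mon_group_name=my_mons`.
    - hosts: "{{ mon_group_name | default('mons') }}"
      gather_facts: false
      tasks:
        - debug:
            msg: "targeting group {{ mon_group_name | default('mons') }}"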
Signed-off-by: Ivan Font <ivan.font@redhat.com>
This removes containers, container images, packages, configuration files
and all the data. The container image removal tasks are tagged with
'remove_img' so that image removal can be skipped if desired.
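For illustration, skipping the image removal would look roughly like this
(the playbook name is a placeholder):

    ansible-playbook <purge-playbook>.yml --skip-tags remove_img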
Signed-off-by: Ivan Font <ivan.font@redhat.com>
The ceph-common role fails when you run Ansible with --check. Adding
always_run to a few tasks makes check mode go through more smoothly
(although it's not foolproof).
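A minimal sketch of the pattern; the task shown is illustrative, not
necessarily one of the tasks touched here:

    # Sketch: always_run lets this task execute even in --check mode, so
    # later tasks that use its registered result still see it defined.
    - name: check for a ceph socket
      shell: stat /var/run/ceph/*.asok > /dev/null 2>&1
      changed_when: false
      failed_when: false
      always_run: true
      register: socket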