The third occurrence of <your_host> didn't have a closing '>'. This is a tiny change to add the missing character.
This will let people quickly update this file with sed. For example: `sed s/\<your_host\>/192.168.1.41/g -i cluster-maintenance.yml`.
This fixes a bug where, if monitor_interface is set in your inventory
file rather than via group_vars or --extra-vars, the template would
use the default address of 0.0.0.0 instead of the defined
monitor_interface.
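A minimal sketch of the idea in ceph.conf.j2 terms: resolve
monitor_interface through hostvars so a per-host value from the
inventory file is honored. The loop shape and mon_group_name follow
ceph-ansible conventions; treat the exact template text as
illustrative, not the verbatim fix:

    {% for host in groups[mon_group_name] %}
    {% if hostvars[host]['monitor_interface'] is defined %}
    {# e.g. monitor_interface=eth1 resolves to the ansible_eth1 facts #}
    {% set interface = 'ansible_' + hostvars[host]['monitor_interface'] %}
    mon addr = {{ hostvars[host][interface]['ipv4']['address'] }}
    {% endif %}
    {% endfor %}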
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Instead of creating the RBD client socket path in three different
places in three different ways, this creates it once. Ceph on OpenStack
users have the option to customize the permissions of the RBD client
directories.
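A sketch of what the single task can look like; the
rbd_client_directory_* variable names stand in for the customizable
permissions and are assumptions, not quoted from the change:

    - name: create rbd client socket path
      file:
        path: "{{ rbd_client_admin_socket_path }}"
        state: directory
        owner: "{{ rbd_client_directory_owner }}"
        group: "{{ rbd_client_directory_group }}"
        mode: "{{ rbd_client_directory_mode }}"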
Fixes #687
1. Change how sysvinit ceph is determined to be enabled
For mons, the playbook checks if sysvinit is enabled by trying to
stat /var/lib/ceph/mon/ceph-{{ ansible_hostname }}/sysvinit [1].
However, that file is not created when a monitor is configured to
use sysvinit; instead, Ansible's service module is used [2]. Ansible
2.0 can verify whether a service is enabled by checking for a glob,
which in this context would be '/etc/rc?.d/S??ceph' [3]. Because
Ansible 1.9 does not support this feature, this change updates
rolling_update.yml to check whether sysvinit is enabled by statting
the same glob pattern and following the symlink, as sketched below.
This is done only for the mons.
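A sketch of that check, following the description above (stat the
glob pattern, follow the symlink); the register name matches the
monsysvinit register referenced in section 2, everything else is
illustrative:

    - name: check whether ceph mon is enabled under sysvinit
      stat:
        path: /etc/rc?.d/S??ceph
        follow: yes
      register: monsysvinit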
2. Change how sysvinit ceph is restarted
The playbook passes the argument "mon" to the sysv init script,
but the init script does not necessarily accept that argument, and
it failed when tested on a RHEL 7 system. However, dropping the
argument and just using Ansible's service module with state=restarted
worked, so this change omits that argument for the
monsysvinit.stat.exists case (see the sketch below). A similar change
is in this pull request for the OSD restart (removing args=osd).
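A sketch of the argument-free restart, reusing the monsysvinit
register from the check in section 1:

    - name: restart ceph mons under sysvinit
      service:
        name: ceph
        state: restarted
      when: monsysvinit.stat.exists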
A second ceph mon restart command is run regardless of any conditions
being met. I am not sure why the service is restarted in the case of
upstart or sysvinit and then restarted again. I am going to assume
there is a subtle reason for this and not touch this second run, but I
added a condition: when ansible_os_family is Red Hat, "mon" is not
passed as an argument; otherwise the service is restarted as it
already was.
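A sketch of the guarded second restart; the service module's
arguments parameter (alias: args) is what carries "mon", and it is
dropped only on the Red Hat family:

    - name: restart ceph mons
      service:
        name: ceph
        state: restarted
        arguments: mon
      when: ansible_os_family != 'RedHat'

    - name: restart ceph mons (red hat)
      service:
        name: ceph
        state: restarted
      when: ansible_os_family == 'RedHat'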
[1] https://github.com/ceph/ceph-ansible/blob/v1.0.3/rolling_update.yml#L32-L33
[2] https://github.com/ceph/ceph-ansible/blob/v1.0.3/roles/ceph-mon/tasks/start_monitor.yml#L42-L45
[3] https://github.com/ansible/ansible-modules-core/blob/stable-2.0/system/service.py#L492-L493
As written, generating the config file for ceph-mon in Docker yielded:

    ERROR: config_template is not a legal parameter in an Ansible task or handler

This fixes that error condition.
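For context, config_template is the action plugin bundled with
ceph-ansible; it has to be resolvable as the task's action (i.e. the
plugin path must be visible to Ansible), otherwise Ansible treats the
keyword as a stray parameter and emits the error above. A sketch of
the intended invocation, with illustrative src/dest/override values:

    - name: generate ceph configuration file
      config_template:
        src: ceph.conf.j2
        dest: /etc/ceph/ceph.conf
        config_overrides: "{{ ceph_conf_overrides }}"
        config_type: ini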
Vagrant will not always be available everywhere; sometimes we want to
bootstrap Ceph on a virtual machine. This script will help you
bootstrap a Ceph cluster from a development branch only.
You can run it like this: ./ceph-aio-no-vagrant.sh v10.1.0
to get the v10.1.0 version installed. If the first argument ($1) is
empty then master will be used instead.
Signed-off-by: Sébastien Han <seb@redhat.com>