On CentOS 7.2 with Jewel, service enablement through the Ansible service
module somehow does not work properly.
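As a hedged sketch, the workaround this points toward is calling
systemctl directly instead of relying on the service module (the unit
name and the condition are illustrative, not the actual change):

    # Possible workaround (assumption): bypass the service module and
    # enable the unit with systemctl directly.
    - name: enable ceph.target directly with systemctl (workaround)
      command: systemctl enable ceph.target
      when: ansible_distribution == 'CentOS'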
Signed-off-by: Sébastien Han <seb@redhat.com>
We just add EPEL to conveniently install Ansible. However, we don't keep
it, as it could disrupt Ceph's installation and dependencies.
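Expressed as Ansible tasks, the add-then-remove idea looks roughly like
this (a sketch, assuming the yum module manages the epel-release
package):

    - name: add EPEL temporarily so Ansible can be installed
      yum:
        name: epel-release
        state: present

    - name: remove EPEL so it cannot disrupt Ceph's dependencies
      yum:
        name: epel-release
        state: absent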
Signed-off-by: Sébastien Han <seb@redhat.com>
The ceph-mds role is being tested, but there was no group for it in the
inventory, so ceph-mds was not being installed on the testing machine.
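For illustration, the fix amounts to adding a group like this to the
inventory (the group and host names are assumptions based on
ceph-ansible conventions):

    [mdss]
    ceph-mds-01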
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This adds a helper fact that uses the ``init_system`` fact to determine if
we should be using systemd or not when controlling services.
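A minimal sketch of such a helper, assuming a hypothetical use_systemd
fact derived from init_system:

    - name: set a helper fact for service control
      set_fact:
        use_systemd: "{{ init_system == 'systemd' }}"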
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
The ceph-osd role currently uses ansible_service_mgr, which is a fact
only available on Ansible 2.x and greater. This commit sets a similar
fact called init_system, which will store the contents of /proc/1/comm
(systemd, init, etc.), and then references it in ceph-osd instead.
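A sketch of how the fact could be gathered, assuming a plain command
task (the register name is illustrative):

    - name: check the init system
      command: cat /proc/1/comm
      register: init_system_output
      changed_when: false

    - name: register init_system as a fact
      set_fact:
        init_system: "{{ init_system_output.stdout }}"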
Closes #741
If the ceph cluster name includes numbers, the grep used to find the OSD
IDs from /var/lib/ceph/osd/ would also return the numbers that were in
the cluster name.
For example, if the cluster was named 'mine123' and there was only one
OSD on the node, then the task that finds the OSD IDs would return
'123' and '0'.
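A hedged sketch of the kind of fix implied: strip the cluster-name
prefix from the directory names instead of grepping for bare digits
(the exact command is an assumption; cluster is the cluster-name
variable):

    - name: find the osd ids on this node
      shell: ls /var/lib/ceph/osd/ | sed "s/^{{ cluster }}-//"
      register: osd_ids
      changed_when: false

With a cluster named 'mine123', the directory 'mine123-0' then yields
only '0'.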
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This adds support to allow the install of Ceph from the
Ubuntu Cloud Archive. The Ubuntu Cloud Archive provides newer
releases of Ceph than the normal Ubuntu distro repository.
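A sketch of what enabling the Cloud Archive could look like, assuming
the apt and apt_repository modules (the mitaka pocket is an
illustrative choice, not the actual one):

    - name: install the ubuntu cloud archive keyring
      apt:
        name: ubuntu-cloud-keyring
        state: present

    - name: add the ubuntu cloud archive repository
      apt_repository:
        repo: "deb http://ubuntu-cloud.archive.canonical.com/ubuntu {{ ansible_lsb.codename }}-updates/mitaka main"
        state: present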
Signed-off-by: Samuel Matzek <smatzek@us.ibm.com>
Development versions of Ceph come after infernalis, where a package
split happened. So basically ceph-mon, ceph-osd, and ceph-mds need to
be installed.
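A sketch of the resulting install task on rpm systems (yum shown; the
apt side is analogous, and version pinning is omitted):

    - name: install the split ceph daemon packages
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - ceph-mon
        - ceph-osd
        - ceph-mds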
Signed-off-by: Sébastien Han <seb@redhat.com>
Introducing a playbook helper to control a ceph cluster that was not
deployed with ceph ansible.
The procedure is rather simple. If the cluster was deployed with one of
the following projects there won't be any issue:
* Ceph Deploy
* Puppet Ceph
* Chef Ceph
* Any other deployment tool that relies on ceph-disk
The procedure goes as follows:
1. Install Ansible and add your monitor and OSD hosts to its inventory.
For more detailed information you can read the Ceph Ansible Wiki.
2. Set generate_fsid: false in group_vars (see the sketch after this list).
3. Get your current cluster fsid with ceph fsid and set cluster_fsid
accordingly in group_vars
4. Run the playbook called take-over-existing-cluster.yml like this:
ansible-playbook take-over-existing-cluster.yml
5. Finally, run Ceph Ansible to validate everything by doing:
ansible-playbook site.yml
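For steps 2 and 3, the group_vars settings look roughly like this
(a sketch, assuming they live in group_vars/all; the fsid value is a
placeholder for the output of ceph fsid):

    # group_vars/all
    generate_fsid: false
    # Placeholder: paste the value returned by `ceph fsid` here
    cluster_fsid: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"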
Signed-off-by: Sébastien Han <seb@redhat.com>
When running via Vagrant, rather than always starting RESTAPI on each
monitor, make it optional via a configuration option in
vagrant_variables.yml.
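For illustration, the toggle could look like this in
vagrant_variables.yml (the option name is an assumption):

    # vagrant_variables.yml
    # Hypothetical option name: start the Ceph REST API on the monitors
    restapi: true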
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
This will allow a user to conditionally install the ceph package on
rpm-based systems. Installing this package is not required or wanted in
versions past infernalis.
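A minimal sketch of such a conditional, assuming a hypothetical boolean
like install_ceph_base_package in group_vars:

    - name: conditionally install the ceph base package
      yum:
        name: ceph
        state: present
      when:
        - ansible_os_family == 'RedHat'
        - install_ceph_base_package | default(false) | bool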
Signed-off-by: Andrew Schoen <aschoen@redhat.com>