Recovery and re-balancing decrease performance; adding more options
makes it easier to tune this behavior.
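A minimal sketch of the kind of tunables this refers to, expressed as
playbook variables (the underlying ceph.conf option names are real Ceph
OSD settings; the variable names are illustrative, not necessarily the
ones the role exposes):

    # group_vars/osds.yml (hypothetical variable names)
    osd_max_backfills: 1          # ceph.conf 'osd max backfills'
    osd_recovery_max_active: 1    # ceph.conf 'osd recovery max active'
    osd_recovery_op_priority: 1   # ceph.conf 'osd recovery op priority'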
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Since 192.168.0.0/24 is very common and might overlap with existing
networks on your laptop, using another subnet such as 192.168.42.0/24
makes a collision less likely.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Because of the following bug: http://tracker.ceph.com/issues/8551
If we use a disk file size of 1GB, the OSD weight calculation ends up
being 0, so no data is stored on any OSD.
Increase the disk file size to 11GB: the calculation is based on df, so
11GB leaves room for filesystem overhead and is safer than 10GB.
Because we significantly increased the size of the disk files, we now
only create 2 devices per OSD host.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Previously we used osd_crush_update_on_start: true. Ansible interpreted
the value as a boolean and rendered it as 'True' in the Ceph
configuration file, whereas Ceph's init script looks for 'true'.
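A sketch of the fix: quoting the value forces Ansible to treat it as a
string, so the template renders it verbatim:

    # before: YAML boolean, templated as 'True' in ceph.conf
    osd_crush_update_on_start: true
    # after: plain string, templated as 'true' as the init script expects
    osd_crush_update_on_start: "true"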
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit introduces support for the development branches of Ceph.
You can now install Ceph from master.
The behavior is controlled through 2 new options:
* ceph_stable: true will use the stable branch
* ceph_dev: true will use a development branch
For the dev packages, don't forget to set the branch you want to use,
as in the sketch below.
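A sketch of how the two options combine (ceph_dev_branch is an assumed
name for the branch selector; check the role defaults for the actual
variable):

    # group_vars/all
    ceph_stable: false
    ceph_dev: true
    ceph_dev_branch: master   # the development branch to install from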
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Prior to this patch, the first match won and the playbook could not
tell the two "restart ceph" handlers apart; adding a distro filter
fixes this.
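A sketch of the differentiated handlers using the standard
ansible_os_family fact (handler names and modules are illustrative):

    # handlers/main.yml
    - name: restart ceph on debian
      service:
        name: ceph
        state: restarted
      when: ansible_os_family == 'Debian'

    - name: restart ceph on redhat
      command: service ceph restart
      when: ansible_os_family == 'RedHat'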
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
A couple of months ago, Dan van der Ster from CERN reported that
updatedb was consuming 100% of the CPU while scanning the system's
directories. The process was walking the OSD PG directories, which
might contain billions of objects.
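One way to implement the exclusion, assuming updatedb reads PRUNEPATHS
from /etc/updatedb.conf (the exact path list is illustrative, not
necessarily what the role ships):

    - name: exclude ceph osd directories from updatedb
      lineinfile:
        dest: /etc/updatedb.conf
        regexp: '^PRUNEPATHS'
        line: 'PRUNEPATHS="/tmp /var/spool /var/lib/ceph"'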
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
After a change is made to the configuration file we must restart the
Ceph services. I also added a check that verifies whether a socket
exists, because during the first play no services are running yet: if
no socket exists we don't try to restart the services; if one does, we
restart them.
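A sketch of the check, assuming the daemons expose admin sockets under
/var/run/ceph (task names are illustrative):

    - name: check for a ceph socket
      shell: stat /var/run/ceph/*.asok > /dev/null 2>&1
      failed_when: false
      register: ceph_socket

    - name: restart ceph
      service:
        name: ceph
        state: restarted
      when: ceph_socket.rc == 0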
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
* Use a box that supports all providers
* Fix HDD creation so it doesn't call customize more than once
* Introduce a method to create VMDKs
* Add provider customization for VMware Fusion
I added a 'ceph-' prefix to all the roles related to Ceph. Since we
are about to push the roles to Ansible Galaxy, this will make it
easier to use them in a larger environment alongside other roles.
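A sketch of what a playbook looks like after the rename (the host group
and role set are illustrative):

    # site.yml
    - hosts: mons
      roles:
        - ceph-common
        - ceph-mon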
Fixes: #94
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>