This module allows us to create a Ceph CRUSH hierarchy. The module works
with hostvars from individual OSD hosts.
Here is an example of the expected configuration in the inventory file:
```
[osds]
ceph-osd-01 osd_crush_location="{ 'root': 'mon-roottt', 'rack': 'mon-rackkkk', 'pod': 'monpod', 'host': 'localhost' }" # valid case
```
Then, if create_crush_tree is enabled, the module will create the
appropriate CRUSH buckets and their types in Ceph.
Some prerequisites:
* a 'host' bucket must be defined
* at least two buckets must be defined (this includes the 'host')
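As a minimal sketch, enabling the feature could look like this in group_vars
(the exact file and variable placement are assumptions and depend on your
inventory layout):
```
# Hypothetical group_vars/osds.yml excerpt -- placement may differ.
# With this enabled, the buckets from osd_crush_location above
# (root/rack/pod/host) are created in the CRUSH map.
create_crush_tree: true
```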
Signed-off-by: Sébastien Han <seb@redhat.com>
For readability and clarity we do not run any tasks directly in the
main.yml file. This file should only contain includes, which makes it
easier to apply conditionals later if we want to.
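A minimal sketch of what such a main.yml could look like (the included file
names and the condition are illustrative only, not the role's actual files):
```
# main.yml only dispatches to other task files; no tasks run here directly.
- include: pre_requisite.yml

# Conditionals can now be applied to a whole include at once.
- include: deploy.yml
  when: some_condition | bool
```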
Signed-off-by: Sébastien Han <seb@redhat.com>
* ignore yml files in general
* refactor based on commit f8e043b6ea5ac4e886532d4f2f675c507b44b955 that
changed directory layouts
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ec5c6f5da566611c4e0b88f925cbd26dc90368d6)
"make rpm" will build a ceph-ansible RPM and place it in the current
working directory.
This will allow us to run this command in Jenkins for every branch.
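For example, a Jenkins shell step could look roughly like this (the workspace
path is illustrative):
```
# Build the RPM from the checked-out branch; the package lands in $PWD.
cd "$WORKSPACE/ceph-ansible"
make rpm
ls -l ./*.rpm
```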
Run containerized daemons in virtual machines.
To enable it, simply do:
`cp site-docker.yml.sample site-docker.yml`
and set `docker: true` in `vagrant_variables.yml`.
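For reference, the relevant part of vagrant_variables.yml would then look like
this (all other settings omitted):
```
# vagrant_variables.yml (excerpt) -- run the daemons in containers inside the VMs
docker: true
```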
Signed-off-by: Sébastien Han <seb@redhat.com>
Thanks to @cloudnull's great patch at
https://github.com/ansible/ansible/pull/12555
we now have the ability to add more configuration options without having
to push a PR just to add a new option to the template. So you can
dynamically add and remove flags.
To use it, edit `ceph_conf_overrides` in `group_vars/all` like so:
```
ceph_conf_overrides:
  global:
    foo: 12345
    bar: 6789
```
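Assuming the template merges these keys as-is, the rendered ceph.conf should
then contain something like:
```
[global]
foo = 12345
bar = 6789
```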
Signed-off-by: Sébastien Han <seb@redhat.com>
While deploying, it's a bit annoying to have these files tracked by git.
It will also be easier if we want to work closely with the upstream version.
Signed-off-by: leseb <seb@redhat.com>
This is really handy when we are testing code since we don't need to
modify the Vagrantfile, which is tracked by git.
The next commit will ignore the vagrant_variables.yml file.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Add a fetch/ceph_cluster_uuid.conf file so we keep the same UUID from
the Vagrantfile and from the ceph-common run.
Prior to this change, the Vagrantfile was setting
4a158d27-f750-41d5-9e7f-26ce4c9d2d45, but while playing the ceph-common
role we check if fetch/ceph_cluster_uuid.conf exists and, if not, we
generate a UUID. So we ended up with 2 UUIDs...
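A rough sketch of the check-then-generate flow described above (task names
and registered variables are illustrative, not the actual ceph-common tasks):
```
# Illustrative tasks only -- the real ceph-common role may differ.
- name: check whether a cluster UUID was already fetched
  stat:
    path: "fetch/ceph_cluster_uuid.conf"
  register: cluster_uuid_file
  delegate_to: localhost

- name: generate a cluster UUID when none exists yet
  command: uuidgen
  register: generated_cluster_uuid
  delegate_to: localhost
  when: not cluster_uuid_file.stat.exists
```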
IMPORTANT NOTE FOR NON-VAGRANT DEPLOYMENT. IF YOU WANT TO USE YOUR OWN
UUID PLEASE REMOVE THAT FILE BEFORE RUNNING THE PLAYBOOK.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Use Vagrant's built-in support for the Ansible provisioner. This eliminates the
need for a hosts file and simplifies the Ansible config file.
Renames the config from .ansible.cfg to ansible.cfg since Ansible expects the
file to be called ansible.cfg and to sit adjacent to the Vagrantfile when using
the Vagrant provisioner.
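For reference, the built-in provisioner is wired up in the Vagrantfile roughly
like this (the playbook name is an assumption):
```
# Vagrantfile excerpt -- Vagrant's Ansible provisioner generates its own
# inventory, so no separate hosts file is needed.
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "site.yml"   # playbook name assumed
end
```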