If using another method to generate a consistent fsid, then we can
skip creation of an (unused) cluster UUID file. If cephx is disabled
as well, we can skip creation of the fetch directory entirely.
The firewall checks can fail for any number of reasons -- e.g., the
ceph cluster hostnames are unresolvable from the ansible host, or the
ports are filtered by some intermediate hop, etc. Make two changes to
make those checks better:
* Set pipefail when running the checks, so if nmap itself fails the
command will be marked as 'failed'. Specifically, this fixes the
case where the hostnames cannot be resolved.
* Add a new variable, check_firewall, which can be used to disable the
checks entirely (see the sketch below). Specifically, this fixes the case
where some intermediate firewall filters the ports, so nmap returns "filtered".
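A minimal sketch of what such a check might look like after these changes; the task name, port, group name and exact nmap pipeline are illustrative assumptions rather than the playbook's literal content:
```
# Hypothetical firewall-check task: pipefail makes a failing nmap (e.g. an
# unresolvable hostname) fail the whole pipeline, and check_firewall lets
# users skip the check entirely.
- name: check if the monitor port looks reachable
  shell: set -o pipefail && nmap -p 6789 {{ item }} | grep -q ' open'
  args:
    executable: /bin/bash
  changed_when: false
  with_items: "{{ groups['mons'] }}"
  when: check_firewall | bool
```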
Currently, all the ceph package installation resources use
"state=latest", which means subsequent runs of the ceph playbooks
could result in ceph being upgraded if there are package updates
available in the selected repo.
This commit adds a new variable to ceph-common called
'upgrade_ceph_packages' which defaults to False. This variable is used
in the package installation resources for ceph packages to determine if
the resource should use "state=present" or "state=latest". If the
variable gets set to True, "state=latest" will be used.
Additionally, we update rolling_update.yml to override
upgrade_ceph_packages to true to permit package upgrades in this
context specifically.
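A hedged sketch of how the variable could drive the package state; the module, package name and file layout below are only illustrative:
```
---
# roles/ceph-common/defaults/main.yml (sketch)
upgrade_ceph_packages: False
---
# package installation task (sketch): 'latest' only when an upgrade is
# explicitly requested, 'present' otherwise so reruns don't upgrade ceph
- name: install ceph packages
  apt:
    name: ceph
    state: "{{ upgrade_ceph_packages | bool | ternary('latest', 'present') }}"
---
# rolling_update.yml (sketch): override the default to allow upgrades
vars:
  upgrade_ceph_packages: True
```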
Closes issue #506
This change allows for configurable Ceph Conf Directory permissions. This
is required for integrators of Ceph, like OpenStack Cinder, which needs to
read from /etc/ceph for operation.
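For illustration only -- the variable name below is hypothetical -- the intent is a group_vars knob along these lines:
```
# group_vars/all (sketch); a readable /etc/ceph lets consumers such as
# OpenStack Cinder read the configuration they need
ceph_conf_directory_mode: "0755"
```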
Thanks to @cloudnull's great patch at
https://github.com/ansible/ansible/pull/12555
we now have the ability to add more configuration options instead of
having to push a PR to add a new option to the template. So you can
dynamically add and remove flags.
To use it, edit `ceph_conf_overrides` in `group_vars/all` like so:
```
ceph_conf_overrides:
  global:
    foo: 12345
    bar: 6789
```
Signed-off-by: Sébastien Han <seb@redhat.com>
Since we renamed the variables and removed the old 'docker' variable, we
can now collocate containerized daemons with a standard bare metal deployment.
For instance, monitors can be containerized but osds can be deployed
traditionally.
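As a sketch (the per-group variable name below is an assumption based on the rename mentioned above):
```
---
# group_vars/mons.yml (sketch): run the monitors in containers
containerized_deployment: true
---
# group_vars/osds.yml (sketch): keep the OSDs as a bare metal deployment
containerized_deployment: false
```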
Signed-off-by: Sébastien Han <seb@redhat.com>
It should be used to disable health warnings about the number of PGs
being too low when some pools have very few objects, which brings down
the average number of objects per pool. This happens when running RadosGW.
The default is 10 and, since the warnings only occur with some use cases,
the default here is 10 as well. Set it to 20 or more to silence the warnings.
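The underlying Ceph option this describes is most likely `mon pg warn max object skew` (an assumption, not stated above); through the overrides mechanism it could be raised like so:
```
# Assumed mapping to Ceph's 'mon pg warn max object skew' option; raising it
# to 20 silences the "too few PGs" warnings caused by low-object pools
ceph_conf_overrides:
  global:
    mon pg warn max object skew: 20
```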
Currently, the fetch directory is created in your working directory
(where ansible is run from). We prefer to not keep any state in this
directory and would prefer to have the fetch directory configurable so
we can store it outside of our code checkout.
This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.
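For example (paths and the sample task below are only illustrative):
```
---
# group_vars/all (sketch) -- the default keeps the old behaviour; point it
# anywhere outside the code checkout if preferred
fetch_directory: fetch/
# fetch_directory: /var/lib/ceph-ansible/fetch
---
# example task reference (sketch)
- name: fetch the monitor keyring back to the ansible host
  fetch:
    src: /etc/ceph/ceph.mon.keyring
    dest: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/ceph.mon.keyring"
    flat: yes
```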
Closes issue #383
While deploying, it's a bit annoying to have these files tracked by git.
Untracking them also makes it easier to work closely with the upstream version.
Signed-off-by: leseb <seb@redhat.com>
Cool stuff :). We don't need to specify an initial monitor key anymore.
A key will automatically be generated.
The default key can always be overridden with the `monitor_secret`
variable.
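If you do want to pin the key yourself, something along these lines (the value shown is a placeholder):
```
# group_vars/all (sketch) -- only needed when overriding the generated key;
# a value can be produced with, e.g., `ceph-authtool --gen-print-key`
monitor_secret: "<output of ceph-authtool --gen-print-key>"
```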
Signed-off-by: leseb <seb@redhat.com>
We don't always have a dedicated cluster network, so by default we can
re-use the public network value.
This is just laziness :).
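In practice that means a default along these lines (variable names assumed from the usual public/cluster network settings):
```
# defaults sketch: the cluster network simply falls back to the public one
public_network: 192.168.42.0/24
cluster_network: "{{ public_network }}"
```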
Signed-off-by: leseb <seb@redhat.com>
We want to force the user to only enable the options they need. Thus
they shouldn't have to enable one option and then disable another.
Signed-off-by: leseb <seb@redhat.com>
Now we don't need to activate the services through a variable. If the
role is activated in the inventory, actions will occur automatically.
This also fixes the repo creation for Red Hat Storage.
Signed-off-by: leseb <seb@redhat.com>
The new product version has just come out. ICE doesn't exist anymore and
Red Hat Storage is the name of the new product.
Signed-off-by: leseb <seb@redhat.com>
* fix the Vagrantfile ruby check
* fix the variable positions
Bring in more mandatory variables and try to separate the Vagrant vars from
the playbook vars.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Now the Ceph REST API can be deployed.
The default implementation deploys it on the same nodes as the monitors,
which should be fine.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Still WIP, @mwheckmann feel free to test.
As requested in #162.
Current known issue: since ceph.conf gets modified during every single
run (at the end, during the merge), this will restart the ceph daemons.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Updating each group_vars file to reflect the content of each role's
default variables.
Closes: #169
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit implements a fourth scenario where we can directly use a
directory instead of a block device for the OSDs. This scenario is mostly
intended for testing. Please note that we do not check
the filesystem underneath the directory, so it is really up to you to
configure this properly. Declaring more than one directory on the
same filesystem will confuse Ceph.
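A sketch of what the group_vars for this scenario might look like (the variable names are assumptions):
```
# group_vars/osds (sketch): use plain directories instead of block devices;
# each directory should live on its own filesystem
osd_directory: true
osd_directories:
  - /var/lib/ceph/osd/mydir1
  - /var/lib/ceph/osd/mydir2
```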
Fixes: #14
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit introduces a new config option 'osd crush chooseleaf type'.
With the help of this option, and by setting it to '0', we tell Ceph to
store all the replicas on a single host. Basically we tell CRUSH to
iterate over disks and not over hosts.
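The resulting ceph.conf line is `osd crush chooseleaf type = 0`; exposed through a playbook variable it might look like this (the variable name is hypothetical):
```
# group_vars/all (sketch); renders as 'osd crush chooseleaf type = 0' in
# ceph.conf, letting CRUSH pick leaves at the OSD level instead of per host
osd_crush_chooseleaf_type: 0
```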
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
While trying to auto-provision with Vagrant, new disks get /dev/sdb and
so forth, so starting from /dev/sdd doesn't make sense.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
We introduced a key generation mechanism that aimed to ease deployment.
In the end, it brought more complexity to the playbook and doesn't
scale.
Reverting the auto generation commit and instructing users to generate
their own keys.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
As mentioned in issue #24, it's not really safe to store a default
fsid or a monitor key. Thus this commit brings the auto-generation of
the initial monitor key. However it is quite complex to do the same for
the fsid, so I leave this to the person in charge of the deployment to
generate one and edit group_vars/all accordingly. The default fsid has
been removed as well.
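Generating one is a single command; for example (the UUID below is obviously a placeholder):
```
# group_vars/all (sketch) -- set your own cluster fsid, e.g. the output of
# `uuidgen` run once on the ansible host
fsid: 00000000-0000-0000-0000-000000000000   # replace with your own UUID
```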
Close: #24
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Even if MDSes are not configured in site.yml, the playbook has a
dependency on the ceph.conf template.
This disables the mds section in the ceph.conf file.
Closes: #21
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>