is_atomic is defined in ceph-facts or very early in the main playbook.
In a non-containerized deployment, is_atomic is only set in ceph-facts,
which is played after ceph-validate.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
to avoid duplicating code in `site.yml.sample`, `site-docker.yml.sample`
and `setup.yml`, let's isolate this part of the code and simply include
it each time we need it.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
configure_firewall tests the is_atomic variable if the firewalld package
is not present. is_atomic is defined in ceph-facts, so include that role.
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
This condition is useless and it's also creating issues we don't see in
our CI. ceph_release is set by either ceph-common or ceph-docker-common
so let's keep it this way.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add `any_errors_fatal: true` to all host sections in `site.yml.sample`
and `site-container.yml.sample` so that the playbook execution stops
immediately when an error occurs.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
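Below is a minimal sketch of what one of those host sections looks like with the flag set; the group and role names are illustrative, not the exact contents of the sample playbooks.
```
- hosts: mons
  any_errors_fatal: true
  become: true
  tasks:
    - import_role:
        name: ceph-defaults
```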
sometimes we play the whole role `ceph-defaults` just to access the
default value of some variables. It means we play the `facts.yml` part
of this role even when that's not desired. Splitting this role will
speed up the playbook.
Closes: #3282
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since sharing variables amongst roles has been the default behaviour
since Ansible 2.6, the private option has been deprecated; so stop using it.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This will speed up the deployment and also deploy mon and mgr
collocated, as recommended.
This won't prevent you from adding more, dedicated machines for mgr if
needed.
Signed-off-by: Sébastien Han <seb@redhat.com>
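A rough sketch of a YAML inventory that collocates the mgrs with the mons while still leaving room for dedicated mgr machines; the host names are made up for illustration.
```
all:
  children:
    mons:
      hosts:
        mon0:
        mon1:
        mon2:
    mgrs:
      children:
        mons:   # the mgr daemons land on the monitor machines
```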
We need to apply any_errors_fatal: true to every play so it can take
effect, not only on the initial pass. With this flag, any error in the
playbook will cause the playbook to stop.
Signed-off-by: Sébastien Han <seb@redhat.com>
Since import_role and include_role are more readable, more explicit
(about the nature of the inclusion) and more flexible (they allow placing
the inclusion anywhere amongst the tasks), use them instead of the roles
keyword. Besides, these keywords also accept more arguments.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
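For illustration, the change amounts to this kind of rewrite (group and role names are placeholders):
```
# before: the roles keyword, whose inclusion point is implicit
- hosts: mons
  roles:
    - ceph-defaults

# after: an explicit task that can be placed anywhere in the task list
- hosts: mons
  tasks:
    - import_role:
        name: ceph-defaults
```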
As of now, we should no longer support Jewel in ceph-ansible.
The latest ceph-ansible release supporting Jewel is `stable-3.1`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If someone's cluster name is 'ceph' then the playbook will fail (silently,
because of ignore_errors) saying it cannot find the variable. So
let's declare the default. If the cluster name is different then it'll
be in group_vars and thus there won't be any failure.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636962
Signed-off-by: Sébastien Han <seb@redhat.com>
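A minimal sketch of declaring such a default, assuming the variable is simply named `cluster`; the play shown here is illustrative, not the one touched by this change.
```
- hosts: osds
  gather_facts: false
  vars:
    cluster: ceph   # default cluster name; a different name would come from group_vars
  tasks:
    - debug:
        msg: "operating on cluster {{ cluster }}"
```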
The role contains all the handlers for Ceph services. We decided to
leave ceph-defaults role with variables and a few facts only. This is
useful when organizing the site.yml files and also adding the known
variables to infrastructure-playbooks.
Signed-off-by: Sébastien Han <seb@redhat.com>
Same problem again... ceph_release_num[ceph_release] is only set in
ceph-docker-common/common roles so putting the condition on that role
will never work. Removing the condition.
The downside of this is that we will install packages and then skip
the role on the node.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622210
Signed-off-by: Sébastien Han <seb@redhat.com>
Recently we renamed the group_name for iscsi to iscsigws where previously
it was named iscsi-gws. Existing deployments with an iscsi-gws section in
their host file must continue to work.
This commit adds the old group name back for backward compatibility; no
error from Ansible should be expected: if the host group is not found,
nothing is played.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1619167
Signed-off-by: Sébastien Han <seb@redhat.com>
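A hedged sketch of what the backward-compatibility play can look like; if the inventory has no `iscsi-gws` section, the play simply matches no hosts and Ansible carries on (the role name here is assumed).
```
- hosts: iscsi-gws   # legacy group name, kept so existing host files keep working
  gather_facts: false
  become: true
  tasks:
    - import_role:
        name: ceph-iscsi-gw
```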
The ceph-rest-api binary has been removed in mimic so we cannot deploy it
anymore. We just keep the role and the compatibility for existing users.
Signed-off-by: Sébastien Han <seb@redhat.com>
Check `systempython2.stat` instead of `systempython2.stat.exists`.
Without this change, in the case that python2 is not installed, the `stat`
task fails without defining `systempython2.stat`. This causes the next
installation tasks to fail because `systempython2.stat` is undefined.
An example error output (edited for readability):
```
TASK [check for python2] ***********************************************
Wednesday 25 July 2018 14:52:47 +0900 (0:00:00.182) 0:00:00.182 *
fatal: [ceph-osd1.vlan221.vtj]: FAILED! => {
"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not
found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
...ignoring
TASK [install python2 for debian based systems] ************************
Wednesday 25 July 2018 14:51:00 +0900 (0:00:01.742) 0:00:01.926 *
fatal: [ceph-mon2]: FAILED! => {
"msg": "The conditional check 'systempython2.stat.exists is undefined or
systempython2.stat.exists == false' failed. The error was: error while
evaluating conditional (systempython2.stat.exists is undefined or
systempython2.stat.exists == false): 'dict object' has no attribute 'stat'
\n\n The error appears to have been in
'/Users/arata/git/ceph-ansible/site.yml.sample': line 36, column 7, but
may\n be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
- name: install python2 for debian based systems\n
^ here\n
"}
...ignoring
```
Fixes: #2930
Signed-off-by: Arata Notsu <arata776@gmail.com>
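The fixed check roughly looks like this (path and package name are illustrative); the point is to test `systempython2.stat`, which is missing entirely when the stat task itself failed.
```
- name: check for python2
  stat:
    path: /usr/bin/python
  ignore_errors: yes
  register: systempython2

- name: install python2 for debian based systems
  raw: apt-get -y install python-simplejson
  ignore_errors: yes
  when:
    - systempython2.stat is undefined or systempython2.stat.exists == false
```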
Let's try to avoid using dashes as testinfra needs to be able to read
the groups.
Typically, with iscsi-gws we can't add a marker for these iscsi nodes;
using an underscore fixes the issue.
Signed-off-by: Sébastien Han <seb@redhat.com>
This needs to be in its own play with ceph-defaults included
so that I can validate things that might be defaulted in that
role.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
there is no need to gather facts in an O(N^2) way.
Only one node should gather facts from the other nodes.
Fixes: #2553
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
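A sketch of the delegated fact gathering, assuming the first host in the play collects facts for everyone via `run_once`:
```
- hosts: all
  gather_facts: false
  tasks:
    - name: gather facts for all nodes from a single host
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['all'] }}"
      run_once: true
```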
This was taken from the openshift ansible repository here:
https://github.com/leseb/openshift-ansible/tree/master/roles/installer_checkpoint
Rationale:
A complete OpenShift cluster installation is comprised of many different
components which can take 30 minutes to several hours to complete. If
the installation should fail, it could be confusing to understand at
which component the failure occurred. Additionally, it may be desired to
re-run only the component which failed instead of starting over from the
beginning. Components which came after the failed component would also
need to be run individually.
Ceph has a similar situation so we can benefit from that
callback_plugin.
Signed-off-by: Sébastien Han <seb@redhat.com>
Now, by running the playbook like this:
`ansible-playbook site.yml --tags='ceph_update_config'`
you can generate only a ceph configuration file on the nodes.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1543434
Signed-off-by: Sébastien Han <seb@redhat.com>
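This works because the config-related role import carries that tag; a hedged sketch (the role name is an assumption here), relying on the fact that tags on import_role are inherited by all the tasks of the role:
```
- hosts: mons
  become: true
  tasks:
    - import_role:
        name: ceph-config
      tags: ceph_update_config
```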
openSUSE Leap 42.3 provides support for Ceph Luminous in both the
distribution package and the latest available version in the OBS
repository so add these as the only available installation methods for
openSUSE.
Signed-off-by: Markos Chandras <mchandras@suse.de>
Since we encountered an issue with this on Ansible 2.2, this commit
provides the ability to enable or disable it depending on which Ansible
version we are running.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This patch changes the `when:` keys so that they have no jinja2
delimiters, so that the conditions can still be evaluated later. This
avoids Ansible warnings which could turn into errors in a future
Ansible release.
Also fix the variable: we are comparing ceph_release, not
ceph_stable_release.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1486062
Signed-off-by: Sébastien Han <seb@redhat.com>
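For illustration, here is a task-level sketch of the rewrite (the condition itself is just an example):
```
# before: jinja2 delimiters inside when, which triggers the warning
- debug:
    msg: "running luminous or newer"
  when: "{{ ceph_release_num[ceph_release] >= ceph_release_num.luminous }}"

# after: a bare expression, no delimiters
- debug:
    msg: "running luminous or newer"
  when: ceph_release_num[ceph_release] >= ceph_release_num.luminous
```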
we don't know ceph_stable_release before executing the role so at least
we need to run ceph-defaults and ceph-docker-common or
ceph-common.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1486062
Signed-off-by: Sébastien Han <seb@redhat.com>
If we don't bootstrap the mgr after the mon and osd handlers are
called, we will never be able to reach a clean state since the pg
stats are handled by the mgr. This also happens when doing daemon
collocation.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1493920
Signed-off-by: Sébastien Han <seb@redhat.com>
Before this patch we couldn't use --limit properly to only interact with
a particular set of hosts. We basically always required the ceph-mon
role to be played to properly get facts and then build the ceph.conf.
Now, the current running host will get the facts from the machines that
are not part of the current play. This is achieved with the help of the
new option delegate_facts, for more info see:
http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482067
Signed-off-by: Sébastien Han <seb@redhat.com>
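A sketch of how delegated facts can be used here, assuming the monitors are the hosts whose facts are needed and may sit outside the current play; the `clients` group is illustrative.
```
- hosts: clients
  gather_facts: true
  tasks:
    - name: pull facts from the mon nodes that are not part of this play
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ groups['mons'] | default([]) | difference(ansible_play_hosts) }}"
      run_once: true
```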
This will give us more flexibility and the possibility to deploy a client node
for an external ceph-cluster.
related BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1469426
Fixes: #1670
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This will give us more flexibility and avoid a lot of useless `when`
conditions that skip all the tasks of a non-desired role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We ship ceph-iscsi-gw in a separate repo downstream and do not package
it with ceph-ansible. Including the play for ceph-iscsi-gw in
site.yml.sample makes the playbook fail when using the downstream
packages.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1454945
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to
provide additional monitoring and interfaces to external monitoring and
management systems.
Only works as of the Kraken release.
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
This avoids an issue where, if you're using tags to run the playbook,
no facts are gathered, resulting in subsequent tasks that depend on
them failing.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Some modern distros ship without python2 so we need to install it blindly,
before any module is run. Fedora > 23 also requires some additional
packages to be installed.
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
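A rough sketch using the `raw` module, which works over plain SSH before Python is available; the package names for Fedora and Debian are assumptions, not the exact list used here.
```
- hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: install python2 blindly, before any regular module runs
      # package names below are illustrative for Fedora > 23 and Debian-based systems
      raw: test -e /usr/bin/python || (dnf install -y python2 python2-dnf || apt-get install -y python-minimal)
      changed_when: false
```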
Ceph has the ability to export its filesystem via NFS using Ganesha.
Add a ceph-nfs role that will start Ganesha and export the Ceph
filesystems.
Note that, although support is going in to export RGW via NFS, this is
not working yet.
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
By default, this role will create a ceph config file and get the admin
key. You can optionally add other users, keys and pools for your tests.
Closes: #769
Signed-off-by: Sébastien Han <seb@redhat.com>
Mainly because some packages are still not there upstream, which will
break installs that would otherwise succeed.
Signed-off-by: Alfredo Deza <adeza@redhat.com>
run containerized daemons in virtual machines.
to enable it, simply do:
`cp site-docker.yml.sample site-docker.yml`
and set `docker: true` in `vagrant_variables.yml`
Signed-off-by: Sébastien Han <seb@redhat.com>