We must pass the value of osd_scenario from the stable-3.2 branch, which
is used for the initial deployment.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This variable was related to ceph-disk scenarios.
Since we are entirely dropping ceph-disk support as of stable-4.0, let's
remove this variable.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit adds documentation for the lvm scenario and the deprecation
of the collocated and non-collocated scenarios.
Signed-off-by: Sébastien Han <seb@redhat.com>
osd_scenario has become obsolete and defaults to lvm. With lvm there is
no such thing as collocated and non-collocated.
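For reference, a minimal sketch of what an OSD configuration looks like with the lvm-only approach (the VG/LV names below are purely illustrative):
```yaml
# group_vars/osds.yml (illustrative values)
osd_objectstore: bluestore
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
  - data: data-lv2
    data_vg: data-vg2
```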
Signed-off-by: Sébastien Han <seb@redhat.com>
We no longer support preparing OSDs with ceph-disk; only ceph-volume is
supported. However, starting OSDs is still supported, so if you change a
config option, the handlers will be able to restart all the OSDs via
their respective systemd unit files.
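As a sketch of the kind of restart the handlers perform (the list of OSD ids here is a hypothetical variable, not the role's actual one):
```yaml
- name: restart ceph osd daemons (sketch)
  systemd:
    name: "ceph-osd@{{ item }}"
    state: restarted
  loop: "{{ osd_ids | default([]) }}"  # hypothetical list of OSD ids
```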
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
It's useful to have logs in debug mode enabled in order to have
more information for developers.
Also reindent the json file.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
We don't need to have multiple ceph-override.json copies. We
already have a symlink to all_daemons/ceph-override.json, so
we can do it for all scenarios.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
We don't need to use the cephfs variable for the application pool
name because it's always cephfs.
If the cephfs variable is set to something other than the default
value, it will break the application pool task.
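A sketch of the intended task, with the application name hardcoded to cephfs rather than templated from the cephfs variable (the task wording is illustrative; `cephfs_pools` is the role's existing pool list):
```yaml
- name: assign application to cephfs pools (sketch)
  command: "ceph --cluster {{ cluster | default('ceph') }} osd pool application enable {{ item.name }} cephfs"
  loop: "{{ cephfs_pools }}"
  changed_when: false
```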
Resolves: #3790
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
ceph-volume didn't work when the devices were passed by path.
Since it now supports it, let's allow this feature in ceph-ansible.
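For example, a sketch of what this allows in the OSD group_vars (the by-path entries below are placeholders):
```yaml
# group_vars/osds.yml (illustrative)
devices:
  - /dev/disk/by-path/pci-0000:00:17.0-ata-1
  - /dev/disk/by-path/pci-0000:00:17.0-ata-2
```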
Closes: #3812
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
As discussed in ceph/ceph#26599, beast is now the default frontend
for the rados gateway with the nautilus release.
Add the rgw_thread_pool_size variable with 512 as the default value and
keep backward compatibility with the num_threads option when using
civetweb.
Update radosgw_civetweb_num_threads to reflect the rgw_thread_pool_size
change.
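A minimal group_vars sketch of how these variables fit together (values are the defaults described above; radosgw_frontend_type is assumed to be the existing frontend selector):
```yaml
# group_vars/rgws.yml (illustrative)
radosgw_frontend_type: beast   # nautilus default; set to civetweb for the legacy frontend
rgw_thread_pool_size: 512      # replaces the civetweb num_threads option
```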
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The docker daemon is automatically started during package installation,
but the service isn't enabled on boot.
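A sketch of the kind of task that addresses this (standard Ansible systemd module usage; the task name is illustrative):
```yaml
- name: ensure docker is started and enabled on boot (sketch)
  systemd:
    name: docker
    state: started
    enabled: yes
```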
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
ceph aliases were introduced in stable-3.2 during the ceph
deployment. On master this has been removed, but we don't handle
this removal in the upgrade from stable-3.2 to master via the
rolling_update playbook.
Also remove the aliases in purge-docker-cluster, a task missed in
d9e7835.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The Ubuntu Cloud Archive-related (UCA) defaults in
roles/ceph-defaults/defaults/main.yml were commented out, which means
if you set `ceph_repository` to "uca", you get undefined variable
errors, e.g.
```
The task includes an option with an undefined variable. The error was: 'ceph_stable_repo_uca' is undefined
The error appears to have been in '/nfs/users/nfs_m/mv3/software/ceph-ansible/roles/ceph-common/tasks/installs/debian_uca_repository.yml': line 6, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: add ubuntu cloud archive repository
^ here
```
Unfortunately, uncommenting these results in some other breakage,
because further roles were written that use the fact of
`ceph_stable_release_uca` being defined as a proxy for "we're using
UCA", and so try to install packages from the bionic-updates/queens
release, for example, which doesn't work. So there are a few `apt` tasks
that need modifying to not use `ceph_stable_release_uca` unless
`ceph_origin` is `repository` and `ceph_repository` is `uca`.
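The shape of the guard those `apt` tasks need is roughly the following (the repository line is illustrative; the `when` conditions are the point):
```yaml
- name: add ubuntu cloud archive repository (sketch)
  apt_repository:
    repo: "deb {{ ceph_stable_repo_uca }} {{ ceph_stable_release_uca }} main"
    state: present
  when:
    - ceph_origin == 'repository'
    - ceph_repository == 'uca'
```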
Closes: #3475
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
When using the monitor_address_block or radosgw_address_block variables
to configure the mon/rgw address, we take the first IP address from the
ansible facts that falls within that CIDR.
When there's a VIP on that network, this first-match filter can return
the wrong value.
This seems to affect only IPv6 setups because the VIP addresses are
added to the ansible facts at the beginning of the list; with IPv4 they
are added at the end.
This causes the mon/rgw processes to bind to the VIP address.
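A rough sketch of the kind of lookup involved and why the first match can be a VIP (the fact name and filter chain are illustrative, not the exact template code):
```yaml
- name: pick the mon address from the configured block (sketch)
  set_fact:
    # filters the host's addresses against the CIDR and keeps the first
    # match; with IPv6 the VIP can end up first in the facts list
    _monitor_address: "{{ ansible_all_ipv6_addresses | ipaddr(monitor_address_block) | first }}"
```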
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680155
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The path of the RGW environment file (in the /var/lib/ceph/radosgw/
directory) depends on the Ceph cluster name. It was not taken into
account in the Ansible role `ceph-rgw`.
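For illustration, a sketch of taking the cluster name into account (the directory layout is assumed to follow the usual "{{ cluster }}-rgw.<hostname>" naming; the task itself is illustrative):
```yaml
- name: ensure the rgw environment directory uses the cluster name (sketch)
  file:
    path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_hostname }}"
    state: directory
    owner: ceph
    group: ceph
```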
Signed-off-by: flaf <francois.lafont.1978@gmail.com>
When mgrs are implicitly collocated on monitors (no mgrs in the mgrs
group), that include was skipped because of this condition:
`inventory_hostname == groups[mgr_group_name][0]`
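One way to express a condition that also covers the implicit collocation case (an illustrative sketch, not necessarily the exact fix applied in the role):
```yaml
# fall back to the first monitor when the mgrs group is absent or empty
when: inventory_hostname == (groups.get(mgr_group_name, []) or groups[mon_group_name]) | first
```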
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Before managing mgr modules, we must ensure all mgrs are available,
otherwise we can hit a failure like the following:
```
stdout:Error ENOENT: all mgr daemons do not support module 'restful', pass --force to force enablement
```
It happens because not all mgrs are available yet when trying to manage
mgr modules.
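A sketch of the kind of wait that addresses this (the `ceph mgr dump` command and its `available` field are real ceph interfaces; retry counts and the task wording are illustrative):
```yaml
- name: wait for the active mgr to be available (sketch)
  command: "ceph --cluster {{ cluster | default('ceph') }} mgr dump -f json"
  register: mgr_dump
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool
  changed_when: false
```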
Closes: #3100
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Even on master, force the release to be nautilus.
This scenario is failing because at multiple points it is actually
downgrading the ceph version.
It might happen that the latest-master image is older than what was
deployed in the first step of the scenario (the RPM deployment).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 41b0fa15ddf2a45402d17faa3bd1e817692fc1d2)
Add a tox scenario that adds a new MDS node as part of an already
deployed Ceph cluster and deploys an MDS there.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Instead of hardcoding group names, import ceph-defaults earlier. Also,
rectify a minor mistake in vagrant_variables.yml for the containerized
version of add_osds.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
In containerized deployments the default radosgw quota is too low
for production environments.
This is causing performance degradation compared to bare-metal.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680171
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
According to rdo testing https://review.rdoproject.org/r/#/c/18721
a check on the output of the ceph_health value is added to
allow the playbook to make several attempts (according to the
retry/delay variables) when waiting for the cluster quorum or
when the container bootstrap has not finished.
It avoids the command execution failing when it doesn't
receive a valid json object to decode (because the cluster is too
slow to bootstrap compared to the ceph-ansible task execution).
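The retry pattern described above looks roughly like this (the retry/delay variable names and the health command are assumptions; `quorum_names` is a real field of `ceph -s -f json`):
```yaml
- name: check ceph health / quorum (sketch)
  command: "ceph --cluster {{ cluster | default('ceph') }} -s -f json"
  register: ceph_health
  retries: "{{ health_mon_check_retries | default(5) }}"   # assumed variable name
  delay: "{{ health_mon_check_delay | default(15) }}"      # assumed variable name
  until: >
    ceph_health.stdout != '' and
    (ceph_health.stdout | from_json).quorum_names | length > 0
  changed_when: false
```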
Signed-off-by: fpantano <fpantano@redhat.com>
Add a script that retries several times to fire up VMs, in order to
avoid vagrant failures.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Andrew Schoen <aschoen@redhat.com>
The `lvm_volumes` and/or `devices` variable(s) can be undefined depending
on the scenario chosen.
These tasks should be run only if these variables are defined, otherwise
they end up with undefined variable errors.
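The needed guard is essentially the following (the task body here is a placeholder):
```yaml
- name: example task consuming the devices list (placeholder body)
  debug:
    msg: "{{ devices }}"
  when: devices is defined          # same idea applies to lvm_volumes
```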
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1653307
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since https://github.com/ceph/ceph/commit/77912c0 ceph-volume uses
stdout encoding based on the LC_CTYPE and PYTHONIOENCODING environment
variables.
Those variables aren't set when using ansible.
Currently that commit breaks non-containerized deployments on Ubuntu.
```
TASK [use ceph-volume to create bluestore osds] ********************
  cmd:
    - ceph-volume
    - --cluster
    - ceph
    - lvm
    - create
    - --bluestore
    - --data
    - /dev/sdb
  rc: 1
  stderr: |-
    Traceback (most recent call last):
    (...)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in
    position 132: ordinal not in range(128)
```
Note that the task is failing on the ansible side due to the stdout
decoding, but the osd creation is successful.
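A sketch of how the encoding can be pinned on the task invoking ceph-volume (the command mirrors the failing task above; whether the actual fix sets this in the module or on the task, the idea is the same):
```yaml
- name: use ceph-volume to create bluestore osds (sketch)
  command: ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
  environment:
    PYTHONIOENCODING: utf-8
```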
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
There's no need to test this on all scenarios.
Testing idempotency on all_daemons should be enough and allows us to save
precious resources for the CI.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The systemd ceph-osd@.service file used for starting the ceph osd
containers is used in all osd_scenarios.
Currently, purging a containerized deployment using the lvm scenario
doesn't remove the ceph-osd systemd service.
If the next deployment is a non-containerized deployment, the OSDs
won't come online because the file is still present and overrides the
one from the package.
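A sketch of the kind of cleanup the purge needs (the unit file path is the usual location for the container unit; the task wording is illustrative):
```yaml
- name: remove the containerized ceph-osd systemd unit file (sketch)
  file:
    path: /etc/systemd/system/ceph-osd@.service
    state: absent

- name: reload systemd after removing the unit (sketch)
  systemd:
    daemon_reload: yes
```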
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The update scenario has been moved to a dedicated tox ini file.
We shouldn't have any references to this scenario in the main tox ini
file.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Similar to #3658.
Since there are too many changes between master and the stable branches,
let's commit directly to each branch instead of trying to backport this
commit.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When installing python-minimal on Ubuntu bionic, this will add the
/usr/bin/python symlink to the default python interpreter.
On bionic, this isn't python2 but python3.
$ /usr/bin/python --version
Python 3.6.7
The python docker library is only installed for python2, which causes
issues when running the purge-docker-cluster playbook. This playbook
uses the ansible docker modules and requires the python bindings to be
installed on the remote host.
Without the bindings we see a python error reported by the docker
module:
msg: Failed to import docker or docker-py - No module named 'docker'.
Try `pip install docker` or `pip install docker-py` (Python 2.6)
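A sketch of the kind of remediation (the package name is assumed to be the bionic python3 binding; the right package depends on which interpreter ansible uses on the remote host):
```yaml
- name: install docker bindings for python3 (sketch)
  apt:
    name: python3-docker
    state: present
  when: ansible_distribution_release == 'bionic'
```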
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>