When the zonegroup or the zone doesn't have a realm associated, it's
not possible to modify that resource.
This patch retrieves the current realm id and compares it to the realm
id of the realm passed as a parameter.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Rerunning the cephadm_adopt module on an already adopted daemon will
fail because the cephadm adopt command isn't idempotent.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1918424
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Add the possibility to deploy an rgw multisite configuration with a mix
of secondary and primary zones on the same rgw node.
Before that, on the same node, all instances were either primary
zones *OR* secondary.
Now you can define an rgw instance like the following:
```
rgw_instances:
  - instance_name: 'rgw0'
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_realm: 'france'
    rgw_zonegroup: 'zonegroup-france'
    rgw_zone: paris-00
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    endpoint: http://192.168.101.12:8080
```
Basically it's now possible to define `rgw_zonemaster`,
`rgw_zonesecondary` and `rgw_zonegroupmaster` at the instance
level instead of the whole node level.
Also, this commit adds an option `deploy_secondary_zones` (default True)
which can be set to `False` in order to explicitly ask the playbook not
to deploy secondary zones in case the corresponding endpoints are not
deployed yet.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1915478
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This updates the grafana container tag to 6.7.4.
The RHCS version is now based on the RHCS 5 container image, which is
also based on 6.7.4.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When executing a command via the run_command method and passing some
data with stdin, the default behavior is to append a newline.
This breaks the password values used by our modules.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Remove duplicate fake_params parameter as it's already defined later
as a dict (instead of an empty list).
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Refactor this module due to recent changes in ceph pacific.
The password must be passed with the `-i` option.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When creating a new pool, target_size_ratio was ignored by the ansible
module ceph_pool.py.
target_size_ratio is now used when pg_autoscale_mode is on.
Tests added to the library tests.
This also adds its use in the ceph-rgw role.
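A minimal sketch of a pool created this way; the pool name is
illustrative, and whether `pg_autoscale_mode` takes 'on' or a boolean is
an assumption:
```
- name: create a pool sized by the pg autoscaler
  ceph_pool:
    name: rgw.buckets.data      # illustrative pool name
    pg_autoscale_mode: 'on'     # assumed value format
    target_size_ratio: 0.2
    application: rgw
```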
Signed-off-by: Fabien Brachere <fabien.brachere@celeste.fr>
This avoids interactive mode for `vagrant box remove`.
This can happen for some reason when there are leftovers from a previous
deployment (VMs not destroyed as expected).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Currently we create an object on the primary site but then try to read
it back from the master, which doesn't make sense; we should try to
read it from a secondary site.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since [1] has been resolved, we don't need to apply this workaround
anymore.
[1] https://tracker.ceph.com/issues/46759
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds cephadm_adopt ansible module for replacing the command module
usage with the cephadm adopt command.
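A minimal usage sketch, assuming the module mirrors the
`cephadm adopt --style legacy --name ...` flags with `name` and `style`
parameters:
```
- name: adopt the legacy mon daemon with cephadm
  cephadm_adopt:
    name: "mon.{{ ansible_facts['hostname'] }}"
    style: legacy   # assumed default adoption style
```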
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds ceph_crush_rule ansible module for replacing the command
module usage with the ceph osd crush rule commands.
This module can manage both erasure and replicated crush rules.
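A usage sketch for both rule types; the parameter names are assumed to
mirror the `ceph osd crush rule create-*` options:
```
- name: create a replicated rule on hdd devices
  ceph_crush_rule:
    name: replicated_hdd
    rule_type: replicated
    bucket_root: default
    bucket_type: host
    device_class: hdd     # uses create-replicated under the hood

- name: create an erasure rule from an existing profile
  ceph_crush_rule:
    name: ec_rule
    rule_type: erasure
    profile: default      # assumed erasure profile parameter
```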
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds cephadm_bootstrap ansible module for replacing the command module
usage with the cephadm bootstrap command.
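A minimal sketch, assuming the module exposes a `mon_ip` parameter
matching `cephadm bootstrap --mon-ip`:
```
- name: bootstrap the cluster on the first monitor
  cephadm_bootstrap:
    mon_ip: "{{ ansible_facts['default_ipv4']['address'] }}"
```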
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds ceph_osd_flag ansible module for replacing the command module
usage with the ceph osd set/unset commands.
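A sketch, assuming `state: present`/`absent` maps to the set/unset
commands:
```
- name: set the noout flag
  ceph_osd_flag:
    name: noout
    state: present

- name: unset the noout flag
  ceph_osd_flag:
    name: noout
    state: absent
```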
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds ceph_osd ansible module for replacing the command module
usage with the ceph osd destroy/down/in/out/purge/rm commands.
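A sketch, assuming the module takes a list of OSD ids and a `state`
matching the cli actions:
```
- name: mark two osds out
  ceph_osd:
    ids: [0, 1]
    state: out
```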
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds ceph_mgr_module ansible module for replacing the command module
usage with the ceph mgr module enable/disable commands.
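A sketch, assuming `state` accepts enable/disable like the cli:
```
- name: enable the prometheus mgr module
  ceph_mgr_module:
    name: prometheus
    state: enable   # assumed state choices: enable/disable
```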
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- The plugins/filter directory wasn't present in the flake8 workflow
configuration.
- Fix the flake8 syntax.
- Add the directory to the PYTHONPATH environment variable for pytest
to avoid importing the plugin filter via sys.
- Add a unit test covering a missing netaddr module import.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the module_utils and associated test directory into the flake8
and pytest workflow configuration.
It also moves the ca_common module_utils test file from tests/library to
its own directory, tests/module_utils.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- update `generate_ceph_cmd()` so `user_key` is automatically built from
`cluster` and `user` params.
- update and add testing.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds ceph_volume_simple_{activate,scan} ansible modules for replacing
the command module usage with the ceph-volume simple activate/scan commands.
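A sketch of the scan/activate sequence; parameter names are assumed and
the osd fsid is illustrative:
```
- name: scan a legacy osd data directory
  ceph_volume_simple_scan:
    path: /var/lib/ceph/osd/ceph-0

- name: activate the scanned osd
  ceph_volume_simple_activate:
    osd_id: 0
    osd_fsid: a7f64066-b8d2-4026-9b1d-7b0d5ad103c5   # illustrative fsid
```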
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Adding a monitor is no longer possible because we generate a new mon
keyring each time the playbook is run.
Fixes: #5864
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a new `module_utils` namespace in order to avoid defining the
same functions in each module.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ceph_key module currently only supports the json output for the
info state.
When using this state on an entity we sometimes want the output:
- as plain, for copying it to another node.
- as json, in order to get only a subset of the entity's information
(like the key or caps).
This patch adds the output_format parameter, which uses json as the
default value for backward compatibility. It also removes the internal
hardcoded variable of the same name.
In addition to json and plain, xml and yaml output values are also
available.
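A sketch of fetching a keyring in plain format for copying to another
node; the entity name is illustrative:
```
- name: get the keyring in plain format
  ceph_key:
    name: client.rgw.myrgw-node.rgw123   # illustrative entity
    state: info
    output_format: plain
  register: keyring_info
```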
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
`all_daemons` scenario can't handle pools with `size: 3` because we have
1 osd node in root=HDD and two nodes in root=default.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds radosgw_zone ansible module for replacing the command module
usage with the radosgw-admin zone command.
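A sketch reusing the multisite values from the earlier example;
parameter names (realm, zonegroup, endpoints, keys) are assumed:
```
- name: create the paris-00 zone
  radosgw_zone:
    name: paris-00
    realm: france
    zonegroup: zonegroup-france
    endpoints:
      - http://192.168.101.12:8080
    access_key: P9Eb6S8XNyo4dtZZUUMy
    secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
```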
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds radosgw_zonegroup ansible module for replacing the command
module usage with the radosgw-admin zonegroup command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds radosgw_realm ansible module for replacing the command module
usage with the radosgw-admin realm command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds radosgw_user ansible module for replacing the command module
usage with the radosgw-admin user command.
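A sketch creating the multisite system user from the earlier example;
parameter names are assumed:
```
- name: create the zone system user
  radosgw_user:
    name: jacques.chirac
    display_name: "Jacques Chirac"
    access_key: P9Eb6S8XNyo4dtZZUUMy
    secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    realm: france
    zonegroup: zonegroup-france
    zone: paris-00
    system: true   # assumed flag for a system (multisite) user
```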
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the ceph_fs ansible module for replacing the command module
usage with the ceph fs command.
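A minimal sketch, assuming `data` and `metadata` take pool names:
```
- name: create a cephfs filesystem
  ceph_fs:
    name: cephfs
    data: cephfs_data
    metadata: cephfs_metadata
    max_mds: 2
```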
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
tests/conftest.py and the tests present in tests/functional/tests/ were
missed in the previous commit.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Currently the `ceph_key` module doesn't support using a different
keyring than `client.admin`.
This commit adds the possibility to use a different keyring.
Usage:
```
ceph_key:
  name: "client.rgw.myrgw-node.rgw123"
  cluster: "ceph"
  user: "client.bootstrap-rgw"
  user_key: /var/lib/ceph/bootstrap-rgw/ceph.keyring
  dest: "/var/lib/ceph/radosgw/ceph-rgw.myrgw-node.rgw123/keyring"
  caps:
    osd: 'allow rwx'
    mon: 'allow rw'
  import_key: False
  owner: "ceph"
  group: "ceph"
  mode: "0400"
```
Where:
`user` corresponds to `-n (--name)`
`user_key` corresponds to `-k (--keyring)`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This job is redundant with the 'collocation' job.
The only difference is osd/rgw collocation, so let's add this use case
to 'collocation'.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 19d683d7acfb5344b38ac1ba4c123dcdd4d80f35)
This reverts commit 7348e9a253.
The nfs-ganesha rpm build for CentOS 8 has been fixed, and the
nfs-ganesha segfault caused by an issue in librgw has also been
fixed.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit changes default values in the default pool definitions.
There's no need to define `pg_num`, `pgp_num`, `size` and `min_size`;
the `ceph_pool` module will use the current defaults if needed.
This also drops the 3 following `set_fact` in `ceph-facts`:
- osd_pool_default_pg_num,
- osd_pool_default_pgp_num,
- osd_pool_default_size_num
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This changes the default value of the grafana-server group name.
Some tasks are added in ceph-defaults in order to keep backward
compatibility.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The `enable extras on centos` task just doesn't work when setting the
ceph_docker_enable_centos_extra_repo variable to true.
fatal: [xxx]: FAILED! => {"changed": false, "msg": "Parameter
'baseurl', 'metalink' or 'mirrorlist' is required."}
The CentOS extras repository is enabled by default so it's pretty
safe to remove this task and the associated variable.
This also removes the ceph_docker_on_openstack variable as it's an
unused leftover.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Looks like nfs-ganesha 3.3 and 4-dev don't work with recent changes
in librgw 16.0.0.
The nfs-ganesha daemon is segfaulting and restarting in a loop.
See https://tracker.ceph.com/issues/47520
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the ceph_dashboard_user ansible module for replacing the
command module usage with the ceph dashboard ac-user-xxx command.
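A sketch, assuming `roles` takes a list as with
`ceph dashboard ac-user-set-roles`; the user name and password variable
are hypothetical:
```
- name: create a read-only dashboard user
  ceph_dashboard_user:
    name: viewer                              # hypothetical user
    password: "{{ dashboard_viewer_password }}"  # hypothetical variable
    roles:
      - read-only
```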
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Even if the non containerized collocation scenario deploys ceph with
RPMs, we still deploy the dashboard/monitoring, but with containers.
This requires setting the registry variable to ceph's quay.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This changes the grafana container image registry from docker.io to
quay.io to avoid rate limiting.
This also adds the missing container image values for docker2podman
and podman scenarios.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit disables nfs-ganesha testing on master for non-containerized
deployment because the dev repos are broken at the moment.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This re-adds the ceph-iscsi testing for both non containerized and
containerized deployments since the rados connection error on ceph
dev has been fixed [1].
[1] https://tracker.ceph.com/issues/47002
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit moves the erasure pool creation testing from `all_daemons`
to `lvm_osds` so we can decrease the number of osd nodes we spawn and the
OVH Jenkins slaves are less overwhelmed when an `all_daemons` based
scenario is being tested.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since we've dropped ubuntu testing, we don't need these inventories
anymore. Let's remove this leftover.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Temporarily disable iscsigw testing for containerized deployments
because it's broken upstream on ceph@master.
Non-containerized deployments use a stable build for iscsigw to get
around this issue.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since it is broken at the moment with dev repos, let's test against
stable builds so the CI is unlocked.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Otherwise we see some pytest warnings.
PytestUnknownMarkWarning: Unknown pytest.mark.ceph_crash - is this a typo?
You can register custom marks to avoid this warning - for details,
see https://docs.pytest.org/en/latest/mark.html
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Due to [1], ceph-volume now has a dependency on pyyaml but it's not
installed by default via the package dependencies.
This patch only adds the required package on non containerized
deployments, as a temporary workaround for the CI.
[1] https://tracker.ceph.com/issues/46759
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When rerunning lvm_setup.yml on an existing cluster with OSDs already
deployed, it fails like the following:
```
fatal: [osd0]: FAILED! => changed=false
msg: Sorry, no shrinking of data-lv2 to 0 permitted.
```
because we are asking the `lvol` module to create a volume on an empty
VG with size extents = `100%FREE`.
The default behavior of `lvol` is to shrink the volume if the LV's current
size is greater than the requested size.
Given the requested size is calculated like this:
`size_requested = size_percent * this_vg['free'] / 100`
in our case, it is similar to:
`size_requested = 100 * 0 / 100`, which basically means `0`.
So the current LV size is greater than the requested size, which leads
the module to attempt to shrink it to 0, which obviously isn't
allowed.
Adding `shrink: false` to the module calls fixes this issue.
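The fix, sketched against the ansible `lvol` module; the vg/lv names
are the ones used in lvm_setup.yml but illustrative here:
```
- name: create data-lv2 with the remaining free space
  lvol:
    vg: test_group      # illustrative vg name
    lv: data-lv2
    size: 100%FREE
    shrink: false       # never shrink an existing LV on rerun
```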
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit introduces a new role `ceph-crash` in order to deploy
everything needed for the ceph-crash daemon.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a new playbook for deploying ceph via cephadm.
This also adds a new dedicated tox file for CI purpose.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit pins the installed pytest-rerunfailures to <9.0.
This is to avoid the following error:
```
ERROR: pytest-rerunfailures 9.0 has requirement pytest>=5.0, but you'll have pytest 4.6.11 which is incompatible.
```
The latest version of pytest-rerunfailures isn't compatible with the
version of pytest we are using.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit makes the zap function idempotent, especially when using
the lvm_volumes variable.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1845668
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
ansible 2.9.10 seems to have introduced a bug.
See https://github.com/ansible/ansible/issues/70168
This commit excludes this version from ceph-ansible requirements.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since we only have one scenario since nautilus, we can just move
the container start command from ceph-osd-run.sh to the systemd unit
service.
As a result, the ceph-osd-run.sh.j2 template and the
ceph_osd_docker_run_script_path variable are removed.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This fixes a long-standing failure in ceph-volume's lvm test suite.
Otherwise the default behaviour should not change.
Signed-off-by: Jan Fajerski <jfajerski@suse.com>
Setting attributes with an empty string is bad user input.
Also, remove the `rule_name` attribute when creating an erasure-coded
pool (this rule isn't intended for the erasure-coded pool type).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
In containerized deployments, the ceph_volume module always uses
the same container command prefix for all actions.
Instead of duplicating this code in all container tests, we can define
it once.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The dashboard nodes (alertmanager, grafana, node-exporter, and prometheus)
were not managed during the docker to podman migration.
This adds the systemd container template of those services to a dedicated
file (systemd.yml) in order to include it in the docker2podman playbook.
This also adds the dashboard container images pull from docker to podman.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1829389
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The CentOS 7 distribution could still be used for deploying ceph if
- it's a containerized deployment
- it's a non containerized deployment without the dashboard (due to
missing python3 libraries).
The ceph_stable_redhat_distro variable has been removed because we can
rely on the ansible_distribution_major_version fact instead.
The copr el8 repository configuration is only applied for CentOS 8.
The ceph-mgr-dashboard package is only installed when the
dashboard_enabled variable is set to true.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The current centos/8 vagrant image (libvirt) is still using the
CentOS 8.0 release (1905) while the 8.1 release (1911) has already been
available for a few months.
Using an updated CentOS 8 release fixes slow ceph-volume/lvm commands.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
With this change, the state `present` is enough to update a keyring.
If the keyring already exists, it will be updated if the caps or secret
passed to the module are different.
If the keyring doesn't exist, it will be created.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1808367
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since Ceph Octopus is python3 only we don't need to specify the max open
files anymore with the container engine.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When using the lvm batch ceph-volume subcommand with dedicated devices
for filestore (journal) or bluestore (db/wal), the list of devices
is converted to a string instead of being extended via an iterable.
This was working with only one dedicated device, but starting with more
the ceph_volume module fails.
```
TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] **
fatal: [xxxxxx]: FAILED! => changed=true
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - --osds-per-device
  - '4'
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  - /dev/nvme5n1
  - /dev/nvme6n1
  - --db-devices
  - /dev/nvme0n1 /dev/nvme1n1
  - --report
  - --format=json
  msg: non-zero return code
  rc: 2
  stderr: |2-
     stderr: lsblk: /dev/nvme0n1 /dev/nvme1n1: not a block device
     stderr: error: /dev/nvme0n1 /dev/nvme1n1: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
                                 [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
                                 [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
                                 [--no-auto] [--bluestore] [--filestore]
                                 [--report] [--yes] [--format {json,pretty}]
                                 [--dmcrypt]
                                 [--crush-device-class CRUSH_DEVICE_CLASS]
                                 [--no-systemd]
                                 [--osds-per-device OSDS_PER_DEVICE]
                                 [--block-db-size BLOCK_DB_SIZE]
                                 [--block-wal-size BLOCK_WAL_SIZE]
                                 [--journal-size JOURNAL_SIZE] [--prepare]
                                 [--osd-ids [OSD_IDS [OSD_IDS ...]]]
                                 [DEVICES [DEVICES ...]]
    ceph-volume lvm batch: error: Unable to proceed with non-existing device: /dev/nvme0n1 /dev/nvme1n1
```
So the dedicated device list is considered as a single string.
This commit also adds the journal_devices, block_db_devices and
wal_devices documentation to the ceph_volume module.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816713
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since 15ed9ee the ceph-mgr daemon binds to the IP address on the public
network instead of binding to all addresses.
This commit updates the testinfra code to reflect that change.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Unregistered marks generate warnings like:
PytestUnknownMarkWarning: Unknown pytest.mark.docker - is this a typo?
You can register custom marks to avoid this warning
https://docs.pytest.org/en/latest/mark.html
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit allows one to set the role for the admin user as read-only.
This can be controlled via the dashboard_admin_user_ro variable but the
default value is false for backward compatibility.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1810176
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since [1] we can't use an osd pool without replicas (size: 1) by default.
We now need to set the mon_allow_pool_size_one flag to true in the ceph
configuration and add the --yes-i-really-mean-it flag to the osd pool
set size cli.
[1] https://github.com/ceph/ceph/commit/21508bd
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Make it so that more than one realm, zonegroup,
or zone can be created during a run of the rgw
multisite ansible playbooks.
The rgw hosts now need to be grouped into zones
and realms in the inventory.
.yml files need to be created in group_vars
for the realms and zones. Sample yaml files
are available.
Also remove the multisite destroy playbook
and add --cluster before radosgw-admin commands.
Remove the manually added rgw_zone_endpoints var
and have ceph-ansible automatically add the
correct endpoints of all the rgws in a rgw_zone
from the information provided in each rgw's hostvars.
Signed-off-by: Ali Maredia <amaredia@redhat.com>
This commit changes the value passed for the attribute 'rule_name' in
the openstack_pools definition. It doesn't make sense to have an empty
string as the passed value here.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit makes the CI test erasure-coded OSD pool creation, following
the recent refactor of the OSD pool creation tasks in the playbook.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Looks like we are still seeing issue [1].
Let's increase this value to unlock the CI (however, it still needs to
be investigated).
Typical error (see [1] for further details):
```
[root@osd2 ~]# ceph-volume --cluster ceph lvm batch --filestore --yes --journal-size '2048' /dev/sda /dev/sdb --journal-devices /dev/sdc
Running command: /sbin/vgcreate --force --yes ceph-journals-817ef90b-77ac-4f52-b8a9-30893849fb78 /dev/sdc
stdout: Physical volume "/dev/sdc" successfully created.
stdout: Volume group "ceph-journals-817ef90b-77ac-4f52-b8a9-30893849fb78" successfully created
--> Refusing to continue with configured size for journal
--> RuntimeError: journal sizes must be larger than 2GB, detected: 1024.00 MB
```
[1] https://tracker.ceph.com/issues/41374
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since [1], if an rgw user already exists then the radosgw-admin user
create command will return an error instead of modifying the current
user.
We were already using separate tasks for the create and get operations,
but only for the multisite configuration, and that's not enough.
Instead we should run the get task first and, depending on the result,
execute the create.
This commit also adds missing run_once and delegate_to statement.
[1] https://github.com/ceph/ceph/commit/269e9b9
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When osd_auto_discovery is set, we need to refresh the
ansible_devices fact after the filestore OSD purge,
otherwise the devices fact won't be populated.
Also remove the gpt header on ceph_disk_osds_devices because
the devices list is empty at this point for osd_auto_discovery.
Also add the bool filter where needed.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1729267
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
We still need --destroy when using a raw device otherwise we won't be
able to recreate the lvm stack on that device with bluestore.
```
Running command: /usr/sbin/vgcreate -s 1G --force --yes ceph-bdc67a84-894a-4687-b43f-bcd76317580a /dev/sdd
 stderr: Physical volume '/dev/sdd' is already in volume group 'ceph-b7801d50-e827-4857-95ec-3291ad6f0151'
  Unable to add physical volume '/dev/sdd' to volume group 'ceph-b7801d50-e827-4857-95ec-3291ad6f0151'
  /dev/sdd: physical volume not initialized.
--> Was unable to complete a new OSD, will rollback changes
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1792227
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The CentOS cloud infrastructure storing the vagrant CentOS 8 image
changed the directory path and removed the old 8.0 image, so the vagrant
box add centos/8 command fails with a 404 http error.
As a workaround we can pull the image from CentOS directly instead of
letting vagrant do the resolution.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The nobarrier mount flag doesn't exist anymore on XFS in the EL 8
kernel. That's why the task wasn't working on those systems.
We can still use the other options instead of skipping the task.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Some docker commands were hardcoded in tests playbooks and some
conditions only checked the atomic fact instead of the
containerized_deployment variable.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit adds a new job in order to test the
filestore-to-bluestore.yml infrastructure playbook.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Having the max_mds value equal to the number of mds nodes generates a
warning in the ceph cluster status:
```
  cluster:
    id:     6d3e49a4-ab4d-4e03-a7d6-58913b8ec00a
    health: HEALTH_WARN
            insufficient standby MDS daemons available
(...)
  services:
    mds: cephfs:3 {0=mds1=up:active,1=mds0=up:active,2=mds2=up:active}
```
Let's use 2 active and 1 standby mds.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The current ceph cluster health is in a warning state:
```
health: HEALTH_WARN
        13 pool(s) have no replicas configured
        2 pool(s) have non-power-of-two pg_num
```
Because we're using only 1 replica, we need to disable the redundancy
check.
The pool pg num should be a power of two number (like 16).
This adds device class support to crush rules when using the class key
in the rule dict via the create-replicated sub command.
If the class key isn't specified then we use the create-simple sub
command for backward compatibility.
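A sketch of rule dicts using the class key; the format is assumed from
the crush_rules variable and the rule names are illustrative:
```
crush_rules:
  - name: replicated_hdd_rule
    root: default
    type: host
    class: hdd        # class key present -> create-replicated
  - name: simple_rule
    root: default
    type: host        # no class key -> create-simple
```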
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1636508
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit adds a playbook to be played before we run the purge
playbook: it first creates an rbd image, then maps an rbd device on
client0, so the purge playbook will try to unmap it.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ansible ssh connections now use the ssh backend instead of
paramiko starting with testinfra 3.1, along with persistent connections.
pytest 4.6 is the latest release to be supported by python 2.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
To avoid unnecessary ansible warnings during playbook execution we can
move the library and plugins test files under a different directory.
[WARNING]: Skipping plugin (plugins/filter/test_ipaddrs_in_ranges.py) as
it seems to be invalid:
cannot import name 'ipaddrs_in_ranges'
Closes: #4656
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit removes the backslash in allow command parameter, this was
needed before the ceph_key module integration.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
It doesn't make sense to test the old 3.0.x container images with
nautilus+ ceph releases.
Also disable the dashboard deployment and switch to bluestore backend.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The commit replaces the pv/vg/lv commands, used via the ansible command
module, with the lvg and lvol modules.
This also fixes the size of the second data LV: we were only using
50% of the remaining space instead of 100%.
With a 50G device, the result was:
- data-lv1 was 25G
- data-lv2 was 12.5G
Instead of:
- data-lv1 was 25G
- data-lv2 was 25G
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The current ceph-validate role uses both validate action and fail
module tasks to validate the ceph configuration.
The validate action is based on the notario python library. When one of
the notario validations fails, a python stack trace is reported to the
ansible task. This output isn't understandable by users.
This patch removes the validate action and the notario dependency. The
validation is now done using only the fail ansible module.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654790
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The secondary vagrant variables didn't have the grafana vm variable
set, which creates a vagrant error.
There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
an invalid or undefined variable.
This patch also changes the ssh-extra-args parameter to ssh-common-args
to get the same values for ssh/sftp/scp. Otherwise we can see warnings
from ansible and some tasks fail.
[WARNING]: sftp transfer mechanism failed on [mon0]. Use ANSIBLE_DEBUG=1
to see detailed information
It also updates the ssh-common-args value for the rgw-multisite scenario
to reflect the ANSIBLE_SSH_ARGS environment variable value.
Finally, the IP addresses are changed due to the Vagrant refactor done
in commit 778c51a.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This was added for debugging purposes.
It generates very large log output; let's remove it now.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>