Commit Graph

6014 Commits (a9d1ec844d24fcc3ddea7c030eff4cd6c414d23d)

Guillaume Abrioux 31311b03ed cephadm-adopt/rgw: add host target in svc_id
If multiple realms were deployed with several instances belonging to the same
realm and zone using the same port on different nodes, the service id
expected by cephadm would be the same, so only one service would be
deployed. We need to create a service called
`<node>.<realm>.<zone>.<port>` to make sure the service name is unique
and deployed on the expected node, in order to preserve backward
compatibility with the rgw instances that were deployed with
ceph-ansible.
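
Under that scheme, a cephadm RGW service spec would look roughly like the
sketch below; the realm/zone/port values are made up for illustration, and
only the service_id naming scheme comes from this commit:

```yaml
service_type: rgw
service_id: node1.myrealm.myzone.8080   # <node>.<realm>.<zone>.<port>
placement:
  hosts:
    - node1
spec:
  rgw_realm: myrealm
  rgw_zone: myzone
  rgw_frontend_port: 8080
```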

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-29 14:41:09 +02:00
Dimitri Savineau fc160b3be1 switch2container: run ceph-validate role
This adds the ceph-validate role before starting the switch to a containerized
deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1968177

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-06-28 18:06:53 +02:00
Wong Hoi Sing Edison 793d529302 library/ceph_key.py: rewrite for generate_ceph_cmd()
Also lint the code with flake8.

Signed-off-by: Wong Hoi Sing Edison <hswong3i@pantarei-design.com>
2021-06-24 09:46:29 +02:00
Boris Ranto 2491d4e004 dashboard: Add new prometheus alert
We were asked to update our alerting definitions to include a slow OSD
ops health check.
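
A rule of that kind might look like the sketch below, assuming the mgr
prometheus module exposes the `ceph_healthcheck_slow_ops` metric; the exact
expression and thresholds shipped may differ:

```yaml
groups:
  - name: osd-alert.rules
    rules:
      - alert: CephOSDSlowOps
        expr: ceph_healthcheck_slow_ops > 0
        for: 30s
        labels:
          severity: warning
        annotations:
          description: OSD requests are taking too long to process.
```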

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1951664

Signed-off-by: Boris Ranto <branto@redhat.com>
2021-06-24 09:02:21 +02:00
Guillaume Abrioux fc784fc44c cephadm-adopt: support rgw multisite adoption
We need to support rgw multisite deployments.
This commit makes the adoption playbook support this kind of deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-23 22:01:59 +02:00
Guillaume Abrioux 8279d14d32 multisite: fix bug during switch2containers
When running the switch-to-containers playbook with multisite enabled,
the fact "rgw_instances" is only set for the node being processed
(serial: 1). The consequence is that the set_fact of
'rgw_instances_all' can't iterate over all rgw nodes in order to look up
each 'rgw_instances_host'.

Adding a condition checking whether hostvars[item]["rgw_instances_host"]
is defined fixes this issue.
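
A minimal sketch of the guarded aggregation; the fact names come from this
commit, while the loop shape is assumed:

```yaml
- name: set_fact rgw_instances_all
  set_fact:
    rgw_instances_all: "{{ rgw_instances_all | default([]) + hostvars[item]['rgw_instances_host'] }}"
  loop: "{{ groups.get('rgws', []) }}"
  when: hostvars[item]['rgw_instances_host'] is defined
```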

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967926

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-17 01:49:29 +02:00
David Galloway 3eba2a1584 tests: Retry generating SSH vagrant config. Also add some debug.
Signed-off-by: David Galloway <dgallowa@redhat.com>
2021-06-16 18:57:11 +02:00
Guillaume Abrioux 8dbee99882 nfs: do not copy client.bootstrap-rgw when using mds
There's no need to copy this keyring when using nfs with mds.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-16 06:32:43 +02:00
Guillaume Abrioux 38bfad46e8 container: conditionally disable lvmetad
Enabling lvmetad in containerized deployments on el7 based OS might
cause issues.
This commit makes it possible to disable this service if needed.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955040

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-15 20:16:38 +02:00
Guillaume Abrioux d58500ade0 ceph_key: handle error in a better way
When calling the `ceph_key` module with `state: info`, if the ceph
command called fails, the actual error is hidden by the module, which
makes it pretty difficult to troubleshoot.

The current code always states that if rc is not equal to 0 the keyring
doesn't exist.

`state: info` should always return the actual rc, stdout and stderr.
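
A caller can then inspect the failure instead of silently assuming the
keyring is absent; this is a sketch, and the task names are hypothetical:

```yaml
- name: get info on the client.admin key
  ceph_key:
    name: client.admin
    state: info
  register: admin_key_info
  failed_when: false

- name: surface the actual error instead of assuming the key doesn't exist
  debug:
    msg: "rc={{ admin_key_info.rc }} stderr={{ admin_key_info.stderr }}"
  when: admin_key_info.rc != 0
```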

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1964889

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-14 23:46:20 +02:00
Guillaume Abrioux f9a73149a4 cephadm-adopt: fix mgr placement hosts task
When no `[mgrs]` group is defined in the inventory, mgr daemons are
implicitly collocated with monitors.
This task currently relies on the length of the mgr group in order to
tell cephadm how many mgr daemons to deploy.
If there's no `[mgrs]` group defined in the inventory, it will ask
cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw
an error.
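
One way to express the fallback, as a sketch only (the actual task in the
playbook may differ):

```yaml
- name: update the placement of manager hosts
  command: "ceph orch apply mgr --placement={{ groups.get('mgrs', groups['mons']) | length }}"
  run_once: true
  changed_when: false
```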

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-14 10:38:37 +02:00
Guillaume Abrioux b49cdea750 tests: allocate more memory for all_in_one job
Since we fire up far fewer VMs than the other jobs, we can afford to
allocate more memory for this job.
Each VM hosts more daemons, so 1024MB can be too little.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-14 10:10:15 +02:00
Guillaume Abrioux f7166cccbf rolling_update: fix mon+rgw/multisite collocation
When monitors and rgw are collocated with multisite enabled, the
rolling_update playbook fails because the workflow runs some
radosgw-admin commands very early on the first mon, even when that
monitor is the one being upgraded, in which case the container doesn't
exist since it was stopped.

This block is only relevant when scaling out rgw daemons or during the
initial deployment. It is not needed in the rolling_update workflow, so
let's skip it.
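
The skip can be expressed with the usual `rolling_update` guard, for
instance as in this sketch (the task file name is hypothetical):

```yaml
- name: include rgw multisite configuration
  include_role:
    name: ceph-rgw
    tasks_from: multisite.yml
  when: not rolling_update | default(false) | bool
```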

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970232

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-11 10:50:50 +02:00
Guillaume Abrioux c2aaa96fc7 tests: use CentOS 8.4 image
The CentOS 8.4 vagrant image is available at https://cloud.centos.org,
so let's use it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-11 06:53:41 +02:00
Neelaksh Singh d18a9860cd Sensitive key data now hidden in output log
Fixes: #6529

Signed-off-by: Neelaksh Singh <neelaksh48@gmail.com>
2021-06-08 20:46:37 +02:00
Guillaume Abrioux d4dfa204d2 Revert "tests: disable test_mgr_dashboard_is_listening"
This reverts commit 2e19d1705e.

A new build of ceph@master including the fix is available so
this is not needed anymore.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-08 09:03:20 +02:00
Guillaume Abrioux 2e19d1705e tests: disable test_mgr_dashboard_is_listening
Due to a recent commit that introduced a regression in ceph, this
test is failing.
Temporarily disable it to unblock the CI.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-04 14:01:28 +02:00
Guillaume Abrioux 4daed1f137 dashboard: set cookie_secure in grafana
When using grafana behind https, `cookie_secure` should be set to `true`.
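
As a sketch, the setting could be rendered into grafana.ini keyed on the
existing `dashboard_protocol` variable (assumed here):

```yaml
- name: enable secure cookies when grafana is served over https
  ini_file:
    path: /etc/grafana/grafana.ini
    section: security
    option: cookie_secure
    value: "true"
  when: dashboard_protocol == "https"
```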

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1966880

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-06-04 14:01:28 +02:00
Guillaume Abrioux d6745e9cd9 fs2bs: use match filter in selectattr()
0990ae4109 changed the filter in
selectattr() from 'match' to 'equalto', but due to an incompatibility
with the Jinja2 version for python 2.7 on el7 we must stick to the
'match' filter.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-26 08:14:38 +02:00
Guillaume Abrioux 0990ae4109 fs2bs: fix wrong filter when setting osd_ids
Using the 'match' filter in that task leads to bad behavior with, for
instance, the following node names:

- node1
- node11
- node111

with `selectattr('name', 'match', inventory_hostname)`, 'node1' matches
along with 'node11' and 'node111'.

Using the 'equalto' filter makes sure we only match the target node.
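
To illustrate the difference (a sketch; `nodes` is a hypothetical list of
dicts with a `name` key):

```yaml
# With inventory_hostname == 'node1', 'match' performs a regex match
# anchored at the start of the string, so 'node11' and 'node111' also match:
- set_fact:
    matched: "{{ nodes | selectattr('name', 'match', inventory_hostname) | list }}"

# 'equalto' requires strict equality, so only 'node1' is selected:
- set_fact:
    matched: "{{ nodes | selectattr('name', 'equalto', inventory_hostname) | list }}"
```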

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1963066

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-25 16:59:30 +02:00
Guillaume Abrioux 664dae0564 prometheus: enforce osd nodes in templates
When osd nodes are collocated in the clients group (in an HCI context,
for instance), the current logic excludes osd nodes since they are
present in the clients group.

The best fix would be to exclude client nodes only when they are not
members of another group, but for now, as a workaround, we can enforce
the addition of osd nodes to fix this specific case.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1947695

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-25 16:53:49 +02:00
Guillaume Abrioux 43b1c7bea9 vagrant_up: fix bash legacy syntax
This commit rewrites the deprecated syntax used in vagrant_up.sh

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-25 10:57:00 +02:00
Guillaume Abrioux 9efca34ac3 tests: pull images from cloud.centos.org
Temporarily work around a vagrant cloud issue; it seems broken at the
time of pushing this commit.
Let's pull images from cloud.centos.org for now, since vagrant cloud
hosted images return a 403 error.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-25 10:17:37 +02:00
Guillaume Abrioux 2c77d0094c update: do not gather facts on each play
There's no benefit to gathering facts again on each play in
rolling_update.yml.
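
Subsequent plays can simply reuse the facts cached by the first play, as in
this sketch:

```yaml
- name: upgrade ceph osd nodes
  hosts: osds
  gather_facts: false   # facts were already gathered by the first play
  tasks:
    - debug:
        msg: "{{ ansible_facts['hostname'] | default(inventory_hostname) }}"
```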

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-22 08:33:44 +02:00
Guillaume Abrioux e6d8b058ba nfs: get org.ganesha.nfsd.conf from container
Since we need to revert 33bfb10, this is an alternative to the initial
approach.
We can avoid maintaining this file since it is present in the container
image. The idea is to simply get it from the container image and write
it to the host.
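
A sketch of that approach, reusing ceph-ansible's container variables; the
file path is assumed:

```yaml
- name: get org.ganesha.nfsd.conf from the container image
  command: >
    {{ container_binary }} run --rm --entrypoint=cat
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    /etc/dbus-1/system.d/org.ganesha.nfsd.conf
  register: dbus_ganesha_file
  changed_when: false

- name: write org.ganesha.nfsd.conf to the host
  copy:
    dest: /etc/dbus-1/system.d/org.ganesha.nfsd.conf
    content: "{{ dbus_ganesha_file.stdout }}"
    mode: "0644"
```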

Fixes: #6501

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-07 13:35:37 +02:00
Dimitri Savineau a670982a38 ceph-rgw: fix pg_autoscale_mode for pool
The pg_autoscale_mode for rgw pools introduced in 9f03a52 was wrong
and was missing a `value` keyword because `rgw_create_pools` is a
dict.
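
With `rgw_create_pools` being a dict, the loop needs `item.value` to reach
the per-pool settings, roughly as in this sketch:

```yaml
- name: set pg_autoscale_mode value on rgw pools
  command: >
    ceph osd pool set {{ item.key }} pg_autoscale_mode
    {{ 'on' if item.value.pg_autoscale_mode | default(false) | bool else 'warn' }}
  loop: "{{ rgw_create_pools | dict2items }}"
  changed_when: false
```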

Fixes: #6516

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-05-06 10:15:13 +02:00
Guillaume Abrioux 3db1ea7ec4 update: fix ceph-crash stop task
This is a workaround for an issue in ansible.
When trying to stop/mask/disable this service in one task, the stop
doesn't actually happen; the task doesn't fail, but for some reason the
container is still present and running.
The task starting the service in the ceph-crash role then fails because
it can't start the container, since one is already running with the same
name.
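
The workaround is to split the operations into separate tasks, along these
lines (the unit name is assumed):

```yaml
- name: stop the ceph-crash service
  systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    state: stopped

- name: disable and mask the ceph-crash service
  systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    enabled: false
    masked: true
```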

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955393

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-05-04 13:06:47 +02:00
Guillaume Abrioux 8f87754b76 ceph-nfs: fix dev repo task
We need to filter by the OS architecture in order to fetch the right
dev repository from shaman.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-29 19:44:17 +02:00
Seena Fallah 41295f0ef6 ceph-osd: allow using ceph_tcmalloc_max_total_thread_cache for bluestore
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES applies to both bluestore and filestore.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2021-04-28 20:03:46 +02:00
Guillaume Abrioux 22c18e82f0 cephadm_adopt: fix ceph-crash migration
ceph-ansible leaves a ceph-crash container behind in containerized
deployments. This means we end up with 2 ceph-crash containers running
after the migration playbook is complete.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1954614

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-28 19:53:01 +02:00
Guillaume Abrioux 1f40c12502 cephadm_adopt: fix rgw placement task
Due to a recent breaking change in ceph, this command must be modified
to add the <svc_id> parameter.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-27 13:37:56 +02:00
Guillaume Abrioux bb7d37fb6a cephadm_adopt: create a 'nfs-ganesha' pool
When migrating from a cluster with no MDS nodes deployed,
`{{ cephfs_data_pool.name }}` doesn't exist, so we need to create a pool
for storing nfs export objects.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1950403

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-27 13:37:56 +02:00
Dimitri Savineau 83a8dd5a6a group_vars: fix default values
It looks like the generate_group_vars_sample.sh script wasn't executed
during previous PRs that modified the default values.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-04-15 19:45:49 +02:00
Dimitri Savineau 4e6b2a54d2 ceph-defaults: update multisite readme reference
The multisite README file has been merged into a single file.

Closes: #6411

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-04-15 19:44:38 +02:00
Dimitri Savineau 0df23194ce docs: update ansible version for master
Since 839fac8 we now use ansible 2.10 on the master branch.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-04-15 19:44:12 +02:00
Francesco Pantano 441651638d Config the monitoring stack components api urls using a VIP
When dashboard_frontend_vip is provided, all the services should be
configured using the related VIP. A new VIP variable is added for
both prometheus and alertmanager: we're already able to properly
configure the grafana vip using the dashboard_frontend_vip variable.
This change adds the same kind of variable for both prometheus and
alertmanager.
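
In group_vars this might look like the following sketch; the prometheus and
alertmanager variable names follow the `dashboard_frontend_vip` pattern and
are assumed, and the values are examples:

```yaml
dashboard_frontend_vip: 192.168.100.10
prometheus_frontend_vip: 192.168.100.10
alertmanager_frontend_vip: 192.168.100.10
```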

Signed-off-by: Francesco Pantano <fpantano@redhat.com>
2021-04-15 14:25:53 +02:00
Guillaume Abrioux 06a998dde0 tests: run dev_setup.yml on non_container job only
There's no need to run this playbook on container jobs.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-15 13:49:24 +02:00
Guillaume Abrioux 839fac8f94 core: bump ansible version
We should consider bumping the ansible version for future releases, so
let's start testing against ansible 2.10.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-15 13:49:24 +02:00
Benoît Knecht c078513475 ceph-rgw-loadbalancer: Fix rgw_ports fact
The `set_fact rgw_ports` task was failing due to a templating error, because
`hostvars[item].rgw_instances` is a list, but it was treated as if it was a
dictionary.

Another issue was the fact that the `unique` filter only applied to the list
being appended to `rgw_ports` instead of the entire list, which means it was
possible to have duplicate items.

Lastly, `rgw_ports` would have been a list of integers, but the `seport` module
expects a list of strings.

This commit fixes all of the issues above, allowing the `ceph-rgw-loadbalancer`
role to work on systems with SELinux enabled.
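
Put together, the fixed fact could look like this sketch (the
`radosgw_frontend_port` attribute name is assumed):

```yaml
- name: set_fact rgw_ports
  set_fact:
    rgw_ports: "{{ rgw_ports | default([]) | union(hostvars[item]['rgw_instances'] | map(attribute='radosgw_frontend_port') | map('string') | list) }}"
  with_items: "{{ groups.get('rgws', []) }}"
```

`union` deduplicates across the whole accumulated list, and `map('string')`
yields the strings the `seport` module expects.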

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2021-04-15 10:39:08 +02:00
Guillaume Abrioux ddbc11c4a9 switch-to-containers: only chown corresponding files
When collocating daemons, if we chown all files under `/var/lib/ceph` it
can cause issues for the collocated daemons that haven't been migrated
yet.

This commit makes the playbook chown only the files corresponding to the
daemon being migrated.
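
For an OSD node, for example, the narrowed chown might look like this
sketch; the paths and uid are illustrative only:

```yaml
- name: chown only the directories of the daemon being migrated
  file:
    path: "/var/lib/ceph/{{ item }}"
    owner: "167"   # ceph uid inside the container images
    group: "167"
    recurse: true
  loop:
    - osd
    - bootstrap-osd
```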

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-14 21:32:20 +02:00
Guillaume Abrioux bab403b603 container/systemd: ensure /var/log/ceph exists
This adds an `ExecStartPre=-/usr/bin/mkdir -p /var/log/ceph` line to all
systemd service templates for all ceph daemons.
This is specific to RHCS after a Leapp upgrade is done. Indeed, the
`/var/log/ceph` directory seems to be removed after the upgrade.
In order to work around this issue, let's ensure the directory is present
before trying to start the containers with podman.
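
In ceph-ansible the line lives in the unit templates themselves; purely as
an equivalent illustration, it could be injected into an existing unit like
this:

```yaml
- name: ensure /var/log/ceph is recreated before the container starts
  lineinfile:
    path: /etc/systemd/system/ceph-osd@.service
    insertbefore: '^ExecStart='
    line: 'ExecStartPre=-/usr/bin/mkdir -p /var/log/ceph'
```

The leading `-` in `ExecStartPre=-...` tells systemd to ignore a failure of
that command.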

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1949489

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-14 16:37:33 +02:00
Guillaume Abrioux 3d4267051f fs2bs: add a final play
This removes the fact `skipped_nodes`, which is useless when we run with
`--limit`, since it gets reset on each new iteration.

Instead, let's print, within a final play, which nodes have been skipped,
reusing the `skip_this_node` fact.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-14 14:56:02 +02:00
Guillaume Abrioux b1e7e1ad0f rbdmirror: add retries/until when configuring mirroring
`configure_mirroring.yml` is called right after the daemon is started.
Sometimes the first task in `configure_mirroring.yml` runs while the
daemon isn't ready yet; adding retries/until on that task should help
avoid the playbook failing.
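
A sketch of the retried task; the variable names are assumed here:

```yaml
- name: enable mirroring on the pool
  command: "rbd mirror pool enable {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode }}"
  register: result
  retries: 60
  delay: 1
  until: result is succeeded
  changed_when: false
```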

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944996

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-14 11:37:26 +02:00
Guillaume Abrioux a9220654f5 cephadm_adopt: support nfs-ganesha adoption
This commit adds the nfs-ganesha adoption support in the
`cephadm-adopt.yml` playbook.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944504

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 14:43:19 +02:00
Guillaume Abrioux 0772b3d28d nfs: remove legacy task
This fact is never used, so let's remove the task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 14:43:19 +02:00
Guillaume Abrioux d3d3d01528 nfs: rename two tasks
Set the names of those tasks to match the fact names being set.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 14:43:19 +02:00
Guillaume Abrioux 1ffc4df6b6 cephadm_adopt: modify placement policy for rgw
The adoption playbook should use `radosgw_num_instances` in order to
determine how many rgw instances it should recreate.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1943170

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 14:43:19 +02:00
Guillaume Abrioux ee44d86072 cephadm_adopt: fix a typo
This play does nothing other than stop/remove rgw daemons.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 14:43:19 +02:00
Guillaume Abrioux 36b4227dcd docker2podman: add documentation/header
This adds brief documentation in the header of the playbook to explain
its goal.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-12 09:30:26 +02:00
Guillaume Abrioux 70f19be367 docker2podman: skip some role imports from handler
When running the docker-to-podman playbook, there's no need to call
`ceph-config` and `ceph-rgw` from the `ceph-handler` role.
It can even have side effects on a baremetal cluster that was previously
migrated using the switch-to-containers playbook. Indeed, it might
complain about missing .target systemd units since they are removed
during that migration.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944999

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-04-09 15:28:50 +02:00