In order to avoid breaking changes, let's use named arguments instead of
the positional argument syntax in the command line used to create the rgw
export.
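A hedged sketch of what the named-argument form might look like, wrapped in an
Ansible task; the option names (`--cluster-id`, `--pseudo-path`, `--bucket`) and
the variables are assumptions for illustration only:
```
- name: create the rgw nfs export with named arguments
  ansible.builtin.command: >
    ceph nfs export create rgw
    --cluster-id {{ nfs_cluster_id }}
    --pseudo-path {{ pseudo_path }}
    --bucket {{ bucket_name }}
  changed_when: false
```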
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2037691
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When deploying the monitoring stack on a dedicated node, the directory
`/etc/ceph` is never created. Therefore, the play adopting the monitoring
stack fails because it can't write the minimal config file.
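A minimal sketch of the kind of task that addresses this; ownership and mode
are illustrative assumptions:
```
- name: ensure /etc/ceph exists on the monitoring node
  ansible.builtin.file:
    path: /etc/ceph
    state: directory
    owner: root
    group: root
    mode: "0755"
```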
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2029697
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When collocating osds with iscsigw daemons, cephadm bindmounts the
following:
```
-v /var/lib/ceph/6126c064-6a9e-4092-8a64-977930df0843/iscsi.rbd.ceph-ameenasuhani-4fs3bq-node5.vomtqb/configfs:/sys/kernel/config
```
This prevents the cephadm-adopt playbook from running the container and
bindmounting `/var/lib/ceph:/var/lib/ceph:z`.
Since 'ro' is enough in this playbook, let's replace the ':z' option on
this bindmount with ':ro'.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2027411
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This is overriding those variables because the precedence of 'block vars'
is higher than that of group_vars/host_vars.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2026861
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
cephadm uses its own logrotate configuration file so ceph-ansible needs
to remove that custom file during the cephadm-adopt playbook.
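A hedged sketch of such a cleanup task; the logrotate path shown here is an
assumption for illustration, not necessarily the file ceph-ansible ships:
```
- name: remove the ceph-ansible logrotate configuration
  ansible.builtin.file:
    path: /etc/logrotate.d/ceph  # assumed path, for illustration only
    state: absent
```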
Closes: #6944
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph-facts role makes decisions based on the fact `rolling_update`, so
that fact must be set before we run this role.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2014304
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Change needed in order to support --limit on mon nodes.
Otherwise, a call to `hostvars[groups[mon_group_name][0]]['_current_monitor_address']`
throws an error:
```
"The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute '_current_monitor_address'"
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2014304#c28
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When adding a host, `ansible_facts['default_ipv4']['address']` might
not be on the desired network; we shouldn't enforce the subnet that carries
the default route.
Let's use the public_network instead.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2006415
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The rbd mirroring is broken because cephadm doesn't bindmount /etc/ceph anymore.
It means the keyrings and ceph config file aren't available after the
migration.
The idea here is to remove the current rbd mirror peer and add it back
to the mon config store so we aren't bound to the /etc/ceph directory.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967440
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When using --limit osds, the plays before and after the osd upgrade are
skipped because we use `hosts: "{{ mon_group_name | default('mons') }}[0]"`.
Using `hosts: "{{ osds_group_name | default('osds') }}"` with
`delegate_to` pointing to the first monitor addresses this issue.
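A minimal sketch of the pattern, with the group variable names mirrored from
the message above and the task itself purely illustrative:
```
- name: set osd flags before upgrading the osds
  hosts: "{{ osds_group_name | default('osds') }}"
  tasks:
    - name: set the noout flag
      ansible.builtin.command: ceph osd set noout
      delegate_to: "{{ groups[mon_group_name | default('mons')][0] }}"
      run_once: true
      changed_when: false
```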
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
It can be useful in a large cluster deployment to split the upgrade and
only upgrade a group of nodes at a time.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2014304
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
968891f449 introduced a regression.
The regex is wrong because it doesn't allow shrinking osds with an id
greater than 9.
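A hedged sketch of the kind of validation involved; the `osd_to_kill` variable
is an assumption for illustration. A pattern like `^[0-9]$` only matches
single-digit ids, whereas `^[0-9]+$` also matches ids such as 10 or 123:
```
- name: validate the osd ids passed to the playbook
  ansible.builtin.fail:
    msg: "osd_to_kill must be a comma-separated list of osd ids"
  when: item is not match('^[0-9]+$')
  loop: "{{ (osd_to_kill | string).split(',') }}"
```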
Fixes: #6950
Signed-off-by: Per Abildgaard Toft <per@minfejl.dk>
Configure the repository for the cephadm installation and use the package
install method in both containerized and non-containerized deployments.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
There are existing OpenStack scenarios where nfs is still not managed
by cephadm. For this reason it is sometimes useful to skip the nfs part of
the adoption playbook and leave this daemon unmanaged.
The purpose of this patch is to provide a tag enabling OpenStack
operators to skip this playbook section.
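A sketch of how a tagged play can be skipped; the tag name and the task body
are illustrative assumptions, not the playbook's actual content. Operators
could then run the playbook with `--skip-tags` to leave nfs unmanaged:
```
- name: adopt nfs-ganesha daemons
  hosts: "{{ nfs_group_name | default('nfss') }}"
  tags: adopt_nfs  # hypothetical tag name
  tasks:
    - name: stop and disable the legacy nfs-ganesha service
      ansible.builtin.systemd:
        name: "ceph-nfs@{{ ansible_facts['hostname'] }}"  # illustrative unit name
        state: stopped
        enabled: false
```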
Closes: https://bugzilla.redhat.com/2009212
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
If old containers are still running, it can make the tcmu-runner process
unable to open devices, and there's nothing else to do but restart the
container.
Also, as per discussion with iscsi experts, iscsi should be migrated before
the OSDs (the client should be closed before the server).
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2000412
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`container_binary` isn't set anymore in the purge osd play because of a
regression introduced by 60aa70a.
The CI didn't catch it because the play purging node-exporter sets this
variable for all nodes before we run the purge osd play.
This commit fixes this regression.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
When a node is configured with an FQDN as the hostname value, the
`ceph orch host add` command will fail because the `ansible_hostname` used
by that command contains the short hostname, which won't match the current
hostname (FQDN).
Instead we can use the ansible_nodename fact.
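A minimal sketch of the resulting task, assuming the host is added from the
first monitor; only the switch to `ansible_facts['nodename']` matters here:
```
- name: add the host to the cluster using its nodename
  ansible.builtin.command: "ceph orch host add {{ ansible_facts['nodename'] }}"
  delegate_to: "{{ groups[mon_group_name | default('mons')][0] }}"
  changed_when: false
```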
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1997083
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds ceph-*.target systemd unit files support for containerized
deployments.
This also fixes a regression introduced by PR #6719 (rgw and nfs systemd
units not getting purged)
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1962748
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The balancer status is registered during the cephadm-adopt, rolling_update
and switch2container playbooks. But it is also used in the ceph-handler role
which is included in those playbooks too.
Even if the ceph-handler tasks are skipped for rolling_update and
switch2container, the balancer_status variable is overwritten with the
skipped task result.
```
play1:
  register: balancer_status
play2:
  register: balancer_status <-- skipped
play3:
  when: (balancer_status.stdout | from_json)['active'] | bool
```
This leads to issues like:
```
The conditional check '(balancer_status.stdout | from_json)['active'] | bool'
failed. The error was: Unexpected templating type error occurred on
({% if (balancer_status.stdout | from_json)['active'] | bool %} True
{% else %} False {% endif %}): expected string or buffer.
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1982054
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The `ceph osd pool ls detail` command is a subset of the `ceph osd dump`
command.
```
$ ceph osd dump --format json | wc -c
10117
$ ceph osd pool ls detail --format json | wc -c
4740
```
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
eec3878 introduced a regression for upgrade scenarios where there are no
monitor nodes at all (like ganesha standalone, external clients, etc.):
```
TASK [get the ceph release being deployed] ************************************
task path: infrastructure-playbooks/rolling_update.yml:121
Thursday 29 July 2021 15:55:29 +0000 (0:00:00.484) 0:00:15.802 *********
fatal: [client0]: FAILED! =>
msg: '''dict object'' has no attribute ''mons'''
```
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
In the `set osd flags` block, run the Ceph commands that gather information
from the cluster (and don't make any changes to it) even when running in check
mode.
This allows the tasks that depend on the variables set by those tasks to
succeed in check mode.
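A minimal sketch of the pattern, with an illustrative information-gathering
task: forcing `check_mode: false` lets the command run and register output
even under `--check`:
```
- name: get osd dump output
  ansible.builtin.command: ceph osd dump --format json
  register: osd_dump
  check_mode: false
  changed_when: false
```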
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
Check early which Ceph release is going to be deployed and fail if it
doesn't correspond to the ceph-ansible version being used.
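A hedged sketch of such an early check; the variable names are illustrative,
not the playbook's actual ones:
```
- name: fail when the ceph release doesn't match this ceph-ansible version
  ansible.builtin.fail:
    msg: "ceph-ansible expects {{ expected_ceph_release }}, found {{ detected_ceph_release }}"
  when: detected_ceph_release != expected_ceph_release
```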
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978643
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a task that zaps by osd id so we can support the scenario
where osds were deployed with `osd_auto_discovery` set to true.
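A hedged sketch of zapping by osd id rather than by device path; the
`osd_to_kill` variable is an assumption for illustration:
```
- name: zap and destroy osds by osd id
  ansible.builtin.command: "ceph-volume lvm zap --osd-id {{ item }} --destroy"
  loop: "{{ osd_to_kill.split(',') }}"
  become: true
```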
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1876860
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This refactor merges the two playbooks so we only have to maintain 1
playbook.
(Symlink the old purge-container-cluster.yml playbook for backward
compatibility).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Those variables are useless given it is not possible to override them.
Let's replace them with the hardcoded names instead.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
1303611 introduced tasks for disabling the pg_autoscaler on pools and
the balancer, but those tasks are already executed on the first monitor
node, so we don't need to add the run_once statement.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When using Python 2 and a task with a loop is skipped, it generates an
error:
```
Unexpected templating type error occurred on
({{ (pool_list.stdout | from_json)['pools'] }}): expected string or buffer
```
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The PG autoscaler can disrupt the PG checks so the idea here is to
disable it and re-enable it back after the restart is done.
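A hedged sketch of the disable/re-enable pattern; the pool list variable is
illustrative:
```
- name: disable the pg autoscaler on each pool before the restart
  ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode off"
  loop: "{{ pool_names }}"  # illustrative variable
  changed_when: false

- name: re-enable the pg autoscaler once the restart is done
  ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode on"
  loop: "{{ pool_names }}"
  changed_when: false
```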
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit reindents the playbook.
Also improve readability by adding an extra line between plays.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If one of the monitors is out of the quorum, nothing prevents the upgrade
playbook from running.
We only check that we have at least three monitor nodes, but we should also
check that those monitor nodes are actually present in the quorum.
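A hedged sketch of such a quorum check, assuming `ceph quorum_status` output
is compared against the monitor group size:
```
- name: get the monitor quorum status
  ansible.builtin.command: ceph quorum_status --format json
  register: quorum_status
  changed_when: false

- name: fail if a monitor is missing from the quorum
  ansible.builtin.fail:
    msg: "Not all monitors are in the quorum, fix this before upgrading."
  when: (quorum_status.stdout | from_json)['quorum_names'] | length < groups[mon_group_name | default('mons')] | length
```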
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1952571
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
After an upgrade, the presence of straw buckets will produce the
following warning (HEALTH_WARN):
```
crush map has legacy tunables (require firefly, min is hammer)
```
Because the straw bucket is a firefly feature, it needs to be converted to
straw2.
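A hedged sketch of the conversion, run once against the cluster:
```
- name: convert all legacy straw buckets to straw2
  ansible.builtin.command: ceph osd crush set-all-straw-buckets-to-straw2
  run_once: true
  changed_when: false
```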
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967964
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The dashboard/monitoring stack can be deployed via the dashboard_enabled
variable. But there's nothing similar if we want to remove only that part
and keep the ceph cluster up and running.
The current purge playbooks remove everything.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786691
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Add any_errors_fatal: true in cephadm-adopt playbook.
We should stop the playbook execution when a task throws an error.
Otherwise it can lead to unexpected behavior.
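A minimal sketch of the setting at the play level; the play content is
illustrative:
```
- name: adopt daemons with cephadm
  hosts: all
  gather_facts: false
  any_errors_fatal: true
  tasks:
    - name: a task whose failure should abort the whole playbook
      ansible.builtin.command: /bin/true
      changed_when: false
```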
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1976179
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds the monitoring group in the "final cleanup play" so any cid
files generated are properly removed when purging the cluster.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1974536
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Starting RHCS 5, there's no ISO available anymore.
This removes all ISO variables and the ceph_repository_type variable.
Closes: #6626
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Do not rely on the inventory aliases in order to check if the selected
manager to be removed is present.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967897
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If multi-realms were deployed with several instances belonging to the same
realm and zone using the same port on different nodes, the service id
expected by cephadm will be the same and therefore only one service will
be deployed. We need to create a service called
`<node>.<realm>.<zone>.<port>` to be sure the service name will be unique
and well deployed on the expected node in order to preserve backward
compatibility with the rgws instances that were deployed with
ceph-ansible.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds the ceph-validate role before starting the switch to a containerized
deployment.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1968177
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
We need to support rgw multisite deployments.
This commit makes the adoption playbook support this kind of deployment.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967455
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When no `[mgrs]` group is defined in the inventory, mgr daemons are
implicitly collocated with monitors.
This task currently relies on the length of the mgr group in order to
tell cephadm to deploy mgr daemons.
If there's no `[mgrs]` group defined in the inventory, it will ask
cephadm to deploy 0 mgr daemons, which doesn't make sense and will throw
an error.
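A hedged sketch of counting the mgr daemons with a fallback to the monitor
group; the fact name is illustrative:
```
- name: set the number of mgr daemons to deploy
  ansible.builtin.set_fact:
    mgr_count: "{{ groups.get(mgr_group_name | default('mgrs'), []) | length or groups.get(mon_group_name | default('mons'), []) | length }}"
```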
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970313
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
0990ae4109 changed the filter in
selectattr() from 'match' to 'equalto', but due to an incompatibility with
the Jinja2 version for python 2.7 on el7 we must stick to using the 'match'
filter.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Using the 'match' filter in that task will lead to bad behavior if we have
the following node names for instance:
- node1
- node11
- node111
with `selectattr('name', 'match', inventory_hostname)` it will match
'node1' along with 'node11' and 'node111'.
Using the 'equalto' filter makes sure we only match the target node.
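A minimal sketch of the corrected lookup; the registered variable is an
assumption for illustration:
```
- name: select only the entry whose name is exactly this host
  ansible.builtin.set_fact:
    host_entry: "{{ (orch_hosts.stdout | from_json) | selectattr('name', 'equalto', inventory_hostname) | list }}"
```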
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1963066
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This is a workaround for an issue in ansible.
When trying to stop/mask/disable this service in one task, the stop
doesn't actually happen; the task doesn't fail, but for some reason the
container is still present and running.
Then the task starting the service in the role ceph-crash fails because
it can't start the container since it's already running with the same
name.
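A hedged sketch of the workaround: split the stop from the disable/mask so the
stop is actually applied; the unit name is illustrative:
```
- name: stop the ceph-crash systemd service
  ansible.builtin.systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"  # illustrative unit name
    state: stopped

- name: disable and mask the ceph-crash systemd service
  ansible.builtin.systemd:
    name: "ceph-crash@{{ ansible_facts['hostname'] }}"
    enabled: false
    masked: true
```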
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1955393
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
ceph-ansible leaves a ceph-crash container in containerized deployment.
It means we end up with 2 ceph-crash containers running after the
migration playbook is complete.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1954614
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Due to a recent breaking change in ceph, this command must be modified
to add the <svc_id> parameter.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When migrating from a cluster with no MDS nodes deployed,
`{{ cephfs_data_pool.name }}` doesn't exist so we need to create a pool
for storing nfs export objects.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1950403
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When collocating daemons, if we chown all files under `/var/lib/ceph` it
can cause issues for the collocated daemons that haven't been migrated
yet.
This commit makes the playbook chown only the files corresponding to the
daemon being migrated.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This removes the fact `skipped_nodes` which is useless when we run with
`--limit` since it gets reset when a new iteration is made.
Instead, let's print which nodes have been skipped within a final play,
reusing the `skip_this_node` fact.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The adoption playbook should use `radosgw_num_instances` in order to
determine how many rgw instances it should recreate.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1943170
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a short documentation block in the header of the playbook in order
to explain its goal.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When running the docker-to-podman playbook, there's no need to call
`ceph-config` and `ceph-rgw` from the role `ceph-handler`.
It can even have side effects when coming from a baremetal cluster that
was previously migrated using the switch-to-containers playbook. Indeed
it might complain about missing .target systemd units since they are
removed during that migration.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1944999
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The playbook must fail anyway; the `rescue` block has been introduced to
unmask the unit after the playbook has failed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If for some reason the playbook fails after the service was stopped,
disabled and masked, and before it got restarted, enabled and unmasked, the
playbook leaves the service masked, which can confuse users and forces them
to unmask the unit manually.
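A hedged sketch of the block/rescue pattern described above; the unit variable
names are illustrative:
```
- name: switch the daemon to its new systemd unit
  block:
    - name: stop, disable and mask the legacy unit
      ansible.builtin.systemd:
        name: "{{ legacy_unit_name }}"  # illustrative variable
        state: stopped
        enabled: false
        masked: true
    - name: start and enable the new unit
      ansible.builtin.systemd:
        name: "{{ new_unit_name }}"  # illustrative variable
        state: started
        enabled: true
  rescue:
    - name: unmask the legacy unit so it isn't left masked
      ansible.builtin.systemd:
        name: "{{ legacy_unit_name }}"
        masked: false
    - name: fail the playbook anyway
      ansible.builtin.fail:
        msg: "switching {{ legacy_unit_name }} to {{ new_unit_name }} failed"
```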
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1917680
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit makes the playbook fetch the minimal current ceph
configuration and write it later on monitoring nodes so `cephadm` can
proceed with the adoption.
When a monitoring stack was deployed on a dedicated node, it means no
`ceph.conf` file was written; `cephadm` requires a `ceph.conf` in order
to adopt the daemons present on the node.
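A hedged sketch of the fetch-and-write step, assuming the minimal config is
generated on the first monitor:
```
- name: fetch the minimal ceph configuration
  ansible.builtin.command: ceph config generate-minimal-conf
  register: minimal_ceph_conf
  delegate_to: "{{ groups[mon_group_name | default('mons')][0] }}"
  changed_when: false

- name: write the minimal config on the monitoring node
  ansible.builtin.copy:
    dest: /etc/ceph/ceph.conf
    content: "{{ minimal_ceph_conf.stdout }}\n"
    mode: "0644"
```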
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1939887
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the legacy name `grafana-server` is still being used when upgrading
from Nautilus to Pacific, the task that sets the fact `rolling_update`
to `true` doesn't run on the node(s) included in that group. Indeed the
play where we set this fact (`rolling_update`) only runs on the group
`monitoring_group_name | default('monitoring')`.
As a workaround, we can run the task which converts the `grafana-server`
group name to `monitoring` earlier.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1935554
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
It has come to our attention that using ansible_* vars that are
populated with INJECT_FACTS_AS_VARS=True is not very performant. In
order to be able to support setting that to off, we need to update the
references to use ansible_facts[<thing>] instead of ansible_<thing>.
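A minimal before/after sketch of the change; the debug task is purely
illustrative:
```
# before (relies on INJECT_FACTS_AS_VARS=True)
- ansible.builtin.debug:
    msg: "{{ ansible_hostname }} {{ ansible_default_ipv4['address'] }}"

# after (works with fact injection disabled)
- ansible.builtin.debug:
    msg: "{{ ansible_facts['hostname'] }} {{ ansible_facts['default_ipv4']['address'] }}"
```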
Related: ansible#73654
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1935406
Signed-off-by: Alex Schultz <aschultz@redhat.com>
This commit makes sure the purge playbooks remove those files if for any
reason they have been left behind.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1920900
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There's no need to slow down the playbook execution time by migrating
all the `ceph-crash` instances in a serial way. Let's remove the
`serial: 1` so the migration is achieved in a parallel way.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When running the rolling_update.yml playbook and adding the dashboard
component at the same time, the requirements (like container packages)
aren't installed.
This could lead to a failure when using authentication on the container
registry, because the playbook will try to log in on the registry but
podman/docker isn't installed yet.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1903504
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1918650
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since b105549 we don't install node-exporter on client nodes so we should
also exclude the client node from the node-exporter upgrade.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since master is now deploying quincy, we must update this.
Otherwise, it will fail like the following:
```
Error EPERM: require_osd_release cannot be lowered once it has been set
```
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There's no reason not to use the ceph_osd_flag module to set/unset osd
flags.
Also, if there are no OSD nodes in the inventory, we don't need to execute
the set/unset play.
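A hedged sketch of using the module, assuming it accepts `name` and `state`
parameters:
```
- name: set the noout flag
  ceph_osd_flag:
    name: noout
    state: present
  delegate_to: "{{ groups[mon_group_name | default('mons')][0] }}"
  run_once: true

- name: unset the noout flag
  ceph_osd_flag:
    name: noout
    state: absent
  delegate_to: "{{ groups[mon_group_name | default('mons')][0] }}"
  run_once: true
```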
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Instead of doing some scripting via the shell module, we can use the
parted ansible module to check the boot flag on partitions.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Those devices (db/wal) are never zapped in lvm batch deployment.
Iterating over `dedicated_devices` and `bluestore_wal_devices` fixes
this issue.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1922926
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When rerunning the cephadm-adopt.yml playbook, the radosgw realm,
zonegroup and zone tasks will fail because they aren't idempotent.
Using the radosgw ansible modules solves that problem.
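A hedged sketch of what the module-based tasks could look like; the module
names exist in ceph-ansible's library, but the parameters shown here are
assumptions for illustration:
```
- name: create the rgw realm (idempotent)
  radosgw_realm:
    name: "{{ rgw_realm }}"
    default: true

- name: create the rgw zonegroup (idempotent)
  radosgw_zonegroup:
    name: "{{ rgw_zonegroup }}"
    realm: "{{ rgw_realm }}"
    default: true
```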
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>