The current approach is extremely complex and introduced a lot
of spaghetti code. This doesn't offer a good user experience at all.
It's time to move to another approach (a dedicated playbook) and drop
the current implementation in order to clean up the code.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The linter complains about that.
It doesn't work anyway, so it doesn't make sense to leave these variables
here.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This adds the required changes in order to support
CentOS Stream 9.
Also, this bumps the supported Ansible version to 2.15.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The tasks "manage nodes with cephadm - ipv4/6" are skipped when
cephadm_mgmt_network contains more than one IP network, which prevents
cephadm from managing the host.
Signed-off-by: Teoman ONAY <tonay@ibm.com>
779523f86f introduced a regression
related to the rbdmirrors tasks. They were executed while the
ceph_rbd_mirror_remote_* variables were not set.
Signed-off-by: Teoman ONAY <tonay@ibm.com>
This directory should be removed when the cluster is purged.
Most of the services are started with the `--security-opt label=disable`
option. If the directory is not removed, it can cause SELinux issues
when the cluster is redeployed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
For some reason, this task has to be done in 2 steps otherwise it fails:
1/ stop and disable the service
2/ mask it
When done with a single task, the module says the service has been
stopped while this isn't the case (Ansible systemd module bug?).
it possibly relates to https://github.com/ansible/ansible/issues/68680
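A minimal sketch of the two-step workaround (the unit name below is only an
example, not the one handled by the playbook):
```
- name: stop and disable the service first
  ansible.builtin.systemd:
    name: "ceph-osd@0.service"   # example unit name
    state: stopped
    enabled: false

- name: mask it in a separate task
  ansible.builtin.systemd:
    name: "ceph-osd@0.service"   # example unit name
    masked: true
```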
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The filestore objectstore will be gone in the next Ceph release.
This drops filestore support in ceph-ansible.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Instead of checking the ip_version variable, we should check the input
address for its IP version and select the code path based on that.
This fixes ceph adoption with mixed IPv6 and IPv4 networks.
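For illustration, a hedged sketch of picking the code path from the address
itself with the ansible.utils filters (the variable and file names are
placeholders):
```
# sketch only; 'input_addr', 'ipv4.yml' and 'ipv6.yml' are placeholders
- name: handle the address with the IPv6 code path
  ansible.builtin.include_tasks: ipv6.yml
  when: input_addr | ansible.utils.ipv6

- name: handle the address with the IPv4 code path
  ansible.builtin.include_tasks: ipv4.yml
  when: input_addr | ansible.utils.ipv4
```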
Resolves: rhbz#2186226
Signed-off-by: Lukas Bezdicka <lbezdick@redhat.com>
The recent rbdmirror refactor introduced a regression in the
cephadm-adopt playbook.
Given that the rbd-mirror peer addition is now done by using the monitor
config-key store method during the cluster deployment, we can drop this
play from the cephadm-adopt.yml playbook.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2140569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`--state=enabled` isn't a valid filter, so the unit from the packaging
never gets removed.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2134917
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There's no service to stop/mask when the node being upgraded is
a 'primary node' only (one-way replication).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When using the legacy group name 'grafana-server', this playbook runs but
doesn't properly remove all monitoring resources as expected.
Fixes: #7265
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
the wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com>
- preserve mode and ownership on the main directories
- make sure the directories are present prior to restoring files
  (see the sketch below).
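A minimal sketch of the second point, with assumed path, ownership and mode
values:
```
- name: ensure the destination directory exists before restoring files
  ansible.builtin.file:
    path: /var/lib/ceph        # assumed path
    state: directory
    owner: ceph
    group: ceph
    mode: "0750"               # assumed mode
```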
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the user doesn't pass a valid name (present in the inventory),
the playbook fails like the following:
```
fatal: [localhost -> {{ target_node }}]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['10.70.46.40']" is undefined
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Typical failure:
```
fatal: [localhost]: FAILED! =>
msg: |-
The conditional check 'mode not in ['backup', 'restore']' failed. The error was: error while evaluating conditional (mode not in ['backup', 'restore']): 'mode' is undefined
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a task that sets `autotune_memory_target_ratio` depending on the
value of `is_hci`.
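Roughly along these lines (the ratio values below are assumptions, not
necessarily the ones shipped by the playbook):
```
- name: set autotune_memory_target_ratio depending on is_hci
  ansible.builtin.set_fact:
    autotune_memory_target_ratio: "{{ 0.2 if is_hci | bool else 0.7 }}"
```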
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2028693
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When the upgrade from Ceph 4 to 5 is performed in the OpenStack context,
ceph-ansible triggers the rolling_update playbook, which is supposed to
roll out new Ceph containers. The ceph-infra role tries to take care
of the firewall, ntp config and logrotate; however, TripleO manages them
through tripleo-heat-templates. This patch just adds an additional tag
to skip the ceph-infra role in the OpenStack context.
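As an illustration (the tag name below is an assumption), TripleO can then
pass `--skip-tags` to bypass the role:
```
- name: import ceph-infra role
  ansible.builtin.import_role:
    name: ceph-infra
  tags: ['ceph_infra']   # assumed tag name; skipped with --skip-tags ceph_infra
```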
Closes: https://bugzilla.redhat.com/2090456
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
When this directory is left behind after the OSD adoption, it leads to the following error:
```
[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices
host axdesec2ocs1n002.ecommerce.inditex.grp `cephadm ceph-volume` failed: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/41555360-e96b-4b16-a37c-873e0c940091/mon.axdesec2ocs1n002/config
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/41555360-e96b-4b16-a37c-873e0c940091/mon.axdesec2ocs1n002/config'.
```
This is because of an unexpected behavior regarding 'config inferring' when a legacy directory is present in /var/lib/ceph.
Note: this doesn't fix the root cause, this is a workaround.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2075510
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
With this commit, upgrading a cluster from Nautilus to Pacific with
active RGW multisite replication will be blocked.
This is because a lot of bugs are currently present in Pacific regarding
RGW multisite.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063702
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When using groups of groups, the playbook applies undesired
labels on nodes.
This commit fixes it by applying only the expected labels.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2057528
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When using custom cluster names, cephadm commands are executed using
the default admin keyring name, which fails.
Signed-off-by: Teoman ONAY <tonay@redhat.com>
By default cephadm uses the root account to connect remotely
to the other nodes in the cluster. This change makes it possible to
choose another account.
This commit also allows using a dedicated subnet for cephadm mgmt.
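A hypothetical group_vars sketch; `cephadm_ssh_user` is an assumed variable
name and the subnet is only an example:
```
cephadm_ssh_user: cephadm-user               # assumed variable name
cephadm_mgmt_network: "192.168.100.0/24"     # example management subnet
```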
Signed-off-by: Teoman ONAY <tonay@redhat.com>
This fixes the service file removal and makes the playbook
call `systemctl reset-failed` on the service because in Ceph
Nautilus, ceph-crash doesn't handle the `SIGTERM` signal.
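A sketch of the reset-failed call; the exact unit name handled by the
playbook may differ:
```
- name: reset-failed the ceph-crash service
  ansible.builtin.command: systemctl reset-failed ceph-crash.service  # assumed unit name
  changed_when: false
```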
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2055992
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This playbook doesn't support fewer than 3 monitors present in the inventory.
Just like the rolling_update playbook, let's fail if fewer than
3 monitors are present.
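A minimal sketch of such a guard, assuming the default 'mons' inventory
group name:
```
- name: fail if fewer than three monitors are present in the inventory
  ansible.builtin.fail:
    msg: "This playbook requires at least 3 monitor nodes in the inventory."
  when: groups['mons'] | default([]) | length < 3
```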
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2049132
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We can't use `{{ cephadm_cmd }}` here because the monitors aren't yet adopted.
We must use `{{ ceph_cmd }}` instead.
This also fixes some `| default()` filters (they must be applied before `| from_json()`).
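For illustration only (variable names are placeholders), the ordering the fix
enforces:
```
# default() must come before from_json(), otherwise an empty stdout
# makes from_json() fail before the default can apply
- name: parse the command output safely
  ansible.builtin.set_fact:
    report: "{{ cmd_result.stdout | default('{}') | from_json }}"
```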
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967440
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit makes the cephadm-adopt playbook fail if the cluster
has the `POOL_APP_NOT_ENABLED` warning raised.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2040243
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
In the OpenStack context we let the integration tool (TripleO)
deal with repositories and packages.
This change just adds the with_pkg tag to allow TripleO to skip
both the repository and package installation.
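For example (the task below is illustrative only), tagging a package task
lets TripleO run with `--skip-tags with_pkg`:
```
- name: install ceph packages
  ansible.builtin.package:
    name: ceph-common
    state: present
  tags: ['with_pkg']
```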
Signed-off-by: Francesco Pantano <fpantano@redhat.com>