If a custom Prometheus port was used before adoption, it was not
taken into account and the default port 9095 was set instead. The
custom port is now re-applied.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2242346
Signed-off-by: Teoman ONAY <tonay@ibm.com>
`ceph orch ls rgw --format=yaml` returns multiple YAML documents
when several RGW services are deployed; this was not handled
correctly.
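A minimal sketch of the kind of handling now needed, assuming the
output is gathered with a command task (task and variable names are
illustrative):
```
# Parse every document returned by `ceph orch ls rgw --format=yaml`,
# not just the first one.
- name: get rgw service specs
  ansible.builtin.command: ceph orch ls rgw --format=yaml
  register: rgw_ls
  changed_when: false

- name: parse all returned YAML documents
  ansible.builtin.set_fact:
    rgw_specs: "{{ rgw_ls.stdout | from_yaml_all | list }}"
```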
Signed-off-by: Teoman ONAY <tonay@ibm.com>
`networks` was at the wrong level in the spec file, which failed with
"got an unexpected keyword argument 'networks'".
Signed-off-by: Teoman ONAY <tonay@ibm.com>
Alertmanager was bound to the default `*` address instead of
`grafana_server_addr` as it was before. From now on, if
`grafana_server_addr` is defined, Alertmanager will be bound to that
network.
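As a hedged illustration only (spec layout assumed, values made up),
one way to express this in a cephadm service spec is the `networks`
field, restricted to the network that contains `grafana_server_addr`:
```
service_type: alertmanager
placement:
  hosts:
    - monhost1              # hypothetical host
networks:
  - 192.168.122.0/24        # network containing grafana_server_addr
```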
Signed-off-by: Teoman ONAY <tonay@ibm.com>
Add a new module, `ceph_orch_spec`, which applies Ceph spec files.
This feature was needed to bind extra mount points into the RGW
container (/etc/pki/ca-trust/).
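A hedged example of the kind of spec the module can apply;
`extra_container_args` is the cephadm spec field used here to pass an
extra bind mount, and the service id/host are made up:
```
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - rgwhost1
extra_container_args:
  - "-v"
  - "/etc/pki/ca-trust/:/etc/pki/ca-trust/:ro"
```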
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2262133
Signed-off-by: Teoman ONAY <tonay@ibm.com>
This is needed by ceph-exporter as it parses the socket name by the
number of dots, although the `rgw_zone` variable is only used for
constructing the client name and has nothing to do with multisite.
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
this drops the following parameters:
- monitor_address_block
- monitor_interface
- monitor_address
The monitor address will be automatically set from the `public_network` parameter.
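After this change, only the network needs to be provided, e.g. in
`group_vars/all.yml` (the value shown is an example):
```
public_network: 192.168.122.0/24
# monitor_interface / monitor_address / monitor_address_block are no longer set
```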
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The current approach is extremely complex and introduced a lot
of spaghetti code. This doesn't offer a good user experience at all.
It's time to think about another approach (a dedicated playbook) and
drop the current implementation in order to clean up the code.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The linter complains about this.
It doesn't work anyway, so it doesn't make sense to leave these
variables here.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This adds the required changes in order to support
CentOS Stream 9.
Also, this bumps the supported Ansible version to 2.15.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The tasks "manage nodes with cephadm - ipv4/6" are skipped when
cephadm_mgmt_network contains more than one ip network which prevent
cephadm from managing the host.
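For example, a value containing more than one network, which
previously caused the tasks to be skipped (networks are illustrative,
assuming the comma-separated string form used for network lists):
```
cephadm_mgmt_network: "192.168.122.0/24,2001:db8::/64"
```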
Signed-off-by: Teoman ONAY <tonay@ibm.com>
779523f86f introduced a regression
related to the rbd-mirror tasks: they were executed even though the
`ceph_rbd_mirror_remote_*` variables were not set.
Signed-off-by: Teoman ONAY <tonay@ibm.com>
This directory should be removed when the cluster is purged.
Most of the services are started with the `--security-opt label=disable`
option. If the directory is not removed, it can cause SELinux issues
when the cluster is redeployed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
For some reason, this task has to be done in 2 steps, otherwise it fails:
1/ stop and disable the service
2/ mask it
When done with a single task, the module says the service has been
stopped while this isn't the case (Ansible systemd module bug?).
it possibly relates to https://github.com/ansible/ansible/issues/68680
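A minimal sketch of the two-step approach with the
`ansible.builtin.systemd` module (the unit name is a placeholder):
```
- name: stop and disable the unit
  ansible.builtin.systemd:
    name: some-ceph-unit.service   # placeholder unit name
    state: stopped
    enabled: false

- name: mask the unit
  ansible.builtin.systemd:
    name: some-ceph-unit.service   # placeholder unit name
    masked: true
```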
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The filestore objectstore will be gone in the next Ceph release.
This drops filestore support in ceph-ansible.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Instead of checking the `ip_version` variable, we should check the
input address for its IP version and select the code path based on
that. This fixes Ceph adoption with mixed IPv6 and IPv4 networks.
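A minimal sketch of selecting the code path from the address itself,
assuming the `ansible.utils` filters are available (task and variable
names are illustrative):
```
- name: handle an IPv6 monitor address
  ansible.builtin.set_fact:
    _mon_addr: "[{{ mon_ip }}]"
  when: mon_ip | ansible.utils.ipv6

- name: handle an IPv4 monitor address
  ansible.builtin.set_fact:
    _mon_addr: "{{ mon_ip }}"
  when: mon_ip | ansible.utils.ipv4
```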
Resolves: rhbz#2186226
Signed-off-by: Lukas Bezdicka <lbezdick@redhat.com>
The recent rbdmirror refactor introduced a regression in the
cephadm-adopt playbook.
Given that the rbd-mirror peer addition is now done by using the monitor
config-key store method during the cluster deployment, we can drop this
play from the cephadm-adopt.yml playbook.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2140569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`--state=enabled` isn't a valid filter so the unit from the packaging
never gets removed.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2134917
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There's no service to stop/mask when the node being upgraded is
a 'primary node' only (one-way replication).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When using the legacy group name 'grafana-server', this playbook runs
but doesn't properly remove all monitoring resources as expected.
Fixes: #7265
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com>
- preserve mode and ownership on the main directories
- make sure the directories are present prior to restoring files (see the sketch below).
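A minimal sketch of the second point (path and ownership are
placeholders):
```
- name: ensure the target directory exists before restoring files
  ansible.builtin.file:
    path: /etc/ceph              # placeholder path
    state: directory
    owner: ceph
    group: ceph
    mode: "0755"
```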
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the user doesn't pass a valid name (one present in the inventory),
the playbook fails as follows:
```
fatal: [localhost -> {{ target_node }}]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: "hostvars['10.70.46.40']" is undefined
```
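A hedged sketch of the kind of early check that avoids this opaque
error (`target_node` is the variable already used by the playbook):
```
- name: fail early when the target node is not part of the inventory
  ansible.builtin.fail:
    msg: "{{ target_node }} is not present in the inventory."
  when: target_node not in groups['all']
```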
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Typical failure:
```
fatal: [localhost]: FAILED! =>
msg: |-
The conditional check 'mode not in ['backup', 'restore']' failed. The error was: error while evaluating conditional (mode not in ['backup', 'restore']): 'mode' is undefined
```
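A hedged sketch of a guard that gives a clearer message when `mode`
is not passed (`mode` is the variable from the error above):
```
- name: fail when mode is not set to a supported value
  ansible.builtin.fail:
    msg: "mode must be either 'backup' or 'restore'."
  when: mode is undefined or mode not in ['backup', 'restore']
```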
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds a task that sets `autotune_memory_target_ratio` depending on the
value of `is_hci`.
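A minimal sketch of such a task; the 0.2 / 0.7 values are the usual
HCI / non-HCI ratios and are an assumption here:
```
- name: set autotune_memory_target_ratio
  ansible.builtin.command: >
    ceph config set mgr
    mgr/cephadm/autotune_memory_target_ratio
    {{ '0.2' if is_hci | bool else '0.7' }}
  changed_when: false
```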
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2028693
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>