In order to avoid the following error:
```
multiple RX peers are not currently supported
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2037646
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit aa64747cd1)
This `run_once: true` breaks support for multiple rbd-mirror daemons,
as it would make all rbd-mirror daemons use the same keyring.
Each rbd-mirror daemon needs its own keyring in order to start.
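As a minimal sketch of the idea (task name, key name and caps are illustrative, not necessarily those used by the role), dropping `run_once` lets every rbd-mirror host create its own keyring:
```
- name: create the rbd-mirror keyring, one per daemon, hence no run_once
  ceph_key:
    name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
    cluster: "{{ cluster }}"
    caps:
      mon: "profile rbd-mirror"
      osd: "profile rbd"
    dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring"
```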
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2037646
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit cfe6ca7adf)
This task doesn't set up a proper keyring.
This task wasn't backported because it relies on a feature of the `ceph_key` module that wasn't
available in the branch `rhcs-4.3`.
Given that this feature is now backported, let's use it.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2037646
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ceph_key module currently only supports the json output for the
info state.
When using this state on an entity, we sometimes want the output:
- plain, for copying it to another node.
- json, in order to get only a subset of the entity's information (like
the key or caps).
This patch adds the output_format parameter, which defaults to json
for backward compatibility. It also removes the internal hardcoded
variable of the same name.
In addition to the json and plain outputs, xml and yaml values are
also available.
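A hedged usage sketch (the key name and registered variable are illustrative):
```
- name: fetch a keyring in plain format so it can be copied to another node
  ceph_key:
    name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
    cluster: "{{ cluster }}"
    state: info
    output_format: plain
  register: _key_info
```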
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 7d3d51d6da)
If `osd_memory_target` is set in group_vars, the default value (4GB)
should be overridden.
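For instance, with the following group_vars entry (the value is illustrative, in bytes), the role should use 8GB instead of the 4GB default:
```
osd_memory_target: 8589934592
```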
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2118544
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 03713070eb)
When we come from configure_dashboard.yml, this fact should be set if
`rgw_instances` is defined in group_vars/host_vars. Otherwise, the next
task that sets the fact `rgw_instances` will run because it assumes it
wasn't user defined.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2117294
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 33ac715cfb)
The daemon is not running on the 'primary' cluster.
Therefore, these tests are not needed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 37e67fb672)
Having the failure logged can save us some time when a failure shows up
in the CI, since we don't have to reproduce it every time.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f1239b6907)
- Use config-key store to add cluster peer.
- Support mirroring of multiple pools (see the sketch below).
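A hypothetical group_vars layout for declaring several mirrored pools (the variable name and fields are illustrative only, not necessarily the exact ones introduced by this change):
```
ceph_rbd_mirror_pools:
  - name: "images"
    mode: "pool"
  - name: "volumes"
    mode: "pool"
```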
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b74ff6e22c)
When 'osd_memory_target' is overridden in ceph_conf_overrides,
the task that sets the fact `osd_memory_target` in the ceph-config role
should be skipped.
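A minimal sketch of such a skip condition (the default value and the exact condition used by the role may differ):
```
- name: set_fact osd_memory_target
  set_fact:
    osd_memory_target: 4294967296
  when: "'osd_memory_target' not in ceph_conf_overrides.get('osd', {})"
```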
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2056675#c11
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit cb5d6b48fb)
"set_fact container_run_cmd" is not set when using --limit on MDS as facts
were not run on first MON.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2111017
Signed-off-by: Teoman ONAY <tonay@redhat.com>
(cherry picked from commit 9a4a3f5f19)
Add the missing `--cluster {{ cluster }}` option to the task
`set osd_memory_target` in the main.yml file of the
ceph-config role.
It also moves the task so it runs after the ceph configuration file is actually written.
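A hedged sketch of what the task could look like with the flag (the exact command used by the role may differ):
```
- name: set osd_memory_target
  command: >
    {{ ceph_cmd | default('ceph') }} --cluster {{ cluster }}
    config set osd osd_memory_target {{ osd_memory_target }}
  changed_when: false
```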
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 9f59c7286f)
- preserve mode and ownership on main directories
- make sure the directories exist prior to restoring files (see the sketch below).
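A hedged sketch of the restore preparation (the registered stat variable is illustrative):
```
- name: ensure /etc/ceph exists with its original ownership and mode
  file:
    path: /etc/ceph
    state: directory
    owner: "{{ etc_ceph_stat.stat.pw_name | default('root') }}"
    group: "{{ etc_ceph_stat.stat.gr_name | default('root') }}"
    mode: "{{ etc_ceph_stat.stat.mode | default('0755') }}"
```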
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 047af3a3f6)
If the user doesn't pass a valid name (one present in the inventory),
the playbook will fail like the following:
```
fatal: [localhost -> {{ target_node }}]: FAILED! =>
  msg: |-
    The task includes an option with an undefined variable. The error was: "hostvars['10.70.46.40']" is undefined
```
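A hedged sketch of an early validation (variable names and message are illustrative):
```
- name: fail when target_node is not part of the inventory
  fail:
    msg: "{{ target_node }} is not part of the inventory, please provide a valid hostname"
  when: target_node not in groups['all']
```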
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b18a1aa3ca)
Typical failure:
```
fatal: [localhost]: FAILED! =>
  msg: |-
    The conditional check 'mode not in ['backup', 'restore']' failed. The error was: error while evaluating conditional (mode not in ['backup', 'restore']): 'mode' is undefined
```
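A hedged sketch of the guard (message wording is illustrative):
```
- name: fail when mode is not set to backup or restore
  fail:
    msg: "you must set 'mode' to either 'backup' or 'restore'"
  when: mode is not defined or mode not in ['backup', 'restore']
```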
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2051640
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 848dd03fa6)
If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
the wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com>
(cherry picked from commit 64e08f2c0b)
Use `include_tasks` instead of `import_tasks`.
Given that `import_tasks` statements are preprocessed
and the task that defines the variable hasn't been run yet, it will fail
and complain like the following:
```
The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute '_interface'
```
Using `include_tasks` instead fixes this.
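A minimal illustration of the difference (the file name is illustrative):
```
# static: preprocessed at parse time, before the fact it relies on exists
# - import_tasks: configure_mirroring.yml

# dynamic: evaluated only when this task is reached, after the fact is set
- include_tasks: configure_mirroring.yml
```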
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 434793e2fe)
(cherry picked from commit d57377ef61)
There's no need to run the ceph-facts, ceph-config and ceph-client roles
all together on client nodes in the rolling update playbook.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2019831
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 817c03bc0e)
(cherry picked from commit c0da98b1d6)
Update the `After=` and `Wants=` parameters in container systemd units
so they are aligned with the systemd units that come
from the packaging.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2027440
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f01536ea19)
(cherry picked from commit 690c879aef)
This playbook doesn't support fewer than 3 monitors present in the inventory.
Just like the rolling_update playbook, let's fail if fewer than
3 monitors are present.
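A hedged sketch of such a check (message wording is illustrative):
```
- name: fail when there are fewer than three monitors
  fail:
    msg: "Minimum of 3 monitors are required"
  when: groups[mon_group_name] | length | int < 3
```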
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2049132
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f08129edf2)
(cherry picked from commit b970ab6691)
This commit makes podman bindmount `/:/rootfs:ro` so the container can
collect data from the host.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2028775
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 0f34cd16d8)
(cherry picked from commit 2e2d893d28)
This fixes the service file removal and makes the playbook
call `systemctl reset-failed` on the service because in Ceph
Nautilus, ceph-crash doesn't handle the `SIGTERM` signal.
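A hedged sketch of the purge steps (the unit name is illustrative):
```
- name: remove the ceph-crash systemd unit file
  file:
    path: /etc/systemd/system/ceph-crash@.service
    state: absent

- name: reset-failed the ceph-crash unit since SIGTERM is not handled on Nautilus
  command: "systemctl reset-failed ceph-crash@{{ ansible_facts['hostname'] }}.service"
  changed_when: false
  failed_when: false
```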
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2055992
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2f11982590)
(cherry picked from commit 7a570c719e)
The current implementation doesn't allow disabling the pg autoscaler
on created pools; it only allows 'on' or 'warn'.
With this commit, it is now possible to disable it.
Valid values are ['on', 'yes', 'true', 'off', 'no', 'false']:
```
openstack_glance_pool:
  name: "images"
  pg_num: 128
  pgp_num: 128
  rule_name: "replicated_rule"
  type: 1
  application: "rbd"
  size: 3
  pg_autoscale_mode: off
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2062621
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 9d1ff8f236)
Initially MONs and RGWs bound /etc/pki/ca-trust/extracted using the :z flag
(introduced to solve an OSP TripleO issue on RHEL - #3638), but using
this flag prevents local services (like sssd) running on the host from accessing
the certificates/files in that folder.
Signed-off-by: Teoman ONAY <tonay@redhat.com>
(cherry picked from commit 7e8ce2567e)
(cherry picked from commit cf44ad76f6)
When these variables are defined in the inventory host file,
all tasks are skipped because the node being played isn't
aware of the values from the rgw nodes.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 328bd7c975)
When the following conditions are met:
- rgw is deployed,
- dashboard is deployed,
- playbook is called with --limit,
- a node being processed is collocated with either a mon or a mgr.
The playbook fails because `rgw_instances` is undefined.
The idea here is to make sure this variable is always defined.
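A minimal sketch of the idea (the default actually built by the role is more elaborate):
```
- name: set_fact rgw_instances
  set_fact:
    rgw_instances: "{{ rgw_instances | default([]) }}"
```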
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit aa0cc9381d)
When running the playbook with `--limit`, if the targeted play doesn't match
hosts present in the mgr group, the playbook can fail.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2063029
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 72e4654aae)
(cherry picked from commit d1e4b83106)
The radosgw system user creation will fail when `rgw_instances`
is set at the host_vars level because this variable won't be set
on monitor nodes; given that this is where the task is delegated, it fails.
The idea here is to check over all defined rgw instances and set a
boolean fact in order to know whether at least one instance has `rgw_zonemaster` set
to `True`.
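A hedged sketch of such a fact (the fact name and filter chain are illustrative):
```
- name: set_fact zonemaster_defined
  set_fact:
    zonemaster_defined: "{{ groups.get(rgw_group_name, [])
                            | map('extract', hostvars, 'rgw_instances')
                            | select('defined')
                            | flatten
                            | selectattr('rgw_zonemaster', 'defined')
                            | selectattr('rgw_zonemaster')
                            | list | length > 0 }}"
```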
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2034595
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since a variable encrypted with vault is no longer a string but an
encrypted object, we can't use the `| length` filter directly; we have to convert it
to a string first.
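A hedged illustration of the pattern (the variable name is illustrative):
```
- name: fail when the dashboard admin password is empty
  fail:
    msg: "dashboard_admin_password must not be empty"
  when: dashboard_admin_password | string | length == 0
```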
Fixes: #6991
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6ad7e52869)
Otherwise, the osd play in rolling_update can fail when it tries to
disable it before upgrading osd nodes.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 45a1d634d8)
The ceph-facts role makes decisions based on the fact `rolling_update`, so
it must be called before we run this role.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2014304
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e5edcc4214)
This change is needed in order to support --limit on mon nodes.
Otherwise, a call to `hostvars[groups[mon_group_name][0]]['_current_monitor_address']`
throws an error:
```
"The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute '_current_monitor_address'"
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2014304#c28
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 82eee4303b)