Add the possibility to deploy an rgw multisite configuration with a mix of
secondary and primary zones on the same rgw node.
Previously, all instances on a given node had to be either primary
zones *or* secondary zones.
Now you can define an rgw instance as follows:
```
rgw_instances:
  - instance_name: 'rgw0'
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_realm: 'france'
    rgw_zonegroup: 'zonegroup-france'
    rgw_zone: paris-00
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    rgw_zone_user: jacques.chirac
    rgw_zone_user_display_name: "Jacques Chirac"
    system_access_key: P9Eb6S8XNyo4dtZZUUMy
    system_secret_key: qqHCUtfdNnpHq3PZRHW5un9l0bEBM812Uhow0XfB
    endpoint: http://192.168.101.12:8080
```
Basically, it's now possible to define `rgw_zonemaster`,
`rgw_zonesecondary` and `rgw_zonegroupmaster` at the instance
level instead of at the node level.
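For instance, the same node could now host both a primary and a secondary
zone instance; a hedged sketch reusing the keys from the example above
(realm/zone names, ports and endpoints are illustrative placeholders):
```
rgw_instances:
  - instance_name: 'rgw0'
    rgw_zonemaster: true
    rgw_zonesecondary: false
    rgw_zonegroupmaster: true
    rgw_realm: 'france'
    rgw_zonegroup: 'zonegroup-france'
    rgw_zone: paris-00
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
  - instance_name: 'rgw1'
    rgw_zonemaster: false
    rgw_zonesecondary: true
    rgw_zonegroupmaster: false
    rgw_realm: 'usa'
    rgw_zonegroup: 'zonegroup-usa'
    rgw_zone: boston-00
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8081
    endpoint: http://192.168.101.13:8081
```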
Also, this commit adds an option `deploy_secondary_zones` (default `True`)
which can be set to `False` in order to explicitly ask the playbook not to
deploy secondary zones when the corresponding endpoints are not deployed
yet.
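A hedged sketch of how that can be set, e.g. in an extra-vars file (only
the variable name comes from this commit):
```
# skip secondary zones until the corresponding endpoints are reachable;
# equivalent to: ansible-playbook ... -e deploy_secondary_zones=False
deploy_secondary_zones: false
```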
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1915478
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If for some reason `get_zonegroup()` returns a failure, we must handle it
and make the module exit properly instead of failing with the following
Python traceback:
```
Traceback (most recent call last):
  File "./AnsiballZ_radosgw_zone.py", line 247, in <module>
    _ansiballz_main()
  File "./AnsiballZ_radosgw_zone.py", line 234, in _ansiballz_main
    exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS)
  File "./AnsiballZ_radosgw_zone.py", line 202, in debug
    runpy.run_module(mod_name='ansible.modules.radosgw_zone', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.6/runpy.py", line 205, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 467, in <module>
    main()
  File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 463, in main
    run_module()
  File "/home/vagrant/.ansible/tmp/ansible-tmp-1610728441.41-685133-218973990589597/debug_dir/ansible/modules/radosgw_zone.py", line 425, in run_module
    zonegroup = json.loads(_out)
  File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since the default of `osd_objectstore` changed as of 3.2, some
deployments might have a mix of filestore and bluestore OSDs on the same
node. In some specific cases, a filestore OSD may share a journal/db
device with a bluestore OSD. We shouldn't try to redeploy in this context
because ceph-volume will complain (either because you can't pass a
partition in lvm batch mode, or about the GPT header).
The safest option is to skip the migration on the node when such a mix is
detected, or to force the migration of all OSDs, including those already
using bluestore (the option `force_filestore_to_bluestore=True` has to be
passed as an extra var).
If all OSDs are using filestore, then they will be migrated to
bluestore.
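A hedged sketch of forcing the migration instead (the playbook path is an
assumption; only the variable name comes from this commit):
```
# e.g. ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml \
#        -e force_filestore_to_bluestore=True
force_filestore_to_bluestore: true
```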
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1875777
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit checks that the length of `virtual_ips` doesn't exceed the
length of `groups[rgwloadbalancer_group_name]`.
It also ensures this variable is defined when
`groups[rgwloadbalancer_group_name]` contains at least one node.
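A minimal sketch of such a validation, assuming only the variable and
group names mentioned above:
```
- name: ensure virtual_ips is defined and fits the loadbalancer group
  assert:
    that:
      - virtual_ips is defined
      - virtual_ips | length <= groups[rgwloadbalancer_group_name] | length
  when: groups[rgwloadbalancer_group_name] | default([]) | length > 0
```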
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
While 2ca33641 fixed a bug in the way the `keepalived.conf.j2` template matched
hostnames to set the VRRP `MASTER`/`BACKUP` states, it also introduced a
regression in the case where `virtual_ips` is a list of more than one IP
address.
The previous behavior resulted in each host in the `rgwloadbalancers` group
being `MASTER` for one of the `virtual_ips`, but the new behavior caused the
first host to be `MASTER` for all the IP addresses in `virtual_ips`.
This commit restores the original behavior.
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
The current check makes no sense: it checks whether any monitor other
than the one being played (either a previous one already converted, or a
following one not yet converted) is present in the quorum.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1909011
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Instead of using the command module to retrieve a sysctl value, we can
use the slurp module and read the value directly from /proc.
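A minimal sketch, with a sysctl key chosen purely for illustration:
```
- name: read vm.min_free_kbytes directly from /proc
  slurp:
    src: /proc/sys/vm/min_free_kbytes
  register: _min_free_kbytes

# slurp returns base64-encoded content:
# "{{ _min_free_kbytes.content | b64decode | trim }}"
```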
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Refactor this module due to recent changes in Ceph Pacific.
The password must now be passed with the `-i` option.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since d7fd468 the ansible modules have been using the common code shared
in the module_utils directory, but that directory wasn't added to the
spec file.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1910214
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The path where ceph.conf is located (/etc/ceph) was missing from the
Docker container bind mounts, which caused errors.
Signed-off-by: Mike Currin <currin@gmail.com>
When collocating rgw with either a mon, mgr or osd, switching from a
single site to a multisite rgw setup failed because of the handlers
triggered between the ansible play of the collocated daemon and the play
of the rgw. Since the multisite changes were not yet applied, the
handlers failed.
The idea here is to ensure we run the multisite configuration from the
ceph-handler role before the restart happens, so it won't complain
because of a non-existing multisite configuration.
(Note: this is also valid when simply changing a multisite configuration)
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1888630
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ceph-volume module relies on environment variables to determine if
the command should be executed within a container or not.
The containerized parameter isn't used anymore and we can remove it.
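A hedged sketch of how the container context is conveyed now (the
environment variable names are assumed from ceph-ansible's container
facts, not confirmed by this commit):
```
- name: zap a device in a containerized deployment
  ceph_volume:
    action: zap
    data: /dev/sdb
  environment:
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
    CEPH_CONTAINER_BINARY: "{{ container_binary }}"
```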
Fixes: #6153
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Instead of iterating over the host list to add the node/label to the
orchestrator host configuration, we can do it in parallel.
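A hedged sketch of the parallel form (the command and delegation target
are assumptions): every host in the play runs the task at once, delegating
to the first monitor, instead of one task looping over the host list.
```
- name: add host to the orchestrator configuration
  command: "ceph orch host add {{ ansible_facts['hostname'] }}"
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"
```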
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When creating a new pool, target_size_ratio was ignored by the ceph_pool.py
ansible module. target_size_ratio is now used when pg_autoscale_mode is on.
Tests were added to the library tests.
This also adds its use in the ceph-rgw role.
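A minimal sketch of the resulting call (the pool name, the ratio and the
exact parameter spelling are assumptions):
```
- name: create a pool with a target size ratio
  ceph_pool:
    name: foo
    cluster: ceph
    pg_autoscale_mode: "on"
    target_size_ratio: 0.3
    application: rgw
```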
Signed-off-by: Fabien Brachere <fabien.brachere@celeste.fr>
Since the major ceph-volume lvm batch refactoring, the report value
is different.
Before the refactoring, the report was a dict with the list of OSDs to
be created under the "osds" key.
After the refactoring, the report is a list of dicts.
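Roughly, a hedged sketch of the shape change (fields trimmed for
illustration):
```
# before the refactoring:
#   {"osds": [{"data": "/dev/sdb", ...}, ...], ...}
# after the refactoring:
#   [{"data": "/dev/sdb", ...}, ...]
```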
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This avoids interactive mode for `vagrant box remove`.
This can happen when there are leftovers from a previous
deployment (VMs not destroyed as expected).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Currently we create an object on the primary site but then try to read
that object back from the master, which doesn't make sense; we should
try to read it from a secondary site.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Set the fetch_directory variable in defaults/main.yml instead of using
the `default` jinja filter in tasks/main.yml.
Fixes: #6072
Signed-off-by: Karl-Heinz Preuß <karl-heinz.preuss@cms.hu-berlin.de>
This reverts commit 4d1fdd2b05.
It broke backward compatibility with the previous osd_memory_target
calculation, and the value could end up lower than the minimum allowed
(896M), which causes some ceph commands (like ceph assimilate-conf) to
fail.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The alertmanager, grafana and prometheus configuration files are
generated with the template module, which doesn't allow for using
config overrides.
Instead we can use the config_template action plugin and add a
new variable for overrides (one for each component).
With this patch, one should be able to add configuration to the
alertmanager with the following:
```
---
alertmanager_conf_overrides:
  global:
    smtp_smarthost: 'localhost:25'
...
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1902999
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
81233dd introduced a regression with the ceph_ec_profile module call in
the ceph-rgw role due to the missing cluster module parameter.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The conversion fact task was only executed when the grafana_server_group_name
variable was explicitly set in the user configuration. If a user was using
the default value, the conversion wasn't executed.
This also adds back the default grafana_server_group_name value, in case the
user was using the default value, to avoid an undefined variable error.
Instead of hardcoding the "monitoring" group name, we can reuse the
monitoring_group_name variable.
There's no need to override the monitoring_group_name variable: it's either
using the default value or the one defined by the user.
Finally, this removes the delegate_to statement on the add_host task since
it's always executed on the ansible controller.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1903732
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since [1] has been resolved, we don't need to apply this workaround
anymore.
[1] https://tracker.ceph.com/issues/46759
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since the fetch_directory variable has been dropped, we no longer need
the override in the rhcs file.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since the backing generate_secret() just hands out urandom output,
running as privileged doesn't seem to be required. It's not
desirable to provide sudo in some Ansible runner environments.
Signed-off-by: Jukka Nousiainen <jukka.nousiainen@csc.fi>
This adds the cephadm_adopt ansible module to replace the command module
usage with the cephadm adopt command.
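A minimal usage sketch (the daemon name is a placeholder; the
`name`/`style` parameters are assumed):
```
- name: adopt a legacy mon daemon with cephadm
  cephadm_adopt:
    name: "mon.{{ ansible_facts['hostname'] }}"
    style: legacy
```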
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Let's discard the ansible-lint error 306 and add a "# noqa 306" on tasks
where we don't need `set -o pipefail`.
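A hedged illustration of the marker on a shell task (the task content is
a placeholder):
```
- name: get the ceph version through a pipeline  # noqa 306
  shell: ceph --version | head -n 1
  changed_when: false
```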
Fixes: #6090
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We should always use the ceph_volume ansible module when possible.
This patch replaces the ceph-volume inventory and lvm {list,zap} commands
called via the command/shell modules with the corresponding calls to the
ceph_volume module.
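A minimal sketch of the equivalent module calls (the device path is a
placeholder):
```
- name: list ceph-volume lvm metadata for a device
  ceph_volume:
    action: list
    data: /dev/sdb
  register: ceph_volume_list

- name: zap a device
  ceph_volume:
    action: zap
    data: /dev/sdb
```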
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the ceph_crush_rule ansible module to replace the command
module usage with the ceph osd crush rule commands.
This module can manage both erasure and replicated crush rules.
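A minimal usage sketch (the rule name, bucket and device class values are
placeholders):
```
- name: create a replicated crush rule
  ceph_crush_rule:
    name: replicated_hdd
    cluster: ceph
    rule_type: replicated
    bucket_root: default
    bucket_type: host
    device_class: hdd
```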
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This allows skipping this task if really desired.
Use it carefully. Use it at your own risk.
Fixes: #6073
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Instead of iterating over all client nodes sequentially with a loop, we
can use the group_by ansible built-in.
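A hedged sketch (the group key and condition are placeholders): every
host evaluates this in parallel and joins the matching group, replacing a
serial add_host loop.
```
- name: group client nodes
  group_by:
    key: "{{ 'first_clients' if inventory_hostname == groups[client_group_name] | first else 'other_clients' }}"
```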
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the cephadm_bootstrap ansible module to replace the command
module usage with the cephadm bootstrap command.
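A minimal usage sketch (mon_ip is a placeholder; the extra toggles are
assumed parameters):
```
- name: bootstrap the cluster with cephadm
  cephadm_bootstrap:
    mon_ip: 192.168.42.10
    dashboard: false
    monitoring: false
    firewalld: false
```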
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the ceph_osd_flag ansible module to replace the command module
usage with the ceph osd set/unset commands.
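A minimal usage sketch (the flag name is any flag accepted by
`ceph osd set`; unsetting via `state: absent` is an assumption):
```
- name: set the noout flag
  ceph_osd_flag:
    name: noout
    cluster: ceph
```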
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
As of podman `2.0.5`, `--cap-add` and `--privileged` are mutually
exclusive options.
```
Nov 30 13:56:30 magna089 podman[171677]: Error: invalid config provided: CapAdd and privileged are mutually exclusive options
```
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1902149
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
As of podman 2.0.5, the `--ignore` param conflicts with `--storage`.
```
Nov 30 13:53:10 magna089 podman[164443]: Error: --storage conflicts with --volumes, --all, --latest, --ignore and --cidfile
```
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds the ceph_osd ansible module to replace the command module
usage with the ceph osd destroy/down/in/out/purge/rm commands.
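A minimal usage sketch (the OSD ids are placeholders; state mirrors the
subcommands listed above):
```
- name: mark osds out
  ceph_osd:
    ids: [0, 1]
    cluster: ceph
    state: out
```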
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the ceph_mgr_module ansible module to replace the command module
usage with the ceph mgr module enable/disable commands.
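A minimal usage sketch (the module name is just an example; the
enable/disable states mirror the CLI):
```
- name: enable the pg_autoscaler mgr module
  ceph_mgr_module:
    name: pg_autoscaler
    cluster: ceph
    state: enable
```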
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
- The plugins/filter directory wasn't present in the flake8 workflow
  configuration.
- Fix the flake8 syntax errors.
- Add the directory to the PYTHONPATH environment variable for pytest,
  to avoid importing the filter plugin via sys.
- Add a unit test on the missing netaddr module import.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
`ceph.target` should only be disabled, not stopped. Otherwise, in a
collocation scenario, you stop other collocated services in the OSD play,
which isn't what we want to do. Each daemon has its corresponding play
for managing the transition to containers.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1901865
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>