When deploying the ceph OSDs via packages, the ceph-osd@.service
unit is configured as enabled-runtime.
This means that each ceph-osd service will inherit that state.
The enabled-runtime systemd state doesn't survive a reboot.
For non-containerized deployments the OSDs still start after a
reboot because the ceph-volume@.service and/or ceph-osd.target
units do the job.
$ systemctl list-unit-files|egrep '^ceph-(volume|osd)'|column -t
ceph-osd@.service enabled-runtime
ceph-volume@.service enabled
ceph-osd.target enabled
When switching to a containerized deployment we stop/disable
ceph-osd@XX.service, ceph-volume and ceph.target and then remove the
systemd unit files.
But the new systemd units for the containerized ceph-osd services still
inherit from the ceph-osd@.service unit file.
As a consequence, if an OSD host reboots after the playbook execution,
the ceph-osd services won't come back because they aren't enabled at
boot.
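A minimal sketch of the fix, assuming an Ansible task that explicitly
enables the containerized units at boot (the `osd_ids` loop variable is
illustrative):
```
# Illustrative task: enable the containerized ceph-osd units so they
# survive a reboot (osd_ids is a hypothetical list of OSD ids).
- name: enable ceph-osd@ services at boot
  systemd:
    name: "ceph-osd@{{ item }}"
    enabled: true
    daemon_reload: true
  loop: "{{ osd_ids }}"
```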
This patch also adds a reboot and testinfra run after running the switch
to container playbook.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1881288
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
There are some use cases where the execution of the ceph-ansible client
role needs to be skipped even though the client section of the
inventory isn't empty.
This can happen in contexts where the services are colocated or when
an all-in-one deployment is performed.
This change adds a 'ceph_client' tag that leaves the ceph-ansible
execution flow unaltered by default while making it possible to
include or exclude a set of tasks using this tag.
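A minimal sketch of how such a tag can be applied, assuming the role is
imported from the main playbook (the play layout is illustrative):
```
# Illustrative play: tagging the client role so it can be excluded with
# `ansible-playbook --skip-tags ceph_client` (or selected with --tags).
- hosts: clients
  tasks:
    - import_role:
        name: ceph-client
      tags: ceph_client
```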
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
This sets the `dashboard_grafana_api_no_ssl_verify` default value
depending on whether `dashboard_crt` and `dashboard_key` are set.
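A sketch of what such a conditional default can look like; the exact
condition (skipping verification when a custom, typically self-signed,
certificate/key pair is provided) is an assumption:
```
# Assumed default: skip SSL verification only when a custom (likely
# self-signed) certificate and key are supplied.
dashboard_grafana_api_no_ssl_verify: "{{ true if dashboard_crt | length > 0 and dashboard_key | length > 0 else false }}"
```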
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If some OSDs are to be created and others already exist, the calculation
only counted the OSDs to be created. This changes the calculation to
take all OSDs into account.
Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
cec994b introduced a regression when a mgr is collocated with a mon.
During the mon upgrade, the mgr service is masked to avoid it being
restarted on package updates.
The start mgr task then fails because the service is still masked.
Instead we should unmask it.
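A minimal sketch of the fix, assuming the systemd module is used to
unmask the unit before starting it (the unit name is illustrative):
```
# Illustrative task: unmask the mgr unit so the subsequent start succeeds.
- name: unmask ceph-mgr service
  systemd:
    name: "ceph-mgr@{{ ansible_hostname }}"
    masked: false
```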
Fixes: #5983
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
bd611a7 introduced the new ceph_fs module but missed updating some tasks
in the rolling_update and shrink-mds playbooks.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph status command returns a lot of information that ends up stored
in variables and/or facts, consuming resources for nothing.
When checking the cluster health, we only use the health structure from
the ceph status output.
To optimize this, we can use the ceph health command, which contains
the same needed information.
$ ceph status -f json | wc -c
2001
$ ceph health -f json | wc -c
46
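A sketch of the leaner check, assuming the command module parses the
`status` field from `ceph health -f json` (the task shape and retry
values are illustrative):
```
# Illustrative task: poll only the health status instead of the full
# cluster status output.
- name: check cluster health
  command: ceph health -f json
  register: ceph_health_status
  changed_when: false
  retries: 10
  delay: 5
  until: (ceph_health_status.stdout | from_json)['status'] == 'HEALTH_OK'
```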
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph status command returns a lot of information that ends up stored
in variables and/or facts, consuming resources for nothing.
When checking the rgw/rbdmirror services status, we only use the
servicemap structure from the ceph status output.
To optimize this, we can use the ceph service dump command, which contains
the same needed information.
This command returns less information and is slightly faster than the ceph
status command.
$ ceph status -f json | wc -c
2001
$ ceph service dump -f json | wc -c
1105
$ time ceph status -f json > /dev/null
real 0m0.557s
user 0m0.516s
sys 0m0.040s
$ time ceph service dump -f json > /dev/null
real 0m0.454s
user 0m0.434s
sys 0m0.020s
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph status command returns a lot of information that ends up stored
in variables and/or facts, consuming resources for nothing.
When checking the quorum status, we only use the quorum_names
structure from the ceph status output.
To optimize this, we can use the ceph quorum_status command, which
contains the same needed information.
This command returns less information.
$ ceph status -f json | wc -c
2001
$ ceph quorum_status -f json | wc -c
957
$ time ceph status -f json > /dev/null
real 0m0.577s
user 0m0.538s
sys 0m0.029s
$ time ceph quorum_status -f json > /dev/null
real 0m0.544s
user 0m0.527s
sys 0m0.016s
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph status command returns a lot of information that ends up stored
in variables and/or facts, consuming resources for nothing.
When checking the pgs state, we only use the pgmap structure from the
ceph status output.
To optimize this, we can use the ceph pg stat command, which contains
the same needed information.
This command returns less information (only about pgs) and is slightly
faster than the ceph status command.
$ ceph status -f json | wc -c
2000
$ ceph pg stat -f json | wc -c
240
$ time ceph status -f json > /dev/null
real 0m0.529s
user 0m0.503s
sys 0m0.024s
$ time ceph pg stat -f json > /dev/null
real 0m0.426s
user 0m0.409s
sys 0m0.016s
The data returned by ceph status is even bigger with the Nautilus
release.
$ ceph status -f json | wc -c
35005
$ ceph pg stat -f json | wc -c
240
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Improve the way the "OSDs created" check is performed.
This replaces the ceph status command with the ceph osd stat command;
the osdmap structure isn't needed anymore.
$ ceph status -f json | wc -c
2001
$ ceph osd stat -f json | wc -c
132
$ time ceph status -f json > /dev/null
real 0m0.563s
user 0m0.526s
sys 0m0.036s
$ time ceph osd stat -f json > /dev/null
real 0m0.457s
user 0m0.411s
sys 0m0.045s
Signed-off-by: wangxiaotong <wangxiaotong@fiberhome.com>
After rolling updates performed with
`infrastructure-playbooks/rolling_updates.yml`, files located in
`/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}` had mode 0755 (including
the keyring), making them world-readable.
This commit splits the task that recursively configured permissions on
`/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}` into two tasks
(sketched below):
1. Set the ownership and mode of the directory itself;
2. Recursively set ownership inside the directory, but don't modify the mode.
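A minimal sketch of the split, assuming the Ansible file module
(ownership values are illustrative):
```
# Illustrative tasks: set the mode on the directory itself, then recurse
# for ownership only, so the keyring keeps its restrictive mode.
- name: set ownership and mode of the mon directory itself
  file:
    path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
    state: directory
    owner: ceph
    group: ceph
    mode: "0755"

- name: recursively set ownership without touching the mode
  file:
    path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
    state: directory
    owner: ceph
    group: ceph
    recurse: true
```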
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
Move the imports to the top of the file and remove the unused module imports.
- E402 module level import not at top of file
- F401 'xxxx' imported but unused
This also removes the '# noqa E402' statement from the code.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Instead of using the ceph auth get-or-create command via the ansible
command module, we can use the ceph_key module.
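A minimal sketch of the replacement, assuming the ceph_key module's
present state (the entity name and caps are illustrative):
```
# Illustrative task: create the keyring through the module instead of
# shelling out to `ceph auth get-or-create`.
- name: create a client keyring
  ceph_key:
    name: client.example
    state: present
    caps:
      mon: "allow r"
      osd: "allow rwx"
```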
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Instead of using the ceph auth get command via the ansible command
module, we can use the ceph_key module with the info state.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The ceph_key module currently only supports json output for the
info state.
When using this state on an entity, we sometimes want the output
as:
- plain, for copying it to another node.
- json, in order to get only a subset of the entity's information (like
the key or caps).
This patch adds the output_format parameter, which defaults to json
for backward compatibility. It also removes the internal, hardcoded
variable that was also called output_format.
In addition to json and plain, xml and yaml values are also available.
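A minimal sketch of the new parameter in use (the entity name is
illustrative):
```
# Illustrative task: fetch the keyring in plain format so it can be
# copied verbatim to another node.
- name: get the rgw keyring as plain text
  ceph_key:
    name: client.rgw.example-node
    state: info
    output_format: plain
  register: rgw_keyring
```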
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Otherwise this task fails if no permission is set on the item.
Previously the code omitted the mode parameter if it was not set, but
this was lost with commit ab370b6ad8.
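A minimal sketch of the restored behaviour, assuming Ansible's `omit`
placeholder (the looped item fields and list name are illustrative):
```
# Illustrative task: fall back to `omit` so the module leaves the mode
# untouched when the item doesn't define one.
- name: copy key files
  copy:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    mode: "{{ item.mode | default(omit) }}"
  loop: "{{ keys_to_copy }}"
```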
Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
Since we changed the podman configuration to use detach mode with the
systemd type set to forking, the container logs aren't present in
journald anymore.
The default conmon log driver is k8s-file.
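A sketch of the kind of fix this implies, assuming the container is
started with podman's `--log-driver` flag (the task and container names
are illustrative):
```
# Illustrative task: pass --log-driver journald so container logs land
# in journald again instead of the k8s-file default.
- name: run the ceph-osd container with journald logging
  command: >-
    podman run --detach --net=host
    --log-driver journald
    --name ceph-osd-0
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
```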
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1890439
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When using the curl command with a bracketed IPv6 address, we need to
use the -g option, otherwise the command fails:
$ curl http://[fdc2:328:750b:6983::6]:8080
curl: (3) [globbing] error: bad range specification after pos 9
$ curl -g http://[fdc2:328:750b:6983::6]:8080
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This file is currently deployed with mode '0644', making it
readable by any user on the system.
Since it contains sensitive information, it should be readable by the
owner only.
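A minimal sketch of the tightened deployment, assuming a template task
(the file path and template name are illustrative):
```
# Illustrative task: deploy the file readable by its owner only.
- name: deploy the configuration file
  template:
    src: config.j2
    dest: /etc/ceph/example.conf
    owner: ceph
    group: ceph
    mode: "0600"
```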
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1890119
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit cleans up the `main.yml` task file of `ceph-config`.
It drops the local ceph.conf generation.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When running rhel8 containers on a rhel7 host, after zapping an OSD
there's a discrepancy in the lvmetad cache, which needs to be refreshed.
Otherwise, the host still sees the LV, which can confuse the user.
If the user tries to redeploy an OSD, it fails because the LV isn't
present and needs to be recreated, e.g.:
```
stderr: lsblk: ceph-block-8/block-8: not a block device
stderr: blkid: error: ceph-block-8/block-8: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm prepare [-h] --data DATA [--data-size DATA_SIZE]
[--data-slots DATA_SLOTS] [--filestore]
[--journal JOURNAL]
[--journal-size JOURNAL_SIZE] [--bluestore]
[--block.db BLOCK_DB]
[--block.db-size BLOCK_DB_SIZE]
[--block.db-slots BLOCK_DB_SLOTS]
[--block.wal BLOCK_WAL]
[--block.wal-size BLOCK_WAL_SIZE]
[--block.wal-slots BLOCK_WAL_SLOTS]
[--osd-id OSD_ID] [--osd-fsid OSD_FSID]
[--cluster-fsid CLUSTER_FSID]
[--crush-device-class CRUSH_DEVICE_CLASS]
[--dmcrypt] [--no-systemd]
ceph-volume lvm prepare: error: Unable to proceed with non-existing device: ceph-block-8/block-8
```
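A minimal sketch of the refresh, assuming a plain `pvscan --cache` run
on the host after zapping (the task placement is illustrative):
```
# Illustrative task: refresh the lvmetad cache so zapped LVs disappear
# from the host's view.
- name: refresh the lvmetad cache
  command: pvscan --cache
  changed_when: false
```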
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1886534
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Correctly set `osd_ids_non_container.stdout_lines` to an empty list if it's
undefined (i.e. in check mode).
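A minimal sketch of the guard, assuming Jinja's `default` filter (the
consuming task is illustrative):
```
# Illustrative usage: fall back to an empty list when the registered
# command was skipped in check mode.
- name: start the osd services
  service:
    name: "ceph-osd@{{ item }}"
    state: started
  loop: "{{ osd_ids_non_container.stdout_lines | default([]) }}"
```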
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
Skip the `get initial keyring when it already exists` task when both commands
whose `stdout` output it requires have been skipped (e.g. when running in check
mode).
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
The current task installs the ceph-crash key to "most" hosts via
"delegate_to". This key is only used by the ceph-crash daemon and should
just be installed on all hosts targeted by this role. There is no need
to use a delegated task.
Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
The service should be started after the ceph-osd systemd overrides have
been added; otherwise they aren't taken into account.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1860739
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The container_binding_name package was only mandatory when we were
using the docker modules (docker_image and docker_container). Since
we now manage both docker and podman containers without the dedicated
modules, we can remove it.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Using the + operator on two lists doesn't filter out duplicate
entries.
Currently each OSD is started (via systemd) twice.
Instead we can use the union filter.
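A minimal sketch of the difference (the variable names are illustrative):
```
# Illustrative fact: `union` deduplicates, unlike list concatenation.
- set_fact:
    # ['0', '1'] + ['1', '2']        -> ['0', '1', '1', '2']
    # ['0', '1'] | union(['1', '2']) -> ['0', '1', '2']
    osd_ids: "{{ running_osd_ids | union(lvm_osd_ids) }}"
```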
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The `stat --printf=%n` command returns something like the following:
```
ok: [osd0] => changed=false
cmd: |-
stat --printf=%n /var/run/ceph/ceph-osd*.asok
delta: '0:00:00.009388'
end: '2020-10-06 06:18:28.109500'
failed_when_result: false
rc: 0
start: '2020-10-06 06:18:28.100112'
stderr: ''
stderr_lines: <omitted>
stdout: /var/run/ceph/ceph-osd.2.asok/var/run/ceph/ceph-osd.5.asok
stdout_lines: <omitted>
```
This makes the next task, "check if the ceph osd socket is in-use",
grep like this:
```
ok: [osd0] => changed=false
cmd:
- grep
- -q
- /var/run/ceph/ceph-osd.2.asok/var/run/ceph/ceph-osd.5.asok
- /proc/net/unix
```
which obviously fails because this path never exists, breaking the
OSD handler.
Let's use the `find` module instead.
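A minimal sketch of the replacement, assuming the find module collects
each socket as a separate result (the pattern and register name are
illustrative):
```
# Illustrative task: list the OSD admin sockets one per result instead
# of a single concatenated string.
- name: find ceph-osd sockets
  find:
    paths: /var/run/ceph
    patterns: "ceph-osd.*.asok"
    file_type: any
  register: osd_sockets
```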
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Make sure the `site.yml.sample` playbook can be run in check mode by skipping
tasks that try to read the output of commands that have been skipped.
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
The `all_daemons` scenario can't handle pools with `size: 3` because we
have one OSD node in root=HDD and two nodes in root=default.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds the radosgw_zone ansible module, replacing command module
usage with the radosgw-admin zone command.
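A sketch of what a call to such a module can look like; the parameter
names are assumptions for illustration:
```
# Illustrative task: manage a zone through the module instead of
# `radosgw-admin zone create` (parameter names are hypothetical).
- name: create the rgw zone
  radosgw_zone:
    name: default
    realm: myrealm
    zonegroup: myzonegroup
    state: present
```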
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the radosgw_zonegroup ansible module, replacing command
module usage with the radosgw-admin zonegroup command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the radosgw_realm ansible module, replacing command module
usage with the radosgw-admin realm command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This adds the radosgw_user ansible module, replacing command module
usage with the radosgw-admin user command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This playbook isn't needed anymore; we can achieve the same operation
by running the main playbook with the `--limit` option.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This adds the ceph_fs ansible module, replacing command module usage
with the ceph fs command.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
tests/conftest.py and the tests present in tests/functional/tests/ were
missed in the previous commit.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
It's time to remove this backward compatibility. Users had enough time
to convert their openstack_keys and key values.
We now fail in ceph-validate if the caps key isn't set.
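A minimal sketch of a now-valid key definition; the entity and pool
names are illustrative:
```
# Illustrative definition: the caps key is now mandatory for each entry.
openstack_keys:
  - name: client.glance
    caps:
      mon: "allow r"
      osd: "allow rwx pool=images"
    mode: "0600"
```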
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Currently the `ceph_key` module doesn't support using a different
keyring than `client.admin`.
This commit adds the possibility to use a different keyring.
Usage:
```
ceph_key:
name: "client.rgw.myrgw-node.rgw123"
cluster: "ceph"
user: "client.bootstrap-rgw"
user_key: /var/lib/ceph/bootstrap-rgw/ceph.keyring
dest: "/var/lib/ceph/radosgw/ceph-rgw.myrgw-node.rgw123/keyring"
caps:
osd: 'allow rwx'
mon: 'allow rw'
import_key: False
owner: "ceph"
group: "ceph"
mode: "0400"
```
Where:
- `user` corresponds to `-n (--name)`
- `user_key` corresponds to `-k (--keyring)`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When rgw and osd are collocated, the current workflow prevents
scaling out the radosgw_num_instances parameter when rerunning the
playbook in baremetal deployments.
When ceph-osd notifies handlers, the rgw handlers are triggered
too. The issue is that they are triggered before the ceph-rgw
role has run.
When a scaleout operation on `radosgw_num_instances` is expected,
this causes an issue because the keyrings haven't been created yet,
so the new instances won't start.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1881313
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit drops the nested jinja construction in this set_fact task.
It also renames it to `container_exec_start_osd`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>