Commit Graph

5972 Commits (1f7b3ac5a387485b0e36994cb00c28eac8ee7572)

Author SHA1 Message Date
Guillaume Abrioux 7511195738 common: do not log keyring secret
Let's not display any keyring secret by default in the Ansible log.
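
A minimal sketch of the idea, with a hypothetical task name; `no_log` is the standard Ansible mechanism for this:

```
- name: fetch a keyring (hypothetical task illustrating the fix)
  command: ceph auth get client.admin
  register: keyring_output
  changed_when: false
  no_log: true  # keeps the keyring secret out of the Ansible log
```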

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1980744

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-08-11 17:33:34 +02:00
Dimitri Savineau 5e0ace7e54 ceph-dashboard: fix TLS cert openssl generation
With OpenSSL versions prior to 1.1.1 (like CentOS 7 with 1.0.2k), the -addext
option doesn't exist.
As a solution, this uses the default openssl.cnf configuration file as a
template and adds the subjectAltName to the v3_ca section. This temporary
openssl configuration file is removed after the TLS certificate creation.
This patch also moves the run_once statement to the block level.
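
A sketch of the approach under stated assumptions: the openssl.cnf path, file
names, and the subj_alt_names value are placeholders, not the exact ones used
by the role.

```
- name: copy the default openssl.cnf to use as a template (path varies by distro)
  copy:
    src: /etc/pki/tls/openssl.cnf
    dest: /tmp/dashboard-openssl.cnf
    remote_src: true

- name: add the subjectAltName to the v3_ca section
  lineinfile:
    path: /tmp/dashboard-openssl.cnf
    insertafter: '^\[ v3_ca \]'
    line: "subjectAltName = {{ subj_alt_names }}"

- name: generate the certificate without -addext (works with OpenSSL 1.0.2)
  command: >
    openssl req -new -nodes -x509 -subj /O=IT/CN=ceph-dashboard
    -config /tmp/dashboard-openssl.cnf -extensions v3_ca -days 3650
    -keyout /etc/ceph/dashboard.key -out /etc/ceph/dashboard.crt

- name: remove the temporary openssl configuration file
  file:
    path: /tmp/dashboard-openssl.cnf
    state: absent
```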

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-09 14:19:17 -04:00
VasishtaShastry 478d9fdcb6 Fixes typo in rgw-add-users-buckets playbook
Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
2021-08-09 15:35:55 +02:00
Guillaume Abrioux 6f1a0634f7 dashboard: subj_alt_names fact refactor
The current way the variable is built results in:

```
2021-08-03 04:18:23,020 - ceph.ceph - INFO - ok: [ceph-sangadi-4x-indpt6-node1-installer] => changed=false
  ansible_facts:
    subj_alt_names: |-
      subjectAltName=ceph-sangadi-4x-indpt6-node1-installer/subjectAltName=10.0.210.223/subjectAltName=ceph-sangadi-4x-indpt6-node1-installersubjectAltName=ceph-sangadi-4x-indpt6-node2/subjectAltName=10.0.210.252/subjectAltName=ceph-sangadi-4x-indpt6-node2/
```

which is incorrect.
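
A hedged sketch of the intended shape: the `subjectAltName=` prefix should
appear once, followed by comma-separated `DNS:` entries (the group name below
is an assumption):

```
- name: build a single well-formed SAN string (sketch)
  set_fact:
    subj_alt_names: "subjectAltName=DNS:{{ groups['mgrs'] | join(',DNS:') }}"
```

This would render something like `subjectAltName=DNS:node1,DNS:node2` instead
of repeating the `subjectAltName=` prefix per entry.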

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-08-05 18:53:38 -04:00
Guillaume Abrioux 930fc4c850 adopt: import rgw ssl certificate into kv store
Without this, when rgw is managed by cephadm, it fails to start because
the ssl certificate isn't present in the kv store.
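
A sketch of the import, assuming cephadm reads rgw certificates from the ceph
config-key store; the key path and file path below are assumptions, not the
verified ones:

```
- name: import the rgw ssl certificate into the kv store (key path is an assumption)
  command: >
    ceph config-key set rgw/cert/rgw.{{ ansible_facts['hostname'] }}
    -i /etc/ceph/rgw.pem
```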

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1987010
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1988404

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-05 13:02:25 -04:00
Dimitri Savineau 7c38e64681 cephadm-adopt: remove nfs pool and namespace
This has been removed from the code (the orch apply nfs command no longer
takes them). The default pool name is now .nfs.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-05 16:59:54 +02:00
Dimitri Savineau 386661699b infra: use dedicated variables for balancer status
The balancer status is registered during the cephadm-adopt, rolling_update
and switch2container playbooks. But it is also used in the ceph-handler role
which is included in those playbooks too.
Even if the ceph-handler tasks are skipped for rolling_update and
switch2container, the balancer_status variable is overwritten with the
skipped task result.

```
play1:
  register: balancer_status
play2:
  register: balancer_status   <-- skipped
play3:
  when: (balancer_status.stdout | from_json)['active'] | bool
```

This leads to issues like:

```
The conditional check '(balancer_status.stdout | from_json)['active'] | bool'
failed. The error was: Unexpected templating type error occurred on
({% if (balancer_status.stdout | from_json)['active'] | bool %} True
{% else %} False {% endif %}): expected string or buffer.
```
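
A sketch of the fix this commit describes: each playbook registers its own
variable (the names below are assumptions), so a skipped task can no longer
clobber the value another play relies on:

```
- name: get balancer status (rolling_update flavor)
  command: ceph balancer status --format json
  register: balancer_status_update   # dedicated variable, hypothetical name
  changed_when: false
  run_once: true

- name: disable balancer during the update
  command: ceph balancer off
  run_once: true
  when: (balancer_status_update.stdout | from_json)['active'] | bool
```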

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1982054

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-04 17:39:54 +02:00
Teoman ONAY 9b5d97adb9 podman's pids.max default value is 2048 and docker's is 4096, which are
sufficient for the default value (512) of the rgw thread pool size.
But if that value is increased close to the pids-limit value, it does not
leave room for the other processes to spawn and run within the container,
and the container crashes.

pids-limit is now set to unlimited regardless of the container engine.
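
A hedged sketch of how the two engines might be handled; the 0-vs--1
convention below is an assumption about the engine versions targeted at the
time, and the variable name is hypothetical:

```
# older podman releases use 0 for "unlimited", docker uses -1 (assumption)
ceph_container_pids_limit: "{{ 0 if container_binary == 'podman' else -1 }}"
```

The resulting value would then be passed as --pids-limit on the container run
command line.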

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1987041

Signed-off-by: Teoman ONAY <tonay@redhat.com>
2021-08-04 10:20:25 +02:00
Dimitri Savineau b02cc6931f ceph-defaults: remove radosgw_civetweb_ variables
radosgw_civetweb_xxx variables are legacy variables and users should
have switched to radosgw_frontend_xxx variables instead.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-04 09:13:08 +02:00
Dimitri Savineau 06471a4b82 osds: use osd pool ls instead of osd dump command
The ceph osd pool ls detail command is a subset of the ceph osd dump
command.

```
$ ceph osd dump --format json | wc -c
10117
$ ceph osd pool ls detail --format json | wc -c
4740
```

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-02 15:51:01 +02:00
Dimitri Savineau 17784624e0 library: exit on user creation failure
When the ceph dashboard user creation fails, the issue is hidden
because we don't check the return code and don't print the error message
in the module output.

This ends up with a failure on the ceph dashboard set roles command saying
that the user doesn't exist.

By failing on the user creation, we will have an explicit explanation of
the issue (like a weak password).

Closes: #6197

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-02 15:50:02 +02:00
Dimitri Savineau e87a47cf0c rolling_update: get ceph version when mons exist
eec3878 introduced a regression for upgrade scenarios where there are no
monitor nodes at all (like ganesha standalone, external clients, etc.)

```
TASK [get the ceph release being deployed] ************************************
task path: infrastructure-playbooks/rolling_update.yml:121
Thursday 29 July 2021  15:55:29 +0000 (0:00:00.484)       0:00:15.802 *********
fatal: [client0]: FAILED! =>
  msg: '''dict object'' has no attribute ''mons'''
```
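
A sketch of the guard implied by the fix, assuming the usual mon_group_name
variable; the task only runs when at least one monitor is in the inventory:

```
- name: get the ceph release being deployed (sketch)
  command: ceph --version
  register: ceph_version_out
  delegate_to: "{{ groups[mon_group_name][0] }}"
  when: groups.get(mon_group_name, []) | length > 0
```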

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-08-02 15:47:56 +02:00
Benoît Knecht d7653dca95 infrastructure-playbooks: Get Ceph info in check mode
In the `set osd flags` block, run the Ceph commands that gather information
from the cluster (and don't make any changes to it) even when running in check
mode.

This allows the tasks that depend on the variables set by those tasks to
succeed in check mode.
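
A sketch of the pattern, using Ansible's standard `check_mode: false` to let a
read-only command run even under `--check` (task and variable names are
illustrative):

```
- name: gather osd flags even in check mode (sketch)
  command: ceph osd dump --format json
  register: osd_dump_output
  changed_when: false   # read-only, never reports a change
  check_mode: false     # run for real even with ansible-playbook --check
```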

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2021-07-28 14:04:54 +02:00
Benoît Knecht 498acd7527 ceph-handler: Fix osd handler in check mode
Run the Ceph commands that only gather information (without making any changes
to the cluster) when running Ansible in check mode.

This allows the tasks that depend on the variables set by those tasks to
succeed in check mode.

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2021-07-28 14:04:54 +02:00
Dimitri Savineau f0ccf3ebf0 ceph-defaults: add missing grafana dashboards
The radosgw-sync-overview and rbd-details grafana dashboards were missing
from the list.

Closes: #6758

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-27 10:49:05 -04:00
Guillaume Abrioux eec38784ec update: check the ceph release
Check early which Ceph release is going to be deployed and fail if it
doesn't correspond to the ceph-ansible version being used.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978643

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-26 18:11:22 +02:00
Dimitri Savineau 9f77b929d1 alertmanager: allow disable dashboard tls verify
When using self-signed/untrusted CA certificates, alertmanager displays
an error in its logs. This commit makes those messages disappear.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1936299

Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-25 02:56:18 +02:00
Dimitri Savineau ad05a08160 multisite: use node fqdn for endpoints when https
When the rgw_multisite_proto variable is set to https, we shouldn't use
the IP address in the zone endpoints list but the node FQDN, to match the
TLS certificate CN.
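
A sketch of the endpoint construction; `_radosgw_address` and
`radosgw_frontend_port` mirror ceph-ansible naming conventions but are
assumptions here:

```
rgw_multisite_endpoint: >-
  {{ rgw_multisite_proto }}://{{ ansible_facts['fqdn']
     if rgw_multisite_proto == 'https'
     else _radosgw_address }}:{{ radosgw_frontend_port }}
```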

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1965504

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-22 21:22:12 +02:00
Guillaume Abrioux 4144074a50 purge: support osd_auto_discovery
This adds a task that zaps by osd id so we can support the scenario
where osds were deployed with `osd_auto_discovery` set to true.
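
A sketch of such a task, relying on the osd_id zap support added to the
ceph_volume module (see the lib/ceph-volume commit below); the id list is
hypothetical:

```
- name: zap osds by id (sketch)
  ceph_volume:
    action: zap
    osd_id: "{{ item }}"
  loop: "{{ osd_ids_to_purge }}"   # hypothetical list of osd ids to destroy
```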

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1876860

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-22 10:49:44 -04:00
Guillaume Abrioux 17cd83bf3a purge: merge playbooks
This refactor merges the two playbooks so we only have to maintain 1
playbook.
(Symlink the old purge-container-cluster.yml playbook for backward
 compatibility).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-22 10:49:44 -04:00
Guillaume Abrioux 6b50401d0c purge: drop variables from 'hosts' sections
Those variables are useless given it is not possible to override them.
Let's replace them with the hardcoded name instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-22 10:49:44 -04:00
Dimitri Savineau 738fa9428a common: remove unnecessary run_once statements
1303611 introduced tasks for disabling the pg_autoscaler on pools and
the balancer, but those tasks are already executed on the first monitor
node, so we don't need to add the run_once statement.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-21 09:55:21 -04:00
Dimitri Savineau cf6e33346e common: fix py2 pool_list from_json when skipped
When using Python 2, if the task with a loop is skipped then it generates
an error.

```
Unexpected templating type error occurred on
({{ (pool_list.stdout | from_json)['pools'] }}): expected string or buffer
```
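
A hedged sketch of the workaround: defaulting stdout to an empty JSON object
keeps the loop expression valid even when the listing task was skipped:

```
- name: iterate over pools, tolerating a skipped listing task (sketch)
  debug:
    msg: "{{ item.pool_name }}"
  loop: "{{ (pool_list.stdout | default('{}') | from_json).get('pools', []) }}"
```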

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-21 08:17:58 +02:00
Guillaume Abrioux 13036115e2 common: disable/enable pg_autoscaler
The PG autoscaler can disrupt the PG checks, so the idea here is to
disable it and re-enable it after the restart is done.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-20 07:37:07 +02:00
Dimitri Savineau cd06e7c046 ceph-mgr: move mgr module list to common
Populating the ceph_mgr_modules list in the mgr_modules task file doesn't
make sense since that file is only executed if the list isn't empty or
we're using the dashboard.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-19 18:23:38 +02:00
Dimitri Savineau 9817d29543 ceph-nfs: allow overriding NFS_CORE_PARAM
We already have config override variables for existing blocks (like
ganesha_ceph_export_overrides, ganesha_log_overrides, etc...) and a
global one (ganesha_conf_overrides), but redefining the NFS_CORE_PARAM
block in that global variable would erase all previous values (currently
only Bind_Addr).

```
ganesha_core_param_overrides: |
        Enable_UDP = false;
        NFS_Port = 2050;
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1941775

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-19 18:22:14 +02:00
Guillaume Abrioux 60aa70a128 purge: reindent playbook
This commit reindents the playbook.
Also improve readability by adding an extra line between plays.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-13 09:47:30 -04:00
Guillaume Abrioux 70f1d6e2cd lib/ceph-volume: support zapping by osd_id
This commit adds the support for zapping an osd by osd_id in the
ceph_volume module.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-13 09:41:21 -04:00
Dimitri Savineau a305296384 cephadm-adopt: enable osd memory autotune for HCI
This enables the osd_memory_target_autotune option in HCI environments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1973149

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-12 18:17:37 +02:00
Dimitri Savineau 97148dd58c rolling_update: check quorum state before upgrade
If one of the monitors is out of the quorum, nothing prevents the upgrade
playbook from running.
We only check that we have at least three monitor nodes, but we should also
check that those monitor nodes are actually present in the quorum.
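
A sketch of the stronger check, comparing the quorum membership reported by
`ceph quorum_status` with the inventory (the group name is assumed):

```
- name: get quorum status (sketch)
  command: ceph quorum_status --format json
  register: quorum_status
  changed_when: false
  run_once: true

- name: fail when a monitor is out of quorum
  fail:
    msg: "not all monitors have joined the quorum"
  run_once: true
  when: groups[mon_group_name] | length !=
        (quorum_status.stdout | from_json)['quorum_names'] | length
```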

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1952571

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-12 18:16:22 +02:00
Guillaume Abrioux c396122ad9 update: fail the playbook if straw2 conversion failed
It's better to fail the playbook so the user is aware the straw2
migration has failed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-09 11:44:06 -04:00
Guillaume Abrioux 4eb4268dee update: followup on pr #6689
Add the missing 'osd' command.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-09 10:01:45 +02:00
Guillaume Abrioux eee576477c update: convert straw bucket
After an upgrade, the presence of straw buckets will produce the
following warning (HEALTH_WARN):

```
crush map has legacy tunables (require firefly, min is hammer)
```

Because the straw bucket is a firefly feature, it needs to be converted to
straw2.
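
A sketch of the conversion step; `ceph osd crush set-all-straw-buckets-to-straw2`
is the command I'd expect here, run once against the cluster:

```
- name: convert all straw buckets to straw2 (sketch)
  command: ceph osd crush set-all-straw-buckets-to-straw2
  run_once: true
  delegate_to: "{{ groups[mon_group_name][0] }}"
```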

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1967964

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-09 08:28:46 +02:00
Dimitri Savineau aeb9f562e5 cephadm-adopt: set application on ganesha pool
Set the nfs application to the ganesha pool.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1956840

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-08 20:35:58 +02:00
Guillaume Abrioux 72a0336c71 dashboard: remove "certificate is valid for" error
When deploying the dashboard with ssl certificates generated by
ceph-ansible, we enforce the CN to 'ceph-dashboard', which can make
applications such as alertmanager complain as follows:

`err="Post https://mgr0:8443/api/prometheus_receiver: x509: certificate is valid for ceph-dashboard, not mgr0" context_err="context deadline exceeded"`

The idea here is to add alternative names matching all mgr/mon instances
in the certificate so this error won't appear in the logs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1978869

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-07 09:38:34 -04:00
Dimitri Savineau c5a2239e5e workflow: add dashboard playbook to ansible-lint
The dashboard.yml playbook was missing from the ansible-lint workflow.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-06 09:03:48 +02:00
Dimitri Savineau 8e4ef7d6da infra: add playbook to purge dashboard/monitoring
The dashboard/monitoring stack can be deployed via the dashboard_enabled
variable. But there's nothing similar if we want to remove only that part
and keep the ceph cluster up and running.
The current purge playbooks remove everything.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786691

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-06 09:02:37 +02:00
Guillaume Abrioux f4f73b6197 dashboard: support dedicated network for the dashboard
This introduces a new variable `dashboard_network` in order to support
deploying the dashboard on a different subnet.
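
Usage would presumably look like this in group_vars (the subnet value is an
example):

```
dashboard_network: 192.168.10.0/24
```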

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1927574

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-05 21:34:43 +02:00
Dimitri Savineau 993d06c4d9 ceph-crash: add install checkpoint
The ceph crash install checkpoint callback was missing in the main
playbooks.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-05 18:03:13 +02:00
Guillaume Abrioux 3b804a61dd cephadm_adopt: add any_errors_fatal on play
Add any_errors_fatal: true in the cephadm-adopt playbook.
We should stop the playbook execution when a task throws an error.
Otherwise it can lead to unexpected behavior.
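
A minimal sketch of the play-level setting:

```
- name: adopt the cluster with cephadm (sketch)
  hosts: all
  any_errors_fatal: true   # stop the whole run as soon as any task on any host fails
  tasks:
    - name: placeholder task
      ping:
```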

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1976179

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-02 22:15:07 +02:00
Guillaume Abrioux 037d8cd05e purge: add monitoring group in final cleanup play
This adds the monitoring group in the "final cleanup play" so any cid
files generated are properly removed when purging the cluster.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1974536

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-07-02 13:37:15 -04:00
Dimitri Savineau 1d56818658 prometheus: fix prometheus target url
The prometheus service isn't binding on localhost.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1933560

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 17:20:02 +02:00
Dimitri Savineau d704b05e52 ceph-facts: move device facts to its own file
Instead of reusing the condition 'inventory_hostname in groups[osds]'
on each device facts task, we can move all the tasks into a
dedicated file and set the condition on the import_tasks statement.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 55bca07cb6 ceph-validate: check logical volumes
We currently don't check if the logical volumes used in the lvm_volumes list
for either bluestore data/db/wal or filestore data/journal exist.
We only do this on raw devices for the batch scenario.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 808e7106de ceph-validate: check db/journal/wal devices too
When using dedicated devices for db/journal/wal objectstore with
ceph-volume lvm batch, we should also validate that those devices
exist and don't use a gpt partition table, in addition to the devices
and lvm_volumes data variables.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 7e50380f7f ceph-validate: use root device from ansible_mounts
Instead of using the findmnt command to find the device associated with the
root mount point, we can use the ansible_mounts fact.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 0df99dda8d ceph-validate: do not resolve devices
This is already done in the ceph-facts role.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 14d458b3b4 ceph-validate: check block presence first
Instead of doing two parted calls, we can first check that the device exists
and then test the partition table.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau ac0342b72e ceph-validate: check devices from lvm_volumes
2888c08 introduced a regression as the check_devices tasks file was
only included based on the devices variable.
But that file also validates some devices from the lvm_volumes variable.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1906022

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-07-02 14:02:30 +02:00
Dimitri Savineau 9758e3c513 container: set tcmalloc value by default
All ceph daemons need to have the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
environment variable set to 128MB by default in containerized setups.
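
A sketch of the setting; the variable name is hypothetical, 128MB is expressed
in bytes, and the value would presumably be passed with -e on the container
run command line:

```
ceph_common_container_env:
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES: "134217728"   # 128MB
```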

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1970913

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2021-06-30 20:30:55 +02:00