Commit Graph

5747 Commits (2377da8f9b7cdb67c992a1536bd54ad2b8b30ccc)
 

Author SHA1 Message Date
Guillaume Abrioux 8ed11ea3ee infra: only install logrotate on right nodes
For instance, there is no need to install logrotate on client nodes.

This also ensures logrotate is installed only for containerized
deployments, since the packaging already has an explicit dependency on
logrotate.
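Roughly, the task could take the following shape (a sketch only; the
group and deployment variable names are the usual ceph-ansible ones and
are assumed here):

```yaml
# Hedged sketch: install logrotate only on daemon nodes and only for
# containerized deployments (group/variable names assumed).
- name: install logrotate
  package:
    name: logrotate
    state: present
  when:
    - containerized_deployment | bool
    - inventory_hostname in groups.get(mon_group_name, [])
      or inventory_hostname in groups.get(osd_group_name, [])
      or inventory_hostname in groups.get(rgw_group_name, [])
```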

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-18 10:56:09 -04:00
Guillaume Abrioux 04d77dcaeb travis: enforce ansible-lint 4.2.0
Let's pin to 4.2.0

(because of ansible/ansible-lint/issues/966)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-18 10:29:19 -04:00
Guillaume Abrioux 093e1dcb21 tests: remove hosts-ubuntu inventories
Since we've dropped ubuntu testing, we don't need these inventories
anymore. Let's remove this leftover.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-18 11:20:48 +02:00
Guillaume Abrioux bd9e126357 tests: disable iscsigw testing (container)
Temporarily disable iscsigw testing for containerized deployments
because it's broken upstream on ceph@master.
Non-containerized deployments use stable builds for iscsigw to get
around this issue.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-18 11:20:48 +02:00
Dimitri Savineau cb8f0237e1 ceph-rgw: allow specifying crush rule on pool
We already support specifying a custom crush rule during pool creation
in the ceph-osd role but not in the ceph-rgw role.
This patch adds the missing code to implement this feature.
Note this is only available for replicated pools, not erasure-coded
ones. The rule must also exist prior to the pool creation.
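A hedged example of what such a pool definition might look like (the
pool name, `rule_name` key and rule name are illustrative assumptions):

```yaml
# Hypothetical rgw pool definition using a pre-existing crush rule.
rgw_create_pools:
  default.rgw.buckets.data:
    pg_num: 64
    type: replicated
    rule_name: replicated_hdd   # the rule must already exist
```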

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1855439

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-08-17 22:59:06 +02:00
Dimitri Savineau 9805589ef9 container: don't install the engine on all clients
We only need the container engine to be installed on the first client
node in order to execute the pools/keys operations. We already follow
the same workflow with the ceph-container-common role, which pulls the
ceph container image.
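As a rough sketch (group and role names assumed), the condition could
look like:

```yaml
# Hedged sketch: only the first client node needs the container engine.
- name: include the container engine installation
  include_role:
    name: ceph-container-engine
  when: inventory_hostname == groups[client_group_name][0]
```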

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-08-17 22:57:28 +02:00
Ali Maredia 5c1f4b1a1e rgw: allow rgws to run concurrently with or without multisite
Allows rgws in a ceph cluster to be run with
multisite and without multisite at the same time.

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2020-08-17 11:11:11 +02:00
Guillaume Abrioux f77fa6e2a4 purge-cluster: use sysfs method for unmapping rbd devices
This way we keep consistency with purge-container-cluster.yml playbook.
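A hedged sketch of the sysfs method (the device list variable is
hypothetical; the real playbook derives the mapped devices itself):

```yaml
# Hedged sketch: unmap /dev/rbdX by writing its id to the sysfs interface.
- name: unmap rbd devices (sysfs method)
  shell: echo "{{ item | regex_replace('^/dev/rbd', '') }}" > /sys/bus/rbd/remove
  with_items: "{{ rbd_device_list | default([]) }}"
```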

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-17 09:28:12 +02:00
Guillaume Abrioux e1cb385740 infra: add missing tag
This commit adds the missing `with_pkg` tag on the logrotate
installation task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-13 10:08:18 -04:00
Guillaume Abrioux e256d8e948 tests: test iscsigw against stable
Since it is broken at the moment with dev repos, let's test against
stable builds so the CI is unlocked.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-13 09:49:00 +02:00
Guillaume Abrioux 33a544644a purge: import ceph-defaults in purge osd play
Otherwise, the `ceph_volume_debug` variable is undefined.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-11 15:03:20 +02:00
Guillaume Abrioux f1aa6cea21 infra: add log rotation support (containers)
This commit adds the log rotation support via logrotate in containerized
deployments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1848388

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-11 15:03:20 +02:00
Guillaume Abrioux 448cc280b7 common: don't enable debug log on ceph-volume calls by default
ceph-volume can generate large logs at some point.

debug logs by definition should be enabled only when debugging.

Let's make it customizable with a variable which is set to `False` by
default.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-11 15:03:20 +02:00
raul 110eaf5f9f rgw: support 1+ rgw instance in `radosgw_frontend_port`
Change radosgw_frontend_port to take into account more than 1 RGW
instance. In its original form,
`radosgw_frontend_port: radosgw_frontend_port | int` assigned port 8080
to all instances. With the following modification,
`radosgw_frontend_port: radosgw_frontend_port | int + item|int`, the
port is incremented by 1 for each instance.
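Illustratively, the per-instance fact could be built like this (a
sketch; the fact and count variable names are assumptions):

```yaml
# Hedged sketch: one rgw instance definition per index `item`, each with
# its own frontend port (8080 + item -> 8080, 8081, 8082, ...).
- name: set rgw instances fact
  set_fact:
    rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}"
  with_sequence: start=0 end={{ radosgw_instances_count | default(1) | int - 1 }}
```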

Co-authored-by: Daniel Parkes <dparkes@redhat.com>
Signed-off-by: raul <rmahique@redhat.com>
2020-08-11 14:05:43 +02:00
Guillaume Abrioux dd4b5b0328 nfs: do not copy rgw keyring when `nfs_obj_gw` is true
This keyring shouldn't be copied when `nfs_obj_gw` is `True` if the
cluster doesn't contain a rgw node, which can be the case given we are
using `nfs_obj_gw` instead of `nfs_file_gw` (cephfs vs. object).
Otherwise the deployment will fail trying to copy a key that doesn't
exist.
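A hedged sketch of the guard (the rgw group name follows the usual
ceph-ansible convention; the keyring list and destination are
illustrative):

```yaml
# Hedged sketch: only copy the rgw keyring if an rgw node actually exists.
- name: copy rgw bootstrap keyring for the nfs object gateway
  copy:
    src: "{{ item }}"
    dest: /var/lib/ceph/radosgw/
  with_items: "{{ rgw_keyring_files | default([]) }}"
  when:
    - nfs_obj_gw | bool
    - groups.get(rgw_group_name, []) | length > 0
```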

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-07 13:21:17 +02:00
Guillaume Abrioux 83d1b33a9b tox: only wait 30sec for right jobs
There's no need to call `sleep 30` for jobs other than `all_daemons` and
`all_in_one`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-06 17:22:52 +02:00
Benoît Knecht a57fd7a090 purge-cluster: check if rbdmap exists
When running `infrastructure-playbooks/purge-cluster.yml` twice, it fails the
second time on the `ensure rbd devices are unmapped` task, because `rbdmap`
isn't installed anymore at that point.

This commit adds a check that ensures `rbdmap` is available, and skips the
`ensure rbd devices are unmapped` task if it isn't.
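The check could look roughly like this (a sketch; task wording and the
`unmap-all` call are assumptions):

```yaml
# Hedged sketch: only try to unmap rbd devices if rbdmap is still installed.
- name: check whether rbdmap is available
  command: which rbdmap
  register: rbdmap_installed
  failed_when: false
  changed_when: false

- name: ensure rbd devices are unmapped
  command: rbdmap unmap-all
  when: rbdmap_installed.rc == 0
```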

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2020-08-06 09:35:03 +02:00
Dimitri Savineau 03d4620269 pytest: register ceph_crash mark
Otherwise we see a pytest warning:

PytestUnknownMarkWarning: Unknown pytest.mark.ceph_crash - is this a typo?
You can register custom marks to avoid this warning - for details,
see https://docs.pytest.org/en/latest/mark.html

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-08-06 09:34:34 +02:00
Guillaume Abrioux c2e507b42d purge-cluster: replace shell by command in a task
There is no need to use `shell` here.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-05 09:37:41 +02:00
Benoît Knecht fe8fbd3ee2 shrink-osd: various fixes
This handles a missing /etc/ceph/osd directory by ensuring we actually
found files in `/etc/ceph/osd` before trying to slurp their content.

This also adds a missing `| default(False)` to avoid the following error:

```
fatal: [ceph01]: FAILED! =>
  msg: |-
    The conditional check 'ceph_osd_data_json[item.2]['encrypted'] | bool' failed. The error was: error while evaluating conditional (ceph_osd_data_json[item.2]['encrypted'] | bool): 'dict object' has no attribute 'encrypted'
```
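The hardened conditional then reads (sketch, using the names from the
error above):

```yaml
# Sketch: fall back to False when the 'encrypted' key is missing.
when: ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
```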

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1862416

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
2020-08-05 01:30:57 +02:00
Kevin Coakley d19e6033b2 Remove ceph-radosgw.target when switching to containerize daemons
The task "remove old systemd unit file" under "switching from
non-containerized to containerized ceph rgw" only removes
the ceph-radosgw@.service file. The task should also remove
the ceph-radosgw.target file, like the "remove old systemd unit
files" tasks for the mons, mgrs, osds, etc, in order to clean up
all of the unused systemd unit files.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2020-08-04 11:08:12 -04:00
Guillaume Abrioux 5df6225ede tests: change subnet in lvm_osds container scenario
This commit changes the subnets in the container-lvm_osds scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-04 14:00:05 +02:00
Guillaume Abrioux 0a38d91b5b Revert "tests: add more coverage for test_ceph_key"
This reverts commit 1e46264bc1.
2020-08-04 11:28:42 +02:00
Guillaume Abrioux b15063b20e Revert "ceph_key: refact the code and minor fixes"
This reverts commit 9a950b8f0f.
2020-08-04 11:28:42 +02:00
Guillaume Abrioux 9a950b8f0f ceph_key: refact the code and minor fixes
wip

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 18:12:45 +02:00
Guillaume Abrioux 1e46264bc1 tests: add more coverage for test_ceph_key
This commit adds more coverage regarding the testing of the ceph_key module.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 18:12:45 +02:00
Guillaume Abrioux 0a581a6e60 config: only add related rgw section
There's no need to add every rgw section on all rgw nodes.
With this commit, only the related rgw sections are rendered.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 14:47:27 +02:00
Guillaume Abrioux 8933bfde33 shrink_osd: remove osd data directory
Otherwise it leaves an empty directory.
When shrinking and redeploying multiple OSDs, you have no guarantee they
will reuse the same osd id.
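A hedged sketch of the cleanup task (cluster name and osd id list follow
the usual shrink-osd variables, assumed here):

```yaml
# Hedged sketch: remove the now-empty OSD data directory after shrinking.
- name: remove the osd data directory
  file:
    path: "/var/lib/ceph/osd/{{ cluster }}-{{ item }}"
    state: absent
  with_items: "{{ osd_to_kill.split(',') }}"
```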

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 14:46:56 +02:00
Guillaume Abrioux 78e4faf077 tox: split shrink_osd scenario
Let's split this scenario with a dedicated tox ini file.

This is for testing in two ways:

1/ shrinking OSDs one by one
2/ shrinking multiple OSDs with a single call of the playbook

ceph-build related PR: ceph/ceph-build#1629

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 14:46:56 +02:00
Guillaume Abrioux 7efea219d6 tests: refact shrink_osd scenario
This adds more coverage on the shrink_osd scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-03 14:46:56 +02:00
Dimitri Savineau 0d0f1e71df dashboard: allow remote TLS cert/key copy
When using TLS on the ceph dashboard or grafana services, we can provide
the TLS certificate and key.
Those files should be present on the ansible controller and they will be
copied to the right node(s).
In some situations, the TLS certificate and key could already be present
on the target node and not on the ansible controller.
For this scenario, we just need to copy the files locally (on each remote
host).

This patch adds the dashboard_tls_external variable (defaulting to
false) to allow users to achieve this scenario by setting the variable
to true.
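One way this can be expressed (a sketch; the destination path is
illustrative, `dashboard_crt` and `dashboard_tls_external` come from the
role/commit):

```yaml
# Hedged sketch: when dashboard_tls_external is true, copy from the
# remote host itself instead of from the ansible controller.
- name: copy dashboard TLS certificate
  copy:
    src: "{{ dashboard_crt }}"
    dest: /etc/ceph/ceph-dashboard.crt
    remote_src: "{{ dashboard_tls_external | bool }}"
```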

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1860815

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-08-03 13:39:47 +02:00
Dimitri Savineau ec0a37a74f rolling_update: restart mds after the upgrade
In addition to 155e2a2, the active mds daemon isn't stopped/started
correctly, as opposed to the other services, so that daemon doesn't come
back after the upgrade.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1861688

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-29 16:45:41 -04:00
Dimitri Savineau 891234668e tests: install pyyaml on osd nodes
Due to [1], ceph-volume now has a dependency on pyyaml but it's not
installed by default via the package dependencies.
This patch only adds the required package on non containerized
deployments, as a temporary workaround for the CI.

[1] https://tracker.ceph.com/issues/46759

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-29 12:49:15 -04:00
Dimitri Savineau a6209bd957 rolling_update: refact dashboard workflow
The dashboard upgrade workflow should follow the same process as the
ceph upgrade, otherwise any systemd unit modification won't be applied
on the monitoring/dashboard stack.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1859173

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-25 09:35:17 +02:00
Dimitri Savineau 155e2a23d5 rolling_update: stop/start instead of restart
During the daemon upgrade we're
  - stopping the service when it's not containerized
  - running the daemon role
  - starting the service when it's not containerized
  - restarting the service when it's containerized

This implementation has multiple issues.

1/ We don't use the same service workflow when using containers
or baremetal.

2/ The explicit daemon start isn't required since we're already
doing this in the daemon role.

3/ Any non backward compatible change in the systemd unit template (for
containerized deployments) won't work due to the restart usage.

This patch refactors the rolling_update playbook by using the same
service stop task for both containerized and baremetal deployments at
the start of the upgrade play.
It removes the explicit service start task because it's already included
in the dedicated role.
The service restart tasks for containerized deployments are also
removed.

Finally, this adds the missing service stop task for the ceph crash
upgrade workflow.
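The unified stop step could look roughly like this (a sketch; unit name
and id list are assumptions, both deployment types are driven by systemd
units):

```yaml
# Hedged sketch: the same systemd stop task now covers containerized
# and baremetal deployments at the start of the upgrade play.
- name: stop the ceph osd daemon before upgrading it
  systemd:
    name: "ceph-osd@{{ item }}"
    state: stopped
  with_items: "{{ osd_ids | default([]) }}"
```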

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1859173

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-25 09:35:17 +02:00
Dimitri Savineau 4e84b4beed ceph-facts: remove mds_name fact
The mds_name fact always gets the ansible_hostname value, so we don't
need a dedicated fact for this and can use the ansible_hostname fact
instead.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-23 17:02:43 +02:00
Dimitri Savineau cbe79428e6 ceph-handler: remove iscsigws restart scripts
The iscsigws restart scripts for tcmu-runner and rbd-target-{api,gw}
services only call the systemctl restart command.
We don't really need to copy a shell script to do it when we can use
the ansible service module instead.
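Roughly (a sketch; only the service names come from the commit):

```yaml
# Hedged sketch: restart the iscsi gateway services with the service
# module instead of a copied shell script.
- name: restart iscsi gateway services
  service:
    name: "{{ item }}"
    state: restarted
  with_items:
    - tcmu-runner
    - rbd-target-api
    - rbd-target-gw
```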

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-23 17:02:12 +02:00
Dimitri Savineau 47b7c00287 podman: always remove container on start
In case of failure, the systemd ExecStop isn't executed so the container
isn't removed. After a reboot of a failed node, the container doesn't
start because the old container is still present in created state.
We should always try to remove the container in ExecStartPre for this
situation.
A normal reboot doesn't trigger this issue and this also doesn't affect
nodes running containers via docker.
This behaviour was introduced by d43769d.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1858865

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-23 17:00:38 +02:00
Guillaume Abrioux 44caa062b7 tox: remove ubuntu references
Since we've dropped ubuntu testing on PRs and nightlies, we don't need
these references in the tox files anymore.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-22 18:57:31 -04:00
Guillaume Abrioux 8ef9fb68bc tests: lvm_setup.yml, add carriage return
This commit adds crlf between each task.
It makes the playbook more readable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-22 14:25:45 +02:00
Guillaume Abrioux 218f4ae361 tests: (lvm_setup.yml), don't shrink lvol
When rerunning lvm_setup.yml on an existing cluster with OSDs already
deployed, it fails like the following:

```
fatal: [osd0]: FAILED! => changed=false
  msg: Sorry, no shrinking of data-lv2 to 0 permitted.
```

because we are asking `lvol` module to create a volume on an empty VG
with size extents = `100%FREE`.

The default behavior of `lvol` is to shrink the volume if the LV's current
size is greater than the requested size.

Given the requested size is calculated like this:

`size_requested = size_percent * this_vg['free'] / 100`

in our case, it is similar to:

`size_requested = 100 * 0 / 100` which basically means `0`

So the current LV size is greater than the requested size, which leads
the module to attempt to shrink it to 0, which obviously isn't allowed.

Adding `shrink: false` to the module calls fixes this issue.
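For illustration (a sketch; the VG name is an assumption, the LV name
comes from the error above):

```yaml
# Hedged sketch: with shrink disabled, rerunning the task on an already
# full VG no longer tries to shrink the existing LV to 0.
- name: create the data logical volume
  lvol:
    vg: test_group
    lv: data-lv2
    size: 100%FREE
    shrink: false
```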

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-22 14:25:45 +02:00
Dimitri Savineau 18e3c7a0a2 ceph-handler: add missing condition on ceph-crash
The ceph-crash tasks present in the ceph-handler role don't need to be
executed on all nodes.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-21 23:26:11 +02:00
Guillaume Abrioux 39bb279a53 crash: rm container in ExecStartPre even with docker
We should ensure the container is removed in `ExecStartPre` even when
`{{ container_binary }}` is docker.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-21 23:23:18 +02:00
Guillaume Abrioux 9d2f2108e1 ceph-crash: introduce new role ceph-crash
This commit introduces a new role `ceph-crash` in order to deploy
everything needed for the ceph-crash daemon.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-21 20:22:12 +02:00
Guillaume Abrioux d490968fc8 defaults: remove legacy
These variables aren't consumed anywhere other than in the ceph-nfs
role, so there is no need to have them in the `ceph-defaults` defaults.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-21 09:39:15 +02:00
Dimitri Savineau 5ef965c4dc cephadm: set the command as a fact
Set the cephadm cmd as a fact instead of rewriting the same command
over and over.
This also fixes an issue when using docker as the container engine,
because the --docker cephadm parameter should be used before the
subcommand, not after.
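For example (a sketch; the fact name is an assumption,
`container_binary` is the usual ceph-ansible variable):

```yaml
# Hedged sketch: build the cephadm command once, putting --docker before
# the subcommand when docker is the container engine.
- name: set cephadm cmd fact
  set_fact:
    cephadm_cmd: "cephadm {{ '--docker' if container_binary == 'docker' else '' }}"
```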

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-20 16:32:20 -04:00
Guillaume Abrioux f8a951f50c facts: fix broken facts when using --limit
This commit fixes these tasks when --limit is used.

It makes sure the fact is set on the right nodes even when the playbook
is run with `--limit`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-20 10:56:10 -04:00
Dimitri Savineau 2b8ebf1457 ceph-dashboard: copy TLS cert/key on monitor
The ceph-dashboard role is executed on the mgr nodes, so the TLS
cert/key files are copied to those nodes.
But we are importing the cert/key files into the ceph configuration on
the monitor.

Closes: #5557

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-20 16:16:35 +02:00
Dimitri Savineau 957903d561 cephadm: add playbook
This adds a new playbook for deploying ceph via cephadm.

This also adds a new dedicated tox file for CI purpose.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-16 11:40:45 -04:00
Dimitri Savineau 9596494911 cephadm-adopt: delegate task for orch apply
This is a partial revert of b38019e because we don't want to execute
the whole play on the monitor, otherwise if we have some empty group
like rgws or mdss then the orchestrator commands will still be
executed.
Instead we should keep the real target group name at the play level and
delegate the orchestrator commands to the monitor. The whole play
will be skipped if the group is empty.
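The delegation pattern looks roughly like this (a sketch; `ceph orch ls`
stands in for the actual orchestrator command, the group name follows
the usual ceph-ansible convention):

```yaml
# Hedged sketch: the play targets the real group (e.g. rgws) so it is
# skipped when the group is empty, while the orchestrator command itself
# runs on a monitor.
- name: run an orchestrator command from the rgw play
  command: ceph orch ls
  run_once: true
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: false
```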

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-07-16 09:44:33 -04:00