This commit adds a default value in the with_dict because, under
python 2.7, the with_dict of a task is evaluated even when the task has
a condition that skips it, whereas under python 3 it isn't.
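A minimal sketch of the pattern (variable name hypothetical): without the
default, an undefined dict would break the loop evaluation under python 2.7
even though the condition skips the task.
```
- name: iterate over an optional dict (sketch)
  debug:
    msg: "{{ item.key }}"
  # the default({}) protects against my_dict being undefined, since
  # python 2.7 evaluates the loop before the condition
  with_dict: "{{ my_dict | default({}) }}"
  when: my_dict is defined
```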
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1766499
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There's no need to use the default filter on active/standby groups
because if the group doesn't exist then the play is just skipped.
Currently this generates warnings like:
[WARNING]: Could not match supplied host pattern, ignoring: |
[WARNING]: Could not match supplied host pattern, ignoring: default([])
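As a hedged illustration (group name and the exact original pattern are
assumptions), targeting the group directly is enough; a missing group only
results in a skipped play:
```
# before: the default filter leaks into the host pattern and
# triggers the warnings above
# - hosts: rgw_primary|default([])
# after: a missing group just skips the play
- hosts: rgw_primary
  gather_facts: false
  tasks: []
```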
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2ca79fcc99)
The active mds host should be based on the inventory hostname and not on
the ansible hostname.
The value returned in the mdsmap structure is based on the OS hostname,
so we need to find the right node in the inventory with this value when
doing operations on inventory nodes.
Otherwise we could see an error like:
The task includes an option with an undefined variable. The error was:
"hostvars[foobar]" is undefined
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f1f2352c79)
Let's skip this part of the code if there's no mds node in the
inventory.
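A minimal sketch of the guard (included file name hypothetical):
```
- name: handle the mds part of the upgrade (sketch)
  include_tasks: mds_upgrade.yml
  # skip entirely when no mds node is present in the inventory
  when: groups.get(mds_group_name, []) | length > 0
```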
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 5ec906c3af)
This commit replaces the pv/vg/lv commands run via the ansible command
module with the lvg and lvol modules.
This also fixes the size of the second data LV because we were only using
50% of the remaining space instead of 100%.
With a 50G device, the result was:
- data-lv1 was 25G
- data-lv2 was 12.5G
Instead of:
- data-lv1 was 25G
- data-lv2 was 25G
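A minimal sketch with the lvg/lvol modules (device name hypothetical); note
the 100%FREE size for the second LV:
```
- name: create the data volume group (sketch)
  lvg:
    vg: data-vg
    pvs: /dev/sdb

- name: create data-lv1 with half of the vg
  lvol:
    vg: data-vg
    lv: data-lv1
    size: 50%VG

- name: create data-lv2 with all of the remaining space
  lvol:
    vg: data-vg
    lv: data-lv2
    size: 100%FREE
```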
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 2c03c6fcd3)
Otherwise it fails as follows:
```
TASK [ceph-mds : allow multimds] **************************************************************************************************************************************************
Monday 22 July 2019 16:37:38 +0800 (0:00:03.269) 0:13:25.651 ***********
fatal: [rhel7u6clone1]: FAILED! => {"msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 43, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n ^ here\n"}
```
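A hedged sketch of the kind of fix; whether the actual commit does exactly
this is an assumption, but the point is to have ceph_release resolved before
the conditional is evaluated:
```
- import_role:
    name: ceph-facts
  # ceph_release must be set before the
  # ceph_release_num[ceph_release] comparison runs
```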
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 4e9504c939)
Due to the 'failed_when: false' statement present in the peer task, the
playbook continues to run even if the peer task fails (e.g. an
incorrect remote peer format):
"stderr": "rbd: invalid spec 'admin@cluster1'"
This patch adds a task to list the peers present and adds the peer only
if it isn't already configured. With this we no longer need the
failed_when statement.
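A minimal sketch of the idempotent pattern (variable names as used elsewhere
in the role are an assumption):
```
- name: list the mirror peers (sketch)
  command: rbd mirror pool info {{ ceph_rbd_mirror_pool }} --format json
  register: mirror_pool_info
  changed_when: false

- name: add the remote peer only when missing
  command: >-
    rbd mirror pool peer add {{ ceph_rbd_mirror_pool }}
    {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }}
  when: ceph_rbd_mirror_remote_cluster not in mirror_pool_info.stdout
```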
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1665877
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0b1e9c0737)
The current ceph-validate role is using both the validate action and fail
module tasks to validate the ceph configuration.
The validate action is based on the notario python library. When one of
the notario validations fails, a python stack trace is reported to the
ansible task; this output isn't understandable by users.
This patch removes the validate action and the notario dependency. The
validation is now done using only the fail ansible module.
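A minimal sketch of the replacement style, using a hypothetical
osd_objectstore check:
```
- name: validate osd_objectstore (sketch)
  fail:
    msg: "osd_objectstore must be either 'bluestore' or 'filestore'"
  when: osd_objectstore not in ['bluestore', 'filestore']
```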
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654790
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
The secondary vagrant variables didn't have the grafana vm variable
set, which created a vagrant error.
There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
an invalid or undefined variable.
This patch also changes the ssh-extra-args parameter to ssh-common-args
to get the same values for ssh/sftp/scp. Otherwise we see warnings
from ansible and some tasks fail.
[WARNING]: sftp transfer mechanism failed on [mon0]. Use ANSIBLE_DEBUG=1
to see detailed information
It also updates the ssh-common-args value for the rgw-multisite scenario
to reflect the ANSIBLE_SSH_ARGS environment variable value.
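For illustration (the real invocation lives in the testing configuration,
path hypothetical), --ssh-common-args applies the same options to ssh, sftp
and scp alike, unlike --ssh-extra-args:
```
ansible-playbook -i hosts site.yml --ssh-common-args='-F vagrant_ssh_config'
```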
Finally, this changes the IP addresses due to the Vagrant refactor done
in commit 778c51a.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 010158ff84)
This commit adds a validation task to prevent from installing an OSD on
the same disk as the OS.
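A hedged sketch of such a validation; the exact detection logic used by the
commit is an assumption:
```
- name: fail when an osd device holds the OS (sketch)
  fail:
    msg: "{{ item }} appears to hold the OS and can't be used as an OSD"
  with_items: "{{ devices | default([]) }}"
  # compare each candidate disk with the device mounted on /
  when: (ansible_mounts | selectattr('mount', 'equalto', '/') |
         map(attribute='device') | first | default('')) is match(item)
```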
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1623580
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 80e2d00b16)
This commit reflects the recent changes in ceph/ceph-build#1406
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit bcaf8cedee)
When switching from a baremetal deployment to a containerized deployment
we only unmount the OSD data partition.
If the OSD is encrypted (dmcrypt: true) then there's an additional
partition (partition number 5) used for the lockbox and mounted in the
/var/lib/ceph/osd-lockbox/ directory.
Because this partition isn't unmounted, the containerized OSDs aren't
able to start: the partition is still mounted by the system and can't be
remounted from the container.
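A minimal sketch of the additional unmount, discovering the lockbox
mountpoints from the gathered facts:
```
- name: unmount the osd lockbox partitions (sketch)
  mount:
    path: "{{ item }}"
    state: unmounted
  with_items: "{{ ansible_mounts |
                  selectattr('mount', 'search', '/var/lib/ceph/osd-lockbox') |
                  map(attribute='mount') | list }}"
```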
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1616159
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 19edf707a5)
9e7972a introduced a regression via the container_binary variable,
which is undefined.
The CEPH_CONTAINER_BINARY environment variable isn't used at all.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
556052b changed the way the mgr keyrings are created, but the ceph_key
module needs the containerized parameter when the deployment uses
containers.
This module doesn't support CEPH_CONTAINER_[BINARY|IMAGE] environment
variables.
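A hedged sketch of the call; the caps values and the shape of the
containerized parameter are assumptions:
```
- name: create mgr keyring (sketch)
  ceph_key:
    name: "mgr.{{ ansible_hostname }}"
    caps:
      mon: "allow profile mgr"
      osd: "allow *"
      mds: "allow *"
    cluster: "{{ cluster }}"
    # the container exec prefix when deploying with containers
    containerized: "{{ container_exec_cmd | default(None) }}"
```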
Closes: #4547
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
This commit moves this task in order to stop the nfs server service
regardless of the desired deployment type (containerized or
non-containerized).
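A minimal sketch of the task being moved; the service name varies per
distribution and is an assumption here:
```
- name: stop nfs server service (sketch)
  systemd:
    name: nfs-server
    state: stopped
    enabled: no
  failed_when: false
```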
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6c6a512a72)
The syntax here wasn't working; this refactor fixes the task.
Also, remove the `ignore_errors: true` which was hiding the failure.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 47034effe0)
Add missing tag on ceph-handler role call.
Otherwise, we can't use `--tags='ceph_update_config'` for updating the
ceph configuration file.
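A minimal sketch of the missing tag on the role call:
```
- import_role:
    name: ceph-handler
  tags: ['ceph_update_config']
```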
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1754432
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f59dad620d)
Add code in ceph-mgr for creating a keyring for the manager so that
managers can be deployed on a separate node too.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1552210
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 56bfec7c58)
When using the ansible --limit option on one or a few OSD nodes, if the
handler is triggered then we restart the OSD service on all OSD nodes
instead of only the hosts matched by the limit value.
Even when the play is limited by the --limit value, we iterate over all
OSD nodes from the OSD group:
with_items: '{{ groups[osd_group_name] }}'
Instead we should iterate only over the nodes present in both the OSD
group and the limit list.
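A minimal sketch of the intersection, using the ansible_play_batch magic
variable (the restart script name is hypothetical):
```
- name: restart ceph osds daemon(s) (sketch)
  command: /usr/share/ceph-osd-run.sh
  # only the hosts present in both the osd group and the --limit list
  with_items: "{{ groups[osd_group_name] | intersect(ansible_play_batch) }}"
  delegate_to: "{{ item }}"
```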
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0346871fb5)
Because of the current IP address assignment, it's not possible to
deploy more than 9 nodes per daemon type.
This commit refactors it a bit and allows us to get around this limitation.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 778c51a0ff)
Let's use `until` instead of doing the test in bash with a python one-liner.
Also, use `command` instead of `shell`.
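A hedged sketch of the pattern; what the loop actually waits for in this
commit is an assumption:
```
- name: wait for the monitor to join the quorum (sketch)
  command: ceph --cluster {{ cluster }} quorum_status --format json
  register: quorum_status
  until: ansible_hostname in (quorum_status.stdout | from_json)['quorum_names']
  retries: 30
  delay: 10
  changed_when: false
```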
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c76cd5ad84)
By changing the set ownership command from using the file module in combination with a with_items loop to a raw chown command, we can achieve a 98% performance increase here.
On a ceph cluster with a significant number of directories and files in /var/lib/ceph, the file module has to run checks on the ownership of all those directories and files to determine whether a change is needed.
In this case, we just want to explicitly set the ownership of all these directories and files to the ceph_uid.
Also added a context note to all 'set proper ownership' tasks.
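A minimal sketch of the raw command replacing the file module loop:
```
- name: set proper ownership on ceph directories (sketch)
  command: chown -R {{ ceph_uid }}:{{ ceph_uid }} /var/lib/ceph /etc/ceph
  changed_when: false
```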
Signed-off-by: Kevin Jones <kevinjones@redhat.com>
(cherry picked from commit 47bf47c9d8)
CEPH_CONTAINER_IMAGE should be None if containerized_deployment
is False.
Resolves: #4498
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 70a4368bc5)
This commit replaces a `command` task with `ceph_key` in order to create
mgr keyrings.
This allows us to use `mode` parameter to set the right mode on
generated keys.
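A minimal sketch; the mode value and caps are assumptions:
```
- name: create mgr keyring with the right permissions (sketch)
  ceph_key:
    name: "mgr.{{ ansible_hostname }}"
    caps:
      mon: "allow profile mgr"
      osd: "allow *"
      mds: "allow *"
    mode: "0400"
```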
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1734513
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since we changed the way we run the OSD containers, using the ID instead
of the device name, we lost the ability to use loop devices.
Loop devices are like nvme or cciss devices in that the partitions are
referenced with an extra 'p' before the partition number.
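A minimal sketch of the naming logic (variable names hypothetical):
```
- name: compute the first partition name (sketch)
  set_fact:
    # nvme, cciss and loop devices get a 'p' before the partition number
    osd_partition: "{{ device + 'p1' if device is match('/dev/(nvme|cciss|loop)') else device + '1' }}"
```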
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749097
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
In containerized deployments, the restart OSD handler couldn't be
triggered in most ansible executions.
This is due to the combination of run_once with a condition on the
inventory hostname and the last filter.
The run_once is evaluated first, so ansible picks one node in the
osd group to execute the restart task. But if this node isn't the
last one in the osd group then the task is skipped. The task is
therefore more likely to be skipped than executed.
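A sketch of the problematic pattern described above (task body hypothetical):
```
- name: restart containerized osds (sketch of the broken pattern)
  debug:
    msg: "restart would happen here"
  # run_once picks an arbitrary host from the batch, but the condition
  # only matches the last one, so the task is usually skipped
  run_once: true
  when: inventory_hostname == ansible_play_hosts | last
```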
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5b1c15653f)
The ceph-rbd-mirror role allows copying the admin keyring via the
copy_admin_key variable, but there's actually no task in that role
doing the job.
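A minimal sketch of the missing task, guarded by copy_admin_key (paths are
assumptions):
```
- name: copy the admin keyring (sketch)
  copy:
    src: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/{{ cluster }}.client.admin.keyring"
    dest: /etc/ceph/
    owner: ceph
    group: ceph
    mode: "0600"
  when: copy_admin_key | bool
```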
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1f505628dd)
The admin keyring isn't present by default on the rbd mirror nodes, so
the rbd commands related to the mirroring configuration will fail.
Instead we can use the rbd mirror client keyring.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a3d36df025)
Ganesha cannot be operated active/active; in those deployments
where it is managed by pacemaker, the container name can be
different from the default.
This change uses "ceph_nfs_service_suffix" where previously
missing to ensure tasks will work with customized names.
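A minimal sketch of the pattern (the service name format is an assumption):
```
- name: restart the nfs ganesha service (sketch)
  systemd:
    name: "ceph-nfs@{{ ceph_nfs_service_suffix | default(ansible_hostname) }}"
    state: restarted
```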
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1750005
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
(cherry picked from commit d2a2bd7c42)
The rbd mirror configuration was only available for non containerized
deployments and was also incomplete.
We now enable the mirroring on the pool and add the remote peer in both
scenarios.
The default mirroring mode is set to 'pool' but can be configured via
the ceph_rbd_mirror_mode variable.
This commit also fixes an issue with the rbd mirror command when the
ceph cluster name isn't the default value (ceph), due to a missing
--cluster parameter on the command.
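A minimal sketch of the resulting command with the --cluster parameter:
```
- name: enable mirroring on the pool (sketch)
  command: >-
    rbd --cluster {{ cluster }} mirror pool enable
    {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode }}
```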
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1665877
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 7e5e21741e)
5b29144 changed the mgr node to a dedicated node instead of the first
monitor node.
But the change didn't update the switch-to-containers inventory, which
causes this playbook to fail.
Also update the ubuntu inventory to have the same configuration.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
We don't have a reason to not apply firewall rules on the host when
using a containerized deployment.
The TripleO environments already manage the ceph firewall rules outside
ceph-ansible and set the configure_firewall variable to false.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1733251
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 771f25b1f8)
Like the OpenStack keyrings, we can use the profile rbd for the client
keyrings (both mon and osd caps).
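A minimal sketch of such a keyring (client name and pool hypothetical):
```
- name: create an openstack client keyring (sketch)
  ceph_key:
    name: client.openstack
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=volumes"
```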
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 49aa05b96c)
This reverts commit 2d955757ee.
The "osd blacklist" isn't an osd caps but should be used with mon caps.
Also the correct caps for this is: 'allow command "osd blacklist"'.
The current change is breaking the openstack and clients keyrings.
By using the profile rbd (which is already used) we already rely on the
ability to blacklist dead client.
Resolves: #4385
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 717af83475)
On containerized deployment, the OSD entrypoint runs some ceph-volume
commands (lvm/simple scan and/or activate) which perform badly without
the ulimit option.
This option was added for all previous ceph-volume commands but not on
the ceph-osd container startup.
Also update the hard limit value to 4096 to reflect the default
baremetal value.
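A hedged sketch of the flag added to the container start command (the
variable name is hypothetical, other options omitted):
```
ceph_osd_container_run: >-
  docker run --rm --ulimit nofile=1024:4096
  {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
```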
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1744390
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 9a4ac46d19)
This can't be backported from master since there have been too many
modifications in the meantime.
When the mgrs aren't all ready, the following error can sometimes show up:
```
stderr: 'Error ENOENT: all mgr daemons do not support module ''status'', pass --force to force enablement'
```
This commit adds a check so that all mgrs are available before we try to
enable modules.
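A minimal sketch of the check (retry values are assumptions):
```
- name: wait for all mgr to be available (sketch)
  command: "{{ container_exec_cmd | default('') }} ceph --cluster {{ cluster }} mgr dump -f json"
  register: mgr_dump
  until: (mgr_dump.stdout | from_json)['available'] | bool
  retries: 30
  delay: 5
  changed_when: false
```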
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
ceph-volume will complain if gpt headers are found on devices.
This commit checks whether a gpt header is present on the devices passed
in the `devices` variable and fails early.
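A hedged sketch of one way to do the check; the actual detection command
used by the commit is an assumption:
```
- name: read the partition table type of each device (sketch)
  command: blkid -s PTTYPE -o value {{ item }}
  register: gpt_check
  with_items: "{{ devices | default([]) }}"
  changed_when: false
  failed_when: false

- name: fail early when a gpt header is found
  fail:
    msg: "GPT header found on {{ item.item }}"
  with_items: "{{ gpt_check.results }}"
  when: item.stdout == 'gpt'
```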
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1730541
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 487d701685)
In `stable-4.0`, the group name `iscsi-gws` will go away and some rgw
systemd service names will disappear as well:
(`ceph-rgw@<hostname>.service`, `ceph-radosgw@<hostname>.service`,
`ceph-radosgw@radosgw.<hostname>.service`,
`ceph-radosgw@radosgw.gateway.service`)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We must use the ids instead of the device names in the tasks executed in
`post_tasks` for the osd rolling update; otherwise it ends up with old
systemd units enabled.
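A hedged sketch of enabling the unit by id (where the id list comes from is
an assumption):
```
- name: enable ceph-osd units by id (sketch)
  systemd:
    name: "ceph-osd@{{ item }}"
    enabled: yes
  with_items: "{{ osd_ids.stdout_lines | default([]) }}"
```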
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1739209
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Otherwise it will fail when running the rolling_update.yml playbook
because of the `serial: 1` usage.
The task which copies the script is run only against the current node
being played, whereas the task which runs the script is run against all
nodes in a loop; it ends up with the typical error:
```
2019-08-08 17:47:05,115 p=14905 u=ubuntu | failed: [magna023 -> magna030] (item=magna030) => {
    "changed": true,
    "cmd": [
        "/usr/bin/env",
        "bash",
        "/tmp/systemd-device-to-id.sh"
    ],
    "delta": "0:00:00.004339",
    "end": "2019-08-08 17:46:59.059670",
    "invocation": {
        "module_args": {
            "_raw_params": "/usr/bin/env bash /tmp/systemd-device-to-id.sh",
            "_uses_shell": false,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": "magna030",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2019-08-08 17:46:59.055331",
    "stderr": "bash: /tmp/systemd-device-to-id.sh: No such file or directory",
    "stderr_lines": [
        "bash: /tmp/systemd-device-to-id.sh: No such file or directory"
    ],
    "stdout": "",
    "stdout_lines": []
}
```
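A hedged sketch of the fix: distribute the script to every osd node before
the looped task runs it (the exact task layout is an assumption):
```
- name: copy the script to all osd nodes (sketch)
  copy:
    src: systemd-device-to-id.sh
    dest: /tmp/systemd-device-to-id.sh
    mode: "0750"
  delegate_to: "{{ item }}"
  run_once: true
  with_items: "{{ groups[osd_group_name] }}"
```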
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1739209
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Let's deploy the mgr on a dedicated node.
The mismatch between the two inventories makes the update job fail on
the stable-4.0 branch.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit adds the `osd blacklist` cap on all OSP client keyrings.
Fixes: #2296
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 2d955757ee)