Commit Graph

4737 Commits (cf82ac5590b84c74059268f383abb7ab3dea3704)
 

Author SHA1 Message Date
Dimitri Savineau cf82ac5590 ceph-dashboard: Add run_once on delegate tasks
Because we need to execute commands from a monitor node (the first one
in the mons list) we are using the delegate_to option.
If there are multiple nodes running the ceph-dashboard role then the
delegated task would be executed multiple times, hence the run_once.
Also remove a mgr config-key option that is not present on nautilus+ releases.
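
A minimal sketch of the pattern, assuming a hypothetical command and the
standard mons group name; with run_once the delegated command runs a single
time even though every dashboard node processes the play:

```
- name: example command delegated to the first monitor
  command: ceph mgr module enable dashboard
  run_once: true
  delegate_to: "{{ groups['mons'][0] }}"
  changed_when: false
```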

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit f545b5be0d)
2019-08-08 13:47:09 +02:00
Dimitri Savineau 8bb1be30fa ceph-infra: Apply firewall rules with container
There is no reason not to apply firewall rules on the host when
using a containerized deployment.
The TripleO environments already manage the ceph firewall rules outside
ceph-ansible and set the configure_firewall variable to false.
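
A hedged sketch of the kind of rule the role applies, now gated only on
configure_firewall instead of also being skipped on containerized
deployments (the task is illustrative; 6789/tcp is the default monitor port):

```
- name: open monitor port
  firewalld:
    port: 6789/tcp
    zone: public
    permanent: true
    immediate: true
    state: enabled
  when: configure_firewall | bool
```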

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1733251

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 771f25b1f8)
2019-08-07 10:41:47 +02:00
Guillaume Abrioux 7550f47661 dashboard: do not deploy on Debian based OS/non-containerized
In non-containerized deployments, we can't deploy the dashboard on
Debian-based distributions since the `ceph-grafana-dashboards` package isn't
available.
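
A hedged sketch of the resulting guard (the include target appears later in
this log; the fact and variable names are standard, but the exact condition
is an assumption):

```
- name: include dashboard configuration
  include_tasks: configure_dashboard.yml
  when: containerized_deployment | bool or ansible_os_family != 'Debian'
```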

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit dc7eb535b6)
2019-08-07 10:41:24 +02:00
Dimitri Savineau 308e5fe9f4 ceph-grafana: Set grafana uid/gid on files
We don't need to create a grafana system user (in fact we don't even
set the right uid for this user) because we're using a container setup.
Instead we just need to make sure the owner/group is set to 472 (the
grafana user/group inside the container), like we do for ceph/167.
We don't need a dedicated task to set the user/group recursively on the
/etc/grafana directory.
Also, on Ubuntu systems the ceph-grafana-dashboards package isn't present,
so on non-containerized deployments the
/etc/grafana/dashboards/ceph-dashboard directory (shipped with the
package) won't be there, so we need to make sure it exists.
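
A minimal sketch of the ownership handling described above (the task name is
illustrative; the path and the 472 uid/gid come from this message):

```
- name: ensure the ceph dashboards directory exists with the grafana container ownership
  file:
    path: /etc/grafana/dashboards/ceph-dashboard
    state: directory
    owner: "472"
    group: "472"
```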

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 34036c667c)
2019-08-07 10:41:03 +02:00
Dimitri Savineau 6a5308fa7f tests/shrink_rgw: Disable dashboard
The shrink_rgw scenario was merged just after the PR enabling the
ceph dashboard by default.
So right now the shrink_rgw scenario doesn't have any nodes in the grafana
group and fails.
We just need to set dashboard_enabled to false.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 867583d5dd)
2019-07-31 15:25:15 -04:00
Rishabh Dave 06c0a06122 tests/functional: add a test for shrink-rgw.yml
Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and RGW and then runs shrink-rgw.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 236b081a3a)

# Conflicts:
#	tox.ini
2019-07-31 15:25:15 -04:00
Rishabh Dave 72a062b6fa add a playbook that removes rgw from a given node
Add a playbook named shrink-rgw.yml to infrastructure-playbooks/ that
can remove a RGW from a node in an already deployed Ceph cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 632a44bdf2)
2019-07-31 15:25:15 -04:00
Dimitri Savineau 36e18e20d1 ceph-osd: check container engine rc for pools
When creating OpenStack pools, we only check whether the return code from
the pool list command isn't 0 (i.e. the pool doesn't exist). In that case,
the return code will be 2. That's why the condition on the pool creation
task is rc != 0.
But in a containerized deployment, the return code can be different if the
container engine command itself fails (e.g. the container isn't running).
In that case, the return code will be either 1 (docker) or 125 (podman),
so we should fail at this point and not in the next tasks.
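
A hedged sketch of the rc handling described above (task and variable names
are illustrative):

```
- name: check if an openstack pool already exists
  command: "{{ container_exec_cmd }} ceph osd pool get {{ item.name }} size"
  register: openstack_pool_check
  failed_when: openstack_pool_check.rc not in [0, 2]
  changed_when: false
  with_items: "{{ openstack_pools }}"

- name: create the missing openstack pools
  command: "{{ container_exec_cmd }} ceph osd pool create {{ item.item.name }} {{ item.item.pg_num }}"
  with_items: "{{ openstack_pool_check.results }}"
  when: item.rc == 2
```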

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1732157

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit d549fffdd2)
2019-07-31 14:07:41 -04:00
Guillaume Abrioux d2ef85b615 tests: add more memory in podman job
Typical error:

```
fatal: [mon1 -> mon0]: FAILED! => changed=true
  cmd:
  - podman
  - exec
  - ceph-mon-mon0
  - ceph
  - config
  - set
  - mgr
  - mgr/dashboard/ssl
  - 'false'
  delta: '0:00:00.644870'
  end: '2019-07-30 10:17:32.715639'
  msg: non-zero return code
  rc: 1
  start: '2019-07-30 10:17:32.070769'
  stderr: |-
    Traceback (most recent call last):
      File "/usr/bin/ceph", line 140, in <module>
        import rados
    ImportError: libceph-common.so.0: cannot map zero-fill pages: Cannot allocate memory
    Error: exit status 1
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```

Let's add more memory to get around this issue.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 0f620b2584)
2019-07-30 15:08:46 +02:00
Guillaume Abrioux d7d661d5d7 tests: deploy dashboard on mons
there are no dedicated nodes for mgr, so let's use the monitor nodes.
The mgr0 instance that is spawned isn't used, so if this node is part of the
inventory for this scenario, testinfra will complain because there's no
ceph.conf on it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d649e00893)
2019-07-30 15:08:46 +02:00
Guillaume Abrioux 51af74face dashboard: fix timeout usage on rgw user creation command
For some reason, this is making the playbook fail as follows:

```
TASK [ceph-dashboard : create radosgw system user] ************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /home/guits/ceph-ansible/roles/ceph-dashboard/tasks/configure_dashboard.yml:106
Tuesday 30 July 2019  10:04:54 +0200 (0:00:01.910)       0:11:22.319 **********
FAILED - RETRYING: create radosgw system user (3 retries left).
FAILED - RETRYING: create radosgw system user (2 retries left).
FAILED - RETRYING: create radosgw system user (1 retries left).
fatal: [mgr0 -> mon0]: FAILED! => changed=true
  attempts: 3
  cmd: timeout 20 podman exec ceph-mon-mon0 radosgw-admin user create --uid=ceph-dashboard --display-name='Ceph dashboard' --system
  delta: '0:00:20.021973'
  end: '2019-07-30 08:06:32.656066'
  msg: non-zero return code
  rc: 124
  start: '2019-07-30 08:06:12.634093'
  stderr: 'exec failed: container_linux.go:336: starting container process caused "process_linux.go:82: copying bootstrap data to pipe caused \"write init-p: broken pipe\""'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```

using `timeout -f -s KILL` fixes this issue.

Also, there is no need to use the `shell` module here, so let's switch to
`command`.
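
A hedged sketch of the fixed task, spelling the timeout options out in long
form for clarity (the fact and group names are assumptions based on the log
above):

```
- name: create radosgw system user
  command: >
    timeout --foreground --signal=KILL 20
    {{ container_exec_cmd }}
    radosgw-admin user create --uid=ceph-dashboard
    --display-name='Ceph dashboard' --system
  register: rgw_dashboard_user
  retries: 3
  delay: 5
  until: rgw_dashboard_user.rc == 0
  changed_when: false
  delegate_to: "{{ groups['mons'][0] }}"
```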

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit c9d80af4e0)
2019-07-30 15:08:46 +02:00
Dimitri Savineau 5e273a9072 library/ceph_volume.py: remove six dependency
The ceph nodes may not have the python six library installed, which
could lead to an error during the execution of the ceph_volume custom module.

  ImportError: No module named six

The six library isn't needed in this module as long as all action
variables passed to the build_ceph_volume_cmd function are lists and
not strings.

Resolves: #4071

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a64a61429d)
2019-07-29 15:53:18 +02:00
Rishabh Dave 8ca88b41cc infra-playbooks: rewrite a condition for better readability
Use the facility built into Ansible to check whether a command executed
successfully rather than looking at its return value.
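
A minimal sketch of the pattern (the registered variable and the command are
illustrative):

```
- name: run a command that may fail
  command: ceph --version
  register: result
  ignore_errors: true

- name: continue only when the command succeeded
  debug:
    msg: "ceph is available"
  when: result is succeeded   # instead of: result.rc == 0
```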

Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 5aecdd3ba6)
2019-07-29 15:52:29 +02:00
Guillaume Abrioux 432257b6dd tests: test dashboard deployment with podman scenario
This commit adds a grafana-server section in order to test dashboard
deployment with podman.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3c2fd337d9)
2019-07-29 15:46:58 +02:00
Guillaume Abrioux ea44783f3d validate: add checks for grafana-server group definition
this commit adds two checks (a minimal sketch follows):
- check that the `[grafana-server]` group is defined
- check that the `[grafana-server]` group contains at least one node.
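
A hedged sketch of those two checks (error messages and task names are
illustrative):

```
- name: fail if the grafana-server group is not defined
  fail:
    msg: "you must add a [grafana-server] group to your inventory"
  when: "'grafana-server' not in groups"

- name: fail if the grafana-server group is empty
  fail:
    msg: "the [grafana-server] group must contain at least one node"
  when: groups['grafana-server'] | default([]) | length == 0
```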

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 02beb00916)
2019-07-29 15:46:58 +02:00
Guillaume Abrioux e2b41a17c0 mgr: fix a typo
this task isn't using the right container_exec_cmd and is therefore
delegating to the wrong node.
Let's use the right fact to fix this command.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ec33ee7574)
2019-07-29 15:46:58 +02:00
Guillaume Abrioux 1a9043128c dashboard: remove cfg80211 module installation
According to this comment [1], this seems to be needed to detect wifi
devices.

In node exporter we can see this:

```
--collector.wifi          Enable the wifi collector (default: disabled).
```

since it's disabled by default and we don't even change this in our
systemd templates for node-exporter, we can safely assume in the end
it's not needed. Therefore, let's remove this.

[1] dbf81b6b5b (diff-961545214e21efed3b84a9e178927a08L21-L23)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b9cdf341be)
2019-07-29 15:46:58 +02:00
Guillaume Abrioux d0ad1cf0f1 dashboard: use dedicated group only
There's no need to add complexity by trying to fall back on another group.
Let's deploy the dashboard on all nodes present in the grafana-server group.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d67230b2a2)
2019-07-29 15:46:58 +02:00
Dimitri Savineau dd87db70ca dashboard: move code into a dedicated playbook
Move the dashboard, grafana/prometheus and node-exporter plays into a
dedicated playbook in the infrastructure-playbooks directory.
To avoid using the 'dashboard_enabled | bool' condition multiple times
in the main playbook, we can simply import the dashboard playbook or
not.
This patch also allows using a single dashboard playbook for
both the baremetal and container playbooks.
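
A hedged sketch of how the main playbooks could pull it in (the playbook
filename is an assumption):

```
- import_playbook: infrastructure-playbooks/dashboard.yml
  when: dashboard_enabled | bool
```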

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 43135840b1)
2019-07-29 15:46:58 +02:00
Guillaume Abrioux 93826e061d dashboard: enable dashboard by default
This commit enables dashboard deployment by default.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1726739

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fb1b5b3251)

# Conflicts:
#	tox-dashboard.ini
2019-07-29 15:46:58 +02:00
Dimitri Savineau 43d625b59a Remove NBSP characters
Some NBSP characters are still present in the yaml files.
This adds a test in travis CI to catch them.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 07c6695d16)
2019-07-26 16:23:41 -04:00
Guillaume Abrioux 6ef73b59d2 container: rename docker directories
Those 2 directories should be renamed to be more generic (docker vs.
podman).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 19950b5170)
2019-07-25 13:40:40 +02:00
fmount 15c745d998 Avoid setting up provisioners in a fully containerized environment
This commit adds a when clause to avoid setting up the grafana
provisioners in a fully containerized scenario.
This is needed when the ceph-grafana-dashboards package is not
installed, since the task could otherwise produce a wrong grafana
configuration that makes the container crash.
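
A hedged sketch of the added guard (the template source and destination are
illustrative; containerized_deployment is the usual toggle):

```
- name: set up grafana dashboard provisioner
  template:
    src: dashboards-ceph-dashboard.yml.j2
    dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
    owner: "472"
    group: "472"
  when: not containerized_deployment | bool
```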

Signed-off-by: fmount <fpantano@redhat.com>
(cherry picked from commit fac1b030cb)
2019-07-24 14:16:55 +02:00
Guillaume Abrioux 12839a3f66 tests: bump nfs-ganesha version
in stable-4.0, nfs-ganesha 2.8 should be used.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-07-23 15:02:19 +02:00
Dimitri Savineau 367dce2894 ceph-dashboard: enable rgw options conditionally
The dashboard rgw frontend options only need to be applied when there
are nodes present in the rgw ansible group.
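
A minimal sketch of the condition (the rgw_group_name default and the
dashboard command are assumptions consistent with the rest of this log):

```
- name: set the rgw api host for the dashboard
  command: "{{ container_exec_cmd }} ceph dashboard set-rgw-api-host {{ dashboard_rgw_api_host }}"
  changed_when: false
  when: groups.get(rgw_group_name | default('rgws'), []) | length > 0
```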

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5383c2f7f3)
2019-07-19 20:33:42 +00:00
Dimitri Savineau c0abaec019 tests/dashboard: use the dedicated grafana node
The Vagrant dashboard scenario creates a dedicated grafana node but it
was not used in the ansible inventory.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit a9a1f633a9)
2019-07-19 20:33:42 +00:00
Dimitri Savineau 87db5aa55c dashboard: use variables for port value
The current port values for alertmanager, grafana, node-exporter and
prometheus are hardcoded in the roles, so it's not possible to change
the port binding of those services.
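
A hedged sketch of the kind of defaults this introduces; the variable names
are assumptions and the values shown are simply each tool's upstream default
port:

```
grafana_port: 3000
prometheus_port: 9090
alertmanager_port: 9093
node_exporter_port: 9100
```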

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 8ab9b719fa)
2019-07-19 20:33:42 +00:00
Giulio Fidente 985165dbf7 Fix backward compat with old cephfs_pools format
Previously, cephfs_pools items used to have a pgs: key but neither
pgp_num: nor pg_num:
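
A hedged sketch of the two formats (pool names and values are illustrative):

```
# old format, kept working for backward compatibility
cephfs_pools:
  - { name: cephfs_data, pgs: 8 }
  - { name: cephfs_metadata, pgs: 8 }
---
# current format
cephfs_pools:
  - { name: cephfs_data, pg_num: 8, pgp_num: 8 }
  - { name: cephfs_metadata, pg_num: 8, pgp_num: 8 }
```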

Signed-off-by: Giulio Fidente <gfidente@redhat.com>
(cherry picked from commit edd1420217)
2019-07-19 17:50:57 +00:00
Guillaume Abrioux bbfd6965e0 handler: fix bug in osd handlers
fbf4ed42ae introduced a bug when the
container binary is podman.
Since podman doesn't support filtering `ps -f` with a regular expression,
the container id is never set in the restart script, causing the handler to fail.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1721536

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 618dbf271d)
2019-07-18 16:49:14 +00:00
Guillaume Abrioux 4aa4496fc1 validate: fail if gpt header found on unprepared devices
ceph-volume will complain if gpt headers are found on devices.
This commit checks whether a gpt header is present on devices passed in
the `devices` variable and fails early.
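
A hedged sketch of such a check (the parted-based detection is an assumption;
the `devices` variable is the one named above):

```
- name: read partition table information
  parted:
    device: "{{ item }}"
    unit: MiB
  register: parted_results
  with_items: "{{ devices }}"

- name: fail when a gpt header is found on an unprepared device
  fail:
    msg: "{{ item.item }} has a gpt header, please erase it before running ceph-ansible"
  with_items: "{{ parted_results.results }}"
  when: item.disk.table == 'gpt' and item.partitions | length == 0
```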

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1730541

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 487d701685)
2019-07-18 10:32:53 +02:00
Guillaume Abrioux 2c64166eac tests: remove useless setting
this setting is not needed here since we explicitly set it for the
container and non-container contexts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 87b173d022)
2019-07-17 09:04:20 +00:00
Guillaume Abrioux bee8a31afe shrink-rbdmirror: check if rbdmirror is well removed from cluster
This commit adds a check to ensure the daemon has been removed from the
cluster.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 916dc1f52f)
2019-07-16 15:02:49 +02:00
Rishabh Dave 41a4ded2b5 tests/functional: add a test for shrink-rbdmirror.yml
Add a new functional test that deploys a Ceph cluster with three nodes for
MON, OSD and RBD Mirror and then runs shrink-rbdmirror.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit f80521f773)

# Conflicts:
#	tox.ini
2019-07-16 15:02:49 +02:00
Rishabh Dave 0a15d1d112 add a playbook that removes rbd-mirror from a node
Add a playbook named "shrink-rbdmirror.yml" in infrastructure-playbooks/
that removes a RBD Mirror from a node in an already deployed Ceph
cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit c4824acb19)
2019-07-16 15:02:49 +02:00
Dimitri Savineau 2d8ed4cc52 ceph-infra: update handler with daemon variable
Both the ntp and chrony daemons use a variable for the service name because
it can differ depending on the GNU/Linux distribution.
This has been updated in 9d88d3199 for chrony, but only for the start part,
not for the handler.
This commit fixes this for both ntp and chrony.
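
A hedged sketch of handlers using the service-name variables (the variable
names are assumptions):

```
- name: restart chronyd
  service:
    name: "{{ chrony_daemon_name }}"
    state: restarted

- name: restart ntpd
  service:
    name: "{{ ntp_service_name }}"
    state: restarted
```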

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 0ae0193144)
2019-07-15 16:36:49 +00:00
Dimitri Savineau b87c189299 ceph-infra: Open prometheus port
The Prometheus port 9090 isn't open in the firewall configuration.
Also, the dashboard task on the grafana node was not required because
it's already present on the mgr node.
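
A minimal sketch of the missing rule (zone and task name are illustrative):

```
- name: open prometheus port
  firewalld:
    port: 9090/tcp
    zone: public
    permanent: true
    immediate: true
    state: enabled
  when: configure_firewall | bool
```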

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 41b44dde85)
2019-07-11 13:41:58 +00:00
Guillaume Abrioux 2742063aee handler: remove legacy condition
since everything is already in a block with the same condition, there's
no need to keep the condition on each of these tasks.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit ee29f7370a)
2019-07-10 15:53:26 +00:00
Guillaume Abrioux bca8ac39c2 validate: improve message printed in check_devices.yml
The message prints the whole content of the registered variable, which
is not needed and makes the message unclear and unreadable.

```
"msg": "{'_ansible_parsed': True, 'changed': False, '_ansible_no_log': False, u'err': u'Error: Could not stat device /dev/sdf - No such file or directory.\\n', 'item': u'/dev/sdf', '_ansible_item_result': True, u'failed': False, '_ansible_item_label': u'/dev/sdf', u'msg': u\"Error while getting device information with parted script: '/sbin/parted -s -m /dev/sdf -- unit 'MiB' print'\", u'rc': 1, u'invocation': {u'module_args': {u'part_start': u'0%', u'part_end': u'100%', u'name': None, u'align': u'optimal', u'number': None, u'label': u'msdos', u'state': u'info', u'part_type': u'primary', u'flags': None, u'device': u'/dev/sdf', u'unit': u'MiB'}}, 'failed_when_result': False, '_ansible_ignore_errors': None, u'out': u''} is not a block special file!"
```

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1719023

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit e6dc3ebd8c)
2019-07-10 09:37:01 -04:00
Boris Ranto 5d5e7d59fd dashboard: Use upstream default port
We are currently using an incorrect default dashboard port. Upstream
uses 8443 instead of 8234 by default. This brings us closer to the
upstream project.

Signed-off-by: Boris Ranto <branto@redhat.com>
(cherry picked from commit 21758fcee8)
2019-07-10 11:49:35 +02:00
Dimitri Savineau 3bdcbb005f ceph-dashboard: remove bool filter for rgw vars
Some dashboard_rgw_api_* variables are using the bool filter, but those
variables are strings with an empty string as the default value.
So we should test the variables against an empty string instead of a
bool.

dashboard_rgw_api_host: ''
dashboard_rgw_api_port: ''
dashboard_rgw_api_scheme: ''
dashboard_rgw_api_admin_resource: ''
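
A minimal sketch of the corrected condition, using one of the variables
listed above (the task is illustrative):

```
- name: set the rgw api scheme for the dashboard
  command: "{{ container_exec_cmd }} ceph dashboard set-rgw-api-scheme {{ dashboard_rgw_api_scheme }}"
  when: dashboard_rgw_api_scheme != ''   # instead of: dashboard_rgw_api_scheme | bool
```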

Resolves: #4179

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 5413274412)
2019-07-10 11:48:58 +02:00
Dimitri Savineau c040c34d97 ceph-iscsi: Update gateway config/template
- Remove gateway_keyring from the configuration file because it's
not used in the ceph-iscsi 3.x release.
- Use config_template instead of the template module for the iscsi-gateway
configuration file, because the file is an ini file and we might want
to override more parameters than those exposed by ceph-ansible.
- Because we can now set the pool name in the configuration, we should
use a variable for that. This is refactored with the iscsi_pool_* variables,
which are also used to configure the pool size.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 1f2a4f1910)
2019-07-10 09:35:21 +00:00
Rishabh Dave 1b6d8f9b45 tests/functional: add a test for shrink-mgr.yml
Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and MGR and then runs shrink-mgr.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 5c95c34d4b)

# Conflicts:
#	tox.ini
2019-07-09 15:00:56 +00:00
Rishabh Dave 6197d1c8d9 add a playbook that removes manager from a node
Add a playbook, named "shrink-mgr.yml", in infrastructure-playbooks/
that removes a MGR from a node in an already deployed Ceph cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit f4ea75051b)
2019-07-09 15:00:56 +00:00
Guillaume Abrioux 85a448429d shrink-mds: refact post tasks
This commit refactors the way we check that the "mds_to_kill" node is
properly stopped.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 7df62fde34)
2019-07-09 12:07:47 +02:00
Rishabh Dave e213163b63 tests/functional: add a test for shrink-mds.yml
Add a new functional test that deploys a Ceph cluster with three nodes
for MON, OSD and MDS and then runs shrink-mds.yml to test it.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 324b3b4a6c)

# Conflicts:
#	tox.ini
2019-07-09 12:07:47 +02:00
Rishabh Dave 38c2785e95 add a playbook that removes mds from a node
Add a playbook, named "shrink-mds.yml", in infrastructure-playbooks/
that removes a MDS from a node in an already deployed Ceph cluster.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1677431
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 235b1fccc6)
2019-07-09 12:07:47 +02:00
Dimitri Savineau f13e6642a4 ceph-handler: fix cluster name in socket path
c90f605b5 introduced the default ceph cluster name value in the rgw
socket path for the rgw restart script, but this should use the
`cluster` variable instead.
This commit also fixes this in the osd restart script.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit de7f948b75)
2019-07-08 19:57:08 +00:00
ilyashestopalov 5c6a9e1a96 ceph-mon: Fix cluster name parameter
This fixes adding nodes with the monitor role to an existing cluster
whose name differs from the default name.

Signed-off-by: ilyashestopalov <usr.tester@yandex.ru>
(cherry picked from commit 904532c5e2)
2019-07-08 09:12:37 -04:00
fmount ca378f1da0 Add package-install tag on ceph-grafana-dashboard pkg install.
According to the OSP pattern, we need the package-install tag
to control what is installed on the host. This commit just adds
the missing tag to meet the TripleO requirements.
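
A minimal sketch of the tagged task (package and task names follow the
message; treat them as illustrative):

```
- name: install ceph-grafana-dashboards package
  package:
    name: ceph-grafana-dashboards
    state: present
  tags: package-install
```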

See: /issues/4197 for details

Fixes: #4197

Signed-off-by: fmount <fpantano@redhat.com>
(cherry picked from commit 95bd002b35)
2019-07-08 10:42:41 +00:00
Dimitri Savineau cd7156efee ceph-iscsi-gw: Update log directories bind mount
On containerized deployments we need to bind mount the ceph-iscsi
directory to avoid writing the logs inside the container.
The /var/log/ceph directory isn't used by the rbd-target-api/rbd-target-gw
services because they have their own log directories.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 91bef94b6c)
2019-07-07 07:09:42 +00:00