Commit Graph

4667 Commits (b98753488110b04cd2071c2b103493235dfc0c80)
 

Dimitri Savineau b987534881 ceph-volume: Set max open files limit on container
The ceph-volume lvm list command takes a very long time to complete when
there are a lot of LV devices on a containerized deployment.
For instance, with 25 OSDs on a node it takes 3 mins 44s to list the
OSDs.
Adding the max open files limit to the container engine cli when
executing the ceph-volume command improves the execution time
significantly, down to ~30s.

This was impacting the OSDs creation with ceph-volume (both filestore
and bluestore) when using multiple LV devices.
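
For illustration, a hedged sketch of what such a container invocation could look like; the image variable, mount list and limit values here are assumptions, not the exact change:

```
- name: list ceph-volume lvm devices from a container (illustrative sketch)
  command: >
    docker run --rm --privileged --net=host
    --ulimit nofile=1024:4096
    -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph:/var/lib/ceph:z
    --entrypoint=ceph-volume {{ ceph_docker_image }}
    lvm list --format json
  register: ceph_volume_lvm_list
  changed_when: false
```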

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1702285

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-20 22:37:40 +02:00
Guillaume Abrioux 46a2683944 facts: add a retry on get current fsid task
Sometimes the following task fails:

```
TASK [ceph-facts : get current fsid] *******************************************
task path: /home/jenkins-build/build/workspace/ceph-ansible-prs-dev-centos-container-update/roles/ceph-facts/tasks/facts.yml:78
Wednesday 19 June 2019  18:12:49 +0000 (0:00:00.203)       0:02:39.995 ********
fatal: [mon2 -> mon1]: FAILED! => changed=true
  cmd:
  - timeout
  - --foreground
  - -s
  - KILL
  - 600s
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - daemon
  - mon.mon1
  - config
  - get
  - fsid
  delta: '0:00:00.239339'
  end: '2019-06-19 18:12:49.812099'
  msg: non-zero return code
  rc: 22
  start: '2019-06-19 18:12:49.572760'
  stderr: 'admin_socket: exception getting command descriptions: [Errno 2] No such file or directory'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```

It is not clear exactly why, since just before this task mon1 seems to be
up; otherwise it wouldn't have passed the task `waiting for the
containerized monitor to join the quorum`.

As a quick fix/workaround, let's add a retry which allows us to get
around this situation:

```
TASK [ceph-facts : get current fsid] *******************************************
task path: /home/jenkins-build/build/workspace/ceph-ansible-scenario/roles/ceph-facts/tasks/facts.yml:78
Thursday 20 June 2019  15:35:07 +0000 (0:00:00.201)       0:03:47.288 *********
FAILED - RETRYING: get current fsid (3 retries left).
changed: [mon2 -> mon1] => changed=true
  attempts: 2
  cmd:
  - timeout
  - --foreground
  - -s
  - KILL
  - 600s
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - daemon
  - mon.mon1
  - config
  - get
  - fsid
  delta: '0:00:00.290252'
  end: '2019-06-20 15:35:13.960188'
  rc: 0
  start: '2019-06-20 15:35:13.669936'
  stderr: ''
  stderr_lines: <omitted>
  stdout: |-
    {
        "fsid": "153e159d-7ade-42a7-842c-4d04348b901e"
    }
  stdout_lines: <omitted>
```
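
For reference, a minimal sketch of the retried task; the command mirrors the log above, while the retry count, delay and until condition are illustrative:

```
- name: get current fsid (illustrative retry sketch)
  command: >
    docker exec ceph-mon-{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}
    ceph --cluster ceph daemon mon.{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}
    config get fsid
  register: current_fsid
  retries: 3
  delay: 10
  until: current_fsid.rc == 0
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
```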

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-20 13:13:04 -04:00
Dimitri Savineau 7c3640177b roles: Remove useless become (true) flag
We already set the become flag to true at the play level in the site*
playbooks, so we don't need to set it at the task level.
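
For context, a minimal sketch of the play-level setting this relies on; the host pattern and role name are illustrative:

```
- hosts: osds
  become: true
  roles:
    - ceph-osd
```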

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-19 10:31:32 +02:00
Guillaume Abrioux eece362b38 osd: remove legacy task
`parted_results` isn't used anymore in the playbook.

In addition, `parted` seems to cause issues because it changes the
ownership of devices:

```
[root@osd0 ~]# ls -l /dev/sdc*
brw-rw----. 1 root disk 8, 32 Jun 11 08:53 /dev/sdc
brw-rw----. 1 ceph ceph 8, 33 Jun 11 08:53 /dev/sdc1
brw-rw----. 1 ceph ceph 8, 34 Jun 11 08:53 /dev/sdc2

[root@osd0 ~]# parted -s /dev/sdc print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sdc: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name           Flags
 1      1049kB  1075MB  1074MB               ceph block.db
 2      1075MB  2149MB  1074MB               ceph block.db

[root@osd0 ~]# #We can see ownerships have changed from ceph:ceph to root:disk:
[root@osd0 ~]# ls -l /dev/sdc*
brw-rw----. 1 root disk 8, 32 Jun 11 08:57 /dev/sdc
brw-rw----. 1 root disk 8, 33 Jun 11 08:57 /dev/sdc1
brw-rw----. 1 root disk 8, 34 Jun 11 08:57 /dev/sdc2
[root@osd0 ~]#
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-18 12:45:01 -04:00
Guillaume Abrioux 3a100cfa52 rolling_update: fail early if cluster state is not OK
Starting an upgrade when the cluster isn't HEALTH_OK isn't a good idea.
Let's check the cluster status before trying to upgrade.
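
A hedged sketch of such an early check, assuming a containerized monitor; the container name and failure message are illustrative:

```
- name: get cluster health
  command: docker exec ceph-mon-{{ ansible_hostname }} ceph --cluster ceph health
  register: ceph_health
  changed_when: false

- name: fail early if the cluster is not healthy
  fail:
    msg: "cluster is not in HEALTH_OK state, please fix it before running the upgrade"
  when: "'HEALTH_OK' not in ceph_health.stdout"
```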

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-18 12:45:01 -04:00
Guillaume Abrioux 51b2813e04 rolling_update: only mask and stop unit in mgr part
Otherwise it fails like the following:

```
fatal: [mon0]: FAILED! => changed=false
  msg: |-
    Unable to enable service ceph-mgr@mon0: Failed to execute operation: Cannot send after transport endpoint shutdown
```
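
A minimal sketch of the kind of task that belongs only in the mgr part of the rolling update; the unit name and condition are illustrative:

```
- name: stop and mask the ceph-mgr unit before upgrading it
  systemd:
    name: "ceph-mgr@{{ ansible_hostname }}"
    state: stopped
    masked: yes
  when: inventory_hostname in (groups['mgrs'] | default([]))
```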

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-18 12:45:01 -04:00
Dimitri Savineau c7a5967a6f Add installer phase for dashboard roles
This commit adds installer phase support for the dashboard,
grafana and node-exporter roles.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-18 09:16:35 +02:00
Dimitri Savineau da8b7ab7fb remove ceph restapi references
The ceph restapi configuration was only available until the Luminous
release, so we don't need those leftovers for Nautilus and newer.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-18 09:13:19 +02:00
Guillaume Abrioux bdc870cbf5 dashboard: fix hosts sections in main playbook
ceph-dashboard should be deployed on either a dedicated mgr node or on a
mon if mgr and mon are collocated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-17 12:19:30 -04:00
Rishabh Dave 2034387f57 use pre_tasks and post_tasks in shrink-mon.yml too
This commit should've been part of commit
2fb12ae554.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-06-17 12:14:49 -04:00
Dimitri Savineau 34f9d51178 tests: Update ansible ssh_args variable
Because we're using vagrant, an ssh config file is created for
each node with options like user, host, port, identity, etc.
Via tox we override ANSIBLE_SSH_ARGS to use this file, which
removes the default value set in ansible.cfg.

Also add PreferredAuthentications=publickey because CentOS/RHEL
servers are configured with GSSAPIAuthentication enabled for the ssh
server, forcing the client to make a PTR DNS query.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-17 09:24:24 +02:00
Guillaume Abrioux 1019e3b3dc tests: increase docker pull timeout
CI is facing issues where docker pull reaches the timeout; let's increase
it to avoid CI failures.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-14 16:23:24 +02:00
Rishabh Dave 9d88d3199f ceph-infra: make chronyd default NTP daemon
Since timesyncd is not available on RHEL-based OSs, change the default
to chronyd for RHEL-based OSs. Also, chronyd is provided as chrony on
Ubuntu, so set the Ansible fact accordingly.
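
A hedged sketch of the fact logic described above; the fact name is illustrative:

```
- name: set the chrony daemon name per distribution
  set_fact:
    chrony_daemon_name: "{{ 'chronyd' if ansible_os_family == 'RedHat' else 'chrony' }}"
```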

Fixes: https://github.com/ceph/ceph-ansible/issues/3628
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-06-13 14:53:22 -04:00
Rishabh Dave d1c266e6c7 ceph-infra: update cache for Ubuntu
Ubuntu-based CI jobs often fail with error code 404 while installing
NTP daemons. Updating the apt cache beforehand should fix the issue.
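
A minimal sketch of such a cache update on Ubuntu; the follow-up install task is illustrative:

```
- name: update apt cache before installing the NTP daemon
  apt:
    update_cache: yes
  when: ansible_os_family == 'Debian'

- name: install chrony
  apt:
    name: chrony
    state: present
  when: ansible_os_family == 'Debian'
```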

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-06-13 14:00:29 +02:00
Rishabh Dave 67071c3169 align cephfs pool creation
The definitions of cephfs pools should match openstack pools.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
Co-Authored-by: Simone Caronni <simone.caronni@teralytics.net>
2019-06-13 09:44:05 +02:00
Guillaume Abrioux 4cf17a6fdd iscsi: assign application (rbd) to pool 'rbd'
If we don't assign the rbd application tag to this pool,
the cluster gets into a `HEALTH_WARN` state like the following:

```
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'rbd'
```
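
For reference, the warning can be cleared with the standard `ceph osd pool application enable` command; a hedged sketch assuming a containerized monitor:

```
- name: assign the rbd application to the rbd pool
  command: docker exec ceph-mon-{{ ansible_hostname }} ceph --cluster ceph osd pool application enable rbd rbd
  changed_when: false
  run_once: true
```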

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-13 07:35:39 +02:00
Dimitri Savineau da9891da1e ceph-handler: replace fuser by /proc/net/unix
We're using the fuser command to see if a process is using a ceph unix
socket file. But the fuser command runs through every PID present in
/proc/<PID> to see if one of them is using the file.
On a system running thousands of processes, the fuser command can take
a long time to finish.
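
A hedged sketch of the /proc/net/unix based check; the socket path is illustrative:

```
- name: check if the ceph mon socket is in use
  shell: grep -q /var/run/ceph/ceph-mon.{{ ansible_hostname }}.asok /proc/net/unix
  register: mon_socket_in_use
  changed_when: false
  failed_when: false
```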

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1717011

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-12 19:31:21 +02:00
Guillaume Abrioux 905c2256bd mon: enforce mon0 delegation for initial_mon_key register
Since this task is designed to always run on the first monitor, let's
enforce the container name accordingly; otherwise it could fail like
the following:

```
fatal: [mon1 -> mon0]: FAILED! => changed=true
  cmd:
  - docker
  - exec
  - ceph-mon-mon1
  - ceph
  - --cluster
  - ceph
  - --name
  - mon.
  - -k
  - /var/lib/ceph/mon/ceph-mon0/keyring
  - auth
  - get-key
  - mon.
  delta: '0:00:00.085025'
  end: '2019-06-12 06:12:27.677936'
  msg: non-zero return code
  rc: 1
  start: '2019-06-12 06:12:27.592911'
  stderr: 'Error response from daemon: No such container: ceph-mon-mon1'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>
```
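
A hedged sketch of the delegated task, building the container name from the first monitor's hostname as described above; the task name and keyring path layout are illustrative:

```
- name: get the initial mon keyring
  command: >
    docker exec ceph-mon-{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}
    ceph --cluster ceph --name mon.
    -k /var/lib/ceph/mon/ceph-{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}/keyring
    auth get-key mon.
  register: initial_mon_key
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
```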

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-12 11:21:19 -04:00
Guillaume Abrioux 9e4e692c61 tests: remove unused variable
`e MGR_DASHBOARD=0` isn't needed anymore here, let's remove this leftover.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-12 10:41:01 -04:00
Guillaume Abrioux 8dd774a99b tests: update docker image tag used in ooo job
ceph-ansible@master isn't intended to deploy luminous.
Let's use latest-master on the ceph-ansible@master branch.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-12 10:41:01 -04:00
Guillaume Abrioux 27856cc499 dashboard: add allow_embedding support
Add a variable to support the allow_embedding setting.

See ceph/ceph-ansible/issues/4084 for details.

Fixes: #4084

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-12 16:00:32 +02:00
Guillaume Abrioux 2c9cd9d9e7 dashboard: fix dashboard_url setting
This setting must be set to something resolvable.

See: ceph/ceph-ansible/issues/4085 for details

Fixes: #4085

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-12 15:59:58 +02:00
Dimitri Savineau d0840217f3 ceph-node-exporter: Fix systemd template
069076b introduced a bug in the systemd unit script template. This
commit fixes the options used by the node-exporter container.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-11 21:48:40 +02:00
Dimitri Savineau dbf81b6b5b ceph-node-exporter: use modprobe ansible module
Instead of using the modprobe command from the path in the systemd
unit script, we can use the modprobe ansible module.
That way we don't have to manage the binary path based on the linux
distribution.
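
A minimal sketch of the modprobe module usage; the kernel module name is a placeholder, not necessarily the one loaded by the role:

```
- name: load the required kernel module
  modprobe:
    name: "{{ required_kernel_module }}"
    state: present
```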

Resolves: #4072

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-11 21:40:50 +02:00
fmount 069076bbfd Fix units and add ability to have a dedicated instance
A few fixes to the systemd unit templates for the node_exporter and
alertmanager container parameters.
Added the ability to use a dedicated instance to deploy the
dashboard components (prometheus and grafana).
This commit also introduces the grafana_group_name variable
to refer to the grafana group and keep consistency with the other
groups.
During the integration with TripleO some grafana/prometheus
template variables ended up undefined. This commit adds the
ability to check whether the group exists and, accordingly, create
different job groups in the prometheus template.

Signed-off-by: fmount <fpantano@redhat.com>
2019-06-10 18:18:46 +02:00
Guillaume Abrioux 771648304d validate: fail in check_devices at the right task
see https://bugzilla.redhat.com/show_bug.cgi?id=1648168#c17 for details.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1648168#c17

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-07 16:14:18 +02:00
Guillaume Abrioux c933645bf7 spec: bring back possibility to install ceph with custom repo
This can be seen as a regression for customers who were used to deploying
in offline environments with custom repositories.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1673254

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-07 15:21:42 +02:00
Dimitri Savineau f49090df7e podman: Add systemd dependency on network.target
When using podman, the systemd unit scripts don't have a dependency
on the network, so we're not sure that the network is up and running
when the containers are starting.
With docker this behaviour is already handled because the systemd
unit scripts depend on the docker service, which is started after the
network.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-07 09:28:58 +02:00
Dimitri Savineau 44c63903ca purge-cluster: clean all ceph repo files
We currently only purge the rh_storage yum repository file, but depending
on the ceph_repository value in use, the ceph repository file
could have a different name.
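
A hedged sketch of a broader purge; the list of repository file names is illustrative, not exhaustive:

```
- name: remove ceph repository files
  file:
    path: "/etc/yum.repos.d/{{ item }}.repo"
    state: absent
  with_items:
    - ceph
    - ceph_stable
    - rh_storage
```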

Resolves: #4056

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-07 09:28:14 +02:00
guihecheng 59e702ec39 Add section for purging rgw loadbalancer in purge-cluster.yml
Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-06-06 17:12:04 +02:00
guihecheng 96c346743b Add section for rgw loadbalancer in site.yml
This drives the ceph rgw loadbalancer tasks to run.

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-06-06 17:12:04 +02:00
guihecheng 35d40c65f8 Add role definitions of ceph-rgw-loadbalancer
This adds support for an rgw loadbalancer based on HAProxy and Keepalived.
We define a single role ceph-rgw-loadbalancer and include the HAProxy and
Keepalived configurations in it.

A single haproxy backend is used to balance all RGW instances and
a single frontend is exported via a single port, 80 by default.

Keepalived is used to maintain the high availability of all haproxy
instances. You are free to use any number of VIPs. A single VIP is
shared across all keepalived instances; there will be one
master for each VIP, selected sequentially, and the others serve as
backups.
This assumes that each keepalived instance is on the same node as
one haproxy instance, and we use a simple check script to detect
the state of each haproxy instance and trigger the VIP failover
upon its failure.
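
A hypothetical group_vars sketch for such a setup; the variable names and addresses are illustrative, not the role's actual defaults:

```
haproxy_frontend_port: 80
virtual_ips:
  - 192.168.238.250
  - 192.168.238.251
```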

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-06-06 17:12:04 +02:00
L3D ab54fe20ec ansible: use 'bool' filter on boolean conditionals
When running ceph-ansible there are a lot of ``[DEPRECATION WARNING]`` messages like this:
```
[DEPRECATION WARNING]: evaluating containerized_deployment as a bare variable,
this behaviour will go away and you might need to add |bool to the expression
in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This
feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
```

``| bool`` is now appended to a lot of the affected variables.

Sometimes the coding style changed from ``variable|bool`` to ``variable | bool`` *(with spaces around the pipe)*.
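
An illustrative example of the resulting pattern:

```
- name: run a task only on containerized deployments
  debug:
    msg: "containerized deployment detected"
  when: containerized_deployment | bool
```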

Closes: #4022

Signed-off-by: L3D <l3d@c3woc.de>
2019-06-06 10:21:17 +02:00
Dimitri Savineau 518ab794fb container-common: support podman on Ubuntu
Currently we're only able to use podman on Ubuntu if podman's
installation is done manually before the ceph-ansible execution,
because the deb package is only present in an external repository.
We already manage the docker-ce installation via an external
repository, so we should be able to allow the podman installation
with the same mechanism too.

https://github.com/containers/libpod/blob/master/install.md#ubuntu
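
A hedged sketch of what this could look like; the repository URL is a placeholder variable and key handling is omitted:

```
- name: add the external repository providing podman
  apt_repository:
    repo: "deb {{ podman_repo_url }} /"
    state: present

- name: install podman
  apt:
    name: podman
    state: present
    update_cache: yes
```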

Resolves: #3947

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-05 14:07:34 +02:00
Guillaume Abrioux 80875adba7 ceph-osd: do not relabel /run/udev in containerized context
Otherwise content in /run/udev is mislabeled, which prevents some services
like NetworkManager from starting.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-04 11:32:41 -04:00
Guillaume Abrioux a78fb209b1 tests: test podman against atomic os instead rhel8
The rhel8 image used is an outdated beta version; it is not worth
maintaining this image upstream since it's possible to test podman with a
newer version of the centos/atomic-host image.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-04 11:32:41 -04:00
Dimitri Savineau 2d375e1aa7 site-container: update container-engine role
Since the split between the container-engine and container-common roles,
the tags and conditions were not updated to reflect the change.

- ceph-container-engine needs the with_pkg tag
- ceph-container-common needs fetch_container_images
- we don't need to pull the container image in a dedicated task for
atomic host anymore; we can now use the ceph-container-common role.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-04 14:19:32 +02:00
Dimitri Savineau 616c484698 ceph-nfs: use template module for configuration
789cef7 introduced a regression in the ganesha configuration file
generation: the new config_template module version broke it.
But the ganesha.conf file isn't an ini file and doesn't really
need the config_template module. Instead we can use the
classic template module.
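
A minimal sketch of the switch to the plain template module; the template file name, ownership and mode are assumptions:

```
- name: generate ganesha configuration file
  template:
    src: ganesha.conf.j2
    dest: /etc/ganesha/ganesha.conf
    owner: root
    group: root
    mode: "0644"
```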

Resolves: #4045

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-04 09:11:52 +02:00
Guillaume Abrioux 5457312e62 dashboard: add a last default value in grafana-server host section
If there are no mgrs and no mons in the inventory, it will fail with the following error:

```
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'dict object' has no attribute 'mons'

The error appears to be in '/home/guits/ceph-ansible/site-docker.yml.sample': line 539, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- hosts: '{{ (groups["mgrs"] | default(groups["mons"]))[0] }}'
  ^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:

    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
```

Let's add an `omit` so it just displays this message instead:

```
PLAY [[]] *******************
skipping: no hosts matched
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Guillaume Abrioux 6e2e30db54 dashboard: move ceph-grafana-dashboards package installation
This commit moves the package installation into the ceph-dashboard role.
This is needed to install the ceph dashboard json file in
`/etc/grafana/dashboards/ceph-dashboard/`.
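
A minimal sketch of the installation task this implies:

```
- name: install ceph-grafana-dashboards
  package:
    name: ceph-grafana-dashboards
    state: present
```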

Closes: #4026

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Guillaume Abrioux 14f5fc3c86 infra: refact dashboard firewall rules
- There is no need to open ports 3000, 8234, 9283 on all nodes.
- Add missing rule for alertmanager (port 9093)
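
A hedged sketch of opening those ports only where needed with firewalld; the zone and host scoping are illustrative:

```
- name: open grafana and alertmanager ports on the dashboard host only
  firewalld:
    port: "{{ item }}"
    zone: public
    permanent: true
    immediate: true
    state: enabled
  with_items:
    - "3000/tcp"   # grafana
    - "9093/tcp"   # alertmanager
```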

Closes: #4023

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Guillaume Abrioux a2b6f44665 dashboard: append mgr modules to ceph_mgr_modules
When `dashboard_enabled` is `True`, let's append the `dashboard` and
`prometheus` modules to `ceph_mgr_modules` so they are automatically
loaded.
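
A minimal sketch of that logic using the union filter; the exact task may differ:

```
- name: add dashboard related mgr modules
  set_fact:
    ceph_mgr_modules: "{{ ceph_mgr_modules | union(['dashboard', 'prometheus']) }}"
  when: dashboard_enabled | bool
```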

Closes: #4026

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Dimitri Savineau 7503098ca0 remove ceph-agent role and references
The ceph-agent role was used only for RHCS 2 (jewel), so it's not
useful anymore.
The current code will fail on the CentOS distribution because the rhscon
package is only available on Red Hat with the RHCS 2 repository, and
this ceph release is supported on the stable-3.0 branch.

Resolves: #4020

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-03 13:35:50 +02:00
Guillaume Abrioux 003aeea45a validate: add a check for nfs standalone
If `nfs_obj_gw` is True when deploying an internal ganesha with an
external ceph cluster, `ceph_nfs_rgw_access_key` and
`ceph_nfs_rgw_secret_key` must be provided so the
ganesha configuration file can be generated.
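
A hedged sketch of such a validation task; the message wording is illustrative:

```
- name: fail if the rgw credentials needed by ganesha are missing
  fail:
    msg: "ceph_nfs_rgw_access_key and ceph_nfs_rgw_secret_key must be provided"
  when:
    - nfs_obj_gw | bool
    - ceph_nfs_rgw_access_key is undefined or ceph_nfs_rgw_secret_key is undefined
```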

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:34:38 +02:00
Guillaume Abrioux 6a6785b719 nfs: support internal Ganesha with external ceph cluster
This commit allows deploying an internal ganesha with an external ceph
cluster.

This requires defining `external_cluster_mon_ips` with a comma
separated list of external monitors.
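
A hypothetical group_vars example of that variable; the addresses are illustrative:

```
external_cluster_mon_ips: "192.168.1.10,192.168.1.11,192.168.1.12"
```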

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1710358

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:34:38 +02:00
Dimitri Savineau daf92a9e1f ceph-facts: generate fsid on mon node
The fsid generation is done via a python command. When the ansible
controller node only has python3 available (like RHEL 8), the
python command isn't necessarily present, causing the fsid generation
to fail.
We already do some resource creation (like the ceph keyring secret) with
the python command too, but from the mon node, so we should do the same
for the fsid.
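
A hedged sketch of generating the fsid from the mon node instead of the controller; the task layout is illustrative:

```
- name: generate cluster fsid
  command: python -c 'import uuid; print(str(uuid.uuid4()))'
  register: cluster_uuid
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
```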

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1714631

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-03 10:11:32 +02:00
Dimitri Savineau 24d0fd7003 vagrant: Default box to centos/7
We don't use ceph/ubuntu-xenial anymore, only centos/7 and
centos/atomic-host.
Change the default to centos/7.

Resolves: #4036

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-31 10:32:04 -04:00
Kevin Carter 789cef7621 Sync config_template from upstream
This change pulls in the most recent release of the config_template module
into the ceph_ansible action plugins.

Signed-off-by: Kevin Carter <kecarter@redhat.com>
2019-05-28 14:27:02 -04:00
Guillaume Abrioux 4708b7615f tests: add retries on failing tests in testinfra
This commit adds `pytest-rerunfailures` to requirements.txt so we can
retry failing tests in testinfra to avoid false positives (e.g. sometimes
a service takes too much time to start for some reason).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-22 09:47:36 -04:00
Guillaume Abrioux 55420d6253 roles: introduce `ceph-container-engine` role
This commit splits the current `ceph-container-common` role.

It introduces a new role, `ceph-container-engine`, which handles the
tasks specific to the installation of container tools (docker/podman).

This is needed for the ceph-dashboard implementation for 2 main reasons:

1/ Since the ceph-dashboard stack is only containerized, we must install
everything needed to run containers even in non-containerized
deployments. Splitting this role allows us to not have to call the full
`ceph-container-common` role, which would run a bunch of unneeded tasks
that would have been skipped anyway.

2/ The current implementation would have required running
`ceph-container-common` on all ceph-client nodes, which would have
conflicted with 9d3517c670 (we don't want
to run ceph-container-common on all client nodes, see the mentioned commit
for more details)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-22 13:02:10 +02:00