ceph-ansible/roles/ceph-rbd-mirror
Dimitri Savineau d408c75d76 podman: always remove container on start
If the service fails, systemd doesn't execute ExecStop, so the container
isn't removed. After a reboot of the failed node, the container can't
start because the old container is still present in the created state.
We should therefore always try to remove the container in ExecStartPre.
A normal reboot doesn't trigger this issue, and nodes running containers
via docker aren't affected either.
This behaviour was introduced by d43769d.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1858865

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 47b7c00287)
2020-07-24 12:47:21 -04:00
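For context, here is a minimal sketch of the pattern the commit describes: a podman-backed systemd unit that removes any stale container in ExecStartPre before starting a fresh one. The unit name, container name, image, and mounts below are illustrative assumptions, not the exact template shipped in this role's templates/ directory.

```ini
# Sketch only, not the exact ceph-ansible template. The leading "-" on
# ExecStartPre/ExecStop tells systemd to ignore a non-zero exit code,
# so a missing container doesn't abort the start.

[Unit]
Description=Ceph RBD mirror daemon (podman)
After=network-online.target

[Service]
# Remove any stale container left over from an unclean shutdown;
# without this, podman refuses to start because the old container is
# still present in "created" state.
ExecStartPre=-/usr/bin/podman rm -f ceph-rbd-mirror
ExecStart=/usr/bin/podman run --rm --net=host --name ceph-rbd-mirror \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/lib/ceph:/var/lib/ceph:z \
  docker.io/ceph/daemon:latest rbd_mirror
ExecStop=-/usr/bin/podman stop -t 10 ceph-rbd-mirror
Restart=always

[Install]
WantedBy=multi-user.target
```

Note that `--rm` on the run command only cleans up after a normal exit; the ExecStartPre removal covers the hard-failure case described above. Depending on the podman version, `podman rm --storage` may be needed instead of plain `rm -f` to clear a container that exists only in container storage.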
| Name      | Last commit                                                     | Date                       |
|-----------|-----------------------------------------------------------------|----------------------------|
| defaults  | global: remove fetch_directory dependency                       | 2019-09-26 16:21:54 +02:00 |
| meta      | meta: set the right minimum ansible version required for galaxy | 2018-12-11 09:59:25 +01:00 |
| tasks     | travis: fail on ansible-lint errors                             | 2019-10-21 15:55:54 -04:00 |
| templates | podman: always remove container on start                        | 2020-07-24 12:47:21 -04:00 |
| LICENSE   | ceph-rbd-mirror: add license file                               | 2016-04-08 12:17:46 +02:00 |
| README.md | Cleanup readme files in roles directories                       | 2017-10-17 11:22:06 +02:00 |

README.md

Ansible role: ceph-rbd-mirror

Documentation is available at http://docs.ceph.com/ceph-ansible/.