ceph-ansible/roles/ceph-rgw
Dimitri Savineau d408c75d76 podman: always remove container on start
In case of failure, the systemd ExecStop command isn't executed, so the
container isn't removed. After a reboot of the failed node, the container
doesn't start because the old container is still present in the created state.
We should always try to remove the container in ExecStartPre to cover this
situation.
A normal reboot doesn't trigger this issue, and it also doesn't affect
nodes running containers via Docker.
This behaviour was introduced by d43769d.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1858865

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 47b7c00287)
2020-07-24 12:47:21 -04:00
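
To illustrate the fix, here is a minimal sketch of a podman-managed systemd unit with the ExecStartPre cleanup described above. The unit contents, container name, and image are hypothetical placeholders for illustration, not the role's actual template:

```ini
[Unit]
Description=Ceph RGW container (illustrative sketch, not the actual template)
After=network-online.target

[Service]
# Always try to remove any leftover container before starting. The
# leading '-' tells systemd to ignore the exit code, so a missing
# container (the normal case) doesn't fail the unit. '-f' also removes
# a container stuck in the 'created' state after a crash, where
# ExecStop never ran.
ExecStartPre=-/usr/bin/podman rm -f ceph-rgw-demo
ExecStart=/usr/bin/podman run --name ceph-rgw-demo docker.io/ceph/daemon:latest
ExecStop=-/usr/bin/podman stop ceph-rgw-demo
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```

With this in place, a node that crashed mid-run clears the stale created-state container on the next start before podman creates a new one; per the commit message above, Docker-based nodes don't need this cleanup.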
defaults     rgw: set container memory limit to 4g                             2020-07-23 17:24:48 +02:00
handlers     rgw multisite: enable more than 1 realm per cluster               2020-03-04 14:39:23 -05:00
meta         meta: set the right minimum ansible version required for galaxy   2018-12-11 09:59:25 +01:00
tasks        rgw: fix multi instances scaleout                                 2020-07-20 21:23:27 +02:00
templates    podman: always remove container on start                          2020-07-24 12:47:21 -04:00
LICENSE      Add READMEs for each roles                                        2015-07-25 10:51:53 +02:00
README.md    Cleanup readme files in roles directories                         2017-10-17 11:22:06 +02:00

README.md

Ansible role: ceph-rgw

Documentation is available at http://docs.ceph.com/ceph-ansible/.
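
For reference, a minimal sketch of how a role like this is typically consumed from a playbook. The playbook filename and host group are hypothetical, not taken from ceph-ansible's actual site playbooks:

```yaml
# site-rgw.yml -- illustrative only; ceph-ansible ships its own site playbooks.
- hosts: rgws        # hypothetical inventory group of RGW nodes
  become: true       # role tasks need root to manage packages and systemd units
  roles:
    - ceph-rgw
```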