mirror of https://github.com/ceph/ceph-ansible.git
9f54b3b4a7
On containerized deployments, if a mon is stopped, its admin socket is not purged and can cause a failure when the cluster is redeployed after the purge playbook has been run.

Typical error:

```
fatal: [osd0]: FAILED! => {}

MSG:

'dict object' has no attribute 'osd_pool_default_pg_num'
```

The fact is not set because of this earlier failure:

```
ok: [mon0] => {
    "changed": false,
    "cmd": "docker exec ceph-mon-mon0 ceph --cluster test daemon mon.mon0 config get osd_pool_default_pg_num",
    "delta": "0:00:00.217382",
    "end": "2018-07-09 22:25:53.155969",
    "failed_when_result": false,
    "rc": 22,
    "start": "2018-07-09 22:25:52.938587"
}

STDERR:

admin_socket: exception getting command descriptions: [Errno 111] Connection refused

MSG:

non-zero return code
```

This failure happens when the ceph-mon service is stopped: because the socket isn't purged, the leftover socket confuses the process on redeploy.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
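A minimal sketch of the kind of purge task this change implies, assuming the monitor's leftover admin socket (`*.asok`) sits under `/var/run/ceph` on the host; the task names, path, and pattern below are illustrative assumptions, not the exact contents of commit 9f54b3b4a7:

```yaml
# Hypothetical sketch (not the literal tasks from this commit): remove any
# leftover ceph-mon admin sockets so a redeploy does not trip over a stale
# socket left behind by a stopped container.
- name: find leftover monitor admin sockets   # path/pattern are assumptions
  find:
    paths: /var/run/ceph
    patterns: "*.asok"
    file_type: any
  register: ceph_mon_sockets

- name: purge leftover monitor admin sockets
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ ceph_mon_sockets.files }}"
```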
| Name |
|---|
| ceph-aliases.sh.j2 |
| ceph-mon.service.d-overrides.j2 |
| ceph-mon.service.j2 |