ceph-ansible/infrastructure-playbooks
Guillaume Abrioux 3849f30f58 purge: do not remove /var/lib/apt/lists/*
removing the content of this directory is a bit aggressive and causes a
redeployment to fail after a purge on Debian-based distributions.

Typical error:
```
fatal: [mon0]: FAILED! => changed=false
  attempts: 3
  msg: No package matching 'ceph' is available
```

The following task considers the cache still valid, so apt
doesn't refresh it:
```
- name: update apt cache if cache_valid_time has expired
  apt:
    update_cache: yes
    cache_valid_time: 3600
  register: result
  until: result is succeeded
```

Since the task installing the ceph packages has `update_cache: no`, it
fails:

```
- name: install ceph for debian
  apt:
    name: "{{ debian_ceph_pkgs | unique }}"
    update_cache: no
    state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
    default_release: "{{ ceph_stable_release_uca | default('') }}{{ ansible_distribution_release ~ '-backports' if ceph_origin == 'distro' and ceph_use_distro_backports else '' }}"
  register: result
  until: result is succeeded
```
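A hypothetical alternative would be to force the refresh on the install path regardless of cache age; a rough sketch, not what this commit does:

```
# Hypothetical workaround only -- forces a refresh so a wiped
# /var/lib/apt/lists/ is always repopulated before installing.
- name: update apt cache unconditionally
  apt:
    update_cache: yes
  register: result
  until: result is succeeded
```

Keeping /var/lib/apt/lists/ intact avoids paying that extra refresh on every run, which is why the purge playbook is fixed instead.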

Likewise, /tmp/* isn't specific to ceph, so we shouldn't remove everything
in this directory either.
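A more targeted purge would remove only ceph-owned paths instead of wiping shared locations. A rough sketch; the paths below are illustrative examples, not the exact list used by purge-cluster.yml:

```
# Illustrative only -- remove ceph-specific directories rather than
# blanket wildcards like /var/lib/apt/lists/* or /tmp/*.
- name: remove ceph data and configuration
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - /etc/ceph
    - /var/lib/ceph
    - /var/log/ceph
```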

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-01 20:31:14 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| untested-by-ci | Make python print statements python3 compatible | 2019-02-01 15:23:27 +00:00 |
| vars | infrastructure playbooks: ensure nvme_device is defined in lv-create.yml | 2018-10-29 08:41:42 +00:00 |
| README.md | rolling_update: fix wrong indent | 2016-10-26 12:51:08 -05:00 |
| add-osd.yml | introduce new role ceph-facts | 2018-12-12 11:18:01 +01:00 |
| ansible.cfg | Cleanup plugins directories and references | 2018-03-14 11:15:39 +01:00 |
| ceph-keys.yml | ceph_key: add fetch_initial_keys capability | 2018-11-09 12:45:52 +01:00 |
| gather-ceph-logs.yml | infra: add a gather-ceph-logs.yml playbook | 2018-10-17 13:52:19 +00:00 |
| lv-create.yml | retry on packages and repositories failures | 2018-12-19 14:48:27 +00:00 |
| lv-teardown.yml | retry on packages and repositories failures | 2018-12-19 14:48:27 +00:00 |
| purge-cluster.yml | purge: do not remove /var/lib/apt/lists/* | 2019-03-01 20:31:14 +00:00 |
| purge-docker-cluster.yml | rgw: add support for multiple rgw instances on a single host | 2019-01-18 11:12:28 +01:00 |
| purge-iscsi-gateways.yml | igw: stop tcmu-runner on iscsi purge | 2018-11-09 10:02:16 +01:00 |
| rgw-add-users-buckets.yml | Example ceph_add_users_buckets playbook | 2018-12-20 14:23:25 +01:00 |
| rgw-standalone.yml | don't use private option for import_role | 2018-12-04 23:45:59 +00:00 |
| rolling_update.yml | rolling_update: support multiple rgw instance | 2019-01-22 13:45:38 +01:00 |
| shrink-mon.yml | introduce new role ceph-facts | 2018-12-12 11:18:01 +01:00 |
| shrink-osd-ceph-disk.yml | use pre_tasks and post_tasks when necessary | 2018-12-05 08:17:10 +00:00 |
| shrink-osd.yml | shrink_osd: use cv zap by fsid to remove parts/lvs | 2019-01-24 16:34:13 +01:00 |
| storage-inventory.yml | playbook: report storage device inventory | 2018-12-18 10:51:31 +01:00 |
| switch-from-non-containerized-to-containerized-ceph-daemons.yml | switch_to_containers: support multiple rgw instances per host | 2019-02-13 09:42:27 +01:00 |
| take-over-existing-cluster.yml | use pre_tasks and post_tasks when necessary | 2018-12-05 08:17:10 +00:00 |

README.md

Infrastructure playbooks

This directory contains a variety of playbooks that can be used independently of the Ceph roles we have. They perform infrastructure-related tasks that help with managing a Ceph cluster or with certain operational tasks.

To use them, move them to ceph-ansible's root directory, then run with `ansible-playbook <playbook>`.