Use the activation scenario instead of the full ceph_disk one: we
already have a task to prepare OSDs, so we only need to activate the
device.
Working for me using Vagrant :)
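A rough sketch of what an activation-only task could look like, assuming
a `devices` list and that the prepare step created the data partition as
partition 1 (both assumptions, not the exact task from the role):

- name: activate osd(s) on prepared device(s)
  command: ceph-disk activate "{{ item }}1"   # partition 1 comes from the prepare step
  with_items: "{{ devices }}"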
Signed-off-by: Sébastien Han <seb@redhat.com>
There is no need to run the actions from
roles/ceph-mon/tasks/docker/create_configs.yml
on the first monitor only, since the monitor deployment happens
**serially**.
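A sketch of the kind of change this implies, assuming the include was
previously guarded by a first-monitor condition:

# before: only the first monitor generated the configs
- include: docker/create_configs.yml
  when: inventory_hostname == groups.mons[0]

# after: every monitor generates its configs, since the role runs serially anyway
- include: docker/create_configs.yml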
Moreover, with Vagrant it is useful to allow automatic creation of the
cluster fsid, so the option is now enabled. If this is not desired, you
can still set it explicitly, e.g. `fsid: 9c9c0448-0551-401d-b55b-e5b3a42bae42`.
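For example, in group_vars (the variable name shown here is an
assumption, not necessarily the exact option):

# group_vars/all
generate_fsid: true                            # let the playbook create the cluster fsid
#fsid: 9c9c0448-0551-401d-b55b-e5b3a42bae42    # or pin a fixed fsid instead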
Signed-off-by: Sébastien Han <seb@redhat.com>
- Gather facts only for mons before processing the ceph-mon role serially in
the containerized playbook sample (see the sketch below)
- Update ceph.conf in order to generate a valid ceph.conf
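A minimal sketch of the fact-gathering pattern, assuming a dedicated
play collects facts before the serial one (play layout is illustrative):

- hosts: mons
  gather_facts: true      # collect facts for all monitors up front
  tasks: []

- hosts: mons
  serial: 1               # then deploy monitors one at a time
  gather_facts: false     # facts from the play above are still available
  roles:
    - ceph-mon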
Signed-off-by: Ivan Font <ivan.font@redhat.com>
- Add all relevant group_vars files in the containerized purge cluster
playbook and ignore errors if a file does not exist (sketched below).
- Also fix indentation issues.
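One way this can look, assuming include_vars is used per sample file
(the file names below are illustrative):

- name: load group_vars samples if present
  include_vars: "{{ item }}"
  ignore_errors: true              # a sample file may not exist in every setup
  with_items:
    - group_vars/all.docker.sample
    - group_vars/osds.docker.sample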
Signed-off-by: Ivan Font <ivan.font@redhat.com>
- Move the fsal_rgw config to ceph-common, as it's shared with ceph-rgw
- Update all.docker.sample with the NFS config
- Rename fsal_rgw to nfs_obj_gw and fsal_ceph to nfs_file_gw (sketched
below), because the former names mean nothing to non-Ganesha developers
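How the renamed options might appear in group_vars; the defaults shown
are assumptions:

# group_vars/all.docker.sample (defaults are illustrative)
nfs_obj_gw: true      # export RGW object data through NFS Ganesha (was fsal_rgw)
nfs_file_gw: false    # export CephFS through NFS Ganesha (was fsal_ceph)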
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
Since we have a couple of infrastructure-related playbooks (in addition
to the roles we are using to deploy Ceph), it makes sense to keep them
in a separate directory.
Signed-off-by: Sébastien Han <seb@redhat.com>
We now have the ability to shrink a Ceph cluster with the help of 2 new
playbooks. Even if large portions of them are identical, I thought it
would make more sense to separate them for several reasons:
* it is rare to remove mon(s) and osd(s)
* this remains a tricky process, so to avoid any overlap we keep things
separated
For monitors, just select the list of monitor hostnames you want to
delete from the cluster; the hostnames must be resolvable. Then run the
playbook like this:
ansible-playbook shrink-cluster.yml -e mon_host=ceph-mon-01,ceph-mon-02
Are you sure you want to shrink the cluster? [no]: yes
For OSDs, just select the list of OSD IDs you want to delete from the
cluster and execute the playbook like this:
ansible-playbook shrink-cluster.yml -e osd_ids=0,2,4
Are you sure you want to shrink the cluster? [no]: yes
If you know what you're doing, you can skip the confirmation prompt and
run it like this:
ansible-playbook shrink-cluster.yml -e ireallymeanit=yes -e osd_ids=0,2,4
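The confirmation most likely comes from a vars_prompt guard along these
lines (a sketch, using the ireallymeanit variable from the command above):

vars_prompt:
  - name: ireallymeanit
    prompt: Are you sure you want to shrink the cluster?
    default: 'no'
    private: no

Passing the variable with -e pre-defines it, so Ansible skips the prompt.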
Thanks a lot to @SamYaple for his help with the complex
variables/facts/filters.
Signed-off-by: Sébastien Han <seb@redhat.com>
- First install Ceph into a directory with CMake:
cmake -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DWITH_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX:PATH=/usr <ceph_src_dir> && make DESTDIR=<install_dir> install/strip
- ceph-ansible copies over the install_dir
- The user can use rundep_installer.sh to install any runtime dependencies that Ceph needs onto the machine from rundep