Ansible playbooks to deploy Ceph, the distributed filesystem.

ceph-ansible

Ansible playbook for Ceph!

What does it do?

General support for:

  • Monitors
  • OSDs
  • MDSs
  • RGW

More details:

  • Authentication (cephx); it can be disabled.
  • Support for separate public and cluster (private) networks.
  • Monitor deployment. You can start with a single monitor (fine for testing) and progressively add nodes. For production, I recommend always using an odd number of monitors; 3 tends to be the standard. A sample inventory follows this list.
  • Object Storage Daemons (OSDs). As with the monitors, you can start with a few nodes and grow that number later. The playbook supports either a dedicated device for the journal or storing both the journal and OSD data on the same device (using a small partition at the beginning of the device).
  • Metadata servers (MDSs).
  • Collocation. The playbook supports collocating Monitors, OSDs and MDSs on the same machine.
  • The playbook was validated on Debian Wheezy, Ubuntu 12.04 LTS and CentOS 6.4.
  • Tested on Ceph Dumpling and Emperor.
  • A rolling upgrade playbook is included; it was used to successfully upgrade a cluster from Dumpling to Emperor.
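
As a quick sketch of how the host groups fit together (the group names and hostnames here are illustrative assumptions; check site.yml in this repository for the exact groups it targets), an inventory that collocates the MDS on an OSD node could look like this:

[mons]
ceph-mon0
ceph-mon1
ceph-mon2

[osds]
ceph-osd0
ceph-osd1
ceph-osd2

[mdss]
ceph-osd0

[rgws]
ceph-rgw0

Then point the main playbook at it:

$ ansible-playbook -i hosts site.yml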

Setup with Vagrant

Run your virtual machines:

$ vagrant up
...
...
...
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||


mon0                       : ok=16   changed=11   unreachable=0    failed=0
mon1                       : ok=16   changed=10   unreachable=0    failed=0
mon2                       : ok=16   changed=11   unreachable=0    failed=0
osd0                       : ok=19   changed=7    unreachable=0    failed=0
osd1                       : ok=19   changed=7    unreachable=0    failed=0
osd2                       : ok=19   changed=7    unreachable=0    failed=0
rgw                        : ok=20   changed=17   unreachable=0    failed=0

Check the status:

$ vagrant ssh mon0 -c "sudo ceph -s"
    cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
     health HEALTH_OK
     monmap e3: 3 mons at {ceph-mon0=192.168.0.10:6789/0,ceph-mon1=192.168.0.11:6789/0,ceph-mon2=192.168.0.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon2
     mdsmap e6: 1/1/1 up {0=ceph-osd0=up:active}, 2 up:standby
     osdmap e10: 6 osds: 6 up, 6 in
      pgmap v17: 192 pgs, 3 pools, 9470 bytes data, 21 objects
            205 MB used, 29728 MB / 29933 MB avail
                 192 active+clean

To re-run the Ansible provisioning scripts:

$ vagrant provision

Specifying fsid and secret key in production

The Vagrantfile specifies an fsid for the cluster and a secret key for the monitor. If using these playbooks in production, you must generate your own fsid in group_vars/all and monitor_secret in group_vars/mons. Those files contain information about how to generate appropriate values for these variables.
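
For example (a minimal sketch; it assumes uuidgen and the ceph-authtool utility from the ceph packages are available on your machine), suitable values can be generated with:

$ uuidgen                        # a fresh fsid for group_vars/all
$ ceph-authtool --gen-print-key  # a fresh monitor_secret for group_vars/mons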