diff --git a/LICENSE b/LICENSE new file mode 100644 index 000000000..acee72b2b --- /dev/null +++ b/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. 
Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative 
Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [2014] [Sébastien Han] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/README.md b/README.md index ae9aca94a..299261834 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,129 @@ ceph-ansible ============ -Ansible playbooks for Ceph +Ansible playbook for Ceph! + +## What does it do? + +* Authentication (cephx); this can be disabled. +* Supports both public and cluster (private) networks. +* Monitor deployment. You can easily start with one monitor and then progressively add new nodes, so you can deploy a single monitor for testing purposes. For production, I recommend at least three. +* Object Storage Daemons. Like the monitors, you can start with a certain number of nodes and then grow this number. The playbook supports either a dedicated device for storing the journal or collocating the journal on the data device. +* Metadata daemons. +* Collocation. The playbook supports collocating Monitors, OSDs and MDSs on the same machine. 
+* The playbook was validated on both Debian Wheezy and Ubuntu 12.04 LTS. +* Tested on Ceph Dumpling and Emperor. +* A rolling upgrade playbook was written; an upgrade from Dumpling to Emperor was performed successfully. + + +## Setup with Vagrant + +First modify the `rc` file in your home directory: + + export ANSIBLE_CONFIG=~/.ansible.cfg + +Do the same for the `.ansible.cfg` file: + + [defaults] + host_key_checking = False + remote_user = vagrant + hostfile = /hosts + log_path = /ansible.log + ansible_managed = Ansible managed: modified on %Y-%m-%d %H:%M:%S by {uid} + private_key_file = ~/.vagrant.d/insecure_private_key + error_on_undefined_vars = False + +Edit your `/etc/hosts` file with: + + # Ansible hosts + 127.0.0.1 ceph-mon0 + 127.0.0.1 ceph-mon1 + 127.0.0.1 ceph-mon2 + 127.0.0.1 ceph-osd0 + 127.0.0.1 ceph-osd1 + 127.0.0.1 ceph-osd2 + +**Since we use Vagrant with port forwarding, don't forget to grab the local SSH port of each VM.** +Then edit your `hosts` file accordingly. + +Ok let's get serious now. +Run your virtual machines: + +```bash +$ vagrant up +... +... +... +``` + +Test if Ansible can access the virtual machines: + +```bash +$ ansible all -m ping +ceph-mon0 | success >> { + "changed": false, + "ping": "pong" +} + +ceph-mon1 | success >> { + "changed": false, + "ping": "pong" +} + +ceph-osd0 | success >> { + "changed": false, + "ping": "pong" +} + +ceph-osd2 | success >> { + "changed": false, + "ping": "pong" +} + +ceph-mon2 | success >> { + "changed": false, + "ping": "pong" +} + +ceph-osd1 | success >> { + "changed": false, + "ping": "pong" +} +``` + +Ready to deploy? Let's go! + +```bash +$ ansible-playbook -f 6 -v site.yml +... +... 
+ ____________ +< PLAY RECAP > + ------------ + \ ^__^ + \ (oo)\_______ + (__)\ )\/\ + ||----w | + || || + + +ceph-mon0 : ok=13 changed=10 unreachable=0 failed=0 +ceph-mon1 : ok=13 changed=9 unreachable=0 failed=0 +ceph-mon2 : ok=13 changed=9 unreachable=0 failed=0 +ceph-osd0 : ok=19 changed=12 unreachable=0 failed=0 +ceph-osd1 : ok=19 changed=12 unreachable=0 failed=0 +ceph-osd2 : ok=19 changed=12 unreachable=0 failed=0 +``` + +Check the status: + +```bash +$ vagrant ssh mon0 -c "sudo ceph -s" + cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45 + health HEALTH_OK + monmap e3: 3 mons at {ceph-mon0=192.168.0.10:6789/0,ceph-mon1=192.168.0.11:6789/0,ceph-mon2=192.168.0.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon2 + mdsmap e6: 1/1/1 up {0=ceph-osd0=up:active}, 2 up:standby + osdmap e10: 6 osds: 6 up, 6 in + pgmap v17: 192 pgs, 3 pools, 9470 bytes data, 21 objects + 205 MB used, 29728 MB / 29933 MB avail + 192 active+clean diff --git a/Vagrantfile b/Vagrantfile new file mode 100644 index 000000000..db450166b --- /dev/null +++ b/Vagrantfile @@ -0,0 +1,31 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# Vagrantfile API/syntax version. Don't touch unless you know what you're doing! 
+VAGRANTFILE_API_VERSION = "2" + +Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| + config.vm.box = "precise64" + config.vm.box_url = "http://files.vagrantup.com/precise64.box" + + (0..2).each do |i| + config.vm.define "mon#{i}" do |mon| + mon.vm.hostname = "ceph-mon#{i}" + mon.vm.network :private_network, ip: "192.168.0.1#{i}" + end + end + + (0..2).each do |i| + config.vm.define "osd#{i}" do |osd| + osd.vm.hostname = "ceph-osd#{i}" + osd.vm.network :private_network, ip: "192.168.0.10#{i}" + osd.vm.network :private_network, ip: "192.168.0.20#{i}" + (0..2).each do |d| + osd.vm.provider :virtualbox do |vb| + vb.customize [ "createhd", "--filename", "disk-#{i}-#{d}", "--size", "5000" ] + vb.customize [ "storageattach", :id, "--storagectl", "SATA Controller", "--port", 3+d, "--device", 0, "--type", "hdd", "--medium", "disk-#{i}-#{d}.vdi" ] + end + end + end + end +end diff --git a/fetch/.empty b/fetch/.empty new file mode 100644 index 000000000..e69de29bb diff --git a/group_vars/all b/group_vars/all new file mode 100644 index 000000000..e4746de45 --- /dev/null +++ b/group_vars/all @@ -0,0 +1,20 @@ +--- +# Variables here are applicable to all host groups NOT roles + +# Setup options +distro_release: "{{ facter_lsbdistcodename }}" +apt_key: https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc +ceph_release: emperor + +# Ceph options +cephx: true +fsid: 4a158d27-f750-41d5-9e7f-26ce4c9d2d45 + +# OSD options +journal_size: 100 +pool_default_pg_num: 128 +pool_default_pgp_num: 128 +pool_default_size: 2 +cluster_network: 192.168.0.0/24 +public_network: 192.168.0.0/24 +osd_mkfs_type: xfs diff --git a/group_vars/mons b/group_vars/mons new file mode 100644 index 000000000..d1d9daa51 --- /dev/null +++ b/group_vars/mons @@ -0,0 +1,5 @@ +--- +# Variables here are applicable to all host groups NOT roles + +# Monitor options +monitor_secret: AQD7kyJQQGoOBhAAqrPAqSopSwPrrfMMomzVdw== diff --git a/group_vars/osds b/group_vars/osds new file mode 100644 index 
000000000..934bb6a9c --- /dev/null +++ b/group_vars/osds @@ -0,0 +1,14 @@ +--- +# Variables here are applicable to all host groups NOT roles +# + +# Devices to be used as OSDs +devices: + - /dev/sdc + - /dev/sdd + +# Use 'None' to undefine the variable. +# Using 'None' will colocate both journal and data on the same disk, +# creating a partition at the beginning of the device +# +journal_device: /dev/sdb diff --git a/hosts b/hosts new file mode 100644 index 000000000..7bc8227cc --- /dev/null +++ b/hosts @@ -0,0 +1,33 @@ +# +## If you use Vagrant and port forwarding, don't forget to grab the SSH local port of your VMs. +# + +## Common setup example +# +[mons] +ceph-mon0:2222 +ceph-mon1:2200 +ceph-mon2:2201 +[osds] +ceph-osd0:2202 +ceph-osd1:2203 +ceph-osd2:2204 +[mdss] +ceph-osd0:2202 +ceph-osd1:2203 +ceph-osd2:2204 + + +# Colocation setup example +#[mons] +#ceph-osd0:2222 +#ceph-osd1:2200 +#ceph-osd2:2201 +#[osds] +#ceph-osd0:2222 +#ceph-osd1:2200 +#ceph-osd2:2201 +#[mdss] +#ceph-osd0:2222 +#ceph-osd1:2200 +#ceph-osd2:2201 diff --git a/rc b/rc new file mode 100644 index 000000000..cbe899a41 --- /dev/null +++ b/rc @@ -0,0 +1 @@ +export ANSIBLE_CONFIG=~/.ansible.cfg diff --git a/roles/common/handlers/main.yml b/roles/common/handlers/main.yml new file mode 100644 index 000000000..3d83a193f --- /dev/null +++ b/roles/common/handlers/main.yml @@ -0,0 +1,4 @@ +--- + +- name: "update apt cache" + action: apt update-cache=yes diff --git a/roles/common/tasks/main.yml b/roles/common/tasks/main.yml new file mode 100644 index 000000000..55fcef580 --- /dev/null +++ b/roles/common/tasks/main.yml @@ -0,0 +1,28 @@ +--- +## Common to all the ceph nodes +# + +- name: Install dependencies + apt: pkg={{ item }} state=installed update_cache=yes # we update the cache just in case... 
+ with_items: + - python-pycurl + - ntp + +- name: Install the ceph key + apt_key: url={{ apt_key }} state=present + +- name: Add ceph repository + apt_repository: repo='deb http://ceph.com/debian-{{ ceph_release }}/ {{ ansible_lsb.codename }} main' state=present + +- name: Install ceph + apt: pkg={{ item }} state=latest + with_items: + - ceph + - ceph-common #| + - ceph-fs-common #|--> yes, they are already all dependencies of 'ceph' + - ceph-fuse #|--> however, while proceeding with rolling upgrades and the 'ceph' package upgrade + - ceph-mds #|--> they don't get updated, so we need to force them + - libcephfs1 #| + +- name: Generate ceph configuration file + template: src=roles/common/templates/ceph.conf.j2 dest=/etc/ceph/ceph.conf owner=root group=root mode=0644 diff --git a/roles/common/templates/ceph.conf.j2 b/roles/common/templates/ceph.conf.j2 new file mode 100644 index 000000000..651c5523a --- /dev/null +++ b/roles/common/templates/ceph.conf.j2 @@ -0,0 +1,58 @@ +# {{ ansible_managed }} + +[global] +{% if cephx %} + auth cluster required = cephx + auth service required = cephx + auth client required = cephx +{% else %} + auth cluster required = none + auth service required = none + auth client required = none + auth supported = none +{% endif %} + fsid = {{ fsid }} + mon_initial_members = {{ hostvars[groups['mons'][0]]['ansible_hostname'] }} +{% if pool_default_pg_num is defined %} + osd pool default pg num = {{ pool_default_pg_num }} +{% endif %} +{% if pool_default_pgp_num is defined %} + osd pool default pgp num = {{ pool_default_pgp_num }} +{% endif %} +{% if pool_default_size is defined %} + osd pool default size = {{ pool_default_size }} +{% endif %} +{% if pool_default_min_size is defined %} + osd pool default min size = {{ pool_default_min_size }} +{% endif %} +{% if pool_default_crush_rule is defined %} + osd pool default crush rule = {{ pool_default_crush_rule }} +{% endif %} + +[mon] +{% for host in groups['mons'] %} + [mon.{{ 
hostvars[host]['ansible_hostname'] }}] + host = {{ hostvars[host]['ansible_hostname'] }} + mon addr = {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }} +{% endfor %} + +[osd] +{% if osd_mkfs_type is defined %} + osd mkfs type = {{ osd_mkfs_type }} +{% endif %} +{% if osd_mkfs_type == "ext4" %} + filestore xattr use omap = true +{% endif %} + osd journal size = {{ journal_size }} +{% if cluster_network is defined %} + cluster_network = {{ cluster_network }} +{% endif %} +{% if public_network is defined %} + public_network = {{ public_network }} +{% endif %} + +[mds] +{% for host in groups['mdss'] %} + [mds.{{ hostvars[host]['ansible_hostname'] }}] + host = {{ hostvars[host]['ansible_hostname'] }} +{% endfor %} diff --git a/roles/mds/tasks/main.yml b/roles/mds/tasks/main.yml new file mode 100644 index 000000000..e30291d1c --- /dev/null +++ b/roles/mds/tasks/main.yml @@ -0,0 +1,27 @@ +--- +## Deploy Ceph metadata server(s) +# + +- name: Copy MDS bootstrap key + copy: src=fetch/{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}/var/lib/ceph/bootstrap-mds/ceph.keyring dest=/var/lib/ceph/bootstrap-mds/ceph.keyring owner=root group=root mode=600 + when: cephx + +- name: Set MDS bootstrap key permissions + file: path=/var/lib/ceph/bootstrap-mds/ceph.keyring mode=0600 owner=root group=root + when: cephx + +- name: Create MDS directory + action: file path=/var/lib/ceph/mds/ceph-{{ ansible_hostname }} state=directory owner=root group=root mode=0755 + when: cephx + +- name: Create MDS keyring + command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.{{ ansible_hostname }} osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-{{ ansible_hostname }}/keyring creates=/var/lib/ceph/mds/ceph-{{ ansible_hostname }}/keyring + when: cephx + changed_when: False + +- name: Set MDS key permissions + file: path=/var/lib/ceph/mds/ceph-{{ ansible_hostname }}/keyring mode=0600 
owner=root group=root + when: cephx + +- name: Start the MDS service and add it to the init sequence + service: name=ceph state=started enabled=yes args=mds diff --git a/roles/mon/tasks/main.yml b/roles/mon/tasks/main.yml new file mode 100644 index 000000000..f4eafc364 --- /dev/null +++ b/roles/mon/tasks/main.yml @@ -0,0 +1,36 @@ +--- +## Deploy Ceph monitor(s) +# + +- name: Create monitor initial keyring + command: ceph-authtool /var/lib/ceph/tmp/keyring.mon.{{ ansible_hostname }} --create-keyring --name=mon. --add-key={{ monitor_secret }} --cap mon 'allow *' creates=/var/lib/ceph/tmp/keyring.mon.{{ ansible_hostname }} + +- name: Set initial monitor key permissions + file: path=/var/lib/ceph/tmp/keyring.mon.{{ ansible_hostname }} mode=0600 owner=root group=root + +- name: Create monitor directory + action: file path=/var/lib/ceph/mon/ceph-{{ ansible_hostname }} state=directory owner=root group=root mode=0755 + +- name: Ceph monitor mkfs + command: ceph-mon --mkfs -i {{ ansible_hostname }} --keyring /var/lib/ceph/tmp/keyring.mon.{{ ansible_hostname }} creates=/var/lib/ceph/mon/ceph-{{ ansible_hostname }}/keyring + +- name: Start the monitor service and add it to the init sequence + service: name=ceph state=started enabled=yes args=mon + +# Wait for mon discovery and quorum resolution +# the admin key is not instantly created so we have to wait a bit +# + +- name: Wait until the client.admin key exists + command: stat /etc/ceph/ceph.client.admin.keyring + register: result + until: result.rc == 0 + changed_when: False + +- name: Copy keys to the ansible server + fetch: src={{ item }} dest=fetch/ + when: ansible_hostname == hostvars[groups['mons'][0]]['ansible_hostname'] and cephx + with_items: + - /etc/ceph/ceph.client.admin.keyring # just in case another application needs it + - /var/lib/ceph/bootstrap-osd/ceph.keyring # this handles the non-colocation case + - /var/lib/ceph/bootstrap-mds/ceph.keyring diff --git a/roles/osd/tasks/main.yml b/roles/osd/tasks/main.yml new 
file mode 100644 index 000000000..68ac4c7eb --- /dev/null +++ b/roles/osd/tasks/main.yml @@ -0,0 +1,63 @@ +--- +## Deploy Ceph Object Storage Daemon(s) +# + +- name: Install dependencies + apt: pkg=parted state=present + +- name: Copy OSD bootstrap key + copy: src=fetch/{{ hostvars[groups['mons'][0]]['ansible_hostname'] }}/var/lib/ceph/bootstrap-osd/ceph.keyring dest=/var/lib/ceph/bootstrap-osd/ceph.keyring owner=root group=root mode=600 + when: cephx + +- name: Set OSD bootstrap key permissions + file: path=/var/lib/ceph/bootstrap-osd/ceph.keyring mode=0600 owner=root group=root + when: cephx + +# NOTE (leseb): the current behavior of ceph-disk is to fail when the device is mounted "stderr: ceph-disk: Error: Device is mounted: /dev/sdb1" +# the return code is 1, which makes sense; however, ideally, if ceph-disk detects a ceph partition +# it should exit with rc=0 and do nothing unless we pass something like --force +# As a final word, I prefer to keep the partition check instead of running ceph-disk prepare with "ignore_errors: True" +# I believe it's safer +# + +- name: Check if a partition named 'ceph' exists + shell: parted {{ item }} print | egrep -sq '^ 1.*ceph' + ignore_errors: True + with_items: devices + register: parted + changed_when: False + +# Prepare means +# - create GPT partition +# - mark the partition with the ceph type uuid +# - create a file system +# - mark the fs as ready for ceph consumption +# - entire data disk is used (one big partition) +# - a new partition is added to the journal disk (so it can be easily shared) +# + +# NOTE (leseb): the prepare process must be parallelized somehow... 
+# if you have 64 disks with 4TB each, this will take a while +# since Ansible will process the loop sequentially + +- name: Prepare OSD disk(s) + command: ceph-disk prepare {{ item.1 }} {{ journal_device }} + when: item.0.rc != 0 + with_together: + - parted.results + - devices + +# Activate means: +# - mount the volume in a temp location +# - allocate an osd id (if needed) +# - remount in the correct location /var/lib/ceph/osd/$cluster-$id +# - start ceph-osd +# + +- name: Activate OSD(s) + command: ceph-disk activate {{ item }}1 + with_items: devices + changed_when: False + +- name: Start the OSD service and add it to the init sequence + service: name=ceph state=started enabled=yes diff --git a/rolling_update.yml b/rolling_update.yml new file mode 100644 index 000000000..66769ec23 --- /dev/null +++ b/rolling_update.yml @@ -0,0 +1,46 @@ +--- +# This playbook does a rolling update of all the Ceph services +# Change the value of serial: to adjust the number of servers to be updated at once. +# +# The four roles that apply to the ceph hosts will be applied: common, +# mon, osd and mds. So any changes to configuration, package updates, etc, +# will be applied as part of the rolling update process. +# + +# /!\ DO NOT FORGET TO CHANGE THE RELEASE VERSION FIRST! 
/!\ + +- hosts: all + sudo: True + roles: + - common + +- hosts: mons + serial: 1 + sudo: True + roles: + - mon + post_tasks: + - name: restart monitor(s) + service: name=ceph state=restarted args=mon + +- hosts: osds + serial: 1 + sudo: True + roles: + - osd + post_tasks: + - name: restart object storage daemon(s) + command: service ceph-osd-all restart + when: ansible_distribution == "Ubuntu" + - name: restart object storage daemon(s) + service: name=ceph state=restarted args=osd + when: ansible_distribution == "Debian" + +- hosts: mdss + serial: 1 + sudo: True + roles: + - mds + post_tasks: + - name: restart metadata server(s) + service: name=ceph state=restarted args=mds diff --git a/site.yml b/site.yml new file mode 100644 index 000000000..85e3adf89 --- /dev/null +++ b/site.yml @@ -0,0 +1,22 @@ +--- +# Defines deployment design and assigns role to server groups + +- hosts: all + sudo: True + roles: + - common + +- hosts: mons + sudo: True + roles: + - mon + +- hosts: osds + sudo: True + roles: + - osd + +- hosts: mdss + sudo: True + roles: + - mds