roll back the previous ceph-common change

Changing the name of the directory causes issues with git subtree, which
would create new commits, so a symlink is created instead to keep Vagrant happy.

Signed-off-by: Sébastien Han <seb@redhat.com>
pull/594/head
Sébastien Han 2016-03-02 18:44:36 +01:00
parent 12f5ac5678
commit 1ebb4de7f3
62 changed files with 2892 additions and 12 deletions


@@ -20,6 +20,7 @@
vars:
github: ceph/ansible
roles:
- ceph-common
- ceph-mon
- ceph-osd
- ceph-mds
@@ -42,18 +43,14 @@
command: git subtree split --prefix=roles/{{ item }} -b {{ item }} --squash
args:
chdir: "{{ basedir }}"
with_items:
- roles
- ceph.ceph-common
with_items: roles
- name: adds remote github repos for the splits
tags: split
command: git remote add {{ item }} git@github.com:{{ github }}-{{ item }}.git
args:
chdir: "{{ basedir }}"
with_items:
- roles
- ceph-common
with_items: roles
- name: adds upstream remote
tags: update
@@ -70,9 +67,3 @@
args:
chdir: "{{ basedir }}"
with_items: roles
- name: update the split repos from master (ceph-common)
tags: update
shell: git push ceph-common $(git subtree split --prefix roles/ceph.ceph-common master):master --force
args:
chdir: "{{ basedir }}"


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2014] [Sébastien Han]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,79 @@
# Ansible role: Ceph Common
This role does several things prior to bootstrapping your Ceph cluster:
* Checks the system and validates that Ceph can be installed
* Tunes the operating system if the node is an OSD server
* Installs Ceph
* Generates `ceph.conf`
# Requirements
Move the `plugins/actions/config_template.py` file to your top-level playbook directory.
Edit your `ansible.cfg` like so:
`action_plugins = plugins/actions`
Depending on how you manage your playbook, the path might differ, so adjust the entry accordingly.
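The `action_plugins` line lives in the `[defaults]` section of `ansible.cfg`. For illustration only (a sketch, not a task shipped with this role), the `config_template` action renders a template and then merges a dictionary of overrides into the result before copying it to the host:
```
# hypothetical task using the config_template action; src/dest are example values
- name: render ceph.conf with overrides
  config_template:
    src: ceph.conf.j2
    dest: /etc/ceph/ceph.conf
    config_overrides: "{{ ceph_conf_overrides }}"
    config_type: ini
```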
# Role variables
Have a look at `defaults/main.yml`.
## Mandatory variables
* Install source, choose one of these:
* `ceph_stable`
* `ceph_dev`
* `ceph_stable_ice`
* `ceph_stable_rh_storage`
* `journal_size`
* `monitor_interface`
* `public_network`
* `cluster_network`
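A minimal `group_vars/all` sketch covering the variables above (hypothetical values; pick exactly one install source):
```
ceph_stable: true
journal_size: 5120               # MB
monitor_interface: eth1
public_network: 192.168.42.0/24
cluster_network: 192.168.43.0/24
```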
## Handlers
* update apt cache
* restart ceph-mon
* restart ceph-osd
* restart ceph-mds
* restart ceph-rgw
* restart ceph-restapi
* restart apache2
# Dependencies
None
# Example Playbook
```
- hosts: servers
remote_user: ubuntu
roles:
- { role: leseb.ceph-common }
```
# Misc
This role is a **mandatory** dependency for the following roles:
* ceph-mon
* ceph-osd
* ceph-mds
* ceph-rgw
* ceph-restapi
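In practice this means `ceph-common` must be applied before any of them, as in the split playbook at the top of this commit; a minimal sketch:
```
- hosts: mons
  roles:
    - ceph-common   # must come first
    - ceph-mon
```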
# Contribution
**THIS REPOSITORY DOES NOT ACCEPT PULL REQUESTS**.
**PULL REQUESTS MUST GO THROUGH [CEPH-ANSIBLE](https://github.com/ceph/ceph-ansible)**.
# License
Apache
# Author Information
This role was created by [Sébastien Han](http://sebastien-han.fr/).


@@ -0,0 +1,320 @@
---
# You can override vars by using host or group vars
###########
# GENERAL #
###########
fetch_directory: fetch/
###########
# INSTALL #
###########
mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mds_group_name: mdss
restapi_group_name: restapis
# If check_firewall is true, then ansible will try to determine if the
# Ceph ports are blocked by a firewall. If the machine running ansible
# cannot reach the Ceph ports for some other reason, you may need or
# want to set this to False to skip those checks.
check_firewall: True
# This variable determines if ceph packages can be updated. If False, the
# package resources will use "state=present". If True, they will use
# "state=latest".
upgrade_ceph_packages: False
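# For reference, the install tasks later in this role resolve this flag into a
# package state with an expression like:
#   state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"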
# /!\ EITHER ACTIVATE ceph_stable OR ceph_stable_ice OR ceph_dev /!\
debian_package_dependencies:
- python-pycurl
- hdparm
- ntp
redhat_package_dependencies:
- python-pycurl
- hdparm
- yum-plugin-priorities.noarch
- epel-release
- ntp
- python-setuptools
# Whether or not to install the ceph-test package.
ceph_test: False
## Configure package origin
#
ceph_origin: 'upstream' # or 'distro'
# 'distro' means that no separate repo file will be added
# you will get whatever version of Ceph is included in your Linux distro.
#
ceph_use_distro_backports: false # DEBIAN ONLY
# STABLE
########
# COMMUNITY VERSION
ceph_stable: false # use ceph stable branch
ceph_stable_key: https://download.ceph.com/keys/release.asc
ceph_stable_release: infernalis # ceph stable release
ceph_stable_repo: "http://ceph.com/debian-{{ ceph_stable_release }}"
###################
# Stable Releases #
###################
ceph_stable_releases:
- dumpling
- emperor
- firefly
- giant
- hammer
# Use the option below to specify your applicable package tree, e.g. when using non-LTS Ubuntu versions
# # for a list of available Debian distributions, visit http://ceph.com/debian-{{ ceph_stable_release }}/dists/
# for more info read: https://github.com/ceph/ceph-ansible/issues/305
#ceph_stable_distro_source:
# This option is needed for _both_ stable and dev version, so please always fill the right version
# # for supported distros, see http://ceph.com/rpm-{{ ceph_stable_release }}/
ceph_stable_redhat_distro: el7
# ENTERPRISE VERSION ICE (old, prior to the 1.3)
ceph_stable_ice: false # use Inktank Ceph Enterprise
#ceph_stable_ice_url: https://download.inktank.com/enterprise
# these two variables are used in `with_items` and, starting
# with Ansible 2.0, they need to be defined even if the task's
# `when` clause doesn't evaluate to true
ceph_stable_ice_temp_path: /opt/ICE/ceph-repo/
ceph_stable_ice_kmod: 3.10-0.1.20140702gitdc9ac62.el7.x86_64
#ceph_stable_ice_distro: rhel7 # Please check the download website for the supported versions.
#ceph_stable_ice_version: 1.2.2
#ceph_stable_ice_kmod_version: 1.2
#ceph_stable_ice_user: # htaccess user
#ceph_stable_ice_password: # htaccess password
# ENTERPRISE VERSION RED HAT STORAGE (from 1.3)
# This version is only supported on RHEL 7.1
# As of RHEL 7.1, libceph.ko and rbd.ko are now included in Red Hat's kernel
# packages natively. The RHEL 7.1 kernel packages are more stable and secure than
# using these 3rd-party kmods with RHEL 7.0. Please update your systems to RHEL
# 7.1 or later if you want to use the kernel RBD client.
#
# The CephFS kernel client is undergoing rapid development upstream, and we do
# not recommend running the CephFS kernel module on RHEL 7's 3.10 kernel at this
# time. Please use ELRepo's latest upstream 4.x kernels if you want to run CephFS
# on RHEL 7.
#
ceph_stable_rh_storage: false
ceph_stable_rh_storage_cdn_install: false # assumes all the nodes can connect to cdn.redhat.com
ceph_stable_rh_storage_iso_install: false # usually used when nodes don't have access to cdn.redhat.com
#ceph_stable_rh_storage_iso_path:
ceph_stable_rh_storage_mount_path: /tmp/rh-storage-mount
ceph_stable_rh_storage_repository_path: /tmp/rh-storage-repo # where to copy iso's content
# DEV
# ###
ceph_dev: false # use ceph development branch
ceph_dev_key: https://download.ceph.com/keys/autobuild.asc
ceph_dev_branch: master # development branch you would like to use e.g: master, wip-hack
# supported distros are centos6, centos7, fc17, fc18, fc19, fc20, fedora17, fedora18,
# fedora19, fedora20, opensuse12, sles0. (see http://gitbuilder.ceph.com/).
# For rhel, please pay attention to the versions: 'rhel6 3' or 'rhel 4', the fullname is _very_ important.
ceph_dev_redhat_distro: centos7
######################
# CEPH CONFIGURATION #
######################
## Ceph options
#
# Each cluster requires a unique, consistent filesystem ID. By
# default, the playbook generates one for you and stores it in a file
# in `fetch_directory`. If you want to customize how the fsid is
# generated, you may find it useful to disable fsid generation to
# avoid cluttering up your ansible repo. If you set `generate_fsid` to
# false, you *must* generate `fsid` in another way.
fsid: "{{ cluster_uuid.stdout }}"
generate_fsid: true
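# Example (hypothetical value) of pinning a static fsid instead of generating one:
#   generate_fsid: false
#   fsid: 4a158d27-f750-41d5-9e7f-26ce4c9d2d45   # e.g. taken from `uuidgen`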
cephx: true
cephx_require_signatures: true # Kernel RBD does NOT support signatures for Kernels < 3.18!
cephx_cluster_require_signatures: true
cephx_service_require_signatures: false
max_open_files: 131072
disable_in_memory_logs: true # set this to false while enabling the options below
# Debug logs
enable_debug_global: false
debug_global_level: 20
enable_debug_mon: false
debug_mon_level: 20
enable_debug_osd: false
debug_osd_level: 20
enable_debug_mds: false
debug_mds_level: 20
## Client options
#
rbd_cache: "true"
rbd_cache_writethrough_until_flush: "true"
rbd_concurrent_management_ops: 20
rbd_client_directories: false # this will create rbd_client_log_path and rbd_client_admin_socket_path directories with proper permissions; this WON'T work if libvirt and kvm are installed
rbd_client_log_file: /var/log/rbd-clients/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
rbd_client_log_path: /var/log/rbd-clients/
rbd_client_admin_socket_path: /var/run/ceph/rbd-clients # must be writable by QEMU and allowed by SELinux or AppArmor
rbd_default_features: 3
rbd_default_map_options: rw
rbd_default_format: 2
## Monitor options
#
monitor_interface: interface
mon_use_fqdn: false # if set to true, the MON name used will be the fqdn in the ceph.conf
mon_osd_down_out_interval: 600
mon_osd_min_down_reporters: 7 # number of OSDs per host + 1
mon_clock_drift_allowed: .15
mon_clock_drift_warn_backoff: 30
mon_osd_full_ratio: .95
mon_osd_nearfull_ratio: .85
mon_osd_report_timeout: 300
mon_pg_warn_max_per_osd: 0 # disable complaints about low PG numbers per OSD
mon_osd_allow_primary_affinity: "true"
mon_pg_warn_max_object_skew: 10 # set to 20 or higher to disable complaints about the number of PGs being too low when some pools have very few objects, bringing down the average number of objects per pool. This happens when running RadosGW. Ceph default is 10
## OSD options
#
journal_size: 0
pool_default_pg_num: 128
pool_default_pgp_num: 128
pool_default_size: 2
pool_default_min_size: 1
public_network: 0.0.0.0/0
cluster_network: "{{ public_network }}"
osd_mkfs_type: xfs
osd_mkfs_options_xfs: -f -i size=2048
osd_mount_options_xfs: noatime,largeio,inode64,swalloc
osd_mon_heartbeat_interval: 30
# CRUSH
pool_default_crush_rule: 0
osd_crush_update_on_start: "true"
# Object backend
osd_objectstore: filestore
# xattrs. by default, 'filestore xattr use omap' is set to 'true' if
# 'osd_mkfs_type' is set to 'ext4'; otherwise it isn't set. This can
# be set to 'true' or 'false' to explicitly override those
# defaults. Leave it 'null' to use the default for your chosen mkfs
# type.
filestore_xattr_use_omap: null
# Performance tuning
filestore_merge_threshold: 40
filestore_split_multiple: 8
osd_op_threads: 8
filestore_op_threads: 8
filestore_max_sync_interval: 5
osd_max_scrubs: 1
# The OSD scrub window can be configured starting hammer only!
# Default settings will define a 24h window for the scrubbing operation
# The window is predefined from 0am midnight to midnight the next day.
osd_scrub_begin_hour: 0
osd_scrub_end_hour: 24
# Recovery tuning
osd_recovery_max_active: 5
osd_max_backfills: 2
osd_recovery_op_priority: 2
osd_recovery_max_chunk: 1048576
osd_recovery_threads: 1
# Deep scrub
osd_scrub_sleep: .1
osd_disk_thread_ioprio_class: idle
osd_disk_thread_ioprio_priority: 0
osd_scrub_chunk_max: 5
osd_deep_scrub_stride: 1048576
## MDS options
#
mds_use_fqdn: false # if set to true, the MDS name used will be the fqdn in the ceph.conf
## Rados Gateway options
#
#radosgw_dns_name: your.subdomain.tld # subdomains used by radosgw. See http://ceph.com/docs/master/radosgw/config/#enabling-subdomain-s3-calls
radosgw_frontend: civetweb # supported options are 'apache' or 'civetweb', also edit roles/ceph-rgw/defaults/main.yml
radosgw_civetweb_port: 8080 # on Infernalis we get: "set_ports_option: cannot bind to 80: 13 (Permission denied)"
radosgw_keystone: false # activate OpenStack Keystone options full detail here: http://ceph.com/docs/master/radosgw/keystone/
#radosgw_keystone_url: # url:admin_port ie: http://192.168.0.1:35357
radosgw_keystone_admin_token: password
radosgw_keystone_accepted_roles: Member, _member_, admin
radosgw_keystone_token_cache_size: 10000
radosgw_keystone_revocation_internal: 900
radosgw_s3_auth_use_keystone: "true"
radosgw_nss_db_path: /var/lib/ceph/radosgw/ceph-radosgw.{{ ansible_hostname }}/nss
# Toggle 100-continue support for Apache and FastCGI
# WARNING: Changing this value will cause an outage of Apache while it is reinstalled on RGW nodes
http_100_continue: false
# Rados Gateway options
redhat_distro_ceph_extra: centos6.4 # supported distros are centos6.3, centos6.4, centos6, fedora18, fedora19, opensuse12.2, rhel6.3, rhel6.4, rhel6.5, rhel6, sles11sp2
email_address: foo@bar.com
## REST API options
#
restapi_interface: "{{ monitor_interface }}"
restapi_port: 5000
restapi_base_url: /api/v0.1
restapi_log_level: warning # available levels are: critical, error, warning, info, debug
## Testing mode
# enable this mode _only_ when you have a single node
# if you don't want it, keep the option commented
#common_single_host_mode: true
###################
# CONFIG OVERRIDE #
###################
# Ceph configuration file override.
# This allows you to specify more configuration options
# using an INI style format.
# The following sections are supported: [global], [mon], [osd], [mds], [rgw]
#
# Example:
# ceph_conf_overrides:
# global:
# foo: 1234
# bar: 5678
#
ceph_conf_overrides: {}
#############
# OS TUNING #
#############
disable_transparent_hugepage: true
disable_swap: true
os_tuning_params:
- { name: kernel.pid_max, value: 4194303 }
- { name: fs.file-max, value: 26234859 }
- { name: vm.zone_reclaim_mode, value: 0 }
- { name: vm.vfs_cache_pressure, value: 50 }
- { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }
##########
# DOCKER #
##########
docker: false


@@ -0,0 +1,41 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.10 (GNU/Linux)
mQGiBE1Rr28RBADCxdpLV3ea9ocpS/1+UCvHqD5xjmlw/9dmji4qrUX0+IhPMNuA
GBBt2CRaR7ygMF5S0NFXooegph0/+NT0KisLIuhUI3gde4SWb5jsb8hpGUse9MC5
DN39P46zZSpepIMlQuQUkge8W/H2qBu10RcwQhs7o2fZ1zK9F3MmRCkBqwCggpap
GsOgE2IlWjcztmE6xcPO0wED/R4BxTaQM+jxIjylnHgn9PYy6795yIc/ZoYjNnIh
QyjqbLWnyzeTmjPBwcXNljKqzEoA/Cjb2gClxHXrYAw7bGu7wKbnqhzdghSx7ab+
HwIoy/v6IQqv+EXZgYHonqQwqtgfAHp5ON2gWu03cHoGkXfmA4qZIoowqMolZhGo
cF30A/9GotDdnMlqh8bFBOCMuxfRow7H8RpfL0fX7VHA0knAZEDk2rNFeebL5QKH
GNJm9Wa6JSVj1NUIaz4LHyravqXi4MXzlUqauhLHw1iG+qwZlPM04z+1Dj6A+2Hr
b5UxI/I+EzmO5OYa38YWOqybNVBH0wO+sMCpdBq0LABa8X29LbRPQ2VwaCBhdXRv
bWF0ZWQgcGFja2FnZSBidWlsZCAoQ2VwaCBhdXRvbWF0ZWQgcGFja2FnZSBidWls
ZCkgPHNhZ2VAbmV3ZHJlYW0ubmV0PohgBBMRAgAgAhsDBgsJCAcDAgQVAggDBBYC
AwECHgECF4AFAlEUm1YACgkQbq6uIgPDlRqTUACeMqJ+vwatwb+y/KWeNfmgtQ8+
kDwAn0MHwY42Wmb7FA891j88enooCdxRuQQNBE1Rr28QEACKG04kxGY1cwGoInHV
P6z1+8oqGiaiYWFflYRtSiwoUVtl30T1sMOSzoEvmauc+rmBBfsyaBb8DLDUIgGK
v1FCOY/tfqnOyQXotPjgaLeCtK5A5Z5D212wbskf5fRHAxiychwKURiEeesRa7EW
rF6ohFxOTy9NOlFi7ctusShw6Q2kUtN7bQCX9hJdYs7PYQXvCXvW8DNt7IitF7Mp
gMHNcj0wik6p38I4s7pqK6mqP4AXVVSWbJKr/LSz8bI8KhWRAT7erVAZf6FElR2x
ZVr3c4zsE2HFpnZTsM5y/nj8fUkgKGl8OfBuUoh+MCVfnPmE6sgWfDTKkwWtUcmL
6V9UQ1INUJ3sk+XBY9SMNbOn04su9FjQyNEMI/3VK7yuyKBRAN7IIVgP2ch499m6
+YFV9ZkG3JSTovNiqSpQouW7YPkS+8mxlPo03LQcU5bHeacBl0T8Xjlvqu6q279E
liHul4huKL0+myPN4DtmOTh/kwgSy3BGCBdS+wfAJSZcuKI7pk7pHGCdUjNMHQZm
PFbwzp33bVLd16gnAx0OW5DOn6l0VfgIQNSJ2rn7WZ5jdyg/Flp2VlWVtAHFLzkC
a+LvQ5twSuzrV/VipSr3xz3pTDLY+ZxDztvrgA6AST8+sdq6uQTYjwUQV0wzanvp
9hkC5eqRY6YlzcgMkWFv8DCIEwADBQ//ZQaeVmG6T5vyfXf2JrCipmI4MAdO+ezE
tWE82wgixlCvvm26UmUejCYgtD6DmwY/7/bIjvJDhUwP0+hAHHOpR62gncoMtbMr
yHpm3FvYH58JNk5gx8ZA322WEc2GCRCQzrMQoMKBcpZY/703GpQ4l3RZ7/25gq7A
NohV5zeddFQftc05PMBBJLU3U+lrnahJS1WaOXNQzS6oVj9jNda1jkgcQni6QssS
IMT6rAPsVbGJhe9mxr2VWdQ90QlubpszIeSJuqqJxLwqH8XHXZmQOYxmyVP9a3pF
qWDmsNxDA8ttYnMIc+nUAgCDJ84ScwQ1GvoCUD1b1cFNzvvhEHsNb4D/XbdrFcFG
wEkeyivUsojdq2YnGjYSgauqyNWbeEgBrWzUe5USYysmziL/KAubcUjIbeRGxyPS
6iQ2kbvfEJJPgocWTfLs5j61FObO+MVlj+PEmxWbcsIRv/pnG2V2FPJ8evhzgvp7
cG9imZPM6dWHzc/ZFdi3Bcs51RtStsvPqXv4icKIi+01h1MLHNBqwuUkIiiK7ooM
lvnp+DiEsVSuYYKBdGTi+4+nduuYL2g8CTNJKZuC46dY7EcE3lRYZlxl7dwN3jfL
PRlnNscs34dwhZa+b70Flia0U1DNF4jrIFFBSHD3TqMg0Z6kxp1TfxpeGOLOqnBW
rr0GKehu9CGISQQYEQIACQIbDAUCURSbegAKCRBurq4iA8OVGv9TAJ9EeXVrRS3p
PZkT1R21FszUc9LvmgCeMduh5IPGFWSx9MjUc7/j1QKYm7g=
=per8
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBFX4hgkBEADLqn6O+UFp+ZuwccNldwvh5PzEwKUPlXKPLjQfXlQRig1flpCH
E0HJ5wgGlCtYd3Ol9f9+qU24kDNzfbs5bud58BeE7zFaZ4s0JMOMuVm7p8JhsvkU
C/Lo/7NFh25e4kgJpjvnwua7c2YrA44ggRb1QT19ueOZLK5wCQ1mR+0GdrcHRCLr
7Sdw1d7aLxMT+5nvqfzsmbDullsWOD6RnMdcqhOxZZvpay8OeuK+yb8FVQ4sOIzB
FiNi5cNOFFHg+8dZQoDrK3BpwNxYdGHsYIwU9u6DWWqXybBnB9jd2pve9PlzQUbO
eHEa4Z+jPqxY829f4ldaql7ig8e6BaInTfs2wPnHJ+606g2UH86QUmrVAjVzlLCm
nqoGymoAPGA4ObHu9X3kO8viMBId9FzooVqR8a9En7ZE0Dm9O7puzXR7A1f5sHoz
JdYHnr32I+B8iOixhDUtxIY4GA8biGATNaPd8XR2Ca1hPuZRVuIiGG9HDqUEtXhV
fY5qjTjaThIVKtYgEkWMT+Wet3DPPiWT3ftNOE907e6EWEBCHgsEuuZnAbku1GgD
LBH4/a/yo9bNvGZKRaTUM/1TXhM5XgVKjd07B4cChgKypAVHvef3HKfCG2U/DkyA
LjteHt/V807MtSlQyYaXUTGtDCrQPSlMK5TjmqUnDwy6Qdq8dtWN3DtBWQARAQAB
tCpDZXBoLmNvbSAocmVsZWFzZSBrZXkpIDxzZWN1cml0eUBjZXBoLmNvbT6JAjgE
EwECACIFAlX4hgkCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOhKwsBG
DzmUXdIQAI8YPcZMBWdv489q8CzxlfRIRZ3Gv/G/8CH+EOExcmkVZ89mVHngCdAP
DOYCl8twWXC1lwJuLDBtkUOHXNuR5+Jcl5zFOUyldq1Hv8u03vjnGT7lLJkJoqpG
l9QD8nBqRvBU7EM+CU7kP8+09b+088pULil+8x46PwgXkvOQwfVKSOr740Q4J4nm
/nUOyTNtToYntmt2fAVWDTIuyPpAqA6jcqSOC7Xoz9cYxkVWnYMLBUySXmSS0uxl
3p+wK0lMG0my/gb+alke5PAQjcE5dtXYzCn+8Lj0uSfCk8Gy0ZOK2oiUjaCGYN6D
u72qDRFBnR3jaoFqi03bGBIMnglGuAPyBZiI7LJgzuT9xumjKTJW3kN4YJxMNYu1
FzmIyFZpyvZ7930vB2UpCOiIaRdZiX4Z6ZN2frD3a/vBxBNqiNh/BO+Dex+PDfI4
TqwF8zlcjt4XZ2teQ8nNMR/D8oiYTUW8hwR4laEmDy7ASxe0p5aijmUApWq5UTsF
+s/QbwugccU0iR5orksM5u9MZH4J/mFGKzOltfGXNLYI6D5Mtwrnyi0BsF5eY0u6
vkdivtdqrq2DXY+ftuqLOQ7b+t1RctbcMHGPptlxFuN9ufP5TiTWSpfqDwmHCLsT
k2vFiMwcHdLpQ1IH8ORVRgPPsiBnBOJ/kIiXG2SxPUTjjEGOVgeA
=/Tod
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,51 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)
mQINBFJxnXIBEAC4QhhJgpTJFeZ9pHLuGseS2C/MQwYzcSyJEoJoEnbooS8uwNZt
fMHBwcacML/Yq7CkVCXYbMM1tQVTTU31wnAZIb5nrvYHL4MNWoTzEFIEjAD0VHZW
bRmydFa9aDNjwrVE4xXhvMClmrv51we4qW+Ht5s3I8nn+hJRc6oeFKw7FhYqAsyL
htYRzNg0Ji+MiqBgeAC6IEwmKK/lsmz4FK/5RLnu9tKMJhziQZNQbHvv5pntVcnI
M4UJdUtepaf/GUk256MmFW1Qmfv2KxUlEcms1fPusBFjQDnCZi+qRuezTjpPsLAx
ramqc8Dj6NfylSm374CKnkpQSB6Mn+78cwr2SzUB3mUoXq/IgZ3RRhNhVV6NzZpM
u9IvHE+xL80c/eGXgIx/q8uP8Mmi3PJxt+WS+X0m5pvbIEFZ305vya6ovlUK+kIV
MnWj8jcMIgFO2LM+UM51W5jrFjmgB/GcIof2G3iddM6r+ZGCqqzw/8blGwOy+sny
FsOQ1wwb4/ew2ehxU7fDSH/3Ohujs+2qVOsQUUSx6upm5cGLyJzF8YgqRz7NkPG/
7d9XA3OaO8tV5NtwU2jKInOeUUA56iAqQbY/StJIKMTCBe9263sHGIw2B1QnEHce
ALgQ5rxajBs2qfkweFYhax5LjRkMmtmpJ/Qknyryy0DpBPorxbu3byAlEwARAQAB
tB1Jbmt0YW5rIDxyZWxlYXNlQGlua3RhbmsuY29tPokCOAQTAQIAIgUCUnGdcgIb
AwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQVDjHAZ3O7q26CQ/9GELVHFtD
VSAYeYoDxpiKf9/8ZWzdKhF9XAcf/QbzNACFDBMpwONDtF1pLPQbSQYvMyGIQUnW
qP+s6mKHwlc5kl/0uVZeNDE0HRmxD6hSgJCLFwkcl5Ugzdnn71dHrr+87KsUdR2c
g9sn5MFpjToT2wb8BcdaDb7tEfyMfY/ehxpUK2SPOWvwf2sKJRt5xtfEjoY9aGve
75TCAh6yzjdh9XJ7IjGrqgNtdWBwNwsua3kucB2W4ZEe0QbmevemQC+JUjL0qjm/
dJrzWj1W3rj5xrRdEFiKlUU1f/of7CZU7ogWzLHFQFZahifPBaG7q6z5WVvGXY1x
xz8Tt3125kDvcq5dQPUqqqw43G0MTiYh26VOF1JK3CkdzKxTVX/bAx+mQGDqRKG4
HfM+PKnhU2uV5k8Km/5aqs6LkNlGffOHBqoZ2ySUYwA1XmcevCBD9EBabl8keLlO
DXLs1O/zdsAQypu8sUPD82ogH30QRALZuBoXPKEK+tNYRHvb20YbxSuzyRYq4JI+
ZqQ46lb1l77UfBRlmHs72eS1B1nS5si+UfsWR13O/UVhHQMk2A6lT+ZpAy/wSp1k
4RTXO1uxTduuGznRq3JyYXjpuQzds/3mWz/KG9BOWiJJGMEdmlB0Y4xQtuWAbi2B
VQu2eFkWJHRTcT1AIDobvyITVFHtEGeycBu5Ag0EUnGdcgEQAMHKeJvO9soO0uNc
z7LwNQZAHJdWp+mxurqFHAGqEMo9BlZcr05qk8cqeoyDaUJeER5qkncBFfIS+hCM
j6yG0DHX+rfrxPQktso6Gy+G3VJVCl4uzRB1aXY14KYMmLjQv4rdirfvkSV7Ikqk
PZqYhQIjoWOb4Ft/yejExCn5I4e7J4JDmpggH8YLHCdEqhv+gRPoHQskYxFcHkC6
eQ0+/pipSA46yQ2TGwJQyDWj7FPvhBEn/f4m8JCK+zmVp54r4Q+KpZYQYOrzgArV
XjCeDcuhXLofRJpDGpPGUzueahs7K9iQLFa1ARw3zxqPNJVDpR2hFddZfVCNmb6A
ZWqN4CIz5fDEYOBmZlJX7i+Q7RkFXs7+UJubHVdVSel7Ufhy12DXeanTWt2K1A12
9e5FFsIz2YuuLtnMi2HQ9T734w1CtaWxe/4s+xrqeB91+LVJ9V3Nk+2tdxso2zR4
zHXVIB6YRf0Azg4FhJ73equ7jxgZjwO7J37gwOsWETe7wmJQIex6EpLjjZGEAyfp
VilY/R24KwTq6Xt4wkinI+VD1Ze1kfFqyozUPkL86XvCbZT4UCBW7Nkk9VJ+8OgK
15gbljn6hjRDRo5FzkrLeesFTcspNRmSkcrJyw2WDceZtPdrvPDz+dg4DiqHUSsO
WolVgFWcIW5sTcybO+X0cRzXo9+JABEBAAGJAh8EGAECAAkFAlJxnXICGwwACgkQ
VDjHAZ3O7q00chAAlOWLNeFIjoxSVYF8TNIK/Ao8FBbk/Pg5SpsIxBfrZiDj5EwW
Gv8fKN4b0OBw20Wh53Yiv62lVSVyIe7SRVX6LeUC4OD+T+uDZ6LdnPGIHA4eCw5r
Zy2AIi6CExT07eipAQmAdt2VhBtYYITLXr34uvLZDkMn+hraEQs5h3Y5j99qDwZV
68Z+b5PK0v/Bmyd+biGQqLIbZypmiOUQqq7OkWPBIAg5P6y3PIi1KyWuRPvxJ33f
sIuDVVznAxSzKuYjwJ9QMvR6hTSyOOSFJsGr58Pc3p1atvEsiW4xQioYYieYZuyx
cb5ZjJmnnmv8AN1dwq3ETW1igiw/LTxATLZEHV4GM6i0SDghg9ricoyEqLzTzn2y
cjEDrBXQBucDDthuFk82FR8clihjeizKuqWo14QkjMUt/tW6q1eveamUs8aKVIpz
lNeHHf/aU/NA+w4rLNs49rQeKFOCA5ySSb55R6EBABtCnUdul3Xye5azKMfTVA07
PweRzqCUKZuXYmbaZ8DzalrNL7a8btCHfzII8ohj+br1WeynFMpYqhgKfitlM/Ie
KlnNRKxeDgU6NN27y6DKIzzrdpz36yNaFtdn3O0XcNzayj80qJTLNA96nI0bcpoy
xvbSx50M7NRbYugFLkz54bo8U3fOXKdw7qC2eLqhq1RvQEt05qz1EvwA9Is=
=0eqL
-----END PGP PUBLIC KEY BLOCK-----


@@ -0,0 +1,135 @@
---
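# Note: the `socket` and `socketrgw` results used in the `when` clauses below
# are assumed to be registered by earlier tasks in this role (a check for a
# live admin socket), so daemons are only restarted when they are actually running.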
- name: update apt cache
apt:
update-cache: yes
- name: restart ceph mons
command: service ceph restart mon
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
mon_group_name in group_names and not
is_ceph_infernalis
- name: restart ceph mons with systemd
service:
name: ceph-mon@{{ ansible_hostname }}
state: restarted
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
mon_group_name in group_names and
is_ceph_infernalis
- name: restart ceph mons on ubuntu
command: restart ceph-mon-all
when:
socket.rc == 0 and
ansible_distribution == 'Ubuntu' and
mon_group_name in group_names
- name: restart ceph osds
command: service ceph restart osd
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
osd_group_name in group_names and
not is_ceph_infernalis
# This does not just restart OSDs but everything else too. Unfortunately
# at this time the ansible role does not have an OSD id list to use
# for restarting them specifically.
- name: restart ceph osds with systemd
service:
name: ceph.target
state: restarted
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
osd_group_name in group_names and
is_ceph_infernalis
- name: restart ceph osds on ubuntu
command: restart ceph-osd-all
when:
socket.rc == 0 and
ansible_distribution == 'Ubuntu' and
osd_group_name in group_names
- name: restart ceph mdss on ubuntu
command: restart ceph-mds-all
when:
socket.rc == 0 and
ansible_distribution == 'Ubuntu' and
mds_group_name in group_names
- name: restart ceph mdss
command: service ceph restart mds
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
mds_group_name in group_names and
ceph_stable and
ceph_stable_release in ceph_stable_releases
- name: restart ceph mdss with systemd
service:
name: ceph-mds@{{ ansible_hostname }}
state: restarted
when:
socket.rc == 0 and
ansible_distribution != 'Ubuntu' and
mds_group_name in group_names and
ceph_stable and
ceph_stable_release not in ceph_stable_releases
- name: restart ceph rgws on ubuntu
command: restart ceph-all
when:
socketrgw.rc == 0 and
ansible_distribution == 'Ubuntu' and
rgw_group_name in group_names
- name: restart ceph rgws
command: /etc/init.d/radosgw restart
when:
socketrgw.rc == 0 and
ansible_distribution != 'Ubuntu' and
rgw_group_name in group_names and
not is_ceph_infernalis
- name: restart ceph rgws on red hat
command: /etc/init.d/ceph-radosgw restart
when:
socketrgw.rc == 0 and
ansible_os_family == 'RedHat' and
rgw_group_name in group_names and
not is_ceph_infernalis
- name: restart ceph rgws with systemd
service:
name: ceph-rgw@{{ ansible_hostname }}
state: restarted
when:
socketrgw.rc == 0 and
ansible_distribution != 'Ubuntu' and
rgw_group_name in group_names and
is_ceph_infernalis
- name: restart apache2
service:
name: apache2
state: restarted
enabled: yes
when:
ansible_os_family == 'Debian' and
rgw_group_name in group_names
- name: restart apache2
service:
name: httpd
state: restarted
enabled: yes
when:
ansible_os_family == 'RedHat' and
rgw_group_name in group_names


@@ -0,0 +1,13 @@
---
galaxy_info:
author: Sébastien Han
description: Installs Ceph
license: Apache
min_ansible_version: 1.7
platforms:
- name: Ubuntu
versions:
- trusty
categories:
- system
dependencies: []


@@ -0,0 +1,581 @@
# (c) 2015, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
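# Summary: this action plugin renders a template, optionally deep-merges a
# `config_overrides` dict into the rendered result as INI, JSON or YAML
# (see CONFIG_TYPES below), and finally delegates the transfer to the copy module.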
import ConfigParser
import datetime
import io
import json
import os
import pwd
import time
import yaml
# Ansible v2
try:
from ansible.plugins.action import ActionBase
from ansible.utils.unicode import to_bytes, to_unicode
from ansible import constants as C
from ansible import errors
CONFIG_TYPES = {
'ini': 'return_config_overrides_ini',
'json': 'return_config_overrides_json',
'yaml': 'return_config_overrides_yaml'
}
def _convert_2_string(item):
"""Return byte strings for all items.
This will convert everything within a dict, list or unicode string such
that the values will be encode('utf-8') where applicable.
"""
if isinstance(item, dict):
# Old style dict comprehension for legacy python support
return dict(
(_convert_2_string(key), _convert_2_string(value))
for key, value in item.iteritems()
)
elif isinstance(item, list):
return [_convert_2_string(i) for i in item]
elif isinstance(item, unicode):
return item.encode('utf-8')
else:
return item
class ActionModule(ActionBase):
TRANSFERS_FILES = True
@staticmethod
def return_config_overrides_ini(config_overrides, resultant):
"""Returns string value from a modified config file.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
# If there is an exception loading the RawConfigParser The config obj
# is loaded again without the extra option. This is being done to
# support older python.
try:
config = ConfigParser.RawConfigParser(allow_no_value=True)
except Exception:
config = ConfigParser.RawConfigParser()
config_object = io.BytesIO(str(resultant))
config.readfp(config_object)
for section, items in config_overrides.items():
# If the items value is not a dictionary it is assumed that the
# value is a default item for this config type.
if not isinstance(items, dict):
config.set(
'DEFAULT',
section.encode('utf-8'),
_convert_2_string(items)
)
else:
# Attempt to add a section to the config file passing if
# an error is raised that is related to the section
# already existing.
try:
config.add_section(section.encode('utf-8'))
except (ConfigParser.DuplicateSectionError, ValueError):
pass
for key, value in items.items():
value = _convert_2_string(value)
try:
config.set(
section.encode('utf-8'),
key.encode('utf-8'),
value
)
except ConfigParser.NoSectionError as exp:
error_msg = str(exp)
error_msg += (
' Try being more explicit with your override'
' data. Sections are case sensitive.'
)
raise errors.AnsibleModuleError(error_msg)
else:
config_object.close()
resultant_bytesio = io.BytesIO()
try:
config.write(resultant_bytesio)
return resultant_bytesio.getvalue()
finally:
resultant_bytesio.close()
def return_config_overrides_json(self, config_overrides, resultant):
"""Returns config json
It's important to note that file ordering will not be preserved, as the
information within the json file will be sorted by keys.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
original_resultant = json.loads(resultant)
merged_resultant = self._merge_dict(
base_items=original_resultant,
new_items=config_overrides
)
return json.dumps(
merged_resultant,
indent=4,
sort_keys=True
)
def return_config_overrides_yaml(self, config_overrides, resultant):
"""Return config yaml.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
original_resultant = yaml.safe_load(resultant)
merged_resultant = self._merge_dict(
base_items=original_resultant,
new_items=config_overrides
)
return yaml.safe_dump(
merged_resultant,
default_flow_style=False,
width=1000,
)
def _merge_dict(self, base_items, new_items):
"""Recursively merge new_items into base_items.
:param base_items: ``dict``
:param new_items: ``dict``
:returns: ``dict``
"""
for key, value in new_items.iteritems():
if isinstance(value, dict):
base_items[key] = self._merge_dict(
base_items.get(key, {}),
value
)
elif isinstance(value, list):
if key in base_items and isinstance(base_items[key], list):
base_items[key].extend(value)
else:
base_items[key] = value
else:
base_items[key] = new_items[key]
return base_items
def _load_options_and_status(self, task_vars):
"""Return options and status from module load."""
config_type = self._task.args.get('config_type')
if config_type not in ['ini', 'yaml', 'json']:
return False, dict(
failed=True,
msg="No valid [ config_type ] was provided. Valid options are"
" ini, yaml, or json."
)
# Access to protected method is unavoidable in Ansible
searchpath = [self._loader._basedir]
faf = self._task.first_available_file
if faf:
task_file = task_vars.get('_original_file', None, 'templates')
source = self._get_first_available_file(faf, task_file)
if not source:
return False, dict(
failed=True,
msg="could not find src in first_available_file list"
)
else:
# Access to protected method is unavoidable in Ansible
if self._task._role:
file_path = self._task._role._role_path
searchpath.insert(1, C.DEFAULT_ROLES_PATH)
searchpath.insert(1, self._task._role._role_path)
else:
file_path = self._loader.get_basedir()
user_source = self._task.args.get('src')
if not user_source:
return False, dict(
failed=True,
msg="No user provided [ src ] was provided"
)
source = self._loader.path_dwim_relative(
file_path,
'templates',
user_source
)
searchpath.insert(1, os.path.dirname(source))
_dest = self._task.args.get('dest')
if not _dest:
return False, dict(
failed=True,
msg="No [ dest ] was provided"
)
else:
# Expand any user home dir specification
user_dest = self._remote_expand_user(_dest)
if user_dest.endswith(os.sep):
user_dest = os.path.join(user_dest, os.path.basename(source))
return True, dict(
source=source,
dest=user_dest,
config_overrides=self._task.args.get('config_overrides', dict()),
config_type=config_type,
searchpath=searchpath
)
def run(self, tmp=None, task_vars=None):
"""Run the method"""
if not tmp:
tmp = self._make_tmp_path()
_status, _vars = self._load_options_and_status(task_vars=task_vars)
if not _status:
return _vars
temp_vars = task_vars.copy()
template_host = temp_vars['template_host'] = os.uname()[1]
source = temp_vars['template_path'] = _vars['source']
temp_vars['template_mtime'] = datetime.datetime.fromtimestamp(
os.path.getmtime(source)
)
try:
template_uid = temp_vars['template_uid'] = pwd.getpwuid(
os.stat(source).st_uid
).pw_name
except Exception:
template_uid = temp_vars['template_uid'] = os.stat(source).st_uid
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=template_host,
uid=template_uid,
file=to_bytes(source)
)
temp_vars['ansible_managed'] = time.strftime(
managed_str,
time.localtime(os.path.getmtime(source))
)
temp_vars['template_fullpath'] = os.path.abspath(source)
temp_vars['template_run_date'] = datetime.datetime.now()
with open(source, 'r') as f:
template_data = to_unicode(f.read())
self._templar.environment.loader.searchpath = _vars['searchpath']
self._templar.set_available_variables(temp_vars)
resultant = self._templar.template(
template_data,
preserve_trailing_newlines=True,
escape_backslashes=False,
convert_data=False
)
# Access to protected method is unavoidable in Ansible
self._templar.set_available_variables(
self._templar._available_variables
)
if _vars['config_overrides']:
type_merger = getattr(self, CONFIG_TYPES.get(_vars['config_type']))
resultant = type_merger(
config_overrides=_vars['config_overrides'],
resultant=resultant
)
# Re-template the resultant object as it may have new data within it
# as provided by an override variable.
resultant = self._templar.template(
resultant,
preserve_trailing_newlines=True,
escape_backslashes=False,
convert_data=False
)
# run the copy module
new_module_args = self._task.args.copy()
# Access to protected method is unavoidable in Ansible
transferred_data = self._transfer_data(
self._connection._shell.join_path(tmp, 'source'),
resultant
)
new_module_args.update(
dict(
src=transferred_data,
dest=_vars['dest'],
original_basename=os.path.basename(source),
follow=True,
),
)
# Remove data types that are not available to the copy module
new_module_args.pop('config_overrides', None)
new_module_args.pop('config_type', None)
# Run the copy module
return self._execute_module(
module_name='copy',
module_args=new_module_args,
task_vars=task_vars
)
# Ansible v1
except ImportError:
import ConfigParser
import io
import json
import os
import yaml
from ansible import errors
from ansible.runner.return_data import ReturnData
from ansible import utils
from ansible.utils import template
CONFIG_TYPES = {
'ini': 'return_config_overrides_ini',
'json': 'return_config_overrides_json',
'yaml': 'return_config_overrides_yaml'
}
class ActionModule(object):
TRANSFERS_FILES = True
def __init__(self, runner):
self.runner = runner
def grab_options(self, complex_args, module_args):
"""Grab passed options from Ansible complex and module args.
:param complex_args: ``dict``
:param module_args: ``dict``
:returns: ``dict``
"""
options = dict()
if complex_args:
options.update(complex_args)
options.update(utils.parse_kv(module_args))
return options
@staticmethod
def return_config_overrides_ini(config_overrides, resultant):
"""Returns string value from a modified config file.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
config = ConfigParser.RawConfigParser(allow_no_value=True)
config_object = io.BytesIO(resultant.encode('utf-8'))
config.readfp(config_object)
for section, items in config_overrides.items():
# If the items value is not a dictionary it is assumed that the
# value is a default item for this config type.
if not isinstance(items, dict):
config.set('DEFAULT', section, str(items))
else:
# Attempt to add a section to the config file passing if
# an error is raised that is related to the section
# already existing.
try:
config.add_section(section)
except (ConfigParser.DuplicateSectionError, ValueError):
pass
for key, value in items.items():
config.set(section, key, str(value))
else:
config_object.close()
resultant_bytesio = io.BytesIO()
try:
config.write(resultant_bytesio)
return resultant_bytesio.getvalue()
finally:
resultant_bytesio.close()
def return_config_overrides_json(self, config_overrides, resultant):
"""Returns config json
It's important to note that file ordering will not be preserved, as the
information within the json file will be sorted by keys.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
original_resultant = json.loads(resultant)
merged_resultant = self._merge_dict(
base_items=original_resultant,
new_items=config_overrides
)
return json.dumps(
merged_resultant,
indent=4,
sort_keys=True
)
def return_config_overrides_yaml(self, config_overrides, resultant):
"""Return config yaml.
:param config_overrides: ``dict``
:param resultant: ``str`` || ``unicode``
:returns: ``str``
"""
original_resultant = yaml.safe_load(resultant)
merged_resultant = self._merge_dict(
base_items=original_resultant,
new_items=config_overrides
)
return yaml.safe_dump(
merged_resultant,
default_flow_style=False,
width=1000,
)
def _merge_dict(self, base_items, new_items):
"""Recursively merge new_items into base_items.
:param base_items: ``dict``
:param new_items: ``dict``
:returns: ``dict``
"""
for key, value in new_items.iteritems():
if isinstance(value, dict):
base_items[key] = self._merge_dict(
base_items.get(key, {}),
value
)
elif isinstance(value, list):
if key in base_items and isinstance(base_items[key], list):
base_items[key].extend(value)
else:
base_items[key] = value
else:
base_items[key] = new_items[key]
return base_items
def run(self, conn, tmp, module_name, module_args, inject,
complex_args=None, **kwargs):
"""Run the method"""
if not self.runner.is_playbook:
raise errors.AnsibleError(
'FAILED: `config_templates` are only available in playbooks'
)
options = self.grab_options(complex_args, module_args)
try:
source = options['src']
dest = options['dest']
config_overrides = options.get('config_overrides', dict())
config_type = options['config_type']
assert config_type.lower() in ['ini', 'json', 'yaml']
except KeyError as exp:
result = dict(failed=True, msg=exp)
return ReturnData(conn=conn, comm_ok=False, result=result)
source_template = template.template(
self.runner.basedir,
source,
inject
)
if '_original_file' in inject:
source_file = utils.path_dwim_relative(
inject['_original_file'],
'templates',
source_template,
self.runner.basedir
)
else:
source_file = utils.path_dwim(self.runner.basedir, source_template)
# Open the template file and return the data as a string. This is
# being done here so that the file can be a vault encrypted file.
resultant = template.template_from_file(
self.runner.basedir,
source_file,
inject,
vault_password=self.runner.vault_pass
)
if config_overrides:
type_merger = getattr(self, CONFIG_TYPES.get(config_type))
resultant = type_merger(
config_overrides=config_overrides,
resultant=resultant
)
# Retemplate the resultant object as it may have new data within it
# as provided by an override variable.
template.template_from_string(
basedir=self.runner.basedir,
data=resultant,
vars=inject,
fail_on_undefined=True
)
# Access to protected method is unavoidable in Ansible 1.x.
new_module_args = dict(
src=self.runner._transfer_str(conn, tmp, 'source', resultant),
dest=dest,
original_basename=os.path.basename(source),
follow=True,
)
module_args_tmp = utils.merge_module_args(
module_args,
new_module_args
)
# Remove data types that are not available to the copy module
complex_args.pop('config_overrides')
complex_args.pop('config_type')
# Return the copy module status. Access to protected method is
# unavoidable in Ansible 1.x.
return self.runner._execute_module(
conn,
tmp,
'copy',
module_args_tmp,
inject=inject,
complex_args=complex_args
)


@@ -0,0 +1,102 @@
---
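# These checks run nmap from the machine executing ansible (local_action)
# against each host and fail when the relevant Ceph ports are reported as
# "filtered", i.e. likely blocked by a firewall.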
- name: check if nmap is installed
command: "command -v nmap"
changed_when: false
failed_when: false
register: nmapexist
when: check_firewall
- name: inform that nmap is not present
debug:
msg: "nmap is not installed, can not test if ceph ports are allowed :("
when:
check_firewall and
nmapexist.rc != 0
- name: check if monitor port is not filtered
local_action: shell set -o pipefail && nmap -p 6789 {{ item }} {{ hostvars[item]['ansible_' + monitor_interface]['ipv4']['address'] }} | grep -sqo filtered
changed_when: false
failed_when: false
with_items: groups.{{ mon_group_name }}
register: monportstate
when:
check_firewall and
mon_group_name in group_names and
nmapexist.rc == 0
- name: fail if monitor port is filtered
fail:
msg: "Please allow port 6789 on your firewall"
with_items: monportstate.results
when:
check_firewall and
item.rc == 0 and
mon_group_name is defined and
mon_group_name in group_names and
nmapexist.rc == 0
- name: check if osd and mds range is not filtered
local_action: shell set -o pipefail && nmap -p 6800-7300 {{ item }} {{ hostvars[item]['ansible_default_ipv4']['address'] }} | grep -sqo filtered
changed_when: false
failed_when: false
with_items: groups.{{ osd_group_name }}
register: osdrangestate
when:
check_firewall and
osd_group_name in group_names and
nmapexist.rc == 0
- name: fail if osd and mds range is filtered (osd hosts)
fail:
msg: "Please allow range from 6800 to 7300 on your firewall"
with_items: osdrangestate.results
when:
check_firewall and
item.rc == 0 and
osd_group_name is defined and
osd_group_name in group_names and
nmapexist.rc == 0
- name: check if osd and mds range is not filtered
local_action: shell set -o pipefail && nmap -p 6800-7300 {{ item }} {{ hostvars[item]['ansible_default_ipv4']['address'] }} | grep -sqo filtered
changed_when: false
failed_when: false
with_items: groups.{{ mds_group_name }}
register: mdsrangestate
when:
check_firewall and
mds_group_name in group_names and
nmapexist.rc == 0
- name: fail if osd and mds range is filtered (mds hosts)
fail:
msg: "Please allow range from 6800 to 7300 on your firewall"
with_items: mdsrangestate.results
when:
check_firewall and
item.rc == 0 and
mds_group_name is defined and
mds_group_name in group_names and
nmapexist.rc == 0
- name: check if rados gateway port is not filtered
local_action: shell set -o pipefail && nmap -p {{ radosgw_civetweb_port }} {{ item }} {{ hostvars[item]['ansible_default_ipv4']['address'] }} | grep -sqo filtered
changed_when: false
failed_when: false
with_items: groups.rgws
register: rgwportstate
when:
check_firewall and
rgw_group_name in group_names and
nmapexist.rc == 0
- name: fail if rados gateway port is filtered
fail:
msg: "Please allow port {{ radosgw_civetweb_port }} on your firewall"
with_items: rgwportstate.results
when:
check_firewall and
item.rc == 0 and
rgw_group_name is defined and
rgw_group_name in group_names and
nmapexist.rc == 0


@@ -0,0 +1,108 @@
---
- name: make sure an installation origin was chosen
fail:
msg: "choose an installation origin"
when:
ceph_origin != 'upstream' and
ceph_origin != 'distro'
tags:
- package-install
- name: make sure an installation source was chosen
fail:
msg: "choose an upstream installation source or read https://github.com/ceph/ceph-ansible/wiki"
when:
ceph_origin == 'upstream' and
not ceph_stable and
not ceph_dev and
not ceph_stable_ice and
not ceph_stable_rh_storage
tags:
- package-install
- name: verify that a method was chosen for red hat storage
fail:
msg: "choose between ceph_stable_rh_storage_cdn_install and ceph_stable_rh_storage_iso_install"
when:
ceph_stable_rh_storage and
not ceph_stable_rh_storage_cdn_install and
not ceph_stable_rh_storage_iso_install
tags:
- package-install
- name: make sure journal_size configured
fail:
msg: "journal_size must be configured. See http://ceph.com/docs/master/rados/configuration/osd-config-ref/"
when:
journal_size|int == 0 and
osd_group_name in group_names
- name: make sure monitor_interface configured
fail:
msg: "monitor_interface must be configured. Interface for the monitor to listen on"
when:
monitor_interface == 'interface' and
mon_group_name in group_names
- name: make sure cluster_network configured
fail:
msg: "cluster_network must be configured. Ceph replication network"
when:
cluster_network == '0.0.0.0/0' and
osd_group_name in group_names
- name: make sure public_network configured
fail:
msg: "public_network must be configured. Ceph public network"
when:
public_network == '0.0.0.0/0' and
osd_group_name in group_names
- name: make sure an osd scenario was chosen
fail:
msg: "please choose an osd scenario"
when:
osd_group_name is defined and
osd_group_name in group_names and
not journal_collocation and
not raw_multi_journal and
not osd_directory
- name: verify only one osd scenario was chosen
fail:
msg: "please select only one osd scenario"
when:
osd_group_name is defined and
osd_group_name in group_names and
((journal_collocation and raw_multi_journal) or
(journal_collocation and osd_directory) or
(raw_multi_journal and osd_directory))
- name: verify devices have been provided
fail:
msg: "please provide devices to your osd scenario"
when:
osd_group_name is defined and
osd_group_name in group_names and
journal_collocation and
not osd_auto_discovery and
devices is not defined
- name: verify journal devices have been provided
fail:
msg: "please provide devices to your osd scenario"
when:
osd_group_name is defined and
osd_group_name in group_names and
raw_multi_journal and
(raw_journal_devices is not defined or
devices is not defined)
- name: verify directories have been provided
fail:
msg: "please provide directories to your osd scenario"
when:
osd_group_name is defined and
osd_group_name in group_names and
osd_directory and
osd_directories is not defined


@@ -0,0 +1,29 @@
---
- name: fail on unsupported system
fail:
msg: "System not supported {{ ansible_system }}"
when: "'{{ ansible_system }}' not in ['Linux']"
- name: fail on unsupported architecture
fail:
msg: "Architecture not supported {{ ansible_architecture }}"
when: "'{{ ansible_architecture }}' not in ['x86_64']"
- name: fail on unsupported distribution
fail:
msg: "Distribution not supported {{ ansible_os_family }}"
when: "'{{ ansible_os_family }}' not in ['Debian', 'RedHat']"
- name: fail on unsupported distribution for red hat storage
fail:
msg: "Distribution not supported {{ ansible_distribution_version }} by Red Hat Storage, only RHEL 7.1"
when:
ceph_stable_rh_storage and
{{ ansible_distribution_version | version_compare('7.1', '<') }}
- name: fail on unsupported ansible version
fail:
msg: "Ansible version must be >= 1.9, please update!"
when:
ansible_version.major|int == 1 and
ansible_version.minor|int < 9


@@ -0,0 +1,40 @@
---
- name: install the ceph repository stable key
apt_key:
data: "{{ lookup('file', role_path+'/files/cephstable.asc') }}"
state: present
when: ceph_stable
- name: install the ceph development repository key
apt_key:
data: "{{ lookup('file', role_path+'/files/cephdev.asc') }}"
state: present
when: ceph_dev
- name: install inktank ceph enterprise repository key
apt_key:
data: "{{ lookup('file', role_path+'/files/cephstableice.asc') }}"
state: present
when: ceph_stable_ice
- name: add ceph stable repository
apt_repository:
repo: "deb {{ ceph_stable_repo }} {{ ceph_stable_distro_source | default(ansible_lsb.codename) }} main"
state: present
changed_when: false
when: ceph_stable
- name: add ceph development repository
apt_repository:
repo: "deb http://gitbuilder.ceph.com/ceph-deb-{{ ansible_lsb.codename }}-x86_64-basic/ref/{{ ceph_dev_branch }} {{ ansible_lsb.codename }} main"
state: present
changed_when: false
when: ceph_dev
- name: add inktank ceph enterprise repository
apt_repository:
repo: "deb file://{{ ceph_stable_ice_temp_path }} {{ ansible_lsb.codename }} main"
state: present
changed_when: false
when: ceph_stable_ice


@@ -0,0 +1,52 @@
---
- name: install dependencies
apt:
pkg: "{{ item }}"
state: present
update_cache: yes
cache_valid_time: 3600
with_items: debian_package_dependencies
- name: configure ceph apt repository
include: debian_ceph_repository.yml
when: ceph_origin == 'upstream'
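# NOTE: 'latest' is only requested when upgrade_ceph_packages is set, otherwise packages are left at the installed version ('present')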
- name: install ceph
apt:
pkg: "{{ item }}"
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
default_release: "{{ ansible_distribution_release }}{{ '-backports' if ceph_origin == 'distro' and ceph_use_distro_backports else ''}}"
with_items:
- ceph
- ceph-common #|
- ceph-fs-common #|--> yes, they are already all dependencies of 'ceph'
- ceph-fuse #|--> however, while proceeding with rolling upgrades and the 'ceph' package upgrade
- ceph-mds #|--> they don't get updated, so we need to force them
- libcephfs1 #|
- name: install ceph-test
apt:
pkg: ceph-test
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
default_release: "{{ ansible_distribution_release }}{{ '-backports' if ceph_origin == 'distro' and ceph_use_distro_backports else ''}}"
when: ceph_test
- name: install rados gateway
apt:
pkg: radosgw
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
update_cache: yes
when:
rgw_group_name in group_names
- name: configure rbd clients directories
file:
path: "{{ item }}"
state: directory
owner: libvirt-qemu
group: kvm
mode: 0755
with_items:
- "{{ rbd_client_log_path }}"
- "{{ rbd_client_admin_socket_path }}"
when: rbd_client_directories

View File

@ -0,0 +1,143 @@
---
- name: install dependencies
yum:
name: "{{ item }}"
state: present
with_items: redhat_package_dependencies
when: ansible_pkg_mgr == "yum"
- name: install dependencies
dnf:
name: "{{ item }}"
state: present
with_items: redhat_package_dependencies
when: ansible_pkg_mgr == "dnf"
- name: configure ceph yum repository
include: redhat_ceph_repository.yml
when: ceph_origin == 'upstream'
- name: install ceph
yum:
name: ceph
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
when: not ceph_stable_rh_storage
- name: install distro or red hat storage ceph mon
yum:
name: "{{ item }}"
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
with_items:
- ceph
- ceph-mon
when:
(ceph_origin == "distro" or ceph_stable_rh_storage) and
mon_group_name in group_names and
ansible_pkg_mgr == "yum"
- name: install distro or red hat storage ceph mon
dnf:
name: "{{ item }}"
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
with_items:
- ceph
- ceph-mon
when:
(ceph_origin == "distro" or ceph_stable_rh_storage) and
mon_group_name in group_names and
ansible_pkg_mgr == "dnf"
- name: install distro or red hat storage ceph osd
yum:
name: "{{ item }}"
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
with_items:
- ceph
- ceph-osd
when:
(ceph_origin == "distro" or ceph_stable_rh_storage) and
osd_group_name in group_names and
ansible_pkg_mgr == "yum"
- name: install distro or red hat storage ceph osd
dnf:
name: "{{ item }}"
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
with_items:
- ceph
- ceph-osd
when:
(ceph_origin == "distro" or ceph_stable_rh_storage) and
osd_group_name in group_names and
ansible_pkg_mgr == "dnf"
- name: install ceph-test
yum:
name: ceph-test
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
when:
ceph_test and
ansible_pkg_mgr == "yum"
- name: install ceph-test
dnf:
name: ceph-test
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
when:
ceph_test and
ansible_pkg_mgr == "dnf"
- name: install Inktank Ceph Enterprise RBD Kernel modules
yum:
name: "{{ item }}"
with_items:
- "{{ ceph_stable_ice_temp_path }}/kmod-libceph-{{ ceph_stable_ice_kmod }}.rpm"
- "{{ ceph_stable_ice_temp_path }}/kmod-rbd-{{ ceph_stable_ice_kmod }}.rpm"
when:
ceph_stable_ice and
ansible_pkg_mgr == "yum"
- name: install Inktank Ceph Enterprise RBD Kernel modules
dnf:
name: "{{ item }}"
with_items:
- "{{ ceph_stable_ice_temp_path }}/kmod-libceph-{{ ceph_stable_ice_kmod }}.rpm"
- "{{ ceph_stable_ice_temp_path }}/kmod-rbd-{{ ceph_stable_ice_kmod }}.rpm"
when:
ceph_stable_ice and
ansible_pkg_mgr == "dnf"
- name: install rados gateway
yum:
name: ceph-radosgw
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
when:
rgw_group_name in group_names and
ansible_pkg_mgr == "yum"
- name: install rados gateway
dnf:
name: ceph-radosgw
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
when:
rgw_group_name in group_names and
ansible_pkg_mgr == "dnf"
- name: configure rbd clients directories
file:
path: "{{ item }}"
state: directory
owner: qemu
group: libvirtd
mode: 0755
with_items:
- "{{ rbd_client_log_path }}"
- "{{ rbd_client_admin_socket_path }}"
when: rbd_client_directories
- name: get ceph rhcs version
shell: rpm -q --qf "%{version}\n" ceph-common | cut -f1,2 -d '.'
changed_when: false
failed_when: false
register: rh_storage_version
when: ceph_stable_rh_storage

View File

@ -0,0 +1,144 @@
---
- name: add ceph extra
apt_repository:
repo: "deb http://ceph.com/packages/ceph-extras/debian {{ ansible_lsb.codename }} main"
state: present
when: ansible_lsb.codename in ['natty', 'oneiric', 'precise', 'quantal', 'raring', 'sid', 'squeeze', 'wheezy']
# NOTE (leseb): needed for Ubuntu 12.04 to have access to libapache2-mod-fastcgi if 100-continue isn't being used
- name: enable multiverse repo for precise
apt_repository:
repo: "{{ item }}"
state: present
with_items:
- deb http://archive.ubuntu.com/ubuntu {{ ansible_lsb.codename }} multiverse
- deb http://archive.ubuntu.com/ubuntu {{ ansible_lsb.codename }}-updates multiverse
- deb http://security.ubuntu.com/ubuntu {{ ansible_lsb.codename }}-security multiverse
when:
ansible_lsb.codename in ['precise'] and not
http_100_continue
# NOTE (leseb): disable the repo when we are using the Ceph repo for 100-continue packages
- name: disable multiverse repo for precise
apt_repository:
repo: "{{ item }}"
state: absent
with_items:
- deb http://archive.ubuntu.com/ubuntu {{ ansible_lsb.codename }} multiverse
- deb http://archive.ubuntu.com/ubuntu {{ ansible_lsb.codename }}-updates multiverse
- deb http://security.ubuntu.com/ubuntu {{ ansible_lsb.codename }}-security multiverse
when:
ansible_lsb.codename in ['precise'] and
http_100_continue
# NOTE (leseb): needed for Ubuntu 14.04 to have access to libapache2-mod-fastcgi if 100-continue isn't being used
- name: enable multiverse repo for trusty
command: "apt-add-repository multiverse"
changed_when: false
when:
ansible_lsb.codename in ['trusty'] and not
http_100_continue
# NOTE (leseb): disable the repo when we are using the Ceph repo for 100-continue packages
- name: disable multiverse repo for trusty
command: "apt-add-repository -r multiverse"
changed_when: false
when:
ansible_lsb.codename in ['trusty'] and
http_100_continue
# NOTE (leseb): if using 100-continue, add Ceph dev key
- name: install the ceph development repository key
apt_key:
data: "{{ lookup('file', 'cephdev.asc') }}"
state: present
when: http_100_continue
# NOTE (leseb): if using 100-continue, add Ceph sources and update
- name: add ceph apache and fastcgi sources
apt_repository:
repo: "{{ item }}"
state: present
with_items:
- deb http://gitbuilder.ceph.com/apache2-deb-{{ ansible_lsb.codename }}-x86_64-basic/ref/master {{ ansible_lsb.codename }} main
- deb http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-{{ ansible_lsb.codename }}-x86_64-basic/ref/master {{ ansible_lsb.codename }} main
register: purge_default_apache
when: http_100_continue
# NOTE (leseb): else remove them to ensure you use the default packages
- name: remove ceph apache and fastcgi sources
apt_repository:
repo: "{{ item }}"
state: absent
with_items:
- deb http://gitbuilder.ceph.com/apache2-deb-{{ ansible_lsb.codename }}-x86_64-basic/ref/master {{ ansible_lsb.codename }} main
- deb http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-{{ ansible_lsb.codename }}-x86_64-basic/ref/master {{ ansible_lsb.codename }} main
register: purge_ceph_apache
when: not http_100_continue
# NOTE (leseb): purge Ceph Apache and FastCGI packages if needed
- name: purge ceph apache and fastcgi packages
apt:
pkg: "{{ item }}"
state: absent
purge: yes
with_items:
- apache2
- apache2-bin
- apache2-data
- apache2-mpm-worker
- apache2-utils
- apache2.2-bin
- apache2.2-common
- libapache2-mod-fastcgi
when:
purge_default_apache.changed or
purge_ceph_apache.changed
- name: install apache and fastcgi
apt:
pkg: "{{ item }}"
state: present
update_cache: yes
with_items:
- apache2
- libapache2-mod-fastcgi
- name: install default httpd.conf
template:
src: ../../templates/httpd.conf
dest: /etc/apache2/httpd.conf
owner: root
group: root
- name: enable some apache mod rewrite and fastcgi
command: "{{ item }}"
with_items:
- a2enmod rewrite
- a2enmod fastcgi
changed_when: false
- name: install rados gateway vhost
template:
src: ../../templates/rgw.conf
dest: /etc/apache2/sites-available/rgw.conf
owner: root
group: root
- name: enable rados gateway vhost and disable default site
command: "{{ item }}"
with_items:
- a2ensite rgw.conf
- a2dissite *default
changed_when: false
failed_when: false
notify:
- restart apache2
- name: install s3gw.fcgi script
template:
src: ../../templates/s3gw.fcgi.j2
dest: /var/www/s3gw.fcgi
mode: 0555
owner: root
group: root

View File

@ -0,0 +1,56 @@
---
- name: add ceph extra
template:
src: ../../templates/ceph-extra.repo
dest: /etc/yum.repos.d
owner: root
group: root
- name: add special fastcgi repository key
rpm_key:
key: http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
- name: add special fastcgi repository
command: rpm -ivh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
changed_when: false
- name: install apache and fastcgi
yum:
name: "{{ item }}"
state: present
with_items:
- httpd
- mod_fastcgi
- mod_fcgid
when: ansible_pkg_mgr == "yum"
- name: install apache and fastcgi
dnf:
name: "{{ item }}"
state: present
with_items:
- httpd
- mod_fastcgi
- mod_fcgid
when: ansible_pkg_mgr == "dnf"
- name: install rados gateway vhost
template:
src: ../../templates/rgw.conf
dest: /etc/httpd/conf.d/rgw.conf
owner: root
group: root
- name: install s3gw.fcgi script
template:
src: ../../templates/s3gw.fcgi.j2
dest: /var/www/s3gw.fcgi
mode: 0555
owner: root
group: root
- name: disable default site
shell: sed -i "s/^\([^#]\)/#\1/" /etc/httpd/conf.d/welcome.conf
changed_when: false
notify:
- restart apache2

View File

@ -0,0 +1,78 @@
---
- name: install the ceph stable repository key
rpm_key:
key: "{{ ceph_stable_key }}"
state: present
when: ceph_stable
- name: install the ceph development repository key
rpm_key:
key: "{{ ceph_dev_key }}"
state: present
when: ceph_dev
- name: install inktank ceph enterprise repository key
rpm_key:
key: "{{ ceph_stable_ice_temp_path }}/release.asc"
state: present
when: ceph_stable_ice
- name: install red hat storage repository key
rpm_key:
key: "{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release"
state: present
when:
ceph_stable_rh_storage and
ceph_stable_rh_storage_iso_install
- name: add ceph stable repository
yum:
name: http://ceph.com/rpm-{{ ceph_stable_release }}/{{ ceph_stable_redhat_distro }}/noarch/ceph-release-1-0.{{ ceph_stable_redhat_distro|replace('rhel', 'el') }}.noarch.rpm
changed_when: false
when:
ceph_stable and
ansible_pkg_mgr == "yum"
- name: add ceph stable repository
dnf:
name: http://ceph.com/rpm-{{ ceph_stable_release }}/{{ ceph_stable_redhat_distro }}/noarch/ceph-release-1-0.{{ ceph_stable_redhat_distro|replace('rhel', 'el') }}.noarch.rpm
changed_when: false
when:
ceph_stable and
ansible_pkg_mgr == "dnf"
- name: add ceph development repository
yum:
name: http://gitbuilder.ceph.com/ceph-rpm-{{ ceph_dev_redhat_distro }}-x86_64-basic/ref/{{ ceph_dev_branch }}/noarch/ceph-release-1-0.{{ ceph_stable_redhat_distro }}.noarch.rpm
changed_when: false
when:
ceph_dev and
ansible_pkg_mgr == "yum"
- name: add ceph development repository
dnf:
name: http://gitbuilder.ceph.com/ceph-rpm-{{ ceph_dev_redhat_distro }}-x86_64-basic/ref/{{ ceph_dev_branch }}/noarch/ceph-release-1-0.{{ ceph_stable_redhat_distro }}.noarch.rpm
changed_when: false
when:
ceph_dev and
ansible_pkg_mgr == "dnf"
- name: add inktank ceph enterprise repository
template:
src: redhat_ice_repo.j2
dest: /etc/yum.repos.d/ice.repo
owner: root
group: root
mode: 0644
when: ceph_stable_ice
- name: add red hat storage repository
template:
src: ../../templates/redhat_storage_repo.j2
dest: /etc/yum.repos.d/rh_storage.repo
owner: root
group: root
mode: 0644
when:
ceph_stable_rh_storage and
ceph_stable_rh_storage_iso_install

View File

@ -0,0 +1,185 @@
---
- include: ./checks/check_system.yml
- include: ./checks/check_mandatory_vars.yml
- include: ./checks/check_firewall.yml
- include: ./misc/system_tuning.yml
when: osd_group_name in group_names
- include: ./pre_requisites/prerequisite_ice.yml
when: ceph_stable_ice
tags:
- package-install
- include: ./pre_requisites/prerequisite_rh_storage_iso_install.yml
when:
ceph_stable_rh_storage and
ceph_stable_rh_storage_iso_install
tags:
- package-install
- include: ./pre_requisites/prerequisite_rh_storage_cdn_install.yml
when:
ceph_stable_rh_storage and
ceph_stable_rh_storage_cdn_install
tags:
- package-install
- include: ./installs/install_on_redhat.yml
when: ansible_os_family == 'RedHat'
tags:
- package-install
- include: ./installs/install_on_debian.yml
when: ansible_os_family == 'Debian'
tags:
- package-install
- include: ./installs/install_rgw_on_redhat.yml
when:
ansible_os_family == 'RedHat' and
radosgw_frontend == 'apache' and
rgw_group_name in group_names
tags:
- package-install
- include: ./installs/install_rgw_on_debian.yml
when:
ansible_os_family == 'Debian' and
radosgw_frontend == 'apache' and
rgw_group_name in group_names
tags:
- package-install
# NOTE (leseb): be careful with the following
# somehow the YAML syntax using "is_ceph_infernalis: {{"
# does NOT work, so we keep this syntax styling...
- set_fact:
is_ceph_infernalis={{ (ceph_stable and ceph_stable_release not in ceph_stable_releases) or (ceph_stable_rh_storage and (rh_storage_version.stdout | version_compare('0.94', '>'))) }}
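# NOTE: from the Infernalis release onwards the ceph daemons run as the 'ceph' user, hence the ceph:ceph ownership selected below instead of root:root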
- set_fact:
dir_owner: ceph
dir_group: ceph
dir_mode: "0755"
when: is_ceph_infernalis
- set_fact:
dir_owner: root
dir_group: root
dir_mode: "0755"
when: not is_ceph_infernalis
- set_fact:
key_owner: root
key_group: root
key_mode: "0600"
when: not is_ceph_infernalis
- set_fact:
key_owner: ceph
key_group: ceph
key_mode: "0600"
when: is_ceph_infernalis
- set_fact:
activate_file_owner: ceph
activate_file_group: ceph
activate_file_mode: "0644"
when: is_ceph_infernalis
- set_fact:
activate_file_owner: root
activate_file_group: root
activate_file_mode: "0644"
when: not is_ceph_infernalis
- set_fact:
rbd_client_dir_owner: root
rbd_client_dir_group: root
rbd_client_dir_mode: "1777"
when: not is_ceph_infernalis
- set_fact:
rbd_client_dir_owner: ceph
rbd_client_dir_group: ceph
rbd_client_dir_mode: "0770"
when: is_ceph_infernalis
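# the registered 'socket' and 'socketrgw' results are not used in this file; presumably they let later tasks and restart handlers detect whether daemons are already running, hence failed_when: false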
- name: check for a ceph socket
shell: "stat /var/run/ceph/*.asok > /dev/null 2>&1"
changed_when: false
failed_when: false
register: socket
- name: check for a rados gateway socket
shell: "stat {{ rbd_client_admin_socket_path }}*.asok > /dev/null 2>&1"
changed_when: false
failed_when: false
register: socketrgw
- name: create a local fetch directory if it does not exist
local_action: file path={{ fetch_directory }} state=directory
changed_when: false
become: false
run_once: true
when:
cephx or
generate_fsid
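# the cluster fsid is generated once on the ansible machine and cached in the fetch directory so that every node reads the same uuid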
- name: generate cluster uuid
local_action: shell python -c 'import uuid; print str(uuid.uuid4())' | tee {{ fetch_directory }}/ceph_cluster_uuid.conf
creates="{{ fetch_directory }}/ceph_cluster_uuid.conf"
register: cluster_uuid
become: false
when: generate_fsid
- name: read cluster uuid if it already exists
local_action: command cat {{ fetch_directory }}/ceph_cluster_uuid.conf
removes="{{ fetch_directory }}/ceph_cluster_uuid.conf"
changed_when: false
register: cluster_uuid
become: false
when: generate_fsid
- name: create ceph conf directory
file:
path: /etc/ceph
state: directory
owner: "{{ dir_owner }}"
group: "{{ dir_group }}"
mode: "{{ dir_mode }}"
- name: generate ceph configuration file
action: config_template
args:
src: ceph.conf.j2
dest: /etc/ceph/ceph.conf
owner: "{{ dir_owner }}"
group: "{{ dir_group }}"
mode: "{{ activate_file_mode }}"
config_overrides: "{{ ceph_conf_overrides }}"
config_type: ini
notify:
- restart ceph mons
- restart ceph mons on ubuntu
- restart ceph mons with systemd
- restart ceph osds
- restart ceph osds on ubuntu
- restart ceph osds with systemd
- restart ceph mdss
- restart ceph mdss on ubuntu
- restart ceph mdss with systemd
- restart ceph rgws
- restart ceph rgws on ubuntu
- restart ceph rgws on red hat
- restart ceph rgws with systemd
- name: create rbd client directory
file:
path: "{{ rbd_client_admin_socket_path }}"
state: directory
owner: "{{ rbd_client_dir_owner }}"
group: "{{ rbd_client_dir_group }}"
mode: "{{ rbd_client_dir_mode }}"

View File

@ -0,0 +1,36 @@
---
- name: disable osd directory parsing by updatedb
command: updatedb -e /var/lib/ceph
changed_when: false
failed_when: false
- name: disable transparent hugepage
command: "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
changed_when: false
failed_when: false
when: disable_transparent_hugepage
- name: disable swap
command: swapoff -a
changed_when: false
failed_when: false
when: disable_swap
- name: get default vm.min_free_kbytes
command: sysctl -b vm.min_free_kbytes
changed_when: false
failed_when: false
register: default_vm_min_free_kbytes
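# cap vm.min_free_kbytes at roughly 4GB (4194303 kB) on nodes with 48GB of RAM or more, otherwise keep the value currently reported by the kernel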
- name: define vm.min_free_kbytes
set_fact:
vm_min_free_kbytes: "{{ 4194303 if ansible_memtotal_mb >= 49152 else default_vm_min_free_kbytes.stdout }}"
- name: apply operating system tuning
sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
sysctl_file: /etc/sysctl.conf
ignoreerrors: yes
with_items: os_tuning_params

View File

@ -0,0 +1,52 @@
---
- name: create ice package directory
file:
path: "{{ ceph_stable_ice_temp_path }}"
state: directory
owner: root
group: root
mode: 0644
when: ceph_stable_ice
- name: get ice packages
get_url:
url_username: "{{ ceph_stable_ice_user }}"
url_password: "{{ ceph_stable_ice_password }}"
url: "{{ ceph_stable_ice_url }}/{{ ceph_stable_ice_version }}/ICE-{{ ceph_stable_ice_version }}-{{ ceph_stable_ice_distro }}.tar.gz"
dest: "{{ ceph_stable_ice_temp_path }}/ICE-{{ ceph_stable_ice_version }}-{{ ceph_stable_ice_distro }}.tar.gz"
when: ceph_stable_ice
- name: get ice Kernel Modules
get_url:
url_username: "{{ ceph_stable_ice_user }}"
url_password: "{{ ceph_stable_ice_password }}"
url: "{{ ceph_stable_ice_url }}/{{ ceph_stable_ice_kmod_version }}/{{ item }}"
dest: "{{ ceph_stable_ice_temp_path }}"
with_items:
- kmod-libceph-{{ ceph_stable_ice_kmod }}.rpm
- kmod-rbd-{{ ceph_stable_ice_kmod }}.rpm
when:
ceph_stable_ice and
ansible_os_family == 'RedHat'
- name: stat extracted ice repo files
stat:
path: "{{ ceph_stable_ice_temp_path }}/ice_setup.py"
register: repo_exist
when: ceph_stable_ice
- name: extract ice packages
shell: tar -xzf ICE-{{ ceph_stable_ice_version }}-{{ ceph_stable_ice_distro }}.tar.gz
args:
chdir: "{{ ceph_stable_ice_temp_path }}"
changed_when: false
when:
ceph_stable_ice and
repo_exist.stat.exists == False
- name: move ice extracted packages
shell: "mv {{ ceph_stable_ice_temp_path }}/ceph/*/* {{ ceph_stable_ice_temp_path }}"
changed_when: false
when:
ceph_stable_ice and
repo_exist.stat.exists == False

View File

@ -0,0 +1,58 @@
---
- name: determine if node is registered with subscription-manager.
command: subscription-manager identity
register: subscription
changed_when: false
- name: check if the red hat optional repo is present
shell: yum --noplugins --cacheonly repolist | grep -sq rhel-7-server-optional-rpms
changed_when: false
failed_when: false
register: rh_optional_repo
- name: enable red hat optional repository
command: subscription-manager repos --enable rhel-7-server-optional-rpms
changed_when: false
when: rh_optional_repo.rc != 0
- name: check if the red hat storage monitor repo is already present
shell: yum --noplugins --cacheonly repolist | grep -sq rhel-7-server-rhceph-1.3-mon-rpms
changed_when: false
failed_when: false
register: rh_storage_mon_repo
when: mon_group_name in group_names
- name: enable red hat storage monitor repository
command: subscription-manager repos --enable rhel-7-server-rhceph-1.3-mon-rpms
changed_when: false
when:
mon_group_name in group_names and
rh_storage_mon_repo.rc != 0
- name: check if the red hat storage osd repo is already present
shell: yum --noplugins --cacheonly repolist | grep -sq rhel-7-server-rhceph-1.3-osd-rpms
changed_when: false
failed_when: false
register: rh_storage_osd_repo
when: osd_group_name in group_names
- name: enable red hat storage osd repository
command: subscription-manager repos --enable rhel-7-server-rhceph-1.3-osd-rpms
changed_when: false
when:
osd_group_name in group_names and
rh_storage_osd_repo.rc != 0
- name: check if the red hat storage rados gateway repo is already present
shell: yum --noplugins --cacheonly repolist | grep -sq rhel-7-server-rhceph-1.3-tools-rpms
changed_when: false
failed_when: false
register: rh_storage_rgw_repo
when: rgw_group_name in group_names
- name: enable red hat storage rados gateway repository
command: subscription-manager repos --enable rhel-7-server-rhceph-1.3-tools-rpms
changed_when: false
when:
rgw_group_name in group_names and
rh_storage_rgw_repo.rc != 0

View File

@ -0,0 +1,36 @@
---
- name: create red hat storage package directories
file:
path: "{{ item }}"
state: directory
with_items:
- "{{ ceph_stable_rh_storage_mount_path }}"
- "{{ ceph_stable_rh_storage_repository_path }}"
- name: fetch the red hat storage iso from the ansible server
copy:
src: "{{ ceph_stable_rh_storage_iso_path }}"
dest: "{{ ceph_stable_rh_storage_iso_path }}"
# assumption: ceph_stable_rh_storage_mount_path does not specify directory
- name: mount red hat storage iso file
mount:
name: "{{ ceph_stable_rh_storage_mount_path }}"
src: "{{ ceph_stable_rh_storage_iso_path }}"
fstype: iso9660
opts: ro,loop,noauto
passno: 2
state: mounted
- name: copy red hat storage iso content
shell: cp -r {{ ceph_stable_rh_storage_mount_path }}/* {{ ceph_stable_rh_storage_repository_path }}
args:
creates: "{{ ceph_stable_rh_storage_repository_path }}/README"
- name: unmount red hat storage iso file
mount:
name: "{{ ceph_stable_rh_storage_mount_path }}"
src: "{{ ceph_stable_rh_storage_iso_path }}"
fstype: iso9660
state: unmounted

View File

@ -0,0 +1,30 @@
# {{ ansible_managed }}
[ceph-extras]
name=Ceph Extras Packages
baseurl=http://ceph.com/packages/ceph-extras/rpm/{{ redhat_distro_ceph_extra }}/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
{% if (redhat_distro_ceph_extra != "centos6.4" and redhat_distro_ceph_extra != "rhel6.4" and redhat_distro_ceph_extra != "rhel6.5") %}
[ceph-extras-noarch]
name=Ceph Extras noarch
baseurl=http://ceph.com/packages/ceph-extras/rpm/{{ redhat_distro_ceph_extra }}/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
{% endif %}
[ceph-extras-source]
name=Ceph Extras Sources
baseurl=http://ceph.com/packages/ceph-extras/rpm/{{ redhat_distro_ceph_extra }}/SRPMS
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

View File

@ -0,0 +1,206 @@
#jinja2: trim_blocks: "true", lstrip_blocks: "true"
# {{ ansible_managed }}
[global]
{% if cephx %}
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
cephx require signatures = {{ cephx_require_signatures }} # Kernel RBD does NOT support signatures!
cephx cluster require signatures = {{ cephx_cluster_require_signatures }}
cephx service require signatures = {{ cephx_service_require_signatures }}
{% else %}
auth cluster required = none
auth service required = none
auth client required = none
auth supported = none
{% endif %}
fsid = {{ fsid }}
max open files = {{ max_open_files }}
osd pool default pg num = {{ pool_default_pg_num }}
osd pool default pgp num = {{ pool_default_pgp_num }}
osd pool default size = {{ pool_default_size }}
osd pool default min size = {{ pool_default_min_size }}
osd pool default crush rule = {{ pool_default_crush_rule }}
{% if common_single_host_mode is defined %}
osd crush chooseleaf type = 0
{% endif %}
{% if disable_in_memory_logs %}
# Disable in-memory logs
debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcacher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0
{% endif %}
{% if enable_debug_global %}
debug ms = {{ debug_global_level }}
{% endif %}
[client]
rbd cache = {{ rbd_cache }}
rbd cache writethrough until flush = true
rbd concurrent management ops = {{ rbd_concurrent_management_ops }}
admin socket = {{ rbd_client_admin_socket_path }}/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = {{ rbd_client_log_file }} # must be writable by QEMU and allowed by SELinux or AppArmor
rbd default map options = {{ rbd_default_map_options }}
rbd default features = {{ rbd_default_features }} # sum features digits
rbd default format = {{ rbd_default_format }}
[mon]
mon osd down out interval = {{ mon_osd_down_out_interval }}
mon osd min down reporters = {{ mon_osd_min_down_reporters }}
mon clock drift allowed = {{ mon_clock_drift_allowed }}
mon clock drift warn backoff = {{ mon_clock_drift_warn_backoff }}
mon osd full ratio = {{ mon_osd_full_ratio }}
mon osd nearfull ratio = {{ mon_osd_nearfull_ratio }}
mon osd report timeout = {{ mon_osd_report_timeout }}
mon pg warn max per osd = {{ mon_pg_warn_max_per_osd }}
mon osd allow primary affinity = {{ mon_osd_allow_primary_affinity }}
mon pg warn max object skew = {{ mon_pg_warn_max_object_skew }}
{% if enable_debug_mon %}
debug mon = {{ debug_mon_level }}
debug paxos = {{ debug_mon_level }}
debug auth = {{ debug_mon_level }}
{% endif %}
{% for host in groups[mon_group_name] %}
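{# use the per-host monitor_interface when one is defined, otherwise fall back to the global monitor_interface #}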
{% if hostvars[host]['ansible_fqdn'] is defined and mon_use_fqdn %}
[mon.{{ hostvars[host]['ansible_fqdn'] }}]
host = {{ hostvars[host]['ansible_fqdn'] }}
mon addr = {{ hostvars[host]['ansible_' + (hostvars[host]['monitor_interface'] if hostvars[host]['monitor_interface'] is defined else monitor_interface) ]['ipv4']['address'] }}
{% else %}
[mon.{{ hostvars[host]['ansible_hostname'] }}]
host = {{ hostvars[host]['ansible_hostname'] }}
mon addr = {{ hostvars[host]['ansible_' + (hostvars[host]['monitor_interface'] if hostvars[host]['monitor_interface'] is defined else monitor_interface) ]['ipv4']['address'] }}
{% endif %}
{% endfor %}
[osd]
osd mkfs type = {{ osd_mkfs_type }}
osd mkfs options xfs = {{ osd_mkfs_options_xfs }}
osd mount options xfs = {{ osd_mount_options_xfs }}
osd journal size = {{ journal_size }}
{% if cluster_network is defined %}
cluster_network = {{ cluster_network }}
{% endif %}
{% if public_network is defined %}
public_network = {{ public_network }}
{% endif %}
osd mon heartbeat interval = {{ osd_mon_heartbeat_interval }}
# Performance tuning
filestore merge threshold = {{ filestore_merge_threshold }}
filestore split multiple = {{ filestore_split_multiple }}
osd op threads = {{ osd_op_threads }}
filestore op threads = {{ filestore_op_threads }}
filestore max sync interval = {{ filestore_max_sync_interval }}
{% if filestore_xattr_use_omap != None %}
filestore xattr use omap = {{ filestore_xattr_use_omap }}
{% elif osd_mkfs_type == "ext4" %}
filestore xattr use omap = true
{# else, default is false #}
{% endif %}
osd max scrubs = {{ osd_max_scrubs }}
{% if ceph_stable_release not in ['argonaut','bobtail','cuttlefish','dumpling','emperor','firefly','giant'] %}
osd scrub begin hour = {{ osd_scrub_begin_hour }}
osd scrub end hour = {{ osd_scrub_end_hour }}
{% endif %}
# Recovery tuning
osd recovery max active = {{ osd_recovery_max_active }}
osd max backfills = {{ osd_max_backfills }}
osd recovery op priority = {{ osd_recovery_op_priority }}
osd recovery max chunk = {{ osd_recovery_max_chunk }}
osd recovery threads = {{ osd_recovery_threads }}
osd objectstore = {{ osd_objectstore }}
osd crush update on start = {{ osd_crush_update_on_start }}
{% if enable_debug_osd %}
debug osd = {{ debug_osd_level }}
debug filestore = {{ debug_osd_level }}
debug journal = {{ debug_osd_level }}
debug monc = {{ debug_osd_level }}
{% endif %}
# Deep scrub impact
osd scrub sleep = {{ osd_scrub_sleep }}
osd disk thread ioprio class = {{ osd_disk_thread_ioprio_class }}
osd disk thread ioprio priority = {{ osd_disk_thread_ioprio_priority }}
osd scrub chunk max = {{ osd_scrub_chunk_max }}
osd deep scrub stride = {{ osd_deep_scrub_stride }}
{% if groups[mds_group_name] is defined %}
{% for host in groups[mds_group_name] %}
{% if hostvars[host]['ansible_fqdn'] is defined and mds_use_fqdn %}
[mds.{{ hostvars[host]['ansible_fqdn'] }}]
host = {{ hostvars[host]['ansible_fqdn'] }}
{% else %}
[mds.{{ hostvars[host]['ansible_hostname'] }}]
host = {{ hostvars[host]['ansible_hostname'] }}
{% endif %}
{% endfor %}
{% if enable_debug_mds %}
debug mds = {{ debug_mds_level }}
debug mds balancer = {{ debug_mds_level }}
debug mds log = {{ debug_mds_level }}
debug mds migrator = {{ debug_mds_level }}
{% endif %}
{% endif %}
{% if groups[rgw_group_name] is defined %}
{% for host in groups[rgw_group_name] %}
{% if hostvars[host]['ansible_hostname'] is defined %}
[client.rgw.{{ hostvars[host]['ansible_hostname'] }}]
{% if radosgw_dns_name is defined %}
rgw dns name = {{ radosgw_dns_name }}
{% endif %}
host = {{ hostvars[host]['ansible_hostname'] }}
keyring = /var/lib/ceph/radosgw/ceph-rgw.{{ hostvars[host]['ansible_hostname'] }}/keyring
rgw socket path = /tmp/radosgw-{{ hostvars[host]['ansible_hostname'] }}.sock
log file = /var/log/ceph/radosgw-{{ hostvars[host]['ansible_hostname'] }}.log
rgw data = /var/lib/ceph/radosgw/ceph-rgw.{{ hostvars[host]['ansible_hostname'] }}
{% if radosgw_frontend == 'civetweb' %}
rgw frontends = civetweb port={{ radosgw_civetweb_port }}
{% endif %}
{% if radosgw_keystone %}
rgw keystone url = {{ radosgw_keystone_url }}
rgw keystone admin token = {{ radosgw_keystone_admin_token }}
rgw keystone accepted roles = {{ radosgw_keystone_accepted_roles }}
rgw keystone token cache size = {{ radosgw_keystone_token_cache_size }}
rgw keystone revocation interval = {{ radosgw_keystone_revocation_internal }}
rgw s3 auth use keystone = {{ radosgw_s3_auth_use_keystone }}
nss db path = {{ radosgw_nss_db_path }}
{% endif %}
{% endif %}
{% endfor %}
{% endif %}
{% if groups[restapi_group_name] is defined %}
[client.restapi]
public addr = {{ hostvars[inventory_hostname]['ansible_' + restapi_interface]['ipv4']['address'] }}:{{ restapi_port }}
restapi base url = {{ restapi_base_url }}
restapi log level = {{ restapi_log_level }}
keyring = /var/lib/ceph/restapi/ceph-restapi/keyring
log file = /var/log/ceph/ceph-restapi.log
{% endif %}

View File

@ -0,0 +1,3 @@
# {{ ansible_managed }}
ServerName {{ ansible_hostname }}

View File

@ -0,0 +1,9 @@
# {{ ansible_managed }}
[ice]
name=Inktank Ceph Enterprise - local packages for Ceph
baseurl=file://{{ ceph_stable_ice_temp_path }}
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_ice_temp_path }}/release.asc

View File

@ -0,0 +1,45 @@
# {{ ansible_managed }}
[rh_storage_mon]
name=Red Hat Storage Ceph - local packages for Ceph
baseurl=file://{{ ceph_stable_rh_storage_repository_path }}/MON
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release
[rh_storage_osd]
name=Red Hat Storage Ceph - local packages for Ceph
baseurl=file://{{ ceph_stable_rh_storage_repository_path }}/OSD
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release
[rh_storage_tools]
name=Red Hat Storage Ceph - assorted tools
baseurl=file://{{ ceph_stable_rh_storage_repository_path }}/Tools
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release
[rh_storage_calamari]
name=Red Hat Storage Ceph - local packages for Ceph
baseurl=file://{{ ceph_stable_rh_storage_repository_path }}/Calamari
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release
[rh_storage_installer]
name=Red Hat Storage Ceph - local packages for Ceph
baseurl=file://{{ ceph_stable_rh_storage_repository_path }}/Installer
enabled=1
gpgcheck=1
type=rpm-md
priority=1
gpgkey=file://{{ ceph_stable_rh_storage_repository_path }}/RPM-GPG-KEY-redhat-release

View File

@ -0,0 +1,23 @@
# {{ ansible_managed }}
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw-{{ ansible_hostname }}.sock
<VirtualHost *:80>
ServerName {{ ansible_hostname }}
ServerAdmin {{ email_address }}@{{ ansible_fqdn }}
DocumentRoot /var/www
<IfModule mod_fastcgi.c>
<Directory /var/www>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
RewriteEngine On
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
</VirtualHost>

View File

@ -0,0 +1,3 @@
#!/bin/sh
# {{ ansible_managed }}
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.rgw.{{ ansible_hostname }}

View File

@ -0,0 +1 @@
ceph-common