* Monitors deployment. You can easily start with one monitor and progressively add new nodes, so a single monitor is fine for testing purposes. For production, I recommend always using an odd number of monitors; 3 tends to be the standard.
* Object Storage Daemons. Like the monitors, you can start with a certain number of nodes and grow that number later. The playbook supports either a dedicated device for storing the journal, or collocating the journal and OSD data on the same device (using a small partition at the beginning of the device) — see the sketch below.
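For illustration, the OSD layout is typically declared in `group_vars/osds.yml`. The snippet below is a minimal sketch: the device paths are placeholders and the scenario variable names are assumptions based on the sample file, so check `group_vars/osds.yml.sample` for the exact names in your version.

```yml
# group_vars/osds.yml -- minimal sketch, device paths are examples only
devices:
  - /dev/sdb
  - /dev/sdc

# Scenario 1: collocate journal and OSD data on the same devices
journal_collocation: true

# Scenario 2 (alternative): dedicated journal device(s)
# raw_multi_journal: true
# raw_journal_devices:
#   - /dev/sdd
#   - /dev/sdd
```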
The supported method for defining your ceph.conf is to use the `ceph_conf_overrides` variable. This allows you to specify configuration options using
an INI format. This variable can be used to override sections already defined in ceph.conf (see: `roles/ceph-common/templates/ceph.conf.j2`) or to provide
new configuration options. The following sections in ceph.conf are supported: [global], [mon], [osd], [mds] and [rgw].
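For example, a minimal sketch of what this can look like in `group_vars/all.yml` (the options and values below are purely illustrative, not recommendations):

```yml
ceph_conf_overrides:
  global:
    osd pool default size: 2
  osd:
    osd mkfs type: xfs
```

Each top-level key maps to a ceph.conf section, and each nested key/value pair is rendered as an option in that section.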
* It is not recommended to use underscores when defining options in the `ceph_conf_overrides` variable (e.g. `osd_mkfs_type`), as this may cause incorrect configuration options to appear in ceph.conf.
* We will no longer accept pull requests that modify the ceph.conf template unless it helps the deployment. For simple configuration tweaks, please use the `ceph_conf_overrides` variable.
In any case, you must define the `monitor_interface` variable with the name of the network interface that carries the IP address in the `public_network` subnet.
`monitor_interface` must be defined at least in `group_vars/all.yml`, but it can be overridden in the inventory host file if needed.
You can specify, for each monitor, the IP address it will bind to by setting the `monitor_address` variable in the **inventory host file**.
You can also use the `monitor_address_block` feature: specify a subnet and ceph-ansible will automatically set the correct addresses in ceph.conf.
Preference goes to `monitor_address_block` if specified, then `monitor_address`; otherwise the first IP address found on the network interface specified by `monitor_interface` is used by default.
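As an illustration, a minimal sketch of these settings; the interface name, subnet, and addresses below are placeholders:

```yml
# group_vars/all.yml -- sketch; interface name and subnet are placeholders
monitor_interface: eth1
public_network: 192.168.0.0/24

# Alternative 1: let ceph-ansible pick each monitor's address from a subnet
# monitor_address_block: 192.168.0.0/24

# Alternative 2: pin each monitor's address in the inventory host file, e.g.:
#   ceph-mon0 monitor_address=192.168.0.10
```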
monmap e3: 3 mons at {ceph-mon0=192.168.0.10:6789/0,ceph-mon1=192.168.0.11:6789/0,ceph-mon2=192.168.0.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon2
mdsmap e6: 1/1/1 up {0=ceph-osd0=up:active}, 2 up:standby
Copy [vagrant_variables.yml.atomic](vagrant_variables.yml.atomic) to vagrant_variables.yml, and copy [group_vars/all.docker.yml.sample](group_vars/all.docker.yml.sample) to `group_vars/all.yml`.
Since the `centos/atomic-host` VirtualBox box doesn't have a spare storage controller to attach more disks, the first `vagrant up --provider=virtualbox` run will likely fail to attach to a storage controller. In that case, run the following command:
It might happen that the CI does not get triggered; if so, simply leave a comment on your PR saying "test this please" and it will trigger a new CI build.
* Cherry-pick your change: `git cherry-pick -x (your-sha1)`
* Create a new pull request against the `stable-2.2` branch.
* Ensure that your PR's title has the prefix "backport:", so it's clear
to reviewers what this is about.
* Add a comment in your backport PR linking to the original (master) PR.
All changes to the stable branches should land in master first, so we avoid
regressions.
Once this is done, one of the project maintainers will tag the tip of the
stable branch with your change. For example:
```
git checkout stable-2.2
git pull --ff-only
git tag v2.2.5
git push origin v2.2.5
```
You can query backport-related items in GitHub:
* [all PRs labeled as backport candidates](https://github.com/ceph/ceph-ansible/pulls?q=is%3Apr%20label%3Abackport). The "open" ones must be merged to master first. The "closed" ones need to be backported to the stable branch.
* [all backport PRs for stable-2.2](https://github.com/ceph/ceph-ansible/pulls?q=base%3Astable-2.2)
[![Ceph-ansible bare metal demo](http://img.youtube.com/vi/dv_PEp9qAqg/0.jpg)](https://youtu.be/dv_PEp9qAqg "Deploy Ceph with Ansible (Bare metal demo)")