Merge pull request #1772 from ceph/docs-update

documentation update for osd scenarios and basic installation/usage
pull/1789/head
Sébastien Han 2017-08-22 15:36:21 +02:00 committed by GitHub
commit 7a191506c2
6 changed files with 430 additions and 434 deletions

README.md

@@ -1,418 +0,0 @@
ceph-ansible
============
Ansible playbook for Ceph!
Clone me:
```bash
git clone https://github.com/ceph/ceph-ansible.git
```
## What does it do?
General support for:
* Monitors
* OSDs
* MDSs
* RGW
More details:
* Authentication (cephx); this can be disabled.
* Supports separate cluster public and private networks.
* Monitor deployment. You can easily start with one monitor and progressively add new nodes, so you can deploy a single monitor for testing purposes. For production, I recommend always using an odd number of monitors; 3 tends to be the standard.
* Object Storage Daemons. Like the monitors, you can start with a certain number of nodes and then grow this number. The playbook supports either a dedicated device for storing the journal or both journal and OSD data on the same device (using a tiny partition at the beginning of the device).
* Metadata daemons.
* Collocation. The playbook supports collocating Monitors, OSDs and MDSs on the same machine.
* The playbook was validated on Debian Wheezy, Ubuntu 12.04 LTS and CentOS 6.4.
* Tested on Ceph Dumpling and Emperor.
* A rolling upgrade playbook was written, an upgrade from Dumpling to Emperor was performed and worked.
## Configuring Ceph
The supported method for defining your ceph.conf is to use the `ceph_conf_overrides` variable. This allows you to specify configuration options using
an INI format. This variable can be used to override sections already defined in ceph.conf (see: `roles/ceph-common/templates/ceph.conf.j2`) or to provide
new configuration options. The following sections in ceph.conf are supported: [global], [mon], [osd], [mds] and [rgw].
An example:
```
ceph_conf_overrides:
global:
foo: 1234
bar: 5678
osd:
osd mkfs type: ext4
```
### Note
* It is not recommended to use underscores when defining options in the `ceph_conf_overrides` variable (e.g. osd_mkfs_type), as this may cause incorrect
configuration options to appear in ceph.conf.
* We will no longer accept pull requests that modify the ceph.conf template unless it helps the deployment. For simple configuration tweaks
please use the `ceph_conf_overrides` variable.
### Networking
In any case, you must define the `monitor_interface` variable with the name of the network interface that carries the IP address in the `public_network` subnet.
`monitor_interface` must be defined at least in `group_vars/all.yml`, but it can be overridden in the inventory host file if needed.
You can specify which IP address each monitor binds to by setting the `monitor_address` variable in the **inventory host file**.
You can also use the `monitor_address_block` feature: just specify a subnet and ceph-ansible will automatically set the correct addresses in ceph.conf.
Preference goes to `monitor_address_block` if specified, then `monitor_address`; otherwise the first IP address found on the network interface specified in `monitor_interface` is used by default.
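For example, a hypothetical inventory snippet pinning each monitor to an address (host names and IPs are placeholders):
```
[mons]
mon0 monitor_address=192.168.0.10
mon1 monitor_address=192.168.0.11
mon2 monitor_address=192.168.0.12
```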
## Special notes
If you are deploying a Ceph version older than Jewel,
it is highly recommended that you apply the following settings to the `ceph_conf_overrides` variable in your `group_vars/all.yml` file:
```
ceph_conf_overrides:
osd:
osd recovery max active: 5
osd max backfills: 2
osd recovery op priority: 2
osd recovery threads: 1
```
https://github.com/ceph/ceph-ansible/pull/694 removed all the default options that were part of the repo.
The goal is to keep Ceph's own defaults.
Below you will find the configuration that was applied prior to that PR, in case you want to keep using it:
Setting | ceph-ansible | ceph
--- | --- | ---
cephx require signatures | true | false
cephx cluster require signatures | true | false
osd pool default pg num | 128 | 8
osd pool default pgp num | 128 | 8
rbd concurrent management ops | 20 | 10
rbd default map options | rw | ''
rbd default format | 2 | 1
mon osd down out interval | 600 | 300
mon osd min down reporters | 7 | 1
mon clock drift allowed | 0.15 | 0.5
mon clock drift warn backoff | 30 | 5
mon osd report timeout | 900 | 300
mon pg warn max per osd | 0 | 300
mon osd allow primary affinity | true | false
filestore merge threshold | 40 | 10
filestore split multiple | 8 | 2
osd op threads | 8 | 2
filestore op threads | 8 | 2
osd recovery max active | 5 | 15
osd max backfills | 2 | 10
osd recovery op priority | 2 | 63
osd recovery max chunk | 1048576 | 8 << 20
osd scrub sleep | 0.1 | 0
osd disk thread ioprio class | idle | ''
osd disk thread ioprio priority | 0 | -1
osd deep scrub stride | 1048576 | 524288
osd scrub chunk max | 5 | 25
If you want to use them, just use the `ceph_conf_overrides` variable as explained above.
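For example, to re-apply a few of the old defaults from the table above (a sketch; keep only the options you actually want):
```
ceph_conf_overrides:
  global:
    cephx require signatures: true
  osd:
    osd recovery max active: 5
    osd max backfills: 2
```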
## FAQ
1. I want to have OSD numbers serialized between hosts, so the first OSD node has OSDs 1,2,3 and the second has OSDs 4,5,6, etc. How can I do this?
Simply add `serial: 1` after the osd section `- hosts: osds` in your `site.yml` file.
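A minimal sketch of what that section of `site.yml` would then look like (role list trimmed for brevity):
```yml
- hosts: osds
  become: True
  serial: 1  # deploy one OSD host at a time so OSD ids stay grouped per host
  roles:
    - ceph-osd
```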
## Setup with Vagrant using virtualbox provider
* Create vagrant_variables.yml
```
$ cp vagrant_variables.yml.sample vagrant_variables.yml
```
* Create site.yml
```
$ cp site.yml.sample site.yml
```
* Create VMs
```
$ vagrant up --no-provision --provider=virtualbox
$ vagrant provision
...
...
...
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
mon0 : ok=16 changed=11 unreachable=0 failed=0
mon1 : ok=16 changed=10 unreachable=0 failed=0
mon2 : ok=16 changed=11 unreachable=0 failed=0
osd0 : ok=19 changed=7 unreachable=0 failed=0
osd1 : ok=19 changed=7 unreachable=0 failed=0
osd2 : ok=19 changed=7 unreachable=0 failed=0
rgw : ok=20 changed=17 unreachable=0 failed=0
```
Check the status:
```bash
$ vagrant ssh mon0 -c "sudo ceph -s"
cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
health HEALTH_OK
monmap e3: 3 mons at {ceph-mon0=192.168.0.10:6789/0,ceph-mon1=192.168.0.11:6789/0,ceph-mon2=192.168.0.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon
mdsmap e6: 1/1/1 up {0=ceph-osd0=up:active}, 2 up:standby
osdmap e10: 6 osds: 6 up, 6 in
pgmap v17: 192 pgs, 3 pools, 9470 bytes data, 21 objects
205 MB used, 29728 MB / 29933 MB avail
192 active+clean
```
To re-run the Ansible provisioning scripts:
```bash
$ vagrant provision
```
## Specifying fsid and secret key in production
The Vagrantfile specifies an fsid for the cluster and a secret key for the
monitor. If using these playbooks in production, you must generate your own `fsid`
in `group_vars/all.yml` and `monitor_secret` in `group_vars/mons.yml`. Those files contain
information about how to generate appropriate values for these variables.
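For example, suitable values can be generated with standard tools (one common approach, not the only one):
```bash
# a fresh fsid is just a UUID:
uuidgen
# a monitor secret can be generated with ceph-authtool:
ceph-authtool --gen-print-key
```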
## Specifying package origin
By default, ceph-common installs Ceph from the upstream Ceph repository. However, you
can set `ceph_origin` to "distro" to install Ceph from your distribution's default repository.
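For example, in `group_vars/all.yml`:
```yml
# install Ceph from your distribution's default repository:
ceph_origin: distro
```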
## Setup for Vagrant using libvirt provider
* Create vagrant_variables.yml
```
$ cp vagrant_variables.yml.sample vagrant_variables.yml
```
* Edit `vagrant_variables.yml` and set the following variables:
```yml
memory: 1024
disks: "[ '/dev/vdb', '/dev/vdc' ]"
vagrant_box: centos/7
```
* Create site.yml
```
$ cp site.yml.sample site.yml
```
* Create VMs
```
$ sudo vagrant up --no-provision --provider=libvirt
$ sudo vagrant provision
```
## Setup for Vagrant using parallels provider
* Create vagrant_variables.yml
```
$ cp vagrant_variables.yml.sample vagrant_variables.yml
```
* Edit `vagrant_variables.yml` and set the following variables:
```yml
vagrant_box: parallels/ubuntu-14.04
```
* Create site.yml
```
$ cp site.yml.sample site.yml
```
* Create VMs
```
$ vagrant up --no-provision --provider=parallels
$ vagrant provision
```
### For Debian based systems
If you want to use "backports", set `ceph_use_distro_backports` to `true`.
Note that ceph-common does not manage the backports repository; you must add it yourself.
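For example, in `group_vars/all.yml` (assuming you have already configured the backports repository yourself):
```yml
ceph_use_distro_backports: true
```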
### For Atomic systems
If you want to run containerized deployment on Atomic systems (RHEL/CentOS Atomic), please copy
[vagrant_variables.yml.atomic](vagrant_variables.yml.atomic) to vagrant_variables.yml, and copy [group_vars/all.docker.yml.sample](group_vars/all.docker.yml.sample) to `group_vars/all.yml`.
Since the `centos/atomic-host` VirtualBox box doesn't have a spare storage controller to attach more disks, the first time `vagrant up --provider=virtualbox` runs it will likely fail to attach to a storage controller. In that case, run the following command:
```console
VBoxManage storagectl `VBoxManage list vms |grep ceph-ansible_osd0|awk '{print $1}'|tr \" ' '` --name "SATA" --add sata
```
then run `vagrant up --provider=virtualbox` again.
## Setup for Vagrant using OpenStack provider
Install the Vagrant plugin for the OpenStack provider: `vagrant plugin install vagrant-openstack-provider`.
```bash
$ cp site.yml.sample site.yml
$ cp group_vars/all.docker.yml.sample group_vars/all.yml
$ cp vagrant_variables.yml.openstack vagrant_variables.yml
```
* Edit `vagrant_variables.yml`:
  * Set `mon_vms` and `osd_vms` to the numbers you want.
  * If you are using an Atomic image, uncomment the `skip_tags` line.
  * Uncomment the `os_` lines.
  * Set `os_ssh_username` to 'centos' for CentOS images and 'cloud-user' for RHEL images.
  * Set `os_ssh_private_key_path` to '~/.ssh/id_rsa'.
  * Set `os_openstack_auth_url` to the auth URL of your OpenStack cloud.
  * Set `os_username` and `os_password` to what you provided for OpenStack registration, or leave them out if you have set the corresponding environment variables for your user.
  * Set `os_tenant_name` to your OpenStack cloud project name.
  * Set `os_region` to your OpenStack cloud region name.
  * Set `os_flavor` to 'm3.medium'. This size has ephemeral storage that will be used by the OSD for the /dev/vdb disk.
  * Set `os_image` to an image found in the Images list in the OpenStack cloud dashboard (e.g. 'centos-atomic-host').
  * Set `os_keypair_name` to the keypair name you used for OpenStack registration.
```
$ vagrant up --provider=openstack
```
Once the playbook is finished, you should be able to do `vagrant ssh mon0` or
`vagrant ssh osd0` to get to the VMs.
`sudo docker ps` should show the running containers.
When you are done, use `vagrant destroy` to get rid of the VMs. You should
also remove the associated entries in .ssh/known_hosts so that there are no
stale known_hosts entries if the IP addresses get reused by future OpenStack
cloud instances.
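For example (the address is a placeholder for whatever IPs your instances used):
```bash
# drop the stale host key for a recycled address:
ssh-keygen -R 192.168.0.10
```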
# Want to contribute?
Read this carefully then :).
The repository centralises all the Ansible roles.
The roles are all part of the Galaxy.
We love contributions and we love giving visibility to our contributors; this is why all **commits must be signed off**.
## Tools
### Mailing list
Please register for the mailing list at http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com
### IRC
Feel free to join us in the #ceph-ansible channel on the OFTC servers.
### GitHub
The main GitHub account for the project is at https://github.com/ceph/ceph-ansible/
## Submit a patch
To start contributing just do:
```
$ git checkout -b my-working-branch
$ # do your changes #
$ git add -p
```
One more step: before pushing your code, you should run a syntax check:
```
$ ansible-playbook -i dummy-ansible-hosts test.yml --syntax-check
```
If your change impacts a variable file in a role such as `roles/ceph-common/defaults/main.yml`, you need to generate a `group_vars` file:
```
$ ./generate_group_vars_sample.sh
```
You are finally ready to push your changes on Github:
```
$ git commit -s
$ git push origin my-working-branch
```
Worked on a change and you don't want to resend a commit for a syntax fix?
```
$ # do your syntax change #
$ git commit --amend
$ git push -f origin my-working-branch
```
# Testing PR
Go to the GitHub interface and submit a PR.
We currently have 2 online CIs:
* Travis, which simply does a syntax check
* Jenkins Ceph, which bootstraps one monitor, one OSD and one RGW
If Jenkins detects that your commit broke something, it will turn red.
You can then check the Jenkins logs by clicking the "Testing Playbooks" button in your PR and going to "Console Output".
You can now submit a new commit/change that will update the CI system to run a new play.
If the CI does not get reloaded, simply leave a comment on your PR saying "test this please" and it will trigger a new CI build.
# Backporting changes
If a change should be backported to a `stable-*` Git branch:
* Mark your PR with the GitHub label "Backport" so we don't lose track of it.
* Fetch the latest updates into your clone: `git fetch`
* Determine the latest available stable branch:
`git branch -r --list "origin/stable-[0-9].[0-9]" | sort -r | sed 1q`
* Create a new local branch for your PR, based on the stable branch:
`git checkout --no-track -b my-backported-change origin/stable-2.2`
* Cherry-pick your change: `git cherry-pick -x (your-sha1)`
* Create a new pull request against the `stable-2.2` branch.
* Ensure that your PR's title has the prefix "backport:", so it's clear
to reviewers what this is about.
* Add a comment in your backport PR linking to the original (master) PR.
All changes to the stable branches should land in master first, so we avoid
regressions.
Once this is done, one of the project maintainers will tag the tip of the
stable branch with your change. For example:
```
git checkout stable-2.2
git pull --ff-only
git tag v2.2.5
git push origin v2.2.5
```
You can query backport-related items in GitHub:
* [all PRs labeled as backport candidates](https://github.com/ceph/ceph-ansible/pulls?q=is%3Apr%20label%3Abackport). The "open" ones must be merged to master first. The "closed" ones need to be backported to the stable branch.
* [all backport PRs for stable-2.2](https://github.com/ceph/ceph-ansible/pulls?q=base%3Astable-2.2)
to see if a change has already been backported.
## Vagrant Demo
[![Ceph-ansible Vagrant Demo](http://img.youtube.com/vi/E8-96NamLDo/0.jpg)](https://youtu.be/E8-96NamLDo "Deploy Ceph with Ansible (Vagrant demo)")
## Bare metal demo
Deployment from scratch on bare metal machines:
[![Ceph-ansible bare metal demo](http://img.youtube.com/vi/dv_PEp9qAqg/0.jpg)](https://youtu.be/dv_PEp9qAqg "Deploy Ceph with Ansible (Bare metal demo)")

README.rst 100644

@@ -0,0 +1,8 @@
ceph-ansible
============
Ansible playbooks for Ceph, the distributed filesystem.
Please refer to our hosted documentation here: http://docs.ceph.com/ceph-ansible/master/
You can view documentation for our ``stable-*`` branches by substituting ``master`` in the link
above for the name of the branch. For example: http://docs.ceph.com/ceph-ansible/stable-2.2/

@@ -0,0 +1,85 @@
Contribution Guidelines
=======================
The repository centralises all the Ansible roles. The roles are all part of the Galaxy.
We love contributions and we love giving visibility to our contributors; this is why all **commits must be signed off**.
Mailing list
------------
Please register for the mailing list at http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com
IRC
---
Feel free to join us in the #ceph-ansible channel on the OFTC servers.
Github
------
The main GitHub account for the project is at https://github.com/ceph/ceph-ansible/
Submit a patch
--------------
To start contributing just do::
$ git checkout -b my-working-branch
$ # do your changes #
$ git add -p
If your change impacts a variable file in a role such as ``roles/ceph-common/defaults/main.yml``, you need to generate a ``group_vars`` file::
$ ./generate_group_vars_sample.sh
You are finally ready to push your changes on Github::
$ git commit -s
$ git push origin my-working-branch
Worked on a change and you don't want to resend a commit for a syntax fix?
::
$ # do your syntax change #
$ git commit --amend
$ git push -f origin my-working-branch
PR Testing
----------
Pull Request testing is handled by Jenkins. All tests must pass before your PR will be merged.
All of the tests that run are listed in the GitHub UI along with their current status.
If a test fails and you'd like to rerun it, comment on your PR in the following format::
jenkins test $scenario_name
For example::
jenkins test luminous-ansible2.3-journal_collocation
Backporting changes
-------------------
If a change should be backported to a ``stable-*`` Git branch:
- Mark your PR with the GitHub label "Backport" so we don't lose track of it.
- Fetch the latest updates into your clone: ``git fetch``
- Determine the latest available stable branch:
``git branch -r --list "origin/stable-[0-9].[0-9]" | sort -r | sed 1q``
- Create a new local branch for your PR, based on the stable branch:
``git checkout --no-track -b my-backported-change origin/stable-2.2``
- Cherry-pick your change: ``git cherry-pick -x (your-sha1)``
- Create a new pull request against the ``stable-2.2`` branch.
- Ensure that your PR's title has the prefix "backport:", so it's clear
to reviewers what this is about.
- Add a comment in your backport PR linking to the original (master) PR.
All changes to the stable branches should land in master first, so we avoid
regressions.
Once this is done, one of the project maintainers will tag the tip of the
stable branch with your change. For example::
git checkout stable-2.2
git pull --ff-only
git tag v2.2.5
git push origin v2.2.5

@@ -6,6 +6,5 @@ Glossary
   :maxdepth: 3
   :caption: Contents:

   index
   testing/glossary

@@ -8,30 +8,192 @@ ceph-ansible

Ansible playbooks for Ceph, the distributed filesystem.

Installation
============

github
------
You can install directly from source on GitHub by following these steps:
- Clone the repository::
git clone https://github.com/ceph/ceph-ansible.git
- Next, you must decide which branch of ``ceph-ansible`` you wish to use. There
are stable branches to choose from or you could use the master branch::
git checkout $branch
Releases
========
The following branches should be used depending on your requirements. The ``stable-*``
branches have been QE tested and sometimes receive backport fixes throughout their lifecycle.
The ``master`` branch should be considered experimental and used with caution.
- ``stable-2.1`` Support for ceph version ``jewel``. This branch supports ansible versions
``2.1`` and ``2.2.1``.
- ``stable-2.2`` Support for ceph versions ``jewel`` and ``kraken``. This branch supports ansible versions
``2.1`` and ``2.2.2``.
- ``master`` Support for ceph versions ``jewel``, ``kraken`` and ``luminous``. This branch supports ansible versions
``2.2.3`` and ``2.3.1``.
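For example, to work from one of the stable branches listed above::

    git checkout stable-2.2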
Configuration and Usage
=======================
This project assumes you have a basic knowledge of how ansible works and have already prepared your hosts for
configuration by ansible.
After you've cloned the ``ceph-ansible`` repository, selected your branch and installed ansible, you'll need to create
your inventory file, playbook and configuration for your ceph cluster.
Inventory
---------
The ansible inventory file defines the hosts in your cluster and what roles each host plays in your ceph cluster. The default
location for an inventory file is ``/etc/ansible/hosts`` but this file can be placed anywhere and used with the ``-i`` flag of
ansible-playbook. An example inventory file would look like::
[mons]
mon1
mon2
mon3
[osds]
osd1
osd2
osd3
.. note::
For more information on ansible inventories please refer to the ansible documentation: http://docs.ansible.com/ansible/latest/intro_inventory.html
Playbook
--------
You must have a playbook to pass to the ``ansible-playbook`` command when deploying your cluster. There is a sample playbook at the root of the ``ceph-ansible``
project called ``site.yml.sample``. This playbook should work fine for most usages, but by default it includes every daemon group, which might not be
appropriate for your cluster setup. Perform the following steps to prepare your playbook:
- Rename the sample playbook: ``mv site.yml.sample site.yml``
- Modify the playbook as necessary for the requirements of your cluster
.. note::
It's important that the playbook you use is placed at the root of the ``ceph-ansible`` project. This is how ansible is able to find the roles that
``ceph-ansible`` provides.
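Once your inventory and playbook are ready, a typical run from the root of the project looks like this (the inventory path is just an illustration)::

    ansible-playbook -i /path/to/inventory site.yml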
ceph-ansible Configuration
--------------------------
The configuration for your ceph cluster will be set by the use of ansible variables that ``ceph-ansible`` provides. All of these options and their default
values are defined in the ``group_vars/`` directory at the root of the ``ceph-ansible`` project. Ansible will use configuration in a ``group_vars/`` directory
that is relative to your inventory file or your playbook. Inside of the ``group_vars/`` directory there are many sample ansible configuration files that relate
to each of the ceph daemon groups by their filename. For example, ``osds.yml.sample`` contains all the default configuration for the OSD daemons. The ``all.yml.sample``
file is a special ``group_vars`` file that applies to all hosts in your cluster.
.. note::
For more information on setting group or host specific configuration refer to the ansible documentation: http://docs.ansible.com/ansible/latest/intro_inventory.html#splitting-out-host-and-group-specific-data
At the most basic level you must tell ``ceph-ansible`` what version of ceph you wish to install, the method of installation, your cluster's network settings and
how you want your OSDs configured. To begin your configuration, rename each file in ``group_vars/`` you wish to use so that it does not include the ``.sample``
at the end of the filename, uncomment the options you wish to change and provide your own value.
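For example, to enable just the global and OSD group files (``cp`` is shown so the samples are kept; ``mv`` works too)::

    cp group_vars/all.yml.sample group_vars/all.yml
    cp group_vars/osds.yml.sample group_vars/osds.yml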
An example configuration that deploys the upstream ``jewel`` version of ceph with OSDs that have collocated journals would look like this in ``group_vars/all.yml``::
ceph_stable: True
ceph_stable_release: jewel
public_network: "192.168.3.0/24"
cluster_network: "192.168.4.0/24"
monitor_interface: eth1
journal_size: 100
osd_objectstore: "filestore"
devices:
- '/dev/sda'
- '/dev/sdb'
osd_scenario: collocated
# use this to set your PG config for the cluster
ceph_conf_overrides:
global:
osd_pool_default_pg_num: 8
osd_pool_default_size: 1
The following config options must be changed on all installations, but there could be other required options depending on your OSD scenario
selection or other aspects of your cluster.
- ``ceph_stable_release``
- ``ceph_stable`` or ``ceph_rhcs`` or ``ceph_dev``
- ``public_network``
- ``osd_scenario``
- ``journal_size``
- ``monitor_interface`` or ``monitor_address``
ceph.conf Configuration
-----------------------
The supported method for defining your ceph.conf is to use the ``ceph_conf_overrides`` variable. This allows you to specify configuration options using
an INI format. This variable can be used to override sections already defined in ceph.conf (see: ``roles/ceph-common/templates/ceph.conf.j2``) or to provide
new configuration options. The following sections in ceph.conf are supported: [global], [mon], [osd], [mds] and [rgw].
An example::
ceph_conf_overrides:
global:
foo: 1234
bar: 5678
osd:
osd_mkfs_type: ext4
.. note::
We will no longer accept pull requests that modify the ceph.conf template unless it helps the deployment. For simple configuration tweaks
please use the ``ceph_conf_overrides`` variable.
Full documentation for configuring each of the ceph daemon types is in the following sections.
OSD Configuration
=================
OSD configuration is set by selecting an OSD scenario and providing the configuration needed for
that scenario. Each scenario differs in its requirements. You select your OSD scenario by
setting the ``osd_scenario`` configuration option.
.. toctree::
   :maxdepth: 1

   osds/scenarios
Contribution
============
See the following section for guidelines on how to contribute to ``ceph-ansible``.

.. toctree::
   :maxdepth: 1

   dev/index

Testing
=======
Documentation for writing functional testing scenarios for ceph-ansible.
* :doc:`Testing with ceph-ansible <testing/index>`
* :doc:`Glossary <testing/glossary>`
Demos
=====
Vagrant Demo
------------
A demo of deploying Ceph with Ansible and Vagrant: https://youtu.be/E8-96NamLDo
Bare metal demo
---------------
Deployment from scratch on bare metal machines: https://youtu.be/dv_PEp9qAqg

@@ -1,6 +1,166 @@

OSD Scenarios
=============
The following are all of the available options for the ``osd_scenario`` config
setting. Defining an ``osd_scenario`` is mandatory for using ``ceph-ansible``.
collocated
----------
This OSD scenario uses ``ceph-disk`` to create OSDs with collocated journals
from raw devices.
Use ``osd_scenario: collocated`` to enable this scenario. This scenario also
has the following required configuration options:
- ``devices``
This scenario has the following optional configuration options:
- ``osd_objectstore``: defaults to ``filestore`` if not set. Available options are ``filestore`` or ``bluestore``.
You can only select ``bluestore`` if the Ceph release is Luminous or greater.
- ``dmcrypt``: defaults to ``false`` if not set.
This scenario supports encrypting your OSDs by setting ``dmcrypt: True``.
If ``osd_objectstore: filestore`` is enabled, both 'ceph data' and 'ceph journal' partitions
will be stored on the same device.
If ``osd_objectstore: bluestore`` is enabled 'ceph data', 'ceph block', 'ceph block.db', 'ceph block.wal' will be stored
on the same device. The device will get 2 partitions:
- One for 'data', called 'ceph data'
- One for 'ceph block', 'ceph block.db', 'ceph block.wal' called 'ceph block'
Example of what you will get::
[root@ceph-osd0 ~]# blkid /dev/sda*
/dev/sda: PTTYPE="gpt"
/dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
/dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"
An example of using the ``collocated`` OSD scenario with encryption would look like::
osd_scenario: collocated
dmcrypt: true
devices:
- /dev/sda
- /dev/sdb
non-collocated
--------------
This OSD scenario uses ``ceph-disk`` to create OSDs from raw devices with journals that
exist on a dedicated device.
Use ``osd_scenario: non-collocated`` to enable this scenario. This scenario also
has the following required configuration options:
- ``devices``
This scenario has the following optional configuration options:
- ``dedicated_devices``: defaults to ``devices`` if not set
- ``osd_objectstore``: defaults to ``filestore`` if not set. Available options are ``filestore`` or ``bluestore``.
You can only select ``bluestore`` if the Ceph release is Luminous or greater.
- ``dmcrypt``: defaults to ``false`` if not set.
This scenario supports encrypting your OSDs by setting ``dmcrypt: True``.
If ``osd_objectstore: filestore`` is enabled, 'ceph data' and 'ceph journal' partitions
will be stored on different devices:
- 'ceph data' will be stored on the device listed in ``devices``
- 'ceph journal' will be stored on the device listed in ``dedicated_devices``
Let's take an example: imagine ``devices`` was declared like this::
devices:
- /dev/sda
- /dev/sdb
- /dev/sdc
- /dev/sdd
And ``dedicated_devices`` was declared like this::
dedicated_devices:
- /dev/sdf
- /dev/sdf
- /dev/sdg
- /dev/sdg
This will result in the following mapping:
- /dev/sda will have /dev/sdf1 as a journal
- /dev/sdb will have /dev/sdf2 as a journal
- /dev/sdc will have /dev/sdg1 as a journal
- /dev/sdd will have /dev/sdg2 as a journal
.. note::
In a containerized scenario we only support A SINGLE journal
device for all the OSDs on a given machine; using more than one will cause failures.
This is a limitation we plan to fix at some point.
If ``osd_objectstore: bluestore`` is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
on a dedicated device.
So the following will happen:
- The devices listed in ``devices`` will get 2 partitions, one for 'block' and one for 'data'. 'data' is only 100MB big and does not store any of your data, it's just a bunch of Ceph metadata. 'block' will store all your actual data.
- The devices in ``dedicated_devices`` will get 1 partition for the RocksDB DB, called 'block.db', and one for the RocksDB WAL, called 'block.wal'.
By default ``dedicated_devices`` will represent 'block.db'.
Example of what you will get::
[root@ceph-osd0 ~]# blkid /dev/sd*
/dev/sda: PTTYPE="gpt"
/dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
/dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
/dev/sdb: PTTYPE="gpt"
/dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
/dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"
There is more device granularity for Bluestore, ONLY if ``osd_objectstore: bluestore`` is enabled, via the
``bluestore_wal_devices`` config option.
By default, if ``bluestore_wal_devices`` is empty, it will get the content of ``dedicated_devices``.
If set, you will have a dedicated partition on a specific device for 'block.wal'.
Example of what you will get::
[root@ceph-osd0 ~]# blkid /dev/sd*
/dev/sda: PTTYPE="gpt"
/dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
/dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
/dev/sdb: PTTYPE="gpt"
/dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
/dev/sdc: PTTYPE="gpt"
/dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"
An example of using the ``non-collocated`` OSD scenario with encryption, bluestore and dedicated wal devices would look like::
osd_scenario: non-collocated
osd_objectstore: bluestore
dmcrypt: true
devices:
- /dev/sda
- /dev/sdb
dedicated_devices:
- /dev/sdc
- /dev/sdc
bluestore_wal_devices:
- /dev/sdd
- /dev/sdd
lvm
---
This OSD scenario uses ``ceph-volume`` to create OSDs from logical volumes and