OSD Scenarios
=============

The following are all of the available options for the ``osd_scenario`` config
setting. Defining an ``osd_scenario`` is mandatory for using ``ceph-ansible``.

collocated
----------

This OSD scenario uses ``ceph-disk`` to create OSDs with collocated journals
from raw devices.

Use ``osd_scenario: collocated`` to enable this scenario. This scenario also
has the following required configuration options:

- ``devices``

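For instance, a minimal sketch of this required setting might look like the
following (the device paths are purely illustrative)::

    osd_scenario: collocated
    devices:
      - /dev/sda
      - /dev/sdb
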
This scenario has the following optional configuration options:

- ``osd_objectstore``: defaults to ``filestore`` if not set. Available options are ``filestore`` or ``bluestore``.
  You can only select ``bluestore`` when the Ceph release is Luminous or greater.

- ``dmcrypt``: defaults to ``false`` if not set.

This scenario supports encrypting your OSDs by setting ``dmcrypt: true``.

If ``osd_objectstore: filestore`` is enabled, both the 'ceph data' and 'ceph journal' partitions
will be stored on the same device.

If ``osd_objectstore: bluestore`` is enabled, the 'ceph data', 'ceph block', 'ceph block.db', and
'ceph block.wal' will all be stored on the same device. The device will get two partitions:

- One for 'data', called 'ceph data'

- One for 'ceph block', 'ceph block.db', and 'ceph block.wal', called 'ceph block'

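As an illustrative sketch, selecting bluestore with this scenario could look
like the following (device path hypothetical)::

    osd_scenario: collocated
    osd_objectstore: bluestore
    devices:
      - /dev/sda
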
Example of what you will get::

    [root@ceph-osd0 ~]# blkid /dev/sda*
    /dev/sda: PTTYPE="gpt"
    /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
    /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"

An example of using the ``collocated`` OSD scenario with encryption would look like::

    osd_scenario: collocated
    dmcrypt: true

non-collocated
--------------

This OSD scenario uses ``ceph-disk`` to create OSDs from raw devices with journals that
exist on a dedicated device.

Use ``osd_scenario: non-collocated`` to enable this scenario. This scenario also
has the following required configuration options:

- ``devices``

This scenario has the following optional configuration options:

- ``dedicated_devices``: defaults to ``devices`` if not set

- ``osd_objectstore``: defaults to ``filestore`` if not set. Available options are ``filestore`` or ``bluestore``.
  You can only select ``bluestore`` when the Ceph release is Luminous or greater.

- ``dmcrypt``: defaults to ``false`` if not set.

This scenario supports encrypting your OSDs by setting ``dmcrypt: true``.

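As with ``collocated``, a sketch of enabling encryption for this scenario would
simply add the flag::

    osd_scenario: non-collocated
    dmcrypt: true
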
If ``osd_objectstore: filestore`` is enabled, the 'ceph data' and 'ceph journal' partitions
will be stored on different devices:

- 'ceph data' will be stored on the device listed in ``devices``

- 'ceph journal' will be stored on the device listed in ``dedicated_devices``

Let's take an example. Imagine ``devices`` was declared like this::

    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd

And ``dedicated_devices`` was declared like this::

    dedicated_devices:
      - /dev/sdf
      - /dev/sdf
      - /dev/sdg
      - /dev/sdg

This will result in the following mapping:

- /dev/sda will have /dev/sdf1 as a journal

- /dev/sdb will have /dev/sdf2 as a journal

- /dev/sdc will have /dev/sdg1 as a journal

- /dev/sdd will have /dev/sdg2 as a journal

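Putting the snippets above together, a sketch of a complete ``non-collocated``
filestore configuration producing this mapping would be::

    osd_scenario: non-collocated
    osd_objectstore: filestore
    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
    dedicated_devices:
      - /dev/sdf
      - /dev/sdf
      - /dev/sdg
      - /dev/sdg
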
.. note::
    In a containerized scenario we only support a single journal device for all
    the OSDs on a given machine; using more than one journal device will cause
    failures. This is a limitation we plan to fix at some point.

If ``osd_objectstore: bluestore`` is enabled, both the 'ceph block.db' and 'ceph block.wal' partitions
will be stored on a dedicated device.

So the following will happen:

- The devices listed in ``devices`` will get 2 partitions, one for 'block' and one for 'data'. 'data' is only 100MB and does not store any of your actual data; it is just a bunch of Ceph metadata. 'block' will store all your actual data.

- The devices in ``dedicated_devices`` will get 1 partition for the RocksDB DB, called 'block.db', and one for the RocksDB WAL, called 'block.wal'

By default, ``dedicated_devices`` will hold 'block.db' (and 'block.wal', unless
``bluestore_wal_devices`` is set, as described below).

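A minimal sketch of this default bluestore layout (device paths illustrative)::

    osd_scenario: non-collocated
    osd_objectstore: bluestore
    devices:
      - /dev/sda
    dedicated_devices:
      - /dev/sdb
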
Example of what you will get::

    [root@ceph-osd0 ~]# blkid /dev/sd*
    /dev/sda: PTTYPE="gpt"
    /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
    /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
    /dev/sdb: PTTYPE="gpt"
    /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
    /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"

For Bluestore only, there is additional device granularity: when
``osd_objectstore: bluestore`` is enabled you can also set the
``bluestore_wal_devices`` config option.

By default, if ``bluestore_wal_devices`` is empty, it will get the content of ``dedicated_devices``.
If set, you will have a dedicated partition on a specific device for 'block.wal'.

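A sketch of such a configuration, with 'block.wal' split onto its own device
(device paths illustrative)::

    osd_scenario: non-collocated
    osd_objectstore: bluestore
    devices:
      - /dev/sda
    dedicated_devices:
      - /dev/sdb
    bluestore_wal_devices:
      - /dev/sdc
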
Example of what you will get::

    [root@ceph-osd0 ~]# blkid /dev/sd*
    /dev/sda: PTTYPE="gpt"
    /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
    /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
    /dev/sdb: PTTYPE="gpt"
    /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
    /dev/sdc: PTTYPE="gpt"
    /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"

lvm
---

This OSD scenario uses ``ceph-volume`` to create OSDs from logical volumes and
is only available when the Ceph release is Luminous or newer.

.. note::
    The creation of the logical volumes is not supported by ``ceph-ansible``;
    ``ceph-volume`` only creates OSDs from existing logical volumes.

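As a purely illustrative sketch, the logical volumes could be prepared ahead of
time with standard LVM tooling (the volume group and LV names are hypothetical
placeholders)::

    # Create a volume group on a raw device, then the data and journal LVs
    # that ``lvm_volumes`` will later reference by name.
    vgcreate data-vg /dev/sda
    lvcreate -n data-lv1 -L 100G data-vg
    lvcreate -n journal-lv1 -L 5G data-vg
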
Use ``osd_scenario: lvm`` to enable this scenario. Currently we only support dedicated journals
when using lvm, not collocated journals.

To configure this scenario use the ``lvm_volumes`` config option. ``lvm_volumes`` is a dictionary whose
key/value pairs represent a data lv and a journal pair. Journals can be either an lv, a device, or a partition.
You cannot use the same journal for multiple data lvs.

.. note::
    Any logical volume or logical group used in ``lvm_volumes`` must be a name and not a path.

For example, a configuration to use the ``lvm`` osd scenario would look like::

    osd_scenario: lvm
    lvm_volumes:
      data-lv1: journal-lv1
      data-lv2: /dev/sda
      data-lv3: /dev/sdb1