This OSD scenario uses ``ceph-disk`` to create OSDs from raw devices, with journals that
reside on a dedicated device.
Use ``osd_scenario: non-collocated`` to enable this scenario. This scenario also
has the following required configuration option:

- ``devices``

This scenario has the following optional configuration options:

- ``dedicated_devices``: defaults to ``devices`` if not set
- ``osd_objectstore``: defaults to ``filestore`` if not set. Available options are ``filestore`` or ``bluestore``.
  You can only select ``bluestore`` when the Ceph release is Luminous or greater.
- ``dmcrypt``: defaults to ``false`` if not set.
  This scenario supports encrypting your OSDs by setting ``dmcrypt: True``.
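
A minimal sketch of what these options could look like together, assuming they are set in a
``group_vars/osds.yml`` file (the file name is only a common convention, not a requirement)::

    osd_scenario: non-collocated
    osd_objectstore: filestore
    dmcrypt: false
    devices:
      - /dev/sda
      - /dev/sdb
    dedicated_devices:
      - /dev/sdf
      - /dev/sdf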
If ``osd_objectstore: filestore`` is enabled, the 'ceph data' and 'ceph journal' partitions
will be stored on different devices:

- 'ceph data' will be stored on the device listed in ``devices``
- 'ceph journal' will be stored on the device listed in ``dedicated_devices``
For example, imagine ``devices`` was declared like this::

    devices:
      - /dev/sda
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
And ``dedicated_devices`` was declared like this::

    dedicated_devices:
      - /dev/sdf
      - /dev/sdf
      - /dev/sdg
      - /dev/sdg
This will result in the following mapping:

- /dev/sda will have /dev/sdf1 as a journal
- /dev/sdb will have /dev/sdf2 as a journal
- /dev/sdc will have /dev/sdg1 as a journal
- /dev/sdd will have /dev/sdg2 as a journal
.. note::
   In a containerized scenario, only a single dedicated journal device is supported
   for all the OSDs on a given machine. Configuring more than one will cause failures.
   This is a limitation we plan to fix at some point.
If ``osd_objectstore: bluestore`` is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
on a dedicated device.

So the following will happen:

- The devices listed in ``devices`` will get 2 partitions, one for 'block' and one for 'data'. 'data' is only 100MB
  and does not store any of your data; it only holds a bunch of Ceph metadata. 'block' will store all your actual data.
- The devices in ``dedicated_devices`` will get 1 partition for the RocksDB DB, called 'block.db', and one for the
  RocksDB WAL, called 'block.wal'.

By default, ``dedicated_devices`` will represent ``block.db``.
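
A minimal sketch of a ``bluestore`` configuration for this scenario, reusing the device lists
from the example above (again assuming the variables live in ``group_vars/osds.yml``)::

    osd_scenario: non-collocated
    osd_objectstore: bluestore
    devices:
      - /dev/sda
      - /dev/sdb
    dedicated_devices:
      - /dev/sdf
      - /dev/sdf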
To configure this scenario, use the ``lvm_volumes`` config option. ``lvm_volumes`` is a list of dictionaries which can
contain ``data``, ``journal``, ``data_vg`` and ``journal_vg`` keys:

- ``data``: the logical volume name to be used for your OSD data.
- ``journal``: the logical volume name, device or partition that will be used for your OSD journal.
- ``data_vg``: the volume group name that your ``data`` logical volume resides on. This key is required for purging
  of OSDs created by this scenario.
- ``journal_vg``: optional; the volume group name that your journal logical volume resides on, if applicable.
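
A minimal sketch of what ``lvm_volumes`` could look like, where the volume group and logical
volume names below are only placeholders for volumes you have already created::

    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
        journal: journal-lv1
        journal_vg: vg2
      - data: data-lv2
        data_vg: vg1
        journal: /dev/sdc1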