diff --git a/docs/source/index.rst b/docs/source/index.rst
index 4239ead35..b4d53b9f0 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -18,6 +18,12 @@ Testing
 OSDs
 ====
 
+.. toctree::
+   :maxdepth: 1
+
+   osds/scenarios
+
+
 MONs
 ====
diff --git a/docs/source/osds/scenarios.rst b/docs/source/osds/scenarios.rst
new file mode 100644
index 000000000..03c46a2d1
--- /dev/null
+++ b/docs/source/osds/scenarios.rst
@@ -0,0 +1,27 @@
+OSD Scenarios
+=============
+
+lvm_osds
+--------
+This OSD scenario uses ``ceph-volume`` to create OSDs from logical volumes and
+is only available when the Ceph release is Luminous or newer.
+
+.. note::
+   The creation of the logical volumes is not supported by ceph-ansible;
+   ceph-volume only creates OSDs from existing logical volumes.
+
+Set ``lvm_osds: true`` to enable this scenario. Currently, only dedicated
+journals are supported when using lvm, not collocated journals.
+
+To configure this scenario, use the ``lvm_volumes`` option: a dictionary whose
+key/value pairs map a data logical volume to its journal. A journal can be a
+logical volume, a device, or a partition, and cannot be shared by data lvs.
+
+For example, a configuration using ``lvm_osds`` would look like::
+
+    lvm_osds: true
+
+    lvm_volumes:
+      data-lv1: journal-lv1
+      data-lv2: /dev/sda
+      data-lv3: /dev/sdb1
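
As a side note (not part of the patch itself), the journal-uniqueness rule stated in the new docs page can be illustrated with a short Python sketch. The ``lvm_volumes`` mapping and the ``duplicate_journals`` helper below are hypothetical, written only to demonstrate the constraint that no journal may back more than one data logical volume:

```python
from collections import Counter

# Hypothetical lvm_volumes mapping, mirroring the example in the docs page:
# each key is a data logical volume, each value its journal (an lv, a whole
# device, or a partition).
lvm_volumes = {
    "data-lv1": "journal-lv1",
    "data-lv2": "/dev/sda",
    "data-lv3": "/dev/sdb1",
}

def duplicate_journals(volumes):
    """Return journals assigned to more than one data lv (should be empty)."""
    counts = Counter(volumes.values())
    return sorted(j for j, n in counts.items() if n > 1)

print(duplicate_journals(lvm_volumes))  # -> [] means the mapping is valid
```

A non-empty result would indicate a configuration that the docs forbid, e.g. two data lvs pointing at the same journal device.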