lvm: support --data as a raw device or partition in ceph-volume

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
pull/2172/head
Andrew Schoen 2017-11-13 14:56:51 -06:00
parent 04f02910a9
commit 3c604f1115
3 changed files with 101 additions and 11 deletions


@@ -174,9 +174,9 @@ is only available when the ceph release is Luminous or newer.
``lvm_volumes`` is the config option that needs to be defined to configure the
mappings for devices to be deployed. It is a list of dictionaries which expects
a volume name and a volume group for logical volumes, but can also accept
a partition in the case of ``filestore`` for the ``journal``.
The ``data`` key represents the logical volume name, raw device or partition that is to be used for your
OSD data. The ``data_vg`` key represents the volume group name that your
``data`` logical volume resides on. This key is required for purging of OSDs
created by this scenario.
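For instance, a minimal sketch of the accepted forms of ``data`` (the volume group
``vg1``, the logical volume ``data-lv1`` and the devices ``/dev/sda`` and ``/dev/sda1``
are illustrative assumptions, not requirements)::

    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
      - data: /dev/sda
      - data: /dev/sda1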
@@ -196,17 +196,17 @@ There is filestore support which can be enabled with::
To configure this scenario use the ``lvm_volumes`` config option.
``lvm_volumes`` is a list of dictionaries which expects a volume name and
a volume group for logical volumes, but can also accept a partition in the case of
``filestore`` for the ``journal``.
The following keys are accepted for a ``filestore`` deployment:
* ``data``
* ``data_vg`` (not required if ``data`` is a raw device or partition)
* ``journal``
* ``journal_vg`` (not required if ``journal`` is a partition and not a logical volume)
The ``journal`` key represents the logical volume name or partition that will be used for your OSD journal.
For example, a configuration to use the ``lvm`` osd scenario would look like::
@@ -223,19 +223,24 @@ For example, a configuration to use the ``lvm`` osd scenario would look like::
- data: data-lv3
journal: /dev/sdb1
data_vg: vg2
- data: /dev/sda
journal: /dev/sdb1
- data: /dev/sda1
journal: journal-lv1
journal_vg: vg2
``bluestore``
^^^^^^^^^^^^^
This scenario allows a combination of devices to be used in an OSD.
``bluestore`` can work just with a single "block" device (specified by the
``data`` and optionally ``data_vg``) or additionally with a ``block.wal`` and ``block.db``
(interchangeably).
The following keys are accepted for a ``bluestore`` deployment:
* ``data`` (required)
* ``data_vg`` (not required if ``data`` is a raw device or partition)
* ``db`` (optional for ``block.db``)
* ``db_vg`` (optional for ``block.db``)
* ``wal`` (optional for ``block.wal``)
@@ -263,3 +268,4 @@ could look like::
db_vg: vg4
wal: wal-lv4
wal_vg: vg4
- data: /dev/sda
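A raw device or partition for ``data`` can also be combined with logical volumes for
``block.db`` and ``block.wal``. A sketch, assuming the device ``/dev/sdb`` and the
logical volumes ``db-lv5`` and ``wal-lv5`` in volume group ``vg5`` exist::

    lvm_volumes:
      - data: /dev/sdb
        db: db-lv5
        db_vg: vg5
        wal: wal-lv5
        wal_vg: vg5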


@@ -195,9 +195,9 @@ bluestore_wal_devices: "{{ dedicated_devices }}"
#
# Filestore: Each dictionary must contain a data, journal and vg_name key. Any
# logical volume or logical group used must be a name and not a path. data
# can be a logical volume, device or partition. journal can be either a lv or partition.
# You can not use the same journal for many data lvs.
# data_vg must be the volume group name of the data lv, only applicable when data is an lv.
# journal_vg is optional and must be the volume group name of the journal lv, if applicable.
# For example:
# lvm_volumes:
@@ -206,11 +206,15 @@ bluestore_wal_devices: "{{ dedicated_devices }}"
# journal: journal-lv1
# journal_vg: vg2
# - data: data-lv2
# journal: /dev/sda1
# data_vg: vg1
# - data: data-lv3
# journal: /dev/sdb1
# data_vg: vg2
# - data: /dev/sda
# journal: /dev/sdb1
# - data: /dev/sda1
# journal: /dev/sdb1
#
# Bluestore: Each dictionary must contain at least data. When defining wal or
# db, it must have both the lv name and vg group (db and wal are not required).
@@ -232,6 +236,8 @@ bluestore_wal_devices: "{{ dedicated_devices }}"
# db_vg: vg3
# - data: data-lv4
# data_vg: vg4
# - data: /dev/sda
# - data: /dev/sdb1
lvm_volumes: []


@@ -84,3 +84,81 @@
    - item.db is not defined
    - item.db_vg is not defined
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create filestore osds with dedicated journals and a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --filestore --data {{ item.data }} --journal {{ item.journal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - item.journal_vg is not defined
    - item.data_vg is not defined
    - osd_objectstore == 'filestore'
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create filestore osds with dedicated lv journals and a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --filestore --data {{ item.data }} --journal {{ item.journal_vg }}/{{ item.journal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - item.journal_vg is defined
    - item.data_vg is not defined
    - osd_objectstore == 'filestore'
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with db and wal and a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data }} --block.wal {{ item.wal_vg }}/{{ item.wal }} --block.db {{ item.db_vg }}/{{ item.db }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.data_vg is not defined
    - item.wal is defined
    - item.wal_vg is defined
    - item.db is defined
    - item.db_vg is defined
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with db only and a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data }} --block.db {{ item.db_vg }}/{{ item.db }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.data_vg is not defined
    - item.wal is not defined
    - item.wal_vg is not defined
    - item.db is defined
    - item.db_vg is defined
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with wal only and a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data }} --block.wal {{ item.wal_vg }}/{{ item.wal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is defined
    - item.data_vg is not defined
    - item.wal_vg is defined
    - item.db is not defined
    - item.db_vg is not defined
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with just a raw device or partition for data
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is not defined
    - item.data_vg is not defined
    - item.wal_vg is not defined
    - item.db is not defined
    - item.db_vg is not defined
    - "'{{ item.data }}' not in ceph_volume_lvm_list.stdout"