osd: remove dedicated_devices variable

This variable was related to ceph-disk scenarios.
Since we are entirely dropping ceph-disk support as of stable-4.0, let's
remove this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit f0416c8892)
pull/3857/head
Guillaume Abrioux 2019-04-11 10:08:22 +02:00 committed by mergify[bot]
parent 4a663e1fc0
commit 41e55a840f
5 changed files with 16 additions and 69 deletions
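In practice this means a leftover ``dedicated_devices`` entry in ``group_vars``/host vars is simply no longer consumed and can be dropped; only the ``devices`` list (or ``lvm_volumes``) stays. A hypothetical excerpt:

# group_vars/osds.yml -- hypothetical values
devices:
  - /dev/sda
  - /dev/sdb
# dedicated_devices: []   <- ceph-disk era setting, no longer used as of stable-4.0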


@@ -201,20 +201,19 @@ At the most basic level you must tell ``ceph-ansible`` what version of Ceph you
how you want your OSDs configured. To begin your configuration, rename each file in ``group_vars/`` you wish to use so that it does not include the ``.sample``
at the end of the filename, then uncomment the options you wish to change and provide your own value.
An example configuration that deploys the upstream ``jewel`` version of Ceph with OSDs that have collocated journals would look like this in ``group_vars/all.yml``:
An example configuration that deploys the upstream ``octopus`` version of Ceph with the ``lvm`` batch method would look like this in ``group_vars/all.yml``:
.. code-block:: yaml
ceph_origin: repository
ceph_repository: community
ceph_stable_release: jewel
ceph_stable_release: octopus
public_network: "192.168.3.0/24"
cluster_network: "192.168.4.0/24"
monitor_interface: eth1
devices:
- '/dev/sda'
- '/dev/sdb'
osd_scenario: collocated
The following config options are required to be changed on all installations but there could be other required options depending on your OSD scenario
selection or other aspects of your cluster.
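Assembled from the unchanged and added lines above, the updated example reads roughly as follows; the networks, interface name and device paths are the placeholder values from the example itself:

.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: community
   ceph_stable_release: octopus
   public_network: "192.168.3.0/24"
   cluster_network: "192.168.4.0/24"
   monitor_interface: eth1
   devices:
     - '/dev/sda'
     - '/dev/sdb'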
@@ -222,7 +221,6 @@ selection or other aspects of your cluster.
- ``ceph_origin``
- ``ceph_stable_release``
- ``public_network``
- ``osd_scenario``
- ``monitor_interface`` or ``monitor_address``
@@ -264,9 +262,8 @@ Full documentation for configuring each of the Ceph daemon types are in the foll
OSD Configuration
-----------------
OSD configuration is set by selecting an OSD scenario and providing the configuration needed for
that scenario. Each scenario is different in its requirements. Selecting your OSD scenario is done
by setting the ``osd_scenario`` configuration option.
OSD configuration used to be set by selecting an OSD scenario and providing the configuration needed for
that scenario. As of stable-4.0 (Nautilus), the only scenario available is ``lvm``.
.. toctree::
:maxdepth: 1


@@ -1,24 +1,22 @@
OSD Scenario
============
As of stable-3.2, the following scenarios are not supported anymore since they are associated to ``ceph-disk``:
As of stable-4.0, the following scenarios are no longer supported since they are associated with ``ceph-disk``:
* `collocated`
* `non-collocated`
``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
Supported values for the required ``osd_scenario`` variable are:
Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
<osd_scenario_lvm>` that uses the ``ceph-volume`` provisioning tool. Any other
scenario will cause deprecation warnings.
At present (starting from stable-3.2), there is only one scenario, which defaults to ``lvm``, see:
``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
At present (starting from stable-4.0), there is only one scenario, which defaults to ``lvm``; see:
* :ref:`lvm <osd_scenario_lvm>`
So there is no need to configure ``osd_scenario`` anymore, it defaults to ``lvm``.
Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
<osd_scenario_lvm>` that uses the ``ceph-volume`` provisioning tool. Any other
scenario will cause deprecation warnings.
The ``lvm`` scenario mentioned above supports both containerized and non-containerized clusters.
As a reminder, deploying a containerized cluster can be done by setting ``containerized_deployment``
to ``True``.
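As a minimal sketch of the containerized case mentioned above, assuming the usual ceph-ansible container image variables (``ceph_docker_registry``, ``ceph_docker_image``, ``ceph_docker_image_tag``) with placeholder values:

.. code-block:: yaml

   # group_vars/all.yml -- hypothetical excerpt for a containerized deployment
   containerized_deployment: true
   ceph_docker_registry: docker.io
   ceph_docker_image: ceph/daemon
   ceph_docker_image_tag: latest-nautilus
   devices:
     - /dev/sda
     - /dev/sdb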
@@ -30,13 +28,7 @@ lvm
This OSD scenario uses ``ceph-volume`` to create OSDs, primarily using LVM, and
is only available when the Ceph release is luminous or newer.
**It is the preferred method of provisioning OSDs.**
It is enabled with the following setting::
osd_scenario: lvm
It is automatically enabled.
Other (optional) supported settings:
@@ -72,7 +64,6 @@ Ceph usage, the configuration would be:
.. code-block:: yaml
osd_scenario: lvm
devices:
- /dev/sda
- /dev/sdb
@@ -85,7 +76,6 @@ devices, for example:
.. code-block:: yaml
osd_scenario: lvm
devices:
- /dev/sda
- /dev/sdb
@@ -102,7 +92,6 @@ This option can also be used with ``osd_auto_discovery``, meaning that you do no
.. code-block:: yaml
osd_scenario: lvm
osd_auto_discovery: true
Other (optional) supported settings:
@@ -176,7 +165,6 @@ Supported ``lvm_volumes`` configuration settings:
.. code-block:: yaml
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: /dev/sda
- data: /dev/sdb
@@ -189,7 +177,6 @@ Supported ``lvm_volumes`` configuration settings:
.. code-block:: yaml
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: data-lv1
data_vg: data-vg1
@@ -204,7 +191,6 @@ Supported ``lvm_volumes`` configuration settings:
.. code-block:: yaml
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
- data: data-lv1
data_vg: data-vg1
@@ -227,7 +213,6 @@ Supported ``lvm_volumes`` configuration settings:
.. code-block:: yaml
osd_objectstore: filestore
osd_scenario: lvm
lvm_volumes:
- data: data-lv1
data_vg: data-vg1


@@ -55,9 +55,6 @@ dummy:
# If set to True, no matter which osd_objectstore you use, the data will be encrypted
#dmcrypt: False
#dedicated_devices: []
# Use ceph-volume to create OSDs from logical volumes.
# lvm_volumes is a list of dictionaries.
#


@@ -47,9 +47,6 @@ osd_auto_discovery: false
# If set to True, no matter which osd_objectstore you use, the data will be encrypted
dmcrypt: False
dedicated_devices: []
# Use ceph-volume to create OSDs from logical volumes.
# lvm_volumes is a list of dictionaries.
#
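The surviving comment above points at ``lvm_volumes``; as a sketch of what such an override in ``group_vars/osds.yml`` can look like, reusing the volume/VG names from the documentation examples above (they are placeholders):

# group_vars/osds.yml -- hypothetical lvm_volumes override
osd_objectstore: bluestore
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
  - data: data-lv2
    data_vg: data-vg1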


@@ -1,43 +1,14 @@
 ---
-- name: devices validation
-  block:
-    - name: validate devices is actually a device
-      parted:
-        device: "{{ item }}"
-        unit: MiB
-      register: devices_parted
-      with_items: "{{ devices }}"
-
-    - name: fail if one of the devices is not a device
-      fail:
-        msg: "{{ item }} is not a block special file!"
-      when:
-        - item.failed
-      with_items: "{{ devices_parted.results }}"
-  when:
-    - devices is defined
-
-- name: validate dedicated_device is/are actually device(s)
+- name: validate devices is actually a device
   parted:
     device: "{{ item }}"
     unit: MiB
-  register: dedicated_device_parted
-  with_items: "{{ dedicated_devices }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
+  register: devices_parted
+  with_items: "{{ devices }}"
 
-- name: fail if one of the dedicated_device is not a device
+- name: fail if one of the devices is not a device
   fail:
     msg: "{{ item }} is not a block special file!"
-  with_items: "{{ dedicated_device_parted.results }}"
   when:
-    - dedicated_devices|default([]) | length > 0
     - item.failed
-
-- name: fail if number of dedicated_devices is not equal to number of devices
-  fail:
-    msg: "Number of dedicated_devices must be equal to number of devices. dedicated_devices: {{ dedicated_devices | length }}, devices: {{ devices | length }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
-    - devices | length > 0
-    - dedicated_devices | length != devices | length
+  with_items: "{{ devices_parted.results }}"