From 41e55a840f12dc7089d906f2ef08c76925c9c3f5 Mon Sep 17 00:00:00 2001
From: Guillaume Abrioux
Date: Thu, 11 Apr 2019 10:08:22 +0200
Subject: [PATCH] osd: remove dedicated_devices variable

This variable was related to ceph-disk scenarios.
Since we are entirely dropping ceph-disk support as of stable-4.0, let's
remove this variable.

Signed-off-by: Guillaume Abrioux
(cherry picked from commit f0416c88922db59a769f792d716bd89526227209)
---
 docs/source/index.rst                       | 11 +++---
 docs/source/osds/scenarios.rst              | 29 ++++-----------
 group_vars/osds.yml.sample                  |  3 --
 roles/ceph-osd/defaults/main.yml            |  3 --
 roles/ceph-validate/tasks/check_devices.yml | 39 +++------------------
 5 files changed, 16 insertions(+), 69 deletions(-)

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 465f621b7..04ad5317e 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -201,20 +201,19 @@ At the most basic level you must tell ``ceph-ansible`` what version of Ceph you
 how you want your OSDs configured.
 
 To begin your configuration rename each file in ``group_vars/`` you wish to use so that it does not include
 the ``.sample`` at the end of the filename, uncomment the options you wish to change and provide your own value.
 
-An example configuration that deploys the upstream ``jewel`` version of Ceph with OSDs that have collocated journals would look like this in ``group_vars/all.yml``:
+An example configuration that deploys the upstream ``octopus`` version of Ceph with the lvm batch method would look like this in ``group_vars/all.yml``:
 
 .. code-block:: yaml
 
    ceph_origin: repository
    ceph_repository: community
-   ceph_stable_release: jewel
+   ceph_stable_release: octopus
    public_network: "192.168.3.0/24"
    cluster_network: "192.168.4.0/24"
    monitor_interface: eth1
    devices:
      - '/dev/sda'
      - '/dev/sdb'
-   osd_scenario: collocated
 
 The following config options are required to be changed on all installations but there could be other required
 options depending on your OSD scenario selection or other aspects of your cluster.
 
@@ -222,7 +221,6 @@ selection or other aspects of your cluster.
 
 - ``ceph_origin``
 - ``ceph_stable_release``
 - ``public_network``
-- ``osd_scenario``
 - ``monitor_interface`` or ``monitor_address``
 
@@ -264,9 +262,8 @@ Full documentation for configuring each of the Ceph daemon types are in the foll
 OSD Configuration
 -----------------
 
-OSD configuration is set by selecting an OSD scenario and providing the configuration needed for
-that scenario. Each scenario is different in it's requirements. Selecting your OSD scenario is done
-by setting the ``osd_scenario`` configuration option.
+OSD configuration used to be set by selecting an OSD scenario and providing the configuration needed for
+that scenario. As of nautilus in stable-4.0, the only scenario available is ``lvm``.
 
 .. toctree::
    :maxdepth: 1

diff --git a/docs/source/osds/scenarios.rst b/docs/source/osds/scenarios.rst
index a6ffda2e2..210b07797 100644
--- a/docs/source/osds/scenarios.rst
+++ b/docs/source/osds/scenarios.rst
@@ -1,24 +1,22 @@
 OSD Scenario
 ============
 
-As of stable-3.2, the following scenarios are not supported anymore since they are associated to ``ceph-disk``:
+As of stable-4.0, the following scenarios are not supported anymore since they are associated with ``ceph-disk``:
 
 * `collocated`
 * `non-collocated`
 
-``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
 
-Supported values for the required ``osd_scenario`` variable are:
+Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
+` that uses the ``ceph-volume`` provisioning tool. Any other
+scenario will cause deprecation warnings.
 
-At present (starting from stable-3.2), there is only one scenario, which defaults to ``lvm``, see:
+``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
+At present (starting from stable-4.0), there is only one scenario, which defaults to ``lvm``, see:
 
 * :ref:`lvm `
 
 So there is no need to configure ``osd_scenario`` anymore, it defaults to ``lvm``.
 
-Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
-` that uses the ``ceph-volume`` provisioning tool. Any other
-scenario will cause deprecation warnings.
-
 The ``lvm`` scenario mentionned above support both containerized and non-containerized cluster.
 As a reminder, deploying a containerized cluster can be done by setting ``containerized_deployment``
 to ``True``.
 
@@ -30,13 +28,7 @@ lvm
 
 This OSD scenario uses ``ceph-volume`` to create OSDs, primarily using LVM, and
 is only available when the Ceph release is luminous or newer.
-
-**It is the preferred method of provisioning OSDs.**
-
-It is enabled with the following setting::
-
-
-    osd_scenario: lvm
+It is automatically enabled.
 
 Other (optional) supported settings:
 
@@ -72,7 +64,6 @@ Ceph usage, the configuration would be:
 
 .. code-block:: yaml
 
-   osd_scenario: lvm
    devices:
      - /dev/sda
      - /dev/sdb
@@ -85,7 +76,6 @@ devices, for example:
 
 .. code-block:: yaml
 
-   osd_scenario: lvm
    devices:
      - /dev/sda
      - /dev/sdb
@@ -102,7 +92,6 @@ This option can also be used with ``osd_auto_discovery``, meaning that you do no
 
 .. code-block:: yaml
 
-   osd_scenario: lvm
    osd_auto_discovery: true
 
 Other (optional) supported settings:
@@ -176,7 +165,6 @@ Supported ``lvm_volumes`` configuration settings:
 .. code-block:: yaml
 
    osd_objectstore: bluestore
-   osd_scenario: lvm
    lvm_volumes:
      - data: /dev/sda
      - data: /dev/sdb
@@ -189,7 +177,6 @@ Supported ``lvm_volumes`` configuration settings:
 .. code-block:: yaml
 
    osd_objectstore: bluestore
-   osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: data-vg1
@@ -204,7 +191,6 @@ Supported ``lvm_volumes`` configuration settings:
 .. code-block:: yaml
 
    osd_objectstore: bluestore
-   osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: data-vg1
@@ -227,7 +213,6 @@ Supported ``lvm_volumes`` configuration settings:
 .. code-block:: yaml
 
    osd_objectstore: filestore
-   osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: data-vg1
diff --git a/group_vars/osds.yml.sample b/group_vars/osds.yml.sample
index f64cd524f..138f96c91 100644
--- a/group_vars/osds.yml.sample
+++ b/group_vars/osds.yml.sample
@@ -55,9 +55,6 @@ dummy:
 # If set to True, no matter which osd_objecstore you use the data will be encrypted
 #dmcrypt: False
 
-
-#dedicated_devices: []
-
 # Use ceph-volume to create OSDs from logical volumes.
 # lvm_volumes is a list of dictionaries.
 #
diff --git a/roles/ceph-osd/defaults/main.yml b/roles/ceph-osd/defaults/main.yml
index 545685c6f..66effee86 100644
--- a/roles/ceph-osd/defaults/main.yml
+++ b/roles/ceph-osd/defaults/main.yml
@@ -47,9 +47,6 @@ osd_auto_discovery: false
 # If set to True, no matter which osd_objecstore you use the data will be encrypted
 dmcrypt: False
 
-
-dedicated_devices: []
-
 # Use ceph-volume to create OSDs from logical volumes.
 # lvm_volumes is a list of dictionaries.
 #
diff --git a/roles/ceph-validate/tasks/check_devices.yml b/roles/ceph-validate/tasks/check_devices.yml
index f196e34fc..6d938499e 100644
--- a/roles/ceph-validate/tasks/check_devices.yml
+++ b/roles/ceph-validate/tasks/check_devices.yml
@@ -1,43 +1,14 @@
 ---
-- name: devices validation
-  block:
-    - name: validate devices is actually a device
-      parted:
-        device: "{{ item }}"
-        unit: MiB
-      register: devices_parted
-      with_items: "{{ devices }}"
-
-    - name: fail if one of the devices is not a device
-      fail:
-        msg: "{{ item }} is not a block special file!"
-      when:
-        - item.failed
-      with_items: "{{ devices_parted.results }}"
-  when:
-    - devices is defined
-
-- name: validate dedicated_device is/are actually device(s)
+- name: validate devices is actually a device
   parted:
     device: "{{ item }}"
     unit: MiB
-  register: dedicated_device_parted
-  with_items: "{{ dedicated_devices }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
+  register: devices_parted
+  with_items: "{{ devices }}"
 
-- name: fail if one of the dedicated_device is not a device
+- name: fail if one of the devices is not a device
   fail:
     msg: "{{ item }} is not a block special file!"
-  with_items: "{{ dedicated_device_parted.results }}"
   when:
-    - dedicated_devices|default([]) | length > 0
     - item.failed
-
-- name: fail if number of dedicated_devices is not equal to number of devices
-  fail:
-    msg: "Number of dedicated_devices must be equal to number of devices. dedicated_devices: {{ dedicated_devices | length }}, devices: {{ devices | length }}"
-  when:
-    - dedicated_devices|default([]) | length > 0
-    - devices | length > 0
-    - dedicated_devices | length != devices | length
\ No newline at end of file
+  with_items: "{{ devices_parted.results }}"
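
For reference, a minimal group_vars/osds.yml written against this change could look like the sketch
below. It is illustrative only: the device paths are assumptions, not values taken from this patch.
With osd_scenario and dedicated_devices gone, OSDs are described solely through "devices" (consumed by
ceph-volume lvm batch) or through "lvm_volumes" (pre-created logical volumes):

    ---
    # group_vars/osds.yml -- illustrative sketch, not part of this patch
    # Option 1: let ceph-volume "lvm batch" consume whole data disks.
    devices:
      - /dev/sdb   # assumed data disk
      - /dev/sdc   # assumed data disk

    # Option 2 (alternative to "devices"): point ceph-volume at pre-created LVs.
    # lvm_volumes:
    #   - data: data-lv1
    #     data_vg: data-vg1

    # Note: no "osd_scenario" and no "dedicated_devices" keys anymore; the lvm
    # scenario is the only one left and is selected implicitly.
    dmcrypt: False   # unchanged default from roles/ceph-osd/defaults/main.yml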