Merge pull request #2100 from ceph/lvm-bluestore

ceph-volume lvm bluestore support
pull/2122/head
Sébastien Han 2017-10-27 17:36:16 +02:00 committed by GitHub
commit faccd0acf0
11 changed files with 315 additions and 33 deletions

View File

@@ -170,14 +170,16 @@ is only available when the ceph release is Luminous or newer.
The creation of the logical volumes is not supported by ``ceph-ansible``; ``ceph-volume``
only creates OSDs from existing logical volumes.
Use ``osd_scenario: lvm`` to enable this scenario. Currently we only support dedicated journals
when using lvm, not collocated journals.
To configure this scenario use the ``lvm_volumes`` config option. ``lvm_volumes`` is a list of dictionaries which can
contain a ``data``, ``journal``, ``data_vg`` and ``journal_vg`` key. The ``data`` key represents the logical volume name that is to be used for your OSD
data. The ``journal`` key represents the logical volume name, device or partition that will be used for your OSD journal. The ``data_vg``
key represents the volume group name that your ``data`` logical volume resides on. This key is required for purging of OSDs created
by this scenario. The ``journal_vg`` key is optional and should be the volume group name that your journal lv resides on, if applicable.
``lvm_volumes`` is the config option that defines the mapping of devices to be
deployed. It is a list of dictionaries, each of which expects a volume name and
a volume group for logical volumes, but can also accept a device for the
``journal`` in the case of ``filestore``.
The ``data`` key represents the logical volume name to be used for your OSD
data. The ``data_vg`` key represents the volume group name that your ``data``
logical volume resides on. This key is required for purging of OSDs created by
this scenario.
.. note::

    Any logical volume or volume group used in ``lvm_volumes`` must be a name and not a path.
@@ -185,8 +187,30 @@ by this scenario. The ``journal_vg`` key is optional and should be the volume gr
.. note::

    You cannot use the same journal for many OSDs.
``filestore``
^^^^^^^^^^^^^
There is filestore support which can be enabled with::

    osd_objectstore: filestore
To configure this scenario, use the ``lvm_volumes`` config option.
``lvm_volumes`` is a list of dictionaries, each of which expects a volume name
and a volume group for logical volumes, but can also accept a device for the
``journal`` when using ``filestore``.
The following keys are accepted for a ``filestore`` deployment:

* ``data``
* ``data_vg``
* ``journal``
* ``journal_vg`` (not required if ``journal`` is a device and not a logical volume)

The ``journal`` key represents the logical volume name, device or partition that will be used for your OSD journal.
For example, a configuration to use the ``lvm`` osd scenario would look like::

    osd_objectstore: filestore
    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
@@ -199,3 +223,43 @@ For example, a configuration to use the ``lvm`` osd scenario would look like::
      - data: data-lv3
        journal: /dev/sdb1
        data_vg: vg2
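For reference, ``ceph-ansible`` passes each entry straight to ``ceph-volume``
(see the tasks later in this pull request). A rough sketch of the resulting
calls, using the ``data-lv3`` entry above and a hypothetical lv-backed journal
(``journal-lvX`` in ``vgX``), where ``{{ cluster }}`` is the configured cluster
name::

    ceph-volume --cluster {{ cluster }} lvm create --filestore --data vg2/data-lv3 --journal /dev/sdb1
    ceph-volume --cluster {{ cluster }} lvm create --filestore --data vgX/data-lvX --journal vgX/journal-lvX
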
``bluestore``
^^^^^^^^^^^^^
This scenario allows a combination of devices to be used in an OSD.
``bluestore`` can work with just a single "block" device (specified by
``data`` and ``data_vg``) or additionally with a ``block.wal``, a
``block.db``, or both.
The following keys are accepted for a ``bluestore`` deployment:

* ``data`` (required)
* ``data_vg`` (required)
* ``db`` (optional for ``block.db``)
* ``db_vg`` (optional for ``block.db``)
* ``wal`` (optional for ``block.wal``)
* ``wal_vg`` (optional for ``block.wal``)
A ``bluestore`` lvm deployment, for all four different combinations supported,
could look like::

    osd_objectstore: bluestore
    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
      - data: data-lv2
        data_vg: vg1
        wal: wal-lv1
        wal_vg: vg2
      - data: data-lv3
        data_vg: vg2
        db: db-lv1
        db_vg: vg2
      - data: data-lv4
        data_vg: vg4
        db: db-lv4
        db_vg: vg4
        wal: wal-lv4
        wal_vg: vg4
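Each of these four combinations maps to a different
``ceph-volume lvm create --bluestore`` invocation in the tasks added by this
pull request. As a rough sketch, the entries above would translate to calls
along these lines (``{{ cluster }}`` is the configured cluster name)::

    ceph-volume --cluster {{ cluster }} lvm create --bluestore --data vg1/data-lv1
    ceph-volume --cluster {{ cluster }} lvm create --bluestore --data vg1/data-lv2 --block.wal vg2/wal-lv1
    ceph-volume --cluster {{ cluster }} lvm create --bluestore --data vg2/data-lv3 --block.db vg2/db-lv1
    ceph-volume --cluster {{ cluster }} lvm create --bluestore --data vg4/data-lv4 --block.wal vg4/wal-lv4 --block.db vg4/db-lv4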

View File

@@ -197,14 +197,16 @@ dummy:
#bluestore_wal_devices: "{{ dedicated_devices }}"
# III. Use ceph-volume to create OSDs from logical volumes.
# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals
# Use 'osd_scenario: lvm' to enable this scenario.
# when using lvm, not collocated journals.
# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name
# key. Any logical volume or logical group used must be a name and not a path.
# data must be a logical volume
# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.
# lvm_volumes is a list of dictionaries.
#
# Filestore: Each dictionary must contain a data, journal and data_vg key. Any
# logical volume or volume group used must be a name and not a path. data
# must be a logical volume. journal can be either a lv, device or partition.
# You cannot use the same journal for many data lvs.
# data_vg must be the volume group name of the data lv
# journal_vg is optional and must be the volume group name of the journal lv, if applicable
# journal_vg is optional and must be the volume group name of the journal lv, if applicable.
# For example:
# lvm_volumes:
#   - data: data-lv1
@@ -217,6 +219,28 @@ dummy:
#   - data: data-lv3
#     journal: /dev/sdb1
#     data_vg: vg2
#
# Bluestore: Each dictionary must contain at least data. When defining wal or
# db, it must have both the lv name and its volume group (db and wal are not
# required). This allows for four combinations: just data, data and wal,
# data and wal and db, and data and db.
# For example:
# lvm_volumes:
#   - data: data-lv1
#     data_vg: vg1
#     wal: wal-lv1
#     wal_vg: vg1
#   - data: data-lv2
#     db: db-lv2
#     db_vg: vg2
#   - data: data-lv3
#     wal: wal-lv1
#     wal_vg: vg3
#     db: db-lv3
#     db_vg: vg3
#   - data: data-lv4
#     data_vg: vg4
#lvm_volumes: []

View File

@@ -189,14 +189,16 @@ dedicated_devices: []
bluestore_wal_devices: "{{ dedicated_devices }}"
# III. Use ceph-volume to create OSDs from logical volumes.
# Use 'osd_scenario: lvm' to enable this scenario. Currently we only support dedicated journals
# Use 'osd_scenario: lvm' to enable this scenario.
# when using lvm, not collocated journals.
# lvm_volumes is a list of dictionaries. Each dictionary must contain a data, journal and vg_name
# key. Any logical volume or logical group used must be a name and not a path.
# data must be a logical volume
# journal can be either a lv, device or partition. You can not use the same journal for many data lvs.
# lvm_volumes is a list of dictionaries.
#
# Filestore: Each dictionary must contain a data, journal and data_vg key. Any
# logical volume or volume group used must be a name and not a path. data
# must be a logical volume. journal can be either a lv, device or partition.
# You cannot use the same journal for many data lvs.
# data_vg must be the volume group name of the data lv
# journal_vg is optional and must be the volume group name of the journal lv, if applicable
# journal_vg is optional and must be the volume group name of the journal lv, if applicable.
# For example:
# lvm_volumes:
#   - data: data-lv1
@@ -209,6 +211,28 @@ bluestore_wal_devices: "{{ dedicated_devices }}"
#   - data: data-lv3
#     journal: /dev/sdb1
#     data_vg: vg2
#
# Bluestore: Each dictionary must contain at least data. When defining wal or
# db, it must have both the lv name and its volume group (db and wal are not
# required). This allows for four combinations: just data, data and wal,
# data and wal and db, and data and db.
# For example:
# lvm_volumes:
#   - data: data-lv1
#     data_vg: vg1
#     wal: wal-lv1
#     wal_vg: vg1
#   - data: data-lv2
#     db: db-lv2
#     db_vg: vg2
#   - data: data-lv3
#     wal: wal-lv1
#     wal_vg: vg3
#     db: db-lv3
#     db_vg: vg3
#   - data: data-lv4
#     data_vg: vg4
lvm_volumes: []

View File

@@ -56,16 +56,6 @@
    - osd_scenario == "lvm"
    - ceph_release_num[ceph_release] < ceph_release_num.luminous

- name: verify osd_objectstore is 'filestore' when using the lvm osd_scenario
  fail:
    msg: "the lvm osd_scenario currently only works for filestore, not bluestore"
  when:
    - osd_group_name is defined
    - osd_group_name in group_names
    - osd_scenario == "lvm"
    - not osd_auto_discovery
    - osd_objectstore != 'filestore'

- name: verify lvm_volumes have been provided
  fail:
    msg: "please provide lvm_volumes to your osd scenario"

View File

@@ -1,12 +1,80 @@
---
- name: list all lvm osds
  command: ceph-volume lvm list
  register: ceph_volume_lvm_list
  failed_when: False
  changed_when: False
  check_mode: no
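
# The list registered above drives idempotency: each "lvm create" task below is
# skipped when its "{{ item.data_vg }}/{{ item.data }}" pair already appears in
# the output of "ceph-volume lvm list", so re-running the play will not attempt
# to recreate existing OSDs.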
- name: use ceph-volume to create filestore osds with dedicated journals
  command: "ceph-volume lvm create --filestore --data {{ item.data_vg }}/{{ item.data }} --journal {{ item.journal }}"
  command: "ceph-volume --cluster {{ cluster }} lvm create --filestore --data {{ item.data_vg }}/{{ item.data }} --journal {{ item.journal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - item.journal_vg is not defined
    - osd_objectstore == 'filestore'
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create filestore osds with dedicated lv journals
  command: "ceph-volume lvm create --filestore --data {{ item.data_vg }}/{{ item.data }} --journal {{item.journal_vg }}/{{ item.journal }}"
  command: "ceph-volume --cluster {{ cluster }} lvm create --filestore --data {{ item.data_vg }}/{{ item.data }} --journal {{item.journal_vg }}/{{ item.journal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - item.journal_vg is defined
    - osd_objectstore == 'filestore'
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"
- name: use ceph-volume to create bluestore osds with db and wal
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data_vg }}/{{ item.data }} --block.wal {{ item.wal_vg }}/{{ item.wal }} --block.db {{ item.db_vg }}/{{ item.db }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is defined
    - item.wal_vg is defined
    - item.db is defined
    - item.db_vg is defined
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with db only
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data_vg }}/{{ item.data }} --block.db {{ item.db_vg }}/{{ item.db }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is not defined
    - item.wal_vg is not defined
    - item.db is defined
    - item.db_vg is defined
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with wal only
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data_vg }}/{{ item.data }} --block.wal {{ item.wal_vg }}/{{ item.wal }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is defined
    - item.wal_vg is defined
    - item.db is not defined
    - item.db_vg is not defined
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

- name: use ceph-volume to create bluestore osds with just a data device
  command: "ceph-volume --cluster {{ cluster }} lvm create --bluestore --data {{ item.data_vg }}/{{ item.data }}"
  environment:
    CEPH_VOLUME_DEBUG: 1
  with_items: "{{ lvm_volumes }}"
  when:
    - osd_objectstore == 'bluestore'
    - item.wal is not defined
    - item.wal_vg is not defined
    - item.db is not defined
    - item.db_vg is not defined
    - "'{{ item.data_vg }}/{{ item.data }}' not in ceph_volume_lvm_list.stdout"

View File

@@ -0,0 +1 @@
../../../../../Vagrantfile

View File

@@ -0,0 +1 @@
../cluster/ceph-override.json

View File

@@ -0,0 +1,25 @@
---
ceph_origin: repository
ceph_repository: community
cluster: test
public_network: "192.168.39.0/24"
cluster_network: "192.168.40.0/24"
monitor_interface: eth1
osd_objectstore: "bluestore"
osd_scenario: lvm
copy_admin_key: true
# test-volume is created by tests/functional/lvm_setup.yml from /dev/sdb
lvm_volumes:
  - data: data-lv1
    data_vg: test_group
  - data: data-lv2
    data_vg: test_group
    db: journal1
    db_vg: journals
os_tuning_params:
  - { name: kernel.pid_max, value: 4194303 }
  - { name: fs.file-max, value: 26234859 }
ceph_conf_overrides:
  global:
    osd_pool_default_size: 1

View File

@@ -0,0 +1,8 @@
[mons]
mon0
[mgrs]
mon0
[osds]
osd0

View File

@@ -0,0 +1,74 @@
---
# DEPLOY CONTAINERIZED DAEMONS
docker: false
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
rbd_mirror_vms: 0
client_vms: 0
iscsi_gw_vms: 0
mgr_vms: 0
# Deploy RESTAPI on each of the Monitors
restapi: true
# INSTALL SOURCE OF CEPH
# valid values are 'stable' and 'dev'
ceph_install_source: stable
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.39
cluster_subnet: 192.168.40
# MEMORY
# set 1024 for CentOS
memory: 512
# Ethernet interface name
# use eth1 for libvirt and ubuntu precise, enp0s8 for CentOS and ubuntu xenial
eth: 'eth1'
# Disks
# For libvirt use disks: "[ '/dev/vdb', '/dev/vdc' ]"
# For CentOS7 use disks: "[ '/dev/sda', '/dev/sdb' ]"
disks: "[ '/dev/sdb', '/dev/sdc' ]"
# VAGRANT BOX
# Ceph boxes are *strongly* suggested. They are under better control and will
# not get updated frequently unless required for build systems. These are (for
# now):
#
# * ceph/ubuntu-xenial
#
# Ubuntu: ceph/ubuntu-xenial bento/ubuntu-16.04 or ubuntu/trusty64 or ubuntu/wily64
# CentOS: bento/centos-7.1 or puppetlabs/centos-7.0-64-puppet
# libvirt CentOS: centos/7
# parallels Ubuntu: parallels/ubuntu-14.04
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
vagrant_box: centos/7
#ssh_private_key_path: "~/.ssh/id_rsa"
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
#vagrant_sync_dir: /home/vagrant/sync
vagrant_sync_dir: /vagrant
# Disables synced folder creation. Not needed for testing, will skip mounting
# the vagrant directory on the remote box regardless of the provider.
vagrant_disable_synced_folder: true
# VAGRANT URL
# This is a URL to download an image from an alternate location. vagrant_box
# above should be set to the filename of the image.
# Fedora virtualbox: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Fedora libvirt: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-libvirt.box
# vagrant_box_url: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
os_tuning_params:
  - { name: kernel.pid_max, value: 4194303 }
  - { name: fs.file-max, value: 26234859 }

View File

@@ -1,6 +1,6 @@
[tox]
envlist = {dev,jewel,luminous,rhcs}-{ansible2.2,ansible2.3}-{xenial_cluster,centos7_cluster,docker_cluster,update_cluster,cluster,update_docker_cluster,switch_to_containers,purge_filestore_osds_container,purge_filestore_osds_non_container,purge_cluster_non_container,purge_cluster_container}
{dev,luminous}-{ansible2.2,ansible2.3}-{filestore_osds_non_container,filestore_osds_container,bluestore_osds_container,bluestore_osds_non_container,lvm_osds,purge_lvm_osds,shrink_mon,shrink_osd,shrink_mon_container,shrink_osd_container,docker_cluster_collocation,purge_bluestore_osds_non_container,purge_bluestore_osds_container}
{dev,luminous}-{ansible2.2,ansible2.3}-{filestore_osds_non_container,filestore_osds_container,bluestore_osds_container,bluestore_osds_non_container,bluestore_lvm_osds,lvm_osds,purge_lvm_osds,shrink_mon,shrink_osd,shrink_mon_container,shrink_osd_container,docker_cluster_collocation,purge_bluestore_osds_non_container,purge_bluestore_osds_container}
skipsdist = True
@@ -151,6 +151,7 @@ setenv=
luminous: UPDATE_CEPH_STABLE_RELEASE = luminous
luminous: UPDATE_CEPH_DOCKER_IMAGE_TAG = tag-build-master-luminous-ubuntu-16.04
lvm_osds: CEPH_STABLE_RELEASE = luminous
bluestore_lvm_osds: CEPH_STABLE_RELEASE = luminous
deps=
ansible2.2: ansible==2.2.3
ansible2.3: ansible==2.3.1
@@ -183,6 +184,7 @@ changedir=
update_cluster: {toxinidir}/tests/functional/centos/7/cluster
switch_to_containers: {toxinidir}/tests/functional/centos/7/cluster
lvm_osds: {toxinidir}/tests/functional/centos/7/lvm-osds
bluestore_lvm_osds: {toxinidir}/tests/functional/centos/7/bs-lvm-osds
purge_lvm_osds: {toxinidir}/tests/functional/centos/7/lvm-osds
commands=
@@ -193,6 +195,7 @@ commands=
bash {toxinidir}/tests/scripts/generate_ssh_config.sh {changedir}
lvm_osds: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/lvm_setup.yml
bluestore_lvm_osds: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/lvm_setup.yml
purge_lvm_osds: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/lvm_setup.yml
rhcs: ansible-playbook -vv -i {changedir}/hosts {toxinidir}/tests/functional/rhcs_setup.yml --extra-vars "ceph_docker_registry={env:CEPH_DOCKER_REGISTRY:docker.io} repo_url={env:REPO_URL:} rhel7_repo_url={env:RHEL7_REPO_URL:}" --skip-tags "vagrant_setup"