docs: overall syntax improvements

* add some missing dots
* add/remove line breaks
* consistent use of shell prompt in console outputs
* fix block indents
* use code blocks

Signed-off-by: Christian Berendt <berendt@b1-systems.de>

parent 52d9d406b1
commit d168d81c16

Mailing list
------------
Please register with the mailing list at http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com.

IRC
---
Feel free to join us in the channel #ceph-ansible of the OFTC servers.

GitHub
------
The main GitHub account for the project is at https://github.com/ceph/ceph-ansible/.

Submit a patch
--------------

To start contributing just do:

.. code-block:: console

   $ git checkout -b my-working-branch
   $ # do your changes #
   $ git add -p

If your change impacts a variable file in a role such as ``roles/ceph-common/defaults/main.yml``, you need to generate a ``group_vars`` file:

.. code-block:: console

   $ ./generate_group_vars_sample.sh

You are finally ready to push your changes on GitHub:

.. code-block:: console

   $ git commit -s
   $ git push origin my-working-branch

Worked on a change and you don't want to resend a commit for a syntax fix?

.. code-block:: console

   $ # do your syntax change #
   $ git commit --amend
   $ git push -f origin my-working-branch

PR Testing
----------

Pull Request testing is handled by Jenkins.

All of the tests that are running are listed in the GitHub UI and will list their current status.

If a test fails and you'd like to rerun it, comment on your PR in the following format:

.. code-block:: none

   jenkins test $scenario_name

For example:

.. code-block:: none

   jenkins test luminous-ansible2.3-journal_collocation

Backporting changes
-------------------

All changes to the stable branches should land in master first, so we avoid
regressions.

Once this is done, one of the project maintainers will tag the tip of the
stable branch with your change. For example:

.. code-block:: console

   $ git checkout stable-3.0
   $ git pull --ff-only
   $ git tag v3.0.12
   $ git push origin v3.0.12

Glossary
========

ceph-ansible
============

GitHub
------
You can install directly from the source on GitHub by following these steps:

- Clone the repository:

  .. code-block:: console

     $ git clone https://github.com/ceph/ceph-ansible.git

- Next, you must decide which branch of ``ceph-ansible`` you wish to use. There
  are stable branches to choose from or you could use the master branch:

  .. code-block:: console

     $ git checkout $branch

- Next, use pip and the provided requirements.txt to install ansible and other
  needed python libraries:

  .. code-block:: console

     $ pip install -r requirements.txt

.. _ansible-on-rhel-family:

Ansible on RHEL and CentOS
--------------------------
You can acquire Ansible on RHEL and CentOS by installing from the `Ansible channel <https://access.redhat.com/articles/3174981>`_.

On RHEL:

.. code-block:: console

   $ subscription-manager repos --enable=rhel-7-server-ansible-2-rpms

(CentOS does not use subscription-manager and already has "Extras" enabled by default.)

.. code-block:: console

   $ sudo yum install ansible

Ansible on Ubuntu
-----------------

You can acquire Ansible on Ubuntu by using the `Ansible PPA <https://launchpad.net/~ansible/+archive/ubuntu/ansible>`_.

.. code-block:: console

   $ sudo add-apt-repository ppa:ansible/ansible
   $ sudo apt update
   $ sudo apt install ansible

Releases
========

The ``master`` branch should be considered experimental and used with caution.

- ``stable-3.1`` Support for ceph versions ``luminous`` and ``mimic``. This branch supports ansible versions
  ``2.4`` and ``2.5``.

- ``master`` Support for ceph versions ``luminous`` and ``mimic``. This branch supports ansible version ``2.5``.

Configuration and Usage
=======================

Inventory
---------

The ansible inventory file defines the hosts in your cluster and what roles each host plays in your ceph cluster. The default
location for an inventory file is ``/etc/ansible/hosts`` but this file can be placed anywhere and used with the ``-i`` flag of
``ansible-playbook``. An example inventory file would look like:

.. code-block:: ini

   [mons]
   mon1
   mon2
   mon3

   [osds]
   osd1
   osd2
   osd3
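
The inventory is then handed to ``ansible-playbook`` at run time. As a minimal sketch (assuming the inventory above was saved as a file named ``hosts`` and that a ``site.yml`` playbook was created from the provided ``site.yml.sample``), a run would look like:

.. code-block:: console

   $ ansible-playbook -i hosts site.yml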

.. note::

   For more information on ansible inventories please refer to the ansible documentation: http://docs.ansible.com/ansible/latest/intro_inventory.html

Playbook
--------

Configuration Validation
------------------------

The ``ceph-ansible`` project provides config validation through the ``ceph-validate`` role. If you are using one of the provided playbooks this role will
be run early in the deployment to ensure you've given ``ceph-ansible`` the correct config. This check only makes sure that you've provided the
proper config settings for your cluster, not that the values in them will produce a healthy cluster. For example, if you give an incorrect address for
``monitor_address`` then the mon will still fail to join the cluster.

An example of a validation failure might look like:

.. code-block:: console

   TASK [ceph-validate : validate provided configuration] *************************
   task path: /Users/andrewschoen/dev/ceph-ansible/roles/ceph-validate/tasks/main.yml:3
   Wednesday 02 May 2018  13:48:16 -0500 (0:00:06.984)       0:00:18.803 *********
   [ERROR]: [mon0] Validation failed for variable: osd_objectstore

   [ERROR]: [mon0] Given value for osd_objectstore: foo

   [ERROR]: [mon0] Reason: osd_objectstore must be either 'bluestore' or 'filestore'

   fatal: [mon0]: FAILED! => {
       "changed": false
   }
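
Here the failure is simply an invalid value. Setting the variable to one of the two values named in the error message, a sketch of the fix in ``group_vars/all.yml``, lets validation pass:

.. code-block:: yaml

   # one of the two accepted values from the error message above
   osd_objectstore: bluestore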

ceph-ansible - choose installation method
-----------------------------------------

.. note::

   For more information on setting group or host specific configuration refer to the ansible documentation: http://docs.ansible.com/ansible/latest/intro_inventory.html#splitting-out-host-and-group-specific-data

At the most basic level you must tell ``ceph-ansible`` what version of ceph you wish to install, the method of installation, your cluster's network settings and
how you want your OSDs configured. To begin your configuration, rename each file in ``group_vars/`` you wish to use so that it does not include the ``.sample``
at the end of the filename, uncomment the options you wish to change and provide your own value.

An example configuration that deploys the upstream ``jewel`` version of ceph with OSDs that have collocated journals would look like this in ``group_vars/all.yml``:

.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: community
   ceph_stable_release: jewel
   public_network: "192.168.3.0/24"
   cluster_network: "192.168.4.0/24"
   monitor_interface: eth1
   devices:
     - '/dev/sda'
     - '/dev/sdb'
   osd_scenario: collocated

The following config options are required to be changed on all installations but there could be other required options depending on your OSD scenario
selection or other aspects of your cluster.

The supported method for defining your ceph.conf is to use the ``ceph_conf_overrides`` variable … an INI format. This variable can be used to override sections already defined in ceph.conf (see: ``roles/ceph-config/templates/ceph.conf.j2``) or to provide
new configuration options. The following sections in ceph.conf are supported: [global], [mon], [osd], [mds] and [rgw].

An example:

.. code-block:: yaml

   ceph_conf_overrides:
     global:
       foo: 1234
       bar: 5678
     osd:
       osd_mkfs_type: ext4
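
For illustration, the override above should end up rendered in the generated ceph.conf roughly as the following INI sections (a sketch of the expected output, not captured from a real deployment):

.. code-block:: ini

   [global]
   foo = 1234
   bar = 5678

   [osd]
   osd_mkfs_type = ext4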

.. note::

   We will no longer accept pull requests that modify the ceph.conf template unless it helps the deployment. For simple configuration tweaks
   please use the ``ceph_conf_overrides`` variable.

Full documentation for configuring each of the ceph daemon types is in the following sections.

Defining an ``osd_scenario`` is mandatory for using ``ceph-ansible``.

collocated
----------

This OSD scenario uses ``ceph-disk`` to create OSDs with collocated journals
from raw devices.

… on the same device. The device will get 2 partitions:

- One for 'ceph block', 'ceph block.db' and 'ceph block.wal', called 'ceph block'

Example of what you will get:

.. code-block:: console

   [root@ceph-osd0 ~]# blkid /dev/sda*
   /dev/sda: PTTYPE="gpt"
   /dev/sda1: UUID="9c43e346-dd6e-431f-92d8-cbed4ccb25f6" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="749c71c9-ed8f-4930-82a7-a48a3bcdb1c7"
   /dev/sda2: PARTLABEL="ceph block" PARTUUID="e6ca3e1d-4702-4569-abfa-e285de328e9d"

An example of using the ``collocated`` OSD scenario with encryption would look like:

.. code-block:: yaml

   osd_scenario: collocated
   dmcrypt: true
   devices:
     - /dev/sda
     - /dev/sdb

non-collocated
--------------

… will be stored on different devices:

- 'ceph data' will be stored on the device listed in ``devices``
- 'ceph journal' will be stored on the device listed in ``dedicated_devices``

Let's take an example, imagine ``devices`` was declared like this:

.. code-block:: yaml

   devices:
     - /dev/sda
     - /dev/sdb
     - /dev/sdc
     - /dev/sdd

And ``dedicated_devices`` was declared like this:

.. code-block:: yaml

   dedicated_devices:
     - /dev/sdf
     - /dev/sdf
     - /dev/sdg
     - /dev/sdg

This will result in the following mapping:

- ``/dev/sda`` will have ``/dev/sdf1`` as journal

- ``/dev/sdb`` will have ``/dev/sdf2`` as a journal

- ``/dev/sdc`` will have ``/dev/sdg1`` as a journal

- ``/dev/sdd`` will have ``/dev/sdg2`` as a journal

.. warning::

   On a containerized scenario we only support a *single* journal
   for all the OSDs on a given machine. If you use more than one, bad things will happen.
   This is a limitation we plan to fix at some point.

If ``osd_objectstore: bluestore`` is enabled, both 'ceph block.db' and 'ceph block.wal' partitions will be stored
on a dedicated device.

So the following will happen:

- The devices listed in ``devices`` will get 2 partitions, one for 'block' and one for 'data'. 'data' is only 100MB
  big and does not store any of your data, it's just a bunch of Ceph metadata. 'block' will store all your actual data.

- The devices in ``dedicated_devices`` will get 1 partition for RocksDB DB, called 'block.db', and one for RocksDB WAL, called 'block.wal'.

By default ``dedicated_devices`` will represent block.db.

Example of what you will get:

.. code-block:: console

   [root@ceph-osd0 ~]# blkid /dev/sd*
   /dev/sda: PTTYPE="gpt"
   /dev/sda1: UUID="c6821801-2f21-4980-add0-b7fc8bd424d5" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="f2cc6fa8-5b41-4428-8d3f-6187453464d0"
   /dev/sda2: PARTLABEL="ceph block" PARTUUID="ea454807-983a-4cf2-899e-b2680643bc1c"
   /dev/sdb: PTTYPE="gpt"
   /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="af5b2d74-4c08-42cf-be57-7248c739e217"
   /dev/sdb2: PARTLABEL="ceph block.wal" PARTUUID="af3f8327-9aa9-4c2b-a497-cf0fe96d126a"

There is more device granularity for Bluestore ONLY if ``osd_objectstore: bluestore`` is enabled by setting the
``bluestore_wal_devices`` config option.

By default, if ``bluestore_wal_devices`` is empty, it will get the content of ``dedicated_devices``.
If set, then you will have a dedicated partition on a specific device for block.wal.

Example of what you will get:

.. code-block:: console

   [root@ceph-osd0 ~]# blkid /dev/sd*
   /dev/sda: PTTYPE="gpt"
   /dev/sda1: UUID="39241ae9-d119-4335-96b3-0898da8f45ce" TYPE="xfs" PARTLABEL="ceph data" PARTUUID="961e7313-bdb7-49e7-9ae7-077d65c4c669"
   /dev/sda2: PARTLABEL="ceph block" PARTUUID="bff8e54e-b780-4ece-aa16-3b2f2b8eb699"
   /dev/sdb: PTTYPE="gpt"
   /dev/sdb1: PARTLABEL="ceph block.db" PARTUUID="0734f6b6-cc94-49e9-93de-ba7e1d5b79e3"
   /dev/sdc: PTTYPE="gpt"
   /dev/sdc1: PARTLABEL="ceph block.wal" PARTUUID="824b84ba-6777-4272-bbbd-bfe2a25cecf3"

An example of using the ``non-collocated`` OSD scenario with encryption, bluestore and dedicated wal devices would look like:

.. code-block:: yaml

   osd_scenario: non-collocated
   osd_objectstore: bluestore
   dmcrypt: true
   devices:
     - /dev/sda
     - /dev/sdb
   dedicated_devices:
     - /dev/sdc
     - /dev/sdc
   bluestore_wal_devices:
     - /dev/sdd
     - /dev/sdd

lvm
---

This OSD scenario uses ``ceph-volume`` to create OSDs from logical volumes and
is only available when the ceph release is Luminous or newer.

.. note::

   The creation of the logical volumes is not supported by ``ceph-ansible``; ``ceph-volume``
   only creates OSDs from existing logical volumes.
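
This means the volume groups and logical volumes referenced by ``lvm_volumes`` must exist before the playbook runs. A hypothetical preparation with the standard LVM tools, where the names ``vg1`` and ``data-lv1`` are chosen only to match the examples below, could be:

.. code-block:: console

   $ vgcreate vg1 /dev/sda                 # volume group on a raw device
   $ lvcreate -n data-lv1 -l 100%FREE vg1  # logical volume to be used as OSD data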

``lvm_volumes`` is the config option that needs to be defined to configure the
mappings for devices to be deployed. It is a list of dictionaries which expects
a volume name and a volume group for logical volumes, but can also accept …

… OSD data. The ``data_vg`` key represents the volume group name that your
… created by this scenario.

.. note::

   Any logical volume or logical group used in ``lvm_volumes`` must be a name and not a path.

.. note::

   You can not use the same journal for many OSDs.

``filestore``
^^^^^^^^^^^^^
There is filestore support which can be enabled with:

.. code-block:: yaml

   osd_objectstore: filestore

To configure this scenario use the ``lvm_volumes`` config option.
``lvm_volumes`` is a list of dictionaries which expects a volume name and …

The following keys are accepted for a ``filestore`` deployment:

…

The ``journal`` key represents the logical volume name or partition that will be used for your OSD journal.

For example, a configuration to use the ``lvm`` osd scenario would look like:

.. code-block:: yaml

   osd_objectstore: filestore
   osd_scenario: lvm
   lvm_volumes:
     - data: data-lv1
       data_vg: vg1
       journal: journal-lv1
       journal_vg: vg2
       crush_device_class: foo
     - data: data-lv2
       journal: /dev/sda
       data_vg: vg1
     - data: data-lv3
       journal: /dev/sdb1
       data_vg: vg2
     - data: /dev/sda
       journal: /dev/sdb1
     - data: /dev/sda1
       journal: journal-lv1
       journal_vg: vg2

For example, a configuration to use the ``lvm`` osd scenario with encryption would look like:

.. code-block:: yaml

   osd_objectstore: filestore
   osd_scenario: lvm
   dmcrypt: True
   lvm_volumes:
     - data: data-lv1
       data_vg: vg1
       journal: journal-lv1
       journal_vg: vg2
       crush_device_class: foo

``bluestore``
^^^^^^^^^^^^^

This scenario allows a combination of devices to be used in an OSD.
``bluestore`` can work just with a single "block" device (specified by the
``data`` and optionally ``data_vg``) or additionally with a ``block.wal`` and ``block.db`` …

The following keys are accepted for a ``bluestore`` deployment:

…

* ``crush_device_class`` (optional, sets the crush device class for the OSD)

A ``bluestore`` lvm deployment, for all four different combinations supported,
could look like:

.. code-block:: yaml

   osd_objectstore: bluestore
   osd_scenario: lvm
   lvm_volumes:
     - data: data-lv1
       data_vg: vg1
       crush_device_class: foo
     - data: data-lv2
       data_vg: vg1
       wal: wal-lv1
       wal_vg: vg2
     - data: data-lv3
       data_vg: vg2
       db: db-lv1
       db_vg: vg2
     - data: data-lv4
       data_vg: vg4
       db: db-lv4
       db_vg: vg4
       wal: wal-lv4
       wal_vg: vg4
     - data: /dev/sda

… removes the need to solely rely on a CI system like Jenkins to verify
a behavior.

* **Getting started:**

  * :doc:`Running a Test Scenario <running>`
  * :ref:`dependencies`

* **Configuration and structure:**

  * :ref:`layout`
  * :ref:`test_files`
  * :ref:`scenario_files`
  * :ref:`scenario_wiring`

* **Adding or modifying tests:**

  * :ref:`test_conventions`
  * :ref:`testinfra`

* **Adding or modifying a scenario:**

  * :ref:`scenario_conventions`
  * :ref:`scenario_environment_configuration`
  * :ref:`scenario_ansible_configuration`

* **Custom/development repositories and packages:**

  * :ref:`tox_environment_variables`

Layout and conventions
----------------------

Test files and directories follow a few conventions, which makes it easy to
create (or expect) certain interactions between tests and scenarios.

All tests are in the ``tests`` directory. Scenarios are defined in
``tests/functional/`` and use the following convention for directory
structure:

.. code-block:: none

   tests/functional/<distro>/<distro version>/<scenario name>/

At the very least, a scenario will need these files: …

Conventions
-----------

Python test files (unlike scenarios) rely on paths to *map* where they belong. For
example, a file that should only test monitor nodes would live in
``ceph-ansible/tests/functional/tests/mon/``. Internally, the test runner …

.. _running_tests:

Running Tests
=============

Although tests run continuously in CI, a lot of effort was put into making it
easy to run in any environment, as long as a couple of requirements are met.

Dependencies
------------

There are some Python dependencies, which are listed in a ``requirements.txt``
file within the ``tests/`` directory. These are meant to be installed using
Python install tools (pip in this case):

.. code-block:: console

   pip install -r tests/requirements.txt

For virtualization, either libvirt or VirtualBox is needed (there is native
support from the harness for both). This makes the test harness even more …

… a configuration file (``tox.ini`` in this case at the root of the project).

For a thorough description of a scenario see :ref:`test_scenarios`.

To run a single scenario, make sure it is available (it should be defined in
``tox.ini``) by listing them:

.. code-block:: console

   tox -l

In this example, we will use the ``luminous-ansible2.4-xenial_cluster`` one. The
harness defaults to ``VirtualBox`` as the backend, so if you have that
installed in your system then this command should just work:

.. code-block:: console

   tox -e luminous-ansible2.4-xenial_cluster

And for libvirt it would be:

.. code-block:: console

   tox -e luminous-ansible2.4-xenial_cluster -- --provider=libvirt

.. warning::

   Depending on the type of scenario and resources available, running
   these tests locally on a personal computer can be very resource intensive.

.. note::

   Most test runs take between 20 and 40 minutes depending on system
   resources.

The command should bring up the machines needed for the test, provision them
with ceph-ansible, run the tests, and tear the whole environment down at the
end.

The output would look something similar to this trimmed version:

.. code-block:: console

   luminous-ansible2.4-xenial_cluster create: /Users/alfredo/python/upstream/ceph-ansible/.tox/luminous-ansible2.4-xenial_cluster
   luminous-ansible2.4-xenial_cluster installdeps: ansible==2.4.2, -r/Users/alfredo/python/upstream/ceph-ansible/tests/requirements.txt
   luminous-ansible2.4-xenial_cluster runtests: commands[0] | vagrant up --no-provision --provider=virtualbox
   Bringing machine 'client0' up with 'virtualbox' provider...
   Bringing machine 'rgw0' up with 'virtualbox' provider...
   Bringing machine 'mds0' up with 'virtualbox' provider...
   Bringing machine 'mon0' up with 'virtualbox' provider...
   Bringing machine 'mon1' up with 'virtualbox' provider...
   Bringing machine 'mon2' up with 'virtualbox' provider...
   Bringing machine 'osd0' up with 'virtualbox' provider...
   ...

After all the nodes are up, ceph-ansible will provision them, and run the
playbook(s):

.. code-block:: console

   ...
   PLAY RECAP *********************************************************************
   client0          : ok=4    changed=0    unreachable=0    failed=0
   mds0             : ok=4    changed=0    unreachable=0    failed=0
   mon0             : ok=4    changed=0    unreachable=0    failed=0
   mon1             : ok=4    changed=0    unreachable=0    failed=0
   mon2             : ok=4    changed=0    unreachable=0    failed=0
   osd0             : ok=4    changed=0    unreachable=0    failed=0
   rgw0             : ok=4    changed=0    unreachable=0    failed=0
   ...

Once the whole environment is running, the tests will be sent out to the
hosts, with output similar to this:

.. code-block:: console

   luminous-ansible2.4-xenial_cluster runtests: commands[4] | testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory=/Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster/hosts /Users/alfredo/python/upstream/ceph-ansible/tests/functional/tests
   ============================ test session starts ===========================
   platform darwin -- Python 2.7.8, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /Users/alfredo/python/upstream/ceph-ansible/.tox/luminous-ansible2.4-xenial_cluster/bin/python
   cachedir: ../../../../.cache
   rootdir: /Users/alfredo/python/upstream/ceph-ansible/tests, inifile: pytest.ini
   plugins: testinfra-1.5.4, xdist-1.15.0
   [gw0] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
   [gw1] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
   [gw2] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
   [gw3] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
   [gw0] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
   [gw1] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
   [gw2] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
   [gw3] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
   gw0 [154] / gw1 [154] / gw2 [154] / gw3 [154]
   scheduling tests via LoadScheduling

   ../../../tests/test_install.py::TestInstall::test_ceph_dir_exists[ansible:/mon0]
   ../../../tests/test_install.py::TestInstall::test_ceph_dir_is_a_directory[ansible:/mon0]
   ../../../tests/test_install.py::TestInstall::test_ceph_conf_is_a_file[ansible:/mon0]
   ../../../tests/test_install.py::TestInstall::test_ceph_dir_is_a_directory[ansible:/mon1]
   [gw2] PASSED ../../../tests/test_install.py::TestCephConf::test_ceph_config_has_mon_host_line[ansible:/mon0]
   ../../../tests/test_install.py::TestInstall::test_ceph_conf_exists[ansible:/mon1]
   [gw3] PASSED ../../../tests/test_install.py::TestCephConf::test_mon_host_line_has_correct_value[ansible:/mon0]
   ../../../tests/test_install.py::TestInstall::test_ceph_conf_is_a_file[ansible:/mon1]
   [gw1] PASSED ../../../tests/test_install.py::TestInstall::test_ceph_command_exists[ansible:/mon1]
   ../../../tests/test_install.py::TestCephConf::test_mon_host_line_has_correct_value[ansible:/mon1]
   ...

Finally the whole environment gets torn down:

.. code-block:: console

   luminous-ansible2.4-xenial_cluster runtests: commands[5] | vagrant destroy --force
   ==> osd0: Forcing shutdown of VM...
   ==> osd0: Destroying VM and associated drives...
   ==> mon2: Forcing shutdown of VM...
   ==> mon2: Destroying VM and associated drives...
   ==> mon1: Forcing shutdown of VM...
   ==> mon1: Destroying VM and associated drives...
   ==> mon0: Forcing shutdown of VM...
   ==> mon0: Destroying VM and associated drives...
   ==> mds0: Forcing shutdown of VM...
   ==> mds0: Destroying VM and associated drives...
   ==> rgw0: Forcing shutdown of VM...
   ==> rgw0: Destroying VM and associated drives...
   ==> client0: Forcing shutdown of VM...
   ==> client0: Destroying VM and associated drives...

And a brief summary of the scenario(s) that ran is displayed:

.. code-block:: console

   ________________________________________________ summary _________________________________________________
   luminous-ansible2.4-xenial_cluster: commands succeeded
   congratulations :)

.. _test_scenarios:

Test Scenarios
==============

… consumed by ``Vagrant`` when bringing up an environment.
This yaml file is loaded in the ``Vagrantfile`` so that the settings can be
used to bring up the boxes and pass some configuration to ansible when running.

.. note::

   The basic layout of a scenario is covered in :ref:`layout`.

There are just a handful of required files, this is the most basic layout.

There are just a handful of required files, these sections will cover the …

… to follow (most of them are 1 line settings).

* **docker**: (bool) Indicates if the scenario will deploy docker daemons

* **VMS**: (int) These integer values are just a count of how many machines will be
  needed. Each supported type is listed, defaulting to 0:

  .. code-block:: yaml

     mon_vms: 0
     osd_vms: 0
     mds_vms: 0
     rgw_vms: 0
     nfs_vms: 0
     rbd_mirror_vms: 0
     client_vms: 0
     iscsi_gw_vms: 0
     mgr_vms: 0

  For a deployment that needs 1 MON and 1 OSD, the list would look like:

  .. code-block:: yaml

     mon_vms: 1
     osd_vms: 1

* **RESTAPI**: (bool) Deploy RESTAPI on each of the monitor(s):

  .. code-block:: yaml

     restapi: true

* **CEPH SOURCE**: (string) indicate whether a ``dev`` or ``stable`` release is
  needed. A ``stable`` release will use the latest stable release of Ceph, …

* **SUBNETS**: These are used for configuring the network availability of each
  server that will be booted as well as being used as configuration for
  ceph-ansible (and eventually ceph). The two values that are **required**:

  .. code-block:: yaml

     public_subnet: 192.168.13
     cluster_subnet: 192.168.14

* **MEMORY**: Memory requirements (in megabytes) for each server, e.g.
  ``memory: 512``

… the public Vagrant boxes normalize the interface to ``eth1`` for all boxes,
making it easier to configure them with Ansible later.

.. warning::

   Do *not* change the interface from ``eth1`` unless absolutely
   certain that is needed for a box. Some tests that depend on that
   naming will fail.

* **disks**: The disks that will be created for each machine; for most
  environments the ``/dev/sd*`` style of disks will work, like: ``[ '/dev/sda', '/dev/sdb' ]``

The following aren't usually changed/enabled for tests, since they don't have
an impact; however, they are documented here for general knowledge in case they
are needed:

* **ssh_private_key_path**: The path to the ``id_rsa`` (or other private SSH
  key) that should be used to connect to these boxes.

* **vagrant_sync_dir**: what should be "synced" (made available on the new
  servers) from the host.

* **vagrant_disable_synced_folder**: (bool) when disabled, it will make
  booting machines faster because no files need to be synced over.

* **os_tuning_params**: These are passed onto ceph-ansible as part of the
  variables for "system tuning". These shouldn't be changed.

The ``hosts`` file should contain the hosts needed for the scenario. This might
seem a bit repetitive since machines are already defined in
:ref:`vagrant_variables` but it allows granular changes to hosts (for example
defining an interface vs. an IP on a monitor) which can help catch issues in
ceph-ansible configuration. For example:

.. code-block:: ini

   [mons]
   mon0 monitor_address=192.168.5.10
   mon1 monitor_address=192.168.5.11
   mon2 monitor_interface=eth1

.. _group_vars:

Scenario Wiring
---------------

Scenarios are just meant to provide the Ceph environment for testing, but they
do need to be defined in the ``tox.ini`` so that they are available to the test
framework. To see a list of available scenarios, the following command (run
from the root of the project) will list them, shortened for brevity:

.. code-block:: console

   $ tox -l
   ...
   luminous-ansible2.4-centos7_cluster
   ...

These scenarios are made from different variables; in the above command there
are 3:

* ``luminous``: the Ceph version to test
* ``ansible2.4``: the Ansible version to install
* ``centos7_cluster``: the name of the scenario

The last one is important in the *wiring up* of the scenario. It is a variable
that will define in what path the scenario lives. For example, the
``changedir`` section for ``centos7_cluster`` looks like:

.. code-block:: ini

   centos7_cluster: {toxinidir}/tests/functional/centos/7/cluster

The actual tests are written for specific daemon types, for all daemon types,
and for specific use cases (e.g. journal collocation); those have their own …

Fixtures are detected by name, so as long as the argument being used has the
same name, the fixture will be passed in (see `pytest fixtures`_ for more
in-depth examples). The code that follows shows a test method that will use the
``node`` fixture that contains useful information about a node in a ceph
cluster:

.. code-block:: python

   def test_ceph_conf(self, node):
       assert node['conf_path'] == "/etc/ceph/ceph.conf"

The test is naive (the configuration path might not exist remotely) but
explains how simple it is to "request" a fixture.

For remote execution, we can rely further on other fixtures (tests can have as
many fixtures as needed) like ``File``:

.. code-block:: python

   def test_ceph_config_has_inital_members_line(self, node, File):
       assert File(node["conf_path"]).contains("^mon initial members = .*$")

.. node: