ceph-ansible/tests
Dimitri Savineau dcd02e6494 ceph_volume: fix multiple db/wal devices
When using the lvm batch ceph-volume subcommand with dedicated devices
for bluestore (db/wal), the list of devices is converted to a string
instead of being extended via an iterable.
This worked with a single dedicated device, but with more than one the
ceph_volume module fails.

TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] **
fatal: [xxxxxx]: FAILED! => changed=true
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - --yes
  - --prepare
  - --osds-per-device
  - '4'
  - /dev/nvme2n1
  - /dev/nvme3n1
  - /dev/nvme4n1
  - /dev/nvme5n1
  - /dev/nvme6n1
  - --db-devices
  - /dev/nvme0n1 /dev/nvme1n1
  - --report
  - --format=json
  msg: non-zero return code
  rc: 2
  stderr: |2-
     stderr: lsblk: /dev/nvme0n1 /dev/nvme1n1: not a block device
     stderr: error: /dev/nvme0n1 /dev/nvme1n1: No such file or directory
     stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
                                 [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
                                 [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
                                 [--no-auto] [--bluestore] [--filestore]
                                 [--report] [--yes] [--format {json,pretty}]
                                 [--dmcrypt]
                                 [--crush-device-class CRUSH_DEVICE_CLASS]
                                 [--no-systemd]
                                 [--osds-per-device OSDS_PER_DEVICE]
                                 [--block-db-size BLOCK_DB_SIZE]
                                 [--block-wal-size BLOCK_WAL_SIZE]
                                 [--journal-size JOURNAL_SIZE] [--prepare]
                                 [--osd-ids [OSD_IDS [OSD_IDS ...]]]
                                 [DEVICES [DEVICES ...]]
    ceph-volume lvm batch: error: Unable to proceed with non-existing device: /dev/nvme0n1 /dev/nvme1n1

So the dedicated device list is treated as a single string rather than
as separate command line arguments.
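
A minimal sketch of the kind of fix, assuming the module builds the
ceph-volume command line as a Python list (names are illustrative; the
actual code in library/ceph_volume.py may differ):

    # Illustrative only: how joining vs extending changes the argv
    # passed to ceph-volume.
    db_devices = ['/dev/nvme0n1', '/dev/nvme1n1']

    # Broken: the list is collapsed into one string, so ceph-volume sees
    # a single non-existing device '/dev/nvme0n1 /dev/nvme1n1'.
    cmd = ['ceph-volume', '--cluster', 'ceph', 'lvm', 'batch', '--bluestore']
    cmd.append('--db-devices')
    cmd.append(' '.join(db_devices))

    # Fixed: extend the command with the iterable so each device becomes
    # its own argument.
    cmd = ['ceph-volume', '--cluster', 'ceph', 'lvm', 'batch', '--bluestore']
    cmd.append('--db-devices')
    cmd.extend(db_devices)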

This commit also adds the block_db_devices and wal_devices documentation
to the ceph_volume module.
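
A hedged sketch of what those entries could look like in the module's
DOCUMENTATION string (only the option names come from this commit; the
descriptions here are assumed, not the exact wording):

    # Assumed wording; only the option names are taken from the commit.
    DOCUMENTATION = """
    options:
        block_db_devices:
            description:
                - List of dedicated devices for bluestore db when using
                  the lvm batch subcommand.
            required: false
        wal_devices:
            description:
                - List of dedicated devices for bluestore wal when using
                  the lvm batch subcommand.
            required: false
    """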

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816713

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
(cherry picked from commit 760b6cd7b0)
2020-03-30 10:04:26 -04:00
functional        tests: add inventory host for 5.0 upgrade job  2020-03-26 11:23:23 +01:00
inventories       add a mdss group to the CLI testing inventory  2016-05-06 14:47:45 -05:00
library           ceph_volume: fix multiple db/wal devices  2020-03-30 10:04:26 -04:00
plugins/filter    move library/plugins tests files under tests dir  2019-10-28 15:54:31 +01:00
scripts           tests: add time command in vagrant_up.sh  2020-01-10 17:41:27 +01:00
README.md         WIP: first implementation of functional tests  2015-02-22 02:31:28 +01:00
README.rst        tests: create a README with some explanation on how to use the test harness  2016-11-04 13:59:33 -04:00
conftest.py       tests: use osd ids instead of device name in ooo_collocation  2019-10-23 17:17:24 +02:00
pytest.ini        tests: placeholder pytest.ini to define test root dir  2016-11-04 13:59:32 -04:00
requirements.txt  tests/requirements: bump testinfra  2020-03-09 13:33:44 +01:00

README.md

Functional tests

These playbooks aim to validate each Ceph component individually. Some of them require extra packages to be installed. Ideally, run these tests from a client machine or from the Ansible server.