This makes it possible to call `podman run` with custom bind-mounts.
The cephadm-adopt.yml playbook needs it for a very specific use case.
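As a rough illustration only (not the actual ceph-ansible code), the container
command prefix could accept extra bind-mounts along these lines; the
extra_mounts parameter name and the default mount list are assumptions:

def container_cmd(image, extra_mounts=None):
    # base podman invocation used for ceph-volume calls
    cmd = ['podman', 'run', '--rm', '--privileged',
           '-v', '/dev:/dev',
           '-v', '/etc/ceph:/etc/ceph:z',
           '-v', '/var/lib/ceph/:/var/lib/ceph/:z',
           '-v', '/var/log/ceph/:/var/log/ceph/:z']
    # each custom bind-mount becomes its own -v argument
    for mount in (extra_mounts or []):
        cmd.extend(['-v', mount])
    cmd.extend(['--entrypoint=ceph-volume', image])
    return cmd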
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2027411
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit b02d71c307)
This commit adds the support for zapping an osd by osd_id in the
ceph_volume module.
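For illustration only, zapping by OSD id could translate into ceph-volume
arguments along these lines (the helper name is hypothetical):

def zap_args(osd_id=None, data=None, destroy=True):
    args = ['lvm', 'zap']
    if destroy:
        args.append('--destroy')
    if osd_id is not None:
        # zap every device belonging to this OSD id
        args.extend(['--osd-id', str(osd_id)])
    elif data:
        # fall back to zapping an explicit device or logical volume
        args.append(data)
    return args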
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 70f1d6e2cd)
This commit makes the zap function idempotent, especially when using
lvm_volumes variable.
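A minimal sketch of the idea, assuming the module checks whether the logical
volume still carries Ceph metadata before zapping (helper names are
illustrative, not the actual implementation):

def zap_if_needed(module, cmd_prefix, data, data_vg=None):
    target = '{}/{}'.format(data_vg, data) if data_vg else data
    # `ceph-volume lvm list <target> --format json` returns an empty result
    # when there is nothing left to zap
    rc, out, err = module.run_command(cmd_prefix + ['lvm', 'list', target, '--format', 'json'])
    if rc != 0 or out.strip() in ('', '{}'):
        # nothing to do: report no change instead of failing on a second run
        module.exit_json(changed=False, stdout='skipping, {} is already zapped'.format(target))
    module.run_command(cmd_prefix + ['lvm', 'zap', '--destroy', target], check_rc=True)
    module.exit_json(changed=True)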
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1845668
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
In containerized deployments, the ceph_volume module always uses the
same container command prefix for all actions.
Instead of duplicating this code in all container tests, we can define it
once.
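One way to do that (sketched here with pytest, names and image illustrative)
is a single shared fixture that every container test reuses:

import pytest

@pytest.fixture
def container_cmd():
    # the prefix every containerized ceph_volume call starts with
    return ['podman', 'run', '--rm', '--privileged',
            '-v', '/dev:/dev',
            '--entrypoint=ceph-volume',
            'quay.io/ceph/daemon:latest-octopus']

def test_zap_builds_expected_command(container_cmd):
    cmd = container_cmd + ['--cluster', 'ceph', 'lvm', 'zap', '--destroy', '/dev/sdb']
    assert cmd[:2] == ['podman', 'run']
    assert cmd[-3:] == ['zap', '--destroy', '/dev/sdb']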
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
Since Ceph Octopus is Python 3 only, we no longer need to specify the max
open files with the container engine.
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
When using the lvm batch ceph-volume subcommand with dedicated devices
for filestore (journal) or bluestore (db/wal), the list of devices is
converted to a string instead of being extended via an iterable.
This was working with only one dedicated device, but with more than one
the ceph_volume module fails.
TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] **
fatal: [xxxxxx]: FAILED! => changed=true
cmd:
- ceph-volume
- --cluster
- ceph
- lvm
- batch
- --bluestore
- --yes
- --prepare
- --osds-per-device
- '4'
- /dev/nvme2n1
- /dev/nvme3n1
- /dev/nvme4n1
- /dev/nvme5n1
- /dev/nvme6n1
- --db-devices
- /dev/nvme0n1 /dev/nvme1n1
- --report
- --format=json
msg: non-zero return code
rc: 2
stderr: |2-
stderr: lsblk: /dev/nvme0n1 /dev/nvme1n1: not a block device
stderr: error: /dev/nvme0n1 /dev/nvme1n1: No such file or directory
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
[--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
[--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
[--no-auto] [--bluestore] [--filestore]
[--report] [--yes] [--format {json,pretty}]
[--dmcrypt]
[--crush-device-class CRUSH_DEVICE_CLASS]
[--no-systemd]
[--osds-per-device OSDS_PER_DEVICE]
[--block-db-size BLOCK_DB_SIZE]
[--block-wal-size BLOCK_WAL_SIZE]
[--journal-size JOURNAL_SIZE] [--prepare]
[--osd-ids [OSD_IDS [OSD_IDS ...]]]
[DEVICES [DEVICES ...]]
ceph-volume lvm batch: error: Unable to proceed with non-existing device: /dev/nvme0n1 /dev/nvme1n1
So the dedicated device list is passed as a single string instead of one
argument per device.
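Illustrated in a simplified form (not the module's exact code): joining the
dedicated devices into one argument produces the single string seen above,
while extending the argument list keeps one argv entry per device:

db_devices = ['/dev/nvme0n1', '/dev/nvme1n1']
args = ['lvm', 'batch', '--bluestore', '--db-devices']

# broken: the whole list ends up as a single argument
broken = args + [' '.join(db_devices)]   # ... '--db-devices', '/dev/nvme0n1 /dev/nvme1n1'
# fixed: each device is its own argument
fixed = args + db_devices                # ... '--db-devices', '/dev/nvme0n1', '/dev/nvme1n1'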
This commit also adds the journal_devices, block_db_devices and
wal_devices documentation to the ceph_volume module.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1816713
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
To avoid unnecessary Ansible warnings during playbook execution, we can
move the library and plugins test files under a different directory.
[WARNING]: Skipping plugin (plugins/filter/test_ipaddrs_in_ranges.py) as
it seems to be invalid:
cannot import name 'ipaddrs_in_ranges'
Closes: #4656
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>