Commit Graph

68 Commits (6c742376fd2b4ac4cab3430430136a5bd5a1145d)

Author SHA1 Message Date
Sébastien Han 5bbbce527e osd: do not do anything if the dev has a partition
Regardless of whether the partition is 'ceph' or something else, we don't
want to be as strict as checking for a particular partition.
If the drive has a partition, we simply don't do anything.

This solves the case where the server reboots and disks get a different
device node allocation. For example, prior to restarting the server
/dev/sda was an OSD, but now it's /dev/sdb and the other way around.
In such a scenario we would try to prepare the OSD and create a new
partition, so let's not mess around with devices that have partitions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1498303
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-13 19:11:15 +02:00
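For illustration only, a minimal sketch of the guard described in the commit above, assuming facts gathered by Ansible and a `devices` list; the task itself is hypothetical, not the role's actual task:

  - name: prepare OSDs only on devices without any existing partition (sketch)
    debug:
      msg: "would prepare {{ item }}"
    with_items: "{{ devices }}"
    when: ansible_devices[item | basename]['partitions'] | length == 0
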
Andrew Schoen 79473badfe ceph-osd: adds dmcrypt to the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-01-24 14:10:08 +01:00
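A hedged illustration of what enabling encryption for the lvm scenario might look like in group_vars; the `osd_scenario` value and the global `dmcrypt` flag are assumptions based on ceph-ansible variables of that era, not taken from this log:

  osd_scenario: lvm        # assumed scenario selector
  dmcrypt: true            # assumed global flag enabling dm-crypt for lvm OSDs
  lvm_volumes:
    - data: data_lv1
      data_vg: vg1
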
Andrew Schoen 6cbb56a3b6 ceph-osd: adds the crush_device_class param to the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-01-17 13:49:29 +01:00
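Likewise, a sketch of attaching a CRUSH device class in the lvm scenario; the placement of the key inside an lvm_volumes item is an assumption:

  lvm_volumes:
    - data: data_lv1
      data_vg: vg1
      crush_device_class: ssd   # placement of this key is assumed, not confirmed by this log
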
Andrew Schoen 788c3f351a ceph-osd: adds osd_objectstore to the name when using the ceph_volume module
This allows for easier debugging if verbosity is not set high enough.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Andrew Schoen 5e3d8dbf63 ceph-osd: use the cluster param with the ceph_volume module
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Andrew Schoen 423166f671 ceph-osd: use the new ceph_volume module for the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Andrew Schoen 3c604f1115 lvm: support --data as a raw device or partition in ceph-volume
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-11-15 09:36:17 -06:00
Andrew Schoen 04f02910a9 lvm: ensure the data_vg exists before using it
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-11-15 09:36:17 -06:00
Guillaume Abrioux d5dfc63c89 osd: fix automatic prepare when auto_discover
Use the `devices` variable instead of `ansible_devices`, otherwise we
are not using the devices that have been auto-discovered.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-08 10:20:44 +01:00
Sébastien Han d4ed9a2064 osd: enhance backward compatibility
During the initial implementation of this 'old' scenario we were hitting
https://github.com/moby/moby/issues/30341 without noticing and were
blindly using --rm; now that this is fixed, the prepare container
disappears and activation fails. I'm fixing this for old Jewel images.

This also fixes the machine reboot case where the Docker logs are
purged. In the old scenario, we now store the log locally in the same
directory as the ceph-osd-run.sh script.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-03 11:15:23 +01:00
Alfredo Deza 517a2b3feb ceph-osd skip lvm creation if they are already in use
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-27 11:33:54 -04:00
Alfredo Deza df05e63c10 ceph-osd use --cluster in ceph-volume calls
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 08:23:45 -04:00
Alfredo Deza 628d98a92c ceph-osd add the CEPH_VOLUME_DEBUG env var to all ceph-volume commands
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:50:22 -04:00
Alfredo Deza bbc3672253 ceph-osd: lvm support for bluestore
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:46:39 -04:00
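Taken together, the ceph-volume commits above amount to something like the following when the role shells out to ceph-volume; the task is an illustrative sketch (not the role's actual task), while --cluster, --bluestore, --data and the CEPH_VOLUME_DEBUG environment variable are standard ceph-volume options:

  - name: prepare a bluestore OSD with ceph-volume (sketch)
    command: >
      ceph-volume --cluster {{ cluster }} lvm prepare
      --bluestore --data {{ item.data_vg }}/{{ item.data }}
    environment:
      CEPH_VOLUME_DEBUG: "1"
    with_items: "{{ lvm_volumes }}"
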
Sébastien Han a53aa9e8b4 ci: new osd scenarios
This commit adds new OSD scenarios; it aims to simplify the CI setup and
bring better coverage of the OSD scenarios.
We decided to differentiate between filestore and bluestore, thinking
ahead to when filestore won't be supported anymore.
So we now have two classes of tests:

* Filestore
* Bluestore

In each of those classes we have container and non-container.
Then for each we test the following:

* collocated
* collocated dmcrypt
* non-collocated
* non-collocated dmcrypt
* auto discovery collocated
* auto discovery collocated dmcrypt

This gives us nice coverage and also reduces the footprint on the CI.
We are now at 4 scenarios, each containing 6 OSD VMs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-18 09:26:06 +02:00
Sébastien Han c693e95cbf purge-docker: rework device detection
We don't need "devices" and the other device variables anymore; the
playbook detects them for us.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-07 03:39:04 +02:00
Sébastien Han 3bd341f6c0 osd: container use id instead of dev name
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1494127
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-03 14:44:00 +02:00
Sébastien Han f67b47d056 Merge pull request #1882 from ceph/multi-journal
osd: drop support for device partition
2017-09-13 11:43:48 -06:00
Sébastien Han fdf924401f osd: drop support for device partition
We have been struggling with this; it's still broken and now breaking
other things too.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1490283
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-12 17:42:07 -06:00
Guillaume Abrioux 49ad8528e5 osd: refact include of `activate_osds.yml`
remove duplicate code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-12 16:53:11 -06:00
Sébastien Han 3753e6cfa7 ceph-osd: fix autodetection activation
Prior to this patch, the activation sequence for autodetection was
always skipped because we were asking to activate a device without
partitions, which doesn't make sense.

We also fix the way we look up a device: since the data partition is
always numbered 1, we take the min element of the dict.

Closes: https://github.com/ceph/ceph-ansible/issues/1782
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-07 17:47:37 +02:00
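To illustrate the 'min element of the dict' lookup mentioned above (the expression is a sketch, not the exact one from the patch): since the data partition is always numbered 1, taking the lowest key of the partitions fact yields the data partition.

  - name: show the data partition of each auto-detected device (sketch)
    debug:
      msg: "data partition: /dev/{{ ansible_devices[item | basename]['partitions'].keys() | list | min }}"
    with_items: "{{ devices }}"
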
Sébastien Han 1dd976d28e ceph-osd: do not re-prepare if already prepared
I forgot to re-add the partition check while refactoring the osd

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-05 09:51:57 +02:00
Andrew Schoen fcba9d17f0 ceph-osd: add support for --journal vg/lv for lvm osds
This also updates the tests

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-30 15:55:16 -05:00
Sébastien Han e0a264c7e9 osd: allow multi dedicated journals for containers
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=1475820
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-30 12:34:06 +02:00
Andrew Schoen 758c31b1cd ceph-osd: ceph-volume requires --data to be in vg/lv format
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 13:43:31 -05:00
Andrew Schoen 594d5e017a ceph-osd: restructure lvm_volumes variable for more flexibility
The lvm_volumes variable is now a list of dictionaries that represent
each OSD you'd like to deploy using ceph-volume. Each dictionary must
have the following keys: data, journal and data_vg. Each dictionary can
also optionally provide a journal_vg key.

The 'data' key is the lv name used for the OSD and the 'data_vg'
key is the vg name that the given lv resides on. The 'journal' key is
either an lv, device or partition. The 'journal_vg' key is optional and
must be the vg name for the journal lv, if given. This key is mainly used
for purging the journal lv when purge-cluster.yml is run.

For example:

  lvm_volumes:
    - data: data_lv1
      journal: journal_lv1
      data_vg: vg1
      journal_vg: vg2
    - data: data_lv2
      journal: /dev/sdc
      data_vg: vg1

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 10:14:14 -05:00
Andrew Schoen 61d63f8468 lvm-osds: make task name and files consistent
Removes capitalization and newlines to keep these files consistent in
style with the existing tasks.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:10 -05:00
Andrew Schoen b93794bed4 adds a new 'lvm_osds' osd scenario
This scenario will create OSDs using ceph-volume and is only available
in ceph releases greater than Luminous.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:09 -05:00
Sébastien Han 30991b1c0a osd: simplify scenarios
There are only two main scenarios now:

* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-03 10:20:39 +02:00
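For illustration, the two scenarios roughly reduce to group_vars like the following; the variable names (`osd_scenario`, `dedicated_devices`) are assumed from ceph-ansible variables of that era and are not confirmed by this log:

  # collocated: everything remains on the same devices
  osd_scenario: collocated
  osd_objectstore: bluestore
  devices:
    - /dev/sdb
    - /dev/sdc

  # non-collocated: dedicated devices for some of the components
  # osd_scenario: non-collocated
  # dedicated_devices:
  #   - /dev/sdd
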
Sébastien Han 33c1f0cb03 osd: refactor osd scenarios
We have multiple issues with ceph-disk's CLI with bluestore across Ceph
releases. This is mainly due to CLI changes in Luminous: Luminous
introduced --bluestore and --filestore options, which do not exist on
releases older than Luminous. The default store being bluestore on
Luminous, simply checking for the store is not enough, so we have to
build a specific command line for ceph-disk depending on the Ceph
version we are running and the desired object store.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-24 13:48:08 +02:00
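A rough sketch of the version-dependent flag construction described above; `ceph_release` and the `ceph_release_num` mapping are assumed helper variables, not shown in this log:

  - name: pick the ceph-disk objectstore flag only on Luminous or newer (sketch)
    set_fact:
      ceph_disk_store_flag: "{{ '--' + osd_objectstore if ceph_release_num[ceph_release] >= ceph_release_num['luminous'] else '' }}"

  # later, something along the lines of:
  # command: ceph-disk prepare {{ ceph_disk_store_flag }} {{ item }}
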
yanyx 7e56b5c531 ceph-osd: when ceph release >= luminous add --filestore config 2017-07-14 09:53:59 +08:00
Guillaume Abrioux 94c3756167 Tests: Add bluestore scenarios
Since we started testing against Luminous, we need to add more scenario
testing.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 15:02:32 +02:00
Guillaume Abrioux a517ab5583 Osd: Force filestore and bluestore usage
In Luminous, ceph-disk defaults to bluestore, so all our scenarios are
using bluestore; we need to force testing of both.

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 11:30:30 +02:00
Sébastien Han 7d657ac643 osd: ability to set db and wal to bluestore
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface for storing objects, but the scenarios more
or less remain the same.

If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices, you will get
an OSD running bluestore with a dedicated drive for the rocksdb DB, a
dedicated drive partition for the rocksdb WAL and a dedicated drive for
DATA.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 19:07:16 +02:00
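Condensed into hedged group_vars sketches, the three combinations described above would look roughly like this; device paths are placeholders and `raw_journal_devices` is an assumed companion variable of raw_multi_journal:

  # 1) DB and WAL partitions collocated with data on the same device
  osd_objectstore: bluestore
  journal_collocation: true
  devices:
    - /dev/sdb

  # 2) dedicated drive for the rocksdb DB, WAL and data collocated on 'devices'
  # osd_objectstore: bluestore
  # raw_multi_journal: true
  # raw_journal_devices:
  #   - /dev/sdf

  # 3) dedicated DB drive, dedicated WAL drive, data on 'devices'
  # osd_objectstore: bluestore
  # raw_multi_journal: true
  # raw_journal_devices:
  #   - /dev/sdf
  # bluestore_wal_devices:
  #   - /dev/sdg
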
Sébastien Han fc0e54c59e osd: remove redundant options to enable bluestore
There is no need for two variables to enable bluestore. Prior to this
patch, one had to do the following to activate bluestore:

osd_objectstore: bluestore
bluestore: true

Now you just need to set `osd_objectstore: bluestore`.

Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 18:22:03 +02:00
Gregory Meno eb0c83db5f remove osd directory scenario
Neither proof-of-concept clusters nor actual production clusters will ever want to use this. We also do not test it anywhere, for the same reason.

Signed-off-by: Gregory Meno <gmeno@redhat.com>
2017-04-21 15:50:32 -07:00
Sébastien Han 42ffe63017 osd: autodiscovery mode, use holders to detect device
As reported in
https://github.com/ceph/ceph-ansible/issues/1403, when devices are held
by LVM and `osd_auto_discovery` is set to true, it's not enough to check
for a partition count of 0, since Ansible does not report any partitions
for such devices.
This patch also looks at 'holders', which in the case of LVM corresponds
to the name of the PV. Now we also check for holders = 0.

Fixes: #1403

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-04-04 10:37:14 +02:00
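A condensed sketch of the combined condition, assuming the standard ansible_devices fact layout (partitions as a dict, holders as a list); the task itself is illustrative:

  - name: auto-discover only empty, unheld devices (sketch)
    debug:
      msg: "would use /dev/{{ item.key }}"
    with_dict: "{{ ansible_devices }}"
    when:
      - osd_auto_discovery | bool
      - item.value.partitions | length == 0
      - item.value.holders | length == 0
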
Sébastien Han 5578b9bc7b osd: clarify osd scenario prepare sequence
we now use the name of the scenario in the prepare task.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-01 13:59:35 +01:00
Guillaume Abrioux 76ddcbc271 Remove support of releases prior to Jewel.
According to #1216, we need to simplify the code by removing support
for anything before Jewel.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-01-31 11:00:54 +01:00
Shengjing Zhu a1b00e96db enable prepare osd with partition devices in raw_multi_journal
Address #895

Signed-off-by: Shengjing Zhu <zsj950618@gmail.com>
2016-12-15 22:03:38 +08:00
Sébastien Han a2fcd222d2 moving to ansible v2.2 compatibility
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-By: Julien Francoz julien@francoz.net
2016-11-04 10:09:38 +01:00
Andrew Schoen 4146edb3d2 raw_multi_journal is not required when using dmcrypt_dedicated_journal
Fixes: https://github.com/ceph/ceph-ansible/issues/1054

Signed-off-by: Andrew Schoen <aschoen@redhat.com>

Resolves: issue#1054
2016-10-28 11:12:55 -05:00
James Saint-Rossy 32f6ef7747 Merged with Upstream Master 2016-08-17 12:00:36 -04:00
James Saint-Rossy 35a26068ef Fixed quotes and removed combined_ prefix from variables that no longer need it 2016-08-16 17:49:30 -04:00
Sébastien Han 673f54a100 osd: fix collocation spelling and declare dmcrypt variables
* changed s/colocation/collocation/
* declare dmcrypt variable in ceph-common so the variables check does
not fail

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-10 10:34:23 +02:00
Sébastien Han 788b75efec ceph-osd: re-arrange osd scenario numbers
Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-08 15:55:12 +02:00
Sébastien Han 5978d55d22 ceph-osd: add dmcrypt scenario
add the ability to encrypt osd data store using dm-crypt

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-07-19 18:02:11 +02:00
Austin Brown efeb63c577 ceph-osd: Use the use_systemd fact when starting directory-scenario OSDs 2016-07-12 11:00:49 +09:00
Alfredo Deza 2fa57b4cdb ceph-osd: do not ignore ceph-disk errors in raw multi journal
Signed-off-by: Alfredo Deza <adeza@redhat.com>

Resolves: rhbz#1342117
2016-06-02 17:16:21 -04:00
Alfredo Deza d287959107 ceph-osd: do not ignore ceph-disk errors in journal_collocation
Signed-off-by: Alfredo Deza <adeza@redhat.com>

Resolves: rhbz#1342117
2016-06-02 17:16:05 -04:00