Commit Graph

52 Commits (2fb4981ca9ea604fe8eb905b83887325441818dd)

Author SHA1 Message Date
Sébastien Han 3bd341f6c0 osd: container use id instead of dev name
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1494127
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-03 14:44:00 +02:00
Sébastien Han f67b47d056 Merge pull request #1882 from ceph/multi-journal
osd: drop support for device partition
2017-09-13 11:43:48 -06:00
Sébastien Han fdf924401f osd: drop support for device partition
We have been struggling with this; it's still broken and now breaking
other things too.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1490283
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-12 17:42:07 -06:00
Guillaume Abrioux 49ad8528e5 osd: refact include of `activate_osds.yml`
remove duplicate code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-12 16:53:11 -06:00
Sébastien Han 3753e6cfa7 ceph-osd: fix autodetection activation
Prior to this patch, the activation sequence for autodetection was
always skipped because we were asking to activate a device without
partitions, which doesn't make sense.

We also fix the way we look up a device: since the data partition is
always numbered 1, we take the min element of the dict.
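
For illustration only, the lookup boils down to something like this (the
fact path and loop variable are illustrative, not the role's exact
expression):

  - set_fact:
      # the data partition is the lowest-numbered partition Ansible reports
      data_partition: "{{ ansible_devices[item].partitions.keys() | min }}"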

Closes: https://github.com/ceph/ceph-ansible/issues/1782
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-07 17:47:37 +02:00
Sébastien Han 1dd976d28e ceph-osd: do not re-prepare if already prepared
I forgot to re-add the partition check while refactoring the osd

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-05 09:51:57 +02:00
Andrew Schoen fcba9d17f0 ceph-osd: add support for --journal vg/lv for lvm osds
This also updates the tests

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-30 15:55:16 -05:00
Sébastien Han e0a264c7e9 osd: allow multi dedicated journals for containers
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=1475820
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-30 12:34:06 +02:00
Andrew Schoen 758c31b1cd ceph-osd: ceph-volume requires --data to be in vg/lv format
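For example, the resulting call is built roughly like this (the task
shape is illustrative):

  - command: ceph-volume lvm prepare --data {{ item.data_vg }}/{{ item.data }}
    with_items: "{{ lvm_volumes }}"
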
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 13:43:31 -05:00
Andrew Schoen 594d5e017a ceph-osd: restructure lvm_volumes variable for more flexibility
The lvm_volumes variable is now a list of dictionaries, one per OSD
you'd like to deploy using ceph-volume. Each dictionary must have the
following keys: data, journal and data_vg. Each dictionary can also
optionally provide a journal_vg key.

The 'data' key is the lv name used for the OSD and the 'data_vg' key is
the name of the vg the given lv resides on. The 'journal' key is either
an lv, a device or a partition. The 'journal_vg' key is optional and
must be the vg name of the journal lv, if given. This key is mainly used
to purge the journal lv when purge-cluster.yml is run.

For example:

  lvm_volumes:
    - data: data_lv1
      journal: journal_lv1
      data_vg: vg1
      journal_vg: vg2
    - data: data_lv2
      journal: /dev/sdc
      data_vg: vg1

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 10:14:14 -05:00
Andrew Schoen 61d63f8468 lvm-osds: make task name and files consistent
Removes capitalization and newlines to keep these files consistent in
style with the existing tasks.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:10 -05:00
Andrew Schoen b93794bed4 adds a new 'lvm_osds' osd scenario
This scenario will create OSDs using ceph-volume and is only available
with Ceph releases of Luminous or newer.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:09 -05:00
Sébastien Han 30991b1c0a osd: simplify scenarios
There are only two main scenarios now:

* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components
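
For example, a group_vars sketch of each scenario (the `osd_scenario`
and `dedicated_devices` names and the device paths are assumptions,
shown for illustration only):

  # collocated: everything on the same device
  osd_scenario: collocated
  devices:
    - /dev/sdb
    - /dev/sdc

  # non-collocated: some components on dedicated devices
  # osd_scenario: non-collocated
  # dedicated_devices:
  #   - /dev/sdf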

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-03 10:20:39 +02:00
Sébastien Han 33c1f0cb03 osd: refactor osd scenarios
We have multiple issues with ceph-disk's cli across bluestore and Ceph
releases. This is mainly due to cli changes in Luminous: Luminous
introduced --bluestore and --filestore options, which do not exist on
releases older than Luminous. Since the default store is bluestore on
Luminous, simply checking for the store is not enough, so we have to
build a specific command line for ceph-disk depending on the Ceph
version we are running and the desired osd_store.
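
Roughly, the idea is to build the options conditionally, along these
lines (variable and fact names are illustrative, not the role's exact
ones):

  - set_fact:
      ceph_disk_cli_options: "--{{ osd_objectstore }}"
    when: ceph_release_num[ceph_release] >= ceph_release_num.luminous

  - set_fact:
      ceph_disk_cli_options: ""   # pre-Luminous ceph-disk has no store flag
    when: ceph_release_num[ceph_release] < ceph_release_num.luminous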

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-24 13:48:08 +02:00
yanyx 7e56b5c531 ceph-osd: when ceph release >= luminous add --filestore config 2017-07-14 09:53:59 +08:00
Guillaume Abrioux 94c3756167 Tests: Add bluestore scenarios
Since we started testing against Luminous, we need to add more testing
scenarios.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 15:02:32 +02:00
Guillaume Abrioux a517ab5583 Osd: Force filestore and bluestore usage
In Luminous, ceph-disk defaults to bluestore, so all our scenarios use
bluestore; we need to force testing both.

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 11:30:30 +02:00
Sébastien Han 7d657ac643 osd: ability to set db and wal to bluestore
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much, so this commit eases
the user experience by not changing the way you use scenarios. Bluestore
is just a different interface for storing objects, but the scenarios
more or less remain the same.

If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for rocksdb db, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
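
For example, the last combination could look like this in group_vars
(device paths are illustrative, and `raw_journal_devices` is assumed to
be the dedicated-journal device list used with raw_multi_journal):

  osd_objectstore: bluestore
  raw_multi_journal: true
  devices:
    - /dev/sdb            # DATA
  raw_journal_devices:
    - /dev/sdf            # rocksdb DB
  bluestore_wal_devices:
    - /dev/nvme0n1        # rocksdb WAL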

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 19:07:16 +02:00
Sébastien Han fc0e54c59e osd: remove redundant options to enable bluestore
There is no need for two variables to enable bluestore. Prior to this
patch, one had to set the following to activate bluestore:

osd_objectstore: bluestore
bluestore: true

Now you just need to set `osd_objectstore: bluestore`.

Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 18:22:03 +02:00
Gregory Meno eb0c83db5f remove osd directory scenario
Neither proof-of-concept clusters nor actual production clusters will ever want to use this. We also do not test it anywhere, for the same reason.

Signed-off-by: Gregory Meno <gmeno@redhat.com>
2017-04-21 15:50:32 -07:00
Sébastien Han 42ffe63017 osd: autodiscovery mode, use holders to detect device
As reported in
https://github.com/ceph/ceph-ansible/issues/1403, when devices are held
by lvm and `osd_auto_discovery` is set to true, it's not enough to check
for a partition count = 0, since Ansible does not report any partitions
for such devices.
This patch also looks at 'holders', which in the case of lvm corresponds
to the name of the pv. Now we also check for holders = 0.
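
For illustration, the autodiscovery guard now amounts to something like
this (the loop shape is illustrative):

  - command: ceph-disk prepare /dev/{{ item.key }}
    with_dict: "{{ ansible_devices }}"
    when:
      - item.value.partitions | length == 0
      - item.value.holders | length == 0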

Fixes: #1403

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-04-04 10:37:14 +02:00
Sébastien Han 5578b9bc7b osd: clarify osd scenario prepare sequence
we now use the name of the scenario in the prepare task.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-01 13:59:35 +01:00
Guillaume Abrioux 76ddcbc271 Remove support of releases prior to Jewel.
According to #1216, we need to simplify the code by removing support
for anything before Jewel.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-01-31 11:00:54 +01:00
Shengjing Zhu a1b00e96db enable prepare osd with partition devices in raw_multi_journal
Address #895

Signed-off-by: Shengjing Zhu <zsj950618@gmail.com>
2016-12-15 22:03:38 +08:00
Sébastien Han a2fcd222d2 moving to ansible v2.2 compatibility
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-By: Julien Francoz julien@francoz.net
2016-11-04 10:09:38 +01:00
Andrew Schoen 4146edb3d2 raw_multi_journal is not required when using dmcrypt_dedicated_journal
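A configuration like the following is therefore accepted now (sketch
only; device paths are illustrative):

  dmcrypt_dedicated_journal: true
  devices:
    - /dev/sdb
  # raw_multi_journal no longer has to be set for this scenario
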
Fixes: https://github.com/ceph/ceph-ansible/issues/1054

Signed-off-by: Andrew Schoen <aschoen@redhat.com>

Resolves: issue#1054
2016-10-28 11:12:55 -05:00
James Saint-Rossy 32f6ef7747 Merged with Upstream Master 2016-08-17 12:00:36 -04:00
James Saint-Rossy 35a26068ef Fixed quotes and removed combined_ prefix from variables that no longer need it 2016-08-16 17:49:30 -04:00
Sébastien Han 673f54a100 osd: fix collocation spelling and declare dmcrypt variables
* changed s/colocation/collocation/
* declare dmcrypt variable in ceph-common so the variables check does
not fail

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-10 10:34:23 +02:00
Sébastien Han 788b75efec ceph-osd: re-arrange osd scenario numbers
Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-08 15:55:12 +02:00
Sébastien Han 5978d55d22 ceph-osd: add dmcrypt scenario
add the ability to encrypt osd data store using dm-crypt

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-07-19 18:02:11 +02:00
Austin Brown efeb63c577 ceph-osd: Use the use_systemd fact when starting directory-scenario OSDs 2016-07-12 11:00:49 +09:00
Alfredo Deza 2fa57b4cdb ceph-osd: do not ignore ceph-disk errors in raw multi journal
Signed-off-by: Alfredo Deza <adeza@redhat.com>

Resolves: rhbz#1342117
2016-06-02 17:16:21 -04:00
Alfredo Deza d287959107 ceph-osd: do not ignore ceph-disk errors in journal_collocation
Signed-off-by: Alfredo Deza <adeza@redhat.com>

Resolves: rhbz#1342117
2016-06-02 17:16:05 -04:00
Alfredo Deza 195ee33dcf ceph-osd: do not ignore ceph-disk errors in bluestore
Signed-off-by: Alfredo Deza <adeza@redhat.com>

Resolves: rhbz#1342117
2016-06-02 17:15:35 -04:00
Alfredo Deza 52f73f30c5 ceph-osd: fail when ceph-disk fails to prepare an OSD
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2016-05-18 08:09:26 -04:00
Sam Yaple 069c93a238 Unify formatting of when conditional
This is purely a refactor. It converts 'and' conditionals in when
statements into lists rather than multiline strings. This does not work
for nested conditionals, but those can be formatted with indents.

It also moves one-line when statements onto the same line as the when
keyword itself.
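
For example (the conditions themselves are illustrative):

  # before
  - debug: msg=ok
    when: osd_objectstore == 'bluestore' and
          osd_auto_discovery

  # after
  - debug: msg=ok
    when:
      - osd_objectstore == 'bluestore'
      - osd_auto_discovery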

A small logic bug was found in ceph-osd/tasks/check_devices.yml, which
was also fixed.

Signed-off-by: Sam Yaple <sam@yaple.net>
2016-05-09 14:08:33 +00:00
Matt Thompson ed2b7757d4 Set init_system fact and reference in ceph-osd role
The ceph-osd role currently uses ansible_service_mgr, which is a fact
only available on ansible 2.x and greater. This commit sets a similar
fact called init_system, which stores the contents of /proc/1/comm
(systemd, init, etc.), and then references it in ceph-osd instead.
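
Roughly (a sketch; the registered variable name is illustrative):

  - name: check the init system
    command: cat /proc/1/comm
    register: init_output
    changed_when: false

  - set_fact:
      init_system: "{{ init_output.stdout }}"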

Closes #741
2016-05-04 12:05:31 +01:00
Sébastien Han 450feaac0a ceph: implement cluster name support
we can now set the `cluster` variable to a specific value that will
determine the name of the cluster.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-03-30 18:08:38 +02:00
Sébastien Han 225e066db2 ceph-osd: add support for bluestore
With Jewel comes a new store for Ceph objects: BlueStore. Adding an
extra scenario might seem like useless duplication; however, the
ultimate goal is to remove the other roles later. Thus it is easier to
add a new role than to modify an existing one. Once we drop support
for releases older than Jewel, we will just remove all the previous
scenario files.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-03-25 16:02:02 +01:00
Derek Anderson 210e4d4a01 Added an additional task for starting/enabling a service based on a systemd target if systemd is available; otherwise the init script is used. 2016-03-07 18:04:39 -05:00
Chris St. Pierre 8dcc802976 Do not prepare skipped disks
When autodiscovering disks, disks can be skipped if either they are
removable, or if they have partitions on them. Skipped actions have no
'rc' attribute, though, so the 'ceph prepare' conditional fails unless
we first check to ensure that the results were not skipped before
checking the return value.
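
In other words, the prepare guard becomes something like this (the
surrounding loop and the exact return-code test are illustrative):

  - command: ceph-disk prepare {{ item.item }}
    with_items: "{{ parted_results.results }}"    # per-disk check results (name illustrative)
    when:
      - not item.get('skipped', false)            # skipped disks have no 'rc' at all
      - item.rc is defined and item.rc != 0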
2016-02-24 14:50:37 -06:00
Sébastien Han 64c458bfcf ceph-osd: fix register variable
as stated in https://github.com/ansible/ansible/issues/4297,
if we register the same variable from two tasks, the value still gets
overwritten even when one of the tasks is skipped... So we use the fact
variant as mentioned in the ansible issue.
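
For illustration, the fact variant looks something like this (names are
illustrative; activate_output stands for the registered result of one
branch):

  # each branch records its outcome in a fact instead of re-registering
  # the same variable, so a skipped branch cannot clobber the value
  - set_fact:
      osd_activated: true
    when: not activate_output.get('skipped', false)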

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-02-12 00:54:09 +01:00
Sébastien Han ea0979cbbe ceph-osd: fix the auto discovery scenario
While this is not widely used (AFAIK :p) the feature was broken. Thanks
to @zmc for reporting it. You can now set `osd_auto_discovery` to
true in your group_vars/osd and it will go through all the available
devices and make them OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-02-11 22:43:09 +01:00
Guillaume Abrioux dcec63adc8 Refact code using `set_fact`
At the moment, all the tasks using the file module are duplicated to set different ownerships depending on the fact `is_ceph_infernalis`.
The goal of this commit is to introduce a new logic for this:
- First, set facts depending on the `is_ceph_infernalis` fact
- Create the files or directories using those facts as ownerships.
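
Roughly (fact names and the path are illustrative):

  - set_fact:
      dir_owner: "{{ 'ceph' if is_ceph_infernalis else 'root' }}"
      dir_group: "{{ 'ceph' if is_ceph_infernalis else 'root' }}"

  - file:
      path: /var/lib/ceph/osd
      state: directory
      owner: "{{ dir_owner }}"
      group: "{{ dir_group }}"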
2016-02-05 16:14:01 +01:00
Michael Sambol 48b1cf3be2 Refactor osd role 2015-10-18 20:24:47 -05:00
Sébastien Han 68248a266b Remove the disk zap function
This will likely break something one day or another. If ceph-disk
complains about a disk, just use the purge-cluster.yml playbook first,
as it will wipe all the devices.

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-06 17:24:21 +02:00
Sébastien Han f671e91e61 Fix the sudoer template
+ cleanup the docker.yml from OSD.

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-03 23:53:08 +02:00
Michael Sambol e6f22b948c Failed_when instead of ignore_errors
Changed ignore_errors to failed_when so the output doesn't show in
red.
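
For example (the command and the failure condition are illustrative):

  # before: ignore_errors: true  -> expected failures still render in red
  # after:
  - command: ceph-disk activate {{ item }}
    register: result
    failed_when: result.rc not in [0, 1]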
2015-07-29 13:35:46 -05:00
Sébastien Han 220e07e842 Fix wrong condition
We obviously want to fetch when the files exist :).

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-07-27 17:48:04 +02:00