Commit Graph

66 Commits (d6bc8e3b9ae5d4561e55fea49b8adef16f08e438)

Author SHA1 Message Date
Seena Fallah f90d5d8b4c ceph-config: drop osd_memory_target from ceph_conf_overrides
Since it is always set in the ceph.conf template, defining osd_memory_target in ceph_conf_overrides as well leads to duplicated osd_memory_target keys in the rendered ceph.conf.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2023-06-01 10:00:04 +02:00
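
A minimal sketch of the override that used to trigger the duplication, assuming a hypothetical group_vars entry:

```yaml
# group_vars/osds.yml (hypothetical values)
# The ceph.conf template already renders osd_memory_target, so before
# this change the override below produced a second, duplicate key.
ceph_conf_overrides:
  osd:
    osd_memory_target: 4294967296
```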
Seena Fallah 225ae38ee2 ceph-config: exclude already counted osds by lvm_volumes
Exclude disks defined in lvm_volumes from the existing-OSD count, since they have already been counted by the "count number of osds for lvm scenario" task.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2023-03-17 16:05:34 +01:00
Seena Fallah 80b1ed9d4a devices: allow using lvm_volumes with devices
* Exclude devices listed in lvm_volumes when osd_auto_discovery is true
* Sum num_osds across both the lvm_volumes and devices lists

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2023-03-17 16:05:34 +01:00
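
A hedged sketch of the summing in the second bullet; the task name and variable handling are illustrative, not the exact code in the role:

```yaml
# Illustrative: count OSDs from both the devices and lvm_volumes lists.
- name: sum num_osds over devices and lvm_volumes
  set_fact:
    num_osds: "{{ (devices | default([]) | length | int) * (osds_per_device | default(1) | int) + (lvm_volumes | default([]) | length | int) }}"
```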
Guillaume Abrioux 15b91cef90 osd: drop filestore support
filestore is about to be removed. This commit removes filestore
support from ceph-ansible.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2023-03-03 15:00:29 +01:00
Guillaume Abrioux e47288ef6c ceph-config: make sure rgw_instances is set
We need to make sure `rgw_instances` is set before `ceph.conf` is
rendered. Otherwise, the `ceph-crash` play in the main playbook updates
(via ceph-handler) the `ceph.conf` on rgw nodes and removes rgw instances
sections.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2141604

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2023-02-07 10:56:06 +01:00
Seena Fallah 613773b2a3 ceph-config: fix overriding osd_memory_target
When the value is overridden in `ceph_conf_overrides`, there is no need to calculate and set `osd_memory_target` again, since we want the override to take precedence.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2022-10-10 10:58:47 +02:00
Seena Fallah 57b0890aff ceph-config: don't check for devices on existing osds
When osd_auto_discovery is true, the `devices` var will be empty (as the disks have holders).
Also, in general there is no need to check for devices before listing them with ceph-volume, since we apply `default({})` to the stdout in the `num_osds` set_fact in the next task.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2022-10-10 10:58:47 +02:00
Seena Fallah ac4dfa7526 ceph-config: always set _osd_memory_target
This should be set when rolling_update is true as well; otherwise, it will reset to the default during the upgrade.

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2022-10-10 10:58:47 +02:00
Guillaume Abrioux b40e4bfe60 ceph-config: allow overriding osd_memory_target
If `osd_memory_target` is set in group_vars, the default value (4GB)
should be overridden.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2118544

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-08-16 15:39:25 +02:00
Guillaume Abrioux f19dcb266a config: use osd_memory_target value from ceph_conf_overrides if defined
Otherwise it's impossible to override `osd_memory_target`
via `ceph_conf_overrides`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2056675

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-08-10 16:32:21 +02:00
Guillaume Abrioux e2076e439b config: do not always set _osd_memory_target
When `osd_memory_target` is overridden in ceph_conf_overrides, the task
that sets the `osd_memory_target` fact in the ceph-config role should be
skipped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2056675#c11

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-08-08 09:46:03 +02:00
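
One way to express the skip described above, as a hedged sketch (the real task computes the value rather than using a placeholder):

```yaml
# Illustrative: skip the fact when the user already overrides the option.
- name: set osd_memory_target fact
  set_fact:
    _osd_memory_target: 4294967296  # placeholder for the computed value
  when: "'osd_memory_target' not in ceph_conf_overrides.get('osd', {})"
```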
Guillaume Abrioux cf4a430d0b config: followup on 8a5628b51
Add the missing `--cluster {{ cluster }}` to the
`set osd_memory_target` task in the main.yml file of the
ceph-config role.
Also move the task to after the ceph configuration file is actually written.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-07-11 19:11:54 +02:00
Guillaume Abrioux 8a5628b516 config/osd: various fixes
- sets `osd_memory_target` per osd host.
- ceph.conf refactor (osd)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2056675

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-07-11 13:57:32 +02:00
Guillaume Abrioux 5283fa6e96 config: fix indentation in main.yml
For consistency and readability.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2022-07-11 13:57:32 +02:00
Dmitriy Rabotyagov 2eb0a88a67 Use upstream config_template collection
In order to reduce the need for internal module
maintenance and to join forces on plugin development,
it's proposed to switch to the upstream version of the
config_template module.

As it's shipped as a collection, its installation for end users
is trivial and aligns with the general approach of shipping extra modules.

Signed-off-by: Dmitriy Rabotyagov <noonedeadpunk@ya.ru>
2022-01-18 20:22:10 +01:00
Guillaume Abrioux 31a0f2653d config: reset num_osds
When collocating OSDs with other daemons, `num_osds` is incorrectly calculated
because `ceph-config` is called multiple times.

Indeed, the following code:
```
num_osds: "{{ lvm_list.stdout | default('{}') | from_json | length | int + num_osds | default(0) | int }}"
```

makes `num_osds` be incremented each time `ceph-config` is called.

We have to reset it in order to get the correct number of expected OSDs.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-03-15 10:41:28 +01:00
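
A minimal sketch of the reset, assuming the count is rebuilt right afterwards:

```yaml
# Illustrative: zero the counter so repeated ceph-config runs don't accumulate.
- name: reset num_osds
  set_fact:
    num_osds: 0
```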
Dimitri Savineau 827b23353f ceph-config: fix ceph-volume lvm batch report
Since the major ceph-volume lvm batch refactoring, the report value
is different.
Before the refactor, the report was a dict with the list of OSDs to be created
under the "osds" key.
After the refactor, the report is a list of dicts.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-12-15 21:19:04 +01:00
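
A hedged Jinja sketch of handling both report shapes; the registered variable name follows the surrounding commits, but the exact task in the role differs:

```yaml
# Illustrative: the old report was {"osds": [...]}, the new one is a list of dicts.
- name: count osds from the lvm batch report
  vars:
    report: "{{ lvm_batch_report.stdout | default('{}') | from_json }}"
  set_fact:
    num_osds: "{{ (report is mapping) | ternary(report.osds | default([]) | length, report | length) }}"
```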
Gaudenz Steinlin 15044da030 osd: Fix number of OSD calculation
If some OSDs are to be created and others already exist the calculation
only counted the to be created OSDs. This changes the calculation to
take all OSDs into account.

Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
2020-11-03 14:33:35 +01:00
Guillaume Abrioux 900c0f4492 ceph-config: ceph.conf rendering refactor
This commit cleans up the `main.yml` task file of `ceph-config`.
It drops the local ceph.conf generation.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-10-21 13:22:16 +02:00
Dimitri Savineau 50104650e7 add missing boolean filter
Otherwise this will generate an Ansible warning about the missing
filter.

[DEPRECATION WARNING]: evaluating xxx as a bare variable, this behaviour
will go away and you might need to add |bool to the expression in the
future.
Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature will
be removed in version 2.12.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-09-28 20:45:01 +02:00
Guillaume Abrioux 448cc280b7 common: don't enable debug log on ceph-volume calls by default
ceph-volume can generate large logs at some point.

debug logs by definition should be enabled only when debugging.

Let's make it customizable with a variable which is set to `False` by
default.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-08-11 15:03:20 +02:00
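
A hedged sketch of the toggle; the variable name and the `CEPH_VOLUME_DEBUG` wiring are assumptions about how the role passes the flag down:

```yaml
# Illustrative: ceph-volume debug logging, off unless explicitly enabled.
- name: run ceph-volume inventory
  command: ceph-volume inventory --format json
  register: cv_inventory
  changed_when: false
  environment:
    CEPH_VOLUME_DEBUG: "{{ '1' if ceph_volume_debug | default(false) | bool else '0' }}"
```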
Guillaume Abrioux 7dd68b9ac1 rgw: fix multi instances scaleout
When rgw and osd are collocated, the current workflow prevents scaling
out the radosgw_num_instances parameter when rerunning the
playbook.

The environment file used in the rgw systemd template is rendered when
executing the `ceph-rgw` role, but during a new run of the playbook (in
order to scale out rgw instances), handlers are triggered from the `ceph-osd`
role, which runs before `ceph-rgw`. It therefore tries to start the new
rgw daemon before its corresponding environment file has been
rendered, and fails as follows:

```
ceph-radosgw@rgw.ceph4osd3.rgw1.service failed to run 'start-pre' task: No such file or directory
```

This commit moves the tasks generating this file into the `ceph-config`
role so it is generated early.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1851906

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-07-02 10:39:50 -04:00
Guillaume Abrioux e7bc079405 config: fix external client scenario
When no monitor group is present in the inventory, this task fails.
This affects only non-containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-01-31 12:02:15 +01:00
Guillaume Abrioux 483adb5d79 common: add a default value for ceph_directories_mode
Since this variable makes it possible to customize the mode for ceph
directories, let's make it a bit more explicit by adding a default value
in ceph-defaults.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-01-22 09:35:35 +01:00
Guillaume Abrioux fd1718f379 config: exclude ceph-disk prepared osds in lvm batch report
We must exclude the devices already used and prepared by ceph-disk when
doing the lvm batch report. Otherwise it fails because ceph-volume
complains about GPT header.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1786682

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2020-01-10 00:04:22 +01:00
Dimitri Savineau bc701860d5 ceph-iscsi: notify rbd target services
When the iscsi gateway or the ceph configuration file changes, we need
to notify the rbd target api/gw services so they are restarted.
This patch also merges the rbd-target-api and rbd-target-gw handlers
into a single file and uses listen.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-10-16 16:25:40 +02:00
Artur Fijalkowski 011270ca69 global: make directories mode parameterizable
This commit makes it possible to parametrize the ceph directory modes.
It changes the hardcoded mode for ceph-related directories from 0755 to
one customizable with the `ceph_directories_mode` variable.

Closes: #2920

Signed-off-by: Artur Fijalkowski <artur.fijalkowski@ing.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-08-23 09:38:17 +02:00
Kevin Coakley e11cbbbcb1 ceph-config: Set changed_when to false on fact gathering statements
The "run 'ceph-volume lvm batch --report' to see how many osds are to be
created" and "run 'ceph-volume lvm list' to see how many osds have already been
created" statements only register the lvm_batch_report and lvm_list variables.
Running those ceph-volume commands should never produce a change on the system.
Adding changed_when: false prevents irrelevant change messages from Ansible.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-08-22 17:27:58 +02:00
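
A hedged sketch of the pattern; the task name is taken from the commit message, the command flags are illustrative:

```yaml
# Illustrative: read-only fact gathering should never report "changed".
- name: run 'ceph-volume lvm list' to see how many osds have already been created
  command: ceph-volume lvm list --format json
  register: lvm_list
  changed_when: false
```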
L3D ab54fe20ec ansible: use 'bool' filter on boolean conditionals
Running ceph-ansible produces a lot of ``[DEPRECATION WARNING]`` messages like this:
```
[DEPRECATION WARNING]: evaluating containerized_deployment as a bare variable,
this behaviour will go away and you might need to add |bool to the expression
in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This
feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
```

``| bool`` is now appended to a lot of the affected variables.

In some places the coding style changed from ``variable|bool`` to ``variable | bool`` *(with spaces around the pipe)*.

Closes: #4022

Signed-off-by: L3D <l3d@c3woc.de>
2019-06-06 10:21:17 +02:00
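
A before/after sketch of the ``| bool`` change; the task itself is hypothetical:

```yaml
# Before: bare variable, triggers the deprecation warning.
- name: run containerized step
  debug:
    msg: "running in a container"
  when: containerized_deployment

# After: explicit boolean cast.
- name: run containerized step
  debug:
    msg: "running in a container"
  when: containerized_deployment | bool
```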
Rishabh Dave 739a662c80 improve coding style
Keywords requiring only one item shouldn't express it by creating a
list with a single item.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-04-23 15:37:07 +02:00
Andrew Schoen 67453853ff rolling_update: set num_osds to the number of running osds
We do this so that the ceph-config role can most accurately
report the number of osds for the generation of the ceph.conf
file.

We don't want to use ceph-volume to determine the number of
osds because in an upgrade to nautilus ceph-volume won't be able to
accurately count osds created by ceph-disk.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-04-18 10:55:11 +02:00
Guillaume Abrioux 4d35e9eeed osd: remove variable osd_scenario
As of stable-4.0, the only valid scenario is `lvm`,
which makes this variable useless.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Dimitri Savineau 7e5e4229b7 ceph-volume: Add PYTHONIOENCODING env variable
Since https://github.com/ceph/ceph/commit/77912c0 ceph-volume uses
stdout encoding based on the LC_CTYPE and PYTHONIOENCODING environment
variables.
Those variables aren't set when using Ansible, which currently breaks
non-containerized deployments on Ubuntu.

TASK [use ceph-volume to create bluestore osds] ********************
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - create
  - --bluestore
  - --data
  - /dev/sdb
  rc: 1
  stderr: |-
    Traceback (most recent call last):
    (...)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in
    position 132: ordinal not in range(128)

Note that the task fails on the Ansible side due to the stdout
decoding, but the OSD creation itself is successful.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-02 12:41:55 +02:00
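
A hedged sketch of the fix; the task name and command come from the log above, and the environment value is the obvious choice:

```yaml
# Illustrative: force a UTF-8 stdout encoding so ceph-volume output decodes.
- name: use ceph-volume to create bluestore osds
  command: ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
  environment:
    PYTHONIOENCODING: utf-8
```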
Rishabh Dave e0beaf123a "when" keyword should precede "block" keyword
Otherwise the reader is forced to search for "when" when blocks are too
long.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-03-29 16:16:04 +00:00
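
A short sketch of the ordering convention; the tasks are hypothetical:

```yaml
# "when" first, so the condition is visible before a long block.
- name: configure osd hosts
  when: inventory_hostname in groups.get('osds', [])
  block:
    - name: first task
      debug:
        msg: "step one"
    - name: second task
      debug:
        msg: "step two"
```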
John Fulton 719a25b571 Create Ceph Initial Dirs earlier
Include the tasks from create_ceph_initial_dirs earlier in the
ceph-config role.

Fixes: #3568
Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-05 18:38:05 +00:00
Andrew Schoen 70a4368bc5 ceph-config: do not always assume containers when calculating num_osds
CEPH_CONTAINER_IMAGE should be None if containerized_deployment
is False.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-02-01 12:28:12 +01:00
Guillaume Abrioux fe1528adb4 config: support num_osds fact setting in containerized deployment
This part of the code must be supported in containerized deployments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664112

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-31 14:30:23 +00:00
Guillaume Abrioux f8aa8cdf60 facts: clean fsid generation code
Clean up some leftover and duplicate code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-30 10:36:02 +01:00
Sébastien Han b1dfe3f03e config: only pre-create ceph dirs on containers
We don't need to create the directories on non-container deployments;
they are created by the packages.

Closes: https://github.com/ceph/ceph-ansible/issues/3430
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Guillaume Abrioux fead0813b4 remove kv store support
The next stable release will drop this feature.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-30 13:45:12 +00:00
Guillaume Abrioux c783bc70da docker-common: rename role
rename `ceph-docker-common` role to `ceph-container-common`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Sébastien Han 094ae8baf1 lint: do not use local_action
Use delegate_to: localhost instead.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Andrew Schoen 436dc8c5e1 ceph-config: allow the batch --report to fail when getting the OSD num
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
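
One plausible way to express "allow to fail", as a hedged sketch; the device list is illustrative:

```yaml
# Illustrative: tolerate a non-zero exit; a later task defaults the stdout.
- name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
  command: ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
  register: lvm_batch_report
  changed_when: false
  failed_when: false
```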
Andrew Schoen 40f82319dd ceph-config: use 'lvm list' to find num_osds for an existing cluster
This makes finding num_osds idempotent for clusters that were deployed
using 'lvm batch'.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 8afef3d0de ceph-config: use the ceph_volume module to get num_osds for lvm batch
This gives us an accurate number of how many osds will be created.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Guillaume Abrioux be31c15ccd follow up on b5d2ea2
Add some missing statements.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-06 14:32:17 +02:00
Sébastien Han 4db6a213f7 add ceph-handler role
The role contains all the handlers for Ceph services. We decided to
leave ceph-defaults role with variables and a few facts only. This is
useful when organizing the site.yml files and also adding the known
variables to infrastructure-playbooks.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-28 15:15:49 +00:00
Andrew Schoen 16ccac83fe ceph-config: calculate num_osds for the lvm batch scenario
For now our best guess is to count the number of devices and multiply
by osds_per_device. Ideally we'd like to run `ceph-volume lvm batch
--report` and get the number of OSDs that way, but currently we need
a ceph.conf in place before we can do that. There is a tracker
ticket that would allow us to get around the need for a ceph.conf:
http://tracker.ceph.com/issues/36088

Fixes: https://github.com/ceph/ceph-ansible/issues/3135

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-20 15:41:52 +00:00
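
A hedged sketch of the best-guess count described above:

```yaml
# Illustrative: devices times osds_per_device, until `lvm batch --report`
# can be used without a pre-existing ceph.conf.
- name: calculate num_osds for the lvm batch scenario
  set_fact:
    num_osds: "{{ (devices | length | int) * (osds_per_device | default(1) | int) }}"
```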
Andrew Schoen 8afad35f5a ceph-config: default devices and lvm_volumes when setting num_osds
This avoids errors when the chosen osd scenario does not require
setting devices or lvm_volumes. The default values for these are not
set because they exist in the ceph-osd role, not ceph-defaults.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-18 17:02:33 +00:00
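
A minimal sketch of the defaulting, assuming the facts are consumed right after:

```yaml
# Illustrative: empty lists keep the num_osds math from erroring when a
# scenario defines neither variable.
- name: default devices and lvm_volumes
  set_fact:
    devices: "{{ devices | default([]) }}"
    lvm_volumes: "{{ lvm_volumes | default([]) }}"
```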
Neha Ojha 27027a17d3 osd: add osd memory target option
BlueStore's cache is sized conservatively by default, so that it does
not overwhelm under-provisioned servers. The default is 1G for HDD, and
3G for SSD.

To replace the page cache, as much memory as possible should be given to
BlueStore. This is required for good performance. Since ceph-ansible
knows how much memory a host has, it can set

`bluestore cache size = max(total host memory / num OSDs on this host * safety
factor, 1G)`

Due to fragmentation and other memory use not included in bluestore's
cache, a safety factor of 0.5 for dedicated nodes and 0.2 for
hyperconverged nodes is recommended.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1595003

Signed-off-by: Neha Ojha <nojha@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 10:12:46 +00:00
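
A hedged Jinja sketch of the sizing rule above, using the dedicated-node safety factor of 0.5 from the message; `ansible_memtotal_mb` is a standard Ansible fact, while `num_osds` and the fact name are assumed:

```yaml
# Illustrative: max(total host memory / num OSDs * safety factor, 1G).
# 1048576 converts MB to bytes; 1073741824 is the 1G floor.
- name: set bluestore cache size
  set_fact:
    bluestore_cache_size: "{{ [ (ansible_memtotal_mb * 1048576 * 0.5 / (num_osds | int)) | int, 1073741824 ] | max }}"
```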