Commit Graph

66 Commits (7d25a5d56512558ed801d3d180ba24606f93947b)

Author SHA1 Message Date
Guillaume Abrioux 0163ecc924 convert some missed `ansible_*` calls to `ansible_facts['*']`
This converts some calls to `ansible_*` that were missed in the
initial PR #6312.
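
For illustration, a minimal before/after sketch (hypothetical debug
tasks):

    # before: relies on INJECT_FACTS_AS_VARS=True
    - debug:
        msg: "{{ ansible_distribution }}"
    # after: reads the facts dictionary directly
    - debug:
        msg: "{{ ansible_facts['distribution'] }}"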

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2021-03-25 15:19:13 +01:00
Alex Schultz a7f2fa73e6 Use ansible_facts
It has come to our attention that using ansible_* vars that are
populated with INJECT_FACTS_AS_VARS=True is not very performant.  In
order to be able to support setting that to off, we need to update the
references to use ansible_facts[<thing>] instead of ansible_<thing>.
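
For example, once fact injection is disabled in ansible.cfg
([defaults] inject_facts_as_vars = False), only the facts dictionary
remains available. An illustrative task:

    # works regardless of INJECT_FACTS_AS_VARS
    - name: print the OS family
      debug:
        msg: "{{ ansible_facts['os_family'] }}"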

Related: ansible#73654
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1935406
Signed-off-by: Alex Schultz <aschultz@redhat.com>
2021-03-08 20:54:02 +01:00
Dimitri Savineau 829990e60d ceph-osd: remove ceph-osd-run.sh script
Since we only have one scenario since Nautilus, we can just move the
container start command from ceph-osd-run.sh to the systemd service
unit.
As a result, the ceph-osd-run.sh.j2 template and the
ceph_osd_docker_run_script_path variable are removed.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-06-18 17:51:13 +02:00
Dimitri Savineau 1fc6b33714 ceph-{mon,osd}: move default crush variables
Since ed36a11 we moved the crush rule creation code from the ceph-mon
to the ceph-osd role.
To keep backward compatibility we kept the possibility to set the
crush variables on the mon side, but we didn't move the default values.
As a result, when crush_rule_config is set to true and the default
values for crush_rules are expected, the crush rule creation task will
fail with:

"msg": "'ansible.vars.hostvars.HostVarsVars object' has no attribute
'crush_rules'"

This patch moves the default crush variables from the ceph-mon to the
ceph-osd role, and also uses those default values when nothing is
defined on the mon side.
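
A hedged sketch of the variables involved (rule name and values
hypothetical):

    crush_rule_config: true
    crush_rules:
      - name: replicated_rule
        root: default
        type: host
        default: true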

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1798864

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2020-02-17 10:50:53 +01:00
Guillaume Abrioux 09e04a9197 osd: add wal_devices option support to ceph_volume module
This commit adds `wal_devices` option support to the
ceph_volume module.
Passing a device list in `bluestore_wal_devices` will make ceph-volume
create one VG from these devices and use it for block.wal partitions.
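
Illustrative snippet (device paths hypothetical):

    bluestore_wal_devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
    # ceph-volume builds one VG from these devices and carves
    # block.wal partitions out of it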

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-09-26 11:35:24 +02:00
Guillaume Abrioux 70f1b37097 osd: update doc text in defaults/main.yml
This commit removes ceph-disk references.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-09-26 11:35:24 +02:00
Guillaume Abrioux 7b836eaa47 osd: add block_db_devices option support to ceph_volume module
This commit adds `block_db_devices` option support to the
ceph_volume module.
Passing a device list in `dedicated_devices` will make ceph-volume
create one VG from these devices and use it for the block.db partitions
of the data devices.
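
Illustrative snippet (device paths hypothetical):

    dedicated_devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
    # one VG is built from these devices; a block.db partition is
    # created on it for each data device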

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-09-26 11:35:24 +02:00
Dimitri Savineau 49aa05b96c ceph-client: Use profile rbd in keyring caps
Like the OpenStack keyrings, we can use the rbd profile for the client
keyrings (both mon and osd caps).
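
A hedged sketch of such a keyring definition (client name and pool
hypothetical):

    keys:
      - name: client.test
        caps:
          mon: "profile rbd"
          osd: "profile rbd pool=rbd"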

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-08-27 20:52:23 +02:00
Dimitri Savineau c17106874c ceph-osd: Increase cpu limit to 4
In containerized deployments the default OSD CPU quota is too low for
production environments using NVMe devices, causing performance
degradation compared to bare metal.
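
The corresponding group_vars override (the new default value):

    ceph_osd_docker_cpu_limit: 4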

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1695880

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-24 17:59:42 +02:00
Guillaume Abrioux f0416c8892 osd: remove dedicated_devices variable
This variable was related to ceph-disk scenarios.
Since we are entirely dropping ceph-disk support as of stable-4.0, let's
remove this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux 4d35e9eeed osd: remove variable osd_scenario
As of stable-4.0, the only valid scenario is `lvm`, which makes this
variable useless.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Sébastien Han e2a5aa062e osd: remove ceph-disk support
We no longer support preparing OSDs with ceph-disk; only ceph-volume
is supported. However, starting OSDs is still supported: if you change
a config option, the handlers will be able to restart all the OSDs via
their respective systemd unit files.
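
Conceptually, the restart path boils down to something like this
illustrative handler sketch (not the role's actual code; osd_ids is a
hypothetical list):

    - name: restart ceph osds
      systemd:
        name: "ceph-osd@{{ item }}"
        state: restarted
      loop: "{{ osd_ids | default([]) }}"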

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux b3eb9206fa osd: support numactl options on OSD activate
This commit adds numactl support when activating OSD containers.
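
Presumably exposed through a variable along these lines (variable name
and value here are assumptions):

    ceph_osd_numactl_opts: "--cpunodebind=0 --membind=0"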

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1684146

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-11 10:14:50 +01:00
Guillaume Abrioux 21e5db8982 osd: make the 'wait for all osd to be up' task configurable
Introduce two new variables to make the 'wait for all osd to be up'
check configurable.
In some deployments, OSDs can take longer to be seen as UP and IN.
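
A sketch of the two knobs, assuming they are a retry count and a delay
between retries (names and values are assumptions):

    nb_retry_wait_osd_up: 60
    delay_wait_osd_up: 10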

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1676763

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-20 16:06:04 +00:00
Kai Wembacher 1dd26f76bf document missing support for non-containerized deployment
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-21 15:37:55 +00:00
Kai Wembacher a273ed7f60 add support for rocksdb and wal on the same partition in non-collocated
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-20 14:19:46 +01:00
Guillaume Abrioux 0187166926 osd: remove incorrect information
This statement was false: `./defaults/main.yml` is not supposed to be
modified directly. group_vars and/or host_vars should always be
preferred.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-03 08:11:35 +00:00
Noah Watkins 64dee9be0c Remove outdated documentation
Fixes BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1640525

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-11-15 22:26:19 +00:00
Guillaume Abrioux 748342f5b6 roles: fix *_docker_memory_limit default value
Append the 'm' suffix to specify the unit size used in all
`*_docker_memory_limit` defaults.
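
For example (value hypothetical):

    ceph_osd_docker_memory_limit: 5120m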

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-29 14:59:09 +01:00
Neha Ojha b7e4d4eb84 roles: do not limit docker_memory_limit for various daemons
Since we do not have enough data to put valid upper bounds for the memory
usage of these daemons, do not put artificial limits by default. This will
help us avoid failures like OOM kills due to low default values.

Whenever required, these limits can be manually enforced by the user.

More details in
https://bugzilla.redhat.com/show_bug.cgi?id=1638148

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1638148
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-10-29 14:59:09 +01:00
Andrew Schoen b36f3e06b5 ceph_volume: adds the osds_per_device parameter
If this is set to anything other than the default value of 1 then the
--osds-per-device flag will be used by the batch command to define how
many osds will be created per device.
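
Example (value hypothetical):

    osds_per_device: 2  # passed to ceph-volume batch as --osds-per-device 2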

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-12 20:27:14 +00:00
Sébastien Han 9ba670567e remove warning for unsupported variables
As promised, these will go unsupported for 3.1 so let's actually remove
them :).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622729
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-28 13:31:57 -07:00
Andrew Schoen e15c61b601 updates group_vars/osds.yml.sample to inlude crush_device_class
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Sébastien Han a9ed3579ae mon/osd: bump container memory limit
As discussed with the core developers, the current limits are too low
and should be bumped to higher values.
So now by default monitors get 3GB and OSDs get 5GB.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591876
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-17 11:20:27 -04:00
Alfredo Deza 3fcf966803 ceph-osd note that some scenarios use ceph-disk vs. ceph-volume
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2018-03-29 09:11:33 +02:00
Sébastien Han 3261ab23b8 osd: remove old crush_location implementation
This was causing a lot of pain with the handlers. Also the
implementation was not ideal since we were assembling files. Everything
can now be done with the ceph_crush module so let's remove that.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han b1a3c6e4cc osd: resync group_vars file
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-30 17:47:15 +01:00
Guillaume Abrioux 9306a1789c osds: change default value for `dedicated_devices`
This is to keep backward compatibility with stable-2.2 and satisfy the
check "verify dedicated devices have been provided" in
`check_mandatory_vars.yml`. This check is looking for
`dedicated_devices`, so we need to default its value to
`raw_journal_devices` when `raw_multi_journal` is set to `True`.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1536098

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-22 18:02:51 +01:00
Sébastien Han 97f520bc74 containers: bump memory limit
Default values of 4GB for MDS and 3GB for OSD are more appropriate.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1531607
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-09 11:26:50 +01:00
John Fulton 8cba44262c Add flags for OSD 'docker run --cpuset-{cpus,mems}'
Add the variables ceph_osd_docker_cpuset_cpus and
ceph_osd_docker_cpuset_mems, so that a user may specify
the CPUs and memory nodes of NUMA systems on which OSD
containers are run.

Provides an example in osds.yaml.sample to guide the user,
based on sample `lscpu` output, since cpuset-mems refers
to memory by NUMA node only while cpuset-cpus can
refer to individual vCPUs within a NUMA node.
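
For instance (values hypothetical, to be derived from `lscpu` output):

    ceph_osd_docker_cpuset_cpus: "0,2,4,6"
    ceph_osd_docker_cpuset_mems: "0"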
2017-12-14 16:39:35 +01:00
Andrew Schoen 6dc090a4c8 update group_vars/osds.yml.sample for the new lvm_volumes examples
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-11-15 09:36:17 -06:00
Guillaume Abrioux 39b584e540 osd: fix a typo in roles/ceph-osd/defaults/main.yml
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-07 10:06:16 +01:00
Alfredo Deza f77180cf49 group_vars: update the examples in lvm for bluestore support
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:46:39 -04:00
Yixing Yan b6296c13ac update sample file 2017-10-25 16:39:08 +08:00
Sébastien Han 4413511b66 all: backward compatibility between stable-2.2 and 3.0
stable-3.0 brought numerous changes to ceph-ansible variables. This PR
aims to maintain backward compatibility for someone running stable-2.2
who upgrades to stable-3.0 but keeps their group_vars untouched.
We will then determine the right options to make sure the upgrade
works, but we expect the new variables to be used.

We will drop this in the near future, maybe 3.1 or 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
Guillaume Abrioux 20946f7220 ceph-osd: remove deprecated comment in sample file
Since #1724 has been merged, this comment is deprecated.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-12 16:48:10 +02:00
Sébastien Han 2fa151b9e8 container: introduce resource limitation for containers
This can be controlled via 2 options:

* ceph_$DAEMON_docker_memory_limit
* ceph_$DAEMON_docker_cpu_limit

All daemons default to 1GB of memory and 1 CPU.
Recommendations from:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
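
For the OSD daemon this translates into the following sketch (exact
default format may differ):

    ceph_osd_docker_memory_limit: 1g
    ceph_osd_docker_cpu_limit: 1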

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-06 14:52:21 +02:00
Sébastien Han e0a264c7e9 osd: allow multi dedicated journals for containers
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=1475820
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-30 12:34:06 +02:00
Sébastien Han d7ddf88e88 resync groups_vars
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 13:40:36 +02:00
SirishaGuduru 1359869497 Common: changed civetweb line in rgw section(conf)
Resolves issue: Multiple RGW Ceph.conf Issue #1258

In a multi-RGW setup, the RGW sections in ceph.conf contained an
identical bind IP on the civetweb line. This modification fixes that
issue and puts the right IP for each RGW.
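
In practice the fix keys off the per-host interface setting mentioned
below (value hypothetical):

    radosgw_interface: eth0
    # each RGW section gets the IP of this interface on that host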

Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>

Modified ceph-defaults and ran generate_group_vars_sample.sh

group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are
not part of the changes, but they got modified when
generate_group_vars_sample.sh was run to generate
group_vars/all.yml.sample.

Uncommented added variables in ceph-defaults

Updated tests by adding value for radosgw_interface

Added radosgw_interface to centos cluster tests

Modified ceph-rgw role, rebased and ran generate_group_vars_sample.sh

In the ceph-rgw role, removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh, which then modified the files
below.
2017-08-23 15:03:37 +05:30
Sébastien Han 38d575ce55 Merge pull request #1654 from andymcc/master
Allow ceph-mon systemd overrides to be specified
2017-08-22 15:32:39 +02:00
Sébastien Han 19ae8b42e6 resync group_vars files
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-22 11:31:49 +02:00
Andy McCrae 4671b9e74e Allow ceph service systemd overrides to be specified
ceph services can fail to start under certain circumstances (for
example, when running in a container) because the default systemd
service configuration causes namespace issues.

To work around this we can override the system service settings by
placing an overrides file in the ceph-<service>@.service.d directory.
This can be generic so as to allow any potential changes required to
the ceph-<service> service files.

The overrides file is only set up when the
"ceph_<service>_systemd_overrides" config_template override variable is
specified (see the sketch after the variable list below).

The available service systemd override files are as follows:
ceph_mds_systemd_overrides
ceph_mgr_systemd_overrides
ceph_mon_systemd_overrides
ceph_osd_systemd_overrides
ceph_rbd_mirror_systemd_overrides
ceph_rgw_systemd_overrides
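
For instance, a hedged sketch for the OSD service (override keys and
values hypothetical):

    ceph_osd_systemd_overrides:
      Service:
        PrivateDevices: false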
2017-08-16 17:57:06 +01:00
Andrew Schoen e597628be9 lvm: update scenario for new osd_scenario variable
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:38:36 -05:00
Andrew Schoen 249598ff4b lvm-osds: update group_vars/osds.yml.sample
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:09 -05:00
Sébastien Han 30991b1c0a osd: simplify scenarios
There are only two main scenarios now:

* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components (see
  the sketch below)
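
A sketch of the collocated form (device paths hypothetical):

    osd_scenario: collocated
    devices:
      - /dev/sda
      - /dev/sdb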

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-03 10:20:39 +02:00
Sébastien Han 3a8bf18b0b generate_group_vars_sample: update for ceph-defaults
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-02 17:12:20 +02:00
Guillaume Abrioux 1d003aa887 merge docker-common and common defaults vars
Merge `ceph-docker-common` and `ceph-common` defaults vars in
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-02 14:46:51 +02:00
Guillaume Abrioux 94c3756167 Tests: Add bluestore scenarios
Since we started testing against Luminous, we need to add more
scenario testing.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 15:02:32 +02:00
Sébastien Han 7d657ac643 osd: ability to set db and wal to bluestore
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different backend to store objects, but the scenarios more or
less remain the same.

If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for rocksdb db, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
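
For example, the third configuration could look like this sketch
(device paths hypothetical; the DB-drive variable name is assumed from
the filestore-era journal option):

    osd_objectstore: bluestore
    raw_multi_journal: true
    devices:
      - /dev/sda
      - /dev/sdb
    raw_journal_devices:      # dedicated rocksdb DB drive (assumed name)
      - /dev/nvme0n1
    bluestore_wal_devices:    # dedicated rocksdb WAL drive
      - /dev/nvme1n1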

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 19:07:16 +02:00