Commit Graph

1950 Commits (d6e79044efbbfad9ba1b809276baf03284780392)

Author SHA1 Message Date
Sébastien Han fbd878c8d5 infra: rename osd-configure to add-osd and improve it
The playbook has various improvements:

* run ceph-validate role before doing anything
* run ceph-fetch-keys only on the first monitor of the inventory list
* set the noup flag so PGs get distributed only once all the new OSDs have
been added to the cluster, and unset it when they are up and running (see
the sketch below)
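
For reference, a minimal sketch of the flag handling the playbook automates,
using the standard Ceph CLI:

```
ceph osd set noup     # new OSDs stay "down": PGs are not remapped per-OSD
# ... prepare and add the new OSDs ...
ceph osd unset noup   # all new OSDs come up together; PGs get distributed once
```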

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-17 11:26:11 +00:00
Sébastien Han 680574ed4c ceph-fetch-keys: refact
This commit simplifies the usage of the ceph-fetch-keys role. The role
now has a nicer way to find the various Ceph keys and fetch them to the
Ansible server.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-17 11:26:11 +00:00
Andy McCrae 3e0fa3bc18 Add ability to use a different client container
Currently a throw-away container is built to run Ceph client
commands to set up users, pools & auth keys. This utilises
the same base Ceph container, which has all the Ceph services
inside it.

This PR allows the use of a separate container if the deployer
wishes, but defaults to using the same full Ceph container.

This can be used for different architectures or distributions
which may support the Ceph client but not the Ceph server,
and allows the deployer to build and specify a separate client
container if need be, as sketched below.
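
A minimal sketch of what such an override could look like; the variable
names here are hypothetical placeholders, not necessarily the ones this PR
introduces:

```
# group_vars/all.yml -- hypothetical variable names, for illustration only
ceph_client_docker_image: mycompany/ceph-client   # client-only image
ceph_client_docker_image_tag: latest              # unset, it falls back to the full ceph image
```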

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-10-16 23:28:35 +00:00
Guillaume Abrioux f0b2d82695 infra: fix wrong condition on firewalld start task
A non-skipped task won't have the `skipped` attribute, so the `start
firewalld` task will complain about that.
Indeed, the `skipped` and `rc` attributes won't both exist, since the first
task, `check firewalld installation on redhat or suse`, won't be skipped in
the case of a non-containerized deployment.
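
One way to express such a guard is with the `skipped` test rather than the
raw attribute; a sketch, where the register name `firewalld_pkg_query` is
assumed:

```
- name: start firewalld
  service:
    name: firewalld
    state: started
    enabled: yes
  when:
    - firewalld_pkg_query is not skipped   # safe even when `skipped` is absent
    - firewalld_pkg_query.rc == 0          # only evaluated when not skipped
```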

Fixes: #3236
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-16 16:24:42 +00:00
Christian Berendt ac37a0d0cd ceph-defaults: set ceph_stable_openstack_release_uca to queens
Liberty is no longer available in the UCA. The last available release there
is currently Queens.

Signed-off-by: Christian Berendt <berendt@betacloud-solutions.de>
2018-10-16 12:56:32 +00:00
Guillaume Abrioux b953965399 handler: remove some leftover in restart_*_daemon.sh.j2
Remove some legacy code from those restart scripts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-16 11:53:55 +00:00
Nan Li 55334baa0c docker-ce is used on aarch64 instead of docker engine
Signed-off-by: Nan Li <herbert.nan@linaro.org>
2018-10-15 18:38:40 +02:00
Guillaume Abrioux 60bc1e38db handler: fix osd containers handler
`ceph_osd_container_stat` might not be set on the other OSD nodes.
We must ensure we are on the last node before trying to evaluate
`ceph_osd_container_stat`, as sketched below.
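
A sketch of the shape of that guard, assuming the standard ceph-ansible
group variables:

```
when:
  - inventory_hostname == groups[osd_group_name] | last   # only evaluate on the last OSD node
  - ceph_osd_container_stat.get('rc', 1) == 0             # .get avoids an undefined-attribute error
```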

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-15 10:30:40 +02:00
Guillaume Abrioux 40b7747af7 remove jewel support
As of now, we should no longer support Jewel in ceph-ansible.
The latest ceph-ansible release supporting Jewel is `stable-3.1`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-12 23:38:17 +00:00
Sébastien Han 31a0438cb2 ceph_volume: refactor
This commit does a couple of things:

* Avoid code duplication
* Clarify the code
* Add more unit tests
* Add myself as an author of the module

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han bfe689094e osd: do not run when lvm scenario
This task was created for ceph-disk-based deployments, so it's not needed
when OSDs are prepared with ceph-volume.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 2bea8d8ecf handler: add support for ceph-volume containerized restart
The restart script wasn't working with the recent addition of
containerized ceph-volume, where OSDs now have the OSD id in the
container name.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 790f52f934 ceph-handler: change osd container check
Now that the container is named ceph-osd@<id>, looking for something that
contains a hostname is no longer necessary. This is also backward compatible,
as it will continue to match container names with the hostname in them.
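
For instance, a single expression of the following shape covers both naming
schemes; an illustrative grep, not the exact expression from the patch:

```
# matches the new ceph-osd@<id> scheme as well as legacy names embedding the hostname
docker ps --format '{{.Names}}' | grep -E "ceph-osd@[0-9]+|ceph-osd-.*$(hostname -s)"
```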

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 0580328340 validate: add warning for ceph-disk
ceph-disk will be removed in 3.3 and we encourage users to start using
ceph-volume as of 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han a948677de1 osd: ceph-volume activate, just pass the OSD_ID
We don't need to pass the device and discover the OSD ID. We have a
task that gathers all the OSD IDs present on that machine, so we simply
re-use them and activate them. This also handles the situation where you
have multiple OSDs running on the same device.
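
The activation step then reduces to something of this shape; a sketch using
ceph-volume's real activate syntax, with the id gathering simplified:

```
# activate each OSD already prepared on this host directly by id + fsid
for id in $(ls /var/lib/ceph/osd | sed 's/.*-//'); do
  ceph-volume lvm activate "${id}" "$(cat /var/lib/ceph/osd/ceph-${id}/fsid)"
done
```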

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 5f35910ee1 osd: change unit template for ceph-volume container
We don't need the hostname in the container name; we can keep it simple
and just call it ceph-osd-$id.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han ece9e9812e osd: do not use expose_partitions on lvm
expose_partitions is only needed for ceph-disk OSDs, so we don't need to
run this code for lvm-prepared OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han e39fc4f6ce ceph_volume: add container support for batch command
The batch option was added recently, and while rebasing this patch it was
necessary to implement it. The batch option now also works in
containerized environments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1630977
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 3ddcc9af16 ceph_volume: try to get rid of the dummy container
If we run on a containerized deployment, we pass an env variable which
contains the container image.
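
Conceptually it looks like the task below; the env variable name is an
assumption on my part, and the module arguments are only a sketch:

```
- name: prepare osds with ceph-volume (sketch)
  ceph_volume:
    objectstore: bluestore
    data: /dev/sdb
    action: prepare
  environment:
    CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  when: containerized_deployment | bool
```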

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han aa2c1b27e3 ceph-osd: ceph-volume container support
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Guillaume Abrioux 678e155328 infra: fix a typo in filename
The configure_firewall file is missing the dot before its extension.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 12:39:04 -04:00
Guillaume Abrioux f666902d52 infra: add tags for each subcomponent
This way we can skip one specific component if needed.
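
For example, with an illustrative tag name:

```
# skip just one subcomponent while running everything else
ansible-playbook site.yml --skip-tags configure_firewall
```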

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux f8a7ffb085 infra: add firewall configuration for containerized deployment
firewalld is available on Atomic, so there is no reason not to apply the
firewall configuration.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux 0fb8812e47 infra: update firewall rules, add cluster_network for osds
At the moment, all daemons accept connections from 0.0.0.0.
We should at least restrict them to public_network, and additionally
allow cluster_network for the OSDs.
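
A sketch of such a restricted rule with Ansible's firewalld module; the zone
and service shown are illustrative, not the role's exact tasks:

```
- name: open ceph ports to the relevant networks only
  firewalld:
    service: ceph                 # firewalld's predefined ceph service (OSD ports)
    zone: public
    source: "{{ item }}"
    state: enabled
    permanent: true
    immediate: true
  with_items:
    - "{{ public_network }}"
    - "{{ cluster_network }}"
```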

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux b3a71eeb08 ceph-infra: add new role ceph-infra
This role manages Ceph infra services, such as NTP and firewall.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Noah Watkins 8dcc8d1434 Stringify ceph_docker_image_tag
This could be a numeric input, but it is treated like a string, leading to
runtime errors.
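
One way to coerce it, as an illustrative Jinja cast rather than the patch's
exact change:

```
{{ ceph_docker_image }}:{{ ceph_docker_image_tag | string }}   # cast at the point of use
```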

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1635823

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-10-10 04:26:33 +00:00
Noah Watkins 306e308f13 Avoid using tests as filter
Fixes the deprecation warning:

  [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of
  using `result|search` use `result is search`.

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-10-10 04:26:33 +00:00
Andrew Schoen ada03d064d ceph-validate: remove versions checks for bluestore and lvm scenario
These checks will never pass unless ceph_stable_release is passed and
ceph-defaults is run before ceph-validate. Additionally, we don't want
to support deploying Jewel upstream on ceph-ansible master.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1637537

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 13:30:42 -04:00
Andrew Schoen 436dc8c5e1 ceph-config: allow the batch --report to fail when getting the OSD num
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 40f82319dd ceph-config: use 'lvm list' to find num_osds for an existing cluster
This makes finding num_osds idempotent for clusters that were deployed
using 'lvm batch'.
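
That is, the count comes from the report rather than being recomputed from
`devices`; the JSON post-processing below is illustrative:

```
# each top-level key in the report is one OSD id
ceph-volume lvm list --format=json | python -c 'import json,sys; print(len(json.load(sys.stdin)))'
```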

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 8afef3d0de ceph-config: use the ceph_volume module to get num_osds for lvm batch
This gives us an accurate count of how many OSDs will be created.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen c453ea25c0 ceph-osd: use journal_size and block_db_size for lvm batch
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 71ce539da5 ceph-defaults: add the block_db_size option
This is used in the lvm osd scenario for the 'lvm batch' subcommand
of ceph-volume.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Guillaume Abrioux 3e2cdcc735 common: remove check_firewall code
The check_firewall code isn't working as expected and might break
deployments. This part of the code will be reworked soon.

Let's focus on the configure_firewall code for now.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-06 14:32:17 +02:00
Guillaume Abrioux be31c15ccd follow up on b5d2ea2
Add some missing statements.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-06 14:32:17 +02:00
Rishabh Dave b5d2ea269f don't use "static" field while including tasks
Instead, use "import_tasks" and "include_tasks" to state whether tasks
must be included statically or dynamically.
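
For example, with illustrative file names:

```
- import_tasks: configure_firewall.yml      # static: resolved at parse time
- include_tasks: "{{ osd_scenario }}.yml"   # dynamic: resolved at run time
```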

Fixes: https://github.com/ceph/ceph-ansible/issues/2998
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-10-04 07:44:28 +00:00
Guillaume Abrioux 6130bc841d config: look up for monitor_address_block in hostvars
`monitor_address_block` should be read from hostvars[host] instead of from
the current node being played.

eg:

Let's assume we have:

```
[mons]
ceph-mon0 monitor_address=192.168.1.10
ceph-mon1 monitor_interface=eth1
ceph-mon2 monitor_address_block=192.168.1.0/24
```

the ceph.conf generation task will end up with:

```
fatal: [ceph-mon0]: FAILED! => {}

MSG:

'ansible.vars.hostvars.HostVarsVars object' has no attribute u'ansible_interface'
```

The reason is that it will assume `monitor_address_block` isn't defined even on
ceph-mon2, because it looks for `monitor_address_block` instead of
`hostvars[host]['monitor_address_block']`; it therefore falls into the default branch of the condition:

```
    {%- else -%}
      {% set interface = 'ansible_' + (monitor_interface | replace('-', '_')) %}
      {% if ip_version == 'ipv4' -%}
        {{ hostvars[host][interface][ip_version]['address'] }}
      {%- elif ip_version == 'ipv6' -%}
        [{{ hostvars[host][interface][ip_version][0]['address'] }}]
      {%- endif %}
    {%- endif %}
```

`monitor_interface` is set to its default value `'interface'`, so the `interface`
variable is built from 'ansible_' + 'interface'. This makes Ansible throw a
confusing message about `'ansible_interface'`.
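
The fix direction is therefore to index through hostvars[host] for that
attribute too; a sketch of the corrected lookup, where the ipaddr-based
resolution shown is illustrative:

```
{%- if hostvars[host]['monitor_address_block'] is defined -%}
  {# pick this host's address from its own monitor_address_block #}
  {{ hostvars[host]['ansible_all_ipv4_addresses'] | ipaddr(hostvars[host]['monitor_address_block']) | first }}
{%- endif -%}
```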

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1635303

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-02 22:41:05 +02:00
Benjamin Cherian 85071e6e53 Add support for different NTP daemons
Allow the user to choose between timesyncd, chronyd and ntpd.
Installation will default to timesyncd, since it is distributed as
part of the systemd installation for most distros.
Added a note indicating that the NTP daemon type is not used for
containerized deployments.
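
For instance (the variable name here is an assumption):

```
# group_vars/all.yml
ntp_daemon_type: chronyd   # one of timesyncd (the default), chronyd, ntpd
```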

Fixes issue #3086 on Github

Signed-off-by: Benjamin Cherian <benjamin_cherian@amat.com>
2018-10-02 13:18:08 +00:00
Mike Christie eddb95941b igw: validate client CHAP settings.
The Linux kernel target layer, LIO, does not support an iSCSI target
mixing ACLs that have CHAP enabled and disabled under the same tpg. This
patch adds a check and fails if this type of setup is detected.
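
The shape of the added check, sketched as a task with hypothetical variable
names:

```
- name: fail on mixed CHAP configuration
  fail:
    msg: "LIO does not support mixing CHAP-enabled and CHAP-disabled ACLs under the same tpg"
  when:
    - clients_with_chap | length > 0
    - clients_with_chap | length != client_connections | length
```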

This fixes Red Hat BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1615088

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-10-01 18:23:03 +02:00
Sébastien Han 4db6a213f7 add ceph-handler role
The role contains all the handlers for Ceph services. We decided to
leave the ceph-defaults role with variables and a few facts only. This is
useful when organizing the site.yml files and also when adding the known
variables to infrastructure-playbooks.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-28 15:15:49 +00:00
Sébastien Han 145aef9fed defaults: do not disable THP on bluestore
As per #1013, it appears that bluestore will soon use THP to lower TLB
misses; moreover, disabling THP hasn't demonstrated any gains so far.

Closes: https://github.com/ceph/ceph-ansible/issues/1013
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-27 21:23:49 +00:00
Sébastien Han dc3319c3c4 default: use bluestore as default object store
All tooling in Ceph defaults to the bluestore objectstore for provisioning
OSDs, so there is no good reason for ceph-ansible to continue to default to
filestore.
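
In other words, the shipped default becomes (using the role's existing
variable):

```
osd_objectstore: bluestore
```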

Closes: https://github.com/ceph/ceph-ansible/issues/3149
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1633508
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-27 21:23:49 +00:00
Rishabh Dave 380168dadc don't use "include" to include tasks
Use "import_tasks" or "include_tasks" instead.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-09-27 17:53:40 +02:00
Giulio Fidente 6126210e0e Fix version check in ceph.conf template
We need to look for ceph_release when comparing with release names,
not ceph_version.
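
That is, comparisons against release names take this shape; an illustrative
guard, not the template's exact line:

```
{% if ceph_release not in ['jewel', 'kraken', 'luminous'] %}
{# settings that only apply to mimic and newer #}
{% endif %}
```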

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1631789
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
2018-09-24 13:08:27 +02:00
Matthew Vernon 806461ac6e restart_osd_daemon.sh.j2 - use `+` rather than `{1,}` in regex
`+` is more idiomatic for "one or more" in a regex than `{1,}`; the
latter was introduced in a previous fix for an incorrect `{1,2}`
restriction.
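
For example, on an illustrative pattern:

```
osd\.[0-9]+      # idiomatic "one or more digits"
osd\.[0-9]{1,}   # equivalent, but noisier
```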

Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 10:33:46 +00:00
Matthew Vernon 04f4991648 restart_osd_daemon.sh.j2 - consider active+clean+* pgs as OK
After restarting each OSD, restart_osd_daemon.sh checks that the
cluster is in a good state before moving on to the next one. One of
the checks it does is that the number of pgs in the state
"active+clean" is equal to the total number of pgs in the cluster.

On large clusters (e.g. we have 173,696 pgs), it is likely that at
least one pg will be scrubbing and/or deep-scrubbing at any one
time. These pgs are in state "active+clean+scrubbing" or
"active+clean+scrubbing+deep", so the script was erroneously not
including them in the "good" count. Similar concerns apply to
"active+clean+snaptrim" and "active+clean+snaptrim_wait".

Fix this by considering as good any pg whose state contains
active+clean. Do this as an integer comparison to num_pgs in pgmap.
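
A sketch of that comparison; the jq pipeline is illustrative, not the
script's exact implementation:

```
# sum every pg state containing "active+clean" and compare with the total
good=$(ceph -s -f json | jq '[.pgmap.pgs_by_state[] | select(.state_name | contains("active+clean")) | .count] | add')
total=$(ceph -s -f json | jq '.pgmap.num_pgs')
test "$good" -eq "$total"
```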

(could this be backported to at least stable-3.0 please?)

Closes: #2008
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 10:33:46 +00:00
Matthew Vernon aa97ecf048 restart_osd_daemon.sh.j2 - Reset RETRIES between calls of check_pgs
Previously RETRIES was set (by default to 40) once at the start of the
script; this meant that it would only ever wait for up to 40 lots of
30s across *all* the OSDs on a host before bombing out. In fact, we
want to be prepared to wait for the same amount of time after each OSD
restart for the cluster's pgs to be happy again before continuing.
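
In other words, the retry budget moves inside the check so each call starts
fresh; a sketch with hypothetical helper names:

```
check_pgs() {
  local retries=${RETRIES:-40}   # reset on every call, i.e. per OSD restart
  until [ "$(count_clean_pgs)" -eq "$(count_total_pgs)" ]; do  # hypothetical helpers
    retries=$((retries - 1))
    [ "$retries" -gt 0 ] || return 1
    sleep "${DELAY:-30}"
  done
}
```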

Closes: #3154
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 08:20:32 +00:00
Andrew Schoen 16ccac83fe ceph-config: calculate num_osds for the lvm batch scenario
For now our best guess is to count the number of devices and multiply
by osds_per_device. Ideally we'd like to run ceph-volume lvm batch
--report and get the number of OSDs that way, but currently we need
a ceph.conf in place already before we can do that. There is a tracker
ticket that would allow us to get around the need for a ceph.conf:
http://tracker.ceph.com/issues/36088
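
A sketch of that interim computation, close to the shape such a task would
take:

```
- name: set a best-guess num_osds for the lvm batch scenario
  set_fact:
    num_osds: "{{ (devices | length | int) * (osds_per_device | default(1) | int) }}"
```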

Fixes: https://github.com/ceph/ceph-ansible/issues/3135

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-20 15:41:52 +00:00
Guillaume Abrioux 6d6fd514e0 config: set default _rgw_hostname value to respective host
The default value for _rgw_hostname was taken from the current node being
played, while it should be taken from the respective node in the loop.
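
That is, the lookup needs the same hostvars[host] indirection; an
illustrative one-liner, not the patch's exact expression:

```
{{ hostvars[host]['rgw_hostname'] | default(hostvars[host]['ansible_hostname']) }}
```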

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622505

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 20:10:34 +02:00
Andrew Schoen 8afad35f5a ceph-config: default devices and lvm_volumes when setting num_osds
This avoids errors when the chosen OSD scenario does not require
setting devices or lvm_volumes. The default values for these are not
set, because they exist in the ceph-osd role, not in ceph-defaults.
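
For example, defaulting to empty lists so the length arithmetic never fails:

```
{{ devices | default([]) | length | int }}
{{ lvm_volumes | default([]) | length | int }}
```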

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-18 17:02:33 +00:00