Commit Graph

4055 Commits (8edbda96df6896d51703fcc250f562abb4011a2d)
 

Author SHA1 Message Date
Sébastien Han dad10e8f3f rolling_update: register container osd units
Before running the upgrade, let's call systemd to collect unit names
instead of relying on the device list. This is more accurate and fixes
the osd_auto_discovery scenario too.
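
A minimal sketch of collecting the unit names from systemd in Ansible; the unit pattern and the restart step are illustrative assumptions, not the exact tasks from this commit:

```
- name: collect ceph-osd systemd unit names (sketch)
  shell: systemctl list-units --no-legend 'ceph-osd@*' | awk '{ print $1 }'
  register: osd_units
  changed_when: false

- name: restart the collected osd units
  systemd:
    name: "{{ item }}"
    state: restarted
  with_items: "{{ osd_units.stdout_lines }}"
```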

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613626
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-16 11:13:12 +02:00
Sébastien Han 3149b2564f Revert "osd: generate device list for osd_auto_discovery on rolling_update"
This reverts commit e84f11e99e.

This commit was causing a new failure later during the rolling_update
process. Basically, it was modifying the list of devices and started
impacting the ceph-osd role itself. The modification to accommodate the
osd_auto_discovery parameter should happen outside of the ceph-osd role.

Also, we are trying not to play the ceph-osd role during the rolling_update
process so we can speed up the upgrade.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-16 11:13:12 +02:00
Jeffrey Zhang 85cc61a6d9 Use /var/lib/ceph/osd folder to filter osd mount point
In some cases, a user may mount a partition to /var/lib/ceph; unmounting
it would fail, and there is no need to do so anyway.
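
A hedged sketch of restricting the handling to mount points under /var/lib/ceph/osd (the fact name is illustrative):

```
- name: collect only the osd mount points (sketch)
  set_fact:
    ceph_osd_mounts: "{{ ansible_mounts | selectattr('mount', 'match', '^/var/lib/ceph/osd/') | map(attribute='mount') | list }}"
```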

Signed-off-by: Jeffrey Zhang <zhang.lei.fly@gmail.com>
2018-08-14 13:00:24 +00:00
Markos Chandras 7172737f13 roles: ceph-defaults: Set ceph_uid on SUSE distributions
The ceph_uid is also '167' on SUSE systems, so extend the existing task.
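
A minimal sketch of the extended task; the exact condition used in ceph-defaults may differ:

```
- name: set ceph_uid for red hat and suse based container images (sketch)
  set_fact:
    ceph_uid: 167
  when:
    - containerized_deployment | default(false) | bool
    - ansible_os_family in ['RedHat', 'Suse']
```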

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-13 19:02:57 +00:00
Guillaume Abrioux 36942af698 mgr: backward compatibility for module management
Follow up on 3abc253fec

The structure even changed within the `luminous` release.
It was first:

```
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}
```
Then it changed to:

```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      "balancer",
      "dashboard",
      "influx",
      "localpool",
      "prometheus",
      "restful",
      "selftest",
      "zabbix"
  ]
}
```

and finally:
```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      {
          "name": "balancer",
          "can_run": true,
          "error_string": ""
      },
      {
          "name": "dashboard",
          "can_run": true,
          "error_string": ""
      }
  ]
}
```
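
A hedged sketch of normalizing the disabled_modules list across these formats (task layout and variable names are assumptions):

```
- name: list mgr modules (sketch)
  command: ceph --cluster {{ cluster | default('ceph') }} mgr module ls
  register: _mgr_module_ls
  changed_when: false

- name: extract the disabled module names whether entries are strings or dicts
  set_fact:
    _disabled_modules: >-
      {%- set mods = (_mgr_module_ls.stdout | from_json).get('disabled_modules', []) -%}
      {%- if mods and mods[0] is mapping -%}
      {{ mods | map(attribute='name') | list }}
      {%- else -%}
      {{ mods }}
      {%- endif -%}
```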

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 13:25:06 +00:00
Guillaume Abrioux 8b5e3cd999 validate: fail if fqdn deployment attempted
The fqdn configuration possibility caused a lot of trouble; it adds a
lot of complexity because of the multiple cases and the relation between
ceph-ansible and ceph-container. Moreover, there is no benefit to such
a feature.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613155

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux f422efb1d6 config: ensure rgw section has the correct name
ceph.conf.j2 always assumes the hostname used to register the
radosgw in the servicemap is equivalent to `{{ ansible_hostname }}`,
which returns the shortname form.

We need to detect which form of the hostname was used in the case of an
already deployed cluster and update the ceph.conf accordingly.
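
A hedged sketch of detecting the already-registered form; the simple substring matching against the servicemap dump is an assumption:

```
- name: dump the servicemap from a monitor (sketch)
  command: ceph --cluster {{ cluster | default('ceph') }} service dump
  register: _servicemap
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"

- name: keep the fqdn form if that is what was registered
  set_fact:
    rgw_hostname: "{{ ansible_fqdn if ansible_fqdn in _servicemap.stdout else ansible_hostname }}"
```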

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1580408

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux db29b5b84d config: clean template, remove useless conditions
There is no need to have all these conditions.

For instance, assuming `mds_group_name` is set to 'mdss':

  - `if groups[mds_group_name] is defined` checks if `'mdss'` is present in `{{ groups }}`

  - `if {{ mds_group_name }} in group_names` checks if the current node is part
  of the group `'mdss'`

  - `if inventory_hostname in groups.get(mds_group_name, [])` checks if
  the current node is part of the group 'mdss'

The third condition alone is enough to ensure we are running on an mds
node.
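
A minimal sketch of the single condition that is kept, using `debug` as a stand-in task:

```
- name: run only on nodes that are part of the mds group (sketch)
  debug:
    msg: "{{ inventory_hostname }} is an mds node"
  when: inventory_hostname in groups.get(mds_group_name | default('mdss'), [])
```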

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux 4522dbfc74 doc: update ansible supported version matrix.
Closes: #2989

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Sébastien Han 4c9e24a90f mon: fix calamari initialisation
If calamari is already installed and ceph has been upgraded to a higher
version, the initialisation will fail later. So if we detect that the
calamari-server is too old compared to ceph_rhcs_version, we try to update
it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1601755
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 14:14:23 +02:00
Andrew Schoen 6423ab4ad3 lvm: fix condition when selecting which scenario to run
devices and lvm_volumes will always be defined, so we need to instead
check their length before deciding to run the scenario.

This fixes the failure here:
https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-bluestore_lvm_osds/86/consoleFull#1667273050b5dd38fa-a56e-4233-a5ca-584604e56e3a
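
A hedged sketch of the length-based check (the included file name is illustrative):

```
- name: run the lvm scenario only when something is actually configured (sketch)
  include_tasks: scenarios/lvm.yml
  when:
    - osd_scenario == 'lvm'
    - (devices | default([]) | length > 0) or (lvm_volumes | default([]) | length > 0)
```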

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-10 11:46:12 +02:00
Sébastien Han e84f11e99e osd: generate device list for osd_auto_discovery on rolling_update
rolling_update relies on the list of devices when performing the restart
of the OSDs. The task that is building the devices list out of the
ansible_devices dict only runs when there are no partitions on the
drives. However, during an upgrade the OSDs are already configured; they
have been prepared and have partitions, so this task won't run and thus
the devices list will be empty, skipping the restart during
rolling_update. We now run the same task under different requirements
when rolling_update is true and build a list when:

* osd_auto_discovery is true
* rolling_update is true
* ansible_devices exists
* no dm/lv are part of the discovery
* the device is not removable
* the device has more than 1 sector
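
A hedged sketch of building that list; the exact removable/sector/holders checks against ansible_devices are assumptions:

```
- name: build the devices list for osd_auto_discovery during rolling_update (sketch)
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when:
    - rolling_update | default(false) | bool
    - osd_auto_discovery | default(false) | bool
    - item.value.removable == "0"
    - item.value.sectors != "0"
    - item.value.holders | length == 0
    - "'dm-' not in item.key"
```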

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613626
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 09:19:40 +02:00
Andrew Schoen e15c61b601 updates group_vars/osds.yml.sample to include crush_device_class
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 68d929299a ceph-volume: docs for "lvm batch" support
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 647bbd8f1e tests: adds crush_device_class to lvm-batch scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 3592c68cca ceph-osd: adds crush_device_class config option
This is used with the lvm osd scenario. When using devices you need the
option to set the crush device class for all of the OSDs that are
created from those devices.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 6d431ec22d ceph-volume: implement the 'lvm batch' subcommand
This adds the action 'batch' to the ceph-volume module so that we can
run the new 'ceph-volume lvm batch' subcommand. A functional test is
also included.

If devices is defined and osd_scenario is lvm then the 'ceph-volume lvm
batch' command will be used to create the OSDs.
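
A minimal group_vars sketch of this scenario; device paths and the device class value are placeholders:

```
# group_vars/osds.yml
osd_scenario: lvm
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
crush_device_class: ssd
```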

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Sébastien Han 4d64dd4686 rgw: ability to use ceph-ansible vars into containers
Since the container now simply reads the ceph.conf, we remove all the
unnecessary options.

Also, this PR is the foundation for supporting multiple backends, such as
the new 'beast' from Ceph Mimic.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 3bce117de2 rgw: remove unused file
copy_configs.yml was not included anywhere and is a leftover, so let's remove it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 5a89479abe rgw: remove useless condition
The include does not need a condition on containerized_deployment since
we are already in an include that has the same condition.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Graeme Gillies a46025820d Allow mgr bootstrap keyring to be defined
In environments where we wish to have manual/greater control over
how the bootstrap keyrings are used, we need to be able to externally
define what the mgr keyring secret will be and have ceph-ansible
use it, instead of it being autogenerated.
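
A hedged sketch of supplying the secret externally; the variable name is an assumption, only the ceph-authtool call is a known command:

```
- name: generate a mgr bootstrap secret outside of ceph-ansible (sketch)
  set_fact:
    mgr_keyring_secret: "{{ lookup('pipe', 'ceph-authtool --gen-print-key') }}"
  run_once: true
```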

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610213

Signed-off-by: Graeme Gillies <ggillies@akamai.com>
2018-08-08 19:09:01 +00:00
Sébastien Han 19518656a7 Resync rhcs_edits.txt
We were missing an option so let's add it back.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1519835
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-08 15:53:52 +02:00
Christian Berendt 83edbcb5f2 docs: overall improvements
* PR -> pull request
* add 2018 to copyright
* add link to OFTC
* add missing ``
* add / remove some newlines
* ansible -> Ansible, ceph -> Ceph, docker -> Docker, jenkins -> Jenkins,
  osd -> OSD, python -> Python
* fix reference syntax
* improve some titles

Signed-off-by: Christian Berendt <berendt@b1-systems.de>
2018-08-08 11:42:28 +02:00
Sébastien Han 77d4023fbe test: follow up on osd_crush_location for containers
This was fixed by
578aa5c2d5
on non-container deployments; we need to apply the same fix for containers.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-07 16:20:13 +00:00
Sébastien Han 50be3fd9e8 test: remove osd_crush_location from shrink scenarios
This is not needed since this is already covered by docker_cluster and
centos_cluster scenarios.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-07 16:20:13 +00:00
Christian Berendt d168d81c16 docs: overall syntax improvements
* add some missing dots and ``
* add/remove line breaks
* consistent use of shell prompt in console outputs
* fix block indents
* use code blocks

Signed-off-by: Christian Berendt <berendt@b1-systems.de>
2018-08-07 10:33:48 +00:00
Artur Fijalkowski 52d9d406b1 Fix in regular expression matching OSD ID on non-containerized
deployment.
restart_osd_daemon.sh is used to discover and restart all OSDs on a
host. To do that, the script loops over the list of ceph-osd@ services on the
system. This commit fixes a bug in the regular expression responsible for
extracting the OSD IDs - the prior version used the `[0-9]{1,2}` expression,
which ignores all OSDs whose IDs are greater than 99 (thus
longer than 2 digits). The fix removes the upper limit on the number of
digits. This problem existed in two places in the script.

Closes: #2964

Signed-off-by: Artur Fijalkowski <artur.fijalkowski@ing.com>
2018-08-06 15:53:49 +00:00
Guillaume Abrioux 1164cdc002 iscsigw: install ceph-iscsi-cli package
Install ceph-iscsi-cli in order to provide the `gwcli` command tool.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1602785

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-06 14:11:52 +02:00
Guillaume Abrioux 0a6ff6bbf8 defaults: backward compatibility with fqdn deployments
This commit ensures we are backward compatible with fqdn deployments.
Since ceph-container enforces deployment to be done with the shortname, we
must keep backward compatibility with clusters already deployed with an
fqdn configuration.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-06 10:14:58 +00:00
Sébastien Han ea9e60d48d config: enforce socket name
This was introduced by
59ee2e8d3b
and made our socket checks impossible to run. The PID could be found,
but the cctid could not.
This happens during upgrades to mimic and on clusters running mimic.

So let's force the admin socket back to the way it was so we can properly
check for existing instances. The $cluster-$name.$pid.$cctid.asok form
is only needed when running multiple instances of the same daemon,
something ceph-ansible cannot do at the time of writing.
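
One hedged way to pin the socket name through ceph_conf_overrides; whether the actual fix went through the template or an override is not stated here:

```
ceph_conf_overrides:
  global:
    admin socket: /var/run/ceph/$cluster-$name.asok
```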

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610220
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-31 10:58:04 +02:00
Mike Christie 6f72f96dad igw: do not fail purge on rbd removal errors
Instead of failing the entire purge operation when the rbd command fails,
just log an error. This will allow the higher level target and config
cleanup to complete, and the user only has to manually delete the rbd
images.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-07-31 10:08:26 +02:00
Mike Christie d572a9a602 igw: fix image removal during purge
We were not passing the ceph conf info into the rbd image removal
command, so if the clustername was not the default, igw purge would fail
due to the rbd rm command failing.

This just fixes the bug by passing in the ceph conf info, which has the
clustername to use.

This fixes Red Hat bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1601949

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-07-31 10:08:26 +02:00
Sébastien Han b334cdcbe5 restapi: disable it when ceph version > luminous
The ceph-rest-api binary has been removed in mimic so we cannot deploy it
anymore. We just keep the role and the compatibility for existing users.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-30 13:18:30 +00:00
Sébastien Han 2ca8c51906 osd: do not remove expose_partition container
The container runs with --rm which means it will be deleted by Docker
when exiting. Also 'docker rm -f' is not idempotent and returns 1 if the
container does not exist.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1609007
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-30 10:38:15 +02:00
Sébastien Han 1f341e69d1 site: report ceph -s status at the end of the deployment
We now show the output of 'ceph -s'. Example output below:

TASK [display post install message] **********************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": [
        "  cluster:",
        "    id:     753212df-f32a-4cc9-a097-2db6fe89a251",
        "    health: HEALTH_OK",
        " ",
        "  services:",
        "    mon: 1 daemons, quorum ceph-nano-lul-faa32aebf00b",
        "    mgr: ceph-nano-lul-faa32aebf00b(active)",
        "    osd: 1 osds: 1 up, 1 in",
        " ",
        "  data:",
        "    pools:   4 pools, 32 pgs",
        "    objects: 224 objects, 2546 bytes",
        "    usage:   1027 MB used, 9212 MB / 10240 MB avail",
        "    pgs:     32 active+clean",
        " "
    ]
}

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1602910
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-27 14:49:42 +00:00
Guillaume Abrioux 578aa5c2d5 tests: leave an OSD node in default crush root
jewel used to create a default `rbd` pool in the default crush root
`default`; we need to have at least 1 osd to satisfy the PGs for this
created pool, otherwise the cluster will be in HEALTH_ERR state because
of `pgs stuck unclean`/`pgs stuck inactive`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux a1ca2c8fd3 iscsigw: do not run common roles when deploying jewel
Let's not deploy the common roles on iscsigw nodes for jewel deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux 1ecbbbdcfa rbd-mirror: bring back compatibility with jewel deployment
rbd-mirror can't start when deploying jewel because it needs the admin
keyring.
Bringing back this task restores backward compatibility for jewel
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux 053709da97 ceph-osds: backward compatibility with jewel for osp pools creation
If we want to be backward compatible with releases prior to luminous, we
have to set the rule name according to the default values used in jewel.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux 2597a557c5 client: fix an incorrect title in a task
This task would be run on both containerized *and* non-containerized
deployments.
Let's have a proper title to avoid confusion.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 15:57:41 +02:00
Arata Notsu 2bbb4acca6 site.yml.sample: fix install python2
Check `systempython2.stat` instead of `systempython2.stat.exists`.

Without this change, in the case that python2 is not installed, the `stat`
task fails without defining `systempython2.stat`. This leads the next
installation tasks to fail because `systempython2.stat` is undefined.

An example error output (edited for readability):

```
TASK [check for python2] ***********************************************
Wednesday 25 July 2018  14:52:47 +0900 (0:00:00.182)       0:00:00.182 *
fatal: [ceph-osd1.vlan221.vtj]: FAILED! => {
"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not
found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
...ignoring

TASK [install python2 for debian based systems] ************************
Wednesday 25 July 2018  14:51:00 +0900 (0:00:01.742)       0:00:01.926 *
fatal: [ceph-mon2]: FAILED! => {
"msg": "The conditional check 'systempython2.stat.exists is undefined or
systempython2.stat.exists == false' failed. The error was: error while
evaluating conditional (systempython2.stat.exists is undefined or
systempython2.stat.exists == false): 'dict object' has no attribute 'stat'
\n\n The error appears to have been in
'/Users/arata/git/ceph-ansible/site.yml.sample': line 36, column 7, but
may\n be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
    - name: install python2 for debian based systems\n
      ^ here\n
"}
...ignoring
```
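
A hedged sketch of the corrected conditional; the install command itself is illustrative:

```
- name: install python2 for debian based systems (sketch)
  raw: apt-get -y install python-simplejson
  when: systempython2.stat is undefined or
        systempython2.stat.exists is undefined or
        systempython2.stat.exists == false
  ignore_errors: true
```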

Fixes: #2930
Signed-off-by: Arata Notsu <arata776@gmail.com>
2018-07-25 16:59:37 +00:00
Sébastien Han e2ea5bac51 rgw: add more config option for civetweb frontend
In containerized deployments we now inherit the
radosgw_civetweb_options when bootstrapping the container.
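
A hedged group_vars sketch; the exact default format of radosgw_civetweb_options in ceph-ansible may differ, and request_timeout_ms is only an example civetweb option:

```
# group_vars/all.yml
radosgw_civetweb_port: 8080
radosgw_civetweb_num_threads: 512
radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }} request_timeout_ms=30000"
```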

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-25 13:19:14 +00:00
Giulio Fidente e85e5ea781 Run creation of empty rados index object to first monitor
When distributing the ceph-nfs role, creation of the rados index object
fails as it assumes availability of client.admin locally.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1607970
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
2018-07-25 11:40:11 +02:00
Guillaume Abrioux 0640e2aa1e main: update requirements.txt
update requirements.txt according to the latest ansible version tested
on master.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-24 18:23:04 +02:00
Sébastien Han 235d1b3f55 validate: add checks for interfaces
Check if the interface provided:

* exists in the gathered facts (thus on the system)
* is active
* has an IP address (depending on ip_version)
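
A hedged sketch of the first check, following the monitor_interface naming convention; the active and IP address checks are omitted for brevity:

```
- name: fail if the provided interface is not in the gathered facts (sketch)
  fail:
    msg: "{{ monitor_interface }} does not exist on {{ inventory_hostname }}"
  when:
    - monitor_interface is defined
    - monitor_interface not in ansible_interfaces
```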

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-24 17:59:30 +02:00
Sébastien Han b3266c5be2 rolling_update: set osd sortbitwise
upgrade RHCS 2 -> RHCS 3 will fail if the cluster still has
sortnibblewise set;
it stays stuck on "TASK [waiting for clean pgs...]" as RHCS 3 osds will
not start if nibblewise is set.
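
A minimal sketch of setting the flag from a monitor during the update; the delegation details are illustrative:

```
- name: set sortbitwise before restarting the osds (sketch)
  command: ceph --cluster {{ cluster | default('ceph') }} osd set sortbitwise
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```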

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600943
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-24 17:19:02 +02:00
Guillaume Abrioux 0a88bccf87 tests: followup on b89cc1746f
Update network subnets in group_vars/all

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-24 16:55:15 +02:00
Guillaume Abrioux b89cc1746f tests: do not deploy all daemons for shrink osds scenarios
Let's create a dedicated environment for these scenarios; there is no
need to deploy everything.
By the way, doing so will save some time.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-23 18:30:06 +02:00
Guillaume Abrioux af82e7523d tests: test master against ansible 2.6
Ansible 2.4 is currently end-of-life.
Ansible 2.5 will go end-of-life after Ansible 2.7 is released.

Fixes: #2901

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-23 11:59:15 +00:00
Guillaume Abrioux 0c863a3783 tests: add support of 'ooo-collocation' scenario when testing against ceph dev
The group_vars/all file is not available in the 'ooo-collocation' scenario;
it's making `dev_setup.yml` fail because this path is hardcoded.

The idea here is to check if the pattern 'ooo-collocation' is present in
the `change_dir` variable so we can set this path properly according to the
scenario being run.
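
A hedged sketch of that check inside dev_setup.yml; the fact name and the override file name are assumptions:

```
- name: pick the group_vars path according to the scenario being run (sketch)
  set_fact:
    group_vars_path: "{{ change_dir }}/{{ 'ceph-override.json' if 'ooo-collocation' in change_dir else 'group_vars/all' }}"
```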

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-20 07:47:33 +02:00