Commit Graph

3879 Commits (2e6e885bb75156c74735a65c05b4757b031041bb)

Author SHA1 Message Date
Sébastien Han 2e6e885bb7 rolling_upgrade: set sortbitwise properly
Running 'osd set sortbitwise' when we detect version 12 of Ceph is
wrong. While the OSDs are being updated, even though the package is
updated they won't report their new version (12) and will stick with 10
until the command is applied. So we have to check whether the OSDs are
still reporting version 10 and then run the command to unlock them.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600943
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-21 12:22:32 +00:00
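A minimal sketch of the resulting check, assuming a jewel-to-luminous rolling upgrade; the task names and the exact version test are illustrative, not the playbook's verbatim code:

```
# Hedged sketch: only run 'osd set sortbitwise' once OSDs still
# reporting a jewel (10.x) version are detected during the upgrade.
- name: get the versions reported by the ceph daemons
  command: ceph --cluster {{ cluster }} versions
  register: ceph_versions
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: false

- name: set sortbitwise to unlock OSDs still reporting version 10
  command: ceph --cluster {{ cluster }} osd set sortbitwise
  delegate_to: "{{ groups[mon_group_name][0] }}"
  when: "'ceph version 10' in ceph_versions.stdout"
```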
Sébastien Han 77a3a682f3 iscsi group name preserve backward compatibility
We recently renamed the group_name for iscsi to iscsigws where
previously it was named iscsi-gws. Existing deployments with a host file
section named iscsi-gws must continue to work.

This commit adds the old group name back for backward compatibility; no
error from Ansible should be expected, and if the hostgroup is not found
nothing is played.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1619167
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-20 23:52:19 +02:00
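For illustration, a play kept for the legacy group name is harmless: when the inventory has no [iscsi-gws] section, it simply matches no hosts. This is a sketch, not the exact site.yml change:

```
# Hedged sketch: a legacy play for the old group name; with no
# [iscsi-gws] hosts in the inventory, nothing is played.
- hosts: iscsi-gws
  gather_facts: false
  become: true
  roles:
    - ceph-iscsi-gw
```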
Sébastien Han 8c70a5b197 osd: fix ceph_release
We need ceph_release in the condition, not ceph_stable_release.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1619255
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-20 20:14:56 +02:00
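Roughly, ceph_stable_release reflects the release the user configured while ceph_release reflects the one actually detected, so the condition should read something like this (the surrounding task is omitted; a sketch, not the verbatim diff):

```
# Illustrative condition using the detected release rather than the
# user-configured one.
when: ceph_release_num[ceph_release] >= ceph_release_num['luminous']
```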
Sébastien Han b738706810 take-over-existing-cluster: do not call var_files
We were using var_files long ago when default variables were not in
ceph-defaults; now that the role exists this is no longer needed.
Moreover, having these two var files added:

- roles/ceph-defaults/defaults/main.yml
- group_vars/all.yml

will create collisions and override necessary variables.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1555305
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-20 14:47:04 +02:00
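The pattern being removed looks roughly like this (Ansible's actual directive is spelled vars_files); dropping it lets the ceph-defaults role supply the defaults without collisions:

```
# Hedged sketch of the removed block: loading both files on top of the
# role defaults overrides variables the play needs.
- hosts: mons
  vars_files:
    - roles/ceph-defaults/defaults/main.yml
    - group_vars/all.yml
```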
Markos Chandras 126e2e3f92 roles: ceph-defaults: Check if 'rgw' attribute exists for rgw_hostname
If there are no services on the cluster, then the 'rgw' attribute could
be missing and the task fails with the following problem:

msg: "The task includes an option with an undefined variable.
The error was: 'dict object' has no attribute 'rgw'"

We fix this by checking the existence of the 'rgw' attribute. If it's
missing, we skip the task since the role already contains code to set
a good default rgw_hostname.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-20 11:37:45 +02:00
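A minimal sketch of such a guard, with illustrative variable names (the real task may differ):

```
# Hedged sketch: only derive rgw_hostname from the servicemap when the
# 'rgw' service is actually present.
- name: set rgw_hostname from the servicemap
  set_fact:
    rgw_hostname: "{{ ceph_current_status.servicemap.services.rgw.daemons.keys() | list | first }}"
  when:
    - ceph_current_status.servicemap is defined
    - ceph_current_status.servicemap.services is defined
    - ceph_current_status.servicemap.services.rgw is defined
```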
Markos Chandras 37e50114de roles: ceph-defaults: Delegate cluster information task to monitor node
Since commit f422efb1d6 ("config: ensure
rgw section has the correct name") we observe the following failures in
new Ceph deployment with OpenStack-Ansible

fatal: [aio1_ceph-rgw_container-fc588f0a]: FAILED! => {"changed": false,
"cmd": "ceph --cluster ceph -s -f json", "msg": "[Errno 2] No such file
or directory"}

This is because the task executes 'ceph' but at this point no package
installation has happened. Packages are normally installed in the
'ceph-common' role which runs after the 'ceph-defaults' one.

Since we are looking to obtain cluster information, the task should be
delegated to a monitor node, similar to other tasks in that role.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-20 11:37:45 +02:00
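A hedged sketch of the delegation; the task shape follows the error above, and the variable names are ceph-ansible defaults:

```
# Hedged sketch: run the status query on a monitor, where the ceph CLI
# is guaranteed to be installed, instead of on the rgw node.
- name: get current ceph status
  command: ceph --cluster {{ cluster }} -s -f json
  register: ceph_current_status
  delegate_to: "{{ groups[mon_group_name][0] }}"
  changed_when: false
```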
Dardo D Kleiner f6519e4003 mgr: improve/fix disabled modules check
Follow up on 36942af698

"disabled_modules" is always a list, it's the items in the list that
can be dicts in mimic.  Many ways to fix this, here's one.

Signed-off-by: Dardo D Kleiner <dardokleiner@gmail.com>
2018-08-20 11:23:58 +02:00
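One possible shape of the fix, sketched under the assumption that disabled_modules holds either plain names (luminous) or dicts with a 'name' key (mimic); the fact name is hypothetical:

```
# Hedged sketch: normalize disabled_modules to a flat list of names.
- name: normalize the list of disabled mgr modules
  set_fact:
    _disabled_mgr_modules: >-
      {{ disabled_modules | map(attribute='name') | list
         if disabled_modules | length > 0 and disabled_modules[0] is mapping
         else disabled_modules }}
```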
Andrew Schoen 04df3f0802 lv-create: use copy instead of the template module
The copy module does in fact do variable interpolation, so we do not
need to use the template module or keep a template in the source.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
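For illustration, module arguments are templated before execution, so a copy task with inline content already expands the variables; the names below are assumptions:

```
# Hedged sketch: the content string is rendered by Ansible's templating,
# so no separate .j2 template file is required.
- name: write the suggested osds.yml snippet
  copy:
    content: "{{ logfile }}"
    dest: "{{ logfile_path }}"
  delegate_to: localhost
```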
Andrew Schoen f5a4c89869 tests: cat the contents of lv-create.log in infra_lv_create
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen 131796f275 lv-create: add an example logfile_path config option in lv_vars.yml
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen 810cc47892 tests: adds a testing scenario for lv-create and lv-teardown
Using an explicitly named testing environment allows us to have a
specific [testenv] block for this test. This greatly simplifies how it
will work, as it doesn't really need anything from the ceph cluster tests.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen b0bfc17351 lv-teardown: fail silently if lv_vars.yml is not found
This allows the user to opt out of using lv_vars.yml and load
configuration from other sources.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
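A minimal sketch of failing silently on the include (the exact task may differ):

```
# Hedged sketch: don't abort when lv_vars.yml is absent; variables can
# then come from the inventory or from -e on the command line.
- name: include lv_vars.yml if present
  include_vars: lv_vars.yml
  failed_when: false
```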
Andrew Schoen 8424858b40 lv-teardown: set become: true at the playbook level
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen e43eec57bb lv-create: fail silently if lv_vars.yml is not found
If a user decides not to use the lv_vars.yml file, it should fail
silently so that configuration can be picked up from other places.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen fde47be13c lv-create: set become: true at the playbook level
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
Andrew Schoen 35301b35af lv-create: use the template module to write log file
The copy module will not expand the template and render the variables
included, so we must use template.

Creating a temp file and using it locally means that you must run the
playbook with sudo privileges, which I don't think we want to require.
This introduces a logfile_path variable that the user can use to control
where the logfile is written to, defaulting to the cwd.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-16 16:38:23 +02:00
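A hedged sketch of the approach; the template filename and the default value are assumptions:

```
# Hedged sketch: render the log through the template module so the
# variables inside it are expanded; logfile_path defaults to the cwd.
- name: write the lv-create log file
  template:
    src: logfile.j2            # hypothetical template name
    dest: "{{ logfile_path | default('lv-create.log') }}"
  delegate_to: localhost
```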
Neha Ojha 909b38da82 infrastructure-playbooks/vars/lv_vars.yaml: minor fixes
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha f65f3ea89f infrastructure-playbooks/lv-create.yml: use tempfile to create logfile
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha 65fdad0723 infrastructure-playbooks/lv-create.yml: add lvm_volumes to suggested paste
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha 50a6d8141c infrastructure-playbooks/lv-create.yml: copy without using a template file
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha 186c4e11c7 infrastructure-playbooks/lv-create.yml: don't use action to copy
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha 9d43806df9 infrastructure-playbooks: standardize variable usage with a space after brackets
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Neha Ojha e0293de3e7 vars/lv_vars.yaml: remove journal_device
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-08-16 16:38:23 +02:00
Ali Maredia 1f018d8612 infrastructure-playbooks: playbooks for creating LVs for bucket indexes and journals
These playbooks create and tear down logical
volumes for OSD data on HDDs and for a bucket index and
journals on 1 NVMe device.

Users should follow the guidelines set in var/lv_vars.yaml.

After the lv-create.yml playbook is run, output is
sent to /tmp/logfile.txt for copy and paste into
osds.yml

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2018-08-16 16:38:23 +02:00
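For illustration, the pasted output follows the usual lvm_volumes shape, with data LVs on the HDDs and journal LVs on the shared NVMe device; the LV/VG names below are hypothetical:

```
# Hedged sketch of the osds.yml entries suggested by lv-create.yml.
lvm_volumes:
  - data: data-lv1
    data_vg: vg-hdd1
    journal: journal-lv1
    journal_vg: vg-nvme
  - data: data-lv2
    data_vg: vg-hdd2
    journal: journal-lv2
    journal_vg: vg-nvme
```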
Sébastien Han dad10e8f3f rolling_update: register container osd units
Before running the upgrade, let's call systemd to collect unit names
instead of relying on the device list. This is more accurate and fixes
the osd_auto_discovery scenario too.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613626
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-16 11:13:12 +02:00
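A hedged sketch of collecting the unit names; the exact shell pipeline is an assumption:

```
# Hedged sketch: ask systemd for the running ceph-osd@* units instead
# of deriving them from the devices list.
- name: collect running ceph-osd container unit names
  shell: systemctl list-units --no-legend 'ceph-osd@*' | awk '{print $1}'
  register: osd_units
  changed_when: false
```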
Sébastien Han 3149b2564f Revert "osd: generate device list for osd_auto_discovery on rolling_update"
This reverts commit e84f11e99e.

This commit was causing a new failure later during the rolling_update
process. Basically, it was modifying the list of devices and started
impacting the ceph-osd role itself. The modification to accommodate the
osd_auto_discovery parameter should happen outside of the ceph-osd role.

Also, we are trying not to play the ceph-osd role during the
rolling_update process so we can speed up the upgrade.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-16 11:13:12 +02:00
Jeffrey Zhang 85cc61a6d9 Use /var/lib/ceph/osd folder to filter osd mount point
In some cases, a user may mount a partition to /var/lib/ceph; unmounting
it would then fail, and there is no need to do so anyway.

Signed-off-by: Jeffrey Zhang <zhang.lei.fly@gmail.com>
2018-08-14 13:00:24 +00:00
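For illustration, filtering on the /var/lib/ceph/osd prefix leaves a user-mounted /var/lib/ceph alone; the filter expression is an assumption:

```
# Hedged sketch: only unmount OSD mount points, never /var/lib/ceph
# itself.
- name: unmount osd directories
  mount:
    path: "{{ item.mount }}"
    state: unmounted
  loop: "{{ ansible_mounts | selectattr('mount', 'match', '^/var/lib/ceph/osd') | list }}"
```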
Markos Chandras 7172737f13 roles: ceph-defaults: Set ceph_uid on SUSE distributions
The ceph_uid is also '167' on SUSE systems so extend the existing task.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-13 19:02:57 +00:00
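A minimal sketch of the extended task; the condition shown is an assumption (the real task may key off the container image instead):

```
# Hedged sketch: the ceph user is uid 167 on both Red Hat and SUSE
# based systems.
- name: set ceph_uid on red hat or suse based systems
  set_fact:
    ceph_uid: 167
  when: ansible_os_family in ['RedHat', 'Suse']
```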
Guillaume Abrioux 36942af698 mgr: backward compatibility for module management
Follow up on 3abc253fec

The structure even changed within the `luminous` release.
It was first:

```
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}
```
Then it changed to:

```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      "balancer",
      "dashboard",
      "influx",
      "localpool",
      "prometheus",
      "restful",
      "selftest",
      "zabbix"
  ]
}
```

and finally:
```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      {
          "name": "balancer",
          "can_run": true,
          "error_string": ""
      },
      {
          "name": "dashboard",
          "can_run": true,
          "error_string": ""
      }
  ]
}
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 13:25:06 +00:00
Guillaume Abrioux 8b5e3cd999 validate: fail if fqdn deployment attempted
The fqdn configuration possibility caused a lot of trouble; it adds a
lot of complexity because of the multiple cases and the relation between
ceph-ansible and ceph-container. Moreover, there is no benefit to such
a feature.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613155

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
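A hedged sketch of such a validation; the exact check in the role may differ, and monitor_name defaults to ansible_hostname in ceph-ansible:

```
# Hedged sketch: abort early when an fqdn is used where a shortname is
# expected.
- name: fail if fqdn deployment is attempted
  fail:
    msg: "fqdn deployments are not supported, use short hostnames"
  when: "'.' in monitor_name | default(ansible_hostname)"
```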
Guillaume Abrioux f422efb1d6 config: ensure rgw section has the correct name
The ceph.conf.j2 template always assumes the hostname used to register
the radosgw in the servicemap is equivalent to `{{ ansible_hostname }}`,
which returns the shortname form.

We need to detect which form of the hostname was used in the case of an
already deployed cluster and update the ceph.conf accordingly.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1580408

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
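A hedged sketch of the detection; the variable names are assumptions:

```
# Hedged sketch: keep whichever hostname form is already registered in
# the servicemap so the rgw section name stays stable.
- name: set rgw_hostname to the form present in the servicemap
  set_fact:
    rgw_hostname: "{{ ansible_fqdn if ansible_fqdn in rgw_daemons else ansible_hostname }}"
  vars:
    rgw_daemons: "{{ ceph_current_status.servicemap.services.rgw.daemons.keys() | list }}"
```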
Guillaume Abrioux db29b5b84d config: clean template, remove useless conditions
there is no need to have all these conditions.

for instance, assuming `mds_group_name` is set to 'mdss':

  - `if groups[mds_group_name] is defined` checks if `'mdss'` is present in `{{ groups }}`

  - `if {{ mds_group_name }} in group_names` checks if the current node is part
  of the group `'mdss'`

  - `if inventory_hostname in groups.get(mds_group_name, [])` checks if
  the current node is part of the group 'mdss'

The third condition is enough to ensure we are
running on an mds node.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux 4522dbfc74 doc: update ansible supported version matrix.
Closes: #2989

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Sébastien Han 4c9e24a90f mon: fix calamari initialisation
If calamari is already installed and ceph has been upgraded to a higher
version, the initialisation will fail later. So if we detect that the
calamari-server is too old compared to ceph_rhcs_version, we try to
update it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1601755
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 14:14:23 +02:00
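A hedged sketch of the update step; the registered version fact is hypothetical:

```
# Hedged sketch: upgrade calamari-server when it lags behind the
# configured RHCS version.
- name: update calamari-server if it is too old
  package:
    name: calamari-server
    state: latest
  when: calamari_version.stdout is version(ceph_rhcs_version | string, '<')
```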
Andrew Schoen 6423ab4ad3 lvm: fix condition when selecting which scenario to run
devices and lvm_volumes will always be defined, so we need to instead
check their length before deciding to run the scenario.

This fixes the failure here:
https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-bluestore_lvm_osds/86/consoleFull#1667273050b5dd38fa-a56e-4233-a5ca-584604e56e3a

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-10 11:46:12 +02:00
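For illustration, the conditions become length checks rather than `is defined` checks; the included file names are hypothetical:

```
# Hedged sketch: devices/lvm_volumes are always defined (possibly
# empty), so select the scenario on their length.
- name: use the lvm batch scenario
  include_tasks: batch.yml
  when: devices | length > 0

- name: use the lvm_volumes scenario
  include_tasks: lvm_volumes.yml
  when:
    - devices | length == 0
    - lvm_volumes | length > 0
```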
Sébastien Han e84f11e99e osd: generate device list for osd_auto_discovery on rolling_update
rolling_update relies on the list of devices when performing the restart
of the OSDs. The task that is building the devices list out of the
ansible_devices dict only runs when there are no partitions on the
drives. However, during an upgrade the OSDs are already configured: they
have been prepared and have partitions, so this task won't run and thus
the devices list will be empty, skipping the restart during
rolling_update. We now run the same task under different requirements
when rolling_update is true and build a list when:

* osd_auto_discovery is true
* rolling_update is true
* ansible_devices exists
* no dm/lv are part of the discovery
* the device is not removable
* the device has more than 1 sector

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613626
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 09:19:40 +02:00
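A hedged sketch of building that list under the conditions above; the exact filters in the playbook may differ:

```
# Hedged sketch: rebuild devices from ansible_devices during
# rolling_update, skipping dm/lv holders, removable media and
# zero-sector devices.
- name: build devices list for osd_auto_discovery
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when:
    - rolling_update | bool
    - osd_auto_discovery | bool
    - item.value.removable == "0"
    - item.value.sectors != "0"
    - item.value.holders | length == 0
    - "'dm-' not in item.key"
```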
Andrew Schoen e15c61b601 updates group_vars/osds.yml.sample to inlude crush_device_class
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 68d929299a ceph-volume: docs for "lvm batch" support
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 647bbd8f1e tests: adds crush_device_class to lvm-batch scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 3592c68cca ceph-osd: adds crush_device_class config option
This is used with the lvm osd scenario. When using devices you need the
option to set the crush device class for all of the OSDs that are
created from those devices.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
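For illustration, an osds.yml snippet using the new option; the device paths are hypothetical:

```
# Hedged sketch: every OSD created from these devices gets the same
# crush device class.
osd_scenario: lvm
devices:
  - /dev/sda
  - /dev/sdb
crush_device_class: ssd
```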
Andrew Schoen 6d431ec22d ceph-volume: implement the 'lvm batch' subcommand
This adds the action 'batch' to the ceph-volume module so that we can
run the new 'ceph-volume lvm batch' subcommand. A functional test is
also included.

If devices is defined and osd_scenario is lvm, then the 'ceph-volume lvm
batch' command will be used to create the OSDs.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
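For illustration, the underlying command the module's 'batch' action wraps looks like this (flags per the ceph-volume CLI; device paths hypothetical):

```
# Hedged sketch: what 'action: batch' boils down to when devices is
# defined and osd_scenario is lvm.
- name: batch-create OSDs from the raw devices
  command: ceph-volume lvm batch --yes /dev/sda /dev/sdb
```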
Sébastien Han 4d64dd4686 rgw: ability to use ceph-ansible vars into containers
Since the container now simply reads the ceph.conf, we remove all the
unnecessary options.

Also, this PR is the foundation to support multiple backends, such as
the new 'beast' from Ceph Mimic.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 3bce117de2 rgw: remove unused file
copy_configs.yml was not included anywhere and is a leftover, so let's remove it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 5a89479abe rgw: remove useless condition
The include does not need a condition on containerized_deployment since
we are already in an include that has the same condition.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Graeme Gillies a46025820d Allow mgr bootstrap keyring to be defined
In environments where we wish to have manual/greater control over
how the bootstrap keyrings are used, we need to be able to externally
define what the mgr keyring secret will be and have ceph-ansible
use it, instead of it being autogenerated.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610213

Signed-off-by: Graeme Gillies <ggillies@akamai.com>
2018-08-08 19:09:01 +00:00
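A hedged sketch of supplying the secret externally; both variable names below are hypothetical:

```
# Hedged sketch: feed a pre-generated secret to ceph-ansible instead of
# letting it autogenerate one, e.g. from an encrypted vars file.
mgr_secret: "{{ vault_mgr_secret }}"
```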
Sébastien Han 19518656a7 Resync rhcs_edits.txt
We were missing an option so let's add it back.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1519835
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-08 15:53:52 +02:00
Christian Berendt 83edbcb5f2 docs: overall improvements
* PR -> pull request
* add 2018 to copyright
* add link to OFTC
* add missing ``
* add / remove some newlines
* ansible -> Ansible, ceph -> Ceph, docker -> Docker, jenkins -> Jenkins,
  osd -> OSD, python -> Python
* fix reference syntax
* improve some titles

Signed-off-by: Christian Berendt <berendt@b1-systems.de>
2018-08-08 11:42:28 +02:00
Sébastien Han 77d4023fbe test: follow up on osd_crush_location for containers
This was fixed by
578aa5c2d5
on non-container deployments; we need to apply the same fix for containers.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-07 16:20:13 +00:00
Sébastien Han 50be3fd9e8 test: remove osd_crush_location from shrink scenarios
This is not needed since this is already covered by docker_cluster and
centos_cluster scenarios.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-07 16:20:13 +00:00
Christian Berendt d168d81c16 docs: overall syntax improvements
* add some missing dots and ``
* add/remove line breaks
* consistent use of shell prompt in console outputs
* fix block indents
* use code blocks

Signed-off-by: Christian Berendt <berendt@b1-systems.de>
2018-08-07 10:33:48 +00:00