Commit Graph

3228 Commits (2bc58684e52d4082472ece0237bb7bab5ae315d8)
 

Author SHA1 Message Date
Sébastien Han 2bc58684e5 Merge pull request #2054 from jprovaznik/bindaddr
ceph-nfs - add bind address variable
2017-10-23 12:20:46 +02:00
Jan Provaznik 291e6b604d ceph-nfs - add bind address variable 2017-10-23 09:34:51 +02:00
Sébastien Han 54de2efc5d Merge pull request #2082 from ceph/restapi-cephconf
common: move restapi template to config
2017-10-20 14:07:48 +02:00
Sébastien Han 8016d5d1a5 Merge pull request #2076 from ceph/2.4-3.0-backward
[skip ci] all: backward compatibility between stable-2.2 and 3.0
2017-10-20 13:58:30 +02:00
Sébastien Han 4413511b66 all: backward compatibility between stable-2.2 and 3.0
stable-3.0 brought numerous changes to ceph-ansible variables. This PR
aims to maintain backward compatibility for someone running stable-2.2
who upgrades to stable-3.0 but keeps their group_vars untouched.
We will then determine the right options to make sure the upgrade works,
but we expect the new variables to be used.

We will drop this in the near future, maybe in 3.1 or 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
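A minimal sketch of the kind of aliasing such backward compatibility implies; the variable names (`mon_containerized_deployment`, `containerized_deployment`) and the task shape are illustrative assumptions, not taken from the commit.

```
# Assumption: map a deprecated stable-2.2 variable onto its stable-3.0
# replacement when the operator's group_vars still define the old name.
- name: map a legacy stable-2.2 variable to its stable-3.0 equivalent
  set_fact:
    containerized_deployment: "{{ mon_containerized_deployment }}"
  when: mon_containerized_deployment is defined
```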
Sébastien Han fccb9472cd mgr: force module addition
Some modules require --force to be enabled.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
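A hedged sketch of the forced enablement, assuming the luminous `ceph mgr module enable <name> --force` CLI; the task shape is an assumption, and `ceph_mgr_modules` refers to the module list added in the "Add ability to enable ceph mgr modules" change further down.

```
# Assumption: enable each requested mgr module, passing --force for modules
# that refuse to load without it.
- name: enable the requested ceph mgr modules
  command: "ceph --cluster {{ cluster }} mgr module enable {{ item }} --force"
  with_items: "{{ ceph_mgr_modules | default([]) }}"
  changed_when: false
```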
Sébastien Han ba5c6e66f0 common: move restapi template to config
Closes: github.com/ceph/ceph-ansible/issues/1981
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:14:13 +02:00
Guillaume Abrioux 5b1087f1e5 mgr: play 'enable modules' sequence only on luminous
This feature isn't available before luminous, therefore we need to play
these tasks only on luminous and later, otherwise the playbook will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3f3d4b9c727d06154c422d445fc2a245aceaed89)
2017-10-19 20:54:23 +02:00
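A minimal sketch of the release guard described above, reusing the condition quoted in the upgrade fixes below; the included file name is a hypothetical placeholder.

```
# Assumption: mgr_modules.yml is a placeholder for the tasks that enable modules.
- name: play the 'enable mgr modules' sequence only from luminous onwards
  include: mgr_modules.yml
  when: ceph_release_num[ceph_release] >= ceph_release_num.luminous
```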
Guillaume Abrioux 982326373b upgrade: fix upgrade jewel to luminous for nfs nodes
nfs nodes can't be upgraded from jewel to luminous because the ceph-nfs
role is skipped by the condition `when:
"ceph_release_num[ceph_release] >= ceph_release_num.luminous"`. Indeed,
the package is only upgraded inside the `ceph-nfs` role, so at that point
`ceph_release` is still set to the old version and the `when` condition
can't be satisfied.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-19 20:54:23 +02:00
Guillaume Abrioux 70034451e9 upgrade: fix upgrade jewel to luminous for mgr nodes
mgr nodes can't be upgraded from jewel to luminous because the ceph-mgr
role is skipped by the condition `when:
"ceph_release_num[ceph_release] >= ceph_release_num.luminous"`. Indeed,
the ceph-mgr package is only upgraded inside the `ceph-mgr` role, so at
that point `ceph_release` is still set to the old version and the `when`
condition can't be satisfied.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 302e563601cd6820b1ae44fabdfb1506688c7c9b)
2017-10-19 20:54:23 +02:00
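A hedged sketch of one way to address the problem described in these two commits, not necessarily the fix that was merged: re-derive `ceph_release` from the freshly upgraded binary so the luminous-only guard can match.

```
# Assumption: this is an illustrative approach, not the exact change.
- name: read the version of the freshly upgraded ceph packages
  command: ceph --version
  register: ceph_version_out
  changed_when: false

- name: refresh ceph_release once the binaries report luminous (12.x)
  set_fact:
    ceph_release: luminous
  when: "' 12.' in ceph_version_out.stdout"
```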
Sébastien Han acbdb7f4fe Merge pull request #2081 from ceph/fix_mgr_upgrade
tests: fix docker_collocation, change jewel upgrade
2017-10-19 20:53:43 +02:00
Guillaume Abrioux db6b93d441 tests: only test upgrade jewel > luminous
Since it has been decided to stop testing against kraken, we have to
test the upgrade from jewel to luminous instead of from kraken.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-19 17:00:27 +02:00
Guillaume Abrioux 93f0856770 tests: re-add missing param in tox
This line was removed by mistake in a53aa9e.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-19 17:00:27 +02:00
Sébastien Han c527515502 Merge pull request #2000 from ceph/merge-osd-scenarios
[skip ci] ci: new osd scenarios
2017-10-19 09:18:02 +02:00
Sébastien Han f047ad15f9 Merge pull request #2075 from ceph/fix_3a58757
[skip ci] mgr: fix broken task on jewel
2017-10-18 15:19:41 +02:00
Guillaume Abrioux ff228e2d88 mgr: fix broken task on jewel
3a58757 introduced an issue for jewel deployments: since this role is
skipped, `enabled_ceph_mgr_modules.stdout` doesn't exist, so the play
ends up with an attribute error.

Use `.get()` to retrieve `stdout` with a default value so it won't fail
if this attribute doesn't exist (jewel).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-18 14:11:46 +02:00
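A minimal sketch of the defensive lookup, assuming `enabled_ceph_mgr_modules` was registered by a task that is skipped on jewel; the surrounding task is illustrative.

```
# On jewel the registering task is skipped, so .stdout would raise an
# attribute error; .get() falls back to an empty string instead.
- name: show the currently enabled mgr modules (empty on jewel)
  debug:
    msg: "{{ enabled_ceph_mgr_modules.get('stdout', '') }}"
```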
Sébastien Han 1579f1c5b1 Merge pull request #2073 from ceph/fix_rbd_handler
[skip ci] rbd: fix restart script for jewel
2017-10-18 11:12:05 +02:00
Guillaume Abrioux c2850b11be rbd: fix restart script for jewel
In jewel, we don't use the bootstrap-rbd keyring for rbd-mirror nodes,
which results in a different socket path/name depending on which ceph
release you are deploying.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-18 11:10:49 +02:00
Sébastien Han 2936d152c9 Merge pull request #2053 from Fbrachere/mgr-modules
Add ability to enable ceph mgr modules.
2017-10-18 10:27:31 +02:00
Sébastien Han a53aa9e8b4 ci: new osd scenarios
This commit adds new OSD scenarios; it aims to simplify the CI setup and
bring better coverage of the OSD scenarios.
We decided to differentiate between filestore and bluestore, thinking
ahead to when filestore won't be supported anymore.
So we now have two classes of tests:

* Filestore
* Bluestore

In each of those classes we have container and non-container.
Then for each we test the following:

* collocated
* collocated dmcrypt
* non-collocated
* non-collocated dmcrypt
* auto discovery collocated
* auto discovery collocated dmcrypt

This gives us nice coverage and also reduces the footprint on the CI.
We are now up to 4 scenarios, each containing 6 OSD VMs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-18 09:26:06 +02:00
Sébastien Han 6320f3aac7 Merge pull request #2068 from ceph/fix-collocation2
defaults: fix handlers for collocation
2017-10-18 09:15:50 +02:00
Sébastien Han 90b75185d5 defaults: fix handlers for collocation
When doing collocation, the condition "inventory_hostname in play_hosts"
breaks the restart workflow.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 19:23:16 +02:00
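An illustrative handler shape for context; per the commit, guarding it with `inventory_hostname in play_hosts` breaks restarts on collocated hosts, so that check is left out here. The script path and `listen` topic are assumptions.

```
# Assumption: the restart logic lives in a script pushed to /tmp beforehand.
- name: restart ceph mon daemon(s)
  command: /usr/bin/env bash /tmp/restart_mon_daemon.sh
  listen: "restart ceph mons"
  when: mon_group_name in group_names
```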
Sébastien Han 897136b368 Merge pull request #2065 from ceph/rhcs-rm
[skip ci] rpm: remove ability to install ceph community version
2017-10-17 17:05:14 +02:00
Sébastien Han 013bc6454f Merge pull request #2050 from ceph/sort_dict
ceph-defaults: fix handlers that are always triggered
2017-10-17 15:23:34 +02:00
Guillaume Abrioux 2aa53fb0f5 Merge pull request #2055 from ceph/update-mirror-nfs
upgrade: support for rbd mirror and nfs
2017-10-17 14:51:39 +02:00
Sébastien Han 1cf39df5d4 Merge pull request #2062 from berendt/remove-readme-files-in-roles-directories
[skip ci] Cleanup readme files in roles directories
2017-10-17 11:55:48 +02:00
Sébastien Han c72ddee2d9 rpm: remove ability to install ceph community version
The downstream version of ceph-ansible could still trigger an install
from the upstream repo and import its keys.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1503019
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 11:49:41 +02:00
Christian Berendt 4c380c9ef8 Cleanup readme files in roles directories
The contents of the README files are no longer up to date.
Documentation for all roles is located below the docs directory.
2017-10-17 11:22:06 +02:00
Sébastien Han b511b8679e Merge pull request #2064 from berendt/sphinx-conf-do-not-set-release-version
Do not set release/version in sphinx configuration
2017-10-17 11:06:28 +02:00
Sébastien Han 55de29abfe Merge pull request #2063 from berendt/docs-index-syntax-issue
Add missing backticks in docs index
2017-10-17 11:06:02 +02:00
Christian Berendt 9a1e789a98 Add missing backticks in docs index 2017-10-17 11:00:02 +02:00
Christian Berendt 611138484d Do not set release/version in sphinx configuration 2017-10-17 10:55:54 +02:00
Sébastien Han d920d4839d upgrade: support for rbd mirror and nfs
- Add upgrade support for the rbd mirror and nfs daemons.
- Only works with systemd (removes the sysvinit and upstart occurrences);
  see the sketch below.
- A bit of cleanup.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 10:54:47 +02:00
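A hedged sketch of the systemd-only handling the first two bullets describe; the unit names (nfs-ganesha for the ceph-nfs role, a ceph-rbd-mirror@ instance for mirrors) are assumptions.

```
# Assumption: restart the upgraded daemons through systemd only.
- name: restart the nfs ganesha daemon after the upgrade
  systemd:
    name: nfs-ganesha
    state: restarted
    enabled: yes

- name: restart the rbd mirror daemon after the upgrade
  systemd:
    name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_hostname }}"
    state: restarted
    enabled: yes
```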
Sébastien Han 39bf102b64 switch: nicer way to check mon quorum
Re-use the same syntax as rolling_update.yml.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 10:54:36 +02:00
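A hedged sketch of a quorum check in the style of rolling_update.yml; the exact task in the commit may differ.

```
# Assumption: poll `ceph -s` until this monitor shows up in quorum_names.
- name: wait for the monitor to join the quorum
  command: "ceph --cluster {{ cluster }} -s --format json"
  register: ceph_health_raw
  until: "ansible_hostname in (ceph_health_raw.stdout | from_json)['quorum_names']"
  retries: 5
  delay: 10
  changed_when: false
```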
Sébastien Han ddb878d938 Merge pull request #2057 from berendt/gh2056
In docker start scripts replace \u00a0 with \u0020
2017-10-16 17:49:11 +02:00
Christian Berendt cf901f0171 In docker start scripts replace \u00a0 with \u0020
This will solve the following issue when starting docker containers on ubuntu:

invalid argument "1\u00a0" for --cpus=1 : failed to parse 1  as a rational number

Closes-bug: #2056
2017-10-16 15:16:48 +02:00
Fabien Brachere 3a587575d7 Add ability to enable ceph mgr modules. 2017-10-16 15:04:23 +02:00
Guillaume Abrioux 7ee9aa94b5 Merge pull request #1963 from ceph/pull-in-para
site-docker.yml try to fetch images in //
2017-10-13 19:35:11 +02:00
Guillaume Abrioux 46774207b7 Merge pull request #2051 from ceph/mds-fs
mds: fix fs pool creation
2017-10-13 19:34:06 +02:00
Sébastien Han af10fc87d7 Merge pull request #2052 from ceph/fix-contriub-tag
[skip ci] contrib: galaxy fix the array
2017-10-13 17:20:33 +02:00
Sébastien Han d5358bd20c contrib: galaxy fix the array
Fix the array where we build the list of tags that are newer and
therefore need to be pushed.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 17:09:58 +02:00
Guillaume Abrioux ec042219e6 ceph-defaults: fix handlers that are always triggered
Handlers are always triggered in ceph-ansible because the ceph.conf file
is generated with a random order for the different key/value pairs in
sections.

In Python, a dict is not sorted. In our case it means that each time we
try to generate the ceph.conf file it will be rendered with a random
order, since the mechanism behind it consists of rendering a file from a
Python dict of keys/values. Therefore, as a quick workaround, forcing
this dict to be sorted before rendering the configuration file ensures
that it will always be rendered the same way.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-13 16:15:27 +02:00
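A minimal sketch of the idea; the real change lives in the configuration templating, so the task below is only an illustration of sorting the dict before it is rendered.

```
# dictsort yields (key, value) pairs in a deterministic, sorted order.
- name: preview the [global] section in a stable order
  debug:
    msg: "{{ item.0 }} = {{ item.1 }}"
  with_items: "{{ ceph_conf_overrides.get('global', {}) | dictsort }}"
```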
Sébastien Han 71d819620c mds: fix fs pool creation
1. add the variables to docker_collocation
2. trigger the check when an MDS is part of the inventory file, not when
   we run on an MDS (see the sketch below)...

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 16:03:04 +02:00
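An illustrative contrast for the second point; the included file name is hypothetical. The check is gated on an MDS being present in the inventory rather than on the current host being an MDS.

```
# Assumption: create_mds_filesystems.yml is a placeholder for the pool checks.
- name: run the fs pool checks when any MDS is part of the inventory
  include: create_mds_filesystems.yml
  when: groups.get(mds_group_name, []) | length > 0
```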
Sébastien Han 90ce4276ca ci: use a container client VM
The client won't run on CentOS 7 anymore but on an Atomic host, just
like the rest of the daemons.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 15:26:03 +02:00
Sébastien Han b34a04ea41 site-docker.yml try to fetch images in //
The container deployment is serialized, so this task is added as a best
effort. If docker is already present we pull the image; otherwise we
wait for the role to play.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 11:24:40 +02:00
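A hedged sketch of the best-effort pre-pull; the image variables are the usual ceph-ansible ones, but the task shape is an assumption.

```
# Best effort: only pre-pull when docker is already installed, never fail.
- name: check whether docker is already installed
  command: which docker
  register: docker_binary
  failed_when: false
  changed_when: false

- name: pre-fetch the ceph container image
  command: "docker pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  when: docker_binary.rc == 0
  failed_when: false
```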
Guillaume Abrioux 80c62e05e7 Merge pull request #2045 from ceph/reboot
ci: reboot with ansible instead of vagrant reload
2017-10-13 10:43:19 +02:00
Guillaume Abrioux 7d4b3f9989 Merge pull request #2047 from ceph/enable_ceph-rbd-mirror.target
rbd-mirror: enable ceph-rbd-mirror.target
2017-10-13 10:34:10 +02:00
Sébastien Han f7832e5eb9 Merge pull request #2031 from major/simplify-ntp
Simplify NTP checks/install
2017-10-13 09:16:20 +02:00
Sébastien Han 3e058bff06 ci: reboot with ansible instead of vagrant reload
vagrant is serialized and takes a lot of time compared to a simple
reboot. See the benchmarks below for 3 VMs:

[leseb@rick docker]$ time ANSIBLE_SSH_ARGS="-F
/home/leseb/reproduce-ci/tmp.zgGC7d5mIC/build/workspace/ceph-ansible/tests/functional/centos/7/docker/vagrant_ssh_config"  ansible-playbook -i /home/leseb/reproduce-ci/tmp.zgGC7d5mIC/build/workspace/ceph-ansible/tests/functional/centos/7/docker/hosts reboot.yml

PLAY [mons]
****************************************************************************************************************************************************************************************************

TASK [Gathering Facts]
*****************************************************************************************************************************************************************************************
ok: [mon1]
ok: [mon2]
ok: [mon0]

TASK [restart machine]
*****************************************************************************************************************************************************************************************
changed: [mon2]
changed: [mon1]
changed: [mon0]

TASK [wait for server to boot]
*********************************************************************************************************************************************************************************
ok: [mon2 -> localhost]
ok: [mon0 -> localhost]
ok: [mon1 -> localhost]

TASK [uptime]
**************************************************************************************************************************************************************************************************
changed: [mon2]
changed: [mon0]
changed: [mon1]

PLAY RECAP
*****************************************************************************************************************************************************************************************************
mon0                       : ok=4    changed=2    unreachable=0
failed=0
mon1                       : ok=4    changed=2    unreachable=0
failed=0
mon2                       : ok=4    changed=2    unreachable=0
failed=0

real    0m35.112s
user    0m5.737s
sys     0m1.849s

[leseb@rick docker]$ time vagrant reload
==> mon0: Halting domain...
==> mon0: Starting domain.
==> mon0: Waiting for domain to get an IP address...
==> mon0: Waiting for SSH to become available...
==> mon0: Creating shared folders metadata...
==> mon0: Rsyncing folder:
/home/leseb/reproduce-ci/tmp.zgGC7d5mIC/build/workspace/ceph-ansible/tests/functional/centos/7/docker/
=> /home/vagrant/sync
==> mon0: Machine already provisioned. Run `vagrant provision` or use
the `--provision`
==> mon0: flag to force provisioning. Provisioners marked to run always
will still run.
==> mon1: Halting domain...
==> mon1: Starting domain.
==> mon1: Waiting for domain to get an IP address...
==> mon1: Waiting for SSH to become available...
==> mon1: Creating shared folders metadata...
==> mon1: Rsyncing folder:
/home/leseb/reproduce-ci/tmp.zgGC7d5mIC/build/workspace/ceph-ansible/tests/functional/centos/7/docker/
=> /home/vagrant/sync
==> mon1: Machine already provisioned. Run `vagrant provision` or use
the `--provision`
==> mon1: flag to force provisioning. Provisioners marked to run always
will still run.
==> mon2: Halting domain...
==> mon2: Starting domain.
==> mon2: Waiting for domain to get an IP address...
==> mon2: Waiting for SSH to become available...
==> mon2: Creating shared folders metadata...
==> mon2: Rsyncing folder:
/home/leseb/reproduce-ci/tmp.zgGC7d5mIC/build/workspace/ceph-ansible/tests/functional/centos/7/docker/
=> /home/vagrant/sync
==> mon2: Machine already provisioned. Run `vagrant provision` or use
the `--provision`
==> mon2: flag to force provisioning. Provisioners marked to run always
will still run.

real    1m31.850s
user    0m7.387s
sys     0m0.796s

Reboot via Ansible: 0m35.112s
Reboot via vagrant: 1m31.850s

Rebooting via Ansible takes roughly a third of the time.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 09:04:26 +02:00
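A hedged sketch of a reboot.yml matching the task names in the transcript above; the task bodies are assumptions.

```
# Assumption: async shutdown, then wait for SSH from the control node.
- hosts: mons
  gather_facts: true
  tasks:
    - name: restart machine
      shell: sleep 2 && shutdown -r now "Ansible reboot"
      async: 1
      poll: 0
      ignore_errors: true

    - name: wait for server to boot
      become: false
      local_action:
        module: wait_for
        host: "{{ ansible_ssh_host | default(inventory_hostname) }}"
        port: 22
        delay: 10
        timeout: 300

    - name: uptime
      command: uptime
```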
Guillaume Abrioux 59ca1065e9 rbd-mirror: enable ceph-rbd-mirror.target
On jewel, `ceph-rbd-mirror.target` isn't enabled; therefore, if the node
is rebooted, the service doesn't get started.

From the ceph-rbd-mirror unit file:
```
[Install]
WantedBy=ceph-rbd-mirror.target
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-13 08:27:43 +02:00
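A minimal sketch of the fix described above: make sure the target is enabled (and started) so the rbd-mirror service comes back after a reboot on jewel.

```
- name: enable ceph-rbd-mirror.target
  systemd:
    name: ceph-rbd-mirror.target
    state: started
    enabled: yes
```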