Commit Graph

102 Commits (0e4fde4b6c36a411aacf5fe9949938d0b257424d)

Author SHA1 Message Date
Daniel Marks 77edd3d40a Fixing tabs that are breaking the syntax check
With the merge of PR #1336 the syntax check fails. This commit replaces
the tabs with proper indentation.
2017-03-15 14:15:15 +01:00
Sébastien Han 38ab6de602 Merge pull request #1336 from WingkaiHo/master
Load a variable file for devices partition
2017-03-15 11:55:26 +01:00
Sébastien Han 8320c14191 Merge pull request #1317 from ibotty/harmonize-docker-names
harmonize docker names
2017-03-14 18:20:20 +01:00
Andrew Schoen e81d690aa0 switch-to-containers: do not include group vars or role defaults
Doing so will override any values set for these in the group_vars
directory relative to the user's inventory.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:09 -06:00
Andrew Schoen cf702b05cf purge-docker-cluster: do not include role defaults or group vars
Doing so at playbook level overrides whatever values might be set for
these in the user's group_vars directory that's relative to their
inventory.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:09 -06:00
Andrew Schoen aef54d89d9 switch-to-containers: do not set group name vars at playbook level
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:09 -06:00
Andrew Schoen 7289acb6b3 purge-docker-cluster: do not set group name vars at playbook level
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:08 -06:00
Andrew Schoen 46f26bec13 rolling-update: do not set group name vars at playbook level
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:08 -06:00
Andrew Schoen 4fe6607004 purge-cluster: do not set group name vars at playbook level
This has the behavior of overriding custom values set in group_vars.
I've added defaults to the rest of the group names so that if they are
not overridden in group_vars then defaults will be used.

See: https://bugzilla.redhat.com/show_bug.cgi?id=1354700

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-03-08 08:57:08 -06:00
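
A minimal sketch of the default-filter pattern this commit describes; the group variable and the 'mons' default are shown as assumptions for illustration:

    - hosts: "{{ mon_group_name | default('mons') }}"
      become: true
      tasks:
        - name: show which group the play resolved to
          debug:
            msg: "purging hosts in group {{ mon_group_name | default('mons') }}"
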
WingKai Ho 0d134b4ad9 Update make-osd-partitions.yml
change
2017-03-08 17:46:37 +08:00
WingKai Ho e2d06068f4 Update make-osd-partitions.yml
When Ansible cannot load host_vars/{{ ansible_hostname }}.yml or host_vars/default.yml it reports a syntax error, so the keyword "skip" is added to fix it.
Exit the playbook if the user does not define devices in either host_vars/{{ ansible_hostname }}.yml or host_vars/default.yml.
2017-03-06 15:43:09 +08:00
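
A rough sketch of the pattern these commits describe, assuming a with_first_found lookup over the two host_vars files and a 'devices' variable defined in them:

    - name: load the devices partition layout from host_vars, if a file exists
      include_vars: "{{ item }}"
      with_first_found:
        - files:
            - "host_vars/{{ ansible_hostname }}.yml"
            - "host_vars/default.yml"
          skip: true   # "skip" avoids the error raised when neither file is found

    - name: exit the playbook if the user did not define devices in either file
      fail:
        msg: "devices must be defined in host_vars/{{ ansible_hostname }}.yml or host_vars/default.yml"
      when: devices is not defined
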
WingKai Ho 2861a483d7 Update make-osd-partitions.yml
When Ansible cannot load host_vars/{{ ansible_hostname }}.yml or host_vars/default.yml it reports a syntax error, so the keyword "skip" is added to fix it.

Exit the playbook if the user does not define devices in either host_vars/{{ ansible_hostname }}.yml or host_vars/default.yml.
2017-03-06 10:33:22 +08:00
WingKai Ho 4cc489f2ba Update make-osd-partitions.yml
fix syntax error
2017-03-03 17:26:53 +08:00
WingKai Ho 102befa927 Update make-osd-partitions.yml
Remove capital `L`
2017-03-02 14:06:41 +08:00
WingKai Ho c3f170e758 Update make-osd-partitions.yml
Remove an extra space between 'custom' and 'layout'.
2017-03-02 12:24:44 +08:00
WingKai Ho 2967772f6a Load a variable file for devices partition
Load the devices partition file from the host_vars directory:

1) if the user defines host_vars/hostname.yml, load the devices partition from this file.
2) otherwise, load host_vars/default.yml as the default
2017-03-01 17:27:57 +08:00
yangyimincn 8b36cbac64 Update rolling_update.yml
In the task 'waiting for the monitor to join the quorum...', the output of ceph -s | grep monmap only contains the monmap and does not include the quorum:

# ceph -s --cluster ceph | grep monmap
     monmap e1: 3 mons at {sh-office-ceph-1=10.12.10.34:6789/0,sh-office-ceph-2=10.12.10.35:6789/0,sh-office-ceph-3=10.12.10.36:6789/0}

To check the monitor quorum, use this instead:

# ceph -s --cluster ceph | grep election
            election epoch 80, quorum 0,1 sh-office-ceph-1,sh-office-ceph-2

ceph version: 10.2.5
2017-02-28 16:56:02 +08:00
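
A hedged sketch of a wait-for-quorum task based on the output shown above; the cluster variable, retry count and delay are assumptions:

    - name: waiting for the monitor to join the quorum...
      shell: ceph -s --cluster {{ cluster }} | grep election | grep -q {{ ansible_hostname }}
      register: result
      until: result.rc == 0
      retries: 5
      delay: 10
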
Sébastien Han 4639d89231 infra: fix cluster name detection
The previous command was returning /etc/ceph/ceph.conf; we only need
'ceph' to be returned.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-23 15:40:34 -05:00
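
The actual fix is not shown here; one hypothetical way to derive the cluster name ('ceph') from the configuration file name instead of returning the full path:

    - name: detect the cluster name from the conf file name (illustrative only)
      shell: basename $(ls /etc/ceph/*.conf | head -n 1) .conf
      register: cluster_name
      changed_when: false
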
Tobias Florek 931027e6f7 harmonize docker names
Created containers are now named more or less in the form of

    <ansible role>-<ansible_hostname>
2017-02-23 09:15:05 +01:00
Sébastien Han 3b633d5ddc purge-docker: re-implement zap devices
We now run the container and wait until it dies. Prior to this we were
stopping it before completion, so not all the devices were zapped.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-21 15:56:09 -05:00
Sébastien Han a002508a91 purge-docker: also purge journal devices
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-21 15:54:36 -05:00
Andrew Schoen 5622c94e8b rolling-update: do not use upstart to stop mons when using systemd
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-21 12:31:26 -06:00
Shengjing Zhu 32923fd217 fix grep match pattern for osd ids
Some playbooks use [0-9]*, others use \d+$
The latter is more correct since the cluster name may contain numbers.

Signed-off-by: Shengjing Zhu <zsj950618@gmail.com>
2017-02-20 16:35:56 +08:00
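
A hypothetical illustration of the difference; the path and command are assumptions, not the playbook's actual task:

    - name: collect osd ids from the mounted osd directories (illustrative)
      shell: ls /var/lib/ceph/osd/ | grep -oP '\d+$'
      register: osd_ids
      changed_when: false
      # directory names look like '<cluster>-<id>', e.g. 'ceph2-0': the
      # end-anchored pattern yields only '0', while an unanchored '[0-9]*'
      # would also pick up the '2' from the cluster name
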
Andrew Schoen 22f52a9dc6 purge-cluster: also purge dmcrypt dedicated journals
See: https://bugzilla.redhat.com/show_bug.cgi?id=1414647

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-15 10:27:17 -06:00
Andrew Schoen 3964929a56 rgw-standalone: also fetch keys from mons
This is to allow for ceph-installer usage of this playbook and
to ensure that you have the correct keys locally when bootstrapping.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-14 16:12:59 -06:00
Andrew Schoen c5f561a4e9 purge-cluster: remove calamari-server package
See: https://bugzilla.redhat.com/show_bug.cgi?id=1422134

Signed-off-by: Andrew Schoen <aschoen@redhat.com>

Resolves rhbz#1422134
2017-02-14 09:24:02 -06:00
Sébastien Han c2f1dca823 docker: use a better method to pull images
We changed the way we declare the image.
Prior to this patch we had to use a "user/image:tag"
format, which is incompatible with non docker-hub registries where you
usually don't have a "user". On the docker hub a "user" is also
identified as a namespace, so for Ceph the user was "ceph".

Variables have been simplified with only:

* ceph_docker_image
* ceph_docker_image_tag

1. For docker hub images: ceph_docker_image: "ceph/daemon" will give
you the 'daemon' image of the 'ceph' user.

2. For non docker hub images: ceph_docker_image: "daemon" will simply
give you the "daemon" image.

Infrastructure playbooks have been modified as well.
The file group_vars/all.docker.yml.sample has also been removed:
it is hard to maintain since we have to generate it manually. If
you want to configure specific variables for a specific daemon, simply
edit group_vars/$DAEMON.yml.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1420207
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-09 17:57:18 +01:00
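
A sketch of the simplified variables described above; the values shown are examples, not mandated defaults:

    # group_vars/all.yml
    ceph_docker_image: "ceph/daemon"       # docker hub: "<user>/<image>"
    ceph_docker_image_tag: "latest"

    # the image reference is then assembled as
    # "{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
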
Andrew Schoen 5ddfc4f85c Merge pull request #1284 from ceph/BZ-1418980
purge-cluster: do not use ceph-detect-init
2017-02-08 08:46:03 -06:00
Andrew Schoen 4ff5908758 Merge pull request #1289 from ceph/fix-1286
rolling-update: detect init system properly
2017-02-08 06:31:30 -06:00
Andrew Schoen 865b4500dc purge-cluster: set a default value for fetch_directory if not defined
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-08 06:25:43 -06:00
Andrew Schoen adf6aee643 purge-cluster: remove all include tasks
Including variables from role defaults or files in a group_vars
directory relative to the playbook is a bad practice. We don't want to
do this because including these defaults at the task level overrides
values that would be set in a group_vars directory relative to the
inventory file, which is the correct usage if you wish to override
those default values.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-08 06:25:43 -06:00
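
To make the point concrete, a sketch of the anti-pattern being removed versus what is kept; the role path is illustrative:

    # anti-pattern (what these commits remove): loading role defaults or
    # playbook-relative group_vars at task level
    - include_vars: roles/ceph-osd/defaults/main.yml

    # instead, rely on Ansible loading <inventory_dir>/group_vars/*.yml
    # automatically, so overrides placed next to the inventory keep precedence
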
Andrew Schoen 0476b24af1 purge-cluster: do not use ceph-detect-init
We can not always ensure that ceph-detect-init will be
present on the system.

See: https://bugzilla.redhat.com/show_bug.cgi?id=1418980

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-02-08 06:24:44 -06:00
Sébastien Han 8f94bfb498 rolling-update: detect init system properly
Simply use the ansible_service_mgr fact.

Closes: #1286

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-08 08:52:05 +01:00
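
A minimal sketch of using the ansible_service_mgr fact; the service name shown is an assumption for illustration:

    - name: stop the ceph monitor with systemd
      service:
        name: ceph-mon@{{ ansible_hostname }}
        state: stopped
      when: ansible_service_mgr == 'systemd'
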
Sébastien Han c34d0a9d28 purge-docker: force image deletion
even if non-running containers are using this image as a reference.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-07 22:14:21 +01:00
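
A rough sketch of forcing the removal; the image variables are assumed from the earlier change:

    - name: remove the ceph container image even if stopped containers still reference it
      command: docker rmi -f {{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
      ignore_errors: true
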
Sébastien Han 72cd9199ac purge: ability to purge client role
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-07 22:14:18 +01:00
Guillaume Abrioux 76ddcbc271 Remove support of releases prior to Jewel.
According to #1216, we need to simplify the code by removing
support for anything before Jewel.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-01-31 11:00:54 +01:00
Sébastien Han d5dd658cfa purge: do not stop ceph.target on each daemon
Doing this causes all the daemons to go down at the same time. In a
scenario where we colocate a monitor and an OSD, the OSDs will take
some time to go down, which will make the 'umount' task fail.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-30 14:31:56 +01:00
Sébastien Han cb57a359ba purge: do not fail on purge ceph files
On systems running docker there is an issue with lxcfs that results in
the find command returning 1 even though it actually did the job.
E.g.: on a system with docker running, find /var will give us the
following errors:

find:
'/var/lib/lxcfs/cgroup/devices/lxc/x1/system.slice/systemd-update-utmp.service/devices.deny':
Permission denied
find:
'/var/lib/lxcfs/cgroup/devices/lxc/x1/system.slice/dev-random.mount/devices.allow':
Permission denied
...
...

However, the ceph files are deleted, so we ignore the error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-30 14:31:56 +01:00
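
A sketch of the tolerated-failure pattern described above; the find invocation and paths are illustrative, not the playbook's actual task:

    - name: purge ceph-related files under /var (paths and filter are illustrative)
      shell: find /var -name 'ceph-*' -delete
      register: purge_result
      failed_when: false   # lxcfs paths can produce "Permission denied" even though the ceph files were removed
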
Sébastien Han e371bd591c purge: fix ubuntu purge when not using systemd
We now rely on the CLI tool ceph-detect-init, which tells us the init
system in use on the distribution. We do this instead of the previous
lookup for systemd unit files to call the right task depending on the
init system.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-30 14:31:56 +01:00
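
A sketch of how the detected init system might drive the stop tasks; the stop commands are assumptions for illustration:

    - name: detect the init system with ceph-detect-init
      command: ceph-detect-init
      register: init_system
      changed_when: false

    - name: stop ceph services under upstart
      command: stop ceph-all
      when: init_system.stdout == 'upstart'

    - name: stop ceph services under systemd
      command: systemctl stop ceph.target
      when: init_system.stdout == 'systemd'
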
Sébastien Han 0e2e270ab2 purge: allow purge to run multiple times
with_items is evaluated before the when, so in a second run where the
variable is empty it will fail with "'dict object' has no attribute
'stdout_lines'". To fix this we add a default array so with_items does
not fail and the task is skipped by the when.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-30 14:31:56 +01:00
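
A minimal sketch of the default-array guard described above; the variable and command are illustrative:

    - name: zap the ceph journal partitions found earlier
      command: sgdisk --zap-all {{ item }}
      with_items: "{{ journal_partitions.stdout_lines | default([]) }}"
      when: journal_partitions.stdout_lines is defined
      # with_items is rendered even when the 'when' is false, so the
      # default([]) keeps a second run from failing on the missing attribute
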
Sébastien Han 0d2e580768 Merge pull request #1250 from ceph/new-tests
CI testing updates
2017-01-27 14:30:45 +01:00
Andrew Schoen d3cb8dba4e purge-cluster: fix failure when raw_multi_journal is not defined
Because the purge-cluster.yml playbook does not have access to the roles'
default vars, we cannot be sure that raw_multi_journal is defined. For
example, if this was purging a dmcrypt journal then raw_multi_journal
might not be defined at all in group_vars/all.yml or
group_vars/osds.yml.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-01-27 05:23:17 -06:00
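
One way to guard the journal tasks when role defaults are not loaded, sketched with default filters; the actual fix may differ and the device variable is an assumption:

    - name: zap dedicated raw journal devices (illustrative)
      command: sgdisk --zap-all {{ item }}
      with_items: "{{ raw_journal_devices | default([]) }}"
      when: raw_multi_journal | default(false) | bool
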
Ivan Font 0298354137 Update to use consistent docker extra env vars
This playbook was still referencing the old version of the
ceph_*_docker_extra_env but only for Ceph MONs and Ceph NFS. This
playbook was not kept up-to-date when updating the
ceph_*_docker_extra_env variables to add the '-e' option to docker.
That's because the addition of '-e' breaks this playbook as it requires
a comma-separated list of variables for the 'env:' docker module
parameter. Therefore this change just makes the playbook consistently
broken by referencing the same variable throughout.
2017-01-26 15:57:34 -08:00
Andrew Schoen b2a6f095f1 purge-cluster: fix syntax when deleting dmcrypt devices
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-01-26 11:28:30 -06:00
Sébastien Han 73ca1a7a00 purge: remove dm-crypt devices
When running encrypted OSDs, an encrypted device mapper is used (created
by the cryptsetup tool). So before attempting to remove all the
partitions on a device we must delete all the encrypted device mappers;
then we can delete all the partitions.

Signed-off-by: Sébastien Han <seb@redhat.com>

2017-01-25 22:32:46 +01:00
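
A rough sketch of the ordering described above, assuming the crypt mappers can be listed with dmsetup; the exact commands in the playbook may differ:

    - name: list the encrypted device mappers created by cryptsetup
      shell: dmsetup ls --target crypt | awk '$1 != "No" {print $1}'
      register: crypt_mappers
      changed_when: false

    - name: close each encrypted device mapper before zapping the partitions
      command: cryptsetup luksClose {{ item }}
      with_items: "{{ crypt_mappers.stdout_lines | default([]) }}"
      ignore_errors: true
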
Sébastien Han adeb3decf3 purge: remove zap_block_devs variable
The name of this variable was a bit confusing since its activation will
zap all the block devices no matter which osd scenario we are using.
Removing this variable and applying a condition on the OSD scenario is
now feasible and easier since we import group_vars variable files for
OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-18 10:55:01 +01:00
Sébastien Han b7fcbe5ca2 purge: cosmetic cleanup
Just applying our writing syntax convention in the playbook.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-01-18 10:53:21 +01:00
Andrew Schoen dd8389cdf7 purge-cluster: do not include ceph-osd and ceph-common defaults for osds
When purging OSDs we do not need to include these defaults as nothing in
the following tasks uses them. Also, it has the side effect of
overwriting, with the default values, any variables defined in group_vars
files relative to the inventory you are using. That behavior was causing
the CI tests to fail.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-01-10 16:57:58 -06:00
Andrew Schoen 321cea8ba9 purge-cluster: get journal partitions after zapping osd disks
In my testing zapping the osd disks deleted the journal
partitions, making the 'zap ceph journal partitions' task fail because
the partitions it found previously do not exist anymore.

This moves the task that finds the journal partitions to after 'zap osd
disks' so it catches any partitions ceph-disk might have missed.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-01-03 15:57:17 -06:00
Andrew Schoen c9e5914377 purge-cluster: use ignore_errors: true when including group_vars files
Using failed_when will still throw an exception and stop the playbook if
the file you're trying to include doesn't exist.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-01-03 15:57:17 -06:00
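
A minimal sketch of the ignore_errors pattern for an optional playbook-relative group_vars file; the file path is illustrative:

    - name: include the osd group_vars file relative to the playbook, if it exists
      include_vars: group_vars/osds.yml
      ignore_errors: true   # failed_when would still abort the play when the file is missing
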