Commit Graph

2904 Commits (383aa68be9c65f17a4c0939373ac6c23da9e05f9)

Author SHA1 Message Date
Sébastien Han d0515cb704 Merge pull request #1825 from ceph/fix-item
ceph-docker-common: fix empty array
2017-08-29 12:15:46 +02:00
Sébastien Han b3e5206289 Merge pull request #1814 from ceph/handler-defaults
handler: default to empty array if task skipped
2017-08-29 11:09:35 +02:00
Sébastien Han cfddd2903c ceph-docker-common: fix empty array
The list cannot be evaluated properly if it contains '[]', which is
the case when using the filter "default([])". To fix this, we have to
properly merge the lists.

This is fixing the issue: "list object has no element 1"

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-29 10:25:46 +02:00
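
For illustration, a minimal sketch of this kind of merge (the registered
variable names statconfig and statmon are assumptions):

  # union merges both lists into one flat list even when one of the
  # registering tasks was skipped and its result defaulted to []
  - name: merge stat results
    set_fact:
      merged_results: "{{ (statconfig.results | default([])) | union(statmon.results | default([])) }}"
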
Sébastien Han 764e697186 ceph-docker-common: detect ceph version
By detecting the ceph version running in the container we can easily
apply conditions like:
ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous

We do that already, in ceph-docker-common/tasks/fetch_configs.yml.

This fixes the error:

TASK [ceph-docker-common : register rbd bootstrap key]
******************************************************

fatal: [magna005]: FAILED! => {"failed": true, "msg": "The conditional
check 'ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous'
failed. The error was: error while evaluating conditional
(ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous):
'dict object' has no attribute 'dummy'\n\nThe error appears to have been
in
'/home/ubuntu/ceph-ansible/roles/ceph-docker-common/tasks/fetch_configs.yml':
line 2, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n---\n-
name: register rbd bootstrap key\n  ^ here\n"}

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1486062
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-28 23:28:47 +02:00
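
For illustration, a hedged sketch of such detection (the container name and
the version-string parsing are assumptions, based on a luminous-style
"ceph --version" output):

  - name: get ceph version from the running container
    command: docker exec ceph-mon-{{ ansible_hostname }} ceph --version
    register: ceph_version_out
    changed_when: false

  # "ceph version 12.2.0 (<sha>) luminous (rc)" -> the fifth field is the
  # release name, giving ceph_release a real key of ceph_release_num
  - name: set ceph_release from the reported version
    set_fact:
      ceph_release: "{{ ceph_version_out.stdout.split(' ')[4] }}"
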
Ben England 617d9ee75d don't use devices var anymore, works for osd_auto_discover 2017-08-28 17:27:01 -04:00
David Galloway 146bed0927 Merge pull request #1821 from ceph/test-sitepackages
tests: always use sitepackages=True
2017-08-28 12:40:00 -04:00
Andrew Schoen 24107b9b3d tests: always use sitepackages=True
This is mostly important in rhcs testing so that our tests can use
packages installed on the distro.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-28 09:59:28 -05:00
Sébastien Han aa69c2c007 ceph-docker-common: do not log inside the container
Logging inside the container is not useful since it writes to the
overlayfs partition, resulting in potential performance degradation for
the container.

If you need to check the logs, just look at journald.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-28 12:04:49 +02:00
Sébastien Han 3c294131ae Merge pull request #1815 from ceph/container-key-perms
ceph-docker-common: apply 0600 to key permissions
2017-08-28 11:15:41 +02:00
Sébastien Han 24d3bdd0ce Merge pull request #1517 from ceph/rolling
rolling_update: nicer way to set osd flags
2017-08-28 11:14:55 +02:00
Sébastien Han 29753da05c handler: default to empty array if task skipped
with_items is evaluated before the when condition, so if the task that
registers 'results' is skipped, the task will fail with:

{"failed": true, "msg": "'dict object' has no attribute 'results'"}

Defaulting to an empty array fixes the issue.

Reverts: abdd66619e
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482061
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-25 18:39:00 +02:00
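
For illustration, a hedged sketch of the pattern (the registered variable
name socket_out and the script path are assumptions):

  # with_items is expanded before `when`, so default([]) keeps the loop
  # valid when the registering task was skipped
  - name: restart ceph mon daemon(s)
    command: /tmp/restart_mon_daemon.sh
    with_items: "{{ socket_out.results | default([]) }}"
    when: item.rc is defined and item.rc == 0
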
Sébastien Han 0205f6d645 rolling_update: nicer way to set osd flags
Prior to this patch, we were applying the osd flags like this:

"
General pre tasks
Set flags
Upgrade OSDs on a host
Unset flags <-- this triggers pending scrub to start
Set flags
Upgrade OSDs on a host
Unset flags <-- this triggers pending scrub to start
.
.
.
General post tasks
"

Now instead, we set the flags once before starting the OSD update and
unset them once the last OSD is finished.

"
General pre tasks
Set flags and wait for any scrubs to finish
Upgrade OSDs on a host
Upgrade OSDs on a host
.
.
.
Unset flags
General post tasks
"

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1450754
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-25 18:21:28 +02:00
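
For illustration, a hedged sketch of the new flow (the flag list and the
delegation target are assumptions):

  - name: set osd flags before the upgrade
    command: ceph osd set {{ item }}
    with_items:
      - noout
      - noscrub
      - nodeep-scrub
    delegate_to: "{{ groups['mons'][0] }}"
    run_once: true

  # ... the serialized per-host OSD upgrade plays run here ...

  - name: unset osd flags once the last OSD is done
    command: ceph osd unset {{ item }}
    with_items:
      - noout
      - noscrub
      - nodeep-scrub
    delegate_to: "{{ groups['mons'][0] }}"
    run_once: true
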
Sébastien Han 972eb45d31 ceph-docker-common: apply 0600 to key permissions
Keys should be readable and writable only by their respective owners.

Closes: https://github.com/ceph/ceph-ansible/issues/1760
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-25 18:14:28 +02:00
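
For illustration, a hedged sketch (the keyring path and ownership are
assumptions):

  - name: restrict ceph key permissions
    file:
      path: "/etc/ceph/{{ cluster }}.client.admin.keyring"
      mode: "0600"
      owner: root
      group: root
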
Sébastien Han d132605cb5 Merge pull request #1811 from ceph/push-galaxy
contrib: do not rework if tag exists
2017-08-25 14:54:09 +02:00
Sébastien Han ef8e744ccf contrib: do not rework if tag exists
We now compare local tags with remote tags and do nothing if they both
exist.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-25 12:17:34 +02:00
Sébastien Han 730f351e59 Merge pull request #1810 from ceph/config-meta
update meta for ansible galaxy
2017-08-25 00:17:08 +02:00
Sébastien Han 1f4082f200 update meta for ansible galaxy
Closes: https://github.com/ceph/ceph-ansible/issues/1637
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-25 00:05:44 +02:00
Sébastien Han aee8267be4 Merge pull request #1808 from ceph/role-path
ceph-mon: detect ANSIBLE_ROLES_PATH if present
2017-08-24 23:49:41 +02:00
Sébastien Han 59dbf775c6 Merge pull request #1809 from ceph/no-sudo-fetch-dir
ceph-config: when using local_action set become: false
2017-08-24 18:43:51 +02:00
Andrew Schoen 910bb036c6 ceph-config: when using local_action set become: false
There should be no need to use sudo when writing or using these files.
Requiring sudo creates an issue when the user running ansible-playbook
does not have sudo privileges.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-24 10:07:03 -05:00
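
For illustration, a hedged sketch (the directory task is an assumption):

  # runs on the ansible controller, so no sudo should be needed
  - name: create the local fetch directory if it does not exist
    local_action:
      module: file
      path: "{{ fetch_directory }}"
      state: directory
    become: false
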
Sébastien Han 76ac9b077b ceph-mon: detect ANSIBLE_ROLES_PATH if present
Some deployments can't copy infrastructure playbooks outside of the
infrastructure-playbooks directory, so they use ANSIBLE_ROLES_PATH to
overcome this. However, some roles have 'playbook_dir' hardcoded, which
results in a wrong path since the execution comes from
infrastructure-playbooks: a role triggered by a playbook from
infrastructure-playbooks believes that the roles live in
infrastructure-playbooks/roles. This commit fixes that.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 16:19:39 +02:00
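
For illustration, a hedged sketch (the fact name is an assumption);
lookup('env', ...) returns an empty string when the variable is unset,
which default(..., true) treats as missing:

  - name: resolve the roles path
    set_fact:
      roles_path: "{{ lookup('env', 'ANSIBLE_ROLES_PATH') | default(playbook_dir + '/roles', true) }}"
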
Alfredo Deza ac818f5bba Merge pull request #1807 from ceph/rpm-build-fix
rpm update the DOC section to point to rst
2017-08-24 09:24:00 -04:00
Alfredo Deza c261739cba rpm update the DOC section to point to rst
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-08-24 09:18:17 -04:00
Sébastien Han f568b5e126 Merge pull request #1806 from ceph/resync-group
resync group_vars
2017-08-24 13:42:06 +02:00
Sébastien Han d7ddf88e88 resync group_vars
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 13:40:36 +02:00
Sébastien Han 5bda515d7c site: delegate fact to all the hosts
Before this patch we couldn't use --limit properly to interact with only
a particular set of hosts: we always required the ceph-mon role to be
played in order to gather facts and then build the ceph.conf.

Now, the current running host will get the facts from the machines that
are not part of the current play. This is achieved with the help of the
new option delegate_facts, for more info see:
http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482067
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 11:33:07 +02:00
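
For illustration, a hedged sketch following the documented delegated-facts
pattern (the 'mons' group name is an assumption):

  - hosts: all
    gather_facts: false
    tasks:
      # facts are stored under the delegated hosts, so later plays can use
      # them even when those hosts are outside the current --limit
      - name: gather facts from mon nodes
        setup:
        delegate_to: "{{ item }}"
        delegate_facts: true
        with_items: "{{ groups.get('mons', []) }}"
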
Andrew Schoen d0a3034857 ceph-config: write ceph_conf_overrides_temp to fetch_directory
/tmp is not always writable, but we can assume that the fetch_directory
will be.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-24 11:33:03 +02:00
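
For illustration, a hedged sketch (the file handling is an assumption):

  - name: write ceph_conf_overrides_temp into the fetch_directory
    local_action:
      module: copy
      content: "{{ ceph_conf_overrides | to_nice_yaml }}"
      dest: "{{ fetch_directory }}/ceph_conf_overrides_temp"
    become: false
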
Sébastien Han 80dc5eead7 ceph-config: add missing meta and files for the galaxy
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 11:33:03 +02:00
Guillaume Abrioux 539197a2fc Introduce new role ceph-config.
This will give us more flexibility and the possibility to deploy a client
node for an external ceph cluster.

related BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1469426

Fixes: #1670

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-24 11:33:03 +02:00
Guillaume Abrioux b264bfece0 tests: Update tests according to `ceph-config` role implementation
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-24 11:33:02 +02:00
Sébastien Han 6d894e556c ceph-mon: remove hardcoded ipv4 in containers
Before this commit we were forcing ipv4, which might not be available.
Now setting ip_version to ipv4 or ipv6 will give you the right support.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1484189
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 11:33:02 +02:00
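
For illustration, a hedged group_vars example:

  # group_vars/all.yml
  ip_version: ipv6   # set to ipv4 or ipv6, nothing is hardcoded anymore
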
Sébastien Han 4a4a20f07d rolling update: skip pg check if num_pgs = 0
In our test case we don't have any pgs, thus the check fails. The check
always returns an empty array, which makes the comparison fail.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-24 08:50:49 +02:00
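
For illustration, a hedged sketch of the guard (task and variable names
are assumptions):

  - name: waiting for clean pgs...
    command: ceph -s --format json
    register: ceph_status
    until: "'active+clean' in ceph_status.stdout"
    retries: 10
    delay: 10
    # with zero pgs the check would compare against an empty array, so skip it
    when: num_pgs | int > 0
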
Alfredo Deza e9c1ade0f4 Merge pull request #1799 from ceph/lvm-vg-lv
ceph-osd: ceph-volume requires --data to be in vg/lv format
2017-08-23 17:16:32 -04:00
Andrew Schoen 758c31b1cd ceph-osd: ceph-volume requires --data to be in vg/lv format
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 13:43:31 -05:00
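
For illustration, a hedged sketch (the role is assumed to build --data from
the lvm_volumes entries described below):

  - name: create lvm osds with ceph-volume
    command: "ceph-volume lvm create --data {{ item.data_vg }}/{{ item.data }}"
    with_items: "{{ lvm_volumes }}"
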
Alfredo Deza e651469a2a Merge pull request #1797 from ceph/purge-lvm
adds purge support for the lvm_osds osd scenario
2017-08-23 14:28:29 -04:00
Sébastien Han f2499ff5ac Merge pull request #1788 from ceph/improve-switch
switch-from-non-containerized-to-containerized: simplify
2017-08-23 19:47:26 +02:00
Sébastien Han 4f0ecb7f30 switch-from-non-containerized-to-containerized: simplify
This commit eases the use of the
infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
playbook. We basically run it with a couple of pre-tasks and then we let
the playbook run the docker roles.

It obviously expects the proper variables to be configured in order to
work.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-23 18:39:45 +02:00
Andrew Schoen bed57572cc purge-cluster: adds support for purging lvm osds
This also adds a new testing scenario for purging lvm osds

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 10:33:35 -05:00
Andrew Schoen 594d5e017a ceph-osd: restructure lvm_volumes variable for more flexibility
The lvm_volumes variable is now a list of dictionaries, each representing
an OSD you'd like to deploy using ceph-volume. Each dictionary must have
the following keys: data, journal and data_vg. Each dictionary can also
optionally provide a journal_vg key.

The 'data' key is the lv name used for the OSD and the 'data_vg' key is
the vg name that the given lv resides on. The 'journal' key is either an
lv, device or partition. The 'journal_vg' key is optional and must be the
vg name of the journal lv, if given. This key is mainly used for purging
the journal lv when purge-cluster.yml is run.

For example:

  lvm_volumes:
    - data: data_lv1
      journal: journal_lv1
      data_vg: vg1
      journal_vg: vg2
    - data: data_lv2
      journal: /dev/sdc
      data_vg: vg1

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 10:14:14 -05:00
Sébastien Han daca49cdb1 Merge pull request #1796 from ceph/resync-groupvars
resync group_vars
2017-08-23 15:34:44 +02:00
Sébastien Han 1d696dec2a resync group_vars
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-23 15:33:48 +02:00
Sébastien Han d9b3d4a981 Merge pull request #1731 from SirishaGuduru/rgw-civetwebIP-conf
Common: changed civetweb line in rgw section(conf)
2017-08-23 15:33:08 +02:00
Sébastien Han e0c43ccc53 Merge pull request #1784 from ceph/fix-restart-osd-container
ceph-defaults: fix handler for osd container
2017-08-23 12:40:01 +02:00
SirishaGuduru 1359869497 Common: changed civetweb line in rgw section(conf)
Resolves issue: Multiple RGW Ceph.conf Issue #1258

In a multi-RGW setup, the RGW sections in ceph.conf contained an
identical bind IP on the civetweb line. This modification fixes that
issue and puts the right IP for each RGW.

Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>

Modified ceph-defaults and ran generate_group_vars_sample.sh

group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are not part
of the changes, but they got modified when generate_group_vars_sample.sh
was run to generate group_vars/all.yml.sample.

Uncommented added variables in ceph-defaults

Updated tests by adding value for radosgw_interface

Added radosgw_interface to centos cluster tests

Modified the ceph-rgw role, rebased and ran generate_group_vars_sample.sh

In the ceph-rgw role, removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh, after which the below files got
modified.
2017-08-23 15:03:37 +05:30
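
For illustration, a hedged group_vars example (values are assumptions):

  # group_vars/all.yml
  radosgw_interface: eth1      # each RGW binds to its own IP on this interface
  radosgw_civetweb_port: 8080
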
Sébastien Han 34e114e395 Merge pull request #1625 from ceph/wip-rbd-mirror-keys
rbd-mirror should use per-host user id keyring
2017-08-23 11:26:05 +02:00
Jason Dillaman b70d54ac80 rbd-mirror should use per-host user id keyring
The rbd-mirror daemon will be HA under luminous, and the new daemon
health features require a way to uniquely identify rbd-mirror instances.

Signed-off-by: Jason Dillaman <dillaman@redhat.com>
2017-08-22 18:55:29 -04:00
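
For illustration, a hedged sketch (caps and paths are assumptions; the
user id embeds the hostname so each rbd-mirror instance is uniquely
identifiable):

  - name: create a per-host rbd-mirror keyring
    command: >
      ceph auth get-or-create client.rbd-mirror.{{ ansible_hostname }}
      mon 'profile rbd-mirror' osd 'profile rbd'
      -o /etc/ceph/ceph.client.rbd-mirror.{{ ansible_hostname }}.keyring
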
Jason Dillaman 70c2b934ca distribute rbd bootstrap key if available
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
2017-08-22 18:55:29 -04:00
Sébastien Han 1ac0969c28 Merge pull request #1778 from ceph/fix-1770
purge: add ability to purge bluestore osd
2017-08-22 23:56:36 +02:00
Sébastien Han 07821d9bb1 Merge pull request #1786 from ceph/re-arrange-skipped
mon, osd: fix skipped condition
2017-08-22 19:44:48 +02:00
Sébastien Han 8b1c5ca62f Merge pull request #1789 from mistur/master
fix radosgw-admin call with another cluster name than "ceph"
2017-08-22 19:41:31 +02:00