Commit Graph

2137 Commits (33c09af2507101289e8524c91f2b48c5adbdf904)

Author SHA1 Message Date
Sébastien Han abdb53e16a ceph-osd: trigger osd container restart on script change
The script ceph-osd-run.sh holds the config options used to start the
container. If one of these options is modified we must restart the
container. This was not the case before because the 'notify' flag
wasn't present.

Closing: https://bugzilla.redhat.com/show_bug.cgi?id=1596061
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-28 17:54:13 +02:00
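A minimal sketch of the wiring this commit describes, assuming illustrative task, path, and handler names rather than the exact ones in the role:
```
- name: generate ceph-osd-run.sh
  template:
    src: ceph-osd-run.sh.j2
    dest: "{{ ceph_osd_docker_run_script_path | default('/usr/share') }}/ceph-osd-run.sh"
    mode: "0744"
  # the missing piece: any change to the rendered script now triggers the handler
  notify: restart ceph osds
```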
Sébastien Han f623997271 systemd: remove changed_when: false
When using a module there is no need to apply this Ansible option. The
module will handle the idempotency on its own. So the module decides
whether or not the task has changed during the execution.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-28 17:54:13 +02:00
George Shuklin 653b483fc3 Add ceph_keyring_permissions variable to control permissions for
keyring files in /etc/ceph. The default value is the same as it was (0600),
but this variable allows the user to override it (e.g. set it to 0640).

Signed-off-by: George Shuklin <george.shuklin@gmail.com>
2018-06-28 15:48:39 +00:00
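A sketch of how a keyring task can consume such a variable; paths and the task name are illustrative, not the exact role task:
```
- name: copy ceph admin keyring
  copy:
    src: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/{{ cluster }}.client.admin.keyring"
    dest: "/etc/ceph/{{ cluster }}.client.admin.keyring"
    owner: ceph
    group: ceph
    # user-overridable; falls back to the historical 0600 default
    mode: "{{ ceph_keyring_permissions | default('0600') }}"
```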
Ha Phan a7b7735b6f ceph-mon: Generate initial keyring
Minor fix so that the initial keyring can be generated using python3.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-28 10:39:56 +02:00
Ha Phan b7b8aba47b Generate a copy of ceph.conf locally
Refers to #2697

This change creates a copy of `ceph.conf` on the Ansible server.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-28 07:39:30 +00:00
Andy McCrae a4a3d9a01b Fix package state for upgrades on SuSE/RHEL
In 226f80c22b only Debian package
installs had the correct state set to ensure packages were upgraded when
the "upgrade_ceph_packages" var was set to true.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-06-27 18:55:22 +00:00
Sébastien Han 322e2de7d2 mon: honour mon_docker_net_host option
--net=host was hardcoded in the startup line so even though
mon_docker_net_host was set to False the net option would always be
activated.
mon_docker_net_host is set to True by default so this commit does not
change the behaviour.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-27 13:44:41 +00:00
Michel Rode 7774935707 Added 'squash' as a parameter to nfs-ganesha.
Set the default to 'root_squash' - which is the default of nfs-ganesha.

Signed-off-by: Michel Rode <rmichel@devnu11.net>
2018-06-25 09:13:17 +02:00
Christian Zunker 48394597c9 reset failed count of ceph-mgr
Depending on your setup, ceph-mgr might get restarted multiple times.
When this is done too fast, systemd will prevent further restarts because of
the configured limits in the ceph-mgr systemd unit file.

Resetting the failure count will prevent this problem. The reset is done before
the restart, so in case of a real problem during the restart it still fails.

Fixes: #2768

Signed-off-by: Christian Zunker <christian.zunker@codecentric.cloud>
2018-06-20 13:59:16 +02:00
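A minimal sketch of the ordering the commit describes: clear the unit's failure counter right before the restart so systemd's start-limit does not block it (unit and task names are illustrative):
```
- name: reset failed count for ceph-mgr
  command: systemctl reset-failed ceph-mgr@{{ ansible_hostname }}
  changed_when: false

- name: restart ceph-mgr
  service:
    name: ceph-mgr@{{ ansible_hostname }}
    state: restarted
```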
Sébastien Han bea4027f0c common: start firewalld if configure_firewall
Currently we expect firewalld to be enabled and running when
configure_firewall is set to True. Let's enforce that.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1589146
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-18 04:02:50 -04:00
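A sketch of the enforcement, assuming the configure_firewall flag gates a plain service task:
```
- name: ensure firewalld is running before any rules are added
  service:
    name: firewalld
    state: started
    enabled: yes
  when: configure_firewall | bool
```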
Sébastien Han a9ed3579ae mon/osd: bump container memory limit
As discussed with the core developers, the current limits are too low and should
be bumped to higher values.
So now by default monitors get 3GB and OSDs get 5GB.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591876
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-17 11:20:27 -04:00
Guillaume Abrioux 51cf3b7fa0 client: try to kill dummy container only on first client node
The 'dummy' container is created only on the first client node, so we
must destroy this container only on that node, otherwise this
can cause a failure like the following:
```
fatal: [192.168.24.8]: FAILED! => {"changed": false, "cmd": ["docker", "rm",
"-f", "ceph-create-keys"], "delta": "0:00:00.023692", "end": "2018-06-12
20:56:07.261278", "msg": "non-zero return code", "rc": 1, "start":
"2018-06-12 20:56:07.237586", "stderr": "Error response from daemon: No such
container: ceph-create-keys", "stderr_lines": ["Error response from daemon: No
such container: ceph-create-keys"], "stdout": "", "stdout_lines": []}

```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1590746

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-13 16:10:46 +02:00
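A hedged sketch of the guard this commit adds; the task body is simplified:
```
- name: remove the dummy ceph-create-keys container
  command: docker rm -f ceph-create-keys
  changed_when: false
  failed_when: false
  # only the first client node created the dummy container, so only it removes it
  when: inventory_hostname == groups.get(client_group_name) | first
```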
Patrick Donnelly 9ce81ae845 ceph-mds: do not enable multimds on jewel
Multiple active MDS became stable in Luminous.

Introduced-by: c8573fe0d7
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-06-12 10:47:34 +02:00
Sébastien Han 2e8412734a common: ability to enable/disable fw configuration
Prior to this patch, if you were running on a Red Hat system,
ceph-ansible would try to configure firewalld for you without the
operator's consent.
Now you can enable or disable the firewall configuration by setting
configure_firewall to either true or false.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1589146
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-11 21:51:59 +02:00
Konstantin Shalygin 3a07568496 ceph-osd: set 'openstack_keys_tmp' only when 'openstack_config' is defined.
If 'openstack_config' is false this task shouldn't be executed.

Signed-off-by: Konstantin Shalygin <k0ste@k0ste.ru>
2018-06-11 13:03:55 +02:00
Vishal Kanaujia 1a610df02b Fix to run secure cluster only once in a run
The current secure cluster play runs with all the monitors. The rerun
of this task is unnecessary and can be skipped.

Fixes: #2737

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-06-11 08:37:29 +02:00
Guillaume Abrioux 090ecff94e client: keyrings aren't created when single client node
Combining `run_once: true` with `inventory_hostname ==
groups.get(client_group_name) | first` might cause a bug when the only
node being run is not the first in the group.

In a deployment with a single client node it might cause an issue because
sometimes the keyring won't be created since the task could be definitively
skipped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1588093

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-08 15:05:47 +02:00
Sébastien Han 20c8065e48 ceph-iscsi: rename group iscsi_gws
Let's try to avoid using dashes, as testinfra needs to be able to read
the groups.
Typically, with iscsi-gws we can't add a marker for these iscsi nodes;
using an underscore fixes the issue.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-08 10:21:54 +02:00
Sébastien Han 91bf53ee93 ceph-iscsi: support for containerize deployment
We now have the ability to deploy a containerized version of ceph-iscsi.
The result is similar to the non-containerized version; you simply have
3 containers running for the following services:

* rbd-target-api
* rbd-target-gw
* tcmu-runner

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508144
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-08 10:21:54 +02:00
Guillaume Abrioux 8a653cacd5 client: add a default value for keyring file
Potential error if someone doesn't pass the mode in the `keys` dict for
client nodes:

```
fatal: [client2]: FAILED! => {}

MSG:

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mode'

The error appears to have been in '/home/guits/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml': line 117, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: get client cephx keys
  ^ here

exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'dict object' has no attribute 'mode'

```

Adding a default value will prevent the deployment from failing because of this.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 17:26:35 +02:00
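A sketch of the fix under the assumption that the keyring task loops over the user-supplied `keys` list; field names other than `mode` are illustrative:
```
- name: get client cephx keys
  copy:
    src: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/{{ cluster }}.{{ item.name }}.keyring"
    dest: "/etc/ceph/{{ cluster }}.{{ item.name }}.keyring"
    # fall back to a sane default when the user omits 'mode' in the keys dict
    mode: "{{ item.mode | default('0600') }}"
  with_items: "{{ keys }}"
```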
Guillaume Abrioux 5eacc8f8d8 tests: add a dummy value for 'dev' release
Functional tests are broken when testing against the 'dev' release of Ceph.
Adding a dummy value here will make it possible to run the ceph-ansible CI
against a dev Ceph release.

Typical error:

```
>       if request.node.get_marker("from_luminous") and ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']:
E       KeyError: 'dev'
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fd1487d93f21b609a637053f5b33cd2a4e408d00)
2018-06-07 13:59:17 +02:00
Andrew Schoen 24ef47b0e5 ceph-common: move firewall checks after package installation
We need to do this because on dev or rhcs installs ceph_stable_release
is not mandatory and the firewall check tasks have a task that is
conditional based on the installed version of ceph. If we perform those
checks after package install then they will not fail on dev or rhcs
installs.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-06-07 13:59:17 +02:00
Guillaume Abrioux 7b156deb67 client: use dummy created container when there is no mon in inventory
The `docker_exec_cmd` fact set in the client role when there is no monitor
in the inventory is wrong: `ceph-client-{{ hostname }}` is never created so
it will fail anyway.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 16:16:38 +08:00
Guillaume Abrioux 433ecc7cbc osd: copy openstack keys over to all mon
When configuring openstack, the created keyrings aren't copied over to
all monitor nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1588093

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 13:58:57 +08:00
Patrick Donnelly 91f9da530f change max_mds default to 1
Otherwise, with the removal of mds_allow_multimds, the default of 3 will be set
on every new FS.

Introduced by: c8573fe0d7

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1583020
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-06-06 12:16:42 +08:00
Vishal Kanaujia 2cdb0d1812 Syntax error fix in rgw multisite role
This commit fixes a syntax error in the RGW multisite role's when
clause.

Fixes: #2704

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-06-05 16:01:07 +05:30
Guillaume Abrioux 2cf06b515f rgw: refact rgw pools creation
Refactor of 8704144e31.
There is no need to have duplicated tasks for this. The rgw pools
creation should be delegated to a monitor node so we don't have to care
whether the admin keyring is present on the rgw node.
By the way, only one task is needed to create the pools; we just need to
use the `docker_exec_cmd` fact already defined in `ceph-defaults` to
achieve it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1550281

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-05 15:00:20 +08:00
Ha Phan 1f3c9ce4f3 Use python instead of python2
The initial keyring is generated locally on the Ansible server and the snippet works well for both v2 and v3 of python.

I don't see any reason why we should explicitly invoke `python2` instead of just `python`.

In some setups, `python2` is not symlinked to `python`; while `python` and `python3` refer to v2 and v3 respectively.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-04 14:24:10 +02:00
Sébastien Han db50aec13d ceph-common: add firewall rules for ceph-mgr
Prior to this commit the firewall tasks were not opening the ceph-mgr
ports. This would lead to an unclean configuration since the ceph-mgr
daemons cannot connect to the OSDs.
This commit opens the right ports on the ceph-mgr nodes to talk with the
OSDs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526400
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-04 12:11:41 +02:00
jtudelag 600e1e2c26 rgws: renames create_pools variable with rgw_create_pools.
Renamed to be consistent with the role (rgw) and have a meaningful name.

Signed-off-by: Jorge Tudela <jtudelag@redhat.com>
2018-06-04 06:23:42 +02:00
jtudelag 8704144e31 Adds RGWs pool creation to containerized installation.
The ceph command has to be executed from one of the monitor containers
if no admin key copy is present on the RGWs, so the task has to be delegated.

Adds a test to check proper RGW pool creation for Docker container scenarios.

Signed-off-by: Jorge Tudela <jtudelag@redhat.com>
2018-06-04 06:23:42 +02:00
Guillaume Abrioux aae37b44f5 mons: move set_fact of openstack_keys in ceph-osd
Since openstack_config.yml has been moved to `ceph-osd` we must move
this `set_fact` into ceph-osd, otherwise the tasks in
`openstack_config.yml` using `openstack_keys` will actually use the
default values from `ceph-defaults`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1585139

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-01 17:12:01 +02:00
Andrew Schoen c2423e2c48 ceph-defaults: add the nautilus 14.x entry to ceph_release_num
The first 14.x tag has been cut so this needs to be added so that
version detection will still work on the master branch of ceph.

Fixes: https://github.com/ceph/ceph-ansible/issues/2671

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-06-01 16:51:23 +02:00
Guillaume Abrioux 9d5265fe11 osds: wait for osds to be up before creating pools
This is a follow-up on #2628.
Even with the openstack pools creation moved later in the playbook,
there is still an issue because OSDs are not all UP when trying to
create pools.

Adding a task which checks for all OSDs to be UP with a `retries/until`
condition should definitively fix this issue.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-01 15:46:52 +02:00
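A sketch of such a retries/until gate, assuming a containerized monitor and simplified JSON parsing; the exact field layout of `ceph osd stat -f json` varies between Ceph releases, so the expression is illustrative:
```
- name: wait for all osds to be up
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd stat -f json"
  register: osd_stat
  delegate_to: "{{ groups[mon_group_name][0] }}"
  retries: 60
  delay: 10
  until: >
    (osd_stat.stdout | from_json).num_osds | int > 0 and
    (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
```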
Guillaume Abrioux c68126d6fd mdss: do not make pg_num a mandatory params
When playing the ceph-mds role, the mon nodes have already set a fact with the default
pg num for osd pools, so we can simply default to this value for the cephfs
pools (`cephfs_pools` variable).

At the moment the variable definition for `cephfs_pools` looks like:

```
cephfs_pools:
  - { name: "{{ cephfs_data }}", pgs: "" }
  - { name: "{{ cephfs_metadata }}", pgs: "" }
```

and we have a task in `ceph-validate` to ensure `pgs` has been set to a
valid value.

We could simply avoid this check by setting the default value of `pgs`
to `hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num']` and
let users override this value.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1581164

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-30 16:20:34 +02:00
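The resulting defaults would look roughly like this (a sketch; only the fallback expression is the point):
```
cephfs_pools:
  - name: "{{ cephfs_data }}"
    pgs: "{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}"
  - name: "{{ cephfs_metadata }}"
    pgs: "{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}"
```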
Guillaume Abrioux 34e646e767 osds: do not set docker_exec_cmd fact
In `ceph-osd` there is no need to set `docker_exec_cmd` since the only
place where this fact is used is in `openstack_config.yml`, which
delegates all docker commands to a monitor node. It means we need the
`docker_exec_cmd` fact that refers to the `ceph-mon-*`
containers; this fact is already set earlier in `ceph-defaults`.

By the way, when collocating an OSD with a MON it fails because the container
`ceph-osd-{{ ansible_hostname }}` doesn't exist.

Removing this task will allow collocating an OSD with a MON.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1584179

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-30 16:17:29 +02:00
Guillaume Abrioux 608ea947a9 mds: move mds fs pools creation
When collocating mds on a monitor node, the cephfs creation will fail
because `docker_exec_cmd` is reset to `ceph-mds-monXX`, which is
incorrect because we need to delegate the task to `ceph-mon-monXX`.
In addition, it wouldn't have worked since the `ceph-mds-monXX` container
isn't started yet.

Moving the task earlier in the `ceph-mds` role will fix this issue.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-25 11:16:56 +02:00
Sébastien Han 1c084efb3c rgw: container add option to configure multi-site zone
You can now use RGW_ZONE and RGW_ZONEGROUP on each rgw host from your
inventory and assign them a value. Once the rgw container starts it'll
pick up the info and add itself to the right zone.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1551637
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-24 11:32:05 -07:00
Guillaume Abrioux 3a0e168a76 mdss: move cephfs pools creation in ceph-mds
When deploying a large number of OSD nodes it can be an issue because the
protection check [1] won't pass since it tries to create pools before all
OSDs are active.

The idea here is to move the cephfs pools creation into the `ceph-mds` role.

[1] e59258943b/src/mon/OSDMonitor.cc (L5673)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-24 09:39:38 -07:00
Guillaume Abrioux 564a662baf osds: move openstack pools creation in ceph-osd
When deploying a large number of OSD nodes it can be an issue because the
protection check [1] won't pass since it tries to create pools before all
OSDs are active.

The idea here is to move the openstack pools creation to the end of the `ceph-osd` role.

[1] e59258943b/src/mon/OSDMonitor.cc (L5673)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-24 09:39:38 -07:00
Luigi Toscano 43e96c1f98 ceph-radosgw: disable NSS PKI db when SSL is disabled
The NSS PKI database is needed only if radosgw_keystone_ssl
is explicitly set to true, otherwise the SSL integration is
not enabled.

It is worth noting that the PKI support was removed from Keystone
starting from the Ocata release, so some code paths should be
changed anyway.

Also, remove radosgw_keystone, which is not useful anymore.
This variable was used until fcba2c801a.
Now profiles drive the setting of rgw keystone *.

Signed-off-by: Luigi Toscano <ltoscano@redhat.com>
2018-05-23 23:24:09 -07:00
Vishal Kanaujia ef5f52b1f3 Skip GPT header creation for lvm osd scenario
LVM's lvcreate fails if the disk already has a GPT header.
We create a GPT header regardless of the OSD scenario. The fix is to
skip header creation for the lvm scenario.

fixes: https://github.com/ceph/ceph-ansible/issues/2592

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-05-23 11:44:09 -07:00
Subhachandra Chandra c7e269fcf5 Fix restarting OSDs twice during a rolling update.
During a rolling update, OSDs are restarted twice currently. Once, by the
handler in roles/ceph-defaults/handlers/main.yml and a second time by tasks
in the rolling_update playbook. This change turns off restarts by the handler.
Further, the restart initiated by the rolling_update playbook is more
efficient as it restarts all the OSDs on a host as one operation and waits
for them to rejoin the cluster. The restart task in the handler restarts one
OSD at a time and waits for it to join the cluster.
2018-05-22 19:23:07 +02:00
Andrew Schoen a9ad8eb5f3 ceph-validate: do not check ceph version on dev or rhcs installs
A dev or rhcs install does not require ceph_stable_release to be set and
instead generates that by looking at the installed ceph version.
However, at this point in the playbook ceph may not have been installed
yet and ceph-common has not been run.

Fixes: https://github.com/ceph/ceph-ansible/issues/2618

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-21 23:11:04 +02:00
Andrew Schoen e7d02a50d8 ceph-validate: move system checks from ceph-common to ceph-validate
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 645f61c351 ceph-defaults: remove backwards compat for containerized_deployment
The validation module does not get config options with the template
syntax rendered, so we're going to remove that and just default it to
False. The backwards compat was scheduled to be removed in 3.1 anyway.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen d30a99c350 validate: add support for containerized_deployment
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen f84c2ba27b ceph-defaults: fix failing tasks when osd_scenario was not set correctly
When devices is not defined because you want to use the 'lvm'
osd_scenario, but you've made a mistake selecting that scenario, these
tasks should not fail.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 1f15a81c48 ceph-defaults: move cephfs vars from the ceph-mon role
We're doing this so we can validate this in the ceph-validate role

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen ffe05872ac validate: only validate cephfs_pools on mon nodes
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 48c2a4fda8 validate: check rados config options
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 377fe81c10 validate: make sure ceph_stable_release is set to the correct value
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen ba7f09c0a7 ceph-validate: move var checks from ceph-common into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 32bac6b491 ceph-validate: move var checks from ceph-osd into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 29a9dffc83 ceph-validate: move ceph-mon config checks into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen d87a32347f adds a new ceph-validate role
This will be used to validate config given to ceph-ansible.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Sébastien Han 2f43e9dab5 defaults: restart_osd_daemon unit spaces
An extra space in the systemctl list-units output can cause restart_osd_daemon.sh to
fail.

It looks like if you have more services enabled on the node, the space
between "loaded" and "active" becomes wider than the single space
expected by the command[1].

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1573317
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-18 17:53:47 +02:00
Michael Vollman ed050bf3f6 Do nothing when mgr module is in good state
Check whether a mgr module is supposed to be disabled before disabling
it and whether it is already enabled before enabling it.

Signed-off-by: Michael Vollman <michael.b.vollman@gmail.com>
2018-05-18 15:21:45 +02:00
Andy McCrae f45662e270 Fix template reference for ganesha.conf
We can simply reference the template name since it exists within the
role that we are calling. We don't need to check the ANSIBLE_ROLE_PATH
or playbooks directory for the file.
2018-05-17 15:23:52 +02:00
Andy McCrae 226f80c22b Install packages as a list
To make the package installation more efficient we should install
packages as a list rather than as individual tasks or using a
"with_items" loop. The package managers can handle a list passed to them
to install in one go.

We can use a specified list and substitute any packages that are not to
be installed with the ceph-common package, which is installed on every
package install, then apply the unique filter to the package install
list.
2018-05-16 09:59:00 +02:00
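A hedged sketch of the pattern: build a single list, swap out unwanted entries for ceph-common (installed everywhere anyway), deduplicate, and hand the whole list to the package manager in one transaction. Variable and package names are illustrative:
```
- name: install ceph packages in one transaction
  package:
    name: "{{ ceph_pkgs | unique }}"
    state: "{{ (upgrade_ceph_packages | bool) | ternary('latest', 'present') }}"
  vars:
    ceph_pkgs:
      - ceph-common
      - "{{ (mon_group_name in group_names) | ternary('ceph-mon', 'ceph-common') }}"
      - "{{ (osd_group_name in group_names) | ternary('ceph-osd', 'ceph-common') }}"
```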
Guillaume Abrioux f749830897 mon: refactor of mgr key fetching
There is no need to stat for created mgr keyrings since they are created
anyway when deploying a ceph cluster > jewel. In case of a jewel
deployment we won't enter that block.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-16 09:44:58 +02:00
Guillaume Abrioux 926be51b44 mgr: delete copy_configs.yml (containerized)
This file is a leftover from PR ceph/ceph-ansible#2516
It is not used anymore so it can be removed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-15 19:30:34 +02:00
Sébastien Han 52fc8a0385 rolling_update: move mgr key creation
Until all the mons have been updated to Luminous, there is no way to
create a key. So we should do the key creation in the mon role only if
we are not part of an update.
If we are, then the key creation is done after the mons upgrade to
Luminous.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574995
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-15 09:01:42 +02:00
Sébastien Han e810fb217f Revert "mon: fix mgr keyring creation when upgrading from jewel"
This reverts commit 259fae931d.
2018-05-15 09:01:42 +02:00
Guillaume Abrioux a145caf947 iscsi-gw: fix issue when trying to mask target
Trying to mask the target when `/etc/systemd/system/target.service` doesn't
exist seems to be a bug.
There is no need to mask a unit file which doesn't exist.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-14 21:42:22 +02:00
Sébastien Han 8c7c11b774 iscsi: add python-rtslib repository
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-14 21:42:22 +02:00
Andy McCrae 08a2b58d39 Allow os_tuning_params to overwrite fs.aio-max-nr
The order of fs.aio-max-nr (which is hard-coded to 1048576) means that
if you set fs.aio-max-nr in os_tuning_params it will effectively be
ignored for bluestore scenarios.

To resolve this we should move the setting of fs.aio-max-nr above the
setting of os_tuning_params; this way the operator can define the
value of fs.aio-max-nr to be something other than 1048576 if they want
to.

Additionally, we can make the sysctl settings happen in one task rather
than multiple.
2018-05-11 10:49:37 +01:00
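A sketch of the reordering, assuming the sysctl module is used for both steps; because os_tuning_params is applied last, an operator-supplied fs.aio-max-nr wins:
```
- name: set default fs.aio-max-nr for bluestore
  sysctl:
    name: fs.aio-max-nr
    value: '1048576'
    sysctl_set: yes
  when: osd_objectstore == 'bluestore'

- name: apply operator-provided os_tuning_params
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: yes
  with_items: "{{ os_tuning_params }}"
```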
Guillaume Abrioux f60b049ae5 client: remove default value for pg_num in pools creation
Trying to set the default value for pg_num to
`hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num']` will
break in the case of an external client nodes deployment.
The `pg_num` attribute should be mandatory and be tested in the future
`ceph-validate` role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-10 11:51:02 -07:00
Gregory Meno 26f6a65042 adds missing state needed to upgrade nfs-ganesha
In the tasks for os_family Red Hat we were missing this.

fixes: bz1575859
Signed-off-by: Gregory Meno <gmeno@redhat.com>
2018-05-09 19:58:04 +00:00
Guillaume Abrioux 259fae931d mon: fix mgr keyring creation when upgrading from jewel
On containerized deployments,
when upgrading from jewel to luminous, mgr keyring creation fails because the
command to create the mgr keyring is executed on a container that is still
running jewel, since the container is restarted later to run the new
image; therefore, it fails with a bad entity error.

To get around this situation, we can delegate the command to create
these keyrings to the first monitor when we are running the playbook on the last monitor.
That way we ensure we will issue the command on a container that has
been properly restarted with the new image.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574995

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-09 10:29:48 -07:00
Guillaume Abrioux 7b387b506a osd: clean legacy syntax in ceph-osd-run.sh.j2
Quick cleanup of a legacy syntax due to e0a264c7e

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-09 07:29:33 +02:00
Simone Caronni b12bf62c36 Make sure the restart_mds_daemon script is created with the correct MDS name 2018-05-08 20:53:15 +02:00
Sébastien Han 07ca91b5cb common: enable Tools repo for rhcs clients
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574458
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-08 16:12:30 +02:00
Andy McCrae e99351b95b Fix install of nfs-ganesha-ceph for Debian/SuSE
The Debian and SuSE installs for nfs-ganesha from the non-rhcs repository
require you to set allow_unauthenticated for Debian, and disable_gpg_check
for SuSE. The nfs-ganesha-rgw package already does this, but the
nfs-ganesha-ceph package will fail to install because of this same
issue.

This PR moves the installations to happen when the appropriate flags are
set to True (nfs_obj_gw & nfs_file_gw), but does it per distro (one for
SuSE and one for Debian) so that the appropriate flag can be passed to
ignore the GPG check.
2018-05-04 15:13:59 +02:00
Ramana Raja 31762dede3 ceph-nfs: disable attribute caching
When 'ceph_nfs_disable_caching' is set to True, disable attribute
caching done by Ganesha for all Ganesha exports.

Signed-off-by: Ramana Raja <rraja@redhat.com>
2018-05-04 09:47:54 +02:00
Sébastien Han 4a186237e6 common: copy iso files if rolling_update
If we are in the middle of an update we want to get the new package
version being installed, so the task that copies the repo files should
not be skipped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1572032
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-03 17:18:55 +02:00
Andy McCrae d142be0422 Move apt cache update to individual task per role
The apt-cache update can fail due to transient issues related to the
action being a network operation. To reduce the impact of these
transient failures this patch adds a retry to the update_cache task.

However, the apt_repository tasks which would perform an apt_update
won't retry the apt_update on a failure in the same way, as such this PR
moves the apt_update into an individual task, once per role.

Finally, the apt_repository tasks no longer have a changed_when: false,
and the apt_cache update is only performed once per role, if the
repositories change. Otherwise the cache is updated on the "apt" install
tasks if the cache_timeout has been reached.
2018-05-03 14:02:15 +02:00
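A minimal sketch of the retried cache refresh (the registered variable name and retry counts are illustrative):
```
- name: update apt cache
  apt:
    update_cache: yes
  register: apt_cache_refresh
  retries: 5
  delay: 10
  until: apt_cache_refresh is succeeded
```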
Guillaume Abrioux 6fe8df627b client: fix pool creation
The value in `docker_exec_client_cmd` doesn't allow checking for
existing pools because it's set with a wrong value for the entrypoint
that is going to be used.
It means the check was going to fail anyway even if pools actually exist.

Using jinja syntax to set `docker_exec_cmd` allows handling the case
where you don't have monitors in your inventory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-03 08:22:40 +02:00
Sébastien Han 43e23ffe4d mon: change application pool support
If openstack_pools contains an application key it will be used to apply
this application pool type to a pool.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1562220
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-30 09:42:58 +02:00
Guillaume Abrioux 75ed437d4e check if pools already exist before creating them
Add a task to check if pools already exist before we create them.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-30 08:15:18 +02:00
Guillaume Abrioux a68091c923 tests: update the type for the rule used in pools
As of ceph 12.2.5 the type of the parameter `type` is not a name anymore but
an id, therefore an `int` is expected; otherwise it will fail with the
following error.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-30 08:15:18 +02:00
Sébastien Han 12eebc31fb mon/client: honor key mode when copying it to other nodes
The last mon creates the keys with a particular mode; while copying them
to the other mons (first and second) we must re-use the mode that was
set.

The same applies for the client node: the slurp preserves the initial
'item' so we can get the mode for the copy.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han 74494253fa mon: remove redundant copy task
We had the same task twice, and one was overriding the mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han 85732d11b9 mon/client: remove acl code
Applying ACLs to the keyrings is not done anymore, so let's remove this
code.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han cfe8e51d99 mon/client: apply mode from ceph_key
Do not use a dedicated task for this but use the ceph_key module
capability to set file mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Di Xu 113eb25424 add AArch64 to supported architecture
works on AArch64 platform
2018-04-23 10:23:21 +02:00
Sébastien Han 949507d304 mon: remove mgr key from ceph_config_keys
This key is created after the last mon is up so there is no need to try
to push it from the first mon. The initial mon container is not creating
the mgr key; Ansible does. So this key will never exist.
The key will go into the fetch dir once the last mon is up, then when
the ceph-mgr plays it will try to get it from the fetch directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 10:17:24 +02:00
Sébastien Han 35c1eb7183 mon: remove mon map from ceph_config_keys
During the initial bootstrap of the first mon, the monmap file is
destroyed so it's not available and ansible will never find it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 10:17:24 +02:00
Sébastien Han 65ba85aff6 Expose /var/run/ceph
Useful for software that does data collection/monitoring like collectd.
It can connect to the socket and then retrieve information.

Even though the sockets are exposed now, I'm keeping the docker exec to
check the socket; this will allow newer versions of ceph-ansible to work
with older versions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1563280
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han bf1e70e8cf default: extend ceph_uid and gid
We now have the ability to detect the uid/gid of the ceph user depending
on the distribution we are running on, and we do so for non-container
deployments as well.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han f3656ad167 move create ceph initial directories to default
This is needed for both non-container and container deployments.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han 641f141c0f selinux: remove chcon calls
We now bindmount with the :z option at the end of the -v command so
this will basically run the exact same command as we used to run. So to
speak:

chcon -Rt svirt_sandbox_file_t /var/lib/ceph

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-19 14:59:37 +02:00
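A sketch of the equivalent run line; the :z suffix makes Docker relabel the bind-mounted content, replacing the explicit chcon call (image name and other flags are illustrative):
```
- name: start the ceph-mon container with SELinux relabelling
  command: >
    docker run -d --name ceph-mon-{{ ansible_hostname }}
    -v /var/lib/ceph:/var/lib/ceph:z
    -v /etc/ceph:/etc/ceph:z
    ceph/daemon mon
```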
Sébastien Han 90e47c5fb0 client: add a --rm option to run the container
This fixes the case where the playbook died and never removed the
container. So now, once the container exits it will remove itself from
the container list.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-19 14:59:37 +02:00
Sébastien Han 6c742376fd client: import the key into ceph if copy_admin_key is true
If the user has set copy_admin_key to true we assume he/she wants to
import the key into Ceph and not only create the key on the filesystem.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 17:46:54 +02:00
Sébastien Han 424815501a client: add quotes to the dict values
ceph-authtool does not support raw arguments so we have to quote the caps
declaration like this: allow 'bla bla' instead of allow bla bla

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 17:46:54 +02:00
Sébastien Han d2a2793cb0 refactor the way we copy keys
This commit does a couple of things:

* use a common.yml file that contains things that can be played on both
container and non-container

* refactor the ability to copy the admin key to the nodes

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 16:46:33 +02:00
Randy J. Martinez 127a643fd0 ceph-defaults: fix ceph_uid fact on container deployments
Red Hat is now using the tags [3,latest] for the image rhceph/rhceph-3-rhel7.
Because of this, the ceph_uid conditional passes for Debian
when 'ceph_docker_image_tag: latest' is used on RH deployments.
I've added an additional task to check for the rhceph image specifically,
and also updated the RH family task for the ceph/daemon [centos|fedora] tags.

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
2018-04-17 16:54:51 +02:00
Sébastien Han a98885a71e rhcs: re-add apt-pinning
When installing rhcs on Debian systems the Red Hat repos must have the
highest priority so we avoid package conflicts and install the rhcs
version.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1565850
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-17 16:07:06 +02:00
Guillaume Abrioux 899b0eb451 defaults: check only 1 time if there is a running cluster
There is no need to check for a running cluster n*nodes times in
`ceph-defaults`, so let's add a `run_once: true` to save some resources
and time.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-16 11:23:00 +02:00
Sébastien Han 5bbbce527e osd: do not do anything if the dev has a partition
Regardless of whether the partition is 'ceph' or something else, we don't want
to be as strict as checking for a particular partition.
If the drive has a partition, we just don't do anything.

This solves the case where the server reboots and disks get a different
/dev/sda (node) allocation. In this case, prior to restarting the server
/dev/sda was an OSD, but now it's /dev/sdb and the other way around.
In such a scenario, we will try to prepare the OSD and create a new
partition, so let's not mess around with devices that have partitions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1498303
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-13 19:11:15 +02:00
Sébastien Han 37117071eb common: add tools repo for iscsi gw
To install iscsi gw packages we need to enable the tools repo.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1547849
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-12 13:38:34 +02:00
Douglas Fuller c8573fe0d7 Remove deprecated allow_multimds
allow_multimds will be officially deprecated in Mimic; specify it
only for versions of Ceph where it was declared stable. Going
forward, specify only max_mds.

Signed-off-by: Douglas Fuller <dfuller@redhat.com>
2018-04-12 10:29:17 +02:00
vasishta p shastry 020e66c1b4 Fixed a typo (extra space) 2018-04-11 14:21:15 +02:00
vasishta p shastry e1a1f81b6f osd: to support copy_admin_key 2018-04-11 14:21:15 +02:00
vasishta p shastry db3a5ce6d9 mds: to support copy_admin_keyring 2018-04-11 14:21:15 +02:00
vasishta p shastry 6b59416f75 nfs: to support copy_admin_key - containerized 2018-04-11 14:21:15 +02:00
Ali Maredia 01c58695fc nfs: ensure the nfs-server service is stopped
NFS-ganesha cannot start if the nfs-server service
is running. This commit stops nfs-server in case it
is running on a (debian, redhat, suse) node before
the nfs-ganesha service starts up.

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2018-04-11 14:00:48 +02:00
Ramana Raja 4a430ae29a ceph-nfs: allow disabling ganesha caching
Add a variable, ceph_nfs_disable_caching, that if set to true
disables ganesha's directory and attribute caching as much as
possible.

Also, disable caching done by ganesha, when 'nfs_file_gw'
variable is true, i.e., when Ganesha is used as CephFS's gateway.
This is the recommended Ganesha setting as libcephfs already caches
information. And doing so helps avoid cache incoherency issues
especially with clustered ganesha over CephFS.

Fixes: https://tracker.ceph.com/issues/23393

Signed-off-by: Ramana Raja <rraja@redhat.com>
2018-04-11 13:56:40 +02:00
Sébastien Han 82ccbdafbc ceph-defaults: bring backward compatibility for old syntax
If people keep on using the mon_cap, osd_cap etc., the playbook will
translate this old syntax on the fly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-11 12:18:34 +02:00
Sébastien Han 9657e4d6fa ceph_key: use ceph_key in the playbook
Replaced all the occurrences of raw commands using the 'command' module
with the ceph_key module instead.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-11 12:18:34 +02:00
Guillaume Abrioux 66c4118dcd defaults: fix backward compatibility
Backward compatibility with `ceph_mon_docker_interface` and
`ceph_mon_docker_subnet` was not working since there was no lookup on
`monitor_interface` and `public_network`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-10 00:19:11 +02:00
Ken Dreyer 3752cc6f38 common: upgrade/install ceph-test RPM first
Prior to this change, if a user had ceph-test-12.2.1 installed, and
upgraded to ceph v12.2.3 or newer, the RPM upgrade process would
fail.

The problem is that the ceph-test RPM did not depend on an exact version
of ceph-common until v12.2.3.

In Ceph v12.2.3, ceph-{osdomap,kvstore,monstore}-tool binaries moved
from ceph-test into ceph-base. When ceph-test is not yet up-to-date, Yum
encounters package conflicts between the older ceph-test and newer
ceph-base.

When all users have upgraded beyond Ceph < 12.2.3, this is no longer
relevant.
2018-04-09 18:09:52 +02:00
Sébastien Han bb60f2fea4 ceph-defaults: fix ceph_uid for container image tag latest
According to our recent change, we now use "CentOS" as the latest
container image. We need to reflect this in the ceph_uid.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-09 13:54:55 +02:00
Zack Cerza 0123d790cd Use the CentOS repo for Red Hat dev packages
No use even trying to use something that doesn't exist.

Signed-off-by: Zack Cerza <zack@redhat.com>
2018-04-09 10:05:57 +02:00
Attila Fazekas ecd3563c21 Deploying without managed monitors failed
TripleO deployment failed when the monitors are not managed
by TripleO itself, with:
    FAILED! => {"msg": "list object has no element 0"}

The failing play item was introduced by
 f46217b69a .

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1552327

Signed-off-by: Attila Fazekas <afazekas@redhat.com>
2018-04-04 18:16:46 +02:00
Guillaume Abrioux dcf6a246a4 defaults: remove `run_once: true` when creating fetch_directory
Because of `serial: 1`, it can be an issue when the playbook is being
run on client nodes.
Since the refactor of `ceph-client` we skip the role `ceph-defaults` on
every node except the first client node; it means that the task is not
going to be played because of `run_once: true`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux 18c0c7a508 config: use fact `ceph_uid`
Use fact `ceph_uid` in the task which ensures `/etc/ceph` exists in
containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux 9c979c6390 clients: refact `ceph-clients` role
This commit refacts this role so we don't have to pull container image
on client nodes just to create pools and keys.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1550977

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux cefd471967 client: remove legacy code
This seems to be a leftover.
This commit removes an unnecessary 'set linux permissions' on
`/var/lib/ceph`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux cf27c5e941 move selinux check to `ceph-defaults`
This check is alone in `ceph-docker-common` since a previous code
refactor.
Moving this check into `ceph-defaults` allows us to run `ceph-clients`
without having to run `ceph-docker-common`, even in non-containerized
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Sébastien Han f3caee8460 ceph-iscsi: fix certificates generation and distribution
Prior to this patch, the certificates were being generated on a single
node only (because of the run_once: true). Thus certificates were not
distributed to all the gateway nodes.

This would require a second ansible run to work. This patch fixes the
creation and the keys' distribution on all the nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540845
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-04 09:27:39 +02:00
Randy J. Martinez ca572a11f1 ceph-mds: delete duplicate tasks which cause multimds container deployments to fail.
This update will resolve the error ['cephfs' is undefined] in multimds container deployments.
See: roles/ceph-mon/tasks/create_mds_filesystems.yml. The same last two tasks are present there, and actually need to happen in that role since "{{ cephfs }}" gets defined in
roles/ceph-mon/defaults/main.yml, and not roles/ceph-mds/defaults/main.yml.

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
2018-03-29 09:32:40 +02:00
Alfredo Deza 3fcf966803 ceph-osd note that some scenarios use ceph-disk vs. ceph-volume
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2018-03-29 09:11:33 +02:00
John Fulton e6e6bd078a Refer to expected-num-ojects as expected_num_objects, not size
Follow up patch to PR 2432 [1] which replaces "size" (sorry if
the original bug used that term, which can be confusing) with
expected_num_objects as is used in the Ceph documentation [2].

[1] https://github.com/ceph/ceph-ansible/pull/2432/files
[2] http://docs.ceph.com/docs/jewel/rados/operations/pools
2018-03-26 15:41:51 +02:00
Ning Yao 691ddf5349 cleanup osd.conf.j2 in ceph-osd
The osd crush location is set by ceph_crush in the library;
osd.conf.j2 is not used any more.

Signed-off-by: Ning Yao <yaoning@unitedstack.com>
2018-03-26 15:57:37 +08:00
Patrick Donnelly 7f91547304 setup cephx keys when not nfs_obj_gw
Copy the admin key when configured as nfs_file_gw (but not nfs_obj_gw). Also,
copy/setup RGW related directories only when configured as nfs_obj_gw.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-03-22 14:01:08 +01:00
Andrew Schoen 6cffbd5409 ceph-defaults: set is_atomic variable
This variable is needed for containerized clusters and is required for
the ceph-docker-common role. Typically the is_atomic variable is set in
site-docker.yml.sample, though, so if ceph-docker-common is used outside
of that playbook it needs to be set in another way. Moving the creation of
the variable inside this role means playbooks don't need to worry
about setting it.

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1558252

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-03-21 19:16:11 +01:00
Andy McCrae 388562a4af Simplify ceph.conf generation
Since the approach to creating a ceph.conf file has changed, and now
no longer relies on assembling config file fragments in /etc/ceph/ceph.d,
we can avoid the conf_overrides rendering on the local host and skip out
the tasks related to that, instead using just the config_template task
to configure the file directly.
2018-03-15 15:47:41 +01:00
Sébastien Han e3275c1ca1 osd: add fs.aio-max-nr tuning
The number of osds per node is limited by aio-max-nr; the default is low,
so we need to increase it.

Full story:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-August/020408.html

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1553407
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
Sébastien Han f432819c1e osd: apply sysctl right away
Without sysctl_set: yes the sysctl tuning will only get applied to
sysctl.conf but not on the fly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
Sébastien Han 0f8a4251ba move system tuning to osd role
The changes from these tasks only apply to osd nodes so there is no
reason to have them in ceph-common.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
Sébastien Han f119b25bbe client: implement proper pools creation
Just like we did for the monitor and openstack_config we now have the
ability to precisely create pools.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han e302c1baae mon: add support for erasure code pool
You can now specify type: erasure and erasure_profile to use when
declaring the pool dictionary.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han 277d885bc9 mon: add support for pgp, pool type and rule name
When creating pools, it's crucial to expose all the options available as
part of the pool creation command. As explained in:
http://docs.ceph.com/docs/jewel/rados/operations/pools/

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
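An illustrative pool definition using the keys described by these commits (the exact key names supported by the role may differ):
```
openstack_pools:
  - name: volumes
    pg_num: 64
    pgp_num: 64
    type: replicated        # or 'erasure'
    rule_name: replicated_rule
  - name: backups
    pg_num: 32
    type: erasure
    erasure_profile: default
```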
Sébastien Han 26bc00fb74 mon: fail if pool creation fails
There is no reason to continue the deployment if these tasks fail.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1546185
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han 0011edd2bc mon: add support for expected-num-objects
This commit adds the support for expected-num-objects when creating a pool.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1541520
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han 18402b636f defaults: add useful info if daemon are not restarted properly
If OSDs don't restart normally we now also dump info of the crush map,
crush rules, crush tree and pools.

If the monitors don't restart normally we also print the socket status
by calling mon_status and quorum_status.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
jtudelag 691f7c5146 Adds handy ceph aliases when using containerized installations.
Same approach as openshift-ansible etcdctl:

* https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/etcd/tasks/auxiliary/drop_etcdctl.yml
* https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/etcd/etcdctl.sh
2018-03-08 13:56:39 +01:00
Guillaume Abrioux 9181c94adf client: fix pgs num for client pool creation
The `pools` dict defined in `roles/ceph-client/defaults/main.yml`
shouldn't have `{{ ceph_conf_overrides.global.osd_pool_default_pg_num
}}` as default value for `pgs` keys.

For instance, if you want some pools to be created but without explicitly
specifying the pgs for these pools (it means you want to use the
`osd_pool_default_pg_num`), you will be obliged to define
`{{ ceph_conf_overrides.global.osd_pool_default_pg_num }}` anyway while you
wanted to use the current default value already defined in the cluster which is
retrieved early in the playbook and stored in the
`{{ osd_pool_default_pg_num }}` fact.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-03-07 11:18:04 +01:00
Sébastien Han 96c049be5b common: run updatedb task on debian systems only
The command doesn't exist on Red Hat systems so it's better to skip it
instead of ignoring the error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han a52ed43093 mon: fix osd_pool_default_crush_rule persistence and effectiveness
Running the last portion (insert new default and add new default crush
tasks) of crush_rules.yml only on the last monitor is
wrong since ceph CLI calls usually end up on the master having the
quorum, which is by default the one with the lowest IP.
So if we run the command and end up on another mon the creation will
happen on the default crush rule because that particular mon hasn't been
updated.
To fix this we remove the |last on the include and use run_once: true on
certain tasks, then we let the final two tasks run on all the monitors.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 47cef7a41d mon: fix set crush default rule
On releases after jewel the option
'osd_pool_default_crush_replicated_ruleset' does not exist anymore; it's
called osd_pool_default_crush_rule.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 3261ab23b8 osd: remove old crush_location implementation
This was causing a lot of pain with the handlers. Also the
implementation was not ideal since we were assembling files. Everything
can now be done with the ceph_crush module so let's remove that.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 73c4846744 mon: use ceph_crush module in the playbook
Instead of creating the CRUSH hierarchy with Ansible tasks using the
command module we now rely on the ceph_crush module.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Greg Charot 78c1f1938f mons: The current crush_rule playbook does not work if there is no default rule defined (default: true).
One might want to add new crush rules while keeping the current default rule.
Fixed it so that it works with all rules defined as "default: false". If multiple rules are defined as default (which should not happen) then the last rule listed in "crush_rules" is taken as the default.
2018-03-06 15:24:31 +00:00
Greg Charot 77f9c1df10 There is no reason the default crush_rule_hdd rule provided by ceph-ansible should be set as rack root + default ruleset 2018-03-06 15:24:31 +00:00
Greg Charot 50afc3fbf3 We don't want to automatically move the rbd pool to the new default crush rule. This operation shall be performed by the cluster operator. 2018-03-06 15:24:31 +00:00
Andy McCrae 04ca685ba7 Remove vars that are no longer used
As part of fcba2c801a these vars were
removed and no longer do anything:

radosgw_dns_name
radosgw_resolve_cname

This patch removes them from the group_vars files and defaults/main.yml
2018-03-06 09:16:25 +01:00
jtudelag c3267b77b7 Makes use of docker_exec_cmd in ceph-mon role.
Keeps consistency inside the role and among roles.
Makes the code more readable.
2018-03-05 12:48:35 +00:00
Sébastien Han cb0f598965 common: run updatedb task on debian systems only
The command doesn't exist on Red Hat systems so it's better to skip it
instead of ignoring the error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 7f19df8196 rgw: add cluster name option to the handler
If the cluster name is different from 'ceph', the command will fail, so
we need to pass the cluster name.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 9c85280602 rgw: ability to copy ceph admin key on containerized
If we now set copy_admin_key while running a containerized scenario, the
ceph admin key will be copied on the node.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 67f46d8ec3 rgw: run the handler on a mon host
In case the admin key wasn't copied over to the node this command would
fail. So it's safer to run it from a monitor directly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Guillaume Abrioux 6d35bc9bde client: use `ceph_uid` fact to set uid/gid on admin key
That task is failing on containerized deployment because `ceph:ceph`
doesn't exist.
The idea here is to use the `{{ ceph_uid }}` to set the ownerships for
the admin keyring when containerized_deployment.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540578

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-02-26 15:52:05 +01:00
Grant Slater 1e1b26ca4d mds: fix ansible_service_mgr typo
This commit fixes a typo introduced by 4671b9e74e
2018-02-26 13:05:14 +01:00
Andy McCrae c33dae7509 Revert "[TEST] Test setting up correct systemd file for nfs-ganesha"
The nfs-ganesha package has been fixed as part of this commit:
963b6681df

Once the package is rebuilt this should be good to merge.

This reverts commit e88af3c4cb.
2018-02-26 10:23:42 +01:00
Giulio Fidente a83e1aeea3 Make rule_name optional when defining items in openstack_pools
Previously it was necessary to provide a value (possibly an
empty string) for the "rule_name" key for each item in
openstack_pools. This change makes that optional and defaults to an
empty string when not given.
2018-02-23 15:11:53 +01:00
Sébastien Han 165d9dec10 remove kernel.pid_max
This is now managed by Ceph packages.

See: https://github.com/ceph/ceph/pull/18544/files

http://tracker.ceph.com/issues/21929

Closes: https://github.com/ceph/ceph-ansible/issues/2410

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-23 13:57:57 +01:00
Andy McCrae 2779d2a850 Adjust /etc/updatedb.conf to not parse /var/lib/ceph
Using updatedb -e doesn't make a permanent change, but will run updatedb
without the passed path.

To make this change more permanent we should update the
/etc/updatedb.conf file to include /var/lib/ceph.
2018-02-20 11:32:56 +01:00
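A hedged sketch of a permanent exclusion; the PRUNEPATHS value shown is illustrative and a real task would need to preserve the distribution's existing entries:
```
- name: exclude /var/lib/ceph from updatedb scans
  lineinfile:
    path: /etc/updatedb.conf
    regexp: '^PRUNEPATHS'
    line: 'PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"'
```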
Andy McCrae e88af3c4cb [TEST] Test setting up correct systemd file for nfs-ganesha
Don't merge this.
Test to see if we copy over the nfs-ganesha-lock.service.debian8 file
properly, and whether the Xenial CI job will work.

The upstream download.ceph.com nfs-ganesha package should be fixed for
xenial (which is in progress).
2018-02-20 10:49:37 +01:00
Paul Bourke 463b5c6b22 Remove redundant task to check if atomic
This fact is already set in site-docker.yml so there's no need to check
it again in ceph-docker-common

Signed-off-by: Paul Bourke <paul.bourke@oracle.com>
2018-02-19 10:10:46 +01:00
Andy McCrae 59a4335a56 Restart services if handler called
This patch fixes an issue where if hosts have different service lists,
it will prevent restarting changes on services that run later on.

For example, hostA in the mons and rgws group would initiate a config
change and restart of services on all mons and rgws hosts, even though
a separate hostB (which is only in the rgws group) has not had its
configuration changed yet. Additionally, when the second host has its
configuration changed as part of the ceph-rgw role, it will not initiate
a restart since its inventory name != the first host's.

To fix this we should run the restart once (using run_once: True)
as long as the host has called the handler. This will ensure that even
if only 1 host has called the handler it will initiate a restart on all
hosts that have called the handler.

Additionally, we add a var that is set when the handler runs, this will
ensure that only hosts that have called the handler get restarted.

Includes minor fix to remove unrequired "inventory_hostname in
play_hosts" when: clause. This is no longer required since the handlers
were changed. The host calling the handler will be in play_hosts
already.
2018-02-16 10:40:20 +01:00
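
A hypothetical sketch of that pattern (the handler name, the flag variable and the systemd unit below are illustrative, not the actual ceph-ansible handlers):

```
# Sketch: each host marks itself when its handler fires, then a single
# delegated loop restarts only the hosts that were marked.
- name: mark host for rgw restart
  set_fact:
    _restart_rgw_needed: true
  listen: "restart ceph rgws"

- name: restart rgw daemons on marked hosts
  command: systemctl restart ceph-radosgw.target
  delegate_to: "{{ item }}"
  run_once: true
  with_items: "{{ groups['rgws'] }}"
  when: hostvars[item]['_restart_rgw_needed'] | default(false) | bool
  listen: "restart ceph rgws"
```
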
Sébastien Han c816a9282c container: osd remove run_once
run_once does not behave well when used along with delegate. Thus,
using `| last` instead always brings the desired result.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-14 02:01:29 +01:00
Sébastien Han d47d02a5eb docker-common: fix container restart on new image
We now look for any existing containers; if there are any, we compare their
running image with the latest pulled container image.
For OSDs, we iterate over the list of running OSDs; this handles the
case where the first OSD of the list has been updated (runs the new
image) and the others have not.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526513
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-14 02:01:29 +01:00
Sébastien Han ebc195487c default: remove duplicate code
This is already defined in ceph-defaults.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-14 02:01:29 +01:00
Caleb Boylan 0be60456ce osd: Add support for multipath disks
Multipath disks have partitions with a different format than what
ceph-ansible currently supports; this update makes ceph-ansible
aware of that format so multipath disks can be used as OSDs.

Signed-off-by: Caleb Boylan <caleb.boylan@ormuco.com>
2018-02-09 18:06:25 +01:00
Andy McCrae b4dbc862d6 Set application for OpenStack pools
Since Luminous we need to set the application tag for each pool,
otherwise a CEPH_WARNING is generated when the pools are in use.

We should assign the OpenStack pools to their default which would be
"rbd". When updating to Luminous this would happen automatically to the
vms, images, backups and volumes pools, but for new deploys this is not
the case.
2018-02-09 17:15:55 +01:00
Sébastien Han 22f843e3d4 default: define 'osd_scenario' variable
osd_scenario does not exist in the ceph-default role, so if we try to
play ceph-default on an OSD node, the playbook will fail with an
undefined variable error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-08 17:42:12 +01:00
Guillaume Abrioux e537779bb3 osd: fix osd restart when dmcrypt
This commit fixes a bug that occurs especially for dmcrypt scenarios.

There is an issue where the 'disk_list' container can't reach the ceph
cluster because it's not launched with `--net=host`.

If this container can't reach the cluster, it will hang on this step
(when trying to retrieve the dm-crypt key) :

```
+common_functions.sh:448: open_encrypted_part(): ceph --cluster abc12 --name \
client.osd-lockbox.9138767f-7445-49e0-baad-35e19adca8bb --keyring \
/var/lib/ceph/osd-lockbox/9138767f-7445-49e0-baad-35e19adca8bb/keyring \
config-key get dm-crypt/osd/9138767f-7445-49e0-baad-35e19adca8bb/luks
+common_functions.sh:452: open_encrypted_part(): base64 -d
+common_functions.sh:452: open_encrypted_part(): cryptsetup --key-file \
-luksOpen /dev/sdb1 9138767f-7445-49e0-baad-35e19adca8bb
```

It means the `ceph-run-osd.sh` script won't be able to start the
`osd_disk_activate` process in ceph-container because it won't have
filled the `$DOCKER_ENV` environment variable properly.

Adding `--net=host` to the 'disk_list' container fixes this issue.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1543284

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-02-08 15:45:13 +01:00
Giulio Fidente bdcc52b96d Check for docker sockets named after both _hostname or _fqdn
While hostname -f will always return a hostname including its
domain part and -s one without the domain part, the behavior when
no arguments are given may or may not include the domain part
depending on how the system is configured; the socket name might
then not match the instance name.
2018-02-06 14:16:54 +01:00
Greg Charot a6d1922a2e mon: Fixed crush_rule_config for containerised deployment.
It was called too early: the container was not yet started, so the commands failed.
Moved the section after the include of docker/main.yml.

Signed-off-by: Greg Charot <gcharot@redhat.com>
2018-02-06 05:12:59 +01:00
Guillaume Abrioux dd0c98c5a2 common: do not use `shell` module when it is not needed
There is no need here to use `shell` instead of `command`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-31 10:45:34 +01:00
Guillaume Abrioux deaf273b25 syntax: change local_action syntax
Use a nicer syntax for `local_action` tasks.
We used to have oneliner like this:
```
local_action: wait_for port=22 host={{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }} state=started delay=10 timeout=500 }}
```

The usual syntax:
```
    local_action:
      module: wait_for
      port: 22
      host: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
      state: started
      delay: 10
      timeout: 500
```
is nicer and keeps consistency across the whole
playbook.

This also fixes a potential issue with missing quotation:

```
Traceback (most recent call last):
  File "/tmp/ansible_wQtWsi/ansible_module_command.py", line 213, in <module>
    main()
  File "/tmp/ansible_wQtWsi/ansible_module_command.py", line 185, in main
    rc, out, err = module.run_command(args, executable=executable, use_unsafe_shell=shell, encoding=None, data=stdin)
  File "/tmp/ansible_wQtWsi/ansible_modlib.zip/ansible/module_utils/basic.py", line 2710, in run_command
  File "/usr/lib64/python2.7/shlex.py", line 279, in split
    return list(lex)
  File "/usr/lib64/python2.7/shlex.py", line 269, in next
    token = self.get_token()
  File "/usr/lib64/python2.7/shlex.py", line 96, in get_token
    raw = self.read_token()
  File "/usr/lib64/python2.7/shlex.py", line 172, in read_token
    raise ValueError, "No closing quotation"
ValueError: No closing quotation
```

writing `local_action: shell echo {{ fsid }} | tee {{ fetch_directory }}/ceph_cluster_uuid.conf`
can cause trouble because Ansible complains about missing quotes; this fix solves that issue.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1510555

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-31 10:45:34 +01:00
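
As an aside, for that particular one-liner the quoting problem can also be avoided entirely by writing the file with the copy module instead of shell + tee; a sketch:

```
# Sketch: write the fsid with the copy module rather than `shell ... | tee`,
# which sidesteps the shlex quoting issue described above.
    local_action:
      module: copy
      content: "{{ fsid }}"
      dest: "{{ fetch_directory }}/ceph_cluster_uuid.conf"
```
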
Sébastien Han 6f9dd26caa config: remove any spaces in public_network or cluster_network
With two public networks configured, we found that with
"NETWORK_ADDR_1, NETWORK_ADDR_2" the install process consistently broke,
trying to find the docker registry on the second network and not
finding the mon container.

Without spaces, "NETWORK_ADDR_1,NETWORK_ADDR_2", the install succeeds,
so the containerized install is more particular about the formatting of this line.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1534003
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-30 17:47:15 +01:00
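
A defensive way to tolerate either formatting is to strip the spaces before the value is consumed; a minimal sketch (not the actual fix in the config role):

```
# Sketch: normalize a comma-separated network list by stripping any spaces
# before it is rendered into ceph.conf or passed to the containers.
- name: strip spaces from public_network
  set_fact:
    public_network: "{{ public_network | replace(' ', '') }}"
```
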
Sébastien Han 5132cc3de4 Do not search osd ids if ceph-volume
Description of problem: The 'get osd id' task goes through all 10 retries (and their respective timeouts) to make sure that the number of OSDs in the osd directory matches the number of devices.

This happens always, regardless if the setup and deployment is correct.

Version-Release number of selected component (if applicable): Surely the latest. But any ceph-ansible version that contains ceph-volume support is affected.

How reproducible: 100%

Steps to Reproduce:
1. Use ceph-volume (LVM) to deploy OSDs
2. Avoid using anything in the 'devices' section
3. Deploy the cluster

Actual results:
TASK [ceph-osd : get osd id _uses_shell=True, _raw_params=ls /var/lib/ceph/osd/ | sed 's/.*-//'] **********************************************************************************************************************************************
task path: /Users/alfredo/python/upstream/ceph/src/ceph-volume/ceph_volume/tests/functional/lvm/.tox/xenial-filestore-dmcrypt/tmp/ceph-ansible/roles/ceph-osd/tasks/start_osds.yml:6
FAILED - RETRYING: get osd id (10 retries left).
FAILED - RETRYING: get osd id (9 retries left).
FAILED - RETRYING: get osd id (8 retries left).
FAILED - RETRYING: get osd id (7 retries left).
FAILED - RETRYING: get osd id (6 retries left).
FAILED - RETRYING: get osd id (5 retries left).
FAILED - RETRYING: get osd id (4 retries left).
FAILED - RETRYING: get osd id (3 retries left).
FAILED - RETRYING: get osd id (2 retries left).
FAILED - RETRYING: get osd id (1 retries left).
ok: [osd0] => {
    "attempts": 10,
    "changed": false,
    "cmd": "ls /var/lib/ceph/osd/ | sed 's/.*-//'",
    "delta": "0:00:00.002717",
    "end": "2018-01-21 18:10:31.237933",
    "failed": true,
    "failed_when_result": false,
    "rc": 0,
    "start": "2018-01-21 18:10:31.235216"
}

STDOUT:

0
1
2

Expected results:
There aren't any (or just a few) timeouts while the OSDs are found

Additional info:
This is happening because the check is mapping the number of "devices" defined for ceph-disk (in this case it would be 0) to match the number of OSDs found.

Basically this line:

    until: osd_id.stdout_lines|length == devices|unique|length

Means in this 2 OSD case it is trying to ensure the following incorrect condition:

    until: 2 == 0

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1537103
2018-01-30 14:44:38 +01:00
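
A sketch of how that retry loop can simply be skipped for the lvm scenario (the condition shown is illustrative, not the exact role code):

```
# Hypothetical sketch: only poll /var/lib/ceph/osd when the expected OSD count
# can be derived from the 'devices' list, i.e. not for osd_scenario=lvm.
- name: get osd id
  shell: ls /var/lib/ceph/osd/ | sed 's/.*-//'
  register: osd_id
  retries: 10
  delay: 10
  until: osd_id.stdout_lines | length == devices | unique | length
  failed_when: false
  when: osd_scenario != 'lvm'
```
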
Andy McCrae 481173f203 Add default for radosgw_keystone_ssl
This should default to False. The default for Keystone is not to use PKI
keys; additionally, anybody using this setting had to have set it
manually before.

Fixes: #2111
2018-01-30 11:30:23 +01:00
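
In other words, the role default becomes something like the following sketch of a defaults entry (not the literal file):

```
# defaults sketch: do not assume Keystone PKI token signing unless the
# deployer explicitly enables it.
radosgw_keystone_ssl: false
```
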
Guillaume Abrioux f1232b33fd Revert "monitor_interface: document need to use monitor_address when using IPv6"
This reverts commit 10b91661ce.

This reverts also the same comment added in
1359869497
2018-01-29 14:43:24 +01:00
Eduard Egorov 93e9f3723b config: add host-specific ceph_conf_overrides evaluation and generation.
This allows us to use host-specific variables in ceph_conf_overrides variable. For example, this fixes usage of such variables (e.g. 'nss db path' having {{ ansible_hostname }} inside) in ceph_conf_overrides for rados gateway configuration (see profiles/rgw-keystone-v3) - issue #2157.

Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2018-01-26 10:15:03 +01:00
Guillaume Abrioux ec16cbdb1a defaults: avoid getting stuck (ceph --connect-timeout)
Sometimes the playbook gets stuck because, even with the `--connect-timeout=`
option, the connection to the existing ceph cluster never times out.

As a workaround, using the `timeout` command provided by coreutils will
actually time out if we can't connect to the cluster.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1537003

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-25 10:15:59 +01:00
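
A minimal sketch of the workaround (the timeout value and task name are illustrative):

```
# Sketch: bound the ceph CLI with coreutils' timeout so the task cannot hang
# even if --connect-timeout is not honoured by the client.
- name: check if a cluster with this name already exists
  command: timeout 10 ceph --connect-timeout 3 --cluster {{ cluster }} fsid
  register: ceph_current_fsid
  failed_when: false
  changed_when: false
```
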
Andrew Schoen 79473badfe ceph-osd: adds dmcrypt to the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-01-24 14:10:08 +01:00
Guillaume Abrioux 9306a1789c osds: change default value for `dedicated_devices`
This is to keep backward compatibility with stable-2.2 and satisfy the
check "verify dedicated devices have been provided" in
`check_mandatory_vars.yml`. This check is looking for
`dedicated_devices` so we need to default its value to
`raw_journal_devices` when `raw_multi_journal` is set to `True`.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1536098

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-22 18:02:51 +01:00
Sébastien Han f88795e843 rgw: disable legacy unit
Some systems that were deployed with old tools can leave units named
"ceph-radosgw@radosgw.gateway.service". As a consequence, they will
prevent the new unit to start.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1509584
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-18 14:12:18 +01:00
Andrew Schoen fb4a6dc9a4 docs for the crush_device_class option of lvm_volumes
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-01-17 13:49:29 +01:00
Andrew Schoen 6cbb56a3b6 ceph-osd: adds the crush_device_class param to the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-01-17 13:49:29 +01:00
Eduard Egorov 7d7080df6c crush: create rack type buckets and build crush tree according to {{ osd_crush_location }}.
Currently, we can define crush location for each host but only crush roots and crush rules are created. This commit automates other routines for a complete solution:
  1) Creates rack type crush buckets defined in {{ ceph_crush_rack }} of each osd host. If it's not defined by user then a rack named 'default_rack_{{ ceph_crush_root  }}' would be added and used in next steps.
  2) Move rack type crush buckets defined in {{ ceph_crush_rack }} into crush roots defined in {{ ceph_crush_root }} of each osd host.
  3) Move hosts defined in {{ ceph_crush_rack }} into crush roots defined in {{ ceph_crush_root }} of each osd host.

Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2018-01-11 17:42:18 +01:00
Sébastien Han 6db4aea453 osd: skip devices marked as '/dev/dead'
In a non-collocated scenario, if a drive is faulty we can't really
remove it from the 'devices' list without messing up or having to
re-arrange the order of 'dedicated_devices'. We want to keep this
device list ordered. This prevents the activation from failing on a
device that we know is faulty but that we can't remove yet without
breaking the mapping between dedicated_devices and devices.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-11 17:34:32 +01:00
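
Conceptually the change boils down to a skip like this sketch (the task shown is illustrative; real disk preparation is more involved):

```
# Hypothetical sketch: keep devices and dedicated_devices aligned by position,
# but skip entries marked as '/dev/dead' instead of removing them.
- name: prepare non-collocated osd disks
  command: ceph-disk prepare {{ item.0 }} {{ item.1 }}
  with_together:
    - "{{ devices }}"
    - "{{ dedicated_devices }}"
  when: item.0 != '/dev/dead'
```
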
Guillaume Abrioux 70401f955b container: trigger handlers on systemd file change
When a systemd unit file is changed we should trigger handlers to
restart the services.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-10 16:46:42 +01:00
Guillaume Abrioux b29a42cba6 handlers: avoid duplicate handler
Having handlers in both the ceph-defaults and ceph-docker-common roles can make the
playbook restart services twice. Handlers can be triggered a first
time because of a change in ceph.conf and a second time because a new
image has been pulled.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-10 16:46:42 +01:00
Sébastien Han 8a19a83354 container: restart container when there is a new image
There was no good way to implement this.
We had several options and none of them were ideal since handlers can
not be triggered across roles.
We could have achieved that in several ways:

* option 1 was to add a dependency in the meta of the ceph-docker-common
role. We had that long ago and decided to stop, so everything is
managed via site.yml

* option 2 was to import files from another role. This is messy and we
don't do that anywhere in the current code base, and we will continue not to.

There is option 3 where we pull the image from the ceph-config role.
This is not suitable either since the docker command won't be available
unless you run an Atomic distro. It would also mean pulling twice: a
first time in ceph-config and a second time in ceph-docker-common.

The only option I came up with was to duplicate a bit of the ceph-config
handlers code.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526513
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-10 16:46:42 +01:00
Guillaume Abrioux 900f447c82 containers: fix bug when looking for existing cluster
In containerized deployments, `docker_exec_cmd` is not set before the
task that tries to retrieve the current fsid is played, which means the
playbook considers there is no existing fsid and tries to generate a new one.

Typical error:

```
ok: [mon0 -> mon0] => {
    "changed": false,
    "cmd": [
        "ceph",
        "--connect-timeout",
        "3",
        "--cluster",
        "test",
        "fsid"
    ],
    "delta": "0:00:00.179909",
    "end": "2018-01-09 10:36:58.759846",
    "failed": false,
    "failed_when_result": false,
    "rc": 1,
    "start": "2018-01-09 10:36:58.579937"
}

STDERR:

Error initializing cluster client: Error('error calling conf_read_file: errno EINVAL',)
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-10 16:23:18 +01:00
Sébastien Han c2e04623a5 container: change the way we force no logs inside the container
Previously we were using ceph_conf_overrides; however, this doesn't play
nicely with software like TripleO that uses ceph_conf_overrides inside its
own code. For now, and since this is the only occurrence, we can
ensure no logging through the ceph.conf template.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1532619
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-10 16:21:47 +01:00
Guillaume Abrioux acfbebe67e defaults: rename check_socket files for containers
In containerized deployments, we are not looking for a socket but for a
running container.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-01-10 15:44:47 +01:00
Sébastien Han f0787e64da mon: use crush rules for non-container too
There is no reason why we can't use crush rules when deploying
containers, so move the include into main.yml so it can be called.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-10 15:21:36 +01:00
Sébastien Han 97f520bc74 containers: bump memory limit
A default value of 4GB for MDS is more appropriate and 3GB for OSD also.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1531607
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-01-09 11:26:50 +01:00
Sébastien Han 0b55abe3d0 mon: always run ceph-create-keys
ceph-create-keys is idempotent, so it's not an issue to run it each time
we play ansible. This also fixes issues where the 'creates' arg skips the
task and no keys get generated on newer versions, e.g. during an upgrade.

Closes: https://github.com/ceph/ceph-ansible/issues/2228
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-12-21 13:50:01 +01:00
Sébastien Han ad54e19262 rgw: disable legacy rgw service unit
When upgrading from OSP11 to OSP12 container, ceph-ansible attempts to
disable the RGW service provided by the overcloud image. The task
attempts to stop/disable ceph-rgw@{{ ansible-hostname }} and
ceph-radosgw@{{ ansible-hostname }}.service. The actual service name is
ceph-radosgw@radosgw.$name

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1525209
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-12-21 13:48:42 +01:00
Guillaume Abrioux 895949d6c4 osd: fix check gpt
the gpt label creation doesn't work even with the parted module.
This commit fixes the gpt label creation by using the parted command
instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-12-20 17:42:45 +01:00
Sébastien Han bbc79765f3 osd: best effort if no device is found during activation
We have a scenario where we switch from non-container to containers. This
means we don't know anything about the ceph partitions associated with an
OSD. Normally in a containerized context we have files containing the
preparation sequence. From these files we can get the capabilities of
each OSD. As a last resort we use a ceph-disk call inside a dummy bash
container to discover the ceph journal on the current osd.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1525612
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-12-19 14:40:48 +01:00
Sébastien Han dfbef8361d nfs: fix package install for debian/suse systems
This resolves the following error:
E: There were unauthenticated packages and -y was used without
--allow-unauthenticated

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-12-19 13:30:49 +01:00
Christian Berendt 50a848dc40 Rename fact docker_version to ceph_docker_version
The name docker_version is very generic and is also used by other
roles. As a result, there may be name conflicts. To avoid this a
ceph_ prefix should be used for this fact. Since it is an internal
fact renaming is not a problem.
2017-12-15 20:12:21 +01:00
Markos Chandras 162b7d2b23 roles: ceph-mgr: Install the ceph-mgr package on SUSE
The ceph-mgr package name is identical to RedHat so add the SUSE family
to the existing task.
2017-12-15 09:22:14 +01:00
Guillaume Abrioux a24fd1cfd9 client: don't make `osd_pool_default_pg_num` mandatory
making `osd_pool_default_pg_num` mandatory is a bit aggressive and is
irrelevant when you just want to create user keyrings.

Closes: #2241

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-12-14 17:22:07 +01:00
Guillaume Abrioux ab1dd3027a client: don't try to generate keys
the entrypoint to generate users keyring is `ceph-authtool`, therefore,
it can expand the `$(ceph-authtool --gen-print-key)` inside the
container. Users must generate a keyring themselves.
This commit also adds a check to ensure keyrings are properly filled when
`user_config: true`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-12-14 17:22:07 +01:00
Guillaume Abrioux 26afe46e13 docker: add missing condition for selinux tasks
on `client` and `mds` roles, it tries to set selinux even on non-RHEL
based distributions.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-12-14 17:00:14 +01:00
Sébastien Han 7eaf444328 default: look for the right return code on socket stat in-use
As reported in https://github.com/ceph/ceph-ansible/issues/2254, the
check with fuser is not ideal. If fuser is not available the return code
is 127. Here we want to make sure that we are looking for the correct return
code, which is 1.

Closes: https://github.com/ceph/ceph-ansible/issues/2254
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-12-14 16:59:14 +01:00
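
The distinction matters because fuser reports different return codes; a sketch of the corrected check (variable and task names are illustrative):

```
# Sketch: fuser exits 1 when no process uses the socket and 127 when fuser
# itself is missing; only rc == 1 should be treated as "socket not in use".
- name: check if the ceph socket is in use
  shell: fuser --silent {{ ceph_socket_path }}
  register: socket_in_use
  failed_when: false
  changed_when: false

- name: remove the stale socket
  file:
    path: "{{ ceph_socket_path }}"
    state: absent
  when: socket_in_use.rc == 1
```
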
John Fulton 8cba44262c Add flags for OSD 'docker run --cpuset-{cpus,mems}'
Add the variables ceph_osd_docker_cpuset_cpus and
ceph_osd_docker_cpuset_mems, so that a user may specify
the CPUs and memory nodes of NUMA systems on which OSD
containers are run.

Provides an example in osds.yaml.sample to guide the user
based on sample `lscpu` output, since cpuset-mems refers
to the memory by NUMA node only while cpuset-cpus can
refer to individual vCPUs within a NUMA node.
2017-12-14 16:39:35 +01:00
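
A group_vars sketch of the two variables (the values are illustrative; check the `lscpu` output for the actual CPU and NUMA node numbering on the target host):

```
# Pin OSD containers to CPUs 0-3 and NUMA memory node 0; these values end up
# as `--cpuset-cpus` and `--cpuset-mems` on the generated `docker run` line.
ceph_osd_docker_cpuset_cpus: "0-3"
ceph_osd_docker_cpuset_mems: "0"
```
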
Eduard Egorov a8a2c13f6a firewall: add mds, nfs, restapi and iscsi ports, remove 'configure_firewall' variable used for conditional execution. Include the task only on rpm-based systems.
Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2017-12-12 23:44:55 +01:00
Eduard Egorov 6a5e0da30d firewall: configure firewalld if it's already installed on the host (#2192).
Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2017-12-12 23:44:55 +01:00
Major Hayden 5676fa23b1 Convert interface names to underscores for facts
If a deployer uses an interface name with a dash/hyphen in it, such
as 'br-storage' for the monitor_interface group_var, the ceph.conf.j2
template fails to find the right facts. It looks for
'ansible_br-storage' but only 'ansible_br_storage' exists.

This patch converts the interface name to underscores when the
template does the fact lookup.
2017-12-12 09:03:40 +01:00
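
The fact lookup fix amounts to normalising the interface name before building the fact key; a sketch (the fact variable below is illustrative):

```
# Sketch: 'br-storage' has to be looked up as the 'ansible_br_storage' fact,
# so replace dashes with underscores before building the key.
- name: resolve the monitor address from a dashed interface name
  set_fact:
    _monitor_address: "{{ hostvars[inventory_hostname]['ansible_' + (monitor_interface | replace('-', '_'))]['ipv4']['address'] }}"
```
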
Konstantin Shalygin d7dadc3e7b ceph-osd: respect nvme partitions when device is a disk. 2017-12-12 09:03:18 +01:00
Guillaume Abrioux 6a9b5c9632 defaults: fix CI issue with ceph_uid fact
The CI complains because of the `ceph_uid` fact, which doesn't exist since
the docker image tag used in the CI doesn't match this condition.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-12-12 09:02:37 +01:00
Andrew Schoen 788c3f351a ceph-osd: adds osd_objectstore to the name when using the ceph_volume module
This allows for easier debugging if verbosity is not set high enough.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Andrew Schoen 5e3d8dbf63 ceph-osd: use the cluster param with the ceph_volume module
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Andrew Schoen 423166f671 ceph-osd: use the new ceph_volume module for the lvm scenario
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-12-11 09:58:06 -06:00
Sébastien Han 0ea1811f6f
Merge pull request #2226 from andymcc/gpt_mklabel
Skip mklabel gpt if already gpt
2017-12-11 03:12:46 -06:00
Andy McCrae 4f1e854c79 Use parted module instead of command 2017-12-11 17:33:40 +10:00
John Fulton ffae294288 Set tighter permissions on keyrings when containerized
During a containerized deployment, set the permissions
of ceph.client.admin.keyring and other keyrings to
chmod 600 and chown it to ceph.
2017-12-06 19:22:28 -05:00
Guillaume Abrioux b449b16edd
Merge pull request #2215 from squidboylan/support_loopback_devices
Add support for using loopback devices as OSDs
2017-11-28 14:04:47 +01:00
Sébastien Han f94b9040eb
Merge pull request #2214 from ceph/bz-1510555
handlers: restart daemons only if docker is running
2017-11-28 12:22:50 +01:00
Sébastien Han ef581f807d
Merge pull request #2202 from ceph/remove_leftover
osd: remove leftover and fix a typo
2017-11-28 12:21:13 +01:00
wintamute ebe0e60235 Openstack: replaced hardcoded pool names with variables for openstack (nova) user
(cherry picked from commit 2bf48f1)
2017-11-28 09:06:51 +01:00
Caleb Boylan 8f02bb007f Add support for using loopback devices as OSDs
This is particularly useful in CI environments where you don't have
the option of adding extra devices or volumes to the host. It is also
a simple change to support loopback devices.
2017-11-27 16:02:36 -08:00
Guillaume Abrioux b26a840002 handlers: restart daemons only if docker is running
In case where docker CLI is available but docker is not running, we
don't want to trigger the restart of the daemons.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1510555

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-27 14:59:30 +01:00
Sébastien Han d9cfe5f6df
Merge pull request #2177 from jprovaznik/rados
Allow to use rados for ganesha exports
2017-11-23 10:36:58 +01:00
Sébastien Han bb7b29a9fc common: install ceph-common on all the machines
Since some daemons now install their own packages the task checking the
ceph version fails on Debian systems. So the 'ceph-common' package must
be installed on all the machines.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-22 17:11:50 +01:00
Jan Provaznik 2435c48cd5 Allow to use rados for ganesha exports 2017-11-21 15:21:32 +01:00
Guillaume Abrioux 1cba626484 osd: remove leftover and fix a typo
This task was originally needed to fix a docker installation issue
(see: #1030). This has been fixed, therefore it can be removed.

Fixes: #2199

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-21 11:11:34 +01:00
Guillaume Abrioux efe06be10f osd: ensure a gpt label is set on device
ceph-disk prepare will fail on jewel if a GPT label is not present on
the device.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-17 17:32:23 +01:00
Guillaume Abrioux 3c6f2854fe
Merge pull request #2189 from fultonj/empty-acl
Make openstack_keys param support no acls list
2017-11-16 19:39:01 +01:00
John Fulton d73f751b63 Make openstack_keys param support no acls list
A recent change [1] required that the openstack_keys
param always contain an acls list. However, it's
possible it might not contain that list. Thus, this
patch sets a default for that list to be empty if it
is not in the structure as defined by the user.

[1] d65cbaa539
2017-11-16 11:29:59 -05:00
Sébastien Han f31d8557dd
Merge pull request #2182 from ceph/fix_reboot_rbd
rbd: enable ceph-rbd-mirror.target on releases prior to luminous
2017-11-16 16:55:39 +01:00
Sébastien Han 932345ab2a osd: remove leftover from osd partition
We used to support osds that are a partition. This is long gone so
removing this task.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-16 14:58:40 +01:00
Sébastien Han b1c1322357 osd: remove failed_when on activation
There is no need to continue if the activation fails.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-16 14:57:49 +01:00
Sébastien Han 80d3a242d0 osd: fix bad activation for dmcrypt
We were activating dmcrypt devices with the wrong command. Basically the
first task execute the wrong activate command. The task fails but
continues because of the 'failed_when: false'. Then the right activation
sequence is being done by the next task.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-16 14:55:08 +01:00
Sébastien Han cc264d6ba6
Merge pull request #2151 from hwoarang/add-opensuse
Add openSUSE Leap 42.3 support
2017-11-16 14:35:28 +01:00
Sébastien Han a98f14784a
Merge pull request #2172 from ceph/lvm-raw-device
lvm: add support for --data to be a raw device or partition
2017-11-16 14:14:23 +01:00
Guillaume Abrioux ccad0ebf26 rbd: enable ceph-rbd-mirror.target for releases <= luminous
when `ceph-rbd-mirror.target` is not enabled, the service won't start
after a reboot because there is a dependency between these two units.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-16 14:12:59 +01:00
Yixing Yan 097249371f fix: remove the duplicated code 2017-11-16 16:45:03 +08:00
Andrew Schoen 3c604f1115 lvm: support --data as a raw device or partition in ceph-volume
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-11-15 09:36:17 -06:00
Andrew Schoen 04f02910a9 lvm: ensure the data_vg exists before using it
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-11-15 09:36:17 -06:00
John Fulton d65cbaa539 Set permissions and ACLs of OpenStack keys on all ceph-mons
If ceph-ansible deploys a Ceph cluster with "openstack_config: true"
and sets the openstack_keys map to have certain ACLs or permissions,
the requested ACLs or permissions are only set on one of the monitor
nodes [2] when they should be set on all of them.

This patch solves [3] the above issue by having the chmod and setfacl
tasks iterate the list of mon nodes (including the mon node that the
task was delegated to) to apply the chmod or setfacl to the keys in
openstack_keys.

[1]
```
openstack_keys:
  - { name: client.openstack, key: "$(ceph-authtool --gen-print-key)", mon_cap: "allow r", osd_cap: "allow class-read object_prefix rbd_children, allow rwx pool=images, allow rwx pool=vms, allow rwx pool=volumes, allow rwx pool=backups", mode: "0600", acls: ["u:nova:r--", "u:cinder:r--", "u:glance:r--", "u:gnocchi:r--"] }
```
[2]
```
$ ansible mons -m shell -b -a "ls -l /etc/ceph/ceph.client.openstack.keyring ; getfacl /etc/ceph/ceph.client.openstack.keyring"
192.168.1.26 | SUCCESS | rc=0 >>
-rw-r-----+ 1 root root 253 Nov  3 20:30 /etc/ceph/ceph.client.openstack.keyring
user::rw-
user:glance:r--
user:nova:r--
user:cinder:r--
user:gnocchi:r--
group::---
mask::r--
other::---getfacl: Removing leading '/' from absolute path names

192.168.1.29 | SUCCESS | rc=0 >>
-rw-r--r--. 1 root root 253 Nov  3 20:30 /etc/ceph/ceph.client.openstack.keyring
user::rw-
group::r--
other::r--getfacl: Removing leading '/' from absolute path names

192.168.1.23 | SUCCESS | rc=0 >>
-rw-r--r--. 1 root root 253 Nov  3 20:30 /etc/ceph/ceph.client.openstack.keyring
user::rw-
group::r--
other::r--getfacl: Removing leading '/' from absolute path names

$
```
[3]
```
(undercloud) [stack@hci-director ceph-ansible]$ ansible mons -m shell -b -a "ls -l /etc/ceph/ceph.client.openstack.keyring ; getfacl /etc/ceph/ceph.client.openstack.keyring"
192.168.1.25 | SUCCESS | rc=0 >>
-rw-r-----+ 1 root root 253 Nov 14 01:12 /etc/ceph/ceph.client.openstack.keyring
user::rw-
user:glance:r--
user:nova:r--
user:cinder:r--
user:gnocchi:r--
group::---
mask::r--
other::---getfacl: Removing leading '/' from absolute path names

192.168.1.29 | SUCCESS | rc=0 >>
-rw-r-----+ 1 root root 253 Nov 14 01:12 /etc/ceph/ceph.client.openstack.keyring
user::rw-
user:glance:r--
user:nova:r--
user:cinder:r--
user:gnocchi:r--
group::---
mask::r--
other::---getfacl: Removing leading '/' from absolute path names

192.168.1.27 | SUCCESS | rc=0 >>
-rw-r-----+ 1 root root 253 Nov 14 01:12 /etc/ceph/ceph.client.openstack.keyring
user::rw-
user:glance:r--
user:nova:r--
user:cinder:r--
user:gnocchi:r--
group::---
mask::r--
other::---getfacl: Removing leading '/' from absolute path names

(undercloud) [stack@hci-director ceph-ansible]$
```
2017-11-15 10:09:24 -05:00
Guillaume Abrioux aa0b1ed118 tests: remove OSD_FORCE_ZAP variable from tests
according to ceph/ceph-container#840, this variable is no longer needed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-14 17:55:01 +01:00
Markos Chandras f8e3d4bb76 ceph-docker-common: Add support for openSUSE Leap distributions
Add support for the openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 8c321b8416 ceph-nfs: Add support for openSUSE Leap distributions
Add support for the openSUSE distributions. The required packages
are available either in the distribution repositories or in the
OBS one.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 173959cfc7 ceph-rgw: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras a868c52f3f ceph-restapi: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras ddb468bfb3 ceph-rbd-mirror: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras fb46950373 ceph-osd: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 34a40adcf7 ceph-mon: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras f944ee3980 ceph-mgr: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 8135638c58 ceph-mds: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras c6103a0f13 ceph-fetch-keys: Add support for openSUSE Leap distributions
Add support for openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 3e4a7c8b61 ceph-config: Add support for the openSUSE Leap distributions
Add support for the openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 211b0c33a0 ceph-client: Add support for the openSUSE Leap distributions
Add support for the openSUSE Leap distributions

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras e06c108442 ceph-agent: Add support for the openSUSE Leap distributions
Add support for the openSUSE Leap distributions.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras dd6ee72547 ceph-common: Don't check for ceph_stable_release for distro packages
When we consume the distribution packages, we don't have a choice of
which version to install, so we shouldn't require that variable to be
set. Distributions normally provide only one version of Ceph in the
official repositories so we get whatever they provide.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:23 +00:00
Markos Chandras 849786967a ceph-common: Add initial support for openSUSE Leap distributions
openSUSE Leap 42.3 provides support for Ceph Luminous in both the
distribution package and the latest available version in the OBS
repository so add these as the only available installation methods for
openSUSE.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2017-11-14 10:51:22 +00:00
Guillaume Abrioux 44df3f9102 defaults: fix rgw restart script in handlers
Like 80d32dec, the path to the fact is not correct.
In any case, we will retrieve the IP address from hostvars; the variable
is how we get the interface name according to where it has been set
(e.g. inventory host file vs. group_vars/).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1510906

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-13 16:30:03 +01:00
Guillaume Abrioux 0369bd59e2
Merge pull request #2146 from mslovy/wip-fix-crush-location
osd: fix crush location for non-containerized deployment
2017-11-13 12:23:44 +01:00
Sébastien Han 7b0743be52
Merge pull request #2144 from ceph/quick_fix_lvm
osd: skip some set_fact when osd_scenario=lvm
2017-11-13 21:50:37 +11:00
Sébastien Han 17d1ff61d5
Merge pull request #2141 from Arano-kai/run_restart_scripts_in_noexec_tmp
FIX: run restart scripts in `noexec` /tmp
2017-11-13 21:37:35 +11:00
Guillaume Abrioux c06faf2deb
Merge pull request #2154 from ceph/fix_auto_discover
osd: avoid using non desired loop device in autodiscovery
2017-11-10 01:19:20 +01:00
Guillaume Abrioux a695b2c08f
Merge pull request #2153 from ceph/fix_disk_list_test
osd: always run disk_list test
2017-11-09 23:50:32 +01:00
Guillaume Abrioux 591d77220e osd: always run disk_list test
there is no need to have a condition on this task; this test should
always run since the result is interpreted later.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-09 11:51:16 +01:00
Guillaume Abrioux 43975a7332 osd: avoid using non desired loop device in autodiscovery
This will prevent ceph-ansible from using a loop device while it
shouldn't in auto_discovery mode.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-09 10:26:24 +01:00
Guillaume Abrioux 80d32decd3 config: fix config generation
The path to the fact is not correct.
In any case, we will retrieve the IP address from hostvars; the variable
is how we get the interface name according to where it has been set
(e.g. inventory host file vs. group_vars/).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1510906

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-09 08:50:57 +01:00
Guillaume Abrioux d5dfc63c89 osd: fix automatic prepare when auto_discover
Use the `devices` variable instead of `ansible_devices`, otherwise
we are not using the devices which have been 'auto discovered'.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-08 10:20:44 +01:00
yaoning d82a09dddd fix crush location for non-containerized deployment
crush location was previously only set for containerized deployments

Signed-off-by: yaoning <yaoning@unitedstack.com>
2017-11-08 12:05:10 +11:00
Sébastien Han 0930f14915 osd: do not use dm when osd_auto_discovery
The current code will also return LVM devices such as /dev/dm-2; this
kind of device is not supported by ceph-disk at the moment, so now we
just ignore them.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-08 11:33:10 +11:00
Guillaume Abrioux 238754a844 osd: skip some set_fact when osd_scenario=lvm
these tasks are not needed when using `osd_scenario: lvm`

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1509230

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-07 15:30:08 +01:00
Guillaume Abrioux 39b584e540 osd: fix a typo in roles/ceph-osd/defaults/main.yml
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-11-07 10:06:16 +01:00
Arano-kai 5cde3175ae FIX: run restart scripts in `noexec` /tmp
- One cannot run scripts directly from a filesystem mounted with the `noexec`
option, but one can run scripts as arguments to `bash`/`sh`.

Signed-off-by: Arano-kai <captcha.is.evil@gmail.com>
2017-11-06 16:02:47 +02:00
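
In task form the fix looks roughly like this (the script path is illustrative):

```
# Sketch: invoke the interpreter explicitly so the generated restart script
# still runs when /tmp is mounted with the noexec option.
- name: run the restart script from a noexec /tmp
  command: /bin/bash /tmp/restart_osd_daemon.sh
```
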
Sébastien Han d4ed9a2064 osd: enhance backward compatibility
During the initial implementation of this 'old' scenario we were hitting
this issue without noticing:
https://github.com/moby/moby/issues/30341, and were blindly using --rm.
Now that this is fixed, the prepare container disappears and the
activation fails.
I'm fixing this for old jewel images.

This also fixes the machine reboot case where the docker logs are
purged. In the old scenario, we now store the log locally in the same
directory as the ceph-osd-run.sh script.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-03 11:15:23 +01:00
Sébastien Han ab7eb79212 config: fix monitor_interface when not passed in the inventory file
Setting monitor_interface in group_vars/all.yml means that
hostvars[host]['monitor_interface'] does not exist.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1507922
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-11-03 09:25:02 +01:00
Jan Provaznik 589cd27ce4 Include ganesha dbus config file
This file was (accidentally) not included in a previous
commit 87b1da09e7.
2017-10-31 08:30:12 +01:00
Sébastien Han faccd0acf0 Merge pull request #2100 from ceph/lvm-bluestore
ceph-volume lvm bluestore support
2017-10-27 17:36:16 +02:00
Alfredo Deza 517a2b3feb ceph-osd skip lvm creation if they are already in use
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-27 11:33:54 -04:00
Sébastien Han 6ea92756c0 Merge pull request #2117 from ceph/rm-dup
default: remove dup variable
2017-10-27 13:49:52 +02:00
Sébastien Han d2575c7f5e default: remove dup variable
ceph_repository_type was declared multiple times. This commit fixes
this.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-27 11:46:15 +02:00
Sébastien Han d6a0d2f9be Merge pull request #2071 from jtaleric/master
Docker image pull retry
2017-10-27 09:49:03 +02:00
Sébastien Han 5a10b048b0 Merge pull request #2105 from major/really-fix-always-run
Really fix always run
2017-10-27 09:33:47 +02:00
John Fulton ae156e9f34 Make acls and mode parameters of openstack_keys optional
Only chmod or setfacl the requested keyring(s) in the
openstack_keys data structure when the mode or acls keys
of that data structure exist.

User may specify four permission combinations for the
keyring file(s): 1. only set ACL, 2. only set mode,
3. set neither mode nor ACL, 4. set mode and then ACL.

Fixes: #2092
2017-10-26 12:45:17 +00:00
Joe Talerico ab58764288 Docker image pull retry
This change sets a default timeout of 300s for the image pull. If the
image pull times out (300s), we will retry 3 times by default.

fixes 1954
2017-10-25 13:37:10 -04:00
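
A sketch of the retry/timeout combination (variable names and defaults here are illustrative, not necessarily those introduced by the change):

```
# Sketch: bound each pull attempt with a timeout and retry a few times so a
# slow or stalled registry cannot hang the play indefinitely.
- name: pull the ceph container image
  command: >
    timeout 300s docker pull
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
  register: docker_image_pull
  retries: 3
  delay: 10
  until: docker_image_pull.rc == 0
  changed_when: false
```
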
Sébastien Han 5f9e50dabe Merge pull request #2103 from andymcc/tcmalloc_settings
Option to set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
2017-10-25 17:36:04 +02:00
Sébastien Han 613b6a30f1 Merge pull request #2104 from ceph/rgw-section
rgw/nfs: fix section duplication
2017-10-25 17:35:01 +02:00
Sébastien Han 07e2a783f8 Merge pull request #2084 from ceph/backward-osd-2.4
osd: bring backward compatibility with old Jewel images
2017-10-25 17:33:49 +02:00
Major Hayden f73232caa4
Use check_mode instead of always_run
This patch changes the `always_run: yes` task option to
`check_mode: no` to avoid Ansible warnings.
2017-10-25 09:53:34 -05:00
Major Hayden c2b5118c1b
Revert "Avoid deprecated always_run"
This reverts commit 620fb37dd4.
2017-10-25 09:48:09 -05:00
Sébastien Han 8670b45ef2 rgw/nfs: fix section duplication
Once and for all, hopefully...

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-25 15:45:37 +02:00
Andy McCrae 7f6c39102d Option to set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
Use "ceph_tcmalloc_max_total_thread_cache" to set the
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES value inside /etc/default/ceph for
Debian installs, or /etc/sysconfig/ceph for Red Hat/CentOS installs.

By default this is set to 0, so the default package value will be used,
if specified this value will be changed to match the variable, and ceph
osd services will be restarted.
2017-10-25 14:38:36 +01:00
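
A sketch of the mechanism (the path selection and regexp are illustrative):

```
# Sketch: write TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES into the ceph environment
# file, /etc/default/ceph on Debian or /etc/sysconfig/ceph on Red Hat/CentOS.
- name: set TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES
  lineinfile:
    path: "{{ '/etc/default/ceph' if ansible_os_family == 'Debian' else '/etc/sysconfig/ceph' }}"
    regexp: '^TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES='
    line: "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES={{ ceph_tcmalloc_max_total_thread_cache }}"
  when: ceph_tcmalloc_max_total_thread_cache | int > 0
```
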
Alfredo Deza d3b427e169 ceph-osd lvm scenarios are no longer limited to filestore
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 08:23:45 -04:00
Alfredo Deza df05e63c10 ceph-osd use --cluster in ceph-volume calls
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 08:23:45 -04:00
Alfredo Deza 628d98a92c ceph-osd add the CEPH_VOLUME_DEBUG env var to all ceph-volume commands
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:50:22 -04:00
Alfredo Deza b89309e2a3 ceph-osd update the examples in defaults for lvm bluestore
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:46:39 -04:00
Alfredo Deza bbc3672253 ceph-osd: lvm support for bluestore
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2017-10-25 06:46:39 -04:00
Guillaume Abrioux f21859656b Merge pull request #2102 from yanyixing/fix_miss_word
add the miss word
2017-10-25 10:49:38 +02:00
Yixing Yan b6296c13ac update sample file 2017-10-25 16:39:08 +08:00
Sébastien Han 049729b8d3 Merge pull request #2097 from fultonj/issue/2095
Require osd_scenario parameter to be provided in containerized deploy
2017-10-24 13:59:51 +02:00
Sébastien Han 751da93b08 Merge pull request #2096 from andymcc/regex_defaults
Add regexp check for setting CLUSTER_NAME
2017-10-23 17:24:44 +02:00
John Fulton 7a7ddab6c2 Require osd_scenario parameter to be provided in containerized deploy
Fixes: #2095
2017-10-23 15:16:03 +00:00
Andy McCrae 9ebef8ba3c Add regexp check for setting CLUSTER_NAME
Minor fix to ensure that existing CLUSTER_NAME is changed, and avoid duplicates.
2017-10-23 14:42:07 +01:00
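
The idea is a regexp-anchored edit so an existing assignment is replaced rather than appended again; a sketch (the file path is illustrative):

```
# Sketch: replace any existing CLUSTER_NAME= line instead of appending a
# duplicate one at the end of the environment file.
- name: set CLUSTER_NAME in the ceph environment file
  lineinfile:
    path: /etc/sysconfig/ceph
    regexp: '^CLUSTER_NAME='
    line: "CLUSTER_NAME={{ cluster }}"
```
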
Andy McCrae 05a1f965c8 Typo fix for radosgw@ systemd file
The systemd unit for radosgw is radosgw@, not rgw@; the directory needs to
match the path.
2017-10-23 14:07:23 +01:00
Jan Provaznik 291e6b604d ceph-nfs - add bind address variable 2017-10-23 09:34:51 +02:00
Sébastien Han 968ef04324 osd: bring backward compatibility with old Jewel images
There was a huge resync from luminous to jewel in ceph-docker:
https://github.com/ceph/ceph-docker/pull/797

This change brought a handy new function to discover partitions tied to
an OSD. This function doesn't exist in the old image, so the
ceph-osd-run.sh script breaks when trying to deploy a Jewel OSD with that
old Jewel image version.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 16:26:41 +02:00
Sébastien Han 54de2efc5d Merge pull request #2082 from ceph/restapi-cephconf
common: move restapi template to config
2017-10-20 14:07:48 +02:00
Sébastien Han 4413511b66 all: backward compatibility between stable-2.2 and 3.0
stable-3.0 brought numerous changes to ceph-ansible variables. This PR
aims to maintain backward compatibility for someone running stable-2.2
who upgrades to stable-3.0 but keeps their group_vars untouched.
We will then determine the right options to make sure the upgrade works,
but we expect the new variables to be used.

We will drop this in the near future, maybe 3.1 or 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
Sébastien Han fccb9472cd mgr: force module addition
Some modules require --force to be enabled.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:54:10 +02:00
Sébastien Han ba5c6e66f0 common: move restapi template to config
Closes: github.com/ceph/ceph-ansible/issues/1981
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-20 11:14:13 +02:00
Guillaume Abrioux 5b1087f1e5 mgr: play 'enable modules' sequence only on luminous
This feature isn't available before luminous; therefore we need to play
these tasks only on luminous and later, otherwise the playbook will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 3f3d4b9c727d06154c422d445fc2a245aceaed89)
2017-10-19 20:54:23 +02:00
Sébastien Han c527515502 Merge pull request #2000 from ceph/merge-osd-scenarios
[skip ci] ci: new osd scenarios
2017-10-19 09:18:02 +02:00
Guillaume Abrioux ff228e2d88 mgr: fix broken task on jewel
3a58757 introduced an issue for Jewel deployments: since this role is
skipped, `enabled_ceph_mgr_modules.stdout` doesn't exist, and it
ends up with an attribute error.

Use `.get()` to retrieve `stdout` with a default value so it won't fail
if this attribute doesn't exist (jewel).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-18 14:11:46 +02:00
Sébastien Han 1579f1c5b1 Merge pull request #2073 from ceph/fix_rbd_handler
[skip ci] rbd: fix restart script for jewel
2017-10-18 11:12:05 +02:00
Guillaume Abrioux c2850b11be rbd: fix restart script for jewel
In Jewel, we don't use the bootstrap-rbd keyring for rbd-mirror nodes, which
results in a socket path/name that differs according to which ceph
release you are deploying.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-18 11:10:49 +02:00
Sébastien Han 2936d152c9 Merge pull request #2053 from Fbrachere/mgr-modules
Add ability to enable ceph mgr modules.
2017-10-18 10:27:31 +02:00
Sébastien Han a53aa9e8b4 ci: new osd scenarios
This commit adds new OSD scenarios; it aims to simplify the CI setup and
bring better coverage of the OSD scenarios.
We decided to differentiate between filestore and bluestore, thinking
ahead to when filestore won't be supported anymore.
So we now have two classes of tests:

* Filestore
* Bluestore

In each of those classes we have container and non-container.
Then for each we test the following:

* collocated
* collocated dmcrypt
* non-collocated
* non-collocated dmcrypt
* auto discovery collocated
* auto discovery collocated dmcrypt

This gives us a nice coverage and also reduces the footprint on the CI.
We are now up to 4 scenarios, each containing 6 OSD VMs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-18 09:26:06 +02:00
Sébastien Han 90b75185d5 defaults: fix handlers for collocation
When doing collocation the condition "inventory_hostname in play_hosts"
is breaking the restart workflow.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 19:23:16 +02:00
Guillaume Abrioux 2aa53fb0f5 Merge pull request #2055 from ceph/update-mirror-nfs
upgrade: support for rbd mirror and nfs
2017-10-17 14:51:39 +02:00
Christian Berendt 4c380c9ef8 Cleanup readme files in roles directories
The contents of the README files are no longer up to date.
Documentation for all roles is located below the docs directory.
2017-10-17 11:22:06 +02:00
Sébastien Han d920d4839d upgrade: support for rbd mirror and nfs
- Add upgrade support for rbd mirror and nfs daemons.
- Only works with systemd (remove sysvinit and upstart occurence)
- A bit of cleanup

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-17 10:54:47 +02:00
Christian Berendt cf901f0171 In docker start scripts replace \u00a0 with \u0020
This will solve the following issue when starting docker containers on ubuntu:

invalid argument "1\u00a0" for --cpus=1 : failed to parse 1  as a rational number

Closes-bug: #2056
2017-10-16 15:16:48 +02:00
Fabien Brachere 3a587575d7 Add ability to enable ceph mgr modules. 2017-10-16 15:04:23 +02:00
Guillaume Abrioux 7ee9aa94b5 Merge pull request #1963 from ceph/pull-in-para
site-docker.yml try to fetch images in //
2017-10-13 19:35:11 +02:00
Sébastien Han 71d819620c mds: fix fs pool creation
1. add the variables to docker_collocation
2. trigger the check when an MDS is part of the inventory file, not when
we run on an MDS...

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 16:03:04 +02:00
Sébastien Han b34a04ea41 site-docker.yml try to fetch images in //
The container deployment is serialized, so add this task as a best
effort. If docker is already present we pull the image, otherwise we wait
for the role to play.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-13 11:24:40 +02:00
Guillaume Abrioux 7d4b3f9989 Merge pull request #2047 from ceph/enable_ceph-rbd-mirror.target
rbd-mirror: enable ceph-rbd-mirror.target
2017-10-13 10:34:10 +02:00
Sébastien Han f7832e5eb9 Merge pull request #2031 from major/simplify-ntp
Simplify NTP checks/install
2017-10-13 09:16:20 +02:00
Guillaume Abrioux 59ca1065e9 rbd-mirror: enable ceph-rbd-mirror.target
on jewel `ceph-rbd-mirror.target` isn't enabled, therefore, if the node
is rebooted, the service doesn't get started.

from ceph-rbd-mirror unit file:
```
[Install]
WantedBy=ceph-rbd-mirror.target
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-13 08:27:43 +02:00
Sébastien Han b685aceede Merge pull request #2044 from major/avoid-jinja-in-when
Remove jinja2 delimiters from `when` keys
2017-10-12 22:23:06 +02:00
Major Hayden a1c76e834c
Simplify NTP checks/install
This patch simplifies the checks and installation tasks for NTP.

Debian and Red Hat had a check for NTP's presence but would then
install NTP right afterwards anyways. In addition, there were
tasks for atomic that weren't used anywhere else in the role.

This patch also uses a dynamic include to reduce delays from
skipped tasks.
2017-10-12 12:31:07 -05:00
Sébastien Han 9c3d749f7c Merge pull request #2038 from major/fix-cmd-warning
Suppress yum/dnf/rpm command warnings
2017-10-12 18:46:52 +02:00
Major Hayden c01851325e
Remove jinja2 delimiters from `when` keys
This patch changes the `when:` keys so that they have no jinja2
delimiters. This avoids Ansible warnings which could turn into
errors in a future Ansible release.
2017-10-12 11:27:42 -05:00
Guillaume Abrioux 17623a2157 Merge pull request #2036 from ceph/cephfs-pool
mds: precisely define cephfs pool
2017-10-12 17:47:10 +02:00
Sébastien Han b49f9bda21 mds: precisely define cephfs pool
We now have a variable called ceph_pools that is mandatory when
deploying an MDS.
It's a dictionary that contains a pool name and a PG count. The PG count is
mandatory and must be set; the playbook will fail otherwise.

Closes: https://github.com/ceph/ceph-ansible/issues/2017
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-12 15:56:04 +02:00
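
A group_vars sketch of such a variable (the pool names and PG counts here are illustrative):

```
# Each cephfs pool gets an explicit, mandatory PG count.
ceph_pools:
  cephfs_data: 64
  cephfs_metadata: 32
```
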
Major Hayden 33b200d43a
Suppress yum/dnf/rpm command warnings
Ansible throws warnings when using yum/dnf/rpm with the command
module:

    [WARNING]: Consider using yum module rather than running yum

This patch adds the `warn: no` argument to suppress the warnings
in the Ansible output.
2017-10-12 08:38:05 -05:00
Major Hayden 620fb37dd4
Avoid deprecated always_run
The `always_run` key is deprecated and being removed in Ansible 2.4.
Using it causes a warning to be displayed:

    [DEPRECATION WARNING]: always_run is deprecated.

This patch changes all instances of `always_run` to use the `always`
tag, which causes the task to run each time the playbook runs.
2017-10-12 08:29:44 -05:00
Sébastien Han 739a41ae91 Merge pull request #2030 from major/ceph-common-pass-pkgs-as-list
Pass list of packages instead of with_items
2017-10-12 09:15:58 +02:00
Major Hayden 9d62630303
Pass list of packages instead of with_items
Modern versions of Ansible can handle a list of packages passed
directly to the package modules. This patch optimizes the package
install process by passing the list of packages directly to the
module.
2017-10-11 12:18:15 -05:00
Sébastien Han aa70b07ae2 config: proper render ceph.conf when doing collocation
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-11 18:29:34 +02:00
Sébastien Han f50b170a49 Merge pull request #2022 from ceph/fix-purge-iscis
[skip ci] purge-iscsi: fix group name
2017-10-11 14:21:19 +02:00
Sébastien Han d0a9e57bfc osd: rollback bindmount of /run/udev
This is causing unknown issues when trying to start a dmcrypt container.
Basically the container is stuck at mount, opening the LUKS device. It
is still unknown why this is causing trouble, but we need to move
forward. Also, this doesn't seem to help in any way to fix the race
condition we've seen.

Here is the log for dmcrypt:

cryptsetup 1.7.4 processing "cryptsetup --debug --verbose --key-file
key luksClose fbf8887d-8694-46ca-b9ff-be79a668e2a9"
Running command close.
Locking memory.
Installing SIGINT/SIGTERM handler.
Unblocking interruption on signal.
Allocating crypt device context by device
fbf8887d-8694-46ca-b9ff-be79a668e2a9.
Initialising device-mapper backend library.
dm version   [ opencount flush ]   [16384] (*1)
dm versions   [ opencount flush ]   [16384] (*1)
Detected dm-crypt version 1.14.1, dm-ioctl version 4.35.0.
Device-mapper backend running with UDEV support enabled.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9  [ opencount flush ]
[16384] (*1)
Releasing device-mapper backend.
Trying to open and read device /dev/sdc1 with direct-io.
Allocating crypt device /dev/sdc1 context.
Trying to open and read device /dev/sdc1 with direct-io.
Initialising device-mapper backend library.
dm table fbf8887d-8694-46ca-b9ff-be79a668e2a9  [ opencount flush
securedata ]   [16384] (*1)
Trying to open and read device /dev/sdc1 with direct-io.
Crypto backend (gcrypt 1.5.3) initialized in cryptsetup library
version 1.7.4.
Detected kernel Linux 3.10.0-693.el7.x86_64 x86_64.
Reading LUKS header of size 1024 from device /dev/sdc1
Key length 32, device size 1943016847 sectors, header size 2050
sectors.
Deactivating volume fbf8887d-8694-46ca-b9ff-be79a668e2a9.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9  [ opencount flush ]
[16384] (*1)
Udev cookie 0xd4d14e4 (semid 32769) created
Udev cookie 0xd4d14e4 (semid 32769) incremented to 1
Udev cookie 0xd4d14e4 (semid 32769) incremented to 2
Udev cookie 0xd4d14e4 (semid 32769) assigned to REMOVE task(2) with
flags         (0x0)
dm remove fbf8887d-8694-46ca-b9ff-be79a668e2a9  [ opencount flush
retryremove ]   [16384] (*1)
fbf8887d-8694-46ca-b9ff-be79a668e2a9: Stacking NODE_DEL [verify_udev]
Udev cookie 0xd4d14e4 (semid 32769) decremented to 1
Udev cookie 0xd4d14e4 (semid 32769) waiting for zero

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-11 13:21:37 +02:00
Major Hayden 10e1d464e5
Remove duplicate 'package' key
This patch fixes a typo where "package:" was used twice in the same
task.
2017-10-10 15:39:20 -05:00
Sébastien Han f6d1be269f Merge pull request #2015 from ceph/fix_nfs-ganesha-repos
nfs: move repository configuration in ceph-nfs role
2017-10-10 17:15:33 +02:00
Guillaume Abrioux 5dc9c640e8 nfs: add missing condition for debian_rhcs
in addition to c4dcdaa20 this commit adds the missing conditions on
install tasks for debian_rhcs deployments. Without them, these tasks are
played on any kind of deployment.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-10 16:27:00 +02:00
Jan Provaznik 87b1da09e7 Ceph-nfs dynamic exports fixes
* DBus on host should include ganesha service file
* to allow ganesha container to respond on DBus it needs to run
  in --privileged mode (ganesha folks contacted to look at this)
* ceph_nfs_include_exports_dir variable replaced with more general
  ceph_nfs_dynamic_exports
2017-10-10 13:59:01 +02:00
Guillaume Abrioux fbd1a57b11 iscsi-gw: move repository configuration to ceph-iscsi-gw
This is something that does not belong in `ceph-common`; it
is too specific to the `ceph-iscsi-gw` role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-10 11:36:03 +02:00
Guillaume Abrioux c4dcdaa201 nfs: move repository configuration in ceph-nfs role
This is something that does not belong in `ceph-common`; it
is too specific to the `ceph-nfs` role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-10 11:35:58 +02:00
Guillaume Abrioux 9e8204d9e8 nfs: move packages installation to own role
Make the `ceph-nfs` role handle the installation of nfs
packages itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-09 19:10:15 +02:00
Guillaume Abrioux 3c64abe07d mds: move installation packages in role itself
Make the `ceph-mds` role handle the installation of the `ceph-mds`
package itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-09 17:25:46 +02:00
Sébastien Han 4032f102fe iscsi: move package install to ceph-iscsi-role
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-10-09 17:25:46 +02:00
Guillaume Abrioux 1581a1c078 mgr: move installation packages in role itself
Make the `ceph-mgr` role handle the installation of the `ceph-mgr`
package itself, because it's complicated to manage given that we may
install either jewel or luminous.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-09 17:25:45 +02:00