Commit Graph

11 Commits (488281187e8ac6c587db74961db9e075f31c8eae)

Author SHA1 Message Date
Sébastien Han b738706810 take-over-existing-cluster: do not call vars_files
We were using `vars_files` long ago, when default variables were not in
ceph-defaults; now that the role exists, this is no longer needed. Moreover,
loading these two var files:

- roles/ceph-defaults/defaults/main.yml
- group_vars/all.yml

will create collisions and override necessary variables.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1555305
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-20 14:47:04 +02:00
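A minimal sketch of the shape of this change (the play target and role list are assumptions; only the two file paths come from the commit message): drop the explicit `vars_files` and let the ceph-defaults role supply the defaults, so group_vars and inventory values override them at normal precedence.

```
# Before (sketch): defaults loaded explicitly, colliding with
# values the ceph-defaults role already provides.
- hosts: mons          # assumed target
  vars_files:
    - roles/ceph-defaults/defaults/main.yml
    - group_vars/all.yml
  roles:
    - ceph-defaults

# After (sketch): the role alone provides the defaults.
- hosts: mons
  roles:
    - ceph-defaults
```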
Guillaume Abrioux 415dc0a29b take-over: fix bug when trying to override variable
A customer has been facing an issue when trying to override
`monitor_interface` in the inventory host file.
In their use case, all nodes had the same `monitor_interface` name except
one. Therefore, they tried to override this variable for that node in the
inventory host file, but the take-over-existing-cluster playbook failed
when generating the new ceph.conf file because of an undefined variable.

Typical error:

```
fatal: [srvcto103cnodep01]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute u'ansible_bond0.15'"}
```

Including variables like this, `include_vars: group_vars/all.yml`, prevents
us from overriding anything in the inventory host file because it overwrites
everything defined in the inventory.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1575915

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-18 10:10:08 +02:00
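A hypothetical inventory illustrating the override that used to break (the first host name comes from the error above; `eth0` and the group layout are invented): with the `include_vars` call removed, the per-host value takes precedence over the group default as expected.

```
# group_vars/all.yml -- default for most nodes
monitor_interface: bond0.15

# inventory host file -- the one node with a different interface
[mons]
srvcto103cnodep01 monitor_interface=eth0
srvcto103cnodep02
srvcto103cnodep03
```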
Andrew Schoen 7c7017ebe6 infra: do not include host_vars/* in take-over-existing-cluster.yml
These are better collected by Ansible automatically. The explicit include
would also fail if the host_vars file didn't exist.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-02-12 11:48:47 +01:00
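A sketch of the kind of task that was removed (the exact path pattern is an assumption): Ansible already loads `host_vars/<hostname>` files on its own, so an explicit include both duplicates that behavior and errors out when the file is absent.

```
# Removed (sketch): redundant with Ansible's automatic host_vars
# loading, and fails for hosts that have no host_vars file.
- include_vars: "host_vars/{{ inventory_hostname }}.yml"
```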
Caleb Boylan 41d10a2f64 infra: fix take-over-existing-cluster.yml playbook
The Ansible inventory could have more than just ceph-ansible hosts, so we
shouldn't use "hosts: all". Also, grab only one file when getting the Ceph
cluster name instead of failing when there is more than one file in
/etc/ceph, and fix the location of the ceph.conf template.
2017-11-06 15:00:30 -08:00
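A hedged sketch of the first two fixes (the group name and exact command are assumptions): target a Ceph group instead of `all`, and take a single match when detecting the cluster configuration file.

```
- hosts: mons              # was "hosts: all"
  tasks:
    - name: get the name of the existing ceph cluster
      # take one match instead of failing on multiple files
      shell: ls /etc/ceph/*.conf | head -n 1
      register: ceph_conf_path
```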
Sébastien Han fdc6aebd62 infrastructure-playbooks: update with ceph-defaults roles
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-02 17:12:20 +02:00
David Galloway 127b5ad9b4 infra: Create a backup of ceph.conf when taking over existing cluster
Signed-off-by: David Galloway <dgallowa@redhat.com>
2017-06-21 09:53:09 -04:00
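One plausible way to express the backup (the module choice and the `cluster` variable are assumptions): copy the existing file aside on the remote host before regenerating it.

```
- name: back up the existing ceph.conf
  copy:
    src: "/etc/ceph/{{ cluster }}.conf"
    dest: "/etc/ceph/{{ cluster }}.conf.bak"
    remote_src: true
```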
David Galloway 40ed2d7be6 infra: Fix ceph.conf creation when taking over existing cluster
Fixes bug introduced in https://github.com/ceph/ceph-ansible/pull/1330

The "stat ceph.conf" task was basically using the stat module on a
string instead of the ceph.conf filename.  This caused the "generate
ceph configuration file" task to fail.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1463382

Signed-off-by: David Galloway <dgallowa@redhat.com>
2017-06-21 09:52:01 -04:00
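A sketch of the corrected pattern (variable and template names are assumptions, and the `when` gating is illustrative): stat the actual file path, not a string, and let the generation task consume the result.

```
- name: stat the ceph.conf file
  stat:
    path: "/etc/ceph/{{ cluster }}.conf"   # a real path, not a string
  register: ceph_conf_stat

- name: generate ceph configuration file
  template:
    src: ceph.conf.j2
    dest: "/etc/ceph/{{ cluster }}.conf"
  when: ceph_conf_stat.stat.exists
```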
Sébastien Han 4639d89231 infra: fix cluster name detection
The previous command was returning /etc/ceph/ceph.conf; we only need
'ceph' to be returned.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-23 15:40:34 -05:00
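A sketch of the idea (the exact command is an assumption): strip the directory and the .conf suffix so only the cluster name remains.

```
# /etc/ceph/ceph.conf -> ceph
- name: get the cluster name only
  shell: basename $(ls /etc/ceph/*.conf | head -n 1) .conf
  register: cluster_name
```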
Sébastien Han 9dac195200 take-over: use more precise ceph.conf detection
Prior to this patch we were just looking for any *.conf file, which could
sometimes result in multiple matches. The new command looks for a .conf file
that must contain the [global] and 'fsid' patterns, which reliably
identifies the ceph.conf file. We cannot look for ceph.conf directly because
the cluster name, and therefore the filename, may differ.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-12-06 16:02:48 +01:00
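A hedged sketch of such a detection command (the exact flags and pipeline are assumptions): keep only the .conf files that contain a [global] section, then filter those for an fsid entry.

```
- name: detect the ceph configuration file
  shell: grep -l '\[global\]' /etc/ceph/*.conf | xargs grep -l fsid
  register: ceph_conf_path
```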
Guillaume Abrioux a680707f6f All `include_vars` need to have `*.yml`, `*.yaml` or `*.json` extension.
As introduced in the following PR:
- https://github.com/ansible/ansible/pull/17207
we need to refactor our code.
2016-11-24 14:03:49 +01:00
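The rule in practice, as a minimal sketch: extension-less includes stop working, so every `include_vars` target needs an explicit extension.

```
# No longer accepted after ansible/ansible#17207:
- include_vars: group_vars/all

# Required: an explicit .yml, .yaml, or .json extension
- include_vars: group_vars/all.yml
```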
Sébastien Han fde819d1a8 create a directory for infrastructure playbooks
Since we have a couple of infrastructure-related playbooks (in addition to
the roles we use to deploy Ceph), it makes sense to locate them in a
separate directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-17 11:53:34 +02:00