This was introduced by
59ee2e8d3b
and made our socket checks impossible to run. The PID could be found,
but the cctid could not.
This happens during the upgrade to mimic and on clusters already running mimic.
So let's force the admin socket back to the way it was so we can properly
check for existing instances. The $cluster-$name.$pid.$cctid.asok naming is
only needed when running multiple instances of the same daemon, something
ceph-ansible cannot do at the time of writing.
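A minimal sketch, not the actual ceph-ansible task, of what the check can look like once the socket path is predictable again; the path, task names and register variable here are illustrative:
```
- name: check for a running ceph mon through its admin socket
  stat:
    path: "/var/run/ceph/{{ cluster }}-mon.{{ ansible_hostname }}.asok"
  register: mon_socket_stat

- name: report whether the mon is already running
  debug:
    msg: "mon.{{ ansible_hostname }} appears to be running"
  when: mon_socket_stat.stat.exists
```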
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610220
Signed-off-by: Sébastien Han <seb@redhat.com>
Instead of failing the entire purge operation when the rbd command fails,
just log an error. This will allow the higher level target and config
cleanup to complete, and the user only has to manually delete the rbd
images.
Signed-off-by: Mike Christie <mchristi@redhat.com>
We were not passing the ceph conf info to the rbd image removal
command, so if the cluster name was not the default, the igw purge would
fail because the rbd rm command failed.
This fixes the bug by passing in the ceph conf info, which has the
cluster name to use.
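The actual change lives in the igw purge code, but the idea can be sketched as a task; `pool_name` and `image_name` are illustrative placeholders, only `cluster` is a real deployment variable:
```
- name: remove an rbd image using the right cluster configuration
  command: rbd -c /etc/ceph/{{ cluster }}.conf rm {{ pool_name }}/{{ image_name }}
```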
This fixes Red Hat bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1601949
Signed-off-by: Mike Christie <mchristi@redhat.com>
The ceph-rest-api binary has been removed in mimic, so we cannot deploy it
anymore. We just keep the role and the compatibility for existing users.
Signed-off-by: Sébastien Han <seb@redhat.com>
The container runs with --rm which means it will be deleted by Docker
when exiting. Also 'docker rm -f' is not idempotent and returns 1 if the
container does not exist.
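If an explicit removal is kept at all, it has to tolerate the container being gone already. A minimal sketch of such a tolerant task, with an illustrative container name (not the actual change, which simply drops the removal):
```
- name: remove the ceph mon container if it is still present
  command: docker rm -f ceph-mon-{{ ansible_hostname }}
  register: remove_mon_container
  failed_when: remove_mon_container.rc not in [0, 1]
  changed_when: remove_mon_container.rc == 0
```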
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1609007
Signed-off-by: Sébastien Han <seb@redhat.com>
jewel used to create a default `rbd` pool in the default crush root
`default`. We need at least one OSD to satisfy the PGs of this
created pool; otherwise the cluster will be in HEALTH_ERR state because
of `pgs stuck unclean`/`pgs stuck inactive`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
rbd-mirror can't start when deploying jewel because it needs the admin
keyring.
Bringing back this task restores backward compatibility for jewel
deployments.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If we want to be backward compatible with releases prior to luminous, we
have to set the rule name according to the default values used in jewel.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This task is run on both containerized *and* non-containerized
deployments.
Let's have a proper title to avoid confusion.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Check `systempython2.stat` instead of `systempython2.stat.exists`.
Without this change, when python2 is not installed, the `stat`
task fails without defining `systempython2.stat`. As a result, the next
installation tasks fail because `systempython2.stat` is undefined.
An example error output (edited for readability):
```
TASK [check for python2] ***********************************************
Wednesday 25 July 2018 14:52:47 +0900 (0:00:00.182) 0:00:00.182 *
fatal: [ceph-osd1.vlan221.vtj]: FAILED! => {
"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not
found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
...ignoring
TASK [install python2 for debian based systems] ************************
Wednesday 25 July 2018 14:51:00 +0900 (0:00:01.742) 0:00:01.926 *
fatal: [ceph-mon2]: FAILED! => {
"msg": "The conditional check 'systempython2.stat.exists is undefined or
systempython2.stat.exists == false' failed. The error was: error while
evaluating conditional (systempython2.stat.exists is undefined or
systempython2.stat.exists == false): 'dict object' has no attribute 'stat'
\n\n The error appears to have been in
'/Users/arata/git/ceph-ansible/site.yml.sample': line 36, column 7, but
may\n be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
- name: install python2 for debian based systems\n
^ here\n
"}
...ignoring
```
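A minimal sketch of the corrected conditional, assuming the `systempython2` register variable from the check above; the `raw` command and package name are illustrative only:
```
- name: install python2 for debian based systems
  raw: apt-get install -y python-minimal
  when: systempython2.stat is undefined or systempython2.stat.exists == false
```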
Fixes: #2930
Signed-off-by: Arata Notsu <arata776@gmail.com>
In containerized deployments we now inherit the
radosgw_civetweb_options when bootstrapping the container.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
When distributing the ceph-nfs role, creation of the rados index object
fails because it assumes the client.admin keyring is available locally.
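A minimal sketch of the workaround, with an assumed pool and object name; the point is only the delegation to a monitor, where client.admin is available:
```
- name: create the rados index object from a monitor node
  command: rados -p {{ nfs_ganesha_rados_pool | default('nfs-ganesha') }} create ganesha-export-index
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  changed_when: false
```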
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1607970
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
Check that the interface provided (see the sketch below):
* exists in the gathered facts (and thus on the system)
* is active
* has an IP address (depending on ip_version)
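A minimal sketch of these checks against the gathered facts, using `monitor_interface` and IPv4 as examples; task names and messages are illustrative and the real validation covers both address families:
```
- name: fail if monitor_interface does not exist on the node
  fail:
    msg: "{{ monitor_interface }} does not exist on {{ inventory_hostname }}"
  when: monitor_interface not in ansible_interfaces

- name: fail if monitor_interface is not active
  fail:
    msg: "{{ monitor_interface }} is not active on {{ inventory_hostname }}"
  when: not hostvars[inventory_hostname]['ansible_' + monitor_interface]['active']

- name: fail if monitor_interface has no ipv4 address
  fail:
    msg: "{{ monitor_interface }} has no IPv4 address"
  when:
    - ip_version == 'ipv4'
    - hostvars[inventory_hostname]['ansible_' + monitor_interface]['ipv4'] is not defined
```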
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227
Signed-off-by: Sébastien Han <seb@redhat.com>
An upgrade from RHCS 2 to RHCS 3 will fail if the cluster still has the
nibblewise sort order set: it stays stuck on "TASK [waiting for clean
pgs...]" because RHCS 3 OSDs will not start while nibblewise is set.
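One way to address this before the upgrade is to make sure the sortbitwise flag is set, for example with a task along these lines (a sketch, not necessarily the exact task added here):
```
- name: set sortbitwise before upgrading the osds
  command: ceph --cluster {{ cluster }} osd set sortbitwise
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```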
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600943
Signed-off-by: Sébastien Han <seb@redhat.com>
Let's create a dedicated environment for these scenarios; there is no
need to deploy everything.
It will also save some time.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Ansible 2.4 is currently end-of-life.
Ansible 2.5 will go end-of-life after Ansible 2.7 is released.
Fixes: #2901
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The group_vars/all file is not available in the 'ooo-collocation' scenario,
which makes `dev_setup.yml` fail because this path is hardcoded.
The idea here is to check whether the pattern 'ooo-collocation' is present in
the `change_dir` variable so we can set this path properly according to the
scenario being run.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`test_rbd_mirror_is_up()` is failing on update scenarios because it
assumes `ceph_stable_release` is still set to the value of the
original ceph release; as a result, it doesn't enter the right branch of the
condition and fails.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
In addition to ceph/ceph-build#1082, let's set the ansible version in
each ceph-ansible branch's respective
requirements.txt.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Do not run device validation on every host, only on OSD nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
We now make sure that (see the sketch below):
* devices are actually block special files
* the length of dedicated_device is identical to that of devices
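A minimal sketch of both checks; variable names such as `dedicated_devices` and the task wording are assumptions here, not the exact playbook content:
```
- name: gather stat information about the declared devices
  stat:
    path: "{{ item }}"
  register: devices_stat
  with_items: "{{ devices | default([]) }}"

- name: fail when an entry in devices is not a block special file
  fail:
    msg: "{{ item.item }} is not a block special file"
  with_items: "{{ devices_stat.results }}"
  when: not (item.stat.isblk | default(false))

- name: fail when dedicated_devices and devices have different lengths
  fail:
    msg: "dedicated_devices must contain as many entries as devices"
  when:
    - dedicated_devices | default([]) | length > 0
    - dedicated_devices | length != devices | length
```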
Signed-off-by: Sébastien Han <seb@redhat.com>
Since no latest-bis-jewel image exists, it uses latest-bis, which points to
ceph mimic. In our testing, using it for idempotency/handlers tests
means upgrading from jewel to mimic, which is not what we want to do.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since `V2.6-stable` is available and has packages for `mimic`, let's
update this default value accordingly so nfs nodes can be deployed with
mimic.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
34f70428 introduced a fix using the `command` module while this could
have been achieved with the `lvol` module.
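For reference, a minimal sketch of what an `lvol` based task could look like; the variable structure is assumed, not taken from the playbook:
```
- name: create the logical volumes with the lvol module
  lvol:
    vg: "{{ item.vg_name }}"
    lv: "{{ item.lv_name }}"
    size: "{{ item.size | default('100%FREE') }}"
  with_items: "{{ lvm_volumes }}"
```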
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Relying on `copy_admin_key` to import the created keys on client nodes
forces us to copy the admin key to those nodes, which is not something we
necessarily want.
We should use the fact `condition_copy_admin_key`, which will be set to
`True` when the delegated node is a mon, meaning we can import keys
without having to deal with the admin keyring.
Fixes: #2867
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The original_basename option in the copy module changed to be
_original_basename in Ansible 2.6+, this PR resyncs the config_template
module to allow this to work with both Ansible 2.6+ and before.
Additionally, this PR removes the _v1_config_template.py file, since
ceph-ansible no longer supports versions of Ansible before version 2,
and so we shouldn't continue to carry that code.
Closes: #2843
Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
Follow-up on #2784.
We must check the generated fact `_disabled_ceph_mgr_modules` to
enable the disabled mgr modules.
Otherwise, this task will be skipped because it's not comparing against the
right list.
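A minimal sketch of the corrected comparison; the fact name comes from this change, the rest of the task is illustrative:
```
- name: enable ceph mgr modules that are currently disabled
  command: ceph --cluster {{ cluster }} mgr module enable {{ item }}
  with_items: "{{ ceph_mgr_modules }}"
  when: item in _disabled_ceph_mgr_modules
```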
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600155
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
On containerized deployments, if a mon is stopped, its socket is not
purged, which can cause a failure when a cluster is redeployed after the
purge playbook has been run.
Typical error:
```
fatal: [osd0]: FAILED! => {}
MSG:
'dict object' has no attribute 'osd_pool_default_pg_num'
```
The fact is not set because of this earlier failure:
```
ok: [mon0] => {
"changed": false,
"cmd": "docker exec ceph-mon-mon0 ceph --cluster test daemon mon.mon0 config get osd_pool_default_pg_num",
"delta": "0:00:00.217382",
"end": "2018-07-09 22:25:53.155969",
"failed_when_result": false,
"rc": 22,
"start": "2018-07-09 22:25:52.938587"
}
STDERR:
admin_socket: exception getting command descriptions: [Errno 111] Connection refused
MSG:
non-zero return code
```
This failure happens when the ceph-mon service is stopped: since
the socket isn't purged, it is a leftover that confuses the process.
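A minimal sketch of the kind of cleanup the purge playbook needs; the socket directory and pattern are assumptions:
```
- name: find leftover ceph admin sockets
  find:
    paths: /var/run/ceph
    patterns: '*.asok'
    file_type: any
  register: ceph_sockets

- name: remove leftover ceph admin sockets
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ ceph_sockets.files }}"
```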
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We must initialize the `children` variable in `_get_osd_id_from_host()`;
otherwise, if for any reason the deployment has failed and results in
an OSD host with no OSD registered, we won't enter the condition,
so `children` is never set and the function tries to return
something undefined.
Typical error:
```
E UnboundLocalError: local variable 'children' referenced before assignment
```
Fixes: #2860
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When you delete a zone without removing it from its zonegroup first, the
period update fails because that command needs to load the zone and
zonegroup to be able to update the master. The period update fails with an
error like this:
```
radosgw-admin period update --commit
-1 Cannot find zone id= (name=), switching to local zonegroup configuration
-1 Cannot find zone id= (name=)
```
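A minimal sketch of the required ordering, with illustrative zone and zonegroup variable names:
```
- name: remove the zone from its zonegroup before deleting it
  command: radosgw-admin zonegroup remove --rgw-zonegroup={{ rgw_zonegroup }} --rgw-zone={{ rgw_zone }}

- name: commit the period once the zone has been removed
  command: radosgw-admin period update --commit
```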
Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
As of Kraken, the journal code does not use the hdparm command anymore
so we can remove it from our package dependency list.
Fixes: https://github.com/ceph/ceph-ansible/issues/1402
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f6910efa24389c264062963b2054c7cd29ffebb3)
We should test ceph-ansible against the latest ansible stable version on
master.
This commit also removes the pinning of testinfra to version 1.7.1,
because ansible 2.5 requires a newer version.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The container image recently merged both the cluster and mon logs into a
single stream. Following this, we now see this warning coming from the
container image:
```
2018-06-19 13:44:01.542990 7ff75b024700 1 mon.vm02@1(peon).log
v57928205 unable to write to '/var/log/ceph/ceph.log' for channel
'cluster': (2) No such file or directory
```
So we now tell the mon not to log the cluster log on the filesystem.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591771
Signed-off-by: Sébastien Han <seb@redhat.com>
We forgot to add mgr_group_name when checking for the mon repo, thus the
conditional on the next task was failing.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1598185
Signed-off-by: Sébastien Han <seb@redhat.com>
The data structure has changed slightly in mimic.
Prior to mimic, it used to be:
```
{
"enabled_modules": [
"status"
],
"disabled_modules": [
"balancer",
"dashboard",
"influx",
"localpool",
"prometheus",
"restful",
"selftest",
"zabbix"
]
}
```
From mimic on, it looks like this:
```
{
"enabled_modules": [
"status"
],
"disabled_modules": [
{
"name": "balancer",
"can_run": true,
"error_string": ""
},
{
"name": "dashboard",
"can_run": true,
"error_string": ""
}
]
}
```
This means we can't simply check whether `item` is in
`_ceph_mgr_modules.disabled_modules`.
The idea here is to use the `map(attribute='name')` filter to build a flat
list of module names when deploying mimic.
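A minimal sketch of a conditional handling the mimic format, assuming `_ceph_mgr_modules` holds the output shown above; the rest of the task is illustrative:
```
- name: enable ceph mgr modules reported as disabled by mimic
  command: ceph --cluster {{ cluster }} mgr module enable {{ item }}
  with_items: "{{ ceph_mgr_modules }}"
  when: item in (_ceph_mgr_modules.disabled_modules | map(attribute='name') | list)
```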
Fixes: #2766
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>