This adds the action 'batch' to the ceph-volume module so that we can
run the new 'ceph-volume lvm batch' subcommand. A functional test is
also included.
If devices is defined and osd_scenario is lvm, then the 'ceph-volume
lvm batch' command will be used to create the OSDs.
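For illustration, this is roughly the invocation the new action ends up
running (device paths are hypothetical):
```
ceph-volume lvm batch --yes /dev/sdb /dev/sdc
```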
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Since the container now simply reads the ceph.conf, we remove all the
unnecessary options.
This PR is also the foundation for supporting multiple backends, such
as the new 'beast' backend from Ceph Mimic.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
The include does not need a condition on containerized_deployment since
we are already in an include that has the same condition.
Signed-off-by: Sébastien Han <seb@redhat.com>
In environments where we wish to have manual/greater control over
how the bootstrap keyrings are used, we need to be able to externally
define what the mgr keyring secret will be and have ceph-ansible
use it, instead of it being autogenerated.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610213
Signed-off-by: Graeme Gillies <ggillies@akamai.com>
* add some missing dots and ``
* add/remove line breaks
* consistent use of shell prompt in console outputs
* fix block indents
* use code blocks
Signed-off-by: Christian Berendt <berendt@b1-systems.de>
restart_osd_daemon.sh is used to discover and restart all OSDs on a
host. To do this, the script loops over the list of ceph-osd@ services
on the system. This commit fixes a bug in the regular expression
responsible for extracting the OSD ids: the previous version used the
expression `[0-9]{1,2}`, which ignores all OSDs whose ids are longer
than two digits (i.e. greater than 99). The fix removes the upper limit
on the number of digits, as sketched below. The problem existed in two
places in the script.
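A sketch of the change (the exact expression in the script may differ
slightly):
```
# before: OSD ids with three or more digits never match
systemctl list-units | grep -oE "ceph-osd@[0-9]{1,2}\.service"
# after: ids of any length match
systemctl list-units | grep -oE "ceph-osd@[0-9]+\.service"
```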
Closes: #2964
Signed-off-by: Artur Fijalkowski <artur.fijalkowski@ing.com>
This commit ensures we stay backward compatible with fqdn deployments.
Since ceph-container enforces deployments to be done with shortnames,
we must keep backward compatibility with clusters already deployed
with an fqdn configuration.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This was introduced by
59ee2e8d3b
and made our socket checks impossible to run: the PID could be found,
but the cctid could not.
This happens during the upgrade to mimic and on clusters running mimic.
So let's force the admin socket back to the way it was, so we can
properly check for existing instances. Also, the
$cluster-$name.$pid.$cctid.asok form is only needed when running
multiple instances of the same daemon, something ceph-ansible cannot
do at the time of writing.
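For example, with the forced setting the socket path can be derived
from the daemon name alone (paths are illustrative):
```
# $cluster-$name.asok is predictable:
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
# with $pid.$cctid appended it is not, e.g.:
#   /var/run/ceph/ceph-osd.0.12345.94558708353024.asok
```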
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610220
Signed-off-by: Sébastien Han <seb@redhat.com>
Instead of failing the entire purge operation when the rbd command
fails, just log an error. This will allow the higher-level target and
config cleanup to complete, and the user only has to delete the rbd
images manually.
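A minimal sketch of the intended behaviour, assuming a loop over the
images in the purge script (pool and image names hypothetical):
```
for image in $(rbd ls "$pool"); do
  rbd rm "$pool/$image" || echo "Could not remove $pool/$image; remove it manually." >&2
done
```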
Signed-off-by: Mike Christie <mchristi@redhat.com>
We were not passing the ceph conf info into the rbd image removal
command, so if the cluster name was not the default, the igw purge
would fail because the rbd rm command failed.
This fixes the bug by passing in the ceph conf info, which carries the
cluster name to use.
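A sketch of the difference (cluster name hypothetical):
```
# before: always talks to the default 'ceph' cluster
rbd rm "$pool/$image"
# after: honours the configured cluster name
rbd --cluster "$cluster_name" rm "$pool/$image"
```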
This fixes Red Hat bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1601949
Signed-off-by: Mike Christie <mchristi@redhat.com>
The ceph-rest-api binary has been removed in mimic so we cannot deploy
it anymore. We keep the role and the compatibility for existing users.
Signed-off-by: Sébastien Han <seb@redhat.com>
The container runs with --rm, which means it will be deleted by Docker
when it exits. Also, 'docker rm -f' is not idempotent and returns 1 if
the container does not exist.
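For example (container name hypothetical):
```
# the container was started with --rm, so Docker removes it on exit;
# a later forced removal then fails:
docker rm -f ceph-mon-mon0    # exits 1: "No such container: ceph-mon-mon0"
# making that call idempotent would require masking the error:
docker rm -f ceph-mon-mon0 || true
```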
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1609007
Signed-off-by: Sébastien Han <seb@redhat.com>
jewel used to create a default `rbd` pool in the default crush root
`default`. We need at least one OSD to satisfy the PGs of this pool,
otherwise the cluster ends up in HEALTH_ERR state because of
`pgs stuck unclean`/`pgs stuck inactive`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
rbd-mirror can't start when deploying jewel because it needs the admin
keyring.
Bringing this task back restores backward compatibility for jewel
deployments.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If we want to be backward compatible with releases prior to luminous,
we have to set the rule name according to the default values used in
jewel.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This task is run on both containerized *and* non-containerized
deployments.
Let's give it a proper title to avoid confusion.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Check `systempython2.stat` instead of `systempython2.stat.exists`.
Without this change, when python2 is not installed, the `stat` task
fails without defining `systempython2.stat`, so the subsequent
installation tasks fail because `systempython2.stat` is undefined.
An example error output (edited for readability):
```
TASK [check for python2] ***********************************************
Wednesday 25 July 2018 14:52:47 +0900 (0:00:00.182) 0:00:00.182 *
fatal: [ceph-osd1.vlan221.vtj]: FAILED! => {
"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not
found\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 127}
...ignoring
TASK [install python2 for debian based systems] ************************
Wednesday 25 July 2018 14:51:00 +0900 (0:00:01.742) 0:00:01.926 *
fatal: [ceph-mon2]: FAILED! => {
"msg": "The conditional check 'systempython2.stat.exists is undefined or
systempython2.stat.exists == false' failed. The error was: error while
evaluating conditional (systempython2.stat.exists is undefined or
systempython2.stat.exists == false): 'dict object' has no attribute 'stat'
\n\n The error appears to have been in
'/Users/arata/git/ceph-ansible/site.yml.sample': line 36, column 7, but
may\n be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
- name: install python2 for debian based systems\n
^ here\n
"}
...ignoring
```
Fixes: #2930
Signed-off-by: Arata Notsu <arata776@gmail.com>
In containerized deployments we now inherit the
radosgw_civetweb_options when bootstrapping the container.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
When distributing the ceph-nfs role, creation of the rados index
object fails because it assumes client.admin is available locally.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1607970
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
Check that the interface provided (see the sketch below):
* exists in the gathered facts (and thus on the system)
* is active
* has an IP address (depending on ip_version)
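Roughly the manual equivalent of these checks (interface name is an
example):
```
ip link show eth0                  # exists on the system
cat /sys/class/net/eth0/operstate  # prints "up" when the interface is active
ip -4 addr show dev eth0           # must report an address (ip -6 for IPv6)
```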
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227
Signed-off-by: Sébastien Han <seb@redhat.com>
An upgrade RHCS 2 -> RHCS 3 will fail if the cluster still has
sortnibblewise set: it stays stuck on "TASK [waiting for clean pgs...]"
because RHCS 3 OSDs will not start if nibblewise is set.
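Manually, flipping the flag before the upgrade amounts to:
```
# switch the cluster away from the legacy nibblewise sort order
ceph osd set sortbitwise
```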
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600943
Signed-off-by: Sébastien Han <seb@redhat.com>
Let's create a dedicated environment for these scenarios; there is no
need to deploy everything.
Doing so will also save some time.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Ansible 2.4 is currently end-of-life.
Ansible 2.5 will go end-of-life after Ansible 2.7 is released.
Fixes: #2901
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The group_vars/all file is not available in the 'ooo-collocation'
scenario, which makes `dev_setup.yml` fail because this path is
hardcoded.
The idea here is to check whether the pattern 'ooo-collocation' is
present in the `change_dir` variable so we can set this path properly
according to the scenario being run.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`test_rbd_mirror_is_up()` is failing on update scenarios because it
assumes `ceph_stable_release` is still set to the value of the
original ceph release; as a result it does not enter the right branch
of the condition and fails.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
In addition to ceph/ceph-build#1082
Let's set the ansible version in each ceph-ansible branch's respective
requirements.txt.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Do not run device validation on every host, only on OSD nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
We now make sure that (see the sketch below):
* devices are actually block special files
* the length of dedicated_devices is identical to that of devices
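The manual equivalent of these checks, as a sketch (device paths
hypothetical; the variable names mirror the group_vars options):
```
devices=(/dev/sdb /dev/sdc)
dedicated_devices=(/dev/sdd /dev/sde)
for dev in "${devices[@]}"; do
  test -b "$dev" || echo "$dev is not a block special file" >&2
done
[ "${#devices[@]}" -eq "${#dedicated_devices[@]}" ] || echo "device list length mismatch" >&2
```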
Signed-off-by: Sébastien Han <seb@redhat.com>
Since no latest-bis-jewel exists, it's using latest-bis, which points
to ceph mimic. In our testing, using it for idempotency/handler tests
means upgrading from jewel to mimic, which is not what we want to do.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since `V2.6-stable` is available and has packages for `mimic`, let's
update this default value accordingly so nfs nodes can be deployed with
mimic.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
34f70428 introduced a fix using the `command` module while this could
have been achieved with the `lvol` module.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Relying on `copy_admin_key` to import created keys on client nodes
obliges us to copy the admin key onto those nodes, which is not
something we may want.
We should use the fact `condition_copy_admin_key` instead, which is
set to `True` when the delegated node is a mon; that way we can import
keys without having to care about the admin keyring.
Fixes: #2867
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The original_basename option in the copy module changed to
_original_basename in Ansible 2.6+; this PR resyncs the config_template
module so it works with both Ansible 2.6+ and earlier versions.
Additionally, this PR removes the _v1_config_template.py file, since
ceph-ansible no longer supports versions of Ansible before version 2,
and so we shouldn't continue to carry that code.
Closes: #2843
Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
Follow-up on #2784.
We must check the generated fact `_disabled_ceph_mgr_modules` to
enable a disabled mgr module.
Otherwise, this task will be skipped because it's not comparing the
right list.
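The disabled list the fact is derived from can be inspected directly
(assuming jq is available for readability):
```
ceph mgr module ls | jq '.disabled_modules'
```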
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600155
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
On containerized deployments, if a mon is stopped, its socket is not
purged, which can cause a failure when the cluster is redeployed after
the purge playbook has been run.
Typical error:
```
fatal: [osd0]: FAILED! => {}
MSG:
'dict object' has no attribute 'osd_pool_default_pg_num'
```
The fact is not set because of this earlier failure:
```
ok: [mon0] => {
"changed": false,
"cmd": "docker exec ceph-mon-mon0 ceph --cluster test daemon mon.mon0 config get osd_pool_default_pg_num",
"delta": "0:00:00.217382",
"end": "2018-07-09 22:25:53.155969",
"failed_when_result": false,
"rc": 22,
"start": "2018-07-09 22:25:52.938587"
}
STDERR:
admin_socket: exception getting command descriptions: [Errno 111] Connection refused
MSG:
non-zero return code
```
This failure happens when the ceph-mon service is stopped: since the
socket isn't purged, it's a leftover which confuses the process.
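In other words, the redeploy finds a stale socket such as (path
illustrative):
```
ls /var/run/ceph/
# ceph-mon.mon0.asok   <- leftover from the stopped container; nothing
#                         listens on it, hence the connection refused error
```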
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>