In Luminous, ceph-disk defaults to bluestore, so all our scenarios are
using bluestore; we need to force testing both object stores.
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Fail with a sane message if the devices or raw_journal_devices variables
are strings instead of lists during manual device assignment.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
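A minimal sketch of such a check; the variable names come from the commit above, the exact task layout and message are illustrative:
```
# Hypothetical guard: abort early with a clear message when 'devices'
# was given as a string instead of a list.
- name: make sure devices is a list
  fail:
    msg: "devices must be a list of block devices, not a string"
  when:
    - devices is defined
    - devices is string
```
The same kind of guard would apply to raw_journal_devices.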
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much, so this commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface for storing objects, but the scenarios more or
less remain the same.
If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for rocksdb db, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
Signed-off-by: Sébastien Han <seb@redhat.com>
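As a rough illustration only, the three combinations described above translate into group_vars excerpts along these lines (device paths are placeholders, not the actual scenario files):
```
# 1. DB and WAL collocated with the data on each device
osd_objectstore: bluestore
journal_collocation: true
devices:
  - /dev/sdb
  - /dev/sdc

# 2. Dedicated device for the rocksdb DB; WAL and data collocated
# osd_objectstore: bluestore
# raw_multi_journal: true
# raw_journal_devices:
#   - /dev/sdd

# 3. Dedicated devices for both the rocksdb DB and the WAL
# osd_objectstore: bluestore
# raw_multi_journal: true
# raw_journal_devices:
#   - /dev/sdd
# bluestore_wal_devices:
#   - /dev/sde
```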
There is no need for two variables to enable bluestore. Prior to this
patch one had to do the following to activate it:
osd_objectstore: bluestore
bluestore: true
Now you just need to set `osd_objectstore: bluestore`.
Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
Followup on https://github.com/ceph/ceph-ansible/pull/1469 where we
merged most of the container code from roles/ceph-*/task/docker/*.yml
into roles/ceph-docker-common/tasks/
It seems that we forgot to remove the original files.
Signed-off-by: Sébastien Han <seb@redhat.com>
`ceph-docker-common`:
At the moment there are a lot of duplicated tasks in each
`./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored into
`./roles/ceph-docker-common/tasks/main.yml`.
`*_containerized_deployment` variables:
All `*_containerized_deployment` variables have been refactored into a single
variable, `containerized_deployment`.
Duplicate `cephx` variables in `group_vars/*` have been removed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
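For example, where group_vars previously toggled one flag per role, a single switch is now enough (the old per-role names below are recalled from memory and may not match exactly):
```
# Before (duplicated across roles):
# mon_containerized_deployment: true
# osd_containerized_deployment: true

# After (one variable for every role):
containerized_deployment: true
```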
In addition to `196fa7e`, this commit checks if a container has
already been launched and deletes it before retrying the ceph osd
prepare process.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
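A hedged sketch of that cleanup step; the container name and lookup are assumptions, not the actual tasks:
```
# Look for a leftover prepare container from a failed run...
- name: check for a previous osd prepare container
  command: docker ps -a -q --filter "name=ceph-osd-prepare-{{ ansible_hostname }}"
  register: leftover_container
  changed_when: false

# ...and remove it before retrying the prepare step.
- name: remove the leftover prepare container
  command: docker rm -f ceph-osd-prepare-{{ ansible_hostname }}
  when: leftover_container.stdout != ''
```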
Proof-of-concept clusters or actual production clusters will never want to use this. We also do not test it anywhere for this same reason.
Signed-off-by: Gregory Meno <gmeno@redhat.com>
Ansible's assemble module by default will put all files in the src
directory together into dest. We only want to put {{ cluster }}.conf
and osd.conf together, not anything that might have found its way into
/etc/ceph/ceph.d (e.g. files left by the sysadmin taking backups
before an ansible run). So specify a regexp that matches only those
two files.
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
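A sketch of the resulting task; paths and the exact regexp are illustrative:
```
# Only assemble {{ cluster }}.conf and osd.conf; ignore anything else
# an admin may have dropped into /etc/ceph/ceph.d.
- name: assemble the cluster configuration
  assemble:
    src: /etc/ceph/ceph.d/
    dest: "/etc/ceph/{{ cluster }}.conf"
    regexp: "^({{ cluster }}|osd)\\.conf$"
    owner: root
    group: root
    mode: 0644
```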
As reported in
https://github.com/ceph/ceph-ansible/issues/1403, when devices are held
by LVM and `osd_auto_discovery` is set to true, it is not enough to check
for a partition count of 0, since Ansible reports no partitions for such
devices even though they are in use.
This patch also looks for 'holders', which in the LVM case corresponds
to the name of the PV. Now we also require holders to be 0.
Fixes: #1403
Signed-off-by: Sébastien Han <seb@redhat.com>
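A rough sketch of the combined condition, assuming the per-device information comes from the ansible_devices facts:
```
# Treat a device as free only when it has no partitions AND no holders;
# an LVM PV shows zero partitions but a non-empty holders list.
- name: build the list of usable devices for auto discovery
  set_fact:
    usable_devices: "{{ usable_devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when:
    - item.value.partitions | length == 0
    - item.value.holders | length == 0
    - item.value.removable == "0"
```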
Problem: too many different commands to do the same thing. The 'cut'
command in infrastructure-playbooks/purge-cluster.yml was also wrong.
This sed command from osixia in ceph-docker
https://github.com/ceph/ceph-docker/pull/580/ addresses all the
scenarios.
Signed-off-by: Sébastien Han <seb@redhat.com>
Install the package from the official repos rather than pip when using RHEL.
This commit fixes https://bugzilla.redhat.com/show_bug.cgi?id=1420855
It also refactors all `roles/ceph-*/tasks/docker/pre_requisite.yml` files
to avoid a lot of duplicated code.
Fix: #1303
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
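A minimal sketch of the idea; the package name and distribution test are assumptions:
```
# On RHEL, take the docker python bindings from the official repos...
- name: install docker-py from distro packages
  package:
    name: python-docker-py
    state: present
  when: ansible_distribution == 'RedHat'

# ...elsewhere keep installing them from pip.
- name: install docker-py from pip
  pip:
    name: docker-py
  when: ansible_distribution != 'RedHat'
```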
Since some distros (e.g. Atomic) will not allow /usr/share to be writable,
we let the operator decide where to put that script.
Signed-off-by: Sébastien Han <seb@redhat.com>
Oh yeah! This patch adds more fine-grained control over how we run the
activation osd container. We now use --device to give read, write and
mknod access to a specific device to be consumed by Ceph. We also use the
SYS_ADMIN cap to allow mount operations; ceph-disk needs to temporarily
mount the osd data directory during the activation sequence.
This patch also enables the support of dedicated journal devices when
deploying ceph-docker with ceph-ansible.
Depends on https://github.com/ceph/ceph-docker/pull/478
Signed-off-by: Sébastien Han <seb@redhat.com>
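Roughly, the activation container is started along these lines; the flags come from the description above, everything else (device paths, image, entrypoint argument) is illustrative:
```
# Expose only the OSD device (read/write/mknod) and grant SYS_ADMIN so
# ceph-disk can temporarily mount the osd data directory.
- name: activate the osd inside a container
  command: >
    docker run --rm
    --device=/dev/sdb:/dev/sdb:rwm
    --device=/dev/sdb1:/dev/sdb1:rwm
    --cap-add=SYS_ADMIN
    -e OSD_DEVICE=/dev/sdb
    ceph/daemon osd_ceph_disk_activate
```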
We changed the way we declare the image.
Prior to this patch we had to use a "user/image:tag"
format, which is incompatible with non Docker Hub registries where you
usually don't have a "user". On the Docker Hub a "user" is also
identified as a namespace, so for Ceph the user was "ceph".
Variables have been simplified with only:
* ceph_docker_image
* ceph_docker_image_tag
1. For Docker Hub images: ceph_docker_image: "ceph/daemon" will give
you the 'daemon' image of the 'ceph' user.
2. For non Docker Hub images: ceph_docker_image: "daemon" will simply
give you the "daemon" image.
Infrastructure playbooks have been modified as well.
The file group_vars/all.docker.yml.sample has also been removed.
It is hard to maintain since we have to generate it manually. If
you want to configure specific variables for a specific daemon simply
edit group_vars/$DAEMON.yml
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1420207
Signed-off-by: Sébastien Han <seb@redhat.com>
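By way of example, the two cases above boil down to group_vars entries like these (the tag value is just a placeholder):
```
# Docker Hub image: the "user" is simply the 'ceph' namespace.
ceph_docker_image: "ceph/daemon"
ceph_docker_image_tag: "latest"

# Private registry image with no namespace:
# ceph_docker_image: "daemon"
# ceph_docker_image_tag: "latest"
```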
Since we now only support systemd as an init system, we can finally
treat containers as processes using systemd, and do so for all the
distros.
Signed-off-by: Sébastien Han <seb@redhat.com>
According to #1216, we need to simplify the code by removing the
support of anything before Jewel.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Sometimes, for testing, users tend to delete the whole /var/lib/ceph and
then run Ansible again; OSDs will never come up if we do not create their
directory.
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this patch we had several ways to run containers: we could use
Ansible's docker module on some distros, while on container distros we were
using systemd. We strongly believe treating containers as services with
systemd is the right approach, so this patch generalizes it to all the
distros. These days most of the distros are running systemd, so it's a fair
assumption.
Signed-off-by: Sébastien Han <seb@redhat.com>
Refactor the code using the 'package' module.
Fix Issue #520
(However it doesn't cover all cases, because some are not refactorable,
e.g. due to diverging package names between distributions.)
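The gist of the refactor, with a placeholder package name:
```
# Before: one task per package manager.
# - apt: name=ntp state=present    (Debian/Ubuntu)
# - yum: name=ntp state=present    (RedHat/CentOS)

# After: a single task that delegates to the underlying manager.
- name: install ntp
  package:
    name: ntp
    state: present
```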
The Ansible task was not properly fetching the OSD cluster keyring, causing
the keyring to be missing when we needed to authenticate. Similarly, we
were not properly waiting for the OSD keyring to be available before
continuing.
Signed-off-by: Ivan Font <ifont@redhat.com>
- Update rolling update playbook to support containerized deployments
for mons, osds, mdss, and rgws
- Skip checking if existing cluster is running when performing a rolling
update
- Fixed bug where we were failing to start the mds container because it
was missing the admin keyring. The admin keyring was missing because
it was not being pushed from the mon host to the ansible host due to
the keyring not being available before running the copy_configs.yml
task include file. Now we forcefully wait for the admin keyring to be
generated before continuing with the copy_configs.yml task include file
- Skip pre_requisite.yml when running on Atomic Host. This means it is
  technically no longer required to skip tasks tagged with with_pkg
- Add missing variables to all.docker.sample
- Misc. cleanup
Signed-off-by: Ivan Font <ifont@redhat.com>
Even for dmcrypt we need to check the status of "devices" and
"raw_journal_devices" as well, so we can fix them if there is something
wrong with them.
Signed-off-by: Sébastien Han <seb@redhat.com>
This allows us to test devices set with persistent naming such as
/dev/disk/by-*
When registering devices we can use persistent (/dev/disk/by-*) or
non-persistent (/dev/sd*) names. Both declarations are supported by
ceph-ansible. There were just two tasks that were not compatible with
this. Since we support using partitions directly, we need to test for
that because the device activation will be different. To test whether the
device is a partition we use a regular expression, which wasn't compatible
with the persistent device naming format (/dev/disk/by-*).
This commit solves the issue by reading the path of the symlink, since
devices like /dev/disk/by-* are symlinks to devices like /dev/sd*.
Signed-off-by: Sébastien Han <seb@redhat.com>
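A sketch of that resolution step (task names and the follow-up are illustrative):
```
# Resolve /dev/disk/by-* symlinks to the underlying /dev/sdX device so
# the existing 'is this a partition?' regexp keeps working.
- name: read the real path behind persistent device names
  command: readlink -f {{ item }}
  with_items: "{{ devices }}"
  register: resolved_devices
  changed_when: false

# Later tasks can then match the regexp against item.stdout
# (e.g. /dev/sdb1) instead of the /dev/disk/by-* symlink itself.
```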
Use the activation scenario instead of the full ceph_disk one; we
already have a task to prepare OSDs, so we just need to activate the
device.
working for me using vagrant :)
Signed-off-by: Sébastien Han <seb@redhat.com>
* changed s/colocation/collocation/
* declare dmcrypt variable in ceph-common so the variables check does
not fail
Signed-off-by: Sébastien Han <seb@redhat.com>
This fixes #845 for containerized deployments. We now also mount the
/etc/localtime volume in the containers in order to synchronize the host
timezone with the container timezone.
Signed-off-by: Ivan Font <ivan.font@redhat.com>
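A minimal sketch of the extra bind mount, assuming the container is started with a plain docker run (image and daemon name are placeholders):
```
# Mount the host timezone file read-only so the container uses the same
# timezone as the host.
- name: run the mon container with the host timezone
  command: >
    docker run -d --name ceph-mon
    -v /etc/localtime:/etc/localtime:ro
    ceph/daemon mon
```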
Update each role's task to use the respective role's username, image
name, and image tag to check if a container is already running. This was
causing false failures because we were not matching any running
containers and subsequently running checks.yml to check the status of
cluster files being left behind.
Signed-off-by: Ivan Font <ivan.font@redhat.com>
Docker makes it difficult to use images that are not on signed
registries. This is a problem for developers, who likely won't have
access to a registry with proper signed certificates.
This allows the ability to use any docker image on the machine running
vagrant/ansible. The way it works is that the image in question is
exported locally, then sent to each target box and imported there.
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
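Conceptually the flow looks like this, expressed as illustrative tasks (image name and paths are placeholders):
```
# Export the locally built image on the machine running ansible...
- name: export the local image
  local_action: command docker save -o /tmp/ceph-daemon.tar ceph/daemon:latest

# ...ship the archive to the target box...
- name: copy the image archive to the target
  copy:
    src: /tmp/ceph-daemon.tar
    dest: /tmp/ceph-daemon.tar

# ...and import it there, no registry involved.
- name: import the image on the target
  command: docker load -i /tmp/ceph-daemon.tar
```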
In order to align behaviour across all Ansible versions, we now use the
full path for the template, relying on the `role_path` variable. Now all
the tasks using the template module have a uniform syntax.
Might fix issue raised in #483
Signed-off-by: Sébastien Han <seb@redhat.com>
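For instance, a template task now spells out the path explicitly (the template name below is illustrative):
```
# Before (implicit lookup relative to the role):
# - template:
#     src: ceph.conf.j2
#     dest: /etc/ceph/ceph.conf

# After (explicit path, identical on every Ansible version):
- name: generate the ceph configuration
  template:
    src: "{{ role_path }}/templates/ceph.conf.j2"
    dest: /etc/ceph/ceph.conf
```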
This is purely a refactor. It converts 'and' conditionals in when
statements into lists rather than multiline strings. This does not work
for nested conditionals, but those can be formatted with indents.
It also moves one-line when statements onto the same line as the when
keyword itself.
A small logic bug was found in ceph-osd/tasks/check_devices.yml, which
was also fixed.
Signed-off-by: Sam Yaple <sam@yaple.net>
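A small before/after sketch of the conditional style (the task and variables are placeholders):
```
# Before: conditions packed into one multiline string.
# when: "osd_group_name in group_names and journal_collocation"

# After: a list, where every entry must be true.
- name: example task guarded by two conditions
  debug:
    msg: "both conditions are true"
  when:
    - osd_group_name in group_names
    - journal_collocation
```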
The ceph-osd role currently uses ansible_service_mgr, which is a fact
only available on Ansible 2.x and greater. This commit sets a similar
fact called init_system, which will store the contents of /proc/1/comm
(systemd, init, etc.), and then references it in ceph-osd instead.
Closes #741
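A sketch of how such a fact can be gathered (task names are illustrative):
```
# Record the command name of PID 1 (systemd, init, ...) so later tasks
# can branch on it without relying on Ansible 2.x-only facts.
- name: check the init system
  command: cat /proc/1/comm
  register: init_system_output
  changed_when: false

- name: set the init_system fact
  set_fact:
    init_system: "{{ init_system_output.stdout }}"
```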
If the ceph cluster name includes numbers, the grep used to find the OSD
IDs from /var/lib/ceph/osd/ would also return the numbers that were in
the cluster name.
For example, if the cluster was named 'mine123' and there was only one
OSD on the node, then the task that finds the OSD IDs would return
'123' and '0'.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
We now check if the device has already been prepared, if we detect a
ceph partition we do not prepare the device.
Also fixed some issues while running on Atomic or CoreOS.
Signed-off-by: Sébastien Han <seb@redhat.com>
Fix the following error: can't open
/var/lib/ceph/bootstrap-osd/ceph.keyring: (13) Permission denied
Signed-off-by: Sébastien Han <seb@redhat.com>
We now have the ability to set the `cluster` variable to a specific
value that will determine the name of the cluster.
Signed-off-by: Sébastien Han <seb@redhat.com>
With Jewel comes a new store for Ceph objects: BlueStore. Adding an
extra scenario might seem like useless duplication, however the
ultimate goal is to remove the other roles later. Thus it is easier to
add a new role instead of modifying the existing ones. Once we drop
support for releases older than Jewel we will simply remove all the
previous scenario files.
Signed-off-by: Sébastien Han <seb@redhat.com>
Since we want to activate the OSD when it's a partition, we are looking
for a return code equal to 0, which means the device is a
partition.
closes: #636
Signed-off-by: Sébastien Han <seb@redhat.com>
Currently we don't yet support running OSDs with SELinux in
enforcing mode, thus it's better to ensure that ceph-ansible
explicitly makes SELinux permissive. This should help in
scenarios such as hyperconverged setups where OSDs are collocated
with VMs on compute nodes that need SELinux enforcing, but the
OSDs don't.
Signed-off-by: Deepak C Shetty <deepakcs@redhat.com>
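A minimal sketch of what making SELinux permissive looks like in a task (the guard on the gathered facts is an assumption):
```
# Explicitly switch SELinux to permissive mode on nodes running OSDs.
- name: set selinux to permissive
  selinux:
    policy: targeted
    state: permissive
  when: ansible_selinux.status == 'enabled'
```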
This commit allows you to set a new variable to 'true' if you want the
ceph admin key copied over to different kinds of hosts such as MDS,
OSD and RGW. To enable this, just set `copy_admin_key` to true.
Closes: #555
Signed-off-by: Sébastien Han <seb@redhat.com>
When autodiscovering disks, disks can be skipped if either they are
removable, or if they have partitions on them. Skipped actions have no
'rc' attribute, though, so the 'ceph prepare' conditional fails unless
we first check to ensure that the results were not skipped before
checking the return value.
As stated in https://github.com/ansible/ansible/issues/4297,
if we register a variable twice, the register will not get overwritten
even if the task is skipped... So we use the fact variant as
mentioned in the Ansible issue.
Signed-off-by: Sébastien Han <seb@redhat.com>
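A hedged sketch of the guarded loop; the registered variable name stands in for the actual parted results:
```
# Run 'ceph-disk prepare' only for results that were not skipped and
# whose check succeeded; skipped results carry no 'rc' attribute at all.
- name: prepare osd disk(s)
  command: ceph-disk prepare {{ item.item }}
  with_items: "{{ parted_results.results }}"
  when:
    - item.skipped is not defined
    - item.rc is defined
    - item.rc == 0
```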
While this is not widely used (AFAIK :p) the feature was broken. Thanks
to @zmc for reporting it. You can now set `osd_auto_discovery` to
true in your group_vars/osd and it will go through all the available
devices and make them OSDs.
Signed-off-by: Sébastien Han <seb@redhat.com>
run containerized daemons in virtual machines.
to enable it simply do:
`cp site-docker.yml.sample site-docker.yml`
and set `docker: true` in `vagrant_variables.yml`
Signed-off-by: Sébastien Han <seb@redhat.com>
At the moment, all the tasks using the file module are duplicated to have different ownerships depending on the fact `is_ceph_infernalis`.
The goal of this commit is to have a new logic for this:
- First, set facts depending on the `is_ceph_infernalis` fact
- Then create the files or directories using those facts as ownerships (a sketch follows below).
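A sketch of that logic, with hypothetical fact names:
```
# Decide the ownership once, based on the release...
- name: set ownership facts
  set_fact:
    dir_owner: "{{ 'ceph' if is_ceph_infernalis else 'root' }}"
    dir_group: "{{ 'ceph' if is_ceph_infernalis else 'root' }}"

# ...then use a single file task instead of one per release.
- name: create the osd directory
  file:
    path: /var/lib/ceph/osd
    state: directory
    owner: "{{ dir_owner }}"
    group: "{{ dir_group }}"
```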
I changed the argument used for starting the mds server. (pre
infernalis)
```
service ceph start mds
```
errors, while
```
service ceph start mds.$hostname
```
correctly starts the service.
I changed the mds directory ownership from ceph:ceph to root:root
again, for pre-infernalis.
And finally, add the ceph_stable_releases checks for the upstart
activation task, for the Infernalis release or later.
Verify that partitions (for both osd disks and journal disks) are sane
before attempting to prepare the device. Fail if parted fails for
whatever reason.
Closes: #437
Signed-off-by: Sébastien Han <seb@redhat.com>
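An illustrative version of such a check (variable names are placeholders):
```
# Run parted against each configured device and fail loudly if it
# cannot read the partition table.
- name: check the partition status of the osd disks
  command: parted --script {{ item }} print
  with_items: "{{ devices }}"
  register: osd_partition_status
  failed_when: osd_partition_status.rc != 0
  changed_when: false
```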
Since we renamed the variables and removed the old 'docker' variable we
can now collocate container daemons with standard bare metal deployment.
For instance, monitors can be containerized but osds can be deployed
traditionally.
Signed-off-by: Sébastien Han <seb@redhat.com>
Currently, the fetch directory is created in your working directory
(where ansible is run from). We prefer to not keep any state in this
directory and would prefer to have the fetch directory configurable so
we can store it outside of our code checkout.
This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.
Closes issue #383
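For example (path and the commented task are illustrative):
```
# group_vars/all.yml -- keep fetched keys outside the code checkout.
fetch_directory: ~/ceph-ansible-keys/

# Tasks then reference the variable instead of a hard-coded 'fetch/':
# - fetch:
#     src: "/etc/ceph/{{ cluster }}.client.admin.keyring"
#     dest: "{{ fetch_directory }}/{{ fsid }}/etc/ceph/"
```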
I'm removing the ceph partition check from `activate osd(s) when device
is a disk` because the ceph partition does not exist when parted was
registered (on a fresh install). This was causing the activate step to
be skipped.
This will likely one day or another break something. If ceph-disk
complains about a disk just use the purge-cluster.yml playbook first as
it will wipe all the devices.
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this change, the zap was executed during every play, which was
not ideal. Now we check if there is a 'ceph' partition; if so, we skip
the zap.
Signed-off-by: leseb <seb@redhat.com>
The logic was broken here for repeated runs. We only want to run
'ceph-disk prepare' when the disk does not contain a ceph partition, is
not a partition, and raw_multi_journal is set. Previously it would
attempt to run 'ceph-disk prepare' when there was a ceph partition
because the second half of the 'or' was still true since it isn't a
partition.
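Roughly, the corrected guard looks like this; the two boolean facts below merely stand in for the registered "ceph partition present?" and "is it a partition?" checks:
```
# Prepare only when no ceph partition is present, the item is a whole
# disk, and dedicated journals are requested.
- name: prepare osd disk(s) with a dedicated journal
  command: ceph-disk prepare {{ item }}
  with_items: "{{ devices }}"
  when:
    - not device_has_ceph_partition
    - not device_is_partition
    - raw_multi_journal
```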
This branch has been sitting on my local repo for a while. I guess I had
time to spend on a plane :).
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Still WIP, @mwheckmann feel free to test
As requested by #162
Current known issue: since ceph.conf gets modified during every single
run (at the end, during the merge), this will restart ceph daemons.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
We remove all the partitions and labels and re-create something clean
prior to preparing the device. This will help solve many issues with
existing disks or when scratching/redeploying test environments often.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
I added a 'ceph-' prefix to all the roles related to Ceph. Since we are
about to push the roles to Ansible Galaxy, that will make it easier when
we want to use these roles in a larger environment with other roles.
Fixes: #94
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>