Verify that partitions (for both osd disks and journal disks) are sane
before attempting to prepare the device. Fail if parted fails for
whatever reason.
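In sketch form, assuming a `devices` list (task name is illustrative,
not necessarily the exact one in the role):

    - name: check if the osd and journal devices are sane
      command: parted --script {{ item }} print
      changed_when: false
      with_items: "{{ devices }}"
      # the command module fails the play on a non-zero return code,
      # so any parted error stops the run before 'ceph-disk prepare'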
Closes: #437
Signed-off-by: Sébastien Han <seb@redhat.com>
Since we renamed the variables and removed the old 'docker' variable,
we can now collocate containerized daemons with a standard bare metal
deployment. For instance, monitors can be containerized while osds are
deployed traditionally.
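For example, with per-group variables (the variable names here are
illustrative; check the group_vars samples for the real ones):

    # group_vars/mons.yml
    mon_containerized_deployment: true

    # group_vars/osds.yml
    osd_containerized_deployment: false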
Signed-off-by: Sébastien Han <seb@redhat.com>
Currently, the fetch directory is created in your working directory
(where ansible is run from). We prefer not to keep any state in this
directory, so we would rather make the fetch directory configurable
and store it outside of our code checkout.
This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.
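For example (the fetch task shown is illustrative):

    # group_vars/all.yml
    fetch_directory: /var/lib/ceph-ansible/fetch

    # a task consuming it
    - name: copy the monitor keyring to the machine running ansible
      fetch:
        src: /etc/ceph/ceph.mon.keyring
        dest: "{{ fetch_directory }}/"
        flat: yes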
Closes issue #383
I'm removing the ceph partition check from `activate osd(s) when device
is a disk` because the ceph partition does not exist yet at the time
the parted output is registered (on a fresh install). This was causing
the activate step to be skipped.
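Roughly what happens (tasks are illustrative, partition naming
simplified):

    - name: register the partition table of each device
      command: parted --script {{ item }} print
      register: parted
      changed_when: false
      with_items: "{{ devices }}"

    # 'ceph-disk prepare' runs here and creates the ceph partition,
    # but 'parted' still holds the pre-prepare output, so on a fresh
    # install the condition below is false and activation is skipped:
    - name: activate osd(s) when device is a disk
      command: ceph-disk activate {{ item.0 }}1
      with_together:
        - "{{ devices }}"
        - "{{ parted.results }}"
      when: "'ceph' in item.1.stdout"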
This will likely break something one day or another. If ceph-disk
complains about a disk, just run the purge-cluster.yml playbook first,
as it will wipe all the devices.
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this change, the zap was executed during every play, which was
not ideal. Now we check whether there is a 'ceph' partition; if there
is, we skip the zap.
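In sketch form (exact commands and task names may differ):

    - name: check if the device already has a ceph partition
      shell: parted --script {{ item }} print | grep -sq ceph
      register: ceph_partition
      failed_when: false
      changed_when: false
      with_items: "{{ devices }}"

    - name: zap the device
      command: sgdisk --zap-all -- {{ item.0 }}
      with_together:
        - "{{ devices }}"
        - "{{ ceph_partition.results }}"
      when: item.1.rc != 0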
Signed-off-by: leseb <seb@redhat.com>
The logic was broken here for repeated runs. We only want to run
'ceph-disk prepare' when the disk does not contain a ceph partition, is
not itself a partition, and raw_multi_journal is set. Previously it
would attempt to run 'ceph-disk prepare' even when there was a ceph
partition, because the second half of the 'or' was still true (the
device is not a partition).
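A boolean sketch of the fix (the condition names are placeholders, not
the real registered variables):

    # before (broken): runs whenever the device is not a partition,
    # even if it already carries a ceph partition
    when: (no_ceph_partition or not_a_partition) and raw_multi_journal

    # after (fixed): all three conditions must hold
    when: no_ceph_partition and not_a_partition and raw_multi_journal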
This branch has been sitting on my local repo for a while. I guess I had
time to spend on a plane :).
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Still WIP, @mwheckmann feel free to test
As requested by #162
Current known issue: since ceph.conf gets modified during every single
run (at the end, during the merge), ceph daemons will be restarted on
each run.
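The restart comes from the usual template/handler chain; roughly (the
handler name is illustrative):

    - name: generate ceph configuration file
      template:
        src: ceph.conf.j2
        dest: /etc/ceph/ceph.conf
      notify: restart ceph

    # the merge rewrites ceph.conf on every run, so the task always
    # reports 'changed' and the handler always fires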
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
We remove all the partitions and the label and re-create something
clean prior to preparing the device. This will help solve many issues
with existing disks, or when frequently scratching and redeploying test
environments.
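One way to do it (commands illustrative):

    - name: wipe the device and recreate a clean label
      shell: |
        wipefs --all {{ item }}
        parted --script {{ item }} mklabel gpt
      with_items: "{{ devices }}"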
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
I added a 'ceph-' prefix to all the roles related to Ceph. Since we are
about to push the roles to Ansible Galaxy, this will make it easier to
use them in a larger environment alongside other roles.
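After the rename, a playbook pulls them in like this (host group names
illustrative):

    - hosts: mons
      roles:
        - ceph-mon

    - hosts: osds
      roles:
        - ceph-osd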
Fixes: #94
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>