We don't currently test server reboots, and a lot can go wrong after one.
So now we deploy, reboot the machines, and then run testinfra.
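Below is a minimal sketch of the kind of check testinfra runs after the reboot; the unit name and the test itself are assumptions for illustration, not the actual test suite:

```python
# A hypothetical testinfra check of the kind run after the reboot step;
# `host` is the fixture testinfra injects for the connected node.
def test_mon_service_survives_reboot(host):
    hostname = host.check_output("hostname -s")
    mon = host.service("ceph-mon@{}".format(hostname))
    assert mon.is_enabled   # the unit must come back on its own after a reboot
    assert mon.is_running
```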
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this patch, the activation sequence for autodetection was always
skipped because we were asking to activate a device without partitions,
which doesn't make sense.
We also fix the way we look up a device: since the data partition is
always numbered 1, we take the min element of the dict.
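To illustrate the lookup (the dict layout here is hypothetical, keyed by partition number):

```python
# Hypothetical shape of the lookup: a dict keyed by partition number.
partitions = {2: "/dev/sdb2", 1: "/dev/sdb1"}

# The data partition is always numbered 1, so the smallest key is the one
# we want to activate.
data_partition = partitions[min(partitions)]
assert data_partition == "/dev/sdb1"
```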
Closes: https://github.com/ceph/ceph-ansible/issues/1782
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We need to force the value of the `docker` variable, which is initially
set to `false`, since this is a migration from a non-containerized to a
containerized cluster.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We must mask the systemd units so we are sure that even if the system
reboots the OSDs won't start.
Also remove Ceph udev rules if found on the system prior to deploying the
containers. If we don't do this we are exposed to conflicts between udev
rules and systemd unit files.
The CI will now also test the migration from a non-containerized cluster
to a containerized cluster.
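For reference, a rough Python sketch of the equivalent manual steps; the unit names and the udev rules path are assumptions, not what the playbook literally runs:

```python
import glob
import os
import subprocess

# Mask the OSD units so a reboot cannot bring the non-containerized OSDs
# back up. The unit names are examples; the real list comes from the
# cluster being migrated.
for unit in ("ceph-osd@0.service", "ceph-osd@1.service"):
    subprocess.run(["systemctl", "mask", unit], check=True)

# Drop any Ceph udev rules so they cannot race with the systemd unit
# files that now manage the containers.
for rules_file in glob.glob("/usr/lib/udev/rules.d/*ceph*.rules"):
    os.remove(rules_file)
```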
Signed-off-by: Sébastien Han <seb@redhat.com>
The installation process is now described as follows (a sketch of the
resulting decision tree is shown after the list):
* you still have to choose a 'ceph_origin' installation method. The
origin can be 'repository' (add a new repository), 'distro' (use the
packages provided by your distribution's native repositories), or 'local'
(only available on Red Hat systems; it installs locally built packages).
This last option is not well tested, so use it carefully.
* if ceph_origin == 'repository' you will have to decide what kind of
repository you want to enable:
- community: corresponds to the stable upstream/community version
- enterprise: corresponds to the stable enterprise/downstream version
(basically you are a Red Hat customer)
- dev: installs ceph from packages built out of the GitHub development
branches
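The decision tree above, sketched as plain Python purely for illustration (the option and value names follow this description and are not a definitive reference):

```python
# Illustrative only: a plain-Python rendering of the choices above.
def validate_install_options(ceph_origin, repository_type=None):
    if ceph_origin not in ("repository", "distro", "local"):
        raise ValueError("unknown ceph_origin: {}".format(ceph_origin))
    if ceph_origin == "repository" and repository_type not in (
            "community",   # stable upstream/community packages
            "enterprise",  # stable downstream packages (Red Hat customers)
            "dev"):        # packages built from the GitHub dev branches
        raise ValueError("pick community, enterprise or dev")

validate_install_options("repository", "community")
```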
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There are only two main OSD scenarios now:
* collocated: everything remains on the same device:
- data, db and wal for bluestore
- data and journal for filestore
* non-collocated: dedicated devices for some of the components (see the
illustrative layout below)
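An illustrative layout of what the two scenarios mean for a node (device names are made up; the real ones come from the inventory):

```python
# Illustrative device layouts only; real devices come from the inventory.
collocated = {
    # bluestore: data, db and wal share the device
    # filestore: data and journal share the device
    "/dev/sdb": ["data", "db", "wal"],
}
non_collocated = {
    # some components get a dedicated device
    "/dev/sdb": ["data"],
    "/dev/nvme0n1": ["db", "wal"],
}
```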
Signed-off-by: Sébastien Han <seb@redhat.com>
If you use the 'dev' factor, the testing scenario will
use repos from shaman.ceph.com. You can define CEPH_DEV_BRANCH
and CEPH_DEV_SHA1 to specify which repo you'd like to test.
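As a sketch of how a wrapper can read them (the defaults shown are assumptions, not necessarily what tox.ini uses):

```python
import os

# Hypothetical defaults; the real tox configuration may use different ones.
branch = os.environ.get("CEPH_DEV_BRANCH", "master")
sha1 = os.environ.get("CEPH_DEV_SHA1", "latest")

# These end up handed to ansible-playbook as extra vars in the real setup.
print("ceph_dev_branch={} ceph_dev_sha1={}".format(branch, sha1))
```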
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Since we are hitting this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1324587
e.g.:
`failed: internal error: Monitor path /var/lib/libvirt/qemu/domain-bs-docker-cluster-dmcrypt-journal-collocation_mon0_1499294943_ba9faf7bf296533177f6/monitor.sock too big for destination`
and we can't upgrade libvirt in our CI for some reason, we need to make
the directory names shorter in order to work around this issue.
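For context, the limit we are bumping into is most likely the UNIX socket path length (about 108 bytes on Linux); a quick sanity check of the failing path from the error above:

```python
# Rebuild the path from the libvirt error message above.
prefix = "/var/lib/libvirt/qemu/domain-"
scenario = "bs-docker-cluster-dmcrypt-journal-collocation"
suffix = "_mon0_1499294943_ba9faf7bf296533177f6/monitor.sock"
monitor_sock = prefix + scenario + suffix

# Linux limits UNIX socket paths to roughly 108 bytes; a long scenario
# name blows through that budget.
UNIX_PATH_MAX = 108
print(len(monitor_sock), "bytes, limit", UNIX_PATH_MAX)
```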
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When we purge a containerized cluster we need to use the correct
playbook when redeploying the cluster.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
rhcs is based on jewel, so we need to set this var in the tests so that
the ceph-mgr role is skipped.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
I continue to have issues with extra-vars as json. The latest issue
being that the ceph_docker_image_tag config option included in the json
was being ignored. I can't find the root cause, but using the key/value
format seems to work.
I've also removed several options here to simplify the interface. We can
add those back if they become necessary.
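The change boils down to invoking the playbook with key/value extra vars, roughly like this (playbook and inventory names are placeholders; wrapped in subprocess only to keep the example runnable):

```python
import subprocess

# Key/value form of extra vars; playbook and inventory names are just
# placeholders for whatever the scenario deploys.
subprocess.run([
    "ansible-playbook", "-i", "hosts", "site-docker.yml",
    "-e", "ceph_docker_image_tag=latest",
], check=True)
```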
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
I'm removing this because when we use an 'rhcs' scenario we attempt to
set CEPH_STABLE=false as an environment variable. The issue is that,
because the value comes from an environment variable, it is always
treated as a string, and Ansible treats a non-empty string as boolean
True. I plan to set the ceph_stable value with our rhcs_setup.yml
playbook instead of relying on --extra-vars and environment variables.
Related ansible issue: https://github.com/ansible/ansible/issues/17193
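The underlying gotcha is plain string truthiness, which Python shows just as well:

```python
# Any non-empty string is truthy, which is why a "false" coming from the
# environment still behaves like True in a boolean check.
assert bool("false") is True
assert bool("") is False
```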
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
When testing this downstream it makes more sense for this scenario to be
named just 'cluster', because having 'centos7' in the name is misleading.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This allows us to have a copy of the existing testing scenarios with an
'rhcs-' prefix. We can use that in tox.ini to take the actions we need
to properly test Red Hat Ceph Storage.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This will prevent ansible from misreading any of these values. There
were failures with xenial deployments because the value set for
``ceph_rhcs`` was being treated as a boolean True even though I'd set
the value to false. This is because boolean values passed in with
--extra-vars must use the json format.
The formatting of the json is very important: you need a '\' to escape
the opening and closing braces of the json to keep tox happy. Also, each
line needs to end with '\' if it's a multi-line command.
Another thing to note is that if you want to use extra vars at the
command line to respond to a vars_prompt it must be in key/value format.
This is why we have a -e and a --extra-vars on the purge and update
tests.
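Putting the two requirements together, the resulting command line ends up with both flags, along these lines (the playbook name and the prompt variable are placeholders; wrapped in subprocess only to keep the example runnable):

```python
import subprocess

# JSON form for real booleans, key/value form to answer the vars_prompt.
# The playbook name and the prompt variable are placeholders.
subprocess.run([
    "ansible-playbook", "-i", "hosts", "purge-cluster.yml",
    "--extra-vars", '{"ceph_rhcs": false}',
    "-e", "ireallymeanit=yes",
], check=True)
```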
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
When using CEPH_DEV=true you'll need to set CEPH_STABLE=false so that
an upstream repo file doesn't get created.
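A tiny guard expressing that rule (the defaults assumed here are just for illustration):

```python
import os

# If CEPH_DEV is on, CEPH_STABLE has to be off, otherwise an upstream
# stable repo file would still be laid down.
if os.environ.get("CEPH_DEV", "false") == "true":
    assert os.environ.get("CEPH_STABLE", "true") == "false", \
        "set CEPH_STABLE=false when CEPH_DEV=true"
```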
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Use CEPH_STABLE_RELEASE to set the name of the ceph release you plan to
install. When testing an upgrade scenario you'll also need to set
UPGRADE_CEPH_STABLE_RELEASE.
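For example (the release names and the tox environment name are placeholders, not a recommendation):

```python
import os
import subprocess

# Release names and the tox environment name are examples only.
env = dict(os.environ,
           CEPH_STABLE_RELEASE="jewel",
           UPGRADE_CEPH_STABLE_RELEASE="luminous")
subprocess.run(["tox", "-e", "update-cluster"], env=env, check=True)
```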
Signed-off-by: Andrew Schoen <aschoen@redhat.com>