The installation process is now described as follows:
* you still have to choose a 'ceph_origin' installation method. The
origin can be 'repository' (adds a new repository), 'distro' (uses the
packages provided by your distribution's native repositories) or
'local' (only available on Red Hat systems; installs locally built
packages). The 'local' option is not well tested, so use it carefully
* if ceph_origin == 'repository' you will have to decide what kind of
repository you want to enable:
  - community: corresponds to the stable upstream/community version
  - enterprise: corresponds to the stable enterprise/downstream version
    (basically, you are a Red Hat customer)
  - dev: installs ceph from packages built from the GitHub development
    branches
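As a sketch only (the 'ceph_repository' variable name below is an
assumption about how the repository type is selected, not something
taken from this change), picking the community repository could look
like:
ansible-playbook -i hosts site.yml -e ceph_origin=repository -e ceph_repository=community
and using the distribution's own packages:
ansible-playbook -i hosts site.yml -e ceph_origin=distro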
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
There are only two main scenarios now:
* collocated: everything remains on the same device:
- data, db, wal for bluestore
- data and journal for filestore
* non-collocated: dedicated devices for some of the components
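As an illustrative sketch (the 'osd_scenario' variable name is an
assumption here, not taken from this change), switching between the two
could look like:
ansible-playbook -i hosts site.yml -e osd_scenario=collocated
ansible-playbook -i hosts site.yml -e osd_scenario=non-collocated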
Signed-off-by: Sébastien Han <seb@redhat.com>
If you use the 'dev' factor, the testing scenario will
use repos from shaman.ceph.com. You can define CEPH_DEV_BRANCH
and CEPH_DEV_SHA1 to specify which repo you'd like to test.
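For example, assuming a scenario name that carries the 'dev' factor
(the exact scenario name below is illustrative), testing a specific
branch and sha1 from shaman could look like:
CEPH_DEV_BRANCH=master CEPH_DEV_SHA1=latest tox -rve ansible2.2-dev-journal_collocation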
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Since we are hitting this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1324587
e.g.:
`failed: internal error: Monitor path /var/lib/libvirt/qemu/domain-bs-docker-cluster-dmcrypt-journal-collocation_mon0_1499294943_ba9faf7bf296533177f6/monitor.sock too big for destination`
and we can't upgrade libvirt in our CI for some reason, we need to
shorten the directory names in order to work around this issue.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When we purge a containerized cluster we need to use the correct
playbook when redeploying the cluster.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
rhcs is based on jewel, so we need to set this var in the tests so
that the ceph-mgr role is skipped.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
I continue to have issues with extra-vars as JSON. The latest issue is
that the ceph_docker_image_tag config option included in the JSON was
being ignored. I can't find the root cause, but using the key/value
format seems to work.
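For example, passing the tag in key/value format (the tag value here is
only illustrative):
ansible-playbook -i hosts site.yml --extra-vars "ceph_docker_image_tag=latest"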
I've also removed several options here to simplify the interface. We
can add them back if they become necessary.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
I'm removing this because when we use an 'rhcs' scenario we attempt to
set CEPH_STABLE=false as an environment variable. The issue is that,
because the value comes from an environment variable, it is always
treated as a string, and Ansible treats a non-empty string as a boolean
True. I plan to set the ceph_stable value with our rhcs_setup.yml
playbook instead of relying on --extra-vars and environment variables.
Related Ansible issue: https://github.com/ansible/ansible/issues/17193
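To illustrate the problem (these are not the exact tox invocations), a
value arriving as a plain string behaves differently from a real JSON
boolean:
ansible-playbook -i hosts site.yml --extra-vars "ceph_stable=false"       # "false" is a string, which Ansible evaluates as True
ansible-playbook -i hosts site.yml --extra-vars '{"ceph_stable": false}'  # JSON keeps it a real boolean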
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
When testing this downstream it makes more sense for this scenario to
be named just 'cluster', because having 'centos7' in the name is
misleading.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This allows us to have a copy of the existing testing scenarios with
an 'rhcs-' prefix. We can use that in tox.ini to take the actions we
need to properly test Red Hat Ceph Storage.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This will prevent Ansible from misreading any of these values. There
were failures with xenial deployments because the value set for
``ceph_rhcs`` was being treated as a boolean True even though I'd set
the value to false. This is because boolean values passed in with
--extra-vars must use the JSON format.
The formatting of the JSON is very important: you need a '\' to escape
the opening and closing braces of the JSON to keep tox happy, and each
line needs to end with '\' if it's a multi-line command.
Another thing to note is that if you want to use extra vars on the
command line to respond to a vars_prompt, they must be in key/value
format. This is why we have both a -e and a --extra-vars on the purge
and update tests.
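A minimal tox.ini sketch of the escaping described above (the paths,
variable names and values are illustrative only): the JSON braces are
escaped with '\', each continued line ends with '\', and the
vars_prompt answer is passed separately in key/value form with -e:
  commands=
    ansible-playbook -i {changedir}/hosts {toxinidir}/site.yml \
      -e confirmation=yes \
      --extra-vars '\{"ceph_rhcs": false, "ceph_stable": true\}'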
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
When using CEPH_DEV=true you'll need to set CEPH_STABLE=false so that
an upstream repo file doesn't get created.
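For example:
CEPH_DEV=true CEPH_STABLE=false tox -rve ansible2.2-journal_collocation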
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Use CEPH_STABLE_RELEASE to set the name of the ceph release you plan to
install. When testing an upgrade scenario you'll also need to set
UPGRADE_CEPH_STABLE_RELEASE.
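For example (the release names and the scenario name here are
illustrative assumptions):
CEPH_STABLE_RELEASE=jewel UPGRADE_CEPH_STABLE_RELEASE=luminous tox -rve ansible2.2-update_cluster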
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
To run tests that deploy shaman repos, set CEPH_DEV=true and optionally
use CEPH_DEV_BRANCH and CEPH_DEV_SHA1 to define which branch and sha1
to test. CEPH_DEV_BRANCH defaults to master and CEPH_DEV_SHA1 defaults
to latest.
For example, this would run the journal_collocation test with the latest
build of the master branch:
CEPH_DEV=true tox -rve ansible2.2-journal_collocation
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
For example, the following would run the journal collocation test and
would install ceph from the repos already on the nodes:
CEPH_ORIGIN=distro tox -rve ansible2.2-journal_collocation
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
The purge_dmcrypt scenario also tests centos7, so change this one to
xenial so we can have more test coverage.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This also removes the purge_cluster_collocated scenario, as it's no
longer needed because of purge_cluster.
Moving all the purge commands into their own section allows for easy
reuse when creating new purge scenarios.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>