- Update rolling update playbook to support containerized deployments
for mons, osds, mdss, and rgws
- Skip checking if existing cluster is running when performing a rolling
update
- Fixed a bug where the mds container failed to start because the admin
keyring was missing. The keyring was missing because it was not being
pushed from the mon host to the ansible host: it was not yet available
when the copy_configs.yml task include file ran. We now explicitly wait
for the admin keyring to be generated before continuing with the
copy_configs.yml task include file
- Skip pre_requisite.yml when running on an Atomic host. As a result it is
no longer necessary to explicitly skip tasks tagged with_pkg
- Add missing variables to all.docker.sample
- Misc. cleanup
Signed-off-by: Ivan Font <ifont@redhat.com>
This is done to prevent their use before definition in the osd scenario checks (it should be removed once a refactor has properly separated all the checks into the appropriate roles).
Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
Add backward compatibility for ceph-ansible deployments running the latest
code but using variables defined before commit 492518a2.
Signed-off-by: Sébastien Han <seb@redhat.com>
This RHCS version is now generally available. Default to using it.
Signed-off-by: Alfredo Deza <adeza@redhat.com>
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
Related: rhbz#1357631
By overriding the openstack_pools variable introduced by this commit, the
deployer may choose not to create some of the openstack pools, or to add
new pools which were not foreseen by ceph-ansible, e.g. for a gnocchi
storage backend.
For backwards compatibility, we keep the openstack_glance_pool,
openstack_cinder_pool, openstack_nova_pool and
openstack_cinder_backup_pool variables, although the user may now choose
to specify the pools directly as dictionary literals inside the
openstack_pools list.
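For illustration, a deployer could override the list in group_vars/all
roughly as follows (a sketch only: it assumes each pool is described by at
least a name and a pg_num, and the extra pool shown here is hypothetical):
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
  - name: metrics   # e.g. a gnocchi storage backend not foreseen by ceph-ansible
    pg_num: 64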
- Move mon_containerized_default_ceph_conf_with_kv config from ceph-mon
to ceph-common defaults as it's used in ceph-nfs
- Update the conditional so the ganesha config is generated when
mon_containerized_default_ceph_conf_with_kv is not set
- Revert change to store radosgw keyring using ansible_hostname on
ansible server so that ceph-nfs can find it
- Update ceph-ceph-nfs0-rgw-user container to use ansible_hostname
variable
Signed-off-by: Ivan Font <ivan.font@redhat.com>
Use the activation scenario instead of the full ceph_disk one; we already
have a task to prepare osds, so we just need to activate the device.
Working for me using vagrant :)
Signed-off-by: Sébastien Han <seb@redhat.com>
- Move fsal_rgw config to ceph-common, as it's shared with ceph-rgw
- Update all.docker.sample with NFS config
- Rename fsal_rgw to nfs_obj_gw and fsal_ceph to nfs_file_gw, because
the former names mean nothing to non-Ganesha developers (see the sketch
below)
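As a rough sketch of the rename in group_vars, the toggles would now read
(the values here are only illustrative):
nfs_obj_gw: true    # formerly fsal_rgw: export RGW buckets over NFS
nfs_file_gw: true   # formerly fsal_ceph: export CephFS over NFS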
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
- First install ceph into a directory with CMake:
cmake -DCMAKE_INSTALL_LIBEXECDIR=/usr/lib -DWITH_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX:PATH=/usr <ceph_src_dir> && make DESTDIR=<install_dir> install/strip
- Ceph-ansible copies over the install_dir
- The user can use rundep_installer.sh to install any runtime dependencies
that ceph needs onto the machine from rundep
* changed s/colocation/collocation/
* declare dmcrypt variable in ceph-common so the variables check does
not fail
Signed-off-by: Sébastien Han <seb@redhat.com>
Journal size is not mandatory anymore; a default of 5 GB is added. A
simple warning message will show up if the size is set to something below
5 GB.
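As an illustration, and assuming the journal size is expressed in MB as
the journal_size variable usually is, the new default corresponds to:
journal_size: 5120   # 5 GB; anything smaller triggers the warning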
Signed-off-by: Sébastien Han <seb@redhat.com>
Add the ability to use a custom repo, rather than just upstream, RHEL,
and distro. This allows ansible to be used for internal testing.
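A minimal sketch of what this could look like in group_vars/all; the
variable names ceph_custom and ceph_custom_repo are assumptions here, and
the URL is a placeholder:
ceph_custom: true
ceph_custom_repo: http://repo.example.com/ceph-custom-repo   # internal testing repository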
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
Ceph has the ability to export its filesystem via NFS using Ganesha.
Add a ceph-nfs role that will start Ganesha and export the Ceph
filesystems.
Note that, although support is going in to export RGW via NFS, this is
not working yet.
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
The changes here have nothing to do with removing yum-plugin-priorities;
they must be leftover from prior commits that did not run
generate_group_vars_sample.sh.
This causes ceph-ansible scripts to fail when targeting CentOS 7 machines.
Installation fails because newer ceph package dependencies provided by
ceph-release-{version}.noarch.rpm were overridden by older package
dependency versions in the default distribution repositories, because the
default distribution repositories have higher priority.
Docker makes it difficult to use images that are not on signed
registries. This is a problem for developers, who likely won't have
access to a registry with proper signed certificates.
This adds the ability to use any docker image present on the machine
running vagrant/ansible. The way it works is that the image in question is
exported locally, then sent to each target box and imported there.
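A rough Ansible sketch of that mechanism (the task names and /tmp paths
are illustrative, and ceph_docker_image is assumed to hold the image
name):
- name: export the image on the machine running vagrant/ansible
  local_action: command docker save -o /tmp/ceph_image.tar {{ ceph_docker_image }}
- name: send the exported image to the target box
  copy:
    src: /tmp/ceph_image.tar
    dest: /tmp/ceph_image.tar
- name: import the image on the target box
  command: docker load -i /tmp/ceph_image.tar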
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
By default, this role will create a ceph config file and get the admin
key. You can optionally add other users, keys and pools for your tests.
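A hypothetical sketch of such overrides in group_vars (the variable names
and the structure of the keys and pools entries are assumptions, not the
role's documented interface):
user_config: true
pools:
  - { name: test, pg_num: 32 }
keys:
  - { name: client.test, value: "ADD-KEYRING-HERE==", mon_cap: "allow r", osd_cap: "allow rwx pool=test" }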
Closes: #769
Signed-off-by: Sébastien Han <seb@redhat.com>
Add support to allow ceph-ansible to install and
configure Ceph on Debian on the ppc64le architecture.
Canonical has ppc64le Debian packages in Ubuntu distros and on the Ubuntu
Cloud Archive; both can be installed and configured using the 'distro' or
'uca' options in ceph-ansible when this patch is used.
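For example, in group_vars/all (ceph_origin: distro selects the distro
packages; the uca toggle name below is an assumption for illustration):
ceph_origin: distro
# or, to pull packages from the Ubuntu Cloud Archive:
# ceph_stable_uca: true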
Signed-off-by: Samuel Matzek <smatzek@us.ibm.com>
Since #461 we have had the ability to override ceph default options.
Previously we had to add a new line in the template and then another
variable as well; doing a PR for one option was such a pain. As a result,
we now have tons of options that we need to maintain across all the ceph
versions, yet another painful thing to do.
This commit removes all the ceph options so they are handled by ceph
directly. If you want to add a new option, feel free to use the
`ceph_conf_overrides` variable in your `group_vars/all`.
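For example, options that used to be template variables can now be
expressed directly per ceph.conf section (the option values below are only
illustrative):
ceph_conf_overrides:
  global:
    mon_osd_down_out_interval: 600
  osd:
    osd_mkfs_type: xfs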
Risks: for those who have been managing their ceph with ceph-ansible this
is not a trivial change, as it will trigger a change in your `ceph.conf`
and then restart all your ceph services. Moreover, if you made specific
tweaks as well, you should update the `ceph_conf_overrides` variable to
reflect them prior to running ansible.
To avoid the service restarts you need to know a bit of ansible; generally
the idea would be to run ansible against a dummy host to generate the
ceph.conf, then scp this file to all your ceph hosts, and you should be
good.
Closes: #693
Signed-off-by: Sébastien Han <seb@redhat.com>
This adds support for installing Ceph from the Ubuntu Cloud Archive. The
Ubuntu Cloud Archive provides newer releases of Ceph than the normal
Ubuntu distro repository.
Signed-off-by: Samuel Matzek <smatzek@us.ibm.com>
This will allow a user to conditionally install the ceph package on
rpm-based systems. Installing this package is not required or wanted in
versions past infernalis.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Introducing a new config option, `radosgw_civetweb_bind_ip`, which points
to `ansible_default_ipv4` by default. You can override this variable; use
ansible facts to set a proper value.
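For instance, to bind civetweb to the address of a specific interface via
a fact (the interface name is only an example):
radosgw_civetweb_bind_ip: "{{ ansible_eth1.ipv4.address }}"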
Signed-off-by: Sébastien Han <seb@redhat.com>
Instead of creating the RBD client socket path in three different places
in three different ways, this creates it once. Ceph on OpenStack users
have the option to customize the permissions of the RBD client
directories.
Fixes: #687
We now have the ability to set the `cluster` variable to a specific value
that will determine the name of the cluster.
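For example (the name is illustrative; 'ceph' remains the default):
cluster: mycluster   # the config file then becomes mycluster.conf instead of ceph.conf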
Signed-off-by: Sébastien Han <seb@redhat.com>
The ceph.conf file generation task in the ceph-common role was failing
because ansible could not find a definition of the variable
mon_containerized_deployment_with_kv.
This fix declares mon_containerized_deployment_with_kv under
ceph-common/defaults/main.yml, which fixes the issue.
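A sketch of the added default in roles/ceph-common/defaults/main.yml (the
default value of false is assumed):
mon_containerized_deployment_with_kv: false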
Signed-off-by: ksingh7 <karan.singh731987@gmail.com>
With Jewel comes a new store for Ceph objects: BlueStore. Adding an extra
scenario might seem like a useless duplication; however, the ultimate goal
is to remove the other roles later. Thus it is easier to add a new role
instead of modifying an existing one. Once we drop support for releases
older than Jewel we will just remove all the previous scenario files.
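A rough sketch of what enabling the new scenario could look like in the
osd group_vars, assuming it is toggled with a bluestore flag next to the
usual devices list (the flag name and devices are assumptions here):
bluestore: true
devices:
  - /dev/sdb
  - /dev/sdc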
Signed-off-by: Sébastien Han <seb@redhat.com>
Some versions (?) of libvirt provide a 'libvirt' group instead of
'libvirtd'. (Observed with libvirt-daemon-1.2.17-13.el7_2.2.x86_64.)
This makes the RBD client directory owner and group configurable to
allow for this.
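A minimal sketch, assuming the new knobs are named
rbd_client_directory_owner and rbd_client_directory_group:
rbd_client_directory_owner: qemu
rbd_client_directory_group: libvirt   # instead of the usual libvirtd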