As Ali Maredia explains (in
https://github.com/ceph/ceph-ansible/issues/1907#issuecomment-331200448),
since the Ceph RGW/NFS gateway (with the nfs-ganesha RGW FSAL) is not
supported for any stable Ceph release prior to Luminous, the
nfs_obj_gw variable does not serve any real purpose in this
branch. Thus, remove it along with all references to it.
As Ken Dreyer explains (in
https://github.com/ceph/ceph-ansible/issues/1907#issuecomment-330364084),
nfs-ganesha 2.5 requires newer features than Ceph Jewel
supports. Since this branch isn't expected to support Luminous, it
follows that clusters deployed from this branch won't ever support an
RGW that can be used with the nfs-ganesha RGW FSAL.
Consequently, remove support for the RGW FSAL from the ganesha.conf
template in this branch.
Add the ceph_nfs_ceph_user variable to nfss.yml.sample too (not just
all.yml.sample).
This was properly included in ada2f147f5
in master, but was missing from
72e32ae0bf in stable-2.2.
By analogy with ceph_nfs_rgw_user, we should be able to define a user
with which the nfs-ganesha Ceph FSAL connects to the cluster.
Introduce a ceph_nfs_ceph_user variable, setting its default to
"admin" (which preserves the prior behavior of always connecting as
client.admin).
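As a minimal sketch (the surrounding EXPORT block and other settings
are elided, and the exact layout in the role's template may differ),
the Ceph FSAL section of ganesha.conf would then consume the variable
roughly like this:

    FSAL {
        Name = CEPH;
        User_Id = "{{ ceph_nfs_ceph_user }}";
    }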
Backport of #1911.
The gluster/nfs-ganesha PPA does not contain any nfs-ganesha builds
with the Ceph FSAL enabled, so adding that PPA is fairly useless.
Set the correct PPA (gluster/nfs-ganesha-2.5), and also correct the
PPA name for libntirpc.
Finally, install the correct package (nfs-ganesha-ceph, not
nfs-ganesha-fsal).
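For illustration, the corrected tasks would look roughly like this
(task names and surrounding structure are assumptions, not the actual
role content):

    - name: add the nfs-ganesha 2.5 ppa
      apt_repository:
        repo: ppa:gluster/nfs-ganesha-2.5

    - name: install nfs-ganesha with the ceph fsal
      apt:
        name: nfs-ganesha-ceph
        state: present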
Fixes #1905.
We ship ceph-iscsi-gw in a separate repo downstream and do not package
it with ceph-ansible. Including the play for ceph-iscsi-gw in
site.yml.sample makes the playbook fail when using the downstream
packages.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1454945
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit dd193df4b9)
The fact ['ansible_$interface']['ipv4'] is a dictionary, whereas
['ansible_$interface']['ipv6'] is a list. If we use
ansible_default_ipv6|ipv4 instead, it is always a dictionary, which
allows us to get the ipv6 and ipv4 address without adding more
complexity to the template.
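A minimal template sketch (assuming the ip_version variable used
elsewhere in this branch) showing that both cases now read the same
way:

    {% if ip_version == 'ipv4' %}
    {{ ansible_default_ipv4['address'] }}
    {% else %}
    {{ ansible_default_ipv6['address'] }}
    {% endif %}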
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit e8187f6a0f)
When we purge a containerized cluster we need to use the correct
playbook when redeploying the cluster.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 22541a9c9a)
Conflicts:
tox.ini
We need to include ceph_docker_registry when removing containers and
images, because if we don't, Docker will assume docker.io, which is
not always where the image originated from, causing the playbook to
fail.
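A sketch of the kind of task affected (the image and tag variable
names follow the ones this repository already defines; the task itself
is illustrative):

    - name: remove the ceph container image
      command: >
        docker rmi
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}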
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 59992c54cc)
We only need to purge packages installed by pip on Debian systems.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit f7677e4393)
If we're purging a containerized cluster that did not use the
raw_multi_journal OSD scenario, then raw_journal_devices will not be
defined, which causes the playbook to fail.
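Guarding the lookup with a default sidesteps the undefined variable
(an illustrative task, not the literal purge code):

    - name: zap raw journal devices, if any were used
      command: ceph-disk zap {{ item }}
      with_items: "{{ raw_journal_devices | default([]) }}"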
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1455187
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 8e322d4825)
Ansible evaluates the 'with_items' before the 'when', so if the
inventory does not have the group declared, the task fails. To fix
this, we fall back to an empty array to keep with_items happy and then
let the 'when' do the filtering.
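A minimal sketch of the pattern (the group name is illustrative):

    - name: act on rgw hosts, if the group exists
      debug:
        msg: "{{ item }}"
      with_items: "{{ groups[rgw_group_name] | default([]) }}"
      when: rgw_group_name in groups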
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 05331a2634)
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we delegate the set/unset of an OSD flag to a monitor node,
but then try to exec into an OSD container.
Solution: use the right container name.
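Sketch of the corrected delegation (the monitor container naming
convention shown here is an assumption):

    - name: set the noout flag via the monitor container
      command: >
        docker exec
        ceph-mon-{{ hostvars[groups[mon_group_name][0]]['ansible_hostname'] }}
        ceph osd set noout
      delegate_to: "{{ groups[mon_group_name][0] }}"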
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 90389864d8)
Signed-off-by: Sébastien Han <seb@redhat.com>
We only check for everything except 'distro' because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 468dc06bcd)
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we could end up in a situation where we would install a
package on a machine that does not have the right repo enabled.
Because the condition was an OR, we weren't pinning a particular set
of hosts, just a condition: if someone sets ceph_origin == "distro",
this would try to install OSD packages on monitors.
Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND only then evaluate one of the
installation conditions.
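In other words, the guard must pin the host group before testing the
origin; an illustrative task (package name and origin values are
examples):

    - name: install ceph osd packages
      package:
        name: ceph-osd
        state: present
      when:
        - osd_group_name in group_names
        - ceph_origin == 'distro' or ceph_origin == 'upstream'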
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f7e9585a2c)
Signed-off-by: Sébastien Han <seb@redhat.com>
Clarify in the error message that only RHEL versions >= 7.3 are
supported.
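The check itself would look roughly like this (a sketch; the task in
the role may be worded differently):

    - name: fail on unsupported red hat enterprise linux versions
      fail:
        msg: "only RHEL versions >= 7.3 are supported"
      when:
        - ansible_distribution == 'RedHat'
        - ansible_distribution_version | version_compare('7.3', '<')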
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452431
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 8ad503b248)
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: deploying a containerized Ceph cluster with ipv6 fails.
Solution: do not hardcode ipv4 when bootstrapping the container.
Now set ip_version: ipv6 to get a containerized cluster deployed with
ipv6.
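Concretely, that is a single group_vars setting, e.g. in
group_vars/all.yml:

    ip_version: ipv6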
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451786
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c7aae7f965)
Signed-off-by: Sébastien Han <seb@redhat.com>
Already documented in the Red Hat Ceph Storage 2 Installation Guide
for Red Hat Enterprise Linux, but not here.
Signed-off-by: Florian Klink <flokli@flokli.de>
(cherry picked from commit 10b91661ce)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Comments inside this file must be placed BEFORE the option, NOT after
it; otherwise the comment is interpreted as part of that option's
value.
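A generic illustration (the option name is made up):

    # wrong: the trailing comment is parsed as part of the value
    some_option = true # enable the feature

    # right: the comment sits on its own line, before the option
    # enable the feature
    some_option = true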
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit ece9c14a33)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
If the comment is put after the option on the same line, it is
interpreted as part of the value, so we need to move it before the
option on a dedicated line.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 9f2c21972d)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Problem: the meta declaration just includes the role; it does nothing
with the group_vars. For Ansible to use files defined in group_vars/,
the name of the file must match a host group, like mons, osds, etc.
There is no docker-common group, so the variables defined there are
never used, as proved by
https://bugzilla.redhat.com/show_bug.cgi?id=1447179 and the Ansible
documentation.
Solution: bring in the ability to merge role files, so that by default
ceph-docker-common and ceph-common both go into all.yml.sample.
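For reference, group_vars files are only picked up when the file name
matches an inventory group:

    group_vars/
        all.yml   # applies to every host
        mons.yml  # applies to hosts in the [mons] group
        osds.yml  # applies to hosts in the [osds] group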
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1447179
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit a8c75c3bc9)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Use an indent of 2 across the entire script.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 7fddcad8ea)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
At some point, in a commit changing `roles/ceph-rgw/defaults/main.yml`,
we forgot to run `generate_group_vars_sample.sh`.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 2106745343)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>