The `osd disk activate` step can't find the device in /dev/mapper/* because of a
refactor that is missing from the stable-2.2 branch.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
b278d7c introduced an error for containerized deployments on `stable-2.2`:
it removed the copy_configs.yml file but not the include in `main.yml`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
With all references to the nfs-ganesha RGW FSAL removed from this
branch, the remaining ceph_nfs_rgw_* variables are no longer used
anywhere. Remove them in order to avoid potential user confusion.
As Ali Maredia explains (in
https://github.com/ceph/ceph-ansible/issues/1907#issuecomment-331200448),
since the Ceph RGW/NFS gateway (with the nfs-ganesha RGW FSAL) is not
supported for any stable Ceph release prior to Luminous, the
nfs_obj_gw variable does not serve any real purpose in this
branch. Thus, remove it along with all references that use it.
As Ken Dreyer explains (in
https://github.com/ceph/ceph-ansible/issues/1907#issuecomment-330364084),
nfs-ganesha 2.5 requires newer features than what Ceph Jewel
supports. Since this branch isn't expected to support Luminous, it
follows that clusters deployed from this branch won't ever support an
RGW that can be used with the nfs-ganesha RGW FSAL.
Consequently, remove support for the RGW FSAL from the ganesha.conf
template in this branch.
In analogy to ceph_nfs_rgw_user, we should be able to define a user
with which the nfs-ganesha Ceph FSAL connects to the cluster.
Introduce a ceph_nfs_ceph_user, setting its default to "admin" (which
preserves the prior behavior of always connecting as client.admin).
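A minimal sketch of the resulting default, assuming it lives in the ceph-nfs role defaults (the file path is illustrative; only the variable name and its default come from this change):

    # roles/ceph-nfs/defaults/main.yml (illustrative location)
    # the nfs-ganesha Ceph FSAL connects to the cluster as client.{{ ceph_nfs_ceph_user }}
    ceph_nfs_ceph_user: "admin"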
Backport of #1911.
The gluster/nfs-ganesha PPA does not contain any nfs-ganesha builds
with the Ceph FSAL enabled, so adding that PPA is fairly useless.
Set the correct PPA (gluster/nfs-ganesha-2.5), and also correct the
PPA name for libntirpc.
Finally, install the correct package (nfs-ganesha-ceph not
nfs-ganesha-fsal).
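A hedged sketch of the corrected tasks (task names and module layout are illustrative; the PPA and package names are the ones described above):

    - name: add the gluster/nfs-ganesha-2.5 ppa
      apt_repository:
        repo: "ppa:gluster/nfs-ganesha-2.5"
        state: present

    - name: install nfs-ganesha with the ceph fsal
      apt:
        name: nfs-ganesha-ceph
        state: present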
Fixes #1905.
The fact ['ansible_$interface']['ipv4'] is a dictionary, whereas
['ansible_$interface']['ipv6'] is a list. If we use
ansible_default_ipv6|ipv4 instead, it is always a dictionary, which allows us
to get the ipv6 or ipv4 address without adding more complexity to the
template.
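For illustration, a minimal sketch of the lookup this enables (the `_monitor_address` fact name is made up; `ip_version` is the existing ceph-ansible variable):

    # ansible_default_ipv4 and ansible_default_ipv6 are both dictionaries with
    # an 'address' key, so the same expression works for either IP version
    - set_fact:
        _monitor_address: "{{ ansible_default_ipv6.address if ip_version == 'ipv6' else ansible_default_ipv4.address }}"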
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit e8187f6a0f)
Ansible evaluates the 'with_items' before the 'when', so if the inventory
does not have the group declared it'll fail. To fix this, we default to an
empty array to keep the with_items happy and then let the 'when' do its
job.
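A minimal sketch of the pattern, with the task body and group variable chosen for illustration:

    # with_items is evaluated first, so default to an empty list when the
    # group is not declared in the inventory; the 'when' then skips the task
    - name: example task over an optional group
      debug:
        msg: "{{ item }}"
      with_items: "{{ groups[mds_group_name] | default([]) }}"
      when: mds_group_name in groups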
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 05331a2634)
Signed-off-by: Sébastien Han <seb@redhat.com>
We only check for everything except 'distro' because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 468dc06bcd)
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we could end up in a situation where we would install a package
on a machine that does not have the right repo enabled. Because the
conditions were combined with OR, we weren't pinning a particular host but
just a condition. Let's say someone sets 'ceph_origin == "distro"': this
would try to install OSD packages on Monitors.
Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND then apply one of the installation
conditions.
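A hedged sketch of the shape of the fix (the package name and the installation condition are illustrative):

    # pin to the group first, then apply the installation condition;
    # a 'when' list is evaluated as a logical AND
    - name: install ceph osd package
      package:
        name: ceph-osd
        state: present
      when:
        - osd_group_name in group_names
        - ceph_origin == 'distro'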
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f7e9585a2c)
Signed-off-by: Sébastien Han <seb@redhat.com>
Clarify in the error message that only RHEL versions >= 7.3 are
supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452431
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 8ad503b248)
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: deploying a containerized Ceph cluster with ipv6 fails.
Solution: do not hardcode ipv4 when bootstrapping the container.
Now use ip_version: ipv6 to get a containerized cluster deployed with
ipv6.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451786
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit c7aae7f965)
Signed-off-by: Sébastien Han <seb@redhat.com>
Already documented in the Red Hat Ceph Storage 2 Installation Guide
for Red Hat Enterprise Linux, but not here.
Signed-off-by: Florian Klink <flokli@flokli.de>
(cherry picked from commit 10b91661ce)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
"rgw override bucket index max shards" and
"rgw bucket default quota max objects" were in the
client section of the ceph.conf and were therefore not being
applied; this commit moves them to the global section.
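For reference, a sketch of the equivalent placement expressed through `ceph_conf_overrides` (the numeric values are placeholders, not defaults taken from this commit):

    ceph_conf_overrides:
      global:
        rgw override bucket index max shards: 16
        rgw bucket default quota max objects: 1638400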
Resolves: bz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit 2aeb3a4957)
Change the civetweb_num_thread default to 100.
Add the capability to override the number of PGs for
the RGW pools.
Add ceph.conf variables to the ceph.conf.j2 template so users can enable
a default bucket object quota of their choosing.
Resolves: rhbz#1437173
Resolves: rhbz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Prior to this change, Ansible was only checking for the existence of the
package; now, if upgrade_ceph_packages is true, it means we are
performing an upgrade.
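A minimal sketch of the idea, assuming the generic `package` module (the task name is illustrative; `upgrade_ceph_packages` is the variable from this change):

    # 'latest' forces an upgrade when upgrade_ceph_packages is true,
    # 'present' only ensures the package is installed
    - name: install or upgrade ceph
      package:
        name: ceph
        state: "{{ 'latest' if upgrade_ceph_packages | bool else 'present' }}"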
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1442016
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 84d96be197)
Restore the check_socket that was removed by `5bec62b`.
This commit also improves the logging in the `restart_*_daemon.sh` scripts.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
That task includes logic for upstream installs that we do not want to
run when deploying RHCS.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
(cherry picked from commit 37d38b122b)
Prior to this change we were deploying a monitor using its FQDN but
we were checking its state and performing actions on it using its
shortname.
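A hedged sketch of one way to keep the two consistent (the `monitor_name` fact and the `mon_use_fqdn` toggle are used here for illustration):

    # compute the monitor name once and reuse it for deployment, state checks
    # and any action performed on the monitor
    - set_fact:
        monitor_name: "{{ ansible_fqdn if mon_use_fqdn | bool else ansible_hostname }}"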
Signed-off-by: Sébastien Han <seb@redhat.com>
The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to
provide additional monitoring and interfaces to external monitoring and
management systems.
Only works as of the Kraken release.
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-create-keys unit file was removed here:
* 8bcb4646b6
* dc5fe8d415
As a consequence the systemctl preset command now fails to run since the
unit does not exist anymore. Due to the redirection to /dev/null we
don't know what's happening.
Ultimately the mon unit doesn't get enabled and the mon service won't
start after reboot.
Removing the old/non-existent unit makes the command succeed now.
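A sketch of the corrected call, with the removed unit dropped and the output no longer hidden behind a redirect (the unit name pattern is illustrative):

    # only preset units that still exist; ceph-create-keys@ has been removed upstream
    - name: enable the ceph mon unit via systemd preset
      command: systemctl preset ceph-mon@{{ ansible_hostname }}.service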
ceph fix: https://github.com/ceph/ceph/pull/14226
Signed-off-by: WingkaiHo <sanguosfiang@163.com>
Co-Authored-By: Sébastien Han <seb@redhat.com>
Until now, only the first task was executed.
The idea here is to use the `listen` statement to be able to notify multiple
handlers and regroup all of them in `./handlers/main.yml`, since notifying an
included handler task is not possible.
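A minimal sketch of the `listen` pattern in `./handlers/main.yml` (handler names and the restart script path are illustrative):

    # several handler tasks subscribe to the same topic, so a single
    # 'notify: restart ceph osds' now triggers all of them in order
    - name: copy osd restart script
      template:
        src: restart_osd_daemon.sh.j2
        dest: /tmp/restart_osd_daemon.sh
        mode: "0750"
      listen: "restart ceph osds"

    - name: restart ceph osd daemon(s)
      command: /usr/bin/env bash /tmp/restart_osd_daemon.sh
      listen: "restart ceph osds"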
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
As reported in
https://github.com/ceph/ceph-ansible/issues/1403, when devices are held
by lvm and `osd_auto_discovery` is set to true, it's not enough to check
for a partition count of 0, since Ansible reports a partition count of 0
for lvm-held devices as well.
This patch also looks for 'holders', which in the case of lvm corresponds
to the name of the PV. Now we also check for holders = 0.
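A hedged sketch of the combined check (the surrounding task is illustrative; the fact layout follows `ansible_devices`):

    # an lvm PV created on a whole disk has no partitions but does have
    # holders, so require both to be empty before auto-selecting the disk
    - set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      with_dict: "{{ ansible_devices }}"
      when:
        - osd_auto_discovery | bool
        - item.value.partitions | length == 0
        - item.value.holders | length == 0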
Fixes: #1403
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: too many different commands to do the same thing. The 'cut'
command in infrastructure-playbooks/purge-cluster.yml was also wrong.
This sed command from osixia in ceph-docker
https://github.com/ceph/ceph-docker/pull/580/ addresses all the
scenarios.
Signed-off-by: Sébastien Han <seb@redhat.com>
ntp is still installed even if ntp_service_enabled is set to false.
That could be a problem if time synchronization is managed by
something other than ceph-ansible or if you want to use a different NTP
implementation, as suggested in #1354.
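A minimal sketch of gating the package on the same toggle (`ntp_service_enabled` is the existing variable; the task shape is illustrative):

    - name: install ntp
      package:
        name: ntp
        state: present
      when: ntp_service_enabled | bool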
Fixes: #1354
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Guits <gabrioux@redhat.com>
If a group of hosts is empty (for instance 'mdss', in the case of a
deployment without any mds node), the playbook will fail when trying
to restart services with a `"'dict object' has no attribute u'XXX'"` error.
The idea here is to force the `with_items` statements in all included handler
tasks to get at least an empty array.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
sysctl tuning should live in the sysctl.d directory. This creates
a separation between the values set specifically for ceph and the
values set by the operator.
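A hedged sketch of writing the Ceph-specific tunables to their own file under /etc/sysctl.d (the file name and the `os_tuning_params` variable are assumed for illustration):

    - name: apply ceph kernel tuning in its own sysctl.d file
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_file: /etc/sysctl.d/ceph-tuning.conf
        state: present
        sysctl_set: yes
      with_items: "{{ os_tuning_params }}"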
Signed-off-by: Tyler Brekke <tbrekke@redhat.com>