There should be no need to use sudo when writing or using these files.
It creates an issue when the user running ansible-playbook does not
have sudo privileges.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Some deployments can't copy infrastructure playbooks outside of the
infrastructure-playbooks directory, so they use ANSIBLE_ROLES_PATH to
overcome this. However some roles have 'playbook_dir' hardcoded, which
results in a wrong path since the execution comes from
infrastructure-playbooks. Basically a role triggered by a playbook
from infrastructure-playbooks believes that the roles live in
infrastructure-playbooks/roles. This commit fixes that.
Signed-off-by: Sébastien Han <seb@redhat.com>
This will give us more flexibility and the possibility to deploy a client node
for an external ceph-cluster.
related BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1469426
Fixes: #1670
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Before this commit we were forcing ipv4, which might not be available.
Now setting ip_version to ipv4 or ipv6 gives you the right support.
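For example, in group_vars/all.yml (value illustrative):
```
ip_version: ipv6
```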
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1484189
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit eases the use of the
infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
playbook. We basically run it with a couple of pre-tasks and then we let
the playbook run the docker roles.
It obviously expects to have the proper variables configured in order
to work.
Signed-off-by: Sébastien Han <seb@redhat.com>
The lvm_volumes variable is now a list of dictionaries that represent
each OSD you'd like to deploy using ceph-volume. Each dictionary must
have the following keys: data, journal and data_vg. Each dictionary
can also optionally provide a journal_vg key.
The 'data' key represents the lv name used for the OSD and the 'data_vg'
key is the vg name that the given lv resides on. The 'journal' key is
either an lv, device or partition. The 'journal_vg' key is optional and
must be the vg name for the journal lv if given. This key is mainly used
for purging of the journal lv if purge-cluster.yml is run.
For example:
lvm_volumes:
  - data: data_lv1
    journal: journal_lv1
    data_vg: vg1
    journal_vg: vg2
  - data: data_lv2
    journal: /dev/sdc
    data_vg: vg1
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Resolves issue: Multiple RGW Ceph.conf Issue #1258
In a multi-RGW setup, the RGW sections in ceph.conf contained an
identical bind IP on the civetweb line. This modification fixes that
issue and puts the right IP for each RGW.
Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>
Modified ceph-defaults and ran generate_group_vars_sample.sh.
group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are
not part of the changes, but they got modified when
generate_group_vars_sample.sh was run to generate
group_vars/all.yml.sample.
Uncommented the added variables in ceph-defaults.
Updated tests by adding a value for radosgw_interface.
Added radosgw_interface to the centos cluster tests.
Modified the ceph-rgw role, rebased and ran
generate_group_vars_sample.sh.
In the ceph-rgw role removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh and the files below got modified.
The rbd-mirror daemon will be HA under luminous and new daemon health
features require a way to uniquely identify rbd-mirror instances.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
To be properly evaluated, the "skipped" conditions must always come
first in the list of conditions; otherwise the other conditions are
evaluated first and make the task fail.
Closes: https://github.com/ceph/ceph-ansible/issues/1733
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: task "check for a ceph socket in containerized deployment" will
be skipped if we are not an OSD.
with_items are still evaluated before when conditions so if the task was
skipped the dict will be empty and then fail.
Adding a "not skipped" condition skips the execution of the task.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482061
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit forces ceph-common to be installed early in the deployment
on nodes.
For instance, ceph-rbdmirror doesn't have the CLI installed while it is
needed for some tasks which use it to set facts.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
ceph services can fail to start under certain circumstances (for
example, when running in a container) because the default systemd
service configuration causes namespace issues.
To work around this we can override the system service settings by
placing an overrides file in the ceph-<service>@.service.d directory.
This can be generic so as to allow any potential changes required to
the ceph-<service> service files.
The overrides file is only set up when the
"ceph_<service>_systemd_overrides" config_template override variable is
specified.
The available service systemd override files are as follows:
ceph_mds_systemd_overrides
ceph_mgr_systemd_overrides
ceph_mon_systemd_overrides
ceph_osd_systemd_overrides
ceph_rbd_mirror_systemd_overrides
ceph_rgw_systemd_overrides
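For example, a hypothetical override relaxing PrivateDevices on the OSD
units (the structure follows systemd unit syntax, as consumed by the
config_template plugin):
```
ceph_osd_systemd_overrides:
  Service:
    PrivateDevices: false
```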
The original fix to issue #1755 only set the permissions on
the monitors to which the key was copied, but not the original
monitor where the key was created. Thus, we use a separate task
to set the permission of the key.
The openstack_keys structure now supports a key called mode
whose value is a string that one could pass to chmod to set
the mode of the key file. The ansible file module applies the
mode to all openstack keys with this property.
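For example (key name illustrative, other key properties elided):
```
openstack_keys:
  - name: client.glance
    mode: "0600"
```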
Fixes: #1755
This is under the MDS role instead of the mon role because that role
does not create the filesystem under docker.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
This scenario will create OSDs using ceph-volume and is only available
in ceph releases greater than Luminous.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
There are only two main scenarios now:
* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components
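Assuming the osd_scenario/dedicated_devices variable names of this
rework, the two scenarios sketch out as (device paths illustrative,
alternatives separated as YAML documents):
```
# collocated: everything on the same device
osd_scenario: collocated
devices:
  - /dev/sdb
---
# non-collocated: dedicated device(s) for journal/db/wal
osd_scenario: non-collocated
devices:
  - /dev/sdb
dedicated_devices:
  - /dev/sdc
```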
Signed-off-by: Sébastien Han <seb@redhat.com>
In containerized deployment, if you try to update your `ceph.conf` file
it won't actually be updated on your nodes because it is overwritten by
the copy of the file present in your fetch directory.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Move `fsid`,`monitor_name`,`docker_exec_cmd` and `ceph_release` set_fact
to `ceph-defaults` role.
This allows reusing these facts without having to play `ceph-common`
or `ceph-docker-common`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This will give us more flexibility and avoid a lot of useless `when`
conditions skipping all tasks from a non-desired role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a new role `ceph-defaults`.
This role aims to handle all default vars for `ceph-docker-common` and
`ceph-common` and set basic facts (e.g. `fsid`).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Merge `ceph-docker-common` and `ceph-common` defaults vars in
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We don't want to have heterogeneous ceph.conf files anymore and believe
that we should have the right section for the running daemon.
If we don't do this and use profiles, e.g. rgw, we will get a new rgw
section on some of the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
It is mandatory now to set the Ceph version you want to install, e.g:
ceph_stable_release: luminous
To find the release names, you can look at the release notes doc:
http://docs.ceph.com/docs/master/release-notes/
Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-disk is responsible for enabling the unit file if needed. Actually,
since https://github.com/ceph/ceph/pull/12241 it seems that it's not
even needed. On a restart event, udev rules will be triggered and
they will 'ceph-disk activate' the device too, so the 'enabled' step is
not needed.
Closes: https://github.com/ceph/ceph-ansible/issues/1142
Signed-off-by: Sébastien Han <seb@redhat.com>
OSDs get started by ceph-disk before the ceph.conf file is written
with a crush location. That results in a crush map without configured
crush location.
To prevent this, we have to restart the OSDs during the initial setup
after the crush location was added to the ceph.conf file.
We have multiple issues with ceph-disk's cli with bluestore and Ceph
releases. This is mainly due to cli changes with Luminous. Luminous
introduced --bluestore and --filestore options which do not
exist on releases older than Luminous. The default store being
bluestore on Luminous, simply checking for the store is not enough so we
have to build a specific command line for ceph-disk depending on the
Ceph version we are running and the desired osd_store.
Signed-off-by: Sébastien Han <seb@redhat.com>
The keys and openstack_keys structures now support an optional
key called acls whose value is a list of strings one could pass
to setfacl. The ansible ACL module applies the ACLs to all
openstack keys with this property.
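For example (key name and ACL string illustrative):
```
openstack_keys:
  - name: client.cinder
    acls:
      - "u:nova:r--"
```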
Fixes: #1688
Remove `rgw enable static website` and `rgw enable usage log` from
ceph.conf and make them usable with ceph_conf_overrides as profiles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit introduces a new directory called "profiles" which
contains some set of variables for a particular use case. These profiles
provide guidance for certain scenarios such as:
* configuring rgw with keystone v3
Signed-off-by: Sébastien Han <seb@redhat.com>
According to Alfredo, this was used for gitbuilders. Right now shaman/chacra
dev repos are unsigned.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Move condition at task level and not at include level to make `fsid`
variable available for all roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Some tasks fetch files to `{{ fetch_directory }}/docker_mon_files` and
then try to copy from `{{ fetch_directory }}/{{ fsid }}`. That causes
the playbook to fail.
Fixes: #1683
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
To keep consistency between `{{ openstack_keys }}` and `{{ keys }}`
respectively in `ceph-mon` and `ceph-client` roles.
This commit also adds the possibility to set mds caps.
Fixes: #1680
Co-Authored-by: John Fulton <johfulto@redhat.com>
Co-Authored-by: Giulio Fidente <gfidente@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ceph.conf template needs to look for the value of monitor_interface
in hostvars[host] because there might be different values set per host.
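A simplified sketch of the lookup (the real template also handles
ip_version and address derivation):
```
{% for host in groups[mon_group_name] %}
{% set interface = 'ansible_' + hostvars[host]['monitor_interface'] %}
mon addr = {{ hostvars[host][interface]['ipv4']['address'] }}
{% endfor %}
```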
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
In Luminous, ceph-disk defaults to bluestore so all our scenarios are
using bluestore, we need to force testing both.
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
In addition to ceph/ceph-docker@69d9aa6, this explains how to deploy a
containerized cluster with a custom admin secret.
Basically, just need to pass the `admin_secret` defined in your
`group_vars/all.yml` to the `ceph_mon_docker_extra_env` variable.
E.g.:
`ceph_mon_docker_extra_env: -e CLUSTER={{ cluster }} -e FSID={{ fsid }}
-e MON_NAME={{ monitor_name }} -e ADMIN_SECRET={{ admin_secret }}`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Fail with a sane message if the devices or raw_journal_devices variables
are strings instead of lists during manual device assignment.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
rsync is required by the Ansible synchronize module. Ensure
it is installed when local installation is selected.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Add a new parameter `admin_secret` that allows deploying a ceph cluster
with a custom admin secret.
Fix: #1630
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface to store objects but the scenarios more or
less remain the same.
If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for rocksdb db, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
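The three combinations sketch out as follows (device paths illustrative,
alternatives separated as YAML documents):
```
# DB and WAL collocated with data
osd_objectstore: bluestore
journal_collocation: true
---
# dedicated device for the rocksdb DB
osd_objectstore: bluestore
raw_multi_journal: true
raw_journal_devices:
  - /dev/sdb
---
# dedicated DB device plus a dedicated WAL device
osd_objectstore: bluestore
raw_multi_journal: true
raw_journal_devices:
  - /dev/sdb
bluestore_wal_devices:
  - /dev/sdc
```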
Signed-off-by: Sébastien Han <seb@redhat.com>
There is no need for 2 variables to enable bluestore, prior to this
patch one had to do the following to activate bluestore:
osd_objectstore: bluestore
bluestore: true
Now you just need to set `osd_objectstore: bluestore`.
Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
remove `ceph_mon_docker_interface` and use `monitor_interface` instead
for both containerized and non-containerized deployment.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Some variables are missing from ceph-docker-common role since the
include of check_mandatory_vars.yml has been re-added in the ceph-mon
role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The check regarding the networking scenario configuration has been
moved from ceph-common to ceph-mon in 1de8176 but the include was not re-added
in 189f4fe
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
e8187f6 does not fix the ipv6 as expected since `ansible_default_*` are
filled with the IP address carried by the network interface used by the
default gateway route. Moreover, it assumes that the MON_IP address will
be this IP address which is not always the case.
We need to keep using the previous fact but add some intelligence in the
template to determine how to retrieve the ipv4|ipv6 address since the path
to the fact in `hostvars` is not the same according to ipv4 vs ipv6 case.
Fix: 1569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add an extra variable to the openstack pools, which creates them with
defined rules. This will allow placing different pools on, e.g.,
different types of disks.
This commit will also set a new default rule when defined and move
the rbd pool to the new rule.
Somehow the shell module will return an error if the command line is
not directly next to it.
Also fixed the import to use the right path.
Signed-off-by: Sébastien Han <seb@redhat.com>
Followup on https://github.com/ceph/ceph-ansible/pull/1469 where we
merged most of the container code from roles/ceph-*/task/docker/*.yml
into roles/ceph-docker-common/tasks/
It seems that we forgot to remove the original files.
Signed-off-by: Sébastien Han <seb@redhat.com>
The new tests in the PGs check are no longer working on distributions
where /bin/sh isn't linked to /bin/bash.
Fix: #1619
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
OpenStack's Gnocchi service expects to have a pool called "metrics".
This change adds "metrics" to the list of `openstack_pools` and
creates a corresponding key. It is only run if the user sets
`openstack_config: true`.
The current handler only restarts one OSD on each OSD server. After
the first one the handler stops, no matter what results the checks had.
Co-Authored-By: Gaudenz Steinlin (@gaudenz)
Remove "osd mkfs type" and the other pre-Bluestore parameters from the
generated ceph.conf so that disk activation on OSDs will work. The
current default xfs config results in a failed deployment and
incorrect partition metadata.
For a newly created cluster the command "ceph --cluster {{ cluster }}
osd pool get rbd size" does not respond properly.
We only want to check if the rbd pool exists, so we now use an ls |
grep approach.
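A sketch of the new check (exact task wording may differ):
```
- name: check if rbd pool exists
  shell: ceph --cluster {{ cluster }} osd pool ls | grep -q '^rbd$'
  register: rbd_pool_exists
  changed_when: false
  failed_when: false
```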
Closes: https://github.com/ceph/ceph-ansible/issues/1547
Signed-off-by: Sébastien Han <seb@redhat.com>
Rewrite the check_pgs by using json parsing instead of complex regexp to
parse the `ceph -s` output.
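A sketch of the json-based approach (field path assumed from the
`ceph -s -f json` output):
```
- name: get cluster status
  command: ceph --cluster {{ cluster }} -s -f json
  register: ceph_status
  changed_when: false

- name: parse the number of PGs
  set_fact:
    num_pgs: "{{ (ceph_status.stdout | from_json).pgmap.num_pgs }}"
```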
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a default value for `ceph_docker_on_openstack` to avoid a
conditional check error for the task `pause after docker install before starting` in
`roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The fact ['ansible_$interface']['ipv4'] is a dictionary where
['ansible_$interface']['ipv6'] is a list. If we use
ansible_default_ipv6|ipv4 it is always a dictionary, which allows us to
get the ipv6 and ipv4 address without adding more complexity to the
template.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
For some reason we changed the check of pgs but it appears it could be
dangerous because the current check might be satisfied as long as 1 PG
is active+clean.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2` there is no need to add `-e
CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` as a default value in defaults vars.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`ceph-docker-common`:
At the moment there are a lot of duplicated tasks in each
`./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored in
`./roles/ceph-docker-common/tasks/main.yml`.
`*_containerized_deployment` variables:
All `*_containerized_deployment` have been refactored to a single
variable `containerized_deployment`
Duplicate `cephx` variables in `group_vars/*` have been removed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We only check for everything except 'distro' because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we could end up in situation where we would install a package
on a machine that does not have the right repo enabled. Because the
condition was set to OR we weren't pinning a particular host but just a
condition. Let's say someone sets 'ceph_origin == "distro"', this would
try to install OSD packages on Monitors.
Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND then apply one of the installation
conditions.
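A sketch of the combined condition (package name illustrative):
```
- name: install ceph osd package
  package:
    name: ceph-osd
    state: present
  when:
    - osd_group_name in group_names  # pin to the host set first
    - ceph_origin == 'distro'        # then the installation condition
```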
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: fail to deploy a containerized Ceph cluster with ipv6
Solution: do not hardcode ipv4 when bootstrapping the container.
Now use ip_version: ipv6 to get a containerized cluster deployed with
ipv6.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451786
Signed-off-by: Sébastien Han <seb@redhat.com>
In addition to `196fa7e` this commit checks if a container has already
been launched and deletes it before retrying the ceph osd prepare
process.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The CI on Docker is reporting the following error:
STDERR:
Error EINVAL: bad entity name
This is due to the fact that this auth entity name does not exist on
Jewel so we should not create that key when running Jewel containers.
Fixes: https://github.com/ceph/ceph-ansible/issues/1514
Signed-off-by: Sébastien Han <seb@redhat.com>
Already documented in the Red Hat Ceph Storage 2 Installation Guide
for Red Hat Enterprise Linux, but not here
Signed-off-by: Florian Klink <flokli@flokli.de>
"rgw override bucket index max shards" and
"rgw bucket default quota max objects" were in the
client section of the ceph.conf and not being
applied, this commit moves them to global
Resolves: bz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>
We shouldn't need this anymore as the upgrade bug that
debian_ceph_packages was used to work around should have
been fixed as of jewel.
See https://github.com/ceph/ceph-ansible/issues/1481 for more
detailed information.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
- Change civetweb_num_thread default to 100
- Add capability to override the number of pgs for rgw pools
- Add ceph.conf vars to enable default bucket object quota at the
  user's choosing in the ceph.conf.j2 template
Resolves: rhbz#1437173
Resolves: rhbz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Restore the check_socket that was removed by `5bec62b`.
This commit also improves the logging in `restart_*_daemon.sh` scripts
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Prior to this change, ansible was only checking for the existence of the
package, now if upgrade_ceph_packages is true this means we are
performing an upgrade.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1442016
Signed-off-by: Sébastien Han <seb@redhat.com>
Proof-of-concept clusters or actual production clusters will never want to use this. We also do not test it anywhere for this same reason.
Signed-off-by: Gregory Meno <gmeno@redhat.com>
This is to allow ceph-mgr daemons to remote control
osd and mds daemons with MCommand messages.
Fixes: http://tracker.ceph.com/issues/19713
Signed-off-by: John Spray <john.spray@redhat.com>
Without this, we don't test the mgr role so we need to add it.
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
This is the same fix as bc846b7da6
applied to the other part of the code-base that builds ceph.conf (I'd
missed that 349b9ab3e7 had duplicated
this code).
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
Ansible's assemble module by default will put all files in the src
directory together into dest. We only want to put {{ cluster }}.conf
and osd.conf together, not anything that might have found its way into
/etc/ceph/ceph.d (e.g. files left by the sysadmin taking backups
before an ansible run). So specify a regexp that matches only those
two files.
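A sketch of the assemble task with such a regexp:
```
- name: assemble cluster and osd conf fragments only
  assemble:
    src: /etc/ceph/ceph.d/
    dest: '/etc/ceph/{{ cluster }}.conf'
    regexp: '^(({{ cluster }})|(osd))\.conf$'
```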
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
Ansible evaluates the 'with_items' before the 'when' so if the inventory
does not have the group declared it'll fail. To fix this, we set an
empty array to make the with_items happy and then evaluate with the
'when'.
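The pattern looks like this (group variable name illustrative):
```
- name: act on mds nodes
  debug:
    msg: "{{ item }}"
  with_items: "{{ groups[mds_group_name] | default([]) }}"
  when: mds_group_name in groups
```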
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this change we were deploying a monitor using its fqdn name but
we were checking its state and performing actions on it using its
shortname.
Signed-off-by: Sébastien Han <seb@redhat.com>
The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to
provide additional monitoring and interfaces to external monitoring and
management systems.
Only works as of the Kraken release.
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-create-keys unit file was removed here:
* 8bcb4646b6
* dc5fe8d415
As a consequence the systemctl preset command now fails to run since the
unit does not exist anymore. Due to the redirection to /dev/null we
don't know what's happening.
Ultimately the mon unit doesn't get enabled and the mon service won't
start after reboot.
Removing the old/non-existent unit makes the command succeed now.
ceph fix: https://github.com/ceph/ceph/pull/14226
Signed-off-by: WingkaiHo <sanguosfiang@163.com>
Co-Authored-By: Sébastien Han <seb@redhat.com>
Until now, only the first task was executed.
The idea here is to use the `listen` statement to be able to notify
multiple handlers and regroup all of them in `./handlers/main.yml`, as
notifying an included handler task is not possible.
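A sketch of the listen-based handlers (script name illustrative):
```
# handlers/main.yml
- name: copy mon restart script
  template:
    src: restart_mon_daemon.sh.j2
    dest: /tmp/restart_mon_daemon.sh
    mode: "0750"
  listen: "restart ceph mons"

- name: restart ceph mon daemon(s)
  command: /tmp/restart_mon_daemon.sh
  listen: "restart ceph mons"
```
A task then only needs `notify: restart ceph mons` and every handler
listening on that topic runs.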
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
As reported in
https://github.com/ceph/ceph-ansible/issues/1403 when devices are held
by lvm and `osd_auto_discovery` is set to true, it's not enough to check
for a partition count = 0 since Ansible does not report them.
This patch also looks for 'holders', which in the case of lvm
corresponds to the name of the pv. Now we also look for holders = 0.
Fixes: #1403
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: too many different commands to do the same thing. The 'cut'
command in infrastructure-playbooks/purge-cluster.yml was also wrong.
This sed command from osixia in ceph-docker
https://github.com/ceph/ceph-docker/pull/580/ addresses all the
scenarios.
Signed-off-by: Sébastien Han <seb@redhat.com>
ntp is still installed even if ntp_service_enabled is set to false.
That could be a problem if the time synchronization is managed by
something other than ceph-ansible or if you want to use a different NTP
implementation, as suggested in #1354.
Fixes: #1354
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Guits <gabrioux@redhat.com>
If a group of hosts is empty (for instance 'mdss', in the case of a
deployment without any mds node), the playbook will fail when trying
to restart services with a `"'dict object' has no attribute u'XXX'"`
error.
The idea here is to force the `with_items` statements in all included handler tasks
to get at least an empty array.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
sysctl tuning should be in the sysctl.d directory. This creates
a separation between values set specifically for ceph and values
set by the operator.
Signed-off-by: Tyler Brekke <tbrekke@redhat.com>
With ' in osd_crush_location, systemd will show this error:
ceph-osd-prestart.sh[2931]: Invalid command: invalid chars ' in 'root=
Signed-off-by: Christian Zunker <christian.zunker@codecentric.de>
This fixes issue #1299. According to @ktdreyer's comment in the ticket,
he fixed the web server config so also older (non-SNI) python clients
can use the uri module here.
After the jewel release the mon startup does not generate keys, but it's
still harmless to call ceph-create-keys with jewel because this task has
a 'creates' argument that will cause it not to run if the keys already
exist.
Removing this when condition also allows the downstream CI tests to
install kraken or luminous without resetting ceph_stable_release, which does not
pertain to rhcs.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This is not only for monitors, but also mds, rgw and rbd mirror so
making the var name more generic:
ceph_docker_enable_centos_extra_repo
Signed-off-by: Sébastien Han <seb@redhat.com>
Sometimes the socket appears during the 5th attempt and sometimes not,
so we increase the timeout a little bit.
Signed-off-by: Sébastien Han <seb@redhat.com>
Add the possibility to create openstack pools and keys even for containerized deployments
Fix: #1321
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This removes the implicit order requirement when using OSD fragments.
When you use OSD fragments and ceph-osd role is not the last one,
the fragments get removed from ceph.conf by ceph-common.
It is not nice to have this code at two locations, but this is
necessary to prevent problems, when ceph-osd is the last role as
ceph-common gets executed before ceph-osd.
This could be prevented when ceph-common would be explicitly called
at the end of the playbook.
Signed-off-by: Christian Zunker <christian.zunker@codecentric.de>
This option was missing for rgw, mds, rbd mirror and nfs, making these
daemons impossible to run on a kv deployment with containers.
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this change, ceph-ansible would install the main NFS Ganesha
server daemon on Ubuntu, but it would skip the Ceph FSALs.
Running "apt-get install nfs-ganesha" will only install the main NFS Ganesha
server. It does *not* pull in the RGW FSAL
(/usr/lib/x86_64-linux-gnu/ganesha/libfsalrgw.so)
Running "apt-get install nfs-ganesha-fsal" will install the RGW FSAL as
well as the main NFS Ganesha server package.
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
This patch introduces calamari_debug option which will turn on debugging
for calamari before initializing and running it.
Signed-off-by: Boris Ranto <branto@redhat.com>
From Josh Durgin, "I'd recommend not setting vfs_cache_pressure in
ceph-ansible. The syncfs issue is still there, and has caused real
problems in the past, whereas there hasn't been good data showing lower
vfs_cache_pressure is very helpful - the only cases I'm aware of have
shown it makes little difference to performance."
https://bugzilla.redhat.com/show_bug.cgi?id=1395451
Install package from official repos rather than pip when using RHEL.
This commit fixes https://bugzilla.redhat.com/show_bug.cgi?id=1420855
This commit also refactors all
`roles/ceph-*/tasks/docker/pre_requisite.yml` to avoid a lot of
duplicated code.
Fix: #1303
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This was needed for Hammer and older versions; it is not needed anymore
since we have a 'ceph' user to run ceph processes.
Signed-off-by: Sébastien Han <seb@redhat.com>
Check if ceph filesystem already exists before creating it.
If the ceph filesystem doesn't exist, execute the task only on one node.
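A sketch of the guarded, single-node creation (fs and pool variable
names illustrative):
```
- name: check if ceph filesystem already exists
  command: ceph --cluster {{ cluster }} fs get {{ cephfs }}
  register: check_existing_cephfs
  changed_when: false
  failed_when: false
  run_once: true

- name: create ceph filesystem
  command: ceph --cluster {{ cluster }} fs new {{ cephfs }} {{ cephfs_metadata }} {{ cephfs_data }}
  run_once: true
  when: check_existing_cephfs.rc != 0
```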
Fix: #1314
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since some distros will not allow /usr/share to be writable (e.g.
Atomic), we let the operator decide where to put that script.
Signed-off-by: Sébastien Han <seb@redhat.com>
Oh yeah! This patch adds more fine-grained control on how we run the
activation osd container. We now use --device to give read, write and
mknod access to a specific device to be consumed by Ceph. We also use
the SYS_ADMIN cap to allow mount operations; ceph-disk needs to
temporarily mount the osd data directory during the activation
sequence.
This patch also enables the support of dedicated journal devices when
deploying ceph-docker with ceph-ansible.
Depends on https://github.com/ceph/ceph-docker/pull/478
Signed-off-by: Sébastien Han <seb@redhat.com>
As of Infernalis, the Ceph daemons run as an unprivileged "ceph" UID,
and this is by design.
Commit f19b765 altered the default
civetweb port from 80 to 8080 with a comment in the commit log about
"until this gets solved"
Remove the comment about permissions on Infernalis, because this is
always going to be the case on the Ceph versions we support, and it
is just confusing.
If users want to expose civetweb to s3 clients using privileged TCP
ports, they can redirect traffic with iptables, or use a reverse proxy
application like HAproxy.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This avoids a situation where during a rolling_update we try to talk to
a mon to get the fsid and if that mon is down the playbook hangs
indefinitely.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
This gives us more flexibility than installing the ceph-release package
as we can easily use different mirrors. Also, I noticed an issue when
upgrading from jewel -> kraken as the ceph-release package for those
releases both have the same version number and yum doesn't know to
update anything.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
To configure the kernel the task was using the "command" module, which
does not respect the ">" operator. So the task just printed to stdout:
"never > /sys/kernel/mm/transparent_hugepage/enabled".
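A sketch of the fix, using the shell module so the redirection is
honoured:
```
- name: disable transparent hugepage
  shell: echo never > /sys/kernel/mm/transparent_hugepage/enabled
```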
fix: #1319
Signed-off-by: Sébastien Han <seb@redhat.com>
Some playbooks use [0-9]*, others use \d+$
The latter is more correct since cluster name may contain numbers.
Signed-off-by: Shengjing Zhu <zsj950618@gmail.com>
Some unit files were stored in /var/lib/ceph, some in
/etc/systemd/system. Now they are all under /etc/systemd/system.
closes: #1296
Signed-off-by: Sébastien Han <seb@redhat.com>
If cephx is disabled it is not necessary to include `facts_mon_fsid.yml`
in `roles/ceph-common/tasks/facts.yml`.
Fix: #1300
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We changed the way we declare the image.
Prior to this patch we had to have a "user/image:tag"
format, which is incompatible with non docker-hub registries where you
usually don't have a "user". On the docker hub a "user" is also
identified as a namespace, so for Ceph the user was "ceph".
Variables have been simplified with only:
* ceph_docker_image
* ceph_docker_image_tag
1. For docker hub images: ceph_docker_image: "ceph/daemon" will give
you the 'daemon' image of the 'ceph' user.
2. For non docker hub images: ceph_docker_image: "daemon" will simply
give you the "daemon" image.
Infrastructure playbooks have been modified as well.
The file group_vars/all.docker.yml.sample has also been removed.
It is hard to maintain since we have to generate it manually. If
you want to configure specific variables for a specific daemon simply
edit group_vars/$DAEMON.yml
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1420207
Signed-off-by: Sébastien Han <seb@redhat.com>
We shouldn't test directly the value of
`ceph_conf_overrides.global.osd_pool_default_pg_num` because this can
cause the playbook to fail if the key `global` is not present in
`ceph_conf_overrides`. Therefore we have to use the facts that have been
defined earlier.
Fix: #1242
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
On Ubuntu systems mkdir is in /bin whereas on Atomic it is in /usr/bin.
We use the shell built-in function "command" to find its right location.
Signed-off-by: Sébastien Han <seb@redhat.com>
Since we now only support systemd as an init system we can finally
treat containers as processes using systemd, and this for all the
distros.
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit allows us to restart Ceph daemons machine by machine
instead of restarting all the daemons in a single shot.
Rework the structure of the handler for clarity as well.
Signed-off-by: Sébastien Han <seb@redhat.com>
According to #1216, we need to simplify the code by removing the
support of anything before Jewel.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Some users purge their environments and leave it in a non-optimal state.
e.g: packages are still installed but /etc/ceph and /var/lib/ceph don't
exist anymore. This will result in multiple failures across the play,
sometimes hard to detect. Populating these directories "just in case"
should help us solving these problems.
Closes: #1253
Signed-off-by: Sébastien Han <seb@redhat.com>
Sometimes users, for testing, tend to delete the whole /var/lib/ceph
and then run ansible again; OSDs will never come up if we do not create
their directory.
Signed-off-by: Sébastien Han <seb@redhat.com>
This patch makes sure we set the proper pool size on the rbd pool.
Usually during bootstrap the rbd pool size is not honoured so we need to
add this workaround.
Signed-off-by: Sébastien Han <seb@redhat.com>
This allows the user to set ip_version to either ipv4 or ipv6. This
resolves a bug where monitor_address is set to an ipv6 address, but the
template fails to render because it's hardcoded to look for an 'ipv4'
key in the ansible facts.
See: https://bugzilla.redhat.com/show_bug.cgi?id=1416010
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Resolves: bz#1416010
We could have a scenario where different openstack components would
use the same pool, but the logic would create the same pool
more than once.
Add a unique filter to account for this.
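A sketch of the fix (pool list variable as used elsewhere in this log):
```
with_items: "{{ openstack_pools | unique }}"
```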
Allow for more operator flexibility in the `rgw frontends` setting
while maintaining backwards compatibility with the old vars. This
allows an operator to, for example, use the civetweb settings for
implementing SSL ports.
For available civetweb configuration parameters, see:
https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md
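For instance, a hypothetical SSL-enabled setting could look like
(variable name assumed from the vars of that era):
```
radosgw_civetweb_options: "port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem"
```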
It is not enough to check that the mds variable exists; it actually
always does because we declare it. So we need to make sure that there
is an mds host.
Signed-off-by: Sébastien Han <seb@redhat.com>
Since we introduced config_overrides we removed a lot of options from
the default template. In some cases, like mds pool, openstack pools etc
we need to know the amount of PGs required. The idea here is to skip the
task if ceph_conf_overrides.global.osd_pool_default_pg_num is not
defined in your `group_vars/all.yml`.
Closes: #1145
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-By: Guillaume Abrioux <gabrioux@redhat.com>
This allows for the role to be used with ansible-galaxy and to fix the
include in all the meta/main.yml files in the roles.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
The libcephfs1 package was removed from ceph-common in
cb1c06901e, however it was not synced
to group_vars/all.yml.sample using the `generate_group_vars_sample.sh`
script. This fixes up the comment formatting in the ceph-common
defaults and brings the group_vars sample back into sync.
Prior to this change, a playbook run with '--tags' or '--skip-tags'
would fail, because the ceph-common role would not include the
release.yml task, and this file defines critical things like
ceph_release.
Thanks Andrew Schoen <aschoen@redhat.com> for help with the fix.
The task "put initial mon keyring in mon kv store" from
ceph-mon/tasks/ceph_keys.yml is failing when cephx is disabled. The
root cause is that the variable monitor_keyring is not populated by any
task from deploy_monitors.yml.
Fixes: #1211
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this patch we had several ways to run containers: we could use
ansible's docker module on some distros, and on container distros we
were using systemd. We strongly believe treating containers as services
with systemd is the right approach, so this patch generalizes that to
all the distros. These days most of the distros are running systemd, so
it's a fair assumption.
Signed-off-by: Sébastien Han <seb@redhat.com>
Once we have our first monitor up and running we need to add it to the
monitor store as a safety measure. Just in case the local file gets
deleted and you need to add a new monitor. Now you can retrieve this key
like this:
ceph config-key get initial_mon_keyring > initial_mon_keyring.txt
Signed-off-by: Sébastien Han <seb@redhat.com>
There is no need to become root on local_action. This will even trigger
an error on some systems as it will try to run a sudo command. If the
current user does not have passwordless sudo, Ansible will fail. Anyway,
using the current user is perfectly fine and no privilege elevation is
needed.
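A sketch (paths illustrative):
```
- name: copy a file on the ansible machine
  local_action:
    module: copy
    src: "{{ fetch_directory }}/{{ fsid }}/ceph.conf"
    dest: /tmp/ceph.conf
  become: false
```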
Signed-off-by: Sébastien Han <seb@redhat.com>
The Keystone v2 APIs are deprecated and scheduled to be removed in
the Q release of OpenStack. This adds support for configuring RGW to
use the current Keystone v3 API.
The PKI keys are used to decrypt the Keystone revocation list when
PKI tokens are used. When UUID or Fernet token providers are used in
Keystone, PKI certs may not exist, so we now accommodate this scenario
by allowing the operator to disable the PKI tasks.
Jewel added support for user/pass authentication with Keystone,
allowing deployers to disable Keystone admin token as required
for production deployments.
This implements configuration for the new RGW Keystone user/pass
authentication feature added in Jewel.
See docs here: http://docs.ceph.com/docs/master/radosgw/keystone/
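The resulting ceph.conf section might look like this (endpoint and
credentials illustrative):
```
[client.rgw.gateway]
rgw keystone api version = 3
rgw keystone url = http://keystone.example.com:5000
rgw keystone admin user = rgw
rgw keystone admin password = secret
rgw keystone admin domain = default
rgw keystone admin project = service
```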
Just for clarity, and because we can, we now show the name of the
ceph configuration file that is generated.
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit solves the situation where you lost your fetch directory and
you are running ansible against an existing cluster. Since no fetch
directory is present the file containing the initial mon keyring
doesn't exist so we are generating a new one.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We do not need to run another condition for 'ceph_rhcs' since the
include we came from already has it, so we are already inside this
condition.
We also spell red hat entirely instead of rh and we remove capital
letters.
Signed-off-by: Sébastien Han <seb@redhat.com>
When `ceph_stable_rh_storage` is True, every cluster node should have a
`/etc/apt/preferences.d/rhcs.pref` file with the following contents:
```
Explanation: Prefer Red Hat packages
Package: *
Pin: release o=/Red Hat/
Pin-Priority: 999
```
ceph-deploy already did this when used with ice-setup, and we need to do
the same thing with the ceph-ansible stack.
Closes: #1182 and https://bugzilla.redhat.com/show_bug.cgi?id=1404515
Signed-off-by: Sébastien Han <seb@redhat.com>
Only when ceph_origin == "upstream", install_on_redhat.yml will include
redhat_ceph_repository.yml, same as debian.
In redhat_ceph_repository.yml, ceph_custom_repo will be added.
But in check_mandatory_vars.yml, ceph_origin=="upstream" can't be combined
with ceph_custom
If the previous check was not run, .stdout_lines is not a valid key on
the dictionary. To get around this, use .get("stdout_lines") instead.
Also add in a default empty list.
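The pattern looks like this (registered variable name illustrative):
```
with_items: "{{ ceph_keys.get('stdout_lines', []) }}"
```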
in hammer, ceph-common depended on libcephfs (indirectly, via
python-cephfs). this is no longer the case in jewel or later, so it can
be removed from debian_ceph_packages
Signed-off-by: Casey Bodley <cbodley@redhat.com>
For readability and clarity we do not run any tasks directly in the
main.yml file. This file should only contain includes, which helps us
later to apply conditionals if we want to.
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit re-uses some of the existing ceph-ansible variables for a
containerized deployment. There is no reason why we should add new
variables for the containerized deployment.
Signed-off-by: Sébastien Han <seb@redhat.com>
mon_group_name variable can be used to override mons group, but
this task assumes the group is always 'mons'. So we need to use
the var to find the group name instead.
Before this patch only the address for the first mon would show
in the ceph.conf even if there were multiple mons in the inventory.
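A simplified sketch of the template loop (the real template derives
each address per ip_version and monitor_interface):
```
mon host = {% for host in groups[mon_group_name] %}{{ hostvars[host]['ansible_default_ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %}
```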
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Once the monitor process starts it will also trigger `ceph-create-keys`
which will collect the admin key and bootstrap keys. We used to force
this command because we were having issues on some distros like centos
7.0 and 7.1 not triggering this. This is fixed on centos 7.2 and not an
issue on ubuntu 14.04 or 16.04 so we can remove this task. If the
monitor hangs or fails to start the playbook will fail right after at
the "wait for client.admin key exists" task after 300sec.
Closes: #1161
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit solves the situation where you lost your fetch directory and
you are running ansible against an existing cluster. Since no fetch
directory is present the file containing the fsid doesn't exist so we
are creating a new one. Later the ceph.conf gets updated with a wrong
fsid which causes problems for clients and ceph processes.
Closes: #1148
Signed-off-by: Sébastien Han <seb@redhat.com>
Adding that avoids this bug:
https://github.com/ansible/ansible/issues/18206
Without that you'll get failures like:
TASK [ceph-mon : set keys permissions]
*****************************************
task path:
/home/andrewschoen/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:31
fatal: [mon0]: FAILED! => {"failed": true, "msg": "'dict object' has no attribute 'stdout_lines'"}
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
We removed the "apache" setting for "radosgw_frontend" in
adfdf6871e.
As part of that change, we removed the final references to
ceph-extra.repo, but I failed to clean up this file itself.
Now that nothing uses this file, delete it.
This file contained the sole reference to redhat_distro_ceph_extra, so
we can drop that variable as well.
Refactor the code using the 'package' module.
Fix Issue #520.
(However it doesn't cover all cases because some cases are not
refactorable, e.g. because of diverging package names between
distributions.)
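A sketch of the package-module form:
```
- name: install ceph
  package:
    name: ceph
    state: present
```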
a397922 introduced a syntax error by attempting to default an unquoted
string, which causes execution failures on some ansible versions with:
Failed to template {{ ceph_rhcs_mount_path }}: Failed to template {{ ceph_stable_rh_storage_mount_path | default(/tmp/rh-storage-mount) }}: template error while templating string: unexpected '/'
libfcgi is dead upstream (http://tracker.ceph.com/issues/16784)
The RGW developers intend to remove libfcgi support entirely before the
Luminous release.
Since libfcgi gets little-to-no developer attention or testing, remove
it entirely from ceph-ansible.
Ansible task was not properly fetching OSD cluster keyring causing
the keyring to be missing when we needed to authenticate. Similarly, we
were not properly waiting on the OSD keyring to be available before
continuing.
Signed-off-by: Ivan Font <ifont@redhat.com>
- Update rolling update playbook to support containerized deployments
for mons, osds, mdss, and rgws
- Skip checking if existing cluster is running when performing a rolling
update
- Fixed bug where we were failing to start the mds container because it
was missing the admin keyring. The admin keyring was missing because
it was not being pushed from the mon host to the ansible host due to
the keyring not being available before running the copy_configs.yml
task include file. Now we forcefully wait for the admin keyring to be
generated before continuing with the copy_configs.yml task include file
- Skip pre_requisite.yml when running on atomic host. This technically
no longer requires specifying to skip tasks containing the with_pkg tag
- Add missing variables to all.docker.sample
- Misc. cleanup
Signed-off-by: Ivan Font <ifont@redhat.com>
We have a fact that detects the package manager, so we can detect if
systemd is used. Radosgw was still using some old logic from Ubuntu.
Ubuntu 16.04 now has systemd so we don't need to configure rgw as it was
running on upstart.
Signed-off-by: Sébastien Han <seb@redhat.com>
ansible 2.2 deprecates first_available_file option which is used in
the config_template module by 'generate ceph configuration file' task.
This change syncs the config_module files from their master repository
in github.com/openstack/openstack/ansible-plugins which includes the fix
2f6cac2cf6
Signed-off-by: Alberto Murillo Silva <alberto.murillo.silva@intel.com>
Even for dmcrypt we need to check the "devices" status and
"raw_journal_devices" as well so we can fix them if there is something
wrong with them.
Signed-off-by: Sébastien Han <seb@redhat.com>