Resolves issue: Multiple RGW Ceph.conf Issue #1258
In a multi-RGW setup, the RGW sections in ceph.conf all
contain the same bind IP on the civetweb line. This
modification fixes that issue and puts the right IP
in each RGW section.
Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>
Modified ceph-defaults and ran generate_group_vars_sample.sh
group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are
not part of the intended changes, but they get modified when
generate_group_vars_sample.sh is run to generate
group_vars/all.yml.sample.
Uncommented added variables in ceph-defaults
Updated tests by adding value for radosgw_interface
Added radosgw_interface to centos cluster tests
Modified the ceph-rgw role, rebased and ran generate_group_vars_sample.sh.
Removed check_mandatory_vars.yml from the ceph-rgw role.
Rebased on master.
Ran generate_group_vars_sample.sh and the files below got
modified.
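As a sketch of the intended result (node names, IPs and the exact
section layout are illustrative, not taken from the template), each
RGW section now binds to its own address:

    # before: every RGW section carried the same bind IP
    [client.rgw.node1]
    rgw frontends = civetweb port=192.168.1.10:8080
    [client.rgw.node2]
    rgw frontends = civetweb port=192.168.1.10:8080

    # after: each section uses the address of radosgw_interface on that node
    [client.rgw.node1]
    rgw frontends = civetweb port=192.168.1.10:8080
    [client.rgw.node2]
    rgw frontends = civetweb port=192.168.1.11:8080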
The rbd-mirror daemon will be HA under luminous and new daemon health
features require a way to uniquely identify rbd-mirror instances.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
To be properly evaluated, the "skipped" conditions must always come
first in the list of conditions; otherwise the other conditions are
evaluated before them and make the task fail.
Closes: https://github.com/ceph/ceph-ansible/issues/1733
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: task "check for a ceph socket in containerized deployment" will
be skipped if we are not an OSD.
with_items are still evaluated before when conditions so if the task was
skipped the dict will be empty and then fail.
Adding a "not skipped" condition skips the execution of the task.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482061
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit forces ceph-common to be installed early in the
deployment on the nodes.
For instance, ceph-rbdmirror nodes don't have the CLI installed while
it is needed for some tasks which use it to set some facts.
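For example (a sketch; the real fact-setting tasks may differ), a
task like the following needs the ceph binary on the node:

    - name: check if the cluster already exists
      command: ceph --cluster {{ cluster }} fsid
      register: ceph_current_fsid
      failed_when: false
      changed_when: false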
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
ceph services can fail to start under certain circumstances (for
example, when running in a container) because the default systemd
service configuration causes namespace issues.
To work around this we can override the systemd service settings by
placing an overrides file in the ceph-<service>@.service.d directory.
This can be generic so as to allow any potential changes required to
the ceph-<service> service files.
The overrides file is only set up when the
"ceph_<service>_systemd_overrides" config_template override variable
is specified.
The available service systemd override files are as follows:
ceph_mds_systemd_overrides
ceph_mgr_systemd_overrides
ceph_mon_systemd_overrides
ceph_osd_systemd_overrides
ceph_rbd_mirror_systemd_overrides
ceph_rgw_systemd_overrides
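For instance (a sketch; the override content and the rendered file
name are illustrative), one could set in group_vars:

    ceph_osd_systemd_overrides:
      Service:
        PrivateDevices: false

    # which config_template would render into an override file such as
    # /etc/systemd/system/ceph-osd@.service.d/<overrides>.conf:
    #   [Service]
    #   PrivateDevices=false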
The original fix to issue #1755 only set the permissions on
the monitors to which the key was copied, but not on the original
monitor where the key was created. Thus, we use a separate task
to set the permissions of the key.
The openstack_keys structure now supports a key called mode
whose value is a string that one could pass to chmod to set
the mode of the key file. The ansible file module applies the
mode to all openstack keys with this property.
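A sketch of an entry using the new key (every field except mode is
illustrative):

    openstack_keys:
      - name: client.glance
        mon_cap: "allow r"
        osd_cap: "allow class-read object_prefix rbd_children, allow rwx pool=images"
        mode: "0600"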
Fixes: #1755
This is under the MDS role instead of the mon role because the mon
role does not create the filesystem under docker.
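A sketch of the kind of task this adds to the MDS role (the variable
names and command shape are assumptions based on the usual
ceph-ansible conventions):

    - name: create the cephfs filesystem
      command: "{{ docker_exec_cmd }} ceph fs new {{ cephfs }} {{ cephfs_metadata }} {{ cephfs_data }}"
      changed_when: false
      run_once: true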
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
This scenario will create OSDs using ceph-volume and is only available
in ceph releases greater than Luminous.
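A hedged configuration sketch (variable names follow the lvm scenario
as later documented; the exact format in this initial commit may
differ):

    osd_scenario: lvm
    lvm_volumes:
      - data: data-lv1
        data_vg: vg1
        journal: journal-lv1
        journal_vg: vg1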
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
There are only two main scenarios now:
* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components
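A sketch of the two configurations (device paths are illustrative):

    # collocated: everything lives on each listed device
    osd_scenario: collocated
    devices:
      - /dev/sdb
      - /dev/sdc

    # non-collocated: journal/db/wal go to dedicated devices
    osd_scenario: non-collocated
    devices:
      - /dev/sdb
      - /dev/sdc
    dedicated_devices:
      - /dev/sdf
      - /dev/sdf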
Signed-off-by: Sébastien Han <seb@redhat.com>
In containerized deployments, if you try to update your `ceph.conf`
file it won't actually be updated on your nodes because it is
overwritten by the copy of the file present in your fetch directory.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Move the `fsid`, `monitor_name`, `docker_exec_cmd` and `ceph_release`
set_fact calls to the `ceph-defaults` role.
This allows these facts to be reused without having to play
`ceph-common` or `ceph-docker-common`.
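As an illustration (the exact tasks and values are assumptions),
ceph-defaults can now carry facts like:

    - name: set_fact monitor_name
      set_fact:
        monitor_name: "{{ ansible_hostname }}"

    - name: set_fact docker_exec_cmd
      set_fact:
        docker_exec_cmd: "docker exec ceph-mon-{{ ansible_hostname }}"
      when: containerized_deployment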
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This will give us more flexibility and avoid a lot of useless `when`
conditions that skip all the tasks of a non-desired role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a new role `ceph-defaults`.
This role aims to handle all default vars for `ceph-docker-common`
and `ceph-common` and to set basic facts (e.g. `fsid`).
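A sketch of how a play would consume it (the playbook excerpt is
illustrative):

    - hosts: mons
      roles:
        - ceph-defaults       # default vars + basic facts (e.g. fsid)
        - ceph-common         # or ceph-docker-common
        - ceph-mon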
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Merge `ceph-docker-common` and `ceph-common` defaults vars in
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We don't want to have heterogeneous ceph.conf files anymore and
believe that we should have the right section for the running daemon.
If we don't do this and use profiles, e.g. rgw, we will get a new rgw
section on some of the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
It is mandatory now to set the Ceph version you want to install, e.g.:
ceph_stable_release: luminous
To find the release names, you can look at the release notes doc:
http://docs.ceph.com/docs/master/release-notes/
Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-disk is responsible for enabling the unit file if needed.
Actually, since https://github.com/ceph/ceph/pull/12241 it seems that
it's not even needed. In the event of a restart, udev rules will be
triggered and they will run `ceph-disk activate` on the device too,
so the 'enabled' state is not needed.
Closes: https://github.com/ceph/ceph-ansible/issues/1142
Signed-off-by: Sébastien Han <seb@redhat.com>
OSDs get started by ceph-disk before the ceph.conf file is written
with a crush location. That results in a crush map without a
configured crush location.
To prevent this, we have to restart the OSDs during the initial setup
after the crush location was added to the ceph.conf file.
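A sketch of the restart step (the task shape and variables are
assumptions):

    - name: restart osds so they pick up their crush location
      service:
        name: "ceph-osd@{{ item }}"
        state: restarted
      with_items: "{{ osd_ids.stdout_lines | default([]) }}"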