This is causing unknown issues when trying to start a dmcrypt container.
Basically, the container is stuck at mount, opening the LUKS device. It
is still unknown why this causes trouble, but we need to move forward.
Also, this doesn't seem to help in any way to fix the race condition
we've seen.
Here is the log for dmcrypt:
cryptsetup 1.7.4 processing "cryptsetup --debug --verbose --key-file
key luksClose fbf8887d-8694-46ca-b9ff-be79a668e2a9"
Running command close.
Locking memory.
Installing SIGINT/SIGTERM handler.
Unblocking interruption on signal.
Allocating crypt device context by device
fbf8887d-8694-46ca-b9ff-be79a668e2a9.
Initialising device-mapper backend library.
dm version [ opencount flush ] [16384] (*1)
dm versions [ opencount flush ] [16384] (*1)
Detected dm-crypt version 1.14.1, dm-ioctl version 4.35.0.
Device-mapper backend running with UDEV support enabled.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush ]
[16384] (*1)
Releasing device-mapper backend.
Trying to open and read device /dev/sdc1 with direct-io.
Allocating crypt device /dev/sdc1 context.
Trying to open and read device /dev/sdc1 with direct-io.
Initialising device-mapper backend library.
dm table fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush
securedata ] [16384] (*1)
Trying to open and read device /dev/sdc1 with direct-io.
Crypto backend (gcrypt 1.5.3) initialized in cryptsetup library
version 1.7.4.
Detected kernel Linux 3.10.0-693.el7.x86_64 x86_64.
Reading LUKS header of size 1024 from device /dev/sdc1
Key length 32, device size 1943016847 sectors, header size 2050
sectors.
Deactivating volume fbf8887d-8694-46ca-b9ff-be79a668e2a9.
dm status fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush ]
[16384] (*1)
Udev cookie 0xd4d14e4 (semid 32769) created
Udev cookie 0xd4d14e4 (semid 32769) incremented to 1
Udev cookie 0xd4d14e4 (semid 32769) incremented to 2
Udev cookie 0xd4d14e4 (semid 32769) assigned to REMOVE task(2) with
flags (0x0)
dm remove fbf8887d-8694-46ca-b9ff-be79a668e2a9 [ opencount flush
retryremove ] [16384] (*1)
fbf8887d-8694-46ca-b9ff-be79a668e2a9: Stacking NODE_DEL [verify_udev]
Udev cookie 0xd4d14e4 (semid 32769) decremented to 1
Udev cookie 0xd4d14e4 (semid 32769) waiting for zero
Signed-off-by: Sébastien Han <seb@redhat.com>
In addition to c4dcdaa20, this commit adds the missing condition on
install tasks for debian_rhcs deployment. Without it, these tasks are
played on any kind of deployment.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
* DBus on host should include ganesha service file
* to allow the ganesha container to respond on DBus, it needs to run
in --privileged mode (ganesha folks contacted to look at this)
* ceph_nfs_include_exports_dir variable replaced with more general
ceph_nfs_dynamic_exports
This does not belong in `ceph-common`; it is too specific to the
`ceph-iscsi-gw` role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This does not belong in `ceph-common`; it is too specific to the
`ceph-nfs` role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Make the `ceph-mgr` role handle the installation of the `ceph-mgr`
package itself, because it is complicated to manage elsewhere given that
we may install either jewel or luminous.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
During the initial play, the docker command does not exist yet, so there
are no stdout_lines from the command. Using get allows us to fix this by
declaring an array if the command fails.
Signed-off-by: Sébastien Han <seb@redhat.com>
Use an intermediate variable to build the final `dedicated_devices` list
to avoid duplicate entries in that array. (We need a 1:1 relation between
`dedicated_devices` and `devices` since we are using a `with_together`
later.)
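For illustration, a minimal sketch of the kind of `with_together` pairing
this relies on (the task and its command are hypothetical, not the actual
role code):
```yaml
# with_together zips the two lists pairwise, so any duplicate or missing
# entry in dedicated_devices would shift the 1:1 pairing.
- name: show device / dedicated device pairs (sketch)
  debug:
    msg: "data device {{ item.0 }} uses dedicated device {{ item.1 }}"
  with_together:
    - "{{ devices }}"
    - "{{ dedicated_devices }}"
```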
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
With jewel, `bootstrap_rbd_keyring` is not set because of this condition:
```
when:
- ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous
```
Therefore, the task `try to fetch ceph config and keys` will fail.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
* Change version from 2 to 3.
* Use ceph_rhcs_cdn_debian_repo_version to use other repositories along
with ceph_rhcs_cdn_debian_repo.
Signed-off-by: Sébastien Han <seb@redhat.com>
`ceph_release` isn't available at this step of the playbook because it
is set later based on the installed binaries.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1486062
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The value of doing this is fairly low compared to the complexity it adds.
So we remove these tasks; if the rbd pool on Jewel doesn't have the right
PG value, you can always increase it.
Signed-off-by: Sébastien Han <seb@redhat.com>
All keyrings are getting copied to all nodes.
This commit fixes a leftover from a previous code refactor.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1498583
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
iscsi-gw needs an 'rbd' pool to configure the iscsi target.
Note: I could have used the facts already set in `ceph-mon` but I
deliberately didn't, to avoid creating a dependency between these two roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit refactors the code for all `set_osd_pool_default_*`
related tasks by avoiding the use of a useless `set_fact` to determine
whether a key is present in `ceph_conf_overrides`.
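A minimal sketch of the idea, assuming a `global` section in
`ceph_conf_overrides` (the task shown is illustrative, not the actual
refactored task):
```yaml
# Read the override directly in the condition instead of staging it in a set_fact.
- name: warn when osd pool default size is overridden (sketch)
  debug:
    msg: "osd pool default size is overridden in ceph_conf_overrides"
  when: ceph_conf_overrides.get('global', {}).get('osd pool default size') is not none
```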
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The rbd pool is the default pool that gets created during ceph cluster
initialization. If we act on the rbd related operations too early, the
rbd pool does not exist yet. Move the call to perform rbd operations
to a later stage after other pools have been created.
The rbd_pool.yml playbook has all the operations related to the rbd pool.
Replace the always_run (deprecated) directive with check_mode.
Most of the ceph related tasks only need to run once. The run_once directive
executes the task on the first host.
The ceph sub-command to delete a pool is delete (not rm).
The changes submitted here were tested with this ceph version.
ceph version 0.94.9-9.el7cp (b83334e01379f267fb2f9ce729d74a0a8fa1e92c)
This upload includes these changes:
- Use the fail module (instead of assert).
- From the luminous release, the rbd pool is no longer created by default,
so delete the code that creates the rbd pool for the luminous release.
- Conform the .yml files to use the suggested syntax.
The commands are executed on the mcp nodes and I think the shell ansible
module is the right one to use. The command module is used to execute
commands on remote nodes. I can make the change to use the command module
if that is preferred.
This is to ensure the `docker_exec_cmd` fact is set with the correct
value in the case of collocated daemons.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Also, we now play ceph-config so that everything gets generated for
bootstrapping new daemons during the upgrade.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1497959
Signed-off-by: Sébastien Han <seb@redhat.com>
generate_crt|bool|default(false) won't apply the default value;
generate_crt|default(false)|bool will.
Signed-off-by: Sébastien Han <seb@redhat.com>
The task which sets `ceph_current_fsid` in `facts.yml` in the case of a
containerized deployment will definitely fail because `docker_exec_cmd`
is not set yet.
This commit simply makes `facts.yml` played after `check_socket.yml` so
`docker_exec_cmd` is set properly.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit refactors the ceph-mds role.
The goal here is to create cephfs in `ceph-mon` for both containerized
and non-containerized cases so we don't need the admin keyring on mds
nodes anymore.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1488999
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Using the systemd module allows us to do in one task what we previously
did in three (see the sketch after the list):
- enable unit file,
- issue a `daemon-reload`,
- start the service
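A minimal sketch of such a task, with an illustrative unit name:
```yaml
- name: enable and start the ceph mon service (sketch)
  systemd:
    name: "ceph-mon@{{ ansible_hostname }}"
    state: started
    enabled: yes
    daemon_reload: yes
```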
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Running the socket check on all the hosts will override the default
value of docker_exec_cmd, leaving it with the last value (currently
rbd-mirror), as a result the subsequent docker_exec_cmd usage for the
Signed-off-by: Sébastien Han <seb@redhat.com>
There is a bug in the rbd mirror unit file, the upstream fix is here:
https://github.com/ceph/ceph/pull/17969.
This should be reverted once the patch is merged and backport is done.
Signed-off-by: Sébastien Han <seb@redhat.com>
This fixes the error:
```
The conditional check 'sestatus.stdout != 'Disabled'' failed.
```
that occurs when running on non-RHEL-based systems, since the
`sestatus` fact is registered only on RHEL-based distributions.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Specify the timeout flag to ceph-create-keys, which causes it to time out
if a monitor quorum isn't achieved. This overrides the default timeout
of 10 minutes, causing ceph-ansible to fail faster in the event of cluster
network issues.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
We have seen issues with leftover sockets. So now, if a socket is found,
we also check whether it is accessed by a process. If so, we can run the
handler; if not, we remove it and continue the playbook.
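A hedged sketch of that logic; the socket path and task names are
illustrative, not the actual playbook code:
```yaml
- name: check if the mon socket is used by a process (sketch)
  shell: fuser --silent /var/run/ceph/{{ cluster }}-mon.{{ ansible_hostname }}.asok
  register: mon_socket_in_use
  failed_when: false
  changed_when: false

- name: remove the stale mon socket (sketch)
  file:
    path: /var/run/ceph/{{ cluster }}-mon.{{ ansible_hostname }}.asok
    state: absent
  when: mon_socket_in_use.rc != 0
```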
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
It's sad, but we cannot rely on the prepare container anymore since the
logs are flushed after reboot, so inspecting the container does not
return anything.
Now, instead, we use an ephemeral container to look up the
journal/block.db/block.wal (depending on filestore or bluestore) and
build the activate command accordingly.
Signed-off-by: Sébastien Han <seb@redhat.com>
We need to use `hostvars[host]['XXX']` to retrieve the monitor
interface and/or the radosgw interface.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1493920
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
- move the file fetch/push to the existing task
- rename the include
- generate the ganesha template from ansible
- re-arrange role structure
- re-use tasks for non-container and container
- configure keys for non-container and container
- fix rgw container key collection;
Signed-off-by: Sébastien Han <seb@redhat.com>
The rbd key was not pushed to rbd nodes because its keyring path was not
added to `ceph_config_keys`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We generate the ceph.conf on all the nodes through ceph-docker-common,
so there is no need to push it from the Ansible host.
Also, this was breaking the ceph.conf template generation since we only
generate sections based on the host the ansible task is running on.
For example, what typically happens is: we bootstrap the monitor and get
a ceph.conf generated for a mon only; then we move on to an osd and
generate the ceph.conf with the osd section (done by ceph-docker-common),
but this gets overwritten by the copy_config task of the ceph-osd role.
Signed-off-by: Sébastien Han <seb@redhat.com>
- Change capitalization of config options to be
in line with what config.txt in the nfs-ganesha
tree says
Signed-off-by: Ali Maredia <amaredia@redhat.com>
RHCS install wasn't working at all prior to this commit, as the name of
the include was pointing to a non-existent file.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1492056
Signed-off-by: Sébastien Han <seb@redhat.com>
When the ceph-nfs service is managed by pacemaker, it's useful not to
enable and start the ceph-nfs service through systemd, but to let
pacemaker start the service in a later step.
In analogy to ceph_nfs_rgw_user, we should be able to define a user
with which the nfs-ganesha Ceph FSAL connects to the cluster.
Introduce a ceph_nfs_ceph_user variable, setting its default to
"admin" (which preserves the prior behavior of always connecting as
client.admin).
Fixes #1910.
When Ansible is not run with verbose options, it's difficult to see which
include and/or set_fact does what. Adding a name to each clarifies this.
Signed-off-by: Sébastien Han <seb@redhat.com>
The variable "statleftover" was removed by commit a60c74f61e and never
added back to the new playbook, yet it is still being referenced.
This adds it back.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1492224
Signed-off-by: Sébastien Han <seb@redhat.com>
On a container env, machines don't have any ceph binaries so we need to
use a container to run the commands.
Signed-off-by: Sébastien Han <seb@redhat.com>
Use default delay since the mon (in particular) can take more time to
restart.
Solves error with:
STDERR:
Error response from daemon: No such container: ceph-mon-mon0
Signed-off-by: Sébastien Han <seb@redhat.com>
All keys are copied to all nodes.
This commit splits that task across the roles so keys are only copied to
their respective nodes.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1488999
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Less configuration for the user: the containers inherit from the global
variables. No more container-specific variables.
Signed-off-by: Sébastien Han <seb@redhat.com>
In a collocated scenario, where you might put an rgw, an mds and a mon on
the same node, you don't want the handler to blindly restart all the
daemons on the node; some of them might not be configured yet.
This implements more precise socket detection, for each daemon type.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1488813
Signed-off-by: Sébastien Han <seb@redhat.com>
Prior to this patch, this activation sequence for autodetection was
always skipped because we were asking to activate a device without
partitions, which doesn't make sense.
We also fix the way we look up a device: since the data partition is
always numbered 1, we take the min element of the dict.
Closes: https://github.com/ceph/ceph-ansible/issues/1782
Signed-off-by: Sébastien Han <seb@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a new boolean parameter for client config, create_key_file_only,
with a default of false. When create_key_file_only is true, the client
tasks that connect to the external ceph cluster to verify the key and
`ceph auth import` the key are skipped.
Fixes: #1848
We must mask the image so we are sure that even if the system reboots
the OSDs won't start.
Also remove Ceph udev rules if found on the system prior to deploying
containers. If we don't do this, we are exposed to conflicts between
udev rules and systemd unit files.
Also, the CI will now test the migration from a non-containerized
cluster to a containerized cluster.
Signed-off-by: Sébastien Han <seb@redhat.com>
The way we handle the restart for both mds and rgw is not ideal: it will
try to restart the daemon on hosts that don't run the daemon, resulting
in a service file being created (see bug description).
Now we restart each daemon precisely and in a serialized fashion.
Note: the current implementation does NOT support multiple mds or rgw on
the same node.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1469781
Signed-off-by: Sébastien Han <seb@redhat.com>
This patch adds passing the RGW_CIVETWEB_IP to the docker container.
This IP defaults to the value of radosgw_civetweb_bind_ip, and
radosgw_civetweb_bind_ip defaults to ipv4.default.
Without this value, the RGW container will bind to 0.0.0.0.
Remove the old pre-systemd check.
We only support systemd, so there is no need to run a condition on
systemd. The playbook will fail if systemd is not present.
Signed-off-by: Sébastien Han <seb@redhat.com>
The installation process is now described as follows (a sample snippet
follows the list):
* you still have to choose a 'ceph_origin' installation method. The
origin can be 'repository' (add a new repository), 'distro' (it will use
the packages provided by the native repo source of your distribution) or
'local' (only available on Red Hat systems, it installs locally built
packages). This last option is not well tested, so use it carefully
* if ceph_origin == 'repository' you will have to decide what kind of
repository you want to enable:
- community: corresponds to the stable upstream/community version
- enterprise: corresponds to the stable enterprise/downstream version
(basically you are a Red Hat customer)
- dev: it will install ceph from packages built out of the github
development branches
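A hypothetical `group_vars/all.yml` snippet; only `ceph_origin` is named
above, the variable holding the repository kind and its accepted values
are assumptions:
```yaml
ceph_origin: repository      # or: distro / local
ceph_repository: community   # assumed variable; enterprise / dev per the list above
```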
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The list cannot be evaluated properly if it contains '[]', which is
the case when using the filter "default([])". To fix this, we have to
properly merge the lists.
This fixes the issue: "list object has no element 1".
Signed-off-by: Sébastien Han <seb@redhat.com>
By detecting the ceph version running in the container we can easily
apply conditions like:
ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous
We do that already, in ceph-docker-common/tasks/fetch_configs.yml.
This fixes the error:
TASK [ceph-docker-common : register rbd bootstrap key]
******************************************************
fatal: [magna005]: FAILED! => {"failed": true, "msg": "The conditional
check 'ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous'
failed. The error was: error while evaluating conditional
(ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous):
'dict object' has no attribute 'dummy'\n\nThe error appears to have been
in
'/home/ubuntu/ceph-ansible/roles/ceph-docker-common/tasks/fetch_configs.yml':
line 2, column 3, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n---\n-
name: register rbd bootstrap key\n ^ here\n"}
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1486062
Signed-off-by: Sébastien Han <seb@redhat.com>
Logging inside the container is not useful since it writes to the
overlayfs partition, resulting in potential performance degradation on
the container.
If you need to check the logs, just look at journald.
Signed-off-by: Sébastien Han <seb@redhat.com>
with_items is evaluated before the when condition, so if the task that
registers 'results' is skipped, the task will fail with:
{"failed": true, "msg": "'dict object' has no attribute 'results'"}
Defaulting to an empty array fixes the issue.
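A minimal sketch of the pattern (variable and task names are illustrative):
```yaml
- name: act on registered results, even if the registering task was skipped (sketch)
  debug:
    msg: "{{ item.stdout | default('') }}"
  with_items: "{{ registered_task.results | default([]) }}"
```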
Reverts: abdd66619e
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482061
Signed-off-by: Sébastien Han <seb@redhat.com>
The script can fail to get the osd id because the osds are activated by
udev and it can take a while for them to activate. This commit fixes
that by trying to get all the osds per node in a loop.
This commit also enables the osd services so that they are
available after reboot.
Signed-off-by: Boris Ranto <branto@redhat.com>
There should be no need to use sudo when writing or using these files.
It creates an issue when the user running ansible-playbook does not
have sudo privs.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Some deployments can't copy infrastructure playbooks outside of the
infrastructure-playbooks directory. Thus they use ANSIBLE_ROLES_PATH to
overcome this. However, some roles have 'playbook_dir' hardcoded, which
results in a wrong path since the execution comes from
infrastructure-playbooks. Basically the role triggered by a playbook
from infrastructure-playbooks believes that the roles are in
infrastructure-playbooks/roles. This commit fixes that.
Signed-off-by: Sébastien Han <seb@redhat.com>
This will give us more flexibility and the possibility to deploy a client node
for an external ceph-cluster.
related BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1469426
Fixes: #1670
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Before this commit we were forcing ipv4 which might not be available.
Now setting ip_version to ipv4 or ipv6 will give you the right support.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1484189
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit eases the use of the
infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
playbook. We basically run it with a couple of pre-tasks and then let
the playbook run the docker roles.
It obviously expects proper variables to be configured in order to
work.
Signed-off-by: Sébastien Han <seb@redhat.com>
The lvm_volumes variable is now a list of dictionaries that represent
each OSD you'd like to deploy using ceph-volume. Each dictionary must
have the following keys: data, journal and data_vg. Each dictionary can
also optionally provide a journal_vg key.
The 'data' key represents the lv name used for the OSD and the 'data_vg'
key is the vg name that the given lv resides on. The 'journal' key is
either an lv, device or partition. The 'journal_vg' key is optional and
must be the vg name for the journal lv if given. This key is mainly used
for purging the journal lv if purge-cluster.yml is run.
For example:
lvm_volumes:
- data: data_lv1
journal: journal_lv1
data_vg: vg1
journal_vg: vg2
- data: data_lv2
journal: /dev/sdc
data_vg: vg1
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Resolves issue: Multiple RGW Ceph.conf Issue #1258
In a multi-RGW setup, the RGW sections in ceph.conf contain an identical
bind IP in the civetweb line. This modification fixes that issue and
puts the right IP for each RGW.
Signed-off-by: SirishaGuduru <SGuduru@walmartlabs.com>
Modified ceph-defaults and ran generate_group_vars_sample.sh.
group_vars/osds.yml.sample and group_vars/rhcs.yml.sample are not part
of the changes, but they got modified when generate_group_vars_sample.sh
was run to generate group_vars/all.yml.sample.
Uncommented the added variables in ceph-defaults.
Updated tests by adding a value for radosgw_interface.
Added radosgw_interface to the centos cluster tests.
Modified the ceph-rgw role, rebased and ran generate_group_vars_sample.sh.
In the ceph-rgw role, removed check_mandatory_vars.yml.
Rebased on master.
Ran generate_group_vars_sample.sh and then the below files got
modified.
The rbd-mirror daemon will be HA under luminous and new daemon health
features require a way to uniquely identify rbd-mirror instances.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
To be properly evaluated, the "skipped" conditions must always come
first in the list of conditions; otherwise the other conditions are
evaluated before them and make the task fail.
Closes: https://github.com/ceph/ceph-ansible/issues/1733
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: the task "check for a ceph socket in containerized deployment"
will be skipped if we are not an OSD.
with_items is still evaluated before the when conditions, so if the task
was skipped the dict will be empty and the play will fail.
Adding a "not skipped" condition skips the execution of the task.
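A sketch of the condition ordering described here and in the commit
above; names are illustrative and the `skipped` test is written in
current Ansible syntax:
```yaml
- name: use the output of a possibly-skipped socket check (sketch)
  debug:
    msg: "{{ item.stdout | default('') }}"
  with_items: "{{ socket_check.results | default([]) }}"
  when:
    - socket_check is not skipped   # must come first
    - item.rc == 0
```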
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1482061
Signed-off-by: Sébastien Han <seb@redhat.com>
This commit forces ceph-common to be installed early in the deployment
on nodes.
For instance, ceph-rbdmirror doesn't have the CLI installed, while it is
needed for some tasks which use it to set some facts.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Ceph services can fail to start under certain circumstances (for
example, when running in a container) because the default systemd
service configuration causes namespace issues.
To work around this, we can override the systemd service settings by
placing an overrides file in the ceph-<service>@.service.d directory.
This can be generic so as to allow any potential changes required to
the ceph-<service> service files.
The overrides file is only set up when the
"ceph_<service>_systemd_overrides" config_template override variable is
specified (see the example after the list of variables below).
The available service systemd override files are as follows:
ceph_mds_systemd_overrides
ceph_mgr_systemd_overrides
ceph_mon_systemd_overrides
ceph_osd_systemd_overrides
ceph_rbd_mirror_systemd_overrides
ceph_rgw_systemd_overrides
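A hypothetical group_vars snippet; the nested structure (section name and
settings) is an assumption about how config_template renders the
overrides file:
```yaml
ceph_osd_systemd_overrides:
  Service:
    PrivateDevices: false
    TimeoutStartSec: 300
```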
The original fix to issue #1755 only set the permissions on
the monitors to which the key was copied, but not the original
monitor where the key was created. Thus, we use a separate task
to set the permission of the key.
The openstack_keys structure now supports a key called mode
whose value is a string that one could pass to chmod to set
the mode of the key file. The ansible file module applies the
mode to all openstack keys with this property.
Fixes: #1755
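A hypothetical `openstack_keys` entry showing the new `mode` property
(the key name is illustrative and the other properties normally present
are omitted):
```yaml
openstack_keys:
  - name: client.glance   # illustrative key; other properties omitted
    mode: "0600"          # applied by the ansible file module as described above
```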
This is under the MDS role instead of the mon role because that role
does not create the filesystem under docker.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
This scenario will create OSDs using ceph-volume and is only available
in ceph releases greater than Luminous.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
There are only two main scenarios now:
* collocated: everything remains on the same device:
- data, db, wal for bluestore
- data and journal for filestore
* non-collocated: dedicated devices for some of the components
Signed-off-by: Sébastien Han <seb@redhat.com>
In containerized deployments, if you try to update your `ceph.conf` file
it won't actually be updated on your nodes, because it is overwritten by
the copy of the file present in your fetch directory.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Move `fsid`,`monitor_name`,`docker_exec_cmd` and `ceph_release` set_fact
to `ceph-defaults` role.
It will allow reusing these facts without having to play `ceph-common`
or `ceph-docker-common`.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This will give us more flexibility and avoid a lot of useless `when`
conditions that skip all tasks from a non-desired role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a new role `ceph-defaults`.
This role aims to handle all default vars for `ceph-docker-common` and
`ceph-common` and to set basic facts (e.g. `fsid`).
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Merge `ceph-docker-common` and `ceph-common` defaults vars in
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We don't want to have heterogeneous ceph.conf files anymore and believe
that we should have the right section for the running daemon.
If we don't do this and use profiles, e.g. rgw, we will get a new rgw
section on some of the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
It is now mandatory to set the Ceph version you want to install, e.g.:
ceph_stable_release: luminous
To find the release names, you can look at the release notes doc:
http://docs.ceph.com/docs/master/release-notes/
Signed-off-by: Sébastien Han <seb@redhat.com>
ceph-disk is responsible for enabling the unit file if needed. Actually,
since https://github.com/ceph/ceph/pull/12241 it seems that it's not
even needed. In the event of a restart, udev rules will be triggered and
they will ceph-disk activate the device too, so the 'enabled' is not
needed.
Closes: https://github.com/ceph/ceph-ansible/issues/1142
Signed-off-by: Sébastien Han <seb@redhat.com>
OSDs get started by ceph-disk before the ceph.conf file is written
with a crush location. That results in a crush map without configured
crush location.
To prevent this, we have to restart the OSDs during the initial setup
after the crush location was added to the ceph.conf file.
We have multiple issues with ceph-disk's cli with bluestore and Ceph
releases. This is mainly due to cli changes with Luminous. Luminous
introduced --bluestore and --filestore options which do not exist on
releases older than Luminous. The default store being bluestore on
Luminous, simply checking for the store is not enough, so we have to
build a specific command line for ceph-disk depending on the Ceph
version we are running and the desired osd_store.
Signed-off-by: Sébastien Han <seb@redhat.com>
The keys and openstack_keys structure now supports an optional
key called acls whose value is a list of strings one could pass
to setfacl. The ansible ACL module applies the ACLs to all
openstack keys with this property.
Fixes: #1688
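A hypothetical entry showing the new `acls` property; the key name and
ACL specs are illustrative:
```yaml
keys:
  - name: client.cinder   # illustrative key; other properties omitted
    acls:                 # each entry is a spec one could pass to setfacl
      - "u:cinder:r--"
      - "u:nova:r--"
```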
Remove `rgw enable static website` and `rgw enable usage log` from
ceph.conf and make it usable with ceph_config_overrides as profiles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit introduces a new directory called "profiles" which
contains some set of variables for a particular use case. These profiles
provide guidance for certain scenarios such as:
* configuring rgw with keystone v3
Signed-off-by: Sébastien Han <seb@redhat.com>
According to Alfredo, this was used for gitbuilders. Right now shaman/chacra
dev repos are unsigned.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Move condition at task level and not at include level to make `fsid`
variable available for all roles.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Some tasks fetch files to `{{ fetch_directory }}/docker_mon_files` and
then try to copy from `{{ fetch_directory }}/{{ fsid }}`. That causes
the playbook to fail.
Fixes: #1683
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
To keep consistency between `{{ openstack_keys }}` and `{{ keys }}`,
respectively in the `ceph-mon` and `ceph-client` roles.
This commit also adds the possibility to set mds caps.
Fixes: #1680
Co-Authored-by: John Fulton <johfulto@redhat.com>
Co-Authored-by: Giulio Fidente <gfidente@redhat.com>
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The ceph.conf template needs to look for the value of monitor_interface
in hostvars[host] because there might be different values set per host.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
In Luminous, ceph-disk defaults to bluestore, so all our scenarios are
using bluestore; we need to force testing both.
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
In addition to ceph/ceph-docker@69d9aa6, this explains how to deploy a
containerized cluster with a custom admin secret.
Basically, you just need to pass the `admin_secret` defined in your
`group_vars/all.yml` to the `ceph_mon_docker_extra_env` variable.
Eg:
`ceph_mon_docker_extra_env: -e CLUSTER={{ cluster }} -e FSID={{ fsid }}
-e MON_NAME={{ monitor_name }} -e ADMIN_SECRET={{ admin_secret }}`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Fail with a sane message if the devices or raw_journal_devices variables
are strings instead of lists during manual device assignment.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
rsync is required by the ansible synchronize module. Ensure
it is installed when local installation is selected.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Add a new parameter `admin_secret` that allows deploying a ceph cluster
with a custom admin secret.
Fix: #1630
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface to store objects, but the scenarios more or
less remain the same.
If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.
If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices you will get
an OSD running bluestore with a dedicated drive for rocksdb db, a
dedicated drive partition for rocksdb WAL and a dedicated drive for
DATA.
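A sketch of the three combinations described above, expressed as
group_vars settings (device paths are illustrative):
```yaml
osd_objectstore: bluestore
journal_collocation: true            # data, DB and WAL on the same device
devices:
  - /dev/sdb
# -- or, dedicated rocksdb DB devices --
# raw_multi_journal: true
# raw_journal_devices:
#   - /dev/sdc
# -- and optionally a dedicated WAL device --
# bluestore_wal_devices:
#   - /dev/sdd
```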
Signed-off-by: Sébastien Han <seb@redhat.com>
There is no need for 2 variables to enable bluestore, prior to this
patch one had to do the following to activate bluestore:
osd_objectstore: bluestore
bluestore: true
Now you just need to set `osd_objectstore: bluestore`.
Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
remove `ceph_mon_docker_interface` and use `monitor_interface` instead
for both containerized and non-containerized deployment.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Some variables are missing from ceph-docker-common role since the
include of check_mandatory_vars.yml has been re-added in the ceph-mon
role.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The check regarding the networking scenario configuration has been
moved from ceph-common to ceph-mon in 1de8176 but the include was not re-added
in 189f4fe
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
e8187f6 does not fix the ipv6 case as expected, since `ansible_default_*`
is filled with the IP address carried by the network interface used by the
default gateway route. Moreover, it assumes that the MON_IP address will
be this IP address, which is not always the case.
We need to keep using the previous fact but add some intelligence in the
template to determine how to retrieve the ipv4|ipv6 address, since the path
to the fact in `hostvars` is not the same in the ipv4 vs ipv6 case.
Fix: 1569
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add an extra variable to the openstack pools, which creates them with
defined rules. This will allow placing different pools on, e.g.,
different types of disks.
This commit will also set a new default rule when defined and move
the rbd pool to the new rule.
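A hypothetical `openstack_pools` entry; the exact name of the rule key is
an assumption, the commit only states that an extra variable was added:
```yaml
openstack_pools:
  - name: volumes
    pg_num: 128
    rule_name: ssd-rule   # assumed key: the pool gets created with this crush rule
```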
Somehow the shell module will return an error if the command line is not
next to it.
Also fixed the import with the right path.
Signed-off-by: Sébastien Han <seb@redhat.com>
Followup on https://github.com/ceph/ceph-ansible/pull/1469 where we
merged most of the container code from roles/ceph-*/task/docker/*.yml
into roles/ceph-docker-common/tasks/
It seems that we forgot to remove the original files.
Signed-off-by: Sébastien Han <seb@redhat.com>
The new test in the PGs check no longer works on distributions where
/bin/sh isn't linked to /bin/bash.
Fix: #1619
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
OpenStack's Gnocchi service expects to have a pool called "metrics".
This change adds "metrics" to the list of `openstack_pools` and
creates a corresponding key. It is only run if the user sets
`openstack_config: false`.
The current handler only restarts one OSD on each OSD server. After
the first one the handler stops, no matter what results the checks had.
Co-Authored-By: Gaudenz Steinlin (@gaudenz)
Remove "osd mkfs type" and the other pre-Bluestore parameters from the
generated ceph.conf so that disk activation on OSDs will work. The
current default xfs config results in a failed deployment and
incorrect partition metadata.
For a newly created cluster, the command `ceph --cluster {{ cluster }}
osd pool get rbd size` does not respond properly.
We only want to check if the rbd pool exists, so we now use an ls |
grep approach.
Closes: https://github.com/ceph/ceph-ansible/issues/1547
Signed-off-by: Sébastien Han <seb@redhat.com>
Rewrite the check_pgs by using json parsing instead of complex regexp to
parse the `ceph -s` output.
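A minimal sketch of such a JSON-based check, assuming the
`pgmap.pgs_by_state` / `pgmap.num_pgs` layout of `ceph -s -f json`
(task names and the exact expression are illustrative, not the actual
check_pgs code):
```yaml
- name: get cluster status (sketch)
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s -f json"
  register: ceph_status
  changed_when: false

- name: fail when not all PGs are active+clean (sketch)
  fail:
    msg: "not all PGs are active+clean yet"
  when: >
    ((ceph_status.stdout | from_json).pgmap.pgs_by_state
      | selectattr('state_name', 'equalto', 'active+clean')
      | map(attribute='count') | list | sum)
    != (ceph_status.stdout | from_json).pgmap.num_pgs
```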
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Add a default value for `ceph_docker_on_openstack` to avoid a
conditional check error for the task `pause after docker install before starting` in
`roles/ceph-docker-common/tasks/pre_requisites/prerequisites.yml`
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The fact ['ansible_$interface']['ipv4'] is a dictionary whereas
['ansible_$interface']['ipv6'] is a list. If we use
ansible_default_ipv6|ipv4 it is always a dictionary, which allows us to
get the ipv6 and ipv4 address without adding more complexity to the
template.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
For some reason we changed the PGs check, but it appears it could be
dangerous because the current check might be satisfied as long as a
single PG is active+clean.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2`, there is no need to add `-e
CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` as a default value in the defaults vars.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
`ceph-docker-common`:
At the moment there are a lot of duplicated tasks in each
`./roles/ceph-<role>/tasks/docker/main.yml` that could be refactored into
`./roles/ceph-docker-common/tasks/main.yml`.
`*_containerized_deployment` variables:
All `*_containerized_deployment` variables have been refactored to a
single variable, `containerized_deployment`.
Duplicate `cephx` variables in `group_vars/*` have been removed.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
We only check for everything except 'distro' because that
is a valid way of deploying RHCS, with pre-prepared repos
present on the nodes.
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: we could end up in a situation where we would install a package
on a machine that does not have the right repo enabled. Because the
condition was set to OR, we weren't pinning a particular host, just a
condition. Let's say someone sets 'ceph_origin == "distro"'; this would
try to install OSD packages on Monitors.
Solution: use an AND condition to first pin to the group_name (which
identifies a set of hosts) AND then, after this, one of the installation
conditions.
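A minimal sketch of the AND pinning; the package name and origin values
are illustrative, while `osd_group_name` and `group_names` follow
standard ceph-ansible/Ansible usage:
```yaml
- name: install ceph osd package (sketch)
  package:
    name: ceph-osd
    state: present
  when:
    - osd_group_name in group_names          # pin to the right set of hosts first
    - ceph_origin == 'repository' or ceph_origin == 'distro'
```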
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1453119
Co-Authored-By: https://github.com/zhsj
Signed-off-by: Sébastien Han <seb@redhat.com>
Problem: fail to deploy a containerized Ceph cluster with ipv6
Solution: do not hardcode ipv4 when bootstrapping the container.
Now use ip_version: ipv6 to get a containerized cluster deployed with
ipv6.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451786
Signed-off-by: Sébastien Han <seb@redhat.com>
In addition to `196fa7e`, this commit checks if a container has already
been launched and deletes it before retrying the ceph osd prepare
process.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The CI on Docker is reporting the following error:
STDERR:
Error EINVAL: bad entity name
This is due to the fact that this auth entity name does not exist on
Jewel so we should not create that key when running Jewel containers.
Fixes: https://github.com/ceph/ceph-ansible/issues/1514
Signed-off-by: Sébastien Han <seb@redhat.com>
Already documented in the Red Hat Ceph Storage 2 Installation Guide
for Red Hat Enterprise Linux, but not here
Signed-off-by: Florian Klink <flokli@flokli.de>
"rgw override bucket index max shards" and
"rgw bucket default quota max objects" were in the
client section of the ceph.conf and not being
applied, this commit moves them to global
Resolves: bz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>
We shouldn't need this anymore as the upgrade bug that
debian_ceph_packages was used to workaround should have
been fixed as of jewel.
See https://github.com/ceph/ceph-ansible/issues/1481 for more
detailed information.
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
Change the civetweb_num_thread default to 100.
Add the capability to override the number of pgs for rgw pools.
Add ceph.conf vars to enable a default bucket object quota of the
user's choosing in the ceph.conf.j2 template.
Resolves: rhbz#1437173
Resolves: rhbz#1391500
Signed-off-by: Ali Maredia <amaredia@redhat.com>