Commit Graph

2823 Commits (04b1665e5ea80b1190cea9a17bd265e02ba402d1)

Guillaume Abrioux 80875adba7 ceph-osd: do not relabel /run/udev in containerized context
Otherwise content in /run/udev is mislabeled and prevents some services
like NetworkManager from starting.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-04 11:32:41 -04:00
Guillaume Abrioux a78fb209b1 tests: test podman against atomic os instead rhel8
the rhel8 image used is an outdated beta version; it is not worth
maintaining this image upstream since it's possible to test podman
with a newer version of the centos/atomic-host image.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-04 11:32:41 -04:00
Dimitri Savineau 616c484698 ceph-nfs: use template module for configuration
789cef7 introduces a regression in the ganesha configuration file
generation. The new config_template module version broke it.
But the ganesha.conf file isn't an ini file and doesn't really
need to use the config_template module. Instead we can use the
classic template module.
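
A minimal sketch of the replacement task (the src/dest paths and mode
are illustrative, not taken from this commit):

```
- name: generate ganesha configuration file
  template:
    src: ganesha.conf.j2
    dest: /etc/ganesha/ganesha.conf
    owner: root
    group: root
    mode: "0644"
```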

Resolves: #4045

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-04 09:11:52 +02:00
Guillaume Abrioux 6e2e30db54 dashboard: move ceph-grafana-dashboards package installation
This commit moves the package installation into the ceph-dashboard
role. This is needed to install the ceph dashboard json file in
`/etc/grafana/dashboards/ceph-dashboard/`.

Closes: #4026

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Guillaume Abrioux 14f5fc3c86 infra: refact dashboard firewall rules
- There is no need to open ports 3000, 8234, 9283 on all nodes.
- Add missing rule for alertmanager (port 9093), as sketched below
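
A sketch of such a rule using the firewalld module (zone and task
name are illustrative):

```
- name: open alertmanager port
  firewalld:
    port: "9093/tcp"
    zone: public
    permanent: true
    immediate: true
    state: enabled
```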

Closes: #4023

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Guillaume Abrioux a2b6f44665 dashboard: append mgr modules to ceph_mgr_modules
when `dashboard_enabled` is `True`, let's append `dashboard` and
`prometheus` modules to `ceph_mgr_modules` so they are automatically
loaded.
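
A minimal sketch of the append, assuming the union filter is an
acceptable way to express it (the actual implementation may differ):

```
- name: append dashboard modules to ceph_mgr_modules
  set_fact:
    ceph_mgr_modules: "{{ ceph_mgr_modules | union(['dashboard', 'prometheus']) }}"
  when: dashboard_enabled | bool
```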

Closes: #4026

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:36:38 +02:00
Dimitri Savineau 7503098ca0 remove ceph-agent role and references
The ceph-agent role was used only for RHCS 2 (jewel) so it's not
useful anymore.
The current code will fail on CentOS distributions because the rhscon
package is only available on Red Hat with the RHCS 2 repository and
this ceph release is supported on the stable-3.0 branch.

Resolves: #4020

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-03 13:35:50 +02:00
Guillaume Abrioux 003aeea45a validate: add a check for nfs standalone
if `nfs_obj_gw` is True when deploying an internal ganesha with an
external ceph cluster, `ceph_nfs_rgw_access_key` and
`ceph_nfs_rgw_secret_key` must be provided so the
ganesha configuration file can be generated.
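
A sketch of what such a check could look like in ceph-validate (task
name and message are illustrative):

```
- name: fail on missing rgw keys for standalone nfs
  fail:
    msg: "ceph_nfs_rgw_access_key and ceph_nfs_rgw_secret_key must be provided"
  when:
    - nfs_obj_gw | bool
    - ceph_nfs_rgw_access_key is undefined or ceph_nfs_rgw_secret_key is undefined
```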

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:34:38 +02:00
Guillaume Abrioux 6a6785b719 nfs: support internal Ganesha with external ceph cluster
This commit allows deploying an internal ganesha with an external
ceph cluster.

This requires defining `external_cluster_mon_ips` with a comma
separated list of external monitors.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1710358

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-06-03 13:34:38 +02:00
Dimitri Savineau daf92a9e1f ceph-facts: generate fsid on mon node
The fsid generation is done via a python command. When the ansible
controller node only has python3 available (like RHEL 8) the python
command isn't necessarily present, causing the fsid generation to
fail.
We already do some resource creation (like the ceph keyring secret)
with the python command from the mon node, so we should do the same
for the fsid.
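
A minimal sketch of generating the fsid from the first monitor
instead of the controller (assuming the usual mon_group_name group):

```
- name: generate cluster fsid
  command: python -c 'import uuid; print(str(uuid.uuid4()))'
  register: cluster_uuid
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```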

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1714631

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-06-03 10:11:32 +02:00
Guillaume Abrioux 55420d6253 roles: introduce `ceph-container-engine` role
This commit splits the current `ceph-container-common` role.

This introduces a new role `ceph-container-engine` which handles the
tasks specific to the installation of container tools (docker/podman).

This is needed for the ceph-dashboard implementation for 2 main reasons:

1/ Since the ceph-dashboard stack is only containerized, we must install
everything needed to run containers even in non containerized
deployments. Splitting this role allows us to not have to call the full
`ceph-container-common` role which would run a bunch of unneeded tasks
that would have been skipped anyway.

2/ The current implementation would have required running
`ceph-container-common` on all ceph-client nodes, which would have
conflicted with 9d3517c670 (we don't want
to run ceph-container-common on all client nodes, see the mentioned
commit for more details)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-22 13:02:10 +02:00
Dimitri Savineau f37edfa113 ceph-mgr: install python-routes for dashboard
The ceph mgr dashboard requires the routes python library to be
installed on the system.

Resolves: #3995

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-22 08:46:16 +02:00
Dimitri Savineau 622d9feae9 common: use gnupg instead of gpg
The gpg package isn't available for all Debian/Ubuntu distributions
but gnupg is.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-21 17:53:58 +02:00
Dimitri Savineau 29b0d47c8c ceph-prometheus: fix error in templates
- remove trailing double quotes in jinja templates
- add jinja filename without .j2 suffix

Resolves: #4011

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-21 17:53:39 +02:00
Guillaume Abrioux 6ca7372a2d config: fix ipv6
As of nautilus, if you set `ms bind ipv6 = True` you must explicitly set
`ms bind ipv4 = False` too, otherwise OSDs will still try to pick up an
IPv4 address.
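
With ceph-ansible this can be expressed through ceph_conf_overrides,
for example (a sketch, not the generated template itself):

```
ceph_conf_overrides:
  global:
    ms bind ipv6: true
    ms bind ipv4: false
```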

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1710319

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-21 10:38:00 -04:00
Dimitri Savineau 0ee833432e ceph-nfs: apply selinux fix anyway
Because ansible_distribution_version doesn't return the minor version
on CentOS with ansible 2.8, we can apply the selinux fix anyway but
only for CentOS/RHEL 7.
Starting with RHEL 8, there's a dedicated package for selinux called
nfs-ganesha-selinux [1].

Also replace the command module + semanage by the selinux_permissive
module.

[1] https://github.com/nfs-ganesha/nfs-ganesha/commit/a7911f
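
A sketch of the selinux_permissive usage (the ganesha_t domain name
and conditions are assumptions for illustration):

```
- name: set the nfs-ganesha domain to permissive
  selinux_permissive:
    name: ganesha_t  # assumed SELinux domain for nfs-ganesha
    permissive: true
  when:
    - ansible_os_family == 'RedHat'
    - ansible_distribution_major_version == '7'
```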

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-20 13:04:58 +02:00
Dimitri Savineau 0c7fd79865 ceph-validate: use kernel validation for iscsi
Ceph iSCSI gateway requires Red Hat Enterprise Linux or CentOS 7.5
or later.
Because we cannot check the ansible_distribution_version fact for
CentOS with ansible 2.8 (it returns only the major version) we can
fall back to checking the kernel options:

  - CONFIG_TARGET_CORE=m
  - CONFIG_TCM_USER2=m
  - CONFIG_ISCSI_TARGET=m

http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
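
A sketch of such a kernel-option check, assuming the usual
/boot/config-<kernel> convention:

```
- name: check kernel config for iscsi target support
  command: >
    grep -E '^CONFIG_(TARGET_CORE|TCM_USER2|ISCSI_TARGET)=m'
    /boot/config-{{ ansible_kernel }}
  register: iscsi_kernel_options
  changed_when: false
  # all three options must be present as modules
  failed_when: iscsi_kernel_options.stdout_lines | length != 3
```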

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-20 13:04:58 +02:00
Guillaume Abrioux 72d8315299 switch to ansible 2.8
- remove private attribute with import_role.
- update documentation.
- update rpm spec requirement.
- fix MagicMock python import in unit tests.

Closes: #3765

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-20 13:04:58 +02:00
Dimitri Savineau 494746b7a6 common: install dependencies for apt modules
When using a minimal Debian/Ubuntu distribution, the ca-certificates
and gpg packages aren't installed, so the apt modules will fail:

Failed to find required executable gpg in paths:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

apt.cache.FetchFailedException:
W:https://download.ceph.com/debian-luminous/dists/bionic/InRelease:
No system certificates available. Try installing ca-certificates.
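
A minimal sketch of the prerequisite installation, assuming it runs
before any ceph repository is configured (package names per this and
the gnupg follow-up commit above):

```
- name: install dependencies for apt modules
  apt:
    name: ['ca-certificates', 'gnupg']
    state: present
    update_cache: true
```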

Resolves: #3994

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-20 10:02:43 +02:00
Guillaume Abrioux 9f0d4d6847 dashboard: move defaults variables to ceph-defaults
There is no need to have default values for these variables in each
role since there are no corresponding host groups.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux e74d80e72f rename docker_exec_cmd variable
This commit renames the `docker_exec_cmd` variable to
`container_exec_cmd` so it's more generic.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux cc285c417a dashboard: align the way containers are managed
This commit aligns the way the different containers are managed with
how it's currently done with the other ceph daemons.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux cd5f3fca64 dashboard: convert dashboard_rgw_api_no_ssl_verify to a bool
make `dashboard_rgw_api_no_ssl_verify` a bool variable since it seems
to be used as one.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 8bbcc46ae4 dashboard: remove legacy file
this file seems to be no longer used, let's remove it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 14f381200d dashboard: set less permissive permissions on dashboard certificate/key
using `0440` instead of `0644` is enough.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 4405f50c85 dashboard: simplify config-key command
since stable-4.0 isn't intended to deploy ceph releases prior to
nautilus, there's no need to add this complexity here.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux cdff0da7d4 dashboard: do not call ceph-container-common from other role
use site.yml to deploy ceph-container-common in order to install
docker even in non-containerized deployments since there's no RPM
available to deploy the different applications needed for
ceph-dashboard.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 742bb6214c dashboard: use existing variable to detect containerized deployment
there is no need to add more complexity for this, let's use
`containerized_deployment` in order to detect if we are running a
containerized deployment.
The idea is to use `container_exec_cmd` the same way we do in the rest of
the playbook to run the different ceph commands needed to deploy the
ceph-dashboard role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 6d9dbb1d39 facts: set container_binary fact in non-containerized deployment
This is needed for the ceph-dashboard implementation since it
requires running containerized applications which aren't packaged as
RPMs.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Guillaume Abrioux 3578d576a4 dashboard: rename template files
add .j2 to all template files related to the dashboard roles.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Boris Ranto b4d1c3693b dashboard: Support podman
This adds support for podman in dashboard-related roles. It also
drops the creation of a custom network for the dashboard-related
roles as this functionality works in a different way with podman.

Signed-off-by: Boris Ranto <branto@redhat.com>
2019-05-16 16:39:13 +02:00
Boris Ranto e737a1f83e dashboard: Set ssl_server_port if it is supported
We cannot use the old-fashioned config-key way here. It was not
supported when the option was introduced (post 14.2.0). Since the option
is not always supported we can simply ignore the potential failure on
ceph clusters that do not support it.

Signed-off-by: Boris Ranto <branto@redhat.com>
2019-05-16 16:39:13 +02:00
Boris Ranto 8f77caa932 dashboard: Add and copy alerting rules
This commit adds a list of alerting rules for ceph-dashboard from the
old cephmetrics project. It also installs the configuration file so that
the rules get recognized by the prometheus server.

Signed-off-by: Boris Ranto <branto@redhat.com>
2019-05-16 16:39:13 +02:00
Boris Ranto 2f141a6e80 Merge cephmetrics/dashboard-ansible repo
This commit merges the dashboard-ansible installation scripts with
ceph-ansible. This includes several new roles to set up
ceph-dashboard and the underlying technologies like the prometheus
and grafana servers.

Signed-off-by: Boris Ranto & Zack Cerza <team-gmeno@redhat.com>
Co-authored-by: Zack Cerza <zcerza@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-16 16:39:13 +02:00
Dimitri Savineau d2ad191eca container-common: allow podman for other distros
Currently podman installation is tightly tied to RHEL 8 even though
we're able to install it on Debian/Ubuntu distributions.
This patch changes the way we decide whether to start the (fat)
container daemon: before, the condition was based on the distribution
release; now it's based on the container_service_name variable.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-13 16:24:00 +02:00
Bruceforce c3b0ee30a1 ceph-nfs: fixed with_items
If we do this in one line we get the error described in #3968

fixes #3968

Signed-off-by: Bruceforce <markus.greis@gmx.de>
2019-05-13 16:23:43 +02:00
Bruceforce 29f2c953b4 ceph-nfs: fixed condition for "stable repos specific tasks"
The old condition would resolve to
"when": "nfs_ganesha_stable - ceph_repository == 'community'"

now it is
"when": [
          "nfs_ganesha_stable",
          "ceph_repository == 'community'"
        ]

Please backport to stable-4.0

Signed-off-by: Bruceforce <markus.greis@gmx.de>
2019-05-13 09:53:54 +02:00
Dimitri Savineau ba49225eab Update RHCS version with Nautilus
RHCS 4 will be based on Nautilus and only usable on RHEL 8.
Update the default ceph_rhcs_version to 4 and update the rhcs
repositories to rhcs 4 with RHEL 8.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-13 09:53:18 +02:00
Kevin Coakley 381c58ca3e Set the rgw_create_pools pools application to rgw
Set the application to rgw for pools created from rgw_create_pools. On Ceph Nautilus the health is set to HEALTH_WARN with the message "application not enabled on X pool(s)" if an application isn't specified for a pool.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-05-13 09:48:25 +02:00
Rishabh Dave 121b5e4184 ceph-rbd-mirror: refactor tasks/main.yml
Use blocks for similar tasks in main.yml. And move when keywords before
block keywords.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-05-10 09:21:54 +02:00
Rishabh Dave 1a4dccdbb9 ceph-mds: group similar tasks in create_mds_filesystem.yml
Group similar tasks together using block keyword.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-05-10 09:21:05 +02:00
Guillaume Abrioux 936c6fca78 facts: fix external cluster bug
running an external ceph cluster deployment with (obviously) no
monitors defined in the inventory breaks with an undefined variable
error because `_monitor_addresses` never gets defined.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1707460

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-05-07 17:53:53 +02:00
Rishabh Dave 56bfec7c58 ceph-mgr: create keys for MGRs
Add code in ceph-mgr for creating a keyring for the manager so that
managers can be deployed on a separate node too.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-05-07 14:13:06 +02:00
Rishabh Dave 89748d579a don't access other node's docker_exec_cmd variable
Except for some corner cases, it's not correct to access some other
node's copy of the variable docker_exec_cmd. Therefore replace
"hostvars[groups[mon_group_name][0]]['docker_exec_cmd']" by
"docker_exec_cmd".

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-05-07 12:37:48 +02:00
Gaudenz Steinlin 3c8987c7a5 Fix check mode support
Adds "check_mode: no" to commands which register cluster state in a
variable and don't modify anything. These commands have to run in order
to support running the playbook in check mode.
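
For example (a sketch; the actual tasks touched by this commit vary):

```
- name: get current cluster fsid
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} fsid"
  register: current_fsid
  changed_when: false
  check_mode: no
```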

Signed-off-by: Gaudenz Steinlin <gaudenz.steinlin@cloudscale.ch>
2019-05-07 09:49:20 +02:00
Dimitri Savineau ae266c6f2b ansible: remove private and static attribute
This will be removed in ansible 2.8 and breaks the playbook execution
with this release.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-05-02 14:25:17 -04:00
Dimitri Savineau 1999cf3d19 ceph-mds: Increase cpu limit to 4
In containerized deployments the default mds cpu quota is too low
for a production environment.
This is causing performance degradation compared to bare-metal.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1695850

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-24 20:33:02 +02:00
Dimitri Savineau c17106874c ceph-osd: Increase cpu limit to 4
In containerized deployments the default osd cpu quota is too low
for a production environment using NVMe devices.
This is causing performance degradation compared to bare-metal.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1695880

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-24 17:59:42 +02:00
Dimitri Savineau 4ae5ce399b ceph-iscsi: start tcmu-runner for non-container
Only rbd-target-api and rbd-target-gw were started/enabled for
non-containerized deployments.
The issue doesn't happen with containerized setups.
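
A minimal sketch of the missing service handling:

```
- name: start and enable tcmu-runner
  service:
    name: tcmu-runner
    state: started
    enabled: true
  when: not containerized_deployment | bool
```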

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-24 10:03:25 +02:00
Rishabh Dave 739a662c80 improve coding style
Keywords requiring only one item shouldn't express it by creating a
list with a single item.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-04-23 15:37:07 +02:00
Radu Toader b2f242660e Allow CephFS pool to be created with specific rule_name, erasure_profile just like rbd pools
Signed-off-by: Radu Toader <radu.m.toader@gmail.com>
2019-04-20 02:26:05 +00:00
Dimitri Savineau 8105a1cefb ceph-container-common: modify requirement flow
Until now it was not possible to install a specific container package
because it was somehow hardcoded.
This patch allows overriding the container package name (docker.io
vs docker-ce) and refactors the package installation. This can be
achieved via the container_package_name variable.
Instead of using one task per distribution we can set the package and
service name in vars. This allows having a unified package task.
Also refactor the debian_prerequisites tasks because the content was
outdated.

https://docs.docker.com/install/linux/docker-ce/debian/
https://docs.docker.com/install/linux/docker-ce/ubuntu/

Resolves: #3609

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-18 16:18:01 +02:00
Guillaume Abrioux 58f3851573 mds: remove legacy task
this task has nothing to do in stable-4.0 and after.
Let's remove it since stable-4.0 and after aren't intended to deploy
luminous.

Closes: #3873

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-18 15:55:45 +02:00
Kyle Bader 0bee90b201 rgw: add cpuset support
1/ The OSD already supports cpuset to be used for containerized deployments
through the use of the ceph_osd_docker_cpuset_cpus variable. This adds similar
support to the RGW service for containerized deployments by setting a new
variable named ceph_rgw_docker_cpuset_cpus. Like the OSD, there are times where
using distinct cores has advantages over using the in-kernel CFS scheduler.

ceph_rgw_docker_cpuset_cpus accepts a comma delimited set of CPU ids

2/ Add support for specifying the --cpuset-mems variable to restrict the
cgroup's memory allocations to a particular numa node, which should typically
correspond with the cpu ids of that numa node that were provided with
--cpuset-cpus. To ensure the correct cpu ids are used one can run
`numactl --hardware` to list the nodes and which cpu ids correspond to each.
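
For example, pinning RGW containers to numa node 0 (cpu ids are
illustrative; the cpuset-mems variable name is an assumption):

```
# cpu ids of numa node 0, as reported by `numactl --hardware`
ceph_rgw_docker_cpuset_cpus: "0,2,4,6"
# restrict memory allocations to the same numa node (name assumed)
ceph_rgw_docker_cpuset_mems: "0"
```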

Signed-off-by: Kyle Bader <kbader@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-18 15:55:19 +02:00
Dimitri Savineau 86315272c7 ceph-mgr: Add extra module packages
Since Nautilus there are mgr extra modules not present in the
ceph-mgr package but in dedicated packages.

Resolves: #3860

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-18 15:31:22 +02:00
Guillaume Abrioux a4bc7bda51 update: refact msgr2 migration
this commit refactors the msgr2 protocol introduction.

If it's a fresh install, let's go with v2 only.
If we upgrade to nautilus, we should go with v2+v1 syntax to ensure
nothing breaks.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-18 11:16:11 +02:00
Andrew Schoen 67453853ff rolling_update: set num_osds to the number of running osds
We do this so that the ceph-config role can most accurately
report the number of osds for the generation of the ceph.conf
file.

We don't want to use ceph-volume to determine the number of
osds because in an upgrade to nautilus ceph-volume won't be able to
accurately count osds created by ceph-disk.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-04-18 10:55:11 +02:00
Andrew Schoen 5e3dfe5021 ceph-osd: do not run lvm batch tasks during update
When performing a rolling update do not try to create
any new osds with `ceph-volume lvm batch`. This is troublesome
because when upgrading to nautilus the devices list might contain
devices that are currently being used by ceph-disk and have GPT
headers on them, which will cause ceph-volume to fail when
trying to use such a device. Any devices originally created
by ceph-disk will need to be removed from the devices list
before any new osds can be created.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-04-18 10:55:11 +02:00
Dimitri Savineau c8814d1331 ceph-iscsi-gw: Remove library directory
The library directory that contains the custom ceph modules is
present in the ceph-ansible root directory.
All igw_* modules are already present there so we don't need the ones
in roles/ceph-iscsi-gw/library.
Also remove the associated spec file.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-18 10:37:57 +02:00
Dimitri Savineau e471bce76b allow using ansible 2.8
Currently we only support ansible 2.7.
We plan to use 2.8 when it is released so we have to support both
2.7 and 2.8.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1700548

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-17 16:57:37 +02:00
Guillaume Abrioux edfa4310d3 defaults: refact package dependencies installation.
Because 5c98e361df could be seen as a non
backward compatible change this commit reverts it and brings back
package dependencies installation support.
Let's just modify the default value instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-16 11:07:59 -04:00
Guillaume Abrioux 83df60cbc3 defaults: remove some package dependencies
These packages aren't needed anymore.
They were needed for ceph-init-detect but ceph-init-detect doesn't
exist anymore.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1683885

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-15 11:28:58 -04:00
Rishabh Dave 96c180cc0e check if mon daemon is installed before restarting it
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-04-15 10:00:50 +02:00
Guillaume Abrioux edf1ee2073 mon: check if an initial monitor keyring already exists
When adding a new monitor, we must reuse the existing initial monitor
keyring. Otherwise, the new monitor will issue its 'mkfs' with a new
monitor keyring and it will result with a mismatch between them. The
new monitor will be unable to join the quorum in the end.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-authored-by: Rishabh Dave <ridave@redhat.com>
2019-04-15 10:00:50 +02:00
Guillaume Abrioux f899da3172 osd: remove legacy file
this file is not used anymore, let's remove it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux 4f68462009 osd: remove ceph-disk scenarios files
these files aren't needed anymore since we only use the lvm scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux f0416c8892 osd: remove dedicated_devices variable
This variable was related to ceph-disk scenarios.
Since we are entirely dropping ceph-disk support as of stable-4.0, let's
remove this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux 4d35e9eeed osd: remove variable osd_scenario
As of stable-4.0, the only valid scenario is `lvm`.
Thus, this makes this variable useless.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Guillaume Abrioux 4d5637fd8a osd: remove legacy file
ceph_disk_cli_options_facts.yml is not used anymore, let's remove it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Sébastien Han 2888c0825f validate: only check device when they are devices
We only validate the devices that are passed if there is a list of
devices to validate.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-04-11 11:57:02 -04:00
Sébastien Han 52df15895b osd: default osd_scenario to lvm
osd_scenario has become obsolete and defaults to lvm. With lvm there
is no such thing as collocated and non-collocated.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-04-11 11:57:02 -04:00
Sébastien Han 9ea1e49407 validate: print a message for old scenarios
ceph-disk is not supported anymore, so all the newly created OSDs will
be configured using ceph-volume.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-04-11 11:57:02 -04:00
Sébastien Han e2a5aa062e osd: remove ceph-disk support
We don't support the preparation of OSDs with ceph-disk; only
ceph-volume is supported. However, the start operation of OSDs is
still supported. So if you change a config option, the handlers will
be able to restart all the OSDs via their respective systemd unit
files.

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-11 11:57:02 -04:00
Dimitri Savineau d2efb7f02b ceph-mds: Set application pool to cephfs
We don't need to use the cephfs variable for the application pool
name because it's always cephfs.
If the cephfs variable is set to something other than the default
value it will break the application pool task.

Resolves: #3790

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-11 15:15:41 +02:00
Guillaume Abrioux 7e0adca7a4 osds: allow passing devices by path
ceph-volume didn't work when the devices were passed by path.
Since it now supports it, let's allow this feature in ceph-ansible.

Closes: #3812

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-10 13:22:30 -04:00
Guillaume Abrioux 631e5d3144 mon: remove useless delegate_to
Let's use a condition to run this task only on the first mon.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-10 00:39:25 +00:00
Dimitri Savineau d17b1b48b6 rgw: change default frontend on nautilus
As discussed in ceph/ceph#26599, beast is now the default frontend
for rados gateway with the nautilus release.
Add rgw_thread_pool_size variable with 512 as default value and keep
backward compatibility with num_threads option when using civetweb.
Update radosgw_civetweb_num_threads to reflect rgw_thread_pool_size
change.
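
A sketch of the resulting group_vars knobs (rgw_thread_pool_size is
named in this commit; radosgw_frontend_type as a variable name is an
assumption):

```
radosgw_frontend_type: beast
rgw_thread_pool_size: 512
```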

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-09 17:21:51 +02:00
Dimitri Savineau 37816570c6 container-common: Enable docker on boot for ubuntu
The docker daemon is automatically started during package
installation but the service isn't enabled on boot.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-09 16:50:10 +02:00
Matthew Vernon 9dd913cf8a UCA: Uncomment UCA variables in defaults, fix consequent breakage
The Ubuntu Cloud Archive-related (UCA) defaults in
roles/ceph-defaults/defaults/main.yml were commented out, which means
if you set `ceph_repository` to "uca", you get undefined variable
errors, e.g.

```
The task includes an option with an undefined variable. The error was: 'ceph_stable_repo_uca' is undefined

The error appears to have been in '/nfs/users/nfs_m/mv3/software/ceph-ansible/roles/ceph-common/tasks/installs/debian_uca_repository.yml': line 6, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: add ubuntu cloud archive repository
  ^ here

```

Unfortunately, uncommenting these results in some other breakage,
because further roles were written that use the fact of
`ceph_stable_release_uca` being defined as a proxy for "we're using
UCA", so they try to install packages from the bionic-updates/queens
release, for example, which doesn't work. So there are a few `apt`
tasks that need modifying to not use `ceph_stable_release_uca` unless
`ceph_origin` is `repository` and `ceph_repository` is `uca`.

Closes: #3475
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2019-04-09 13:44:00 +02:00
Dimitri Savineau fd4b0ec7eb ceph-facts: use last ipv6 address for mon/rgw
When using the monitor_address_block or radosgw_address_block
variables to configure the mon/rgw address we're getting the first ip
address from the ansible facts present in that cidr.
When there's a VIP on that network the first filter could return the
wrong value.
This seems to affect only IPv6 setups because the VIP addresses are
added to the ansible facts at the beginning of the list. This is the
opposite (at the end) when using IPv4.
This causes the mon/rgw processes to bind on the VIP address.
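
A sketch of the selection logic with the ipaddr filter, taking the
last address of the cidr instead of the first for IPv6 (fact and
filter usage illustrative):

```
monitor_address: "{{ ansible_all_ipv6_addresses | ipaddr(monitor_address_block) | last }}"
```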

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680155

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-09 06:17:05 +02:00
François Lafont 4c3e77d869 ceph-rgw: Fix bad paths which depend on the clustername
The path of the RGW environment file (in the /var/lib/ceph/radosgw/
directory) depends on the Ceph clustername. It was not taken into
account in the Ansible role `ceph-rgw`.

Signed-off-by: flaf <francois.lafont.1978@gmail.com>
2019-04-09 06:16:31 +02:00
Guillaume Abrioux cbfdbab177 mgr: manage mgr modules when mgr and mon are collocated
When mgrs are implicitly collocated on monitors (no mgrs in the mgrs
group), that include was skipped because of this condition:

`inventory_hostname == groups[mgr_group_name][0]`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-09 06:12:29 +02:00
Guillaume Abrioux f596cc1711 mgr: wait for all mgr to be available
before managing mgr modules, we must ensure all mgr are available,
otherwise we can hit a failure like the following:

```
stdout:Error ENOENT: all mgr daemons do not support module 'restful', pass --force to force enablement
```

It happens because all mgr are not yet available when trying to
manage mgr modules.
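
A sketch of such a wait loop, assuming `ceph mgr dump` reports an
`available` boolean:

```
- name: wait for all mgr to be available
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} mgr dump -f json"
  register: mgr_dump
  changed_when: false
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool
```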

Closes: #3100

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-04-09 06:12:29 +02:00
Rishabh Dave c0dfa9b61a allow adding a MDS to already deployed cluster
Add a tox scenario that adds a new MDS node as part of an already
deployed Ceph cluster and deploys MDS there.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-04-08 13:33:28 +02:00
Ali Maredia 37f46a8c5d rgw multisite: add more than 1 rgw to the master or secondary zone
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664869

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-04-06 08:01:19 +02:00
Dimitri Savineau d3ae9fd05f radosgw: Raise cpu limit to 8
In containerized deployments the default radosgw cpu quota is too low
for a production environment.
This is causing performance degradation compared to bare-metal.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1680171

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-04 18:50:48 +02:00
fpantano afbb90e4ac Check ceph_health_raw.stdout value as string during mon bootstrap
According to rdo testing https://review.rdoproject.org/r/#/c/18721
a check on the output of the ceph_health value is added to
allow the playbook to make several attempts (according to the
retry/delay variables) when waiting for the cluster quorum or
when the container bootstrap is not finished.
It avoids the failure of the command execution when it doesn't
receive a valid json object to decode (because the cluster is too
slow to bootstrap compared to the ceph-ansible task execution).

Signed-off-by: fpantano <fpantano@redhat.com>
2019-04-03 20:55:05 +00:00
Dimitri Savineau 7e5e4229b7 ceph-volume: Add PYTHONIOENCODING env variable
Since https://github.com/ceph/ceph/commit/77912c0 ceph-volume uses
stdout encoding based on the LC_CTYPE and PYTHONIOENCODING environment
variables.
Those variables aren't set when using ansible.
Currently this breaks non containerized deployments on Ubuntu.

TASK [use ceph-volume to create bluestore osds] ********************
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - create
  - --bluestore
  - --data
  - /dev/sdb
  rc: 1
  stderr: |-
    Traceback (most recent call last):
    (...)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in
    position 132: ordinal not in range(128)

Note that the task is failing on ansible side due to the stdout
decoding but the osd creation is successful.
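
A sketch of how the environment variable can be forced on the task
(the encoding value is an assumption):

```
- name: use ceph-volume to create bluestore osds
  command: ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
  environment:
    PYTHONIOENCODING: utf-8
```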

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-04-02 12:41:55 +02:00
Rishabh Dave 4241b6403f merge task blocks if their execution is based on same conditions
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-03-29 16:16:04 +00:00
Rishabh Dave e0beaf123a "when" keyword should precede "block" keyword
Otherwise the reader is forced to search for "when" when blocks are too
long.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-03-29 16:16:04 +00:00
Guillaume Abrioux f55e2b08be remove all NBSPs on master branch
Similar to #3658

Since there are too many changes between master and the stable
branches, let's commit directly in each branch instead of trying to
backport this commit.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-28 11:57:55 +00:00
Dimitri Savineau 40a8e1160c container: Add python3-docker on Ubuntu bionic
When installing python-minimal on Ubuntu bionic, this will add the
/usr/bin/python symlink to the default python interpreter.
On bionic, this isn't python2 but python3.

$ /usr/bin/python --version
Python 3.6.7

The python docker library is only installed for python2 which causes
issues when running the purge-docker-cluster playbook. This playbook
uses the ansible docker modules and requires having the python
bindings installed on the remote host.
Without the bindings we can see python errors reported by the docker
module.

msg: Failed to import docker or docker-py - No module named 'docker'.
Try `pip install docker` or `pip install docker-py` (Python 2.6)

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-28 08:03:58 +00:00
Guillaume Abrioux 6f47c20c3a rgw: fix a typo
ee2d52d33d introduced a typo.
This commit fixes it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux 3c4f464c54 rgw: cleanup legacy task
this task was here for backward compatibility.
It's time to remove it in the next release.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux 9134624578 rgw: add a retry on pool related tasks
sometimes those tasks might fail because of a timeout.
I've been facing this several times in the CI; adding this retry
might help and won't hurt in any case.
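
A sketch of the retry pattern on a pool-related task (the command and
the rgw_pool_names variable are illustrative):

```
- name: create a rgw pool
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd pool create {{ item }} {{ osd_pool_default_pg_num }}"
  with_items: "{{ rgw_pool_names }}"  # illustrative list variable
  register: result
  retries: 3
  delay: 5
  until: result is succeeded
```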

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux f6e0185146 update: add containerized deployment upgrade support (L->N)
Add a couple of fixes to support upgrading containerized deployments
from luminous/mimic to nautilus:

- pass CEPH_CONTAINER_IMAGE and CEPH_CONTAINER_BINARY environment
variable to the ceph_key module,
- fix the docker exec command in 'waiting for the containerized monitor
to join the quorum' task according to the `delegate_to` parameter,
- override `docker_exec_cmd` in `ceph-facts` with `mon_host` when
rolling_update is `True`,
- do not run unnecessarily `create_mds_filesystems.yml` when performing an
upgrade.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux 7386249c71 facts: retrieve fsid during rolling_update playbook
otherwise it generates a new cluster fsid and makes the upgrade fail

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux 5c3ce4ca77 mon: fetch initial keyring even when running rolling_update
otherwise, the task to copy mgr keyring fails during the rolling_update.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux afdaa70a63 update: enable msgr2 protocol
This commit enables the msgr2 protocol when the cluster is fully
upgraded to nautilus.
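
A sketch of the enabling task, assuming it runs once against the
first monitor:

```
- name: enable msgr2 protocol
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} mon enable-msgr2"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```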

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux 82764afe8d update: mask systemd service units during upgrade
This prevents the packaging from restarting services before we need
to restart them in the rolling update sequence.
We want to handle service restarts in the rolling_update playbook.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux b4f14aba8e ceph_key: `lookup_ceph_initial_entities` shouldn't fail on update
As of nautilus, the initial keyrings list has changed. It means that
when upgrading from Luminous or Mimic, a mismatch is expected between
what is found on the cluster and the initial keyring list hardcoded
in the ceph_key module. We shouldn't fail when upgrading to
nautilus.

str_to_bool() taken from ceph-volume.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Co-Authored-by: Alfredo Deza <adeza@redhat.com>
2019-03-25 16:02:56 -04:00
Guillaume Abrioux e99305c684 handlers: do not trigger handlers on rolling_update
The rolling_update playbook already takes care of stopping/starting
services during the sequence. There's no need to trigger potentially
unwanted service restarts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-25 16:02:56 -04:00
Dimitri Savineau 179fdfbc19 ceph-osd: Ensure lvm2 is installed
When using osd_scenario lvm, we never check if the lvm2 package is
present on the host.
When using a containerized deployment with docker on CentOS/RedHat
this package will be automatically installed as a dependency, but not
on Ubuntu distributions.
OSDs deployed via ceph-volume require the lvmetad.socket to be active
and running.

Resolves: #3728

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-20 22:26:45 +00:00
Guillaume Abrioux 987bdac963 osd: backward compatibility with old disk_list.sh location
Since all files in the container image have moved to
`/opt/ceph-container` this check must look for the new AND the old
path so it's backward compatible. Otherwise it could end up
templating an inconsistent `ceph-osd-run.sh`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-18 17:25:51 +00:00
Dimitri Savineau 5c39735be5 ceph-validate: fail if there's no ipaddr available in monitor_address_block subnet
When using monitor_address_block to determine the ip address of the
monitor node, we need an ip address available in that cidr to be
present in the ansible facts (ansible_all_ipv[46]_addresses).
Currently we don't check if there's an ip address available during
the ceph-validate role.
As a result, the ceph-config role fails due to an empty list during
ceph.conf template creation but the error isn't explicit.

TASK [ceph-config : generate ceph.conf configuration file] *****
fatal: [0]: FAILED! => {"msg": "No first item, sequence was empty."}

With this patch we will fail before the ceph deployment with an
explicit failure message.

Resolves: rhbz#1673687

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-18 16:35:36 +00:00
Dimitri Savineau a7b1e35a16 ceph-common: Install yum plugin priorities
When using the community repository we need to set the priority on
the ceph repositories because we could have some conflicts with EPEL
packages.
In order to set the priority on the ceph repositories, we need to
install the yum-plugin-priorities package.

http://docs.ceph.com/docs/master/install/get-packages/#rpm-packages
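
A minimal sketch of the prerequisite task (conditions illustrative):

```
- name: install yum plugin priorities
  package:
    name: yum-plugin-priorities
    state: present
  when:
    - ansible_os_family == 'RedHat'
    - ceph_repository == 'community'
```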

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-16 06:24:55 +00:00
wumingqiao 31617afca9 ceph-mgr: run mgr_modules.yml only on the first mgr host
the task will be delegated to mons[0] for all mgr hosts, so we can just run it on the first host and have the same effect.

Signed-off-by: wumingqiao <wumingqiao@beyondcent.com>
2019-03-14 20:16:33 +00:00
Dimitri Savineau d8538ad4e1 Set the default crush rule in ceph.conf
Currently the default crush rule value is added to the ceph config
on the mon nodes as an extra configuration applied after the template
generation via the ansible ini module.

This implies two behaviors:

1/ On each ceph-ansible run, the ceph.conf will be regenerated via
ceph-config+template and then ceph-mon+ini_file. This leads to an
unnecessary daemon restart.

2/ When other ceph daemons are collocated on the monitor nodes
(like mgr or rgw), the default crush rule value will be erased by
the ceph.conf template (mon -> mgr -> rgw).

This patch adds the osd_pool_default_crush_rule config to the ceph
template and only for the monitor nodes (like crush_rules.yml).
The default crush rule id is read (if it exists) from the current
ceph configuration.
The default configuration is -1 (ceph default).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1638092

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-14 08:56:52 +00:00
Dimitri Savineau b7f4e3e7c7 ceph-osd: Install numactl package when needed
With 3e32dce we can run OSD containers with numactl support.
When using the numactl command in a containerized deployment we need
to be sure that the corresponding package is installed on the host.
The package installation is only executed when the
ceph_osd_numactl_opts variable isn't empty.
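
For example (the cpunodebind/membind values are illustrative):

```
# a non-empty value triggers the numactl package installation
ceph_osd_numactl_opts: "--cpunodebind=0 --membind=0"
```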

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-12 07:43:06 +00:00
Guillaume Abrioux b3eb9206fa osd: support numactl options on OSD activate
This commit adds numactl support to OSD container activation.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1684146

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-11 10:14:50 +01:00
Dimitri Savineau a089e1ec23 systemd/service: Set docker.service conditionally
We don't need to set After=docker.service when the container_binary
variable isn't set to docker.
It doesn't break anything currently but it could be confusing when
using podman.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-07 20:56:11 +00:00
Dimitri Savineau d6e71d769c common: Use rhsm_repository module for RHCS
Instead of using subscription-manager with the command module we can
use the rhsm_repository ansible module.
This module already uses the repos list feature to determine if a
repository is enabled or not. That way this module is idempotent so
we don't need changed_when: false anymore.
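
A sketch of the module usage (the rhcs_repositories list is an
illustrative placeholder for the actual repository ids):

```
- name: enable red hat storage repositories
  rhsm_repository:
    name: "{{ item }}"
    state: enabled
  with_items: "{{ rhcs_repositories }}"
```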

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-07 19:15:42 +00:00
Dimitri Savineau 53514a5b50 common: Add noarch to community repository
The ceph stable community repository only enables the basearch
packages url.
Add the noarch url because starting with the nautilus release, some
packages are added there and are useful for mgr or grafana.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-06 00:25:11 +00:00
Dimitri Savineau 4d32ecc980 Force osd pool min_size value to integer
After b8d580b and e9e5d5a we could have either item.min_size or
osd_pool_default_min_size using a string instead of an int, causing
the condition to be true when it's false.
As a result, the task could try to set the pool min_size value to 0,
which leads to:

Error EINVAL: pool min_size must be between 1 and 1

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-05 19:48:09 +00:00
Dimitri Savineau cb381b41fe Add CONTAINER_IMAGE env var to ceph daemons
Ceph daemons will set the CONTAINER_IMAGE environment variable value
in the daemon metadata.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-05 15:07:05 +00:00
Guillaume Abrioux e9e5d5a39a fix pool min_size customization
b8d580b3f4 introduced a bug when
`min_size` isn't set (it defaults to 0).

Typical error:

```
Error EINVAL: pool min_size must be between 1 and 1
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-05 13:29:34 +00:00
Radu Toader b8d580b3f4 Customize pools min_size
Signed-off-by: Radu Toader <radu.m.toader@gmail.com>
2019-03-05 10:57:15 +00:00
Radu Toader 2048255f61 When creating pool, read pool.application and make the call to ceph osd pool enable application
Signed-off-by: Radu Toader <radu.m.toader@gmail.com>
2019-03-05 09:16:03 +00:00
Kevin Coakley b11dc13476 Updated 7 ansible-lint issues in the ceph-mon, ceph-osd, and ceph-rgw roles
The following lint issues have been resolved:

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2

[305] Use shell only when shell functionality is required
/home/travis/build/ceph/ceph-ansible/roles/ceph-osd/tasks/start_osds.yml:47

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:2

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:7

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:14

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:19

[301] Commands should not change things if nothing needs doing
/home/travis/build/ceph/ceph-ansible/roles/ceph-rgw/tasks/multisite/destroy.yml:24

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-03-04 22:25:35 +00:00
Guillaume Abrioux 359f8a9a4a nfs: fix systemd template service for ubuntu
`mkdir` is located in `/bin` on Ubuntu.
Let's use some jinja to support Ubuntu.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-03-04 19:54:25 +00:00
Dimitri Savineau 45a7082712 lint: Fix spaces before and after variables
ansible-lint reports:

[206] Variables should have spaces after {{ and before }}

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-03-01 17:22:24 +00:00
VasishtaShastry 34c25ef49b Extends check_devices tasks to non-collocated and lvm-batch scenarios
Tuned the name of a task and an error message to make them more understandable to users

Fixes BZ 1648168 - ceph-validate : devices are not validated in non-collocated and lvm_batch scenario

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1648168

Signed-off-by: VasishtaShastry <vipin.indiasmg@gmail.com>
2019-03-01 02:13:51 +00:00
Kevin Coakley 038401fef2 Add changed_when: false to the "get osd ids" statement
The "get osd ids" statement only registers the osd_ids_non_container variable. Running "ls /var/lib/ceph/osd/ | sed 's/.*-//'" should never produce a change on the system. Adding changed_when: false prevents irrelevant change messages from Ansible.

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-02-28 22:46:19 +00:00
ToprHarley 573adce7dd Convert interface names to underscores
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540881

Signed-off-by: Tomas Petr <tpetr@redhat.com>
2019-02-28 17:07:34 +00:00
Guillaume Abrioux d5be83e504 osd: add ipc=host in systemd template for containers
in addition to 15812970f0

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-28 13:14:09 +00:00
fpantano 21fad7ced3 Removed not needed mountpoint and removed ubuntu section
Referring to BZ#1683290, as dsavineau suggests: since this bug is
tripleO specific, removed the ubuntu section and the useless
mountpoints.

Signed-off-by: fpantano <fpantano@redhat.com>
2019-02-28 09:46:10 +00:00
fpantano 0c1944236b Added to the ceph-radosgw service template the ca-trust
volume avoiding to expose useless information.
This bug is referred to the following bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1683290

Signed-off-by: fpantano <fpantano@redhat.com>
2019-02-28 09:46:10 +00:00
Dimitri Savineau 58a9d310d5 mon: Move client admin variable to defaults
There's no need to set the client_admin_ceph_authtool_cap variable
via a set_fact task.
Instead we can set this in the role defaults.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-27 18:39:39 +00:00
Dimitri Savineau dd7b7604de mon: Add mds permissions to client.admin
The administrator keyring needs full capabilities on mds like mon,
osd and mgr.
Without this, the client.admin key won't be able to run commands
against mds (like ceph tell mds.0 session ls).
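
With the client_admin_ceph_authtool_cap variable (see the related
commit above), the resulting capabilities look roughly like (a
sketch):

```
client_admin_ceph_authtool_cap:
  mon: allow *
  osd: allow *
  mds: allow *
  mgr: allow *
```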

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1672878

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-27 18:39:39 +00:00
Guillaume Abrioux 4ab02d2cd1 tests: set ceph_origin and ceph_repository for non_container-collocation
those variables are mandatory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-27 15:58:35 +00:00
Guillaume Abrioux f68ad10bc9 mon: do not create unnecessarily mgr keyrings
there's no need to generate mgr keyrings 'mgr.monX' when mgrs aren't
collocated with monitors.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-27 15:58:35 +00:00
Kevin Coakley d327681b99 Set permissions on monitor directory to u=rwX,g=rX,o=rX recursive
Set directories to 755 and files to 644 in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} recursively instead of setting files and directories to 755 recursively. The ceph mon process writes files to this path with permissions 644. This update stops ansible from updating the permissions in /var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }} every time ceph mon writes a file and increases idempotency.
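
A sketch of the corresponding file task using a symbolic mode (owner
and group are illustrative):

```
- name: set monitor directory permissions
  file:
    path: "/var/lib/ceph/mon/{{ cluster }}-{{ monitor_name }}"
    state: directory
    owner: ceph
    group: ceph
    mode: "u=rwX,g=rX,o=rX"
    recurse: true
```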

Signed-off-by: Kevin Coakley <kcoakley@sdsc.edu>
2019-02-27 10:48:19 +00:00
Dimitri Savineau dc1c0dcee2 ceph-osd: Drop memory flag with bluestore
Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-26 07:27:06 +00:00
Guillaume Abrioux 8f42007272 facts: fix auto_discovery exclude
the previous approach was wrong.
Checking if `item.key` is in `osd_auto_discovery_exclude` (`['dm-',
'loop']`) is incorrect because it will obviously never match.
Therefore, the condition will return `True` whatever device we are
checking.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-26 03:16:33 +00:00
Guillaume Abrioux 83d7ef777e osd: add possibility to exclude device in osd_auto_discovery
Add a new `osd_auto_discovery_exclude` to give the possibility of
excluding some devices in auto_discovery scenario.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-25 10:05:34 +00:00
Dimitri Savineau b7338d438a ceph-infra: Remove restart firewalld handler
There's no need to restart the firewalld service when a new rule is
added, thanks to the usage of the immediate flag.

Signed-off-by: Dimitri Savineau <dsavinea@redhat.com>
2019-02-22 18:15:47 +00:00
Guillaume Abrioux 2b60a35634 common: do not override ceph_release when ceph_repository is 'rhcs'
We shouldn't reset `ceph_release` with `ceph_stable_release` when
`ceph_repository` is `rhcs`

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-21 11:58:49 +01:00
Guillaume Abrioux 21e5db8982 osd: make the 'wait for all osd to be up' task configurable
introduce two new variables to make the 'wait for all osd to be up'
check configurable.
It's possible that for some deployments, OSDs can take longer to be
seen as UP and IN.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1676763

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-20 16:06:04 +00:00
Patrick Donnelly ed40c5237d delegate key creation to first mon
Otherwise keys get scattered over the mons and the mgr key is not copied properly.

With ansible_inventory:

    [mdss]
            mds-000 ansible_ssh_host=192.168.129.110 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [clients]
            client-000 ansible_ssh_host=192.168.143.94 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [mgrs]
            mgr-000 ansible_ssh_host=192.168.222.195 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
    [mons]
            mon-000 ansible_ssh_host=192.168.139.173 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.139.173
            mon-002 ansible_ssh_host=192.168.212.114 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.212.114
            mon-001 ansible_ssh_host=192.168.167.177 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa' monitor_address=192.168.167.177
    [osds]
            osd-001 ansible_ssh_host=192.168.178.128 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
            osd-000 ansible_ssh_host=192.168.138.233 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'
            osd-002 ansible_ssh_host=192.168.197.23 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='/root/.ssh/id_rsa'

We get this failure:

    TASK [ceph-mon : include_tasks ceph_keys.yml] **********************************************************************************************************************************************************************
    included: /root/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml for mon-000, mon-002, mon-001

    TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] *************************************************************************************************************************************************
    changed: [mon-000] => {
        "attempts": 1,
        "changed": true,
        "cmd": [
            "ceph",
            "--cluster",
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li1166-30/keyring",
            "mon_status",
            "--format",
            "json"
        ],
        "delta": "0:00:01.897397",
        "end": "2019-02-14 17:08:09.340534",
        "rc": 0,
        "start": "2019-02-14 17:08:07.443137"
    }

    STDOUT:

    {"name":"li1166-30","rank":0,"state":"leader","election_epoch":4,"quorum":[0,1,2],"quorum_age":0,"features":{"required_con":"2449958747315912708","required_mon":["kraken","luminous","mimic","osdmap-prune","nautilus"],"quorum_con":"4611087854031667199","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus"]},"outside_quorum":[],"extra_probe_peers":[{"addrvec":[{"type":"v2","addr":"192.168.167.177:3300","nonce":0},{"type":"v1","addr":"192.168.167.177:6789","nonce":0}]},{"addrvec":[{"type":"v2","addr":"192.168.212.114:3300","nonce":0},{"type":"v1","addr":"192.168.212.114:6789","nonce":0}]}],"sync_provider":[],"monmap":{"epoch":1,"fsid":"bb401e2a-c524-428e-bba9-8977bc96f04b","modified":"2019-02-14 17:08:05.012133","created":"2019-02-14 17:08:05.012133","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"li1166-30","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.139.173:3300","nonce":0},{"type":"v1","addr":"192.168.139.173:6789","nonce":0}]},"addr":"192.168.139.173:6789/0","public_addr":"192.168.139.173:6789/0"},{"rank":1,"name":"li985-128","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.167.177:3300","nonce":0},{"type":"v1","addr":"192.168.167.177:6789","nonce":0}]},"addr":"192.168.167.177:6789/0","public_addr":"192.168.167.177:6789/0"},{"rank":2,"name":"li895-17","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.212.114:3300","nonce":0},{"type":"v1","addr":"192.168.212.114:6789","nonce":0}]},"addr":"192.168.212.114:6789/0","public_addr":"192.168.212.114:6789/0"}]},"feature_map":{"mon":[{"features":"0x3ffddff8ffacffff","release":"luminous","num":1}],"client":[{"features":"0x3ffddff8ffacffff","release":"luminous","num":1}]}}

    TASK [ceph-mon : fetch ceph initial keys] **************************************************************************************************************************************************************************
    changed: [mon-001] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li985-128/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.179584",
        "end": "2019-02-14 17:08:14.305348",
        "rc": 0,
        "start": "2019-02-14 17:08:11.125764"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw
    changed: [mon-002] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li895-17/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.706169",
        "end": "2019-02-14 17:08:14.041698",
        "rc": 0,
        "start": "2019-02-14 17:08:10.335529"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw
    changed: [mon-000] => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "mon.",
            "-k",
            "/var/lib/ceph/mon/ceph-li1166-30/keyring",
            "--cluster",
            "ceph",
            "auth",
            "get",
            "client.bootstrap-rgw",
            "-f",
            "plain",
            "-o",
            "/var/lib/ceph/bootstrap-rgw/ceph.keyring"
        ],
        "delta": "0:00:03.916467",
        "end": "2019-02-14 17:08:13.803999",
        "rc": 0,
        "start": "2019-02-14 17:08:09.887532"
    }

    STDERR:

    exported keyring for client.bootstrap-rgw

    TASK [ceph-mon : create ceph mgr keyring(s)] ***********************************************************************************************************************************************************************
    skipping: [mon-000] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-000)  => {
        "changed": false,
        "item": "mon-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-002)  => {
        "changed": false,
        "item": "mon-002",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=mon-001)  => {
        "changed": false,
        "item": "mon-001",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-000)  => {
        "changed": false,
        "item": "mon-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-002)  => {
        "changed": false,
        "item": "mon-002",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mon-001)  => {
        "changed": false,
        "item": "mon-001",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=mgr-000) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li547-145.keyring"
        ],
        "delta": "0:00:05.822460",
        "end": "2019-02-14 17:08:21.422810",
        "item": "mgr-000",
        "rc": 0,
        "start": "2019-02-14 17:08:15.600350"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-000) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li1166-30.keyring"
        ],
        "delta": "0:00:05.814039",
        "end": "2019-02-14 17:08:27.663745",
        "item": "mon-000",
        "rc": 0,
        "start": "2019-02-14 17:08:21.849706"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-002) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li895-17.keyring"
        ],
        "delta": "0:00:05.787291",
        "end": "2019-02-14 17:08:33.921243",
        "item": "mon-002",
        "rc": 0,
        "start": "2019-02-14 17:08:28.133952"
    }

    STDERR:

    imported keyring
    changed: [mon-001] => (item=mon-001) => {
        "changed": true,
        "cmd": [
            "ceph",
            "-n",
            "client.admin",
            "-k",
            "/etc/ceph/ceph.client.admin.keyring",
            "--cluster",
            "ceph",
            "auth",
            "import",
            "-i",
            "/etc/ceph//ceph.mgr.li985-128.keyring"
        ],
        "delta": "0:00:05.782064",
        "end": "2019-02-14 17:08:40.138706",
        "item": "mon-001",
        "rc": 0,
        "start": "2019-02-14 17:08:34.356642"
    }

    STDERR:

    imported keyring

    TASK [ceph-mon : copy ceph mgr key(s) to the ansible server] *******************************************************************************************************************************************************
    skipping: [mon-000] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=mgr-000)  => {
        "changed": false,
        "item": "mgr-000",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=mgr-000) => {
        "changed": true,
        "checksum": "aa0fa40225c9e09d67fe7700ce9d033f91d46474",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/etc/ceph/ceph.mgr.li547-145.keyring",
        "item": "mgr-000",
        "md5sum": "cd884fb9ddc9b8b4e3cd1ad6a98fb531",
        "remote_checksum": "aa0fa40225c9e09d67fe7700ce9d033f91d46474",
        "remote_md5sum": null
    }

    TASK [ceph-mon : copy keys to the ansible server] ******************************************************************************************************************************************************************
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-000] => (item=/etc/ceph/ceph.client.admin.keyring)  => {
        "changed": false,
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring)  => {
        "changed": false,
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => (item=/etc/ceph/ceph.client.admin.keyring)  => {
        "changed": false,
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "skip_reason": "Conditional result was False"
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) => {
        "changed": true,
        "checksum": "095c7868a080b4c53494335d3a2223abbad12605",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "md5sum": "d8f4c4fa564aade81b844e3d92c7cac6",
        "remote_checksum": "095c7868a080b4c53494335d3a2223abbad12605",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) => {
        "changed": true,
        "checksum": "ce7a2d4441626f22e995b37d5131b9e768f18494",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rgw/ceph.keyring",
        "md5sum": "271e4f90c5853c74264b6b749650c3f2",
        "remote_checksum": "ce7a2d4441626f22e995b37d5131b9e768f18494",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) => {
        "changed": true,
        "checksum": "e35e8613076382dd3c9d89b5bc2090e37871aab7",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-mds/ceph.keyring",
        "md5sum": "ed7c32277914c8e34ad5c532d8293dd2",
        "remote_checksum": "e35e8613076382dd3c9d89b5bc2090e37871aab7",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) => {
        "changed": true,
        "checksum": "ac43101ad249f6b6bb07ceb3287a3693aeae7f6c",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rbd/ceph.keyring",
        "md5sum": "1460e3c9532b0b7b3a5cb329d77342cd",
        "remote_checksum": "ac43101ad249f6b6bb07ceb3287a3693aeae7f6c",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring) => {
        "changed": true,
        "checksum": "01d74751810f5da621937b10c83d47fc7f1865c5",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "item": "/var/lib/ceph/bootstrap-rbd-mirror/ceph.keyring",
        "md5sum": "979987f10fd7da5cff67e665f54bfe4d",
        "remote_checksum": "01d74751810f5da621937b10c83d47fc7f1865c5",
        "remote_md5sum": null
    }
    changed: [mon-001] => (item=/etc/ceph/ceph.client.admin.keyring) => {
        "changed": true,
        "checksum": "482f702cf861b41021d76de655ecf996fe9a4a4a",
        "dest": "/root/ceph-ansible/fetch/bb401e2a-c524-428e-bba9-8977bc96f04b/etc/ceph/ceph.client.admin.keyring",
        "item": "/etc/ceph/ceph.client.admin.keyring",
        "md5sum": "7581c187044fd4e0f7a5440244a6b306",
        "remote_checksum": "482f702cf861b41021d76de655ecf996fe9a4a4a",
        "remote_md5sum": null
    }

    TASK [ceph-mon : include secure_cluster.yml] ***********************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mon : crush_rules.yml] **********************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : set_fact docker_exec_cmd] *************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : include common.yml] *******************************************************************************************************************************************************************************
    included: /root/ceph-ansible/roles/ceph-mgr/tasks/common.yml for mon-000, mon-002, mon-001

    TASK [ceph-mgr : create mgr directory] *****************************************************************************************************************************************************************************
    changed: [mon-000] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li1166-30",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }
    changed: [mon-002] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li895-17",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }
    changed: [mon-001] => {
        "changed": true,
        "gid": 167,
        "group": "ceph",
        "mode": "0755",
        "owner": "ceph",
        "path": "/var/lib/ceph/mgr/ceph-li985-128",
        "secontext": "unconfined_u:object_r:ceph_var_lib_t:s0",
        "size": 4096,
        "state": "directory",
        "uid": 167
    }

    TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************************************************************************************************************************************************
    skipping: [mon-000] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-002] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }
    skipping: [mon-001] => {
        "changed": false,
        "skip_reason": "Conditional result was False"
    }

    TASK [ceph-mgr : copy ceph keyring(s) if needed] *******************************************************************************************************************************************************************
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-002] (item={'name': '/etc/ceph/ceph.mgr.li895-17.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li895-17/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li895-17/keyring",
            "name": "/etc/ceph/ceph.mgr.li895-17.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li895-17.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-002] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-001] (item={'name': '/etc/ceph/ceph.mgr.li985-128.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li985-128/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li985-128/keyring",
            "name": "/etc/ceph/ceph.mgr.li985-128.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li985-128.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-001] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }
    An exception occurred during task execution. To see the full traceback, use -vvv. The error was: If you are using a module and expect the file to exist on the remote, see the remote_src option
    failed: [mon-000] (item={'name': '/etc/ceph/ceph.mgr.li1166-30.keyring', 'dest': '/var/lib/ceph/mgr/ceph-li1166-30/keyring', 'copy_key': True}) => {
        "changed": false,
        "item": {
            "copy_key": true,
            "dest": "/var/lib/ceph/mgr/ceph-li1166-30/keyring",
            "name": "/etc/ceph/ceph.mgr.li1166-30.keyring"
        }
    }

    MSG:

    Could not find or access 'fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring'
    Searched in:
     /root/ceph-ansible/roles/ceph-mgr/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/roles/ceph-mgr/tasks/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/files/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring
     /root/ceph-ansible/fetch//bb401e2a-c524-428e-bba9-8977bc96f04b//etc/ceph/ceph.mgr.li1166-30.keyring on the Ansible Controller.
    If you are using a module and expect the file to exist on the remote, see the remote_src option
    skipping: [mon-000] => (item={'name': '/etc/ceph/ceph.client.admin.keyring', 'dest': '/etc/ceph/ceph.client.admin.keyring', 'copy_key': False})  => {
        "changed": false,
        "item": {
            "copy_key": false,
            "dest": "/etc/ceph/ceph.client.admin.keyring",
            "name": "/etc/ceph/ceph.client.admin.keyring"
        },
        "skip_reason": "Conditional result was False"
    }

    NO MORE HOSTS LEFT *************************************************************************************************************************************************************************************************
     to retry, use: --limit @/root/ceph-linode/linode.retry

    PLAY RECAP *********************************************************************************************************************************************************************************************************
    client-000                 : ok=30   changed=2    unreachable=0    failed=0
    mds-000                    : ok=32   changed=4    unreachable=0    failed=0
    mgr-000                    : ok=32   changed=4    unreachable=0    failed=0
    mon-000                    : ok=89   changed=21   unreachable=0    failed=1
    mon-001                    : ok=84   changed=20   unreachable=0    failed=1
    mon-002                    : ok=81   changed=17   unreachable=0    failed=1
    osd-000                    : ok=32   changed=4    unreachable=0    failed=0
    osd-001                    : ok=32   changed=4    unreachable=0    failed=0
    osd-002                    : ok=32   changed=4    unreachable=0    failed=0

Also, create all keys on the first mon and copy those to the other mons to be
consistent.
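
A rough sketch of that approach (the caps, file layout, and task shape are
illustrative, not the exact ceph-ansible code):

```
# hypothetical: create every mgr key on the first monitor only, so the
# subsequent fetch/copy tasks distribute identical keyrings everywhere
- name: create ceph mgr keyring(s) on the first monitor
  command: >
    ceph --cluster ceph auth get-or-create
    mgr.{{ hostvars[item]['ansible_hostname'] }}
    mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    -o /etc/ceph/ceph.mgr.{{ hostvars[item]['ansible_hostname'] }}.keyring
  with_items: "{{ (groups[mgr_group_name] | default([])) + groups[mon_group_name] }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
```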

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2019-02-20 11:19:44 +01:00
David Waiting 3930791cb7 ensure at least one osd is up
The existing task checks that the number of OSDs is equal to the number of up OSDs before continuing.

The problem is that if none of the OSDs have been discovered yet, the task will exit immediately and subsequent pool creation will fail (num_osds = 0, num_up_osds = 0).

This is related to Bugzilla 1578086.

In this change, we also check that at least one OSD is present. In our testing, this results in the task correctly waiting for all OSDs to come up before continuing.
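
A sketch of the adjusted wait condition (the json field paths follow the
`ceph -s -f json` output of that era; task name and retry counts are
illustrative):

```
- name: ensure at least one OSD exists and all discovered OSDs are up
  command: ceph --cluster ceph -s -f json
  register: ceph_status
  changed_when: false
  until:
    - (ceph_status.stdout | from_json).osdmap.osdmap.num_osds | int > 0
    - (ceph_status.stdout | from_json).osdmap.osdmap.num_osds ==
      (ceph_status.stdout | from_json).osdmap.osdmap.num_up_osds
  retries: 60
  delay: 10
```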

Signed-off-by: David Waiting <david_waiting@comcast.com>
2019-02-19 18:31:05 +00:00
Guillaume Abrioux c98fd0b9e0 facts: ensure ceph_uid is set when running rhel
when the host is running RHEL, let's enforce ceph_uid to 167.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-19 16:40:08 +01:00
Guillaume Abrioux e7034402a4 container: fix tmpfiles.d ceph files
- fix uid/gid in ceph tmpfiles
- move file to `/etc/tmpfiles.d`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-19 16:40:08 +01:00
Guillaume Abrioux d4e31b90a6 Revert "osd: container remove --pid=host"
This reverts commit bb2bbeb941.

Looks like when not passing `--pid=host` we are facing some issues when
deploying more than 2 OSDs in a containerized environment.

At the moment, we are still troubleshooting this issue but we prefer to
revert this commit so it doesn't block any PR in the CI.

As soon as we have a fix, we will push a new PR to remove `--pid=host`
(a revert of the revert...)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 500256cdab validate: fix ntp_daemon_type check in validate
is_atomic is defined in ceph-facts or very early in the main playbook.

In a non-containerized deployment, is_atomic is only set in ceph-facts,
which is played after ceph-validate.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 76303b457c container: create ceph-common.conf tmpfiles.d if it doesn't exist
Otherwise the task will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux b24202f6a4 facts: move two set_fact into ceph-facts
those two set_fact tasks should be moved in ceph-facts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-14 10:34:37 +00:00
Guillaume Abrioux 8c8ec63633 container: use tmpfiles.d to creates /run/ceph
instead of using `RuntimeDirectory` parameter in systemd unit files,
let's use a systemd `tmpfiles.d` to ensure `/run/ceph`.

Explanation:

`podman` doesn't create `/var/run/ceph` at container run time if it
doesn't already exist, while `docker` used to create it.
In the `switch_to_containers` scenario, `/run/ceph` gets created by a
tmpfiles.d systemd file; when switching to containers, the systemd unit
file complains because `/run/ceph` already exists.

The better fix would be to ensure `/usr/lib/tmpfiles.d/ceph-common.conf`
is removed and to rely only on the `RuntimeDirectory` systemd unit file
parameter, but we come from a non-containerized environment which is
already running: `/run/ceph` is already created, so when starting the
unit to start the container, systemd will still complain, and we can't
simply remove the directory if daemons are collocated.
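
A minimal sketch of the tmpfiles.d approach (file name and mode are
assumptions, not necessarily what ceph-ansible ships):

```
- name: ensure /run/ceph is (re)created at boot via tmpfiles.d
  copy:
    content: "d /run/ceph 0770 ceph ceph -\n"
    dest: /etc/tmpfiles.d/ceph-common.conf
    owner: root
    group: root
    mode: "0644"

- name: create /run/ceph immediately, without waiting for a reboot
  command: systemd-tmpfiles --create /etc/tmpfiles.d/ceph-common.conf
```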

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-13 09:42:27 +01:00
Guillaume Abrioux 7e0a70f7a8 switch_to_containers: do not try to redeploy monitors
`ceph-mon` tries to redeploy monitors because it assumes it was not yet
deployed since `mon_socket_stat` and `ceph_mon_container_stat` are
undefined (indeed, we stop the daemon before calling `ceph-mon` in the
switch_to_containers playbook).

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-13 09:42:27 +01:00
Rishabh Dave 05ea783eff fix mistake in task that aborts when ntpd is chosen on Atomic
Since it's already confusing whether ntp_daemon_type should be "ntp" or
"ntpd", fix the mistake in the title of the task that aborts if
ntp_daemon_type is set to "ntpd" and the OS being used is Atomic.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-02-12 09:09:27 +01:00
Rishabh Dave bdff3e48fd don't install NTPd on Atomic
Since Atomic doesn't allow any installations and NTPd is not present
on the Atomic image we are using, abort when ntp_daemon_type is set to ntpd.

https://github.com/ceph/ceph-ansible/issues/3572
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-02-11 12:02:30 +01:00
Sébastien Han c69c8c9ac1 mon: do not hardcode ceph uid
167 is the ceph uid on Red Hat based systems, thus trying to deploy a
monitor on Debian fails since the ceph user id on that system is 64045.
This commit uses the ceph_uid variable which contains the right uid
based on system/container detection.

Closes: https://github.com/ceph/ceph-ansible/issues/3589
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-02-11 09:09:40 +00:00
Leah Neukirchen 4fe7f37849 Fix uses of default(omit) with string concatenation
When {{omit}} is concatenated with another string, it expands to something
like __omit_place_holder__63eea0d96dd6ed867b95405e11d87dddf61f448d.
However, in these use-cases we need an empty string.
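
An illustration of the pattern (`key_prefix` is a made-up variable):

```
# before (broken): when key_prefix is undefined, omit leaks into the string
# path: "{{ key_prefix | default(omit) }}/keyring"
# after (fixed): concatenation needs an empty string, not omit
path: "{{ key_prefix | default('') }}/keyring"
```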

Regression introduced in d53f55e807.

Signed-off-by: Leah Neukirchen <leah.neukirchen@mayflower.de>
2019-02-08 16:18:15 +00:00
Patrick C. F. Ernzer c605ff6a68 setup_ntp: call handler to disable ntpd if chronyd used
The task "setup chronyd" called the handler "disable chronyd", which of
course defeats the purpose.

Changing the task to disable ntpd instead fixes the issue of chronyd
being disabled after it got enabled.

Fixes: #3582

Signed-off-by: Patrick C. F. Ernzer pcfe@redhat.com
2019-02-08 12:04:44 +01:00
Guillaume Abrioux d4b3c1d409 iscsi-gws: remove a leftover
remove leftover introduced by 9d590f4

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-08 01:11:42 +01:00
Guillaume Abrioux 9d590f4339 iscsi: fix permission denied error
Typical error:
```
fatal: [iscsi-gw0]: FAILED! =>
  msg: 'an error occurred while trying to read the file ''/home/guits/ceph-ansible/tests/functional/all_daemons/fetch/e5f4ab94-c099-4781-b592-dbd440a9d6f3/iscsi-gateway.key'': [Errno 13] Permission denied: b''/home/guits/ceph-ansible/tests/functional/all_daemons/fetch/e5f4ab94-c099-4781-b592-dbd440a9d6f3/iscsi-gateway.key'''
```

`become: True` is not needed on the following task:

`copy crt file(s) to gateway nodes`.

Since it's already set in the main playbook (site.yml/site-container.yml).

The thing is that the files get generated in the 'fetch_directory' as the
root user, because there is a 'delegate_to' and we run the playbook with
`become: True` (from the main playbook).

The idea here is to create the files as the ansible user so we can open
them later to copy them to the remote machine.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-07 17:57:22 +01:00
Sébastien Han bb2bbeb941 osd: container remove --pid=host
Let's try again with the Nautilus release.

Closes: https://github.com/ceph/ceph-ansible/issues/1297
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-02-07 12:13:51 +00:00
John Fulton cc0bf197e1 Fix CNI error when net=host is not used on OSD calls
Follow up fix that 410abd7 missed.

Related: ceph#3561

Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-05 22:49:01 +00:00
John Fulton 719a25b571 Create Ceph Initial Dirs earlier
Include tasks from create_ceph_initial_dirs earlier during
ceph config role.

Fixes: #3568
Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-05 18:38:05 +00:00
John Fulton dab3f6ee3f Fix CNI error when net=host is not used in some podman calls
With 'podman version 1.0.0' on RHEL8 beta the 'get ceph version' and
'ceph monitor mkfs' commands fail [1] with "error configuring network
namespace for container Missing CNI default network".

When net=host is added these errors are resolved. net=host is used in
many other calls (grep -R net=host | wc -l --> 38).

Fixes: #3561
Signed-off-by: John Fulton <fulton@redhat.com>
(cherry picked from commit 410abd7745)
2019-02-05 18:14:28 +01:00
Guillaume Abrioux 914d94cae8 set RuntimeDirectory in all systemd unit templates
/var/run/ceph resides in a non-persistent filesystem (tmpfs).
After a reboot, none of the daemons will start because this directory
will be missing.
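
The relevant directive, as an excerpt of what each unit template gains
(the rest of the unit is omitted):

```
[Service]
# systemd recreates /run/ceph on every daemon start, so losing the tmpfs
# content on reboot is no longer a problem
RuntimeDirectory=ceph
```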

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
Guillaume Abrioux 7ade032807 osd: bind mount /var/run/udev/
without this, the command `ceph-volume lvm list --format json` hangs and
takes a very long time to complete.
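
A hedged illustration of the bind mount (image name and the other flags
are examples only, not the generated unit file):

```
# exposing the host's udev database so ceph-volume inside the container
# can enumerate devices without hanging
podman run --rm --net=host --privileged \
  -v /var/run/udev/:/var/run/udev/:z \
  -v /dev:/dev \
  --entrypoint=ceph-volume \
  docker.io/ceph/daemon:latest \
  lvm list --format json
```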

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
Guillaume Abrioux fdca29f2a7 facts: set timeout_command fact in ceph-defaults
- also add `--foreground` which seems to fix some issue we are facing when
using timeout with `podman`.
- use this fact in the `is ceph running already?` task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
Guillaume Abrioux 16efdbc59b podman: support podman installation on rhel8
Add required changes to support podman on rhel8

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1667101

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-02-05 18:14:28 +01:00
John Fulton 37b5d1084a Make python print statements python3 compatible
The restart_osd_daemon.sh generated from the j2 template
contains a python call which uses 'print x' instead of
'print(x)'. Add the missing parentheses to make this call
compatible with both 2 and 3.

Also add parentheses to other python print calls found
in roles/ceph-client/defaults/main.yml and
infrastructure-playbooks/cluster-os-migration.yml.
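
For illustration (the variable is hypothetical):

```
num_osds = 3  # hypothetical value
# python2-only syntax, a SyntaxError under python3: print num_osds
print(num_osds)  # valid under both python 2 and 3
```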

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1671721
Signed-off-by: John Fulton <fulton@redhat.com>
2019-02-01 15:23:27 +00:00
Andrew Schoen 70a4368bc5 ceph-config: do not always assume containers when calculating num_osds
CEPH_CONTAINER_IMAGE should be None if containerized_deployment
is False.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-02-01 12:28:12 +01:00
Andrew Schoen 88eda479a9 ceph-facts: generate devices when osd_auto_discovery is true
This task used to live in ceph-osd, but we need it defined here so that
ceph-config can use it when trying to determine the number of osds.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2019-02-01 12:28:12 +01:00
John Fulton cba9b23363 Do not timeout podman/docker pull if timeout value is '0'
If the user sets "docker_pull_timeout: '0'" then do not use the
timeout command when running podman/docker pull. Also, use
"timeout -s KILL"; without KILL, podman on RHEL8 beta does
not timeout and deployment can hang.
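
A sketch of the resulting logic (fact and variable names approximate the
real ones):

```
- name: set timeout_command fact
  set_fact:
    timeout_command: >-
      {{ 'timeout --foreground -s KILL ' ~ docker_pull_timeout
         if docker_pull_timeout != '0' else '' }}

- name: pull the ceph container image
  command: "{{ timeout_command }} {{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
  changed_when: false
```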

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1670625
Signed-off-by: John Fulton <fulton@redhat.com>
2019-01-31 15:35:09 +00:00
Guillaume Abrioux fe1528adb4 config: support num_osds fact setting in containerized deployment
This part of the code must be supported in containerized deployment

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1664112

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-31 14:30:23 +00:00
Ramana Raja dfff89ce67 Install nfs-ganesha stable v2.7
nfs-ganesha v2.5 and 2.6 have hit EOL. Install nfs-ganesha v2.7
stable that is currently being maintained.

Signed-off-by: Ramana Raja <rraja@redhat.com>
2019-01-30 14:57:26 +01:00
Guillaume Abrioux 9f16501747 common: clean monitor initial keyring code
let's not be blocked by the fact we don't have the initial keyring in
`{{ fetch_directory }}`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-30 10:36:02 +01:00
Guillaume Abrioux f8aa8cdf60 facts: clean fsid generation code
clean some leftover and duplicate code.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-30 10:36:02 +01:00
Patrick Donnelly 080ce7dd72 do not set ceph_release to dummy
When we're deploying a dev branch.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2019-01-29 16:04:44 +00:00
Zack Cerza 82897c76fb Fix ceph.conf generation
877979c78 in #3486 broke ceph.conf generation entirely. Remove the stray
curly brace.

Signed-off-by: Zack Cerza <zack@redhat.com>
2019-01-25 09:19:24 +01:00
Guillaume Abrioux 0bfefdd5bc override ceph_release with ceph_stable_release
when `ceph_origin` is set to `'repository'` and `ceph_repository` to
`'community'`, we need to ensure `ceph_release` reflects
`ceph_stable_release`.

4a3f180f9d simply removed the override,
while it should only be run when the condition mentioned
above is satisfied.
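
i.e. something along these lines (a sketch, not the literal task):

```
- name: override ceph_release with ceph_stable_release
  set_fact:
    ceph_release: "{{ ceph_stable_release }}"
  when:
    - ceph_origin == 'repository'
    - ceph_repository == 'community'
```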

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-22 15:53:32 +01:00
Guillaume Abrioux 877979c787 config: make sure ceph_release is set for all client node
`ceph_release` is set in `ceph-container-common` but this role is
played only on the first client node, which means ceph-config will fail
on all client nodes except the first one.

This commit ensures ceph_release is set for all client nodes.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han 5babc1b4eb mon: enable msgr2
Enabling msgr2 style declaration for Nautilus and above. Prior releases
will keep the right syntax.
When upgrading from Mimic to Nautilus we must maintain something in the
form of:

mon_host = [v1:127.0.0.1:6789/0,v2:127.0.0.1:3300/0]

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han fc34fb1bd9 mon: ability to change mon listening port on container
You can now use 'ceph_mon_container_listen_port' to change the port the
monitor will listen on.
Setting the default to 3300 (assigned by IANA) since Nautilus has released the messenger2
transport protocol.
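
Usage, e.g. in group_vars:

```
ceph_mon_container_listen_port: 3300
```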

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
Sébastien Han 41a7cc878c Revert "mon: force peer addition"
This reverts commit ee08d1f89a which was
mostly to workaround a bug in ceph@master. Now, ceph@master is fixed so
reverting this. Thanks to https://github.com/ceph/ceph/pull/25900

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-22 13:45:38 +01:00
guihecheng 1ac94c048f rgw: add support for multiple rgw instances on a single host
With this, we could have multiple rgw instances on a single host
with a single run, and don't have to use rgw-standalone.yml, which does
not seem able to bind ports separately.
If you want to have multiple rgw instances, just change 'radosgw_instances'
to the number you want, which defaults to 1.
Not compatible with Multi-Site yet.
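
For example, to run two instances per rgw host:

```
radosgw_instances: 2
```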

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
2019-01-18 11:12:28 +01:00
Guillaume Abrioux 1bbdde272f config: remove code related to ceph release prior to luminous
This part of the code is not needed since ceph-ansible@master is
intended to deploy ceph@master only.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-14 14:41:13 +00:00
Guillaume Abrioux e9188cd202 ceph-default: rm useless condition
This condition is useless and it's also creating issues we don't see in
our CI. ceph_release is set by either ceph-common or ceph-docker-common
so let's keep it this way.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1645379

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2019-01-14 14:41:13 +00:00
Sébastien Han ee08d1f89a mon: force peer addition
Something changed with the introduction of msgr2 and we have to
add each node as a peer so the monitors can form a quorum. This might be
due to our CI environment, although adding this is completely harmless
and solves monitors not being able to form a quorum.

It seems that the initial monitor map didn't contain the right
information about the peers (addresses like 0.0.0.0/0r1 for each rank).

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-09 13:15:52 +01:00
Bruceforce 446f3c9fae nfs-ganesha: fixed nfs_ganesha_dev_apt_repo variable
The nfs_ganesha_dev_apt_repo variable was set incorrectly in the task
"fetch nfs-ganesha development repository".

Signed-off-by: Bruceforce <Bruceforce@users.noreply.github.com>
2019-01-05 16:04:05 +01:00
Sébastien Han 9abf9dba0b rgw: do not create mandatory directories
The packages are responsible for this, currently tracked in ceph https://github.com/ceph/ceph/pull/25503

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Sébastien Han 2af624dc5b rbd-mirror: copy bootstrap key after package install
If we don't copy the key after the package install, the directory
/var/lib/ceph/bootstrap-rbd-mirror will not exist and the copy will fail.

Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Sébastien Han b1dfe3f03e config: only pre-create ceph dirs on containers
We don't need to create the directories on non-containers; they are
created by the packages.

Closes: https://github.com/ceph/ceph-ansible/issues/3430
Signed-off-by: Sébastien Han <seb@redhat.com>
2019-01-04 13:57:40 +00:00
Rishabh Dave 6fa757d343 ceph-infra: disable unrequired NTP services
When one of the currently supported NTP services has been set up,
disable rest of the NTP services on Ceph nodes.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-04 14:01:05 +01:00
Rishabh Dave b03ab60742 ceph-infra: merge ntp_debian.yml and ntp_rpm.yml
Merge ntp_debian.yml and ntp_rpm.yml into one (the new file is called
setup_ntp.yml) since they are almost identical. Also avoid repetition
of the common setup step for ntpd and chronyd services.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-04 14:01:05 +01:00
Rishabh Dave a14bfa282a copy certificates as root user
Since the current user on the controller node, might not have the
permission to read the TLS certificate and related files, copy these
files to the Ceph nodes as root user.

Fixes: https://github.com/ceph/ceph-ansible/issues/3465
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2019-01-03 09:49:06 +01:00
Kai Wembacher 1dd26f76bf document missing support for non-containerized deployment
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-21 15:37:55 +00:00
jtudelag 23ad5fd9cb Clarify RGWs configuration when using ceph_conf_overrides.
To avoid future misconfigurations, clarify that the only valid
scheme is [client.rgw.*] instead of [client.radosgw.*].
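
For example (host name and frontend option are illustrative):

```
ceph_conf_overrides:
  client.rgw.rgw0:
    # valid scheme: [client.rgw.*]; [client.radosgw.*] would be ignored
    rgw_frontends: "civetweb port=8080"
```
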
2018-12-20 13:55:03 +00:00
Kai Wembacher a273ed7f60 add support for rocksdb and wal on the same partition in non-collocated
Signed-off-by: Kai Wembacher <kai@ktwe.de>
2018-12-20 14:19:46 +01:00
Sébastien Han f99a875b7f lint: Remote package tasks should have a retry
Make linter happy and add more robustness to remote tasks by retrying 3
times (the default) before failing.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-20 11:06:09 +01:00
Guillaume Abrioux d7e77012ef retry on packages and repositories failures
add register/until on all packaging related tasks to avoid non valid CI
failure.
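
The pattern applied to packaging tasks looks roughly like this:

```
- name: install ceph package
  package:
    name: ceph
    state: present
  register: result
  until: result is succeeded
  retries: 3
  delay: 10
```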

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-19 14:48:27 +00:00
Sébastien Han d9e7835086 mon: remove ceph aliases for containers
These aliases have led to several issues, making users believe that ceph
binaries are actually present on the host when running the command.
However, it wasn't explicit that the commands were only run inside a
container.
It has brought too much confusion, so we decided to remove them.

Closes: https://github.com/ceph/ceph-ansible/issues/3445
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-17 11:10:03 +01:00
Guillaume Abrioux 0eb56e36f8 introduce new role ceph-facts
sometimes we play the whole role `ceph-defaults` just to access the
default value of some variables. It means we play the `facts.yml` part
in this role while it's not desired. Splitting this role will speed up
the playbook.

Closes: #3282

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-12 11:18:01 +01:00
Guillaume Abrioux 1b8b5e0aac meta: set the right minimum ansible version required for galaxy
ceph-ansible@master requires the latest stable ansible version.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-11 09:59:25 +01:00
Sébastien Han 8e2585b6c7 ceph-defaults: do not use podman only on atomic
We want to test podman on f29 non-atomic; atomic is not a hard
requirement. However, if you want to get podman then you will have to
install it first before running the playbook.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-07 15:37:07 +01:00
Sébastien Han 51ca4f883b mgr: little refact
This commit removes the default module, so ceph-ansible does not enable
any manager module.
To enable a module you need to set a value to 'ceph_mgr_modules', you
can pass a list of modules like this:

ceph_mgr_modules:
  - status
  - dashboard

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-06 14:55:56 +00:00
Noah Watkins 3cf5fd2c3e start_osds: use list instead of keys (re-introduce)
the python3 fix merged by:

  https://github.com/ceph/ceph-ansible/pull/3346

was reintroduced a few days later by:

  82a6b5adec

and this patch fixes it again :)

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-12-05 23:25:35 +00:00
Guillaume Abrioux 1cff1f9806 revert infra: don't restart firewalld if unit is masked
If the firewalld unit is masked, setting `configure_firewall: false` is
enough.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-04 15:52:08 +00:00
Sébastien Han 896676ee80 fix json data type
JSON is a structure which is always typed as a string; before this we
were declaring a dict, which is not a valid JSON structure.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-04 12:34:54 +01:00
Sébastien Han 82a6b5adec osd: manage legacy ceph-disk non-container startup
The code is now able (again) to start osds that were configured with
ceph-disk in a non-container scenario.

Closes: https://github.com/ceph/ceph-ansible/issues/3388
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 452069cb3a)
2018-12-03 16:01:57 +01:00
Sébastien Han ec2d1f502d osd: re-introduce disk_list check
This commit
4cc1506303 (diff-51bbe3572e46e3b219ad726da44b64ebL13)
accidentally removed this check.

This is a must have for ceph-disk based containerized OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit 9b5a93e3a5)
2018-12-03 16:01:57 +01:00
Sébastien Han 4c51130198 osd: discover osd_objectstore on the fly
Applying and passing the OSD_BLUESTORE/FILESTORE on the fly is wrong for
existing clusters as their config will be changed.

Typically, if an OSD was prepared with ceph-disk on filestore and we
change the default objectstore to bluestore, the activation will fail.
The flag osd_objectstore should only be used for the preparation, not
the activation. The activation in this case detects the osd objectstore,
which prevents failures like the one described above.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:11:47 +00:00
Sébastien Han bef522627e ceph-osd: change jinja condition
If an existing cluster runs this config, and has ceph-disk OSD, the
`expose_partitions` won't be expected by jinja since it's inside the
'old' if. We need it as part of the osd_scenario != 'lvm' condition.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1640273
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:11:47 +00:00
Sébastien Han bf375327a0 ceph-mgr: refact role for containers
Now we simplify the invocation of start and remove some code and the
directory 'docker'.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 14fc5bad12 mon: do not serialized container bootstrap
This commit unifies the container and non-container code, which in the
meantime gives us the ability to deploy N mon containers at the same
time without having to serialize the deployment. This will drastically
reduce the time needed to bootstrap the cluster.
Note, this is only possible since Nautilus, because the monitors
bootstrap the initial keys on their own once they reach quorum. In the
Nautilus version of the ceph-container mon, we stopped generating the
keys 'manually' from inside the container; for more detail see: https://github.com/ceph/ceph-container/pull/1238

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 61082b3b32 mgr: only copy keys with dedicated mgr
When collocating mon and mgr, the mgr container will attempt to create
its own key since it has the admin key at its disposal. Also, at this
point there is nothing to fetch since the key is not created by the
mons; as mentioned above, the mgr creates the key on its own.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 1c760904b0 site: collocated mon and mgr by default
This will speed up the deployment and also deploy mon and mgr collocated
just as recommended.
This won't prevent you from adding more dedicated machines for mgr if
needed.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han ee1905ad31 mon: add missing include_tasks instead of import_tasks
This was probably a leftover/mistake so let's fix this and make the file
consistent.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 7cb1040440 config: add missing bootstrap mgr directory
This directory is needed so we can fetch the bootstrap mgr key in it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 8d4de44f5d mon: default ceph_health_raw to json
During the first iteration, the command won't return anything, or can
simply fail, and might not return a valid json structure. Ansible will
fail parsing it in the `from_json` filter, so let's default that variable
to an empty dictionary.
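
A sketch of the defensive parsing (task shape is illustrative):

```
- name: get ceph health
  command: ceph --cluster ceph health -f json
  register: ceph_health_raw
  failed_when: false

- name: parse health, tolerating an empty or failed first run
  set_fact:
    # default('{}', true) also covers an empty-string stdout
    ceph_health: "{{ ceph_health_raw.stdout | default('{}', true) | from_json }}"
```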

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han cfac79bec4 container-common: remove old check
This removes a bit of unnecessary code. The check was always wrong
because of the condition 'not ceph_current_status.get('rc', 1) == 0':
it will never match since `not` is used for a bool while we are checking
an rc.
Also, even if the check did work, it would be a major blocker after a
complete meltdown: if the whole platform is shut down then nothing will
be up, but the files will still be present, so this check is definitely
wrong.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 7ac73202f7 fw: update rules for mon/mgr collocation
Since we now deploy mgr on mon we need to open fw rules so the mgr can
reach out to the osds.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han 5b9d8f9737 mon: remove old ubuntu login status
We don't support Ubuntu Precise, so this feature does not exist
anymore.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Sébastien Han a0e5ef8516 mon: secure cluster on container
Add the ability to protect pools on containerized clusters.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-12-03 14:39:43 +01:00
Guillaume Abrioux ccc0c9c24c osd: remove a leftover
this file is never included in ceph-osd; it looks like a leftover, so let's remove it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-03 09:12:02 +01:00
Guillaume Abrioux 0187166926 osd: remove an incorrect information
This is false: `./defaults/main.yml` is not supposed to be modified
directly. group_vars and/or host_vars should always be preferred.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-12-03 08:11:35 +00:00
Guillaume Abrioux fead0813b4 remove kv store support
the next stable release will drop this feature.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-30 13:45:12 +00:00
Christian Berendt 1f73a9900f Add missing space before }}
This will fix the following yamllint warning:

Variables should have spaces after {{ and before }}
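
i.e.:

```
# flagged: msg: "{{item}}"
msg: "{{ item }}"
```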

Signed-off-by: Christian Berendt <berendt@betacloud-solutions.de>
2018-11-29 16:04:05 +01:00
Guillaume Abrioux a86c2b8526 config: write jinja comment with appropriate syntax
jinja comment should be written using the jinja syntax `{# ... #}`

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1654441

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-29 15:48:23 +01:00
Guillaume Abrioux e4869ac8bd validate: change default value for `radosgw_address`
change default value of `radosgw_address` to keep consistency with
`monitor_address`.
Moreover, `ceph-validate` checks if the value is '0.0.0.0' to determine
if it has to run `check_eth_rgw.yml`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-28 23:13:38 +01:00
Sébastien Han bc2daaeb71 ceph-osd fix batch with container binary
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han 80ba45793d fix template generation
Position the right condition on ceph_docker_version and activate it
when the container_binary is 'docker'.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han 00ebdeff78 container-common: remove leftover
ntp installation is managed by the ceph-infra role.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Guillaume Abrioux 3684d421e4 defaults: play set_radosgw_address.yml only on rgw nodes
There is no need to play these tasks on nodes that are not in the rgw
group.

Always playing this code makes `shrink_mon.yml` failing.

Typical error:

```
TASK [ceph-defaults : set_fact _radosgw_address to radosgw_interface - ipv4] ***
task path: /home/jenkins-build/build/workspace/ceph-ansible-prs-dev-shrink_mon/roles/ceph-defaults/tasks/set_radosgw_address.yml:21
Thursday 22 November 2018  12:34:51 +0000 (0:00:00.154)       0:00:12.371 *****
fatal: [localhost]: FAILED! => {}

MSG:

The task includes an option with an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute u'ansible_eth1'
```

Indeed, `radosgw_interface` is the network interface on rgw only. It is
expected that this same interface doesn't exist on `localhost`, so, when
running `shrink_mon.yml`, the role `ceph-defaults` is called in
`hosts: localhost` and causes the playbook to fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han 4f57e44f9c defaults: declare container_binary
Always declare container_binary and assign it a correct value.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han ac3e18e4c1 ceph-defaults: use podman on Fedora only
It seems Atomic 7.5 has podman already, however this is an old version
(0.4). The podman integration is targeting RHEL 8, so Fedora is
currently the closest to that.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han f203031f88 iscsi: expose /dev/log in the container
During its initialisation, both rbd-target-api and rbd-target-gw try to
open /dev/log for their syslog handler. If the device is not present,
the service fails to start. Thus, exposing /dev/log from the host in the
container solves that problem.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han f192bc92a2 ceph_key: use the right container runtime binary
Rework all the ceph_key invocation to use either docker or podman
binary.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han 6cca37b683 client: do not use a dummy container anymore
Since 84fcf4639140c390a7f1fcd790ba190503713f86 we now use the container
binary cli to create ceph keys instead of creating a container and
'docker execing' into it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han a96e910114 Add new container scenario
Test with podman instead of docker and also support for python 3 only.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 16:47:40 +00:00
Sébastien Han a9b337ba66 handler: show unit logs on error
This will tremendously help debugging daemons that fail on restart by
showing the systemd unit logs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-27 11:00:37 +00:00
Sébastien Han 997667a873 osd: expose udev into the container
In order to be able to retrieve udev information, we must expose its
socket. As per https://github.com/ceph/ceph/pull/25201, ceph-volume will
start consuming udev output.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-26 18:57:12 +00:00
Guillaume Abrioux ed42262b37 client: change default pool size
default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 18:23:07 +00:00
Guillaume Abrioux 6d1fe32998 defaults: change default size for openstack pools
default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 18:23:07 +00:00
Guillaume Abrioux fdc438dd0d defaults: change for default pool size for cephfs_pools
default pool size should match the real default that is defined in ceph
itself.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 18:23:07 +00:00
Guillaume Abrioux f1735e9bb0 defaults: add ceph related vars file
This is to add a granularity level.
We can have ceph specific variables here that users shouldn't have to
change.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 15:42:50 +00:00
Guillaume Abrioux 7774069d45 refact osd pool size customization
Add real default value for osd pool size customization.
Ceph itself has an `osd_pool_default_size` default value to `3`.

If users don't specify a pool size in various pools definition within
ceph-ansible, we should default to `3`.

By the way, this kind of condition isn't really clear:
```
when:
  - rbd_pool_size | default ("")
```

we should try to get the customized value, then fall back to what is in
`osd_pool_default_size` (whose default in turn points to
`ceph_osd_pool_default_size`, i.e. `3`) and compare it to
`ceph_osd_pool_default_size`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 15:42:50 +00:00
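Following that reasoning, the comparison could be sketched like this; the `pools` list and `cluster` variable are illustrative:

```
- name: set pool size only when it differs from the cluster default (sketch)
  command: >
    ceph --cluster {{ cluster }} osd pool set {{ item.name }}
    size {{ item.size | default(osd_pool_default_size) }}
  with_items: "{{ pools }}"
  when: item.size | default(osd_pool_default_size) | int != ceph_osd_pool_default_size | int
```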
Guillaume Abrioux d4c0960f04 mon: move `osd_pool_default_pg_num` in `ceph-defaults`
`osd_pool_default_pg_num` parameter is set in `ceph-mon`.
When using ceph-ansible with `--limit` on a specific group of nodes, it
will fail when trying to access this variable since it wouldn't be
defined.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1518696

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 15:42:50 +00:00
Guillaume Abrioux 68dde424f6 config: convert _osd_memory_target to int
ceph.conf doesn't accept float value.

Typical error seen:
```
$ sudo ceph daemon osd.2 config get osd_memory_target
Can't get admin socket path: unable to get conf option admin_socket for osd.2:
parse error setting 'osd_memory_target' to '7823740108,8' (strict_si_cast:
unit prefix not recognized)
```

This commit ensures the value inserted in ceph.conf will be an integer.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-21 14:33:27 +00:00
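A sketch of the cast; the 0.5 factor and `num_osds` are illustrative stand-ins:

```
- name: force the computed byte value to an integer (sketch)
  set_fact:
    _osd_memory_target: "{{ (ansible_memtotal_mb * 1048576 * 0.5 / num_osds) | int }}"
```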
Boris Ranto dfab42a21f defaults/facts: Use list instead of keys
It is safer to use the list filter than the keys() method since the keys
method does have some interoperability issues between python2 and
python3 based ansible/jinja.

Signed-off-by: Boris Ranto <branto@redhat.com>
2018-11-20 18:48:22 +01:00
Boris Ranto c2b0cbd699 start_osds: Use list instead of keys
If you use python3 based ansible then keys() returns a dict_keys object,
not a list of keys. This breaks the installation on such a system. Using
the list filter provides a more robust solution that should work on both
python2 and python3 based ansible. You can find some more information
about the issue, here:

https://github.com/ansible/ansible/issues/19514

Signed-off-by: Boris Ranto <branto@redhat.com>
2018-11-20 18:48:22 +01:00
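The pattern both commits above converge on, as a sketch:

```
- name: iterate over dict keys in a python2/python3-safe way (sketch)
  debug:
    msg: "{{ item }}"
  with_items: "{{ ansible_devices | list }}"   # keys() would return dict_keys on python3
```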
Neha Ojha 10538e9a23 osd_memory_target: standardize unit and fix calculation
* The default value of osd_memory_target used by ceph is 4294967296 bytes,
so use the same as ceph-ansible default.

* Convert ansible_memtotal_mb to bytes to calculate osd_memory_target

Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-11-19 09:54:33 +00:00
Sébastien Han 976b66842f ceph.ceph-container-common remove symlink
This error was introduced in the recent refactor of ceph-docker-common
in https://github.com/ceph/ceph-ansible/pull/3251. However, the Ansible
galaxy linter is not happy about it and fails importing the role.
Removing this since it's not used anymore.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-18 21:54:46 +01:00
Guillaume Abrioux 393ab94728 client: fix a typo in create_users_keys.yml
cd1e4ee024 introduced a typo.
This commit fixes it.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-17 17:31:29 +00:00
Guillaume Abrioux 63b9835cbb infra: don't restart firewalld if unit is masked
if firewalld.service systemd unit is masked, the handler will fail when
trying to restart it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1650281

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-16 09:30:17 +00:00
Noah Watkins 64dee9be0c Remove outdated documentation
Fixes BZ
https://bugzilla.redhat.com/show_bug.cgi?id=1640525

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-11-15 22:26:19 +00:00
Guillaume Abrioux f7fcc012e9 osd: commonize start_osd code
since `ceph-volume` introduction, there is no need to split those tasks.

Let's refactor this part of the code so it's clearer.

By the way, this was breaking rolling_update.yml when `openstack_config:
true`, because nothing ensured OSDs were started by the ceph-osd role (in
`openstack_config.yml` there is a check ensuring all OSDs are UP, which
was obviously failing), leaving the OSDs on the last OSD node not
started anyway.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux c783bc70da docker-common: rename role
rename `ceph-docker-common` role to `ceph-container-common`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux 8069c25ba5 docker-common: remove system_checks.yml
This check is now part of `ceph-validate`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux 1144df2852 docker-common: remove check_mandatory_vars.yml
this is part of `ceph-validate` role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux 6947f9c3ea docker-common: remove dirs_permissions.yml
this is already done in `ceph-config` role.
Let's remove this duplicated task.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux cd0195c562 docker-common: remove legacy tasks for ntp configuration
Those tasks aren't needed in docker-common since the introduction of
`ceph-infra` role. They are duplicated tasks.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux 2bcaf9088b docker-common: remove duplicate running cluster check
this is already done in ceph-defaults, there is no need to have this
check in ceph-docker-common.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Guillaume Abrioux 2b245b923e docker-common: remove duplicate set_fact (monitor_name)
this fact is already set in ceph-defaults, there is no need to set it
again in ceph-docker-common

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-11-12 10:51:48 +01:00
Rishabh Dave d72340abbe pass the list of packages to package management modules
Instead of looping over a list of packages or repeating the task
separately for different packages, pass the list of packages to the
task performing package management.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-11-09 12:59:08 +00:00
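As a sketch, with a hypothetical `ceph_pkgs` list variable:

```
- name: install all required packages in a single transaction (sketch)
  package:
    name: "{{ ceph_pkgs }}"   # the whole list in one call, no loop
    state: present
```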
Sébastien Han e552026418 rbd-mirror: use the new rbd-mirror key
Instead of using the old rbd key, let's use the new rbd-mirror key to
bootstrap the rbd-mirror daemon.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-09 12:45:52 +01:00
Sébastien Han 53910de43b ceph_key: add fetch_initial_keys capability
This is needed for Nautilus since the ceph-create-keys script goes away.
(https://github.com/ceph/ceph/pull/21305)
Now, if called with 'state: fetch_initial_keys', the module will look up
keys generated by the monitor and write them down on the filesystem in
the right locations (/etc/ceph and /var/lib/ceph/bootstrap*).
This is not applicable to container since keys are generated by the
container only.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-09 12:45:52 +01:00
Mike Christie 5ba7d1671e igw: open iscsi target port
Open the port the iscsi target uses for iscsi traffic.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-11-09 10:04:44 +01:00
Mike Christie e2f1f81de4 igw: use api_port variable for firewall port setting
Don't hard code api port because it might be overridden by the user.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-11-09 10:04:44 +01:00
Mike Christie a4ff52842c igw: fix firewall iscsi_group_name check
The firewall setup for igw is not applied because iscsi_group_name does
not exist. It should be iscsi_gw_group_name.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-11-09 10:04:44 +01:00
Mike Christie a10853c5f8 igw: Fix default api port
The default igw api port is 5000 in the manual setup docs and the
ceph-iscsi-config package, so this brings ansible in sync with them.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-11-09 10:04:44 +01:00
Jason Dillaman 0aff0e9ede igw: add support for IPv6
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
2018-11-08 17:08:36 +01:00
Sébastien Han b82995df58 Revert "ceph_key: add fetch_initial_keys capability"
This reverts commit 17883e09ba.
2018-11-08 13:34:47 +00:00
Sébastien Han 26cd62b4e6 Revert "rbd-mirror: use the new rbd-mirror key"
This reverts commit cdee9f0119.
2018-11-08 13:34:47 +00:00
Sébastien Han cdee9f0119 rbd-mirror: use the new rbd-mirror key
Instead of using the old rbd key, let's use the new rbd-mirror key to
bootstrap the rbd-mirror daemon.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 13:32:18 +00:00
Sébastien Han 17883e09ba ceph_key: add fetch_initial_keys capability
This is needed for Nautilus since the ceph-create-keys script goes away.
(https://github.com/ceph/ceph/pull/21305)
Now, if called with 'state: fetch_initial_keys', the module will look up
keys generated by the monitor and write them down on the filesystem in
the right locations (/etc/ceph and /var/lib/ceph/bootstrap*).
This is not applicable to container since keys are generated by the
container only.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 13:32:18 +00:00
Sébastien Han 72cae542da lint: Don't compare to empty string
Use `when: var` rather than `when: var != ""` (or conversely `when: not var` rather than `when: var == ""`).

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han 87e90a0893 lint: Don't compare to literal True/False
Use `when: var` rather than `when: var == True`

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
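The style these two lint rules ask for, sketched with a hypothetical `my_flag` variable:

```
- name: act on a boolean flag (sketch)
  debug:
    msg: "flag is set"
  when: my_flag   # not 'my_flag == True' and not 'my_flag != ""'
```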
Sébastien Han d749dc9f36 remove ceph-common-coreos role
This role is not maintained and not tested so removing it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han f9ddc27cd5 lint: meta add company info
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han 094ae8baf1 lint: do not use local_action
Use delegate_to: localhost instead.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han aaceeff14e lint: use galaxy_tags instead of categories
Update the meta

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han 037bab2922 lint: line length should not exceed 160 chars
Line was too long

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han 2cd0d2f1e6 lint: yaml space before and after {{ }}
Fix tasks using variables that did not have spaces before and after `{{ }}`.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-08 10:22:02 +00:00
Sébastien Han b7a791e902 rbd-mirror: enable ceph-rbd-mirror.target
Without this the daemon will never start after reboot.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-11-06 09:06:34 +00:00
Alfredo Deza 21f9126fc4 ceph-common: update_cache whenever a new deb repo is added
The use of a handler meant that the cache would be updated at the very
end of the play, which doesn't work when adding a development repo and
trying to install right after it. This mostly reverts
53cdddf886 without an actual `git revert`
because that caused other conflicts.

Signed-off-by: Alfredo Deza <adeza@redhat.com>
2018-11-05 13:31:54 +01:00
Rishabh Dave ed1d53b2f8 remove task named "include ntp debian setup tasks"
The task was included by mistake while resolving a merge conflict for
commit 8edbda96df. The task was removed in
commit b3a71eeb08.

Fixes: https://github.com/ceph/ceph-ansible/issues/3292
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-11-01 18:32:06 +01:00
Sébastien Han ca7ed7dd81 galaxy roles: polish metadata
Update the meta with the relevant support information, such as:

* ansible version: min 2.4
* supported distro (tested on): centos 7

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 17:48:58 +01:00
Sébastien Han e3bdde5b6d remove roles symlinks
These 2 symlinks are not needed anymore as the whole ceph-ansible repo
is now in ansible galaxy.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 17:48:58 +01:00
Sébastien Han 7ce177f686 roles: add missing meta on roles
In order to have roles in the Ansible galaxy, we must have a meta
directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 17:48:58 +01:00
Sébastien Han 5be0b7adae lint yaml: remove blank lines
Fixes: [error] too many blank lines (1 > 0)

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 15:04:40 +00:00
Sébastien Han 1e5ee54d9d iscsi: rm README entry
The file does not bring much to the repo.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 15:04:40 +00:00
Sébastien Han a882ad7ade lint: use command instead of shell
Use command when the task does not have any pipes or wildcards.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Sébastien Han cfd60411bc lint: skip the linter
Do not run the linter for these 3:

* we use latest for the pip docker-py package
* for ssl keys this is a false positive: since the initial command is a
  'shell', it'll always report changed
* for keystone, we must use shell since the with_items contains pipes

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Sébastien Han 55b071a114 lint: remove trailling spaces
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Sébastien Han adaa914d8e ceph-common: use yum install instead of shell
Use yum module to list repos and then activate them if needed.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Sébastien Han 53cdddf886 ceph-common: use a handler
We need a handler because the task changed; the old implementation was
basically mimicking a handler.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Sébastien Han 53972ee672 lint: add changed_when to command
Calls to command should set changed_when: false, otherwise each time the
task runs it will show as 'changed', which is misleading.
Commands should not change things if nothing needs doing.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
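A sketch of that pattern for a read-only command:

```
- name: query cluster health without reporting a change (sketch)
  command: "ceph --cluster {{ cluster }} health"
  register: ceph_health
  changed_when: false   # a read-only query never changes anything
```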
Sébastien Han 7dab5d4ac2 lint: name tasks
Tasks must have names.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-31 14:18:36 +01:00
Guillaume Abrioux 74ef7769fb mon: use `_current_monitor_address` in systemd unit file
Let's avoid a jinja loop and use `_current_monitor_address` to get the
monitor address.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 14:16:10 +01:00
Guillaume Abrioux 073131d8a6 mon: refact docker/main.yml
since the jinja logic has been moved into an ansible task, we can
simplify this part of the code and use `_current_monitor_address`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 14:16:10 +01:00
Guillaume Abrioux 404712ef01 defaults: add a fact '_current_monitor_address'
So we don't have to loop over `_monitor_addresses` when we need the
monitor address of the current node being played.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 14:16:10 +01:00
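A sketch of such a fact, assuming `_monitor_addresses` is a list of `{name, addr}` entries:

```
- name: set_fact _current_monitor_address (sketch)
  set_fact:
    _current_monitor_address: "{{ item.addr }}"
  with_items: "{{ _monitor_addresses }}"
  when: inventory_hostname == item.name
```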
Guillaume Abrioux a2b2028212 config: remove complex jinja logic in ceph.conf.j2
using consecutive set_fact tasks in the playbook instead of complex
jinja syntax makes ceph.conf.j2 more readable.
Besides, jinja can be painful to debug.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 14:16:10 +01:00
Guillaume Abrioux f7d4651186 playbook: remove jinja syntax in when statement
this syntax is deprecated

Closes: #3281

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-31 13:45:41 +01:00
Rishabh Dave e2be5cf58c Revert "DNM: use ansible 2.7 for testing this PR"
This reverts commit 162010d90e.
2018-10-31 11:54:57 +01:00
Rishabh Dave 162010d90e DNM: use ansible 2.7 for testing this PR
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-10-31 09:38:59 +01:00
Rishabh Dave 8edbda96df use blocks directives to group tasks
Using block directives simplifies the playbooks and makes them more
readable.

Fixes: https://github.com/ceph/ceph-ansible/issues/2835
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-10-31 09:37:43 +01:00
Guillaume Abrioux 34275ac847 rgw: move multisite default variables in ceph-defaults
Move all rgw multisite variables in ceph-defaults so ceph-validate can
go through them.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 17:41:05 +01:00
Guillaume Abrioux 17ffb792e0 tests: fail if ansible version is not 2.7
The latest ansible version at the moment is 2.7.

We should explicitly require 2.7 on the master branch only.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 17:07:05 +01:00
Guillaume Abrioux 62c314e2ba tests: test master against ansible 2.7
Let's test ceph-ansible master against ansible 2.7 to catch early any
potential issue with this ansible version.

Closes: #3148

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 17:07:05 +01:00
Sébastien Han 8843f48222 iscsi more linting
Make flake8 happy

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-30 13:47:37 +00:00
Sébastien Han fd72f1dd0d iscsi module linting
Fix linter issues on iscsi modules.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-30 14:41:36 +01:00
Sébastien Han d209fc9d02 lint yaml
Fix [error] too many blank lines (1 > 0) (empty-lines)

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-30 14:41:36 +01:00
Guillaume Abrioux d8d3e55006 remove restapi role
As of `mimic`, restapi is no longer available, having been superseded by the manager daemon.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:19:13 +01:00
Guillaume Abrioux 547e90f281 rgw: move multisite related tasks after docker/main.yml
We must play this task after the container has started otherwise
rgw_multisite tasks will fail.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:00:28 +01:00
Guillaume Abrioux 710e11668d rgw: add rgw_multisite for containerized deployments
Run commands inside containers for containerized deployments.
(At the moment, all commands are run on the host only.)

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:00:28 +01:00
Guillaume Abrioux fe88c89c9c validate: remove check on rgw_multisite_endpoint_addr definition
since `rgw_multisite_endpoint_addr` has a default value to
`{{ ansible_fqdn }}`, it shouldn't be mandatory to set this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:00:28 +01:00
Ali Maredia 59e6d04f9b rgw: add ceph-validate tasks for multisite, other fixes
- updated README-MULTISITE
- re-added destroy.yml
- added tasks in ceph-validate to make sure the
rgw multisite vars are set

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2018-10-30 14:00:28 +01:00
Guillaume Abrioux 77d5d128c3 rgw: add a dedicated variable for multisite endpoint
We should give users the possibility to set the IP they want as
multisite endpoint, setting the default value to `{{ ansible_fqdn }}` to
not force them to set this variable.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-30 14:00:28 +01:00
Ali Maredia 474f151450 rgw: update rgw multisite tasks
- remove destroy tasks
- cleanup conditionals and syntax
- remove unnecessary realm pulls
- enable multisite to be tested in automated
testing infra
- add multisite related vars to main.yml and
group_vars
- update README-MULTISITE
- ensure all `radosgw-admin` commands are being run
on a mon

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2018-10-30 14:00:28 +01:00
Guillaume Abrioux 748342f5b6 roles: fix *_docker_memory_limit default value
append 'm' suffix to specify the unit size used in all
`*_docker_memory_limit`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-29 14:59:09 +01:00
Neha Ojha b7e4d4eb84 roles: do not limit docker_memory_limit for various daemons
Since we do not have enough data to put valid upper bounds for the memory
usage of these daemons, do not put artificial limits by default. This will
help us avoid failures like OOM kills due to low default values.

Whenever required, these limits can be manually enforced by the user.

More details in
https://bugzilla.redhat.com/show_bug.cgi?id=1638148

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1638148
Signed-off-by: Neha Ojha <nojha@redhat.com>
2018-10-29 14:59:09 +01:00
Sébastien Han 0e63f0f3c9 Merge branch 'master' into wip-rm-calamari
2018-10-29 14:50:37 +01:00
Sébastien Han 5ab90b358c nfs: do not create the nfs user if already present
Check if the user exists and skip its creation if true.

Closes: https://github.com/ceph/ceph-ansible/issues/3254
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-26 16:24:38 +00:00
Guillaume Abrioux 4d698ce831 ceph-infra: reload firewall after rules are added
We ensure that firewalld is installed and running before adding any
rule, so it makes no sense anymore not to reload firewalld once the
rules are added.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-23 09:53:09 +00:00
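With the firewalld module the rule can also be applied immediately rather than waiting for a reload; a sketch for the monitor port:

```
- name: open the ceph monitor port and apply the rule right away (sketch)
  firewalld:
    port: 6789/tcp
    zone: public
    permanent: true
    immediate: true   # applied at once, no separate reload required
    state: enabled
```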
Rishabh Dave ee2d52d33d allow custom pool size
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1596339
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-10-22 16:00:21 +02:00
Guillaume Abrioux 48cfc60722 defaults: set default `configure_firewall` to `True`
Let's configure firewalld by default.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526400

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-19 15:12:45 +02:00
Guillaume Abrioux 8fa437b7bd iscsi: fix networking issue on containerized env
The iscsi-gw containers can't reach monitors without `--net=host`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-19 00:12:43 +00:00
Guillaume Abrioux e77c36ad17 infra: move restart fw handler in ceph-infra role
Move the handler to restart firewall in ceph-infra role.

Closes: #3243

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-19 00:12:43 +00:00
Sébastien Han fbd878c8d5 infra: rename osd-configure to add-osd and improve it
The playbook has various improvements:

* run ceph-validate role before doing anything
* run ceph-fetch-keys only on the first monitor of the inventory list
* set noup flag so PGs get distributed once all the new OSDs have been
added to the cluster and unset it when they are up and running

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-17 11:26:11 +00:00
Sébastien Han 680574ed4c ceph-fetch-keys: refact
This commit simplifies the usage of the ceph-fetch-keys role. The role
now has a nicer way to find the various ceph keys and fetch them to the
ansible server.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1624962
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-17 11:26:11 +00:00
Andy McCrae 3e0fa3bc18 Add ability to use a different client container
Currently a throw-away container is built to run ceph client
commands to setup users, pools & auth keys. This utilises
the same base ceph container which has all the ceph services
inside it.

This PR allows the use of a separate container if the deployer
wishes - but defaults to use the same full ceph container.

This can be used for different architectures or distributions
which may support the Ceph client but not the Ceph server,
and allows the deployer to build and specify a separate client
container if need be.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-10-16 23:28:35 +00:00
Guillaume Abrioux f0b2d82695 infra: fix wrong condition on firewalld start task
A non-skipped task won't have the `skipped` attribute, so the `start
firewalld` task would complain about that.
Indeed, the `skipped` and `rc` attributes won't exist, since the first
task, `check firewalld installation on redhat or suse`, isn't skipped
in case of non-containerized deployment.

Fixes: #3236
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-16 16:24:42 +00:00
Christian Berendt ac37a0d0cd ceph-defaults: set ceph_stable_openstack_release_uca to queens
Liberty is no longer available in the UCA. The last available release there
is currently Queens.

Signed-off-by: Christian Berendt <berendt@betacloud-solutions.de>
2018-10-16 12:56:32 +00:00
Guillaume Abrioux b953965399 handler: remove some leftover in restart_*_daemon.sh.j2
Remove some legacy code in those restart scripts.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-16 11:53:55 +00:00
Nan Li 55334baa0c docker-ce is used in aarch64 instead of docker engine
Signed-off-by: Nan Li <herbert.nan@linaro.org>
2018-10-15 18:38:40 +02:00
Guillaume Abrioux 60bc1e38db handler: fix osd containers handler
`ceph_osd_container_stat` might not be set on the other osd nodes.
We must ensure we are on the last node before trying to evaluate
`ceph_osd_container_stat`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-15 10:30:40 +02:00
Guillaume Abrioux 40b7747af7 remove jewel support
As of now, we should no longer support Jewel in ceph-ansible.
The latest ceph-ansible release supporting Jewel is `stable-3.1`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-12 23:38:17 +00:00
Sébastien Han 31a0438cb2 ceph_volume: refactor
This commit does a couple of things:

* Avoid code duplication
* Clarify the code
* add more unit tests
* add myself to the authors of the module

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han bfe689094e osd: do not run when lvm scenario
This task was created for ceph-disk based deployments, so it's not
needed when OSDs are prepared with ceph-volume.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 2bea8d8ecf handler: add support for ceph-volume containerized restart
The restart script wasn't working with the recent addition of
containerized ceph-volume, where OSDs now carry the OSD id in the
container name.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 790f52f934 ceph-handler: change osd container check
Now that the container is named ceph-osd@<id> looking for something that
contains a host is not necessary. This is also backward compatible as it
will continue to match container names with hostname in them.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 0580328340 validate: add warning for ceph-disk
ceph-disk will be removed in 3.3 and we encourage to start using
ceph-volume as of 3.2.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han a948677de1 osd: ceph-volume activate, just pass the OSD_ID
We don't need to pass the device and discover the OSD ID. We have a
task that gathers all the OSD ID present on that machine, so we simply
re-use them and activate them. This also handles the situation when you
have multiple OSDs running on the same device.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 5f35910ee1 osd: change unit template for ceph-volume container
We don't need to pass the hostname in the container name; we can keep
it simple and just call it ceph-osd-$id.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han ece9e9812e osd: do not use expose_partitions on lvm
expose_partitions is only needed on ceph-disk OSDs so we don't need to
activate this code when running lvm prepared OSDs.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han e39fc4f6ce ceph_volume: add container support for batch command
The batch option was added recently, and while rebasing this patch it
became necessary to implement it. So now the batch option also works in
containerized environments.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1630977
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han 3ddcc9af16 ceph_volume: try to get rid of the dummy container
If we run on a containerized deployment we pass an env variable which
contains the container image.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Sébastien Han aa2c1b27e3 ceph-osd: ceph-volume container support
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-10-10 16:08:41 -04:00
Guillaume Abrioux 678e155328 infra: fix a typo in filename
configure_firewall is missing its dot.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 12:39:04 -04:00
Guillaume Abrioux f666902d52 infra: add tags for each subcomponent
This way we can skip one specific component if needed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux f8a7ffb085 infra: add firewall configuration for containerized deployment
firewalld is available on atomic so there is no reason to not apply
firewall configuration.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux 0fb8812e47 infra: update firewall rules, add cluster_network for osds
At the moment, all daemons accept connections from 0.0.0.0.
We should at least restrict to public_network and add
cluster_network for OSDs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Guillaume Abrioux b3a71eeb08 ceph-infra: add new role ceph-infra
this role manages ceph infra services such as ntp, firewall, ...

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-10 15:44:33 +00:00
Noah Watkins 8dcc8d1434 Stringify ceph_docker_image_tag
This could be a numeric input, but is treated like a string leading to
runtime errors.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1635823

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-10-10 04:26:33 +00:00
Noah Watkins 306e308f13 Avoid using tests as filter
Fixes the deprecation warning:

  [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of
  using `result|search` use `result is search`.

Signed-off-by: Noah Watkins <nwatkins@redhat.com>
2018-10-10 04:26:33 +00:00
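The replacement pattern, sketched with a hypothetical `result` register:

```
- name: use 'search' as a test, not a filter (sketch)
  debug:
    msg: "pattern found"
  when: result.stdout is search('ceph')   # formerly: result.stdout | search('ceph')
```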
Andrew Schoen ada03d064d ceph-validate: remove versions checks for bluestore and lvm scenario
These checks will never pass unless ceph_stable_release is passed and
ceph-defaults is run before ceph-validate. Additionally, we don't want
to support deploying jewel upstream at ceph-ansible master.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1637537

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 13:30:42 -04:00
Andrew Schoen 436dc8c5e1 ceph-config: allow the batch --report to fail when getting the OSD num
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 40f82319dd ceph-config: use 'lvm list' to find num_osds for an existing cluster
This makes finding num_osds idempotent for clusters that were deployed
using 'lvm batch'.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 8afef3d0de ceph-config: use the ceph_volume module to get num_osds for lvm batch
This gives us an accurate number of how many osds will be created.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen c453ea25c0 ceph-osd: use journal_size and block_db_size for lvm batch
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Andrew Schoen 71ce539da5 ceph-defaults: add the block_db_size option
This is used in the lvm osd scenario for the 'lvm batch' subcommand
of ceph-volume.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-10-09 10:09:50 -04:00
Guillaume Abrioux 3e2cdcc735 common: remove check_firewall code
The check_firewall code isn't working as expected and might break deployments.
This part of the code will be reworked soon.

Let's focus on configure_firewall code for now.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1541840

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-06 14:32:17 +02:00
Guillaume Abrioux be31c15ccd follow up on b5d2ea2
Add some missing statements

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-06 14:32:17 +02:00
Rishabh Dave b5d2ea269f don't use "static" field while including tasks
Instead used "import_tasks" and "include_tasks" to tell whether tasks
must be included statically or dynamically.

Fixes: https://github.com/ceph/ceph-ansible/issues/2998
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-10-04 07:44:28 +00:00
Guillaume Abrioux 6130bc841d config: look up for monitor_address_block in hostvars
`monitor_address_block` should be read from hostvars[host] instead of
current node being played.

eg:

Let's assume we have:

```
[mons]
ceph-mon0 monitor_address=192.168.1.10
ceph-mon1 monitor_interface=eth1
ceph-mon2 monitor_address_block=192.168.1.0/24
```

the ceph.conf generation task will end up with:

```
fatal: [ceph-mon0]: FAILED! => {}

MSG:

'ansible.vars.hostvars.HostVarsVars object' has no attribute u'ansible_interface'
```

the reason is that it assumes `monitor_address_block` isn't defined even on
ceph-mon2, because it looks for `monitor_address_block` instead of
`hostvars[host]['monitor_address_block']`; it therefore falls through to the default branch of the condition:

```
    {%- else -%}
      {% set interface = 'ansible_' + (monitor_interface | replace('-', '_')) %}
      {% if ip_version == 'ipv4' -%}
        {{ hostvars[host][interface][ip_version]['address'] }}
      {%- elif ip_version == 'ipv6' -%}
        [{{ hostvars[host][interface][ip_version][0]['address'] }}]
      {%- endif %}
    {%- endif %}
```

`monitor_interface` keeps its default value `'interface'`, so the `interface`
variable is built as 'ansible_' + 'interface'. This makes ansible throw a
confusing message about `'ansible_interface'`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1635303

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-10-02 22:41:05 +02:00
Benjamin Cherian 85071e6e53 Add support for different NTP daemons
Allow the user to choose between timesyncd, chronyd and ntpd.
Installation defaults to timesyncd since it is distributed as part of
the systemd installation for most distros.
Added a note indicating the NTP daemon type is not used for
containerized deployments.

Fixes issue #3086 on Github

Signed-off-by: Benjamin Cherian <benjamin_cherian@amat.com>
2018-10-02 13:18:08 +00:00
Mike Christie eddb95941b igw: valid client CHAP settings.
The linux kernel target layer, LIO, does not support mixing ACLs that
have CHAP enabled and disabled under the same tpg for an iscsi target.
This patch adds a check and fails if this type of setup is detected.

This fixes Red Hat BZ:
https://bugzilla.redhat.com/show_bug.cgi?id=1615088

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-10-01 18:23:03 +02:00
Sébastien Han 4db6a213f7 add ceph-handler role
The role contains all the handlers for Ceph services. We decided to
leave ceph-defaults role with variables and a few facts only. This is
useful when organizing the site.yml files and also adding the known
variables to infrastructure-playbooks.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-28 15:15:49 +00:00
Sébastien Han 145aef9fed defaults: do not disable THP on bluestore
As per #1013 it appears that BlueStore will soon use THP to lower TLB
misses; also, disabling THP hasn't demonstrated any gains so far.

Closes: https://github.com/ceph/ceph-ansible/issues/1013
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-27 21:23:49 +00:00
Sébastien Han dc3319c3c4 default: use bluestore as default object store
All tooling in Ceph defaults to the bluestore objectstore for provisioning OSDs, so there is no good reason for ceph-ansible to keep defaulting to filestore.

Closes: https://github.com/ceph/ceph-ansible/issues/3149
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1633508
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-09-27 21:23:49 +00:00
Rishabh Dave 380168dadc don't use "include" to include tasks
Use "import_tasks" or "include_tasks" instead.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2018-09-27 17:53:40 +02:00
Giulio Fidente 6126210e0e Fix version check in ceph.conf template
We need to look for ceph_release when comparing with release names,
not ceph_version.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1631789
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
2018-09-24 13:08:27 +02:00
Matthew Vernon 806461ac6e restart_osd_daemon.sh.j2 - use `+` rather than `{1,}` in regex
`+` is more idiomatic for "one or more" in a regex than `{1,}`; the
latter was introduced in a previous fix for an incorrect `{1,2}`
restriction.

Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 10:33:46 +00:00
Matthew Vernon 04f4991648 restart_osd_daemon.sh.j2 - consider active+clean+* pgs as OK
After restarting each OSD, restart_osd_daemon.sh checks that the
cluster is in a good state before moving on to the next one. One of
the checks it does is that the number of pgs in the state
"active+clean" is equal to the total number of pgs in the cluster.

On large clusters (e.g. we have 173,696 pgs), it is likely that at
least one pg will be scrubbing and/or deep-scrubbing at any one
time. These pgs are in state "active+clean+scrubbing" or
"active+clean+scrubbing+deep", so the script was erroneously not
including them in the "good" count. Similar concerns apply to
"active+clean+snaptrim" and "active+clean+snaptrim_wait".

Fix this by considering as good any pg whose state contains
active+clean. Do this as an integer comparison to num_pgs in pgmap.

(could this be backported to at least stable-3.0 please?)

Closes: #2008
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 10:33:46 +00:00
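A sketch of the looser check in task form, assuming the usual `ceph -s -f json` pgmap layout:

```
- name: fetch cluster status (sketch)
  command: "ceph --cluster {{ cluster }} -s -f json"
  register: ceph_status
  changed_when: false

- name: treat any state containing active+clean as good (sketch)
  assert:
    that:
      - >-
        (ceph_status.stdout | from_json).pgmap.pgs_by_state
        | selectattr('state_name', 'search', 'active\+clean')
        | map(attribute='count') | sum
        == (ceph_status.stdout | from_json).pgmap.num_pgs
```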
Matthew Vernon aa97ecf048 restart_osd_daemon.sh.j2 - Reset RETRIES between calls of check_pgs
Previously RETRIES was set (by default to 40) once at the start of the
script; this meant that it would only ever wait for up to 40 lots of
30s across *all* the OSDs on a host before bombing out. In fact, we
want to be prepared to wait for the same amount of time after each OSD
restart for the clusters' pgs to be happy again before continuing.

Closes: #3154
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
2018-09-24 08:20:32 +00:00
John Spray 26bfef4107 Remove Calamari-related pieces
...with the exception of the purge operation, since
removing Calamari would still be useful for an old
cluster.

Signed-off-by: John Spray <john.spray@redhat.com>
2018-09-21 11:00:18 +01:00
Andrew Schoen 16ccac83fe ceph-config: calculate num_osds for the lvm batch scenario
For now our best guess is to count the number of devices and multiply
by osds_per_device. Ideally we'd like to run ceph-volume lvm batch
--report and get the number of OSDs that way, but currently we need
a ceph.conf in place already before we can do that. There is a tracker
ticket that would allow os to get around the need for a ceph.conf:
http://tracker.ceph.com/issues/36088

Fixes: https://github.com/ceph/ceph-ansible/issues/3135

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-20 15:41:52 +00:00
Guillaume Abrioux 6d6fd514e0 config: set default _rgw_hostname value to respective host
the default value for _rgw_hostname was taken from the current node
being played, while it should be taken from the respective node in the
loop.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622505

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 20:10:34 +02:00
Andrew Schoen 8afad35f5a ceph-config: default devices and lvm_volumes when setting num_osds
This avoids errors when the chosen osd scenario does not require
setting devices or lvm_volumes. The default values for these are not
set because they exist in the ceph-osd role, not in ceph-defaults.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-18 17:02:33 +00:00
Neha Ojha 27027a17d3 osd: add osd memory target option
BlueStore's cache is sized conservatively by default, so that it does
not overwhelm under-provisioned servers. The default is 1G for HDD, and
3G for SSD.

To replace the page cache, as much memory as possible should be given to
BlueStore. This is required for good performance. Since ceph-ansible
knows how much memory a host has, it can set

`bluestore cache size = max(total host memory / num OSDs on this host * safety
factor, 1G)`

Due to fragmentation and other memory use not included in bluestore's
cache, a safety factor of 0.5 for dedicated nodes and 0.2 for
hyperconverged nodes is recommended.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1595003

Signed-off-by: Neha Ojha <nojha@redhat.com>
Co-Authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-18 10:12:46 +00:00
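The sizing rule above, expressed as a sketch; `safety_factor` and `num_osds` stand in for the real variables:

```
- name: give bluestore as much memory as is safe (sketch)
  set_fact:
    osd_memory_target: >-
      {{ [ (ansible_memtotal_mb * 1048576 * safety_factor / num_osds) | int,
           1073741824 ] | max }}
```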
Mike Christie 8fcd63cc50 igw: enable and start rbd-target-api
The commit:

commit 1164cdc002
Author: Guillaume Abrioux <gabrioux@redhat.com>
Date:   Thu Aug 2 11:58:47 2018 +0200

    iscsigw: install ceph-iscsi-cli package

installs the cli package but does not start and enable the
rbd-target-api daemon needed for gwcli to communicate with the igw
nodes. This patch just enables and starts it for the non-container
setup. The container setup is already doing this.

This fixes bz https://bugzilla.redhat.com/show_bug.cgi?id=1613963

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-09-13 19:35:45 +00:00
Guillaume Abrioux a6f77340fd nfs: ignore error on semanage command for ganesha_t
As of rhel 7.6, it has been decided that it doesn't make sense to
confine `ganesha_t` anymore, which means this domain won't exist
anymore.

Let's add a `failed_when: false` so the deployment doesn't fail when
trying to run this command.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1626070

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-13 13:06:47 +02:00
Andrew Schoen b36f3e06b5 ceph_volume: adds the osds_per_device parameter
If this is set to anything other than the default value of 1 then the
--osds-per-device flag will be used by the batch command to define how
many osds will be created per device.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-09-12 20:27:14 +00:00
Guillaume Abrioux 1c88c444a3 mon: fix `ExecStartPre` option in systemd unit file
This command line is not supported.
According to official documentation:

```
Note that shell command lines are not directly supported.
If shell command lines are to be used,
they need to be passed explicitly to a shell implementation of some kind.
```

We must run this using /bin/sh instead.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-11 10:48:21 +02:00
Guillaume Abrioux 9ff26e80f2 defaults: add a default value to rgw_hostname
let's add ansible_hostname as a default value for rgw_hostname if no
hostname in servicemap matches ansible_fqdn.

Fixes: #3063
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622505

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-10 12:07:44 +02:00
Guillaume Abrioux ecbd3e4558 Revert "client: add quotes to the dict values"
The reverted commit added quotes that make the keyring unusable

eg:

```
client.john
        key: AQAN0RdbAAAAABAAH5D3WgMN9Rxw3M8jkpMIfg==
        caps: [mds] ''
        caps: [mgr] 'allow *'
        caps: [mon] 'allow rw'
        caps: [osd] 'allow rw'
```

Trying to import such a keyring and use it will result:

```
Error EACCES: access denied
```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1623417

This reverts commit 424815501a.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-09-07 17:21:55 +00:00
Tom Barron bf8f589958 run rados cmd in container if containerized deployment
When ceph-nfs is deployed containerized and ceph-common is not
installed on the host the start_nfs task fails because the rados
command is missing on the host.

Run rados commands from a ceph container instead so that
they will succeed.

Signed-off-by: Tom Barron <tpb@dyncloud.net>
2018-09-03 17:06:00 +00:00
Markos Chandras 217f35dbdb roles: ceph-rgw: Enable the ceph-radosgw target
If the ceph-radosgw target is not enabled, then enabling the
ceph-radosgw@ service has no effect since nothing will pull
it on the next reboot. As such, we need to ensure that the
target is enabled.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-09-03 15:48:58 +02:00
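A sketch of enabling both the target and the instance so the unit is pulled in after a reboot:

```
- name: enable the ceph-radosgw target and instance (sketch)
  systemd:
    name: "{{ item }}"
    enabled: true
    state: started
  with_items:
    - ceph-radosgw.target
    - "ceph-radosgw@rgw.{{ ansible_hostname }}"
```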
Andy McCrae 772e6b9be2 Dont run client dummy container on non-x86_64 hosts
The dummy client container currently won't work on non-x86_64 hosts.
This PR creates a filtered client group that contains only hosts
that are x86_64, which can then be the group to run the
dummy container against.

This is for the specific case of a containerized_deployment where
there is a mixture of non-x86_64 hosts and x86_64 hosts. As such
the filtered group will contain all hosts when running with
containerized_deployment: false.

Currently ppc64le is not supported for Ceph server components.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-08-31 11:34:00 +00:00
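A sketch of building such a filtered group at runtime; the group names are illustrative:

```
- name: build a client group restricted to x86_64 hosts (sketch)
  add_host:
    name: "{{ item }}"
    groups: filtered_clients
  with_items: "{{ groups.get('clients', []) }}"
  when: hostvars[item]['ansible_architecture'] == 'x86_64'
  run_once: true
```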
Sébastien Han 9ba670567e remove warning for unsupported variables
As promised, these will go unsupported for 3.1 so let's actually remove
them :).

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1622729
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-28 13:31:57 -07:00
Sébastien Han 6d7fa99ff7 defaults: fix rgw_hostname
A couple of things were wrong in the initial commit:

* ceph_release_num[ceph_release] >= ceph_release_num['luminous'] will
never work since the ceph_release fact is set in the roles after. So
either ceph-common or ceph-docker-common set it

* we can easily re-use the initial command to check if a cluster is
running, it's more elegant than running it twice.

* set the fact rgw_hostname on rgw nodes only

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1618678
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-22 17:46:00 +02:00
Andy McCrae 18684b7209 Sync config_template with base plugin
The config_template plugin exists in the ceph-common role so that
config_template will still work with ansible galaxy.

This PR syncs the config_template module from the base of the repo in
plugins/actions to the ceph-common role.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-08-21 16:10:33 +00:00
Sébastien Han 8c70a5b197 osd: fix ceph_release
We need ceph_release in the condition, not ceph_stable_release

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1619255
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-20 20:14:56 +02:00
Markos Chandras 126e2e3f92 roles: ceph-defaults: Check if 'rgw' attribute exists for rgw_hostname
If there are no services in the cluster yet, the 'rgw' attribute could
be missing and the task fails with the following problem:

msg": "The task includes an option with an undefined variable.
The error was: 'dict object' has no attribute 'rgw'

We fix this by checking the existence of the 'rgw' attribute. If it's
missing, we skip the task since the role already contains code to set
a good default rgw_hostname.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-20 11:37:45 +02:00
Markos Chandras 37e50114de roles: ceph-defaults: Delegate cluster information task to monitor node
Since commit f422efb1d6 ("config: ensure
rgw section has the correct name") we observe the following failures in
new Ceph deployment with OpenStack-Ansible

fatal: [aio1_ceph-rgw_container-fc588f0a]: FAILED! => {"changed": false,
"cmd": "ceph --cluster ceph -s -f json", "msg": "[Errno 2] No such file
or directory"

This is because the task executes 'ceph' but at this point no package
installation has happened. Packages are normally installed in the
'ceph-common' role which runs after the 'ceph-defaults' one.

Since we are looking to obtain cluster information, the task should be
delegated to a monitor node similar to other tasks in that role

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-20 11:37:45 +02:00
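A sketch of the delegation, reusing the `mon_group_name` variable:

```
- name: query the cluster from a monitor node that has ceph installed (sketch)
  command: "ceph --cluster {{ cluster }} -s -f json"
  register: ceph_current_status
  changed_when: false
  delegate_to: "{{ groups[mon_group_name][0] }}"
```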
Dardo D Kleiner f6519e4003 mgr: improve/fix disabled modules check
Follow up on 36942af698

"disabled_modules" is always a list, it's the items in the list that
can be dicts in mimic.  Many ways to fix this, here's one.

Signed-off-by: Dardo D Kleiner <dardokleiner@gmail.com>
2018-08-20 11:23:58 +02:00
Sébastien Han 3149b2564f Revert "osd: generate device list for osd_auto_discovery on rolling_update"
This reverts commit e84f11e99e.

This commit was giving a new failure later during the rolling_update
process. Basically, this was modifying the list of devices and started
impacting the ceph-osd itself. The modification to accomodate the
osd_auto_discovery parameter should happen outside of the ceph-osd.

Also we are trying to not play ceph-osd role during the rolling_update
process so we can speed up the upgrade.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-16 11:13:12 +02:00
Markos Chandras 7172737f13 roles: ceph-defaults: Set ceph_uid on SUSE distributions
The ceph_uid is also '167' on SUSE systems so extend the existing task.

Signed-off-by: Markos Chandras <mchandras@suse.de>
2018-08-13 19:02:57 +00:00
Guillaume Abrioux 36942af698 mgr: backward compatibility for module management
Follow up on 3abc253fec

The structure had even changed within the `luminous` release.
It was first:

```
{
    "enabled_modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "disabled_modules": [
        "influx",
        "localpool",
        "prometheus",
        "selftest",
        "zabbix"
    ]
}
```
Then it changed for:

```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      "balancer",
      "dashboard",
      "influx",
      "localpool",
      "prometheus",
      "restful",
      "selftest",
      "zabbix"
  ]
}
```

and finally:
```
{
  "enabled_modules": [
      "status"
  ],
  "disabled_modules": [
      {
          "name": "balancer",
          "can_run": true,
          "error_string": ""
      },
      {
          "name": "dashboard",
          "can_run": true,
          "error_string": ""
      }
  ]
}
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 13:25:06 +00:00
Guillaume Abrioux 8b5e3cd999 validate: fail if fqdn deployment attempted
The fqdn configuration possibility caused a lot of trouble; it adds a
lot of complexity because of the multiple cases and the relation between
ceph-ansible and ceph-container. Moreover, there is no benefit to such
a feature.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613155

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux f422efb1d6 config: ensure rgw section has the correct name
the ceph.conf.j2 always assumes the hostname used to register the
radosgw in the servicemap is equivalent to `{{ ansible_hostname }}`
which returns the shortname form.

We need to detect which form of the hostname was used in case of already
deployed cluster and update the ceph.conf accordingly.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1580408

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Guillaume Abrioux db29b5b84d config: clean template, remove useless conditions
there is no need to have all these conditions.

for instance, assuming `mds_group_name` is set to 'mdss':

  - `if groups[mds_group_name] is defined` checks if `'mdss'` is present in `{{ groups }}`

  - `if {{ mds_group_name }} in group_names` checks if the current node is part
  the group `'mdss'`

  - `if inventory_hostname in groups.get(mds_group_name, [])` checks if
  the current node is part of the group 'mdss'

The third condition is enough to cover the need of ensuring we are
running on a mds node.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-13 10:04:24 +02:00
Sébastien Han 4c9e24a90f mon: fix calamari initialisation
If calamari is already installed and ceph has been upgraded to a higher
version, the initialisation will fail later. So if we detect that the
calamari-server is too old compared to ceph_rhcs_version, we try to
update it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1601755
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 14:14:23 +02:00
Andrew Schoen 6423ab4ad3 lvm: fix condition when selecting which scenario to run
devices and lvm_volumes will always be defined, so we need to check
their length instead before deciding which scenario to run.

This fixes the failure here:
https://2.jenkins.ceph.com/job/ceph-ansible-prs-luminous-bluestore_lvm_osds/86/consoleFull#1667273050b5dd38fa-a56e-4233-a5ca-584604e56e3a

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-10 11:46:12 +02:00
Sébastien Han e84f11e99e osd: generate device list for osd_auto_discovery on rolling_update
rolling_update relies on the list of devices when performing the restart
of the OSDs. The task that builds the devices list out of the
ansible_devices dict only runs when there are no partitions on the
drives. However during an upgrade the OSD are already configured, they
have been prepared and have partitions so this task won't run and thus
the devices list will be empty, skipping the restart during
rolling_update. We now run the same task under different requirements
when rolling_update is true and build a list when:

* osd_auto_discovery is true
* rolling_update is true
* ansible_devices exists
* no dm/lv are part of the discovery
* the device is not removable
* the device has more than 1 sector

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1613626
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-10 09:19:40 +02:00
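A sketch of the device-list construction under those requirements:

```
- name: build the devices list during rolling_update (sketch)
  set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_devices }}"
  when:
    - rolling_update | bool
    - osd_auto_discovery | bool
    - item.key is not match('dm-')    # skip device-mapper/lv entries
    - item.value.removable == '0'     # the device is not removable
    - item.value.sectors | int > 1    # the device has more than 1 sector
```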
Andrew Schoen 3592c68cca ceph-osd: adds crush_device_class config option
This is used with the lvm osd scenario. When using devices you need the
option to set the crush device class for all of the OSDs that are
created from those devices.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
Andrew Schoen 6d431ec22d ceph-volume: implement the 'lvm batch' subcommand
This adds the action 'batch' to the ceph-volume module so that we can
run the new 'ceph-volume lvm batch' subcommand. A functional test is
also included.

If devices is defined and osd_scenario is lvm, then the 'ceph-volume lvm
batch' command will be used to create the OSDs.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-08-09 09:41:58 -04:00
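What the subcommand does on the CLI, sketched as a task with `devices` as in the commit message:

```
- name: create OSDs from whole devices with 'lvm batch' (sketch)
  command: >
    ceph-volume --cluster {{ cluster }} lvm batch --bluestore --yes
    {{ devices | join(' ') }}
```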
Sébastien Han 4d64dd4686 rgw: ability to use ceph-ansible vars into containers
Since the container now simply reads the ceph.conf, we remove all the
unnecessary options.

Also this PR is the foundation to support multiple backend, such as the
new 'beast' from Ceph Mimic.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 3bce117de2 rgw: remove unused file
copy_configs.yml was not included anywhere and is a leftover, so let's remove it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Sébastien Han 5a89479abe rgw: remove useless condition
The include does not need a condition on containerized_deployment since
we are already in an include that has the same condition.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-08-09 14:13:17 +02:00
Graeme Gillies a46025820d Allow mgr bootstrap keyring to be defined
In environments where we wish to have manual/greater control over
how the bootstrap keyrings are used, we need to be able to externally
define what the mgr keyring secret will be and have ceph-ansible
use it, instead of it being autogenerated

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610213

Signed-off-by: Graeme Gillies <ggillies@akamai.com>
2018-08-08 19:09:01 +00:00
Artur Fijalkowski 52d9d406b1 Fix regular expression matching OSD ID on non-containerized
deployment.
restart_osd_daemon.sh is used to discover and restart all OSDs on a
host. To do this the script loops over the list of ceph-osd@ services on
the system. This commit fixes a bug in the regular expression
responsible for extracting the OSD ids: the prior version used the
expression `[0-9]{1,2}`, which ignores all OSDs whose numbers are
greater than 99 (thus longer than 2 digits). The fix removes the upper
limit on the number of digits. This problem existed in two places in
the script.

Closes: #2964

Signed-off-by: Artur Fijalkowski <artur.fijalkowski@ing.com>
2018-08-06 15:53:49 +00:00
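The fixed extraction, sketched as a task; the original fix lives in restart_osd_daemon.sh:

```
- name: list OSD ids without an upper limit on digits (sketch)
  shell: |
    systemctl list-units --no-legend 'ceph-osd@*' \
      | grep -oE 'ceph-osd@[0-9]+' | cut -d'@' -f2   # was [0-9]{1,2}
  register: osd_ids
  changed_when: false
```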
Guillaume Abrioux 1164cdc002 iscsigw: install ceph-iscsi-cli package
Install ceph-iscsi-cli in order to provide the `gwcli` command tool.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1602785

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-06 14:11:52 +02:00
Guillaume Abrioux 0a6ff6bbf8 defaults: backward compatibility with fqdn deployments
This commit ensures we are backward compatible with fqdn deployments.
Since ceph-container enforces deployment to be done with shortname, we
must keep backward compatibility with clusters already deployed with
fqdn configuration

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-08-06 10:14:58 +00:00
Sébastien Han ea9e60d48d config: enforce socket name
This was introduced by
59ee2e8d3b
and made our socket checks impossible to run. The PID could be found,
but the cctid could not.
This happens during an upgrade to mimic and on clusters running mimic.

So let's force the admin socket back to the way it was, so we can
properly check for existing instances. The $cluster-$name.$pid.$cctid.asok
form is only needed when running multiple instances of the same daemon,
something ceph-ansible cannot do at the time of writing.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1610220
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-31 10:58:04 +02:00
Mike Christie 6f72f96dad igw: do not fail purge on rbd removal errors
Instead of failing the entire purge operation when the rbd command
fails, just log an error. This will allow the higher-level target and config
cleanup to complete, and the user only has to manually delete the rbd
images.

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-07-31 10:08:26 +02:00
Mike Christie d572a9a602 igw: fix image removal during purge
We were not passing the ceph conf info into the rbd image removal
command, so if the clustername was not the default, igw purge would fail
due to the rbd rm command failing.

This just fixes the bug by passing in the ceph conf info which has the
clustername to use.

This fixes Red Hat bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1601949

Signed-off-by: Mike Christie <mchristi@redhat.com>
2018-07-31 10:08:26 +02:00
Sébastien Han 2ca8c51906 osd: do not remove expose_partition container
The container runs with --rm which means it will be deleted by Docker
when exiting. Also 'docker rm -f' is not idempotent and returns 1 if the
container does not exist.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1609007
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-30 10:38:15 +02:00
Guillaume Abrioux 1ecbbbdcfa rbd-mirror: bring back compatibility with jewel deployment
rbd-mirror can't start when deploying jewel because it needs admin
keyring.
Getting back this task brings backward compatibility for jewel
deployment.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux 053709da97 ceph-osds: backward compatibility with jewel for osp pools creation
If we want to be backward compatible with releases prior to luminous, we
have to set the rule name according to the default values used in jewel.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 18:47:10 +00:00
Guillaume Abrioux 2597a557c5 client: fix an incorrect title in a task
This task would be run on both containerized *and* non containerized
deployments.
Let's have a proper title to avoid confusion.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-26 15:57:41 +02:00
Sébastien Han e2ea5bac51 rgw: add more config option for civetweb frontend
In containerized deployments we now inherit the
radosgw_civetweb_options options when bootstrapping the container.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1582411
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-25 13:19:14 +00:00
Giulio Fidente e85e5ea781 Run creation of empty rados index object on the first monitor
When distributing ceph-nfs role, creation of rados index object
fails as it assumes availability of client.admin locally.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1607970
Signed-off-by: Giulio Fidente <gfidente@redhat.com>
2018-07-25 11:40:11 +02:00
Sébastien Han 235d1b3f55 validate: add checks for interfaces
Check if the interface provided:

* exists in the gathered facts (thus on the system)
* is active
* has an IP address (depending on ip_version)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600227
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-24 17:59:30 +02:00
Guillaume Abrioux af82e7523d tests: test master against ansible 2.6
Ansible 2.4 is currently end-of-life.
Ansible 2.5 will go end-of-life after Ansible 2.7 is released.

Fixes: #2901

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-23 11:59:15 +00:00
Sébastien Han 7fc13bc9d5 validate: only run osd test on osd node
Do not run device validation on every hosts, only on OSD nodes.

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-19 12:46:18 +00:00
Sébastien Han cf01e596b6 validate: improve device check
We now make sure that:

* devices are actually block special files
* length of dedicated_device is identical to devices

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-18 14:26:22 +00:00
Guillaume Abrioux 1a626d3c61 nfs: change default stable branch for nfs-ganesha repo
Since `V2.6-stable` is available and has packages for `mimic`, let's
update this default value accordingly so nfs nodes can be deployed with
mimic.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-13 08:20:27 +00:00
Sébastien Han e61ca882a1 validate: force ansible version
We currently only support Ansible 2.4.X so let's fail if the version is
different.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-13 07:52:56 +00:00
Guillaume Abrioux 5ef5fcd0b6 client: do not rely on copy_admin_key to import keys
Relying on `copy_admin_key` to import created keys on client nodes
obliges us to copy the admin key to those nodes, which is not something
we necessarily want.
We should use the fact `condition_copy_admin_key`, which will be set to
`True` when the delegated node is a mon, which means we can import keys
without having to deal with the admin keyring.

Fixes: #2867

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-13 06:52:00 +00:00
Guillaume Abrioux ce5ac930c5 mgr: fix condition to add modules to ceph-mgr
Follow up on #2784

We must check the generated fact `_disabled_ceph_mgr_modules` to
enable disabled mgr modules.
Otherwise, this task will be skipped because it's not comparing the
right list.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1600155

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-12 21:04:01 +00:00
Guillaume Abrioux 9f54b3b4a7 mon: ensure socket is purged when mon is stopped
On containerized deployment, if a mon is stopped, the socket is not
purged and can cause failure when a cluster is redeployed after the
purge playbook has been run.

Typical error:

```
fatal: [osd0]: FAILED! => {}

MSG:

'dict object' has no attribute 'osd_pool_default_pg_num'
```

the fact is not set because of this earlier failure:

```
ok: [mon0] => {
    "changed": false,
    "cmd": "docker exec ceph-mon-mon0 ceph --cluster test daemon mon.mon0 config get osd_pool_default_pg_num",
    "delta": "0:00:00.217382",
    "end": "2018-07-09 22:25:53.155969",
    "failed_when_result": false,
    "rc": 22,
    "start": "2018-07-09 22:25:52.938587"
}

STDERR:

admin_socket: exception getting command descriptions: [Errno 111] Connection refused

MSG:

non-zero return code
```

This failure happens when the ceph-mon service is stopped: since the
socket isn't purged, it's a leftover that confuses the process.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-10 20:08:07 +00:00
Guillaume Abrioux d0746e0858 common: switch from docker module to docker_container
As of ansible 2.4, `docker` module has been removed (was deprecated
since ansible 2.1).
We must switch to `docker_container` instead.

See: https://docs.ansible.com/ansible/latest/modules/docker_module.html#docker-module

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-10 20:08:07 +00:00
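
A minimal sketch of the switch, assuming a mon container (image variables and container name are illustrative):

```
# The removed `docker` module call becomes a `docker_container` task.
- name: start the ceph mon container (sketch)
  docker_container:
    name: "ceph-mon-{{ ansible_hostname }}"
    image: "{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}"
    state: started
```
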
Shilpa Jagannath 07852ed039 Remove zone from zonegroup and update period before deleting the zone to avoid inconsistent period information across other zones.
When you delete a zone without removing it from the zonegroup, the period update
fails since that command needs to load the zone and zonegroup to be able to
update the master. The period update fails with an error like this:

radosgw-admin period update --commit
-1 Cannot find zone id= (name=), switching to local zonegroup configuration
-1 Cannot find zone id= (name=)

Signed-off-by: Shilpa Jagannath <smanjara@redhat.com>
2018-07-09 12:27:24 +00:00
Sébastien Han b9f7df7ba2 common: remove hdparm
As of Kraken, the journal code does not use the hdparm command anymore
so we can remove it from our package dependency list.

Fixes: https://github.com/ceph/ceph-ansible/issues/1402
Signed-off-by: Sébastien Han <seb@redhat.com>
(cherry picked from commit f6910efa24389c264062963b2054c7cd29ffebb3)
2018-07-07 08:53:47 +00:00
Sébastien Han 713b9fcf9b ceph-config: do not log cluster log on container
The container image recently merged both cluster and mon log into a
single stream. Following this, we now see this warning coming from the
container image:

2018-06-19 13:44:01.542990 7ff75b024700  1 mon.vm02@1(peon).log
v57928205 unable to write to '/var/log/ceph/ceph.log' for channel
'cluster': (2) No such file or directory

So we now tell the mon not to write the cluster log to the filesystem.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591771
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-05 15:11:45 +00:00
Sébastien Han fcf11ecc35 ceph-common: fix rhcs condition
We forgot to add mgr_group_name when checking for the mon repo, thus the
conditional on the next task was failing.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1598185
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-04 17:17:21 +02:00
Guillaume Abrioux 3abc253fec mgr: fix enabling of mgr module on mimic
The data structure has slightly changed on mimic.

Prior to mimic, it used to be:

```
{
    "enabled_modules": [
        "status"
    ],
    "disabled_modules": [
        "balancer",
        "dashboard",
        "influx",
        "localpool",
        "prometheus",
        "restful",
        "selftest",
        "zabbix"
    ]
}
```

From mimic it looks like this:

```
{
    "enabled_modules": [
        "status"
    ],
    "disabled_modules": [
        {
            "name": "balancer",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "dashboard",
            "can_run": true,
            "error_string": ""
        }
    ]
}
```

This means we can't simply check `item in
_ceph_mgr_modules.disabled_modules` anymore.

The idea here is to use the `map(attribute='name')` filter to build a
list of module names when deploying mimic.

Fixes: #2766

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-07-03 21:19:16 +00:00
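
A minimal sketch of the filter in use, assuming `_ceph_mgr_modules` holds the parsed `ceph mgr module ls` output (fact names follow the commit message):

```
# On mimic, disabled_modules is a list of dicts, so extract the names.
- name: build the list of disabled mgr module names (sketch)
  set_fact:
    _disabled_ceph_mgr_modules: "{{ _ceph_mgr_modules.disabled_modules | map(attribute='name') | list }}"
```
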
Sébastien Han 63658c05c7 ceph-client: do not kill the dummy container
The container runs for 300 sec, then dies and removes itself thanks to
the '--rm' option, so there is no point in removing it. Also this was
causing failures under some circumstances.

Closing: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-03 16:09:52 +00:00
Sébastien Han a629408967 ceph-mds: enable application pool
We now enable the application type 'cephfs' for each cephfs pool we
create.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1590275
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-02 10:28:34 +00:00
Sébastien Han 103c279c21 ceph-defaults: add default application to pool
We now add a default 'rbd' application type to each pool we create. This
removes the warning: "application not enabled on N pool(s)"

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1590275
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-07-02 10:28:34 +00:00
Vasu Kulkarni 1d454b611f Enable monitor repo for mgr nodes and Tools repo for iscsi/nfs/clients
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
2018-06-29 18:09:26 +00:00
Sébastien Han abdb53e16a ceph-osd: trigger osd container restart on script change
The script ceph-osd-run.sh holds the config options to start the
container; if one of these options is modified we must restart the
container. This was not the case before because the 'notify' flag
wasn't present.

Closing: https://bugzilla.redhat.com/show_bug.cgi?id=1596061
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-28 17:54:13 +02:00
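
A minimal sketch of the missing notify, assuming a handler named 'restart ceph osds' exists (handler name and destination path are illustrative):

```
- name: generate ceph-osd-run.sh (sketch)
  template:
    src: ceph-osd-run.sh.j2
    dest: /usr/share/ceph-osd-run.sh
    mode: '0755'
  notify: restart ceph osds
```
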
Sébastien Han f623997271 systemd: remove changed_when: false
When using a module there is no need to apply this Ansible option. The
module will handle the idempotency on its own. The module decides
whether or not the task has changed during the execution.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-28 17:54:13 +02:00
George Shuklin 653b483fc3 Add ceph_keyring_permissions variable to control permissions for
keyring files in /etc/ceph. Default value is the same as it was (0600),
but this variable allows the user to override it (e.g. set it to 0640).

Signed-off-by: George Shuklin <george.shuklin@gmail.com>
2018-06-28 15:48:39 +00:00
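
For example, in group_vars (a sketch; 0640 is the mode named in the commit message):

```
# Keyrings in /etc/ceph default to 0600; widen to group-readable.
ceph_keyring_permissions: '0640'
```
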
Ha Phan a7b7735b6f ceph-mon: Generate initial keyring
Minor fix so that initial keyring can be generated using python3.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-28 10:39:56 +02:00
Ha Phan b7b8aba47b Generate a copy of ceph.conf locally
Refers to #2697

This change creates a copy of `ceph.conf` on the Ansible server.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-28 07:39:30 +00:00
Andy McCrae a4a3d9a01b Fix package state for upgrades on SuSE/RHEL
In 226f80c22b only Debian package
installs had the correct state set to ensure packages were upgraded when
the "upgrade_ceph_packages" var was set to true.

Signed-off-by: Andy McCrae <andy.mccrae@gmail.com>
2018-06-27 18:55:22 +00:00
Sébastien Han 322e2de7d2 mon: honour mon_docker_net_host option
--net=host was hardcoded in the startup line so even though
mon_docker_net_host was set to False the net option would always be
activated.
mon_docker_net_host is set to True by default so this commit does not
change the behaviour.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-27 13:44:41 +00:00
Michel Rode 7774935707 Added 'squash' as a parameter to nfs-ganesha.
Set the default to 'root_squash' - which is the default of nfs-ganesha.

Signed-off-by: Michel Rode <rmichel@devnu11.net>
2018-06-25 09:13:17 +02:00
Christian Zunker 48394597c9 reset failed count of ceph-mgr
Depending on your setup, ceph-mgr might get restarted multiple times.
When this is done too fast, systemd will prevent further restarts because of
configured limits in the ceph-mgr systemd unit file.

Resetting the failure count will prevent this problem. The reset is done before
the restart so in case of a real problem during the restart it still fails.

Fixes: #2768

Signed-off-by: Christian Zunker <christian.zunker@codecentric.cloud>
2018-06-20 13:59:16 +02:00
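
A minimal sketch of the reset, run just before the restart (the unit name is an assumption):

```
- name: reset failed count of ceph-mgr (sketch)
  command: systemctl reset-failed ceph-mgr@{{ ansible_hostname }}
  changed_when: false
```
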
Sébastien Han bea4027f0c common: start firewalld if configure_firewall
Currently we expect firewalld to be enabled and running when
configure_firewall is set to True. Let's enforce that.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1589146
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-18 04:02:50 -04:00
Sébastien Han a9ed3579ae mon/osd: bump container memory limit
As discussed with the cores, the current limits are too low and should
be bumped to higher values.
So now by default monitors get 3GB and OSDs get 5GB.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1591876
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-17 11:20:27 -04:00
Guillaume Abrioux 51cf3b7fa0 client: try to kill dummy container only on first client node
The 'dummy' container is created only on the first client node, which means
we must destroy this container only on that node, otherwise this
can cause a failure like the following:
```
fatal: [192.168.24.8]: FAILED! => {"changed": false, "cmd": ["docker", "rm",
"-f", "ceph-create-keys"], "delta": "0:00:00.023692", "end": "2018-06-12
20:56:07.261278", "msg": "non-zero return code", "rc": 1, "start":
"2018-06-12 20:56:07.237586", "stderr": "Error response from daemon: No such
container: ceph-create-keys", "stderr_lines": ["Error response from daemon: No
such container: ceph-create-keys"], "stdout": "", "stdout_lines": []}

```

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1590746

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-13 16:10:46 +02:00
Patrick Donnelly 9ce81ae845 ceph-mds: do not enable multimds on jewel
Multiple active MDS became stable in Luminous.

Introduced-by: c8573fe0d7
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-06-12 10:47:34 +02:00
Sébastien Han 2e8412734a common: ability to enable/disable fw configuration
Prior to this patch if you were running on a Red Hat system,
ceph-ansible would try to configure firewalld for you without the
operators's consent.
Now you can enable or disable the fw configuration by setting
configure_firewall to either true or false.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1589146
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-11 21:51:59 +02:00
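
For example, to opt out of any firewall handling (a sketch in group_vars):

```
configure_firewall: false
```
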
Konstantin Shalygin 3a07568496 ceph-osd: set 'openstack_keys_tmp' only when 'openstack_config' is defined.
If 'openstack_config' is false this task shouldn't be executed.

Signed-off-by: Konstantin Shalygin <k0ste@k0ste.ru>
2018-06-11 13:03:55 +02:00
Vishal Kanaujia 1a610df02b Fix to run secure cluster only once in a run
The current secure cluster play runs with all the monitors. The rerun
of this task is unnecessary and can be skipped.

Fixes: #2737

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-06-11 08:37:29 +02:00
Guillaume Abrioux 090ecff94e client: keyrings aren't created when single client node
Combining `run_once: true` with `inventory_hostname ==
groups.get(client_group_name) | first` might cause a bug when the only
node being run is not the first in the group.

In a deployment with a single client node it can cause an issue because
the keyring sometimes won't be created, since the task could be
skipped entirely.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1588093

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-08 15:05:47 +02:00
Sébastien Han 20c8065e48 ceph-iscsi: rename group iscsi_gws
Let's try to avoid using dashes as testinfra needs to be able to read
the groups.
Typically, with iscsi-gws we can't add a marker for these iscsi nodes,
using an underscore fixes the issue.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-08 10:21:54 +02:00
Sébastien Han 91bf53ee93 ceph-iscsi: support for containerize deployment
We now have the ability to deploy a containerized version of ceph-iscsi.
The result is similar to the non-containerized version, you simply have
3 containers running for the following services:

* rbd-target-api
* rbd-target-gw
* tcmu-runner

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1508144
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-08 10:21:54 +02:00
Guillaume Abrioux 8a653cacd5 client: add a default value for keyring file
Potential error if someone doesn't pass the mode in the `keys` dict for
client nodes:

```
fatal: [client2]: FAILED! => {}

MSG:

The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mode'

The error appears to have been in '/home/guits/ceph-ansible/roles/ceph-client/tasks/create_users_keys.yml': line 117, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: get client cephx keys
  ^ here

exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'dict object' has no attribute 'mode'

```

Adding a default value will avoid the deployment failing because of this.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 17:26:35 +02:00
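
A minimal sketch of the defaulted lookup (the destination path, content variable, and 0600 fallback are assumptions):

```
- name: copy client cephx keyring (sketch)
  copy:
    dest: "/etc/ceph/{{ cluster }}.{{ item.name }}.keyring"
    content: "{{ item.key_content }}"
    mode: "{{ item.mode | default('0600') }}"
  with_items: "{{ keys }}"
```
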
Guillaume Abrioux 5eacc8f8d8 tests: add a dummy value for 'dev' release
Functional tests are broken when testing against the 'dev' release (ceph).
Adding a dummy value here will make it possible to run the ceph-ansible CI
against the dev ceph release.

Typical error:

```
>       if request.node.get_marker("from_luminous") and ceph_release_num[ceph_stable_release] < ceph_release_num['luminous']:
E       KeyError: 'dev'
```

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit fd1487d93f21b609a637053f5b33cd2a4e408d00)
2018-06-07 13:59:17 +02:00
Andrew Schoen 24ef47b0e5 ceph-common: move firewall checks after package installation
We need to do this because on dev or rhcs installs ceph_stable_release
is not mandatory and the firewall check tasks have a task that is
conditional based on the installed version of ceph. If we perform those
checks after package install then they will not fail on dev or rhcs
installs.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-06-07 13:59:17 +02:00
Guillaume Abrioux 7b156deb67 client: use dummy created container when there is no mon in inventory
The `docker_exec_cmd` fact set in the client role when there is no monitor
in the inventory is wrong: `ceph-client-{{ hostname }}` is never created, so
it will fail anyway.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 16:16:38 +08:00
Guillaume Abrioux 433ecc7cbc osd: copy openstack keys over to all mon
When configuring openstack, the created keyrings aren't copied over to
all monitors nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1588093

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-07 13:58:57 +08:00
Patrick Donnelly 91f9da530f change max_mds default to 1
Otherwise, with the removal of mds_allow_multimds, the default of 3 will be set
on every new FS.

Introduced by: c8573fe0d7

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1583020
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-06-06 12:16:42 +08:00
Vishal Kanaujia 2cdb0d1812 Syntax error fix in rgw multisite role
This commit fixes a syntax error in the RGW multisite role's when
clause.

Fixes: #2704

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-06-05 16:01:07 +05:30
Guillaume Abrioux 2cf06b515f rgw: refact rgw pools creation
Refact of 8704144e31
There is no need to have duplicated tasks for this. The rgw pools
creation should be delegated to a monitor node so we don't have to care
whether the admin keyring is present on the rgw node.
Besides, only one task is needed to create the pools; we just need to
use the `docker_exec_cmd` fact already defined in `ceph-defaults` to
achieve it.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1550281

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-05 15:00:20 +08:00
Ha Phan 1f3c9ce4f3 Use python instead of python2
The initial keyring is generated locally on the Ansible server and the snippet works well for both v2 and v3 of python.

I don't see any reason why we should explicitly invoke `python2` instead of just `python`.

In some setups, `python2` is not symlinked to `python`, while `python` and `python3` refer to v2 and v3 respectively.

Signed-off-by: Ha Phan <thanhha.work@gmail.com>
2018-06-04 14:24:10 +02:00
Sébastien Han db50aec13d ceph-common: add firewall rules for ceph-mgr
Prior to this commit the firewall tasks were not opening the ceph-mgr
ports. This would lead to an unclean configuration since the ceph-mgr
daemons cannot connect to the OSDs.
This commit opens the right ports on the ceph-mgr nodes to talk with the
OSDs.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526400
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-06-04 12:11:41 +02:00
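
A minimal sketch of such a rule, assuming ceph-mgr listens in the standard 6800-7300/tcp daemon range (the range and zone are assumptions):

```
- name: open ceph-mgr ports (sketch)
  firewalld:
    port: 6800-7300/tcp
    zone: public
    permanent: true
    immediate: true
    state: enabled
```
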
jtudelag 600e1e2c26 rgws: renames create_pools variable with rgw_create_pools.
Renamed to be consistent with the role (rgw) and have a meaningful name.

Signed-off-by: Jorge Tudela <jtudelag@redhat.com>
2018-06-04 06:23:42 +02:00
jtudelag 8704144e31 Adds RGWs pool creation to containerized installation.
The ceph command has to be executed from one of the monitor containers
if no admin keyring copy is present on the RGWs. The task then has to be
delegated.

Adds test to check proper RGW pool creation for Docker container scenarios.

Signed-off-by: Jorge Tudela <jtudelag@redhat.com>
2018-06-04 06:23:42 +02:00
Guillaume Abrioux aae37b44f5 mons: move set_fact of openstack_keys in ceph-osd
Since the openstack_config.yml has been moved to `ceph-osd` we must move
this `set_fact` in ceph-osd otherwise the tasks in
`openstack_config.yml` using `openstack_keys` will actually use the
defaults value from `ceph-defaults`.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1585139

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-01 17:12:01 +02:00
Andrew Schoen c2423e2c48 ceph-defaults: add the nautilus 14.x entry to ceph_release_num
The first 14.x tag has been cut so this needs to be added so that
version detection will still work on the master branch of ceph.

Fixes: https://github.com/ceph/ceph-ansible/issues/2671

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-06-01 16:51:23 +02:00
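
The mapping simply gains a new entry, roughly (neighbouring entries shown for context):

```
ceph_release_num:
  luminous: 12
  mimic: 13
  nautilus: 14
```
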
Guillaume Abrioux 9d5265fe11 osds: wait for osds to be up before creating pools
This is a follow up on #2628.
Even with the openstack pools creation moved later in the playbook,
there is still an issue because OSDs are not all UP when trying to
create pools.

Adding a task which checks for all OSDs to be UP with a `retries/until`
condition should definitively fix this issue.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-06-01 15:46:52 +02:00
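
A minimal sketch of such a check, assuming `ceph osd stat -f json` exposes `num_osds` and `num_up_osds` at the top level (field names, retry counts, and the command wrapper are assumptions):

```
- name: wait for all osds to be up (sketch)
  command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} osd stat -f json"
  register: osd_stat
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds | int > 0 and
    (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
```
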
Guillaume Abrioux c68126d6fd mdss: do not make pg_num a mandatory params
When playing the ceph-mds role, mon nodes have already set a fact with the
default pg num for osd pools, so we can simply default to this value for
cephfs pools (the `cephfs_pools` variable).

At the moment the variable definition for `cephfs_pools` looks like:

```
cephfs_pools:
  - { name: "{{ cephfs_data }}", pgs: "" }
  - { name: "{{ cephfs_metadata }}", pgs: "" }
```

and we have a task in `ceph-validate` to ensure `pgs` has been set to a
valid value.

We could simply avoid this check by setting the default value of `pgs`
to `hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num']` and
leave users the possibility to override this value.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1581164

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-30 16:20:34 +02:00
Guillaume Abrioux 34e646e767 osds: do not set docker_exec_cmd fact
In `ceph-osd` there is no need to set `docker_exec_cmd` since the only
place where this fact is used is in `openstack_config.yml`, which
delegates all docker commands to a monitor node. It means we need the
`docker_exec_cmd` fact that has been set referring to `ceph-mon-*`
containers; this fact is already set earlier in `ceph-defaults`.

By the way, when collocating an OSD with a MON it fails because the container
`ceph-osd-{{ ansible_hostname }}` doesn't exist.

Removing this task will allow collocating an OSD with a MON.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1584179

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-30 16:17:29 +02:00
Guillaume Abrioux 608ea947a9 mds: move mds fs pools creation
When collocating an mds on a monitor node, the cephfs creation will fail
because `docker_exec_cmd` is reset to `ceph-mds-monXX`, which is
incorrect because we need to delegate the task to `ceph-mon-monXX`.
In addition, it wouldn't have worked since the `ceph-mds-monXX` container
isn't started yet.

Moving the task earlier in the `ceph-mds` role will fix this issue.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-25 11:16:56 +02:00
Sébastien Han 1c084efb3c rgw: container add option to configure multi-site zone
You can now use RGW_ZONE and RGW_ZONEGROUP on each rgw host from your
inventory and assign them a value. Once the rgw container starts it'll
pick up the info and add itself to the right zone.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1551637
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-24 11:32:05 -07:00
Guillaume Abrioux 3a0e168a76 mdss: move cephfs pools creation in ceph-mds
When deploying a large number of OSD nodes it can be an issue because the
protection check [1] won't pass since it tries to create pools before all
OSDs are active.

The idea here is to move cephfs pools creation in `ceph-mds` role.

[1] e59258943b/src/mon/OSDMonitor.cc (L5673)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-24 09:39:38 -07:00
Guillaume Abrioux 564a662baf osds: move openstack pools creation in ceph-osd
When deploying a large number of OSD nodes it can be an issue because the
protection check [1] won't pass since it tries to create pools before all
OSDs are active.

The idea here is to move openstack pools creation at the end of `ceph-osd` role.

[1] e59258943b/src/mon/OSDMonitor.cc (L5673)

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1578086

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-24 09:39:38 -07:00
Luigi Toscano 43e96c1f98 ceph-radosgw: disable NSS PKI db when SSL is disabled
The NSS PKI database is needed only if radosgw_keystone_ssl
is explicitly set to true, otherwise the SSL integration is
not enabled.

It is worth noting that the PKI support was removed from Keystone
starting from the Ocata release, so some code paths should be
changed anyway.

Also, remove radosgw_keystone, which is not useful anymore.
This variable was used until fcba2c801a.
Now profiles drives the setting of rgw keystone *.

Signed-off-by: Luigi Toscano <ltoscano@redhat.com>
2018-05-23 23:24:09 -07:00
Vishal Kanaujia ef5f52b1f3 Skip GPT header creation for lvm osd scenario
The LVM lvcreate fails if the disk already has a GPT header.
We created a GPT header regardless of the OSD scenario. The fix is to
skip header creation for the lvm scenario.

fixes: https://github.com/ceph/ceph-ansible/issues/2592

Signed-off-by: Vishal Kanaujia <vishal.kanaujia@flipkart.com>
2018-05-23 11:44:09 -07:00
Subhachandra Chandra c7e269fcf5 Fix restarting OSDs twice during a rolling update.
During a rolling update, OSDs are restarted twice currently. Once, by the
handler in roles/ceph-defaults/handlers/main.yml and a second time by tasks
in the rolling_update playbook. This change turns off restarts by the handler.
Further, the restart initiated by the rolling_update playbook is more
efficient as it restarts all the OSDs on a host as one operation and waits
for them to rejoin the cluster. The restart task in the handler restarts one
OSD at a time and waits for it to join the cluster.
2018-05-22 19:23:07 +02:00
Andrew Schoen a9ad8eb5f3 ceph-validate: do not check ceph version on dev or rhcs installs
A dev or rhcs install does not require ceph_stable_release to be set and
instead generates that by looking at the installed ceph-version.
However, at this point in the playbook ceph may not have been installed
yet and ceph-common has not been run.

Fixes: https://github.com/ceph/ceph-ansible/issues/2618

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-21 23:11:04 +02:00
Andrew Schoen e7d02a50d8 ceph-validate: move system checks from ceph-common to ceph-validate
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 645f61c351 ceph-defaults: remove backwards compat for containerized_deployment
The validation module does not get config options with the template
syntax rendered, so we're gonna remove that and just default it to
False. The backwards compat was schedule to be removed in 3.1 anyway.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen d30a99c350 validate: add support for containerized_deployment
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen f84c2ba27b ceph-defaults: fix failing tasks when osd_scenario was not set correctly
When devices is not defined because you want to use the 'lvm'
osd_scenario but you've made a mistake selecting that scenario these
tasks should not fail.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 1f15a81c48 ceph-defaults: move cephfs vars from the ceph-mon role
We're doing this so we can validate this in the ceph-validate role

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen ffe05872ac validate: only validate cephfs_pools on mon nodes
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 48c2a4fda8 validate: check rados config options
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 377fe81c10 validate: make sure ceph_stable_release is set to the correct value
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen ba7f09c0a7 ceph-validate: move var checks from ceph-common into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 32bac6b491 ceph-validate: move var checks from ceph-osd into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen 29a9dffc83 ceph-validate: move ceph-mon config checks into this role
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Andrew Schoen d87a32347f adds a new ceph-validate role
This will be used to validate config given to ceph-ansible.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-05-18 17:58:24 +02:00
Sébastien Han 2f43e9dab5 defaults: restart_osd_daemon unit spaces
Extra spaces in the systemctl list-units output can cause
restart_osd_daemon.sh to fail.

It looks like if you have more services enabled on the node, the space
between "loaded" and "active" grows compared to the single space
expected by the command [1].

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1573317
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-18 17:53:47 +02:00
Michael Vollman ed050bf3f6 Do nothing when mgr module is in good state
Check whether a mgr module is supposed to be disabled before disabling
it and whether it is already enabled before enabling it.

Signed-off-by: Michael Vollman <michael.b.vollman@gmail.com>
2018-05-18 15:21:45 +02:00
Andy McCrae f45662e270 Fix template reference for ganesha.conf
We can simply reference the template name since it exists within the
role that we are calling. We don't need to check the ANSIBLE_ROLE_PATH
or playbooks directory for the file.
2018-05-17 15:23:52 +02:00
Andy McCrae 226f80c22b Install packages as a list
To make the package installation more efficient we should install
packages as a list rather than as individual tasks or using a
"with_items" loop. The package managers can handle a list passed to them
to install in one go.

We can use a specified list and substitute any packages that are not to
be installed with the ceph-common package, which is installed on every
package install, then apply the unique filter to the package install
list.
2018-05-16 09:59:00 +02:00
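
A minimal sketch of the list-based install (variable names are illustrative):

```
- name: install ceph packages in one transaction (sketch)
  package:
    name: "{{ ceph_pkgs | unique }}"
    state: "{{ 'latest' if upgrade_ceph_packages | bool else 'present' }}"
```
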
Guillaume Abrioux f749830897 mon: refactor of mgr key fetching
There is no need to stat for created mgr keyrings since they are created
anyway when deploying a ceph cluster > jewel. In case of a jewel
deployment we won't enter that block.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-16 09:44:58 +02:00
Guillaume Abrioux 926be51b44 mgr: delete copy_configs.yml (containerized)
This file is a leftover from PR ceph/ceph-ansible#2516
It is not used anymore so it can be removed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-15 19:30:34 +02:00
Sébastien Han 52fc8a0385 rolling_update: move mgr key creation
Until all the mons have been updated to Luminous, there is no way to
create a key. So we should do the key creation in the mon role only if
we are not part of an update.
If we are, then the key creation is done after the mons upgrade to
Luminous.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574995
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-15 09:01:42 +02:00
Sébastien Han e810fb217f Revert "mon: fix mgr keyring creation when upgrading from jewel"
This reverts commit 259fae931d.
2018-05-15 09:01:42 +02:00
Guillaume Abrioux a145caf947 iscsi-gw: fix issue when trying to mask target
trying to mask target when `/etc/systemd/system/target.service` doesn't
exist seems to be a bug.
There is no need to mask a unit file which doesn't exist.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-14 21:42:22 +02:00
Sébastien Han 8c7c11b774 iscsi: add python-rtslib repository
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-14 21:42:22 +02:00
Andy McCrae 08a2b58d39 Allow os_tuning_params to overwrite fs.aio-max-nr
The order of fs.aio-max-nr (which is hard-coded to 1048576) means that
if you set fs.aio-max-nr in os_tuning_params it will effectively be
ignored for bluestore scenarios.

To resolve this we should move the setting of fs.aio-max-nr above the
setting of os_tuning_params, in this way the operator can define the
value of fs.aio-max-nr to be something other than 1048576 if they want
to.

Additionally, we can make the sysctl settings happen in 1 task rather
than multiple.
2018-05-11 10:49:37 +01:00
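
A minimal sketch of the combined ordering, with fs.aio-max-nr first so entries from os_tuning_params are applied after it and therefore win (only the 1048576 value comes from the commit message):

```
- name: apply operating system tuning in one task (sketch)
  sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: yes
  with_items: "{{ [{'name': 'fs.aio-max-nr', 'value': '1048576'}] + os_tuning_params }}"
```
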
Guillaume Abrioux f60b049ae5 client: remove default value for pg_num in pools creation
Trying to set the default value for pg_num to
`hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'])` will
break in case of an external client nodes deployment.
The `pg_num` attribute should be mandatory and be tested in the future
`ceph-validate` role.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-10 11:51:02 -07:00
Gregory Meno 26f6a65042 adds missing state needed to upgrade nfs-ganesha
In the tasks for os_family Red Hat we were missing this.

fixes: bz1575859
Signed-off-by: Gregory Meno <gmeno@redhat.com>
2018-05-09 19:58:04 +00:00
Guillaume Abrioux 259fae931d mon: fix mgr keyring creation when upgrading from jewel
On containerized deployment,
when upgrading from jewel to luminous, mgr keyring creation fails because the
command to create the mgr keyring is executed on a container that is still
running jewel (the container is only restarted later to run the new
image); therefore, it fails with a bad entity error.

To get around this situation, we can delegate the command to create
these keyrings on the first monitor when we are running the playbook on the last monitor.
That way we ensure we will issue the command on a container that has
been well restarted with the new image.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574995

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-09 10:29:48 -07:00
Guillaume Abrioux 7b387b506a osd: clean legacy syntax in ceph-osd-run.sh.j2
Quick cleanup of legacy syntax left over from e0a264c7e

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-09 07:29:33 +02:00
Simone Caronni b12bf62c36 Make sure the restart_mds_daemon script is created with the correct MDS name 2018-05-08 20:53:15 +02:00
Sébastien Han 07ca91b5cb common: enable Tools repo for rhcs clients
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1574458
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-08 16:12:30 +02:00
Andy McCrae e99351b95b Fix install of nfs-ganesha-ceph for Debian/SuSE
The Debian and SuSE installs for nfs-ganesha on the non-rhcs repository
require you to set allow_unauthenticated for Debian, and disable_gpg_check
for SuSE. The nfs-ganesha-rgw package already does this, but the
nfs-ganesha-ceph package will fail to install because of this same
issue.

This PR moves the installations to happen when the appropriate flags are
set to True (nfs_obj_gw & nfs_file_gw), but does it per distro (one for
SuSE and one for Debian) so that the appropriate flag can be passed to
ignore the GPG check.
2018-05-04 15:13:59 +02:00
Ramana Raja 31762dede3 ceph-nfs: disable attribute caching
When 'ceph_nfs_disable_caching' is set to True, disable attribute
caching done by Ganesha for all Ganesha exports.

Signed-off-by: Ramana Raja <rraja@redhat.com>
2018-05-04 09:47:54 +02:00
Sébastien Han 4a186237e6 common: copy iso files if rolling_update
If we are in the middle of an update we want the new package
version to be installed, so the task that copies the repo files should
not be skipped.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1572032
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-05-03 17:18:55 +02:00
Andy McCrae d142be0422 Move apt cache update to individual task per role
The apt-cache update can fail due to transient issues related to the
action being a network operation. To reduce the impact of these
transient failures this patch adds a retry to the update_cache task.

However, the apt_repository tasks which would perform an apt_update
won't retry the apt_update on a failure in the same way, as such this PR
moves the apt_update into an individual task, once per role.

Finally, the apt_repository tasks no longer have a changed_when: false,
and the apt_cache update is only performed once per role, if the
repositories change. Otherwise the cache is updated on the "apt" install
tasks if the cache_timeout has been reached.
2018-05-03 14:02:15 +02:00
Guillaume Abrioux 6fe8df627b client: fix pool creation
The value in `docker_exec_client_cmd` doesn't allow checking for
existing pools because it's set with a wrong value for the entrypoint
that is going to be used.
It means the check was going to fail anyway, even if the pools actually exist.

Using jinja syntax to set `docker_exec_cmd` allows to handle the case
where you don't have monitors in your inventory.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-05-03 08:22:40 +02:00
Sébastien Han 43e23ffe4d mon: change application pool support
If openstack_pools contains an application key it will be used to apply
this application pool type to a pool.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1562220
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-30 09:42:58 +02:00
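
A minimal sketch of the dictionary entry (the pool name and pg count are illustrative):

```
openstack_pools:
  - name: volumes
    pg_num: 64
    application: rbd
```
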
Guillaume Abrioux 75ed437d4e check if pools already exist before creating them
Add a task to check if pools already exist before we create them.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-30 08:15:18 +02:00
Guillaume Abrioux a68091c923 tests: update the type for the rule used in pools
As of ceph 12.2.5 the type of the parameter `type` is not a name anymore but
an id, therefore an `int` is expected, otherwise it will fail with an
error.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-30 08:15:18 +02:00
Sébastien Han 12eebc31fb mon/client: honor key mode when copying it to other nodes
The last mon creates the keys with a particular mode; while copying them
to the other mons (first and second) we must re-use the mode that was
set.

The same applies for the client node, the slurp preserves the initial
'item' so we can get the mode for the copy.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han 74494253fa mon: remove redundant copy task
We had twice the same task, also one was overriding the mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han 85732d11b9 mon/client: remove acl code
Applying ACL on the keyrings is not used anymore so let's remove this
code.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Sébastien Han cfe8e51d99 mon/client: apply mode from ceph_key
Do not use a dedicated task for this but use the ceph_key module
capability to set file mode.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 18:34:58 +02:00
Di Xu 113eb25424 add AArch64 to supported architectures
Works on the AArch64 platform.
2018-04-23 10:23:21 +02:00
Sébastien Han 949507d304 mon: remove mgr key from ceph_config_keys
This key is created after the last mon is up so there is no need to try
to push it from the first mon. The initial mon container does not create
the mgr key; ansible does. So this key will never exist there.
The key will go into the fetch dir once the last mon is up, then when
the ceph-mgr role plays it will try to get it from the fetch directory.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 10:17:24 +02:00
Sébastien Han 35c1eb7183 mon: remove mon map from ceph_config_keys
During the initial bootstrap of the first mon, the monmap file is
destroyed so it's not available and ansible will never find it.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-23 10:17:24 +02:00
Sébastien Han 65ba85aff6 Expose /var/run/ceph
Useful for software that does data collection/monitoring, like collectd.
It can connect to the socket and then retrieve information.

Even though the sockets are exposed now, I'm keeping the docker exec to
check the socket, this will allow newer version of ceph-ansible to work
with older versions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1563280
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han bf1e70e8cf default: extend ceph_uid and gid
We now have the ability to detect the uid/gid of the ceph user depending
on the distribution we are running on, including when we are doing
non-container deployments.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han f3656ad167 move create ceph initial directories to default
This is needed for both non-container and container deployments.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-20 15:48:32 +02:00
Sébastien Han 641f141c0f selinux: remove chcon calls
We now bind-mount with the :z option at the end of the -v command, so
this will basically run the exact same command as we used to run,
namely:

chcon -Rt svirt_sandbox_file_t /var/lib/ceph

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-19 14:59:37 +02:00
Sébastien Han 90e47c5fb0 client: add a --rm option to run the container
This fixes the case where the playbook died and never removed the
container. So now, once the container exits it will remove itself from
the container list.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-19 14:59:37 +02:00
Sébastien Han 6c742376fd client: import the key in ceph if copy_admin_key is true
If the user has set copy_admin_key to true we assume he/she wants to
import the key in Ceph and not only create the key on the filesystem.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 17:46:54 +02:00
Sébastien Han 424815501a client: add quotes to the dict values
ceph-authtool does not support raw arguments so we have to quote caps
declarations, like this: allow 'bla bla' instead of allow bla bla

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1568157
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 17:46:54 +02:00
Sébastien Han d2a2793cb0 refactor the way we copy keys
This commit does a couple of things:

* use a common.yml file that contains things that can be played on both
container and non-container

* refactor the ability to copy the admin key to the nodes

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-18 16:46:33 +02:00
Randy J. Martinez 127a643fd0 ceph-defaults: fix ceph_uid fact on container deployments
Red Hat is now using tags[3,latest] for image rhceph/rhceph-3-rhel7.
Because of this, the ceph_uid conditional passes for Debian
when 'ceph_docker_image_tag: latest' on RH deployments.
I've added an additional task to check for rhceph image specifically,
and also updated the RH family task for ceph/daemon [centos|fedora] tags.

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
2018-04-17 16:54:51 +02:00
Sébastien Han a98885a71e rhcs: re-add apt-pinning
When installing rhcs on Debian systems the Red Hat repos must have the
highest priority so we avoid package conflicts and install the rhcs
version.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1565850
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-17 16:07:06 +02:00
Guillaume Abrioux 899b0eb451 defaults: check only 1 time if there is a running cluster
There is no need to check for a running cluster n*nodes times in
`ceph-defaults` so let's add a `run_once: true` to save some resources
and time.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-16 11:23:00 +02:00
Sébastien Han 5bbbce527e osd: do not do anything if the dev has a partition
Regardless of whether the partition is 'ceph' or something else, we don't
want to be as strict as checking for a particular partition.
If the drive has a partition, we just don't do anything.

This solves the case where the server reboots and disks get a different
/dev/sda (node) allocation. In this case, prior to restarting the server
/dev/sda was an OSD, but now it's /dev/sdb and the other way around.
In such a scenario, we would try to prepare the OSD and create a new
partition, so let's not mess around with devices that have partitions.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1498303
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-13 19:11:15 +02:00
Sébastien Han 37117071eb common: add tools repo for iscsi gw
To install iscsi gw packages we need to enable the tools repo.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1547849
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-12 13:38:34 +02:00
Douglas Fuller c8573fe0d7 Remove deprecated allow_multimds
allow_multimds will be officially deprecated in Mimic, specify it
only for all versions of Ceph where it was declared stable. Going
forward, specify only max_mds.

Signed-off-by: Douglas Fuller <dfuller@redhat.com>
2018-04-12 10:29:17 +02:00
vasishta p shastry 020e66c1b4 Fixed a typo (extra space) 2018-04-11 14:21:15 +02:00
vasishta p shastry e1a1f81b6f osd: to support copy_admin_key 2018-04-11 14:21:15 +02:00
vasishta p shastry db3a5ce6d9 mds: to support copy_admin_keyring 2018-04-11 14:21:15 +02:00
vasishta p shastry 6b59416f75 nfs: to support copy_admin_key - containerized 2018-04-11 14:21:15 +02:00
Ali Maredia 01c58695fc nfs: ensure nfs-server server is stopped
NFS-ganesha cannot start if the nfs-server service
is running. This commit stops nfs-server in case it
is running on a (debian, redhat, suse) node before
the nfs-ganesha service starts up.

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1508506

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2018-04-11 14:00:48 +02:00
Ramana Raja 4a430ae29a ceph-nfs: allow disabling ganesha caching
Add a variable, ceph_nfs_disable_caching, that if set to true
disables ganesha's directory and attribute caching as much as
possible.

Also, disable caching done by ganesha, when 'nfs_file_gw'
variable is true, i.e., when Ganesha is used as CephFS's gateway.
This is the recommended Ganesha setting as libcephfs already caches
information. And doing so helps avoid cache incoherency issues
especially with clustered ganesha over CephFS.

Fixes: https://tracker.ceph.com/issues/23393

Signed-off-by: Ramana Raja <rraja@redhat.com>
2018-04-11 13:56:40 +02:00
Sébastien Han 82ccbdafbc ceph-defaults: bring backward compatibility for old syntax
If people keep on using mon_cap, osd_cap, etc., the playbook will
translate this old syntax on the fly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-11 12:18:34 +02:00
Sébastien Han 9657e4d6fa ceph_key: use ceph_key in the playbook
Replaced all the occurrences of raw commands using the 'command' module
with the ceph_key module instead.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-11 12:18:34 +02:00
Guillaume Abrioux 66c4118dcd defaults: fix backward compatibility
Backward compatibility with `ceph_mon_docker_interface` and
`ceph_mon_docker_subnet` was not working since there was no lookup on
`monitor_interface` and `public_network`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-10 00:19:11 +02:00
Ken Dreyer 3752cc6f38 common: upgrade/install ceph-test RPM first
Prior to this change, if a user had ceph-test-12.2.1 installed, and
upgraded to ceph v12.2.3 or newer, the RPM upgrade process would
fail.

The problem is that the ceph-test RPM did not depend on an exact version
of ceph-common until v12.2.3.

In Ceph v12.2.3, ceph-{osdomap,kvstore,monstore}-tool binaries moved
from ceph-test into ceph-base. When ceph-test is not yet up-to-date, Yum
encounters package conflicts between the older ceph-test and newer
ceph-base.

When all users have upgraded beyond Ceph < 12.2.3, this is no longer
relevant.
2018-04-09 18:09:52 +02:00
Sébastien Han bb60f2fea4 ceph-defaults: fix ceph_uid for container image tag latest
According to our recent change, we now use "CentOS" as the latest
container image. We need to reflect this on the ceph_uid.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-09 13:54:55 +02:00
Zack Cerza 0123d790cd Use the CentOS repo for Red Hat dev packages
No use even trying to use something that doesn't exist.

Signed-off-by: Zack Cerza <zack@redhat.com>
2018-04-09 10:05:57 +02:00
Attila Fazekas ecd3563c21 Deploying without managed monitors failed
Tripleo deployment failed when the monitors are not managed
by tripleo itself, with:
    FAILED! => {"msg": "list object has no element 0"}

The failing play item was introduced by
 f46217b69a .

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1552327

Signed-off-by: Attila Fazekas <afazekas@redhat.com>
2018-04-04 18:16:46 +02:00
Guillaume Abrioux dcf6a246a4 defaults: remove `run_once: true` when creating fetch_directory
Because of `serial: 1`, it can be an issue when the playbook is being
run on client nodes.
Since the refactor of `ceph-client` we skip the role `ceph-defaults` on
every node except the first client node, which means the task is not
going to be played because of `run_once: true`.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux 18c0c7a508 config: use fact `ceph_uid`
Use fact `ceph_uid` in the task which ensures `/etc/ceph` exists in
containerized deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux 9c979c6390 clients: refact `ceph-clients` role
This commit refactors the role so we don't have to pull a container image
on client nodes just to create pools and keys.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1550977

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux cefd471967 client: remove legacy code
This seems to be a leftover.
This commit removes an unnecessary 'set linux permissions' on
`/var/lib/ceph`

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Guillaume Abrioux cf27c5e941 move selinux check to `ceph-defaults`
This check is alone in `ceph-docker-common` since a previous code
refactor.
Moving this check into `ceph-defaults` allows us to run `ceph-clients`
without having to run `ceph-docker-common`, even in non-containerized
deployments.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-04-04 10:51:17 +02:00
Sébastien Han f3caee8460 ceph-iscsi: fix certificates generation and distribution
Prior to this patch, the certificates were being generated on a single
node only (because of the run_once: true). Thus certificates were not
distributed to all the gateway nodes.

This would require a second ansible run to work. This patch fixes the
creation and distribution of the keys on all the nodes.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540845
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-04-04 09:27:39 +02:00
Randy J. Martinez ca572a11f1 ceph-mds: delete duplicate tasks which cause multimds container deployments to fail.
This update resolves the error ['cephfs' is undefined.] in multimds container deployments.
See: roles/ceph-mon/tasks/create_mds_filesystems.yml. The same last two tasks are present there, and actually need to happen in that role since "{{ cephfs }}" gets defined in
roles/ceph-mon/defaults/main.yml, and not roles/ceph-mds/defaults/main.yml.

Signed-off-by: Randy J. Martinez <ramartin@redhat.com>
2018-03-29 09:32:40 +02:00
Alfredo Deza 3fcf966803 ceph-osd note that some scenarios use ceph-disk vs. ceph-volume
Signed-off-by: Alfredo Deza <adeza@redhat.com>
2018-03-29 09:11:33 +02:00
John Fulton e6e6bd078a Refer to expected-num-ojects as expected_num_objects, not size
Follow up patch to PR 2432 [1] which replaces "size" (sorry if
the original bug used that term, which can be confusing) with
expected_num_objects as is used in the Ceph documentation [2].

[1] https://github.com/ceph/ceph-ansible/pull/2432/files
[2] http://docs.ceph.com/docs/jewel/rados/operations/pools
2018-03-26 15:41:51 +02:00
Ning Yao 691ddf5349 cleanup osd.conf.j2 in ceph-osd
The osd crush location is set by ceph_crush in the library;
osd.conf.j2 is not used any more.

Signed-off-by: Ning Yao <yaoning@unitedstack.com>
2018-03-26 15:57:37 +08:00
Patrick Donnelly 7f91547304 setup cephx keys when not nfs_obj_gw
Copy the admin key when configured nfs_file_gw (but not nfs_obj_gw). Also,
copy/setup RGW related directories only when configured as nfs_obj_gw.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-03-22 14:01:08 +01:00
Andrew Schoen 6cffbd5409 ceph-defaults: set is_atomic variable
This variable is needed for containerized clusters and is required for
the ceph-docker-common role. Typically the is_atomic variable is set in
site-docker.yml.sample though so if ceph-docker-common is used outside
of that playbook it needs set in another way. Moving the creation of
the variable inside this role means playbooks don't need to worry
about setting it.

fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1558252

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2018-03-21 19:16:11 +01:00
Andy McCrae 388562a4af Simplify ceph.conf generation
Since the approach to creating a ceph.conf file has changed, and now
no longer relies on assembling config file fragments in /etc/ceph/ceph.d,
we can avoid the conf_overrides rendering on the local host and skip out
the tasks related to that, instead using just the config_template task
to configure the file directly.
2018-03-15 15:47:41 +01:00
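For reference, a hedged sketch of a config_template invocation as described;
the config_overrides/config_type parameters follow the plugin's usual
interface, and the file names are illustrative:

    - name: generate ceph configuration file
      config_template:
        src: ceph.conf.j2
        dest: /etc/ceph/{{ cluster }}.conf
        owner: ceph
        group: ceph
        mode: '0644'
        config_overrides: "{{ ceph_conf_overrides }}"
        config_type: ini
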
Sébastien Han e3275c1ca1 osd: add fs.aio-max-nr tuning
The number of OSDs per node is limited by fs.aio-max-nr; the default is
low, so we need to increase it.

Full story:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-August/020408.html

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1553407
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
Sébastien Han f432819c1e osd: apply sysctl right away
Without sysctl_set: yes, the sysctl tuning will only get applied to
sysctl.conf but not on the fly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
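Combining this and the previous commit, a sketch of the tuning task; the
value and sysctl_file path are illustrative, not necessarily the role's
defaults:

    - name: apply OSD aio tuning immediately and persistently
      sysctl:
        name: fs.aio-max-nr
        value: '1048576'          # illustrative value; the role's default may differ
        state: present
        sysctl_file: /etc/sysctl.d/ceph-tuning.conf
        sysctl_set: yes           # also apply on the fly, not just in the file
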
Sébastien Han 0f8a4251ba move system tuning to osd role
The changes from these tasks only apply to osd nodes so there is no
reason to have them in ceph-common.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-15 14:06:26 +01:00
Sébastien Han f119b25bbe client: implement proper pools creation
Just like we did for the monitor and openstack_config we now have the
ability to precisely create pools.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han e302c1baae mon: add support for erasure code pool
You can now specify type: erasure and the erasure_profile to use when
declaring the pool dictionary.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
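A sketch of what such a pool declaration could look like; the variable and
key names are assumptions:

    pools:
      - name: ecpool
        pg_num: 16
        type: erasure
        erasure_profile: myprofile   # profile must exist before pool creation
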
Sébastien Han 277d885bc9 mon: add support for pgp, pool type and rule name
When creating pools, it's crucial to expose all the options available as
part of the pool creation command. As explained in:
http://docs.ceph.com/docs/jewel/rados/operations/pools/

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
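A hedged sketch of a creation task exposing those options, following the
jewel-era CLI's positional arguments; erasure pools would additionally pass
the erasure profile after the type:

    - name: create pools with explicit pgp, type and rule
      command: >
        ceph --cluster {{ cluster }} osd pool create {{ item.name }}
        {{ item.pg_num }} {{ item.pgp_num | default(item.pg_num) }}
        {{ item.type | default('replicated') }}
        {{ item.rule_name | default('') }}
      with_items: "{{ pools }}"
      changed_when: false
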
Sébastien Han 26bc00fb74 mon: fail if pool creation fails
There is no reason to continue the deployment if these tasks fail.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1546185
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
Sébastien Han 0011edd2bc mon: add support for expected-num-objects
This commit adds the support for expected-num-objects when creating a pool.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1541520
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
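A sketch of the corresponding pool key (the key name is an assumption); per
the jewel-era CLI, the count is passed as a trailing positional argument to
`ceph osd pool create`:

    pools:
      - name: images
        pg_num: 32
        expected_num_objects: 1000000   # appended after the rule name at creation time
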
Sébastien Han 18402b636f defaults: add useful info if daemon are not restarted properly
If OSDs don't restart normally we now also dump info of the crush map,
crush rules, crush tree and pools.

If the monitors don't restart normally we also print the socket status
by calling mon_status and quorum_status.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-14 14:22:00 +01:00
jtudelag 691f7c5146 Adds handy ceph aliases for containerized installations.
Same approach as openshift-ansible etcdctl:

* https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/etcd/tasks/auxiliary/drop_etcdctl.yml
* https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/etcd/etcdctl.sh
2018-03-08 13:56:39 +01:00
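A minimal sketch of such an alias drop-in, assuming the ceph-mon-<hostname>
container naming convention; the profile.d path is an assumption:

    - name: add ceph CLI aliases on containerized hosts
      copy:
        dest: /etc/profile.d/ceph-aliases.sh   # path is illustrative
        mode: '0644'
        content: |
          alias ceph='docker exec ceph-mon-{{ ansible_hostname }} ceph'
          alias rados='docker exec ceph-mon-{{ ansible_hostname }} rados'
      when: containerized_deployment | bool
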
Guillaume Abrioux 9181c94adf client: fix pgs num for client pool creation
The `pools` dict defined in `roles/ceph-client/defaults/main.yml`
shouldn't have `{{ ceph_conf_overrides.global.osd_pool_default_pg_num
}}` as default value for `pgs` keys.

For instance, if you want some pools to be created without explicitly
specifying the pgs for these pools (meaning you want to use
`osd_pool_default_pg_num`), you would be obliged to define
`{{ ceph_conf_overrides.global.osd_pool_default_pg_num }}` anyway, even though
you wanted to use the current default value already defined in the cluster,
which is retrieved early in the playbook and stored in the
`{{ osd_pool_default_pg_num }}` fact.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-03-07 11:18:04 +01:00
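After this change, a pool entry can simply lean on the fact, e.g.
(illustrative names):

    pools:
      - name: glance
        pgs: "{{ osd_pool_default_pg_num }}"   # fact gathered from the live cluster early in the play
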
Sébastien Han 96c049be5b common: run updatedb task on debian systems only
The command doesn't exist on Red Hat systems so it's better to skip it
instead of ignoring the error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
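A sketch of the guarded task (ansible_os_family == 'Debian' covers both
Debian and Ubuntu):

    - name: run updatedb, excluding /var/lib/ceph
      command: updatedb -e /var/lib/ceph
      changed_when: false
      when: ansible_os_family == 'Debian'
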
Sébastien Han a52ed43093 mon: fix osd_pool_default_crush_rule persistence and effectiveness
Running the last portion (insert new default and add new default crush
tasks) of crush_rules.yml only on the last monitor is
wrong, since ceph CLI calls usually end up on the monitor holding the
quorum, which is by default the one with the lowest IP.
So if we run the command and end up on another mon, the creation will
happen with the default crush rule because that particular mon hasn't been
updated.
To fix this we remove the |last on the include and use run_once: true on
certain tasks, then we let the final two tasks run on all the monitors.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 47cef7a41d mon: fix set crush default rule
On releases after jewel the option
'osd_pool_default_crush_replicated_ruleset' does not exist anymore;
it's called osd_pool_default_crush_rule.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 3261ab23b8 osd: remove old crush_location implementation
This was causing a lot of pain with the handlers. Also the
implementation was not ideal since we were assembling files. Everything
can now be done with the ceph_crush module so let's remove that.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
Sébastien Han 73c4846744 mon: use ceph_crush module in the playbook
Instead of creating the CRUSH hierarchy with Ansible tasks using the
command module we now rely on the ceph_crush module.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-06 15:24:31 +00:00
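A hedged sketch of a ceph_crush invocation; the parameter names follow the
module as described here and may differ from the actual library code:

    - name: create crush hierarchy via the ceph_crush module
      ceph_crush:                      # ceph-ansible's bundled library module
        cluster: "{{ cluster }}"
        location: "{{ hostvars[item]['osd_crush_location'] }}"   # key name is an assumption
      with_items: "{{ groups['osds'] }}"
      run_once: true
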
Greg Charot 78c1f1938f mons: Current crush_rule playbook does not work if there is no default rule defined (default: true).
One could want to add new crush rules while keeping one's current default rule.
Fixed it so that it works with all rules defined as "default: false". If multiple rules are defined as default (which should not happen), then the last rule listed in "crush_rules" is taken as the default.
2018-03-06 15:24:31 +00:00
Greg Charot 77f9c1df10 no reason the default crush_rule_hdd rule provided by ceph-ansible should be set as rack root + default ruleset 2018-03-06 15:24:31 +00:00
Greg Charot 50afc3fbf3 We don't want to automatically move the rbd pool to the new default crush rule. This operation shall be performed by the cluster operator. 2018-03-06 15:24:31 +00:00
Andy McCrae 04ca685ba7 Remove vars that are no longer used
As part of fcba2c801a these vars were
removed and no longer do anything:

radosgw_dns_name
radosgw_resolve_cname

This patch removes them from the group_vars files and defaults/main.yml
2018-03-06 09:16:25 +01:00
jtudelag c3267b77b7 Makes use of docker_exec_cmd in ceph-mon role.
Keeps consistency inside the role and among roles.
Makes the code more readable.
2018-03-05 12:48:35 +00:00
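The pattern boils down to computing the exec prefix once and reusing it in
every CLI call; a sketch, not the role's exact tasks:

    - name: set docker_exec_cmd fact
      set_fact:
        docker_exec_cmd: "docker exec ceph-mon-{{ ansible_hostname }}"
      when: containerized_deployment | bool

    - name: get ceph status
      command: "{{ docker_exec_cmd | default('') }} ceph --cluster {{ cluster }} -s"
      changed_when: false
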
Sébastien Han cb0f598965 common: run updatedb task on debian systems only
The command doesn't exist on Red Hat systems so it's better to skip it
instead of ignoring the error.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 7f19df8196 rgw: add cluster name option to the handler
If the cluster name is different from 'ceph', the command will fail, so
we need to pass the cluster name.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 9c85280602 rgw: ability to copy ceph admin key on containerized
If we now set copy_admin_key while running a containerized scenario, the
ceph admin key will be copied on the node.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Sébastien Han 67f46d8ec3 rgw: run the handler on a mon host
In case the admin key wasn't copied over to the node, this command would
fail, so it's safer to run it from a monitor directly.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-03-02 20:59:10 +00:00
Guillaume Abrioux 6d35bc9bde client: use `ceph_uid` fact to set uid/gid on admin key
That task is failing on containerized deployments because `ceph:ceph`
doesn't exist.
The idea here is to use `{{ ceph_uid }}` to set the ownership of
the admin keyring when containerized_deployment is set.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1540578

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2018-02-26 15:52:05 +01:00
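A sketch of the fact and its use; 64045 and 167 are the ceph UIDs in
Debian-based and Red Hat-based images respectively, while the image-tag test
shown is an assumption:

    - name: set ceph_uid fact
      set_fact:
        # tag check is illustrative; the real condition may inspect the image differently
        ceph_uid: "{{ 64045 if 'ubuntu' in ceph_docker_image_tag else 167 }}"

    - name: set admin keyring ownership
      file:
        path: /etc/ceph/{{ cluster }}.client.admin.keyring
        owner: "{{ ceph_uid }}"
        group: "{{ ceph_uid }}"
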
Grant Slater 1e1b26ca4d mds: fix ansible_service_mgr typo
This commit fixes a typo introduced by 4671b9e74e
2018-02-26 13:05:14 +01:00
Andy McCrae c33dae7509 Revert "[TEST] Test setting up correct systemd file for nfs-ganesha"
The nfs-ganesha package has been fixed as part of this commit:
963b6681df

Once the package is rebuilt this should be good to merge.

This reverts commit e88af3c4cb.
2018-02-26 10:23:42 +01:00
Giulio Fidente a83e1aeea3 Make rule_name optional when defining items in openstack_pools
Previously it was necessary to provide a value (possibly an
empty string) for the "rule_name" key for each item in
openstack_pools. This change makes that optional and defaults to an
empty string when not given.
2018-02-23 15:11:53 +01:00
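The change boils down to a default() filter at the point where the rule
name is consumed, e.g.:

    {{ item.rule_name | default('') }}
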
Sébastien Han 165d9dec10 remove kernel.pid_max
This is now managed by Ceph packages.

See: https://github.com/ceph/ceph/pull/18544/files

http://tracker.ceph.com/issues/21929

Closes: https://github.com/ceph/ceph-ansible/issues/2410

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-23 13:57:57 +01:00
Andy McCrae 2779d2a850 Adjust /etc/updatedb.conf to not parse /var/lib/ceph
Using updatedb -e doesn't make a permanent change; it only runs updatedb
without the passed path.

To make this change permanent we should update the
/etc/updatedb.conf file to include /var/lib/ceph in PRUNEPATHS.
2018-02-20 11:32:56 +01:00
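A sketch of a permanent exclusion; the full path list shown is illustrative
only, since a real task should preserve the distro's existing PRUNEPATHS
entries rather than overwrite them:

    - name: ensure /var/lib/ceph is in updatedb PRUNEPATHS
      lineinfile:
        path: /etc/updatedb.conf
        regexp: '^PRUNEPATHS'
        # illustrative list; keep the distro defaults in a real task
        line: 'PRUNEPATHS="/tmp /var/spool /media /var/lib/ceph"'
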
Andy McCrae e88af3c4cb [TEST] Test setting up correct systemd file for nfs-ganesha
Don't merge this.
Test to see if we copy over the nfs-ganesha-lock.service.debian8 file
properly, and whether the Xenial CI job will work.

The upstream download.ceph.com nfs-ganesha package should be fixed for
xenial (which is in progress).
2018-02-20 10:49:37 +01:00
Paul Bourke 463b5c6b22 Remove redundant task to check if atomic
This fact is already set in site-docker.yml so there's no need to check
it again in ceph-docker-common

Signed-off-by: Paul Bourke <paul.bourke@oracle.com>
2018-02-19 10:10:46 +01:00
Andy McCrae 59a4335a56 Restart services if handler called
This patch fixes an issue where, if hosts have different service lists,
restarts are prevented for services that run later on.

For example, hostA in the mons and rgws group would initiate a config
change and restart of services on all mons and rgws hosts, even though
a separate hostB (which is only in the rgws group) has not had its
configuration changed yet. Additionally, when the second host has its
configuration changed as part of the ceph-rgw role, it will not initiate
a restart since its inventory name != the first host's.

To fix this we should run the restart once (using run_once: True)
as long as the host has called the handler. This will ensure that even
if only 1 host has called the handler it will initiate a restart on all
hosts that have called the handler.

Additionally, we add a var that is set when the handler runs, this will
ensure that only hosts that have called the handler get restarted.

Includes a minor fix to remove the unrequired "inventory_hostname in
play_hosts" when: clause. This is no longer required since the handlers
were changed. The host calling the handler will be in play_hosts
already.
2018-02-16 10:40:20 +01:00
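A hedged sketch of the pattern: the handler marks each host that called it,
then a single delegated pass restarts only the marked hosts (the fact, group
and unit names are hypothetical):

    # handler triggered by a config change: mark this host as needing a restart
    - name: mark host for rgw restart
      set_fact:
        _rgw_handler_called: true

    # single restart pass, executed once but delegated to every marked host
    - name: restart ceph rgw daemon(s)
      command: systemctl restart ceph-radosgw.target
      run_once: true
      delegate_to: "{{ item }}"
      with_items: "{{ groups['rgws'] }}"
      when: hostvars[item]['_rgw_handler_called'] | default(false) | bool
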
Sébastien Han c816a9282c container: osd remove run_once
When used along with delegation, run_once does not behave well. Thus,
using | last always brings the desired result.

Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-14 02:01:29 +01:00
Sébastien Han d47d02a5eb docker-common: fix container restart on new image
We now look for any existing containers; if any are found, we compare their
running image with the latest pulled container image.
For OSDs, we iterate over the list of running OSDs; this handles the
case where the first OSD of the list has been updated (runs the new
image) and not the others.

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1526513
Signed-off-by: Sébastien Han <seb@redhat.com>
2018-02-14 02:01:29 +01:00
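A sketch of such a comparison for a mon container; the container and
variable names follow common ceph-ansible conventions and are assumptions
here:

    - name: get image of the running mon container
      # {% raw %} protects the Go template from Jinja rendering
      shell: docker inspect --format '{% raw %}{{ .Config.Image }}{% endraw %}' ceph-mon-{{ ansible_hostname }}
      register: current_image
      changed_when: false
      failed_when: false

    - name: restart the container if a newer image was pulled
      command: systemctl restart ceph-mon@{{ ansible_hostname }}
      when:
        - current_image.rc == 0
        - current_image.stdout != ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag
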