Commit Graph

60 Commits (cf945d5ac85368fdf9b4ca0fcd5dbaff2424f3df)

Author SHA1 Message Date
Guillaume Abrioux 20946f7220 ceph-osd: remove deprecated comment in sample file
Since #1724 has been merged, this comment is deprecated

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-09-12 16:48:10 +02:00
Sébastien Han 2fa151b9e8 container: introduce resource limitation for containers
This can be controlled via 2 options:

* ceph_$DAEMON_docker_memory_limit
* ceph_$DAEMON_docker_cpu_limit

All daemons default to 1 GB of memory and 1 CPU.
Recommendations from:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
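
A minimal sketch of how these options might be set in group_vars, expanding
$DAEMON to osd and mon and mirroring the stated defaults (the exact memory
unit syntax is an assumption):

  # hypothetical group_vars snippet; values mirror the stated defaults
  ceph_osd_docker_memory_limit: 1g   # unit syntax is an assumption
  ceph_osd_docker_cpu_limit: 1
  ceph_mon_docker_memory_limit: 1g
  ceph_mon_docker_cpu_limit: 1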

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-09-06 14:52:21 +02:00
Sébastien Han e0a264c7e9 osd: allow multi dedicated journals for containers
Fix: https://bugzilla.redhat.com/show_bug.cgi?id=1475820
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-30 12:34:06 +02:00
Andrew Schoen 594d5e017a ceph-osd: restructure lvm_volumes variable for more flexibility
The lvm_volumes variable is now a list of dictionaries that represent
each OSD you'd like to deploy using ceph-volume. Each dictionary must
have the following keys: data, journal and data_vg. Each dictionary can
also optionally provide a journal_vg key.

The 'data' key represents the lv name used for the OSD and the 'data_vg'
key is the vg name that the given lv resides on. The 'journal' key is
either an lv, device or partition. The 'journal_vg' key is optional and,
if given, must be the vg name for the journal lv. This key is mainly used
for purging the journal lv when purge-cluster.yml is run.

For example:

  lvm_volumes:
    - data: data_lv1
      journal: journal_lv1
      data_vg: vg1
      journal_vg: vg2
    - data: data_lv2
      journal: /dev/sdc
      data_vg: vg1

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-23 10:14:14 -05:00
Andy McCrae 4671b9e74e Allow ceph service systemd overrides to be specified
ceph services can fail to start under certain circumstances (for
example, when running in a container) because the default systemd
service configuration causes namespace issues.

To work around this we can override the system service settings by
placing an overrides file in the ceph-<service>@.service.d directory.
This is kept generic so that any change required to the
ceph-<service> service files can be applied.

The overrides file is only set up when the
"ceph_<service>_systemd_overrides" config_template override variable is
specified.

The available service systemd override files are as follows:
ceph_mds_systemd_overrides
ceph_mgr_systemd_overrides
ceph_mon_systemd_overrides
ceph_osd_systemd_overrides
ceph_rbd_mirror_systemd_overrides
ceph_rgw_systemd_overrides
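
As a hedged sketch, an OSD override might be supplied through
config_template like this (the [Service] key shown is an ordinary systemd
option picked for illustration, not something mandated by this change):

  # hypothetical group_vars snippet
  ceph_osd_systemd_overrides:
    Service:
      PrivateDevices: false
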
2017-08-16 17:57:06 +01:00
Andrew Schoen e597628be9 lvm: update scenario for new osd_scenario variable
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:38:36 -05:00
Andrew Schoen b93794bed4 adds a new 'lvm_osds' osd scenario
This scenario will create OSDs using ceph-volume and is only available
in ceph releases greater than Luminous.

Signed-off-by: Andrew Schoen <aschoen@redhat.com>
2017-08-04 06:13:09 -05:00
Sébastien Han 30991b1c0a osd: simplify scenarios
There are only two main scenarios now:

* collocated: everything remains on the same device:
  - data, db, wal for bluestore
  - data and journal for filestore
* non-collocated: dedicated devices for some of the components
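
A hedged sketch of how the two scenarios might look in group_vars, assuming
the osd_scenario selector and dedicated_devices list used elsewhere in this
history (device paths are placeholders):

  # collocated: everything on the same device
  osd_scenario: collocated
  devices:
    - /dev/sdb

  # non-collocated: journal/db/wal on a dedicated device
  osd_scenario: non-collocated
  devices:
    - /dev/sdb
  dedicated_devices:
    - /dev/sdc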

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-08-03 10:20:39 +02:00
Guillaume Abrioux 1d003aa887 merge docker-common and common defaults vars
Merge `ceph-docker-common` and `ceph-common` defaults vars in
`ceph-defaults` role.
Remove redundant variables declaration in `ceph-mon` and `ceph-osd` roles.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-08-02 14:46:51 +02:00
Guillaume Abrioux 94c3756167 Tests: Add bluestore scenarios
Since we started testing against Luminous, we need to add more test
scenarios.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-07-12 15:02:32 +02:00
Sébastien Han 7d657ac643 osd: ability to set db and wal to bluestore
This commit refactors how we deploy bluestore. We have existing
scenarios that we don't want to change too much. This commit eases the
user experience by not changing the way you use scenarios. Bluestore is
just a different interface to store objects, but the scenarios more or
less remain the same.

If you set osd_objectstore == 'bluestore' along with
journal_collocation: true, you will get an OSD running bluestore with DB
and WAL partitions on the same device.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true, you will get an OSD running bluestore with a
dedicated drive for the rocksdb DB, then the remaining
drives (used with 'devices') will have WAL and DATA collocated.

If you set osd_objectstore == 'bluestore' along with
raw_multi_journal: true and declare bluestore_wal_devices, you will get
an OSD running bluestore with a dedicated drive for the rocksdb DB, a
dedicated drive partition for the rocksdb WAL and a dedicated drive for
DATA.
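
As a sketch, this third case might look like the snippet below in
group_vars; the variable names come from the text, the device paths are
placeholders and the exact pairing of the lists is an assumption:

  osd_objectstore: bluestore
  raw_multi_journal: true
  devices:
    - /dev/sdb              # DATA
  raw_journal_devices:
    - /dev/sdc              # rocksdb DB
  bluestore_wal_devices:
    - /dev/sdd              # rocksdb WAL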

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 19:07:16 +02:00
Sébastien Han fc0e54c59e osd: remove redundant options to enable bluestore
There is no need for two variables to enable bluestore. Prior to this
patch, one had to do the following to activate it:

osd_objectstore: bluestore
bluestore: true

Now you just need to set `osd_objectstore: bluestore`.

Fixes: https://github.com/ceph/ceph-ansible/issues/1475
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-07-04 18:22:03 +02:00
Sébastien Han fdc7866072 Merge pull request #1469 from ceph/refact_code
Docker: Refact code
2017-06-02 12:40:25 +02:00
Guillaume Abrioux 0a2048a577 Docker: Remove duplicate var passed to docker-run
Since `-e CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` is already hardcoded in
`ceph-osd-run.sh.j2`, there is no need to add `-e
CEPH_DAEMON=OSD_CEPH_DISK_ACTIVATE` as a default value in the defaults vars.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-06-01 14:31:17 +02:00
Guillaume Abrioux ddfe019342 Refact code
`ceph-docker-common`:
  At the moment there are a lot of duplicated tasks in each
  `./roles/ceph-<role>/tasks/docker/main.yml` that can be refactored into
  `./roles/ceph-docker-common/tasks/main.yml`.

`*_containerized_deployment` variables:
  All `*_containerized_deployment` variables have been refactored into a
  single `containerized_deployment` variable.

Duplicate `cephx` variables in `group_vars/*` have been removed.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-05-24 15:55:41 +02:00
Guillaume Abrioux f0adecf482 Clean osds.yml.sample
Remove duplicate lines in osds.yml default vars file.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-05-24 15:55:41 +02:00
Gregory Meno eb0c83db5f remove osd directory scenario
Neither proof-of-concept clusters nor actual production clusters will ever want to use this. We also do not test it anywhere, for the same reason.

Signed-off-by: Gregory Meno <gmeno@redhat.com>
2017-04-21 15:50:32 -07:00
Christian Zunker 09646041ee Fix osd_crush_location to prevent systemd error message
With ' in osd_crush_location, systemd will show this error:
ceph-osd-prestart.sh[2931]: Invalid command:  invalid chars ' in 'root=
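
A hedged example of a value that avoids the stray single quotes; the exact
quoting convention shown is an assumption, not taken from this commit:

  # hypothetical group_vars snippet
  osd_crush_location: "\"root=default host={{ ansible_hostname }}\""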

Signed-off-by: Christian Zunker <christian.zunker@codecentric.de>
2017-03-17 07:26:40 +01:00
Sébastien Han 72b17d2480 docker: osd, clarify variable usage for scenarios
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-21 15:56:09 -05:00
Sébastien Han b91d227b99 docker: make ceph docker osd script path configurable
Since some distros will not allow /usr/share to be writable (e.g. Atomic),
we let the operator decide where to put that script.

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-21 15:56:09 -05:00
Sébastien Han 73cf0378c2 docker: osd, do not use privileged container anymore
Oh yeah! This patch adds more fine-grained control over how we run the
activation osd container. We now use --device to give read, write and
mknod access to a specific device to be consumed by Ceph. We also use the
SYS_ADMIN cap to allow mount operations; ceph-disk needs to temporarily
mount the osd data directory during the activation sequence.

This patch also enables the support of dedicated journal devices when
deploying ceph-docker with ceph-ansible.

Depends on https://github.com/ceph/ceph-docker/pull/478

Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-21 15:54:36 -05:00
Sébastien Han c2f1dca823 docker: use a better method to pull images
We changed the way we declare images.
Prior to this patch we had to use a "user/image:tag"
format, which is incompatible with non docker-hub registries where you
usually don't have a "user". On the Docker Hub a "user" is also
identified as a namespace, so for Ceph the user was "ceph".

Variables have been simplified with only:

* ceph_docker_image
* ceph_docker_image_tag

1. For docker hub images: ceph_docker_image: "ceph/daemon" will give
you the 'daemon' image of the 'ceph' user.

2. For non docker hub images: ceph_docker_image: "daemon" will simply
give you the "daemon" image.
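
A minimal sketch of both cases using the two simplified variables; the tag
value is a placeholder:

  # 1. Docker Hub image ("user/image" form)
  ceph_docker_image: ceph/daemon
  ceph_docker_image_tag: latest

  # 2. non docker-hub image (no "user" component)
  ceph_docker_image: daemon
  ceph_docker_image_tag: latest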

Infrastructure playbooks have been modified as well.
The file group_vars/all.docker.yml.sample has been removed as well;
it was hard to maintain since we had to generate it manually. If
you want to configure specific variables for a specific daemon, simply
edit group_vars/$DAEMON.yml.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1420207
Signed-off-by: Sébastien Han <seb@redhat.com>
2017-02-09 17:57:18 +01:00
Daniel Marks fefaa8ed13 Set empty list as default for osd_directories
As described in issue #1224, leaving this variable undefined may
cause a problem during execution of the ceph-osd role.
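
A one-line sketch of the assumed default this introduces:

  osd_directories: []
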
2017-01-13 15:27:16 +01:00
Sébastien Han 2d8ac4a586 docker: only use systemd to manage containers
Prior to this patch we had several ways to run containers: we could use
ansible's docker module on some distros, and on container distros we were
using systemd. We strongly believe treating containers as services with
systemd is the right approach, so this patch generalizes it to all the
distros. These days most distros are running systemd, so it's a fair
assumption.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-12-16 19:37:05 +01:00
Sébastien Han ce7431a227 docker: add support for cluster name
We need to honour the cluster name that was chosen by ceph-ansible and
pass it to ceph-docker.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-12-16 14:31:21 +01:00
Sébastien Han 75cb749570 docker: consolidate ceph-ansible and ceph-docker variables
This commit re-uses some of the existing ceph-ansible variables for a
containerized deployment. There is no reason why we should add new
variables for the containerized deployment.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-12-09 14:39:05 +01:00
Sébastien Han a2fcd222d2 moving to ansible v2.2 compatibility
Signed-off-by: Sébastien Han <seb@redhat.com>
Co-Authored-By: Julien Francoz julien@francoz.net
2016-11-04 10:09:38 +01:00
Eduard Egorov 4895c2864e Make {{ raw_journal_devices }} list optional: define it as an empty list by default, remove unnecessary 'default([])' checks
Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2016-11-01 09:57:25 +00:00
Eduard Egorov f33c1cd2d2 Make {{ devices }} list optional: define it as an empty list by default, remove unnecessary 'default([])' checks
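
Together with the previous commit, the assumed defaults now read:

  devices: []
  raw_journal_devices: []
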
Signed-off-by: Eduard Egorov <eduard.egorov@icl-services.com>
2016-11-01 09:57:25 +00:00
Sébastien Han 9753e29bae osd: clarify osd scenarios
Co-Authored-By: Rachana Patel <racpatel@redhat.com>
Signed-off-by: Sébastien Han <seb@redhat.com>
2016-10-06 12:01:29 +02:00
Sébastien Han 708c43a04e docker: fix osd configuration
use the activation scenario instead of the full ceph_disk one; we
already have a task to prepare osds, so we just need to activate the
device.

working for me using vagrant :)

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-24 09:05:14 +02:00
Sébastien Han 673f54a100 osd: fix collocation spelling and declare dmcrypt variables
* changed s/colocation/collocation/
* declare dmcrypt variable in ceph-common so the variables check does
not fail

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-10 10:34:23 +02:00
Sébastien Han 788b75efec ceph-osd: re-arrange osd scenario numbers
Signed-off-by: Sébastien Han <seb@redhat.com>
2016-08-08 15:55:12 +02:00
Sébastien Han 5978d55d22 ceph-osd: add dmcrypt scenario
add the ability to encrypt osd data store using dm-crypt

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-07-19 18:02:11 +02:00
pprokop 3950751317 Adding an option to choose an etcd port and the docker image tag 2016-07-13 10:19:50 +02:00
Ivan Font 6f5f6610a8 Support for docker image tags
Signed-off-by: Ivan Font <ivan.font@redhat.com>
2016-07-12 15:49:07 -07:00
Huamin Chen c4f9bf8d13 update osd default vars
Signed-off-by: Huamin Chen <hchen@redhat.com>
2016-06-03 16:42:22 +00:00
Leseb 4c31c3bcb5 Merge pull request #661 from PiotrProkop/ceph-osd-kv
Adding ceph-osd containerized deployment with kv store.
2016-03-29 14:15:40 +02:00
pprokop ec9a96e570 Adding ceph-osd containerized deployment with kv store 2016-03-29 10:23:31 +02:00
Sébastien Han 225e066db2 ceph-osd: add support for bluestore
With Jewel comes a new store for Ceph objects: BlueStore. Adding an
extra scenario might seem like useless duplication; however, the
ultimate goal is to remove the other roles later. Thus it is easier to
add a new role instead of modifying an existing one. Once we drop support
for releases older than Jewel we will just remove all the previous
scenario files.

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-03-25 16:02:02 +01:00
Sébastien Han b0f56590e0 docker: fix tons of issues
Signed-off-by: Sébastien Han <seb@redhat.com>
2016-03-24 17:55:21 +01:00
Chris St. Pierre c4a9b1020f Generate group_vars samples automagically
This adds a script, generate_group_vars_sample.sh, that generates
group_vars/*.sample from roles/ceph-*/defaults/main.yml to avoid
discrepancies between the sets of files. It also converts the line
endings in the various main.yml from DOS to Unix, since generating the
samples was spreading the line ending plague around to more files.
2016-02-29 12:07:01 -06:00
Sébastien Han bb55860a7a ceph-: ability to copy admin key to all the nodes
This commit allows you to set a new variable to 'true' if you want to
have the ceph admin key copied over to different kinds of hosts such as MDS,
OSD, RGW. To enable this, just set `copy_admin_key` to true.
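
For example, a minimal group_vars snippet would be:

  copy_admin_key: true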

Closes: #555

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-02-25 12:04:17 +01:00
Sébastien Han a3cc055e61 ceph-osd: docker: fix typo
use ceph_osd_docker_devices and not ceph_osd_docker_device

Signed-off-by: Sébastien Han <seb@redhat.com>
2016-02-11 17:57:10 +01:00
Sébastien Han d7c17812dd Ability to collocate bare metal and container
Since we renamed the variables and removed the old 'docker' variable, we
can now collocate container daemons with a standard bare metal deployment.
For instance, monitors can be containerized but osds can be deployed
traditionally.

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-10-21 23:18:22 +02:00
Matt Thompson afc934d22a Make fetch directory configurable
Currently, the fetch directory is created in your working directory
(where ansible is run from).  We prefer to not keep any state in this
directory and would prefer to have the fetch directory configurable so
we can store it outside of our code checkout.

This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.
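
A hedged sketch of overriding it to keep state outside the checkout (the
path is a placeholder; the default remains 'fetch/'):

  fetch_directory: /srv/ceph-ansible/fetch/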

Closes issue #383
2015-08-27 16:49:50 +01:00
Sébastien Han 0496a3e0d4 Remove zap variables
Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-06 17:34:25 +02:00
Sébastien Han 6d0e8777e2 Re-arrange docker invocation and fix bootstrap
Signed-off-by: Sébastien Han <seb@redhat.com>
2015-07-28 16:05:35 +02:00
leseb 7fdc2b1d36 Use more variable checks
Fail early if a variable is not defined.

Signed-off-by: leseb <seb@redhat.com>
2015-07-03 21:38:30 +02:00
leseb 50d1f73afe Change default options
We want to force the user to only enable the options they need. Thus
they shouldn't have to enable one option and then disable another.

Signed-off-by: leseb <seb@redhat.com>
2015-07-03 18:38:30 +02:00