When multiple monitor hosts attempt to create the fetch directory there
is the potential for the task to fail with:
"OSError: [Errno 17] File exists: 'fetch'"
This appears to be an issue with the file module trying to create the
same directory at the same time when the task has been delegated to a
single host.
This commit enables run_once on the affected task which should address
the issue.
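A rough sketch of the change (task name and path are illustrative, not
the exact role code):

```
# Create the local fetch directory once, instead of letting every
# monitor host race on the same delegated task.
- name: create a local fetch directory if it does not exist
  file:
    path: fetch/
    state: directory
  delegate_to: localhost
  run_once: true
```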
This is a rare case but it happens. Since we're just calling
`monitor_interface` and not `hostvars[host]['monitor_interface']`,
an error may occur when the current host's interface does not
exist on the other hosts (e.g. eth0 exists on node0, but not on
node1 and node2).
The fix is to use `hostvars[host]['monitor_interface']`.
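For illustration, a hedged sketch of the template fix (the surrounding
fact names are assumed, not the exact role code):

```
# Look up the interface fact of the host being iterated over,
# not the interface of the host rendering the template.
mon addr = {{ hostvars[host]['ansible_' + hostvars[host]['monitor_interface']]['ipv4']['address'] }}
```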
I'm removing the ceph partition check from `activate osd(s) when device
is a disk` because the ceph partition does not exist when parted was
registered (on a fresh install). This was causing the activate step to
be skipped.
Currently the OpenStack pools that get created use the default pg_num.
This commit updates the ceph-mon role to allow the pg_num for each pool
to be customised.
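A minimal sketch of the idea, using hypothetical variable and pool
names (the real defaults may differ):

```
# group_vars sketch: each OpenStack pool carries its own pg_num.
openstack_glance_pool:
  name: images
  pg_num: 128
openstack_cinder_pool:
  name: volumes
  pg_num: 128

# ceph-mon task sketch: create each pool with its configured pg_num.
- name: create openstack pool(s)
  command: ceph osd pool create {{ item.name }} {{ item.pg_num }}
  with_items:
    - "{{ openstack_glance_pool }}"
    - "{{ openstack_cinder_pool }}"
```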
Fix the rolling update playbook again.
However, every single time the playbook runs it will check for new
packages and install the latest ones. I don't think this is always the
desired behaviour. We need to find a way to reconcile both...
Signed-off-by: Sébastien Han <seb@redhat.com>
Fix the logic for the mandatory devices check so that it applies to
raw_multi_journal and journal_collocation scenarios separately.
This fails otherwise because whichever var is "first" in the or is most
likely undefined.
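A hedged sketch of the separated checks (variable names and messages
assumed):

```
# One guard per scenario, so an undefined variable in one scenario
# can no longer short-circuit the check for the other.
- name: make sure mandatory devices are declared (raw multi journal)
  fail:
    msg: "please declare devices and raw_journal_devices"
  when: raw_multi_journal and (devices is not defined or raw_journal_devices is not defined)

- name: make sure mandatory devices are declared (journal collocation)
  fail:
    msg: "please declare devices"
  when: journal_collocation and devices is not defined
```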
This will likely break something one day or another. If ceph-disk
complains about a disk, just use the purge-cluster.yml playbook first as
it will wipe all the devices.
Signed-off-by: Sébastien Han <seb@redhat.com>
I'm currently getting a KeyError due to missing 'dependencies' on this
role when I attempt to install it with ansible-galaxy (ansible 1.9.2).
This commit simply defines an empty dependencies list so that
ansible-galaxy executes correctly.
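The whole fix, in the role's meta/main.yml:

```
# An empty list is enough to keep ansible-galaxy from raising
# KeyError: 'dependencies'.
dependencies: []
```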
Cool stuff :). We don't need to specify an initial monitor key anymore.
A key will automatically be generated.
The default key can always be overridden with the `monitor_secret`
variable.
Signed-off-by: leseb <seb@redhat.com>
We don't always have a dedicated cluster network, so by default we can
re-use the public network value.
This is just laziness :).
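A hedged sketch of the default (variable names assumed):

```
# Fall back to the public network when no dedicated cluster network is set.
cluster_network: "{{ public_network }}"
```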
Signed-off-by: leseb <seb@redhat.com>
While re-running the playbook we do not want to check for new packages.
We shouldn't perform upgrades, we leave this to the operators.
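A sketch of the likely change, assuming a yum-based install task
(module and package name illustrative):

```
- name: install ceph
  yum:
    name: ceph
    state: present   # 'latest' would check for and pull upgrades on every run
```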
Signed-off-by: leseb <seb@redhat.com>
Prior to this change, the zap was executed during every play, this was
not ideal. Now we do check if there is a 'ceph' partition. If so we skip
the zap.
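A hedged sketch of the guard (register and variable names assumed):

```
# Only zap a device when parted finds no existing 'ceph' partition on it.
- name: check if the device contains a ceph partition
  shell: parted --script {{ item }} print | grep -sq ceph
  with_items: devices
  register: parted
  ignore_errors: true

- name: zap the device
  command: ceph-disk zap {{ item.item }}
  with_items: parted.results
  when: item.rc != 0
```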
Signed-off-by: leseb <seb@redhat.com>
Feel so bad about this one...
Now it's fixed: the rgw section will be activated once the rgw hosts
are part of the inventory.
Signed-off-by: leseb <seb@redhat.com>
Even if the subscription command is idempotent, it takes around 15-16
seconds to complete, whereas a simple yum check lowers this to about
3 seconds.
Signed-off-by: leseb <seb@redhat.com>
Since the command is idempotent we don't need to check if the repo is
enabled, as doing so would likely take twice the time.
Signed-off-by: leseb <seb@redhat.com>
We want to force the user to only enable the options they need. Thus
they shouldn't have to enable one option and then disable another.
Signed-off-by: leseb <seb@redhat.com>
Now we don't need to activate the services through a variable. If the
role is activated in the inventory, actions will occur automatically.
Fixing the repo creation for Red Hat Storage too.
Signed-off-by: leseb <seb@redhat.com>
The new product version has just come out. ICE doesn't exist anymore;
Red Hat Storage is the name of the new product.
Signed-off-by: leseb <seb@redhat.com>
The logic was broken here for repeated runs. We only want to run
'ceph-disk prepare' when the disk does not contain a ceph partition, is
not a partition, and raw_multi_journal is set. Previously it would
attempt to run 'ceph-disk prepare' when there was a ceph partition
because the second half of the 'or' was still true since it isn't a
partition.
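A hedged sketch of the corrected condition (register names assumed):

```
# All three conditions must hold together: no existing ceph partition,
# the device is not itself a partition, and the scenario is enabled.
- name: prepare osd disk(s)
  command: ceph-disk prepare {{ item.2 }}
  with_together:
    - ceph_partition_check.results
    - is_partition_check.results
    - devices
  when: item.0.rc != 0 and item.1.rc != 0 and raw_multi_journal
```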
Following best practice, we don't create a key from the monitor but
rely on the initial keys created by the mons to bootstrap each
daemon.
Signed-off-by: Sébastien Han <seb@redhat.com>
This branch has been sitting on my local repo for a while. I guess I had
time to spend on a plane :).
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
* fix the Vagrantfile ruby check
* fix the variable positions
Bring more mandatory variables and try to separate Vagrant vars from the
playbook vars.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Once again, and hopefully the final commit, to rework the support of
both upstart and sysvinit. From now on, Ubuntu systems will use upstart
and the others will use sysvinit.
A later commit might include the support of systemd as the unit files
come out. This will be for Hammer so probably soon.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Depending on the distro, init scripts will look for different files to
be available on the ceph data dir.
Fixing the upstart support here.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
If the distribution wasn't Ubuntu, the check wasn't performed, so the
evaluation in the later task wasn't possible.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Now the Ceph REST API can be deployed.
Default implementation deploys it on the same nodes as the monitors
which should be fine.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Fix the usage of Upstart for Ubuntu machines instead of the init.d
script.
Note that because of the way the upstart init script looks up the
radosgw id, the command 'start radosgw id=' is broken; you should use
'start radosgw-all' instead.
Keep backward compatibility with the radosgw init script as well by
using a client name prefixed with 'client.radosgw'.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
The ceph fs new command was introduced in Ceph 0.84. Prior to this
release, no manual steps are required to create a filesystem, and pools
named data and metadata exist by default.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
If we use the hostname, the radosgw will look up the wrong secret.
Use the same name for all the gateways.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Use hostname in socket and log.
Improve the jinja template so that when a var doesn't exist we don't
indent the next line.
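For illustration, a hedged template sketch (the variable name is
assumed): each optional key is guarded on its own line so a missing
variable leaves no stray indentation behind.

```
{% if rgw_dns_name is defined %}
rgw dns name = {{ rgw_dns_name }}
{% endif %}
```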
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
We isolated the key operations into a file and modified the fetch
function to collect all the new keys.
In the meantime, fixed the pool creation since the command is not
idempotent.
Renamed the rgw key to work with the key collection.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Without this plugin, if a Ceph version is present in a repo (let's say
epel), it will install the epel version and not the ICE version.
We install yum-plugin-priorities.noarch to honor the 'priority=1' flag.
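A sketch of the repo definition this protects (name and baseurl are
illustrative):

```
[ice]
name=Inktank Ceph Enterprise
baseurl=file:///opt/ICE/ceph/
priority=1
enabled=1
gpgcheck=1
```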
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
In the storage world it's often recommended to disable transparent
hugepages as they tend to lower performance.
Note that this change won't survive reboot. There are several ways to
disable this permanently such as:
* rc.local
* grub boot line
It's a bit tricky to do this in Ansible since it really depends on the
OS you're running on.
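Still, a minimal runtime-only sketch of the idea (task wording assumed):

```
# Takes effect immediately but, as noted above, does not survive a reboot.
- name: disable transparent hugepages
  shell: echo never > /sys/kernel/mm/transparent_hugepage/enabled
  ignore_errors: true
```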
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Default behavior is to fail if a variable is not declared, however
this can be disabled in your ansible.cfg, so we force this variable as
mandatory.
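For context, a hedged sketch (the variable name cluster_uuid is
hypothetical):

```
# ansible.cfg can silence undefined variables:
#   [defaults]
#   error_on_undefined_vars = False
#
# vars sketch: the 'mandatory' filter fails regardless of that setting.
fsid: "{{ cluster_uuid | mandatory }}"
```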
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Still WIP, @mwheckmann free to test
As requested by #162
Current known issue: since ceph.conf gets modified during every single
run (at the end, during the merge), this will restart ceph daemons.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Depending on the OS you are running on you should be able to configure
these values.
Re-ordering file for clarity as well.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Big clusters will easily reach the default limit so we need to increase
it and make it configurable.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
We remove all the partitions and labels and re-create something clean
prior to the prepare step. This will help solve many issues with
existing disks or when scratching/redeploying test environments often.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
MDS and RGW are not deployed often (RGW more so), so we disable them
in the default deployment to only get MONs and OSDs.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
With the appropriate subscription details you will be able to use the
Inktank Ceph Enterprise version of Ceph running on RHEL7.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
It has become really annoying to manually generate an fsid prior to the
initial bootstrap. This commit introduces a method that auto-generates
an fsid (see the sketch after the list below). If for whatever reason
you want to force your own fsid you can simply edit these 3 files and
override the fsid variable:
- roles/ceph-common/vars/main.yml
- roles/ceph-mon/vars/main.yml
- roles/ceph-osd/vars/main.yml
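A hedged sketch of the auto-generation (file name and exact command
assumed):

```
# Generate the fsid once on the deployment host and reuse it afterwards;
# 'creates' makes the task a no-op on subsequent runs.
- name: generate cluster fsid
  local_action: shell uuidgen > fetch/ceph_cluster_uuid.conf creates=fetch/ceph_cluster_uuid.conf
  run_once: true
```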
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
- We don’t need ceph-extra for trusty
- Enable multiverse repo for access to libapache2-mod-fastcgi
- Update the cache before attempting to install packages to register the
  multiverse repo, and only refresh the cache once an hour to avoid
  delays in the playbook
- Add a wildcard when disabling the default site, as on Ubuntu it is
  000_default by default
While running big boxes with 72 disks it's easy to run out of PIDs for
all the threads needed by Ceph. Increasing the default value removes
this limitation.
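A sketch of the change (the exact value chosen here is illustrative;
4194303 is the 64-bit kernel maximum):

```
- name: increase pid max value
  sysctl:
    name: kernel.pid_max
    value: 4194303
    state: present
    sysctl_set: yes
```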
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
In ceph-common you load {{ ansible_managed }} at the top of the main
config file - this will trigger handlers on that file whenever an
Ansible run is made.
I'd suggest replacing it with a vanilla text comment 'managed by
Ansible' to warn admins but avoid unnecessary cluster bounces.
fixes: #125
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
The ceph.conf.j2 template currently always uses the current host facts
to get the IP address of each host in the mon loop. This is not the
expected behavior. This patch uses the correct facts to get the IP.
Recovery and/or re-balancing decreases performance; adding more options
might help tweak this behavior.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Since 192.168.0.0/24 is very common and might overlap with some
existing networks on your laptop, using another subnet like '42' makes
a collision less likely.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Previously we used osd_crush_update_on_start: true; this was interpreted
by Ansible as a boolean and appeared as 'True' inside the Ceph
configuration file. However, Ceph's init script looks for 'true'.
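One hedged way to render it correctly (the actual fix may have simply
quoted the value):

```
# Jinja's lower filter turns the Python boolean True into 'true'.
osd crush update on start = {{ osd_crush_update_on_start | lower }}
```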
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit introduces support for the development branches of
Ceph. You can now install Ceph from master.
The behavior is controlled through 2 new options:
* ceph_stable: true will use the stable branch
* ceph_dev: true will use the dev branch
For the dev packages don't forget to set the branch that you want to
use.
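A sketch of a dev-branch configuration (the branch variable name is an
assumption):

```
ceph_stable: false
ceph_dev: true
ceph_dev_branch: master
```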
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Prior to this patch, the first match was winning and the playbook
wasn't differentiating between the 'restart ceph' handlers; adding a
distro filter fixes this.
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
It was reported a couple of months ago by Dan van der Ster from
CERN that updatedb was consuming 100% of CPU while parsing the system's
directories. Indeed, the process was parsing the OSD PG directories,
which might contain billions of objects.
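A hedged sketch of the mitigation (path and approach assumed):

```
# Add the OSD data directory to updatedb's exclusion list so PG
# directories are never scanned.
- name: disable osd directory parsing by updatedb
  command: updatedb -e /var/lib/ceph
  ignore_errors: true
```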
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
After a change is made to the configuration file we must restart the
Ceph services. I also added a check that verifies whether a socket
exists, because during the first play there are no services running:
if no socket exists we don't try to restart the services; if it does,
we restart them.
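A hedged sketch of the pattern (register and service names assumed):

```
# Probe for an admin socket; a missing socket means the daemon has
# never started, so there is nothing to restart yet.
- name: check for a ceph socket
  shell: stat /var/run/ceph/*.asok > /dev/null 2>&1
  ignore_errors: true
  register: socket

- name: restart ceph services
  service:
    name: ceph
    state: restarted
  when: socket.rc == 0
```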
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
I added a 'ceph-' prefix to all the roles related to Ceph. Since we are
about to push the roles to Ansible Galaxy, this will make it easier to
use these roles in a larger environment with other roles.
Fixes: #94
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit implements a fourth scenario where we can directly use a
directory instead of a block device for the OSDs. The purpose of this
scenario is more testing-oriented. Please note that we do not check
the filesystem underneath the directory so it is really up to you to
configure this properly. Declaring more than one directory on the
same filesystem will confuse Ceph.
Fixes: #14
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
This commit introduces a new config option 'osd crush chooseleaf type'.
With the help of this option and by setting it to '0' we tell Ceph to
store all the replicas on a single host. Basically we tell CRUSH to
iterate over disks and not over hosts.
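In ceph.conf terms:

```
[global]
# Replicate across OSDs instead of across hosts, so a single-node
# cluster can reach a clean state.
osd crush chooseleaf type = 0
```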
Signed-off-by: Sébastien Han <sebastien.han@enovance.com>
Since we're now using the fsid for the directory name, it should be safe
to just copy the keys from all mon hosts. Once they are copied, the rest of
the hosts will just skip copying. :)