Commit Graph

1040 Commits (5705cc71a314c9de43048910034f6be2015473a8)

Author SHA1 Message Date
Leseb 00bca9a535 Merge pull request #394 from ti-mo/master
Enable optional-rpms on official RHEL for yum-plugin-priorities
2015-09-01 17:03:08 +02:00
Timo Beckers e0ebd05565 Enable optional-rpms on official RHEL for yum-plugin-priorities 2015-09-01 16:59:52 +02:00
Leseb 0e26c85b2d Merge pull request #396 from ceph/redhat-distro-repos
Get Ceph from distro repository (redhat-based)
2015-09-01 15:24:39 +02:00
Sébastien Han 0cbc81622f Get Ceph from distro repository (redhat-based)
Follow up on #392

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-31 15:25:42 +02:00
Leseb 0410f6a258 Merge pull request #389 from AcalephStorage/fix-for-different-monitor-interfaces
Fix for error when the nodes don't have the same interface name.
2015-08-31 14:27:17 +02:00
Leseb d1c8c46bf1 Merge pull request #392 from HanXHX/apt-origin
Get Ceph from distro repository (debian-based)
2015-08-31 14:24:12 +02:00
Emilien Mantel b99355839a Remove capital letters 2015-08-31 14:23:20 +02:00
Leseb cc11187430 Merge pull request #395 from mattt416/make_fetch_configurable
Make fetch directory configurable
2015-08-27 19:27:06 +02:00
dexter dd65c5ebb1 Use hostvars for the monitor interface in ceph.conf when available; otherwise fall back to the plain monitor_interface var 2015-08-28 00:41:15 +08:00
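
As a rough illustration of the lookup this commit describes (not the repository's actual template), a ceph.conf.j2 excerpt could prefer the per-host value and fall back to the plain variable; the mons group name and the fact paths below are assumptions:

    {# ceph.conf.j2, hypothetical excerpt #}
    {% for host in groups['mons'] %}
    [mon.{{ hostvars[host]['ansible_hostname'] }}]
    {# use the per-host interface when set, else the plain monitor_interface var #}
    {% set iface = hostvars[host]['monitor_interface'] | default(monitor_interface) %}
    mon addr = {{ hostvars[host]['ansible_' + iface]['ipv4']['address'] }}
    {% endfor %}
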
Emilien Mantel a75e1cbb67 Import changes to sample 2015-08-27 18:01:34 +02:00
Matt Thompson afc934d22a Make fetch directory configurable
Currently, the fetch directory is created in your working directory
(where ansible is run from). We prefer not to keep any state in this
directory and would rather have the fetch directory configurable so
we can store it outside of our code checkout.

This commit creates a new variable in each role called
`fetch_directory` (defaulting to the previous value of 'fetch/'), and
then updates each reference to 'fetch' to use the new variable instead.

Closes issue #383
2015-08-27 16:49:50 +01:00
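
A minimal sketch of the kind of change described above, assuming a role default plus a fetch task; the file paths, keyring names and destination layout are illustrative rather than the repository's exact ones:

    # roles/ceph-mon/defaults/main.yml (illustrative)
    fetch_directory: fetch/

    # a task that used to hard-code 'fetch/' now goes through the variable
    - name: copy the keyrings back to the ansible host
      fetch:
        src: "/etc/ceph/{{ item }}"
        dest: "{{ fetch_directory }}/{{ ansible_hostname }}/{{ item }}"
        flat: yes
      with_items:
        - ceph.client.admin.keyring
        - ceph.mon.keyring
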
Leseb 3fadd0bf32 Merge pull request #393 from HanXHX/storagectl
Vagrant storagectl as an option
2015-08-27 11:58:58 +02:00
Emilien Mantel b187393a93 Get Ceph from distro repository (debian-based) 2015-08-27 11:26:54 +02:00
Emilien Mantel 1d6fa46079 Vagrant storagectl as an option 2015-08-27 11:26:10 +02:00
Leseb 6ca32bccf1 Merge pull request #391 from git-harry/fetch-run-once
Prevent failure from race creating fetch directory
2015-08-26 14:27:20 +02:00
git-harry f60179e33f Prevent failure from race creating fetch directory
When multiple monitor hosts attempt to create the fetch directory there
is the potential for the task to fail with:

  "OSError: [Errno 17] File exists: 'fetch'"

This appears to be an issue with the file module trying to create the
same directory at the same time when the task has been delegated to a
single host.

This commit enables run_once on the affected task which should address
the issue.
2015-08-26 10:49:22 +01:00
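
A sketch of the fix pattern, assuming the directory is created with the file module delegated to a single host; the task wording and path are assumptions:

    # run_once means only one monitor host triggers the delegated task,
    # so parallel runs can no longer race on creating the same directory
    - name: create a local fetch directory if it does not exist
      file:
        path: fetch/
        state: directory
      delegate_to: localhost
      run_once: true
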
Leseb e41e197fe7 Merge pull request #390 from ceph/ntp-option
Make package dependencies configurable
2015-08-26 11:23:30 +02:00
Sébastien Han b3c7c36299 Make package dependencies configurable
Closes: #386 and #384

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-26 11:21:24 +02:00
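
The commit message does not show the mechanism, but a common way to make such a dependency optional is to gate its install task on a boolean; the variable and task below are hypothetical:

    # group_vars/all (hypothetical switch)
    ntp_service_enabled: true

    # the dependency is only installed when the switch is on
    - name: install ntp
      apt:
        name: ntp
        state: present
      when: ntp_service_enabled
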
dexter a39bd9f2a6 missing quotes. :( 2015-08-26 16:31:01 +08:00
dexter 873c5cffb2 Fix for error when the nodes don't have the same interface name.
This is a rare case but it happens. Since we're just calling
`monitor_interface` and not `hostvars[host]['monitor_interface']`,
an error may occur when the current host's interface does not
exist on the other hosts. (eg. eth0 exists for node0, but it does
not exist on node1 and node2)

The fix is to use `hostvars[host]['monitor_interface']` instead.
2015-08-26 16:11:21 +08:00
Leseb 453bf50126 Merge pull request #382 from msambol/activate_osd
Remove partition check from ceph-osd role
2015-08-17 19:38:17 +02:00
Michael Sambol d1628a2d28 item.2 changes to item.1 2015-08-17 12:30:03 -05:00
Michael Sambol f132188658 Remove partition check from ceph-osd role
I'm removing the ceph partition check from `activate osd(s) when device
is a disk` because the ceph partition does not exist when parted was
registered (on a fresh install). This was causing the activate step to
be skipped.
2015-08-17 11:14:06 -05:00
Leseb 861d7296ef Merge pull request #381 from git-harry/openstack-pg-num
Allow configurable pg_num for OpenStack pools
2015-08-17 17:45:50 +02:00
git-harry 835951b3d0 Allow configurable pg_num for OpenStack pools
Currently the OpenStack pools that get created use the default pg_num.
This commit updates the ceph-mon role to allow the pg_num for each pool
to be customised.
2015-08-17 16:14:26 +01:00
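
A rough sketch of per-pool pg_num configuration; the pool list, variable name and numbers are illustrative, not necessarily what the role uses:

    # group_vars (illustrative): one pg_num per OpenStack pool
    openstack_pools:
      - { name: images,  pg_num: 128 }
      - { name: volumes, pg_num: 128 }
      - { name: vms,     pg_num: 64 }

    - name: create openstack pools
      command: ceph osd pool create {{ item.name }} {{ item.pg_num }}
      with_items: "{{ openstack_pools }}"
      changed_when: false
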
Leseb cc761535fb Merge pull request #380 from Abhishekvrshny/fix-keys-debian
Enable ceph-create-keys in Debian
2015-08-17 15:37:58 +02:00
Abhishek Varshney e142c21776 removed when condition in ceph-create-keys 2015-08-17 18:59:14 +05:30
Leseb 792f839f07 Merge pull request #379 from andymcc/device_check_group
Check to ensure device checks only happen on osds
2015-08-17 13:52:09 +02:00
Andy McCrae 942f914b84 Check to ensure device checks only happen on osds
Add bool for osd_group_name in group_names for osd checks.
2015-08-17 12:45:20 +01:00
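
A minimal sketch of the guard; the failure message and exact check are illustrative:

    # only hosts that are actually in the OSD group run the device check
    - name: make sure devices are defined on osd hosts
      fail:
        msg: "devices must be defined for hosts in {{ osd_group_name }}"
      when: devices is not defined and osd_group_name in group_names
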
Leseb 967b9b51c0 Merge pull request #378 from ceph/use-latest-packages
Use latest packages
2015-08-17 11:46:44 +02:00
Sébastien Han 476c5df38f Use latest packages
Fix the rolling update playbook again.
However, every time the playbook runs it will check for new
packages and install the latest ones. I don't think this is always the
desired behaviour. We need to find a way to reconcile both...

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-17 11:28:20 +02:00
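
The trade-off described above comes down to the package state: latest upgrades on every run, while present leaves an already installed version alone. A sketch, assuming an apt-based host and the plain ceph package name:

    # checks the repositories and upgrades ceph on every playbook run;
    # state: present would not touch an already installed version
    - name: install ceph
      apt:
        name: ceph
        state: latest
        update_cache: yes
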
Leseb bafc546064 Merge pull request #374 from andymcc/device_check_fix
Fix devices check for raw_multi_journal
2015-08-14 17:48:37 +02:00
Andy McCrae 25a45332f3 Fix devices check for raw_multi_journal
Fix the logic for the mandatory devices check so that it applies to
raw_multi_journal and journal_collocation scenarios separately.

Otherwise the check fails because whichever variable comes first in the
`or` expression is most likely undefined.
2015-08-14 15:43:10 +01:00
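
A sketch of splitting the check per scenario so that neither condition touches a variable the other scenario leaves undefined; the variable names follow the commit message, the task text is illustrative:

    # each scenario only evaluates its own variables
    - name: verify devices for the raw multi journal scenario
      fail:
        msg: "devices and raw_journal_devices must be defined"
      when: raw_multi_journal and (devices is not defined or raw_journal_devices is not defined)

    - name: verify devices for the journal collocation scenario
      fail:
        msg: "devices must be defined"
      when: journal_collocation and devices is not defined
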
Leseb 6fa7038ab1 Merge pull request #371 from msambol/revert-367-stat_module
Revert "Use stat module instead of shell"
2015-08-07 09:51:59 +02:00
Leseb d11870cd8d Merge pull request #368 from msambol/ceph_common_readme
Update ceph-common readme
2015-08-07 09:41:25 +02:00
Michael Sambol c187e1ff83 Revert "Use stat module instead of shell" 2015-08-07 00:07:51 -05:00
Leseb 31ea5b49e6 Merge pull request #370 from ceph/remove-zapp
Remove zap variables
2015-08-06 17:35:13 +02:00
Sébastien Han 0496a3e0d4 Remove zap variables
Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-06 17:34:25 +02:00
Leseb e51a18591b Merge pull request #369 from ceph/remove-zap
Remove the disk zap function
2015-08-06 17:28:56 +02:00
Sébastien Han 68248a266b Remove the disk zap function
Sooner or later this will likely break something. If ceph-disk
complains about a disk, just run the purge-cluster.yml playbook first,
as it will wipe all the devices.

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-06 17:24:21 +02:00
Michael Sambol 4661dc86fd Update ceph-common README 2015-08-06 09:29:30 -05:00
Michael Sambol 36052b15fb Update ceph-common README 2015-08-06 09:27:52 -05:00
Michael Sambol 6b5f278da1 Update ceph-common README 2015-08-06 08:15:42 -05:00
Michael Sambol 0342bc7fcc Update ceph-common README 2015-08-06 08:13:46 -05:00
Leseb a8c1309e72 Merge pull request #367 from msambol/stat_module
Use stat module instead of shell
2015-08-06 11:29:00 +02:00
Michael Sambol 4531b67a4f Use stat module instead of shell 2015-08-05 23:06:09 -05:00
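
For context on this change (reverted further up), the difference between the two approaches, sketched with an assumed path:

    # shell-based test: existence has to be inferred from the return code
    - name: check if a ceph configuration already exists
      shell: test -f /etc/ceph/ceph.conf
      register: ceph_conf_check
      failed_when: false
      changed_when: false

    # stat-based equivalent: returns structured facts such as stat.exists
    - name: check if a ceph configuration already exists
      stat:
        path: /etc/ceph/ceph.conf
      register: ceph_conf_stat
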
Leseb c8bf8cc605 Merge pull request #366 from ceph/fixes
Fix the sudoer template
2015-08-03 23:54:11 +02:00
Sébastien Han f671e91e61 Fix the sudoer template
Also clean up docker.yml in the OSD role.

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-03 23:53:08 +02:00
Leseb 9c5574f826 Merge pull request #364 from ceph/rework-rgw-install
Remove rgw installation from the ceph-rgw role
2015-08-03 22:21:36 +02:00
Sébastien Han 7ed67f37d8 Remove rgw installation from the ceph-rgw role
The installation of rgw is now handled by the ceph-common role.
Fixes: #307

Signed-off-by: Sébastien Han <seb@redhat.com>
2015-08-03 22:17:43 +02:00