Use Vagrant's Ansible provisioner

Use Vagrant's built-in support for the Ansible provisioner. This eliminates the
need for a hosts file and simplifies the Ansible config file.

Renames the config file from .ansible.cfg to ansible.cfg, since Ansible expects
the file to be called ansible.cfg and to sit next to the Vagrantfile when using
the Vagrant provisioner.
pull/82/head
Lorin Hochstein 2014-05-10 21:52:26 -04:00
parent 3b68622f9d
commit 92c0445989
7 changed files with 59 additions and 131 deletions
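The mechanism the commit message relies on can be sketched as a minimal Vagrantfile fragment (illustrative only, not this commit's exact code; box name and playbook path are taken from the diff below):

```ruby
# Illustrative sketch: with the built-in Ansible provisioner, Vagrant
# generates its own inventory (under .vagrant/provisioners/ansible/),
# so no hand-maintained hosts file with forwarded SSH ports is needed.
# Ansible also looks for ansible.cfg in the working directory, which is
# why the config file must sit next to the Vagrantfile.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.define "mon0" do |mon|
    mon.vm.provision "ansible" do |ansible|
      ansible.playbook = "site.yml"  # inventory is auto-generated by Vagrant
    end
  end
end
```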

10
.ansible.cfg

@@ -1,10 +0,0 @@
[defaults]
host_key_checking = False
remote_user = vagrant
hostfile = hosts
ansible_managed = Ansible managed: modified on %Y-%m-%d %H:%M:%S by {uid}
private_key_file = ~/.vagrant.d/insecure_private_key
error_on_undefined_vars = False
forks = 6
#If set to False, ansible will not display any status for a task that is skipped. The default behavior is to display skipped tasks:
display_skipped_hosts=True

1
.gitignore vendored

@@ -1,3 +1,4 @@
.vagrant
*.vdi
*.keyring
fetch/4a158d27-f750-41d5-9e7f-26ce4c9d2d45

107
README.md

@@ -27,88 +27,12 @@ More details:
## Setup with Vagrant
First source the `rc` file:
$ source rc
Edit your `/etc/hosts` file with:
# Ansible hosts
127.0.0.1 ceph-mon0
127.0.0.1 ceph-mon1
127.0.0.1 ceph-mon2
127.0.0.1 ceph-osd0
127.0.0.1 ceph-osd1
127.0.0.1 ceph-osd2
127.0.0.1 ceph-rgw
**Now since we use Vagrant and port forwarding, don't forget to collect the SSH local port of your VMs.**
Then edit your `hosts` file accordingly.
Ok let's get serious now.
Run your virtual machines:
```bash
$ vagrant up
...
...
...
```
Test if Ansible can access the virtual machines:
```bash
$ ansible all -m ping
ceph-mon0 | success >> {
"changed": false,
"ping": "pong"
}
ceph-mon1 | success >> {
"changed": false,
"ping": "pong"
}
ceph-osd0 | success >> {
"changed": false,
"ping": "pong"
}
ceph-osd2 | success >> {
"changed": false,
"ping": "pong"
}
ceph-mon2 | success >> {
"changed": false,
"ping": "pong"
}
ceph-osd1 | success >> {
"changed": false,
"ping": "pong"
}
ceph-rgw | success >> {
"changed": false,
"ping": "pong"
}
```
**DON'T FORGET TO GENERATE A FSID FOR THE CLUSTER AND A KEY FOR THE MONITOR**
For this go to `group_vars/all` and `group_vars/mons` and append the fsid and key.
These are **ONLY** examples, **DON'T USE THEM IN PRODUCTION**:
* fsid: 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
* monitor: AQAWqilTCDh7CBAAawXt6kyTgLFCxSvJhTEmuw==
Ready to deploy? Let's go!
```bash
$ ansible-playbook -f 7 -v site.yml
...
...
____________
< PLAY RECAP >
@@ -120,13 +44,13 @@ $ ansible-playbook -f 7 -v site.yml
|| ||
ceph-mon0 : ok=13 changed=10 unreachable=0 failed=0 → mon0 : ok=16 changed=11 unreachable=0 failed=0
ceph-mon1 : ok=13 changed=9 unreachable=0 failed=0 → mon1 : ok=16 changed=10 unreachable=0 failed=0
ceph-mon2 : ok=13 changed=9 unreachable=0 failed=0 → mon2 : ok=16 changed=11 unreachable=0 failed=0
ceph-osd0 : ok=19 changed=12 unreachable=0 failed=0 → osd0 : ok=19 changed=7 unreachable=0 failed=0
ceph-osd1 : ok=19 changed=12 unreachable=0 failed=0 → osd1 : ok=19 changed=7 unreachable=0 failed=0
ceph-osd2 : ok=19 changed=12 unreachable=0 failed=0 → osd2 : ok=19 changed=7 unreachable=0 failed=0
ceph-rgw : ok=23 changed=16 unreachable=0 failed=0 → rgw : ok=20 changed=17 unreachable=0 failed=0
```
Check the status:
@@ -141,3 +65,20 @@ $ vagrant ssh mon0 -c "sudo ceph -s"
pgmap v17: 192 pgs, 3 pools, 9470 bytes data, 21 objects
205 MB used, 29728 MB / 29933 MB avail
192 active+clean
```
To re-run the Ansible provisioning scripts:
```bash
$ vagrant provision
```
## Specifying fsid and secret key in production
The Vagrantfile specifies an fsid for the cluster and a secret key for the
monitor. If using these playbooks in production, you must generate your own `fsid`
in `group_vars/all` and `monitor_secret` in `group_vars/mons`. Those files contain
information about how to generate appropriate values for these variables.
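For reference, a sketch of how such values can be generated (an assumption on my part, not necessarily what `group_vars/all` and `group_vars/mons` recommend): any UUID serves as the `fsid`, and the monitor secret is a base64-encoded Ceph keyring entry whose header layout below follows the format described in Ceph's manual-deployment docs; verify it against your Ceph version before relying on it.

```ruby
require 'securerandom'
require 'base64'

# Any UUID can serve as the cluster fsid.
fsid = SecureRandom.uuid

# Sketch of a monitor secret in Ceph keyring format (assumed layout:
# little-endian header of type=1, creation seconds, nanoseconds, and
# key length, followed by 16 random key bytes, all base64-encoded).
key = SecureRandom.random_bytes(16)
header = [1, Time.now.to_i, 0, key.bytesize].pack('s<l<l<s<')
monitor_secret = Base64.strict_encode64(header + key)

puts fsid
puts monitor_secret
```

Like the example secret above, the result is a 40-character base64 string (12 header bytes plus 16 key bytes).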

35
Vagrantfile vendored

@@ -1,13 +1,37 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
NMONS = 3
NOSDS = 3
ansible_provision = Proc.new do |ansible|
ansible.playbook = "site.yml"
# Note: Can't do ranges like mon[0-2] in groups because
# these aren't supported by Vagrant, see
# https://github.com/mitchellh/vagrant/issues/3539
ansible.groups = {
"mons" => (0..NMONS-1).map {|j| "mon#{j}"},
"osds" => (0..NOSDS-1).map {|j| "osd#{j}"},
"mdss" => [],
"rgws" => ["rgw"]
}
# In a production deployment, these should be secret
ansible.extra_vars = {
fsid: "4a158d27-f750-41d5-9e7f-26ce4c9d2d45",
monitor_secret: "AQAWqilTCDh7CBAAawXt6kyTgLFCxSvJhTEmuw=="
}
ansible.limit = 'all'
end
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise64"
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.define :rgw do |rgw|
rgw.vm.network :private_network, ip: "192.168.0.2"
rgw.vm.host_name = "ceph-rgw"
@@ -16,7 +40,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
end
end
(0..2).each do |i| → (0..NMONS-1).each do |i|
config.vm.define "mon#{i}" do |mon|
mon.vm.hostname = "ceph-mon#{i}"
mon.vm.network :private_network, ip: "192.168.0.1#{i}"
@@ -26,7 +50,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
end
end
(0..2).each do |i| → (0..NOSDS-1).each do |i|
config.vm.define "osd#{i}" do |osd|
osd.vm.hostname = "ceph-osd#{i}"
osd.vm.network :private_network, ip: "192.168.0.10#{i}"
@@ -38,6 +62,11 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
vb.customize ["modifyvm", :id, "--memory", "192"]
end
end
# Run the provisioner after the last machine comes up
if i == (NOSDS-1)
osd.vm.provision "ansible", &ansible_provision
end
end
end
end
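Since Vagrant cannot expand inventory ranges like mon[0-2] in groups (the linked issue), the commit enumerates host names explicitly. How the ansible.groups mapping expands can be checked in plain Ruby, with the counts taken from this commit's Vagrantfile:

```ruby
# Reproduce the group lists built by the ansible_provision Proc.
NMONS = 3
NOSDS = 3
groups = {
  "mons" => (0..NMONS - 1).map { |j| "mon#{j}" },
  "osds" => (0..NOSDS - 1).map { |j| "osd#{j}" },
  "mdss" => [],
  "rgws" => ["rgw"]
}
puts groups["mons"].inspect  # ["mon0", "mon1", "mon2"]
```

Attaching the provisioner only to the last OSD, combined with `ansible.limit = 'all'`, makes the playbook run once against every machine instead of once per VM.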

2
ansible.cfg 100644

@@ -0,0 +1,2 @@
[defaults]
ansible_managed = Ansible managed: modified on %Y-%m-%d %H:%M:%S by {uid}

34
hosts

@@ -1,34 +0,0 @@
#
## If you use Vagrant and port forwarding, don't forget to grab the SSH local port of your VMs.
#
## Common setup example
#
[mons]
ceph-mon0:2200
ceph-mon1:2201
ceph-mon2:2202
[osds]
ceph-osd0:2203
ceph-osd1:2204
ceph-osd2:2205
[mdss]
ceph-osd0:2203
ceph-osd1:2204
ceph-osd2:2205
#[rgws]
#ceph-rgw:2200
# Colocation setup example
#[mons]
#ceph-osd0:2222
#ceph-osd1:2200
#ceph-osd2:2201
#[osds]
#ceph-osd0:2222
#ceph-osd1:2200
#ceph-osd2:2201
#[mdss]
#ceph-osd0:2222
#ceph-osd1:2200
#ceph-osd2:2201

1
rc

@@ -1 +0,0 @@
export ANSIBLE_CONFIG=.ansible.cfg