Add a variable to allow the containerized mon to run in privileged mode.

This allows ceph-authtool to read and write to /var/ and /etc on CentOS Atomic.
Also add documentation on how to run a containerized deployment on RHEL/CentOS Atomic.
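
A minimal sketch of how this is surfaced to users, mirroring the `group_vars/all.docker` change further down (an excerpt, not a complete file):

```yaml
# excerpt -- enable the containerized mon and let it run privileged
mon_containerized_deployment: true
mon_docker_privileged: true
```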

Signed-off-by: Huamin Chen <hchen@redhat.com>
pull/648/head
Huamin Chen 2016-01-26 20:01:03 +00:00 committed by Sébastien Han
parent f88eff37d7
commit 70561b3fc3
5 changed files with 24 additions and 13 deletions

@@ -185,6 +185,20 @@ $ vagrant provision
If you want to use "backports", set `ceph_use_distro_backports` to `true`.
Note that ceph-common does not manage the backports repository; you must add it yourself.
### For Atomic systems
If you want to run a containerized deployment on Atomic systems (RHEL/CentOS Atomic), copy
[vagrant_variables.yml.atomic](vagrant_variables.yml.atomic) to `vagrant_variables.yml` and copy [group_vars/all.docker](group_vars/all.docker) to `group_vars/all`.
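For example (assuming the commands are run from the root of your ceph-ansible checkout):
```console
cp vagrant_variables.yml.atomic vagrant_variables.yml
cp group_vars/all.docker group_vars/all
```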
Since the `centos/atomic-host` box has no spare storage controller to attach more disks to, the first `vagrant up --provider=virtualbox` run will likely fail to attach a storage controller. In that case, run the following command:
```console
VBoxManage storagectl `VBoxManage list vms |grep ceph-ansible_osd0|awk '{print $1}'|tr \" ' '` --name "SATA" --add sata
```
Then run `vagrant up --provider=virtualbox` again.
# Want to contribute?

@@ -5,6 +5,7 @@ cephx_cluster_require_signatures: false
restapi_group_name: restapis
fetch_directory: fetch/
mon_containerized_deployment: true
mon_docker_privileged: true
ceph_mon_docker_username: hchen
ceph_mon_docker_imagename: rhceph
ceph_mon_docker_interface: "{{ monitor_interface }}"

@@ -71,9 +71,13 @@ dummy:
##########
#mon_containerized_deployment: false
#mon_containerized_deployment_with_kv: false
#mon_containerized_default_ceph_conf_with_kv: false
#ceph_mon_docker_interface: eth0
#ceph_mon_docker_subnet: # subnet of the ceph_mon_docker_interface
#ceph_mon_docker_username: ceph
#ceph_mon_docker_imagename: daemon
#ceph_mon_extra_envs: "MON_NAME={{ ansible_hostname }}" # comma separated variables
#ceph_docker_on_openstack: false
#mon_docker_privileged: true

@@ -16,6 +16,7 @@
name: "{{ ansible_hostname }}"
net: "host"
state: "running"
privileged: "{{ mon_docker_privileged }}"
env: "MON_IP={{ hostvars[inventory_hostname]['ansible_' + ceph_mon_docker_interface]['ipv4']['address'] }},CEPH_DAEMON=MON,CEPH_PUBLIC_NETWORK={{ ceph_mon_docker_subnet }},{{ ceph_mon_extra_envs }}"
volumes: "/var/lib/ceph:/var/lib/ceph,/etc/ceph:/etc/ceph"
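
For context, these parameters belong to a monitor-start task; a minimal sketch of such a task, assuming the legacy Ansible `docker` module (the task name and `image` expression are illustrative assumptions, the remaining parameters are taken from the hunk above):

```yaml
# Sketch only: task name and image expression are assumptions for illustration.
- name: start the ceph monitor container
  docker:
    image: "{{ ceph_mon_docker_username }}/{{ ceph_mon_docker_imagename }}"
    name: "{{ ansible_hostname }}"
    net: "host"
    state: "running"
    privileged: "{{ mon_docker_privileged }}"
    env: "MON_IP={{ hostvars[inventory_hostname]['ansible_' + ceph_mon_docker_interface]['ipv4']['address'] }},CEPH_DAEMON=MON,CEPH_PUBLIC_NETWORK={{ ceph_mon_docker_subnet }},{{ ceph_mon_extra_envs }}"
    volumes: "/var/lib/ceph:/var/lib/ceph,/etc/ceph:/etc/ceph"
```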

@@ -15,20 +15,11 @@ memory: 1024
disks: "[ '/dev/sdb', '/dev/sdc' ]"
eth: 'enp0s3'
# VAGRANT BOX
# Fedora: https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-Vagrant-22-20150521.x86_64.vagrant-virtualbox.box
# Ubuntu: ubuntu/trusty64
# CentOS: chef/centos-7.0
# Debian: deb/jessie-amd64 - be careful the storage controller is named 'SATA Controller'
# For more boxes have a look at:
# - https://atlas.hashicorp.com/boxes/search?utf8=✓&sort=&provider=virtualbox&q=
# - https://download.gluster.org/pub/gluster/purpleidea/vagrant/
eth: 'enp0s8'
vagrant_box: centos/atomic-host
# if vagrant fails to attach storage controller, add the storage controller name by:
# VBoxManage storagectl `VBoxManage list vms |grep ceph-ansible-osd|awk '{print $1}'|tr \" ' '` --name "LsiLogic" --add scsi
# VBoxManage storagectl `VBoxManage list vms |grep ceph-ansible_osd0|awk '{print $1}'|tr \" ' '` --name "SATA" --add sata
# and "vagrant up" again
vagrant_storagectl: 'LsiLogic'
skip_tags: 'with_pkg'
vagrant_storagectl: 'SATA'
skip_tags: 'with_pkg'