ceph-ansible/roles/ceph-iscsi-gw
Guillaume Abrioux 8fac8f54a6 iscsi-gw: Create a rbd pool if it doesn't exist
iscsi-gw needs an 'rbd' pool to configure the iscsi target.
Note: I could have used the facts already set in `ceph-mon`, but I deliberately
did not, to avoid creating a dependency between these two roles.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
2017-10-04 15:40:10 +02:00
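
For context, a minimal sketch of the idea behind this change, assuming the ceph CLI is available on the host the tasks run against; the task names and variables below are illustrative only, not necessarily those used in roles/ceph-iscsi-gw/tasks:

    # illustrative only - see roles/ceph-iscsi-gw/tasks for the actual implementation
    - name: check whether the 'rbd' pool already exists
      command: ceph osd pool get rbd size
      register: rbd_pool_check
      failed_when: false
      changed_when: false

    - name: create the 'rbd' pool when it is missing
      command: ceph osd pool create rbd {{ pg_num | default(64) }}
      when: rbd_pool_check.rc != 0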

README.md

ceph-iscsi-ansible

This project provides a mechanism to deploy iSCSI gateways in front of a Ceph cluster using Ansible. The Ansible playbooks
provided rely upon configuration logic from the "ceph-iscsi-config" project. This separation keeps the configuration logic
independent, potentially opening up the possibility for puppet/chef to create/manage ceph/iSCSI gateways in the
same way.

Introduction

At a high level, this project provides custom modules which are responsible for calling the configuration logic, together
with the relevant playbooks. The project defines a new ceph-ansible role, ceph-iscsi-gw, together with two playbooks:

  • ceph-iscsi-gw.yml ... to define or change the gateway configuration based on group_vars/ceph-iscsi-gw.yml
  • purge_gateways.yml ... to destroy the LIO configuration alone, or the LIO configuration and any associated rbd images.

Features

The combination of the playbooks and the configuration logic delivers the following features:

  • confirms the host is running RHEL 7.3 and aborts if it is not
  • ensures targetcli and device-mapper-multipath are installed (for rtslib support)
  • configures multipath.conf
  • creates rbds if needed - at allocation time, each rbd is assigned an owner, which will become the preferred path
  • checks the size of the rbds at run time and expands if necessary
  • maps the rbds to the host (gateway)
  • enables the rbdmap service to start on boot, and reconfigures the target service to be dependent on rbdmap
  • adds the rbds to the /etc/ceph/rbdmap file, ensuring the devices are automatically mapped following a gateway reboot
    (see the rbdmap sketch after this list)
  • maps these rbds to LIO
  • once mapped, the alua preferred path state is set or cleared (supporting an active/passive topology)
  • creates an iscsi target with a common iqn and multiple tpgs
  • adds a portal ip to each tpg, based on the IP addresses defined in the group vars
  • enables the local tpg; the tpgs for the other gateways are defined as disabled
  • adds all the mapped luns to ALL tpgs (ready for client assignment)
  • adds clients to the active/enabled tpg, with or without CHAP
  • images mapped to clients can be added/removed by changing image_list and rerunning the playbook (see the group
    variables sketch after this list)
  • clients can be removed using the state=absent variable and rerunning the playbook. At this point the entry can be
    removed from the group variables file
  • configuration can be wiped with the purge_gateway playbook
  • current state can be seen by looking at the configuration object (stored in the rbd pool)
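
To make the image_list / state=absent items above concrete, a hypothetical fragment of a group_vars/ceph-iscsi-gw file is
shown below; the variable names and layout are illustrative only, the authoritative reference is the supplied
group_vars/ceph-iscsi-gw.sample file:

    # hypothetical fragment - confirm the real variable names against group_vars/ceph-iscsi-gw.sample
    rbd_devices:
      - { pool: 'rbd', image: 'ansible1', size: '10G', host: 'ceph-gw-1', state: 'present' }
      - { pool: 'rbd', image: 'ansible2', size: '10G', host: 'ceph-gw-2', state: 'present' }
    client_connections:
      - client: 'iqn.1994-05.com.redhat:rh7-client'
        image_list: 'rbd.ansible1,rbd.ansible2'
        chap: 'user/password123'
        state: 'present'          # change to 'absent' to remove the client, then delete the entry

For reference on the rbdmap item, entries in /etc/ceph/rbdmap take the form <pool>/<image> followed by map options; the
image and keyring shown are illustrative:

    # /etc/ceph/rbdmap - one image per line
    rbd/ansible1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring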

Why RHEL 7.3?

There are several system dependencies that are required to ensure the correct (i.e. don't eat my data!) behaviors when OSD connectivity
or gateway nodes fail. RHEL 7.3 delivers the necessary kernel changes, and also provides an updated multipathd, enabling rbd images
to be managed by multipathd.

Prerequisites

  • a working ceph cluster (rbd pool defined)
  • a server/host with ceph-ansible installed and working
  • nodes intended to be gateways should be at least ceph clients, with the ability to create and map rbd images (a quick
    check is sketched below)
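
One way to verify the last point on a prospective gateway node is a quick create/map/unmap round trip (run as root; the
image name is illustrative):

    > rbd create test_img --size 128
    > rbd map test_img
    > rbd unmap test_img
    > rbd rm test_img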

Testing So far

The solution has been tested on a collocated cluster where the osd/mons and gateways all reside on the same node.

Quick Start

Prepare the iSCSI Gateway Nodes

  1. install the ceph-iscsi-config package on the nodes intended to be gateways. NB: the playbook includes a check for the
    presence of this rpm (https://github.com/pcuzner/ceph-iscsi-config)
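
For example, assuming the rpm is available from a configured repository (or has been downloaded locally), on each
prospective gateway node:

    > yum install ceph-iscsi-config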

Install the ansible playbooks

  1. install the ceph-iscsi-ansible rpm from the packages directory on the node where you already have ceph-ansible installed.
  2. update /etc/ansible/hosts to include a host group (ceph-iscsi-gw) for the nodes that you want to become iscsi gateways
    (see the inventory sketch after this list)
  3. make a copy of the group_vars/ceph-iscsi-gw.sample file called ceph-iscsi-gw, and update it to define the environment you want
  4. run the playbook
    > ansible-playbook ceph-iscsi-gw.yml
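
The host group added in step 2 might look like this in /etc/ansible/hosts (the hostnames are illustrative):

    [ceph-iscsi-gw]
    ceph-gw-1
    ceph-gw-2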

Purging the configuration

As mentioned above, the project provides a purge_gateways.yml playbook which can remove the LIO configuration alone, or remove
both the LIO configuration and all associated rbd images that have been declared in the group_vars/ceph-iscsi-gw file. The purge
playbook will check for any active iscsi sessions, and abort if any are found.
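
Invocation follows the same pattern as the deployment playbook, for example:

    > ansible-playbook purge_gateways.yml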

Known Issues and Considerations

  1. the ceph cluster name is assumed to be the default, 'ceph', so the corresponding configuration file /etc/ceph/ceph.conf is assumed to be present and valid