Commit Graph

4 Commits (871ad6d71a0b700f826d25159865e1430704d4e7)

Author SHA1 Message Date
Benoît Knecht 9a286e273a ceph-rgw-loadbalancer: Fix rgw_ports fact
The `set_fact rgw_ports` task was failing with a templating error, because
`hostvars[item].rgw_instances` is a list, but it was treated as if it were a
dictionary.

Another issue was that the `unique` filter was only applied to the list
being appended to `rgw_ports`, instead of to the entire list, which meant
duplicate items were still possible.

Lastly, `rgw_ports` would have been a list of integers, but the `seport` module
expects a list of strings.

This commit fixes all of the issues above, allowing the `ceph-rgw-loadbalancer`
role to work on systems with SELinux enabled.
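
For reference, a minimal sketch of what the corrected task might look like.
Variable and attribute names such as `rgw_group_name` and
`radosgw_frontend_port` are assumptions based on ceph-ansible conventions,
not the exact code of this commit:

```yaml
# Hypothetical sketch: gather every RGW port across all RGW hosts.
# `union` deduplicates the whole accumulated list (not just the part
# being appended), and map('string') converts the integer ports to
# strings so the result is usable by the seport module.
- name: set_fact rgw_ports
  set_fact:
    rgw_ports: >-
      {{ (rgw_ports | default([]))
         | union(hostvars[item]['rgw_instances']
                 | map(attribute='radosgw_frontend_port')
                 | map('string')
                 | list) }}
  with_items: "{{ groups.get(rgw_group_name, []) }}"
```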

Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
(cherry picked from commit c078513475)
2021-04-15 13:21:07 +02:00
Guillaume Abrioux f47da73a8a common: selinux tasks related refactor
This moves some tasks from the `ceph-nfs` role into `ceph-common`, since
some of them are also needed in the `ceph-rgw-loadbalancer` role.
This avoids duplicating tasks.

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit d0442d81b9)
2021-04-06 15:09:00 +02:00
Guillaume Abrioux 3bfa0772e2 rgw-loadbalancers: add all rgw_ports to http_port_t type
This adds all RGW ports to the `http_port_t` SELinux type, so that
haproxy is allowed to connect to those ports and AVC denials are avoided.
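
A hedged sketch of how such a task could look with the `seport` module
(the `rgw_ports` variable and the `when` condition are illustrative
assumptions, not the commit's exact code):

```yaml
# Hypothetical sketch: label the RGW ports with http_port_t so haproxy
# may connect to them under SELinux.
- name: Add RGW ports to the http_port_t SELinux type
  seport:
    ports: "{{ rgw_ports }}"  # must be a list of strings
    proto: tcp
    setype: http_port_t
    state: present
  when: ansible_facts['selinux']['status'] == 'enabled'
```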

Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1923890

Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 6bbb90198b)
2021-04-06 15:09:00 +02:00
guihecheng c52020a4db Add role definitions of ceph-rgw-loadbalancer
This adds support for an RGW load balancer based on HAProxy and Keepalived.
We define a single role, ceph-rgw-loadbalancer, which includes both the
HAProxy and Keepalived configuration.

A single haproxy backend is used to balance all RGW instances, and
a single frontend is exposed via a single port (80 by default).
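
As an illustration only (server names, addresses, and backend ports below
are made up, not taken from the role's template), the generated haproxy
configuration could look roughly like this:

```
frontend rgw_frontend
    bind *:80
    mode http
    default_backend rgw_backend

backend rgw_backend
    mode http
    balance roundrobin
    server rgw0 192.0.2.10:8080 check
    server rgw1 192.0.2.11:8080 check
```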

Keepalived is used to maintain high availability of all haproxy
instances. You are free to use any number of VIPs. Each VIP is
shared across all keepalived instances; for each VIP, one instance
is selected as master, in sequence, and the others serve as backups.
This assumes that each keepalived instance runs on the same node as
one haproxy instance; a simple check script detects the state of
each haproxy instance and triggers the VIP failover upon its
failure, as sketched below.
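
For illustration, a rough keepalived.conf fragment along these lines
(all values, including the check command, interface, and VIP, are
hypothetical, not taken from the role):

```
vrrp_script check_haproxy {
    script "/usr/bin/systemctl is-active --quiet haproxy"
    interval 2
    weight -20
}

vrrp_instance VI_0 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    virtual_ipaddress {
        192.0.2.100/24
    }
    track_script {
        check_haproxy
    }
}
```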

Signed-off-by: guihecheng <guihecheng@cmiot.chinamobile.com>
(cherry picked from commit 35d40c65f8)
2019-06-06 19:44:30 +00:00