rewrite ex-lb installation

pull/1006/head
gjmzj 2021-04-09 08:35:12 +08:00
parent 6064aa36c6
commit a024b8913e
13 changed files with 199 additions and 315 deletions


@ -8,55 +8,43 @@
### The ex_lb service components
The ex_lb service consists of keepalived and l4lb:
- l4lb is a stripped-down nginx binary, compiled to support layer-4 forwarding only
- keepalived uses master/backup VRRP communication and a virtual address to remove l4lb as a single point of failure

kubeasz 3.0.2 rewrote the ex-lb installation to use binaries compiled with minimal dependencies, independent of any Linux distribution. The advantages are unified versions, simpler offline installation, and, in principle, support for more Linux distributions. keepalived, as its name suggests ("keep alive"), provides high availability or hot standby based on the VRRP protocol; here it guards against an l4lb single point of failure.

keepalived and l4lb together provide high availability for the masters as follows:
+ 1. keepalived uses the VRRP protocol to create a virtual IP address (VIP). Normally the VIP lives on the keepalived master node; when that node fails, the VIP floats to the keepalived backup node, keeping the VIP address highly available.
+ 2. The keepalived master and backup nodes carry identical l4lb configurations and listen for client requests on the VIP, so a working l4lb load balancer is always available. keepalived also tracks the l4lb process; if l4lb fails on the master node, the VIP switches to the backup node and its l4lb takes over the load balancing.
+ 3. The l4lb configuration lists multiple real kube-apiserver endpoints as backends with health checks; if one kube-apiserver fails, l4lb removes it from the load-balancing pool.
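The layer-4 forwarding described above can be sketched as a minimal nginx `stream` block (a sketch only; the apiserver addresses and the listen port are hypothetical, while the real template renders them from the inventory):

```nginx
stream {
    upstream apiservers {
        # hypothetical kube-apiserver endpoints; a backend is dropped from
        # the pool after 2 failed checks and retried after 3s
        server 192.168.1.1:6443 max_fails=2 fail_timeout=3s;
        server 192.168.1.2:6443 max_fails=2 fail_timeout=3s;
    }
    server {
        listen 0.0.0.0:8443;    # the port clients reach via the VIP
        proxy_pass apiservers;
    }
}
```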
#### Installing l4lb
#### Configuring l4lb (roles/ex-lb/templates/l4lb.conf.j2)
The configuration consists of a global section and three upstream server groups:
- apiservers forwards to the kube-apiserver instances
- ingress-nodes forwards to the ingress HTTP service on the nodes, [see](../op/loadballance_ingress_nodeport.md)
- ingress-tls-nodes forwards to the ingress HTTPS service on the nodes
#### Installing keepalived
#### Configuring the keepalived master node [keepalived-master.conf.j2](../../roles/ex-lb/templates/keepalived-master.conf.j2)
``` bash
global_defs {
    router_id lb-master-{{ inventory_hostname }}
}
vrrp_track_process check-l4lb {
    process l4lb
    weight -60
    delay 3
}
vrrp_instance VI-01 {
    state MASTER
    priority 120
    unicast_src_ip {{ inventory_hostname }}
@ -69,15 +57,15 @@ vrrp_instance VI-kube_master {
    interface {{ LB_IF }}
    virtual_router_id {{ ROUTER_ID }}
    advert_int 3
    track_process {
        check-l4lb
    }
    virtual_ipaddress {
        {{ EX_APISERVER_VIP }}
    }
}
```
+ vrrp_track_process monitors whether the l4lb process is alive; if it is not, the `weight -60` setting lowers the master node's priority by 60, so the former backup node becomes the master.
+ vrrp_instance defines the VRRP group, including the priority, interface, router_id, advertisement interval, process tracking, virtual address (VIP), and so on
+ Note in particular that `virtual_router_id` identifies a VRRP group; it must be unique within a network segment, otherwise errors like `Keepalived_vrrp: bogus VRRP packet received on eth0 !!!` appear
+ The VRRP protocol is configured to send via unicast
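One way to spot a conflicting `virtual_router_id` (a hypothetical diagnostic; the tcpdump command and the sample advertisement line below are assumptions about typical output, not taken from this project) is to capture VRRP advertisements on the LB interface and compare the advertised vrid:

```shell
# Capture VRRP advertisements (run as root on the LB node):
#   tcpdump -i eth0 -nn vrrp
# A sample advertisement line looks like this:
sample='IP 192.168.1.11 > 224.0.0.18: VRRPv2, Advertisement, vrid 222, prio 120'

# Extract the advertised vrid; two different masters advertising the same
# vrid on one segment indicates a ROUTER_ID conflict.
vrid=$(printf '%s\n' "$sample" | grep -o 'vrid [0-9]*' | awk '{print $2}')
echo "$vrid"
```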
@ -86,13 +74,13 @@ vrrp_instance VI-kube_master {
+ The backup node configuration is similar to the master's; apart from the priority and the VRRP state, settings such as `virtual_router_id` `advert_int` `virtual_ipaddress` must be identical to the master's
### Verification after starting keepalived and l4lb
+ Verify on the lb nodes
``` bash
systemctl status l4lb        # check the service state
journalctl -u l4lb           # check the service log for errors
systemctl status keepalived  # check the service state
journalctl -u keepalived     # check the service log for errors
```
@ -103,6 +91,6 @@ ip a  # check whether the master VIP address exists
```
### keepalived master/backup failover drill
1. Stop the l4lb process on the keepalived master node, then check on the keepalived backup node whether the master VIP address floats over, and go through the verification items of the previous step again.
1. Power off the keepalived master node directly and check each verification item.
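A small helper for the drill (a sketch; the function name, the VIP, and the `ip a` output below are hypothetical) checks whether the VIP appears in a node's `ip a` output:

```shell
# Return success if the given VIP appears in `ip a` output.
has_vip() {
    # $1 = output of `ip a`, $2 = VIP address
    printf '%s\n' "$1" | grep -qw "$2"
}

# Hypothetical output captured on the backup node after stopping l4lb
# on the master, e.g. via out=$(ip a) on a real node:
out='inet 192.168.1.250/32 scope global eth0'
if has_vip "$out" "192.168.1.250"; then
    echo "VIP present"
else
    echo "VIP absent"
fi
```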


@ -1,12 +1,12 @@
# to clean 'lb' service
- block:
    - name: get service info
      shell: 'systemctl list-units --type=service |grep -E "l4lb|keepalived|ssh"'
      register: service_info

    - name: remove service l4lb
      service: name=l4lb state=stopped enabled=no
      when: '"l4lb" in service_info.stdout'
      ignore_errors: true

    - name: remove service keepalived
@ -17,6 +17,9 @@
    - name: remove files and dirs
      file: name={{ item }} state=absent
      with_items:
        - "/etc/l4lb"
        - "/etc/keepalived"
        - "/etc/systemd/system/l4lb.service"
        - "/etc/systemd/system/keepalived.service"
        - "/usr/local/sbin/keepalived"
      when: "inventory_hostname in groups['ex_lb']"


@ -1,13 +1,13 @@
- hosts:
    - ex-lb
  tasks:
    - name: get service info
      shell: 'systemctl list-units --type=service |grep -E "l4lb|keepalived|ssh"'
      register: service_info

    - name: remove service l4lb
      service: name=l4lb state=stopped enabled=no
      when: '"l4lb" in service_info.stdout'
      ignore_errors: true

    - name: remove service keepalived
@ -18,5 +18,8 @@
    - name: remove files and dirs
      file: name={{ item }} state=absent
      with_items:
        - "/etc/l4lb"
        - "/etc/keepalived"
        - "/etc/systemd/system/l4lb.service"
        - "/etc/systemd/system/keepalived.service"
        - "/usr/local/sbin/keepalived"


@ -2,22 +2,14 @@
# Because the project sends VRRP messages via unicast, this ROUTER_ID may be duplicated on the same network segment without causing problems
ROUTER_ID: 222

# enable load balancing for the ingress NodePort service (yes/no)
INGRESS_NODEPORT_LB: "no"
# port of the ingress NodePort service
INGRESS_NODEPORT_LB_PORT: 23456

# enable load balancing for the ingress TLS NodePort service (yes/no)
INGRESS_TLS_NODEPORT_LB: "no"
# port of the ingress TLS NodePort service
INGRESS_TLS_NODEPORT_LB_PORT: 23457
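With `INGRESS_NODEPORT_LB: "yes"`, the rendered l4lb configuration would gain an HTTP upstream roughly like the following (a sketch with hypothetical node addresses; the real template iterates over the kube_node group):

```nginx
upstream ingress-nodes {
    server 192.168.1.3:23456 max_fails=2 fail_timeout=3s;
    server 192.168.1.4:23456 max_fails=2 fail_timeout=3s;
}
server {
    listen 0.0.0.0:80;
    proxy_connect_timeout 1s;
    proxy_pass ingress-nodes;
}
```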


@ -8,16 +8,46 @@
set_fact: LB_IF={{ LB_IF_TMP.stdout }}
tags: restart_lb
  - name: prepare some dirs
    file: name={{ item }} state=directory
    with_items:
      - "/etc/l4lb/sbin"
      - "/etc/l4lb/logs"
      - "/etc/l4lb/conf"
      - "/etc/keepalived"

  - name: distribute the l4lb (nginx) binary
    copy: src={{ base_dir }}/bin/nginx dest=/etc/l4lb/sbin/l4lb mode=0755

  - name: create the l4lb configuration file
    template: src=l4lb.conf.j2 dest=/etc/l4lb/conf/l4lb.conf
    tags: restart_lb

  - name: create the l4lb systemd unit file
    template: src=l4lb.service.j2 dest=/etc/systemd/system/l4lb.service
    tags: restart_lb

  - name: enable the l4lb service at boot
    shell: systemctl enable l4lb
    ignore_errors: true

  - name: start the l4lb service
    shell: systemctl daemon-reload && systemctl restart l4lb
    ignore_errors: true
    tags: restart_lb

  - name: wait for the l4lb service to start, polling
    shell: "systemctl status l4lb.service|grep Active"
    register: svc_status
    until: '"running" in svc_status.stdout'
    retries: 3
    delay: 3
    tags: restart_lb

  - name: distribute the keepalived binary
    copy: src={{ base_dir }}/bin/keepalived dest=/usr/local/sbin/keepalived mode=0755

  - name: configure the keepalived master node
    template: src=keepalived-master.conf.j2 dest=/etc/keepalived/keepalived.conf
    when: LB_ROLE == "master"
@ -28,39 +58,23 @@
when: LB_ROLE == "backup"
tags: restart_lb
  - name: create the keepalived systemd unit file
    template: src=keepalived.service.j2 dest=/etc/systemd/system/keepalived.service
    tags: restart_lb

  - name: enable the keepalived service at boot
    shell: systemctl enable keepalived
    ignore_errors: true

  - name: start the keepalived service
    shell: systemctl daemon-reload && systemctl restart keepalived
    ignore_errors: true
    tags: restart_lb

  - name: wait for the keepalived service to start, polling
    shell: "systemctl status keepalived.service|grep Active"
    register: svc_status
    until: '"running" in svc_status.stdout'
    retries: 3
    delay: 3
    tags: restart_lb


@ -1,131 +0,0 @@
# offline installation of haproxy
- name: prepare the offline package directory
  file: name=/opt/kube/packages/haproxy state=directory

- block:
    - name: distribute the haproxy_xenial offline package
      copy:
        src: "{{ base_dir }}/down/packages/haproxy_xenial.tar.gz"
        dest: "/opt/kube/packages/haproxy/haproxy_xenial.tar.gz"

    - name: install the haproxy_xenial offline package
      shell: 'cd /opt/kube/packages/haproxy && tar zxf haproxy_xenial.tar.gz && \
            dpkg -i *.deb > /tmp/install_haproxy.log 2>&1'
  when: ansible_distribution_release == "xenial"
  ignore_errors: true

- block:
    - name: distribute the haproxy_bionic offline package
      copy:
        src: "{{ base_dir }}/down/packages/haproxy_bionic.tar.gz"
        dest: "/opt/kube/packages/haproxy/haproxy_bionic.tar.gz"

    - name: install the haproxy_bionic offline package
      shell: 'cd /opt/kube/packages/haproxy && tar zxf haproxy_bionic.tar.gz && \
            dpkg -i *.deb > /tmp/install_haproxy.log 2>&1'
  when: ansible_distribution_release == "bionic"
  ignore_errors: true

- block:
    - name: distribute the haproxy_centos7 offline package
      copy:
        src: "{{ base_dir }}/down/packages/haproxy_centos7.tar.gz"
        dest: "/opt/kube/packages/haproxy/haproxy_centos7.tar.gz"

    - name: install the haproxy_centos7 offline package
      shell: 'cd /opt/kube/packages/haproxy && tar zxf haproxy_centos7.tar.gz && \
            rpm -Uvh --force --nodeps *.rpm > /tmp/install_haproxy.log 2>&1'
  when:
    - 'ansible_distribution == "CentOS"'
    - 'ansible_distribution_major_version == "7"'
  ignore_errors: true

- block:
    - name: distribute the haproxy_stretch offline package
      copy:
        src: "{{ base_dir }}/down/packages/haproxy_stretch.tar.gz"
        dest: "/opt/kube/packages/haproxy/haproxy_stretch.tar.gz"

    - name: install the haproxy_stretch offline package
      shell: 'cd /opt/kube/packages/haproxy && tar zxf haproxy_stretch.tar.gz && \
            dpkg -i *.deb > /tmp/install_haproxy.log 2>&1'
  when: ansible_distribution_release == "stretch"
  ignore_errors: true

- block:
    - name: distribute the haproxy_buster offline package
      copy:
        src: "{{ base_dir }}/down/packages/haproxy_buster.tar.gz"
        dest: "/opt/kube/packages/haproxy/haproxy_buster.tar.gz"

    - name: install the haproxy_buster offline package
      shell: 'cd /opt/kube/packages/haproxy && tar zxf haproxy_buster.tar.gz && \
            dpkg -i *.deb > /tmp/install_haproxy.log 2>&1'
  when: ansible_distribution_release == "buster"
  ignore_errors: true

# offline installation of keepalived
- name: prepare the offline package directory
  file: name=/opt/kube/packages/keepalived state=directory

- block:
    - name: distribute the keepalived_xenial offline package
      copy:
        src: "{{ base_dir }}/down/packages/keepalived_xenial.tar.gz"
        dest: "/opt/kube/packages/keepalived/keepalived_xenial.tar.gz"

    - name: install the keepalived_xenial offline package
      shell: 'cd /opt/kube/packages/keepalived && tar zxf keepalived_xenial.tar.gz && \
            dpkg -i *.deb > /tmp/install_keepalived.log 2>&1'
  when: ansible_distribution_release == "xenial"
  ignore_errors: true

- block:
    - name: distribute the keepalived_bionic offline package
      copy:
        src: "{{ base_dir }}/down/packages/keepalived_bionic.tar.gz"
        dest: "/opt/kube/packages/keepalived/keepalived_bionic.tar.gz"

    - name: install the keepalived_bionic offline package
      shell: 'cd /opt/kube/packages/keepalived && tar zxf keepalived_bionic.tar.gz && \
            dpkg -i *.deb > /tmp/install_keepalived.log 2>&1'
  when: ansible_distribution_release == "bionic"
  ignore_errors: true

- block:
    - name: distribute the keepalived_centos7 offline package
      copy:
        src: "{{ base_dir }}/down/packages/keepalived_centos7.tar.gz"
        dest: "/opt/kube/packages/keepalived/keepalived_centos7.tar.gz"

    - name: install the keepalived_centos7 offline package
      shell: 'cd /opt/kube/packages/keepalived && tar zxf keepalived_centos7.tar.gz && \
            rpm -Uvh --force --nodeps *.rpm > /tmp/install_keepalived.log 2>&1'
  when:
    - 'ansible_distribution == "CentOS"'
    - 'ansible_distribution_major_version == "7"'
  ignore_errors: true

- block:
    - name: distribute the keepalived_stretch offline package
      copy:
        src: "{{ base_dir }}/down/packages/keepalived_stretch.tar.gz"
        dest: "/opt/kube/packages/keepalived/keepalived_stretch.tar.gz"

    - name: install the keepalived_stretch offline package
      shell: 'cd /opt/kube/packages/keepalived && tar zxf keepalived_stretch.tar.gz && \
            dpkg -i *.deb > /tmp/install_keepalived.log 2>&1'
  when: ansible_distribution_release == "stretch"
  ignore_errors: true

- block:
    - name: distribute the keepalived_buster offline package
      copy:
        src: "{{ base_dir }}/down/packages/keepalived_buster.tar.gz"
        dest: "/opt/kube/packages/keepalived/keepalived_buster.tar.gz"

    - name: install the keepalived_buster offline package
      shell: 'cd /opt/kube/packages/keepalived && tar zxf keepalived_buster.tar.gz && \
            dpkg -i *.deb > /tmp/install_keepalived.log 2>&1'
  when: ansible_distribution_release == "buster"
  ignore_errors: true


@ -1,63 +0,0 @@
global
    log /dev/log local1 warning
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    maxconn 50000
    nbproc 1

defaults
    log global
    timeout connect 5s
    timeout client 10m
    timeout server 10m

listen kube_master
    bind 0.0.0.0:{{ EX_APISERVER_PORT }}
    mode tcp
    option tcplog
    option dontlognull
    option dontlog-normal
    balance {{ BALANCE_ALG }}
{% for host in groups['kube_master'] %}
    server {{ host }} {{ host }}:{{ SECURE_PORT }} check inter 5s fall 2 rise 2 weight 1
{% endfor %}

{% if INGRESS_NODEPORT_LB == "yes" %}
listen ingress-node
    bind 0.0.0.0:80
    mode tcp
    option tcplog
    option dontlognull
    option dontlog-normal
    balance {{ BALANCE_ALG }}
{% if groups['kube_node']|length > 3 %}
    server {{ groups['kube_node'][0] }} {{ groups['kube_node'][0] }}:{{INGRESS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
    server {{ groups['kube_node'][1] }} {{ groups['kube_node'][1] }}:{{INGRESS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
    server {{ groups['kube_node'][2] }} {{ groups['kube_node'][2] }}:{{INGRESS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
{% else %}
{% for host in groups['kube_node'] %}
    server {{ host }} {{ host }}:{{INGRESS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
{% endfor %}
{% endif %}
{% endif %}

{% if INGRESS_TLS_NODEPORT_LB == "yes" %}
listen ingress-node-tls
    bind 0.0.0.0:443
    mode tcp
    option tcplog
    option dontlognull
    option dontlog-normal
    balance {{ BALANCE_ALG }}
{% if groups['kube_node']|length > 3 %}
    server {{ groups['kube_node'][0] }} {{ groups['kube_node'][0] }}:{{INGRESS_TLS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
    server {{ groups['kube_node'][1] }} {{ groups['kube_node'][1] }}:{{INGRESS_TLS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
    server {{ groups['kube_node'][2] }} {{ groups['kube_node'][2] }}:{{INGRESS_TLS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
{% else %}
{% for host in groups['kube_node'] %}
    server {{ host }} {{ host }}:{{INGRESS_TLS_NODEPORT_LB_PORT}} check inter 5s fall 2 rise 2 weight 1
{% endfor %}
{% endif %}
{% endif %}


@ -1,13 +0,0 @@
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
EnvironmentFile=/etc/sysconfig/haproxy
ExecStartPre=/usr/bin/mkdir -p /run/haproxy
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
[Install]
WantedBy=multi-user.target


@ -1,15 +1,13 @@
global_defs {
    router_id lb-backup-{{ inventory_hostname }}
}
vrrp_track_process check-l4lb {
    process l4lb
    weight -60
    delay 3
}
vrrp_instance VI-01 {
    state BACKUP
    priority {{ 119 | random(61, 1) }}
    unicast_src_ip {{ inventory_hostname }}
@ -22,8 +20,8 @@ vrrp_instance VI-kube_master {
    interface {{ LB_IF }}
    virtual_router_id {{ ROUTER_ID }}
    advert_int 3
    track_process {
        check-l4lb
    }
    virtual_ipaddress {
        {{ EX_APISERVER_VIP }}


@ -1,15 +1,13 @@
global_defs {
    router_id lb-master-{{ inventory_hostname }}
}
vrrp_track_process check-l4lb {
    process l4lb
    weight -60
    delay 3
}
vrrp_instance VI-01 {
    state MASTER
    priority 120
    unicast_src_ip {{ inventory_hostname }}
@ -22,8 +20,8 @@ vrrp_instance VI-kube_master {
    interface {{ LB_IF }}
    virtual_router_id {{ ROUTER_ID }}
    advert_int 3
    track_process {
        check-l4lb
    }
    virtual_ipaddress {
        {{ EX_APISERVER_VIP }}


@ -0,0 +1,14 @@
[Unit]
Description=VRRP High Availability Monitor
After=network-online.target syslog.target
Wants=network-online.target
Documentation=https://keepalived.org/manpage.html
[Service]
Type=forking
KillMode=process
ExecStart=/usr/local/sbin/keepalived -D -f /etc/keepalived/keepalived.conf
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target


@ -0,0 +1,62 @@
user root;
worker_processes 1;

error_log  /etc/l4lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream apiservers {
{% for host in groups['kube_master'] %}
        server {{ host }}:{{ SECURE_PORT }} max_fails=2 fail_timeout=3s;
{% endfor %}
    }

    server {
        listen 0.0.0.0:{{ EX_APISERVER_PORT }};
        proxy_connect_timeout 1s;
        proxy_pass apiservers;
    }

{% if INGRESS_NODEPORT_LB == "yes" %}
    upstream ingress-nodes {
{% if groups['kube_node']|length > 3 %}
        server {{ groups['kube_node'][0] }}:{{ INGRESS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
        server {{ groups['kube_node'][1] }}:{{ INGRESS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
        server {{ groups['kube_node'][2] }}:{{ INGRESS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
{% else %}
{% for host in groups['kube_node'] %}
        server {{ host }}:{{ INGRESS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
{% endfor %}
{% endif %}
    }

    server {
        listen 0.0.0.0:80;
        proxy_connect_timeout 1s;
        proxy_pass ingress-nodes;
    }
{% endif %}

{% if INGRESS_TLS_NODEPORT_LB == "yes" %}
    upstream ingress-tls-nodes {
{% if groups['kube_node']|length > 3 %}
        server {{ groups['kube_node'][0] }}:{{ INGRESS_TLS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
        server {{ groups['kube_node'][1] }}:{{ INGRESS_TLS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
        server {{ groups['kube_node'][2] }}:{{ INGRESS_TLS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
{% else %}
{% for host in groups['kube_node'] %}
        server {{ host }}:{{ INGRESS_TLS_NODEPORT_LB_PORT }} max_fails=2 fail_timeout=3s;
{% endfor %}
{% endif %}
    }

    server {
        listen 0.0.0.0:443;
        proxy_connect_timeout 1s;
        proxy_pass ingress-tls-nodes;
    }
{% endif %}
}


@ -0,0 +1,19 @@
[Unit]
Description=l4 nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStartPre=/etc/l4lb/sbin/l4lb -c /etc/l4lb/conf/l4lb.conf -p /etc/l4lb -t
ExecStart=/etc/l4lb/sbin/l4lb -c /etc/l4lb/conf/l4lb.conf -p /etc/l4lb
ExecReload=/etc/l4lb/sbin/l4lb -c /etc/l4lb/conf/l4lb.conf -p /etc/l4lb -s reload
PrivateTmp=true
Restart=always
RestartSec=15
StartLimitInterval=0
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target