use nginx proxy on non-master nodes to proxy apiserver traffic

Also adds all masters by hostname and localhost/127.0.0.1 to
apiserver SSL certificate.

Includes documentation update on how localhost loadbalancer works.
pull/528/head
Matthew Mosesohn 2016-09-28 14:05:08 +03:00
parent d9641771ed
commit 84052ff0b6
13 changed files with 129 additions and 47 deletions

loadbalancer_localhost.png (new image, 57 KiB; binary file not shown)

View File

@@ -33,15 +33,27 @@ Kube-apiserver
--------------
K8s components require a loadbalancer to access the apiservers via a reverse
-proxy. A kube-proxy does not support multiple apiservers for the time being so
+proxy. Kargo includes support for an nginx-based proxy that resides on each
+non-master Kubernetes node. This is referred to as localhost loadbalancing. It
+is less efficient than a dedicated load balancer because it creates extra
+health checks on the Kubernetes apiserver, but is more practical for scenarios
+where an external LB or virtual IP management is inconvenient.
+This option is configured by the variable `loadbalancer_apiserver_localhost`.
you will need to configure your own loadbalancer to achieve HA. Note that
deploying a loadbalancer is up to a user and is not covered by ansible roles
in Kargo. By default, it only configures a non-HA endpoint, which points to
the `access_ip` or IP address of the first server node in the `kube-master`
group. It can also configure clients to use endpoints for a given loadbalancer
-type.
+type. The following diagram shows how traffic to the apiserver is directed.
-A loadbalancer (LB) may be an external or internal one. An external LB
+![Image](figures/loadbalancer_localhost.png?raw=true)
+..note:: Kubernetes master nodes still use insecure localhost access because
+there are bugs in Kubernetes <1.5.0 in using TLS auth on master role
+services.
+A user may opt to use an external loadbalancer (LB) instead. An external LB
provides access for external clients, while the internal LB accepts client
connections only to the localhost, similarly to the etcd-proxy HA endpoints.
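For reference, a minimal group-vars sketch covering both layouts (the
external-LB values are illustrative placeholders, not defaults introduced
here):

```
# localhost loadbalancing: nginx proxy on each non-master node
loadbalancer_apiserver_localhost: true

# or an external LB, which takes precedence and disables the localhost proxy:
# apiserver_loadbalancer_domain_name: "apiserver.example.com"
# loadbalancer_apiserver:
#   address: 10.0.0.100
#   port: 443
```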
Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is
@@ -71,35 +83,11 @@ into the `/etc/hosts` file of all servers in the `k8s-cluster` group. Note that
the HAProxy service should as well be HA and requires a VIP management, which
is out of scope of this doc.
-The internal LB may be the case if you do not want to operate a VIP management
-HA stack and require no external and no secure access to the K8s API. The group
-var `loadbalancer_apiserver_localhost` (defaults to `false`) controls that
-deployment layout. When enabled, it is expected each node in the `k8s-cluster`
-group to run a loadbalancer that listens the localhost frontend and has all
-of the apiservers as backends. Here is an example configuration for a HAProxy
-service acting as an internal LB:
-```
-listen kubernetes-apiserver-http
-  bind localhost:8080
-  mode tcp
-  timeout client 3h
-  timeout server 3h
-  server master1 <IP1>:8080
-  server master2 <IP2>:8080
-  balance leastconn
-```
-And the corresponding example global vars config:
-```
-loadbalancer_apiserver_localhost: true
-```
-This var overrides an external LB configuration, if any. Note that for this
-example, the `kubernetes-apiserver-http` endpoint has backends receiving
-unencrypted traffic, which may be a security issue when interconnecting
-different nodes, or may be not, if those belong to the isolated management
-network without external access.
+Specifying an external LB overrides any internal localhost LB configuration.
+Note that for this example, the `kubernetes-apiserver-http` endpoint
+has backends receiving unencrypted traffic, which may be a security issue
+when interconnecting different nodes, or maybe not, if those belong to the
+isolated management network without external access.
In order to achieve HA for HAProxy instances, those must be running on the
each node in the `k8s-cluster` group as well, but require no VIP, thus
@@ -109,8 +97,8 @@ Access endpoints are evaluated automagically, as the following:
 | Endpoint type               | kube-master   | non-master          |
 |-----------------------------|---------------|---------------------|
-| Local LB (overrides ext)    | http://lc:p   | http://lc:p         |
-| External LB, no internal    | https://lb:lp | https://lb:lp       |
+| Local LB                    | http://lc:p   | http://lc:sp        |
+| External LB, no internal    | http://lc:p   | https://lb:lp       |
 | No ext/int LB (default)     | http://lc:p   | https://m[0].aip:sp |
Where:
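`lc` stands for localhost, `p` and `sp` for the insecure and secure apiserver
ports, `lb:lp` for the external loadbalancer address and port, and `m[0].aip`
for the first master's `access_ip`. With the stock ports (8080 insecure, 443
secure; assumed here), the rows work out to:

```
# assumed: kube_apiserver_insecure_port 8080, kube_apiserver_port 443
# Local LB:      masters -> http://127.0.0.1:8080   non-masters -> https://localhost:443
# No ext/int LB: masters -> http://127.0.0.1:8080   non-masters -> https://<m[0] access_ip>:443
```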

View File

@@ -64,8 +64,9 @@ ndots: 5
# This may be the case if clients support and loadbalance multiple etcd servers natively.
etcd_multiaccess: false
-# Assume there are no internal loadbalancers for apiservers exist
-loadbalancer_apiserver_localhost: false
+# Assume there are no internal loadbalancers for apiservers exist and listen on
+# kube_apiserver_port (default 443)
+loadbalancer_apiserver_localhost: true
# Choose network plugin (calico, weave or flannel)
kube_network_plugin: flannel

View File

@@ -11,3 +11,6 @@ kube_proxy_mode: iptables
# kube_api_runtime_config:
# - extensions/v1beta1/daemonsets=true
# - extensions/v1beta1/deployments=true
+nginx_image_repo: nginx
+nginx_image_tag: 1.11.4-alpine
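These defaults pull the stock nginx image from Docker Hub. For offline or
mirrored environments, the new knobs can point at a private registry (the
registry name below is hypothetical):

```
nginx_image_repo: registry.example.local/nginx
nginx_image_tag: 1.11.4-alpine
```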

View File

@@ -1,6 +1,9 @@
---
- include: install.yml
+- include: nginx-proxy.yml
+  when: is_kube_master == false and loadbalancer_apiserver_localhost|default(false)
- name: Write Calico cni config
  template:
    src: "cni-calico.conf.j2"

View File

@@ -0,0 +1,9 @@
---
- name: nginx-proxy | Write static pod
  template: src=manifests/nginx-proxy.manifest.j2 dest=/etc/kubernetes/manifests/nginx-proxy.yml

- name: nginx-proxy | Make nginx directory
  file: path=/etc/nginx state=directory mode=0700 owner=root

- name: nginx-proxy | Write nginx-proxy configuration
  template: src=nginx.conf.j2 dest="/etc/nginx/nginx.conf" owner=root mode=0755 backup=yes

View File

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: nginx-proxy
    image: {{ nginx_image_repo }}:{{ nginx_image_tag }}
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
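Two details in this manifest do the heavy lifting: `hostNetwork: true` binds
the proxy on the node itself, so kubelet and other host-level clients can
reach `https://localhost:<kube_apiserver_port>` without any cluster networking,
and the read-only hostPath volume hands nginx the `/etc/nginx/nginx.conf`
written by the tasks above.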

View File

@@ -0,0 +1,26 @@
error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
  upstream kube_apiserver {
    least_conn;
    {% for host in groups['kube-master'] -%}
    server {{ hostvars[host]['access_ip'] | default(hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address'])) }}:{{ kube_apiserver_port }};
    {% endfor %}
  }

  server {
    listen {{ kube_apiserver_port }};
    proxy_pass kube_apiserver;
    proxy_timeout 3s;
    proxy_connect_timeout 1s;
  }
}
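Rendered for a hypothetical two-master inventory (access IPs 10.0.0.11 and
10.0.0.12, `kube_apiserver_port` 443), the stream block comes out roughly as:

```
stream {
  upstream kube_apiserver {
    least_conn;
    server 10.0.0.11:443;
    server 10.0.0.12:443;
  }

  server {
    listen 443;
    proxy_pass kube_apiserver;
    proxy_timeout 3s;
    proxy_connect_timeout 1s;
  }
}
```

Because nginx proxies at the TCP level (`stream`), client TLS passes through
to the apiservers untouched; this is why the apiserver certificate gains the
`localhost` and `127.0.0.1` names elsewhere in this commit.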

View File

@@ -21,6 +21,8 @@ kube_log_dir: "/var/log/kubernetes"
# pods on startup
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
+# change to 0.0.0.0 to enable insecure access from anywhere (not recommended)
+kube_apiserver_insecure_bind_address: 127.0.0.1
common_required_pkgs:
  - python-httplib2

View File

@@ -5,12 +5,12 @@
- set_fact: is_kube_master="{{ inventory_hostname in groups['kube-master'] }}"
- set_fact: first_kube_master="{{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}"
- set_fact:
-    kube_apiserver_insecure_bind_address: |-
-      {% if loadbalancer_apiserver_localhost %}{{ kube_apiserver_address }}{% else %}127.0.0.1{% endif %}
+    loadbalancer_apiserver_localhost: false
  when: loadbalancer_apiserver is defined
- set_fact:
    kube_apiserver_endpoint: |-
-      {% if loadbalancer_apiserver_localhost -%}
-      http://127.0.0.1:{{ kube_apiserver_insecure_port }}
+      {% if not is_kube_master and loadbalancer_apiserver_localhost -%}
+      https://localhost:{{ kube_apiserver_port }}
+      {%- elif is_kube_master and loadbalancer_apiserver is not defined -%}
+      http://127.0.0.1:{{ kube_apiserver_insecure_port }}
      {%- else -%}

View File

@@ -26,8 +26,8 @@ Usage : $(basename $0) -f <config> [-d <ssldir>]
-h | --help : Show this message
-f | --config : Openssl configuration file
-d | --ssldir : Directory where the certificates will be installed
-ex :
+ex :
$(basename $0) -f openssl.conf -d /srv/ssl
EOF
}
@@ -37,7 +37,7 @@ while (($#)); do
case "$1" in
-h | --help) usage; exit 0;;
-f | --config) CONFIG=${2}; shift 2;;
-d | --ssldir) SSLDIR="${2}"; shift 2;;
-d | --ssldir) SSLDIR="${2}"; shift 2;;
*)
usage
echo "ERROR : Unknown option"
@@ -68,6 +68,7 @@ openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=
openssl genrsa -out apiserver-key.pem 2048 > /dev/null 2>&1
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config ${CONFIG} > /dev/null 2>&1
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile ${CONFIG} > /dev/null 2>&1
+cat ca.pem >> apiserver.pem
# Nodes and Admin
for i in node admin; do
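Appending the CA makes `apiserver.pem` a two-certificate bundle, so clients
receive the full chain. A quick sanity check (a sketch; run from the generated
ssl directory):

```
grep -c 'BEGIN CERTIFICATE' apiserver.pem    # expect 2 after the append
openssl verify -CAfile ca.pem apiserver.pem  # should report: apiserver.pem: OK
```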

View File

@@ -65,3 +65,30 @@
  shell: chmod 0600 {{ kube_cert_dir}}/*key.pem
  when: inventory_hostname in groups['kube-master']
  changed_when: false
- name: Gen_certs | target ca-certificates directory
  set_fact:
    ca_cert_dir: |-
      {% if ansible_os_family == "Debian" -%}
      /usr/local/share/ca-certificates
      {%- elif ansible_os_family == "RedHat" -%}
      /etc/pki/ca-trust/source/anchors
      {%- elif ansible_os_family == "CoreOS" -%}
      /etc/ssl/certs
      {%- endif %}

- name: Gen_certs | add CA to trusted CA dir
  copy:
    src: "{{ kube_cert_dir }}/ca.pem"
    dest: "{{ ca_cert_dir }}/kube-ca.crt"
    remote_src: true
  register: kube_ca_cert

- name: Gen_certs | update ca-certificates (Debian/Ubuntu/CoreOS)
  command: update-ca-certificates
  when: kube_ca_cert.changed and ansible_os_family in ["Debian", "CoreOS"]

- name: Gen_certs | update ca-certificates (RedHat)
  command: update-ca-trust extract
  when: kube_ca_cert.changed and ansible_os_family == "RedHat"

View File

@@ -11,16 +11,18 @@ DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.{{ dns_domain }}
+DNS.5 = localhost
{% for host in groups['kube-master'] %}
-DNS.{{ 4 + loop.index }} = {{ host }}
+DNS.{{ 5 + loop.index }} = {{ host }}
{% endfor %}
{% if loadbalancer_apiserver is defined and apiserver_loadbalancer_domain_name is defined %}
-{% set idx = groups['kube-master'] | length | int + 4 %}
-DNS.5 = {{ apiserver_loadbalancer_domain_name }}
+{% set idx = groups['kube-master'] | length | int + 5 %}
+DNS.{{ idx | string }} = {{ apiserver_loadbalancer_domain_name }}
{% endif %}
{% for host in groups['kube-master'] %}
IP.{{ 2 * loop.index - 1 }} = {{ hostvars[host]['access_ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}
IP.{{ 2 * loop.index }} = {{ hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address']) }}
{% endfor %}
{% set idx = groups['kube-master'] | length | int * 2 + 1 %}
-IP.{{ idx | string }} = {{ kube_apiserver_ip }}
+IP.{{ idx }} = {{ kube_apiserver_ip }}
+IP.{{ idx + 1 }} = 127.0.0.1
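For illustration, with an assumed two-master inventory (`master1`, `master2`),
the SAN section now renders along these lines (bracketed values stand for the
resolved host addresses; DNS.1-4 are the standard kubernetes service names
shown above):

```
DNS.5 = localhost
DNS.6 = master1
DNS.7 = master2
IP.1 = <master1 access_ip>
IP.2 = <master1 ip>
IP.3 = <master2 access_ip>
IP.4 = <master2 ip>
IP.5 = <kube_apiserver_ip>
IP.6 = 127.0.0.1
```

DNS.5 and IP.6 are the new entries that make the certificate valid for the
nginx localhost proxy.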