Merge pull request #378 from bogdando/issues/26
Add HA/LB endpoints for kube-apiserver
commit fbc55da2bf
@@ -31,4 +31,94 @@ flannel networking plugin always uses a single `--etcd-server` endpoint!
 Kube-apiserver
 --------------
 
-TODO(bogdando) TBD
+K8s components require a loadbalancer to access the apiservers via a reverse
+proxy. Kube-proxy does not support multiple apiservers for the time being, so
+you will need to configure your own loadbalancer to achieve HA. Note that
+deploying a loadbalancer is up to the user and is not covered by the Ansible
+roles in Kargo. By default, Kargo only configures a non-HA endpoint, which
+points to the `access_ip` or IP address of the first server node in the
+`kube-master` group. It can also configure clients to use endpoints for a
+given loadbalancer type.
+
+A loadbalancer (LB) may be external or internal. An external LB provides
+access for external clients, while an internal LB accepts client connections
+only on localhost, similarly to the etcd-proxy HA endpoints. Given a frontend
+`VIP` address and `IP1, IP2` addresses of backends, here is an example
+configuration for a HAProxy service acting as an external LB:
+
+```
+listen kubernetes-apiserver-https
+  bind <VIP>:8383
+  option ssl-hello-chk
+  mode tcp
+  timeout client 3h
+  timeout server 3h
+  server master1 <IP1>:443
+  server master2 <IP2>:443
+  balance roundrobin
+```
+
+And the corresponding example global vars config:
+
+```
+apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
+loadbalancer_apiserver:
+  address: <VIP>
+  port: 8383
+```
+
+This domain name, or the default "lb-apiserver.kubernetes.local", will be
+inserted into the `/etc/hosts` file of all servers in the `k8s-cluster`
+group. Note that the HAProxy service itself should be HA as well and requires
+VIP management, which is out of the scope of this doc.
+
+An internal LB fits the case when you do not want to operate a VIP management
+HA stack and require neither external nor secure access to the K8s API. The
+group var `loadbalancer_apiserver_localhost` (defaults to `false`) controls
+that deployment layout. When enabled, each node in the `k8s-cluster` group is
+expected to run a loadbalancer that listens on the localhost frontend and has
+all of the apiservers as backends. Here is an example configuration for a
+HAProxy service acting as an internal LB:
+
+```
+listen kubernetes-apiserver-http
+  bind localhost:8080
+  mode tcp
+  timeout client 3h
+  timeout server 3h
+  server master1 <IP1>:8080
+  server master2 <IP2>:8080
+  balance leastconn
+```
+
+And the corresponding example global vars config:
+
+```
+loadbalancer_apiserver_localhost: true
+```
+
+This var overrides an external LB configuration, if any. Note that in this
+example the `kubernetes-apiserver-http` endpoint has backends receiving
+unencrypted traffic, which may be a security issue when interconnecting
+different nodes, or may not be, if those belong to an isolated management
+network without external access.
+
+In order to achieve HA for the HAProxy instances themselves, they must run on
+each node in the `k8s-cluster` group as well, but they require no VIP, thus
+no VIP management.
+
+Access endpoints are evaluated automatically, as follows:
+
+| Endpoint type                | kube-master   | non-master          |
+|------------------------------|---------------|---------------------|
+| Local LB (overrides ext)     | http://lc:p   | http://lc:p         |
+| External LB, no internal     | https://lb:lp | https://lb:lp       |
+| No ext/int LB (default)      | http://lc:p   | https://m[0].aip:sp |
+
+Where:
+* `m[0]` - the first node in the `kube-master` group;
+* `lb` - the LB FQDN, `apiserver_loadbalancer_domain_name`;
+* `lc` - localhost;
+* `p` - the insecure port, `kube_apiserver_insecure_port`;
+* `sp` - the secure port, `kube_apiserver_port`;
+* `lp` - the LB port, `loadbalancer_apiserver.port`, falls back to the secure port;
+* `ip` - the node IP, falls back to the ansible IP;
+* `aip` - `access_ip`, falls back to the ip.
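The endpoint table above can be read as a small decision function. The sketch below is illustrative only: the helper name, argument shapes, and default values are hypothetical (Kargo evaluates this via Jinja in `set_fact`, not in Python).

```python
# Hypothetical sketch of the per-node apiserver endpoint selection.
# `external_lb` is a (domain, port) tuple or None; ports default to the
# common insecure/secure values for illustration only.
def kube_apiserver_endpoint(is_master, localhost_lb, external_lb,
                            insecure_port=8080, secure_port=443,
                            first_master="10.0.0.1"):
    if localhost_lb:
        # A local LB overrides any external LB: every node talks to localhost.
        return "http://127.0.0.1:%d" % insecure_port
    if external_lb:
        # External LB: all nodes use the load-balanced HTTPS frontend;
        # the LB port falls back to the secure port when unset.
        domain, port = external_lb
        return "https://%s:%d" % (domain, port or secure_port)
    # No LB at all: masters use their local insecure port, other nodes
    # go to the first master's access_ip on the secure port.
    if is_master:
        return "http://127.0.0.1:%d" % insecure_port
    return "https://%s:%d" % (first_master, secure_port)
```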
@@ -51,22 +51,14 @@ cluster_name: cluster.local
 # but don't know about that address themselves.
 # access_ip: 1.1.1.1
 
-# Service endpoints. May be a VIP or a load balanced frontend IP, like one
-# that a HAProxy or Nginx provides, or just a local service endpoint.
-#
-# Etcd endpoints use a local etcd-proxies to reach the etcd cluster via
-# auto-evaluated endpoints. Those will reuse the access_ip for etcd cluster,
-# if specified, or defer to the localhost:2379 as well.
-
 # Etcd access modes:
 # Enable multiaccess to configure clients to access all of the etcd members directly
 # as the "http://hostX:port, http://hostY:port, ..." and ignore the proxy loadbalancers.
 # This may be the case if clients support and loadbalance multiple etcd servers natively.
 etcd_multiaccess: false
 
-#
-# TODO apiserver localhost:8080 and localhost:443 endpoints for kubelets and
-# (hyper)kube-* and networking components.
+# Assume there are no internal loadbalancers for the apiservers
+loadbalancer_apiserver_localhost: false
 
 # Choose network plugin (calico, weave or flannel)
 kube_network_plugin: flannel
@@ -126,21 +118,6 @@ dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address')
 # like you would do when using nova-client before starting the playbook.
 # cloud_provider:
 
-# For multi masters architecture:
-# kube-proxy doesn't support multiple apiservers for the time being so you'll need to configure your own loadbalancer
-# This domain name will be inserted into the /etc/hosts file of all servers
-# configuration example with haproxy :
-# listen kubernetes-apiserver-https
-# bind 10.99.0.21:8383
-# option ssl-hello-chk
-# mode tcp
-# timeout client 3h
-# timeout server 3h
-# server master1 10.99.0.26:443
-# server master2 10.99.0.27:443
-# balance roundrobin
-# apiserver_loadbalancer_domain_name: "lb-apiserver.kubernetes.local"
-
 ## Set these proxy values in order to update docker daemon to use proxies
 # http_proxy: ""
 # https_proxy: ""
@@ -27,9 +27,12 @@
   when: apiserver_manifest.changed

- name: wait for the apiserver to be running
-  wait_for:
-    port: "{{kube_apiserver_insecure_port}}"
-    timeout: 60
+  uri: url=http://localhost:8080/healthz
+  register: result
+  until: result.status == 200
+  retries: 10
+  delay: 6

# Create kube-system namespace
- name: copy 'kube-system' namespace manifest
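The `until`/`retries`/`delay` pattern in the task above behaves roughly like the following Python sketch. The `wait_for` helper and the `check` callable are illustrative stand-ins, not part of Kargo or Ansible:

```python
import time

# Poll a health check up to `retries` times, sleeping `delay` seconds
# between attempts, succeeding as soon as it returns HTTP 200 -- the same
# loop Ansible runs for a task with until/retries/delay.
def wait_for(check, retries=10, delay=6):
    for _ in range(retries):
        if check() == 200:
            return True
        time.sleep(delay)
    return False
```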
@@ -5,7 +5,7 @@ preferences: {}
 clusters:
 - cluster:
     certificate-authority-data: {{ kube_node_cert|b64encode }}
-    server: https://{{ groups['kube-master'][0] }}:{{ kube_apiserver_port }}
+    server: {{ kube_apiserver_endpoint }}
   name: {{ cluster_name }}
 contexts:
 - context:
@@ -13,7 +13,8 @@ spec:
     - apiserver
     - --advertise-address={{ ip | default(ansible_default_ipv4.address) }}
     - --etcd-servers={{ etcd_access_endpoint }}
-    - --insecure-bind-address={{ kube_apiserver_insecure_bind_address | default('127.0.0.1') }}
+    - --insecure-bind-address={{ kube_apiserver_insecure_bind_address }}
+    - --apiserver-count={{ kube_apiserver_count }}
     - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
     - --service-cluster-ip-range={{ kube_service_addresses }}
     - --client-ca-file={{ kube_cert_dir }}/ca.pem
@@ -11,7 +11,7 @@ spec:
     command:
     - /hyperkube
    - controller-manager
-    - --master=http://127.0.0.1:{{kube_apiserver_insecure_port}}
+    - --master={{ kube_apiserver_endpoint }}
     - --leader-elect=true
     - --service-account-private-key-file={{ kube_cert_dir }}/apiserver-key.pem
     - --root-ca-file={{ kube_cert_dir }}/ca.pem
@@ -12,7 +12,7 @@ spec:
     - /hyperkube
     - scheduler
     - --leader-elect=true
-    - --master=http://127.0.0.1:{{kube_apiserver_insecure_port}}
+    - --master={{ kube_apiserver_endpoint }}
     - --v={{ kube_log_level | default('2') }}
     livenessProbe:
       httpGet:
@@ -7,7 +7,7 @@ KUBE_LOGGING="--logtostderr=true"
 {% endif %}
 KUBE_LOG_LEVEL="--v={{ kube_log_level | default('2') }}"
 {% if inventory_hostname in groups['kube-node'] %}
-KUBELET_API_SERVER="--api_servers={% for host in groups['kube-master'] %}https://{{ hostvars[host]['access_ip'] | default(hostvars[host]['ip'] | default(hostvars[host]['ansible_default_ipv4']['address'])) }}:{{ kube_apiserver_port }}{% if not loop.last %},{% endif %}{% endfor %}"
+KUBELET_API_SERVER="--api_servers={{ kube_apiserver_endpoint }}"
 {% endif %}
 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS="--address={{ ip | default("0.0.0.0") }}"
@@ -12,14 +12,8 @@ spec:
     - /hyperkube
     - proxy
     - --v={{ kube_log_level | default('2') }}
-{% if inventory_hostname in groups['kube-master'] %}
-    - --master=http://127.0.0.1:{{kube_apiserver_insecure_port}}
-{% else %}
-{% if loadbalancer_apiserver is defined and apiserver_loadbalancer_domain_name is defined %}
-    - --master=https://{{ apiserver_loadbalancer_domain_name }}:{{ loadbalancer_apiserver.port }}
-{% else %}
-    - --master=https://{{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}:{{ kube_apiserver_port }}
-{% endif%}
+    - --master={{ kube_apiserver_endpoint }}
+{% if not is_kube_master %}
     - --kubeconfig=/etc/kubernetes/node-kubeconfig.yaml
 {% endif %}
     - --bind-address={{ ip | default(ansible_default_ipv4.address) }}
@@ -1,4 +1,26 @@
 ---
+- set_fact: kube_apiserver_count="{{ groups['kube-master'] | length }}"
+- set_fact: kube_apiserver_address="{{ ip | default(ansible_default_ipv4['address']) }}"
+- set_fact: kube_apiserver_access_address="{{ access_ip | default(kube_apiserver_address) }}"
+- set_fact: is_kube_master="{{ inventory_hostname in groups['kube-master'] }}"
+- set_fact: first_kube_master="{{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}"
+- set_fact:
+    kube_apiserver_insecure_bind_address: |-
+      {% if loadbalancer_apiserver_localhost %}{{ kube_apiserver_address }}{% else %}127.0.0.1{% endif %}
+- set_fact:
+    kube_apiserver_endpoint: |-
+      {% if loadbalancer_apiserver_localhost -%}
+      http://127.0.0.1:{{ kube_apiserver_insecure_port }}
+      {%- elif is_kube_master and loadbalancer_apiserver is not defined -%}
+      http://127.0.0.1:{{ kube_apiserver_insecure_port }}
+      {%- else -%}
+      {%- if loadbalancer_apiserver is defined and loadbalancer_apiserver.port is defined -%}
+      https://{{ apiserver_loadbalancer_domain_name|default('lb-apiserver.kubernetes.local') }}:{{ loadbalancer_apiserver.port|default(kube_apiserver_port) }}
+      {%- else -%}
+      https://{{ first_kube_master }}:{{ kube_apiserver_port }}
+      {%- endif -%}
+      {%- endif %}
+
 - set_fact: etcd_address="{{ ip | default(ansible_default_ipv4['address']) }}"
 - set_fact: etcd_access_address="{{ access_ip | default(etcd_address) }}"
 - set_fact: etcd_peer_url="http://{{ etcd_access_address }}:2380"
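The `first_kube_master` fact above chains three fallbacks: `access_ip`, then `ip`, then Ansible's detected default IPv4 address. A hypothetical Python equivalent of that fallback chain (the function name and dict shapes are illustrative only) would be:

```python
# Resolve the first kube-master's reachable address using the same
# access_ip -> ip -> ansible_default_ipv4 fallback order as the set_fact.
def first_kube_master(hostvars, masters):
    hv = hostvars[masters[0]]
    return (hv.get("access_ip")
            or hv.get("ip")
            or hv["ansible_default_ipv4"]["address"])
```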
@@ -3,7 +3,7 @@
 DEFAULT_IPV4={{ip | default(ansible_default_ipv4.address) }}
 
 # The Kubernetes master IP
-KUBERNETES_MASTER={{ hostvars[groups['kube-master'][0]]['access_ip'] | default(hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address'])) }}
+KUBERNETES_MASTER={{ first_kube_master }}
 
 # IP and port of etcd instance used by Calico
 ETCD_AUTHORITY={{ etcd_authority }}