@@ -33,15 +33,20 @@ proxy. Kargo includes support for an nginx-based proxy that resides on each
 non-master Kubernetes node. This is referred to as localhost loadbalancing. It
 is less efficient than a dedicated load balancer because it creates extra
 health checks on the Kubernetes apiserver, but is more practical for scenarios
-where an external LB or virtual IP management is inconvenient.
+where an external LB or virtual IP management is inconvenient. This option is
+configured by the variable `loadbalancer_apiserver_localhost`. You may also
+define the port the local internal loadbalancer uses by changing
+`nginx_kube_apiserver_port`. This defaults to the value of `kube_apiserver_port`.
+It is also important to note that Kargo will only configure kubelet and kube-proxy
+on non-master nodes to use the local internal loadbalancer.
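+
+For example, assuming the variable acts as a simple on/off toggle, you might set
+the following in your inventory group variables (the port value shown here is
+only illustrative, not a default):
+
+```yaml
+# Use the local nginx proxy on non-master nodes to reach the apiserver.
+loadbalancer_apiserver_localhost: true
+# Optional: the port the local proxy listens on. If left unset, it falls back
+# to the value of kube_apiserver_port.
+nginx_kube_apiserver_port: 8383
+```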
 
-This option is configured by the variable `loadbalancer_apiserver_localhost`.
-you will need to configure your own loadbalancer to achieve HA. Note that
-deploying a loadbalancer is up to a user and is not covered by ansible roles
-in Kargo. By default, it only configures a non-HA endpoint, which points to
-the `access_ip` or IP address of the first server node in the `kube-master`
-group. It can also configure clients to use endpoints for a given loadbalancer
-type. The following diagram shows how traffic to the apiserver is directed.
+If you choose not to use the local internal loadbalancer, you will need to configure
+your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to
+the user and is not covered by the Ansible roles in Kargo. By default, Kargo only
+configures a non-HA endpoint, which points to the `access_ip` or IP address of the
+first server node in the `kube-master` group. It can also configure clients to use
+endpoints for a given loadbalancer type. The following diagram shows how traffic to
+the apiserver is directed.
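+
+If you do provide your own loadbalancer, you would typically point clients at it
+with variables along these lines (the domain name and port below are placeholders,
+and provisioning the loadbalancer itself remains up to you):
+
+```yaml
+# FQDN clients should use to reach the external loadbalancer.
+apiserver_loadbalancer_domain_name: "apiserver.example.com"
+# Port the loadbalancer listens on; when omitted it defers to the secure port.
+loadbalancer_apiserver:
+  port: 6443
+```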
 
 ![Image](figures/loadbalancer_localhost.png?raw=true)
 
@@ -90,7 +95,7 @@ Access endpoints are evaluated automagically, as the following:
 
 | Endpoint type                | kube-master   | non-master          |
 |------------------------------|---------------|---------------------|
-| Local LB                     | http://lc:p   | https://lc:sp       |
+| Local LB                     | http://lc:p   | https://lc:nsp      |
 | External LB, no internal     | https://lb:lp | https://lb:lp       |
 | No ext/int LB (default)      | http://lc:p   | https://m[0].aip:sp |
@@ -99,7 +104,9 @@ Where:
 * `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
 * `lc` - localhost;
 * `p` - insecure port, `kube_apiserver_insecure_port`
+* `nsp` - nginx secure port, `nginx_kube_apiserver_port`;
 * `sp` - secure port, `kube_apiserver_port`;
 * `lp` - LB port, `loadbalancer_apiserver.port`, defers to the secure port;
 * `ip` - the node IP, defers to the ansible IP;
 * `aip` - `access_ip`, defers to the ip.
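+
+As a concrete reading of the table, with the local LB enabled and the illustrative
+values used above, and assuming the common defaults of 8080 for the insecure port
+and 6443 for the secure port, the endpoints work out roughly as follows (the keys
+below are labels for this example only, not Kargo variables):
+
+```yaml
+# Illustrative labels, not real inventory variables.
+# kube-master node, "Local LB" row: http://lc:p
+master_endpoint: "http://localhost:8080"
+# non-master node, "Local LB" row: https://lc:nsp
+node_endpoint_local_lb: "https://localhost:8383"
+# non-master node, "No ext/int LB (default)" row: https://m[0].aip:sp
+node_endpoint_no_lb: "https://<first-master-access_ip>:6443"
+```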