> If we calculate the number of tries, the division will give 5, but in reality
> it will be from 3 to 5 with `nodeStatusUpdateRetry` attempts for each try. The
> total number of attempts will vary from 15 to 25 due to the latency of all
> components.
|
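Purely as an illustration (a minimal Go sketch, not part of the original guide), the arithmetic in that note can be written out. The 4s frequency and 20s grace period are assumed example values for which the division gives exactly 5, and the retry count of 5 is implied by the 15-to-25 range in the note:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed example values for which the division gives exactly 5 tries.
	nodeStatusUpdateFrequency := 4 * time.Second
	nodeMonitorGracePeriod := 20 * time.Second

	// Per-try retries inside the kubelet; 5 is implied by the 15-to-25 range above.
	nodeStatusUpdateRetry := 5

	maxTries := int(nodeMonitorGracePeriod / nodeStatusUpdateFrequency) // the plain division: 5
	minTries := maxTries - 2                                            // latency can swallow a couple of tries: 3

	fmt.Printf("tries: %d..%d\n", minTries, maxTries) // 3..5
	fmt.Printf("total attempts: %d..%d\n",
		minTries*nodeStatusUpdateRetry, maxTries*nodeStatusUpdateRetry) // 15..25
}
```
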
## Medium Update and Average Reaction
|
Let's set `--node-status-update-frequency` to **1m**.
`--node-monitor-grace-period` will be set to **5m** and `--pod-eviction-timeout`
to **1m**. In this scenario, every kubelet will try to update the status every
minute. There will be 5 * 5 = 25 attempts before the node is considered
unhealthy. After 5m, the Kubernetes controller manager will mark the node as
unhealthy. This means that pods will be evicted 1m after the node is marked
unhealthy (6m in total).
|
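The same arithmetic for this scenario, again as a rough Go sketch rather than anything taken from the kubelet or controller manager source (the retry count of 5 is the same assumption as above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Flag values chosen in this scenario; adjust to match your own cluster.
	nodeStatusUpdateFrequency := 1 * time.Minute // kubelet: --node-status-update-frequency
	nodeMonitorGracePeriod := 5 * time.Minute    // controller manager: --node-monitor-grace-period
	podEvictionTimeout := 1 * time.Minute        // controller manager: --pod-eviction-timeout
	nodeStatusUpdateRetry := 5                   // assumed kubelet-side retries per try

	tries := int(nodeMonitorGracePeriod / nodeStatusUpdateFrequency) // 5
	attempts := tries * nodeStatusUpdateRetry                        // 25

	fmt.Printf("status attempts before the node is considered unhealthy: %d\n", attempts)
	fmt.Printf("node marked unhealthy after: %v\n", nodeMonitorGracePeriod)           // 5m0s
	fmt.Printf("pods evicted after: %v\n", nodeMonitorGracePeriod+podEvictionTimeout) // 6m0s
}
```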
|