Upgrading Kubernetes in Kubespray
=============================

#### Description

Kubespray handles upgrades the same way it handles initial deployment. That is
to say that each component is laid down in a fixed order. You should be able to
upgrade from Kubespray tag 2.0 up to the current master without difficulty. You
can also individually control versions of components by explicitly defining
their versions (see the example after the following list). Here are the version
vars for each component:
* docker_version
* kube_version
* etcd_version
* calico_version
* calico_cni_version
* weave_version
* flannel_version
* kubedns_version
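
For example, you could pin several components in a single run by passing their
version vars on the command line (a sketch; the version numbers here are
illustrative, not recommendations):

```
# Pin kube, etcd, and docker versions explicitly (illustrative values)
ansible-playbook cluster.yml -i inventory/sample/hosts.ini \
  -e kube_version=v1.6.0 \
  -e etcd_version=v3.0.17 \
  -e docker_version=1.13
```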

#### Unsafe upgrade example

If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
deploy as follows:

```
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.3
```

And then repeat with v1.4.6 as kube_version:

```
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.6
```

#### Graceful upgrade

Kubespray also supports cordoning, draining, and uncordoning of nodes when
performing a cluster upgrade. There is a separate playbook used for this
purpose. It is important to note that upgrade-cluster.yml can only be used for
upgrading an existing cluster. That means there must be at least one
kube-master already deployed.

```
git fetch origin
git checkout origin/master
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.6.0
```

After a successful upgrade, the Server Version should be updated:

```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T19:15:41Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0+coreos.0", GitCommit:"8031716957d697332f9234ddf85febb07ac6c3e3", GitTreeState:"clean", BuildDate:"2017-03-29T04:33:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```
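
As a quick sanity check, you can also verify that every node was uncordoned
again after the rolling upgrade; no node should remain in the
SchedulingDisabled state:

```
kubectl get nodes
```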

#### Upgrade order

As mentioned above, components are upgraded in the order in which they were
installed in the Ansible playbook. The order of component installation is as
follows:

* Docker
* etcd
* kubelet and kube-proxy
* network_plugin (such as Calico or Weave)
* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)
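
If you want to spot-check what is actually running after an upgrade, each
component reports its own version (a sketch to be run on a cluster node; etcd
may live inside a container depending on your deployment, in which case check
the container image tag instead):

```
# Report the locally installed component versions
docker --version
kubelet --version
kubectl version --short
```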

#### Upgrade considerations

Kubespray supports rotating the certificates used for etcd and Kubernetes
components, but some manual steps may be required. If you have a pod that
requires use of a service token and is deployed in a namespace other than
`kube-system`, you will need to delete the affected pods manually after
rotating certificates. This is because all service account tokens depend on
the apiserver token that is used to generate them: when the certificate
rotates, all service account tokens must be rotated as well. During the
kubernetes-apps/rotate_tokens role, only pods in kube-system are destroyed and
recreated. All other invalidated service account tokens are cleaned up
automatically, but other pods are not deleted, out of an abundance of caution
regarding the impact on user-deployed pods.
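
For example, to force all pods in a hypothetical `my-app` namespace to be
recreated with fresh service account tokens (assuming they are managed by a
controller such as a Deployment, which will replace them):

```
# Deleting the pods causes their controller to recreate them with new tokens
kubectl delete pods --all --namespace=my-app
```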