kubernetes-ansible
========

Install and configure a Kubernetes cluster, including the network plugin.

Based on [CiscoCloud](https://github.com/CiscoCloud/kubernetes-ansible)'s work.

### Requirements

Tested on **Debian Jessie** and **Ubuntu** (14.10, 15.04, 15.10).

* The target servers must have access to the Internet in order to pull docker images.

* The firewalls are not managed; you'll need to implement your own rules as you usually do.
* The following packages are required: openssl, curl, dnsmasq and python-httplib2 on the remote servers, and python-ipaddr on the deployment machine.

Ansible v1.9.x

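On apt-based systems (the supported targets are Debian/Ubuntu), the prerequisites above can be installed as follows. This is only a sketch, assuming root access; note that ```python-ipaddr``` belongs on the deployment machine, not on the targets:

```shell
# On each target server (Debian Jessie / Ubuntu):
apt-get update
apt-get install -y openssl curl dnsmasq python-httplib2

# On the deployment machine:
apt-get install -y python-ipaddr
```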
### Components

* [kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.1.3
* [etcd](https://github.com/coreos/etcd/releases) v2.2.2
* [calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.13.0
* [flanneld](https://github.com/coreos/flannel/releases) v0.5.5
* [docker](https://www.docker.com/) v1.9.1
Run the playbook:

```
ansible-playbook -i inventory/inventory.cfg cluster.yml -u root
```

You can jump directly to "*Available apps, installation procedure*".
### Variables

The main variables to change are located in the file ```inventory/group_vars/all.yml```.

### Inventory

Below is an example of an inventory.
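As an illustrative sketch only (hostnames and IPs are placeholders, and the group names are assumptions — they must match the groups the playbooks actually use), an ```inventory/inventory.cfg``` could look like:

```
[kube-master]
node1 ansible_ssh_host=10.0.0.1

[etcd]
node1 ansible_ssh_host=10.0.0.1
node2 ansible_ssh_host=10.0.0.2
node3 ansible_ssh_host=10.0.0.3

[kube-node]
node2 ansible_ssh_host=10.0.0.2
node3 ansible_ssh_host=10.0.0.3

[k8s-cluster:children]
kube-node
kube-master
```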
It is possible to define variables for different environments.
For instance, in order to deploy the cluster on the 'dev' environment, run the following command:

```
ansible-playbook -i inventory/dev/inventory.cfg cluster.yml -u root
```

Kubernetes

* Almost all Kubernetes components run in pods, except *kubelet*. These pods are managed by kubelet, which ensures they are always running.

* For safety reasons, you should have at least two master nodes and three etcd servers.

* Kube-proxy doesn't support multiple apiservers on startup ([Issue 18174](https://github.com/kubernetes/kubernetes/issues/18174)). An external loadbalancer needs to be configured.
In order to do so, two variables have to be set: '**loadbalancer_apiserver**' and '**apiserver_loadbalancer_domain_name**'.

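As an illustration only (the exact structure expected for these variables is defined by the playbooks and may differ — check the role defaults), a group_vars entry could look like:

```
## Hypothetical example — values are placeholders
apiserver_loadbalancer_domain_name: "k8s-api.example.com"
loadbalancer_apiserver:
  address: 10.0.0.100
  port: 443
```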
### Expose a service

There are several loadbalancing solutions.
The ones I found suitable for Kubernetes are [Vulcand](http://vulcand.io/) and [Haproxy](http://www.haproxy.org/).

My cluster is working with Haproxy, and the Kubernetes services are configured with the loadbalancing type '**nodePort**':
each node opens the same TCP port and forwards the traffic to the target pod, wherever it is located.
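For illustration (names and ports are placeholders, not taken from this repository), a '**nodePort**' service manifest looks like this:

```
apiVersion: v1
kind: Service
metadata:
  name: my-app        # placeholder name
spec:
  type: NodePort
  selector:
    app: my-app       # matches the label of the target pods
  ports:
  - port: 80          # service port inside the cluster
    nodePort: 30080   # same TCP port opened on every node
```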
### Check cluster status

#### Kubernetes components

Master processes: kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy

Node processes: kubelet, kube-proxy, [calico-node|flanneld]

* Check the status of the processes

```
systemctl status kubelet
```

* Check the logs

```
journalctl -ae -u kubelet
```

* Check the NAT rules

```
iptables -nLv -t nat
```

For the master nodes, you'll have to check the docker logs of the apiserver:

```
docker logs [apiserver docker id]
```

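To find that container id, you can filter the output of ```docker ps``` (a sketch, assuming the apiserver container's name contains "apiserver"):

```shell
# List running containers and keep the apiserver line;
# the first column is the container id.
docker ps | grep apiserver
```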
### Available apps, installation procedure