Merge pull request #331 from kubespray/docs

add documentation
pull/286/merge
Smaine Kahlouch 2016-07-04 14:39:00 +02:00 committed by GitHub
commit 19f5093034
10 changed files with 370 additions and 25 deletions


@ -1,25 +0,0 @@
![Kubespray Logo](http://s9.postimg.org/md5dyjl67/kubespray_logoandkubespray_small.png)
## Deploy a production-ready Kubernetes cluster
- Can be deployed on **AWS, GCE, OpenStack or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports the most popular **Linux distributions**
- **Continuous integration tests**
To deploy the cluster you can use:
* [**kargo-cli**](https://github.com/kubespray/kargo-cli)
* **vagrant** by simply running `vagrant up`
* **Ansible** usual commands
A complete **documentation** can be found [**here**](https://docs.kubespray.io)
If you have questions, you can [invite yourself](https://slack.kubespray.io/) to **chat** with us on Slack! [![SlackStatus](https://slack.kubespray.io/badge.svg)](https://kubespray.slack.com)
[![Build Status](https://travis-ci.org/kubespray/kargo.svg)](https://travis-ci.org/kubespray/kargo) <br />
CI tests are sponsored by Google (GCE) and [teuto.net](https://teuto.net/) (OpenStack).

1
README.md 120000

@ -0,0 +1 @@
docs/documentation.md

50
docs/ansible.md 100644

@ -0,0 +1,50 @@
Ansible variables
===============
Inventory
-------------
The inventory is composed of 3 groups:
* **kube-node**: the list of Kubernetes nodes where the pods will run.
* **kube-master**: the list of servers where the Kubernetes master components (apiserver, scheduler, controller manager) will run.
  Note: if you want a server to act as both master and node, it must be defined in both the _kube-master_ and _kube-node_ groups.
* **etcd**: the list of servers that compose the etcd cluster. You should have at least 3 servers for failover purposes.
Below is a complete inventory example:
```
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_ssh_host=95.54.0.12 # ip=10.3.0.1
node2 ansible_ssh_host=95.54.0.13 # ip=10.3.0.2
node3 ansible_ssh_host=95.54.0.14 # ip=10.3.0.3
node4 ansible_ssh_host=95.54.0.15 # ip=10.3.0.4
node5 ansible_ssh_host=95.54.0.16 # ip=10.3.0.5
node6 ansible_ssh_host=95.54.0.17 # ip=10.3.0.6
[kube-master]
node1
node2
[etcd]
node1
node2
node3
[kube-node]
node2
node3
node4
node5
node6
[k8s-cluster:children]
kube-node
kube-master
etcd
```
Group vars
--------------
The main variables to change are located in the file `inventory/group_vars/all.yml`.
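A rough sketch of that file is shown below; the exact set of variables depends on your Kargo version, so treat these as illustrative values taken from the rest of these docs rather than an authoritative listing:
```
# inventory/group_vars/all.yml (sketch -- check the file shipped with Kargo)

# Choice of network plugin: flannel (default), calico or weave
kube_network_plugin: flannel

# Internal network used by Kubernetes services
kube_service_addresses: 10.233.0.0/18

# Subnet assigned to the pods
kube_pods_subnet: 10.233.64.0/18

# Cloud provider, to uncomment and set when deploying on AWS, GCE or OpenStack
# cloud_provider: openstack
```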

39
docs/calico.md 100644

@ -0,0 +1,39 @@
Calico
===========
Check if the calico-node container is running
```
docker ps | grep calico
```
The **calicoctl** command allows you to check the status of the network and of the workloads.
* Check the status of Calico nodes
```
calicoctl status
```
* Show the configured network subnet for containers
```
calicoctl pool show
```
* Show the workloads (the IP addresses of the containers and where they are located)
```
calicoctl endpoint show --detail
```
##### Optional: BGP peering with border routers
In some cases you may want to route the pods' subnet directly, so that NAT is not needed on the nodes.
For instance, if you have a cluster spread across different locations, you may want your pods to talk to each other no matter where they are located.
The following variables need to be set:
`peer_with_router` to enable peering with the datacenter's border router (default value: false).
You'll also need to edit the inventory and add a hostvar `local_as` per node:
```
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```
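A minimal sketch of the corresponding group variable, assuming you keep it in `inventory/group_vars/all.yml` like the other settings:
```
# inventory/group_vars/all.yml (sketch)
# Peer each calico-node with the datacenter's border router
# (local_as is set per node in the inventory, as shown above)
peer_with_router: true
```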

22
docs/cloud.md 100644

@ -0,0 +1,22 @@
Cloud providers
==============
#### Provisioning
You can use kargo-cli to start new instances on a cloud provider.
Here's an example:
```
kargo [aws|gce] --nodes 2 --etcd 3 --cluster-name test-smana
```
#### Deploy Kubernetes
With kargo-cli:
```
kargo deploy [--aws|--gce] -u admin
```
Or with the usual ansible-playbook command:
```
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```

24
docs/coreos.md 100644

@ -0,0 +1,24 @@
CoreOS bootstrap
===============
Example with **kargo-cli**:
```
kargo deploy --gce --coreos
```
Or with Ansible:
Before running the cluster playbook you must satisfy the following requirements:
* On each CoreOS node, a writable directory **/opt/bin** (~400M of disk space)
* Uncomment the variable **ansible\_python\_interpreter** in the file `inventory/group_vars/all.yml` (a sketch follows the playbook command below)
* Run the Python bootstrap playbook:
```
ansible-playbook -u smana -e ansible_ssh_user=smana -b --become-user=root -i inventory/inventory.cfg coreos-bootstrap.yml
```
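For reference, a sketch of the uncommented variable is shown below. The interpreter path is an assumption based on the writable **/opt/bin** directory mentioned above; check the commented-out line in your own `inventory/group_vars/all.yml` for the exact value:
```
# inventory/group_vars/all.yml (sketch) -- only relevant for CoreOS nodes.
# The path assumes the bootstrap playbook installs Python under /opt/bin.
ansible_python_interpreter: "/opt/bin/python"
```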
Then you can proceed to [cluster deployment](#run-deployment)


@ -0,0 +1,75 @@
![Kubespray Logo](http://s9.postimg.org/md5dyjl67/kubespray_logoandkubespray_small.png)
## Deploy a production-ready Kubernetes cluster
If you have questions, you can [invite yourself](https://slack.kubespray.io/) to **chat** with us on Slack! [![SlackStatus](https://slack.kubespray.io/badge.svg)](https://kubespray.slack.com)
- Can be deployed on **AWS, GCE, OpenStack or Baremetal**
- **Highly available** cluster
- **Composable** (Choice of the network plugin for instance)
- Supports the most popular **Linux distributions**
- **Continuous integration tests**
To deploy the cluster you can use:
[**kargo-cli**](https://github.com/kubespray/kargo-cli) <br>
**Ansible** usual commands <br>
**vagrant** by simply running `vagrant up` (for test purposes) <br>
* [Requirements](#requirements)
* [Getting started](docs/getting-started.md)
* [Vagrant install](docs/vagrant.md)
* [CoreOS bootstrap](docs/coreos.md)
* [Ansible variables](docs/ansible.md)
* [Cloud providers](docs/cloud.md)
* [Openstack](docs/openstack.md)
* [Network plugins](#network-plugins)
Supported Linux distributions
===============
* **CoreOS**
* **Debian** Wheezy, Jessie
* **Ubuntu** 14.10, 15.04, 15.10, 16.04
* **Fedora** 23
* **CentOS/RHEL** 7
Versions
--------------
[kubernetes](https://github.com/kubernetes/kubernetes/releases) v1.3.0 <br>
[etcd](https://github.com/coreos/etcd/releases) v3.0.1 <br>
[calicoctl](https://github.com/projectcalico/calico-docker/releases) v0.20.0 <br>
[flanneld](https://github.com/coreos/flannel/releases) v0.5.5 <br>
[weave](http://weave.works/) v1.5.0 <br>
[docker](https://www.docker.com/) v1.10.3 <br>
Requirements
--------------
* The target servers must have **access to the Internet** in order to pull docker images.
* The **firewalls are not managed**: you'll need to implement your own rules the way you used to.
  In order to avoid any issues during deployment you should disable your firewall.
* **Copy your SSH keys** to all the servers that are part of your inventory.
* **Ansible v2.x and python-netaddr**
## Network plugins
You can choose between 3 network plugins (default: `flannel` with the vxlan backend):
* [**flannel**](docs/flannel.md): gre/vxlan (layer 2) networking.
* [**calico**](docs/calico.md): bgp (layer 3) networking.
* **weave**: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. <br>
(Please refer to `weave` [troubleshooting documentation](http://docs.weave.works/weave/latest_release/troubleshooting.html))
The choice is defined with the variable `kube_network_plugin` (see the sketch below).
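A minimal sketch, assuming the variable lives in `inventory/group_vars/all.yml` as described in [Ansible variables](docs/ansible.md); valid values are `flannel`, `calico` and `weave`:
```
# inventory/group_vars/all.yml
# Choice of network plugin: flannel (default), calico or weave
kube_network_plugin: calico
```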
[![Build Status](https://travis-ci.org/kubespray/kargo.svg)](https://travis-ci.org/kubespray/kargo) <br />
CI tests are sponsored by Google (GCE) and [teuto.net](https://teuto.net/) (OpenStack).

51
docs/flannel.md 100644

@ -0,0 +1,51 @@
Flannel
==============
* The Flannel configuration file should have been created at `/run/flannel/subnet.env`:
```
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.0.0/18
FLANNEL_SUBNET=10.233.16.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```
* Check that the `flannel.1` network interface has been created:
```
ip a show dev flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
inet 10.233.16.0/18 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e0f3:a7ff:fe0f:bfcb/64 scope link
valid_lft forever preferred_lft forever
```
* Docker must be configured with a bridge IP in the flannel subnet:
```
ps aux | grep docker
root 20196 1.7 2.7 1260616 56840 ? Ssl 10:18 0:07 /usr/bin/docker daemon --bip=10.233.16.1/24 --mtu=1450
```
* Try to run a container and check its IP address:
```
kubectl run test --image=busybox --command -- tail -f /dev/null
replicationcontroller "test" created
kubectl describe po test-34ozs | grep ^IP
IP: 10.233.16.2
```
```
kubectl exec test-34ozs -- ip a show dev eth0
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff
inet 10.233.16.2/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fee9:2b03/64 scope link tentative flags 08
valid_lft forever preferred_lft forever
```


@ -0,0 +1,19 @@
Getting started
===============
The easiest way to run the deployment is to use the **kargo-cli** tool.
A complete documentation can be found in its [github repository](https://github.com/kubespray/kargo-cli).
Here is a simple example on AWS:
* Create instances and generate the inventory
```
kargo aws --instances 3
```
* Run the deployment
```
kargo deploy --aws -u centos -n calico
```

48
docs/openstack.md 100644

@ -0,0 +1,48 @@
OpenStack
===============
To deploy Kubespray on [OpenStack](https://www.openstack.org/), uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'openstack'`.
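Once uncommented, the line looks like this (minimal sketch):
```
# group_vars/all.yml
cloud_provider: 'openstack'
```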
After that, make sure to source your OpenStack credentials as you would when using `nova-client`, e.g. with `source path/to/your/openstack-rc`.
The next step is to make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack.
Otherwise [cinder](https://wiki.openstack.org/wiki/Cinder) won't work as expected.
Unless you are using Calico, you can now run the playbook.
**Additional step needed when using calico:**
Calico does not encapsulate all packets with the hosts' IP addresses. Instead, the packets are routed with the pods' IP addresses directly.
OpenStack will filter and drop all packets coming from IPs it does not know, in order to prevent spoofing.
In order to make Calico work on OpenStack you will need to tell OpenStack to allow Calico's packets by allowing the networks it uses.
First you will need the IDs of the OpenStack instances that will run Kubernetes:
```
nova list --tenant Your-Tenant
+--------------------------------------+--------+----------------------------------+--------+-------------+
| ID                                   | Name   | Tenant ID                        | Status | Power State |
+--------------------------------------+--------+----------------------------------+--------+-------------+
| e1f48aad-df96-4bce-bf61-62ae12bf3f95 | k8s-1  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
| 725cd548-6ea3-426b-baaa-e7306d3c8052 | k8s-2  | fba478440cb2444a9e5cf03717eb5d6f | ACTIVE | Running     |
+--------------------------------------+--------+----------------------------------+--------+-------------+
```
Then you can use the instance IDs to find the connected [neutron](https://wiki.openstack.org/wiki/Neutron) ports:
```
neutron port-list -c id -c device_id
+--------------------------------------+--------------------------------------+
| id                                   | device_id                            |
+--------------------------------------+--------------------------------------+
| 5662a4e0-e646-47f0-bf88-d80fbd2d99ef | e1f48aad-df96-4bce-bf61-62ae12bf3f95 |
| e5ae2045-a1e1-4e99-9aac-4353889449a7 | 725cd548-6ea3-426b-baaa-e7306d3c8052 |
+--------------------------------------+--------------------------------------+
```
Given the port IDs on the left, you can set the `allowed_address_pairs` in neutron:
```
# allow kube_service_addresses network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.0.0/18
# allow kube_pods_subnet network
neutron port-update 5662a4e0-e646-47f0-bf88-d80fbd2d99ef --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
neutron port-update e5ae2045-a1e1-4e99-9aac-4353889449a7 --allowed_address_pairs list=true type=dict ip_address=10.233.64.0/18
```
Now you can finally run the playbook.

41
docs/vagrant.md 100644

@ -0,0 +1,41 @@
Vagrant Install
=================
Assuming you have Vagrant (1.8+) installed with VirtualBox (it may work
with VMware, but that is untested), you should be able to launch a 3-node
Kubernetes cluster by simply running `$ vagrant up`.<br />
This will spin up 3 VMs and install Kubernetes on them. Once the
provisioning has completed, you can connect to any of them by running <br />
`$ vagrant ssh k8s-0[1..3]`.
```
$ vagrant up
Bringing machine 'k8s-01' up with 'virtualbox' provider...
Bringing machine 'k8s-02' up with 'virtualbox' provider...
Bringing machine 'k8s-03' up with 'virtualbox' provider...
==> k8s-01: Box 'bento/ubuntu-14.04' could not be found. Attempting to find and install...
...
...
k8s-03: Running ansible-playbook...
PLAY [k8s-cluster] *************************************************************
TASK [setup] *******************************************************************
ok: [k8s-03]
ok: [k8s-01]
ok: [k8s-02]
...
...
PLAY RECAP *********************************************************************
k8s-01 : ok=157 changed=66 unreachable=0 failed=0
k8s-02 : ok=137 changed=59 unreachable=0 failed=0
k8s-03 : ok=86 changed=51 unreachable=0 failed=0
$ vagrant ssh k8s-01
vagrant@k8s-01:~$ kubectl get nodes
NAME STATUS AGE
k8s-01 Ready 45s
k8s-02 Ready 45s
k8s-03 Ready 45s
```