Add markdown CI (#5380)
parent b1fbead531
commit a9b67d586b
@@ -47,3 +47,11 @@ tox-inventory-builder:
    - cd contrib/inventory_builder && tox
  when: manual
  except: ['triggers', 'master']

markdownlint:
  stage: unit-tests
  image: node
  before_script:
    - npm install -g markdownlint-cli
  script:
    - markdownlint README.md docs --ignore docs/_sidebar.md
@@ -0,0 +1,2 @@
---
MD013: false
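The two lines above are the new lint configuration; they disable the MD013 line-length rule. To reproduce the CI check locally before pushing, something like the following should work. This is a sketch that assumes a recent Node.js install and that the config above lives at the repository root as `.markdownlint.yml`, where markdownlint-cli picks it up automatically (pass `--config` explicitly if it does not):

```ShellSession
# Same commands as the CI job above; --ignore skips the generated sidebar
npm install -g markdownlint-cli
markdownlint README.md docs --ignore docs/_sidebar.md
```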
README.md
@@ -1,7 +1,6 @@
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)
# Deploy a Production Ready Kubernetes Cluster
Deploy a Production Ready Kubernetes Cluster
============================================
![Kubernetes Logo](https://raw.githubusercontent.com/kubernetes-sigs/kubespray/master/docs/img/kubernetes-logo.png)

If you have questions, check the [documentation](https://kubespray.io) and join us on the [kubernetes slack](https://kubernetes.slack.com), channel **\#kubespray**.
You can get your invite [here](http://slack.k8s.io/)
@@ -12,8 +11,7 @@ You can get your invite [here](http://slack.k8s.io/)
- Supports most popular **Linux distributions**
- **Continuous integration tests**

Quick Start
-----------
## Quick Start

To deploy the cluster you can use:
@@ -21,6 +19,7 @@ To deploy the cluster you can use:

#### Usage

```ShellSession
# Install dependencies from ``requirements.txt``
sudo pip install -r requirements.txt
@@ -40,12 +39,15 @@ To deploy the cluster you can use:
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml
```

Note: When Ansible is already installed via system packages on the control machine, other python packages installed via `sudo pip install -r requirements.txt` will go to a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on Ubuntu).
As a consequence, the `ansible-playbook` command will fail with:

```
```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```

probably pointing to a task that depends on a module installed from requirements.txt (i.e. "unseal vault").

One way of solving this would be to uninstall the Ansible package and then install it via pip, but this is not always possible.
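The next hunk's context line mentions the alternative workaround: setting the `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables. As a rough sketch of that idea (the paths are illustrative for the Ubuntu layout quoted above and depend on where pip actually placed the packages):

```ShellSession
# Point Ansible at the pip-installed module tree (illustrative paths)
export ANSIBLE_LIBRARY=/usr/local/lib/python2.7/dist-packages/ansible/modules
export ANSIBLE_MODULE_UTILS=/usr/local/lib/python2.7/dist-packages/ansible/module_utils
```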
@@ -56,16 +58,19 @@ A workaround consists of setting `ANSIBLE_LIBRARY` and `ANSIBLE_MODULE_UTILS` environment variables
For Vagrant we need to install python dependencies for provisioning tasks.
Check if Python and pip are installed:

```ShellSession
python -V && pip -V
```

If this returns the version of the software, you're good to go. If not, download and install Python from here <https://www.python.org/downloads/source/>
Install the necessary requirements:

```ShellSession
sudo pip install -r requirements.txt
vagrant up
```

Documents
---------
## Documents

- [Requirements](#requirements)
- [Kubespray vs ...](docs/comparisons.md)
@@ -91,8 +96,7 @@ Documents
- [Upgrades basics](docs/upgrades.md)
- [Roadmap](docs/roadmap.md)

Supported Linux Distributions
-----------------------------
## Supported Linux Distributions

- **Container Linux by CoreOS**
- **Debian** Buster, Jessie, Stretch, Wheezy
@@ -105,8 +109,7 @@ Supported Linux Distributions

Note: Upstart/SysV init based OS types are not supported.

Supported Components
--------------------
## Supported Components

- Core
  - [kubernetes](https://github.com/kubernetes/kubernetes) v1.16.3
@@ -132,8 +135,8 @@ Supported Components

Note: The list of validated [docker versions](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md) was updated to 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09. kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version. The kubelet might break on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.

Requirements
------------
## Requirements

- **Minimum required version of Kubernetes is v1.15**
- **Ansible v2.7.8 (or newer, but [not 2.8.x](https://github.com/kubernetes-sigs/kubespray/issues/4778)) and python-netaddr installed on the machine
  that will run Ansible commands**
@@ -155,8 +158,7 @@ These limits are safeguarded by Kubespray. Actual requirements for your workload
- Node
  - Memory: 1024 MB

Network Plugins
---------------
## Network Plugins

You can choose between 10 network plugins. (default: `calico`, except Vagrant uses `flannel`)
@@ -189,22 +191,19 @@ The choice is defined with the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
See also [Network checker](docs/netcheck.md).

Community docs and resources
----------------------------
## Community docs and resources

- [kubernetes.io/docs/setup/production-environment/tools/kubespray/](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/)
- [kubespray, monitoring and logging](https://github.com/gregbkr/kubernetes-kargo-logging-monitoring) by @gregbkr
- [Deploy Kubernetes w/ Ansible & Terraform](https://rsmitty.github.io/Terraform-Ansible-Kubernetes/) by @rsmitty
- [Deploy a Kubernetes Cluster with Kubespray (video)](https://www.youtube.com/watch?v=N9q51JgbWu8)

Tools and projects on top of Kubespray
--------------------------------------
## Tools and projects on top of Kubespray

- [Digital Rebar Provision](https://github.com/digitalrebar/provision/blob/v4/doc/integrations/ansible.rst)
- [Terraform Contrib](https://github.com/kubernetes-sigs/kubespray/tree/master/contrib/terraform)

CI Tests
--------
## CI Tests

[![Build graphs](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/badges/master/build.svg)](https://gitlab.com/kargo-ci/kubernetes-sigs-kubespray/pipelines)
@@ -1,9 +1,7 @@
Ansible variables
===============
# Ansible variables

Inventory
-------------
## Inventory

The inventory is composed of 3 groups:

* **kube-node** : list of kubernetes nodes where the pods will run.
@@ -14,7 +12,7 @@ Note: do not modify the children of _k8s-cluster_, like putting
the _etcd_ group into the _k8s-cluster_, unless you are certain
you want to do that and you have it fully contained in the latter:

```
```ShellSession
k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
```
@@ -32,7 +30,7 @@ There are also two special groups:

Below is a complete inventory example:

```
```ini
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
node1 ansible_host=95.54.0.12 ip=10.3.0.1
@@ -63,8 +61,7 @@ kube-node
kube-master
```

Group vars and overriding variables precedence
----------------------------------------------
## Group vars and overriding variables precedence

The group variables to control main deployment options are located in the directory ``inventory/sample/group_vars``.
Optional variables are located in the `inventory/sample/group_vars/all.yml`.
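The precedence table in the next hunk ends with extra vars, which always win. As an illustration of that override mechanism (a sketch; the inventory path is the sample one used in the example commands later in this file, and `foo.yml` is a hypothetical file holding your overrides):

```ShellSession
# Extra vars passed with -e @file take precedence over all group_vars
ansible-playbook -i inventory/sample/hosts.ini -e @foo.yml cluster.yml
```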
@@ -97,8 +94,8 @@ block vars (only for tasks in block) | Kubespray overrides for internal roles' logic
task vars (only for the task) | Unused for roles, but only for helper scripts
**extra vars** (always win precedence) | override with ``ansible-playbook -e @foo.yml``

Ansible tags
------------
## Ansible tags

The following tags are defined in playbooks:

| Tag name | Used for
@@ -145,21 +142,25 @@ Note: Use the ``bash scripts/gen_tags.sh`` command to generate a list of all
tags found in the codebase. New tags will be listed with the empty "Used for"
field.

Example commands
----------------
## Example commands

Example command to filter and apply only DNS configuration tasks and skip
everything else related to host OS configuration and downloading images of containers:

```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os
```

And this play only removes the K8s cluster DNS resolver IP from hosts' /etc/resolv.conf files:

```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf
```

And this prepares all container images locally (at the ansible runner node) without installing
or upgrading related stuff or trying to upload containers to K8s cluster nodes:

```
```ShellSession
ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
    -e download_run_once=true -e download_localhost=true \
    --tags download --skip-tags upload,upgrade
@@ -167,14 +168,14 @@ ansible-playbook -i inventory/sample/hosts.ini cluster.yml \

Note: use `--tags` and `--skip-tags` wisely and only if you're 100% sure what you're doing.

Bastion host
--------------
## Bastion host

If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called *bastion* host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.

```
```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```
@@ -1,6 +1,7 @@
## Architecture compatibility
# Architecture compatibility

The following table shows the impact of the CPU architecture on compatible features:

- amd64: Cluster using only x86/amd64 CPUs
- arm64: Cluster using only arm64 CPUs
- amd64 + arm64: Cluster with a mix of x86/amd64 and arm64 CPUs
@@ -1,23 +1,22 @@
Atomic host bootstrap
=====================
# Atomic host bootstrap

Atomic host testing has been done with the network plugin flannel. Change the inventory var `kube_network_plugin: flannel`.

Note: Flannel is the only plugin that has currently been tested with atomic.

### Vagrant
## Vagrant

* For bootstrapping with Vagrant, use box centos/atomic-host or fedora/atomic-host
* Update VagrantFile variable `local_release_dir` to `/var/vagrant/temp`.
* Update `vm_memory = 2048` and `vm_cpus = 2`
* Networking on vagrant hosts has to be brought up manually once they are booted.

```
```ShellSession
vagrant ssh
sudo /sbin/ifup enp0s8
```

* For users of vagrant-libvirt download centos/atomic-host qcow2 format from https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from https://dl.fedoraproject.org/pub/alt/atomic/stable/
* For users of vagrant-libvirt download centos/atomic-host qcow2 format from <https://wiki.centos.org/SpecialInterestGroup/Atomic/Download/>
* For users of vagrant-libvirt download fedora/atomic-host qcow2 format from <https://dl.fedoraproject.org/pub/alt/atomic/stable/>

Then you can proceed to [cluster deployment](#run-deployment)
docs/aws.md
@@ -1,5 +1,4 @@
AWS
===============
# AWS

To deploy kubespray on [AWS](https://aws.amazon.com/) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'aws'`. Refer to the [Kubespray Configuration](#kubespray-configuration) for customizing the provider.
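In group_vars terms, that first step might look like the following (a minimal sketch; the variable name and value come straight from the sentence above):

```yml
# group_vars/all.yml: uncomment the option and set the provider
cloud_provider: aws
```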
@@ -13,11 +12,13 @@ The next step is to make sure the hostnames in your `inventory` file are identical

You can now create your cluster!

### Dynamic Inventory ###
## Dynamic Inventory

There is also a dynamic inventory script for AWS that can be used if desired. However, be aware that it makes certain assumptions about how you'll create your inventory. It also does not handle all use cases and groups that we may use as part of more advanced deployments. Additions welcome.

This will produce an inventory that is passed into Ansible that looks like the following:

```
```json
{
  "_meta": {
    "hostvars": {
@@ -48,15 +49,18 @@ This will produce an inventory that is passed into Ansible that looks like the following:
```

Guide:

- Create instances in AWS as needed.
- Either during or after creation, add tags to the instances with a key of `kubespray-role` and a value of `kube-master`, `etcd`, or `kube-node`. You can also share roles like `kube-master, etcd`
- Copy the `kubespray-aws-inventory.py` script from `kubespray/contrib/aws_inventory` to the `kubespray/inventory` directory.
- Set the following AWS credentials and info as environment variables in your terminal:

```
```ShellSession
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="yyyyy"
export REGION="us-east-2"
```

- We will now create our cluster. There will be either one or two small changes. The first is that we will specify `-i inventory/kubespray-aws-inventory.py` as our inventory script. The other is conditional. If your AWS instances are public facing, you can set the `VPC_VISIBILITY` variable to `public` and that will result in public IP and DNS names being passed into the inventory. This causes your cluster.yml command to look like `VPC_VISIBILITY="public" ansible-playbook ... cluster.yml` (see the sketch below).
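Putting that bullet together, the invocation might look like this. A sketch only: the `...` in the original stands for your usual flags, so everything besides the inventory script and `VPC_VISIBILITY` is illustrative:

```ShellSession
# Illustrative; substitute your usual ansible-playbook flags for --become
VPC_VISIBILITY="public" ansible-playbook -i inventory/kubespray-aws-inventory.py --become cluster.yml
```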

## Kubespray configuration
@@ -75,4 +79,3 @@ aws_kubernetes_cluster_id|string|KubernetesClusterID is the cluster id we'll use
aws_disable_security_group_ingress|bool|The aws provider creates an inbound rule per load balancer on the node security group. However, this can run into the AWS security group rule limit of 50 if many LoadBalancers are created. This flag disables the automatic ingress creation. It requires that the user has setup a rule that allows inbound traffic on kubelet ports from the local VPC subnet (so load balancers can access it). E.g. 10.82.0.0/16 30000-32000.
aws_elb_security_group|string|Only in Kubelet version >= 1.7 : AWS has a hard limit of 500 security groups. For large clusters creating a security group for each ELB can cause the max number of security groups to be reached. If this is set, instead of creating a new Security group for each ELB this security group will be used instead.
aws_disable_strict_zone_check|bool|During the instantiation of a new AWS cloud provider, the detected region is validated against a known set of regions. In a non-standard, AWS-like environment (e.g. Eucalyptus), this check may be undesirable. Setting this to true will disable the check and provide a warning that the check was skipped. Please note that this is an experimental feature and work-in-progress for the moment.
@@ -1,5 +1,4 @@
Azure
===============
# Azure

To deploy Kubernetes on [Azure](https://azure.microsoft.com) uncomment the `cloud_provider` option in `group_vars/all.yml` and set it to `'azure'`.
@@ -7,38 +6,43 @@ All your instances are required to run in a resource group and a routing table has to be attached

Not all features are supported yet though, for a list of the current status have a look [here](https://github.com/colemickens/azure-kubernetes-status)

### Parameters
## Parameters

Before creating the instances you must first set the `azure_` variables in the `group_vars/all.yml` file.

All of the values can be retrieved using the azure cli tool which can be downloaded here: https://docs.microsoft.com/en-gb/azure/xplat-cli-install
All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-gb/azure/xplat-cli-install>
After installation you have to run `azure login` to get access to your account.

#### azure\_tenant\_id + azure\_subscription\_id
### azure\_tenant\_id + azure\_subscription\_id

run `azure account show` to retrieve your subscription id and tenant id:
`azure_tenant_id` -> Tenant ID field
`azure_subscription_id` -> ID field

#### azure\_location
### azure\_location

The region your instances are located in; this can be something like `westeurope` or `westcentralus`. A full list of region names can be retrieved via `azure location list`

#### azure\_resource\_group
### azure\_resource\_group

The name of the resource group your instances are in, can be retrieved via `azure group list`

#### azure\_vnet\_name
### azure\_vnet\_name

The name of the virtual network your instances are in, can be retrieved via `azure network vnet list`

#### azure\_subnet\_name
### azure\_subnet\_name

The name of the subnet your instances are in, can be retrieved via `azure network vnet subnet list --resource-group RESOURCE_GROUP --vnet-name VNET_NAME`

#### azure\_security\_group\_name
### azure\_security\_group\_name

The name of the network security group your instances are in, can be retrieved via `azure network nsg list`

#### azure\_aad\_client\_id + azure\_aad\_client\_secret
### azure\_aad\_client\_id + azure\_aad\_client\_secret

These will have to be generated first:

- Create an Azure AD Application with:
  `azure ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
  display name, identifier-uri, homepage and the password can be chosen
@@ -51,24 +55,28 @@ This is the AppId from the last command

azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.

#### azure\_loadbalancer\_sku
### azure\_loadbalancer\_sku

SKU of the Load Balancer and Public IP. Candidate values are: basic and standard.

#### azure\_exclude\_master\_from\_standard\_lb
### azure\_exclude\_master\_from\_standard\_lb

azure\_exclude\_master\_from\_standard\_lb excludes master nodes from the `standard` load balancer.

#### azure\_disable\_outbound\_snat
### azure\_disable\_outbound\_snat

azure\_disable\_outbound\_snat disables the outbound SNAT for public load balancer rules. It should only be set when azure\_loadbalancer\_sku is `standard`.

#### azure\_primary\_availability\_set\_name
### azure\_primary\_availability\_set\_name

(Optional) The name of the availability set that should be used as the load balancer backend. If this is set, the Azure
cloudprovider will only add nodes from that availability set to the load balancer backend pool. If this is not set, and
multiple agent pools (availability sets) are used, then the cloudprovider will try to add all nodes to a single backend
pool which is forbidden. In other words, if you use multiple agent pools (availability sets), you MUST set this field.

#### azure\_use\_instance\_metadata
### azure\_use\_instance\_metadata

Use instance metadata service where possible

## Provisioning Azure with Resource Group Templates
@@ -1,82 +1,83 @@
Calico
===========
# Calico

N.B. **Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**

---
**N.B. Version 2.6.5 upgrade to 3.1.1 is upgrading etcd store to etcdv3**
If you create automated backups of etcdv2, please switch to creating etcdv3 backups, as kubernetes and calico now use etcdv3.
After migration you can check the `/tmp/calico_upgrade/` directory for items converted to etcdv3.
**PLEASE TEST the upgrade before upgrading a production cluster.**
---

Check if the calico-node container is running

```
```ShellSession
docker ps | grep calico
```

The **calicoctl** command allows you to check the status of the network workloads.

* Check the status of Calico nodes

```
```ShellSession
calicoctl node status
```

or for versions prior to *v1.0.0*:

```
```ShellSession
calicoctl status
```

* Show the configured network subnet for containers

```
```ShellSession
calicoctl get ippool -o wide
```

or for versions prior to *v1.0.0*:

```
```ShellSession
calicoctl pool show
```

* Show the workloads (IP addresses of containers and where they are located)

```
```ShellSession
calicoctl get workloadEndpoint -o wide
```

and

```
```ShellSession
calicoctl get hostEndpoint -o wide
```

or for versions prior to *v1.0.0*:

```
```ShellSession
calicoctl endpoint show --detail
```

##### Optional : Define network backend
## Configuration

### Optional : Define network backend

In some cases you may want to define Calico network backend. Allowed values are 'bird', 'gobgp' or 'none'. Bird is the default value.

To re-define you need to edit the inventory and add a group variable `calico_network_backend`

```
```yml
calico_network_backend: none
```

##### Optional : Define the default pool CIDR
### Optional : Define the default pool CIDR

By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`), it starts with the default IP Pool of which the IP range CIDR can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):

```
```yml
calico_pool_cidr: 10.233.64.0/20
```

##### Optional : BGP Peering with border routers
### Optional : BGP Peering with border routers

In some cases you may want to route the pods subnet and so NAT is not needed on the nodes.
For instance if you have a cluster spread over different locations and you want your pods to talk to each other no matter where they are located.
@@ -84,11 +85,11 @@ The following variables need to be set:
`peer_with_router` to enable the peering with the datacenter's border router (default value: false).
you'll need to edit the inventory and add a hostvar `local_as` per node.

```
```ShellSession
node1 ansible_ssh_host=95.54.0.12 local_as=xxxxxx
```

##### Optional : Defining BGP peers
### Optional : Defining BGP peers

Peers can be defined using the `peers` variable (see docs/calico_peer_example examples and the sketch below).
In order to define global peers, the `peers` variable can be defined in group_vars with the "scope" attribute of each global peer set to "global".
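A hypothetical group_vars entry in that shape; only the "scope" attribute is named by the text above, so the other keys are illustrative and docs/calico_peer_example remains the authoritative layout:

```yml
# Hypothetical global BGP peer definition
peers:
  - router_id: "10.99.0.34"
    as: "65xxx"
    scope: "global"
```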
@@ -97,16 +98,17 @@ NB: Ansible's `hash_behaviour` is by default set to "replace", thus defining both

Since calico 3.4, Calico supports advertising Kubernetes service cluster IPs over BGP, just as it advertises pod IPs.
This can be enabled by setting the following variable as follows in group_vars (k8s-cluster/k8s-net-calico.yml):

```
```yml
calico_advertise_cluster_ips: true
```

##### Optional : Define global AS number
### Optional : Define global AS number

Optional parameter `global_as_num` defines Calico global AS number (`/calico/bgp/v1/global/as_num` etcd key).
It defaults to "64512".

##### Optional : BGP Peering with route reflectors
### Optional : BGP Peering with route reflectors

At large scale you may want to disable full node-to-node mesh in order to
optimize your BGP topology and improve `calico-node` containers' start times.
@@ -114,8 +116,8 @@ optimize your BGP topology and improve `calico-node` containers' start times.
To do so you can deploy BGP route reflectors and peer `calico-node` with them as
recommended here:

* https://hub.docker.com/r/calico/routereflector/
* https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric
* <https://hub.docker.com/r/calico/routereflector/>
* <https://docs.projectcalico.org/v3.1/reference/private-cloud/l3-interconnect-fabric>

You need to edit your inventory and add:
@@ -127,7 +129,7 @@ You need to edit your inventory and add:

Here's an example of Kubespray inventory with standalone route reflectors:

```
```ini
[all]
rr0 ansible_ssh_host=10.210.1.10 ip=10.210.1.10
rr1 ansible_ssh_host=10.210.1.11 ip=10.210.1.11
@@ -177,35 +179,35 @@ The inventory above will deploy the following topology assuming that calico's

![Image](figures/kubespray-calico-rr.png?raw=true)

##### Optional : Define default endpoint to host action

By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see https://github.com/projectcalico/felix/issues/660 and https://github.com/projectcalico/calicoctl/issues/1389). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.
### Optional : Define default endpoint to host action

By default Calico blocks traffic from endpoints to the host itself by using an iptables DROP action. When using it in kubernetes the action has to be changed to RETURN (default in kubespray) or ACCEPT (see <https://github.com/projectcalico/felix/issues/660> and <https://github.com/projectcalico/calicoctl/issues/1389>). Otherwise all network packets from pods (with hostNetwork=False) to services endpoints (with hostNetwork=True) within the same node are dropped.

To re-define default action please set the following variable in your inventory:

```
```yml
calico_endpoint_to_host_action: "ACCEPT"
```

##### Optional : Define address on which Felix will respond to health requests
### Optional : Define address on which Felix will respond to health requests

Since Calico 3.2.0, HealthCheck default behavior changed from listening on all interfaces to just listening on localhost.

To re-define health host please set the following variable in your inventory:

```
```yml
calico_healthhost: "0.0.0.0"
```

Cloud providers configuration
=============================
## Cloud providers configuration

Please refer to the official documentation, for example [GCE configuration](http://docs.projectcalico.org/v1.5/getting-started/docker/installation/gce) requires a security rule for calico ip-ip tunnels. Note, calico is always configured with ``ipip: true`` if the cloud provider was defined.

##### Optional : Ignore kernel's RPF check setting
### Optional : Ignore kernel's RPF check setting

By default the felix agent (calico-node) will abort if the Kernel RPF setting is not 'strict'. If you want Calico to ignore the Kernel setting:

```
```yml
calico_node_ignorelooserpf: true
```
@@ -213,7 +215,7 @@ Note that in OpenStack you must allow `ipip` traffic in your security groups,
otherwise you will experience timeouts.
To do this you must add a rule which allows it, for example:

```
```ShellSession
neutron security-group-rule-create --protocol 4 --direction egress k8s-a0tp4t
neutron security-group-rule-create --protocol 4 --direction ingress k8s-a0tp4t
```
@@ -1,5 +1,4 @@
Cinder CSI Driver
===============
# Cinder CSI Driver

Cinder CSI driver allows you to provision volumes over an OpenStack deployment. The Kubernetes historic in-tree cloud provider is deprecated and will be removed in future versions.
@@ -15,11 +14,11 @@ If you want to deploy the cinder provisioner used with Cinder CSI Driver, you should

You can now run the kubespray playbook (cluster.yml) to deploy Kubernetes over OpenStack with Cinder CSI Driver enabled.

## Usage example ##
## Usage example

To check if Cinder CSI Driver works properly, see first that the cinder-csi pods are running:

```
```ShellSession
$ kubectl -n kube-system get pods | grep cinder
csi-cinder-controllerplugin-7f8bf99785-cpb5v   5/5   Running   0   100m
csi-cinder-nodeplugin-rm5x2                    2/2   Running   0   100m
@@ -27,7 +26,7 @@ csi-cinder-nodeplugin-rm5x2   2/2   Running   0   100m

Check the associated storage class (if you enabled persistent_volumes):

```
```ShellSession
$ kubectl get storageclass
NAME         PROVISIONER                AGE
cinder-csi   cinder.csi.openstack.org   100m
@@ -35,7 +34,7 @@ cinder-csi   cinder.csi.openstack.org   100m

You can run a PVC and an Nginx Pod using this file `nginx.yaml`:

```
```yml
---
apiVersion: v1
kind: PersistentVolumeClaim
@@ -75,7 +74,8 @@ spec:
Apply this conf to your cluster: ```kubectl apply -f nginx.yaml```

You should see the PVC provisioned and bound:

```
```ShellSession
$ kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-pvc-cinderplugin   Bound    pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb   1Gi        RWO            cinder-csi     8s
@@ -83,17 +83,20 @@ csi-pvc-cinderplugin   Bound   pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb   1Gi

And the volume mounted to the Nginx Pod (wait until the Pod is Running):

```
```ShellSession
kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/vdb        976M  2.6M  958M   1% /var/lib/www/html
```

## Compatibility with in-tree cloud provider ##
## Compatibility with in-tree cloud provider

It is not necessary to enable OpenStack as a cloud provider for Cinder CSI Driver to work.
Though, you can run both the in-tree openstack cloud provider and the Cinder CSI Driver at the same time. The storage class provisioners associated with each of them are named differently.

## Cinder v2 support ##
## Cinder v2 support

For the moment, only Cinder v3 is supported by the CSI Driver.

## More info ##
## More info

For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md).
@@ -1,13 +1,13 @@
Cloud providers
==============
# Cloud providers

#### Provisioning
## Provisioning

You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.

#### Deploy kubernetes
## Deploy kubernetes

With the ansible-playbook command:

```
```ShellSession
ansible-playbook -u smana -e ansible_ssh_user=admin -e cloud_provider=[aws|gce] -b --become-user=root -i inventory/single.cfg cluster.yml
```
@@ -1,5 +1,6 @@
Kubespray vs [Kops](https://github.com/kubernetes/kops)
---------------
# Comparison

## Kubespray vs [Kops](https://github.com/kubernetes/kops)

Kubespray runs on bare metal and most clouds, using Ansible as its substrate for
provisioning and orchestration. Kops performs the provisioning and orchestration
@@ -10,8 +11,7 @@ however, is more tightly integrated with the unique features of the clouds it
supports so it could be a better choice if you know that you will only be using
one platform for the foreseeable future.

Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)
------------------
## Kubespray vs [Kubeadm](https://github.com/kubernetes/kubeadm)

Kubeadm provides domain knowledge of Kubernetes clusters' life cycle
management, including self-hosted layouts, dynamic discovery services and so
@@ -1,5 +1,4 @@
Contiv
======
# Contiv

Here is the [Contiv documentation](http://contiv.github.io/documents/).
@@ -10,7 +9,6 @@ There are two ways to manage Contiv:
* a web UI managed by the api proxy service
* a CLI named `netctl`

### Interfaces

#### The Web Interface
@@ -27,7 +25,6 @@ contiv_generate_certificate: true

The default credentials to log in are: admin/admin.

#### The Command Line Interface

The second way to modify the Contiv configuration is to use the CLI. To do this, you have to connect to the server and export an environment variable to tell netctl how to connect to the cluster (see the sketch below):
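That export might look as follows. A sketch only: it assumes netctl's conventional `NETMASTER` variable and reuses the `contiv_netmaster_port: 9999` shown in the next hunk; the host address is illustrative:

```ShellSession
# Point netctl at the netmaster API (illustrative address, port from the config)
export NETMASTER=http://127.0.0.1:9999
```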
@@ -44,7 +41,6 @@ contiv_netmaster_port: 9999

The CLI doesn't use the authentication process needed by the web interface.

### Network configuration

The default configuration uses VXLAN to create an overlay. Two networks are created by default:
@@ -6,6 +6,7 @@ Example with Ansible:
Before running the cluster playbook you must satisfy the following requirements:

General CoreOS Pre-Installation Notes:

- Ensure that the bin_dir is set to `/opt/bin`
- ansible_python_interpreter should be `/opt/bin/python`. This will be laid down by the bootstrap task.
- The default resolvconf_mode setting of `docker_dns` **does not** work for CoreOS. This is because we do not edit the systemd service file for docker on CoreOS nodes. Instead, just use the `host_resolvconf` mode. It should work out of the box. (See the sketch after this list.)
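Taken together, those notes might translate into group_vars along these lines (a sketch; every value comes straight from the bullets above):

```yml
# Illustrative CoreOS group_vars reflecting the pre-installation notes
bin_dir: /opt/bin
ansible_python_interpreter: /opt/bin/python
resolvconf_mode: host_resolvconf
```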
@@ -1,5 +1,4 @@
CRI-O
===============
# CRI-O

[CRI-O] is a lightweight container runtime for Kubernetes.
Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.
@@ -10,14 +9,14 @@ Kubespray supports basic functionality for using CRI-O as the default container runtime in a cluster.

_To use CRI-O instead of Docker, set the following variables:_

#### all.yml
## all.yml

```yaml
download_container: false
skip_downloads: false
```

#### k8s-cluster.yml
## k8s-cluster.yml

```yaml
etcd_deployment_type: host
@@ -1,5 +1,4 @@
Debian Jessie
===============
# Debian Jessie

Debian Jessie installation Notes:
@@ -9,7 +8,7 @@ Debian Jessie installation Notes:

  to /etc/default/grub. Then update with

  ```
  ```ShellSession
  sudo update-grub
  sudo update-grub2
  sudo reboot
@@ -23,7 +22,7 @@ Debian Jessie installation Notes:

- Add the Ansible repository and install Ansible to get a proper version

  ```
  ```ShellSession
  sudo add-apt-repository ppa:ansible/ansible
  sudo apt-get update
  sudo apt-get install ansible
@@ -34,5 +33,4 @@ Debian Jessie installation Notes:

  ```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
@@ -1,5 +1,4 @@
K8s DNS stack by Kubespray
======================
# K8s DNS stack by Kubespray

For K8s cluster nodes, Kubespray configures a [Kubernetes DNS](http://kubernetes.io/docs/admin/dns/)
[cluster add-on](http://releases.k8s.io/master/cluster/addons/README.md)
@@ -9,19 +8,19 @@ to serve as an authoritative DNS server for a given ``dns_domain`` and its
Other nodes in the inventory, like external storage nodes or a separate etcd cluster
node group, are considered non-cluster, and DNS resolution for them is left up to the user to configure.

DNS variables
=============
## DNS variables

There are several global variables which can be used to modify DNS settings:

#### ndots
### ndots

ndots value to be used in ``/etc/resolv.conf``

It is important to note that multiple search domains combined with high ``ndots``
values lead to poor performance of the DNS stack, so please choose it wisely.

#### searchdomains
### searchdomains

Custom search domains to be added in addition to the cluster search domains (``default.svc.{{ dns_domain }}, svc.{{ dns_domain }}``).

Most Linux systems limit the total number of search domains to 6 and the total length of all search domains
@@ -30,57 +29,68 @@ to 256 characters. Depending on the length of ``dns_domain``, you're limited to
Please note that ``resolvconf_mode: docker_dns`` will automatically add your system's search domains as
additional search domains. Please take this into account for the limits.

#### nameservers
### nameservers

This variable is only used by ``resolvconf_mode: host_resolvconf``. These nameservers are added to the hosts
``/etc/resolv.conf`` *after* ``upstream_dns_servers`` and thus serve as backup nameservers. If this variable
is not set, a default resolver is chosen (depending on cloud provider or 8.8.8.8 when no cloud provider is specified).

#### upstream_dns_servers
### upstream_dns_servers

DNS servers to be added *after* the cluster DNS. Used by all ``resolvconf_mode`` modes. These serve as backup
DNS servers in early cluster deployment when no cluster DNS is available yet.
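For orientation, the four variables described above might be set together in group_vars like this (a sketch; every value below is illustrative, not a recommendation):

```yml
# Illustrative values only
ndots: 2
searchdomains:
  - corp.example.com
nameservers:
  - 8.8.8.8
upstream_dns_servers:
  - 10.0.0.2
```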
DNS modes supported by Kubespray
============================
## DNS modes supported by Kubespray

You can modify how Kubespray sets up DNS for your cluster with the variables ``dns_mode`` and ``resolvconf_mode``.

## dns_mode
### dns_mode

``dns_mode`` configures how Kubespray will set up cluster DNS. There are four modes available:

#### coredns (default)
#### dns_mode: coredns (default)

This installs CoreDNS as the default cluster DNS for all queries.

#### coredns_dual
#### dns_mode: coredns_dual

This installs CoreDNS as the default cluster DNS for all queries, plus a secondary CoreDNS stack.

#### manual
#### dns_mode: manual

This does not install coredns, but allows you to specify
`manual_dns_server`, which will be configured on nodes for handling Pod DNS.
Use this method if you plan to install your own DNS server in the cluster after
initial deployment.

#### none
#### dns_mode: none

This does not install any DNS solution at all. This basically disables cluster DNS completely and
leaves you with a non-functional cluster.

## resolvconf_mode

``resolvconf_mode`` configures how Kubespray will set up DNS for ``hostNetwork: true`` PODs and non-k8s containers.
There are three modes available:

#### docker_dns (default)
### resolvconf_mode: docker_dns (default)

This sets up the docker daemon with additional --dns/--dns-search/--dns-opt flags.

The following nameservers are added to the docker daemon (in the same order as listed here):

* cluster nameserver (depends on dns_mode)
* content of optional upstream_dns_servers variable
* host system nameservers (read from hosts /etc/resolv.conf)

The following search domains are added to the docker daemon (in the same order as listed here):

* cluster domains (``default.svc.{{ dns_domain }}``, ``svc.{{ dns_domain }}``)
* content of optional searchdomains variable
* host system search domains (read from hosts /etc/resolv.conf)

The following dns options are added to the docker daemon

* ndots:{{ ndots }}
* timeout:2
* attempts:2
@@ -96,7 +106,8 @@ DNS queries to the cluster DNS will timeout after a few seconds, resulting in the
used as a backup nameserver. After cluster DNS is running, all queries will be answered by the cluster DNS
servers, which in turn will forward queries to the system nameserver if required.

#### host_resolvconf
#### resolvconf_mode: host_resolvconf

This activates the classic Kubespray behavior that modifies the hosts ``/etc/resolv.conf`` file and dhclient
configuration to point to the cluster dns server (either coredns or coredns_dual, depending on dns_mode).
@@ -108,21 +119,21 @@ the other nameservers as backups.
Also note, existing records will be purged from the `/etc/resolv.conf`,
including resolvconf's base/head/cloud-init config files and those that come from dhclient.

#### none
#### resolvconf_mode: none

Does nothing regarding ``/etc/resolv.conf``. This leaves you with a cluster that works as expected in most cases.
The only exception is that ``hostNetwork: true`` PODs and non-k8s managed containers will not be able to resolve
cluster service names.

## Nodelocal DNS cache

Setting ``enable_nodelocaldns`` to ``true`` will make pods reach out to the dns (core-dns) caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query core-dns (depending on what main DNS plugin is configured in your cluster) for cache misses of cluster hostnames (cluster.local suffix by default).
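As a group_vars sketch (the variable name comes from the paragraph above; per the note below, this is already the default since the 2.10 release):

```yml
enable_nodelocaldns: true
```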
More information on the rationale behind this implementation can be found [here](https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md).

**As per the 2.10 release, Nodelocal DNS cache is enabled by default.**

Limitations
-----------
## Limitations

* Kubespray does not yet have a way to configure the Kubedns addon to forward requests that SkyDns cannot
  answer with authority to arbitrary recursive resolvers. This task is left
@@ -1,5 +1,4 @@
Downloading binaries and containers
===================================
# Downloading binaries and containers

Kubespray supports several download/upload modes. The default is:
@@ -30,11 +29,13 @@ Container images may be defined by their repo and tag, for example:

Note, the SHA256 digest and the image tag must both be specified and correspond
to each other. The given example above is represented by the following vars:

```yaml
dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffdc09168193
dnsmasq_image_repo: andyshinn/dnsmasq
dnsmasq_image_tag: '2.72'
```

The full list of available vars may be found in the download role's ansible defaults. Those also allow specifying custom urls and local repositories for binaries and container
images as well. See also the DNS stack docs for the related intranet configuration,
so the hosts can resolve those urls and repos.
@@ -1,9 +1,8 @@
Flannel
==============
# Flannel

* Flannel configuration file should have been created there

```
```ShellSession
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.233.0.0/18
FLANNEL_SUBNET=10.233.16.1/24
@@ -13,7 +12,7 @@ FLANNEL_IPMASQ=false

* Check if the network interface has been created

```
```ShellSession
ip a show dev flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether e2:f3:a7:0f:bf:cb brd ff:ff:ff:ff:ff:ff
@@ -25,7 +24,7 @@ ip a show dev flannel.1

* Try to run a container and check its ip address

```
```ShellSession
kubectl run test --image=busybox --command -- tail -f /dev/null
replicationcontroller "test" created
@@ -33,7 +32,7 @@ kubectl describe po test-34ozs | grep ^IP
IP: 10.233.16.2
```

```
```ShellSession
kubectl exec test-34ozs -- ip a show dev eth0
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 02:42:0a:e9:2b:03 brd ff:ff:ff:ff:ff:ff
@@ -1,8 +1,6 @@
Getting started
===============
# Getting started

Building your own inventory
---------------------------
## Building your own inventory

Ansible inventory can be stored in 3 formats: YAML, JSON, or INI-like. There is
an example inventory located
|
|||
|
||||
Example inventory generator usage:
|
||||
|
||||
```ShellSession
|
||||
cp -r inventory/sample inventory/mycluster
|
||||
declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
|
||||
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
|
||||
```
|
||||
|
||||
Then use `inventory/mycluster/hosts.yml` as inventory file.
|
||||
|
||||
Starting custom deployment
|
||||
--------------------------
|
||||
## Starting custom deployment
|
||||
|
||||
Once you have an inventory, you may want to customize deployment data vars
|
||||
and start the deployment:
|
||||
|
||||
**IMPORTANT**: Edit my\_inventory/groups\_vars/\*.yaml to override data vars:
|
||||
|
||||
```ShellSession
|
||||
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml -b -v \
|
||||
--private-key=~/.ssh/private_key
|
||||
```
|
||||
|
||||
See more details in the [ansible guide](ansible.md).
|
||||
|
||||
Adding nodes
|
||||
------------
|
||||
### Adding nodes
|
||||
|
||||
You may want to add worker, master or etcd nodes to your existing cluster. This can be done by re-running the `cluster.yml` playbook, or you can target the bare minimum needed to get kubelet installed on the worker and talking to your masters. This is especially helpful when doing something like autoscaling your clusters.
|
||||
|
||||
- Add the new worker node to your inventory in the appropriate group (or utilize a [dynamic inventory](https://docs.ansible.com/ansible/intro_dynamic_inventory.html)).
|
||||
- Run the ansible-playbook command, substituting `cluster.yml` for `scale.yml`:
|
||||
|
||||
```ShellSession
|
||||
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml -b -v \
|
||||
--private-key=~/.ssh/private_key
|
||||
```
|
||||
|
||||
Remove nodes
|
||||
------------
|
||||
### Remove nodes
|
||||
|
||||
You may want to remove **master**, **worker**, or **etcd** nodes from your
|
||||
existing cluster. This can be done by re-running the `remove-node.yml`
|
||||
|
@@ -61,7 +62,8 @@ when doing something like autoscaling your clusters. Of course, if a node
is not working, you can remove the node and install it again.

Use `--extra-vars "node=<nodename>,<nodename2>"` to select the node(s) you want to delete.

```
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yml remove-node.yml -b -v \
  --private-key=~/.ssh/private_key \
  --extra-vars "node=nodename,nodename2"
@@ -72,8 +74,7 @@ to skip the node reset step. If one node is unavailable, but others you wish
to remove are able to connect via SSH, you could set reset_nodes=no as a host
var in inventory (see the sketch below).

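That host var might be attached in an INI inventory like this (a sketch; the node name and address are illustrative):

```ini
# Unreachable node: skip the reset step when removing it
badnode ansible_host=10.0.0.5 reset_nodes=no
```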
Connecting to Kubernetes
------------------------
## Connecting to Kubernetes

By default, Kubespray configures kube-master hosts with insecure access to
kube-apiserver via port 8080. A kubeconfig file is not necessary in this case,
|
|||
For more information on kubeconfig and accessing a Kubernetes cluster, refer to
|
||||
the Kubernetes [documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
|
||||
|
||||
Accessing Kubernetes Dashboard
|
||||
------------------------------
|
||||
## Accessing Kubernetes Dashboard
|
||||
|
||||
As of kubernetes-dashboard v1.7.x:
|
||||
|
||||
|
@ -113,8 +113,7 @@ Or you can run 'kubectl proxy' from your local machine to access dashboard in yo
|
|||
|
||||
It is recommended to access dashboard from behind a gateway (like Ingress Controller) that enforces an authentication token. Details and other access options here: <https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above>
|
||||
|
||||
Accessing Kubernetes API
|
||||
------------------------
|
||||
## Accessing Kubernetes API
|
||||
|
||||
The main client of Kubernetes is `kubectl`. It is installed on each kube-master
|
||||
host and can optionally be configured on your ansible host by setting
|
||||
|
@ -125,7 +124,9 @@ host and can optionally be configured on your ansible host by setting
|
|||
|
||||
You can see a list of nodes by running the following commands:
|
||||
|
||||
```ShellSession
|
||||
cd inventory/mycluster/artifacts
|
||||
./kubectl.sh get nodes
|
||||
```
|
||||
|
||||
If desired, copy admin.conf to ~/.kube/config.
|
||||
|
|
|
@@ -1,19 +1,18 @@
HA endpoints for K8s
====================
# HA endpoints for K8s

The following components require highly available endpoints:

* etcd cluster,
* kube-apiserver service instances.

The latter relies on a third-party reverse proxy, like Nginx or HAProxy, to
achieve the same goal.

Etcd
----
## Etcd

The etcd clients (kube-api-masters) are configured with the list of all etcd peers. If the etcd cluster has multiple instances, it's configured in HA already.

Kube-apiserver
--------------
## Kube-apiserver

K8s components require a loadbalancer to access the apiservers via a reverse
proxy. Kubespray includes support for an nginx-based proxy that resides on each
@@ -50,7 +49,8 @@ provides access for external clients, while the internal LB accepts client
connections only to the localhost.
Given a frontend `VIP` address and `IP1, IP2` addresses of backends, here is
an example configuration for a HAProxy service acting as an external LB:

```
```raw
listen kubernetes-apiserver-https
  bind <VIP>:8383
  option ssl-hello-chk
@@ -66,7 +66,8 @@ listen kubernetes-apiserver-https

And the corresponding example global vars for such a "cluster-aware"
external LB with the cluster API access modes configured in Kubespray:

```
```yml
apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
loadbalancer_apiserver:
  address: <VIP>
@@ -102,13 +103,14 @@ exclusive to `loadbalancer_apiserver_localhost`.
Access API endpoints are evaluated automatically, as follows:

| Endpoint type                | kube-master    | non-master          | external            |
|------------------------------|----------------|---------------------|---------------------|
| Local LB (default)           | https://bip:sp | https://lc:nsp      | https://m[0].aip:sp |
| Local LB + Unmanaged here LB | https://bip:sp | https://lc:nsp      | https://ext         |
| External LB, no internal     | https://bip:sp | https://lb:lp       | https://lb:lp       |
| No ext/int LB                | https://bip:sp | https://m[0].aip:sp | https://m[0].aip:sp |
|------------------------------|------------------|-----------------------|-----------------------|
| Local LB (default)           | `https://bip:sp` | `https://lc:nsp`      | `https://m[0].aip:sp` |
| Local LB + Unmanaged here LB | `https://bip:sp` | `https://lc:nsp`      | `https://ext`         |
| External LB, no internal     | `https://bip:sp` | `https://lb:lp`       | `https://lb:lp`       |
| No ext/int LB                | `https://bip:sp` | `https://m[0].aip:sp` | `https://m[0].aip:sp` |

Where:

* `m[0]` - the first node in the `kube-master` group;
* `lb` - LB FQDN, `apiserver_loadbalancer_domain_name`;
* `ext` - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
@@ -132,16 +134,19 @@ Kubespray, the masters' APIs are accessed via the insecure endpoint, which
consists of the local `kube_apiserver_insecure_bind_address` and
`kube_apiserver_insecure_port`.

Optional configurations
------------------------
## Optional configurations

### ETCD with a LB

In order to use an external loadbalancer (L4/TCP or L7 w/ SSL Passthrough VIP), the following variables need to be overridden in group_vars:

* `etcd_access_addresses`
* `etcd_client_url`
* `etcd_cert_alt_names`
* `etcd_cert_alt_ips`

#### Example of a VIP w/ FQDN

```yaml
etcd_access_addresses: https://etcd.example.com:2379
etcd_client_url: https://etcd.example.com:2379
@@ -8,7 +8,8 @@
2. Add the **forked repo** as a submodule to the desired folder in your existing ansible repo (for example 3d/kubespray):
  ```git submodule add https://github.com/YOUR_GITHUB/kubespray.git kubespray```
  Git will create a _.gitmodules_ file in your existing ansible repo:

  ```
  ```ini
  [submodule "3d/kubespray"]
    path = 3d/kubespray
    url = https://github.com/YOUR_GITHUB/kubespray.git
@@ -21,7 +22,8 @@
  ```git remote add upstream https://github.com/kubernetes-sigs/kubespray.git```

5. Sync your master branch with upstream:

  ```
  ```ShellSession
  git checkout master
  git fetch upstream
  git merge upstream/master
@ -33,7 +35,8 @@
|
|||
***Never*** use master branch of your repository for your commits.
|
||||
|
||||
7. Modify path to library and roles in your ansible.cfg file (role naming should be uniq, you may have to rename your existent roles if they have same names as kubespray project):
|
||||
```
|
||||
|
||||
```ini
|
||||
...
|
||||
library = 3d/kubespray/library/
|
||||
roles_path = 3d/kubespray/roles/
|
||||
|
@ -45,7 +48,8 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
|
|||
|
||||
9. Modify your ansible inventory file by adding mapping of your existent groups (if any) to kubespray naming.
|
||||
For example:
|
||||
```
|
||||
|
||||
```ini
|
||||
...
|
||||
#Kargo groups:
|
||||
[kube-node:children]
|
||||
|
@ -65,54 +69,62 @@ You could rename *all.yml* config to something else, i.e. *kubespray.yml* and cr
|
|||
[kubespray:children]
|
||||
kubernetes
|
||||
```
|
||||
|
||||
* Last entry here needed to apply kubespray.yml config file, renamed from all.yml of kubespray project.
|
||||
|
||||
10. Now you can include kubespray tasks in you existent playbooks by including cluster.yml file:
|
||||
```
|
||||
|
||||
```yml
|
||||
- name: Include kubespray tasks
|
||||
include: 3d/kubespray/cluster.yml
|
||||
```
|
||||
|
||||
Or your could copy separate tasks from cluster.yml into your ansible repository.
|
||||
|
||||
11. Commit changes to your ansible repo. Keep in mind, that submodule folder is just a link to the git commit hash of your forked repo.
|
||||
When you update your "work" branch you need to commit changes to ansible repo as well.
|
||||
Other members of your team should use ```git submodule sync```, ```git submodule update --init``` to get actual code from submodule.
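
For example, after pulling a change that bumps the submodule pointer, a teammate would run:

```ShellSession
# refresh submodule URLs, then check out the recorded commit
git submodule sync
git submodule update --init
```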

# Contributing
## Contributing

If you made useful changes or fixed a bug in the existing kubespray repo, use this flow for PRs to the original kubespray repo.

0. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).
1. Sign the [CNCF CLA](https://git.k8s.io/community/CLA.md).

1. Change working directory to git submodule directory (3d/kubespray).
2. Change working directory to the git submodule directory (3d/kubespray).

2. Setup desired user.name and user.email for submodule.
3. Setup the desired user.name and user.email for the submodule.
If kubespray is the only submodule in your repo you could use something like:
```git submodule foreach --recursive 'git config user.name "First Last" && git config user.email "your-email-address@used.for.cncf"'```

3. Sync with upstream master:
```
4. Sync with upstream master:

```ShellSession
git fetch upstream
git merge upstream/master
git push origin master
```
4. Create new branch for the specific fixes that you want to contribute:

5. Create a new branch for the specific fixes that you want to contribute:
```git checkout -b fixes-name-date-index```
The branch name should be self-explanatory to you; adding a date and/or index will help you track/delete your old PRs.

5. Find git hash of your commit in "work" repo and apply it to newly created "fix" repo:
```
6. Find the git hash of your commit in the "work" repo and apply it to the newly created "fix" branch:

```ShellSession
git cherry-pick <COMMIT_HASH>
```
6. If your have several temporary-stage commits - squash them using [```git rebase -i```](http://eli.thegreenplace.net/2014/02/19/squashing-github-pull-requests-into-a-single-commit)

7. If you have several temporary-stage commits, squash them using [```git rebase -i```](http://eli.thegreenplace.net/2014/02/19/squashing-github-pull-requests-into-a-single-commit)
You could also use interactive rebase (```git rebase -i HEAD~10```) to delete commits which you don't want to contribute to the original repo.

7. When your changes is in place, you need to check upstream repo one more time because it could be changed during your work.
8. When your changes are in place, check the upstream repo one more time because it could have changed during your work.
Check that you're on the correct branch:
```git status```
And pull changes from upstream (if any):
```git pull --rebase upstream master```

8. Now push your changes to your **fork** repo with ```git push```. If your branch doesn't exists on github, git will propose you to use something like ```git push --set-upstream origin fixes-name-date-index```.
9. Now push your changes to your **fork** repo with ```git push```. If your branch doesn't exist on github, git will propose something like ```git push --set-upstream origin fixes-name-date-index```.

9. Open you forked repo in browser, on the main page you will see proposition to create pull request for your newly created branch. Check proposed diff of your PR. If something is wrong you could safely delete "fix" branch on github using ```git push origin --delete fixes-name-date-index```, ```git branch -D fixes-name-date-index``` and start whole process from the beginning.
10. Open your forked repo in a browser; on the main page you will see a proposal to create a pull request for your newly created branch. Check the proposed diff of your PR. If something is wrong you can safely delete the "fix" branch on github using ```git push origin --delete fixes-name-date-index``` and ```git branch -D fixes-name-date-index```, and start the whole process from the beginning.
If everything is fine, add a description of your changes (what they do and why they're needed) and confirm pull request creation.

@ -1,5 +1,5 @@

Kube-OVN
===========
# Kube-OVN

Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises.

For more information please check [Kube-OVN documentation](https://github.com/alauda/kube-ovn)

@ -7,7 +7,8 @@ For more information please check [Kube-OVN documentation](https://github.com/al

## How to use it

Enable kube-ovn in `group_vars/k8s-cluster/k8s-cluster.yml`

```
```yml
...
kube_network_plugin: kube-ovn
...

@ -19,7 +20,7 @@ Kube-OVN run ovn and controller in `kube-ovn` namespace

* Check the status of kube-ovn pods

```
```ShellSession
# From the CLI
kubectl get pod -n kube-ovn

@ -37,7 +38,7 @@ ovs-ovn-r5frh 1/1 Running 0 4d16h

* Check the default and node subnet

```
```ShellSession
# From the CLI
kubectl get subnet

@ -1,5 +1,4 @@

Kube-router
===========
# Kube-router

Kube-router is an L3 CNI provider; as such, it will set up IPv4 routing between
nodes to provide Pods' network reachability.

@ -12,7 +11,7 @@ Kube-router runs its pods as a `DaemonSet` in the `kube-system` namespace:

* Check the status of kube-router pods

```
```ShellSession
# From the CLI
kubectl get pod --namespace=kube-system -l k8s-app=kube-router -owide

@ -29,7 +28,7 @@ kube-router-x2xs7 1/1 Running 0 2d 192.168.186.10 my

* Peek at kube-router container logs:

```
```ShellSession
# From the CLI
kubectl logs --namespace=kube-system -l k8s-app=kube-router | grep Peer.Up

@ -56,24 +55,24 @@ You need to `kubectl exec -it ...` into a kube-router container to use these, se

## Kube-router configuration

You can change the default configuration by overriding `kube_router_...` variables
(as found at `roles/network_plugin/kube-router/defaults/main.yml`);
these are named to follow the `kube-router` command-line options as per
<https://www.kube-router.io/docs/user-guide/#try-kube-router-with-cluster-installers>.
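
As a sketch, such an override in group_vars might look like this (the variable names below are assumptions derived from the naming rule above; verify them against the defaults file before use):

```yml
# hypothetical override file, e.g. group_vars/k8s-cluster/k8s-net-kube-router.yml
# variable names assumed to mirror kube-router's CLI flags
kube_router_run_service_proxy: false
kube_router_advertise_cluster_ip: true
```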

## Advanced BGP Capabilities

https://github.com/cloudnativelabs/kube-router#advanced-bgp-capabilities

<https://github.com/cloudnativelabs/kube-router#advanced-bgp-capabilities>

If you have other networking devices or SDN systems that talk BGP, kube-router will fit in perfectly.
From a simple full node-to-node mesh to per-node peering configurations, most routing needs can be attained.
The configuration is Kubernetes native (annotations) just like the rest of kube-router.

For more details please refer to the https://github.com/cloudnativelabs/kube-router/blob/master/docs/bgp.md.
For more details please refer to <https://github.com/cloudnativelabs/kube-router/blob/master/docs/bgp.md>.

The following options set up annotations for kube-router, using the `kubectl annotate` command.

```
```yml
kube_router_annotations_master: []
kube_router_annotations_node: []
kube_router_annotations_all: []

@ -26,7 +26,7 @@ By default the normal behavior looks like:

> etcd in 6-7 seconds or even longer when etcd cannot commit data to quorum
> nodes.

# Failure
## Failure

Kubelet will try to make `nodeStatusUpdateRetry` post attempts. Currently
`nodeStatusUpdateRetry` is constantly set to 5 in

@ -50,7 +50,7 @@ Kube proxy has a watcher over API. Once pods are evicted, Kube proxy will
notice and will update iptables of the node. It will remove endpoints from
services so pods from the failed node won't be accessible anymore.
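
How quickly that happens can be tuned from group_vars. A minimal sketch, assuming the `kubelet_status_update_frequency` and `kube_controller_*` variables Kubespray exposes for these settings (names and values are assumptions; check your defaults before copying):

```yml
# assumed variable names; values mirror a "fast reaction" profile
kubelet_status_update_frequency: 4s
kube_controller_node_monitor_period: 2s
kube_controller_node_monitor_grace_period: 16s
kube_controller_pod_eviction_timeout: 30s
```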

# Recommendations for different cases
## Recommendations for different cases

## Fast Update and Fast Reaction

@ -1,20 +1,18 @@

Macvlan
===============

How to use it :
-------------
# Macvlan

## How to use it

* Enable macvlan in `group_vars/k8s-cluster/k8s-cluster.yml`

```
```yml
...
kube_network_plugin: macvlan
...
```

* Adjust the `macvlan_interface` in `group_vars/k8s-cluster/k8s-net-macvlan.yml` or by host in the `host.yml` file:

```
```yml
all:
hosts:
node1:

@ -24,25 +22,20 @@ all:

macvlan_interface: ens5
```

## Issue encountered

Issue encountered :
-------------

- Service DNS
* Service DNS

reply from unexpected source:

add `kube_proxy_masquerade_all: true` in `group_vars/all/all.yml`
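
That is, a one-line sketch of the override named above:

```yml
# group_vars/all/all.yml
kube_proxy_masquerade_all: true
```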

- Disable nodelocaldns
* Disable nodelocaldns

The nodelocal dns IP is not reachable.

Disable it in `sample/group_vars/k8s-cluster/k8s-cluster.yml`

```
```yml
enable_nodelocaldns: false
```

@ -1,5 +1,4 @@

Multus
===========
# Multus

Multus is a meta CNI plugin that provides multiple network interface support to
pods. For each interface, Multus delegates CNI calls to secondary CNI plugins

@ -10,17 +9,19 @@ See [multus documentation](https://github.com/intel/multus-cni).

## Multus installation

Since Multus itself does not implement networking, it requires a master plugin, which is specified through the variable `kube_network_plugin`. To enable Multus, an additional variable `kube_network_plugin_multus` must be set to `true`. For example,

```
```yml
kube_network_plugin: calico
kube_network_plugin_multus: true
```

will install Multus and Calico and configure Multus to use Calico as the primary network plugin.

## Using Multus

Once Multus is installed, you can create CNI configurations (as CRD objects) for additional networks; in this case a macvlan CNI configuration is defined. You may replace the config field with any valid CNI configuration where the CNI binary is available on the nodes.

```
```ShellSession
cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition

@ -48,7 +49,7 @@ EOF

You may then create a pod with an additional interface that connects to this network using annotations. The annotation correlates to the name in the NetworkAttachmentDefinition above.

```
```ShellSession
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod

@ -66,8 +67,8 @@ EOF

You may now inspect the pod and see that there is an additional interface configured:

```
$ kubectl exec -it samplepod -- ip a
```ShellSession
kubectl exec -it samplepod -- ip a
```

For more details on how to use Multus, please visit https://github.com/intel/multus-cni
For more details on how to use Multus, please visit <https://github.com/intel/multus-cni>

@ -1,5 +1,4 @@

Network Checker Application
===========================
# Network Checker Application

With the ``deploy_netchecker`` var enabled (defaults to false), Kubespray deploys a
Network Checker Application from the third-party `l23network/k8s-netchecker` docker

@ -14,14 +13,17 @@ logs.

To get the most recent and cluster-wide network connectivity report, run from
any of the cluster nodes:

```
```ShellSession
curl http://localhost:31081/api/v1/connectivity_check
```

Note that Kubespray does not invoke the check but only deploys the application, if
requested.

There are related application specific variables:

```
```yml
netchecker_port: 31081
agent_report_interval: 15
netcheck_namespace: default

@ -33,7 +35,7 @@ combination of the ``netcheck_namespace.dns_domain`` vars, for example the

to the non default namespace, make sure as well to adjust the ``searchdomains`` var
so that the resulting search domain records contain that namespace, like:

```
```yml
search: foospace.cluster.local default.cluster.local ...
nameserver: ...
```

@ -1,11 +1,10 @@

openSUSE Leap 15.0 and Tumbleweed
===============
# openSUSE Leap 15.0 and Tumbleweed

openSUSE Leap installation Notes:

- Install Ansible

```
```ShellSession
sudo zypper ref
sudo zypper -n install ansible

@ -15,5 +14,4 @@ openSUSE Leap installation Notes:

```sudo zypper -n install python-Jinja2 python-netaddr```

Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)

@ -1,5 +1,4 @@

Packet
===============
# Packet

Kubespray provides support for bare metal deployments using the [Packet bare metal cloud](http://www.packet.com).
Deploying upon bare metal allows Kubernetes to run at locations where an existing public or private cloud might not exist such

@ -37,6 +36,7 @@ Terraform is required to deploy the bare metal infrastructure. The steps below a

[More terraform installation options are available.](https://learn.hashicorp.com/terraform/getting-started/install.html)

Grab the latest version of Terraform and install it.

```bash
echo "https://releases.hashicorp.com/terraform/$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')/terraform_$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M '.current_version')_darwin_amd64.zip"
sudo yum install unzip

@ -69,6 +69,7 @@ for Packet need to be defined. To find these values see [Packet API Integration]

```bash
vi cluster.tfvars
```

* cluster_name = alpha
* packet_project_id = ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
* public_key_path = 12345678-90AB-CDEF-GHIJ-KLMNOPQRSTUV

@ -94,4 +95,3 @@ With the bare metal infrastructure deployed, Kubespray can now install Kubernete

```bash
ansible-playbook --become -i inventory/alpha/hosts cluster.yml
```

@ -1,6 +1,5 @@

Recovering the control plane
============================
# Recovering the control plane

To recover from broken nodes in the control plane use the "recover\-control\-plane.yml" playbook.
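
A hypothetical invocation (a sketch only; the flags and inventory path are assumed from the other examples in these docs, not prescribed by the playbook itself):

```ShellSession
ansible-playbook -b -i inventory/mycluster/hosts.ini recover-control-plane.yml
```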

@ -1,14 +1,15 @@

Kubespray's roadmap
=================
# Kubespray's roadmap

## Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)

### Self deployment (pull-mode) [#320](https://github.com/kubespray/kubespray/issues/320)
- the playbook would install and configure docker and the etcd cluster
- the following data would be inserted into etcd: certs, tokens, users, inventory, group_vars.
- a "kubespray" container would be deployed (kubespray-cli, ansible-playbook)
- to be discussed, a way to provide the inventory
- **self deployment** of the node from inside a container [#321](https://github.com/kubespray/kubespray/issues/321)

### Provisioning and cloud providers
## Provisioning and cloud providers

- [ ] Terraform to provision instances on:
- [ ] GCE
- [x] AWS (contrib/terraform/aws)

@ -20,35 +21,39 @@ Kubespray's roadmap

- [ ] On Azure autoscaling, create loadbalancer [#297](https://github.com/kubespray/kubespray/issues/297)
- [ ] On GCE be able to create a loadbalancer automatically (IAM ?) [#280](https://github.com/kubespray/kubespray/issues/280)
- [x] **TLS bootstrap** support for kubelet (covered by kubeadm, but not in standard deployment) [#234](https://github.com/kubespray/kubespray/issues/234)
(related issues: https://github.com/kubernetes/kubernetes/pull/20439 <br>
https://github.com/kubernetes/kubernetes/issues/18112)
(related issues: <https://github.com/kubernetes/kubernetes/pull/20439> <https://github.com/kubernetes/kubernetes/issues/18112>)

## Tests

### Tests
- [x] Run kubernetes e2e tests
- [ ] Test idempotency on single OS but for all network plugins/container engines
- [ ] single test on AWS per day
- [ ] test scale up cluster: +1 etcd, +1 master, +1 node
- [x] Reorganize CI test vars into group var files

### Lifecycle
## Lifecycle

- [ ] Upgrade granularity: select components to upgrade and skip others

### Networking
## Networking

- [ ] Opencontrail
- [ ] Consolidate roles/network_plugin and roles/kubernetes-apps/network_plugin

### Kubespray API
## Kubespray API

- Perform all actions through an **API**
- Store inventories / configurations of multiple clusters
- Make sure that the state of the cluster is completely saved in no more than one config file beyond the hosts inventory

### Addons (helm or native ansible)
## Addons (helm or native ansible)

- [x] Helm
- [x] Ingress-nginx
- [x] kubernetes-dashboard

## Others

### Others
- Organize and update documentation (split in categories)
- Refactor downloads so it all runs in the beginning of deployment
- Make bootstrapping OS more consistent

@ -1,9 +1,7 @@

Node Layouts
------------
# Node Layouts

There are four node layout types: `default`, `separate`, `ha`, and `scale`.

`default` is a non-HA two-node setup with one separate `kube-node`
and the `etcd` group merged with the `kube-master`.

@ -20,8 +18,7 @@ never actually deployed, but certificates are generated for them.

Note: the canal network plugin deploys flannel as well, plus the calico policy controller.

GCE instances
-------------
## GCE instances

| Stage | Network plugin | OS type | GCE region | Nodes layout |
|--------------------|--------------------|--------------------|--------------------|--------------------|

@ -1,7 +1,4 @@

Upgrading Kubernetes in Kubespray
=============================

#### Description
# Upgrading Kubernetes in Kubespray

Kubespray handles upgrades the same way it handles initial deployment. That is to
say that each component is laid down in a fixed order.

@ -22,22 +19,22 @@ versions. Here are all version vars for each component:

See [Multiple Upgrades](#multiple-upgrades) for how to upgrade from older releases to the latest release

#### Unsafe upgrade example
## Unsafe upgrade example

If you wanted to upgrade just kube_version from v1.4.3 to v1.4.6, you could
deploy the following way:

```
```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.3
```

And then repeat with v1.4.6 as kube_version:

```
```ShellSession
ansible-playbook cluster.yml -i inventory/sample/hosts.ini -e kube_version=v1.4.6
```

#### Graceful upgrade
## Graceful upgrade

Kubespray also supports cordon, drain and uncordoning of nodes when performing
a cluster upgrade. There is a separate playbook used for this purpose. It is

@ -45,19 +42,19 @@ important to note that upgrade-cluster.yml can only be used for upgrading an
existing cluster. That means there must be at least 1 kube-master already
deployed.

```
```ShellSession
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=v1.6.0
```

After a successful upgrade, the Server Version should be updated:

```
```ShellSession
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T19:15:41Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0+coreos.0", GitCommit:"8031716957d697332f9234ddf85febb07ac6c3e3", GitTreeState:"clean", BuildDate:"2017-03-29T04:33:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
```

#### Multiple upgrades
## Multiple upgrades

:warning: [Do not skip releases when upgrading--upgrade by one tag at a time.](https://github.com/kubernetes-sigs/kubespray/issues/3849#issuecomment-451386515) :warning:

@ -71,7 +68,7 @@ Assuming you don't explicitly define a kubernetes version in your k8s-cluster.ym

The below example shows taking a cluster that was set up for v2.6.0 up to v2.10.0

```
```ShellSession
$ kubectl get node
NAME STATUS ROLES AGE VERSION
apollo Ready master,node 1h v1.10.4

@ -96,7 +93,7 @@ HEAD is now at 05dabb7e Fix Bionic networking restart error #3430 (#3431)

# NOTE: May need to sudo pip3 install -r requirements.txt when upgrading.

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml

...

@ -117,7 +114,7 @@ Some deprecations between versions that mean you can't just upgrade straight fro

In this case, I set "kubeadm_enabled" to false, knowing that it is deprecated and removed by 2.9.0, to delay converting the cluster to kubeadm as long as I could.
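
That is, the cluster's group_vars carried the override named above (a one-line sketch):

```yml
kubeadm_enabled: false
```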

```
```ShellSession
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
...
"msg": "DEPRECATION: non-kubeadm deployment is deprecated from v2.9. Will be removed in next release."

@ -233,8 +230,8 @@ If you do not keep your inventory copy up to date, **your upgrade will fail** an

It is at this point the cluster was upgraded from non-kubeadm to kubeadm as per the deprecation warning.

```
$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
```ShellSession
ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml

...

@ -259,7 +256,7 @@ $ git checkout v2.10.0

Previous HEAD position was a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
HEAD is now at dcd9c950 Add etcd role dependency on kube user to avoid etcd role failure when running scale.yml with a fresh node. (#3240) (#4479)

$ ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml
ansible-playbook -i inventory/mycluster/hosts.ini -b upgrade-cluster.yml

...

@ -272,8 +269,7 @@ caprica Ready master,node 7h40m v1.14.1

```

#### Upgrade order
## Upgrade order

As mentioned above, components are upgraded in the order in which they were
installed in the Ansible playbook. The order of component installation is as

@ -286,7 +282,7 @@ follows:

* kube-apiserver, kube-scheduler, and kube-controller-manager
* Add-ons (such as KubeDNS)

#### Upgrade considerations
## Upgrade considerations

Kubespray supports rotating certificates used for etcd and Kubernetes
components, but some manual steps may be required. If you have a pod that

@ -312,48 +308,48 @@ hosts.

Upgrade docker:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=docker
```

Upgrade etcd:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=etcd
```

Upgrade vault:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=vault
```

Upgrade kubelet:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=node --skip-tags=k8s-gen-certs,k8s-gen-tokens
```

Upgrade Kubernetes master components:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=master
```

Upgrade network plugins:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=network
```

Upgrade all add-ons:

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=apps
```

Upgrade just helm (assuming `helm_enabled` is true):

```
```ShellSession
ansible-playbook -b -i inventory/sample/hosts.ini cluster.yml --tags=helm
```

@ -1,5 +1,4 @@

Introduction
============
# Vagrant

Assuming you have Vagrant 2.0+ installed with virtualbox, libvirt/qemu or vmware (vmware is untested), you should be able to launch a 3 node Kubernetes cluster by simply running `vagrant up`. This will spin up 3 VMs and install kubernetes on them. Once they are completed you can connect to any of them by running `vagrant ssh k8s-[1..3]`.

@ -7,33 +6,31 @@ To give an estimate of the expected duration of a provisioning run: On a dual co

For proper performance a minimum of 12GB RAM is recommended. It is possible to run a 3 node cluster on a laptop with 8GB of RAM using the default Vagrantfile, provided you have 8GB zram swap configured and not much more than a browser and a mail client running. If you decide to run on such a machine, then also make sure that any tmpfs devices that are mounted are mostly empty, and disable any swapfiles mounted on HDD/SSD or you will be in for some serious swap-madness. Things can get a bit sluggish during provisioning, but when that's done, the system will actually be able to perform quite well.

Customize Vagrant
=================
## Customize Vagrant

You can override the default settings in the `Vagrantfile` either by directly modifying the `Vagrantfile` or through an override file. In the same directory as the `Vagrantfile`, create a folder called `vagrant` and create a `config.rb` file in it. An example of how to configure this file is given below.
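
For instance, a minimal sketch of creating such an override file; `$vm_memory` and `$instance_name_prefix` are assumed to be among the settings the `Vagrantfile` exposes:

```ShellSession
mkdir -p vagrant
# each line appends one Ruby setting to the override file
echo '$vm_memory = 2048' >> vagrant/config.rb
echo '$instance_name_prefix = "kub"' >> vagrant/config.rb
```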

Use alternative OS for Vagrant
==============================
## Use alternative OS for Vagrant

By default, Vagrant uses the Ubuntu 18.04 box to provision a local cluster. You may use an alternative supported operating system for your local cluster.

Customize the `$os` variable in `Vagrantfile` or as an override, e.g.,:

```ShellSession
echo '$os = "coreos-stable"' >> vagrant/config.rb
```

The supported operating systems for vagrant are defined in the `SUPPORTED_OS` constant in the `Vagrantfile`.

File and image caching
======================
## File and image caching

Kubespray can take quite a while to start on a laptop. To improve provisioning speed, the variable 'download_run_once' is set. This will make kubespray download all files and containers just once and then redistribute them to the other nodes and, as a bonus, also cache all downloads locally and re-use them on the next provisioning run. For more information on download settings see [download documentation](downloads.md).

Example use of Vagrant
======================
## Example use of Vagrant

The following is an example of setting up and running kubespray using `vagrant`. For repeated runs, you could save the script to a file in the root of the kubespray repo and run it by executing `source <name_of_the_file>`.

```
```ShellSession
# use virtualenv to install all python requirements
VENVDIR=venv
virtualenv --python=/usr/bin/python3.7 $VENVDIR

@ -76,28 +73,38 @@ sudo ln -s $INV/artifacts/kubectl /usr/local/bin/kubectl

#or
export PATH=$PATH:$INV/artifacts
```

If a vagrant run failed and you've made some changes to fix the issue that caused the failure, here is how you would re-run ansible:

```
```ShellSession
ansible-playbook -vvv -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory cluster.yml
```

If all went well, you can check that everything is working as expected:

```
```ShellSession
kubectl get nodes
```

The output should look like this:

```
```ShellSession
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kub-1 Ready master 32m v1.14.1
kub-2 Ready master 31m v1.14.1
kub-3 Ready <none> 31m v1.14.1
```

Another nice test is the following:

```
```ShellSession
kubectl get po --all-namespaces -o wide
```

Which should yield something like the following:

```
```ShellSession
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-97c4b444f-9wm86 1/1 Running 0 31m 10.233.66.2 kub-3 <none> <none>
kube-system coredns-97c4b444f-g7hqx 0/1 Pending 0 30m <none> <none> <none> <none>

@ -120,10 +127,12 @@ kube-system nodelocaldns-2x7vh 1/1 Running 0

kube-system nodelocaldns-fpvnz 1/1 Running 0 31m 10.0.20.103 kub-3 <none> <none>
kube-system nodelocaldns-h2f42 1/1 Running 0 31m 10.0.20.101 kub-1 <none> <none>
```

Create clusteradmin rbac and get the login token for the dashboard:

```
```ShellSession
kubectl create -f contrib/misc/clusteradmin-rbac.yml
kubectl -n kube-system describe secret kubernetes-dashboard-token | grep 'token:' | grep -o '[^ ]\+$'
```

Copy it to the clipboard and now log in to the [dashboard](https://10.0.20.101:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login).

docs/vars.md

@ -1,7 +1,6 @@

Configurable Parameters in Kubespray
================================
# Configurable Parameters in Kubespray

#### Generic Ansible variables
## Generic Ansible variables

You can view facts gathered by Ansible automatically
[here](http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts).

@ -12,7 +11,7 @@ Some variables of note include:

* *ansible_default_ipv4.address*: IP address Ansible automatically chooses.
Generated based on the output from the command ``ip -4 route get 8.8.8.8``

#### Common vars that are used in Kubespray
## Common vars that are used in Kubespray

* *calico_version* - Specify version of Calico to use
* *calico_cni_version* - Specify version of Calico CNI plugin to use

@ -28,7 +27,7 @@ Some variables of note include:

* *nameservers* - Array of nameservers to use for DNS lookup
* *preinstall_selinux_state* - Set selinux state, permitted values are permissive and disabled.
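
For example (a sketch using one of the permitted values listed above):

```yml
preinstall_selinux_state: permissive
```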

#### Addressing variables
## Addressing variables

* *ip* - IP to use for binding services (host var)
* *access_ip* - IP for other hosts to use to connect to. Often required when

@ -45,7 +44,7 @@ Some variables of note include:

`loadbalancer_apiserver`. See more details in the
[HA guide](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ha-mode.md).

#### Cluster variables
## Cluster variables

Kubernetes needs some parameters in order to get deployed. These are the
following default cluster parameters:

@ -86,7 +85,7 @@ Note, if cloud providers have any use of the ``10.233.0.0/16``, like instances'

private addresses, make sure to pick other values for ``kube_service_addresses``
and ``kube_pods_subnet``, for example from the ``172.18.0.0/16``.

#### DNS variables
## DNS variables

By default, hosts are set up with 8.8.8.8 as an upstream DNS server and all
other settings from your existing /etc/resolv.conf are lost. Set the following

@ -100,7 +99,7 @@ variables to match your requirements.

For more information, see [DNS
Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.md).

#### Other service variables
## Other service variables

* *docker_options* - Commonly used to set
``--insecure-registry=myregistry.mydomain:5000``
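
A sketch of that override in group_vars (the registry host is the placeholder from the line above):

```yml
docker_options: "--insecure-registry=myregistry.mydomain:5000"
```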

@ -125,20 +124,24 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m

* *node_labels* - Labels applied to nodes via kubelet --node-labels parameter.
For example, labels can be set in the inventory as variables or more widely in group_vars.
*node_labels* can be defined either as a dict or a comma-separated labels string:

```
```yml
node_labels:
label1_name: label1_value
label2_name: label2_value

node_labels: "label1_name=label1_value,label2_name=label2_value"
```

* *node_taints* - Taints applied to nodes via kubelet --register-with-taints parameter.
For example, taints can be set in the inventory as variables or more widely in group_vars.
*node_taints* has to be defined as a list of strings in format `key=value:effect`, e.g.:

```
```yml
node_taints:
- "node.example.com/external=true:NoSchedule"
```

* *podsecuritypolicy_enabled* - When set to `true`, enables the PodSecurityPolicy admission controller and defines two policies `privileged` (applying to all resources in `kube-system` namespace and kubelet) and `restricted` (applying all other namespaces).
Addons deployed in kube-system namespaces are handled.
* *kubernetes_audit* - When set to `true`, enables Auditing.
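
A combined sketch of those two toggles in group_vars:

```yml
podsecuritypolicy_enabled: true
kubernetes_audit: true
```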

@ -151,25 +154,30 @@ node_taints:

By default, the `audit_policy_file` contains [default rules](https://github.com/kubernetes-sigs/kubespray/blob/master/roles/kubernetes/master/templates/apiserver-audit-policy.yaml.j2) that can be overridden with the `audit_policy_custom_rules` variable.

##### Custom flags for Kube Components
### Custom flags for Kube Components

For all kube components, custom flags can be passed in. This allows for edge cases where users need changes to the default deployment that may not be applicable to all deployments. This can be done by providing a list of flags. The `kubelet_node_custom_flags` apply kubelet settings only to nodes and not masters. Example:

```
```yml
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
- "--eviction-soft-grace-period=memory.available=30s"
- "--eviction-soft=memory.available<300Mi"
```

The possible vars are:

* *kubelet_custom_flags*
* *kubelet_node_custom_flags*

Extra flags for the API server, controller, and scheduler components can be specified using these variables,
in the form of dicts of key-value pairs of configuration parameters that will be inserted into the kubeadm YAML config file:

* *kube_kubeadm_apiserver_extra_args*
* *kube_kubeadm_controller_extra_args*
* *kube_kubeadm_scheduler_extra_args*
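
For instance, a sketch of the dict form (`event-ttl` stands in for any apiserver flag and is purely illustrative):

```yml
kube_kubeadm_apiserver_extra_args:
  event-ttl: "1h"
```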

#### User accounts
## User accounts

By default, a user with admin rights is created, named `kube`.
The password can be viewed after deployment by looking at the file

@ -1,6 +1,7 @@

# vSphere cloud provider

Kubespray can be deployed with vSphere as Cloud provider. This feature supports

- Volumes
- Persistent Volumes
- Storage Classes and provisioning of volumes.

@ -11,15 +12,16 @@ Kubespray can be deployed with vSphere as Cloud provider. This feature supports

First you need to configure your vSphere environment by following the [official documentation](https://kubernetes.io/docs/getting-started-guides/vsphere/#vsphere-cloud-provider).

After this step you should have:

- UUID activated for each VM where Kubernetes will be deployed
- A vSphere account with required privileges

If you intend to leverage the [zone and region node labeling](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region), create a tag category for both the zone and region in vCenter. The tags can then be applied at the host, cluster, datacenter, or folder level, and the cloud provider will walk the hierarchy to extract and apply the labels to the Kubernetes nodes.

## Kubespray configuration

First you must define the cloud provider in `inventory/sample/group_vars/all.yml` and set it to `vsphere`.

```yml
cloud_provider: vsphere
```

@ -61,7 +63,8 @@ vsphere_resource_pool: "K8s-Pool"

## Deployment

Once the configuration is set, you can execute the playbook again to apply the new configuration

```
```ShellSession
cd kubespray
ansible-playbook -i inventory/sample/hosts.ini -b -v cluster.yml
```

@ -1,5 +1,4 @@

Weave
=======
# Weave

Weave 2.0.1 is supported by kubespray

@ -11,7 +10,7 @@ Weave encryption is supported for all communication

* To use Weave encryption, specify a strong password (if no password, no encryption)

```
```ShellSession
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_password: EnterPasswordHere
```

@ -22,18 +21,19 @@ Weave is deployed by kubespray using a daemonSet

* Check the status of Weave containers

```
```ShellSession
# From client
kubectl -n kube-system get pods | grep weave
# output
weave-net-50wd2 2/2 Running 0 2m
weave-net-js9rb 2/2 Running 0 2m
```

There must be as many pods as nodes (here kubernetes has 2 nodes, so there are 2 weave pods).

* Check the status of weave (connection, encryption, ...) for each node

```
```ShellSession
# On nodes
curl http://127.0.0.1:6784/status
# output on node1

@ -57,14 +57,14 @@ Version: 2.0.1 (up to date; next check at 2017/08/01 13:51:34)

* Check the parameters of weave for each node

```
```ShellSession
# On nodes
ps -aux | grep weaver
# output on node1 (here it uses seed mode)
root 8559 0.2 3.0 365280 62700 ? Sl 08:25 0:00 /home/weave/weaver --name=fa:16:3e:b3:d6:b2 --port=6783 --datapath=datapath --host-root=/host --http-addr=127.0.0.1:6784 --status-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=10.233.64.0/18 --nickname=node1 --ipalloc-init seed=fa:16:3e:b3:d6:b2,fa:16:3e:f0:50:53 --conn-limit=30 --expect-npc 192.168.208.28 192.168.208.19
```

### Consensus mode (default mode)
## Consensus mode (default mode)

This mode is best to use on a cluster of static size.

@ -76,14 +76,14 @@ The seed mode also allows multi-clouds and hybrid on-premise/cloud clusters depl

* Switch from consensus mode to seed mode

```
```ShellSession
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_mode_seed: true
```

These two variables are only used when `weave_mode_seed` is set to `true` (**/!\ do not manually change these values**)

```
```ShellSession
# In file ./inventory/sample/group_vars/k8s-cluster.yml
weave_seed: uninitialized
weave_peers: uninitialized