Minor cleanup of README.md and two other docs (#9621)

Signed-off-by: Anthony D'Atri <anthony.datri@gmail.com>

Anthony D'Atri 2023-01-03 05:51:31 -05:00 committed by GitHub
parent 48282a344f
commit 8ca0bfffe0
3 changed files with 27 additions and 16 deletions


@@ -13,7 +13,7 @@ You can get your invite [here](http://slack.k8s.io/)
## Quick Start

Below are several ways to use Kubespray to deploy a Kubernetes cluster.

### Ansible
@@ -41,20 +41,31 @@ cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```

Note: When Ansible is already installed via system packages on the control node,
Python packages installed via `sudo pip install -r requirements.txt` will go to
a different directory tree (e.g. `/usr/local/lib/python2.7/dist-packages` on
Ubuntu) from Ansible's (e.g. `/usr/lib/python2.7/dist-packages/ansible` still on
Ubuntu). As a consequence, the `ansible-playbook` command will fail with:

```raw
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
```

This likely indicates that a task depends on a module present in `requirements.txt`.
One way of addressing this is to uninstall the system Ansible package and then
reinstall Ansible via `pip`, but this is not always possible, and one must
take care regarding package versions.

A workaround consists of setting the `ANSIBLE_LIBRARY`
and `ANSIBLE_MODULE_UTILS` environment variables respectively to
the `ansible/modules` and `ansible/module_utils` subdirectories of the `pip`
installation location, which is the `Location` shown by running
`pip show [package]` before executing `ansible-playbook`.
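
For example, a minimal sketch of that workaround might look like this, assuming `pip`
reported `/usr/local/lib/python2.7/dist-packages` as its `Location` (substitute the
path from your own `pip show` output):

```ShellSession
pip show ansible | grep ^Location
export ANSIBLE_LIBRARY=/usr/local/lib/python2.7/dist-packages/ansible/modules
export ANSIBLE_MODULE_UTILS=/usr/local/lib/python2.7/dist-packages/ansible/module_utils
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```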

A simple way to ensure you get the correct version of Ansible is to use
the [pre-built docker image from Quay](https://quay.io/repository/kubespray/kubespray?tab=tags).
You will then need to use [bind mounts](https://docs.docker.com/storage/bind-mounts/)
to access the inventory and SSH key in the container, like this:

```ShellSession
git checkout v2.20.0
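# Bind-mount the inventory and SSH key and start a shell in the image, for example
# (a sketch; the host inventory path and key location are illustrative):
docker pull quay.io/kubespray/kubespray:v2.20.0
docker run --rm -it \
  --mount type=bind,source="$(pwd)"/inventory/mycluster,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.20.0 bash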
@@ -68,8 +79,8 @@ ansible-playbook -i /inventory/inventory.ini --private-key /root/.ssh/id_rsa clu
### Vagrant

For Vagrant we need to install Python dependencies for provisioning tasks.
Check that `python` and `pip` are installed:

```ShellSession
python -V && pip -V
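# If either is missing, install it first; the provisioning dependencies can then
# be installed from the repository's requirements file, e.g. (a sketch, run from
# the repository root):
pip install -r requirements.txt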
@@ -176,7 +187,7 @@ Note: Upstart/SysV init based OS types are not supported.
## Container Runtime Notes

- Supported Docker versions are 18.09, 19.03 and 20.10. The *recommended* Docker version is 20.10. The `kubelet` might break on Docker's non-standard version numbering (it no longer uses semantic versioning). To ensure that auto-updates don't break your cluster, look into e.g. the YUM `versionlock` plugin or `apt` pinning, as sketched after this list.
- The cri-o version should be aligned with the respective kubernetes version (i.e. kube_version=1.20.x, crio_version=1.20)
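
For example, pinning the Docker packages against unattended upgrades might look
like this (a sketch; the exact package names depend on how Docker was installed):

```ShellSession
# RHEL/CentOS: lock the installed version with the yum versionlock plugin
yum install -y yum-plugin-versionlock
yum versionlock add docker-ce docker-ce-cli

# Debian/Ubuntu: hold the packages so automatic upgrades leave them alone
apt-mark hold docker-ce docker-ce-cli containerd.io
```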

## Requirements
@@ -193,7 +204,7 @@ Note: Upstart/SysV init based OS types are not supported.
or command parameters `--become or -b` should be specified.

Hardware:
These limits are safeguarded by Kubespray. Actual requirements for your workload can differ. For a sizing guide go to the [Building Large Clusters](https://kubernetes.io/docs/setup/cluster-large/#size-of-master-and-master-components) guide.

- Master
  - Memory: 1500 MB
@@ -202,7 +213,7 @@ These limits are safe guarded by Kubespray. Actual requirements for your workloa
## Network Plugins

You can choose among ten network plugins (default: `calico`, except Vagrant uses `flannel`).

- [flannel](docs/flannel.md): gre/vxlan (layer 2) networking.
@@ -229,7 +240,7 @@ You can choose between 10 network plugins. (default: `calico`, except Vagrant us
- [multus](docs/multus.md): Multus is a meta CNI plugin that provides multiple network interface support to pods. For each interface Multus delegates CNI calls to secondary CNI plugins such as Calico, macvlan, etc.

The network plugin to use is defined by the variable `kube_network_plugin`. There is also an
option to leverage built-in cloud provider networking instead.
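
For instance, the selected plugin can be checked in the sample inventory's group
vars (a sketch; the `mycluster` inventory path matches the Quick Start above):

```ShellSession
grep kube_network_plugin inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# kube_network_plugin: calico
```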

See also [Network checker](docs/netcheck.md).


@@ -267,7 +267,7 @@ Note: use `--tags` and `--skip-tags` wise and only if you're 100% sure what you'
## Bastion host

If you prefer to not make your nodes publicly accessible (nodes with private IPs only),
you can use a so-called _bastion_ host to connect to your nodes. To specify and use a bastion,
simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the
bastion host.
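
With an INI-style inventory, that line might look like the following (a sketch;
`x.x.x.x` remains a placeholder for your bastion's public IP):

```ShellSession
[bastion]
bastion ansible_host=x.x.x.x
```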


@@ -124,7 +124,7 @@ By default NGINX `keepalive_timeout` is set to `75s`.
The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.

*Please Note: An idle timeout of `3600s` is recommended when using WebSockets.*

More information regarding idle timeouts for your Load Balancer can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html).
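
For example, setting that annotation on the ingress controller's Service might
look like this (a sketch; the Service name and namespace are illustrative and
depend on how ingress-nginx was deployed):

```ShellSession
kubectl -n ingress-nginx annotate service ingress-nginx \
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout="3600" --overwrite
```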