Optimize the document for readability (#9730)

Signed-off-by: Fish-pro <zechun.chen@daocloud.io>
Fish-pro 2023-02-01 16:01:06 +08:00 committed by GitHub
parent edde594bbe
commit 6cb027dfab
16 changed files with 23 additions and 23 deletions

==========

@@ -199,7 +199,7 @@ Note: Upstart/SysV init based OS types are not supported.
 - If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
 - The **firewalls are not managed**, you'll need to implement your own rules the way you used to.
 in order to avoid any issue during deployment you should disable your firewall.
-- If kubespray is ran from non-root user account, correct privilege escalation method
+- If kubespray is run from non-root user account, correct privilege escalation method
 should be configured in the target servers. Then the `ansible_become` flag
 or command parameters `--become or -b` should be specified.
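For illustration, a minimal invocation using privilege escalation might look like this sketch; the inventory path and the `deploy` user are assumptions, not kubespray defaults:

```bash
# Run the cluster playbook as an unprivileged user, escalating with --become (-b)
ansible-playbook -i inventory/mycluster/hosts.yaml -u deploy --become cluster.yml
```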

==========

@@ -60,7 +60,7 @@ release-notes --start-sha <The start commit-id> --end-sha <The end commit-id> --
 ```
 If the release note file(/tmp/kubespray-release-note) contains "### Uncategorized" pull requests, those pull requests don't have a valid kind label(`kind/feature`, etc.).
-It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note)
+It is necessary to put a valid label on each pull request and run the above release-notes command again to get a better release note
 ## Container image creation
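As an aside on the uncategorized entries discussed above: one way to attach a missing kind label, assuming the GitHub CLI is installed and `1234` stands in for a real PR number:

```bash
# Add the missing kind label so release-notes can categorize the PR
gh pr edit 1234 --add-label "kind/feature"
```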

==========

@@ -14,7 +14,7 @@ If you want to deploy the Azure Disk storage class to provision volumes dynamica
 Before creating the instances you must first set the `azure_csi_` variables in the `group_vars/all.yml` file.
-All of the values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>
+All values can be retrieved using the azure cli tool which can be downloaded here: <https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest>
 After installation you have to run `az login` to get access to your account.
@@ -34,7 +34,7 @@ The name of the resource group your instances are in, a list of your resource gr
 Or you can do `az vm list | grep resourceGroup` and get the resource group corresponding to the VMs of your cluster.
-The resource group name is not case sensitive.
+The resource group name is not case-sensitive.
 ### azure\_csi\_vnet\_name
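For the resource-group lookup above, a JMESPath query is a quieter alternative to grep; this sketch assumes you are already logged in via `az login`:

```bash
# Print the distinct resource groups of all VMs in the subscription
az vm list --query '[].resourceGroup' --output tsv | sort -u
```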

==========

@@ -10,7 +10,7 @@ Not all features are supported yet though, for a list of the current status have
 Before creating the instances you must first set the `azure_` variables in the `group_vars/all/all.yml` file.
-All of the values can be retrieved using the Azure CLI tool which can be downloaded here: <https://docs.microsoft.com/en-gb/cli/azure/install-azure-cli>
+All values can be retrieved using the Azure CLI tool which can be downloaded here: <https://docs.microsoft.com/en-gb/cli/azure/install-azure-cli>
 After installation you have to run `az login` to get access to your account.
 ### azure_cloud

==========

@@ -2,7 +2,7 @@
 ## Provisioning
-You can deploy instances in your cloud environment in several different ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
+You can deploy instances in your cloud environment in several ways. Examples include Terraform, Ansible (ec2 and gce modules), and manual creation.
 ## Deploy kubernetes

==========

@@ -10,7 +10,7 @@ dynamically from the Terraform state file.
 ## Local Host Configuration
 To perform this installation, you will need a localhost to run Terraform/Ansible (laptop, VM, etc) and an account with Equinix Metal.
-In this example, we're using an m1.large CentOS 7 OpenStack VM as the localhost to kickoff the Kubernetes installation.
+In this example, we are provisioning a m1.large CentOS7 OpenStack VM as the localhost for the Kubernetes installation.
 You'll need Ansible, Git, and PIP.
 ```bash

==========

@@ -25,7 +25,7 @@ etcd_metrics_port: 2381
 ```
 To create a service `etcd-metrics` and associated endpoints in the `kube-system` namespace,
-define it's labels in the inventory with:
+define its labels in the inventory with:
 ```yaml
 etcd_metrics_service_labels:
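For context, once that service exists the metrics endpoint on the port from the hunk header can be scraped directly; the assumption here is that etcd serves metrics on localhost:

```bash
# Sample the etcd metrics exposed on the port configured above
curl -s http://127.0.0.1:2381/metrics | head
```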

==========

@@ -54,7 +54,7 @@ listen kubernetes-apiserver-https
 balance roundrobin
 ```
-Note: That's an example config managed elsewhere outside of Kubespray.
+Note: That's an example config managed elsewhere outside Kubespray.
 And the corresponding example global vars for such a "cluster-aware"
 external LB with the cluster API access modes configured in Kubespray:
@@ -85,7 +85,7 @@ for it.
 Note: TLS/SSL termination for externally accessed API endpoints' will **not**
 be covered by Kubespray for that case. Make sure your external LB provides it.
-Alternatively you may specify an externally load balanced VIPs in the
+Alternatively you may specify an external load balanced VIPs in the
 `supplementary_addresses_in_ssl_keys` list. Then, kubespray will add them into
 the generated cluster certificates as well.
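To verify that such VIPs actually landed in the generated certificates, you can dump the SANs; the certificate path below is an assumption and may differ per setup:

```bash
# List the Subject Alternative Names baked into the apiserver certificate
openssl x509 -in /etc/kubernetes/ssl/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'
```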

==========

@@ -95,7 +95,7 @@
 ansible.builtin.import_playbook: 3d/kubespray/cluster.yml
 ```
-Or your could copy separate tasks from cluster.yml into your ansible repository.
+Or you could copy separate tasks from cluster.yml into your ansible repository.
 11. Commit changes to your ansible repo. Keep in mind, that submodule folder is just a link to the git commit hash of your forked repo.
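For reference, wiring a fork in at the `3d/kubespray` path used above could look like this sketch; the repository URL is a placeholder:

```bash
# Register your kubespray fork as a submodule pinned to a commit
git submodule add https://github.com/yourorg/kubespray.git 3d/kubespray
git submodule update --init
```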
@@ -170,7 +170,7 @@ If you made useful changes or fixed a bug in existent kubespray repo, use this f
 git push
 ```
-If your branch doesn't exists on github, git will propose you to use something like
+If your branch doesn't exist on github, git will propose you to use something like
 ```ShellSession
 git push --set-upstream origin fixes-name-date-index

==========

@@ -4,7 +4,7 @@ Kube-OVN integrates the OVN-based Network Virtualization with Kubernetes. It off
 For more information please check [Kube-OVN documentation](https://github.com/alauda/kube-ovn)
-**Warning:** Kernel version (`cat /proc/version`) needs to be different than `3.10.0-862` or kube-ovn won't start and will print this message:
+**Warning:** Kernel version (`cat /proc/version`) needs to be different from `3.10.0-862` or kube-ovn won't start and will print this message:
 ```bash
 kernel version 3.10.0-862 has a nat related bug that will affect ovs function, please update to a version greater than 3.10.0-898
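A quick pre-flight check of the running kernel; only the exact `3.10.0-862` build is affected:

```bash
# Print the running kernel release; e.g. 3.10.0-957 or newer is fine
uname -r
```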

==========

@@ -4,7 +4,7 @@ Distributed system such as Kubernetes are designed to be resilient to the
 failures. More details about Kubernetes High-Availability (HA) may be found at
 [Building High-Availability Clusters](https://kubernetes.io/docs/admin/high-availability/)
-To have a simple view the most of parts of HA will be skipped to describe
+To have a simple view the most of the parts of HA will be skipped to describe
 Kubelet<->Controller Manager communication only.
 By default the normal behavior looks like:

==========

@@ -138,7 +138,7 @@ Run `cluster.yml` with `--limit=kube_control_plane`
 ## Adding an etcd node
-You need to make sure there are always an odd number of etcd nodes in the cluster. In such a way, this is always a replace or scale up operation. Either add two new nodes or remove an old one.
+You need to make sure there are always an odd number of etcd nodes in the cluster. In such a way, this is always a replacement or scale up operation. Either add two new nodes or remove an old one.
 ### 1) Add the new node running cluster.yml
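A sketch of that step, assuming the conventional inventory path (exact flags can vary between kubespray releases):

```bash
# Re-run cluster.yml limited to etcd and the control plane to add the node
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml \
  --limit=etcd,kube_control_plane -e ignore_assert_errors=yes
```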

==========

@@ -13,7 +13,7 @@ following artifacts in advance from another environment where has access to the
 Then you need to setup the following services on your offline environment:
-* a HTTP reverse proxy/cache/mirror to serve some static files (zips and binaries)
+* an HTTP reverse proxy/cache/mirror to serve some static files (zips and binaries)
 * an internal Yum/Deb repository for OS packages
 * an internal container image registry that need to be populated with all container images used by Kubespray
 * [Optional] an internal PyPi server for python packages used by Kubespray
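For the registry item in the list above, a tool such as skopeo can mirror images one at a time; the image tag and internal registry name below are illustrative only:

```bash
# Copy one public image into the internal registry
skopeo copy docker://registry.k8s.io/pause:3.9 \
  docker://registry.internal.example/registry.k8s.io/pause:3.9
```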
@@ -97,7 +97,7 @@ If you use the settings like the one above, you'll need to define in your invent
 * `files_repo`: HTTP webserver or reverse proxy that is able to serve the files listed above. Path is not important, you
 can store them anywhere as long as it's accessible by kubespray. It's recommended to use `*_version` in the path so
 that you don't need to modify this setting everytime kubespray upgrades one of these components.
-* `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending of your OS, should point to your internal
+* `yum_repo`/`debian_repo`/`ubuntu_repo`: OS package repository depending on your OS, should point to your internal
 repository. Adjust the path accordingly.
 ## Install Kubespray Python Packages
@@ -114,7 +114,7 @@ Look at the `requirements.txt` file and check if your OS provides all packages o
 manager). For those missing, you need to either use a proxy that has Internet access (typically from a DMZ) or setup a
 PyPi server in your network that will host these packages.
-If you're using a HTTP(S) proxy to download your python packages:
+If you're using an HTTP(S) proxy to download your python packages:
 ```bash
 sudo pip install --proxy=https://[username:password@]proxyserver:port -r requirements.txt

==========

@@ -272,7 +272,7 @@ scp $USERNAME@$IP_CONTROLLER_0:/etc/kubernetes/admin.conf kubespray-do.conf
 This kubeconfig file uses the internal IP address of the controller node to
 access the API server. This kubeconfig file will thus not work of from
-outside of the VPC network. We will need to change the API server IP address
+outside the VPC network. We will need to change the API server IP address
 to the controller node his external IP address. The external IP address will be
 accepted in the
 TLS negotiation as we added the controllers external IP addresses in the SSL
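One way to repoint the kubeconfig, assuming the cluster entry is named `cluster.local` (verify with `kubectl config get-clusters`) and `$IP_CONTROLLER_0_EXTERNAL` is a hypothetical variable holding the external address:

```bash
# Rewrite the API server endpoint to the controller's external IP
kubectl config set-cluster cluster.local \
  --kubeconfig=kubespray-do.conf \
  --server=https://${IP_CONTROLLER_0_EXTERNAL}:6443
```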
@@ -482,7 +482,7 @@ nginx version: nginx/1.19.1
 ### Kubernetes services
-#### Expose outside of the cluster
+#### Expose outside the cluster
 In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
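For instance, assuming an `nginx` deployment like the one exercised above, a NodePort service exposes it beyond the cluster:

```bash
# Expose the deployment on a NodePort reachable from outside the cluster
kubectl expose deployment nginx --port 80 --type NodePort
kubectl get service nginx   # note the allocated NodePort
```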

==========

@@ -263,7 +263,7 @@ Previous HEAD position was 6f97687d Release 2.8 robust san handling (#4478)
 HEAD is now at a4e65c7c Upgrade to Ansible >2.7.0 (#4471)
 ```
-:warning: IMPORTANT: Some of the variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:
+:warning: IMPORTANT: Some variable formats changed in the k8s_cluster.yml between 2.8.5 and 2.9.0 :warning:
 If you do not keep your inventory copy up to date, **your upgrade will fail** and your first master will be left non-functional until fixed and re-run.

==========

@@ -81,7 +81,7 @@ following default cluster parameters:
 bits in kube_pods_subnet dictates how many kube_nodes can be in cluster. Setting this > 25 will
 raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
 (assertion not applicable to calico which doesn't use this as a hard limit, see
-[Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes).
+[Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
 * *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
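As a worked example of the limit discussed above: a node prefix of 25 leaves 2^(32-25) = 128 pod addresses per node, so `kubelet_max_pods` has to stay below that:

```bash
# Pod addresses available per node for a /25 node prefix
echo $(( 2 ** (32 - 25) ))   # 128
```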
@@ -209,7 +209,7 @@ Stack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/dns-stack.m
 * *kubelet_systemd_hardening* - If `true`, provides kubelet systemd service with security features for isolation.
-**N.B.** To enable this feature, ensure you are using the **`cgroup v2`** on your system. Check it out with command: `sudo ls -l /sys/fs/cgroup/*.slice`. If directory does not exists, enable this with the following guide: [enable cgroup v2](https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cgroup-v2).
+**N.B.** To enable this feature, ensure you are using the **`cgroup v2`** on your system. Check it out with command: `sudo ls -l /sys/fs/cgroup/*.slice`. If directory does not exist, enable this with the following guide: [enable cgroup v2](https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cgroup-v2).
 * *kubelet_secure_addresses* - By default *kubelet_systemd_hardening* set the **control plane** `ansible_host` IPs as the `kubelet_secure_addresses`. In case you have multiple interfaces in your control plane nodes and the `kube-apiserver` is not bound to the default interface, you can override them with this variable.
 Example: