Fix markdownlint failures under ./roles/ (#7089)

This fixes markdownlint failures under roles/
pull/7092/head
Kenichi Omichi 2020-12-30 05:07:49 -08:00 committed by GitHub
parent dc86b2063a
commit 398a995798
12 changed files with 65 additions and 60 deletions


@@ -66,8 +66,7 @@ markdownlint:
   before_script:
     - npm install -g markdownlint-cli@0.22.0
   script:
-    # TODO: Remove "grep -v" part to enable markdownlint for all md files
-    - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
+    - markdownlint $(find . -name "*.md" | grep -v .github) --ignore docs/_sidebar.md --ignore contrib/dind/README.md

 ci-matrix:
   stage: unit-tests
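For anyone reproducing this check outside CI, a minimal local run mirroring the updated script might look like the following (assuming npm is available and the commands are run from the repository root):

```bash
# Pin the same markdownlint-cli version the CI job uses
npm install -g markdownlint-cli@0.22.0

# Lint every Markdown file except the .github templates, matching the new CI command
markdownlint $(find . -name "*.md" | grep -v .github) \
  --ignore docs/_sidebar.md --ignore contrib/dind/README.md
```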


@@ -6,7 +6,7 @@ Provision a Kubernetes cluster on GCP using Terraform and Kubespray
 The setup looks like following

-```
+```text
 Kubernetes cluster
 +-----------------------+
 +---------------+ | +--------------+ |


@@ -3,15 +3,16 @@
 Bootstrap an Ansible host to be able to run Ansible modules.

 This role will:
+
 * configure the package manager (if applicable) to be able to fetch packages
 * install Python
 * install the necessary packages to use Ansible's package manager modules
 * set the hostname of the host to `{{ inventory_hostname }}` when requested

 ## Requirements

 A host running an operating system that is supported by Kubespray.
-See https://github.com/kubernetes-sigs/kubespray#supported-linux-distributions for a current list.
+See [Supported Linux Distributions](https://github.com/kubernetes-sigs/kubespray#supported-linux-distributions) for a current list.

 SSH access to the host.
@@ -21,10 +22,10 @@ Variables are listed with their default values, if applicable.
 ### General variables

 * `http_proxy`/`https_proxy`
   The role will configure the package manager (if applicable) to download packages via a proxy.
 * `override_system_hostname: true`
   The role will set the hostname of the machine to the name it has according to Ansible's inventory (the variable `{{ inventory_hostname }}`).

 ### Per distribution variables


@@ -25,7 +25,7 @@ Test instruction
 - Start Kubernetes local cluster
-  See https://kubernetes.io/.
+  See [Kubernetes](https://kubernetes.io/)
 - Create a Ceph admin secret
@@ -47,7 +47,7 @@ Alternatively, deploy it in kubernetes, see [deployment](deploy/README.md).
 - Create a CephFS Storage Class
-  Replace Ceph monitor's IP in example/class.yaml with your own and create storage class:
+  Replace Ceph monitor's IP in [example class](example/class.yaml) with your own and create storage class:
 ``` bash
 kubectl create -f example/class.yaml
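For orientation, the "Create a Ceph admin secret" step above is usually something along these lines; the secret name, the namespace, and the use of `ceph auth get-key` are illustrative assumptions, not taken from this diff:

```bash
# Hypothetical example: store the Ceph admin key as a Kubernetes secret.
# Secret name and namespace are placeholders; adjust to your deployment.
ADMIN_KEY=$(sudo ceph auth get-key client.admin)
kubectl create secret generic ceph-secret-admin \
  --from-literal=key="$ADMIN_KEY" \
  --namespace=cephfs
```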


@@ -50,7 +50,7 @@ the rest of this doc will use that path as an example.
 Examples to create local storage volumes
 ----------------------------------------
-### tmpfs method:
+1. tmpfs method:
 ``` bash
 for vol in vol1 vol2 vol3; do
@@ -62,7 +62,7 @@ done
 The tmpfs method is not recommended for production because the mount is not
 persistent and data will be deleted on reboot.
-### Mount physical disks
+1. Mount physical disks
 ``` bash
 mkdir /mnt/disks/ssd1
@@ -72,8 +72,7 @@ mount /dev/vdb1 /mnt/disks/ssd1
 Physical disks are recommended for production environments because it offers
 complete isolation in terms of I/O and capacity.
-### Mount unpartitioned physical devices
+1. Mount unpartitioned physical devices
 ``` bash
 for disk in /dev/sdc /dev/sdd /dev/sde; do
@@ -85,7 +84,7 @@ This saves time of precreatnig filesystems. Note that your storageclass must hav
 volume_mode set to "Filesystem" and fs_type defined. If either is not set, the
 disk will be added as a raw block device.
-### File-backed sparsefile method
+1. File-backed sparsefile method
 ``` bash
 truncate /mnt/disks/disk5 --size 2G
@@ -97,12 +96,12 @@ mount /mnt/disks/disk5 /mnt/disks/vol5
 If you have a development environment and only one disk, this is the best way
 to limit the quota of persistent volumes.
-### Simple directories
+1. Simple directories
 In a development environment using `mount --bind` works also, but there is no capacity
 management.
-### Block volumeMode PVs
+1. Block volumeMode PVs
 Create a symbolic link under discovery directory to the block device on the node. To use
 raw block devices in pods, volume_type should be set to "Block".
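A minimal sketch of that symlink step, assuming `/mnt/disks` is the discovery directory and `/dev/vdc` is the raw device (both placeholders):

```bash
# Link a raw block device into the discovery directory so the provisioner
# picks it up as a Block volumeMode PV (device name is a placeholder).
ln -s /dev/vdc /mnt/disks/raw-vol1
```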


@@ -26,7 +26,7 @@ make push
 * Start Kubernetes local cluster
-  See https://kubernetes.io/.
+  See [Kubernetes](https://kubernetes.io/).
 * Create a Ceph admin secret
@@ -76,4 +76,4 @@ kubectl create -f examples/test-pod.yaml
 ## Acknowledgements
-- This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.
+* This provisioner is extracted from [Kubernetes core](https://github.com/kubernetes/kubernetes) with some modifications for this project.


@@ -17,6 +17,7 @@ Checkout our [Live Docs](https://kubernetes-sigs.github.io/aws-alb-ingress-contr
 To get started with the controller, see our [walkthrough](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/walkthrough/echoserver/).
 ## Setup
+
 - See [controller setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/controller/setup/) on how to install ALB ingress controller
 - See [external-dns setup](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/external-dns/setup/) for how to setup the external-dns to manage route 53 records.


@@ -24,10 +24,10 @@ versions of Ambassador as they become available.
 ## Configuration
-* `ingress_ambassador_namespace` (default `ambassador`): namespace for installing Ambassador.
-* `ingress_ambassador_update_window` (default `0 0 * * SUN`): _crontab_-like expression
+- `ingress_ambassador_namespace` (default `ambassador`): namespace for installing Ambassador.
+- `ingress_ambassador_update_window` (default `0 0 * * SUN`): _crontab_-like expression
  for specifying when the Operator should try to update the Ambassador API Gateway.
-* `ingress_ambassador_version` (defaulkt: `*`): SemVer rule for versions allowed for
+- `ingress_ambassador_version` (defaulkt: `*`): SemVer rule for versions allowed for
  installation/updates.
 ## Ingress annotations


@@ -87,12 +87,12 @@ For further information, read the official [Cert-Manager Ingress](https://cert-m
 ### Create New TLS Root CA Certificate and Key
-#### Install Cloudflare PKI/TLS `cfssl` Toolkit.
+#### Install Cloudflare PKI/TLS `cfssl` Toolkit
 e.g. For Ubuntu/Debian distibutions, the toolkit is part of the `golang-cfssl` package.
 ```shell
-$ sudo apt-get install -y golang-cfssl
+sudo apt-get install -y golang-cfssl
 ```
 #### Create Root Certificate Authority (CA) Configuration File
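For readers who have not used cfssl before, a root CA signing request commonly looks like the sketch below; the CN, organization, and key parameters are placeholders and may differ from what this guide actually uses:

```bash
# Hypothetical root CA CSR for cfssl; all field values are placeholders.
cat > ca-csr.json <<'EOF'
{
  "CN": "example-root-ca",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "US", "O": "Example Org" } ]
}
EOF

# Generate the self-signed root CA certificate and key (ca.pem / ca-key.pem).
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```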


@@ -25,11 +25,12 @@
 !!! attention
 If you're using GKE you need to initialize your user as a cluster-admin with the following command:
+
 ```console
 kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)
 ```

 The following **Mandatory Command** is required for all deployments except for AWS. See below for the AWS version.
@@ -60,6 +61,7 @@ For standard usage:
 ```console
 minikube addons enable ingress
 ```
+
 For development:
 1. Disable the ingress addon:
@@ -68,8 +70,8 @@ For development:
 minikube addons disable ingress
 ```
-2. Execute `make dev-env`
-3. Confirm the `nginx-ingress-controller` deployment exists:
+1. Execute `make dev-env`
+1. Confirm the `nginx-ingress-controller` deployment exists:
 ```console
 $ kubectl get pods -n ingress-nginx
@@ -115,9 +117,12 @@ This example creates an ELB with just two listeners, one in port 80 and another
 ##### ELB Idle Timeouts
-In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX. By default NGINX `keepalive_timeout` is set to `75s`.
+In some scenarios users will need to modify the value of the ELB idle timeout.
+Users need to ensure the idle timeout is less than the [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) that is configured for NGINX.
+By default NGINX `keepalive_timeout` is set to `75s`.

-The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified, in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.
+The default ELB idle timeout will work for most scenarios, unless the NGINX [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) has been modified,
+in which case `service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout` will need to be modified to ensure it is less than the `keepalive_timeout` the user has configured.

 _Please Note: An idle timeout of `3600s` is recommended when using WebSockets._
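As a hedged illustration, that idle timeout is normally adjusted via the Service annotation named above; the namespace and Service name here are assumptions:

```bash
# Raise the ELB idle timeout on the controller Service (names are placeholders).
kubectl -n ingress-nginx annotate service ingress-nginx \
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout="3600" \
  --overwrite
```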


@@ -4,7 +4,7 @@ MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer
 In short, it allows you to create Kubernetes services of type "LoadBalancer" in clusters that
 don't run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
 This addon aims to automate [MetalLB in layer 2 mode](https://metallb.universe.tf/concepts/layer2/)
-or [MetalLB in BGP mode][https://metallb.universe.tf/concepts/bgp/].
+or [MetalLB in BGP mode](https://metallb.universe.tf/concepts/bgp/).
 It deploys MetalLB into Kubernetes and sets up a layer 2 or BGP load-balancer.
 ## Install
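For context on the layer 2 mode referenced above, MetalLB of this era was configured through a ConfigMap address pool roughly like the sketch below; in Kubespray the pool normally comes from role variables, and the address range here is purely a placeholder:

```bash
# Hypothetical layer 2 address pool for MetalLB (legacy ConfigMap-style config);
# replace the address range with addresses from your own network.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
```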


@@ -24,7 +24,7 @@ whether the registry is run or not. To set this flag, you can specify
 does not include this flag, the following steps should work. Note that some of
 this is cloud-provider specific, so you may have to customize it a bit.
-### Make some storage
+- Make some storage
 The primary job of the registry is to store data. To do that we have to decide
 where to store it. For cloud environments that have networked storage, we can
@@ -58,7 +58,7 @@ If, for example, you wanted to use NFS you would just need to change the
 Note that in any case, the storage (in the case the GCE PersistentDisk) must be
 created independently - this is not something Kubernetes manages for you (yet).
-### I don't want or don't have persistent storage
+- I don't want or don't have persistent storage
 If you are running in a place that doesn't have networked storage, or if you
 just want to kick the tires on this without committing to it, you can easily
@@ -260,7 +260,7 @@ Now you can build and push images on your local computer as
 your kubernetes cluster with the same name.
 More Extensions
-===============
+---------------
 - [Use GCS as storage backend](gcs/README.md)
 - [Enable TLS/SSL](tls/README.md)
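To make the "build and push" step above concrete, a minimal sketch assuming the cluster registry is reachable on localhost:5000 (image name and port are placeholders):

```bash
# Tag and push a locally built image to the in-cluster registry;
# localhost:5000 assumes the registry proxy/port-forward described in this doc.
docker build -t localhost:5000/myapp:latest .
docker push localhost:5000/myapp:latest
```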