Improve markdownlint coverage (#7075)

Currently markdownlint covers only ./README.md and the md files under ./docs.
However, we have many md files under other directories as well.
This enables markdownlint for those md files too.
Kenichi Omichi 2020-12-22 04:44:26 -08:00 committed by GitHub
parent 286191ecb7
commit 1347bb2e4b
5 changed files with 26 additions and 28 deletions


@@ -64,9 +64,10 @@ markdownlint:
   tags: [light]
   image: node
   before_script:
-    - npm install -g markdownlint-cli
+    - npm install -g markdownlint-cli@0.22.0
   script:
-    - markdownlint README.md docs --ignore docs/_sidebar.md
+    # TODO: Remove "grep -v" part to enable markdownlint for all md files
+    - markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform | grep -v contrib/vault | grep -v contrib/network-storage) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
 ci-matrix:
   stage: unit-tests
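
For reference, the expanded lint step can be reproduced locally before pushing. This is a sketch that assumes npm is available and that the commands are run from the repository root:

```shell
# Same pinned version and file selection as the CI job above
npm install -g markdownlint-cli@0.22.0
markdownlint $(find . -name "*.md" | grep -v .github | grep -v roles | grep -v contrib/terraform | grep -v contrib/vault | grep -v contrib/network-storage) --ignore docs/_sidebar.md --ignore contrib/dind/README.md
```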


@@ -24,14 +24,14 @@ experience.
 You can enable the use of a Bastion Host by changing **use_bastion** in group_vars/all to **true**. The generated
 templates will then include an additional bastion VM which can then be used to connect to the masters and nodes. The option
-also removes all public IPs from all other VMs.
+also removes all public IPs from all other VMs.
 ## Generating and applying
 To generate and apply the templates, call:
 ```shell
-$ ./apply-rg.sh <resource_group_name>
+./apply-rg.sh <resource_group_name>
 ```
 If you change something in the configuration (e.g. number of nodes) later, you can call this again and Azure will
@@ -42,25 +42,23 @@ take care about creating/modifying whatever is needed.
 If you need to delete all resources from a resource group, simply call:
 ```shell
-$ ./clear-rg.sh <resource_group_name>
+./clear-rg.sh <resource_group_name>
 ```
 **WARNING** this really deletes everything from your resource group, including everything that was later created by you!
 ## Generating an inventory for kubespray
 After you have applied the templates, you can generate an inventory with this call:
 ```shell
-$ ./generate-inventory.sh <resource_group_name>
+./generate-inventory.sh <resource_group_name>
 ```
 It will create the file ./inventory which can then be used with kubespray, e.g.:
 ```shell
-$ cd kubespray-root-dir
-$ sudo pip3 install -r requirements.txt
-$ ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
+cd kubespray-root-dir
+sudo pip3 install -r requirements.txt
+ansible-playbook -i contrib/azurerm/inventory -u devops --become -e "@inventory/sample/group_vars/all/all.yml" cluster.yml
 ```
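
Once the inventory has been generated, it can be smoke-tested before running the full playbook. A minimal sketch, assuming SSH access for the `devops` user is already in place:

```shell
# Hypothetical connectivity check against the generated inventory
ansible -i contrib/azurerm/inventory all -m ping -u devops
```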


@@ -6,6 +6,7 @@ to serve as Kubernetes "nodes", which in turn will run
 called DIND (Docker-IN-Docker).
 The playbook has two roles:
+
 - dind-host: creates the "nodes" as containers in localhost, with
   appropriate settings for DIND (privileged, volume mapping for dind
   storage, etc).
@@ -27,7 +28,7 @@ See below for a complete successful run:
 1. Create the node containers
-~~~~
+```shell
 # From the kubespray root dir
 cd contrib/dind
 pip install -r requirements.txt
@@ -36,15 +37,15 @@ ansible-playbook -i hosts dind-cluster.yaml
 # Back to kubespray root
 cd ../..
-~~~~
+```
 NOTE: if the playbook run fails with something like below error
 message, you may need to specifically set `ansible_python_interpreter`,
 see `./hosts` file for an example expanded localhost entry.
-~~~
+```shell
 failed: [localhost] (item=kube-node1) => {"changed": false, "item": "kube-node1", "msg": "Failed to import docker or docker-py - No module named requests.exceptions. Try `pip install docker` or `pip install docker-py` (Python 2.6)"}
-~~~
+```
 2. Customize kubespray-dind.yaml
@@ -52,33 +53,33 @@ Note that there's coupling between above created node containers
 and `kubespray-dind.yaml` settings, in particular regarding selected `node_distro`
 (as set in `group_vars/all/all.yaml`), and docker settings.
-~~~
+```shell
 $EDITOR contrib/dind/kubespray-dind.yaml
-~~~
+```
 3. Prepare the inventory and run the playbook
-~~~
+```shell
 INVENTORY_DIR=inventory/local-dind
 mkdir -p ${INVENTORY_DIR}
 rm -f ${INVENTORY_DIR}/hosts.ini
 CONFIG_FILE=${INVENTORY_DIR}/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
 ansible-playbook --become -e ansible_ssh_user=debian -i ${INVENTORY_DIR}/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml
-~~~
+```
 NOTE: You could also test other distros without editing files by
 passing `--extra-vars` as per below commandline,
 replacing `DISTRO` by either `debian`, `ubuntu`, `centos`, `fedora`:
-~~~
+```shell
 cd contrib/dind
 ansible-playbook -i hosts dind-cluster.yaml --extra-vars node_distro=DISTRO
 cd ../..
 CONFIG_FILE=inventory/local-dind/hosts.ini /tmp/kubespray.dind.inventory_builder.sh
 ansible-playbook --become -e ansible_ssh_user=DISTRO -i inventory/local-dind/hosts.ini cluster.yml --extra-vars @contrib/dind/kubespray-dind.yaml --extra-vars bootstrap_os=DISTRO
-~~~
+```
 ## Resulting deployment
@@ -89,7 +90,7 @@ from the host where you ran kubespray playbooks.
 Running from an Ubuntu Xenial host:
-~~~
+```shell
 $ uname -a
 Linux ip-xx-xx-xx-xx 4.4.0-1069-aws #79-Ubuntu SMP Mon Sep 24
 15:01:41 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
@@ -149,14 +150,14 @@ kube-system weave-net-xr46t 2/2 Running 0
 $ docker exec kube-node1 curl -s http://localhost:31081/api/v1/connectivity_check
 {"Message":"All 10 pods successfully reported back to the server","Absent":null,"Outdated":null}
-~~~
+```
 ## Using ./run-test-distros.sh
 You can use `./run-test-distros.sh` to run a set of tests via DIND,
 and excerpt from this script, to get an idea:
-~~~
+```shell
 # The SPEC file(s) must have two arrays as e.g.
 # DISTROS=(debian centos)
 # EXTRAS=(
@@ -169,7 +170,7 @@ and excerpt from this script, to get an idea:
 #
 # Each $EXTRAS element will be whitespace split, and passed as --extra-vars
 # to main kubespray ansible-playbook run.
-~~~
+```
 See e.g. `test-some_distros-most_CNIs.env` and
 `test-some_distros-kube_router_combo.env` in particular for a richer
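
For illustration, a minimal SPEC file matching the excerpt above might contain just the two arrays; the file name and the extra-vars value below are made-up examples, not part of this commit:

```shell
# test-debian_centos-calico.env (hypothetical SPEC file name)
DISTROS=(debian centos)
EXTRAS=(
  "kube_network_plugin=calico"
)
```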


@@ -5,7 +5,7 @@ deployment on VMs.
 This playbook does not create Virtual Machines, nor does it run Kubespray itself.
-### User creation
+## User creation
 If you want to create a user for running Kubespray deployment, you should specify
 both `k8s_deployment_user` and `k8s_deployment_user_pkey_path`.
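
As a sketch of how these two variables might be supplied on the command line (the playbook name, inventory path, and values below are placeholders, not from this commit):

```shell
# Hypothetical invocation; substitute your own playbook, inventory and key path
ansible-playbook -i inventory/sample/hosts.ini <playbook>.yml \
  -e k8s_deployment_user=kubespray \
  -e k8s_deployment_user_pkey_path=/tmp/ssh_key
```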


@@ -9,8 +9,6 @@ Ubuntu Trusty |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico
 RHEL 7.2 |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-calico-rhel72/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-flannel-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-flannel-rhel72/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-weave-rhel72/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-weave-rhel72/)|
 CentOS 7 |[![Build Status](https://ci.kubespray.io/job/kubespray-aws-calico-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-calico-centos7/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-flannel-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-flannel-centos7/)|[![Build Status](https://ci.kubespray.io/job/kubespray-aws-weave-centos7/badge/icon)](https://ci.kubespray.io/job/kubespray-aws-weave-centos7/)|
 ## Test environment variables
 ### Common