fix docs/<file>.md errors identified by markdownlint
* docs/azure-csi.md
* docs/azure.md
* docs/bootstrap-os.md
* docs/calico.md
* docs/debian.md
* docs/fcos.md
* docs/vagrant.md
* docs/gcp-lb.md
* docs/kubernetes-apps/registry.md
* docs/setting-up-your-first-cluster.md
* docs/vars.md

pre-commit-hook
parent b074b91ee9
commit 4a994c82d1
@@ -57,19 +57,28 @@ The name of the network security group your instances are in, can be retrieved via
 These will have to be generated first:
 
 - Create an Azure AD Application with:
-  `az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET`
+
+  ```ShellSession
+  az ad app create --display-name kubespray --identifier-uris http://kubespray --homepage http://kubespray.com --password CLIENT_SECRET
+  ```
 
   Display name, identifier-uri, homepage and the password can be chosen
 
   Note the AppId in the output.
 
 - Create Service principal for the application with:
-  `az ad sp create --id AppId`
+
+  ```ShellSession
+  az ad sp create --id AppId
+  ```
 
   This is the AppId from the last command
 
 - Create the role assignment with:
-  `az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID`
+
+  ```ShellSession
+  az role assignment create --role "Owner" --assignee http://kubespray --subscription SUBSCRIPTION_ID
+  ```
 
 azure\_csi\_aad\_client\_id must be set to the AppId, azure\_csi\_aad\_client\_secret is your chosen secret.
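
The AppId printed by `az ad app create` is easy to lose; a sketch for retrieving it later, assuming the display name `kubespray` used above:

```ShellSession
# Query the AppId of the application created above (display name assumed).
az ad app list --display-name kubespray --query '[0].appId' --output tsv
```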
@@ -71,14 +71,27 @@ The name of the resource group that contains the route table. Defaults to `azur
 These will have to be generated first:
 
 - Create an Azure AD Application with:
-  `az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET`
+
+  ```ShellSession
+  az ad app create --display-name kubernetes --identifier-uris http://kubernetes --homepage http://example.com --password CLIENT_SECRET
+  ```
 
   display name, identifier-uri, homepage and the password can be chosen
   Note the AppId in the output.
 
 - Create Service principal for the application with:
-  `az ad sp create --id AppId`
+
+  ```ShellSession
+  az ad sp create --id AppId
+  ```
 
   This is the AppId from the last command
 
 - Create the role assignment with:
-  `az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID`
+
+  ```ShellSession
+  az role assignment create --role "Owner" --assignee http://kubernetes --subscription SUBSCRIPTION_ID
+  ```
 
 azure\_aad\_client\_id must be set to the AppId, azure\_aad\_client\_secret is your chosen secret.
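
A sketch of wiring the two values into an inventory, assuming the sample inventory layout (the path and the placeholder values are hypothetical):

```ShellSession
# Append the credentials to the Azure group vars (placeholders, not real values).
cat >> inventory/mycluster/group_vars/all/azure.yml <<'EOF'
azure_aad_client_id: <AppId from az ad app create>
azure_aad_client_secret: <CLIENT_SECRET chosen above>
EOF
```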
@@ -48,11 +48,13 @@ The `kubespray-defaults` role is expected to be run before this role.
 
 Remember to disable fact gathering since Python might not be present on hosts.
 
-  - hosts: all
-    gather_facts: false # not all hosts might be able to run modules yet
-    roles:
-      - kubespray-defaults
-      - bootstrap-os
+```yaml
+- hosts: all
+  gather_facts: false # not all hosts might be able to run modules yet
+  roles:
+    - kubespray-defaults
+    - bootstrap-os
+```
 
 ## License
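
For reference, a minimal standalone run of such a play, assuming it is saved as `bootstrap.yml` next to a hypothetical inventory:

```ShellSession
# Fact gathering is disabled inside the play itself, so this works on
# hosts that do not have Python installed yet.
ansible-playbook -i inventory/mycluster/hosts.yaml -b bootstrap.yml
```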
@@ -124,8 +124,7 @@ You need to edit your inventory and add:
 * `calico_rr` group with nodes in it. `calico_rr` can be combined with
   `kube_node` and/or `kube_control_plane`. `calico_rr` group also must be a child
   group of `k8s_cluster` group.
-* `cluster_id` by route reflector node/group (see details
-  [here](https://hub.docker.com/r/calico/routereflector/))
+* `cluster_id` by route reflector node/group (see details [here](https://hub.docker.com/r/calico/routereflector/))
 
 Here's an example of Kubespray inventory with standalone route reflectors:
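
A sketch of such an inventory change, with hypothetical host and group names (the `cluster_id` value is a placeholder):

```ShellSession
# Add a dedicated route reflector host and make calico_rr a child of k8s_cluster.
cat >> inventory/mycluster/hosts.ini <<'EOF'
[calico_rr]
rr0

[calico_rr:vars]
cluster_id=244.0.0.1

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
EOF
```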
@@ -3,34 +3,39 @@
 Debian Jessie installation Notes:
 
 - Add
-  ```GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"```
-  to /etc/default/grub. Then update with
-
-  ```ShellSession
-  sudo update-grub
-  sudo update-grub2
-  sudo reboot
+
+  ```ini
+  GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
+  ```
+
+  to `/etc/default/grub`. Then update with
+
+  ```ShellSession
+  sudo update-grub
+  sudo update-grub2
+  sudo reboot
   ```
 
 - Add the [backports](https://backports.debian.org/Instructions/) which contain Systemd 2.30 and update Systemd.
 
-  ```apt-get -t jessie-backports install systemd```
+  ```ShellSession
+  apt-get -t jessie-backports install systemd
+  ```
 
   (Necessary because the default Systemd version (2.15) does not support the "Delegate" directive in service files)
 
 - Add the Ansible repository and install Ansible to get a proper version
 
   ```ShellSession
   sudo add-apt-repository ppa:ansible/ansible
   sudo apt-get update
   sudo apt-get install ansible
-
   ```
 
 - Install Jinja2 and Python-Netaddr
 
-  ```sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr```
+  ```ShellSession
+  sudo apt-get install python-jinja2=2.8-1~bpo8+1 python-netaddr
+  ```
 
 Now you can continue with [Preparing your deployment](getting-started.md#starting-custom-deployment)
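
A quick way to confirm the GRUB change took effect after the reboot (a sketch):

```ShellSession
# Should print the flags if the kernel booted with them.
grep -o 'cgroup_enable=memory swapaccount=1' /proc/cmdline
```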
@@ -54,7 +54,7 @@ Prepare ignition and serve via http (a.e. python -m http.server )
 
 ### create guest
 
-```shell script
+```ShellSession
 machine_name=myfcos1
 ignition_url=http://mywebserver/fcos.ign
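
A sketch of serving the ignition file referenced above, assuming it sits in a hypothetical `/srv/ignition` directory on the web server host:

```ShellSession
# Serve fcos.ign over plain HTTP so it is reachable as http://mywebserver/fcos.ign.
cd /srv/ignition && sudo python3 -m http.server 80
```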
@@ -2,15 +2,19 @@
 
 Google Cloud Platform can be used for creation of Kubernetes Service Load Balancer.
 
-This feature is able to deliver by adding parameters to kube-controller-manager and kubelet. You need specify:
+This feature is able to deliver by adding parameters to `kube-controller-manager` and `kubelet`. You need specify:
 
+```
 --cloud-provider=gce
 --cloud-config=/etc/kubernetes/cloud-config
+```
 
-To get working it in kubespray, you need to add tag to GCE instances and specify it in kubespray group vars and also set cloud_provider to gce. So for example, in file group_vars/all/gcp.yml:
+To get working it in kubespray, you need to add tag to GCE instances and specify it in kubespray group vars and also set `cloud_provider` to `gce`. So for example, in file `group_vars/all/gcp.yml`:
 
+```
 cloud_provider: gce
 gce_node_tags: k8s-lb
+```
 
-When you will setup it and create SVC in Kubernetes with type=LoadBalancer, cloud provider will create public IP and will set firewall.
+When you will setup it and create SVC in Kubernetes with `type=LoadBalancer`, cloud provider will create public IP and will set firewall.
 Note: Cloud provider run under VM service account, so this account needs to have correct permissions to be able to create all GCP resources.
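
Once the provider is configured, creating a `type=LoadBalancer` Service is enough to trigger the public IP and firewall setup; a sketch with a hypothetical deployment name:

```ShellSession
# Expose an existing deployment and wait for GCP to assign EXTERNAL-IP.
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get service nginx --watch
```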
@@ -29,8 +29,7 @@ use Kubernetes's `PersistentVolume` abstraction. The following template is
 expanded by `salt` in the GCE cluster turnup, but can easily be adapted to
 other situations:
 
-<!-- BEGIN MUNGE: EXAMPLE registry-pv.yaml.in -->
-``` yaml
+```yaml
 kind: PersistentVolume
 apiVersion: v1
 metadata:
@@ -46,7 +45,6 @@ spec:
     fsType: "ext4"
 {% endif %}
 ```
-<!-- END MUNGE: EXAMPLE registry-pv.yaml.in -->
 
 If, for example, you wanted to use NFS you would just need to change the
 `gcePersistentDisk` block to `nfs`. See
@@ -68,8 +66,7 @@ Now that the Kubernetes cluster knows that some storage exists, you can put a
 claim on that storage. As with the `PersistentVolume` above, you can start
 with the `salt` template:
 
-<!-- BEGIN MUNGE: EXAMPLE registry-pvc.yaml.in -->
-``` yaml
+```yaml
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
@@ -82,7 +79,6 @@ spec:
     requests:
       storage: {{ pillar['cluster_registry_disk_size'] }}
 ```
-<!-- END MUNGE: EXAMPLE registry-pvc.yaml.in -->
 
 This tells Kubernetes that you want to use storage, and the `PersistentVolume`
 you created before will be bound to this claim (unless you have other
@@ -93,8 +89,7 @@ gives you the right to use this storage until you release the claim.
 
 Now we can run a Docker registry:
 
-<!-- BEGIN MUNGE: EXAMPLE registry-rc.yaml -->
-``` yaml
+```yaml
 apiVersion: v1
 kind: ReplicationController
 metadata:
@@ -138,7 +133,6 @@ spec:
         persistentVolumeClaim:
           claimName: kube-registry-pvc
 ```
-<!-- END MUNGE: EXAMPLE registry-rc.yaml -->
 
 *Note:* that if you have set multiple replicas, make sure your CSI driver has support for the `ReadWriteMany` accessMode.
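
Once the templates are expanded, the pieces can be applied in the usual way; a sketch with hypothetical file names:

```ShellSession
# Create the volume, the claim, and the registry controller, then check the pod.
kubectl create -f registry-pv.yaml -f registry-pvc.yaml -f registry-rc.yaml
kubectl get pods --namespace kube-system -l k8s-app=registry
```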
@@ -146,8 +140,7 @@ spec:
 
 Now that we have a registry `Pod` running, we can expose it as a Service:
 
-<!-- BEGIN MUNGE: EXAMPLE registry-svc.yaml -->
-``` yaml
+```yaml
 apiVersion: v1
 kind: Service
 metadata:
@@ -164,7 +157,6 @@ spec:
     port: 5000
     protocol: TCP
 ```
-<!-- END MUNGE: EXAMPLE registry-svc.yaml -->
 
 ## Expose the registry on each node
@@ -172,8 +164,7 @@ Now that we have a running `Service`, we need to expose it onto each Kubernetes
 `Node` so that Docker will see it as `localhost`. We can load a `Pod` on every
 node by creating following daemonset.
 
-<!-- BEGIN MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
-``` yaml
+```yaml
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
@@ -207,7 +198,6 @@ spec:
         containerPort: 80
         hostPort: 5000
 ```
-<!-- END MUNGE: EXAMPLE ../../saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml -->
 
 When modifying replication-controller, service and daemon-set definitions, take
 care to ensure *unique* identifiers for the rc-svc couple and the daemon-set.
@@ -219,7 +209,7 @@ This ensures that port 5000 on each node is directed to the registry `Service`.
 You should be able to verify that it is running by hitting port 5000 with a web
 browser and getting a 404 error:
 
-``` console
+```ShellSession
 $ curl localhost:5000
 404 page not found
 ```
@@ -229,7 +219,7 @@ $ curl localhost:5000
 To use an image hosted by this registry, simply say this in your `Pod`'s
 `spec.containers[].image` field:
 
-``` yaml
+```yaml
 image: localhost:5000/user/container
 ```
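
A sketch of pushing a locally built image through the node-local proxy (the image name is a placeholder):

```ShellSession
# Tag the image for the cluster registry, then push via localhost:5000.
docker tag user/container localhost:5000/user/container
docker push localhost:5000/user/container
```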
@@ -241,7 +231,7 @@ building locally and want to push to your cluster.
 You can use `kubectl` to set up a port-forward from your local node to a
 running Pod:
 
-``` console
+```ShellSession
 $ POD=$(kubectl get pods --namespace kube-system -l k8s-app=registry \
   -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
   | grep Running | head -1 | cut -f1 -d' ')
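
With the pod name captured in `$POD`, the forward itself is one more command; a sketch:

```ShellSession
# Forward local port 5000 to the registry pod selected above.
kubectl port-forward --namespace kube-system "$POD" 5000:5000 &
```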
@@ -252,11 +252,7 @@ Ansible will now execute the playbook, this can take up to 20 minutes.
 We will leverage a kubeconfig file from one of the controller nodes to access
 the cluster as administrator from our local workstation.
 
-> In this simplified set-up, we did not include a load balancer that usually
-sits on top of the
-three controller nodes for a high available API server endpoint. In this
-simplified tutorial we connect directly to one of the three
-controllers.
+> In this simplified set-up, we did not include a load balancer that usually sits on top of the three controller nodes for a high available API server endpoint. In this simplified tutorial we connect directly to one of the three controllers.
 
 First, we need to edit the permission of the kubeconfig file on one of the
 controller nodes:
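
A sketch of that step, with hypothetical `$USERNAME` and `$IP_CONTROLLER_0` variables and an arbitrary local file name:

```ShellSession
# Make the kubeconfig readable on the controller, then fetch it to the workstation.
ssh $USERNAME@$IP_CONTROLLER_0 "sudo chmod +r /etc/kubernetes/admin.conf"
scp $USERNAME@$IP_CONTROLLER_0:/etc/kubernetes/admin.conf kubespray-do.conf
```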
@@ -58,7 +58,7 @@ see [download documentation](/docs/downloads.md).
 
 The following is an example of setting up and running kubespray using `vagrant`.
 For repeated runs, you could save the script to a file in the root of the
-kubespray and run it by executing 'source <name_of_the_file>.
+kubespray and run it by executing `source <name_of_the_file>`.
 
 ```ShellSession
 # use virtualenv to install all python requirements
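
The script shown above is truncated here; a fuller sketch of what it typically contains (paths and commands are assumptions):

```ShellSession
# use virtualenv to install all python requirements
virtualenv --python=python3 venv
source venv/bin/activate
pip install -r requirements.txt
# then bring the machines up
vagrant up
```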
@@ -81,7 +81,7 @@ following default cluster parameters:
   raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
   (assertion not applicable to calico which doesn't use this as a hard limit, see
-  [Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes).
+  [Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
 
 * *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
 
 * *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
@@ -99,7 +99,7 @@ following default cluster parameters:
 
 * *coredns_k8s_external_zone* - Zone that will be used when CoreDNS k8s_external plugin is enabled
   (default is k8s_external.local)
 
-* *enable_coredns_k8s_endpoint_pod_names* - If enabled, it configures endpoint_pod_names option for kubernetes plugin.
-  on the CoreDNS service.
+* *enable_coredns_k8s_endpoint_pod_names* - If enabled, it configures endpoint_pod_names option for kubernetes plugin
+  on the CoreDNS service.
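
A sketch of enabling dual-stack via group vars, assuming the sample inventory layout (the path is hypothetical; the subnet is the documented default):

```ShellSession
# Enable IPv4+IPv6 networking for pods and services.
cat >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml <<'EOF'
enable_dual_stack_networks: true
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
EOF
```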