From f03e3e38a93524e7fb8d488ffbf639414d48475b Mon Sep 17 00:00:00 2001 From: rootsongjc Date: Tue, 27 Feb 2018 17:04:18 +0800 Subject: [PATCH] lower case block language config --- appendix/tricks.md | 6 +++--- concepts/ingress.md | 6 +++--- concepts/label.md | 4 ++-- concepts/network-policy.md | 2 +- concepts/pod-security-policy.md | 2 +- concepts/rbac.md | 6 +++--- concepts/secret.md | 4 ++-- concepts/serviceaccount.md | 2 +- guide/authenticate-across-clusters-kubeconfig.md | 2 +- guide/configure-liveness-readiness-probes.md | 2 +- guide/configure-pod-service-account.md | 2 +- guide/deploy-applications-in-kubernetes.md | 2 +- guide/rbac.md | 6 +++--- guide/resource-quota-management.md | 2 +- guide/tls-bootstrapping.md | 4 ++-- practice/distributed-load-test.md | 2 +- practice/edge-node-configuration.md | 2 +- practice/helm.md | 8 ++++---- practice/service-rolling-update.md | 4 ++-- practice/traefik-ingress-installation.md | 6 +++--- usecases/configuring-request-routing.md | 4 ++-- usecases/istio-installation.md | 2 +- 22 files changed, 40 insertions(+), 40 deletions(-) diff --git a/appendix/tricks.md b/appendix/tricks.md index d47a98a17..6887c371c 100644 --- a/appendix/tricks.md +++ b/appendix/tricks.md @@ -2,7 +2,7 @@ 通过环境变量来实现,该环境变量直接引用 resource 的状态字段,示例如下: -```Yaml +```yaml apiVersion: v1 kind: ReplicationController metadata: @@ -45,7 +45,7 @@ command: ["/bin/bash","-c","bootstrap.sh"] 我们可以想象一下这样的场景,让 Pod 来调用宿主机的 docker 能力,只需要将宿主机的 `docker` 命令和 `docker.sock` 文件挂载到 Pod 里面即可,如下: -```Yaml +```yaml apiVersion: v1 kind: Pod metadata: @@ -209,7 +209,7 @@ data: ## 8. 
创建一个CentOS测试容器 -有时我们可能需要在Kubernetes集群中创建一个容器来测试集群的状态或对其它容器进行操作,这时候我们需要一个操作节点,可以使用一个普通的CentOS容器来实现。YAML文件见[manifests/test/centos.yaml](https://github.com/rootsongjc/kubernetes-handbook/tree/master/manifests/test/centos.yaml)。 +有时我们可能需要在Kubernetes集群中创建一个容器来测试集群的状态或对其它容器进行操作,这时候我们需要一个操作节点,可以使用一个普通的CentOS容器来实现。YAML文件见[manifests/test/centos.yaml](https://github.com/rootsongjc/kubernetes-handbook/tree/master/manifests/test/centos.yaml)。 ```yaml apiVersion: extensions/v1beta1 diff --git a/concepts/ingress.md b/concepts/ingress.md index aa69cc269..97bdd7817 100644 --- a/concepts/ingress.md +++ b/concepts/ingress.md @@ -94,7 +94,7 @@ Kubernetes中已经存在一些概念可以暴露单个service(查看[替代 ingress.yaml定义文件: -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: @@ -171,7 +171,7 @@ bar.foo.com --| |-> bar.foo.com s2:80 下面这个ingress说明基于[Host header](https://tools.ietf.org/html/rfc7230#section-5.4)的后端loadbalancer的路由请求: -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: @@ -198,7 +198,7 @@ spec: 你可以通过指定包含TLS私钥和证书的[secret](https://kubernetes.io/docs/user-guide/secrets)来加密Ingress。 目前,Ingress仅支持单个TLS端口443,并假定TLS termination。 如果Ingress中的TLS配置部分指定了不同的主机,则它们将根据通过SNI TLS扩展指定的主机名(假如Ingress controller支持SNI)在多个相同端口上进行复用。 TLS secret中必须包含名为`tls.crt`和`tls.key`的密钥,这里面包含了用于TLS的证书和私钥,例如: -```Yaml +```yaml apiVersion: v1 data: tls.crt: base64 encoded cert diff --git a/concepts/label.md b/concepts/label.md index ea4088dc1..b554ca02b 100644 --- a/concepts/label.md +++ b/concepts/label.md @@ -66,7 +66,7 @@ selector: 在`Job`、`Deployment`、`ReplicaSet`和`DaemonSet`这些object中,支持*set-based*的过滤,例如: -```Yaml +```yaml selector: matchLabels: component: redis @@ -81,7 +81,7 @@ selector: 另外在node affinity和pod affinity中的label selector的语法又有些许不同,示例如下: -```Yaml +```yaml affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: diff --git a/concepts/network-policy.md b/concepts/network-policy.md index a3af0a338..61a012746 100644 --- a/concepts/network-policy.md +++ 
b/concepts/network-policy.md @@ -61,7 +61,7 @@ spec: 通过创建一个可以选择所有 Pod 但不允许任何流量的 NetworkPolicy,你可以为一个 Namespace 创建一个 “默认的” 隔离策略,如下所示: -```Yaml +```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: diff --git a/concepts/pod-security-policy.md b/concepts/pod-security-policy.md index 79bc65d82..d81b7c246 100644 --- a/concepts/pod-security-policy.md +++ b/concepts/pod-security-policy.md @@ -112,7 +112,7 @@ Pod 必须基于 PSP 验证每个字段。 下面是一个 Pod 安全策略的例子,所有字段的设置都被允许: -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: PodSecurityPolicy metadata: diff --git a/concepts/rbac.md b/concepts/rbac.md index 4d0a52410..c0d0fe18b 100644 --- a/concepts/rbac.md +++ b/concepts/rbac.md @@ -38,7 +38,7 @@ rules: 下面示例中的`ClusterRole`定义可用于授予用户对某一特定命名空间,或者所有命名空间中的secret(取决于其[绑定](https://k8smeetup.github.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding)方式)的读访问权限: -```Yaml +```yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: @@ -56,7 +56,7 @@ rules: `RoleBinding`可以引用在同一命名空间内定义的`Role`对象。 下面示例中定义的`RoleBinding`对象在”default”命名空间中将”pod-reader”角色授予用户”jane”。 这一授权将允许用户”jane”从”default”命名空间中读取pod。 -```Yaml +```yaml # 以下角色绑定定义将允许用户"jane"从"default"命名空间中读取pod。 kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 @@ -158,7 +158,7 @@ rules: 允许读取core API Group中定义的资源”pods”: -```Yaml +```yaml rules: - apiGroups: [""] resources: ["pods"] diff --git a/concepts/secret.md b/concepts/secret.md index d7e0ba75b..58ea8ef03 100644 --- a/concepts/secret.md +++ b/concepts/secret.md @@ -21,7 +21,7 @@ MWYyZDFlMmU2N2Rm secrets.yml -```Yaml +```yaml apiVersion: v1 kind: Secret metadata: @@ -41,7 +41,7 @@ data: ### 将Secret挂载到Volume中 -```Yaml +```yaml apiVersion: v1 kind: Pod metadata: diff --git a/concepts/serviceaccount.md b/concepts/serviceaccount.md index 03ea0ec1a..a62dbd9a1 100644 --- a/concepts/serviceaccount.md +++ b/concepts/serviceaccount.md @@ -29,7 +29,7 @@ automountServiceAccountToken: false 在 1.6 以上版本中,您也可以选择只取消单个 pod 的 API 凭证自动挂载: -```Yaml 
+```yaml apiVersion: v1 kind: Pod metadata: diff --git a/guide/authenticate-across-clusters-kubeconfig.md b/guide/authenticate-across-clusters-kubeconfig.md index 4c9344b0f..a8b16dae0 100644 --- a/guide/authenticate-across-clusters-kubeconfig.md +++ b/guide/authenticate-across-clusters-kubeconfig.md @@ -115,7 +115,7 @@ contexts: #### current-context -```Yaml +```yaml current-context: federal-context ``` diff --git a/guide/configure-liveness-readiness-probes.md b/guide/configure-liveness-readiness-probes.md index 3f02650df..4b372d2ff 100644 --- a/guide/configure-liveness-readiness-probes.md +++ b/guide/configure-liveness-readiness-probes.md @@ -14,7 +14,7 @@ Kubelet使用readiness probe(就绪探针)来确定容器是否已经就绪 在本次练习将基于 `gcr.io/google_containers/busybox`镜像创建运行一个容器的Pod。以下是Pod的配置文件`exec-liveness.yaml`: -```Yaml +```yaml apiVersion: v1 kind: Pod metadata: diff --git a/guide/configure-pod-service-account.md b/guide/configure-pod-service-account.md index 1d8654196..9d0e9326d 100644 --- a/guide/configure-pod-service-account.md +++ b/guide/configure-pod-service-account.md @@ -18,7 +18,7 @@ Service account 是否能够取得访问 API 的许可取决于您使用的 [授 在 1.6 以上版本中,您可以选择取消为 service account 自动挂载 API 凭证,只需在 service account 中设置 `automountServiceAccountToken: false`: -```Yaml +```yaml apiVersion: v1 kind: ServiceAccount metadata: diff --git a/guide/deploy-applications-in-kubernetes.md b/guide/deploy-applications-in-kubernetes.md index 913fa8b9a..c50887b81 100644 --- a/guide/deploy-applications-in-kubernetes.md +++ b/guide/deploy-applications-in-kubernetes.md @@ -40,7 +40,7 @@ API文档见[k8s-app-monitor-test](https://github.com/rootsongjc/k8s-app-monitor 服务启动后需要更新ingress配置,在[ingress.yaml](../manifests/traefik-ingress/ingress.yaml)文件中增加以下几行: -```Yaml +```yaml - host: k8s-app-monitor-agent.jimmysong.io http: paths: diff --git a/guide/rbac.md b/guide/rbac.md index 4d0a52410..c0d0fe18b 100644 --- a/guide/rbac.md +++ b/guide/rbac.md @@ -38,7 +38,7 @@ rules: 
下面示例中的`ClusterRole`定义可用于授予用户对某一特定命名空间,或者所有命名空间中的secret(取决于其[绑定](https://k8smeetup.github.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding)方式)的读访问权限: -```Yaml +```yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: @@ -56,7 +56,7 @@ rules: `RoleBinding`可以引用在同一命名空间内定义的`Role`对象。 下面示例中定义的`RoleBinding`对象在”default”命名空间中将”pod-reader”角色授予用户”jane”。 这一授权将允许用户”jane”从”default”命名空间中读取pod。 -```Yaml +```yaml # 以下角色绑定定义将允许用户"jane"从"default"命名空间中读取pod。 kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 @@ -158,7 +158,7 @@ rules: 允许读取core API Group中定义的资源”pods”: -```Yaml +```yaml rules: - apiGroups: [""] resources: ["pods"] diff --git a/guide/resource-quota-management.md b/guide/resource-quota-management.md index 4bc690158..98e70b93c 100644 --- a/guide/resource-quota-management.md +++ b/guide/resource-quota-management.md @@ -53,7 +53,7 @@ kubectl -n spark-cluster describe resourcequota compute-resources 配置文件:`spark-object-counts.yaml` -```Yaml +```yaml apiVersion: v1 kind: ResourceQuota metadata: diff --git a/guide/tls-bootstrapping.md b/guide/tls-bootstrapping.md index ec3c7ad73..24a329b3f 100644 --- a/guide/tls-bootstrapping.md +++ b/guide/tls-bootstrapping.md @@ -72,7 +72,7 @@ Kube-controller-manager 标志为: 以下 RBAC `ClusterRoles` 代表 `nodeClient`、`selfnodeclient` 和 `selfnodeserver` 功能。在以后的版本中可能会自动创建类似的角色。 -```Yaml +```yaml # A ClusterRole which instructs the CSR approver to approve a user requesting # node client credentials. 
kind: ClusterRole @@ -117,7 +117,7 @@ rules: 管理员将创建一个 `ClusterRoleBinding` 来定位该组。 -```Yaml +```yaml # Approve all CSRs for the group "kubelet-bootstrap-token" kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 diff --git a/practice/distributed-load-test.md b/practice/distributed-load-test.md index 87be79ad9..2bd511fb2 100644 --- a/practice/distributed-load-test.md +++ b/practice/distributed-load-test.md @@ -69,7 +69,7 @@ $ kubectl scale --replicas=20 replicationcontrollers locust-worker 参考[kubernetes的traefik ingress安装](http://rootsongjc.github.io/blogs/traefik-ingress-installation/),在`ingress.yaml`中加入如下配置: -```Yaml +```yaml - host: traefik.locust.io http: paths: diff --git a/practice/edge-node-configuration.md b/practice/edge-node-configuration.md index 221fbdac6..1820ceb41 100644 --- a/practice/edge-node-configuration.md +++ b/practice/edge-node-configuration.md @@ -160,7 +160,7 @@ $ ip addr show eth0 配置文件`traefik.yaml`内容如下: -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: diff --git a/practice/helm.md b/practice/helm.md index 3efffe6ff..6c134b79f 100644 --- a/practice/helm.md +++ b/practice/helm.md @@ -117,14 +117,14 @@ spec: path: / port: {{ .Values.service.internalPort }} resources: -{{ toYaml .Values.resources | indent 12 }} +{{ toYaml .Values.resources | indent 12 }} ``` 这是该应用的Deployment的yaml配置文件,其中的双大括号包扩起来的部分是Go template,其中的Values是在`values.yaml`文件中定义的: -```Yaml +```yaml # Default values for mychart. -# This is a YAML-formatted file. +# This is a YAML-formatted file. # Declare variables to be passed into your templates. replicaCount: 1 image: @@ -634,4 +634,4 @@ helm package . 
- [Go template](https://golang.org/pkg/text/template/) - [Helm docs](https://github.com/kubernetes/helm/blob/master/docs/index.md) - [How To Create Your First Helm Chart](https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/) -- [Speed deployment on Kubernetes with Helm Chart – Quick YAML example from scratch](https://www.ibm.com/blogs/bluemix/2017/10/quick-example-helm-chart-for-kubernetes/) \ No newline at end of file +- [Speed deployment on Kubernetes with Helm Chart – Quick YAML example from scratch](https://www.ibm.com/blogs/bluemix/2017/10/quick-example-helm-chart-for-kubernetes/) \ No newline at end of file diff --git a/practice/service-rolling-update.md b/practice/service-rolling-update.md index b2633d77d..c96afb2f1 100644 --- a/practice/service-rolling-update.md +++ b/practice/service-rolling-update.md @@ -116,7 +116,7 @@ make all 配置文件`rolling-update-test.yaml`: -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: @@ -159,7 +159,7 @@ kubectl create -f rolling-update-test.yaml 在`ingress.yaml`文件中增加新service的配置。 -```Yaml +```yaml - host: rolling-update-test.traefik.io http: paths: diff --git a/practice/traefik-ingress-installation.md b/practice/traefik-ingress-installation.md index eb37a75de..48cecfb04 100644 --- a/practice/traefik-ingress-installation.md +++ b/practice/traefik-ingress-installation.md @@ -24,7 +24,7 @@ Ingress Controller 实质上可以理解为是个监视器,Ingress Controller 将用于service account验证。 -```Yaml +```yaml apiVersion: v1 kind: ServiceAccount metadata: @@ -83,7 +83,7 @@ spec: 我们使用DaemonSet类型来部署Traefik,并使用`nodeSelector`来限定Traefik所部署的主机。 -```Yaml +```yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: @@ -141,7 +141,7 @@ kubectl label nodes 172.20.0.115 edgenode=true **Traefik UI** -使用下面的YAML配置来创建Traefik的Web UI。 +使用下面的YAML配置来创建Traefik的Web UI。 ```yaml apiVersion: v1 diff --git a/usecases/configuring-request-routing.md b/usecases/configuring-request-routing.md index a0ce21de7..e527c349a 100644 --- 
a/usecases/configuring-request-routing.md +++ b/usecases/configuring-request-routing.md @@ -24,7 +24,7 @@ istioctl get route-rules -o yaml ``` - ```Yaml + ```yaml type: route-rule name: details-default namespace: default @@ -85,7 +85,7 @@ istioctl get route-rule reviews-test-v2 ``` - ```Yaml + ```yaml destination: reviews.default.svc.cluster.local match: httpHeaders: diff --git a/usecases/istio-installation.md b/usecases/istio-installation.md index d4ec51d76..6997e5c4a 100644 --- a/usecases/istio-installation.md +++ b/usecases/istio-installation.md @@ -164,7 +164,7 @@ kubectl apply -f install/kubernetes/addons/zipkin.yaml 在traefik ingress中增加增加以上几个服务的配置,同时增加istio-ingress配置。 -```Yaml +```yaml - host: grafana.istio.io http: paths: