change highlight to prism
parent be8965e784
commit 4e537aeadd
@@ -60,7 +60,7 @@ gitbook pdf . ./kubernetes-handbook.pdf

Use `pandoc` and LaTeX to generate the PDF document.

-```shell
+```bash
pandoc --latex-engine=xelatex --template=pm-template input.md -o output.pdf
```
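A hedged wrapper around the same invocation: the sketch below only builds the command line quoted in the diff and executes it when `pandoc` is actually installed (`input.md`, `output.pdf`, and `pm-template` are the placeholders from the command above, not real files).

```bash
#!/bin/sh
# Build the pandoc command line exactly as documented above.
CMD="pandoc --latex-engine=xelatex --template=pm-template input.md -o output.pdf"
echo "$CMD"
# Only execute when pandoc is present, so the sketch is safe to run anywhere.
if command -v pandoc >/dev/null 2>&1; then
  $CMD || true
fi
```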
@@ -21,7 +21,8 @@
     "favicon@^0.0.2",
     "tbfed-pagefooter@^0.0.1",
     "3-ba",
-    "theme-default"
+    "theme-default",
+    "-highlight", "prism", "prism-themes"
   ],
   "pluginsConfig": {
     "theme-default": {

@@ -53,6 +54,11 @@
     },
     "3-ba": {
       "token": "11f7d254cfa4e0ca44b175c66d379ecc"
     },
+    "prism": {
+      "css": [
+        "prism-themes/themes/prism-ghcolors.css"
+      ]
+    }
   }
 }
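Taken together, the two hunks above leave a `book.json` shaped roughly like the following sketch (only the fields this diff touches are shown; everything else in the real file is omitted):

```json
{
  "plugins": [
    "favicon@^0.0.2",
    "tbfed-pagefooter@^0.0.1",
    "3-ba",
    "theme-default",
    "-highlight", "prism", "prism-themes"
  ],
  "pluginsConfig": {
    "3-ba": {
      "token": "11f7d254cfa4e0ca44b175c66d379ecc"
    },
    "prism": {
      "css": [
        "prism-themes/themes/prism-ghcolors.css"
      ]
    }
  }
}
```

The `-highlight` entry disables GitBook's built-in highlighter so the `prism` plugin can take over, with the ghcolors theme from `prism-themes` supplying the styling.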
@@ -90,7 +90,7 @@ Deployment provides declarative updates for Pods and ReplicaSets (the next-generation Replication Controller)

Download the example file and run the command:

-```shell
+```bash
$ kubectl create -f https://kubernetes.io/docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created
```
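The referenced `nginx-deployment.yaml` looks roughly like this sketch, reconstructed from the replica count, labels, and image versions that appear in the surrounding text (the exact upstream file may differ):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```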
@@ -99,7 +99,7 @@ deployment "nginx-deployment" created

Running `get` again right away gives the following result:

-```shell
+```bash
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s

@@ -109,7 +109,7 @@ nginx-deployment 3 0 0 0 1s

Running the `get` command again a few seconds later gives this output:

-```shell
+```bash
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s

@@ -117,7 +117,7 @@ nginx-deployment 3 3 3 3 18s

We can see that the Deployment has created 3 replicas, all of which are up to date (they contain the latest pod template) and available (ready for the minimum number of pods declared by `.spec.minReadySeconds` in the Deployment). Running `kubectl get rs` and `kubectl get pods` shows the ReplicaSet (RS) and Pods that were created.

-```shell
+```bash
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-2035384211   3         3         0       18s

@@ -125,7 +125,7 @@ nginx-deployment-2035384211 3 3 0 18s

You may notice that the name of the ReplicaSet is always `<name of the Deployment>-<hash of the pod template>`.

-```shell
+```bash
$ kubectl get pods --show-labels
NAME                                READY     STATUS    RESTARTS   AGE   LABELS
nginx-deployment-2035384211-7ci7o   1/1       Running   0          18s   app=nginx,pod-template-hash=2035384211

@@ -149,21 +149,21 @@ nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app

Suppose we now want the nginx pods to use the `nginx:1.9.1` image instead of the original `nginx:1.7.9` image.

-```shell
+```bash
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
```

We can use the `edit` command to edit the Deployment, changing `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`.

-```shell
+```bash
$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited
```

To see the rollout status, simply run:

-```shell
+```bash
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out

@@ -171,7 +171,7 @@ deployment "nginx-deployment" successfully rolled out

After the rollout succeeds, `get` the Deployment:

-```shell
+```bash
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s

@@ -183,7 +183,7 @@ The CURRENT replica count is the number of replicas the Deployment manages; AVAILABLE

Running `kubectl get rs` shows that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, while scaling the old ReplicaSet down to 0 replicas.

-```shell
+```bash
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   3         3         0       6s

@@ -192,7 +192,7 @@ nginx-deployment-2035384211 0 0 0 36s

Running `get pods` now shows only the new pods:

-```shell
+```bash
$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-1564180365-khku8   1/1       Running   0          14s

@@ -210,7 +210,7 @@ Deployment also ensures that only a limited number of Pods above the desired count are created

For example, if you look closely at the Deployment above, you will see that it first created a new Pod, then deleted some old Pods and created new ones. It does not kill old Pods until a sufficient number of new Pods have come up. This guarantees that at least 2 Pods are available and that at most 4 Pods in total exist.

-```shell
+```bash
$ kubectl describe deployments
Name:           nginx-deployment
Namespace:      default

@@ -270,14 +270,14 @@ Events:

Suppose we made a typo while updating the Deployment and wrote the image name as `nginx:1.91` instead of the correct `nginx:1.9.1`:

-```shell
+```bash
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.91
deployment "nginx-deployment" image updated
```

The rollout will get stuck.

-```shell
+```bash
$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
```

@@ -286,7 +286,7 @@ Waiting for rollout to finish: 2 out of 3 new replicas have been updated...

You will see that the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) is 2, and the number of new replicas (nginx-deployment-3066724191) is 2.

-```shell
+```bash
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1564180365   2         2         0       25s

@@ -296,7 +296,7 @@ nginx-deployment-3066724191 2 2 2 6s

Looking at the Pods created, you will see that the two Pods created by the new ReplicaSet are stuck in an ImagePullBackOff loop, endlessly trying to pull the image.

-```shell
+```bash
$ kubectl get pods
NAME                                READY     STATUS             RESTARTS   AGE
nginx-deployment-1564180365-70iae   1/1       Running            0          25s

@@ -307,7 +307,7 @@ nginx-deployment-3066724191-eocby 0/1 ImagePullBackOff 0 6s

Note that the Deployment controller automatically stops the bad rollout and stops scaling up the new ReplicaSet.

-```shell
+```bash
$ kubectl describe deployment
Name:           nginx-deployment
Namespace:      default

@@ -340,7 +340,7 @@ Events:

First, check the revisions of the Deployment:

-```shell
+```bash
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment":
REVISION    CHANGE-CAUSE

@@ -353,7 +353,7 @@ REVISION CHANGE-CAUSE

To see the details of a single revision:

-```shell
+```bash
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels:       app=nginx

@@ -374,14 +374,14 @@ deployments "nginx-deployment" revision 2

Now we can decide to roll back the current rollout to the previous version:

-```shell
+```bash
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
```

You can also use the `--to-revision` flag to roll back to a specific historical revision:

-```shell
+```bash
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
```

@@ -390,7 +390,7 @@ deployment "nginx-deployment" rolled back

The Deployment has now been rolled back to the previous stable version. As you can see, the Deployment controller produced a `DeploymentRollback` event for rolling back to revision 2.

-```shell
+```bash
$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           30m

@@ -431,14 +431,14 @@ Events:

You can scale the Deployment with the following command:

-```shell
+```bash
$ kubectl scale deployment nginx-deployment --replicas 10
deployment "nginx-deployment" scaled
```

Assuming [horizontal pod autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough) is enabled in your cluster, you can set up an autoscaler for the Deployment to choose the minimum and maximum number of Pods based on the current CPU utilization of the Pods.

-```shell
+```bash
$ kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
deployment "nginx-deployment" autoscaled
```

@@ -449,7 +449,7 @@ RollingUpdate Deployments support running multiple versions of an application at the same time.

For example, you are running a Deployment with 10 replicas, maxSurge=3 and maxUnavailable=2.

-```shell
+```bash
$ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   10        10        10           10          50s

@@ -457,14 +457,14 @@ nginx-deployment 10 10 10 10 50s

You update to an image that happens to be unresolvable from inside the cluster.

-```shell
+```bash
$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
deployment "nginx-deployment" image updated
```

The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191, but it is blocked because of the maxUnavailable requirement mentioned above.

-```shell
+```bash
$ kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-1989198191   5         5         0       9s

@@ -475,7 +475,7 @@ nginx-deployment-618515232 8 8 8 1m

In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new ReplicaSet. The rollout process will eventually move all replicas to the new ReplicaSet, provided the new replicas become healthy.

-```shell
+```bash
$ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   15        18        7            8           7m
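Against the original 10 replicas, the bounds in this example follow directly from maxSurge and maxUnavailable; a quick arithmetic sketch:

```bash
#!/bin/sh
# Capacity bounds for the example above: 10 replicas, maxSurge=3, maxUnavailable=2.
replicas=10
max_surge=3
max_unavailable=2
min_available=$((replicas - max_unavailable))   # lower bound on available pods
max_total=$((replicas + max_surge))             # upper bound on total pods
echo "at least $min_available available, at most $max_total total"
```

The lower bound of 8 matches the AVAILABLE column shown while the rollout is blocked.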
@@ -491,7 +491,7 @@ nginx-deployment-618515232 11 11 11 7m

For example, with the Deployment we just created:

-```shell
+```bash
$ kubectl get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     3         3         3            3           1m

@@ -502,21 +502,21 @@ nginx-2142116321 3 3 3 1m

Pause the Deployment with the following command:

-```shell
+```bash
$ kubectl rollout pause deployment/nginx-deployment
deployment "nginx-deployment" paused
```

Then update the image in the Deployment:

-```shell
+```bash
$ kubectl set image deploy/nginx nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
```

Notice that no new rollout started:

-```shell
+```bash
$ kubectl rollout history deploy/nginx
deployments "nginx"
REVISION    CHANGE-CAUSE

@@ -529,7 +529,7 @@ nginx-2142116321 3 3 3 2m

You can make as many updates as you want; for example, update the resources that will be used:

-```shell
+```bash
$ kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi
deployment "nginx" resource requirements updated
```

@@ -538,7 +538,7 @@ The initial state of the Deployment from before the pause continues to function, without having any effect on the Deployme

Finally, resume the Deployment and observe that a new ReplicaSet has been created with all the updates:

-```shell
+```bash
$ kubectl rollout resume deploy nginx
deployment "nginx" resumed
$ kubectl get rs -w
@@ -54,7 +54,7 @@ The purpose of PVC protection is to ensure that PVCs in active use by a pod are not removed from the system

You can see that the PVC is protected when its status is `Terminating` and the `Finalizers` list contains `kubernetes.io/pvc-protection`:

-```shell
+```bash
kubectl describe pvc hostpath
Name:          hostpath
Namespace:     default
@@ -201,7 +201,7 @@ Kubernetes supports two primary modes of service discovery: environment variables and DNS.

For example, a Service named `"redis-master"` that exposes TCP port 6379 and has been allocated the cluster IP address 10.0.0.11 produces the following environment variables:

-```shell
+```bash
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
@@ -65,7 +65,7 @@ Kubernetes supports the following types of volumes:

Before you can use an EBS volume in a pod, you need to create it first.

-```shell
+```bash
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
```

@@ -200,7 +200,7 @@ One feature of PDs is that they can be mounted read-only by multiple consumers simultaneously.

Before you can use a GCE PD in a pod, you need to create it first.

-```shell
+```bash
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```

@@ -625,13 +625,13 @@ spec:

First log in to ESX, then create a VMDK using the following command:

-```shell
+```bash
vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
```

Or create a VMDK with the following command:

-```shell
+```bash
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
```
@@ -56,7 +56,7 @@ openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj "/CN=jbeda/O=app1/O=app

The token file is a CSV file with at least three columns per line: token, user name, and user UID, followed by optional group names. Note that if you have more than one group, the column must be double-quoted.

-```conf
+```ini
token,user,uid,"group1,group2,group3"
```
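As an illustration of the quoting rule, a hypothetical token file with one single-group user and one multi-group user might look like this (tokens, names, and UIDs are invented):

```ini
02b50b05bb10e33fc4ba0,alice,1001,group1
4ae300cd0522d1eb2bc43,bob,1002,"group1,group2,group3"
```

Only the second line needs double quotes, because its group column contains commas.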
@@ -471,7 +471,7 @@ Impersonate-Extra-scopes: development

When using the `--as` flag of `kubectl` to configure the `Impersonate-User` header, you can use the `--as-group` flag to configure the `Impersonate-Group` header.

-```shell
+```bash
$ kubectl drain mynode
Error from server (Forbidden): User "clark" cannot get nodes at the cluster scope. (get nodes mynode)
@@ -42,7 +42,7 @@ spec:

When the container starts, it executes this command:

-```shell
+```bash
/bin/sh -c "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"
```
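The healthy/unhealthy window created by that command can be simulated locally; the sketch below shortens the 30-second and 600-second sleeps to 1 second each (and uses a stand-in file name) so it finishes quickly:

```bash
#!/bin/sh
# Simulate the liveness command above: the file exists (probe succeeds),
# then is removed (probe fails). Sleeps are shortened stand-ins.
f=/tmp/healthy-demo
touch "$f"
sleep 1
cat "$f" >/dev/null && echo "probe would succeed"
rm -rf "$f"
sleep 1
if cat "$f" >/dev/null 2>&1; then
  echo "still healthy"
else
  echo "probe would fail"
fi
```

The real probe runs `cat /tmp/healthy`; its exit status is what kubelet uses to decide whether to restart the container.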
@@ -50,7 +50,7 @@ spec:

Create the Pod:

-```shell
+```bash
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/exec-liveness.yaml
```

@@ -62,7 +62,7 @@ kubectl describe pod liveness-exec

The output shows that no liveness probe has failed yet:

-```shell
+```bash
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
24s 24s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0

@@ -74,13 +74,13 @@ FirstSeen LastSeen Count From SubobjectPath Type

35 seconds after startup, view the pod events again:

-```shell
+```bash
kubectl describe pod liveness-exec
```

At the bottom of the output there is a message showing that the liveness probe failed, and the container was killed and recreated.

-```shell
+```bash
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
37s 37s 1 {default-scheduler } Normal Scheduled Successfully assigned liveness-exec to worker0

@@ -93,13 +93,13 @@ FirstSeen LastSeen Count From SubobjectPath Type

Wait another 30 seconds and confirm that the container has restarted:

-```shell
+```bash
kubectl get pod liveness-exec
```

The output shows that the `RESTARTS` value has been incremented by 1.

-```shell
+```bash
NAME            READY     STATUS    RESTARTS   AGE
liveness-exec   1/1       Running   1          1m
```

@@ -157,7 +157,7 @@ http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {

Create a Pod to try out the HTTP liveness check:

-```shell
+```bash
kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/http-liveness.yaml
```

@@ -166,7 +166,7 @@ the Container has been restarted:

After 10 seconds, view the Pod events to verify that the liveness probe failed and the container was restarted.

-```shell
+```bash
kubectl describe pod liveness-http
```
@@ -104,7 +104,7 @@ $ kubectl delete serviceaccount/build-robot

Suppose we already have a service account named "build-robot" as mentioned above, and we create a new secret manually.

-```shell
+```bash
$ cat > /tmp/build-robot-secret.yaml <<EOF
apiVersion: v1
kind: Secret
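The `busybox.yaml` used in the DNS checks that follow is, in upstream examples, roughly the following sketch (a minimal pod that sleeps so you can `exec` into it; treat the exact fields as an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"
```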
@@ -201,25 +201,25 @@ spec:

Create a Pod from that file and verify its status:

-```shell
+```bash
$ kubectl create -f busybox.yaml
pod "busybox" created

$ kubectl get pods busybox
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>
```

Once that Pod is running, you can exec `nslookup` in its environment. If you see something like the following, DNS is working correctly.

-```shell
+```bash
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1
```

If the `nslookup` command fails, check the following:

@@ -227,49 +227,49 @@ Address 1: 10.0.0.1

Take a look inside the resolv.conf file. (See [Inheriting DNS from the node](inheriting-dns-from-the-node) and the [known issues](#known-issues) below for more information.)

-```shell
+```bash
$ kubectl exec busybox cat /etc/resolv.conf
```

Verify that the search path and name server are set up like the following (note that search paths may vary for different cloud providers):

```
search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal
nameserver 10.0.0.10
options ndots:5
```

Errors such as the following indicate a problem with kube-dns or associated services:

```
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10

nslookup: can't resolve 'kubernetes.default'
```

or

```
$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
```

### Check if the DNS pod is running

Use the `kubectl get pods` command to verify that the DNS pod is running.

-```shell
+```bash
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                 READY     STATUS    RESTARTS   AGE
...
kube-dns-v19-ezo1y   3/3       Running   0          1h
...
```

If you see that no Pod is running, or that the Pod is in a failed/completed state, the DNS add-on may not be deployed in your current environment and you will have to deploy it manually.

@@ -277,11 +277,11 @@ kube-dns-v19-ezo1y 3/3 Running 0 1h

Use the `kubectl logs` command to see logs for the DNS daemons.

-```shell
+```bash
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar
```

See if there is any suspicious log. Messages beginning with the letters "`W`", "`E`", or "`F`" are warnings, errors, and failures. Search for entries with these log levels and use [kubernetes issues](https://github.com/kubernetes/kubernetes/issues) to report unexpected errors.

@@ -289,13 +289,13 @@ $ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-syste

Verify that the DNS service is up by using the `kubectl get service` command.

-```shell
+```bash
$ kubectl get svc --namespace=kube-system
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
...
kube-dns   10.0.0.10    <none>        53/UDP,53/TCP   1h
...
```

If you have created the service, or if it should be created by default but does not appear, see [debugging services](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/) for more information.

@@ -303,11 +303,11 @@ kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h

You can verify that DNS endpoints are exposed by using the `kubectl get endpoints` command.

-```shell
+```bash
$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                       AGE
kube-dns   10.180.3.17:53,10.180.3.17:53   1h
```

If you do not see the endpoints, see the endpoints section in the [debugging services](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/) documentation.

@@ -330,4 +330,5 @@ Kubernetes 1.3 introduced cluster federation support for multi-site Kubernetes installations

- [Configure DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/)
- [DNS for Services and Pods](/docs/concepts/services-networking/dns-pod-service/)
- [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
- [Using CoreDNS for Service Discovery](https://kubernetes.io/docs/tasks/administer-cluster/coredns/)
@@ -43,7 +43,7 @@ For how LVS works, see: http://www.cnblogs.com/codebean/archive/2011/07/25/2116043.html

Because our test cluster has only three nodes, keepalived and ipvsadm have to be installed on all three nodes.

-```Shell
+```bash
yum install keepalived ipvsadm
```
@@ -4,7 +4,7 @@

It is recommended to install flanneld directly with yum unless you have special version requirements; the default installed version is flannel 0.7.1.

-```shell
+```bash
yum install -y flannel
```

@@ -57,7 +57,7 @@ FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kub

Run the following command to allocate an IP address segment for docker.

-```shell
+```bash
etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \

@@ -76,7 +76,7 @@ etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://

**Start flannel**

-```shell
+```bash
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
@@ -38,7 +38,7 @@ CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 CPU on a single-

Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer with one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:

-```shell
+```bash
128974848, 129e6, 129M, 123Mi
```
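The rough equivalence of those quantities can be checked with shell arithmetic (`M` is a power of ten, `Mi` a power of two):

```bash
#!/bin/sh
# Verify the memory-suffix equivalences quoted above.
mi=$((123 * 1024 * 1024))   # 123Mi in bytes
m=$((129 * 1000 * 1000))    # 129M (and 129e6) in bytes
echo "123Mi = $mi bytes, 129M = $m bytes"
```

123Mi is exactly 128974848 bytes, which is why it appears alongside 129e6 and 129M as "roughly the same value".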
@@ -107,7 +107,7 @@ Resource usage of a Pod is reported as part of the Pod status.

If the scheduler cannot find any node where the Pod can fit, the Pod remains unscheduled until a place can be found. An event is produced each time the scheduler fails to find a place for the Pod, like this:

-```shell
+```bash
$ kubectl describe pod frontend | grep -A 3 Events
Events:
  FirstSeen LastSeen Count From Subobject PathReason Message

@@ -116,7 +116,7 @@ Events:

In the example above, the Pod named "frontend" fails to be scheduled due to insufficient CPU resources on the nodes. Similar error messages can also suggest failure due to insufficient memory (PodExceedsFreeMemory). In general, if a Pod is pending with a message of this type, there are several things to try:

-```Shell
+```bash
$ kubectl describe nodes e2e-test-minion-group-4lw4
Name:            e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]

@@ -150,7 +150,7 @@ Allocated resources:

Your container might be terminated because it is resource-starved. To check whether a container is being killed because it is hitting a resource limit, call `kubectl describe pod` on the Pod in question:

-```shell
+```bash
[12:54:41] $ kubectl describe pod simmemleak-hra99
Name:        simmemleak-hra99
Namespace:   default

@@ -192,7 +192,7 @@ Events:

You can use `kubectl get pod` with the `-o go-template=...` option to fetch the status of previously terminated containers.

-```Shell
+```bash
[13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]

@@ -233,7 +233,7 @@ Host: k8s-master:8080

]
```

-```shell{% raw %}
+```bash{% raw %}
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-foo", "value": "5"}]' \
|
@ -45,7 +45,7 @@ cd kubernetes
|
|||
|
||||
`server` 的 tarball `kubernetes-server-linux-amd64.tar.gz` 已经包含了 `client`(`kubectl`) 二进制文件,所以不用单独下载`kubernetes-client-linux-amd64.tar.gz`文件;
|
||||
|
||||
``` shell
|
||||
```bash
|
||||
# wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
|
||||
wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
|
||||
tar -xzvf kubernetes-server-linux-amd64.tar.gz
|
||||
|
|
|
@@ -40,7 +40,7 @@ Welcome to the "Distributed Load Testing Using Kubernetes" sample web app

**Test command**

-```shell
+```bash
curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' "http://10.254.149.31:8000/"
```
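The three space-separated numbers that `-w` emits can be picked apart in a script; the sketch below uses a captured sample string in place of a live request (the values are invented for illustration):

```bash
#!/bin/sh
# Parse the output of:
#   curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' <url>
sample="0.004 0.012 0.013"
# Split on whitespace into the positional parameters.
set -- $sample
echo "time_connect=$1 time_starttransfer=$2 time_total=$3"
```

In a real test loop you would replace `sample=...` with `sample=$(curl -o /dev/null -s -w '...' "$url")`.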
@@ -75,7 +75,7 @@ time_total: the total time taken to complete the request

**Test command**

-```Shell
+```bash
curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' "http://sample-webapp:8000/"
```

@@ -100,7 +100,7 @@ curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' "h

**Test command**

-```Shell
+```bash
curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' "http://traefik.sample-webapp.io" >>result
```

@@ -139,13 +139,13 @@ curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}' "h

Server-side command:

-```shell
+```bash
iperf -s -p 12345 -i 1 -M
```

Client-side command:

-```shell
+```bash
iperf -c ${server-ip} -p 12345 -i 1 -t 10 -w 20K
```
|
@ -67,7 +67,7 @@ func main() {
|
|||
|
||||
**创建Dockerfile**
|
||||
|
||||
```Dockerfile
|
||||
```dockerfile
|
||||
FROM alpine:3.5
|
||||
MAINTAINER Jimmy Song<rootsongjc@gmail.com>
|
||||
ADD hellov2 /
|
||||
|
@ -82,7 +82,7 @@ ENTRYPOINT ["/hellov2"]
|
|||
|
||||
修改`Makefile`中的`TAG`为新的版本号。
|
||||
|
||||
```cmake
|
||||
```makefile
|
||||
all: build push clean
|
||||
.PHONY: build push clean
|
||||
|
||||
|
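The Makefile being edited presumably carries the version in a `TAG` variable along these lines (a sketch: `all`, `build`, `push`, `clean`, and `.PHONY` come from the diff above, while `TAG`, the image name, and the recipe bodies are assumptions for illustration):

```makefile
TAG = v2.0
IMAGE = hello:$(TAG)

all: build push clean
.PHONY: build push clean

build:
	docker build -t $(IMAGE) .

push:
	docker push $(IMAGE)

clean:
	docker rmi $(IMAGE)
```

With this layout, bumping the release only requires editing the single `TAG` line before running `make all`.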
@@ -104,7 +104,7 @@ clean:

**Build**

-```Shell
+```bash
make all
```

@@ -151,7 +151,7 @@ spec:

**Deploy the service**

-```shell
+```bash
kubectl create -f rolling-update-test.yaml
```

@@ -187,7 +187,7 @@ This is version 1.

Simply change the `image` in the `rolling-update-test.yaml` file to the new image name, then run:

-```shell
+```bash
kubectl apply -f rolling-update-test.yaml
```
|
@ -14,15 +14,14 @@
|
|||
|
||||
1. 将微服务的默认版本设置成v1。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl create -f samples/apps/bookinfo/route-rule-all-v1.yaml
|
||||
```
|
||||
|
||||
使用以下命令查看定义的路由规则。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl get route-rules -o yaml
|
||||
|
||||
```
|
||||
|
||||
```Yaml
|
||||
|
@ -76,17 +75,17 @@
|
|||
|
||||
为测试用户jason启用评分服务,将productpage的流量路由到`reviews:v2`实例上。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl create -f samples/apps/bookinfo/route-rule-reviews-test-v2.yaml
|
||||
```
|
||||
|
||||
确认规则生效:
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl get route-rule reviews-test-v2
|
||||
```
|
||||
|
||||
```Yams
|
||||
```Yaml
|
||||
destination: reviews.default.svc.cluster.local
|
||||
match:
|
||||
httpHeaders:
|
||||
|
@ -112,7 +111,7 @@
|
|||
|
||||
1. 将50%的流量从`reviews:v1`转移到`reviews:v3`上。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl replace -f samples/apps/bookinfo/route-rule-reviews-50-v3.yaml
|
||||
```
|
||||
|
||||
|
@ -122,7 +121,7 @@
|
|||
|
||||
删除测试规则。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl delete route-rule reviews-test-v2
|
||||
istioctl delete route-rule ratings-test-delay
|
||||
```
|
||||
|
@ -133,7 +132,7 @@
|
|||
|
||||
3. 当v3版本的微服务稳定以后,就可以将100%的流量分摊到`reviews:v3`上了。
|
||||
|
||||
```
|
||||
```bash
|
||||
istioctl replace -f samples/apps/bookinfo/route-rule-reviews-v3.yaml
|
||||
```
|
||||
|
||||
|
|
|
@@ -18,7 +18,7 @@

Download the latest Linux release package:

-```Shell
+```bash
wget https://github.com/istio/istio/releases/download/0.1.5/istio-0.1.5-linux.tar.gz
```
|
@ -148,7 +148,7 @@ $ kubectl create -f world-v2.yml
|
|||
|
||||
在本地`/etc/hosts`中添加如下内容:
|
||||
|
||||
```i
|
||||
```
|
||||
172.20.0.119 linkerd.jimmysong.io
|
||||
172.20.0.119 linkerd-viz.jimmysong.io
|
||||
172.20.0.119 l5d.jimmysong.io
|
||||
|
|