<!DOCTYPE HTML>
< html lang = "zh-hans" >
< head >
< meta charset = "UTF-8" >
< meta content = "text/html; charset=utf-8" http-equiv = "Content-Type" >
<title>4.1.6 Deploying the Kubernetes Node · Kubernetes Handbook</title>
< meta http-equiv = "X-UA-Compatible" content = "IE=edge" / >
< meta name = "description" content = "" >
< meta name = "generator" content = "GitBook 3.2.2" >
< meta name = "author" content = "Jimmy Song" >
< link rel = "stylesheet" href = "../gitbook/style.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-splitter/splitter.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-page-toc-button/plugin.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-image-captions/image-captions.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-page-footer-ex/style/plugin.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-back-to-top-button/plugin.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-search-plus/search.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-highlight/website.css" >
< link rel = "stylesheet" href = "../gitbook/gitbook-plugin-fontsettings/website.css" >
< meta name = "HandheldFriendly" content = "true" / >
< meta name = "viewport" content = "width=device-width, initial-scale=1, user-scalable=no" >
< meta name = "apple-mobile-web-app-capable" content = "yes" >
< meta name = "apple-mobile-web-app-status-bar-style" content = "black" >
< link rel = "apple-touch-icon-precomposed" sizes = "152x152" href = "../gitbook/images/apple-touch-icon-precomposed-152.png" >
< link rel = "shortcut icon" href = "../gitbook/images/favicon.ico" type = "image/x-icon" >
< link rel = "next" href = "kubedns-addon-installation.html" / >
< link rel = "prev" href = "master-installation.html" / >
< / head >
< body >
< div class = "book" >
< div class = "book-summary" >
< div id = "book-search-input" role = "search" >
< input type = "text" placeholder = "输入并搜索" / >
< / div >
< nav role = "navigation" >
< ul class = "summary" >
< li class = "chapter " data-level = "1.1" data-path = "../" >
< a href = "../" >
1. 前言
< / a >
< / li >
< li class = "chapter " data-level = "1.2" data-path = "../concepts/" >
< a href = "../concepts/" >
2. 概念原理
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.2.1" data-path = "../concepts/concepts.html" >
< a href = "../concepts/concepts.html" >
2.1 设计理念
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2" data-path = "../concepts/objects.html" >
< a href = "../concepts/objects.html" >
2.2 Objects
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.2.2.1" data-path = "../concepts/pod-overview.html" >
< a href = "../concepts/pod-overview.html" >
2.2.1 Pod
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.2.2.1.1" data-path = "../concepts/pod.html" >
< a href = "../concepts/pod.html" >
2.2.1.1 Pod解析
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.1.2" data-path = "../concepts/init-containers.html" >
< a href = "../concepts/init-containers.html" >
2.2.1.2 Init容器
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.1.3" data-path = "../concepts/pod-security-policy.html" >
< a href = "../concepts/pod-security-policy.html" >
2.2.1.3 Pod安全策略
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.1.4" data-path = "../concepts/pod-lifecycle.html" >
< a href = "../concepts/pod-lifecycle.html" >
2.2.1.4 Pod的生命周期
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.2.2.2" data-path = "../concepts/node.html" >
< a href = "../concepts/node.html" >
2.2.2 Node
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.3" data-path = "../concepts/namespace.html" >
< a href = "../concepts/namespace.html" >
2.2.3 Namespace
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.4" data-path = "../concepts/service.html" >
< a href = "../concepts/service.html" >
2.2.4 Service
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.5" data-path = "../concepts/volume.html" >
< a href = "../concepts/volume.html" >
2.2.5 Volume和Persistent Volume
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.6" data-path = "../concepts/deployment.html" >
< a href = "../concepts/deployment.html" >
2.2.6 Deployment
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.7" data-path = "../concepts/secret.html" >
< a href = "../concepts/secret.html" >
2.2.7 Secret
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.8" data-path = "../concepts/statefulset.html" >
< a href = "../concepts/statefulset.html" >
2.2.8 StatefulSet
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.9" data-path = "../concepts/daemonset.html" >
< a href = "../concepts/daemonset.html" >
2.2.9 DaemonSet
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.10" data-path = "../concepts/serviceaccount.html" >
< a href = "../concepts/serviceaccount.html" >
2.2.10 ServiceAccount
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.11" data-path = "../concepts/replicaset.html" >
< a href = "../concepts/replicaset.html" >
2.2.11 ReplicationController和ReplicaSet
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.12" data-path = "../concepts/job.html" >
< a href = "../concepts/job.html" >
2.2.12 Job
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.13" data-path = "../concepts/cronjob.html" >
< a href = "../concepts/cronjob.html" >
2.2.13 CronJob
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.14" data-path = "../concepts/ingress.html" >
< a href = "../concepts/ingress.html" >
2.2.14 Ingress
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.15" data-path = "../concepts/configmap.html" >
< a href = "../concepts/configmap.html" >
2.2.15 ConfigMap
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.16" data-path = "../concepts/horizontal-pod-autoscaling.html" >
< a href = "../concepts/horizontal-pod-autoscaling.html" >
2.2.16 Horizontal Pod Autoscaling
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.17" data-path = "../concepts/label.html" >
< a href = "../concepts/label.html" >
2.2.17 Label
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.18" data-path = "../concepts/garbage-collection.html" >
< a href = "../concepts/garbage-collection.html" >
2.2.18 垃圾收集
< / a >
< / li >
< li class = "chapter " data-level = "1.2.2.19" data-path = "../concepts/network-policy.html" >
< a href = "../concepts/network-policy.html" >
2.2.19 NetworkPolicy
< / a >
< / li >
< / ul >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.3" data-path = "../guide/" >
< a href = "../guide/" >
3. 用户指南
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.1" data-path = "../guide/resource-configuration.html" >
< a href = "../guide/resource-configuration.html" >
3.1 资源配置
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.1.1" data-path = "../guide/configure-liveness-readiness-probes.html" >
< a href = "../guide/configure-liveness-readiness-probes.html" >
3.1.1 配置Pod的liveness和readiness探针
< / a >
< / li >
< li class = "chapter " data-level = "1.3.1.2" data-path = "../guide/configure-pod-service-account.html" >
< a href = "../guide/configure-pod-service-account.html" >
3.1.2 配置Pod的Service Account
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.3.2" data-path = "../guide/command-usage.html" >
< a href = "../guide/command-usage.html" >
3.2 命令使用
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.2.1" data-path = "../guide/using-kubectl.html" >
< a href = "../guide/using-kubectl.html" >
3.2.1 使用kubectl
< / a >
< / li >
< li class = "chapter " data-level = "1.3.2.2" data-path = "../guide/docker-cli-to-kubectl.html" >
< a href = "../guide/docker-cli-to-kubectl.html" >
3.2.2 docker用户过渡到kubectl命令行指南
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.3.3" data-path = "../guide/cluster-security-management.html" >
< a href = "../guide/cluster-security-management.html" >
3.3 集群安全性管理
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.3.1" data-path = "../guide/managing-tls-in-a-cluster.html" >
< a href = "../guide/managing-tls-in-a-cluster.html" >
3.3.1 管理集群中的TLS
< / a >
< / li >
< li class = "chapter " data-level = "1.3.3.2" data-path = "../guide/kubelet-authentication-authorization.html" >
< a href = "../guide/kubelet-authentication-authorization.html" >
3.3.2 kubelet的认证授权
< / a >
< / li >
< li class = "chapter " data-level = "1.3.3.3" data-path = "../guide/tls-bootstrapping.html" >
< a href = "../guide/tls-bootstrapping.html" >
3.3.3 TLS bootstrap
< / a >
< / li >
< li class = "chapter " data-level = "1.3.3.4" data-path = "../guide/kubectl-user-authentication-authorization.html" >
< a href = "../guide/kubectl-user-authentication-authorization.html" >
3.3.4 kubectl的用户认证授权
< / a >
< / li >
< li class = "chapter " data-level = "1.3.3.5" data-path = "../guide/rbac.html" >
< a href = "../guide/rbac.html" >
3.3.5 RBAC——基于角色的访问控制
< / a >
< / li >
< li class = "chapter " data-level = "1.3.3.6" data-path = "../guide/ip-masq-agent.html" >
< a href = "../guide/ip-masq-agent.html" >
3.3.6 IP伪装代理
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.3.4" data-path = "../guide/access-kubernetes-cluster.html" >
< a href = "../guide/access-kubernetes-cluster.html" >
3.4 访问 Kubernetes 集群
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.4.1" data-path = "../guide/access-cluster.html" >
< a href = "../guide/access-cluster.html" >
3.4.1 访问集群
< / a >
< / li >
< li class = "chapter " data-level = "1.3.4.2" data-path = "../guide/authenticate-across-clusters-kubeconfig.html" >
< a href = "../guide/authenticate-across-clusters-kubeconfig.html" >
3.4.2 使用 kubeconfig 文件配置跨集群认证
< / a >
< / li >
< li class = "chapter " data-level = "1.3.4.3" data-path = "../guide/connecting-to-applications-port-forward.html" >
< a href = "../guide/connecting-to-applications-port-forward.html" >
3.4.3 通过端口转发访问集群中的应用程序
< / a >
< / li >
< li class = "chapter " data-level = "1.3.4.4" data-path = "../guide/service-access-application-cluster.html" >
< a href = "../guide/service-access-application-cluster.html" >
3.4.4 使用 service 访问集群中的应用程序
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.3.5" data-path = "../guide/application-development-deployment-flow.html" >
< a href = "../guide/application-development-deployment-flow.html" >
3.5 在kubernetes中开发部署应用
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.3.5.1" data-path = "../guide/deploy-applications-in-kubernetes.html" >
< a href = "../guide/deploy-applications-in-kubernetes.html" >
3.5.1 适用于kubernetes的应用开发部署流程
< / a >
< / li >
< li class = "chapter " data-level = "1.3.5.2" data-path = "../guide/migrating-hadoop-yarn-to-kubernetes.html" >
< a href = "../guide/migrating-hadoop-yarn-to-kubernetes.html" >
3.5.2 迁移传统应用到kubernetes中——以Hadoop YARN为例
< / a >
< / li >
< / ul >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.4" data-path = "./" >
< a href = "./" >
4. 最佳实践
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.1" data-path = "install-kbernetes1.6-on-centos.html" >
< a href = "install-kbernetes1.6-on-centos.html" >
4.1 在CentOS上部署kubernetes1.6集群
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.1.1" data-path = "create-tls-and-secret-key.html" >
< a href = "create-tls-and-secret-key.html" >
4.1.1 创建TLS证书和秘钥
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.2" data-path = "create-kubeconfig.html" >
< a href = "create-kubeconfig.html" >
4.1.2 创建kubeconfig文件
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.3" data-path = "etcd-cluster-installation.html" >
< a href = "etcd-cluster-installation.html" >
4.1.3 创建高可用etcd集群
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.4" data-path = "kubectl-installation.html" >
< a href = "kubectl-installation.html" >
4.1.4 安装kubectl命令行工具
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.5" data-path = "master-installation.html" >
< a href = "master-installation.html" >
4.1.5 部署master节点
< / a >
< / li >
< li class = "chapter active" data-level = "1.4.1.6" data-path = "node-installation.html" >
< a href = "node-installation.html" >
4.1.6 部署node节点
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.7" data-path = "kubedns-addon-installation.html" >
< a href = "kubedns-addon-installation.html" >
4.1.7 安装kubedns插件
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.8" data-path = "dashboard-addon-installation.html" >
< a href = "dashboard-addon-installation.html" >
4.1.8 安装dashboard插件
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.9" data-path = "heapster-addon-installation.html" >
< a href = "heapster-addon-installation.html" >
4.1.9 安装heapster插件
< / a >
< / li >
< li class = "chapter " data-level = "1.4.1.10" data-path = "efk-addon-installation.html" >
< a href = "efk-addon-installation.html" >
4.1.10 安装EFK插件
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.4.2" data-path = "service-discovery-and-loadbalancing.html" >
< a href = "service-discovery-and-loadbalancing.html" >
4.2 服务发现与负载均衡
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.2.1" data-path = "traefik-ingress-installation.html" >
< a href = "traefik-ingress-installation.html" >
4.2.1 安装Traefik ingress
< / a >
< / li >
< li class = "chapter " data-level = "1.4.2.2" data-path = "distributed-load-test.html" >
< a href = "distributed-load-test.html" >
4.2.2 分布式负载测试
< / a >
< / li >
< li class = "chapter " data-level = "1.4.2.3" data-path = "network-and-cluster-perfermance-test.html" >
< a href = "network-and-cluster-perfermance-test.html" >
4.2.3 网络和集群性能测试
< / a >
< / li >
< li class = "chapter " data-level = "1.4.2.4" data-path = "edge-node-configuration.html" >
< a href = "edge-node-configuration.html" >
4.2.4 边缘节点配置
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.4.3" data-path = "operation.html" >
< a href = "operation.html" >
4.3 运维管理
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.3.1" data-path = "service-rolling-update.html" >
< a href = "service-rolling-update.html" >
4.3.1 服务滚动升级
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.2" data-path = "app-log-collection.html" >
< a href = "app-log-collection.html" >
4.3.2 应用日志收集
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.3" data-path = "configuration-best-practice.html" >
< a href = "configuration-best-practice.html" >
4.3.3 配置最佳实践
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.4" data-path = "monitor.html" >
< a href = "monitor.html" >
4.3.4 集群及应用监控
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.5" data-path = "jenkins-ci-cd.html" >
< a href = "jenkins-ci-cd.html" >
4.3.5 使用Jenkins进行持续构建与发布
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.6" data-path = "data-persistence-problem.html" >
< a href = "data-persistence-problem.html" >
4.3.6 数据持久化问题
< / a >
< / li >
< li class = "chapter " data-level = "1.4.3.7" data-path = "manage-compute-resources-container.html" >
< a href = "manage-compute-resources-container.html" >
4.3.7 管理容器的计算资源
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.4.4" data-path = "storage.html" >
< a href = "storage.html" >
4.4 存储管理
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.4.1" data-path = "glusterfs.html" >
< a href = "glusterfs.html" >
4.4.1 GlusterFS
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.4.1.1" data-path = "using-glusterfs-for-persistent-storage.html" >
< a href = "using-glusterfs-for-persistent-storage.html" >
4.4.1.1 使用GlusterFS做持久化存储
< / a >
< / li >
< li class = "chapter " data-level = "1.4.4.1.2" data-path = "storage-for-containers-using-glusterfs-with-openshift.html" >
< a href = "storage-for-containers-using-glusterfs-with-openshift.html" >
4.4.1.2 在OpenShift中使用GlusterFS做持久化存储
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.4.4.2" data-path = "cephfs.html" >
< a href = "cephfs.html" >
4.4.2 CephFS
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.4.4.2.1" data-path = "using-ceph-for-persistent-storage.html" >
< a href = "using-ceph-for-persistent-storage.html" >
4.4.2.1 使用Ceph做持久化存储
< / a >
< / li >
< / ul >
< / li >
< / ul >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.5" data-path = "../usecases/" >
< a href = "../usecases/" >
5. 领域应用
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.1" data-path = "../usecases/microservices.html" >
< a href = "../usecases/microservices.html" >
5.1 微服务架构
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.1.1" data-path = "../usecases/service-discovery-in-microservices.html" >
< a href = "../usecases/service-discovery-in-microservices.html" >
5.1.1 微服务中的服务发现
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.5.2" data-path = "../usecases/service-mesh.html" >
< a href = "../usecases/service-mesh.html" >
5.2 Service Mesh 服务网格
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.2.1" data-path = "../usecases/istio.html" >
< a href = "../usecases/istio.html" >
5.2.1 Istio
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.2.1.1" data-path = "../usecases/istio-installation.html" >
< a href = "../usecases/istio-installation.html" >
5.2.1.1 安装istio
< / a >
< / li >
< li class = "chapter " data-level = "1.5.2.1.2" data-path = "../usecases/configuring-request-routing.html" >
< a href = "../usecases/configuring-request-routing.html" >
5.2.1.2 配置请求的路由规则
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.5.2.2" data-path = "../usecases/linkerd.html" >
< a href = "../usecases/linkerd.html" >
5.2.2 Linkerd
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.2.2.1" data-path = "../usecases/linkerd-user-guide.html" >
< a href = "../usecases/linkerd-user-guide.html" >
5.2.2.1 Linkerd 使用指南
< / a >
< / li >
< / ul >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.5.3" data-path = "../usecases/big-data.html" >
< a href = "../usecases/big-data.html" >
5.3 大数据
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.5.3.1" data-path = "../usecases/spark-standalone-on-kubernetes.html" >
< a href = "../usecases/spark-standalone-on-kubernetes.html" >
5.3.1 Spark standalone on Kubernetes
< / a >
< / li >
< li class = "chapter " data-level = "1.5.3.2" data-path = "../usecases/running-spark-with-kubernetes-native-scheduler.html" >
< a href = "../usecases/running-spark-with-kubernetes-native-scheduler.html" >
5.3.2 运行支持kubernetes原生调度的Spark程序
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.5.4" data-path = "../usecases/serverless.html" >
< a href = "../usecases/serverless.html" >
5.4 Serverless架构
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.6" data-path = "../develop/" >
< a href = "../develop/" >
6. 开发指南
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.6.1" data-path = "../develop/developing-environment.html" >
< a href = "../develop/developing-environment.html" >
6.1 开发环境搭建
< / a >
< / li >
< li class = "chapter " data-level = "1.6.2" data-path = "../develop/testing.html" >
< a href = "../develop/testing.html" >
6.2 单元测试和集成测试
< / a >
< / li >
< li class = "chapter " data-level = "1.6.3" data-path = "../develop/client-go-sample.html" >
< a href = "../develop/client-go-sample.html" >
6.3 client-go示例
< / a >
< / li >
< li class = "chapter " data-level = "1.6.4" data-path = "../develop/contribute.html" >
< a href = "../develop/contribute.html" >
6.4 社区贡献
< / a >
< / li >
< / ul >
< / li >
< li class = "chapter " data-level = "1.7" data-path = "../appendix/" >
< a href = "../appendix/" >
7. 附录
< / a >
< ul class = "articles" >
< li class = "chapter " data-level = "1.7.1" data-path = "../appendix/docker-best-practice.html" >
< a href = "../appendix/docker-best-practice.html" >
7.1 Docker最佳实践
< / a >
< / li >
< li class = "chapter " data-level = "1.7.2" data-path = "../appendix/issues.html" >
< a href = "../appendix/issues.html" >
7.2 问题记录
< / a >
< / li >
< li class = "chapter " data-level = "1.7.3" data-path = "../appendix/tricks.html" >
< a href = "../appendix/tricks.html" >
7.3 使用技巧
< / a >
< / li >
< / ul >
< / li >
< li class = "divider" > < / li >
< li >
< a href = "https://www.gitbook.com" target = "blank" class = "gitbook-link" >
本书使用 GitBook 发布
< / a >
< / li >
< / ul >
< / nav >
< / div >
< div class = "book-body" >
< div class = "body-inner" >
< div class = "book-header" role = "navigation" >
<!-- Title -->
< h1 >
< i class = "fa fa-circle-o-notch fa-spin" > < / i >
< a href = ".." > 4.1.6 部署node节点< / a >
< / h1 >
< / div >
< div class = "page-wrapper" tabindex = "-1" role = "main" >
< div class = "page-inner" >
< div class = "search-plus" id = "book-search-results" >
< div class = "search-noresults" >
< section class = "normal markdown-section" >
< h1 id = "部署node节点" > 部 署 node节 点 < / h1 >
< p > kubernetes node 节 点 包 含 如 下 组 件 : < / p >
< ul >
< li > Flanneld: 参 考 我 之 前 写 的 文 章 < a href = "http://rootsongjc.github.io/blogs/kubernetes-network-config/" target = "_blank" > Kubernetes基 于 Flannel的 网 络 配 置 < / a > , 之 前 没 有 配 置 TLS, 现 在 需 要 在 serivce配 置 文 件 中 增 加 TLS配 置 。 < / li >
< li > Docker1.12.5: docker的 安 装 很 简 单 , 这 里 也 不 说 了 。 < / li >
< li > kubelet< / li >
< li > kube-proxy< / li >
< / ul >
< p > 下 面 着 重 讲 < code > kubelet< / code > 和 < code > kube-proxy< / code > 的 安 装 , 同 时 还 要 将 之 前 安 装 的 flannel集 成 TLS验 证 。 < / p >
< p > < strong > 注 意 < / strong > : 每 台 node 上 都 需 要 安 装 flannel, master 节 点 上 可 以 不 必 安 装 。 < / p >
< h2 id = "目录和文件" > 目 录 和 文 件 < / h2 >
< p > 我 们 再 检 查 一 下 三 个 节 点 上 , 经 过 前 几 步 操 作 生 成 的 配 置 文 件 。 < / p >
< pre > < code class = "lang-bash" > $ ls /etc/kubernetes/ssl
admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem kubernetes-key.pem kubernetes.pem
$ ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kubelet kube-proxy.kubeconfig proxy scheduler ssl token.csv
< / code > < / pre >
< h2 id = "配置flanneld" > 配 置 Flanneld< / h2 >
< p > 参 考 我 之 前 写 的 文 章 < a href = "http://rootsongjc.github.io/blogs/kubernetes-network-config/" target = "_blank" > Kubernetes基 于 Flannel的 网 络 配 置 < / a > , 之 前 没 有 配 置 TLS, 现 在 需 要 在 serivce配 置 文 件 中 增 加 TLS配 置 。 < / p >
< p > 直 接 使 用 yum安 装 flanneld即 可 。 < / p >
< pre > < code class = "lang-shell" > yum install -y flannel
< / code > < / pre >
< p > service配 置 文 件 < code > /usr/lib/systemd/system/flanneld.service< / code > 。 < / p >
< pre > < code class = "lang-ini" > [Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
-etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-prefix=${ETCD_PREFIX} \
$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
< / code > < / pre >
<p>The <code>/etc/sysconfig/flanneld</code> configuration file:</p>
<pre><code class="lang-ini"># Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
ETCD_ENDPOINTS="https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
</code></pre>
<p>The TLS settings are added via FLANNEL_OPTIONS.</p>
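<p>To see how these variables end up on the flanneld command line, here is a minimal sketch; the composed command mirrors what flanneld-start does and is for illustration only:</p>
<pre><code class="lang-shell"># Sample values as they would appear in /etc/sysconfig/flanneld
ETCD_ENDPOINTS="https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379"
ETCD_PREFIX="/kube-centos/network"
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem"
# flanneld-start expands them into the daemon invocation:
echo "flanneld -etcd-endpoints=${ETCD_ENDPOINTS} -etcd-prefix=${ETCD_PREFIX} ${FLANNEL_OPTIONS}"
</code></pre>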
<p><strong>Creating the network configuration in etcd</strong></p>
<p>Run the following commands to allocate the IP address range for docker:</p>
<pre><code class="lang-shell">etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
etcdctl --endpoints=https://172.20.0.113:2379,https://172.20.0.114:2379,https://172.20.0.115:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
</code></pre>
<p>If you want to use <code>host-gw</code> mode instead, simply replace vxlan with <code>host-gw</code> in the configuration above.</p>
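<p>For reference, the equivalent network config value for host-gw mode would look like this (a config fragment only, using the same address range as above):</p>
<pre><code class="lang-json">{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}
</code></pre>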
<p><strong>Configuring Docker</strong></p>
<p>Flannel's <a href="https://github.com/coreos/flannel/blob/master/Documentation/running.md" target="_blank">documentation</a> describes the <strong>Docker Integration</strong>:</p>
<p>Docker daemon accepts <code>--bip</code> argument to configure the subnet of the docker0 bridge. It also accepts <code>--mtu</code> to set the MTU for docker0 and veth devices that it will be creating. Since flannel writes out the acquired subnet and MTU values into a file, the script starting Docker can source in the values and pass them to Docker daemon:</p>
<pre><code>source /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &amp;
</code></pre><p>Systemd users can use <code>EnvironmentFile</code> directive in the .service file to pull in <code>/run/flannel/subnet.env</code>.</p>
<p>If you did not install flanneld via yum, download the tar package from the flannel GitHub releases; unpacking it yields a <strong>mk-docker-opts.sh</strong> script.</p>
<p>This script is used to <code>Generate Docker daemon options based on flannel env file</code>.</p>
<p>Running <code>./mk-docker-opts.sh -i</code> generates the following two environment variable files:</p>
<p>/run/flannel/subnet.env</p>
<pre><code>FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.46.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
</code></pre><p>/run/docker_opts.env</p>
<pre><code>DOCKER_OPT_BIP="--bip=172.30.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
</code></pre><p><strong>Setting the docker0 bridge IP address</strong></p>
<pre><code class="lang-shell">source /run/flannel/subnet.env
ifconfig docker0 $FLANNEL_SUBNET
</code></pre>
<p>This puts docker0 and the flannel bridge on the same subnet, for example:</p>
<pre><code>6: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:da:bf:83:a2 brd ff:ff:ff:ff:ff:ff
    inet 172.30.38.1/24 brd 172.30.38.255 scope global docker0
       valid_lft forever preferred_lft forever
7: flannel.1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 9a:29:46:61:03:44 brd ff:ff:ff:ff:ff:ff
    inet 172.30.38.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
</code></pre><p>Also add the following environment variable settings to docker's configuration file <a href="../systemd/docker.service">docker.service</a>:</p>
< pre > < code class = "lang-ini" > < span class = "hljs-attr" > EnvironmentFile< / span > =-/run/flannel/docker
< span class = "hljs-attr" > EnvironmentFile< / span > =-/run/docker_opts.env
< span class = "hljs-attr" > EnvironmentFile< / span > =-/run/flannel/subnet.env
< / code > < / pre >
<p>This prevents docker from failing to load these environment variables when it restarts automatically after a host reboot.</p>
<p><strong>Starting docker</strong></p>
<p>After restarting docker you also need to restart kubelet. At this point another problem may appear: kubelet fails to start with this error:</p>
<pre><code>Mar 31 16:44:41 sz-pg-oam-docker-test-002.tendcloud.com kubelet[81047]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
</code></pre><p>This is caused by a mismatch between the <strong>cgroup driver</strong> settings of kubelet and docker. kubelet accepts a <code>--cgroup-driver</code> flag, which can be set to "cgroupfs" or "systemd":</p>
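<p>You can check which driver the local docker daemon uses from the output of <code>docker info</code> and pass the same value to kubelet. A minimal sketch (it parses a sample line so it runs anywhere; on a real node pipe <code>docker info</code> instead):</p>
<pre><code class="lang-shell"># On a real node: docker info | awk -F': ' '/Cgroup Driver/ {print $2}'
docker_info_line='Cgroup Driver: systemd'   # sample output line
driver=$(echo "$docker_info_line" | awk -F': ' '{print $2}')
echo "start kubelet with --cgroup-driver=${driver}"
</code></pre>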
< pre > < code > --cgroup-driver string Driver that the kubelet uses to manipulate cgroups on the host. Possible values: ' cgroupfs' , ' systemd' (default " cgroupfs" )
< / code > < / pre > < p > < strong > 启 动 flannel< / strong > < / p >
< pre > < code class = "lang-shell" > systemctl daemon-reload
systemctl start flanneld
systemctl status flanneld
< / code > < / pre >
< p > 现 在 查 询 etcd中 的 内 容 可 以 看 到 : < / p >
< pre > < code class = "lang-bash" > < span class = "hljs-variable" > $etcdctl< / span > --endpoints=< span class = "hljs-variable" > ${ETCD_ENDPOINTS}< / span > \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets
/kube-centos/network/subnets/172.30.14.0-24
/kube-centos/network/subnets/172.30.38.0-24
/kube-centos/network/subnets/172.30.46.0-24
< span class = "hljs-variable" > $etcdctl< / span > --endpoints=< span class = "hljs-variable" > ${ETCD_ENDPOINTS}< / span > \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kube-centos/network/config
{ < span class = "hljs-string" > " Network" < / span > : < span class = "hljs-string" > " 172.30.0.0/16" < / span > , < span class = "hljs-string" > " SubnetLen" < / span > : 24, < span class = "hljs-string" > " Backend" < / span > : { < span class = "hljs-string" > " Type" < / span > : < span class = "hljs-string" > " vxlan" < / span > } }
< span class = "hljs-variable" > $etcdctl< / span > get /kube-centos/network/subnets/172.30.14.0-24
{< span class = "hljs-string" > " PublicIP" < / span > :< span class = "hljs-string" > " 172.20.0.114" < / span > ,< span class = "hljs-string" > " BackendType" < / span > :< span class = "hljs-string" > " vxlan" < / span > ,< span class = "hljs-string" > " BackendData" < / span > :{< span class = "hljs-string" > " VtepMAC" < / span > :< span class = "hljs-string" > " 56:27:7d:1c:08:22" < / span > }}
< span class = "hljs-variable" > $etcdctl< / span > get /kube-centos/network/subnets/172.30.38.0-24
{< span class = "hljs-string" > " PublicIP" < / span > :< span class = "hljs-string" > " 172.20.0.115" < / span > ,< span class = "hljs-string" > " BackendType" < / span > :< span class = "hljs-string" > " vxlan" < / span > ,< span class = "hljs-string" > " BackendData" < / span > :{< span class = "hljs-string" > " VtepMAC" < / span > :< span class = "hljs-string" > " 12:82:83:59:cf:b8" < / span > }}
< span class = "hljs-variable" > $etcdctl< / span > get /kube-centos/network/subnets/172.30.46.0-24
{< span class = "hljs-string" > " PublicIP" < / span > :< span class = "hljs-string" > " 172.20.0.113" < / span > ,< span class = "hljs-string" > " BackendType" < / span > :< span class = "hljs-string" > " vxlan" < / span > ,< span class = "hljs-string" > " BackendData" < / span > :{< span class = "hljs-string" > " VtepMAC" < / span > :< span class = "hljs-string" > " e6:b2:fd:f6:66:96" < / span > }}
< / code > < / pre >
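<p>Note that flannel stores each node's lease under a key whose last path segment encodes the subnet with <code>-</code> in place of <code>/</code>. Converting a sample key from the listing above back to CIDR form:</p>

```shell
# One of the subnet keys from the etcd listing above
key='/kube-centos/network/subnets/172.30.14.0-24'

# Take the last path segment and restore the '/' that flannel replaced with '-'
subnet=$(basename "$key" | sed 's|-|/|')
echo "$subnet"
```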
< h2 id = "安装和配置-kubelet" > 安 装 和 配 置 kubelet< / h2 >
< p > kubelet 启 动 时 向 kube-apiserver 发 送 TLS bootstrapping 请 求 , 需 要 先 将 bootstrap token 文 件 中 的 kubelet-bootstrap 用 户 赋 予 system:node-bootstrapper cluster 角 色 (role),
然 后 kubelet 才 能 有 权 限 创 建 认 证 请 求 (certificate signing requests): < / p >
< pre > < code class = "lang-bash" > < span class = "hljs-built_in" > cd< / span > /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
< / code > < / pre >
< ul >
<li><code>--user=kubelet-bootstrap</code> is the user name specified in the <code>/etc/kubernetes/token.csv</code> file, and it is also written into the <code>/etc/kubernetes/bootstrap.kubeconfig</code> file;</li>
< / ul >
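<p>As a quick sanity check, the user name passed via <code>--user</code> has to equal the second field of <code>token.csv</code>, whose line format is <code>token,user,uid,"group"</code>. The sketch below demonstrates this on a sample line (the token value is a placeholder; on a node read <code>/etc/kubernetes/token.csv</code> itself):</p>

```shell
# Sample token.csv line (format: token,user,uid,"group"); the token here is a
# placeholder, not a real credential.
line='41f7e4ba8b7be874fcff18bf5cf41a7c,kubelet-bootstrap,10001,"system:kubelet-bootstrap"'

# The second comma-separated field is the user name kubelet bootstraps as
user=$(echo "$line" | cut -d, -f2)
echo "$user"
```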
< h3 id = "下载最新的-kubelet-和-kube-proxy-二进制文件" > 下 载 最 新 的 kubelet 和 kube-proxy 二 进 制 文 件 < / h3 >
< pre > < code class = "lang-bash" > wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
< span class = "hljs-built_in" > cd< / span > kubernetes
tar -xzvf kubernetes-src.tar.gz
cp -r ./server/bin/{kube-proxy,kubelet} /usr/< span class = "hljs-built_in" > local< / span > /bin/
< / code > < / pre >
< h3 id = "创建-kubelet-的service配置文件" > 创 建 kubelet 的 service配 置 文 件 < / h3 >
< p > 文 件 位 置 < code > /usr/lib/systemd/system/kubelet.service< / code > 。 < / p >
< pre > < code class = "lang-ini" > [Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
< / code > < / pre >
<p>kubelet configuration file: <code>/etc/kubernetes/kubelet</code>. Change the IP addresses in it to the IP address of each of your node hosts.</p>
<p>Note: <code>/var/lib/kubelet</code> must be created manually.</p>
< pre > < code class = "lang-bash" > < span class = "hljs-comment" > ###< / span >
< span class = "hljs-comment" > ## kubernetes kubelet (minion) config< / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## The address for the info server to serve on (set to 0.0.0.0 or " " for all interfaces)< / span >
KUBELET_ADDRESS=< span class = "hljs-string" > " --address=172.20.0.113" < / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## The port for the info server to serve on< / span >
< span class = "hljs-comment" > #KUBELET_PORT=" --port=10250" < / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## You may leave this blank to use the actual hostname< / span >
KUBELET_HOSTNAME=< span class = "hljs-string" > " --hostname-override=172.20.0.113" < / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## location of the api-server< / span >
KUBELET_API_SERVER=< span class = "hljs-string" > " --api-servers=http://172.20.0.113:8080" < / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## pod infrastructure container< / span >
KUBELET_POD_INFRA_CONTAINER=< span class = "hljs-string" > " --pod-infra-container-image=sz-pg-oam-docker-hub-001.tendcloud.com/library/pod-infrastructure:rhel7" < / span >
< span class = "hljs-comment" > #< / span >
< span class = "hljs-comment" > ## Add your own!< / span >
KUBELET_ARGS=< span class = "hljs-string" > " --cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --require-kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false" < / span >
< / code > < / pre >
<ul>
<li><code>--address</code> must not be set to <code>127.0.0.1</code>, otherwise Pods will fail when they later call kubelet's API, because <code>127.0.0.1</code> inside a Pod refers to the Pod itself, not to kubelet;</li>
<li>if <code>--hostname-override</code> is set, the same option must also be set for <code>kube-proxy</code>, otherwise the Node will not be found;</li>
<li><code>--cgroup-driver</code> must be set to <code>systemd</code>, not <code>cgroupfs</code>, otherwise kubelet will fail to start on CentOS;</li>
<li><code>--experimental-bootstrap-kubeconfig</code> points to the bootstrap kubeconfig file; kubelet uses the user name and token in that file to send the TLS bootstrapping request to kube-apiserver;</li>
<li>after the administrator approves the CSR, kubelet automatically creates the certificate and private key files (<code>kubelet-client.crt</code> and <code>kubelet-client.key</code>) in the <code>--cert-dir</code> directory and then writes them into the <code>--kubeconfig</code> file;</li>
<li>it is recommended to specify the <code>kube-apiserver</code> address in the <code>--kubeconfig</code> file; if the <code>--api-servers</code> option is not given, <code>--require-kubeconfig</code> must be set before the kube-apiserver address is read from the kubeconfig file, otherwise kubelet will not find kube-apiserver after start-up (the log reports that no API server was found) and <code>kubectl get nodes</code> will not return the corresponding Node;</li>
<li><code>--cluster-dns</code> specifies the Service IP of kubedns (it can be allocated in advance and assigned later when the kubedns service is created), and <code>--cluster-domain</code> specifies the domain suffix; both parameters must be set for either to take effect;</li>
<li><code>--cluster-domain</code> determines the <code>search domain</code> in a Pod's <code>/etc/resolv.conf</code>. We initially set it to <code>cluster.local.</code>, with which Service DNS names resolved correctly but the FQDN pod names of a headless service did not; changing it to <code>cluster.local</code> (dropping the trailing dot) fixed the problem. For more on domain name and service name resolution in Kubernetes, see my other article.</li>
<li>the <code>kubelet.kubeconfig</code> file given by <code>--kubeconfig=/etc/kubernetes/kubelet.kubeconfig</code> does not exist before kubelet starts for the first time; as described below, it is generated automatically once the CSR is approved. If <code>~/.kube/config</code> already exists on your node, you can copy it to that path and rename it to <code>kubelet.kubeconfig</code>; all nodes can share the same kubelet.kubeconfig file, so newly added nodes join the cluster automatically without issuing a CSR. Likewise, on any host that can reach the cluster, <code>kubectl --kubeconfig</code> commands pass authentication with just the <code>~/.kube/config</code> file, because it already carries credentials identifying you as the admin user with full permissions on the cluster.</li>
<li><code>KUBELET_POD_INFRA_CONTAINER</code> is the pod infrastructure image; here I use my private registry address, so <strong>replace it with your own image when you deploy</strong>. I have uploaded a copy to Tenxcloud, which you can pull directly with <code>docker pull index.tenxcloud.com/jimmy/pod-infrastructure</code>.</li>
</ul>
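<p>Given the cgroup-driver failure described earlier, it is worth checking that the driver kubelet is started with matches docker's before restarting kubelet. A minimal sketch, using sample strings in place of live output (on a real node set <code>docker_info=$(docker info 2&gt;/dev/null)</code> and read <code>KUBELET_ARGS</code> from <code>/etc/kubernetes/kubelet</code>):</p>

```shell
# Sample line as printed by `docker info` (placeholder for live output)
docker_info='Cgroup Driver: systemd'
docker_driver=$(printf '%s\n' "$docker_info" | awk -F': ' '/Cgroup Driver/{print $2}')

# Sample KUBELET_ARGS value from /etc/kubernetes/kubelet
kubelet_args='--cgroup-driver=systemd --cluster-dns=10.254.0.2 --cluster-domain=cluster.local'
kubelet_driver=$(printf '%s\n' "$kubelet_args" | grep -o 'cgroup-driver=[a-z]*' | cut -d= -f2)

# kubelet refuses to start when these two disagree
[ "$docker_driver" = "$kubelet_driver" ] && echo match || echo MISMATCH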
<p>The complete unit is in <a href="../systemd/kubelet.service">kubelet.service</a></p>
<h3 id="启动kublet">Start kubelet</h3>
< pre > < code class = "lang-bash" > systemctl daemon-reload
systemctl < span class = "hljs-built_in" > enable< / span > kubelet
systemctl start kubelet
systemctl status kubelet
< / code > < / pre >
< h3 id = "通过-kublet-的-tls-证书请求" > 通 过 kublet 的 TLS 证 书 请 求 < / h3 >
< p > kubelet 首 次 启 动 时 向 kube-apiserver 发 送 证 书 签 名 请 求 , 必 须 通 过 后 kubernetes 系 统 才 会 将 该 Node 加 入 到 集 群 。 < / p >
< p > 查 看 未 授 权 的 CSR 请 求 < / p >
< pre > < code class = "lang-bash" > $ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-2b308 4m kubelet-bootstrap Pending
$ kubectl get nodes
No resources found.
< / code > < / pre >
<p>Approve the CSR request:</p>
< pre > < code class = "lang-bash" > $ kubectl certificate approve csr-2b308
certificatesigningrequest < span class = "hljs-string" > " csr-2b308" < / span > approved
$ kubectl get nodes
NAME STATUS AGE VERSION
10.64.3.7 Ready 49m v1.6.1
< / code > < / pre >
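<p>When several nodes bootstrap at once, the Pending requests can be picked out and approved in one pass, e.g. <code>kubectl get csr | awk 'NR&gt;1 &amp;&amp; $4=="Pending"{print $1}' | xargs kubectl certificate approve</code>. The filter step is demonstrated below on sample <code>kubectl get csr</code> output (matching the listing above), since it is just text processing:</p>

```shell
# Sample `kubectl get csr` output; one request still Pending, one already done
csr_list='NAME        AGE       REQUESTOR           CONDITION
csr-2b308   4m        kubelet-bootstrap   Pending
csr-9xk2p   1m        kubelet-bootstrap   Approved,Issued'

# Skip the header row and keep only names whose CONDITION is Pending
pending=$(printf '%s\n' "$csr_list" | awk 'NR>1 && $4=="Pending"{print $1}')
echo "$pending"
```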
<p>The kubelet kubeconfig file and the key pair were generated automatically:</p>
< pre > < code class = "lang-bash" > $ ls < span class = "hljs-_" > -l< / span > /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2284 Apr 7 02:07 /etc/kubernetes/kubelet.kubeconfig
$ ls < span class = "hljs-_" > -l< / span > /etc/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1046 Apr 7 02:07 /etc/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root 227 Apr 7 02:04 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 1103 Apr 7 02:07 /etc/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1675 Apr 7 02:07 /etc/kubernetes/ssl/kubelet.key
< / code > < / pre >
<p>Note: if you renew the Kubernetes certificates but leave <code>token.csv</code> unchanged, the node rejoins the cluster automatically once kubelet is restarted, without sending a new <code>certificaterequest</code> and without running <code>kubectl certificate approve</code> on the master node. This only holds as long as <code>/etc/kubernetes/ssl/kubelet*</code> and <code>/etc/kubernetes/kubelet.kubeconfig</code> on the node are not deleted; otherwise kubelet will fail to start because it cannot find its certificates.</p>
< h2 id = "配置-kube-proxy" > 配 置 kube-proxy< / h2 >
< p > < strong > 创 建 kube-proxy 的 service配 置 文 件 < / strong > < / p >
< p > 文 件 路 径 < code > /usr/lib/systemd/system/kube-proxy.service< / code > 。 < / p >
< pre > < code class = "lang-ini" > [Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
< / code > < / pre >
<p>kube-proxy configuration file: <code>/etc/kubernetes/proxy</code>.</p>
< pre > < code class = "lang-bash" > < span class = "hljs-comment" > ###< / span >
< span class = "hljs-comment" > # kubernetes proxy config< / span >
< span class = "hljs-comment" > # default config should be adequate< / span >
< span class = "hljs-comment" > # Add your own!< / span >
KUBE_PROXY_ARGS=< span class = "hljs-string" > " --bind-address=172.20.0.113 --hostname-override=172.20.0.113 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16" < / span >
< / code > < / pre >
<ul>
<li>the value of <code>--hostname-override</code> must match kubelet's, otherwise kube-proxy will not find the Node after it starts and therefore will not create any iptables rules;</li>
<li>kube-proxy distinguishes internal from external cluster traffic based on <code>--cluster-cidr</code>; only when <code>--cluster-cidr</code> or <code>--masquerade-all</code> is specified does kube-proxy apply SNAT to requests for Service IPs;</li>
<li>the file given by <code>--kubeconfig</code> embeds the kube-apiserver address, user name, certificate, and key used for requests and authentication;</li>
<li>a predefined binding associates User <code>system:kube-proxy</code> with Role <code>system:node-proxier</code>, which grants permission to call the proxy-related APIs of <code>kube-apiserver</code>;</li>
</ul>
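<p>Since a mismatched <code>--hostname-override</code> silently leaves kube-proxy without iptables rules, a quick consistency check is worthwhile. A sketch using sample config lines (on a node, grep <code>/etc/kubernetes/kubelet</code> and <code>/etc/kubernetes/proxy</code> instead):</p>

```shell
# Sample lines as they appear in the two config files above
kubelet_cfg='KUBELET_HOSTNAME="--hostname-override=172.20.0.113"'
proxy_cfg='KUBE_PROXY_ARGS="--bind-address=172.20.0.113 --hostname-override=172.20.0.113 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"'

# Extract the hostname-override value from each
k=$(printf '%s' "$kubelet_cfg" | grep -o 'hostname-override=[0-9.]*' | cut -d= -f2)
p=$(printf '%s' "$proxy_cfg" | grep -o 'hostname-override=[0-9.]*' | cut -d= -f2)

# kube-proxy only finds its Node when the two values agree
[ "$k" = "$p" ] && echo consistent || echo MISMATCH
```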
<p>The complete unit is in <a href="../systemd/kube-proxy.service">kube-proxy.service</a></p>
<h3 id="启动-kube-proxy">Start kube-proxy</h3>
< pre > < code class = "lang-bash" > systemctl daemon-reload
systemctl < span class = "hljs-built_in" > enable< / span > kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
< / code > < / pre >
< h2 id = "验证测试" > 验 证 测 试 < / h2 >
< p > 我 们 创 建 一 个 niginx的 service试 一 下 集 群 是 否 可 用 。 < / p >
< pre > < code class = "lang-bash" > $ kubectl run nginx --replicas=2 --labels=< span class = "hljs-string" > " run=load-balancer-example" < / span > --image=sz-pg-oam-docker-hub-001.tendcloud.com/library/nginx:1.9 --port=80
deployment < span class = "hljs-string" > " nginx" < / span > created
$ kubectl expose deployment nginx --type=NodePort --name=example-service
service < span class = "hljs-string" > " example-service" < / span > exposed
$ kubectl describe svc example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: < none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.254.62.207
Port: < < span class = "hljs-built_in" > unset< / span > > 80/TCP
NodePort: < < span class = "hljs-built_in" > unset< / span > > 32724/TCP
Endpoints: 172.30.60.2:80,172.30.94.2:80
Session Affinity: None
Events: < none>
$ curl < span class = "hljs-string" > " 10.254.62.207:80" < / span >
< !DOCTYPE html>
< html>
< head>
< title> Welcome to nginx!< /title>
< style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
< /style>
< /head>
< body>
< h1> Welcome to nginx!< /h1>
< p> If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.< /p>
< p> For online documentation and support please refer to
< a href=< span class = "hljs-string" > " http://nginx.org/" < / span > > nginx.org< /a> .< br/>
Commercial support is available at
< a href=< span class = "hljs-string" > " http://nginx.com/" < / span > > nginx.com< /a> .< /p>
< p> < em> Thank you < span class = "hljs-keyword" > for< / span > using nginx.< /em> < /p>
< /body>
< /html>
< / code > < / pre >
<p>Tip: the nginx image used in the test above, <code>sz-pg-oam-docker-hub-001.tendcloud.com/library/nginx:1.9</code>, comes from my private registry; substitute your own nginx image address when you run the test.</p>
<p>The nginx page is reachable at <code>172.20.0.113:32724</code>, <code>172.20.0.114:32724</code>, or <code>172.20.0.115:32724</code>.</p>
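<p>The NodePort shown above (32724) is allocated from kube-apiserver's <code>--service-node-port-range</code>, which defaults to 30000-32767; that is why the same port answers on every node. A trivial range check:</p>

```shell
# NodePort assigned to example-service in the output above
port=32724

# Kubernetes allocates NodePorts from --service-node-port-range
# (default 30000-32767)
in_range=$([ "$port" -ge 30000 ] && [ "$port" -le 32767 ] && echo yes || echo no)
echo "$in_range"
```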
<figure id="fig1.4.1.6.1"><img src="http://olz1di9xf.bkt.clouddn.com/kubernetes-installation-test-nginx.png" alt="welcome-nginx"><figcaption>Figure: welcome-nginx</figcaption></figure>
<h2 id="参考">References</h2>
< p > < a href = "../guide/kubelet-authentication-authorization.html" > Kubelet 的 认 证 授 权 < / a > < / p >
2017-09-21 10:48:33 +08:00
< footer class = "page-footer-ex" > < span class = "page-footer-ex-copyright" > © All Rights Reserved < / span >                     < span class = "page-footer-ex-footer-update" > updated 2017-09-21 10:47:11 < / span > < / footer >
< / section >
< / div >
< div class = "search-results" >
< div class = "has-results" >
< h1 class = "search-results-title" > < span class = 'search-results-count' > < / span > results matching "< span class = 'search-query' > < / span > "< / h1 >
< ul class = "search-results-list" > < / ul >
< / div >
< div class = "no-results" >
< h1 class = "search-results-title" > No results matching "< span class = 'search-query' > < / span > "< / h1 >
< / div >
< / div >
< / div >
< / div >
< / div >
< / div >
< a href = "master-installation.html" class = "navigation navigation-prev " aria-label = "Previous page: 4.1.5 部署master节点" >
< i class = "fa fa-angle-left" > < / i >
< / a >
< a href = "kubedns-addon-installation.html" class = "navigation navigation-next " aria-label = "Next page: 4.1.7 安装kubedns插件" >
< i class = "fa fa-angle-right" > < / i >
< / a >
< / div >
< script >
var gitbook = gitbook || [];
gitbook.push(function() {
gitbook.page.hasChanged({"page":{"title":"4.1.6 部署node节点","level":"1.4.1.6","depth":3,"next":{"title":"4.1.7 安装kubedns插件","level":"1.4.1.7","depth":3,"path":"practice/kubedns-addon-installation.md","ref":"practice/kubedns-addon-installation.md","articles":[]},"previous":{"title":"4.1.5 部署master节点","level":"1.4.1.5","depth":3,"path":"practice/master-installation.md","ref":"practice/master-installation.md","articles":[]},"dir":"ltr"},"config":{"plugins":["github","codesnippet","splitter","page-toc-button","image-captions","page-footer-ex","editlink","back-to-top-button","-lunr","-search","search-plus"],"styles":{"website":"styles/website.css","pdf":"styles/pdf.css","epub":"styles/epub.css","mobi":"styles/mobi.css","ebook":"styles/ebook.css","print":"styles/print.css"},"pluginsConfig":{"github":{"url":"https://github.com/rootsongjc/kubernetes-handbook"},"editlink":{"label":"编辑本页","multilingual":false,"base":"https://github.com/rootsongjc/kubernetes-handbook/blob/master/"},"page-footer-ex":{"copyright":"© All Rights Reserved","markdown":false,"update_format":"YYYY-MM-DD HH:mm:ss","update_label":"updated"},"splitter":{},"codesnippet":{},"fontsettings":{"theme":"white","family":"sans","size":2},"highlight":{},"page-toc-button":{},"back-to-top-button":{},"sharing":{"facebook":true,"twitter":true,"google":false,"weibo":false,"instapaper":false,"vk":false,"all":["facebook","google","twitter","weibo","instapaper"]},"theme-default":{"styles":{"website":"styles/website.css","pdf":"styles/pdf.css","epub":"styles/epub.css","mobi":"styles/mobi.css","ebook":"styles/ebook.css","print":"styles/print.css"},"showLevel":false},"search-plus":{},"image-captions":{"variable_name":"_pictures"}},"page-footer-ex":{"copyright":"Jimmy Song","update_label":"最后更新于:","update_format":"YYYY-MM-DD HH:mm:ss"},"theme":"default","author":"Jimmy 
Song","pdf":{"pageNumbers":true,"fontSize":12,"fontFamily":"Arial","paperSize":"a4","chapterMark":"pagebreak","pageBreaksBefore":"/","margin":{"right":62,"left":62,"top":56,"bottom":56}},"structure":{"langs":"LANGS.md","readme":"README.md","glossary":"GLOSSARY.md","summary":"SUMMARY.md"},"variables":{"_pictures":[{"backlink":"concepts/index.html#fig1.2.1","level":"1.2","list_caption":"Figure: Borg架构","alt":"Borg架构","nro":1,"url":"../images/borg.png","index":1,"caption_template":"Figure: _CAPTION_","label":"Borg架构","attributes":{},"skip":false,"key":"1.2.1"},{"backlink":"concepts/index.html#fig1.2.2","level":"1.2","list_caption":"Figure: Kubernetes架构","alt":"Kubernetes架构","nro":2,"url":"../images/architecture.png","index":2,"caption_template":"Figure: _CAPTION_","label":"Kubernetes架构","attributes":{},"skip":false,"key":"1.2.2"},{"backlink":"concepts/index.html#fig1.2.3","level":"1.2","list_caption":"Figure: kubernetes整体架构示意图","alt":"kubernetes整体架构示意图","nro":3,"url":"../images/kubernetes-whole-arch.png","index":3,"caption_template":"Figure: _CAPTION_","label":"kubernetes整体架构示意图","attributes":{},"skip":false,"key":"1.2.3"},{"backlink":"concepts/index.html#fig1.2.4","level":"1.2","list_caption":"Figure: Kubernetes master架构示意图","alt":"Kubernetes master架构示意图","nro":4,"url":"../images/kubernetes-master-arch.png","index":4,"caption_template":"Figure: _CAPTION_","label":"Kubernetes master架构示意图","attributes":{},"skip":false,"key":"1.2.4"},{"backlink":"concepts/index.html#fig1.2.5","level":"1.2","list_caption":"Figure: kubernetes node架构示意图","alt":"kubernetes node架构示意图","nro":5,"url":"../images/kubernetes-node-arch.png","index":5,"caption_template":"Figure: _CAPTION_","label":"kubernetes node架构示意图","attributes":{},"skip":false,"key":"1.2.5"},{"backlink":"concepts/index.html#fig1.2.6","level":"1.2","list_caption":"Figure: Kubernetes分层架构示意图","alt":"Kubernetes分层架构示意图","nro":6,"url":"../images/kubernetes-layers-arch.jpg","index":6,"caption_template":"Figure: 
_CAPTION_","label":"Kubernetes分层架构示意图","attributes":{}
});
< / script >
< / div >
< script src = "../gitbook/gitbook.js" > < / script >
< script src = "../gitbook/theme.js" > < / script >
< script src = "../gitbook/gitbook-plugin-github/plugin.js" > < / script >
< script src = "../gitbook/gitbook-plugin-splitter/splitter.js" > < / script >
< script src = "../gitbook/gitbook-plugin-page-toc-button/plugin.js" > < / script >
< script src = "../gitbook/gitbook-plugin-editlink/plugin.js" > < / script >
< script src = "../gitbook/gitbook-plugin-back-to-top-button/plugin.js" > < / script >
< script src = "../gitbook/gitbook-plugin-search-plus/jquery.mark.min.js" > < / script >
< script src = "../gitbook/gitbook-plugin-search-plus/search.js" > < / script >
< script src = "../gitbook/gitbook-plugin-sharing/buttons.js" > < / script >
< script src = "../gitbook/gitbook-plugin-fontsettings/fontsettings.js" > < / script >
< / body >
< / html >