< h1 id = "storage-for-containers-using-gluster-–-part-ii" > Storage for Containers using Gluster – Part II< / h1 >
< h3 id = "概述" > 概 述 < / h3 >
< p > 本 文 由 Daniel Messer( Technical Marketing Manager Storage @RedHat) 和 Keith Tenzer( Solutions Architect @RedHat) 共 同 撰 写 。 < / p >
- [Storage for Containers Overview – Part I](https://keithtenzer.com/2017/03/07/storage-for-containers-overview-part-i/)
- [Storage for Containers using Gluster – Part II](https://keithtenzer.com/2017/03/24/storage-for-containers-using-gluster-part-ii/)
- [Storage for Containers using Container Native Storage – Part III](https://keithtenzer.com/2017/03/29/storage-for-containers-using-container-native-storage-part-iii/)
- [Storage for Containers using Ceph – Part IV](https://keithtenzer.com/2017/04/07/storage-for-containers-using-ceph-rbd-part-iv/)
- [Storage for Containers using NetApp ONTAP NAS – Part V](https://keithtenzer.com/2017/04/05/storage-for-containers-using-netapp-ontap-nas-part-v/)
- [Storage for Containers using NetApp SolidFire – Part VI](https://keithtenzer.com/2017/04/05/storage-for-containers-using-netapp-solidfire-part-vi/)
< h3 id = "gluster作为container-ready-storagecrs" > Gluster作 为 Container-Ready Storage(CRS)< / h3 >
< p > 在 本 文 中 , 我 们 将 介 绍 容 器 存 储 的 首 选 以 及 如 何 部 署 它 。 Kusternet和 OpenShift支 持 GlusterFS已 经 有 一 段 时 间 了 。 GlusterFS的 适 用 性 很 好 , 可 用 于 所 有 的 部 署 场 景 : 裸 机 、 虚 拟 机 、 内 部 部 署 和 公 共 云 。 在 容 器 中 运 行 GlusterFS的 新 特 性 将 在 本 系 列 后 面 讨 论 。 < / p >
< p > GlusterFS是 一 个 分 布 式 文 件 系 统 , 内 置 了 原 生 协 议 ( GlusterFS) 和 各 种 其 他 协 议 ( NFS, SMB, ...) 。 为 了 与 OpenShift集 成 , 节 点 将 通 过 FUSE使 用 原 生 协 议 , 将 GlusterFS卷 挂 在 到 节 点 本 身 上 , 然 后 将 它 们 绑 定 到 目 标 容 器 中 。 OpenShift / Kubernetes具 有 实 现 请 求 、 释 放 和 挂 载 、 卸 载 GlusterFS卷 的 原 生 程 序 。 < / p >
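To get a feel for what the node does under the hood, this is roughly what such a native-protocol mount looks like when issued by hand. This is a sketch only: `myvol` is a hypothetical volume name, and in practice OpenShift performs the mount for you.

```bash
# Hypothetical manual equivalent of what OpenShift does on the node:
# mount the GlusterFS volume "myvol" via FUSE; OpenShift then bind-mounts
# a path inside it into the target container's filesystem.
mkdir -p /mnt/glusterfs-demo
mount -t glusterfs crs-node1.lab:/myvol /mnt/glusterfs-demo
```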
< h3 id = "crs概述" > CRS概 述 < / h3 >
< p > 在 存 储 方 面 , 根 据 OpenShift / Kubernetes的 要 求 , 还 有 一 个 额 外 的 组 件 管 理 集 群 , 称 为 “ heketi” 。 这 实 际 上 是 一 个 用 于 GlusterFS的 REST API, 它 还 提 供 CLI版 本 。 在 以 下 步 骤 中 , 我 们 将 在 3个 GlusterFS节 点 中 部 署 heketi, 使 用 它 来 部 署 GlusterFS存 储 池 , 将 其 连 接 到 OpenShift, 并 使 用 它 来 通 过 PersistentVolumeClaims为 容 器 配 置 存 储 。 我 们 将 总 共 部 署 4台 虚 拟 机 。 一 个 用 于 OpenShift( 实 验 室 设 置 ) , 另 一 个 用 于 GlusterFS。 < / p >
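To give a sense of what that API looks like, both access paths hit the same endpoint. This is only a preview sketch using this lab's address; heketi is installed and configured step by step further below.

```bash
# Talk to heketi over its plain REST API ...
curl http://crs-node1.lab:8080/clusters
# ... or via the CLI that ships with it, which wraps the same API:
export HEKETI_CLI_SERVER=http://crs-node1.lab:8080
heketi-cli cluster list
```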
Note: your system should have at least a quad-core CPU, 16GB of RAM and 20GB of free disk space.

### Deploying OpenShift

First you need an OpenShift deployment. The most efficient way is to deploy an All-in-One environment in a single VM; a deployment guide can be found in [the "OpenShift Enterprise 3.4 all-in-one Lab Environment" article](https://keithtenzer.com/2017/03/13/openshift-enterprise-3-4-all-in-one-lab-environment/).

Make sure your OpenShift VM is able to resolve external domain names. Edit `/etc/dnsmasq.conf` and add the following Google DNS server:
```
server=8.8.8.8
```

Then restart dnsmasq and verify name resolution:

```bash
# systemctl restart dnsmasq
# ping -c1 google.com
```

### Deploying Gluster
GlusterFS requires at least 3 VMs with the following specs:

- RHEL 7.3
- 2 CPUs
- 2 GB of RAM
- 30 GB of disk storage for the OS
- 10 GB of disk storage for GlusterFS bricks

Modify the /etc/hosts file to define the hostnames of the three VMs.

For example (hostnames are free to adjust to your environment):
```bash
# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.99.144   ocp-master.lab ocp-master
172.16.128.7    crs-node1.lab crs-node1
172.16.128.8    crs-node2.lab crs-node2
172.16.128.9    crs-node3.lab crs-node3
```
**Execute the following steps on all 3 GlusterFS VMs:**

```bash
# subscription-manager repos --disable="*"
# subscription-manager repos --enable=rhel-7-server-rpms
```
If you have a Red Hat Gluster Storage subscription you can use it directly by enabling the `rh-gluster-3-for-rhel-7-server-rpms` yum repository.

If you don't, you can use the unofficial, community-supported GlusterFS repositories via EPEL:

```bash
# yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm --import http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
```
Create the file `glusterfs-3.10.repo` in `/etc/yum.repos.d/`:

```ini
[glusterfs-3.10]
name=glusterfs-3.10
description="GlusterFS 3.10 Community Version"
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/
gpgcheck=0
enabled=1
```
Verify that the repository is active:

```bash
# yum repolist
```

Now install GlusterFS:

```bash
# yum -y install glusterfs-server
```
A few basic TCP ports need to be opened for the GlusterFS peers to communicate with each other, talk to OpenShift and provide storage:

```bash
# firewall-cmd --add-port=24007-24008/tcp --add-port=49152-49664/tcp --add-port=2222/tcp
# firewall-cmd --runtime-to-permanent
```
Now we can start the GlusterFS daemon:

```bash
# systemctl enable glusterd
# systemctl start glusterd
```

Done. GlusterFS is up and running. All further configuration will be done via heketi.

**Install heketi on one of the GlusterFS VMs:**

```bash
[root@crs-node1 ~]# yum -y install heketi heketi-client
```
< h3 id = "更新epel" > 更 新 EPEL< / h3 >
< p > 如 果 你 没 有 Red Hat Gluster Storage订 阅 的 话 , 你 可 以 从 EPEL中 获 取 heketi。 在 撰 写 本 文 时 , 2016年 10月 那 时 候 还 是 3.0.0-1.el7版 本 , 它 不 适 用 于 OpenShift 3.4。 你 将 需 要 更 新 到 更 新 的 版 本 : < / p >
< pre > < code class = "lang-bash" > [root@crs-node1 ~]< span class = "hljs-comment" > # yum -y install wget< / span >
[root@crs-node1 ~]< span class = "hljs-comment" > # wget https://github.com/heketi/heketi/releases/download/v4.0.0/heketi-v4.0.0.linux.amd64.tar.gz< / span >
[root@crs-node1 ~]< span class = "hljs-comment" > # tar -xzf heketi-v4.0.0.linux.amd64.tar.gz< / span >
[root@crs-node1 ~]< span class = "hljs-comment" > # systemctl stop heketi< / span >
[root@crs-node1 ~]< span class = "hljs-comment" > # cp heketi/heketi* /usr/bin/< / span >
[root@crs-node1 ~]< span class = "hljs-comment" > # chown heketi:heketi /usr/bin/heketi*< / span >
< / code > < / pre >
Create an updated systemd unit file in `/etc/systemd/system/heketi.service` matching the syntax of the v4 heketi binary:

```ini
[Unit]
Description=Heketi Server

[Service]
Type=simple
WorkingDirectory=/var/lib/heketi
EnvironmentFile=-/etc/heketi/heketi.json
User=heketi
ExecStart=/usr/bin/heketi --config=/etc/heketi/heketi.json
Restart=on-failure
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
```

```bash
[root@crs-node1 ~]# systemctl daemon-reload
[root@crs-node1 ~]# systemctl start heketi
```
Heketi uses SSH to configure all nodes of the GlusterFS cluster. Create an SSH key pair and copy the public key to all 3 nodes (including the first node you are logged on to):

```bash
[root@crs-node1 ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node1.lab
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node2.lab
[root@crs-node1 ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@crs-node3.lab
[root@crs-node1 ~]# chown heketi:heketi /etc/heketi/heketi_key*
```

The only thing left to do is configure heketi to use SSH. Edit `/etc/heketi/heketi.json` so that it looks like the following (the relevant changes are `"executor": "ssh"` and the `sshexec` section):
```json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "It will not send commands to any node.",
      "ssh: This setting will notify Heketi to ssh to the nodes.",
      "It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "Kubernetes exec api."
    ],
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
```

Done. heketi will listen on port 8080. Let's confirm that the firewall rules allow traffic to that port:
```bash
# firewall-cmd --add-port=8080/tcp
# firewall-cmd --runtime-to-permanent
```

Restart heketi:

```bash
# systemctl enable heketi
# systemctl restart heketi
```

Test that it is running:

```bash
# curl http://crs-node1.lab:8080/hello
Hello from Heketi
```

Good. Time for heketi to take the stage. We will use it to configure our GlusterFS storage pool. The software is already running on all of our VMs, but it is unconfigured. To change that into a storage system fit for our needs, we describe the desired GlusterFS storage pool in a topology file, like this:
```bash
# vi topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "crs-node1.lab"
                            ],
                            "storage": [
                                "172.16.128.7"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "crs-node2.lab"
                            ],
                            "storage": [
                                "172.16.128.8"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "crs-node3.lab"
                            ],
                            "storage": [
                                "172.16.128.9"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/sdb"
                    ]
                }
            ]
        }
    ]
}
```
The file format is fairly simple: it basically tells heketi to create a 3-node cluster, where each node is identified by an FQDN, an IP address, and at least one spare block device that will be used as a GlusterFS brick.

Now feed the file to heketi:

```bash
# export HEKETI_CLI_SERVER=http://crs-node1.lab:8080
# heketi-cli topology load --json=topology.json
Creating cluster ... ID: 78cdb57aa362f5284bc95b2549bc7e7d
Creating node crs-node1.lab ... ID: ffd7671c0083d88aeda9fd1cb40b339b
Adding device /dev/sdb ... OK
Creating node crs-node2.lab ... ID: 8220975c0a4479792e684584153050a9
Adding device /dev/sdb ... OK
Creating node crs-node3.lab ... ID: b94f14c4dbd8850f6ac589ac3b39cc8e
Adding device /dev/sdb ... OK
```

Now heketi has configured a 3-node GlusterFS storage pool. Easy! You can see that the 3 VMs have successfully formed what is called a Trusted Storage Pool in GlusterFS:
```bash
[root@crs-node1 ~]# gluster peer status
Number of Peers: 2

Hostname: crs-node2.lab
Uuid: 93b34946-9571-46a8-983c-c9f128557c0e
State: Peer in Cluster (Connected)
Other names:
crs-node2.lab

Hostname: 172.16.128.9
Uuid: e3c1f9b0-be97-42e5-beda-f70fc05f47ea
State: Peer in Cluster (Connected)
```
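You can also cross-check heketi's own view of the pool with its CLI, a quick sanity check (the cluster and node IDs will differ in your environment):

```bash
# List the cluster created from topology.json, then dump the full
# node/device layout exactly as heketi recorded it
[root@crs-node1 ~]# heketi-cli cluster list
[root@crs-node1 ~]# heketi-cli topology info
```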
Now back to OpenShift!

### Integrating Gluster with OpenShift

For the integration with OpenShift two things are needed: a dynamic Kubernetes storage provisioner and a StorageClass. The provisioner comes out-of-the-box with OpenShift; it does the actual work of attaching storage to containers. The StorageClass is the entity that users in OpenShift issue PersistentVolumeClaims against, which in turn triggers a provisioner to implement the actual provisioning and represent the result as a Kubernetes PersistentVolume (PV).

Like everything else in OpenShift, the StorageClass is simply defined by a YAML file:
```bash
# cat crs-storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: container-ready-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://crs-node1.lab:8080"
  restauthenabled: "false"
```
Our provisioner is kubernetes.io/glusterfs, pointed at our heketi instance. We name the class "container-ready-storage" and at the same time make it the default StorageClass for all PersistentVolumeClaims that do not explicitly specify one.

Create the StorageClass for your GlusterFS pool:

```bash
# oc create -f crs-storageclass.yaml
```
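Before moving on you can confirm the class is registered (a quick check; the resource is still beta in this release, as the `storage.k8s.io/v1beta1` apiVersion above indicates):

```bash
# The new class should appear in the list of known StorageClasses
# oc get storageclass
```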
### Using Gluster in OpenShift

Let's look at how GlusterFS is consumed in OpenShift. First, create a test project on the OpenShift VM:

```bash
# oc new-project crs-storage --display-name="Container-Ready Storage"
```

Storage is requested from Kubernetes/OpenShift by issuing a PersistentVolumeClaim (PVC). This is a simple object describing, at a minimum, how much capacity is needed and which access mode it should have (non-shared, shared, read-only). It is usually part of an application template, but let's just create a standalone PVC:
```bash
# cat crs-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-crs-storage
  namespace: crs-storage
spec:
  accessModes:
   - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```
Send the claim:

```bash
# oc create -f crs-claim.yaml
```

Watch how the PVC is fulfilled in OpenShift by a dynamically created volume:

```bash
# oc get pvc
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
my-crs-storage   Bound     pvc-41ad5adb-107c-11e7-afae-000c2949cce7   1Gi        RWO           58s
```
Great! You can now consume storage capacity in OpenShift without any direct interaction with the storage system. Let's take a look at the volume that got created:

```bash
# oc get pv/pvc-41ad5adb-107c-11e7-afae-000c2949cce7
Name:            pvc-41ad5adb-107c-11e7-afae-000c2949cce7
Labels:
StorageClass:    container-ready-storage
Status:          Bound
Claim:           crs-storage/my-crs-storage
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  gluster-dynamic-my-crs-storage
    Path:           vol_85e444ee3bc154de084976a9aef16025
    ReadOnly:       false
```
The volume was created specifically according to the spec in the PVC. In the PVC we did not explicitly specify which StorageClass to use, because the heketi-backed GlusterFS StorageClass was already defined as the system-wide default.

What happened in the background is that when the PVC reached the system, our default StorageClass reached out to the GlusterFS provisioner with the volume specs from the PVC. The provisioner in turn communicated with our heketi instance, which facilitated the creation of the GlusterFS volume, as we can trace in its log messages:
```bash
[root@crs-node1 ~]# journalctl -l -u heketi.service
...
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] DEBUG 2017/03/24 11:25:52 /src/github.com/heketi/heketi/apps/glusterfs/volume_entry.go:298: Volume to be created on cluster e
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick 9e791b1daa12af783c9195941fe63103
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick 3e06af2f855bef521a95ada91680d14b
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [heketi] INFO 2017/03/24 11:25:52 Creating brick e4daa240f1359071e3f7ea22618cfbab
...
Mar 24 11:25:52 crs-node1.lab heketi[2598]: [sshexec] INFO 2017/03/24 11:25:52 Creating volume vol_85e444ee3bc154de084976a9aef16025 replica 3
...
Mar 24 11:25:53 crs-node1.lab heketi[2598]: Result: volume create: vol_85e444ee3bc154de084976a9aef16025: success: please start the volume to access data
...
Mar 24 11:25:55 crs-node1.lab heketi[2598]: Result: volume start: vol_85e444ee3bc154de084976a9aef16025: success
...
Mar 24 11:25:55 crs-node1.lab heketi[2598]: [asynchttp] INFO 2017/03/24 11:25:55 Completed job c3d6c4f9fc74796f4a5262647dc790fe in 3.176522702s
...
```
Success! In about 3 seconds the GlusterFS pool was configured and a volume was provisioned. The default is replica 3, which means the data will be replicated across 3 bricks (GlusterFS speak for backend storage) on 3 different nodes. The whole process was orchestrated via heketi on behalf of OpenShift.

You can also see information about the volume from the GlusterFS side:
```bash
[root@crs-node1 ~]# gluster volume list
vol_85e444ee3bc154de084976a9aef16025

[root@crs-node1 ~]# gluster volume info vol_85e444ee3bc154de084976a9aef16025

Volume Name: vol_85e444ee3bc154de084976a9aef16025
Type: Replicate
Volume ID: a32168c8-858e-472a-b145-08c20192082b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.128.8:/var/lib/heketi/mounts/vg_147b43f6f6903be8b23209903b7172ae/brick_9e791b1daa12af783c9195941fe63103/brick
Brick2: 172.16.128.9:/var/lib/heketi/mounts/vg_72c0f520b0c57d807be21e9c90312f85/brick_3e06af2f855bef521a95ada91680d14b/brick
Brick3: 172.16.128.7:/var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_e4daa240f1359071e3f7ea22618cfbab/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
```
Notice how the volume name in GlusterFS corresponds to the "Path" of the Kubernetes PersistentVolume in OpenShift.

Alternatively, you can also use the OpenShift UI to provision storage, which conveniently lets you choose from all of the StorageClasses known to the system:

![Screen Shot 2017-03-23 at 21.50.34](https://keithtenzer.files.wordpress.com/2017/03/screen-shot-2017-03-23-at-21-50-34.png?w=440)

![Screen Shot 2017-03-24 at 11.09.34](https://keithtenzer.files.wordpress.com/2017/03/screen-shot-2017-03-24-at-11-09-341.png?w=440)
Let's make this a little more interesting and run a workload on OpenShift.

On the OpenShift VM, still in the crs-storage project, run:

```bash
# oc get templates -n openshift
```

You should see a list of application and database templates that make it easy to deploy application projects on OpenShift.

We will use MySQL to demonstrate a stateful application with persistent and elastic storage on OpenShift. The mysql-persistent template includes a 1G PVC for the MySQL database directory. For demonstration purposes the default values are fine.
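If you'd like to see those defaults before instantiating the template, the configurable parameters (credentials, database name, volume size, ...) can be listed first; output varies with the installed template version:

```bash
# oc process mysql-persistent -n openshift --parameters
```

Now instantiate it: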
```bash
# oc process mysql-persistent -n openshift | oc create -f -
```

Wait for the deployment to finish. You can watch the progress in the UI or on the command line:

```bash
# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
mysql-1-h4afb   1/1       Running   0          2m
```

Good. This template created a service, secrets, a PVC and a pod. Let's use it (your pod name will differ):

```bash
# oc rsh mysql-1-h4afb
```
You have successfully attached to the MySQL pod. Let's connect to the database:

```bash
sh-4.2$ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -h $HOSTNAME $MYSQL_DATABASE
```

This is convenient: all the important configuration, such as the MySQL credentials and the database name, is part of environment variables in the pod template and therefore available inside the pod as shell environment variables.
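For example, you can list them right from the pod's shell (a quick check; the exact variable names come from the template):

```bash
# Show the MySQL-related variables injected by the template
sh-4.2$ env | grep MYSQL
```

Now let's create some data: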
```bash
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sampledb           |
+--------------------+
2 rows in set (0.02 sec)

mysql> \u sampledb
Database changed
mysql> CREATE TABLE IF NOT EXISTS equipment (
    ->  equip_id int(5) NOT NULL AUTO_INCREMENT,
    ->  type varchar(50) DEFAULT NULL,
    ->  install_date DATE DEFAULT NULL,
    ->  color varchar(20) DEFAULT NULL,
    ->  working bool DEFAULT NULL,
    ->  location varchar(250) DEFAULT NULL,
    ->  PRIMARY KEY(equip_id)
    ->  );
Query OK, 0 rows affected (0.13 sec)

mysql> INSERT INTO equipment (type, install_date, color, working, location)
    ->  VALUES
    ->  ("Slide", Now(), "blue", 1, "Southwest Corner");
Query OK, 1 row affected, 1 warning (0.01 sec)

mysql> SELECT * FROM equipment;
+----------+-------+--------------+-------+---------+------------------+
| equip_id | type  | install_date | color | working | location         |
+----------+-------+--------------+-------+---------+------------------+
|        1 | Slide | 2017-03-24   | blue  |       1 | Southwest Corner |
+----------+-------+--------------+-------+---------+------------------+
1 row in set (0.00 sec)
```
Nice, the database is functional.

Want to see where the data is stored? Easy! Look at the mysql volume that was just created by the template:
```bash
# oc get pvc/mysql
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
mysql     Bound     pvc-a678b583-1082-11e7-afae-000c2949cce7   1Gi        RWO           11m

# oc describe pv/pvc-a678b583-1082-11e7-afae-000c2949cce7
Name:            pvc-a678b583-1082-11e7-afae-000c2949cce7
Labels:
StorageClass:    container-ready-storage
Status:          Bound
Claim:           crs-storage/mysql
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  gluster-dynamic-mysql
    Path:           vol_6299fc74eee513119dafd43f8a438db1
    ReadOnly:       false
```
The GlusterFS volume name is vol_6299fc74eee513119dafd43f8a438db1. Back on one of your GlusterFS VMs, run:
```bash
# gluster volume info vol_6299fc74eee513119dafd43f8a438db1

Volume Name: vol_6299fc74eee513119dafd43f8a438db1
Type: Replicate
Volume ID: 4115918f-28f7-4d4a-b3f5-4b9afe5b391f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.16.128.7:/var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_f264a47aa32be5d595f83477572becf8/brick
Brick2: 172.16.128.8:/var/lib/heketi/mounts/vg_147b43f6f6903be8b23209903b7172ae/brick_f5731fe7175cbe6e6567e013c2591343/brick
Brick3: 172.16.128.9:/var/lib/heketi/mounts/vg_72c0f520b0c57d807be21e9c90312f85/brick_ac6add804a6a467cd81cd1404841bbf1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
```
You can see how the data is replicated across 3 GlusterFS bricks. Let's pick one of them (preferably the VM you are logged on to) and look at the directory contents:

```bash
# ll /var/lib/heketi/mounts/vg_67314f879686de975f9b8936ae43c5c5/brick_f264a47aa32be5d595f83477572becf8/brick
total 180300
-rw-r-----. 2 1000070000 2001       56 Mar 24 12:11 auto.cnf
-rw-------. 2 1000070000 2001     1676 Mar 24 12:11 ca-key.pem
-rw-r--r--. 2 1000070000 2001     1075 Mar 24 12:11 ca.pem
-rw-r--r--. 2 1000070000 2001     1079 Mar 24 12:12 client-cert.pem
-rw-------. 2 1000070000 2001     1680 Mar 24 12:12 client-key.pem
-rw-r-----. 2 1000070000 2001      352 Mar 24 12:12 ib_buffer_pool
-rw-r-----. 2 1000070000 2001 12582912 Mar 24 12:20 ibdata1
-rw-r-----. 2 1000070000 2001 79691776 Mar 24 12:20 ib_logfile0
-rw-r-----. 2 1000070000 2001 79691776 Mar 24 12:11 ib_logfile1
-rw-r-----. 2 1000070000 2001 12582912 Mar 24 12:12 ibtmp1
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 mysql
-rw-r-----. 2 1000070000 2001        2 Mar 24 12:12 mysql-1-h4afb.pid
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 performance_schema
-rw-------. 2 1000070000 2001     1676 Mar 24 12:12 private_key.pem
-rw-r--r--. 2 1000070000 2001      452 Mar 24 12:12 public_key.pem
drwxr-s---. 2 1000070000 2001       62 Mar 24 12:20 sampledb
-rw-r--r--. 2 1000070000 2001     1079 Mar 24 12:11 server-cert.pem
-rw-------. 2 1000070000 2001     1676 Mar 24 12:11 server-key.pem
drwxr-s---. 2 1000070000 2001     8192 Mar 24 12:12 sys
```

Here you can see the MySQL database directory. It uses GlusterFS as the backing store and is bind-mounted into the MySQL container. If you check the mount table on the OpenShift VM, you will see the GlusterFS mount.
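That last point is easy to verify too (a quick sketch; the mount source will reflect your Gluster node IP and volume name):

```bash
# On the OpenShift VM: the pod's PV shows up as a FUSE mount of type
# fuse.glusterfs under the kubelet's volumes directory
# mount | grep glusterfs
```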
< h3 id = "总结" > 总 结 < / h3 >
< p > 在 这 里 我 们 是 在 OpenShift之 外 创 建 了 一 个 简 单 但 功 能 强 大 的 GlusterFS存 储 池 。 该 池 可 以 独 立 于 应 用 程 序 扩 展 和 收 缩 。 该 池 的 整 个 生 命 周 期 由 一 个 简 单 的 称 为 heketi的 前 端 管 理 , 你 只 需 要 在 部 署 增 长 时 进 行 手 动 干 预 。 对 于 日 常 配 置 操 作 , 使 用 它 的 API与 OpenShifts动 态 配 置 器 交 互 , 无 需 开 发 人 员 直 接 与 基 础 架 构 团 队 进 行 交 互 。 < / p >
< p > 这 就 是 我 们 如 何 将 存 储 带 入 DevOps世 界 - 无 痛 苦 , 并 在 OpenShift PaaS系 统 的 开 发 人 员 工 具 中 直 接 提 供 。 < / p >
< p > GlusterFS和 OpenShift可 跨 越 所 有 环 境 : 裸 机 , 虚 拟 机 , 私 有 和 公 共 云 ( Azure, Google Cloud, AWS ...) , 确 保 应 用 程 序 可 移 植 性 , 并 避 免 云 供 应 商 锁 定 。 < / p >
< p > 祝 你 愉 快 在 容 器 中 使 用 GlusterFS! < / p >
< p > (c) 2017 Keith Tenzer< / p >
< p > 原 文 链 接 : < a href = "https://keithtenzer.com/2017/03/24/storage-for-containers-using-gluster-part-ii/" target = "_blank" > https://keithtenzer.com/2017/03/24/storage-for-containers-using-gluster-part-ii/< / a > < / p >