commit b7c28dd8b3

README.md

@@ -39,4 +39,15 @@ Gitbook online reading: https://jimmysong.io/kubernetes-handbook/
<p align="center">
  <img src="https://github.com/rootsongjc/kubernetes-handbook/blob/master/images/cloud-native-go-wechat-qr-code.jpg?raw=true" alt="CloudNativeGo WeChat official account QR code"/>
</p>

## Support This Book

Cheer on the contributors ⛽️! A toast to cloud native 🍻!

Scan the QR code with WeChat to buy the contributors a beer 🍺

<p align="center">
  <img src="https://github.com/rootsongjc/kubernetes-handbook/blob/master/images/wechat-appreciate-qrcode.jpg?raw=true" alt="WeChat appreciation QR code"/>
</p>
@@ -121,6 +121,8 @@
- [Installing the dashboard add-on](practice/dashboard-addon-installation.md)
- [Installing the heapster add-on](practice/heapster-addon-installation.md)
- [Installing the EFK add-on](practice/efk-addon-installation.md)
- [Quickly building a test cluster with kubeadm](practice/install-kubernetes-with-kubeadm.md)
- [Quickly building a test cluster on Ubuntu Server 16.04 with kubeadm](practice/install-kubernetes-on-ubuntu-server-16.04-with-kubeadm.md)
- [Service discovery and load balancing](practice/service-discovery-and-loadbalancing.md)
- [Installing Traefik ingress](practice/traefik-ingress-installation.md)
- [Distributed load testing](practice/distributed-load-test.md)
@@ -140,12 +142,15 @@
- [Storage management](practice/storage.md)
- [GlusterFS](practice/glusterfs.md)
- [Using GlusterFS for persistent storage](practice/using-glusterfs-for-persistent-storage.md)
- [Using Heketi as an external provisioner for GlusterFS persistent storage in kubernetes](practice/using-heketi-gluster-for-persistent-storage.md)
- [Using GlusterFS for persistent storage in OpenShift](practice/storage-for-containers-using-glusterfs-with-openshift.md)
- [CephFS](practice/cephfs.md)
- [Ceph](practice/ceph.md)
- [Using Ceph for persistent storage](practice/using-ceph-for-persistent-storage.md)
- [OpenEBS](practice/openebs.md)
- [Using OpenEBS for persistent storage](practice/using-openebs-for-persistent-storage.md)
- [Rook](practice/rook.md)
- [NFS](practice/nfs.md)
- [Dynamically provisioning Kubernetes backend storage volumes with NFS](practice/using-nfs-for-persistent-storage.md)
- [Cluster and application monitoring](practice/monitoring.md)
- [Heapster](practice/heapster.md)
- [Using Heapster to get cluster and object metrics](practice/using-heapster-to-get-object-metrics.md)
@@ -217,4 +222,3 @@
- [Kubernetes 1.10 changelog](appendix/kubernetes-1.10-changelog.md)
- [Kubernetes and cloud native annual summary and outlook](appendix/summary-and-outlook.md)
- [Kubernetes and cloud native: 2017 year-end summary and 2018 outlook](appendix/kubernetes-and-cloud-native-summary-in-2017-and-outlook-for-2018.md)
@@ -199,7 +199,7 @@ Kubernetes objects are "records of intent"

### Deploying a Kubernetes Cluster

We deploy all components and add-ons of the `kubernetes` cluster from binaries rather than with automated tooling such as `kubeadm`, and we enable TLS authentication for the cluster. This helps us understand how the system components interact, so that we can solve real problems quickly. See [Deploying a Kubernetes cluster on CentOS](../practice/install-kubernetes-on-centos.md).

**Cluster details**
@@ -359,6 +359,16 @@ DEBUG

We can see that the content of the file in the Volume mounted from the ConfigMap has changed to `DEBUG`.

## Rolling-Updating Pods After a ConfigMap Update

Updating a ConfigMap does not currently trigger a rolling update of the Pods that use it, but you can force one by modifying the pod annotations.

```bash
$ kubectl patch deployment my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20180411" }}}}}'
```

In this example we add `version/config` to `.spec.template.metadata.annotations` and change its value whenever we want to trigger a rolling update.
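As a sketch of where that annotation ends up, here is a minimal `my-nginx` Deployment carrying it in the pod template (image, labels, and replica count are assumed for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
      annotations:
        version/config: "20180411"   # change this value to trigger a rolling update
    spec:
      containers:
      - name: nginx
        image: nginx
```

Because the annotation lives under `.spec.template`, changing it changes the pod template hash, which is what makes the Deployment controller roll out new pods.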
## Summary

After updating a ConfigMap:
@@ -366,11 +376,12 @@ DEBUG

- Env entries populated from the ConfigMap are **not** updated
- Data in Volumes mounted from the ConfigMap takes a while (roughly 10 seconds in my tests) to sync

Environment variables are injected when the container starts; after that, kubernetes never changes their values again, and the environment variables of pods in the same namespace keep accumulating; see [Exploring service discovery in Kubernetes and environment variable passing between docker containers](https://jimmysong.io/posts/exploring-kubernetes-env-with-docker/). To update configuration mounted from a ConfigMap inside a container, force a remount by rolling-updating the pods.
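To see the difference concretely, here is a sketch of a pod (all names are hypothetical) that consumes the same ConfigMap both ways; the env var stays frozen after container start, while the file under the mount path eventually syncs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: LOG_LEVEL            # injected once at container start; never updated afterwards
      valueFrom:
        configMapKeyRef:
          name: log-config       # hypothetical ConfigMap name
          key: log_level
    volumeMounts:
    - name: config
      mountPath: /etc/config     # file contents sync shortly after the ConfigMap changes
  volumes:
  - name: config
    configMap:
      name: log-config
```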
## References

- [Kubernetes 1.7 security in practice](https://acotten.com/post/kube17-security)
- [ConfigMap | kubernetes handbook - jimmysong.io](https://jimmysong.io/kubernetes-handbook/concepts/configmap.html)
- [Creating a highly available etcd cluster | Kubernetes handbook - jimmysong.io](https://jimmysong.io/kubernetes-handbook/practice/etcd-cluster-installation.html)
- [Exploring service discovery in Kubernetes and environment variable passing between docker containers](https://jimmysong.io/posts/exploring-kubernetes-env-with-docker/)
- [Automatically Roll Deployments When ConfigMaps or Secrets change](https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change)
Binary file not shown.
@@ -13,11 +13,12 @@ spec:

```yaml
spec:
  containers:
  - name: influxdb
    # image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
    image: sz-pg-oam-docker-hub-001.tendcloud.com/library/heapster-influxdb-amd64:v1.1.1
    volumeMounts:
    - mountPath: /data
      name: influxdb-storage
    - mountPath: /etc/config.toml
      name: influxdb-config
  volumes:
  - name: influxdb-storage
```
@@ -0,0 +1,64 @@

# A Brief Introduction to Ceph

This article is partially translated from [this post](https://www.stratoscale.com/blog/storage/introduction-to-ceph/).

Ceph is an open-source distributed object, block, and file store. The project was born in 2003 as the outcome of Sage Weil's doctoral dissertation, and was released under the LGPL 2.1 license in 2006. Ceph has been integrated with the Linux kernel KVM and ships by default in many GNU/Linux distributions.

## Introduction

Today's workloads and infrastructures require different data access methods (object, block, file), and Ceph supports all of them. It is designed to be scalable and to have no single point of failure. It is open-source software that can run in production on commodity hardware.

RADOS (Reliable Autonomic Distributed Object Store) is the core component of Ceph. There is an important distinction between RADOS objects and the objects popular today, such as those served by Amazon S3, OpenStack Swift, or Ceph's own RADOS Gateway. From 2005 to 2010, object storage devices (OSDs) became a popular concept. These OSDs provided strong consistency, exposed different interfaces, and each object typically resided on a single device.

There are several ways to operate on objects in RADOS:

* through the client library (librados) in applications written in C, C++, Java, PHP, and Python
* through the command-line tool 'rados'
* through existing APIs compatible with S3 (Amazon) and Swift (OpenStack)

RADOS is a cluster of Ceph nodes. There are two types of nodes:

* Ceph storage device nodes
* Ceph monitor nodes

Each Ceph storage device node runs one or more Ceph OSD daemons, one per disk device. The OSD is a Linux process (daemon) that handles all operations related to its assigned disk (HDD or SSD). The OSD daemon accesses a local file system to store data and metadata rather than talking to the disk directly. The file systems commonly used with Ceph are XFS, btrfs, and ext4. Each OSD also needs a journal, used for atomic updates of RADOS objects. The journal may reside on a separate disk (typically an SSD, for better performance), and one disk can serve the journals of several OSDs on the same node.

Each Ceph monitor node runs a single Ceph Monitor daemon. The Ceph Monitor daemon maintains the master copy of the cluster map. Although a Ceph cluster can work with a single monitor node, more are needed for high availability. Three or more Ceph Monitor nodes are recommended because they use a quorum to maintain the cluster map. A majority of the monitors must confirm the quorum, so an odd number of monitors is recommended; for example, both 3 and 4 monitors protect against a single failure, while 5 monitors protect against two failures.

Ceph OSD daemons and Ceph clients are cluster-aware, so each Ceph OSD daemon can communicate directly with other Ceph OSD daemons and with the Ceph monitors. In addition, Ceph clients communicate directly with Ceph OSD daemons to read and write data.

The Ceph Object Gateway daemon (radosgw) provides two APIs:

* an API compatible with a subset of the Amazon S3 RESTful API
* an API compatible with a subset of the OpenStack Swift API

If RADOS and radosgw provide an object storage service to clients, how is Ceph used as block and file storage?

Distributed block storage in Ceph (Ceph RBD) is implemented as a thin layer on top of the object store. The Ceph RADOS Block Device (RBD) stores data striped across multiple Ceph OSDs in the cluster. RBD leverages RADOS capabilities such as snapshots, replication, and consistency, and communicates with RADOS using a Linux kernel module or the librbd library. In addition, the KVM hypervisor can use librbd to let virtual machines access Ceph volumes.

The Ceph File System (CephFS) is a POSIX-compliant file system that uses a Ceph cluster to store its data. CephFS requires at least one Ceph Metadata Server (MDS) in the Ceph cluster. The MDS handles all file operations, such as file and directory listings, attributes, ownership, and so on, and uses RADOS objects to store file system data and attributes. It scales horizontally, so you can add more Ceph metadata servers to your cluster to support more file system operation clients.

## Kubernetes and Ceph

Kubernetes supports Ceph's block storage (Ceph RBD) and file storage (CephFS) as persistent storage backends. Kubernetes ships an internal provisioner for Ceph RBD that can be configured for dynamic provisioning; to use CephFS for dynamic provisioning, an external provisioner must be installed.

The [official documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/) lists the Kubernetes StorageClasses related to Ceph:

| Volume Plugin | Internal Provisioner | Config Example |
| :--- | :---: | :---: |
| AWSElasticBlockStore | ✓ | [AWS](#aws) |
| AzureFile | ✓ | [Azure File](#azure-file) |
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
| CephFS | - | - |
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder) |
| FC | - | - |
| FlexVolume | - | - |
| Flocker | ✓ | - |
| GCEPersistentDisk | ✓ | [GCE](#gce) |
| Glusterfs | ✓ | [Glusterfs](#glusterfs) |
| iSCSI | - | - |
| PhotonPersistentDisk | ✓ | - |
| Quobyte | ✓ | [Quobyte](#quobyte) |
| NFS | - | - |
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |
| VsphereVolume | ✓ | [vSphere](#vsphere) |
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
| ScaleIO | ✓ | [ScaleIO](#scaleio) |
| StorageOS | ✓ | [StorageOS](#storageos) |
| Local | - | [Local](#local) |
Subsequent articles will describe how Kubernetes integrates with Ceph RBD and CephFS.
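As a preview of that integration, a StorageClass using the built-in `kubernetes.io/rbd` provisioner might look roughly like this (monitor address, pool, and secret names are placeholders you must replace with your cluster's values):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789        # placeholder: your Ceph monitor address(es)
  adminId: admin
  adminSecretName: ceph-admin-secret  # placeholder: secret holding the Ceph admin key
  adminSecretNamespace: kube-system
  pool: kube                          # placeholder: RBD pool to allocate images from
  userId: kube
  userSecretName: ceph-user-secret    # placeholder: per-namespace user secret
```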

@@ -130,7 +130,10 @@ EOF

```json
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
```
@@ -0,0 +1,306 @@

# Quickly Building a Basic Kubernetes Cluster on Ubuntu with kubeadm

This article describes how to install kubeadm on Ubuntu Server 16.04 and use it to quickly build a basic kubernetes test cluster for learning and testing purposes; the latest version at the time of writing (2018-04-14) is 1.10.1. References include the official [kubeadm installation docs](https://kubernetes.io/docs/setup/independent/install-kubeadm/) and [Creating a cluster with kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).

For production use you need to consider high availability of every component; please refer to the official Kubernetes installation documentation.

## Overview

This installation requires at least 4 servers or virtual machines, each with at least 4 GB of RAM and 2 CPU cores; the basic architecture is 1 master node and 3 slave nodes. The process installs kubeadm on the Ubuntu servers and brings up a basic kubernetes cluster including the canal network; for backend storage, see the storage management content in this book's best practices.

The 4 nodes are:

| Role | Hostname | IP address |
|--------|---------------|---------------|
| Master | Ubuntu-master | 192.168.0.200 |
| Slave | ubuntu-1 | 192.168.0.201 |
| Slave | ubuntu-2 | 192.168.0.202 |
| Slave | ubuntu-3 | 192.168.0.203 |

## Preparation

* Install Ubuntu Server 16.04 with the default options
* Configure hostname mappings on every node:
```bash
# cat /etc/hosts
127.0.0.1 localhost
192.168.0.200 Ubuntu-master
192.168.0.201 Ubuntu-1
192.168.0.202 Ubuntu-2
192.168.0.203 Ubuntu-3
```

* If access to gcr is inconvenient and the images cannot be downloaded, the installation will hang; you can instead download [the images I exported](https://pan.baidu.com/s/1knjGYvxfSeiixWbK6Le8Jw), unpack the 9 tar files, and import each with `docker load < xxxx.tar`.
## Install kubeadm on All Nodes

Check the apt sources; the configuration below uses the Aliyun mirrors for both the system and the kubernetes packages.

```bash
# cat /etc/apt/sources.list
```

```
# system package sources
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
# kubeadm and kubernetes component sources
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
```
Install docker; the docker.io package from the system sources (version 1.13.1) is sufficient. On my system the latest version is already installed:

```bash
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker.io is already the newest version (1.13.1-0ubuntu1~16.04.2).
0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
```
Update the sources; the gpg errors can be ignored.

```bash
# apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu xenial InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Ign:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease
Fetched 8,993 B in 0s (20.7 kB/s)
Reading package lists... Done
W: GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: The repository 'https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
```
Force-install the kubeadm, kubectl, and kubelet packages.

```bash
# apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  kubernetes-cni socat
The following NEW packages will be installed:
  kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 5 newly installed, 0 to remove and 4 not upgraded.
Need to get 56.9 MB of archives.
After this operation, 410 MB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  kubernetes-cni kubelet kubectl kubeadm
Authentication warning overridden.
Get:1 http://mirrors.aliyun.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.6.0-00 [5,910 kB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.10.1-00 [21.1 MB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.10.1-00 [8,906 kB]
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.10.1-00 [20.7 MB]
Fetched 56.9 MB in 5s (11.0 MB/s)
Use of uninitialized value $_ in lc at /usr/share/perl5/Debconf/Template.pm line 287.
Selecting previously unselected package kubernetes-cni.
(Reading database ... 191799 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.6.0-00_amd64.deb ...
Unpacking kubernetes-cni (0.6.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../socat_1.7.3.1-1_amd64.deb ...
Unpacking ....
....
```

With kubeadm installed, we can now use it to quickly deploy a Kubernetes cluster.
## Installing the Kubernetes Cluster with kubeadm

### Initializing the master node with kubeadm

Because we will use canal, we must pass the network configuration at init time, setting the kubernetes pod subnet to 10.244.0.0/16. Do not change this address: it must match the value in the canal yaml applied later; if you do change it, change both.

Downloading the images requires access to the gcr site, which may be blocked. (If that is a problem, use [the images I exported](https://pan.baidu.com/s/1knjGYvxfSeiixWbK6Le8Jw), unpack the 9 tar files, and import each with `docker load < xxxx.tar`.)

If you have a network that can reach gcr, the whole installation is very simple.
```bash
# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.200
[init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ubuntu-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ubuntu-master] and IPs [192.168.0.200]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.003828 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ubuntu-master as master by adding a label and a taint
[markmaster] Master ubuntu-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: rw4enn.mvk547juq7qi2b5f
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.200:6443 --token rw4enn.mvk547juq7qi2b5f --discovery-token-ca-cert-hash sha256:ba260d5191213382a806a9a7d92c9e6bb09061847c7914b1ac584d0c69471579
```

Run the following commands to configure kubectl:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

The master node is now configured and kubectl works; following the instructions printed above, we next join the slave nodes to the cluster.

### Joining the Slave Nodes to the Cluster

Run the following command on each slave node to join it to the cluster; normal output looks like this:

```bash
# kubeadm join 192.168.0.200:6443 --token rw4enn.mvk547juq7qi2b5f --discovery-token-ca-cert-hash sha256:ba260d5191213382a806a9a7d92c9e6bb09061847c7914b1ac584d0c69471579
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.0.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.200:6443"
[discovery] Requesting info from "https://192.168.0.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.200:6443"
[discovery] Successfully established connection with API Server "192.168.0.200:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```

Wait for the nodes to finish joining; the output below shows them still joining (NotReady):

```bash
# kubectl get node
NAME            STATUS     ROLES     AGE       VERSION
ubuntu-1        NotReady   <none>    6m        v1.10.1
ubuntu-2        NotReady   <none>    6m        v1.10.1
ubuntu-3        NotReady   <none>    6m        v1.10.1
ubuntu-master   NotReady   master    10m       v1.10.1
```
On the master, output like the following shows the nodes have joined:

```bash
root@Ubuntu-master:~# kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-ubuntu-master                      1/1       Running   0          21m       192.168.0.200   ubuntu-master
kube-apiserver-ubuntu-master            1/1       Running   0          21m       192.168.0.200   ubuntu-master
kube-controller-manager-ubuntu-master   1/1       Running   0          22m       192.168.0.200   ubuntu-master
kube-dns-86f4d74b45-wkfk2               0/3       Pending   0          22m       <none>          <none>
kube-proxy-6ddb4                        1/1       Running   0          22m       192.168.0.200   ubuntu-master
kube-proxy-7ngb9                        1/1       Running   0          17m       192.168.0.202   ubuntu-2
kube-proxy-fkhhx                        1/1       Running   0          18m       192.168.0.201   ubuntu-1
kube-proxy-rh4lq                        1/1       Running   0          18m       192.168.0.203   ubuntu-3
kube-scheduler-ubuntu-master            1/1       Running   0          21m       192.168.0.200   ubuntu-master
```

The kubedns component will finish installing automatically once the network plugin is in place.

## Installing the canal Network Plugin

Following the [canal official documentation](https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/), download and apply the following two files: one configures canal's RBAC permissions, the other deploys the canal DaemonSet.
```bash
# kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io "calico" created
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-calico" created
```

```bash
# kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/canal.yaml
configmap "canal-config" created
daemonset.extensions "canal" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
serviceaccount "canal" created
```

Check canal's installation status:

```bash
# kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
canal-fc94k                             3/3       Running   10         4m        192.168.0.201   ubuntu-1
canal-rs2wp                             3/3       Running   10         4m        192.168.0.200   ubuntu-master
canal-tqd4l                             3/3       Running   10         4m        192.168.0.202   ubuntu-2
canal-vmpnr                             3/3       Running   10         4m        192.168.0.203   ubuntu-3
etcd-ubuntu-master                      1/1       Running   0          28m       192.168.0.200   ubuntu-master
kube-apiserver-ubuntu-master            1/1       Running   0          28m       192.168.0.200   ubuntu-master
kube-controller-manager-ubuntu-master   1/1       Running   0          29m       192.168.0.200   ubuntu-master
kube-dns-86f4d74b45-wkfk2               3/3       Running   0          28m       10.244.2.2      ubuntu-3
kube-proxy-6ddb4                        1/1       Running   0          28m       192.168.0.200   ubuntu-master
kube-proxy-7ngb9                        1/1       Running   0          24m       192.168.0.202   ubuntu-2
kube-proxy-fkhhx                        1/1       Running   0          24m       192.168.0.201   ubuntu-1
kube-proxy-rh4lq                        1/1       Running   0          24m       192.168.0.203   ubuntu-3
kube-scheduler-ubuntu-master            1/1       Running   0          28m       192.168.0.200   ubuntu-master
```

We can see that canal and kube-dns are both running normally; a test environment with basic functionality is now deployed.

Check the cluster's node status; the version is the latest, v1.10.1:

```bash
# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
ubuntu-1        Ready     <none>    27m       v1.10.1
ubuntu-2        Ready     <none>    27m       v1.10.1
ubuntu-3        Ready     <none>    27m       v1.10.1
ubuntu-master   Ready     master    31m       v1.10.1
```

Allow the master to run pods as well (by default the master does not run pods); this is fine for a test environment but not recommended in production:

```bash
# kubectl taint nodes --all node-role.kubernetes.io/master-
node "ubuntu-master" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```

To enable other cluster features, see the follow-up articles.

@@ -0,0 +1,40 @@

# kubeadm

## Introduction

**kubeadm** is a toolkit that helps you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure, and extensible way. It also supports managing [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) for you and upgrading/downgrading clusters.

kubeadm's goal is to set up a minimum viable cluster that passes the [Kubernetes Conformance tests](http://blog.kubernetes.io/2017/10/software-conformance-certification), without installing other add-ons.

By design it does not install a networking solution for you; you must install a third-party CNI-compliant networking solution yourself (such as flannel, calico, or canal).

kubeadm can run on many kinds of machines: a Linux laptop, virtual machines, physical/cloud servers, or a Raspberry Pi. This makes kubeadm well suited for integration with provisioning systems of all kinds (Terraform, Ansible, and so on).

kubeadm is a simple way for newcomers to start trying Kubernetes, possibly for the first time; an easy way for existing users to test their applications and stitch a cluster together; and also a building block in other ecosystems and installer tools with a larger scope.

kubeadm is very easy to install on operating systems that support deb or rpm packages. The maintainers of kubeadm, [SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), provide these packages pre-built; they can also be used on other operating systems.

## kubeadm Maturity

| Area | Maturity Level |
|---------------------------|----------------|
| Command line UX | beta |
| Implementation | beta |
| Config file API | alpha |
| Self-hosting | alpha |
| kubeadm alpha subcommands | alpha |
| CoreDNS | alpha |
| DynamicKubeletConfig | alpha |

kubeadm's overall feature state is **Beta**, heading toward **General Availability (GA)** during 2018. Some sub-features (like self-hosting or the configuration file API) are still under active development. The implementation of creating a cluster may change slightly as the tool evolves, but the overall implementation should be fairly stable. Anything under `kubeadm alpha` is, by definition, supported at an alpha level.

## Support Timeframes

Kubernetes releases are generally supported for nine months, during which a patch release may be issued from the release branch if a severe bug or security problem is found. Here are the latest Kubernetes releases and their support timeframes, which also apply to `kubeadm`:

| Kubernetes version | Release month | End-of-life-month |
|--------------------|----------------|-------------------|
| v1.6.x | March 2017 | December 2017 |
| v1.7.x | June 2017 | March 2018 |
| v1.8.x | September 2017 | June 2018 |
| v1.9.x | December 2017 | September 2018 |
| v1.10.x | March 2018 | December 2018 |

@@ -0,0 +1,3 @@

# NFS (Network File System)

NFS (Network File System) is one of the file systems originally supported by FreeBSD; it allows computers on a network to share resources over TCP/IP. With NFS, a local NFS client application can transparently read and write files located on a remote NFS server, just as if they were local files. On Linux, NFS also exists as a simple network shared file system.
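In Kubernetes, an existing NFS export can be mounted directly as a PersistentVolume; a minimal sketch (the server address and export path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany          # NFS supports shared read-write access from many nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.0.100  # placeholder: NFS server address
    path: /exports/data    # placeholder: exported directory
```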

@@ -0,0 +1,421 @@

# Using Heketi as an External Provisioner for GlusterFS Persistent Storage on Kubernetes (Integrating a GlusterFS Cluster and Heketi with Kubernetes)

This article is translated from the [official heketi documentation on GitHub](https://github.com/heketi/heketi/blob/master/docs/admin/install-kubernetes.md) (mostly machine-translated with minor manual adjustments; remarks in parentheses are my own notes). The caveats section was gathered from other sources online.

The process installs a glusterfs server cluster (as a DaemonSet) on 3 or more nodes of the kubernetes cluster and deploys heketi into the kubernetes cluster as a deployment. My examples section contains StorageClass and PVC samples. Note that integrating Heketi and GlusterFS with kubernetes as described here is only suitable for test and validation environments, not for production.

Heketi is a glusterfs management program with a RESTful interface, acting as an external provisioner for Kubernetes storage.

"Heketi provides a RESTful management interface which can be used to manage the lifecycle of GlusterFS volumes. With Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes with any of the supported durability types. Heketi will automatically determine the location for bricks across the cluster, making sure to place bricks and their replicas across different failure domains. Heketi also supports any number of GlusterFS clusters, allowing cloud services to provide network file storage without being limited to a single GlusterFS cluster."
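Once everything below is running, the StorageClass that ties Kubernetes to Heketi looks roughly like this sketch (the resturl must point at your Heketi service, and auth settings depend on your heketi.json; all values here are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080"  # placeholder: URL of the Heketi REST service
  restauthenabled: "false"          # placeholder: enable and add restuser/secret if heketi auth is on
```

A PVC that requests this class is then provisioned dynamically by Heketi carving a volume out of the GlusterFS cluster.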

## Caveats

* Install the glusterfs client: every node of the kubernetes cluster needs the glusterfs client installed, e.g. `apt-get install glusterfs-client` on ubuntu.
* Load the kernel module: run `modprobe dm_thin_pool` on every kubernetes node.
* At least three slave nodes: at least 3 kubernetes slave nodes are needed to deploy the glusterfs cluster, and each of those nodes needs at least one spare disk.

## Overview

This guide supports the integration, deployment, and management of containerized GlusterFS storage nodes in a Kubernetes cluster. This enables Kubernetes administrators to provide their users with reliable shared storage.

Another important resource on this topic is the [gluster-kubernetes](https://github.com/gluster/gluster-kubernetes) project. It is focused on deploying GlusterFS in a Kubernetes cluster and provides streamlined tools for this task. It contains a [setup guide](https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md) as well as a [Hello World](https://github.com/gluster/gluster-kubernetes/tree/master/docs/examples/hello_world) example featuring a web server pod that uses a dynamically-provisioned GlusterFS volume for storage. For those wanting to test or learn more about this topic, follow the quick-start instructions in the main [README](https://github.com/gluster/gluster-kubernetes).

This guide is intended as a minimal example of Heketi managing Gluster in a Kubernetes environment. It highlights the major components of such a configuration and is therefore not suitable for production.

## Infrastructure Requirements

* A running Kubernetes cluster with at least three Kubernetes worker nodes, each with at least one available raw block device attached (like an EBS volume or a local disk).
* The three Kubernetes nodes intended to run the GlusterFS Pods must have the appropriate ports opened for GlusterFS communication (only needed if a firewall is enabled). Run the following commands on each node:

```bash
iptables -N heketi
iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
iptables -A heketi -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
iptables -A heketi -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
service iptables save
```
## 客户端安装
|
||||
|
||||
Heketi提供了一个CLI客户端,为用户提供了一种管理Kubernetes中GlusterFS的部署和配置的方法。 在客户端机器上下载并安装[Download and install the heketi-cli](https://github.com/heketi/heketi/releases)。
|
||||
|
||||
|
||||
## Glusterfs和Heketi在Kubernetes集群中的部署过程
|
||||
以下所有文件都位于下方extras/kubernetes (`git clone https://github.com/heketi/heketi.git`)。
|
||||
|
||||
* 部署 GlusterFS DaemonSet
|
||||
|
||||
```bash
|
||||
$ kubectl create -f glusterfs-daemonset.json
|
||||
```
|
||||
|
||||
* 通过运行如下命令获取节点名称:
|
||||
|
||||
```bash
|
||||
$ kubectl get nodes
|
||||
```
|
||||
|
||||
* 通过设置storagenode=glusterfs节点上的标签,将gluster容器部署到指定节点上。
|
||||
|
||||
```bash
|
||||
$ kubectl label node <...node...> storagenode=glusterfs
|
||||
```
|
||||
|
||||
根据需要重复打标签的步骤。验证Pod在节点上运行至少应运行3个Pod(因此至少需要给3个节点打标签)。
|
||||
|
||||
```bash
|
||||
$ kubectl get pods
|
||||
```
|
||||
|
||||
* 接下来,我们将为Heketi创建一个服务帐户(service-account):
|
||||
|
||||
```bash
|
||||
$ kubectl create -f heketi-service-account.json
|
||||
```
|
||||
|
||||
* 我们现在必须给该服务帐户的授权绑定相应的权限来控制gluster的pod。我们通过为我们新创建的服务帐户创建群集角色绑定(cluster role binding)来完成此操作。
|
||||
|
||||
```bash
|
||||
$ kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
|
||||
```

* Now create a Kubernetes Secret holding the configuration of our Heketi instance. The executor in the configuration file must be set to kubernetes (the default in the sample configuration) so that the Heketi server can control the gluster pods; beyond that, feel free to try the other options.

```bash
$ kubectl create secret generic heketi-config-secret --from-file=./heketi.json
```
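For orientation, the parts of heketi.json that matter here look roughly like this (a trimmed sketch, not the full sample file shipped in the repo):

```json
{
  "port": "8080",
  "use_auth": false,
  "glusterfs": {
    "_comment": "the executor must be kubernetes so the Heketi server drives the gluster pods",
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db"
  }
}
```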

* Next, deploy a bootstrap Heketi pod along with a service to access it. The repo you cloned contains a file named heketi-bootstrap.json.

Submit it and verify that everything is running, as shown below:

```bash
# kubectl create -f heketi-bootstrap.json
service "deploy-heketi" created
deployment "deploy-heketi" created

# kubectl get pods
NAME                                                      READY     STATUS    RESTARTS   AGE
deploy-heketi-1211581626-2jotm                            1/1       Running   0          35m
glusterfs-ip-172-20-0-217.ec2.internal-1217067810-4gsvx   1/1       Running   0          1h
glusterfs-ip-172-20-0-218.ec2.internal-2001140516-i9dw9   1/1       Running   0          1h
glusterfs-ip-172-20-0-219.ec2.internal-2785213222-q3hba   1/1       Running   0          1h
```

* With the bootstrap Heketi service running, set up port forwarding so that we can communicate with the service using the Heketi CLI. Using the name of the heketi pod, run:

`kubectl port-forward deploy-heketi-1211581626-2jotm :8080`

If local port 8080 is free on the machine where you run the command, you can bind the forward to 8080 for convenience (either of the two commands will do; I chose the second):

`kubectl port-forward deploy-heketi-1211581626-2jotm 8080:8080`

Now verify that port forwarding works by running a sample query against the Heketi service. The port-forward command prints the local port it bound; combine that into a URL to test the service, as shown below:

```bash
curl http://localhost:8080/hello
Handling connection for 8080
Hello from heketi
```

Finally, set an environment variable so the Heketi CLI client knows the address of the Heketi server:

`export HEKETI_CLI_SERVER=http://localhost:8080`

* Next, give Heketi information about the GlusterFS cluster it is to manage, supplied as a topology file. The cloned repo contains a sample topology file named topology-sample.json. The topology specifies the Kubernetes nodes on which the GlusterFS containers run, along with the corresponding raw block devices on each node.

Make sure hostnames/manage points at the exact hostname reported by kubectl get nodes (e.g. ubuntu-1), and that hostnames/storage is the IP address on the storage network (the IP address of ubuntu-1 in this example).

**IMPORTANT**: At present, the topology file must be loaded with a heketi-cli version that matches the server version. The Heketi pod also carries a copy of heketi-cli that can be reached via `kubectl exec ...`.

Edit the topology file to reflect your own choices (hostnames, IPs, and block device names such as xvdg), then deploy it as shown below:

```bash
heketi-client/bin/heketi-cli topology load --json=topology-sample.json
Handling connection for 57598
Found node ip-172-20-0-217.ec2.internal on cluster e6c063ba398f8e9c88a6ed720dc07dd2
Adding device /dev/xvdg ... OK
Found node ip-172-20-0-218.ec2.internal on cluster e6c063ba398f8e9c88a6ed720dc07dd2
Adding device /dev/xvdg ... OK
Found node ip-172-20-0-219.ec2.internal on cluster e6c063ba398f8e9c88a6ed720dc07dd2
Adding device /dev/xvdg ... OK
```

* Next, have heketi provision a volume for storing its own database (yes, this really is the right command: it works for both OpenShift and Kubernetes, and it generates the heketi-storage.json file):

```bash
# heketi-client/bin/heketi-cli setup-openshift-heketi-storage
# kubectl create -f heketi-storage.json
```

> Pitfall: if heketi-cli reports a "no space" error when running the setup-openshift-heketi-storage subcommand, it may be because the topology load command was inadvertently run with mismatched server and heketi-cli versions. Stop the running heketi pod (kubectl scale deployment deploy-heketi --replicas=0), manually remove any signatures left on the storage block devices, then bring the heketi pod back up (kubectl scale deployment deploy-heketi --replicas=1). Reload the topology with a matching heketi-cli version and retry this step.

* Wait until the job completes, then delete the components of the bootstrap Heketi instance:

```bash
# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
```

* Create the long-lived Heketi instance (with persistent storage):

```bash
# kubectl create -f heketi-deployment.json
service "heketi" created
deployment "heketi" created
```

* With this done, the heketi db lives on a GlusterFS volume and is not reset whenever the heketi pod restarts (no data is lost; the storage is persistent).

Use commands such as `heketi-cli cluster list` and `heketi-cli volume list` to confirm that the previously established cluster exists and that heketi can list the db storage volume created during the bootstrap phase.

# Usage Example

There are two ways to provision storage. The common method is to set up a StorageClass and let Kubernetes automatically provision storage for submitted PersistentVolumeClaims. Alternatively, volumes (PVs) can be created and managed manually through Kubernetes, or used directly via heketi-cli.
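As a sketch of the manual route (assuming a volume already created with `heketi-cli volume create`, and an Endpoints object, here hypothetically named glusterfs-cluster, that lists the gluster node IPs), a hand-written PV could look like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-manual-pv          # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster   # assumed Endpoints object with the gluster node IPs
    path: vol_example              # replace with the volume name reported by heketi-cli
```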

See the [gluster-kubernetes hello world example](https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md) for more information on the StorageClass.

# My Example (original notes, not part of the translation)

* My topology file: three nodes, ubuntu-1 (192.168.5.191), ubuntu-2 (192.168.5.192), and ubuntu-3 (192.168.5.193), each with two disks (sdb and sdc) used for storage.

```bash
# cat topology-sample.json
```

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ubuntu-1"
              ],
              "storage": [
                "192.168.5.191"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ubuntu-2"
              ],
              "storage": [
                "192.168.5.192"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ubuntu-3"
              ],
              "storage": [
                "192.168.5.193"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb",
            "/dev/sdc"
          ]
        }
      ]
    }
  ]
}
```

* Confirm that the glusterfs and heketi pods are running properly:

```bash
# kubectl get pod
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-gf5zc           1/1       Running   2          8h
glusterfs-ngc55           1/1       Running   2          8h
glusterfs-zncjs           1/1       Running   0          2h
heketi-5c8ffcc756-x9gnv   1/1       Running   5          7h
```

* Example StorageClass yaml file:

```bash
# cat storage-class-slow.yaml
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                            # name of the StorageClass
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.103.98.75:8080"   # cluster IP and port of the heketi service
  restuser: "admin"                     # any value works here, since auth is not enabled
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"             # provisioned volumes default to 3 replicas
```

* PVC example:

```bash
# cat pvc-sample.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "slow"   # must match the StorageClass name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

View the created pvc and pv:

```bash
# kubectl get pvc|grep myclaim
NAME      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound     pvc-e98e9117-3ed7-11e8-b61d-08002795cb26   1Gi        RWO            slow           28s

# kubectl get pv|grep myclaim
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE
pvc-e98e9117-3ed7-11e8-b61d-08002795cb26   1Gi        RWO            Delete           Bound     default/myclaim   slow                     1m
```

* You can set the slow StorageClass as the default, so that the platform automatically allocates PVs from the glusterfs cluster:

```bash
# kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io "slow" patched

# kubectl get sc
NAME             PROVISIONER               AGE
default          fuseim.pri/ifs            1d
slow (default)   kubernetes.io/glusterfs   6h
```

# Quota Enforcement Test

A mysql2 instance with 2G of storage has already been deployed via Helm:

```bash
# helm list
NAME     REVISION   UPDATED                    STATUS     CHART         NAMESPACE
mysql2   1          Thu Apr 12 15:27:11 2018   DEPLOYED   mysql-0.3.7   default
```

View the PVC and PV; the size is 2Gi and the claim is named mysql2-mysql:

```bash
# kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql2-mysql   Bound     pvc-ea4ae3e0-3e22-11e8-8bb6-08002795cb26   2Gi        RWO            slow           19h

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
pvc-ea4ae3e0-3e22-11e8-8bb6-08002795cb26   2Gi        RWO            Delete           Bound     default/mysql2-mysql   slow                     19h
```

View the mysql pod:

```bash
# kubectl get pod|grep mysql2
mysql2-mysql-56d64f5b77-j2v84   1/1       Running   2          19h
```

Enter the mysql container:

```bash
# kubectl exec -it mysql2-mysql-56d64f5b77-j2v84 /bin/bash
```

Change to the mount path and inspect the mount information:

```bash
root@mysql2-mysql-56d64f5b77-j2v84:/# cd /var/lib/mysql
root@mysql2-mysql-56d64f5b77-j2v84:/var/lib/mysql# df -h
Filesystem                                          Size  Used Avail Use% Mounted on
none                                                 48G  9.2G   37G  21% /
tmpfs                                               1.5G     0  1.5G   0% /dev
tmpfs                                               1.5G     0  1.5G   0% /sys/fs/cgroup
/dev/mapper/ubuntu--1--vg-root                       48G  9.2G   37G  21% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.5.191:vol_2c2227ee65b64a0225aa9bce848a9925  2.0G  264M  1.8G  13% /var/lib/mysql
tmpfs                                               1.5G   12K  1.5G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.5G     0  1.5G   0% /sys/firmware
```

Write data with dd. After writing for a while the volume fills up and an error is reported. (The error message itself is buggy: instead of reporting that the device is full, it reports a read-only file system, presumably an interaction issue between glusterfs and docker.)

```bash
root@mysql2-mysql-56d64f5b77-j2v84:/var/lib/mysql# dd if=/dev/zero of=test.img bs=8M count=300

dd: error writing 'test.img': Read-only file system
dd: closing output file 'test.img': Input/output error
```

Check the file sizes after the volume has filled up:

```bash
root@mysql2-mysql-56d64f5b77-j2v84:/var/lib/mysql# ls -l
total 2024662
-rw-r----- 1 mysql mysql         56 Apr 12 07:27 auto.cnf
-rw-r----- 1 mysql mysql       1329 Apr 12 07:27 ib_buffer_pool
-rw-r----- 1 mysql mysql   50331648 Apr 12 12:05 ib_logfile0
-rw-r----- 1 mysql mysql   50331648 Apr 12 07:27 ib_logfile1
-rw-r----- 1 mysql mysql   79691776 Apr 12 12:05 ibdata1
-rw-r----- 1 mysql mysql   12582912 Apr 12 12:05 ibtmp1
drwxr-s--- 2 mysql mysql       8192 Apr 12 07:27 mysql
drwxr-s--- 2 mysql mysql       8192 Apr 12 07:27 performance_schema
drwxr-s--- 2 mysql mysql       8192 Apr 12 07:27 sys
-rw-r--r-- 1 root  mysql 1880887296 Apr 13 02:47 test.img
```

Check the mount information again (the display is wrong here, presumably a glusterfs bug):

```bash
root@mysql2-mysql-56d64f5b77-j2v84:/var/lib/mysql# df -h
Filesystem                                          Size  Used Avail Use% Mounted on
none                                                 48G  9.2G   37G  21% /
tmpfs                                               1.5G     0  1.5G   0% /dev
tmpfs                                               1.5G     0  1.5G   0% /sys/fs/cgroup
/dev/mapper/ubuntu--1--vg-root                       48G  9.2G   37G  21% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.5.191:vol_2c2227ee65b64a0225aa9bce848a9925  2.0G  -16E     0 100% /var/lib/mysql
tmpfs                                               1.5G   12K  1.5G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.5G     0  1.5G   0% /sys/firmware
```

Check the directory size; it is 2G:

```bash
# du -h
25M     ./mysql
825K    ./performance_schema
496K    ./sys
2.0G    .
```

This shows that the glusterfs quota is effective: usage is capped at the 2G limit.

# Dynamically Provisioning Kubernetes Backend Storage Volumes with NFS

This article is a translation of the nfs-client-provisioner [documentation](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client). It describes how to use the nfs-client-provisioner application to put an NFS server behind Kubernetes as a persistent-storage backend and provision PVs dynamically. Prerequisites: an NFS server is already installed, and it has network connectivity to the Kubernetes worker nodes.

All files used below come from the nfs-client directory of `git clone https://github.com/kubernetes-incubator/external-storage.git`.

## nfs-client-provisioner

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; an existing NFS server must supply the storage.

- PVs are provisioned (as directories on the NFS server) with the naming format `${namespace}-${pvcName}-${pvName}`
- When a PV is reclaimed, its directory (on the NFS server) is renamed to `archived-${namespace}-${pvcName}-${pvName}`
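The resulting directory names on the NFS server can be sketched as follows (the values are illustrative):

```shell
namespace="default"
pvcName="test-claim"
pvName="pvc-fe3cb938"

dir="${namespace}-${pvcName}-${pvName}"   # directory while the PV is bound
archived="archived-${dir}"                # directory after the PV is reclaimed
echo "$dir"
echo "$archived"
```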

## Installation and Deployment

- Edit the deployment file `deploy/deployment.yaml` and deploy it.

The only things that need changing are the IP address of the NFS server (10.10.10.60) and the path it exports (`/ifs/kubernetes`); update both to match your actual NFS server and shared directory.

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
```

- Edit the StorageClass file `deploy/class.yaml` and deploy it.

Nothing needs changing here, though you may rename the provisioner; if you do, the name must match PROVISIONER_NAME in the deployment above.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
```

## Authorization

If your cluster has RBAC enabled, or you are running OpenShift, you must authorize the provisioner. If you deploy in a namespace/project other than the default "default", edit `deploy/auth/clusterrolebinding.yaml` or adjust the `oadm policy` directive accordingly.
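For example (the namespace nfs-provisioner below is hypothetical), the subject in `deploy/auth/clusterrolebinding.yaml` would be adjusted roughly like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner   # hypothetical; the repo default is "default"
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```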

### With RBAC enabled

Run the following commands to grant the permissions:

```bash
$ kubectl create -f deploy/auth/serviceaccount.yaml
serviceaccount "nfs-client-provisioner" created
$ kubectl create -f deploy/auth/clusterrole.yaml
clusterrole "nfs-client-provisioner-runner" created
$ kubectl create -f deploy/auth/clusterrolebinding.yaml
clusterrolebinding "run-nfs-client-provisioner" created
$ kubectl patch deployment nfs-client-provisioner -p '{"spec":{"template":{"spec":{"serviceAccount":"nfs-client-provisioner"}}}}'
```

## Testing

Create a test PVC:

- `kubectl create -f deploy/test-claim.yaml`

Create a test pod:

- `kubectl create -f deploy/test-pod.yaml`

In the shared directory on the NFS server, check the subdirectory created for the NFS PV for a file named "SUCCESS".

Delete the test pod:

- `kubectl delete -f deploy/test-pod.yaml`

Delete the test PVC:

- `kubectl delete -f deploy/test-claim.yaml`

In the shared directory on the NFS server, verify that the reclaimed PV's directory name now starts with archived.

## My Example

* NFS server configuration:

```bash
# cat /etc/exports
```

```ini
/media/docker *(no_root_squash,rw,sync,no_subtree_check)
```

* nfs-deployment.yaml example

The NFS server address is ubuntu-master and the exported path is /media/docker; nothing else needs to change.

```bash
# cat nfs-deployment.yaml
```

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: ubuntu-master
            - name: NFS_PATH
              value: /media/docker
      volumes:
        - name: nfs-client-root
          nfs:
            server: ubuntu-master
            path: /media/docker
```

* StorageClass example

You may change the class name; I changed mine to default.

```bash
# cat class.yaml
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: fuseim.pri/ifs
```

* View the StorageClass:

```bash
# kubectl get sc
NAME      PROVISIONER      AGE
default   fuseim.pri/ifs   2d
```

* Make this SC named default the default storage backend for Kubernetes:

```bash
# kubectl patch storageclass default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io "default" patched

# kubectl get sc
NAME                PROVISIONER      AGE
default (default)   fuseim.pri/ifs   2d
```

* Test creating a PVC

View the pvc file:

```bash
# cat test-claim.yaml
```

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Create the PVC:

```bash
# kubectl apply -f test-claim.yaml
persistentvolumeclaim "test-claim" created
root@Ubuntu-master:~/kubernetes/nfs# kubectl get pvc|grep test
test-claim   Bound     pvc-fe3cb938-3f15-11e8-b61d-08002795cb26   1Mi       RWX           default        10s
# kubectl get pv|grep test
pvc-fe3cb938-3f15-11e8-b61d-08002795cb26   1Mi   RWX   Delete   Bound   default/test-claim   default   58s
```

* Start the test pod

The pod spec is shown below; all it does is touch a SUCCESS file inside the test-claim PV.

```bash
# cat test-pod.yaml
```

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
```

Start the pod; after a moment it reaches Completed status, which means it has finished:

```bash
# kubectl apply -f test-pod.yaml
pod "test-pod" created
# kubectl get pod|grep test
test-pod   0/1       Completed   0          40s
```

Check the NFS shared directory for the SUCCESS file:

```bash
# cd default-test-claim-pvc-fe3cb938-3f15-11e8-b61d-08002795cb26
# ls
SUCCESS
```

This confirms that the deployment works and that NFS shared volumes can be provisioned dynamically.