Merge pull request #178 from whmzsu/whmzsu

add kubeadm introduction and setting up guide
pull/179/merge
Jimmy Song 2018-04-15 19:36:30 +08:00 committed by GitHub
commit 600234b916
6 changed files with 423 additions and 6 deletions


@ -121,6 +121,8 @@
- [Install the dashboard add-on](practice/dashboard-addon-installation.md)
- [Install the heapster add-on](practice/heapster-addon-installation.md)
- [Install the EFK add-on](practice/efk-addon-installation.md)
- [Quickly build a test cluster with kubeadm](practice/install-kubernetes-with-kubeadm.md)
- [Quickly build a test cluster on Ubuntu Server 16.04 with kubeadm](practice/install-kubernetes-on-ubuntu-server-16.04-with-kubeadm.md)
- [Service discovery and load balancing](practice/service-discovery-and-loadbalancing.md)
- [Install Traefik ingress](practice/traefik-ingress-installation.md)
- [Distributed load testing](practice/distributed-load-test.md)
@ -142,7 +144,7 @@
- [Using GlusterFS for persistent storage](practice/using-glusterfs-for-persistent-storage.md)
- [Using Heketi as an external provisioner for GlusterFS persistent storage in Kubernetes](practice/using-heketi-gluster-for-persistent-storage.md)
- [Using GlusterFS for persistent storage in OpenShift](practice/storage-for-containers-using-glusterfs-with-openshift.md)
- [CephFS](practice/cephfs.md)
- [Ceph](practice/ceph.md)
- [Using Ceph for persistent storage](practice/using-ceph-for-persistent-storage.md)
- [OpenEBS](practice/openebs.md)
- [Using OpenEBS for persistent storage](practice/using-openebs-for-persistent-storage.md)
@ -220,4 +222,3 @@
- [Kubernetes 1.10 changelog](appendix/kubernetes-1.10-changelog.md)
- [Kubernetes and cloud native yearly summary and outlook](appendix/summary-and-outlook.md)
- [Kubernetes and cloud native: 2017 year-end summary and 2018 outlook](appendix/kubernetes-and-cloud-native-summary-in-2017-and-outlook-for-2018.md)

practice/ceph.md 100644

@ -0,0 +1,64 @@
# A brief introduction to Ceph
This article is a partial translation of [this post](https://www.stratoscale.com/blog/storage/introduction-to-ceph/).
Ceph is an open-source distributed object, block, and file store. The project was born in 2003 as the outcome of Sage Weil's doctoral dissertation and was released in 2006 under the LGPL 2.1 license. Ceph has been integrated with the Linux kernel and KVM and ships by default in many GNU/Linux distributions.
## Introduction
Today's workloads and infrastructure require different data access methods (object, block, and file), and Ceph supports all of them. It is designed to be scalable and to have no single point of failure. It is open-source software that can run on commodity hardware in production environments.
RADOS (Reliable Autonomic Distributed Object Store) is the core component of Ceph. There is an important distinction between RADOS objects and the objects popular today, such as those provided by Amazon S3, OpenStack Swift, or Ceph's own RADOS Gateway. Between roughly 2005 and 2010, object storage devices (OSDs) became a popular concept. These OSDs provided strong consistency, exposed different interfaces, and each object usually resided on a single device.
There are several ways to operate on objects in RADOS:
* Through the client library librados, in applications written in C, C++, Java, PHP, and Python
* With the command-line tool `rados`
* Through existing APIs compatible with S3 (Amazon) and Swift (OpenStack)
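The access methods listed above can be illustrated with the `rados` command-line tool. This is only a sketch: the pool name `testpool` and the file names are hypothetical, and it assumes a host with a working Ceph configuration.

```bash
# Store a local file as object "greeting" in pool "testpool"
rados -p testpool put greeting ./hello.txt
# List the objects in the pool
rados -p testpool ls
# Read the object back into a local file
rados -p testpool get greeting ./hello-copy.txt
```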
RADOS is a cluster of Ceph nodes. There are two types of nodes:
* Ceph storage device nodes
* Ceph monitor nodes
Each Ceph storage device node runs one or more Ceph OSD daemons, one per disk device. An OSD is a Linux process (daemon) that handles all operations related to its assigned disk (HDD or SSD). The OSD daemon accesses a local file system to store data and metadata rather than communicating with the disk directly. The file systems commonly used with Ceph are XFS, btrfs, and ext4. Each OSD also needs a journal, which is used for atomic updates to RADOS objects. The journal may reside on a separate disk (usually an SSD) for better performance, and the same disk can serve multiple OSDs on the same node.
Each Ceph monitor node runs a single Ceph Monitor daemon. The Ceph Monitor daemon maintains the master copy of the cluster map. Although a Ceph cluster can work with a single monitor node, more are needed to ensure high availability. Three or more Ceph Monitor nodes are recommended, because they use a quorum to maintain the cluster map. A majority of the monitors is needed to confirm the quorum, so an odd number of monitors is advisable. For example, either 3 or 4 monitors protect against a single failure, while 5 monitors protect against two failures.
Ceph OSD daemons and Ceph clients are cluster-aware, so every Ceph OSD daemon can communicate directly with the other Ceph OSD daemons and the Ceph Monitors. In addition, Ceph clients read and write data by communicating directly with the Ceph OSD daemons.
The Ceph Object Gateway daemon (radosgw) provides two APIs:
* An API compatible with a subset of the Amazon S3 RESTful API
* An API compatible with a subset of the OpenStack Swift API
If RADOS and radosgw provide object storage services to clients, how is Ceph used as block and file storage?
Distributed block storage in Ceph (Ceph RBD) is implemented as a thin layer on top of the object store. The Ceph RADOS Block Device (RBD) stripes data across multiple Ceph OSDs in the cluster. RBD takes advantage of RADOS features such as snapshots, replication, and consistency. RBD communicates with RADOS through a Linux kernel module or the librbd library. In addition, the KVM hypervisor can use librbd to let virtual machines access Ceph volumes.
The Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph cluster to store its data. CephFS requires at least one Ceph Metadata Server (MDS) in the Ceph cluster. The MDS handles all file operations, such as file and directory listings, attributes, and ownership, and it uses RADOS objects to store file system data and attributes. It scales horizontally, so you can add more Ceph Metadata Servers to your cluster to support more file-system-operation clients.
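As a concrete example, a CephFS file system can be mounted with the Linux kernel client. The monitor address and credential paths below are placeholders, not values from this article:

```bash
# Mount CephFS via the kernel client (monitor address and secret file are hypothetical)
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.0.100:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
```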
## Kubernetes and Ceph
Kubernetes supports Ceph block storage (Ceph RBD) and file storage (CephFS) as persistent storage backends. Kubernetes ships with an internal provisioner for Ceph RBD that can be configured for dynamic provisioning; to use CephFS as a dynamic storage provider, an external provisioner has to be installed.
The Kubernetes StorageClasses related to Ceph, from the [official documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/):
| Volume Plugin | Internal Provisioner| Config Example |
| :--- | :---: | :---: |
| AWSElasticBlockStore | ✓ | [AWS](#aws) |
| AzureFile | ✓ | [Azure File](#azure-file) |
| AzureDisk | ✓ | [Azure Disk](#azure-disk) |
| CephFS | - | - |
| Cinder | ✓ | [OpenStack Cinder](#openstack-cinder)|
| FC | - | - |
| FlexVolume | - | - |
| Flocker | ✓ | - |
| GCEPersistentDisk | ✓ | [GCE](#gce) |
| Glusterfs | ✓ | [Glusterfs](#glusterfs) |
| iSCSI | - | - |
| PhotonPersistentDisk | ✓ | - |
| Quobyte | ✓ | [Quobyte](#quobyte) |
| NFS | - | - |
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |
| VsphereVolume | ✓ | [vSphere](#vsphere) |
| PortworxVolume | ✓ | [Portworx Volume](#portworx-volume) |
| ScaleIO | ✓ | [ScaleIO](#scaleio) |
| StorageOS | ✓ | [StorageOS](#storageos) |
| Local | - | [Local](#local) |
Subsequent documents will describe how Kubernetes integrates with Ceph RBD and CephFS.
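As a preview of that integration, a StorageClass using the internal `kubernetes.io/rbd` provisioner looks roughly like the following. The monitor address, pool, and Secret names are placeholders and must match your own cluster; see the StorageClass documentation linked above for the full parameter list.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.0.100:6789          # placeholder Ceph Monitor address
  adminId: admin
  adminSecretName: ceph-admin-secret    # Secret holding the admin key
  adminSecretNamespace: kube-system
  pool: kube                            # RBD pool to provision images from
  userId: kube
  userSecretName: ceph-user-secret
```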


@ -0,0 +1,306 @@
# Quickly building a basic Kubernetes cluster on Ubuntu with kubeadm
This article describes how to install kubeadm on Ubuntu Server 16.04 and use it to quickly build a basic Kubernetes test cluster for learning and testing; the latest version at the time of writing (2018-04-14) is 1.10.1. It draws on two documents from the official Kubernetes site: [Installing kubeadm](https://kubernetes.io/docs/setup/independent/install-kubeadm/) and [Creating a cluster with kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
A production environment needs to consider high availability for each component; for that, please refer to the official Kubernetes installation documentation.
## Overview
This installation calls for at least 4 servers or virtual machines, each with at least 4 GB of memory and 2 CPU cores; the basic architecture is 1 master node and 3 slave nodes. The process installs kubeadm on the Ubuntu servers and then builds a basic Kubernetes cluster, including the canal network. For backend storage, see the storage management content in this book's best-practices section.
The installation uses 4 nodes in total, as follows:
| Role | Hostname | IP address |
|----------|----------- |------------|
| Master | Ubuntu-master | 192.168.0.200 |
| Slave | ubuntu-1 | 192.168.0.201 |
| Slave | ubuntu-2 | 192.168.0.202 |
| Slave | ubuntu-3 | 192.168.0.203 |
## Preparation
* Install Ubuntu Server 16.04 with the default options
* Configure host-name mappings on every node:
```bash
# cat /etc/hosts
127.0.0.1 localhost
192.168.0.200 Ubuntu-master
192.168.0.201 Ubuntu-1
192.168.0.202 Ubuntu-2
192.168.0.203 Ubuntu-3
```
* If access to gcr.io is inconvenient and the images cannot be downloaded, the installation will hang. In that case you can download [the images I exported](https://pan.baidu.com/s/1knjGYvxfSeiixWbK6Le8Jw); after decompression there are 9 tar files, which you load with `docker load < xxxx.tar`
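The loading step can be scripted. The helper below is my own sketch (the function name `load_images` is not from the image package); setting `DRY_RUN=1` only prints the commands, which is handy for previewing before Docker is touched.

```shell
# Run `docker load` for every image tarball in a directory.
# Set DRY_RUN=1 to print the commands instead of executing them.
load_images() {
  dir="${1:-.}"
  for f in "$dir"/*.tar; do
    [ -e "$f" ] || continue          # no .tar files: do nothing
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "docker load < $f"
    else
      docker load < "$f"
    fi
  done
}
```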
## Installing kubeadm on all nodes
Check the apt sources; the configuration below uses the Alibaba Cloud mirrors for both the system packages and the Kubernetes packages.
```bash
# cat /etc/apt/sources.list
```
```
# System package sources
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
# Sources for kubeadm and the Kubernetes components
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
```
Install Docker. The docker.io package from the system sources (version 1.13.1) can be used; on my system the latest version is already installed.
```bash
# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker.io is already the newest version (1.13.1-0ubuntu1~16.04.2).
0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
```
Update the package lists; the GPG error messages can be ignored.
```bash
# apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu xenial InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Ign:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease
Fetched 8,993 B in 0s (20.7 kB/s)
Reading package lists... Done
W: GPG error: https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
W: The repository 'https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
```
Force-install the kubeadm, kubectl, and kubelet packages.
```bash
# apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
kubernetes-cni socat
The following NEW packages will be installed:
kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 5 newly installed, 0 to remove and 4 not upgraded.
Need to get 56.9 MB of archives.
After this operation, 410 MB of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
kubernetes-cni kubelet kubectl kubeadm
Authentication warning overridden.
Get:1 http://mirrors.aliyun.com/ubuntu xenial/universe amd64 socat amd64 1.7.3.1-1 [321 kB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 0.6.0-00 [5,910 kB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.10.1-00 [21.1 MB]
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.10.1-00 [8,906 kB]
Get:5 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.10.1-00 [20.7 MB]
Fetched 56.9 MB in 5s (11.0 MB/s)
Use of uninitialized value $_ in lc at /usr/share/perl5/Debconf/Template.pm line 287.
Selecting previously unselected package kubernetes-cni.
(Reading database ... 191799 files and directories currently installed.)
Preparing to unpack .../kubernetes-cni_0.6.0-00_amd64.deb ...
Unpacking kubernetes-cni (0.6.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../socat_1.7.3.1-1_amd64.deb ...
Unpacking ....
....
```
Once kubeadm is installed, it can be used to quickly deploy a Kubernetes cluster.
## Installing a Kubernetes cluster with kubeadm
### Initializing the master node with kubeadm
Because canal will be used, network configuration parameters have to be passed at initialization time, setting the Kubernetes pod subnet to 10.244.0.0/16. Do not change this address to anything else: it must match the value in the canal YAML used later, so if you change one, change both.
Downloading the images requires access to gcr.io, since the container images are pulled from there. If that access is hard to come by, you can use [the images I exported](https://pan.baidu.com/s/1knjGYvxfSeiixWbK6Le8Jw); after decompression there are 9 tar files, which you load with `docker load < xxxx.tar`
With network access to gcr.io, the whole installation process is very simple.
```bash
# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.200
[init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ubuntu-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ubuntu-master] and IPs [192.168.0.200]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.003828 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ubuntu-master as master by adding a label and a taint
[markmaster] Master ubuntu-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: rw4enn.mvk547juq7qi2b5f
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.0.200:6443 --token rw4enn.mvk547juq7qi2b5f --discovery-token-ca-cert-hash sha256:ba260d5191213382a806a9a7d92c9e6bb09061847c7914b1ac584d0c69471579
```
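If the `kubeadm join` command printed at the end of the output is lost, it can be recovered later on the master. `kubeadm token list` shows the bootstrap tokens, and the CA certificate hash can be recomputed with openssl, assuming the default certificate path:

```bash
# List bootstrap tokens on the master
kubeadm token list
# Recompute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
```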
Run the following commands to configure kubectl.
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
The master node is now configured, and kubectl can be used for all kinds of operations. Following the hint at the end of the output above, the next step is to join the slave nodes to the cluster.
### Joining slave nodes to the cluster
Run the following command on each slave node to join it to the cluster; a normal run produces output like this:
```bash
#kubeadm join 192.168.0.200:6443 --token rw4enn.mvk547juq7qi2b5f --discovery-token-ca-cert-hash sha256:ba260d5191213382a806a9a7d92c9e6bb09061847c7914b1ac584d0c69471579
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.0.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.200:6443"
[discovery] Requesting info from "https://192.168.0.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.200:6443"
[discovery] Successfully established connection with API Server "192.168.0.200:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Wait for the nodes to finish joining; while they are joining, the status looks like this:
```bash
# kubectl get node
NAME STATUS ROLES AGE VERSION
ubuntu-1 NotReady <none> 6m v1.10.1
ubuntu-2 NotReady <none> 6m v1.10.1
ubuntu-3 NotReady <none> 6m v1.10.1
ubuntu-master NotReady master 10m v1.10.1
```
Once the nodes have joined, the view from the master node looks like this:
```bash
root@Ubuntu-master:~# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-ubuntu-master 1/1 Running 0 21m 192.168.0.200 ubuntu-master
kube-apiserver-ubuntu-master 1/1 Running 0 21m 192.168.0.200 ubuntu-master
kube-controller-manager-ubuntu-master 1/1 Running 0 22m 192.168.0.200 ubuntu-master
kube-dns-86f4d74b45-wkfk2 0/3 Pending 0 22m <none> <none>
kube-proxy-6ddb4 1/1 Running 0 22m 192.168.0.200 ubuntu-master
kube-proxy-7ngb9 1/1 Running 0 17m 192.168.0.202 ubuntu-2
kube-proxy-fkhhx 1/1 Running 0 18m 192.168.0.201 ubuntu-1
kube-proxy-rh4lq 1/1 Running 0 18m 192.168.0.203 ubuntu-3
kube-scheduler-ubuntu-master 1/1 Running 0 21m 192.168.0.200 ubuntu-master
```
The kube-dns component will finish starting automatically once the network plugin has been installed.
## Installing the canal network plugin
Following the [official canal documentation](https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/), download and apply the two files below: one configures canal's RBAC permissions, and the other deploys the canal DaemonSet.
```bash
# kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
clusterrole.rbac.authorization.k8s.io "calico" created
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-calico" created
```
```bash
# kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/canal/canal.yaml
configmap "canal-config" created
daemonset.extensions "canal" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
serviceaccount "canal" created
```
Check canal's installation status.
```bash
# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
canal-fc94k 3/3 Running 10 4m 192.168.0.201 ubuntu-1
canal-rs2wp 3/3 Running 10 4m 192.168.0.200 ubuntu-master
canal-tqd4l 3/3 Running 10 4m 192.168.0.202 ubuntu-2
canal-vmpnr 3/3 Running 10 4m 192.168.0.203 ubuntu-3
etcd-ubuntu-master 1/1 Running 0 28m 192.168.0.200 ubuntu-master
kube-apiserver-ubuntu-master 1/1 Running 0 28m 192.168.0.200 ubuntu-master
kube-controller-manager-ubuntu-master 1/1 Running 0 29m 192.168.0.200 ubuntu-master
kube-dns-86f4d74b45-wkfk2 3/3 Running 0 28m 10.244.2.2 ubuntu-3
kube-proxy-6ddb4 1/1 Running 0 28m 192.168.0.200 ubuntu-master
kube-proxy-7ngb9 1/1 Running 0 24m 192.168.0.202 ubuntu-2
kube-proxy-fkhhx 1/1 Running 0 24m 192.168.0.201 ubuntu-1
kube-proxy-rh4lq 1/1 Running 0 24m 192.168.0.203 ubuntu-3
kube-scheduler-ubuntu-master 1/1 Running 0 28m 192.168.0.200 ubuntu-master
```
canal and kube-dns are now both running normally, so a test environment with basic functionality is fully deployed.
Checking the cluster's node status at this point shows the latest version, v1.10.1.
```bash
# kubectl get node
NAME STATUS ROLES AGE VERSION
ubuntu-1 Ready <none> 27m v1.10.1
ubuntu-2 Ready <none> 27m v1.10.1
ubuntu-3 Ready <none> 27m v1.10.1
ubuntu-master Ready master 31m v1.10.1
```
Allow the master to run pods as well (by default the master does not run pods). This is acceptable in a test environment but not recommended in production.
```bash
#kubectl taint nodes --all node-role.kubernetes.io/master-
node "ubuntu-master" untainted
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
taint "node-role.kubernetes.io/master:" not found
```
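If the master should later stop scheduling ordinary pods again, the taint can be restored; the command below is the standard inverse of the removal above.

```bash
# Re-apply the master taint so ordinary pods are no longer scheduled there
kubectl taint nodes ubuntu-master node-role.kubernetes.io/master=:NoSchedule
```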
To enable other cluster features, see the follow-up articles.


@ -0,0 +1,40 @@
# kubeadm
## Introduction
**kubeadm** is a toolkit that helps you bootstrap a best-practice Kubernetes cluster in an easy, reasonably secure, and extensible way. It also supports managing [Bootstrap Tokens](/docs/admin/bootstrap-tokens/) for you and upgrading/downgrading clusters.
kubeadm aims to set up a minimum viable cluster that passes the [Kubernetes Conformance tests](http://blog.kubernetes.io/2017/10/software-conformance-certification), without installing other feature add-ons.
By design it does not install a networking solution for you; users must install a third-party CNI-compliant networking solution themselves (such as flannel, calico, or canal).
kubeadm can run on many kinds of machines, whether a Linux laptop, virtual machines, physical/cloud servers, or a Raspberry Pi. This makes kubeadm well suited for integration with provisioning systems of all kinds (such as Terraform or Ansible).
kubeadm is a simple way for newcomers to start trying Kubernetes, possibly for the first time; an easy way for existing users to test their applications and stitch a cluster together; and a building block for other ecosystem tools and installers with a larger scope.
kubeadm is very easy to install on operating systems that support deb or rpm packages. [SIG Cluster Lifecycle](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle), the SIG that maintains kubeadm, provides these packages pre-built, and they can be used on other operating systems as well.
## kubeadm maturity
| Area | Maturity level |
|---------------------------|--------------- |
| Command line UX | beta |
| Implementation | beta |
| Config file API | alpha |
| Self-hosting | alpha |
| kubeadm alpha subcommands | alpha |
| CoreDNS | alpha |
| DynamicKubeletConfig | alpha |
The overall feature state of kubeadm is **Beta**, heading toward **General Availability (GA)** in 2018. Some sub-features, such as self-hosting and the configuration file API, are still under active development. The implementation of cluster creation may change slightly as the tool evolves, but the overall implementation should be fairly stable. Anything under `kubeadm alpha` is, by definition, supported at the alpha level.
## Support timeframe
Kubernetes releases are generally supported for nine months, during which patch releases are cut from the release branch if a serious bug or security issue is found. Here are the latest Kubernetes releases and their support timeframe; this also applies to `kubeadm`.
| Kubernetes version | Release month | End-of-life-month |
|--------------------|----------------|-------------------|
| v1.6.x | March 2017 | December 2017 |
| v1.7.x | June 2017 | March 2018 |
| v1.8.x | September 2017 | June 2018 |
| v1.9.x | December 2017 | September 2018 |
| v1.10.x | March 2018 | December 2018 |


@ -104,8 +104,12 @@ $ kubectl patch deployment nfs-client-provisioner -p '{"spec":{"template":{"spec
* nfs-deployment.yaml example
The NFS server's address is ubuntu-master and the exported path is /media/docker; nothing else needs to be changed.
```yaml
```bash
# cat nfs-deployment.yaml
```
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
@ -201,8 +205,10 @@ pvc-fe3cb938-3f15-11e8-b61d-08002795cb26 1Mi RWX Delete
* Start the test POD
The POD file is as follows; all it does is touch a SUCCESS file in test-claim's PV.
```yaml
```bash
# cat test-pod.yaml
```
```yaml
kind: Pod
apiVersion: v1
metadata: