Merge pull request #1 from gjmzj/master

1-11-PR
pull/74/head^2
崔国正的粑粑 2018-01-11 18:32:33 +08:00 committed by GitHub
commit 8069e81200
44 changed files with 1413 additions and 387 deletions


@ -14,6 +14,5 @@
# [可选]多master部署时的负载均衡配置
- hosts: lb
gather_facts: True
roles:
- lb

07.flannel.yml 100644

@ -0,0 +1,3 @@
- hosts: kube-cluster
roles:
- flannel

11.harbor.yml 100644

@ -0,0 +1,18 @@
- hosts: harbor
roles:
- prepare
- docker
- harbor
- hosts: kube-node
tasks:
- name: harbor证书目录创建
file: name=/etc/docker/certs.d/{{ HARBOR_DOMAIN }} state=directory
- name: harbor服务器证书安装
copy: src={{ base_dir }}/roles/prepare/files/ca.pem dest=/etc/docker/certs.d/{{ HARBOR_DOMAIN }}/ca.crt
# 如果你的环境中有dns服务器可以跳过hosts文件设置
- name: 增加harbor的hosts解析
shell: "sed -i '/{{ HARBOR_DOMAIN }}/d' /etc/hosts && \
echo {{ HARBOR_IP }} {{ HARBOR_DOMAIN }} >> /etc/hosts"


@ -1,4 +1,5 @@
# 在deploy节点生成CA相关证书以供整个集群使用
# 以及初始化kubedns.yaml配置文件
- hosts: deploy
roles:
- deploy
@ -8,6 +9,7 @@
- kube-cluster
- deploy
- etcd
- lb
roles:
- prepare
@ -16,28 +18,35 @@
roles:
- lb
# 创建etcd集群
- hosts: etcd
roles:
- etcd
# kubectl 客户端配置
- hosts:
- kube-cluster
- deploy
roles:
- kubectl
# docker服务安装
- hosts: kube-cluster
roles:
- docker
- hosts: kube-cluster
roles:
- calico
# master 节点部署
- hosts: kube-master
roles:
- kube-master
# node 节点部署
- hosts: kube-node
roles:
- kube-node
# 集群网络插件部署,只能选择一种安装
- hosts: kube-cluster
roles:
- { role: calico, when: "CLUSTER_NETWORK == 'calico'" }
- { role: flannel, when: "CLUSTER_NETWORK == 'flannel'" }


@ -1,7 +1,9 @@
# 警告:此脚本将清理整个K8S集群包括所有POD、ETCD数据等
# 请三思后运行此脚本 ansible-playbook 99.clean.yml
- hosts: kube-node
- hosts:
- kube-node
- new-node
tasks:
- name: stop kube-node service
shell: "systemctl stop kubelet kube-proxy"
@ -15,6 +17,7 @@
file: name={{ item }} state=absent
with_items:
- "/var/lib/kubelet/"
- "/var/lib/kube-proxy/"
- "/etc/kubernetes/"
- "/etc/systemd/system/kubelet.service"
- "/etc/systemd/system/kube-proxy.service"
@ -37,10 +40,11 @@
- hosts:
- kube-cluster
- new-node
- deploy
tasks:
- name: stop calico-node service
shell: "systemctl stop calico-node docker"
- name: stop docker service
shell: "systemctl stop docker"
ignore_errors: true
# 因为calico-kube-controller使用了host网络相当于使用了docker -net=host需要
@ -56,6 +60,7 @@
with_items:
- "/etc/cni/"
- "/root/.kube/"
- "/run/flannel/"
- "/etc/calico/"
- "/var/run/calico/"
- "/var/log/calico/"
@ -72,7 +77,13 @@
&& iptables -F -t mangle && iptables -X -t mangle"
- name: 清理网络
shell: "ip link del docker0; ip link del tunl0; systemctl restart networking; systemctl restart network"
shell: "ip link del docker0; \
ip link del tunl0; \
ip link del flannel.1; \
ip link del cni0; \
ip link del mynet0; \
systemctl restart networking; \
systemctl restart network"
ignore_errors: true
- hosts: etcd


@ -2,26 +2,27 @@
![docker](./pics/docker.jpg) ![kube](./pics/kube.jpg) ![ansible](./pics/ansible.jpg)
本系列文档致力于提供快速部署高可用`k8s`集群的工具,并且也努力成为`k8s`实践、使用的参考书;基于二进制方式部署和利用`ansible-playbook`实现自动化:既提供一键安装脚本,也可以分步执行安装各个组件,同时讲解每一步主要参数配置和注意事项。
本系列文档致力于提供快速部署高可用`k8s`集群的工具,并且也努力成为`k8s`实践、使用的参考书;基于二进制方式部署和利用`ansible-playbook`实现自动化:既提供一键安装脚本,也可以分步执行安装各个组件,同时讲解每一步主要参数配置和注意事项;二进制方式部署有助于理解系统各组件的交互原理和熟悉组件启动参数,有助于快速排查解决实际问题
**集群特性:`TLS` 双向认证、`RBAC` 授权、多`Master`高可用、支持`Network Policy`**
**二进制方式部署优势:有助于理解系统各组件的交互原理和熟悉组件启动参数,有助于快速排查解决实际问题**
**注意:** 为提高集群网络插件安装的灵活性,使用`DaemonSet Pod`方式运行网络插件,目前支持`Calico` `flannel`可选
文档基于`Ubuntu 16.04/CentOS 7`,其他系统需要读者自行替换部分命令;由于使用经验有限和简化脚本考虑,已经尽量避免`ansible-playbook`的高级特性和复杂逻辑。
你可能需要掌握基本`kubernetes` `docker` `linux shell` 知识,关于`ansible`建议阅读 [ansible超快入门](http://weiweidefeng.blog.51cto.com/1957995/1895261) 基本够用。
欢迎提`Issues`和`PRs`参与维护项目。
请阅读[项目分支说明](branch.md),欢迎提`Issues`和`PRs`参与维护项目。
## 组件版本
1. kubernetes v1.9.0
1. etcd v3.2.11
1. docker 17.09.1-ce
1. calico/node v2.6.3
1. kubernetes v1.9.1
1. etcd v3.2.13
1. docker 17.12.0-ce
1. calico/node v2.6.5
1. flannel v0.9.1
+ 附:集群用到的所有二进制文件已打包好供下载 [https://pan.baidu.com/s/1i5u3SEh](https://pan.baidu.com/s/1i5u3SEh)
+ 附:集群用到的所有二进制文件已打包好供下载 [https://pan.baidu.com/s/1c4RFaA](https://pan.baidu.com/s/1c4RFaA)
+ 注:`Kubernetes v1.8.x` 版本请切换到项目分支 `v1.8`, 若你需要从v1.8 升级至 v1.9,请参考 [升级注意](docs/upgrade.md)
## 快速指南
@ -35,13 +36,19 @@
1. [安装etcd集群](docs/02-安装etcd集群.md)
1. [配置kubectl命令行工具](docs/03-配置kubectl命令行工具.md)
1. [安装docker服务](docs/04-安装docker服务.md)
1. [安装calico网络组件](docs/05-安装calico网络组件.md)
1. [安装kube-master节点](docs/06-安装kube-master节点.md)
1. [安装kube-node节点](docs/07-安装kube-node节点.md)
1. [安装kube-master节点](docs/05-安装kube-master节点.md)
1. [安装kube-node节点](docs/06-安装kube-node节点.md)
1. [安装calico网络组件](docs/07-安装calico网络组件.md)
1. [安装flannel网络组件](docs/07-安装flannel网络组件.md)
## 使用指南
基本k8s集群安装完成后需要安装一些常用插件(`kubedns` `dashboard` `ingress`等);接着介绍一些集群操作场景和思路;然后介绍一些应用部署实践,请根据这份[目录](docs/guide/index.md)阅读你所感兴趣的内容。尚在更新中...
- 常用插件部署 [kubedns](docs/guide/kubedns.md) [dashboard](docs/guide/dashboard.md) [heapster](docs/guide/heapster.md) [ingress](docs/guide/ingress.md) [efk](docs/guide/efk.md) [harbor](docs/guide/harbor.md)
- K8S 特性实验 [HPA](docs/guide/hpa.md) [NetworkPolicy](docs/guide/networkpolicy.md)
- 集群运维指南
- 应用部署实践
请根据这份 [目录](docs/guide/index.md) 阅读你所感兴趣的内容,尚在更新中...
## 参考阅读


@ -1,6 +1,5 @@
# 主要组件版本
+ kubernetes v1.9.0
+ etcd v3.2.11
+ docker 17.09.1-ce
+ calico/node v2.6.3
+ kubernetes v1.9.1
+ etcd v3.2.13
+ docker 17.12.0-ce

branch.md 100644

@ -0,0 +1,7 @@
## 项目分支说明
目前项目分支为 `master` `v1.9` `v1.8`,说明如下:
- `master` 分支将尽量使用最新版k8s和相关组件网络使用`DaemonSet Pod`方式安装,目前提供`calico` `flannel` 可选
- `v1.9` 分支将尽量使用k8s v1.9的最新小版本和相关组件,使用`systemd service`方式安装 `calico`网络
- `v1.8` 分支将尽量使用k8s v1.8的最新小版本和相关组件,使用`systemd service`方式安装 `calico`网络


@ -11,19 +11,35 @@
生产环境使用建议一个节点只担任一个角色,以避免性能瓶颈;这里演示环境将节点绑定多个角色。项目预定义了3个例子,请修改后完成适合你的集群规划(列表后附一个最小规划示意)。
+ [单节点 AllInOne](../example/hosts.allinone.example)
+ [单节点](../example/hosts.allinone.example)
+ [单主多节点](../example/hosts.s-master.example)
+ [多主多节点](../example/hosts.m-masters.example)
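下面是一个多主多节点最小规划的示意(IP 仅为举例,组名和节点变量请以 example/hosts.m-masters.example 模板为准):
``` bash
# 假设的示例 IP,请按实际环境修改
[deploy]
192.168.1.1

[kube-master]
192.168.1.1
192.168.1.2

[kube-node]
192.168.1.3
192.168.1.4
```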
## 集群所用到的参数举例如下:
``` bash
# ---------集群主要参数---------------
#集群 MASTER IP, 需要负载均衡一般为VIP地址
MASTER_IP="192.168.1.10"
KUBE_APISERVER="https://192.168.1.10:8443"
#pause镜像地址
POD_INFRA_CONTAINER_IMAGE=mirrorgooglecontainers/pause-amd64:3.0
#TLS Bootstrapping 使用的 Token使用 head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 生成
BOOTSTRAP_TOKEN="c30302226d4b810e08731702d3890f50"
# 集群网络插件目前支持calico和flannel
CLUSTER_NETWORK="calico"
# 部分calico相关配置更全配置可以去roles/calico/templates/calico.yaml.j2自定义
# 设置 CALICO_IPV4POOL_IPIP="off",可以提高网络性能,条件限制详见 07-安装calico网络组件.md
CALICO_IPV4POOL_IPIP="always"
# 设置 calico-node使用的host IPbgp邻居通过该地址建立可手动指定端口"interface=eth0"或使用如下自动发现
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"
# 部分flannel配置详见roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"
# 服务网段 (Service CIDR部署前路由不可达部署后集群内使用 IP:Port 可达
SERVICE_CIDR="10.68.0.0/16"
@ -39,7 +55,7 @@ CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"
# 集群 DNS 服务 IP (从 SERVICE_CIDR 中预分配)
CLUSTER_DNS_SVC_IP="10.68.0.2"
# 集群 DNS 域名,后续生成 master节点证书时也会用到这个默认根域名
# 集群 DNS 域名
CLUSTER_DNS_DOMAIN="cluster.local."
# etcd 集群间通信的IP和端口, **根据实际 etcd 集群成员设置**
@ -48,7 +64,7 @@ ETCD_NODES="etcd1=https://192.168.1.1:2380,etcd2=https://192.168.1.2:2380,etcd3=
# etcd 集群服务地址列表, **根据实际 etcd 集群成员设置**
ETCD_ENDPOINTS="https://192.168.1.1:2379,https://192.168.1.2:2379,https://192.168.1.3:2379"
# 集群basic auth 使用的用户名和密码【可选】
# 集群basic auth 使用的用户名和密码
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="test1234"
@ -62,11 +78,13 @@ ca_dir="/etc/kubernetes/ssl"
#部署目录,即 ansible 工作目录,建议不要修改
base_dir="/etc/ansible"
#私有仓库 harbor服务器 (域名或者IP) 【可选】
#需要把 harbor服务器证书复制到roles/harbor/files/harbor-ca.crt
HARBOR_SERVER="harbor.mydomain.com"
#私有仓库 harbor服务器 (域名或者IP)
#HARBOR_IP="192.168.1.8"
#HARBOR_DOMAIN="harbor.yourdomain.com"
```
+ 请事先规划好使用何种网络插件(calico flannel),并配置对应网络插件的参数
## 部署步骤
按照[多主多节点](../example/hosts.m-masters.example)示例的节点配置,至少准备4台虚机,测试搭建一个多主高可用集群。
@ -77,7 +95,7 @@ HARBOR_SERVER="harbor.mydomain.com"
+ 最小化安装`Ubuntu 16.04 server`或者`CentOS 7 Minimal`
+ 配置基础网络、更新源、SSH登陆等
### 2.安装依赖工具(每个节点)
### 2.在每个节点安装依赖工具
Ubuntu 16.04 请执行以下脚本:
@ -103,7 +121,7 @@ yum erase firewalld firewalld-filesystem python-firewall -y
# 安装python
yum install python -y
```
### 3.ansible安装及准备(仅deploy节点)
### 3.在deploy节点安装及准备ansible
``` bash
# Ubuntu 16.04
@ -116,14 +134,32 @@ yum install git python-pip -y
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```
### 4.在deploy节点配置免密码登陆所有节点包括自身
### 4.在deploy节点配置免密码登陆
``` bash
ssh-keygen -t rsa -b 2048 回车 回车 回车
ssh-copy-id $IPs #$IPs为所有节点地址按照提示输入yes 和root密码
ssh-copy-id $IPs #$IPs为所有节点地址包括自身按照提示输入yes 和root密码
```
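如果节点较多,可以用一个简单的循环批量分发公钥(示意,IP 列表为假设值):
``` bash
# 批量分发公钥到所有节点(包括 deploy 节点自身)
for ip in 192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4; do
  ssh-copy-id "$ip"
done
```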
### 5.在deploy节点编排k8s安装
``` bash
# 下载项目文件
git clone https://github.com/gjmzj/kubeasz.git
mv kubeasz /etc/ansible
# 下载已打包好的binaries并且解压缩到/etc/ansible/bin目录
# 国内请从我分享的百度云链接下载 https://pan.baidu.com/s/1c4RFaA
# 如果你有合适网络环境也可以按照/down/download.sh自行从官网下载各种tar包到 ./down目录并执行download.sh
tar zxvf k8s.191.tar.gz
mv bin/* /etc/ansible/bin
cd /etc/ansible
cp example/hosts.m-masters.example hosts
# 根据上文实际规划修改此hosts文件
vi hosts
```
+ 验证ansible安装
在deploy 节点使用如下命令
``` bash
ansible all -m ping
```
@ -146,29 +182,16 @@ ansible all -m ping
"ping": "pong"
}
```
### 5.在deploy节点编排k8s安装
+ 开始安装集群,请阅读每步安装讲解后执行分步安装
``` bash
git clone https://github.com/gjmzj/kubeasz.git
mv kubeasz /etc/ansible
# 下载已打包好的binaries并且解压缩到/etc/ansible/bin目录
# 国内请从我分享的百度云链接下载 https://pan.baidu.com/s/1eSetFSA
# 如果你有合适网络环境也可以按照/down/download.sh自行从官网下载各种tar包到 ./down目录并执行download.sh
tar zxvf k8s.184.tar.gz
mv bin/* /etc/ansible/bin
# 配置ansible的hosts文件
cd /etc/ansible
cp example/hosts.m-masters.example hosts
# 然后根据上文实际规划修改此hosts文件
# 采用分步安装(确定每一步是否安装成功)或者一步安装
# 先不要安装,后文将一步一步讲解后执行安装
#ansible-playbook 01.prepare.yml
#ansible-playbook 02.etcd.yml
#ansible-playbook 03.kubectl.yml
#ansible-playbook 04.docker.yml
#ansible-playbook 05.calico.yml
#ansible-playbook 06.kube-master.yml
#ansible-playbook 07.kube-node.yml
#ansible-playbook 05.kube-master.yml
#ansible-playbook 06.kube-node.yml
#ansible-playbook 07.calico.yml 或者 ansible-playbook 07.flannel.yml 只能选择一种网络插件
#ansible-playbook 90.setup.yml # 一步安装
```


@ -85,7 +85,7 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
- ca.csr
- ca-config.json
```
+ force=no 保证整个安装的幂等性如果已经生成过CA证书就使用已经存在的CA简单说可以多次运行 `ansible-playbook 90.setup.yml`
+ force=no 保证整个安装的幂等性:如果已经生成过CA证书,就使用已经存在的CA,可以多次运行 `ansible-playbook 90.setup.yml`(幂等性思路示意见下)
+ 如果确实需要更新CA 证书,删除/roles/prepare/files/ca* 可以使用新CA 证书
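幂等性思路可以用如下 shell 片段示意(仅为说明,实际由 ansible 任务的 force=no 实现):
``` bash
# 若 CA 证书已存在则跳过生成,多次运行不会覆盖已有 CA
if [ ! -f /etc/ansible/roles/prepare/files/ca.pem ]; then
  cd /etc/ansible/roles/prepare/files && \
  cfssl gencert -initca ca-csr.json | cfssljson -bare ca
fi
```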
### kubedns.yaml 配置生成
@ -96,6 +96,7 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
``` bash
roles/prepare/
├── files
│   ├── 95-k8s-sysctl.conf
│   ├── ca-config.json
│   ├── ca.csr
│   ├── ca-csr.json
@ -110,6 +111,7 @@ roles/prepare/
1. 修改环境变量,把{{ bin_dir }} 添加到$PATH需要重新登陆 shell生效
1. 把证书工具 CFSSL下发到指定节点
1. 把CA 证书相关下发到指定节点的 {{ ca_dir }} 目录
1. 最后设置基础操作系统软件和系统参数(内核参数示意见下),请阅读脚本中的注释内容
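下面给出一个系统参数配置的示意片段(内容为假设,实际以 roles/prepare/files/95-k8s-sysctl.conf 为准):
``` bash
# 常见的 k8s 节点内核参数示例
cat > /etc/sysctl.d/95-k8s-sysctl.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```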
### LB 负载均衡部署
``` bash


@ -142,4 +142,4 @@ iptables-save|grep FORWARD
-A FORWARD -j ACCEPT
```
[前一篇](03-配置kubectl命令行工具.md) -- [后一篇](05-安装calico网络组件.md)
[前一篇](03-配置kubectl命令行工具.md) -- [后一篇](05-安装kube-master节点.md)


@ -1,4 +1,4 @@
## 06-安装kube-master节点.md
## 05-安装kube-master节点.md
部署master节点包含三个组件`apiserver` `scheduler` `controller-manager`,其中:
@ -212,4 +212,4 @@ etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
[前一篇](05-安装calico网络组件.md) -- [后一篇](07-安装kube-node节点.md)
[前一篇](04-安装docker服务.md) -- [后一篇](06-安装kube-node节点.md)


@ -1,20 +1,18 @@
## 07-安装kube-node节点.md
## 06-安装kube-node节点.md
node 是集群中承载应用的节点前置条件需要先部署好master节点(因为需要操作`用户角色绑定`、`批准kubelet TLS 证书请求`等),它需要部署如下组件:
`kube-node` 是集群中承载应用的节点,前置条件需要先部署好`kube-master`节点(因为需要操作`用户角色绑定`、`批准kubelet TLS 证书请求`等),它需要部署如下组件:
+ docker运行容器
+ calico 配置容器网络
+ kubelet node上最主要的组件
+ calico 配置容器网络 (或者 flannel)
+ kubelet kube-node上最主要的组件
+ kube-proxy 发布应用服务与负载均衡
``` bash
roles/kube-node
├── files
│   └── rbac.yaml
├── tasks
│   └── main.yml
└── templates
├── calico-kube-controllers.yaml.j2
├── cni-default.conf.j2
├── kubelet.service.j2
├── kube-proxy-csr.json.j2
└── kube-proxy.service.j2
@ -56,6 +54,10 @@ kubelet 启动时向 kube-apiserver 发送 TLS bootstrapping 请求,需要先
+ 注意 kubelet bootstrapping认证时是靠 token的后续由 `master`为其生成证书和私钥
+ 以上生成的bootstrap.kubeconfig配置文件需要移动到/etc/kubernetes/目录下,后续在kubelet启动参数中指定该目录下的 bootstrap.kubeconfig(生成过程示意见下)
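bootstrap.kubeconfig 的生成过程大致如下(示意,实际命令以 roles/kube-node/tasks/main.yml 为准):
``` bash
# 设置集群参数、bootstrap 用户凭证和上下文,最后移动到 /etc/kubernetes/
kubectl config set-cluster kubernetes \
  --certificate-authority={{ ca_dir }}/ca.pem \
  --embed-certs=true \
  --server={{ KUBE_APISERVER }} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token={{ BOOTSTRAP_TOKEN }} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
mv bootstrap.kubeconfig /etc/kubernetes/
```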
### 创建cni 基础网络插件配置文件
因为后续需要用 `DaemonSet Pod`方式运行k8s网络插件,所以 kubelet.service 服务必须开启cni相关参数,并且提供cni网络配置文件(参数片段示意见下)
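kubelet 开启 cni 的相关启动参数示意如下(片段,具体以 roles/kube-node/templates/kubelet.service.j2 为准):
``` bash
# kubelet 启动参数片段:启用 cni 网络插件
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir={{ bin_dir }} \
```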
### 创建 kubelet 的服务文件
+ 必须先创建工作目录 `/var/lib/kubelet`
@ -73,7 +75,7 @@ WorkingDirectory=/var/lib/kubelet
ExecStart={{ bin_dir }}/kubelet \
--address={{ NODE_IP }} \
--hostname-override={{ NODE_IP }} \
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
--pod-infra-container-image={{ POD_INFRA_CONTAINER_IMAGE }} \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir={{ ca_dir }} \
@ -189,30 +191,6 @@ WantedBy=multi-user.target
+ --hostname-override 参数值必须与 kubelet 的值一致,否则 kube-proxy 启动后会找不到该 Node从而不会创建任何 iptables 规则
+ 特别注意:kube-proxy 根据 --cluster-cidr 判断集群内部和外部流量,指定 --cluster-cidr 或 --masquerade-all 选项后 kube-proxy 才会对访问 Service IP 的请求做 SNAT;但是这个特性与calico 实现 network policy冲突,所以如果要用 network policy,这两个选项都不要指定(参数片段示意见下)。
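kube-proxy 服务文件中与此相关的参数片段示意如下(假设片段,实际以 roles/kube-node/templates/kube-proxy.service.j2 为准):
``` bash
ExecStart={{ bin_dir }}/kube-proxy \
  --bind-address={{ NODE_IP }} \
  --hostname-override={{ NODE_IP }} \
  --cluster-cidr={{ CLUSTER_CIDR }} \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
```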
### 部署calico-kube-controllers
calico networkpolicy正常工作需要3个组件
+ `master/node` 节点需要运行的 docker 容器 `calico/node`
+ `cni-plugin` 所需的插件二进制和配置文件
+ `calico kubernetes controllers` 负责监听Network Policy的变化并将Policy应用到相应的网络接口
#### 准备RBAC和calico-kube-controllers.yaml 文件
- [RBAC](../roles/kube-node/files/rbac.yaml)
- 最小化权限使用
- [Controllers](../roles/kube-node/templates/calico-kube-controllers.yaml.j2)
- 注意只能跑一个 controller实例
- 注意该 controller实例需要使用宿主机网络 `hostNetwork: true`
#### 创建calico-kube-controllers
``` bash
"sleep 15 && {{ bin_dir }}/kubectl create -f /root/local/kube-system/calico/rbac.yaml && \
{{ bin_dir }}/kubectl create -f /root/local/kube-system/calico/calico-kube-controllers.yaml"
```
+ 增加15s等待集群node ready
### 验证 node 状态
``` bash
@ -225,17 +203,10 @@ journalctl -u kube-proxy
``` bash
NAME STATUS ROLES AGE VERSION
192.168.1.42 Ready <none> 2d v1.8.4
192.168.1.43 Ready <none> 2d v1.8.4
192.168.1.44 Ready <none> 2d v1.8.4
```
并且稍等一会,`kubectl get pod -n kube-system -o wide` 可以看到有个calico controller 的POD运行且使用了host 网络
``` bash
kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-kube-controllers-69bcb79c6-b444q 1/1 Running 0 2d 192.168.1.44 192.168.1.44
192.168.1.42 Ready <none> 2d v1.9.0
192.168.1.43 Ready <none> 2d v1.9.0
192.168.1.44 Ready <none> 2d v1.9.0
```
[前一篇](06-安装kube-master节点.md) -- [后一篇]()
[前一篇](05-安装kube-master节点.md) -- [后一篇](07-安装calico网络组件.md)


@ -1,4 +1,4 @@
## 05-安装calico网络组件.md
## 07-安装calico网络组件.md
推荐阅读[feiskyer-kubernetes指南](https://github.com/feiskyer/kubernetes-handbook) 网络相关内容
@ -27,7 +27,7 @@ Kubernetes Pod的网络是这样创建的
本文档基于CNI driver 调用calico 插件来配置kubernetes的网络常用CNI插件有 `flannel` `calico` `weave`等等这些插件各有优势也在互相借鉴学习优点比如在所有node节点都在一个二层网络时候flannel提供hostgw实现避免vxlan实现的udp封装开销估计是目前最高效的calico也针对L3 Fabric推出了IPinIP的选项利用了GRE隧道封装因此这些插件都能适合很多实际应用场景这里选择calico主要考虑它支持 `kubernetes network policy`
推荐阅读[calico kubernetes Integration Guide](https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/integration)
推荐阅读[calico kubernetes guide](https://docs.projectcalico.org/v2.6/getting-started/kubernetes/)
calico-node需要在所有master节点和node节点安装
@ -36,9 +36,10 @@ roles/calico/
├── tasks
│   └── main.yml
└── templates
├── calico-csr.json.j2
├── calicoctl.cfg.j2
├── calico-node.service.j2
└── cni-calico.conf.j2
├── calico-rbac.yaml.j2
└── calico.yaml.j2
```
请在另外窗口打开[roles/calico/tasks/main.yml](../roles/calico/tasks/main.yml) 文件,对照看以下讲解内容。
@ -69,82 +70,22 @@ roles/calico/
- calicoctl 操作集群网络时访问 etcd 使用证书
- calico/kube-controllers 同步集群网络策略时访问 etcd 使用证书
### 创建 calico-node 的服务文件 [calico-node.service.j2](../roles/calico/templates/calico-node.service.j2)
### 创建 calico DaemonSet yaml文件和rbac 文件
``` bash
[Unit]
Description=calico node
After=docker.service
Requires=docker.service
请对照 roles/calico/templates/calico.yaml.j2文件注释和以下注意内容
[Service]
User=root
PermissionsStartOnly=true
ExecStart={{ bin_dir }}/docker run --net=host --privileged --name=calico-node \
-e ETCD_ENDPOINTS={{ ETCD_ENDPOINTS }} \
-e ETCD_CA_CERT_FILE=/etc/calico/ssl/ca.pem \
-e ETCD_CERT_FILE=/etc/calico/ssl/calico.pem \
-e ETCD_KEY_FILE=/etc/calico/ssl/calico-key.pem \
-e CALICO_LIBNETWORK_ENABLED=true \
-e CALICO_NETWORKING_BACKEND=bird \
-e CALICO_DISABLE_FILE_LOGGING=true \
-e CALICO_IPV4POOL_CIDR={{ CLUSTER_CIDR }} \
-e CALICO_IPV4POOL_IPIP=off \
-e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \
-e FELIX_IPV6SUPPORT=false \
-e FELIX_LOGSEVERITYSCREEN=info \
-e FELIX_IPINIPMTU=1440 \
-e FELIX_HEALTHENABLED=true \
-e IP= \
-v /etc/calico/ssl:/etc/calico/ssl \
-v /var/run/calico:/var/run/calico \
-v /lib/modules:/lib/modules \
-v /run/docker/plugins:/run/docker/plugins \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log/calico:/var/log/calico \
calico/node:v2.6.2
ExecStop={{ bin_dir }}/docker rm -f calico-node
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
```
+ 详细配置参数请参考[calico官方文档](https://docs.projectcalico.org/v2.6/reference/node/configuration)
+ calico-node是以docker容器运行在host上的因此需要把之前的证书目录 /etc/calico/ssl挂载到容器中
+ 配置ETCD_ENDPOINTS 、CA、证书等所有{{ }}变量与ansible hosts文件中设置对应
+ 配置集群POD网络 CALICO_IPV4POOL_CIDR={{ CLUSTER_CIDR }}
+ 本K8S集群运行在自有kvm虚机上虚机间没有网络ACL限制因此可以设置`CALICO_IPV4POOL_IPIP=off`,如果运行在公有云虚机可能需要打开这个选项 `CALICO_IPV4POOL_IPIP=always`
+ **重要:** 本K8S集群运行在同网段kvm虚机上,虚机间没有网络ACL限制,因此可以设置`CALICO_IPV4POOL_IPIP=off`;如果你的主机位于不同网段,或者运行在公有云上,需要打开这个选项 `CALICO_IPV4POOL_IPIP=always`(配置与验证示意见下)
+ 配置FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT 默认允许Pod到Node的网络流量更多[felix配置选项](https://docs.projectcalico.org/v2.6/reference/felix/configuration)
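该选项在 ansible hosts 文件中配置,部署完成后可以验证实际生效的 ipPool 设置(示意):
``` bash
# hosts 文件中按网络环境二选一
CALICO_IPV4POOL_IPIP="off"      # 同网段、无 ACL 限制的环境
CALICO_IPV4POOL_IPIP="always"   # 跨网段或公有云环境
# 部署后确认 ipPool 的 ipip 配置
calicoctl get ipPool -o yaml
```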
### 启动calico-node
### 安装calico 网络
### 准备cni-calico配置文件 [cni-calico.conf.j2](../roles/calico/templates/cni-calico.conf.j2)
``` bash
{
"name": "calico-k8s-network",
"cniVersion": "0.1.0",
"type": "calico",
"etcd_endpoints": "{{ ETCD_ENDPOINTS }}",
"etcd_key_file": "/etc/calico/ssl/calico-key.pem",
"etcd_cert_file": "/etc/calico/ssl/calico.pem",
"etcd_ca_cert_file": "/etc/calico/ssl/ca.pem",
"log_level": "info",
"mtu": 1500,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/root/.kube/config"
}
}
```
+ 主要配置etcd相关、ipam、policy等配置选项[参考](https://docs.projectcalico.org/v2.6/reference/cni-plugin/configuration)
+ 安装之前必须确保`kube-master`和`kube-node`节点已经成功部署
+ 只需要在任意装有kubectl客户端的节点运行 `kubectl create `安装即可,脚本中选取`NODE_ID=node1`节点安装
+ 等待15s后(视网络拉取calico相关镜像速度)calico 网络插件安装完成,并删除之前kube-node安装时默认的cni网络配置(手动安装示意见下)
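如果需要手动执行或重装网络插件,可以参考如下命令(示意,与 roles/calico/tasks/main.yml 中的任务等效):
``` bash
# 在任一配置好 kubectl 的节点上执行
kubectl create -f /root/local/kube-system/calico/
# 等待镜像拉取完成后确认 calico 相关 POD 运行正常
kubectl get pod -n kube-system -o wide
```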
### [可选]配置calicoctl工具 [calicoctl.cfg.j2](roles/calico/templates/calicoctl.cfg.j2)
@ -162,37 +103,42 @@ spec:
### 验证calico网络
执行calico安装 `ansible-playbook 05.calico.yml` 成功后可以验证如下:(需要等待calico/node:v2.6.2 镜像下载完成有时候即便上一步已经配置了docker国内加速还是可能比较慢建议确认以下容器运行起来以后,再执行后续步骤)
执行calico安装成功后可以验证如下(需要等待镜像下载完成有时候即便上一步已经配置了docker国内加速还是可能比较慢确认以下容器运行起来以后,再执行后续验证步骤)
``` bash
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
631dde89eada calico/node:v2.6.2 "start_runit" 10 minutes ago Up 10 minutes calico-node
kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5c6b98d9df-xj2n4 1/1 Running 0 1m
kube-system calico-node-4hr52 2/2 Running 0 1m
kube-system calico-node-8ctc2 2/2 Running 0 1m
kube-system calico-node-9t8md 2/2 Running 0 1m
```
**查看网卡和路由信息**
``` bash
ip a #...省略其他网卡信息可以看到包含类似cali1cxxx的网卡
3: caliccc295a6d4f@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 12:79:2f:fe:8d:28 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1079:2fff:fefe:8d28/64 scope link
valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/ipip 0.0.0.0 brd 0.0.0.0
# tunl0网卡现在不用管是默认生成的当开启IPIP 特性时使用的隧道
先在集群创建几个测试pod: `kubectl run test --image=busybox --replicas=3 sleep 30000`
``` bash
# 查看网卡信息
ip a
```
+ 可以看到包含类似cali1cxxx的网卡是calico为测试pod生成的
+ tunl0网卡现在不用管是默认生成的当开启IPIP 特性时使用的隧道
``` bash
# 查看路由
route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 ens3
192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.20.3.64 192.168.1.65 255.255.255.192 UG 0 0 0 ens3
172.20.3.64 192.168.1.34 255.255.255.192 UG 0 0 0 ens3
172.20.33.128 0.0.0.0 255.255.255.192 U 0 0 0 *
172.20.33.129 0.0.0.0 255.255.255.255 UH 0 0 0 caliccc295a6d4f
172.20.104.0 192.168.1.37 255.255.255.192 UG 0 0 0 ens3
172.20.166.128 192.168.1.36 255.255.255.192 UG 0 0 0 ens3
172.20.104.0 192.168.1.35 255.255.255.192 UG 0 0 0 ens3
172.20.166.128 192.168.1.63 255.255.255.192 UG 0 0 0 ens3
```
**查看所有calico节点状态**
@ -208,9 +154,6 @@ IPv4 BGP status
| 192.168.1.34 | node-to-node mesh | up | 12:34:00 | Established |
| 192.168.1.35 | node-to-node mesh | up | 12:34:00 | Established |
| 192.168.1.63 | node-to-node mesh | up | 12:34:01 | Established |
| 192.168.1.36 | node-to-node mesh | up | 12:34:00 | Established |
| 192.168.1.65 | node-to-node mesh | up | 12:34:00 | Established |
| 192.168.1.37 | node-to-node mesh | up | 12:34:15 | Established |
+--------------+-------------------+-------+----------+-------------+
```
@ -219,9 +162,6 @@ IPv4 BGP status
``` bash
netstat -antlp|grep ESTABLISHED|grep 179
tcp 0 0 192.168.1.66:179 192.168.1.35:41316 ESTABLISHED 28479/bird
tcp 0 0 192.168.1.66:179 192.168.1.36:52823 ESTABLISHED 28479/bird
tcp 0 0 192.168.1.66:179 192.168.1.65:56311 ESTABLISHED 28479/bird
tcp 0 0 192.168.1.66:42000 192.168.1.37:179 ESTABLISHED 28479/bird
tcp 0 0 192.168.1.66:179 192.168.1.34:40243 ESTABLISHED 28479/bird
tcp 0 0 192.168.1.66:179 192.168.1.63:48979 ESTABLISHED 28479/bird
```
@ -238,4 +178,4 @@ calicoctl get ipPool -o yaml
nat-outgoing: true
```
[前一篇](04-安装docker服务.md) -- [后一篇](06-安装kube-master节点.md)
[前一篇](06-安装kube-node节点.md) -- [后一篇]()


@ -0,0 +1,111 @@
## 07-安装flannel网络组件.md
**注意:** 只需选择安装`calico` `flannel`其中之一,如果你已经安装了`calico`,请跳过此步骤。
关于k8s网络设计和CNI Plugin的介绍请阅读[安装calico](07-安装calico网络组件.md)中相关内容。
`Flannel`是最早应用到k8s集群的网络插件之一简单高效且提供多个后端`backend`模式供选择;本文介绍以`DaemonSet Pod`方式集成到k8s集群需要在所有master节点和node节点安装。
``` text
roles/flannel/
├── tasks
│   └── main.yml
└── templates
└── kube-flannel.yaml.j2
```
请在另外窗口打开[roles/flannel/tasks/main.yml](../roles/flannel/tasks/main.yml) 文件,对照看以下讲解内容。
### 下载基础cni 插件
请到CNI 插件最新[release](https://github.com/containernetworking/plugins/releases)页面下载[cni-v0.6.0.tgz](https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-v0.6.0.tgz),解压后里面有很多插件,选择如下几个复制到项目 `bin`目录下
- flannel用到的插件
- bridge
- flannel
- host-local
- loopback
- portmap
Flannel CNI 插件的配置文件可以包含多个`plugin`,或由其调用其他`plugin`;`Flannel DaemonSet Pod`运行以后会生成`/run/flannel/subnet.env`文件,例如:
``` bash
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.17.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
```
然后它利用这个文件信息去配置和调用`bridge`插件来生成容器网络,调用`host-local`来管理`IP`地址,例如:
``` bash
{
"name": "mynet",
"type": "bridge",
"mtu": 1472,
"ipMasq": false,
"isGateway": true,
"ipam": {
"type": "host-local",
"subnet": "10.1.17.0/24"
}
}
```
- 更多相关介绍请阅读:
- [flannel kubernetes 集成](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md)
- [flannel cni 插件](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel)
- [更多 cni 插件](https://github.com/containernetworking/plugins)
### 准备`Flannel DaemonSet` yaml配置文件
请阅读 `roles/flannel/templates/kube-flannel.yaml.j2` 内容,注意:
+ 本安装方式flannel使用apiserver 存储数据,而不是 etcd
+ 配置相关RBAC 权限和 `service account`
+ 配置`ConfigMap`包含 CNI配置和 flannel配置(指定backend等),和`hosts`文件中相关设置对应
+ `DaemonSet Pod`包含两个容器一个容器运行flannel本身另一个init容器部署cni 配置文件
+ 为方便国内加速使用镜像 `jmgao1983/flannel:v0.9.1-amd64` (官方镜像在docker-hub上的转存)
### 安装 flannel网络
+ 安装之前必须确保kube-master和kube-node节点已经成功部署
+ 只需要在任意装有kubectl客户端的节点运行 kubectl create安装即可脚本中选取NODE_ID=node1节点安装
+ 等待15s后(视网络拉取相关镜像速度)flannel 网络插件安装完成,并删除之前kube-node安装时默认的cni网络配置(手动安装示意见下)
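手动执行或重装 flannel 网络插件的命令示意如下(与 roles/flannel/tasks/main.yml 中的任务等效):
``` bash
# 在任一配置好 kubectl 的节点上执行
kubectl create -f /root/local/kube-system/flannel/
# 等待镜像拉取完成后确认 kube-flannel-ds 相关 POD 运行正常
kubectl get pod -n kube-system -o wide
```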
### 验证flannel网络
执行flannel安装成功后可以验证如下(需要等待镜像下载完成有时候即便上一步已经配置了docker国内加速还是可能比较慢请确认以下容器运行起来以后再执行后续验证步骤)
``` bash
# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-m8mzm 1/1 Running 0 3m
kube-system kube-flannel-ds-mnj6j 1/1 Running 0 3m
kube-system kube-flannel-ds-mxn6k 1/1 Running 0 3m
```
在集群创建几个测试pod: `kubectl run test --image=busybox --replicas=3 sleep 30000`
``` bash
# kubectl get pod --all-namespaces -o wide|head -n 4
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default busy-5956b54c8b-ld4gb 1/1 Running 0 9m 172.20.2.7 192.168.1.1
default busy-5956b54c8b-lj9l9 1/1 Running 0 9m 172.20.1.5 192.168.1.2
default busy-5956b54c8b-wwpkz 1/1 Running 0 9m 172.20.0.6 192.168.1.3
# 查看路由
# ip route
default via 192.168.1.254 dev ens3 onlink
192.168.1.0/24 dev ens3 proto kernel scope link src 192.168.1.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.20.0.0/24 via 192.168.1.3 dev ens3
172.20.1.0/24 via 192.168.1.2 dev ens3
172.20.2.0/24 dev cni0 proto kernel scope link src 172.20.2.1
```
现在在各节点上分别 ping 这三个POD网段地址,确保能通:
``` bash
ping 172.20.2.7
ping 172.20.1.5
ping 172.20.0.6
```
[前一篇](06-安装kube-node节点.md) -- [后一篇]()


@ -0,0 +1 @@
## EFK


@ -0,0 +1,178 @@
## harbor
Harbor是由VMWare中国团队开源的容器镜像仓库。事实上Harbor是在Docker Registry上进行了相应的企业级扩展,从而获得了更加广泛的应用,这些新的企业级特性包括:管理用户界面、基于角色的访问控制、水平扩展、同步、AD/LDAP集成以及审计日志等。本文档仅说明部署单个基础harbor服务的步骤。
### 安装步骤
1. 在deploy节点下载最新的 [docker-compose](https://github.com/docker/compose/releases) 二进制文件,改名后把它放到项目 `/etc/ansible/bin`目录下,后续版本会一起打包进百度云盘`k8s.xxx.tar.gz`文件中,可以省略该步骤
``` bash
wget https://github.com/docker/compose/releases/download/1.18.0/docker-compose-Linux-x86_64
mv docker-compose-Linux-x86_64 /etc/ansible/bin/docker-compose
```
2. 在deploy节点下载最新的 [harbor](https://github.com/vmware/harbor/releases) 离线安装包,把它放到项目 `/etc/ansible/down` 目录下,也可以从分享的百度云盘下载
3. 在deploy节点编辑/etc/ansible/hosts文件可以参考 `example`目录下的模板,修改部分举例如下
``` bash
# 如果启用harbor请配置后面harbor相关参数
[harbor]
192.168.1.8 NODE_IP="192.168.1.8"
#私有仓库 harbor服务器 (域名或者IP)
HARBOR_IP="192.168.1.8"
HARBOR_DOMAIN="harbor.test.com"
```
4. 在deploy节点执行 `cd /etc/ansible && ansible-playbook 11.harbor.yml`完成harbor安装
### 安装讲解
根据 `11.harbor.yml`文件harbor节点需要以下步骤
1. role `prepare` 基础系统环境准备
1. role `docker` 安装docker
1. role `harbor` 安装harbor
`kube-node`节点在harbor部署完之后,需要配置harbor的证书,并可以在hosts里面添加harbor的域名解析;如果你的环境中有dns服务器,可以跳过hosts文件设置(示意见下)
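这一步对应 `11.harbor.yml` 中针对 `kube-node` 的任务,手动执行时大致如下(示意,域名和IP为文中示例值):
``` bash
# 信任 harbor 的 CA 证书,并添加 hosts 解析
mkdir -p /etc/docker/certs.d/harbor.test.com
cp /etc/ansible/roles/prepare/files/ca.pem /etc/docker/certs.d/harbor.test.com/ca.crt
echo "192.168.1.8 harbor.test.com" >> /etc/hosts
```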
请在另外窗口打开 [roles/harbor/tasks/main.yml](../../roles/harbor/tasks/main.yml),对照以下讲解
1. 下载docker-compose可执行文件到$PATH目录
1. 自注册变量result判断是否已经安装harbor避免重复安装问题
1. 解压harbor离线安装包到指定目录
1. 导入harbor所需 docker images
1. 创建harbor证书和私钥(复用集群的CA证书)
1. 修改harbor.cfg配置文件
1. 启动harbor安装脚本
### 验证harbor
1. 在harbor节点使用`docker ps -a` 查看harbor容器组件运行情况
1. 浏览器访问harbor节点的IP地址 `https://{{ NODE_IP }}`,使用账号 admin 和 密码 Harbor12345 (harbor.cfg 配置文件中的默认)登陆系统
### 在k8s集群使用harbor
admin用户web登陆后可以方便的创建项目并指定项目属性(公开或者私有);然后创建用户,并在项目`成员`选项中选择用户和权限;
#### 镜像上传
在node上使用harbor私有镜像仓库首先需要在指定目录配置harbor的CA证书详见 `11.harbor.yml`文件。
使用docker客户端登陆`harbor.test.com`然后把镜像tag成 `harbor.test.com/$项目名/$镜像名:$TAG` 之后即可使用docker push 上传
``` bash
docker login harbor.test.com
Username:
Password:
Login Succeeded
docker tag busybox:latest harbor.test.com/library/busybox:latest
docker push harbor.test.com/library/busybox:latest
The push refers to a repository [harbor.test.com/library/busybox]
0271b8eebde3: Pushed
latest: digest: sha256:91ef6c1c52b166be02645b8efee30d1ee65362024f7da41c404681561734c465 size: 527
```
#### k8s中使用harbor
1. 如果镜像保存在harbor中的公开项目中那么只需要在yaml文件中简单指定harbor私有镜像即可例如
``` bash
apiVersion: v1
kind: Pod
metadata:
name: test-busybox
spec:
containers:
- name: test-busybox
image: harbor.test.com/xxx/busybox:latest
imagePullPolicy: Always
```
2. 如果镜像保存在harbor中的私有项目中那么yaml文件中使用该私有项目的镜像需要指定`imagePullSecrets`,例如
``` bash
apiVersion: v1
kind: Pod
metadata:
name: test-busybox
spec:
containers:
- name: test-busybox
image: harbor.test.com/xxx/busybox:latest
imagePullPolicy: Always
imagePullSecrets:
- name: harborKey1
```
其中 `harborKey1`可以用以下两种方式生成:
+ 1.使用 `kubectl create secret docker-registry harborkey1 --docker-server=harbor.test.com --docker-username=admin --docker-password=Harbor12345 --docker-email=team@test.com`
+ 2.使用yaml配置文件生成
``` bash
//harborkey1.yaml
apiVersion: v1
kind: Secret
metadata:
name: harborkey1
namespace: default
data:
.dockerconfigjson: {base64 -w 0 ~/.docker/config.json}
type: kubernetes.io/dockerconfigjson
```
前面docker login会在~/.docker下面创建一个config.json文件保存鉴权串,这里secret yaml的.dockerconfigjson后面的数据就是那个json文件的base64编码输出(-w 0 参数让base64输出在单行上,避免折行)
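生成该字段内容的命令示例如下(需要先在该机器上 docker login 成功):
``` bash
# 输出单行 base64,粘贴到 secret 的 .dockerconfigjson 字段
base64 -w 0 ~/.docker/config.json
```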
### 管理harbor
+ 日志目录 `/var/log/harbor`
+ 数据目录 `/data` ,其中最主要是 `/data/database``/data/registry` 目录如果你要彻底重新安装harbor删除这两个目录即可
先进入harbor安装目录 `cd /root/local/harbor`,常规操作如下:
1. 暂停harbor `docker-compose stop` : docker容器stop并不删除容器
2. 恢复harbor `docker-compose start` : 恢复docker容器运行
3. 停止harbor `docker-compose down -v` : 停止并删除docker容器
4. 启动harbor `docker-compose up -d` : 启动所有docker容器
修改harbor的运行配置需要如下步骤
``` bash
# 停止 harbor
docker-compose down -v
# 修改配置
vim harbor.cfg
# 执行./prepare以更新配置到docker-compose.yml文件
./prepare
# 启动 harbor
docker-compose up -d
```
#### harbor 升级
以下步骤基于harbor 1.1.2 版本升级到 1.2.2版本
``` bash
# 进入harbor解压缩后的目录停止harbor
cd /root/local/harbor
docker-compose down
# 备份这个目录
cd ..
mkdir -p /backup && mv harbor /backup/harbor
# 下载更新的离线安装包,并解压
tar zxvf harbor-offline-installer-v1.2.2.tgz -C /root/local
# 使用官方数据库迁移工具,备份数据库,修改数据库连接用户和密码,创建数据库备份目录
# 迁移工具使用docker镜像镜像tag由待升级到目标harbor版本决定这里由 1.1.2升级到1.2.2,所以使用 tag 1.2
docker pull vmware/harbor-db-migrator:1.2
mkdir -p /backup/db-1.1.2
docker run -it --rm -e DB_USR=root -e DB_PWD=xxxx -v /data/database:/var/lib/mysql -v /backup/db-1.1.2:/harbor-migration/backup vmware/harbor-db-migrator:1.2 backup
# 因为新老版本数据库结构不一样需要数据库migration
docker run -it --rm -e DB_USR=root -e DB_PWD=xxxx -v /data/database:/var/lib/mysql vmware/harbor-db-migrator:1.2 up head
# 修改新版本 harbor.cfg配置需要保持与老版本相关配置项保持一致然后执行安装即可
cd /root/local/harbor
vi harbor.cfg
./install.sh
```
[前一篇]() -- [目录](index.md) -- [后一篇]()

docs/guide/hpa.md 100644

@ -0,0 +1,56 @@
## Horizontal Pod Autoscaling
自动水平伸缩是指运行在k8s上的应用负载(POD)可以根据资源使用率进行自动扩容、缩容我们知道应用的资源使用率通常都有高峰和低谷所以k8s的`HPA`特性应运而生;它也是最能体现区别于传统运维的优势之一,不仅能够弹性伸缩,而且完全自动化!
根据 CPU 使用率或自定义 metrics 自动扩展 Pod 数量(支持 replication controller、deployment);k8s 1.6版本之前是通过kubelet来获取监控指标,1.6版本之后是通过api server、heapster或者kube-aggregator来获取监控指标。
### Metrics支持
根据不同版本的API中HPA autoscale时靠以下指标来判断资源使用率
- autoscaling/v1: CPU
- autoscaling/v2alpha1
- 内存
- 自定义metrics
- 多metrics组合: 根据每个metric的值计算出scale的值,并将最大的那个值作为扩容的最终结果
### 基础示例
本实验环境基于k8s 1.8 和 1.9,仅使用`autoscaling/v1` 版本API
``` bash
# 创建deploy和service
$ kubectl run php-apache --image=pilchard/hpa-example --requests=cpu=200m --expose --port=80
# 创建autoscaler
$ kubectl autoscale deploy php-apache --cpu-percent=50 --min=1 --max=10
# 稍等查看hpa状态
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0% / 50% 1 10 1 1d
# 增加负载
$ kubectl run --rm -it load-generator --image=busybox /bin/sh
Hit enter for command prompt
$ while true; do wget -q -O- http://php-apache; done;
# 稍等查看hpa显示负载增加且副本数目增加为4
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 430% / 50% 1 10 4 4m
# 注意k8s为了避免频繁增删pod对副本的增加速度有限制
# 实验过程可以看到副本数目从1到4到8到10大概都需要4~5分钟的缓冲期
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 86% / 50% 1 10 8 9m
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 52% / 50% 1 10 10 12m
# 清除负载CTRL+C 结束上述循环程序稍后副本数目变回1
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0% / 50% 1 10 1 17m
```


@ -1,14 +1,20 @@
## 使用指南
### 附加组件
### 附加组件安装
- 安装 [kubedns](kubedns.md)
- 安装 [dashboard](dashboard.md)
- 安装 [heapster](heapster.md)
- 安装 [ingress](ingress.md)
- 安装 efk
- 安装 [harbor](harbor.md)
### 集群维护
### 基础特性演示
- 自动水平伸缩-基础 [Horizontal Pod Autoscaling](hpa.md)
- 网络安全策略 [Network Policy](networkpolicy.md)
### 集群维护指南
- 集群状态检查
- 集群扩容
@ -26,4 +32,3 @@
### 其他
- Harbor 部署


@ -0,0 +1 @@
## Network Policy


@ -51,9 +51,9 @@ ssh-copy-id $IP #$IP为本虚机地址按照提示输入yes 和root密码
git clone https://github.com/gjmzj/kubeasz.git
mv kubeasz /etc/ansible
# 下载已打包好的binaries并且解压缩到/etc/ansible/bin目录
# 国内请从分享的百度云链接下载 https://pan.baidu.com/s/1eSetFSA
# 国内请从分享的百度云链接下载 https://pan.baidu.com/s/1c4RFaA
# 如果你有合适网络环境也可以按照/down/download.sh自行从官网下载各种tar包到 ./down目录并执行download.sh
tar zxvf k8s.184.tar.gz
tar zxvf k8s.191.tar.gz
mv bin/* /etc/ansible/bin
# 配置ansible的hosts文件
cd /etc/ansible
@ -65,9 +65,11 @@ ansible-playbook 90.setup.yml # 一步安装
#ansible-playbook 02.etcd.yml
#ansible-playbook 03.kubectl.yml
#ansible-playbook 04.docker.yml
#ansible-playbook 05.calico.yml
#ansible-playbook 06.kube-master.yml
#ansible-playbook 07.kube-node.yml
#ansible-playbook 05.kube-master.yml
#ansible-playbook 06.kube-node.yml
# 网络只可选择calico flannel一种安装
#ansible-playbook 07.calico.yml
#ansible-playbook 07.flannel.yml
```
如果执行成功k8s集群就安装好了。详细分步讲解请查看项目目录 `/docs` 下相关文档
@ -80,7 +82,6 @@ kubectl cluster-info # 可以看到kubernetes master(apiserver)组件 running
kubectl get node # 可以看到单 node Ready状态
kubectl get pod --all-namespaces # 可以查看所有集群pod状态
kubectl get svc --all-namespaces # 可以查看所有集群服务状态
calicoctl node status # 可以在master或者node节点上查看calico网络状态
```
### 6.安装主要组件
``` bash
@ -91,7 +92,7 @@ kubectl create -f /etc/ansible/manifests/heapster
# 安装dashboard
kubectl create -f /etc/ansible/manifests/dashboard
```
+ 更新后`dashboard`已经默认关闭非安全端口访问,请使用`https://10.100.80.30:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy`访问,并用默认用户 `admin:test1234` 登陆,更多内容请查阅[dashboard文档](guide/dashboard.md)
+ 更新后`dashboard`已经默认关闭非安全端口访问,请使用`https://xx.xx.xx.xx:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy`访问,并用默认用户 `admin:test1234` 登陆,更多内容请查阅[dashboard文档](guide/dashboard.md)
### 7.清理集群


@ -1,13 +1,15 @@
#!/bin/bash
#主要组件版本如下
export K8S_VER=v1.9.0
export ETCD_VER=v3.2.11
export DOCKER_VER=17.09.1-ce
export CALICO_VER=v2.6.3
export K8S_VER=v1.9.1
export ETCD_VER=v3.2.13
export DOCKER_VER=17.12.0-ce
export CNI_VER=v0.6.0
export DOCKER_COMPOSE=1.18.0
export HARBOR=v1.2.2
echo "\n建议直接下载本人打包好的所有必要二进制包k8s-184.all.tar.gz然后解压到bin目录"
echo "\n建议直接下载本人打包好的所有必要二进制包k8s-***.all.tar.gz然后解压到bin目录"
echo "\n建议不使用此脚本如果你想升级组件或者实验请通读该脚本必要时适当修改后使用"
echo "\n注意1因为网络原因不进行自动下载,请按照以下链接手动下载二进制包到down目录中"
echo "\n注意1请按照以下链接手动下载二进制包到down目录中"
echo "\n注意2如果还没有手工下载tar包请Ctrl-c结束此脚本"
echo "\n----download k8s binary at:"
@ -25,8 +27,14 @@ echo https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
echo https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
echo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
echo "\n----download calico at:"
echo https://docs.projectcalico.org/v2.6/releases/
echo "\n----download docker-compose at:"
echo https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE}/docker-compose-Linux-x86_64
echo "\n----download harbor-offline-installer at:"
echo https://github.com/vmware/harbor/releases/download/${HARBOR}/harbor-offline-installer-${HARBOR}.tgz
echo "\n----download cni plugins at:"
echo https://github.com/containernetworking/plugins/releases/download/${CNI_VER}/cni-${CNI_VER}.tgz
sleep 30
@ -80,8 +88,22 @@ if [ -f "docker-${DOCKER_VER}.tgz" ]; then
tar zxf docker-${DOCKER_VER}.tgz
mv docker/docker* ../bin
if [ -f "docker/completion/bash/docker" ]; then
mv -f docker/completion/bash/docker ../roles/kube-node/files/docker
mv -f docker/completion/bash/docker ../roles/docker/files/docker
fi
else
echo 请先下载docker-${DOCKER_VER}.tgz
fi
### 准备cni plugins仅安装flannel需要安装calico由容器专门下载cni plugins
echo "\n准备cni plugins仅安装flannel需要安装calico由容器专门下载cni plugins..."
if [ -f "cni-${CNI_VER}.tgz" ]; then
echo "\nextracting cni plugins binaries..."
tar zxf cni-${CNI_VER}.tgz
mv bridge ../bin
mv flannel ../bin
mv host-local ../bin
mv loopback ../bin
mv portmap ../bin
else
echo 请先下载cni-${CNI_VER}.tgz
fi


@ -18,6 +18,14 @@
kube-node
kube-master
# 如果启用harbor请配置后面harbor相关参数
[harbor]
#192.168.1.8 NODE_IP="192.168.1.8"
# 预留组后续添加node节点使用
[new-node]
#192.168.1.xx NODE_ID=node6 NODE_IP="192.168.1.xx"
[all:vars]
# ---------集群主要参数---------------
#集群 MASTER IP
@ -32,6 +40,18 @@ POD_INFRA_CONTAINER_IMAGE=mirrorgooglecontainers/pause-amd64:3.0
#TLS Bootstrapping 使用的 Token使用 head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 生成
BOOTSTRAP_TOKEN="d18f94b5fa585c7123f56803d925d2e7"
# 集群网络插件目前支持calico和flannel
CLUSTER_NETWORK="calico"
# 部分calico相关配置更全配置可以去roles/calico/templates/calico.yaml.j2自定义
# 设置 CALICO_IPV4POOL_IPIP="off",可以提高网络性能,条件限制详见 07-安装calico网络组件.md
CALICO_IPV4POOL_IPIP="always"
# 设置 calico-node使用的host IPbgp邻居通过该地址建立可手动指定端口"interface=eth0"或使用如下自动发现
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"
# 部分flannel配置详见roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"
# 服务网段 (Service CIDR部署前路由不可达部署后集群内使用 IP:Port 可达
SERVICE_CIDR="10.68.0.0/16"
@ -71,5 +91,5 @@ ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"
#私有仓库 harbor服务器 (域名或者IP)
#需要把 harbor服务器证书复制到roles/harbor/files/harbor-ca.crt
HARBOR_SERVER="harbor.yourdomain.com"
#HARBOR_IP="192.168.1.8"
#HARBOR_DOMAIN="harbor.yourdomain.com"


@ -34,6 +34,10 @@ MASTER_PORT="8443" # api-server 服务端口
kube-node
kube-master
# 如果启用harbor请配置后面harbor相关参数
[harbor]
#192.168.1.8 NODE_IP="192.168.1.8"
# 预留组后续添加node节点使用
[new-node]
#192.168.1.xx NODE_ID=node6 NODE_IP="192.168.1.xx"
@ -51,6 +55,18 @@ POD_INFRA_CONTAINER_IMAGE=mirrorgooglecontainers/pause-amd64:3.0
#TLS Bootstrapping 使用的 Token使用 head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 生成
BOOTSTRAP_TOKEN="c30302226d4b810e08731702d3890f50"
# 集群网络插件目前支持calico和flannel
CLUSTER_NETWORK="calico"
# 部分calico相关配置更全配置可以去roles/calico/templates/calico.yaml.j2自定义
# 设置 CALICO_IPV4POOL_IPIP="off",可以提高网络性能,条件限制详见 07-安装calico网络组件.md
CALICO_IPV4POOL_IPIP="always"
# 设置 calico-node使用的host IPbgp邻居通过该地址建立可手动指定端口"interface=eth0"或使用如下自动发现
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"
# 部分flannel配置详见roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"
# 服务网段 (Service CIDR部署前路由不可达部署后集群内使用 IP:Port 可达
SERVICE_CIDR="10.68.0.0/16"
@ -90,5 +106,5 @@ ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"
#私有仓库 harbor服务器 (域名或者IP)
#需要把 harbor服务器证书复制到roles/harbor/files/harbor-ca.crt
HARBOR_SERVER="harbor.mydomain.com"
#HARBOR_IP="192.168.1.8"
#HARBOR_DOMAIN="harbor.yourdomain.com"


@ -22,6 +22,14 @@
kube-node
kube-master
# 如果启用harbor请配置后面harbor相关参数
[harbor]
#192.168.1.8 NODE_IP="192.168.1.8"
# 预留组后续添加node节点使用
[new-node]
#192.168.1.xx NODE_ID=node6 NODE_IP="192.168.1.xx"
[all:vars]
# ---------集群主要参数---------------
#集群 MASTER IP
@ -36,6 +44,18 @@ POD_INFRA_CONTAINER_IMAGE=mirrorgooglecontainers/pause-amd64:3.0
#TLS Bootstrapping 使用的 Token使用 head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 生成
BOOTSTRAP_TOKEN="d18f94b5fa585c7123f56803d925d2e7"
# 集群网络插件目前支持calico和flannel
CLUSTER_NETWORK="calico"
# 部分calico相关配置更全配置可以去roles/calico/templates/calico.yaml.j2自定义
# 设置 CALICO_IPV4POOL_IPIP="off",可以提高网络性能,条件限制详见 07-安装calico网络组件.md
CALICO_IPV4POOL_IPIP="always"
# 设置 calico-node使用的host IPbgp邻居通过该地址建立可手动指定端口"interface=eth0"或使用如下自动发现
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"
# 部分flannel配置详见roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"
# 服务网段 (Service CIDR部署前路由不可达部署后集群内使用 IP:Port 可达
SERVICE_CIDR="10.68.0.0/16"
@ -75,5 +95,5 @@ ca_dir="/etc/kubernetes/ssl"
base_dir="/etc/ansible"
#私有仓库 harbor服务器 (域名或者IP)
#需要把 harbor服务器证书复制到roles/harbor/files/harbor-ca.crt
HARBOR_SERVER="harbor.yourdomain.com"
#HARBOR_IP="192.168.1.8"
#HARBOR_DOMAIN="harbor.yourdomain.com"


@ -2,7 +2,7 @@
file: name={{ item }} state=directory
with_items:
- /etc/calico/ssl
- /etc/cni/net.d
- /root/local/kube-system/calico
- name: 复制CA 证书到calico 证书目录
copy: src={{ ca_dir }}/ca.pem dest=/etc/calico/ssl/ca.pem
@ -17,22 +17,35 @@
-config={{ ca_dir }}/ca-config.json \
-profile=kubernetes calico-csr.json | {{ bin_dir }}/cfssljson -bare calico"
- name: 创建 calico 的 systemd unit 文件
template: src=calico-node.service.j2 dest=/etc/systemd/system/calico-node.service
- name: 准备 calico DaemonSet yaml文件
template: src=calico.yaml.j2 dest=/root/local/kube-system/calico/calico.yaml
- name: 启动calico 服务
shell: systemctl daemon-reload && systemctl enable calico-node && systemctl restart calico-node
- name: 准备 calico rbac文件
template: src=calico-rbac.yaml.j2 dest=/root/local/kube-system/calico/calico-rbac.yaml
- name: 下载calico cni plugins和calicoctl 客户端
# 只需单节点执行一次,重复执行的报错可以忽略
- name: 运行 calico网络
shell: "{{ bin_dir }}/kubectl create -f /root/local/kube-system/calico/ && sleep 15"
when: NODE_ID is defined and NODE_ID == "node1"
ignore_errors: true
# 删除原有cni配置
- name: 删除默认cni配置
file: path=/etc/cni/net.d/10-default.conf state=absent
# 删除原有cni插件网卡mynet0
- name: 删除默认cni插件网卡mynet0
shell: "ip link del mynet0"
ignore_errors: true
# [可选]cni calico plugins 已经在calico.yaml完成自动安装
- name: 下载calicoctl 客户端
copy: src={{ base_dir }}/bin/{{ item }} dest={{ bin_dir }}/{{ item }} mode=0755
with_items:
- calico
- calico-ipam
- loopback
#- calico
#- calico-ipam
#- loopback
- calicoctl
- name: 准备 calicoctl配置文件
template: src=calicoctl.cfg.j2 dest=/etc/calico/calicoctl.cfg
- name: 准备 cni配置文件
template: src=cni-calico.conf.j2 dest=/etc/cni/net.d/10-calico.conf


@ -1,37 +0,0 @@
[Unit]
Description=calico node
After=docker.service
Requires=docker.service
[Service]
User=root
PermissionsStartOnly=true
ExecStart={{ bin_dir }}/docker run --net=host --privileged --name=calico-node \
-e ETCD_ENDPOINTS={{ ETCD_ENDPOINTS }} \
-e ETCD_CA_CERT_FILE=/etc/calico/ssl/ca.pem \
-e ETCD_CERT_FILE=/etc/calico/ssl/calico.pem \
-e ETCD_KEY_FILE=/etc/calico/ssl/calico-key.pem \
-e CALICO_LIBNETWORK_ENABLED=true \
-e CALICO_NETWORKING_BACKEND=bird \
-e CALICO_DISABLE_FILE_LOGGING=true \
-e CALICO_IPV4POOL_CIDR={{ CLUSTER_CIDR }} \
-e CALICO_IPV4POOL_IPIP=off \
-e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \
-e FELIX_IPV6SUPPORT=false \
-e FELIX_LOGSEVERITYSCREEN=info \
-e FELIX_IPINIPMTU=1440 \
-e FELIX_HEALTHENABLED=true \
-e IP= \
-v /etc/calico/ssl:/etc/calico/ssl \
-v /var/run/calico:/var/run/calico \
-v /lib/modules:/lib/modules \
-v /run/docker/plugins:/run/docker/plugins \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/log/calico:/var/log/calico \
calico/node:v2.6.3
ExecStop={{ bin_dir }}/docker rm -f calico-node
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target


@ -1,5 +1,5 @@
# Calico Version v2.6.2
# https://docs.projectcalico.org/v2.6/releases#v2.6.2
# Calico Version v2.6.5
# https://docs.projectcalico.org/v2.6/releases#v2.6.5
---
@ -15,11 +15,11 @@ rules:
- pods
- namespaces
- networkpolicies
- nodes
verbs:
- watch
- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
@ -34,8 +34,31 @@ subjects:
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
name: calico-node
rules:
- apiGroups: [""]
resources:
- pods
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system


@ -0,0 +1,316 @@
# Calico Version v2.6.5
# https://docs.projectcalico.org/v2.6/releases#v2.6.5
# This manifest includes the following component versions:
# calico/node:v2.6.5
# calico/cni:v1.11.2
# calico/kube-controllers:v1.0.2
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Configure this with the location of your etcd cluster.
etcd_endpoints: "{{ ETCD_ENDPOINTS }}"
# Configure the Calico backend to use.
calico_backend: "bird"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.1.0",
"type": "calico",
"etcd_endpoints": "{{ ETCD_ENDPOINTS }}",
"etcd_key_file": "/etc/calico/ssl/calico-key.pem",
"etcd_cert_file": "/etc/calico/ssl/calico.pem",
"etcd_ca_cert_file": "/etc/calico/ssl/ca.pem",
"log_level": "info",
"mtu": 1500,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/root/.kube/config"
}
}
# If you're using TLS enabled etcd uncomment the following.
# You must also populate the Secret below with these files.
etcd_ca: "/calico-secrets/ca.pem"
etcd_cert: "/calico-secrets/calico.pem"
etcd_key: "/calico-secrets/calico-key.pem"
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
template:
metadata:
labels:
k8s-app: calico-node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: |
[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
{"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
hostNetwork: true
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
#image: quay.io/calico/node:v2.6.5
image: calico/node:v2.6.5
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
value: "{{ CLUSTER_CIDR }}"
- name: CALICO_IPV4POOL_IPIP
value: "{{ CALICO_IPV4POOL_IPIP }}"
# Set noderef for node controller.
- name: CALICO_K8S_NODE_REF
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
value: "1440"
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Auto-detect the BGP IP address.
- name: IP
value: ""
- name: IP_AUTODETECTION_METHOD
value: "{{ IP_AUTODETECTION_METHOD }}"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
httpGet:
path: /readiness
port: 9099
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /calico-secrets
name: etcd-certs
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
#image: quay.io/calico/cni:v1.11.2
image: calico/cni:v1.11.2
command: ["/install-cni.sh"]
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
- mountPath: /calico-secrets
name: etcd-certs
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: {{ bin_dir }}
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the etcd TLS secrets.
- name: etcd-certs
hostPath:
path: /etc/calico/ssl
---
# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: |
[{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
{"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
# The controllers can only have a single active instance.
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
k8s-app: calico-kube-controllers
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
serviceAccountName: calico-kube-controllers
containers:
- name: calico-kube-controllers
#image: quay.io/calico/kube-controllers:v1.0.2
image: calico/kube-controllers:v1.0.2
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: policy,profile,workloadendpoint,node
volumeMounts:
# Mount in the etcd TLS secrets.
- mountPath: /calico-secrets
name: etcd-certs
volumes:
# Mount in the etcd TLS secrets.
- name: etcd-certs
hostPath:
path: /etc/calico/ssl
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system


@ -1,20 +0,0 @@
{
"name": "calico-k8s-network",
"cniVersion": "0.1.0",
"type": "calico",
"etcd_endpoints": "{{ ETCD_ENDPOINTS }}",
"etcd_key_file": "/etc/calico/ssl/calico-key.pem",
"etcd_cert_file": "/etc/calico/ssl/calico.pem",
"etcd_ca_cert_file": "/etc/calico/ssl/ca.pem",
"log_level": "info",
"mtu": 1500,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/root/.kube/config"
}
}


@ -0,0 +1,28 @@
- name: 创建flannel cni 相关目录
file: name={{ item }} state=directory
with_items:
- /etc/cni/net.d
- /root/local/kube-system/flannel
- name: 下载flannel cni plugins
copy: src={{ base_dir }}/bin/{{ item }} dest={{ bin_dir }}/{{ item }} mode=0755
with_items:
- bridge
- flannel
- host-local
- loopback
- portmap
- name: 准备 flannel DaemonSet yaml文件
template: src=kube-flannel.yaml.j2 dest=/root/local/kube-system/flannel/kube-flannel.yaml
# 只需单节点执行一次,重复执行的报错可以忽略
- name: 运行 flannel网络
shell: "{{ bin_dir }}/kubectl create -f /root/local/kube-system/flannel/ && sleep 15"
when: NODE_ID is defined and NODE_ID == "node1"
ignore_errors: true
# 删除原有cni配置
- name: 删除默认cni配置
file: path=/etc/cni/net.d/10-default.conf state=absent


@ -0,0 +1,161 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "{{ CLUSTER_CIDR }}",
"Backend": {
"Type": "{{ FLANNEL_BACKEND }}"
}
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: jmgao1983/flannel:v0.9.1-amd64
#image: quay.io/coreos/flannel:v0.9.1-amd64
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: quay.io/coreos/flannel:v0.9.1-amd64
image: jmgao1983/flannel:v0.9.1-amd64
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg


@ -0,0 +1,44 @@
- name: 下载docker compose 二进制文件
copy: src={{ base_dir }}/bin/docker-compose dest={{ bin_dir }}/docker-compose mode=0755
# 注册变量result根据result结果判断是否已经安装过harbor
# result|failed 说明没有安装过harbor下一步进行安装
# result|succeeded 说明已经安装过harbor下一步跳过安装
- name: 注册变量result
command: ls /data/registry
register: result
ignore_errors: True
- name: 解压harbor离线安装包
unarchive:
src: "{{ base_dir }}/down/harbor-offline-installer-v1.2.2.tgz"
dest: /root/local
copy: yes
keep_newer: yes
when: result|failed
- name: 导入harbor所需 docker images
shell: "{{ bin_dir }}/docker load -i /root/local/harbor/harbor.v1.2.2.tar.gz"
when: result|failed
- name: 创建harbor证书请求
template: src=harbor-csr.json.j2 dest={{ ca_dir }}/harbor-csr.json
when: result|failed
- name: 创建harbor证书和私钥
shell: "cd {{ ca_dir }} && {{ bin_dir }}/cfssl gencert \
-ca={{ ca_dir }}/ca.pem \
-ca-key={{ ca_dir }}/ca-key.pem \
-config={{ ca_dir }}/ca-config.json \
-profile=kubernetes harbor-csr.json | {{ bin_dir }}/cfssljson -bare harbor"
when: result|failed
- name: 配置 harbor.cfg 文件
template: src=harbor.cfg.j2 dest=/root/local/harbor/harbor.cfg
when: result|failed
- name: 安装 harbor
shell: "cd /root/local/harbor && \
export PATH={{ bin_dir }}:$PATH && \
./install.sh"
when: result|failed


@ -0,0 +1,21 @@
{
"CN": "harbor",
"hosts": [
"127.0.0.1",
"{{ NODE_IP }}",
"{{ HARBOR_DOMAIN }}"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "HangZhou",
"L": "XS",
"O": "k8s",
"OU": "System"
}
]
}


@ -0,0 +1,106 @@
## Configuration file of Harbor
#The IP address or hostname to access admin UI and registry service.
#DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname = {{ NODE_IP }}
#The protocol for accessing the UI and token/notification service, by default it is http.
#It can be set to https if ssl is enabled on nginx.
ui_url_protocol = https
#The password for the root user of mysql db, change this before any production use.
db_password = Harbor12345
#Maximum number of job workers in job service
max_job_workers = 3
#Determine whether or not to generate certificate for the registry's token.
#If the value is on, the prepare script creates new root cert and private key
#for generating token to access the registry. If the value is off the default key/cert will be used.
#This flag also controls the creation of the notary signer's cert.
customize_crt = on
#The path of cert and key files for nginx, they are applied only the protocol is set to https
ssl_cert = {{ ca_dir }}/harbor.pem
ssl_cert_key = {{ ca_dir }}/harbor-key.pem
#The path of secretkey storage
secretkey_path = /data
#Admiral's url, comment this attribute, or set its value to NA when Harbor is standalone
admiral_url = NA
#The password of the Clair's postgres database, only effective when Harbor is deployed with Clair.
#Please update it before deployment; changing it afterwards will leave Clair's API server and Harbor unable to access Clair's database.
clair_db_password = password
#NOTES: The properties between BEGIN INITIAL PROPERTIES and END INITIAL PROPERTIES
#only take effect in the first boot, the subsequent changes of these properties
#should be performed on web ui
#************************BEGIN INITIAL PROPERTIES************************
#Email account settings for sending out password resetting emails.
#Email server uses the given username and password to authenticate on TLS connections to host and act as identity.
#Identity left blank to act as username.
email_identity =
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false
##The initial password of Harbor admin, only works for the first time when Harbor starts.
#It has no effect after the first launch of Harbor.
#Change the admin password from UI after launching Harbor.
harbor_admin_password = Harbor12345
##By default the auth mode is db_auth, i.e. the credentials are stored in a local database.
#Set it to ldap_auth if you want to verify a user's credentials against an LDAP server.
auth_mode = db_auth
#The url for an ldap endpoint.
ldap_url = ldaps://ldap.mydomain.com
#A user's DN who has the permission to search the LDAP/AD server.
#If your LDAP/AD server does not support anonymous search, you should configure this DN and ldap_search_pwd.
#ldap_searchdn = uid=searchuser,ou=people,dc=mydomain,dc=com
#the password of the ldap_searchdn
#ldap_search_pwd = password
#The base DN from which to look up a user in LDAP/AD
ldap_basedn = ou=people,dc=mydomain,dc=com
#Search filter for LDAP/AD, make sure the syntax of the filter is correct.
#ldap_filter = (objectClass=person)
# The attribute used in a search to match a user, it could be uid, cn, email, sAMAccountName or other attributes depending on your LDAP/AD
ldap_uid = uid
#the scope to search for users, 1-LDAP_SCOPE_BASE, 2-LDAP_SCOPE_ONELEVEL, 3-LDAP_SCOPE_SUBTREE
ldap_scope = 3
#Timeout (in seconds) when connecting to an LDAP Server. The default value (and most reasonable) is 5 seconds.
ldap_timeout = 5
#Turn on or off the self-registration feature
self_registration = on
#The expiration time (in minute) of token created by token service, default is 30 minutes
token_expiration = 30
#The flag to control what users have permission to create projects
#The default value "everyone" allows everyone to create a project.
#Set to "adminonly" so that only admin user can create project.
project_creation_restriction = everyone
#Determine whether the job service should verify the ssl cert when it connects to a remote registry.
#Set this flag to off when the remote registry uses a self-signed or untrusted certificate.
verify_remote_cert = on
#************************END INITIAL PROPERTIES************************
#############
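
If any of these settings need to change after the initial installation, Harbor's usual procedure is to regenerate the configuration and restart the stack. A sketch of doing that through ansible is shown below; it assumes the offline installer was unpacked to /root/local/harbor as in the tasks above.

```yaml
# Sketch: re-apply a modified harbor.cfg on the harbor host
- hosts: harbor
  tasks:
    - name: Regenerate harbor configuration and restart the stack
      shell: "cd /root/local/harbor && \
              ./prepare && \
              {{ bin_dir }}/docker-compose down && \
              {{ bin_dir }}/docker-compose up -d"
```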

View File

@ -14,6 +14,7 @@ ExecStart={{ bin_dir }}/kube-controller-manager \
--cluster-signing-key-file={{ ca_dir }}/ca-key.pem \
--service-account-private-key-file={{ ca_dir }}/ca-key.pem \
--root-ca-file={{ ca_dir }}/ca.pem \
--horizontal-pod-autoscaler-use-rest-clients=false \
--leader-elect=true \
--v=2
Restart=on-failure

View File

@ -1,10 +1,21 @@
##----------kubelet configuration section--------------
- name: Download the kubelet and kube-proxy binaries
# Create the kubelet and kube-proxy working directories and the cni config directory
- name: Create the kube-node related directories
file: name={{ item }} state=directory
with_items:
- /var/lib/kubelet
- /var/lib/kube-proxy
- /etc/cni/net.d
- name: Download the kubelet, kube-proxy binaries and the basic cni plugins
copy: src={{ base_dir }}/bin/{{ item }} dest={{ bin_dir }}/{{ item }} mode=0755
with_items:
- kubelet
- kube-proxy
- bridge
- host-local
- loopback
##----------kubelet configuration section--------------
# When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so that role binding is required
# This only needs to run once on a single node; errors from repeated runs can be ignored
# Wait an extra 15s for kube-apiserver to become fully ready
@ -36,14 +47,16 @@
- name: Install the bootstrap.kubeconfig file
shell: "mv $HOME/bootstrap.kubeconfig /etc/kubernetes/bootstrap.kubeconfig"
- name: Create the kubelet working directory
file: name=/var/lib/kubelet state=directory
- name: Prepare the cni config file
template: src=cni-default.conf.j2 dest=/etc/cni/net.d/10-default.conf
- name: Create the kubelet systemd unit file
template: src=kubelet.service.j2 dest=/etc/systemd/system/kubelet.service
tags: kubelet
- name: Enable and start the kubelet service
shell: systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
tags: kubelet
- name: approve-kubelet-csr
shell: "sleep 15 && {{ bin_dir }}/kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs {{ bin_dir }}/kubectl certificate approve"
@ -85,9 +98,6 @@
- name: Install the kube-proxy.kubeconfig file
shell: "mv $HOME/kube-proxy.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig"
- name: Create the kube-proxy working directory
file: name=/var/lib/kube-proxy state=directory
- name: Create the kube-proxy service unit file
tags: reload-kube-proxy
template: src=kube-proxy.service.j2 dest=/etc/systemd/system/kube-proxy.service
@ -96,25 +106,3 @@
tags: reload-kube-proxy
shell: systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
##-------calico-kube-controllers section----------------
#
- name: Create the calico-kube-controllers directory
tags: calico-controller
file: name=/root/local/kube-system/calico state=directory
- name: Prepare the RBAC config file
tags: calico-controller
copy: src=rbac.yaml dest=/root/local/kube-system/calico/rbac.yaml
- name: Prepare the calico-kube-controllers.yaml file
tags: calico-controller
template: src=calico-kube-controllers.yaml.j2 dest=/root/local/kube-system/calico/calico-kube-controllers.yaml
# This only needs to run once on a single node; errors from repeated runs can be ignored
# Wait an extra 15s for nodes to become ready
- name: Run calico-kube-controllers
tags: calico-controller
shell: "sleep 15 && {{ bin_dir }}/kubectl create -f /root/local/kube-system/calico/rbac.yaml && \
{{ bin_dir }}/kubectl create -f /root/local/kube-system/calico/calico-kube-controllers.yaml"
when: NODE_ID is defined and NODE_ID == "node1"
ignore_errors: true

View File

@ -1,63 +0,0 @@
# Calico Version v2.6.3
# https://docs.projectcalico.org/v2.6/releases#v2.6.3
# This manifest includes the following component versions:
# calico/kube-controllers:v1.0.1
# Create this manifest using kubectl to deploy
# the Calico Kubernetes controllers.
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# Only a single instance of this pod should be
# active at a time. Since this pod is run as a Deployment,
# Kubernetes will ensure the pod is recreated in case of failure,
# removing the need for passive backups.
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
k8s-app: calico-kube-controllers
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
hostNetwork: true
serviceAccountName: calico-kube-controllers
containers:
- name: calico-kube-controllers
#image: quay.io/calico/kube-controllers:v1.0.1
image: calico/kube-controllers:v1.0.1
env:
# Configure the location of your etcd cluster.
- name: ETCD_ENDPOINTS
value: "{{ ETCD_ENDPOINTS }}"
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
value: "/calico-secrets/ca.pem"
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
value: "/calico-secrets/calico-key.pem"
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
value: "/calico-secrets/calico.pem"
volumeMounts:
# Mount in the etcd TLS secrets.
- mountPath: /calico-secrets
name: etcd-certs
volumes:
# Mount in the etcd TLS secrets.
- name: etcd-certs
hostPath:
path: /etc/calico/ssl
---

View File

@ -0,0 +1,12 @@
{
"name": "mynet",
"type": "bridge",
"bridge": "mynet0",
"isDefaultGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"subnet": "{{ CLUSTER_CIDR }}"
}
}
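
A cheap sanity check is to make sure the rendered /etc/cni/net.d/10-default.conf is well-formed JSON on every node. The sketch below assumes python is available on the managed hosts (which ansible already requires); the task names are only illustrative.

```yaml
# Sketch: validate the rendered default CNI config on each node
- hosts: kube-cluster
  tasks:
    - name: Validate 10-default.conf is well-formed JSON
      shell: "python -m json.tool /etc/cni/net.d/10-default.conf"
      register: cni_check
      ignore_errors: true

    - name: Show parsed CNI config
      debug: var=cni_check.stdout_lines
```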

View File

@ -0,0 +1,4 @@
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1

View File

@ -56,3 +56,12 @@
shell: "setenforce 0 && echo SELINUX=disabled > /etc/selinux/config"
when: ansible_distribution == "CentOS"
ignore_errors: true
# Set kernel parameters for k8s
# Also silences the docker info warning: "WARNING: bridge-nf-call-ip[6]tables is disabled"
- name: Set kernel parameters for k8s
copy: src=95-k8s-sysctl.conf dest=/etc/sysctl.d/95-k8s-sysctl.conf
- name: Apply the kernel parameters
shell: "sysctl -p /etc/sysctl.d/95-k8s-sysctl.conf"
ignore_errors: true
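
To confirm on the nodes that these parameters actually took effect (on CentOS the net.bridge.* keys typically only exist once the br_netfilter/bridge module is loaded, which is why the task above tolerates errors), a quick check could look like this sketch:

```yaml
# Sketch: read back the kernel parameters relevant to docker bridging and kube-proxy
- hosts: kube-cluster
  tasks:
    - name: Read ip_forward and bridge-nf-call settings
      shell: "sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables"
      register: sysctl_check
      ignore_errors: true

    - name: Show current values
      debug: var=sysctl_check.stdout_lines
```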