mirror of https://github.com/easzlab/kubeasz.git

update storage: nfs-provisioner

parent 6d685b8e73
commit da63eeedbd

@@ -10,6 +10,7 @@

**[news]** kubeasz has technically passed the CNCF conformance tests [details](docs/mixes/conformance.md)

**[news]** A community expert has released a new free [Kubernetes architect course](https://www.toutiao.com/c/user/token/MS4wLjABAAAA0YFomuMNm87NNysXeUsQdI0Tt3gOgz8WG_0B3MzxsmI/?tab=article), highly recommended!

## Quick Start Guide
@@ -41,7 +41,7 @@ apt install nfs-kernel-server

| anonuid=xxx | UID of the anonymous account in the NFS server's /etc/passwd |
| anongid=xxx | GID of the anonymous account in the NFS server's /etc/passwd |

+ Note 1: restrict which clients may access the NFS exports to the minimum, by specifying a hostname, IP, or IP range; note that when used together with nfs-client-provisioner in a k8s cluster, the pod IP range must be allowed here, otherwise the nfs-client-provisioner pod cannot start and fails with "mount.nfs: access denied by server while mounting"
+ Note 2: testing shows the `insecure` option must be added, otherwise clients fail to mount with "mount.nfs: access denied by server while mounting"
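The notes above can be combined into a single export entry; a minimal sketch of /etc/exports, assuming the shared directory is /share and the clients (including the k8s pod range from Note 1) fall within 192.168.1.0/24 (both values are placeholders):

``` bash
# /etc/exports - illustrative entry; adjust path and network range to your environment
# rw,sync: read-write with synchronous commits
# insecure: accept client source ports above 1024 (required, see Note 2)
# no_root_squash: do not map root to the anonymous user
/share 192.168.1.0/24(rw,sync,insecure,no_root_squash)
```

After editing, reload the export table with `exportfs -r` and inspect the result with `exportfs -v`.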

### Start the service
@@ -52,27 +52,6 @@ epoch timestamp cluster status node.total node.data shards pri relo i

- 1. Install helm: following this project's [secure helm install](../guide/helm.md)
- 2. Prepare PVs: following this project's [K8S cluster storage](../setup/08-cluster-storage.md) to create dynamic `nfs` PVs
  - Edit the config file: roles/cluster-storage/defaults/main.yml

``` bash
storage:
  nfs:
    enabled: "yes"
    server: "192.168.1.8"
    server_path: "/share"
    storage_class: "nfs-es"
    provisioner_name: "nfs-provisioner-01"
```

- Create the nfs provisioner

``` bash
$ ansible-playbook /etc/ansible/roles/cluster-storage/cluster-storage.yml
# verify after it completes
$ kubectl get pod --all-namespaces |grep nfs-prov
kube-system nfs-provisioner-01-6b7fbbf9d4-bh8lh 1/1 Running 0 1d
```
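Once the provisioner is running, workloads request storage by referencing the storage class defined above; a hypothetical PVC sketch (the claim name and size are made up, only the `nfs-es` class comes from the config):

``` bash
# pvc-es-data.yaml - hypothetical example; only storageClassName matches the config above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: es-data-claim
spec:
  accessModes:
    - ReadWriteMany            # NFS supports multi-node read-write
  storageClassName: "nfs-es"
  resources:
    requests:
      storage: 5Gi
```

Applying it with `kubectl apply -f pvc-es-data.yaml` should leave the claim Bound once the provisioner creates the backing PV.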

- 3. Install the elasticsearch chart

``` bash
@@ -8,25 +8,6 @@ Mariadb is an open-source relational database derived from MySQL, currently compatible with mysq

- helm already deployed, see [here](../guide/helm.md)
- the cluster provides persistent storage, see [here](../setup/08-cluster-storage.md)

This demo uses nfs dynamic storage; edit the nfs storage parameters:

``` bash
$ vi roles/cluster-storage/defaults/main.yml
storage:
  # nfs server parameters
  nfs:
    enabled: "yes"            # enable nfs
    server: "172.16.3.86"     # nfs server address
    server_path: "/data/nfs"  # shared directory
    storage_class: "nfs-db"   # storage_class name, referenced later by the PVC
    provisioner_name: "nfs-provisioner-01"  # any name

# when done editing, save and run
$ ansible-playbook /etc/ansible/roles/cluster-storage/cluster-storage.yml
# confirm the nfs provisioner pod
$ kubectl get pod --all-namespaces |grep nfs
kube-system nfs-provisioner-01-88694d78c-mrn7f 1/1 Running 0 6m
```

## mariadb charts configuration changes
@@ -1,33 +0,0 @@

## kubeasz-0.1.0 release notes

First standalone release of the `kubeasz` project: automated installation of a k8s cluster (currently supporting v1.8/v1.9/v1.10) and major add-ons via `ansible playbook`, for convenient deployment and flexible cluster configuration.

CHANGELOG:
- Component updates:
  - kubernetes v1.10.4, v1.9.8, v1.8.12
  - etcd v3.3.6
- Security updates:
  - fixed the kubelet anonymous-access vulnerability (thanks cqspirit #192 for the report)
- Feature updates:
  - added secure helm deployment and docs
  - added prometheus deployment and docs
  - added jenkins deployment and docs (thanks lusyoe #208)
- Script updates:
  - trimmed the inventory (/etc/ansible/hosts) configuration items
  - moved calico/flannel configuration into the corresponding roles/defaults/main.yml
  - dropped the NODE_IP variable in favor of the built-in inventory_hostname
  - dropped lb group variable settings; now derived automatically
  - dropped etcd cluster variable settings; now derived automatically
  - added the K8S_VER cluster-version variable for k8s v1.8 compatibility
  - added a script and docs for changing the system IP of an AIO deployment (docs/op/change_ip_allinone.md)
  - added setting the node role
  - made the OS security-hardening script an optional install
- Other:
  - fixed the calico-controller multi-NIC issue
  - adjusted manifests/apiserver parameters for k8s v1.8 compatibility
  - simplified the steps for adding master/node nodes
  - tuned ansible configuration parameters
  - updated harbor 1.5.1 and doc fixes (thanks lusyoe #224)
  - updated kube-dns 1.14.10
  - expanded the dashboard docs (#182)
  - fixed selinux disabling (#194)
@@ -1,19 +0,0 @@

## kubeasz-0.2.0 release notes

CHANGELOG:
- Component updates:
  - added the kube-router network plugin; set `CLUSTER_NETWORK="kube-router"` in ansible hosts
- Feature updates:
  - added the IPVS/LVS service proxy mode, more efficient than the default kube-proxy service proxy; set `SERVICE_PROXY="IPVS"` when using the kube-router network plugin
  - added metrics-server deployment to replace heapster as the metrics API provider
  - added automated installation of kube-dns/dashboard and other add-ons, configurable in `roles/cluster-addon/defaults/main.yml`
- Script updates:
  - added a script for deleting a single node, docs/op/del_one_node.md
  - added waiting for the network plugin to become ready
  - Bug fix: updated the 99.clean.yml cleanup script to resolve cni address allocation after cluster reinstall, kubernetes #57280
  - Bug fix: kube-apiserver failed to start when upgrading from 0.1.0
- Other:
  - unified the pull policy of some images to `imagePullPolicy: IfNotPresent`
  - added metrics-server and cluster-addon docs
  - updated the kube-router docs
  - updated the cluster upgrade guide docs/op/upgrade.md
@@ -1,18 +0,0 @@

## kubeasz-0.2.1 release notes

CHANGELOG:
If servers can use internal yum/apt repositories but have no public internet access, use the offline docker images to complete the cluster install: download `basic_images_kubeasz_x.y.tar.gz` from the Baidu cloud drive and extract it into the project's `down` directory.
- Component updates:
  - updated coredns to 1.1.3
- Feature updates:
  - integrated (optional) network plugin installation from offline docker images
  - integrated (optional) other add-on installation from offline docker images
  - added a script for switching the cluster network plugin
- Documentation updates:
  - [Quick start](https://github.com/easzlab/kubeasz/blob/master/docs/setup/quickStart.md)
  - [Installation planning](https://github.com/easzlab/kubeasz/blob/master/docs/setup/00-planning_and_overall_intro.md)
  - [Switching the network plugin](https://github.com/easzlab/kubeasz/blob/master/docs/op/clean_k8s_network.md)
- Other:
  - Bug fix: cluster cleanup could fail with `Device or resource busy: '/var/run/docker/netns/xxxxxxx'`; umount manually, then re-run the cleanup
  - Bug fix: #239 harbor now picks an extraction tool suited to the OS (#240)
@@ -1,22 +0,0 @@

## kubeasz-0.2.2 release notes

CHANGELOG:
- Component updates:
  - k8s v1.11.0
  - etcd v3.3.8
  - docker 18.03.1-ce
- Feature updates:
  - updated ipvs configuration and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/ipvs.md)
  - switched lb-node keepalived to unicast vrrp, expected to work with self-built LBs on public clouds (to be tested)
  - deprecated the SERVICE_PROXY variable in ansible hosts
  - updated the haproxy load-balancing algorithm configuration
- Other fixes:
  - fix: the change-cluster-network script and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/change_k8s_network.md)
  - fix: 99.clean.yml environment-variable cleanup
  - fix: the client cert accepted by metrics-server
  - fix #242: added a CA validity parameter, setting CA validity to 15 years (131400h) (#245)
  - fix: helm install failing with Error: transport is closing (#248)
  - fix: harbor showing "an unknown error occurred, please try again later" when clicking the tag page (#250)
  - fix: 99.clean.yml cleanup of services softlink (#253)
  - fix: kube-apiserver-v1.8 uses the real apiserver-count (#254)
  - fix: cleanup of network interfaces created by ipvs
@@ -1,26 +0,0 @@

## kubeasz-0.3.0 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.11.2/v1.10.6/v1.9.10/v1.8.15
  - calico: v3.1.3
  - kube-router: v0.2.0-beta.9
- Feature updates:
  - **added cluster backup and restore** with [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/cluster_restore.md)
  - **added the cilium network plugin**, docs to follow
  - **added the cluster-storage role** with [docs](https://github.com/easzlab/kubeasz/blob/master/docs/setup/08-cluster-storage.md)
  - added Aliyun NAS storage support
  - added a cluster customization [config guide](https://github.com/easzlab/kubeasz/blob/master/docs/setup/config_guide.md) and generator script `tools/init_vars.yml`
  - supported separating the deploy node from the ansible execution node, preparing for one codebase managing multiple clusters
- Other:
  - updated jenkins and plugins (#258)
  - rewrote the nfs dynamic storage scripts and docs
  - optimized the cluster-addon install scripts
  - added a docker config file
  - updated offline images 0.3
  - added batch/v2alpha support
  - moved the DNS yaml files to /opt/kube/kube-system
  - fix: vip loss during change_k8s_network on multi-master clusters
  - fix: forbid nodes from using system swap
  - fix: extracted harbor install files lacked execute permission
  - fix: haproxy/keepalived could not be installed on Ubuntu 18.04
@@ -1,30 +0,0 @@

## kubeasz-0.3.1 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.11.3, v1.10.7
  - kube-router: v0.2.0
  - dashboard: v1.10.0
  - docker: 17.03.2-ce (the version tested as stable by upstream k8s)
- Cluster installation:
  - **added the chrony cluster time-sync service**, [docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/chrony.md)
  - **removed the Node Bootstrap mechanism**, for a more stable install flow and leaner configuration
  - simplified the ansible hosts file, removing etcd and harbor variables
  - split the prepare-stage install scripts and added system ulimit settings
  - added support for more than 2 lb nodes (#286)
  - added optional lb-node load-balancing of the ingress controller NodePort service
  - customizable kubelet docker storage directory (#305)
  - added variables to support installing flannel/calico on multi-NIC hosts
- Documentation updates:
  - updated the kubeasz public-cloud install docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/kubeasz_on_public_cloud.md
  - updated the java war app deployment practice: https://github.com/easzlab/kubeasz/blob/master/docs/practice/java_war_app.md
  - updated the cilium docs, translating the official cilium security-policy example (deathstar/starwar)
  - updated the harbor and kubedns README docs
  - updated parts of the cluster install docs
- Other:
  - fixed the calicoctl config, and fixed calico/node using the `vip` as the `bgp peer` address when running on the LB master node
  - fixed jq install errors; added ipset and ipvsadm installation
  - fixed the single-node cleanup script tools/clean_one_node.yml
  - fixed the misleading error message when offline images are absent during install
  - fixed duplicate router_id on lb standby nodes when there are more than 2 nodes
  - pinned the jenkins image tag, upgraded plugin versions, and pinned security plugins (#315)
@@ -1,29 +0,0 @@

## kubeasz-0.4.0 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.12.1, v1.10.8, v1.9.11 [note the v1.12.1 kubelet logging bug](https://github.com/kubernetes/kubernetes/issues/69503)
  - docker: 18.06.1-ce (the version tested as stable by upstream k8s)
  - metrics-server: v0.3.1
  - coredns: 1.2.2, kube-dns 1.14.13
  - heapster v1.5.4
  - traefik 1.7.2
- Cluster installation:
  - **switched kubelet to webhook authentication/authorization**, improving cluster security
  - adjusted kubectl invocations in the install steps for public-cloud compatibility
  - adjusted some install steps to support separating the `ansible` execution node from the `deploy` node
  - updated the node security-hardening scripts to [ansible-os-hardening 5.0.0](https://github.com/dev-sec/ansible-os-hardening)
- Documentation updates:
  - added an `elasticsearch` cluster [deployment practice](https://github.com/easzlab/kubeasz/blob/master/docs/practice/es_cluster.md)
  - updated the [kubeasz public-cloud install docs](https://github.com/easzlab/kubeasz/blob/master/docs/setup/kubeasz_on_public_cloud.md)
  - reorganized the cluster install docs directory and switched to English file names
  - changed some in-script comments to English
- Other:
  - upgraded prometheus chart 7.1.4, grafana chart 1.16.0
  - upgraded jenkins security and k8s plugin versions (#325)
  - fixed an undefined-variable error when adding a master node
  - fixed network components occasionally failing to reach the `kubernetes svc` in ipvs mode
  - fixed the DEPRECATION WARNING when installing multiple packages via yum/apt on Ansible 2.7 (#334)
  - fixed the chrony/ntp coexistence conflict (#341)
  - fixed the conntrack-tools dependency for ipvs mode on CentOS
  - fixed the tools/change_k8s_network.yml script
@@ -1,29 +0,0 @@

## kubeasz-0.5.0 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.12.3, v1.11.5, v1.10.11
  - calico v3.2.4
  - helm v2.11.0
  - traefik 1.7.4
- Cluster installation:
  - updated the cluster upgrade script and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/upgrade.md); see the [security advisory](https://mp.weixin.qq.com/s/Q8XngAr5RuL_irRscbVbKw)
  - integrated metallb as a LoadBalancer implementation for self-hosted k8s clusters
  - supported [changing the APISERVER certificate](https://github.com/easzlab/kubeasz/blob/master/docs/op/ch_apiserver_cert.md)
  - added an ingress nodeport load-balancing script and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/loadballance_ingress_nodeport.md)
  - added https ingress configuration and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/ingress-tls.md)
  - added read-only kubectl access configuration and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/readonly_kubectl.md)
  - added apiserver configuration to support the istio sidecar auto-injection webhook (#375)
  - set net.netfilter.nf_conntrack_max=1000000 during cluster node initialization
  - removed the LB_IF parameter for multi-master clusters; it is now generated automatically to avoid configuration mistakes
- Documentation updates:
  - updated the [kubeasz public-cloud install docs](https://github.com/easzlab/kubeasz/blob/master/docs/setup/kubeasz_on_public_cloud.md)
  - updated the [metallb docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/metallb.md)
  - updated the [dashboard docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/dashboard.md) to cover read-only access
  - added istio install notes
- Other:
  - fixed loading nf_conntrack on kernel 4.19 (#366)
  - fixed the automatic NodePorts configuration in the calico controller
  - removed the helms alias settings
  - upgraded jenkins-lts and plugin versions (#358)
  - fixed the Aliyun nas dynamic pv script
@@ -1,24 +0,0 @@

## kubeasz-0.5.1 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.13.2, v1.12.4, v1.11.6, v1.10.12
  - calico v3.3.2
  - coredns 1.2.6
- Cluster installation:
  - updated calico to 3.3.2, keeping 3.2.4 as an option
  - fixed wrong automatic LB_IF detection on lb nodes in certain environments
  - removed the kube_node csr approval step (PR #399)
  - added RedHat support (PR #431)
  - changed the docker storage directory settings (PR #436)
  - updated the kube-schedule listen parameters (PR #440)
  - install flow now waits for ETCD sync to finish before reporting success (PR #420)
  - added an optional pod-infra-container setting
  - added nginx-ingress manifests
- Documentation updates:
  - **added [calico route reflector docs](https://github.com/easzlab/kubeasz/blob/master/docs/setup/network-plugin/calico-bgp-rr.md)**, required reading for large k8s clusters on calico
  - updated several docs and fixed internal doc links (PR #429)
  - added a dashboard ingress [config example](https://github.com/easzlab/kubeasz/blob/master/docs/guide/ingress-tls.md#%E9%85%8D%E7%BD%AE-dashboard-ingress)
- Other:
  - added the helm tls environment variable (PR #398)
  - fixed the dashboard ingress config (issue #403)
@@ -1,36 +0,0 @@

## kubeasz-0.6.0 release notes

- Note: this is the last kubeasz-0.x release; it will be merged into the release-0 branch, which stops major updates and receives only bug fixes; the master branch moves on to kubeasz-1.x releases.
- Action Required: this release changes the ansible hosts file; to update an existing deployment, adjust `/etc/ansible/hosts` following the matching example in the example directory.

CHANGELOG:
- Component updates:
  - k8s: v1.13.3
  - calico v3.4.1
  - flannel v0.11.0-amd64
  - docker 18.09.2
  - harbor 1.6.3
  - helm/tiller: v2.12.3
- Cluster installation:
  - **added add/remove etcd node** scripts and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/op-etcd.md)
  - **added optional extra load-balancer nodes (ex_lb)**, usable for load-balancing services exposed via NodePort
  - updated the node-removal script and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/del_one_node.md)
  - streamlined the add-node and add-master flows
  - updated the harbor install flow and docs
  - optimized the prepare tasks to avoid distributing certs and kubeconfig to nodes that do not need them
  - updated the prometheus DingTalk alerting config and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/prometheus.md#%E5%8F%AF%E9%80%89-%E9%85%8D%E7%BD%AE%E9%92%89%E9%92%89%E5%91%8A%E8%AD%A6)
  - added mariadb cluster deployment via helm, with docs
  - added the official k8s mysql cluster sample config
  - added redis-ha cluster deployment via helm
  - added loading k8s-related kernel modules at boot
  - updated calico to 3.4.1, keeping 3.3.x/3.2.x as options
- Documentation updates:
  - **added gitlab-ci docs**, https://github.com/easzlab/kubeasz/blob/master/docs/guide/gitlab/readme.md
  - updated several docs (helm/dns/chrony)
- Other:
  - reverted PR #440 for compatibility with k8s <= 1.11
  - fixed losing node connectivity when clearing iptables rules (PR #453 by PowerDos)
  - added an option to enable the docker remote API, off by default (PR #444 by lusyoe)
  - fixed the calico 3.3.x rbac config (PR #447 by sunshanpeng)
  - added metrics monitoring options for coredns and calico (PR #447 by sunshanpeng)
  - added helm offline-install notes (doc/guide/helm.md) (PR #443 by j4ckzh0u)
@@ -1,41 +0,0 @@

## kubeasz-1.0.0 release notes

- Note: the first stable kubeasz 1.x release, introducing the easzctl command-line tool, multi-cluster management, containerized usage, and further configuration cleanups; the old master has been merged into the release-0 branch, which stops major updates and receives only bug fixes; the master branch continues with kubeasz-1.x releases.
- Action Required: this release changes the ansible hosts file; adjust `/etc/ansible/hosts` following the matching example in the example directory, and keep the host-group order identical to the example.

CHANGELOG: (since 0.6.x)
- Component updates:
  - k8s: v1.13.4
  - calico v3.4.3
  - cilium v1.4.1
  - dashboard v1.10.1
- Cluster installation:
  - **introduced the [easzctl](https://github.com/easzlab/kubeasz/blob/master/tools/easzctl) command-line tool**, the recommended tool for routine cluster management going forward, [usage intro](https://github.com/easzlab/kubeasz/blob/master/docs/setup/easzctl_cmd.md)
  - **added running kubeasz in docker**, see https://github.com/easzlab/kubeasz/blob/master/docs/setup/docker_kubeasz.md
  - streamlined the ansible hosts configuration, making it leaner and easier to use
  - deprecated the new-node/new-master/new-etcd host groups; their functions are integrated into the easzctl CLI
  - deprecated the K8S_VER variable in favor of auto-detection, avoiding manual configuration errors
  - moved basic_auth settings into roles:kube_master for better initial security, with apiserver username/password auth disabled by default; see roles/kube-master/defaults/main.yml
  - easzctl cluster-level operations:
    - switch/create cluster context
    - delete the current cluster
    - list all clusters
    - create a cluster
    - create a single-node cluster (similar to minikube)
  - easzctl in-cluster operations:
    - [add master](https://github.com/easzlab/kubeasz/blob/master/docs/op/AddMaster.md)
    - [add node](https://github.com/easzlab/kubeasz/blob/master/docs/op/AddNode.md)
    - [add etcd](https://github.com/easzlab/kubeasz/blob/master/docs/op/op-etcd.md)
    - [delete etcd](https://github.com/easzlab/kubeasz/blob/master/docs/op/op-etcd.md)
    - [delete node](https://github.com/easzlab/kubeasz/blob/master/docs/op/clean_one_node.md)
  - adjusted some install scripts for docker-run kubeasz compatibility
  - added tools/kubeasz-docker, a script to start the kubeasz container
  - dashboard installs now include heapster by default (the current dashboard version still depends on heapster)
- Other:
  - fixed compatibility with docker 18.09.x installs
  - fixed binaries in the project bin directory not being executable
  - fixed the docker version-detection logic
  - fixed the kernel version check during cilium install
  - add support for harbor v1.7.x (#478 by weilinqwe)
  - perf: faster source download (#483 by waitingsong)
  - updated the dashboard docs
@@ -1,26 +0,0 @@

## kubeasz-1.0.0rc1 release notes

- Note: the first kubeasz-1.x pre-release; the old master has been merged into the release-0 branch, which stops major updates and receives only bug fixes; the master branch continues with kubeasz-1.x releases.
- Action Required: this release changes the ansible hosts file; adjust `/etc/ansible/hosts` following the matching example in the example directory, and keep the host-group order identical to the example.

CHANGELOG:
- Component updates:
  - k8s: v1.13.4
  - cilium v1.4.1
- Cluster installation:
  - **introduced the [easzctl](https://github.com/easzlab/kubeasz/blob/master/tools/easzctl) command-line tool**, the recommended tool for routine cluster management going forward, including multi-cluster management (to do)
  - **added running kubeasz in docker**, see https://github.com/easzlab/kubeasz/blob/master/docs/setup/docker_kubeasz.md
  - streamlined the example hosts configuration; deprecated the new-node/new-master/new-etcd host groups and the K8S_VER variable in favor of auto-detection
  - integrated the following cluster operations into the easzctl CLI:
    - [add master](https://github.com/easzlab/kubeasz/blob/master/docs/op/AddMaster.md)
    - [add node](https://github.com/easzlab/kubeasz/blob/master/docs/op/AddNode.md)
    - [add etcd](https://github.com/easzlab/kubeasz/blob/master/docs/op/op-etcd.md)
    - [delete etcd](https://github.com/easzlab/kubeasz/blob/master/docs/op/op-etcd.md)
    - [delete node](https://github.com/easzlab/kubeasz/blob/master/docs/op/clean_one_node.md)
    - [quickly create an aio cluster]()
  - installs now generate a random basic auth password
  - adjusted some install scripts for docker-run kubeasz compatibility
  - update cilium v1.4.1, cilium docs update (to do)
  - added tools/kubeasz-docker, a script to start the kubeasz container
- Other:
  - fixed compatibility with docker 18.09.x installs
@@ -1,23 +0,0 @@

## kubeasz-1.0.1 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.13.5 v1.12.7 v1.11.9
  - cni v0.7.5
  - coredns 1.4.0
- Cluster installation:
  - tuned the journald logging config to avoid duplicating syslog collection, saving node resources
  - mitigated CVE-2019-3874 (work around), [details](https://mp.weixin.qq.com/s/CnzK8722pJUWRAitWBRPcw)
  - fixed apiserver failing when the first etcd member fails; see kubernetes issue #72102
  - fixed an occasional compatibility problem during add-master, issue #490
  - fixed user rbac settings when apiserver basic_auth is enabled
  - fixed the DOCKER_VER variable needing an explicit float conversion during docker install
  - adjusted ca certificate validity and related settings
  - added an optional kubectl parameter for choosing the kubeconfig
- easzctl CLI:
  - added [cluster upgrade](https://github.com/easzlab/kubeasz/blob/master/docs/op/upgrade.md)
  - fixed the pre-start checks of easzctl setup
  - restart apiserver after successfully adding/removing an etcd node
- Other:
  - updated DOCKER_VER to the new version format #488
  - fix download url for harbor v1.5.x #492
@@ -1,32 +0,0 @@

## kubeasz-1.1.0 release notes

CHANGELOG:
- Component updates:
  - k8s: v1.14.1 v1.12.8 v1.11.10
  - coredns 1.5.0
  - metrics-server 0.3.2
- Cluster installation:
  - updated the docker CN mirror registry settings
  - added kubelet resource reservation settings
  - added a per-node pod network mask length setting
  - feat(chrony): added time sources #495
  - streamlined creating the read-only kubectl config #537
  - removed the cAdvisor port 4194 restriction
  - minor fix: faster system package installation
  - added a Tencent Cloud install example, by waitingsong
- easzctl CLI:
  - integrated basic-auth configuration, `easzctl help basic-auth`
- Other:
  - updated the basic-auth docs
  - updated DOCKER_VER to the new version format #488
  - updated the istio docs, by waitingsong
  - removed the docker cn registry #514 by neatlife
  - docs: changed the source clone instructions #495 by waitingsong
  - docs: updated the id_rsa key generation command etc. #495 by waitingsong
  - fix: loading nf_conntrack on kernel >= 4.19
  - fix: add-node warning message, ISSUE #508
  - fix download url for harbor v1.5.x #492
  - fix: corrected the CNI plugins binary download name #524
  - fix: batch key push compatible with newer openssh #522
  - fix: rsyslog service start error on centos7 #538
  - perf: replaced the nginx image with nginx:alpine
@@ -1,33 +0,0 @@

## kubeasz-1.2.0 release notes

IMPORTANT: this release adds support for the `containerd` container runtime; add the global variable `CONTAINER_RUNTIME` (docker or containerd) to `ansible hosts`, following the examples under example/.

NOTE: the kubeasz project has officially moved from github.com/gjmzj/kubeasz to the organization github.com/easzlab/kubeasz

CHANGELOG:
- Component updates:
  - k8s: v1.14.2
  - traefik v1.7.11
  - efk: es/kibana 6.6.1
- Cluster installation:
  - added containerd support and a [brief intro](https://github.com/easzlab/kubeasz/blob/master/docs/guide/containerd.md)
  - added an EFK log cleanup tool and [docs](https://github.com/easzlab/kubeasz/blob/master/docs/guide/efk.md#%E7%AC%AC%E5%9B%9B%E9%83%A8%E5%88%86%E6%97%A5%E5%BF%97%E8%87%AA%E5%8A%A8%E6%B8%85%E7%90%86)
  - added Amazon Linux support, by lusyoe
  - updated the containerd/docker CN mirror registry settings
  - added containerd integration with harbor
  - updated the cluster cleanup and offline image push scripts to support containerd
- easzctl CLI:
  - fixed running the `easzctl basic-auth` command #544
- Documentation:
  - updated the efk docs
  - updated the cluster node planning docs
  - updated the public-cloud deployment docs
  - added an AWS high-availability deployment doc, by lusyoe
  - updated the Tencent Cloud deployment doc, by waitingsong
  - updated the istio v1.1.7 install doc, by waitingsong
  - updated the harbor docs
- Other:
  - updated the project logo
  - fix: rare deletion of other nodes' info from hosts when cleaning a node #541
  - fix: `easzctl add-node` returned a failure message on success when no cluster context existed
  - updated URLs affected by the project migration
@@ -1,46 +0,0 @@

## kubeasz-2.0.0 release notes

**IMPORTANT:** this is the first release of the HA-2x (#585) architecture; compared with HA-1x (#584) it offers:
- simpler HA cluster installs, no longer depending on an external load balancer; the flow is identical on self-hosted and cloud-provider environments
- easier cluster scaling, from a single-node cluster up to a multi-master, multi-node cluster

**WARNING:** due to the architectural differences, existing (HA-1x) clusters cannot be upgraded to 2x; the 2x release can only be used for newly created k8s clusters; project focus moves to maintaining 2x, see the [branch notes](https://github.com/easzlab/kubeasz/blob/master/docs/mixes/branch.md)

CHANGELOG:
- Cluster installation:
  - deprecated the deploy role in ansible hosts; kept 2 streamlined predefined node-planning examples (example/hosts.xx)
  - reworked the prepare flow (removed the deploy role and lb-node creation)
  - adjusted the kube_master install flow
  - adjusted the kube_node install flow (node nodes gain a haproxy service)
  - adjusted the network and other install flows
  - trimmed the example hosts files and configuration items
  - adjusted the ex_lb install flow [optional]
  - added a mutual-exclusion check between docker and containerd installs
  - added role: clean and rewrote the cleanup script 99.clean.yml
  - deprecated tools/clean_one_node.yml
  - adjusted the helm install flow
  - adjusted the cluster-addon install flow (auto-install traefik, adjusted offline dashboard install)
  - replaced hosts: all in playbooks with concrete node-group names, reducing blast-radius risk
  - dropped the Baidu drive download channel; added the easzup download tool
- easzctl tool:
  - deprecated the clean-node command in favor of del-master/del-node
  - adjusted the add-etcd/add-node/add-master scripts for the HA-2x architecture
  - adjusted the del-etcd/del-node/del-master scripts
  - fixed the node-existence check in add-node/add-master/add-etcd
- easzup tool:
  - fixed possible selinux setup issues on centos and similar systems
  - switched from wget to curl when downloading the docker binaries
- Documentation:
  - large updates to the cluster install docs
    - quick start guide
    - cluster planning and configuration intro
    - public-cloud install docs
    - node install docs
    - ...
  - updated the cluster operations docs (docs/op/op-index.md)
  - added optional external load balancer docs (docs/setup/ex_lb.md)
  - added docs for containerized system services haproxy/chrony (docs/practice/dockerize_system_service.md)
- Other:
  - fix: ip_forward disabled when hardening an existing cluster
  - fix: haproxy max connections setting
  - fix: cleanup scripts when running kubeasz in a container
@@ -1,27 +0,0 @@

## kubeasz-2.0.1 release notes

**WARNING:** starting with kubeasz 2.0.1, the project supports only the 4 most recent kubernetes major releases, currently v1.12/v1.13/v1.14/v1.15; older versions have no compatibility guarantee.

CHANGELOG:
- Component updates
  - k8s: v1.15.0
  - metrics-server: v0.3.3
- Cluster installation
  - **offline install of system packages** such as chrony/ipvsadm/ipset/haproxy/keepalived, currently tested on Ubuntu1604/Ubuntu1804/CentOS7
  - **fixed and simplified** the cluster backup/restore scripts and docs
  - disabled the kubelet `--system-reserved` parameter by default
  - fixed the `--allow-privileged` parameter removed in kubelet v1.15
  - fixed regenerating the k8s service files during cluster upgrades
- easzup tool
  - added automatic download of system packages
  - added offline save/load of kubeasz images
  - fixed the download location of the docker package
  - added pre-install preparation of the ssh key pair and ensuring a python executable on $PATH
- Documentation
  - added **fully offline install** docs
  - added notes for nodes on non-standard ssh ports, docs/op/op-node.md
- Other
  - role:deploy now detects whether ansible runs containerized
  - fix: kubeconfig deletion error when running the deploy task in a container
  - fix: wrong host_ip detection on nodes with NIC bonding (thx beef9999, ISSUE #607)
  - fix: node OS ulimit settings
@@ -1,30 +0,0 @@

## kubeasz-2.0.2 release notes

**WARNING:** starting with kubeasz 2.0.1, the project supports only the 4 most recent kubernetes major releases, currently v1.12/v1.13/v1.14/v1.15; older versions have no compatibility guarantee.

CHANGELOG:
- Component updates
  - docker: 18.09.7
- Cluster installation
  - **offline install of system packages** fully tested on Ubuntu1604/1804, CentOS7, Debian9/10
  - split the kubelet config into /var/lib/kubelet/config.yaml
  - containerd/docker gained a config option for enabling registry mirrors
  - fixed helm install failing when the namespace already exists
  - adjusted some base software installation
  - adjusted some apiserver parameters 0ca5f7fdd9dc97c72ac
  - the cleanup script no longer removes virtual NICs, routing tables, or iptables/ipvs rules, and now suggests rebooting the node after it runs
- easzup tool
  - added a config option for the docker registry CN mirror and for selecting a suitable docker binary download link
  - fixed failures when docker is already installed
  - update versions and minor fixes
- Documentation
  - offline install docs update
  - cluster install docs update
- Other
  - new logo
  - fix: roles/cluster-storage/cluster-storage.yml failing with missing `deploy`
  - fix: kube-reserved errors on some OSes (/sys/fs/cgroup reported read-only)
  - fix: minor keepalived config for the ex_lb group
  - fix: occasional missing `docker_ver` variable during docker install
  - fix: pods on Ubuntu1804 failing to resolve external DNS
  - fix: k8s services not restarting after stopping on SIGPIPE #631 thx to gj19910723
@@ -1,28 +0,0 @@

## kubeasz-2.0.3 release notes

**WARNING:** starting with kubeasz 2.0.1, the project supports only the 4 most recent kubernetes major releases, currently v1.12/v1.13/v1.14/v1.15; older versions have no compatibility guarantee.

CHANGELOG:
- Component updates
  - k8s: v1.15.2 v1.14.5 v1.13.9
  - docker: 18.09.8
  - kube-ovn: 0.6.0 #644
- Cluster installation
  - fixed the add/remove etcd node scripts (when the node to remove is unreachable)
  - fixed the remove master/node scripts (when the node to remove is unreachable)
  - fixed the etcd backup script
  - set kube-proxy to ipvs mode by default
  - added some kernel tuning parameters
  - push common cluster add-on images after adding a new node #650
  - added an internal (insecure) registry after Docker install #651
  - added optional DirectRouting for flannel vxlan #652
  - disabled the kernel parameter net.ipv4.tcp_tw_reuse
  - use the netaddr module for ip address calculation #658
- Tool scripts
  - added tools/imgutils for pulling images from gcr.io and similar registries, and for batch saving/loading offline images
- Documentation
  - updated istio.md #641
  - updated the [apiserver certificate change docs](https://github.com/easzlab/kubeasz/blob/master/docs/op/ch_apiserver_cert.md)
- Other
  - fix: the regex easzctl uses when deleting nodes
  - fix: kube-ovn startup parameters #658
@@ -1,32 +0,0 @@

## kubeasz-2.1.0 release notes

[Warning] the PROXY_MODE variable definition moved into ansible hosts #688; existing ansible hosts files must add this definition manually, see example/hosts.*

CHANGELOG:
- Component updates
  - k8s: v1.16.2 v1.15.5 v1.14.8 v1.13.12
  - docker: 18.09.9
  - coredns v1.6.2
  - metrics-server v0.3.6
  - kube-ovn: 0.8.0 #708
  - dashboard v2.0.0-beta5
- Cluster installation
  - updated/cleaned APIs versions to support k8s v1.16
  - added temporary cluster start/stop scripts 91.start.yml 92.stop.yml
  - updated the read-only rbac role
- Tool scripts
  - updated tools/easzup
- Documentation
  - added a go web app deployment practice docs/practice/go_web_app
  - added a go project dockerfile example docs/practice/go_web_app/Dockerfile-more
  - updated the log-pilot logging solution docs/guide/log-pilot.md
  - updated the homepage recommended-tools section: kuboard k9s octant
- Other
  - fix: added the kube-proxy parameter --cluster-cidr #663
  - fix: removing the etcd service no longer affects node services #690
  - fix: pip-install the netaddr package during the deploy stage
  - fix: only non-containerized ansible needs the install #658
  - fix: ipvs-connection-timeout-issue
  - fix: heapster unable to read node metrics
  - fix: tcp_tw_recycle settings issue #714
  - fix: docs typo "登陆"->"登录" #720
@@ -1,31 +0,0 @@

## kubeasz-2.2.0 release notes

CHANGELOG:
- Component updates
  - k8s: v1.17.2 v1.16.6 v1.15.9 v1.14.10
  - etcd: v3.4.3
  - docker: 19.03.5
  - coredns: v1.6.6
  - kube-ovn: 0.9.1
  - dashboard: v2.0.0-rc3
  - harbor: v1.8~v1.10
  - traefik: v1.7.20
  - easzlab/es-index-rotator: 0.2.1
- Cluster installation
  - security: kube-controller-manager and kube-scheduler now use certificates to access kube-apiserver
  - security: disabled the kubelet read-only port
  - the cleanup script now also removes binaries and offline image files
  - added support for installing harbor v1.8-v1.10
  - split out generating the read-only kubeconfig #727
  - adjusted a few apiserver startup parameters
- Tool scripts
  - easzctl: added the ${@:3} argument list to support passing extra host variables for nodes in hosts #749
  - easzup: fixed the docker install logic aa76da0f2ee2b01d47c28667feed36b6be778b17
- Other
  - fix: dashboard cluster-service generation #739
  - fix: ex_lb install failure on ubuntu1804
  - fix: wrong bgppeer nodeSelector in calico BGP RR mode #741
  - fix: adding/removing etcd nodes failing when the etcd cluster has an unhealthy member #743
  - fix: kube-router install error #783
  - fix: typo "登陆"->"登录" #720
  - fix: several doc path errors
@@ -1,28 +0,0 @@

## kubeasz-2.2.1 release notes

CHANGELOG:
- Component updates
  - k8s: v1.18.3, v1.17.6, v1.16.10, v1.15.12
  - docker: 19.03.8
  - coredns: v1.6.7
  - calico: v3.8.8
  - flannel: v0.12.0-amd64
  - pause: 3.2
  - dashboard: v2.0.1
  - easzlab/kubeasz-ext-bin:0.5.2
- Cluster installation
  - updated etcd install parameters #823
  - adjusted some kubelet.service start preconditions
  - prepare stage now sets the kernel parameter net.core.somaxconn = 32768
- Tool scripts
  - easzup: adjusted downloading/installing docker etc.
  - docker-tag: restored the original function and added harbor image queries #814 #824
- Documentation
  - updated docs linked from the homepage: Allinone install, offline install, etc.
  - updated the helm3 docs, with a redis cluster install example
  - added kubesphere install docs #804
- Other
  - removed the 'azure.cn' docker registry mirror
  - fix: /opt/kube/bin wrongly removed when deleting a node role #837
  - fix: pause-amd64:3.2 repos
  - fix: several doc errors
@@ -1,22 +0,0 @@

## kubeasz-2.2.2 release notes

CHANGELOG:
- Component updates
  - k8s: v1.19.4, v1.18.12, v1.17.14
  - docker: 19.03.13
  - etcd: v3.4.13
  - coredns: v1.7.1
  - cni-plugins: v0.8.7
  - flannel: v0.13.0-amd64
  - dashboard: v2.0.4
- Cluster installation
  - replaced the apiserver parameter --basic-auth-file with --token-auth-file
  - kubelet start-parameter changes for debian 10 #912
  - fixed the default iptables issue on debian 10 #909
  - added the CALICO_NETWORKING_BACKEND variable to roles/calico/defaults/main.yaml #895
- Tool scripts
  - easzup: adjusted some download scripts, added containerd download #918
- Documentation
  - Update kuboard.md #861
- Other
  - adjusted the etcd backup files #902 #932 #933
@@ -1,18 +0,0 @@

## kubeasz-2.2.3 release notes

CHANGELOG:
- Component updates
  - k8s: v1.20.1, v1.19.6, v1.18.14, v1.17.16
  - containerd v1.4.3
  - docker: 19.03.14
  - calico v3.15.3
  - dashboard: v2.1.0
- Cluster installation
  - updated support for containerd 1.4.3
  - changed the etcd startup parameter auto-compaction-mode=periodic #951 by lushenle
  - enabled the docker live-restore feature by default
- Tool scripts
  - easzup: removed the containerd download code, now merged into the image easzlab/kubeasz-ext-bin:0.8.1
  - start-aio: added a one-command script to download and start an aio cluster: ./start-aio ${kubeasz_version}
- Documentation
  - minor doc updates
@@ -13,7 +13,7 @@

## Static PV

First we need an NFS server to provide the underlying storage; one can be created following the [nfs-server](../guide/nfs-server.md) doc.

-- Create a static pv, specifying capacity, access mode, reclaim policy, storage class, etc.; see [here](https://github.com/feiskyer/kubernetes-handbook/blob/master/zh/concepts/persistent-volume.md)
+- Create a static pv, specifying capacity, access mode, reclaim policy, storage class, etc.

``` bash
apiVersion: v1
@@ -40,57 +40,60 @@ spec:

In a working k8s cluster there will be many `PVC` requests; if an administrator had to create the matching `PV` resources by hand every time, that would be quite inconvenient. K8S therefore provides a range of `provisioner`s to create `PV`s dynamically, which saves administrator time and lets `StorageClasses` wrap different storage types for PVCs to choose from.

-The project's `role: cluster-storage` currently supports dynamic `provisioner`s for self-hosted nfs and aliyun_nas
+This project uses nfs-client-provisioner as the example (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)

-- 1. Edit the custom config file: roles/cluster-storage/defaults/main.yml
+- 1. Edit the cluster config file: clusters/${集群名}/config.yml

``` bash
-# e.g. creating an nfs provisioner
-storage:
-  nfs:
-    enabled: "yes"
-    server: "192.168.1.8"
-    server_path: "/data/nfs"
-    storage_class: "class-nfs-01"
-    provisioner_name: "nfs-provisioner-01"
-... omitted
+# enable nfs-provisioner install in role:cluster-addon
+nfs_provisioner_install: "yes"            # change to yes
+nfs_provisioner_namespace: "kube-system"
+nfs_provisioner_ver: "v4.0.1"
+nfs_storage_class: "managed-nfs-storage"
+nfs_server: "192.168.31.244"              # change to the real nfs server address
+nfs_path: "/data/nfs"                     # change to the real nfs shared directory
```

- 2. Create the nfs provisioner

``` bash
-$ ansible-playbook /etc/ansible/roles/cluster-storage/cluster-storage.yml
+$ ezctl setup ${集群名} 07

# verify after it completes
-$ kubectl get pod --all-namespaces |grep nfs-prov
-kube-system nfs-provisioner-01-6b7fbbf9d4-bh8lh 1/1 Running 0 1d
+$ kubectl get pod --all-namespaces |grep nfs-client
+kube-system nfs-client-provisioner-84ff87c669-ksw95 1/1 Running 0 21m
```
**注意** k8s集群可以使用多个nfs provisioner,重复上述步骤1、2:修改使用不同的`nfs server` `nfs_storage_class` `nfs_provisioner_name`后执行创建即可。
## Verify using dynamic PV

There is a test example under clusters/${集群名}/yml/nfs-provisioner/

``` bash
$ kubectl apply -f /etc/kubeasz/clusters/hello/yml/nfs-provisioner/test-pod.yaml

# verify the test pod
$ kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
test-pod   0/1     Completed   0          6h36m

# verify the automatically created pv resource
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS          REASON   AGE
pvc-44d34a50-e00b-4f6c-8005-40f5cc54af18   2Mi        RWX            Delete           Bound    default/test-claim   managed-nfs-storage            6h36m

# verify the PVC has been bound successfully: the STATUS field is Bound
$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-44d34a50-e00b-4f6c-8005-40f5cc54af18   2Mi        RWX            managed-nfs-storage   6h37m
```

In addition, after the Pod finishes it creates a `SUCCESS` file in the mounted directory. We can check on the NFS server:

```
.
└── default-test-claim-pvc-44d34a50-e00b-4f6c-8005-40f5cc54af18
    └── SUCCESS
```
As shown above, at mount time nfs-client automatically created a directory for the PVC; the `/mnt` mounted in our Pod actually refers to that directory, so the `SUCCESS` file we created under `/mnt` was automatically written there as well.
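The directory name follows the provisioner's `<namespace>-<pvc-name>-<pv-name>` convention; a minimal sketch, using the sample PV name from the output above:

```shell
# Reconstruct the sub-directory name nfs-subdir-external-provisioner
# creates for a bound claim: <namespace>-<pvc-name>-<pv-name>.
# The PV name below is taken from the sample output above.
ns="default"
pvc="test-claim"
pv="pvc-44d34a50-e00b-4f6c-8005-40f5cc54af18"
subdir="${ns}-${pvc}-${pv}"
echo "$subdir"
```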
@@ -198,6 +198,14 @@ prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "__prom_chart__"

# nfs-provisioner auto-install
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "__nfs_provisioner__"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
ezctl

@@ -149,6 +149,7 @@ function new() {
dashboardVer=$(grep 'dashboardVer=' ezdown|cut -d'=' -f2)
dashboardMetricsScraperVer=$(grep 'dashboardMetricsScraperVer=' ezdown|cut -d'=' -f2)
metricsVer=$(grep 'metricsVer=' ezdown|cut -d'=' -f2)
nfsProvisionerVer=$(grep 'nfsProvisionerVer=' ezdown|cut -d'=' -f2)
promChartVer=$(grep 'promChartVer=' ezdown|cut -d'=' -f2)
traefikChartVer=$(grep 'traefikChartVer=' ezdown|cut -d'=' -f2)
harborVer=$(grep 'HARBOR_VER=' ezdown|cut -d'=' -f2)

@@ -165,6 +166,7 @@ function new() {
-e "s/__dns_node_cache__/$dnsNodeCacheVer/g" \
-e "s/__dashboard__/$dashboardVer/g" \
-e "s/__dash_metrics__/$dashboardMetricsScraperVer/g" \
-e "s/__nfs_provisioner__/$nfsProvisionerVer/g" \
-e "s/__prom_chart__/$promChartVer/g" \
-e "s/__traefik_chart__/$traefikChartVer/g" \
-e "s/__harbor__/$harborVer/g" \
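The `grep … ezdown|cut -d'=' -f2` pattern above simply takes whatever follows the `=` in an assignment line; a tiny self-contained sketch:

```shell
# Emulate how ezctl extracts a version string from ezdown:
# match the assignment line, then keep the value after '='.
line='nfsProvisionerVer=v4.0.1'
ver=$(echo "$line" | cut -d'=' -f2)
echo "$ver"
```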
ezdown

@@ -14,7 +14,7 @@ set -o errexit

# default settings, can be overridden by cmd line options, see usage
DOCKER_VER=20.10.5
KUBEASZ_VER=3.0.1
K8S_BIN_VER=v1.20.5
EXT_BIN_VER=0.8.1
SYS_PKG_VER=0.3.3

@@ -30,6 +30,7 @@ dashboardVer=v2.1.0
dashboardMetricsScraperVer=v1.0.6
metricsVer=v0.3.6
pauseVer=3.2
nfsProvisionerVer=v4.0.1
export ciliumVer=v1.4.1
export kubeRouterVer=v0.3.1
export kubeOvnVer=v1.5.3

@@ -110,8 +111,18 @@ Description=Docker Application Container Engine
Documentation=http://docs.docker.io
[Service]
Environment="PATH=/opt/kube/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=/sbin/iptables -F
ExecStartPre=/sbin/iptables -X
ExecStartPre=/sbin/iptables -F -t nat
ExecStartPre=/sbin/iptables -X -t nat
ExecStartPre=/sbin/iptables -F -t raw
ExecStartPre=/sbin/iptables -X -t raw
ExecStartPre=/sbin/iptables -F -t mangle
ExecStartPre=/sbin/iptables -X -t mangle
ExecStart=/opt/kube/bin/dockerd
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStartPost=/sbin/iptables -P INPUT ACCEPT
ExecStartPost=/sbin/iptables -P OUTPUT ACCEPT
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
ExecReload=/bin/kill -s HUP \$MAINPID
Restart=on-failure
RestartSec=5

@@ -167,15 +178,6 @@ EOF
sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config
fi

logger debug "enable and start docker"
systemctl enable docker
systemctl daemon-reload && systemctl restart docker && sleep 4

@@ -220,7 +222,7 @@ function get_k8s_bin() {
}

function get_ext_bin() {
[[ -f "$BASE/bin/etcdctl" ]] && { logger warn "extra binaries existed"; return 0; }

logger info "downloading extra binaries kubeasz-ext-bin:$EXT_BIN_VER"
docker pull "easzlab/kubeasz-ext-bin:$EXT_BIN_VER" && \

@@ -261,9 +263,7 @@ function get_harbor_offline_pkg() {
}

function get_offline_image() {

imageDir="$BASE/down"

logger info "downloading offline images"

if [[ ! -f "$imageDir/calico_$calicoVer.tar" ]];then

@@ -302,6 +302,10 @@ function get_offline_image() {
docker save -o "$imageDir/pause_$pauseVer.tar" "easzlab/pause-amd64:$pauseVer"
/bin/cp -u "$imageDir/pause_$pauseVer.tar" "$imageDir/pause.tar"
fi
if [[ ! -f "$imageDir/nfs-provisioner_$nfsProvisionerVer.tar" ]];then
docker pull "easzlab/nfs-subdir-external-provisioner:$nfsProvisionerVer" && \
docker save -o "$imageDir/nfs-provisioner_$nfsProvisionerVer.tar" "easzlab/nfs-subdir-external-provisioner:$nfsProvisionerVer"
fi
if [[ ! -f "$imageDir/kubeasz_$KUBEASZ_VER.tar" ]];then
docker pull "easzlab/kubeasz:$KUBEASZ_VER" && \
docker save -o "$imageDir/kubeasz_$KUBEASZ_VER.tar" "easzlab/kubeasz:$KUBEASZ_VER"
@@ -37,6 +37,9 @@
- import_tasks: prometheus.yml
  when: '"kube-prometheus-operator" not in pod_info.stdout and prom_install == "yes"'

- import_tasks: nfs-provisioner.yml
  when: '"nfs-client-provisioner" not in pod_info.stdout and nfs_provisioner_install == "yes"'

#- block:
#  - block:
#    - name: Try to push the offline metallb image (failures can be ignored)
@@ -0,0 +1,33 @@
- name: Try to push the offline nfs-provisioner image (failures can be ignored)
  copy: src={{ base_dir }}/down/{{ nfsprovisioner_offline }} dest=/opt/kube/images/{{ nfsprovisioner_offline }}
  when: 'nfsprovisioner_offline in download_info.stdout'

- name: Check whether the offline nfs-provisioner image has been pushed
  command: "ls /opt/kube/images"
  register: image_info

- name: Load the offline nfs-provisioner image (failures can be ignored)
  shell: "{{ bin_dir }}/docker load -i /opt/kube/images/{{ nfsprovisioner_offline }}"
  when: 'nfsprovisioner_offline in image_info.stdout and CONTAINER_RUNTIME == "docker"'

- name: Load the offline nfs-provisioner image (failures can be ignored)
  shell: "{{ bin_dir }}/ctr -n=k8s.io images import /opt/kube/images/{{ nfsprovisioner_offline }}"
  when: 'nfsprovisioner_offline in image_info.stdout and CONTAINER_RUNTIME == "containerd"'

- name: Prepare the nfs-provisioner configuration directory
  file: name={{ cluster_dir }}/yml/nfs-provisioner state=directory
  run_once: true
  connection: local

- name: Prepare the nfs-provisioner deployment files
  template: src=nfs-provisioner/{{ item }}.j2 dest={{ cluster_dir }}/yml/nfs-provisioner/{{ item }}
  with_items:
  - "nfs-provisioner.yaml"
  - "test-pod.yaml"
  run_once: true
  connection: local

- name: Create the nfs-provisioner deployment
  shell: "{{ base_dir }}/bin/kubectl apply -f {{ cluster_dir }}/yml/nfs-provisioner/nfs-provisioner.yaml"
  run_once: true
  connection: local
@@ -0,0 +1,119 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: {{ nfs_provisioner_namespace }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: {{ nfs_provisioner_namespace }}
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: {{ nfs_provisioner_namespace }}
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: {{ nfs_provisioner_namespace }}
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: {{ nfs_provisioner_namespace }}
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: {{ nfs_provisioner_namespace }}
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1
          image: easzlab/nfs-subdir-external-provisioner:{{ nfs_provisioner_ver }}
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: {{ nfs_server }}
            - name: NFS_PATH
              value: {{ nfs_path }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ nfs_server }}
            path: {{ nfs_path }}


---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ nfs_storage_class }}
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
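If released claims' data should be kept rather than removed, the `archiveOnDelete` parameter can be flipped; a sketch of an alternative class (the class name here is hypothetical):

```yaml
# hypothetical archival variant of the storage class above:
# with archiveOnDelete "true" the provisioner renames a released
# PVC's directory to archived-<name> instead of deleting it
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
```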
@@ -0,0 +1,35 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: {{ nfs_storage_class }}
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Mi

---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
@@ -7,6 +7,7 @@ dashboard_offline: "dashboard_{{ dashboardVer }}.tar"

metricsscraper_offline: "metrics-scraper_{{ dashboardMetricsScraperVer }}.tar"

nfsprovisioner_offline: "nfs-provisioner_{{ nfs_provisioner_ver }}.tar"

# metallb auto-install
#metallb_install: "no"
@@ -1,3 +0,0 @@
- hosts: localhost
  roles:
  - cluster-storage
@@ -1,17 +0,0 @@
# dynamic storage types, currently supports self-hosted nfs and aliyun_nas
storage:
  # nfs server parameters
  nfs:
    enabled: "no"
    server: "172.16.3.86"
    server_path: "/data/nfs"
    storage_class: "nfs-dynamic-class"
    provisioner_name: "nfs-provisioner-01"

  # aliyun_nas parameters
  aliyun_nas:
    enabled: "no"
    server: "xxxxxxxxxxx.cn-hangzhou.nas.aliyuncs.com"
    server_path: "/"
    storage_class: "class-aliyun-nas-01"
    controller_name: "aliyun-nas-controller-01"
@@ -1,99 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-common
provisioner: alicloud/disk
parameters:
  type: cloud
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-efficiency
provisioner: alicloud/disk
parameters:
  type: cloud_efficiency
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-ssd
provisioner: alicloud/disk
parameters:
  type: cloud_ssd
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-disk-available
provisioner: alicloud/disk
parameters:
  type: available
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: alicloud-disk-controller-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alicloud-disk-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: run-alicloud-disk-controller
subjects:
  - kind: ServiceAccount
    name: alicloud-disk-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: alicloud-disk-controller-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: alicloud-disk-controller
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: alicloud-disk-controller
    spec:
      serviceAccount: alicloud-disk-controller
      containers:
        - name: alicloud-disk-controller
          image: registry.cn-hangzhou.aliyuncs.com/acs/alicloud-disk-controller:v1.9.3-ed710ce
          volumeMounts:
            - name: cloud-config
              mountPath: /etc/kubernetes/
            - name: logdir
              mountPath: /var/log/alicloud/
      volumes:
        - name: cloud-config
          hostPath:
            path: /etc/kubernetes/
        - name: logdir
          hostPath:
            path: /var/log/alicloud/
@@ -1,18 +0,0 @@
- name: Prepare the alicloud-nas configuration directory
  file: name=/opt/kube/kube-system/storage/alicloud-nas state=directory

- name: Generate the alicloud-nas dynamic storage configuration
  template:
    src: alicloud-nas/alicloud-nas.yaml.j2
    dest: "/opt/kube/kube-system/storage/alicloud-nas/{{ storage.aliyun_nas.controller_name }}.yaml"

#- name: Copy the alicloud-disk configuration
#  copy:
#    src: alicloud-disk.yaml
#    dest: "{{ base_dir }}/manifests/storage/alicloud-nas/alicloud-disk.yaml"

#- name: Deploy alicloud-disk storage
#  shell: "{{ bin_dir }}/kubectl apply -f {{ base_dir }}/manifests/storage/alicloud-nas/alicloud-disk.yaml"

- name: Deploy alicloud-nas dynamic storage
  shell: "{{ base_dir }}/bin/kubectl apply -f /opt/kube/kube-system/storage/alicloud-nas/{{ storage.aliyun_nas.controller_name }}.yaml"
@@ -1,6 +0,0 @@
- import_tasks: nfs-client.yml
  when: 'storage.nfs.enabled == "yes"'

- import_tasks: alicloud-nas.yml
  when: 'storage.aliyun_nas.enabled == "yes"'
@@ -1,10 +0,0 @@
- name: Prepare the nfs-client configuration directory
  file: name={{ base_dir }}/manifests/storage/nfs state=directory

- name: Generate the nfs-client dynamic storage configuration
  template:
    src: nfs/nfs-client-provisioner.yaml.j2
    dest: "{{ base_dir }}/manifests/storage/nfs/{{ storage.nfs.provisioner_name }}.yaml"

- name: Deploy nfs-client dynamic storage
  shell: "{{ base_dir }}/bin/kubectl apply -f {{ base_dir }}/manifests/storage/nfs/{{ storage.nfs.provisioner_name }}.yaml"
@@ -1,76 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ storage.aliyun_nas.storage_class }}
provisioner: alicloud/nas
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alicloud-nas-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: alicloud-disk-controller-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: run-alicloud-nas-controller
subjects:
  - kind: ServiceAccount
    name: alicloud-nas-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: alicloud-disk-controller-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: {{ storage.aliyun_nas.controller_name }}
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ storage.aliyun_nas.controller_name }}
    spec:
      serviceAccount: alicloud-nas-controller
      containers:
        - name: alicloud-nas-controller
          image: registry.cn-hangzhou.aliyuncs.com/acs/alicloud-nas-controller:v1.8.4
          volumeMounts:
            - mountPath: /persistentvolumes
              name: nfs-client-root
          env:
            - name: PROVISIONER_NAME
              value: alicloud/nas
            - name: NFS_SERVER
              value: {{ storage.aliyun_nas.server }}
            - name: NFS_PATH
              value: {{ storage.aliyun_nas.server_path }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ storage.aliyun_nas.server }}
            path: {{ storage.aliyun_nas.server_path }}
@@ -1,87 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: {{ storage.nfs.provisioner_name }}
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: {{ storage.nfs.provisioner_name }}
  template:
    metadata:
      labels:
        app: {{ storage.nfs.provisioner_name }}
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # this provisioner name is referenced by the storageclass
              value: {{ storage.nfs.provisioner_name }}
            - name: NFS_SERVER
              value: {{ storage.nfs.server }}
            - name: NFS_PATH
              value: {{ storage.nfs.server_path }}
      volumes:
        - name: nfs-client-root
          nfs:
            server: {{ storage.nfs.server }}
            path: {{ storage.nfs.server_path }}

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ storage.nfs.storage_class }}
provisioner: {{ storage.nfs.provisioner_name }}