fix to support recreating CA and certs

dev3
gjmzj 2022-11-28 20:49:31 +08:00
parent 6adb335993
commit 28496a6793
5 changed files with 43 additions and 1 deletions

View File

@@ -0,0 +1,32 @@
# Force-update the CA and all certificates
- WARNING: use this command with great caution; make sure you understand what it does and what the consequences may be. Once executed, it recreates the cluster CA certificate and every certificate issued by it. It is typically used when the cluster's admin.conf has been leaked: recreating the CA invalidates the leaked admin.conf and prevents unauthorized access to the cluster.
- If you need to hand out restricted kubeconfig files, it is strongly recommended to use a [kubeconfig with custom permissions and expiry](kcfg-adm.md) instead.
## Usage
Once you have confirmed that a forced update is really needed, run the following on the ansible control node (xxx is the name of the target cluster):
``` bash
docker exec -it kubeasz ezctl kca-renew xxx
# or use: dk ezctl kca-renew xxx
```
The command above performs the following operations in order (see `playbooks/96.update-certs.yml` for details):
- regenerate the CA certificate and all kubeconfig files
- issue a new etcd certificate and restart the etcd service with it
- issue a new kube-apiserver certificate and restart the kube-apiserver/kube-controller-manager/kube-scheduler services
- issue a new kubelet certificate and restart the kubelet/kube-proxy services
- restart the network component pods
- restart the other cluster component pods
- **Note:** any workload pods that talk to the apiserver must also be restarted
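Under the hood, `ezctl kca-renew` is a thin wrapper that assembles an ansible-playbook run scoped to the `force_change_certs` tag (see `renew-ca()` in `ezctl`). A minimal sketch of the equivalent invocation, with `xxx` as a placeholder cluster name:

```shell
#!/bin/sh
# Build the same command string that ezctl's renew-ca() assembles.
# CHANGE_CA=true switches playbooks/96.update-certs.yml into full
# CA-recreation mode; -t limits the run to tasks tagged force_change_certs.
CLUSTER="xxx"   # placeholder cluster name
COMMAND="ansible-playbook -i clusters/$CLUSTER/hosts \
-e @clusters/$CLUSTER/config.yml -e CHANGE_CA=true \
playbooks/96.update-certs.yml -t force_change_certs"
echo "$COMMAND"
```

Note that `config.yml` is loaded before `CHANGE_CA=true` on the command line, so the override always wins.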
## Verification
After the update, check the cluster component logs and the pod logs to confirm the cluster is healthy:
- component logs: use `journalctl -u xxxx.service -f` to check etcd.service/kube-apiserver.service/kube-controller-manager.service/kube-scheduler.service/kubelet.service/kube-proxy.service in turn
- pod logs: use `kubectl logs` to check container logs
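The service-by-service check can be wrapped in a small loop; a minimal sketch that only prints the `journalctl` command for each component (run each printed command on the node that hosts that service):

```shell
#!/bin/sh
# Emit one journalctl follow command per cluster component service,
# in the same order the renewal playbook restarts them.
CMDS=$(for svc in etcd kube-apiserver kube-controller-manager \
                  kube-scheduler kubelet kube-proxy; do
  echo "journalctl -u ${svc}.service -f"
done)
echo "$CMDS"
```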

View File

@@ -7,4 +7,5 @@
- [Cluster backup and restore](cluster_restore.md)
- [Manage and distribute user kubeconfig](kcfg-adm.md)
- [Change the APISERVER certificate](ch_apiserver_cert.md)
- [Force-update the CA and all certificates](force_ch_certs.md)
- [Configure load-balanced ingress nodeport](loadballance_ingress_nodeport.md)

ezctl
View File

@@ -465,7 +465,7 @@ function renew-ca() {
    logger warn "WARNNING: this script should be used with greate caution"
    logger warn "WARNNING: it will recreate CA certs and all of the others certs used in the cluster"
-   COMMAND="ansible-playbook -i clusters/$1/hosts -e CHANGE_CA=true -e @clusters/$1/config.yml playbooks/96.update-certs.yml -t force_change_certs"
+   COMMAND="ansible-playbook -i clusters/$1/hosts -e @clusters/$1/config.yml -e CHANGE_CA=true playbooks/96.update-certs.yml -t force_change_certs"
    echo "$COMMAND"
    logger info "cluster:$1 process begins in 5s, press any key to abort:\n"
    ! (read -r -t5 -n1) || { logger warn "process abort"; return 1; }

View File

@@ -2,6 +2,14 @@
# Force to recreate CA certs and all of the others certs used in the cluster.
# It should be used when the admin.conf leaked, and a new one will be created in place of the leaked one.
# backup old certs
- hosts: localhost
  tasks:
  - name: backup old certs
    shell: "cd {{ cluster_dir }} && \
           cp -r ssl ssl-$(date +'%Y%m%d%H%M')"
    tags: force_change_certs
# to create CA, kubeconfig, kube-proxy.kubeconfig etc.
# need to set 'CHANGE_CA=true'
- hosts: localhost
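The backup task above amounts to a timestamped copy of the cluster's ssl directory. A standalone sketch of the same operation, using a scratch directory in place of the real `{{ cluster_dir }}`:

```shell
#!/bin/sh
# Mirror the playbook's backup step: copy ssl/ to ssl-<timestamp> so the
# old certs survive the forced renewal. CLUSTER_DIR stands in for the
# playbook's {{ cluster_dir }} variable.
CLUSTER_DIR=$(mktemp -d)
mkdir -p "$CLUSTER_DIR/ssl"
echo "dummy" > "$CLUSTER_DIR/ssl/ca.pem"   # placeholder cert file
STAMP=$(date +'%Y%m%d%H%M')
( cd "$CLUSTER_DIR" && cp -r ssl "ssl-$STAMP" )
ls "$CLUSTER_DIR"
```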

View File

@@ -1,6 +1,7 @@
- name: Get info on all created pods
  command: "{{ base_dir }}/bin/kubectl get pod --all-namespaces"
  register: pod_info
  tags: force_change_certs

- name: Register variable DNS_SVC_IP
  shell: echo {{ SERVICE_CIDR }}|cut -d/ -f1|awk -F. '{print $1"."$2"."$3"."$4+2}'