# K8S Cluster Backup and Restore
Even though a K8S cluster can be deployed in a highly available multi-master, multi-node configuration, it is still worth understanding its backup and disaster-recovery capabilities. In an HA k8s cluster the etcd cluster holds the entire cluster state, so backup and restore here focuses on:
- backing up data from the running etcd cluster to files on disk
- restoring data from an etcd backup file, bringing the cluster back to the state it had at backup time
## Backup and Restore Procedure
- 1. First set up a test cluster and deploy a few test deployments; once you have verified the cluster is healthy, take a backup:
``` bash
$ ansible-playbook /etc/ansible/23.backup.yml
```
When it finishes, check the backup directory for the result, for example:
```
/etc/ansible/.cluster/backup/
├── hosts
├── hosts-201907030954
├── snapshot-201907030954.db
├── snapshot-201907031048.db
└── snapshot.db
```
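For reference, the backup playbook essentially takes an etcd v3 snapshot on an etcd node. A rough manual equivalent is sketched below; the endpoint and certificate paths are assumptions, adjust them to your deployment:
``` bash
# take an etcd v3 snapshot by hand (endpoint and cert paths are assumptions, not the playbook's exact values)
$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/kubernetes/ssl/etcd.pem \
    --key=/etc/kubernetes/ssl/etcd-key.pem \
    snapshot save /etc/ansible/.cluster/backup/snapshot-$(date +%Y%m%d%H%M).db
```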
- 2. Simulate an accidental deletion (details omitted); a minimal example follows.
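For instance, you could delete one of the test deployments created in step 1 (the deployment name below is hypothetical):
``` bash
# simulate an accidental deletion of a test workload (name is hypothetical)
$ kubectl delete deployment nginx-test
```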
- 3. Restore the cluster and verify.
The etcd backup to restore from can be configured in `roles/cluster-restore/defaults/main.yml` (pick one from the backup directory above; the most recent backup is used by default; see the override example below). After the restore runs, allow some time for pods, services and other resources to be recreated.
``` bash
$ ansible-playbook /etc/ansible/24.restore.yml
```
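If you do not want the default (latest) snapshot, the choice can also be passed on the command line via ansible's extra-vars; the variable name `db_to_restore` shown here is an assumption and should be checked against the role's defaults file in your kubeasz version:
``` bash
# restore from a specific snapshot instead of the most recent one
# (variable name is an assumption -- check roles/cluster-restore/defaults/main.yml)
$ ansible-playbook /etc/ansible/24.restore.yml -e db_to_restore=snapshot-201907030954.db
```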
If core cluster components (master/etcd/node, etc.) run into problems that cannot be repaired in place, you can try the sequence [clean]() --> [setup]() --> [restore]():
``` bash
$ ansible-playbook /etc/ansible/99.clean.yml
$ ansible-playbook /etc/ansible/90.setup.yml
$ ansible-playbook /etc/ansible/24.restore.yml
```
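After either restore path, a quick sanity check that the cluster state has come back (allow some time for pods to be recreated):
``` bash
# verify nodes and workloads after the restore
$ kubectl get node
$ kubectl get pod --all-namespaces
```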
## Reference
- https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md