mirror of https://github.com/easzlab/kubeasz.git
minor docs update
parent c662ba1df6, commit 38c0c934bb
The project aims to provide a tool for fast deployment of highly available `k8s` clusters, while also serving as a reference for `k8s` practice and usage. It deploys from binaries and automates the process with `ansible-playbook`; it offers both a one-click install script and step-by-step installation of each component following the installation guide.

- **Cluster features** mutual `TLS` authentication, `RBAC` authorization, [multi-master high availability](docs/setup/00-planning_and_overall_intro.md#ha-architecture), `Network Policy` support, backup and restore
- **Cluster versions** kubernetes v1.11, v1.12, v1.13, v1.14
- **Operating systems** Ubuntu 16.04+, CentOS/RedHat 7
- **Runtimes** docker 17.03.x-ce, 18.06.x-ce, 18.09.x, [containerd](docs/guide/containerd.md) 1.2.6
- **Networking** [calico](docs/setup/network-plugin/calico.md), [cilium](docs/setup/network-plugin/cilium.md), [flannel](docs/setup/network-plugin/flannel.md), [kube-ovn](docs/setup/network-plugin/kube-ovn.md), [kube-router](docs/setup/network-plugin/kube-router.md)
|
# Binaries for k8s clusters

For kubeasz 2x and above, binaries are downloaded and managed by 'tools/easzup'.

Alternatively, binaries can be downloaded from the official github repos by referring to the script 'down/download.sh'.
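As a sketch, typical easzup invocations look like the following, based on the flag descriptions in its `usage()` text (version values are illustrative defaults taken from that text):

``` bash
# download everything (kubeasz code, k8s binaries, default images) into /etc/ansible
./easzup -D
# optionally pin versions via the documented flags
./easzup -D -d 18.09.6 -k v1.14.3
# later, start kubeasz in a container
./easzup -S
```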
Harbor is a container image registry open-sourced by the VMware China team.

### Installation steps

1. On the ansible control node, download the latest [docker-compose](https://github.com/docker/compose/releases) binary, rename it, and place it in the project's `/etc/ansible/bin` directory (already included)

``` bash
wget https://github.com/docker/compose/releases/download/1.18.0/docker-compose-Linux-x86_64
mv docker-compose-Linux-x86_64 /etc/ansible/bin/docker-compose
```

2. On the ansible control node, download the latest [harbor](https://github.com/vmware/harbor/releases) offline installer package and place it in the project's `/etc/ansible/down` directory

3. On the ansible control node, edit the /etc/ansible/hosts file; you can refer to the templates under the `example` directory. Example changes:
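For instance, the harbor-related part of the inventory might look like this (IP and domain are placeholders; `NEW_INSTALL=yes` installs a new harbor, `no` reuses an existing harbor server):

```
[harbor]
10.1.0.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=yes
```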
+ docker: runs the containers
+ kubelet: the primary component on a kube-node
+ kube-proxy: publishes application services and load balancing
+ haproxy: forwards requests to the multiple apiservers, see the [HA-2x architecture](00-planning_and_overall_intro.md#ha-architecture)
+ calico: configures the container network (or another network component)

``` bash
roles/kube-node/
```

### Variable configuration file

See roles/kube-node/defaults/main.yml; the following 3 variables are explained as examples
- `PROXY_MODE`: sets the kube-proxy service proxy mode, iptables or ipvs
- `KUBE_APISERVER`: takes one of three values depending on the node's situation
- `MASTER_CHG`: when master nodes change, haproxy is reconfigured based on it
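A sketch of what the relevant entries in roles/kube-node/defaults/main.yml might look like (values are illustrative; check the file itself for the authoritative defaults):

``` yaml
# kube-proxy service proxy mode: iptables or ipvs (illustrative value)
PROXY_MODE: "ipvs"
# flipped when master nodes change, so haproxy gets reconfigured (illustrative)
MASTER_CHG: "no"
```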
|
||||
|
|
|
This project configures kubernetes networking through the CNI driver, calling on various network plugins. Common CNI plugins include `flannel`, `calico`, `weave`, and so on. Each has its strengths and they keep borrowing ideas from one another: when all nodes sit on a single layer-2 network, flannel offers a host-gw backend that avoids the UDP encapsulation overhead of vxlan and is probably the most efficient option today; calico in turn offers an IPinIP option for L3 fabrics, using IP-in-IP tunnel encapsulation. All of these plugins therefore fit many real-world scenarios.
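For example, the flannel backend choice described above is typically a one-line setting in the role defaults (the variable name is assumed here; verify against roles/flannel/defaults/main.yml):

``` yaml
# "vxlan" works across L3 networks; "host-gw" avoids encapsulation when all
# nodes share one layer-2 network (illustrative)
FLANNEL_BACKEND: "vxlan"
```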

The network plugins currently built into the project: `calico` `cilium` `flannel` `kube-ovn` `kube-router`

### Installation guides

- [Install calico](network-plugin/calico.md)
- [Install cilium](network-plugin/cilium.md)
- [Install flannel](network-plugin/flannel.md)
- [Install kube-ovn](network-plugin/kube-ovn.md)
- [Install kube-router](network-plugin/kube-router.md)

### References
# Deploying a multi-master high-availability cluster on Alibaba Cloud

First read the general notes on public-cloud deployments: https://github.com/easzlab/kubeasz/blob/master/docs/setup/kubeasz_on_public_cloud.md

- A multi-master HA cluster plan needs no lb nodes

Node planning can follow [example/hosts.cloud.example](../../example/hosts.cloud.example), as below (just avoid reusing a master node as the deploy node):

``` bash
# cluster deploy node: usually the node that runs the ansible scripts
# NTP_ENABLED (=yes/no) sets whether chrony time sync is installed; not needed on public-cloud VMs
[deploy]
10.1.0.160 NTP_ENABLED=no

# provide NODE_NAME for each etcd member; the etcd cluster must have an odd number of nodes (1,3,5,7...)
[etcd]
10.1.0.160 NODE_NAME=etcd1
10.1.0.161 NODE_NAME=etcd2
10.1.0.162 NODE_NAME=etcd3

[kube-master]
10.1.0.161
10.1.0.162

# public clouds generally provide load-balancer products and do not allow building your own; leave the lb group empty, keeping only the group name
[lb]

[kube-node]
10.1.0.160
10.1.0.163

# NEW_INSTALL: yes = install a new harbor, no = use an existing harbor server
[harbor]
#10.1.0.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no

...
```
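Since an even-numbered etcd group is a common mistake, a quick sanity check of the inventory can be sketched in plain shell (the snippet embeds the example group rather than reading /etc/ansible/hosts, so it runs anywhere):

```shell
# count the members of the [etcd] group and verify the count is odd
hosts_snippet='[etcd]
10.1.0.160 NODE_NAME=etcd1
10.1.0.161 NODE_NAME=etcd2
10.1.0.162 NODE_NAME=etcd3

[kube-master]
10.1.0.161'
ETCD_COUNT=$(printf '%s\n' "$hosts_snippet" \
  | awk '/^\[etcd\]/{f=1;next} /^\[/{f=0} f&&NF' | wc -l)
echo "etcd members: $ETCD_COUNT"
if [ $((ETCD_COUNT % 2)) -eq 0 ]; then
  echo "warning: even number of etcd nodes"
fi
```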

+ Create a cloud load balancer; on Alibaba Cloud, for example, an SLB:

``` bash
1. Create the SLB: pick the availability zone; the instance type can start as 'private network'; network type VPC; choose the same vSwitch as your k8s cluster nodes
2. Configure protocol & listener: TCP, port 8443; backend servers are the master nodes, port 6443
3. When done, note down the load balancer's internal address (e.g. 10.1.0.200)
```

+ Continue editing the ansible hosts file, setting `MASTER_IP` to the SLB address just created

``` bash
[all:vars]
# --------- main cluster parameters ---------------
# cluster deploy mode: allinone, single-master, multi-master
DEPLOY_MODE=multi-master

# create an internal cloud load balancer, then configure: frontend listening on tcp 8443, backend tcp 6443, backend nodes being the master nodes
MASTER_IP="10.1.0.200" # the load balancer's internal address
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"

# cluster network plugin: currently supports calico, flannel
CLUSTER_NETWORK="flannel"

...
```
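The `{{ MASTER_IP }}` template above simply expands into the apiserver URL; the resulting value can be sketched in plain shell:

```shell
# MASTER_IP is the SLB internal address; 8443 is the SLB frontend port
MASTER_IP="10.1.0.200"
KUBE_APISERVER="https://${MASTER_IP}:8443"
echo "$KUBE_APISERVER"   # https://10.1.0.200:8443
```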

+ The remaining cluster-creation steps are exactly the same as in a self-hosted environment

+ Create the cluster: `ansible-playbook /etc/ansible/90.setup.yml`

### Other resources

See also [li-sen](https://github.com/li-sen)'s [kubeasz Alibaba Cloud VPC deployment notes](https://li-sen.github.io/post/blog-wiki/2018-09-27-k8s-kubeasz-%E9%98%BF%E9%87%8C%E4%BA%91vpc%E9%83%A8%E7%BD%B2%E8%AE%B0%E5%BD%95/): it covers the problems hit while building a self-managed HA k8s cluster on Alibaba Cloud and their solutions, chiefly using a single haproxy relay to work around SLB limitations.
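The haproxy relay mentioned above (the SLB limitation referenced there is commonly that backend servers cannot reach the cluster through their own SLB address) can be sketched as a minimal config fragment; addresses and ports are illustrative:

```
# /etc/haproxy/haproxy.cfg (fragment): forward local 8443 to the apiservers
listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    balance roundrobin
    server master1 10.1.0.161:6443 check
    server master2 10.1.0.162:6443 check
```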
### <center>Kubernetes HA cluster deployment on AWS EC2 Amazon Linux</center>

Scripts in this document are executed as the root user by default.
Amazon Linux does not allow logging in as root directly; log in as `ec2-user` first, then switch with `sudo su - root`.

#### Node configuration required for the HA cluster
----

|Role|Count|IP|Description|
|:-|:-|:-|:-|
|deploy node|1|172.31.16.81|the node running these ansible scripts|
|etcd nodes|3|172.31.9.100, 172.31.9.192, 172.31.11.185|the etcd cluster must have an odd number of nodes (1,3,5,7...)|
|master nodes|2|172.31.9.100, 172.31.9.192|shared with etcd nodes; the master VIP (virtual address) is replaced by an internal ELB domain name; raise machine specs or node count as needed|
|node nodes|2|172.31.11.185, 172.31.14.4|nodes running application workloads; raise machine specs or node count as needed|

#### Environment preparation
----

##### Create EC2 instances
- Prepare 5 VMs for a multi-master HA cluster; `Node` nodes need at least 4GB of memory
- In production, each node should take only one role
- 1 `deploy` node, subnet 172.31.16.0/20
- 2 `master` nodes, subnet 172.31.0.0/20, SSD-type disks recommended
- 2 `node` nodes, subnet 172.31.0.0/20
- 2 elastic IPs and 1 NAT gateway; bind one elastic IP to the `deploy` node and the other to the `NAT gateway`
- `master` and `node` nodes get no public IP and reach the internet only through `NAT`

**Note:**
- the route tables of the master and node subnets need a route to the NAT gateway, otherwise those nodes cannot reach the internet
- the NAT gateway must be in the same subnet as the deploy node, since everything ultimately reaches the internet through the same Internet gateway in the main route table
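As a sketch, the NAT route can be added with the AWS CLI (the route-table and NAT-gateway IDs are placeholders):

``` bash
# point 0.0.0.0/0 of the master/node subnets' route table at the NAT gateway
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0
```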

##### Create the master ELB
- In the load-balancer console create a **classic internal ELB** in the same region as the EC2 instances, named `k8s-master-lb`; assume its internal domain name is `internal-k8s-master-lb-42488333.xxxx.elb.amazonaws.com`
- Create a TCP listener: frontend port `8443`, forwarding to backend port `6443`
- Bind the `master` nodes to the listener, selecting the same subnets as the master and node nodes

##### Create the ingress ELB (billed; can be done after the cluster is up)
- In the load-balancer console create a **classic internet-facing ELB** in the same region as the EC2 instances, named `k8s-ingress-lb`
- Create a TCP listener: frontend port `80`, forwarding to backend port `23456`
- Bind the `master and node` nodes to the listener (you may also balance only across masters or only across nodes)
- Health check: Ping protocol `TCP`, Ping port `23456`
- To reach the `traefik dashboard`, create one more TCP listener whose frontend and backend both use the `nodePort exposed by traefik admin`

##### Open the security groups
- The security groups between cluster nodes must allow all protocols

#### Deployment steps
----

##### 0. Basic system setup

+ Install the OS from the community AMI amzn2-ami
+ Configure passwordless SSH login for root, etc.
+ Download the kubernetes bin files for the target version

##### 1. Install and prepare ansible on the deploy node

Install ansible with pip
``` bash
yum install python-pip -y

# install ansible with pip
pip install pip --upgrade
pip install ansible

# install ansible with pip (in mainland China, use the Aliyun pip mirror if the default is too slow)
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```

##### 2. Prepare resources

- Clone the source
```bash
git clone --depth=1 https://github.com/easzlab/kubeasz.git /etc/ansible
```

- Download the k8s binaries
From the shared [Baidu drive link](https://pan.baidu.com/s/1c4RFaA), download and extract into the `/etc/ansible/bin` directory
```bash
# example: installing k8s v1.14.1
tar -xvf k8s.1-14-1.tar.gz -C /etc/ansible/
```

##### 3. Configure cluster parameters
```bash
cd /etc/ansible && cp example/hosts.cloud.example hosts
```

Edit this hosts file
```bash
vi /etc/ansible/hosts
```

Update the following
```
[deploy]
172.31.16.81 NTP_ENABLED=yes

# provide NODE_NAME for each etcd member; the etcd cluster must have an odd number of nodes (1,3,5,7...)
[etcd]
172.31.9.100 NODE_NAME=etcd1
172.31.9.192 NODE_NAME=etcd2
172.31.11.185 NODE_NAME=etcd3

[kube-master]
172.31.9.100
172.31.9.192

[kube-node]
172.31.11.185
172.31.14.4

MASTER_IP="internal-k8s-master-lb-42488333.xxxx.elb.amazonaws.com" # the master vip, i.e. the load balancer's internal address
```

##### 4. Orchestrate the k8s install

If you are not familiar with the cluster install flow, read the **installation steps** on the project home page first and install step by step, **verifying each step**

Verify that ansible works; you should see every node return SUCCESS
```bash
ansible all -m ping
```

Run the install
```bash
cd /etc/ansible
# all-in-one install
ansible-playbook 90.setup.yml
# step-by-step install
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml

# add the k8s cluster CA certificate to the local trust store
cp /etc/kubernetes/ssl/ca.pem /etc/pki/ca-trust/source/anchors/ && update-ca-trust
```

#### Check cluster status
```bash
kubectl cluster-info
kubectl get cs
kubectl get node
kubectl get pod,svc --all-namespaces -o wide
kubectl top node
```

#### Ingress access
Access the domain name automatically assigned to the `external ELB`; in production, point your own domain at it with a `CNAME`.
Test example:
``` yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: nginx-port
    port: 80
  selector:
    app: nginx

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - host: aws.xxxx.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: nginx-port
```

### <center>Kubernetes cluster deployment on Tencent Cloud CVM CentOS</center>

Scripts in this document are executed as the root user by default.

#### Node configuration required for the HA cluster
----

|Role|Count|Description|
|:-|:-|:-|
|deploy node|1|the node running these ansible scripts|
|etcd nodes|3|the etcd cluster must have an odd number of nodes (1,3,5,7...)|
|master nodes|3|shared with etcd nodes; the master VIP (virtual address) is created in the cloud console; raise machine specs or node count as needed|
|node nodes|2|nodes running application workloads; raise machine specs or node count as needed|

#### Environment preparation
----

##### Create CVM instances
- Prepare 6 VMs for a multi-master HA cluster; `Node` nodes need at least 4GB of memory
- In production, each node should take only one role
- 1 `deploy` node, network 10.0.0.3/21
- 3 `master` nodes, subnet 10.0.8.0/21, SSD-type disks recommended
- 2 or more `node` nodes, subnet 10.0.8.0/21

##### Create the master vip
- In `classic load balancing`, create an internal CLB in the same region as the CVMs, named `k8s-master-lb`; assume it is `10.0.8.12`
- Create a TCP listener: frontend port `8443`, forwarding to backend port `6443`
- Bind the `master` nodes to the listener

##### Create the ingress vip (billed; can be done after the cluster is up)
- In `load balancing`, create an application-type external CLB in the same region as the CVMs, named `k8s-ingress-lb`
- Create a TCP listener: frontend port `23457`, forwarding to backend port `23457`
- Create a TCP listener: frontend port `23456`, forwarding to backend port `23456`
- Bind the `node` nodes to the listeners
- **Allow this load balancer's public IP in the inbound rules of the security group associated with the cluster**

#### Deployment steps
----

##### 0. Basic system setup

+ Install the OS from the custom system image `k8s-node`
+ Configure base networking, package sources, SSH login, etc.
+ Create the CLB in the Tencent Cloud console

##### 1. Initialize a CVM instance from the `CentOS 7.x 64bit` image as the deploy node

- Update this node's hostname
```bash
hostnamectl set-hostname deploy
# log in again
```

- Update the host list
Edit the file
```bash
vi /etc/hosts
```

Delete the loopback hostname mapping lines auto-created by the cloud VM, which look like
```
127.0.0.1 VM_0_15_centos VM_0_15_centos
::1 VM_0_15_centos VM_0_15_centos
```
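The cleanup above can be scripted; a minimal sed sketch (demonstrated here on a temporary copy rather than the real /etc/hosts):

```shell
# strip the cloud-generated loopback hostname lines from a hosts file
printf '%s\n' \
  '127.0.0.1 VM_0_15_centos VM_0_15_centos' \
  '::1 VM_0_15_centos VM_0_15_centos' \
  '127.0.0.1 localhost' > /tmp/hosts.demo
sed -i '/VM_0_15_centos/d' /tmp/hosts.demo
cat /tmp/hosts.demo   # only "127.0.0.1 localhost" remains
```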

Add host entries according to your servers
```
10.0.0.3 deploy
10.0.8.2 master01
10.0.8.3 master02
10.0.8.4 master03
10.0.8.10 node01
10.0.8.11 node02
```

##### 2. Install and prepare ansible on the deploy node

Install ansible with pip
``` bash
yum install python-pip -y

# install ansible with pip (Tencent Cloud servers ship with a built-in mirror accelerator)
pip install pip --upgrade
pip install ansible

# install ansible with pip (in mainland China, use the Aliyun pip mirror if the default is too slow)
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```

##### 3. Prepare resources

- Clone the source
```bash
git clone --depth=1 https://github.com/easzlab/kubeasz.git /etc/ansible
```

- Download the k8s binaries
From the shared [Baidu drive link](https://pan.baidu.com/s/1c4RFaA), download and extract into the `/etc/ansible/bin` directory
```bash
# example: installing k8s v1.14.1
tar -xvf k8s.1-14-1.tar.gz -C /etc/ansible/
```

- Build offline docker image bundles
```bash
wget https://raw.githubusercontent.com/waitingsong/blog/master/201904/assets/make_basic_images_bundle.sh
wget https://raw.githubusercontent.com/waitingsong/blog/master/201904/assets/make_extra_images_bundle.sh
wget https://raw.githubusercontent.com/waitingsong/blog/master/201904/assets/make_istio_images_bundle.sh
chmod a+x make_basic_images_bundle.sh
chmod a+x make_extra_images_bundle.sh
chmod a+x make_istio_images_bundle.sh

# run the scripts as needed to download and bundle the images; xz compression takes quite a while
# they produce the following files respectively:
# /tmp/basic_images_kubeasz_1.1.tar.xz
# /tmp/extra_images_kubeasz_1.1.tar.xz
# /tmp/istio_images_bundle_1.1.7.tar.xz
./make_basic_images_bundle.sh dump
./make_extra_images_bundle.sh dump
./make_istio_images_bundle.sh dump
```

- Load the offline docker images
Copy the files generated in the previous step together with the script files to the same directory on the deploy node, then run the commands below.
For istio installation see [istio_install.md](./istio_install.md)
```bash
./make_basic_images_bundle.sh extract
./make_extra_images_bundle.sh extract
```

##### 4. Configure cluster parameters
```bash
cd /etc/ansible && cp example/hosts.cloud.example hosts
```

Edit this hosts file
```bash
vi /etc/ansible/hosts
```

Update the following
```
# deploy node address
[deploy]
10.0.0.3 NTP_ENABLED=yes

[etcd]
10.0.8.2 NODE_NAME=etcd1
10.0.8.3 NODE_NAME=etcd2
10.0.8.4 NODE_NAME=etcd3

[kube-master]
10.0.8.2
10.0.8.3
10.0.8.4

[kube-node]
10.0.8.10
10.0.8.11

MASTER_IP="10.0.8.12" # the master vip, i.e. the load balancer's internal address
```

##### 5. Orchestrate the k8s install

If you are not familiar with the cluster install flow, read the **installation steps** on the project home page first and install step by step, **verifying each step**

Verify that ansible works; you should see every node return SUCCESS
```bash
ansible all -m ping
```

Run the install
```bash
cd /etc/ansible
# all-in-one install
ansible-playbook 90.setup.yml
# step-by-step install
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml

# add the k8s cluster CA certificate to the local trust store
cp /etc/kubernetes/ssl/ca.pem /etc/pki/ca-trust/source/anchors/ && update-ca-trust
```

#### Check cluster status
```bash
kubectl cluster-info
kubectl get cs
kubectl get node
kubectl get pod,svc --all-namespaces -o wide
kubectl top node
```

#### Resources
- [kubeasz](https://github.com/easzlab/kubeasz)
- [image bundling scripts](https://github.com/waitingsong/blog/tree/master/201904/assets)

origin by [waitingsong](https://github.com/waitingsong/blog/blob/master/201904/k8s_cvm_intro.md)
tools/easzup:

``` bash
function usage() {
  cat <<EOF
Usage: easzup [options] [args]
option: -{CDdekSz}
-C         stop&clean all local containers
-D         download all into /etc/ansible
-S         start kubeasz in a container
-d <ver>   set docker-ce version, default "$DOCKER_VER"
-e <ver>   set kubeasz-ext-bin version, default "$EXT_BIN_VER"
-k <ver>   set kubeasz-k8s-bin version, default "$K8S_BIN_VER"
-z <ver>   set kubeasz version, default "$KUBEASZ_VER"

see more at https://github.com/kubeasz/dockerfiles
EOF
}
```