Optimize command display: remove the $ prompt so commands can be copied and pasted directly

pull/43/head
Jimmy Song 2017-08-31 22:46:21 +08:00
parent ac98c225f6
commit f40b66dfd4
8 changed files with 116 additions and 99 deletions

---

@@ -42,28 +42,28 @@
**Option 1: install directly from the binary packages**
``` bash
-$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
-$ chmod +x cfssl_linux-amd64
-$ sudo mv cfssl_linux-amd64 /root/local/bin/cfssl
+wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+chmod +x cfssl_linux-amd64
+mv cfssl_linux-amd64 /root/local/bin/cfssl
-$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
-$ chmod +x cfssljson_linux-amd64
-$ sudo mv cfssljson_linux-amd64 /root/local/bin/cfssljson
+wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+chmod +x cfssljson_linux-amd64
+mv cfssljson_linux-amd64 /root/local/bin/cfssljson
-$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
-$ chmod +x cfssl-certinfo_linux-amd64
-$ sudo mv cfssl-certinfo_linux-amd64 /root/local/bin/cfssl-certinfo
+wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
+chmod +x cfssl-certinfo_linux-amd64
+mv cfssl-certinfo_linux-amd64 /root/local/bin/cfssl-certinfo
-$ export PATH=/root/local/bin:$PATH
+export PATH=/root/local/bin:$PATH
```
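After installing by either method, it helps to confirm the binaries are actually reachable. A minimal sketch, guarded so it also runs on machines where cfssl is not on the PATH:

```shell
# Confirm the cfssl binary is on PATH after installation.
# Guarded so the check degrades gracefully where cfssl is absent.
if command -v cfssl >/dev/null 2>&1; then
  cfssl version
else
  echo "cfssl not found in PATH"
fi
```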
**Option 2: install with the go command**
Go 1.7.5 is installed on our system, so installing with the following commands is quicker:
-```
-$go get -u github.com/cloudflare/cfssl/cmd/...
-$echo $GOPATH
+```bash
+$ go get -u github.com/cloudflare/cfssl/cmd/...
+$ echo $GOPATH
/usr/local
$ ls /usr/local/bin/cfssl*
cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan
@@ -78,13 +78,13 @@ cfssl cfssl-bundle cfssl-certinfo cfssljson cfssl-newkey cfssl-scan
**Create the CA configuration file**
``` bash
-$ mkdir /root/ssl
-$ cd /root/ssl
-$ cfssl print-defaults config > config.json
-$ cfssl print-defaults csr > csr.json
+mkdir /root/ssl
+cd /root/ssl
+cfssl print-defaults config > config.json
+cfssl print-defaults csr > csr.json
# Create the following ca-config.json file, based on the format of config.json
# The expiry is set to 87600h
-$ cat ca-config.json
+cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
@@ -103,6 +103,7 @@ $ cat ca-config.json
    }
  }
}
+EOF
```
Field descriptions
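A malformed ca-config.json only fails later, at signing time, so a quick syntax check is worthwhile. A minimal sketch, writing a throwaway file so the example is self-contained; point the same check at your real ca-config.json:

```shell
# Sketch: validate cfssl config JSON before use. A throwaway file is
# written here for illustration; check your real ca-config.json instead.
cat > /tmp/ca-config-demo.json <<'EOF'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    }
  }
}
EOF
python3 -m json.tool /tmp/ca-config-demo.json > /dev/null && echo "JSON OK"
```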
@@ -113,8 +114,9 @@ $ cat ca-config.json
**Create the CA certificate signing request**
-``` bash
-$ cat ca-csr.json
+Create the `ca-csr.json` file with the following content:
+``` json
{
  "CN": "kubernetes",
  "key": {
@@ -146,10 +148,9 @@ ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
## Create the kubernetes certificates
-Create the kubernetes certificate signing request
+Create the kubernetes certificate signing request file `kubernetes-csr.json`
-``` bash
-$ cat kubernetes-csr.json
+``` json
{
  "CN": "kubernetes",
  "hosts": [
@@ -194,15 +195,14 @@ kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem
Alternatively, specify the parameters directly on the command line:
``` bash
-$ echo '{"CN":"kubernetes","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname="127.0.0.1,172.20.0.112,172.20.0.113,172.20.0.114,172.20.0.115,kubernetes,kubernetes.default" - | cfssljson -bare kubernetes
+echo '{"CN":"kubernetes","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname="127.0.0.1,172.20.0.112,172.20.0.113,172.20.0.114,172.20.0.115,kubernetes,kubernetes.default" - | cfssljson -bare kubernetes
```
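Because later certificate validation depends on the `hosts` list, it is worth checking which names actually landed in the signed cert's SAN extension. A sketch using a throwaway self-signed certificate so it is self-contained; run the same `openssl x509` inspection against your real kubernetes.pem (the `-addext` flag assumes OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway cert with a SAN list, then inspect it the same
# way you would inspect kubernetes.pem (requires OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:127.0.0.1,DNS:kubernetes" 2>/dev/null
openssl x509 -in /tmp/demo.pem -noout -ext subjectAltName
```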
## Create the admin certificate
-Create the admin certificate signing request
+Create the admin certificate signing request file `admin-csr.json`
-``` bash
-$ cat admin-csr.json
+``` json
{
  "CN": "admin",
  "hosts": [],
@@ -236,10 +236,9 @@ admin.csr admin-csr.json admin-key.pem admin.pem
## Create the kube-proxy certificate
-Create the kube-proxy certificate signing request
+Create the kube-proxy certificate signing request file `kube-proxy-csr.json`
-``` bash
-$ cat kube-proxy-csr.json
+``` json
{
  "CN": "system:kube-proxy",
  "hosts": [],
@@ -368,8 +367,8 @@ $ cfssl-certinfo -cert kubernetes.pem
Copy the generated certificate and key files (those with the `.pem` suffix) to the `/etc/kubernetes/ssl` directory on every machine for later use;
``` bash
-$ sudo mkdir -p /etc/kubernetes/ssl
-$ sudo cp *.pem /etc/kubernetes/ssl
+mkdir -p /etc/kubernetes/ssl
+cp *.pem /etc/kubernetes/ssl
```
## References

---

@@ -11,7 +11,7 @@ The kubernetes system uses etcd to store all of its data; this document describes deploying a three-
TLS certificates are needed for encrypted communication within the etcd cluster; here we reuse the kubernetes certificates created earlier
``` bash
-$ cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
+cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
```
+ The `hosts` field of the kubernetes certificate must include the IPs of the three machines above, otherwise later certificate validation will fail
@@ -21,9 +21,9 @@ $ cp ca.pem kubernetes-key.pem kubernetes.pem /etc/kubernetes/ssl
Download the latest binary release from the `https://github.com/coreos/etcd/releases` page
``` bash
-$ https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
-$ tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
-$ sudo mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin
+wget https://github.com/coreos/etcd/releases/download/v3.1.5/etcd-v3.1.5-linux-amd64.tar.gz
+tar -xvf etcd-v3.1.5-linux-amd64.tar.gz
+mv etcd-v3.1.5-linux-amd64/etcd* /usr/local/bin
```
## Create the etcd systemd unit file
@@ -93,11 +93,11 @@ ETCD_ADVERTISE_CLIENT_URLS="https://172.20.0.113:2379"
## Start the etcd service
``` bash
-$ sudo mv etcd.service /etc/systemd/system/
-$ sudo systemctl daemon-reload
-$ sudo systemctl enable etcd
-$ sudo systemctl start etcd
-$ systemctl status etcd
+mv etcd.service /etc/systemd/system/
+systemctl daemon-reload
+systemctl enable etcd
+systemctl start etcd
+systemctl status etcd
```
Repeat the steps above on all kubernetes master nodes until the etcd service is running on every machine.
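Once every node's etcd is up, cluster health can be checked from any member. A hedged sketch, assuming the certificate paths used earlier in this guide; these are the etcdctl v2-API flag names (the default for etcd 3.1 — under `ETCDCTL_API=3` the flags are `--cacert`/`--cert`/`--key` instead), guarded so it degrades where etcdctl is absent:

```shell
# Check etcd cluster health over TLS (v2-API flag names; guarded so the
# sketch still runs on machines without etcdctl installed).
if command -v etcdctl >/dev/null 2>&1; then
  etcdctl \
    --ca-file=/etc/kubernetes/ssl/ca.pem \
    --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
    --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
    cluster-health || true
else
  echo "etcdctl not found in PATH"
fi
```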

---

@@ -3,9 +3,9 @@
Download the latest heapster release from the [heapster release page](https://github.com/kubernetes/heapster/releases).
``` bash
-$ wget https://github.com/kubernetes/heapster/archive/v1.3.0.zip
-$ unzip v1.3.0.zip
-$ mv v1.3.0.zip heapster-1.3.0
+wget https://github.com/kubernetes/heapster/archive/v1.3.0.zip
+unzip v1.3.0.zip
+mv v1.3.0.zip heapster-1.3.0
```
File directory: `heapster-1.3.0/deploy/kube-config/influxdb`

---

@@ -56,19 +56,20 @@
- [2 Create the kubeconfig files](create-kubeconfig.md)
- [3 Create the highly available etcd cluster](etcd-cluster-installation.md)
- [4 Install the kubectl command-line tool](kubectl-installation.md)
-- [5 Deploy the highly available master cluster](master-installation.md)
+- [5 Deploy the master nodes](master-installation.md)
- [6 Deploy the nodes](node-installation.md)
- [7 Install the kubedns addon](kubedns-addon-installation.md)
-- [8 Install the dashboard addon](dashboard-addon-installation.md.md)
+- [8 Install the dashboard addon](dashboard-addon-installation.md)
- [9 Install the heapster addon](heapster-addon-installation.md)
- [10 Install the EFK addon](efk-addon-installation.md)
## Reminders
1. Because strict security mechanisms such as mutual TLS authentication and RBAC authorization are enabled, it is recommended to **deploy from the very beginning** rather than starting in the middle; otherwise authentication, authorization, and similar steps may fail!
-2. This document will be **updated as each component is updated**; feel free to file an issue for any problem
+2. The deployment involves many certificate operations; please work through them patiently, and consult the explanations in other chapters of this book for any step that is unclear.
+3. This deployment only produces a working kubernetes cluster, and many aspects still need optimization; the heapster and EFK addons may not be used in real production environments, but deploying them shows how to deploy applications onto the cluster.
## About
-[Jimmy Song](http://rootsongjc.github.io/about)
+[Jimmy Song](http://jimmysong.io/about)

---

@@ -5,32 +5,32 @@
## Download kubectl
``` bash
-$ wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
-$ tar -xzvf kubernetes-client-linux-amd64.tar.gz
-$ cp kubernetes/client/bin/kube* /usr/bin/
-$ chmod a+x /usr/bin/kube*
+wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
+tar -xzvf kubernetes-client-linux-amd64.tar.gz
+cp kubernetes/client/bin/kube* /usr/bin/
+chmod a+x /usr/bin/kube*
```
## Create the kubectl kubeconfig file
``` bash
-$ export KUBE_APISERVER="https://172.20.0.113:6443"
-$ # Set cluster parameters
-$ kubectl config set-cluster kubernetes \
+export KUBE_APISERVER="https://172.20.0.113:6443"
+# Set cluster parameters
+kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
-$ # Set client authentication parameters
-$ kubectl config set-credentials admin \
+# Set client authentication parameters
+kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem
-$ # Set context parameters
-$ kubectl config set-context kubernetes \
+# Set context parameters
+kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
-$ # Set the default context
-$ kubectl config use-context kubernetes
+# Set the default context
+kubectl config use-context kubernetes
```
+ The OU field of the `admin.pem` certificate is `system:masters`; the RoleBinding `cluster-admin` predefined by `kube-apiserver` binds the Group `system:masters` to the Role `cluster-admin`, which grants permission to call the relevant `kube-apiserver` APIs;
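To confirm that the resulting kubeconfig wires the cluster, user, and context together as intended, it can be inspected without contacting the apiserver, since `kubectl config view` only reads the local file. A minimal sketch, guarded for machines without kubectl installed:

```shell
# Show the context names recorded in the local kubeconfig; config view
# reads the file only, so no apiserver connection is needed.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config view -o jsonpath='{.contexts[*].name}'
  echo
else
  echo "kubectl not found in PATH"
fi
```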

---

@@ -33,12 +33,10 @@ admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem
Download the release tarball from the [github release page](https://github.com/kubernetes/kubernetes/releases), extract it, then run the download script
``` shell
-$ wget https://github.com/kubernetes/kubernetes/releases/download/v1.6.0/kubernetes.tar.gz
-$ tar -xzvf kubernetes.tar.gz
-...
-$ cd kubernetes
-$ ./cluster/get-kube-binaries.sh
-...
+wget https://github.com/kubernetes/kubernetes/releases/download/v1.6.0/kubernetes.tar.gz
+tar -xzvf kubernetes.tar.gz
+cd kubernetes
+./cluster/get-kube-binaries.sh
```
**Option 2**
@@ -47,17 +45,16 @@ $ ./cluster/get-kube-binaries.sh
The `server` tarball `kubernetes-server-linux-amd64.tar.gz` already contains the `client` (`kubectl`) binary, so there is no need to download the `kubernetes-client-linux-amd64.tar.gz` file separately;
``` shell
-$ # wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
-$ wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
-$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
-...
-$ cd kubernetes
-$ tar -xzvf kubernetes-src.tar.gz
+# wget https://dl.k8s.io/v1.6.0/kubernetes-client-linux-amd64.tar.gz
+wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
+tar -xzvf kubernetes-server-linux-amd64.tar.gz
+cd kubernetes
+tar -xzvf kubernetes-src.tar.gz
```
Copy the binaries to the target path
``` bash
-$ cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
+cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
```
## Configure and start kube-apiserver
@@ -173,10 +170,10 @@ KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s
**Start kube-apiserver**
``` bash
-$ systemctl daemon-reload
-$ systemctl enable kube-apiserver
-$ systemctl start kube-apiserver
-$ systemctl status kube-apiserver
+systemctl daemon-reload
+systemctl enable kube-apiserver
+systemctl start kube-apiserver
+systemctl status kube-apiserver
```
## Configure and start kube-controller-manager
@@ -238,9 +235,9 @@ KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.
### Start kube-controller-manager
``` bash
-$ systemctl daemon-reload
-$ systemctl enable kube-controller-manager
-$ systemctl start kube-controller-manager
+systemctl daemon-reload
+systemctl enable kube-controller-manager
+systemctl start kube-controller-manager
```
## Configure and start kube-scheduler
@@ -288,9 +285,9 @@ KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
### Start kube-scheduler
``` bash
-$ systemctl daemon-reload
-$ systemctl enable kube-scheduler
-$ systemctl start kube-scheduler
+systemctl daemon-reload
+systemctl enable kube-scheduler
+systemctl start kube-scheduler
```
## Verify master node functionality

---

@@ -215,8 +215,8 @@ When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver, so first
only then does kubelet have permission to create certificate signing requests
``` bash
-$ cd /etc/kubernetes
-$ kubectl create clusterrolebinding kubelet-bootstrap \
+cd /etc/kubernetes
+kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```
@@ -226,11 +226,11 @@ $ kubectl create clusterrolebinding kubelet-bootstrap \
### Download the latest kubelet and kube-proxy binaries
``` bash
-$ wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
-$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
-$ cd kubernetes
-$ tar -xzvf kubernetes-src.tar.gz
-$ cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/
+wget https://dl.k8s.io/v1.6.0/kubernetes-server-linux-amd64.tar.gz
+tar -xzvf kubernetes-server-linux-amd64.tar.gz
+cd kubernetes
+tar -xzvf kubernetes-src.tar.gz
+cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/
```
### Create the kubelet service config file
@@ -306,10 +306,10 @@ KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bo
### Start kubelet
``` bash
-$ systemctl daemon-reload
-$ systemctl enable kubelet
-$ systemctl start kubelet
-$ systemctl status kubelet
+systemctl daemon-reload
+systemctl enable kubelet
+systemctl start kubelet
+systemctl status kubelet
```
### Approve kubelet's TLS certificate requests
@@ -399,10 +399,10 @@ KUBE_PROXY_ARGS="--bind-address=172.20.0.113 --hostname-override=172.20.0.113 --
### Start kube-proxy
``` bash
-$ systemctl daemon-reload
-$ systemctl enable kube-proxy
-$ systemctl start kube-proxy
-$ systemctl status kube-proxy
+systemctl daemon-reload
+systemctl enable kube-proxy
+systemctl start kube-proxy
+systemctl status kube-proxy
```
## Verification tests

---

@@ -25,7 +25,27 @@
## Architecture design
For the limitations of spark standalone compared with the kubernetes native spark architecture, see the issue [Support Spark natively in Kubernetes #34377](https://github.com/kubernetes/kubernetes/issues/34377) filed by Anirudh Ramanathan on October 8, 2016.
In short, spark standalone on kubernetes has the following drawbacks:
- No isolation between tenants: every user tends to request the maximum resources a node can offer for their pods.
- Spark's master/worker was never designed to use kubernetes resource scheduling, so two layers of resource scheduling coexist, which hinders integration with kubernetes.
In a kubernetes native spark cluster, by contrast, spark can call the kubernetes API to obtain cluster resources and scheduling. Implementing kubernetes native spark requires providing spark with a manager outside the cluster that interacts with the kubernetes API.
## Installation guide
We can deploy directly using the officially pre-built docker images.
| Component | Image |
| -------------------------- | ---------------------------------------- |
| Spark Driver Image | `kubespark/spark-driver:v2.1.0-kubernetes-0.3.1` |
| Spark Executor Image | `kubespark/spark-executor:v2.1.0-kubernetes-0.3.1` |
| Spark Initialization Image | `kubespark/spark-init:v2.1.0-kubernetes-0.3.1` |
| Spark Staging Server Image | `kubespark/spark-resource-staging-server:v2.1.0-kubernetes-0.3.1` |
| PySpark Driver Image | `kubespark/driver-py:v2.1.0-kubernetes-0.3.1` |
| PySpark Executor Image | `kubespark/executor-py:v2.1.0-kubernetes-0.3.1` |
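With these images, a job is submitted from a spark distribution built for the kubernetes scheduler backend. The sketch below follows the style of the apache-spark-on-k8s fork's documentation for v2.1.0-kubernetes-0.3.1; the API server address and the example jar path are placeholders to replace with your own values:

``` shell
# Hedged submission sketch: <apiserver-host> and the jar path are
# placeholders; the conf keys follow the spark-on-k8s fork's docs.
bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<apiserver-host>:6443 \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.namespace=default \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-kubernetes-0.3.1 \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-kubernetes-0.3.1 \
  --conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.1.0-kubernetes-0.3.1 \
  local:///opt/spark/examples/jars/spark-examples.jar
```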
## References