commit 86e2e6b9f1 (pull/2/head) by 杜宽, 2023-03-13 14:23:59 +08:00: 100 changed files with 7390 additions and 0 deletions

README.md
## Cloud-Native Kubernetes (K8s) Full-Stack Architect: Hands-On Documentation
## K8s technical QQ group: 612388919
## Author QQ: 727585266
## Companion videos for the book:
**Free updates, free technical Q&A, and free job referrals included; benefits for life (average monthly salary: 25K RMB)**
Tencent Classroom:
K8s Full-Stack Architect: https://ke.qq.com/course/2738602
K8s Certified Administrator (CKA): https://ke.qq.com/course/3382340?tuin=2b5e11f2
K8s Security Specialist (CKS): https://ke.qq.com/course/4161957?tuin=2b5e11f2
CKA + Architect bundle: https://ke.qq.com/course/package/38982?tuin=2b5e11f2
Super bundle: https://ke.qq.com/course/package/41755?tuin=2b5e11f2
51CTO:
Full-Stack Architect: https://edu.51cto.com/course/23845.html
K8s Certified Administrator (CKA): https://edu.51cto.com/course/27103.html
K8s Security Specialist (CKS): https://edu.51cto.com/course/29792.html
CKA + Architect bundle: https://edu.51cto.com/topic/4973.html
Super bundle: https://edu.51cto.com/topic/5174.html
# Errata
### We sincerely apologize for the inconvenience; corrections to errors in the book are listed below:
1. Page 182, section 9.3.2: the first `kubectl run` command should be `kubectl create deployment nginx-server`. Reason: due to a version change, `kubectl run` now creates a Pod; creating a Deployment requires `kubectl create deployment`.
2. Page 77:
````
successThreshold: 1 # one successful check marks the container ready
failureThreshold: 2 # two failed checks mark it not ready
````
3. Page 71: in the Node description, "Docker Engine: responsible for managing containers (负责对容器的管理)" was misprinted as "负载对容器的管理".

docs/chap01/1.4.md
**vim /etc/haproxy/haproxy.cfg**
````bash
global
maxconn 2000
ulimit-n 16384
log 127.0.0.1 local0 err
stats timeout 30s
defaults
log global
mode http
option httplog
timeout connect 5000
timeout client 50000
timeout server 50000
timeout http-request 15s
timeout http-keep-alive 15s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind 0.0.0.0:16443 # listening port
bind 127.0.0.1:16443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server k8s-master01 192.168.236.201:6443 check # backend server addresses
server k8s-master02 192.168.236.202:6443 check
server k8s-master03 192.168.236.203:6443 check
````
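HAProxy listens on 16443 on every master and forwards to the three kube-apiservers on 6443. Before starting it, the configuration can be validated; a minimal bring-up sketch, assuming HAProxy is managed by systemd:
````bash
# Check the configuration syntax before (re)starting
haproxy -c -f /etc/haproxy/haproxy.cfg
# Start HAProxy on all three masters and enable it at boot
systemctl daemon-reload
systemctl enable --now haproxy
# The monitor frontend defined above should answer on port 33305
curl http://127.0.0.1:33305/monitor
````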
**Master01**
**vim /etc/keepalived/keepalived.conf**
````bash
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface ens33 # local NIC name
mcast_src_ip 192.168.236.201 # local IP address
virtual_router_id 51
priority 101
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.236.236 # VIP; must be an unused IP address in the same subnet as the hosts
}
track_script {
chk_apiserver
}
}
````
**Master02**
**vim /etc/keepalived/keepalived.conf**
````
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.236.202
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.236.236
}
track_script {
chk_apiserver
}
}
````
**Master03**
**vim /etc/keepalived/keepalived.conf**
````
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
script_user root
enable_script_security
}
vrrp_script chk_apiserver {
script "/etc/keepalived/check_apiserver.sh"
interval 5
weight -5
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
mcast_src_ip 192.168.236.203
virtual_router_id 51
priority 100
advert_int 2
authentication {
auth_type PASS
auth_pass K8SHA_KA_AUTH
}
virtual_ipaddress {
192.168.236.236
}
track_script {
chk_apiserver
}
}
````
**check_apiserver.sh**
````
#!/bin/bash
# Probe up to 3 times for a running haproxy process; if none is found,
# stop keepalived so the VIP fails over to another master node.
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
````
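The health-check script must be executable before keepalived references it; a sketch of the bring-up on all three masters, assuming keepalived is installed as a systemd service:
````bash
chmod +x /etc/keepalived/check_apiserver.sh
systemctl daemon-reload
systemctl enable --now keepalived
# The VIP should be bound on the MASTER node (Master01)
ip addr show ens33 | grep 192.168.236.236
# And reachable from every node
ping -c 2 192.168.236.236
````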

docs/chap01/1.5.md
**vim kubeadm-config.yaml**
````
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.236.201
  bindPort: 6443
nodeRegistration:
  # criSocket: /var/run/dockershim.sock # use this when Docker is the runtime
  criSocket: /run/containerd/containerd.sock # use this when Containerd is the runtime
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.236.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.236.236:16443
controllerManager: {}
dns: {} # v1beta3 dropped the dns.type field; CoreDNS is the only supported DNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.22.0 # change this version to match the output of kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}
````
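With this file in place on Master01, the control-plane images can be pre-pulled and the first master initialized; a sketch of the usual kubeadm flow (flags as in upstream kubeadm):
````bash
# Pre-pull the images referenced by the configuration
kubeadm config images pull --config kubeadm-config.yaml
# Initialize the first master; --upload-certs lets the remaining
# masters fetch the shared certificates when they join
kubeadm init --config kubeadm-config.yaml --upload-certs
````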

docs/chap02/2.6.md
**vim /etc/etcd/etcd.config.yml**
**Adjust the configuration for your environment**
````
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.236.201:2380'
listen-client-urls: 'https://192.168.236.201:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.236.201:2380'
advertise-client-urls: 'https://192.168.236.201:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.236.201:2380,k8s-master02=https://192.168.236.202:2380,k8s-master03=https://192.168.236.203:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
````
**vim /usr/lib/systemd/system/etcd.service**
````
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
````
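After distributing the configuration (with per-node names and IPs) and the unit file to all three masters, etcd can be started and the cluster checked; a verification sketch, assuming an etcd v3 client:
````bash
systemctl daemon-reload
systemctl enable --now etcd
# Query cluster status with the same certificates etcd serves with
export ETCDCTL_API=3
etcdctl --endpoints="https://192.168.236.201:2379,https://192.168.236.202:2379,https://192.168.236.203:2379" \
  --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  endpoint status --write-out=table
````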

docs/chap02/2.7.md
**vim /usr/lib/systemd/system/kube-apiserver.service**
**Adjust the configuration as needed**
````
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address=192.168.236.201 \
--service-cluster-ip-range=192.168.0.0/16 \
--service-node-port-range=30000-32767 \
--etcd-servers=https://192.168.236.201:2379,https://192.168.236.202:2379,https://192.168.236.203:2379 \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
````
**vim /usr/lib/systemd/system/kube-controller-manager.service**
**Adjust the configuration as needed**
````
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
````
**vim /usr/lib/systemd/system/kube-scheduler.service**
````
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
````
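With the three unit files written on each master, the control plane can be brought up; a sketch, assuming the certificates and kubeconfigs from the earlier steps already exist:
````bash
systemctl daemon-reload
systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler
# /healthz is readable anonymously under the default RBAC policy
curl -k https://192.168.236.201:6443/healthz
# The same check through the HAProxy/keepalived VIP
curl -k https://192.168.236.236:16443/healthz
````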

docs/chap02/2.9.md
**vim /usr/lib/systemd/system/kubelet.service**
````
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
````
**vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf**
````
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
````
**If the runtime is Docker, use the following kubelet configuration instead**
**vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf**
````
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
````
**vim /etc/kubernetes/kubelet-conf.yml**
````
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 192.168.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
````
**vim /usr/lib/systemd/system/kube-proxy.service**
````
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
````
**vim /etc/kubernetes/kube-proxy.yaml**
````
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
````
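Once these files are distributed to every node, kubelet and kube-proxy can be started; a verification sketch, assuming the bootstrap kubeconfig has been created:
````bash
systemctl daemon-reload
systemctl enable --now kubelet kube-proxy
# Nodes register themselves and report Ready once the CNI plugin is installed
kubectl get nodes -o wide
# With mode "ipvs", the proxy rules are visible through ipvsadm
ipvsadm -ln
````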

docs/chap04/4.6.md
**Define a Pod**
````
apiVersion: v1 # required, API version
kind: Pod # required, resource type: Pod
metadata: # required, metadata
  name: nginx # required, Pod name conforming to RFC 1035
  namespace: default # optional, the Pod's namespace; defaults to default, can be set with -n
  labels: # optional, labels, generally used to filter and group Pods
    app: nginx
    role: frontend # multiple labels may be set
  annotations: # optional, list of annotations; multiple entries may be set
    app: nginx
spec: # required, detailed definition of the containers
  initContainers: # init containers, run initialization steps before the main containers start
  - command:
    - sh
    - -c
    - echo "I am InitContainer for init some configuration"
    image: busybox
    imagePullPolicy: IfNotPresent
    name: init-container
  containers: # required, list of containers
  - name: nginx # required, container name conforming to RFC 1035
    image: nginx:latest # required, address of the image used by the container
    imagePullPolicy: Always # optional, image pull policy
    command: # optional, command executed when the container starts
    - nginx
    - -g
    - "daemon off;"
    workingDir: /usr/share/nginx/html # optional, the container's working directory
    volumeMounts: # optional, volume mounts; multiple entries may be set
    - name: webroot # volume name
      mountPath: /usr/share/nginx/html # mount path
      readOnly: true # read-only
    ports: # optional, list of ports the container exposes
    - name: http # port name
      containerPort: 80 # port number
      protocol: TCP # port protocol, defaults to TCP
    env: # optional, list of environment variables
    - name: TZ # variable name
      value: Asia/Shanghai # variable value
    - name: LANG
      value: en_US.utf8
    resources: # optional, resource limits and requests
      limits: # maximum resource settings
        cpu: 1000m
        memory: 1024Mi
      requests: # resources required at startup
        cpu: 100m
        memory: 512Mi
    #startupProbe: # optional, checks whether the process inside the container has finished starting; only one of the three check methods may be used at a time
    #  httpGet: # httpGet check; recommended in production as an endpoint-level health check provided by the application
    #    path: /api/successStart # check path
    #    port: 80
    readinessProbe: # optional, health check; only one of the three check methods may be used at a time
      httpGet: # httpGet check; recommended in production as an endpoint-level health check provided by the application
        path: / # check path
        port: 80 # port to probe
    livenessProbe: # optional, health check
      #exec: # check by executing a command inside the container
      #  command:
      #  - cat
      #  - /health
      #httpGet: # httpGet check
      #  path: /_health # check path
      #  port: 8080
      #  httpHeaders: # request headers sent with the check
      #  - name: end-user
      #    value: Jason
      tcpSocket: # TCP port check
        port: 80
      initialDelaySeconds: 60 # delay before the first probe
      timeoutSeconds: 2 # probe timeout
      periodSeconds: 5 # probe interval
      successThreshold: 1 # one successful check marks the container ready
      failureThreshold: 2 # two failed checks mark it not ready
    lifecycle:
      postStart: # command run right after the container is created; may be exec, httpGet, or tcpSocket
        exec:
          command:
          - sh
          - -c
          - 'mkdir /data/ '
      preStop:
        httpGet:
          path: /
          port: 80
        # exec:
        #   command:
        #   - sh
        #   - -c
        #   - sleep 9
  restartPolicy: Always # optional, defaults to Always
  #nodeSelector: # optional, schedule the Pod onto specific nodes
  #  region: subnet7
  imagePullSecrets: # optional, Secrets used to pull the image; multiple entries may be set
  - name: default-dockercfg-86258
  hostNetwork: false # optional, whether to use host networking; if true, host ports are consumed
  volumes: # list of shared volumes
  - name: webroot # name, matching the volumeMounts entry above
    emptyDir: {} # ephemeral directory
    #hostPath: # mount a host directory instead
    #  path: /etc/hosts
````
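Creating the Pod makes the init container, probes, and lifecycle hooks observable; a minimal sketch, assuming the manifest above is saved as nginx-pod.yaml (a file name chosen here for illustration):
````bash
kubectl create -f nginx-pod.yaml
# The init container must complete before nginx starts
kubectl get pod nginx -w
# Events show image pulls, probe results, and hook execution
kubectl describe pod nginx
````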

docs/chap05/5.1.md
**Define a ReplicationController**
```
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 3
selector:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
```
**Define a ReplicaSet**
````
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# modify replicas according to your case
replicas: 3
selector:
matchLabels:
tier: frontend
matchExpressions:
- {key: tier, operator: In, values: [frontend]}
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v3
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access environment variables to find service host
# info, comment out the 'value: dns' line above, and uncomment the
# line below.
# value: env
ports:
- containerPort: 80
````

**Create a Deployment**
````
apiVersion: apps/v1 # since Kubernetes 1.16 all other API versions are removed and only apps/v1 is accepted; versions before 1.16 also allowed extensions/v1beta1 and the like
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
````
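A Deployment manages ReplicaSets to provide rolling updates and rollbacks; a sketch of the usual lifecycle, assuming the manifest above is saved as nginx-deployment.yaml (an illustrative file name):
````bash
kubectl create -f nginx-deployment.yaml
# Trigger a rolling update by changing the container image
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
kubectl rollout status deployment/nginx-deployment
# Roll back to the previous ReplicaSet if the new version misbehaves
kubectl rollout undo deployment/nginx-deployment
````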
**Define a simple StatefulSet**
````
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
````
**Define a DaemonSet**
````
apiVersion: apps/v1
kind: DaemonSet # kind is DaemonSet
metadata:
name: fluentd-es-v2.0.4
namespace: logging
labels:
k8s-app: fluentd-es
version: v2.0.4
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: fluentd-es
version: v2.0.4
template:
metadata:
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v2.0.4
# This annotation ensures that fluentd does not get evicted if the node
# supports critical pod annotation based priority scheme.
# Note that this does not guarantee admission on the nodes (#40573).
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
serviceAccountName: fluentd-es
containers:
- name: fluentd-es
image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4
env:
- name: FLUENTD_ARGS
value: --no-supervisor -q
resources:
limits:
memory: 500Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config-volume
mountPath: /etc/fluent/config.d
nodeSelector:
beta.kubernetes.io/fluentd-ds-ready: "true"
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config-volume
configMap:
name: fluentd-es-config-v0.1.4
````
**CronJob**
````
apiVersion: batch/v1 # use batch/v1beta1 on Kubernetes versions below 1.21
kind: CronJob
metadata:
labels:
run: hello
name: hello
namespace: default
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
labels:
run: hello
spec:
containers:
- args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
image: busybox
imagePullPolicy: Always
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: '*/1 * * * *'
successfulJobsHistoryLimit: 3
suspend: false
````
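Per the schedule field, this CronJob spawns a Job every minute; a sketch for watching it run, assuming the manifest is saved as cronjob-hello.yaml (an illustrative name):
````bash
kubectl create -f cronjob-hello.yaml
kubectl get cronjob hello
# A new Job (and Pod) appears each minute
kubectl get jobs --watch
# Read the output of the latest run via the label selector
kubectl logs -l run=hello --tail=10
````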

docs/chap06/6.2.md
**Define a Service YAML file**
```
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: myapp
ports:
- protocol: TCP
port: 80
targetPort: 9376
```
**Service without a selector**
````
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
---
kind: Endpoints
apiVersion: v1
metadata:
name: my-service
subsets:
- addresses:
- ip: 1.2.3.4
ports:
- port: 9376
````
**ExternalName Service**
````
kind: Service
apiVersion: v1
metadata:
name: my-service
namespace: prod
spec:
type: ExternalName
externalName: my.database.example.com
````
**Multi-port Service**
````
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: myapp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
- name: https
protocol: TCP
port: 443
targetPort: 9377
````
**NodePort**
`````
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
nodePort: 30000
selector:
k8s-app: kubernetes-dashboard
`````
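A NodePort Service exposes the port on every node's IP, so the dashboard above becomes reachable at port 30000 of any node; a quick check, assuming 192.168.236.201 is one of the node IPs used in the earlier chapters and the dashboard is installed:
````bash
kubectl get svc -n kube-system kubernetes-dashboard
# The dashboard answers over HTTPS on the NodePort
curl -k https://192.168.236.201:30000
````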

docs/chap06/6.3.md
**Create an Ingress**
````
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
kubernetes.io/ingress.class: "nginx" # the ingress.class value varies between controllers
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
pathType: Prefix
backend:
serviceName: service1
servicePort: 4200
- path: /bar
pathType: ImplementationSpecific
backend:
serviceName: service2
servicePort: 8080
````
**Ingress v1**
````
apiVersion: networking.k8s.io/v1 # 1.19+
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  ingressClassName: nginx
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix # required in networking.k8s.io/v1
        backend:
          service:
            name: service1
            port:
              number: 4200
````
**Single domain**
````
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: service1
servicePort: 4200
- path: /bar
backend:
serviceName: service2
servicePort: 8080
````
**Multiple domains**
````
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
````
**TLS**
````
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-https-test
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: https-test.com
http:
paths:
- backend:
serviceName: nginx-svc
servicePort: 80
tls:
- secretName: nginx-test-tls
````
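The Secret named by `secretName` must exist in the same namespace before TLS termination works; a sketch of creating it with a self-signed certificate (file names here are illustrative):
````bash
# Self-signed certificate for testing only
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=https-test.com"
kubectl create secret tls nginx-test-tls --cert=tls.crt --key=tls.key -n default
````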

docs/chap07/7.2.md
**game.properties**
````
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
````
**ui.properties**
````
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
````
**game-env-file.properties**
```
enemies=aliens
lives=3
allowed="true"
```
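These files are the inputs for the ConfigMaps used in the following sections; a creation sketch. Note that `--from-file` stores a whole file under one key, while `--from-env-file` turns each line into its own key/value pair:
````bash
# One ConfigMap holding both property files as keys
kubectl create configmap game-config \
  --from-file=game.properties --from-file=ui.properties
# Each line of the env-file becomes an individual key/value pair
kubectl create configmap game-config-env-file \
  --from-env-file=game-env-file.properties
kubectl describe configmap game-config
````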

docs/chap07/7.3.md
**valueFrom**
```
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: env-valuefrom
name: env-valuefrom
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: env-valuefrom
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: env-valuefrom
spec:
containers:
- command:
- sh
- -c
- env
env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
key: special.how
name: special-config
image: busybox
imagePullPolicy: IfNotPresent
name: env-valuefrom
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
dnsPolicy: ClusterFirst
restartPolicy: Never
```
**envFrom**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: env-valuefrom
name: env-valuefrom
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: env-valuefrom
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: env-valuefrom
spec:
containers:
- command:
- sh
- -c
- env
env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
envFrom:
- configMapRef:
name: game-config-env-file
prefix: fromCm_
image: busybox
imagePullPolicy: IfNotPresent
name: env-valuefrom
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 10Mi
dnsPolicy: ClusterFirst
restartPolicy: Never
````
**Mounting as files**
````
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
restartPolicy: Never
````
**Custom file names**
```
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: busybox
command: [ "/bin/sh","-c","cat /etc/config/keys" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: keys
restartPolicy: Never
```
**Setting file permissions**
```
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: busybox
command: [ "/bin/sh","-c","ls -l /etc/config/..data/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: keys
defaultMode: 0666
restartPolicy: Never
```
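The examples above assume a ConfigMap named special-config containing a key special.how; a sketch of creating it (the value is arbitrary for the demo):
````bash
kubectl create configmap special-config --from-literal=special.how=very
# After creating one of the Pods above, its log shows the mounted result
kubectl logs dapi-test-pod
````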

docs/chap07/7.6.md
**Mounting a Secret**
```
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret: # use secret here instead of configMap
secretName: mysecret # the equivalent ConfigMap field is name
```
**Mounting with custom file names**
```
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
items:
- key: username
path: my-group/my-username
```
**Secrets as environment variables**
````
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
````
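All three examples reference a Secret named mysecret with username and password keys; a sketch of creating it from literals (the credentials shown are placeholders):
````bash
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password='S3cre7!'
# Values are stored base64-encoded in the Secret
kubectl get secret mysecret -o yaml
````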

docs/chap07/7.7.md
**imagePullSecrets**
````
apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: awesomeapps
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
# multiple Secrets
- name: myregistrykey2
- name: myregistrykeyx
````
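Each entry under imagePullSecrets must be a Secret of type docker-registry in the Pod's namespace; a creation sketch with placeholder registry and credentials:
````bash
kubectl create secret docker-registry myregistrykey \
  --docker-server=registry.example.com \
  --docker-username=janedoe \
  --docker-password='xxxxxxxx' \
  -n awesomeapps
````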
**Ingress TLS**
````
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-https-test
namespace: default
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: https-test.com
http:
paths:
- backend:
serviceName: nginx-svc
servicePort: 80
tls:
- secretName: nginx-test-tls
````

**nginx-empty.yaml**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
name: nginx
volumeMounts:
- mountPath: /opt
name: share-volume
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
name: nginx2
command:
- sh
- -c
- sleep 3600
volumeMounts:
- mountPath: /mnt
name: share-volume
volumes:
- name: share-volume
emptyDir: {}
#medium: Memory
````
**nginx-hostPath.yaml**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
name: nginx
volumeMounts:
- mountPath: /opt
name: share-volume
- mountPath: /etc/timezone
name: timezone
- image: nginx:1.15.2
imagePullPolicy: IfNotPresent
name: nginx2
command:
- sh
- -c
- sleep 3600
volumeMounts:
- mountPath: /mnt
name: share-volume
volumes:
- name: share-volume
emptyDir: {}
#medium: Memory
- name: timezone
hostPath:
path: /etc/timezone
type: File
````
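In the first Deployment, both containers mount the same emptyDir (at /opt and /mnt respectively), so a file written by one container is visible to the other; a quick sketch for verifying this while the Deployment is running:
````bash
kubectl exec deploy/nginx -c nginx -- touch /opt/shared-file
kubectl exec deploy/nginx -c nginx2 -- ls /mnt   # shared-file appears here
````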

docs/chap08/8.6.md
**An NFS-based PV**
````
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs-slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
````
**A HostPath-based PV**
````
kind: PersistentVolume
apiVersion: v1
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
````
**A Ceph RBD-based PV**
````
apiVersion: v1
kind: PersistentVolume
metadata:
name: ceph-rbd-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
rbd:
monitors:
- 192.168.1.123:6789
- 192.168.1.124:6789
- 192.168.1.125:6789
pool: rbd
image: ceph-rbd-pv-test
user: admin
secretRef:
name: ceph-secret
fsType: ext4
readOnly: false
````

docs/chap08/8.7.md
**Creating a PVC**
````
kind: PersistentVolume
apiVersion: v1
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
````
**An NFS-type PVC**
````
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-nfs
spec:
storageClassName: nfs-slow
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
````
**Using a PVC**
````
kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
````
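Because the claim's storageClassName and access mode match the PV, the two bind and the Pod mounts the claim; a sketch for checking the binding, assuming the manifests above are saved in task-pv.yaml (an illustrative name):
````bash
kubectl create -f task-pv.yaml
kubectl get pv task-pv-volume   # STATUS should become Bound
kubectl get pvc task-pv-claim
kubectl get pod task-pv-pod
````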

docs/chap08/8.8.md
**Define a StorageClass**
````
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://127.0.0.1:8081"
clusterid: "630372ccdc720a92c681fb928f27b53f"
restauthenabled: "true"
restuser: "admin"
secretNamespace: "default"
secretName: "heketi-secret"
gidMin: "40000"
gidMax: "50000"
volumetype: "replicate:3"
````
**vim provi-cephrbd.yaml**
````yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-provisioner
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["services"]
resourceNames: ["kube-dns","coredns"]
verbs: ["list", "get"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: rbd-provisioner
namespace: kube-system
subjects:
- kind: ServiceAccount
name: rbd-provisioner
namespace: kube-system
roleRef:
kind: ClusterRole
name: rbd-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: rbd-provisioner
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rbd-provisioner
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: rbd-provisioner
subjects:
- kind: ServiceAccount
name: rbd-provisioner
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rbd-provisioner
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: rbd-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: rbd-provisioner
spec:
containers:
- name: rbd-provisioner
image: "registry.cn-beijing.aliyuncs.com/dotbalo/rbd-provisioner:latest"
env:
- name: PROVISIONER_NAME
value: ceph.com/rbd
serviceAccount: rbd-provisioner
````
**vim rbd-sc.yaml**
````yaml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
monitors: x.x.x.x:6789,x.x.x.x:6789,x.x.x.x:6789
pool: rbdfork8s
adminId: admin
adminSecretNamespace: kube-system
adminSecretName: ceph-admin-secret
userId: kube
userSecretNamespace: kube-system
userSecretName: ceph-k8s-secret
imageFormat: "2"
imageFeatures: layering
````
**vim rbd-pvc.yaml**
````
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rbd-pvc-test
spec:
accessModes:
- ReadWriteOnce
storageClassName: ceph-rbd
resources:
requests:
storage: 100Mi
# kubectl create -f rbd-pvc.yaml
````

docs/chap08/8.9.md
**8.9.1**
**vim ceph-configmap.yaml**
````
apiVersion: v1
kind: ConfigMap
data:
config.json: |-
[
{
"clusterID": "48ddd55b-28ce-43f3-92a8-d17d9ad2c0de",
"monitors": [
"xxx:6789",
"xxx:6789",
"xxx:6789"
],
"cephFS": {
"subvolumeGroup": "cephfs-k8s-csi"
}
}
]
metadata:
name: ceph-csi-config
````
**vim cephfs-csi-sc.yaml**
````
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
clusterID: 48ddd55b-28ce-43f3-92a8-d17d9ad2c0de
fsName: sharefs
pool: sharefs-data0
# The secrets have to contain user and/or Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
# (optional) The driver can use either ceph-fuse (fuse) or
# ceph kernelclient (kernel).
# If omitted, default volume mounter will be used - this is
# determined by probing for ceph-fuse and mount.ceph
# mounter: kernel
# (optional) Prefix to use for naming subvolumes.
# If omitted, defaults to "csi-vol-".
# volumeNamePrefix: "foo-bar-"
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
````
**vim pvc.yaml**
````
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: cephfs-pvc-test-csi
spec:
accessModes:
- ReadWriteMany
storageClassName: csi-cephfs-sc
resources:
requests:
storage: 100Mi
````
**vim test-pvc-dp.yaml**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-cephfs
name: test-cephfs
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: test-cephfs
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: test-cephfs
spec:
containers:
- command:
- sh
- -c
- sleep 36000
image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
name: test-cephfs
volumeMounts:
- mountPath: /mnt
name: cephfs-pvc-test
volumes:
- name: cephfs-pvc-test
persistentVolumeClaim:
claimName: cephfs-pvc-test-csi
````
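If the CSI driver, Secret, and StorageClass are healthy, the PVC is provisioned dynamically and the test Deployment can write through the CephFS mount; a verification sketch:
````bash
kubectl get pvc cephfs-pvc-test-csi   # should report Bound
kubectl get pv                        # a dynamically provisioned PV appears
# Write and read through the mount inside the test container
kubectl exec deploy/test-cephfs -- sh -c 'echo ok > /mnt/test && cat /mnt/test'
````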
**8.9.2**
**ceph-configmap.yaml**
````
apiVersion: v1
kind: ConfigMap
data:
config.json: |-
[
{
"clusterID": "48ddd55b-28ce-43f3-92a8-d17d9ad2c0de",
"monitors": [
"xxx:6789",
"xxx:6789",
"xxx:6789"
],
"cephFS": {
"subvolumeGroup": "cephrbd-k8s-csi"
}
}
]
metadata:
name: ceph-csi-config
````
**rbd-csi-sc.yaml**
````
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
clusterID: 48ddd55b-28ce-43f3-92a8-d17d9ad2c0de
pool: rbdfork8s
imageFeatures: layering
csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- discard
````
**pvc.yaml**
````
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rbd-pvc-test-csi
spec:
accessModes:
- ReadWriteOnce
storageClassName: csi-rbd-sc
resources:
requests:
storage: 100Mi
````
**test-pvc-dp.yaml**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-rbd
name: test-rbd
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: test-rbd
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: test-rbd
spec:
containers:
- command:
- sh
- -c
- sleep 36000
image: registry.cn-beijing.aliyuncs.com/dotbalo/debug-tools
name: test-rbd
volumeMounts:
- mountPath: /mnt
name: rbd-pvc-test
volumes:
- name: rbd-pvc-test
persistentVolumeClaim:
claimName: rbd-pvc-test-csi
````

# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

apiVersion: v1
appVersion: v3.3.1
description: Container Storage Interface (CSI) driver, provisioner, snapshotter and
attacher for Ceph cephfs
home: https://github.com/ceph/ceph-csi
icon: https://raw.githubusercontent.com/ceph/ceph-csi/v3.3.1/assets/ceph-logo.png
keywords:
- ceph
- cephfs
- ceph-csi
name: ceph-csi-cephfs
sources:
- https://github.com/ceph/ceph-csi/tree/v3.3.1/charts/ceph-csi-cephfs
version: 3.3.1

# ceph-csi-cephfs
The ceph-csi-cephfs chart adds cephfs volume support to your cluster.
## Install from release repo
Add chart repository to install helm charts from it
```console
helm repo add ceph-csi https://ceph.github.io/csi-charts
```
## Install from local Chart
Change into the directory where the charts are located
```console
cd charts
```
**Note:** the charts directory is located in the root of the ceph-csi project
### Install Chart
To install the Chart into your Kubernetes cluster
- For helm 2.x
```bash
helm install --namespace "ceph-csi-cephfs" --name "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs
```
- For helm 3.x
Create the namespace where Helm should install the components
```bash
kubectl create namespace ceph-csi-cephfs
```
Run the installation
```bash
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs
```
After the installation succeeds, you can check the status of the Chart
```bash
helm status "ceph-csi-cephfs"
```
### Delete Chart
If you want to delete your Chart, use this command
- For helm 2.x
```bash
helm delete --purge "ceph-csi-cephfs"
```
- For helm 3.x
```bash
helm uninstall "ceph-csi-cephfs" --namespace "ceph-csi-cephfs"
```
If you want to delete the namespace, use this command
```bash
kubectl delete namespace ceph-csi-cephfs
```

Examples on how to configure a storage class and start using the driver are here:
https://github.com/ceph/ceph-csi/tree/v3.3.1/examples/cephfs

{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "ceph-csi-cephfs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-cephfs.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-cephfs.nodeplugin.fullname" -}}
{{- if .Values.nodeplugin.fullnameOverride -}}
{{- .Values.nodeplugin.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s-%s" .Release.Name .Values.nodeplugin.name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name $name .Values.nodeplugin.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-cephfs.provisioner.fullname" -}}
{{- if .Values.provisioner.fullnameOverride -}}
{{- .Values.provisioner.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s-%s" .Release.Name .Values.provisioner.name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name $name .Values.provisioner.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "ceph-csi-cephfs.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "ceph-csi-cephfs.serviceAccountName.nodeplugin" -}}
{{- if .Values.serviceAccounts.nodeplugin.create -}}
{{ default (include "ceph-csi-cephfs.nodeplugin.fullname" .) .Values.serviceAccounts.nodeplugin.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.nodeplugin.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "ceph-csi-cephfs.serviceAccountName.provisioner" -}}
{{- if .Values.serviceAccounts.provisioner.create -}}
{{ default (include "ceph-csi-cephfs.provisioner.fullname" .) .Values.serviceAccounts.provisioner.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.provisioner.name }}
{{- end -}}
{{- end -}}

{{ if semverCompare ">=1.18" .Capabilities.KubeVersion.GitVersion }}
apiVersion: storage.k8s.io/v1
{{ else }}
apiVersion: storage.k8s.io/v1beta1
{{ end }}
kind: CSIDriver
metadata:
name: {{ .Values.driverName }}
spec:
attachRequired: true
podInfoOnMount: false

{{- if not .Values.externallyManagedConfigmap }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.configMapName | quote }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.json: |-
{{ toJson .Values.csiConfig | indent 4 -}}
{{- end }}

{{- if .Values.rbac.create -}}
{{- if .Values.topology.enabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
{{- end }}
{{- end -}}

{{- if .Values.rbac.create -}}
{{- if .Values.topology.enabled }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-cephfs.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end -}}

kind: DaemonSet
apiVersion: apps/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
selector:
matchLabels:
app: {{ include "ceph-csi-cephfs.name" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
updateStrategy:
type: {{ .Values.nodeplugin.updateStrategy }}
template:
metadata:
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
serviceAccountName: {{ include "ceph-csi-cephfs.serviceAccountName.nodeplugin" . }}
{{- if .Values.nodeplugin.priorityClassName }}
priorityClassName: {{ .Values.nodeplugin.priorityClassName }}
{{- end }}
hostNetwork: true
# to use e.g. Rook orchestrated cluster, and mons' FQDN is
# resolved through k8s service, set dns policy to cluster first
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: driver-registrar
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
securityContext:
privileged: true
image: "{{ .Values.nodeplugin.registrar.image.repository }}:{{ .Values.nodeplugin.registrar.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.registrar.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=/csi/{{ .Values.pluginSocketFile }}"
- "--kubelet-registration-path={{ .Values.kubeletDir }}/plugins/{{ .Values.driverName }}/{{ .Values.pluginSocketFile }}"
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
{{ toYaml .Values.nodeplugin.registrar.resources | indent 12 }}
- name: csi-cephfsplugin
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--nodeid=$(NODE_ID)"
- "--type=cephfs"
- "--nodeserver=true"
- "--pidlimit=-1"
{{- if .Values.nodeplugin.forcecephkernelclient }}
- "--forcecephkernelclient={{ .Values.nodeplugin.forcecephkernelclient }}"
{{- end }}
- "--endpoint=$(CSI_ENDPOINT)"
- "--v={{ .Values.logLevel }}"
- "--drivername=$(DRIVER_NAME)"
{{- if .Values.topology.enabled }}
- "--domainlabels={{ .Values.topology.domainLabels | join "," }}"
{{- end }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DRIVER_NAME
value: {{ .Values.driverName }}
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.pluginSocketFile }}"
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: {{ .Values.kubeletDir }}/pods
mountPropagation: Bidirectional
- name: plugin-dir
mountPath: {{ .Values.kubeletDir }}/plugins
mountPropagation: "Bidirectional"
- mountPath: /dev
name: host-dev
- mountPath: /run/mount
name: host-mount
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- if .Values.nodeplugin.httpMetrics.enabled }}
- name: liveness-prometheus
securityContext:
privileged: true
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport={{ .Values.nodeplugin.httpMetrics.containerPort }}"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.pluginSocketFile }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- end }}
volumes:
- name: socket-dir
hostPath:
path: "{{ .Values.kubeletDir }}/plugins/{{ .Values.driverName }}"
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins_registry
type: Directory
- name: mountpoint-dir
hostPath:
path: {{ .Values.kubeletDir }}/pods
type: DirectoryOrCreate
- name: plugin-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins
type: Directory
- name: host-sys
hostPath:
path: /sys
- name: host-mount
hostPath:
path: /run/mount
- name: lib-modules
hostPath:
path: /lib/modules
- name: host-dev
hostPath:
path: /dev
- name: ceph-csi-config
configMap:
name: {{ .Values.configMapName | quote }}
{{- if .Values.configMapKey }}
items:
- key: {{ .Values.configMapKey | quote }}
path: config.json
{{- end }}
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
{{- if .Values.nodeplugin.affinity }}
affinity:
{{ toYaml .Values.nodeplugin.affinity | indent 8 -}}
{{- end -}}
{{- if .Values.nodeplugin.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeplugin.nodeSelector | indent 8 -}}
{{- end -}}
{{- if .Values.nodeplugin.tolerations }}
tolerations:
{{ toYaml .Values.nodeplugin.tolerations | indent 8 -}}
{{- end -}}

{{- if .Values.nodeplugin.httpMetrics.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.nodeplugin.httpMetrics.service.annotations }}
annotations:
{{ toYaml .Values.nodeplugin.httpMetrics.service.annotations | indent 4 }}
{{- end }}
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}-http-metrics
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.fullname" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.nodeplugin.httpMetrics.service.clusterIP }}
clusterIP: "{{ .Values.nodeplugin.httpMetrics.service.clusterIP }}"
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.externalIPs }}
externalIPs:
{{ toYaml .Values.nodeplugin.httpMetrics.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.nodeplugin.httpMetrics.service.loadBalancerIP }}"
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.nodeplugin.httpMetrics.service.loadBalancerSourceRanges | indent 4 }}
{{- end }}
ports:
- name: http-metrics
port: {{ .Values.nodeplugin.httpMetrics.service.servicePort }}
targetPort: {{ .Values.nodeplugin.httpMetrics.containerPort }}
selector:
app: {{ include "ceph-csi-cephfs.name" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
type: "{{ .Values.nodeplugin.httpMetrics.service.type }}"
{{- end -}}

{{- if .Values.nodeplugin.podSecurityPolicy.enabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.fullname" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
allowPrivilegeEscalation: true
allowedCapabilities:
- 'SYS_ADMIN'
fsGroup:
rule: RunAsAny
privileged: true
hostNetwork: true
hostPID: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'hostPath'
allowedHostPaths:
- pathPrefix: '/dev'
readOnly: false
- pathPrefix: '/run/mount'
readOnly: false
- pathPrefix: '/sys'
readOnly: false
- pathPrefix: '/lib/modules'
readOnly: true
- pathPrefix: '{{ .Values.kubeletDir }}'
readOnly: false
{{- end }}

{{- if and .Values.rbac.create .Values.nodeplugin.podSecurityPolicy.enabled -}}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.fullname" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['{{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}']
{{- end -}}

{{- if and .Values.rbac.create .Values.nodeplugin.podSecurityPolicy.enabled -}}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.fullname" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-cephfs.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

{{- if .Values.rbac.create -}}
{{- if .Values.topology.enabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}-rules
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rbac.cephfs.csi.ceph.com/aggregate-to-{{ include "ceph-csi-cephfs.nodeplugin.fullname" . }}: "true"
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
{{- end }}
{{- end -}}

View File

@ -0,0 +1,13 @@
{{- if .Values.serviceAccounts.nodeplugin.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "ceph-csi-cephfs.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}

View File

@ -0,0 +1,63 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete","patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
{{- if .Values.provisioner.attacher.enabled }}
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
{{- end -}}
{{- if semverCompare ">=1.15" .Capabilities.KubeVersion.GitVersion -}}
{{- if .Values.provisioner.resizer.enabled }}
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
{{- end -}}
{{- end -}}
{{- if .Values.topology.enabled }}
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
{{- end }}
{{- end -}}

View File

@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,234 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.provisioner.replicaCount }}
selector:
matchLabels:
app: {{ include "ceph-csi-cephfs.name" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if gt (int .Values.provisioner.replicaCount) 1 }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ include "ceph-csi-cephfs.name" . }}
- key: component
operator: In
values:
- {{ .Values.provisioner.name }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
serviceAccountName: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
{{- if .Values.provisioner.priorityClassName }}
priorityClassName: {{ .Values.provisioner.priorityClassName }}
{{- end }}
containers:
- name: csi-provisioner
image: "{{ .Values.provisioner.provisioner.image.repository }}:{{ .Values.provisioner.provisioner.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.provisioner.image.pullPolicy }}
args:
- "--csi-address=$(ADDRESS)"
- "--v={{ .Values.logLevel }}"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election=true"
- "--retry-interval-start=500ms"
- "--extra-create-metadata=true"
{{- if .Values.topology.enabled }}
- "--feature-gates=Topology=true"
{{- end }}
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.provisioner.resources | indent 12 }}
- name: csi-snapshotter
image: "{{ .Values.provisioner.snapshotter.image.repository }}:{{ .Values.provisioner.snapshotter.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.snapshotter.image.pullPolicy }}
args:
- "--csi-address=$(ADDRESS)"
- "--v={{ .Values.logLevel }}"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election=true"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
securityContext:
privileged: true
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.snapshotter.resources | indent 12 }}
{{- if .Values.provisioner.attacher.enabled }}
- name: csi-attacher
image: "{{ .Values.provisioner.attacher.image.repository }}:{{ .Values.provisioner.attacher.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.attacher.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=$(ADDRESS)"
- "--leader-election=true"
- "--retry-interval-start=500ms"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.attacher.resources | indent 12 }}
{{- end }}
{{- if semverCompare ">=1.15" .Capabilities.KubeVersion.GitVersion -}}
{{- if .Values.provisioner.resizer.enabled }}
- name: csi-resizer
image: "{{ .Values.provisioner.resizer.image.repository }}:{{ .Values.provisioner.resizer.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.resizer.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=$(ADDRESS)"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election"
- "--retry-interval-start=500ms"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.resizer.resources | indent 12 }}
{{- end }}
{{- end }}
- name: csi-cephfsplugin
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--nodeid=$(NODE_ID)"
- "--type=cephfs"
- "--controllerserver=true"
- "--pidlimit=-1"
- "--endpoint=$(CSI_ENDPOINT)"
- "--v={{ .Values.logLevel }}"
- "--drivername=$(DRIVER_NAME)"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DRIVER_NAME
value: {{ .Values.driverName }}
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: host-sys
mountPath: /sys
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: host-dev
mountPath: /dev
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- if .Values.provisioner.httpMetrics.enabled }}
- name: liveness-prometheus
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport={{ .Values.provisioner.httpMetrics.containerPort }}"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- end }}
volumes:
- name: socket-dir
emptyDir: {
medium: "Memory"
}
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: host-dev
hostPath:
path: /dev
- name: ceph-csi-config
configMap:
name: {{ .Values.configMapName | quote }}
{{- if .Values.configMapKey }}
items:
- key: {{ .Values.configMapKey | quote }}
path: config.json
{{- end }}
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
{{- if .Values.provisioner.affinity }}
affinity:
{{ toYaml .Values.provisioner.affinity | indent 8 -}}
{{- end -}}
{{- if .Values.provisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.provisioner.nodeSelector | indent 8 -}}
{{- end -}}
{{- if .Values.provisioner.tolerations }}
tolerations:
{{ toYaml .Values.provisioner.tolerations | indent 8 -}}
{{- end -}}

View File

@ -0,0 +1,41 @@
{{- if .Values.provisioner.httpMetrics.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.provisioner.httpMetrics.service.annotations }}
annotations:
{{ toYaml .Values.provisioner.httpMetrics.service.annotations | indent 4 }}
{{- end }}
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}-http-metrics
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.fullname" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.provisioner.httpMetrics.service.clusterIP }}
clusterIP: "{{ .Values.provisioner.httpMetrics.service.clusterIP }}"
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.externalIPs }}
externalIPs:
{{ toYaml .Values.provisioner.httpMetrics.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.provisioner.httpMetrics.service.loadBalancerIP }}"
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.provisioner.httpMetrics.service.loadBalancerSourceRanges | indent 4 }}
{{- end }}
ports:
- name: http-metrics
port: {{ .Values.provisioner.httpMetrics.service.servicePort }}
targetPort: {{ .Values.provisioner.httpMetrics.containerPort }}
selector:
app: {{ include "ceph-csi-cephfs.name" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
type: "{{ .Values.provisioner.httpMetrics.service.type }}"
{{- end -}}

View File

@ -0,0 +1,39 @@
{{- if .Values.provisioner.podSecurityPolicy.enabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
allowPrivilegeEscalation: true
allowedCapabilities:
- 'SYS_ADMIN'
fsGroup:
rule: RunAsAny
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'hostPath'
allowedHostPaths:
- pathPrefix: '/dev'
readOnly: false
- pathPrefix: '/sys'
readOnly: false
- pathPrefix: '/lib/modules'
readOnly: true
{{- end }}

View File

@ -0,0 +1,26 @@
{{- if .Values.rbac.create -}}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
{{- if .Values.provisioner.podSecurityPolicy.enabled }}
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['{{ include "ceph-csi-cephfs.provisioner.fullname" . }}']
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,21 @@
{{- if .Values.rbac.create -}}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,61 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-cephfs.provisioner.fullname" . }}-rules
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rbac.cephfs.csi.ceph.com/aggregate-to-{{ include "ceph-csi-cephfs.provisioner.fullname" . }}: "true"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete","patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
{{- if .Values.provisioner.attacher.enabled }}
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
{{- end -}}
{{- if semverCompare ">=1.15" .Capabilities.KubeVersion.GitVersion -}}
{{- if .Values.provisioner.resizer.enabled }}
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
{{- end -}}
{{- end -}}
{{- if .Values.topology.enabled }}
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
{{- end }}
{{- end -}}

View File

@ -0,0 +1,13 @@
{{- if .Values.serviceAccounts.provisioner.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-cephfs.name" . }}
chart: {{ include "ceph-csi-cephfs.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}

View File

@ -0,0 +1,219 @@
---
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccounts:
nodeplugin:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
provisioner:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
# Configuration for the CSI to connect to the cluster
# Ref: https://github.com/ceph/ceph-csi/blob/devel/examples/README.md
# Example:
# csiConfig:
# - clusterID: "<cluster-id>"
# monitors:
# - "<MONValue1>"
# - "<MONValue2>"
# cephFS:
# subvolumeGroup: "csi"
csiConfig: []
# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs,
# 5 for trace level verbosity.
logLevel: 5
nodeplugin:
name: nodeplugin
# If you are using the ceph-fuse client, set this value to OnDelete
updateStrategy: RollingUpdate
# Set a user-created priorityClassName for the CSI plugin pods. The default is
# system-node-critical, which is the highest priority.
priorityClassName: system-node-critical
httpMetrics:
# Metrics only available for cephcsi/cephcsi >= 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8081
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "9080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
registrar:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar
tag: v2.0.1
pullPolicy: IfNotPresent
resources: {}
plugin:
image:
repository: quay.io/cephcsi/cephcsi
tag: v3.3.1
pullPolicy: IfNotPresent
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# Set to true to enable Ceph kernel clients
# on kernels < 4.17, which support quotas
# forcecephkernelclient: true
# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
enabled: false
provisioner:
name: provisioner
replicaCount: 3
# Timeout for waiting for creation or deletion of a volume
timeout: 60s
# Set a user-created priorityClassName for the CSI provisioner pods. The default is
# system-cluster-critical, which has lower priority than system-node-critical.
priorityClassName: system-cluster-critical
httpMetrics:
# Metrics only available for cephcsi/cephcsi >= 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8081
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "9080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
provisioner:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner
tag: v2.0.4
pullPolicy: IfNotPresent
resources: {}
attacher:
name: attacher
enabled: true
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher
tag: v3.0.2
pullPolicy: IfNotPresent
resources: {}
resizer:
name: resizer
enabled: true
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer
tag: v1.0.1
pullPolicy: IfNotPresent
resources: {}
snapshotter:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter
tag: v3.0.2
pullPolicy: IfNotPresent
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
enabled: false
topology:
# Specifies whether topology based provisioning support should
# be exposed by CSI
enabled: false
# domainLabels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
domainLabels:
- failure-domain/region
- failure-domain/zone
#########################################################
# Variables for 'internal' use please use with caution! #
#########################################################
# The filename of the provisioner socket
provisionerSocketFile: csi-provisioner.sock
# The filename of the plugin socket
pluginSocketFile: csi.sock
# kubelet working directory, can be set using `--root-dir` when starting kubelet.
kubeletDir: /var/lib/kubelet
# Name of the csi-driver
driverName: cephfs.csi.ceph.com
# Name of the configmap used for state
configMapName: ceph-csi-config
# Key to use in the Configmap if not config.json
# configMapKey:
# Use an externally provided configmap
externallyManagedConfigmap: false
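# Usage sketch (illustrative only): these values are normally overridden at
# install time with -f/--set rather than edited in place. The namespace,
# release name, cluster ID and monitor addresses below are placeholders,
# not defaults shipped by the chart.
#
# Example override file (my-values.yaml):
# csiConfig:
#   - clusterID: "<cluster-id>"
#     monitors:
#       - "<mon1-ip>:6789"
#       - "<mon2-ip>:6789"
#     cephFS:
#       subvolumeGroup: "csi"
#
# Example install command (run from the charts directory):
# helm install --namespace ceph-csi-cephfs ceph-csi-cephfs ./ceph-csi-cephfs -f my-values.yaml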

View File

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@ -0,0 +1,14 @@
apiVersion: v1
appVersion: v3.3.1
description: Container Storage Interface (CSI) driver, provisioner, snapshotter, and
attacher for Ceph RBD
home: https://github.com/ceph/ceph-csi
icon: https://raw.githubusercontent.com/ceph/ceph-csi/v3.3.1/assets/ceph-logo.png
keywords:
- ceph
- rbd
- ceph-csi
name: ceph-csi-rbd
sources:
- https://github.com/ceph/ceph-csi/tree/v3.3.1/charts/ceph-csi-rbd
version: 3.3.1

View File

@ -0,0 +1,73 @@
# ceph-csi-rbd
The ceph-csi-rbd chart adds rbd volume support to your cluster.
## Install from release repo
Add chart repository to install helm charts from it
```console
helm repo add ceph-csi https://ceph.github.io/csi-charts
```
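After adding the repository, refresh the local chart index so the latest chart versions are visible (standard Helm usage):
```console
helm repo update
```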
## Install from local Chart
Change into the directory where all the charts are present:
```console
cd charts
```
**Note:** the charts directory is in the root of the ceph-csi project
### Install chart
To install the Chart into your Kubernetes cluster
- For helm 2.x
```bash
helm install --namespace "ceph-csi-rbd" --name "ceph-csi-rbd" ceph-csi/ceph-csi-rbd
```
- For helm 3.x
Create the namespace where Helm should install the components with
```bash
kubectl create namespace "ceph-csi-rbd"
```
Run the installation
```bash
helm install --namespace "ceph-csi-rbd" "ceph-csi-rbd" ceph-csi/ceph-csi-rbd
```
After installation succeeds, you can check the status of the Chart
```bash
helm status "ceph-csi-rbd"
```
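As an extra sanity check (assuming the chart was installed into the `ceph-csi-rbd` namespace as above), you can confirm that the provisioner Deployment and nodeplugin DaemonSet pods are running:
```bash
kubectl get pods -n ceph-csi-rbd
```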
### Delete Chart
If you want to delete your Chart, use this command
- For helm 2.x
```bash
helm delete --purge "ceph-csi-rbd"
```
- For helm 3.x
```bash
helm uninstall "ceph-csi-rbd" --namespace "ceph-csi-rbd"
```
If you want to delete the namespace, use this command
```bash
kubectl delete namespace ceph-csi-rbd
```

View File

@ -0,0 +1,2 @@
Examples on how to configure a storage class and start using the driver are here:
https://github.com/ceph/ceph-csi/tree/v3.3.1/examples/rbd
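For orientation, a StorageClass for this driver typically looks like the sketch below. This is illustrative only: it assumes the default driverName rbd.csi.ceph.com, and the clusterID, pool, and secret names/namespaces are placeholders that must match your csiConfig and the Ceph secrets you created.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  # Placeholder: must match a clusterID from csiConfig
  clusterID: <cluster-id>
  # Placeholder: an existing RBD pool in the Ceph cluster
  pool: <rbd-pool>
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true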

View File

@ -0,0 +1,90 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "ceph-csi-rbd.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-rbd.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-rbd.nodeplugin.fullname" -}}
{{- if .Values.nodeplugin.fullnameOverride -}}
{{- .Values.nodeplugin.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s-%s" .Release.Name .Values.nodeplugin.name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name $name .Values.nodeplugin.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ceph-csi-rbd.provisioner.fullname" -}}
{{- if .Values.provisioner.fullnameOverride -}}
{{- .Values.provisioner.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- printf "%s-%s" .Release.Name .Values.provisioner.name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s-%s" .Release.Name $name .Values.provisioner.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "ceph-csi-rbd.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "ceph-csi-rbd.serviceAccountName.nodeplugin" -}}
{{- if .Values.serviceAccounts.nodeplugin.create -}}
{{ default (include "ceph-csi-rbd.nodeplugin.fullname" .) .Values.serviceAccounts.nodeplugin.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.nodeplugin.name }}
{{- end -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "ceph-csi-rbd.serviceAccountName.provisioner" -}}
{{- if .Values.serviceAccounts.provisioner.create -}}
{{ default (include "ceph-csi-rbd.provisioner.fullname" .) .Values.serviceAccounts.provisioner.name }}
{{- else -}}
{{ default "default" .Values.serviceAccounts.provisioner.name }}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,11 @@
{{ if semverCompare ">=1.18" .Capabilities.KubeVersion.GitVersion }}
apiVersion: storage.k8s.io/v1
{{ else }}
apiVersion: storage.k8s.io/v1beta1
{{ end }}
kind: CSIDriver
metadata:
name: {{ .Values.driverName }}
spec:
attachRequired: true
podInfoOnMount: false

View File

@ -0,0 +1,16 @@
{{- if not .Values.externallyManagedConfigmap }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.configMapName | quote }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.json: |-
{{ toJson .Values.csiConfig | indent 4 -}}
{{- end }}

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.kmsConfigMapName | quote }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
data:
config.json: |-
{{ toJson .Values.encryptionKMSConfig | indent 4 -}}

View File

@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
{{- if .Values.topology.enabled }}
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
{{- end }}
# Allow reading the Vault token and connection options from the tenant's namespace
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
{{- end -}}

View File

@ -0,0 +1,22 @@
{{- if .Values.rbac.create -}}
{{- if .Values.topology.enabled }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-rbd.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end -}}

View File

@ -0,0 +1,202 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
selector:
matchLabels:
app: {{ include "ceph-csi-rbd.name" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
updateStrategy:
type: {{ .Values.nodeplugin.updateStrategy }}
template:
metadata:
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
serviceAccountName: {{ include "ceph-csi-rbd.serviceAccountName.nodeplugin" . }}
hostNetwork: true
hostPID: true
{{- if .Values.nodeplugin.priorityClassName }}
priorityClassName: {{ .Values.nodeplugin.priorityClassName }}
{{- end }}
# To use e.g. a Rook-orchestrated cluster, where the mons' FQDNs are
# resolved through a k8s service, set the DNS policy to ClusterFirstWithHostNet
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: driver-registrar
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
securityContext:
privileged: true
image: "{{ .Values.nodeplugin.registrar.image.repository }}:{{ .Values.nodeplugin.registrar.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.registrar.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=/csi/{{ .Values.pluginSocketFile }}"
- "--kubelet-registration-path={{ .Values.kubeletDir }}/plugins/{{ .Values.driverName }}/{{ .Values.pluginSocketFile }}"
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
{{ toYaml .Values.nodeplugin.registrar.resources | indent 12 }}
- name: csi-rbdplugin
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--nodeid=$(NODE_ID)"
- "--type=rbd"
- "--nodeserver=true"
- "--pidlimit=-1"
- "--endpoint=$(CSI_ENDPOINT)"
- "--v={{ .Values.logLevel }}"
- "--drivername=$(DRIVER_NAME)"
{{- if .Values.topology.enabled }}
- "--domainlabels={{ .Values.topology.domainLabels | join "," }}"
{{- end }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DRIVER_NAME
value: {{ .Values.driverName }}
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.pluginSocketFile }}"
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /run/mount
name: host-mount
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: plugin-dir
mountPath: {{ .Values.kubeletDir }}/plugins
mountPropagation: "Bidirectional"
- name: mountpoint-dir
mountPath: {{ .Values.kubeletDir }}/pods
mountPropagation: "Bidirectional"
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- if .Values.nodeplugin.httpMetrics.enabled }}
- name: liveness-prometheus
securityContext:
privileged: true
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport={{ .Values.nodeplugin.httpMetrics.containerPort }}"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.pluginSocketFile }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- end }}
volumes:
- name: socket-dir
hostPath:
path: "{{ .Values.kubeletDir }}/plugins/{{ .Values.driverName }}"
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins_registry
type: Directory
- name: plugin-dir
hostPath:
path: {{ .Values.kubeletDir }}/plugins
type: Directory
- name: mountpoint-dir
hostPath:
path: {{ .Values.kubeletDir }}/pods
type: DirectoryOrCreate
- name: host-dev
hostPath:
path: /dev
- name: host-mount
hostPath:
path: /run/mount
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: ceph-csi-config
configMap:
name: {{ .Values.configMapName | quote }}
{{- if .Values.configMapKey }}
items:
- key: {{ .Values.configMapKey | quote }}
path: config.json
{{- end }}
- name: ceph-csi-encryption-kms-config
configMap:
name: {{ .Values.kmsConfigMapName | quote }}
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
{{- if .Values.nodeplugin.affinity }}
affinity:
{{ toYaml .Values.nodeplugin.affinity | indent 8 -}}
{{- end -}}
{{- if .Values.nodeplugin.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeplugin.nodeSelector | indent 8 -}}
{{- end -}}
{{- if .Values.nodeplugin.tolerations }}
tolerations:
{{ toYaml .Values.nodeplugin.tolerations | indent 8 -}}
{{- end -}}

View File

@ -0,0 +1,41 @@
{{- if .Values.nodeplugin.httpMetrics.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.nodeplugin.httpMetrics.service.annotations }}
annotations:
{{ toYaml .Values.nodeplugin.httpMetrics.service.annotations | indent 4 }}
{{- end }}
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}-http-metrics
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.fullname" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.nodeplugin.httpMetrics.service.clusterIP }}
clusterIP: "{{ .Values.nodeplugin.httpMetrics.service.clusterIP }}"
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.externalIPs }}
externalIPs:
{{ toYaml .Values.nodeplugin.httpMetrics.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.nodeplugin.httpMetrics.service.loadBalancerIP }}"
{{- end }}
{{- if .Values.nodeplugin.httpMetrics.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.nodeplugin.httpMetrics.service.loadBalancerSourceRanges | indent 4 }}
{{- end }}
ports:
- name: http-metrics
port: {{ .Values.nodeplugin.httpMetrics.service.servicePort }}
targetPort: {{ .Values.nodeplugin.httpMetrics.containerPort }}
selector:
app: {{ include "ceph-csi-rbd.name" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
type: "{{ .Values.nodeplugin.httpMetrics.service.type }}"
{{- end -}}

View File

@ -0,0 +1,45 @@
{{- if .Values.nodeplugin.podSecurityPolicy.enabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
allowPrivilegeEscalation: true
allowedCapabilities:
- 'SYS_ADMIN'
fsGroup:
rule: RunAsAny
privileged: true
hostNetwork: true
hostPID: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'hostPath'
allowedHostPaths:
- pathPrefix: '/dev'
readOnly: false
- pathPrefix: '/run/mount'
readOnly: false
- pathPrefix: '/sys'
readOnly: false
- pathPrefix: '/lib/modules'
readOnly: true
- pathPrefix: '{{ .Values.kubeletDir }}'
readOnly: false
{{- end }}

View File

@ -0,0 +1,18 @@
{{- if and .Values.rbac.create .Values.nodeplugin.podSecurityPolicy.enabled -}}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['{{ include "ceph-csi-rbd.nodeplugin.fullname" . }}']
{{- end -}}

View File

@ -0,0 +1,21 @@
{{- if and .Values.rbac.create .Values.nodeplugin.podSecurityPolicy.enabled -}}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-rbd.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,19 @@
{{- if .Values.rbac.create -}}
{{- if .Values.topology.enabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.nodeplugin.fullname" . }}-rules
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rbac.rbd.csi.ceph.com/aggregate-to-{{ include "ceph-csi-rbd.nodeplugin.fullname" . }}: "true"
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]
{{- end }}
{{- end -}}

View File

@ -0,0 +1,13 @@
{{- if .Values.serviceAccounts.nodeplugin.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "ceph-csi-rbd.serviceAccountName.nodeplugin" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}

View File

@ -0,0 +1,68 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "create", "update"]
{{- if .Values.provisioner.attacher.enabled }}
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
{{- end }}
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
{{- if .Values.provisioner.resizer.enabled }}
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
{{- end }}
{{- if .Values.topology.enabled }}
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
{{- end }}
{{- end -}}

View File

@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,274 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
replicas: {{ .Values.provisioner.replicaCount }}
selector:
matchLabels:
app: {{ include "ceph-csi-rbd.name" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if gt (int .Values.provisioner.replicaCount) 1 }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- {{ include "ceph-csi-rbd.name" . }}
- key: component
operator: In
values:
- {{ .Values.provisioner.name }}
topologyKey: "kubernetes.io/hostname"
{{- end }}
serviceAccountName: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
{{- if .Values.provisioner.priorityClassName }}
priorityClassName: {{ .Values.provisioner.priorityClassName }}
{{- end }}
containers:
- name: csi-provisioner
image: "{{ .Values.provisioner.provisioner.image.repository }}:{{ .Values.provisioner.provisioner.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.provisioner.image.pullPolicy }}
args:
- "--csi-address=$(ADDRESS)"
- "--v={{ .Values.logLevel }}"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election=true"
- "--retry-interval-start=500ms"
- "--default-fstype={{ .Values.provisioner.defaultFSType }}"
- "--extra-create-metadata=true"
{{- if .Values.topology.enabled }}
- "--feature-gates=Topology=true"
{{- end }}
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.provisioner.resources | indent 12 }}
{{- if .Values.provisioner.resizer.enabled }}
- name: csi-resizer
image: "{{ .Values.provisioner.resizer.image.repository }}:{{ .Values.provisioner.resizer.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.resizer.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=$(ADDRESS)"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election"
- "--retry-interval-start=500ms"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.resizer.resources | indent 12 }}
{{- end }}
- name: csi-snapshotter
image: "{{ .Values.provisioner.snapshotter.image.repository }}:{{ .Values.provisioner.snapshotter.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.snapshotter.image.pullPolicy }}
args:
- "--csi-address=$(ADDRESS)"
- "--v={{ .Values.logLevel }}"
- "--timeout={{ .Values.provisioner.timeout }}"
- "--leader-election=true"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
securityContext:
privileged: true
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.snapshotter.resources | indent 12 }}
{{- if .Values.provisioner.attacher.enabled }}
- name: csi-attacher
image: "{{ .Values.provisioner.attacher.image.repository }}:{{ .Values.provisioner.attacher.image.tag }}"
imagePullPolicy: {{ .Values.provisioner.attacher.image.pullPolicy }}
args:
- "--v={{ .Values.logLevel }}"
- "--csi-address=$(ADDRESS)"
- "--leader-election=true"
- "--retry-interval-start=500ms"
env:
- name: ADDRESS
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.provisioner.attacher.resources | indent 12 }}
{{- end }}
- name: csi-rbdplugin
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--nodeid=$(NODE_ID)"
- "--type=rbd"
- "--controllerserver=true"
- "--pidlimit=-1"
- "--endpoint=$(CSI_ENDPOINT)"
- "--v={{ .Values.logLevel }}"
- "--drivername=$(DRIVER_NAME)"
- "--rbdhardmaxclonedepth={{ .Values.provisioner.hardMaxCloneDepth }}"
- "--rbdsoftmaxclonedepth={{ .Values.provisioner.softMaxCloneDepth }}"
- "--maxsnapshotsonimage={{ .Values.provisioner.maxSnapshotsOnImage }}"
- "--minsnapshotsonimage={{ .Values.provisioner.minSnapshotsOnImage }}"
{{- if .Values.provisioner.skipForceFlatten }}
- "--skipforceflatten={{ .Values.provisioner.skipForceFlatten }}"
{{- end }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: DRIVER_NAME
value: {{ .Values.driverName }}
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- if .Values.provisioner.deployController }}
- name: csi-rbdplugin-controller
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--type=controller"
- "--v={{ .Values.logLevel }}"
- "--drivername=$(DRIVER_NAME)"
- "--drivernamespace=$(DRIVER_NAMESPACE)"
env:
- name: DRIVER_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: DRIVER_NAME
value: {{ .Values.driverName }}
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
volumeMounts:
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- end }}
{{- if .Values.provisioner.httpMetrics.enabled }}
- name: liveness-prometheus
image: "{{ .Values.nodeplugin.plugin.image.repository }}:{{ .Values.nodeplugin.plugin.image.tag }}"
imagePullPolicy: {{ .Values.nodeplugin.plugin.image.pullPolicy }}
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport={{ .Values.provisioner.httpMetrics.containerPort }}"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: "unix:///csi/{{ .Values.provisionerSocketFile }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
resources:
{{ toYaml .Values.nodeplugin.plugin.resources | indent 12 }}
{{- end }}
volumes:
- name: socket-dir
emptyDir: {
medium: "Memory"
}
- name: host-dev
hostPath:
path: /dev
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: ceph-csi-config
configMap:
name: {{ .Values.configMapName | quote }}
{{- if .Values.configMapKey }}
items:
- key: {{ .Values.configMapKey | quote }}
path: config.json
{{- end }}
- name: ceph-csi-encryption-kms-config
configMap:
name: {{ .Values.kmsConfigMapName | quote }}
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
{{- if .Values.provisioner.affinity }}
affinity:
{{ toYaml .Values.provisioner.affinity | indent 8 -}}
{{- end -}}
{{- if .Values.provisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.provisioner.nodeSelector | indent 8 -}}
{{- end -}}
{{- if .Values.provisioner.tolerations }}
tolerations:
{{ toYaml .Values.provisioner.tolerations | indent 8 -}}
{{- end -}}

View File

@ -0,0 +1,41 @@
{{- if .Values.provisioner.httpMetrics.service.enabled -}}
apiVersion: v1
kind: Service
metadata:
{{- if .Values.provisioner.httpMetrics.service.annotations }}
annotations:
{{ toYaml .Values.provisioner.httpMetrics.service.annotations | indent 4 }}
{{- end }}
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}-http-metrics
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.fullname" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
{{- if .Values.provisioner.httpMetrics.service.clusterIP }}
clusterIP: "{{ .Values.provisioner.httpMetrics.service.clusterIP }}"
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.externalIPs }}
externalIPs:
{{ toYaml .Values.provisioner.httpMetrics.service.externalIPs | indent 4 }}
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.loadBalancerIP }}
loadBalancerIP: "{{ .Values.provisioner.httpMetrics.service.loadBalancerIP }}"
{{- end }}
{{- if .Values.provisioner.httpMetrics.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml .Values.provisioner.httpMetrics.service.loadBalancerSourceRanges | indent 4 }}
{{- end }}
ports:
- name: http-metrics
port: {{ .Values.provisioner.httpMetrics.service.servicePort }}
targetPort: {{ .Values.provisioner.httpMetrics.containerPort }}
selector:
app: {{ include "ceph-csi-rbd.name" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
type: "{{ .Values.provisioner.httpMetrics.service.type }}"
{{- end -}}

View File

@ -0,0 +1,39 @@
{{- if .Values.provisioner.podSecurityPolicy.enabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
allowPrivilegeEscalation: true
allowedCapabilities:
- 'SYS_ADMIN'
fsGroup:
rule: RunAsAny
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'hostPath'
allowedHostPaths:
- pathPrefix: '/dev'
readOnly: false
- pathPrefix: '/sys'
readOnly: false
- pathPrefix: '/lib/modules'
readOnly: true
{{- end }}

View File

@ -0,0 +1,26 @@
{{- if .Values.rbac.create -}}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create","update", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
{{- if .Values.provisioner.podSecurityPolicy.enabled }}
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['{{ include "ceph-csi-rbd.provisioner.fullname" . }}']
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,21 @@
{{- if .Values.rbac.create -}}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
subjects:
- kind: ServiceAccount
name: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: Role
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,62 @@
{{- if .Values.rbac.create -}}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "ceph-csi-rbd.provisioner.fullname" . }}-rules
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rbac.rbd.csi.ceph.com/aggregate-to-{{ include "ceph-csi-rbd.provisioner.fullname" . }}: "true"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "create", "update"]
{{- if .Values.provisioner.attacher.enabled }}
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
{{- end }}
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
{{- if .Values.provisioner.resizer.enabled }}
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
{{- end }}
{{- if .Values.topology.enabled }}
- apiGroups: [""]
resources: ["nodes"]
    verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
{{- end }}
{{- end -}}

{{- if .Values.serviceAccounts.provisioner.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ include "ceph-csi-rbd.name" . }}
chart: {{ include "ceph-csi-rbd.chart" . }}
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- end -}}

---
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccounts:
nodeplugin:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
provisioner:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
# Configuration for the CSI to connect to the cluster
# Ref: https://github.com/ceph/ceph-csi/blob/devel/examples/README.md
# Example:
# csiConfig:
# - clusterID: "<cluster-id>"
# monitors:
# - "<MONValue1>"
# - "<MONValue2>"
csiConfig: []
# Configuration for the encryption KMS
# Ref: https://github.com/ceph/ceph-csi/blob/devel/docs/deploy-rbd.md
# Example:
# encryptionKMSConfig:
# vault-unique-id-1:
# encryptionKMSType: vault
# vaultAddress: https://vault.example.com
# vaultAuthPath: /v1/auth/kubernetes/login
# vaultRole: csi-kubernetes
# vaultPassphraseRoot: /v1/secret
# vaultPassphrasePath: ceph-csi/
# vaultCAVerify: "false"
encryptionKMSConfig: {}
# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs,
# 5 for trace level verbosity.
logLevel: 5
nodeplugin:
name: nodeplugin
# set user created priorityclassName for csi plugin pods. default is
# system-node-critical which is high priority
priorityClassName: system-node-critical
# if you are using rbd-nbd client set this value to OnDelete
updateStrategy: RollingUpdate
httpMetrics:
# Metrics only available for cephcsi/cephcsi => 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8080
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "8080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
registrar:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar
tag: v2.0.1
pullPolicy: IfNotPresent
resources: {}
plugin:
image:
repository: quay.io/cephcsi/cephcsi
tag: v3.3.1
pullPolicy: IfNotPresent
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
enabled: false
provisioner:
name: provisioner
replicaCount: 3
# if fstype is not specified in storageclass, ext4 is default
defaultFSType: ext4
# deployController to enable or disable the deployment of controller which
# generates the OMAP data if its not Present.
deployController: true
# Timeout for waiting for creation or deletion of a volume
timeout: 60s
# Hard limit for maximum number of nested volume clones that are taken before
# a flatten occurs
hardMaxCloneDepth: 8
# Soft limit for maximum number of nested volume clones that are taken before
# a flatten occurs
softMaxCloneDepth: 4
# Maximum number of snapshots allowed on rbd image without flattening
maxSnapshotsOnImage: 450
# Minimum number of snapshots allowed on rbd image to trigger flattening
minSnapshotsOnImage: 250
# skip image flattening if kernel support mapping of rbd images
# which has the deep-flatten feature
# skipForceFlatten: false
# set user created priorityclassName for csi provisioner pods. default is
# system-cluster-critical which is less priority than system-node-critical
priorityClassName: system-cluster-critical
httpMetrics:
# Metrics only available for cephcsi/cephcsi => 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8080
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "8080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
provisioner:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner
tag: v2.0.4
pullPolicy: IfNotPresent
resources: {}
attacher:
name: attacher
enabled: true
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher
tag: v3.0.2
pullPolicy: IfNotPresent
resources: {}
resizer:
name: resizer
enabled: true
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer
tag: v1.0.1
pullPolicy: IfNotPresent
resources: {}
snapshotter:
image:
repository: registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter
tag: v3.0.2
pullPolicy: IfNotPresent
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# If true, create & use Pod Security Policy resources
# https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
enabled: false
topology:
# Specifies whether topology based provisioning support should
# be exposed by CSI
enabled: false
# domainLabels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
# Copy from Dotbalo
domainLabels:
- failure-domain/region
- failure-domain/zone
#########################################################
# Variables for 'internal' use please use with caution! #
#########################################################
# The filename of the provisioner socket
provisionerSocketFile: csi-provisioner.sock
# The filename of the plugin socket
pluginSocketFile: csi.sock
# kubelet working directory, can be set using `--root-dir` when starting kubelet.
kubeletDir: /var/lib/kubelet
# Name of the csi-driver
driverName: rbd.csi.ceph.com
# Name of the configmap used for state
configMapName: ceph-csi-config
# Key to use in the Configmap if not config.json
# configMapKey:
# Use an externally provided configmap
externallyManagedConfigmap: false
# Name of the configmap used for encryption kms configuration
kmsConfigMapName: ceph-csi-encryption-kms-config
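
With these values saved as `values.yaml`, the chart can be installed or upgraded with Helm. A minimal sketch, assuming the chart directory and release name `ceph-csi-rbd` and a pre-created `ceph-csi` namespace (all three names are illustrative):
```
# Install or upgrade the RBD CSI driver with the customized values
helm upgrade --install ceph-csi-rbd ./ceph-csi-rbd -n ceph-csi -f values.yaml
```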

**docs/chap09/9.1.md**
**Using multiple init containers**
**myapp.yaml**
```
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
# 业务应用容器
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
# 初始化容器列表
initContainers:
# 第一个初始化容器等待当前Namespace下的myservice启动
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
# 第二个初始化容器等待DB的Service启动
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
```
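The Pod stays in the `Init` state until both lookups succeed, i.e. until Services named `myservice` and `mydb` exist in the same namespace. A quick sketch for creating them (the ports are illustrative; the DNS lookups only require the Services to exist):
```
kubectl create service clusterip myservice --tcp=80:80
kubectl create service clusterip mydb --tcp=3306:3306
```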

**docs/chap09/9.5.md**
**Affinity**
````
apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/e2e-az-name
operator: In
values:
- e2e-az1
- e2e-az2
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
containers:
- name: with-node-affinity
image: nginx
````
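The required term only matches nodes that carry the `kubernetes.io/e2e-az-name` label, so at least one node has to be labeled beforehand, for example (the node name is illustrative):
```
kubectl label node k8s-node01 kubernetes.io/e2e-az-name=e2e-az1
```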
**Pod affinity**
````
apiVersion: v1
kind: Pod
metadata:
name: with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S1
topologyKey: failure-domain.beta.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: failure-domain.beta.kubernetes.io/zone
containers:
- name: with-pod-affinity
image: nginx
````
**Example 1: Deploy replicas of the same application on different nodes**
````
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: must-be-diff-nodes
name: must-be-diff-nodes
namespace: kube-public
spec:
replicas: 3
selector:
matchLabels:
app: must-be-diff-nodes
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: must-be-diff-nodes
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
                - must-be-diff-nodes
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
image: nginx
imagePullPolicy: Always
name: must-be-diff-nodes
````
**Example 2: Pin different replicas of the same application to dedicated nodes**
````
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-cache
spec:
selector:
matchLabels:
app: store
replicas: 3
template:
metadata:
labels:
app: store
spec:
nodeSelector:
app: store
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- store
topologyKey: "kubernetes.io/hostname"
containers:
- name: redis-server
image: redis:3.2-alpine
````
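Because the template combines `nodeSelector` with pod anti-affinity, the candidate nodes must carry the `app=store` label before any replica can be scheduled (node names are illustrative):
```
kubectl label nodes k8s-node01 k8s-node02 k8s-node03 app=store
```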
**Example 3: Deploy the application and its cache in the same topology domain**
````
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-server
spec:
selector:
matchLabels:
app: web-store
replicas: 3
template:
metadata:
labels:
app: web-store
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- web-store
topologyKey: "kubernetes.io/hostname"
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- store
topologyKey: "kubernetes.io/hostname"
containers:
- name: web-app
image: nginx:1.16-alpine
````

**resourcequota.yaml**
````
apiVersion: v1
kind: ResourceQuota
metadata:
name: resource-test
labels:
app: resourcequota
spec:
hard:
pods: 50
requests.cpu: 0.5
requests.memory: 512Mi
limits.cpu: 5
limits.memory: 16Gi
configmaps: 20
requests.storage: 40Gi
persistentvolumeclaims: 20
replicationcontrollers: 20
secrets: 20
services: 50
services.loadbalancers: "2"
services.nodeports: "10"
````
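After the quota is applied, current usage against each hard limit can be inspected at any time; a sketch with an illustrative namespace:
```
kubectl apply -f resourcequota.yaml -n quota-demo
kubectl describe resourcequota resource-test -n quota-demo
```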
**quota-objects.yaml**
```
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-quota-demo
spec:
hard:
persistentvolumeclaims: "1"
```
**pvc.yaml**
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-quota-demo
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
```
**pvc2.yaml**
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-quota-demo2
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
```
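Because `object-quota-demo` allows only one PVC, the first claim is admitted while the second is rejected by the quota admission check; a sketch with an illustrative namespace:
```
kubectl apply -f pvc.yaml -n quota-object-demo    # admitted
kubectl apply -f pvc2.yaml -n quota-object-demo   # rejected for exceeding the quota
```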

**Example 1: Configure default requests and limits**
```
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-mem-limit-range
spec:
limits:
- default:
cpu: 1
memory: 512Mi
defaultRequest:
cpu: 0.5
memory: 256Mi
type: Container
---
apiVersion: v1
kind: Pod
metadata:
name: default-cpu-demo
spec:
containers:
- name: default-cpu-demo-ctr
image: nginx
```
**Example 2: Configure the allowed range of requests and limits**
```
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-min-max-demo-lr
spec:
limits:
- max:
cpu: "800m"
memory: 1Gi
min:
cpu: "200m"
memory: 500Mi
type: Container
---
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo-2
spec:
containers:
- name: constraints-mem-demo-2-ctr
image: nginx
resources:
limits:
memory: "1.5Gi"
requests:
memory: "800Mi"
```
```
apiVersion: v1
kind: Pod
metadata:
name: constraints-mem-demo-3
spec:
containers:
- name: constraints-mem-demo-3-ctr
image: nginx
resources:
limits:
memory: "800Mi"
requests:
memory: "100Mi"
```
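Both Pods above violate `cpu-min-max-demo-lr`: the first declares a 1.5Gi memory limit (above the 1Gi max) and the second a 100Mi memory request (below the 500Mi min), so admission rejects both. A sketch, assuming the manifests were saved under illustrative file names:
```
kubectl apply -f constraints-mem-demo-2.yaml   # rejected: limit above the max
kubectl apply -f constraints-mem-demo-3.yaml   # rejected: request below the min
```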
**Example 3: Limit the size of storage requests**
```
apiVersion: v1
kind: LimitRange
metadata:
name: storagelimits
spec:
limits:
- type: PersistentVolumeClaim
max:
storage: 2Gi
min:
storage: 1Gi
```

**Example 1: A Pod with Guaranteed QoS**
**qos-pod.yaml**
````
apiVersion: v1
kind: Pod
metadata:
name: qos-demo
namespace: qos-example
spec:
containers:
- name: qos-demo-ctr
image: nginx
resources:
limits:
memory: "200Mi"
cpu: "700m"
requests:
memory: "200Mi"
cpu: "700m"
````
**Example 2: A Pod with Burstable QoS**
**qos-pod-2.yaml**
```
apiVersion: v1
kind: Pod
metadata:
name: qos-demo-2
namespace: qos-example
spec:
containers:
- name: qos-demo-2-ctr
image: nginx
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
```
**Example 3: A Pod with BestEffort QoS**
**qos-pod-3.yaml**
```
apiVersion: v1
kind: Pod
metadata:
name: qos-demo-3
namespace: qos-example
spec:
containers:
- name: qos-demo-3-ctr
image: nginx
```
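The class Kubernetes assigned is recorded in each Pod's status and can be read back directly:
```
kubectl get pod qos-demo -n qos-example -o jsonpath='{.status.qosClass}'     # Guaranteed
kubectl get pod qos-demo-2 -n qos-example -o jsonpath='{.status.qosClass}'   # Burstable
kubectl get pod qos-demo-3 -n qos-example -o jsonpath='{.status.qosClass}'   # BestEffort
```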

**pod-exec-cr.yaml**
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-exec
rules:
- apiGroups:
- ""
resources:
- pods
- pods/log
verbs:
- get
- list
- apiGroups:
- ""
resources:
- pods/exec #之前提到的子资源
verbs:
- create
```
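The ClusterRole takes effect only once it is bound to a subject; a minimal sketch with an illustrative user name:
```
kubectl create clusterrolebinding pod-exec-binding --clusterrole=pod-exec --user=dev-user
```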
**ns-readonly.yaml**
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: namespace-readonly
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- metrics.k8s.io
resources:
- pods
verbs:
- get
- list
- watch
```

**mysql-redis-nw.yaml**
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mysql-np
namespace: nw-demo
spec:
podSelector:
matchLabels:
app: mysql
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
access-nw-mysql-redis: "true"
ports:
- protocol: TCP
port: 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: redis-np
namespace: nw-demo
spec:
podSelector:
matchLabels:
app: redis
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
access-nw-mysql-redis: "true"
ports:
- protocol: TCP
port: 6379
```
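With these policies in place, only Pods in namespaces labeled `access-nw-mysql-redis=true` can reach MySQL and Redis on their service ports; the label is set like this (the namespace name is illustrative):
```
kubectl label namespace web-frontend access-nw-mysql-redis=true
```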
**nginx-nw.yaml**
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: nginx-np
namespace: nw-demo
spec:
podSelector:
matchLabels:
app: nginx
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
podSelector:
matchLabels:
"app.kubernetes.io/name": ingress-nginx
- podSelector: {}
ports:
- protocol: TCP
port: 80
```

**volumeClaimTemplates**
```
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "rook-ceph-block"
resources:
requests:
storage: 1Gi
```
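Each replica gets its own claim named `<template name>-<StatefulSet name>-<ordinal>`, so this manifest produces `www-web-0`, `www-web-1` and `www-web-2`; the claims survive deletion of the StatefulSet:
```
kubectl get pvc   # expect www-web-0, www-web-1, www-web-2
```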

**pvc-restore.yaml**
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rbd-pvc-restore
spec:
storageClassName: rook-ceph-block
dataSource:
name: rbd-pvc-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
```
**restore-check-snapshot-rbd.yaml**
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: check-snapshot-restore
spec:
selector:
matchLabels:
app: check
strategy:
type: Recreate
template:
metadata:
labels:
app: check
spec:
containers:
- image: alpine:3.8
name: check
command:
- sh
- -c
- sleep 36000
volumeMounts:
- name: check-mysql-persistent-storage
mountPath: /mnt
volumes:
- name: check-mysql-persistent-storage
persistentVolumeClaim:
claimName: rbd-pvc-restore
```

**pvc-clone.yaml**
````
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rbd-pvc-clone
spec:
storageClassName: rook-ceph-block
dataSource:
name: mysql-pv-claim
kind: PersistentVolumeClaim
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
````

**docs/chap15/15.3.md**
**vim etcd-svc.yaml**
```
apiVersion: v1
kind: Endpoints
metadata:
labels:
app: etcd-prom
name: etcd-prom
namespace: kube-system
subsets:
- addresses:
- ip: YOUR_ETCD_IP01
- ip: YOUR_ETCD_IP02
- ip: YOUR_ETCD_IP03
ports:
- name: https-metrics
port: 2379 # etcd端口
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
app: etcd-prom
name: etcd-prom
namespace: kube-system
spec:
ports:
- name: https-metrics
port: 2379
protocol: TCP
targetPort: 2379
type: ClusterIP
```
**servicemonitor.yaml**
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: etcd
namespace: monitoring
labels:
app: etcd
spec:
jobLabel: k8s-app
endpoints:
- interval: 30s
port: https-metrics # 这个port对应 Service.spec.ports.name
scheme: https
tlsConfig:
caFile: /etc/prometheus/secrets/etcd-ssl/etcd-ca.pem #证书路径
certFile: /etc/prometheus/secrets/etcd-ssl/etcd.pem
keyFile: /etc/prometheus/secrets/etcd-ssl/etcd-key.pem
insecureSkipVerify: true # 关闭证书校验
selector:
matchLabels:
app: etcd-prom # 跟Service的lables保持一致
namespaceSelector:
matchNames:
- kube-system
```
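The `tlsConfig` paths above require an `etcd-ssl` Secret to be mounted into Prometheus (with prometheus-operator this also means listing it in the Prometheus CR's `spec.secrets`). A sketch of creating it, assuming these certificate file paths:
```
kubectl create secret generic etcd-ssl \
  --from-file=/etc/kubernetes/pki/etcd/etcd-ca.pem \
  --from-file=/etc/kubernetes/pki/etcd/etcd.pem \
  --from-file=/etc/kubernetes/pki/etcd/etcd-key.pem \
  -n monitoring
```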
**mysql-exporter.yaml**
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-exporter
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
k8s-app: mysql-exporter
template:
metadata:
labels:
k8s-app: mysql-exporter
spec:
containers:
- name: mysql-exporter
image: registry.cn-beijing.aliyuncs.com/dotbalo/mysqld-exporter
env:
- name: DATA_SOURCE_NAME
value: "exporter:exporter@(mysql.default:3306)/"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9104
---
apiVersion: v1
kind: Service
metadata:
name: mysql-exporter
namespace: monitoring
labels:
k8s-app: mysql-exporter
spec:
type: ClusterIP
selector:
k8s-app: mysql-exporter
ports:
- name: api
port: 9104
protocol: TCP
```
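The exporter logs in as `exporter:exporter` (see `DATA_SOURCE_NAME`), so that account must exist on the target MySQL instance with the usual exporter privileges; a sketch run against the database:
```
mysql -u root -p -e "CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporter'; GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';"
```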
**mysql-sm.yaml**
```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: mysql-exporter
namespace: monitoring
labels:
k8s-app: mysql-exporter
spec:
jobLabel: k8s-app
endpoints:
- port: api
interval: 30s
scheme: http
selector:
matchLabels:
k8s-app: mysql-exporter
namespaceSelector:
matchNames:
- monitoring
```

**vim auth-rate-limit.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-realm: Please Input Your Username and Password
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/limit-connections: "1"
name: ingress-with-auth
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: auth.test.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
```

**vim canary-v2.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
name: canary-v2
namespace: canary
spec:
ingressClassName: nginx
rules:
- host: canary.com
http:
paths:
- backend:
service:
name: canary-v2
port:
number: 8080
path: /
pathType: ImplementationSpecific
```
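With `canary-weight: "10"`, roughly one request in ten is routed to the `canary-v2` Service; replaying requests against the Ingress controller makes the split visible (the controller address is illustrative):
```
for i in $(seq 1 10); do curl -s -H "Host: canary.com" http://192.168.236.201/; done
```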

**vim web-ingress.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: nginx.test.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
```
**v1beta1**
```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: study-ingress
spec:
rules:
- host: nginx.test.com
http:
paths:
- backend:
serviceName: nginx
servicePort: 80
path: /
pathType: ImplementationSpecific
```

**vim redirect.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/permanent-redirect: https://www.baidu.com
name: nginx-redirect
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: nginx.redirect.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
```

**vim rewrite.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: backend-api
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: nginx.test.com
http:
paths:
- backend:
service:
name: backend-api
port:
number: 80
path: /api-a(/|$)(.*)
pathType: ImplementationSpecific
```
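The `$2` in `rewrite-target` refers to the second capture group of the `path` regex, i.e. everything after `/api-a`, so a request to `/api-a/v1/users` reaches the backend as `/v1/users` (the controller address is illustrative):
```
curl -H "Host: nginx.test.com" http://192.168.236.201/api-a/v1/users
```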

**vim ingress-ssl.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: nginx-ingress
spec:
ingressClassName: nginx
rules:
- host: nginx.test.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- nginx.test.com
secretName: ca-secret
```
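The referenced `ca-secret` must be a TLS-type Secret in the same namespace as the Ingress; it can be created from an existing certificate and key (the file names are illustrative):
```
kubectl create secret tls ca-secret --cert=tls.crt --key=tls.key
```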

**vim laptop-ingress.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
set $agentflag 0;
if ($http_user_agent ~* "(Android|iPhone|Windows Phone|UC|Kindle)" ){
set $agentflag 1;
}
if ( $agentflag = 1 ) {
return 301 http://m.test.com;
}
name: laptop
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: test.com
http:
paths:
- backend:
service:
name: laptop
port:
number: 80
path: /
pathType: ImplementationSpecific
```

**vim ingress-with-auth.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-realm: Please Input Your Username and Password
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
name: ingress-with-auth
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: auth.test.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
```
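The `basic-auth` Secret referenced above holds an htpasswd file whose key must be named `auth`; a sketch with an illustrative user:
```
htpasswd -c auth foo
kubectl create secret generic basic-auth --from-file=auth -n study-ingress
```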

**vim auth-whitelist.yaml**
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-realm: Please Input Your Username and Password
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.10.128
name: ingress-with-auth
namespace: study-ingress
spec:
ingressClassName: nginx
rules:
- host: auth.test.com
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
```

**Jenkinsfile**
```
pipeline {
agent {
kubernetes {
cloud 'kubernetes-study'
slaveConnectTimeout 1200
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
# 只需要配置jnlp和kubectl镜像即可
- args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
image: 'registry.cn-beijing.aliyuncs.com/citools/jnlp:alpine'
name: jnlp
imagePullPolicy: IfNotPresent
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/kubectl:self-1.17"
imagePullPolicy: "IfNotPresent"
name: "kubectl"
tty: true
restartPolicy: "Never"
'''
}
}
stages {
stage('Deploy') {
environment {
MY_KUBECONFIG = credentials('study-k8s-kubeconfig')
}
steps {
container(name: 'kubectl'){
sh """
echo ${IMAGE_TAG} # 该变量即为前台选择的镜像
kubectl --kubeconfig=${MY_KUBECONFIG} set image deployment -l app=${IMAGE_NAME} ${IMAGE_NAME}=${HARBOR_ADDRESS}/${IMAGE_TAG} -n ${NAMESPACE}
kubectl --kubeconfig=${MY_KUBECONFIG} get po -l app=${IMAGE_NAME} -n ${NAMESPACE} -w
"""
}
}
}
}
environment {
HARBOR_ADDRESS = "HARBOR_ADDRESS"
NAMESPACE = "kubernetes"
IMAGE_NAME = "go-project"
TAG = ""
}
}
```

**docs/chap17/17.6.md**
**Jenkinsfile**
```
pipeline {
agent {
kubernetes {
cloud 'kubernetes-study'
slaveConnectTimeout 1200
workspaceVolume hostPathWorkspaceVolume(hostPath: "/opt/workspace", readOnly: false)
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
image: 'registry.cn-beijing.aliyuncs.com/citools/jnlp:alpine'
name: jnlp
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/maven:3.5.3"
imagePullPolicy: "IfNotPresent"
name: "build"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
- mountPath: "/root/.m2/"
name: "cachedir"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/kubectl:self-1.17"
imagePullPolicy: "IfNotPresent"
name: "kubectl"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/docker:19.03.9-git"
imagePullPolicy: "IfNotPresent"
name: "docker"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- mountPath: "/var/run/docker.sock"
name: "dockersock"
readOnly: false
restartPolicy: "Never"
nodeSelector:
build: "true"
securityContext: {}
volumes:
- hostPath:
path: "/var/run/docker.sock"
name: "dockersock"
- hostPath:
path: "/usr/share/zoneinfo/Asia/Shanghai"
name: "localtime"
- name: "cachedir"
hostPath:
path: "/opt/m2"
'''
}
}
stages {
stage('Pulling Code') {
parallel {
stage('Pulling Code by Jenkins') {
when {
expression {
env.gitlabBranch == null
}
}
steps {
git(changelog: true, poll: true, url: 'git@CHANGE_HERE_FOR_YOUR_GITLAB_URL:root/spring-boot-project.git', branch: "${BRANCH}", credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${BRANCH}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
stage('Pulling Code by trigger') {
when {
expression {
env.gitlabBranch != null
}
}
steps {
git(url: 'git@CHANGE_HERE_FOR_YOUR_GITLAB_URL:root/spring-boot-project.git', branch: env.gitlabBranch, changelog: true, poll: true, credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${env.gitlabBranch}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
}
}
stage('Building') {
steps {
container(name: 'build') {
sh """
curl repo.maven.apache.org
mvn clean install -DskipTests
ls target/*
"""
}
}
}
stage('Docker build for creating image') {
environment {
HARBOR_USER = credentials('HARBOR_ACCOUNT')
}
steps {
container(name: 'docker') {
sh """
echo ${HARBOR_USER_USR} ${HARBOR_USER_PSW} ${TAG}
docker build -t ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} .
docker login -u ${HARBOR_USER_USR} -p ${HARBOR_USER_PSW} ${HARBOR_ADDRESS}
docker push ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG}
"""
}
}
}
stage('Deploying to K8s') {
environment {
MY_KUBECONFIG = credentials('study-k8s-kubeconfig')
}
steps {
container(name: 'kubectl'){
sh """
/usr/local/bin/kubectl --kubeconfig $MY_KUBECONFIG set image deploy -l app=${IMAGE_NAME} ${IMAGE_NAME}=${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} -n $NAMESPACE
"""
}
}
}
}
environment {
COMMIT_ID = ""
HARBOR_ADDRESS = "CHANGE_HERE_FOR_YOUR_HARBOR_URL"
REGISTRY_DIR = "kubernetes"
IMAGE_NAME = "spring-boot-project"
NAMESPACE = "kubernetes"
TAG = ""
}
parameters {
gitParameter(branch: '', branchFilter: 'origin/(.*)', defaultValue: '', description: 'Branch for build and deploy', name: 'BRANCH', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH')
}
}
```
**Dockerfile**
```
# The base image can be changed as needed, e.g. to a company-internal image
FROM registry.cn-beijing.aliyuncs.com/dotbalo/jre:8u211-data
# Change the jar name to the actual artifact; in this example it is spring-cloud-eureka-0.0.1-SNAPSHOT.jar
COPY target/spring-cloud-eureka-0.0.1-SNAPSHOT.jar ./
# Start the jar
CMD java -jar spring-cloud-eureka-0.0.1-SNAPSHOT.jar
```
**Deployment/Service/Ingress**
```
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: spring-boot-project
name: spring-boot-project
namespace: kubernetes
spec:
ports:
- name: web
port: 8761
protocol: TCP
targetPort: 8761
selector:
app: spring-boot-project
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: spring-boot-project
namespace: kubernetes
spec:
rules:
- host: spring-boot-project.test.com
http:
paths:
- backend:
service:
name: spring-boot-project
port:
number: 8761
path: /
pathType: ImplementationSpecific
status:
loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: spring-boot-project
name: spring-boot-project
namespace: kubernetes
spec:
replicas: 1
selector:
matchLabels:
app: spring-boot-project
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: spring-boot-project
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- spring-boot-project
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
image: nginx
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 8761
timeoutSeconds: 2
name: spring-boot-project
ports:
- containerPort: 8761
name: web
protocol: TCP
readinessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 8761
timeoutSeconds: 2
resources:
limits:
cpu: 994m
memory: 1170Mi
requests:
cpu: 10m
memory: 55Mi
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: harborkey
restartPolicy: Always
securityContext: {}
serviceAccountName: default
```

**docs/chap17/17.7.md**
**Jenkinsfile**
```
pipeline {
agent {
kubernetes {
cloud 'kubernetes-study'
slaveConnectTimeout 1200
workspaceVolume hostPathWorkspaceVolume(hostPath: "/opt/workspace", readOnly: false)
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
image: 'registry.cn-beijing.aliyuncs.com/citools/jnlp:alpine'
name: jnlp
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/node:lts"
imagePullPolicy: "IfNotPresent"
name: "build"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
- mountPath: "/root/.m2/"
name: "cachedir"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/kubectl:self-1.17"
imagePullPolicy: "IfNotPresent"
name: "kubectl"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/docker:19.03.9-git"
imagePullPolicy: "IfNotPresent"
name: "docker"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- mountPath: "/var/run/docker.sock"
name: "dockersock"
readOnly: false
restartPolicy: "Never"
nodeSelector:
build: "true"
securityContext: {}
volumes:
- hostPath:
path: "/var/run/docker.sock"
name: "dockersock"
- hostPath:
path: "/usr/share/zoneinfo/Asia/Shanghai"
name: "localtime"
- name: "cachedir"
hostPath:
path: "/opt/m2"
'''
}
}
stages {
stage('Pulling Code') {
parallel {
stage('Pulling Code by Jenkins') {
when {
expression {
env.gitlabBranch == null
}
}
steps {
git(changelog: true, poll: true, url: 'git@192.168.236.251:kubernetes/vue-project.git', branch: "${BRANCH}", credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${BRANCH}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
stage('Pulling Code by trigger') {
when {
expression {
env.gitlabBranch != null
}
}
steps {
git(url: 'git@192.168.236.251:kubernetes/vue-project.git', branch: env.gitlabBranch, changelog: true, poll: true, credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${BRANCH}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
}
}
stage('Building') {
steps {
container(name: 'build') {
sh """
npm install --registry=https://registry.npm.taobao.org
npm run build
"""
}
}
}
stage('Docker build for creating image') {
environment {
HARBOR_USER = credentials('HARBOR_ACCOUNT')
}
steps {
container(name: 'docker') {
sh """
echo ${HARBOR_USER_USR} ${HARBOR_USER_PSW} ${TAG}
docker build -t ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} .
docker login -u ${HARBOR_USER_USR} -p ${HARBOR_USER_PSW} ${HARBOR_ADDRESS}
docker push ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG}
"""
}
}
}
stage('Deploying to K8s') {
environment {
MY_KUBECONFIG = credentials('study-k8s-kubeconfig')
}
steps {
container(name: 'kubectl'){
sh """
/usr/local/bin/kubectl --kubeconfig $MY_KUBECONFIG set image deploy -l app=${IMAGE_NAME} ${IMAGE_NAME}=${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} -n $NAMESPACE
"""
}
}
}
}
environment {
COMMIT_ID = ""
HARBOR_ADDRESS = "192.168.236.204"
REGISTRY_DIR = "kubernetes"
IMAGE_NAME = "vue-project"
NAMESPACE = "kubernetes"
TAG = ""
}
parameters {
gitParameter(branch: '', branchFilter: 'origin/(.*)', defaultValue: '', description: 'Branch for build and deploy', name: 'BRANCH', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH')
}
}
```
**Dockerfile**
```
FROM registry.cn-beijing.aliyuncs.com/dotbalo/nginx:1.15.12
COPY dist/* /usr/share/nginx/html/
```
**Deployment/Service/Ingress**
```
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: vue-project
name: vue-project
namespace: kubernetes
spec:
ports:
- name: web
port: 80
protocol: TCP
targetPort: 80
selector:
app: vue-project
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: vue-project
namespace: kubernetes
spec:
rules:
- host: vue-project.test.com
http:
paths:
- backend:
service:
name: vue-project
port:
number: 80
path: /
pathType: ImplementationSpecific
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: vue-project
name: vue-project
namespace: kubernetes
spec:
replicas: 1
selector:
matchLabels:
app: vue-project
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: vue-project
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- vue-project
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
image: nginx
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 80
timeoutSeconds: 2
name: vue-project
ports:
- containerPort: 80
name: web
protocol: TCP
readinessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 80
timeoutSeconds: 2
resources:
limits:
cpu: 994m
memory: 1170Mi
requests:
cpu: 10m
memory: 55Mi
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: harborkey
restartPolicy: Always
securityContext: {}
serviceAccountName: default
```

**docs/chap17/17.8.md**
**Jenkinsfile**
```
pipeline {
agent {
kubernetes {
cloud 'kubernetes-study'
slaveConnectTimeout 1200
workspaceVolume hostPathWorkspaceVolume(hostPath: "/opt/workspace", readOnly: false)
yaml '''
apiVersion: v1
kind: Pod
spec:
containers:
- args: [\'$(JENKINS_SECRET)\', \'$(JENKINS_NAME)\']
image: 'registry.cn-beijing.aliyuncs.com/citools/jnlp:alpine'
name: jnlp
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/node:lts"
imagePullPolicy: "IfNotPresent"
name: "build"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
- mountPath: "/root/.m2/"
name: "cachedir"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/kubectl:self-1.17"
imagePullPolicy: "IfNotPresent"
name: "kubectl"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- command:
- "cat"
env:
- name: "LANGUAGE"
value: "en_US:en"
- name: "LC_ALL"
value: "en_US.UTF-8"
- name: "LANG"
value: "en_US.UTF-8"
image: "registry.cn-beijing.aliyuncs.com/citools/docker:19.03.9-git"
imagePullPolicy: "IfNotPresent"
name: "docker"
tty: true
volumeMounts:
- mountPath: "/etc/localtime"
name: "localtime"
readOnly: false
- mountPath: "/var/run/docker.sock"
name: "dockersock"
readOnly: false
restartPolicy: "Never"
nodeSelector:
build: "true"
securityContext: {}
volumes:
- hostPath:
path: "/var/run/docker.sock"
name: "dockersock"
- hostPath:
path: "/usr/share/zoneinfo/Asia/Shanghai"
name: "localtime"
- name: "cachedir"
hostPath:
path: "/opt/m2"
'''
}
}
stages {
stage('Pulling Code') {
parallel {
stage('Pulling Code by Jenkins') {
when {
expression {
env.gitlabBranch == null
}
}
steps {
            git(changelog: true, poll: true, url: 'git@192.168.236.251:kubernetes/go-project.git', branch: "${BRANCH}", credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${BRANCH}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
stage('Pulling Code by trigger') {
when {
expression {
env.gitlabBranch != null
}
}
steps {
            git(url: 'git@192.168.236.251:kubernetes/go-project.git', branch: env.gitlabBranch, changelog: true, poll: true, credentialsId: 'gitlab-key')
script {
COMMIT_ID = sh(returnStdout: true, script: "git log -n 1 --pretty=format:'%h'").trim()
TAG = BUILD_TAG + '-' + COMMIT_ID
println "Current branch is ${BRANCH}, Commit ID is ${COMMIT_ID}, Image TAG is ${TAG}"
}
}
}
}
}
stage('Building') {
steps {
container(name: 'build') {
sh """
          export GOPROXY=https://goproxy.cn,direct
          # build the static binary that the Dockerfile copies into the image
          CGO_ENABLED=0 go build -o go-project
"""
}
}
}
stage('Docker build for creating image') {
environment {
HARBOR_USER = credentials('HARBOR_ACCOUNT')
}
steps {
container(name: 'docker') {
sh """
echo ${HARBOR_USER_USR} ${HARBOR_USER_PSW} ${TAG}
docker build -t ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} .
docker login -u ${HARBOR_USER_USR} -p ${HARBOR_USER_PSW} ${HARBOR_ADDRESS}
docker push ${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG}
"""
}
}
}
stage('Deploying to K8s') {
environment {
MY_KUBECONFIG = credentials('study-k8s-kubeconfig')
}
steps {
container(name: 'kubectl'){
sh """
/usr/local/bin/kubectl --kubeconfig $MY_KUBECONFIG set image deploy -l app=${IMAGE_NAME} ${IMAGE_NAME}=${HARBOR_ADDRESS}/${REGISTRY_DIR}/${IMAGE_NAME}:${TAG} -n $NAMESPACE
"""
}
}
}
}
environment {
COMMIT_ID = ""
HARBOR_ADDRESS = "192.168.236.204"
REGISTRY_DIR = "kubernetes"
    IMAGE_NAME = "go-project"
NAMESPACE = "kubernetes"
TAG = ""
}
parameters {
gitParameter(branch: '', branchFilter: 'origin/(.*)', defaultValue: '', description: 'Branch for build and deploy', name: 'BRANCH', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH')
}
}
```
**Dockerfile**
```
FROM registry.cn-beijing.aliyuncs.com/dotbalo/alpine-glibc:alpine-3.9
# If the app has separate configuration files, copy them into the image
COPY conf/ ./conf
# Change the binary name to match the actual build output
COPY ./go-project ./
# Start the application
ENTRYPOINT [ "./go-project" ]
```
**Deployment/Service/Ingress**
```
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: go-project
name: go-project
namespace: kubernetes
spec:
ports:
- name: web
port: 8080
protocol: TCP
targetPort: 8080
selector:
app: go-project
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: go-project
namespace: kubernetes
spec:
rules:
- host: go-project.test.com
http:
paths:
- backend:
service:
name: go-project
port:
number: 8080
path: /
pathType: ImplementationSpecific
---
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: go-project
name: go-project
namespace: kubernetes
spec:
replicas: 1
selector:
matchLabels:
app: go-project
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: go-project
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- go-project
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: TZ
value: Asia/Shanghai
- name: LANG
value: C.UTF-8
image: nginx
imagePullPolicy: IfNotPresent
lifecycle: {}
livenessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 8080
timeoutSeconds: 2
name: go-project
ports:
- containerPort: 8080
name: web
protocol: TCP
readinessProbe:
failureThreshold: 2
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 8080
timeoutSeconds: 2
resources:
limits:
cpu: 994m
memory: 1170Mi
requests:
cpu: 10m
memory: 55Mi
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: harborkey
restartPolicy: Always
securityContext: {}
serviceAccountName: default
```

**docs/chap18/18.6.md**
**Gateway configuration**
```
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # 使用默认的istio ingress gateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "bookinfo.kubeasy.com" # 发布域名
```
**Configuring the VirtualService**
```
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
```
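Once the Gateway and VirtualService are applied, the product page is reachable through the istio-ingressgateway Service (the gateway address is illustrative):
```
curl -s -H "Host: bookinfo.kubeasy.com" http://192.168.236.201/productpage
```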
**vim reviews-dr.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
subsets:
- name: v1
labels:
version: v1 # subset v1指向具有version=v1的Pod
- name: v2
labels:
version: v2 # subset v2指向具有version=v2的Pod
- name: v3
labels:
version: v3 # subset v3指向具有version=v3的Pod
```
**vim reviews-v1-all.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1 # 将流量指向v1
```
**vim reviews-20v2-80v1.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1 # 将80%流量指向v1
weight: 80 # 只需要配置一个weight参数即可
- destination:
host: reviews
subset: v2 # 将20%流量指向v2
weight: 20
```
**vim reviews-v2-all.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v2 # 指向v2
```
**cat reviews-jasonv3.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers: # 匹配请求头
end-user: # 匹配请求头的key为end-user
exact: jason # value为jason
route:
- destination:
host: reviews
subset: v3 # 匹配到end-user=jason路由至v3版本
- route:
- destination:
host: reviews
subset: v2 # 其余的路由至v2版本
```
**vim details-delay.yaml**
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- fault: # 添加一个错误
delay: # 添加类型为delay的故障
percentage: # 故障注入的百分比
value: 100 # 对所有请求注入故障
fixedDelay: 5s # 注入的延迟时间
route:
- destination:
host: details
```
**vim details-abort.yaml**