GCP Persistent Disk CSI Driver deployment (#5857)

* GCP Persistent Disk CSI Driver deployment

* Fix MD lint

* Fix Yaml lint
Ali Sanhaji 2020-03-31 09:06:40 +02:00 committed by GitHub
parent 79a6b72a13
commit 484df62c5a
16 changed files with 603 additions and 1 deletion

docs/gcp-pd-csi.md 100644
@@ -0,0 +1,77 @@
# GCP Persistent Disk CSI Driver
The GCP Persistent Disk CSI driver allows you to provision volumes for pods on a Kubernetes cluster deployed on Google Cloud Platform. The CSI driver replaces the volume provisioning previously done by the in-tree GCE cloud provider, which is deprecated.
To deploy the GCP Persistent Disk CSI driver, uncomment the `gcp_pd_csi_enabled` option in `group_vars/all/gcp.yml` and set it to `true`.
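For example, the relevant lines in your inventory's `group_vars/all/gcp.yml` would look like this (the credentials file path is explained below and is only an illustrative value):

```yml
gcp_pd_csi_enabled: true
gcp_pd_csi_sa_cred_file: "/my/safe/credentials/directory/cloud-sa.json"
```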
## GCP Persistent Disk Storage Class
If you want to deploy the GCP Persistent Disk storage class to provision volumes dynamically, you should set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
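That is, in `group_vars/k8s-cluster/k8s-cluster.yml`:

```yml
persistent_volumes_enabled: true
```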
## GCP credentials
In order for the CSI driver to provision disks, you need to create a GCP service account for it with the appropriate permissions.
Follow these steps to configure it:
```ShellSession
# This will open a web page for you to authenticate
gcloud auth login
export PROJECT=nameofmyproject
gcloud config set project $PROJECT
# Clone the driver repository and create the service account
git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver $GOPATH/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
cd $GOPATH/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
export GCE_PD_SA_NAME=my-gce-pd-csi-sa
export GCE_PD_SA_DIR=/my/safe/credentials/directory
./deploy/setup-project.sh
```
The above creates a file named `cloud-sa.json` in the specified `GCE_PD_SA_DIR`. This file contains the credentials of the service account the CSI driver uses to perform actions on GCP and request disks for pods.
You need to provide this file's path through the variable `gcp_pd_csi_sa_cred_file` in `inventory/mycluster/group_vars/all/gcp.yml`.
You can now deploy Kubernetes with Kubespray over GCP.
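For example, with the sample inventory layout (the inventory name `mycluster` and paths are illustrative):

```ShellSession
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```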
## GCP PD CSI Driver test
To test dynamic provisioning with the GCP PD CSI driver, make sure the storage class is deployed (see the persistent volumes section above), and apply the following manifest:
```yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: web-server
      image: nginx
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: podpvc
        readOnly: false
```
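Assuming the manifest is saved as `test-pd-csi.yml` (an arbitrary file name for this example), you can apply it and watch the PVC become `Bound` once the pod is scheduled, since the storage class uses `volumeBindingMode: WaitForFirstConsumer`:

```ShellSession
kubectl apply -f test-pd-csi.yml
kubectl get pvc podpvc
kubectl get pod web-server
```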
## GCP PD documentation
You can find the official GCP Persistent Disk CSI driver installation documentation here: [GCP PD CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/docs/kubernetes/user-guides/driver-install.md)

@@ -0,0 +1,10 @@
## GCP compute Persistent Disk CSI Driver credentials and parameters
## See docs/gcp-pd-csi.md for information about the implementation
## Specify the path to the file containing the service account credentials
# gcp_pd_csi_sa_cred_file: "/my/safe/credentials/directory/cloud-sa.json"
## To enable GCP Persistent Disk CSI driver, uncomment below
# gcp_pd_csi_enabled: true
# gcp_pd_csi_controller_replicas: 1
# gcp_pd_csi_driver_image_tag: "v0.7.0-gke.0"

@@ -255,7 +255,7 @@ podsecuritypolicy_enabled: false
## See https://github.com/kubernetes-sigs/kubespray/issues/2141
## Set this variable to true to get rid of this issue
volume_cross_zone_attachment: false
# Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI, AWS EBS CSI)
# Add Persistent Volumes Storage Class for corresponding cloud provider (supported: in-tree OpenStack, Cinder CSI, AWS EBS CSI, GCP Persistent Disk CSI)
persistent_volumes_enabled: false
## Container Engine Acceleration

@@ -531,6 +531,13 @@ aws_ebs_csi_plugin_image_tag: "latest"
aws_ebs_csi_plugin_image_repo: "{{ docker_image_repo }}/amazon/aws-ebs-csi-driver"
aws_ebs_csi_plugin_image_tag: "latest"
gcp_pd_csi_image_repo: "gke.gcr.io"
gcp_pd_csi_driver_image_tag: "v0.7.0-gke.0"
gcp_pd_csi_provisioner_image_tag: "v1.5.0-gke.0"
gcp_pd_csi_attacher_image_tag: "v2.1.1-gke.0"
gcp_pd_csi_resizer_image_tag: "v0.4.0-gke.0"
gcp_pd_csi_registrar_image_tag: "v1.2.0-gke.0"
dashboard_image_repo: "{{ gcr_image_repo }}/google_containers/kubernetes-dashboard-{{ image_arch }}"
dashboard_image_tag: "v1.10.1"

@@ -0,0 +1,3 @@
---
gcp_pd_csi_controller_replicas: 1
gcp_pd_csi_driver_image_tag: "v0.7.0-gke.0"

@@ -0,0 +1,49 @@
---
- name: GCP PD CSI Driver | Check if cloud-sa.json exists
  fail:
    msg: "Credentials file cloud-sa.json is mandatory"
  when: gcp_pd_csi_sa_cred_file is not defined or not gcp_pd_csi_sa_cred_file
  tags: gcp-pd-csi-driver

- name: GCP PD CSI Driver | Copy GCP credentials file
  copy:
    src: "{{ gcp_pd_csi_sa_cred_file }}"
    dest: "{{ kube_config_dir }}/cloud-sa.json"
    group: "{{ kube_cert_group }}"
    mode: 0640
  when: inventory_hostname == groups['kube-master'][0]
  tags: gcp-pd-csi-driver

- name: GCP PD CSI Driver | Get base64 cloud-sa.json
  slurp:
    src: "{{ kube_config_dir }}/cloud-sa.json"
  register: gcp_cred_secret
  when: inventory_hostname == groups['kube-master'][0]
  tags: gcp-pd-csi-driver

- name: GCP PD CSI Driver | Generate Manifests
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.file }}"
  with_items:
    - {name: gcp-pd-csi-cred-secret, file: gcp-pd-csi-cred-secret.yml}
    - {name: gcp-pd-csi-setup, file: gcp-pd-csi-setup.yml}
    - {name: gcp-pd-csi-controller, file: gcp-pd-csi-controller.yml}
    - {name: gcp-pd-csi-node, file: gcp-pd-csi-node.yml}
  register: gcp_pd_csi_manifests
  when: inventory_hostname == groups['kube-master'][0]
  tags: gcp-pd-csi-driver

- name: GCP PD CSI Driver | Apply Manifests
  kube:
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/{{ item.item.file }}"
    state: "latest"
  with_items:
    - "{{ gcp_pd_csi_manifests.results }}"
  when:
    - inventory_hostname == groups['kube-master'][0]
    - not item is skipped
  loop_control:
    label: "{{ item.item.file }}"
  tags: gcp-pd-csi-driver

@@ -0,0 +1,74 @@
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-gce-pd-controller
  namespace: kube-system
spec:
  serviceName: "csi-gce-pd"
  replicas: {{ gcp_pd_csi_controller_replicas }}
  selector:
    matchLabels:
      app: gcp-compute-persistent-disk-csi-driver
  template:
    metadata:
      labels:
        app: gcp-compute-persistent-disk-csi-driver
    spec:
      # Host network must be used for interaction with Workload Identity in GKE
      # since it replaces GCE Metadata Server with GKE Metadata Server. Remove
      # this requirement when issue is resolved and before any exposure of
      # metrics ports
      hostNetwork: true
      serviceAccountName: csi-gce-pd-controller-sa
      priorityClassName: csi-gce-pd-controller
      containers:
        - name: csi-provisioner
          image: {{ gcp_pd_csi_image_repo }}/csi-provisioner:{{ gcp_pd_csi_provisioner_image_tag }}
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--feature-gates=Topology=true"
            # - "--run-controller-service=false"  # disable the controller service of the CSI driver
            # - "--run-node-service=false"  # disable the node service of the CSI driver
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: {{ gcp_pd_csi_image_repo }}/csi-attacher:{{ gcp_pd_csi_attacher_image_tag }}
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: {{ gcp_pd_csi_image_repo }}/csi-resizer:{{ gcp_pd_csi_resizer_image_tag }}
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: gce-pd-driver
          # Don't change base image without changing pdImagePlaceholder in
          # test/k8s-integration/main.go
          image: {{ gcp_pd_csi_image_repo }}/gcp-compute-persistent-disk-csi-driver:{{ gcp_pd_csi_driver_image_tag }}
          args:
            - "--v=5"
            - "--endpoint=unix:/csi/csi.sock"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/etc/cloud-sa/cloud-sa.json"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: cloud-sa-volume
              readOnly: true
              mountPath: "/etc/cloud-sa"
      volumes:
        - name: socket-dir
          emptyDir: {}
        - name: cloud-sa-volume
          secret:
            secretName: cloud-sa
  volumeClaimTemplates: []

@@ -0,0 +1,8 @@
---
kind: Secret
apiVersion: v1
metadata:
  name: cloud-sa
  namespace: kube-system
data:
  cloud-sa.json: {{ gcp_cred_secret.content }}
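Ansible's `slurp` module returns file contents base64-encoded, which is why `gcp_cred_secret.content` can be written into the Secret's `data` field as-is. As a sanity check after deployment, the stored credentials can be decoded back, for example:

```ShellSession
kubectl -n kube-system get secret cloud-sa -o jsonpath='{.data.cloud-sa\.json}' | base64 -d
```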

@@ -0,0 +1,111 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-gce-pd-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: gcp-compute-persistent-disk-csi-driver
  template:
    metadata:
      labels:
        app: gcp-compute-persistent-disk-csi-driver
    spec:
      # Host network must be used for interaction with Workload Identity in GKE
      # since it replaces GCE Metadata Server with GKE Metadata Server. Remove
      # this requirement when issue is resolved and before any exposure of
      # metrics ports.
      hostNetwork: true
      priorityClassName: csi-gce-pd-node
      serviceAccountName: csi-gce-pd-node-sa
      containers:
        - name: csi-driver-registrar
          image: {{ gcp_pd_csi_image_repo }}/csi-node-driver-registrar:{{ gcp_pd_csi_registrar_image_tag }}
          args:
            - "--v=5"
            - "--csi-address=/csi/csi.sock"
            - "--kubelet-registration-path=/var/lib/kubelet/plugins/pd.csi.storage.gke.io/csi.sock"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/pd.csi.storage.gke.io /registration/pd.csi.storage.gke.io-reg.sock"]
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: gce-pd-driver
          securityContext:
            privileged: true
          # Don't change base image without changing pdImagePlaceholder in
          # test/k8s-integration/main.go
          image: {{ gcp_pd_csi_image_repo }}/gcp-compute-persistent-disk-csi-driver:{{ gcp_pd_csi_driver_image_tag }}
          args:
            - "--v=5"
            - "--endpoint=unix:/csi/csi.sock"
          volumeMounts:
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: plugin-dir
              mountPath: /csi
            - name: device-dir
              mountPath: /dev
            # The following mounts are required to trigger host udevadm from
            # container
            - name: udev-rules-etc
              mountPath: /etc/udev
            - name: udev-rules-lib
              mountPath: /lib/udev
            - name: udev-socket
              mountPath: /run/udev
            - name: sys
              mountPath: /sys
      nodeSelector:
        kubernetes.io/os: linux
      volumes:
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/pd.csi.storage.gke.io/
            type: DirectoryOrCreate
        - name: device-dir
          hostPath:
            path: /dev
            type: Directory
        # The following mounts are required to trigger host udevadm from
        # container
        - name: udev-rules-etc
          hostPath:
            path: /etc/udev
            type: Directory
        - name: udev-rules-lib
          hostPath:
            path: /lib/udev
            type: Directory
        - name: udev-socket
          hostPath:
            path: /run/udev
            type: Directory
        - name: sys
          hostPath:
            path: /sys
            type: Directory
      # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
      # See "special case". This will tolerate everything. Node component should
      # be scheduled on all nodes.
      tolerations:
        - operator: Exists
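Once applied, the controller StatefulSet and this node DaemonSet both carry the `app: gcp-compute-persistent-disk-csi-driver` label, so a quick way to check that the driver pods are up could be:

```ShellSession
kubectl -n kube-system get pods -l app=gcp-compute-persistent-disk-csi-driver
```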

@@ -0,0 +1,200 @@
##### Node Service Account, Roles, RoleBindings
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-gce-pd-node-sa
  namespace: kube-system
---
##### Controller Service Account, Roles, Rolebindings
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-gce-pd-controller-sa
  namespace: kube-system
---
# xref: https://github.com/kubernetes-csi/external-provisioner/blob/master/deploy/kubernetes/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-controller-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: csi-gce-pd-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-gce-pd-provisioner-role
  apiGroup: rbac.authorization.k8s.io
---
# xref: https://github.com/kubernetes-csi/external-attacher/blob/master/deploy/kubernetes/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-attacher-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-controller-attacher-binding
subjects:
  - kind: ServiceAccount
    name: csi-gce-pd-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-gce-pd-attacher-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: csi-gce-pd-controller
value: 900000000
globalDefault: false
description: "This priority class should be used for the GCE PD CSI driver controller deployment only."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: csi-gce-pd-node
value: 900001000
globalDefault: false
description: "This priority class should be used for the GCE PD CSI driver node deployment only."
---
# Resizer must be able to work with PVCs, PVs, SCs.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-resizer-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-resizer-binding
subjects:
  - kind: ServiceAccount
    name: csi-gce-pd-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-gce-pd-resizer-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: csi-gce-pd-node-psp
spec:
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  privileged: true
  volumes:
    - '*'
  hostNetwork: true
  allowedHostPaths:
    - pathPrefix: "/var/lib/kubelet/plugins_registry/"
    - pathPrefix: "/var/lib/kubelet"
    - pathPrefix: "/var/lib/kubelet/plugins/pd.csi.storage.gke.io/"
    - pathPrefix: "/dev"
    - pathPrefix: "/etc/udev"
    - pathPrefix: "/lib/udev"
    - pathPrefix: "/run/udev"
    - pathPrefix: "/sys"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-gce-pd-node-deploy
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - csi-gce-pd-node-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-gce-pd-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: csi-gce-pd-node-deploy
subjects:
  - kind: ServiceAccount
    name: csi-gce-pd-node-sa
    namespace: kube-system

@@ -45,6 +45,14 @@ dependencies:
      - aws-ebs-csi-driver
      - csi-driver
  - role: kubernetes-apps/csi_driver/gcp_pd
    when:
      - gcp_pd_csi_enabled
    tags:
      - apps
      - gcp-pd-csi-driver
      - csi-driver
  - role: kubernetes-apps/persistent_volumes
    when:
      - persistent_volumes_enabled

@@ -0,0 +1,8 @@
---
# Choose between pd-standard and pd-ssd
gcp_pd_csi_volume_type: pd-standard
gcp_pd_regional_replication_enabled: false
gcp_pd_restrict_zone_replication: false
gcp_pd_restricted_zones:
  - europe-west1-b
  - europe-west1-c

@@ -0,0 +1,19 @@
---
- name: Kubernetes Persistent Volumes | Copy GCP PD CSI Storage Class template
  template:
    src: "gcp-pd-csi-storage-class.yml.j2"
    dest: "{{ kube_config_dir }}/gcp-pd-csi-storage-class.yml"
  register: manifests
  when:
    - inventory_hostname == groups['kube-master'][0]

- name: Kubernetes Persistent Volumes | Add GCP PD CSI Storage Class
  kube:
    name: gcp-pd-csi
    kubectl: "{{ bin_dir }}/kubectl"
    resource: StorageClass
    filename: "{{ kube_config_dir }}/gcp-pd-csi-storage-class.yml"
    state: "latest"
  when:
    - inventory_hostname == groups['kube-master'][0]
    - manifests.changed

@@ -0,0 +1,20 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: {{ gcp_pd_csi_volume_type }}
{% if gcp_pd_regional_replication_enabled %}
  replication-type: regional-pd
{% endif %}
volumeBindingMode: WaitForFirstConsumer
{% if gcp_pd_restrict_zone_replication %}
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
{% for value in gcp_pd_restricted_zones %}
          - {{ value }}
{% endfor %}
{% endif %}
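For reference, with the defaults above plus `gcp_pd_regional_replication_enabled: true` and `gcp_pd_restrict_zone_replication: true`, this template would render roughly as:

```yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - europe-west1-b
          - europe-west1-c
```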

@@ -20,3 +20,10 @@ dependencies:
    tags:
      - persistent_volumes_aws_ebs_csi
      - aws-ebs-csi-driver
  - role: kubernetes-apps/persistent_volumes/gcp-pd-csi
    when:
      - gcp_pd_csi_enabled
    tags:
      - persistent_volumes_gcp_pd_csi
      - gcp-pd-csi-driver

@@ -305,6 +305,7 @@ local_volume_provisioner_enabled: "{{ local_volumes_enabled | default('false') }}"
local_volume_provisioner_directory_mode: 0700
cinder_csi_enabled: false
aws_ebs_csi_enabled: false
gcp_pd_csi_enabled: false
persistent_volumes_enabled: false
cephfs_provisioner_enabled: false
rbd_provisioner_enabled: false