Deploy Cinder CSI driver to provision volumes over OpenStack (#5184)

* Deploy Cinder CSI driver to provision volumes over OpenStack

* Deploy Cinder CSI StorageClass

* Cinder CSI doc
pull/5316/head
Ali Sanhaji 2019-11-01 08:59:24 +01:00 committed by Kubernetes Prow Robot
parent 79128dcd6b
commit b0ee1f6cc6
18 changed files with 775 additions and 0 deletions

docs/cinder-csi.md 100644

@@ -0,0 +1,99 @@
Cinder CSI Driver
===============
The Cinder CSI driver allows you to provision volumes on top of an OpenStack deployment. It replaces the volume provisioning done by the historic in-tree cloud provider, which is deprecated and will be removed in future Kubernetes versions.
To enable the Cinder CSI driver, uncomment the `cinder_csi_enabled` option in `group_vars/all/openstack.yml` and set it to `true`.
To set the number of replicas for the Cinder CSI controller, change the `cinder_csi_controller_replicas` option in the same file.
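For example, the relevant lines of `group_vars/all/openstack.yml` could look like this (values shown for illustration):
```
cinder_csi_enabled: true
cinder_csi_controller_replicas: 1
```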
You need to source the OpenStack credentials used to deploy the machines that will host Kubernetes: `source path/to/your/openstack-rc` or `. path/to/your/openstack-rc`.
Make sure the hostnames in your `inventory` file are identical to your instance names in OpenStack; otherwise [Cinder](https://docs.openstack.org/cinder/latest/) won't work as expected.
If you want to deploy the `cinder-csi` storage class used with the Cinder CSI driver, set `persistent_volumes_enabled` in `group_vars/k8s-cluster/k8s-cluster.yml` to `true`.
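In other words, `group_vars/k8s-cluster/k8s-cluster.yml` should contain:
```
persistent_volumes_enabled: true
```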
You can now run the Kubespray playbook (`cluster.yml`) to deploy Kubernetes over OpenStack with the Cinder CSI driver enabled.
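For example, assuming your inventory lives at `inventory/mycluster/hosts.ini` (adjust the path to your own setup):
```
# the inventory path below is only an example
source path/to/your/openstack-rc
ansible-playbook -i inventory/mycluster/hosts.ini -b cluster.yml
```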
## Usage example ##
To check that the Cinder CSI driver works properly, first verify that the cinder-csi pods are running:
```
$ kubectl -n kube-system get pods | grep cinder
csi-cinder-controllerplugin-7f8bf99785-cpb5v 5/5 Running 0 100m
csi-cinder-nodeplugin-rm5x2 2/2 Running 0 100m
```
Check the associated storage class (if you enabled `persistent_volumes_enabled`):
```
$ kubectl get storageclass
NAME PROVISIONER AGE
cinder-csi cinder.csi.openstack.org 100m
```
You can create a PVC and an Nginx Pod using the following `nginx.yaml` file:
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: csi-data-cinderplugin
  volumes:
    - name: csi-data-cinderplugin
      persistentVolumeClaim:
        claimName: csi-pvc-cinderplugin
        readOnly: false
```
Apply this configuration to your cluster: `kubectl apply -f nginx.yaml`
You should see the PVC provisioned and bound:
```
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-cinderplugin Bound pvc-f21ad0a1-5b7b-405e-a462-48da5cb76beb 1Gi RWO cinder-csi 8s
```
And the volume mounted to the Nginx Pod (wait until the Pod is Running):
```
$ kubectl exec -it nginx -- df -h | grep /var/lib/www/html
/dev/vdb 976M 2.6M 958M 1% /var/lib/www/html
```
## Compatibility with in-tree cloud provider ##
It is not necessary to enable OpenStack as a cloud provider for the Cinder CSI driver to work.
You can, however, run both the in-tree OpenStack cloud provider and the Cinder CSI driver at the same time; the storage class provisioners associated with each of them have different names.
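For example, with both enabled you would typically see two storage classes with different provisioners (output shown for illustration; actual names depend on your `storage_classes` settings):
```
$ kubectl get storageclass
NAME         PROVISIONER                AGE
cinder       kubernetes.io/cinder       100m
cinder-csi   cinder.csi.openstack.org   100m
```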
## Cinder v2 support ##
For the moment, only Cinder v3 is supported by the CSI Driver.
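This corresponds to the `cinder_blockstorage_version` default shipped with the role:
```
cinder_blockstorage_version: "v3"
```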
## More info ##
For further information about the Cinder CSI Driver, you can refer to this page: [Cloud Provider OpenStack](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-cinder-csi-plugin.md).


@@ -14,3 +14,8 @@
# openstack_lbaas_monitor_delay: "1m"
# openstack_lbaas_monitor_timeout: "30s"
# openstack_lbaas_monitor_max_retries: "3"
## To use Cinder CSI plugin to provision volumes set this value to true
## Make sure to source in the openstack credentials
# cinder_csi_enabled: true
# cinder_csi_controller_replicas: 1


@@ -0,0 +1,16 @@
---
# To access Cinder, the CSI controller needs credentials for the
# OpenStack APIs. By default, these values are read from the
# environment.
cinder_auth_url: "{{ lookup('env','OS_AUTH_URL') }}"
cinder_username: "{{ lookup('env','OS_USERNAME') }}"
cinder_password: "{{ lookup('env','OS_PASSWORD') }}"
cinder_region: "{{ lookup('env','OS_REGION_NAME') }}"
cinder_tenant_id: "{{ lookup('env','OS_TENANT_ID')| default(lookup('env','OS_PROJECT_ID')|default(lookup('env','OS_PROJECT_NAME'),true),true) }}"
cinder_tenant_name: "{{ lookup('env','OS_TENANT_NAME') }}"
cinder_domain_name: "{{ lookup('env','OS_USER_DOMAIN_NAME') }}"
cinder_domain_id: "{{ lookup('env','OS_USER_DOMAIN_ID') }}"
cinder_cacert: "{{ lookup('env','OS_CACERT') }}"
# For now, only Cinder v3 is supported in Cinder CSI driver
cinder_blockstorage_version: "v3"
cinder_csi_controller_replicas: 1


@@ -0,0 +1,34 @@
---
- name: Cinder CSI Driver | check cinder_auth_url value
  fail:
    msg: "cinder_auth_url is missing"
  when: cinder_auth_url is not defined or not cinder_auth_url

- name: Cinder CSI Driver | check cinder_username value
  fail:
    msg: "cinder_username is missing"
  when: cinder_username is not defined or not cinder_username

- name: Cinder CSI Driver | check cinder_password value
  fail:
    msg: "cinder_password is missing"
  when: cinder_password is not defined or not cinder_password

- name: Cinder CSI Driver | check cinder_region value
  fail:
    msg: "cinder_region is missing"
  when: cinder_region is not defined or not cinder_region

- name: Cinder CSI Driver | check cinder_tenant_id value
  fail:
    msg: "one of cinder_tenant_id or cinder_trust_id must be specified"
  when:
    - cinder_tenant_id is not defined or not cinder_tenant_id
    - cinder_trust_id is not defined

- name: Cinder CSI Driver | check cinder_trust_id value
  fail:
    msg: "one of cinder_tenant_id or cinder_trust_id must be specified"
  when:
    - cinder_trust_id is not defined or not cinder_trust_id
    - cinder_tenant_id is not defined


@@ -0,0 +1,60 @@
---
- include_tasks: cinder-credential-check.yml
  tags: cinder-csi-driver

- name: Cinder CSI Driver | Write cacert file
  copy:
    src: "{{ cinder_cacert }}"
    dest: "{{ kube_config_dir }}/cinder-cacert.pem"
    group: "{{ kube_cert_group }}"
    mode: 0640
  when:
    - inventory_hostname in groups['k8s-cluster']
    - cinder_cacert is defined
    - cinder_cacert | length > 0
  tags: cinder-csi-driver

- name: Cinder CSI Driver | Write Cinder cloud-config
  template:
    src: "cinder-csi-cloud-config.j2"
    dest: "{{ kube_config_dir }}/cinder_cloud_config"
    group: "{{ kube_cert_group }}"
    mode: 0640
  when: inventory_hostname == groups['kube-master'][0]
  tags: cinder-csi-driver

- name: Cinder CSI Driver | Get base64 cloud-config
  slurp:
    src: "{{ kube_config_dir }}/cinder_cloud_config"
  register: cloud_config_secret
  when: inventory_hostname == groups['kube-master'][0]
  tags: cinder-csi-driver

- name: Cinder CSI Driver | Generate Manifests
  template:
    src: "{{ item.file }}.j2"
    dest: "{{ kube_config_dir }}/{{ item.file }}"
  with_items:
    - {name: cinder-csi-driver, file: cinder-csi-driver.yml}
    - {name: cinder-csi-cloud-config-secret, file: cinder-csi-cloud-config-secret.yml}
    - {name: cinder-csi-controllerplugin, file: cinder-csi-controllerplugin-rbac.yml}
    - {name: cinder-csi-controllerplugin, file: cinder-csi-controllerplugin.yml}
    - {name: cinder-csi-nodeplugin, file: cinder-csi-nodeplugin-rbac.yml}
    - {name: cinder-csi-nodeplugin, file: cinder-csi-nodeplugin.yml}
  register: cinder_csi_manifests
  when: inventory_hostname == groups['kube-master'][0]
  tags: cinder-csi-driver

- name: Cinder CSI Driver | Apply Manifests
  kube:
    kubectl: "{{ bin_dir }}/kubectl"
    filename: "{{ kube_config_dir }}/{{ item.item.file }}"
    state: "latest"
  with_items:
    - "{{ cinder_csi_manifests.results }}"
  when:
    - inventory_hostname == groups['kube-master'][0]
    - not item is skipped
  loop_control:
    label: "{{ item.item.file }}"
  tags: cinder-csi-driver


@@ -0,0 +1,10 @@
# This YAML file contains secret objects,
# which are necessary to run csi cinder plugin.
kind: Secret
apiVersion: v1
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud.conf: {{ cloud_config_secret.content }}


@@ -0,0 +1,26 @@
[Global]
auth-url="{{ cinder_auth_url }}"
username="{{ cinder_username }}"
password="{{ cinder_password }}"
region="{{ cinder_region }}"
{% if cinder_trust_id is defined and cinder_trust_id != "" %}
trust-id="{{ cinder_trust_id }}"
{% else %}
tenant-id="{{ cinder_tenant_id }}"
{% endif %}
{% if cinder_tenant_name is defined and cinder_tenant_name != "" %}
tenant-name="{{ cinder_tenant_name }}"
{% endif %}
{% if cinder_domain_name is defined and cinder_domain_name != "" %}
domain-name="{{ cinder_domain_name }}"
{% elif cinder_domain_id is defined and cinder_domain_id != "" %}
domain-id="{{ cinder_domain_id }}"
{% endif %}
{% if cinder_cacert is defined and cinder_cacert != "" %}
ca-file="{{ kube_config_dir }}/cinder-cacert.pem"
{% endif %}
[BlockStorage]
{% if cinder_blockstorage_version is defined %}
bs-version={{ cinder_blockstorage_version }}
{% endif %}


@@ -0,0 +1,209 @@
# This YAML file contains RBAC API objects,
# which are necessary to run csi controller plugin
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-cinder-controller-sa
  namespace: kube-system

---
# external attacher
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-binding
subjects:
  - kind: ServiceAccount
    name: csi-cinder-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-attacher-role
  apiGroup: rbac.authorization.k8s.io

---
# external Provisioner
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get", "list"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: csi-cinder-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-provisioner-role
  apiGroup: rbac.authorization.k8s.io

---
# external snapshotter
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-snapshotter-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots/status"]
    verbs: ["update"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create", "list", "watch", "delete"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-snapshotter-binding
subjects:
  - kind: ServiceAccount
    name: csi-cinder-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-snapshotter-role
  apiGroup: rbac.authorization.k8s.io

---
# External Resizer
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-resizer-role
rules:
  # The following rule should be uncommented for plugins that require secrets
  # for provisioning.
  # - apiGroups: [""]
  #   resources: ["secrets"]
  #   verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-resizer-binding
subjects:
  - kind: ServiceAccount
    name: csi-cinder-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-resizer-role
  apiGroup: rbac.authorization.k8s.io

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: external-resizer-cfg
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-resizer-role-cfg
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: csi-cinder-controller-sa
    namespace: kube-system
roleRef:
  kind: Role
  name: external-resizer-cfg
  apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,109 @@
# This YAML file contains CSI Controller Plugin Sidecars
# external-attacher, external-provisioner, external-snapshotter
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-cinder-controllerplugin
  namespace: kube-system
spec:
  replicas: {{ cinder_csi_controller_replicas }}
  selector:
    matchLabels:
      app: csi-cinder-controllerplugin
  template:
    metadata:
      labels:
        app: csi-cinder-controllerplugin
    spec:
      serviceAccount: csi-cinder-controller-sa
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.2.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.3.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v1.2.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
        - name: csi-resizer
          image: quay.io/k8scsi/csi-resizer:v0.2.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: cinder-csi-plugin
          image: docker.io/k8scloudprovider/cinder-csi-plugin:latest
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
{% if cinder_cacert is defined %}
            - name: cinder-cacert
              mountPath: {{ kube_config_dir }}/cinder-cacert.pem
              readOnly: true
{% endif %}
      volumes:
        - name: socket-dir
          emptyDir:
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
{% if cinder_cacert is defined %}
        - name: cinder-cacert
          hostPath:
            path: {{ kube_config_dir }}/cinder-cacert.pem
            type: FileOrCreate
{% endif %}


@@ -0,0 +1,7 @@
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: cinder.csi.openstack.org
spec:
  attachRequired: true
  podInfoOnMount: false


@@ -0,0 +1,30 @@
# This YAML defines all API objects to create RBAC roles for csi node plugin.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-cinder-node-sa
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-nodeplugin-role
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-nodeplugin-binding
subjects:
  - kind: ServiceAccount
    name: csi-cinder-node-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: csi-nodeplugin-role
  apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,116 @@
# This YAML file contains driver-registrar & csi driver nodeplugin API objects,
# which are necessary to run csi nodeplugin for cinder.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-nodeplugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-cinder-nodeplugin
  template:
    metadata:
      labels:
        app: csi-cinder-nodeplugin
    spec:
      serviceAccount: csi-cinder-node-sa
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          args:
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: cinder-csi-plugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker.io/k8scloudprovider/cinder-csi-plugin:latest
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
{% if cinder_cacert is defined %}
            - name: cinder-cacert
              mountPath: {{ kube_config_dir }}/cinder-cacert.pem
              readOnly: true
{% endif %}
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: pods-cloud-data
          hostPath:
            path: /var/lib/cloud/data
            type: Directory
        - name: pods-probe-dir
          hostPath:
            path: /dev
            type: Directory
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
{% if cinder_cacert is defined %}
        - name: cinder-cacert
          hostPath:
            path: {{ kube_config_dir }}/cinder-cacert.pem
            type: FileOrCreate
{% endif %}


@@ -29,6 +29,13 @@ dependencies:
      - apps
      - metrics_server
  - role: kubernetes-apps/csi_driver/cinder
    when:
      - cinder_csi_enabled
    tags:
      - apps
      - cinder-csi-driver
  - role: kubernetes-apps/persistent_volumes
    when:
      - persistent_volumes_enabled


@@ -0,0 +1,6 @@
---
storage_classes:
  - name: cinder-csi
    is_default: false
    parameters:
      availability: nova


@@ -0,0 +1,19 @@
---
- name: Kubernetes Persistent Volumes | Copy Cinder CSI Storage Class template
  template:
    src: "cinder-csi-storage-class.yml.j2"
    dest: "{{ kube_config_dir }}/cinder-csi-storage-class.yml"
  register: manifests
  when:
    - inventory_hostname == groups['kube-master'][0]

- name: Kubernetes Persistent Volumes | Add Cinder CSI Storage Class
  kube:
    name: cinder-csi
    kubectl: "{{ bin_dir }}/kubectl"
    resource: StorageClass
    filename: "{{ kube_config_dir }}/cinder-csi-storage-class.yml"
    state: "latest"
  when:
    - inventory_hostname == groups['kube-master'][0]
    - manifests.changed


@@ -0,0 +1,14 @@
{% for class in storage_classes %}
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: "{{ class.name }}"
  annotations:
    storageclass.kubernetes.io/is-default-class: "{{ class.is_default | default(false) }}"
provisioner: cinder.csi.openstack.org
parameters:
{% for key, value in (class.parameters | default({})).items() %}
  "{{ key }}": "{{ value }}"
{% endfor %}
{% endfor %}


@@ -6,3 +6,10 @@ dependencies:
      - cloud_provider in [ 'openstack' ]
    tags:
      - persistent_volumes_openstack
  - role: kubernetes-apps/persistent_volumes/cinder-csi
    when:
      - cinder_csi_enabled
    tags:
      - persistent_volumes_cinder_csi
      - cinder-csi-driver


@@ -300,6 +300,7 @@ metrics_server_enabled: false
enable_network_policy: true
local_volume_provisioner_enabled: "{{ local_volumes_enabled | default('false') }}"
local_volume_provisioner_directory_mode: 0700
cinder_csi_enabled: false
persistent_volumes_enabled: false
cephfs_provisioner_enabled: false
rbd_provisioner_enabled: false