Update local-volume-provisioner (#9463)

- Update and re-work the documentation:
  - Update links
  - Fix formatting (especially for lists)
  - Remove documentation about `useAlphaApi`,
    a flag only for k8s versions < v1.10
  - Attempt to clarify the doc
- Update to version 1.5.0
- Remove PodSecurityPolicy (deprecated in k8s v1.21+)
- Update ClusterRole following upstream
  (cf https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/pull/292)
- Add nodeSelector to DaemonSet (following upstream)
Olivier Lemasle 2022-11-08 00:28:17 +01:00 committed by GitHub
parent a731e25778
commit 5d1fe64bc8
12 changed files with 85 additions and 175 deletions


@ -170,7 +170,7 @@ Note: Upstart/SysV init based OS types are not supported.
- [cinder-csi-plugin](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md) v1.22.0
- [gcp-pd-csi-plugin](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver) v1.4.0
- [local-path-provisioner](https://github.com/rancher/local-path-provisioner) v0.0.22
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.4.0
- [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) v2.5.0
## Container Runtime Notes


@ -1,10 +1,11 @@
# Local Storage Provisioner
# Local Static Storage Provisioner
The [local storage provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume)
The [local static storage provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner)
is NOT a dynamic storage provisioner as you would
expect from a cloud provider. Instead, it simply creates PersistentVolumes for
all mounts under the host_dir of the specified storage class.
all mounts under the `host_dir` of the specified storage class.
These storage classes are specified in the `local_volume_provisioner_storage_classes` nested dictionary.
Example:
```yaml
@ -16,15 +17,18 @@ local_volume_provisioner_storage_classes:
host_dir: /mnt/fast-disks
mount_dir: /mnt/fast-disks
block_cleaner_command:
- "/scripts/shred.sh"
- "2"
- "/scripts/shred.sh"
- "2"
volume_mode: Filesystem
fs_type: ext4
```
For each key in `local_volume_provisioner_storage_classes` a storageClass with the
same name is created. The subkeys of each storage class are converted to camelCase and added
as attributes to the storageClass.
For each key in `local_volume_provisioner_storage_classes`, a "storage class" with
the same name is created in the `storageClassMap` entry of the `local-volume-provisioner` ConfigMap.
The subkeys of each storage class in `local_volume_provisioner_storage_classes`
are converted to camelCase and added as attributes to the storage class in the
ConfigMap.
The result of the above example is:
```yaml
@ -43,80 +47,85 @@ data:
fsType: ext4
```
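The rendered result is only partially visible in this diff (the `data:` block ending with `fsType: ext4`). As a rough sketch, assuming the upstream `storageClassMap` layout and the `fast-disks` class defined above, the full ConfigMap likely looks something like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-provisioner
  # namespace is set from local_volume_provisioner_namespace
data:
  storageClassMap: |
    fast-disks:
      hostDir: /mnt/fast-disks
      mountDir: /mnt/fast-disks
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: ext4
```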
The default StorageClass is local-storage on /mnt/disks,
the rest of this doc will use that path as an example.
Additionally, a StorageClass object (`storageclasses.storage.k8s.io`) is
created for each storage class:
```bash
$ kubectl get storageclasses.storage.k8s.io
NAME            PROVISIONER                    RECLAIMPOLICY
fast-disks      kubernetes.io/no-provisioner   Delete
local-storage   kubernetes.io/no-provisioner   Delete
```
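The output above confirms the provisioner and reclaim policy; as a sketch, each generated StorageClass object likely looks roughly like this (the `volumeBindingMode` value is an assumption based on common practice for local volumes, not confirmed by this diff):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner  # confirmed by the kubectl output above
reclaimPolicy: Delete                      # confirmed by the kubectl output above
volumeBindingMode: WaitForFirstConsumer    # assumption: usual setting for local PVs
```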
The default StorageClass is `local-storage` on `/mnt/disks`;
the rest of this documentation will use that path as an example.
## Examples to create local storage volumes
1. tmpfs method:
1. Using tmpfs
``` bash
for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```
```bash
for vol in vol1 vol2 vol3; do
mkdir /mnt/disks/$vol
mount -t tmpfs -o size=5G $vol /mnt/disks/$vol
done
```
The tmpfs method is not recommended for production because the mount is not
persistent and data will be deleted on reboot.
The tmpfs method is not recommended for production because the mounts are not
persistent and data will be deleted on reboot.
1. Mount physical disks
``` bash
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```
```bash
mkdir /mnt/disks/ssd1
mount /dev/vdb1 /mnt/disks/ssd1
```
Physical disks are recommended for production environments because it offers
complete isolation in terms of I/O and capacity.
Physical disks are recommended for production environments because they offer
complete isolation in terms of I/O and capacity.
1. Mount unpartitioned physical devices
``` bash
for disk in /dev/sdc /dev/sdd /dev/sde; do
ln -s $disk /mnt/disks
done
```
```bash
for disk in /dev/sdc /dev/sdd /dev/sde; do
ln -s $disk /mnt/disks
done
```
This saves time of precreating filesystems. Note that your storageclass must have
volume_mode set to "Filesystem" and fs_type defined. If either is not set, the
disk will be added as a raw block device.
This saves the time of pre-creating filesystems. Note that your storage class must have
`volume_mode` set to `"Filesystem"` and `fs_type` defined. If either is not set, the
disk will be added as a raw block device.
1. PersistentVolumes with `volumeMode="Block"`
Just like above, you can create PersistentVolumes with volumeMode `Block`
by creating a symbolic link under the discovery directory to the block device on
the node, provided you set `volume_mode` to `"Block"`. Such a volume is
presented to a Pod as a raw block device, without any filesystem on it.
1. File-backed sparse file method
``` bash
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```
```bash
truncate /mnt/disks/disk5 --size 2G
mkfs.ext4 /mnt/disks/disk5
mkdir /mnt/disks/vol5
mount /mnt/disks/disk5 /mnt/disks/vol5
```
If you have a development environment and only one disk, this is the best way
to limit the quota of persistent volumes.
If you have a development environment and only one disk, this is the best way
to limit the quota of persistent volumes.
1. Simple directories
In a development environment using `mount --bind` works also, but there is no capacity
management.
1. Block volumeMode PVs
Create a symbolic link under discovery directory to the block device on the node. To use
raw block devices in pods, volume_type should be set to "Block".
In a development environment, using `mount --bind` also works, but there is no capacity
management (see the sketch after this list).
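A minimal sketch of the bind-mount approach referenced in the "Simple directories" item above (the source directory is illustrative):
```bash
# Bind-mount an ordinary directory into the discovery directory.
# Note: no capacity management -- the PV reports the whole filesystem size.
mkdir -p /var/local/vol6 /mnt/disks/vol6
mount --bind /var/local/vol6 /mnt/disks/vol6
```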
## Usage notes
Beta PV.NodeAffinity field is used by default. If running against an older K8s
version, the useAlphaAPI flag must be set in the configMap.
The volume provisioner cannot calculate volume sizes correctly, so you should
delete the daemonset pod on the relevant host after creating volumes. The pod
will be recreated and read the size correctly.
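A sketch of that restart, assuming the default `kube-system` namespace (the exact pod name is generated by the DaemonSet):
```bash
# Find the provisioner pod running on the node where the volume was added
kubectl -n kube-system get pods -o wide | grep local-volume-provisioner
# Delete it; the DaemonSet recreates it and the new pod reads the correct size
kubectl -n kube-system delete pod <local-volume-provisioner-pod-on-that-node>
```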
Make sure to make any mounts persist via /etc/fstab or with systemd mounts (for
Flatcar Container Linux). Pods with persistent volume claims will not be
Make sure any mounts persist via `/etc/fstab` or with systemd mount units (for
Flatcar Container Linux or Fedora CoreOS). Pods with persistent volume claims will not be
able to start if the mounts become unavailable.
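For the physical-disk example above (`/dev/vdb1` mounted on `/mnt/disks/ssd1`), a sketch of a matching `/etc/fstab` entry:
```bash
# /etc/fstab -- device, mount point and fs type taken from the ssd1 example above
/dev/vdb1  /mnt/disks/ssd1  ext4  defaults  0  2
```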
## Further reading
Refer to the upstream docs here: <https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume>
Refer to the upstream docs here: <https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner>


@ -988,7 +988,7 @@ registry_image_tag: "{{ registry_version }}"
metrics_server_version: "v0.6.1"
metrics_server_image_repo: "{{ kube_image_repo }}/metrics-server/metrics-server"
metrics_server_image_tag: "{{ metrics_server_version }}"
local_volume_provisioner_version: "v2.4.0"
local_volume_provisioner_version: "v2.5.0"
local_volume_provisioner_image_repo: "{{ kube_image_repo }}/sig-storage/local-volume-provisioner"
local_volume_provisioner_image_tag: "{{ local_volume_provisioner_version }}"
cephfs_provisioner_version: "v2.1.0-k8s1.11"


@ -6,9 +6,9 @@ local_volume_provisioner_nodelabels: []
# - topology.kubernetes.io/region
# - topology.kubernetes.io/zone
local_volume_provisioner_tolerations: []
# Levarages Ansibles string to Python datatype casting. Otherwise the dict_key isn't substituted
# see https://github.com/ansible/ansible/issues/17324
local_volume_provisioner_use_node_name_only: false
# Leverages Ansible's string to Python datatype casting. Otherwise the dict_key isn't substituted.
# see https://github.com/ansible/ansible/issues/17324
local_volume_provisioner_storage_classes: |
{
"{{ local_volume_provisioner_storage_class | default('local-storage') }}": {
@ -16,6 +16,5 @@ local_volume_provisioner_storage_classes: |
"mount_dir": "{{ local_volume_provisioner_mount_dir | default('/mnt/disks') }}",
"volume_mode": "Filesystem",
"fs_type": "ext4"
}
}
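A sketch of overriding this default from inventory group_vars instead of editing the role defaults, reusing the `fast-disks` example from the documentation above (the file path is illustrative):
```yaml
# e.g. inventory/<cluster>/group_vars/k8s_cluster/addons.yml (illustrative path)
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/disks
    mount_dir: /mnt/disks
  fast-disks:
    host_dir: /mnt/fast-disks
    mount_dir: /mnt/fast-disks
    block_cleaner_command:
      - "/scripts/shred.sh"
      - "2"
    volume_mode: Filesystem
    fs_type: ext4
```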


@ -24,17 +24,6 @@
- { name: local-volume-provisioner-cm, file: local-volume-provisioner-cm.yml, type: cm }
- { name: local-volume-provisioner-ds, file: local-volume-provisioner-ds.yml, type: ds }
- { name: local-volume-provisioner-sc, file: local-volume-provisioner-sc.yml, type: sc }
local_volume_provisioner_templates_for_psp_not_system_ns:
- { name: local-volume-provisioner-psp, file: local-volume-provisioner-psp.yml, type: psp }
- { name: local-volume-provisioner-psp-role, file: local-volume-provisioner-psp-role.yml, type: role }
- { name: local-volume-provisioner-psp-rb, file: local-volume-provisioner-psp-rb.yml, type: rolebinding }
- name: Local Volume Provisioner | Insert extra templates to Local Volume Provisioner templates list for PodSecurityPolicy
set_fact:
local_volume_provisioner_templates: "{{ local_volume_provisioner_templates[:2] + local_volume_provisioner_templates_for_psp_not_system_ns + local_volume_provisioner_templates[2:] }}"
when:
- podsecuritypolicy_enabled
- local_volume_provisioner_namespace != "kube-system"
- name: Local Volume Provisioner | Create manifests
template:


@ -5,6 +5,18 @@ metadata:
name: local-volume-provisioner-node-clusterrole
namespace: {{ local_volume_provisioner_namespace }}
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["watch"]
- apiGroups: ["", "events.k8s.io"]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get"]


@ -1,20 +1,6 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-volume-provisioner-system-persistent-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
subjects:
- kind: ServiceAccount
name: local-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
roleRef:
kind: ClusterRole
name: system:persistent-volume-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-volume-provisioner-system-node
namespace: {{ local_volume_provisioner_namespace }}


@ -20,6 +20,8 @@ spec:
spec:
priorityClassName: {% if local_volume_provisioner_namespace == 'kube-system' %}system-node-critical{% else %}k8s-cluster-critical{% endif %}{{''}}
serviceAccountName: local-volume-provisioner
nodeSelector:
kubernetes.io/os: linux
{% if local_volume_provisioner_tolerations %}
tolerations:
{{ local_volume_provisioner_tolerations | to_nice_yaml(indent=2) | indent(width=8) }}


@ -1,14 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: psp:local-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
rules:
- apiGroups:
- policy
resourceNames:
- local-volume-provisioner
resources:
- podsecuritypolicies
verbs:
- use


@ -1,13 +0,0 @@
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: psp:local-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
subjects:
- kind: ServiceAccount
name: local-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
roleRef:
kind: ClusterRole
name: psp:local-volume-provisioner
apiGroup: rbac.authorization.k8s.io


@ -1,15 +0,0 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: psp:local-volume-provisioner
namespace: {{ local_volume_provisioner_namespace }}
rules:
- apiGroups:
- policy
resourceNames:
- local-volume-provisioner
resources:
- podsecuritypolicies
verbs:
- use


@ -1,45 +0,0 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: local-volume-provisioner
annotations:
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
{% if apparmor_enabled %}
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
{% endif %}
labels:
addonmanager.kubernetes.io/mode: Reconcile
spec:
privileged: true
allowPrivilegeEscalation: true
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'secret'
- 'downwardAPI'
- 'hostPath'
allowedHostPaths:
{% for class_name, class_config in local_volume_provisioner_storage_classes.items() %}
- pathPrefix: "{{ class_config.host_dir }}"
readOnly: false
{% endfor %}
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false