kubespray/roles/kubernetes-apps/external_provisioner/cephfs_provisioner
Wong Hoi Sing Edison 1a07c87af7 cephfs-provisioner: Upgrade to v2.0.0-k8s1.11
Upstream Changes:

-   cephfs-provisioner v2.0.0-k8s1.11 (https://github.com/kubernetes-incubator/external-storage/releases/tag/cephfs-provisioner-v2.0.0-k8s1.11)
-   Update ClusterRole

Our Changes:

-   Fix typo in defaults/main.yml (rs -> deploy)
-   Manifests cleanup
2018-08-17 12:41:56 +08:00
defaults cephfs-provisioner: Upgrade to 06fddbe2 2018-07-03 10:15:24 +08:00
tasks cephfs-provisioner: Upgrade to v2.0.0-k8s1.11 2018-08-17 12:41:56 +08:00
templates cephfs-provisioner: Upgrade to v2.0.0-k8s1.11 2018-08-17 12:41:56 +08:00
README.md CephFS Provisioner Addon Fixup 2018-03-22 23:03:13 +08:00

README.md

CephFS Volume Provisioner for Kubernetes 1.5+


Using Ceph volume client

Development

Compile the provisioner

make

Make the container image and push to the registry

make push

Test instructions

  • Start Kubernetes local cluster

See https://kubernetes.io/.

  • Create a Ceph admin secret
ceph auth get client.admin 2>&1 | grep "key = " | awk '{print $3}' | xargs echo -n > /tmp/secret
kubectl create ns cephfs
kubectl create secret generic ceph-secret-admin --from-file=/tmp/secret --namespace=cephfs
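The quoting in the key-extraction pipeline is easy to get wrong (the awk program must be '{print $3}', with the closing brace inside the quotes). The sketch below runs the same pipeline against a hypothetical sample of `ceph auth get client.admin` output, so you can check the extraction without touching a live cluster; the sample text and fake key are assumptions, not real output.

```shell
# Hypothetical sample of `ceph auth get client.admin` output; the fake key
# below is illustrative only -- run the real command against your cluster.
sample='[client.admin]
    key = AQDsampleonlyfakekey0123456789abcdef=='

# Same pipeline as above: isolate the "key = ..." line, take the third
# whitespace-separated field, and trim the trailing newline.
key=$(printf '%s\n' "$sample" | grep "key = " | awk '{print $3}' | xargs echo -n)
echo "$key"
```

In the real workflow the extracted key is redirected to /tmp/secret and loaded into the Secret with --from-file, as shown above.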
  • Start CephFS provisioner

The following example uses cephfs-provisioner-1 as the identity of the instance and assumes the kubeconfig directory is /root/.kube. The identity should remain the same across provisioner restarts. If multiple provisioners are running, each must use a distinct identity.

docker run -ti -v /root/.kube:/kube -v /var/run/kubernetes:/var/run/kubernetes --privileged --net=host  cephfs-provisioner /usr/local/bin/cephfs-provisioner -master=http://127.0.0.1:8080 -kubeconfig=/kube/config -id=cephfs-provisioner-1

Alternatively, deploy it in Kubernetes (see deployment).
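As a rough sketch of what such an in-cluster deployment might look like (the image tag, namespace, and ServiceAccount name here are assumptions; the actual manifests ship with the project):

```yaml
# Hypothetical minimal Deployment for the provisioner; adjust image,
# namespace, and serviceAccountName to match your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: cephfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      serviceAccountName: cephfs-provisioner
      containers:
        - name: cephfs-provisioner
          image: quay.io/external_storage/cephfs-provisioner:v2.0.0-k8s1.11
          command: ["/usr/local/bin/cephfs-provisioner"]
          args: ["-id=cephfs-provisioner-1"]
```

Running in-cluster, the provisioner picks up its service-account credentials, so the -master and -kubeconfig flags from the docker run example are not needed.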

  • Create a CephFS Storage Class

Replace the Ceph monitor's IP in example/class.yaml with your own, then create the storage class:

kubectl create -f example/class.yaml
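For orientation, example/class.yaml looks roughly like the following; the monitor address is a placeholder to replace with your own, and the secret name and namespace must match the ceph-secret-admin Secret created earlier:

```yaml
# Sketch of example/class.yaml; monitors is a placeholder value.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.6:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: cephfs
```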
  • Create a claim
kubectl create -f example/claim.yaml
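A claim against that class might look like the sketch below (names and the requested size are illustrative; storageClassName assumes Kubernetes 1.6+, older clusters use the volume.beta.kubernetes.io/storage-class annotation instead):

```yaml
# Sketch of example/claim.yaml; name and storage request are illustrative.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```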
  • Create a Pod using the claim
kubectl create -f example/test-pod.yaml
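The test Pod simply mounts the claim; a minimal sketch (image and paths are assumptions) is:

```yaml
# Sketch of example/test-pod.yaml: mount the PVC and write a marker file.
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
      volumeMounts:
        - name: cephfs-volume
          mountPath: /mnt
  volumes:
    - name: cephfs-volume
      persistentVolumeClaim:
        claimName: claim1
```

If provisioning succeeded, the Pod starts and /mnt/SUCCESS appears on the CephFS-backed volume.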

Known limitations

  • Kernel CephFS doesn't work with SELinux; setting an SELinux label in the Pod's securityContext will not work.
  • Kernel CephFS doesn't support quota or capacity accounting, so the capacity requested by a PVC is neither enforced nor validated.
  • Currently, each Ceph user created by the provisioner has the "allow r" MDS cap to permit CephFS mounts.

Acknowledgement

Inspired by the CephFS Manila provisioner and conversations with John Spray.