Taxonomy Icon

Containers

Kubernetes is a very popular platform for container orchestration, supported across a broad range of cloud providers. Kubernetes runs container workloads as processes with ephemeral filesystems. This poses a problem for workloads that need storage persistence, or for cases where multiple containers in a pod need access to shared storage. To address this, Kubernetes provides persistent volume resources, which can be associated with a container through a persistent volume claim.

For a Kubernetes cluster with multiple worker nodes, the cluster admin needs to create persistent volumes that are mountable by containers running on any node and that match the capacity and access requirements in each persistent volume claim. Cloud provider managed Kubernetes clusters from IBM, Google, AWS, and others support dynamic volume provisioning. As a developer, you can request a dynamically provisioned persistent volume from these services by including a storage class in your persistent volume claim.
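For example, a claim that asks for dynamically provisioned storage is just a PersistentVolumeClaim that names a storage class. Here is a minimal sketch; the claim name and class name are illustrative, so use whatever classes your cluster actually defines:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data                  # hypothetical claim name
spec:
  storageClassName: standard     # example class name; use one defined in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi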

But what about a small Kubernetes cluster that you manage yourself as a developer, one that may not include a built-in dynamic storage provisioner? Is there a way to add a dynamic storage provisioner to these Kubernetes clusters?

Learning objectives

In this tutorial, you will see how to add a dynamic NFS provisioner that runs as a container in a development Kubernetes cluster. Once you have it running, we'll walk through deploying a helm chart for a workload that uses persistence on the cluster. These instructions are adapted from the Kubernetes 1.4+ nfs-provisioner examples in the kubernetes-incubator external-storage repository.

Prerequisites

The Kubernetes resource files included here are based on APIs available in Kubernetes 1.6 and later. Examples and instructions are given for IBM Cloud Private Community Edition and the IBM Container Service Lite plan. Worker nodes in your cluster will need an NFS client installed to be able to mount the created volumes.
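As a rough guide, installing an NFS client on a worker node looks something like the following; the exact package name depends on the node's Linux distribution:

# Debian/Ubuntu worker nodes
sudo apt-get install -y nfs-common

# RHEL/CentOS worker nodes
sudo yum install -y nfs-utils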

You will need a workstation with kubectl installed to configure the dynamic provisioner and helm installed to deploy the example.

Estimated time

Adding the NFS dynamic provisioner and testing it out with a sample helm chart should take about 10 to 15 minutes.

Steps

Adding the NFS dynamic provisioner

Before you start, identify a node in the cluster to provide the backing storage used by the dynamic provisioner. In these instructions, the path used on the node is /storage/dynamic. If you want to use a different path, change it in the first step and also in the nfs-deployment.yaml file. This node is where the nfs-provisioner pod will run; the pod listens for storage requests, then creates directories and exports them over NFS for use by your workloads.
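If you do change the path, the setting to edit in nfs-deployment.yaml is the hostPath volume that backs the provisioner's export directory. A rough sketch of that section (the volume name shown here is illustrative, not necessarily the exact name in the file):

volumes:
  - name: export-volume
    hostPath:
      path: /storage/dynamic    # change this if you use a different path on the node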

Create the storage path

On the node that will provide the backing storage, open a shell and create a directory for use by the nfs-provisioner pod. For example, if you are using the vagrant-based IBM Cloud Private Community Edition install, use these commands to create the path on the master node:

vagrant ssh
sudo mkdir -p /storage/dynamic
exit

Note: The IBM Container Service Lite plan does not provide shell access to the worker nodes, but the hostPath requested in the container deployment spec is created automatically on the worker.

Configure and deploy nfs provisioner pod

In this step, you’ll set up the nfs-provisioner deployment so that the pod starts on the intended node. In some cases, this can be as simple as providing a specific nodeSelector in the deployment. In other cases, it can be a little more complex.

In Kubernetes, nodes can be marked with taints, which prevent pods from being scheduled on them unless a pod declares a matching toleration. For example, with IBM Cloud Private Community Edition, the master node has a taint to prevent ordinary workloads from being scheduled there. To allow the nfs-provisioner pod to start on the master node, a toleration is added to the deployment file.

To see how this works, download the nfs-deployment-icp.yaml file to your workstation.

In this file, the spec on lines 35 to 41 provides a toleration for the taint on the master node and a nodeSelector targeting the master:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "master"
  effect: "NoSchedule"
nodeSelector:
  role: master

To create the deployment on IBM Cloud Private use:

$ kubectl create -f nfs-deployment-icp.yaml
service "nfs-provisioner" created
deployment "nfs-provisioner" created

If you are deploying to a cluster on the IBM Container Service Lite plan where there is only a single worker, remove lines 35 to 41 or just download nfs-deployment-iks.yaml and then deploy with kubectl.

For another Kubernetes cluster type, inspect the labels and taints on your target node using kubectl describe node <nodename> and update the toleration and nodeSelector as needed.
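For example, if the target node has no convenient label, you can add one yourself and point the deployment's nodeSelector at it; the label key and value here are just an example:

kubectl label node <nodename> storage-node=nfs

and then in the deployment spec:

nodeSelector:
  storage-node: nfs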

Watch the deployment of the nfs-provisioner pod until it shows a Running status:

$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
nfs-provisioner-1862722505-ncbdc   1/1       Running   0          13s

Define storage class

With the pod running, use nfs-class.yaml to create a StorageClass resource named nfs-dynamic. You'll use this name whenever you specify a storage class for dynamic provisioning.
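If you're curious what the file contains, nfs-class.yaml is a small StorageClass definition along these lines (an approximation; the provisioner value must match the name the nfs-provisioner registers, which is example.com/nfs in the kubernetes-incubator examples):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs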

$ kubectl create -f nfs-class.yaml
storageclass "nfs-dynamic" created

Test with a Persistent Volume Claim

Test out the storage class and provisioner with nfs-test-claim.yaml:

$ kubectl create -f nfs-test-claim.yaml
persistentvolumeclaim "nfs" created

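For reference, the test claim is a small manifest along these lines (an approximation of nfs-test-claim.yaml; the size and access mode may differ) that requests storage from the nfs-dynamic class:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: nfs-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi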
Use kubectl describe to get details on the test claim:

$ kubectl describe pvc nfs
Name:         nfs
Namespace:    default
StorageClass: nfs-dynamic
Status:       Bound
Volume:       pvc-edc83458-dbb3-11e7-9873-d206e53e41bd
...

Note: Your deployment will show a different volume name. If you want to remove the test claim, use `kubectl delete -f nfs-test-claim.yaml`.

Use dynamic storage with an example helm chart

With the test working successfully, you can now deploy a helm chart that uses persistence by specifying the storage class that has been defined for the provisioner.

For example, to deploy the stable/redis chart and override its values.yaml setting for the storage class:

helm install --name myredis --set persistence.storageClass=nfs-dynamic stable/redis
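The --set flag is equivalent to supplying a small values override file, assuming the chart's persistence settings follow the structure used by stable/redis at the time:

# values-nfs.yaml (hypothetical override file)
persistence:
  enabled: true
  storageClass: nfs-dynamic

helm install --name myredis -f values-nfs.yaml stable/redis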

In the resource list from the helm chart installation, you will see the persistent volume claim created and bound by the nfs-provisioner:

    RESOURCES:
    ==> v1/Service
    NAME           CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
    myredis-redis  172.21.167.105  <none>       6379/TCP  1s

    ==> v1beta1/Deployment
    NAME           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    myredis-redis  1        1        1           0          1s

    ==> v1/Secret
    NAME           TYPE    DATA  AGE
    myredis-redis  Opaque  1     1s

    ==> v1/PersistentVolumeClaim
    NAME           STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
    myredis-redis  Bound   pvc-a2ca387c-dea4-11e7-9873-d206e53e41bd  8Gi       RWO          nfs-dynamic   1s

Optionally, you can verify that data is being persisted with these steps.

Create a redis-cli client container by following the steps shown in the chart's notes as the helm install finishes:

$ kubectl run myredis-redis-client --rm --tty -i \
    --env REDIS_PASSWORD=$REDIS_PASSWORD \
    --image bitnami/redis:4.0.2-r1 -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@myredis-redis-client-2302152893-csrzz:/$ redis-cli -h myredis-redis -a $REDIS_PASSWORD
myredis-redis:6379> set foo bar
myredis-redis:6379> bgsave
Background saving started
myredis-redis:6379> exit

With some data added and saved, use kubectl to get the name of your nfs-provisioner pod. Then run a shell on the pod and use ls to verify the existence of the dump file:

$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
myredis-redis-1929873755-5g4kw     1/1       Running   0          20m
nfs-provisioner-1862722505-ncbdc   1/1       Running   0          3d
$ kubectl exec -it nfs-provisioner-1862722505-ncbdc -- bash
[root@nfs-provisioner-1862722505-ncbdc /]# ls /export/pvc-a2ca387c-dea4-11e7-9873-d206e53e41bd/redis/data
appendonly.aof  dump.rdb

Note: In the ls command, replace the volume path with the one shown in your helm deployment resources.

When you’re finished testing, you can remove the redis deployment using:

helm delete --purge myredis

Summary

Now you have a dynamic provisioner available for use with helm charts and other Kubernetes container deployments that need persistent storage. This approach of running an NFS server in a container has performance limitations, so it is best suited to development use cases. For production, a clustered filesystem such as GlusterFS, a dedicated external NFS server, or a cloud provider's dynamic storage provisioner will provide much higher performance and scalability.