For one of our important customers, we are working on a major project to migrate critical applications to containers. From implementing the Kubernetes architecture to deploying applications and administering the platform, we are responsible for a substantial technology stack that brings new challenges for our team.
One of the challenges, both important and exciting, is the implementation and management of Kubernetes clusters on our own infrastructure. We have deployed a Kubernetes cluster in VMs, based on VMware.
As you know, one of the challenges of containerization is storage management. Are we running stateless or stateful applications? For stateful applications, the way the data generated by the application is stored is critical.
Therefore, based on our infrastructure, we have two possibilities:
- The first is to use the certified plugin provided by VMware to create a StorageClass for our cluster and let it manage the persistent volumes itself: https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/overview.html
- The second is to use an NFS server. This is a simpler choice, but you have to manage your persistent volumes yourself (manually).
Here is a diagram representing the two solutions:
Configuring NFS storage for Kubernetes
The Kubernetes infrastructure is composed of the following:
- k8s-master
- k8s-worker1
- k8s-worker2
In addition, we have an NFS server to store our cluster data. In the next steps, we are going to expose the NFS share as a cluster object. We will create Kubernetes Persistent Volumes and Persistent Volume Claims for our application.
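For reference, the share on the NFS server side can be exposed with an export similar to the following. This is a minimal sketch: the packages, export path, network range and export options are assumptions and must be adapted to your own environment.

```shell
# On the NFS server (Amazon Linux 2 here) -- assumed setup commands:
# sudo yum install -y nfs-utils
# sudo systemctl enable --now nfs-server

# /etc/exports -- export the data directory to the cluster subnet
# (rw: read-write, sync: commit writes to disk before replying,
#  no_root_squash: allow container processes running as root to write as root)
/home/ec2-user/data 10.3.0.0/16(rw,sync,no_root_squash)

# Reload the export table after editing /etc/exports:
# sudo exportfs -ra
```

The `no_root_squash` option is convenient for a demo (our Alpine container runs as root), but you may prefer mapping root to an unprivileged user in production.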
Persistent Volume Creation
Define the persistent volume at the cluster level as follows:
[ec2-user@ip-10-3-1-217 ~]$ vi create-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-demo
  labels:
    app: nfs
    type: data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  nfs:
    path: /home/ec2-user/data
    server: ec2-3-88-194-14.compute-1.amazonaws.com
  persistentVolumeReclaimPolicy: Retain
Create the persistent volume and see the results:
[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pv.yaml
persistentvolume/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-demo   10Gi       RWO            Retain           Available                                   7s
Once it’s created, we can create a Persistent Volume Claim (PVC). A PVC is dedicated to a specific namespace.
First, create the nfs-demo namespace, then the PVC.
[ec2-user@ip-10-3-1-217 ~]$ kubectl create ns nfs-demo
namespace/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ vi create-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo
  namespace: nfs-demo
  labels:
    app: nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      app: nfs
      type: data
[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pvc.yaml
persistentvolumeclaim/nfs-demo created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pvc -n nfs-demo
NAME       STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-demo   Bound    nfs-demo   10Gi       RWO                           3m21s
We can now see that our persistent volume has changed its status from “Available” to “Bound”.
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
nfs-demo   10Gi       RWO            Retain           Bound    nfs-demo/nfs-demo                           169m
Finally, let’s now deploy our workload, which will consume the Persistent Volume Claim and the Persistent Volume. Whatever workload API object you are using (Deployment, StatefulSet or DaemonSet), the Persistent Volume Claim is referenced within the Pod specification, as follows:
[ec2-user@ip-10-3-1-217 ~]$ vi create-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
  namespace: nfs-demo
spec:
  containers:
    - name: nfs-demo
      image: alpine
      volumeMounts:
        - name: nfs-demo
          mountPath: /data/nfs
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: nfs-demo
      persistentVolumeClaim:
        claimName: nfs-demo
[ec2-user@ip-10-3-1-217 ~]$ kubectl create -f create-pod.yaml
pod/nfs-pod created
[ec2-user@ip-10-3-1-217 ~]$ kubectl get pods -o wide -n nfs-demo
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE                         NOMINATED NODE   READINESS GATES
nfs-pod   1/1     Running   0          9s    192.168.37.68   ip-10-3-0-143.ec2.internal
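As mentioned above, the same claim can be referenced from any workload object. As an illustration, here is what a Deployment reusing the nfs-demo PVC could look like (a sketch only; the name, image and replica count are assumptions, not part of the demo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-deployment
  namespace: nfs-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-demo
  template:
    metadata:
      labels:
        app: nfs-demo
    spec:
      containers:
        - name: nfs-demo
          image: alpine
          command: ["/bin/sh", "-c", "sleep 500000"]
          volumeMounts:
            - name: nfs-demo
              mountPath: /data/nfs
      volumes:
        - name: nfs-demo
          persistentVolumeClaim:
            claimName: nfs-demo
```

Note that with the ReadWriteOnce access mode declared above, all Pods mounting the claim must land on the same node; NFS also supports the ReadWriteMany access mode if you need concurrent access from several nodes.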
Let’s now create an empty file in the container’s volume mount path and check whether it has been created on the NFS server.
[ec2-user@ip-10-3-1-217 ~]$ kubectl -n nfs-demo exec nfs-pod -- touch /data/nfs/test-nfs.sh
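Before checking on the server side, we can also confirm from inside the pod that the path is really an NFS mount (a quick sanity check, assuming the pod name and mount path above):

```shell
# List the container's mounts and keep only the NFS entries;
# the server and export path defined in the PV should appear here.
kubectl -n nfs-demo exec nfs-pod -- mount | grep nfs

# Alternatively, check the filesystem backing the mount path:
# kubectl -n nfs-demo exec nfs-pod -- df -h /data/nfs
```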
We can now see on the NFS server that the file has been properly stored.
mehdi@MacBook-Pro: ssh -i "dbi.pem" [email protected]
Last login: Tue Nov 19 13:35:18 2019 from 62.91.42.92
[ec2-user@ip-10-3-0-184 ~]$ ls -lrt data/
total 0
-rw-r--r-- 1 root root 0 Nov 19 13:42 test-nfs.sh
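To finish, a word on cleanup: because the PV was created with persistentVolumeReclaimPolicy: Retain, deleting the claim does not delete the data. A possible cleanup sequence looks like this (a sketch, assuming the object names used above):

```shell
# Delete the pod and the claim:
kubectl -n nfs-demo delete pod nfs-pod
kubectl -n nfs-demo delete pvc nfs-demo

# With the Retain policy, the PV now moves to the "Released" status and
# the data is still present on the NFS server. The PV must be deleted
# manually (and the data cleaned up on the server if no longer needed)
# before a volume with the same export can be made available again:
kubectl delete pv nfs-demo
```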