Dynamic NFS Provisioning in Red Hat OpenShift

Afzal Muhammad
6 min read · Dec 13, 2020

Persistent storage for Kubernetes

When deploying Kubernetes, one of the most common requirements is persistent storage. For stateful applications such as databases, persistent storage is a must-have, and the usual solution is to mount external volumes inside the containers. In public cloud deployments, Kubernetes integrates with the cloud providers' block-storage backends: developers create claims for volumes to use with their deployments, and Kubernetes works with the cloud provider to create a volume and mount it inside the developers' pods. There are several options for replicating the same behavior on-premises, but one of the simplest is to set up an NFS server on a Linux machine and use it as the backing storage for an NFS client provisioner running inside the Kubernetes cluster.

Note: This setup does not address a fully secure configuration and does not provide high availability for the persistent volumes. It must therefore not be used in a production environment.

In this post, I'll explain how to set up the NFS client provisioner in Red Hat OpenShift Container Platform, backed by an NFS server running on Red Hat Enterprise Linux.

First, let's install the NFS server on the host machine and create a directory from which it will serve files:

# yum install -y nfs-utils
# systemctl enable rpcbind
# systemctl enable nfs-server
# systemctl start rpcbind
# systemctl start nfs-server
[root@bastion ~]# mkdir -p /nfs-share
[root@bastion ~]# /bin/mount -t xfs -o inode64,noatime /dev/sdb /nfs-share
[root@bastion ~]# df -h /nfs-share
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 1.7T 104M 1.7T 1% /nfs-share
[root@bastion ~]# chmod -R 777 /nfs-share # keeps troubleshooting simple, but not recommended for a production setup
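The mount above will not survive a reboot. As an optional hardening step (not part of the original walkthrough), an /etc/fstab entry along these lines makes it persistent, assuming /dev/sdb keeps its device name; using the filesystem UUID is more robust:

# echo '/dev/sdb  /nfs-share  xfs  inode64,noatime  0 0' >> /etc/fstab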

Export the directory created earlier.

[root@bastion ~]# cat /etc/exports
/nfs-share *(rw,sync,no_subtree_check,no_root_squash,insecure)
[root@bastion ~]# sudo exportfs -rv
exporting *:/nfs-share
[root@bastion ~]# showmount -e
Export list for bastion.ocp4.sjc02.lab.cisco.com:
/nfs-share *
[root@bastion ~]#
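Optionally, before wiring this into OpenShift, you can sanity-check the export from any other machine that has nfs-utils installed and can reach the bastion. A quick manual test, assuming the NFS server address 10.16.1.150 that is used later in this post:

# showmount -e 10.16.1.150
# mount -t nfs 10.16.1.150:/nfs-share /mnt
# touch /mnt/testfile && ls /mnt
# umount /mnt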

Next, a service account must be created in the OpenShift environment using a YAML file; the same file also defines the cluster role, role, and the corresponding bindings within the Kubernetes cluster, as shown below.

[root@bastion ~]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-pod-provisioner-sa
---
kind: ClusterRole # cluster-wide role
apiVersion: rbac.authorization.k8s.io/v1 # auth API
metadata:
  name: nfs-provisioner-clusterRole
rules:
  - apiGroups: [""] # rules on persistentvolumes
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-rolebinding
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa # defined at the top of the file
    namespace: default
roleRef: # binds the cluster role to the service account
  kind: ClusterRole
  name: nfs-provisioner-clusterRole # name defined in the ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa # same as top of the file
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: nfs-pod-provisioner-otherRoles
  apiGroup: rbac.authorization.k8s.io
[root@bastion ~]#

Deploy the service account and RBAC resources by running the following command:

[root@bastion ~]# oc apply -f rbac.yaml
serviceaccount/nfs-pod-provisioner-sa created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-clusterRole created
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-rolebinding created
role.rbac.authorization.k8s.io/nfs-pod-provisioner-otherRoles created
rolebinding.rbac.authorization.k8s.io/nfs-pod-provisioner-otherRoles created
[root@bastion ~]# oc get clusterrole,role

Next, create a storage class named nfs using the nfs.yaml file below:

[root@bastion ~]# cat nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs # when creating the PVC, refer to this name
provisioner: nfs-test # give any name of your choice
parameters:
  archiveOnDelete: "false"
[root@bastion ~]#

Now create the storage class from the nfs.yaml file

[root@bastion ~]# oc create -f nfs.yaml

You can verify it by running the following command or in the OpenShift console:

[root@bastion ~]# oc get storageclass | grep nfs
nfs nfs-test 5h30m
[root@bastion ~]#
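Optionally, the class can also be marked as the cluster's default storage class, so that PVCs created without an explicit storageClassName land on NFS as well. A minimal sketch (adjust the class name if yours differs):

[root@bastion ~]# oc patch storageclass nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'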

Now deploy the POD for the NFS client provisioner using the YAML file below:

[root@bastion ~]# cat nfs_pod_provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-pod-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-pod-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-pod-provisioner
    spec:
      serviceAccountName: nfs-pod-provisioner-sa # name of service account created in rbac.yaml
      containers:
        - name: nfs-pod-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-provisioner-v
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # do not change
              value: nfs-test # same as the provisioner name in the StorageClass
            - name: NFS_SERVER # do not change
              value: 10.16.1.150 # IP of the NFS server
            - name: NFS_PATH # do not change
              value: /nfs-share # path to the exported NFS directory
      volumes:
        - name: nfs-provisioner-v # same as the volumeMounts name above
          nfs:
            server: 10.16.1.150
            path: /nfs-share

Deploy the NFS client POD

[root@bastion ~]# oc create -f nfs_pod_provisioner.yaml

You can verify that the POD is in the Running state either via the CLI, as shown below, or in the OpenShift console.

[root@bastion ~]# oc get pods
NAME READY STATUS RESTARTS AGE
nfs-pod-provisioner-8458c4b4f6-r4cf4 1/1 Running 0 5h27m
[root@bastion ~]#
[root@bastion ~]#

Run the following command to verify that the POD was created with the proper configuration:

[root@bastion ~]# oc describe pod nfs-pod-provisioner-8458c4b4f6-r4cf4
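If the POD instead stays in ContainerCreating or its events show NFS mount permission errors, OpenShift's security context constraints may be blocking the mount. One common remedy, depending on your cluster's policy (this is an assumption, not a step the original setup required), is to let the provisioner's service account use the hostmount-anyuid SCC:

[root@bastion ~]# oc adm policy add-scc-to-user hostmount-anyuid -z nfs-pod-provisioner-sa -n default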

Now test the setup by provisioning an nginx container that requests a persistent volume claim and mounts it inside the container.

Create a persistent volume claim using the following yaml file.

[root@bastion ~]# cat nfs_pvc_dynamic.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: nfs # same name as the StorageClass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
[root@bastion ~]#

Apply the YAML file:

[root@bastion ~]# oc apply -f nfs_pvc_dynamic.yaml

Verify it in the OpenShift console or by running:

[root@bastion ~]# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc-test Bound pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a 50Mi RWX nfs 5h32m
[root@bastion ~]#
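Behind the scenes, the provisioner also created a matching PersistentVolume for this claim. It can be inspected using the volume name shown in the output above:

[root@bastion ~]# oc get pv
[root@bastion ~]# oc describe pv pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a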

We can also verify this persistent volume on the machine where the NFS server is configured:

[root@bastion ~]# ls /nfs-share
default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a

Now create an nginx POD, specifying the claim name (“nfs-pvc-test” in this case) in the YAML file as shown below:

[root@bastion ~]# cat nginx_nfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            claimName: nfs-pvc-test # same name as the PVC created above
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nfs-test # must match the volume name defined above
              mountPath: /mydata # mount point inside the container
[root@bastion ~]#

Create the POD:

# oc apply -f nginx_nfs.yaml
[root@bastion ~]# oc get pods
NAME READY STATUS RESTARTS AGE
nfs-nginx-6f8d4f7786-9gwks 1/1 Running 0 5h32m
nfs-pod-provisioner-8458c4b4f6-r4cf4 1/1 Running 0 5h43m
[root@bastion ~]#

Now create a text file inside the nginx pod and verify that it exists in the /nfs-share directory on the NFS server:

[root@bastion ~]# oc exec -it nfs-nginx-6f8d4f7786-9gwks -- bash
root@nfs-nginx-6f8d4f7786-9gwks:/# cd /mydata
root@nfs-nginx-6f8d4f7786-9gwks:/mydata# date >> demofile.txt

Verify that on the NFS server:

[root@bastion ~]# ls /nfs-share/default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a/
demofile.txt

[root@bastion ~]# cat /nfs-share/default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a/demofile.txt
Sun Dec 13 04:25:45 UTC 2020
[root@bastion ~]#

As you can see, the file written inside the pod shows up on the NFS server.
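Because the claim was created with ReadWriteMany, the same NFS-backed volume can be mounted by several pods at once. As an optional final check (not part of the original walkthrough), scale the deployment and confirm that every replica sees the same demofile.txt:

[root@bastion ~]# oc scale deployment/nfs-nginx --replicas=3
[root@bastion ~]# oc get pods -l app=nginx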
