Deploy NFS Dynamic Provisioning and Image Registry for Red Hat OpenShift Container Platform

Private Cloud and Hybrid Cloud — featuring IBM Power Systems and Red Hat OpenShift Container Platform Offerings

Vergie Hadiana, Solution Specialist Hybrid Cloud — Sinergi Wahana Gemilang

Illustration-1: List of infrastructure providers on which OpenShift can be installed.

When creating a demo environment for a Cloud Pak, you need a storage class that can dynamically provision persistent volumes, backed by storage with enough space for the environment.

If you don’t already have one available, these steps can be used to set up an open source NFS dynamic storage provisioner on OpenShift.

These steps can be executed from any client machine with the oc Command Line Interface (CLI) installed.

It requires that you have an NFS share already set up with the needed space. Your OpenShift cluster should be able to access the NFS share.

Your OpenShift user needs access to the project where you will deploy the provisioner. When using the “default” namespace, you need to be a cluster administrator. The oc adm command also requires administrator rights.
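
As a quick sanity check before you start (a minimal sketch; replace the API URL and credentials with your own), verify that you are logged in with sufficient rights:

# Log in to the cluster (replace the API URL and password with your own)
oc login https://api.<cluster-domain>:6443 -u kubeadmin -p <password>
# Confirm the current user and check for cluster-admin-level rights
oc whoami
oc auth can-i '*' '*' --all-namespaces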

Before this, we installed a six-node OCP 4.7 cluster using the user-provisioned infrastructure (UPI) method. In this article we will:

  • Download and configure the oc client on the Bastion Node (see the sketch after this list)
  • Configure the NFS Server on the Bastion Node
  • Configure Dynamic Provisioning using NFS Storage on the OpenShift Cluster
  • Configure the image registry (non-production) on the OpenShift Cluster
  • Create an NGINX project on the OpenShift Cluster
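
For the first item, here is a minimal sketch of downloading the oc client onto the Bastion Node (assuming the stable-4.7 Linux client from the official mirror; swap x86_64 for ppc64le to match your bastion's architecture):

# Download and unpack the OpenShift client, then install it on the PATH
curl -LO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable-4.7/openshift-client-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz
sudo mv oc kubectl /usr/local/bin/
oc version --client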

Set Up the NFS Server on CentOS / RHEL

1. Install nfs-utils

sudo yum install nfs-utils -y

2. Enable the rpcbind and NFS server services

systemctl enable --now rpcbind
# CentOS/RHEL 8: systemctl enable --now nfs-server
# CentOS/RHEL 7: systemctl enable --now nfs

3. Create a directory for the NFS path; in my case, “/export”

mkdir -p /export
chmod -R 777 /export

4. Add the export entry to /etc/exports and save the file, then run exportfs -ra

tee -a /etc/exports << EOF
/export *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
EOF
# Apply Changes
exportfs -ra
# CentOS/RHEL 8: systemctl restart nfs-server
# CentOS/RHEL 7: systemctl restart nfs
Illustration-2: Contents of /etc/exports. Captured as of August 13, 2021.
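
To confirm the share is actually exported, you can list the active exports on the server:

# Show currently exported directories and their options
exportfs -v
# Query the export list as an NFS client would see it
showmount -e localhost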

5. Allow the NFS services (rpc-bind, mountd, nfs) through the firewall

sudo firewall-cmd --zone=public --permanent --add-service=mountd
sudo firewall-cmd --zone=public --permanent --add-service=rpc-bind
sudo firewall-cmd --zone=public --permanent --add-service=nfs
sudo firewall-cmd --reload
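
You can verify that the firewall rules took effect with:

# List the services now allowed in the public zone
sudo firewall-cmd --zone=public --list-services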

6. Adjust the SELinux policy for the NFS services

setsebool -P nfs_export_all_rw 1
setsebool -P nfs_export_all_ro 1
setsebool -P virt_use_nfs 1
semanage fcontext -a -t public_content_rw_t "/export(/.*)?"
restorecon -R /export
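
Before moving on to OpenShift, it is worth testing the export from any client machine with nfs-utils installed (a quick sketch, assuming the NFS server IP 129.40.58.209 used later in this article):

# Mount the export, write a test file, then clean up
sudo mount -t nfs 129.40.58.209:/export /mnt
sudo touch /mnt/test-file && ls -l /mnt
sudo umount /mnt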

Configure Dynamic Provisioning using NFS Storage on the OpenShift Cluster

1. Log in to your cluster as kubeadmin or a user with cluster-admin privileges.
2. Create the ServiceAccount (SA) and RBAC (Role-Based Access Control) objects for the NFS provisioner.

Replace the namespace with the one where the provisioner will be deployed; in my case, it is the “default” namespace.

oc apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-locking
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-locking
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: nfs-client-provisioner-locking
  apiGroup: rbac.authorization.k8s.io
EOF
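
A quick way to confirm the objects were created (adjust -n if you used a different namespace):

# Namespaced objects: service account, role, and role binding
oc get sa,role,rolebinding -n default | grep nfs-client
# Cluster-scoped objects
oc get clusterrole,clusterrolebinding | grep nfs-client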

3. Grant the “hostmount-anyuid” SCC (Security Context Constraint) to the NFS provisioner's service account

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner

4. Create Deployment for NFS Provisioner

Change the values below to match your NFS server's IP address and export path:

oc apply -f - << EOF
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: nfs-client-provisioner
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          # alt for ppc64le: ibmcom/nfs-client-provisioner-ppc64le:latest
          # alt for x86: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # same as the provisioner name value in the StorageClass
              value: nfs-provisioner
            - name: NFS_SERVER
              # IP of the NFS server
              value: 129.40.58.209
            - name: NFS_PATH
              # path to the NFS directory set up earlier
              value: /export
      volumes:
        - name: nfs-client-root
          nfs:
            # IP of the NFS server
            server: 129.40.58.209
            # path to the NFS directory set up earlier
            path: /export
EOF
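
Check that the provisioner pod starts and watch its logs for NFS mount or permission errors (again, adjust -n if you deployed to a different namespace):

# The provisioner pod should reach the Running state
oc get pods -n default -l app=nfs-client-provisioner
# Follow the provisioner logs to catch errors early
oc logs -f deployment/nfs-client-provisioner -n default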

5. Create SC (StorageClass) for NFS Provisioner

oc apply -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # this is the StorageClass name
  name: managed-nfs-storage
# must match the Deployment's PROVISIONER_NAME env value
provisioner: nfs-provisioner
parameters:
  # when set to "false", your PVs will not be archived on delete
  archiveOnDelete: "false"
EOF
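
Optionally, you can verify the StorageClass and mark it as the cluster default so that PVCs without an explicit storageClassName use it (the annotation below is the standard Kubernetes default-class annotation):

# Verify the StorageClass exists
oc get storageclass
# Optional: make managed-nfs-storage the default StorageClass
oc patch storageclass managed-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'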

Configure the image registry (non-production) on the OpenShift Cluster

1. Log in to your cluster as kubeadmin or a user with cluster-admin privileges.
2. Create a PVC (PersistentVolumeClaim) for the image registry.

For a demo / PoC, I recommend using the PVC name “registry-pvc” with an NFS-backed capacity of 20Gi to 100Gi, using the “managed-nfs-storage” StorageClass.

oc apply -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  # the image registry operator looks for the claim in this namespace
  namespace: openshift-image-registry
spec:
  # same name as the StorageClass
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
EOF

3. Configure the image registry by setting managementState to Managed

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

4. Configure the image registry to use the PVC created earlier (registry-pvc)

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{ "claim": "registry-pvc"}}}}'

5. Optional: if you need to expose the image registry externally, set defaultRoute to true

oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
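
Once the route is created, you can retrieve its hostname from the CLI; by default the route is named default-route in the openshift-image-registry namespace:

# Print the external hostname of the image registry
oc get route default-route -n openshift-image-registry -o jsonpath='{.spec.host}'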

6. Check the status of the image registry pods, making sure all pods are “Running”

oc get pods -n openshift-image-registry
Illustration-3: Output of the pods in the openshift-image-registry namespace. Captured as of August 13, 2021.

7. Once the pods are running, you can check in the web console that the PVC is in Bound status and find the exposed image registry URL.

Under Storage > PersistentVolumeClaims:

Illustration-4: Web console PersistentVolumeClaim created before. Captured as of August 13, 2021.

Under Networking > Routes:

Illustration-5: Web console Routes for access registry image on public. Captured as of August 13, 2021.

Create an NGINX project on the OpenShift Cluster

1. Log in to your cluster as kubeadmin or a user with cluster-admin privileges.
2. Create the first PVC, named “nfs-pvc-test”.

⚠️ Make sure to update the commands if your namespace/project names or PVC name are different.

oc apply -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  # same name as the StorageClass
  storageClassName: managed-nfs-storage
  accessModes:
    # must match an access mode supported by the PersistentVolume
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
Illustration-6: Web console view of the PersistentVolumeClaim that NGINX will use later. Captured as of August 15, 2021.

3. Deploy the NGINX application using the NFS storage from “nfs-pvc-test”

oc apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            # same name as the PVC created above
            claimName: nfs-pvc-test
      containers:
        - image: docker.io/nginx
          name: nginx
          volumeMounts:
            # volume name must match the volume defined above
            - name: nfs-test
              # mount path inside the container (must be absolute)
              mountPath: /mydata
EOF
Illustration-7: Web console view of the NGINX Deployment. Captured as of August 15, 2021.
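
You can confirm the same from the CLI, and optionally expose NGINX with a service and route (a sketch, assuming the deployment runs in your current project):

# The PVC should be Bound and the pod Running
oc get pvc nfs-pvc-test
oc get pods -l app=nginx
# Optional: expose NGINX via a service and a route
oc expose deployment/nfs-nginx --port=80
oc expose service/nfs-nginx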

4. You can check that the NFS folder has been mounted (/mydata) from the pod's terminal

Illustration-8: Web console terminal on the NGINX pod, checking the NFS folder. Captured as of August 15, 2021.

Create a Test Application (BusyBox) with a Persistent Volume

Create a PV Claim named “test-claim”:

oc apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

Create a Pod named “test-pod”:

oc apply -f - << EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

Now check your NFS server for the SUCCESS file.
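
The provisioner creates one subdirectory per PV on the export (named after the namespace, PVC name, and PV name), and the SUCCESS file should appear inside it:

# The test pod should show the Completed status
oc get pod test-pod
# On the NFS server, look for the SUCCESS file under the PV's subdirectory
ls -lR /export | grep SUCCESS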

You can reach me on LinkedIn.