Add NFS storage to your cluster
Prerequisites
Before you begin, check the following points:
- Access to a T0 Premium or Dedicated offer: NetApp NFS network storage is only available to customers with a T0 Premium or Dedicated offer.
- Available NFS share: an NFS share must be provisioned via https://storage1.cloudavenue.orange-business.com/login. For testing purposes, you can simulate an NFS share with a Linux VM.
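For such a lab setup, the sketch below builds the /etc/exports entry for a test VM. The export path and client subnet are hypothetical placeholders; the package-install and exportfs steps are shown as comments because they require root on the VM:

```shell
#!/bin/sh
# Sketch of a throwaway NFS server on a Debian/Ubuntu VM (hypothetical values).
EXPORT_DIR=/srv/nfs/share        # directory to export (assumption)
CLIENT_CIDR=10.0.0.0/24          # subnet of your OpenShift nodes (assumption)

# On the VM you would first run (as root):
#   apt-get install -y nfs-kernel-server
#   mkdir -p "$EXPORT_DIR" && chown nobody:nogroup "$EXPORT_DIR"

# Build the /etc/exports entry: rw+sync for durability, no_subtree_check to
# avoid stale file handles, no_root_squash so the provisioner can chown the
# subdirectories it creates.
echo "$EXPORT_DIR $CLIENT_CIDR(rw,sync,no_subtree_check,no_root_squash)"
# Append this line to /etc/exports on the VM, then run: exportfs -ra
```

This is only suitable for tests; for production, use the NFS share provisioned through the portal above.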
1. Accessing the Network Storage Management Portal
- Go to https://storage1.cloudavenue.orange-business.com/login.
- Select the organization corresponding to your VCD tenant.
- Log in with your local account (the same as your VCD account).
- Follow the Network Storage Management Portal documentation to obtain an NFS storage server accessible from the Hypershift network.
From this point, it is assumed that an NFS share is available and accessible from the OpenShift nodes.
2. Dynamic NFS Provisioning in OpenShift with nfs-subdir-external-provisioner
Introduction
Dynamic NFS provisioning allows you to automatically create persistent volumes (PVs) for your Kubernetes/OpenShift applications without manual intervention.
The nfs-subdir-external-provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner) handles this integration with your existing NFS server.
Prerequisites
- An operational OpenShift/Kubernetes cluster
- An NFS server accessible from all cluster nodes
- NFS client packages installed on each node (nfs-common on Ubuntu/Debian)
- Administrator access to the cluster
- Helm installed on your workstation
1. Verify NFS Server Accessibility
Make sure the NFS server is reachable and that the share is exported.
Example test from a worker node:
showmount -e <NFS_SERVER_IP>
On OpenShift, you can run the same check through a debug pod (assuming showmount is available in the node image):
oc debug node/<NODE_NAME> -- chroot /host showmount -e <NFS_SERVER_IP>
2. Install the NFS Subdir External Provisioner
a. Add the Helm repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm repo update
b. Install the chart
Replace <NFS_SERVER_IP> and <NFS_EXPORT_PATH> with your values.
helm install nfs-subdir-external-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=<NFS_SERVER_IP> \
--set nfs.path=<NFS_EXPORT_PATH> \
--set storageClass.onDelete=delete
By default, the created StorageClass is named nfs-client. The storageClass.onDelete option accepts delete or retain and controls what happens to the backing subdirectory on the share when a PV is deleted.
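If you prefer a values file over --set flags, the same configuration can be expressed as below (the server IP and export path are placeholders to replace with your own):

```yaml
# values.yaml (sketch) for nfs-subdir-external-provisioner
nfs:
  server: 192.0.2.10      # placeholder: your NFS server IP
  path: /srv/nfs/share    # placeholder: your exported path
storageClass:
  name: nfs-client        # chart default, shown here for clarity
  onDelete: delete        # remove the backing subdirectory when the PV is deleted
```

Then pass it to Helm with -f values.yaml instead of the --set flags.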
3. Verify the installation
oc get pods
oc get sc
You should see the provisioner pod in Running state and the nfs-client StorageClass.
4. Create a dynamic PVC
Example nfs-pvc.yaml file:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi
```
Apply it, then check that the PVC reaches the Bound state:
oc apply -f nfs-pvc.yaml
oc get pvc nfs-pvc
5. Use the PVC in a deployment
Example NGINX deployment using the PVC:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: nfs-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: nfs-pvc
```
Save the manifest as deployment.yaml, then apply it:
oc apply -f deployment.yaml
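To confirm the volume is really backed by NFS, a quick smoke test can write a file through the pod and read it back. This sketch assumes a logged-in oc session and the nfs-nginx Deployment above; the guard lets it exit cleanly on a machine without oc:

```shell
#!/bin/sh
# Smoke test (sketch): write a file through the pod, then read it back.
if ! command -v oc >/dev/null 2>&1; then
  echo "oc not found: run these commands from a machine logged in to the cluster"
  exit 0
fi
# Write a test page into the NFS-backed web root...
oc exec deploy/nfs-nginx -- sh -c 'echo hello-from-nfs > /usr/share/nginx/html/index.html'
# ...and read it back. The same file should also appear in the subdirectory
# the provisioner created for this PVC on the NFS share.
oc exec deploy/nfs-nginx -- cat /usr/share/nginx/html/index.html
```

If the second command prints hello-from-nfs, dynamic provisioning and the NFS mount are both working.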