In the previous article we walked through the preparation of an upstream Kubernetes cluster to take advantage of the converged storage infrastructure that vSphere provides. We used a CSI driver that allows pods to consume vSAN storage in the form of Persistent Volumes, created automatically through a special-purpose StorageClass.

With the CSI driver most of the persistent storage needs of our pods are covered. However, in some cases it is necessary for multiple pods to mount and read/write the same volume simultaneously. This behaviour is defined by the Access Mode specification that is part of the PV/PVC definition. The typical Access Modes available in Kubernetes are:

  • ReadWriteOnce – Mount a volume as read-write by a single node
  • ReadOnlyMany – Mount the volume as read-only by many nodes
  • ReadWriteMany – Mount the volume as read-write by many nodes

In this article we will focus on the ReadWriteMany (RWX) access mode, which allows a volume to be mounted simultaneously in read-write mode by multiple pods running on different Kubernetes nodes. This access mode is typically backed by a network file sharing technology such as NFS. The good news is that this is not a big deal if we have vSAN because, again, we can take advantage of this wonderful technology to enable the built-in file services and create network file shares in a very easy and integrated way.
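
If you want to double-check how these access modes are expressed in the API, kubectl explain prints the field documentation straight from your cluster:

kubectl explain persistentvolume.spec.accessModes
kubectl explain persistentvolumeclaim.spec.accessModes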

Enabling vSAN File Services

The procedure for doing this is described below. Let's move into the vSphere GUI for a while. Access your cluster and go to vSAN > Services, then click on the blue ENABLE button at the bottom of the File Service tile.

Enabling vSAN file Services

The first step is to select the network on which the service will be deployed. In my case I will select a specific PortGroup in the subnet 10.113.4.0/24 on VLAN 1304, named VLAN-1304-NFS.

Enable File Services. Select Network

This action will trigger the creation of the necessary agents on each one of the hosts, which will be in charge of serving the shared network resources via NFS. After a while we should be able to see a new resource pool named ESX Agents with four new vSAN File Service Node VMs.

vSAN File Service Agents

Once the agents have been deployed we can go back to the service configuration, where we will see that it is still incomplete because we haven't defined some important service parameters yet. Click on the CONFIGURE DOMAIN button at the bottom of the File Service tile.

vSAN File Service Configuration Tile

The first step is to define the domain that will host the names of the shared services. In my particular case I will use vsanfs.sdefinitive.net as the domain name. Any shared resource will be reached using this domain.

vSAN File Service Domain Configuration

Before continuing, it is important that our DNS is configured with the names we will give to the four file service agents. In my case I am using fs01, fs02, fs03 and fs04 as names in the domain vsanfs.sdefinitive.net, with the IP addresses 10.113.4.100-103. Additionally, we will have to indicate the IP of the DNS server used for resolution, the netmask and the default gateway, as shown below.

vSAN File Services Network Configuration
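
A quick way to confirm the four records resolve from any Linux box on that network is getent, which queries the system resolver (the names and domain are the ones defined above):

# resolve every file service agent name through the system resolver
for h in fs01 fs02 fs03 fs04; do
  getent hosts ${h}.vsanfs.sdefinitive.net
done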

On the next screen we will see the option to integrate with Active Directory. For the moment we can skip it because we will consume the network shares from pods.

vSAN File Service integration with Active Directory

Next you will see a summary of all the settings made, for a final review before proceeding.

vSAN File Service Domain configuration summary

Once we press the green FINISH button the network file services configuration will be completed and ready to use.

Creating File Shares from vSphere

Once the vSAN File Services have been configured we should be able to create network shares that will eventually be consumed, in the form of NFS volumes, by our applications. To do this we must first provision the file shares according to our preferences. Go to File Services and click on ADD to create a new file share.

Adding a new NFS File Share

The file share creation wizard allows us to specify some important parameters such as the name of our shared service, the protocol used to export the file share (NFS), the NFS version (4.1 or 3), the Storage Policy that the volume will use and, finally, other quota-related settings such as the size and the warning threshold for our file share.

Settings for vSAN file share

Additionally, we can add security by means of a network access control policy. In our case we will allow any IP to access the shared service, so we select the option "Allow access from any IP", but feel free to restrict access to certain IP ranges if you need it.

Network Access Control for vSAN file share

Once all the parameters have been set we can complete the task by pressing the green FINISH button at the bottom right side of the window.

vSAN File Share Configuration Summary

Let's inspect the created file share, which is seen as just another vSAN virtual object from the vSphere administrator's perspective.

Inspect vSAN File Share

If we click on VIEW FILE SHARE we can see the specific configuration of our file share. Write down the export path (fs01.vsanfs.sdefinitive.net:/vsanfs/my-nfs-share) since it will be used later in the YAML manifest that declares the corresponding PersistentVolume Kubernetes object.

Displaying vSAN File Share parameters

From a storage administrator's perspective we are done. Now we will see how to consume the file share from the developer's perspective through native Kubernetes resources using YAML manifests.

Consuming vSAN Network File Shares from Pods

An important requirement to be able to mount NFS shares is to have the necessary software installed in the worker OS; otherwise the mounting process will fail. If you are using a Debian-family Linux distro such as Ubuntu, the package that contains the necessary binaries to allow NFS mounts is nfs-common. Ensure this package is installed before proceeding by issuing the command below.

sudo apt-get install nfs-common
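
To confirm the package is actually in place on each worker, a couple of standard Debian/Ubuntu checks (the paths are the usual ones for this package):

# confirm the package is installed and the NFSv4 mount helper exists
dpkg -s nfs-common | grep -i "^Status"
ls -l /sbin/mount.nfs4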

Before proceeding with the creation of PV/PVCs, it is recommended to test connectivity from the workers as a sanity check. The first basic test is pinging the FQDN of the host in charge of the file share, as defined in the export path captured earlier. Ensure you can also ping the rest of the NFS agents defined (fs01-fs04).

ping fs01.vsanfs.sdefinitive.net
PING fs01.vsanfs.sdefinitive.net (10.113.4.100) 56(84) bytes of data.
64 bytes from 10.113.4.100 (10.113.4.100): icmp_seq=1 ttl=63 time=0.688 ms
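
Optionally, since the wizard also offers NFSv3, the showmount tool included in nfs-common may be able to list the exports advertised by the agent. Treat it as a best-effort check; it can return nothing when only NFSv4.1 is served:

# list the NFS exports advertised by the file service agent
showmount -e fs01.vsanfs.sdefinitive.net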

If DNS resolution and connectivity are working as expected, we can safely mount the file share on any folder of the filesystem. The following commands show how to mount the file share using NFS 4.1 and the export path associated with our file share. Ensure the mount point (/mnt/my-nfs-share in this example) exists before proceeding; if not, create it in advance using mkdir as usual.

mount -t nfs4 -o minorversion=1,sec=sys fs01.vsanfs.sdefinitive.net:/vsanfs/my-nfs-share /mnt/my-nfs-share -v
mount.nfs4: timeout set for Fri Dec 23 21:30:09 2022
mount.nfs4: trying text-based options 'minorversion=1,sec=sys,vers=4,addr=10.113.4.100,clientaddr=10.113.2.15'

If the mount is successful you should be able to access the share at the mount point folder and even create a file, as shown below.

/ # cd /mnt/my-nfs-share
/ # touch hola.txt
/ # ls
hola.txt
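
Once the manual check is complete, the test mount can be removed; from here on the kubelet will mount the share inside the pods on our behalf:

# detach the test mount from the worker
umount /mnt/my-nfs-share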

Now we can jump into the manifest world to define our persistent volumes and attach them to the desired pods. First, declare the PV object using ReadWriteMany as the accessMode and specify the server and export path of our network file share.

Note that we will use a storageClassName specification with an arbitrary name, vsan-nfs. Using a "fake" or undefined storageClass is supported by Kubernetes and is typically used for binding purposes between the PV and the PVC, which is exactly our case. This is required to avoid our PV/PVC ending up using the default storage class, which in this particular scenario would not be compatible with the ReadWriteMany access mode required for NFS volumes.

vi nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: vsan-nfs
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs01.vsanfs.sdefinitive.net
    path: "/vsanfs/my-nfs-share"
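
Apply the manifest to create the PV:

kubectl apply -f nfs-pv.yaml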

Once applied, verify the creation of the PV. Note that we are using RWX mode, which allows access to the same volume from different pods running on different nodes simultaneously.

kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   500Mi      RWX            Retain           Bound    default/nfs-pvc   vsan-nfs                 60s

Now do the same for the PVC pointing to the PV just created. Note that we are specifying the same storageClassName to bind the PVC with the PV. The accessMode must also be consistent with the PV definition and, finally, for this example we are claiming 500 MiB of storage.

vi nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc
spec:
  storageClassName: vsan-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
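
Apply the manifest to create the PVC:

kubectl apply -f nfs-pvc.yaml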

As usual, verify the status of the PVC resource. As you can see, the PVC is in Bound state as expected.

kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   500Mi      RWX            vsan-nfs       119s

Then attach the volume to a regular pod using the following YAML manifest. This will create a basic pod running an alpine image that mounts the NFS PVC at the container's local path /my-nfs-share. Ensure the claimName specification of the volume matches the PVC name defined earlier.

vi nfs-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod1
spec:
  containers:
  - name: alpine
    image: "alpine"
    volumeMounts:
    - name: nfs-vol
      mountPath: "/my-nfs-share"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc

Apply the manifest with kubectl apply and, once the pod is running, open a shell session to the container using kubectl exec as shown below.
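
A minimal sequence to create the pod and block until it reports Ready before opening the shell (the wait step is optional):

# create the pod and wait up to 2 minutes for it to become Ready
kubectl apply -f nfs-pod1.yaml
kubectl wait --for=condition=Ready pod/nfs-pod1 --timeout=120s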

kubectl exec nfs-pod1 -it -- sh

We should be able to access the network share, list any existing files and check that we are able to write new files, as shown below.

/ # touch /my-nfs-share/hola.pod1
/ # ls /my-nfs-share
hola.pod1  hola.txt

The last test to verify that multiple pods running on different nodes can actually read and write the same volume simultaneously is creating a new pod2 that mounts the same volume. Ensure that both pods are scheduled on different nodes for a full verification of the RWX access mode; a quick way to check this is shown at the end of the test.

vi nfs-pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod2
spec:
  containers:
  - name: alpine
    image: "alpine"
    volumeMounts:
    - name: nfs-vol
      mountPath: "/my-nfs-share"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc

In the same manner, apply the manifest file above to spin up the new pod2 and try to open a shell.
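
Create the second pod from its manifest in the same way:

# spin up pod2 from the manifest defined above
kubectl apply -f nfs-pod2.yaml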

kubectl exec nfs-pod2 -it -- sh

Again, we should be able to access the network share, list the existing files and also create new files.

/ # touch /my-nfs-share/hola.pod2
/ # ls /my-nfs-share
hola.pod1  hola.pod2  hola.txt
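
To confirm the test really exercises RWX across nodes, check the NODE column of the wide output and verify that the two pods landed on different workers:

# show the node each pod was scheduled on
kubectl get pods nfs-pod1 nfs-pod2 -o wide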

In this article we have learnt how to enable vSAN File Services and how to consume persistent volumes in RWX mode. In the next post I will explain how to leverage MinIO technology to provide S3-like object storage on top of vSphere for our workloads. Stay tuned!