The OpenShift platform can run both cloud-native and stateful applications in containers. For stateful applications, many users still rely on NFS as the storage solution, and while this is changing in favor of more modern software-defined storage such as GlusterFS, NFS remains widely used. Because of this, administrators of an OpenShift platform need a way to back up and restore the data held in these persistent systems. One important use case is moving a stateful application from one cluster to another when the two clusters don't have access to the same storage. In that case, operators need to back up the data in the original cluster and restore it into the new one.

In this post, I'll show how this task can be done using tools readily available in your RHEL base operating system. This is obviously not the only possible option, but it's one that I've proven works in production for customers.

The Red Hat tools covered in this article are:

  • Autofs - The automount utility
  • LVM snapshots - The LVM feature for making a consistent copy of a logical volume
  • rsync - A tool to propagate changes from one filesystem to another

Autofs

The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems.

Autofs config files

There are three files required to set up client-side failover using autofs:

  • auto.master to /etc/
    This is the main configuration file for autofs. Its main role here is to include the /etc/auto.master.d directory:
    ...
    # Include /etc/auto.master.d/*.autofs
    # The included files must conform to the format of this file.
    #
    +dir:/etc/auto.master.d
    ...
  • 01direct.autofs to /etc/auto.master.d/
    This file configures /etc/auto.direct as the direct map (the /- mount point):
    # Direct Map
    /- /etc/auto.direct
  • auto.direct to /etc/
    This file defines the mount options for the direct mounts. In our case we are just interested in /var/backup:
    # direct map
    /var/backup -rw,soft,intr,retrans=1,timeo=5,tcp <IP>:/backup/nfs

NOTE: <IP> can be a single IP or a comma-separated list of IPs (for NFS servers configured in HA, with virtual IPs, etc.). This entry mounts /backup/nfs from one of the configured servers (when multiple IPs are used) on /var/backup.
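Once the three files are in place, you can verify the on-demand mount. A minimal check, assuming the autofs package is installed and the paths match the example configuration above:

#Reload autofs so it picks up the new maps
/usr/bin/systemctl restart autofs
#Accessing the path triggers the automount
/usr/bin/ls /var/backup
#Confirm the NFS share is now mounted
/usr/bin/mount | grep /var/backup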

LVM

Volume management creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. This provides much greater flexibility in a number of ways not possible when using physical storage directly. With a logical volume, you are not restricted to physical disk sizes. In addition, the hardware storage configuration is hidden from the software, so storage can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs.

Logical volumes provide the following advantages over using physical storage directly:

  • Flexible capacity
  • Resizeable storage pools
  • Online data relocation
  • Convenient device naming
  • Disk striping
  • Mirroring volumes
  • Volume Snapshots

LVM snapshots

The LVM snapshot feature provides the ability to create virtual images of a device at a particular instant without causing a service interruption. When a change is made to the original device (the origin) after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was prior to the change so that it can reconstruct the state of the device.

Prerequisites

  • A filesystem created on an LVM logical volume
  • A destination directory that will hold a copy of the contents
  • A volume group whose logical volumes contain the filesystems to back up

A quick view of the NFS server deployment:

#Volume group for the NFS PVs (vgs output)
vg_nfs 1 73 0 wz--n- 500,00g 500,00g
#Logical volume that belongs to the previous VG (lvs output)
nfs-1g-01 vg_nfs -wi-ao---- 1,00g
#NFS export that would be mapped against an OpenShift PV (mount output)
/dev/mapper/vg_nfs-nfs--1g--01 on /exports/nfs-1g-01 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
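For reference, here is a sketch of how such a layout could be created from scratch. The device name /dev/sdb, the sizes, and the export options are hypothetical and only meant to match the output above:

#Create the volume group on a spare disk (hypothetical device)
/usr/sbin/pvcreate /dev/sdb
/usr/sbin/vgcreate vg_nfs /dev/sdb
#Create a 1G logical volume and format it with XFS
/usr/sbin/lvcreate --size 1G --name nfs-1g-01 vg_nfs
/usr/sbin/mkfs.xfs /dev/vg_nfs/nfs-1g-01
#Mount it under /exports and export it over NFS
/usr/bin/mkdir -p /exports/nfs-1g-01
/usr/bin/mount /dev/vg_nfs/nfs-1g-01 /exports/nfs-1g-01
echo "/exports/nfs-1g-01 *(rw,no_root_squash)" >> /etc/exports
/usr/sbin/exportfs -ra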

rsync

Rsync is a fast, full-featured file transfer tool. It can perform differential uploads and downloads (synchronization) of files across the network, transferring only the data that has changed.

Backup over NFS

The backup script does an rsync to a local directory. That destination directory can be an NFS share on a remote server. To ease the configuration of the NFS connection with client-side failover, we use the autofs automount setup described earlier.

An example of the steps you should take to back up a single LV:

#1.- Create snapshot
/usr/sbin/lvcreate --extents 50%FREE --snapshot "/dev/vg_nfs/nfs-1g-01" --name "nfs-1g-01-bak"
#2.- Create the directory that will hold the snapshot
/usr/bin/mkdir -p -m 600 "/var/tmp/backup/nfs-1g-01"
#3.- Mount the snapshot
/usr/bin/mount "/dev/vg_nfs/nfs-1g-01-bak" "/var/tmp/backup/nfs-1g-01" -onouuid,ro
#4.- Start the synchronization
/usr/bin/rsync -am --stats --delete --force "/var/tmp/backup/nfs-1g-01/" "/var/backup/nfs-1g-01"
#5.- After the synchronization is finished, clean the previous steps
/usr/bin/umount "/var/tmp/backup/nfs-1g-01"
/usr/sbin/lvremove --force "/dev/vg_nfs/nfs-1g-01-bak"
/usr/bin/rmdir "/var/tmp/backup/nfs-1g-01"

If you refactor the code above to use variables and a simple for loop (iterating over the vg_nfs LVs), you can easily back up all the LVs of your NFS server.
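As a sketch (the variable names are mine, and error handling is omitted for brevity), such a loop could look like this:

#!/bin/bash
#Back up every LV in the vg_nfs volume group to the automounted backup share
VG="vg_nfs"
SNAP_MOUNT_BASE="/var/tmp/backup"
BACKUP_DEST="/var/backup"

for LV in $(/usr/sbin/lvs --noheadings -o lv_name "${VG}" | tr -d ' '); do
    #1.- Create the snapshot
    /usr/sbin/lvcreate --extents 50%FREE --snapshot "/dev/${VG}/${LV}" --name "${LV}-bak"
    #2.- Create the directory that will hold the snapshot and mount it read-only
    /usr/bin/mkdir -p -m 600 "${SNAP_MOUNT_BASE}/${LV}"
    /usr/bin/mount "/dev/${VG}/${LV}-bak" "${SNAP_MOUNT_BASE}/${LV}" -o nouuid,ro
    #3.- Synchronize the snapshot contents to the backup share
    /usr/bin/rsync -am --stats --delete --force "${SNAP_MOUNT_BASE}/${LV}/" "${BACKUP_DEST}/${LV}"
    #4.- Clean up
    /usr/bin/umount "${SNAP_MOUNT_BASE}/${LV}"
    /usr/sbin/lvremove --force "/dev/${VG}/${LV}-bak"
    /usr/bin/rmdir "${SNAP_MOUNT_BASE}/${LV}"
done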

Restore

The backup strategy does an rsync of the original directory to the destination folder.
To restore the backup, you just need to run the rsync in the opposite direction.

To restore a single export:

/usr/bin/rsync -a --delete "/var/backup/nfs-1g-01/" "/exports/nfs-1g-01"

To restore all the exports:

/usr/bin/rsync -a --delete "/var/backup/" "/exports"
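After the restore, a quick sanity check of a single export can be done by comparing the restored tree with the backup copy and reviewing ownership and permissions (paths from the examples above):

#Compare the restored export with the backup copy
/usr/bin/diff -r "/var/backup/nfs-1g-01" "/exports/nfs-1g-01"
#Review ownership and permissions of the restored files
/usr/bin/ls -lR /exports/nfs-1g-01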

Caveat

It’s very important to take into account the configuration of the NFS backup mount point (the external NFS used for backups). The key option here is root_squash / no_root_squash:

  • root_squash: By default, NFS shares map the root user to the nfsnobody user, an unprivileged user account. This way, all files created by root are owned by nfsnobody.
  • no_root_squash: Remote root users are able to change any file on the shared file system.

The mount point on the external NFS server should be exported with no_root_squash so that file permissions and owners are preserved.
If the external NFS server is subject to strict security policies and you can’t get no_root_squash on it, you can use -avz --no-perms --no-owner --no-group instead of -a to avoid the warnings/errors rsync produces when it tries to chmod/chown files on that NFS mount point.
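For example, step 4 of the backup script could then look like this (the other steps stay the same); it is the same rsync invocation with the permission, owner, and group preservation switched off:

#4.- Start the synchronization without trying to preserve perms/owner/group
/usr/bin/rsync -am --stats --delete --force --no-perms --no-owner --no-group "/var/tmp/backup/nfs-1g-01/" "/var/backup/nfs-1g-01"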

Conclusion and Recommendation

In this post, I've shown how you can back up and restore information stored in NFS PVs so you can easily move stateful applications from one cluster to another. These techniques should be part of your toolbox if you use NFS storage on OpenShift.
Remember that, as previously mentioned, NFS is not a recommended storage backend for OpenShift Container Platform; refer to the official documentation for further details.

I hope you have enjoyed this post and that it's been helpful for you.

Authors

Jose Antonio Gonzalez Prada - senior middleware consultant - josgonza@redhat.com