This article describes how a home NAS can be used to implement a backup strategy that drastically reduces the risk of losing important data.

Introduction

The setup consists of three backup levels:

  1. Periodic synchronization of updated files from the clients to the NAS. This prevents data loss e.g. from client hard disk failure.

  2. Multiple hourly, daily, weekly and monthly snapshots of the data stored on the NAS. This helps in case of accidental file deletions that are noticed only some time later.

  3. Periodic offsite-backup of encrypted data from the NAS to an online storage provider. This prevents data loss from theft or destruction of the clients and the NAS (e.g. through a fire).

To make the backup runs fast and efficient, rsync will be used for incremental file transfers, and the rsync-based rsnapshot for the snapshots.

The steps below are written primarily for a QNAP home NAS, and were tested with the TS-231P, so some parts are specific to that brand and model. But the main concepts should be adaptable to any other NAS with similar features.

All commands, except those mentioned in the two client backup subchapters, need to be executed as the admin user while being connected to the NAS via SSH.

Example scenario

Throughout this document we are focusing on the backup of the files belonging to user foo. This user has both a Linux and a Windows machine. Some files that are located on these clients need to be backed up regularly. Others, like her personal documents, are located in a data folder on the NAS which is mounted on both computers. In the last chapter we will also back up a Pictures folder that is shared with all users of the NAS.

This is the directory structure on the NAS that we are starting with:

/
└── share
    ├── homes
    │   └── foo
    │       └── data
    │           └── Documents, etc.
    └── Public
        └── Pictures

Preparations

Installing Entware-ng

First some tools need to be installed that are not included with the NAS operating system, at least not for the QNAP model. These programs will be taken from the Entware-ng repository, which contains hundreds of useful packages.

Deploying the repository tools is as simple as downloading a package and installing it through the web interface of the NAS. Refer to the installation instructions for details.

A side effect of the repository installation is the creation of an /opt directory (actually just a symbolic link) on the NAS that we will use to store our own scripts.

Creating the runQnapJob script

We want to be able to define backup jobs that execute a given command at a certain time and notify us about the results.

This is done by the script listed below, which needs to be saved as /opt/bin/runQnapJob.sh:

#!/bin/bash

error() {
	echo -e "$1" >&2
	exit 1
}

CONFIG=/opt/etc/runQnapJob.conf

if [ -z "$1" ]; then 
	error "missing parameter (job name)" 
fi

if [ ! -e "$CONFIG" ]; then
	error "configuration file $CONFIG not found"
fi

jobDefinition=$(grep "^$1:" $CONFIG) || error "job $1 is not defined"

if [ $(echo "$jobDefinition" | wc -l) -gt 1 ]; then
	error "job $1 is defined multiple times"
fi

jobDescription=$(echo "$jobDefinition" | cut -d\: -f2)

if [ -z "$jobDescription" ]; then
	error "missing job description"
fi

jobCommand=$(echo "$jobDefinition" | cut -d\: -f3)

if [ -z "$jobCommand" ]; then
	error "missing job command"
fi

echo "Running command: $jobCommand"

# capture only the stderr of the command; regular output is discarded
ERROR=$(eval $jobCommand 2>&1 >/dev/null)

if [ "$?" -eq 0 ]; then
	log_tool -a "Job \"$jobDescription\" completed." -t 0
else
	log_tool -a "Job \"$jobDescription\" failed: $ERROR" -t 1
fi

The script requires one parameter, which is the name of the job to run. It reads its configuration from the file /opt/etc/runQnapJob.conf. This file consists of lines with the following syntax:

job name:job description:command

The first field is the unique short name of the job, the second is the job description, and the third is the actual command to be executed.

The result of the execution will be written to the log of the NAS operating system using QNAP’s own log_tool command, either with Information level on success, or as Warning on failure. Warnings will show up as notifications in the QNAP web interface. If configured, they can also trigger an email or SMS.
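The script has to be executable so that it can later be called from cron:

$ chmod +x /opt/bin/runQnapJob.sh

For illustration, a job definition in /opt/etc/runQnapJob.conf could look like the following line (the job name and command are made up for this example):

helloWorld:Hello world test:echo "Hello World"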

Level 1: Client backups

In this section we are setting up the periodic backup of files from the two client computers to the NAS.

It is assumed that the clients are running Ubuntu Linux and Windows 10, respectively.

The variable $NAS_IP mentioned below needs to be replaced with the actual IP address of the NAS.

The backups will be stored in the user’s home directory. The following command will create the destination directories:

$ mkdir -p ~foo/backup/{linux,windows}

The file system will now look as follows:

/
└── share
    └── homes
        └── foo
            ├── backup
            │   ├── linux
            │   └── windows
            └── data

Linux

First, both rsync and the SSH client need to be installed on the client. On Ubuntu or related distributions, the following command takes care of that:

$ sudo apt-get install openssh-client rsync

Next, we enable public key authentication for SSH, to eliminate the need for entering a password when connecting to the NAS.

$ sudo ssh-keygen
$ sudo ssh-copy-id -i /root/.ssh/id_rsa.pub admin@$NAS_IP
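Connecting to the NAS as root (which is how the backup job below will run) should now work without a password prompt, and can be quickly verified:

$ sudo ssh admin@$NAS_IP exit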

The actual backup job will be defined in the script /etc/cron.hourly/nasbackup. It will transfer all updated files in the user’s home directory - excluding the .cache subfolder - and the computer’s configuration files. Files removed on the client will also be deleted in the backup, which is fine since we will enable hourly snapshots of these backups later.

#!/bin/bash

PARAMS="-a --relative --delete"
TARGET="admin@$NAS_IP:/share/homes/foo/backup/linux/"

rsync $PARAMS --exclude '.cache' /home/foo $TARGET
rsync $PARAMS /etc $TARGET
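The script has to be executable, otherwise run-parts (which cron uses to process /etc/cron.hourly) will silently skip it:

$ sudo chmod +x /etc/cron.hourly/nasbackup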

Windows

The steps needed for setting up a scheduled rsync backup are basically the same here, but require a bit more effort. Note that we do not need to do any of these steps as the Administrator.

To use a similar environment as on Linux, we will download and install Cygwin. During the installation, the rsync and openssh packages need to be selected. Then we add the directory containing the binaries to the Windows Path variable:

  1. run sysdm.cpl
  2. open the Advanced tab and click on Environment variables
  3. edit the Path user variable and add c:\cygwin64\bin

Now we can open the Windows command line by running cmd.exe and set up the password-less SSH authentication:

> ssh-keygen
> bash ssh-copy-id -i /home/foo/.ssh/id_rsa.pub admin@$NAS_IP

Next is the script C:\Users\foo\bin\nasbackup.sh that performs a backup of the user’s home directory. It is recommended to use the vi editor that is included with Cygwin to edit this script, or a Windows editor that can create files with Unix line breaks.

#!/bin/bash

PARAMS="-a --chmod=+rX --relative --delete --ignore-errors"
TARGET="admin@$NAS_IP:/share/homes/foo/backup/windows/"

rsync $PARAMS "/cygdrive/c/users/foo" $TARGET

Since the ownership of the Windows files and directories cannot always be mapped to the user and group IDs used on the Linux-based NAS, the user foo might lack the permissions to browse through the cygdrive/c/users path in the backup to reach her home directory contents. The --chmod parameter enforces the required read permissions.

Sometimes running rsync on Windows with the --delete option can cause error messages like "IO error encountered -- skipping file deletion". The --ignore-errors flag forces the deletion of the target files anyway.
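Before scheduling it, the script can be run once from the command line to make sure the transfer works (this is the same invocation that the scheduled task will use later):

> bash C:\Users\foo\bin\nasbackup.sh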

Finally, we set up a job that runs this script every hour.

  1. run taskschd.msc
  2. create a new task
  3. in the General tab, select:
    • Run whether the user is logged on or not
    • Run with highest privileges
    • Configure for: Windows 10
  4. in the Trigger tab, create a new trigger with the following options:
    • One time
    • Repeat task every: 1 hour
    • for a duration of: Indefinitely
  5. in the Actions tab, add an item with the following parameters:
    • Action: Start a program
    • Program/script: C:\cygwin64\bin\bash.exe
    • Arguments: C:\Users\foo\bin\nasbackup.sh
  6. click OK and confirm by entering the user’s password
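Alternatively, the scheduled task can also be created from the command line with schtasks. The following is only a rough sketch and does not replicate every option from the list above (e.g. running with highest privileges), so the parameters may need adjusting:

> schtasks /Create /SC HOURLY /TN "NAS Backup" /RU foo /RP * /TR "C:\cygwin64\bin\bash.exe C:\Users\foo\bin\nasbackup.sh"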

Level 2: Snapshots

The tools required in this chapter can be installed on the NAS using the following command:

$ opkg install rsnapshot coreutils-nice ionice

We want to enable the users to access the snapshots of their personal files to easily restore previous versions, but we cannot just create a single snapshots share: the QNAP operating system automatically enforces a 0777 (full access to everybody) permission on all shares, including the users’ homes. If snapshots of those were placed in a central public location, everybody would have access to everybody else’s data.

One simple solution to that problem is to put the snapshots of each user’s data into their own home directory.

Normally, rsnapshot works with one central configuration file, which allows only one snapshot root directory to be defined. Therefore we will create one rsnapshot config file per user.

The global /opt/etc/rsnapshot.conf will be reduced to just some general parameters (note that rsnapshot requires the fields in its configuration files to be separated by tabs, not spaces):

config_version	1.2
cmd_cp		/opt/bin/cp
cmd_rm		/opt/bin/rm
cmd_rsync	/opt/bin/rsync
cmd_du		/opt/bin/du
cmd_rsnapshot_diff	/opt/bin/rsnapshot-diff
retain	hourly	24
retain	daily	7
retain	weekly	4
retain	monthly	6
verbose		2
loglevel	3

Next, we create the target directory for the user’s snapshots.

$ mkdir /share/homes/foo/.snapshots

Within that folder we put the snapshot configuration file .rsnapshot.conf. It includes the global settings and then defines the user-specific snapshot configuration.

include_conf	/opt/etc/rsnapshot.conf
snapshot_root	/share/homes/foo/.snapshots
lockfile	/share/homes/foo/.snapshots/.rsnapshot.pid
backup	/share/homes/foo/	localhost/

The file must not be editable by the user:

$ chmod 0644 ~foo/.snapshots/.rsnapshot.conf
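The configuration can be checked for errors with rsnapshot's built-in configtest before the first run:

$ rsnapshot -c ~foo/.snapshots/.rsnapshot.conf configtest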

Now the snapshot jobs can be defined in /opt/etc/runQnapJob.conf:

snapshotFooHourly:Hourly snapshot of ~foo:\
    ionice -c3 nice rsnapshot -c ~foo/.snapshots/.rsnapshot.conf hourly
snapshotFooDaily:Daily snapshot of ~foo:\
    ionice -c3 nice rsnapshot -c ~foo/.snapshots/.rsnapshot.conf daily
snapshotFooWeekly:Weekly snapshot of ~foo:\
    ionice -c3 nice rsnapshot -c ~foo/.snapshots/.rsnapshot.conf weekly
snapshotFooMonthly:Monthly snapshot of ~foo:\
    ionice -c3 nice rsnapshot -c ~foo/.snapshots/.rsnapshot.conf monthly

The nice and ionice commands will lower the CPU and disk access priority of rsnapshot, reducing the impact on the normal operation of the NAS.

Lastly, we add a cron entry for each job to /etc/config/crontab:

...
59 * * * * runQnapJob.sh snapshotFooHourly
50 3 * * * runQnapJob.sh snapshotFooDaily
45 2 * * 1 runQnapJob.sh snapshotFooWeekly
40 1 1 * * runQnapJob.sh snapshotFooMonthly

Then the new crontab needs to be activated:

$ crontab /etc/config/crontab && /etc/init.d/crond.sh restart
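Before relying on the schedule, one of the jobs can be run manually to verify that everything works and that the result shows up in the NAS log:

$ /opt/bin/runQnapJob.sh snapshotFooHourly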

Over time, the directory structure will grow, and finally look like this (only one snapshot is shown expanded):

/
└── share
    └── homes
        └── foo
            ├── backup
            ├── data
            └── .snapshots
                ├── daily.0
                │   └── localhost
                │       └── share
                │           └── homes
                │               └── foo
                │                   ├── backup
                │                   └── data
                ├── ...
                ├── daily.6
                ├── hourly.0
                ├── ...
                ├── hourly.23
                ├── weekly.0
                ├── ...
                ├── weekly.3
                ├── monthly.0
                ├── ...
                └── monthly.5

Note that we can create snapshots of directories that contain the actual snapshot target directory: rsnapshot will automatically take care of excluding .snapshots, thus preventing recursion issues.
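Restoring a previous version of a file is then simply a matter of copying it out of the desired snapshot, for example (the document name is made up):

$ cp ~foo/.snapshots/daily.2/localhost/share/homes/foo/data/Documents/report.odt \
    ~foo/data/Documents/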

Level 3: Encrypted offsite backup

This procedure will copy selected data to an online storage provider that supports rsync. In this example we will use Strato HiDrive.

To protect personal data, it will be encrypted locally with EncFS before the transfer. EncFS's reverse mode provides an on-the-fly encrypted view of another directory; this encrypted view is then backed up to the online storage.

Preparations

We first install the encfs package:

$ opkg install encfs

Then we need a directory for preparations before the backup run. This directory should not be located in /tmp or within the directory structure of the NAS operating system, because it needs to survive reboots and upgrades of the NAS firmware. One place where this is guaranteed is a shared folder, so we create a new one called .onlinebackup in the web interface of the NAS. The actual sharing of this directory via NFS, CIFS, etc. needs to be disabled and any access to the share must be restricted to the admin user.

Within this directory we create two subdirectories:

$ mkdir /share/.onlinebackup/{crypted,staging}

Later we will bind-mount all directories that need to be backed up into staging and then create the encrypted view of it in crypted.

But first we need to do a one-time setup of the encrypted directory:

$ encfs --reverse /share/.onlinebackup/staging /share/.onlinebackup/crypted

We choose the standard mode and enter a password.

Everything that is placed into staging will now also appear in crypted. Both file names and contents will be encrypted.

The command above will also create the file staging/.encfs6.xml which contains the encryption parameters and is required to restore the data.
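A quick check shows that the reverse encryption works; test.txt is just a throwaway example file:

$ touch /share/.onlinebackup/staging/test.txt
$ ls /share/.onlinebackup/crypted
$ rm /share/.onlinebackup/staging/test.txt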

For now we unmount the crypted view again:

$ fusermount -u /share/.onlinebackup/crypted

To skip the password prompt when executing encfs during the backup run, we put the password into /share/.onlinebackup/.encfs and make that file readable by the admin user only:

$ chmod 0600 /share/.onlinebackup/.encfs

We also need to generate an SSH key pair for the admin user and upload the public key to the online storage provider. The procedure is the same as described for the Linux client above (just replace root with admin and $NAS_IP with the SSH-accessible hostname of the online storage provider). Strato also allows uploading the key using their HiDrive web interface.
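Adapted to this case, the commands could look roughly like this, assuming ssh-copy-id is available on the NAS and accepted by the provider (otherwise the public key has to be uploaded via the web interface):

$ ssh-keygen
$ ssh-copy-id -i ~/.ssh/id_rsa.pub $HIDRIVE_USER@rsync.hidrive.strato.com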

Creating the onlinebackup script

Now the script /opt/bin/onlinebackup.sh can be deployed ($HIDRIVE_USER needs to be replaced with the actual user name):

#!/bin/bash

BACKUP_DIRS=(
/share/homes/foo/data
/share/homes/foo/backup
/share/Public/Pictures
)

WORKING_DIR="/share/.onlinebackup"
STAGING_DIR="staging"
CRYPTED_DIR="crypted"
ENCFS_CONFIG_FILE=".encfs6.xml"
ENCFS_PW_FILE=".encfs"

TARGET_HOST="rsync.hidrive.strato.com"
TARGET_USER="$HIDRIVE_USER"
TARGET_DIR="/users/$HIDRIVE_USER/backup/nas/data/"

PID="/var/run/onlinebackup.pid"

error() {
	rm ${PID}
	echo -e "$1" >&2
	exit 1
}

checkDir() {
	if [[ ! $1 = /* ]]; then
		error "$1: directory path must be absolute"
	fi
	if [ ! -d $1 ]; then
		error "$1: directory does not exist"
	fi
}

if [ -e ${PID} ]; then
	# do not call error() here, since it would remove the PID file of the running instance
	echo "Script already running - exiting." >&2
	exit 1
fi

touch ${PID}

echo "* checking prerequisites"

checkDir ${WORKING_DIR}

WORKING_DIR=${WORKING_DIR%/}
STAGING_DIR="${WORKING_DIR}/${STAGING_DIR}"
CRYPTED_DIR="${WORKING_DIR}/${CRYPTED_DIR}"
ENCFS_CONFIG_FILE="${STAGING_DIR}/${ENCFS_CONFIG_FILE}"
ENCFS_PW_FILE="${WORKING_DIR}/${ENCFS_PW_FILE}"

TARGET="${TARGET_USER}@${TARGET_HOST}:/${TARGET_DIR#/}"

if [ ! -d ${STAGING_DIR} ]; then
	error "${STAGING_DIR}: Directory does not exist"
fi

if [ ! -d ${CRYPTED_DIR} ]; then
	error "${CRYPTED_DIR}: Directory does not exist"
fi

for DIR in "${BACKUP_DIRS[@]}"; do
	checkDir ${DIR}
	# check if corresponding directory exists in staging
	FULL_DIR=${STAGING_DIR}${DIR}
	if [ ! -d ${FULL_DIR} ]; then
		echo "Creating ${FULL_DIR}"
		mkdir -p ${FULL_DIR} || error "could not create ${FULL_DIR}"
	fi
done

# check for encfs config in staging
if [ ! -f ${ENCFS_CONFIG_FILE} ]; then
	error "encfs config file ${ENCFS_CONFIG_FILE} not found.\
	\nRun 'encfs --reverse ${STAGING_DIR} ${CRYPTED_DIR}' \
	to create it and set a password.\nThe password must match \
	the content of ${ENCFS_PW_FILE}."
fi

if [ ! -f ${ENCFS_PW_FILE} ]; then
	error "${ENCFS_PW_FILE}: File does not exist"
fi

echo "* creating encrypted views of the backup directories"

for DIR in "${BACKUP_DIRS[@]}"; do
	mount --bind ${DIR} ${STAGING_DIR}${DIR} || error "could not bind-mount ${DIR}"
done

(cat ${ENCFS_PW_FILE} | encfs -S --reverse ${STAGING_DIR} ${CRYPTED_DIR}) \
    || error "could not create the encrypted view"

echo "* synchronizing data"

rsync -av --delete-before ${CRYPTED_DIR}/ ${TARGET}

echo "* synchronizing encfs config file"

rsync -av ${ENCFS_CONFIG_FILE} ${TARGET}

echo "* unmounting encrypted views of the backup directories"

fusermount -u ${CRYPTED_DIR}

for DIR in "${BACKUP_DIRS[@]}"; do
	umount ${STAGING_DIR}${DIR}
done

rm ${PID}

The script will bind-mount all the directories in the BACKUP_DIRS list into the staging directory, create the encrypted view, and then synchronize it with the online storage. It will also copy the unencrypted .encfs6.xml file so that the data can be decrypted later. Note that this might be a security risk, since anybody who has access to that file and correctly guesses the EncFS password can decrypt the data. If the data is very sensitive, .encfs6.xml should be stored at a different location from the encrypted data.
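Like the other scripts, it needs to be executable. A manual run is a good way to verify the whole chain once before scheduling it:

$ chmod +x /opt/bin/onlinebackup.sh
$ /opt/bin/onlinebackup.sh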

Finally, we add a new job definition to /opt/etc/runQnapJob.conf …

...
HiDriveBackup:HiDrive backup:/opt/bin/onlinebackup.sh

… and create the corresponding cron job in /etc/config/crontab:

...
2 10 * * * runQnapJob.sh HiDriveBackup

Then the crontab needs to be reloaded again:

$ crontab /etc/config/crontab && /etc/init.d/crond.sh restart

During a run of the job, the file system will look like this:

/
└── share
    ├── homes
    │   └── foo
    │       ├── backup
    │       ├── data
    │       └── .snapshots
    ├── Public
    │   └── Pictures
    └── .onlinebackup
        ├── .encfs
        ├── crypted
        │   ├── P-XIAhFdxfnqgE3x0v7vxyz
        │   └── IrLsqFkyZnkG9PHRJpe7xyz
        │       ├── H6grUky27akOaSUo2l6jxyz
        │       │   └── 9WLhw,zE67CkCOnA8YIwxyz
        │       │       ├── xqSNsPxwkmOHRscpXdwExyz
        │       │       └── f,nG8D3cfdq5k,0Ewqh9xyz
        │       └── vKmB0yMGXtcDzNyMXhoAxyz
        │           └── Q5,k70,zu1eYYbys65xIxyz
        └── staging
            ├── .encfs6.xml
            └── share
                ├── homes
                │   └── foo
                │       ├── backup
                │       └── data
                └── Public
                    └── Pictures

Restoring data

If we need to restore some data, we either download the whole backup directory from the online storage, or mount it using WebDAV or SSHFS:

$ mkdir /tmp/{encrypted,decrypted}
$ sshfs -o ro $HIDRIVE_USER@rsync.hidrive.strato.com:\
    /users/$HIDRIVE_USER/backup/nas/data /tmp/encrypted

Then we can create a decrypted view of the data using the normal EncFS mode:

$ encfs /tmp/encrypted /tmp/decrypted

If .encfs6.xml is not stored along with the data on the online storage, we need to specify its location via the ENCFS6_CONFIG environment variable when issuing the command:

$ ENCFS6_CONFIG=$ENCFS6_PATH encfs /tmp/encrypted /tmp/decrypted
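Once the required files have been copied out of /tmp/decrypted, both views can be unmounted again:

$ fusermount -u /tmp/decrypted
$ fusermount -u /tmp/encrypted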