Linux Backups
Suggestions and Tips

2016-03 updated, rickatech


Suggestions and tips for performing routine file and database backups, plus best practices for retiring/winding down Linux servers.

Winding Down and Archiving

At some point a critical Linux server has been running important services for many years, but it is finally not practical to upgrade its OS and keep supporting it. To archive the system, there should only be a few file related areas to copy to archive storage; after that the machine's disk can be erased and the hardware recycled.

  - identify network accessible external storage 
  - backup to folders
    /var
    /etc
    /home
    /root
    /custom (e.g. public, opt, ... wherever you may have created
             a custom directory containing critical files)
  - check which files on the external storage can only be read by root,
    and sanity check the copied sizes with
    du -h --max-depth 1
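The copy step above might be scripted along the following lines. This is only a sketch: the mount point /mnt/archive/oldhost and the directory list are assumptions, adjust them to your setup.

```shell
#!/bin/sh
# Sketch of the archive step; /mnt/archive/oldhost is a hypothetical
# mount point for the network accessible external storage.
archive_dirs() {
	# $1 = target directory on the external storage, remaining args = dirs
	target=$1; shift
	mkdir -p "$target"
	for dir in "$@"
	do
		# one compressed tarball per directory, e.g. etc.tar.gz,
		# storing paths relative to / (leading slash stripped)
		tar czf "$target/$(basename "$dir").tar.gz" -C / "${dir#/}"
	done
}
# example:
#   archive_dirs /mnt/archive/oldhost /var /etc /home /root /custom
```

Run it as root so that root-only files are readable during the copy.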

Routine Backups

Linux backups are necessary if a system has any critical services or data. Services include databases, mail stores, and source code repositories. Data may just be simple static content of a public file share, or some web site files. Unlike static data, services require special commands to ensure backup snapshots are consistent. For example, the popular MySQL database service offers the mysqldump command to produce database backup snapshots.
For backups to be effective, they need to spread critical data as widely as possible in an appropriately secure manner.
  • place data in a different hard disk partition (good)
  • place data on external media (better)
  • place data on a system far away, network file transfer (best)
Note, placing backups on offline media is becoming less critical these days with ubiquitous network access. It used to be that moving data from one system to another made investment in high capacity offline media worth it. However, hard disks are getting quite cheap these days. Offline media is becoming more and more awkward to use as multiple tapes/cartridges/disks are needed to properly archive the contents of large disk arrays. Also, removable media fills up and needs to be manually replaced. Distributing data amongst disparate online servers which have extra disk space is becoming more and more practical. Also, tactical backups of only the critical data help reduce backup space needs. Finally, having backups online, especially multiple past copies, makes recovery much faster - no need to find the proper media cartridge and mount it.

Ok, so the following scripts are pretty raw, but they show how to back up a database (i.e. a service) and a file store (i.e. static files) - in this case the popular MediaWiki web application. MediaWiki stores most of its content in a database, but the MediaWiki application files are a static set of files that should also be backed up. For good measure, backing up the core MySQL meta-database is a good idea in case you forget your database names and user logins.

The approach shown here uses an external USB drive mounted at /archive. The scripts below use rsync against a local disk partition, which should pretty much just work. Better would be to have /archive on a remote system which rsyncs static data down from the original server. Services are best backed up locally using the appropriate service command (e.g. mysqldump) on the source server, in concert with snapshot transfer to a remote host. Adding remote rsync capability involves added script complexity. See the rsync related link above for more details on remote rsync setup.

One last bit of advice: if possible, always place backup scripts on the external/remote backup drive. In some strange situations a remote partition mount may drop, and if the script lives on the remote drive it simply won't run - which is preferable. The converse case would be a local script that runs even though the remote drive is offline; under Linux this can suddenly fill up the root or another important partition, which is very bad! So if the remote store is offline the backup fails, but you should have a recent backup anyway, and if your machine is offline you have bigger problems to worry about than backups.

Also, these scripts stagger backups so there are 7 days of snapshots and then a trail of weekly snapshots that accumulate over time. The weekly backup files will have to be culled manually or the backup drive will fill up. But hey, checking disk space once a month is not too often to be annoying, and is something you should do for other reasons than backup anyway.
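The manual culling can itself be reduced to a one-liner. This sketch assumes the weekly files keep the "weekly" substring in their names, as the scripts below produce:

```shell
#!/bin/sh
# Delete weekly snapshots older than a given number of days; the
# directory and name pattern follow the rotation scripts below.
cull_weeklies() {
	dir=$1
	days=$2
	# -print so the cron mail shows what was removed
	find "$dir" -name '*weekly*' -mtime +"$days" -print -delete
}
# example - drop weeklies older than roughly six months:
#   cull_weeklies /archive/sith 180
```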
# crontab -l

  05  01  * * * sh /archive/sith_db_daily.sh mysql;
                sh /archive/sith_db_daily.sh wikidb;
                ls -lh /archive/sith | mail -s "sith db backups" itstaff@foobar.com

  50  01  * * * sh /archive/sith_files_daily.sh mediawiki-1.5.0;
                ls -lh /archive/sith | mail -s "sith file backups" itstaff@foobar.com
sith_db_daily.sh
#!/bin/bash

### 2006-09-30 Frederick Shaul
### MySQL database backup rotation.
### Performs backup and leaves staggered backups files
###   - last 7 day
###   - week ago backup from 2 Saturdays past
### Future?
###   - month ago backup from 2 months (first Saturday) past
###   - year ago backup from 2 years (first Saturday of first month) past

EXIT_CODE=0

if [ -n "$1" ]
then
	DB=$1
	NAME="sith-$DB"         # include host name
	SUFFIX=sql
	BACKUP=/archive/sith    # target directory
	WDAY=`date +%w`
	DAILY=daily$WDAY
	# foobar20060929_daily3.sql, rotate 7 daily and weekly backup names
	LAST=`date +%Y%m%d`_$DAILY

	if [ $WDAY -eq 6 ]
	then
		# rename last Saturday's backup so it will be kept longer than a week
		for file in $BACKUP/$NAME*_$DAILY*.$SUFFIX
		do
			[ -e "$file" ] || continue   # glob may match nothing
			mv "$file" "${file%$DAILY.$SUFFIX}weekly.$SUFFIX"
		done
	else
		rm -f $BACKUP/$NAME*$DAILY.$SUFFIX
	fi

	if [ $EXIT_CODE -eq 0 ]
	then
		BNAME=$BACKUP"/"$NAME"_"$LAST"."$SUFFIX
		/usr/bin/mysqldump -u root --opt "$DB" > "$BNAME"
	fi
else
        echo "Usage: $0 [database] ..."
        echo "  "
fi
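The Saturday rename above leans on shell suffix stripping (the ${file%pattern} expansion); a quick illustration of what it does to a daily file name:

```shell
#!/bin/sh
# Illustrates the ${file%...} rename used on Saturdays in the script above.
file="/archive/sith/sith-wikidb_20060930_daily6.sql"
DAILY=daily6
SUFFIX=sql
# strip the trailing "daily6.sql", then append "weekly.sql"
weekly="${file%$DAILY.$SUFFIX}weekly.$SUFFIX"
echo "$weekly"   # -> /archive/sith/sith-wikidb_20060930_weekly.sql
```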
sith_files_daily.sh
#!/bin/bash

### 2006-10-10 Frederick Shaul
### file backup rotation.
### Performs backup and leaves staggered backups files
###   - last 7 day
###   - week ago backup from 2 Saturdays past
### Future?
###   - month ago backup from 2 months (first Saturday) past
###   - year ago backup from 2 years (first Saturday of first month) past

EXIT_CODE=0

if [ -n "$1" ]
then
	NAME=sith-$1
	SUFFIX=tar.gz
	BACKUP=/archive/sith    # target directory
	WDAY=`date +%w`
	DAILY=daily$WDAY
	# foobar20060929_daily3.tar.gz, rotate 7 daily and weekly backup names
	LAST=`date +%Y%m%d`_$DAILY

	if [ $WDAY -eq 6 ]
	then
		# rename last Saturday's backup so it will be kept longer than a week
		for file in $BACKUP/$NAME*_$DAILY*.$SUFFIX
		do
			[ -e "$file" ] || continue   # glob may match nothing
			mv "$file" "${file%$DAILY.$SUFFIX}weekly.$SUFFIX"
		done
	else
		rm -f $BACKUP/$NAME*$DAILY.$SUFFIX
	fi

	if [ $EXIT_CODE -eq 0 ]
	then
		if [ "$1" = "mediawiki-1.5.0" ]
		then
			echo "sith wiki ..."
			rsync -a -v -x --delete /wiki/mediawiki-1.5.0 /archive/sith
	                BNAME=$BACKUP"/"$NAME"_"$LAST"."$SUFFIX
	                tar cvzf $BNAME /wiki/mediawiki-1.5.0
		else
        		echo "Usage: $0 [file-group] ..."
        		echo "  "
			echo "       mediawiki-1.5.0 | ..."
		fi
	fi
else
        echo "Usage: $0 [file-group] ..."
        echo "  "
	echo "       mediawiki-1.5.0 | ..."
fi
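For completeness, recovery from these snapshots is just the reverse. A sketch, where the snapshot file names are examples of what the scripts above produce:

```shell
#!/bin/sh
# Unpack a file snapshot into a scratch directory before moving it
# into place; tar stores the archived paths relative to the archive
# root, so nothing is overwritten in place by accident.
restore_files() {
	snapshot=$1
	dest=$2
	mkdir -p "$dest"
	tar xzf "$snapshot" -C "$dest"
}
# example:
#   restore_files /archive/sith/sith-mediawiki-1.5.0_20060930_weekly.tar.gz /tmp/restore
# the matching database restore would be along the lines of:
#   mysql -u root wikidb < /archive/sith/sith-wikidb_20060930_weekly.sql
```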