Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating the script onto one server is fairly obvious: making changes is much easier when you only have one script to edit.

This approach is ideal for environments with roughly 15-20 servers or fewer. For anything larger than that, I’d recommend a complete end-to-end backup solution such as Bacula.

The bash shell script pasted below is very straightforward. The first argument is the hostname or IP address of the destination server you are backing up; the remaining (and potentially unlimited) arguments are the folders you want backed up, each enclosed in single quotes.

This script depends on the server it runs on having ssh key based authentication enabled and implemented for the root user. Security can be tightened with IP based restrictions in the ssh configuration, the firewall configuration, or elsewhere.

To explain further: the script connects to the destination server as the root user, using ssh key authentication, and then initiates a remote rsync on that server pointing back to the backup server as a user called "backupuser". That means the ssh key needs to be installed for root on each destination server, and a user called "backupuser" needs to exist on the backup server itself, with the ssh keys of all the destination servers installed for the remote rsync.

Hopefully I did not overcomplicate this, because it really is quite simple:

Backup Server -> root -> destination server to backup -> backupuser rsync -> Backup Server
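To make that concrete, here is a minimal sketch of a script along those lines (the backup server hostname and destination path are assumptions, not values from the original script):

#!/bin/bash
# Sketch of the centralized backup idea described above.
# Usage: central-backup.sh <destination-host> '<folder>' ['<folder>' ...]
# Assumes: root ssh keys installed on the destination host, and a local
# "backupuser" account that accepts keys from the destination hosts.

DEST_HOST="$1"
shift

BACKUP_SERVER="backup.example.com"                    # hypothetical backup server hostname
BACKUP_ROOT="/home/backupuser/backups/${DEST_HOST}"   # hypothetical local destination

mkdir -p "${BACKUP_ROOT}"
chown backupuser "${BACKUP_ROOT}"

for FOLDER in "$@"; do
    # Connect to the destination host as root and have it rsync the folder
    # back to this backup server as backupuser.
    ssh root@"${DEST_HOST}" \
        "rsync -avz --delete ${FOLDER} backupuser@${BACKUP_SERVER}:${BACKUP_ROOT}/"
done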

Once you implement the script and do a few dry run tests, you can set up a scheduled task for each destination server. Here is an example of a cron entry for one server to be backed up:
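Something along these lines, assuming the script lives at /usr/local/bin/central-backup.sh (the path, host and schedule are placeholders):

# /etc/crontab on the backup server -- back up web01 every night at 2am
0 2 * * * root /usr/local/bin/central-backup.sh web01.example.com '/etc' '/var/www'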

SVN Offsite Backup Script: Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Throughout our travels and experience utilizing the SVN code repository system, we have developed a quick bash script to export the entire SVN repository, encrypt it, compress it into an archive, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3, which you can find here.

Note also that this script keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might be redundant since all revisions are kept within SVN itself, but I thought it would provide an additional layer of backup redundancy. The script can easily be modified to back up only a single file every night, overwriting the older copy after each backup.

Here’s the script:
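Your repository path, S3 bucket, gpg key and s3sync location will differ, so treat the following as a rough sketch of the approach rather than a drop-in script:

#!/bin/bash
# Export, compress, encrypt and ship an SVN repository to Amazon S3.
# All of the values below are assumptions -- adjust to your environment.

REPO=/var/svn/myrepo                   # hypothetical repository path
BACKUP_DIR=/backups/svn                # local staging / retention directory
S3SYNC=/usr/local/s3sync/s3sync.rb     # assumed s3sync install location
BUCKET=my-svn-backups                  # hypothetical S3 bucket
GPG_RECIPIENT=backups@example.com      # hypothetical gpg recipient key
DATE=$(date +%Y-%m-%d)

mkdir -p "${BACKUP_DIR}"

# Dump the entire repository, compress it, then encrypt it with gpg
svnadmin dump "${REPO}" | gzip | \
    gpg --encrypt --recipient "${GPG_RECIPIENT}" \
    > "${BACKUP_DIR}/svn-${DATE}.dump.gz.gpg"

# Keep a rolling 7 days of local copies
find "${BACKUP_DIR}" -name 'svn-*.dump.gz.gpg' -mtime +7 -delete

# Ship the staging directory to S3 over an SSL connection
ruby "${S3SYNC}" --ssl -r "${BACKUP_DIR}/" "${BUCKET}:svn/"

# To decrypt and restore a dump later (example):
# gpg --decrypt svn-2010-01-01.dump.gz.gpg | gunzip | svnadmin load /path/to/newrepo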

Note how I have provided an example, commented out within the script, of how you can go about decrypting the encrypted SVN dump file. You can also modify this script to back up to any offsite location; just remove the s3sync related entries and replace them with rsync or your preferred transport method.
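For example, swapping the s3sync step for an rsync push to a remote host (the hostname and path here are made up) could be as simple as:

rsync -avz -e ssh "${BACKUP_DIR}/" backupuser@offsite.example.com:/backups/svn/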

I hope this makes your life easier!

Centralized remote backup script with SSH key authentication

Greetings,

It has been a while since we posted any useful tidbits for you, so we have decided to share one of our quick & dirty centralized backup scripts.

The script relies on ssh key based authentication, described here on this blog. It essentially parses a configuration file in which the fields are separated by commas and colons, as in the example config here:
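The exact fields in the original config may differ, but the layout is along these lines (here assumed to be host, ssh user, and a colon-separated list of directories, one host per line):

# backup-hosts.txt -- host,user,dir:dir:dir
web01.example.com,root,/etc:/var/www:/home
db01.example.com,root,/etc:/var/lib/mysql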

Note the intended backup directories in the 3rd field, separated by colons. Simply populate the backup-hosts.txt config file (located in the same folder as the script) with all the hosts you want backed up.

The script then SSHes to the intended host and sends a tar -czf stream (securely) over ssh, to be output into the destination of your choice. Ideally you should centralize this script on a box that has direct access to a lot of disk space.

Find the script here:
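A simplified sketch of what it does, using the backup-hosts.txt layout shown above (the destination path is an assumption):

#!/bin/bash
# Sketch of the centralized tar-over-ssh backup described above.
# Assumes ssh key authentication and the host,user,dir:dir:dir config layout.

CONFIG="$(dirname "$0")/backup-hosts.txt"
DEST=/backups                  # hypothetical local destination with plenty of disk
DATE=$(date +%Y-%m-%d)

while IFS=, read -r HOST SSH_USER DIRS; do
    # Skip blank lines and comments
    case "$HOST" in ""|"#"*) continue ;; esac

    # The 3rd field is a colon-separated list of directories to back up
    TARGETS=$(echo "$DIRS" | tr ':' ' ')

    mkdir -p "${DEST}/${HOST}"
    # Stream a gzipped tarball over ssh and write it out locally
    # (-n keeps ssh from consuming the rest of the config on stdin)
    ssh -n "${SSH_USER}@${HOST}" "tar -czf - ${TARGETS}" \
        > "${DEST}/${HOST}/${HOST}-${DATE}.tar.gz"
done < "$CONFIG"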

You could modify this script to keep separate daily backups, pruned to retain only X days' worth (i.e. only 7 days). There is a lot you can do here.
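For instance, a single find at the end of the script above (using the same hypothetical $DEST path) would prune anything older than 7 days:

find "${DEST}" -name '*.tar.gz' -mtime +7 -delete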

If you have a handful of Linux or BSD servers that you would like to back up to a centralized location, without having to maintain an individual script on each server, then perhaps you could use or modify this script to suit your needs.

I hope this helps.

Backup a live FreeBSD filesystem and remotely migrate to another server

Lately we’ve been all about live migrations / backups here at *.hosting. And why not? With the advent of concepts such as the “self healing blade cloud environment”, we have made a point of testing and scripting live migration scenarios.

Following on our last post about backing up LVM volumes, we have decided to make a simple post on ‘dumping’ a live FreeBSD filesystem, compressing it mid-stream and sending it over the network (encrypted through ssh, of course), before it is saved as a file (or restored to a waiting live-CD mounted system).

By default, the FreeBSD installer partitions your disk into separate root, /usr and /var filesystems.

So let’s dump the root partition, since it’s the smallest:
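An invocation along these lines streams the compressed dump straight to a backup host over ssh (the host and destination path are placeholders):

dump -0uanL -f - / | gzip -2 | ssh root@backupserver "dd of=/backups/server1-root.dump.gz"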

Let’s break down the options so you can fully understand what it’s doing:
-0 // dump level 0 = full backup
-u // update the dumpdates file after a successful dump
-a // bypass all tape length considerations; autosize
-n // notify if attention is required
-L // tell dump that it is a live filesystem for a consistent dump; it will take a snapshot

Alternatively you could dump the filesystem to a local file:
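For example (the local path is a placeholder):

dump -0uanL -f - / | gzip -2 > /usr/local/backups/root.dump.gz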

If you wanted to dump from server1 and restore on server2:
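Something like the following, run on server1, with the target filesystem already mounted on server2 (hostnames and paths are placeholders):

dump -0anL -f - / | gzip -2 | ssh root@server2 "cd /mnt && gunzip | restore -rf -"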

Again, this is a straightforward command, and it is typically fast (within reason). You could script it for automated dumps/snapshots of your filesystem, giving you full restores in a disaster scenario.

Amazon S3 Backup script with encryption

With the advent of cloud computing, there have been several advances in commercial cloud offerings, most notably Amazon’s EC2 computing platform and their S3 storage platform.

Backing up to Amazon S3 has become a popular alternative to achieving true offsite backup capabilities for many organizations.

The fast data transfer speeds as well as the low cost of storage per gigabyte make it an attractive offer.

There are several free software solutions that offer the ability to connect to S3 and transfer files. The one that shows the most promise is s3sync.

There are already a few guides that show you how to implement s3sync on your system.

The good thing is that this can be implemented on Windows, Linux and FreeBSD, among other operating systems.

We have written a simple script that utilizes the s3sync program in a scheduled offsite backup scenario. Find our script below, and modify it as you wish. Hopefully it will help you get your data safely offsite 😉
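As a starting point, a sketch of such a scheduled backup might look like this (the s3sync path, bucket, source directories and staging path are all assumptions):

#!/bin/bash
# Nightly offsite backup to Amazon S3 using s3sync.
# Every path and bucket name below is an assumption -- adjust to your environment.

S3SYNC=/usr/local/s3sync/s3sync.rb     # assumed s3sync install location
BUCKET=my-offsite-backups              # hypothetical S3 bucket
SOURCE_DIRS="/etc /var/www /home"      # what we want offsite
STAGING=/backups/staging               # local staging / retention directory
DATE=$(date +%Y-%m-%d)

mkdir -p "${STAGING}"

# Archive and compress the directories we want offsite
tar -czf "${STAGING}/backup-${DATE}.tar.gz" ${SOURCE_DIRS}

# Push the staging directory to S3 over an SSL connection
ruby "${S3SYNC}" --ssl -r "${STAGING}/" "${BUCKET}:daily/"

# Prune local copies older than 7 days
find "${STAGING}" -name 'backup-*' -mtime +7 -delete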

Now, if your data happens to be sensitive (it usually is), encrypting the data during transit (with the --ssl flag) alone is often not enough.

As an alternative, you can encrypt the actual file before it is sent to S3. This would be incorporated into the tar command in the above script. That line would look something like this:
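Using the variable names from the sketch above, and assuming a gpg key already exists for a hypothetical backups@example.com recipient, the tar line could become:

tar -czf - ${SOURCE_DIRS} | gpg --encrypt --recipient backups@example.com > "${STAGING}/backup-${DATE}.tar.gz.gpg"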

As an alternative to gpg, you could use openssl to encrypt the data.
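Again using the names from the sketch above (the passphrase file path is an assumption):

# Encrypt with a symmetric key read from a root-only passphrase file
tar -czf - ${SOURCE_DIRS} | openssl enc -aes-256-cbc -salt -pass file:/root/.backup-passphrase > "${STAGING}/backup-${DATE}.tar.gz.enc"

# To decrypt later:
# openssl enc -d -aes-256-cbc -pass file:/root/.backup-passphrase -in backup-2010-01-01.tar.gz.enc | tar -xzf -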

Hopefully this has been helpful!