Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating is fairly obvious: with only one script to maintain, editing or making changes is much easier.

This approach is ideal for environments with roughly 15-20 servers or fewer. Beyond that number I’d recommend a complete end-to-end backup solution such as Bacula.

The bash shell script pasted below is very straightforward and takes two or more arguments. The first is the hostname or IP address of the destination server you are backing up. The remaining (and potentially unlimited) arguments are single-quoted folders that you want backed up.

This script depends on ssh key based authentication being enabled and implemented for the root user on each destination server. Security can be tightened with IP based restrictions in the ssh configuration, the firewall configuration, or elsewhere.
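
If key based authentication isn’t in place yet, a minimal setup (a sketch, assuming OpenSSH and that sshd permits key based root logins) looks like this :

# on the backup server, as the user that runs the script :
ssh-keygen -t rsa -b 4096
ssh-copy-id root@destination-server.com

# verify a non-interactive login works :
ssh root@destination-server.com hostname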

#!/bin/sh
# Offsite Backup script
# Written by www.stardothosting.com
# Dynamic backup script

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
currentdate=`date "+%Y-%m-%d_%H_%M_%S"`
backup_email="backups@youremail.com"
backupserver="origin-backup-server.hostname.com"

# Check User Input
if [ "$#" -lt 2 ]
then
        echo -e "nnUsage Syntax :"
        echo -e "./backup.sh [hostname] [folder1] [folder2] [folder3]"
        echo -e "Example : ./backup.sh your-server.com '/etc' '/usr/local/www' '/var/lib/mysql'nn"
        exit 1
fi

# get the server's hostname
host_name=`ssh -l root $1 "hostname"`
echo "Host name : $host_name"
if [ "$host_name" == "" ]
then
        host_name="unknown_$currentdate"
fi

echo "$host_name Offsite Backup Report: " $currentmonth > /var/log/backup.log
echo -e "----------------------------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log

# Ensure permissions are correct
chown -R backups:backups /home/fileserver/backups/
ls -d /home/fileserver/backups/* | grep -Ev "\.ssh|\.bash" | xargs -d "\n" chmod -R 775

# iterate over user arguments & set error level to 0
errors=0
for arg in "${@:2}"
do
        # check if receiving directory exists
        if [ ! -d "/home/fileserver/backups/$host_name" ]
        then
                mkdir /home/fileserver/backups/$host_name
        fi
        sanitize=`echo $arg | sed 's|/*$||'`                            # strip any trailing slashes
        sanitize_dir=`echo $arg | awk -F '/' '{printf "%s", $2}'`       # first path component names the destination subdir
        # note the escaped \$? below, so rsync's exit status is captured on the remote side
        /usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 "/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir; echo \$? > /tmp/bu_rlevel.txt" >> /var/log/backup.log 2>&1
        echo "/usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 \"/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir\""

        runlevel=`ssh -l root $1 "cat /tmp/bu_rlevel.txt"`
        echo "Runlevel : $runlevel"

        if [ "$runlevel" -ge 1 ]
        then
                errors=$((errors+1))
        else
                echo -e "Script Backup for $arg Completed Successfully!" >> /var/log/backup.log 2>&1
        fi

done


# Check error level
if [ $errors -ge 1 ]
then
        echo -e "There were some errors in the backup job for $host_name, please investigate" >> /var/log/backup.log 2>&1
        cat /var/log/backup.log | mail -s "$host_name Backup Job failed" $backup_email
else
        cat /var/log/backup.log | mail -s "$host_name Backup Job Completed" $backup_email
fi

It should be explained further that this script actually connects to the destination server as the root user, using ssh key authentication. It then initiates a remote rsync command on the destination server back to the backup server as a user called "backups". That means not only does the ssh key need to be installed for root on the destination servers, but a user called "backups" needs to exist on the backup server itself, with the ssh keys of all the destination servers installed for the remote rsync.

Hopefully I did not over complicate this, because it really is quite simple :

Backup Server -> root ssh -> destination server to backup -> rsync as backups user -> Backup Server
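
If the receiving side isn’t set up yet, a minimal sketch on the backup server (assuming a standard useradd) would be :

# create the receiving user and backup root on the backup server
useradd -m backups
mkdir -p /home/fileserver/backups
chown -R backups:backups /home/fileserver/backups

# then, from each destination server, install root's public key for that user :
ssh-copy-id backups@origin-backup-server.hostname.com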

Once you implement the script and do a few dry-run tests, you can set up a scheduled task for each destination server. Here is an example cron entry for one server to be backed up :

01 1 * * * /bin/sh /usr/local/bin/backups.sh destination-server-hostname '/etc' '/usr/local/www' '/home/automysql-backups'

SVN Offsite Backup Script : Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Throughout our travels and experience utilizing the SVN code repository system, we have developed a quick bash script to export the entire SVN repository, encrypt it, compress it into an archive, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3, which you can find here.

Note that this script also keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might seem redundant since all revisions are kept within SVN itself, but it provides an additional layer of backup redundancy. The script can easily be modified to back up a single file every night, overwriting the older copy after each run.
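
For instance, dropping the date suffix from the dump filename in the script below turns it into a single nightly file that overwrites itself :

svnadmin dump /subversion/repo_name | /usr/bin/openssl enc -aes-256-cbc -pass pass:YOUR-ENCRYPTION-PASSWORD -e > /subversion/svn_backups/repo_name-backup.enc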

Here’s the script :

#!/bin/sh
# SVN Off Site Backup script
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
threedays=`date -v-5d "+%Y-%m-%d"`      # BSD date syntax; currently unused below
todaysdate=`date "+%Y-%m-%d"`

export AWS_ACCESS_KEY_ID="YOUR-S3-KEY-ID"
export AWS_SECRET_ACCESS_KEY="YOUR-S3-ACCESS-KEY"


echo "SVN Offsite Backup Log: " $currentmonth > /var/log/offsite-backup.log
echo -e "--------------------------------------------" >> /var/log/offsite-backup.log
echo -e "" >> /var/log/offsite-backup.log

# Remove local repository dump files older than 7 days
/usr/bin/find /subversion/svn_backups -type f -mtime +7 -delete

# Backup SVN and encrypt it
svnadmin dump /subversion/repo_name | /usr/bin/openssl enc -aes-256-cbc -pass pass:YOUR-ENCRYPTION-PASSWORD -e > /subversion/svn_backups/repo_name-backup-$todaysdate.enc

#fyi to decrypt :
#openssl aes-256-cbc -d -pass pass:YOUR-ENCRYPTION-PASSWORD -in repo_name-backup.enc -out decrypted.dump

# Transfer the files to Amazon S3 Storage via HTTPS
/usr/local/bin/ruby /usr/local/bin/s3sync/s3sync.rb --ssl -v --delete -r /subversion/svn_backups S3_BACKUPS:svn/svnbackup >> /var/log/offsite-backup.log 2>&1

if [ "$?" -eq 1 ]
then
        echo -e "***SVN OFFSITE BACKUP JOB, THERE WERE ERRORS***" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job failed" your@email.com
        exit 1
else
        echo -e "Script Completed Successfully!" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job Completed" your@email.com
        exit 0
fi

Note how I have provided an example, commented out within the script, of how to decrypt the encrypted SVN dump file. You can also modify this script to back up to any offsite location. Just remove the s3sync related entries and replace them with rsync or your preferred transport method.
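
For example, swapping the s3sync line for a plain rsync push over ssh (with a hypothetical destination host) could look like this :

/usr/bin/rsync -avz -e ssh --delete /subversion/svn_backups/ backups@your-backup-host.com:/backups/svn/ >> /var/log/offsite-backup.log 2>&1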

I hope this makes your life easier!

SVN Pre Commit Hook : Sanitize your Code!

Hello,

Dealing with several different development environments can be tricky. With SVN specifically, it is ideal to have some “pre-flight” checks in order to make sure some basic standards have been followed.

Some of the things you would want to check might be :

– Does the code generate a fatal PHP error?
– Are there any syntax errors?
– Have valid commit messages been attached to the code commit?

I thought I’d share the pre-commit hook from one of our SVN code repositories so that you can use it, and perhaps expand it with many more checks specific to your own code environment. Feel free to share if improvements are made!

#!/bin/bash
# pre-commit hooks
# www.stardothosting.com

REPOS="$1"
TXN="$2"

PHP="/usr/bin/php"
SVNLOOK="/usr/bin/svnlook"
AWK="/usr/bin/awk"
GREP="/bin/egrep"
SED="/bin/sed"

# Make sure that the commit message is not empty
SVNLOOKOK=1
$SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0

if [ $SVNLOOKOK = 0 ]; then
        echo -e "Empty commit messages are not allowed. Please provide a descriptive comment when committing code." 1>&2
        exit 1
fi

# Make sure the commit message is more than 5 characters long.
LOGMSG=$($SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" | wc -c)

if [ "$LOGMSG" -le 5 ]; then
        echo -e "Please provide a verbose comment when committing changes." 1>&2
        exit 1
fi


# Check for PHP parse errors
CHANGED=`$SVNLOOK changed -t "$TXN" "$REPOS" | $GREP "^[UA]" | $AWK '{print $2}' | $GREP "\.php$"`

for FILE in $CHANGED
do
    MESSAGE=`$SVNLOOK cat -t "$TXN" "$REPOS" "$FILE" | $PHP -l`
    if [ $? -ne 0 ]
    then
        echo 1>&2
        echo "-----------------------------------" 1>&2
        echo "PHP error in: $FILE:" 1>&2
        echo `echo "$MESSAGE" | $SED "s| -| $FILE|g"` 1>&2
        echo "-----------------------------------" 1>&2
        exit 1
    fi
done

exit 0
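
To put the hook into service, save it as pre-commit inside the repository’s hooks directory and make it executable, e.g. :

cp pre-commit /path/to/repository/hooks/pre-commit
chmod +x /path/to/repository/hooks/pre-commit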

Add your Dynamic IPs to Apache HTACCESS files

Hello!

We threw together a quick & simple script to dynamically update your .htaccess files within apache to add your dynamic IP address to the allow / deny fields.

If you’re looking to password protect an admin area (for example) but your office only has a dynamic IP address, then this script might be handy for you.

It’s an extremely simple script that polls your dynamic hostname (if you use no-ip.org or dyndns.org, for example) every 15 minutes as a cron job and, if the IP has changed, updates the .htaccess file.

Hopefully it will make your life just a little bit easier 🙂

Sample Cron entry :

*/15 * * * * /bin/sh /usr/local/bin/htaccessdynamic.sh yourhostname.dyndns.org /var/www/website.com/public_html/.htaccess > /dev/null 2>&1

And now the script :

#!/bin/bash
# Dynamic IP .htaccess file generator
# Written by Star Dot Hosting
# www.stardothosting.com

dynDomain="$1"
htaccessLoc="$2"

dynIP=$(/usr/bin/dig +short $dynDomain)

echo "dynip: $dynIP"
# verify dynIP resembles an IPv4 address
if ! echo "$dynIP" | grep -Eq "^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$"; then
    exit 1
fi

# if dynIP has changed
if ! cat $htaccessLoc | /bin/grep -q "$dynIP"; then

        # grab the old IP (currently unused; the .htaccess file is rewritten wholesale below)
        oldIP=`cat /usr/local/bin/htold-ip.txt`

        # output .htaccess file
        echo "order deny,allow" > $htaccessLoc 2>&1
        echo "allow from $dynIP" >> $htaccessLoc 2>&1
        echo "allow from x.x.x.x" >> $htaccessLoc 2>&1
        echo "deny from all" >> $htaccessLoc 2>&1

        # save the new ip to remove next time it changes, overwriting previous old IP
        echo $dynIP > /usr/local/bin/htold-ip.txt
fi
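
One caveat : the order / allow / deny directives above are Apache 2.2 syntax. On Apache 2.4 (mod_authz_core) the script would need to emit Require directives instead; a rough equivalent of the output block would be :

        echo "Require ip $dynIP" > $htaccessLoc 2>&1
        echo "Require ip x.x.x.x" >> $htaccessLoc 2>&1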

Massive Amazon Route53 API Bind Zone Import Script

Hello there,

Occasionally our managed services work has us dealing directly with other cloud providers such as Amazon. One of our clients set a requirement to migrate over 5,000 domains to Amazon’s Route53 DNS service.

There was little doubt that this could be automated, but since we had never done a deployment of this scale through Amazon’s API directly, we thought it might be interesting to post the process as well as the script through which we managed the import.

Essentially the script loops over a master domain name list file; for each domain it reads the corresponding bind zone file and imports it into Amazon’s Route53 via the cli53 tool package.

One final note, the script outputs all completed domain imports into a CSV file with the following format :

domain.com,ns1.nameserver.com,ns2.nameserver.com,ns3.nameserver.com,ns4.nameserver.com

This is because Route53 assigns a randomly generated set of nameservers to each domain when it is imported, so the script has to keep track of these nameserver/domain associations in order to facilitate the actual nameserver change requests afterwards.

The script isn’t perfect and could benefit from some optimizations and more error checking (it does a lot of error checking already, however), but here it is in its entirety. We hope you will have some use for it!
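
For illustration, if you invoke the script as ./importzone.sh mydomains.txt (per the usage text below), it expects one domain per line in the master list and a matching bind zone file per domain (hypothetical names) :

# mydomains.txt
domain1.com
domain2.com

# zone files read by the script :
/usr/local/zones/mydomains.txt-zones/domain1.com.txt
/usr/local/zones/mydomains.txt-zones/domain2.com.txt
/usr/local/zones/mydomains.txt-zones/complete/        (imported zones are moved here)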

#!/bin/sh
# Import all zone files into amazon
# Star Dot Hosting 2012
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d"`

# sanitize input and verify input was given (the uppercased copy is currently unused)
command=`echo "$1" | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`

if [ -z "$1" ];
then
        echo "AWS ZONE IMPORT"
        echo "---------------"
        echo ""
        echo "Usage : ./importzone.sh file.txt"
        echo ""
        exit 0
fi


echo "zone import log : $currentmonth" > /var/log/importzone.log 2>&1
echo " " >> /var/log/importzone.log 2>&1



for obj0 in $(cat $1);
do
        echo "checking if $obj0 was already migrated ..."
        ls -la /usr/local/zones/$1-zones/complete | grep -w $obj0 >> /dev/null 2>&1
        if [ "$?" -eq 1 ]
        then
        echo "importing $obj0 ..."

        #check if zone file has NS records
        cat /usr/local/zones/$1-zones/$obj0.txt | grep NS >> /dev/null 2>&1
        if [ "$?" -eq 0 ]
        then
                echo "Nameserver exists, continuing..."
        else
                echo "Adding nameserver to record..."
                echo "$obj0. 43201 IN NS ns1.nameserver.com." >> /usr/local/zones/$1-zones/$obj0.txt
        fi

        #check if zone exists
        /usr/local/zones/cli53/bin/cli53 info $obj0 >> /var/log/importzone.log 2>&1
        if [ "$?" -eq 0 ]
        then
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        else
                #create on route53
                /usr/local/zones/cli53/bin/cli53 create $obj0 >> /var/log/importzone.log 2>&1
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        fi

        # output domain + nameservers in a CSV with format : domain.com,ns1,ns2,ns3,ns4
        echo "$obj0,$nameservers" >> nameserver_registrar_request.txt 2&>1
        else
                echo "Domain already migrated .. !"
        fi
done

Backup, compress and encrypt your git repository

Greetings,

I thought I’d share a quick script for backing up git repositories, for the purposes of encrypted and compressed off-site backups.

Unfortunately git does not have an equivalent of svnadmin dump or export whose output can conveniently be piped to stdout. If it did, the script would need fewer commands to accomplish the same task.

Find below a quick bash script that clones a repository, tars and gzips it, encrypts the archive, and keeps 7 days worth of archive files :

#!/bin/sh
# GIT Backup script
# Written by Star Dot Hosting

todaysdate=`date "+%Y-%m-%d"`

#check command input
if [ -z "$1" ];
then
        echo "GIT BACKUP SCRIPT"
        echo "-----------------"
        echo ""
        echo "Usage : ./backup.sh reponame , i.e. yourdomain.git"
        echo ""
        exit 0
fi

echo "GIT Backup Log: " $currentmonth > /var/log/backup.log
echo -e "----------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log

# Find and remove files older than 7 days
/usr/bin/find /data/git/git-backup -type f -mtime +7 -delete >> /var/log/backup.log 2>&1

# Begin creating working directory to clone into
/bin/mkdir -p /data/git/git-backup/working >> /var/log/backup.log 2>&1
/usr/bin/git clone /data/git/$1 /data/git/git-backup/working >> /var/log/backup.log 2>&1

# Archive working directory into a dated, encrypted tar file named after the repo
/bin/tar -czvf - /data/git/git-backup/working | /usr/bin/openssl enc -aes-256-cbc -pass pass:abcABC123 -e | dd of=/data/git/git-backup/$1-$todaysdate.tar.gz.enc >> /var/log/backup.log 2>&1

# Remove working directory
/bin/rm -rf /data/git/git-backup/working >> /var/log/backup.log 2>&1

FYI if you ever needed to decrypt the openssl encrypted backup archive, the command below should do the job :

openssl aes-256-cbc -d -pass pass:abcABC123 -in reponame-YYYY-MM-DD.tar.gz.enc -out decrypted.tar.gz
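
As an aside, git bundle gets close to an svnadmin dump style export : it packs all refs and objects into one self-contained file that can later be cloned from. A rough sketch of the same backup using a bundle (untested, same hypothetical paths as above) :

cd /data/git/$1
/usr/bin/git bundle create /tmp/repo.bundle --all
/usr/bin/openssl enc -aes-256-cbc -pass pass:abcABC123 -e -in /tmp/repo.bundle -out /data/git/git-backup/$1-`date "+%Y-%m-%d"`.bundle.enc
/bin/rm -f /tmp/repo.bundle

To restore, decrypt as above and then run : git clone restored.bundle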

Clone a XEN VPS server that resides on a LVM / Logical Volume Manager

Hello!

We thought it would be important to share this information as it might be interesting to someone who wants to replicate the same VPS across many instances in order to create a farm of web servers (for example).

This uses very similar concepts to our LVM XEN backup post a while back.

Step 1: Take a snapshot of the current VPS

This is simple. Use the lvcreate command with the -s option to create a snapshot of the running VPS. The -L size is the space reserved for copy-on-write changes while the snapshot exists; matching the full size of the VPS is the safe choice. We assume your VPS is 5GB in size, so just replace that with however large your VPS is :

lvcreate -s -L 5GB -n snapshot_name /dev/VolGroup00/running_vps_image

Step 2: Create your new VPS

This is important. You want to create a new vps, assign a MAC and IP address first and let the creation process fully finish. Then shut the VPS down.
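
With the classic xm toolstack, for example, that shutdown step might look like this (assuming the new guest is named new_vps) :

xm shutdown new_vps
xm list        # confirm the guest is no longer running before copying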

Step 3: Copy the snapshot to the new VPS

All you have to do is use the dd command to transfer the snapshot image to the newly created VPS image :

dd if=/dev/VolGroup00/snapshot_name of=/dev/VolGroup00/new_vps_image

All done! Don’t forget to remove the snapshot after you’re done with it :

lvremove -f /dev/VolGroup00/snapshot_name

Start up the new VPS and you should have a carbon copy of the previous one!

Centralized remote backup script with SSH key authentication

Greetings,

It has been a while since we posted any useful tidbits for you, so we have decided to share one of our quick & dirty centralized backup scripts.

The script relies on ssh key based authentication, described here on this blog. It essentially parses a configuration file where each variable is separated by a comma and colon, as in the example config here :

hostname1,192.168.1.1,etc:var:root
hostname2,192.168.1.2,etc:var:root:usr

Note the intended backup directories in the 3rd field, separated by colons. Simply populate the backup-hosts.txt config file (located in the same folder as the script) with all the hosts you want backed up.

The script then sshes to each host and streams a tar -czf archive (securely) over ssh, writing the output to the destination of your choice. Ideally you should centralize this script on a box that has direct access to a lot of disk space.

Find the script here :

#!/bin/sh
# Centralized Linux Backup Script
# By Star Dot Hosting , www.stardothosting.com
# Uses SSH Key based authentication and remote ssh commands to tar.gz folders to iSCSI storage


todaysdate=`date "+%Y-%m-%d %H:%M:%S"`
backupdest="/backups/linux-backups"

echo "Centralized Linux Backup: " $todaysdate > /var/log/linux-backup.log
echo -e "----------------------------------------------" >> /var/log/linux-backup.log
echo -e >> /var/log/linux-backup.log


for obj0 in $(cat /usr/local/bin/backup-hosts.txt | grep -v "#" | awk -F "," '{printf "%s\n", $2}');
do
        backupname=`cat /usr/local/bin/backup-hosts.txt | grep -v "#" | grep $obj0 | awk -F "," '{printf "%s\n", $1}'`

        for obj1 in $(cat /usr/local/bin/backup-hosts.txt | grep -v "#" | grep $obj0 | awk -F "," '{printf "%s\n", $3}' | awk '{gsub(":","\n"); printf "%s", $0}');
        do
                echo -e "backing up $obj0 with $obj1 directory" >> /var/log/linux-backup.log
                # overwrite (not append) the archive, and keep ssh's stderr out of the tarball
                ssh -l root $obj0 "(cd /$obj1/ && tar -czf - .)" > $backupdest/$backupname.$obj1.tar.gz 2>> /var/log/linux-backup.log
                if [ "$?" -ne 0 ]
                then
                        echo -e "There were some errors while backing up $obj0 / $backupname within the $obj1 directory" >> /var/log/linux-backup.log
                        #exit 1
                else
                        echo -e "Backup completed on $obj0 / $backupname while backing up $obj1 directory" >> /var/log/linux-backup.log
                fi
        done
done

echo "Backup Script Completed." >> /var/log/linux-backup.log
cat /var/log/linux-backup.log | mail -s "Centralized Backup Complete" topsoperations@topscms.com

You could modify this script to keep dated daily backups, pruned so that only X days’ worth are retained (i.e. only 7 days). There is a lot you can do here.
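
For instance, dating each archive and letting find prune the old ones is a small change (a sketch, using the same $backupdest) :

# inside the loop, write dated archives instead :
ssh -l root $obj0 "(cd /$obj1/ && tar -czf - .)" > $backupdest/$backupname.$obj1.`date "+%Y-%m-%d"`.tar.gz 2>> /var/log/linux-backup.log

# once per run, prune anything older than 7 days :
/usr/bin/find $backupdest -type f -mtime +7 -delete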

If you have a handful of linux or bsd servers that you would like to back up in a centralized location, without having to maintain an individual script on each server, then perhaps you could use or modify this script to suit your needs.

I hope this helps.

Automatically Deploy Debian Load Balancers with bash scripting

In yet another post in our automation series, we will share a bash script that automates the deployment of debian based load balancers (specifically with LVS, the Linux Virtual Server project).

Even though the environments and systems you deploy may grow more complicated, as with load balancers, there is always a baseline to which these systems can be brought before further configuration and customization is done.

There are many things that can be automated in this process, as you will see in the script below. In most round-robin load balancing scenarios, little configuration is needed beyond what this script does.
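
For context, the round-robin balancing itself boils down to a couple of ipvsadm calls once LVS is in place; a minimal sketch with illustrative addresses :

# define a virtual HTTP service with round-robin scheduling
ipvsadm -A -t 192.168.1.100:80 -s rr
# attach two real servers behind it, using NAT / masquerade forwarding
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -m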

Obviously you will likely need to modify the script to suit your needs and requirements for the organization and standards therein.

Hopefully this will help you roll out many debian load balancers! May the load be split evenly between all your systems 😉

#!/bin/sh
# Debian LVS deployer script
# Version 1.0

PROGNAME="$0"
VERSION="1.0"

# working directory for deployer process.
WORKDIR="/root"

# tasks left (this is updated every step to accommodate recovery during
# the deployer process)
TASKS="./deploy-lvs.tasks"

init_tasks() {
	# This function will write a new tasks file.
	# it's called from the main body of the script if a tasks file does not exist.
	# (the task list was omitted here; populate it with your step functions, one per line)
	cat > $TASKS <<EOF
EOF
	echo `hostname` > /etc/hostname
	echo `hostname` > /etc/mailname
	return 0
}

usage() {
	echo "[+] Usage: $PROGNAME"
	echo
	return 0
}

###############################
### MAIN SCRIPT STARTS HERE ###
###############################

# installer_splash
installer_splash

# fix working dir.
cd $WORKDIR

# does our installer file exist? if not, initialize it.
if [ ! -f $TASKS ]
then
	echo "[+] No task file found, installation will start from beginning."
	init_tasks
	if (($? != 0))
	then
		echo "[!] ERROR: Cannot create tasks file. Installation will not continue."
		exit 1
	fi
else
	echo "[+] Tasks file located - starting where you left off."
fi

# start popping off tasks from the task list and running them.
# pop first step off of the list
STEP=`head -n 1 $TASKS`
while [ ! -z "$STEP" ]
do
	# execute the function.
	echo -e "\n\n###################################"
	echo "[+] Running step: $STEP"
	echo -e "###################################\n\n"
	$STEP
	if (($? != 0))
	then
		# command failed.
		echo "[!] ERROR: Step $STEP failed!"
		echo "    Installation will now abort - you can pick it up after fixing the problem"
		echo
		exit 1
	fi
	# throw up a newline just so things don't look so crowded
	echo
	# remove function from function list.
	perl -pi -e "s/$STEP\n?//" $TASKS || exit 1
	STEP=`head -n 1 $TASKS`
done

# clean_up_and_reboot
echo "[+] Installation finished - cleaning up."
clean_up_and_reboot

# script is done now - termination should happen with clean_up_and_reboot.
echo "[!] Should not be here!"
exit 1

Automatically Deploy Debian Firewalls with bash scripting

Automation is as necessary as any other aspect of systems administration in any critical or production environment where growth and scalability are moving at a significant pace.

Growth in any organization is obviously a good thing. From the systems administrator’s perspective, however, growth can mean more time spent deploying systems and less time spent focusing on other duties.

Automating the server deployment process is the natural next step when your organization has grown to a point where time efficiency becomes more relevant and noticeable to your business owners.

This is the first in a series of posts here where we will explain and share shell scripts that automate the deployment process of several key debian linux based systems. These scripts automate the patching, configuration and implementation of said systems.

They will certainly have to be modified to fit your organization’s needs and standards, but hopefully they will give you a starting point for your automation / roll-out policies.

Making your life easier and more automated is always a good thing! 😉

#!/bin/sh
# Debian FW deployer script
# Version 1.0

PROGNAME="$0"
VERSION="1.0"

# working directory for deployer process.
WORKDIR="/root"

# tasks left (this is updated every step to accommodate recovery during
# the deployer  process)
TASKS="./deploy-fw.tasks"

init_tasks() {
	# This function will write a new tasks file.
	# it's called from the main body of the script if a tasks file does not exist.
	# (the task list was omitted here; populate it with your step functions, one per line)
	cat > $TASKS <<EOF
EOF
	echo `hostname` > /etc/hostname
	echo `hostname` > /etc/mailname
	echo "done."
	return 0
}

usage() {
	echo "[+] Usage: $PROGNAME"
	echo
	return 0
}

###############################
### MAIN SCRIPT STARTS HERE ###
###############################

# installer_splash
installer_splash

# fix working dir.
cd $WORKDIR

# does our installer file exist? if not, initialize it.
if [ ! -f $TASKS ]
then
	echo "[+] No task file found, installation will start from beginning."
	init_tasks
	if (($? != 0))
	then
		echo "[!] ERROR: Cannot create tasks file. Installation will not continue."
		exit 1
	fi
else 
	echo "[+] Tasks file located - starting where you left off."
fi

# start popping off tasks from the task list and running them.
# pop first step off of the list
STEP=`head -n 1 $TASKS`
while [ ! -z "$STEP" ]
do
	# execute the function.
	echo -e "nn###################################"
	echo "[+] Running step: $STEP"
	echo -e "###################################nn"
	$STEP
	if (($? != 0))
	then
		# command failed.
		echo "[!] ERROR: Step $STEP failed!"
		echo "    Installation will now abort - you can pick it up after fixing the problem"
		echo
		exit 1
	fi
	# throw up a newline just so things don't look so crowded
	echo
	# remove function from function list.
	perl -pi -e "s/$STEP\n?//" $TASKS || exit 1
	STEP=`head -n 1 $TASKS`
done

# clean_up_and_reboot
echo "[+] Installation finished - cleaning up."
clean_up_and_reboot

# script is done now - termination should happen with clean_up_and_reboot.
echo "[!] Should not be here!"
exit 1
