Massive Amazon Route53 API Bind Zone Import Script

Hello there,

Occasionally some of our managed services work has us dealing directly with other cloud providers such as Amazon. One of our clients needed to migrate over 5,000 domains to Amazon’s Route53 DNS service.

There was little doubt that this could be automated, but since we had never done a deployment of this scale through Amazon’s API directly, we thought it might be interesting to post the process, as well as the script with which we managed the import.

Essentially, the script loops through a master domain name list file. Each entry in the master list corresponds to a BIND zone file, which the script imports into Amazon’s Route53 via the cli53 tool.
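For reference, here is a sketch of the layout the script expects; the file names are illustrative, derived from how the script builds its paths from its first argument:

./importzone.sh file.txt                            # master list passed as $1
file.txt                                            # one domain per line, e.g. example.com
/usr/local/zones/file.txt-zones/example.com.txt     # BIND zone file for each listed domain
/usr/local/zones/file.txt-zones/complete/           # imported zone files are moved here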

One final note: the script outputs all completed domain imports into a CSV file with the following format:

domain.com,ns1.nameserver.com,ns2.nameserver.com,ns3.nameserver.com,ns4.nameserver.com

This is because Route53 assigns a random set of nameservers to each zone when it is created, so the script has to keep track of these domain/nameserver associations in order to facilitate the actual nameserver change requests at the registrar afterwards.
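When the time comes to submit those change requests, pulling the nameservers for any single domain back out of the CSV is a one-liner; for example, using the output file name from the script below:

grep '^domain.com,' nameserver_registrar_request.txt | cut -d, -f2-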

The script isn’t perfect and could benefit from some optimizations and more error checking (though it already does a fair amount), but here it is in its entirety. We hope you will have some use for it!

#!/bin/sh
# Import all zone files into amazon
# Star Dot Hosting 2012
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d"`

# verify input was given

if [ -z "$1" ];
then
        echo "AWS ZONE IMPORT"
        echo "---------------"
        echo ""
        echo "Usage : ./importzone.sh file.txt"
        echo ""
        exit 0
fi


echo "zone import log : $currentmonth" > /var/log/importzone.log 2>&1
echo " " >> /var/log/importzone.log 2>&1



for obj0 in $(cat "$1");
do
        echo "checking if $obj0 was already migrated ..."
        if [ ! -f "/usr/local/zones/$1-zones/complete/$obj0.txt" ]
        then
        echo "importing $obj0 ..."

        #check if zone file has NS records
        grep -qw NS "/usr/local/zones/$1-zones/$obj0.txt"
        if [ "$?" -eq 0 ]
        then
                echo "Nameserver exists, continuing..."
        else
                echo "Adding nameserver to record..."
                echo "$obj0. 43201 IN NS ns1.nameserver.com." >> /usr/local/zones/$1-zones/$obj0.txt
        fi

        #check if zone exists
        /usr/local/zones/cli53/bin/cli53 info $obj0 >> /var/log/importzone.log 2>&1
        if [ "$?" -eq 0 ]
        then
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//g' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        else
                #create on route53
                /usr/local/zones/cli53/bin/cli53 create $obj0 >> /var/log/importzone.log 2>&1
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//g' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        fi

        # output domain + nameservers in a CSV with format: domain.com,ns1,ns2,ns3,ns4
        echo "$obj0,$nameservers" >> nameserver_registrar_request.txt
        else
                echo "Domain already migrated .. !"
        fi
done
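Running it is straightforward. A typical session might look like the following, tailing the log in a second terminal to watch progress and spot-checking a finished zone with the same cli53 rrlist command the script itself uses:

./importzone.sh file.txt
tail -f /var/log/importzone.log
/usr/local/zones/cli53/bin/cli53 rrlist domain.com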

Amazon S3 Backup Script with Encryption

With the advent of cloud computing, there have been several advances in commercial cloud offerings, most notably Amazon’s EC2 computing platform as well as their S3 storage platform.

Backing up to Amazon S3 has become a popular way of achieving true offsite backups for many organizations.

The fast data transfer speeds as well as the low cost of storage per gigabyte make it an attractive option.

There are several free software solutions that offer the ability to connect to S3 and transfer files. The one that shows the most promise is s3sync.

There are already a few guides that show you how to implement s3sync on your system.

The good thing is that this can be implemented on Windows, Linux, and FreeBSD, among other operating systems.

We have written a simple script that utilizes the s3sync program in a scheduled offsite backup scenario. Find our script below, and modify it as you wish. Hopefully it will help you get your data safely offsite 😉

#!/bin/sh
# OffSite Backup script

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`

export AWS_ACCESS_KEY_ID="YOUR-ACCESS-KEY"
export AWS_SECRET_ACCESS_KEY="YOUR-SECRET-ACCESS-KEY"

echo "Offsite Backup Log: " $currentmonth > /var/log/offsite-backup.log
echo -e "----------------------------------------" >> /var/log/offsite-backup.log
echo -e "" >> /var/log/offsite-backup.log

# Remove archived files older than 3 days
/usr/bin/find /home/offsite-backup-files -type f -mtime +3 -delete

# Compress and archive a few select key folders for archival and transfer to S3
tar -czvf /home/offsite-backup-files/offsite-backup-`date "+%Y-%m-%d"`.tar.gz /folder1 /folder2 /folder3 >> /var/log/offsite-backup.log 2>&1

# Transfer the files to Amazon S3 Storage via HTTPS
/usr/local/bin/ruby /usr/local/bin/s3sync/s3sync.rb --ssl -v --delete -r /home/offsite-backup-files your-node:your-sub-node/your-sub-sub-node >> /var/log/offsite-backup.log 2>&1

# Some simple error checking and email alert logging
if [ "$?" -eq 1 ]
then
        echo -e "***OFFSITE BACKUP JOB, THERE WERE ERRORS***" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "Offsite Backup Job failed" you@yourdomain.com
        exit 1
else
        echo -e "Script Completed Successfully!" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "Offsite Backup Job Completed" your@yourdomain.com
        exit 0
fi
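Since this is meant to run on a schedule, a cron entry along these lines rounds it out; the script path here is an assumption, so adjust it to wherever you saved the script:

# run the offsite backup nightly at 2am (hypothetical path)
0 2 * * * /usr/local/bin/offsite-backup.sh > /dev/null 2>&1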

Now if your data happens to be sensitive (most data is), encrypting it in transit (with the --ssl flag) alone may not be enough.

As an additional measure, you can encrypt the actual file before it is sent to S3. This would be incorporated into the tar command in the above script; that line would look something like this:

/usr/bin/tar -czvf - /folder1 /folder2 /folder3 | /usr/local/bin/gpg --encrypt -r you@yourdomain.com > /home/offsite-backup-files/offsite-backups-`date "+%Y-%m-%d"`.gpg
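To restore from such an archive, decrypt and unpack in a single pipe; this assumes the matching GPG private key is available on the restoring host:

# YYYY-MM-DD is the date stamp of the archive you want to restore
/usr/local/bin/gpg --decrypt /home/offsite-backup-files/offsite-backups-YYYY-MM-DD.gpg | tar -xzvf -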

As an alternative to gpg, you could use openssl to encrypt the data.
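A minimal sketch of that approach, assuming a passphrase kept in a file such as /root/.backup-pass (a hypothetical location; lock it down with chmod 600), would be:

# encrypt with symmetric AES-256 instead of a GPG keypair
/usr/bin/tar -czvf - /folder1 /folder2 /folder3 | openssl enc -aes-256-cbc -salt -pass file:/root/.backup-pass > /home/offsite-backup-files/offsite-backups-`date "+%Y-%m-%d"`.tar.gz.enc

# and to decrypt and unpack later:
openssl enc -d -aes-256-cbc -pass file:/root/.backup-pass < /home/offsite-backup-files/offsite-backups-YYYY-MM-DD.tar.gz.enc | tar -xzvf -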

Hopefully this has been helpful!
