How to detect and mitigate DoS (Denial of Service) Attacks

Greetings,

With a very busy site, even a hefty web stack does not always have enough capacity to absorb a significant surge in artificial (DoS) requests. Detecting and analyzing a denial of service attack is an important and time-sensitive exercise, and it determines the mitigating steps you may need to take to reduce or null route the offending traffic.

These steps are very basic and use the everyday system utilities and tools found in vanilla Linux installations. The idea is to use these tools to identify connection and request patterns.

I’m going to assume that the suspected attack is aimed straight at port 80 (HTTP), which is the most common scenario. A typical DoS attack simply generates thousands of requests to a particular page, or perhaps just to the homepage.

Check your Process and Connection Counts

Beyond the initial reports of slow site performance or MySQL errors such as “The MySQL server has gone away”, it is a good idea to get a picture of how overworked your system currently is.

Count how many apache/httpd processes you currently have to get an idea :

# ps auxw | grep httpd | wc -l
96

Check what the CPU load is currently on the server :

# w
 15:41:52 up 74 days, 17:05,  1 user,  load average: 6.36, 9.89, 8.68

So you can see the load is quite high and 96 Apache processes have been spawned. Looks to be quite a busy server! You should take it a step further and identify how many open port 80 (HTTP) connections you have :

# netstat -nap | grep ":80 " | wc -l
1627

So that’s quite a significant number of HTTP connections on this server. It could be a substantial DoS attack, especially when you consider that this may be one server in a 3-server load balanced pool, meaning the numbers here are effectively tripled.

Still, it could be legitimate traffic! The spike could be attributed to a link on reddit, or perhaps the site was mentioned on a popular news site. We need to look at the incoming requests to determine whether the traffic is organic or artificial. Artificial traffic typically consists of thousands upon thousands of identical requests, possibly coming from a series of IP addresses.

How distributed a DoS attack is depends largely on the skill and resources of the offending party. If it is a simple DoS, hopefully it will be easily identifiable.

Let’s take a closer look at the open connections and count how many are held by each individual IP address. That may help identify whether the majority of the traffic comes from a single source or a small set of sources. Keep this information aside once the analysis is complete, so it can be used to report and block the traffic and ultimately mitigate the attack.

# /bin/netstat -ntu | grep ":80" | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n | grep -v 127.0.0.1 | awk '{if ($1 > 45) print $2;}'

What the above line essentially does is scan the open connections to port 80 and print only the IP addresses that have more than 45 open connections. This number can obviously be adjusted to whatever you like. Take a look at the different results and see what it produces.

For potentially offending IP addresses, run a whois lookup and see where they originate from. Are they from a country that typically isn’t interested in your site? If the site is an English-language site about local school programs in a small North American city, chances are someone from China or Russia has little legitimate interest in it.
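
For example, a quick loop over the list of suspect addresses will pull the originating network and country for each one. This is only a rough sketch; it assumes you redirected the output of the netstat one-liner above into a file called offending_ips.txt :

#!/bin/sh
# Whois each suspect IP and show the fields that identify its origin.
# Assumes offending_ips.txt contains one IP address per line.
for ip in $(cat offending_ips.txt)
do
        echo "=== $ip ==="
        whois $ip | grep -iE "country|orgname|netname"
done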

Analyze the Requests

In this scenario, we would be analyzing the Apache access logs in order to get a clearer picture of what exactly is generating the DoS. The Apache access logs are a great resource for the source IP, request URI and other useful information that may help identify an automated tool such as LOIC, or perhaps a botnet, sending thousands of identical requests to your server.

Let’s filter out the actual GET requests from the Apache logs, sort them and count each request in order to show which URIs are being requested the most. If we cross-reference this information with the connection stats we gathered earlier, we should have a clear picture of who is conducting the attack and how they are doing it.

cat access_log | awk -F "\"" '{printf "%s\n", $2}' | sed -e 's/GET //' | awk -F " " '{printf "%s\n", $1}' | sort | uniq -c | sort -n | awk '{if ($1 > 45) print $2;}' | more

This command filters the GET requests out of the log, cleans them up, sorts them, counts the duplicate requests, sorts by the number of requests, and prints the URIs requested more than 45 times.

Again, the 45 in the last portion of the command can be changed to whatever you feel is appropriate. It all depends on what a normal request load looks like, what is legitimate traffic and how busy your server normally is.
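
A similar one-liner against the same access log will show requests per source IP, which is handy for cross-referencing with the netstat output. This is a sketch that assumes the default common/combined log format, where the client address is the first field :

cat access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{if ($1 > 45) print $2;}' | more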

All the data you have gathered thus far should be enough to complete a preliminary investigation into your DoS attack. As for mitigation, there are many options. I won’t go too deep into it because that could be considered a completely separate topic, but I’ll give some suggestions either way :

Block offending IPs with IPTABLES :

iptables -A INPUT -s 1.2.3.4 -j DROP
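
If you trust the list of offenders you identified earlier, the same netstat one-liner can feed iptables directly. Below is a minimal sketch under that assumption; review the list first, since blocking a legitimate proxy or your own load balancer would hurt more than the attack itself :

#!/bin/sh
# Block every remote IP that currently holds more than 45 connections to port 80.
# The threshold (45) and the decision to block at all are judgement calls.
for ip in $(/bin/netstat -ntu | grep ":80" | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n | grep -v 127.0.0.1 | awk '{if ($1 > 45) print $2;}')
do
        echo "Blocking $ip ..."
        iptables -A INPUT -s $ip -j DROP
done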

Use software layer mitigation such as mod_evasive or mod_security to reduce the ability of attackers to generate significant numbers of requests. Most importantly of all, use your best judgement!

SVN Offsite Backup Script : Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Over our years of working with the SVN code repository system, we have developed a quick shell script to dump the entire SVN repository, encrypt it, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3, which you can find here.

Note that this script also keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might seem redundant since all revisions are kept within SVN itself, but it provides an additional layer of backup redundancy. The script can easily be modified to keep only a single backup file, overwriting the older copy every night.

Here’s the script :

#!/bin/sh
# SVN Off Site Backup script
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
threedays=`date -v-5d "+%Y-%m-%d"`   # note: not used below; -v is BSD date syntax
todaysdate=`date "+%Y-%m-%d"`

export AWS_ACCESS_KEY_ID="YOUR-S3-KEY-ID"
export AWS_SECRET_ACCESS_KEY="YOUR-S3-ACCESS-KEY"


echo "SVN Offsite Backup Log: " $currentmonth > /var/log/offsite-backup.log
echo -e "--------------------------------------------" >> /var/log/offsite-backup.log
echo -e "" >> /var/log/offsite-backup.log

# Archive Repository Dump Files and remove files older than 7 days
/usr/bin/find /subversion/svn_backups -type f -mtime +7 -delete

# Backup SVN and encrypt it
svnadmin dump /subversion/repo_name | /usr/bin/openssl enc -aes-256-cbc -pass pass:YOUR-ENCRYPTION-PASSWORD -e > /subversion/svn_backups/repo_name-backup-$todaysdate.enc

#fyi to decrypt :
#openssl aes-256-cbc -d -pass pass:YOUR-ENCRYPTION-PASSWORD -in repo_name-backup.enc -out decrypted.dump

# Transfer the files to Amazon S3 Storage via HTTPS
/usr/local/bin/ruby /usr/local/bin/s3sync/s3sync.rb --ssl -v --delete -r /subversion/svn_backups S3_BACKUPS:svn/svnbackup >> /var/log/offsite-backup.log 2>&1

if [ "$?" -eq 1 ]
then
        echo -e "***SVN OFFSITE BACKUP JOB, THERE WERE ERRORS***" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job failed" your@email.com
        exit 1
else
        echo -e "Script Completed Successfully!" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job Completed" your@email.com
        exit 0
fi

Note that I have provided an example, commented out within the script, of how to decrypt the encrypted SVN dump file. You can also modify this script to back up to any offsite location: just remove the s3sync related entries and replace them with rsync or your preferred transport method.
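
For example, a nightly cron entry and an rsync-based replacement for the s3sync line might look something like this (the schedule, script path and destination host are assumptions) :

# run the offsite backup every night at 2am
0 2 * * * /bin/sh /usr/local/bin/svn-offsite-backup.sh > /dev/null 2>&1

# rsync over SSH as an alternative transport to s3sync
/usr/bin/rsync -avz -e ssh /subversion/svn_backups/ backupuser@backuphost:/backups/svn/ >> /var/log/offsite-backup.log 2>&1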

I hope this makes your life easier!

Web Designers in Toronto : 30% OFF All Web Design Services!

You’re in luck!

Our web design division is offering an amazing deal for anyone looking to kick off their design (or re-design) project.

Shift8web is offering 30% off all web design services through the Kijiji coupon service. Click the link below for more information :

30% Off Web Design

SVN Pre Commit Hook : Sanitize your Code!

Hello,

Dealing with several different development environments can be tricky. With SVN specifically, it is ideal to have some “pre-flight” checks in order to make sure some basic standards have been followed.

Some of the things you would want to check might be :

– Does the code generate a fatal PHP error?
– Are there any syntax errors?
– Has a valid commit message been attached to the commit?

I thought I’d share the pre-commit hook from one of our SVN code repositories so that you can use it and perhaps expand on it with additional checks specific to your code environment. Feel free to share any improvements you make!

#!/bin/bash
# pre-commit hooks
# www.stardothosting.com

REPOS="$1"
TXN="$2"

PHP="/usr/bin/php"
SVNLOOK="/usr/bin/svnlook"
AWK="/usr/bin/awk"
GREP="/bin/egrep"
SED="/bin/sed"

# Make sure that the commit message is not empty
SVNLOOKOK=1
$SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" > /dev/null || SVNLOOKOK=0

if [ $SVNLOOKOK = 0 ]; then
        echo -e "Empty commit messages are not allowed. Please provide a descriptive comment when committing code." 1>&2
        exit 1
fi

# Make sure the commit message is more than 5 characters long.
LOGMSG=$($SVNLOOK log -t "$TXN" "$REPOS" | grep "[a-zA-Z0-9]" | wc -c)

if [ "$LOGMSG" -le 5 ]; then
        echo -e "Please provide a verbose comment when committing changes." 1>&2
        exit 1
fi


# Check for PHP parse errors
CHANGED=`$SVNLOOK changed -t "$TXN" "$REPOS" | $GREP "^[UA]" | $AWK '{print $2}' | $GREP "\.php$"`

for FILE in $CHANGED
do
    MESSAGE=`$SVNLOOK cat -t "$TXN" "$REPOS" "$FILE" | $PHP -l`
    if [ $? -ne 0 ]
    then
        echo 1>&2
        echo "-----------------------------------" 1>&2
        echo "PHP error in: $FILE:" 1>&2
        echo `echo "$MESSAGE" | $SED "s| -| $FILE|g"` 1>&2
        echo "-----------------------------------" 1>&2
        exit 1
    fi
done

exit 0
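
To activate the hook, place it in the repository's hooks directory and make it executable; SVN will silently skip a hook that is not executable. The repository path below is just an example :

cp pre-commit /subversion/repo_name/hooks/pre-commit
chmod +x /subversion/repo_name/hooks/pre-commit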

Add your Dynamic IPs to Apache HTACCESS files

Hello!

We threw together a quick & simple script to dynamically update your .htaccess files within Apache, adding your dynamic IP address to the allow / deny fields.

If you’re looking to password protect an admin area (for example) but your office only has a dynamic IP address, then this script might be handy for you.

It’s an extremely simple script that polls your dynamic hostname (if you use no-ip.org or dyndns.org, for example) every 15 minutes as a cron job and, if the IP has changed, updates the .htaccess file.

Hopefully it will make your life just a little bit easier 🙂

Sample Cron entry :

*/15 * * * * /bin/sh /usr/local/bin/htaccessdynamic.sh yourhostname.dyndns.org /var/www/website.com/public_html/.htaccess > /dev/null 2>&1

And now the script :

#!/bin/bash
# Dynamic IP .htaccess file generator
# Written by Star Dot Hosting
# www.stardothosting.com

dynDomain="$1"
htaccessLoc="$2"

dynIP=$(/usr/bin/dig +short $dynDomain)

echo "dynip: $dynIP"
# verify dynIP resembles an IP
if ! echo -n $dynIP | grep -Eq "[0-9.]+"; then
    exit 1
fi

# if dynIP has changed
if ! cat $htaccessLoc | /bin/grep -q "$dynIP"; then

        # grab the old IP
        oldIP=`cat /usr/local/bin/htold-ip.txt`

        # output .htaccess file
        echo "order deny,allow" > $htaccessLoc 2>&1
        echo "allow from $dynIP" >> $htaccessLoc 2>&1
        echo "allow from x.x.x.x" >> $htaccessLoc 2>&1
        echo "deny from all" >> $htaccessLoc 2>&1

        # save the new ip to remove next time it changes, overwriting previous old IP
        echo $dynIP > /usr/local/bin/htold-ip.txt
fi

Automated Amazon EBS snapshot backup script with 7 day retention

Hello there!

We have recently been implementing several different backup strategies for properties that reside on the Amazon cloud platform.

These strategies include scripts that incorporate s3sync and s3fs for offsite or redundant “limitless” backup storage. One of the more recent strategies we have implemented for several clients is an automated Amazon EBS volume snapshot script that keeps only 7 days of retention on all snapshot backups.

The script itself is fairly straightforward, but it took several dry runs to fine-tune it so that it would reliably create the snapshots and, more importantly, clear out snapshots older than 7 days.

You can see the for loop that deletes the older snapshots. It works by parsing each snapshot’s date, converting the date to a pure numeric (epoch) value and comparing that value to a “7 days ago” cutoff, as sketched below.
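
Stripped down to its essentials, the date comparison works along these lines (a standalone sketch using GNU date, with a made-up snapshot date) :

#!/bin/sh
# Minimal illustration of the 7 day retention check used in the full script below.
cutoff=`date +%s --date '7 days ago'`         # epoch seconds, 7 days ago
snap_date="2012-01-15"                        # hypothetical snapshot start date
snap_epoch=`date --date="$snap_date" +%s`     # snapshot date as epoch seconds

if [ "$snap_epoch" -le "$cutoff" ]; then
        echo "snapshot is older than 7 days - delete it"
else
        echo "snapshot is within 7 days - keep it"
fi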

Take a look at the full script below; hopefully it will be useful to you! It could use more error checking, but that should be fairly easy to add.

#!/bin/sh
# EBS Snapshot volume script
# Written by Star Dot Hosting
# www.stardothosting.com

# Constants
ec2_bin="/opt/aws/bin"
my_cert="/opt/aws/cert.txt"
my_key="/opt/aws/key.txt"
instance_id=`wget -q -O- http://169.254.169.254/latest/meta-data/instance-id`

# Dates
datecheck_7d=`date +%Y-%m-%d --date '7 days ago'`
datecheck_s_7d=`date --date="$datecheck_7d" +%s`

# Get all volume info and copy to temp file
$ec2_bin/ec2-describe-volumes -C $my_cert -K $my_key  --filter "attachment.instance-id=$instance_id" > /tmp/volume_info.txt 2>&1


# Get all snapshot info
$ec2_bin/ec2-describe-snapshots -C $my_cert -K $my_key | grep "$instance_id" > /tmp/snap_info.txt 2>&1

# Loop to remove any snapshots older than 7 days
for obj0 in $(cat /tmp/snap_info.txt)
do

        snapshot_name=`cat /tmp/snap_info.txt | grep "$obj0" | awk '{print $2}'`
        datecheck_old=`cat /tmp/snap_info.txt | grep "$snapshot_name" | awk '{print $5}' | awk -F "T" '{printf "%s\n", $1}'`
        datecheck_s_old=`date "--date=$datecheck_old" +%s`

#       echo "snapshot name: $snapshot_name"
#       echo "datecheck 7d : $datecheck_7d"
#       echo "datecheck 7d s : $datecheck_s_7d"
#       echo "datecheck old : $datecheck_old"
#       echo "datecheck old s: $datecheck_s_old"

        if (( $datecheck_s_old <= $datecheck_s_7d ));
        then
                echo "deleting snapshot $snapshot_name ..."
                $ec2_bin/ec2-delete-snapshot -C $my_cert -K $my_key $snapshot_name
        else
                echo "not deleting snapshot $snapshot_name ..."

        fi

done


# Create snapshot
for volume in $(cat /tmp/volume_info.txt | grep "VOLUME" | awk '{print $2}')
do
        description="`hostname`_backup-`date +%Y-%m-%d`"
        echo "Creating Snapshot for the volume: $volume with description: $description"
        $ec2_bin/ec2-create-snapshot -C $my_cert -K $my_key -d $description $volume
done
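
A nightly cron entry along these lines will keep the rolling 7 day window ticking over (the script path and schedule are assumptions) :

30 2 * * * /bin/sh /opt/aws/ebs-snapshot.sh >> /var/log/ebs-snapshot.log 2>&1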

Managed VPS hosting services from Star Dot Hosting

Click here for a free quote for managed vps hosting!

Hello,

I thought I’d update our blog to let everyone know that we offer our extensive managed services catalog on our VPS infrastructure!

If you are looking for high quality managed services from systems administrators with extensive experience, then contact our sales department for a consultation / quotation!

Find below just a SAMPLE of some of the managed services we provide :

Support Services
– 24/7/365 Technical Support (Ticketing system)
– 24/7/365 Alert monitoring on-call rotation pager system
– Online Customer Service Center

Linux Administration
– Installs, reinstalls and updates
– Troubleshooting and configuration
– Apache configuration, optimization & Setup
– Linux X64 installation, optimization and maintenance
– Security optimizations
– Proactive and reactive maintenance

Managed MySQL and MSSQL
– MySQL and MS SQL first time install service
– MySQL maintenance and troubleshooting

24/7/365 System Availability
– Availability monitoring (Ping and Synthetic Transactions)
– Performance monitoring of key server parameters
– Graphing and trending

Security Services
– Routine Security Patching
– Routine Security scanning and auditing
– Penetration testing, SQL Injection
– Quarterly security audit reporting
– Dedicated hardware firewalls for all clients

Hardware Maintenance
– Hardware failure alerting and monitoring

Offsite Backups
– Remote off-site backups per 50GB nightly

Click here for a free quote for managed vps hosting!

Massive Amazon Route53 API Bind Zone Import Script

Hello there,

Occasionally some of our managed services work has us dealing directly with other cloud providers such as Amazon. One of our clients set a requirement to migrate over 5,000 domains to Amazon’s Route53 DNS service.

There was little doubt that this could be automated, but since we had never done a deployment this massive through Amazon’s API directly, we thought it might be interesting to post the process as well as the script we used to manage the import.

Essentially, the script loops through a master domain name list file. For each domain in the list, it locates the corresponding bind zone file and imports it into Amazon’s Route53 via the cli53 tool.

One final note: the script outputs all completed domain imports into a CSV file with the following format :

domain.com,ns1.nameserver.com,ns2.nameserver.com,ns3.nameserver.com,ns4.nameserver.com

This is because the nameservers Route53 assigns to each domain during import are effectively randomly generated, so the script has to keep track of these nameserver/domain associations in order to facilitate the actual nameserver change requests at the registrar.
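
To make the expected layout concrete: if you invoke the script as ./importzone.sh mydomains.txt (the filename is just an example), it reads one domain per line from that file and, as written, looks for each domain's bind zone file under /usr/local/zones/mydomains.txt-zones/, moving the zone file into a complete/ subdirectory once it has been imported :

# mydomains.txt - one domain per line :
#   domain1.com
#   domain2.com
#
# matching zone files the script will look for :
#   /usr/local/zones/mydomains.txt-zones/domain1.com.txt
#   /usr/local/zones/mydomains.txt-zones/domain2.com.txt
#
# run the import :
./importzone.sh mydomains.txt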

The script isn’t perfect and could benefit from some optimizations and more error checking (it does a lot of error checking already, however), but here it is in its entirety. We hope you will have some use for it!

#!/bin/sh
# Import all zone files into amazon
# Star Dot Hosting 2012
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d"`

#sanitize input and verify input was given
command=`echo "$1" | sed 'y/abcdefghijklmnopqrstuvwxyz/ABCDEFGHIJKLMNOPQRSTUVWXYZ/'`

if [ -z "$1" ];
then
        echo "AWS ZONE IMPORT"
        echo "---------------"
        echo ""
        echo "Usage : ./importzone.sh file.txt"
        echo ""
        exit 0
fi


echo "zone import log : $currentmonth" > /var/log/importzone.log 2>&1
echo " " >> /var/log/importzone.log 2>&1



for obj0 in $(cat $1);
do
        echo "checking if $obj0 was already migrated ..."
        ls -la /usr/local/zones/$1-zones/complete | grep -w $obj0 >> /dev/null 2>&1
        if [ "$?" -eq 1 ]
        then
        echo "importing $obj0 ..."

        #check if zone file has NS records
        cat /usr/local/zones/$1-zones/$obj0.txt | grep NS >> /dev/null 2>&1
        if [ "$?" -eq 0 ]
        then
                echo "Nameserver exists, continuing..."
        else
                echo "Adding nameserver to record..."
                echo "$obj0. 43201 IN NS ns1.nameserver.com." >> /usr/local/zones/$1-zones/$obj0.txt
        fi

        #check if zone exists
        /usr/local/zones/cli53/bin/cli53 info $obj0 >> /var/log/importzone.log 2>&1
        if [ "$?" -eq 0 ]
        then
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//g' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        else
                #create on route53
                /usr/local/zones/cli53/bin/cli53 create $obj0 >> /var/log/importzone.log 2>&1
                # grab NAMESERVERS
                nameservers=`/usr/local/zones/cli53/bin/cli53 rrlist $obj0 | grep "NS" | awk -F "NS\t" '{printf "%s\n", $2}' | sed 's/\.$//g' | sed ':a;N;$!ba;s/\n/,/g'`
                # import zone file
                /usr/local/zones/cli53/bin/cli53 import $obj0 -r -f /usr/local/zones/$1-zones/$obj0.txt
                if [ "$?" -eq 0 ]
                then
                        #move to complete folder
                        mv /usr/local/zones/$1-zones/$obj0.txt /usr/local/zones/$1-zones/complete
                else
                        echo "There was an error in importing the zone file!" >> /var/log/importzone.log
                        exit 1
                fi
        fi

        # output domain + nameservers in a CSV with format : domain.com,ns1,ns2,ns3,ns4
        echo "$obj0,$nameservers" >> nameserver_registrar_request.txt 2&>1
        else
                echo "Domain already migrated .. !"
        fi
done