Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating onto one server is fairly obvious: making changes is much easier when you only have one script to edit.

This approach is ideal for environments with roughly 15-20 servers or fewer. For anything larger than that, I’d recommend a complete end-to-end backup solution such as Bacula.

The bash shell script below is very straightforward and takes at least two arguments. The first is the hostname or IP address of the destination server you are backing up. The remaining (and potentially unlimited) arguments are single-quoted folder paths that you want backed up.

This script depends on the server it runs on having SSH key based authentication enabled and implemented for the root user. Security can be tightened with IP based restrictions, either in the SSH configuration, the firewall configuration or elsewhere.

It should be explained further that this script actually connects to the destination server as the root user, using ssh key authentication. It then initiates a remote rsync command on the destination server back to the backup server as a user called “backupuser”. So that means that not only does the ssh key need to be installed for root on the destination servers, but a user called “backupuser” needs to be added on the backup server itself, with the ssh keys of all the destination servers installed for the remote rsync.

Hopefully I did not overcomplicate this, because it really is quite simple :

Backup Server -> root -> destination server to back up -> backupuser rsync -> Backup Server
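
Here is a minimal sketch of what such a script can look like. The backup server hostname and the local backup path are placeholders, so adjust them to your environment :

#!/bin/bash
# centralized-backup.sh -- a minimal sketch of the approach described above
# Usage: ./centralized-backup.sh <destination-host> '<folder>' ['<folder>' ...]
# Assumes root ssh keys are installed on each destination server, and that a
# "backupuser" account exists on this backup server with each destination
# server's key installed for the return rsync.

BACKUP_HOST="$1"                        # hostname or IP of the server to back up
shift                                    # the remaining arguments are folders
BACKUP_SERVER="backup.example.com"       # illustrative: this backup server
BACKUP_ROOT="/backups"                   # illustrative: where backups land locally

if [ -z "$BACKUP_HOST" ] || [ $# -eq 0 ]; then
    echo "Usage: $0 <host> '<folder>' ['<folder>' ...]"
    exit 1
fi

for FOLDER in "$@"; do
    # Connect to the destination server as root and have it rsync the
    # folder back to this backup server as "backupuser".
    ssh root@"$BACKUP_HOST" \
        "rsync -avz --delete '$FOLDER' backupuser@$BACKUP_SERVER:$BACKUP_ROOT/$BACKUP_HOST/"
done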

Once you implement the script and do a few dry run tests, you can set up a scheduled task for each destination server. Here is an example of one cron entry for a server to be backed up :
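
Assuming the sketch above is saved as /usr/local/bin/centralized-backup.sh (a placeholder path and hostname) :

# back up web01 every night at 2:00 AM (root's crontab)
0 2 * * * /usr/local/bin/centralized-backup.sh web01.example.com '/etc' '/var/www'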

How to detect and mitigate DoS (Denial of Service) Attacks

Greetings,

Occasionally, even a very busy site sitting behind a hefty web stack does not have enough capacity to absorb a significant surge of artificial (DoS) requests. Detecting and mitigating denial of service attacks is an important and time sensitive exercise that will determine the next steps you may need to take to reduce or null route the offending traffic.

These steps are very basic and use the everyday system utilities and tools found in vanilla Linux installations. The idea is to use these tools to identify connection and request patterns.

I’m going to assume that your potential or suspected attack is going straight to port 80 (HTTP), which is a common scenario. A typical DoS attack is simply a flood of thousands of requests to a particular page, or perhaps just to the homepage.

Check your Process and Connection Counts

It is a good idea to get a picture of how overworked your system currently is, beyond the initial reports of slow site performance or MySQL errors such as “The MySQL server has gone away”, or anything of the sort.

Count how many apache/httpd processes you currently have to get an idea :
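
Something like the following works; the process name is httpd on RedHat / CentOS (use apache2 on Debian / Ubuntu) :

ps aux | grep httpd | grep -v grep | wc -l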

Check what the CPU load is currently on the server :
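
The load averages from uptime (or top) are all you need here :

# load averages for the last 1, 5 and 15 minutes
uptime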

So you can see the load is quite high and there are 96 apache processes that have spawned. Looks to be quite a busy server! You should take it a step further and perhaps identify how many open port 80 (HTTP) connections you have :
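
A quick netstat count does the trick :

# count established connections to port 80
netstat -an | grep ':80 ' | grep ESTABLISHED | wc -l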

So that’s quite a significant number of HTTP connections on this server. It could be a substantial DoS attack, especially when you consider that this may be one server in a 3 server load balanced pool. That means the numbers here are likely multiplied by three.

Still, it could be legitimate traffic! The spike could be attributed to a link on reddit, or perhaps the site was mentioned on a popular news site. We need to look further at what requests are coming in to be able to determine if perhaps the traffic is organic or artificial. Artificial traffic would typically have thousands and thousands of identical requests, possibly coming from a series of IP addresses.

How distributed a DoS attack is probably depends on the skill and resources of the offending party. If it is a DoS, hopefully it will be easily identifiable.

Let’s take a closer look at the open connections and see how many are open from each individual IP address. That may help identify whether the majority of the traffic is coming from a single source or a small set of sources. This information can be kept aside after the analysis is complete so that we can use it to report and block the traffic and ultimately mitigate the attack.
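
A one-liner along these lines will produce that kind of report; the threshold of 45 connections is explained below and can be adjusted :

# list remote IPs with 45 or more open connections to port 80
netstat -ntu | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | awk '$1 >= 45'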

What the above line essentially does is scan the open connections to port 80 and filter out only the IP addresses that have 45 or more open connections. This number can obviously be adjusted to whatever you like. Take a look at the different results and see what it produces.

For potentially offending IP addresses, whois them and see where they are originating from. Are they from a Country that typically isn’t interested in your site? If the site is an English language site about local school programs in a small North American city, chances are someone from China or Russia has little legitimate interest in the site.

Analyze the Requests

In this scenario, we would be analyzing the apache access logs in order to get a clearer picture of what exactly is generating the DoS. Access logs in apache are a great resource for the source IP, request URI and other useful information that may help identify an automated tool such as LOIC, or perhaps an automated botnet, sending thousands of identical requests to your server.

Let’s filter out the actual GET requests from the apache logs, sort them and count each request in order to show the highest number of requests for the same URI. If we can take this information and cross reference it with the connection stats we gathered earlier, we should have a clear picture of who is conducting the attack and how they are doing it.
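
A command similar to the following will do it; the access log path is a placeholder, so point it at your own virtual host logs :

# show request URIs hit more than 45 times, busiest first
grep 'GET' /var/log/httpd/access_log | awk '{print $7}' | sort | uniq -c | sort -rn | awk '$1 > 45'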

This code filters GET requests from the logs, cleans them up, sorts the results, counts all the duplicate requests, sorts them by highest number of requests, and prints the results.

Again, the 45 in the last portion of the command can be changed to whatever you feel is necessary. It all depends on what a normal request looks like, what legitimate traffic looks like and how busy your server normally is.

All the data you have gathered thus far should be enough to complete a preliminary investigation into your DoS attack. As for mitigation, there are many options. I won’t go too much into it because that could be considered a completely separate topic, but I’ll give some suggestions either way :

Block offending IPs with IPTABLES :
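
A single rule per offending address is enough (1.2.3.4 is a placeholder) :

# drop all traffic from an offending IP
iptables -I INPUT -s 1.2.3.4 -j DROP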

Use software layer mitigation such as mod_evasive or mod_security to reduce the ability of attackers to generate significant numbers of requests. Most importantly of all, use your best judgement!

SVN Offsite Backup Script : Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Throughout our travels and experience utilizing the SVN code repository system, we have developed a quick bash script to export the entire SVN repository, encrypt it, compress it into an archive, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3, which you can find here.

Note that this script also keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might be redundant since all revisions are kept within SVN itself, but I thought it would provide an additional layer of backup redundancy. The script can easily be modified to only back up a single file every night, overwriting the older copy after every backup.

Here’s the script :
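
What follows is a minimal sketch of the approach rather than a drop-in script; the repository path, passphrase file, bucket name and s3sync location are all placeholders, and your AWS keys are assumed to be exported in the environment for s3sync :

#!/bin/bash
# svn-offsite-backup.sh -- minimal sketch of the approach described above
# Assumes svnadmin, gpg and the s3sync ruby script are installed, and that
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are exported for s3sync.

REPO="/var/svn/myrepo"                    # illustrative repository path
BACKUP_DIR="/backups/svn"                 # local retention directory
S3_BUCKET="mybucket:svn-backups"          # illustrative s3sync destination
DATE=$(date +%Y-%m-%d)
DUMPFILE="$BACKUP_DIR/svn-$DATE.dump.gz.gpg"

mkdir -p "$BACKUP_DIR"

# Dump the repository, compress it and encrypt it symmetrically with gpg
# (gpg2 may additionally need --pinentry-mode loopback)
svnadmin dump "$REPO" | gzip | \
    gpg --batch --yes --passphrase-file /root/.svn-backup-pass \
        --symmetric -o "$DUMPFILE"

# To decrypt a dump later, something like:
# gpg --batch --passphrase-file /root/.svn-backup-pass -d svn-YYYY-MM-DD.dump.gz.gpg | gunzip > repo.dump

# Ship the local backup directory to Amazon S3 over an encrypted connection
/usr/local/bin/s3sync.rb --ssl -r "$BACKUP_DIR/" "$S3_BUCKET"

# Keep only the last 7 days of local copies
find "$BACKUP_DIR" -name 'svn-*.dump.gz.gpg' -mtime +7 -delete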

Note how I have provided an example, commented out within the script, of how you can go about decrypting the encrypted SVN dump file. You can also modify this script to back up to any offsite location, obviously. Just remove the s3sync related entries and replace them with rsync or your preferred transport method.

I hope this makes your life easier!

SVN Pre Commit Hook : Sanitize your Code!

Hello,

Dealing with several different development environments can be tricky. With SVN specifically, it is ideal to have some “pre-flight” checks in order to make sure some basic standards have been followed.

Some of the things you would want to check might be :

– Does the code generate a fatal PHP error?
– Are there any syntax errors?
– Have valid commit messages been attached to the code commit?

I thought I’d share the pre-commit hook from one of our SVN code repositories so you can use it and perhaps expand on it to include many more checks. Additional checks specific to your code environment might benefit you. Feel free to share if improvements are made!
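
Here is a minimal sketch of what such a hook can look like; it checks for a non-empty commit message and runs php -l against every changed PHP file. The svnlook path is a placeholder and the checks are intentionally bare-bones :

#!/bin/bash
# pre-commit -- minimal sketch of an SVN pre-commit hook
# Install as <repository>/hooks/pre-commit and make it executable.

REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook

# 1. Require a meaningful (non-empty) commit message
if ! "$SVNLOOK" log -t "$TXN" "$REPOS" | grep -q '[a-zA-Z0-9]'; then
    echo "Commit blocked: please provide a commit message." >&2
    exit 1
fi

# 2. Run a PHP syntax check (php -l) on every added or updated .php file
"$SVNLOOK" changed -t "$TXN" "$REPOS" | awk '/^[AU]/ {print $2}' | grep '\.php$' | \
while read -r FILE; do
    if ! "$SVNLOOK" cat -t "$TXN" "$REPOS" "$FILE" | php -l >/dev/null 2>&1; then
        echo "Commit blocked: PHP syntax error in $FILE" >&2
        exit 1
    fi
done || exit 1

exit 0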

Add your Dynamic IPs to Apache HTACCESS files

Hello!

We threw together a quick & simple script to dynamically update your .htaccess files within apache to add your dynamic IP address to the allow / deny fields.

If you’re looking to password protect an admin area (for example) but your office only has a dynamic IP address, then this script might be handy for you.

It’s an extremely simple script that polls your dynamic hostname (if you use no-ip.org or dyndns.org, for example) every 15 minutes as a cron job and, if it has changed, updates the .htaccess file.

Hopefully it will make your life just a little bit easier 🙂

Sample Cron entry :
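
Something like this in the crontab does the trick (the script path is a placeholder) :

# poll the dynamic hostname every 15 minutes
*/15 * * * * /usr/local/bin/update-htaccess.sh >/dev/null 2>&1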

And now the script :
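
Below is a minimal sketch of the idea; the dynamic hostname, .htaccess path and cache file are placeholders, and the sed line assumes the Apache 2.2 style “Allow from” syntax :

#!/bin/bash
# update-htaccess.sh -- minimal sketch of the idea described above
# Resolves a dynamic DNS hostname and, if the IP has changed since the last
# run, rewrites the "Allow from" line in an .htaccess file.

DYNHOST="myoffice.no-ip.org"                 # your dynamic DNS hostname
HTACCESS="/var/www/html/admin/.htaccess"     # the .htaccess to keep updated
CACHE="/var/tmp/dynip.last"                  # where the last-seen IP is stored

# Resolve the current IP of the dynamic hostname
CURRENT_IP=$(host "$DYNHOST" | awk '/has address/ {print $4; exit}')
[ -z "$CURRENT_IP" ] && exit 1               # bail out if resolution failed

LAST_IP=$(cat "$CACHE" 2>/dev/null)

if [ "$CURRENT_IP" != "$LAST_IP" ]; then
    # Replace the old Allow line with the new IP and remember it
    sed -i "s/^Allow from .*/Allow from $CURRENT_IP/" "$HTACCESS"
    echo "$CURRENT_IP" > "$CACHE"
fi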

Automated Amazon EBS snapshot backup script with 7 day retention

Hello there!

We have recently been implementing several different backup strategies for properties that reside on the Amazon cloud platform.

These strategies include scripts that incorporate s3sync and s3fs for offsite or redundant “limitless” backup storage capabilities. One of the more recent strategies we have implemented for several clients is an automated Amazon EBS volume snapshot script that only keeps 7 day retention on all snapshot backups.

The script itself is fairly straightforward, but it took several dry runs to fine tune it so that it would reliably create the snapshots and, more importantly, reliably clear out snapshots older than 7 days.

You can see the for loop for deleting older snapshots. This is done by parsing snapshot dates, converting the dates to a pure numeric value and comparing said numeric value to a “7 days ago” date variable.

Take a look at the script below; hopefully it will be useful to you! There could be more error checking, but that should be fairly easy to add.
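
The sketch below illustrates the idea using the classic ec2-api-tools; the volume ID is a placeholder, your EC2 credentials are assumed to be set up in the environment, and the column positions are assumed from the standard ec2-describe-snapshots output :

#!/bin/bash
# ebs-snapshot.sh -- minimal sketch of the 7 day retention approach
# Assumes the ec2-api-tools are installed and your EC2 credentials are
# exported in the environment.

VOLUME_ID="vol-xxxxxxxx"                              # placeholder volume
SEVEN_DAYS_AGO=$(date --date='7 days ago' +%Y%m%d)    # numeric cutoff, e.g. 20240101

# Create today's snapshot
ec2-create-snapshot "$VOLUME_ID" -d "Automated backup $(date +%Y-%m-%d)"

# Loop over the existing snapshots for this volume and delete anything with a
# start date older than the cutoff, by reducing the date to a numeric value
ec2-describe-snapshots | grep "$VOLUME_ID" | while read -r LINE; do
    SNAP_ID=$(echo "$LINE" | awk '{print $2}')
    SNAP_DATE=$(echo "$LINE" | awk '{print $5}' | cut -c1-10 | tr -d '-')
    [ -z "$SNAP_DATE" ] && continue
    if [ "$SNAP_DATE" -lt "$SEVEN_DAYS_AGO" ]; then
        ec2-delete-snapshot "$SNAP_ID"
    fi
done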

Patch Scanning / Information Gathering Script for RedHat / CentOS

With all the patch management solutions, local repositories and other options, it is rarely necessary to manually scan all servers on your network to build a “report” of the patch levels in your environment.

Sometimes it is, however. For instance, if you are brought into an environment that has not been properly managed and require some quick audits to evaluate how much actual work needs to be done bringing all the patch levels up to standard, then there are ways to produce these reports with simple bash scripting.

I have developed such a script for similar situations, since quick reporting is sometimes necessary even when you are evaluating a large commercial patch management solution. It can even be implemented to complement such solutions, for independent reporting perhaps.

This script works well when distributed to each server and run via SSH key based authentication for centralized reporting. Alternatively, you could modify it to perform each command over SSH across the network and gather the information that way. It is probably more ideal to centrally distribute the script to each server so that only one SSH command is executed per server.

Find the script below. Note that it only works with RedHat / CentOS systems. Obviously if you are paying for Red Hat enterprise support you are already using Satellite; if you are using CentOS, then this script may be useful for you.
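
A bare-bones version of the idea looks something like this; the security update count assumes the yum security plugin is installed, and the echo lines are the part you would tailor for your report :

#!/bin/bash
# patch-scan.sh -- minimal sketch of a patch level reporting script
# Intended for RedHat / CentOS systems.

HOSTNAME=$(hostname)
KERNEL=$(uname -r)
RELEASE=$(cat /etc/redhat-release)

# Count packages with pending updates (yum check-update lists one per line)
UPDATES=$(yum -q check-update 2>/dev/null | grep -c '^[a-zA-Z0-9]')

# Count pending security updates (requires the yum security plugin)
SECURITY=$(yum -q --security check-update 2>/dev/null | grep -c '^[a-zA-Z0-9]')

echo "Host: $HOSTNAME"
echo "Release: $RELEASE"
echo "Running kernel: $KERNEL"
echo "Packages with updates available: $UPDATES"
echo "Security updates available: $SECURITY"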

Enjoy!

Note that you can modify the echo output to produce whatever output you need in order to present it in a nice human readable report.

Scheduled antivirus scans to prevent viral injections on user generated content

When dealing with high traffic sites, especially media or community based sites, there is always the risk of javascript, virus, XSS or other malicious injections when you give a community of users the ability to upload files to your site.

There are several things to consider when evaluating all “points of entry” that are available to the public, into your systems.

Most content management and community based systems use libraries such as Imagemagick to process images (such as profile pictures) into their proper format and size.

Believe it or not, it is hard to actually inject code or other malicious data into an image in a way that survives this sanitizing process. There are still risks, however. The library version you are running may itself be vulnerable to exploits.

As always, a good rule of thumb is to ensure all possible aspects of your systems are up to date and that you are aware of any security vulnerabilities as they come out so they can either be patched or addressed in some other way.

One thing to consider, especially when dealing with thousands of users and even more uploads, is a scheduled scan of your user uploads using free virus scanning tools such as clamscan. This is an endpoint reporting strategy that can at least cover your ass in the event that something else was missed or a 0day vulnerability was exploited.

It should be noted that the virus scans themselves aren’t intended to protect the Linux systems, but rather to catch the opportunistic ‘spreading’ of compromised images and code that an infected file on a public community based system makes possible.

It’s very simple to implement ClamAV (daemonization is not necessary); clamscan is all we need to execute regular scans at 10, 15, 30 or 60 minute intervals.

Once clamscan is implemented and definitions are updated (with regular update cronjobs in place), you can roll out a script similar to the one we have here to implement the scheduled scans :
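
A sketch of such a script is below; the upload directory, log file and alert address are placeholders, and the scan only reports rather than deletes :

#!/bin/bash
# scan-uploads.sh -- minimal sketch of a scheduled clamscan of user uploads

UPLOAD_DIR="/var/www/html/uploads"            # placeholder upload directory
LOGFILE="/var/log/clamscan-uploads.log"
ALERT="admin@example.com"

# Recursively scan the upload directory and only log infected files
clamscan -r -i "$UPLOAD_DIR" > "$LOGFILE" 2>&1

# clamscan exits 1 when an infection is found; mail the report if so
if [ $? -eq 1 ]; then
    mail -s "clamscan: infected upload found on $(hostname)" "$ALERT" < "$LOGFILE"
fi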

The actual cronjob entry can look something like this :
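
For example, a 30 minute interval (the script path is a placeholder) :

# scan user uploads every 30 minutes
*/30 * * * * /usr/local/bin/scan-uploads.sh >/dev/null 2>&1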

It seems like a rather simple solution, but it does provide a venue for additional sanitizing of user input. In our experience, it is best to only report on anything that clamscan might find. You can, however, tell clamscan to simply delete any suspected infections it finds.

Amazon S3 Backup script with encryption

With the advent of cloud computing, there have been several advances in commercial cloud offerings, most notably Amazon’s EC2 computing platform as well as their S3 storage platform.

Backing up to Amazon S3 has become a popular way for many organizations to achieve true offsite backup capabilities.

The fast data transfer speeds as well as the low cost of storage per gigabyte make it an attractive offer.

There are several free software solutions that offer the ability to connect to S3 and transfer files. The one that shows the most promise is s3sync.

There are already a few guides that show you how to implement s3sync on your system.

The good thing is that this can be implemented on Windows, Linux and FreeBSD, among other operating systems.

We have written a simple script that utilizes the s3sync program in a scheduled offsite backup scenario. Find our script below, and modify it as you wish. Hopefully it will help you get your data safely offsite 😉
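
The sketch below captures the idea; the source directories, staging path, bucket name and s3sync location are placeholders, and your AWS keys are assumed to be exported for s3sync :

#!/bin/bash
# s3-backup.sh -- minimal sketch of a scheduled offsite backup to S3
# Assumes the s3sync ruby script is installed and AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY are exported in the environment.

SOURCE_DIRS="/etc /var/www /home"             # placeholder directories to back up
BACKUP_DIR="/backups/s3-staging"              # local staging directory
S3_BUCKET="mybucket:server-backups"           # illustrative s3sync destination
DATE=$(date +%Y-%m-%d)

mkdir -p "$BACKUP_DIR"

# Create a dated compressed archive of the source directories
tar -czf "$BACKUP_DIR/backup-$DATE.tar.gz" $SOURCE_DIRS

# Push the staging directory to Amazon S3 over an encrypted connection
/usr/local/bin/s3sync.rb --ssl -r "$BACKUP_DIR/" "$S3_BUCKET"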

Now if your data happens to be sensitive (it usually is), encrypting the data during transit (with the --ssl flag) is not always enough.

As an alternative, you can encrypt the actual file before it is sent to S3. This would be incorporated into the tar command in the above script. That line would look something like this :
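
Using the same variable names as the sketch above, with a placeholder passphrase file :

# encrypt the archive with gpg as it is created
tar -cz $SOURCE_DIRS | gpg --batch --yes --passphrase-file /root/.backup-pass \
    --symmetric -o "$BACKUP_DIR/backup-$DATE.tar.gz.gpg"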

As an alternative to gpg, you could use openssl to encrypt the data.
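
With the same placeholder paths, that variation would look something like this :

# same idea using openssl instead of gpg (decrypt later with the -d flag)
tar -cz $SOURCE_DIRS | openssl enc -aes-256-cbc -salt \
    -pass file:/root/.backup-pass -out "$BACKUP_DIR/backup-$DATE.tar.gz.enc"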

Hopefully this has been helpful!

Compress files and folders over the network without using rsync

The following command ssh’s to your remote server, tar + gzips a directory, and then outputs the compressed stream to your local machine.
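
With a placeholder hostname and paths, it looks like this :

# tar + gzip a remote directory and save the compressed stream locally
ssh user@remote.example.com "tar -czf - /var/www" > www-backup.tar.gz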

This is a good alternative to rsync. Even though rsync can compress the data in transit, the receiving end still stores a full uncompressed copy of the files; with this approach you end up with a compressed archive instead.

To run the above command and extract the archive on your end as it comes across the network, simply do the following :
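
For example (the local destination directory must already exist) :

# stream the remote directory and extract it locally in one step
ssh user@remote.example.com "tar -czf - /var/www" | tar -xzf - -C /local/destination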

These commands could theoretically incorporate GPG encryption to encrypt the compressed archive before it travels across the network, for increased security. That is why this alternative to rsync may be preferable to some.

Obviously you could locally encrypt and compress, then rsync, but it’s always a good idea not to use local storage for this process and to keep all the storage usage on the centralized storage system that you have already allocated.