Integrate your custom IPTables script with Linux

How do I integrate my custom iptables script with Red Hat Enterprise Linux?

A custom iptables script is sometimes necessary to work around the limitations of the Red Hat Enterprise Linux firewall configuration tool. The procedure is as follows:

1. Make sure that the default iptables initialization script is not running:

service iptables stop

2. Execute the custom iptables script (a minimal example appears at the end of this procedure):

sh [custom iptables script]

3. Save the newly created iptables rules:

service iptables save

4. Restart the iptables service:

service iptables restart

5. Verify that the custom iptables ruleset has taken effect:

service iptables status

6. Enable automatic start up of the iptables service on boot up:

chkconfig iptables on

The custom iptables script should now be integrated into the operating system.
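For reference, a minimal custom iptables script might look something like the following. The rules are purely illustrative; substitute your own policy:

#!/bin/sh
# Example policy: allow loopback, established sessions and SSH; drop everything else
iptables -F
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT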

Patch Scanning / Information Gathering Script for RedHat / CentOS

With all the patch management solutions, local repositories and other options available, it is rarely necessary to manually scan all the servers on your network to build a "report" of the patch levels in your environment.

Sometimes it is, however. For instance, if you are brought into an environment that has not been properly managed and you need some quick audits to evaluate how much work is required to bring all the patch levels up to standard, simple bash scripting can produce these reports.

I have developed such a script for these situations; quick reporting is sometimes necessary even while you are evaluating a large commercial patch management solution. It can even run alongside such a solution, for independent reporting perhaps.

This script works well if you distribute it to each server and run it over SSH with key based authentication for centralized reporting. Alternatively, you could modify it to run each command over SSH across the network and gather the information that way. Distributing the script to each server is probably the better approach, since only one ssh command is then executed per server.

Find the script below; note that it only works with RedHat / CentOS systems. Obviously, if you are paying for Red Hat Enterprise support, you are already using Satellite; if you are using CentOS, then this script may be useful for you.

Enjoy!

#!/bin/sh

# Basic Information Gathering
# Star Dot Hosting
# https://www.stackstar.com

HOSTNAME=`hostname`
UNAME=`uname -a | awk '{print $3}'`

# Begin Package Scanning


# SSH

SSHON="0"
SSHRUN="NULL"
SSHRPM="NULL"
SSHMATCH="NULL"


if [ -f /usr/sbin/sshd ]
then
        SSHON="1"
	SSHMATCH="0"
        SSHRUN=`ssh -V 2>&1 | awk 'BEGIN { FS = "_" } ; { print $2 }' | awk '{print $1}' | cut -b 1-5`
        TESTRPM=`rpm -qa openssh`
        if [ -n "$TESTRPM" ]
        then
                SSHRPM=`rpm -qa openssh | awk 'BEGIN { FS = "-" } ; { print $2 }'`
        fi
        if [ "$SSHRUN" = "$SSHRPM" ]
        then
                SSHMATCH="1"
        fi

fi

# Apache

HTTPDON="0"
HTTPDRUN="NULL"
HTTPDRPM="NULL"
HTTPDMATCH="NULL"


if [ -f /usr/sbin/httpd ]
then
        HTTPDON="1"
	HTTPDMATCH="0"
        HTTPDRUN=`httpd -v | grep version | awk 'BEGIN {FS="/"};{print$2}'`
	TESTRPM=`rpm -qa httpd`
	if [ "$TESTRPM" <> 0  ]
	then
        	HTTPDRPM=`rpm -qa httpd | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$HTTPDRUN" == "$HTTPDRPM" ]
        then
                HTTPDMATCH="1"
        fi
fi

# MySQL

MYSQLON="0"
MYSQLRUN="NULL"
MYSQLRPM="NULL"
MYSQLMATCH="NULL"


if [ -f /usr/bin/mysql ]
then
        MYSQLON="1"
	MYSQLMATCH="0"
        MYSQLRUN=`mysql -V | awk '{print $5}' | cut -b 1-6`
        TESTRPM=`rpm -qa mysql`
        if [ -n "$TESTRPM" ]
        then
                MYSQLRPM=`rpm -qa mysql | awk 'BEGIN { FS = "-" } ; { print $2 }'`
        fi
        if [ "$MYSQLRUN" = "$MYSQLRPM" ]
        then
                MYSQLMATCH="1"
        fi
fi

# PHP

PHPON="0"
PHPRUN="NULL"
PHPRPM="NULL"
PHPMATCH="NULL"


if [ -f /usr/bin/php ]
then
        PHPON="1"
	PHPMATCH="0"
        PHPRUN=`php -v | grep built | awk '{print $2 }'`
	TESTRPM=`rpm -qa php`
	if [ "$TESTRPM" <> 0  ]
	then
        	PHPRPM=`rpm -qa php | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PHPRUN" == "$PHPRPM" ]
        then
                PHPMATCH="1"
        fi
fi

# Exim
# Needs to be tested on RH box

EXIMON="0"
EXIMRUN="NULL"
EXIMRPM="NULL"
EXIMMATCH="NULL"


if [ -f /usr/sbin/exim ]
then
        EXIMON="1"
	EXIMMATCH="0"
        EXIMRUN=`exim -bV | grep version | awk '{print $3}'`
	TESTRPM=`rpm -qa exim`
	if [ "$TESTRPM" <> 0  ]
	then
        	EXIMRPM=`rpm -qa exim | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$EXIMRUN" == "$EXIMRPM" ]
        then
                EXIMMATCH="1"
        fi
fi

# OpenSSL

OSSLON="0"
OSSLRUN="NULL"
OSSLRPM="NULL"
OSSLMATCH="NULL"


if [ -f /usr/bin/openssl ]
then
        OSSLON="1"
	OSSLMATCH="0"
        OSSLRUN=`openssl version | awk '{print $2}'`
	TESTRPM=`rpm -qa openssl`
	if [ "$TESTRPM" <> 0  ]
	then
        	OSSLRPM=`rpm -qa openssl | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$OSSLRUN" == "$OSSLRPM" ]
        then
                OSSLMATCH="1"
        fi
fi

# PERL

PERLON="0"
PERLRUN="NULL"
PERLRPM="NULL"
PERLMATCH="NULL"


if [ -f /usr/bin/perl ]
then
        PERLON="1"
	PERLMATCH="0"
        PERLRUN=`perl -v | grep built | awk '{print $4}' | awk 'BEGIN { FS = "v" } ; { print $2 }'`
	TESTRPM=`rpm -qa perl`
	if [ "$TESTRPM" <> 0  ]
	then
        	PERLRPM=`rpm -qa perl | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PERLRUN" == "$PERLRPM" ]
        then
                PERLMATCH="1"
        fi
fi


# PYTHON

PYON="0"
PYRUN="NULL"
PYRPM="NULL"
PYMATCH="NULL"


if [ -f /usr/bin/python ]
then
        PYON="1"
	PYMATCH="0"
        PYRUN=`python -V 2>&1 | awk '{print $2}'`
	TESTRPM=`rpm -qa python`
	if [ "$TESTRPM" <> 0  ]
	then
        	PYRPM=`rpm -qa python | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PYRUN" == "$PYRPM" ]
        then
                PYMATCH="1"
        fi
fi

# GPG

GPGON="0"
GPGRUN="NULL"
GPGRPM="NULL"
GPGMATCH="NULL"


if [ -f /usr/bin/gpg ]
then
        GPGON="1"
	GPGMATCH="0"
        GPGRUN=`gpg --version | grep gpg | awk '{print $3}'`
	TESTRPM=`rpm -qa gnupg`
	if [ "$TESTRPM" <> 0  ]
	then
        	GPGRPM=`rpm -qa gnupg | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$GPGRUN" == "$GPGRPM" ]
        then
                GPGMATCH="1"
        fi
fi

# RPM

RPMON="0"
RPMRUN="NULL"
RPMRPM="NULL"
RPMMATCH="NULL"


if [ -f /bin/rpm ]
then
        RPMON="1"
	RPMMATCH="0"
        RPMRUN=`rpm --version | awk '{print $3}'`
	TESTRPM=`rpm -qa rpm`
	if [ "$TESTRPM" <> 0  ]
	then
        	RPMRPM=`rpm -qa rpm | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$RPMRUN" == "$RPMRPM" ]
        then
                RPMMATCH="1"
        fi
fi

# SENDMAIL

SENDON="0"
SENDRUN="NULL"
SENDRPM="NULL"
SENDMATCH="NULL"


if [ -f /usr/sbin/sendmail ]
then
        SENDON="1"
        SENDMATCH="0"
        SENDRUN=`echo 'quit' | nc localhost 25 | grep Sendmail | awk '{print $5}' | awk 'BEGIN { FS = "/" } ; { print $1 }'`
	TESTRPM=`rpm -qa sendmail`
	if [ "$TESTRPM" <> 0  ]
	then
	        SENDRPM=`rpm -qa sendmail | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$SENDRUN" == "$SENDRPM" ]
        then
                SENDMATCH="1"
        fi
fi

### Non running packages

# bind-libs

BINDLIB="NULL"
TESTRPM=`rpm -qa bind-libs`
if [ "$TESTRPM" <> 0  ]
then
	BINDLIB=`rpm -qa bind-libs | awk 'BEGIN { FS = "-" } ; { print $3 }'`
fi


# bind-utils

BINDUTIL="NULL"
TESTRPM=`rpm -qa bind-utils`
if [ "$TESTRPM" <> 0  ]
then
	BINDUTIL=`rpm -qa bind-utils | awk 'BEGIN { FS = "-" } ; { print $3 }'`
fi

# coreutils

COREUTIL="NULL"
TESTRPM=`rpm -qa coreutils`
if [ "$TESTRPM" <> 0  ]
then
	COREUTIL=`rpm -qa coreutils | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# chkconfig

CHKCONFIG="NULL"
TESTRPM=`rpm -qa chkconfig`
if [ "$TESTRPM" <> 0  ]
then
	CHKCONFIG=`rpm -qa chkconfig | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# initscripts

INITSCR="NULL"
TESTRPM=`rpm -qa initscripts`
if [ "$TESTRPM" <> 0  ]
then
	INITSCR=`rpm -qa initscripts | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# redhat-release

RHRELEASE="NULL"
TESTRPM=`rpm -qa redhat-release`
if [ "$TESTRPM" <> 0  ]
then
	RHRELEASE=`rpm -qa redhat-release | awk 'BEGIN { FS = "-" } ; { print $3"-"$4 }'`
fi



echo $HOSTNAME,$UNAME,$SSHMATCH,$HTTPDMATCH,$MYSQLMATCH,$PHPMATCH,$EXIMMATCH,$OSSLMATCH,$PYMATCH,$PERLMATCH,$GPGMATCH,\
$RPMMATCH,$SENDMATCH,$BINDLIB,$BINDUTIL,$COREUTIL,$CHKCONFIG,$INITSCR,$RHRELEASE,$SSHON,$SSHRUN,$SSHRPM,$HTTPDON,$HTTPDRUN,\
$HTTPDRPM,$MYSQLON,$MYSQLRUN,$MYSQLRPM,$PHPON,$PHPRUN,$PHPRPM,$EXIMON,$EXIMRUN,$EXIMRPM,$OSSLON,$OSSLRUN,$OSSLRPM,$PERLON,\
$PERLRUN,$PERLRPM,$PYON,$PYRUN,$PYRPM,$GPGON,$GPGRUN,$GPGRPM,$RPMON,$RPMRUN,$RPMRPM,$SENDON,$SENDRUN,$SENDRPM

Note that you can modify the echo output to produce whatever output you need in order to present it in a nice human readable report.
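As a sketch of the centralized reporting approach described earlier (the server list, install path and report file here are all hypothetical), the results could be collected from a management host like so:

#!/bin/sh
# Pull one CSV line from each server over SSH (assumes key based authentication is in place)
for host in `cat /etc/patch-scan/servers.txt`
do
        ssh root@$host '/usr/local/bin/patch-scan.sh' >> /var/reports/patch-report.csv
done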

Scheduled antivirus scans to prevent viral injections on user generated content

When dealing with high traffic sites, especially media based or community based sites, there is always the risk of javascript, virus, XSS or other malicious injection of badness when giving a community of users the ability to upload files to your site.

There are several things to consider when evaluating all the "points of entry" into your systems that are available to the public.

Most content management and community based systems use libraries such as Imagemagick to process images (such as profile pictures) into their proper format and size.

Believe it or not, it is actually hard to inject code or other malicious data into an image in a way that survives this sanitizing process. There are still risks, however: the library version you are running may itself be vulnerable to exploits.

As always, a good rule of thumb is to ensure all possible aspects of your systems are up to date and that you are aware of any security vulnerabilities as they come out so they can either be patched or addressed in some other way.

One thing to consider, especially when dealing with thousands of users and even more uploads, is a scheduled scan of your user uploads using free virus scanning tools such as clamscan. This is an endpoint reporting strategy that can at least cover your ass in the event that something else was missed or a 0day vulnerability was exploited.

It should be noted that the virus scans themselves aren't intended to protect the linux systems, but rather to catch the opportunistic 'spreading' of compromised images and code that an infected file on a public community based system can enable.

It's very simple to implement clamav (daemonization is not necessary); clamscan is all we need to execute regular scans at 10, 15, 30 or 60 minute intervals.

Once clamscan is implemented and the definitions updated (with regular update cronjobs in place), you can roll out a script similar to the one we have here to implement the scheduled scans :

#!/bin/bash
# Scheduled Scan of user uploaded files
# Usage : ./virusscan.sh /folder


SUBJECT="[VIRUS DETECTED] ON `hostname` !"
EMAIL="you@yourdomain.com"
LOG=/var/log/clamav/scan.log

# Clear out old logs -- the email alerts should be archived if we need to go back to old alerts
echo "" > $LOG

# Check if the folder is empty -- only scan if this is an active node in a clustered system
# look for empty dir
if [ "$(ls -A $1)" ]
then
        # Scan files recursively, logging only infected results
        clamscan -r --infected --scan-pdf --scan-elf --log=$LOG "$1"

        # Check the summary: any "Infected files" count other than 0 means we found something
        grep "Infected files" $LOG | grep -v ": 0$"
        if [ $? = 0 ]
        then
                cat $LOG | mail -s "$SUBJECT" $EMAIL -- -F Antivirus -f antivirus@yourdomain.com
        fi

else
        echo "directory empty -- doing nothing"
        exit 0;
fi

The actual cronjob entry can look something like this :

0 */1 * * * /bin/bash /usr/local/bin/virusscan.sh "/your/path/to/user/uploaded/files/" > /dev/null 2>&1

It seems like a rather simple solution, but it does provide a venue for additional sanitizing of user input. In our experience, it is best to only report on anything that clamscan might find. You can, however, tell clamscan to simply delete any suspected infections it finds.
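If you do decide to delete rather than just report, clamscan supports a --remove flag; a variant of the scan line from the script above would be:

clamscan -r --infected --remove --scan-pdf --scan-elf --log=$LOG "$1"

Be careful with this on user generated content, since a false positive would silently destroy a user's file.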

Backup a live FreeBSD filesystem and remotely migrate to another server

Lately we've been all about live migrations / backups here at *.hosting. And why not? With the advent of such concepts as the "self healing blade cloud environment", we have made a point of testing / scripting live migration scenarios.

Following on our last post about backing up LVM volumes, we have decided to make a simple post on 'dumping' a live FreeBSD filesystem, compressing it mid-stream, and sending it over the network (encrypted through ssh, of course) before it is saved as a file (or restored to a waiting live-CD mounted system).

By default, FreeBSD puts the root, usr and var filesystems on separate partitions:

Filesystem   Size  Used  Avail  Capacity  Mounted on
/dev/sd0s1a  989M  445M  465M   49%       /
/dev/sd0s1f  9.7G  5.2G  3.7G   59%       /usr
/dev/sd0s1e  19G   1.5G  16G    9%        /var

So let's dump the root partition, since it's the smallest :

dump -0uanL -f - /dev/sd0s1a | bzip2 | ssh user@0.0.0.0 "dd of=dump-root.bzip2"

Let's break down the options so you can fully understand what it's doing :

-0 // dump level 0 = full backup
-u // update the dumpdates file after a successful dump
-a // bypass all tape length considerations; autosize
-n // notify if attention is required
-L // tell dump that it is a live filesystem for a consistent dump; it will take a snapshot
-f - // write the dump to standard output so it can be piped

Alternatively you could dump the filesystem to a local file :

dump -0uanL -f - /dev/sd0s1a | bzip2 | dd of=/home/backups/dump-root.bzip2

If you wanted to dump from server1 and restore on server2 :

dump -0uanL -f - /dev/sd0s1a | ssh user@0.0.0.0 "restore rf -"

Again, this is a straightforward command. It is typically fast (within reason). You could script this for automated dumps / snapshots of your filesystem for full restores in a disaster scenario.
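For completeness, restoring one of those saved images later might look like this (a sketch, assuming you have booted a live CD, newfs'd the target slice and mounted it at /mnt):

bzcat dump-root.bzip2 | ( cd /mnt && restore -rf - )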

How to backup Xen with Logical Volume Mounts ; Works with HyperVM, SolusVM, FluidVM and More

Through our research and implementation of many Xen environments, it has become necessary to develop a reliable and secure method for backing up our Xen instances that are mounted on Logical Volumes (LVM).

The underlying problem is that the logical volume is usually a live file system that cannot be directly mounted / backed up or imaged safely.

We have written a script that processes all running Xen logical volumes, creates a snapshot of the volume and through that snapshot , uses dd to image the snapshot to another server over ssh.

You would be surprised at how well these dd images compress. Piping dd through bzip2 and then over ssh to the receiving server produces a very substantial compression ratio.

The initial trouble was writing the logic in the script to properly go through each Xen LV, create the snapshot, image it, and then remove the snapshot. Obviously, extensive testing had to be completed to ensure reliability and proper error reporting.

This script should work with any 3rd party Xen control panel implementation (HyperVM, FluidVM, SolusVM to name a few). They all use the same underlying technology / framework. Since our script is a simple bash / shell script, it will run on any linux based system with little modification.

If you are using an LV for another purpose on the same box, it is probably a good idea to modify the script to ignore it so it doesn't inadvertently get backed up.

Before implementing the script, it is probably a good idea to go through the motions manually just to see how it performs :

lvcreate -s -L 5G -n vm101_img_snapshot /dev/vps/vm101_img
dd if=/dev/vps/vm101_img_snapshot | bzip2 | ssh xenbackup@x.x.x.x "dd of=vm101_img.bz2"

One thing that you can't get around is space: you need to leave as much free space in the volume group as the largest Xen image, otherwise the script will fail at the snapshot creation step.

Find the script below. Hopefully it will help make your life easier (as well as being able to sleep at night) :

#!/bin/bash
# XEN Backup script
# Written by Star Dot Hosting

todaysdate=`date "+%Y-%m-%d"`

echo "XEN Backup Log: " $currentmonth > /var/log/backup.log
echo -e "------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log


for obj0 in $(lvs --noheadings --separator ',' -o lv_name,lv_size | grep -v "swap" | awk -F "," '{printf "%s\n", $1}');
do

#grab the snapshot size
snapsize=`lvs --noheadings --separator ',' -o lv_name,lv_size | grep -v "swap" | grep -w "$obj0" | awk -F "," '{printf "%s", $2}'`

#create the snapshot
lvcreate -s -L $snapsize -n ${obj0}_snapshot /dev/xenlvm/$obj0 >> /var/log/backup.log 2>&1

#dd piped to bzip2 to compress the stream before piping it over the network via ssh to the destination box
dd if=/dev/xenlvm/${obj0}_snapshot | bzip2 | ssh xenbackup@0.0.0.0 "dd of=/home/xenbackup/xen-backups/$obj0.$todaysdate.bz2" >> /var/log/backup.log 2>&1

if [ "$?" -ne 0 ]
then
        echo -e "***SCRIPT FAILED, THERE WERE ERRORS***" >> /var/log/backup.log 2>&1
        cat /var/log/backup.log | mail -s "XEN Backup Job failed" admin@yourdomain.com
        lvremove -f /dev/xenlvm/${obj0}_snapshot
        exit 1
else
        echo -e "Backup of $obj0 Completed Successfully!" >> /var/log/backup.log 2>&1
fi

# remove the snapshot
lvremove -f /dev/xenlvm/${obj0}_snapshot


done

cat /var/log/backup.log | mail -s "XEN Backup Job Completed" admin@yourdomain.com

If you plan on automating this script in a cronjob, it may be a good idea to utilize ssh key authentication between your destination server and your Xen server.
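A crontab entry for that might look like the following (the install path is hypothetical):

# Nightly Xen LVM backup at 2am
0 2 * * * /bin/bash /usr/local/bin/xen-backup.sh > /dev/null 2>&1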

Migrate FreeBSD to Xen

There seem to be a lot of tutorials with respect to how you can dump/restore FreeBSD implementations. However, none of them appears to encompass everything that is actually required from start to finish during the entire process.

The one thing that I think is lacking in proper documentation is utilizing FreeBSD in a LiveCD scenario (LiveFS) in a networked capacity (necessary for migration).

We decided to write this tutorial so that people could have one place to establish all the necessary things required for this type of migration from start to finish.

In this scenario we actually migrated a FreeBSD implementation on VMWARE to XEN HVM. In the end, there were no technical problems with FreeBSD actually running after it was migrated — it ran beautifully actually.

I should note that this was tested with FreeBSD 7.2-RELEASE disc images.

Please find the guide below :

Prepare OLD Instance

1. Boot into old operating system

2. Take note of partition slices / slice names / sizes / etc

3. Reboot with FreeBSD LiveFS disc

Prepare NEW Xen

1. Boot Xen instance with FreeBSD Disc 1 ISO

2. Partition the disk and install the boot loader with exactly the same slices as the old instance. To be extra careful, give your slices a bit more disc space than the old implementation.

3. Write changes & reboot with FreeBSD LiveFS disc

Establish FreeBSD LiveFS environment

You need to establish a few things to get SSH / DUMP / RESTORE to work properly on both the "old" and "new" instances

1. Boot into FreeBSD LiveFS (Fixit > livefs)

2. Create the following folders :

/etc/ssh
/usr/sbin
/usr/bin
/root
/root/.ssh

3. Copy the following files :

cp /mnt2/bin/ps /bin
cp /mnt2/sbin/sysctl /sbin
cp /mnt2/etc/ssh/* /etc/ssh
cp /mnt2/bin/csh /bin
cp /mnt2/bin/cat /bin
cp /mnt2/sbin/restore /sbin

4. Set an IP address on both old and new instances (FreeBSD names interfaces after their driver, e.g. em0; substitute your actual interface name):

new :

ifconfig em0 10.0.0.50 netmask 255.255.255.0

old :

ifconfig em0 10.0.0.60 netmask 255.255.255.0

5. Start sshd :

/mnt2/etc/rc.d/sshd forcestart

Start transferring slices

1. To allow restore to work properly (it keeps temporary files in /tmp), the /tmp partition should be mounted on the new Xen instance :

mount -t ufs /dev/ad0s1e /tmp

2. For the first partition you wish to transfer, mount the empty slice on the new xen instance :

mount -t ufs /dev/ad0s1a /mnt/ufs.1

Sometimes you have to run fsck to mark the filesystem clean before it will mount :

fsck /dev/ad0s1a

3. On the old instance :

dump -0aLf - /dev/ad0s1a | ssh 10.0.0.50 "cd /mnt/ufs.1 && cat | restore -rf -"

That should dump/restore the slice from old > new.
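The same pattern repeats for each remaining slice; for example, for /usr (device names taken from the example layout above; substitute your own):

mount -t ufs /dev/ad0s1f /mnt/ufs.2

Then, on the old instance :

dump -0aLf - /dev/ad0s1f | ssh 10.0.0.50 "cd /mnt/ufs.2 && cat | restore -rf -"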

Final things on the new Xen instance

Don't forget to boot the new instance in single user mode and modify fstab to reflect the new slice names (if applicable), as well as rc.conf for any hard coded interface names, etc. FreeBSD won't boot (or will at least have problems) if the right slice names / interface names aren't present.

You can mount the /etc slice while still in the LiveFS for the new FreeBSD instance.

Hopefully this was helpful! Obviously this has nothing to do with Xen, other than the fact that we were migrating the FreeBSD vmware instance to Xen.

You can do this on "real" machines, or from Xen to VMware, or anywhere, as long as the hardware is compatible.

Amazon S3 Backup script with encryption

With the advent of cloud computing, there have been several advances as far as commercial cloud offerings, most notably Amazon’s EC2 computing platform as well as their S3 Storage platform.

Backing up to Amazon S3 has become a popular alternative to achieving true offsite backup capabilities for many organizations.

The fast data transfer speeds as well as the low cost of storage per gigabyte make it an attractive offer.

There are several free software solutions that offer the ability to connect to S3 and transfer files. The one that shows the most promise is s3sync.

There are already a few guides that show you how to implement s3sync on your system.

The good thing is that this can be implemented on Windows, Linux and FreeBSD, among other operating systems.

We have written a simple script that utilizes the s3sync program in a scheduled offsite backup scenario. Find our script below, and modify it as you wish. Hopefully it will help you get your data safely offsite 😉

#!/bin/sh
# OffSite Backup script

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`

export AWS_ACCESS_KEY_ID="YOUR-ACCESS-KEY"
export AWS_SECRET_ACCESS_KEY="YOUR-SECRET-ACCESS-KEY"

echo "Offsite Backup Log: " $currentmonth > /var/log/offsite-backup.log
echo -e "----------------------------------------" >> /var/log/offsite-backup.log
echo -e "" >> /var/log/offsite-backup.log

# Remove local backup archives older than 3 days
/usr/bin/find /home/offsite-backup-files -type f -mtime +3 -delete

# Compress and archive a few select key folders for archival and transfer to S3
tar -czvf /home/offsite-backup-files/offsite-backup-`date "+%Y-%m-%d"`.tar.gz /folder1 /folder2 /folder3 >> /var/log/offsite-backup.log 2>&1

# Transfer the files to Amazon S3 Storage via HTTPS
/usr/local/bin/ruby /usr/local/bin/s3sync/s3sync.rb --ssl -v --delete -r /home/offsite-backup-files your-node:your-sub-node/your-sub-sub-node >> /var/log/offsite-backup.log 2>&1

# Some simple error checking and email alert logging
if [ "$?" -eq 1 ]
then
        echo -e "***OFFSITE BACKUP JOB, THERE WERE ERRORS***" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "Offsite Backup Job failed" you@yourdomain.com
        exit 1
else
        echo -e "Script Completed Successfully!" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "Offsite Backup Job Completed" your@yourdomain.com
        exit 0
fi
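To run this on a schedule, a crontab entry like the following would do (the install path is hypothetical):

# Nightly offsite backup at 3am
0 3 * * * /bin/sh /usr/local/bin/offsite-backup.sh > /dev/null 2>&1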

Now if your data happens to be sensitive (most usually is), encrypting the data during transit (with the --ssl flag) is usually not enough.

As an alternative, you can encrypt the actual file before it is sent to S3. This would be incorporated into the tar command in the above script; that line would look something like this :

/usr/bin/tar -czvf - /folder1 /folder2 /folder3 | /usr/local/bin/gpg --encrypt -r you@yourdomain.com > /home/offsite-backup-files/offsite-backups-`date "+%Y-%m-%d"`.tar.gz.gpg

As an alternative to gpg, you could utilize openssl to encrypt the data.
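A sketch of that openssl variant (the cipher choice and passphrase file location are assumptions; pick any strong symmetric cipher and protect the passphrase file):

/usr/bin/tar -czvf - /folder1 /folder2 /folder3 | /usr/bin/openssl enc -aes-256-cbc -salt -pass file:/root/.backup-passphrase > /home/offsite-backup-files/offsite-backup-`date "+%Y-%m-%d"`.tar.gz.enc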

Hopefully this has been helpful!

Compress files and folders over the network without using rsync

The following command ssh’s to your remote server, tar + gzips a directory, and then outputs the compressed stream to your local machine.

This is a good alternative to rsync: even though rsync can compress the transfer mid-stream, what lands on the receiving end is still the uncompressed copy.

ssh -l username 0.0.0.0 'tar -czf - -C /home/mysql-backups .' > test.tar.gz

To run the above command and extract the archive on your end as it arrives over the network, simply do the following :

ssh -l username 0.0.0.0 'tar -czf - -C /home/mysql-backups .' | tar -xzf -

These commands could incorporate gpg encryption to encrypt the archive before it travels across the network, for increased security (see the sketch below). That is why this alternative to rsync may be preferable to some.
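For example, a sketch that encrypts on the remote side before the stream crosses the network (this assumes the remote host has gpg installed with your public key imported):

ssh -l username 0.0.0.0 'tar -czf - -C /home/mysql-backups . | gpg --encrypt -r you@yourdomain.com' > test.tar.gz.gpg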

Obviously you could locally encrypt + compress, then rsync, but it's always a good idea not to use local storage for this process and to keep all the storage on the centralized storage system that you have already allocated.

Script to distribute SSH Keys across many servers

Hello once again!

You may remember an earlier post that detailed how to implement SSH Key based authentication.

We believe it is important, when administering many (sometimes hundreds or thousands) of servers, to implement a strategy that can allow systems administrators to seamlessly run scripts, system checks or critical maintenance across all the servers.

SSH Key authentication allows for this potential. It is a very powerful strategy and should be maintained and implemented with security and efficiency as a top priority.

Distributing keys for all authorized systems administrators makes maintaining this authentication system much easier: when an admin leaves or is dismissed, you need to be able to remove his or her keys from the "pool" quickly.

The idea behind this script is to have a centralized, highly secure and restricted key repository server. Each server in your environment would run this script to “pull” the updated key list from the central server. The script would run as a cron job and can run as often as you like. Ideally every 5-10 minutes would allow for quick key updates / distribution.
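A crontab entry on each server might look like this (the install path is hypothetical):

# Pull the updated key list every 10 minutes
*/10 * * * * /usr/bin/perl /usr/local/bin/sync-ssh-keys.pl > /dev/null 2>&1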

Here is the perl script :

#!/usr/bin/perl
#
# A script to sync ssh keys on UNIX servers automatically.  This
# will not overwrite user installed ssh keys
#

use strict;
use IPC::Open3;
use File::Copy;

use POSIX ":sys_wait_h";

# This is overkill but FreeBSD may install wget in
# /usr/local/bin in some cases.
$ENV{PATH} = "/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin";

####################################################

use constant URL => 'https://keys.yoursite.com/ssh-keys.txt';
use constant WGET => 'wget --no-check-certificate -q -O - ';
use constant KEYS_FILE => '/root/.ssh/authorized_keys';
use constant RESTRICTED => 'https://keys.yoursite.com/restricted.txt';

####################################################

my ($url, $wget, $keys_file, $restricted, %restrict);
my ($pid, $company_keys, $user_keys);

# Defaults, overridable with -k / -u below
$url        = URL;
$wget       = WGET;
$keys_file  = KEYS_FILE;
$restricted = RESTRICTED;

for (my $i=0; $i<@ARGV; $i++) {
        if    ($ARGV[$i] eq '-k') { $keys_file = $ARGV[++$i]; }
        elsif ($ARGV[$i] eq '-u') { $url       = $ARGV[++$i]; }
        elsif ($ARGV[$i] eq '-h') { usage(); }
}

# Pull the restricted (revoked) key list and remember each line
$pid = open3(*WTR, *RTR, *ERR, "$wget$restricted");
while (<RTR>) {
        chomp;
        $restrict{$_}++;
}

# Pull the company key list, skipping any restricted keys
$pid = open3(*WTR, *RTR, *ERR, "$wget$url");
while (<RTR>) {
        chomp(my $key = $_);
        next if $restrict{$key};
        $company_keys .= "$key\n";
}

$user_keys = read_key_file();

# Sanity check
my @rows = split("\n", $company_keys);

if (scalar @rows < 1) {
        print "No company keys found, not installing keys..\n";
        exit(1);
}

open(TMP, ">$keys_file.$$.tmp") or die "Could not open tmp keys file: $!n";
print TMP $company_keys;
print TMP $user_keys;
close(TMP);

# Sanity check

my $size = (stat("$keys_file.$$.tmp"))[7];

if ($size < 100) {
        print "Keys file less than 100 bytes, not writing\n";
        exit(1);
}

move("$keys_file.$$.tmp", $keys_file);

sub read_key_file {
        my $user_buf;

        open(KEY_FILE, "< $keys_file") or die "Could not open ssh key file; $!n";

        while (<KEY_FILE>) {
                # Keep user-installed keys; company-managed keys are expected
                # to carry a comment ending in "company" and are refreshed above
                next if $_ =~ /company$/;
                $user_buf .= $_;
        }

        close(KEY_FILE);
        return($user_buf);
}

sub sig_chld {
        my $pid = waitpid(-1, WNOHANG);
}

sub usage {
        print STDERR <<"EOS";

        Usage: $0 -[kuh]

                -k        Keys file to write to (default: @{[KEYS_FILE]})
                -u         URL to download keys from (default: @{[URL]})
                -h              This screen

EOS
        exit(1);
}

1;

__END__

Note that it downloads the public keys with wget; as written, the wget call uses --no-check-certificate, so although the transport is HTTPS, the key server's certificate is not actually verified. The public keys themselves are harmless to expose, but you may want to enable full certificate verification so the key list cannot be tampered with in transit.

We hope this script helps you along the way towards making your life easier! 😉

Set Up Exim with ClamAV and Spamassassin

I decided to post this article on implementing a simple single mail server with anti-spam and anti-virus capabilities.

This guide hopefully will help you on your way to configuring a basic mail system on Linux (specifically Debian).

Installing and configuring Exim 4 on Debian

1. First, install all the necessary Debian packages on the system as the root user. (The exim4 package will REPLACE the exim package.)

NOTE: If you are using the stable branch, it is suggested to use the Debian volatile packages (along with the security packages) so that your system is using the most up-to-date critical packages (like ClamAV) for security purposes. For production servers, you may not want to run a mixed stable/testing/unstable system (though I know some of you do!). To use these packages, see http://volatile.debian.net/ for more information. For those of you who are impatient and don't want to find the correct mirror, here's what I added to my /etc/apt/sources.list file:

deb http://volatile.debian.net/debian-volatile sarge/volatile main contrib

I used aptitude to install these packages, but you could also use the old apt-get method:

apt-get install clamav-daemon \
clamav-freshclam exim4-daemon-heavy exim4 \
courier-base courier-authdaemon courier-imap \
courier-pop spamassassin wget spamc sa-exim

When going through the exim4 config, be sure to select the multiple file configuration layout. If you didn't (or weren't prompted for it), simply set dc_use_split_config to 'true' in the /etc/exim4/update-exim4.conf.conf file. (Thanks Mike!)

2. Create your Maildir directory

maildirmake ~/Maildir/

3. Now we want to make exim4 use Maildir format mailboxes. Modify the file /etc/exim4/update-exim4.conf.conf so that it contains:

dc_localdelivery='maildir_home'

4. Edit /etc/default/spamassassin to enable spamd; the relevant settings are sketched below.
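On Debian this file ships with the daemon disabled. A minimal sketch of the relevant settings (the option values shown are the Debian defaults of that era; verify against your own file):

# /etc/default/spamassassin
ENABLED=1
OPTIONS="--create-prefs --max-children 5 --helper-home-dir"

Then start the daemon with /etc/init.d/spamassassin start.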

5. Each user can set up their own filters by creating a .forward file in their home directory. If the first line of this file reads:

# Exim filter

then Exim4 will treat it as a filter.

Here is an example of an Exim filter that checks the headers that SpamAssassin adds and puts the mail in the appropriate Maildir folder:

      # Exim filter
      if $h_X-Spam-Status: CONTAINS "Yes"
           or
        $h_X-Spam-Flag: CONTAINS "Yes"
      then
        save $home/Maildir/.Spam/
        finish
      endif

For more detail on the filter syntax, see "Exim's Interface To Mail Filtering" in the Exim documentation (PDF format).

6. Many system administrators like to set up the Maildir directories and .forward filter file in the /etc/skel directory so that when they make a new user on the system, everything is automatically copied over. I suggest that you do this as well as it makes things easier.
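A sketch of that skeleton setup (maildirmake's -f flag creates a folder inside an existing Maildir; the template path is hypothetical, and its contents are the filter shown in step 5):

maildirmake /etc/skel/Maildir
maildirmake -f Spam /etc/skel/Maildir
cp /path/to/your/forward-template /etc/skel/.forward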

7. Before going live with the mail server, we will want to test it!

Testing the implementation

1. Generate the new configuration:

update-exim4.conf

If you made it through this, then your config files don’t have any syntax errors.

exim4 -bV

If that works, then there are no config issues.

2. Next, start exim by issuing:

/etc/init.d/exim4 start

The above assumes that you are running exim4 as a daemon, and not through inetd.

3. Now, check a local address:

            exim4 -bt local_user@example.com

4. Check sending an email:

            exim4 -v mailbox_you_can_check@dom.ain
               From: user@your.domain
               To: mailbox_you_can_check@dom.ain
               Subject: Testing exim
                     
               Testing exim
               .

You should now see some messages to let you know that the email was sent or information about what went wrong.

5. To test with full debug output using a specific config file, use something like:

            exim4 -C /etc/exim/exim_example.conf -d -bt user@example.com

6. To test the config coming from a specified ip address, use:

            exim4 -bh 192.168.1.10

            HELO example.com
               MAIL FROM: <user@your.domain>
               RCPT TO: <mailbox_you_can_check@dom.ain>
               DATA
               Subject: something
               your message here
               .
               QUIT

8. Add the following to your /etc/exim4/conf.d/main/01_exim4-config_listmacrosdefs file:

      # This tells what virus scanner to use
      av_scanner = clamd:/var/run/clamav/clamd.ctl

9. Edit /etc/exim4/conf.d/acl/40_exim4-config_check_data to include the following before the "# accept otherwise" line:

      # Reject messages that have serious MIME errors.
         # This calls the demime condition again, but it
         # will return cached results.
         deny message = Serious MIME defect detected ($demime_reason)
         demime = *
         condition = ${if >{$demime_errorlevel}{2}{1}{0}}
         
         # Reject file extensions used by worms.
         # Note that the extension list may be incomplete.
         deny message = This domain has a policy of not accepting certain types of attachments \
                        in mail as they may contain a virus.  This mail has a file with a .$found_extension \
                        attachment and is not accepted.  If you have a legitimate need to send \
                        this particular attachment, send it in a compressed archive, and it will \
                        then be forwarded to the recipient.
         demime = exe:com:vbs:bat:pif:scr
         
         # Reject messages containing malware.
         deny message = This message contains a virus ($malware_name) and has been rejected
         malware = *

10. Then, you need to enable ClamAV.

a) Firstly, you will want to be sure that it is running against messages. In /etc/exim4/sa-exim.conf, search for SAEximRunCond:

SAEximRunCond: ${if and {{def:sender_host_address} {!eq {$sender_host_address}{127.0.0.1}} {!eq {$h_X-SA-Do-Not-Run:}{Yes}} } {1}{0}}

That is simply skipping the scan on anything from the local machine, or on any message whose X-SA-Do-Not-Run header is set to Yes. If you just want the scan to run on all messages, use this:

SAEximRunCond: 1

b) Before restarting ClamAV, we need to be sure that all of the access rights are in place so that the scans actually happen. The best way to handle this is to add the clamav user to the Debian-exim group. Either manually edit /etc/group, or simply run:

adduser clamav Debian-exim

c) Be sure that /etc/clamav/clamd.conf contains a line that reads:

AllowSupplementaryGroups

d) Set the file permissions for the /var/run/clamav directory to allow for the correct user to use it:

            chown Debian-exim.Debian-exim /var/run/clamav
            chmod g+w /var/run/clamav

e) A restart of ClamAV is necessary for the changes to take effect:

/etc/init.d/clamav-daemon restart

11. You should now be able to get your mail via IMAP with a mail client like Mozilla.

Check your headers (View Source) and see that SpamAssassin has added its headers. SMTP-end virus scanning should also be taking place. Check your /var/log/clamav/clamav.log to monitor this.

Multiple Domain Alias Files

The steps below are used to enable support for having multiple virtual domains each with its own alias file.

1. Exim will need to have the alias files for each domain.

a) Create the /etc/exim4/virtual directory.
b) For each virtual domain, create a file that contains the aliases to be used named as the domain.

For example, if example.com was one of my domains, I'd do the following:

a) Create the /etc/exim4/virtual/example.com file.
b) If my system users were sys1, sys2, and sys3, and their email addresses were to be joe, john, jason, I’d put the following into the domain alias file:

                  joe:    sys1@localhost
                  john:   sys2@localhost
                  jason:  sys3@localhost

If john was also to get all mail addressed to info@example.com, you would add this entry:

info:   sys2@localhost

If you wanted all mail to user1@example.com to go to another email account outside of this domain, you would enter:

user1:  a.user@some.domain

If you wanted all mail directed at any address other than what is defined in the alias file to go to joe, you’d enter:

*:      sys1@localhost

In the above examples, the "@localhost" suffix on the user names forces delivery to a system user. I found that if you do not include this in the alias files, and your machine's host name is within one of the domains handled by exim, every system user would need an entry in the machine's domain in order to be delivered correctly.

For instance, this would be needed if your host name was mail.example1.com and example1.com was handled by this server. It would allow delivery to all the system user names at example1.com.

The reason is simple, and I will try to illustrate it for you here:

a) exim receives a message delivered to joe.blow@example3.com
b) The alias file for this domain has joe.blow: jblow in it.
c) This would translate to jblow@domain-of-the-system
d) The process would be repeated using jblow@domain-of-the-system
e) If there was no entry in the domain-of-the-system alias file for jblow, the message would be undeliverable (or non-routable)

You could even have special redirects like the following:

script: "| /path/to/some/script"
prev:   :fail: $local_part left!
kill:   :blackhole:

2. Edit /etc/exim4/conf.d/main/01_exim4-config_listmacrosdefs by replacing the current local_domains line with:

domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual

3. Create /etc/exim4/conf.d/router/350_exim4-config_vdom_aliases with the following content:

            vdom_aliases:
            driver = redirect
            allow_defer
            allow_fail
            domains = dsearch;/etc/exim4/virtual
            data = ${expand:${lookup{$local_part}lsearch*@{/etc/exim4/virtual/$domain}}}
            retry_use_local_part
            pipe_transport   = address_pipe
            file_transport   = address_file

4. Now, regenerate your exim4 config:

update-exim4.conf

5. If there were no errors, restart exim4:

/etc/init.d/exim4 restart

Domain Dependent Maximum Message Size

The next step for my server is to give each domain a configurable message size limit. Then, when the server gets a message that is larger than the target domain's size limit, I want to send a message back to the original sender telling them why the message was not delivered. However, I also want to have that message customized for each domain. That way, the domain owners can provide detailed instructions on how to send large messages to their domain if it is necessary. Of course, there will also need to be some kind of default size limit and message for domains that do not need the customization.

1. Create /etc/exim4/domain-size-limits to contain the list of domains and their maximum message size limits. You can also add a wildcard entry at the end if you want to set a default limit. The file may look something like the following:

      example.com: 20M
      example1.com: 5M
      *: 15M

This provides you with a quick way to edit the values. The values also take effect as soon as the file is saved; no need to restart exim!

2. OK, now we know what domains we want to customize the size for. Now it’s time to create a message to send for those domains. Create /etc/exim4/domain-size-limit-messages with content similar to:

      example.com: The largest acceptable message size for Example.com is
                   ${expand:${lookup{$domain}lsearch*@{/etc/exim4/domain-size-limits}}}.
                   Your message was $message_size. If you feel that $local_part@$domain
                   should really get your message, then visit http://www.example.com/files/
                   where you can upload any large files. If you select $local_part@$domain
                   from the "notify" list, they will receive a message with a link directly
                   to your file.
      *:           The largest acceptable message size for $domain is
                   ${expand:${lookup{$domain}lsearch*@{/etc/exim4/domain-size-limits}}}.
                   Your message size was $message_size. Please revise your message so it
                   does not exceed this maximum file size and resend. If this is not
                   possible, contact the recipient in another way.

As you see, we have one domain that has a custom message sent out, and have defined a default message for all other domains. These messages can be edited at any time and do not need an exim restart to take effect.

3. Now for the fun part! We need a way to catch the messages that are too large for the domain! First, create /etc/exim4/conf.d/router/325_exim4-config_large_messages with the following:

      large_messages:
          driver = accept
          domains = dsearch;/etc/exim4/virtual
          condition = ${if >{$message_size}{${expand:${lookup{$domain}lsearch*@{/etc/exim4/domain-size-limits}}}} {yes}{no}}
          transport = bounce_large_messages
          no_verify

This router dynamically checks which domains are available and what their limits are set to.

4. Now create /etc/exim4/conf.d/transport/40_exim4-config_bounce_large_messages with the following content:

      # This bounces a message to people who send files too large for that domain
      bounce_large_messages:
        driver = autoreply
        from = $local_part@$domain
        to = $sender_address
        subject = Re: ${escape:$h_subject:}
        text = ${expand:${lookup{$domain}lsearch*@{/etc/exim4/domain-size-limit-messages}}}

This transport then sends the original sender a message using the text looked up from the domain-size-limit-messages file for that domain. The From: field is filled in with the intended recipient of the message, so it appears to be a reply.

This was actually very simple to put together once I realized what I needed to do. The above is based on what I found in the Exim FAQ.

Configuration Tips

Maybe this is something I should have said in the beginning, but at the time of writing this document, I had never set up an exim4 server, and the only exim3 server I had was used with the default debconf install. Therefore, if you see something on this page that could be done in a more elegant, more efficient or just plain better way, please send me a note.
