Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating onto one server is fairly obvious: with only one script to maintain, editing or making changes is much easier.

This approach is ideal for environments of roughly 15-20 servers or fewer. Beyond that number I’d recommend a complete end-to-end backup solution, such as Bacula perhaps.

The bash shell script that I pasted below is very straightforward and takes two or more arguments. The first is the hostname or IP address of the server you are backing up. The remaining (and potentially unlimited) arguments are single-quoted folders that you want backed up.

This script depends on the server it runs on having SSH key-based authentication enabled and implemented for the root user. Security can be tightened with IP-based restrictions in the SSH configuration, the firewall configuration, or elsewhere.
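If you still need to set that up, the general idea looks something like this (a quick sketch; the hostnames and IP are placeholders):

# On the backup server, generate a key pair for root (no passphrase, for unattended runs)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# Install the public key on each server you want to back up
ssh-copy-id root@destination-server.example.com

# Optionally, restrict the key to the backup server's IP in the destination's
# /root/.ssh/authorized_keys :
# from="10.0.0.5" ssh-rsa AAAA... root@backup-server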

#!/bin/sh
# Offsite Backup script
# Written by www.stardothosting.com
# Dynamic backup script

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
currentdate=`date "+%Y-%m-%d_%H_%M_%S"`
backup_email="backups@youremail.com"
backupserver="origin-backup-server.hostname.com"

# Check User Input
if [ "$#" -lt 2 ]
then
        echo -e "nnUsage Syntax :"
        echo -e "./backup.sh [hostname] [folder1] [folder2] [folder3]"
        echo -e "Example : ./backup.sh your-server.com '/etc' '/usr/local/www' '/var/lib/mysql'nn"
        exit 1
fi

# get the server's hostname
host_name=`ssh -l root $1 "hostname"`
echo "Host name : $host_name"
if [ "$host_name" == "" ]
then
        host_name="unknown_$currentdate"
fi

echo "$host_name Offsite Backup Report: " $currentmonth > /var/log/backup.log
echo -e "----------------------------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log

# Ensure permissions are correct
chown -R backups:backups /home/fileserver/backups/
ls -d /home/fileserver/backups/* | grep -Ev "\.ssh|\.bash" | xargs -d "\n" chmod -R 775

# iterate over user arguments & set error level to 0
errors=0
for arg in "${@:2}"
do
        # check if receiving directory exists
        if [ ! -d "/home/fileserver/backups/$host_name" ]
        then
                mkdir /home/fileserver/backups/$host_name
        fi
        # strip any trailing slashes from the folder argument
        sanitize=`echo $arg | sed 's:/*$::'`
        sanitize_dir=`echo $arg | awk -F '/' '{printf "%s", $2}'`
        /usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 "/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir; echo $? > /tmp/bu_rlevel.txt" >> /var/log/backup.log 2>&1
        echo "/usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 "/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir""

        runlevel=`ssh -l root $1 "cat /tmp/bu_rlevel.txt"`
        echo "Runlevel : $runlevel"

        if [ "$runlevel" -ge 1 ]
        then
                errors=$((errors+1))
        else
                echo -e "Script Backup for $arg Completed Successfully!" >> /var/log/backup.log 2>&1
        fi

done


# Check error level
if [ $errors -ge 1 ]
then
        echo -e "There were some errors in the backup job for $host_name, please investigate" >> /var/log/backup.log 2>&1
        cat /var/log/backup.log | mail -s "$host_name Backup Job failed" $backup_email
else
        cat /var/log/backup.log | mail -s "$host_name Backup Job Completed" $backup_email
fi

It should be explained further that this script connects to the destination server as the root user, using ssh key authentication. It then initiates a remote rsync command on the destination server back to the backup server, as a user called "backups". That means that not only does the ssh key need to be installed for root on the destination servers, but a user called "backups" needs to be added on the backup server itself, with the ssh keys of all the destination servers installed for the remote rsync.

Hopefully I did not overcomplicate this, because it really is quite simple :

Backup Server -> root -> destination server to back up -> backups user rsync -> Backup Server
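Provisioning the "backups" user on the backup server would look something like this (a sketch, assuming /home/fileserver/backups as the receiving directory used by the script):

useradd -d /home/fileserver/backups -s /bin/bash backups
mkdir -p /home/fileserver/backups/.ssh
# append each destination server's root public key :
cat destination-root-id_rsa.pub >> /home/fileserver/backups/.ssh/authorized_keys
chown -R backups:backups /home/fileserver/backups/.ssh
chmod 700 /home/fileserver/backups/.ssh
chmod 600 /home/fileserver/backups/.ssh/authorized_keys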

Once you implement the script and do a few dry-run tests, you can set up a scheduled task for each destination server. Here is an example cron entry for a server to be backed up :

01 1 * * * /bin/sh /usr/local/bin/backups.sh destination-server-hostname '/etc' '/usr/local/www' '/home/automysql-backups'

SVN Offsite Backup Script : Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Through our experience with the SVN code repository system, we have developed a quick bash script to dump the entire SVN repository, encrypt it, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3.

Note also that this script keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might be redundant since all revisions are kept within SVN itself, but it provides an additional layer of backup redundancy. The script can easily be modified to back up only a single file every night, overwriting the older copy after each run.
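If you prefer that single-file approach, the dated output filename in the dump line below would simply be replaced with a fixed name, along these lines :

svnadmin dump /subversion/repo_name | /usr/bin/openssl enc -aes-256-cbc -pass pass:YOUR-ENCRYPTION-PASSWORD -e > /subversion/svn_backups/repo_name-backup.enc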

Here’s the script :

#!/bin/sh
# SVN Off Site Backup script
# www.stardothosting.com

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
fivedaysago=`date -v-5d "+%Y-%m-%d"`	# BSD date syntax; not used below
todaysdate=`date "+%Y-%m-%d"`

export AWS_ACCESS_KEY_ID="YOUR-S3-KEY-ID"
export AWS_SECRET_ACCESS_KEY="YOUR-S3-ACCESS-KEY"


echo "SVN Offsite Backup Log: " $currentmonth > /var/log/offsite-backup.log
echo -e "--------------------------------------------" >> /var/log/offsite-backup.log
echo -e "" >> /var/log/offsite-backup.log

# Archive Repository Dump Files and remove files older than 7 days
/usr/bin/find /subversion/svn_backups -type f -mtime +7 -delete

# Backup SVN and encrypt it
svnadmin dump /subversion/repo_name | /usr/bin/openssl enc -aes-256-cbc -pass pass:YOUR-ENCRYPTION-PASSWORD -e > /subversion/svn_backups/repo_name-backup-$todaysdate.enc

#fyi to decrypt :
#openssl aes-256-cbc -d -pass pass:YOUR-ENCRYPTION-PASSWORD -in repo_name-backup.enc -out decrypted.dump

# Transfer the files to Amazon S3 Storage via HTTPS
/usr/local/bin/ruby /usr/local/bin/s3sync/s3sync.rb --ssl -v --delete -r /subversion/svn_backups S3_BACKUPS:svn/svnbackup >> /var/log/offsite-backup.log 2>&1

if [ "$?" -eq 1 ]
then
        echo -e "***SVN OFFSITE BACKUP JOB, THERE WERE ERRORS***" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job failed" your@email.com
        exit 1
else
        echo -e "Script Completed Successfully!" >> /var/log/offsite-backup.log 2>&1
        cat /var/log/offsite-backup.log | mail -s "SVN Offsite Backup Job Completed" your@email.com
        exit 0
fi

Note how I have provided an example, commented out within the script, of how to decrypt the encrypted SVN dump file. You can also modify this script to back up to any offsite location: just remove the s3sync-related entries and replace them with rsync or your preferred transport method.
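For example, the s3sync line could be swapped for a plain rsync over ssh, roughly like this (the remote host and path are placeholders):

/usr/bin/rsync -avz -e ssh --delete /subversion/svn_backups/ backupuser@offsite-host.example.com:/backups/svn/ >> /var/log/offsite-backup.log 2>&1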

I hope this makes your life easier!

Add your Dynamic IPs to Apache HTACCESS files

Hello!

We threw together a quick & simple script to dynamically update your .htaccess files within Apache, adding your dynamic IP address to the allow / deny fields.

If you’re looking to password protect an admin area (for example) but your office only has a dynamic IP address, then this script might be handy for you.

It’s an extremely simple script that polls your dynamic hostname (if you use no-ip.org or dyndns.org, for example) every 15 minutes as a cron job and, if the address has changed, updates the .htaccess file.

Hopefully it will make your life just a little bit easier 🙂

Sample Cron entry :

*/15 * * * * /bin/sh /usr/local/bin/htaccessdynamic.sh yourhostname.dyndns.org /var/www/website.com/public_html/.htaccess > /dev/null 2>&1

And now the script :

#!/bin/bash
# Dynamic IP .htaccess file generator
# Written by Star Dot Hosting
# www.stardothosting.com

dynDomain="$1"
htaccessLoc="$2"

dynIP=$(/usr/bin/dig +short $dynDomain)

echo "dynip: $dynIP"
# verify dynIP resembles an IP
if ! echo -n "$dynIP" | grep -Eq "^[0-9.]+$"; then
    exit 1
fi

# if dynIP has changed
if ! cat $htaccessLoc | /bin/grep -q "$dynIP"; then

        # grab the old IP
        oldIP=`cat /usr/local/bin/htold-ip.txt`

        # output .htaccess file
        echo "order deny,allow" > $htaccessLoc 2>&1
        echo "allow from $dynIP" >> $htaccessLoc 2>&1
        echo "allow from x.x.x.x" >> $htaccessLoc 2>&1
        echo "deny from all" >> $htaccessLoc 2>&1

        # save the new ip to remove next time it changes, overwriting previous old IP
        echo $dynIP > /usr/local/bin/htold-ip.txt
fi
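One note: the script writes Apache 2.2 style access directives. If you are on Apache 2.4 without mod_access_compat, the echo lines would need to emit the newer syntax instead, something like :

echo "Require ip $dynIP" > $htaccessLoc
echo "Require ip x.x.x.x" >> $htaccessLoc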

Migrate from Linux to Xen with Rsync

I decided to write this little guide to provide the relatively simple steps needed to migrate your linux system to a Xen (HVM) virtual instance.

It is assumed that on both your source and destination boxes you have only one root “/” partition. If you partitioned your file system differently, you will have to adjust these instructions accordingly.

The following steps walk you through the process of migrating linux to Xen from start to finish :

1. Install the exact same version of linux on your destination server
This isn’t really 100% necessary, obviously. You could always boot into Finnix, partition your disk and install Grub. If you are uncomfortable doing that, install the distribution from start to finish. The file system will be overwritten anyway.

2. Boot into finnix on the destination system
If you have never used Finnix, it is a “self contained, bootable linux distribution”. I like it a lot, actually, and have used it for similar purposes, rescue operations and the like.

3. Setup networking on both destination and source systems
If both systems are on the same network, you could assign local IP addresses to ensure the process of synchronisation is speedy and unobstructed.

Ensure you configure networking either way and that you set a root password and start ssh :

passwd
/etc/init.d/ssh start
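In Finnix, a static address can be brought up with the usual tools, for example (addresses are placeholders):

ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
route add default gw 192.168.1.1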

4. Mount the partition that you want to copy to on the destination server
Remember, so far everything you are doing has been on the destination server. Create a mount point and mount the destination partition within Finnix :

mkdir -p /mnt/xvdb
mount /dev/xvdb /mnt/xvdb

5. On the source server, rsync all the files of the source partition to the destination partition
When logged into the source server, simply issue the following rsync command and direct it to the destination server’s partition you just mounted :

rsync -aHSKDvz -e ssh / root@12.34.56.78:/mnt/xvdb/

The rsync process will complete, and the partition on the destination server should be ready to boot into. Remember to change the networking configuration if you don’t want any IP conflicts.
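For example, on a Debian-style system you would adjust the copied configuration on the mounted destination partition before rebooting (paths assume the mount point used above):

vi /mnt/xvdb/etc/network/interfaces     # update the address / gateway
vi /mnt/xvdb/etc/hostname               # optionally set a new hostname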

I hope this helps!

Relay Exim mail to google mail in Debian Linux

Sometimes it’s necessary to relay your mail through a third-party provider. If your server environment has a dedicated sendmail server (most do), then this scenario is applicable to you. It is ideal to centralize your outgoing mail on one server so that changes, policies and configuration live in a single place.

In this scenario, outgoing mail is relayed to google’s domain mail in an Exim mail environment. These steps are fairly straightforward and will hopefully help you to utilize google’s free mail service to send your mail.

Note that Google has queuing and mass mail restrictions, so if you plan on sending a lot of mail this way, you will just get blocked.

    Step 1

Run dpkg-reconfigure exim4-config

1. Choose mail sent by smarthost; received via SMTP or fetchmail

2. Type System Mail Name: e.g. company.com

3. Type IP Addresses to listen on for incoming SMTP connections: 127.0.0.1

4. Leave Other destinations for which mail is accepted blank

5. Leave Machines to relay mail for: blank

6. Type Machine handling outgoing mail for this host (smarthost): smtp.gmail.com::587

7. Choose NO, don’t hide local mail name in outgoing mail.

8. Choose NO, don’t keep number of DNS-queries minimal (Dial-on-Demand).

9. Choose mbox

10. Choose NO, split configuration into small files

11. Mail for postmaster: leaving this blank will not cause any problems, though it is not recommended.

    Step 2

1. Open the file /etc/exim4/exim4.conf.template
2. Find the line .ifdef DCconfig_smarthost DCconfig_satellite and add the following in that section

 send_via_gmail:
 driver = manualroute
 domains = ! +local_domains
 transport = gmail_smtp
 route_list = * smtp.gmail.com

If you have any other smarthost defined with “domains = ! +local_domains”, remove that smarthost.

3. Find the “begin authenticators”. In that section add the following

 gmail_login:
 driver = plaintext
 public_name = LOGIN
 client_send = : yourname@gmail.com : YourGmailPassword

Make sure you have no other authenticators with the same public_name (LOGIN). Comment them out if needed (thanks Jakub for reminding me).

4. Find the comment “transport/30_exim4-config_remote_smtp_smarthost”. In that section add

 gmail_smtp:
 driver = smtp
 port = 587
 hosts_require_auth = $host_address
 hosts_require_tls = $host_address

    Step 3

1. Run update-exim4.conf

2. Do /etc/init.d/exim4 restart

That should be it. You can test by using the command line mail client.

Test :

 echo "test" | mail -s "subject" test@email-to-send-to.com

Integrate your custom IPTables script with Linux

How do I integrate my custom iptables script with Red Hat Enterprise Linux?

A custom iptables script is sometimes necessary to work around the limitations of the Red Hat Enterprise Linux firewall configuration tool. The procedure is as follows:

1. Make sure that the default iptables initialization script is not running:

service iptables stop

2. Execute the custom iptables script:

sh [custom iptables script]

3. Save the newly created iptables rules:

service iptables save

4. Restart the iptables service:

service iptables restart

5. Verify that the custom iptables ruleset has taken effect:

service iptables status

6. Enable automatic start up of the iptables service on boot up:

chkconfig iptables on

The custom iptables script should now be integrated into the operating system.
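For reference, a minimal custom iptables script might look something like this (a sketch only; adapt the ports and policies to your environment):

#!/bin/sh
# flush any existing rules
iptables -F
# default policies : drop inbound and forwarded traffic, allow outbound
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# allow loopback and established sessions
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow ssh and http
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT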

Patch Scanning / Information Gathering Script for RedHat / CentOS

With all the patch management solutions, local repositories and other options, it is rarely necessary to manually scan all servers on your network to build a “report” of the patch levels in your environment.

Sometimes it is, however. For instance, if you are brought into an environment that has not been properly managed and require some quick audits to evaluate how much actual work needs to be done bringing all the patch levels up to standard, then there are ways to produce these reports with simple bash scripting.

I have developed such a script for these situations: quick reporting is sometimes necessary even while you are evaluating a large commercial patch management solution. It can even run alongside such a solution, for independent reporting perhaps.

This script works well distributed to each server and run via ssh key-based authentication for centralized reporting. Alternatively, you could modify it to run each command over SSH across the network and gather the information that way. Distributing the script to each server is probably more ideal, since only one ssh command is then executed per server.
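A minimal wrapper for the centralized approach could look like this (a sketch, assuming a hosts.txt with one hostname per line and the audit script installed as /usr/local/bin/patchscan.sh):

#!/bin/sh
# push the audit script to each server, run it, and collect one CSV row per host
for host in `cat hosts.txt`
do
        scp /usr/local/bin/patchscan.sh root@$host:/tmp/patchscan.sh
        ssh root@$host "sh /tmp/patchscan.sh" >> /tmp/patch-report.csv
done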

Find the script below; note that it only works with RedHat / CentOS systems. Obviously, if you are paying for Red Hat enterprise support you are already using Satellite; if you are using CentOS then this script may be useful for you.

Enjoy!

#!/bin/sh

# Basic Information Gathering
# Star Dot Hosting
# https://www.stackstar.com

HOSTNAME=`hostname`
UNAME=`uname -a | awk '{print $3}'`

# Begin Package Scanning


# SSH

SSHON="0"
SSHRUN="NULL"
SSHRPM="NULL"
SSHMATCH="NULL"


if [ -f /usr/sbin/sshd ]
then
        SSHON="1"
	SSHMATCH="0"
        SSHRUN=`ssh -V 2>&1 | awk 'BEGIN { FS = "_" } ; { print $2 }' | awk '{print $1}' | cut -b 1-5`
	TESTRPM=`rpm -qa openssh`
	if [ -n "$TESTRPM" ]
	then
	        SSHRPM=`rpm -qa openssh | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$SSHRUN" == "$SSHRPM" ]
        then
                SSHMATCH="1"
        fi

fi

# Apache

HTTPDON="0"
HTTPDRUN="NULL"
HTTPDRPM="NULL"
HTTPDMATCH="NULL"


if [ -f /usr/sbin/httpd ]
then
        HTTPDON="1"
	HTTPDMATCH="0"
        HTTPDRUN=`httpd -v | grep version | awk 'BEGIN {FS="/"};{print $2}' | awk '{print $1}'`
	TESTRPM=`rpm -qa httpd`
	if [ -n "$TESTRPM" ]
	then
        	HTTPDRPM=`rpm -qa httpd | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$HTTPDRUN" == "$HTTPDRPM" ]
        then
                HTTPDMATCH="1"
        fi
fi

# MySQL

MYSQLON="0"
MYSQLRUN="NULL"
MYSQLRPM="NULL"
MYSQLMATCH="NULL"


if [ -f /usr/bin/mysql ]
then
        MYSQLON="1"
	MYSQLMATCH="0"
        MYSQLRUN=`mysql -V | awk '{print $5}' | cut -b 1-6`
	TESTRPM=`rpm -qa mysql`
	if [ -n "$TESTRPM" ]
	then
        	MYSQLRPM=`rpm -qa mysql | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$MYSQLRUN" == "$MYSQLRPM" ]
        then
                MYSQLMATCH="1"
        fi
fi

# PHP

PHPON="0"
PHPRUN="NULL"
PHPRPM="NULL"
PHPMATCH="NULL"


if [ -f /usr/bin/php ]
then
        PHPON="1"
	PHPMATCH="0"
        PHPRUN=`php -v | grep built | awk '{print $2 }'`
	TESTRPM=`rpm -qa php`
	if [ "$TESTRPM" <> 0  ]
	then
        	PHPRPM=`rpm -qa php | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PHPRUN" == "$PHPRPM" ]
        then
                PHPMATCH="1"
        fi
fi

# Exim
# Needs to be tested on RH box

EXIMON="0"
EXIMRUN="NULL"
EXIMRPM="NULL"
EXIMMATCH="NULL"


if [ -f /usr/sbin/exim ]
then
        EXIMON="1"
	EXIMMATCH="0"
        EXIMRUN=`exim -bV | grep version | awk '{print $3}'`
	TESTRPM=`rpm -qa exim`
	if [ "$TESTRPM" <> 0  ]
	then
        	EXIMRPM=`rpm -qa exim | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$EXIMRUN" == "$EXIMRPM" ]
        then
                EXIMMATCH="1"
        fi
fi

# OpenSSL

OSSLON="0"
OSSLRUN="NULL"
OSSLRPM="NULL"
OSSLMATCH="NULL"


if [ -f /usr/bin/openssl ]
then
        OSSLON="1"
	OSSLMATCH="0"
        OSSLRUN=`openssl version | awk '{print $2}'`
	TESTRPM=`rpm -qa openssl`
	if [ "$TESTRPM" <> 0  ]
	then
        	OSSLRPM=`rpm -qa openssl | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$OSSLRUN" == "$OSSLRPM" ]
        then
                OSSLMATCH="1"
        fi
fi

# PERL

PERLON="0"
PERLRUN="NULL"
PERLRPM="NULL"
PERLMATCH="NULL"


if [ -f /usr/bin/perl ]
then
        PERLON="1"
	PERLMATCH="0"
        PERLRUN=`perl -v | grep built | awk '{print $4}' | awk 'BEGIN { FS = "v" } ; { print $2 }'`
	TESTRPM=`rpm -qa perl`
	if [ "$TESTRPM" <> 0  ]
	then
        	PERLRPM=`rpm -qa perl | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PERLRUN" == "$PERLRPM" ]
        then
                PERLMATCH="1"
        fi
fi


# PYTHON

PYON="0"
PYRUN="NULL"
PYRPM="NULL"
PYMATCH="NULL"


if [ -f /usr/bin/python ]
then
        PYON="1"
	PYMATCH="0"
        PYRUN=`python -V 2>&1 | awk '{print $2}'`
	TESTRPM=`rpm -qa python`
	if [ "$TESTRPM" <> 0  ]
	then
        	PYRPM=`rpm -qa python | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$PYRUN" == "$PYRPM" ]
        then
                PYMATCH="1"
        fi
fi

# GPG

GPGON="0"
GPGRUN="NULL"
GPGRPM="NULL"
GPGMATCH="NULL"


if [ -f /usr/bin/gpg ]
then
        GPGON="1"
	GPGMATCH="0"
        GPGRUN=`gpg --version | grep gpg | awk '{print $3}'`
	TESTRPM=`rpm -qa gnupg`
	if [ "$TESTRPM" <> 0  ]
	then
        	GPGRPM=`rpm -qa gnupg | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$GPGRUN" == "$GPGRPM" ]
        then
                GPGMATCH="1"
        fi
fi

# RPM

RPMON="0"
RPMRUN="NULL"
RPMRPM="NULL"
RPMMATCH="NULL"


if [ -f /bin/rpm ]
then
        RPMON="1"
	RPMMATCH="0"
        RPMRUN=`rpm --version | awk '{print $3}'`
	TESTRPM=`rpm -qa rpm`
	if [ "$TESTRPM" <> 0  ]
	then
        	RPMRPM=`rpm -qa rpm | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$RPMRUN" == "$RPMRPM" ]
        then
                RPMMATCH="1"
        fi
fi

# SENDMAIL

SENDON="0"
SENDRUN="NULL"
SENDRPM="NULL"
SENDMATCH="NULL"


if [ -f /usr/sbin/sendmail ]
then
        SENDON="1"
        SENDMATCH="0"
        SENDRUN=`echo 'quit' | nc localhost 25 | grep Sendmail | awk '{print $5}' | awk 'BEGIN { FS = "/" } ; { print $1 }'`
	TESTRPM=`rpm -qa sendmail`
	if [ "$TESTRPM" <> 0  ]
	then
	        SENDRPM=`rpm -qa sendmail | awk 'BEGIN { FS = "-" } ; { print $2 }'`
	fi
        if [ "$SENDRUN" == "$SENDRPM" ]
        then
                SENDMATCH="1"
        fi
fi

### Non running packages

# bind-libs

BINDLIB="NULL"
TESTRPM=`rpm -qa bind-libs`
if [ "$TESTRPM" <> 0  ]
then
	BINDLIB=`rpm -qa bind-libs | awk 'BEGIN { FS = "-" } ; { print $3 }'`
fi


# bind-utils

BINDUTIL="NULL"
TESTRPM=`rpm -qa bind-utils`
if [ "$TESTRPM" <> 0  ]
then
	BINDUTIL=`rpm -qa bind-utils | awk 'BEGIN { FS = "-" } ; { print $3 }'`
fi

# coreutils

COREUTIL="NULL"
TESTRPM=`rpm -qa coreutils`
if [ "$TESTRPM" <> 0  ]
then
	COREUTIL=`rpm -qa coreutils | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# chkconfig

CHKCONFIG="NULL"
TESTRPM=`rpm -qa chkconfig`
if [ "$TESTRPM" <> 0  ]
then
	CHKCONFIG=`rpm -qa chkconfig | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# initscripts

INITSCR="NULL"
TESTRPM=`rpm -qa initscripts`
if [ "$TESTRPM" <> 0  ]
then
	INITSCR=`rpm -qa initscripts | awk 'BEGIN { FS = "-" } ; { print $2 }'`
fi

# redhat-release

RHRELEASE="NULL"
TESTRPM=`rpm -qa redhat-release`
if [ "$TESTRPM" <> 0  ]
then
	RHRELEASE=`rpm -qa redhat-release | awk 'BEGIN { FS = "-" } ; { print $3"-"$4 }'`
fi



echo $HOSTNAME,$UNAME,$SSHMATCH,$HTTPDMATCH,$MYSQLMATCH,$PHPMATCH,$EXIMMATCH,$OSSLMATCH,$PYMATCH,$PERLMATCH,$GPGMATCH,\
$RPMMATCH,$SENDMATCH,$BINDLIB,$BINDUTIL,$COREUTIL,$CHKCONFIG,$INITSCR,$RHRELEASE,$SSHON,$SSHRUN,$SSHRPM,$HTTPDON,$HTTPDRUN,\
$HTTPDRPM,$MYSQLON,$MYSQLRUN,$MYSQLRPM,$PHPON,$PHPRUN,$PHPRPM,$EXIMON,$EXIMRUN,$EXIMRPM,$OSSLON,$OSSLRUN,$OSSLRPM,$PERLON,\
$PERLRUN,$PERLRPM,$PYON,$PYRUN,$PYRPM,$GPGON,$GPGRUN,$GPGRPM,$RPMON,$RPMRUN,$RPMRPM,$SENDON,$SENDRUN,$SENDRPM

Note that you can modify the echo output to produce whatever output you need in order to present it in a nice human readable report.
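For instance, a few of the CSV fields could be reshaped into labelled lines (the variable names follow the script above):

echo "Host   : $HOSTNAME ($UNAME)"
echo "SSH    : installed=$SSHON running=$SSHRUN rpm=$SSHRPM match=$SSHMATCH"
echo "Apache : installed=$HTTPDON running=$HTTPDRUN rpm=$HTTPDRPM match=$HTTPDMATCH"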

Quick tips using FIND, SSH, TAR, PS and GREP Commands

Administering hundreds of systems can be tedious. Sometimes scripting repetitive tasks, or replicating tasks across many servers is necessary.

Over time, I’ve jotted down several quick useful notes regarding using various linux/unix commands. I’ve found them very useful when navigating and performing various tasks. I decided to share them with you, so hopefully you will find them a useful reference at the very least!

To find files within a time range and add up the total size of all those files :

find /opt/uploads -mtime -365 -printf "%s\n"|awk '{sum+=$0}END{print sum}'

To watch a command’s progress :

watch -n1 'du -h -c --max-depth=1'

Transfer files / folders, compressing them mid-stream over the network and uncompressing them on the receiving end:

ssh -l root 00.00.000.000 'tar -czf - -C /opt/uploads .' | tar -xzf -

The following will return the PID of any XYZ process that is older than 10 hours:

ps -ef | grep XYZ | awk '{ print $7 ":" $2 }' | awk 'BEGIN { FS =":" }; {if ($1 > 10) print $4}'

Check web logs on www server for specific ip address access:

grep "ip_address" [a-h]*/logs/*access*4		<-- check a-h websites
grep "ip_address" [A-Z,0-9]*/logs/*access*4	<-- check A-Z, 0-9 websites

Those are just a few of the useful commands that can be applied to many different functions. I particularly like sending files across the network and compressing them mid-stream :)

The above kind of administration is made even easier when you employ ssh key based authentication -- your commands can be scripted to execute across many servers in one sitting (just be careful) ;)

Software RAID in Linux

Several occasions have arisen where a client requested software raid-1 between two IDE drives in their server.

Obviously the servers in question had no hardware RAID capability, and trading some disk I/O read/write performance for increased redundancy was acceptable.

Below is a simple tutorial for setting up software raid in Linux, using MDADM. The specific examples in these instructions are with Debian, but can be applied to any linux distribution.

Linux Software RAID-1

1. Verify that you are working with two identical hard drives:

      # cat /proc/ide/hd?/model
      ST320413A
      ST320413A

2. Copy the partition table over to the second drive:

      dd if=/dev/hda of=/dev/hdc bs=1024k count=10

Edit the partition table of the second drive so that all of the partitions, except #3, have type ‘fd’.

      # fdisk -l /dev/hda /dev/hdc

      Disk /dev/hda (Sun disk label): 16 heads, 63 sectors, 38792 cylinders
      Units = cylinders of 1008 * 512 bytes

         Device Flag    Start       End    Blocks   Id  System
      /dev/hda1             0       248    124992   83  Linux native
      /dev/hda2           248      2232    999936   83  Linux native
      /dev/hda3             0     38792  19551168    5  Whole disk
      /dev/hda4          2232     10169   4000248   83  Linux native
      /dev/hda5         10169     12153    999936   83  Linux native
      /dev/hda6         12153     18105   2999808   82  Linux swap
      /dev/hda7         18105     28026   5000184   83  Linux native
      /dev/hda8         28026     38792   5426064   83  Linux native

      Disk /dev/hdc (Sun disk label): 16 heads, 63 sectors, 38792 cylinders
      Units = cylinders of 1008 * 512 bytes

         Device Flag    Start       End    Blocks   Id  System
      /dev/hdc1             0       248    124992   fd  Linux raid autodetect
      /dev/hdc2           248      2232    999936   fd  Linux raid autodetect
      /dev/hdc3             0     38792  19551168    5  Whole disk
      /dev/hdc4          2232     10169   4000248   fd  Linux raid autodetect
      /dev/hdc5         10169     12153    999936   fd  Linux raid autodetect
      /dev/hdc6         12153     18105   2999808   fd  Linux raid autodetect
      /dev/hdc7         18105     28026   5000184   fd  Linux raid autodetect
      /dev/hdc8         28026     38792   5426064   fd  Linux raid autodetect

3. Install mdadm to manage the arrays.

      apt-get install mdadm

It’ll ask you a series of questions that are highly dependent on your needs. One key one is: “Yes, automatically start RAID arrays”

4. Load the RAID1 module:

      modprobe raid1

5. Create the RAID1 volumes. Note that we’re setting one half of each mirror as “missing” here. We’ll add the second half (the /dev/hda partitions) later, because the running system is using them right now.

      mdadm --create -n 2 -l 1 /dev/md0 /dev/hdc1 missing
      mdadm --create -n 2 -l 1 /dev/md1 /dev/hdc2 missing
      mdadm --create -n 2 -l 1 /dev/md2 /dev/hdc4 missing
      mdadm --create -n 2 -l 1 /dev/md3 /dev/hdc5 missing
      mdadm --create -n 2 -l 1 /dev/md4 /dev/hdc6 missing
      mdadm --create -n 2 -l 1 /dev/md5 /dev/hdc7 missing
      mdadm --create -n 2 -l 1 /dev/md6 /dev/hdc8 missing

6. Make the filesystems:

      mke2fs -j /dev/md0
      mke2fs -j /dev/md1
      mke2fs -j /dev/md2
      mke2fs -j /dev/md3
      mkswap /dev/md4
      mke2fs -j /dev/md5
      mke2fs -j /dev/md6

7. Install the dump package:

      apt-get install dump

8. Mount the new volumes, dump & restore from the running copies:

      mount /dev/md1 /mnt
      cd /mnt
      dump 0f - / | restore rf -
      rm restoresymtable

      mount /dev/md0 /mnt/boot
      cd /mnt/boot
      dump 0f - /boot | restore rf -
      rm restoresymtable

      mount /dev/md2 /mnt/usr
      cd /mnt/usr
      dump 0f - /usr | restore rf -
      rm restoresymtable

      mount /dev/md3 /mnt/tmp
      cd /mnt/tmp
      dump 0f - /tmp | restore rf -
      rm restoresymtable

      mount /dev/md5 /mnt/var
      cd /mnt/var
      dump 0f - /var | restore rf -
      rm restoresymtable

      mount /dev/md6 /mnt/export
      cd /mnt/export
      dump 0f - /export | restore rf -
      rm restoresymtable

9. Set up the chroot environment:

      mount -t proc none /mnt/proc

      chroot /mnt /bin/bash

10. Edit /boot/silo.conf, and change the following line:

      root=/dev/md1

11. Edit /etc/fstab, and point the mounts at the MD devices:

      # /etc/fstab: static file system information.
      #
      # <file system> <mount point>   <type>  <options>               <dump>  <pass>
      proc            /proc           proc    defaults        0       0
      /dev/md1        /               ext3    defaults,errors=remount-ro 0       1
      /dev/md0        /boot           ext3    defaults        0       2
      /dev/md6        /export         ext3    defaults        0       2
      /dev/md3        /tmp            ext3    defaults        0       2
      /dev/md2        /usr            ext3    defaults        0       2
      /dev/md5        /var            ext3    defaults        0       2
      /dev/md4        none            swap    sw              0       0
      /dev/hdc        /media/cdrom0   iso9660 ro,user,noauto  0       0
      /dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

12. Save the MD information to /etc/mdadm/mdadm.conf:

      echo DEVICE partitions >> /etc/mdadm/mdadm.conf

      mdadm -D -s >> /etc/mdadm/mdadm.conf

13. Rebuild the initrd (to add the RAID modules, and boot/root RAID startup information):

     mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r`

14. Leave the chroot environment:

      exit

15. Unmount /boot. klogd uses the System.map file, and we need to kill it to unmount /boot.

      pkill klogd
      # wait a few seconds
      umount /boot

16. Add /dev/hda1 to /dev/md0 — the /boot mirror

      mdadm --add /dev/md0 /dev/hda1
      watch cat /proc/mdstat

Wait until the mirror is complete. CTRL-C to exit watch.

17. Mount the mirrored /boot:

      umount /mnt/boot
      mount /dev/md0 /boot

18. Stamp the boot loader onto both disks, and reboot:

      silo -C /boot/silo.conf && reboot

19. Assuming it booted up correctly, verify that we’re running on the mirrored copies:

      df -h

If so, add the remaining partitions into their respective mirrors (/dev/hda1 was already added to /dev/md0 in step 16):

      mdadm --add /dev/md1 /dev/hda2
      mdadm --add /dev/md2 /dev/hda4
      mdadm --add /dev/md3 /dev/hda5
      mdadm --add /dev/md4 /dev/hda6
      mdadm --add /dev/md5 /dev/hda7
      mdadm --add /dev/md6 /dev/hda8
      watch cat /proc/mdstat

And wait until the mirrors are done building.

20. Edit /etc/mdadm/mdadm.conf and remove any references to the RAID volumes. Refresh the mdadm.conf information:

      mdadm -D -s >> /etc/mdadm/mdadm.conf

21. Rebuild the initrd one more time. The previous time only included one half of each mirror for root and swap.

      mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r`

22. Reboot one more time for good measure. You now have software RAID1.

Testing the Software Raid & simulating a drive failure

Newer versions of raidtools come with a raidsetfaulty command; mdadm provides the equivalent --set-faulty option. Either lets you simulate a drive failure without unplugging anything.

Just run the command

mdadm --manage --set-faulty /dev/md1 /dev/sdc2

First, you should see something like the first line below in your system log. Something like the second line will appear if you have spare disks configured.

kernel: raid1: Disk failure on sdc2, disabling device. 
kernel: md1: resyncing spare disk sdb7 to replace failed disk

Checking /proc/mdstat will show the degraded array. If a spare disk was available, reconstruction should have started.

Try with :

mdadm --detail /dev/md1

Now you’ve seen how it goes when a device fails. Let’s fix things up.

First, we will remove the failed disk from the array. Run the command

mdadm /dev/md1 -r /dev/sdc2

Now we have a /dev/md1 which has just lost a device. This could be a degraded RAID or perhaps a system in the middle of a reconstruction process. We wait until recovery ends before setting things back to normal.

Now we re-add /dev/sdc2 to the array:

mdadm /dev/md1 -a /dev/sdc2

As the disk returns to the array, we’ll see it become an active member of /dev/md1 if necessary. If not, it will be marked as a spare disk.

Checking for errors and alerting

E-mail alerting of errors with mdadm can be accomplished in two ways:

1. Using a command line directly

2. Using the /etc/mdadm.conf file to specify an e-mail address

NOTE: e-mails are only sent when the following events occur:

Fail, FailSpare, DegradedArray, and TestMessage

Specifying an e-mail address using the mdadm command line

Using the command line simply involves including the e-mail address in the command. The following explains the mdadm command and how to set it up so that it will load every time the system is started.

mdadm --monitor --scan --daemonize --mail=jdoe@somemail.com

The command could be put in /etc/init.d/boot.local so that it is loaded every time the system starts.

You can verify that mdadm is running by typing the following in a terminal window:

ps aux | grep mdadm
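For the second method, the address goes into /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian-style systems) as a MAILADDR line, which mdadm’s monitor mode picks up automatically:

MAILADDR jdoe@somemail.com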
