Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating the process onto one server is fairly obvious: making changes is much easier when you only have one script to edit.

This approach is ideal for environments with roughly 15-20 servers or fewer. For anything larger, I’d recommend a complete end-to-end backup solution such as Bacula.

The bash shell script pasted below is very straightforward and takes two kinds of arguments. The first is the hostname or IP address of the destination server you are backing up. The remaining (and potentially unlimited) arguments are single-quote-enclosed folders that you want backed up.

This script depends on the server it runs on having SSH key based authentication enabled and implemented for the root user. Security can be tightened with IP based restrictions in the SSH configuration, the firewall configuration, or elsewhere.

To explain further: the script connects to the destination server as the root user, using SSH key authentication. It then initiates a remote rsync command on the destination server back to the backup server as a user called “backupuser”. That means that not only does the SSH key need to be installed for root on the destination servers, but a user called “backupuser” needs to exist on the backup server itself, with the SSH keys of all the destination servers installed for the remote rsync.

Hopefully I did not overcomplicate this, because it really is quite simple:

Backup Server -> root -> destination server to backup -> backupuser rsync -> Backup Server
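
For reference, here is a minimal sketch of what such a script can look like. The variable names, paths and the backup server hostname are all assumptions; adjust them to your environment.

    #!/bin/bash
    # centralized-backup.sh -- minimal sketch (hostnames/paths are examples)
    # Usage: centralized-backup.sh <destination-host> '<folder1>' ['<folder2>' ...]

    BACKUPSERVER="backup.example.com"   # this backup server (assumption)
    BACKUPUSER="backupuser"             # local user that receives the rsync
    BACKUPROOT="/backups"               # assumes /backups/<host> exists and is writable by backupuser

    DESTHOST="$1"
    shift

    for FOLDER in "$@"; do
        # Connect to the server being backed up as root, then rsync the folder
        # from there back to this backup server as backupuser.
        ssh root@"$DESTHOST" \
            "rsync -avz --delete '$FOLDER' ${BACKUPUSER}@${BACKUPSERVER}:${BACKUPROOT}/${DESTHOST}/"
    done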

Once you have implemented the script and done a few dry-run tests, you can set up a scheduled task for each destination server. Here is an example of one cron entry for a server to be backed up:
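
A sample entry (the script path, hostname and folders are examples) could look like this:

    # Back up /etc and /var/www from web01 every night at 02:30
    30 2 * * * /usr/local/bin/centralized-backup.sh web01.example.com '/etc' '/var/www' > /dev/null 2>&1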

SVN Offsite Backup Script : Secure offsite backup solution for SVN to Amazon S3

Hi there!

Backing up your code repository is important. Backing up your code repository to an off-site location in a secure manner is imperative. Throughout our travels and experience utilizing the SVN code repository system, we have developed a quick bash script to export the entire SVN repository, encrypt it, compress it into an archive, and then ship it (over an encrypted network connection) to Amazon S3 storage.

We will be using the (familiar) s3sync Ruby script to do the actual transport to Amazon S3, which you can find here.

Note also that this script keeps a local copy of the backups, taken each day, with a maximum of 7 days of retention. This might be redundant since all revisions are kept within SVN itself, but I thought it would provide an additional layer of backup redundancy. The script can easily be modified to only back up a single file every night, overwriting the older copy after every backup.

Here’s the script :
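
The original script is not reproduced here, but the following sketch illustrates the approach described above. The repository path, passphrase file, s3sync install location and bucket name are all assumptions, and s3sync expects your AWS credentials to be configured for it separately.

    #!/bin/bash
    # svn-offsite-backup.sh -- sketch: dump, encrypt, compress, ship to S3, keep 7 days locally

    SVNREPO="/var/svn/myrepo"                  # SVN repository path (assumption)
    BACKUPDIR="/backups/svn"                   # local retention directory (assumption)
    S3BUCKET="mybucket:svn-backups"            # s3sync destination bucket:prefix (assumption)
    PASSFILE="/root/.svn-backup-passphrase"    # file containing the encryption passphrase
    DATE=$(date +%Y-%m-%d)

    # Dump the entire repository, compress it and encrypt it in one pass.
    svnadmin dump "$SVNREPO" | gzip | \
        openssl enc -aes-256-cbc -salt -pass file:"$PASSFILE" \
        -out "$BACKUPDIR/svn-dump-$DATE.gz.enc"

    # Decrypting the dump later (commented out, as mentioned below):
    # openssl enc -d -aes-256-cbc -pass file:"$PASSFILE" \
    #     -in svn-dump-YYYY-MM-DD.gz.enc | gunzip > svn-dump-YYYY-MM-DD

    # Ship the encrypted archives to Amazon S3 with s3sync.
    /usr/local/s3sync/s3sync.rb -r "$BACKUPDIR/" "$S3BUCKET/"

    # Keep a maximum of 7 days of local copies.
    find "$BACKUPDIR" -name 'svn-dump-*.gz.enc' -mtime +7 -delete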

Note how I have provided an example, commented out within the script, of how you can go about decrypting the encrypted SVN dump file. You can also modify this script to back up to any offsite location, obviously. Just remove the s3sync related entries and replace them with rsync or your preferred transport method.

I hope this makes your life easier!

Add your Dynamic IPs to Apache HTACCESS files

Hello!

We threw together a quick and simple script to dynamically update your .htaccess files within Apache, adding your dynamic IP address to the allow/deny directives.

If you’re looking to password protect an admin area (for example) but your office only has a dynamic IP address, then this script might be handy for you.

It’s an extremely simple script that polls your dynamic hostname (if you use no-ip.org or dyndns.org, for example) every 15 minutes as a cron job and, if it has changed, updates the .htaccess file.

Hopefully it will make your life just a little bit easier 🙂

Sample Cron entry :
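
For example (the script path is an assumption):

    # Poll the dynamic hostname every 15 minutes
    */15 * * * * /usr/local/bin/update-htaccess.sh > /dev/null 2>&1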

And now the script :
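
Below is a minimal sketch of the idea. The dynamic hostname, the .htaccess path and the assumption that the file already contains an "Allow from" line are all examples; adapt them to your setup.

    #!/bin/bash
    # update-htaccess.sh -- sketch: update "Allow from" in .htaccess when the dynamic IP changes

    DYNHOST="myoffice.no-ip.org"            # your dynamic DNS hostname (assumption)
    HTACCESS="/var/www/admin/.htaccess"     # .htaccess file to update (assumption)
    CACHE="/var/tmp/dynip.last"             # last IP seen

    # Resolve the current IP of the dynamic hostname.
    CURRENT_IP=$(dig +short "$DYNHOST" | tail -n1)
    [ -z "$CURRENT_IP" ] && exit 1

    LAST_IP=$(cat "$CACHE" 2>/dev/null)

    # Only rewrite the .htaccess file when the address has actually changed.
    if [ "$CURRENT_IP" != "$LAST_IP" ]; then
        sed -i "s/^Allow from .*/Allow from $CURRENT_IP/" "$HTACCESS"
        echo "$CURRENT_IP" > "$CACHE"
    fi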

Migrate from Linux to Xen with Rsync

I decided to write this little guide to provide the relatively simple steps needed to migrate your Linux system to a Xen (HVM) virtual instance.

It is assumed that, on your source and destination boxes, you only have one root “/” partition. If you partitioned your file system differently, you will have to adjust these instructions accordingly.

The following steps walk you through the process of migrating linux to Xen from start to finish :

1. Install the exact same version of linux on your destination server
This isn’t really 100% necessary, obviously. You could always boot into Finnix, partition your disk and install GRUB. If you are uncomfortable doing that, install the distribution from start to finish. The file system will be overwritten anyway.

2. Boot into finnix on the destination system
If you have never used Finnix, it is a “self contained, bootable linux distribution”. I like it a lot, actually, and have used it for similar purposes, rescue operations and the like.

3. Setup networking on both destination and source systems
If both systems are on the same network, you could assign local IP addresses to ensure the process of synchronisation is speedy and unobstructed.

Either way, ensure you configure networking, set a root password and start ssh:
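
For example, inside Finnix on the destination system (the addresses are examples; adjust the interface and commands to your Finnix release):

    ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
    passwd                  # set a root password for the ssh/rsync session
    /etc/init.d/ssh start   # start the ssh daemon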

4. Mount the partition that you want to copy to on the destination server
Remember, so far everything you are doing has been on the destination server. Mount the destination partition within finnix :
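
For example (the device name depends on how the virtual disk is presented; adjust as needed):

    mkdir /mnt/target
    mount /dev/sda1 /mnt/target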

5. On the source server, rsync all the files of the source partition to the destination partition
When logged into the source server, simply issue the following rsync command and direct it to the destination server’s partition you just mounted :
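
A command along these lines works (the IP address and mount point follow the examples above; the excludes are illustrative):

    rsync -avz --numeric-ids --exclude='/proc/*' --exclude='/sys/*' --exclude='/tmp/*' \
        / root@192.168.1.50:/mnt/target/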

The rsync process will complete and the partition on the destination server should be ready to boot into. Remember to change the networking configuration if you don’t want any IP conflicts.

I hope this helps!

Relay Exim mail to google mail in Debian Linux

Sometimes it’s necessary to relay your mail through a third party provider. If your server environment has a dedicated sendmail server (most do), then this scenario is applicable to you. It is ideal to centralize your outgoing mail on one server so that changes, policies and configuration are located in a single place.

In this scenario, outgoing mail is relayed to Google’s domain mail in an Exim mail environment. These steps are fairly straightforward and will hopefully help you utilize Google’s free mail service to send your mail.

Note that Google has queuing and mass mail restrictions, so if you plan on sending a lot of mail this way, you will simply get blocked.

    Step 1

Run dpkg-reconfigure exim4-config

1. Choose mail sent by smarthost; received via SMTP or fetchmail

2. Type System Mail Name: e.g. company.com

3. Type IP Addresses to listen on for incoming SMTP connections: 127.0.0.1

4. Leave Other destinations for which mail is accepted blank

5. Leave Machines to relay mail for: blank

6. Type Machine handling outgoing mail for this host (smarthost): smtp.gmail.com::587

7. Choose NO, don’t hide local mail name in outgoing mail.

8. Choose NO, don’t keep number of DNS-queries minimal (Dial-on-Demand).

9. Choose mbox

10. Choose NO, split configuration into small files

11. Mail for postmaster: leaving this blank will not cause any problems, though it is not recommended.

    Step 2

1. Open the file /etc/exim4/exim4.conf.template
2. Find the line .ifdef DCconfig_smarthost DCconfig_satellite and add the following in that section
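
The exact router from the original post is not reproduced here, but a commonly used smarthost router for Gmail looks something like this (the router name is illustrative; remote_smtp_smarthost is the transport already defined in the Debian template):

    send_via_gmail:
      driver = manualroute
      domains = ! +local_domains
      transport = remote_smtp_smarthost
      route_list = * smtp.gmail.com::587 byname
      no_more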

If you have any other smarthost defined with “domains = ! +local_domains”, remove that smarthost.

3. Find the “begin authenticators”. In that section add the following
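
A typical authenticator looks like the following (the account name and password are placeholders):

    gmail_login:
      driver = plaintext
      public_name = LOGIN
      client_send = : youraccount@gmail.com : yourpassword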

Make sure you have no other authenticators with the same public_name (LOGIN). Comment them out if needed (Thanks Jakub for reminding me)

4. Find the comment “transport/30_exim4-config_remote_smtp_smarthost”. In that section add
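
The exact lines from the original post are not shown here; a common approach is to require authentication and TLS towards the smarthost, for example:

      hosts_require_auth = *
      hosts_require_tls = *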

    Step 3

1. Run update-exim4.conf

2. Do /etc/init.d/exim4 restart

That should be it. You can test by using the command line mail client.

Test :
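
For example, something like this (the recipient address is a placeholder):

    echo "Exim relay test" | mail -s "Exim smarthost test" you@example.com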

Integrate your custom IPTables script with Linux

How do I integrate my custom iptables script with Red Hat Enterprise Linux?

A custom iptables script is sometimes necessary to work around the limitations of the Red Hat Enterprise Linux firewall configuration tool. The procedure is as follows:

1. Make sure that the default iptables initialization script is not running:
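
For example, on RHEL:

    service iptables stop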

2. Execute the custom iptables script:
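
For example (the script path is a placeholder for wherever your custom script lives):

    /root/custom-iptables.sh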

3. Save the newly created iptables rules:
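
For example:

    service iptables save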

4. Restart the iptables service:
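
For example:

    service iptables restart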

5. Verify that the custom iptables ruleset has taken effect:
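
For example:

    iptables -L -n -v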

6. Enable automatic start up of the iptables service on boot up:
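
For example:

    chkconfig iptables on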

The custom iptables script should now be integrated into the operating system.

Patch Scanning / Information Gathering Script for RedHat / CentOS

With all the patch management solutions, local repositories and other options, it is rarely necessary to manually scan all servers on your network to build a “report” of the patch levels in your environment.

Sometimes it is, however. For instance, if you are brought into an environment that has not been properly managed and need some quick audits to evaluate how much work is required to bring all the patch levels up to standard, simple bash scripting can produce these reports.

I have developed such a script for similar situations; quick reporting is sometimes necessary even while you are evaluating a large commercial patch management solution. It can even be run alongside such a solution, for independent reporting perhaps.

This script works well when distributed to each server and run via SSH key based authentication for centralized reporting. Alternatively, you could modify it to perform each command over SSH across the network and gather the information that way. It is probably more ideal to centrally distribute the script to each server so only one ssh command is executed per server.

Find the script below. Note that it only works with RedHat / CentOS systems. Obviously, if you are paying for Red Hat enterprise support you are already using Satellite; if you are using CentOS then this script may be useful for you.
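
The original script is not reproduced here, but a minimal sketch along the same lines might look like this (the checks and output format are illustrative):

    #!/bin/bash
    # patch-report.sh -- sketch of a quick patch-level report for RedHat/CentOS

    echo "== Patch report for $(hostname) =="
    echo "OS release : $(cat /etc/redhat-release)"
    echo "Kernel     : $(uname -r)"
    echo "Installed packages: $(rpm -qa | wc -l)"

    echo "Recent yum update activity:"
    grep -i 'updated' /var/log/yum.log 2>/dev/null | tail -n 5

    echo "Packages with pending updates:"
    yum -q check-update 2>/dev/null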

Enjoy!

Note that you can modify the echo output to produce whatever output you need in order to present it in a nice human readable report.

Quick tips using FIND , SSH, TAR , PS and GREP Commands

Administering hundreds of systems can be tedious. Sometimes scripting repetitive tasks, or replicating tasks across many servers is necessary.

Over time, I’ve jotted down several quick useful notes regarding using various linux/unix commands. I’ve found them very useful when navigating and performing various tasks. I decided to share them with you, so hopefully you will find them a useful reference at the very least!

To find files within a time range and add up the total size of all those files :
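
For example (the path and time range are examples; uses GNU find):

    find /var/log -type f -mtime +7 -mtime -14 -printf '%s\n' | \
        awk '{ total += $1 } END { printf "%.1f MB\n", total/1024/1024 }'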

To watch a command’s progress :
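
For example, re-running a command every 2 seconds:

    watch -n 2 'ls -lh /backups'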

Transfer a file or folders, compress it mid-stream over the network, and uncompress it on the receiving end:
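
For example (the host and paths are placeholders):

    tar czf - /path/to/folder | ssh user@remotehost 'tar xzf - -C /destination/path'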

The command below will return any XYZ PID that is older than 10 hours.
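
A sketch of one way to do it (XYZ is the process name; the etimes field requires a reasonably recent procps):

    # 36000 seconds = 10 hours
    ps -eo pid,etimes,comm | awk '$3 == "XYZ" && $2 > 36000 { print $1 }'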

Check web logs on www server for specific ip address access:
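
For example (the hostname, IP address and log path are placeholders):

    ssh www.example.com "grep '203.0.113.10' /var/log/apache2/access.log*"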

Those are just a few of the useful commands that can be applied to many different functions. I particularly like sending files across the network and compressing them mid-stream 🙂

The above kind of administration is made even easier when you employ ssh key based authentication — your commands can be scripted to execute across many servers in one sitting (just be careful) 😉

Software RAID in Linux

Several occasions have arisen where a client requested software raid-1 between two IDE drives in their server.

Obviously the servers in question had no hardware RAID capabilities, and the increased redundancy was considered worth the compromise in disk I/O read/write performance.

Below is a simple tutorial for setting up software RAID in Linux, using mdadm. The specific examples in these instructions are with Debian, but can be applied to any Linux distribution.

Linux Software RAID-1

1. Verify that you are working with two identical hard drives:
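
For example (the drive names are assumptions used throughout this sketch):

    fdisk -l /dev/hda /dev/hdb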

2. Copy the partition table over to the second drive:
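
For example, using sfdisk (assuming /dev/hda is the existing drive and /dev/hdb the new one):

    sfdisk -d /dev/hda | sfdisk /dev/hdb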

Edit the partition table of the second drive so that all of the partitions, except #3, have type ‘fd’.

3. Install mdadm to manage the arrays.
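
On Debian:

    apt-get install mdadm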

It’ll ask you a series of questions that are highly dependent on your needs. One key one is: “Yes, automatically start RAID arrays”

4. Load the RAID1 module:
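
For example:

    modprobe raid1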

5. Create the RAID1 volumes. Note that we’re setting one mirror as “missing” here. We’ll add the second half of the mirror later because we’re using it right now.
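
For example, assuming /boot on partition 1, swap on partition 2 and / on partition 4 (this layout is an assumption; adjust to your partitioning):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1   # /boot
    mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/hdb2   # swap
    mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/hdb4   # /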

6. Make the filesystems:
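
Following the layout assumed above:

    mkfs.ext3 /dev/md0
    mkswap /dev/md1
    mkfs.ext3 /dev/md2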

7. Install the dump package:
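
On Debian:

    apt-get install dump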

8. Mount the new volumes, dump & restore from the running copies:
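
For example (mount points and devices follow the layout assumed above):

    mkdir -p /mnt/md0 /mnt/md2
    mount /dev/md0 /mnt/md0
    mount /dev/md2 /mnt/md2
    ( cd /mnt/md0 && dump -0af - /boot | restore -rf - )
    ( cd /mnt/md2 && dump -0af - /     | restore -rf - )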

9. Set up the chroot environment:
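
For example (assuming the new root lives on /dev/md2 as above):

    mount --bind /dev  /mnt/md2/dev
    mount --bind /proc /mnt/md2/proc
    chroot /mnt/md2 /bin/bash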

10. Edit /boot/silo.conf, and change the following line:
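
For example, point the root parameter at the mirrored root device (assumed to be /dev/md2 here):

    root=/dev/md2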

11. Edit /etc/fstab, and point them to the MD devices:
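
For example (devices per the layout assumed above):

    /dev/md2   /       ext3   defaults   0   1
    /dev/md0   /boot   ext3   defaults   0   2
    /dev/md1   none    swap   sw         0   0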

12. Save the MD information to /etc/mdadm/mdadm.conf:
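
For example:

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf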

13. Rebuild the initrd (to add the RAID modules, and boot/root RAID startup information):
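
Depending on your Debian release, something like:

    update-initramfs -u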

14. Leave the chroot environment:
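
Simply:

    exit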

15. Unmount /boot. klogd uses the System.map file, and we need to kill it to unmount /boot.
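
For example:

    /etc/init.d/klogd stop
    umount /boot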

16. Add /dev/hda1 to /dev/md0 — the /boot mirror
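
For example:

    mdadm /dev/md0 --add /dev/hda1
    watch cat /proc/mdstat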

Wait until the mirror is complete. CTRL-C to exit watch.

17. Mount the mirrored /boot:
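
For example:

    mount /dev/md0 /boot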

18. Stamp the boot loader onto both disks, and reboot:

19. Assuming it booted up correctly, verify that we’re running on the mirrored copies:
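
For example:

    df -h / /boot
    cat /proc/mdstat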

If so, add the other partitions into their respective mirrors:
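
For example (partition numbers follow the layout assumed earlier):

    mdadm /dev/md1 --add /dev/hda2
    mdadm /dev/md2 --add /dev/hda4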

And wait until the mirrors are done building.

20. Edit /etc/mdadm/mdadm.conf and remove any references to the RAID volumes. Refresh the mdadm.conf information:
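
For example:

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf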

21. Rebuild the initrd one more time. The previous time only included one half of each mirror for root and swap.

22. Reboot one more time for good measure. You now have software RAID1.

Testing the Software Raid & simulating a drive failure

Newer versions of raidtools come with a raidsetfaulty command. By using raidsetfaulty you can simulate a drive failure without unplugging anything.

Just running the command
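
For example, using mdadm (which provides equivalent set-faulty functionality; the device names follow the /dev/md1 and /dev/sdc2 examples used in the rest of this section):

    mdadm --manage /dev/md1 --set-faulty /dev/sdc2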

Check your system log (for example /var/log/messages): you should see a message noting the disk failure, and a further message about reconstruction starting if you have spare disks configured.

Checking /proc/mdstat will show the degraded array. If a spare disk was available, reconstruction should have started.

Try with :
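
For example:

    cat /proc/mdstat
    mdadm --detail /dev/md1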

Now you’ve seen how it goes when a device fails. Let’s fix things up.

First, we will remove the failed disk from the array. Run the command
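
For example:

    mdadm /dev/md1 --remove /dev/sdc2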

Now we have a /dev/md1 which has just lost a device. This could be a degraded RAID or perhaps a system in the middle of a reconstruction process. We wait until recovery ends before setting things back to normal.

We re-establish /dev/sdc2 back into the array.
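
For example:

    mdadm /dev/md1 --add /dev/sdc2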

As the disk is added back to the array, we’ll see it become an active member of /dev/md1 if needed. If not, it will be marked as a spare disk.

Checking for errors and alerting

Steps for setting up e-mail alerting of errors with mdadm:

E-mail error alerting with mdadm can be accomplished in several ways:

1. Using a command line directly

2. Using the /etc/mdadm.conf file to specify an e-mail address
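
For the configuration file approach, a MAILADDR line is added (the address is a placeholder; on Debian the file is /etc/mdadm/mdadm.conf):

    MAILADDR admin@example.com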

NOTE: e-mails are only sent when the following events occur:
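
Per the mdadm documentation, these are typically the Fail, FailSpare, DegradedArray, SparesMissing and TestMessage events.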

Specifying an e-mail address using the mdadm command line

Using the command line simply involves including the e-mail address in the command. The following explains the mdadm command and how to set it up so that it will load every time the system is started.
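
For example, running mdadm in monitor mode as a daemon and mailing alerts (the address is a placeholder):

    mdadm --monitor --scan --daemonise --mail=admin@example.com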

The command could be put in /etc/init.d/boot.local so that it is loaded every time the system starts.

You can verify that mdadm is running by typing the following in a terminal window:
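
    ps aux | grep mdadm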