Log compression Bash script

In my experience as a systems administrator, the need arises quite often for a script to rotate and compress rather large log files.

These log files could be anything: Java logs, Apache logs (though Apache has its own log rotation built in), and mail logs, for example. This script has two modes: daily and monthly.

The daily mode is intended to be run daily (obviously), gzipping the previous day's log file. The monthly mode, run monthly (obviously), tars up all of the previous month's gzip files into one big tarball.

Note that this script assumes the logs are named with the base filename plus a date (year/month/day). This can obviously be modified to suit the specific naming scheme of your own log files.

Here is the script:
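A minimal sketch of such a script; the /var/log/myapp directory and the name.YYYY-MM-DD naming convention are assumptions, so adjust both to match your environment:

```shell
#!/bin/bash
# Log rotation sketch. LOGDIR and the NAME.YYYY-MM-DD naming
# convention are assumptions -- adjust to match your logs.

LOGDIR="${LOGDIR:-/var/log/myapp}"

rotate_daily() {
    # gzip every log file stamped with yesterday's date
    local yesterday
    yesterday=$(date -d "yesterday" +%Y-%m-%d)
    for f in "$LOGDIR"/*."$yesterday"; do
        [ -f "$f" ] && gzip "$f"
    done
    return 0
}

rotate_monthly() {
    # roll last month's gzipped logs into one tarball, then delete them
    local lastmonth
    lastmonth=$(date -d "last month" +%Y-%m)
    set -- "$LOGDIR"/*."$lastmonth"-*.gz
    [ -e "$1" ] || return 0          # nothing to archive
    tar -czf "$LOGDIR/logs-$lastmonth.tar.gz" "$@" && rm -f "$@"
}

case "${1:-}" in
    daily)   rotate_daily ;;
    monthly) rotate_monthly ;;
esac
```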

I simply make two crontab entries:
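Assuming the script is saved as /usr/local/bin/rotatelogs.sh (an illustrative path), the entries look like:

```
# m   h   dom  mon  dow   command
0     3   *    *    *     /usr/local/bin/rotatelogs.sh daily
0     5   1    *    *     /usr/local/bin/rotatelogs.sh monthly
```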

The above entries run the script daily at 3:00am, and monthly on the 1st of every month at 5:00am. Staggering the times ensures the two jobs aren't run at the same time on the 1st.

That’s it!

Software RAID in Linux

Several occasions have arisen where a client requested software RAID-1 between two IDE drives in their server.

Obviously the servers in question had no hardware RAID capability, and increased redundancy was more important than raw disk I/O read/write performance.

Below is a simple tutorial for setting up software RAID in Linux using mdadm. The specific examples in these instructions use Debian, but they can be applied to any Linux distribution.

Linux Software RAID-1

1. Verify that you are working with two identical hard drives:
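Assuming the drives are /dev/hda and /dev/hdb (substitute your own device names):

```
fdisk -l /dev/hda
fdisk -l /dev/hdb
```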

2. Copy the partition table over to the second drive:

Edit the partition table of the second drive so that all of the partitions, except #3, have type ‘fd’ (Linux RAID autodetect).
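One way to do this is with sfdisk; if your version doesn't handle your disk labels, recreate the partitions manually with fdisk instead:

```
sfdisk -d /dev/hda | sfdisk /dev/hdb
fdisk /dev/hdb     # use the 't' command to set each partition (except #3) to type 'fd'
```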

3. Install mdadm to manage the arrays.
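On Debian:

```
apt-get install mdadm
```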

It’ll ask you a series of questions that are highly dependent on your needs. One key one: answer “Yes” to “automatically start RAID arrays”.

4. Load the RAID1 module:
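On most systems this is just:

```
modprobe raid1
```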

5. Create the RAID1 volumes. Note that we’re setting one half of each mirror as “missing” here. We’ll add the second half later, because those partitions are in use right now.
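Assuming /boot, swap, and root live on partitions 1, 2, and 4 respectively (an illustrative layout; substitute your own):

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/hdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/hdb4
```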

6. Make the filesystems:
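With the same assumed layout (md0 = /boot, md1 = swap, md2 = root):

```
mke2fs /dev/md0       # /boot
mkswap /dev/md1       # swap
mke2fs -j /dev/md2    # root (ext3)
```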

7. Install the dump package:
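```
apt-get install dump
```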

8. Mount the new volumes, dump & restore from the running copies:
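Something along these lines; the mount points are arbitrary:

```
mkdir -p /mnt/md0 /mnt/md2
mount /dev/md0 /mnt/md0
mount /dev/md2 /mnt/md2
cd /mnt/md0 && dump -0f - /boot | restore -rf -
cd /mnt/md2 && dump -0f - /     | restore -rf -
```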

9. Set up the chroot environment:
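One way to do it, remounting the new /boot under the new root so the boot loader and initrd steps below can see it:

```
umount /mnt/md0
mount /dev/md0 /mnt/md2/boot
mount --bind /dev /mnt/md2/dev
mount -t proc proc /mnt/md2/proc
chroot /mnt/md2 /bin/bash
```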

10. Edit /boot/silo.conf, and change the following line:
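For example, if the root partition was /dev/hda4 (illustrative), point it at the mirror instead:

```
# before
root=/dev/hda4
# after
root=/dev/md2
```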

11. Edit /etc/fstab, and point them to the MD devices:
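With the assumed layout, the relevant lines become:

```
/dev/md0   /boot   ext2   defaults   0   2
/dev/md2   /       ext3   defaults   0   1
/dev/md1   none    swap   sw         0   0
```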

12. Save the MD information to /etc/mdadm/mdadm.conf:
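The scan output can be appended directly:

```
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```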

13. Rebuild the initrd (to add the RAID modules, and boot/root RAID startup information):
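Depending on the Debian release, one of:

```
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)   # initrd-tools (older releases)
update-initramfs -u                                    # initramfs-tools (newer releases)
```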

14. Leave the chroot environment:
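Simply:

```
exit
```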

15. Unmount /boot. klogd holds the System.map file open, so we need to stop it before /boot can be unmounted.
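Stop the daemon, then unmount:

```
/etc/init.d/klogd stop
umount /boot
```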

16. Add /dev/hda1 to /dev/md0 — the /boot mirror
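Add the partition, then watch the rebuild:

```
mdadm /dev/md0 --add /dev/hda1
watch cat /proc/mdstat
```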

Wait until the mirror is complete. CTRL-C to exit watch.

17. Mount the mirrored /boot:
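```
mount /dev/md0 /boot
```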

18. Stamp the boot loader onto both disks, and reboot:
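With silo (the SPARC boot loader referenced in step 10), rerunning it reads /etc/silo.conf and installs the boot block; check your version's man page if the second disk must be specified explicitly:

```
silo
shutdown -r now
```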

19. Assuming it booted up correctly, verify that we’re running on the mirrored copies:
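df and /proc/mdstat tell the story:

```
df -h /
cat /proc/mdstat
swapon -s
```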

If so, add the other partitions into their respective mirrors:
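With the assumed layout:

```
mdadm /dev/md1 --add /dev/hda2
mdadm /dev/md2 --add /dev/hda4
watch cat /proc/mdstat
```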

And wait until the mirrors are done building.

20. Edit /etc/mdadm/mdadm.conf and remove any references to the RAID volumes. Refresh the mdadm.conf information:
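After deleting the old ARRAY lines, append the current scan output:

```
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```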

21. Rebuild the initrd one more time. The previous time only included one half of each mirror for root and swap.
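Same command as in step 13, e.g.:

```
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)
```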

22. Reboot one more time for good measure. You now have software RAID1.

Testing the Software Raid & simulating a drive failure

Newer versions of raidtools come with a raidsetfaulty command, and mdadm provides the same functionality with its --set-faulty option. Using it, you can simulate a drive failure without unplugging anything.

Just run the command:
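With mdadm, using this example's /dev/md1 array and its /dev/sdc2 member:

```
mdadm --manage --set-faulty /dev/md1 /dev/sdc2
```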

First, you should see something like the first line of this in your system’s log. Something like the second line will appear if you have spare disks configured.

Checking /proc/mdstat will show the degraded array. If a spare disk was available, reconstruction should have started.

Try with:
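Either of:

```
cat /proc/mdstat
mdadm --detail /dev/md1
```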

Now you’ve seen how it goes when a device fails. Let’s fix things up.

First, we will remove the failed disk from the array. Run the command:
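Using mdadm's remove option:

```
mdadm /dev/md1 -r /dev/sdc2
```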

Now we have a /dev/md1 that has just lost a device. This could be a degraded array, or a system in the middle of a reconstruction process. Wait until recovery ends before setting things back to normal.

We re-establish /dev/sdc2 in the array.
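Using the add option:

```
mdadm /dev/md1 -a /dev/sdc2
```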

As the disk returns to the array, we’ll see it become an active member of /dev/md1 if necessary. If not, it will be marked as a spare disk.

Checking for errors and alerting

E-mail alerting of errors with mdadm can be accomplished in two ways:

1. Using a command line directly

2. Using the /etc/mdadm.conf file to specify an e-mail address

NOTE: e-mails are only sent when significant events occur, such as Fail, FailSpare, DegradedArray, and TestMessage.

Specifying an e-mail address using the mdadm command line

Using the command line simply involves including the e-mail address in the command. The following explains the mdadm command and how to set it up so that it will load every time the system is started.
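A typical invocation (the e-mail address and the 1800-second polling interval are illustrative):

```
mdadm --monitor --scan --daemonise --mail=admin@example.com --delay=1800
```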

The command could be put in /etc/init.d/boot.local so that it is run every time the system is started.

Verify that mdadm is running by typing the following in a terminal window:
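For example:

```
ps aux | grep mdadm
```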