Patch Scanning / Information Gathering Script for RedHat / CentOS

With all the patch management solutions, local repositories and other options, it is rarely necessary to manually scan all servers on your network to build a “report” of the patch levels in your environment.

Sometimes it is, however. For instance, if you are brought into an environment that has not been properly managed and need some quick audits to evaluate how much work is required to bring all the patch levels up to standard, there are ways to produce these reports with simple bash scripting.

I have developed such a script for similar situations; quick reporting is sometimes necessary even when you are evaluating a large commercial patch management solution. It can even be run alongside such solutions, perhaps for independent reporting.

This script works well when distributed to each server and run via ssh key based authentication for centralized reporting. Alternatively, you could modify it to execute each command over SSH and gather the information across the network. Distributing the script to each server is probably the better approach, since only one ssh command is then executed per server.

Find the script below; note that it only works with RedHat / CentOS systems. Obviously, if you are paying for Red Hat Enterprise support you are already using Satellite; if you are using CentOS, then this script may be useful for you.
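
Here is a minimal sketch of the kind of script I mean; the exact commands and echo labels are only examples, and the --security count assumes the yum security plugin is available on your release:

#!/bin/bash
# Quick patch-level report sketch for RedHat / CentOS -- adjust output to taste

HOST=$(hostname -f)
RELEASE=$(cat /etc/redhat-release)
KERNEL=$(uname -r)

# One line per package with a pending update
PENDING=$(yum -q check-update 2>/dev/null | grep -c '^[A-Za-z0-9]')

# Pending security errata (requires the yum security plugin on older releases)
SECURITY=$(yum -q --security check-update 2>/dev/null | grep -c '^[A-Za-z0-9]')

# Most recently installed/updated package, as a rough "last patched" indicator
LAST=$(rpm -qa --last | head -n 1)

echo "Host             : $HOST"
echo "Release          : $RELEASE"
echo "Running kernel   : $KERNEL"
echo "Pending updates  : $PENDING"
echo "Security updates : $SECURITY"
echo "Last package     : $LAST"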

Enjoy!

Note that you can modify the echo statements to produce whatever output you need for a nice, human-readable report.
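
For the centralized reporting approach described above, a simple wrapper on the reporting host can collect everything with one ssh call per server. The host list and script path below are assumptions; adjust them to your environment:

#!/bin/bash
# Hypothetical central collector -- key-based ssh auth to each server is assumed
for host in $(cat hosts.txt); do
    echo "##### $host"
    ssh -o BatchMode=yes root@"$host" '/usr/local/bin/patch-report.sh'
done > patch-report-$(date +%F).txt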

Scheduled antivirus scans to prevent viral injections on user generated content

When dealing with high traffic sites, especially media or community based sites, there is always a risk of JavaScript, virus, XSS or other malicious injections when you give a community of users the ability to upload files to your site.

There are several things to consider when evaluating all "points of entry" into your systems that are available to the public.

Most content management and community based systems use libraries such as ImageMagick to process images (such as profile pictures) into their proper format and size.

Believe it or not, it is hard to inject code or other malicious data into the image itself in a way that survives this sanitizing process. There are still risks, however: the library version you are running may itself be vulnerable to exploits.

As always, a good rule of thumb is to ensure all possible aspects of your systems are up to date and that you are aware of any security vulnerabilities as they come out so they can either be patched or addressed in some other way.

One thing to consider, especially when dealing with thousands of users and even more uploads, is a scheduled scan of your user uploads using free virus scanning tools such as clamscan. This is an endpoint reporting strategy that can at least cover your ass in the event that something else was missed or a 0-day vulnerability was exploited.

It should be noted that the virus scans aren't intended to protect the Linux systems themselves, but rather to prevent the opportunistic 'spreading' of compromised images and code that hosting an infected file on a public, community based system can enable.

It's very simple to implement ClamAV (daemonization is not necessary); clamscan is all we need to execute regular scans at 10, 15, 30 or 60 minute intervals.

Once clamscan is installed, definitions are updated (and regular update cronjobs are in place), you can roll out a script similar to the one we have here to implement the scheduled scans.
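
For example, a report-only scan wrapper could look something like this (the upload directory, log file and alert address are placeholders for your own site, and the mail alert assumes a local mail command is configured):

#!/bin/bash
# Report-only clamscan sweep of the user upload directory
UPLOAD_DIR="/var/www/uploads"
LOGFILE="/var/log/clamscan-uploads.log"
ALERT="security@example.com"

# -r = recursive, -i = only list infected files
clamscan -ri "$UPLOAD_DIR" --log="$LOGFILE"

# clamscan exits 1 when at least one infected file was found
if [ $? -eq 1 ]; then
    mail -s "clamscan: infected upload(s) on $(hostname)" "$ALERT" < "$LOGFILE"
fi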

The actual cronjob entry can look something like this:
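
(This assumes the scan script above was saved as /usr/local/bin/scan-uploads.sh and that you are using /etc/cron.d style entries; the second line keeps the virus definitions updated via freshclam.)

*/30 * * * * root /usr/local/bin/scan-uploads.sh > /dev/null 2>&1
0 */4 * * * root /usr/bin/freshclam --quiet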

It seems like a rather simple solution, but it does provide an additional layer of sanitizing for user-generated content. In our experience, it is best to only report on anything that clamscan might find. You can, however, tell clamscan to simply delete any suspected infections it finds.

Backup a live FreeBSD filesystem and remotely migrate to another server

Lately we've been all about live migrations / backups here at *.hosting. And why not? With the advent of such concepts as the "self healing blade cloud environment", we have made a point of testing and scripting live migration scenarios.

Following on our last post about backing up LVM volumes, we have decided to write a simple post on 'dumping' a live FreeBSD filesystem, compressing it mid-stream, and sending it over the network (encrypted through ssh, of course) before it is saved as a file (or restored to a waiting live-cd mounted system).

By default, the FreeBSD installer creates separate /, /var and /usr partitions.

So let's dump the root partition, since it's the smallest:
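
(The command below is an illustration; -f - sends the dump to stdout so it can be piped through gzip and ssh, and the user, host and destination path are placeholders.)

dump -0uanL -f - / | gzip -2 | ssh user@backupserver "cat > /backups/server1-root.dump.gz"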

Let's break down the options so you can fully understand what it's doing:
-0 // dump level 0 = full backup
-u // update the dumpdates file after a successful dump
-a // bypass all tape length considerations; autosize
-n // notify if attention is required
-L // tell dump that it is a live filesystem for a consistent dump; it will take a snapshot

Alternatively, you could dump the filesystem to a local file:
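
(Again, the output path is just an example; you could drop the gzip stage if you prefer an uncompressed dump.)

dump -0uanL -f - / | gzip -2 > /backups/root.dump.gz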

If you wanted to dump from server1 and restore on server2:
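
(This assumes server2 is booted into a rescue / live-cd environment with the target filesystem mounted, here at /mnt/root, and that root ssh between the two hosts is permitted.)

dump -0uanL -f - / | ssh root@server2 "cd /mnt/root && restore -rf -"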

Again, this is a straightforward command. It is typically fast (within reason), and you could script it for automated dumps/snapshots of your filesystem, giving you full restores in a disaster scenario.