Centralized Backup Script

Hello There!

I thought I’d share a backup script that was written to consolidate backups onto one server instead of spreading the backup process across several servers. The advantage of consolidating is fairly obvious: with only one script to maintain, editing or making changes is much easier.

This approach is ideal for environments with roughly 15-20 servers or fewer. For anything larger, I’d recommend a complete end-to-end backup solution such as Bacula.

The bash shell script pasted below is very straightforward and takes two or more arguments. The first is the hostname or IP address of the server you are backing up. The remaining (and potentially unlimited) arguments are single-quoted folder paths that you want backed up.

The script depends on the server it runs on having SSH key based authentication enabled and implemented for the root user. Security can be tightened with IP based restrictions in the SSH configuration, the firewall configuration, or in the authorized_keys entry itself.
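For example, OpenSSH supports per-key restrictions directly in authorized_keys; a minimal sketch, where 10.0.0.5 stands in for your backup server’s IP, might look like this on each destination server:

# /root/.ssh/authorized_keys on a destination server
# Accept this key only from the backup server's IP, and disable
# forwarding features the backup job does not need
from="10.0.0.5",no-agent-forwarding,no-X11-forwarding,no-port-forwarding ssh-rsa AAAA... backup-key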

#!/bin/bash
# Offsite Backup script
# Written by www.stardothosting.com
# Dynamic backup script

currentmonth=`date "+%Y-%m-%d %H:%M:%S"`
currentdate=`date "+%Y-%m-%d_%H_%M_%S"`
backup_email="backups@youremail.com"
backupserver="origin-backup-server.hostname.com"

# Check User Input
if [ "$#" -lt 2 ]
then
        echo -e "nnUsage Syntax :"
        echo -e "./backup.sh [hostname] [folder1] [folder2] [folder3]"
        echo -e "Example : ./backup.sh your-server.com '/etc' '/usr/local/www' '/var/lib/mysql'nn"
        exit 1
fi

# get the server's hostname
host_name=`ssh -l root $1 "hostname"`
echo "Host name : $host_name"
if [ "$host_name" == "" ]
then
        host_name="unknown_$currentdate"
fi

echo "$host_name Offsite Backup Report: " $currentmonth > /var/log/backup.log
echo -e "----------------------------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log

# Ensure permissions are correct
chown -R backups:backups /home/fileserver/backups/
ls -d /home/fileserver/backups/* | grep -Ev ".ssh|.bash" | xargs -d "\n" chmod -R 775

# iterate over user arguments & set error level to 0
errors=0
for arg in "${@:2}"
do
        # check if receiving directory exists
        if [ ! -d "/home/fileserver/backups/$host_name" ]
        then
                mkdir /home/fileserver/backups/$host_name
        fi
        sanitize=`echo $arg | sed 's|/*$||'`                            # strip any trailing slashes
        sanitize_dir=`echo $arg | awk -F '/' '{printf "%s", $2}'`       # first path component, used as the destination folder name
        /usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 "/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir; echo \$? > /tmp/bu_rlevel.txt" >> /var/log/backup.log 2>&1
        echo "/usr/bin/ssh -o ServerAliveInterval=1 -o TCPKeepAlive=yes -l root $1 \"/usr/bin/rsync -ravzp --progress --exclude 'clam_quarantinedir' $sanitize/ backups@$backupserver:/home/fileserver/backups/$host_name/$sanitize_dir\""

        runlevel=`ssh -l root $1 "cat /tmp/bu_rlevel.txt"`      # rsync exit status recorded on the remote host
        echo "Runlevel : $runlevel"

        if [ "$runlevel" -ge 1 ]
        then
                errors=$((errors+1))
        else
                echo -e "Script Backup for $arg Completed Successfully!" >> /var/log/backup.log 2>&1
        fi

done


# Check error level
if [ $errors -ge 1 ]
then
        echo -e "There were some errors in the backup job for $host_name, please investigate" >> /var/log/backup.log 2>&1
        cat /var/log/backup.log | mail -s "$host_name Backup Job failed" $backup_email
else
        cat /var/log/backup.log | mail -s "$host_name Backup Job Completed" $backup_email
fi

It should be explained further that this script connects to the destination server as the root user, using SSH key authentication. It then initiates a remote rsync command on the destination server back to the backup server, as a user called “backups”. That means not only does the SSH key need to be installed for root on each destination server, but a user called “backups” needs to exist on the backup server itself, with the SSH keys of all the destination servers installed so the remote rsync can authenticate.

Hopefully I did not overcomplicate this, because it really is quite simple:

Backup Server -> root -> destination server to back up -> backups user rsync -> Backup Server
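As a rough sketch of that key setup (the destination hostname is a placeholder; substitute your own):

# On the backup server: install root's public key on each destination server
ssh-keygen -t rsa -f /root/.ssh/id_rsa
ssh-copy-id root@destination-server-hostname

# On each destination server: install root's public key for the "backups"
# user on the backup server, so the remote rsync can authenticate
ssh-keygen -t rsa -f /root/.ssh/id_rsa
ssh-copy-id backups@origin-backup-server.hostname.com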

Once you have implemented the script and done a few dry-run tests, you can set up a scheduled task for each destination server. Here is an example cron entry for one server to be backed up:

01 1 * * * /bin/sh /usr/local/bin/backups.sh destination-server-hostname '/etc' '/usr/local/www' '/home/automysql-backups'

Script to distribute SSH Keys across many servers

Hello once again!

You may remember an earlier post that detailed how to implement SSH Key based authentication.

We believe it is important, when administering many (sometimes hundreds or thousands) of servers, to implement a strategy that can allow systems administrators to seamlessly run scripts, system checks or critical maintenance across all the servers.

SSH key authentication makes this possible. It is a very powerful strategy and should be implemented and maintained with security and efficiency as top priorities.

A mechanism for distributing the keys of all authorized systems administrators makes maintaining this authentication system much easier: when an admin leaves or is dismissed, you need to be able to remove his or her keys from the “pool” quickly.

The idea behind this script is to have a centralized, highly secure and restricted key repository server. Each server in your environment runs this script as a cron job to “pull” the updated key list from the central server, as often as you like; every 5-10 minutes is ideal, allowing for quick key updates / distribution.
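For example, a crontab entry on each server might look like this (the /usr/local/bin/sync-keys.pl path is a hypothetical install location):

# Pull the updated key list from the central repository every 5 minutes
*/5 * * * * /usr/local/bin/sync-keys.pl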

Here is the perl script :

#!/usr/bin/perl
#
# A script to sync ssh keys on UNIX servers automatically.  This
# will not overwrite user installed ssh keys
#

use strict;
use IPC::Open3;
use File::Copy;

use POSIX ":sys_wait_h";

# This is overkill but FreeBSD may install wget in
# /usr/local/bin in some cases.
$ENV{PATH} = "/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin";

####################################################

use constant URL => 'https://keys.yoursite.com/ssh-keys.txt';
use constant WGET => 'wget --no-check-certificate -q -O - ';
use constant KEYS_FILE => '/root/.ssh/authorized_keys';
use constant RESTRICTED => 'https://keys.yoursite.com/restricted.txt';

####################################################

my ($url, $wget, $keys_file, $restricted, %restrict);
my ($pid, $company_keys, $user_keys);

# Parse command line arguments (see usage() below)
for (my $i = 0; $i < @ARGV; $i++) {
        if    ($ARGV[$i] eq '-k') { $keys_file = $ARGV[++$i]; }
        elsif ($ARGV[$i] eq '-u') { $url       = $ARGV[++$i]; }
        else                      { usage(); }
}
$url       ||= URL;
$keys_file ||= KEYS_FILE;
$restricted  = RESTRICTED;
$wget        = WGET . $url;

# Fetch the restricted-key list: keys that must never be installed
$pid = open3(*WTR, *RTR, *ERR, WGET . $restricted);
while (<RTR>) {
        chomp;
        $restrict{$_}++;
}

# Fetch the company key list, skipping any restricted keys
$pid = open3(*WTR, *RTR, *ERR, "$wget");
while (<RTR>) {
        chomp(my $key = $_);
        next if $restrict{$key};
        $company_keys .= $_;
}

$user_keys = read_key_file();

# Sanity check
my @rows = split("\n", $company_keys);


if (scalar @rows < 1) {
        print "Less than 1 company keys found, not installing keys..n";
        exit(1);
}

open(TMP, ">$keys_file.$$.tmp") or die "Could not open tmp keys file: $!\n";
print TMP $company_keys;
print TMP $user_keys;
close(TMP);

# Sanity check

# grab the size (in bytes) of the temporary keys file
my $size = (stat("$keys_file.$$.tmp"))[7];

if ($size < 100) {
        print "Keys file less than 100bytes, not writing";
        exit(1);
}

move("$keys_file.$$.tmp", $keys_file);

sub read_key_file {
        my $user_buf;

        open(KEY_FILE, "< $keys_file") or die "Could not open ssh key file: $!\n";

        while (<KEY_FILE>) {
                # keep only user-installed keys; company keys end in "company"
                next if $_ =~ /company$/;
                $user_buf .= $_;
        }

        close(KEY_FILE);
        return($user_buf);
}

sub sig_chld {
        my $pid = waitpid(-1, WNOHANG);
}

sub usage {
        print STDERR <<"EOS";

        Usage: $0 -[kuh]

                -k      Keys file to write to (default: @{[KEYS_FILE]})
                -u      URL to download keys from (default: @{[URL]})
                -h      This screen

EOS
        exit(1);
}

1;

__END__

Note that it downloads the public keys with wget. The URLs above use HTTPS, but certificate checking is disabled via --no-check-certificate, so the transfer is only marginally better than plain HTTP. Public keys are harmless to expose, which is why this was acceptable; verifying the certificate would still be desirable, as it prevents an attacker from substituting their own key list.
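Since wget verifies HTTPS certificates by default, simply dropping that flag is enough to enable verification, assuming the key server presents a certificate signed by a CA the client trusts:

# Fetch the key list with certificate verification enabled
wget -q -O - https://keys.yoursite.com/ssh-keys.txt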

We hope this script helps you along the way towards making your life easier! 😉

SSH Key based authentication

Administering a large number of servers is never easy. Sometimes it’s necessary to streamline many “menial” tasks across the board. This is especially true when performing simple audits across all your linux and unix based servers.

What makes this process seamless and efficient is implementing SSH key authentication to replace password based authentication.

Once implemented, commands can be executed via scripts to gather all the information required to complete your task.
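As a minimal sketch, assuming a plain-text servers.txt listing one hostname per line:

#!/bin/bash
# Run a simple audit command on every server in the list;
# -n stops ssh from swallowing the rest of servers.txt,
# BatchMode makes ssh fail rather than prompt for a password
while read -r host; do
        ssh -n -o BatchMode=yes root@"$host" "uname -a; uptime"
done < servers.txt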

Sound easy? It is!

Public Key Setup

First, confirm that OpenSSH is the SSH software installed on the client system. Key generation may vary under different implementations of SSH. The ssh -V command should print a line beginning with OpenSSH, followed by other details.

$ ssh -V
OpenSSH_3.6.1p1+CAN-2003-0693, SSH protocols 1.5/2.0, OpenSSL 0x0090702f

Key Generation

An RSA key pair must be generated on the client system. The public portion of this key pair will reside on the servers being connected to, while the private portion needs to remain in a secure local area of the client system, by default in ~/.ssh/id_rsa. The key generation can be done with the ssh-keygen(1) utility.

client$ mkdir ~/.ssh
client$ chmod 700 ~/.ssh
client$ ssh-keygen -q -f ~/.ssh/id_rsa -t rsa
Enter passphrase (empty for no passphrase): …
Enter same passphrase again: …

Depending on your work environment and its security policies and procedures, you may want to enter a passphrase. A passphrase defeats the purpose of a seamless, unattended connection here, however; entering an empty passphrase will allow you to complete a seamless connection.
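If policy does require a passphrase, ssh-agent can cache the decrypted key for your session and restore seamless connections; a quick sketch:

client$ eval `ssh-agent`
client$ ssh-add ~/.ssh/id_rsa
Enter passphrase for /…/.ssh/id_rsa: …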

The file permissions should be locked down to prevent other users from being able to read the key pair data. OpenSSH may also refuse to support public key authentication if the file permissions are too open. These fixes should be done on all systems involved.

$ chmod go-w ~/
$ chmod 700 ~/.ssh
$ chmod go-rwx ~/.ssh/*

Key Distribution

The public portion of the RSA key pair must be copied to any servers that will be accessed by the client. The public key information to be copied should be located in the ~/.ssh/id_rsa.pub file on the client. Assuming that all of the servers use OpenSSH instead of a different SSH implementation, the public key data must be appended into the ~/.ssh/authorized_keys file on the servers.

First, upload the public key from the client to the server:

client$ scp ~/.ssh/id_rsa.pub server.example.org:

Next, set up the public key on the server:

server$ mkdir ~/.ssh
server$ chmod 700 ~/.ssh
server$ cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
server$ chmod 600 ~/.ssh/authorized_keys
server$ rm ~/id_rsa.pub

Be sure to append new public key data to the authorized_keys file, as multiple public keys may be in use. Each public key entry must be on a different line.
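Where available, the ssh-copy-id utility bundled with OpenSSH wraps the upload-and-append steps above into a single command:

client$ ssh-copy-id -i ~/.ssh/id_rsa.pub server.example.org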

Many different things can prevent public key authentication from working, so be sure to confirm that public key connections to the server work properly. If the following test fails, consult the debugging notes.

client$ ssh -o PreferredAuthentications=publickey server.example.org
Enter passphrase for key '/…/.ssh/id_rsa': …
…
server$  
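If the test fails, running the client in verbose mode usually reveals which step is going wrong (key not offered, wrong permissions, and so on):

client$ ssh -v -o PreferredAuthentications=publickey server.example.org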

Key distribution can be automated with module:authkey and CFEngine. This script maps public keys stored in a filesystem repository to specific accounts on various classes of systems, allowing a user key to be replicated to all systems the user has access to.

If exporting the public key to a different group or company, consider removing or changing the optional public key comment field to avoid exposing the default username and hostname.

Easy!

Quick tips using FIND, SSH, TAR, PS and GREP Commands

Administering hundreds of systems can be tedious. Sometimes scripting repetitive tasks, or replicating tasks across many servers is necessary.

Over time, I’ve jotted down several quick useful notes regarding using various linux/unix commands. I’ve found them very useful when navigating and performing various tasks. I decided to share them with you, so hopefully you will find them a useful reference at the very least!

To find files within a time range and add up the total size of all those files :

find /opt/uploads -mtime -365 -printf "%s\n"|awk '{sum+=$0}END{print sum}'

To watch a command’s progress :

watch -n1 'du -h -c --max-depth=1'

Transfer files or folders, compress them midstream over the network, and uncompress them on the receiving end:

ssh -l root 00.00.000.000 '(cd /opt/uploads && tar -czf - .)' | tar -xzf -

Below will print the PID of any XYZ process that has accumulated more than 10 hours of CPU time (in ps -ef output, field 7 is the cumulative CPU TIME and field 2 is the PID; the [X] in the grep pattern keeps grep from matching itself):

ps -ef | grep [X]YZ | awk '{ print $7 ":" $2 }' | awk 'BEGIN { FS=":" }; { if ($1 > 10) print $4 }'

Check web logs on www server for specific ip address access:

grep "ip_address" [a-h]*/logs/*access*4		<-- check a-h websites
grep "ip_address" [A-Z,0-9]*/logs/*access*4	<-- check A-Z, 0-9 websites

Those are just a few of the useful commands that can be applied to many different functions. I particularly like sending files across the network and compressing them midstream :)

The above kind of administration is made even easier when you employ ssh key based authentication -- your commands can be scripted to execute across many servers in one sitting (just be careful) ;)
