Foundry Load Balancers: HTTP Sticky Sessions

This post is intended to be a general guide for configuring "stickied" load balanced HTTP servers. Whether you are using F5 load balancers, Foundry load balancers or open source load balancers (keepalived/LVS), the concepts are the same and carry across all of these platforms.

If you have a pair of Foundrys and are looking to configure sticky load balanced HTTP servers, hopefully this guide will provide some assistance.

    Logging into the load balancer

Telnet to the box and run 'enable' to gain admin access. The first thing you want to do is show the current configuration to view the existing setup on the other working boxes:
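Something along these lines (the management IP and prompt are hypothetical; substitute your own):

[cc]
$ telnet 192.168.1.5
ServerIron> enable
Password:
ServerIron#
[/cc]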

    Real servers: defining the multiple load balanced boxes

Show the existing configuration on the Foundry:
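On a ServerIron-style CLI this is:

[cc]
ServerIron# show running-config
[/cc]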

Take a look at the configuration of two "real" servers, which are the two servers behind the load balancer that will receive the balanced, sticky connections:
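A minimal sketch of what that part of the configuration might look like, assuming ServerIron-style syntax (the server names and RFC 1918 addresses are hypothetical):

[cc]
server real rs01 192.168.1.101
 port 8001
!
server real rs02 192.168.1.102
 port 8001
!
server virtual tomcat-vip 192.168.1.100
 port 8001 sticky
 bind 8001 rs01 8001 rs02 8001
[/cc]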

The above example balances TCP port 8001 traffic, which in this case is Tomcat. Here are entries for two servers handling plain HTTP traffic:
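Again a sketch with hypothetical names and addresses, but the "port" options are the ones discussed below:

[cc]
server real web01 192.168.1.103
 port default disable
 port http
 port http keepalive
 port http url "HEAD /"
!
server real web02 192.168.1.104
 port default disable
 port http
 port http keepalive
 port http url "HEAD /"
!
server virtual www-vip 192.168.1.100
 port http sticky
 bind http web01 http web02 http
[/cc]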

This example is similar to the Tomcat example, except several extra options are set. "port default disable" disables all other ports. "port http keepalive" and "port http url "HEAD /"" define the HTTP health checks that ensure Apache is running on each box. If a check fails, the load balancer stops sending traffic to that box and directs everything to the remaining one.

    SSL Connections

Incoming SSL connections are handled by the load balancer initially, then passed off to the actual server as regular HTTP / port 80 traffic. The internal box configuration is similar to the configuration examples above:
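A rough sketch of that idea: the virtual server listens on the SSL port and binds to the real servers' HTTP port. The certificate and key setup is device and firmware specific and omitted here, so treat this syntax as an assumption to verify against your manual:

[cc]
server virtual secure-vip 192.168.1.100
 port ssl sticky
 bind ssl web01 http web02 http
[/cc]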

    Configuring the external IP to NAT to the internal virtual

Typically, you will have a firewall in front of the load balancer that actually holds the external IP addresses. The traffic is filtered initially by the firewall, then NAT'd to the virtual IP (VIP) of the load balancer, which then handles balancing the traffic.

You will need to either establish a new external IP, or use an existing one (for instance, if you are moving from one web server to two web servers and want to balance the traffic using the load balancer). You need to set up the external IP address and NAT it to the internal VIP.
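How you do that depends entirely on the firewall. As one illustration, on a Linux firewall this could be a simple DNAT rule (the external IP 203.0.113.10 and the VIP 192.168.1.100 are hypothetical):

[cc lang="bash"]
# send inbound HTTP traffic that hits the external IP to the internal VIP
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.1.100
# allow the forwarded traffic through the filter table
iptables -A FORWARD -d 192.168.1.100 -p tcp --dport 80 -j ACCEPT
[/cc]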

    Verifying the configuration works

Once everything is set up properly, and the external IP is being NAT'd to the load balancer, it is time to ensure the load balancer is seeing the connections. You can also do this before the switchover on the firewall, just to ensure everything looks right before actually flipping traffic over.

To see the active connections being load balanced, issue the following command (replacing the server name with whichever one you want to check):
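Using the hypothetical real server name from the earlier examples:

[cc]
ServerIron# show server real web01
[/cc]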

That should display information similar to this:
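The exact layout varies by firmware version, but expect something along these lines (illustrative, not verbatim output):

[cc]
Real Servers Info
Name: web01    IP: 192.168.1.103    State: Active
Port    State    CurConn    TotConn
----    -----    -------    -------
http    Alive    23         14982
[/cc]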

The above displays the specific connection details for a single real server. To check the VIP / virtual server:
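Again with the hypothetical virtual server name from above:

[cc]
ServerIron# show server virtual www-vip
[/cc]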

Which will display the following:
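Illustrative rather than verbatim, but the "ServerConn" column is the one to watch:

[cc]
Virtual Servers Info
Server Name: www-vip    IP: 192.168.1.100    Status: enabled
Port    Sticky    ServerConn
----    ------    ----------
http    YES       46
[/cc]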

You can see that "ServerConn" is displaying 46 connections. That's it!

Automatically Deploy Debian Load Balancers with Bash Scripting

In yet another post in our automation series, we will share a bash script that automates the deployment of Debian-based load balancers (specifically with LVS, the Linux Virtual Server project).

Even though the environments and systems you deploy may get more complicated, as with load balancers, there is always a baseline these systems can be brought to before further configuration and customization needs to happen.

There are many things that can be automated in this process, as you will see in the script below. In most round-robin load balancing scenarios, little configuration is needed beyond what this script does.

Obviously you will likely need to modify the script to suit the needs and standards of your organization.

Hopefully this will help you roll out many Debian load balancers! May the load be split evenly between all your systems 😉

[cc lang="bash"]

#!/bin/bash
# Debian LVS deployer script
# Version 1.0

PROGNAME="$0"
VERSION="1.0"

# working directory for deployer process.
WORKDIR="/root"

# tasks left (this is updated every step to accommodate recovery during
# the deployer process)
TASKS="./deploy-lvs.tasks"

init_tasks() {
# This function will write a new tasks file (one task/function name per line).
# It's called from the main body of the script if a tasks file does not exist.
# NOTE: the original task list was mangled in formatting; set_hostname is
# shown here as a representative first task.
cat > $TASKS << EOF
set_hostname
EOF
return 0
}

set_hostname() {
# reconstructed task: write the hostname out for the system and the MTA
hostname > /etc/hostname
hostname > /etc/mailname
return 0
}

usage() {
echo "[+] Usage: $PROGNAME"
echo
return 0
}
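
# NOTE: installer_splash and clean_up_and_reboot are invoked below but were
# missing from the original listing; these are minimal assumed placeholders.
installer_splash() {
echo "[+] Debian LVS deployer - version $VERSION"
return 0
}

clean_up_and_reboot() {
# remove the (now empty) tasks file and reboot into the finished system
rm -f $TASKS
reboot
}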

###############################
### MAIN SCRIPT STARTS HERE ###
###############################

# installer_splash
installer_splash

# fix working dir.
cd $WORKDIR

# does our installer file exist? if not, initialize it.
if [ ! -f $TASKS ]
then
echo "[+] No task file found, installation will start from the beginning."
init_tasks
if (($? != 0))
then
echo "[!] ERROR: Cannot create tasks file. Installation will not continue."
exit 1
fi
else
echo "[+] Tasks file located - starting where you left off."
fi

# start popping off tasks from the task list and running them.
# pop the first step off of the list
STEP=$(head -n 1 $TASKS)
while [ ! -z "$STEP" ]
do
# execute the function.
echo -e "\n\n###################################"
echo "[+] Running step: $STEP"
echo -e "###################################\n\n"
$STEP
if (($? != 0))
then
# command failed.
echo "[!] ERROR: Step $STEP failed!"
echo "    Installation will now abort - you can pick it up after fixing the problem"
echo
exit 1
fi
# throw up a newline just so things don't look so crowded
echo
# remove the function from the task list.
perl -pi -e "s/$STEP\n?//" $TASKS || exit 1
STEP=$(head -n 1 $TASKS)
done

# clean_up_and_reboot
echo "[+] Installation finished - cleaning up."
clean_up_and_reboot

# script is done now - termination should happen with clean_up_and_reboot.
echo "[!] Should not be here!"
exit 1
[/cc]
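
To use the deployer, copy the script to a freshly installed Debian box and run it as root. If a step fails, fix the problem and run it again; it picks up where it left off. (The file name below is hypothetical.)

[cc lang="bash"]
chmod +x deploy-lvs.sh
./deploy-lvs.sh
[/cc]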