Foundry Load Balancers: HTTP Sticky Sessions

This post is intended to be a general guide for configuring “stickied”, load-balanced HTTP servers. Whether it’s F5 load balancers, Foundry load balancers, or open-source load balancers (keepalived/LVS), the concepts are the same and can be carried across said platforms.

If you have a pair of Foundrys and are looking to configure stickied, load-balanced HTTP servers, hopefully this guide will provide some assistance.

    Logging into the load balancer

Telnet to the box and ‘enable’ to allow admin access. The first thing you want to do is show the current configuration, to view the existing setup for other working boxes.
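A quick sketch of the login (the management IP here is a placeholder for your own):

    telnet 10.0.0.254
    enable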

    Real servers: defining the multiple load-balanced boxes

Show the existing configuration on the Foundry.
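On most ServerIron software releases that is simply:

    show running-config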

Take a look at the configuration of two “real” servers, which are the two servers behind the load balancer that will receive the balanced sticky connections.
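Something along these lines, where the server names and IP addresses are placeholders for your own:

    server real tomcat1 10.0.0.11
     port 8001
    !
    server real tomcat2 10.0.0.12
     port 8001
    !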

The above example is balancing TCP 8001 traffic, which is for Tomcat. Here are entries for two servers doing simple HTTP traffic.
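Again a sketch with placeholder names and addresses; the health-check options are explained below:

    server real web1 10.0.0.13
     port default disable
     port http
     port http keepalive
     port http url "HEAD /"
    !
    server real web2 10.0.0.14
     port default disable
     port http
     port http keepalive
     port http url "HEAD /"
    !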

This example is similar to the Tomcat example, except it uses several extra options. “port default disable” disables all other ports. “port http keepalive” and “port http url “HEAD /”” define the HTTP health checks that ensure Apache is running on that box. If a check fails, the load balancer stops sending traffic to that box and fails over to the second one.

    SSL Connections

Incoming SSL connections are handled by the load balancer initially, then passed off to the actual server as regular HTTP / port 80 traffic. The internal box configuration would be similar to the configuration examples above.
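The virtual server side might look something like this, assuming an SSL-capable ServerIron and the web1/web2 real servers from above (the VIP name and address are placeholders, and the exact syntax varies by software version). The “sticky” keyword on the virtual port is what pins each client to a single real server:

    server virtual vip-web 10.0.0.100
     port ssl sticky
     bind ssl web1 http web2 http
    !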

    Configuring the external IP to NAT to the internal virtual

Typically, you will have a firewall in front of the load balancer that actually holds the external IP addresses. The traffic is filtered initially by the firewall, then NAT’d to the virtual IP (VIP) of the load balancer, which then handles balancing the traffic.

You will need to either establish a new external IP, or use an existing one (for instance, if you are moving from one web server to two web servers and want to balance the traffic using the load balancer). You need to set up the external IP address and NAT it to the internal VIP.
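If the firewall in front happens to be a Linux box, the NAT rule might look something like the following (the addresses are placeholders; on a commercial firewall this is the equivalent of a one-to-one NAT entry):

    # send inbound HTTP for the external IP to the load balancer VIP
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.100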

    Verifying the configuration works

Once everything is set up properly and the external IP is being NAT’d to the load balancer, it is time to ensure the load balancer is seeing the connections. You can also do this before making the switchover on the firewall, just to ensure everything looks right beforehand.

To see the active connections being load balanced, issue the following command, replacing the server name with whichever one you want to check.
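On most ServerIron releases, using the placeholder name web1 from the examples above:

    show server real web1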

That should display the specific connection details and counters for that single real server.

To check the VIP / virtual server (again with a placeholder name):
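    show server virtual vip-web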

In the output, look at the “ServerConn” field, which shows the current number of connections through the virtual server. Here it was displaying 46 connections, so traffic is flowing through the VIP. That’s it!

Creating a Xen template

One way to increase the efficiency of Xen-based systems is to utilize templates. VMware talks about this in their whitepaper on ESX2 best practices.

With Xen, you have to create your own. Here is a straightforward guide for how to do it.

1. Bootstrap a DomU named <name>-tpl (e.g. centos4-tpl).

I recommend using a file-backed VBD, but a partition or an LVM volume will work fine as well. Here is an example /etc/xen/centos4-tpl.
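A minimal sketch, assuming file-backed root and swap images under /xen and a xenU guest kernel on the Dom0 (the paths, kernel version, and memory size are placeholders for your own):

    kernel = "/boot/vmlinuz-2.6.9-xenU"
    memory = 256
    name   = "centos4-tpl"
    disk   = [ 'file:/xen/centos4-tpl.img,sda1,w',
               'file:/xen/centos4-tpl-swap.img,sda2,w' ]
    root   = "/dev/sda1 ro"
    vif    = [ '' ]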

This is just a normal system (DomU) install (see Centos-4 on Xen for an example).

2. Un-customize files. Inside the VM, edit the following files:

/etc/hosts
remove any address lines other than localhost

/etc/sysconfig/network
use a generic hostname which will be unique to each deployment

/etc/sysconfig/network-scripts/ifcfg-eth0
should look like this:
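A minimal sketch, assuming DHCP (use a static BOOTPROTO if you assign addresses per deployment):

    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=dhcp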

Also important: remove any line starting with HWADDR, e.g.:
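For example (the address itself is a made-up placeholder; 00:16:3E is the Xen vendor MAC prefix):

    HWADDR=00:16:3E:12:34:56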

Other configuration files to consider tweaking include /etc/dhclient.conf and /etc/hosts.

3. Files to remove:

– SSH host key files (auto-created at boot time)
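On CentOS the host keys live under /etc/ssh, and the sshd init script recreates them on first boot, so something like:

    rm -f /etc/ssh/ssh_host_*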

4. Shut down the template VM.
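Using the template name from above:

    xm shutdown centos4-tpl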

You might normally link your VMs into /etc/xen/auto. I recommend against this, as the template VM can be left shut down until/unless you want to update it, saving valuable RAM and CPU cycles.

5. Clone the virtual disk

Now we can deploy from the template by cloning the data into a clean disk image (or partition, or LVM volume). Create the disk image using an appropriate size (it must be larger than the template). The nice thing here is the flexibility: for instance, you can have a file-based disk image and clone the data onto LVM volumes. As long as you can mount the (virtual) disks, you can clone templatized systems.

Here we use /mnt/disk to mount the new system disk, and /mnt/image to mount the template disk.

First, mount the template disk.
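Assuming the file-backed image from the example config above:

    mkdir -p /mnt/image
    mount -o loop /xen/centos4-tpl.img /mnt/image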

Next, create and mount the new system (DomU) disk space & swap space.
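A sketch with placeholder paths and sizes (the seek trick creates a sparse file, so the image only consumes space as it fills):

    # 6 GB sparse disk image; must be at least as large as the template
    dd if=/dev/zero of=/xen/cloned.img bs=1M count=0 seek=6144
    mkfs.ext3 -F /xen/cloned.img
    # 512 MB swap image
    dd if=/dev/zero of=/xen/cloned-swap.img bs=1M count=512
    mkswap /xen/cloned-swap.img
    mkdir -p /mnt/disk
    mount -o loop /xen/cloned.img /mnt/disk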

Create the exclude file in /tmp/XenCloneExclude.
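The exact contents depend on what you want to skip; at minimum something like:

    lost+found
    tmp/*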

Copy the data across.
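One way to do it (rsync’s -a preserves ownership, permissions, and devices; -H keeps hard links; the exclude file is the one created above):

    rsync -aH --exclude-from=/tmp/XenCloneExclude /mnt/image/ /mnt/disk/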

Chroot into the newly copied template and fix up certain files:
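    chroot /mnt/disk /bin/bash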

Fix the hostname, etc. in the files we “un-customized” in the template.
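For example, inside the chroot (the hostname cloned.example.com is a placeholder):

    vi /etc/sysconfig/network    # set HOSTNAME=cloned.example.com
    vi /etc/hosts                # re-add the host's own address line if you use one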

Exit the chroot, then unmount both the template image and the new volume:
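    exit
    umount /mnt/disk
    umount /mnt/image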

Set up your Xen config and be on your way:

    cd /etc/xen
    cp centos4-tpl cloned
    # edit 'cloned' to change the name and the paths to disk and swap
    xm create -c cloned