Clone a XEN VPS server that resides on LVM (Logical Volume Manager)

Hello!

We thought it would be important to share this information as it might be interesting to someone who wants to replicate the same VPS across many instances in order to create a farm of web servers (for example).

This uses very similar concepts to our LVM XEN backup post a while back.

Step 1: Take a snapshot of the current VPS

This is simple. Use the lvcreate command with the -s option to create a snapshot of the running VPS. We assume your VPS is 5GB in size, so just replace that with however large your VPS is :

lvcreate -s -L 5G -n snapshot_name /dev/VolGroup00/running_vps_image

Step 2: Create your new VPS

This is important. You want to create a new vps, assign a MAC and IP address first and let the creation process fully finish. Then shut the VPS down.
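
Once provisioning has fully finished, the new guest can be stopped from the dom0; for example (the name new_vps is just a placeholder for whatever your panel called the guest):

xm shutdown new_vps
xm list    # the guest should disappear from the list once it has fully shut down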

Step 3: Copy the snapshot to the new VPS

All you have to do is use the dd command to transfer the snapshot image to the newly created VPS image :

dd if=/dev/VolGroup00/snapshot_name of=/dev/VolGroup00/new_vps_image
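
If the image is large, a bigger block size can speed the copy up considerably; for example (the 4M value is an arbitrary choice):

dd if=/dev/VolGroup00/snapshot_name of=/dev/VolGroup00/new_vps_image bs=4M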

All done! Don't forget to remove the snapshot after you're done with it :

lvremove -f /dev/VolGroup00/snapshot_name

Start up the new VPS and you should have a carbon copy of the previous one!

Migrate from Linux to Xen with Rsync

I decided to write this little guide to provide the relatively simple steps needed to migrate your linux system to a Xen (HVM) virtual instance.

It is assumed that your source and destination boxes each have only one root “/” partition. If you partitioned your file system differently, you will have to adapt these instructions accordingly.

The following steps walk you through the process of migrating linux to Xen from start to finish :

1. Install the exact same version of linux on your destination server
This isn’t really 100% necessary, obviously. You could always boot into Finnix, partition your disk and install GRUB yourself. If you are uncomfortable doing that, install the distribution from start to finish; the file system will be overwritten anyway.

2. Boot into finnix on the destination system
If you have never used Finnix, it is a “self contained, bootable linux distribution”. I like it a lot actually, and have used it for similar purposes, rescue operations and the like.

3. Setup networking on both destination and source systems
If both systems are on the same network, you could assign local IP addresses to ensure the process of synchronisation is speedy and unobstructed.

Either way, ensure you configure networking, set a root password and start ssh :

passwd
/etc/init.d/ssh start
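
If you need to set a static address in Finnix rather than relying on DHCP, a minimal sketch looks like this (the interface name, address and gateway are assumptions; adjust them for your network):

ifconfig eth0 10.0.0.60 netmask 255.255.255.0 up
route add default gw 10.0.0.1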

4. Mount the partition that you want to copy to on the destination server
Remember, so far everything you are doing has been on the destination server. Mount the destination partition within finnix :

mkdir -p /mnt/xvdb
mount /dev/xvdb /mnt/xvdb

5. On the source server, rsync all the files of the source partition to the destination partition
When logged into the source server, simply issue the following rsync command and direct it to the destination server’s partition you just mounted :

rsync -aHSKDvz -e ssh / root@12.34.56.78:/mnt/xvdb/

The rsync process will complete and the partition on the destination server should be ready to boot into. Remember to change the networking configuration if you don't want any IP conflicts to happen.
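
Since the source is a live system, it is also worth excluding pseudo-filesystems such as /proc and /sys so their contents are not copied across. One possible variation of the command above:

rsync -aHSKDvz -e ssh --exclude='/proc/*' --exclude='/sys/*' / root@12.34.56.78:/mnt/xvdb/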

I hope this helps!

Creating a Xen template

One way to increase the efficiencies of Xen based systems is to utilize templates. VMware talks about this in their whitepaper for ESX2 best practices.

With Xen, you have to create your own. Here is a straightforward guide for how to do it.

1. Bootstrap a DomU named with a -tpl suffix (e.g. centos4-tpl).

I recommend using a file-backed VBD, but a partition or LVM volume will work fine as well. Here is an example /etc/xen/centos4-tpl :

kernel = "/boot/vmlinuz-2.6.12.6-xenU"
memory = 256
name = "centos4-tpl" 
disk = [  'file:/opt/xen/domains/centos4-tpl/diskimage,sdb1,w','file:/opt/xen/domains/centos4-tpl/swapimage,sdb2,w'  ]
root = "/dev/sdb1 ro"
dhcp="dhcp

This is just a normal system (DomU) install – see Centos-4 on Xen for an example.

2. Un-customize files. Inside the VM, edit the following files:

/etc/hosts
remove any address lines other than localhost

/etc/sysconfig/network
use a generic hostname which will be unique to each deployment

NETWORKING=yes
HOSTNAME=centos4-tpl-changeme.example.com

/etc/sysconfig/network-scripts/ifcfg-eth0
should look like this:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp

also important – remove any line starting with HWADDR, e.g.:

HWADDR=00:10:5A:XX:YY:ZZ
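
One way to strip that line non-interactively (a sketch; sed -i assumes GNU sed, which CentOS provides):

sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0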

Other configuration files to consider tweaking include /etc/dhclient.conf & /etc/hosts

3. Files to remove:

– SSH Host key files (auto-created at boot time)

rm -f /etc/ssh/*host*

4. Shutdown the template VM

xm shutdown centos4-tpl

You might normally link your VMs into /etc/xen/auto. I recommend against this as the template VM can be left shutdown until/unless you want to update it, saving valuable RAM and CPU cycles.

Clone the virtual disk

Now we can deploy from the template by cloning the data into a clean diskimage (or partition, or LVM volume). Create the diskimage at an appropriate size (it must be at least as large as the template). The nice thing here is the flexibility: for instance, you can have a file-based diskimage and clone the data onto LVM volumes. As long as you can mount the (virtual) disks, you can clone templatized systems.

Here we use /mnt/disk to mount the new system disk, and /mnt/image to mount the template disk.

First, mount the template disk.

mount -o loop /opt/xen/domains/centos4-tpl/diskimage /mnt/image

Next, create and mount the new system (DomU) disk space & swap space.

mkdir -p /opt/xen/domains/cloned
cd /opt/xen/domains/cloned
dd if=/dev/zero of=diskimage bs=1024k count=2048
dd if=/dev/zero of=swapimage bs=1024k count=256
mkfs.ext3 diskimage
mkswap swapimage
mkdir -p /mnt/disk
mount -o loop /opt/xen/domains/cloned/diskimage /mnt/disk

Create the exclude file in /tmp/XenCloneExclude

proc/*
users/*
tmp/*
lost+found/
etc/mtab
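
One way to create that file in a single step (using the same entries listed above):

cat > /tmp/XenCloneExclude <<'EOF'
proc/*
users/*
tmp/*
lost+found/
etc/mtab
EOF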

Copy the data across

rsync -av -SHWD --exclude-from="/tmp/XenCloneExclude" /mnt/image/ /mnt/disk

Chroot into the newly copied template and fixup certain files

chroot /mnt/disk /bin/bash

Fix the hostname, etc. in the files we “un-customized” in the template.
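
For example, to drop in a real hostname while still inside the chroot (the name cloned01 is just a placeholder):

sed -i 's/centos4-tpl-changeme.example.com/cloned01.example.com/' /etc/sysconfig/network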

Exit the chroot, then unmount both the new disk and the template image

umount /mnt/disk
umount /mnt/image

Setup your Xen config and be on your way!

cd /etc/xen
cp centos4-tpl cloned
(edit cloned to change name and paths to disk and swap)
xm create -c cloned
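
For reference, the edited /etc/xen/cloned might end up looking something like this (following the paths used above; treat it as a sketch rather than a drop-in config):

kernel = "/boot/vmlinuz-2.6.12.6-xenU"
memory = 256
name = "cloned"
disk = [ 'file:/opt/xen/domains/cloned/diskimage,sdb1,w','file:/opt/xen/domains/cloned/swapimage,sdb2,w' ]
root = "/dev/sdb1 ro"
dhcp = "dhcp"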

How to Back Up Xen with Logical Volume Mounts; Works with HyperVM, SolusVM, FluidVM and More

Through our research and implementation of many Xen environments, it has become necessary to develop a reliable and secure method for backing up our Xen instances that are mounted on Logical Volumes (LVM).

The underlying problem is that the logical volume is usually a live file system that cannot be directly mounted / backed up or imaged safely.

We have written a script that goes through every running Xen logical volume, creates a snapshot of it, and then uses dd on that snapshot to image the volume to another server over ssh.

You would be surprised at how well these dd images compress. Piping dd to bzip2 then to ssh to receive the image produces a very substantial compression ratio.

The initial trouble was writing the logic in the script to properly go through each Xen LV, create the snapshot, image it and then remove the snapshot. Obviously extensive testing had to be completed to ensure reliability and proper error reporting.

This script should work with any 3rd party Xen control panel implementation (HyperVM, FluidVM, SolusVM to name a few). They all use the same underlying technology / framework. Since our script is a simple bash / shell script, it will run on any linux based system with little modification.

If you are using an LV for another purpose on the same box, it is probably a good idea to modify the script to ignore it so it doesn't inadvertently get backed up.

Before implementing the script, it is probably a good idea to go through the motions manually just to see how it performs :

lvcreate -s -L 5G -n vm101_img_snapshot /dev/vps/vm101_img
dd if=/dev/vps/vm101_img_snapshot | bzip2 | ssh xenbackup@x.x.x.x "dd of=vm101_img.bz2"
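
Restoring one of those images is essentially the reverse; a rough sketch, assuming the compressed file is available on the Xen host and the target VM is shut down:

bzcat vm101_img.bz2 | dd of=/dev/vps/vm101_img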

One thing you can't get around is space: you need free space in the volume group at least as large as the largest Xen image, otherwise the script will fail at the snapshot creation step.
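
The free space available for snapshots can be checked ahead of time with vgs, for example:

vgs -o vg_name,vg_size,vg_free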

Find the script below. Hopefully it will help make your life easier (as well as being able to sleep at night) :

#!/bin/bash
# XEN Backup script
# Written by Star Dot Hosting

todaysdate=`date "+%Y-%m-%d"`

echo "XEN Backup Log: " $currentmonth > /var/log/backup.log
echo -e "------------------------------------" >> /var/log/backup.log
echo -e "" >> /var/log/backup.log


for obj0 in $(lvs --noheadings --separator ',' -o lv_name,lv_size | grep -v "swap" | awk -F "," '{printf "%s\n", $1}');
do

#grab the snapshot size
snapsize=`lvs --noheadings --separator ',' -o lv_name,lv_size | grep -v "swap" | grep $obj0 | awk -F "," '{printf "%s", $2}'`

#create the snapshot
lvcreate -s -L $snapsize -n ${obj0}_snapshot /dev/xenlvm/$obj0 >> /var/log/backup.log 2>&1

#dd piped to bzip2 to compress the stream before piping it over the network via ssh to the destination box
dd if=/dev/xenlvm/${obj0}_snapshot | bzip2 | ssh xenbackup@0.0.0.0 "dd of=/home/xenbackup/xen-backups/$obj0.$todaysdate.bz2" >> /var/log/backup.log 2>&1

if [ "$?" -eq 1 ]
then
        echo -e "***SCRIPT FAILED, THERE WERE ERRORS***" >> /var/log/backup.log 2>&1
        cat /var/log/backup.log | mail -s "XEN Backup Job failed" admin@yourdomain.com
        lvremove -f /dev/xenlvm/${obj0}_snapshot
        exit 1
else
        echo -e "Backup of $obj0 Completed Successfully!" >> /var/log/backup.log 2>&1
fi

# remove the snapshot
lvremove -f /dev/xenlvm/${obj0}_snapshot


done

cat /var/log/backup.log | mail -s "XEN Backup Job Completed" admin@yourdomain.com

If you plan on automating this script in a cronjob, it may be a good idea to utilize ssh key authentication between your destination server and your Xen server.
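
A minimal sketch of that setup (the script path and schedule below are assumptions):

# on the Xen host, generate a key and push it to the backup server
ssh-keygen -t rsa
ssh-copy-id xenbackup@0.0.0.0

# crontab entry to run the backup nightly at 2am
0 2 * * * /root/xen_backup.sh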

Migrate FreeBSD to Xen

There seem to be a lot of tutorials on how to dump/restore FreeBSD installations. However, none of them appears to cover everything that is actually required from start to finish.

The one thing that I think is lacking proper documentation is using FreeBSD as a LiveCD (LiveFS) with networking, which is necessary for the migration.

We decided to write this tutorial so that people could have one place to establish all the necessary things required for this type of migration from start to finish.

In this scenario we actually migrated a FreeBSD installation from VMware to Xen HVM. In the end, there were no technical problems with FreeBSD running after it was migrated; it ran beautifully, actually.

I should note that this was tested with FreeBSD 7.2-RELEASE disc images.

Please find the guide below :

Prepare OLD Instance

1. Boot into old operating system

2. Take note of partition slices / slice names / sizes / etc

3. Reboot with FreeBSD LiveFS disc

Prepare NEW Xen

1. Boot Xen instance with FreeBSD Disc 1 ISO

2. Partition the disk and install the boot loader with exactly the same slices as the old instance. To be extra careful, give your slices a bit more disk space than the old implementation.

3. Write changes & reboot with FreeBSD LiveFS disc

Establish FreeBSD LiveFS environment

You need to establish a few things to get SSH / DUMP / RESTORE to work properly on both the old and new instances

1. Boot into FreeBSD LiveFS (Fixit > livefs)

2. Create the following folders :

/etc/ssh
/usr/sbin
/usr/bin
/root
/root/.ssh
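
All of those can be created in one go from the Fixit shell, for example:

mkdir -p /etc/ssh /usr/sbin /usr/bin /root/.ssh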

3. Copy the following files :

cp /mnt2/bin/ps /bin
cp /mnt2/sbin/sysctl /sbin
cp /mnt2/etc/ssh/* /etc/ssh
cp /mnt2/bin/csh /bin
cp /mnt2/bin/cat /bin
cp /mnt2/sbin/restore /sbin

4. Set an IP address on both the old and new instances. FreeBSD interface names depend on the driver (for example em0, re0 or xn0), so substitute whatever ifconfig shows on your system:

new :

ifconfig em0 10.0.0.50 netmask 255.255.255.0

old :

ifconfig em0 10.0.0.60 netmask 255.255.255.0

5. Start sshd :

/mnt2/etc/rc.d/sshd forcestart

Start transferring slices

1. To allow partitions to transfer properly, the /tmp partition should be mounted on the new Xen instance, since restore needs scratch space in /tmp :

mount -t ufs /dev/ad0s1e /tmp

2. For the first partition you wish to transfer, mount the empty slice on the new xen instance :

mount -t ufs /dev/ad0s1a /mnt/ufs.1

Sometimes you have to run fsck so the filesystem is marked clean before it will mount :

fsck /dev/ad0s1a

3. On the old instance :

dump -0aLf - /dev/ad0s1a | ssh 10.0.0.50 "cd /mnt/ufs.1 && cat | restore -rf -"

That should dump/restore the slice from old > new.

Final things on the new Xen instance

Don't forget to boot the new instance in single user mode and modify fstab to reflect the new slice names (if applicable), as well as rc.conf for any hard-coded interface names, etc. FreeBSD won't boot, or will at least have problems, if the right slice names and interface names aren't present.

You can mount the /etc slice while still in the LiveFS for the new FreeBSD instance.

Hopefully this was helpful! Obviously this has nothing to do with Xen, other than the fact that we were migrating the FreeBSD vmware instance to Xen.

You can do this on “real” machines, or from Xen to VMware, or anywhere else, as long as the hardware is compatible.
