Howto make your private VM Cluster, Part II

In the previous entry I showed how to build the basic structure for your own private VM cluster. Today I'm going to show you how to create a cluster with two VMs that provides high availability for a web server, using DRBD to replicate the web site.

First you need to create a virtual machine. I decided to build it with Gentoo, and I will use LVM so the partition dedicated to DRBD can stay small, since this is a simple test.

I created a qemu-img disk image for the base Gentoo installation (my goal is to install Gentoo on the VM and then reuse it as the base for the other VMs). To create the image just run:
qemu-img create -f qcow2 gentoo.qcow2 10G
I started the VM using that image and followed Gentoo's installation guide. My partition scheme was 100MB (boot), 512MB (swap), 5GB (root), and the rest for LVM.
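For reference, the resulting layout looks roughly like this (sizes approximate; the disk appears as sda during installation, see the note below):
/dev/sda1   100MB   /boot
/dev/sda2   512MB   swap
/dev/sda3     5GB   /
/dev/sda4    rest   LVM (type 8e, used for DRBD later)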

I used gentoo-sources, enabling all the virtio devices, DRBD and the device-mapper. I configured genkernel to use LVM so that the LVM volumes are detected at boot, and I set up GRUB with all the genkernel kernel parameters.
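Roughly, these are the kernel options involved (exact option names may differ a bit between kernel versions):
# virtio block and network devices
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
# device-mapper (needed by LVM)
CONFIG_BLK_DEV_DM=y
# DRBD
CONFIG_BLK_DEV_DRBD=m
The kernel and initramfs can then be built with something like:
genkernel --lvm --menuconfig all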

Remember that when you boot from the minimal installation CD the hard disk is called sda, but once virtio is set up in the kernel it becomes vda. I created a generic script, called startVM, that attaches the disk and the network card through virtio:
#!/bin/bash
set -x

if [[ $# != 2 && $# != 3 ]]; then
    echo "Usage: startVM <mac> <hda> [cdrom]"
    exit 1
fi

# Get the location of the script
SCRIPT=`readlink -f $0`
SCRIPT_PATH=`dirname $SCRIPT`

# Create the tap interface so that the qemu-ifup script can bridge it
# before qemu starts
USERID=`whoami`
IFACE=`sudo tunctl -b -u $USERID`

# Setup KVM parameters
CPUS="-smp 8"
MEMORY="-m 1G"
MACADDRESS=$1
HDA=$2
if [[ $# == 3 ]]; then
    CDROM="-cdrom $3"
else
    CDROM=""
fi

# Start kvm
kvm $CPUS $MEMORY -drive file=$HDA,if=virtio,boot=on $CDROM -net nic,model=virtio,macaddr=$MACADDRESS -net tap,ifname=$IFACE,script=$SCRIPT_PATH/qemu-ifup

# kvm has stopped - remove the tap interface
sudo tunctl -d $IFACE &> /dev/null
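The qemu-ifup script it references only has to attach the tap interface to the bridge built in Part I; a minimal sketch, assuming the bridge is called br0:
#!/bin/bash
# $1 is the tap interface created by tunctl; attach it to the bridge
sudo ip link set $1 up
sudo brctl addif br0 $1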
After successfully booting into the Gentoo VM I halted it to create another disk image. I may want to reuse this vanilla Gentoo VM in the future, so I created another disk image using the vanilla one as its backing file:
qemu-img create -f qcow2 -o backing_file=gentoo.qcow2 gentoo-drbd.qcow2
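You can check that the new image actually points at the base one with:
qemu-img info gentoo-drbd.qcow2
which lists gentoo.qcow2 as the backing file.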
Then I started the VM from the new image with:
startVM DE:AD:BE:EF:E3:1D gentoo-drbd.qcow2
The MAC Address will be important later.

Now I installed all the packages that each node in the cluster will need. The goal is to avoid building them several times and to reuse this image for all nodes. The basic steps are as follows:
emerge -av telnet-bsd drbd lvm2 pacemaker lighttpd
eselect python set python2.6
pvcreate /dev/vda4
vgcreate internalhd /dev/vda4
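A quick sanity check that the volume group is in place (the logical volume for DRBD will be created later, in the DRBD part):
pvs /dev/vda4
vgs internalhd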
I changed Python to 2.6 because pacemaker requires it. I added the following to /etc/hosts to avoid having to do it everywhere later:
192.168.100.10 node1
192.168.100.11 node2
192.168.100.20 cluster
Next I created an image for each of the two nodes as follows:
qemu-img create -f qcow2 -o backing_file=gentoo-drbd.qcow2 gentoo-drbd-node1.qcow2
cp gentoo-drbd-node1.qcow2 gentoo-drbd-node2.qcow2
Then I created two scripts, one to start each VM. The contents are as follows (for node2 you must change the MAC address and the disk image, as shown after the script):
#!/bin/bash
set -x

# Get the location of the script
SCRIPT=`readlink -f $0`
SCRIPT_PATH=`dirname $SCRIPT`

$SCRIPT_PATH/startVM DE:AD:BE:EF:E3:1D gentoo-drbd-node1.qcow2
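For node2 only the last line changes, pointing at the other image with a different MAC (the address below is just an example; any unused locally administered MAC will do):
$SCRIPT_PATH/startVM DE:AD:BE:EF:E3:1E gentoo-drbd-node2.qcow2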
I then configured a static IP for each node, that is: edit /etc/conf.d/hostname to be either node1 or node2, and set the contents of /etc/conf.d/net to:
config_eth0=( "192.168.100.10/24" )
routes_eth0=( "default via 192.168.100.1" )
For node2 the IP ends in 11. Next I configured corosync; this must be done on both nodes:
cd /etc/corosync
cp corosync.conf.example corosync.conf
Edit corosync.conf and set bindnetaddr to the IP address of the node. Then add the pacemaker service by appending the following to the end of the file:
service {
   name: pacemaker
   ver: 0
}
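For reference, the interface block inside the totem section on node1 ends up looking more or less like this (the multicast values are the defaults from the example file; yours may differ):
interface {
   ringnumber: 0
   bindnetaddr: 192.168.100.10
   mcastaddr: 226.94.1.1
   mcastport: 5405
}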
I started corosync on both nodes and marked it to start on boot:
/etc/init.d/corosync start
rc-update add corosync default
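Once corosync is running on both nodes you can check that they see each other with:
crm_mon -1
It should list both node1 and node2 as online.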
Then I proceeded to configure the cluster. First I turned off STONITH:
crm configure property stonith-enabled=false
Then I created the Virtual IP for the Cluster:
crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip=192.168.100.20 cidr_netmask=32 op monitor interval=30s
I marked the cluster to run with two nodes (without quorum, please don't discuss this in the comments) and set resource stickiness (to avoid having the resources move around if not needed):
crm configure property no-quorum-policy=ignore
crm configure rsc_defaults resource-stickiness=100
I added lighttpd as a service. First I created a different index.html on each node so that I can check which one is answering (see the example after the crm commands below). Next I created the service in the cluster:
crm configure primitive WebSite lsb:lighttpd op monitor interval=30s
crm configure colocation website-with-ip INFINITY: WebSite ClusterIP
crm configure order lighttpd-after-ip mandatory: ClusterIP WebSite
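The per-node pages only need to differ; assuming lighttpd's default Gentoo document root, something like this is enough:
echo "node1" > /var/www/localhost/htdocs/index.html   # on node1
echo "node2" > /var/www/localhost/htdocs/index.html   # on node2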

You can now access the cluster address (http://192.168.100.20) to see who's answering. Stopping the corosync service on the active node lets you watch the web site migrate to the other node.
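A quick way to exercise the failover from the host (assuming wget is available):
wget -qO- http://192.168.100.20   # note which node's page comes back
# on the node that answered:
/etc/init.d/corosync stop
wget -qO- http://192.168.100.20   # should now show the other node's page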

Next up DRBD.
