Howto make your private VM Cluster, Part III

Continuing with my saga, next up is DRBD. I'm using DRBD because I also want to test it as a viable alternative for network RAID.

To use DRBD I first created an LVM volume to use (on both nodes, since each node needs its own backing device):
lvcreate -n drbd-demo -L 100M internalhd
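To confirm the volume was created (using the same volume group name as in the device paths below), a quick check is:
lvs internalhd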
Then I configured DRBD; fortunately Gentoo simplifies a great part of the process (you have to do this on both nodes):
cd /etc
cp /usr/share/doc/drbd-*/drbd.conf.bz2 .
bunzip2 drbd.conf.bz2
Then I created a wwwdata resource by first configuring it (again on both nodes). This is done by creating the file /etc/drbd.d/wwwdata.res with the following contents:
resource wwwdata {
    meta-disk internal;
    device    /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on node1 {
        disk    /dev/mapper/internalhd-drbd--demo;
        address 192.168.100.10:7789;
    }
    on node2 {
        disk    /dev/mapper/internalhd-drbd--demo;
        address 192.168.100.11:7789;
    }
}
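If you want to sanity-check the resource file, drbdadm can parse it and print the resulting configuration:
drbdadm dump wwwdata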
I added the drbd module to /etc/modules.autoload.d/kernel-2.6 on both nodes so it gets loaded at boot.
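Appending it can be as simple as:
echo drbd >> /etc/modules.autoload.d/kernel-2.6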

Finally I started DRBD on node 1 as follows:
drbdadm create-md wwwdata
modprobe drbd
drbdadm up wwwdata
And on node 2 as follows:
drbdadm --force create-md wwwdata
modprobe drbd
drbdadm up wwwdata
I then used node 1 as the reference for the data:
drbdadm -- --overwrite-data-of-peer primary wwwdata
I monitored the sync process until it was completed with:
watch cat /proc/drbd
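If you prefer a one-shot check over watching /proc/drbd, the connection and disk states can also be queried per resource:
drbdadm cstate wwwdata
drbdadm dstate wwwdata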
When the sync completed I created the file system and populated it with an index.html file identifying the cluster:
mkfs.ext4 /dev/drbd1
mount /dev/drbd1 /mnt
# Create an index.html in /mnt
umount /dev/drbd1
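The content of that index.html doesn't matter much; something as simple as this (run before the umount) is enough for testing:
echo "<html><body>Served from the DRBD cluster</body></html>" > /mnt/index.html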
I configured the cluster to use DRBD as follows (this will enter the crm shell, but don't panic):
crm
cib new drbd
configure primitive WebData ocf:linbit:drbd params drbd_resource=wwwdata op monitor interval=30s
configure ms WebDataClone WebData meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
cib commit drbd
quit
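Before moving on, it's worth checking that the clone came up and that one node was promoted to Master; a one-shot status view is enough:
crm_mon -1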
After this I configured a WebFS resource so that lighttpd will serve from the DRBD-backed volume.
crm
cib new webfs
configure primitive WebFS ocf:heartbeat:Filesystem params device="/dev/drbd/by-res/wwwdata" directory="/var/www/localhost/htdocs" fstype="ext4"
configure colocation WebFS-on-WebData inf: WebFS WebDataClone:Master
configure order WebFS-after-WebData inf: WebDataClone:promote WebFS:start
configure colocation WebSite-with-WebFS inf: WebSite WebFS
configure order WebSite-after-WebFS inf: WebFS WebSite
cib commit webfs
quit
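On whichever node ended up hosting the services, the DRBD device should now be mounted on the document root; a quick check is:
mount | grep drbd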
After this, if you go to your cluster web page (http://192.168.100.20) you will see the contents of the index.html you created earlier.

You can use "crm_mon" to monitor the cluster and look at /var/log/message to view error messages. To simulate a full service relocation go the the node were the services are running and issue "crm node standby" this will put the current node on standby forcing the services to be moved to the other node. After that you can do "crm node online" to bring the node back online.

This concludes this series. Maybe I'll put up another one on having NFS use the DRBD volume, depending on free time.
