Xen cluster nodes

On every cluster node the following systems must be configured:

Xen

As Debian provides only an old version of Xen in its repositories, we compiled Xen from source, following the how-to at http://www.howtoforge.com/perfect_setup_xen3_debian. After installing Debian Sarge using the linux26 boot parameter, execute the following commands:

apt-get remove exim4 exim4-base lpr nfs-common portmap\
 pidentd pcmcia-cs pppoe pppoeconf ppp pppconfig
apt-get install iproute bridge-utils python-twisted\
 gcc-3.3 binutils make zlib1g-dev python-dev transfig\
 bzip2 screen ssh debootstrap libcurl3-dev libncurses5-dev
cd /usr/src
wget http://bits.xensource.com/Xen/latest/xen-3.0.2-src.tgz
tar xvzf xen-3.0.2-src.tgz
cd xen-3.0.2-2
make world
make linux-2.6-xen-config CONFIGMODE=menuconfig

Now configure ReiserFS (or whatever your root file system is) and OCFS2 to be compiled into the kernel, not just as modules; that way, no initial RAM disk (initrd) needs to be built. Afterwards, save the kernel configuration and continue the installation:

make linux-2.6-xen-build
make linux-2.6-xen-install
update-rc.d xend defaults 20 21
update-rc.d xendomains defaults 21 20
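
Before adding the boot entry, it is worth verifying that both file systems were really built into the kernel (=y) rather than as modules (=m), since there is no initrd to load them from. A quick check from within the Xen source tree (the exact name of the kernel build directory may vary):

grep -E "CONFIG_(REISERFS_FS|OCFS2_FS)=" linux-2.6*-xen0/.config

Both options should read =y.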

The last step in setting up the Xen virtual machine monitor is to edit /boot/grub/menu.lst and add the following section as the default:

title   Xen 3.0.2 / XenLinux 2.6-xen0
root    (hd0,0)
kernel  /boot/xen-3.0.gz dom0_mem=65536
module  /boot/vmlinuz-2.6-xen0 root=/dev/hda1 ro console=tty0

Reboot the system to start your newly compiled Xen kernel.
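
After the reboot, a quick sanity check is to confirm that the Xen kernel is indeed running and that the Xen daemon answers (assuming the Xen tools were installed along with the hypervisor):

uname -r
xm list

uname -r should report the -xen0 kernel, and xm list should show Domain-0 as the only running domain.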

Redhat cluster suite

The Redhat cluster suite needs to be installed and configured exactly the same way as described for the storage nodes under the section called “Redhat cluster suite”. Additionally, the GNBD imports must be configured to start at boot. To do so, first edit /etc/modules and append the following line:

gnbd

and create the start/stop script /etc/init.d/gnbd_import:

#! /bin/sh
#
# gnbd_import   Import all GNBD devices from storage backend
#
# Author:       Daniel Bertolo <dbertolo@hsr.ch>.
#
# Version:      @(#)gnbd_import  1.00  25-Jun-2006  dbertolo@hsr.ch
#
set -e
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="Import all GNBD devices from storage backend"
NAME=gnbd_import
DAEMON=/sbin/$NAME
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0
# The config file contains the IP of the storage backend
# Read config file if it is present.
if [ -r /etc/default/$NAME ]
then
       . /etc/default/$NAME
fi
#
#       Function that starts the daemon/service.
#
d_start() {
        # everything after -- is passed on to gnbd_import itself
        start-stop-daemon --start --quiet --pidfile $PIDFILE \
                --exec $DAEMON -- -i $BACKEND
}
#
#       Function that stops the daemon/service.
#
d_stop() {
        # gnbd_import is not a long-running daemon, so instead of
        # start-stop-daemon we call it with -R, which removes all
        # currently imported GNBD devices
        $DAEMON -R
}
case "$1" in
  start)
        echo -n "Starting $DESC: $NAME"
        d_start
        echo "."
        ;;
  stop)
        echo -n "Stopping $DESC: $NAME"
        d_stop
        echo "."
        ;;
  restart|force-reload)
        echo -n "Restarting $DESC: $NAME"
        d_stop
        sleep 1
        d_start
        echo "."
        ;;
  *)
        echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
        exit 1
        ;;
esac
exit 0
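
Make the script executable, otherwise neither update-rc.d nor the init system can run it:

chmod 755 /etc/init.d/gnbd_import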

Finally, create the file /etc/default/gnbd_import to set the IP address of the storage backend:

BACKEND='152.96.121.180'

The start/stop script will import all GNBD devices exported by the storage backend. Be aware that the IP address is the one configured as the virtual interface on the storage backend. To import the GNBD devices at startup, run:

update-rc.d gnbd_import defaults 53 25
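
After starting the script once by hand, the imported devices should appear under /dev/gnbd:

/etc/init.d/gnbd_import start
ls /dev/gnbd

Depending on the version of the GNBD tools, gnbd_import -l also lists all imports together with their state.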

OCFS2 cluster

The kernel module for the Oracle Cluster File System was already configured together with the Xen kernel, as OCFS2 has found its way into the mainline Linux kernel. In order to work with OCFS2, the corresponding user-space tools need to be installed:

apt-get install ocfs2-tools

On every cluster node, the file /etc/ocfs2/cluster.conf needs to be created:

cluster:
        node_count = 3
        name = xencluster
node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = xenamo1
        cluster = xencluster
node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = xenamo2
        cluster = xencluster
node:
        ip_port = 7777
        ip_address = 10.0.0.3
        number = 2
        name = xenamo3
        cluster = xencluster
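
Depending on the version of ocfs2-tools, the O2CB driver must additionally be told to bring this cluster online at boot. On Debian this is configured in /etc/default/o2cb; the variable names below are the ones used by the o2cb init script and may differ slightly between versions:

O2CB_ENABLED=true
O2CB_BOOTCLUSTER=xencluster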

The OCFS2 service is automatically started during system startup. To start it manually, execute:

/etc/init.d/o2cb start

Mounting the cluster file system

After all subsystems have been started, you can easily mount the cluster file system:

mount -t ocfs2 /dev/gnbd/ocfscluster /mnt/cluster
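
The mount point must exist (mkdir -p /mnt/cluster). To mount the cluster file system automatically at boot, an entry in /etc/fstab marked with the _netdev option can be used; this defers the mount until networking, and thus the GNBD import, is available. A sketch, assuming the device and mount point from above:

/dev/gnbd/ocfscluster /mnt/cluster ocfs2 _netdev 0 0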