Xen with DRBD, GNBD and OCFS2 HOWTO

Daniel Bertolo

Markus Zingg


Table of Contents

Introduction
Overview
Storage backend
Setup
Xen cluster nodes
Xen
Redhat cluster suite
OCFS2 cluster
Mounting the cluster file system
Usage
Setting up virtual machines (domU's)
Doing your first live migration
Setting up the Xenamo framework
Installing the cluster skeleton
Installing the Argo server

Introduction

This HOWTO covers a Xen setup that allows the live migration of virtual machines (domU's) from one physical machine to another. The easier but far more expensive way to achieve this would be to buy a SAN and some fibre-channel-attached hosts. This setup instead lets you build your own redundant storage network with common PC hardware.
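Once the shared storage described below is in place, moving a running domU between hosts comes down to a single command. As a sketch (the domain name "vm01" and target host "node2" are placeholders, and the syntax is that of the Xen 3.x xm tool):

```
# Move the running domU "vm01" to node2 without shutting it down.
# Both hosts must see the same storage and allow relocation in
# /etc/xen/xend-config.sxp (xend-relocation-server yes).
xm migrate --live vm01 node2
```

The --live flag keeps the guest running during the transfer; without it, the domU is paused, copied, and resumed on the target.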

Overview

The storage backend consists of two hosts that use DRBD to keep the data redundant between them. On top of DRBD, a file system is exported to the cluster nodes using GNBD. So that all cluster nodes can access this file system concurrently, we use OCFS2 as a cluster file system[1]. Another possibility would be to export a separate GNBD device for each virtual machine (domU). But even then, you would need a cluster file system to hold the configuration files of the domU's, as every cluster node must know them. Since handling new domU's is easier when using loop devices, we decided to export only one large GNBD device.
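As a sketch of the replication layer, a DRBD resource spanning the two storage nodes could be declared as follows in /etc/drbd.conf. The hostnames (store1, store2), IP addresses and the backing partition /dev/sda3 are assumptions for illustration:

```
resource drbd0 {
  protocol C;                      # synchronous replication: a write
                                   # completes only when both nodes have it
  on store1 {
    device    /dev/drbd0;
    disk      /dev/sda3;           # backing partition (assumed)
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on store2 {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```

The currently active storage node would then export the DRBD device to the cluster nodes over GNBD, for example with `gnbd_export -d /dev/drbd0 -e xenstore` (the export name "xenstore" is again a placeholder).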

This HOWTO describes the setup of two storage nodes and three cluster nodes. All machines run a default installation of Debian Sarge, whose installation is not covered by this HOWTO.
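Because each domU lives in plain image files on the shared OCFS2 mount, its Xen configuration can attach the disks through Xen's loopback file driver. A minimal sketch, assuming the cluster file system is mounted at /xen/domains and the domU is called vm01:

```
# /etc/xen/vm01 -- identical on every cluster node, since the
# configuration and the disk images sit on the shared mount
name   = "vm01"
memory = 256
disk   = ['file:/xen/domains/vm01/disk.img,xvda1,w',
          'file:/xen/domains/vm01/swap.img,xvda2,w']
```

Keeping both the configuration file and the images on the cluster file system is what makes migration possible: any node can start or receive the domU without copying anything first.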



[1] GFS would be the cluster file system that naturally fits GNBD. But the virtual machines reside in single files, and GFS does not allow mounting loop devices. Therefore, we decided to use OCFS2.