The following is part 3 of a 3-part series that goes over installation and configuration of Xen live migration with DRBD.
This article covers the installation of DRBD, the configuration of a DRBD resource, and live migration of a Xen guest between the two nodes.
The convention followed in the article is that [ALL]# denotes a command that needs to be run on both Xen nodes.
Installation
[ALL]# apt-get install drbd8-utils
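On Debian the DRBD kernel module is already part of the stock kernel, so only the userland tools need installing. As an optional sanity check (the exact version strings will differ per system), something like the following should confirm that the tools and the module are both present:
[ALL]# drbdadm --version
[ALL]# modinfo drbd | grep '^version'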
Global DRBD Configuration
Open /etc/modprobe.d/drbd.conf for editing (create the file if it does not exist) and add the following line:
options drbd disable_sendpage=1
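The option only takes effect when the drbd module is (re)loaded. Once it is, the current value can be checked via sysfs; this is a generic module-parameter check rather than anything DRBD-specific, and it should read Y (or 1, depending on how the kernel renders the boolean):
[ALL]# cat /sys/module/drbd/parameters/disable_sendpage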
Our global DRBD configuration /etc/drbd.d/global_common.conf can be seen below.
global {
    usage-count no;
}
common {
    handlers {
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
    }
    startup {
        wfc-timeout 30;
    }
    options {
        on-no-data-accessible io-error;
    }
    disk {
        on-io-error detach;
        resync-rate 10M; # 100Mbps link
        #fencing resource-and-stonith;
    }
    net {
        protocol C;
        timeout 50; # 5 seconds
        allow-two-primaries yes;
        cram-hmac-alg sha1;
        shared-secret XenNodes;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
        ping-timeout 10; # 1 second
        verify-alg sha1;
        csums-alg sha1;
    }
}
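At this point it is worth making sure the configuration parses cleanly. drbdadm dump re-reads /etc/drbd.conf and everything it includes (including the resource files we add in the next section) and prints the result, so any syntax error shows up immediately:
[ALL]# drbdadm dump all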
DRBD Resources Configuration
We are going to need a logical volume to store DRBD metadata:
[ALL]# lvcreate --size 1g --name lv_meta vg_xen
In /etc/drbd.d/jessie01-disk.res below, the meta-disk line specifies /dev/vg_xen/lv_meta[0]. The same device (logical volume) can be used to store metadata for several DRBD resources; this is done by adding an index [X] after the device name (see the hypothetical example after the resource file below).
Our DRBD resource configuration /etc/drbd.d/jessie01-disk.res can be seen below.
resource jessie01-disk {
    meta-disk /dev/vg_xen/lv_meta[0];
    device /dev/drbd0;
    disk /dev/vg_xen/jessie01-disk;
    on xen01.hl.local {
        address 172.16.22.13:7789;
    }
    on xen02.hl.local {
        address 172.16.22.14:7789;
    }
}
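As mentioned above, the same metadata volume could serve a hypothetical second resource simply by using the next index (each index reserves a fixed 128MB slot on the metadata device). The resource name, minor device and TCP port below are made up for illustration only; every resource needs its own device minor and port:
resource jessie02-disk {
    meta-disk /dev/vg_xen/lv_meta[1];
    device /dev/drbd1;
    disk /dev/vg_xen/jessie02-disk;
    on xen01.hl.local {
        address 172.16.22.13:7790;
    }
    on xen02.hl.local {
        address 172.16.22.14:7790;
    }
}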
Also, on the xen02 node only, create a logical volume called jessie01-disk that matches the size of the one already present on xen01 (4GB in our case):
[xen02]# lvcreate --size 4g --name jessie01-disk vg_xen
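Before creating metadata it does no harm to confirm that the backing logical volumes on both nodes really are the same size; a simple check such as the following is enough:
[ALL]# lvs vg_xen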
Create the device metadata, bring the device up and force it to become the primary node:
[xen01]# drbdadm create-md jessie01-disk
[xen01]# drbdadm up jessie01-disk
[xen01]# drbdadm primary --force jessie01-disk
[xen01]# drbd-overview
0:jessie01-disk/0 WFConnection Primary/Unknown UpToDate/Outdated
Initialise and bring up DRBD resource on the second Xen node, and wait until synced:
[xen02]# drbdadm create-md jessie01-disk
[xen02]# drbdadm up jessie01-disk
[xen02]# watch drbd-overview
0:jessie01-disk/0 SyncTarget Secondary/Primary Inconsistent/UpToDate
[============>.......] sync'ed: 67.5% (1339308/4097084)K
Once the sync is complete, set the xen02 DRBD node to primary to enable Xen live migration:
[xen02]# drbdadm primary jessie01-disk
[xen02]# drbd-overview
0:jessie01-disk/0 Connected Primary/Primary UpToDate/UpToDate
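The same state can also be read from /proc/drbd, which is handy to keep an eye on during migrations; with dual-primary in place, both nodes should report the resource as Connected, Primary/Primary and UpToDate/UpToDate:
[ALL]# cat /proc/drbd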
Xen Live Migration
Open the file /etc/xen/jessie01.cfg for editing on both Xen nodes and replace the following lines:
disk = [
'phy:/dev/vg_xen/jessie01-disk,xvda1,w',
]
with this:
disk = [
'drbd:jessie01-disk,xvda1,w',
]
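Note that the drbd: prefix refers to the DRBD resource name, not the /dev/drbdX device; it is handled by Xen's block-drbd hotplug script, which promotes the resource on the node that starts (or receives) the guest. On our Debian setup the script is typically shipped with drbd8-utils; if the check below comes back empty, the script can be copied from the DRBD sources:
[ALL]# ls -l /etc/xen/scripts/block-drbd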
Start the virtual guest:
[xen01]# xl create /etc/xen/jessie01.cfg
[xen01]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1022     1     r-----     160.6
jessie01                                     1   256     1     -b----       2.9
Connect to the guest's console to ensure it's up and running (press Ctrl+] to detach):
[xen01]# xl console jessie01
Migrate the guest VM from xen01 to xen02:
[xen01]# xl migrate jessie01 xen02
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x0/0x0/706)
Loading new save file (new xl fmt info 0x0/0x0/706)
Savefile contains xl domain config
xc: progress: Reloading memory pages: 7168/131072    5%
xc: progress: Reloading memory pages: 13312/131072   10%
xc: progress: Reloading memory pages: 20480/131072   15%
xc: progress: Reloading memory pages: 26624/131072   20%
xc: progress: Reloading memory pages: 32768/131072   25%
xc: progress: Reloading memory pages: 39936/131072   30%
xc: progress: Reloading memory pages: 46080/131072   35%
xc: progress: Reloading memory pages: 53248/131072   40%
xc: progress: Reloading memory pages: 59392/131072   45%
xc: progress: Reloading memory pages: 65737/131072   50%
migration sender: Target has acknowledged transfer.
migration sender: Giving target permission to start.
migration target: Transfer complete, requesting permission to start domain.
migration target: Got permission, starting domain.
migration sender: Target reports successful startup.
migration target: Domain started successsfully.
Migration successful.
While the migration is still in progress, the incoming domain shows up on xen02 as paused:
[xen02]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1022     1     r-----     294.7
jessie01--incoming                           2   143     0     --p---       0.0
Once it completes, the guest is running on xen02 as normal:
[xen02]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1022     1     r-----     301.4
jessie01                                     2   256     1     -b----       1.6
Note how, after the migration, xen01 has become the "Secondary" on the DRBD device (the block-drbd hotplug script demotes the resource once the backing device is released on the source node).
[xen01]# drbd-overview
0:jessie01-disk/0 Connected Secondary/Primary UpToDate/UpToDate
Migrate Back
First, promote the DRBD resource on xen01 back to primary:
[xen01]# drbdadm primary jessie01-disk
[xen01]# drbd-overview
0:jessie01-disk/0 Connected Primary/Primary UpToDate/UpToDate
Now migrate the guest from xen02 back to xen01:
[xen02]# xl migrate jessie01 xen01
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x0/0x0/706)
Loading new save file (new xl fmt info 0x0/0x0/706)
Savefile contains xl domain config
xc: progress: Reloading memory pages: 7168/131072    5%
xc: progress: Reloading memory pages: 13312/131072   10%
xc: progress: Reloading memory pages: 20480/131072   15%
xc: progress: Reloading memory pages: 26624/131072   20%
xc: progress: Reloading memory pages: 32768/131072   25%
xc: progress: Reloading memory pages: 39936/131072   30%
xc: progress: Reloading memory pages: 46080/131072   35%
xc: progress: Reloading memory pages: 53248/131072   40%
xc: progress: Reloading memory pages: 59392/131072   45%
xc: progress: Reloading memory pages: 65571/131072   50%
migration sender: Target has acknowledged transfer.
migration sender: Giving target permission to start.
migration target: Transfer complete, requesting permission to start domain.
migration target: Got permission, starting domain.
migration sender: Target reports successful startup.
migration target: Domain started successsfully.
Migration successful.
[xen01]# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1022     1     r-----     724.4
jessie01                                     6   256     1     -b----       1.7
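Mirroring what we saw on xen01 earlier, xen02 should now be the node reporting itself as "Secondary" on the DRBD device, which can be confirmed with:
[xen02]# drbd-overview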
