= 2 Node Cluster: Dual Primary DRBD + CLVM + KVM + Live Migrations =
== Dual Primary DRBD/KVM Virt Install ==

=== New KVM Virt - Details ===
* '''NewVirt''': spacewalk
* '''SIZE''': 20GB
* '''DRBD res''': 8 (per the script's convention below, this becomes device /dev/drbd9 on port 7799)
* '''NODE1'''
: IP: 10.69.1.253
: Name: bigeye
: VG: raid1
* '''NODE2'''
: IP: 10.69.1.250
: Name: blindpig
: VG: raid10
* KVM DISK cache setting: none (live migration will fail otherwise)

=== Creating the Dual Primary DRBD KVM Virt ===
;* Run this on NODE1

1) create the backing LV for the DRBD device
<source lang="bash">
lvcreate --name drbd_spacewalk --size 21.1GB raid1
ssh 10.69.1.250 -C lvcreate --name drbd_spacewalk --size 21.1GB raid10
</source>

2) copy spacewalk.res to /etc/drbd.d/
<source lang="bash">
cp spacewalk.res /etc/drbd.d/
scp spacewalk.res 10.69.1.250:/etc/drbd.d/
</source>

3) reload drbd
<source lang="bash">
/etc/init.d/drbd reload
ssh 10.69.1.250 -C /etc/init.d/drbd reload
</source>

4) create the DRBD device on both nodes
<source lang="bash">
drbdadm -- --force create-md spacewalk
ssh 10.69.1.250 -C drbdadm -- --force create-md spacewalk
</source>

5) reload drbd again
<source lang="bash">
/etc/init.d/drbd reload
ssh 10.69.1.250 -C /etc/init.d/drbd reload
</source>

6) bring drbd up on both nodes
<source lang="bash">
drbdadm up spacewalk
ssh 10.69.1.250 -C drbdadm up spacewalk
</source>

7) set bigeye primary and overwrite blindpig
<source lang="bash">
drbdadm -- --overwrite-data-of-peer primary spacewalk
</source>

8) set blindpig secondary (it should already be secondary)
<source lang="bash">
ssh 10.69.1.250 -C drbdadm secondary spacewalk
</source>

9) on bigeye, create the PV/VG/LV (not setting the VG cluster aware yet, due to an LVM bug when not using --monitor y)
<source lang="bash">
pvcreate /dev/drbd9
vgcreate -c n drbd_spacewalk /dev/drbd9
lvcreate -L20G -nspacewalk drbd_spacewalk
</source>

10) activate VG drbd_spacewalk (it should already be active, but just in case)
<source lang="bash">
vgchange -a y drbd_spacewalk
</source>

11) create the POOL in virsh
<source lang="bash">
virsh pool-create-as drbd_spacewalk --type=logical --target=/dev/drbd_spacewalk
</source>

12a) If this is a NEW kvm install, continue here - else go to step 12b
::1. Install the new virt on bigeye at /dev/drbd_spacewalk/spacewalk, named spacewalk-ha (a virt-install sketch follows after step 12b)
::2. After it is installed and rebooted, scp the virt definition over and define it
<source lang="bash">
scp /etc/libvirt/qemu/spacewalk-ha.xml 10.69.1.250:/etc/libvirt/qemu/spacewalk-ha.xml
ssh 10.69.1.250 -C virsh define /etc/libvirt/qemu/spacewalk-ha.xml
</source>
::3. Linux? Test virsh shutdown (you may need to install acpid in the guest)
<source lang="bash">
virsh shutdown spacewalk-ha
</source>
::4. SKIP step 12b (go to #13)

12b) If this is a migration from an existing KVM virt, continue here - skip this step if you completed 12a
::1. restore your KVM image to the new LV
<source lang="bash">
dd if=<your image file.img> of=/dev/drbd_spacewalk/spacewalk bs=1M
</source>
::2. Edit the existing KVM xml file -- copy the existing file to edit
<source lang="bash">
cp /etc/libvirt/qemu/spacewalk.xml ./spacewalk-ha.xml
</source>
::- modify: <name>spacewalk</name> to <name>spacewalk-ha</name>
::- remove: <uuid>[some long uuid]</uuid>
<source lang="bash">
emacs spacewalk-ha.xml
cp spacewalk-ha.xml /etc/libvirt/qemu/spacewalk-ha.xml
# defining will set up a unique UUID, which is needed before you copy to blindpig
virsh define /etc/libvirt/qemu/spacewalk-ha.xml
scp /etc/libvirt/qemu/spacewalk-ha.xml 10.69.1.250:/etc/libvirt/qemu/spacewalk-ha.xml
ssh 10.69.1.250 -C virsh define /etc/libvirt/qemu/spacewalk-ha.xml
</source>
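For step 12a.1, an invocation like the following works; this is a minimal sketch (RAM, vcpus, bridge name and ISO path are assumptions, not part of this guide). The parts that matter here are the LV path and cache=none, since live migration fails with any other cache mode.
<source lang="bash">
# sketch only -- sizes, bridge and ISO path are placeholders, adjust to taste
virt-install --name spacewalk-ha \
  --ram 1024 --vcpus 1 \
  --disk path=/dev/drbd_spacewalk/spacewalk,bus=virtio,cache=none \
  --network bridge=br0 \
  --cdrom /path/to/install.iso \
  --vnc
</source>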
; All install work is done. Next: deactivate the VG, set it cluster aware, and down drbd for pacemaker provisioning.

13) deactivate VG drbd_spacewalk (this runs locally on bigeye)
<source lang="bash">
vgchange -a n drbd_spacewalk
</source>

14) set drbd primary on blindpig, so the VG can be flagged cluster aware
<source lang="bash">
vgchange -a n drbd_spacewalk
ssh 10.69.1.250 -C drbdadm primary spacewalk
</source>

15) activate the VG on both nodes
<source lang="bash">
vgchange -a y drbd_spacewalk
ssh 10.69.1.250 -C vgchange -a y drbd_spacewalk
</source>

16) set the VG cluster aware on both nodes (only one command is needed - drbd replicates the metadata change)
<source lang="bash">
vgchange -c y drbd_spacewalk
</source>

17) deactivate the VG on both nodes
<source lang="bash">
vgchange -a n drbd_spacewalk
ssh 10.69.1.250 -C vgchange -a n drbd_spacewalk
</source>

18) down drbd on both nodes, so we can put it under pacemaker
<source lang="bash">
drbdadm down spacewalk
ssh 10.69.1.250 -C drbdadm down spacewalk
</source>

; Now let's provision Pacemaker -- this assumes you already have a working pacemaker config with DLM/CLVM.

19) Load the dual primary drbd/lvm RA config into the cluster
<source lang="bash">
crm configure < spacewalk.crm
</source>

20) verify all is good with crm_mon; DRBD should look something like the below
<source lang="bash">
crm_mon -f

 Master/Slave Set: ms_drbd-spacewalk [p_drbd-spacewalk]
     Masters: [ bigeye blindpig ]
</source>

21) Load the VirtualDomain RA config into the cluster
<source lang="bash">
crm configure < spacewalk-vd.crm
</source>

; Files Created
# spacewalk.res  # for DRBD
# spacewalk.crm  # DRBD/LVM configs to load into crm configure
# spacewalk-vd.crm  # KVM VirtualDomain configs to load into crm configure
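Once both .crm files are loaded, it is worth exercising a live migration by hand before trusting the cluster with it. A sketch using crmsh (resource and node names as in this example; migrate adds a temporary location constraint, so clear it afterwards):
<source lang="bash">
crm_mon -1                                       # confirm p_vd-spacewalk-ha is running
crm resource migrate p_vd-spacewalk-ha blindpig  # live migrate the virt to blindpig
crm resource unmigrate p_vd-spacewalk-ha         # drop the temporary location constraint
</source>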
=== Config Examples ===

==== Pacemaker / crmsh ====

===== DRBD/LVM =====
* Note - we do not monitor LVM; LVM commands sometimes hang and it is not really an issue.
* these are all auto created from the script below
<pre>
primitive p_drbd-spacewalk ocf:linbit:drbd \
        params drbd_resource="spacewalk" \
        operations $id="p_drbd_spacewalk-operations" \
        op monitor interval="20" role="Slave" timeout="20" \
        op monitor interval="10" role="Master" timeout="20" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" start-delay="0"
primitive p_lvm-spacewalk ocf:heartbeat:LVM \
        operations $id="spacewalk-LVM-operations" \
        op start interval="0" timeout="120" \
        op stop interval="0" timeout="120" \
        params volgrpname="drbd_spacewalk"
ms ms_drbd-spacewalk p_drbd-spacewalk \
        meta master-max="2" clone-max="2" notify="true" migration-threshold="1" allow-migrate="true" target-role="Started" interleave="true" is-managed="true"
clone clone_lvm-spacewalk p_lvm-spacewalk \
        meta clone-max="2" notify="true" target-role="Started" interleave="true" is-managed="true"
colocation c_lvm-spacewalk_on_drbd-spacewalk inf: clone_lvm-spacewalk ms_drbd-spacewalk:Master
</pre>

===== KVM Virt - VirtualDomain =====
* these are all auto created from the script below
<pre>
primitive p_vd-spacewalk-ha ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/spacewalk-ha.xml" migration_transport="ssh" force_stop="0" hypervisor="qemu:///system" \
        operations $id="p_vd-spacewalk-operations" \
        op start interval="0" timeout="90" \
        op stop interval="0" timeout="90" \
        op migrate_from interval="0" timeout="240" \
        op migrate_to interval="0" timeout="240" \
        op monitor interval="10" timeout="30" start-delay="0" \
        meta allow-migrate="true" failure-timeout="10min" target-role="Started"
colocation c_vd-spacewalk-on-master inf: p_vd-spacewalk-ha ms_drbd-spacewalk:Master
order o_drbd-lvm-vd-start-spacewalk inf: ms_drbd-spacewalk:promote clone_lvm-spacewalk:start p_vd-spacewalk-ha:start
</pre>

==== DRBD ====
* these are all auto created from the script below
<pre>
resource spacewalk {
        protocol C;
        startup {
                become-primary-on both;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        disk {
                on-io-error detach;
                fencing resource-only;
        }
        handlers {
                #split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        syncer {
                rate 50M;
        }
        on bigeye {
                device /dev/drbd9;
                disk /dev/raid1/drbd_spacewalk;
                address 10.69.1.253:7799;
                meta-disk internal;
        }
        on blindpig {
                device /dev/drbd9;
                disk /dev/raid10/drbd_spacewalk;
                address 10.69.1.250:7799;
                meta-disk internal;
        }
}
</pre>

=== Script ===
* this will create the configs and print the install steps above
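Running the script does not execute anything on the nodes; it writes the three files and prints the numbered steps for you to paste. A typical run (sketch; the script uses bash syntax, so invoke it with bash):
<source lang="bash">
# edit NAME/SIZE/DRBDNUM and the node variables at the top first
bash create.new.sh | tee spacewalk.steps
</source>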
<pre>
#cat create.new.sh
NAME=spacewalk   ## virt name
SIZE=20          ## virt size GB
LVMETA=lvmeta    ## volume group on VG stated above for metadata
DRBDNUM=8        ## how many drbds do you have right now?
NODE1_VG=raid1   ## VolumeGroup for DRBD lvm
NODE2_VG=raid10  ## VolumeGroup for DRBD lvm
NODE1_IP=10.69.1.253
NODE2_IP=10.69.1.250
NODE1_NAME=bigeye
NODE2_NAME=blindpig
#NODE3_NAME=blindpig2

############ DO NOT EDIT BELOW #######################
NODE2=$NODE2_IP
DRBD_SIZE=$SIZE
let DRBD_SIZE+=1
let DRBDNUM+=1
#let DRBDNUM+=1
let PORT=7790+DRBDNUM
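
## NOTE: DRBD_SIZE=SIZE+1 gives the backing LV ~1GB of headroom over the
## guest LV (room for DRBD internal metadata plus LVM overhead), and
## DRBDNUM+=1 maps "res 8" to minor /dev/drbd9 with port 7790+9=7799.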

echo '
resource '$NAME' {
        protocol C;
        startup {
                become-primary-on both;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        disk {
                on-io-error detach;
                fencing resource-only;
        }
        handlers {
                #split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        syncer {
                rate 50M;
        }
        on '$NODE1_NAME' {
                device /dev/drbd'$DRBDNUM';
                disk /dev/'$NODE1_VG'/drbd_'$NAME';
                address '$NODE1_IP':'$PORT';
                meta-disk internal;
        }
        on '$NODE2_NAME' {
                device /dev/drbd'$DRBDNUM';
                disk /dev/'$NODE2_VG'/drbd_'$NAME';
                address '$NODE2_IP':'$PORT';
                meta-disk internal;
        }
}
' > $NAME.res

echo 'primitive p_drbd-'$NAME' ocf:linbit:drbd \
        params drbd_resource="'$NAME'" \
        operations $id="p_drbd_'$NAME'-operations" \
        op monitor interval="20" role="Slave" timeout="20" \
        op monitor interval="10" role="Master" timeout="20" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" start-delay="0"
primitive p_lvm-'$NAME' ocf:heartbeat:LVM \
        operations $id="'$NAME'-LVM-operations" \
        op start interval="0" timeout="120" \
        op stop interval="0" timeout="120" \
        params volgrpname="drbd_'$NAME'"
ms ms_drbd-'$NAME' p_drbd-'$NAME' \
        meta master-max="2" clone-max="2" notify="true" migration-threshold="1" allow-migrate="true" target-role="Started" interleave="true" is-managed="true"
clone clone_lvm-'$NAME' p_lvm-'$NAME' \
        meta clone-max="2" notify="true" target-role="Started" interleave="true" is-managed="true"
colocation c_lvm-'$NAME'_on_drbd-'$NAME' inf: clone_lvm-'$NAME' ms_drbd-'$NAME':Master
' > $NAME'.crm'

#location drbd_'$NAME'_excl ms_drbd-'$NAME' \
#        rule $id="drbd_'$NAME'_excl-rule" -inf: #uname eq '$NODE3_NAME'

echo 'primitive p_vd-'$NAME'-ha ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/'$NAME'-ha.xml" migration_transport="ssh" force_stop="0" hypervisor="qemu:///system" \
        operations $id="p_vd-'$NAME'-operations" \
        op start interval="0" timeout="90" \
        op stop interval="0" timeout="90" \
        op migrate_from interval="0" timeout="240" \
        op migrate_to interval="0" timeout="240" \
        op monitor interval="10" timeout="30" start-delay="0" \
        meta allow-migrate="true" failure-timeout="10min" target-role="Started"
colocation c_vd-'$NAME'-on-master inf: p_vd-'$NAME'-ha ms_drbd-'$NAME':Master
order o_drbd-lvm-vd-start-'$NAME' inf: ms_drbd-'$NAME':promote clone_lvm-'$NAME':start p_vd-'$NAME'-ha:start
' > $NAME'-vd.crm'

## test the DRBD config before printing the steps
cmd="drbdadm dump -t $NAME.res"
$cmd >/dev/null
rc=$?
if [[ $rc != 0 ]] ; then
    echo -e "\n !!! DRBD config ("$NAME.res") file will not work.. need to fix this first. exiting...\n"
    echo -e "     check command: "$cmd"\n"
    echo -e "\n * HINT: you might just need to remove the file /etc/drbd.d/"$NAME.res" [be careful]"
    echo -e "   mv /etc/drbd.d/"$NAME.res" ./$NAME.res.disabled."$NODE1_NAME
    echo -e "   scp "$NODE2":/etc/drbd.d/"$NAME.res" ./$NAME.res.disabled."$NODE2_NAME
    echo -e "   ssh "$NODE2" -C mv /etc/drbd.d/"$NAME.res" /tmp/$NAME.res.disabled"
#    exit $rc
fi
echo -e " * DRBD config verified (it should work)\n"
echo ' '

echo -e '\n# 1) create LVM for DRBD device'
echo '     'lvcreate --name drbd_$NAME --size $DRBD_SIZE'.1GB' $NODE1_VG
echo '     'ssh $NODE2 -C lvcreate --name drbd_$NAME --size $DRBD_SIZE'.1GB' $NODE2_VG
echo -e '\n# 2) copy '$NAME'.res to /etc/drbd.d/'
echo '     'cp $NAME.res /etc/drbd.d/
echo '     'scp $NAME.res $NODE2:/etc/drbd.d/
echo -e '\n# 3) reloading drbd'
echo '     '/etc/init.d/drbd reload
echo '     'ssh $NODE2 -C /etc/init.d/drbd reload
echo -e '\n# 4) create DRBD device on both nodes'
echo '     'drbdadm -- --force create-md $NAME
echo '     'ssh $NODE2 -C drbdadm -- --force create-md $NAME
echo -e '\n# 5) reloading drbd'
echo '     '/etc/init.d/drbd reload
echo '     'ssh $NODE2 -C /etc/init.d/drbd reload
echo -e '\n# 6) bring drbd up on both nodes'
echo '     'drbdadm up $NAME
echo '     'ssh $NODE2 -C drbdadm up $NAME
echo -e '\n# 7) set '$NODE1_NAME' primary and overwrite '$NODE2_NAME
echo '     'drbdadm -- --overwrite-data-of-peer primary $NAME
echo -e '\n# 8) set '$NODE2_NAME' secondary (should already be set)'
echo '     'ssh $NODE2 -C drbdadm secondary $NAME
echo -e '\n# 9) '$NODE1_NAME' create PV/VG/LV (not setting VG cluster aware yet, due to LVM bug when not using --monitor y)'
echo '     'pvcreate /dev/drbd$DRBDNUM
echo '     'vgcreate -c n drbd_$NAME /dev/drbd$DRBDNUM
echo '     'lvcreate -L$SIZE'G' -n$NAME drbd_$NAME
echo -e '\n# 10) Activating VG drbd_'$NAME' -- (should already be, but just in case)'
echo '     'vgchange -a y drbd_$NAME
## ubuntu bug -- enable if ubuntu host
#echo '     'vgchange -a y drbd_$NAME --monitor y
echo -e '\n# 11) create the POOL in virsh'
echo '     'virsh pool-create-as drbd_$NAME --type=logical --target=/dev/drbd_$NAME
echo -e '\n# 12a) If this is a NEW kvm install - continue following - else go to step 12b'
echo '     + NOW install new virt from '$NODE1_NAME' on /dev/drbd_'$NAME'/'$NAME named $NAME'-ha'
echo '     # after installed and rebooted'
echo '     ' scp /etc/libvirt/qemu/$NAME'-ha.xml' $NODE2:/etc/libvirt/qemu/$NAME'-ha.xml'
echo '     ' ssh $NODE2 -C virsh define /etc/libvirt/qemu/$NAME'-ha.xml'
echo '     # test virsh shutdown -- install acpid if needed'
echo '     ' virsh shutdown $NAME'-ha'
echo '     * SKIP 12b '
echo ' 12b) If this is a migration from an existing KVM virt - continue, else skip this step (you already completed 12a)'
echo '     ## restore your KVM image to the new LV'
echo '     command: dd if=<your image file.img> of=/dev/drbd_'$NAME'/'$NAME' bs=1M'
echo '     ## Edit the existing KVM xml file -- copy the existing file to edit'
echo '     ' cp /etc/libvirt/qemu/$NAME'.xml' ./$NAME'-ha.xml'
echo '       -modify: <name>'$NAME'</name> to <name>'$NAME'-ha</name>'
echo '       -remove: <uuid>[some long uuid]</uuid>'
echo '     ' emacs $NAME'-ha.xml'
echo '     ' cp $NAME'-ha.xml' /etc/libvirt/qemu/$NAME'-ha.xml'
echo '     #' this will set up a unique UUID, which is needed before you copy to $NODE2_NAME
echo '     ' virsh define /etc/libvirt/qemu/$NAME'-ha.xml'
echo '     ' scp /etc/libvirt/qemu/$NAME'-ha.xml' $NODE2:/etc/libvirt/qemu/$NAME'-ha.xml'
echo '     ' ssh $NODE2 -C virsh define /etc/libvirt/qemu/$NAME'-ha.xml'
echo -e '\n#'
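
## Steps 13-18 below: the VG is flagged clustered (vgchange -c y) once,
## while both DRBD peers are primary, so drbd replicates the metadata
## change to the other node. Everything is then deactivated and downed
## so Pacemaker can take ownership of DRBD, LVM and the virt.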
echo '# All install work is done. Next: deactivate VG / set cluster aware / and down drbd for pacemaker provisioning'
echo -e "#\n"
echo -e '\n# 13) deactivate VG drbd_'$NAME' on '$NODE1_NAME
## ubuntu bug -- enable if ubuntu host
#echo '     'vgchange -a n drbd_$NAME --monitor y
echo '     'vgchange -a n drbd_$NAME
echo -e '\n# 14) set drbd primary on '$NODE2_NAME' to set VG cluster aware'
## ubuntu bug -- enable if ubuntu host
#echo '     'vgchange -a n drbd_$NAME --monitor y
echo '     'vgchange -a n drbd_$NAME
echo '     'ssh $NODE2 -C drbdadm primary $NAME
echo -e '\n# 15) activate VG on both nodes'
## ubuntu bug -- enable if ubuntu host
#echo '     'vgchange -a y drbd_$NAME --monitor y
#echo '     'ssh $NODE2 -C vgchange -a y drbd_$NAME --monitor y
echo '     'vgchange -a y drbd_$NAME
echo '     'ssh $NODE2 -C vgchange -a y drbd_$NAME
echo -e '\n# 16) set VG cluster aware on both nodes (only one command is needed due to drbd)'
echo '     'vgchange -c y drbd_$NAME
echo -e '\n# 17) deactivate VG on both nodes'
## ubuntu bug -- enable if ubuntu host
#echo '     'vgchange -a n drbd_$NAME --monitor y
#echo '     'ssh $NODE2 -C vgchange -a n drbd_$NAME --monitor y
echo '     'vgchange -a n drbd_$NAME
echo '     'ssh $NODE2 -C vgchange -a n drbd_$NAME
echo -e '\n# 18) down drbd on both - so we can put it in pacemaker'
echo '     'drbdadm down $NAME
echo '     'ssh $NODE2 -C drbdadm down $NAME
echo -e '\n# NOTE) MAKE sure the disk cache for the virtio disk is set to NONE - live migrate will fail if not'
echo -e '\n#'
echo '# Now lets provision Pacemaker -- we already expect you have a working pacemaker config with DLM/CLVM'
echo -e "#\n"
echo -e '\n# 19) Load the dual primary drbd/lvm RA config into the cluster'
echo '     crm configure < '$NAME'.crm'
echo -e '\n# 20) verify all is good with crm_mon: DRBD should look something like the below'
echo -e "     crm_mon -f\n"
echo '      Master/Slave Set: ms_drbd-'$NAME' [p_drbd-'$NAME']'
echo -e '          Masters: [ '$NODE1_NAME' '$NODE2_NAME" ]\n"
echo -e '\n# 21) Load the VirtualDomain RA config into the cluster'
echo '     crm configure < '$NAME'-vd.crm'
echo '#####################################################################'
echo '# Files Created'
echo '#  '$NAME'.res     # for DRBD'
echo '#  '$NAME'.crm     # DRBD/LVM configs to load into crm configure'
echo '#  '$NAME'-vd.crm  # KVM VirtualDomain configs to load into crm configure'
</pre>

==== notes ====
* running the script will test the DRBD resource and at least print a warning
<pre>
 !!! DRBD config (spacewalk.res) file will not work.. need to fix this first. exiting...

     check command: drbdadm dump -t spacewalk.res

 * HINT: you might just need to remove the file /etc/drbd.d/spacewalk.res [be careful]
   mv /etc/drbd.d/spacewalk.res ./spacewalk.res.disabled.bigeye
   scp 10.69.1.250:/etc/drbd.d/spacewalk.res ./spacewalk.res.disabled.blindpig
   ssh 10.69.1.250 -C mv /etc/drbd.d/spacewalk.res /tmp/spacewalk.res.disabled
</pre>
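* Dual primary makes split brain more likely, and the after-sb policies in the resource file only auto-resolve the easy cases. If the peers end up disconnected/StandAlone, recovery is the standard DRBD procedure -- a sketch, assuming you have decided which node's changes to throw away:
<source lang="bash">
## on the node whose changes get discarded
drbdadm secondary spacewalk
drbdadm -- --discard-my-data connect spacewalk

## on the surviving node
drbdadm connect spacewalk
</source>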