= 2 Node Cluster: Dual Primary DRBD + CLVM + KVM + Live Migrations =
=== Creating the Dual Primary DRBD KVM Virt ===
;* Run these commands on NODE1 (bigeye); commands prefixed with ssh run on NODE2 (blindpig, 10.69.1.250)

1) Create the LVM volume backing the DRBD device on each node (raid1 and raid10 are the local volume groups)
<source lang="bash">
lvcreate --name drbd_spacewalk --size 21.1GB raid1
ssh 10.69.1.250 -C lvcreate --name drbd_spacewalk --size 21.1GB raid10
</source>

2) Copy spacewalk.res to /etc/drbd.d/ on both nodes
<source lang="bash">
cp spacewalk.res /etc/drbd.d/
scp spacewalk.res 10.69.1.250:/etc/drbd.d/
</source>

3) Reload DRBD
<source lang="bash">
/etc/init.d/drbd reload
ssh 10.69.1.250 -C /etc/init.d/drbd reload
</source>

4) Create the DRBD metadata on both nodes
<source lang="bash">
drbdadm -- --force create-md spacewalk
ssh 10.69.1.250 -C drbdadm -- --force create-md spacewalk
</source>

5) Reload DRBD again
<source lang="bash">
/etc/init.d/drbd reload
ssh 10.69.1.250 -C /etc/init.d/drbd reload
</source>

6) Bring the DRBD resource up on both nodes
<source lang="bash">
drbdadm up spacewalk
ssh 10.69.1.250 -C drbdadm up spacewalk
</source>

7) Set bigeye primary, overwriting blindpig's data
<source lang="bash">
drbdadm -- --overwrite-data-of-peer primary spacewalk
</source>

8) Set blindpig secondary (it should already be secondary)
<source lang="bash">
ssh 10.69.1.250 -C drbdadm secondary spacewalk
</source>

9) On bigeye, create the PV/VG/LV (the VG is not made cluster-aware yet, due to an LVM bug when --monitor y is not used)
<source lang="bash">
pvcreate /dev/drbd9
vgcreate -c n drbd_spacewalk /dev/drbd9
lvcreate -L20G -nspacewalk drbd_spacewalk
</source>

10) Activate the VG drbd_spacewalk (it should already be active, but just in case)
<source lang="bash">
vgchange -a y drbd_spacewalk
</source>

11) Create the storage pool in virsh
<source lang="bash">
virsh pool-create-as drbd_spacewalk --type=logical --target=/dev/drbd_spacewalk
</source>

12a) If this is a NEW KVM install, continue here; otherwise go to step 12b
::1. Install the new virt on bigeye:/dev/drbd_spacewalk/spacewalk, named spacewalk-ha
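The contents of the spacewalk.res copied in step 2 are not shown in this article. A hypothetical sketch of what it might look like for this setup is below: the device (/dev/drbd9), backing VG names (raid1/raid10), hostnames, and blindpig's IP come from the steps on this page, while bigeye's IP, the port, and the split-brain policies are assumptions; allow-two-primaries is what makes the dual-primary operation used here possible.
<source lang="text">
# spacewalk.res -- hypothetical sketch; bigeye's IP (10.69.1.251),
# port 7789, and the after-sb policies are assumptions
resource spacewalk {
  protocol C;
  net {
    allow-two-primaries;               # required for dual primary
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
  }
  on bigeye {
    device    /dev/drbd9;
    disk      /dev/raid1/drbd_spacewalk;
    address   10.69.1.251:7789;
    meta-disk internal;
  }
  on blindpig {
    device    /dev/drbd9;
    disk      /dev/raid10/drbd_spacewalk;
    address   10.69.1.250:7789;
    meta-disk internal;
  }
}
</source>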
::2. After the guest is installed and rebooted, scp the virt definition over and define it
<source lang="bash">
scp /etc/libvirt/qemu/spacewalk-ha.xml 10.69.1.250:/etc/libvirt/qemu/spacewalk-ha.xml
ssh 10.69.1.250 -C virsh define /etc/libvirt/qemu/spacewalk-ha.xml
</source>
::3. Linux guest? Test a clean shutdown via virsh (you may need to install acpid in the guest)
<source lang="bash">
virsh shutdown spacewalk-ha
</source>
::4. SKIP step 12b (go to step 13)

12b) If this is a migration from an existing KVM virt, continue here; skip this step if you completed 12a
::1. Restore your existing KVM image onto the new LV
<source lang="bash">
dd if=<your image file.img> of=/dev/drbd_spacewalk/spacewalk bs=1M
</source>
::2. Edit the existing KVM XML file -- first copy it so you can edit the copy
<source lang="bash">
cp /etc/libvirt/qemu/spacewalk.xml ./spacewalk-ha.xml
</source>
#-modify: <name>spacewalk</name> to <name>spacewalk-ha</name>
#-remove: <uuid>[some long uuid]</uuid>
<source lang="bash">
emacs spacewalk-ha.xml
cp spacewalk-ha.xml /etc/libvirt/qemu/spacewalk-ha.xml
# defining without a <uuid> generates a unique UUID, which is needed before you copy to blindpig
virsh define /etc/libvirt/qemu/spacewalk-ha.xml
scp /etc/libvirt/qemu/spacewalk-ha.xml 10.69.1.250:/etc/libvirt/qemu/spacewalk-ha.xml
ssh 10.69.1.250 -C virsh define /etc/libvirt/qemu/spacewalk-ha.xml
</source>

; All install work is done.
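The manual emacs edit in step 12b can also be scripted. A sketch using sed, with the filenames taken from the steps above (this assumes the name and uuid elements each sit on their own line, as libvirt writes them):
<source lang="bash">
# Copy the existing definition, rename the domain, and drop the
# <uuid> line so 'virsh define' generates a fresh UUID
cp /etc/libvirt/qemu/spacewalk.xml ./spacewalk-ha.xml
sed -i -e 's|<name>spacewalk</name>|<name>spacewalk-ha</name>|' \
    -e '/<uuid>/d' ./spacewalk-ha.xml
</source>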
; Next: deactivate the VG, set it cluster-aware, and bring DRBD down for Pacemaker provisioning

13) Deactivate the VG drbd_spacewalk on bigeye
<source lang="bash">
vgchange -a n drbd_spacewalk
</source>

14) Set DRBD primary on blindpig, so the VG can be made cluster-aware with both nodes primary
<source lang="bash">
ssh 10.69.1.250 -C drbdadm primary spacewalk
</source>

15) Activate the VG on both nodes
<source lang="bash">
vgchange -a y drbd_spacewalk
ssh 10.69.1.250 -C vgchange -a y drbd_spacewalk
</source>

16) Set the VG cluster-aware on both nodes (only one command is needed, since DRBD replicates the metadata change)
<source lang="bash">
vgchange -c y drbd_spacewalk
</source>

17) Deactivate the VG on both nodes
<source lang="bash">
vgchange -a n drbd_spacewalk
ssh 10.69.1.250 -C vgchange -a n drbd_spacewalk
</source>

18) Bring DRBD down on both nodes, so we can put it under Pacemaker control
<source lang="bash">
drbdadm down spacewalk
ssh 10.69.1.250 -C drbdadm down spacewalk
</source>

; Now let's provision Pacemaker -- this assumes you already have a working Pacemaker config with DLM/CLVM

19) Load the dual-primary DRBD/LVM resource agent config into the cluster
<source lang="bash">
crm configure < spacewalk.crm
</source>

20) Verify all is good with crm_mon; the DRBD resource should look something like this
<source lang="bash">
crm_mon -f
 Master/Slave Set: ms_drbd-spacewalk [p_drbd-spacewalk]
     Masters: [ bigeye blindpig ]
</source>

21) Load the VirtualDomain resource agent config into the cluster
<source lang="bash">
crm configure < spacewalk-vd.crm
</source>

; Files created
# spacewalk.res -- DRBD resource definition
# spacewalk.crm -- DRBD/LVM configs to load into crm configure
# spacewalk-vd.crm -- KVM VirtualDomain configs to load into crm configure
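The two .crm files loaded in steps 19 and 21 are not shown in this article. A hypothetical crmsh sketch is below: only the p_drbd-spacewalk/ms_drbd-spacewalk names (from the crm_mon output in step 20), the DRBD resource name, the VG name, and the domain XML path are taken from this page; all other resource names, timings, and constraints are assumptions. master-max="2" is what makes the DRBD resource dual primary, and allow-migrate="true" is what enables the live migrations named in the page title.
<source lang="text">
# spacewalk.crm -- hypothetical sketch (DRBD master/slave set plus VG activation)
primitive p_drbd-spacewalk ocf:linbit:drbd \
        params drbd_resource="spacewalk" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave"
ms ms_drbd-spacewalk p_drbd-spacewalk \
        meta master-max="2" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"
primitive p_lvm-spacewalk ocf:heartbeat:LVM \
        params volgrpname="drbd_spacewalk"
clone cl_lvm-spacewalk p_lvm-spacewalk meta interleave="true"
colocation co_lvm-on-drbd inf: cl_lvm-spacewalk ms_drbd-spacewalk:Master
order o_drbd-before-lvm inf: ms_drbd-spacewalk:promote cl_lvm-spacewalk:start

# spacewalk-vd.crm -- hypothetical sketch (the KVM guest itself)
primitive p_vd-spacewalk ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/spacewalk-ha.xml" \
        hypervisor="qemu:///system" migration_transport="ssh" \
        meta allow-migrate="true" \
        op monitor interval="30s"
colocation co_vd-on-lvm inf: p_vd-spacewalk cl_lvm-spacewalk
order o_lvm-before-vd inf: cl_lvm-spacewalk p_vd-spacewalk:start
</source>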