oVirt KVM

The VM for the oVirt management engine is defined in ovirt.xml and managed as a Pacemaker resource.
The b_mgmt and b_backup bridge interfaces are passed to the VM. /dev/drbd/by-res/virt is used as the root disk so that the VM can be migrated between the cluster nodes.
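
Before starting the VM it is worth checking that the DRBD device node exists and the resource is in sync (a quick sanity check; on DRBD 8.x the state is shown in /proc/drbd, on DRBD 9 use drbdadm status):

#verify the DRBD backing device
ls -l /dev/drbd/by-res/virt
cat /proc/drbd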

SAN drives

In addition to the root partition, which resides on the DRBD resource, two SAN partitions are passed to the oVirt VM. They are used as

  1. storage for backups, and
  2. an export domain in which oVirt stores backups of the VM images running inside the environment.
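
Before the disks are passed through, verify that the SAN LUNs are visible on the cluster node (the names mpathg and mpathh correspond to the <source dev> entries in ovirt.xml below):

#list multipath devices and their paths
multipath -ll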

Install required dependencies

yum install kvm virt-manager libvirt virt-install qemu-kvm xauth dejavu-lgc-sans-fonts nfs-utils libnfsidmap
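
Then make sure the libvirt daemon is enabled and running:

systemctl enable libvirtd
systemctl start libvirtd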

Set up networking

echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl -p /etc/sysctl.d/99-ipforward.conf
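
For reference, a minimal bridge definition in the CentOS 7 network-scripts format could look like the following (a sketch; the bridge name b_mgmt is taken from ovirt.xml, while the enslaved NIC em1 and all addressing are placeholders for your environment):

#/etc/sysconfig/network-scripts/ifcfg-b_mgmt
DEVICE=b_mgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none

#/etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
ONBOOT=yes
BRIDGE=b_mgmt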

ovirt.xml

<domain type='kvm'>
  <name>ovirt</name>
  <uuid>69dc331a-83b9-4cfb-a746-08357bb3bace</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Penryn</model>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
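    <!-- root disk: the DRBD resource shared between the cluster nodes -->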
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/drbd/by-res/virt'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
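    <!-- the two SAN multipath devices, used for the backup and export-domain shares -->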
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/mapper/mpathg'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/mapper/mpathh'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a2:32:70'/>
      <source bridge='b_mgmt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:a2:30:70'/>
      <source bridge='b_backup'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-ovirt/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5903' autoport='no' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
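
libvirt ships a schema validator (in the libvirt-client package) that can catch typos in the definition before Pacemaker tries to start it:

virt-xml-validate /etc/libvirt/qemu/ovirt.xml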

Add KVM as cluster resource

pcs resource create vm-guest1 VirtualDomain \
             hypervisor="qemu:///system" \
             config="/etc/libvirt/qemu/ovirt.xml" \
             op monitor interval=10s
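
Because the root disk lives on DRBD, the VM must only run on the node where the DRBD resource is primary. Assuming the DRBD master/slave resource is named ms_drbd_virt (a placeholder; substitute the name from your cluster configuration), constraints along these lines tie the two resources together:

#run the VM only where DRBD is master, and only after promotion
pcs constraint colocation add vm-guest1 with master ms_drbd_virt INFINITY
pcs constraint order promote ms_drbd_virt then start vm-guest1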

Installation

Install CentOS on VM

Connect to the cluster node the VM is running on and get an ISO image of the latest CentOS.

wget -O /tmp/centos-install.iso http://centos.bio.lmu.de/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso 

#add CD drive to the VM
nano -w /etc/libvirt/qemu/ovirt.xml

Add the following to the <devices> section:

<disk type='file' device='cdrom'>
  <source file='/tmp/centos-install.iso'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>

Then change the boot order by editing the <os> section:

<os>
   ...
   <boot dev='cdrom'/>
</os>

Then force a restart by executing

virsh destroy ovirt

Since the VM is managed as a Pacemaker resource, it is started again automatically from the updated configuration file. Connect using a VNC client and follow the installation instructions.
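
The VNC port is fixed to 5903 (display :3) in ovirt.xml; it can be double-checked with:

virsh vncdisplay ovirt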

Cleaning up

Afterwards, remove the CD-ROM device from ovirt.xml again and revert the changes in the <os> section.

Set up the oVirt Engine

#add repository
yum -y install http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm

#install oVirt and dependencies
yum -y install ovirt-engine

#setup the engine
engine-setup

#provide settings as follows
...
Configure Engine on this host (Yes, No) [Yes]: YES
Configure WebSocket Proxy on this host (Yes, No) [Yes]: YES
...
Do you want Setup to configure the firewall? (Yes, No) [Yes]: NO
...
Where is the Engine database located? (Local, Remote) [Local]: Remote
#enter your database details
#host CLUSTER.clan
#port 5432
#user ovirtusername
#password ovirtpassword
#database ovirtdb
...
#after setup is finished, enable and start the engine
systemctl enable ovirt-engine
systemctl start ovirt-engine
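
To verify that the engine came up, check the service and query the engine's health page (the health servlet should answer on a standard installation):

systemctl status ovirt-engine
curl http://localhost/ovirt-engine/services/health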

Set up networking in the VM

Configure the NIC attached to b_mgmt with OVIRT_MGMT_IP and the NIC attached to b_backup with OVIRT_BACKUP_IP. See the config files; a sketch follows below.
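
A sketch using nmcli (assuming the two NICs show up as connections eth0 and eth1 inside the VM; substitute the actual device names and the addresses behind the placeholders):

nmcli con mod eth0 ipv4.method manual ipv4.addresses OVIRT_MGMT_IP/24
nmcli con mod eth1 ipv4.method manual ipv4.addresses OVIRT_BACKUP_IP/24
nmcli con up eth0
nmcli con up eth1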

Add NFS shares

Two SAN disks have been passed to the VM; we now partition and format them and mount them via /etc/fstab. Make sure the disks are not in use elsewhere: the following steps destroy any data on them!

#create partitioning (one primary partition)
fdisk /dev/vdb
fdisk /dev/vdc

#create filesystem
mkfs.ext4 /dev/vdb1
mkfs.ext4 /dev/vdc1

#create the mount point
mkdir /san

cat << EOT >> /etc/fstab
/dev/vdb1       /san             ext4    defaults        1 1
/dev/vdc1       /san/backups     ext4    defaults        1 1
EOT

#mount the first disk, create the subdirectories on it, then mount the second
mount /san
mkdir /san/{backups,export}
mount /san/backups

#create shares
cat << EOT >> /etc/exports
/san/export     CLUSTER_MGMT_NET/24(rw,async,no_root_squash,no_subtree_check)
/san/backups    CLUSTER_MGMT_NET/24(rw,async,no_root_squash,no_subtree_check)
EOT

#start and enable NFS server
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
systemctl start rpc-statd
systemctl start nfs-idmapd
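
Finally, confirm that both shares are exported:

#re-read /etc/exports and list the active exports
exportfs -ra
exportfs -v
showmount -e localhost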