Welcome To The Machine: Nesting Virtual Environments

Part 01: Feeling Very Xen.

As this is an IT blog, I thought, while I’m still waiting for parts for Ripley, I’d cover some other things I’ve been playing with. Namely virtualisation.

This was prompted by my job hunting and seeing the number of roles that require some level of knowledge of VMware and Microsoft’s Hyper-V. I played around with VMware many moons ago, when virtualisation started coming to the fore. Unfortunately, being a bit of a Linux geek, most of my virtualisation knowledge comes from designing and implementing Xen (and some KVM) solutions in both private and commercial production environments, largely due to the speed and reliability of Linux and Xen, and also the cost (or lack thereof). This means I largely ignored Hyper-V. So, seeing as most of the roles I’m interested in require it, I thought I’d better learn Hyper-V.

Given that I have limited funds and hardware, and I didn’t want to keep rebooting between OSes on my machine, I decided it might be a good learning exercise to create a virtual environment within a virtual environment on my existing Linux installation (Kubuntu 16.04 Xenial Xerus). This would mean three levels of machine:

L0: Physical Host - Kubuntu 16.04 Xenial Xerus with Xen Hypervisor 4.6.0

L1: Virtual Host - Windows Server 2012 R2 + Microsoft Hyper-V Server 2012 R2

L2: Virtual Machine - Windows 7 Pro 64-bit

Not only would it be convenient (no rebooting each time I wanted to run the servers), it would also be a starting point to see how far one can take nested virtual machines. I doubt, with the hardware I currently have, that I could take it very far; maybe as far as L0 to L5? I may break out my server one day and try it. As it stands, it may be useful to describe the hardware I’m using:

Machine:   Dell Vostro 460
Processor: Intel Core i5-2400 3.1GHz quad-core processor.

The following processor features are required for Hyper-V nested virtualisation to work (see the quick check below the hardware listing):
Intel® vPro Technology 
Intel® Virtualisation Technology (VT-x)
Intel® Virtualisation Technology for Directed I/O (VT-d)
Intel® VT-x with Extended Page Tables (EPT)
Intel® 64
Instruction Set: 64-bit
Instruction Set Extensions: SSE4.1/4.2, AVX 

RAM:       14GB DDR3
Storage:   Host OS:  Seagate ST3250318AS (sdb)
           Guest OS: Seagate ST500DM002 (sdc)
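
Before going any further, it’s worth a quick sanity check that the CPU actually exposes the features in that list. A minimal sketch from the Linux side (flag names as reported in /proc/cpuinfo; ‘vmx’ is VT-x and ‘ept’ indicates Extended Page Tables):

root@ripley:/# grep -Ewo 'vmx|ept' /proc/cpuinfo | sort -u   # should print both 'ept' and 'vmx'
root@ripley:/# lscpu | grep -i virt                          # should report VT-x

If either is missing, check that virtualisation is enabled in the BIOS/UEFI before blaming the software.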

Please note: all of the operations that follow are run as root. This is not advisable in a production environment; doing so is at your own risk. Use of ‘sudo’ is advised.

Hard Drives and LVM

As I had originally set up my machine with a bog-standard partitioning scheme (“/” & “/home” on /dev/sdb), I didn’t really want to rebuild it from scratch. I already had LVM2 installed, so I simply repurposed a spare 500GB drive already in the machine: I gave it a single partition (/dev/sdc1) for LVM, created a volume group (“Wintest”, which appears as /dev/Wintest) and then created two logical volumes:

root@ripley:/# vgcreate Wintest /dev/sdc1

root@ripley:/# lvcreate -L 200G -n holly-vm Wintest

root@ripley:/# lvcreate -L 100G -n hyperv-vm Wintest

So, we now have a volume group, Wintest, and the two logical volumes: holly-vm and hyperv-vm on the primary partition /dev/sdc1.
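
Strictly speaking, the partition should be initialised as an LVM physical volume before vgcreate can use it (recent vgcreate versions will do this for you, but being explicit does no harm). A minimal sketch of the preparation and verification steps around the commands above:

root@ripley:/# pvcreate /dev/sdc1   # mark the partition as an LVM physical volume
root@ripley:/# pvs                  # confirm the PV is visible
root@ripley:/# lvs Wintest          # after the lvcreate commands, both LVs should be listed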


XEN Hypervisor

Next, I needed to install the Xen hypervisor without losing my GUI (as this is my personal machine, I use it for all kinds of things, including web browsing; Lynx is a fine browser, but Netflix and Amazon Prime don’t really work very well in it). On a server this isn’t a problem, as I never bother with GUIs on servers: they just slow things down and make one lazy.

Installing Xen is relatively simple using apt-get:

root@ripley:/# apt-get install xen-hypervisor-amd64

Once Xen has finished installing, some changes need to be made in the Xen GRUB configuration file located at /etc/default/grub.d/xen.cfg. The following two lines need to be uncommented (or added if they’re not there):

GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT"
GRUB_CMDLINE_LINUX_XEN_REPLACE="$GRUB_CMDLINE_LINUX"

This makes the Xen boot entry use the same kernel command line as a native boot (i.e. the machine keeps its GUI and carries on running normally, but now as dom0 under the Xen hypervisor).

Once these changes have been made, run “update-grub” and reboot; the machine should then boot to the default “Ubuntu GNU/Linux, with Xen hypervisor” entry in GRUB.
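
After the reboot, a quick way to confirm that the machine really did come up under Xen is to query the hypervisor with the xl toolstack (a minimal check; the exact output varies by version):

root@ripley:/# xl info | grep xen_version   # should report 4.6.x
root@ripley:/# xl list                      # dom0 should appear as 'Domain-0'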

The next configuration change is to the networking to allow bridging and give the VMs access to the host networking.

In Ubuntu, the networking config is usually contained in /etc/network/interfaces. If you are using NetworkManager, it will have to be disabled first. Normally, a single-NIC interfaces file would look something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

Although the name of the NIC will not necessarily be ‘eth0’ (mine is ‘enp4s0’), the changes are the same:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo eth0 xenbr0
iface lo inet loopback

iface xenbr0 inet dhcp
 bridge_ports eth0

iface eth0 inet manual

If the machine is using static IP then change the interfaces file to reflect this, for example:

iface xenbr0 inet static
        bridge_ports eth0
        address 192.168.1.2
        broadcast 192.168.1.255
        netmask 255.255.255.0
        gateway 192.168.1.1

(of course, replacing the IP addresses with the correct ones!).

The last step is to restart the networking by issuing the following commands:

ifdown eth0 && ifup xenbr0 && ifup eth0

Run ‘ifconfig’ to check that the NIC has come back up correctly and that ‘xenbr0’ has the IP address. It should look something like this:

enp4s0 Link encap:Ethernet HWaddr xxxxxxxxxxxxxxx 
       UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
       RX packets:187681 errors:0 dropped:0 overruns:0 frame:0
       TX packets:131166 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000 
       RX bytes:172589684 (172.5 MB) TX bytes:22836449 (22.8 MB)
       Interrupt:19

lo     Link encap:Local Loopback 
       inet addr:127.0.0.1 Mask:255.0.0.0
       inet6 addr: ::1/128 Scope:Host
       UP LOOPBACK RUNNING MTU:65536 Metric:1
       RX packets:674 errors:0 dropped:0 overruns:0 frame:0
       TX packets:674 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1 
       RX bytes:53562 (53.5 KB) TX bytes:53562 (53.5 KB)

xenbr0 Link encap:Ethernet HWaddr xxxxxxxxxxxxxxxx
       inet addr:192.168.0.9 Bcast:192.168.0.255 Mask:255.255.255.0
       inet6 addr: fe80::7a2b:cbff:fe91:7b97/64 Scope:Link
       UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
       RX packets:176423 errors:0 dropped:0 overruns:0 frame:0
       TX packets:128578 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000 
       RX bytes:168637110 (168.6 MB) TX bytes:22041476 (22.0 MB)
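
The bridge membership can also be confirmed with brctl from bridge-utils (installed alongside Xen); xenbr0 should list enp4s0 (or eth0) as its only interface:

root@ripley:/# brctl show xenbr0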

The next step is to create the guest operating system config files and obtain the ISO files for the two operating systems.

You may be wondering why I’m using two operating systems instead of Windows Server 2012 R2 alone with its built-in Hyper-V role. The reason is that I will be using the Win2k12 server for other things independently of Hyper-V, and it also lets me work with both sides of the coin: the full server OS and the stripped-down Hyper-V Server, as you would in a production environment. With Xen, I would normally pair it with a CLI-only installation of Debian/Ubuntu etc., but since this is also my personal day-to-day machine, things are slightly different. Anyway, I digress, as usual.

ISOs & Configuration Files

The ISO files can be downloaded from Microsoft as evaluation editions: the Win2k12R2 server gives you 180 days to play with, while the Hyper-V server is unlimited. They can be found here:

Win2k12R2 Server: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2012-r2
Hyper-V Server: https://www.microsoft.com/en-us/evalcenter/evaluate-hyper-v-server-2012-r2
Please note: you will need a Microsoft login and TechNet registration to access these files.

Store the files, once downloaded, in an easy-to-locate folder. Mine are in a folder called “~/Z_Software” (the ‘Z’ makes sure it appears at the bottom of all alphabetically sorted file listings). I also renamed them to make them easier to refer to in the cfg file: the Server 2012 R2 file is now called Win2k12R2.iso and the Hyper-V Server file is called Win2k12_Hypercore.iso.

The Xen Guest Configuration File:

To create a guest VM, Xen needs to know what hardware to present to the VM and what type of virtualisation to use. This information is contained in the guest configuration file (or cfg file for short). The first line in the file tells Xen what type of guest machine I’m creating: hardware-assisted (HVM) or paravirtualised (PV). This dictates how the guest interacts with the host machine and its hardware.

HVM or PV?

There are two types of virtualisation: HVM (hardware-assisted virtualisation) and PV (paravirtualisation). The best explanation of the difference is from the Xen Project pages:

PV

Paravirtualization (PV) is an efficient and lightweight virtualization technique originally introduced by Xen Project, later adopted by other virtualization platforms. PV does not require virtualization extensions from the host CPU. However, paravirtualized guests require a PV-enabled kernel and PV drivers, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. PV-enabled kernels exist for Linux, NetBSD, FreeBSD and OpenSolaris. Linux kernels have been PV-enabled from 2.6.24 using the Linux pvops framework. In practice this means that PV will work with most Linux distributions (with the exception of very old versions of distros).

HVM

Full Virtualization or Hardware-assisted virtualization (HVM) uses virtualization extensions from the host CPU to virtualize guests. HVM requires Intel VT or AMD-V hardware extensions. The Xen Project software uses Qemu to emulate PC hardware, including BIOS, IDE disk controller, VGA graphic adapter, USB controller, network adapter etc. Virtualization hardware extensions are used to boost performance of the emulation. Fully virtualized guests do not require any kernel support. This means that Windows operating systems can be used as a Xen Project HVM guest. Fully virtualized guests are usually slower than paravirtualized guests, because of the required emulation.

Note that it is possible to use PV drivers for I/O to speed up HVM guests. On Windows this requires that appropriate PV drivers are installed. You can find more information on the Xen Project wiki.

On operating systems with Xen Support – aka with PV or PVHVM drivers, these drivers will be automatically used when you select the HVM virtualization mode.

Because I’m installing Microsoft Windows (which cannot run as a PV guest) and this is experimental anyway, I’m using HVM. This is set in the first line of the cfg file:

builder='hvm' #Hardware assisted virtualisation

I then need to tell Xen how many CPU cores and how much memory to assign to the virtual machine. I also need to give the machine a name to identify it in the domain listing when it is running:

memory=4096       # Assign 4GB of RAM to the VM
vcpus=2           # Assign two CPU cores to the VM
name="hyperv-vm"  # Descriptive name

The next settings are for the network, the hard drive and the CD/DVD drive (or, in this case, an ISO file). During the installation of Xen, the bridge utilities were installed and I set up the networking to use bridging, so I can now create a virtual NIC for the VM using the vif setting. The MAC address should always start with 00:16:3e, as this is an OUI assigned to the Xen Project for use by Xen users (similar in spirit to the reserved private IPv4 ranges, e.g. 192.168.x.x). I could also generate a MAC address randomly but, as I’m on a very small network, I’ll stick with the Xen OUI. The second group of six hex digits is my own choice (it’s taken from the Enterprise self-destruct sequence in Star Trek III: The Search for Spock). The bridge is specified as the bridged NIC created during Xen setup (there can be many bridges, but there’s only one in this instance):

vif = ['mac=00:16:3e:1a:2a:02,bridge=xenbr0']

The hard drive and CD/DVD are designated through the disk setting. The first part specifies the physical device the VM will live on, in this case the ‘hyperv-vm’ LV, presented as a hard drive by ‘hda’ and set read-write by the ‘w’ flag. The CD/DVD ISO file is specified in the second part, presented as a CD-ROM by ‘hdc:cdrom’ and read-only by the ‘r’ flag:

disk = ['phy:/dev/Wintest/hyperv-vm,hda,w','file:/home/canderson/Z_Software/Win2k12_Hypercore.iso,hdc:cdrom,r']

The boot sequence is set as CDROM then hard drive by:

boot="dc"

The following two options are required for nested virtualisation. They can be omitted if standard virtual machines are to be created.

HAP is Hardware Assisted Paging and requires a CPU feature called EPT by Intel and RVI by AMD. It is used to manage the guest’s MMU (Memory Management Unit).

hap=1

NestedHVM allows Xen to run another virtualisation environment as a VM.

nestedhvm=1
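
To confirm the host side is capable of providing both, xl info can be queried again. A minimal check (the exact wording of Xen’s boot log varies between versions):

root@ripley:/# xl info | grep virt_caps   # 'hvm' must be present to run HVM guests
root@ripley:/# xl dmesg | grep -i hap     # Xen's boot log should mention Hardware Assisted Paging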

The ‘vnc’ and ‘vnclisten’ options allow a VNC session to view and interact with the VM during boot and installation, when, in the case of Microsoft OSes, RDP is not yet activated.

vnc=1
vnclisten="0.0.0.0"

You can also set a VNC password if needed:

vncpassword="xxxxxxxxxxxxxxxxxxxxx"
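
Once the guest is running, any VNC client can attach to its console. By default Xen’s built-in VNC server gives the first VNC-enabled guest display :0 (TCP port 5900), so from another machine on the LAN something like the following should work (replace 192.168.0.9 with your host’s address):

vncviewer 192.168.0.9:0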

This, then, is my virtual machine config file in full:

builder='hvm'
memory=4096
vcpus=2
name="hyperv-vm"
vif = ['mac=00:16:3e:1a:2a:02,bridge=xenbr0']
disk = ['phy:/dev/Wintest/hyperv-vm,hda,w','file:/home/canderson/Z_Software/Win2k12_Hypercore.iso,hdc:cdrom,r']
boot="dc"
hap=1
nestedhvm=1
vnc=1
vnclisten="0.0.0.0"
vncpassword="xxxxxxxxxxxxx"
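
As a preview of the next part: once this file is saved (I keep mine under /etc/xen/, e.g. /etc/xen/hyperv-vm.cfg, though the location is just a convention), the guest is started with the xl toolstack, roughly:

root@ripley:/# xl create /etc/xen/hyperv-vm.cfg   # build and start the guest
root@ripley:/# xl list                            # 'hyperv-vm' should now be listed alongside Domain-0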

In Part 10 I’ll spin up the virtual machine and start installing Hyper-V Server 2012 R2.

And for those of you who don’t speak binary, 1. What are you doing here? and 2. Part 10 is Part 2 in binary. #geekhumour
