KVM Nested Virtualization In Fedora 23

Nested virtualization allows you to run a virtual machine (VM) inside another VM [1]. Both Intel and AMD support nested virtualization.

This is very helpful when you are experimenting with hypervisor-related technologies. For example, I can run KVM and VirtualBox on the same laptop, each in its own VM. I can also run a local installation of imagefactory in a VM to build Vagrant images, as imagefactory needs a hypervisor to run the build. The best part is that I can experiment with all of these inside different VMs without damaging my primary workstation’s hypervisor.

The steps below were done on Fedora 23 running on a Lenovo ThinkPad with an Intel chipset.

Step 1: Make sure Intel virtualization (VT) is enabled for the host machine.

$ cat /proc/cpuinfo | grep vmx

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
(one such flags line appears for every logical CPU)

The output should contain vmx; otherwise Intel virtualization (VT) is not enabled on the machine, and you should first enable it in the BIOS.
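
On AMD hardware (not what I used here), the flag to look for is svm instead of vmx, and the kernel module referenced in the later steps is kvm-amd rather than kvm-intel:

$ cat /proc/cpuinfo | grep svm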

Step 2: Install KVM on the F23 host.

$ sudo dnf install @virtualization

Nested virtualization should be disabled by default:

$ cat /sys/module/kvm_intel/parameters/nested
 N

Step 3: Enable nested virtualization. Run the commands below (they need root privileges, hence sudo).

  • Temporarily remove the kvm-intel kernel module
    $ sudo rmmod kvm-intel
  • Add the following directive to /etc/modprobe.d/dist.conf so the setting persists
    $ sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
  • Load the kvm-intel module back into the kernel
    $ sudo modprobe kvm-intel

There is an alternative way to do the same thing: pass kvm-intel.nested=1 on the kernel command line [3].
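
For the kernel command line route, something like the following should work with Fedora's stock GRUB setup (a sketch only; I used the modprobe.d method above instead):

$ sudo grubby --update-kernel=ALL --args="kvm-intel.nested=1"
$ sudo reboot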

Step 4: Reboot and verify that nested virtualization is enabled

  • Check that nested virt is enabled
    $ sudo cat /sys/module/kvm_intel/parameters/nested
    Y

Step 5: Install the beefy VM. (Let's call it the parent VM.)

  • I used the CentOS 7 minimal ISO (CentOS-7-x86_64-Minimal-1503-01.iso) to install the VM through Virtual Machine Manager.
  • Parent VM configuration: 50GB disk, 4GB RAM and 4 vCPUs (a rough command-line equivalent is sketched below)
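
If you would rather skip the GUI, a roughly equivalent virt-install command would look like this (I used Virtual Machine Manager, so the VM name and ISO path below are only illustrative):

$ sudo virt-install --name parent-vm --memory 4096 --vcpus 4 \
    --disk size=50 --os-variant centos7.0 \
    --cdrom /path/to/CentOS-7-x86_64-Minimal-1503-01.iso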

Step 6: Enable the VM to use nested virt

  • Go to the Virtual Machine Manager GUI -> CPU properties -> select “Copy host CPU configuration”

There is also another option, host-passthrough [1]. It is supposed to be more stable than “Copy host CPU configuration”, but I have not tried it yet.
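
For reference, host-passthrough is configured in the guest's libvirt XML (for example via "sudo virsh edit parent-vm", using whatever name you gave the VM) by setting the cpu element to:

<cpu mode='host-passthrough'/>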

Step 7:  Check that Intel virtualization (VT) is enabled in the VM

$ cat /proc/cpuinfo | grep vmx

Step 8: Install KVM inside the VM  [4]

$ yum install qemu-kvm qemu-img
$ yum install libvirt libvirt-python python-virtinst

$ systemctl enable libvirtd
$ systemctl start libvirtd
$ systemctl status libvirtd
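
As an extra sanity check (not strictly required), you can confirm inside the parent VM that the KVM modules are loaded and the /dev/kvm device node exists:

$ lsmod | grep kvm
$ ls -l /dev/kvm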

Step 9:  Install the child VM inside the parent VM

  • I used Virtual Machine Manager to connect to the parent VM and then install the child VM (see the connection example below).
  • Used the same CentOS 7 minimal ISO (CentOS-7-x86_64-Minimal-1503-01.iso) to install the child VM.
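
If you want to drive the nested install from your workstation, Virtual Machine Manager can connect to the libvirt daemon inside the parent VM over SSH; the address below is just a placeholder:

$ virt-manager -c qemu+ssh://root@<parent-vm-ip>/system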

[1] https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM

[2] http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html

[3] http://kashyapc.com/2012/01/14/nested-virtualization-with-kvm-intel/

[4] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Guest_Installation.html

vagrant-cachier in Fedora 23 with KVM Libvirt

vagrant-cachier is a very useful plugin for Vagrant users. It reduces the time spent and the number of packages downloaded from the internet between each “vagrant destroy”.

For example, say you are using a CentOS 7 image in your Vagrant setup and want it updated with the latest packages every time you start working in the guest. The usual workflow is “vagrant up” -> “vagrant ssh” -> “sudo yum update -y” -> “Do your stuff” -> “vagrant destroy”. But the number of packages downloaded during each yum update, and the time it takes, is undesirable.

vagrant-cachier keeps the downloaded packages on the host machine's filesystem and exposes them to the guest as a cache. A yum update in the guest then gets packages from the cache, so time and internet usage are drastically reduced. Which is really cool!

I tried to install vagrant-cachier on my Fedora 23 laptop with KVM and libvirt and ran into the issue below.

Issue:

[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

An error occurred while installing ruby-libvirt (0.5.2), and Bundler cannot continue.
Make sure that `gem install ruby-libvirt -v '0.5.2'` succeeds before bundling.

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

/usr/bin/ruby -r ./siteconf20151027-20676-13hfub7.rb extconf.rb
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
extconf.rb:73:in `<main>': libvirt library not found in default locations (RuntimeError)

extconf failed, exit code 1

Gem files will remain installed in /root/.vagrant.d/gems/gems/ruby-libvirt-0.5.2 for inspection.
Results logged to /root/.vagrant.d/gems/extensions/x86_64-linux/ruby-libvirt-0.5.2/gem_make.out

After installing the “libvirt-devel” package, the issue was resolved.

[root@dhcp35-203 ~]# dnf install libvirt-devel

[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Installed the plugin 'vagrant-cachier (1.2.1)'!
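
You can confirm the plugin is registered with:

$ vagrant plugin list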

However, the subsequent “vagrant up” also failed, as described below. First, initialize a box:

$ vagrant init centos/7

Then we need to modify the Vagrantfile, as vagrant-cachier by default uses NFS to mount the host filesystem into the guest.

$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box

    config.cache.synced_folder_opts = {
      type: :nfs,
      mount_options: ['rw', 'vers=3', 'tcp', 'nolock']
    }
  end

end

The next step was:

$ vagrant up
xxxxxxxxxxxxxxxxxxxx
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o 'rw,vers=3,tcp,nolock' 192.168.121.1:'/home/lmohanty/.vagrant.d/cache/fedora/23-cloud-base' /tmp/vagrant-cache

Stdout from the command:

Stderr from the command:

mount.nfs: Connection timed out

After a little troubleshooting, it turned out to be a firewall (iptables) issue: iptables was blocking the host's NFS service. As a temporary workaround, I flushed all the iptables rules on the host.

$ iptables -F
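
A less drastic alternative (which I have not tried here) would be to allow just the NFS-related services through firewalld instead of flushing every rule:

$ sudo firewall-cmd --add-service=nfs
$ sudo firewall-cmd --add-service=mountd
$ sudo firewall-cmd --add-service=rpc-bind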

After that, “vagrant up” worked fine, and I could see the changes vagrant-cachier made in the guest to enable caching.

Here are the things vagrant-cachier does to make the caching work.

  • Mounts ~/.vagrant.d/cache/<guest-name> from the host into the guest at /tmp/vagrant-cache/
  • In the guest
    • It enables yum caching, i.e. sed -i 's/keepcache=0/keepcache=1/g' /etc/yum.conf
    • It creates a symlink at /var/cache/yum pointing to /tmp/vagrant-cache/yum
[vagrant@localhost ~]$ ls -l /var/cache
total 8
drwx------. 2 root root 4096 Nov 15 00:08 ldconfig
drwxr-xr-x. 2 root root 4096 Jun  9  2014 man
lrwxrwxrwx. 1 root root   22 Nov 15 00:06 yum -> /tmp/vagrant-cache/yum
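
To see the cache from the host side, look under ~/.vagrant.d/cache; with the centos/7 box and :box scope, the yum cache ends up somewhere like the path below (the exact layout may differ):

$ ls ~/.vagrant.d/cache/centos/7/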

vagrant-cachier works fine with CentOS 7 guests. However, I found an issue with Fedora 23 guests, as the default package manager there is dnf instead of yum. I have filed an issue with vagrant-cachier and am also working on a fix.