Using Imagefactory to build Vagrant images

The Fedora Koji build system and the CentOS Community Build System (CBS) use imagefactory at the back end of Koji to build Vagrant images. I have used it through cbs/koji, but wanted to give it a try directly, as I am looking for easier ways to build the adb-atomic-developer-bundle, especially for developers who do not have access to the Fedora or CentOS build systems.

Imagefactory needs a KVM/libvirt hypervisor to build images, and it can convert them for other providers, e.g. VirtualBox or VMware Fusion.

Setup:

I used my laptop (which runs Fedora 23) for this. As I plan to hack on imagefactory, I did not want to damage my laptop's KVM setup, so I used nested virtualization: a CentOS 7 VM which can itself run virtual machines.

All the steps below are done on that CentOS 7 VM, which has a working KVM setup in place.

Installation:

Imagefactory is available in the Fedora and EPEL repos, but I wanted to try/test the latest code, so I generated RPMs from the latest source and then installed them.

$ sudo yum install rpmdevtools epel-release
$ git clone https://github.com/redhat-imaging/imagefactory.git
$ cd imagefactory
$ make rpm
$ cd imagefactory_plugins
$ make rpm
$ cd ~/rpmbuild/RPMS/noarch/
$ sudo yum localinstall ./*
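
To sanity-check that the freshly built packages were installed, a quick RPM query helps (the package names here are assumed from the spec files in the repo):

$ rpm -q imagefactory imagefactory-plugins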

Building Vagrant Images:

For building the Vagrant box I used Ian McLeod's example git repo. He is the maintainer and one of the primary developers of imagefactory.

The commands below are copied from the imagefactory-examples git repo.

$ git clone https://github.com/imcleod/imagefactory-examples.git
$ cd imagefactory-examples/vagrant/

Once you are in the “imagefactory-examples/vagrant/” directory, you will see that the files required to generate a Fedora 22 image (the kickstart f22-vagrant.ks and the template f22-minimal-40g.tdl) are already there, so we can start running the commands.
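
If you have not come across a TDL before, it is a small XML template that tells imagefactory (via Oz) which OS to install and from where. A minimal sketch in the spirit of f22-minimal-40g.tdl is below; the install URL and exact element layout are illustrative, and the actual file in the examples repo is authoritative.

<template>
  <name>f22-minimal</name>
  <os>
    <name>Fedora</name>
    <version>22</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://dl.fedoraproject.org/pub/fedora/linux/releases/22/Server/x86_64/os/</url>
    </install>
  </os>
  <description>Minimal Fedora 22 template with a 40 GB disk for Vagrant builds</description>
  <disk>
    <size>40</size>
  </disk>
</template>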

To get a working Vagrant box we need to run three commands (shown below) that end with an appropriate OVA image: build a base image, turn it into a RHEV-M target image, and wrap that as an OVA. Each command prints a UUID for the intermediate image it creates, and we use that UUID in the next command.

$ sudo imagefactory --debug base_image \
  --file-parameter install_script ./f22-vagrant.ks \
  --parameter offline_icicle true \
  ./f22-minimal-40g.tdl
Output:
xxxxxxxxxxxxxxxxxxxxxxxxxxx
============ Final Image Details ============
UUID: 109cb45f-bbd2-4a27-ba5f-42e2d368be32
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --id 109cb45f-bbd2-4a27-ba5f-42e2d368be32  rhevm

Output:
============ Final Image Details ============
UUID: ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --parameter rhevm_ova_format vagrant-libvirt --id ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc ova

Output:
============ Final Image Details ============
UUID: 36fcb589-06b8-447b-85bf-ed4715bd2a93
Type: target_image
Image filename: /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body
Image build completed SUCCESSFULLY!

The last step generates the F22 image for the libvirt provider. You can rename it to f22.libvirt.box (Vagrant images usually have a .box extension) and start using it.

$ cp /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body ./f22.libvirt.box
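
From here the box can be consumed like any other Vagrant box, assuming the vagrant-libvirt plugin is installed wherever you run Vagrant:

$ vagrant box add f22 ./f22.libvirt.box
$ vagrant init f22
$ vagrant up --provider=libvirt

A VirtualBox flavoured box should be possible in the same way by going through the vsphere plugin instead of rhevm. This is a sketch from memory of the same examples repo, so treat its README as authoritative for the exact parameter names:

$ sudo imagefactory --debug target_image --id <base-image-UUID> vsphere
$ sudo imagefactory --debug target_image --parameter vsphere_ova_format vagrant-virtualbox --id <target-image-UUID> ova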


KVM Nested Virtualization In Fedora 23

Nested virtualization allows you to run a virtual machine (VM) inside another VM [1]. Both Intel and AMD support nested virtualization.

This is very helpful when you are experimenting with hypervisor-related technologies. For example, I can run both KVM and VirtualBox on my laptop, each in a different VM. I can also run a local installation of imagefactory to build Vagrant images inside a VM, as imagefactory needs a hypervisor to run the build. The best part is that I can experiment with all of these inside different VMs without damaging my primary workstation's hypervisor.

The steps below were done on Fedora 23 running on a Lenovo ThinkPad with an Intel chipset.

Step 1: Make sure Intel virtualization (VT) is enabled for the host machine.

$ cat /proc/cpuinfo | grep vmx

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
(the same flags line is printed once per logical CPU; four in total on this machine)

The output should contain vmx; otherwise Intel virtualization (VT) is not enabled on the machine and you should first fix the setting in the BIOS.
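
An equivalent quick check is to count the logical CPUs that expose the flag; the count below matches the four flags lines shown above:

$ grep -c vmx /proc/cpuinfo
4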

Step 2: Install KVM on the F23 host.

$ dnf install @virtualization

Nested virtualization is disabled by default:

$ cat /sys/module/kvm_intel/parameters/nested
 N

Step 3: Enable nested virtualization by running the commands below.

  • Temporarily remove the kvm kernel module:
    $ sudo rmmod kvm-intel
  • Add the following directive to /etc/modprobe.d/dist.conf:
    $ sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
  • Insert the kvm module back into the kernel:
    $ sudo modprobe kvm-intel

There is an alternative way to do the same, i.e. pass kvm-intel.nested=1 on the kernel command line [3].
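
If you prefer the kernel command line route, grubby can persist the argument for all installed kernels on Fedora. This is a sketch; verify it against your boot setup:

$ sudo grubby --update-kernel=ALL --args="kvm-intel.nested=1"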

Step 4: Reboot and verify that nested virtualization is enabled

  • Check that nested virt is enabled
$ sudo cat /sys/module/kvm_intel/parameters/nested
 Y

Step 5: Install the beefy VM (let's call it the parent VM).

  • I used the CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the VM through Virtual Machine Manager.
  • Parent VM configuration: 50 GB disk, 4 GB RAM and 4 vCPUs.

Step 6: Enable the VM to use nested virt

  • Go to the Virtual Machine Manager GUI -> CPU properties -> select “Copy host CPU configuration”.

There is also another option, i.e. host-passthrough [1]. It is supposed to be more stable than “Copy host CPU configuration”, but I have not tried it yet.
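
For reference, host-passthrough is set in the libvirt domain XML. A minimal sketch, with an illustrative VM name; edit your parent VM's definition and replace its <cpu> element:

$ sudo virsh edit centos7-parent
<cpu mode='host-passthrough'/>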

Step 7:  Check that Intel virtualization (VT) is enabled in the VM

$ cat /proc/cpuinfo | grep vmx

Step 8: Install KVM inside the VM  [4]

$ yum install qemu-kvm qemu-img
$ yum install libvirt libvirt-python virt-install

$ systemctl enable libvirtd
$ systemctl start libvirtd
$ systemctl status libvirtd
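
libvirt also ships a self-check that is handy here; run it inside the parent VM and it should report PASS for the hardware virtualization checks if nested virt made it through (on CentOS 7 the tool comes with the libvirt client packages installed above):

$ sudo virt-host-validate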

Step 9: Install the child VM inside the parent VM.

  • I used Virtual Machine Manager to connect to the parent VM and then install the child VM.
  • I used the same CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the child VM.

[1] https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM
[2] http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html
[3] http://kashyapc.com/2012/01/14/nested-virtualization-with-kvm-intel/
[4] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Guest_Installation.html

vagrant-cachier in Fedora 23 with KVM Libvirt

vagrant-cachier is a very useful plugin for Vagrant users. It reduces the time spent and the number of packages downloaded from the internet between one “vagrant destroy” and the next.

For example, say you are using a CentOS 7 image in a Vagrant setup and want it updated with the latest packages every time you start working in the guest. The usual workflow is “vagrant up” -> “vagrant ssh” -> “sudo yum update -y” -> do your stuff -> “vagrant destroy”. But the number of packages downloaded during each yum update, and the time it consumes, is undesirable.

vagrant-cachier keeps the downloaded packages on the host machine's filesystem and shares them with the guest as a cache. The yum update in the guest then gets its packages from the cache, so time and internet usage are drastically reduced. Which is really cool!

I tried to install vagrant-cachier on my Fedora 23 laptop with KVM and libvirt and ran into the issue below.

Issue:

[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

An error occurred while installing ruby-libvirt (0.5.2), and Bundler cannot continue.
Make sure that `gem install ruby-libvirt -v '0.5.2'` succeeds before bundling.

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

/usr/bin/ruby -r ./siteconf20151027-20676-13hfub7.rb extconf.rb
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
extconf.rb:73:in `<main>': libvirt library not found in default locations (RuntimeError)

extconf failed, exit code 1

Gem files will remain installed in /root/.vagrant.d/gems/gems/ruby-libvirt-0.5.2 for inspection.
Results logged to /root/.vagrant.d/gems/extensions/x86_64-linux/ruby-libvirt-0.5.2/gem_make.out

After installing the “libvirt-devel” package, the issue was resolved.

[root@dhcp35-203 ~]# dnf install libvirt-devel

[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Installed the plugin 'vagrant-cachier (1.2.1)'!

However, getting “vagrant up” to work took a little more effort.

$ vagrant init centos/7

We then need to modify the Vagrantfile, because vagrant-cachier by default uses NFS to mount the host filesystem into the guest.

$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box

    config.cache.synced_folder_opts = {
      type: :nfs,
      mount_options: ['rw', 'vers=3', 'tcp', 'nolock']
    }
  end

end

The next step was:

$ vagrant up
xxxxxxxxxxxxxxxxxxxx
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o 'rw,vers=3,tcp,nolock' 192.168.121.1:'/home/lmohanty/.vagrant.d/cache/fedora/23-cloud-base' /tmp/vagrant-cache

Stdout from the command:

Stderr from the command:

mount.nfs: Connection timed out

After a little troubleshooting it turned out to be a firewall issue: iptables was blocking the host's NFS service. As a temporary workaround I flushed all the iptables rules on the host.

$ iptables -F
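
Flushing every rule is heavy-handed. On a firewalld-based host, a less destructive workaround would be to open just the NFS-related services; the service names below are the ones firewalld ships, but verify them on your system:

$ sudo firewall-cmd --add-service=nfs --add-service=rpc-bind --add-service=mountd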

After that, “vagrant up” worked fine and I could see the changes vagrant-cachier made in the guest to make the caching work.

Here is what vagrant-cachier does for the caching to work:

  • It mounts ~/.vagrant.d/cache/<guest-name> from the host at /tmp/vagrant-cache/ in the guest.
  • In the guest:
    • It enables yum caching, i.e. sed -i 's/keepcache=0/keepcache=1/g' /etc/yum.conf
    • It replaces /var/cache/yum with a symlink to /tmp/vagrant-cache/yum:

[vagrant@localhost ~]$ ls -l /var/cache
total 8
drwx------. 2 root root 4096 Nov 15 00:08 ldconfig
drwxr-xr-x. 2 root root 4096 Jun  9  2014 man
lrwxrwxrwx. 1 root root   22 Nov 15 00:06 yum -> /tmp/vagrant-cache/yum
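
A quick way to confirm the cache survives a rebuild is to recreate the guest and look for previously downloaded RPMs under the cache mount:

$ vagrant destroy -f
$ vagrant up
$ vagrant ssh -c "ls /tmp/vagrant-cache/yum"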

vagrant-cachier works fine with CentOS 7 guests. However, I found an issue with Fedora 23 guests, as the default package manager there is dnf instead of yum. I have filed an issue against vagrant-cachier and am also working on a fix.

DevConf 2015

It was a wonderful experience attending DevConf 2015 in Brno, Czech Republic. The conference hosted some really interesting talks.

Talks I attended

  • Keynote: The Future of Red Hat
  • What is kubernetes?
  • Docker Security
  • Fedora Atomic
  • Performance tuning of Docker and RHEL Atomic
  • GlusterFS - Architecture and Roadmap
  • golang - the Good, the bad and the ugly
  • Vagrant for your Fedora
  • CI Provisioner – slave and test resources in Openstack, Beaker, Foreman, and Docker through one mechanism
  • Effective Beaker(workshop)
  • Project Atomic discussions (not listed in the schedule but interested folks got together)
  • Running a CentOS SIG (My talk)
  • CentOS: A platform for your success
  • Super Privileged Containers (SPC)
  • Ceph FS development update

Apart from these, I attended a few talks that were not useful to me for various reasons, so I have not listed them.

Most of the talks were live-streamed and recorded. You can find them on Red Hat Czech's YouTube channel, i.e. https://www.youtube.com/user/RedHatCzech

Because of time constraints I missed some of the good talks too, e.g. the systemd and OpenShift talks.

Also, because of DevConf, I can now put faces to many IRC nicknames and email addresses. It was a nice experience to meet people in person with whom I had only communicated through IRC and email.

Many thanks to the organisers, speakers and participants for making DevConf 2015 a success. Looking forward to the next DevConf.

Bangalore CentOS Dojo, 2014

The first CentOS Dojo in India took place in Bangalore on Saturday, 15th November 2014 at the Red Hat Bangalore office. Red Hat sponsored the event.

I was a co-organizer of the Dojo along with Dominic and Karanbir Singh. Around 90 people RSVPed for the event, but around 40 (mostly system administrators and new users) attended.

The first talk was by Aditya Patawari on “An introduction to Docker and Project Atomic”. The talk included a demo and introduced the audience to Docker and the Atomic host. Most of the attendees had questions on Docker, as they had used it or heard about it. There were some questions about the differences between CoreOS and Project Atomic. The slides are available at http://www.slideshare.net/AdityaPatawari/docker-centosdojo. Overall this talk gave a fair idea of Docker and the Atomic project.

The second talk was “Be Secure with SELinux Gyan” by Rejy M Cyriac. This session was about troubleshooting SELinux issues and an introduction to creating custom SELinux policy modules. Rejy made the talk interesting by distributing SELinux stickers to attendees who asked interesting questions or answered questions. Slides can be found here.

After these two talks we took a lunch break of around an hour. During the break we distributed the CentOS t-shirts and got a chance to socialize with the attendees.

The first session after lunch was “Scale out storage on CentOS using GlusterFS” by Raghavendra Talur. The talk introduced the audience to GlusterFS and its important high-level concepts, and a demo was shown using packages from the CentOS Storage SIG. Slides can be found at slideshare.

The next session was “Network Debugging” by Jijesh Kalliyat. This talk covered almost all of the basic concepts and network diagnostic tools required to troubleshoot a network issue. It also included a demo of using Wireshark and tcpdump to debug network issues. Slides are available here.

Before the next talk, we took a short break and took some group pictures of everyone present at the Dojo.

The last session was on “Systemd on CentOS” by Saifi Khan. The talk covered a lot of ground, e.g. a comparison between SysVinit and systemd, concurrency at scale, how systemd is more scalable than other available init systems, some design principles it shares with CoreOS, and why it is better suited to Linux container technology. Saifi also talked about how systemd had saved his system from becoming unusable. His liking for systemd was quite evident from his talk and enthusiasm.

Overall it was an awesome experience participating in the Dojo, as it covered a wide variety of topics that are important for deploying CentOS for various purposes.

Bangalore Dojo link: http://wiki.centos.org/Events/Dojo/Bangalore2014

Group Photo. You can see happy faces there 🙂


Bangalore Dojo, 2014