Debugging with git bisect

This is a post appreciating “git bisect” and how it can be one of the most powerful tools for finding the root cause of a broken build or a broken branch.

Here is a simple example of how “git bisect” can be used to find a bad commit.

Let’s assume that we have a git repository with hundreds of commits and that the HEAD of the master branch is currently broken. Our objective is to find out which commit introduced the bug into the code base.

Before starting the git bisect process we need to know a couple of things. First, we need to know a good commit, i.e. an old commit at which the code worked as expected. This is usually not difficult to find, as it is most likely the last release of the code. We also need to know the steps to test the code and reproduce the issue; they will help us decide whether a given commit is good or bad during the git bisect process.

Git bisect runs a binary search between the good and the bad commit to find the commit that introduced the bug.

Here are the commands to start the git bisect workflow. Let’s call the current HEAD commit the “original HEAD”.

$ git bisect start

$ git bisect bad

$ git bisect good  <commit ID>

Bisecting: 130 revisions left to test after this (roughly 7 steps)

Once the above commands are executed, git bisect will move HEAD to a commit roughly in the middle between the “original HEAD” and the good commit. Read up on binary search if you want to know how it decides which commit HEAD should be moved to.

At this point we are expected to test the code and find out whether we can reproduce the issue. After testing, we tell git bisect that it is a bad commit (see below) if we can reproduce the issue, or a good commit otherwise.

$ git bisect bad
Bisecting: 65 revisions left to test after this (roughly 6 steps)

Or

$ git bisect good
Bisecting: 65 revisions left to test after this (roughly 6 steps)

We need to continue this process a few times, and git bisect will eventually point at the commit that introduced the issue/bug.
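
If your test can be scripted, the remaining back-and-forth can be automated with “git bisect run”. A minimal sketch, assuming a hypothetical test.sh that exits 0 when the code works and non-zero when the bug reproduces:

$ git bisect run ./test.sh    # git checks out each candidate commit and runs the script
$ git bisect reset            # return to the original HEAD once the culprit is found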

In my experience I have always gotten to the offending commit within 4 to 5 steps of git bisect, which I think is awesome.

So go ahead and try git bisect if you have not tried it yet, and do not forget to use it when you hit broken builds.


Creating Live media image

If you are a GNU/Linux user, there is a high chance that you have used live media images, i.e. a live CD, live DVD or ISO. I was curious about how live images are created and recently got a chance to do some hands-on work.

There are multiple tools available for creating live media; however, I am going to use livecd-tools to create a live ISO. livecd-tools needs kickstart files to create the images, and we will use the CentOS upstream kickstart files [1].

As a first step, install livecd-tools [2]. I am using a CentOS Vagrant box to create the live ISOs.

$ sudo yum install livecd-tools git -y
$ git clone https://github.com/CentOS/sig-core-livemedia
$ cd sig-core-livemedia/kickstarts

# You can use any one of the .cfg files in sig-core-livemedia/kickstarts
$ sudo livecd-creator --config <Kickstart file>
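
For example, assuming the directory contains a kickstart named centos-7-livemedia.cfg (a hypothetical name; check the actual .cfg files in the directory), the invocation would look like this. The --fslabel option just sets the volume label of the resulting ISO.

$ sudo livecd-creator --config centos-7-livemedia.cfg --fslabel CentOS-7-Live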

If you are new to kickstart files and want to know more about them, refer to the documentation: https://github.com/rhinstaller/pykickstart/blob/master/docs/kickstart-docs.rst

[1] https://github.com/CentOS/sig-core-livemedia

[2] https://github.com/rhinstaller/livecd-tools

Running System V init script in CentOS7

RHEL 7 and CentOS 7 moved from the System V init facility to systemd. So if we want to run a script at boot, it is highly recommended to write a systemd unit file. However, to ease the transition to systemd, CentOS 7/RHEL 7 offers a backward compatibility mode for init scripts.
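
For reference, here is a minimal sketch of such a unit file for a hypothetical /root/hello.sh (the file and service names are my assumptions), saved as /etc/systemd/system/hello.service and enabled with “systemctl enable hello.service”:

[Unit]
Description=Run hello.sh at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/root/hello.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target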

In a System V init system you add an entry for the script to /etc/rc.local (or its equivalent), and the script is executed during boot at the end of the multi-user run level.

In CentOS7 /etc/rc.local is a symbolic link to /etc/rc.d/rc.local.

[root@localhost ~]# ls -l /etc/rc.local 
lrwxrwxrwx. 1 root root 13 Apr  4  2016 /etc/rc.local -> rc.d/rc.local

To enable the init compatibility mode, you need to make  /etc/rc.d/rc.local executable and then add the script to /etc/rc.local.

Here are the steps you need to perform to start a script, e.g. hello.sh, during boot.

$ chmod +x /etc/rc.d/rc.local

#make the script executable
$ chmod +x hello.sh

#Make an entry in /etc/rc.local
$ cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/root/hello.sh

Reference : https://www.centos.org/forums/viewtopic.php?t=48140

Using Docker registry in Atomic Developer Bundle

With the Atomic Developer Bundle you can easily set up OpenShift on your workstation, and OpenShift creates a local docker registry which can be used independently.

To make these commands work, you need to have a working Vagrant setup with the VirtualBox or Libvirt/KVM provider.

For this blog I am using ADB 2.1. I also believe these instructions will work with CDK 2.0.

$ vagrant plugin install vagrant-service-manager
$ git clone https://github.com/projectatomic/adb-atomic-developer-bundle
$ cd adb-atomic-developer-bundle/components/centos/centos-openshift-setup/
$ vagrant up

After the above commands you should have a single-node OpenShift setup based on OpenShift Origin. If you want a specific version of Origin, use the relevant tag from Docker Hub in https://github.com/projectatomic/adb-atomic-developer-bundle/blob/master/components/centos/centos-openshift-setup/Vagrantfile#L9 and then do “vagrant up”.

There are various ways of using the OpenShift setup from ADB/CDK, i.e. the web console or the oc binary/command line. I am going to log in to the Vagrant box and start using the docker registry.

$ vagrant ssh

Get the login credential

$ oc whoami -t
tF8vQU7xBaM4KA4iKgRmjWFlQex1oKJQr8nwAvblczE

Log in to the docker registry

$ docker login -u admin -p tF8vQU7xBaM4KA4iKgRmjWFlQex1oKJQr8nwAvblczE -e abc@redhat.com hub.openshift.centos7-adb.10.1.2.2.xip.io
WARNING: login credentials saved in /home/vagrant/.docker/config.json
Login Succeeded

Pull an image from docker hub and push it to the local registry

$ docker pull fedora
Using default tag: latest
Trying to pull repository docker.io/library/fedora ... latest: Pulling from library/fedora
7891603e1bb1: Pull complete 
6932b0d5be7d: Pull complete 
Digest: sha256:cfd8f071bf8da7a466748f522406f7ae5908d002af1b1a1c0dcf893e183e5b32
Status: Downloaded newer image for docker.io/fedora:latest
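
Before the push below will work, the pulled image has to be tagged with the local registry’s name (sample-project is assumed to already exist as an OpenShift project):

$ docker tag docker.io/fedora:latest hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora:latest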


Push the image to the local docker registry

$ docker push hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora:latest
The push refers to a repository [hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora] (len: 1)
6932b0d5be7d: Pushed 
7891603e1bb1: Pushed

Now you can pull the image from the local docker registry

$ docker pull hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora:latest
Trying to pull repository hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora ... latest: Pulling from sample-project/fedora
Digest: sha256:eb1987b9de75cd307f22ac8ed73cd20dcc39220904b8274eb2b92b55f383da30
Status: Downloaded newer image for hub.openshift.centos7-adb.10.1.2.2.xip.io/sample-project/fedora:latest


Get rid of password prompt for Vagrant commands on Libvirt

If you use Vagrant with the Libvirt/KVM provider, there is a high chance that you are annoyed by the password prompt for every Vagrant command, e.g. vagrant up/ssh/destroy, unless you are running the commands as root.

The issue is that a typical Linux user does not have permission to access the libvirt socket, so we need to grant extra permission. Interestingly, libvirt uses PolicyKit (man polkit) to decide access permissions, and we can add an explicit rule to polkit to give a user the privilege to access libvirt.

To fix the issue you need to create a file in /etc/polkit-1/localauthority/50-local.d as mentioned below.

There are other methods to fix this too; for example, you can create a user group, give that group the privilege and add the user to it (see the sketch after the polkit example below).

# cd  /etc/polkit-1/localauthority/50-local.d
# cat vagrant.pkla 
[Allow <USER_NAME> libvirt management permissions]
Identity=unix-user:<USER_NAME>
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
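
As a rough sketch of the group-based alternative mentioned above (assuming your distribution ships a polkit rule that grants the libvirt group access to libvirt, as recent libvirt packages do), you would add the user to the libvirt group and then log out and back in:

# usermod -a -G libvirt <USER_NAME>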

References
[1] https://niranjanmr.wordpress.com/2013/03/20/auth-libvirt-using-polkit-in-fedora-18/
[2] https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/

A Docker Workshop

At Fudcon Pune 2015 we conducted an introductory Docker workshop. It was well attended and we got positive feedback about it. While preparing for the Linux container track at Fudcon, we decided to put all the documentation on GitHub. The idea was to keep the content open for collaboration so that others can contribute and reuse it. Thanks to Neependra for the idea.

Here is the github link which has the workshop content.

The GitHub project also contains some other material, e.g. Hands on Kubernetes, which you might find useful.

The workshop takes around 3 hours to complete. It is really useful if you are new to Docker and want to learn by doing some hands-on work.

Multi-Container Application Packaging With Nulecule

I got an opportunity to talk about “Project Atomic and multi-container application packaging” at a recent Docker meetup in Bangalore. I have posted my slides to SlideShare [1].

However, I thought I would give some more context and further pointers for the presentation through this blog post.

As mentioned in my slides, Nulecule is a specification for defining multi-container applications, so that we can get rid of the custom shell scripts, long docker run commands, and the manual shuffling of configuration files and deployment instructions to the end user.

Check out the YouTube video about why we need Nulecule: From the Nulecule Nest to an Atomic App

However, just having a specification does not solve the problem. We also need code that does the required work to run a multi-container application from a Nulecule specification, and that is the Atomic App project.

Atomic App performs all the actions needed to run the application by reading the Nulecule spec. Atomic App is used as a docker image.

To run the Atomic App installer for your application, the atomic command line is used.

Here is the workflow if you want to build an Atomic App installer for your application using the Nulecule specification. Please keep in mind that when I say application, I am actually talking about a multi-container application.

Step-1

Write the Nulecule specification files, which also include the manifest files for the required underlying orchestration platform (e.g. Kubernetes, OpenShift, Docker Compose, Apache Mesos Marathon, etc.).

Here is a blog from Ratnadeep about how he created a nulecule-ized application -> http://www.rtnpro.com/nuleculizing-an-docker-image/

Step-2

Create a layered docker image of the application (using the Atomic App docker image as the base image), which will include the Nulecule specification and the Atomic App code.

You can push the new image to your local docker registry or to the public Docker Hub for your use.
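
A rough sketch of what such a Dockerfile can look like (the base image tag and the file layout are assumptions; check the helloapache example for the canonical layout):

FROM projectatomic/atomicapp

MAINTAINER Your Name <you@example.com>

# The Nulecule file and the provider artifacts are what Atomic App reads at run time
ADD Nulecule README.md /application-entity/
ADD /artifacts /application-entity/artifacts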

Step-3

Run the Atomic App image for the application. Here is an example of running the helloapache Atomic App -> https://hub.docker.com/r/projectatomic/helloapache/

Note that there are three ways to run an Atomic App (see the sketch after this list), i.e.

  • Option 1: Non-interactive defaults
  • Option 2: Unattended
  • Option 3: Install and Run
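
For instance, the install-and-run flow for the helloapache example looks roughly like this with the atomic CLI (a sketch; the exact prompts and the target provider depend on your setup):

$ atomic run projectatomic/helloapache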

Here is a YouTube video which shows a demo: WordPress Nulecule Demo

For further reading, check Vasik’s presentation or the Nulecule GitHub project.

Get Involved:

Nulecule Project: https://github.com/projectatomic/nulecule
Atomic App: https://github.com/projectatomic/atomicapp

Check the README files of the above projects for the relevant communication channels for participating in the projects.

[1] http://www.slideshare.net/then4way/project-atomicnulecule
[2] http://www.rtnpro.com/nuleculizing-an-docker-image/
[3] http://www.slideshare.net/VavPavl/nulecule

Cherry pick a PR (pull request) from github

Sometimes you might want to test a pull request (from GitHub) on your local machine by cherry picking it. This usually happens before it gets merged into the upstream repo and released by the project.

I searched the internet but did not find a good reference on how to do it. After a little bit of trial and error I came up with the steps below.

Cherry picking a pull request:

For example, say you want to cherry pick https://github.com/fgrehm/vagrant-cachier/pull/164.
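
GitHub exposes every pull request as a read-only ref, refs/pull/<ID>/head, so one way to do this (a sketch, assuming the PR is based on master and “origin” points at the GitHub repo) is:

$ git fetch origin pull/164/head:pr-164   # fetch the PR into a local branch
$ git cherry-pick master..pr-164          # apply the commits the PR adds on top of your branch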

Cherry picking a commit:
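
To cherry pick an individual commit instead, you can add the contributor’s fork as a remote, fetch it and pick the commit by its SHA (the remote name and URL here are hypothetical):

$ git remote add contributor https://github.com/<user>/<repo>.git
$ git fetch contributor
$ git cherry-pick <commit SHA>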

Using Imagefactory to build Vagrant images

The Fedora Koji build system and the CentOS Community Build System (cbs) use imagefactory at the back end of Koji to build Vagrant images. I have used it through cbs/koji, but wanted to give it a try directly, as I am looking for easier methods to build the adb-atomic-developer-bundle, especially for developers who don’t have access to the Fedora or CentOS build systems.

Imagefactory needs a KVM/libvirt hypervisor to build images, and it can convert them for other providers, e.g. VirtualBox or VMware Fusion.

Setup:

I have used my laptop (which runs Fedora 23) for this. As I plan to hack on imagefactory and did not want to damage my laptop’s KVM setup, I used nested virtualization, which means I have a CentOS 7 VM that can itself run virtual machines.

All the steps below are done on a CentOS 7 VM which has a KVM setup in place.

Installation:

Imagefactory is available in the Fedora and EPEL repos, but I wanted to try/test the latest code, so I generated RPMs from the latest code and then installed them.

$ yum install rpmdevtools epel-release
$ git clone https://github.com/redhat-imaging/imagefactory.git
$ cd imagefactory
$ make rpm
$ cd imagefactory_plugins
$ make rpm
$ cd ~/rpmbuild/RPMS/noarch/
$ sudo yum localinstall ./*

Building Vagrant Images:

For building the Vagrant box I have used Ian’s example git repo. He is the maintainer and one of the primary developers of imagefactory.

The commands below are copied from the imagefactory-examples git repo.

$ git clone https://github.com/imcleod/imagefactory-examples.git
$ cd imagefactory-examples/vagrant/

Once you are in the “imagefactory-examples/vagrant/” directory, you can see that the files required to generate a Fedora 22 image are already there, so we can start running the commands.

To get a working Vagrant box we need to run three commands (as shown below) to create the appropriate OVA image. Each command prints a UUID for the intermediate image it produces, and we need to use that UUID in the next command.

$ sudo imagefactory --debug base_image \
  --file-parameter install_script ./f22-vagrant.ks \
  --parameter offline_icicle true \
  ./f22-minimal-40g.tdl
Output:
xxxxxxxxxxxxxxxxxxxxxxxxxxx
============ Final Image Details ============
UUID: 109cb45f-bbd2-4a27-ba5f-42e2d368be32
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --id 109cb45f-bbd2-4a27-ba5f-42e2d368be32  rhevm

Output:
============ Final Image Details ============
UUID: ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --parameter rhevm_ova_format vagrant-libvirt --id ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc ova

Output:
============ Final Image Details ============
UUID: 36fcb589-06b8-447b-85bf-ed4715bd2a93
Type: target_image
Image filename: /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body
Image build completed SUCCESSFULLY!

The last step generates the F22 image for the libvirt provider. You can rename it to f22.libvirt.box (Vagrant images usually have a .box extension) and start using it.

$ cp /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body ./f22.libvirt.box
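
From there the box can be added to Vagrant and booted as usual (a sketch, assuming the vagrant-libvirt plugin is installed):

$ vagrant box add f22 ./f22.libvirt.box
$ vagrant init f22
$ vagrant up --provider=libvirt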

[1] http://imgfac.org/
[2] https://github.com/redhat-imaging/imagefactory
[3] https://lalatendumohanty.wordpress.com/2015/11/01/kvm-nested-virtualization-in-fedora-23/
[4] https://lalatendumohanty.wordpress.com/2015/05/28/installing-vagrant-in-centos7/

KVM Nested Virtualization In Fedora 23

Nested virtualization allows you to run a virtual machine (VM) inside another VM [1]. Both Intel and AMD support nested virtualization.

This is very helpful when you are experimenting with hypervisor-related technologies. For example, I can run both KVM and VirtualBox on my laptop, but in different VMs. I can also run a local installation of imagefactory in a VM to build Vagrant images, as imagefactory needs a hypervisor to run the build. The best part is that I can experiment with all of these inside different VMs without damaging my primary workstation’s hypervisor.

The steps below are done on Fedora 23 running on a Lenovo ThinkPad with an Intel chipset.

Step 1: Make sure Intel virtualization (VT) is enabled for the host machine.

$ cat /proc/cpuinfo | grep vmx

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
(the same flags line is printed once per CPU thread; repeated lines omitted)

The output should contain vmx; otherwise Intel virtualization (VT) is not enabled on the machine and you should first fix the setting in the BIOS.

Step 2: Install KVM on the F23 host.

$ dnf install @virtualization

Nested virtualization should be disabled by default:

$ cat /sys/module/kvm_intel/parameters/nested
 N

Step 3: Enable nested virtualization. Run the commands below as root (or with sudo, as shown).

  • Temporarily remove the kvm kernel module
      $ sudo rmmod kvm-intel
  • Add the following directive to /etc/modprobe.d/dist.conf
    $ sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
  • Insert the kvm module back in the kernel
     $ sudo modprobe kvm-intel

There is an alternative way to do the same thing, i.e. pass kvm-intel.nested=1 on the kernel command line [3].

Step 4: Reboot and verify that nested virtualization is enabled

  • Check that nested virt is enabled
$ sudo cat /sys/module/kvm_intel/parameters/nested
 Y

Step 5: Install the beefy VM (let’s call it the parent VM).

  • I used the CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the VM through Virtual Machine Manager.
  • Parent VM configuration: 50GB disk, 4GB RAM and 4 vCPUs

Step 6: Enable the VM to use nested virt

  • Go to -> Virtual Machine Manager GUI -> CPU properties -> select “Copy host CPU configuration”

There is also another option, i.e. host-passthrough [1]. It is supposed to be more stable than “Copy host CPU configuration”, but I have not tried it yet.
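
If you want to try host-passthrough, a sketch of one way to set it is to edit the parent VM’s libvirt XML with “virsh edit <VM name>” and replace the <cpu> element with:

<cpu mode='host-passthrough'/>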

Step 7:  Check that Intel virtualization (VT) is enabled in the VM

$ cat /proc/cpuinfo | grep vmx

Step 8: Install KVM inside the VM  [4]

$ yum install qemu-kvm qemu-img
$ yum install libvirt libvirt-python python-virtinst

$ systemctl enable libvirtd
$ systemctl start libvirtd
$ systemctl status libvirtd

Step 9:  Install the child VM inside the parent VM

  • I used Virtual Machine Manager to connect to the parent VM and then installed the child VM.
  • I used the same CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the child VM.

[1] https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM

[2] http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html

[3] http://kashyapc.com/2012/01/14/nested-virtualization-with-kvm-intel/

[4] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Guest_Installation.html