Debugging with git bisect

This is a post appreciating “git bisect” and how it can be one of the most powerful tools for finding the root cause of a broken build or a broken branch.

Here is a simple example of how “git bisect” can be used to find a bad commit.

Let's assume that we have a git repository with hundreds of commits, and that the HEAD of the master branch is currently broken, i.e. has a bug. Our objective is to find out which commit introduced the bug into the code base.

Before starting the git bisect process we need to know a couple of things. First, we need to know a good commit, i.e. an old commit at which the code worked as expected. This is not very difficult to find, as it is most likely the last release of the code. We also need to know the steps to test the code and reproduce the issue. This will help us decide whether a given commit is good or bad during the git bisect process.

Git bisect uses a binary search between the good and bad commits to find the commit that introduced the bug.

Here are the commands to start the git bisect work flow. Let's call the current HEAD commit the “original HEAD”.

$ git bisect start

$ git bisect bad

$ git bisect good <commit ID>

Bisecting: 130 revisions left to test after this (roughly 4 steps)

Once the above commands are executed, git bisect will move HEAD to a commit in the middle between the “original HEAD” and the good commit. Read about binary search if you want to know how it decides which commit HEAD should be moved to.

At this point we are expected to test the code and check whether we can reproduce the issue. After testing, we need to tell git bisect whether this is a bad commit, i.e. we are able to reproduce the issue, or a good one (see below).

$ git bisect bad
Bisecting: 65 revisions left to test after this (roughly 3 steps)


$ git bisect good
Bisecting: 65 revisions left to test after this (roughly 3 steps)

We need to continue this process a few times, and git bisect will give us the commit which introduced the issue/bug.
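If your test can be scripted, the whole workflow can be automated with “git bisect run”. Here is a minimal sketch; ./test.sh is a hypothetical test script that exits with 0 for a good commit and non-zero for a bad one:

$ git bisect start
$ git bisect bad                 # current HEAD is broken
$ git bisect good <commit ID>    # last known good commit
$ git bisect run ./test.sh       # git runs the script at each step and bisects automatically
$ git bisect reset               # return to the “original HEAD” once done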

In my experience, I always get to the commit which introduced the issue within 4 to 5 bisect steps, which I think is an awesome thing.

So go ahead and try git bisect if you have not tried it yet, and do not forget to use it when you hit broken builds.




A Docker Workshop

At Fudcon Pune 2015 we conducted an introductory Docker workshop. It was well attended and we got positive feedback about it. While preparing for the Linux container track at Fudcon, we decided to put all the documentation on GitHub. The idea was to keep the content open for collaboration so that others can contribute to and reuse the content. Thanks to Neependra for the idea.

Here is the GitHub link which has the workshop content.

The GitHub project also contains some other useful material, e.g. a hands-on Kubernetes guide, which you might find useful.

This workshop takes around 3 hours to complete. It is really useful if you are new to Docker and want to learn by doing some hands-on exercises.

Cherry pick a PR (pull request) from GitHub

Sometimes you might want to test a pull request (from GitHub) on your local machine by cherry picking it. This usually happens before it gets merged in the upstream repo and released by the project.

I searched the internet but did not find a good reference on how to do it. After a little bit of trial and error I came up with the steps below.

Cherry picking a pull request:

For example, suppose you want to cherry pick a particular pull request from the upstream repository.
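A minimal sketch of the steps, assuming a hypothetical pull request number 123 on a remote named “upstream”. GitHub exposes every pull request under the ref pull/<ID>/head, which we can fetch and cherry pick from:

$ git fetch upstream pull/123/head    # 123 is a hypothetical PR number
$ git cherry-pick FETCH_HEAD          # picks the PR's head commit

For a multi-commit pull request you can cherry pick a range instead, e.g. “git cherry-pick FETCH_HEAD~2..FETCH_HEAD” for the last two commits.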

Cherry picking a commit:
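A sketch for this case as well, assuming a hypothetical fork URL and commit ID:

$ git remote add contributor https://github.com/someuser/somerepo.git    # hypothetical fork
$ git fetch contributor
$ git cherry-pick <commit ID>    # the commit you want to test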

vagrant-cachier in Fedora 23 with KVM Libvirt

vagrant-cachier is a very useful plugin for Vagrant users. It helps reduce the time spent and the amount of packages downloaded from the internet between each “vagrant destroy”.

For example, say you are using a CentOS 7 image in your Vagrant setup and want to update it with the latest packages every time you start working in the guest. The usual work flow is “vagrant up” -> “vagrant ssh” -> “sudo yum update -y” -> “Do your stuff” -> “vagrant destroy”. But the amount of packages downloaded during the yum update, and the time it consumes, is undesirable.

vagrant-cachier keeps the downloaded packages on the file system of the host machine and uses them as a cache for the guest. The yum update in the guest gets the packages from the cache, so the time taken and the internet usage are drastically reduced. Which is really cool!

I tried to install vagrant-cachier on my Fedora 23 laptop with KVM and libvirt and ran into the issue below.


[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Bundler, the underlying system Vagrant uses to install plugins,
reported an error. The error is shown below. These errors are usually
caused by misconfigured plugin installations or transient network
issues. The error from Bundler is:

An error occurred while installing ruby-libvirt (0.5.2), and Bundler cannot continue.
Make sure that `gem install ruby-libvirt -v '0.5.2'` succeeds before bundling.

Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

/usr/bin/ruby -r ./siteconf20151027-20676-13hfub7.rb extconf.rb
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

extconf.rb:73:in `<main>': libvirt library not found in default locations (RuntimeError)

extconf failed, exit code 1

Gem files will remain installed in /root/.vagrant.d/gems/gems/ruby-libvirt-0.5.2 for inspection.
Results logged to /root/.vagrant.d/gems/extensions/x86_64-linux/ruby-libvirt-0.5.2/gem_make.out

After installing the “libvirt-devel” package, the issue was resolved.

[root@dhcp35-203 ~]# dnf install libvirt-devel

[root@dhcp35-203 ~]# vagrant plugin install vagrant-cachier
Installing the 'vagrant-cachier' plugin. This can take a few minutes...
Installed the plugin 'vagrant-cachier (1.2.1)'!

However, the subsequent “vagrant up” also failed. Here is the setup that led to it.

$ vagrant init centos/7

Then we need to modify the Vagrantfile, as vagrant-cachier by default uses NFS to mount the host filesystem into the guest.

$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "centos/7"
  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box

    config.cache.synced_folder_opts = {
      type: :nfs,
      mount_options: ['rw', 'vers=3', 'tcp', 'nolock']
    }
  end
end

The next step was:

$ vagrant up
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

mount -o 'rw,vers=3,tcp,nolock' '/home/lmohanty/.vagrant.d/cache/fedora/23-cloud-base' /tmp/vagrant-cache

Stdout from the command:

Stderr from the command:

mount.nfs: Connection timed out

After a little troubleshooting it turned out to be a firewall, i.e. iptables, issue: iptables was blocking the host's NFS service. As a temporary workaround I removed all the iptables rules from the host.

$ iptables -F
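Flushing all the rules is a heavy-handed workaround. A less drastic sketch, assuming NFSv3 with rpcbind on its default ports (mountd may use a dynamic port unless it is pinned), would be to allow just the NFS-related traffic:

$ iptables -I INPUT -p tcp --dport 111 -j ACCEPT    # rpcbind
$ iptables -I INPUT -p udp --dport 111 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 2049 -j ACCEPT   # nfs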

After that, “vagrant up” worked fine and I could see the changes vagrant-cachier made in the guest to make the caching work.

Here are the things vagrant-cachier does to make the caching work.

  • Mounts ~/.vagrant.d/cache/<guest-name> from the host in the guest at /tmp/vagrant-cache/
  • In the guest:
    • It enables yum caching, i.e. sed -i 's/keepcache=0/keepcache=1/g' /etc/yum.conf
    • It symlinks /var/cache/yum to /tmp/vagrant-cache/yum
[vagrant@localhost ~]$ ls -l /var/cache
total 8
drwx------. 2 root root 4096 Nov 15 00:08 ldconfig
drwxr-xr-x. 2 root root 4096 Jun  9  2014 man
lrwxrwxrwx. 1 root root   22 Nov 15 00:06 yum -> /tmp/vagrant-cache/yum

vagrant-cachier works fine with CentOS 7 guests. However, I found an issue with Fedora 23 guests, as the default package manager there is dnf instead of yum. I have filed an issue with vagrant-cachier and am also working on a fix.

Bangalore CentOS Dojo, 2014

The first CentOS Dojo in India took place in Bangalore on 15th November (Saturday) 2014 at the Red Hat Bangalore office. Red Hat sponsored the event.

I was a co-organizer of the Dojo along with Dominic and Karanbir Singh. Around 90 people RSVPed for the event, but around 40 (mostly system administrators and new users) attended.

The first talk was by Aditya Patawari on “An introduction to Docker and Project Atomic”. The talk included a demo and introduced the audience to Docker and the Atomic host. Most of the attendees had questions on Docker, as they had used it or heard about it. There were some questions about the differences between CoreOS and Project Atomic. The slides are available online. Overall, this talk gave a fair idea about Docker and the Atomic project.

The second talk was “Be Secure with SELinux Gyan” by Rejy M Cyriac. This session was about troubleshooting SELinux issues and an introduction to creating custom SELinux policy modules. Rejy made the talk interesting by distributing SELinux stickers to attendees who asked interesting questions or answered questions. Slides can be found here.

After these two talks we took a lunch break for around an hour. During the lunch break we distributed the CentOS t-shirts and got a chance to socialize with the attendees.

The first session post lunch was “Scale out storage on CentOS using GlusterFS” by Raghavendra Talur. The talk introduced the audience to GlusterFS and its important high-level concepts, and a demo was shown using packages from the CentOS Storage SIG. Slides can be found at slideshare.

The next session was “Network Debugging” by Jijesh Kalliyat. This talk covered almost all the basic concepts, fundamentals and network diagnostic tools required to troubleshoot a network issue. It also included a demo of using Wireshark and tcpdump to debug network issues. Slides are available here.

Before the next talk, we took a short break and clicked some group pictures of everyone present at the Dojo.

The last session was on “Systemd on CentOS” by Saifi Khan. The talk covered a lot of areas, e.g. a comparison between SysVinit and systemd, concurrency at scale, how systemd is more scalable than other available init systems, some similarities in design principles with CoreOS, and how it is better suited for Linux container technology. Saifi also talked about how systemd had saved his system from becoming unusable. His liking for systemd was quite evident from his talk and enthusiasm.

Overall it was an awesome experience participating in the Dojo, as it covered a wide variety of topics which are important for deploying CentOS for various purposes.

Bangalore Dojo link:

Group Photo. You can see happy faces there 🙂


Bangalore Dojo, 2014

GlusterFS VFS plugin for Samba

Here are the topics this blog is going to cover.

  • Samba Server
  • Samba VFS
  • Libgfapi
  • GlusterFS VFS plugin for Samba and libgfapi
  • Without GlusterFS VFS plugin
  • FUSE mount vs VFS plugin

About Samba Server:

Samba server runs on Unix and GNU/Linux operating systems. Windows clients can talk to Linux/GNU/Unix systems through the Samba server, which provides interoperability between Windows and Linux/Unix systems. Initially it was created to provide printer and file sharing between Unix/Linux and Windows. As of now, the Samba project does much more than just file and printer sharing.

Samba server works as a semantic translation engine: Windows clients talk in Windows syntax, e.g. the SMB protocol, while Unix/Linux/GNU file-systems understand requests in POSIX. Samba converts Windows syntax to *nix/GNU syntax and vice versa.

This article is about Samba integration with GlusterFS. For the specific details, I have taken the example of GlusterFS deployed on GNU/Linux.

If you have never heard of the Samba project before, you should read more about it before going further into this blog.

Here are some important links/pointers for further study:

  1. what is Samba?
  2. Samba introduction

Samba VFS:

Samba code is very modular in nature. The Samba VFS code is divided into two parts, i.e. the Samba VFS layer and the VFS modules.

The purpose of the Samba VFS layer is to act as an interface between the Samba server and the layers below it. When the Samba server gets requests from Windows clients through the SMB protocol, it passes them to the Samba VFS modules.

A Samba VFS module, i.e. a plugin, is a shared library (.so) which implements some or all of the functions that the Samba VFS layer, i.e. the interface, makes available. Samba VFS modules can be stacked on each other (if they are designed to be stacked).

For more about the Samba VFS layer, please refer to the Samba developer documentation.

The Samba VFS layer passes the request to the VFS modules. If the Samba share is of a native Linux/Unix file-system, the call goes to the default VFS module, which forwards the call to the system layer, i.e. the operating system. For a user-space file-system like GlusterFS, the VFS layer calls are implemented through a VFS module, i.e. the VFS plugin for GlusterFS. The plugin redirects the requests (i.e. fops) to the GlusterFS APIs, i.e. libgfapi, implementing or mapping all VFS layer calls using libgfapi.
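As a small sketch of how a share opts into a VFS module, the “vfs objects” parameter in smb.conf selects the plugin for that share (the share name here is hypothetical; a full working configuration appears in the next post):

[gluster-share]
path = /
vfs objects = glusterfs
glusterfs:volume = testvol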


libgfapi (i.e. the GlusterFS API) is a set of APIs which can talk directly to GlusterFS. Libgfapi is another access method for GlusterFS, alongside NFS, SMB and FUSE. Libgfapi bindings are available for C, Python, Go and more programming languages. Applications can be developed which use GlusterFS directly, without a GlusterFS volume mount.

 GlusterFS VFS plugin for Samba and libgfapi:

Here is the schematic diagram of how communication works between different layers.


Samba Server:  This represents Samba Server and Samba VFS layer

VFS plugin for GlusterFS: This implements or maps relevant VFS layer fops to libgfapi calls.

glusterd: Management daemon of Glusterfs node i.e. server.

glusterfsd: Brick process of Glusterfs node i.e. server.

The client requests come to the Samba server, and the Samba server redirects the calls to GlusterFS's VFS plugin through the Samba VFS layer. The VFS plugin calls the relevant libgfapi functions. Libgfapi acts as a client: it contacts glusterd for the volfile information (i.e. information about the gluster volume, translators and involved nodes), then forwards the requests to the appropriate glusterfsd, i.e. brick processes, where the requests actually get serviced.

If you want to know the specifics of the setup for sharing a GlusterFS volume through the Samba VFS plugin, refer to the link below.

Without GlusterFS VFS plugin: 

Without the GlusterFS VFS plugin, we can still share a GlusterFS volume through the Samba server. This can be done through the native glusterfs mount, i.e. FUSE (file system in user space). We need to mount the volume using FUSE, i.e. the glusterfs native mount, on the same machine where the Samba server is running, and then share the mount point using the Samba server. As we are not using the VFS plugin for GlusterFS here, Samba treats the mounted GlusterFS volume as a native file-system: the default VFS module is used and the file-system calls are sent to the operating system. The flow is the same as for any native file system shared through Samba.
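A short sketch of this approach, assuming a volume named testvol on server1 and /mnt/glusterfs as the mount point:

$ mount -t glusterfs server1:/testvol /mnt/glusterfs

# then share the mount point through smb.conf with the default VFS module:
# [testvol-fuse]
#   path = /mnt/glusterfs
#   read only = No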

FUSE mount vs VFS plugin:

If you are not familiar with file systems in user space, please read about FUSE, i.e. file system in user space, first.

For FUSE mounts, file system fops from the Samba server go to the user-space FUSE mount point -> kernel VFS -> /dev/fuse -> GlusterFS, and come back along the same path. Refer to the diagrams below for details. Consider the Samba server as an application which runs on the FUSE mount point.


Fuse mount architecture for GlusterFS

You can observe that process context switches happen between user and kernel space in the above architecture. This is a key differentiating factor when compared with the libgfapi-based VFS plugin.

For the Samba VFS plugin implementation, see the diagram below. With the plugin, Samba calls get converted to libgfapi calls, and libgfapi forwards the requests to GlusterFS.


Libgfapi architecture for GlusterFS

The above pictures are copied from this presentation:

Advantages of the libgfapi-based Samba plugin vs the FUSE mount:

  • With libgfapi, there are no kernel VFS layer context switches. This results in performance benefits compared to the FUSE mount.
  • With a separate Samba VFS module, i.e. plugin, features (e.g. more NTFS functionality) which native Linux file systems do not support can be provided in GlusterFS and supported through Samba.




Using GlusterFS With GlusterFS Samba vfs plugin on Fedora

This blog covers the steps and implementation details for using the GlusterFS Samba VFS plugin.

Please refer to the link below if you are looking for architectural information on the GlusterFS Samba VFS plugin and the difference between a FUSE mount and the Samba VFS plugin.

I have set up a two-node GlusterFS cluster with Fedora 20 (minimal install) VMs. Each VM has 3 separate XFS partitions of 100GB each.
One of the Gluster nodes is used as the Samba server in this setup.

I had originally tested this with Fedora 20, but the example should work fine with later Fedora releases, i.e. F21 and F22.

GlusterFS Version: glusterfs-3.4.2-1.fc20.x86_64

Samba version:  samba-4.1.3-2.fc20.x86_64

Post installation, the “df -h” output looked like below in the VMs:
$df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/fedora_dhcp159--242-root   50G  2.2G   45G   5% /
devtmpfs                              2.0G     0  2.0G   0% /dev
tmpfs                                 2.0G     0  2.0G   0% /dev/shm
tmpfs                                 2.0G  432K  2.0G   1% /run
tmpfs                                 2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs                                 2.0G     0  2.0G   0% /tmp
/dev/vda1                             477M  103M  345M  23% /boot
/dev/mapper/fedora_dhcp159--242-home   45G   52M   43G   1% /home
/dev/mapper/gluster_vg1-gluster_lv1           100G  539M  100G   1% /gluster/brick1
/dev/mapper/gluster_vg2-gluster_lv2           100G  406M  100G   1% /gluster/brick2
/dev/mapper/gluster_vg3-gluster_lv3           100G   33M  100G   1% /gluster/brick3

You can use the following commands to create the XFS partitions:
1. pvcreate /dev/vdb
2. vgcreate VG_NAME /dev/vdb
3. lvcreate -n LV_NAME -l 100%PVS VG_NAME /dev/vdb
4. mkfs.xfs -i size=512 LV_PATH
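The logical volume also needs to be mounted before it can be used as a brick. A sketch of the remaining steps, using the first brick's device and mount point from the “df -h” output above:
5. mkdir -p /gluster/brick1
6. mount /dev/mapper/gluster_vg1-gluster_lv1 /gluster/brick1
Add a matching /etc/fstab entry so the brick mount persists across reboots.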

Following are the steps to be performed and the packages to be installed on each node (Fedora 20 in my case).

#Change SELinux to either “permissive” or “disabled” mode

# To put SELinux in permissive mode
$setenforce 0

#To see the current mode of SELinux


SELinux policy rules for Gluster are present in recent Fedora releases, e.g. F21, F22 or later. So SELinux should work fine with Gluster there.

#Remove all iptables rules, so that they do not interfere with Gluster

$iptables -F

yum install glusterfs-server
yum install samba-vfs-glusterfs
yum install samba-client

#samba-vfs-glusterfs RPMs for CentOS, RHEL and Fedora 19/18 are available online

#To start glusterd and auto start it after boot
$systemctl start glusterd
$systemctl enable glusterd
$systemctl status glusterd

#To start smb and auto start it after boot
$systemctl start smb
$systemctl enable smb
$systemctl status smb

#Create a gluster volume and start it. (Run the below commands from Server1_IP)

$gluster peer probe Server2_IP
$gluster peer status
Number of Peers: 1

Hostname: Server2_IP
Port: 24007
Uuid: aa6f71d9-0dfe-4261-a2cd-5f281632aaeb
State: Peer in Cluster (Connected)
$gluster v create testvol Server2_IP:/gluster/brick1/testvol-b1 Server1_IP:/gluster/brick1/testvol-b2
$gluster v start testvol
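To verify that the volume was created and started correctly, a quick check (sketch):

$gluster volume info testvol
$gluster volume status testvol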

#Modify smb.conf for Samba share

$vi /etc/samba/smb.conf

[testvol]
comment = For samba share of volume testvol
path = /
read only = No
guest ok = Yes
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 7
glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
glusterfs:volume = testvol

#For debug logs you can change the log level to 10, e.g. “glusterfs:loglevel = 10”

# Do not miss “kernel share modes = No”, else you won’t be able to write anything into the share

#Verify that your changes are correctly understood by Samba
$testparm -s
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section “[homes]”
Processing section “[printers]”
Processing section “[testvol]”
Loaded services file OK.
[global]
workgroup = MYGROUP
server string = Samba Server Version %v
log file = /var/log/samba/log.%m
max log size = 50
idmap config * : backend = tdb
cups options = raw

[homes]
comment = Home Directories
read only = No
browseable = No

[printers]
comment = All Printers
path = /var/spool/samba
printable = Yes
print ok = Yes
browseable = No

[testvol]
comment = For samba share of volume testvol
path = /
read only = No
guest ok = Yes
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 10
glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
glusterfs:volume = testvol

#Restart the Samba service. This is not a compulsory step, as Samba picks up the latest smb.conf for new connections. But to make sure it uses the latest smb.conf, restart the service.
$systemctl restart smb

#Set smbpasswd for root. This will be used for mounting the volume/Samba share on the client
$smbpasswd -a root

#Mount the cifs share using the following command and it is ready for use 🙂
mount -t cifs -o username=root,password=<smbpassword> //Server1_IP/testvol /mnt/cifs
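The /mnt/cifs mount point must exist beforehand (e.g. created with “mkdir -p /mnt/cifs”). A quick sketch to confirm the share is writable, since a missing “kernel share modes = No” shows up as write failures here:

$touch /mnt/cifs/testfile
$ls -l /mnt/cifs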

GlusterFS volume tuning for volumes shared through Samba:

  • The Gluster volume needs: “gluster volume set volname server.allow-insecure on”
  • In /etc/glusterfs/glusterd.vol of each gluster node,
    add “option rpc-auth-allow-insecure on”
  • Restart glusterd on each node (the commands are sketched below).
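Putting the tuning steps together as commands (a sketch, using the volume name testvol from this post):

$gluster volume set testvol server.allow-insecure on

# on each gluster node, add the following line to /etc/glusterfs/glusterd.vol
# option rpc-auth-allow-insecure on

$systemctl restart glusterd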

For setups where Samba server and Gluster nodes need to be on different machines:

# Put “glusterfs:volfile_server = <server name/ip>” in the smb.conf settings for the specific volume


[testvol]
comment = For samba share of volume testvol
path = /
read only = No
guest ok = Yes
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 7
glusterfs:logfile = /var/log/samba/glusterfs-testvol.log

glusterfs:volfile_server = <server name/ip>
glusterfs:volume = testvol

#Here are the packages that were installed on the nodes

[root@dhcp159-242 ~]# rpm -qa | grep gluster

[root@dhcp159-242 ~]# rpm -qa | grep samba

Note: The same smb.conf entries should work with CentOS6 too.