Debugging with git bisect

This is a post appreciating “git bisect” and how it can be one of the most powerful tools for finding the root cause of a broken build or a broken branch.

Here is a simple example of how “git bisect” can be used to find a bad commit.

Let’s assume that we have a git repository with hundreds of commits, and the HEAD of the master branch is currently broken. Our objective is to find out which commit introduced the bug into the code base.

Before starting the git bisect process we need to know a couple of things. First, we need to know a good commit, i.e. an old commit at which the code worked as expected. This is usually not difficult to find, as it is most likely the last release of the code. We also need to know the steps to test the code and reproduce the issue, so that we can decide whether a given commit is good or bad during the git bisect process.

Git bisect runs a binary search between the good and the bad commit to find the commit that introduced the bug.

Here are the commands to start the git bisect workflow. Let’s call the current HEAD commit the “original HEAD”.

$ git bisect start

$ git bisect bad

$ git bisect good <commit ID>

Bisecting: 130 revisions left to test after this (roughly 4 steps)

Once the above commands are executed, git bisect will move HEAD to the commit in the middle of the range between the “original HEAD” and the good commit. Read about binary search if you want to know how it decides which commit HEAD should be moved to.

At this point we are expected to test the code and check whether we can reproduce the issue. After testing, we need to tell git bisect whether this is a bad commit (see below), i.e. we are able to reproduce the issue, or a good commit.

$ git bisect bad
Bisecting: 65 revisions left to test after this (roughly 3 steps)

Or

$ git bisect good
Bisecting: 65 revisions left to test after this (roughly 3 steps)

We need to continue this process a few times, and git bisect will finally point to the commit which introduced the issue/bug.
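
As a side note, if the test can be scripted, git bisect can automate the whole search with “git bisect run”. Below is a hypothetical test script (make and ./run-failing-test are placeholders for your own build and test steps); it must exit with 0 for a good commit and a non-zero code for a bad one:

$ cat test.sh
#!/bin/bash
# Build the code and run the failing test case.
# Exit code 0 = good commit, non-zero = bad commit.
make && ./run-failing-test
$ git bisect run ./test.sh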

In my experience, I always get to the commit which introduced the issue within 4 to 5 steps of git bisect, which I think is awesome.

So go ahead and try git bisect if you have not tried it yet, and do not forget to use it when you hit broken builds.

Creating a live media image

If you are a GNU/Linux user, there is a high chance that you have used live media images, i.e. a live CD, live DVD or ISO. I was curious about how live images are created and recently got a chance to do some hands-on work.

There are multiple tools available for creating live media. However, I am going to use livecd-tools to create a live ISO. livecd-tools needs kickstart files to create the images, and we will use the CentOS upstream kickstart files [1].

As a first step, install livecd-tools [2]. I am using a CentOS Vagrant box to create live ISOs.

$ sudo yum install livecd-tools git -y
$ git clone https://github.com/CentOS/sig-core-livemedia
$ cd sig-core-livemedia/kickstarts

# You can use any one of the .cfg files in sig-core-livemedia/kickstarts
$ livecd-creator --config <Kickstart file>

If you are new to kickstart files and want to know more about them, refer to the documentation: https://github.com/rhinstaller/pykickstart/blob/master/docs/kickstart-docs.rst
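
To give a rough idea, a minimal kickstart file might contain directives like the hypothetical sketch below; the real .cfg files in the repository are far more complete:

lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
part / --size 4096 --fstype ext4

%packages
@core
kernel
%end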

[1] https://github.com/CentOS/sig-core-livemedia

[2] https://github.com/rhinstaller/livecd-tools

Running a System V init script in CentOS 7

RHEL 7 and CentOS 7 moved from the System V init facility to systemd. So if you want to run a script at boot, it is highly recommended that you write a systemd unit file. However, to ease the transition to systemd, CentOS 7/RHEL 7 offers a backward compatibility mode for init scripts.
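
For reference, a minimal systemd unit for a script like /root/hello.sh could be sketched as below (the unit and script names are just examples); it would live at /etc/systemd/system/hello.service and be enabled with systemctl enable hello.service:

[Unit]
Description=Run hello.sh once at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/root/hello.sh

[Install]
WantedBy=multi-user.target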

In a System V init system, you add an entry for the script to /etc/rc.local (or equivalent) and the script is executed during boot at the end of the multi-user run level.

In CentOS7 /etc/rc.local is a symbolic link to /etc/rc.d/rc.local.

[root@localhost ~]# ls -l /etc/rc.local
lrwxrwxrwx. 1 root root 13 Apr  4  2016 /etc/rc.local -> rc.d/rc.local

To enable the init compatibility mode, you need to make /etc/rc.d/rc.local executable and then add the script to /etc/rc.local.

Here are the steps you need to perform to start a script, e.g. hello.sh, during boot.

$ chmod +x /etc/rc.d/rc.local

# Make the script executable
$ chmod +x hello.sh

# Make an entry in /etc/rc.local
$ cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/root/hello.sh

Reference: https://www.centos.org/forums/viewtopic.php?t=48140

Get rid of password prompts for Vagrant commands on libvirt

If you use Vagrant with KVM/libvirt, there is a high chance that you are annoyed by the password prompts for every Vagrant command, e.g. vagrant up/ssh/destroy, unless you are running the commands as root.

The issue is that a typical Linux user does not have permission to access the libvirt socket, so we need to grant extra permission to access it. Interestingly, libvirt uses PolicyKit (man polkit) to decide access permissions, and we can add an explicit rule to polkit to give a user the privilege to access libvirt.

To fix the issue, you need to create a file in /etc/polkit-1/localauthority/50-local.d as mentioned below.

There are other methods to fix this issue too. For example, you can create a user group, give the group the privilege, and add the user to the group (see the sketch after the example below).

# cd /etc/polkit-1/localauthority/50-local.d
# cat vagrant.pkla 
[Allow <USER_NAME> libvirt management permissions]
Identity=unix-user:<USER_NAME>
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
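
And here is a rough sketch of the group-based alternative mentioned above, assuming you call the group libvirt; the pkla rule then grants the privilege to the whole group instead of one user:

# groupadd libvirt
# usermod -a -G libvirt <USER_NAME>
# cat /etc/polkit-1/localauthority/50-local.d/vagrant.pkla
[Allow users in the libvirt group to manage libvirt]
Identity=unix-group:libvirt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes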

References
[1] https://niranjanmr.wordpress.com/2013/03/20/auth-libvirt-using-polkit-in-fedora-18/
[2] https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/

A Docker Workshop

At FUDCon Pune 2015 we conducted an introductory Docker workshop. It was well attended and we got positive feedback about it. While preparing for the Linux container track in FUDCon, we decided to put all the documentation on GitHub. The idea was to keep the content open for collaboration so that others can contribute to and reuse it. Thanks to Neependra for the idea.

Here is the GitHub link which has the workshop content.

The GitHub project also contains some other useful material, e.g. a hands-on Kubernetes guide, which you might find useful.

The workshop takes around 3 hours to complete. It is really useful if you are new to Docker and want to learn by doing some hands-on work.

Multi-Container Application Packaging With Nulecule

I got an opportunity to talk about “Project Atomic and multi-container application packaging” at a recent Docker meetup in Bangalore. I have posted my slides to SlideShare [1].

However I thought of giving more context and further pointers to the presentation through this blog.

As mentioned in my slides, Nulecule is a specification to define multi-container applications, so that we can get rid of the custom shell scripts, the long docker run commands, the shuffling of required configuration files, and the instructions to the end user about how to deploy the application.
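
To give a feel for the specification, here is an abridged, illustrative sketch loosely modeled on the helloapache example from the Nulecule repository; the exact field names and file paths here are assumptions, so treat the spec and the upstream examples as authoritative:

---
specversion: 0.0.2
id: helloapache-app

metadata:
  name: Hello Apache App

graph:
  - name: helloapache-app
    params:
      - name: image
        description: The web server image to use
        default: centos/httpd
    artifacts:
      # One set of deployment artifacts per supported provider
      kubernetes:
        - file://artifacts/kubernetes/hello-apache-pod_run
      docker:
        - file://artifacts/docker/hello-apache-pod_run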

Check out the YouTube video about why we need Nulecule: From the Nulecule Nest to an Atomic App

However, just having a specification will not solve the problem. We need code that does the required work to run a multi-container application from the Nulecule specification, and that is the Atomic App project.

Atomic App performs all the actions needed to run the application by reading the Nulecule spec. Atomic App is used as a docker image.

To run the Atomic App installer for your application, the atomic command line is used.
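
For instance, based on the helloapache image linked in Step 3 below, running an installer is assumed to boil down to a single command:

$ atomic run projectatomic/helloapache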

Here is the workflow if you want to build an Atomic App installer for your application using the Nulecule specification. Please keep in mind that when I say application, I am actually talking about a multi-container application.

Step-1

Write the Nulecule specification files, which also include the manifest files for the required underlying orchestration platform (e.g. Kubernetes, OpenShift, Docker Compose, Apache Mesos Marathon, etc.).

Here is a blog from Ratnadeep about how he created a nulecule-ized application: http://www.rtnpro.com/nuleculizing-an-docker-image/

Step-2

Create a layered docker image of the application (using the Atomic App docker image as the base image), which will include the Nulecule specification and the Atomic App code.

You can push the new image to your local docker registry or to the public Docker Hub for your use.

Step-3

Run the Atomic App image for the application. Here is an example of running the helloapache Atomic App: https://hub.docker.com/r/projectatomic/helloapache/

Note that there are three ways to run an Atomic App, i.e.

  • Option 1: Non-interactive defaults
  • Option 2: Unattended
  • Option 3: Install and Run

Here is a YouTube video which shows a demo: WordPress Nulecule Demo

For further reading, check Vasik’s presentation [3] or the Nulecule GitHub project.

Get Involved:

Nulecule Project: https://github.com/projectatomic/nulecule
Atomic App: https://github.com/projectatomic/atomicapp

Check the README files of the above projects for the relevant communication channels for participating in the projects.

[1] http://www.slideshare.net/then4way/project-atomicnulecule
[2] http://www.rtnpro.com/nuleculizing-an-docker-image/
[3] http://www.slideshare.net/VavPavl/nulecule

Cherry-pick a PR (pull request) from GitHub

Sometimes you might want to test a pull request (from GitHub) on your local machine by cherry-picking it. This usually happens before the PR gets merged in the upstream repo and released by the project.

I searched the internet but did not find a good reference about how to do it. After a little trial and error I came up with the steps below.

Cherry picking a pull request:

For example, say you want to cherry-pick https://github.com/fgrehm/vagrant-cachier/pull/164
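
GitHub exposes every pull request under a read-only ref named pull/<ID>/head, so the steps can be sketched as below (assuming you have already cloned the repo and origin points at GitHub):

$ cd vagrant-cachier
# Fetch the tip commit of PR #164 into FETCH_HEAD
$ git fetch origin pull/164/head
# Cherry-pick the fetched commit (enough for a single-commit PR)
$ git cherry-pick FETCH_HEAD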

Cherry picking a commit:
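
For a single commit from a contributor’s fork, one assumed approach is to add the fork as a remote, fetch it, and then cherry-pick the commit by its SHA:

$ git remote add contributor https://github.com/<user>/vagrant-cachier.git
$ git fetch contributor
$ git cherry-pick <commit SHA>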

Using Imagefactory to build Vagrant images

The Fedora Koji build system and the CentOS community build system (cbs) use imagefactory at the back end of Koji to build Vagrant images. I have used it through cbs/koji, but wanted to give it a try locally, as I am looking for easier methods to build the adb-atomic-developer-bundle, especially for developers who don’t have access to the Fedora or CentOS build systems.

Imagefactory needs a KVM/libvirt hypervisor to build images, and it converts them for other providers, e.g. VirtualBox or VMware Fusion.

Setup:

I have used my laptop (which runs Fedora 23) for this. As I plan to hack on imagefactory and did not want to damage my laptop’s KVM setup, I used nested virtualization. This means I have a CentOS 7 VM which can itself run virtual machines.

All the below steps are done on a CentOS 7 VM which has a KVM setup in place.

Installation:

Imagefactory is available in the Fedora and EPEL repos. But I wanted to try/test the latest code, so I generated RPMs from the latest code and then installed them.

$ yum install rpmdevtools epel-release
$ git clone https://github.com/redhat-imaging/imagefactory.git
$ cd imagefactory
$ make rpm
$ cd imagefactory_plugins
$ make rpm
$ cd ~/rpmbuild/RPMS/noarch/
$ sudo yum localinstall ./*

Building Vagrant Images:

For building the Vagrant box I have used Ian’s example git repo. He is the maintainer and one of the primary developers of imagefactory.

The below commands are copied from the imagefactory-examples git repo.

$ git clone https://github.com/imcleod/imagefactory-examples.git
$ cd imagefactory-examples/vagrant/

Once you are in the “imagefactory-examples/vagrant/” directory, you can see that the files required to generate an image for Fedora 22 are already there, so we can start running the commands.

To get a working Vagrant box we need to run three commands (as mentioned below) to create the appropriate OVA image. Each command prints a UUID identifying the intermediate image, and we need to use that UUID in the next command.

$ sudo imagefactory --debug base_image \
  --file-parameter install_script ./f22-vagrant.ks \
  --parameter offline_icicle true \
  ./f22-minimal-40g.tdl
Output:
xxxxxxxxxxxxxxxxxxxxxxxxxxx
============ Final Image Details ============
UUID: 109cb45f-bbd2-4a27-ba5f-42e2d368be32
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --id 109cb45f-bbd2-4a27-ba5f-42e2d368be32 rhevm

Output:
============ Final Image Details ============
UUID: ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Image build completed SUCCESSFULLY!
$ sudo imagefactory --debug target_image --parameter rhevm_ova_format vagrant-libvirt --id ce0dce5f-a1d1-4c1a-8e9b-fc56e022a1bc ova

Output:
============ Final Image Details ============
UUID: 36fcb589-06b8-447b-85bf-ed4715bd2a93
Type: target_image
Image filename: /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body
Image build completed SUCCESSFULLY!

The last step generates the Fedora 22 image for the libvirt provider. You can rename it to f22.libvirt.box (Vagrant images usually have a .box extension) and start using it.

$ cp /var/lib/imagefactory/storage/36fcb589-06b8-447b-85bf-ed4715bd2a93.body ./f22.libvirt.box
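
To try the box out, the standard Vagrant workflow should work; the box name f22 below is arbitrary:

$ vagrant box add f22 ./f22.libvirt.box
$ vagrant init f22
$ vagrant up --provider=libvirt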

[1] http://imgfac.org/
[2] https://github.com/redhat-imaging/imagefactory
[3] https://lalatendumohanty.wordpress.com/2015/11/01/kvm-nested-virtualization-in-fedora-23/
[4] https://lalatendumohanty.wordpress.com/2015/05/28/installing-vagrant-in-centos7/

KVM Nested Virtualization In Fedora 23

Nested virtualization allows you to run a virtual machine (VM) inside another VM [1]. Both Intel and AMD support nested virtualization.

This is very helpful when you are experimenting with hypervisor-related technologies. For example, I can run both KVM and VirtualBox on my laptop, each in a different VM. I can also run a local installation of imagefactory to build Vagrant images in a VM, as imagefactory needs a hypervisor to run the build. The best part is, I can experiment with all of these inside different VMs without damaging my primary workstation’s hypervisor.

The below steps were done on a Lenovo ThinkPad with an Intel chipset, running Fedora 23.

Step 1: Make sure Intel virtualization (VT) is enabled for the host machine.

$ cat /proc/cpuinfo | grep vmx

flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts

The output should contain vmx (one flags line is printed per CPU thread); otherwise Intel virtualization (VT) is not enabled on the machine and you should first fix the setting in the BIOS.

Step 2: Install KVM on the F23 host.

$ dnf install @virtualization

Nested virtualization should be disabled by default:

$ cat /sys/module/kvm_intel/parameters/nested
 N

Step 3: Enable nested virtualization. Run the below commands.

  • Temporarily remove the kvm-intel kernel module
      $ sudo rmmod kvm-intel
  • Add the following directive to /etc/modprobe.d/dist.conf
    $ sudo sh -c "echo 'options kvm-intel nested=y' >> /etc/modprobe.d/dist.conf"
  • Insert the kvm-intel module back into the kernel
      $ sudo modprobe kvm-intel

There is an alternative way to do the same, i.e. pass kvm-intel.nested=1 on the kernel command line [3].
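
On Fedora, one assumed way to make that kernel parameter permanent is via grubby:

$ sudo grubby --update-kernel=ALL --args="kvm-intel.nested=1"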

Step 4: Reboot and verify that nested virtualization is enabled

  • Check that nested virt is enabled
$ sudo cat /sys/module/kvm_intel/parameters/nested
 Y

Step 5: Install the beefy VM (let’s call it the parent VM)

  • I used the CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the VM through Virtual Machine Manager.
  • Parent VM configuration: 50GB disk, 4GB RAM and 4 vCPUs

Step 6: Enable the VM to use nested virt

  • Go to the Virtual Machine Manager GUI -> CPU properties -> select “Copy host CPU configuration”

There is also another option, i.e. host-passthrough [1]. It is supposed to be more stable than “Copy host CPU configuration”, but I have not tried that yet.
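
If you want to experiment with host-passthrough, it can be set by editing the guest’s libvirt XML with sudo virsh edit <vm-name> and replacing the <cpu> element roughly as below:

<!-- Pass the host CPU model straight through to the guest -->
<cpu mode='host-passthrough'/>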

Step 7: Check that Intel virtualization (VT) is enabled in the VM

$ cat /proc/cpuinfo | grep vmx

Step 8: Install KVM inside the VM [4]

$ yum install qemu-kvm qemu-img
$ yum install libvirt libvirt-python python-virtinst

$ systemctl enable libvirtd
$ systemctl start libvirtd
$ systemctl status libvirtd

Step 9: Install the child VM inside the parent VM

  • I used Virtual Machine Manager to connect to the parent VM and then installed the child VM.
  • I used the same CentOS 7 minimal ISO, i.e. CentOS-7-x86_64-Minimal-1503-01.iso, to install the child VM.

[1] https://fedoraproject.org/wiki/How_to_enable_nested_virtualization_in_KVM

[2] http://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html

[3] http://kashyapc.com/2012/01/14/nested-virtualization-with-kvm-intel/

[4] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Guest_Installation.html

Is the open source/community model the better way?

After reading my previous blogs (blog1 and blog2), you might be wondering whether the open source/free software/community development model helps to create better software. I am going to shed some light on that in this post.

Before going further, I want to talk about the community development model. In a community development model, anybody can participate in the software development irrespective of race, religion, nationality, gender, educational qualification or social status. Anybody who uses the software, develops it, fixes bugs, creates documentation, maintains the infrastructure for the project or otherwise contributes to its success is part of the community. In a community project, the community decides the road map. This actually changes the nature of the project, and we will discuss how in the discussion below. However, for the community, i.e. everybody, to participate in the project, the source code must be made available to them, and this is how source code availability becomes a bare necessity. Source code access, i.e. open source, is a precondition for the community development model: without access to the source code, we can’t follow it.

If you already understand how software is typically developed in a company, you can skip the next paragraph. Otherwise, let’s first discuss how a software product typically gets developed in a proprietary company, and then compare that with community development.

A company sells software to solve a problem or a set of problems, or to offer a better solution than an existing one. Before selling the product, they develop it. As part of the development process, they hire people to do market study/research: what competitive products are available for solving the problem, what are those products trying to solve, and what should their own approach to the problem be? Then they hire software engineers and put them in an R&D or development lab to create it. These engineers are responsible for writing the code and testing the product. They are not allowed to share information about the product or the code with the outside world. When development is done and the product is ready, the company starts selling it to its customers. After the first version of the software, there might be new requirements for features and improvements to be put into subsequent versions to make it better or more competitive with other similar products, so that the company can make more profit selling it.

An open source/community project, on the other hand, usually gets started by an individual or a group of people trying to solve a problem for themselves. They make the source code available to others in the belief that it will be helpful to them too. If others find it useful, they use it. When people use the software, they might find issues with it; they report the issues to the developer group, or they fix the issues themselves. Some of them add new features according to their needs. In gratitude for the initial help they received in the form of the software, they merge the new code/features into the original software and make it available for others to use. Gradually a community is formed. The people with the most interest and knowledge in the project take up the role of maintaining it. A maintainer is essentially a project leader whose responsibility is, among many other things, to oversee the project’s growth, to coordinate collaboration among community members, and to understand the community’s expectations of the project. As the project grows, the community members decide democratically which features go into the software, which hardware they want to run it on, and what the future road map should be. This leads to the development of the features which people need most, i.e. which solve their problems, not some fancy feature which some company executive thought would be useful. This also leads to better support for a wide range of hardware, as it is easier for community members to port the code to different hardware when the source code is available. Proprietary companies, by contrast, support the selective hardware which gives them the maximum user base and profit, whereas a community tries to support everybody’s hardware so that everybody benefits, rather than chasing money or profits.

Most of the time, open source/community software has better interoperability with other open source software, because the goal is to collaborate and benefit from each other, which ultimately benefits the community. This leads to better integration between different software projects and results in a better product or software ecosystem. However, this is not the case with proprietary software: their decisions depend on profit margins, future scope, the relationships between the companies involved (i.e. if the software comes from different companies) and so on. Have you started seeing the difference? 🙂

Even though a community project starts with the minimum required features, it gradually becomes an incubation ground for innovation and new ideas. Researchers, academics, computer scientists, corporations and governments use existing open source projects to develop something new for their own purposes. Let’s take an example: a computer scientist doing research on distributed computing comes up with a new algorithm which improves distributed computing, and now he wants to implement and test it. Does he need to develop a new distributed system from scratch, or can he put his algorithm into an existing open source distributed system? The answer is pretty simple: he takes the source code from an open source project (something like GNU/Linux here) and implements his algorithm in it. It is then up to him whether he merges the code into the existing code base and makes it available to others, or keeps it to himself. But in most cases people give it back to the source from where they took the initial code. Giving code away for free doesn’t mean they are not gaining anything: code in a popular community project gives the author far more credibility, popularity, reach and respect alongside his research publication, and if he still wants, he can make money out of it. There are lots of examples of PhD papers/subjects becoming famous community/open source/free software projects.

[Graph: community-developed software overtakes proprietary software in terms of innovation in the long run.]

I copied the below lines from Debian GNU/Linux’s about page [1]:

You may be wondering: why would people spend hours of their own time to write software, carefully package it, and then give it all away? The answers are as varied as the people who contribute. Some people like to help others. Many write programs to learn more about computers. More and more people are looking for ways to avoid the inflated price of software. A growing crowd contribute as a thank you for all the great free software they’ve received from others. Many in academia create free software to help get the results of their research into wider use. Businesses help maintain free software so they can have a say in how it develops — there’s no quicker way to get a new feature than to implement it yourself! Of course, a lot of us just find it great fun.

When you are in a culture where others help you without any selfish motive, your attitude towards others also changes, and you become helpful too. However, not everybody is kind enough to give back the enhancements they make to the source code. For them we have open source licenses like the GPL [2], which force those who sell or commercialize the software with enhancements to give the changes back to the community which gave them the initial source code.

Sometimes organisations contribute to community projects or start community projects, e.g. GNU/Linux, Mozilla Firefox, Fedora, openSUSE, Chrome, OpenStack and Xen virtualization, because they understand the benefits of the community development model. We also have plenty of examples of individuals, groups of people and companies starting open source projects.

Following are the positive sides of a community driven/free software/open source project.

  • More choice of hardware and platform. Most open source software projects support all possible hardware.
  • The life span of the software will be very long, as it is easier to fix it and contribute a feature than to create a new project/software.
  • It is easier to customize open source software according to your needs and taste. You can remove unwanted features, which keeps its IT footprint optimal.
  • It is far less likely to carry viruses or spyware, as the source code is available for everyone to see, so suspicious code rarely gets into the project and can be easily removed.
  • Better interoperability, as it is easier to integrate it with other software.
  • The quality of the code in open source projects is often better than in closed source ones, as the code is reviewed/read by more people. The source also tends to be more modular because of the distributed way of development.
  • It helps to spread knowledge, as source code is a great source of knowledge. You can learn from others’ work.
  • It helps to avoid vendor lock-in. If a company is giving you commercial support for an open source/free software product, they can’t hold a monopoly over the software. You are always free to move the support to some other company, or to hire engineers to support the software, as the source code is publicly available.
  • The cost is usually lower for community-driven software when you need commercial support. This helps organisations cut down their IT cost, which in turn lowers the cost of their product or service.
  • It minimizes software piracy. The model allows everybody to use the community version of the software at no cost, so there is no need for piracy.
  • It does not take away users’ freedom regarding how or where they want to use it.
  • It helps to create a better culture, where collaboration with others plays a key role.
  • It encourages innovation, as there is no need to reinvent the wheel and we can focus on new things.

I am quoting Linus Torvalds on open source; he has summarized it nicely.

“Me, I just don’t care about proprietary software. It’s not “evil” or “immoral,” it just doesn’t matter. I think that Open Source can do better, and I’m willing to put my money where my mouth is by working on Open Source, but it’s not a crusade – it’s just a superior way of working together and generating code.

It’s superior because it’s a lot more fun and because it makes cooperation much easier (no silly NDA’s or artificial barriers to innovation like in a proprietary setting), and I think Open Source is the right thing to do the same way I believe science is better than alchemy. Like science, Open Source allows people to build on a solid base of previous knowledge, without some silly hiding.

But I don’t think you need to think that alchemy is “evil.” It’s just pointless because you can obviously never do as well in a closed environment as you can with open scientific methods”


This topic is a very big one, and it is hard to cover it in a single blog post. It is very much possible that I have missed some obvious points, so if you have any suggestions, kindly put them in the comments. I would be happy to pick them up and add them to the post.

[1] http://www.debian.org/intro/about#what

[2] http://www.gnu.org/licenses/gpl.html