Image Stitching Using OpenCV

Vagdevi Kommineni
Towards Data Science
4 min read · Oct 11, 2018


As you know, the Google Photos app has stunning automatic features like video making, panorama stitching, collage making, sorting pictures based on the people present in them, and many more. I always wondered how all of these were possible. Then one day, I thought it would be extremely cool to build panorama stitching on my own.

I felt really excited when I got to do a project on image stitching. It was a eureka moment when I finally managed to build my own image stitcher :). I did it in Python, my all-time favorite language, using OpenCV 3.1.0.

Although many resources for this are available on the Internet, today I would like to present my work along with the code. The following code and explanation cover stitching two images together.

Firstly, let us import the necessary modules.

import cv2
import numpy as np
import matplotlib.pyplot as plt

Since we are stitching two images, let's read them in.

left.jpg and right.jpg
img_ = cv2.imread('right.JPG')
img1 = cv2.cvtColor(img_, cv2.COLOR_BGR2GRAY)
img = cv2.imread('left.JPG')
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

The cv2.cvtColor call converts the input image from OpenCV's default BGR channel order into its grayscale form.

For image stitching, we have the following major steps to follow:

  1. Compute the SIFT keypoints and descriptors for both images.
  2. Compute distances between every descriptor in one image and every descriptor in the other image.
  3. Select the top 'm' matches for each descriptor of an image.
  4. Run RANSAC to estimate the homography.
  5. Warp one image to align it with the other.
  6. Stitch them together.

Elaborately:

Firstly, we have to find the matching features in both images. These best-matched features act as the basis for stitching. We extract the keypoints and SIFT descriptors of both images as follows:

sift = cv2.xfeatures2d.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1,None)
kp2, des2 = sift.detectAndCompute(img2,None)

kp1 and kp2 are the keypoints, and des1 and des2 are the descriptors of the respective images.

Now, the descriptors obtained in one image need to be recognized in the other image too. We do that as follows:

bf = cv2.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

BFMatcher() is a brute-force matcher: it compares every descriptor in one image against every descriptor in the other and pairs up the most similar ones. When we set the parameter k=2, we are asking knnMatch to return the 2 best matches for each descriptor.

'matches' is a list of lists, where each sub-list consists of 'k' DMatch objects. Understanding this structure will make the coming parts easier to follow.

Often, similar-looking features exist in many places of an image, and such trivial matches can mislead the stitching. So we filter through all the matches to obtain the best ones, applying the ratio test on the top 2 matches obtained above: a match is kept only when the distance of the best match is significantly smaller than that of the second-best one (here, less than half of it).

# Apply ratio test
good = []
for m in matches:
    if m[0].distance < 0.5 * m[1].distance:
        good.append(m)
matches = np.asarray(good)

It's time to align the images now. A homography matrix is needed to perform the transformation, and estimating it requires at least 4 matches, so we do the following:

if len(matches[:, 0]) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
    H, masked = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # print(H)
else:
    raise AssertionError("Can't find enough keypoints.")

And finally comes the last part: stitching the images. Now that we have found the homography for the transformation, we can warp and stitch the images together:

dst = cv2.warpPerspective(img_, H, (img.shape[1] + img_.shape[1], img.shape[0]))
plt.subplot(122), plt.imshow(dst), plt.title('Warped Image')
plt.show()
plt.figure()
dst[0:img.shape[0], 0:img.shape[1]] = img
cv2.imwrite('output.jpg', dst)
plt.imshow(dst)
plt.show()

The warped image is plotted with matplotlib to visualize the warping clearly.

The resulting stitched panorama is saved as output.jpg.
