Extracting regions of interest from images

Using OpenCV to efficiently extract regions of interest from images

Debal B
Towards Data Science


Welcome to the second post in this series where we talk about extracting regions of interest (ROI) from images using OpenCV and Python.

As a recap, in the first post of this series we went through the steps to extract the balls and table edges from an image of a pool table. We used simple OpenCV functions like inRange, findContours, boundingRect, minAreaRect, minEnclosingCircle, circle, HoughLines, and line to achieve our objective.

For beginners in OpenCV, I would recommend going through that post to get familiar with the usage of the above functions.

In this post we will look at a somewhat more complex problem and explore some methods which we can use to obtain the desired results.

Our task today is to extract the desired segments from an image containing a snapshot of a patient's brain activity map. The extracted segments can then be used in numerous applications, e.g. in a Machine Learning model that diagnoses health anomalies.

So let us start by looking at the input image itself. It is a typical report generated by medical instruments used in neurological science, which use sensors to detect signals from a patient's brain and display them as colored maps. Typically there are four maps, each depicting a certain feature; they are analyzed together for diagnosis (further details are out of current scope).

Our target image for this exercise containing the four brain maps (image source author)

From the above image, we want to extract only the regions corresponding to the four maps (head scans), leaving everything else out. So let's get going.

The first step is detecting the edges of the segments we want to extract. This is a multi-step process, as outlined below:

  1. Convert the RGB image to gray-scale using “cvtColor()”
  2. Remove noise from the gray-scale image by applying the blurring function “GaussianBlur()”
  3. Apply the “Canny()” function to the blurred image to obtain the edges

The output of the edge detection process looks something like this:

Edge detection output using Canny algorithm (image source author)

Notice that although the brain map segments are identified, there are many unwanted edges that need to be eliminated, and some of the edges have gaps that need to be closed.

A common method for this purpose is Morphological Transformation, which applies a succession of dilations and erosions to the image to remove unwanted edges and close gaps.

We use OpenCV functions “dilate()” and “erode()” over multiple iterations to get an output as below.

Some enhancements in the edges using OpenCV (image source author)

As you can see, the edges are now complete and much smoother than before.

Now we can extract the contours in this image using the OpenCV function “findContours()” and select only those contours with the following properties:

  1. Geometry is circle or oval shaped
  2. Area is above a certain threshold (the value 7000 works fine for this example).

For the first part, we will detect the bounding rectangle of each contour using OpenCV “boundingRect()” and check whether the aspect ratio (height to width ratio) is close to 1.

It may appear that our task is finished but there is a little bit of fine tuning required.

It is often the case that multiple overlapping contours are detected over a segment whereas we are interested in only one.

This problem is solved using Non-Maximum Suppression, i.e. we look at all overlapping contours and select the one with the maximum area as the final candidate. The logic is straightforward, so we do not need any built-in OpenCV or Python functions.

Another important piece of logic is to identify the four segments separately, i.e. Top-Left, Top-Right, Bottom-Left and Bottom-Right.

This is also pretty straightforward and involves identifying the image center coordinates as well as the centroid of each of our detected segments. Centroid detection of a segment contour requires applying the OpenCV “moments()” function on the contour and then calculating the center X, Y coordinates using the formula below:
center_x, center_y = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

Comparing the segment centroid coordinates with the image center coordinates lets us place the four segments in their respective positions.

Now that we have the four segments identified, we need to build the image mask which will allow us to pull out the desired features from the original image.

We will use the OpenCV function “drawContours()” with the color White (R,G,B = 255,255,255) and thickness FILLED (-1) to draw all four segment contours over a black background. The result looks like below:

Mask for extracting our ROIs (image source author)

Applying this mask on the original image gets us the desired segments over a background of our choice (e.g. Black or White).

For a black background we combine the original image with the previously obtained mask using the OpenCV function “bitwise_and()”.

Extracted ROIs over a black background (image source author)

For a white background we first create a white canvas and then create a color inverted mask as below by drawing contours with OpenCV function “drawContours()” in black color (R,G,B = 0,0,0) and thickness as FILLED (-1).

An alternative Inverted mask for ROI extraction (image source author)

We then add this inverted mask to the previously obtained black-background result using the OpenCV “add()” function and achieve the same result, but with a white background.

Extracted ROIs over a white background (image source author)

This concludes the current post, in which we looked at a few methods for easily extracting regions of interest from images.

It should be noted that the methods used above may need modification for other images of varying complexity. However, the basics discussed above lay the groundwork for any advanced techniques that may be required to solve such problems.
