
Apart from segregating objects based on their colors, we can also segregate them based on their textures. To do this we can make use of the entropy function in Skimage. In this article we shall learn how to use this function to effectively extract objects of interest from an image.
Let’s begin!
As always, start by importing the required Python libraries.
import matplotlib.pyplot as plt
import numpy as np
from skimage.io import imread, imshow
from skimage.util import img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.color import rgb2gray
Now let us import the image we will be working with.
shawls = imread('shawls.PNG')
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(shawls);

The above image has shawls of varying prints and textures. Let us try to see if we can segregate them based on these features. As a start let us first convert our image to grayscale.
shawl_gray = rgb2gray(shawls)
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(shawl_gray);

Excellent! From this point we can apply Skimage's entropy function.
entropy_image = entropy(shawl_gray, disk(5))
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(entropy_image, cmap = 'magma');

In a nutshell, the entropy function gives a value that represents the level of complexity in a certain section of an image: for each pixel, it computes the Shannon entropy (in bits) of the grayscale histogram within the neighborhood defined by the structuring element.
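To make that concrete, here is a minimal sketch of the per-pixel computation, applied to a single patch rather than the whole image (the helper local_entropy and the patch coordinates are our own illustration, not part of Skimage):
# A hypothetical helper mirroring what the rank entropy filter does
# at one pixel: histogram the neighborhood, then take the Shannon
# entropy (base-2 logarithm) of the resulting distribution.
def local_entropy(patch):
    counts = np.bincount(patch.ravel(), minlength=256)
    probs = counts[counts > 0] / patch.size
    return -np.sum(probs * np.log2(probs))
# Entropy of an 11x11 corner patch, roughly the area disk(5) covers.
print(local_entropy(img_as_ubyte(shawl_gray)[:11, :11]))
The resulting values are of course subject to the structuring element we chose. As an example, let us experiment by changing the radius of the disk.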
def disk_iterations(image):
    # Compare the entropy filter across disk radii 1 through 9.
    image_gray = rgb2gray(image)
    f_size = 20
    radii = list(range(1, 10))
    fig, axes = plt.subplots(3, 3, figsize=(15, 15))
    for n, ax in enumerate(axes.flatten()):
        ax.set_title(f'Radius at {radii[n]}', fontsize=f_size)
        ax.imshow(entropy(image_gray, disk(radii[n])), cmap='magma')
        ax.set_axis_off()
    fig.tight_layout()
disk_iterations(shawls)

We can see that the image becomes increasingly blurry as we increase the disk radius. Since the goal of this exercise is to segment the image based on texture, we do not want a disk so large that it spans regions with different textures. Let us choose a radius of 6, as it seems to strike a good balance between the sharpness at radius 1 and the dullness at radius 9.
Our next task is to turn this into a mask. To do this, let us use image binarization. The code below iterates over several thresholds.
def threshold_checker(image):
    # Binarize the scaled entropy image at thresholds from 0.1 to 1.0.
    thresholds = np.arange(0.1, 1.1, 0.1)
    image_gray = rgb2gray(image)
    entropy_image = entropy(image_gray, disk(6))
    scaled_entropy = entropy_image / entropy_image.max()
    fig, axes = plt.subplots(2, 5, figsize=(17, 10))
    for n, ax in enumerate(axes.flatten()):
        ax.set_title(f'Threshold : {round(thresholds[n], 2)}', fontsize=16)
        threshold = scaled_entropy > thresholds[n]
        ax.imshow(threshold, cmap='gist_stern_r')
        ax.axis('off')
    fig.tight_layout()
threshold_checker(shawls)

We can see that increasing the binarization threshold decreases how much of the image is retained. Intuitively this makes sense: once the threshold reaches 1, no pixel's scaled entropy can exceed it.
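We can quantify this with a quick sanity check, printing the fraction of pixels that survives each threshold (recomputed from shawl_gray, with the image converted to uint8 first since the rank filter expects integer images):
# Fraction of pixels kept at each threshold.
scaled = entropy(img_as_ubyte(shawl_gray), disk(6))
scaled = scaled / scaled.max()
for t in np.arange(0.1, 1.1, 0.1):
    print(f'threshold {t:.1f}: {(scaled > t).mean():.1%} of pixels kept')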
For visualization purposes let us set the threshold equal to 0.8 and see what happens when we use it as a mask for our image.
scaled_gray = shawl_gray / shawl_gray.max()  # rescale the grayscale image to [0, 1]
entropy_image = entropy(scaled_gray, disk(6))
scaled_entropy = entropy_image / entropy_image.max()
mask = scaled_entropy > 0.8  # keep only the high-entropy regions
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(shawl_gray * mask, cmap='gray');

As expected, we see that only the objects that breach a certain level of entropy were rendered. If we flip the comparison operator, we see the opposite effect.
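For instance, reusing the scaled_entropy array from the snippet above (the variable name mask_low is ours):
# Flipping the comparison keeps only the low-entropy regions.
mask_low = scaled_entropy < 0.8
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(shawl_gray * mask_low, cmap='gray');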

Notice how only low entropy objects have been rendered. To aid in visualization let us convert both these masks to their original colored image and compare them side by side.
def entropy_mask_viz(image):
    # Build high- and low-entropy masks and apply them to the color image.
    image_gray = rgb2gray(image)
    entropy_image = entropy(image_gray, disk(6))
    scaled_entropy = entropy_image / entropy_image.max()
    f_size = 24
    fig, ax = plt.subplots(1, 2, figsize=(17, 10))
    ax[0].set_title('Greater Than Threshold', fontsize=f_size)
    threshold = scaled_entropy > 0.8
    image_a = np.dstack([image[:, :, 0] * threshold,
                         image[:, :, 1] * threshold,
                         image[:, :, 2] * threshold])
    ax[0].imshow(image_a)
    ax[0].axis('off')
    ax[1].set_title('Less Than Threshold', fontsize=f_size)
    threshold = scaled_entropy < 0.8
    image_b = np.dstack([image[:, :, 0] * threshold,
                         image[:, :, 1] * threshold,
                         image[:, :, 2] * threshold])
    ax[1].imshow(image_b)
    ax[1].axis('off')
    fig.tight_layout()
    return [image_a, image_b]
entropic_images = entropy_mask_viz(shawls)
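Since the function returns both masked images, we can reuse them later; for instance, writing them to disk (the filenames here are our own, hypothetical choices):
# Save the two masked RGB arrays as image files.
plt.imsave('high_entropy_objects.png', entropic_images[0])
plt.imsave('low_entropy_objects.png', entropic_images[1])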

We can see that we were able to successfully split the objects by their level of complexity. The objects in the left image exhibit far more intricate design patterns (and are made of fabrics with more complex textures), while the objects in the right image are far plainer, each containing only one color.
An interesting note is how we were able to separate the text from the sheet of paper it was written on.

This is useful to keep in mind: human text tends to be written on plain backgrounds to facilitate reading. Knowing this, it should be possible to extract all text features from an image, but that is a task we shall reserve for another time.
In Conclusion
Entropy masking is a useful technique that can help data scientists segment portions of an image by complexity. Its applications range from texture analysis and image filtering to text extraction (a feature that lends itself well to Natural Language Processing). I hope that after reading this article you have a better appreciation and understanding of how to use this tool.