
Image Processing with Python – Blurring and Sharpening for Beginners

How do you apply convolution kernels to colored images?

Convolutional Dogs (Image by Author)

In this article we shall discuss how to apply blurring and sharpening kernels to images. These basic kernels form the backbone of many more advanced kernel applications. In my previous article I discussed the edge detection kernel, but I realized that I only covered greyscale images.

To fill that gap, I shall discuss how we can apply these kernels to colored images while still preserving the underlying picture.

Let’s get started!

As always let us begin by importing the required Python Libraries.

import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imshow, imread
from skimage.color import rgb2yuv, rgb2hsv, rgb2gray, yuv2rgb, hsv2rgb
from scipy.signal import convolve2d

For the purposes of this article, we shall use the image below.

dog = imread('fire_dog.png')
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(dog);
Campfire Dog (Image by Author)

Now the kernels we shall apply to the image are the Gaussian Blur Kernel and the Sharpen Kernel. You can see how we define their matrices below.

# Sharpen
sharpen = np.array([[0, -1, 0],
                    [-1, 5, -1],
                    [0, -1, 0]])
# Gaussian Blur
gaussian = (1 / 16.0) * np.array([[1., 2., 1.],
                                  [2., 4., 2.],
                                  [1., 2., 1.]])
fig, ax = plt.subplots(1,2, figsize = (17,10))
ax[0].imshow(sharpen, cmap='gray')
ax[0].set_title(f'Sharpen', fontsize = 18)

ax[1].imshow(gaussian, cmap='gray')
ax[1].set_title(f'Gaussian Blur', fontsize = 18)

[axi.set_axis_off() for axi in ax.ravel()];
Sharpen and Gaussian Blur Kernels

But how do we actually apply these kernels to our image? Well, let us first try convolving them directly. I have defined the function below to let us apply the kernels iteratively. Note how we set the boundary to 'fill' and fillvalue to 0; this is important to ensure that the output is a zero-padded matrix of the same size as the original.

def multi_convolver(image, kernel, iterations):
    # Repeatedly convolve the image with the kernel; boundary='fill' with
    # fillvalue=0 zero-pads the edges so the output keeps the original size
    for i in range(iterations):
        image = convolve2d(image, kernel, 'same', boundary = 'fill',
                           fillvalue = 0)
    return image

multi_convolver(dog, gaussian, 2)
Error Message

Oh no, it seems that we have run into a value error. Why is this the case? Remember that convolve2d expects both of its inputs to be 2D arrays. This means that we cannot apply a 2D convolution to our 3D (because of the color channels) image array. To solve this, we must first convert the image to greyscale.
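Before converting, a quick shape check (assuming dog was loaded as above; the exact shape depends on your image) shows why the direct call fails:

# The colored image carries a third axis for its color channels,
# while the kernel is a plain 2D array
print(dog.shape)       # e.g. (height, width, 3)
print(gaussian.shape)  # (3, 3)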

dog_grey = rgb2gray(dog)
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(dog_grey);
Grey Dog

Now if we run the function, we should get the desired effect.

convolved_image = multi_convolver(dog_grey, gaussian, 2)
plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(convolved_image);
Blurred Dog

Wonderful! We can now see that the image has clearly been blurred. The code below shows what happens if we keep applying the Gaussian blur convolution to the image.

def convolution_plotter(image, kernel):
    iterations = [1, 10, 20, 30]
    f_size = 20

    fig, axes = plt.subplots(1, 4, figsize = (15, 7))
    for n, ax in enumerate(axes.flatten()):
        ax.set_title(f'Iteration : {iterations[n]}', fontsize = f_size)
        ax.imshow(multi_convolver(image, kernel, iterations[n]),
                  cmap='gray')
        ax.set_axis_off()
    fig.tight_layout()

convolution_plotter(dog_grey, gaussian)
Gaussian Blurring

Great! We can clearly see the continued blurring of the image due to the application of our kernel.

But what if you needed to blur the image and retain the color? Let us first try to apply the convolutions per color channel.

def convolver_rgb(image, kernel, iterations = 1):
    # Convolve each color channel separately
    convolved_image_r = multi_convolver(image[:,:,0], kernel, iterations)
    convolved_image_g = multi_convolver(image[:,:,1], kernel, iterations)
    convolved_image_b = multi_convolver(image[:,:,2], kernel, iterations)

    # Stack the channels back together and rescale to the [0, 1] range
    reformed_image = np.dstack((np.rint(abs(convolved_image_r)),
                                np.rint(abs(convolved_image_g)),
                                np.rint(abs(convolved_image_b)))) / 255

    fig, ax = plt.subplots(1, 3, figsize = (17, 10))

    ax[0].imshow(abs(convolved_image_r), cmap='Reds')
    ax[0].set_title('Red', fontsize = 15)

    ax[1].imshow(abs(convolved_image_g), cmap='Greens')
    ax[1].set_title('Green', fontsize = 15)

    ax[2].imshow(abs(convolved_image_b), cmap='Blues')
    ax[2].set_title('Blue', fontsize = 15)

    [axi.set_axis_off() for axi in ax.ravel()]

    return np.clip(reformed_image, 0, 1)
convolved_rgb_gauss = convolver_rgb(dog, gaussian, 2)
RGB Channel Convolution

The function actually returns the reformed image; we just have to pass it to the show function.

plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(convolved_rgb_gauss);
Reformed Gaussian Image

Great! It seems that the function worked well. As a fun exercise, let us see what happens when we convolve the image 10 times.
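The call might look something like the following (the variable name here is just illustrative):

# Heavier blur: run the Gaussian kernel 10 times over each channel
convolved_rgb_gauss_10 = convolver_rgb(dog, gaussian, 10)

plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(convolved_rgb_gauss_10);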

Heavily Blurred Image

So this solves our issue, right? Well, not really. To see the problem with this function, let us try to sharpen the image.

convolved_rgb_sharpen = convolver_rgb(dog, sharpen, 1)
RGB Channel Convolution

Looks good so far; let us see what the reformed image looks like.

Reformed Sharpened Image

The image has been reformed, but we now see that there are some slight distortions. Why is this the case?

Remember that the RGB color space implicitly mixes the luminance of the pixels with the colors. This means that it is practically impossible to apply convolutions to the lighting of an image without also changing the colors. So how do we handle this issue?

One way to get around this problem is to change the color space of the image. Instead of the RGB color space, we can use the Y'UV color space, where the lighting channel (the luma, Y') is separated from the color channels.
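To see this separation for ourselves, we can convert the image and display the Y' channel on its own (a quick sketch using the same imports as before):

# The luma (Y') channel is essentially a greyscale version of the image
dog_yuv = rgb2yuv(dog)

plt.figure(num=None, figsize=(8, 6), dpi=80)
imshow(dog_yuv[:,:,0]);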

For the purposes of this article, we shall edit the function to first convert the image into the Y'UV color space and then perform the required convolutions.

def convolver_rgb(image, kernel, iterations = 1):
    # Convert to Y'UV and convolve only the luma (Y') channel
    img_yuv = rgb2yuv(image)
    img_yuv[:,:,0] = multi_convolver(img_yuv[:,:,0], kernel, iterations)

    # Convert back to RGB, clipping to the valid [0, 1] range
    final_image = np.clip(yuv2rgb(img_yuv), 0, 1)

    fig, ax = plt.subplots(1, 2, figsize = (17, 10))

    ax[0].imshow(image)
    ax[0].set_title('Original', fontsize = 20)

    ax[1].imshow(final_image)
    ax[1].set_title(f'YUV Adjusted, Iterations = {iterations}',
                    fontsize = 20)

    [axi.set_axis_off() for axi in ax.ravel()]

    fig.tight_layout()

    return final_image
final_image = convolver_rgb(dog, sharpen, iterations = 1)
Reformed Sharpened Image

We can see that our function now returns an image that is noticeably sharper, with none of the color distortions. There are many other ways to tackle this issue, Y'UV conversion being only one of them. Remember that the V (value) component of the HSV color space represents almost the same thing. However, the luma component of the Y'UV space and the value component of the HSV space are computed slightly differently. Let us see the consequences of using one over the other.
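Roughly speaking, the luma Y' is a weighted sum of the R, G and B values (skimage uses the BT.601 weights, approximately 0.299, 0.587 and 0.114), while the HSV value V is simply the largest of the three channels. A quick check on a single, arbitrarily chosen pixel illustrates the difference (assuming dog is the 8-bit RGB image loaded earlier):

# Compare the two "brightness" channels for one pixel
r, g, b = dog[0,0,:3] / 255.0

luma  = 0.299 * r + 0.587 * g + 0.114 * b  # Y': a weighted sum of R, G, B
value = max(r, g, b)                       # V : simply the largest channel

print(f"luma  (Y') = {luma:.3f}")
print(f"value (V)  = {value:.3f}")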

def convolver_comparison(image, kernel, iterations = 1):
    img_yuv = rgb2yuv(image)   
    img_yuv[:,:,0] = multi_convolver(img_yuv[:,:,0], kernel, 
                      iterations)
    final_image_yuv = yuv2rgb(img_yuv)

    img_hsv = rgb2hsv(image)   
    img_hsv[:,:,2] = multi_convolver(img_hsv[:,:,2], kernel, 
                      iterations)
    final_image_hsv = hsv2rgb(img_hsv)

    convolved_image_r = multi_convolver(image[:,:,0], kernel, 
                         iterations)
    convolved_image_g = multi_convolver(image[:,:,1], kernel, 
                         iterations)
    convolved_image_b  = multi_convolver(image[:,:,2], kernel,
                         iterations)

    final_image_rgb = np.dstack((np.rint(abs(convolved_image_r)),
                                 np.rint(abs(convolved_image_g)),
                                 np.rint(abs(convolved_image_b)))) / 255

    fig, ax = plt.subplots(2,2, figsize = (17,17))

    ax[0][0].imshow(image)
    ax[0][0].set_title(f'Original', fontsize = 30)

    ax[0][1].imshow(final_image_rgb);
    ax[0][1].set_title(f'RGB Adjusted, Iterations = {iterations}', 
                       fontsize = 30)

    ax[1][0].imshow(final_image_yuv)
    ax[1][0].set_title(f'YUV Adjusted, Iterations = {iterations}', 
                       fontsize = 30)

    ax[1][1].imshow(final_image_hsv)
    ax[1][1].set_title(f'HSV Adjusted, Iterations = {iterations}', 
                       fontsize = 30)

    [axi.set_axis_off() for axi in ax.ravel()]

    fig.tight_layout()
convolver_comparison(dog, sharpen, iterations = 1)
Comparisons of Convolutions

We see that there is some slight improvement of the HSV and Y'UV methods over the original RGB approach. For a better illustration, we can increase the number of iterations from 1 to 2.
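That comparison is just the same call with a higher iteration count, something like:

convolver_comparison(dog, sharpen, iterations = 2)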

Distortion Comparisons

At 2 iterations the distortions become far more apparent. But it is also very clear that the HSV- and Y'UV-adjusted images are faring much better than the original RGB-adjusted image. These properties should be kept in mind when deciding on the best way to apply convolution kernels to an image.

In Conclusion

To summarize, we've learned how to apply blurring and sharpening convolutions to an image. Such techniques are vital for any data scientist working in image processing and computer vision. Most importantly, we learned that simply applying convolutions to the individual RGB channels may not be the best way to go. When working with images, always remember that there are plenty of different color spaces to choose from. Hopefully you found this article helpful and can apply it in your own work.

