Image Filters in Python

Manvir Sekhon
Towards Data Science
11 min read · Aug 10, 2019


I am currently working on a computer vision project, and I wanted to look into image pre-processing to help improve the machine learning models that I am planning to build. Image pre-processing involves applying image filters to an image. This article will compare a number of the most well-known image filters.

Image filters can be used to reduce the amount of noise in an image and to enhance the edges in an image. Two common types of noise can be present in an image: speckle noise and salt-and-pepper noise. Speckle noise is noise that occurs during image acquisition, while salt-and-pepper noise (which refers to sparsely occurring white and black pixels) is caused by sudden disturbances in the image signal. Enhancing the edges of an image can help a model detect the features of an image.

An image pre-processing step can improve the accuracy of machine learning models. Pre-processed images can help a basic model achieve high accuracy when compared to a more complex model trained on images that were not pre-processed. For Python, the Open-CV and PIL packages allow you to apply several digital filters. Applying a digital filter involves taking the convolution of an image with a kernel (a small matrix). A kernel is an n x n square matrix where n is an odd number. The kernel depends on the digital filter. Figure 1 shows the kernel that is used for a 3 x 3 mean filter. An image from the KDEF data set (which can be found here: http://kdef.se/) will be used for the digital filter examples.

Figure 1: A 3 x 3 mean filter kernel
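
To make the convolution step concrete, here is a minimal sketch (not part of the original notebook) that builds the 3 x 3 mean kernel from Figure 1 and applies it with the Open-CV filter2D function. It assumes the same KDEF image file used in the examples below:

import numpy as np
import cv2

image = cv2.imread('AM04NES.JPG') # the KDEF image used throughout this article
mean_kernel = np.ones((3, 3), np.float32) / 9 # every weight is 1/9, as in Figure 1
filtered = cv2.filter2D(image, -1, mean_kernel) # -1 keeps the depth of the input image

Swapping in a different kernel (for example one of the Laplacian kernels shown later) applies a different digital filter; the convolution step itself stays the same.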

1. Mean Filter

The mean filter is used to blur an image in order to remove noise. It involves determining the mean of the pixel values within an n x n kernel. The pixel intensity of the center element is then replaced by the mean. This eliminates some of the noise in the image and smooths the edges of the image. The blur function from the Open-CV library can be used to apply a mean filter to an image.

When dealing with color images, it is first necessary to convert from RGB to HSV, since the dimensions of RGB are dependent on one another, whereas the three dimensions in HSV are independent of one another (this allows us to apply filters to each of the three dimensions separately).

The following is a Python implementation of a mean filter:

import numpy as np
import cv2
from matplotlib import pyplot as plt
from PIL import Image, ImageFilter
%matplotlib inline

image = cv2.imread('AM04NES.JPG') # reads the image
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) # convert to HSV
figure_size = 9 # the dimension of the x and y axis of the kernel
new_image = cv2.blur(image,(figure_size, figure_size))
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(cv2.cvtColor(image, cv2.COLOR_HSV2RGB)),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_HSV2RGB)),plt.title('Mean filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 2: The result of applying a mean filter to a color image

Figure 2 shows that while some of the speckle noise has been reduced, there are a number of artifacts in the image that were not there previously. We can check to see whether any artifacts are created when a mean filter is applied to a grayscale image.

# The image will first be converted to grayscale
image2 = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
image2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)
figure_size = 9
new_image = cv2.blur(image2,(figure_size, figure_size))
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(image2, cmap='gray'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(new_image, cmap='gray'),plt.title('Mean filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 3: The result of applying a mean filter to a grayscale image

Figure 3 shows that mean filtering removes some of the noise and does not create artifacts for a grayscale image. However, some detail has been lost.

2. Gaussian Filter

The Gaussian filter is similar to the mean filter; however, it involves a weighted average of the surrounding pixels and has a parameter sigma. The kernel represents a discrete approximation of a Gaussian distribution. While the Gaussian filter blurs the edges of an image (like the mean filter), it does a better job of preserving edges than a similarly sized mean filter. The ‘GaussianBlur’ function from the Open-CV package can be used to implement a Gaussian filter. The function allows you to specify the shape of the kernel. You can also specify the standard deviation for the x and y directions separately. If only one sigma value is specified then it is considered the sigma value for both the x and y directions.

new_image = cv2.GaussianBlur(image, (figure_size, figure_size), 0)
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(cv2.cvtColor(image, cv2.COLOR_HSV2RGB)),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_HSV2RGB)),plt.title('Gaussian Filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 4: The result of applying a Gaussian filter to a color image

Figure 4 shows that the Gaussian filter does a better job of retaining the edges of the image when compared to the mean filter; however, it also produces artifacts on a color image. We can now check to see if the Gaussian filter produces artifacts on a grayscale image.

new_image_gauss = cv2.GaussianBlur(image2, (figure_size, figure_size), 0)
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(image2, cmap='gray'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(new_image_gauss, cmap='gray'),plt.title('Gaussian Filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 5: The result of applying a Gaussian filter to a grayscale image

Figure 5 shows that a 9 x 9 Gaussian filter does not produce artifacts when applied to a grayscale image. The filter retains more detail than a 9 x 9 mean filter while still removing some noise.
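
To see the discrete approximation of a Gaussian distribution mentioned above, here is a short sketch (not in the original notebook) using Open-CV's getGaussianKernel function. Passing sigma=0 tells Open-CV to derive sigma from the kernel size, which is also what GaussianBlur does when its last argument is 0:

g = cv2.getGaussianKernel(ksize=9, sigma=0) # a 9 x 1 column of Gaussian weights
kernel_2d = g @ g.T # the outer product gives the 9 x 9 Gaussian kernel
print(kernel_2d.sum()) # the weights sum to approximately 1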

3. Median Filter

The median filter calculates the median of the pixel intensities that surround the center pixel in an n x n kernel. The median then replaces the pixel intensity of the center pixel. The median filter does a better job of removing salt-and-pepper noise than the mean and Gaussian filters. The median filter preserves the edges of an image but it does not deal with speckle noise. The ‘medianBlur’ function from the Open-CV library can be used to implement a median filter.

new_image = cv2.medianBlur(image, figure_size)
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(cv2.cvtColor(image, cv2.COLOR_HSV2RGB)),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(cv2.cvtColor(new_image, cv2.COLOR_HSV2RGB)),plt.title('Median Filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 6: The result of applying a median filter to a color image.

Figure 6 shows that the median filter is able to retain the edges of the image while removing salt-and-pepper noise. Unlike the mean and Gaussian filters, the median filter does not produce artifacts on a color image. The median filter will now be applied to a grayscale image.

new_image = cv2.medianBlur(image2, figure_size)
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(image2, cmap='gray'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(new_image, cmap='gray'),plt.title('Median Filter')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 7: The result of applying the median filter to a grayscale image

Figure 7 shows that a 9 x 9 median filter can remove some of the salt-and-pepper noise while retaining the edges of the image.

Other Filters:

Here are a few more filters that can be used for image pre-processing:

Conservative Filter

The conservative filter is used to remove salt-and-pepper noise. It determines the minimum and maximum intensity within a neighborhood of a pixel. If the intensity of the center pixel is greater than the maximum value, it is replaced by the maximum value. If it is less than the minimum value, then it is replaced by the minimum value. The conservative filter preserves edges but does not remove speckle noise.

The following code can be used to define a conservative filter:

# first a conservative filter for grayscale images will be defined
def conservative_smoothing_gray(data, filter_size):
    temp = []
    indexer = filter_size // 2
    new_image = data.copy()
    nrow, ncol = data.shape
    for i in range(nrow):
        for j in range(ncol):
            # collect the neighborhood of pixel (i, j), clipped at the image border
            for k in range(i-indexer, i+indexer+1):
                for m in range(j-indexer, j+indexer+1):
                    if (k > -1) and (k < nrow):
                        if (m > -1) and (m < ncol):
                            temp.append(data[k,m])
            temp.remove(data[i,j]) # exclude the center pixel itself
            max_value = max(temp)
            min_value = min(temp)
            # clamp the center pixel to the range of its neighbors
            if data[i,j] > max_value:
                new_image[i,j] = max_value
            elif data[i,j] < min_value:
                new_image[i,j] = min_value
            temp = []
    return new_image.copy()

Now the conservative filter can be applied to a grayscale image:

new_image = conservative_smoothing_gray(image2, 5)
plt.figure(figsize=(11,6))
plt.subplot(121), plt.imshow(image2, cmap='gray'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(new_image, cmap='gray'),plt.title('Conservative Smoothing')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 9: The result of applying the conservative smoothing filter to a grayscale image

Figure 9 shows that the conservative smoothing filter was able to remove some salt-and-pepper noise. It also suggests that the filter is not able to remove as much salt-and-pepper noise as a median filter (although it does preserve more detail).

Laplacian Filter

The Laplacian of an image highlights the areas of rapid changes in intensity and can thus be used for edge detection. If we let I(x,y) represent the intensities of an image then the Laplacian of the image is given by the following formula:

∇²I = ∂²I/∂x² + ∂²I/∂y²

The discrete approximation of the Laplacian at a specific pixel can be determined by taking a weighted sum of the pixel intensities in a small neighborhood of the pixel. Figure 10 shows two kernels which represent two different ways of approximating the Laplacian.

Figure 10: Two kernels used to approximate the Laplacian
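
As an illustration (not part of the original notebook), the two common discrete Laplacian kernels, presumably the ones shown in Figure 10, can be applied directly with filter2D; image2 is the grayscale image defined earlier:

laplacian_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=np.float32) # 4-neighbor approximation
laplacian_8 = np.array([[1,  1, 1],
                        [1, -8, 1],
                        [1,  1, 1]], dtype=np.float32) # 8-neighbor approximation
edges_4 = cv2.filter2D(image2, cv2.CV_64F, laplacian_4)
edges_8 = cv2.filter2D(image2, cv2.CV_64F, laplacian_8)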

Since the Laplacian filter detects the edges of an image it can be used along with a Gaussian filter in order to first remove speckle noise and then to highlight the edges of an image. This method is referred to as Laplacian of Gaussian filtering. The ‘Laplacian’ function from the Open-CV library can be used to find the Laplacian of an image.

new_image = cv2.Laplacian(image2, cv2.CV_64F)
plt.figure(figsize=(11,6))
plt.subplot(131), plt.imshow(image2, cmap='gray'),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(132), plt.imshow(new_image, cmap='gray'),plt.title('Laplacian')
plt.xticks([]), plt.yticks([])
plt.subplot(133), plt.imshow(image2 + new_image, cmap='gray'),plt.title('Resulting image')
plt.xticks([]), plt.yticks([])
plt.show()
Figure 11: The result of adding the Laplacian of an image to the original image

Figure 11 shows that while adding the Laplacian of an image to the original image may enhance the edges, some of the noise is also enhanced.
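
For completeness, here is a hedged sketch (not in the original notebook) of the Laplacian of Gaussian approach described above: the image is smoothed first so that less noise is amplified by the second derivative.

smoothed = cv2.GaussianBlur(image2, (9, 9), 0) # remove speckle noise first
log_image = cv2.Laplacian(smoothed, cv2.CV_64F) # then highlight the edges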

Frequency Filter

When applying frequency filters to an image, it is important to first convert the image to its frequency domain representation. The Fourier transform (which decomposes a function into its sine and cosine components) can be applied to an image in order to obtain this representation. The reason we are interested in the frequency domain is that it is less expensive to apply frequency filters there than to apply the equivalent filters in the spatial domain: filtering becomes an element-wise multiplication rather than a convolution, and each pixel in the frequency domain representation corresponds to a frequency rather than a location in the image.

Low pass filters and high pass filters are both frequency filters. A low pass filter preserves the lowest frequencies (those below a threshold), which means it blurs the edges and removes speckle noise from the image in the spatial domain. A high pass filter preserves the high frequencies, which means it preserves edges. The ‘dft’ function determines the discrete Fourier transform of an image. For an N x N image the two-dimensional discrete Fourier transform is given by:

F(u, v) = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x, y) e^{−i 2π (ux + vy) / N}

where f is the image value in the spatial domain and F is its frequency domain representation. The following is the formula for the inverse discrete Fourier transform (which converts an image from the frequency domain back to the spatial domain):

f(x, y) = (1/N²) Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} F(u, v) e^{i 2π (ux + vy) / N}

Once a frequency filter is applied to an image, the inverse Fourier transform can be used to convert the image back to the spatial domain. Now the Python implementation of the low pass filter will be given:

dft = cv2.dft(np.float32(image2), flags=cv2.DFT_COMPLEX_OUTPUT)
# shift the zero-frequency component to the center of the spectrum
dft_shift = np.fft.fftshift(dft)
# magnitude spectrum of the image in the Fourier domain
magnitude_spectrum = 20*np.log(cv2.magnitude(dft_shift[:,:,0],dft_shift[:,:,1]))
# plot both images
plt.figure(figsize=(11,6))
plt.subplot(121),plt.imshow(image2, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Magnitude Spectrum'), plt.xticks([]), plt.yticks([])
plt.show()
Figure 12: An image’s spatial domain and frequency domain representations

rows, cols = image2.shape
crow, ccol = rows//2, cols//2
# create a mask first, center square is 1, remaining all zeros
mask = np.zeros((rows, cols, 2), np.uint8)
mask[crow-30:crow+30, ccol-30:ccol+30] = 1
# apply mask and inverse DFT
fshift = dft_shift*mask
f_ishift = np.fft.ifftshift(fshift)
img_back = cv2.idft(f_ishift)
img_back = cv2.magnitude(img_back[:,:,0], img_back[:,:,1])
# plot both images
plt.figure(figsize=(11,6))
plt.subplot(121),plt.imshow(image2, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(img_back, cmap = 'gray')
plt.title('Low Pass Filter'), plt.xticks([]), plt.yticks([])
plt.show()
Figure 13: The result of applying a low pass filter to an image.

Figure 13 shows that a decent amount of detail was lost; however, some of the speckle noise was removed.
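
The complementary high pass filter is not implemented in the article's notebook, but a sketch follows the same pattern: the mask is inverted so that the low frequencies around the center are removed and only the high frequencies (the edges) survive. It reuses dft_shift, rows, cols, crow and ccol from the low pass filter code above:

mask_hp = np.ones((rows, cols, 2), np.uint8)
mask_hp[crow-30:crow+30, ccol-30:ccol+30] = 0 # zero out the low frequencies
fshift_hp = dft_shift * mask_hp
img_hp = cv2.idft(np.fft.ifftshift(fshift_hp))
img_hp = cv2.magnitude(img_hp[:,:,0], img_hp[:,:,1])
plt.figure(figsize=(11,6))
plt.subplot(121),plt.imshow(image2, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(img_hp, cmap = 'gray')
plt.title('High Pass Filter'), plt.xticks([]), plt.yticks([])
plt.show()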

Crimmins Speckle Removal

The Crimmins complementary culling algorithm is used to remove speckle noise and smooth the edges. It also reduces the intensity of salt-and-pepper noise. The algorithm compares the intensity of a pixel in an image with the intensities of its 8 neighbors. The algorithm considers 4 sets of neighbors (N-S, E-W, NW-SE, NE-SW). Let a, b, c be three consecutive pixels along one of these directions (for example E-W). Then the algorithm is (a Python sketch of one sub-step is given after the list):

  1. For each iteration:
    a) Dark pixel adjustment: For each of the four directions
    1) Process whole image with: if a ≥ b + 2 then b = b + 1
    2) Process whole image with: if a > b and b ≤ c then b = b + 1
    3) Process whole image with: if c > b and b ≤ a then b = b + 1
    4) Process whole image with: if c ≥ b + 2 then b = b + 1
    b) Light pixel adjustment: For each of the four directions
    1) Process whole image with: if a ≤ b − 2 then b = b − 1
    2) Process whole image with: if a < b and b ≥ c then b = b − 1
    3) Process whole image with: if c < b and b ≥ a then b = b − 1
    4) Process whole image with: if c ≤ b − 2 then b = b − 1
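
As an illustration only, here is a minimal sketch of one of the thirty-two sub-steps above (dark pixel adjustment, rule 1, applied along the E-W direction). It assumes a grayscale uint8 image and is not the notebook's implementation:

def crimmins_dark_step_east_west(img):
    # rule 1: if the western neighbor a is at least b + 2, brighten b by 1
    out = img.astype(np.int16) # work in int16 to avoid uint8 overflow in the comparison
    a = out[:, :-1] # western neighbor of each pixel from column 1 onward
    b = out[:, 1:]
    b[a >= b + 2] += 1 # process the whole image with rule 1
    return np.clip(out, 0, 255).astype(np.uint8)

The full algorithm applies all four rules in all four directions, for both the dark and light pixel adjustments, on every iteration.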

The Python implementation of the complementary culling algorithm can be found here: https://github.com/m4nv1r/medium_articles/blob/master/Image_Filters_in_Python.ipynb

Figure 14 shows the results of applying the Crimmins Speckle Removal filter to an image. Some of the speckle noise was removed, but some of the edges were blurred.

Figure 14: The result of applying the Crimmins Speckle Removal filter

Unsharp Filter

The Unsharp filter can be used to enhance the edges of an image. The ImageFilter.UnsharpMask function from the PIL package applies an unsharp filter to an image (the image first needs to be converted to a PIL Image object). The ImageFilter.UnsharpMask function has three parameters. The ‘radius’ parameter specifies how many neighboring pixels around edges get affected. The ‘percent’ parameter specifies how much darker or lighter the edges become. The third parameter, ‘threshold’, defines how far apart adjacent tonal values have to be before the filter does anything.

image = Image.fromarray(image.astype('uint8')) # convert the array to a PIL Image object
new_image = image.filter(ImageFilter.UnsharpMask(radius=2, percent=150)) # apply the unsharp mask
plt.subplot(121),plt.imshow(image, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(new_image, cmap = 'gray')
plt.title('Unsharp Filter'), plt.xticks([]), plt.yticks([])
plt.show()
Figure 15: The result of applying the Unsharp filter

Figure 15 shows the results of an Unsharp filter. While the edges of the image were enhanced, some of the noise was also enhanced.

Conclusion

There is always a trade-off between removing noise and preserving the edges of an image. In order to remove the speckle noise in an image, a blurring filter needs to be applied, which in turn blurs the edges of the image. If you want to retain the edges of an image, the only noise that you can remove is the salt-and-pepper noise. A Jupyter notebook with all the code used for this article can be found here: https://github.com/m4nv1r/medium_articles/blob/master/Image_Filters_in_Python.ipynb
