Denoising radar satellite images using deep learning in Python

How to tackle inherent interferences of radar satellites

Pierre Blanchard
Towards Data Science


A Sentinel satellite in orbit, image by Rama, Wikimedia Commons

By Emanuele Dalsasso (researcher at CNAM and Telecom Paris), Youcef Kemiche (Hi! Paris machine learning engineer), Pierre Blanchard (Hi! Paris machine learning engineer)

When people think about satellite imagery, they usually think of pictures showing massive hurricanes above continents. These images are captured by optical sensors and are widely used: by scientists to measure and anticipate forest fires, natural hazards and other consequences of global warming; by corporations to provide their customers with navigation features; by the military to monitor and track enemy troops; or by urban planning institutions to measure habitat fragmentation or light pollution.

Overall, such images tend to have an excellent level of detail, but they face (at least) two major obstacles when it comes to observing the Earth's surface: night and weather.

A special kind of sensor can help scientists see in the dark and through clouds and rain. We are talking about Synthetic Aperture Radars (SARs). A SAR system can be carried onboard satellites, aircraft or even drones, allowing it to acquire data at both a global and a local scale. While optical systems rely on sunlight (i.e. the sensor is passive), radars send an electromagnetic wave and measure the component backscattered by the objects on the ground (i.e. the sensor is active).

This way, SAR sensors can acquire data at any time of the day and in any meteorological conditions, as the wavelength of the transmitted wave allows it to penetrate clouds. They do, however, suffer from an intrinsic issue: speckle. In this article we will see what speckle fluctuations are and how, with the help of our package deepdespeckling, we can significantly improve the interpretability of radar images.

Why would one use radar satellite images?

Photo by the author

Common optical satellites carry various kinds of digital still and video cameras to capture images of the Earth. Synthetic aperture radar (SAR) satellites, however, send radio signals towards their targets. The signal bounces off the soil or the ocean and returns to the emitter after a delay that determines the distance, and thus the topography observed. The antenna position also determines the azimuth and the altitude. By their very nature, radio waves cannot produce a colored image. However, they have much-appreciated properties that make them essential to the scientific community (and not only!).

Indeed, radio waves are not hindered by the weather or the day-night cycle. They make it possible to capture images in pitch-dark night or through a massive cyclone. And that is not all, as they can also:

  • Detect humidity levels
  • See through volcanic smoke
  • Measure tree heights, and thus a forest's capacity to absorb CO2
Example of the complementarity between optical (Sentinel-2) and SAR (Sentinel-1) images: in this zone of the Amazon forest in Brazil, clouds occlude some areas in the optical image that are accessible in the SAR image: we can see a street in the left part of the image and identify forested/deforested areas on the right. Images from Emanuele Dalsasso's courses.

One major drawback of SAR images: Speckle.

Although radar satellites have many advantages, they inherently face one major drawback: speckle. Speckle is a granular interference, due to the scattering properties of the emitted radio waves, that degrades the quality of images and therefore their interpretability to the human eye.

The speckle phenomenon explained by SAREDU researchers, License : https://creativecommons.org/licenses/by-sa/4.0/

Speckle looks like a grainy "salt and pepper" texture on radar images (see the image below). It is caused by the random constructive and destructive interference of the multiple scattering returns occurring within each resolution cell.

Example of the denoiser (noisy on the left side, denoised on the right side) on a cropped area of Uluru rock in Australia : https://goo.gl/maps/jELs19EDypMUvjf3A

Though speckle is inherent to radar images, common methods exist for despeckling them, such as multi-looking or adaptive filters, but they usually degrade the level of detail.

The method presented here, proposed by Emanuele Dalsasso, Loïc Denis and Florence Tupin, developed in PyTorch and packaged by Hi! Paris engineers Youcef Kemiche and Pierre Blanchard, relies on separating the real and imaginary parts of an image and processing them independently. It reduces the speckle while preserving the level of detail.

DeepDespeckling, a Python package for tackling this issue

How does it work?

So far, most approaches have considered a supervised training strategy: the networks are trained to produce outputs as close as possible to speckle-free reference images. Speckle-free images are generally not available, which requires resorting to natural or optical images or the selection of stable areas in long time series to circumvent the lack of ground truth. Self-supervision, on the other hand, avoids the use of speckle-free images.

We introduce a self-supervised strategy based on the separation of the real and imaginary parts of single-look complex SAR images, called MERLIN (coMplex sElf-supeRvised despeckLINg), and show that it offers a straightforward way to train all kinds of deep despeckling networks. Networks trained with MERLIN take into account the spatial correlations due to the SAR transfer function specific to a given sensor and imaging mode.

By requiring only a single image, and possibly exploiting large archives, MERLIN opens the door to hassle-free as well as large-scale training of despeckling networks.

The principle of MERLIN: during step A, the despeckling network is trained to estimate the reflectivity based solely on the real part. The loss function evaluates the likelihood of the predictions according to the imaginary part. Once the network is trained, it can be used as shown in B: the real and imaginary parts are processed separately using networks with the same weights. The outputs are combined to form the final estimation. Image by author Emanuele Dalsasso. Source.
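To make the idea concrete, here is a schematic PyTorch sketch of the self-supervised loss described above. It is not the package's exact implementation: the function and variable names (merlin_nll, real_part, imaginary_part, network) are illustrative, and the network is assumed to predict the log-reflectivity from the real part alone.

```python
import torch

def merlin_nll(predicted_log_reflectivity, imaginary_part):
    # Negative log-likelihood of the imaginary part given the reflectivity
    # predicted from the real part (constant terms dropped).
    # Each component of a complex SAR pixel is assumed ~ N(0, R/2).
    log_r = predicted_log_reflectivity
    return torch.mean(0.5 * log_r + imaginary_part ** 2 * torch.exp(-log_r))

# Schematic training step: the network only ever sees the real part,
# while the imaginary part is used solely to evaluate the loss.
# predicted_log_r = network(real_part)
# loss = merlin_nll(predicted_log_r, imaginary_part)
# loss.backward(); optimizer.step()
```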

For a more detailed explanation, please follow this link to Emanuele Dalsasso, Loïc Denis and Florence Tupin’s work.

The deepdespeckling package arose directly from the researchers' work listed above, with the aim of providing the open-source community with a set of methods to deal with speckle interference for different acquisition modes (SpotLight and Stripmap).

How to install it?
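Assuming the package is published on PyPI under the name deepdespeckling (check the project's repository for up-to-date instructions), installation should be a one-liner:

```python
# Run in a terminal, not in Python:
# pip install deepdespeckling
```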

How to use it?

Please note that two independent networks have been trained on two image modalities:

TerraSAR-X Stripmap mode and TerraSAR-X HighResolution SpotLight mode.

In the following, examples on HighResolution SpotLight data are illustrated. To apply the available functions to Stripmap data, please replace "deepdespeckling.merlin.test.spotlight" with "deepdespeckling.merlin.test.stripmap".
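For instance, the import below selects the SpotLight variant of the functions. The module paths come from the note above; the exact import form is an assumption to be checked against the package documentation.

```python
# HighResolution SpotLight images (used in the examples that follow)
from deepdespeckling.merlin.test.spotlight import despeckle

# For Stripmap images, use the stripmap module instead:
# from deepdespeckling.merlin.test.stripmap import despeckle
```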

The TEST part

Given the quite impressive size of SAR images, we have developed a set of despeckling functions in this package, so that every user can apply despeckling to a CoSAR or NumPy image. Indeed, applying the despeckling function to a big image (commonly thousands of pixels wide and high) is very demanding in computing resources and can take quite a while.

The three functions available to you are the following:

  • despeckle()

This function despeckles a whole image. If the image is large, you should rely on a GPU to reduce the computation time. A usage sketch is shown after the example images below.

full-size noisy image
full-size denoised image
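A minimal usage sketch of despeckle(). The paths are placeholders and the parameter names (image_path, destination_directory) are assumptions; refer to the package documentation for the exact signature.

```python
from deepdespeckling.merlin.test.spotlight import despeckle

# Placeholder paths: point them to your CoSAR/NumPy image and to the
# folder where the despeckled result should be written.
image_path = "path/to/your/image.cos"
destination_directory = "path/to/output/folder"

despeckle(image_path, destination_directory)
```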
  • despeckling_from_crop()

This function lets you select a crop of the image you provide and despeckles only that crop. This is useful if you only need to despeckle part of a bigger image; as a result, computation time is reduced compared to despeckle(). A usage sketch follows the example images below.

Note that this function takes an additional argument, fixed, which can be set to either True or False. If True, the crop will be a fixed 256×256 pixel square. If False, the crop will have whatever size you draw.

The cropping tool in action!
Noisy cropped image on the left side / Denoised cropped image on the right side
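A hedged sketch of how a call to despeckling_from_crop() might look. Apart from the fixed argument mentioned above, the parameter names and paths are assumptions.

```python
from deepdespeckling.merlin.test.spotlight import despeckling_from_crop

image_path = "path/to/your/image.cos"            # placeholder
destination_directory = "path/to/output/folder"  # placeholder

# fixed=True  -> the interactive crop is a fixed 256x256 pixel square
# fixed=False -> the crop takes whatever size you draw
despeckling_from_crop(image_path, destination_directory, fixed=True)
```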
  • despeckling_from_coordinates()

This function applies despeckling to the part of the image described by the coordinates listed in the "coordinates_dictionnary" argument. This is useful if you already know the location of the area you wish to despeckle (see the sketch after the example image below).

Noisy cropped from coordinates image on the left side / Denoised cropped from coordinates image on the right side
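An illustrative call to despeckling_from_coordinates(). The coordinates_dictionnary argument is named in the text above; the dictionary keys used here (x_start, y_start, x_end, y_end) and the other parameter names are assumptions to be checked against the documentation.

```python
from deepdespeckling.merlin.test.spotlight import despeckling_from_coordinates

image_path = "path/to/your/image.cos"            # placeholder
destination_directory = "path/to/output/folder"  # placeholder

# Pixel coordinates of the area to despeckle; the key names are assumed
coordinates_dictionnary = {"x_start": 2000, "y_start": 2000,
                           "x_end": 3000, "y_end": 3000}

despeckling_from_coordinates(image_path, destination_directory,
                             coordinates_dictionnary)
```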

The TRAIN part

This package also offers two ways of training a model, depending on whether you wish to train your own model from scratch and obtain your own weights, or fine-tune a model starting from our pre-trained weights (a schematic sketch follows the list below):

  1. Train your own model from scratch (i.e. set the from_pretrained argument to False and obtain your own weights from the fitting)

  2. Train a model from a pre-trained version (i.e. set the from_pretrained argument to True and use our weights)
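A schematic sketch of the two training modes. Only the from_pretrained argument comes from the text above; the module path, function name and remaining parameters are assumptions, so please check the package documentation before running it.

```python
# Assumed import path and signature -- illustration only
from deepdespeckling.merlin.train import train

training_data_path = "path/to/training/images"      # placeholder
validation_data_path = "path/to/validation/images"  # placeholder

# 1. Train from scratch and obtain your own weights
train(training_data_path, validation_data_path, from_pretrained=False)

# 2. Fine-tune starting from the provided pre-trained weights
train(training_data_path, validation_data_path, from_pretrained=True)
```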

Conclusion

This article is about the denoising of satellite images captured with a radar sensor. These images inherently suffer from a granular interference called speckle. To tackle this issue, a package has been released by Emanuele Dalsasso from Telecom Paris and machine learning engineers from Hi! PARIS. It allows you to greatly improve the interpretability of CoSAR and NumPy images for both Stripmap and SpotLight acquisition modes.

I hope you enjoyed it! Feel free to contact me with questions and feedback!

Contact

To know more about Hi! PARIS and its Engineering Team:

Hi! PARIS

Hi! PARIS Engineering Team
