Combining satellite imagery and machine learning to predict poverty

Jorge Lopez
Towards Data Science
3 min read · May 15, 2019


This is an under-five-minute review of the paper of the same name by Neal Jean et al. A video version of this article is available at https://youtu.be/bW_-I2qYmEQ .

Poverty estimation in the developing world influences how governments allocate limited resources, create policies, and conduct research.

In their paper, Neal Jean et al. claim to have developed a method for detecting and predicting poverty by combining machine learning with satellite imagery.

How can we measure levels of economic activity across vast geographical areas? One possibility is to look at nighttime luminosity: the intensity recorded in satellite nightlight imagery of these areas correlates with their levels of economic activity.
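To make that idea concrete, here is a minimal sketch (with made-up numbers, not data from the paper) of how one could check such a correlation: compare mean nightlight intensity against surveyed consumption for a handful of clusters.

```python
# Hypothetical per-cluster values: mean nightlight radiance and mean daily
# consumption expenditure (USD) from a household survey. Numbers are made up.
import numpy as np
from scipy.stats import pearsonr

nightlight_intensity = np.array([0.1, 0.3, 0.8, 2.5, 6.0, 14.0, 30.0, 52.0])
consumption_per_day = np.array([1.2, 1.4, 1.6, 2.1, 3.0, 4.2, 6.5, 8.0])

r, p_value = pearsonr(nightlight_intensity, consumption_per_day)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")

# Note how the nightlight values barely vary at the low end: that flat region
# near the poverty line is exactly what the authors point out in Figs. 1-2.
```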

However, the authors observed that the nightlights method on its own is unable to detect differences in economic activity among regions that fall below the international poverty line. See Figs. 1 and 2.

Figure 1
Figure 2

Governments commonly depend on surveys to gather economic measurements and guide their actions, but conducting these surveys is neither simple nor cheap.

This is where the authors' proposed method comes in. They propose a machine learning approach based on transfer learning that promises to overcome this drawback and offers greater prediction accuracy than luminosity intensity alone.

They claim their method for predicting poverty is accurate, inexpensive, and scalable. How do they achieve such predictive power? By combining survey and satellite data, training a convolutional neural network (CNN) to discern features in daytime satellite images.

In their study, they consider four developing African countries, and the survey data they build on is gathered at the household level.

What does their method consist of? It involves three phases (see Fig. 3):

In Phase 1, they train the CNN to learn features from daytime satellite imagery. These features are evidence of economic activity (or the lack of it), such as urban areas, non-urban areas, water, and roads.

Phase 2 takes advantage of the knowledge gained in Phase 1: the CNN is adapted and trained to estimate nightlight intensities.
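As an illustration of Phases 1 and 2, here is a rough transfer-learning sketch. It is an assumption on my part, not the authors' code: I use an ImageNet-pretrained ResNet-18 from torchvision as a stand-in for their feature-learning CNN, and I bin nightlights into three intensity classes.

```python
# Sketch of the transfer-learning step (Phases 1-2). Assumptions: an
# ImageNet-pretrained ResNet-18 stands in for the authors' CNN, and the
# nightlight target is binned into three classes (low / medium / high).
import torch
import torch.nn as nn
from torchvision import models

NUM_NIGHTLIGHT_CLASSES = 3  # assumed binning of nightlight intensity

# Phase 1 (stand-in): start from a CNN that already encodes generic visual features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Phase 2: swap the classifier head so the network predicts nightlight
# intensity bins from daytime satellite tiles, then fine-tune.
model.fc = nn.Linear(model.fc.in_features, NUM_NIGHTLIGHT_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(daytime_batch: torch.Tensor, nightlight_labels: torch.Tensor) -> float:
    """One optimisation step on a batch of daytime tiles (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(daytime_batch), nightlight_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```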

Phase 3 combines the economic survey data with the image features the CNN extracts from the daytime imagery to train regression models that estimate the poverty indicators under consideration.

Figure 3
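Below is a minimal sketch of that third phase under the same assumptions as before (a ResNet-18 stand-in, synthetic tiles and survey values, and scikit-learn's RidgeCV as the regression model); it is not the authors' released code.

```python
# Sketch of Phase 3: treat the (ideally nightlight-tuned) CNN as a fixed
# feature extractor and fit a regularised regression from image features
# to surveyed consumption. All data here is synthetic.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import RidgeCV

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # stand-in for the fine-tuned CNN
cnn.fc = nn.Identity()  # keep the 512-d feature vector, drop the classifier head
cnn.eval()

def extract_features(daytime_tiles: torch.Tensor) -> np.ndarray:
    """Daytime tiles (N, 3, 224, 224) -> CNN image features (N, 512)."""
    with torch.no_grad():
        return cnn(daytime_tiles).numpy()

# Hypothetical inputs: one tile per surveyed cluster and the matching
# cluster-level consumption expenditure from the household survey.
tiles = torch.rand(40, 3, 224, 224)
consumption = np.random.uniform(1.0, 8.0, size=40)

features = extract_features(tiles)
regressor = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(features, consumption)
print("In-sample R^2:", regressor.score(features, consumption))
```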

The authors claim that their "transfer learning model" can predict their poverty indicators with high accuracy. What are those indicators? They handle two: consumption expenditure and asset wealth, for which they report explaining up to 55% and 59% of the variability, respectively. See Figs. 4 and 5.

Figure 4
Figure 5
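For readers unfamiliar with "variability explained", here is a tiny self-contained illustration (with assumed numbers, not the paper's data) of the underlying metric, R²: the fraction of variance in the surveyed values captured by the model's predictions.

```python
# R^2 (coefficient of determination) on made-up numbers: how much of the
# variance in surveyed consumption is captured by the model's predictions.
import numpy as np
from sklearn.metrics import r2_score

surveyed = np.array([1.5, 2.0, 2.8, 3.5, 4.1, 5.0, 6.2, 7.5])   # hypothetical survey values
predicted = np.array([2.8, 1.6, 4.0, 2.3, 5.4, 4.1, 4.8, 6.3])  # hypothetical model output

print(f"R^2 = {r2_score(surveyed, predicted):.2f}")
```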

In my opinion, this paper presents an improved, accurate, and affordable technique that governments and organizations around the world can use to track and target poverty in developing countries and take mitigating action. It demonstrates how powerful machine learning can be in helping to improve people's living conditions.

My website is: https://www.georgelopez-portfolio.com/

All figures are by Neal Jean et al., adapted by G. Lopez.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Dear reader:

What other humanitarian applications of AI/ML do you believe could be implemented?

You can respond by leaving a comment, and I will gladly read it. Thanks.


