One of the biggest challenges in helping people living in poverty is locating them. The availability of accurate and reliable information on the location of poor areas is surprisingly lacking for much of the world, especially on the African continent. Aid groups and other international organizations often fill the gaps with door-to-door surveys, but these can be expensive and time-consuming.
In the current issue of Science, Stanford researchers offer an accurate way to identify poverty in areas previously devoid of reliable survey information. The researchers used machine learning – the science of designing computer algorithms that learn from data – to extract information about poverty from high-resolution satellite images. In this case, the researchers built on existing machine learning methods to find poor areas in five African countries.
“We have a limited number of surveys conducted in villages scattered across the African continent, but otherwise we have very little local-level information on poverty,” said study co-author Marshall Burke, assistant professor of Earth System Science at Stanford and a fellow at the Center on Food Security and the Environment. “At the same time, we’re collecting all sorts of other data in those areas – like satellite imagery – all the time.”
The researchers sought to understand whether high-resolution satellite imagery – an unconventional but readily available source of data – could inform estimates of where poor people live. The difficulty was that while standard machine learning approaches work best when they can access large amounts of data, there was little poverty data to begin with.
“There are few places in the world where we can tell the computer with certainty whether the people living there are rich or poor,” said study lead author Neal Jean, a PhD student in computer science at Stanford’s School of Engineering. “This makes it difficult to extract useful information from the huge amount of daytime satellite imagery available.”
Since areas that are brighter at night are usually more developed, the solution was to combine high-resolution daytime images with images of Earth at night. The researchers used the nighttime lights data to identify features in the high-resolution daytime imagery that are correlated with economic development.
“Without being told what to look for, our machine learning algorithm learned to pick out of the imagery many things that are easily recognizable to humans – things like roads, urban areas and farmland,” said Jean. The researchers then used these features of the daytime imagery to predict village-level wealth, as measured in the available survey data.
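The two-step idea described above – learn image features that track economic activity, then regress survey-measured wealth on those features – can be sketched as follows. This is a minimal illustration on synthetic data, not the authors’ actual pipeline: in the real method a convolutional network trained to predict nighttime light intensity supplies the features, whereas here random stand-in features and simulated village wealth are assumed.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Step 1 (stand-in): in the paper, intermediate activations of a network
# trained to predict nighttime lights from daytime images serve as
# features. Here we simply fabricate a feature matrix of that shape.
n_villages, n_features = 500, 10
features = rng.normal(size=(n_villages, n_features))

# Simulated village-level asset wealth: depends on a few features plus
# noise, mimicking the sparse survey data described in the article.
true_weights = np.zeros(n_features)
true_weights[:3] = [1.5, -0.8, 0.6]
wealth = features @ true_weights + rng.normal(scale=0.5, size=n_villages)

# Step 2: regress survey-measured wealth on the image-derived features
# and evaluate predictive skill with cross-validation.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, features, wealth, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

Cross-validated R² is also roughly how the study reports predictive performance, since it measures how well the model generalizes to villages it has not seen.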
They found that this method did a surprisingly good job of predicting the distribution of poverty, outperforming existing approaches. These improved poverty maps could help aid organizations and policymakers distribute funds more efficiently, and enact and evaluate policies more effectively.
“Our paper demonstrates the power of machine learning in this context,” said study co-author Stefano Ermon, adjunct professor of computer science and a fellow, by courtesy, of the Stanford Woods Institute for the Environment. “And because it’s cheap and scalable – requiring only satellite imagery – it could be used to map poverty around the world at very low cost.”
Co-authors of the study, titled “Combining Satellite Imagery and Machine Learning to Predict Poverty,” include Michael Xie of the Department of Computer Science at Stanford, and David Lobell and W. Matthew Davis of Stanford’s School of Earth, Energy & Environmental Sciences and the Center on Food Security and the Environment. For more information, visit the research group’s website at sustain.stanford.edu.