UMD Co-leads $750K NSF, Amazon Project to Tackle AI Bias in Mapping

Rachael Grahame | Maryland Today - July 20, 2022

INFO College's Dr. Sergii Skakun will play a key role in building the system that could enable fairer, safer decisions on resource distribution

A farmer tends crops in Uganda, an area where bias in AI algorithms monitoring global crop health might misinterpret data due to a lack of knowledge of local growing methods. Photo by Nuno Almeida, Dreamstime.com

The accelerating demand for maps to guide product deliveries, respond to natural disasters or monitor food crops globally can no longer be met manually, requiring automated systems that use artificial intelligence (AI) to handle vast quantities of data from satellites and other sensors.

But as disturbing episodes with chatbots and facial recognition systems that exhibited racism or other forms of bias illustrate, AI itself can take a wrong turn.

Now, supported by a $750,000 Fairness in Artificial Intelligence award—a program co-sponsored by the National Science Foundation and Amazon—University of Maryland researchers Yiqun Xie and Sergii Skakun are working with the University of Pittsburgh on a three-year research project to reduce AI bias in mapping algorithms.

A satellite image of California farms (top) contrasts with an AI-generated crop map (bottom) that humans need to review and label to keep AI algorithms free from bias. Two boxes on the left show areas where labels appear in line with the satellite image; boxes on the right show areas where there are mismatches. (Visual by Yiqun Xie. Images courtesy of ArcGIS World Imagery basemap, top, and the USDA Cropland Data Layer, bottom.)

The AI algorithms used to create crop maps, flood maps, road network maps and more need ground-truth labels to learn to make predictions. Such labels are typically expensive and time-consuming to collect, often requiring an expert to go into the field to determine whether a pixel in a satellite image should be labeled “corn” or “soybean.”
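To make the idea concrete, here is a minimal sketch of that kind of supervised pixel classification, using synthetic spectral features and a generic random forest from scikit-learn. The feature count, labels and model choice are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch of supervised crop-type classification from labeled pixels.
# Features, labels, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Pretend each pixel has 6 spectral band values; labels come from field visits.
X = rng.normal(size=(1000, 6))                   # per-pixel spectral features
y = rng.choice(["corn", "soybean"], size=1000)   # expert-provided ground truth

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # learn from the labeled pixels

print("overall accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Once trained on such labels, the model can predict a class for every pixel in a scene, which is what makes large-scale automated mapping possible and also where errors, and bias, can creep in.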

AI bias particularly affects people in regions with high poverty rates, such as sub-Saharan Africa, where a lack of knowledge of local crops, especially those grown by smallholders, leads algorithms better suited to the agricultural dynamics of high-income countries to misclassify farmland as barren.

“Existing AI algorithms have no control over the prediction quality at different locations. They can, and tend to, compromise the accuracy at certain locations while chasing a better result at other places, which injects bias into the results,” explained Xie, an assistant professor of geospatial information science.
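One way to see what “no control over the prediction quality at different locations” means is to score a model separately per region rather than with a single global accuracy number. The sketch below does this on synthetic data; the region names and the simulated error pattern are assumptions made purely for illustration.

```python
# Minimal sketch of measuring location-based bias: compare accuracy across
# regions instead of reporting one global number. Data are synthetic.
import numpy as np

def per_region_accuracy(y_true, y_pred, regions):
    """Return accuracy computed separately for each region label."""
    scores = {}
    for r in np.unique(regions):
        mask = regions == r
        scores[r] = float(np.mean(y_true[mask] == y_pred[mask]))
    return scores

rng = np.random.default_rng(1)
regions = rng.choice(["region_A", "region_B"], size=500)
y_true = rng.integers(0, 2, size=500)

# Simulate a model that is systematically worse in region_B.
flip = (regions == "region_B") & (rng.random(500) < 0.3)
y_pred = np.where(flip, 1 - y_true, y_true)

scores = per_region_accuracy(y_true, y_pred, regions)
print(scores)
print("accuracy gap:", max(scores.values()) - min(scores.values()))
```

A large gap between the best- and worst-served regions is the kind of spatial unfairness the project aims to control, even when the overall accuracy looks acceptable.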

Beyond other consequences of AI bias, like incorrectly assessing how severely certain areas are flooded after a hurricane or other natural disaster, or holding up medical supplies and other deliveries during crises, crop failures in these underdeveloped regions could go unnoticed in satellite data.

“Machine learning methods have potential to advance and improve geoinformation products for agriculture monitoring; however, biases in maps might lead to biases in crop production estimates,” said Skakun, an assistant professor in the Department of Geographical Sciences with a joint appointment in the College of Information Studies. Skakun also works with the UMD-led NASA Harvest team, whose mission is to improve food security by accelerating the adoption of satellite-based agricultural monitoring.

Having confirmed the existence of location-based AI bias and measured its negative societal impact, the researchers are now developing a system through which experts, including those with NASA Harvest, can select the AI algorithm best suited to overcoming the bias often associated with the place they are studying and the type of map they are trying to create.

They expect to complete their “Advancing Deep Learning Towards Spatial Fairness” project by the summer of 2025—addressing the problem of AI bias in mapping, and perhaps other issues.

“Currently, the fairness issue is a roadblock in many important areas where people want to deploy AI, so in some sense, this may help accelerate the use of AI in different businesses too,” Xie concluded.

Originally published July 20, 2022, in the UMD publication Maryland Today.