
When the Feed Becomes the Front Line

Laurie Robinson - April 15, 2026

A new generation of AI tools is changing how analysts and aid workers make sense of war in real time—if they can learn to trust what the models tell them

Agents at a joint government surveillance and military operations control center. Photo licensed by Adobe Stock

When the port in Beirut exploded in August 2020, the blast was captured from what seemed like every angle. Within hours, thousands of videos and photos flooded social media—shot from balconies, dashcams, phones held out of windows across the city. For Cody Buntain, an assistant professor at the University of Maryland College of Information, that torrent of imagery wasn’t just documentation. It was data.

“We saw so many photos of the aftermath from so many different perspectives,” he says, “that one could be very certain about the impact of the event.” Volume and variety, cross-referenced against itself, produced something approaching ground truth.

Buntain has spent years asking what becomes possible when you apply that same logic to war. He studies how social media behaves during crises—natural disasters, political uprisings, full-scale conflicts—and builds AI tools to make sense of it at a scale no human analyst team could match.

As the war between the U.S. and Israel and Iran dominates global headlines and the information environment around modern conflict grows harder to navigate, that work has never felt more urgent. His tools are designed for the analysts, journalists, and humanitarian workers who need to understand what’s happening on the ground when getting there isn’t possible.

But building the tools, it turns out, is the easier part.

Reading the Visual Record of War

The premise is straightforward. Conflicts are now documented in real time—by soldiers, civilians, journalists, and propagandists alike. That documentation creates an unprecedented open-source intelligence opportunity, and a data problem no human team could handle at scale. Buntain’s approach is to train object-detection models to scan massive image collections, flagging those that contain specific weapon systems or insignia. His research shows that the timing of when weapon imagery appears online can correlate with offline conflict data—suggesting that the digital record can, under the right conditions, reflect physical reality.
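The general shape of that pipeline can be sketched in a few lines of code. The example below is purely illustrative, not Buntain’s system: it assumes a detector already fine-tuned on weapon or insignia classes (a stock torchvision COCO model and two placeholder class names stand in here), flags images whose detections clear a confidence threshold, and correlates the daily count of flagged images with a hypothetical series of offline conflict events.

```python
# Illustrative sketch only, not Buntain's pipeline. Assumes a detector already
# fine-tuned on weapon or insignia classes; a stock COCO model stands in here.
from collections import Counter
from datetime import date

import torch
from PIL import Image
from scipy.stats import pearsonr
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
class_names = weights.meta["categories"]

TARGET_CLASSES = {"truck", "airplane"}  # hypothetical stand-ins for weapon classes
SCORE_THRESHOLD = 0.8


def flag_image(path: str) -> bool:
    """Return True if the image contains any target class above the threshold."""
    img = preprocess(Image.open(path).convert("RGB"))
    with torch.no_grad():
        detections = model([img])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if float(score) >= SCORE_THRESHOLD and class_names[int(label)] in TARGET_CLASSES:
            return True
    return False


def correlate_with_offline_events(posts: list[tuple[str, date]],
                                  events_per_day: dict[date, int]) -> float:
    """Correlate daily counts of flagged images with an offline conflict-event series."""
    flagged_per_day = Counter(day for path, day in posts if flag_image(path))
    days = sorted(events_per_day)
    online = [flagged_per_day.get(d, 0) for d in days]
    offline = [events_per_day[d] for d in days]
    r, _p = pearsonr(online, offline)
    return r
```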

Knowing that the signal exists, though, raises an uncomfortable question: should anyone act on it? “Especially when lives are on the line,” he says, “where is the root of trust here?”

Models can identify weapons in an image or estimate the severity of a strike, but the real question isn’t whether they perform well on a benchmark. It’s whether a human working alongside one makes better decisions than a human working without it.

That means studying the human as much as the system: how long a task takes without AI assistance, how that changes with a model in the loop, and how trust forms—and sometimes misfires—over time.
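As an illustration of what that kind of evaluation might measure, not a description of any actual study, the sketch below compares how long analysts take on the same tasks with and without the model, then estimates how often they accept the model’s suggestion when it is wrong. All of the numbers are hypothetical.

```python
# Illustrative sketch of a human-AI evaluation, not a description of any real study.
# Assumes each analyst completed the same tasks once unaided and once with the model.
from scipy.stats import ttest_rel

# Hypothetical paired data: seconds to finish the same task without and with the model.
unaided_seconds = [310, 295, 420, 380, 355, 290, 400, 365]
assisted_seconds = [240, 260, 300, 310, 280, 250, 320, 290]

# Paired t-test: did assistance change completion time on the same tasks?
t_stat, p_value = ttest_rel(unaided_seconds, assisted_seconds)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")

# Trust can misfire: measure how often analysts accept the model's suggestion
# when it is actually wrong (over-reliance on the system).
decisions = [  # (model_was_correct, analyst_accepted_model), hypothetical outcomes
    (True, True), (True, True), (False, True), (True, False),
    (False, False), (True, True), (False, True), (True, True),
]
accepted_when_wrong = [accepted for correct, accepted in decisions if not correct]
over_reliance = sum(accepted_when_wrong) / len(accepted_when_wrong)
print(f"accepted the model when it was wrong: {over_reliance:.0%}")
```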

That problem doesn’t get easier when the underlying data is in question.

Fake Images, Real Stakes

The spread of AI-generated imagery has made an already chaotic information environment harder to navigate. But Buntain notes that misinformation itself isn’t new. In past crises, the same doctored images resurfaced again and again. What’s changed is the scale, the speed, and the erosion of a signal researchers once relied on. Early social media had a built-in correction mechanism: false claims attracted visible refutation, and that refutation was itself meaningful data. “Now you can have an audience of any number of people from anywhere in the world telling you something didn’t happen for ideological or monetary reasons,” he says. The correction still exists. It just can’t be trusted the way it once was.

Instead, Buntain’s work looks beyond the content to the context surrounding it—who posted it, how old the account is, what it has shared before, how the information spread. “These things don’t emerge in a vacuum,” he says.
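The signals such a context check might draw on can be sketched roughly as follows. The feature names, thresholds, and toy rule below are hypothetical, chosen only to make the idea concrete, not a description of Buntain’s models.

```python
# Rough illustration of account-context features; names, thresholds, and the toy
# rule below are hypothetical, not a description of any deployed model.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AccountContext:
    created_at: datetime              # when the account was registered (timezone-aware)
    prior_posts: int                  # how much it has shared before
    followers: int
    reshared_by_unique_accounts: int  # how widely this item spread


def context_features(acct: AccountContext) -> dict[str, float]:
    """Turn the context around a post into simple numeric signals."""
    age_days = max((datetime.now(timezone.utc) - acct.created_at).days, 1)
    return {
        "account_age_days": float(age_days),
        "posts_per_day": acct.prior_posts / age_days,
        "followers": float(acct.followers),
        "spread_breadth": float(acct.reshared_by_unique_accounts),
    }


def looks_suspicious(features: dict[str, float]) -> bool:
    """Toy rule: very new, very active, widely reshared accounts get flagged.
    A real system would feed features like these into a trained classifier."""
    return (features["account_age_days"] < 30
            and features["posts_per_day"] > 50
            and features["spread_breadth"] > 1000)
```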

Context has limits too, though—especially when the data itself is thin.

When Conflicts Go Dark

Not all conflicts produce a dense digital record, but Buntain pushes back on the assumption that restricted environments mean no data at all. Armed groups and political actors tend to maintain an online presence even under suppression—for propaganda, for coordination, for international reach. And absence itself can be meaningful. “When communities that are usually active are now quiet,” he says, “identifying those black holes is itself important.”
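Flagging that kind of silence is itself a tractable signal to compute. The sketch below, using made-up numbers, compares a community’s daily posting volume against its own trailing baseline and marks days that fall far below it.

```python
# Illustrative only: flag days when a normally active community goes unusually quiet.
from statistics import mean, stdev


def quiet_days(daily_posts: list[int], window: int = 14, z_cutoff: float = -2.0) -> list[int]:
    """Return indices of days whose volume falls far below the trailing baseline."""
    flagged = []
    for i in range(window, len(daily_posts)):
        baseline = daily_posts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (daily_posts[i] - mu) / sigma
        if z <= z_cutoff:
            flagged.append(i)
    return flagged


# Hypothetical series: steady activity, then a sudden drop toward silence.
volume = [120, 115, 130, 125, 118, 122, 119, 121, 117, 128,
          124, 120, 123, 126, 125, 119, 8, 5, 3, 4]
print(quiet_days(volume))  # days on which the community "goes dark"
```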

The deeper problem is bias. In lower-resource settings, the content that does exist tends to come from those with the most access—typically the most powerful or organized actors. The data isn’t missing so much as skewed. To work around that, Buntain partners with organizations like the International Red Cross and Red Crescent, whose staff on the ground can help contextualize what the data can and can’t say.

Even with the best data and the most trusted models, though, none of it matters if the tools aren’t accessible.

Who Gets the Tools

The most unsettled question in Buntain’s research isn’t technical. It’s structural. If AI models can help interpret conflict imagery, who actually has access to them? Running large-scale AI systems requires computational resources that aren’t evenly distributed between countries or even within them. Some communities are able to build and deploy advanced tools, while others lack the infrastructure to use them at all.

Buntain expects costs to come down, and points to smaller, more efficient models as part of the solution. But access isn’t just a technical problem; it’s a question of investment, capacity, and who gets to shape the tools in the first place.

What’s clear is that the next phase of this work won’t be defined by model performance alone. It will hinge on whether these systems are trustworthy and whether they’re available to the people who need them most.