A new grant led by INFO Assistant Professor Cody Buntain boosts the U.S. military’s strategic messaging
Photo licensed by Adobe Stock
Imagine scrolling through your social media feed and coming across a striking image of a warship cutting through the South China Sea, its flag waving against a backdrop of stormy skies. The image is powerful, evoking a sense of national pride and urgency. But what if this image wasn’t created by a photojournalist or a military photographer? What if it was generated by artificial intelligence (AI), tailored to resonate with specific audiences in the Asia-Pacific (APAC) region? This is the reality of modern influence efforts, where visual media—more emotionally engaging and cognitively impactful than text—has become a key tool for shaping perceptions and driving engagement.
The University of Maryland College of Information (INFO) is at the forefront of addressing the challenges and opportunities associated with AI-generated content. Assistant Professor Cody Buntain, principal investigator on a new grant from Parallax Advanced Research, is exploring how machine learning (ML) and AI can be leveraged to enhance the U.S. Air Force and Space Force’s ability to understand and counter visual media-based influence efforts. Funded as part of a U.S. Air Force contract, the project aims to advance the integration of ML and AI techniques into operational environments, ultimately supporting decision-making through better data analysis, human-machine interaction, and real-time situational awareness.
The Growing Risks of Visual Media in Influence Campaigns
Visual media has become a dominant force in online communication, with platforms like TikTok, YouTube Shorts, and Instagram Reels driving its popularity. Unlike text, visuals are more emotionally arousing, more engaging, and activate different cognitive pathways in the brain. As Buntain explains, “Visual media is much more impactful than textual media, and our engagement with it is different. This project is about bringing new understanding to what kind of visual media gains traction among different audiences, particularly in the APAC region.”
The rise of generative AI has further complicated the landscape. Creating visually compelling content is now cheaper and easier than ever, making it a potent tool for influence campaigns. “Recent advances in generative AI models have dramatically lowered barriers to creating new visual media,” Buntain notes. “This amplifies risks, especially for populations that lack access to robust journalistic and fact-checking institutions.”
A Human-Centered Approach to Countering Influence
The project focuses on developing scalable methods to characterize images, video, and social responses in online spaces. By combining computer vision and natural language processing, Buntain aims to identify how visual media is used in influence efforts, how it propagates, and how it evolves. “We’re using state-of-the-art vision-language models to understand what an image contains and then clustering those images into different types,” Buntain says. “For example, we see some audiences engage a lot with visuals of military equipment, while others find images of politicians more compelling.”
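The pipeline Buntain describes, embedding images with a vision-language model and then grouping them into visual "types," can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the project's actual code: `embed_images` is a hypothetical stand-in for a real vision-language encoder (a CLIP-style model, for instance) and is stubbed here with random vectors so the clustering step runs without model weights.

```python
import numpy as np
from sklearn.cluster import KMeans


def embed_images(n_images: int, dim: int = 512, seed: int = 0) -> np.ndarray:
    """Stand-in for a vision-language encoder (e.g. a CLIP-style model).

    A real pipeline would return one embedding vector per image; here we
    generate random vectors so the example is runnable on its own.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(size=(n_images, dim))


def cluster_visual_types(embeddings: np.ndarray, n_types: int = 5) -> np.ndarray:
    """Group image embeddings into visual 'types' via k-means.

    In the project's framing, clusters might correspond to categories such
    as military equipment or images of politicians.
    """
    # Normalize so clustering reflects embedding direction (cosine-like
    # similarity) rather than vector magnitude.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_types, n_init=10, random_state=0)
    return km.fit_predict(unit)


labels = cluster_visual_types(embed_images(200), n_types=5)
print(sorted(set(labels)))
```

Once images are grouped this way, per-cluster engagement statistics (shares, replies, reactions by audience) would indicate which visual types gain traction with which audiences.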
The project leverages this understanding of what audiences are looking for to show Air Force operators how to integrate generative AI to create counter-influence campaigns using culturally relevant visuals. “If we want to create an image that shows the U.S. military engaging in humanitarian aid, we can use AI to make it more compelling for a specific audience,” Buntain explains. “Are there cultural symbols we can integrate into those images? AI can help us craft more relevant imagery for those audiences.”
Supporting the U.S. Air Force
While the project is not directly aimed at supporting boots-on-the-ground warfighters, it has significant implications for the Air Force’s role in strategic messaging. “This is about amplifying the capabilities of U.S. Air Force operators and analysts in engaging with audiences they care about,” Buntain says. “It’s about boosting the U.S. image in potentially allied regions through more effective counter-messaging and defensive messaging.”
The research also addresses the challenges of human-AI collaboration. “There’s always a human in the loop,” Buntain emphasizes. “The goal is to give operators additional resources to craft content that’s more relevant to audiences who need this information. This human-in-the-loop approach is critical for ensuring that AI tools are used responsibly and effectively.”
Timeline and Milestones
The project, which spans three years, has ambitious goals. Within the first six months, Buntain aims to develop capabilities to differentiate which types of visuals gain traction in different countries. “We should be able to say something about why certain images resonate in the Philippines but not in Thailand or South Korea,” Buntain says. The next phase will focus on political content, examining how pro-U.S. or anti-U.S. imagery varies across audiences.
The long-term implications of this research extend beyond the Air Force. “The broader question is how we can better integrate the skills of AI systems with the communication needs of the Department of Defense,” Buntain explains. “In regions like the South China Sea or the Pacific Islands, much of the conflict is informational rather than kinetic. This research helps us understand how to reach these audiences more effectively.”
As visual media continues to dominate online spaces, understanding its role in influence campaigns is more important than ever. Through this project, Buntain is not only advancing the capabilities of the U.S. Air Force and Space Force but also contributing to a broader understanding of how AI can be used responsibly in the digital age. “This is about protecting vulnerable audiences and ensuring that the content we create is both impactful and ethical,” Buntain says. “That’s what excites me most about this work.”