Breaking the Loop: Creating a Trustworthy Social Media Platform

Laurie Robinson - March 28, 2023

As part of his NSF CAREER award, Assistant Professor Giovanni Luca Ciampaglia is designing a social media platform that will deliver users a trustworthy news feed.


Social media content feeds are crucial to media consumption today. According to Pew Research Center, 50% of U.S. adults say they get news from social media “often” or “sometimes.” Unfortunately, a lot of that news is of low quality, contains dubious or inaccurate claims, and tends to stoke partisan animosities. A Wall Street Journal investigation found that Facebook’s algorithm amplified angry and polarizing content, much of which came from questionable sources, conspiracy theories, and unverified rumors. Moreover, with newspapers shuttering and journalists being laid off, dispassionate, factual reporting and investigative journalism have declined, leaving a vacuum in the media landscape that is ripe for mis- and disinformation.

While social media algorithms are part of the problem when it comes to the proliferation of low-quality news, individual behavior is also a factor. People tend to read and watch content that appeals to their preferences and biases, and recommendation algorithms, designed to maximize views and clicks, continually feed that content back to them. But what if there were a way to change that? As part of his $600,000 NSF CAREER award, Giovanni Luca Ciampaglia, assistant professor at the College of Information Studies, and colleagues are designing a social media platform that will deliver users a trustworthy news feed.

A key ingredient in this experimental platform is the algorithm that decides how content is ranked in the news feed. The researchers want to see whether ranking algorithms that maximize views and clicks from an ideologically and demographically diverse pool of users will increase the trustworthiness of the content users view and reshare. Their goal is to break the algorithmic engagement loop, in which people gravitate toward selective news sources that match their preferences, by exposing them to crowdsourced news that is not cycling through an echo chamber. “Our preliminary results show that the content recommendation algorithms that generate engagement in an audience that is diverse at least from an ideological point of view are able to identify more trustworthy news sources while keeping the recommendations relevant to users,” says Ciampaglia. But diversity is not only a matter of politics. “Is the diversity of an audience in terms of gender or age also a good signal to make the rankings more trustworthy?”
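To make the idea concrete, here is a minimal sketch, not the team’s actual algorithm, of how a ranker might discount engagement that comes from a homogeneous audience. It weights each source’s raw clicks by the normalized Shannon entropy of the groups engaging with it; the group labels and numbers are invented for illustration.

```python
import math
from collections import defaultdict

def diversity_weighted_score(engagements):
    """Score a news source by its total engagement, discounted by how
    concentrated that engagement is within a single audience group.

    engagements: list of (group, clicks) pairs, where group is an
    ideological or demographic label. All labels here are illustrative.
    """
    totals = defaultdict(int)
    for group, clicks in engagements:
        totals[group] += clicks
    total = sum(totals.values())
    if total == 0 or len(totals) < 2:
        return 0.0  # no engagement, or engagement from a single group
    # Normalized Shannon entropy of engagement across groups: 1.0 when
    # clicks are spread evenly, approaching 0.0 in an echo chamber.
    entropy = -sum((c / total) * math.log2(c / total) for c in totals.values())
    diversity = entropy / math.log2(len(totals))
    return total * diversity

# Two sources with identical total engagement: the one whose audience
# spans groups outranks the one read almost entirely by a single group.
echo = [("left", 95), ("right", 5)]
broad = [("left", 35), ("center", 30), ("right", 35)]
print(diversity_weighted_score(echo))   # ~28.6
print(diversity_weighted_score(broad))  # ~99.8
```

The same weighting could be computed over gender or age groups instead, which is exactly the question the quote above raises.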

Ciampaglia has always been interested in crowdsourced information. For his PhD, he studied Wikipedia in its infancy, looking at how individual users with different points of view came to form a community around shared norms of editorial objectivity. He was fascinated by how a decentralized system could produce reliable information, and he wonders why social media has failed where Wikipedia succeeded. “Maybe the issue is that these systems rely on algorithms to do most of the heavy lifting of identifying content,” says Ciampaglia. He wants to salvage the idea of algorithmic curation; it does not have to be the problem. “There can be a way to use an algorithm and create a healthy news diet,” he says.

To carry out the research, the UMD team will recruit social media users from the same online panels that are routinely used to conduct market research surveys and invite them to join the experimental platform. To measure quality, the researchers will use tools such as NewsGuard, which rates the credibility of news sources, to assess whether the content being recommended is credible. The project will end in 2028.
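As a toy illustration of that kind of measurement, one could average the credibility ratings of the sources appearing in a user’s feed. The domain names and scores below are invented; in practice the ratings would come from a service such as NewsGuard, which scores news sites on a 0–100 trust scale.

```python
from urllib.parse import urlparse

# Hypothetical domain -> trust score table (0-100); real ratings would
# come from a credibility service such as NewsGuard.
RATINGS = {
    "example-wire.com": 92,
    "example-tabloid.com": 37,
    "example-blog.net": 60,
}

def feed_trustworthiness(feed_urls, default=50.0):
    """Mean trust score of the sources recommended to a user.
    Domains without a rating fall back to a neutral default."""
    scores = []
    for url in feed_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        scores.append(RATINGS.get(domain, default))
    return sum(scores) / len(scores) if scores else default

feed = [
    "https://www.example-wire.com/story/123",
    "https://example-tabloid.com/shock-claim",
]
print(feed_trustworthiness(feed))  # (92 + 37) / 2 = 64.5
```

Comparing this average between a diversity-aware ranker and a standard engagement-based ranker would show whether the new algorithm actually surfaces more trustworthy content.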

The marketplace of ideas and attention may yet be rescued if we adopt algorithms that do not incentivize blind engagement. A view of engagement that accounts for the diversity of the audience behind it may be one way to achieve that. But creating trustworthy algorithmic systems is impossible without a proper understanding of the interplay between humans and algorithms.