Eight Projects Explore Use of Cutting-Edge Technology to Solve Range of Societal Issues
Eight seed grants from UMD's new TRAILS institute aim to diversify stakeholders in developing and governing AI, while using the emerging technology for societal good. Illustration courtesy of Maryland Today via iStock
The inaugural round of seed grants from the University of Maryland’s new Institute for Trustworthy AI in Law & Society (TRAILS) aims to boost the diversity of stakeholders developing and governing artificial intelligence, and ultimately improve the accessibility and efficacy of the AI-infused systems that increasingly pervade everyday life.
The grants to eight interdisciplinary faculty teams announced Wednesday total just over $1.5 million. The projects include AI chatbots to help people quit smoking, animal-like robots to assist caregivers interacting with autistic children, and an exploration of how users interact with AI-generated language translation systems.
The projects fall under TRAILS’ broad mission to transform AI from a field motivated primarily by technological innovation to one driven by ethics and human rights, with participation from communities whose voices previously were marginalized.
“At the speed with which AI is developing, our seed grant program will enable us to keep pace—or even stay one step ahead—by incentivizing cutting-edge research and scholarship that spans AI design, development and governance,” said Hal Daumé III, a UMD professor of computer science who directs the institute.
TRAILS was launched in May 2023 with a $20 million award from the National Science Foundation and the National Institute of Standards and Technology, drawing on the strengths of its four primary institutions: the University of Maryland’s expertise in computing and human-computer interaction, George Washington (GW) University’s strengths in systems engineering and in AI as it relates to law and governance, Morgan State University’s work in addressing bias and inequity in AI, and Cornell University’s research in human behavior and decision-making.
This new seed grant program provides a mechanism to bridge academic and institutional boundaries around TRAILS’ primary research thrusts—participatory design, methods and metrics, evaluating trust, and participatory governance.
A second round of seed funding will be announced later this year, said Darren Cambridge, TRAILS’ newly hired managing director. First-round projects are eligible for a renewal, while other TRAILS faculty—or any faculty member at the four primary TRAILS institutions—can submit new proposals for consideration, he said.
The projects will strengthen other TRAILS activities now taking shape, including K–12 education and outreach programs, AI policy seminars and workshops on Capitol Hill, and multiple postdoc opportunities for early-career researchers.
“We want TRAILS to be the go-to resource for educators, policymakers and others who are seeking answers and solutions on how to build, manage and use AI systems that will benefit all of society,” Cambridge said.
The eight projects selected for the first round of TRAILS seed funding are:
Chung Hyuk Park and Zoe Szajnfarber from GW and Hernisa Kacorri from UMD look to improve access to a crucial but costly early intervention therapy for families of children with autism by assessing the at-home use of an intelligent robotic companion animal. In addition to advancing assistive technology while working with families of diverse racial, ethnic and socioeconomic backgrounds, they will assess the impact of teaming on building greater trust in human-robot collaborative settings.
Soheil Feizi from UMD and Robert Brauneis from GW will investigate issues surrounding text-to-image generative AI models like Stable Diffusion, DALL-E 2 and Midjourney, focusing on unresolved legal, aesthetic and computational questions, such as how copyright law might adapt if these tools create works in an artist’s style. The team will also explore whether specific artworks, especially rare and unique ones, have been used to train AI models.
Huaishu Peng and Ge Gao from UMD will work with Malte Jung from Cornell to build trust in embodied AI systems that bridge the gap between computers and human physical senses. The researchers will explore the use of small desktop robots—or even ones that traverse the human body—that can convey or interpret nonverbal cues, such as nodding, between blind and sighted individuals, while also gaining a deeper understanding of both groups’ values concerning teamwork facilitated by embodied AI.
Adam Aviv from GW and Michelle Mazurek from UMD will examine how users of AI and machine learning systems such as chatbots understand and react to warnings from the platforms themselves that outputs might exhibit bias, sexism or racism, as well as the risk that AI-generated responses might be false or misleading.
Marine Carpuat and Ge Gao from UMD seek to better understand “mental models”—how we perceive things—for language translation systems used by millions of people daily. They will focus on how individuals make sense of their “error boundary”—that is, deciding whether an AI-generated translation is correct or incorrect. The team will also develop innovative techniques to teach users how to improve their mental models as they interact with machine translation systems.
Hal Daumé III, Furong Huang and Zubin Jelveh from UMD and Donald Braman from GW will develop a new measure of “effort-aware fairness” and create algorithms that optimize machine learning AI systems to incorporate it. The metric will be relevant to systems prone to biased outcomes—such as loan approvals or parole decisions—and will consider not just an individual’s characteristics—such as their bank balance—but the effort they expended to have “gotten where they are.” Developing measures of fairness that better align with philosophical, legal and human notions of the concept is anticipated to lead to greater trust in AI-infused systems.
Lorien Abroms and David Broniatowski from GW will recruit tobacco smokers to study ChatGPT as the basis for a novel smoking cessation program. Additional research will examine smokers’ perceptions of the acceptability and trustworthiness of AI-infused technology for help in quitting smoking.
Susan Ariel Aaronson and David Broniatowski, both from GW, will create a prototype of a searchable, easy-to-use website to help policymakers better utilize academic research related to trustworthy and participatory AI. The team will analyze research publications by TRAILS-affiliated researchers; each publication relevant to policymaking will be summarized and categorized by research questions, societal issues, keywords and relevant policymaking bodies. The resulting prototype will enable the researchers to test the utility of this resource for policymakers.
The original article written by Maryland Today Staff was published in Maryland Today on Jan 24, 2024.