Institute for Trustworthy AI in Law and Society (TRAILS)

Artificial Intelligence (AI) systems have the potential to enhance human capacity, increase productivity, catalyze innovation, and mitigate complex problems. However, current AI systems are not created transparently: the opaque processes used to develop them produce results that are not well understood, posing a challenge to public trust. Trust is further undermined by the harms that AI systems can cause, which fall most heavily on the communities excluded from participating in AI system development. This lack of trustworthiness will slow the adoption of AI technologies, so including the groups affected by both the benefits and the harms of these systems is critical to AI innovation. The TRAILS (Trustworthy AI in Law and Society) Institute, a partnership of the University of Maryland, The George Washington University, Morgan State University, and Cornell University, encourages community participation in the development of AI techniques, tools, and scientific theories, and will produce design and policy recommendations that promote the trustworthiness of AI systems. The Institute's first goal is to discover ways to change how AI systems are designed and developed so that communities can make informed choices about adopting AI technology. Its second goal is to develop best practices for industry and government that foster AI innovation while keeping communities safe, engaged, and informed. The TRAILS Institute has explicit plans for increasing the participation of affected communities, from K-12 students through Congressional staff; these plans will elicit the communities' concerns and expectations and improve understanding of the risks and benefits of AI-enabled systems.

The TRAILS Institute’s research program comprises four thrusts, each targeting a key aspect of the AI system development lifecycle. The first, Social Values, increases participation throughout all aspects of AI development, including participatory design with diverse communities, to ensure that the values embodied by AI systems reflect community and stakeholder values; the result is community-based interventions and adaptations to the AI development lifecycle. The second thrust, Technical Design, develops algorithms that promote transparency and trust in AI, including tools that increase the robustness of AI systems and promote user and developer understanding of how those systems operate. The third thrust, Socio-Technical Perceptions, develops novel measures, including psychometric techniques and experimental paradigms, to assess the interpretability and explainability of AI systems; these measures will enable a deeper understanding of existing metrics and algorithms and of the values perceived and held by participating community members. The fourth thrust, Governance, documents and analyzes governance regimes for both data and technologies, providing the underpinning for the development of AI platform and technology regulation. Ethnographers will analyze the Institute itself and its partner organizations, documenting the ways in which technical choices translate into governance impacts. Research focuses on two use-inspired areas: information dissemination systems (e.g., social media platforms) and energy-intensive systems (e.g., autonomous systems). The Institute’s education and workforce development efforts include new AI educational offerings catering to many markets, from secondary through executive education.
The TRAILS Institute is especially focused on expanding access to foundational AI education for historically marginalized and minoritized groups of learners and users. The Institute will work with these communities to learn from, educate, and recruit participants, and to retain, support, and empower those who have been marginalized in mainstream AI. Integrating these communities into the research program broadens participation in AI development and governance.

6/1/2023 - 5/31/2028 (Estimated)

Principal Investigator(s):

Additional UMD Investigator(s):
Hal Daume, Thomas Goldstein

Additional Non-UMD Investigator(s):
Susan Aaronson, David Broniatowski

Research Funder:

Total Award Amount:
$20,000,000.00 (Awarded Amount to Date: $7,626,273.00)

Research Areas: