Machine Learning, AI, Computational Linguistics, and Information Retrieval
Developing methods that allow computers to perform learned tasks autonomously, creating practical solutions for human needs.
Research Projects
DASS: Learning Code(s): Community-Centered Design of Automated Content Moderation
Principal Investigator(s): Katie Shilton
Funder: National Science Foundation
Research Areas: Accessibility and Inclusive Design > Machine Learning, AI, Computational Linguistics, and Information Retrieval > Social Networks, Online Communities, and Social Media
FAI: A Human-centered Approach to Developing Accessible and Reliable Machine Translation
Principal Investigator(s): Ge Gao
Funder: National Science Foundation
Research Areas: Human-Computer Interaction > Machine Learning, AI, Computational Linguistics, and Information Retrieval
Institute for Trustworthy AI in Law and Society (TRAILS)
Principal Investigator(s): Katie Shilton
Funder: National Science Foundation
Research Areas: Information Justice, Human Rights, and Technology Ethics > Machine Learning, AI, Computational Linguistics, and Information Retrieval
The TRAILS (Trustworthy AI in Law and Society) Institute, a collaboration among several universities, aims to enhance trust in AI systems. It focuses on community participation, transparent design, and best practices. Four key research thrusts address social values, technical design, socio-technical perceptions, and governance. The institute seeks to include historically marginalized communities and promote informed AI adoption.
Faculty
Recent News
![Faisal Quader](https://ischool.umd.edu/wp-content/uploads/Faisal-Quader-1200x600.png)
Turning Dreams into Impact: One Donor’s Contribution to Future Innovators
Adjunct faculty member Faisal Quader reflects on his philanthropy
![Image of many dots plotted on a screen representing a knowledge graph for Asheville NC Southside owners harmed by urban renewal](https://ischool.umd.edu/wp-content/uploads/knowledge-graph-1200-x-600.jpg)
Knowledge graph for Southside owners harmed by urban renewal, by Nick de Raet
UMD Students Studying Archival Technology & AI Make Real-World Impact
19 UMD students participating in AICollaboratory summer courses aid Asheville Reparations Commission