FAI: A Human-centered Approach to Developing Accessible and Reliable Machine Translation
This Fairness in AI project aims to develop technology that reliably enhances cross-lingual communication in high-stakes contexts, such as when a person needs to communicate with someone who does not speak their language to get health care advice or apply for a job. While machine translation technology is frequently used in these conditions, existing systems often make errors that can have severe consequences for a patient or a job applicant. Further, it is difficult for people who do not understand the source or target language to know when automatic translations might be wrong. This project addresses this issue by developing accessible and reliable machine translation for lay users. It will provide mechanisms that guide users to recognize and recover from translation errors, and help them make better decisions given imperfect translations. As a result, more people will be able to use machine translation reliably to communicate across language barriers, which can have far-reaching positive consequences on their lives.
Specifically, this project contributes advances in natural language processing and interaction design for a bot that can be added to any text-based conversation, where it can play a role similar to an interpreter's. The bot will guide users to write inputs that are suitable for machine translation, help them understand outputs, and intervene when it detects miscommunication and conversational breakdowns. The design of the bot will follow a human-centered design process consisting of need-finding studies, iterative system development and deployment, and user evaluations via controlled experiments. On the back end, the bot will rely on quality estimation models that automatically detect translation errors in order to produce useful guidance for end users. The data, models, and design recommendations generated by this project will advance computational research in multiple ways: it will lead to new machine translation quality estimation techniques that take into account the impact of errors on end users; it will expand the scope of explainable artificial intelligence research to encompass the considerable risks and harms caused by language generation tools; and it will generate new interface designs that assist lay users' sensemaking of artificial intelligence systems. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Duration:
1/1/2022 - 12/31/2024
Additional UMD Investigator(s):
Marine Carpuat
Total Award Amount:
$999,757