Designing at the intersection of HCI and AI
A talk about Misinformation and Crowdsourced Annotation
While many models have been proposed for automated checking of news, even established news outlets and fact-checking services are often disputed today. How can we design AI models that are transparent and accountable, so that they earn human trust and build a foundation for effective human-AI partnerships? We envision AI as an assistive technology that enhances and augments human abilities through interaction, rather than providing black-box automation. I'll discuss our recent work integrating human-centered, front-end interface design with back-end language processing algorithms, using the specific problem of misinformation to ground our broader work toward FAT* design for enhanced search systems.

Time permitting, I'll discuss a second stream of research on effective task design and aggregation for collecting item ratings via crowdsourcing. A cost-benefit analysis over 10,000 ratings and rationales collected on Mechanical Turk suggests a win-win: experienced workers can provide rationales for their ratings with almost no increase in task completion time, while yielding a multitude of benefits: more reliable ratings, greater transparency for assessing workers and ratings, reduced need for expert gold labels, dual supervision from ratings and rationales, and added value from the rationales themselves.
About the Speaker:
Matthew Lease (https://www.ischool.utexas.
All UMD faculty, staff, and students are invited. No registration required.
IRB 5105 (small room -- come early or bring a chair!)