UMD INFO Events - College of Information (INFO)


OTTRS Speaker Series: "The Coevolution of Computational and Experimental Methods in Human-AI Teams"

Event Start Date: Friday, October 24, 2025 - 12:00 pm

Event End Date: Friday, October 24, 2025 - 1:00 pm

Location: Virtual


UMD students, faculty, staff, alumni, and friends—join us for the OTTRS Speaker Series. (Registration Required)


Abstract:

The study of human-AI teams, rife with unforeseeable conditions and dynamics, requires a dynamic interplay between theoretical foundations, computational modeling, and experimental validation. Neal Outland (University of Georgia) explores how this coevolution advances human-AI teaming (HAT) research. Part one discusses a systematic review of trust theories across disciplines, revealing a fragmented landscape of psychological, computational, organizational, and engineering perspectives; he demonstrates how computational modeling and experimental work have been used together to translate theory into practice. Part two offers a glimpse of the future: bidirectional trust models emerging from iterative simulation-experiment cycles, and identity dynamics frameworks evolving through computational exploration and empirical validation. These approaches show how computational methods and experimental data can mutually inform each other, capturing emergent phenomena that single methodologies miss. The talk concludes with future directions in which theory, computation, and experimentation form integrated discovery cycles: developing adaptive AI systems through real-time modeling, incorporating individual differences into computational frameworks, and creating testbeds that simultaneously validate and inspire theoretical insights.

Bio:

Neal Outland is an Assistant Professor of Industrial-Organizational Psychology at the University of Georgia and an AI Faculty Fellow of the Institute for Artificial Intelligence. His research bridges computational and experimental methods to understand human-AI team dynamics, trust calibration, and identity processes in organizations. He has published systematic reviews of trust theories across disciplines, developed bidirectional computational models of human-robot interaction, and designed innovative experimental paradigms, including 3D virtual testbeds. His work on team composition, social network approaches, and individual differences in human-AI collaboration appears in American Psychologist, Organizational Psychology Review, Current Opinion in Psychology, and IEEE conference proceedings. Recent projects, supported by the DEVCOM Analysis Center and other agencies, examine how personality and attitudes shape trust evolution in human-autonomy teams. His focus is on integrative approaches in which computational and experimental methods coevolve to advance both theoretical understanding and practical applications for human-AI teaming.

Register Here!
