Human-Agent Teaming on Intelligence Tasks

Complex sensemaking tasks are a fundamental part of intelligence analysis. Analysts routinely sift through large numbers of source documents, identify key facts and trends, assemble them into coherent models of reality, and use those models to inform action. While some operations can be handled effectively by a single analyst, others require a team effort, which introduces additional cognitive challenges associated with sharing information and aligning mental models across the analysis team. As the nature of threats has changed post-9/11 and the sheer number of available information sources has grown, the demands on analysts have become increasingly overwhelming. As a result, artificial intelligence (AI) and machine learning (ML) are increasingly discussed as essential technologies for meeting these analysis challenges (Gartin, 2019).

This project examines AI- and ML-based analysis capabilities by addressing the following research questions:

  • Can interactive (chatbot-style) AI agents mitigate or exacerbate the communication and coordination problems that occur during shift handovers of intelligence work (e.g., inaccuracy blindness, ignoring potentially relevant information)? Which features determine whether they mitigate or exacerbate these problems?
  • How can humans deal with potential blind spots, biases, or inaccuracies in AI systems, so that the AI mitigates rather than exacerbates human communication and coordination problems during shift handovers?

To address these research questions, the project team will create an experimental infrastructure consisting of task-relevant input materials (e.g., mission descriptions and source documents), activity recording tools (e.g., search tools and electronic notebooks), experimental monitoring capabilities (e.g., video recording and survey systems), and different forms of AI support for human analysts (e.g., chatbots offering advice to analysts engaged in intelligence tasks).
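To make the intended role of these AI supports concrete, the following is a minimal, purely hypothetical sketch (in Python) of one such support; none of the names or behaviors below come from the project itself. It imagines a chatbot-style advisor that compares the citations in an analyst's electronic notebook against the mission's source documents and prompts the analyst about sources not yet consulted.

    # Hypothetical sketch only; the project description does not specify
    # its tooling at this level of detail. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class AnalystNotebook:
        """Tracks which source documents the analyst has cited so far."""
        consulted: set = field(default_factory=set)

        def log_citation(self, doc_id: str) -> None:
            self.consulted.add(doc_id)

    def advise(notebook: AnalystNotebook, all_sources: set) -> str:
        """Return a chatbot-style message flagging unconsulted sources."""
        missing = sorted(all_sources - notebook.consulted)
        if not missing:
            return "You have cited every available source document."
        return "You have not yet consulted: " + ", ".join(missing) + "."

    if __name__ == "__main__":
        sources = {"doc-01", "doc-02", "doc-03"}
        notebook = AnalystNotebook()
        notebook.log_citation("doc-02")
        print(advise(notebook, sources))
        # -> You have not yet consulted: doc-01, doc-03.

Even a support this simple could, per the research questions above, either mitigate handover problems (by surfacing overlooked sources) or exacerbate them (by encouraging misplaced trust), which is precisely the trade-off the experiments are designed to probe.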

If successful, this project will substantially increase insight into the strengths and weaknesses of AI technology for supporting specific intelligence tasks. It will shed light on how and when human analysts can safely place their trust in AI technology, as well as how they can proactively identify problems in AI-generated input. Finally, it will improve our understanding of how teams of humans, including asynchronous teams, work together in situations that involve AI-generated input. These are critical questions that must be answered if intelligence analysts are to exploit AI technology fully and safely in their work.

Duration:
7/1/2020 - 6/30/2023

Principal Investigator(s):

Additional Non-UMD Investigator(s):
Adam Porter, Aimee Kane, Tammie Nelson, Sarah H. Vahlkamp, Melissa J. Carraway

Partnering Organization(s):
Fraunhofer USA Center for Experimental Software Engineering (CESE)
Duquesne University

Research Funder:

Total Award Amount:
$616,700

Research Areas: