III: Small: Bringing Transparency and Interpretability to Bias Mitigation Approaches in Place-based Mobility-centric Prediction Models for Decision Making

The COVID-19 pandemic has brought to light the importance of place-based mobility-centric prediction models in high-stakes settings. Place-based mobility-centric (PBMC) prediction models use human mobility data, together with other contextual information, to predict spatio-temporal statistics of significance to decision makers. For example, mobility patterns that reflect (lack of) compliance with travel restrictions and stay-at-home orders have been used to predict the number of COVID-19 cases over time. However, the data used to train PBMC models can suffer from different types of bias that might in turn affect the fairness of the predictions. For example, under-reporting in the COVID-19 case data used to train PBMC models might produce predictions that are artificially low, which could lead a decision maker to decide against locating a COVID-19 testing unit in a given neighborhood. This project presents a set of approaches to mitigate, in a transparent and interpretable manner, the diverse types of bias present in PBMC models in two high-stakes settings: public health and public safety. In addition, by providing insights into the processes through which bias becomes embedded in the data and into the effects of that bias on the fairness of the models, this project aims to move PBMC models closer to broad adoption in policy settings. The project will also offer educational opportunities for graduate and undergraduate students, as well as computing workshops for high school students and under-represented genders in computing, focused on the value of PBMC models, human mobility data, and fairness in high-stakes settings.

The technical contributions of this project are divided into three thrusts. Thrust one will provide a novel PBMC model, able to work with different neural architectures, that predicts reported place-based statistics while mitigating potential under-reporting bias. Thrust two will create a novel sampling-bias mitigation approach to correct for under-represented groups in human mobility data collected from cell phones. Thrust three will produce novel transfer learning approaches to mitigate algorithmic bias, i.e., low-performing models in data-scarce regions. The thrusts are designed in a modular way, allowing data and algorithmic bias mitigation approaches to be layered into end-to-end mitigation frameworks that are evaluated for both fairness and accuracy. All bias mitigation methods are accompanied by novel interpretability approaches that distill the social determinants that might explain how bias became embedded in place-based statistics and mobility data, and that identify the role different model components might play in the mitigation itself. The research outcomes will advance the state of the art in the design of transparent and interpretable bias mitigation approaches for PBMC models, with evaluations in two high-stakes settings: public health and public safety.
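To make the under-reporting idea in thrust one concrete, the sketch below shows one common way such a bias can be handled during training: the network predicts true counts, and the loss "thins" them by an externally estimated reporting rate before comparing against the reported labels. This is an illustrative PyTorch sketch under assumed names (PBMCNet, thinned_poisson_loss) and an assumed binomial-thinning observation model; it is not the project's actual architecture or method.

    import torch
    import torch.nn as nn

    class PBMCNet(nn.Module):
        """Toy stand-in for a place-based prediction network."""
        def __init__(self, n_features, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Softplus())  # counts are nonnegative

        def forward(self, x):
            return self.net(x).squeeze(-1)  # predicted TRUE counts

    def thinned_poisson_loss(pred_true, reported, report_rate):
        # If reported ~ Poisson(report_rate * true), the Poisson negative
        # log-likelihood (up to a constant) uses the thinned mean:
        mu = report_rate * pred_true
        return torch.mean(mu - reported * torch.log(mu + 1e-8))

    # Toy usage: mobility/context features per region-week, reported case
    # counts, and reporting rates assumed known from an external source.
    x = torch.randn(64, 10)
    reported = torch.poisson(torch.full((64,), 5.0))
    rate = torch.full((64,), 0.4)  # assumed 40% reporting rate
    model = PBMCNet(10)
    loss = thinned_poisson_loss(model(x), reported, rate)
    loss.backward()  # gradients push the network toward the TRUE counts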
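For thrust two, the following is a minimal sketch of one standard sampling-bias correction, post-stratification reweighting, assuming demographic group labels and census shares are available; the column names, groups, and shares are illustrative, not the project's data or method.

    import pandas as pd

    # Toy mobility panel: each record is a device-day with a demographic
    # group label and a visit count (all values are illustrative).
    trips = pd.DataFrame({
        "group":  ["A", "A", "B", "B", "B", "C"],
        "visits": [3, 5, 2, 4, 1, 6],
    })
    panel_share = trips["group"].value_counts(normalize=True)
    census_share = pd.Series({"A": 0.25, "B": 0.50, "C": 0.25})

    # Weight each record by (population share / panel share) of its group,
    # down-weighting over-sampled groups and up-weighting the rest.
    trips["w"] = trips["group"].map(census_share / panel_share)
    weighted_mean = (trips["visits"] * trips["w"]).sum() / trips["w"].sum()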
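And for thrust three, a minimal transfer-learning sketch: pretrain on data-rich source regions, then fine-tune only part of the model for a data-scarce target region. It reuses the illustrative PBMCNet from the first sketch; the checkpoint path, freezing choice, and learning rate are assumptions, not the project's approach.

    # Pretrain on data-rich source regions (omitted), then adapt.
    model = PBMCNet(10)
    # model.load_state_dict(torch.load("source_regions.pt"))  # hypothetical checkpoint

    for p in model.net[0].parameters():  # freeze the shared first layer
        p.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)
    # Fine-tune with the thinned loss on the target region's scarce data.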

Duration:
9/1/2022 - 8/31/2025

Principal Investigator(s):

Project Website:
https://www.nsf.gov/awardsearch/showAward?AWD_ID=2210572&HistoricalAwards=false

Research Funder:
National Science Foundation (NSF)

Total Award Amount:
$600,000

Research Areas: