SoDa Symposium: Universal Adaptability – A Target-Independent Approach to Inference

Event Start Date: Tuesday, March 8, 2022 - 12:00 pm

Event End Date: Tuesday, March 8, 2022 - 1:00 pm

Location: Online

Universal adaptability: Target-independent inference that competes with propensity scoring

Authors: Michael P. Kim, Christoph Kern, Shafi Goldwasser, Frauke Kreuter, and Omer Reingold

The gold-standard approaches for gleaning statistically valid conclusions from data involve random sampling from the population. Collecting properly randomized data, however, can be challenging, so modern statistical methods, including propensity score re-weighting, aim to enable valid inferences when random sampling is not feasible. We put forth an approach for making inferences based on available data from a source population that may differ in composition in unknown ways from an eventual target population. Whereas propensity scoring requires a separate estimation procedure for each different target population, we show how to build a single estimator, based on source data alone, that allows for efficient and accurate estimates on any downstream target data. We demonstrate, theoretically and empirically, that our target-independent approach to inference, which we dub “universal adaptability,” is competitive with target-specific approaches that rely on propensity scoring. Our approach builds on a surprising connection between the problem of inferences in unspecified target populations and the multi-calibration problem, studied in the burgeoning field of algorithmic fairness. We show how the multi-calibration framework can be employed to yield valid inferences from a single source population across a diverse set of target populations.
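To make the contrast concrete, here is an illustrative simulation (not the paper's algorithm) under covariate shift with a single covariate. The target-specific route reweights source outcomes by a propensity-style density ratio, which must be recomputed for each new target. The target-independent route fits one predictor on source data alone, here calibrated over a simple bin partition as a stand-in for the richer subgroup collections that multicalibration handles, and then simply averages that predictor over any target's covariates. All names and parameters below are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: source and target share the outcome function
# but differ in the covariate distribution.
x_src = rng.normal(0.0, 1.0, 20000)
x_tgt = rng.normal(0.8, 1.0, 20000)

def outcome(x):
    return np.sin(x) + 0.5 * x + rng.normal(0.0, 0.1, x.shape)

y_src = outcome(x_src)
truth = outcome(x_tgt).mean()   # estimand: mean outcome in the target

# Target-specific: propensity-style importance weighting.
# Densities are known in this toy setup; in practice they are estimated
# anew for every target population.
def gauss_kernel(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2)

w = gauss_kernel(x_src, 0.8) / gauss_kernel(x_src, 0.0)
ipw_est = np.sum(w * y_src) / np.sum(w)

# Target-independent: one predictor f, calibrated per bin on source only.
# Bin-conditional means are calibration over the bin partition; the
# estimator is then reused unchanged on any downstream target.
bins = np.linspace(-4.0, 4.0, 41)
idx_src = np.clip(np.digitize(x_src, bins), 1, len(bins) - 1)
f = np.array([y_src[idx_src == b].mean() if np.any(idx_src == b) else 0.0
              for b in range(1, len(bins))])

idx_tgt = np.clip(np.digitize(x_tgt, bins), 1, len(bins) - 1)
ua_est = f[idx_tgt - 1].mean()
```

Both `ipw_est` and `ua_est` land close to the true target mean, but only the weighting step depends on the target: the calibrated predictor `f` is built once from source data and averaged over whatever target covariates arrive later.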

Speakers

Christoph Kern, PhD

University of Mannheim (Postdoc)
University of Maryland (Research Assistant Professor)
Christoph Kern is a Post-Doctoral Researcher at the Professorship for Statistics and Methodology at the University of Mannheim and Research Assistant Professor at the Joint Program in Survey Methodology (JPSM) at the University of Maryland. He is also a Project Director at the Mannheim Centre for European Social Research (MZES) and a member of the Mannheim Center for Data Science (MCDS). He received his PhD (Dr. rer. pol.) in social science from the University of Duisburg-Essen (UDE) in 2016. Before joining the University of Mannheim, he was a research associate at the Professorship for Empirical Social Science Research and Statistics at UDE.

Michael P. Kim, PhD

Miller Institute at UC Berkeley (Postdoc)
Michael P. Kim is a Miller Postdoctoral Fellow at UC Berkeley, working with Shafi Goldwasser. Michael completed his PhD at Stanford under the guidance of Omer Reingold. His research investigates foundational questions about responsible machine learning. Much of this work aims to (1) identify ways in which machine-learned predictors can exhibit unfair discrimination and (2) develop algorithmic tools that provably mitigate such forms of discrimination. More broadly, Michael is interested in how the computational lens (i.e., algorithms and complexity theory) can help tackle emerging societal and scientific challenges.


The SoDa Center at UMD
The powerful information available in large social science data sets is critical to understanding and addressing many of our nation's and the world's most pressing challenges: from COVID-19 to racial, social, and economic injustice, and from climate change to deep and damaging political and cultural divides. To help address these challenges, the University of Maryland has launched a new Social Data Science Center (SoDa) designed to advance research, education, and applications of social data measurement and analysis.

Register for event