Drs. Christoph Kern and Michael P. Kim present a framework for protecting subpopulations and ensuring unbiased algorithmic conclusions in research.
Photo by Jacek Dylag on Unsplash
The gold-standard approaches for obtaining statistically valid conclusions from data involve random sampling from the population. Collecting properly randomized data, however, can be challenging, so modern statistical methods, including propensity score reweighting, aim to enable valid inferences when random sampling is not feasible.
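To make that baseline technique concrete, here is a minimal sketch of inverse propensity score reweighting: a model estimates how likely each unit is to belong to the target rather than the source sample, and source units are reweighted accordingly. The simulated data, model choice, and variable names are illustrative assumptions, not details from the talk.

```python
# A minimal sketch of propensity score reweighting, assuming an outcome
# observed only in a "source" sample and covariates observed in both the
# source and the "target" sample. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates: the source sample over-represents low values of X
# relative to the target population we actually care about.
X_source = rng.normal(loc=0.0, scale=1.0, size=(1000, 1))
X_target = rng.normal(loc=0.5, scale=1.0, size=(1000, 1))
y_source = 2.0 * X_source[:, 0] + rng.normal(size=1000)  # outcome, source only

# Propensity model: probability that a unit belongs to the target sample,
# given its covariates.
X_all = np.vstack([X_source, X_target])
membership = np.concatenate([np.zeros(1000), np.ones(1000)])  # 0=source, 1=target
model = LogisticRegression().fit(X_all, membership)
p_target = model.predict_proba(X_source)[:, 1]

# Weight each source unit by its odds of target membership, so the
# reweighted source sample mimics the target covariate distribution.
weights = p_target / (1.0 - p_target)
estimate = np.average(y_source, weights=weights)
print(f"Reweighted estimate of the target mean outcome: {estimate:.3f}")
```

Note that this standard recipe requires covariate data from the specific target population in advance; the work presented below asks what can be done when the target is not yet known.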
Dr. Christoph Kern, Postdoctoral Researcher at the University of Mannheim and Research Assistant Professor at the University of Maryland, and Dr. Michael P. Kim, Postdoctoral Fellow at UC Berkeley, presented their joint research on a target-independent approach to statistical inference, dubbed “universal adaptability.” Their approach builds on a surprising connection between the problem of making inferences about unspecified target populations and the multicalibration problem studied in the burgeoning field of algorithmic fairness, which examines the causes of bias in data and algorithms. The duo puts forth an approach for making inferences from available data on a source population that may differ in composition, in unknown ways, from the population that researchers eventually plan to target.
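For readers curious what multicalibration looks like operationally, below is a toy post-processing sketch in the spirit of the multicalibration literature: predictions are iteratively patched until they are approximately calibrated on every subgroup in a given collection, not just on average. The subgroup collection, binning scheme, tolerance, and synthetic data are illustrative assumptions, not the authors' exact algorithm.

```python
# Toy multicalibration post-processing: patch a predictor until, within
# every (subgroup, prediction-level bin) cell, the mean prediction matches
# the mean outcome up to a tolerance.
import numpy as np

def multicalibrate(preds, y, groups, n_bins=10, tol=0.01, max_iter=100):
    """Iteratively correct `preds` (values in [0, 1]) on each boolean
    subgroup mask in `groups` until all cells are calibrated within `tol`."""
    preds = preds.copy()
    for _ in range(max_iter):
        updated = False
        for g in groups:  # g: boolean mask selecting a subgroup
            bins = np.clip((preds * n_bins).astype(int), 0, n_bins - 1)
            for b in range(n_bins):
                cell = g & (bins == b)
                if cell.sum() < 20:  # skip cells too small to estimate
                    continue
                gap = y[cell].mean() - preds[cell].mean()
                if abs(gap) > tol:
                    preds[cell] = np.clip(preds[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:
            break
    return preds

# Example usage with synthetic data and a predictor miscalibrated on x > 0.5:
rng = np.random.default_rng(1)
x = rng.uniform(size=5000)
y = rng.binomial(1, x).astype(float)        # true outcome rate equals x
raw = np.clip(x + 0.1 * (x > 0.5), 0, 1)    # systematically off on one subgroup
calibrated = multicalibrate(raw, y, groups=[x > 0.5, x <= 0.5])
```

The connection to inference is that a predictor calibrated across many subpopulations at once can support unbiased estimates on downstream target populations composed of those subgroups, without fixing the target in advance.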
“The main intuitive observation in our work is that there is a clear analogy between the fairness goal to protect subpopulations from miscalibrated predictions and the statistical goal to ensure unbiased estimates on downstream target populations,” Kim said.
You can watch the full SoDa Symposium video below or on YouTube here.