A Q&A with Katie Shilton, associate professor and program co-director of the BS in Social Data Science at the INFO College
Both data privacy and machine learning/artificial intelligence technologies have been at the center of attention-grabbing controversies. According to Katie Shilton, associate professor and program co-director of the BS in social data science at the College of Information Studies, a few have angered the public and shaken technology development communities: the Cambridge Analytica and COMPAS scandals.
In 2013, a UK-based researcher collected data on roughly 270,000 Facebook users through a Facebook app he created. The users were told the data collection was for research, but the researcher sold the data to the commercial firm Cambridge Analytica. The firm then used the research data to target advertising for the Trump campaign.
In 2016, ProPublica–an independent, nonprofit newsroom–investigated COMPAS, a prediction algorithm that calculated “recidivism scores” for use in sentencing and parole decisions. ProPublica found that, because of systemic biases in the data COMPAS was trained on, the algorithm was nearly twice as likely to falsely flag a Black defendant as a future offender as it was a white defendant.
“I just made a timeline of digital data research controversies for CITI training modules I’m developing, and there has been one or more a year since the early 2010s. (And that’s just in the digital research data space!),” Shilton says.
In a recent paper, “Ethics Governance Development: Bridging Ethics Work and Governance in the Menlo Report,” Shilton looks at how researchers have historically navigated such controversies. Beginning in 2009, a group of computer scientists, lawyers, and government officials made and sustained an effort to set ethical guidelines for information and communication technology (ICT) research, which culminated in the 2012 release of the Menlo Report in the United States’ Federal Register. Shilton (with her coauthor Megan Finn from the University of Washington) interviewed report participants to find out why and how the report and its guidelines came to be. We sat down with Shilton to discuss her experience working on this paper and where she thinks data ethics governance is going.
What motivated you to write this paper?
Megan Finn and I both teach courses in information ethics, and as part of our teaching, we teach ethical guidelines that have been created for information technology. The 2012 Menlo Report stood out to us because it was created by computer scientists for computer scientists, and it was written before big data and AI scandals were so publicly recognized. We wondered what inspired the Menlo authors to create computing ethics guidelines at a time when technology research and the technology industry were still in hype rather than “techlash” mode. And we wondered what today’s efforts to govern the ethics of information technologies could learn from this earlier effort.
What is ethics governance and what is its current state in ICT research?
Ethics governance is the attempt to steer the moral standards of a field or area by both enrolling people (in this case, researchers) into consensus norms, as well as “cutting” or excluding researchers who don’t agree with those norms. Right now, ethics governance in ICT research is, I would say, midstream. There’s growing agreement that computing researchers need ethical standards, and there have been lots of attempts to write those standards in small and large ways. But we’re still working out how to get consensus about the right standards to adhere to, how to build those standards into everyday technology research practices, and what it looks like to “cut” research that doesn’t fall into those standards (or even if “teeth” should be a part of ethics practice in the field).
What is the influence and legacy of the Menlo Report? Did it have the impact it intended?
Yes and no. One of the things that stood out to us in the paper was the authors’ pessimism about the impact of the Report. We conducted the interviews before the pandemic (but published the paper after), and the authors’ initial tone was gloomy–they feared Menlo hadn’t had the impact they’d hoped.
But an interesting thing happened when we re-contacted the authors to check quotes before publication–their tone had changed. Several noted that conferences in the network measurement and security fields were increasingly requiring ethics statements in papers and citing the Menlo Report in those requirements. And we had seen this too–beginning slowly in the mid-2010s, and accelerating after 2016, conferences were beginning to require ethics statements. This is the first sign of “teeth” (or what we’d term cutting) we’d seen coming out of Menlo–if a researcher doesn’t include ethical reflection in a paper, it might not get published.
So my major takeaway is that ethics governance impacts are slow, and can feel really discouraging. Even now, most ICT researchers haven’t heard of the Menlo Report. But I do think Menlo had an impact on conference leadership, which is now having an impact on how ICT research is conducted and published.
Why is it relevant today?
Menlo is still a reasonable guide for ICT research, although there is now a proliferation of computing codes of ethics and best practices. (This paper found 84 guidelines written just for AI!) One thing Menlo includes that is really useful for students is a “Companion Document” which offers dozens of case studies in computing ethics.
But what I think might be even more relevant today is Menlo as a model of doing ethics governance work. Menlo shows both the strengths and challenges of trying to gather technical researchers–people really close to research ethics problems–to set new standards. If we want to do better ethics governance, we can learn from Menlo. For example, part of why Menlo was so successful at shaping conference program requirements is that Menlo authors were influential researchers out serving on conference committees. That’s a plus, and something that can be emulated: if you want to change how computing ethics are done, get technical leaders invested. However, something Menlo struggled with was recruiting nontechnical voices, particularly ethicists and social scientists. (They did have lots of lawyers.) Something today’s efforts can do a better job of, I think, is incorporating those perspectives.
What struck you most in your interviews with the authors of the report?
Ethics governance work is hard. And ethics work is never perfect. We actually wrote about this for the magazine Communications of the ACM. The authors reported that the work was hard, and that they didn’t have the resources to do the scope of work or the outreach that they’d hoped to. And yet, they were more influential than I think they (initially) gave themselves credit for. So for others interested in doing ethics governance work in computing, I think the message is: keep rolling that rock up that hill. Even if it’s kind of a lumpy rock.
What is the future of ethics governance?
I think it lies in the infrastructures that set research norms–conference committees chief among them–but also in training the next generation of information and computer scientists to see (and practice) ethical reflection as part of the job. Ethical reflexivity is a skill that can be developed, and that’s useful for spotting problems and coming up with solutions.