Karen Boyd’s dissertation defense
Event Start Date: Friday, October 30, 2020 - 1:00 pm
Event End Date: Friday, October 30, 2020 - 3:00 pm
Please join us for Karen Boyd’s doctoral dissertation defense!
Date: Friday, October 30, 2020
Time: 1:00 PM Eastern Time
Title: Understanding and Intervening in Machine Learning Ethics: Supporting Ethical Sensitivity in Training Data Curation
Abstract:
Despite a great deal of attention to developing mitigations for ethical concerns in Machine Learning (ML) training data and models, we don’t yet know how these interventions will be adopted and used. Will they help ML engineers find and address ethical concerns in their work? This dissertation seeks to understand ML engineers’ ethical sensitivity (ES)—their propensity to notice, analyze, and act on socially impactful aspects of their work—while curating training data. A systematic review of ES (Chapter 2) addresses conflicts of conceptualization in prior work with a new framework describing three activities (recognition, particularization, and judgment); argues that ES offers a useful way to describe, evaluate, and intervene in ethical technology development; and argues that the methods and perspectives of social computing can offer richer methods and data to studies of ES. A think-aloud study (Chapter 3) tests this framework by using ES to compare engineers working with unfamiliar training data, finding that engineers with an intervention (Datasheets) noticed ethical issues earlier and more frequently than those without and that participants relied on Datasheets extensively while particularizing; it also describes recognition and particularization habits. Chapter 4 uses Value Sensitive Design to “design up,” mitigating harms by helping machine learning engineers particularize their ethical concerns and find appropriate technical tools. This dissertation introduces ES to studies of social computing, contributes a novel method for studying ES, offers rich data about how it functions in ML development, describes insights for designing context documents and other interventions designed to encourage ES, develops an extensible digital guide that supports particularization and judgment, and points to new directions for research in ethical sensitivity in technology development.
Committee:
- Dr. Katie Shilton, Chair
- Dr. Wayne Lutters
- Dr. Susan Winter
- Dr. Jessica Vitak
- Dr. Hal Daumé, Dean’s Representative
Join Zoom Meeting
https://umd.zoom.us/j/99467523030?pwd=MW5ISE5DTkhudW9vTXlmSm0wK0dnUT09
Meeting ID: 994 6752 3030
Passcode: 410207
One tap mobile
+16699006833,,99467523030# US (San Jose)
+12532158782,,99467523030# US (Tacoma)
Dial by your location
+1 669 900 6833 US (San Jose)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 301 715 8592 US (Germantown)
+1 312 626 6799 US (Chicago)
+1 929 436 2866 US (New York)
Meeting ID: 994 6752 3030
Find your local number: https://umd.zoom.us/u/acO57d8fZ3