The impacts of machine learning have rapidly spread across many facets of our everyday lives, for better or worse. While many have benefited from its ability to automate various processes, new problems have emerged alongside it, namely automated pattern discrimination.
Indiana’s automated review of Medicaid applications made headlines after it booted roughly a million individuals off of their healthcare plans, with the system flagging discrepancies in documentation as a “failure to cooperate.” In 2016, Broward County, Florida’s justice system was found to be using a risk-score algorithm to inform bail decisions, one under which black defendants were nearly twice as likely as white defendants to be wrongly labeled future criminals. These are only a couple of prominent instances of the adverse effects of automated pattern discrimination.
At the University of Toronto’s Centre for Ethics, Daniel Greene, a professor at the University of Maryland College of Information Studies, presents a materialist approach to understanding the kinds of problems found in artificial intelligence and machine learning, how technologists define these problems, and his proposed alternatives for ethical machine learning design.
You can watch Greene’s full presentation below or by following the link here.