{"id":70384,"date":"2017-07-05T09:15:12","date_gmt":"2017-07-05T14:15:12","guid":{"rendered":"https:\/\/thesocietypages.org\/socimages\/?p=70384"},"modified":"2017-07-06T12:24:42","modified_gmt":"2017-07-06T17:24:42","slug":"algorithms-replace-your-biases-with-someone-elses-biases","status":"publish","type":"post","link":"https:\/\/thesocietypages.org\/socimages\/2017\/07\/05\/algorithms-replace-your-biases-with-someone-elses-biases\/","title":{"rendered":"Algorithms replace your biases with someone else&#8217;s"},"content":{"rendered":"<p><em>Originally posted at <a href=\"https:\/\/scatter.wordpress.com\/2017\/06\/13\/algorithmic-decisionmaking-replaces-your-biases-with-someone-elses-biases\/\">Scatterplot<\/a>.<\/em><\/p>\n<p>There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O\u2019Neil nicely shows in her\u00a0<em><a href=\"https:\/\/weaponsofmathdestructionbook.com\/\">Weapons of Math Destruction<\/a>\u00a0<\/em>(and associated <a href=\"https:\/\/www.bloomberg.com\/view\/contributors\/ATFPV0aLyJM\/catherine-h-oneil\">columns<\/a>), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O\u2019Neil studies are characterized by their opacity, their scale, and their capacity to damage.<\/p>\n<p>Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). 
Modern algorithms are proprietary, and are increasingly employed to inform sentencing and parole decisions for entire states.<\/p>\n<p>\u201cCode of Silence,\u201d Rebecca Wexler\u2019s new piece in\u00a0<em><a href=\"http:\/\/washingtonmonthly.com\/magazine\/junejulyaugust-2017\/code-of-silence\/\">Washington Monthly<\/a>,\u00a0<\/em>explores one such influential algorithm: COMPAS (also the subject of an extensive, if <a href=\"https:\/\/www.brookings.edu\/blog\/up-front\/2016\/08\/22\/are-criminal-risk-assessment-scores-racist\/\">contested<\/a>, <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">ProPublica report<\/a>). Like O\u2019Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.<\/p>\n<p>Beyond the issue of secrecy, though, one thing struck me in reading Wexler\u2019s account. One of the main justifications for a tool like COMPAS is that it reduces subjectivity in decisionmaking. That problem is real: we know that decisionmakers at every point in the criminal justice system treat white and black individuals differently, from who gets stopped and frisked to who receives the death penalty. Complex, secretive algorithms like COMPAS are supposed to help solve this problem by turning the process of making consequential decisions into a mechanically objective one \u2013 no subjectivity, no bias.<\/p>\n<p>But as Wexler\u2019s reporting shows, some of the variables that COMPAS considers (and apparently weighs quite heavily) are just as subjective as the process it was designed to replace. 
Questions <a href=\"https:\/\/www.documentcloud.org\/documents\/2702103-Sample-Risk-Assessment-COMPAS-CORE.html\">like<\/a>:<\/p>\n<blockquote><p>Based on the screener&#8217;s observations, is this person a suspected or admitted gang member?<\/p>\n<p>In your neighborhood, have some of your friends or family been crime victims?<\/p>\n<p>How often do you have barely enough money to get by?<\/p><\/blockquote>\n<p>Wexler reports on the case of\u00a0Glenn Rodr\u00edguez, a model inmate who was denied parole on the basis of his puzzlingly high COMPAS score:<\/p>\n<blockquote><p>Glenn Rodr\u00edguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. \u201cThis one guy, everything was the same except question 19,\u201d he said. \u201cI thought, this one answer is changing everything for me.\u201d Then another inmate with a \u201cyes\u201d for that question was reassessed, and the single input switched to \u201cno.\u201d His final score dropped on a ten-point scale from 8 to 1. This was no red herring.<\/p>\n<p>So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate\u2019s official disciplinary record. The other is question 19, which asks the evaluator, \u201cDoes this person appear to have notable disciplinary issues?\u201d<\/p>\n<p>Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York\u2019s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate\u2019s disciplinary record can reflect past biases in the prison\u2019s procedures, as when guards single out certain inmates or racial groups for harsh treatment. 
And question 19 explicitly asks for an evaluator\u2019s opinion. The system can actually end up compounding and obscuring subjectivity.<\/p><\/blockquote>\n<p>This story was all too familiar to me from Emily Bosk\u2019s <a href=\"https:\/\/socialwork.rutgers.edu\/faculty-staff\/emily-bosk\">work<\/a> on similar decisionmaking systems in child welfare, where case workers must answer similarly subjective questions about parental behaviors and problems to produce a seemingly objective score \u2013 a score then used to decide whether to remove children from their homes in cases of abuse and neglect. A statistical scoring system that takes subjective inputs (and it\u2019s hard to imagine one that doesn\u2019t) can\u2019t produce a perfectly objective decision. To put it differently: this sort of algorithmic decisionmaking replaces your biases with someone else\u2019s biases.<\/p>\n<p><em>Dan Hirschman is a professor of sociology\u00a0at Brown University. He writes for <a href=\"https:\/\/scatter.wordpress.com\/\">scatterplot<\/a> and is\u00a0an editor of the ASA blog <a href=\"http:\/\/workinprogress.oowsection.org\/\">Work in Progress<\/a>. You can follow him on <a href=\"https:\/\/twitter.com\/asociologist\">Twitter<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Originally posted at Scatterplot. There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. 
As Cathy O\u2019Neil nicely shows in her\u00a0Weapons of Math Destruction\u00a0(and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and [&hellip;]<\/p>\n","protected":false},"author":51,"featured_media":70385,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[23642,2056,100606,283,20063],"class_list":["post-70384","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-class-prejudicediscrimination","tag-crimelaw","tag-methods-big-data","tag-prejudicediscrimination","tag-raceethnicity-prejudicediscrimination"],"jetpack_featured_media_url":"https:\/\/thesocietypages.org\/socimages\/files\/2017\/07\/4.png","_links":{"self":[{"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/posts\/70384","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/users\/51"}],"replies":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/comments?post=70384"}],"version-history":[{"count":3,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/posts\/70384\/revisions"}],"predecessor-version":[{"id":70398,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/posts\/70384\/revisions\/70398"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/media\/70385"}],"wp:attachment":[{"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/media?parent=70384"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\
/v2\/categories?post=70384"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thesocietypages.org\/socimages\/wp-json\/wp\/v2\/tags?post=70384"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}