{"id":16041,"date":"2013-06-24T10:44:30","date_gmt":"2013-06-24T14:44:30","guid":{"rendered":"http:\/\/thesocietypages.org\/cyborgology\/?p=16041"},"modified":"2013-06-24T10:44:30","modified_gmt":"2013-06-24T14:44:30","slug":"stop-and-drone","status":"publish","type":"post","link":"https:\/\/thesocietypages.org\/cyborgology\/2013\/06\/24\/stop-and-drone\/","title":{"rendered":"Stop And Drone"},"content":{"rendered":"<p dir=\"ltr\"><a href=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-16043 aligncenter\" alt=\"police-drone-graffiti\" src=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti-500x282.jpg\" width=\"500\" height=\"282\" srcset=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti-500x282.jpg 500w, https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti-250x141.jpg 250w, https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti-400x226.jpg 400w, https:\/\/thesocietypages.org\/cyborgology\/files\/2013\/06\/police-drone-graffiti.jpg 1375w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a>As drones become increasingly autonomous, there is growing concern that they lack some fundamentally \u201chuman\u201d capacity to make good judgment calls. The penultimate episode of this season\u2019s Castle (yes, the Nathan Fillion-starring cheez-fest that is nominally a cop procedural)&#8211;titled \u201cThe Human Factor\u201d (S5 E23)&#8211;addresses just this concern. In it, a bureaucrat explains how a human operator was able to trust his gut and, unlike the drone protocols the US military would have otherwise used, distinguish a car full of newlyweds from a car full of (suspected) insurgents. Somehow the human operator had the common sense that a drone, locked into the black and white world of binary code, lacked.
This scene thus suggests that the \u201chuman factor\u201d is that ineffable je ne sais quoi that prevents us humans from making tragically misinformed judgment calls.<!--more--><\/p>\n<p>In this view, drones are problematic because they don\u2019t possess the \u201chuman factor\u201d; they make mistakes because they lack the crucial information provided by \u201cempathy\u201d or \u201cgut feelings\u201d or \u201ccommon sense\u201d&#8211;faculties that give them access to kinds of information that even the best AI (supposedly) can\u2019t process, because it\u2019s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It\u2019s not communicated in words or logical propositions (which is what computer code is, a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information \u201cimplicit understanding.\u201d It\u2019s a type of understanding you can\u2019t put into words or logically-systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis\/abled, etc.)&#8211;it\u2019s literally the \u201ccommon sense\u201d that\u2019s produced through interpellation by hegemony. For example, if you hear a song and understand it as music, but can\u2019t explicitly identify the key it\u2019s in or the chord changes it uses, then you\u2019re relying on implicit musical knowledge to understand the work. Walking is another example of an implicitly known skill: for most able-bodied people, walking is not a skill that is reducible to a set of steps that can be articulated in words. Because it can\u2019t be put into words (or logical propositions\/computer code), implicit understanding is transmitted through human experience&#8211;for example, through peer pressure, or through repetitive practice.
I\u2019m not the person to ask about whether or not AI will ever be able to \u201cknow\u201d things implicitly and extra-propositionally. And it\u2019s irrelevant anyway, because what I ultimately want to argue is that humans\u2019 implicit understanding is actually pretty crappy, erroneous, and unethical to begin with.<\/p>\n<p>Our \u201cempathy\u201d and \u201ccommon sense\u201d aren\u2019t going to save us from making bad judgment calls&#8211;they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy that is widely known to be an excuse for racial profiling. Stop-and-frisk is a policy that allowed New York City police officers to search anyone who aroused, to use the NYPD\u2019s own term, \u201c<a href=\"http:\/\/colorlines.com\/archives\/2013\/02\/nypd_breaks_down_stop-frisk_data_by_precinct_and_race.html\">reasonable suspicion<\/a>.\u201d As the term \u201creasonable\u201d indicates, the policy requires police officers to exercise their judgment&#8211;to rely on both explicitly and implicitly known information to decide if there are good reasons to think a person is \u201csuspicious.\u201d Now, in supposedly post-racial America, a subject\u2019s racial identity is not a reason one could publicly and explicitly cite as justification for increased police scrutiny. That\u2019s racial profiling, and there is a general (if uneven) consensus that racial profiling is unethical and unjust.<\/p>\n<p>When we rely on our implicit knowledge, we can do racial profiling without explicitly saying or thinking \u201crace\u201d (or \u201cblack\u201d or \u201cLatino\u201d). And this is what the language of \u201creasonability\u201d does: it allows officers to make judgments based on their implicit understanding of race and racial identities.
\u201c<a href=\"http:\/\/colorlines.com\/archives\/2013\/04\/nypd_commissioner_ray_kelly_wanted_to_instill_fear_in_black_and_latino_men.html\">Seeming suspicious<\/a>\u201d is sufficient grounds to stop someone and search them. Officers didn\u2019t have to cite explicit reasons; they could just rely on their gut feelings, their common sense, and other aspects of their implicit knowledge. In a white supremacist society like ours, dominant ways of knowing are normatively white; something seems reasonable because it is consistent with white hegemony (for more on this, see Linda Alcoff\u2019s Visible Identities and Alexis Shotwell\u2019s Knowing Otherwise). So it\u2019s not at all surprising when, as Jorge Rivas <a href=\"http:\/\/colorlines.com\/archives\/2013\/04\/nypd_commissioner_ray_kelly_wanted_to_instill_fear_in_black_and_latino_men.html\">puts<\/a> it, \u201cof those who were stopped and patted down for \u201cseeming suspicious,\u201d 86 percent were black or Latino\u201d (emphasis mine). White supremacy trains us to feel more threatened by non-whites and non-whiteness, and stop-and-frisk takes advantage of this.<\/p>\n<p>In other words, our implicit understanding is just as fallible&#8211;in this case, racist&#8211;as any explicit knowledge, if not more so. Human beings already make the bad, inhuman judgments that some fear from drones. Stop-and-frisk is just one example of how real people already suffer from our bad judgment. We\u2019re really good at targeting threats to white supremacy, but really crappy at targeting actual criminals.<\/p>\n<p>We make such bad calls when we rely on mainstream \u201ccommon sense\u201d because it is, to use philosopher Charles Mills\u2019s term, an \u201cepistemology of ignorance\u201d (RC 18). Errors have been naturalized so that they seem correct, when, in fact, they aren\u2019t.
These \u201ccognitive dysfunctions\u201d seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, \u201cpsychologically and socially functional\u201d (<a href=\"http:\/\/bmorereadinggroup.files.wordpress.com\/2012\/04\/ebooksclub-org__the_racial_contract.pdf\">RC <\/a>18). In other words, to be a functioning member of society, to be seen as \u201creasonable\u201d and as having \u201ccommon sense,\u201d you have to follow this naturalized (if erroneous) worldview. White supremacy and patriarchy are two pervasive epistemologies of ignorance. They have trained our implicit understanding to treat feminine, non-white, and non-cis-gendered people as less than full members of society (or, in philosophical jargon, as less than full moral and political persons). Mainstream \u201ccommon sense\u201d actually encourages and justifies our inhumane treatment of others; the \u201chuman factor\u201d is actually an epistemology of ignorance. So, maybe without it, drones will make better decisions than we do?<\/p>\n<p>What if some or most of the anxiety over drone-judgment isn\u2019t about its (in)accuracy, but about its explicitness? In stop-and-frisk, racial profiling was implicit in practice, but absent from explicit policy. In order to make the drones follow the same standard of \u201creasonability\u201d that applied to the NYPD\u2019s human officers, programmers would have to translate their racist implicit understanding into explicit, code-able propositions. So, what was implicit in practice would need to be made explicit in policy. In so doing, we would force our \u201cpost-racial\u201d society\u2019s hand, making it come clean about its ongoing racism.<\/p>\n<p><em>Robin James is Associate Professor of Philosophy at UNC Charlotte.
She blogs about philosophy, pop music, sound, and gender\/race\/sexuality studies at\u00a0<a dir=\"ltr\" title=\"http:\/\/its-her-factory.blogspot.com\" href=\"http:\/\/t.co\/Z4Cs6rcKQA\" target=\"_blank\" rel=\"nofollow\" data-expanded-url=\"http:\/\/its-her-factory.blogspot.com\">its-her-factory.blogspot.com<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>While people complain that drones lack human &#8220;common sense&#8221;, policies like stop and frisk demonstrate that human common sense isn&#8217;t all it&#8217;s cracked up to be to begin with.<\/p>\n","protected":false},"author":559,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[10006],"tags":[22866,10794,1067,267,82,22865,12],"class_list":["post-16041","post","type-post","status-publish","format-standard","hentry","category-guest-author","tag-common-sense","tag-drones","tag-human","tag-knowledge","tag-racism","tag-stop-and-frisk","tag-technology"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/16041","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/users\/559"}],"replies":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/comments?post=16041"}],"ver
sion-history":[{"count":3,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/16041\/revisions"}],"predecessor-version":[{"id":16045,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/16041\/revisions\/16045"}],"wp:attachment":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/media?parent=16041"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/categories?post=16041"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/tags?post=16041"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}