As drones become increasingly autonomous, there is growing concern that they lack some fundamentally “human” capacity to make good judgment calls. The penultimate episode of this season’s Castle (yes, the Nathan Fillion-starring cheez-fest that is nominally a cop procedural)–titled “The Human Factor” (S5 E23)–addresses just this concern. In it, a bureaucrat explains how a human operator was able to trust his gut and, unlike the drone protocols the US military would have otherwise used, distinguish a car full of newlyweds from a car full of (suspected) insurgents. Somehow the human operator had the common sense that a drone, locked into the black-and-white world of binary code, lacked. The scene thus suggests that the “human factor” is that ineffable je ne sais quoi that prevents us humans from making tragically misinformed judgment calls.

In this view, drones are problematic because they don’t possess the “human factor”; they make mistakes because they lack the crucial information provided by “empathy” or “gut feelings” or “common sense”–faculties that give us access to kinds of information that even the best AI (supposedly) can’t process, because it’s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It’s not communicated in words or logical propositions (and computer code is a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information “implicit understanding.” It’s a type of understanding you can’t put into words or logically systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis/abled, etc.)–it’s literally the “common sense” that’s produced through interpellation by hegemony. For example, if you hear a song and understand it as music, but can’t explicitly identify the key it’s in or the chord changes it uses, then you’re relying on implicit musical knowledge to understand the work. Walking is another example of an implicitly known skill: for most able-bodied people, walking is not reducible to a set of steps that can be articulated in words. Because it can’t be put into words (or logical propositions/computer code), implicit understanding is transmitted through human experience–for example, through peer pressure, or through repetitive practice. I’m not the person to ask about whether or not AI will ever be able to “know” things implicitly and extra-propositionally. And it’s irrelevant anyway, because what I ultimately want to argue is that humans’ implicit understanding is actually pretty crappy, erroneous, and unethical to begin with.

Our “empathy” and “common sense” aren’t going to save us from making bad judgment calls–they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy widely known to be an excuse for racial profiling. Stop-and-frisk allowed New York City police officers to search anyone who aroused, to use the NYPD’s own term, “reasonable suspicion.” As the term “reasonable” indicates, the policy required police officers to exercise their judgment–to rely on both explicitly and implicitly known information to decide if there are good reasons to think a person is “suspicious.” Now, in supposedly post-racial America, a subject’s racial identity is not a reason one could publicly and explicitly cite as justification for increased police scrutiny. That’s racial profiling, and there is a general (if uneven) consensus that racial profiling is unethical and unjust.

When we rely on our implicit knowledge, we can engage in racial profiling without explicitly saying or thinking “race” (or “black” or “Latino”). And this is what the language of “reasonability” does: it allows officers to make judgments based on their implicit understanding of race and racial identities. “Seeming suspicious” was sufficient grounds to stop someone and search them. Officers didn’t have to cite explicit reasons; they could just rely on their gut feelings, their common sense, and other aspects of their implicit knowledge. In a white supremacist society like ours, dominant ways of knowing are normatively white; something seems reasonable because it is consistent with white hegemony (for more on this, see Linda Alcoff’s Visible Identities and Alexis Shotwell’s Knowing Otherwise). So it’s not at all surprising when, as Jorge Rivas puts it, “of those who were stopped and patted down for ‘seeming suspicious,’ 86 percent were black or Latino” (emphasis mine). White supremacy trains us to feel more threatened by non-whites and non-whiteness, and stop-and-frisk takes advantage of this.

In other words, our implicit understanding is just as fallible–in this case, just as racist–as any explicit knowledge, if not more so. Human beings already make the bad, inhuman judgments that some fear from drones. Stop-and-frisk is just one example of how real people already suffer from our bad judgment. We’re really good at targeting threats to white supremacy, but really crappy at targeting actual criminals.

We make such bad calls when we rely on mainstream “common sense” because it is, to use philosopher Charles Mills’s term, an “epistemology of ignorance” (RC 18). Errors have been naturalized so that they seem correct when, in fact, they aren’t. These “cognitive dysfunctions” seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, “psychologically and socially functional” (RC 18). In other words, to be a functioning member of society, to be seen as “reasonable” and as having “common sense,” you have to follow this naturalized (if erroneous) worldview. White supremacy and patriarchy are two pervasive epistemologies of ignorance. They have trained our implicit understanding to treat feminine, non-white, and non-cis-gendered people as less than full members of society (or, in philosophical jargon, as less than full moral and political persons). Mainstream “common sense” encourages and justifies our inhumane treatment of others; the “human factor” is actually an epistemology of ignorance. So, maybe without it, drones will make better decisions than we do?

What if some or most of the anxiety over drone judgment isn’t about its (in)accuracy, but about its explicitness? In stop-and-frisk, racial profiling was implicit in practice, but absent from explicit policy. In order to make drones follow the same standard of “reasonability” that applied to the NYPD’s human officers, programmers would have to translate their racist implicit understanding into explicit, code-able propositions. So, what was implicit in practice would need to be made explicit in policy. In so doing, we would force our “post-racial” society’s hand, making it come clean about its ongoing racism.

Robin James is Associate Professor of Philosophy at UNC Charlotte. She blogs about philosophy, pop music, sound, and gender/race/sexuality studies at its-her-factory.blogspot.com.