As drones become increasingly autonomous, there is growing concern that they lack some fundamentally “human” capacity to make good judgment calls. The penultimate episode of this season’s Castle (yes, the Nathan Fillion-starring cheez-fest that is nominally a cop procedural), titled “The Human Factor” (S5 E23), addresses just this concern. In it, a bureaucrat explains how a human operator was able to trust his gut and, unlike the drone protocols the US military would otherwise have used, distinguish a car full of newlyweds from a car full of (suspected) insurgents. Somehow the human operator had the common sense that a drone, locked into the black-and-white world of binary code, lacked. The scene thus suggests that the “human factor” is that ineffable je ne sais quoi that prevents us humans from making tragically misinformed judgment calls.
In this view, drones are problematic because they don’t possess the “human factor”; they make mistakes because they lack the crucial information provided by “empathy” or “gut feelings” or “common sense”–faculties that give us access to kinds of information that even the best AI (supposedly) can’t process, because it’s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It’s not communicated in words or logical propositions (and computer code is a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information “implicit understanding.” It’s a type of understanding you can’t put into words or logically systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis/abled, etc.)–it’s literally the “common sense” that’s produced through interpellation by hegemony. For example, if you hear a song and understand it as music, but can’t explicitly identify the key it’s in or the chord changes it uses, then you’re relying on implicit musical knowledge to understand the work. Walking is another example of an implicitly known skill: for most able-bodied people, walking is not a skill that is reducible to a set of steps that can be articulated in words. Because it can’t be put into words (or logical propositions/computer code), implicit understanding is transmitted through human experience–for example, through peer pressure, or through repetitive practice. I’m not the person to ask about whether or not AI will ever be able to “know” things implicitly and extra-propositionally. And it’s irrelevant anyway, because what I ultimately want to argue is that humans’ implicit understanding is actually pretty crappy, erroneous, and unethical to begin with.
Our “empathy” and “common sense” aren’t going to save us from making bad judgment calls–they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy that is widely known to be an excuse for racial profiling. Stop-and-frisk allowed New York City police officers to search anyone who aroused, to use the NYPD’s own term, “reasonable suspicion.” As the term “reasonable” indicates, the policy required police officers to exercise their judgment–to rely on both explicitly and implicitly known information to decide whether there were good reasons to think a person “suspicious.” Now, in supposedly post-racial America, a subject’s racial identity is not a reason one could publicly and explicitly cite as justification for increased police scrutiny. That’s racial profiling, and there is a general (if uneven) consensus that racial profiling is unethical and unjust.
When we rely on our implicit knowledge, we can engage in racial profiling without explicitly saying or thinking “race” (or “black” or “Latino”). And this is what the language of “reasonability” does: it allows officers to make judgments based on their implicit understanding of race and racial identities. “Seeming suspicious” was sufficient grounds to stop and search someone. Officers didn’t have to cite explicit reasons; they could just rely on their gut feelings, their common sense, and other aspects of their implicit knowledge. In a white supremacist society like ours, dominant ways of knowing are normatively white; something seems reasonable because it is consistent with white hegemony (for more on this, see Linda Alcoff’s Visible Identities and Alexis Shotwell’s Knowing Otherwise). So it’s not at all surprising that, as Jorge Rivas puts it, “of those who were stopped and patted down for ‘seeming suspicious,’ 86 percent were black or Latino” (emphasis mine). White supremacy trains us to feel more threatened by non-whites and non-whiteness, and stop-and-frisk takes advantage of this.
In other words, our implicit understanding is just as fallible–in this case, just as racist–as any explicit knowledge, if not more so. Human beings already make the bad, inhuman judgments that some fear from drones. Stop-and-frisk is just one example of how real people already suffer from our bad judgment. We’re really good at targeting threats to white supremacy, but really crappy at targeting actual criminals.
We make such bad calls when we rely on mainstream “common sense” because it is, to use philosopher Charles Mills’s term, an “epistemology of ignorance” (RC 18). Errors have been naturalized so that they seem correct when, in fact, they aren’t. These “cognitive dysfunctions” seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, “psychologically and socially functional” (RC 18). In other words, to be a functioning member of society, to be seen as “reasonable” and as having “common sense,” you have to follow this naturalized (if erroneous) worldview. White supremacy and patriarchy are two pervasive epistemologies of ignorance. They have trained our implicit understanding to treat feminine, non-white, and non-cis-gendered people as less than full members of society (or, in philosophical jargon, as less than full moral and political persons). Mainstream “common sense” actually encourages and justifies our inhumane treatment of others; the “human factor” is actually an epistemology of ignorance. So, maybe without it, drones will make better decisions than we do?
What if some or most of the anxiety over drone-judgment isn’t about its (in)accuracy, but about its explicitness? In stop-and-frisk, racial profiling was implicit in practice, but absent from explicit policy. In order to make the drones follow the same standard of “reasonability” that applied to the NYPD’s human officers, programmers would have to translate their racist implicit understanding into explicit, code-able propositions. So, what was implicit in practice would need to be made explicit in policy. In so doing, we would force our “post-racial” society’s hand, making it come clean about its ongoing racism.
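To make concrete what that translation would require, here is a minimal, purely hypothetical sketch (the function, class, and field names are mine, not any real system’s or agency’s): an automated “reasonable suspicion” check can only act on criteria that someone has explicitly written down, where they can be read, logged, and contested.

```python
# Hypothetical sketch only: what an explicit, auditable "reasonable suspicion"
# check would have to look like. There is no field for a gut feeling; any
# criterion an officer's implicit understanding supplies would have to be
# named here, in code, where it becomes explicit policy.

from dataclasses import dataclass


@dataclass
class Observation:
    """Explicitly named facts about an encounter (fields are illustrative)."""
    matches_specific_suspect_description: bool
    observed_committing_offense: bool


def reasonable_suspicion(obs: Observation) -> bool:
    """Return True only when an explicitly stated criterion is met.

    Note what cannot appear: "seems suspicious" is not a codable proposition.
    Any proxy for race would have to be written in as a named criterion,
    turning what was implicit in practice into explicit policy.
    """
    return (obs.matches_specific_suspect_description
            or obs.observed_committing_offense)


# An encounter with no explicitly stated grounds yields no stop.
print(reasonable_suspicion(Observation(False, False)))  # False
```

The point of the sketch is not that these two criteria are the right ones; it’s that whatever criteria get used must appear somewhere in the code–which is exactly the kind of explicitness that implicit understanding lets human officers avoid.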
Robin James is Associate Professor of Philosophy at UNC Charlotte. She blogs about philosophy, pop music, sound, and gender/race/sexuality studies at its-her-factory.blogspot.com.
Comments 6
ChrisA — June 25, 2013
That's a fascinating idea - automation in policy as a sort of 'brinksmanship', daring the Right to double-down on their racist/sexist convictions if they're so keen on using drone technology. And for the Left, it's almost a reversal of the idea of digital liberation from embodied forms of inequality; instead, we're forcing technology to recognize our embodied selves as a way of flushing the implicit structures of discrimination into the open. Wonderful article, Robin.
Atomic Geography — June 26, 2013
I don't think we can function as self-aware beings without our "epistemolog(ies) of ignorance". On a logistical level, not using factual distortions and shortcuts ("sunrise" becomes a discussion of how celestial bodies move) becomes cumbersome to impossible. Ethically, we constantly shuttle between deontological and consequentialist modes. The inherent contradictions and paradoxes of this are, I think, unavoidable.
Which is not to say that we should uncritically accept them. Rather, they, and the critical examination of them, are part of the self-optimizing process.
As far as drones specifically, perhaps you would find my three-part post on them interesting. Part 1 is here http://atomicgeography.com/2013/02/26/drone-strikes-in-the-uncanny-valley-part-1/
Follow the pingbacks to the other 2 parts.
Asking Computers What Our Ethics Are — July 3, 2013
[...] an essay on drone policy, Cyborgology is skeptical of our intuitive approach to ethics and empathy, for many of the same reasons as psychologist Paul Bloom. In the Cyborgology piece, Robin James [...]
Alana — July 29, 2013
I think it's important that we observe and refine our criteria both as intuitive human beings, and as the programmers of AI that assist us. This sort of ethical dilemma is indeed a part of our evolution; we must not only refine technological decision-making but also refine ourselves. And some of us recognize it. What an amazing time we live in!
Most people took racism and patriarchy for granted even 50 years ago, and we barely had the technology to reach the moon. I am so excited about how fast we're growing, even when we backslide. Backsliding could, at this point, wipe us all out in a heartbeat... but maybe we'll get lucky.
BTW, Castle may be cheesy, but it's realllllllly good cheese :-D
ICYMI: My “Stop & Drone” post over on Cyborgology — August 12, 2014
[…] my first guest post is up. It’s on drones, human judgment, and implicit understanding. Here’s a snippet: In other […]