The philosopher Michel Foucault taught that sexual repression and taboos aren’t so much the repression of sex as evidence of obsession. I’m reminded of this lesson after reading a terrible story wherein a Georgia high school decides to make a presentation to hundreds of students and parents on what not to do on social media. In doing so, they project a photo of one student in her bikini, an image taken from her Facebook page [she says shared with ‘friends of friends’] without her permission. Her face is not blurred; in fact, her full name is printed below the image. Her photo, indeed, her body itself, is being projected to all these people as ‘what not to do’, her image and body construed as a problem, an example of how one should not present oneself in 2013. Humiliated by the school’s disrespectful and irresponsible behavior, the woman is suing. In trying to warn students of the dangers of posting online, the Georgia high school acted in exactly the dangerous way students—everyone—shouldn’t act.
Foucault might say this presentation, ostensibly about teaching modesty, is a fixation on this woman’s body, projected and blown up, to be morally dissected by eyes not dismissive of but consumed by sex. So obsessed with young women’s sexuality, the school becomes preoccupied with the women in the photos, echoing that now-familiar refrain that shames and blames the victims of privacy violations instead of focusing on the violators.
A school attorney, quoted in the story linked above, has responded that he “finds it perplexing that someone is suing for millions for a picture she herself posted on the internet.”
In this logic, merely posting anything to anyone, a basic fact of life in 2013, means you have no right to object when the content spreads beyond its intended audience; and those who spread it without permission have no responsibility. I’ve written about why this common victim-blaming approach to social photos is wrong a couple of times, on the Girls Around Me app and on a series of news stories about women being threatened with nude photos. And that’s what’s happening here.
The school’s motive is to make sure students (girls) know that posting photos of themselves in swimsuits is dangerous and wrong. The photos can get out further than you think they will. To make that point abundantly clear, the school went ahead and did just this itself, taking this one photo to make an example of and shame a particular woman. This is just one more example of the worthlessness of the common victim-blaming approach to digital discretion.
Instead, the biggest problem that people–especially young people, especially girls–face when it comes to sharing photos is that their peers–especially boys–too often share those photos beyond what was consented to. Schools—like the attorney quoted above—seem to take a “boys-will-be-boys” approach and instead shame girls for something as banal as having a photograph in a bathing suit. That photo is only risqué to the degree that a school would knowingly take it well beyond its intended audience. This isn’t a lesson about not taking swimsuit photos, it’s a lesson on why you should be careful not to shame and embarrass other people for simply having a body and happening to be a woman who is alive in 2013.
Managing one’s privacy is important, but what I want to see much, much more of is schools focusing on acknowledging and taking seriously the management, and care, of others’ privacy. To stop insisting that a social problem (sexism) is a personal problem to be solved by the victims. The school chose to send a message that emphasized managing only one’s own privacy without regard for others and the harm and humiliation that can be done by sharing photographs beyond intent.
The school has set a terrible precedent by doing precisely what they should be telling their students to avoid; ironically, all in the name of setting a good example. If someone should be pulled aside and made an example of, it isn’t the women being told to cover up but instead this school and others like it that think it’s okay to share an image beyond its intended audience. The focus should be on those who abuse consent, who violate privacy, instead of those being shamed for simply living their life—perhaps a tall order for a culture exhibiting such a paternalistic, “proactive”, Foucauldian obsession with young women’s sexuality.
As drones become increasingly autonomous, there is growing concern that they lack some fundamentally “human” capacity to make good judgment calls. The penultimate episode of this season’s Castle (yes, the Nathan Fillion-starring cheez-fest that is nominally a cop procedural)–titled “The Human Factor” (S5 E23)–addresses just this concern. In it, a bureaucrat explains how a human operator was able to trust his gut and, unlike the drone protocols the US military would have otherwise used, distinguish a car full of newlyweds from a car full of (suspected) insurgents. Somehow the human operator had the common sense that a drone, locked into the black and white world of binary code, lacked. This scene thus suggests that the “human factor” is that ineffable je ne sais quoi that prevents us humans from making tragically misinformed judgment calls.
In this view, drones are problematic because they don’t possess the “human factor”; they make mistakes because they lack the crucial information provided by “empathy” or “gut feelings” or “common sense”–faculties that give us access to kinds of information that even the best AI (supposedly) can’t process, because it’s irreducible to codable propositions. This information is contained in affective, emotional, aesthetic, and other types of social norms. It’s not communicated in words or logical propositions (which is what computer code is, a type of logical proposition), but in extra-propositional terms. Philosophers call this sort of knowledge and information “implicit understanding.” It’s a type of understanding you can’t put into words or logically-systematized symbols (like math). Implicit knowledge includes all the things you learn by growing up in a specific culture, as a specific type of person (gendered, raced, dis/abled, etc.)–it’s literally the “common sense” that’s produced through interpellation by hegemony. For example, if you hear a song and understand it as music, but can’t explicitly identify the key it’s in or the chord changes it uses, then you’re relying on implicit musical knowledge to understand the work. Walking is another example of an implicitly known skill: for most able-bodied people, walking is not a skill that is reducible to a set of steps that can be articulated in words. Because it can’t be put into words (or logical propositions/computer code), implicit understanding is transmitted through human experience–for example, through peer pressure, or through repetitive practice. I’m not the person to ask about whether or not AI will ever be able to “know” things implicitly and extra-propositionally. And it’s irrelevant anyway, because what I ultimately want to argue is that humans’ implicit understanding is actually pretty crappy, erroneous, and unethical to begin with.
Our “empathy” and “common sense” aren’t going to save us from making bad judgment calls–they in fact enable and facilitate erroneous judgments that reinforce hegemonic social norms and institutions, like white supremacy. Just think about stop-and-frisk, a policy that is widely known to be an excuse for racial profiling. Stop-and-frisk is a policy that allowed New York City police officers to search anyone who aroused, to use the NYPD’s own term, “reasonable suspicion.” As the term “reasonable” indicates, the policy requires police officers to exercise their judgment–to rely on both explicitly and implicitly known information to decide if there are good reasons to think a person is “suspicious.” Now, in supposedly post-racial America, a subject’s racial identity is not a reason one could publicly and explicitly cite as justification for increased police scrutiny. That’s racial profiling, and there is a general (if uneven) consensus that racial profiling is unethical and unjust.
When we rely on our implicit knowledge, we can do racial profiling without explicitly saying or thinking “race” (or “black” or “Latino”). And this is what the language of “reasonability” does: it allows officers to make judgments based on their implicit understanding of race and racial identities. “Seeming suspicious” is sufficient grounds to stop someone and search them. Officers didn’t have to cite explicit reasons; they could just rely on their gut feelings, their common sense, and other aspects of their implicit knowledge. In a white supremacist society like ours, dominant ways of knowing are normatively white; something seems reasonable because it is consistent with white hegemony (for more on this, see Linda Alcoff’s Visible Identities and Alexis Shotwell’s Knowing Otherwise). So it’s not at all surprising when, as Jorge Rivas puts it, “of those who were stopped and patted down for ‘seeming suspicious,’ 86 percent were black or Latino” (emphasis mine). White supremacy trains us to feel more threatened by non-whites and non-whiteness, and stop-and-frisk takes advantage of this.
In other words, our implicit understanding is just as fallible–in this case, racist–as, if not more than, any explicit knowledge. Human beings already make the bad, inhuman judgments that some fear from drones. Stop-and-frisk is just one example of how real people already suffer from our bad judgment. We’re really good at targeting threats to white supremacy, but really crappy at targeting actual criminals.
We make such bad calls when we rely on mainstream “common sense” because it is, to use philosopher Charles Mills’s term, an “epistemology of ignorance” (RC 18). Errors have been naturalized so that they seem correct, when, in fact, they aren’t. These “cognitive dysfunctions” seem correct because all the social cues we receive reinforce their validity; they are, as Mills puts it, “psychologically and socially functional” (RC 18). In other words, to be a functioning member of society, to be seen as “reasonable” and as having “common sense,” you have to follow this naturalized (if erroneous) worldview. White supremacy and patriarchy are two pervasive epistemologies of ignorance. They have trained our implicit understanding to treat feminine, non-white, and non-cis-gendered people as less than full members of society (or, in philosophical jargon, as less than full moral and political persons). Mainstream “common sense” actually encourages and justifies our inhumane treatment of others; the “human factor” is actually an epistemology of ignorance. So, maybe without it, drones will make better decisions than we do?
What if some or most of the anxiety over drone-judgment isn’t about its (in)accuracy, but about its explicitness? In stop-and-frisk, racial profiling was implicit in practice, but absent from explicit policy. In order to make the drones follow the same standard of “reasonability” that applied to the NYPD’s human officers, programmers would have to translate their racist implicit understanding into explicit, code-able propositions. So, what was implicit in practice would need to be made explicit in policy. In so doing, we would force our “post-racial” society’s hand, making it come clean about its ongoing racism.
Robin James is Associate Professor of Philosophy at UNC Charlotte. She blogs about philosophy, pop music, sound, and gender/race/sexuality studies at its-her-factory.blogspot.com.
One problem with taking social problems and re-framing them as individual responsibility is that it ends up blaming victims instead of pressuring root causes. This mentality creates a temptation to, for example, respond to the NSA scandal involving the government tapping into Internet traffic with something like, “well stop posting your whole life on Facebook, then”. Or less glib is the point raised many times this month that the habit of constant self-documentation on social media has made possible a state of ubiquitous government surveillance. The brutality of spying is made both possible and normal by the reality of digital exhibitionism. How can the level of government spying be so shocking in a world where people live-tweet their dinner? Perhaps we should stop digitally funneling so much of our lives through Gmail now that the level of surveillance is becoming clearer. Sasha Weiss writes in The New Yorker that,
Most of us react with horror to the idea that our online messages are in the hands of the government—in the sense of being collected in a massive stream of data and analyzed for suspicious patterns—but have no problem posting a photo of our kids, our wedding, or our lunch on Facebook or Instagram
This meshes with the surveillance studies literature that argues banal, voluntary, and habitual publicity makes us used to being watched and thus less concerned about our privacy, ultimately leading us to become complicit with surveillance in general over time. The structure of digitally mediated life as it exists is indeed highly compatible with the surveillance apparatus. As such, it is very easy to use this scandal as an excuse to critique our culture of digital publicity, a culture where intimate spying is an inevitable outcome.
However, we should always be a little skeptical of thinking with such inevitabilities. The “resistance is futile” fallacy—the Borg Complex—takes for granted that everything that can be seen by anyone will be seen by everyone, and, in the process, fundamentally misplaces who is responsible for violations of privacy. At play here is that pesky, predictable trend of making individuals responsible for social problems: When the government—or in another news cycle, a social media company—violates user privacy, the seemingly-helpful response is to advise individual users to change how they behave.
Yes, if you use the Web less, the NSA will have less of you within their Utah zettabytes (aside: that’s a better team name than ‘Utah Jazz’). But the bigger point is that you should be able to use the Web without being spied on in the first place. To take this social problem, treat it as an inevitability, and then place responsibility back on the very individuals being violated does nothing to address the root problem: government overreach in the name of “security.”
Breezily linking NSA spying with oversharing on social media misses that always crucial element: consent. Through the lens of consent, voluntarily posting photos of your vacation and the NSA having access to your emails are fundamentally different things. One does not inevitably need to lead to the other; in fact, we know that people who post more and are more public online tend to also enact more privacy measures. Privacy and publicity are not always antithetical, but often mutually-reinforcing. The goal shouldn’t be to ask individuals to stop living a digitally mediated life if they so choose but to make that mediation safer from violation in the first place.
That consensual exhibitionism makes nonconsensual spying possible may be technically right, but such a focus is morally wrong. Any response to the NSA scandal that ignores the importance of consent and instead places the responsibility for our own privacy, and the blame of its violation, back on us is untenable.
Lead image is cropped from a 1970 Newsweek cover, via.
Over the past few months, a lot of theoretical work has been done to further develop the concept of “digital dualism.” Following a provocation from Nicholas Carr, a number of thoughtful people have chimed in to help both further explicate and defend the theory. Their responses have been enlightening and are worth reading in full. They have also clarified a few things for me about the topic that I’d like to share here. Specifically, I’d like to do a bit of reframing regarding the nature of digital dualism, drawing upon this post by Nathan Jurgenson, then use this framework to situate digital dualism within a broader field of political disagreement and struggle.
In his reply to Carr, Jurgenson helpfully parses apart two distinct-but-related issues. (Technically he draws three distinctions, but I will only focus upon two here). First, Jurgenson identifies what he calls “ontological digital dualism theory,” a research project that he characterizes as focused upon that which exists. Such theory would seem to include all efforts that seek to explain (or call into question) the referents of commonly used terms such as “digital” or “virtual,” “physical” or “real.” In contrast to this ontological theory, he then identifies what might be called normative digital dualism theory—a branch of analysis concerned with the comparative value that is attributed to the categories established by one’s ontological position. Such theory would thus analyze the use of value-laden modifiers such as “real” or “authentic” in describing the “digital” or the “physical.”
I posit that digital dualism, in fact, draws from both the ontological and the normative analyses. Specifically the digital dualist:
- Establishes an ontological distinction that carves up the world into two mutually exclusive (and collectively exhaustive) categories—at least one of which is somehow bound up with digital technology (e.g., that which is “virtual” vs. that which is “real”.)
- Posits some normative criteria that privileges one category over the other. (In most cases, it is the non-technological category that is deemed morally superior. However, charges of digital dualism would equally apply to views that favored the technological.)
Often the normative ranking of Step 2 is built into the very names of the categories posited in Step 1, as Stéphane Vial has previously argued. For example, the commonly-deployed term “real” suggests that what goes on in the opposing “virtual” realm is somehow “fake”—a term with strong negative connotations. Other times the dualist will rely upon additional description to suggest the inferiority of the technological. But regardless of how this normative evaluation is communicated, the basic claim of the digital dualist is (usually) the same: The activities and modes of existence falling within the technological category are morally inferior to their non-technological counterparts.
This two-component view of digital dualism, whereby a normative hierarchy is superimposed onto (or, perhaps, built into) an ontological distinction, makes it possible to locate it within a broader constellation of conservative thought. Specifically, I will try to show that digital dualism’s two steps are built into a variety of conservative views—a recurrence that I argue is not a coincidence but a testament to the inherent conservatism of the dualist’s two-step move. Then, I will use this observation to suggest that digital dualism, too, is a deeply conservative ideology.
First, though, it is necessary to clarify what exactly it would mean for a given view to be “conservative.” Such a view, I contend, is one that serves either to justify existing social hierarchies (and delegitimize efforts to subvert or undermine those hierarchies) or to establish new ones. In other words, it provides reason for privileging one group over the rest, typically by justifying—sometimes tacitly or obliquely—why one group should either hold power over its inferiors or receive a greater share of social goods.
Rather than defend the centrality of hierarchy to conservatism here, I’ll instead gesture towards the work of other theorists who have sought to expose how a taste for hierarchy is common to otherwise-diverse conservative thinkers and movements. Corey Robin, for instance, has devoted an entire book to the subject, arguing that the “capitalists, Christians, and warriors” who comprise the conservative bloc are united in their “opposition to the liberation of men and women from the fetters of their superiors, particularly in the private sphere,” in the belief that an emancipated world would “lack the excellence of a world where the better man commands the worse.”1
Admittedly, including the positing of novel hierarchies within the scope of “conservatism” seems to push at the boundaries of the concept. However, the inclusion is both useful for political analysis—any critique of traditional hierarchies would likely apply to novel ones as well—and seems necessary for capturing conservative ideological defenses against new modes of existence (transphobia, for example, can be considered a reactionary defense of traditional ways of being built upon a newly-developed social hierarchy). Given this definition of conservatism, my central claim is that conservative ideologies tend to rest on the same two elements that define digital dualism—an ontological division that is then imbued with normative weight.
Consider a few suggestive examples.
In “The Aristocracy of Culture,” sociologist Pierre Bourdieu seeks to show how aesthetic and cultural preferences are deployed to establish hierarchy dividing the elites from the masses. He theorizes that cultural elites establish an ontological distinction between “pure taste” and “barbarous taste” to support the hierarchical notion of “a radical difference which seems to be inscribed in ‘persons’.”2 As an example, he quotes the philosopher Ortega y Gasset at length, noting in particular the latter’s assertions “that some possess an organ of understanding which others have been denied; that these are two distinct varieties of the human species,”3 and that:
The music of Stravinsky or the plays of Pirandello have the sociological power of obliging [the masses] to see themselves as they are, as the ‘common people’, a mere ingredient among others in the social structure, the inert material of the historical process, a secondary factor in the spiritual cosmos. By contrast, the young art helps the ‘best’ to know and recognize one another in the greyness of the multitude and to learn their mission, which is to be few in number and to have to fight against the multitude.4
Through these quotes, Bourdieu reveals how the ontological distinction between high and low art comes to be imbued with normative weight, with the enjoyment of the former coming to serve as an “affirmation of the superiority of those who can be satisfied with [high culture].”5 In this way, he concludes, “high” vs. “low” art is a distinction predisposed “to fulfill a social function of legitimating social differences.”6
Nietzsche, too—at least by one reading—legitimates inequality via the normative weighting of ontological categories. In On the Genealogy of Morals, Nietzsche posits (and seemingly endorses) a “noble” normative framework that privileges the “noble ones, we good, beautiful, happy ones”7 over the low, the common, and the bad. To clarify this distinction, he reframes the divide as one between the “blond beast prowling about avidly in search of spoil”8 and “the ill-constituted, sickly, weary and exhausted people of which Europe is beginning to stink,”9 and again as “Rome against Judea,” and “higher man” versus the “‘tame man,’ the hopelessly mediocre and insipid man,”10 among others. Through such descriptions, Nietzsche seeks to establish a clear ontological caste system—a framework from which he might call into question egalitarian ethical principles and normative positions. For why would one structure society around the needs of the pathetic? In this respect, Nietzsche’s position seems both to qualify as deeply conservative and to clearly mirror the two moves of the digital dualist.
Finally, white supremacism (among other forms of racial chauvinism) also seems to be characterized by an ontological insistence upon carving up the social world upon racial lines, a move that is then followed by a normative ranking of the groups. (Though white supremacists try to obfuscate this second move so as to make their ideology seem more palatable, such maneuvering doesn’t change the reality of their ideology).
In each of these cases, the same basic pattern of the two-step move reemerges. In a sense, this should be expected, as the normative ranking of Step 2 seems to be the only way to establish some sort of normative or justificatory underpinnings for affirming a conservative hierarchy. Yet to see this normative ranking embodied in actual conservative ideologies really clarifies the point, I think.
More importantly, these examples also make apparent the way in which digital dualism is also a conservative ideology. Though dualists tend to focus upon digital activities rather than people, it seems that such judgment inevitably expands so as to include those who engage in digital activities as well. If “virtual” friendships are shallower than “real” ones, what does that say about those who are drawn to the former? Indeed, just as the privileging of “high” over “low” art isn’t so much a judgment about the works themselves as it is about their enthusiasts, digital dualism is a way of developing a new hierarchy to separate those with a taste for the “physical” and the “real” from the digitally-inclined masses. And, in this way, it becomes the latest manifestation of a long tradition of developing social schemas that cast a particular group as inferior for the benefit of the privileged group. And though digital dualism is certainly not as extreme or seemingly oppressive as Nietzscheanism or white supremacism, the difference appears to be one of degree rather than kind.
By thus locating digital dualism within a broader field of conservative ideologies, it becomes possible to understand why the view strikes so many as problematic or even borderline-offensive. At least for those with egalitarian sensibilities, digital dualism could—and should—trigger the same alarm bells as Ortega y Gasset, Nietzsche, and David Duke, albeit at a lower volume.
Finally, recognizing the conservatism of digital dualism opens up new strategies for critique. Anti-dualists, in addition to formulating novel criticisms, might also bring to bear a broad array of egalitarian and anti-conservative thought and arguments on the issue at hand. Likewise, many of the arguments that have appeared on this blog might be usefully repurposed to aid in broader disputes between egalitarians and hierarchs. In this way, understanding digital dualism as a conservative ideology might serve to unify otherwise-diverse thinkers, bringing together and amplifying their theoretical work in the struggle against the ideologies of the right.
Jesse Elias Spafford enjoys reading the Internet and writing about power, politics, and pop culture. @jessespafford
Describing “types” of capitalisms, their components, the central logics they operate by is always a risky game: nothing is ever entirely new, there are always outliers, the different types always overlap, and so on. However, I’d like to speculate very briefly on a specific trend within Silicon Valley capitalism, what strikes me as an anti-capitalism sort of capitalism. I’m speaking of this type of capitalism not as something that is fully realized in reality, but as an “ideal type”, a hypothetical possibility whose validity in illuminating the world–or at least one small chunk of the contemporary economy–we can then assess. Mostly, I’m just musing on a smart, fun piece by Sam Biddle about the rhetoric of Tumblr founder David Karp before Yahoo’s acquisition of the site for one billion dollars.
The rhetoric is familiar for those who follow Silicon Valley and is indicative of a particular type of capitalism. Silicon Valley entrepreneurs often talk about not caring about profit, that they do what they do not to make money, and then subsequently cash in. Here, I am not referring to the notion that a Silicon Valley start-up should grow first and worry about profitability later–that’s a seemingly slower capitalism, but not anti-capitalism-capitalism–instead, I am talking about pretending not to care about profitability at all as a useful component of profitability. As Biddle writes,
As time went by, Karp went from saying he wasn’t “motivated” by revenue in that ’08 interview to even stronger anti-business proclamations: Advertisements on Tumblr would “turn his stomach.” He most recently, in grownup startup-speak, claimed profitability is “not a metric that is particularly important to [Tumblr].” In other words, making money is for chumps.
Last year, I identified this trend within one of Silicon Valley capitalism’s most important mouthpieces, TED, saying,
TED smells of corporatism. With the Facebook IPO around the corner, we are all well aware of the big venture-capital sums floating around Silicon Valley (the new Wall Street?). What’s infuriating is how Silicon Valley capitalism consistently attempts to sell itself as outside or even above corporatism. In announcing Facebook’s IPO, Mark Zuckerberg, whose company has consistently violated user privacy in the name of profit, stated that “we don’t build services to make money.” He actually said that.
All of this is admittedly speculative, but there certainly seems to be within Silicon Valley a sort-of capitalism against itself, a capitalism that not only appropriates anti-capitalism, not only uses it, but is founded on it. The notion of working to not make money is radically anti-capitalist in the strictest sense, even if we know that much of what Karp, Zuckerberg, and the rest do smells a lot like normal capitalism. Indeed, Biddle’s most provocative statement, for me, was,
Now Yahoo will own Tumblr, and Karp will be an immensely rich 20-something, because he refused to demonstrate that his company is worth anything
This may be an example of a larger trend towards a capitalism that doesn’t exist in spite of, doesn’t merely appropriate, but exists and even thrives precisely because of its anti-capitalist base. This isn’t exactly like how, say, punk so quickly got sold back to us by capitalists, or how capitalists have slapped Che’s face on t-shirts sold at Urban Outfitters, but instead a type of capitalism that is predicated, knowingly or unknowingly, on the idea of anti-capitalism. It’s not a capitalist logic that can co-opt anti-capitalism, but capitalism where anti-capitalism is an inherent part of its logic. Said differently, Silicon Valley’s habit of acting outside or above capitalism as an essential part of their business model is the essence of anti-capitalism-capitalism.
So: is this really a feature of a certain type of capitalism? Does Silicon Valley really operate by this logic very often? Are there other zones within the economy that operate by a similar logic? And what theories or theorists might be most useful in articulating this trend? I’m immediately reminded of Fred Turner’s brilliant work on Silicon Valley capitalism, particularly his Burning Man at Google talk, here:
Last, it seems logical to ask whether this is more or less insidious than capitalism in general. Some are still surprised to find out, say, Google is a for-profit company. Presumably, some people really believe Silicon Valley entrepreneurs when they say they don’t care about making money. This appears to be a capitalist attempt to hide capitalism, to exploit its wealth-generating capabilities without having to assume its responsibilities and drawbacks. The post-profit Silicon Valley has, for some, been quite profitable.