
Just about every one of our contributing authors has written a piece that challenges or refutes the claims made by tech journalists, industry pundits, or fellow academics. Part of the problem is technological determinism: the notion that technology has a unidirectional impact on society (e.g., Google makes us stupid, cell phones make us lonely). Popular discussions of digital technologies take on a very particular flavor of technological determinism, wherein the author claims that social activity on/in/through Friendster/New MySpace/Google+/Snapchat/Bing is inherently separate from the physical world. Nathan Jurgenson has given a name to this fallacy: digital dualism. Ever since Nathan posted Digital dualism versus augmented reality, I have been preoccupied with a singular question: where did this thinking come from? It’s too pervasive and readily accepted as truth to be a trendy idea or even a generational divide. Every one of Cyborgology’s regular contributors (and some of our guest authors) hears digital dualist rhetoric coming from their students. The so-called “digital natives” lament their peers’ neglect of “the real world.” Digital dualism’s roots run deep and can be found at the very core of modern thought. Indeed, digital dualism seems to predate the very technologies that it inaccurately portrays.

What evidence do we have that the beginnings of digital dualism have been with us for centuries? Obviously any such evidence would not mention yet-to-be-invented artifacts, but it would mention relatively new technology. (I’ll defend this conflation of new and digital technology later.) Let’s start with a quote from Plato’s Phaedrus, a Socratic dialogue that ends with a discussion of the merits of writing:

“In fact, it [writing] will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others…” [1]

Along the same lines are some historical quotes that WIRED magazine collected in 2006, just as Hillary Clinton was crusading against violent video games in the United States Senate. [The full list, titled “Culture Wars,” can be found here.]:

“The free access which many young people have to romances, novels, and plays has poisoned the mind and corrupted the morals of many a promising youth; and prevented others from improving their minds in useful knowledge. Parents take care to feed their children with wholesome diet; and yet how unconcerned about the provision for the mind, whether they are furnished with salutary food, or with trash, chaff, or poison?”
- Reverend Enos Hitchcock, Memoirs of the Bloomsgrove Family, 1790

“Does the telephone make men more active or more lazy? Does [it] break up home life and the old practice of visiting friends?”
- Survey conducted by the Knights of Columbus Adult Education Committee, San Francisco Bay Area, 1926

These two quotes, along with others about motion pictures, comic books, and video games, give us some perspective. Reverend Hitchcock bears a striking resemblance to the New York Times’ article about the New Digital Divide, and the Knights of Columbus sound a lot like Sherry Turkle. Both quotes, Turkle, and the New York Times all share a common set of prerequisite assumptions: 1) there exists a qualitative and categorical difference between the technology identified and past ways of doing things; 2) this change is deterministic in nature; and 3) the difference is the result of adding something unnecessary or superfluous to a system that was, while far from perfect, stable or more “natural.”

The popular movie series produced by South African apartheid supporters, “The Gods Must Be Crazy,” is an excellent example of the technological determinist fallacy. Such thinking can easily be used to justify oppression or the withholding of resources as an effort to “save them from themselves.”

These three basic assumptions, exemplified by the quotes above, appear to be rhetorically similar and cognitively related. They’re not identical arguments by any means, but I suspect that they all arise from what the German philosopher Ludwig Klages called “logocentrism,” or the privileging of the spoken word over other forms of communication. Klages was putting a name to what dozens of writers before him had already thought. Jean-Jacques Rousseau (18th century) and de Saussure (19th century) had already described the written word as exterior to, or a representation of, speech. It was not until the 1960s, when the French philosopher Jacques Derrida challenged logocentrism directly, that we began to see speech and written language on even footing. Both forms of communication, according to Derrida, are deferrals from arriving at true meaning; both speech and text point toward a third, pure idea that cannot be expressed. For example, the written word “computer” and the utterance “computer” are not computers, nor can they fully express the entire concept of everything encompassed by the author’s idea of a computer.

But what does all of this have to do with digital dualism? Plato’s fear that writing reduces one’s memory and Carr’s infamous provocation that Google is making us stupid sound the same, but are they related phenomena? I believe that logocentrism and digital dualism are closely related, and my reason has everything to do with masturbation. Or, more specifically, Rousseau’s opinions on masturbation. Rousseau claims at different points in his Confessions that masturbation is a supplement to nature: something constructed or virtual that competes with an existing real or natural phenomenon. Derrida, in his Of Grammatology, asserts that erotic thoughts not only precede sexual action (you think about what you do before you do it) but that there is no basis for finding sex any more “real” than auto-affective fantasies. This “logic of the supplement” mistakes something that was “always already” there for an unneeded addition. Derrida writes,

“Auto-affection is a universal structure of experience. All living things are capable of auto-affection. And only a being capable of symbolizing, that is to say of auto-affecting, may let itself be affected by the other in general.”[2]

Just as speech was privileged over the written word in ancient Greece, we tend to privilege the physical over the digital. A hardbound book is the real thing, while the ebook is something ephemeral or unnecessary. As our own Sarah Wanenchak describes it, “This feeling is instinctive, gut-level; it can drive us without us being explicitly aware of it.” My own print book collection and skimpy Kindle library are a testament to my own digital dualism. The feeling is so hard to shake, it seems, because the logic of the supplement is so pervasive. I don’t want to get too caught up in the different technological affordances of digital and physical copies in this post. Instead, I want to focus on how we sort technologies: what is readily considered a supplement, and what is considered natural, or part of the complete whole. My argument is a relatively simple one: I want to extend the boundaries of logocentrism to explicitly include digital media. We should treat Turkle, Carr, and The New York Times the same way Derrida treats Plato and Rousseau. Derrida is like the friend who cannot help but point out internal inconsistencies within movies. Your friend might point out that there’s no way Princess Leia can “remember images and feelings” of Padmé, because Padmé died in childbirth, while Derrida is more interested in how late Enlightenment scholars can hate on writing but still produce so much of it. Derrida calls this focus on internal paradoxes and contradictions a “double reading,” and it can be a useful tool for ferreting out digital dualism.[3]

Digital dualism is pretty easy to spot once you know about it, because the distinction is so glaring. It’s like noticing a chip on your iPhone’s screen. It’s a commonly held fallacy because centuries of Western thought force us to look at new technologies as unnecessary additions to some kind of completed whole. Derrida characterized the ancients’ fear of writing as a fear of the dead. The text lies there, unchanged by its audience or the discovery of new facts. It is horrifying in its lifelessness. If the written word is dead, then perhaps our fear of hypertext comes from its uncanny ability to mimic life. It is a modern-day Prometheus: animated dead text seeking a willing audience. Zombified letters and images projected onto the faces of the sorts of people we deem too dim-witted to know any better: the poor, the young, the other. The entirety of the post-modern and post-structuralist projects in social theory has been about questioning, displacing, and ultimately dismantling old boundaries. This is not because the boundaries are bad (although some are) but because dismantling them lets us see things in a brand new light.

The theory of augmented reality, the idea that digital technologies are not separate from but enmeshed in previously existing social structures, is not inherently uncritical of the technophile or of new digital tools. Far from it. Augmented reality is simply an application of the last 40 years of philosophy and social theory to our increasingly networked lives. It eschews the outmoded framework that leads to uncritical thought and privileged conclusions, that is, digital dualism.

This is a shortened version of a full paper accepted to Theorizing the Web 2013. The full paper is forthcoming. Thanks go to Britney Summit-Gil and the Cyborgology editors and fellow authors for confirming some initial assumptions about students.

David A. Banks is on Twitter and Tumblr


[1] Quoted from page 79 of Plato, Phaedrus. Indianapolis: Hackett, 1995.

[2] From Derrida, Jacques. Of Grammatology. Translated by Gayatri Chakravorty Spivak. Baltimore, MD: Johns Hopkins University Press, 1998.

[3] I would like to take this opportunity to note that I read Alone Together on a Kindle in a crowded car on the way to a conference. I spoke to no one.