This xkcd comic humorously highlights a seeming tension in WikiLeaks’ so-called “anti-secrecy agenda”: While secrecy facilitates the systemic abuses of institutional power that WikiLeaks opposes, it also protects extra-institutional actors working to disrupt conspiracies (i.e., uneven distributions of information) that benefit the few at the expense of the many. However, as I discuss in a recent Cyborgology post and in a chapter (co-authored with Nathan Jurgenson) for a forthcoming WikiLeaks reader, Julian Assange’s approach to secrecy is far more sophisticated than unconditional opposition. For example, he explains in a 2010 TIME interview:

secrecy is important for many things but shouldn’t be used to cover up abuses, which leads us to the question of who decides and who is responsible. It shouldn’t really be that people are thinking about: Should something be secret? I would rather it be thought: Who has a responsibility to keep certain things secret? And, who has a responsibility to bring matters to the public? And those responsibilities fall on different players. And it is our responsibility to bring matters to the public.

Assange is saying that secrecy is not a problem in and of itself; in fact, society generally benefits when individuals and extra-institutional actors are able to maintain some level of secrecy. Secrecy only becomes a problem when it occurs in institutional contexts, because institutions have an intrinsic tendency to control information in order to benefit insiders. This conspiratorial nature of institutions is what WikiLeaks truly opposes, and enforced transparency (i.e., leaking) is merely a tactic in that struggle. For this reason, WikiLeaks and Anonymous (the extra-institutional Internet community and hacker collective) are allies, despite the superficial tension highlighted in this comic.

Credit: Andrew Hoppin

Cyborgology editors Nathan Jurgenson (@nathanjurgenson) and PJ Rey (@pjrey) live-tweeted Personal Democracy Media’s From the Tea Party to Occupy Wall Street and Beyond: A Flash Conference. Below is an archive of the conference backchannel (also here) as well as the video from the event.

Conference Description

Across America and the world, millions of people are entering the public arena and, using social and collaborative media, forming powerful new networks for change. The result is a rising wave of challenges to the political order that are expanding the boundaries of political discourse and forcing new issues into the conversation. At this event, we’ll hear from leading organizers and observers of these new movements, and explore questions like:

  • Are these movements leaderless, or leaderfull? And either way, how do they make decisions?
  • Are these movements working within the system or trying to create a new one?
  • Getting co-opted: A danger or a sign that you’re winning?
  • Is a group still its own worst enemy?
  • New media and over-communication: how do these movements manage the cacophony they help create?
  • Is networked democracy to top-down politics what citizen media is to broadcast media?

Join us to explore these questions with:

  • Ori Brafman, co-author, The Starfish and the Spider
  • Beka Economopoulos, organizer, Occupy Wall Street
  • Marianne Manilov, co-founder, The Engage Network
  • Mark Meckler, co-founder, Tea Party Patriots
  • Jessica Shearer, faith and labor organizer, Occupy Wall Street
  • Clay Shirky, NYU, author, Here Comes Everybody
  • Zeynep Tufekci, University of North Carolina

moderated by Micah L. Sifry and Andrew Rasiej, Personal Democracy Media


It’s a notable coincidence that Steve Jobs died exactly two decades after Neal Stephenson completed Snowcrash, arguably the last great Cyberpunk novel. Stephenson’s and Jobs’ work exemplified two alternative visions of humans’ relationship with technology in the Digital Age. Snowcrash offers a gritty, dystopian vision of a world where technology works against human progress as much as it works on behalf of it. Strong individuals must assert themselves against technological slavery, though, ironically, they rely on technology and their technological prowess to do so.

Apple, on the other hand, tells us that the future is now, offering lifestyle devices that are slick (some might say, sterile). Despite being mass produced, these devices are supposed to bolster our individuality by communicating our superior aesthetic standards. Above all, Apple offers a world where technology is user-friendly and requires little technical competency. We need not liberate ourselves from technology; there’s an app for that.


Values and style are inextricably linked (as Marshall McLuhan famously preached). So, unsurprisingly, the differences between Apple’s view of the future and that of Cyberpunk authors such as Stephenson run far deeper. The Cyberpunk genre has a critical mood that is antithetical to Apple’s mission of pushing its products into the hands of as many consumers as possible. The clean, minimalist styling of Apple devices makes a superficial statement about the progressive nature of the company, while the intuitive interface makes us feel that Apple had us in mind when designing the product—that human experience is valued, that they care. Of course, this is all a gimmick. Apple invokes style to “enchant” its products with an aura of mystery and wonderment while simultaneously deflecting questions about how the thing actually works (as discussed in Nathan Jurgenson & Zeynep Tufekci’s recent “Digital Dialogue” presentation on the iPad). Apple isn’t selling a product; it’s selling an illusion. And to enjoy it (as I described in a recent essay), we must suspend disbelief and simply trust in the “Mac Geniuses”—just as we must allow ourselves to believe in an illusionist if we hope to enjoy a magic show. Thus, the values coded into Apple products are passivity and consumerism; it is at this level that Apple is most distinct from the Cyberpunk movement.

In “The Gernsback Continuum,” William Gibson—Cyberpunk’s best-known author—makes a similar critique of 1930s Futurism / Art Deco for its naive optimism, in which the present masquerades as a Utopian future. Gibson explains:

The Thirties had seen the first generation of American industrial designers; until the Thirties, all pencil sharpeners had looked like pencil sharpeners—your basic Victorian mechanism, perhaps with a curlicue of decorative trim. After the advent of the designers, some pencil sharpeners looked as though they’d been put together in wind tunnels. For the most part, the change was only skin-deep; under the streamlined chrome shell, you’d find the same Victorian mechanism. Which made a certain kind of sense, because the most successful American designers had been recruited from the ranks of Broadway theater designers. It was all a stage set, a series of elaborate props for playing at living in the future.

Today’s popular consumer electronic devices—typified by Apple products—have revived this futurist pattern of enveloping technology in a fantastical veneer, though exchanging chrome and Bakelite for brushed aluminum and pristine white plastic. The important parallel between Gibson’s pencil sharpener and the iPhone is that, by burying the inner workings of its devices in non-openable cases and non-modifiable interfaces, Apple diminishes user agency—instead fostering naïveté and passive acceptance.

Jobs delivered consumers a clean, safe future in the here and now—a future which, paradoxically, brings pollution and exploitation to much of the rest of the world, which is not so lucky as to indulge in this illusion (a story told in this game, which, tellingly, has been censored by Apple’s app store). The problem with this sort of fetishistic techno-Utopianism—as 30s Futurism and the Apple corporation both demonstrate—is that, by living with our heads in an idyllic future, we tend to ignore, or simply paper over, the problems of the present. In conventional Marxian terms, we might say Utopian Futurism is a form of “false consciousness,” a buoyant ideology disconnected from real material conditions. Futurism is also, arguably, a post-Modern phenomenon, insofar as it implodes the present and the future. Of course, because this future has not yet happened, it is an imaginary future. In attempting to pass the present off as an imaginary future, Futurism negates both, creating a simulation (present qua future) of a simulation (the imaginary future)—what philosophers call a simulacrum. By asserting that the future has arrived, Futurism negates the real conditions of the present as well as any constructive imaginings of the future.

Even before the Internet Age blossomed, the Cyberpunk movement anticipated the potential for this new breed of (cyber-)Utopianism and offered itself as a sort of vaccine against the irrational exuberance that we, nevertheless, witnessed in the 1990s. The genre is characterized by the marriage of a deep interest in (and embrace of) modern technology with pessimism regarding the potential social consequences of this technology’s pervasive use. Far from being techno-evangelists, Cyberpunk authors warn against a future they nonetheless portray as inevitable. Loss of individual liberty is almost invariably a central concern—however, scenarios created by Cyberpunk authors tend to promote an anarchist (as opposed to libertarian) concept of freedom. Stephenson, in particular, portrays worlds in which government institutions have become ineffective at regulation, so that life-or-death decisions are left to the whims of market forces. In light of such conditions, Stephenson details a world with stark contrasts between those who have power and status and those left to languish on the margins. In Snowcrash, for example, a wealthy media baron drugs tens of thousands of people and drags them off to an enormous chain of rafts where they are used as living hosts to breed a virus. Under such conditions of extreme exploitation, a neoliberal ideology that encourages personal expression through consumer devices—as celebrated by Apple—hardly seems plausible. (Admittedly, in Stephenson’s Diamond Age, a lead protagonist finds salvation through a self-customizing virtual reality device, but it is not fetishized; it works because it allows her to make a meaningful, albeit indirect, connection with another human being.)

Cyberpunk’s allegiance is not to technology itself, but to a culture that values freedom for individuals. Technology is a means to an end. However, it would be misleading to say that the wired and well-equipped characters merely use technology in pursuit of liberation. Technology is not a “neutral” thing to be consumed when convenient. Instead, technology itself becomes a site of struggle. In fact, the focal technologies in these narratives are often destructive or oppressive by nature. The protagonists are generally hackers who must subvert the intentions of the technology’s creators by fundamentally altering the nature of the technology itself to function in accordance with a competing set of values. In short, the protagonists and the antagonists are wrestling to shape technology to best fit their own goals and values.

In Snowcrash, for example, Stephenson imagines a globally-connected digital environment that in many ways prefigures the modern Internet; though, Stephenson’s choice to label it “the Metaverse” is superior to the “virtual reality” moniker that ultimately prevailed in our culture, because the “meta-” prefix acknowledges an intrinsic connection to, or referencing of, the existing physical world, its values, and its pre-established power relations. In the story, the Metaverse is owned and controlled by the antagonist, who uses it as a tool of oppression. Stephenson envisions a world where digital information can physically manipulate and control the brains of programmers who are fluent in programming languages. In this case, the struggle to control the code is, literally, an existential struggle.

In contrast to consumer-oriented Futurism, Cyberpunk isn’t pretty. The environments are coarse and polluted from centuries of human abuse. The settings offer an ideal-type of messy, “augmented reality,” with Tron‘s de-rezzer making literal the transformation from atoms to bits and The Diamond Age‘s synthesizers exemplifying the conversion of bits into atoms. Characters have a visceral relationship with technology, which they both depend on and are violated by. Returning again to The Diamond Age, we find swarms of nano-bots that invade, and even wage war inside of, human bodies. Yet, at the same time, these nano-bots also protect their hosts from outside pollutants. The Cyberpunk imagination bends the formal distinction between human and machine until it has little practical meaning. In this sense, we might call Cyberpunk characters “post-human.” However, these characters are still very much flesh and blood. Snowcrash introduces us to a breed of cyborgs called “gargoyles” that are burdened with heavy computer components and goggles and that, retrospectively, appear quite low-tech. As I discussed in a previous essay, when tools are so conspicuous, the limits of our (default) bodies remain readily apparent. Yet, even Blade Runner‘s fully synthetic replicants—in approximating the human body—must confront its limits. There is no Utopian transcendence of flesh and blood. It is in this context that the otherwise gratuitous violence filling the genre’s pages and frames finds meaning.

Cyberpunk authors in general, and Stephenson in particular, also view technology as contributing to a decline in centralized authority, which is supplanted by a patchwork of various organizations that are, at the same time, both more local and more global (i.e., “glocal“) than traditional states. The lack of a central government produces a Wild West type atmosphere, where danger and violence are pervasive, creating the conditions for a particularly masculine breed of heroism. This recourse to male-centered, rugged individualism is, undoubtedly, the movement’s weak spot—a problem that was practically begging to be remedied by feminist technoscience. When it comes to understanding individuals, these universes channel a bit too much Ayn Rand (and, perhaps, not enough Michel Foucault) to be quite believable.

Roy Batty (from Blade Runner): "Quite an experience to live in fear, isn't it? That's what it is to be a slave."

Nevertheless, the Cyberpunk genre has much to offer technology researchers because, as author Bruce Sterling explains, for Cyberpunks, “extrapolation, technological literacy—are not just literary tools but an aid to daily life. They are means of understanding and highly valued.” At its height in the 1980s and early 1990s, Cyberpunk gave us an ambivalent glimpse into the future—one where our lives are increasingly “augmented” by complex technologies. Most of all, it warns against the fetishization of technology. We must never be dazzled into forgetting that technology is a site of power; the consequences of falling prey to this illusion are, well, dystopian.

Follow PJ Rey on Twitter: @pjrey

With police dismantling Zuccotti Park and other #Occupy encampments throughout the country and winter weather impending, pundits and activists alike are asking: Does the #Occupy movement have a future? To survive, #Occupy must begin—and, in fact, has already begun—a tactical shift. However, before I attempt to discuss #Occupy’s future, let me first be clear: The #Occupy movement is already a success. Recent months have witnessed a radical shift in mainstream political discourse, where concerns over America’s widening income and wealth gaps now have near equal footing with the deficit-reduction agenda. It has become common knowledge that the top 1% receive roughly a fifth of America’s collective income and control a third of the wealth. More Americans view Occupy Wall Street favorably (35%) than Wall Street (16%), government (21%), or the Tea Party (21%); and, though the country is gripped by a state of general cynicism, more people hold unfavorable impressions of big business (71%), government (71%), and the Tea Party (50%), than of #Occupy Wall Street (40%). Put simply, #Occupy is the most popular (and least unpopular) thing we’ve got.

The success of the #Occupy movement has thus far been a product of both its visibility and its endurance. Occupiers have been adept at leveraging mobile computing and social media technologies (as well as tourists!) to ensure that an abundance of content circulates both virally and through traditional media outlets. Moreover, by continuing to tax local and federal resources, the physical presence of the occupiers has ensured continued media attention. Finally, the lack of leaders or spokespeople has meant that the mainstream media has been unable to reduce the movement to a simplistic and easily dismissible narrative.

Can it survive? Arguably, the recent spate of raids is the best thing that could have happened to the #Occupy movement for two reasons: 1.) The brutal manner in which raids were carried out attracted media coverage and garnered widespread sympathy; these images were particularly striking, given that the Arab Spring is still fresh in the minds of many Americans. 2.) It gave occupiers a graceful exit strategy—they are able to leave the encampments, not as deserters, but as heroic victims of state repression. The raids provided the movement one last moment of explosive confirmation, rather than allowing the occupiers to slowly lose a war of attrition against the winter cold.


However, it is clear that the environment will only grow increasingly hostile to the occupation of physical space. Thus, if the movement is to survive, it must transform, while continuing to capitalize on what has thus far made it successful. By combining tech savvy with now widely-recognizable memes such as “occupy [fill-in-the-blank],” “we are the 99%,” and “we are unstoppable, another world is possible,” #OWS has built what is, essentially, a new brand of political activism. Except that, in the age of social media, brands are no longer created by the few at the top and consumed by the many at the bottom; brands—or, more broadly speaking, memes—are circulated and recirculated, simultaneously being produced and consumed by participants. These little cultural nuggets are, at once, decentralized and universally recognizable. Regardless of origin, memes take on a life of their own, being reinvented with each repetition. Sarah Wanenchak provided an excellent example of this process unfolding with respect to the “evolving human microphone.” What was originally invented as an analog amplifier for use where electronic amplification was prohibited is now an instrument for disrupting and appropriating events serving the interests of the 1%. Similarly, the “casually pepper spraying cop” meme has used humor to draw attention to the excessive use of force by police against protestors. It was only a matter of time before this cultural neologism—the Internet meme—was brought to bear on politics. That is to say, activism in the 21st Century can learn as much from the Rickroll as it can from the Civil Rights Movement.

Follow PJ Rey on Twitter: @pjrey

A few weeks back, I wrote a post about special pieces of technology (e.g., backpacks, glasses, a Facebook profile), which become so integrated into our routines that they become almost invisible to us, seeming to act as extensions of our own consciousness. I explained that this relationship is what differentiates equipment from tools, which we occasionally use to complete specific tasks, but which remain separate and distinct from us. I concluded that our relationship with equipment fundamentally alters who we are. And, because we all use equipment, we are all cyborgs (in the loosest sense).

In this essay, I want to continue the discussion about our relationship with the technology we use. Adapting and extending Anthony Giddens’ Consequences of Modernity, I will argue that an essential part of the cyborganic transformation we experience when we equip Modern, sophisticated technology is deeply tied to trust in expert systems. It is no longer feasible to fully comprehend the inner workings of the innumerable devices that we depend on; rather, we are forced to trust that the institutions that deliver these devices to us have designed, tested, and maintained the devices properly. This bargain—trading certainty for convenience—however, means that the Modern cyborg finds herself ever more deeply integrated into the social circuit. In fact, the cyborg’s connection to technology makes her increasingly socially dependent because the technological facets of her being require expert knowledge from others.

Let us begin by further exploring why Giddens claims that the complexity of the Modern world requires a high degree of trust. Consider the experience of flying on an airplane. Perhaps the typical passenger has vague notions of lift and drag, but these passengers are certainly not privy to the myriad formulas used to calculate the precise mechanics that keep the craft airborne. Unlike the Wright Brothers and their famous “Flyer,” a single engineer can no longer be expected to understand all the various systems that comprise modern aircraft. In fact, the design team for a plane is likely so segmented and specialized that it would be impossible to fit a team capable of understanding a craft inside the craft itself. Giddens explains that complex technologies such as airplanes are “disembedded” from the local context of our lives and our social relations; that is to say, we lack direct or even indirect experiential knowledge of modern technology. Instead, our willingness to, say, hurl ourselves 30,000 feet above the Earth in an aluminum cone, derives solely from our trust in expert systems. Importantly, this trust is not in individual experts, but in the institutions that organize and regulate their knowledge as well as the fruits of that knowledge.

Anthony Giddens

Modern day cyborgs are characterized by profound trust in both technology and the expert systems that create it. That is to say, in order to make use of complex technology, we have to accept limited understanding of it and simply assume that it was properly designed and tested. However, trust is not merely passive acceptance of a lack of understanding; it also involves a commitment. Giddens (CoM, p. 26-7) explains:

Trust […] involves more than a calculation of the reliability of likely future events. Trust exists, Simmel [a Classical sociologist] says, when we “believe in” someone or some principle: “It expresses the feeling that there exists between our idea of a being and the being itself a definite connection and unity, a certain consistency in our conception of it, an assurance and lack of resistance in the surrender of the Ego to this conception, which may rest upon particular reasons, but is not explained by them.” Trust, in short, is a form of “faith,” in which the confidence vested in probable outcomes expresses a commitment to something rather than just a cognitive understanding.

The use of complex technology involves an element of risk (e.g., crashing back to Earth). The cyborg’s confidence in the expert systems behind technology must be sufficiently strong to mitigate any perceived risks from use of that technology. Once we have equipped a piece of technology, we become dependent on it. We make decisions that assume its full functioning, and its failure can be perilous. A rock climber, for example, places her life in the hands of her harness (and the experts who engineered it) every time she scales a rock face. She cannot know with certainty that the molecules of her carabiner have been properly alloyed, but her confidence, and her life, rest on the belief that the expert system will not have failed. This trust in equipment demonstrates an existential commitment to technology. Giddens (CoM, p. 28) elaborates:

Everyone knows that driving a car is a dangerous activity, entailing the risk of accident. In choosing to go out in the car, I accept that risk, but rely upon the aforesaid expertise to guarantee that it is minimised as far as possible. […] When I park the car at the airport and board a plane, I enter other expert systems, of which my own technical knowledge is at best rudimentary.

Being a cyborg is risky business; we must depend on the expertise of others to ensure that our equipment is fit for use. This radical dependency on expert systems—and the societies that create them—makes cyborgs fundamentally social beings. In fact, it is through dependency on technology, and the subsequent loss of self-sufficiency, that we express our commitment to society. Technology has always been part and parcel of the division of labor. Think bows and shovels. In this sense, being a cyborg requires not only trust in technology producers, but trust in other technology users. There is no such thing as a lone cyborg. The birth of the cyborg marks the death of the atomistic individual (if such a thing ever existed). Donna Haraway rightly contrasts the cyborg to Romantic Goddesses channeled in small lakeside cabins. Cyborgs are cosmopolitan.

This is not to say that Modern day cyborgs are incapable of being critical of technology or expert systems. On the contrary, the cyborg’s humility in admitting her own dependencies leads her to acknowledge the importance of struggling to enforce certain values within techno-social systems, rather than plotting a Utopian escape (the sort that had currency with Thoreau and other Romantics and that continues to be idealized by cyber-libertarians who view the Internet as a fresh start for society). My favorite Haraway quote explains:

This is not some kind of blissed-out technobunny joy in information. It is a statement that we had better get it – this is a worlding operation. Never the only worlding operation going on, but one that we had better inhabit as more than a victim. We had better get it that domination is not the only thing going on here. We had better get it that this is a zone where we had better be the movers and the shakers, or we will be just victims.

Cyborgs always see the social in the technological; the “technology is neutral” trope is a laugh line.

Nowhere are mutual trust and co-dependency more apparent than with social media. Few of us have any clue how the Internet’s infrastructure delivers our digital representations across the world in an instant. This lack of knowledge means simply that we must trust that platforms such as Facebook or Google are delivering information accurately. As the Turing test has demonstrated, computers can easily fool us into believing we are communicating with someone who is not present or who does not even exist, if the system allows. Moreover, on platforms such as Facebook, we also must trust the system to enforce a norm of honesty. If we cannot trust that other users are honestly representing themselves, we become unsure of how to respond. Honesty and accuracy of information are preconditions to participation. And because, as individuals, we lack the capacity to ensure either, we must place our trust in experts. We users do not understand the mechanics of Facebook; we simply accept it as reality. That is to say, Facebook is made possible through widespread suspension of disbelief. Thus, using social media is a commitment to pursuing the benefits of participation, despite the risk that we could be fooled or otherwise taken advantage of. Facebook is not merely social because it involves mutual interaction; it is social because trust in society’s expert systems is a precondition to any such interaction.

Follow PJ Rey on Twitter: @pjrey

For nearly two centuries, the term “production” has conjured an image of a worker physically laboring in the factory. Arguably, this image has been supplanted, in recent decades, by the office worker typing away on a keyboard; however, both images share certain commonalities. Office work and factory work are both conspicuous—i.e., the worker sees what she is making, be it a physical object or a document. Office work and factory work are also active—i.e., they require the worker’s energy and attention and come at the expense of other possible activities.

The nature of production has undergone a radical change in a ballooning sector of the economy. The paradigmatic images of active workers producing conspicuous objects in the factory and the office have been replaced by the image of Facebook users, leisurely interacting with one another. But before we delve into this new form of productivity we must take a moment to define production itself.

Following Marx, we can say that any activity that results in the creation of value is production of one sort or another. Labor is a form of production specific to humans because humans are capable of imagination and intentionality. He explains in the Economic and Philosophic Manuscripts of 1844 that

Conscious life activity distinguishes man immediately from animal life activity. It is just because of this that he is a species-being. Or it is only because he is a species-being that he is a conscious being, i.e., that his own life is an object for him. Only because of that is his activity free activity.

Labor is production that is imagination-driven. However, production need not be intentional. Marx acknowledges this fact in Capital, saying:

We pre-suppose labour in a form that stamps it as exclusively human. A spider conducts operations that resemble those of a weaver, and a bee puts to shame many an architect in the construction of her cells. But what distinguishes the worst architect from the best of bees is this, that the architect raises his structure in imagination before he erects it in reality. At the end of every labour-process, we get a result that already existed in the imagination of the labourer at its commencement. He not only effects a change of form in the material on which he works, but he also realises a purpose of his own that gives the law to his modus operandi, and to which he must subordinate his will. And this subordination is no mere momentary act. Besides the exertion of the bodily organs, the process demands that, during the whole operation, the workman’s will be steadily in consonance with his purpose. This means close attention. The less he is attracted by the nature of the work, and the mode in which it is carried on, and the less, therefore, he enjoys it as something which gives play to his bodily and mental powers, the more close his attention is forced to be.

Now, we are in a position to observe that production in the factory and in the office is united by a third characteristic: value is produced via labor (specifically, alienated labor).

What is most remarkable about much of the value produced on social media is that it comes from activities that can hardly be described as labor. The best example is, probably, the self-improving Google algorithm, which tracks individual usage patterns and then aggregates the data to make itself more intelligent. Each individual user is merely a passive consumer seeking a specific piece of information; however, users also, simultaneously, generate valuable information for Google. The users are not active in the process. They do not imagine the end product in their minds before creating it, as Marx described. In fact, this product is completely invisible to them, buried deep in the infrastructure of the site. Similarly, Facebook silently derives value from the mundane interactions of its users.
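The feedback loop described above can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (the class, names, and scoring rule are invented for this example; it is not Google’s actual system): each user simply clicks the result they want, yet those clicks silently accumulate into data that re-ranks future results.

```python
# Toy sketch: passive users "produce" ranking data merely by clicking.
from collections import defaultdict

class ToySearchEngine:
    def __init__(self):
        # aggregate click counts per (query, url) pair
        self.clicks = defaultdict(int)

    def record_click(self, query, url):
        # the user just wants the page; the log entry is a by-product
        self.clicks[(query, url)] += 1

    def rank(self, query, urls):
        # results other users clicked for this query rise to the top
        return sorted(urls, key=lambda u: -self.clicks[(query, u)])

engine = ToySearchEngine()
engine.record_click("water wheel", "wikipedia.org/wiki/Water_wheel")
engine.record_click("water wheel", "wikipedia.org/wiki/Water_wheel")
engine.record_click("water wheel", "example.com/mills")

# The more-clicked page now ranks first for this query:
print(engine.rank("water wheel",
                  ["example.com/mills", "wikipedia.org/wiki/Water_wheel"]))
```

No user in this sketch intends to build a ranking; the "product" (the click tallies) is invisible to them, which is precisely the point of the passage above.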

We might compare this process to a waterwheel. The following is the one-sentence definition of a water wheel from Wikipedia:

A water wheel is a machine for converting the energy of free-flowing or falling water into useful forms of power.

Value is produced from everyday activities much like a waterwheel harnesses the power of flowing water. This fact has profound implications, potentially requiring us to rethink traditional critiques of capitalism. However, before diving into these implications, it would be useful to develop a vocabulary to describe these new conditions, in which production can occur without active laboring.

We might start by considering some previous attempts to discuss immaterial production. In a presentation at the VII Annual Social Theory Forum on Critical Social Theory, Nathan Jurgenson and I used the term “ambient production” to describe an environment in which production simply occurs as a result of one’s mere presence. We discussed how various modes/mechanisms of production on the Web might be placed on a visibility-invisibility continuum. However, this notion is complicated by the fact that as certain things are concealed (e.g., Facebook’s use of personal data in targeted marketing), other things are more likely to be revealed (e.g., Facebook’s users are likely to share more data when they are not focused on how that data might be used to manipulate them).

More recently, we described social media as populated with “digital paparazzi” (i.e., invisible data collection mechanisms that track and surveil users). However, we also demonstrated that there is a tendency for users to be made aware of the ubiquitous documentation in their environment and to alter their behaviors accordingly. Jurgenson labels this tendency to view one’s actions in the present through the lens of the future documents they will produce “documentary vision.” Though the mechanisms themselves are concealed, one might aptly argue that Facebook users are reacting to this environment of omnipresent documentation with the expectation that they are always being recorded. As such, users begin posing all the time and actively use these mechanisms to produce (or “prosume”) their own identities. Certainly, this kind of identity work is an active labor for many users and has visible consequences.

There are, clearly, coextensive modes of production operating on social-networking sites such as Facebook: On the one hand, individuals conspicuously labor to shape their identities. On the other hand, their presence on the platform allows for the ambient production of valuable data that the company can sell to marketers. In fact, these two modes of production are intertwined. Active identity work creates data for targeted marketing, while marketing provides new consumer objects through which identity is expressed. This environment, where social activity becomes productive activity, is not dissimilar to what Mario Tronti (1966) and Antonio Negri (1989) respectively described as a “factory without walls” and a “social factory.”

Does ambient production qualify as labor?

The conspicuousness of the thing being produced becomes extremely important. If we are able to see what we are producing (as with self-presentation via Facebook’s new “frictionless sharing” application), then we will likely attempt to shape the end product by actively managing our own behavior (which amounts to labor). However, if the thing we are producing is concealed from us (as with our contributions to the Google algorithm), then we are largely denied any agency with respect to the final product. Yet, unlike alienated factory workers, our attention is not occupied by this production. In fact, this sort of inconspicuous creation of value is largely incidental to the task that is really occupying our attention (e.g., using Google to locate a particular piece of information). Our relationship to such invisible objects is necessarily passive. As such, it does not meet Marx’s (and, likely, most other theorists’) criteria for labor. I will define this passive and inconspicuous creation of value as “incidental productivity.” Users engaged in incidental productivity don’t know or don’t care about the valuable data they are creating; it is simply a byproduct of other activity.

Why does “incidental productivity” matter?

For many commentators, our rights over the data that we create are an extension of an abstract notion that these data are the fruits of our labor. The economic conditions of social media users have been described as “precarious labor” or, even, “over-exploitation.” Both allude to one of Marx’s major critiques of capitalism: that laborers are exploited (i.e., their wages do not amount to the full value of their work because some of that value is skimmed off by the employer). Much of the identity work done on social media is active and intentional labor. And, this labor is often exploited (I discuss this in depth in an article titled “Alienation, Exploitation & Social Media” soon to be published in the American Behavioral Scientist). However, much of the value created on the Web does not even result from labor; it is incidental value. This leaves us with the question: If a productive activity is not labor, can it be exploited? This question requires considerably more examination, as this was not a phenomenon Marx himself observed. However, Marx is certainly not irrelevant. A quintessentially Marxian question remains: Who should control the means of incidental production? Just as Marx concluded that the means of production do not work in the best interest of workers when controlled by a separate ownership class, we have reason to be skeptical that the means for harnessing incidental productivity will work in the best interest of users who exercise little control over them. Revelations, such as Apple’s clandestine use of the iPhone to collect data about changes in users’ geographic location or Yahoo! and Blackberry’s cooperation with the intelligence agencies of various authoritarian regimes, demonstrate that the interests of users and owners are often out of sync.

Follow PJ Rey on Twitter: @pjrey

A recent piece by Bonnie Stewart (a previous Cyborgology contributor) offers an interesting analysis of Klout, the increasingly popular tool for measuring personal value and influence. The Klout site explains:

Our friendships and professional connections have moved online, making influence measurable for the first time in history. When you recommend, share, and create content you impact others. Your Klout Score measures that influence on a scale of 1 to 100.


The Klout Score measures influence based on your ability to drive action. Every time you create content or engage you influence others. The Klout Score uses data from social networks in order to measure:

  • True Reach: How many people you influence
  • Amplification: How much you influence them
  • Network Impact: The influence of your network
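Klout publishes only these component names; its actual model is proprietary. Purely as an illustration of what such a composite might look like, here is a toy score combining the three components — the weights, the 0–1 normalization of the inputs, and the floor of 1 are all my assumptions, not Klout's:

```python
def toy_influence_score(true_reach, amplification, network_impact,
                        weights=(0.4, 0.3, 0.3)):
    """Hypothetical composite influence score on Klout's 1-100 scale.

    Inputs are assumed pre-normalized to the range 0..1; the weights
    are invented for illustration -- Klout's real model is proprietary
    and certainly more complex.
    """
    raw = (weights[0] * true_reach
           + weights[1] * amplification
           + weights[2] * network_impact)
    # Clamp to a floor of 1, matching the stated 1-to-100 scale.
    return max(1, round(raw * 100))

print(toy_influence_score(0.9, 0.5, 0.2))  # -> 57
```

Even this crude sketch makes Stewart's point visible: once influence is reduced to a weighted sum, everything that does not enter the formula (offline impact, the kind of influence one has in a community) simply does not count.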

Stewart criticizes the idea of rationalizing our online interactions (i.e., submitting them to greater efficiency, calculability, predictability, and control). She also notes that Klout is limited in that it fails to measure how our online actions influence (i.e., augment) activity in the offline world. Finally, she discusses how knowledge that Klout exists influences the way people behave online, making them more inclined to act in such a way as to improve their score. Cyborgology editor Nathan Jurgenson recently described this tendency to view our present actions from the perspective of the documents they will eventually produce as “documentary vision.” Stewart concludes:

Rankings are useful as relative assessments: My score on Klout in relation to yours, or on Mr. Ives’ history test this month as compared to last, can be indicative of meaningful change.

But my Klout score does not tell you — and cannot tell you — what kind of influence I have in my community, not really. Such scoring may be handy in a competitive neoliberal economy looking to quantify and compare abilities, but that doesn’t mean it’s actually valid.

Read the whole piece here.

Julian Assange, the notorious founder and director of WikiLeaks, is many things to many people: hero, terrorist, figurehead, megalomaniac. What is it about Assange that makes him both so resonant and so divisive in our culture? What, exactly, does Assange stand for? In this post, I explore two possible frameworks for understanding Assange and, more broadly, the WikiLeaks agenda. These frameworks are: cyber-libertarianism and cyber-anarchism.

First, of course, we have to define these two terms. Cyber-libertarianism is a well-established political ideology that has its roots equally in the Internet’s early hacker culture and in American libertarianism. From hacker culture, it inherited a general antagonism to any form of regulation, censorship, or other barrier that might stand in the way of “free” (i.e., unhindered) access to the World Wide Web. From American libertarianism it inherited a general belief that voluntary associations are more effective in promoting freedom than government (the US Libertarian Party‘s motto is “maximum freedom, minimum government”). American libertarianism is distinct from other incarnations of libertarianism in that it tends to celebrate the market and private business over co-ops or other modes of collective organization. In this sense, American libertarianism is deeply pro-capitalist. Thus, when we hear the slogan “information wants to be free” that is widely associated with cyber-libertarianism, we should not read it as meaning gratis (i.e., zero price); rather, we should read it as meaning libre (without obstacles or restrictions). This is important because the latter interpretation is compatible with free market economics, unlike the former.

Cyber-anarchism is a far less widely used term. In practice, commentators often fail to distinguish between cyber-anarchism and cyber-libertarianism. However, there are subtle distinctions between the two. Anarchism aims at the abolition of hierarchy. Like libertarians, anarchists have a strong skepticism of government, particularly government’s exclusive claim to use force against other actors. Yet, while libertarians tend to focus on the market as a mechanism for rewarding individual achievement, anarchists tend to see it as a means for perpetuating inequality. Thus, cyber-anarchists tend to be as much against private consolidation of Internet infrastructure as they are against government interference. While cyber-libertarians have, historically, viewed the Internet as an unregulated space where good ideas and the most clever entrepreneurs are free to rise to the top, cyber-anarchists see the Internet as a means of working around and, ultimately, tearing down old hierarchies. What differentiates cyber-anarchists from cyber-libertarians, then, is that cyber-libertarians embrace fluid, meritocratic hierarchies (which are believed to be best served by markets), while anarchists are distrustful of all hierarchies. This would explain why libertarians tend to organize into conventional political parties, while the notion of an anarchist party seems almost oxymoronic. Another way to understand this difference is in how each group defines freedom: Freedom for libertarians is freedom to individually prosper, while freedom for anarchists is freedom from systemic inequalities.

In many ways, the Internet community / hacker collective known as “Anonymous” is the archetypical cyber-anarchist group. As its name indicates, the group embraces a principle of anonymity that places inherent limits on hierarchy within it. Members often work collectively to disrupt the technology infrastructure of established institutions (often in response to perceived abuses of power). All actions initiated by the group are voluntary and it is said that anyone can spontaneously suggest a target. The ethos of the organization was well-captured in a quote from one of its Twitter feeds: “RT: @Asher_Wolf: @AnonymousIRC shouldn’t be about personalities. The focus should always be transparency for the powerful, privacy for the rest.” [Author’s Note: The previous sentence was ambiguously phrased. @Asher_Wolf is not personally affiliated with Anonymous. I meant merely to suggest that her comment was insightful. My apologies to her if readers were misled.] Note that this implicit linkage between transparency and accountability is what distinguishes a cyber-anarchist from other run-of-the-mill anarchists. For the cyber-anarchist, the struggle against power is a struggle for information.

So, is Julian Assange a cyber-libertarian or a cyber-anarchist? This proves difficult to sort out. Assange is an activist, not a philosopher, so we ought not to expect his theoretical statements to be completely coherent; nevertheless, he does appear to be operating with a consistent and quite nuanced philosophy. A recent Forbes interview is revealing, though Assange is quite evasive about his own ideological self-identification:

It’s not correct to put me in any one philosophical or economic camp, because I’ve learned from many. But one is American libertarianism, market libertarianism.

So, Assange is relatively clear in his affinity for both markets and libertarianism. In fact, Assange justifies WikiLeaks’ activities, in part, through pro-market rhetoric, saying, for example:

WikiLeaks means it’s easier to run a good business and harder to run a bad business, and all CEOs should be encouraged by this. […] it is both good for the whole industry to have those leaks and it’s especially good for the good players. […] You end up with a situation where honest companies producing quality products are more competitive than dishonest companies producing bad products.

Yet, WikiLeaks frequently engages in what might be interpreted as anti-business activity—distributing proprietary information (e.g., documents that indicate insider trading at JP Morgan, a list of companies indebted to Iceland’s failing Kaupthing Bank, legal documents showing that Barclays Bank sought a gag order against the Guardian, a list of accounts concealed in the Cayman Islands, etc.). In fact, Assange claims that 50% of the data in their repository is related to the private sector.

Why would Assange threaten these businesses if he is so pro-market? Assange offers a clue when he says, “I have mixed attitudes towards capitalism, but I love markets.” The two parts of this statement are reconcilable because capitalism is primarily about private ownership of the means of production, while markets are primarily about the decentralization of supply and demand. Historically speaking, markets certainly preceded capitalism. He believes WikiLeaks assists markets in the following way:

To put it simply, in order for there to be a market, there has to be information. A perfect market requires perfect information.

In any case, Assange seems to be saying that he favors minimal interference in the relationship between supply and demand, but he is skeptical as to whether private ownership of the means of production (as opposed to collectivist or government control) is the best means of accomplishing this goal. He explains the thinking behind this nuanced position of supporting markets while being skeptical toward capitalism:

So as far as markets are concerned I’m a libertarian, but I have enough expertise in politics and history to understand that a free market ends up as monopoly unless you force them to be free.

The next question is, naturally: Who should be responsible for forcing markets to be free? Given his generally negative assessment of governments, it is unsurprising when Assange balks at the notion that governments are up to the task. Assange seems to favor the promotion of a culture of transparency as a substitute for regulation. He explains:

I’m not a big fan of regulation: anyone who likes freedom of the press can’t be. But there are some abuses that should be regulated […].

Traditionally, regulations are controls placed on one set of institutions (i.e., businesses) by another institution (i.e., government). Here, I think, is where Assange’s fundamental thinking is revealed. He does not trust institutions to regulate each other, because he does not trust institutions. He seems to believe that institutions, and their propensity for secrecy, have a corrupting effect. This is why he champions individuals and small groups—extra-institutional actors—as change agents. Anti-institutionalism appears to be Assange’s driving principle—even more so than his appreciation for markets. This, paired with his skepticism toward capitalism, seems to indicate that Assange better fits the ideal-type of the cyber-anarchist than that of the cyber-libertarian. The arc of Assange’s argument is not so much that the public sector’s role in decision-making should be minimized in favor of private entrepreneurs; rather, he seems to believe that—insofar as it is possible without descending into complete chaos—institutions should be diminished in favor of extra-institutional actors (i.e., individuals and small voluntary associations). WikiLeaks is an attempt, on the part of extra-institutional actors, to exercise more accountability over institutions through the mechanism of transparency.

It should be noted that Assange has resisted attempts to label him as “anti-institutional,” explaining that he has visited countries where institutions are non-functional and that this sort of chaos is not what he has in mind. One, thus, has to infer that Assange believes institutions are a necessary evil—one that must be guarded against through enforced transparency. In the classic sociological framework, we might position Assange as a sort of conflict theorist: People require institutions for order and stability, yet are perpetually threatened by the tyrannical inclinations of such institutions; he believes the people can only gain the upper hand in this struggle by preventing or exposing institutional secrets.

From the previous statements, we can conclude that Assange has two central assumptions about our social world: 1.) Institutions, by their nature, will always become corrupt when not closely monitored. 2.) Secrecy is a necessary precondition for corruption; diminish secrecy and you diminish corruption. To take a bit of a critical angle, the degree of faith in transparency expressed by Assange and his compatriots seems to necessitate either a lack of attentiveness to power or a sort of naïve optimism that deeply embedded power relations can be easily overturned. Does simply knowing that an institution is corrupt really sufficiently empower us to end that corruption? The lack of prosecutions in light of all the malfeasance and outright criminality that led to the recent collapse of the financial sector would seem to indicate otherwise. As Michel Foucault and countless other theorists have argued, there is a definite relationship between visibility and power; however, it may not be as direct or as simplistic as Assange appears to believe. To be fair to Assange, he is not necessarily arguing that, through transparency, individuals are empowered to hold institutions accountable but, instead, that information can be used strategically to play institutions against one another.

In this way, Assange is more nuanced than he often appears in media caricatures. WikiLeaks is not, simply, an effort at maximal transparency; rather, it is involved in a complex game of reveal and conceal, motivating institutions to oppose or compete with one another. In fact, in a separate interview, Assange even praises secrecy, saying:

secrecy is important for many things but shouldn’t be used to cover up abuses, which leads us to the question of who decides and who is responsible. It shouldn’t really be that people are thinking about, Should something be secret? I would rather it be thought, Who has a responsibility to keep certain things secret? And, who has a responsibility to bring matters to the public? And those responsibilities fall on different players. And it is our responsibility to bring matters to the public.

At his most cynical and, perhaps, megalomaniacal, Assange sounds as if the only person who can be trusted to regulate governments and businesses is Julian Assange. More generously, we can interpret that Assange’s rhetoric and WikiLeaks’ actions indicate a general antagonism to institutions that places them much closer to the cyber-anarchists of Anonymous than the cyber-libertarian barons of Silicon Valley (e.g., Mark Zuckerberg, Eric Schmidt) who support enforced transparency, primarily, for their own financial gain.

Follow PJ Rey on Twitter: @pjrey

Everybody knows the story: Computers—which, a half century ago, were expensive, room-hogging behemoths—have developed into a broad range of portable devices that we now rely on constantly throughout the day.  Futurist Ray Kurzweil famously observed:

progress in information technology is exponential, not linear. My cell phone is a billion times more powerful per dollar than the computer we all shared when I was an undergrad at MIT. And we will do it again in 25 years. What used to take up a building now fits in my pocket, and what now fits in my pocket will fit inside a blood cell in 25 years.
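Kurzweil’s “billion times more powerful per dollar” claim can be unpacked with a bit of arithmetic. Assuming a span of roughly 40 years between his MIT undergraduate days and the modern cell phone (the 40-year figure is my assumption, not his), the implied doubling cadence falls out of a base-2 logarithm:

```python
import math

improvement = 1e9   # Kurzweil's claimed gain in power per dollar
years = 40          # assumed span: late-1960s mainframe to modern phone

doublings = math.log2(improvement)
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# -> 29.9 doublings, one every 16.1 months
```

A billion-fold gain is about thirty doublings; spread over four decades, that is one doubling roughly every sixteen months — the exponential (rather than linear) trajectory Kurzweil describes.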

Beyond advances in miniaturization and processing, computers have become more versatile and, most importantly, more accessible. In the early days of computing, mainframes were owned and controlled by various public and private institutions (e.g., the US Census Bureau drove the development of punch card readers from the 1890s onward). When universities began to develop and house mainframes, users had to submit proposals to justify their access to the machine. They were given a short period in which to complete their task, then the machine was turned over to the next person. In short, computers were scarce, so access was limited.

The paradigm of access shifted with the so-called “personal computing revolution” (most often associated with late Apple co-founder Steve Jobs). Computer access is no longer centrally controlled. Instead, computers are so abundant that access is ubiquitous (at least in the developed world, though computer access is also increasing in the developing world in the form of cell phones). In fact, according to the Pew Internet & American Life Project, 91% of American adults have a cellphone, desktop computer, laptop, mp3 player, game console, e-reader, or tablet. This number reaches 99% among those 18-34 years of age.

In this essay, I want to take some time to reflect upon the psycho-social implications of the mass personalization of computing. How do we relate to our computing devices? And, what does this mean for our individual and collective identities? In tackling this issue, I propose that we might find it useful to dust off a copy of 20th-century German philosopher Martin Heidegger’s seminal work, Being and Time. I know what you’re thinking: “Wasn’t that book written in 1927? What could it have to say about personal computing?” Fair questions. In fact, rather coincidentally, Heidegger died in 1976—the year Apple released its first personal computer—so he clearly was not privy to the developments I am endeavoring to discuss here. Nevertheless, Heidegger was a keen observer of how humans relate to the objects that inhabit our world. Though those objects have changed, I hope to argue that the logic of our interaction with objects (particularly, with technology) follows patterns similar to what Heidegger described.

It starts with a hammer. What makes a hammer different from other sorts of objects? Heidegger argues that there is a certain kind of functional knowledge associated with a hammer. It has a special presence because it is a thing that we know can be exceedingly useful in doing something—for example, pounding nails. Of course, a rock can also pound nails. But a hammer stands out because it is exceptional at pounding nails when compared to other objects. Heidegger explains:

Strictly speaking, there “is” no such thing as a useful thing. There always belongs to the being of a useful thing a totality of useful things in which this useful thing can be what it is. A useful thing is essentially “something in order to … “

Hammers come to occupy a privileged position in our perception of the world because of their utility (or, as Heidegger says, “handiness”) in helping us to accomplish various goals. This “handiness” is highly contextual; that is to say, handiness derives from certain properties a hammer has with respect to our bodies, nails, other potential pounding devices, etc. Importantly, context is not objective (meaning it is not part of the object itself); rather, it is subjective or interpretive (meaning it is something we bring into our relationship with an object).

Heidegger takes pains to argue, however, that the handiness of an object is not merely a product of how we interpret what we can do with the object. Objects also have properties in themselves. For example, a hammer is only handy because its qualities of being hard and heavy allow it to drive things. The most important objective trait of a tool is accessibility. A tool cannot be handy if it is unavailable to the user. To be handy, a tool must be more than theoretically useful; it must be practically attainable.

Let us come back to Earth for a second and discuss how this all relates to computers. Following Heidegger’s definition, mainframe computers—however powerful—were not handy because they lacked the key feature of accessibility. The personal computing revolution, then, is a revolution in handiness. Yet, to fully understand why this is important, we must make a final foray into Heidegger’s abstract theorizing.

Heidegger’s most fascinating observation is that, when a device is extremely handy, it tends to become inconspicuous. We no longer consciously engage with the device; instead, we find ourselves operating the device at an unconscious level.  Most people have probably had this experience when driving an automobile. Lost in thought, you suddenly come to realize, “hey, I’ve been driving on the highway for the last half hour and haven’t really noticed.” On the contrary, when a device suddenly ceases to function properly, we are jarred with the realization that we have been taking its functionality, and even its existence, for granted. Disruptions in handiness make devices suddenly conspicuous to us. For example, it is virtually impossible to lose sight of the fact that we are driving if we have just gotten a flat tire. Interestingly, we feel related to our bodies in much the same way. You are never more aware of an ankle than when you sprain it.

Heidegger is not as clear with his terminology as we might like, so we have to do a bit of category work here to fully flesh out the implications of his theory. As Heidegger suggests, we can readily observe that tools and devices become so handy that they are almost invisible to us. The boundaries between our consciousness and the object begin to blur. We start to relate to such objects as if they were extensions of ourselves. They begin to feel as if they are part of us. As with our bodies, we simply project our conscious intentions into these objects and they respond appropriately. We call these extremely handy objects “equipment.” Backpacks are excellent examples of equipment; they conform to our bodies and enable us to store a range of things on our person that we readily forget about unless we suddenly find a need for them.

Interestingly, Heidegger’s cherished hammer does not quite seem to fit this definition of equipment. A hammer’s shape and weight make it unwieldy. The hammer burdens us, making it difficult to shift attention from our use of it. Though a hammer is useful, it is less handy and more conspicuous than other useful objects. Such objects are better described as “tools.”

The distinction between tools and equipment helps us to flesh out the cyborg metaphor that is the namesake of this blog and which has a history of use in Science and Technology Studies (STS) traceable to Donna Haraway’s Cyborg Manifesto. Cyborgs have a unique relationship with technology. For the cyborg, technology is no longer a thing you use—rather, it is a thing you are. Technology becomes part of you. Cyborgs are not users of tools; they are, instead, equipped with technology. As such, the stereotype of cyborgs as conspicuously laden with machine parts is wrong-headed. Cyborg equipment, like all equipment, is inconspicuous. The replicants from Blade Runner are better examples of cyborgs than the Borg in Star Trek because the replicants’ equipment is so inconspicuous that it is possible for them to not even realize that they are replicants.

So, what significance do these insights about tools and equipment have for Americans living in 2011? I think we can all agree with the premise that laptops, tablets, and, particularly, smartphones have made computers handier than ever. This is evident in the discursive shifts surrounding computation. Originally, one “used” or “operated” a computer. As personal computing became more accessible, a new, less instrumental activity became common, and we captured its casual nature with the phrase “surfing the web.” Today, a new pattern of behavior is emerging—i.e., constant oscillation between online and face-to-face communication—which we might call “drifting.” Our devices have become so handy that it is just as easy to project our subjectivity through them as it is to express ourselves through our own bodies. It follows, then, that computers have ceased being tools and have become equipment. As such, it is not hyperbolic to claim, for example, that Facebook is a piece of equipment that has become an extension of our very consciousness. As equipment, social media fundamentally alters who we are.

This transition in computing from tool to equipment may also help explain (phenomenologically) the waning appeal of the digital dualist perspective (i.e., the belief that the online and offline world are fundamentally separate, rather than integrated). So long as computers were difficult to access, we remained constantly conscious of their separation from us, existing equally as tools and obstacles to accomplishing a task. The subjective distance we felt from computers (when we were interacting with them instrumentally as tools) was translated into a sense of social distance between us and those with whom we were communicating through the device. As computers became handier—more portable, easier to use, more versatile—they became less conspicuous—disrupting our communicative experience less. As the subjective boundaries between self and computer collapsed, so too did the distance we perceived between self and the others with whom we were networked.

In many ways, the augmented reality perspective on technology (i.e., that the digital and physical worlds are co-determining) is implicitly predicated on the assumption that people are capable of interfacing with technology as equipment. No doubt, even when computers were tools they influenced our offline social lives (and vice versa). However, the reciprocal relationship between online and offline experiences becomes more obvious and more significant when technology becomes invisible to us, so that we can simply drift online and offline with little notice of the transition. Ironically, this means that as we, smartphone-wielding cyborgs, become further enmeshed in this augmented reality, we are less consciously aware of our technological integration (at least, until this equipment malfunctions).

The personal computer revolution is not merely a technological development; it is an ontological shift (i.e., a shift in the nature of being). Human consciousness is expanding out from the realm of the physical into the realm of the digital and back again. As a consequence, it is dangerous to trivialize our online presence. This equipment—our profile, our updates, our “likes,” and everything else that functions as a mechanism of self-expression online—like all equipment, comes to constitute an important part of our very being. We are what we equip.

Follow PJ: @pjrey on Twitter

Last week, Nathan Jurgenson linked to an interview with Noam Chomsky, where Chomsky argued that social media is superficial:

Jeff Jetton: Do you think people are becoming more comfortable communicating through a device rather than face to face or verbally?

Noam Chomsky: My grandchildren, that’s all they do. I mean, of course they talk to people, but an awful lot of their communication is extremely rapid, very shallow communication. Text messaging, Twitter, that sort of thing.

Jeff Jetton: What do you think are the implications for human behavior?

Noam Chomsky: I think it erodes normal human relations. It makes them more superficial, shallow, evanescent. One other effect is there’s much less reading. I can see it even with my students, but also with my children and grandchildren, they just don’t read much.

Jeff Jetton: Because there’re so many distractions, or…?

Noam Chomsky: Well you know it’s tempting…there’s a kind of stimulus hunger that’s cultivated by the rapidity and the graphic character and, for the boys at least, the violence, of this imaginary universe they’re involved in. Video games for example. I have a daughter who lives near here. She comes over Sunday evening often for dinner. She brings her son, a high school student. And of course he hasn’t done any homework all weekend, naturally, so he has to do all his homework Sunday night. What he calls doing homework is going into the living room while we’re eating, sitting with his computer and with his headphones blaring something, talking to about ten friends on whatever you do it on on your computer, and occasionally doing some homework.

Jeff Jetton: How do you know what he’s doing?

Noam Chomsky: I watch him.

Jurgenson offered an epistemological critique of Chomsky, arguing that Chomsky’s dismissal of social media as superficial fits a long-standing pattern of affluent white academics maintaining their privileged position in society by rejecting media that is accessible to non-experts. Jurgenson pointedly asks: “who benefits when what you call ‘normal’ human relationships get to be considered more ‘deep’ and meaningful?” Chomsky seems ignorant of the use of Twitter and other networks in shaping the Arab Spring and the #Occupy movement; of the fact that young people are voraciously sharing and consuming important news stories through these same networks; that Blacks and Hispanics were early adopters of smartphones; or that gay men have been pioneers in geo-locative communication. In many cases, historically-disadvantaged groups have used social media technology to find opportunities previously foreclosed to them. For these folks, social media is hardly trivial.

Noam Chomsky Toronto 2011

Jurgenson’s post generated much discussion, perhaps due to Chomsky’s iconic status amongst America’s left wing. And, while I, too, must profess a personal indebtedness to Chomsky for serving, in my youth, as an introduction to cogent political thinking outside of the mainstream, my admiration for the man in no way diminishes the fact that his framing of this issue is woefully out-of-date. Interestingly, most negative reactions to Jurgenson’s post argue that the critique was a cheap shot, attacking an off-the-cuff remark that was never meant for rigorous debate. It is this claim that I wish to dispel. Chomsky’s comment is, in fact, a logical extension of his well-known and long-defended perspective on media consumption—developed, most notably, in Manufacturing Consent (the book co-authored by Edward S. Herman that also inspired a documentary of the same name).

Chomsky—in his politics as in his linguistics—is a rigid structural determinist. This means that he tends to give priority to the forces of order, structure, and control. Ironically, for such a well-regarded social activist, this leaves very little room for people to engage in resistance or to appropriate the means of communication (at least without wholesale revolutionary change). Chomsky’s grim view of broadcast media is, arguably, a simplification of Adorno and Horkheimer’s gravely pessimistic treatise on the culture industry—still read by most undergraduate social science majors. Chomsky makes what is, basically, an epistemological claim: those in the ownership class will inevitably use the means of communication at their disposal to produce an ideology that reinforces their own privileged position in the world. The financial/political elite are best able to pursue their own interests when the masses are distracted and pacified.

Unlike many commentators, Chomsky does not view communication technologies as politically neutral. Rather, he argues that certain characteristics of various media either promote or inhibit critical discussion. For example, Chomsky attacks concision in television and radio interviews as a structural constraint that prevents discourse from veering off of well-established scripts. You can get a good taste of his position in a clip from the documentary starting at 1:30 here:

While this pessimism was, perhaps, well-founded in the era of top-down broadcast media (i.e., national newspapers, radio, and television), Chomsky’s mistake is his failure to recognize that social media marks a significant shift away from the passivity of mass consumption and towards a paradigm of mass participation. Social media’s characteristic rapidity—which Chomsky ties to shallowness—is also what facilitates its interactivity. And it is, in fact, the participation and interaction engendered by social media that differentiate it from broadcast media. The old media-manipulation frame is simply inadequate to capture all the activity occurring through these new means of communication. To be clear, this is not an endorsement of cyber-Utopian celebrationism. Of course, social media has its own issues (e.g., the tendency of the Web’s infrastructure to lead us to information that confirms what we already believe—an effect described by Eli Pariser as the “filter bubble”), but we need new theory that better captures the nuances of this new techno-social formation. Thus, it is a perfectly legitimate practice to criticize Chomsky and any other public intellectual who (implicitly or explicitly) takes the position that social media is merely an extension of the logic of broadcast media.