We all know them: the conscientious objectors of the digital age. Social media refusers and rejecters—the folks who take a principled stance against joining particular social media sites and the folks who, with a triumphant air, announce that they have abandoned social media and deactivated their accounts. Given the increasing ubiquity of social media and mobile communications technologies, voluntary social media non-users have become increasingly conspicuous (though, of course, not all non-users are voluntarily disconnected—surely some non-use stems from a lack of skill or resources).

The question of why certain people (let’s call them “Turkle-ites”) are so averse to new forms of technologically-mediated communication—what Zeynep Tufekci termed “cyberasociality”—still hasn’t been sufficiently addressed by researchers. This is important because abstaining from social media has significant social costs, including not being invited to or able to access events, loss of the cultural capital gained by performing in high-visibility environments, and a feeling of disconnection from peers because one is not experiencing the world in the same way (points elaborated in Jenny Davis’ recent essay). What I want to address here, however, isn’t so much what motivates certain people to avoid smartphones, social media, and other new forms of communication; rather, I want to consider the more fundamental question of whether it is actually possible to live separate from these technologies any longer. Is it really possible to opt out of social media? I conclude that social media is a non-optional system that shapes and is shaped by non-users.

I should start by noting that I am not the first person to observe that, while signing up and logging on to social media may be voluntary, participation is not. In fact, a panel of social media researchers recently gathered at the Theorizing the Web 2012 conference to discuss this very topic. Moreover, commentators have made similar points about many technologies in the past. For example, the automobile transformed the American landscape so much that the effects of suburbanization were nearly inescapable. Digital communications technologies have precipitated at least as large a shift in social relations, and, depending on how we judge the implications of social media for global politics (think of social media’s role in the #Occupy movement or the Arab Spring), it may even be larger. So, while I am joining a small but growing chorus of voices in arguing that we can’t escape social media any more than we can escape society itself, my goal is to offer a more compelling way of talking about this fact.

I strongly believe that our research into and conversations about the world are improved when we have well-formed language that captures and reminds us of the basic facts/arguments that we presuppose (I’ve written elsewhere about the importance of having precise language when theorizing the Web). In this particular case, I argue that it is time to revive some language from Donna Haraway (most famous for the 1985 “Cyborg Manifesto,” which gave this blog its name) and to apply it specifically to social media. Haraway’s basic argument in the “Cyborg Manifesto” is that humans and technology are ontologically inseparable—meaning, in plain English, that you cannot understand the nature of human beings without understanding the technological milieu they inhabit and that you cannot understand technology separate from human needs and social context. In her dense, though ever-playful, style, she lays out this argument (1985):

A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction… creatures simultaneously animal and machine, who populate worlds ambiguously natural and crafted… By the late twentieth century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs. The cyborg is our ontology; it gives us our politics. The cyborg is a condensed image of both imagination and material reality, the two joined centres structuring any possibility of historical transformation.

Here, Haraway is relativizing human nature while politicizing technology. That is to say, Haraway is simultaneously dispensing with two assumptions of Modernity: 1.) that humans have a deep, unchanging essence at their core, which is being further realized as society progresses, and 2.) that technology is inherently neutral and it is left to users to determine its significance. Haraway instead believes that we humans, and our social structures, adapt ourselves to fit the technologies of a given historical moment and that technology itself is a site of political action. In other words, the struggle to shape technology is always a struggle to shape society itself. Haraway is an anti-essentialist and an anti-Romantic. Because she allows for no idealized version of the past (think of Sherry Turkle pining for the days of real conversation or Andrew Keen mourning the loss of human creativity) and no idealized version of the future (think of the cyber-Utopian ethos of Silicon Valley circa 1994, when tech evangelists heralded the realization of human destiny in the new frontier of cyberspace), Haraway presents us with a sort of techno-realism (dare I draw a parallel with Evgeny Morozov’s “cyber-realist” position?). She believes, somewhat fatalistically, that we are born into the socio-technical system always already as its subjects. There is no question of escape, only of how we struggle for position within that system—of how we make use of the tools available to us.

In past discussions of Haraway, I’ve often cited a passage from a 2004 interview that I think captures the essence of the human condition in the information age (particularly, the age of participatory media), and I believe it is worth revisiting on this occasion; here she describes what she set out to do in the “Cyborg Manifesto”:

This is not about things being merely constructed in a relative sense. This is about those objects that we non-optionally are… It is not that this is the only thing that we or anyone else is. It is not an exhaustive description but it is a non-optional constitution of objects, of knowledge in operation. It is not about having an implant, it is not about liking it. This is not some kind of blissed-out technobunny joy in information. It is a statement that we had better get it – this is a worlding operation. Never the only worlding operation going on, but one that we had better inhabit as more than a victim. We had better get it that domination is not the only thing going on here. We had better get it that this is a zone where we had better be the movers and the shakers, or we will be just victims… So inhabiting the cyborg is what this manifesto is about. The cyborg is a figuration but it is also an obligatory worlding…

So technology is political in the sense that it is a site of struggle (perhaps, one could say, communication technologies are “places where revolutionaries go“) but it is not political in the naive sense that it determines the outcomes of social action (i.e., there are no Facebook or Twitter revolutions). Most relevant for the present conversation is this concept of non-optionality—that we can neither opt in nor opt out of the socio-technical system. We are all touched by the emergence of new technology, even those who are most marginalized within the system. Because, at any given historical moment, technology and social organization are always linked, we all inevitably feel the ripple effects when new technologies are introduced. This very point was the premise of the South African slapstick film The Gods Must Be Crazy, in which a single Coke bottle tossed from a plane is imagined to upset the entire social order of a remote Bushmen tribe (a caveat: racist and inaccurate portrayals abound).


More seriously, we, as consumers, would experience the non-optionality of Web-based technologies like video streaming services (e.g., Netflix and Hulu) if we were to try to rent The Gods Must Be Crazy on DVD, because these technologies have led to the shuttering of video rental stores across the country, and the remaining localized rental options like Redbox offer only a limited selection of the most popular movies. Another example is the dominant role that Facebook has taken in event-planning. In many social circles, event invites are sent exclusively through Facebook, so that, if you’re not on Facebook, you don’t get invited. While you can still choose not to be on Facebook, you cannot choose to live in a world where events are not organized via Facebook. Similar issues extend into the workplace. With almost half of all employers admitting that they use social media profiles to screen applicants, we have to begin wondering whether non-users will simply be dismissed as “unknown quantities.” Returning to the political realm, there is much debate about the role that social media plays in social movements such as the recent Egyptian Revolution, but what is clear is that, to the extent that social media shaped the character of the revolution, it also shaped the lives of non-users. In all these cases, social media may not have a direct impact on the lives of non-users, but non-users are nevertheless part of a society that constantly changes as the mutually-determining (i.e., “dialectical”) relationship between society and technology unfolds. Social media is non-optional: You can log off but you can’t opt out.

Why, then, do we continue to believe that we can be part of society and still exist apart from social media? Despite evidence to the contrary, people quite regularly speak as if new communications technologies are something otherworldly—something we can take or leave and that is merely incidental to social reality (an assumption Nathan Jurgenson labeled “digital dualism“). Our language tends to reinforce this way of thinking when we talk about online communication as “virtual” and contrast it with “real” face-to-face communication. We continue to indulge in the fantasy of the Web as “cyberspace”—a separate geography from the world we naturally inhabit, one that certain folks—we used to call them “hackers” before the Web was democratized—escape into but that stays neatly confined within our machines. We have yet to fully realize that cyberspace has “everted” (William Gibson’s term)—that it has colonized physical space—because, if we did, we would recognize that flows of digital information are now an unavoidable force in our daily lives. Atoms can no longer escape the influence of bits. To escape the influence of new communications technologies on social reality, one must now, ironically, abandon that social reality altogether and create a radical separatist fantasy-world for oneself (à la Henry David Thoreau and Ted Kaczynski).

Part of our collective insistence that social media is something we opt in to—or, at least, may opt out of—stems from an underlying moral conviction that the old ways of communicating are more genuine than the new—the “appeal to tradition” fallacy, if you like. We continue to give ontological priority to physical communication over electronic communication when, instead, we should acknowledge that both forms of communication are profoundly influential in our social world. Our newfound obsession with authenticity in our choice of medium may even come at the expense of the message. As Sarah Nicole Prickett recently argued, “What matters isn’t whether you’re talking (out loud) or texting (into your phone), but what you have to say.” She goes on to argue, essentially, that certain people communicate more comfortably and more genuinely via social media than face-to-face. Rather than obsessing over ranking the authenticity of various media, we ought to realize that information is highly fluid (Zygmunt Bauman, what what!) and easily slides between media. A rumor passed face-to-face can quickly make the leap into email or Facebook messages. The borders between analog and digital communication are porous; the two continuously augment each other.

Regardless of whether we communicate face-to-face or through digital technologies, our conversations will travel to and from one or the other medium. And, where we are absent, others will continue to chat about us and to produce documents that come to represent our lives to the world. Technology is so deeply intertwined with our social reality that, even when we are logged off, we remain a part of the social media ecosystem. We can’t opt out of social media without opting out of society altogether (and, even then, we’ll inevitably carry traces).

#TtW12 Twitter Backchannel

Co-authored with Nathan Jurgenson for the UMD Sociology Department Newsletter.

The crowd is gone, the banners rolled back up, the rooms cleaned, and now we have a chance to sleep—and reflect on Theorizing the Web 2012. After two successful years, the conference—born as a fun idea and with humble expectations—has morphed into an institution in the sociological sense of the term. We’re proud of Theorizing the Web and those who made it a success: our committee and the attendees.

For those who don’t know, this conference is a grad-student production through and through. We had a terrific committee again this year, which included sociology grad students Tyler Crabb, Rachel Guo, Zach Richer, Jillet Sam, David Strohecker, Matthias Wasser, Sarah Wanenchak, and William Yagatich. Dan Greene from American Studies also joined the team this year. Additionally, we’d like to recognize Ned Drummond (our designer), Rob Wanenchak (our photographer), and DJ Sean Gray, whom you might remember as a former sociology undergrad. We also had terrific sponsors, especially the sociology department, which has supported this event in any and every way that we’ve asked. We’re very lucky to be here.

We created the conference for the simple reason of wanting something we were not getting at other conferences. First, there is the substantive focus of the event: theory sessions at disciplinary conferences seldom feature presentations that focus on the radically transformative nature of the Web, while tech sessions and tech conferences tend to focus solely on description rather than on making theoretical arguments. Without an apparent space to theorize the Web, it became clear that we needed Theorizing the Web.

But if we’re going to make a conference, we’re going to make one we want to attend. The cost? Pay-what-you-want. Grad students could attend for $1, and those who could afford it donated generously. The program should be filled with smart, clearly presented theories from a range of perspectives. Viewpoints often neglected at tech conferences, be they critical, queer, feminist, etc., undergird the entire event, rather than being ignored or placed in token sessions. More than interdisciplinary, this conference is also non-disciplinary, taking very seriously the importance of non-academic knowledges in gaining insights about the social world. An art gallery, film screening, and other ways of knowing/communicating augment the paper presentations.

Rey, Tufekci, Carvin, & Jurgenson

Theorizing the Web 2012 featured roughly 40 paper presentations. Rooms were packed (over 200 people registered to attend in-person). All presentations were also livestreamed (we had over 70 people simultaneously watching live at various points in the day). There was also an extraordinary conversation taking place on Twitter throughout the event. Indeed, the Twitter backchannel itself became a topic of discussion throughout the day. Traffic was heavy, with 4,750 tweets from over 650 people to the official conference account and to the #TtW12 hashtag. And, we made an attempt to innovate in bringing together online and offline interactions at the conference by creating the role of “backchannel moderator” for each session. These folks drew questions from the Twitter stream, giving voice to many people who could not travel but were watching the event remotely.

Following 2011’s talks by Saskia Sassen, George Ritzer, and danah boyd, the keynote for 2012 was a conversation between sociologist Zeynep Tufekci of the University of North Carolina, Chapel Hill and Twitter journalist Andy Carvin of NPR. During the keynote, Tufekci sat across from Carvin and acted as interviewer while simultaneously using her laptop to lead the backchannel discussion—taking questions, posting articles, and letting what was happening both on- and offline drive the session. The audience, following what was happening on stage and the fast-moving backchannel on their devices, were inundated with smart ideas and information and left with much to think about regarding social media, social movements, and journalism.

The Afterparty

This is just too much fun not to do again next year.

Thanks to all those who helped make this happen, thanks to everyone who attended, thanks to everyone who encouraged and congratulated us, and thanks for reading.

Let’s start with a simple question: Would you use Facebook the same way if you knew others were notified when you viewed their page? Certainly, Facebook collects this sort of data on your viewing habits, and the decision not to share information about who has visited your profile is an intentional one. In fact, many sites—perhaps, most notably, dating sites like OkCupid—make the ability to track visitors a central part of their site design. OkCupid regularly sends emails to users when a good match views their profiles. The idea is to turn passive lurking into active interaction. Of course, the purpose of a dating site is to connect strangers.

Lurking, however, has become a definitive part of the Facebook experience. As we now know, most Facebook friends are people we have met in person. Facebook doesn’t need to break the ice. Yet, users rarely interact with many, if not most, of their Facebook friends. So, why be friends at all? Of course, the answer is that we love to stalk the people in our social networks, especially those weak links on the margins whose lives we aren’t keyed into through regular interaction. Today, Facebook stalking plays the same role (albeit more efficiently) that gossip chains played in generations past: It keeps us connected through heavily-mediated and indirect forms of interaction. And both gossip and Facebook share the unique property of making us more visible to each other.

Gossip invisibly produces visibility by mediating interactions through one or more other people. By getting “the dish” on friends, colleagues, and even rivals, we maintain a sense of closeness and involvement without necessarily intruding or intervening directly in others’ affairs. Gossip is a decentralized form of (mutual) surveillance. This surveillance gives us knowledge about those in our social circles, and it guides our future interactions with those whom we surveil (e.g., it helps us to know when to pursue an opportunity or to offer help). However, this surveillance also guides us as to which topics and situations are to be avoided. That is to say, the invisibility granted by gossip makes us all more visible to each other, but this does not mean we simply know everything about each other; instead, we learn there are many things that we cannot find out about others. We learn what questions cannot be posed—which topics are too embarrassing, too impolite, or too secret to be broached.

Facebook functions similarly to gossip in that, through extensive self-documentation, we render ourselves highly visible to one another. Yet we also leave many clues about what must remain invisible or unknown (Nathan Jurgenson argued this point quite compellingly in his essay on “Rethinking Privacy & Publicity on Social Media.”) The relationship status field, for example, is often highly contentious. Many people choose to list a close friend or to leave the field blank in order to avoid drawing attention to what might be a sensitive or unstable aspect of their life. When we see a blank or joke relationship status, we immediately recognize that this field reveals only what is unknown. Such observations produce “a known unknown” (as former Defense Secretary Donald Rumsfeld once said). Thus, the visible self-documentation aspect of Facebook and the invisible lurking aspect work in tandem to reveal both what is known and what is unknown, what is visible and what is visibly invisible. In this sense, it can be argued that privacy and publicity are dialectical in character as opposed to being opposites on a zero-sum continuum.

Gyges lurks in the queen's bedroom.

Once we recognize the complexity of the relationship between privacy and publicity on social media, we then must ask how users can best adapt to this situation. The question of how in/visibility influences behavior is nearly as old as the written word itself. In The Republic, Plato’s character Glaucon recounts the tale in which Gyges, a shepherd, discovers a ring that makes the wearer invisible. Gyges uses its power to usurp his king’s throne. Glaucon invokes the legend to argue that morality is a social construct enforced only by fear of reprisal:

man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever anyone thinks that he can safely be unjust, there he is unjust… If you could imagine any one obtaining this power of becoming invisible, and never doing any wrong or touching what was another’s, he would be thought by the lookers-on to be a most wretched idiot, although they would praise him to one another’s faces, and keep up appearances with one another from a fear that they too might suffer injustice.

I’m more interested in the sociological implications of Glaucon’s argument than the moral/philosophical ones. Importantly, the legend illustrates the complexity of the relationship between visibility and invisibility. First, there is the fact that Gyges’ invisibility renders others more visible to him. But Glaucon makes a second, and even more significant, observation about visibility: He notes that what we say to someone’s face—when we are visible to them—is different from what we say behind their back—when we are invisible to them. That is to say, as early as the 4th Century BCE (when The Republic was written), social observers had already recognized that being watched changes our behavior and that different norms exist in circumstances of high visibility and circumstances of low visibility.

The ring of Gyges, however, creates a wrinkle that Glaucon overlooks: If a ring that granted its wearer the power of invisibility truly existed, it would seem to erode any meaningful distinction between speaking to someone’s face or speaking behind their back, because there would always be the potential that the person in question was invisibly lurking on the conversation. We can imagine, for example, that King Gyges’ subjects must have lived in constant fear that even their most private conversations were being overheard, and thus the subjects, presumably, would avoid ever uttering an ill word against the king.

In many ways, this imaginary situation resembles Michel Foucault’s panopticon, where prisoners were placed in a circle around a turret with one-way glass, so that they were always visible but never sure if someone was watching. However, the comparison is imperfect. Prisoners in cells never have the luxury of shifting contexts. Gyges and his subjects, on the other hand, would constantly move through an array of different relations between visibility and invisibility. Ironically, Gyges’ subjects were apt to feel less visible in his presence than in his absence. We can envision a dinner scenario, for example, where the subjects might feel free to whisper a few hushed criticisms while Gyges (and, more importantly, his ring) remained visible at the head of the table. If Gyges’ invisibility heightens his subjects’ sense of visibility, Gyges’ visibility inversely provides them a greater sense of invisibility.

In the case of Gyges’ ring, visibility does not pertain only to the actors (i.e., the king and his subjects) but also to the technology or infrastructure of visibility. Unlike the prisoners of Foucault’s panopticon, Gyges’ subjects were able to observe certain instances when surveillance was inactive. The visibility of the invisibility apparatus (i.e., the ring) grants the subjects their own degree of invisibility.

There are two main differences between lurking on social media and the scenario of Gyges and his subjects: 1.) On social media platforms such as Facebook, everyone has the equivalent of an invisibility ring. We no longer have to be concerned with a single lurking king, but with entire “invisible audiences” (see: danah boyd’s piece on networked publics). 2.) Social media networks are asynchronous, meaning that, though others are visible to us, we can’t necessarily know what they are doing in real-time—whether they are watching us or whether they are preoccupied with something else.

What makes social media analogous to the ring of Gyges is that, in both cases, those involved have a general understanding of how the technologies of visibility/invisibility operate and shape the dynamics of their situation. Just as the king’s subjects are able to observe the ring and to determine when it is inactive, social media users are able to ascertain what is visible in a specific context and also what can be concealed in that same context. Gyges’ subjects would certainly judge each other based on their ability to conceal their true feelings from the king (and, perhaps, even each other).

Learning “the rules of the game” (a phrase lifted from Pierre Bourdieu) allows social media users to be both more visible and more invisible simultaneously. In this context, broadcasting that there are things we do not think appropriate to broadcast becomes just as important as what we do broadcast. This observation is highlighted by the fact that many employers are now asking for passwords to log into social media accounts. These employers are likely more interested in ensuring that the applicant is not documenting certain activities (e.g., partying, drug use, criminal activity, etc.) than in whatever else the applicant is actually documenting. We find social media users engaged in a range of sophisticated strategies to simultaneously reveal and conceal information (e.g., white-walling [i.e., erasing all messages after they are sent], social steganography [i.e., hiding messages in song lyrics or other codes], the super log-off [i.e., deactivating an account every time you are done using it], maintaining multiple profiles, tweaking privacy settings, piggy-backing on a partner’s/spouse’s account to avoid direct use, etc.). We are judged not just on what is revealed but also on what is concealed—not just on what is known but also on “known unknowns.”

The (Visibly) Invisible Man

Secretiveness and discretion are two sides of the same coin. The former vice demonstrates a lack of awareness as to how to be appropriately visible, while the latter virtue demonstrates an ability to conceal aspects of oneself while embracing visibility. The concept of discretion implies that both visibility and invisibility are desirable—that one does not come at the expense of the other—but that each may be best realized in the context of the other. To be discreet is to be highly visible without ceasing to be invisible; it is also to be visibly invisible. Discretion is a strategy that reflects the complex interaction between visibility and invisibility, publicity and privacy.

Given the complex and interdependent relationship between visibility and invisibility, the ideological divisiveness regarding publicity/privacy or transparency/secrecy may be wrong-headed. When discussing our digitally-mediated environment and how we ought to react to it, we tend to view these categories as being an either/or proposition. The reality is that social media demands that we put greater effort both toward being public and toward being private, being transparent and protecting secrets.  We are not best served by being exhibitionists or by being enigmas; instead this new environment calls for discretion.

Interestingly, discretion also requires us to modulate our sharing based on what we believe others know. Only a fool would act as if others know nothing or know everything. Discretion requires us to make ourselves visible based on what is in/visible about others. Of course, it is easy to know what we can see about others and what they can see about us; the difficult aspect of being discreet is determining the important things others don’t or shouldn’t know about us and the important things that we don’t or shouldn’t know about others, so we can act accordingly. Perhaps it is no coincidence that the Oracle of Delphi judged Socrates the wisest man in Athens precisely because he was the only one who recognized what was unknown to him.

Social media is not simply a space of widespread surveillance, but, more specifically, a space of ubiquitous lurking, where the capacity to watch without being seen makes us all more watched. However, these circumstances do not simply create a situation where we are all expected to be unconditionally visible; instead, the visibility of the platform and its users actually creates new opportunities for concealment. Discretion—the ability to be both extremely private and extremely public without conflict—is the strategy demanded by such an environment. We cannot simply understand social media (or any social phenomenon) from the perspective of what is visible; we must also consider what is (visibly) invisible.

These ideas emerged in large part through discussion with Nathan Jurgenson. See a joint discussion of the complex relationship between visibility and invisibility here.

PJ Rey (@pjrey) is a sociologist at the University of Maryland working to describe how social media and other technology reflect and change our culture and the economy.

I just published a piece in a special issue of American Behavioral Scientist on the topic of Prosumption and the Prosumer. The article will likely be of interest to many Cyborgology readers. In it, I consider the applicability of Marx’s two main critiques of capitalism—alienation and exploitation—to social media. Though Marx was describing the materialist paradigm of factory production, it is useful to see how far these concepts can be stretched to account for the immaterial paradigm of digital prosumption, because, even if we observe weaknesses in how the concepts graft onto social media, these observations become a starting point for new kinds of theorizing.

Additionally, I found it important to try to bring alienation into the conversation surrounding social media. Thus far, most Marxian analysis has focused solely on exploitation (many hyperbolically claiming that we have entered an era of hyper- or over-exploitation). I argue that exploitation has remained a constant between material and immaterial modes of production and that what is most remarkable is the fact that productivity occurs on social media with so little alienation.

You can access the article on the publisher’s site or as a .pdf here.

PJ Rey (@pjrey) is a sociologist at the University of Maryland working to describe how social media and other technology reflect and change our culture and the economy.

This week, an ad agency (BBH Labs [see: previous stunt]) succeeded at its goal of grabbing headlines (see: Pitchfork and Wired) by employing homeless people as mobile WiFi hotspots for SXSW. While the scheme purports to be an attempt at “charitable innovation,” it is, in reality, a stunning expansion of neoliberal economic principles, turning exploitation into a feel-good sport.

The scheme distributes MiFi devices capable of sharing 4G connectivity with passersby. The homeless workers wear shirts that state “I’m [name], a 4G hotspot” and list the directions for connecting. Users must text the name of the homeless worker they have encountered to a particular number; they then receive login credentials and a convenient link to a website that touts the benevolence of the ad agency responsible for helping both the user and the poor homeless person in front of them.

Users are also prompted to make a donation to the homeless person providing them with WiFi service. According to the firm’s “Director of Innovation,” Saneel Radia, the program is a “charitable experiment” aimed at “charitable innovation.” The Homeless Hotspots website compares itself to street newspapers run by the homeless, explaining that the scheme

offer[s] homeless individuals an opportunity to sell a digital service instead of a material commodity. SxSW Interactive attendees can pay what they like to access 4G networks carried by our homeless collaborators. This service is intended to deliver on the demand for better transit connectivity during the conference.

The backlash against the publicity stunt was sharp and immediate; however, most commentaries have been largely superficial. Megan Garber of The Atlantic, for example, criticizes Homeless Hotspots for engaging in what she calls “digital colonialism,” saying “the whole thing reek[s] of digital privilege and entitlement.” Unfortunately, she does not fully spell out what digital colonialism encompasses, though the term does conjure several relevant issues: exploitation, blindness to privilege, and systemic racial/economic inequality.

Similarly, Tim Carmody (writing for Wired) said the scheme sounded like “something out of a darkly satirical science-fiction dystopia” and proceeded to elaborate a bit on the theme of exploitation, saying:

the homeless turned not just into walking, talking hotspots, but walking, talking billboards for a program that doesn’t care anything at all about them or their future, so long as it can score a point or two about digital disruption of old media paradigms. So long as it can prove that the real problem with homelessness is that it doesn’t provide a service.

Carmody’s statement is an implicit critique of neoliberal ideology, which holds that social problems are best addressed by minimizing government regulation and maximizing private sector innovation; however, he never really builds on the assertion stated above.

Jon Mitchell (on Read Write Web) has made what is, perhaps, the most developed critique of how the homeless are affected by the Homeless Hotspots scheme:

The Homeless Hotspots website frames this as an attempt “to modernize the Street Newspaper model employed to support homeless populations.” There’s a wee little difference, though. Those newspapers are written by homeless people, and they cover issues that affect the homeless population. By contrast, Homeless Hotspots are helpless pieces of privilege-extending human infrastructure. It’s like it never occurred to the people behind this campaign that people might read street newspapers. They probably just buy them to be nice and throw them in the garbage.

Mitchell captures a key shortcoming of the comparison between street newspapers and Homeless Hotspots: In the case of street newspapers, homeless people enjoy direct benefits from the thing they are helping to produce, while in the case of Homeless Hotspots, the homeless are turned into glorified wage laborers. Importantly, Homeless Hotspot users do not purchase access to promote the cause of the homeless; they purchase access to promote themselves on Facebook, Twitter, and other social media outlets. Interestingly, Radia acknowledged the validity of Mitchell’s argument, saying:

The biggest criticism (which we agree with actually) is that Street Newspapers allow for content creation by the homeless (we encourage those to research this a bit more as it certainly does not work exactly as you would assume). This is definitely a part of the vision of the program but alas we could not afford to create a custom log-in page because it’s through a device we didn’t make. However, we’d really like to see iterations of the program in which this media channel of hotspots is owned by the homeless organizations and used as a platform for them to create content. We are doing this because we believe in the model of street newspapers.

However, Radia—and Carmody, for that matter—do not engage with the broader implications of their observations. It’s not merely that homeless people fail to directly benefit from the thing they are being put to work producing, but that this form of wage labor, disguised as charity, also happens to be completely unregulated (e.g., homeless participants are guaranteed no minimum wage [update: BBH Labs posted a clarification that participants were paid a $50/day wage]). That is to say, the Homeless Hotspots scheme takes exploitation to extremes generally not tolerated in our society, plunging the homeless into a state of uncertainty regarding income that Marxian thinkers call “precariousness.”

Many readers will react by claiming “well, at least some possible income is better than no income for these individuals.” However, this argument is rather myopic; we also need to consider how this sort of scheme factors into the broader socio-economic system. What are the consequences of living in a society that acquiesces to the fact that its constituents are not even guaranteed $60 for a hard day’s work because their service is classified as charity and not as proper labor? What happens to the already vastly disproportionate distribution of wealth if companies can supplant their workforce with so-called charity cases? And should we not find such an occurrence all the more outrageous at a time when workers are desperate for jobs, while companies in many sectors of the economy continue to enjoy deep profit margins? The homeless do not need marketing gimmicks; they need stable assistance programs and real economic opportunity. This sort of charity, in its intrinsically ad hoc nature, fails on both counts.

As the example of Homeless Hotspots demonstrates, charity is most often merely a band-aid that treats the most egregious failures of neoliberal economics without addressing the fact that poverty is endemic to that system. Charity justifies and reinforces privilege, while making the “haves” feel good for alleviating the very problems they are complicit in creating.

Equally problematic, Radia states that

We are not selling anything. There is no brand involved. There is no commercial benefit whatsoever.

This claim is patently false. What he means is that his agency is not being paid to represent a client for this particular venture. BBH Labs is, of course, using the stunt to build its own brand, whose logo happens to be neatly affixed at the bottom of the Homeless Hotspots webpage and which has been mentioned in each and every news article covering the scheme. Sure, there is no direct commercial benefit, but the firm is accruing what sociologists call “cultural capital”—fame and recognition that can be cashed in on at a later date. As any sports star can tell you, achievement itself does not bring in the real dough; rather, commercial endorsements convert a celebrity’s fame into monetary assets. BBH Labs is going for something similar here.

Even if programs such as Homeless Hotspots do some good and “have their heart in the right place,” we must evaluate them in the broader context of the neoliberal economic system they both reflect and reinforce. We should spend more time thinking about how to fix the structural problems that create homelessness and less time thinking about how we can use homelessness to improve our own situation and to make ourselves feel good.

PJ Rey (@pjrey) is a sociologist at the University of Maryland working to describe how social media and other technology reflect and change our culture and the economy.

The Organizations, Occupations, and Work blog (associated with the American Sociological Association) organized an interesting panel discussion between Chris Prener, Christopher Land, Steffen Böehm and myself. I’ll summarize/critique the positions here and provide links for further reading.

Chris Prener initiated the conversation by asking “Is Facebook ‘Using’ Its Members?” Prener claims that, though the company gives users “access to networks of friends and other individuals as well as social organizations and associations,” Facebook—with its advertising revenue “somewhere in the neighborhood of $3.2 billion”—“benefits far more in this somewhat symbiotic relationship.” He concludes that Facebook, and social media more broadly, represent “a [new] space where even unpaid, voluntary leisure activities can be exploited for the commercial gain of the entities within which those activities occur.”

My critique of Prener’s piece is that he asserts that Facebook benefits more from online social networking than its 845 million users do; this is an empirical question—one that requires further evidence and calculation. If we assume Prener’s $3.2 billion figure is correct, Facebook is making only $3.79 in ad revenue per user. I would guess that many, if not most, users believe Facebook provides them with benefits that exceed that sum. In any case, exploitation still exists regardless of who benefits most.
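To make the arithmetic explicit (a quick back-of-the-envelope check, taking Prener’s $3.2 billion revenue figure and the 845 million users cited above as given):

\[
\frac{\$3.2\ \text{billion}}{845\ \text{million users}} \approx \$3.79\ \text{per user}
\]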

Christopher Land and Steffen Böehm echo Prener in their piece: “They are exploiting us! Why we all work for Facebook for free.” They too see Facebook’s profit model as dependent on exploitation. But they approach the issue from a slightly different theoretical bent. Drawing on Herman and Chomsky’s famous work on mass media, Land and Böehm argue that users (not content) are the primary product that social media creates, since it is users who are being sold to advertisers. They also observe, sardonically, that Facebook users experience a double-freedom insofar as users’ efforts are non-coerced but also unpaid. But just as Marx noted that capitalism achieved a monopoly over the means of survival (that is to say, people have to sell their labor to survive because they are otherwise denied access to the means of production), Land and Böehm argue that Facebook (and capitalism, more broadly) have achieved a monopoly over the means of online social networking.

Land and Böehm might have improved their analogy by spelling out that social interaction fulfills a natural human need and that participation on Facebook is coercive because so many invitations, conversations, memes, etc. are accessible only through the platform; thus non-participation leads to non-inclusion and social isolation. A more important issue I see with their argument, however, is its apples-to-apples comparison of broadcast media consumers and social media prosumers. Herman and Chomsky were focused on the broadcast media’s production of passive subjects; social media, on the other hand, is significant in that it produces active subjects (or, rather, subjects produce themselves for social media [see: Gilles Deleuze, “Postscript on the Societies of Control” and Nathan Jurgenson, “Experiencing Life Through the Logic of Facebook“]). Herman and Chomsky were less concerned with exploitation than with political acquiescence. If Herman and Chomsky are correct in assuming that passivity in the realm of broadcast media consumption passes over into the realm of politics, then activity in the realm of social media prosumption might equally be expected to translate into politics (though Chomsky himself remains skeptical). In any case, while broadcast media consumers are subject to manipulation, it is unclear that they are subject to exploitation.

In my piece, “Facebook is Not a Factory (But Still Exploits its Users),” I argue that Facebook use benefits both users and owners, but, while Facebook gains monetarily, users receive immaterial benefits. This qualitative/quantitative difference in the forms of capital derived from Facebook makes it difficult to compare the relative degree of exploitation between Facebook use and traditional labor. However, we can infer that Facebook is probably not more exploitative than conventional labor and is certainly less alienating.

The critique I have of my own piece (pointed out by Alexis Madrigal via Nathan Jurgenson) is that I use Facebook’s total market valuation in estimating the rate of exploitation for each user. This valuation is, of course, highly speculative and also includes so-called “constant capital.” It is more appropriate to do as Prener has done and use ad revenue as the basis for calculating the rate of exploitation.

Clearly, the consensus among this group of authors is that exploitation is an integral part of Facebook’s operation; however, questions remain as to its scope and significance.

Follow PJ Rey on Twitter: @pjrey

This piece is posted in cooperation with the Organization, Occupations, and Work Blog.

Facebook’s IPO announcement has stirred much debate over the question of whether Facebook is exploiting/using/taking advantage of its users. The main problem with the recent discussion of this subject is that no one really seems to have taken the time to actually define what exploitation is. Let me start by reviewing this concept before proceeding to examine its relevance to Facebook.

Defining exploitation. The concept of exploitation came to prominence about a century and a half ago through the writings of Karl Marx, and he gave it a specific, objectively calculable definition—though, I’ll spare you the mathematical expressions. Marx starts from the assumption that value is created through labor (most people today acknowledge that value is contingent on other factors as well, but we need merely accept that labor is one source of value for Marx’s argument to work). According to Marx, humans have an important natural relationship to the fruits of our labor, and our work is a definitive part of who we are. Modern capitalist society is distinct from other periods in history because workers sell their labor time in exchange for wages (as opposed to, say, creating objects and bartering them for other objects). Capitalists accumulate money by skimming off some of the value created by workers’ labor, so that the wages a worker receives are only a fraction of the total value he or she has created. The portion of the value created by a worker that is not returned to that worker (after operating costs are covered) is called the rate of exploitation.

So, for example, imagine that, during one day of work, a factory worker takes $10 worth of wood and assembles a chair that retails for $60; if the worker is paid $20 in wages for that day, then the rate of exploitation would equal $30/day. That is to say, the capitalist is made $30 richer each day at the expense of the worker. The real degree of exploitation, however, is best represented in relative terms. If we calculate the $30 of surplus value expropriated by the capitalist as a percentage of the total value created ($60 – $10 = $50), then we find that the real degree of exploitation is 60% (= $30/$50*100) of the value created by the worker.
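Since I promised to spare you the mathematical expressions, consider this an optional aside formalizing the chair example (the symbols are my own shorthand, not Marx’s full notation: W is the retail price, c the cost of materials, v the wages paid, and s the surplus value):

\[
s = (W - c) - v = (\$60 - \$10) - \$20 = \$30,
\qquad
\text{real degree of exploitation} = \frac{s}{W - c} = \frac{\$30}{\$50} = 60\%
\]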

The important point here is that exploitation is an objective calculation—one that is separable from the subsequent moral debates we often have about ensuring fairness versus rewarding risk/innovation. So, if we want to have a debate about the (im)morality of Facebook’s business model—as many recent commentators are, in fact, endeavoring to do—we must first establish that exploitation objectively exists on Facebook. However, this is not as easy as it might seem. The organization of Facebook hardly resembles the “cattle-like existence” in the factories that Marx originally set out to describe. While Facebook’s users flock to the site willingly, even happily, Marx summarized the factory as a place where the worker:

does not feel content but unhappy, does not develop freely his physical and mental energy but mortifies his body and ruins his mind. . . . His labor is therefore not voluntary, but coerced; it is forced labor. It is therefore not the satisfaction of a need; it is merely a means to satisfy needs external to it. Its alien character emerges clearly in the fact that as soon as no physical or other compulsion exists, labor is shunned like the plague.

Given this apparent disconnect between the experience of factory work and social media use, we should be reluctant to simply shoehorn this new phenomenon into the classic conceptual frame of exploitation. Instead, we should ask ourselves whether these apparent differences have any bearing on the assumptions underlying our calculation of the rate of exploitation. Indeed, a closer examination reveals significant differences between the way exploitation is carried out in the factory and on social media.

Is This What Goes on Inside Facebook?

Why do people use social media voluntarily, but avoid factory-style work whenever possible? There are two important reasons: 1.) Factory work is alienating, separating workers from the creative faculties they would otherwise use to shape the objects they are laboring to produce. Social media encourages a much greater degree of self-directed creativity. 2.) Social media users are not only producers of social media but also consumers. As such, there are direct and obvious benefits to social media use, unlike the discomfort of factory work, which is only partially and indirectly remedied by wages. These benefits of social media use are largely immaterial: e.g., making and preserving social connections, cultivating and demonstrating taste, and telegraphing that you are “with it,” part of the in-crowd. (Sociologists call these benefits cultural, social, and symbolic capital.)

Though we all can recognize that these immaterial benefits have real value, it is extremely difficult to fix a price to them. How many dollars is a friendship worth? How much cultural literacy can you purchase for $100? Moreover, because usage patterns vary so widely, different social media users are bound to derive different sorts of value from their use. That is to say, the value of these immaterial benefits is relative to the unique circumstances of each user. Because users are “compensated” through these immaterial benefits (rather than receiving conventional wages), it is extremely difficult to come up with a concrete figure for the real degree of exploitation (note: this is not really even “compensation” because this value is created directly by and shared between users). We know that Facebook receives all the material benefits from our use. Most readers have probably already seen it calculated that, with 850 million users and a speculative valuation of $100 billion, Facebook has netted $117.65 from each user. In this sense, we might say that the rate of exploitation is $117.65 per user for the eight years that Facebook has been in existence. However, without a concrete figure for the total value produced by each user, we are not really able to derive the real degree of exploitation. The best we can do is a sort of thought experiment: Would I pay $117.65 for what I have gotten out of Facebook in the past several years? If the answer is yes (personally speaking, I know I pay that much annually to host a website that I use far less than my profile), then we can loosely infer that the real degree of exploitation is less than 50%.
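This inference can be made explicit (again a back-of-the-envelope sketch, where S is the $117.65 in value Facebook is taken to have captured per user and B is the unpriced immaterial benefit each user receives):

\[
\frac{\$100\ \text{billion}}{850\ \text{million users}} \approx \$117.65\ \text{per user},
\qquad
\text{real degree of exploitation} = \frac{S}{S + B} \le 50\%\ \text{whenever}\ B \ge S
\]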

Why is it important to know that the real degree of exploitation is less than 50%? Many critics of social media downplay the immaterial benefits of social media and argue that, because workers do not receive wages, they are experiencing “over-exploitation” (i.e., a condition where the real degree of exploitation approaches 100%). Our little thought experiment enables us to conclude that, while Facebook is exploitative (like all capitalist enterprise), it does not appear to be substantially more exploitative than conventional “brick and mortar” businesses. While all the alarm about hyper-capitalism and the precarious state of social media users is probably overstated, the fundamentally exploitative nature of Facebook’s business model gives users ample cause to be skeptical of the benevolent image of Facebook that founder and CEO Mark Zuckerberg painted in his recent IPO letter, saying “We don’t build services to make money; we make money to build better services.” This statement exemplifies the fact that the fuzziness surrounding these economic relationships may make it easier than ever to induce and perpetuate this sort of “false consciousness.”

Author’s note: A related peer-reviewed article on the topic of “Alienation, Exploitation, and Social Media” is forthcoming in the next edition of American Behavioral Scientist.

Follow PJ Rey on Twitter: @pjrey.

Last week, I wrote a piece entitled “There is no Cyberspace,” where I argued that today’s World Wide Web bears little resemblance to the thing that cyberpunk authors like William Gibson imagined as cyberspace. I explained that Gibson defined cyberspace as a “consensual hallucination” and proceeded to argue that the Web was neither consensual nor hallucinatory. I noted that even Gibson himself acknowledges that the cyberspace concept is outmoded—that, rather than sucking us into the world behind the screen, computers have “everted,” overlaying the physical with the digital. I concluded that the term “cyberspace” confounds our ability to make sense of a social Web that has very real consequences in our lives because it evokes images of a fantastical space apart from reality that we can enter and exit at our leisure. The piece received thorough feedback and critique in posts by Mike Bulajewski (on his Mr. Teacup blog) and Jeremy Antley (on his Peasant Muse blog), which has encouraged me to further develop my argument.

My claim that “cyberspace” misleadingly evokes elements of fantasy left room for possible confusion insofar as I failed to define what I meant by fantasy. Bulajewski, for example, attempted to invert my argument, making a sort of post-Modern claim that “there is only cyberspace” because both our individual psyches (à la Sigmund Freud) and our collective consciousness (à la Emile Durkheim) mediate and interpret experience through the lens of our history, memory, traumas, etc. As Immanuel Kant (and his sociological successor Georg Simmel) explained long ago, there is no access to “real,” unmediated experience—all subjective input is filtered through the pre-existing structures of our consciousness. Bulajewski wants to call all experience “fantasy” because it is historically and culturally relative. Perhaps this is an important distinction in an arcane philosophical context, but I’m rather more concerned with what people actually mean when they say “real” in the context of the Web, as in: “real” life vs. cyberspace.

Credit: Werner Kunz

What do we mean when we say “the real”? Why is it generally stated in a positive tone, while “the virtual” or “cyberspace” is generally stated in a negative tone? What does this reveal about our present historical moment? These are the questions that concern me. I think the social theorist Jean Baudrillard has a lot to offer to this conversation. In his (1981/1994) classic book, Simulacra and Simulation, Baudrillard contrasts the real to its simulations. Baudrillard never actually does a great job of defining the real—mostly because he’s convinced it no longer exists—but we can generally gather that he (and the culture he is describing) interprets the real variously as that which is organic, original, indigenous, non-rationalized, undiscovered, and undisturbed. The real invokes a romanticized vision of the primitive or the pre-Modern. Baudrillard opines that the real has been supplanted by various layers of simulation; he explains that there are four orders of such simulations:

[1.] it is the reflection of a profound reality;
[2.] it masks and denatures a profound reality;
[3.] it masks the absence of a profound reality;
[4.] it has no relation to any reality whatsoever; it is its own pure simulacrum.

Here’s where the confusion arises. I claim that we wrongly perceive the Web as a fantasy, a simulation called “cyberspace,” but I did not specify what order of simulation I was referring to. I believe Gibson (and other cyberpunk authors) were offering us an enlightening and engaging second-order simulation that tweaked and distorted reality through the lens of fiction. The cyberpunk authors demonstrated a profound ability to reveal important aspects of the present through an imagined future; they tapped into what Marshall McLuhan recognized as the artist’s capacity to recognize important patterns emerging in the present (more on that below)—such as the fact that our culture was increasingly organizing itself around a perceived divide between the physical and the digital. Importantly, however, Gibson and his peers avoided conflating fantasy and reality by setting their stories at some ambiguous future date; thus the images they present (including that of cyberspace) do not originate as third-order simulations (i.e., fantasy masquerading as reality). Rather, something drove us, as a culture, to move from interpreting cyberspace as a matter of science fiction toward an interpretation of cyberspace as a matter of science fact—to say, “hey, that Web thing we’re all doing now, that’s a separate, virtual world: a space set apart from the real.” This cultural movement to characterize the Web through the fantasy of cyberspace does violence to the very real social relationships that flow on and off the Web; it posits them as otherworldly and, as such, inessential to our lives.

Why is it, then, that we are so prone to denial and self-deception when it comes to the role that the Web plays in our culture? I believe that accepting the Web as integral to the fabric of reality threatens comfortable assumptions about our natures, about the essence of the self and its authenticity, and about our romantic conceptualization of the human soul. If the Web is enmeshed in every aspect of human life and we accept that the Web is real, then we must conclude that every aspect of our lives is synthetic—that nothing is “real” in Baudrillard’s romantic conceptualization of the term. McLuhan once presented just such a vision in a televised debate:

Whenever a new environment goes around an old one there’s always new terror… When you put a man-made environment around the planet, nature from now on has to be programmed… the [new man-made] environment is not visible, it’s electronic.

Moreover, McLuhan proceeded to offer what could be interpreted as an explanation of why Baudrillard simply recoiled from this new, heavily-mediated environment, while Gibson is capable of providing a nuanced account of how one might come to terms with inhabiting such a world:

Every age creates as a Utopian image a nostalgic rear-view image of itself, which puts it thoroughly out of touch with the present. The present is the enemy… The present is only faced in any generation by the artist… The artist is prepared to study the present as his material because it is the area of challenge to the whole sensory life and therefore it’s anti-Utopian… it’s a world of anti-values and the artist who comes in contact with the present produces an avant garde image that is terrifying to other contemporaries…

Additionally, McLuhan explained that

Information overload produces pattern recognition… When you give people too much information, they instantly resort to pattern recognition—in other words, to structuring the experience. And, I think this is part of the artist’s world. The artist, when he encounters the present… is always seeking new patterns, new pattern recognition, which is his task, for heaven’s sake. His great need—the absolute indispensability of the artist—is that he alone, in the encounter with the present, can give the pattern recognition. He alone has the sensory awareness to tell us what the world is made of.

As an artist, Gibson engages in pattern recognition, managing to observe not only the increasing interplay between the physical and the digital but also our latent anxieties about a world that moves between the two. For example, by challenging us to imagine a world where the mind can be separated from a living body and can inhabit new bodies, Gibson provokes us to confront deep philosophical tensions in our belief that identity rests both in the unity of body and mind and in a transcendent soul. Where, then, does the self reside? Can the self be split? Copied? Can identity take physical manifestations outside of the body? Can identity take immaterial manifestations in the absence of or beyond the body?

As an artist, Gibson’s role is not so much to provide answers as to articulate the problem. He concedes: “we live in an incomprehensible present… I’m not really trying to explain the moment, I’m just trying to make it accessible.” His fiction conjures the very aspect of the present that we are least equipped to handle and gives it a name (i.e., “cyberspace”). He takes our deep-seated informational/material dualism to its logical conclusion and makes these ideas manifest in a world of fantasy. In so doing, he gives us a reference point against which we can contrast our own present experience. What we discover, in retrospect, is that the world he develops on the basis of our dualist assumptions bears little resemblance to our own. Once this gulf between ideas and experience is laid bare, we are left with two choices: get new ideas, or escape to a new world. We have tried desperately for several decades to accomplish the latter. Rather than confronting the shortcomings of our ideas, we have asserted that Gibson’s fantasy of cyberspace is real. We have sought to replace reality with fantasy, and, in so doing, we have denied the existence of the mixed/blended/mediated/augmented reality that we truly inhabit. This has led to a state of pessimism and despair for Baudrillard and others who mourn the loss of reality altogether.

The alternative, of course, is simply to accept that the real is synthetic, as theorists such as Donna Haraway counsel. Haraway famously voiced her opposition to the pessimistic romanticism of Baudrillard and company when she concluded: “Cyborg imagery can suggest a way out of the maze of dualisms in which we have explained our bodies and our tools to ourselves… Though both are bound in the spiral dance, I would rather be a cyborg than a goddess.” The goddess, here, represents the romantic view of identity, where identity arises out of an opposition between binaries; the cyborg, on the other hand, is a creature of context that continually renegotiates its identity in the space between supposed opposites (most pertinent in this case, the opposition between the physical and the digital). We must learn to embrace an augmented, cyborg reality, characterized by synthetics, copies, colonization, (de-)rationalization, reflexivity, and networked interactions.

Ever ahead of the curve, Gibson himself has embraced the view that the digital and the physical occupy the same space:

Cyberspace, not so long ago, was a specific elsewhere, one we visited periodically, peering into it from the familiar physical world. Now cyberspace has everted. Turned itself inside out. Colonized the physical.

Similarly, he contends that:

Cyberspace is colonising what we used to think of as the real world. I think that our grandchildren will probably regard the distinction we make between what we call the real world and what they think of as simply the world as the quaintest and most incomprehensible thing about us.

And that “[t]he non-mediated world has become a lost country… the mediated world is now the world.” Perhaps most provocatively, Gibson has claimed that his own neologism has outlived its usefulness:

Cyberspace might one day be the last usage of the prefix “cyber,” because, I think, “cyber” is going to go the way of the prefix “electro.” We don’t use the prefix “electro” in pop-cultural parlance much anymore… it being taken for granted that most things are electrical. And, I think, at this point, it could be taken for granted that most things are computerized.

This critique of digital dualism, however, does not imply that the interchanges between the physical and the digital are “frictionless.” I am not seeking to promote a naive Utopianism. As Haraway says: “This is not some kind of blissed-out technobunny joy in information.” Antley, for example, rightly observed that elements of our synthetic identity are prone to falling out of sync. This experience can be quite alarming. We all likely have stories of logging on to find uncomfortable images or other documents about us posted without our knowledge; panic ensues as we attempt to restore and re-sync our online profile. Similarly, many of us have also probably found ourselves lost because the reception on our phone or GPS cut out. Antley argues:

If we accept the premise that the Web is reality, then we must also accept that primary loss of connection to the web will create asynchronous gaps between our experience and the experience pervasively documented on the web.

Moving beyond the dualism of “cyberspace” does not mean escaping the difficulties of our mediated lives; rather, it is merely a step toward better identifying and negotiating such issues. And, if the past is any indicator, we are well-advised to be on the lookout for new works of art that will aid us in better identifying the problems of our present reality and the anxieties they will surely provoke.

Follow PJ Rey on Twitter: @pjrey

The words and ideas we use to make sense of the Web owe as much to science fiction (particularly, the cyberpunk genre) as they do to the work of technicians or to rigorous scientific inquiry. This is by no means a bad thing; the most powerful of such literary works call upon our collective imagination and use it to direct society to prepare for major transformations looming on the horizon. William Gibson’s (1984) Neuromancer was, no doubt, one such work. Neuromancer features the exploits of a “console cowboy” (i.e., a computer hacker) named Case, who travels across a dystopian world serving a mysterious employer. The work is notable for popularizing the term “cyberspace,” which Gibson coined a couple of years earlier in a short story called “Burning Chrome.”

In Neuromancer, Gibson described cyberspace as a “consensual hallucination” and, more specifically, as: “A graphic representation of data abstracted from the banks of every computer in the human system. […] Lines of light ranged in the nonspace of the mind, clusters and constellations of data.” Rather than just staring into a computer screen, hackers “jack in,” directly interfacing with these visual representations of data in their minds. The images described here are reminiscent of those portrayed in movies such as Tron (1982), Hackers (1995), and, to a lesser extent, The Matrix (1999).

Thinking further about the term, we see that it is a portmanteau of “cybernetics” and “space.” Thus, we might also define “cyberspace” as a site where information is organized and controlled. Gibson sometimes emphasizes cyberspace’s lack of physicality or materiality, calling it a “nonspace.”

Gibson denies that the term “cyberspace” represents any sort of coherent vision of the future, saying in an interview that the term was both “evocative and essentially meaningless […] suggestive of something, but had no real semantic meaning.” However, Gibson’s original intent in introducing the cyberspace concept is of little consequence; rather, what matters is the role the concept has come to occupy in our cultural imaginary. And it is precisely this view of the Internet as a space of “consensual hallucination” that has achieved dominance in present discourse.

Equating the contemporary social Web to “cyberspace” is, however, deeply problematic because the Web is neither consensual nor a hallucination.

Not Consensual. The Web is what Donna Haraway calls “a non-optional system.” Social media is now part of the very fabric of our society. Information distributed via social media affects all of us, regardless of whether we participate in it directly or not. At the individual level, the behaviors of our peers are shaped by a “documentary vision” (i.e., we make choices through the lens of the potential future documents our actions will create). At the societal level, what is trending on Twitter one day makes the news the next and affects our behavior on the day after that. For example, it is difficult to imagine that the Arab Spring or the Occupy movement would have received the same level of media attention had networks not had access to countless real-time streams of information pushed out by protesters and other observers. Mass attention, of course, encouraged and even vindicated the protesters. The back-and-forth flow of information between online and offline has become integral to the current political moment, to which we are all subject.

Not a Hallucination. The Web is not a suspension of reality: it is not a game, and it is not a fantasy. Nor is the Web merely about reality. That would imply that the Web is only epiphenomenal (i.e., that the causal relationship between the Web and reality is unidirectional: reality causes the Web but is isolated from its effects). Instead, the Web is part of reality; it is real. As Nathan Jurgenson recently described, we are as much a product of our online profiles as they are a product of us. Causality is bi-directional. We are all part of the same human-computer system.

Gibson and other cyberpunk authors/directors imagined that, in the complexity of computer code, a separate, virtual reality would emerge—existence in a realm of pure information. But this thinking was simply a reiteration of the same mind-body dualism that has plagued Western philosophy for centuries—a new philosophical Idealism that rivals even that of Bishop Berkeley. Gibson is not a static thinker, however, and, in light of the socio-technical developments of the subsequent decades, he has distanced himself from the cyberspace concept. In a 2010 New York Times opinion piece, Gibson explains:

Cyberspace, not so long ago, was a specific elsewhere, one we visited periodically, peering into it from the familiar physical world. Now cyberspace has everted. Turned itself inside out. Colonized the physical. Making Google a central and evolving structural unit not only of the architecture of cyberspace, but of the world. This is the sort of thing that empires and nation-states did, before. But empires and nation-states weren’t organs of global human perception. They had their many eyes, certainly, but they didn’t constitute a single multiplex eye for the entire human species.

This excerpt reads more accurately as an evolution of Gibson’s thought than as a description of any real changes in the world. The Web has always already been “everted”; it has always had a dialectical relationship with the physical world. Interestingly, Gibson claimed in an interview at the Chicago Humanities Festival (starting at 22:25) that one day people will look back on the present historical moment and say it was characterized by “a need to distinguish between what they thought of as the real and the virtual”—what we, on this blog, call “digital dualism.”

Gibson: “Cyberspace… no longer describes what’s happening.”

If not “cyberspace,” then what? The Web is probably better captured (etymologically, though not in actual usage) by the term “metaverse,” coined by fellow cyberpunk author Neal Stephenson. “Metaverse,” a composite of meta- and universe, means a universe abstracted from, about, or completing another universe. The Web is all these things simultaneously. Unfortunately, Stephenson implements the term in a way that differs little from Gibson’s cyberspace, so that it too now conjures dualist images, diminishing its utility.

Arguably, we do not need another neologism. The simple term “reality” should suffice as a catch-all for the online and offline aspects of our lives. However, dualist assumptions so pervade our vocabulary that our very language often betrays us, subverting our intended meaning. For this reason, we are in need of new vocabulary that explicitly affirms our understanding that the Web is not a separate part of our lives. On this blog, we have chosen to promote the term “augmented reality.” Others have chosen “mixed” or “blended reality.” Regardless of which term ultimately gains the most currency, one thing is certain: it is time to lay the term “cyberspace” to rest.

Follow PJ Rey on Twitter: @pjrey

While tech geeks and Silicon Valley execs have been decrying the Stop Online Piracy Act (SOPA) and its sister bill, the Protect Intellectual Property Act (PIPA), today, January 18, 2012, marks an unprecedented day of action across the Web. The protest is remarkable because it fully utilizes the Internet’s singular ability to blend top-down and bottom-up organization. Specifically, the action achieves maximum effect because major sites like Wikipedia, Google, and reddit have blacked out part or all of their pages, while individual users have also blacked out their own profile images and posts. As a result, it is nearly impossible to surf the Web without encountering a deluge of such images and, thereby, being encouraged to do at least a little investigating into why these bills provoke such ire.

Interestingly, the debate over intellectual property law pits new social media against old broadcast media. For this reason, the anti-SOPA movement may be the clearest demonstration to date of social media’s superior capacity (vis-à-vis broadcast media) for organizing and mobilizing social or political movements. New media is clearly winning the messaging war: a recent Zogby poll finds that 60% of likely voters are aware of the SOPA legislation and that 68% of them oppose it; the same number also believe it infringes on First Amendment rights.

The following is a scrapbook-style archive of anti-SOPA images from across the Web.

Wikipedia Blackout
Pirate Bay
Wikipedia
reddit
Random Selection of Twitter Users
Craigslist
reddit blackout
boingboing
Facebook Profile Pic I
Facebook Profile Pic II
Society Pages
Zuckerberg's Remarks on SOPA
Google
xkcd
WordPress