Note: This article touches on slut shaming, body shaming, homophobia, and ableism.
I love swearing. It’s a weekly miracle that my essays don’t include “totally fucked” or “fucked up and bullshit” in every paragraph. If I were reborn as a linguist, I would study swearing and cursing. I watch documentaries about cursing, I play a lot of Cards Against Humanity, and this interview with Melissa Mohr, the author of Holy Shit: A Brief History of Swearing, is my favorite episode of Slate’s just-nerdy-enough podcast Lexicon Valley. If you’ve been in the audience when I give a presentation, you have probably (despite my efforts to the contrary) heard me swear five or six times. I would hate to live in a world without swearing because it would be fucking dull. Unfortunately, my (and most English-speaking people’s) love of swearing comes into direct conflict with inclusionary social politics. I need a new arsenal of swear words that punch up and tear down destructive stereotypes. Every time I swear, I want to be totally confident that I’m offending the right people.
Offending people is not swearing’s only function, but it is a central one. Swearing is a necessary social sanction that does a lot of good in the world. There will always be people in this world who deserve to be told off. (Like my neighbor, for example.) But in the process of telling each other where to shove it, we also reaffirm and establish who in the world is desirable and who is unwanted. So if I call you dumb, stupid, lame, gay, retarded, or even a girl, I’m not only saying that women, non-cisgender people, or the differently abled are inherently bad, I’m also invoking all of the power of ableism, homophobia, and patriarchy to make you feel bad. Too many curse words strengthen the kind of social structures that we should be dismantling. I want to quickly and easily compare people to the parts of society that I find gross and unseemly. I want words that compare people to those with ill-gotten wealth or obscene power but, so far, calling someone the President of the United States of America doesn’t have the sticking power it should.
Efforts to consciously and directly alter language rarely work, so producing a new collection of commonly used swear words is going to take more effort than making some up and putting them in a list. I do not want to rely on the “fetch” method of consciously injecting new words into daily conversation. That’s not to say such efforts are hopeless or naïve—putting a word to a feeling or a phenomenon is the beginning of all sorts of movements and cultural revolutions—but I get the feeling that swear words just need to feel right. They need to come out of your mouth without a second thought.
The good news is that there are two large sociotechnical trends that work in our favor. The first is economic stagnation. Mohr, in the aforementioned Lexicon Valley interview, notes that the social taboo against swearing has everything to do with keeping your status. The very poor and the very rich (two classes that continue to grow in our present economic situation) have always been comfortable and blatant in their swearing. Swearing bears no risk if you don’t have anything to lose or are so well-heeled that there is no one else in the room that you need to impress. Only the upwardly mobile bourgeoisie are afraid of swearing. One could say that the socioeconomic climate is primed for swearing experimentation.
The second trend is the decentralization of media. Podcasts, YouTube videos, blogs, and even Netflix and Hulu exclusive content are all subject to far less regulation than radio or television. The words you cannot say on television are still the same, but there are plenty of other venues to test out new swear words. It’s strange, then, that given all the Internet-inspired new words that have made it into dictionaries over the past decade (e.g. tweet, defriend, uplink), none of them are swears or curses. You might stop me here and say that those press releases are just ways of ginning up press for a dying institution: some shameless link bait by people who don’t really know what that means. I think that’s beside the point entirely. After all, what would be more press-worthy than a word you can’t say in polite company? And yet, the offerings remain scant. I guess I could call you a Scumbag Steve but in the heat of the moment I’m probably just going to call you a motherfucker.
Perhaps that’s just it. Most of the communicative innovation of the past decade has used photos, illustrations, video, and emoticons to express a feeling or an idea. As Jenny Davis wrote a few years back, memes are the mythology of our digitally augmented society. They don’t make arguments; they are the dominant ideologies of our time. I can offend you with an Insanity Wolf meme in ways that my parents probably couldn’t, but it’s going to use the same lexicon that they had. I’m not suggesting that this is a zero-sum game where we either get new words or new memes, but perhaps I’m looking for the wrong thing. Maybe new curse words won’t do as much culture work as I think they will because the fight has moved elsewhere: away from utterances and toward a more heterogeneous system of self-expression.
Be that as it may, there’s no substitute for a new expletive to yell at people who cut you off on the highway. I’m not going to end this with a call for more swear words because that would be missing the point. Rather, I’d like to see some words that are already in widespread use in relatively small communities (I imagine ShitRedditSays has a few) and descriptions of how they came into being. I don’t think we can purposefully recreate moments where new words are born, but we can certainly foster an attentiveness or sensitivity to modes of evocative expression that rely solely on utterance. Perhaps, instead of copying and pasting something you whipped up on memegenerator.net, try to mash some words together. We could really fucking use some new ones.
In Taksim Square, at around 8PM local time, a man started standing near Gezi Park facing the Atatürk Cultural Center. According to CNN, and more importantly Andy Carvin (@acarvin) and Zeynep Tufekci (@zeynep), the man is believed to be Erdem Gündüz, a well-known Turkish performance artist who has inspired a performative internet meme that has already made it around the globe. (There’s a nice Storify here. Thanks to @samar_ismail for putting it on my radar.) Gündüz and his supporters were removed by police after an eight-hour stand-off (in multiple senses of the term), but now that small act has gone viral and spread well beyond Taksim Square. The idea is simple: a photo, usually taken from behind, demonstrates that person’s solidarity with those hurt or killed by Turkish police actions in the past month, and with the increasingly repressive policies of that country’s government in the last few years. On Twitter, the hashtag #duranadam (“duran adam” is Turkish for “the standing man”) quickly spilled over the borders of Turkey and has been translated to #standingman as more people in North America and Western Europe start to stand in solidarity with those in Taksim. #standingman is an overtly political meme because, unlike other performative memes like #planking, #owling, or even #eastwooding, it is meant to demonstrate belonging to a cause.
Performative internet memes are usually photos, sometimes videos, of people doing the same thing in different contexts. Their social or political value comes from playing with the genre itself. Planking, lying face down with your arms at your side in a strange or difficult-to-access place, is one of the first performative internet memes. Know Your Meme traces planking’s origin to the Australian “Lying Down Game,” which became popular on Facebook around 2009. During the 2012 American presidential campaign, actor Clint Eastwood spoke to an empty chair (meant to symbolize an absent president) at the Republican National Convention. The performance was so bizarre and creepy that it backfired, and #eastwooding came to mean making fun of the GOP, not the sitting president. I wrote at the time,
#eastwooding is funny because it exemplifies all of the problems of the modern GOP in a simple, easy-to-enact gesture. It is appropriate then, that a party that is made up of old white men would be so perfectly critiqued by a technology (and mythology!) that runs on popular participation.
#eastwooding wasn’t much more than political satire meant to poke fun at a poorly executed effort to speak in memes. Ultimately the symbol of the empty chair was taken up by the very campaign meant to re-elect the most powerful man in the world, so to call #eastwooding activism is a big stretch. Other than #standingman, the only overtly political, subversive meme that comes to mind is the We Are The 99 Percent Tumblr, started by one of the original founding members of Food Not Bombs at the very beginning of #OWS. The blog collects and displays photos of people with handwritten messages about why they identify as part of the 99%. Unlike #eastwooding or #standingman, however, We Are the 99 Percent is not a single, simple act that speaks for itself. You have to write out a message, and it is that handwritten letter that is supposed to convey most of the meaning. There are certainly ways of standing out and being creative with the medium, but it isn’t really a performative internet meme. By posting a 99 percent photo you’re contributing to a project that, as the site itself states, introduces ourselves to one another. It isn’t doing the same kind of rhetorical work that #standingman aims to accomplish. One isn’t necessarily better than the other; they just achieve different ends by somewhat similar means.
#Standingman is good activism for the same reasons it makes for a good meme: it has low barriers to participation, it’s simple enough that individuals can innovate and keep the conversation fresh, and it is easily explained to the uninitiated. Standing is pretty easy for a sizable majority of people in the world, and if you can get someone to take a picture of you from behind, you’re basically done. In fact, it is the simple, everyday nature of just standing there that makes #standingman so transgressive. Moreover, the aggregate effect of a simple action performed by thousands of people is moving and gives stark visibility to a wide range of intangible interpersonal relationships among strangers, friends, and compatriots. Stand as a crowd and the effect is something between a slow-moving zombie movie and the end of V for Vendetta: inspiring, a little creepy, and very intimidating.
Standing is simultaneously mundane and immensely brave. Stand in line at the grocery store and you’re not doing much, but stand in front of a line of tanks and you might go down in history. Standing can get in the way of something (like a tank, or a tree-destroying bulldozer) but it can also mark a spot. In the right context it can be a powerful message of solidarity, vigil, and remembrance. Some #standingman participants have chosen to stand where fellow activists were murdered by police, while others stand far away from Taksim Square to show global support for the Turkish struggle. As a rhetorical device, standing-as-protest is a classic example of Gandhian-style nonviolence. By getting arrested for something as innocuous as standing, you let the government do the work of displaying your oppression.
One thing that #standingman has in common with #eastwooding (and most other performative internet memes) is interpretive flexibility. It is quite easy to frame someone’s actions as a kind of participation in #standingman without their consent. Police standing around waiting to arrest someone can be photographed and said to be part of the protest. A standing penguin became a running gag within the movement after a comparison photo circulated showing CNN Turk airing a documentary on penguins while CNN International broadcast images of protests in Taksim Square. The ease with which #standingman was able to call attention to, and lampoon, obvious censorship demonstrates just how useful memes are in spreading the word about difficult subjects.
Even before #standingman existed, the very idea of standing was loaded with conceptual metaphor. One stands up to/for/with/against/behind one thing or another. To stand is to say you’ve taken a side. You have taken a position. It means you are not going to take it lying down. One stands out in a crowd or strives to be the last one standing in a conflict. Standing is winning; it is powerful. And like all powerful things, it is somewhat unwieldy and sits atop its own forms of oppression and violence. Not all courageous people can stand, and they are certainly not all men. #standingman, fundamentally, relies on the cultural currency of the strong able-bodied man as a source of rhetorical power. That doesn’t mean the meme is completely ableist or sexist, but one should always (at least) acknowledge the more dubious qualities of one’s activism, if for no other reason than to recognize the limits of the tactic. Plenty of women have participated in the meme, and if you search #standingwoman on Twitter you get a healthy dose of photos and artwork. The problem is that by relying on an overtly gendered meme you divide potential participants. Dual tagging takes up precious characters in a tweet, and it can alienate large swaths of potential participants.
#standingman might represent the birth of a new form of political protest: one that is displaced in time and space and extremely difficult to quell. The image of the standing person jumps up like a dandelion in springtime: remove one and you make a dozen more, until an entire field is full of them. One of the most powerful aspects of #standingman is its recursivity. By performing the meme you not only promise an audience for others, encouraging them to perpetuate the meme, you also produce a setting for it. Each individual performance of #standingman is an acknowledgement of the whole and a recreation of the seed pattern. It is a symbol that constantly recreates itself with seemingly no end. It is just one of many ideas whose time has certainly come.
I start with a nota bene: I do not self-identify as a “surveillance scholar,” but given our current sociotechnical and political climates, the topic is unavoidable. One might even be tempted to say that if you aren’t thinking about state and corporate surveillance, you’re missing a key part of your analysis regardless of your object of study. Last week, Whitney Erin Boesel put out a request for surveillance studies scholars to reassess the usefulness of the panopticon as a master metaphor for state surveillance. Nathan Jurgenson commented on the post, noting that Siva Vaidhyanathan (@sivavaid) has used the term “nonopticon” to describe “a state of being watched without knowing it, or at least the extent of it.” I would like to offer up a different term, taken straight from recent NSA revelations, that applies specifically to surveillance that relies on massive power differentials and is enacted through the purposeful design of the physical and digital architecture of our augmented society. Nested within the nonopticon, I contend, are billions of “boundless informants.”
Boundless Informant is the part of the NSA that analyzes the data brought in through another recently revealed program called PRISM. It sniffs out trends and linkages by, according to The Guardian’s Greenwald and MacAskill, “counting and categorizing the records of communications, known as metadata, rather than the content of an email or instant message.”
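To make that distinction concrete, here is a minimal sketch of what “counting and categorizing” communication records, rather than reading their contents, might look like. Every field, name, and toy analysis below is invented for illustration; it is emphatically not a claim about how Boundless Informant actually works.

```python
# A hypothetical sketch of metadata analysis: counting and categorizing
# who talks to whom, when, and from where, without ever reading a message
# body. All fields and names are invented for illustration.
from collections import Counter, defaultdict

records = [
    # (sender, recipient, hour_of_day, country) -- metadata only, no content
    ("alice", "bob", 23, "US"),
    ("alice", "bob", 2, "US"),
    ("bob", "carol", 14, "TR"),
    ("alice", "carol", 3, "TR"),
]

contact_graph = defaultdict(Counter)  # who talks to whom, and how often
by_country = Counter()                # where traffic originates
odd_hours = Counter()                 # who communicates late at night

for sender, recipient, hour, country in records:
    contact_graph[sender][recipient] += 1
    by_country[country] += 1
    if hour < 5 or hour > 22:
        odd_hours[sender] += 1

# Even this toy analysis surfaces relationships and routines:
print(dict(contact_graph["alice"]))  # {'bob': 2, 'carol': 1}
print(odd_hours.most_common(1))      # [('alice', 3)]: a pattern, not a message
```

The point of the metaphor holds even at toy scale: nothing above touches a single word anyone wrote, yet it already yields a social graph and a behavioral profile.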
First, I want to make a clear distinction between Boundless Informant (upper case) the NSA program and boundless informant (lower case) the metaphor. Unlike the Bentham/Foucault panopticon, Boundless Informant really does monitor everyone. That might make it seem too literal (and too soon) to use as a metaphor for digitally augmented state surveillance, but I think that makes it perfect. Thanks to recent leaks and excellent reporting, we know Boundless Informant exists, but we don’t really know how it works or what role it’s played in past events. The boundless informant is half boogeyman and half Orwellian police state. The true evil genius of boundless informant is that it makes Alex Jones conspiracy theories feel dangerously possible. It makes you wonder who’s seen your Dropbox files and why the MPAA hasn’t come after you for all those Game of Thrones episodes you pirated. Boundless informant as a metaphor stands for the secret and arbitrary use of power based on the limitless capacity to collect and subsequently analyze data.
The power of Boundless Informant comes from its position high “above” our planetary information networks. From the NSA’s vantage point, one gets exclusive access to a map of 21st century geopolitics. It shows the geometry and acceleration of the world’s burgeoning nonstate political networks: the thousands of decentralized, global sociotechnical systems that topple dictators and occupy city centers are difficult to see and even harder to infiltrate. An individual becomes a boundless informant by using the products and services that fall under the purview of the state’s PRISM. The state creates boundless informants through the perfection of digital monitoring systems, the collection and interpretation of biological evidence, and human terrain mapping. These are the same ordered bodies that Foucault saw as ensnared in capillaries of power. Even the most finely tuned body gestures and digitally mediated social interactions are studied and catalogued for later analysis.
Describing people as boundless informants highlights the way individuals unknowingly give up crucial information to powerful actors. The boundless informant does this through customer loyalty cards and connected social media accounts, but mainly through the metadata of how, when, and where those services are utilized. The boundless informant is unable to opt out, because to do so would mean living outside of a sovereign nation. There are degrees to which one can reduce one’s connections—living “off the grid”—but unless one is willing to live a hard subsistence life there are few means of doing this meaningfully. Moreover, those who rely on computerized state services—Medicaid, SNAP, public transit—are the easiest to track and monitor. To invoke the boundless informant is to call attention to the way powerful actors nonconsensually extract data from populations through the design of everyday life.
Unlike the panopticon, which requires prisoners to know how (but not when) they are being watched, the systems that produce boundless informants are distributed and ultimately unseeable in their totality by any one individual. And unlike the nonopticon, the creation of the boundless informant requires actively obscuring and hiding the tools of surveillance. In fact, knowledge of the modes of watching is grounds for severe prosecution. Power no longer resides in a central tower (physical or cognitive) or in the ignorance of the masses, but in the deliberately opaque, countless, and unremarkable black boxes that make up the physical, digital, and cognitive landscape of the network society. Watching must be so pervasive that to point out that one has internalized its gaze is a prosaic, almost naïve, concept that scares no one. The cultural shift is startlingly fast. Just 20 years after their creation, The X-Files’ famous dual taglines, “The Truth Is Out There” and “Government Denies Knowledge,” are absolutely quaint.
The boundless informant is the state’s ruthlessly logical and genius solution to controlling a world shot through with actor-networks and object-oriented ontologies. The program so perfectly matches the recent trends in contemporary philosophy (it’s the connections that matter, not the nodes) that one can easily imagine a row of NSA cubicles filled with books by Bruno Latour and Graham Harman.
For the past 500 years, empires have sustained their prominence by shifting from being the dominant player in industry and trade to being the very medium of those transactions.[1] The United States has done this twofold: by establishing its war debt (aka U.S. Treasury bonds) as a global reserve currency, and by playing host to a majority of the world’s communications. While a velvet-gloved iron fist still works, it is less obvious (and therefore much more difficult to defy in a way that garners sympathy or solidarity) to command a billion little straws constantly sipping at the world’s collective conversation. The result is a vast treasure trove of blackmail and ammunition for social movement sabotage.
Finally, the (B)oundless (I)nformant invokes how much each one of us is implicated in our own surveillance. Everything from our willingness to (pro)actively share our most intimate moments[2] to professionals’ fetishization of Big Data feeds into digitally augmented, decentralized state surveillance. Boundless informants are “boundless” in two senses: the information never seems to stop flowing, and the very boundaries of their identity are porous and emergent. The name captures the vast quantities of data that any single self-quantified citizen can provide to powerful entities. Informants are, even if somewhat begrudgingly, willing parties to investigations they do not fully understand and do not have control over. Think of the police informant in crime procedurals (or The X-Files!): torn between concern for the safety and protection of themselves and their loved ones, and the largely imperfect administration of justice and search for truth. The boundless informant has ambiguous motives and may have his or her data extorted or manipulated out of their hands. They may provide information not knowing that it will be used against their compatriots or themselves.
We are boundless informants when powerful actors can record and analyze our texts to loved ones, our account logins, and our GPS markers. Not because of the content of those actions, but because of their relationship to one another and our relationships to fellow humans. The boundless informant can be said to be within the nonopticon—that is, they do not know they are being watched—but more importantly they enable this unseen watching just by going about their everyday life. We are constantly shedding bits and data points along with our hair and fingerprints. We cannot help but leave a trail.
[1] This observation was first made by the economist Giovanni Arrighi and has since been picked up by David Graeber and other activists as a point of departure for critiquing Wall Street’s “mafia capitalism.” See The Democracy Project by David Graeber (2013) page 104.
[2] I was hesitant to make even an oblique reference to the clickbait-inspired “death of privacy” meme, but ultimately decided that no matter how privacy-settings savvy the individual, state surveillance programs are able to capture it all. Additionally, most research suggests we are more actively concerned with “social surveillance” (family and friends seeing private information) than with the kind of watching done by governments or private corporations.
Maps are always political. Most maps show us something that we already believe, so it’s difficult to see what is being reinforced and what is systematically ignored. Even the most mundane AAA maps of highways and state borders are doing political work by recognizing the sovereignty of individual states and the obduracy of highways and roads. The near-infinite number of things, qualities, measurements, and people that have spatial characteristics (seriously, just think of all of it: temperatures, ancestral lands, endemic species, isobars, places to buy smoothies, locations of hidden treasure, and so on, and so on…) means that map makers must always select what is relevant and what is not. This selection process—a human endeavor—is inherently social and deeply political. Google, a company that has taken it upon itself to reject that selection process and “organize all the world’s information,” wants to provide a single map and, instead of deciding what is relevant in any given map, will personalize it based on information it has about you and your friends. Evgeny Morozov, writing in Slate,[1] is rightfully concerned that Google doesn’t quite know what it’s dealing with when it says it wants to organize public spaces in its databases right next to email and photos of cats. He is concerned that, unlike books or weather forecasts, Google doesn’t “acknowledge the vital role that disorder, chaos, and novelty play in shaping the urban experience.” I completely agree that unpredictability is necessary for good urban space, but the biggest threat Google poses to public space isn’t that its maps are “profoundly utilitarian, even selfish in character.” Rather, it’s that Google hasn’t done enough to personalize maps in such a way that they become part of everyday social (and Social) life.
Maps are so good at their job of transforming abstract ideas into visible entities that we often conflate the two. If I were to ask you “what does Portugal look like?” you’d probably show me a map. A map is a representation or sign of the actual object, but we generally accept a map as the “correct” answer because a nation-state isn’t something tangible that we can pick up and observe. This hyperreal state, where the signified political entity is indistinguishable from the signifying map, means that changes in maps can have a very tangible impact on our built environment. Maps aren’t just representations of the world; they are also tools for shaping the environment.
One of the many ways the mutual shaping of maps and landscapes occurs is through the enabling and affording of individual action. By spatially ordering information, maps make it easier to see patterns and make certain kinds of decisions. The first thing Nextdoor, an iPhone app advertising itself as a “private social network for your neighborhood,” asks you to do when setting up a new neighborhood is to draw a border on a Google map. Drawing borders might be one of the more basic things you can do with a map, but it’s also one of the most important. The border in Nextdoor is a prerequisite for any and all action within your neighborhood. The app doesn’t work, technically or conceptually, without those borders. Similarly, a map that barely makes mention of major highways or borders and focuses mainly on biking paths enables a myriad of bike-related activity while relegating highways to nothing more than obstacles. While the mapmaker might have a particular user in mind, maps can be appropriated for many different purposes. A map of North American bike trails can be useful to a radical luddite or a bourgeois cyclist. Both may have gotten it from the Adventure Cycling Association and use it for totally different ends. The former might use it to find some federal land to squat on, while the latter will use it to find the trail that does the most for their gluteus.
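For the programmers, that border is not a metaphor; it is a polygon that gates every other feature. The sketch below uses the standard ray-casting point-in-polygon test to show how such a gate might work. The coordinates, names, and posting logic are hypothetical, not Nextdoor’s actual implementation.

```python
# A sketch of how a drawn border can gate an app's behavior, as with the
# neighborhood boundary Nextdoor requires. The ray-casting test is a
# standard point-in-polygon algorithm; everything else here is invented.

def inside(point, polygon):
    """Return True if point (x, y) falls inside polygon (ray casting)."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Count polygon edges crossed by a ray cast rightward from the point.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            crossings += 1
    return crossings % 2 == 1  # an odd number of crossings means inside

# A toy neighborhood drawn as four (longitude, latitude) corners.
neighborhood = [(-73.99, 40.72), (-73.97, 40.72),
                (-73.97, 40.74), (-73.99, 40.74)]

def post_message(user_location, text):
    # The border is a precondition: not inside, no posting.
    if not inside(user_location, neighborhood):
        raise PermissionError("outside the neighborhood border")
    print("posted:", text)

post_message((-73.98, 40.73), "Stoop sale on Saturday!")  # inside: allowed
```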
This concept of mutual shaping, paired with ambiguous user intentionality, gets at the heart of the danger of personalized maps and my disagreement with Morozov: when we go out and act with our personalized maps in hand, we shape the world in our image, but sometimes not in ways that the map maker (or programmer) intended. Weird things happen when an algorithm can tell a coffee snob that a new purveyor of Blue Bottle Coffee opened up in a neighborhood they usually don’t walk through. It could mean gentrification happens faster, or that “districts” no longer serve the function they once did. It could also mean something totally different that only the best science fiction writers can imagine. Personalized Google maps could turn us into the predictable and computer-readable people that Morozov says they will, but they might also make us more willing to investigate new and different parts of our towns and cities.
Morozov warns, “If Google has its way, our public space might soon look like the Californian suburbia that the company calls home: nice but isolated, sunny but relying on decrepit infrastructure, orderly but segregated by income.”
Given that just about half of Americans live in suburbs, it makes sense that Google has put a lot of thought into mapping them. Does that automatically mean that Google wants to terraform the Earth to look like Santa Clara County? Not necessarily. Google Maps, in its current incarnation, works best in the suburbs and huge metropolises, but that probably has more to do with following where the users are than with any kind of preference for a particular built environment. A major shift in built environments would not be in Google’s interest, but that is different from saying that the suburbs are what Google would make if it could choose from any kind of built environment.
This is not to say, however, that Google doesn’t benefit from particular sociotechnical arrangements. Systems interact with the least amount of friction when they are isometric. Social construction of technology theorists like Thomas Hughes have noted that big technical systems beget human organizations and bureaucracies of a similar size and shape. Cities built for cars and computers built for networks have a lot in common: they have huge arteries and backbones that end in tiny capillaries, cul-de-sacs, and routers. It’s no coincidence that the highway was an early metaphor for describing the Internet: it compresses space, relies on a simple hierarchy to order completely unconnected units (cars and packets), and connects people quickly with a ruthless kind of self-justifying technique. Whatever is in Google’s interest is probably big and complex, but not necessarily a California suburb. It could also be New York City or Shanghai.
Morozov isn’t wrong when he says, “enriching the database—rather than our urban experience—is the company’s primary objective.” But the database and our urban experience are deeply enmeshed and, sometimes, interchangeable. My reviews and Foursquare check-ins are intertwined with past memories; imaginaries of new or yet-to-exist places are prefigured with stories from friends and photos I found on a travel blog. In short, whether Google is interested solely in its database or in the urban experience is beside the point. The city is subject to observer effects: it changes in unpredictable ways just by being watched. Rather than thinking about what Google wants, we should be concerned about what their actions ultimately produce.
The urban experience, to the extent such a dubious unity can even be said to produce self-similar experiences at all[2], is paradoxically highly orchestrated and totally unpredictable. Thousands of things have to happen the same way every day—buses running, water flowing, people exchanging money—but each day is just one of an infinite number of combinations. Google’s job, in many ways, will never be done, because the city is one long emergent phenomenon, constantly being remade and reordered. Any one of the Google engineers who have been working on the self-driving car can attest to the infinite complexity of a single pre-planned car route.
Enriching the urban experience does not necessarily make it more difficult to enrich the database. Observer effects are famously difficult to predict or even notice once they’re happening. Highways were thought to be the answer to cities’ traffic congestion. By making huge roads that never have to stop, planners and engineers thought they were ushering in a new era of easy motoring. When congestion got worse, their immediate decision was to add more highways. Now we know that this line of thinking makes about as much sense as buying bigger pants to lose weight. Efforts to reduce car traffic through better information will probably yield similar results. Making it easier to drive will never reduce the number of drivers. Seeking to reduce friction at such a large scale will always lead to unintended consequences.
Google only makes money on the promise that it is well positioned to sway people as they are making decisions about where to go and what to buy. According to Morozov:
The best way to do that is to actually turn us into highly predictable creatures by artificially limiting our choices. Another way is to nudge us to go to places frequented by other people like us—like our Google Plus friends. In short, Google prefers a world where we consistently go to three restaurants to a world where our choices are impossible to predict.
I’m having a hard time imagining the ideal three-restaurant scenario. When did anyone choose their top three restaurants? Where do new diners come from, and when do they pick their three? If everyone already has their three places, who is being convinced by their Google Plus friends to eat anywhere new? And why would Google want consistency when it makes money by facilitating businesses’ ability to change your mind through advertising?
To summarize: Google’s personalized maps may well be intended to do what Morozov says, turning public spaces into frictionless and lifeless pathways to Yelp-reviewed destinations, but historically such efforts have failed. Observer effects, combined with the sheer unpredictability and complexity of large sociotechnical systems, mean that neither the database nor the urban experience simply follows the other. Rather, they are in a constantly emerging process of mutual shaping, never fully formed or complete. Given this reality, we should care less about Google’s stated or perceived intentions, and focus more on what their actions end up producing.
I want to conclude by offering up the possibility that personalized maps are nothing new, but that the opportunity to see and share these maps could potentially reinvigorate our public spaces. In the late 50s, MIT urban planner and architect Kevin Lynch asked residents of Boston, Los Angeles, and Jersey City to draw a map of their city. He found that while individual maps were distorted, the distortions almost disappeared in the aggregate. One person might forget the existence of an entire boulevard or transpose the order of churches going north to south, but overall the maps were fairly accurate. Lynch, while acknowledging that his samples weren’t very representative or large (30 in Boston, 15 each in Los Angeles and Jersey City, all middle and upper income), couldn’t help but comment on how strong and predictable the trends were. People with cars would see highways as smaller than they actually were (the speed of the car tends to reduce perceived distance), while pedestrians tended to exaggerate the size of the highway (because it was a nuisance and an obstacle, rather than a useful path).
Lynch concluded that we all have an “image of the city” in our minds that exaggerates salient features and actively deletes places that do not serve a purpose in our daily lives. We’ve always had personalized maps but, up until recently, lacked the tools to effectively share them with each other on a consistent basis or in useful ways. Personalized Google maps, so long as they provide an opportunity for sharing, could provide some of the richest, most evocative maps to date. The overlapping of millions of personal maps would illuminate hot public spaces and identify emerging new ones. The key here is whether or not Google lets us compare our maps. It seems like a killer social media function, so there’s no reason not to. Morozov says, “no one formally reviews public space or mentions it in their emails,” but I beg to differ. You can check into all sorts of public spaces on Foursquare (even airport terminals), and I wrote a lot of emails about public space when I was actively involved in Occupy. Google has lots of information on public space, and given that it is one gigantic private corporation, that seems to be precisely the problem.
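Lynch’s aggregation effect is easy to see with invented numbers. The sketch below averages four hypothetical respondents’ guesses at a highway’s length: the drivers undershoot, the pedestrians overshoot, and the mean lands near the true value. The figures are mine, not Lynch’s data.

```python
# An illustrative sketch, with invented numbers rather than Lynch's data,
# of how individual map distortions can wash out in the aggregate.
true_length_km = 4.0

perceived_km = {
    "driver_1": 2.8,      # car speed compresses perceived distance
    "driver_2": 3.1,
    "pedestrian_1": 5.3,  # an obstacle looms larger on foot
    "pedestrian_2": 4.9,
}

aggregate = sum(perceived_km.values()) / len(perceived_km)
print(f"aggregate estimate: {aggregate:.2f} km (true: {true_length_km} km)")
# aggregate estimate: 4.03 km -- closer to the truth than any single map
```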
Morozov quotes a really telling passage from Google’s press announcement. It reads,
In the past … a map was just a map, and you got the same one for New York City, whether you were searching for the Empire State Building or the coffee shop down the street. What if, instead, you had a map that’s unique to you, always adapting to the task you want to perform right this minute?
That’s just wrong. Tourists get a map with all the destinations highlighted. If you’re visiting from out of town, a friend might draw a map on a bar napkin. AAA’s TripTik planner has been around for ages. Informal, personalized maps have always been around (they used to be the only maps, when you think about it); the difference is in the wealth of information, the accuracy, how quickly they can be produced, and the centralization of ownership. The press release is a classic misdirection: Google’s innovation isn’t the personalization, it’s how lots of different personalized maps are connected to one another and owned by a single private entity. The real danger from maps comes not only from the potential corporate disinterest in letting people share their image of the city, but also from Google’s ability to actively delete objects from our collective memory. If Google destroys public space, it won’t be from a new invention; it’ll be from taking away an invention we’ve come to rely upon.
[1] This reply is somewhat belated, since I am only just emerging from a hole of exams and subsequent fieldwork.
[2] I’ll admit I do not really know what Morozov means when he says “the urban experience,” let alone what “enriching” it looks like. I take “the urban experience” to mean anything having to do with the sensorial and affective components of everyday life in a city. Enriching such an experience may mean simply increasing these components by some degree, or it may mean producing something more than the sum of their parts.
In the first chapters of every Economics 101 textbook there’s a misleading hypothetical about the origins of money. David Graeber, in his book Debt: The First 5,000 Years, calls it “the founding myth of our system of economic relations.” The myth is so pervasive that even people who have never taken an Economics 101 class know and believe it. We tend to assume that before money there was an awkward barter system where you had to keep all your chickens and yams with you when you went to market to buy a calf. If the person selling the calf didn’t want chickens or yams, no transaction would take place. Money seems to fill a very important need: it lets us compare and exchange a wide variety of goods by establishing a common metric of value. The problem with this construction—of simple barter being replaced with cash economies—is that it never happened. That’s what makes Bondsy, an app that lets you effortlessly barter with a private set of friends, so interesting: it takes a modern myth and turns it into everyday reality. What has existed for centuries, according to Graeber and other anthropologists, are debt and credit systems utilizing some sort of value measurement. Some societies might measure the value of objects in terms of other objects (e.g. clams, feathers, buffalo skins, beads) but those are measurements, not actual bartered objects. They act more like money than barter. There are far too many different credit/debt systems to generalize accurately, but lots of them operated on the premise that gifts carry some promise of reciprocation. Stiffing your neighbors was a good way to starve to death unless you were totally self-sufficient.
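The hypothetical’s core problem, the “double coincidence of wants,” is simple enough to state as code. The toy below (goods and wants invented for illustration) shows why a pure barter market stalls and why a commonly accepted token unsticks it.

```python
# A toy illustration of the "double coincidence of wants" at the heart of
# the Econ 101 myth: a barter trade happens only when each party wants
# what the other has, while money requires just one coincidence.
def barter_possible(a_has, a_wants, b_has, b_wants):
    return b_has in a_wants and a_has in b_wants

# I bring chickens to market, wanting a calf; the calf-seller wants yams.
print(barter_possible("chickens", {"calf"}, "calf", {"yams"}))  # False: no trade
# A commonly accepted token needs only a single coincidence, because by
# definition everyone "wants" money.
print(barter_possible("money", {"calf"}, "calf", {"money"}))    # True
```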
Currencies are a good way of making transactions among people you don’t know or actively dislike. They settle debts quickly and efficiently by offering a common object that can be exchanged later on. That is why, throughout history, currencies generally show up just after armed conflict. Currencies aren’t for friends; they’re for enemies. Again, Graeber:
Gold and silver coins are distinguished from credit arrangements by one spectacular feature: they can be stolen. A debt is, by definition, a record, as well as a relation of trust. Someone accepting gold or silver in exchange for merchandise, on the other hand, need trust nothing more than the accuracy of the scales, the quality of the metal, and the likelihood that someone else will be willing to accept it.
Enter Bondsy. The app lets you find friends on Twitter and Facebook, but it doesn’t rely on either to build your in-app profile. You cannot “sign in” to Bondsy using any other social networking service, which is interesting for reasons I’ll get into later. Once you build a simple profile (name, photo, email, and location), you are ready to start bartering. The app lets you take a picture of what you’re willing to part with and list a series of things (including cash) that you’re willing to take in exchange. Instead of putting your extensive collection of licensed Star Trek Micro Machines starships up for sale on eBay, where some stranger might give them to an unappreciative brat who will never fully understand the utopian vision of Gene Roddenberry, you can put them on Bondsy and trade them with a friend that you know Gets It. In exchange you might get a houseplant, a nice cookie, or maybe a hug.
Bondsy is a really fascinating concept because it encourages people to think differently about the value of their material possessions and what it means to no longer fully possess them. Selling a couch on Craigslist means it’s gone from my life forever; bartering it on Bondsy means I’m probably going to see it again when I go to my friend’s house. The binary ownership model of “have” and “have not” dissolves into a spectrum of sharing, visiting, and lending. Material goods have ordered social relations for centuries, and the sort of asocial “it’s mine and no one else’s” mentality that characterizes modern consumption patterns is only a recent invention. Capitalism mystifies and separates the delicate and complex relationships between humans and material objects.
Karl Marx observed that we tend to think of exchange value as some sort of natural, objective phenomenon. Once a price is stamped on a commodity we tend to mistake that number for the whole and total value of that object. This commodity fetishism lets us exchange goods and services very easily, but it also obscures very important social relations like the value of an individual’s labor or the simple fact that different objects mean different things to different people. Marx said that stamping simple prices on commodities creates “material relations between persons and social relations between things.” I can easily compare the amount of labor (a social relationship) that went into any two objects by looking at their prices, but the only relationship I have to those laborers is the objects themselves. Bondsy doesn’t do much to connect me to the factory workers who build my things, but it does get me thinking about the value of things outside of their sticker prices. I have to decide whether or not my once-worn shoes are equal in value to a power drill. I might look up the going price of either object, but that’s only part of my calculus.
The decision to not tie Bondsy to existing social media platforms is incredibly important. According to VentureBeat, Bondsy inventor and CEO Diego Zambrano “didn’t trust the way other companies managed their privacy settings.” He expects his app will “directly change what people share” and “how open people are.” These quotes sound like those of just about any other Silicon Valley entrepreneur who wants to redefine human behavior and acceptable norms, but I suspect Zambrano’s case is much different. Last November, Nathan and Whitney wrote a post about what Silicon Valley means when it says “social.” Capital-S Social
…is what Silicon Valley Capitalists usually mean when they use the term “social”: interactions that are measurable, trackable, quantifiable, and above all exploitable. Whereas much (but not all) of social is nebulous and difficult to force into databases, Social can be captured and more effortlessly put to work. Thus, behavior is Social to the degree that it is easily databaseable.
Lower-case “social” is the much broader term with a dictionary definition. It refers to anything relating to society and its organization: ranking and status within groups, companionship, and engaging in activities with others. Nathan and Whitney warn:
It is a mistake, however, to think either that you can separate Social from social or that Social is interchangeable with social. Without social there simply is no Social. Said differently, all Social is social, but not all social is Social. [Easy!]
Bondsy is definitely the kind of Social that is also social. It has the potential to radically disrupt conventional notions of value and exchange. No single invention is going to unseat commodity fetishism, but something that encourages a widespread change in everyday practice could make it harder to believe. Bondsy affords a new way of looking at commodities’ value by reasserting the social nature of objects. It gets you thinking about how material objects enter into our relationships with others and what their value says about those relationships. How does my neighbor’s cooking compare to my old copy of American Gods? How many beers does it take to convince my cousin to clean my bathroom for me? As The Next Web’s Harrison Weber wrote, “Bondsy becomes an entertaining way to foster deeper connections with the people you know.”
As a Science and Technology Studies scholar, I am sensitive to what is not built into inventions. So when Zambrano says, “What is interesting about leaving Bondsy open and not very structured is that we allow our users to define how they want to use it,” I immediately start thinking about what kind of “open” Bondsy constructs. In other words, what kinds of assumptions about barter and trade are built into Bondsy? After looking at all of the press (e.g. The Verge, Fast Company, The Next Web) and trying out the app myself, I don’t see much effort put into letting me record or bank debt and credit. Bondsy seems to reify the mythical Econ 101 barter system that never existed. There’s an implicit assumption that trades are one-off transactions of equally valuable goods. What if I want to intentionally trade something of greater value so that my friend still owes me? The app might have a history feature, but it isn’t advertised. In some (maybe even most) cultures, such a feature would be so fundamental to barter that not touting it would be as inconceivable as not mentioning the fact that you can post pictures of your item.
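For what it’s worth, the bookkeeping such a feature implies is tiny. Here is a hypothetical sketch of a ledger that records lopsided trades as running balances between friends; the class, the idea of attaching an estimated value to each gift, and the example trades are all my own assumptions, not anything in Bondsy.

```python
# A hypothetical sketch of the debt-and-credit bookkeeping Bondsy appears
# to leave out: a ledger that records asymmetric trades so a lopsided gift
# leaves a running balance between friends. All names and values invented.
from collections import defaultdict

class BarterLedger:
    def __init__(self):
        # balance[(a, b)] > 0 means b owes a (a has extended credit to b)
        self.balance = defaultdict(float)

    def trade(self, giver, receiver, item, est_value):
        """Record a one-way gift; reciprocation reduces the balance."""
        self.balance[(giver, receiver)] += est_value
        self.balance[(receiver, giver)] -= est_value

    def owed(self, creditor, debtor):
        return max(self.balance[(creditor, debtor)], 0.0)

ledger = BarterLedger()
ledger.trade("me", "neighbor", "power drill", 60)
ledger.trade("neighbor", "me", "once-worn shoes", 40)
print(ledger.owed("me", "neighbor"))  # 20.0 -- the neighbor still owes me
```

In many gift economies this running balance, not the one-off swap, is the whole point of trading in the first place.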
It will be interesting to watch Bondsy add features and evolve as a platform. The various affordances and capabilities of everyday technologies are a nice way of looking at what are otherwise abstract and intangible aspects of society. An iPhone app that aids in barter will inevitably say something about who users trust, what they are willing to share and trade, and what they value as a community. Bondsy would certainly look different in a pre-modern gemeinschaft society where you are born with most of your social status and relationships already decided. Bondsy doesn’t assume that you’d rather trade with people near you, people with a different last name, or people that wear a special kind of hat. Again, these are all things that might be possible with the current incarnation of Bondsy, but they would be absolutely essential to a culture different from our own. The only thing that might say more about ourselves than the things we trade is how we trade them.
David is on Twitter @da_banks and Tumblr. He’s also waiting for someone to trade with him on Bondsy. Username: davidbanks
For Christmas in 2004 I received every episode of Star Trek’s original series on VHS. Each tape contained two episodes separated by the kind of cheesy music you might expect from a local news daytime talk show in 1992. I watched all 30 or so tapes, multiple times, sometimes with my high school English teacher during lunch after he had finished sneaking a cigarette in his beat-up Civic. I have fond memories of eating turkey sandwiches and laughing at William Shatner’s fighting style. But what was more important (to us anyway) than the unchoreographed fight sequences were the literary parables. I see no exaggeration or hyperbole when people describe Star Trek as a philosophy or a religion, but I see it much more as a political orientation. The crew might go where no one has gone before, but the show rarely strayed from the very basics of the human condition. Star Trek holds a mirror to the society that produced it, and J.J. Abrams’ Trek is most certainly a product of the Endless War on Terror.
First, let’s get something straight. Khan Noonien Singh is a genetically engineered human who reigned over almost half of planet Earth from 1992 to about 1996 before being overthrown by rebels. Khan, along with a number of loyal superhumans, leaves Earth in a sleeper ship named the SS Botany Bay and floats in space for over a hundred years before Kirk finds the ship and reawakens them. He is the product of many different ethnic groups but identifies as a Sikh. In non-canon novels written by Greg Cox, Khan’s reign operated through secret shadow governments throughout Asia and Eastern Europe in the 90s.
The Khan that we see in 2013 looks more like a conniving Bond villain than “the best of the tyrants.” Khan is a terrorist, not a deposed dictator. He is also white, which can be read either as Abrams’ total disregard for the multicultural message of Star Trek, or as hesitancy to cast a person of color as a terrorist in a movie that echoes American interventionism a little too well. Khan must be a terrorist and he must be white to the point of transparency, because to do otherwise in one of the longest-running parables of western civilization would be too problematically formulaic for Abrams or the American movie-going public to accept.
That’s because tyrants don’t scare us anymore. They’re always the mustachioed men with weird obsessions and dubious military support. We’ve toppled a dozen of them since the 2009 Abrams movie, and what scares us now are rogue agents with confusing loyalties. People who we know are armed and dangerous because we made them that way. Khan is blowing up Starfleet because Starfleet used and manipulated him to build a war machine capable of defending against people like Khan. Self-justifying, perpetual war machines are what we have come to expect from governments. Even if you are defending the war, you have to justify this “new kind of war” by describing and identifying an enemy that demands a war of ambiguous lines and endless horizons. Talk about policing, intelligence, boots on the ground, or peace-keeping missions, but don’t question the need for constant intervention. J.J. Abrams’ Star Trek might not be the Star Trek you want, but it is definitely the Star Trek America deserves.
To that point, Abrams reduces Kirk to a horny frat boy and Uhura to a doting girlfriend. It’s hard to tell if this is something we can blame on Abrams’ love of Star Wars or on the fact that America’s gender politics have gone into such a horrendous retrograde that we can’t expect much else from either character. Both have their moments (and they’re amazing when they happen), but ultimately we have to accept that the Abrams alternate universe is not nearly as aspirational as the Roddenberry universe, and perhaps that’s just what we need right now.
I want to back up for a moment and delineate the path that brought us here. The original series always had a Janus face for a political message: mutually assured destruction is no way to win peace, but you will always have to be ready to defend yourself against avowed enemies just on the other side of The Neutral Zone. Kirk and his crew rarely live their politics: when faced with warring civilizations, the crew invariably destroys the very tools that make war clean, easy, and desirable. A clear message to superpowers that are more than happy to fund proxy wars but never fight on their own soil. It is a hypocritical edict handed down from a unified humanity that will let a proud Russian steer the ship but allow a deep hatred for Klingons (who sport Fu Manchus and fight for the glory of the empire) to run rampant in its ranks. It’s no surprise that as the Cold War fades into détente, the Federation signs a peace treaty with a Klingon Empire crippled by the self-inflicted wounds of over-production and infighting.
The Next Generation is equally fraught with paradoxes borne out of the same unwillingness to live one’s stated politics. At the height of Reagan’s America, a French archaeologist presides over a UN In Space that has welcomed a Klingon (naturalized into the Federation by human parents) to the bridge, but he despises the children that he simultaneously hates and desires for himself. It is a series that, appropriately enough, opens with nothing less than a trial of all humanity heard by an omnipotent trickster god. It is a Star Trek that fully realizes Fukuyama’s decree that we are living at the end of history, and all that is left to do is reconcile past differences and perfect an already spectacularly efficient system of exchanging goods and services.
The cosmopolitan multiculturalism of Deep Space Nine and the late second wave feminism of Voyager are one 14-season-long transgression of the never-ending present that The Next Generation sets up. Q, the omnipresent trickster god who saw fit to put all of humanity on trial, is now physically assaulted by Benjamin Sisko and romantically rejected by Kathryn Janeway. Janeway goes one step further and, in a deeply underappreciated series, stands in literal judgment of the Q Continuum itself for its desire to keep one of its own from committing suicide. In a trial of her own, reminiscent of the time Data defends his sentience and Spock is tried for treason, Janeway actually rules in favor of individual autonomy over the Foucauldian power of the state to regulate life and death:
But then there are the rights of the individual in this matter. I find it impossible to support immortality forced on an individual by the state. The unforeseen disruption that may occur in the continuum is not enough, in my opinion, to justify any additional suffering by this individual.
Deep Space Nine shows us the morally dubious and difficult decisions that prop up great societies. It reveals the Federation as a dubious unity: a patchwork of good-enough decisions and lukewarm compromises that have more to do with who is in the room than universal morality. It requires unilateral decisions by imperfect people. It means ignoring the plights of others to prevent all-out war. DS9 complicates the Star Trek universe to the point of breaking, but it does show us the tattered edges and loose ends of what we thought was an expertly woven tapestry.
Star Trek as we had known it dies with Data, someone whose sole purpose in life was to understand everything Star Trek was about, in the final Star Trek movie with the Next Generation crew. This universe must die because we are no longer living in a world after history. America’s War on Terror is incompatible with the Gene Roddenberry vision of a socialist utopia that provides for every want and desire. Weekly self-actualization is no longer a realistic goal. The franchise struggles briefly through four seasons of Enterprise (which debuted just 15 days after 9/11) to translate the utopia of the 23rd and 24th centuries into a story of plucky, modest, and messy 22nd century progress. It fails because the opening credits have none of the grandeur and stateliness of the other series, nor do they evoke anything new that we can believe in. We do not even believe we’re on our way to utopia.
J.J. Abrams, a man who has openly stated that he does not want to write or produce a philosophical Star Trek, produced the perfect meditation on the 21st century political condition. The generals of the Cold War have been massacred by a terrorist of their own creation, and must be saved by the young mind that has known nothing but this alternate universe of endless war. The Klingons are less like the Soviet Union and more like Pakistan: admirals are content to sit on the border and shoot missiles at individual targets based on bad intelligence.
Kirk always served as the balance between Spock’s logic and Bones’ passion. He took the best of each and applied them to the problem at hand. Kirk mediated this Enlightenment-era dualism of emotion and reason through the values of the era. Kirk’s job as captain was to apply reason and passion for the sake of the good and just. Whereas in Roddenberry’s Great Society you defeated your enemy by temporarily sacrificing logic, letting it live inside passion (something cyberneticists and cognitive scientists would appreciate), the War on Terror asks that we sacrifice that part of ourselves that balances passion and logic through values (patriarchy and all) and let both weep at its loss. We only resurrect ourselves by sticking to a moral code that rejects revenge killings and seeks justice: letting your enemy stand trial restores you. It means letting your passion infuse you with the blood of the enemy.
The Star Trek universe is a foil for our own. When we watch Star Trek we see the hopes, dreams, fears, and optimism of a generation reflected back at us, draped in overwrought Shakespearean acting and goofy uniforms. Star Trek, like most good science fiction, lets us step out of ourselves and talk about humanity, the state, and individual freedoms without the trappings of real-world political parties and geopolitics. I can have a deep conversation about the state’s role in governing bodies by talking about Voyager, not Foucault. That’s the power of good storytelling.
I know that Slate’s Matt Yglesias has recently written about the Star Trek franchise, and while he hits a few good notes, he generally misses the mark. (Proton torpedoes? What are you, new?) The fact that he calls the DS9 metafiction episode “Far Beyond the Stars” “bizarre” says everything you need to know about Yglesias’ shallow, elitist, milquetoast read of the Star Trek universe. Ronald D. Moore, the writer of Battlestar Galactica and several episodes of Deep Space Nine (though not this one), said it was “one of the best episodes in the entire franchise.” I am not surprised that Yglesias doesn’t see any “particular connection to Trek’s distinctive themes.” It’s easy to see the Cold War allegory when it is long gone, but it takes an iota of thoughtful consideration to see your own world reflected back at you.
I don’t recommend doing it, but if you search for “Charles Ramsey” on Reddit, something predictably disturbing happens. First, you’ll notice that most results come from /r/funny, the subreddit devoted to memes, puns, photobombs, and a whole bunch of sexist shit. Charles Ramsey, in case you don’t know, is the Good Samaritan who responded to calls for help from Amanda Berry, a woman who had been held captive for 10 years in a Cleveland basement along with Gina DeJesus and Michelle Knight. The jokes on Reddit are largely at the expense of Ramsey, poking fun at his reaction to a police siren or his reference to eating ribs and McDonald’s. As Aisha Harris (@craftingmystyle) said on Slate: “It’s difficult to watch these videos and not sense that their popularity has something to do with a persistent, if unconscious, desire to see black people perform.”
Harris situates Ramsey as the latest instance of an obnoxiously persistent tradition: people of color being interviewed by local news reporters and subsequently lampooned and remixed on the Internet. Ramsey is a little different in that Antoine Dodson (of “hide your kids, hide your wife” fame) and Michelle Clarke (“Kabooyow!”) may never have reached national fame without the attention of meme makers and Auto-Tuners, whereas Ramsey did something that made front-page news across the country, which means there’s a lot more “source material” to go on. But his interviews with Anderson Cooper and George Stephanopoulos haven’t been nearly as picked apart and appropriated as that first interview. Perhaps it’s because in later interviews there’s less ducking from cop car sirens and more references to helping the poor.
The Anderson Cooper interview is interesting because, in an otherwise continuously shot interview, there are just three cuts. Two of the three happen just as Ramsey starts talking about the poor and the lack of services in his neighborhood, but perhaps this is a different problem entirely. One certainly does get the sense, however, that Redditors aren’t the only ones focusing on the wrong aspects of what Ramsey has to say.
What if we looked at the racist and classist jokes about Ramsey as a design problem? Racism, and the ever-present, pervasive microaggressions that reproduce and sustain it, is not going to disappear with the advent of a new tagging system, but there might be ways of tilting the scales a little bit so that people think critically about what they’re submitting. Technology has a certain capacity to frame social interaction, and that framing can have a specific political or social orientation. The hard part is getting the technology to reflect anything other than dominant narratives.
Most technologies appear to us as “neutral” because they conform to and reflect many of our dominant beliefs and organizational logics: very few people, for example, are working on how to administer domain names and IP addresses outside of the corporate-dominated Internet Corporation for Assigned Names and Numbers (ICANN). Designing a news aggregator or link-sharing community that discourages racism (or rape culture, or any number of undesirable social phenomena) is hard to think about because we so rarely think or talk about racism as a systematic phenomenon. This is reflected in the “report” option on Reddit (or YouTube, or almost any other social network), which assumes that offense is a matter of individuals posting specific pieces of content; there is no way to quickly amend or alter the parts of the sociotechnical arrangement that reward, or simply ignore, the broader underlying factors that encourage base behavior.
Providing feedback isn’t revolutionary, but it might be undersold. Reddit’s source code is open and available on GitHub, but I can guarantee you that more people have downvoted or reported a link as inappropriate than have chosen to contribute anti-racist code (whatever that might look like) to Reddit’s developers. A large part of this might be that coding isn’t as widely taught as it should be, but it is also far more time-consuming than reporting offensive content. There are two ways around this: 1) build tools that let non-coders alter site functionality and then provide decision-making tools that coordinate the implementation of the new functionality, or 2) build alliances and networks of users (and potential users) and act in unison toward a desired goal.
Option 1 brings us close to what anthropologist Chris Kelty calls a recursive public. A recursive public is,
a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives.
These actually existing alternatives not only do useful work, they also critique dominant power narratives. Again, I do not know what an anti-racist Reddit would look like, but I want those with the ideas to have a chance at implementing them.
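To make option 1 a little more concrete: imagine site behavior driven by plain-language rules that non-coders could draft and a community could ratify, with code doing nothing more than translating the ratified rules into effects. Below is a minimal sketch in Python; the rule names, tags, thresholds, and effects are all hypothetical illustrations, and nothing like this exists in Reddit’s actual codebase.

```python
# A sketch of option 1: site functionality driven by plain-language rules
# that non-coders can draft and ratify. Everything here is hypothetical;
# Reddit's real codebase has no such rule engine.

community_rules = [
    # Each rule pairs a human-readable condition with an effect on a post.
    {"if_tagged": "racist-joke", "reports_over": 10, "then": "hide_pending_review"},
    {"if_tagged": "slur", "reports_over": 3, "then": "remove"},
]

def apply_rules(post, rules):
    """Return the effect of the first rule the post triggers, else None."""
    for rule in rules:
        if rule["if_tagged"] in post["tags"] and post["reports"] > rule["reports_over"]:
            return rule["then"]
    return None

post = {"tags": ["racist-joke", "meme"], "reports": 14}
print(apply_rules(post, community_rules))  # -> hide_pending_review
```

A recursive public, in Kelty’s sense, would be a community that argues over, maintains, and modifies exactly this kind of rule set as a condition of its own existence.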
Option 2 can already be seen to some extent in the formation of /r/shitredditsays (SRS). SRS contributors quote and gather the terrible things fellow redditors say into a single subreddit. SRS gives users a means to catalogue and comment on examples of unacceptable behavior on a site that, above all, values what white dudes think free speech means. While the subreddit expressly forbids users from forming “downvote brigades” and going to the original problematic posts to publicly shame users, it does happen, and the existence of SRS is felt well beyond the confines of the domain name. It is also worth noting that while these racist images usually rank the highest on Reddit, there are also links to this thoughtful NPR article that asks the important question: “Are We Laughing With Charles Ramsey?” The most upvoted comments are decent, but they are emphatic that everyone is laughing “with,” not “at.” I would respect the commenters more if they at least acknowledged the possibility of “at,” or considered that the laughter isn’t perceived as “with” for any number of good reasons.
Perhaps the design solutions necessary to sufficiently discourage racism on Reddit would make it unrecognizable. A web platform that relies so heavily on quantifiable upvotes, comments, and karma might very well encourage undesirable behavior. Things that are shocking or provocative garner a lot of attention, which almost always translates into karmic rewards. It might be worth comparing the quantification-heavy design of Reddit with the virtually number-less Tumblr interface. Tumblr always asks that you either put your identity on the object (leaving a note) or put the object on your Tumblr identity (reblogging). In either case, activity on Tumblr does not lend itself to the cumulative nature of microaggressions or the base desire for quantifiable attention. While I don’t have hard numbers to back this up, Reddit seems to get in the news for bigotry, hate, and violence a lot more than Tumblr. The relationship between quantification and problematic behavior might be a total coincidence, but I doubt it.
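The quantification is not hidden, either. Reddit’s open repository has long included its ranking functions; the sketch below is a simplified Python rendering of the widely discussed “hot” formula as published around this time (the production code may differ). Note what it rewards: a fast burst of net upvotes right after submission beats ten times as many votes accumulated slowly.

```python
from datetime import datetime, timezone
from math import log10

REDDIT_EPOCH = 1134028003  # baseline Unix timestamp used in the published formula

def hot(ups, downs, posted_at):
    """Simplified rendering of the published 'hot' ranking formula."""
    score = ups - downs
    order = log10(max(abs(score), 1))           # 10x the net votes adds one point
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    age = posted_at.timestamp() - REDDIT_EPOCH  # seconds since Reddit's epoch
    return round(sign * order + age / 45000, 7) # ~12.5 hours of age equals one point

# Two posts submitted at the same moment: the more shocking one that racks
# up votes faster simply outranks the slower, more thoughtful one.
now = datetime(2013, 5, 10, tzinfo=timezone.utc)
print(hot(220, 20, now) > hot(160, 10, now))    # True
```

Tumblr’s interface, by contrast, offers no analogous public score to optimize against.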
At the heart of the augmented reality thesis (and the digital dualist critique) is the acknowledgement that no technology is an oasis from the social, cultural, and political forces that surround it. The Internet is not an insignificant cultural artifact (like so much fungus on a log) nor is it an undefined Wild West. And, as the comparison between Reddit and Tumblr suggests, one network might encourage behavior that another discourages, making it extremely difficult to say whether “The Internet” as a whole encourages us to do anything. The racist jokes made at Ramsey’s expense are encouraged through the implicit promise of a receptive (read: racist) audience, and the history of pre-existing memed interviews that went viral. Perhaps the best way to end this trend is the tried and true method of calling it out for what it is: racist.
It is pretty easy to mistake most technologies as politically neutral. For example, there is nothing inherently radical or conservative about a hammer. Washing machines don’t necessarily impose capitalism on whoever uses one, and televisions have nothing to do with communism. You might hear about communism through television, and there is certainly no shortage of politically motivated programming out there, but you’d be hard-pressed to find someone who says the technology itself has a certain kind of politics. This sort of thinking (combined with other everyday non-actions) is what philosopher of technology Langdon Winner (@langdonw) calls technological somnambulism: the tendency of most people to “willingly sleepwalk through the process of reconstituting the conditions of human existence.” It is difficult to see the politics in technology because those politics are so pervasive. The fact that technological artifacts have politics is kind of like “Call Me Maybe”: once you’re exposed, it is hard to get it out of your head.
Technologies may appear to us as neutral or unbiased, but those are constructed categories. In other words, neutral and unbiased are qualities of things in relation to the environment in which you find them. Being surrounded by water is totally normal for a fish, but humans might feel otherwise. Television is politically neutral only to the extent that we cannot imagine life outside of certain political arrangements. If we didn’t have large state and corporate bureaucracies filled with experts who have impassioned and nuanced opinions about how the electromagnetic spectrum should be allocated, we could not have broadcast television in its current form. There was a decision, at some point in history, to allocate spectrum via human decision-making and not with an algorithm. This is a political decision. It is political because it requires large governing entities and the concomitant legitimacy and promise of sanction necessary to enforce their decisions. Television for an anarchic society has yet to be invented.
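To see how small the technical half of that counterfactual is, here is a toy sealed-bid allocator: give each band to its highest bidder, done. (The FCC did eventually move to spectrum auctions in the 1990s, though vastly more elaborate ones; the bands, stations, and prices below are invented for illustration.) Notice what the code cannot supply: an authority with the legitimacy to make the winners stick, which is exactly the political remainder.

```python
# Toy spectrum allocation by sealed-bid auction. All bands, bidders,
# and prices are hypothetical; real auctions are far more complex.

bids = {
    "54-60 MHz": [("Station A", 1_200_000), ("Station B", 950_000)],
    "60-66 MHz": [("Station C", 400_000)],
    "66-72 MHz": [],  # no bidders: the band stays unassigned
}

def allocate(bids):
    """Assign each band to its highest bidder, or None if nobody bid."""
    return {
        band: max(offers, key=lambda offer: offer[1])[0] if offers else None
        for band, offers in bids.items()
    }

print(allocate(bids))
# {'54-60 MHz': 'Station A', '60-66 MHz': 'Station C', '66-72 MHz': None}
```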
What has been invented is a decentralized network of peer-to-peer machines that share data and information over a variety of hardware and software, governed by an equally diverse amalgam of intellectual property rights and service contracts. It is called the Internet, and it is fraught with politics. The Cold War is usually associated with big, hulking organizations that rely on strategic planning and mathematical theory: historical accounts are replete with continental superpowers strutting along each other’s borders with military technologies that are, themselves, highly centralized and ordered entities. But both sides also tried to out-maneuver the other by decentralizing resources and populations. In America, that meant spending lots of defense money on building the first peer-to-peer computer networks and the nation’s first interstate highways. Decentralization and redundancy are the best defense against centralized power. Again, the decision to decentralize cities and computer systems was a political (not to mention military) decision.
Perhaps the Cold War logic that birthed the Internet has such a tenuous bearing on how we currently use it that it barely warrants mentioning. The intentions of the early Internet’s designers probably do not factor into my choice of Tumblr theme, or the Instagram filter I put on a photo of my houseplants. But intentions aren’t even half the story. Technologies live and act beyond their creators’ intentions and quite often produce unintended consequences. Think about all of the decentralized, rhizomatic organizations and social movements that have been earmarked or popularly associated with the digital technologies they used so well: the Arab Spring, Occupy Wall Street, Anonymous, and the BART protests have all out-maneuvered (at least for a time) the state and corporate bureaucracies that sought to shut them down. The Internet doesn’t unilaterally impose or determine certain political organizations, but it does assist and afford their continued existence.
To sum up thus far: technological systems and artifacts have politics, and communications technologies are particularly interesting in this regard. They can be designed to decentralize organizations and resources, or they can require the continued existence of large bureaucracies. Communications technologies are politics frozen in silicon, not only because these systems mediate our relationships at multiple scales, but also because looking at what is not there says a lot about who is and is not allowed to politically organize. Again, this isn’t about the intentions of engineers or designers per se; rather, the technologies are imperfect and incomplete physical manifestations of the current political order.
For example, consider the last time you used a public library computer terminal. It was probably uncomfortable and generally unpleasant. The greasy keyboard might have grossed you out, and there might have been a creepy dude trying to look at your screen. There are many ways computers could be designed to better serve the unique needs of libraries and their patrons, but libraries mainly buy the same computers that are meant for homes and offices. Now consider that, according to a study released in 2010, households below the poverty line reported using library computers more than any other socioeconomic demographic. Anti-bacterial keyboards, monitors with modifiable viewing angles, and countless other non-existent inventions stand in silent testament to the more egalitarian augmented society that never was. The collective resources of government and industry have largely ignored the innovations that might make for better public computing.
Did Twitter cause the Arab Spring? A million times no. Were the two similarly structured and, perhaps, mutually shaping one another throughout those historic few months? Yes, probably. The politics of technology are difficult to see because technologies that “work” are very compatible with the dominant political order, or with a community large enough to provide and sustain the practical necessities for their continued existence. Technologies’ perceived “neutrality” is the upshot of this compatibility. The inherent politics of a technology are revealed either by cataloging how it “fits” within the given order, or by engaging in a thought experiment wherein one seeks out the technologies that never were. These exercises are especially important as our technology becomes more semantic and adaptive to the world around it. Little loops of recursion surround us all the time, and I want to know what exactly they’re compounding and reifying. I worry that Winner’s observations, written in the mid-80s, are more pressing today than when he wrote them: “The excruciated subtleties of measurement and modeling mask embarrassing shortcomings in human judgment. We have become careful with numbers, callous with everything else. Our methodological rigor is becoming spiritual rigor mortis.”
I know this is predictable. And this pun is lazy. But here’s my twitter. So tweet me Maybe? @da_banks
The very fact that your eyes rolled (just a little bit) at the title tells you that it is absolutely true. So true it’s obnoxious to proclaim it. Perhaps cable news died when CNN made a hologram of Jessica Yellin and beamed her into the “Situation Room” just to talk horse-race bullshit during the 2008 election. Or maybe it was as far back as 2004, when Jon Stewart went on Crossfire and shattered the fourth wall by excoriating the dual hosts for destroying public discourse. The beginning of the end might be hard to pinpoint, but the end is certainly coming. Fox News had its lowest ratings since 2001 this year, but still has more viewers than CNN & MSNBCNEWSWHATEVERITSCALLEDNOW combined. Even if ratings weren’t a problem, credibility certainly is. Imagine if CNN stopped calling themselves the “Most Trusted Name In News” and used the more accurate “A Little Over Half of Our Viewers Think We’re Believable.” By now it is clear that the zombified talking heads of cable news are either bought and sold, or just irrelevant. Cable news channels’ hulking, telepresent bodies have been run through and left to rot on the cynical barbs of political bloggers and just about anyone at a comedy shop’s open-mic night. This last series of screw-ups in Boston (here, here, here, and, unless it was avant-garde electronic literature, here) raises the question of whether cable news channels can even tell us what’s going on anymore. Cable news is dead, but something keeps animating the corpse.
Perhaps it’s more accurate to say that cable news has a terminal illness. The state of cable news is still pretty good, but there are few signs of stability or long-term health. Below are a few pertinent figures presented by Lee Rainie (@Lrainie), the Director of Pew Research Center’s Internet & American Life Project, at the International Journalism Festival last year:
The percentage of people reporting that they get their news from television has decreased from 68% in 1991 to 58% in 2010.
Since 1998, the percentage of people that say they don’t follow the news at all went up from 14% to 17%.
Over the last decade, more people say they check the news “from time to time” rather than at regular times.
63% of Americans think the news is politically biased, up from 45% in 1985. Republicans and independents largely drive this figure.
CNN might not be shuttering its touchscreens anytime soon, but the trend is clear: television news is still dominant, but not for long. The demand for cable news (or any news, for that matter) is shrinking, and yet the supply seems to grow. Al Jazeera is coming to the United States, and there are dozens of online news outfits that offer hours of original reporting distributed as audio and video content. Some are great; others make you question the wisdom of the First Amendment. All are looking to specialize, carve out niche viewer markets, and remain solvent in an industry that is constantly being disrupted by new technologies and socio-political trends. One could argue, as Globalvision’s Rory O’Connor does, that the new players signal a lack of quality product, not a lack of demand:
…this is an excellent time for any entity interested in making an impact in the increasingly dismal US cable news environment. Fox News, which has long been the industry leader in ratings, has lost huge audience share since the re-election of its bete noir Barack Obama. Fox stars like Sean Hannity are reportedly “haemorrhaging viewers” – Hannity has lost nearly half of his audience since the election, with the biggest drop in ratings coming in the coveted 24 to 54-year-old demographic, which had been one of his strongest groups of viewers.
We won’t know until Al Jazeera America is up and running whether O’Connor is right and declining viewership has more to do with terrible reporting and production than with an overall declining interest in this thing invented in the 80s called cable news. His thesis seems unlikely, though, given Rainie’s presentation data suggesting that changing technology has changed how we consume and think about news. News isn’t something we passively receive; it is part of a conversation. Even if I see a story on television, I might want to express my feelings about it on Facebook. I might also want to sign a petition, change my profile picture in solidarity, and engage in vigilante justice. News segments and interviews are fodder for political arguments with my cousin and talking points for when I’m feeling preachy. The digital artifacts that contain Breaking News™ are rallying points for debate and action.
Spivak once observed that theory is like the currency of ideas. Just as dollars and yen make goods easily comparable and exchangeable, theory makes it easier to circulate ideas. News stories work in a very similar way. Monologues and dialogues are edited down to provide a single, highly refined narrative. It does not matter whether that narrative strives to invoke partisanship, entertain, distract, or appear as objective fact; they are all compressed into the two sociotechnical codes that permit the widest consumption: General American English and H.264. News propagates faster and wider if it is easily compatible with the places we meet and talk with one another. TVs are still in family homes, bars, and offices, but they’re competing with the smartphone in your pocket and the computer that you’re supposed to be working on. That news segment will only go so far on television. But dice up that Maddow interview into 8-minute segments, slap a “share this” plugin on it, and it’ll be the new reason why I won’t speak to my friends from high school.
These are very dangerous waters. Spivak’s currency metaphor lets us commit an egregious amount of moral relativism. Agnostic to content, we can treat all stories told in the news genre as the same kind of artifact. Glenn Beck and Democracy Now! become about as different as Froot Loops and Kashi cereal: there might be some nutritional differences, but they’re both highly processed, pre-packaged goods meant for mass consumption. If news is only as good as its ability to traverse our augmented society, then a story about a water-skiing squirrel bests an exposé on Chinese factories so long as the former is on YouTube and the latter remains on paper. This doesn’t seem totally right. The death of newspapers and the terminal illness contracted by cable news is not a case of technological disruption: it’s a crisis of legitimacy.
Remember, we’re talking about news here, not journalism. The news contains journalism, but it also has corporate press releases, weather forecasts, recipes from Emeril Lagasse, and Amanda Bynes’ latest tweets. Journalism is a profession. News is a sociotechnical phenomenon. There is some excellent journalism being done around the world, but very little of it has anything to do with cable news. O’Connor might be right, and Al Jazeera may succeed based on quality content, but I remain unconvinced that critical thought can be conveyed through the current sociotechnical apparatus.
If you're just waking up, CNN has by far the best coverage.
Last week, Whitney wrote about the documentation of tragedy using Vine. She noted that, through its technological affordances, the service “reduces the tragedy of a violent act down to a bright orange flash.” The widespread use of Vine as a documentary device occludes complexity and nuance. Cable news does something similar. It tells one kind of story, and it’s the kind of story that fewer people are willing to listen to. Cable news stories are positively hackneyed compared to our fictional ones: the news still has obvious protagonists and predictable villains. (Of course, the same people play opposite roles depending on the channel.) The characters are woefully stereotypical and the production value just isn’t there. How can this be the case? How can these global news organizations that have amassed so much capital do so poorly?
Despite (or because of) all the fancy-looking sets and far-flung bureaus, even national news organizations run on a shoestring budget. Networks will gladly run a story about your new drug breakthrough if you have already written the copy and provided high-quality video of interviews with the scientist. Pharmaceutical companies do just that, as do governments, universities, and NGOs. The Yes Men, a group of activists that exposes corruption by manipulating this corporation-to-corporation communication scheme, film and produce their own actions. You’re more likely to get airtime if you hand over a finished product. (Note, in the CNN segment below, the black boxes in the top right that show the various sources that provided the content.)
This past week of horrendous events drove a massive boost in ratings for all three cable news outlets, but all they could deliver were confused accounts and vacuous speculation. At the moment when everyone was watching, all they could muster were wild accusations about Saudi nationals and bloviating, uncritical praise for police authority. Watching the news, something that should be as serious and utilitarian as it gets, is now a guilty pleasure. Meet the Press is just like Storage Wars: “I know it’s all staged, but it’s fun to watch.”
The thoughtless confession that news is sensationalized and uncritical will beget more sensationalism and uncritical analysis. There will always be a wide selection of talking heads ready to offer speculation, fear, and misinformation. The good news in all of this is that smart people are excited and increasingly desperate to share their ideas too. As Cox puts it:
Going on TV makes my print employers happy. It makes me happy, too: it’s a relatively cost-free way of saying to a larger public what I write for my readers. To be completely honest, it’s fun. And the vast majority of the talking heads you see on cable news are doing it for the same reasons I am: it makes their employers happy, and it’s mildly exhilarating, like the rollercoasters you don’t have to be tall to ride. They are not doing it for the money, believe it or not. Only a fraction of the pundits appearing on any given show are paid “analysts.” Most of them are remunerated in more existential terms.
I am pinning my hopes on those existential remunerations. I know that there are lots of people with a lot of smart and interesting things to say who want to get them out to a general public. Some might be practiced in the Art of the Talking Head like Cox, but others want a different medium. That is why, according to media scholar Robert McChesney, “journalists rank among the leading proponents of media reform. They know firsthand how the media system is overwhelming their best intentions and their professional autonomy, and unless the system changes there is little hope for viable journalism.” The econometric language I have (intentionally) used throughout this essay will have to be abandoned if decent journalism is to prevail and critical thought is to be brought to the issues of the day. Ted Turner’s 1980s behemoth has to go, and in its place should stand something leaner and smarter.
McChesney says the solution is more autonomy for individual journalists. But he falls prey to naïve objectivity:
The solution to this problem—then as now—was professional autonomy for journalists. Trained, professional reporters and editors who were politically neutral would cover the news in an objective manner. The political views of the owners and advertisers would be irrelevant except on the editorial page. This was the revolutionary idea of separating editorial from business, like the separation of church and state.
Compare that with Penny, who rejects naïve objectivity from the other direction:
The idea of the standoffish white Western bloke in a tie as the universal journalistic eyepiece was able to develop because we have spent centuries seeing the world purely through the eyes of white men. Right now, that’s changing. Journalism is changing, and the internet is driving an explosion of media production from people all over the world who understand that subjectivity doesn’t have to mean inaccuracy, especially when you’re telling stories.
If we could bridge McChesney and Penny at all, it would look something like strong objectivity. Developed by Sandra Harding, the theory of strong objectivity states that detailing the same events from multiple subjective points of view creates a stronger form of objectivity; multiple subject positions are much more useful than a single person feigning value neutrality. Imagine if, instead of a single Associated Press and its various cable flavors, there were a federated network of independent journalists who exchanged and contributed to the coverage of individual events. Stories would cluster around events, and those events would be regularly summarized and distilled in multipartisan meta-reports that are updated as events unfold. Why do stories need to be “anchored” by a person behind a desk anymore? Instead of meaningless banter, spend the time drilling down into specific angles that might be too narrow for a general audience but very valuable to a sizable plurality. Cable news was a fad that started in the 80s and, like Members Only jackets, enjoyed quiet acceptance in the 90s and a resurgence in the early 2000s. Today, the audience for cable news is literally dying, and with it goes this particularly strange technologically mediated relationship with current events. There might always be a channel or two that calls itself news, but watching it will be (again, like Members Only jackets) a consciously and obviously guilty pleasure.
At the beginning of the year, rumors were going around that the popular but relatively small citation-software company Mendeley Ltd. was going to be purchased by the publishing giant Elsevier. TechCrunch ran a story, and there were a few others, but not much else came out of it. When I heard these “advanced talks” were taking place, I wrote an essay in which I said:
“When our accounts of reality are owned by profit-seeking organizations and those organizations control the very tools that help us exchange those accounts, we are in danger of losing something fundamental to the institution of science. Ideas should not end up behind prohibitively expensive pay walls, especially when so little of that money goes towards new scientific discovery.”
Today, Mendeley announced on their blog that their purchase by Elsevier was official. They also reassured existing users, “Mendeley is only going to get better for you.”
I’m very skeptical. Back in January, I raised the question, “what is Elsevier going to do with Mendeley that warrants uninstalling it from your computer?” and hinted that the kind of criminal charges faced by the late Aaron Swartz could become commonplace, if not easier to prove and litigate. I also noted that Elsevier has been so malicious and aggressive in its drive to control and subsequently monetize knowledge that it has inspired over thirteen thousand academics to sign a pledge saying they will not support Elsevier’s journals. They have supported SOPA and PIPA, and used to support the Research Works Act as well. Oh, and they support CISPA too. None of that has changed, and there’s still plenty to be done if Elsevier wants to earn back the respect its new property once had.
A good deal of the announcement is actually devoted to calming fears about Elsevier:
“Your data will still be owned by you, we will continue to support standard and open data formats for import and export to ensure that data portability, and – as explained recently – we will invest heavily in our Open API, which will further evolve as a treasure trove of openly licensed research data.
“We were being challenged by some parts of the organization over whether we intended to undermine journal publishers (which was never the case), while other parts of the organization were building successful working relationships with us and even helped to promote Mendeley.
“Elsevier is a large, complex organization – to say the least! While not all of its moves or business models have been universally embraced, it is also a hugely relevant, dynamic force in global publishing and research. More importantly, we have found that the individual team members – the employees, editors, innovators, and tool developers we’ve worked with – all share our genuine desire to advance science. This is why we’re thrilled to join Elsevier and help shape its future.”
Dr. William Gunn, head of Academic Outreach for Mendeley and an accomplished biologist (according to his Mendeley profile!), was good enough to comment on my original post from January and ask, “So now that the deal is official, and both sides have said that the data will remain open and no one will get sued on account of their reading history, maybe we can revisit some of this?” He then links to Mendeley’s Q&A about the acquisition. In it, there are more reassurances that Mendeley will continue its Open API and that “there will be more and better data to work with, under the Creative Commons CC-BY license, as before.”
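An open, CC-BY catalog API is, to be fair, a real commitment: it means any third party can pull the data and build on it without asking permission. A minimal sketch of what that enables is below; the base URL, endpoint, parameters, and field names are hypothetical stand-ins, not Mendeley’s actual documented interface.

```python
# Sketch of a third-party client against an open catalog API.
# Endpoint, parameters, and JSON fields are hypothetical examples,
# not Mendeley's documented interface.
import requests

API_BASE = "https://api.example-reference-manager.com"  # hypothetical base URL

def readership(query, token):
    """Fetch openly licensed catalog records and their reader counts."""
    resp = requests.get(
        f"{API_BASE}/catalog/search",
        params={"q": query, "limit": 20},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [(doc["title"], doc["reader_count"]) for doc in resp.json()]

# for title, readers in readership("open access publishing", token="..."):
#     print(f"{readers:6d}  {title}")
```

Of course, the worry here is not whether such a client can be written today, but whether the promise that it always can be survives contact with Elsevier’s business model.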
I have to say that I’m impressed by Elsevier’s sharp turn in this direction, and one could read it as the kind of radical change in business practices that those pledge signatories demanded. I still, however, don’t see an explicit promise that Elsevier will not sue based on your reading history or on what you have saved on their servers. That’s a big deal, and something I don’t want to have to infer from a promise of “private and secure access to your data.” If that were the only thing keeping me from using Mendeley, I would ask for something on the level of a public, written pledge from an Elsevier executive. Given that Elsevier is a “complex organization,” I doubt they can make such a pledge.
Along with the claims of continued open standards, the second-most-repeated guarantee is that prices will not be raised. In some respects they have gone down, since Mendeley is doubling storage capacities. This is a nice gesture, but I have to admit it feels like the asshole rich guy just showed up to the party and bought the bar a round of drinks so that everyone will be nice to him. I’ll say thank you, but that doesn’t change who you are: a tremendous asshole.
To be clear, I’m not calling any particular person an asshole. I’m calling the company an asshole (corporate personhood has its downsides) because it hasn’t even come close to making up for the practices it has grown notorious for. I’m not a business analyst and I don’t claim to be, but I get the sense that the purchase of Mendeley is a long-term strategy meant to reposition Elsevier in a rapidly changing industry. TechCrunch (a business publication) seems to agree:
“In theory you could even now produce a ‘Klout for academics’ based on Mendeley’s data.”
Like I said. Asshole.
Ranking impact factors and selling access to important metric data looks like the future for academic publishers. Paywalls, while still holding strong, are beginning to show foundation problems: the kinds of horizontal micro-fractures that don’t necessarily signal a near collapse, but can’t be ignored either. A company like Elsevier loses very little by building systems on open standards, because the secret sauce is really in the databases that provide the sellable analytics. From the same TechCrunch article:
“Mendeley’s tools now touch about 1.9 million researchers, pooling 65 million documents and claims to cover 97.2% to 99.5% of all research articles published. By contrast commercial databases by Thomson Reuters and Elsevier contain 49 million and 47 million unique documents, respectively.”
This is the stuff Elsevier wants and cannot duplicate. It can only acquire it through the purchase of a small company that has a very active and very social user base. Mendeley is the kind of software that sits nicely next to your Evernote account and your Dropbox. You might use all three, along with your GitHub and Google accounts, to collaborate in Hojoki. The Mendeley acquisition is Elsevier’s first big step away from dead-tree publishing and into the world of end-user web services. It’s a brilliant future strategy, but it doesn’t erase the past.
What makes it possible for me to shun Elsevier’s Mendeley and still use almost all of those other services on a daily basis? Why in the world would I be totally fine with Google or Evernote buying Mendeley? Because neither of those companies has made its millions by hiding my work or by gouging my university’s library. I am not about to applaud Elsevier’s decision to keep an open API when it still charges libraries hundreds of thousands of dollars just for the sake of artificial scarcity.
If Mendeley wants to hitch its wagon to the Monsanto of academic publishing, be my guest. The service will probably be amazing. But remember that the money they gave you, all the new resources you now have at your disposal, was purchased with tuition money and charitable donations that should have gone to higher education. Instead, it went to Elsevier (and Thomson Reuters, and Springer, and…) so that they could find new and inventive ways of hiding research and charging exorbitant prices for it. All the openwashing in the world doesn’t distract from the gigantic piles of ill-gotten money that, I suspect, the creators of Mendeley once resented.
David A Banks is trying to make #uninstallmendeley happen on twitter: @da_banks