Thiel - Girard

During the week of July 12, 2004, a group of scholars gathered at Stanford University, as one participant reported, “to discuss current affairs in a leisurely way with [Stanford emeritus professor] René Girard.” The proceedings were later published as the book Politics and Apocalypse. At first glance, the symposium resembled many others held at American universities in the early 2000s: the talks proceeded from the premise that “the events of Sept. 11, 2001 demand a reexamination of the foundations of modern politics.” The speakers enlisted various theoretical perspectives to facilitate that reexamination, with a focus on how the religious concept of apocalypse might illuminate the secular crisis of the post-9/11 world.

As one examines the list of participants, one name stands out: Peter Thiel, not, like the rest, a university professor, but (at the time) the President of Clarium Capital. In 2011, the New Yorker called Thiel “the world’s most successful technology investor”; he has also been described, admiringly, as a “philosopher-CEO.” More recently, Thiel has been at the center of a media firestorm for his role in bankrolling Hulk Hogan’s lawsuit against Gawker, which outed Thiel as gay in 2007 and whose journalists he has described as “terrorists.” He has also garnered some headlines for standing as a delegate for Donald Trump, whose strongman populism seems an odd fit for Thiel’s highbrow libertarianism; he recently reinforced his support for Trump with a speech at the Republican National Convention. Both episodes reflect Thiel’s longstanding conviction that Silicon Valley entrepreneurs should use their wealth to exercise power and reshape society. But to what ends? Thiel’s participation in the 2004 Stanford symposium offers some clues.   

Thiel’s connection to the late René Girard, his former teacher at Stanford, is well known but poorly understood. Most accounts of the Girard-Thiel connection have described the common ground between them as “conservatism,” but this oversimplifies the matter. Girard, a French Catholic pacifist, would have likely found little common ground with most Trump delegates. While aspects of his thinking could be described as conservative, he also described himself as an advocate of “a more reasonable, renewed ideology of liberalism and progress.” Nevertheless, as the Politics and Apocalypse symposium reveals, Thiel and Girard both believe that “Western political philosophy can no longer cope with our world of global violence.” “The Straussian Moment,” Thiel’s contribution to the conference, seeks common ground between Girard’s mimetic theory of human social life – to which I will return shortly – and the work of two right-wing, anti-democratic political philosophers who were in vogue in the years following 9/11: Leo Strauss, a cult figure in some conservative circles, and a guru to some members of the Bush administration; and Carl Schmitt, a onetime Nazi who has nevertheless been influential among academics of both the right and the left. Thiel notes that Girard, Strauss, and Schmitt, despite various differences, share a conviction that “the whole issue of human violence has been whitewashed away by the Enlightenment.” His dense and wide-ranging essay draws from their writings an analysis of the failure of modern secular politics to contend with the foundational role of violence in the social order.

Thiel’s intellectual debt to Girard’s theories has a surprising relevance to some of his most prominent investments. For anyone who has followed Thiel’s career, the summer of 2004 – the summer when the “Politics and Apocalypse” symposium at Stanford took place – should be a familiar period. About a month afterward, in August, Thiel made his crucial $500,000 angel investment in Facebook, the first outside funding for what was then a little-known startup. In most accounts of Facebook’s breakthrough from dorm-room project to social media empire (including that offered by the film The Social Network), Thiel plays a decisive role: a well-connected tech industry figure, he provided Zuckerberg et al., then Silicon Valley newcomers, with credibility as well as cash at a key juncture. What made Thiel see the potential of Facebook before anyone else? We find his answer in an obituary for René Girard (who died in November 2015), which reports that Thiel “credits Girard with inspiring him to switch careers and become an early, and well-rewarded, investor in Facebook.” It was the French academic’s mimetic theory, he claims, that allowed him to foresee the company’s success: “[Thiel] gave Facebook its first $500,000 investment, he said, because he saw Professor Girard’s theories being validated in the concept of social media. ‘Facebook first spread by word of mouth, and it’s about word of mouth, so it’s doubly mimetic,’ he said. ‘Social media proved to be more important than it looked, because it’s about our natures.’” On the basis of such statements, business analyst and Thiel admirer Arnaud Auger has gone so far as to call Girard “the godfather of the ‘like’ button.”

In order to make sense of how Girard informed Thiel’s investment in Facebook, but also how he has shaped Thiel’s ideas about violence, we need to examine the basic tenets of Girard’s thought. Mimetic theory has not been widely applied in social analyses of the internet, perhaps in part because Girard himself had essentially nothing to say about technology in his published oeuvre. Yet the omission is surprising given mimetic theory’s superficial resemblance to the more often discussed “meme theory,” which similarly posits imitation as the basis of culture. Meme theory began with Richard Dawkins’s The Selfish Gene, was codified in Susan Blackmore’s The Meme Machine, and has been applied broadly, in popular and scholarly contexts, to varied internet phenomena. Indeed, the traction achieved by the term “meme” has made most of us witting or unwitting adopters of meme theory. Yet as Matthew Taylor has argued, Girard’s account of mimeticism has significant theoretical advantages over Dawkins-derived meme theory, at least for anyone interested in making sense of the socio-political dimensions of technology. Meme theory tends to reify memes, separating them from the social contexts in which their circulation is embedded. Girard, in contrast, situates imitative behaviors within a general social theory of desire.

Girard’s theory of mimetic desire is simple in its basic framework but has permitted complex, detailed analyses of a wide range of cultural and social phenomena. For Girard, what distinguishes desire from instinct is its mediated form: put simply, we desire things because others desire them. There is some continuity with familiar strands of psychoanalytic theory here. I quote, for example, from Slavoj Žižek: “The problem is, how do we know what we desire? There is nothing spontaneous, nothing natural, about human desires. Our desires are artificial. We have to be taught to desire.” Compare this with Girard’s statement: “Man is the creature who does not know what to desire, and who turns to others in order to make up his mind. We desire what others desire because we imitate their desires.” For Girard (and here he differs from psychoanalysis), mimesis is the process by which we learn how and what to desire. Any subject’s desire, he argues, is based on that of another subject who functions as a model, or “mediator.” Hence, as he first asserted in his book Deceit, Desire, and the Novel, the structure of desire is triangular, incorporating not only a subject and an object, but also, and more crucially, another subject who models any subject’s desire. Moreover, for Girard, the relation to the object of desire is secondary to the relation between the two desiring subjects – which can eclipse the object, reducing it to the status of a prop or pretext.

The possible applications of this thinking to social media in particular should be relatively obvious. The structures of social platforms mediate the presentation of objects: that is, all “objects” appear embedded in, and placed in relation to, visible signals of the other’s desire (likes, up-votes, reblogs, retweets, comments, etc.). The accumulation of such signals, in turn, renders objects more visible: the more mediated through the other’s desire (that is, the more liked, retweeted, reblogged, etc.), the more prominent a post or tweet becomes on one’s feed, and hence the more desirable. Desire begets desire, much in the manner that Girard describes. Moreover, social media platforms perpetually enjoin users, through various means, to enter the iterative chain of mimesis: to signal their desires to other users, eliciting further desires in the process. The algorithms driving social media, as it turns out, are programmed on mimetic principles.
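
To make that claim concrete, here is a minimal, purely illustrative sketch (in Python, and not drawn from any actual platform’s code) of an engagement-weighted feed: each signal of another’s desire increases a post’s visibility, and greater visibility makes further signals more likely, so the “desire begets desire” loop described above compounds on its own.

```python
import random

# Toy feed: post id -> accumulated signals of others' desire (likes, shares, etc.)
posts = {"a": 0, "b": 0, "c": 0}

def visibility(signals):
    # The more a post is mediated through others' desire, the more prominently
    # it is ranked, and hence the more desirable it appears.
    return 1 + signals

for _ in range(1000):
    weights = [visibility(s) for s in posts.values()]
    # Users are most likely to encounter, and then "like," what is already liked.
    chosen = random.choices(list(posts), weights=weights)[0]
    posts[chosen] += 1

print(posts)  # one arbitrary post tends to absorb most of the signals
```

Run long enough, one arbitrary post tends to run away with most of the “desire”: a toy version of the point that prominence is driven by mimetic amplification rather than by any intrinsic quality of the object.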

Yet it is not simply that the signaling of desire (for example, by liking a post) happens to produce relations with others, but that the true aim of the signaling of desire through posting, liking, commenting, etc. is to produce relations with others. This is what meme theory obscures and mimetic theory makes clear: memes, far from being autonomous replicators, as meme theory would have it, function entirely as mediators of social relations; their replication relies entirely on those relations. Recall that for Girard, the desire for any object is always enmeshed in social linkages, insofar as the desire only comes about in the first place through the mediation of the other. A reading of Girard’s analyses of nineteenth-century fiction or of ancient myth suggests that none of this is at all new. Social media have not, as the popular hype sometimes implies, altered the structures that underlie social relations. They merely render certain aspects of them more obvious. According to Girard, what stands in the way of the discovery of mimetic desire is not its obscurity or complexity, but the seeming triviality of the behaviors that reveal it: envy, jealousy, snobbery, copycat behavior. All are too embarrassing to seem socially, much less politically, significant. For similar reasons, to revisit Thiel’s remark, “social media proved to be more important than it looked.”

But so far, I have been expanding on what Thiel himself has said, which others have echoed. However, what accounts of Girard’s role in Thiel’s Facebook investment never mention is the other half of Girard’s theory, the half that Thiel was at Stanford to discuss in 2004: mimetic violence, which, for Girard, is the necessary corollary of mimetic desire.

Thiel invested in and promoted Facebook not simply because Girard’s theories led him to foresee the future profitability of the company, but because he saw social media as a mechanism for the containment and channeling of mimetic violence in the face of an ineffectual state. Facebook, then, was not simply a prescient and well-rewarded investment for Thiel, but a political act closely connected to other well-known actions, from founding the national security-oriented startup Palantir Technologies to suing Gawker and supporting Trump.

According to Girard’s mimetic theory, humans choose objects of desire through contagious imitation: we desire things because others desire them, and we model our desires on others’ desires. As a result, desires converge on the same objects, and selves become rivals and doubles, struggling for the same sense of full being, which each subject suspects the other of possessing. The resulting conflicts cascade across societies because the mimetic structure of behavior also means that violence replicates itself rapidly. The entire community becomes mired in reciprocal aggression. The ancient solution to such a “mimetic crisis,” according to Girard, was sacrifice, which channeled collective violence into the murder of scapegoats, thus purging it, temporarily, from the community. While these cathartic acts of mob violence initially occurred spontaneously, as Girard argues in his book Violence and the Sacred, they later became codified in ritual, which reenacts collective violence in a controlled manner, and in myth, which recounts it in veiled forms. Religion, the sacred, and the state, for Girard, emerged out of this violent purgation of violence from the community. However, he argues, the modern era is characterized by a discrediting of the scapegoat mechanism, and therefore of sacrificial ritual, which creates a perennial problem of how to contain violence.

For Girard, to wield power is to control the mechanisms by which the mimetic violence that threatens the social order is contained, channeled, and expelled. Girard’s politics, as mentioned above, are ambiguous: he criticizes conservatism for wishing to preserve the sacrificial logic of ancient theocracies, and liberalism for believing that by dissolving religion it can eradicate the potential for violence. However, Girard’s religious commitment to a somewhat heterodox Christianity is clear, and controversial: he regards the non-violence of the Jesus of the gospel texts as a powerful exception to the violence that has been in the DNA of all human cultures, and an antidote to mimetic conflict. It is unclear to what degree Girard regards this conviction as reconcilable with an acceptance of modern secular governance, founded as it is on the state monopoly on violence. Peter Thiel, for his part, has stated that he is a Christian, but his large contributions to hawkish politicians suggest he does not share Girard’s pacifist interpretation of the Bible. His sympathetic account, in “The Straussian Moment,” of the ideas of Carl Schmitt offers further evidence of his ambivalence about Girard’s pacifism. For Schmitt, a society cannot achieve any meaningful cohesion without an “enemy” to define itself against. Schmitt and Girard both see violence as fundamental to the social order, but they draw opposite conclusions from that finding: Schmitt wants to resuscitate the scapegoat in order to maintain the state’s cohesion, while Girard wants (somehow) to put a final end to scapegoating and sacrifice. In his 2004 essay, Thiel seems torn between Girard’s pacifism and Schmitt’s bellicosity.

The tensions between Girard’s and Thiel’s worldviews run deeper, as a brief overview of Thiel’s politics reveals. As a libertarian, he has donated to both Ron and Rand Paul, and he has also supported Tea Party stalwarts including Ted Cruz. George Packer, in a 2011 New Yorker profile of Thiel, reports that his chief influence in his youth was Ayn Rand, and that in political arguments in college, Thiel fondly quoted Margaret Thatcher’s claim that “there is no such thing as society.” Few claims could be more alien to Thiel’s mentor, Girard, who insists on the primacy of the collective over the individual and dedicated several books to debunking modern myths of individualism. Indeed, Thiel’s libertarian vision of the heroic entrepreneur standing apart from society closely resembles what Girard derided in his work as “the romantic lie”: the fantasy of the autonomous, self-directed individual that emerged out of European Romanticism. Girard went so far as to suggest replacing the term “individual” with the neologism “interdividual,” which better conveys the way that identity is always constructed in relation to others.

In a seemingly Ayn-Randian vein, Thiel likes to call tech entrepreneurs “founders,” and in lectures and seminars has compared startups to monarchies. He envisions “founders” in mythical terms, citing Romulus, Remus, Oedipus, and Cain, figures discussed at length in Girard’s analyses of myth. Thiel’s pro-monarchist statements have been parsed in the media (and linked to his support for the would-be autocrat Trump), but without noting that for a self-proclaimed devotee of René Girard to advocate for monarchy carries striking ambiguities. According to Girard’s counterintuitive analysis, monarchical power is the obverse side of scapegoating. Monarchy, he hypothesizes, has its origins in the role of the sacrificed scapegoat as the unifier and redeemer of the community; it developed when scapegoats managed to delay their own ritual murder and secured a fixed place at the center of a society. A king is a living scapegoat who has been deified, and can become a scapegoat again, as Girard illustrates in his reading of the myth of Oedipus (Oedipus begins as an outsider, goes on to become king, and is ultimately punished for the community’s ills, channeling collective violence toward himself, and returned to his outsider status).

If Thiel, as he reveals in a 2012 seminar, views the “founder” as both potentially a “God” and a “victim,” then he regards the broad societal influence wielded by the tech élite as a source of risk: a king can always become a scapegoat. On these grounds, it seems reasonable to conclude that Thiel’s animus against Gawker, which he has repeatedly accused of “bullying” him and other Silicon Valley power players, is closely connected to his core concern with scapegoating, derived from his longstanding engagement with Girard’s ideas. Thiel’s preoccupation with the risks faced by the “founder” also has a close connection to his hostility toward democratic politics, which he regards as placing power in the hands of a mob that will victimize those it chooses to play the role of scapegoat. Or as he states: “the 99% vs. the 1% is the modern articulation of [the] classic scapegoating mechanism: it is all minus one versus the one.”

No serious reader of Girard can regard a simple return to monarchical rule – which Thiel has sometimes seemed to favor – as plausible: the ritual underpinnings that were necessary to maintain its credibility, Girard insists, have been irreversibly demystified. Perhaps on the basis of this recognition, and even while hedging his bets through his involvement in Republican politics, Thiel has focused instead on the new possibilities offered by network technologies for the exercise of power. A Thiel text published on the website of the libertarian Cato Institute is suggestive in this context: “In the 2000s, companies like Facebook create . . . new ways to form communities not bounded by historical nation-states. By starting a new Internet business, an entrepreneur may create a new world. The hope of the Internet is that these new worlds will impact and force change on the existing social and political order.” Although Thiel does not say so here, from a Girardian point of view, a “founder” of a community does so by bringing mimetic violence under institutional control – precisely what the application of mimetic theory to Facebook would suggest that it does.

As we saw previously, Thiel was ruminating on Strauss, Schmitt, and Girard in the summer of 2004, but also on the future of social media platforms, which he found himself in a position to help shape. It is worth adding that around the same time, Thiel was involved in the founding of Palantir Technologies, a data analysis company whose main clients are the US Intelligence Community and Department of Defense – a company explicitly founded, according to Thiel, to forestall acts of destabilizing violence like 9/11. One may speculate that Thiel understood Facebook to serve a parallel function. According to his own account, he identified the new platform as a powerful conduit of mimetic desire. In Girard’s account, the original conduits of mimetic desire were religions, which channeled socially destructive, “profane” violence into sanctioned forms of socially consolidating violence. If the sacrificial and juridical superstructures designed to contain violence had reached their limits, Thiel seemed to understand social media as a new, technological means to achieving comparable ends.

If we take Girard’s mimetic theory seriously, the consequences for the way we think about social media are potentially profound. For one, it would lead us to conclude that social media platforms, by channeling mimetic desire, also serve as conduits of the violence that goes along with it. That, in turn, would suggest that abuse, harassment, and bullying – the various forms of scapegoating that have become depressing constants of online behavior – are features, not bugs: the platforms’ basic social architecture, by concentrating mimetic behavior, also stokes the tendencies toward envy, rivalry, and hatred of the Other that feed online violence. From Thiel’s perspective, we may speculate, this means that those who operate those platforms are in the position to harness and manipulate the most powerful and potentially destabilizing forces in human social life – and most remarkably, to derive profits from them. For someone overtly concerned about the threat posed by such forces to those in positions of power, a crucial advantage would seem to lie in the possibility of deflecting violence away from the prominent figures who are the most obvious potential targets of popular ressentiment, and into internecine conflict with other users.

Girard’s mimetic theory can help illuminate what social media does, and why it has become so central to our lives so quickly – yet it can lead to insights at odds with those drawn by Thiel. From Thiel’s perspective, it would seem, mimetic theory provides him and those of his class with an account of how and to what ends power can be exercised through technology. Thiel has made this clear enough: mimetic violence threatens the powerful; it needs to be contained for their – his – protection; as quasi-monarchs, “founders” run the risk of becoming scapegoats; the solution is to use technologies to control violence – this is explicit in the case of Palantir, implicit in the case of Facebook. But there is another way of reading social media through Girard. By revealing that the management of desire confers power, mimetic theory can help us make sense of how platforms administer our desires, and to whose benefit. For Girard, modernity is the prolonged demystification of the basis of power in violence. Unveiling the ways that power operates through social media can continue that process.

Sherry Turkle has been very successful lately. She is still touring the country giving high-profile talks, and her best-selling books are widely assigned in college classrooms. The quotes on her books’ dustjackets are from respected authors and thinkers. She is a senior faculty member at an elite east coast university. Hers is, by all accounts, an ostensibly left-of-center perspective that remains popular while still pushing audiences to consider the ramifications of their actions. Turkle, through her critical analysis of social media and portable digital devices, wants people to think twice about the unintended consequences of their actions: how individual choices often aggregate into undesirable interpersonal dynamics. This is important work worthy of public debate but, precisely because it is so important, it is worth asking who benefits from Turkle’s particular brand of mindfulness.

Critiques of Turkle are too few, but the ones that exist are spot on. Focusing on individuals’ technology use, according to Nathan Jurgenson, not only turns the subjects of Turkle’s analysis into broken subhumans, it also gives the reader the opportunity to feel superior simply by fretting over when and how a device comes out of their pocket. Her work also misses, according to Zeynep Tufekci and Alexandra Samuel, all the ways social media is a way of reclaiming some form of sociality in a world dominated by televisions, the suburbs, long work hours, and life circumstances that geographically separate us. Taken together, we might understand the shortcoming of Turkle’s work as primarily one of digital dualism, i.e. that she considers non-mediated, in-person interaction as inherently more real or authentic than anything done through digital networks. What has been left unsaid, and what I want to focus on here, is how Turkle contradicts herself and, in so doing, reveals a bias toward authority and socially conservative political institutions. Turkle selectively deploys her analysis in such a way that traditional sources of authority are left unchallenged.

Technology criticism tends to be progressive, or at least questioning of unchecked technological innovation. In the tradition of Lewis Mumford or Jacques Ellul, it is an analysis meant to reveal how inventions and artifacts hamper freedom rather than expand it. And so it makes sense that Turkle’s critique of technology in our everyday lives has been interpreted as a larger critique of the structures that put it there. Turkle’s last two books, Alone Together (2011) and Reclaiming Conversation (2015), are easily read as counter-arguments to an entire industry’s modus operandi: Silicon Valley, in its unthinking quest for power and wealth, has laid waste to thoughtful repose, meaningful conversation, and an inclusive social order. Turkle’s work is full of examples of people too busy for their children, children too busy for each other, and entire organizations’ social fabric quickly wearing out as they succumb to emails and texts.

It is curious, then, that Turkle is very popular among the people she ostensibly critiques. Kevin Kelly, the founding editor of WIRED Magazine, provides glowing reviews on her books’ dust jackets. She gives talks at conferences like “Wisdom 2.0”, the World Economic Forum, The World Business Forum, the Association of Financial Professionals, Partners HealthCare, and SAP Global Marketing. SAP is particularly confusing given that it is one of those companies that give bosses more tools to bug you at work with instant messages and automated reports. Why are these organizations so receptive to her work? What do they gain from it?

The answer to these questions lies in the uneven application of her theories and in where she locates the source of the problem. If the disruption caused by smartphones or social media is aimed at social conditions that would be obviously undesirable to an educated professional audience, then technology has done a good thing. For example, in her latest book Turkle writes:

Gay or transgender adolescents in a small, culturally conservative rural town can find a larger community online; a circumstance that once would have been isolating no longer needs to be. If your own values or aspirations deviated from those of your family or local community, it is easy to discover a world of peers beyond them.

This observation, in the wake of the Pulse Nightclub shooting, is important now more than ever. It was both heartbreaking and inspiring to watch the LGBTQ community offer solidarity and comfort to one another across long distances. It is puzzling, then, that this “world of peers” was never mentioned in a recent interview with NPR’s Alina Selyukh. Neither Turkle nor Selyukh mention the LGBTQ community, its long history of enduring extreme violence, and the tools the community has cultivated to survive. The topics they did cover, however, included “radical islamists”, the ableist narrative that violence is attached to mental illness, and the completely discredited theory of radicalization peddled by war hawks. All of which is made even more confounding given that two days prior to Turkle’s interview, the CIA had announced the shooter had no tangible connections to ISIS.

The raison d’être for the NPR interview was, after all, that “the gunman searched and posted on Facebook, in part to find out if he was in the news.” We could give Turkle the benefit of the doubt and say that Selyukh had an angle in mind, and that it is hard to reframe an interview when the reporter has a specific agenda. Turkle, though, seems to have no qualms with this line of inquiry and instead offers up even more problematic framings. Further on in the interview she claims:

if [the shooter had] been part of a church group, a community group, and wasn’t so alienated in the way we live now where so many people are so isolated, alone … then there are more things we can do, to bring them more into a fold as a society.

Even if we bracket off the fact that she is arguing that “church” would have kept him from becoming a “radical islamist,” we are left with the contradiction that networked relationships can be meaningful enough to convince someone to commit mass murder but not powerful enough to keep someone well-adjusted. Social media seems to be both too shallow for meaningful community and so evocative that it would draw us into unspeakable acts. Turkle assumes that in-person communities are the antidote to extremism, as if those who commit atrocities do not have cohesive world views forged in tight-knit communities like evangelical churches, fraternities, and the exact sort of “conservative rural towns” that even she acknowledges are antagonistic to human flourishing.

Still, though, Turkle’s exceptions to her own theory (i.e. that in-person, geographically-defined communities are always worth preserving) are understandable: communities that are antagonistic to tolerance and plurality require intervention, but we should keep in mind that the same dynamic can also be weaponized to give individuals access to violent ideologies. In the interview Turkle hopes for a society that espouses “tremendous integration, and community, and hopefulness, where people feel a part and included.” This is a very beautiful sentiment that she and I share. Companies endorse Turkle’s work for the same reasons they do not like North Carolina’s hateful HB 2 “bathroom bill.” If we are going to live in a cosmopolitan and interconnected society, then inclusion in a larger and meaningful societal project is essential to achieving harmony. Digital networks can help or hinder such a society, and it is understandable to read Turkle’s work as nothing more or less than a meditation on this dynamic.

Exactly how we achieve such a society, however, may be more important, and according to Turkle, we get there through an unquestioning obedience to benevolent bosses, teachers, and parental authority. Her work contains a thinly veiled but pervasive trust in status quo institutions: schools, the well-educated family home, and summer camp are unmitigated good things in Turkle’s world. These are places that should remain intact so as to maintain a monopoly on our attention. Families, according to Turkle, “are a training ground for empathy” and “a place to let ideas grow without self-censorship.” I doubt that last sentence was true for everyone at the Pulse nightclub that night or in those rural, conservative communities. Already we are getting a clear picture of who she is speaking to and who can find her work useful.

Of course I am not arguing for the preservation of small-town bigotry or international terrorist organizations. What is at issue here is how quickly Turkle’s analysis loses focus on underlying problems and instead scapegoats new technologies. This contradiction in her work—that technologically-mediated sociality is both powerfully alluring and incapable of delivering emotionally fulfilling conversation—cannot be an oversight, because too much basic research (indeed too much to even summarize here) points to a much more nuanced reality: technology and sociality are mutually shaping, and humans’ satisfaction with any given social encounter is never so cut and dry as to be determined by one or two factors.

It is clear from her interviews with business publications that this contradiction is an essential component, not a failing, of her thesis that technology is disrupting important family and community institutions. She writes in generalities about inclusion without a substantial discussion of what structural changes need to take place to make people feel included in the first place. It is a kind of David Brooks-esque political consciousness that makes overtures at inclusion and acceptance so long as it does not threaten substantial power structures like white supremacy or the security state. Turkle consistently shows no interest in articulating how family, work, and school might be sites of violence or coercion, rather than (or even in addition to) social stability.

This refusal to engage with the sort of structural critique that is a prerequisite for progressive change is on full display when she discusses work and productivity. Turkle, rather than considering how and why we have arrived at our alienating social condition, would rather have us turn inward and blame each other’s social media habits. Rather than seek out the sources of widespread acrimony and distrust in institutions, Turkle would have us blame ourselves for not living our most authentic lives. It is a victim-blaming ideology that belongs with the “culture of poverty” myth and broken windows policing. The erosion of work-life balance brought about by stagnating wages and longer work hours, for example, is turned into bad parenting in this sad quote from a fifteen-year-old boy in Reclaiming Conversation: “When I come home from school, my mom is usually on her computer doing work. … Sometimes she doesn’t look up from her screen when I am talking to her.” Turkle chides:

Of course, distracted parents are nothing new, but sharing parents with laptops and mobile phones is different than an open book or a television or a newspaper. Texting and email take people away to worlds of more intense and concentrated focus and engagement.

Yes, because emails and texts have the capacity to be work-related; prime-time TV generally does not. It seems like an uphill battle for someone low on the food chain at work to swear off email after 5PM if their boss expects constant availability. (Of course this should be a demand of organized labor, but such organizing does not make it into her “way forward” section, which is instead dedicated to reminding us that Kony 2012 didn’t accomplish anything and the news is scary.) Turkle’s observations are only useful to people with relatively high degrees of power over their own lives and means of subsistence.

Her elitism is so obvious that even business reporters have a hard time figuring out how to make use of her conclusions. In an interview with TechRepublic (tagline: “Empowering the people of business and technology!”) the interviewer pushes Turkle on her lack of engagement with power and privilege. Her response is one of corporate benevolence in service of efficient and happy workers: First she acknowledges that the privileged do have more control over how and when they work but it’s because “the privileged know that they get more work done when they live that way.” She follows that up by clarifying that such a lifestyle must be a gift from corporate overlords, not the result of collective bargaining or any other kind of bottom-up reform: “I do try to make it clear in my writings on business that it’s up to the leadership at the firm to create a culture of conversation in their business.”

Then there is the matter of those people who might not fit perfectly into Turkle’s happy worker utopia. Here she strays far away from anything approaching science and becomes positively cruel. In Reclaiming Conversation, she makes anti-vaxxers sound like public health experts:

Parents wonder if cell phone use leads to Asperger’s syndrome. It is not necessary to settle this debate to state the obvious. If we don’t look at our children and engage them in conversation, it is not surprising if they grow up awkward and withdrawn.

Not only is it alarming to read a licensed clinical psychologist conflate attitude with a developmental condition, thereby reducing the latter to the former, it is uncited nonsense that runs counter to actual research. While there is still disagreement about what causes Asperger’s, genetics is largely determinative of the condition. Even if there were no evidence of a genetic cause, we could simply note that Asperger’s predates cell phones. (Side note: Turkle makes clever use of a citation style common in popular press books that hides which sentences are backed up by citations in the back of the book and which ones are not.)

As any epidemiologist can tell you, widespread misunderstanding of the root of a problem will only exacerbate the issue. Not only because people are acting on bad information, but because people with the right information have to spend more time correcting the record and not enough time thinking about remedies. As sociologist Jenny Davis put it: “Those who disagree with Turkle’s claim that we are more connected to devices and less connected to each other find themselves in the position of technological apologist, enthusiast, or utopian—intellectual positions that they may well not hold.” This is exactly the position I find myself in here. I am deeply skeptical of the profit motives embedded in digital networks (among other problems), but the focus should be on the ways in which technology embodies anti-social structural oppression, not on technology as the propagator of the unintended consequences of unthinking or broken individuals. Also, for what it’s worth, Davis, who actually cites autism research, notes that digital technologies may be more palliative than toxic. In fact, “researchers at Stanford have a lab team dedicated to improving empathy through digital tech.”

Running throughout Sherry Turkle’s work is a dedication to a fairly conservative worldview in which the pace of work and the environment in which it takes place should be set exclusively by bosses acting as wellsprings of morality. Extreme violence is attributed to a failure of culture, not to real material violence or structural inequality that manifests in strange and tragic ways. Differently abled people are simply broken.

The ultimate irony here is that Sherry Turkle’s success is directly attributable to the only critique that she gets right: social media provides perverse incentives for attention-seeking. The outlets that welcome Turkle’s polemics are trading in the illusion of intelligence. They collect quotes from neuroscientists and quacks who call themselves things like “happiness experts”, package up half-thoughts into edgy-but-not-too-edgy counter-intuitive claims, and then overlay a narrative that assures their audience that they already knew how to live according to science but maybe they missed a few things. Turkle has expertly manipulated an already dishonest landscape of science journalism meant to provide fodder for condescending liberals. Some may have read Turkle’s latest interview and been surprised by her total lack of empathy, but it should surprise no one: if her work is good for anything, it is dividing the world into priests and pariahs.

I shopped a version of this essay to several media outlets with an ostensibly left-leaning editorship and a history of commenting on the role of technology in society. What I am about to say is neither an indictment of the particular editors I spoke to, nor of their publications. To blame them would be doing what Turkle does: blaming individuals for structural problems. Multiple editors said they understood my point, even found it persuasive, but thought my argument was too far afield from their usual audience or too harsh for such a well-respected author. The problem is that they are right: tech criticism is not seen by political magazines as the overtly political topic it should be, and publications focused on technology rarely step into the murky waters of political or social criticism. There is no appropriate venue to correct the record.

Here I am not frustrated with the editors I spoke to, or even with Turkle. I am disappointed in the researchers who know better but remain silent. She is speaking from a partisan position that favors authority, and those of us who hold other positions should be roundly and loudly criticizing this kind of analysis and building compelling counter-narratives and venues to broadcast them. Social scientists should do what scientists are supposed to do: experiment, observe, and confirm or deny existing theories. This has not happened with Sherry Turkle’s work of late. I know that, privately, people talk about Turkle’s work as detached from rigorous research—more of an exercise in pandering to what we want to believe than a careful review of the state of the field. Publicly, however, nearly no one seems willing to say anything. Too few scientists with a platform are willing to call her books what they are: polemics for conservatives.

David is on Twitter

Image credit: Steve McClanahan

My mom and I spent part of the summer of 1995 with my aunt at her house, complete with a backyard. I was three, and having lived most of my life in a small New York studio apartment, my mom must’ve thought I would enjoy the few elements of nature often found in quiet Californian suburbs. She was wrong: each time they tried setting me in the grass, I would crawl desperately back to the beautiful, safe, concrete patio.

This is a childhood story that still speaks to my identity: camping is not my first choice of activities, and the narratives of people who lose themselves in the wilderness are tedious to me. So it was quite a surprise when I willingly accepted the hiking trail Pokémon Go had set for me with the promise of Clefairies.

That Pokémon Go managed this feat is a miracle, and betrays my investment in the game. Pokémon celebrates its 20th birthday this year; in those twenty years, a generation of young adults grew up building the Pokémon World over kitchen tables and in parks. Anthropologist Anne Allison described Pokémon’s designers as part of a trend in game design through the 90s that used cheaper digital technology to resolve the increasing isolation of children. If children couldn’t fit friends into the regimented schedule of school, extracurriculars, and sleep, then companies could sell games that made friends portable and accessible on each child’s schedule.  

Although Pokémon started as corporate code, demand soon outstripped Nintendo’s capacity to produce Pokémon. Unlicensed merchandise and cards spawned anywhere demand existed. These days, Pokémon has its own search tag on most porn sites, and is comfortably in the top 10 of FanFiction fandoms. In 2014, a cult quickly developed around Twitch Plays Pokémon: an event in which some hundred thousand players worked together to complete Pokémon: Red Version through Twitch.TV’s chat screen. Playing Pokémon––whether at the kitchen table, at recess, or after school––has become a collective act of participation. The French sociologist Émile Durkheim would have called it the collective effervescence of the 1990s: an activity as much about the affect and socialization produced as it was about the game. Nintendo may have designed it, but people made it real.

It’s no accident that most people who register critiques of the game are left with bemusement at why it captivates: the game’s immersion has incorporated the previous efforts of players in the Pokémon world. Sam Kriss at Jacobin takes the game to task for another reason: it crashes through our reality, displacing it with an “objective fantasy” where all possible routes are already mapped. In other words, the game pretends to provide you with a fantastical world, but actually offers strictly regimented access to everyday, physical space.

In some ways, Kriss is correct in his assessment. While the game offers the potential to live a child’s dream as an adult, it does so with specific paths laid out, and thus sharply limits imaginative prowess. This is especially true with the placement of PokéStops. PokéStops serve two important functions in the game: they allow players to obtain items without paying for the in-game currency (this is, after all, a Freemium game), and they allow for the use of PokéLures, an item that draws wild Pokémon to the location of the PokéStop for thirty minutes. The promise of easily attainable Pokémon, made visible by petals falling from the stop’s icon, inevitably lures players as consistently as the Lures do Pokémon. And when multiple PokéLures are used in close proximity, groups of strangers suddenly talk to each other with a common vocabulary, between bouts of excited shouting about the rare Pokémon that just flashed on everyone’s screens.

PokéLures are the heart that drives socializing in the game. They’re a new coffeehouse or bar: one shows up on my screen, followed quickly by another right across the street, and I know that I’ll run into the set of players I’ve met at similar setups of PokéLures. Conversations often run between local events in New Haven and rumors of Pokémon spots in specific regions: “Yeah, West Haven is full of Jynx at night,” someone tells me. Dammit, I think to myself. I’d need a car to get there.

I’m fortunate that playing is an option at all: PokéStops and their Lures are usually associated with spaces that have “cultural significance”: art installations, interesting historical landmarks, or monuments. These criteria often end up placing them in a critical mass within cities. Some players have said they have to drive ten minutes before finding a single PokéStop. Meanwhile, on a recent trip to New York, a friend told me the 13-15 PokéLures set up within a five minute radius was a “slow” night for Pokémon in Central Park.

And yet, even critiques like Kriss’ rely on a simplified relationship between players and rules. We are describing vast swathes of the population, many of whom play the game for their own purposes. Attempting to collapse all of these experiences into a single perspective or purpose risks reframing Pokémon Go as another product of an ill-defined mass culture. In the mid-twentieth century, cultural studies––the study of the heterogeneity of culture and cultural reception––was born to push against the anxiety of the atomized individual found in many critiques of mass culture.

But Pokémon Go neither suspends nor homogenizes the problems of identity in twenty-first century America. Maddy Myers anticipates that “Are you playing Pokémon Go?” will become one of the hottest pick-up lines of 2016; PokéMatch appears to validate her concerns. The timing of Pokémon Go’s release––as America reeled from footage of Alton Sterling and Philando Castile dying at the hands of municipal police––has made the game a debate between escapism from brutal atrocities and the fact that such escapism is limited by realistic expectations of safety for Black folks while playing the game. Just as some people “escape” into the spectacle on their screens, so do others negotiate these issues as part of the experience of play.

Critiques of mass culture also found trouble in their excessive reliance on the structure set by rules: such reliance denied the possibility of resistance. If the rules of Pokémon Go are so objectively fixed, there is little hope of using the game for anything more than its designated purpose. This interpretation of cultural limitations is what the Marxist philosopher Henri Lefebvre resists in his text The Production of Space. Lefebvre viewed the triumph of “mental space” (that of theorizing, of reading, of rulemaking) over “physical space” (where we eat, sleep, and play) as nonsensical and unsubstantiated. While his main targets were the philosophers of language, who collapsed mental space into the words that formed it, Lefebvre suggests that mental space and physical space cannot be understood as two halves: they must be evaluated as a single system.

Many players of Pokémon Go take the rules and goals of the game as suggestions. While the game suggests you “catch ‘em all,” some players are content with making fun of the game’s inability to produce more than Weedles, Pidgeys, and Rattatas. A whole new set of memes and macros used to convey disappointment, anger, and general absurdity takes advantage of the universal experience of finding only these three Pokémon almost anywhere.

But, even while playing the game seriously, players are more heterogeneous than anything else. Some players catch the cute Pokémon; some players catch the Fire types; others play to socialize with friends. It is these heterogeneous experiences that structure Alexandra Samuel’s evaluation of the game’s cartographic implications. A key detail is the map’s overlay and the information it lacks:

“In the case of Pokémon Go, the most noteworthy feature is the absence of any street names: your only navigation clues are nameless blocks and intersections and named landmarks. This makes Pokémon Go far closer to pre-cartographic navigation by landmark than to modern wayfinding via street names and addresses.”

The intent to take away information––cross streets, or traffic patterns––goes directly against Google Maps’ project to make travel from Point A to Point B as efficient as possible. If you want to hit up Gyms or find PokéLures, you must be willing to add five or ten minutes to a route. The reward, of course, might include levelling in the game. But it might also include meeting the group of players whose company you enjoyed at the past two PokéLures. My hike was only one of several neighborhood excursions, motivated by people I’d met at the Lures, excitedly describing what they’d found and where they’d met other players. Building my piece of this new Pokémon world, hanging out with a group of Pokémon trainers I would never have met otherwise, and exploring the entirety of New Haven’s offerings have left me with a new map of the city, one much more memorable than would have been possible otherwise.

Much like any facet of American culture in the twenty-first century, the game is playable to the extent that someone wants to participate in American society. Pokémon Go is neither a messianic salvation, nor is it an apocalyptic nightmare. It’s a game whose possibilities of resistance and compliance are found with the people who interact with it.

Marley-Vincent Lindsey is a PhD student at Brown, where he works on religion, economics, and ideas between Spain and Latin America in the sixteenth century. He uses this experience to ask what is new and old about human beings on the web, a question that is very dear to him. He occasionally tweets.

Throughout their history, national conventions for American political parties have become more and more public events. Closed off affairs in smoky rooms and convention halls gave way to televised roll calls and speeches. In the Year of Our Big Brother, 1984, C-SPAN aired uninterrupted coverage of the Democratic and Republican conventions. Conventions became more polished and choreographed, with 1996’s DNC being the zenith of this trend. Conventions moved to the internet in the aughts, using a variety of different platforms to distribute streams and commentary.

This election cycle incorporated something new into the dissemination of gavel-to-gavel coverage of the conventions: Twitch.tv. The platform designed for videogame streaming offered full coverage of the conventions. Additionally, it gave Twitch users the ability to host the coverage of both conventions on their own channels. In the case of the DNC, Twitch users were able to add commentary to the stream of the convention on their channel, giving their followers and other users an opportunity to hear their favorite gamers’ takes on the presentation of the Democratic National Committee.

But why did the RNC and DNC take this step? Twitch’s announcement of the collaboration points towards granting accessibility and giving users a voice in the political process, and emphasizes the exceptional power of American politics throughout the world. While this explains Twitch’s rationale for the partnership, it is necessary to understand how the platform fits into the landscape of American convention coverage for this election cycle and the future.

A partial explanation is available through the concepts of remediation, immediacy, and hypermediacy offered in Jay David Bolter and Richard Grusin’s Remediation: Understanding New Media. Bolter and Grusin use remediation as a way to explain how media ecologies evolve over time—how and why certain mediums gain usage and importance. The significance of digital images, they argue, could only be understood through the successive supremacies of perspective, photography, and film. Each of these added some aspect to the previous medium, they argue, allowing those technologies to gain dominance in a particular time—perspective to the Renaissance, photography to the nineteenth century, film to the twentieth, and so on.

Remediation works through the two concurrent and competing logics of immediacy and hypermediacy, which create different experiences based on how they flow back and forth in a particular media object. Immediacy is the erasure of a medium’s qualities in order to get at the heart of what is “really” there, with the full immersion of virtual reality as its ultimate form. Alongside and against immediacy runs the logic of hypermediacy, which “makes us aware of the medium or media.”

Bolter and Grusin discuss illuminated manuscripts and Gothic cathedrals as hypermediated mediums, but we can see it plainly in cable news representations of the conventions. Here, conventions are presented not as their own experience, not as events to immerse oneself in, but something to be picked apart and digested prior to and synchronized with the event. Cable news outlets offer banners that chop speeches up into the headlines and talking points that will be repeated by analysts and commentators.

This year CNN emphasized the lower interactivity of television by acting as a sort of Google for its viewers. When airing speeches by non-headliners, a banner underneath would offer factoids to help the audience understand who they were and occasionally why they were there. In these banners, we see a medium that is aware of its place in the media environment, keeping eyes on the screen instead of letting them wander toward a second screen for further information.

While cable news offered a hypermediated presentation of the conventions, Twitch streams sought to enforce their qualities of immediacy. With Twitch’s Theater Mode enabled and chat disabled, the immediacy of the DNC was readily apparent. With the exception of a small Twitch icon in the lower left-hand corner and the toolbar that pops up when a cursor moves over it, the user is presented with the immediacy of an uncluttered video feed.

However, this is not what a viewer is presented with if they are not a Twitch user or fluent in the various buttons and toolbars that accompany the base Twitch layout. Instead, those viewers are presented with the full default interface.

This seems to be a textbook case of hypermediacy, especially since Bolter and Grusin explain that “hypermediacy expresses itself as multiplicity.” The chat, sidebar, and a channel description box filled with links to other DNC-related websites and social media accounts present viewers with discrete content streams. The chat moves by quickly, enforcing the constant presence of the commenter swarm. In the archiving of these livestreams (all eight days of streams are archived on the respective convention Twitch channels), the chat is preserved alongside and in time with the archived stream. The howling masses, and their prime place in the platform, are thus concretely linked with the event itself.

But I think this is where Bolter and Grusin’s discussion of immediacy and hypermediacy falls apart a bit. Throughout their book, these flows are often presented as a ratio—the experience acts as a point that moves back and forth along a continuum between the poles of immediate and hypermediate. A viewer might only experience immediacy or hypermediacy at any single moment, but encountering one does not necessarily diminish the force of the other. Twitch’s chat function—because it moves by too quickly for any viewer to read everything—demonstrates that hypermediacy and immediacy are realized through a constant and complex negotiation between the technology’s affordances, a viewer’s attention, and user practices. Viewers see snippets, single words, the small icons Twitch dubs emotes, or the angle brackets denoting something that a moderator deleted. If they are paying attention to the stream, they might not even register the chat that often, its random messages off at the periphery of their vision. I suggest that this is equal parts hypermediate and immediate because of this blurring and meshing of experience.

Bolter and Grusin describe immediacy as emphasizing the “contact point between the medium and what it represents.” When the thing being represented is a raucous convention in this, The Cycle of Our Trump 2016, an indistinct frenzy of comments, cheers and jeers directly connects the medium to the convention floor. It just made sense in the last few weeks, as it will in September for the first debates, where a shouting chat scroll has been and will be connected to Boisterous Trump.

Where Twitch goes from here with streaming political events remains to be seen. Webcasting has been a large part of political discourse in the US since the 1990s and the days of the “information superhighway.” From Bill Clinton’s Online Town Hall to Barack Obama’s Google+ Hangout, video streaming has been used as a way to demonstrate the possibility of closing the gap between the highest representatives of government and their constituents. As I mentioned, Twitch will likely be a space where the September debates are streamed, since in 2012 both YouTube and Xbox Live streamed the debates for their users (with Xbox Live even offering real-time polls to its viewers).

However, the most interesting part of Twitch taking on these events is the possibility for users to simultaneously stream and comment on them. The DNC allowed users to easily stream their own responses to the convention as it unfolded, giving them the ability to put on their own telling of the convention for their subscribers and any other viewers. They were able to use their own channels’ overlays and banners, or talk directly over the speeches. The RNC, in contrast, only allowed users to participate through Twitch’s Host mode, which merely syndicates the RNC stream on their channels. During this past convention, co-streaming is where the chaos and vitriol of Twitch took over, especially with oversight focused on the main RNC and DNC channels.

The Twitch channel for r/the_donald

While streams can always be hijacked by chattering assholes and hosts are able to overlay their own rubbish politics, Twitch can offer a different way of presenting events and commentary to viewers. While CNN offers us tidbits about speakers in its banners, users might be able to offer real-time fact-checking or other relevant information during speeches. Streams could give us the issue positions and voting histories of elected officials. A coordinated effort that brought together different channels could give viewers various viewpoints from which to see the convention—a sort of Gore Vidal-style point/counterpoint baked into the presentation of the same event. These approaches could emphasize the hypermediacy of Twitch much as cable news does, while not slipping into the discourse of objectivity that inevitably infuses media of immediacy.

In Twitch’s announcement of the RNC/DNC partnership, the company wrote that it was “an opportunity for [users] to engage in the political process…without leaving your native habitat, using the social and communication tools you know and love.” While positioning this as a “public service” is beneficial for Twitch, any possibility for resistant approaches to these events will be stymied if the user base and the common practices of viewers and chatters are not altered. Just as broadcast networks did not push for guerrilla television, Twitch will not guide its streamers and channels in creating alternative futures. Instead, the mediation of change will have to come from new users pushing into this “native habitat” of gamers.

Bolter and Grusin naturalize the evolution toward immediate media by explaining that hypermediacy “reminds us of our desire for immediacy.” This argument is steeped in ideals of objectivity that have been mirrored by journalistic practices over the last century. The camera, banners of quotes and factoids, and graphics displaying the time and place of the video all seek to present the event as it “naturally” is. These procedures exhibit a single view of the experience, placing politics as a distinct object, out there in the world for viewers to consume objectively. The potential of a service like Twitch is that we may start building a more evocative and self-aware form of depicting these events, one that eschews the immediacy and false objectivity of cable news and revels in the hypermediacy of multiple voices contributing to a single project. This sort of goal would create a deeper and more profound consciousness that politics is personal, beginning with how it is mediated.

Nick is managing editor for the Journal of Games Criticism and is on Twitter.


With the Republican National Convention still freshly branded into our brains and the Democratic National Convention beginning to stagger into the media cycle, now is a good time to learn a few things about spectacles. If nominating conventions are anything, they are spectacles. For this we should turn to none other than Guy Debord and his classic text The Society of the Spectacle.

Debord uses “spectacle” to describe “a social relationship between people that is mediated by images.” It is important to remember that a spectacle can mean a visually rich event or something you wear over your eyes to change your vision. The society of the spectacle shifts between both: media-saturated events support the creation of lenses with which to see the world. The propaganda of political rallies is not washed away by the balloon drop; it sticks with you long afterward. Throughout The Society of the Spectacle Debord makes reference to real and natural worlds, but do not mistake such a distinction for a (digital) dualist conception of the world. Rather, Debord observes:

The spectacle cannot be set in abstract opposition to concrete social activity, for the dichotomy between reality and image will survive on either side of any such distinction. Thus the spectacle, though it turns reality on its head, is itself a product of real activity.

The spectacle is the capitalist means of production feeling itself. It is always ready to replace the meaningful experiences we make for each other with meticulously crafted moments that feel and look bigger and better than anything we might have made for ourselves, like the sweetness of candy compared to a beet from the garden. Debord suggests that a prerequisite for these moments is the destruction of more “real” experiences:

In its most advanced sectors, a highly concentrated capitalism has begun selling “fully equipped” blocks of time, each of which is a complete commodity combining a variety of other commodities. This is the logic behind the appearance, within an expanding economy of “services” and leisure activities, of the “all-inclusive” purchase of spectacular forms of housing, of collective pseudo-travel, of participation in cultural consumption and even of sociability itself, in the form of “exciting conversations,” “meetings with celebrities” and suchlike. Spectacular commodities of this type could obviously not exist were it not for the increasing impoverishment of the realities they parody. And, not surprisingly, they are also paradigmatic of modern sales techniques in that they may be bought on credit.

This is how we might make sense of the fact that conventions often give plum speaking slots to celebrities and other folks who hold no office in the party or the government. Celebrities can stand in for a friend or a mentor. They are emotional surrogates as much as anything else.

Conventions also appear to be celebrating something when in fact few people know or care about what is going on, let alone are excited about the outcome. Debord’s “pseudo-festival” might help us understand what is happening here. Like the commodified “real” experiences described above, the conventions make up for the lack of actual celebration-worthy events through sheer force of visibility and manufactured excitement:

Our epoch, which presents its time to itself as essentially made up of many frequently recurring festivities, is actually an epoch without festival. Those moments when, under the reign of cyclical time, the community would participate in a luxurious expenditure of life, are strictly unavailable to a society where neither community nor luxury exists. Mass pseudo-festivals, with their travesty of dialogue and their parody of the gift, may incite people to excessive spending, but they produce only a disillusion, which is invariably in turn offset by further false promises. The self-approbation of the time of modern survival can only be reinforced, in the spectacle, by reduction in its use value. The reality of time has been replaced by its publicity.

This last sentence sounds strange but the idea is simple: rather than celebrate the passage of historical time—acknowledging that life itself is defined as a span of time, that all things emerge as a product of time’s passing, and that by the very nature of things the biggest endeavors must be social because they take longer than a single human life—the spectacle requires that time be both infinite and divisible into standard segments. The convention schedule, the time slot, and the commercial break are all predicated on the assumption that certain moments must be well seen and others are far less important. Time is divided so that it might be sold, and its selling price is pegged to its ability to be seen, its publicity.

As we watch this convention and re-watch the last one (either uncut and pure, or reformulated as more obvious farce), let’s know spectacle when we see it, but also leave room for moments of honest candor. The spectacle is a useful theoretical lens, but it is also important to take it off once in a while. Debord leaves little room for moments where commodified time might be turned back on itself and appropriated for parody or for narratives resistant to hegemonic discourse. (Debord sees the spectacle as, itself, a parody of more authentic ways of living.) The conventions are spectacle from gavel to gavel, but humanity always has a way of shining through.

David is on Twitter.

Image credit: PBS Newshour


One of the first news stories about the June 12th Orlando shooting that I read focused on the mother of a young man trapped inside Pulse nightclub, and the text messages that she had exchanged with her son. When I first read the story, the fate of the young man was not yet known, although his text messages had ceased by 3am, and his mother was quoted as having a “bad feeling” about the outcome. That day, as the names of the victims trickled out, I followed the news intently, hoping that somehow this young man’s name would not appear on the list of the deceased. But it did.

Like so many others across the country and the world in the wake of the Orlando massacre, I experienced an intense form of empathy for the victims and their families, made possible in part by increasingly timely and intimate forms of news gathering in the digital age. I read the news from a position of safety and security, but still felt that empty pit in my stomach, still had to stop in my tracks as the young man’s name came across my constantly updating Twitter feed. Millions of others felt something similar. But what becomes of all this empathy?

Empathy has increasingly come to be seen as an important component of efforts at social justice across a host of different contexts. For instance, writing on the current refugee crisis, Britney Summit-Gill suggested that “if Westerners don’t care about the stability of the Middle East or the refugee crisis, we need to close the empathy gap and make the peoples of other regions of the world more familiar, more relatable.” This same “empathy gap” has also been used to describe the relatively low level of public attention paid to the recent terror attack in Istanbul, compared with the dramatic outpouring of emotion in the West devoted to last year’s attacks in Paris. And some argue that new technologies like virtual reality can cause us to “instinctively feel a surge of empathy for those whose experiences we are immersed in.” The assumption in these and other cases appears to be that an increase in empathy for the suffering of distant others can lead to improved outcomes for suffering people down the line. Indeed, we might be more focused on ending Western military adventurism if we viewed all people as equally worthy of our attention and protection. As Summit-Gill laudably put it, “the least we can do is try.”

But this view may misunderstand what empathy really is, and its many limitations. As philosopher Jesse Prinz explained, “empathy is partial; we feel greater empathy for those who are similar to ourselves,” and numerous studies have borne this out. For example, one psychological experiment found that whites who strongly identified with their own racial group biased their charitable giving against black disaster victims. Another confirmed that, even on a sensory level, people experience more empathy for the physical pain of those with the same skin color. Race is, of course, a social construct, not some kind of natural or inherent barrier between peoples. But as long as some people continue to imagine that phenotypical differences are markers of significant distinctions between themselves and others, we can expect empathy to have trouble crossing racial and other boundaries.

These are the perils of relying on what is essentially an imaginary relationship between some distant unfortunate and oneself. Psychologist Lauren Wispé argued that because it refers to “the attempt of one self-aware self to understand the subjective experiences of another self” empathy doesn’t necessarily involve “awareness of another’s plight as something to be alleviated.” Sociologist Candace Clark has suggested that empathy is simply the first step in a process that can lead to a desire to help, but can also lead to indifference or even disgust towards the other. My own textual analysis of an anti-Occupy Wall Street blog has shown that people can quite easily imagine the suffering of others as manageable or surmountable. The larger point here is that these things are not failures of empathy—this is how empathy works. We can’t truly know another’s pain, and in that gap between one’s own subjective experience and the pain of another, there is room for all sorts of biases, misunderstandings, and even enmity.

This makes the focus on empathy a somewhat poor choice for social justice movements. If we assume that empathy can and ought to be distributed equally, or that a more just society is predicated on a more empathetic public sphere, we are likely to be disappointed. This is true even as social networking sites, viral videos, and mobile devices put us in such intense and intimate contact with the suffering of others. After all, these things can expose the brutality of police violence against people of color, but just as easily rally support for those same police.

Of course, I believe that empathy is a virtue. I try to be as empathetic as I can toward those close to me as well as those very distant or different from me. And I hope that my friends and family are similarly empathetic. Nonetheless, I am troubled by the politics of empathy, which privilege short-sighted resolutions that salve emotions but often do little to fix underlying problems. The shape of the gun control debate in the wake of the Orlando shooting revealed as much.

As has become customary after such tragedies, calls for federal gun control legislation once again rang out after Orlando. This time they inspired, at the very least, some significant acts of political theater. Just four days after the Orlando massacre, Senator Chris Murphy engaged in a 15-hour filibuster in the Senate to demand a gun control vote. A week later, Congressional Democrats staged a sit-in on the floor of the House, also demanding the passage of new legislation, in a move that was heavily tweeted and streamed via the Periscope app. These displays felt good; they showed us politicians who were just as fatigued and frustrated and frightened as we were, and who were moved enough to disrupt the normal way of doing things.

Perhaps these ultimately fruitless maneuvers were the first steps in a broader movement. But in the rush to demonstrate empathy for the victims and their families, to salve the public’s enflamed emotional state, the bills put forward to address America’s gun problem were terrible. The so-called “no fly no buy” legislation would merely graft gun control onto a federal “no fly” list that, as Alex Pareene wrote, has been “a civil rights disaster by every conceivable standard. It is secret, it disproportionately affects Arab-Americans, it is error-prone, there is no due process or effective recourse for people placed on the list, and it constantly and relentlessly expands.” So even as these proposed laws played off our extreme empathy for the victims of gun violence in Orlando and elsewhere, they required us to avoid empathizing with all of those who have been secretly and often unfairly denied basic rights in the name of the war on terror. This is sadly reminiscent of the way that Americans’ empathy for the thousands of innocent victims of the September 11 attacks blinded them to the suffering that we soon unleashed on hundreds of thousands of other innocent victims in Afghanistan and Iraq. In this way empathy can actually impede social justice.

We ought to remind ourselves, then, that justice doesn’t actually require empathy. It doesn’t rely on everyone developing a deeply felt understanding of what others are going through, and won’t necessarily be derailed by misunderstandings of this. There are likely many people who will never be empathetic toward disaster victims, or only do so in the most cursory and personally unchallenging ways. A safer and more just world will not be delivered through viral videos or Facebook posts. It requires hard work. It does require a movement that keeps going in the in-between times, when no new gun massacres are confronting us on television or on Twitter. It requires paying attention to the day-to-day handgun violence that results in less-spectacular but equally senseless losses. It depends on a movement of people who will keep the issue of gun control at the forefront of public consciousness once our mass-mediated empathy for the victims of gun violence has begun to fade. Many are already doing this. Many more are clearly needed. But if we truly want a safer and less violent future for this country, we can’t assume that the public’s mass-mediated empathy is going to make it happen on its own, or push legislators in the right direction. Instead, we need to keep working on ways to transform our empathy into action now, and in the months and years to come.

 

Timothy Recuber is a Visiting Assistant Professor of Communication at Hamilton College. His book, Consuming Catastrophe: Mass Culture in America’s Decade of Disaster, is out in Fall 2016 from Temple University Press. He tweets from @timr100.

Image credit: Francois Bester

I grew up watching a lot of Star Trek. It would be an understatement to say that the franchise was a big part of my life. Immediately after the last episode of Star Trek: Voyager I cut off my hair. I took Enterprise as a personal insult. If I saw J.J. Abrams I’d probably try to blind him with a strobe light while yelling, “How’s that lens flare working for you?!” I was feeling much more optimistic about the new TV series incarnation under Bryan Fuller until a couple of days ago when, in an interview with Collider, he told a reporter asking about casting decisions: “I’ve met with a few actors, and it’s an interesting process. There’s a few people that we like and we want to carry on what Star Trek does best, which is being progressive. So it’s fascinating to look at all of these roles through a colorblind prism and a gender-blind prism, so that’s exciting.” I try not to notice the color of flags but I’m pretty sure I’m seeing red ones.

After watching the whole rope-side interview and reading the transcript, it seems that when he says “___-blind prism” Fuller is talking about an inclusionary and diverse cast. He goes on to clarify: “I think the progressive audience that loves Star Trek will be happy that we’re continuing that tradition.” This is faint reassurance. What a “progressive audience” wants out of Star Trek may not necessarily be what a diverse audience wants. Well-meaning white progressive writers, producers, and directors will not do the same work as a production team with diverse representation. Put another way, it is fine and right that the cast be diverse, but who is behind the camera and holding the pen is just as (maybe even more) important here. The progressiveness that I heard in that interview sounded more like white people being generous: a working “for” rather than a working “with.”

As our last two essays from Apryl Williams and Jenny Davis on the newest season of Orange is the New Black (spoilers in both) have argued: having non-white characters—even a lot of them—does not get you anywhere near good race politics. What it usually delivers, in the words of Williams, are “[black people’s] stories but from a white perspective.” We can expect the same from Star Trek given that everyone working on the show (so far) is white:

Bryan Fuller
Alex Kurtzman
Heather Kadin
Nicholas Meyer

The utopia depicted in the Star Trek universe is largely the construction of the white men who wrote the majority of the episodes, and it is in danger of getting worse, not better. So far the new series has not even met the expectations set by its predecessors: the original series was one of the first science fiction television series with a woman on its writing staff, and in the later series black actors like Avery Brooks, Michael Dorn, and LeVar Burton all directed multiple episodes, many of which garnered the most accolades. Star Trek accomplished a lot with a stable of writers that woefully underrepresented the audiences it attracted and spoke to.

Moreover, if the Star Trek of 2016 is going to be as groundbreaking as it was half a century ago, it will have to achieve something beyond fair hiring practices. It would be a missed opportunity to hire a team that does not identify as white and male only to interrogate the same old utopia. The very foundations, the very premise, of the Star Trek universe need to shift and reveal the underlying mechanisms that make this particular utopia work.

The franchise must undergo a shift on the order of what happened in the mid-nineties, what Trek writers called “leaving the Roddenberry Box.” Former producer Michael Piller, in an unpublished manuscript about working on the show, describes the box like this:

Roddenberry was adamant that Twenty-Fourth Century man would evolve past the petty emotional turmoil that gets in the way of our happiness today. Well, as any writer will tell you, ‘emotional turmoil’, petty and otherwise, is at the core of any good drama. It creates conflict between characters. But Gene didn’t want conflict between our characters. “All the problems of mankind have been solved,” he said. “Earth is a paradise.”

Now, go write drama.

Piller goes on to say that he actually came to respect, and even became the final champion of, the Roddenberry Box until he left amid what he calls a “writers’ rebellion of sorts” after the second season of Voyager. The writers said they had to be let out of the box, and both series that were running at the time, Voyager and Deep Space Nine, got immensely better. They got better not because they left the box behind but because the writers put characters in competing positions that had to justify the box. Utopia had to be defended.

While The Next Generation is undoubtedly a good show, the later seasons of Deep Space Nine and Voyager have a richness that made them more thought-provoking. Both of these series are meditations on justifying received values with none of the prerequisite material support that makes those values work. How do post-scarcity ethics work when you’re stranded far away from home? What does it mean to uphold exploration as your highest ideal when you are at war? To what lengths will you go to defend utopia? These shows were, as far as I can remember, the last popular pieces of culture that asked us to think in utopian terms. We need that again, especially given how shaky everything feels as of late.

Ronald D. Moore, one of the most strident critics of the Roddenberry Box, eventually ended up being one of the most prominent writers and left an indelible mark on the series. If this new series is going to do what Star Trek should do—push boundaries about what is politically, culturally, and socially possible—then we will need a shift of similar proportions and scope. What have we been missing in this utopia? What remains stubbornly scarce in a post-scarcity utopia? In a world where you can transport across the planet in a fraction of a second, what keeps regional cultures alive? Is Sisko’s Creole Kitchen more like an Olive Garden in Tuscany—a chain restaurant mimicking the cuisine that surrounds it—or is that no longer a useful distinction? Is whatever we might call “Italian culture” today enacted solely in places like the Olive Garden in the 24th century? Might we be disgusted by a society that is utterly and completely free and happy? I want to see those stories.

Star Trek was never progressive; it was utopian. It makes sense, though, that Fuller would characterize Star Trek as both “progressive” and “color blind,” because that is, in retrospect, what Gene Roddenberry appears to have made. But at the time it was not merely progressive, it was utopian. Utopias, like the Roddenberry Box, don’t just let us display the final result of a certain kind of politics; they let us interrogate the very foundations of our politics. They let us bring ideas to their logical and illogical conclusions and, in so doing, give us a crucible in which to crush them up, mix them, and come up with brand new ideas. Utopian storytelling should not be blind to anything: it should meet race, class, gender, and any other social structure head on and complicate it beyond comprehension. What comes out the other side should be a little unnerving, exciting, and dangerous. Exactly what the future should be.

David is on Twitter: @da_banks

 

[Image: cover of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith, edited by Alethia Jones and Virginia Eubanks]

The recent tragedy in Orlando should remind us, among many other things, that building solidarity and compassion across multiple identities is both difficult and necessary. It is difficult because too few people are willing or able to understand how intersecting forms of oppression can leave their mark on one’s identity. It is necessary because those forms of oppression are at their most powerful when they divide people as they hold them down. Building a politics that recognizes the unique challenges of intersecting identities, while not stopping at advocating for the freedom of only one identity, is the sort of critique that reminds us that organizations like the Human Rights Campaign are a force for progressive change but ultimately an extremely limited one. I like to think of concepts like identity politics and intersectionality as inventions or technologies because it underscores how analytical concepts do work in the world. You can look at writing by radical collectives before and after these concepts were invented and see very different kinds of points being made and new approaches to activist work being tested. Thinking this way also helps us consider how, and to what degree, people use these concepts correctly or productively.

The Combahee River Collective started out as a chapter of the National Black Feminist Organization but eventually became a black feminist lesbian organization of its own, operating out of Boston in the second half of the 1970s. The term “identity politics” was first coined in their collective statement, released in 1977, which was consciously part of building a movement around intersecting forms of oppression. In the interview below, between author and black feminist Kimberly Springer and Combahee River Collective member Barbara Smith, we can see how identity politics and intersectionality were “invented” for a very particular purpose but then appropriated by the right wing to do the exact opposite kind of rhetorical work. After so much abuse, these terms get “watered down” even when they’re used by well-intentioned leftists.

Before turning to the interview I want to suggest that, while Smith and Springer don’t dwell too long on the right wing’s intentions in using terms like identity politics, I suspect the hijacking of these terms was an intentional act of sabotage (or technological appropriation [PDF]) and not a misunderstanding. The intentionality becomes more obvious when conservatives seem to “get” intersectionality better than liberals. For example, Melissa Gira Grant, writing in Pacific Standard about the recent spate of anti-trans bathroom bills, notes: “Same-sex marriage, straight sex outside marriage, and trans people — they see the sexual politics linking all these issues, a kind of conservative intersectionality liberals still struggle over.”

The following is excerpted from pages 53 and 54 in Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith (2014).

Barbara Smith: We meant to assert that it is legitimate to look at the elements of a combined identity that included affiliation or connection to several marginalized groups in this society. There is meaning in being not solely a person of color, not solely Black, not solely female, not solely lesbian, not solely working class or poor. There is a new constellation of meaning when those identities are combined. That’s what we were trying to say. … Black politics at the time, as defined by males, did not completely or sufficiently address the actual circumstances of real, live Black women. They just didn’t.

What the right wing meant by identity politics was that those people who are not white, not male, not straight, and not rich, it was not legitimate for them to assert anything, because they just wanted special privileges and special rights in a context of: “Enough rights already.”  … White males who are heterosexual and have class privileges—the system does work pretty well for them. There was a great resentment that these other people, these people they considered to be marginal and undeserving of the same kind of privileges and access, they were irritated that those people were asserting that, again, it made a difference whether you were an immigrant of Muslim heritage or religious beliefs, living in the United States, and maybe even queer at the same time. They didn’t want to hear about that. …

Kimberly Springer: But it seems like there are some on the left who might let the Right co-opt the term “identity politics” by talking about the differences that they think identity politics creates. It seems like the disparagement of identity politics is something that works against having unity within a movement. So, if people are organizing that there’s value in their own situation and their own identity, how does that work with the goal of solidarity?

Barbara Smith: That was another aspect of it. Because the watered-down version of identity politics was just what you described. Which was, “I’m an African American, working-class lesbian with a physical disability and those are the only things I’m concerned about. I’m not really interested in finding out about the struggles of Chicano farm workers to organize labor unions, because that doesn’t have anything to do with me.” The narrow, watered-down dilution of the most expansive meaning of the term “identity politics” was used by people as a way of isolating themselves, and not working in coalition, and not being concerned about overarching systems of institutionalized oppression. That was narrow.

 

 


In the 1960s there was a movement in engineering and the physical sciences toward building what the British economist E.F. Schumacher called “appropriate technology.” Appropriate technology is sort of what it sounds like: build things that are appropriate to the context in which they are meant to be deployed. If that sounds like common sense to you, then you are benefitting from a minor scientific revolution that occurred in the midst of incredible professional hubris. For quite a while (and still today, as I can personally attest from my time at a polytechnic institute) scientists and engineers thought that what works in an American lab will work anywhere in the world. Physics is physics no matter where you are, so the underlying mechanical properties of any given technology should work wherever it is situated. Appropriate technology pushed back against that concept, encouraging practitioners to think long and hard about the social, economic, political, environmental, and any other context an artifact might find itself in.

Such a broad critique is bound to be distorted and end up (somewhat ironically in this case) broken apart into several different flavors depending on who takes up the idea. In 1990, Kelvin Willoughby gave appropriate technology a book-length evaluation and critique, first noting that the fracturing of the term makes it a difficult idea to wrestle with:

The term “appropriate technology” has been taken up by a plethora of organizations, interest groups, individuals and schools of thought, and its usage has consequently been loose and confusing. It is used variously to refer to particular philosophical approaches to technology, to ideologies, to a political-economic critique, to social movements, to economic development strategies, to particular types of technical hardware, or even to anti-technology activities.

Willoughby’s working definition of appropriate technology ends up being “technology tailored to fit the psychosocial and biophysical context prevailing in a particular location and period.” Think of water pumps built for rural farms in Zimbabwe [PDF] or insulation made out of agricultural waste. These things seek to leverage, or work seamlessly within, existing resource flows and topological realities. The water pumps are simple, their parts replaceable and widely replicable, specifically because it is difficult to ship and sell finely machined parts in rural Zimbabwe. The insulation uses highly fibrous materials like coconuts and cork that are found in equatorial regions, which are in need of both exports and well-performing insulation to keep cool. These technologies seek to “fit” the context, but ultimately they mean to change it in the long run.

Fitting into a context so as to ultimately change it is an insidious kind of appropriate. Certainly there are things that need changing, and water pumps and insulation are good things, but there is a danger in focusing on the initial conditions when describing the benefits of a technology. A hundred years ago the industrializing cities of America underwent significant changes as part of the City Beautiful Movement. “City Beautiful advocates,” writes Catherine Tumber in the introduction to Small, Gritty, and Green,

mainly local elites joined in voluntary municipal art and civic improvement associations that served as informal planning boards–concerned themselves with the orderly grouping and placement of public buildings, railway stations, and parks, nurturing an exemplary vision of the urban public realm. Influenced by neoclassicism and the arts-and-crafts movement ideal that “what is most adapted to its purposes is most beautiful,” they were particularly attentive to appropriate fit and scale. Some of their handiwork remains in smaller cities across the land, since the market for new downtown development, which usually results in the demolition of older buildings, did not take shape as it did in large cities over the past few decades.

This pair of interwoven movements, City Beautiful and Arts and Crafts, is not unlike trends we see today. Proponents of the City Beautiful sought to literally and directly take control over the shape and character of their cities, and the Arts and Crafts movement was not unlike the maker movement of today: a response to the pre-made-ness of the world and a desire to find some semblance of control amid rapid change. All of these movements are well-intended (and individuals certainly do find joy in tinkering with the things around them), but ultimately we should understand that all of this is about control. Sometimes that control is over personal possessions: being able to repair a pair of headphones instead of buying new ones. Most of the time, though, having the leisure time to attend meetings and be a part of organizations is a good way for local elites to make big changes while appearing democratic. The working poor, in addition to being financially strapped, are often deprived of leisure time as well.

Moreover, the focus on artifacts, whether they are buildings or hand-made Etsy commodities, generally misses the underlying social factors that contribute to communities’ problems. You won’t end poverty by putting the poor in different buildings, and you won’t make a dent in alienation by focusing solely on the digital devices that embody the latest instantiation of capitalism. Wade Graham in Dream Cities is instructive here:

Ultimately, the revolutionary intent of the Arts and Crafts movement failed: handmade objects were too expensive for any but the wealthy, most of whom had gotten rich from the industrialization and standardization of production that the movement decried.

The Arts and Crafts reformers were looking at the wrong things: things. Just as [Arts and Crafts movement leader William] Morris wanted to believe in the power of better-designed and more humanely made objects to cure our social ills, … critics of the conditions of the new industrial city wanted to believe that the city itself was the culprit, not the economic conditions that drove its growth. They failed to look at themselves behind the curtain, like the Wizard of Oz, operating the machinery.

A similar, and perhaps larger, irony plays out today in Troy, New York, where I live. A century ago this city was at the forefront of industrialization, but it lost the game it helped make and is now resorting to the maker movement and handmade goods to rebuild what it has lost. There is a humongous makerspace downtown, and all the little shops carry jewelry and furniture made and recycled by local artists.

Circling back now to appropriate technology, I think it is fair to ask: what sort of context is Troy, and what technology would be appropriate to help it thrive? Is it more industry? That is what New York’s Governor Cuomo thinks is the right answer. Is it more handicrafts? That’s what the local leaders of the Business Improvement District seem to agree on. The literature on appropriate technology might agree with both of these assessments: that the appropriate organizational technology for a place like Troy is some sort of commodified handicraft industry, something that leverages all of the empty warehouses and latent creative talent in the region, along with its proximity to larger cities and markets. All of this sounds wrong to me, because who wants to play to the context of a boom-and-bust cycle of wealth inequality? Building appropriately for such a context seems like a good way to uphold the status quo and not get at the heart of the problem. Perhaps the technologies appropriate for places like Troy are ones that are not appropriate for their contexts at all. What exactly those are will require some political imagination.

David is on Twitter: @da_banks

 

Lede image by Brian Debus


“The founding practice of conspiratorial thinking,” writes Kathleen Stewart, “is the search for the missing plot.” When some piece of information is missing in our lives, whether it is the conversion ratio of cups to ounces or who shot JFK, there’s a good chance we’ll open up a browser window. And while most of us agree that there are eight ounces to every cup, far fewer (like, only 39 percent) think Lee Harvey Oswald acted alone. Many who study the subject point to the mediation of the killing (the Zapruder film, the televised interviews and discussions about the assassination afterward) as one of the key elements of the conspiracy theory’s success. One might conclude that mediation and documentation cannot help but provide fertile ground for conspiracy theory building.

Stewart goes so far as to say, “The internet was made for conspiracy theory: it is a conspiracy theory: one thing leads to another, always another link leading you deeper into no thing, no place…” Just like a conspiracy theory, you never get to the end of the Internet. Both are constantly unfolding with new information or a new arrangement of old facts. It is no surprise, then, that with the ever-increasing saturation of our lives by digital networks we are also awash in grotesque amalgamations of half-facts about vaccines, terrorist attacks, the birth and death of presidents, and the health of the planet. And, as the recently leaked documents about Facebook’s news operations demonstrate, it takes regular intervention to keep a network focused on professional reporting. Attention and truth-seeking are two very different animals.

The Internet might be a conspiracy theory, but given the kind, size, and diversity of today’s conspiracy theories it is also worth asking a follow-up question: what is the Internet a conspiracy about? Is it a theory about the sinister inclinations of a powerful cabal? Or is it a derogatory tale about a scapegoated minority? Can it be both or neither? Stewart was writing in 1999, before the web got Social, so she could not have known about the way 9/11 conspiracies would flourish on the web, and she may not have suspected that our presidential candidates would make frequent use of conspiratorial content to drum up popular support. Someone else writing in 1999 got it right, though. That someone was Joe Menosky, and he wrote one of the best episodes of Star Trek: Voyager: Season 6, Episode 9, “The Voyager Conspiracy.”

In “The Voyager Conspiracy,” Seven of Nine, a former Borg drone who can download and upload data like a computer but still has the reasoning capacities of a fleshy human brain, decides to upload the ship’s logs into her mind. As a literal cyborg, Seven can leverage the storage capacity of a computer with the analytic capabilities we (and even the Federation) have yet to build into an artificial intelligence. No time is spent explaining why anyone would want to do such a thing, but perhaps the reason we need no explanation is the same reason anyone thought Siri would be a good idea.

The initial results of her experiment are great—Seven correctly deduces that photonic fleas have disrupted power flow to the sensor grid—but things go awry from there. She quickly starts making connections across disparate events that are “highly speculative” (to borrow a phrase from the Vulcan security officer Tuvok) but difficult to disprove. Seven accuses Captain Janeway of plotting to send a Federation invasion force into the Delta Quadrant, but not before accusing First Officer Chakotay of plotting to mount a similar invasion against the Federation. The episode crescendos with Seven attempting to flee the ship out of fear that she is the subject of a third elaborate conspiracy to use her as a science experiment.

In the final act of the episode Seven reports that the technology “functioned within expected parameters. Unfortunately, I did not.” It is a simple, elegant way to describe humans’ relationship to their creations. Like a wish granted by a monkey’s paw, we overestimate our ability to handle technologies’ fulfillment of our desires. Seven starts by solving problems, then starts to question the motivations of powerful people, and finally turns inward, convinced that everyone is out to get her. The first action is useful for a hierarchical organization like a ship. The first two actions are useful in a democracy. The third and final stage is definitely anti-social, perhaps even pathological, and so it is easy to dismiss this self-centered perspective as anathema to any sort of political or social organization.

Psychologists have studied conspiracy for a long time and have come to a similar conclusion. Daniel Jolley [PDF] cites one study showing that being exposed to one conspiracy theory made respondents more susceptible to believing subsequent conspiracy theories while leaving them “unaware of the change in their conspiracy endorsement.” After being exposed to conspiracy theory material, participants in one study “were less likely to engage with politics, relative to those who were exposed to information refuting conspiracy theories. This effect was shown to be caused by an increase in feelings of political powerlessness.” Jolley also cites multiple studies showing that no particular demographic seems to “reliably predict conspiracy beliefs,” which he interprets to mean that “we are all susceptible to conspiracy theories, which may subsequently help explain why conspiracy theories have flourished.” Conspiracy theories appear to be viral apathy.

Political apathy, however, is not the same as total social isolation. Michael Wood, after citing the same Stewart essay quoted above, notes that “research has shown that people who once were afraid to express their opinions openly are now free to gather with like-minded individuals on forums, blogs, and social media, developing opinion-based communities of a breadth and depth never seen before.” Conspiracy in the age of the Internet, according to Wood, has become increasingly vague because the power to debunk has risen alongside the power to question. Instead of describing a plot with an intended goal (e.g., the U.S. faked the moon landing to win the space race), conspiracy theories describe vague yet menacing government agencies. The idea that the bombings at the Boston Marathon and the Sandy Hook shooting were both false flag operations populated by crisis actors is a good example of post-internet conspiracy. These are specific stories about vague anxieties: that the government and other powerful organizations are antagonistic to your existence.

Jolley and Wood are missing something, though. Actually, quite a few things. Perhaps it is true that everyone is equally capable of believing a conspiracy theory, but I suspect there are lots of structural concerns (race, class, and gender positionality, just to name a few) that factor into which conspiracy any given person believes in or gets introduced to. For example, Pasek et al. [PDF] show that conservative-leaning white Americans are overwhelmingly more likely to believe that Barack Obama is a secret Kenyan Muslim. Reporting has also shown that it is largely well-educated, affluent parents who believe vaccines cause autism. Conspiracy theories are not, as Stewart claims, “all over the map: … right wing one moment and left-wing the next.” Or at least they aren’t anymore. As conspiracy theories have come into the mainstream they have made real changes to the world: preventable diseases, both biological and ideological, have broken out, and they have a very particular political character.

It makes sense, then, that recent political candidates have been greatly rewarded not only for citing conspiracy theories, but for weaving them into a larger narrative of inequality brought about by elites beholden to foreign interests. The resulting story may seem ideologically confusing (rarely in American politics do we see a candidate defend social safety net programs while deriding illegal immigration), but it is populism, pure and simple.

Again, science fiction is instructive here. 9/11 is often treated as an inflection point for so many socio-political trends, and conspiracy theories are no different. The recent X-Files miniseries shows how the conspiracy theory terrain has shifted over the last 30 years. Whereas the original X-Files series (which ran from 1993 to 2002) cast the conspiracy theorist as a lone searcher for the truth, the reboot must account for a robust conspiratorial culture that has blossomed within populist political circles. “Since 9/11,” Skinner says in the first episode of the 10th season, “this country has taken a big turn in a very strange direction.”

When the original series launched, public trust in government was at an all-time low of 18 percent. It went off the air amidst a potent combination of contract disputes and skyrocketing post-9/11 nationalism, but now we have come back to our senses and trust has fallen even lower. Except this new miniseries must deal with a very different conspiratorial terrain, one that is far more conservative and lucrative, one where the military, small businesses, and the police are the only institutions capable of garnering trust from over half of the population. Conspiracy among government elites is so expected and so widely believed that far-right media personalities not only have a big audience, they can build a successful business off of it. (Not to mention successful primary campaigns.)

The new X-Files perfectly mirrors Americans’ new relationship with political prominence, conspiracy, and governmental over-reach: Scully and Mulder find themselves making uneasy alliances in which left and right matter less than agitating a populist revolt against elites. The plots of the first and last episodes revolve around a conservative millionaire YouTube personality who wants to help them reveal The Truth, but neither of the agents trusts his motivations any more than they trust the elites within the FBI. It is a bargain many young leftists are weighing themselves: compromise one’s deepest-held values and accept the offer to be brought back into the elite’s fold, or join with the conservative populist with whom you share a common enemy?

In considering the power of the Internet to spread conspiracy theories, we have to take into account who is best poised to take advantage of widespread doubt. We have to remember that Seven pointed a finger at authority first, but ultimately made it about herself. Such self-centered behavior can feed reactionary, anti-social tendencies that map easily onto open-ended, post-internet conspiracy theories. Such vague anxieties are the proving ground for strong-man political campaigns. At the heart of conspiracy theories are disparities of power, and any powerful person who promises to act on behalf of the people who “know the truth” can build an immensely successful campaign to extend that power. If the internet is a conspiracy theory, it is a mass of tangled, obscured lines of power that puts the individual at the heart of the web. It is a theory that distorts the material relations of authority and constructs a truth that, ultimately, implicates everyone and no one. That truth is out there, and to resist it is futile.

David is on Twitter: @da_banks