Image Credit: NASA

As we approach the tenth anniversary of Hurricane Katrina’s assault on New Orleans, it is almost impossible not to draw connections between Katrina and the Black Lives Matter movement. Just as the storm exposed long-standing patterns of institutional neglect and structural racism that had typically been overlooked by the white American mainstream, so too have the uprisings across the country against police brutality drawn renewed attention to institutionalized racism in America this year. As Jamelle Bouie put it, “Black collective memory of Hurricane Katrina, as much as anything else, informs the present movement against police violence, ‘Black Lives Matter.’”

But what does such memory look like? What tools are available online besides simply Googling old news articles? What history do we have at our fingertips, beyond returning to the heavily criticized mainstream media coverage, which at the time was limited at best and, at worst, trafficked in harmful racial stereotypes? For instance, in one heavily publicized example, photos of African American storm survivors were captioned as showing “looting,” when nearly identical images of white survivors were captioned as simply “finding food.” This, too, is part of the memory of Katrina, but it is a part in which survivors were not allowed to speak for themselves.

One alternative resource at our disposal is the Hurricane Digital Memory Bank, a site devoted to “collecting and preserving the stories of Katrina” and of Hurricane Rita as well. Created in 2005 by the Roy Rosenzweig Center for History and New Media at George Mason University and the University of New Orleans, it is a digital archive where members of the public have uploaded their stories and images of the storm. I have critiqued these sorts of digital archives in the past for privileging a kind of cathartic, inner-directed self-help, but in the present moment this one seems particularly valuable.

A decade on from the devastating collapse of the levees, the Hurricane Digital Memory Bank provides vivid examples of the storm’s impact on particular individuals. Of course, some would argue that returning to these stories is simply gawking: disaster porn, ten years removed. But the fact that the authors of these memories went online in order to preserve and share them, to me, suggests otherwise. Refusing to engage with others’ traumas, when they have shown the bravery to write them publicly, risks transmuting one’s own sensitivities into callousness. Such a gesture is also a cruel reenactment of the American public’s initial loss of interest in Katrina, when news readers, politicians, and pundits began complaining of “Katrina fatigue” just a few months after the storm hit. In this context, the need for remembrance is clear.

Here, then, are a few examples of the memories collected on the site. We begin with an excerpt from one of the most heart-wrenching entries in the memory bank:

As the sun was going down on August 29, 2005, my 95-year-old, invalid mother died in my arms as we tried to escape the rising flood waters coming into our house by climbing the fold-down stairs into the attic of our house. The next morning, August 30, at about 9:30 AM, I had to leave her body behind in the bathroom of the house so I could swim to a neighbor’s house to let him and his wife know that I was okay. Men in a flatboat who came to rescue me and other neighbors would not let me return to my house to rescue my two dogs… Without going into exhaustive detail, I will simply say that my past life died that day with Mother and my dogs. I now wish to devote my life to living my life to be a blessing to others.

A younger storm victim shared the pain of leaving her parents behind during the evacuation:

Monday the storm hit and my family and I were glued to the television in the church mission we were staying in. When the news anchor said that all of New Orleans was underwater I started to cry because my parents were still there and I couldn’t get through to them on the house or cellphones. One of my uncles hugged me and told me that my father was a survivor and not to worry… I spent the next few days trying to get in touch with my parents and watching the horror unfolding on the news and wondering where my older siblings were and if they were safe. One of my older sisters lived in the East and the other in the lower 9. Finally I talked to my parents and I started to cry and thank God for protecting them.

And users like this one merely commented on what it was like to view the storm from a distance:

I was watching the news on how a destructive hurricane was headed towards New Orleans and thinking how are the people with who are barely getting by going to get out to safety? And are there enough buses to carry everyone out? When the storm hit and cleared I thought there would be some type immediate help just like the tsunami disaster, but when I saw the total disrespect towards people of color, it kind of reminded me that as far as we have made progress we have digressed just as much… Then I thought maybe this rebuilding process is where Bush would empathize and start rebuilding the city of New Orleans, but what happened? The city of who used to people mostly black is now just like every other city in America it is now mostly white… This really showed me how worthless our lives [are] to America.

Those who directly experienced Katrina could surely never forget even if they wanted to. But for those of us who watched from a distance, in relative safety, the question of what it means to remember the storm is a slightly more ambiguous matter. If remembrance simply means “to learn from” Katrina, then those lessons ought to clearly translate into direct support for the Black Lives Matter movement and other movements that struggle against the forces of structural and institutional racism.

But perhaps remembering means something more than learning. Empathy, compassion, and care are, after all, also important takeaways from remembrance projects. So perhaps it means spending time with the memories that Katrina’s victims have shared with us. Making use of the tools that digital culture provides to try to understand the suffering of others. Sitting with those memories, in sympathy and in discomfort. Not because we can ever fully understand what it is like to survive such a trauma. And not because I, as a white person, will ever completely understand the physical and psychic burdens imposed by racism. But because knowing the inscrutability of others’ pain, the ever-present distance between others and ourselves, is precisely what impels us to try to understand. It is, in a way, the least we can do for those who have suffered: to take their pain seriously.

The tenth anniversary of Hurricane Katrina has already begun to generate a significant amount of news coverage. No doubt there is more to come. But such commemoration in the mainstream media is likely to peak with this milestone, and decline precipitously thereafter. Indeed, the temptation to forget Katrina is great for an American mainstream that is deeply uncomfortable with the deeply rooted racism that the storm laid bare.

This is all the more reason to return to the few online spaces that have preserved these memories exactly as they were recorded by those who lived them. To browse through these remembrances—some painful, many mundane—is to receive a fleeting glimpse of the full social and psychological impact of the storm, and simultaneously, to grasp the inadequacy of our attempts to fully recall it.

In the end, memory can only do so much, and social movements are necessarily rooted in the present. But the injustices of the past, the trauma of survivors, and the losses of the victims are reason enough to remember.

Timothy Recuber is a sociologist who studies how American media and culture respond to crisis and distress. His work has been published in journals such as New Media and Society, The American Behavioral Scientist, Space and Culture, Contexts, and Research Ethics.

@timr100
timrecuber.com

"Lone Hacker in Warehouse" by Brian Klug

The hacker label is, as Foucault might say, a “dubious unity.” The single phrase can barely contain its constituent multitude. Even if every person who self-identifies as a hacker held a stable definition, the media would warp, expand, and misunderstand that definition to include all sorts of other identities, tactics, and personas. We cannot know what is in the hearts and minds of every person who feels an allegiance to the hacker brand, but this past week’s Ashley Madison hack, in which deeply private information was leaked supposedly in the name of consumer protection, forces a conversation about the politics of hacking. Are hackers fundamentally conservative, if not in intention, then in deed?

Such a question requires a working definition of hackers: one that, at the very least, identifies who counts as a hacker and what counts as hacking. I can’t think of a better source to turn to than Gabriella Coleman’s Hacker, Hoaxer, Whistleblower, Spy, not only because it is a book-length meditation on what it means to call yourself a hacker, but because her own work is deeply enmeshed in the boundary policing of hackerdom itself. This merits a fairly long block quote taken from her chapter on LulzSec, though the quote speaks mainly to her initial interaction with the information security community. It starts on page 256 if you want to read around it, which I recommend:

Hacking, they [members of the information security (InfoSec) community] would tell me, is digital trespass: breaking into a system, owning it hard, doing what you want with it. I had recently published my book on free software “hackers,” Coding Freedom: The Ethics and Aesthetics of Hacking, and it seemed that these InfoSec word warriors thought I had a narrow understanding of the term, one that omitted their world. But, my understanding of the term is much more nuanced than they realized. My definition includes free software programmers, people who make things, and also people who compromise systems—but that doesn’t mean they have to all be talked about at the same time. My first book was narrowly focused.

Interestingly, while each microcommunity claims the moniker “hacker,” some always refute the attempts of other microcommunities to claim the term. So when InfoSec people started yelling at me that free software “hackers” weren’t “hackers,” I wasn’t surprised.

Policing of the term “hacker” could be read as a kind of proxy war over what hackers should do. That is, should the gravitational pull of the desirable (to some) hacker brand be used in service of communitarians dedicated to building free software or should it be a banner for libertarian free speech warriors (and everything in between)? For the purposes of this essay I’m going to focus on the assortment of people and ideas that orbit a Guy Fawkes mask. That is, those people who hack to break, “own hard”, and trespass.

In light of this definition and its many qualifications, we should be asking three questions to determine hackers’ political ambitions: What sorts of systems are generally broken by hackers? To what ends are they broken? What arises from the broken code? In the Ashley Madison case, hackers were upset that the service was full of fake female profiles and that it charged $19.99 to deactivate accounts. There also seems to be a certain kind of desire to see cheaters exposed. Past high-profile hacks include the Sony leak, done in protest of poor security surrounding users’ account information, and the Stratfor hack, which mainly served as retribution for years of corporate espionage.

The motivation for the Ashley Madison hack seems, at best, confused and contradictory. If your stated aim is the well-being of consumers, then threatening to expose their information seems like a bad bargaining chip. It’s like threatening Shell Oil by holding a gun to the head of a polar bear. The same could be said about the Sony hack and, at first, the blowback said just as much. Here is part of Coleman’s recounting of the immediate aftermath:

Very quickly, the operation went south. DDoSing Sony’s PlayStation Network (PSN) did not earn Anonymous any new friends, only the ire of gamers who foamed with vitriol at being deprived of their source of distraction. Amidst the DDoSing, a splinter group calling itself “SonyRecon” formed to dox Sony executives. This move proved controversial among Anonymous activists and their broader support network.

Spurred by the operation’s immediate unpopularity, Anonymous released the following statement: “We realized that targeting the PSN is not a good idea. We have therefore temporarily suspended our action until a method is found that will not severely impact Sony’s customers.” They hoped that this would put out the fire.

Just to recap: It is okay to destroy something that lots of people pay for and rely on to entertain themselves, and it is okay to release sensitive information about millions of people, but doxxing millionaires is “controversial.” This is not an isolated case either. Even the Stratfor hack, an undeniably anti-corporate act (it included stealing emails, donating to the Manning support fund with stolen corporate credit cards, and replacing the company’s website with a manifesto about communal living brought about through armed insurrection), never treated the executives of Stratfor the way hackers might treat a kid who owned a PlayStation. Unless a CEO says something brash about Anonymous itself (as was the case with HBGary Federal), hackers seem to hit customers hard, but treat executives about as harshly as a retiree writing an angry letter about sub-par cable programming.

There’s no doubt that the operations that led to security breaches at corporate espionage firms like Stratfor and HBGary Federal required bravery and skill, and contained within them a revolutionary spirit. But for all of the ostensible machismo and trickster joviality, there is an underlying respect for the security state. If any motivation (outside of “the lulz,” which is more of a means than an end when you think about it) can be attributed to hackers, it is the following: more and better security, deference to millionaires, and the sacrifice of immoral people for the future common good. That sounds an awful lot like Republicans.

To be really clear here: even if every person who has ever called themselves a hacker hated millionaires and believed in a borderless utopia, the effects of their actions produce a more battle-hardened police state. Of course, many hackers would say that security is meant to keep governments and corporations out of the business of individual citizens, and that the release of sensitive information would have happened anyway given the poor security measures they are protesting in the first place. Such defenses actually bring to light another conservative attribute: the fear of a looming and malignant outsider waiting to prey on hapless victims who don’t seem to appreciate how hard it is to keep everyone safe.

How does the hacker, despite frequently stated anti-authoritarian leanings, end up fighting for things like traditional marriage arrangements and national security? The easy answer is that many hackers are, in fact, socially conservative libertarians who actually harbor animosity for adulterers regardless of reason or context. This might very well explain some hackers, but what about the avowed anarchists? What about the people who work in solidarity with Kurdish social ecologists or Tunisians fighting for free elections? How are these people unwittingly acting in a conservative way?

Back in April I wrote, “Instead of handing over our trust to organizations like professional associations, governments, or corporations, hackers would have us move that trust to algorithms, protocols, and block chains.” I argued that the rationalization for encryption and automation (humans cannot be trusted, so we must replace them with code) is no different from Progressive Era activists’ insistence that essential services like municipal government should be depoliticized and turned over to professionals and bureaucrats instead of elected politicians. Bureaucracy, like code, is supposed to act predictably and equally for everyone. The technologist’s solutions are no different from the reforms that gave us city management experts. (Perhaps this is why hackers secretly love CEOs: the CEO is at once the very top management expert and the person who totally owned the system. The fact that CEOs are now the system is the only reason their businesses must be hacked.)

This brings up a fundamental paradox: the hacker and the bureaucrat are polar opposites in terms of means (the former is the trickster incarnate while the latter plods along as predictably as humanly possible), but they advocate for similar solutions to difficult problems. Even though the bureaucrat seeks and fosters the smooth operation of a system, while hackers are motivated by a goal and animated by chaotic destruction, both share a fundamental distrust of humans as political entities. Hackers may embody the opposite of bureaucracy, but they ultimately desire the same thing as bureaucrats: technologies that obviate trust.

In the final chapter of The Utopia of Rules, David Graeber concludes that we all secretly love bureaucracy because it promises stability and predictability in an otherwise uncertain world. While play can be creative and generative, we know it can also be destructive and disruptive. We cannot build complex systems like universal healthcare administration or nuclear missile launch systems atop ever-shifting human desire. Instead we have to make bureaucracies as Weber described them: hierarchical organizations with written rules, staffed by trained (but ultimately and imminently replaceable) professionals. Bureaucracies date back to Mesopotamia but remain the least worst organizational solution we’ve come up with thus far for tackling big projects. And while they have let us accomplish a great deal, bureaucracies are still incredibly alienating, frustrating, and boring for everyone who interacts with them. The perfectly functioning bureaucracy has never existed. Incompetence, nepotism, and all sorts of human foibles (and values) get in the way of true and complete bureaucratic predictability. It is no surprise, then, that political actors get lots of traction by hating on bureaucracy.

The right, Graeber argues, came up with a critique of bureaucracy early on and has benefited greatly from it. It pegs public organizations as bureaucratic and private entities as dynamic problem-solvers, even though private firms are just as bureaucratic as governments. This has let it create bureaucracies with impunity: an ever-increasing military-industrial complex and oligarchic state wrapped in the glitzy paper of dynamism. The left, he argues, has yet to come up with an equally rhetorically effective critique of bureaucracy. I disagree. Hacking has risen as the heir apparent for people who are as critical of corporations as they are of governments. The problem is that while the rhetoric is provocative, hackers are still as bureaucracy-loving as the Progressive Era reformers mentioned above. There may be a fix, though.

The hacker critique of bureaucracy is simple: states and corporations are greedy and careless, and you have to threaten them with destruction in order for them to behave. Ultimately we should replace bureaucracies (which try to make humans emulate robots) with actual robots and algorithms, invented through the creative destruction of existing institutions. It is a tantalizing argument, but right now it fails in practice because (and here I go back to agreeing with Graeber) it is still far too easy and cheap to exploit workers. All the free software created by volunteer labor (and allowed to be used by corporations) cannot overcome the power corporations and states wield in steering R&D money toward profit-seeking behavior. When you attack a company for not safeguarding sensitive information, the result is more security, not less possibility of theft, let alone a world where theft is unnecessary because everyone has what they need.

The hacker, for all its drawbacks, is still a helpful post-capitalist imaginary. That imaginary is instructive of a desirable future, as all utopian thinking is, but in the present historical moment the behavior of the hacker has conservative and authoritarian results. This happens in spite of all the grandiose claims to libertarianism and trickster unpredictability, because underlying all of these actions is a deep-seated distrust of humans’ ability to work together at a grand scale. This cynicism manifests in a willingness to expose people engaging in all sorts of non-monogamous relationships, and in a desire to go even further than the early champions of bureaucracy by inventing things that obviate trust rather than require it.

Perhaps then, the way to bring practice in line with rhetoric is to (counter-intuitively) expand the common definition of hacker. The hacker imaginary should include, as my fellow editor Jenny Davis argued last week, social movements like Black Lives Matter. Davis argues that such a redefinition will also require a move away from anonymity and towards speaking from a situated identity:

Because of this insistence upon centrality, Black Lives Matter refuses to be Anonymous. They do not disrupt the system quietly. The hack is their presence. The hack is their voices. The hack is their faces. It’s not about discourse or even policy, but an insistence upon visibility; a refusal to remain unseen.

This shift in tactics from invisibility to obvious visibility does two things. First, as Davis notes, it forces power centers that otherwise benefit from quiet dominance to admit and show the violence that is quietly wielded every day. Such blatant violence can and often does push otherwise “moderate” people to adopt an antagonistic stance against oppression. Second, it breaks something fundamental that bureaucracies need to function properly: standardized objects. By refusing to act anonymously (and thus uniformly), BLM hackers make it difficult for bureaucracies to continue doing their work. By refusing to treat people (like Bernie Sanders) as equals at the moment of protest, they display how they have been treated historically as less than equals.

More than anything, the hacker has to pick a side. He or she has to come into the light of politics and, instead of hiding in the shadows of unmarked categories, be an ostentatious and confusing thing willing to make alliances with people based on their living histories, rather than rallying around a single bad apple worthy of defacement. Hackers have long benefited from propaganda by the deed: winning adherents by acting as if their political agenda were already hegemonic. That should continue, but with more clarity and resolution. Future hackers would do well to put aside masks, trustless encryption technologies, and unpredictability, and instead act ostentatiously, with no regard for boundaries, and do so predictably and repeatedly until something breaks. That something, with enough tenacity and time, could very well be capitalism.

Content Note: This post deals with trigger warnings, the belittling of people who ask for them, and embarrassment in the classroom.

Image Credit: Alan Levine

I have been lucky enough to get professional advice from some truly wonderful people, and many of them have told me that the key to a productive and fulfilling academic exchange of ideas is to give others the benefit of the doubt and to read their work generously. Assume that everyone wants to make the world a better place through the sharing of their ideas, and that if you disagree with them it is because you more or less disagree about what that better place looks like. I am going to continue working on that, but today I am going to gift myself one last moment of truly believing that there are people out there who want to make life harder for millions of people.

If you shared that last Atlantic article about trigger warnings in college classrooms, and you have nothing to do with higher education, I think you are a hateful person.

At the very least, if I were to give you the benefit of the doubt (which you do not deserve), I might say that you are incredibly misinformed. That you do not understand what a trigger warning is, or what it is supposed to do, or in what contexts it is deployed. But then why suddenly get interested in a thing you know nothing about? When was the last time you were in a classroom? Was there ever a moment when you were in a classroom and someone seemingly inexplicably got up and ran out screaming and crying, never to be seen again? If that did in fact happen, and your first thought was, what a weak and childish person, then what the fuck is wrong with you? Where did your compassion go? Was it ever there?

If you shared that last Atlantic article about trigger warnings in college classrooms, and you are an educator, I think you are bad at your job.

Perhaps you do not understand the administrative tactics that go on above your head and behind your back, the ones that order your daily working life and use parents’ and students’ complaints to strengthen administrators’ control over what is taught and who does the teaching. Maybe you have some profound disdain for your students that you keep locked away in tiny snide comments in your syllabus: the sorts of denigrating and smarmy jokes that the alpha males wearing SpongeBob baseball caps find really, really funny, and whose hungover laughter gives you a moment of vicarious youth. If that is the case, please go sell out, or get a job at some evangelical Christian college, because you belong with your retrograde brethren on the wrong side of history.

Even if you do still love your students (and even when they frustrate the ever-loving shit out of you, you must love them, because that is your job, and trust me, you are at your best when you love them in spite of themselves), if you share that article as a means to shake your fist at some imagined other set of students that you are sure you’ll get this fall (for sure this time, you can feel it), then you are still bad at your job, because you are scaring away prospective graduate students. Stop scaring away good people who care about other people and who do not want to join a profession that drips with disdain for anyone who tries to take the slightest bit of control over their lives. We need good people teaching and researching.

If you shared that last Atlantic article about trigger warnings in college classrooms, and you are an editor of a major publication, I think you are both bad at your job and a hateful person.

Not only is ARE THE KIDS TOO SENSITIVE?!?!? the most warmed-over hot take on the planet (left under the heating lamps too long, and the French fries are soggy), it makes anyone who touches these stories look like a horse’s ass. The interviewees come off as bigots and your writers come off as fedora-wearing shitheels who say things like “I only sleep with people who read Philip Roth.” It makes your entire publication look like The Drudge Report.

At this point you might be saying, “Wow, that was a lot of cursing and personal attacks that do not refute the Atlantic article’s underlying arguments about how young people are maintaining such impenetrable filter bubbles and echo chambers that they are not becoming well-rounded citizens. Rather than confront and consider uncomfortable ideas, they tend to run away from them.”

To which I might say, “Go fuck yourself.”

Not only are black and brown children still getting the “here’s how to not be murdered by the cops” talk before they learn about sex; not only are kids bold and beautiful enough to fight the sorts of complex oppressions that their parents are still hiding from in their lifelong retreats to the cul-de-sacs of flyover country; not only are they fighting harder and paying more than ever to even get into these college classrooms in the first place; the kids today are expert curators of information. They have been doing it since before they got to pre-K. They know more than you about how to handle complex ideas and under what circumstances they should be confronted.

What in the world makes you think that a professor’s forgotten and neglected 15-year-old syllabus about sex in Western literature is more worthy of total, uncompromising acceptance than the meticulously arranged collection of 500 different Tumblrs about intersectional feminism that his students comb through on a daily basis? Because here’s a secret: they’re asking for and utilizing trigger warnings in those too, but it isn’t like everyone is sharing a bunch of posts that they do not read. Okay, that was not a secret. So what the fuck is your excuse?

I should also mention that this blog does not always issue content warnings or trigger warnings at the top of the post. We leave that up to each author and some of us use them and some of us do not. I mostly do not add them. That is because if there is something worth warning about in the body of the post I generally try to make it clear in the title or the first few paragraphs that such a topic will come up. I’ve been trying to work it in more naturally but maybe that is failing. I would appreciate some criticism in that regard.

Trigger warnings are not requests to leave the conversation; they are demands that one be given the chance to prepare for difficult topics. Topics that are difficult, usually, because so many people have experienced them first hand. That is what makes articles like this latest one in the Atlantic so truly and completely disgusting: that adults who fear a generation of coddled children are in fact the coddled few who are spewing uninformed gibberish at young adults who have already experienced so much in their lives.

If you shared that last Atlantic article about trigger warnings in college classrooms, and you are a student in those college classrooms, then you still have a lot to learn. And that is okay.

You are a work in progress. We are all works in progress but you are still more work than progress. But again, that is just fine. It is the job of teachers and professors to help you along and, in tones far gentler and more productive than the one I struck above, give you the tools to become the best person you can be. Those tools are varied and take a lifetime to master but you can be damn sure that within a few years you’ll be better than these assholes that write these articles.

David is on twitter and tumblr.


[Edit] Also, read Sara Ahmed’s excellent essay Against Students:

The “problem student” is a constellation of related figures: the consuming student, the censoring student, the over-sensitive student, and the complaining student. By considering how these figures are related we can explore connections that are being made through them, connections between, for example, neoliberalism in higher education, a concern with safe spaces, and the struggle against sexual harassment. These connections are being made without being explicitly articulated. We need to make these connections explicit in order to challenge them. This is what “against students” is really about.

Would if this were true?

The Facebook newsfeed is the subject of a lot of criticism, and rightly so. Not only does it impose an echo chamber on your digitally mediated existence, the company constantly tries to convince users that it is user behavior (not its secret algorithm) that creates our personalized spin zones. But then there are moments when, for one reason or another, something super racist or misogynistic from someone you know comes across your newsfeed, and you have to decide whether or not to respond. If you do, and maybe get into a little back-and-forth, Facebook does a weird thing: that person starts showing up in your newsfeed a lot more.

This happened to me recently and it has me thinking about the role of the Facebook newsfeed in interpersonal instantiations of systematic oppression. Facebook’s newsfeed, specially formulated to increase engagement by presenting the user with content that they have engaged with in the past, at once encourages white allyship against oppression and inflicts a kind of violence on women and people of color. The same algorithmic action can produce both consequences depending on the user.

For the white and cis-male user, the constant reminder that you have some social connection to a racist person might encourage you (or at least afford you the opportunity) to take that person to task. After all, white allyship is something you do, not a title that you put in your Twitter bio. There are, of course, lots of other (and better) ways to be an ally, but offering some loving criticism to acquaintances and loved ones can make positive change over time. It is almost, for a moment, as if Facebook has some sort of anti-racist feature: something that puts white men in the position to do the heavy lifting for once and confront intersecting forms of oppression, instead of leaving it up to the oppressed to also do the educating.

The same algorithmic tendency to continually show those you have interacted with, as if all intense bouts of interaction were signs of a desire for more interaction, can also instigate and propagate those same sorts of oppression. An argument can turn into a constant invitation for harassment because, just as you see more of your racist acquaintance, so too do they see you. This could lead to more baiting for arguments and more harassment. But even if this does not happen, the incessant racist memes that now show up in your timeline are themselves psychically exhausting. This algorithmic triggering (the automated and incessant display of disturbing content) is especially insidious because it is inflicted on users who stood up to hateful content in the first place.

This agnosticism towards content in favor of “engagement” for its own sake is remarkably flat-footed given all the credit we give Facebook for being able to divine intimate details and compose them into a frightening-as-it-is-obscured “graph” of our performed self. If we wanted to keep the former instance (encouraging ally behavior) but reduce the possibility of the latter (algorithmic triggering) what might we request? How can something like Facebook be sensitive to issues of race, class, and gender?

One option might be to use some sort of image-recognition technology that gives the user the option to unhide a hateful image rather than see it by default. If Facebook can detect my face, it can certainly detect the rebel flag or even words printed onto image macros. Yik Yak, for example, does not allow photos of human faces and enforces that rule through face-detection technology. If your photo has a face in it, the app doesn’t let you post it. If a social media company can effectively weather free-speech extremists’ outrage over that, it might be able to impose some mild palliatives on potentially upsetting content.

The problem with these interventions is that they require Facebook to collect and act on even more information. They ask that Facebook redouble its efforts to collect and analyze data that determines race and ethnicity. They ask it to study photos and proactively show and hide them. They also fall into some of the issues I’ve raised in the past about trying to write broad laws to eliminate very specific kinds of behavior. That seems to be the wrong direction.

The Occam’s razor solution is to have Facebook adopt some sort of anti-racist and/or anti-sexist policy where it pits those people who have demonstrated anti-racist tendencies against those with more retrograde viewpoints. The algorithm could be tweaked so that white people who have espoused anti-racist sentiments in the past are paired with their “friends” who think the confederate flag is about heritage or whatever. Men who have shared content with a feminist perspective could be paired with men who wear fedoras. All the while, the algorithm would control and modulate who sees whom so that the burden of teaching and consciousness-raising isn’t unevenly distributed to those who bear the brunt of hate.

This actively imposed context collapse is admittedly improbable (I know there’s no chance that Facebook would decide to do this) but thinking through the implementation of such a policy is a good thought experiment. It highlights the embedded politics of Facebook, a platform that would rather have us be happy and sharing than critical and counterposed. Engagement with brands not only requires active contributions to the site, but positive feelings that can be harnessed for the benefit of brands. Deeper still, social media as an industry is deeply committed to the “view from nowhere,” where hosts of conversation are only allowed to intervene in the most egregious of circumstances, almost always as a reaction to a user complaint, and never as part of a larger political orientation.

Even the boardroom feminism of their own Sheryl Sandberg is largely absent. As far as anyone can tell, there is nothing in the Facebook algorithm that encourages women to “lean in” in Facebook-hosted conversations. Such a technological intervention (and we could have fun thinking about how to design such a thing) could have done just as much, if not more, than the selling of a book and a few TED talks. For example, imagine if Facebook suddenly put the posts and comments of self-identified women at the top and buried men’s. Maybe for just a few days.

Perhaps we should simply cherish and make the most of the moments when the algorithms in our lives start inadvertently working towards the betterment of society. I’m going to keep calling out that racist person on Facebook and while that certainly doesn’t qualify me for a reward or really even a pat on the back, it is (for me) something that doesn’t take a lot of time or effort and might possibly make the world (or that one person) marginally better.

I do not think anyone, at the present moment, is suited to offer a viable proposal for leveraging the Facebook algorithm to promote allyship or even reduce what I’ve been calling algorithmic triggering. Those with the relevant backgrounds, either through formal education or past experience, are missing from the boardrooms where the salient decisions are being made. Conversely, those in the boardrooms who actually know how the algorithm works, what it is capable of, and who are poised to monetarily gain (and lose) from Facebook’s ability to attract advertisers are not necessarily the best people to make these sorts of political design choices. Perhaps the best way to think of algorithmic triggering is as the automation of “if it bleeds it leads” editorial choices. The decision to show violent and disturbing video (of police officers murdering black people, for example) can be motivated by good intentions but can lead to an impossibly cruel media landscape. Obviously we should all be fighting to end the events that are the subject of this disturbing media, but for now we would do well to demand that the gatekeepers of (social) media take our collective mental health into consideration. What that looks like is yet to be seen.

David is on twitter and tumblr.

Photo taken at the Napoli Pride Parade in 2010

Content Note: This post discusses various forms of transmisogyny and TERFs

On Tuesday, Lisa Wade posted a piece to the Sociological Images blog, asking some important questions about drag: Is it misogynistic? Should it be allowed in LGBT safe spaces? How can pride organizers enforce drag-free pride events, if such an idea is useful? The good news is that many of these questions are already being asked in some circles. The bad news is that outside of these circles, where specifics are unknown and the cis experience takes centre stage, such questions can lead to some harmful conclusions.

First some basics. Wade contends that a recent Glasgow Free Pride event “’banned’ drag queens from the event, citing concerns that men dressing up like women is offensive to trans women.” The event didn’t ban drag queens, but rather decided not to have any drag acts perform on its stage, and even this decision has now been reversed. In any case, the initial decision to go without drag performances was not made because of offence caused, as Wade says, but rather because the Trans/Nonbinary Caucus of the event felt that it would “make some of those who were transgender or questioning their gender uncomfortable”. Wade’s misunderstandings seem to come from having used the Daily Beast article on the matter as a source rather than the actual press release from Free Pride.

The title of Wade’s essay, and the repeated references to “girlface” in the essay itself, not only misunderstand the critiques levelled at drag, but also conflate blackface and drag. This misconception is appropriative of black struggle: it stems from a conflation of two separate histories, one of which was a major tool in the subjugation of black people across America, and another which grew as part of queer (then, gay) liberation in a diverse, working-class environment, led by women of colour. Comparing the two is highly disingenuous.

It is an argument that is about as novel as it is accepting of trans people’s existence. Sheila Jeffreys, among many other TERFs, is infamous for using this line of argument to capitalize on the widespread condemnation of blackface in her efforts to attack trans women. Wade is, whether she intends to or not, using this dog whistle in her essay.

Getting a few facts wrong (understandable if you are not part of these conversations; the Daily Beast got it wrong too, and this is why allies are usually asked to take a seat in these debates) and using terminology that is usually reserved for deeply transphobic arguments are somewhat superficial problems that lie on the surface of a much bigger one: the centering of cis feelings in trans issues. Wade seems to think that the biggest problem with the Glasgow Free Pride decision is that drag parodies femininity and womanhood.

While this is true in the general sense, drag is understood in the trans community to be oppressive because of the central conceit of the parody: that the performer, while affecting womanhood, is “actually a man.”

It’s about the bulge in the dress, the errant chest hair and the deep voice from the sculpted body. The fact that they’re “always PMSing” is a joke about how they don’t have uteruses. Their stage names, often punning on genitals (“Conchita Wurst”), act to center not their femininity, but the “failure” to produce a cis femininity. This was the drag that the gay media was insisting be reinstated, and that Glasgow Free Pride allowed on their stage again when they reversed their decision.

Drag is not monolithic –both historically and sociologically, different drags have and do exist– which is why Glasgow Free Pride specifically critiques “cis drag” (drag performed by cis people) as making people uncomfortable.

Many of the drag queens of color who led S.T.A.R. and Stonewall were not people who played a woman on stage or in a bar for a few hours a week, but people who lived their lives as women, and their drag is fundamentally different from that of people who perform in televised competition today.

Maybe these drags belong at a pride. Maybe there are decolonised drags which would be welcome. But contemporary western cis drag isn’t about femininity; it’s about the drag queen’s failure to produce an impression of cis womanhood, the upshot of which also produces a caricature of trans womanhood, seen by society as a flawed womanhood.

Given this, it is possible to see drag as an attack on trans womanhood first and foremost, with cis women more as collateral damage in a long controversy within LGBTQIA+ communities. Glasgow Free Pride understood this, and this is why the call came from their trans caucus, not their women’s caucus.

Writing a post which centers the debate on cis women while spending minimal time on trans women derails a conversation that should be about the transmisogyny of contemporary drag. It is an issue which is actively causing damage by perpetuating stereotypes and, yes, making pride parades unwelcoming for trans women and other MAAB trans people.

Wade should rest assured that the “conversation” she calls for is, actually, happening. It happens in trans communities all the time. It bubbled over into the mainstream for a few days, and trans people lost a safe space in a radical pride alternative in the process. What she’s actually asking is that the conversation become permanently legible to cis women by focusing on the minor issues that affect them, rather than the transmisogyny of drag.

T.Walpole is on twitter. More info at drcabl.es/awesome/

Photo taken by Dheera Venkatraman in Myanmar.

For a little over a decade, those researchers and visionaries originally involved in establishing the infrastructure for the World Wide Web have set their sights higher. While hyperlinking Web pages has been pivotal to creating a Web of documents, the more recent goal of establishing a Semantic Web involves hyperlinking data, or individual elements within a Web page. By attaching unique identifiers (in the form of Uniform Resource Identifiers, or URIs) and metadata to data points (rather than just to the documents where those data points appear), machines are able to interpret not just what the browser should display, but also what the page is about. The hope is that, in providing machines with the capacity to interpret what data is about, it will be possible to drastically improve Web search and to allow researchers to perform automated reasoning on the massive amounts of data contributed to the Web. There are numerous examples where this infrastructure is already having an impact (albeit largely behind the scenes). For instance, the New York Times has already “semantified” all of its data and created a Semantic API where researchers can query its database. Facebook’s Graph API, which employs Semantic Web infrastructure to structure user profile data, has been the foundation for several studies attempting to make sense of human behavior and interactions through the platform’s “big data.”

Inherent in the project of structuring meaning are philosophical questions about sameness and difference. How do we define and formalize identity – when one thing is exactly the same as another? Semantic Web engineers are well-attuned to these questions; in fact, many have degrees in Philosophy. Yet questions about sameness and difference are not just philosophical; they are also deeply political. There are social repercussions to formally marking two things as the same or two things as different. We need to be attuned to how the digital infrastructure built for the Semantic Web reflects and projects political commitments – how it shapes a politics of representation. This has serious implications for how identity can be organized, and how we (and machines) understand what the world is about as we access Web knowledge bases for information.

It is notable that establishing the infrastructure needed to meet the vision of a Semantic Web involves engineering a shared language between a content creator and a machine. What happens when language is literally engineered – when digital infrastructure deliberately structures the meaning of content on the Web?

There are several layers to the infrastructure of the Semantic Web; the most important are arguably schemas, semantics, and ontologies. Schemas provide a range of properties for describing data. For instance, a schema may provide properties such as ‘restaurant telephone number’, ‘event start date’, or ‘gender’, which can be referenced to describe a piece of data. Semantics establish the structure for how these properties and their values can be attached to data points. Semantic data is most commonly structured in “triples” of subject, predicate, object: the data point (subject) is linked to a schema property (predicate), and a value is attached to this property (object).
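A minimal sketch in plain Python can make the triple structure concrete. The subjects, properties, and values below are illustrative stand-ins, not drawn from any real schema:

```python
# Each triple links a data point (subject) to a schema property
# (predicate) and a value (object).
triples = {
    ("JoesDiner", "type", "Restaurant"),
    ("JoesDiner", "telephone", "555-0123"),
    ("SummerFest", "startDate", "2015-08-01"),
}

def values_of(subject, predicate):
    """Query the graph: which values does this property hold for this subject?"""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

print(values_of("JoesDiner", "telephone"))  # {'555-0123'}
```

Real deployments serialize such statements in RDF and attach URIs to each term, but the subject-predicate-object shape is the same.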


Finally, ontologies formalize how researchers mark the relationships, hierarchies, and differences between pieces of data; they offer a formal way of representing knowledge. For example, an ontology may be applied to show that Miley Cyrus is a child of Billy Ray Cyrus, or that a carnivore is a subcategory of an animal. Schemas, semantics, and ontologies all become machine-readable through different coding languages and standards.
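As a rough sketch of what such an ontology buys a machine (plain Python with a made-up category chain, not any real vocabulary), subcategory relationships let a reasoner infer facts that were never stated directly:

```python
# Toy ontology: each entry states "key is a subcategory of value".
subcategory_of = {
    "lion": "carnivore",
    "carnivore": "animal",
}

def is_a(category, ancestor):
    """Walk the subcategory chain upward looking for the ancestor."""
    while category in subcategory_of:
        category = subcategory_of[category]
        if category == ancestor:
            return True
    return False

print(is_a("lion", "animal"))  # True, even though no triple says so directly
```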

As you can imagine, building these languages and establishing these standards is quite a contentious endeavor; it involves delineating the boundaries of meaning around just about anything in the world.  Tedious discussions arise as Web engineers engaged in establishing this infrastructure, in collaboration with the World Wide Web Consortium (W3C), attempt to formalize the way data should be described and ontologically represented.

Consider, for instance, “owl:sameAs”, a property in OWL (an ontology coding language) that was established to codify when two pieces of data on the Web (with two different URIs) refer to the same thing, or have the same identity. The W3C documentation outlining this property offers the following example, showing how owl:sameAs would describe a reference to William Jefferson Clinton as the same as a reference to Bill Clinton:

<rdf:Description rdf:about="#William_Jefferson_Clinton">
  <owl:sameAs rdf:resource="#BillClinton"/>
</rdf:Description>

As more and more webmasters take advantage of Semantic Web infrastructure to describe their data, many Web researchers and engineers have lamented how owl:sameAs is being used and (ab)used. What happens when ‘morning star’ and ‘evening star’ are both described as being the “same as” ‘Venus’? Should the automated reasoners that attempt to make inferences from this data then assume that they are the “same as” each other? Is the time of day that Venus is seen (or “sensed” in the words of Gottlob Frege, a logician and philosopher inspiring much work in Web ontologies) a difference that makes a difference? Notably, formalized logic breaks down when owl:sameAs is applied more loosely; automated reasoners produce tangled results. The stickiness of difference keeps getting in the way of clean ontological depictions of the world.
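A toy illustration of that tangling (plain Python, not any real reasoner): because sameAs is symmetric and transitive, asserting both ‘morning star’ sameAs ‘Venus’ and ‘evening star’ sameAs ‘Venus’ forces a reasoner to merge all three into a single identity:

```python
def equivalence_classes(pairs):
    """Merge sameAs pairs into identity classes, chaining the symmetric
    and transitive links the way an automated reasoner would."""
    classes = []
    for a, b in pairs:
        merged = {a, b}
        keep = []
        for cls in classes:
            if cls & merged:   # shares a member, so it is the "same" identity
                merged |= cls
            else:
                keep.append(cls)
        classes = keep + [merged]
    return classes

pairs = [("morning star", "Venus"), ("evening star", "Venus")]
print(sorted(equivalence_classes(pairs)[0]))
# ['Venus', 'evening star', 'morning star']: one identity, like it or not
```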

The controversial politics of representation becomes apparent as soon as owl:sameAs is applied to contested data points. Take, for example, DBpedia, a crowd-sourced project aiming to semantify data that has been contributed to Wikipedia. As of July 2015, DBpedia still has no entry for Caitlyn Jenner. But it does have an entry for Bruce Jenner. Scroll through the metadata at this URI to the owl:sameAs property, and you will find several URIs – all of which link to Web pages on Caitlyn Jenner. Other examples illustrate international naming politics. DBpedia has no entry for Myanmar, but it does have an entry for Burma. Scroll to the owl:sameAs property, and you will find that the US-based and UK-based URIs marked as being the “same as” this entry all refer to Burma, while those based elsewhere in the world refer to Myanmar – the name change the US and UK refused to recognize, citing the reported human rights abuses of the military regime that imposed it in 1989.

Should automated reasoners assume that a reference to Bruce Jenner is the “same as” a reference to Caitlyn Jenner, or that a reference to Burma is the “same as” a reference to Myanmar – that the two have the same “identity”? And more importantly, who gets to decide? What happens when this sameness organizes how we see data on the Web? Or when it becomes the basis of research conducted on the Web?

Attempts to iron out these differences, or even to nail down when and how differences make a difference, discount the importance of permitting difference to remain sticky – of allowing data to sit comfortably and uncomfortably in a conflicting ontological space of sameness and difference.   In this sense, there are not just technical and philosophical difficulties to semantifying the Web; there are also political difficulties – considerations that are often ignored as Web researchers attempt to engineer vocabularies and ontologies that capture a consistent depiction of the world.   This can be thought of in terms of what postcolonial and feminist theorist Gayatri Chakravorty Spivak refers to as “worlding the world.”  “Worlding” refers to the way colonizers inscribe new worlds – worlds they assume were previously uninscribed. It is with this “worlding” that certain forms of meaning become salient – that the “Third World” comes to be recognized as the “Third World” and that the countries that constitute it are homogenized to sameness.  We need heightened awareness of how the worlding of the Web – the engineering of semantic infrastructure – shapes what we know and can know – what can be made meaningful in a world full of sticky differences.

Lindsay Poirier is a PhD Student in Science and Technology Studies at Rensselaer Polytechnic Institute.  She occasionally Tweets at @lindsaypoirier. http://lindsaypoirier.com/


Reddit’s co-founder Steve Huffman, who is currently taking over CEO responsibilities in the wake of Ellen Pao’s resignation, has started doing these Fireside AMAs where he issues some sort of edict and all of the reddit users react and ask clarifying questions. Just today he made an interesting statement about the future of “free speech” in general and certain controversial subreddits in particular. The full statement is here, but I want to focus on this specific line where he describes how people were banned in the beginning of reddit versus the later years when the site became popular:

Occasionally, someone would start spewing hate, and I would ban them. The community rarely questioned me. When they did, they accepted my reasoning: “because I don’t want that content on our site.”

As we grew, I became increasingly uncomfortable projecting my worldview on others. More practically, I didn’t have time to pass judgement on everything, so I decided to judge nothing.

This all comes on the heels of some interesting revelations from former, former Reddit CEO Yishan Wong, who says that Ellen Pao was actually the person in the boardroom championing free speech and that it was Huffman, fellow co-founder Alexis Ohanian, and others who really wanted to clamp down on the hate speech. So that’s just a big side dish of delicious schadenfreude that’s fun to nibble on.

But those quotes bring up some questions that are absolutely crucial to something Britney Summit-Gil posted here a few days ago, namely that reddit finds itself in a paradox where revolting against the administration forces users to recognize that “Reddit is less like a community and more like a factory,” and that the free speech they rally around is anathema to their other great love: the free market. What structures this contradiction, what sets everyone up at cross-purposes, also has a lot to do with Huffman’s reluctance to ban people as the site grew. After all, why would Huffman feel “increasingly uncomfortable” making unilateral banning decisions as the site grew, and why would his default position then be “to judge nothing”? Why does it, all of a sudden, become unfair or inappropriate to craft a community or even a product with the kind of decisiveness that comes with “I just don’t like it”?

The answer to all of this comes out of two philosophical ideas. One is the Enlightenment model of reason that we still use to undergird our concepts of legitimacy and rhetorical persuasiveness: big decisions that affect lots of people should be argued out on practical and utilitarian grounds, not based on the whims of an individual. That’s what kings did, and that sort of authority is arbitrary even if the results seem desirable. The second is relatively more recent but still fundamental to the point of vanishing: the idea of modern society as governed by bureaucracies with written rules that are followed by everyone. The rule of law, not of individuals. Bureaucracies are nice when they work because if you look at the written-down rules, you have a fairly good idea of how to behave and what to expect from others. It’s a very enticing prospect that is rarely fully experienced.

Huffman doesn’t say as much but this is essentially how we went from fairly common-sense decisions about good governance to free speech fanaticism: not choosing to ban is the absence of arbitrary authority. When you have a site that lets you vote on things it feels like a decision to stop imposing order from the top is making room for democratic order from below. But this is closer to the kind of majoritarian tyranny that even the architects of the American constitution were worried about. Voting in the 1700s was something that only aristocrats were qualified to do. Leave it to the rabble and you would have chaos. That’s why they built a bicameral legislature that originally featured a senate with members appointed by state governments.

It should also be said that one of the oldest rules in American law is that Congress can’t pass laws that specifically target a single individual or organization (the constitutional prohibition on bills of attainder). That’s why those efforts to defund Planned Parenthood in 2011 were immediately dismissed as unconstitutional. Laws have to apply to everyone equally.

And so what Huffman is presently faced with is a problem of liberal (lowercase L) and modern state governance. How do you write broad laws that classify r/coontown without just saying “I ban r/coontown”? Unfortunately, this is also the biggest fuel line to the flames of fear that banning even detestable subreddits is a threat to free speech in general. This is, fundamentally, why it even makes sense to argue that banning an outwardly and explicitly racist subreddit can threaten the integrity of other subreddits, either in the present or sometime in the future. Laws apply to everyone equally.

So if Reddit wants to get itself out of this paradox, I say dispense with liberalism altogether. At the very least, come up with some sort of aspirational progressive vision of what kind of community you want to have and persuade others that they should work to achieve it. This sort of move is the biggest departure that anarchist political theory takes from mainstream liberalism: communities can agree on the features of a future utopia and govern in the present as if they are already free to live in that utopia. Organizing humans with blanket laws forces you to explain the obvious, namely that hateful people suck and should be persuaded to act otherwise if they wish to remain part of a community that is meaningful to them.

Right now Huffman and the rest of the reddit administration have come up with some strange and inelegant ways of dealing with the present problem. They make all these dubious distinctions between action and speech; between inciting harm and just abstractly wishing it on people; and lots of blanket “I know it when I see it” sorts of decency rules. Under liberalism redditors would be right to demand very specific descriptions of the “I know it when I see it” kinds of moments. But if prominent members were to just be upfront in stating what sort of community they would like to see and then act as if it already existed, discontents would have to persuade admins that they were acting against their own interests and propose a more compelling way to achieve the stated utopia. If they don’t like the utopia at all, then those people can leave for Voat and new users who like that utopia might come to replace them. At the very least, if reddit were to take this approach, users might actually start answering the question that is at the heart of the matter but is rarely stated in explicit terms: who gets to be a part of the community?

David is on Twitter and Tumblr

We'll never get tired of putting different words on the enter button.

In May of 1999 two people filed a lawsuit against AOL. They were volunteers in the company’s Community Leaders program, which encompassed everything from chatroom moderation to teaching online classes. You had to apply to be a Community Leader, and once you were selected you had a minimum number of hours you needed to work every week, a time card to keep track of those hours, and reports that needed to be filed with administration. It had all the hallmarks of a real job, which is precisely what those two people claimed in their lawsuit. Their argument was that their role constituted an “employee relationship,” but I think it is more accurate to say they were creating value for a company that didn’t even feel the need to provide some kind of subsistence wage.

This story has been told countless times as a jumping off point for arguments that labor has left the factory or that even those companies like Amazon or Uber that have been leaders in the contractor / sharing / worse-than-capitalism economy are not paying enough. Some are even calling for “platform cooperativism” which sounds super cool. But there is another, very big, reason why social media companies (in particular) should be paying their moderators and other community leaders: it helps with diversity.

A similar realization came to the fore during the Progressive Era in the United States. In an effort to weed out corruption and machine-style politics at the municipal level, many reformers adopted non-partisan elections (no parties), strong city councils, and very weak mayors. Some towns got rid of mayors altogether and instead hired a professional city manager. The idea was as simple as it was radical: towns and cities should be scientifically managed, not politically organized.

That process of reform was flawed and incomplete but it hit upon a fundamental barrier to community leadership: the unequal distribution of free time. City managers were and still are full-time employees with benefits and a healthy salary. Anyone with the right credentials can be hired to be a city manager. City councils, on the other hand, especially in smaller cities and towns, are part-time positions. Whereas the independently wealthy and retired can take a time-consuming job with little-to-no pay, workers and even middle-class folks cannot reasonably run for office, let alone do all the work of a political leader. That is, of course, unless they took lots and lots of bribes.

The inability of anyone but the well-off and the morally corrupt (lots of overlap in that Venn diagram) to serve on city councils has actually led to a rise in the wages of council members and even the inclusion of health insurance. This quote from the LA Times sums up the situation nicely:

Some experts said the move to provide healthcare benefits occurred as city councils became less a bastion of white men, many of whom owned local businesses or were executives in local companies. With diversity came a need for better compensation to make public service possible, said David Mora, an analyst with the International City/County Management Assn.

“A health insurance benefit was something that would make it a bit more manageable for the incumbent, so that more people might be able to run for office,” Mora said. “It’s generally accepted practice.”

The responsibilities of a Reddit moderator or Facebook group administrator, like a city council position, can run the gamut from a few hours a month to a full-time job. Some people will do these jobs no matter what the personal cost because it means a lot to them and they are willing to absorb substantial opportunity costs. For the vast majority of people, however, this is simply not the case. Lots of passionate people can’t take leadership roles in online communities because they cannot afford to give away their labor. That is as good a reason as any to pay people even a few dollars an hour to check for spam and ban some trolls.

Of course, if we were to calculate the value of all the volunteer labor that makes many of our social media platforms possible, and give that money directly to workers, even accounting for server costs, we’d arrive at some pretty lavish salaries. Consider, for example, this back-of-the-envelope math on reddit moderators:

Reddit’s estimated value is about $500 million. Let’s say the stingy corporate types are only willing to spend a quarter of that value on the labor that makes reddit even remotely possible. There are 10,114 active subreddits as of today, and while I can’t seem to find an estimate of the number of active mods, let’s just go with 30,000. Some big subreddits have over 15 mods and most have one or two. There are some complicated arguments over which mods should get paid, but let’s simplify the whole thing and pay each one a flat rate. $125 million (a quarter of reddit’s value) divided by 30,000 is $4,166.

No one can live on $4k a year but consider how conservative we were with the amount going to salaries and how liberal we were with who gets them: sure each moderator of r/pics is going to get far less than what they are owed but collectively that team of 23 will get over $95,000. Perhaps that team could split that money up in some sort of progressive way where a successful and retired photographer can forgo their salary and pass it on to a young upstart. Meanwhile the moderators of small and obscure subreddits like r/Troy (local news for my city of 50,000 people) with only 514 readers would get a relatively sizeable amount of money for a small amount of work.
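The arithmetic above can be sketched out directly. All of the inputs here are the post’s own assumptions (a ~$500 million valuation, a quarter of it set aside for moderator pay, and a rough guess of 30,000 active mods), not real figures:

```python
# Back-of-the-envelope sketch of the flat-rate payout described above.
# Every number is an assumption taken from the post, not reported data.

REDDIT_VALUATION = 500_000_000   # reddit's estimated value in USD
LABOR_SHARE = 0.25               # fraction of that value paid to moderators
ACTIVE_MODS = 30_000             # rough guess at the number of active mods

def flat_rate_per_mod(valuation: int, share: float, mods: int) -> int:
    """Split a fixed labor budget evenly across all moderators."""
    return int(valuation * share / mods)

rate = flat_rate_per_mod(REDDIT_VALUATION, LABOR_SHARE, ACTIVE_MODS)
print(rate)        # 4166 -- the post's per-moderator figure

# A big mod team pools its flat rates, as with r/pics' 23 moderators:
print(rate * 23)   # 95818 -- "over $95,000" collectively
```

The flat rate is deliberately crude; as the post notes, a team could redistribute its pooled total however it liked, so the function only fixes the size of the pot, not the split.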

Just as reformers at the turn of the 20th century realized that paying officials actually reduced corruption, we might do well to start turning volunteers into part-time employees, if for no other reason than to encourage a more diverse pool of community leaders. We should be paying them anyway, given that they generate so much value, but even if you are not convinced by the Marxist value-creation argument you can at least get behind improving communities.

David is on Twitter and Tumblr.

ugh

A few years ago (I don’t really remember when) someone on this blog (don’t remember who) [edit: it was nathan] lamented the fact that the increased visibility of our childhood indiscretions, thanks in no small part to Facebook, had never resulted in a change in how we forgive one another for our past selves. That instead of saying, “eh, I was a kid once too,” we continue to roll our eyes, clutch our pearls, and even deny each other jobs based on the contents of timelines, profiles, and posts. Today I’m starting to feel like such forgiveness might have to begin with ourselves because (as many of us might be experiencing at this moment) I have started a free trial of Apple Music and I am confronted with my old iTunes music purchases. I need to forgive myself for the purchase of A Bigger Bang when it came out in 2005. This is hard.

Apple doesn’t make it easy. There’s all this album art staring at you under the words “My Music.” It’s all there, in alphabetical order, as if my decision to spend actual US currency on Madeleine Peyroux were the same kind of decision as letting iTunes think I had “purchased” a Dead Prez CD and ripped its contents to the massive 80 gig hard drive that once inhabited my MacBook G4. Then there are all of these personal one-hit wonders that, for the life of me, I cannot place from the album art (probably because it didn’t have any) but that now stare at me like old friends who don’t look anything like they did in high school but whose voices are unmistakable. Oh! You’re THAT track. Wow, Magenta Lane’s Wild Gardens, I totally forgot about you.

Why do I have not one but two MC Hammer albums?

Remember that time Stephen Colbert had the Swedish-language hip-hop swing fusion band Movits! on his show and it was better than something like that has any business sounding? I’m not saying my decision to spend half the value of my fifth-night-of-Hanukah iTunes gift card on that album was a good one, but I suppose that’s just how we learn.

So now I’m wondering if the fact that I was one of those people that first heard Modest Mouse via Good News For People Who Love Bad News is the reason I fell so hard and completely into hating hipsters in the early 00s. I dunno, Building Nothing out of Something is an excellent album but, nine times out of ten, I’ll still choose to listen to Dashboard when I’m driving. I don’t know what that says about me.

Oof, Major Lazer is bad writing music.

Was anyone ever into Birdmonster in 2007? Pitchfork’s William Bowers, in August of 2006, said that there were some “bloggers” who really liked them, but he only gave No Midnight a 5.6. I remember them (sorta) as one of those British pop punk bands that had a moment around that time. The Fratellis, The Futureheads, Kubichek! All sound virtually interchangeable now (and probably then).

If Spotify is the gabby friend that likes to tell all of your other friends that you listen to bad psytrance at the gym, iTunes is the parent that recommends The White Stripes “deep cuts” because remember all The White Stripes you listened to, don’t you like The White Stripes, I thought you really liked them. The former is a performance, but the latter is a kind of meditation. Neither is more or less authentically “you,” but both sort of betray a misunderstanding, on the part of designers and engineers, about what we do with our music and why.

The impulse to recommend is always already context collapsed. Recommendations come from paying attention to you and only you, regardless of context or co-present audience. No platform has yet mastered the when, how, or with whom of music listening, and so we end up forced to explain Squarepusher to the aunt we’re driving home from the airport as it comes up on our finely tuned driving Spotify radio station. That’s a good thing. Those moments should never be smoothed over by wearables that report the audience to some onboard car computer designed to play the “perfect” Bruce Springsteen track off of Nebraska that everyone will tolerate.

We tend to think of them as soothsayers, but algorithms meant to suggest “more that we love” are also products of what Carl DiSalvo calls adversarial design. Adversarial relationships are characterized by disagreement, but never in the Hegelian one-must-be-destroyed-to-realize-the-other sort of way. Adversarial relationships produce productive tensions that do useful political work through the juxtaposition and shifting relationships of individual actors. We come to understand how we relate to other people and the material world around us in moments where things don’t fit quite right: when that one Billy Joel track you like comes on while you’re with someone you’re trying to impress, or when a particularly raunchy song comes on during a dinner party. These are moments where we learn a lot; they might not be comfortable, but that doesn’t make them any less important.

Maybe, then, the increased mutual understanding, the forgiveness that we were expecting to arrive with the ubiquity of the timeline, is still in the works. Maybe we will still get it, but it will take a lot more uncomfortable moments. In the meantime, unfortunately, social inequities will make the adversarial moments designed by and through algorithmically induced context collapse more consequential for some than for others. Gregarious algorithms can and have gotten people into serious trouble; I only have to worry about defending my purchase of that one Citizen Cope album from 2004.

David is on Twitter and Tumblr.

Scene from Die Hard 4: Live Free or Die Hard

David Graeber has republished his popular essay Of Flying Cars and the Declining Rate of Profit in his new book Utopia of Rules, with some small changes that go toward supporting the book’s overall argument: that the hallmark of American neoliberalism is not dynamism and a freeing-up of individuals to pursue “creative class” jobs but rather a bureaucratization of every aspect of life. This total bureaucratization (almost literally) papers over the structural violence that supports capitalism. Of Flying Cars specifically argues that the utter failure to deliver on the implicit promises of Jetsons-level automation by the 21st century is not necessarily a matter of market forces (no one actually wants a flying car!) or technical impossibility (Moore’s law hasn’t delivered thinking computers yet!) but is in fact a product of both the squashing of imagination through bureaucratic devices and the immense devaluing of labor and elimination of corporate profit taxation that lead to paltry civilian research and development. In essence, capitalism, in its present form, is anathema to the future it once promised.

Graeber states in the beginning of the essay that he is puzzled by the near silence from those people who saw the moon landing on their televisions but today do not, themselves, live on the moon (or can easily teleport there, or take a drug that might extend their life to the time that both of those things are available). “Instead,” he writes, “just about all the authoritative voices—both on the Left and Right—began their reflections from the assumption that a world of technological wonders had, in fact, arrived.”

Graeber relatively quickly drops the question of how our collective expectations of the future could be so quickly and completely re-aligned (his answer is postmodernism) and goes on to explain why such a realignment has become necessary (capitalism’s secret love of bureaucracy), but I want to dwell on the “how” question a little longer by offering up a corollary to Of Flying Cars. The argument that follows is also a reprinting of my own work, an article published in a 2012 issue of the International Journal of Engineering, Social Justice, and Peace, co-authored with Arizona State University’s Joseph R. Herkert. I want to argue that our expectations for the future are purposefully managed through a circulation of imagined threats to capitalism, the popularizing of narratives that flesh out those threats, and the re-articulation of those imagined threats as real ones that must be met with massive government funding. I will demonstrate this process using a beloved and uniquely American franchise: Die Hard.

The original article, available in full here, argued that the top-down technocratic perspective exemplified by Robert Moses’ demolishing of vast swaths of New York City is still alive and well today, but is repackaged in Silicon Valley platitudes of disruption and hacking that circulate in popular media so as to 1) present the technocrat’s view of the world as an inevitable future, 2) drum up support for a clearly unethical approach to technological development by establishing narratives that reaffirm the need for the technocratic view, and 3) establish popular touchstones that make small areas of research that benefit an elite few appear to be global needs on the scale of clean water or sanitation.

We argued that in past iterations of this process, companies like GM and Ford offered positive views of the future through (among other marketing campaigns) their exhibits at the 1939 and 1964 world’s fairs, which gave out pins to visitors that read “I have seen the future!” Today’s expectation management is different: rather than plainly stating “this is the future we will create,” our media describes future development as merely a response to seemingly uncontrollable events such as terrorism or resource scarcity. This is where my previously published work on Die Hard comes in.


The Die Hard franchise is notable, among other reasons, for the variety of its source material. The first two movies were based on paperback action novels published in the mid to late 1980s. The third installment, Die Hard: With a Vengeance, was adapted from an orphaned screenplay titled Simon Says. The fourth movie, Live Free or Die Hard, however, is a radical departure from the prior three. Live Free is based on a WIRED Magazine article, “A Farewell to Arms,” in which John Carlin describes the U.S. military’s preparations for “I-war”. Carlin quotes the Chinese military newspaper, Jiefangju Bao, for a summary of I-war. It reads, in part:

After the Gulf War, when everyone was looking forward to eternal peace, a new military revolution emerged. This revolution is essentially a transformation from the mechanized warfare of the industrial age to the information warfare of the information age. Information warfare is a war of decisions and control, a war of knowledge, and a war of intellect. The aim of information warfare will be gradually changed from “preserving oneself and wiping out the enemy” to “preserving oneself and controlling the opponent.”

Live Free is about control: the control of people, resources, institutions, and (most importantly) infrastructure. The plot revolves around a spurned government cyber-security official named Thomas Gabriel, who carries out the mythical “fire sale” cyber security breach. The “fire sale” is named as such because, just like the eponymous inventory clearance event, “everything must go.” Mass media, financial systems, and infrastructure are all compromised and brought under the control of Gabriel’s small army of hackers and mercenaries. They are only able to accomplish such a feat by anonymously soliciting outside hackers to write viruses under the auspices of a corporate computer security firm. Once the viruses have been written, Gabriel orders all of the hackers killed. John McClane (played by Bruce Willis) saves one of the hackers, Matthew Farrell (played by Justin Long), just as the assassin team arrives at his apartment. The rest of the movie follows Farrell and McClane as they attempt to thwart the massive attack on America’s computer-run infrastructure.

The two characters’ perspectives on technology are representative of mass media’s imposed narratives about the “generation gap” between so-called millennials and baby boomers. Technology, as it appears to Farrell (the millennial), improves individuals’ lives; society is an afterthought. McClane (the boomer) has come to recognize that there is no technological white knight that will end hunger or disease. His technological optimism turned pragmatic idealism is representative of his fellow baby boomers who, just as they are reaching retirement, find the social safety net in tatters. Institutions are corrupt and inept, and technology is just as alienating as it is tragically flawed. This tension is perfectly demonstrated in two scenes.

In the first scene, McClane is escorting Farrell to a police precinct just as the “Fire Sale” begins. Gabriel calls McClane and offers him a tradeoff similar to actual U.S. fiscal policy: by sacrificing the millennial, McClane’s debt will be eliminated and his own millennial children will be “set for life.” Gabriel makes this offer only after emptying McClane’s retirement fund as a demonstration of his power. McClane declines the offer, and Gabriel (by way of a machine-gun-equipped black helicopter) immediately denies him the relative safety of his cop-filled SUV. This makes for an interesting comparison to the ’90s-era installment With a Vengeance, wherein an equally decade-appropriate offer is made to McClane: a dump truck full of inflation-resistant gold bullion stolen from the Federal Reserve Bank of New York.

The second scene comes just as the full effects of the Fire Sale have become clear. Farrell, recognizing his own latent desire for wanton destruction of “the system,” prompts a frank discussion of what is at stake:

Farrell: This is virtual terrorism.

McClane: What?

Farrell: You know, first time I heard about the concept of a fire sale . . . I actually thought it would be cool if anyone ever did it. Just hit the reset button and melt the system just for fun.

McClane: Hey, it’s not a system; it’s a country. You’re talking about people, all right? A whole country full of people. Sitting at home alone scared to death in their houses, all right? So if you’re done with your little nostalgic moment and think a little bit and help me catch these guys, just help me. Just put yourself in their shoes.

This exchange between McClane and Farrell mirrors the Faustian bargain demanded by technocrats and bureaucrats: if you take any interest in technological development that does not conform to the grand narrative of progress (such as hitting the “reset button”), you are consigning your fellow Americans to a short life of Hobbesian terror. The young radical and the skeptical citizen alike pose a danger to everyone’s collective livelihood simply by making rhetorical room for alternative conceptions of progress. Meanwhile, the “old national faith in the advancement of technology as a basis for social progress,” to quote Leo Marx, not only keeps McClane (and the sympathetic audience) loyal to this sociotechnical regime, it translates a system of pipes and cables into a nation.


Graeber's new book Utopia of Rules, published by Melville House

Graeber concludes his Of Flying Cars chapter by observing that:

…ultimately, claims for the present-day inevitability of capitalism have to be based on some kind of technological determinism. And for that very reason, if the ultimate aim of neoliberal capitalism is to create a world where no one believes any other economic system could really work, then it needs to suppress not just any idea of an inevitable redemptive future, but really any radically different technological future at all.

What I have provided is a very specific mode by which our expectations of possible technological futures are managed through popular culture. There is a pervasive notion, exemplified by the scenes in Live Free or Die Hard that I just described, that any rebellion against these expectations is an immature desire that could lead to collateral damage. But how, specifically, does this sort of culture work influence actually existing research and development?

The article Herkert and I wrote was part of an issue dedicated to critiquing the National Academy of Engineering’s Grand Challenges for Engineering, a document meant to set the tone for future investment in applied science and technology research and development. It lists several major areas of future research that grant-making institutions should fund if society is to meet some of its biggest 21st-century challenges. For the most part it is a fairly stern and dry document, until you reach the section titled “secure cyberspace,” which reads in part:

Electronic computing and communication pose some of the most complex challenges engineering has ever faced. They range from protecting the confidentiality and integrity of transmitted information and deterring identity theft to preventing the scenario recently dramatized in the Bruce Willis movie “Live Free or Die Hard,” in which hackers take down the transportation system, then communications, and finally the power grid.

Rarely is the cycle of imagined threats, popularized threats, and constructed futures so blatant, but I’m sure there are hundreds more examples lurking out there. This sort of dynamic makes it difficult to remember that our expectations of what the future will look like, and of what sort of planetary social order will provide that future, are of our choosing. It also means that pop culture, as many before me have argued, is far from trivial. It is, perhaps, our best hope for steering the future course of technological development.

David is on Twitter and Tumblr.

Note on authorship: The section between the two horizontal lines is a slightly altered reprint of a section of Herkert, Joseph R., and David A. Banks. “I Have Seen the Future!: Ethics, Progress, and the Grand Challenges for Engineering.” International Journal of Engineering, Social Justice, and Peace 1, no. 2 (2012): 109–22. Used under a Creative Commons Attribution 3.0 License that allows others to share the work with an acknowledgement of the work’s authorship and initial publication in this journal. Full text available here: http://library.queensu.ca/ojs/index.php/IJESJP/article/view/4306