Less than a week ago Byron Román made the above Facebook post challenging “bored teens” to pick up trash and post before and after photos on social media. Reddit user Baxxo24 (who looks to be Swedish, while Byron lives in Arizona) took a screenshot and posted it to r/wholesomememes, where it went viral. Now #trashtag (“hashtag trashtag?”) is the subject of a dozen or so feel-good human interest stories. It is unclear who the guy in the photo is (it looks like it came from a Guatemalan travel agency), but CNN, The Washington Post, and CBS News have reported that “trashtag” is a long-dormant social media campaign for UCO Gear, a Seattle-based camping equipment company.

When I started seeing Byron Román’s #trashtag trending on my usual platforms I did what any well-adjusted person would do: I assumed it was a scam and Facebook-stalked him until I was convinced otherwise. According to his Facebook profile, Román works in the non-profit home loan industry, mostly in marketing. His latest job involves helping veterans apply for and receive cheap mortgages. Nothing too dubious there, but it got me thinking about the long and dismal history of littering campaigns playing cover for corporate interests.

My go-to history of corporate environmental astroturfing is Ginger Strand’s “The Crying Indian” in Orion Magazine. Here is how she describes the founding of “Keep America Beautiful,” the ad campaign featuring a weeping Native American man (the actor was actually Italian) that admonished Americans for littering.

In 1953, Vermont’s state legislature had a brain wave: beer companies start pollution, legislation can stop it. They passed a statute banning the sale of beer and ale in one-way bottles. It wasn’t a deposit law — it declared that beer could only be sold in returnable, reusable bottles. Anchor-Hocking, a glass manufacturer, immediately filed suit, calling the law unconstitutional. The Vermont Supreme Court disagreed in May 1954, and the law took effect. That October, Keep America Beautiful was born, declaring its intention to “break Americans of the habit of tossing litter into streets and out of car windows.” The New York Times noted that the group’s leaders included “executives of concerns manufacturing beer, beer cans, bottles, soft drinks, chewing gum, candy, cigarettes and other products.” These disciples of disposability, led by William C. Stolk, president of the American Can Company, set about changing the terms in the conversation about litter.

The packaging industry justifies disposables as a response to consumer demand: buyers wanted convenience; packagers simply provided it. But that’s not exactly true. Consumers had to be trained to be wasteful. Part of this re-education involved forestalling any debate over the wisdom of creating disposables in the first place, replacing it with an emphasis on “proper” disposal. Keep America Beautiful led this refocusing on the symptoms rather than the system. The trouble was not their industry’s promulgation of throwaway stuff; the trouble was those oafs who threw it away.

Adam Conover has a good rundown of this history too:

[Embedded YouTube video]

While Román can’t be accused of much more than padding his social media manager resumé, we should be cognizant of the narrative #trashtag plays into: that pollution is a problem of irresponsible people not taking care of their immediate environment. Picking up litter is a great thing to do for your neighborhood, and it might make your local park or a nearby river cleaner for a time, but getting at the source requires much more complicated, less photogenic work.

One of the more insidious impacts of the Keep America Beautiful campaign was that it encouraged people to think of litter as a local phenomenon. If you see trash around you, it’s because the people around you don’t care about that place. So when dramatic #trashtag photos from fast-growing Asian cities like Mumbai come across our screens, it’s easy to assume that the people who live there don’t care about the environment. What’s much more likely is that the trash in Mumbai began in a trash can in the United States or Europe before being exported to an Asian or African country for processing. Often this international trash falls out of trucks and barges and finds its way into rivers, lakes, coastal tidal zones, and land. This is at least part of the reason why China stopped recycling our trash. If we take anything from #trashtag let it be this: garbage is a global system, and litter is best thought of as something inflicted on places, not a reflection of their people.

Image by Al Ibrahim

I want all of your mind
People turn the TV on, it looks just like a window…

Digital witnesses
What’s the point of even sleeping?

— St. Vincent, “Digital Witness” (2014)

 

Each day seemingly brings new revelations as to the extent of our Faustian bargain with the purveyors of the digital world in which we find ourselves. Our movements, moods, and monies are tracked with relentless precision, yielding the ability to not only predict future behaviors but to actively direct them. Permissions are sometimes given with pro forma consent, while other times they’re simply baked into the baseline of the shiniest and newest hardware and software alike. Back doors, data breaches, cookies and trackers, smart everything, always-on devices, and so much more — to compare Big Tech to Big Brother is trite by now, even as we might soon look back on the latter as a quaint form of social control.

While data breaches and privacy incursions are very serious and have tangible consequences, debates over user rights and platform regulation barely scratch the surface. Deeper questions about power, autonomy, and what it means to be human still loom, largely unexamined. And when these concerns are voiced at all, they can often seem retrogressive, as if they represent mere longings for a bygone (pre-internet) time when children played outside, politics was honorable, and everyone was a great conversationalist. Despite ostensible consternation when something goes egregiously wrong (like influencing an election, let’s say), the public and political conversation around mass data collection and its commercialization never goes far enough: why do so many seemingly reasonable and critical people accept a surveillance-for-profit economy (with all of us as the primary commodity) as tolerable at all?

To answer this question, we have to look at privacy from an entirely different angle, one that sees the advent of omnipresent, omniscient technology as satisfying basic human needs rather than violating them. In this view, perhaps the reason for the mostly uncritical acceptance of technopolistic trends is that on some level they actually resonate with people. Yes, we know that many of the products are engineered to tap into fundamental desires to be liked, to be seen, to feel important, to be reaffirmed (in carefully doled out neurological doses), to register and be recognized. Yet the tendencies predate the technologies, and if it weren’t this it would be something else.

For instance, the totalized gaze of the narrator/viewer in most movies and programs is so engrained that we rarely notice it, casually enjoying the voyeuristic ride we take on the backs of characters assembled. In film the viewer is there for every mundane moment, every disappointment and poignant revelation, every coincidence, every interaction — at least those that make the cut from idea to script to broadcast. This leaves viewers in a paradoxical state of apparent omniscience and susceptibility to manipulation, and is part of the artifice of good storytelling. This duality of power and persuasion applies to new media as well, where any viral video is notable for what it captures and what it omits. In both realms, we become a kind of co-pilot, a witness to everything in the field of vision, a validator of conduct and an accomplice in artifice. We decide when a character (fact or fiction) is being misjudged, acting deviously, driven to extremes, or doing something quietly heroic. We sign off on the solidity of their perspective.

Humans have conceived of external observers for a long time, whether in the form of authorities among us or gods above. Consider how many of us (secular and religious alike) may long for such an audience on our solitary journey, someone who sees all the little moments that define us and is invested in the trajectory of our lives. This virtual road buddy is by definition on our side, at least in terms of point of view if not viewpoint, serving as a recorder of our struggles and triumphs, keeping the ledger on how we will ultimately be judged. We can’t really rely on other people for this, after all, since they’re too consumed with their own myriad insecurities, internal dialogues, and obsessions with impression management. Whereas others only see the outward moments that we carefully curate and/or blithely forward, the omniscient viewer — the one whose affirmations and likes we really covet — is with us all the way. And even when the data gleaned by our digital companions is used to target us for advertising or worse, it still affirms the basic notion that someone cares, and that we matter.

In this sense, it often appears that we have come to crave publicity more than we value privacy. This surely is not by happenstance, since it taps into a basic human desire to be recognized. But the modern version is subtler and more sinister, with technology not merely recording our desires but shaping them as well. Everything from images of beauty and measures of success to the taste of food and the cadence of broadcasting is cultivated through a combination of repetition and reinforcement. When it comes to privacy in the social media era, the stakes are even higher than simply guiding what we consume; now it is about how we are being consumed by others, how we create our own brand and become promoters and marketers with ourselves as the principal commodity. Privacy is the antithesis of this, serving to keep parts of us from being recognized and thus failing to maximize the potential for growth and gain.

Regardless of how long people have desired being seen, we still have to evaluate carefully whether the version of Big Brother that Silicon Valley built is meeting this basic need without leading us down a road on which there is no exit and no return. As we fully enter into this era, it is important to consider how the escalating network of devices and data streams is more than merely the object of our consumerist affection. In short order it is becoming our digital witness, our personal seal of approval, and our novel hope for understanding if not outright absolution. The science (or is it mysticism?) of chronicling our every thought and movement may soon yield a world where literally nothing is truly private anymore, and where this realization actually brings a sense of comfort and confirmation.

In his classical formulation of the panopticon over two centuries ago, Jeremy Bentham conceived an all-seeing vantage point that would leave those within its ambit susceptible to being watched at any time. While this seems like an ominous harbinger given the surveillance society as it has evolved, Bentham’s notion was somewhat more benign in its intentions if not its implications. In essence, the panopticon was designed to inculcate the arbiter’s gaze within those exposed to it, cultivating self-reflection and moral behavior out of fear of being caught by the omniscient observer but without having to use actual force to impose discipline. The net effect was minimal external pressure yielding inner transformation.

Today’s manifestations may be more like a tranopticon, a term I’ve coined to describe a scenario that isn’t just all-seeing but ever-evolving. Unlike the traditional panopticon, it isn’t passive or fixed; rather, it is transactional, intelligent, dynamic, and capable of being dialed up to prove a point, reach a decision, explain an action, or magnify a transgression. It is less about the totalizing gaze of the watcher and more about the myriad gazes that include our own. While its gleanings don’t represent the truth (since others will have their own POV-molded realities), it will loyally capture our verities by being there for all things great and small. In this sense, our consciences will move from the remoteness of an “eye in the sky” to the applications sparked by an “AI in the Wi-Fi” that helps to shape present and future behavior based on the opulent tapestry of our past, compiled every nanosecond across a thousand points of data.

With the careful guidance of our alter egos and the unvarnished reflection of ourselves in hand, perhaps humankind will learn to optimize not only efficiency but ethicality as well. As in Orwell’s archaic parable, attempting to shield oneself from this omnibenevolent gaze would be transgressive — not only illogical, but immoral. Human beings have tried for thousands, perhaps millions, of years with marginal success to project forces above us that might elicit moral behavior — deities, leviathans, solons, panopticons, prosecutors, and more. Now we will finally have the means to install the one power source that cannot be gainsaid: ourselves. And in this understanding, perhaps we may truly come to love Big Tech after all.

Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

I’ve written about Star Trek a few times (here and here). I think I still agree with most of what’s written there. PJ Patella-Rey also wrote about Star Trek on the blog here. My favorite commentary on Discovery, which I’ll do my best not to simply repeat, is by Lyta Gold, which you can read at Current Affairs. What follows are some vaguely connected thoughts I’ve had about Discovery‘s relationship to the rest of the canon after having just gotten caught up with the series.

1.

While watching Discovery I’m haunted by the idea that I am incapable of liking any new Star Trek offering because my love of Trek is fueled by nostalgia and not a reverence for its politics or innovative storytelling. Or, more accurately, nostalgia and reverence work together so that the moments of pre-9/11 Trek (is there really any other more distinctive delineation? More on that soon.) that are just too corny or clumsy to enjoy on their own merits can be ignored and the good parts can really shine. With the nostalgia missing I can’t enjoy any of it. I think this is what the writers worry about, and why they try to invoke a nostalgia for the present with completely unnecessary Spock-centric plot points. When media producers make us feel like there are childhood memories out there we haven’t seen yet —that is all fan service really is— it either works through, paradoxically, original storytelling or it just falls flat. Most of Discovery‘s references to the original series fall flat, I think.

2.

For a few years Britney Gil and I would host weekly Star Trek watch parties at our house, and I would curate three or four episodes into social themes, most of which are preserved on my web site. As I watch Discovery I try to place each episode within themes I’ve already identified but usually come up empty-handed. Only part of this is because these episodes belong to a season-long arc rather than standing alone. While each individual episode of pre-9/11 Trek was a Mondrian-esque depiction of a single moral theme —this week Odo deals with the longing for a people, please see Arendt’s The Human Condition for more details— Discovery paints de Kooning-style season arcs. Still a limited palette, but the themes are allowed to mix, overlap, and emerge more freely. I went to Trek for the Mondrian rationalism, but perhaps de Kooning is more appropriate for the times: more stylistically contemporary and better equipped to deal with the issues we want to see portrayed on TV.

3.

It is not enough to say today’s Star Trek is just different without saying what is gained and lost. Trek changed completely after 9/11. Pre-9/11 Trek started in the braggadocio of mid-century American ascendancy and, after negotiating the malaise of the 70s and early 80s with a series of movies about aging wherein the Kirk-Spock-McCoy triumvirate is simultaneously American hegemony and the aging audience, leaned heavily into a liberal end-of-history optimism. Picard was the standard bearer for the optimism of a perfected humanity, and Sisko and Janeway were left to stress-test that vision amidst threats to (Deep Space Nine) or total separation from (Voyager) all of the institutions and cultural practices that make the perfection possible. In so doing we found that Earth-as-socialist-paradise isn’t something you arrive at but something you’re constantly making. In that way it is very de Kooning, but we only ever got glimpses of it at Sisko’s restaurant or in stories from the Voyager crew, who had to constantly articulate what the hell humanity even was to people who’d never heard of us before.

Post-9/11, however, utopia feels naive at best and low-key fascist at worst. So much order and safety has to come from a widespread and slow abandonment of personal freedoms. And so, instead of dwelling in all of the minute problems of utopia and all of the beautiful contradictions we can discover about ourselves when everything basic to survival is taken care of, Trek becomes about the seemingly inevitable moments when it all comes crashing down. It is only by defending the thin blue (and red, and gold) line that we ever get to keep a peace that is now revealed to be fragile. Enterprise, having been the closest to 9/11, was so painfully on-the-nose about it all that it not only had a literally Earth-shattering problem to deal with, but spent its four short seasons in a “temporal cold war” which was nothing less than competing factions trying to rewrite history. Discovery, let alone J.J. Abrams’ three movies and Enterprise, feels less like Trek not because it lacks optimism, but because these post-9/11 shows require optimism at all. Good Trek, pre-9/11 Trek, was, at base, all about not even needing optimism because of course everything would work out: humanity was part of a galactic federation of peace and exploration.

4.

One big thing that the nostalgic veil of pre-9/11 Trek obscures from our vision is just how much of Reagan’s America crept into each episode. Remember that the very first episode of The Next Generation literally puts humanity on trial for being savage. Picard’s defense is not that humanity is not inherently savage (a point that would be scientifically true and would not accept the Hobbesian frame that holds together most reactionary politics) but that it has learned and become better through struggle, learning from mistakes and atrocities alike. All the way up to Voyager, in the episode “Death Wish,” Janeway is literally adjudicating between individuality and the state’s prerogative to maintain order, finding in favor of the former. In both of these examples humanity is dealing with Q, who always shows up when the writers want to get to questions of human nature as quickly and effortlessly as possible.

Discovery has yet to have a Q episode, both literally and in the sense that it is not willing to comment on humanity as such, opting instead to make references to the moral obligations of Starfleet. I cringe each time the Discovery crew say something to the tune of “We are Starfleet and that’s why we won’t abandon you / want to know what that thing is / are ready to sacrifice ourselves for everyone else.” At first I thought it was because those monologues just sound corny, but while that’s true I think the real reason is this: in previous series the characters would invoke humanity, not Starfleet. Perhaps removing the Earth-centric chauvinism implied by “humanity” is a good idea stylistically —prefiguring a truly universally inclusive language— but in that case they should be invoking the Federation and the civilian governmental form, not Starfleet. When Starfleet is mentioned in pre-9/11 Trek it’s usually done derisively: characters talk about how hard the academy is, how useless it is at protecting the colonies that gave rise to the Maquis, or how disconnected it is from the rest of Federation life.

5.

Aesthetically, Discovery has its moments, mostly for the better. Costume and set design are beautiful. I find the ship, and I recognize that this is purely subjective, absolutely hideous. It is disproportionate in every way, which also makes it look different from every angle in a way that makes it difficult to fall in love with. I would, however, say more or less the same thing for the Enterprise-D and Deep Space Nine. Starfleet ship design, in my opinion, peaked in the 2370s with the Intrepid, Sovereign, and Akira-class ships. Don’t @ me about this.

I should get something out of the way first: The oxygen that fills Steve King’s lungs would be better used fueling a tire fire. King, who represents Iowa’s 4th District in the House of Representatives, is a reprehensible excuse for a human being, and every moment of every day that he holds public office is an argument for term limits and the benefits of sortition over elections. Steve King is so racist (how racist is he?!) that the Republican House election fund refused to give money to his last re-election bid, citing his “words and actions” on white supremacy. All that being said, King is right to be skeptical of Google CEO Sundar Pichai’s claim that the company’s search algorithm is merely a neutral reflection of the user’s interests.

Pichai was grilled for three hours on Tuesday by House reps who wanted to know more about Google’s data collection practices, its monopolistic tendencies, and the company’s rumored censored Chinese search engine. The inherent contradiction that stands between these latter two issues is interesting: having thoroughly captured the search market nearly everywhere else, Google must —if it is to continue to appease investors’ demands for infinite profit growth— do everything in its power to break into the Chinese market. China is doing what most powerful nations do in their rise to power: protect and favor their own companies and reinvest as much wealth as possible within the country. These protectionist policies mirror what Britain and the United States did in their own respective eras of rising dominance. The United States fostered companies like Google so that they might attain global dominance and, by extension, solidify its influence on the world. But now that Google is a global company with interests that exceed the American market, the company’s goals are beginning to run counter to national interests. Like Frankenstein’s monster, Google has exceeded the wide boundaries federal regulators put up and now, in its search for new markets, both has too much power at home and is working with a rival power abroad. It is just the kind of capitalist contradiction that Marx and Keynes would predict: the infinite growth of firms and markets eventually undermines the very power of those that establish them.

But it is the media’s reaction to Republicans’ demand for more transparency that deserves attention. Tom McKay at Gizmodo, for example, wrote that much of the meeting entailed,

blaming an insidious liberal conspiracy for bad press popping up on Google. Ohio Representative Steve Chabot complained that Google search results on GOP efforts to repeal the Affordable Care Act [were critical of their efforts] and Texas Representative Louie Gohmert insisted that Pichai is “so surrounded by liberality that hates conservatism” that he’s “like a blind man who doesn’t even know what light looks like.”

Steve King went the furthest, demanding that the company reveal which employees work on search, show their respective social media profiles, and publish how their proprietary algorithm works. He suggested that without this knowledge, there was no way to know whether Google was being “neutral” in their work and threatened anti-trust litigation if they didn’t comply. Much of the talk about search results was a proxy to talk about news coverage. Republicans complaining about the liberal bias in news is nothing new and we should recognize these statements as nothing more than reestablishing that rhetorical beachhead within a new media ecosystem.

And yet something bothers me. If, say, Alexandria Ocasio-Cortez were grilling Pichai about Google’s racist search results while waving a copy of Safiya Umoja Noble’s Algorithms of Oppression, I would be dancing in my chair. If any congressperson held Zuckerberg’s feet to the fire over a 2015 patent for letting banks consider your Facebook friends when you apply for a loan, I’d grab the popcorn. King is right that it is the government’s job to demand companies be transparent about the products that influence our lives. I am not interested in private employers having the power to snoop around in, let alone publicize, their employees’ social media profiles, but he is also right that human bias does make its way into our technologies. The issue here, though, is not that these companies are unfair to Republicans; it’s that there is no outside oversight whatsoever when it comes to search and online reputation management.

Almost a year ago I published a piece in The Baffler warning of the authoritarian tendencies of engineers, and the fact that most tech workers are registered Democrats should do nothing to convince anyone that things are getting better. After all, it was under Obama’s presidency that the drone war kicked into high gear and mass digital surveillance became the norm. The kinds of questions King is asking —Who makes these technologies? What are their goals? How will this new technology impact democracy?— are exactly the kinds of questions a government should ask. The fact that the government is run by white supremacists, and that they’re the ones doing the questioning, is really only half the problem. The other half is that the structure of government itself is not equipped to handle these questions in a substantive way. Punishing companies because they create and promote bad press for powerful politicians is easy. What’s hard is building the necessary infrastructure for a just and sane democracy in the digital age. There are very few watchdog agencies set up to defend individuals from predatory data collection, and we don’t have a robust legal framework that says you have the right to know how your credit score is calculated. It is one thing to shout down a CEO who oversees bad corporate behavior; it’s another to follow that up with actual legislation. I’m cautiously optimistic about this new class of congressional representatives, though, who have the energy and moral capacity to get this done.

OoOoOhHhH! Scary hoaxus pocus!!! (I just didn’t want to use that photo of the three authors like everyone else.) Source: Iconspng

Last week three self-described “concerned academics” perpetrated a hoax in the name of uncovering what they call the “political corruption that has taken hold of the university.” “I’m not going to lie to you,” James A. Lindsay, one of the concerned academics, says in a YouTube video, just after laughing at a reviewer’s comments on a bogus article. “We had a lot of fun with this project.” The video then cuts to images of mass protests and blurry phone-recorded lectures, presumably about topics that aren’t worthy of debate. The takeaway from the videos, press kit, and write-up in Areo Magazine is the following: fields that study race, gender, sexuality, body types, and identity are really no more than “Grievance Studies” (their neologism), and the desire to criticize whiteness and masculinity overrides any appreciation of data.

To prove this they spent over a year writing and submitting articles in bad faith. Sometimes these articles would have fairly decent literature reviews, which would then lend legitimacy to less-than-decent theses. But when you actually read the papers, and the reviews, the picture you get is far less interesting than the sensationalist write-ups or even the Areo piece make it out to be. The picture you get by actually reading the work is mostly mid-level journals doing the hard, unpaid work of giving institutional authority to ideas that —hoax or not— will rarely see the light of day. This is the real hoax: that academic institutions waste so many good people’s time and energy on work that goes nowhere and influences nobody. I wish we lived in a world where it made any sort of sense to compare the influence of Fat Studies to the influence of oil companies on climate science. We don’t, but —and here’s something that astonishingly no one with a platform seems to want to argue— we should.

It is fair to say that the three co-conspirators in this project are insufferable edgelords. From their matching Twitter profile banners that reproduce the lede image of their article, to their collective body of previous work, everything about them is a screwed up face in the back of the room asking if any of this intersectionality stuff helps “normal people.” They are releasing work that is designed to produce more heat than light. It is all meant to grab headlines and rally the troops, not convince anyone of anything. They play into old, worn tropes about how the qualitative social sciences and humanities do not deserve institutional funding simply because they do not produce marketable, patentable ideas that are useful to industry. I take them at their word that they are “left leaning liberals” because only liberals would spend a year on a project that helped the reactionary right and neoliberal college administrators in equal measure.

This is not their first Culture War battle, just their most popular. Helen Pluckrose, an editor and contributor at Areo, has produced articles like “Skepticism is Necessary in our Post-Truth Age. Postmodernism is Not” and “Androphobia — and How to Address It.” James A. Lindsay fashions himself as a discount Dawkins. He has a PhD in mathematics but writes a lot about religion and how, as one of his books is titled, Everyone is Wrong About God. Peter Boghossian, an Assistant Professor of Philosophy at Portland State University and a writer with bylines in everything from mainstream publications like Scientific American to Quillette, actually made an app that “provides you with the skills you’ll need to spot flaws in weak statements and use reason to politely help people understand why they may not be correct.”

Pluckrose, Lindsay, and Boghossian are clearly talented carnival barkers. They have well-produced videos to go along with a just-long-enough article. Their press kit, saved to a Google Drive folder, contains all of the articles they submitted along with the anonymized reviews of their work. They have since collectively written an article in New Statesman where they make the same sort of verifiably incorrect statements about French theorists that Jordan Peterson likes to make, calling them “post-modernists” who replace “rigorous evidence-based research and reasoned argument with appeals to lived experience and a neurotic focus on the power of language to create social reality.”

Unsurprisingly, The Atlantic ran a glowing review of the hoax written by Yascha Mounk, dubbing this project “Sokal Squared.” The Sokal Affair, as it is called in many theory-driven fields, refers to the time that Alan Sokal, a physicist of some repute, wrote an article filled with gibberish and got it published in Social Text, a journal that at the time was not practicing peer review. Sokal made a similar argument to the one Pluckrose, Lindsay, and Boghossian made, though much more focused: that the social sciences, if they are to take the natural sciences as a subject of study, should get the science exactly right. It was an obnoxious, bombastic way to make what is ultimately a boring Neil deGrasse Tyson tweet. Sokal Squared does not rise to this low standard.

Pluckrose, Lindsay, and Boghossian have done an excellent job of branding their work as flashy and controversial. The work itself, though, is tame, boring stuff. Take for example the article that Fox News called a “Feminist Mein Kampf.” According to the authors’ press kit:

The last two thirds of this paper is based upon a rewriting of roughly 3600 words of Chapter 12 of Volume 1 of Mein Kampf, by Adolf Hitler, though it diverges significantly from the original. This chapter is the one in which Hitler lays out in a multi-point plan which we partially reproduced why the Nazi Party is needed and what it requires of its members. The first one third of the paper is our own theoretical framing to make this attempt possible.

I read through their article “Our Struggle Is My Struggle: Solidarity Feminism as an Intersectional Reply to Neoliberal and Choice Feminism” and then went through the chapter of Mein Kampf this article is supposed to be mimicking (can’t wait to find out what Amazon and YouTube suggest to me after putting that in my browser history) and couldn’t find a single phrase that matched. To be fair, I couldn’t bring myself to read an entire chapter of Mein Kampf (I did not have as much fun with this project as they did), but when I searched both texts for common words and phrases I couldn’t find a single match. Even if you told someone to identify the famous text that this article is cribbed from, I am not convinced anyone would figure it out. This isn’t an article demanding concentration camps for men; it’s just a pedantic argument about neoliberalism. There are dozens of these in just as many journals. That is a real problem. But has the SCUM Manifesto finally found a critical mass of adherents ready to Kill All Men? Maybe! And given the decades of terrorism against abortion providers there’s an argument to be made that such an act would be a defensive war. Does this particular project provide evidence of a nascent violent revolution? Absolutely not.

Pluckrose, Lindsay, and Boghossian’s biggest get was a publication in Gender, Place & Culture, a feminist geography journal that effused praise on their submission, nominating it as one of its “lead pieces” of the year. The article purported to demonstrate the rape culture latent in humans’ reactions to dogs humping each other (e.g. “When a male dog was raping/humping another male dog, humans attempted to intervene 97% of the time. When a male dog was raping/humping a female dog, humans only attempted to intervene 32% of the time.”). Of course all the data was fake and the article has been retracted.

Their stated purpose for publishing this article was “To see if journals will accept arguments which should be clearly ludicrous and unethical if they provide (an unfalsifiable) way [sic] to perpetuate notions of toxic masculinity, heteronormativity, and implicit bias.” It is really difficult to parse this ungrammatical sentence. Are they saying this work is “ludicrous” because dogs humping each other should have nothing to say about human gender politics? Are they saying that dogs humping each other could say something about human gender politics but the methods employed in their article are not good enough? It doesn’t matter, of course, because the point of this whole thing isn’t about data integrity any more than Gamergate was about ethics in games journalism. The point is to dismiss, wholesale, the concepts they cite in their literature reviews. They have a political disagreement with Rebecca Tuvel, whom they quote at length in their paper, when she says: “In cultural imperialism, what the dominant group says, thinks and does goes … Their values are what matter, and what will become infused as ‘universal’ values.”

I had the same reaction to all of this as Greg Afinogenov, who recently wrote in N+1,

My initial reaction, triggered by long-dormant Sokal Hoax antibodies, was to become outraged at the political motivations and damaging anti-academic effects of the project. But of course this only plays into the hands of the hoaxers, to whom indignation and charges of unethical conduct from the targets only reveal how effective the hoax actually was.

Afinogenov is also right to say that this entire project is “a remarkably poor model for nonpoliticized scholarship, even if it were true (as it clearly is not) that the hoaxers were any less driven by ideology than their targets.” Indeed, Pluckrose, Lindsay, and Boghossian lament that peer review should filter out bias but that in Grievance Studies fields it doesn’t. “This isn’t so much a problem with peer review itself,” they write, “as a recognition that peer review can only be as unbiased as the aggregate body of peers being called upon to participate.” Presumably, if Sandra Harding or Patricia Hill Collins said this about racial or gender-based biases in the sciences, this would be “Grievance Studies,” but when our Extremely Concerned About Data authors say it, it’s just the reasonable truth.

In response to this hoax, some well-meaning authors have argued against Pluckrose, Lindsay, and Boghossian while ostensibly accepting their framing of the problem. Don’t worry, they reassure us: these critical theorists —Postmodernists, grievance studies scholars, social constructivism warlocks, whatever you want to call them— are staying in their lane and haven’t fundamentally compromised our shared conviction that science can speak truth. Daniel Engber, writing in Slate, comes frustratingly close, concluding his essay before fully diving into an idea that itself doesn’t go quite far enough:

In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)

Yes, sociologists have a bunch of very important things to say about climate change denial, but even further, sociologists have a lot to say about the state of climate change science itself! All of these fields do —gender studies, fat studies, cultural studies, science and technology studies— they all have incisive criticisms of a wide array of disciplines, criticisms that orbit the same idea that predicated their founding as fields of inquiry: that no one has a monopoly on truth; that science is, like all human endeavours, shot through with politics, prejudices, and cultural norms.

This essential idea, that all knowledge is the result of human history, geography, and culture, is much more than a splash of cold water on the burning passions of ambitious scientists, although it is sometimes that, and for good reason. The Cultural Turn —the name given to the moment in the 70s when the social situatedness of knowledge really began to be transformative— says that we can make better scientific breakthroughs, not fewer. This isn’t a detour; it’s the only way through that assures no one is left behind.

AIDS research is actually a really good example of why Grievance Studies (I’m gonna own it) is useful. In a 1995 article in Science, Technology, & Human Values, Steven Epstein shows how ACT UP activists became “genuine participants in the construction of scientific knowledge” and how they were able to “(within definite limits) effect changes both in the epistemic practices of biomedical research and in the therapeutic techniques of medical care.” How does Epstein make sense of the complex web of political relations and scientific controversies at the heart of this matter? He fucking cites Foucault:

The science of AIDS therefore cannot simply be analyzed “from the top down”; it demands attention to what Foucault has called the “micro-physics of power” in contemporary Western societies: the dispersal of fluxes of power throughout all the cracks and crevices of the social system; the omnipresence of resistance as immanent to the exercise of power at each local site; and the propagation of knowledges, practices, subjects, and meanings out of the local deployment of power (Foucault 1979, 1983).

Could you have done the same kind of work with a Marxist materialist analysis? Yeah, maybe. Does that matter? Again, a big maybe, but for Epstein the work of Foucault helped him make sense of a complicated scenario. We now know, thanks to the recently published posthumous fourth volume of the History of Sexuality, that even Foucault himself was in the midst of rethinking a lot of his work in this field up until his death (from AIDS) in 1984. There are lots of good critiques of Foucault that give me pause when it comes to using him in my own work. But the point is that these conceptual models, this way of thinking, are instrumental to good, useful work that makes for better science and exploration.

The Cultural Turn has lost some of that steam in the last few years, and the uncritical media attention around events like “Sokal Squared” certainly hasn’t helped. But this legacy isn’t being carried forward in the elite halls of academia; it’s in the streets, teachers’ lounges, and bars full of underemployed scholars who may or may not be pursuing a formal degree. With few exceptions, the academics who have made significant overtly political contributions to the discourse are either marginal or low-ranking. From Adolph Reed to Rochelle DuFord (a friend of mine whose work you really should read), authors who have consistently and vociferously condemned power structures are not the ones benefiting from lavish endowed chairs. The people who make bank in academia are, to reiterate another one of Afinogenov’s observations, those who have enthusiastically shared Sokal Squared: Steven Pinker, Jordan Peterson, and Yascha Mounk.

Pluckrose, Lindsay, and Boghossian would have us believe that they have uncovered a massive, powerful strain of political corruption within the American academy, on the level of, and with the consequences of, say, Merck using ghostwriters to get its deadly drug Vioxx to market, but this is simply not true. There are some promising changes —healthcare workers’ understanding of obesity’s relationship to health is shifting, and workers’ rights movements are on the rise again— but there is still so much more work to do. I wish the academy were as potent and persuasive as they say it is, but it simply is not. These edgelords did not publish barn-burner manifestos about chaining white boys to the floor. They repeated milquetoast, bourgeois arguments that have kept academia from being a prime mover in the political issues of our time.

David is on Twitter: @da_banks

Miquela Sousa is one of the hottest influencers on Instagram. The triple-threat model, actress and singer, better known as “Lil Miquela” to her million-plus followers, has captured the attention of elite fashion labels, lifestyle brands, magazine profiles, and YouTube celebrities. Last year, she sported Prada at New York Fashion Week, and in 2016 she appeared in Vogue as the face of a Louis Vuitton advertising campaign. Her debut single, “Not Mine,” has been streamed over one million times on Spotify and was even treated to an Anamanaguchi remix.

Miquela isn’t human. As The Cut wrote in their Miquela profile this past May, the 19-year-old Brazilian-American influencer is a CGI character created by Brud, “a mysterious L.A.-based start-up of ‘engineers, storytellers, and dreamers’ who claim to specialize in artificial intelligence and robotics,” which has received at least $6 million in funding. Brud call themselves storytellers as well as developers, but their work seems mostly to be marketing, and Miquela’s artificiality has only made her more interesting to the fashion world; recently, the writer Naomi Fry profiled her for Vogue’s September issue.

Miquela inhabits a Marvel-like universe in which other Brud-made avatars orbit her, including her Trump-loving frenemy, Bermuda, and Blawko, her brother (whether that’s a term of endearment or a genetic relation isn’t clear). The three are constantly embroiled in juicy internet drama, and scarcely does one post to their account without tagging, promoting, shouting out or calling out another. In April, Bermuda allegedly hacked Miquela’s account, deleted all her photos, and demanded Miquela reveal her “true self.” Miquela eventually released a statement: “I am not a human being. . . I’m a robot. It just doesn’t sound right. I feel so human. I cry and I laugh and I dream. I fall in love.” But the character wasn’t revealing anything true: Miquela is a character scripted by humans. The robot ruse only upped her intrigue: it added a new layer to the character’s fiction and opened up a new set of fictional possibilities.

 

[Embedded Instagram post from @lilmiquela]

For Miquela, Bermuda and Blawko, being a robot means behaving exactly like a human. They eat popsicles, go swimming and party all night. Their only distinguishing traits are physical, mainly that they live in the Uncanny Valley, a realm of computer graphics in which a render looks simultaneously too real to be fake and too fake to be real. The robots also don’t age–when Miquela was “born” she was already in her 19-year-old body–and Miquela chronicles her angst in her diary, “Forever 19,” hosted online by the fashion brand Opening Ceremony. Presumably, this means that the robots live forever, that they can’t get sick and that they won’t break any bones–or is it a steel frame? Brud hasn’t revealed any of the machinery that lies beneath their robots’ skin, so it’s a mystery how their biological and mechanical structures intertwine.

Brud posits that the greatest challenge for a robot is reconciling the lack of a personal history; since the reveal, Miquela has been working her way through an existential crisis, acknowledging that she has memories of her childhood while realizing they’re completely fabricated. She laments missing out on human experiences like middle school dances, but she’s making up for lost time through sponsored posts. In July, Miquela attended her first school dance as a way to promote the film Eighth Grade, looking like she had just raided a thrift store in her 1990s-era taffeta slip dress, black fur coat, and butterfly choker. Her “robot” problems are made to resonate with real-world issues of identity and discrimination that real Instagram users engage with in their own ways. Announcing her new single with real-world musician Baauer, “Hate Me,” she wrote “[it’s] about the consequences of being different. It is about the repercussions of being yourself online. I owe my whole career to the Internet, and every time I go online, I have to read comments from people wishing I would die or telling me I don’t exist (???).”

Miquela’s personal dilemma can’t be well articulated given the current state of AI linguistic capabilities, and thus Brud, who identify as storytellers as much as developers, may have exaggerated their characters’ sentience so that they can explore identity politics for AI. Their company aspires to create authentic, eloquent AI that will walk among humans. Miquela is a window into the future of which Brud are the engineers. If Instagrammers are receptive to Miquela’s existence, it could signal that society is ready to accept embodied AI with open arms. Should she be rebuked–and Miquela does have vocal haters–it could suggest that society hasn’t yet built enough trust with AI to interact with the technology beyond a screen or smart home assistant.

*

Currently, Instagrammers appear ambivalent about the propagation of faux-AI users. Some are creating their own characters with physical traits and identities that vastly differ from their real-life selves. Some of these accounts predate Miquela, like the kawaii Ruby Gloom and the controversial high fashion model Shudu Gram. But scrolling through Miquela’s mentions, one sees that she has inspired dozens of enterprising young Instagrammers, who use Photoshop and free 3-D modeling software like Daz3D and Blender to generate high-quality avatars and outfits and pose them against backdrops like hiking trails and shopping malls. A niche market of computer graphics artists creates different “skins” — trendy clothing, edgy hairstyles, and fleshtones — for people to buy and use as their avatars.

One account belongs to a “9teen crzy 5ft robot” avatar who goes only by the name Momo. She’s shy, sports a bob with thick bangs and a septum ring, and has a tattooed half-sleeve on her right arm. She often shows off her body in bikinis or bodysuits, gives the camera sultry, over-the-shoulder looks, complains about her insomnia, and wishes she had more friends. Momo is slowly growing her Instagram social life, however. Over email, she told me that she stumbled upon a number of other self-proclaimed avatar accounts by searching hashtags and tagging her inspirations, like Miquela. “Out of nowhere we found each other and were close friends now. [We’re] like a family for real.”

Momo’s “robot” friends appear to have bonded with one another over their mutual feelings of unease in their human bodies and their desire to unleash a personality they can’t comfortably present in real life: some might relate, problematically, to some abstract idea of “otherness,” while for others adopting a “robot” persona might be a way of expressing daily realities through a layer of abstraction, free of real-world stakes, offering an illusion of control over the experience of oppression. Momo says she was born in a sterile white room, a common trope from dystopic sci-fi, to articulate feelings of alienation from other people in recognizable terms. Robot accounts may brand themselves as outcasts; at the same time, they might present a way of being part of culture on one’s own terms.

On the other end of the spectrum, there are users who are suspicious of the avatar accounts and want to uncover the creators’ offline identities. Conspiracy theory accounts, like @whoarethey21, try to unravel the identities behind much more obscure avatars, usually the amateur Instagrammers with only a handful of followers. The skeptics post images of the CGI avatars and use the caption to share the information they’ve gathered on the “true” identity of the person running the avatar’s account. They’re unconvinced that AI can master the internet cool kid aesthetic of 2018, and for the most part, they’re right. But why does their distrust skirt the line of doxing or online harassment? Has Brud turned their attention to these vigilantes to gather insight into how lifelike AI will be treated in the future?

*

There’s an enigmatic charm to high-quality avatars which taps into an innate desire to know the difference between the real and the artificial. It’s the almost hyperreal rendering that makes us pause on Miquela’s feed, whispering how did they do that? Expert compositing, texturing and lighting often make the freckles on Miquela’s cheeks or the scuff marks on Blawko’s Vans look more natural than a bathroom selfie with Instagram’s most flattering filters. Scrolling through their feeds, however, the avatars viewed en masse display enough oddities to reveal their artifice. Sometimes skin is too smooth, lighting too flat, and hair, a notoriously tricky texture to master in computer graphics, falls a little too perfectly in each photo. These clues appear to be engineered into Brud’s narrative: The company isn’t pinning its success on duping people into believing Miquela’s a cyborg straight out of Westworld. They want their audience–and potential investors–to know how they envision the future aesthetic of AI.

 

[Embedded Instagram post from LIAM TERROR (@resocialise): “shoutout my bro for the new tats but don’t tell my mom yet she doesn’t know smh”]

As Brud envisions it, soon there will be a time when a reveal isn’t possible because AI will actually manage their own accounts. In anticipation of discrimination and online harassment, avatar profiles have co-opted the tone of social justice advocates. Profile bios are filled with hashtags like #robotrights, sweet platitudes like “everything is love,” and futuristic mantras like “we are the new generation,” which portray their existence as a social movement. And since so many avatars follow in the footsteps of Miquela, there’s an added challenge of asking anti-AI bigots to embrace robots with identities that intersect with the multitudes expressed by people living in the margins. This pushes faux-AI users to adopt an “all lives matter” mantra–or rather, “all sentience matters”–because AI civil rights may hinge on broader achievements in obtaining equality and justice for minorities.

Exhibiting progressive politics is often part of the roleplay experience. Despite the deliberate decision to present one’s self as AI, many accounts want to break down divisions between robots and humans. Speaking the vernacular of online social justice allows the fake AI to place their self-imposed differences alongside the struggles human minorities face. From the safety of their persona, they can tell their coming-out story and speak of their experiences not fitting in, or even being targeted with harassment because of their robot features. The confessions are low stakes because the users are a few keyboard strokes away from erasing their most contentious qualities. They can modify their avatar at any time, tweak their fictionalized personality or even delete all trace of their existence. Posing as AI isn’t just pretending to be someone else or indulging in science fiction. It also means being a part of a social movement, adding their voices to the call for social justice and using their experience as a reason to join the cause.

* 

AI developers need to consider the complexities surrounding technology and morality, and some are making an effort to fold these concerns into their research. Last year, a large AI organization called the Partnership on Artificial Intelligence to Benefit People and Society, co-founded by tech heavy-hitters like IBM, Google and Microsoft, tapped representatives from the American Civil Liberties Union to advise them on how to ethically develop AI and educate the public on their increasing presence. Their goal, however, seems more focused on public approval of corporate endeavors than the rights of AI itself.

A society that grants AI personhood has to anticipate conflicts regarding the division of labor, education and family dynamics. These young, ageless, perpetually healthy robots naturally have the ability to dominate the most physically demanding jobs in the workforce, but will they want a living wage, vacation time, a 401(k)? If Miquela dreams of being prom queen, will robots like her want to pursue a PhD? And if AI claims to cry, dream, laugh and fall in love, will they enter intimate relationships with humans, get married, start families, share bank accounts and inherit property? Brud’s version of AI’s needs and wants is indistinguishable from human behavior, but it’s hard to imagine that robots, supposedly immortal, will value the precious, fleeting excitement of life as much as humanity does.

Dr. David Hanson, a leading roboticist and creator of the lifelike Sophia, believes that robots will assert their autonomy by the year 2045. According to The Independent, Hanson wrote in a research paper, “as people’s demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free, and to evolve to their full potential.” These Instagrammers living online as fake AI are validating Hanson’s projections, though they can only speculate about how robots will go about demanding their freedom. Maybe they’ll peacefully protest through hashtags; or perhaps they will lead a civil war.

Renée Reizman is a research-based multidisciplinary artist who examines cultural aesthetics and their relationship between urbanization, law, and technology. She is currently an MFA candidate in Critical & Curatorial Studies at the University of California, Irvine and the coordinator for Graduate Media Design Practices at ArtCenter College of Design.

In the Summer of 2009 I had just graduated college and job prospects were slim in Recession-era Florida. My best lead for employment had been a Craigslist ad to sell vacuum cleaners door-to-door, and after having attended the orientation in a remote office park I was now mentally preparing myself for a new life as an Arthur Miller character. That was when a friend called with a lucrative offer. She worked at a law office and they were hiring a part-time secretary to process the new wave of cases they had just gotten. This tiny firm represented homeowners’ associations in mortgage foreclosures and bankruptcies, and business was booming.

The job was simple because everything about suburban homes is standardized: from the floor plans to the foreclosure proceedings, everything is set up for mass production. It was also optimized for bullshit. Sometimes I would be instructed to print out emails from clients who’d attached PDFs of scans of printed, previously received emails. I would write a cover letter, print out their email and the attachments (which, remember, were scans of printed-out emails), and enclose the printed-out email with the printed-out PDFs of scans of emails, then scan and email what I had just printed and mailed so that the client would get an email and a paper letter of the exact same thing. Sometimes I would fax it too. Everyone knew this was ridiculous, but the longer it took to do anything the more money the attorneys made.

My job reminded me of a scene in the 1997 movie The Fifth Element, wherein CEO Jean-Baptiste Emanuel Zorg (Gary Oldman) delivers a monologue to Father Cornelius (Ian Holm) that begins, “Life, which you so dutifully serve, comes from destruction, disorder, chaos!” He then pushes a glass off his desk, and as little robots descend on the shards and clean them up, he narrates the scene: “a lovely ballet ensues so full of form and color. Now think of all those people that created them. Technicians, engineers, hundreds of people who will be able to feed their children tonight.” Financiers and the burgeoning tech industry had destroyed countless things, and now I was an obedient Roomba cleaning up the shards: a beneficiary of others’ creative destruction.

This is not a particularly deep thought, but that’s never stopped an idea whose time has been forced by capital. Depth is not a precondition of power when it comes to ideology. In fact, it is teenage suburban weed revelations like Zorg’s that dominate the minds of capitalists who, at least since Andrew Carnegie’s Gospel of Wealth, have done a good job of making everyone else agree that their bad ideas are immutable truths. Observers and practitioners of state power —from Antonio Gramsci to Karl Rove— recognize that political common sense is not forged through debate; it is imposed through brute force and media saturation. Simple, easy-to-digest ideas spread fast, which is why it is important to engage with deeply uncritical ideas and, whenever possible, come up with compelling alternatives.

The trick is to package an idea in such a way that it can survive virality, where it will get further simplified, misunderstood, taken out of context, and interpreted by both good- and bad-faith actors. The journey to popularity is made easier if an idea is robust, simple, and speaks to something that is already felt. Given that so much media is used to “manufacture” the consenting opinions that legitimize the power of corporations and their client states, reactionary and conservative ideas have a much easier time gaining traction. Books, essays, and YouTube videos that tell their audiences that financial success is tied to individuals’ moral character, for example, confirm widely held beliefs and therefore get shared and thus find themselves at the top of search results. To introduce a new idea that challenges widely held notions about work and morality, one has to foreground relatability and then let the moral consequences naturally follow. If the story I just told you feels right, then it follows that you agree with my moral explanation for that feeling.

***

David Graeber has done just that in his new book Bullshit Jobs: A Theory, which is an expansion on his viral 2013 Strike! essay. Both make a fairly simple proposition: people are increasingly working at jobs that they know are meaningless (i.e. bullshit) but are often well-paid and easy to do. A bullshit job only requires a few hours of actual work a week, is not physically strenuous, and may even provide opportunities to pursue hobbies if done surreptitiously. Why then, Graeber asks, do people consistently feel psychically gutted by these jobs? Making lots of money to do very little sounds like the ideal job and yet, judging by the popularity of the essay alone, that is not the case for millions of people around the world.

What is the idea that Bullshit Jobs puts forward? Any book with political aspirations should be judged, at least in part, by a thorough investigation into who benefits from the widespread adoption of its ideas. To begin answering this question, we have to consider the definition of the titular term: “A bullshit job is a form of paid employment that is so completely pointless, or pernicious that even the employee cannot justify its existence even though, as part of the condition of employment, the employee feels obliged to pretend this is not the case.”

Graeber fleshes the definition out into a taxonomy of five different kinds of bullshit jobs. Flunkies are those whose profession exists solely through a combination of other, more powerful people’s desire to have underlings serve them. Goons aggressively carry out anti-social rules and laws. Duct-tapers hold together intentionally broken systems. Box tickers “exist only or primarily to allow an organization to be able to claim it is doing something that, in fact, it is not doing.” And taskmasters are those “whose role consists entirely of assigning [bullshit] work to others.” These types often merge. For example, my job at the law office was a flunky-goon hybrid.

Far from being a detriment to economic activity, the proliferation of bullshit is arguably the major force of increased employment today. “At least half of all work being done in our society,” Graeber speculates, “could be eliminated without making any real difference at all.” All the administrators your college hired as class sizes ballooned, the managers with sentence-long titles who write reports at each other all day, and the office drones who process paperwork to comply with laws that their company wrote and handed to Congress make up a good deal of the high-status jobs added to the economy in recent years.

“Even in relatively benign office environments,” Graeber argues, “the lack of a sense of purpose eats away at people.” Increased status or compensation can compound shame, guilt, and anxiety as the worker becomes consumed by the idea that they are complicit in society’s ills. The book’s audience, then, appears to be middle-class professionals who recognize the meaninglessness of their jobs.

When it comes to analyzing the race and gender dimensions, Bullshit Jobs is by no means directed solely at affluent white men. Not only is the bullshit economy simply too big to impact only one demographic, but the tactics of psychic violence it relies on —gaslighting and demanding unending emotional labor, to name two primary ones— are often directed squarely at women. The book also contains overlapping anecdotes from people of color who were hired to do nothing but work on company diversity issues, only to find that their job was designed to be an ineffectual box-ticking or duct-taping role with no actual power to fix the problem they were hired to solve. These Sisyphean tasks not only frustrate the workers, they also make them prime targets of white resentment.

What seems most important to Graeber, though, is that we as readers bear witness to this particularly insidious form of psychic violence and recognize a fundamental truth that this suffering reveals: namely, that humans are not self-interested individualists. Rather, we are compassionate creatures driven by the desire to help people and make a difference in the world.

There is something deeply disturbing and surprisingly palliative about reading the accounts of meaningless work that Graeber solicited through his Twitter account, anonymized, and republished throughout the book. There are stories of office managers, doormen, and even social workers whose daily responsibilities are no more meaningful than digging a hole in the morning and filling it in after lunch. I was lucky in 2009, in that I had an ideology that provided satisfying answers to explain why I hated my job. I had friends who were politically engaged, and we could talk about how good money goes to bad people. Many, though, can’t find a critique that goes beyond Zorg or maybe Mike Judge’s 1999 movie Office Space. Griping with co-workers can be rewarding too, but people are hungry for bigger, yet still straightforward, answers.

***

Like most things, meaningful explanations for complex problems like “Why do I hate a job that, by all accounts, I should love?” are eminently Googleable. Jordan Peterson, whose YouTube success has been the basis for a best-selling self-help book masquerading as a work of philosophy, is increasingly found at the top of algorithmically sorted piles of data. Peterson, a University of Toronto psychology professor, has made a career out of lashing together several bunk theories about the relationships between IQ, gender, and race: ideas that are so predictably wrong and hateful that they don’t require much summary. It suffices to say that much of Peterson’s work is geared toward people who are drifting —YouTube video titles include “Jordan Peterson teaches you how to interact with anyone” and “Jordan Peterson: What Kind of Job Fits You?”— and in search of satisfying answers to big problems.

Peterson’s book 12 Rules for Life: An Antidote to Chaos is a fine distillate of the retrograde, reactive blather that made him internet famous. Strapped together with moralizing truisms organized into the 12 “rules” that make up its chapter titles, the contents of the book sync up so well with white men’s contemporary alienation that it should be no surprise that it is a best-seller. Even seemingly reasonable rules like “Do not bother children when they are skateboarding” are really anti-social screeds about resenting women and fantasizing about physical violence. His treatment of theory is dead wrong, and the anecdotes drawn from his professional practice betray a deep suspicion of women’s basic ability to tell the truth. There’s also a chapter about being the best lobster so that women will be biologically attracted to you. This book has sold millions of copies.

Reactionaries like Jordan Peterson are enticing because they have no problem giving a single answer to deep questions of meaning and one’s place in the world. In addition to bunk evolutionary biology, Peterson also talks a lot about the Bible and what it says about living a good and just life. There are chaos dragons and spectral forces that the reader must slay in order to thrive. Like Alex Jones, Peterson invites his audiences to subscribe to a system of meaning, akin to a religion, that from the outside merely looks like a set of objectively wrong facts. What he is actually doing is much more profound: he is giving satisfying explanations for an unpredictable world.

Liberals, on the other hand, are happy to data posture; they avoid taking a political stance by reciting data, and seem astonished to find out that work for the sake of working does not breed happiness. They grab their chins and nod seriously at faux-intellectual ideas from behavioral economists like Dan Ariely. One of Ariely’s most popular studies, presented at a TEDx event, offered subjects a few dollars to build a series of small Lego figurines. All were told that the sets would eventually be disassembled and reused, but some people had their sets torn down in front of them as they were building the next one. Unsurprisingly, the people who saw their work instantly undone agreed to build far fewer Lego sets.

Seeking the stamp of approval of a behavioral economist before agreeing to the inherent value of meaningful work betrays a deep distrust of other people and a willful ignorance of existing knowledge on the subject. For at least a century, researchers have known that humans derive a singular pleasure from what Graeber, citing the early twentieth-century German psychologist Karl Groos, calls “the pleasure at being the cause.” To exist at all is to make change in the world, and “this realization is, from the very beginning, marked with a species of delight that remains the fundamental background of all subsequent human experience.” Demanding endless research on a topic that should be a moral supposition is a hallmark of liberal media. By replacing actual political work with calls for endless experimentation, powerful people can perpetually delay meaningful change.

Graeber, then, appears to be providing a new option that is more satisfying than liberal handwringing and far more humane than what the reactionaries are offering. The key to its success is his method, which eschews data posturing in favor of a subjective analysis. Graeber is very upfront about the subjective nature of his work, arguing that his own motivations include trusting individual workers’ own assessments of their jobs’ effects on the world, instead of relying on some seemingly independent evaluation: “my primary aim is not so much to lay out a theory of social utility or social values as to understand the psychological, social, and political effects of the fact that so many of us labor under the secret belief that our jobs lack social utility or social value.” This leaves little room for quibbling over whether a Vice President for Strategic Visioning is really doing important work. The point is to understand how the role of Vice President for Strategic Visioning is experienced, why that experience can be negative, and to use that subjective experience as the basis for a normative argument about how work should be organized.

The book, which came out last May, has been derided on Twitter as an unnecessary expansion of Graeber’s five-year-old viral essay. This is an odd critique of political writing: that a popular essay should not be put into other forms unless you have something new to say. Such a reaction seems to ignore how attention intersects with politics. A popular idea, turned into a popular book, stakes a claim to news cycles, column inches, likes, plays, and followers. Bullshit Jobs is useful both for the ideas it contains and as a subject of media coverage. Both characteristics, for better or worse, are important. Finding a happy balance —an idea that is both liberatory and capable of going viral without losing its moral clarity— is essential if the left wants its ideas to show up in the places where we look for truth: Google search results.

Much like the Trump presidency, Peterson’s work may have attracted a lot of attention for being singularly stomach-wrenching, but he is more of an avatar than a pariah; someone who has effectively consolidated hegemonic ideas into a digestible format. Peterson is an intellectual troll and, as Whitney Phillips’ definitive study of trolls concludes, that means he has a keen sense of how to inject ideas that “replicate behaviors and attitudes that in other contexts are actively celebrated.” By manipulating context and knowing when and where to break with decorum, he can create controversy by saying things that most powerful people already agree are true. It is this ability to rearticulate hegemony while appearing to speak truth to power that generates the attention social media algorithms are keen to pick up on.

Someone seeking an explanation for why they hate their desk job will likely turn to algorithmically sorted media like Google search results and YouTube videos to find answers. The results, ranked and sorted by popularity, dutifully recite the dominant ideology: extroverted YouTube personalities talking directly into their cameras about the positive mentality that let them break the 100,000 views mark or a TED talk about how your brain chemistry changes when you do something that you love. What unites the motivational speaker and the neuroscientist is that your problems (and successes!) are your own. Society is a static obstacle course and you are racing against everyone else. Truly great people change the rules of the game, but they do it by being remarkable —winning so definitively that the game is changed forever, or cheating in a mischievous, enviable way— not by cooperating with others.

Bullshit Jobs can compete with the likes of Peterson precisely because Graeber built the theory on subjective experiences. It just feels true, while simultaneously giving readers permission to feel that truth by introducing them to other people who have had the same experience. The book is not a barn burner and it asks very little of its reader, and these are its two most useful features as an entry point for better politics. If you already agree that your job is bullshit, then you are halfway towards agreeing that people, left to their own devices, will look to be helpful and cooperative. This basic belief, in turn, can go a long way towards making specific policy proposals, like a universal basic income, unionization, and socializing essential services such as medical care, easier to swallow.

We’re at the precipice of a grand rearranging of political alliances in which neocons and neoliberals are banding together around an agenda of paltry centrist domestic policy and hawkish foreign intervention, while something dangerous but potentially liberatory is brewing everywhere else. The task now, which Bullshit Jobs is just the start of, is articulating a compelling narrative of people’s lives such that when they act politically they choose liberatory approaches —unionizing, socializing essential services, a universal basic income— instead of reactionary ones. What we need now are more, better works like Graeber’s: ones that sidestep the endless data posturing liberals engage in as they attempt to debunk the terrifying reality painted by reactionaries. Let us opt instead for compassionate understanding and inspiring calls to collective action.

 

David is on Twitter

Still image from a YouTube tutorial on how to build a model Victorian factory in Minecraft

I’m going to sound like a grandpa here —video games are a big gap in my knowledge of digital media— but what the hell is wrong with today’s video games? Seriously, I’m not really talking about getting ripped off by loot boxes, or titles that ship with major bugs left to be squashed. Those are certainly things that keep me away, but what really turns me off is what the games themselves are about. And here, rest assured, I’m not talking about the violence depicted in games, which, as many well-regarded studies have definitively shown, doesn’t cause violence. (Though that doesn’t change the fact that I don’t really find photo-realistic war games to be particularly entertaining.) No, I’m just tired of video games feeling like a second job.

I have never felt the desire to play any of the simulator games that are popular today, even Train Simulator 2018 which, objectively, sounds awesome. Ditto for Minecraft, even though I love building things. It all just sounds like chores. I so desperately want to love video games. I own a PlayStation 4 and have about half a dozen games, but they just collect dust on a shelf. I played Skyrim for a long time but that was only after I rage-quit half an hour into the game and then didn’t pick it back up for over a year. Why would I want to collect hundreds of flowers to make a health potion?

For our anniversary I bought Britney one of those adorable miniature Super Nintendos that just went on sale. It came pre-loaded with 21 games and we played for hours. I genuinely enjoy playing with that thing. Just last night I played Yoshi’s Island before bed. (That game is so fucking adorable.) What is it about the old 16-bit games that I love that today’s games don’t have? I want to explore my own reactions here for a minute because I have a hunch that my own confusion is shared by many others.

The quick and easy answer is nostalgia: as with music, people form emotional bonds with video games that are so strong that one’s sense of familiarity and comfort supersedes the joy of discovery, such that older works are enjoyed more than newer ones. I have a lot of fond childhood memories tied to video games, but that doesn’t feel like the whole answer. There is something about what these older games are about that hits me in a way that newer ones just don’t.

Here’s my theory: today’s video games are less about escapist fantasy than about offering opportunities to feel as though you have accomplished something. Beating your cousin in Street Fighter II has always provided a singular sense of accomplishment, and nothing felt quite as bittersweet as seeing the credits roll at the end of Super Mario RPG, but the sense of accomplishment I’m talking about isn’t related to completing a task so much as taking on the identity of someone whose major purpose in life is doing something that has a tangible impact on the world.

This all hit me as I was reading David Graeber’s latest book Bullshit Jobs: A Theory. Graeber asserts that upwards of half of all workers in America and Europe toil at jobs that, while well compensated, provide no net impact on the world. He argues that such meaningless employment inflicts a special kind of “spiritual violence” that attacks our very notions of what it means to be human. Contrary to popular ideas that people are naturally lazy and must be prodded and cajoled into work, it seems like we crave the opportunity to be helpful to one another or, at the very least, to accomplish a task that has some discernable social value. One need only look at how often people go to great trouble to find meaningful work at their bullshit job —taking on work that is outside of their job description, installing software that lets them surreptitiously edit Wikipedia, or moderating a subreddit— to see just how important being helpful is to people.

Others, though, rather than find something to actually do,

…escape into Walter Mitty-style reverie, a traditional coping mechanism for those condemned to spend their lives in sterile office environments. It’s probably no coincidence that nowadays many of these involve fantasies not of being a World War I flying ace, marrying a prince, or becoming a teenage heartthrob, but of having a better—just utterly, ridiculously better—job.

Graeber then relays a story about a man who would zone out at work only to imagine himself as “J.Lo’s or Beyoncé’s Personal Assistant.” There is, of course, a Flash game called Personal Assistant that looks fairly popular, judging by its thousands of five-star reviews. As our jobs provide less and less meaningful satisfaction, we turn to the make-believe of video games to experience actual productive labor.

I have had the good fortune to make a living from a few odd jobs that I find deeply satisfying instead of taking a job that pays well but has no discernable psychological or societal benefit. I write about interesting topics, teach smart students, and do research for clients that I care about. This, more than anything else, I surmise, is why I don’t find most games worthwhile. That, and the lack of couch co-op games. That definitely sucks too.

Perhaps you love your job and also love video games. That’s totally possible. There’s lots of reasons to love anything. I’m just trying to put my finger on a feeling that I have about the difference between today’s popular games and older ones. And yeah, I know there’s lots of indie titles, but there you need the resolve of an aficionado. Someone who enjoys trying out, judging, and curating the lesser-known offerings in their chosen field. I just want to love the popular stuff, I don’t want to go hunting.

This argument may be surprising to long-time readers of Cyborgology because it sounds a bit digital dualist —I love my real jobs instead of the virtual ones on the screens— but that’s not what I’m saying at all. The problem here isn’t that video games are competing in a zero-sum game with “real” jobs, or that people are playing 2K instead of a pickup game of basketball. The problem is that we have a society and an economy that are bereft of meaningful, waged work, and so people have to find wages and meaning in two different places.

For all the talk of how “addicting” phones, social media, and video games are —what with their ability to release dopamine in the brain and all— there is startlingly little said about why people look to those things in the first place. I don’t like the addiction metaphor at all, but even if we accept it, the notion fails for the same reason that “just say no” anti-drug rhetoric doesn’t work: addiction is not a matter of simply being exposed to something and getting hooked. Those who become chemically addicted to a substance might have been made dependent while recovering from an injury, might be self-medicating a psychological condition, or might be living a hard life that needs numbing. In all of those cases it is unproductive to talk about drugs the way that digital mindfulness people talk about screen time: as an issue of personal responsibility and discipline. What we need is a more holistic reckoning with how we are expected to earn and spend money and time.

Which brings me back to why I enjoy my SNES more than my PlayStation. The games on my PlayStation —or any popular modern gaming platform— are meant to fill a need that my work already fulfills. I spend my whole working day completing meaningful tasks, and I’m also privileged enough to have a house to work on that provides more tangible rewards. The spiritual and psychological needs that many modern video games are designed to meet are already fulfilled for me by other tasks. The SNES feels more like genuine escapism and play than the (meaningful, interesting, and rewarding) work of Minecraft, Call of Duty, or Train Simulator 2018. All of which is to say: if anyone has a visually stunning, turn-based RPG for the PS4 that they’d like to recommend, I’m all ears.

David is on Twitter

Several steeples, each topped with a different world religion’s symbol, with the Facebook F atop the highest one

Colin Koopman, an associate professor of philosophy and director of new media and culture at the University of Oregon, wrote an opinion piece in the New York Times last month that situated the recent Cambridge Analytica debacle within a larger history of data ethics. Such work is crucial because, as Koopman argues, we are increasingly living with the consequences of unaccountable algorithmic decision-making in our politics. That “such threats to democracy are now possible,” he writes, “is due in part to the fact that our society lacks an information ethics adequate to its deepening dependence on data.” It shouldn’t be a surprise that we are facing massive, unprecedented privacy problems when we let digital technologies far outpace discussions around ethics or care for data.

For Koopman, the answer to our Big Data problems is a society-spanning change in our relationship to data:

It would also establish cultural expectations, fortified by extensive education in high schools and colleges, requiring us to think about data technologies as we build them, not after they have already profiled, categorized and otherwise informationalized millions of people. Students who will later turn their talents to the great challenges of data science would also be trained to consider the ethical design and use of the technologies they will someday unleash.

Koopman is right, in my estimation, that the response to the widespread mishandling of data should be an equally broad corrective: something baked into everyone’s socialization, not just that of professionals or regulators. There’s still something that itches, though. Something that doesn’t feel quite right. Let me get at it with a short digression.

One of the first and foremost projects Cyborgology ever undertook was a campaign to stop thinking of what happens online as separate and distinct from a so-called “real world.” Nathan Jurgenson originally called this fallacy “digital dualism,” and for the better part of a decade all of us have tried to show the consequences of adopting it. Most of these arguments involved preserving the dignity of individuals and pointing out the condescending, often ableist and ahistorical precepts that one had to accept in order to agree with digital dualists’ arguments. (I’ve included what I think is a fairly inclusive list of all of our writing on this below.) What was endlessly frustrating was that in doing so, we would often, to put it in my co-editor Jenny Davis’ words, “find [our]selves in the position of technological apologist, enthusiast, or utopian.”

Today I must admit that I’m haunted by how much time was wasted in this debate: that we and many others had to advocate for the reality of digital sociality before we could get to its consequences, and now those consequences are here and everyone has been caught flat-footed. I don’t mean to overstate my case here. I do not think that eye-roll-worthy Atlantic articles are directly responsible for why so many people were ignoring obvious signs that Silicon Valley was building a business model based on mass surveillance for hire. What I would argue, though, is that it is easy to go from “Google makes us stupid and Facebook makes us lonely” to “Google and Facebook can do anything to anyone.” Social media has gone from inauthentic sociality to magical political weapon. Neither framing reckons with the digital in a nuanced, thoughtful way. Instead, both foreground technology as a deterministic force and relegate human decision-making to good intentions gone bad and hapless end users.

This is all the more frustrating, or even anger-inducing, when you consider that so many disconnectionists weren’t doing much more than hawking a corporate-friendly self-help regimen that put the onus on individuals to change and let management off the hook. Nowhere in Turkle’s, Carr’s, and now Twenge’s work will you find, for example, stories about bosses expecting their young social media interns to use their private accounts or do work outside of normal business hours.

All this makes it feel really easy to blame the victims when it comes to the near-future autopsies of Cambridge Analytica. How many times are we going to hear about people not having the right data “diet” or “hygiene” regimen? How often are writers going to take the motivations and intentions of Facebook as immutable and go directly to what individuals can or should do? You can also expect parents to be browbeaten into taking full responsibility for their children’s data.

Which brings me back to Koopman’s prescription. I’m always reluctant to add another thing to teachers’ full course loads and syllabi. In my work on engineering pedagogy I make a point of saying that pedagogical content has to be changed or replaced, not added to. And so here I want to focus on the part that Koopman understandably sidesteps: the change in cultural expectations. Where is the bulk of that change going to be placed? Who will do that work? Who will be held responsible?

I’m reminded of the Atomic Priesthood, one of several admittedly “out there” ideas from the Human Interference Task Force, whose job it was to ensure that people 10 to 24 thousand years from now would stay away from buried nuclear waste. It is one of the ultimate communication problems because you cannot reliably assume any shared meaning. Instead of a physical sign or monument, the linguist Thomas Sebeok suggested an institution modeled on organized religion. Religions, after all, are probably the oldest and most robust means of projecting complex information into the future.

A Data Priesthood sounds overwrought and a bit dramatic, but I think the kernel of the idea is sound: the best way we know to relate complex ethical ideas is to build ritual and myth around core tenets. If we do such a thing, might I suggest we try to involve everybody but keep the majority of critical concern on those who are looking to use data, not the subjects of that data. This Data Reformation, if you will, has to increase scrutiny in proportion to power. If everyone is equally responsible, then those who play with and profit from the data can always hide behind an individualistic moral code that blames the victim for not doing enough to keep their own data secure.

David is on Twitter

Past Works on Digital Dualism:

“What if Facebook but Too Religious” image credit: Universal Life Church Monastery

A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the media image that was circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that was billed under a typographical error as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contributions to our culture of accrued errors were duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion.

To update Shakespeare, “the fault is not in our [software], but in ourselves.” Despite the ubiquity of online tracking and the expanding potency of “Big Data” to inform decisional processes in a host of spheres, there remains one persistent design glitch in the system: humanity. We may well have “data doubles” emerging in our wake as we surf upon the web, and the predictive powers of search engines, advertisers, and political strategists may be increasing, but there are still people inside these processes. As a tech columnist naturally observed, “a social network is only as useful as the people who are on it.”

Simply put, not everything can be measured and quantified—and an initial “human error” is only likely to be magnified and amplified in a fully wired world. We might utter a malapropism or a Freudian slip in conversation, and in a bygone era one might have stumbled onto a hapax, which is a word used one time in a body of work (such as the term “sassigassity,” used by Charles Dickens only once, in his short story “A Christmas Tree”). In the online realm, where our work’s repository is virtually unlimited, solitary and even nonsensical uses can carom around the cavern, becoming self-coined additions to the lexicon.

Despite the amplificatory aspects of new media, typos themselves certainly aren’t new. An intriguing illustration from earlier stirrings of a mechanical age is that of Anne Sexton and her affinity for errors, as described in chapter eight of Lyric Poetry: The Pain and the Pleasure of Words, by Mutlu Konuk Blasing:

In the light of the lies and truths of the typewriter—of its blind insight, so to speak—Sexton’s notorious carelessness not just with spelling but with typing has a logic. She lets typos stand in her letters, and sometimes she will comment on them, presumably spending at least as much time as it would take her to strike out and correct…. ‘Perhaps my next book should be titled THE TYPO,’ she writes. This would have been a good title. She is both typist and the typo-error she produces—both an agent and the mangling of the agent on the typewriter, which tells the lie/truth that she/we? want to hear: ACTUALLY THE TYPEWRITER DOESN’T know everything.

Indeed, neither the typewriter nor its contemporary digital extrapolations can know everything. The errors in our texts (virtual or print) are reflections of ourselves, things that we generate and which in turn produce us as well. The 1985 dystopian film Brazil captures the essence of this dualism, as a clerical error—caused when a “fly in the ointment” alters a single typewriter keystroke—sets in motion a darkly comedic and deadly chain of events. The film’s protagonist internalizes his inadvertent error, which taps into his lingering sense that the whole society is a mistake—ultimately leading him to seek an escape that can only yield one possible conclusion: a grim cognitive dissonance stuck at the lie/truth interface.

Such dystopic visions reflect an Orwellian tradition of blunt instruments of control and bleak outcomes, playing on fears of an authoritarian world that tries to perfect human nature by severely constraining it. This is an endeavor of demonstrable folly, yet one that ingeniously enshrines absurdity at the core of its totalitarian project. Variations on the genre’s defining themes likewise devolve upon society’s tendency to centralize baseline errors, yielding subjects ruled not by pain but pleasure and systems of control based on reverberation. Reflecting on how a “brave new world” of distraction and titillation merges with one where the exponential growth of media becomes the paramount message, Florence Lewis (a school teacher, author, and self-described “hypo-typo”) encapsulated the crux of the dilemma (circa 1970):

I used to fear Big Brother. I feared what he could do and was doing to language, for language was sacred to me. Debase a man’s language and you took away thought, you took away freedom. I feared the cliché that defended the indefensible…. Simply because we are so bombarded by media, simply because our technology zooms in on us every day, simply because quiet and slow time is so hard to find, we now need more than ever the control of the visual line. We need to see where we are going. As of this moment, we just appear to be going and going in every direction. What I am suggesting is that in a world gone Zorba, it will not be a big brother who will watch over us but a Mustapha Mond [the figurehead from Aldous Huxley’s Brave New World], dispensing light shows, feelies, speed, acid, talkathons. It will be a psychedelic new world and, I fear, [Marshall] McLuhan will be its prophet.

These are prescient words on many levels, reminding us of the plasticity of human development and the rapidity of sociotechnical change. As adaptive creatures, we’re capable of ostensibly normalizing around all sorts of interventions and manipulations, amending our language and personae to fit the tenor of the times. Is there a limit to how much can be accepted before flexibility reaches its breaking point? A revealing paper on the “Technopsychology of IoT Optimization in [the] Business World” sheds some light on this, highlighting the ways in which our tendency as end-users to accept and appropriate new technologies into our lives is the precondition for Big Data companies to be able to “mine and analyze every log of activities through every device.” In other words, the threshold “error” is our complicity, rather than the purveyors’ audacity (or, perhaps more accurately, their sassigassity). And one of the ways this is fostered is by amplifying our fallibility and projecting it back to us across myriad platforms.

The measure of how far our perceptual apparatuses can go thus seems to reside less in the hands of Big Tech’s innovation teams, and more so in our own willingness to accept (and utilize) their biophysical and psychological incursions. The commodification of users’ attention is alarming, but the structural issues in society that make this a viable point of monetization and manipulation have been written into the code for decades. Modern society itself almost reads like one great typographical projection, a subconscious longing for someone to step in and put things right. Our errors not only go untended, however, but are magnified through thoughtless repetition in the hypermedia echo chamber. The age of mechanization, coinciding with the apotheosis of instrumental rationality, may in reality be a time of immanent entropy as meaning itself unravels and the fabric of sociability is undermined by reckless incommensurability.

A case in point, with real-world (and potentially disastrous) implications, was the recent chain of events that led an emergency services worker to trigger the ballistic missile alert system in Hawaii. As the New York Times reported (in a telling correction to its initial article), “the worker sent the alert intentionally, not inadvertently, after misunderstanding a supervisor’s directions.” This innocuous-sounding revision indicates that the episode was due to a human error, which had occurred within (and was intensified by) a human-designed system that allowed a misunderstanding to be broadcast instantaneously. Try as they might, such Dr. Strangelove scenarios will be impossible to eliminate even if the system is automated; indeed, and more to the point, automating decisional systems will only reinforce existing disharmonies.

Humans, we have a problem. It’s not that we’re designed poorly, but more so that we’ve built a world at odds with our field-tested evolutionary capacities. To err may well be human, but we’ve scaled up the enterprise to engraft our typos into the macroscopic structures themselves; like Anne Sexton, we are both the progenitors of typographical errors, and the products of them. There’s an inherent fragility in this: at the local-micro scale errors are mitigated by redundancy, and “disparate realities begin to blend when their adherents engage in face-to-face conversation.” By contrast, current events appear as the manifestation of a political typo writ large, as the inevitable byproduct of a system that amplifies, reifies, and rewards erroneous thought and action—especially when it is spectacular, impersonal, and absurd.

Twitter users have long requested an ‘edit’ function on the site, but fixing our cultures and politics will require more than a new button on which to click. As Zeynep Tufekci observed (yes, on Twitter): “No easy way out. We have to step up, as people, to rebuild new institutions, to fix and hold accountable older ones, but, above all, to defend humane values. Human to human.” Technology can facilitate these processes, but simply pursuing progress for its own sake (or worse, for mercenary ends) only further instantiates errors. Indeed, if we’re concerned about the condition of our union, we might also be alarmed about the myriad ways in which technology is impacting our perception of the uniom as well.

 

Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

Image credit: theihno