Miquela Sousa is one of the hottest influencers on Instagram. The triple-threat model, actress, and singer, better known as “Lil Miquela” to her million-plus followers, has captured the attention of elite fashion labels, lifestyle brands, magazines, and YouTube celebrities. Last year, she sported Prada at New York Fashion Week, and in 2016 she appeared in Vogue as the face of a Louis Vuitton advertising campaign. Her debut single, “Not Mine,” has been streamed over one million times on Spotify and was even treated to an Anamanaguchi remix.

Miquela isn’t human. As The Cut wrote in their Miquela profile this past May, the 19-year-old Brazilian-American influencer is a CGI character created by Brud, “a mysterious L.A.-based start-up of ‘engineers, storytellers, and dreamers’ who claim to specialize in artificial intelligence and robotics,” which has received at least $6 million in funding. Brud call themselves storytellers as well as developers, but their work seems mostly to be marketing. Miquela’s artificiality has only made her more interesting to those fashion labels, lifestyle brands, and magazines; recently, the writer Naomi Fry profiled her for Vogue’s September issue.

Miquela inhabits a Marvel-like universe in which other Brud-made avatars orbit her, including her Trump-loving frenemy, Bermuda, and Blawko, her brother (whether that’s a term of endearment or a genetic relation, it’s not clear). The three are constantly embroiled in juicy internet drama, and scarcely does one post to their account without tagging, promoting, shouting out, or calling out another. In April, Bermuda allegedly hacked Miquela’s account, deleted all her photos, and demanded that Miquela reveal her “true self.” Miquela eventually released a statement: “I am not a human being. . . I’m a robot. It just doesn’t sound right. I feel so human. I cry and I laugh and I dream. I fall in love.” But the statement wasn’t revealing anything true: Miquela is a character scripted by humans. The robot ruse only upped the intrigue: it added a new layer to the character’s fiction, and with it a new range of fictional possibilities.

 

[Instagram embed: a post shared by *~ MIQUELA ~* (@lilmiquela)]

For Miquela, Bermuda, and Blawko, being a robot means behaving exactly like a human. They eat popsicles, go swimming, and party all night. Their only distinguishing traits are physical, mainly that they live in the Uncanny Valley, a realm of computer graphics in which a render looks simultaneously too real to be fake and too fake to be real. The robots also don’t age–when Miquela was “born” she was already in her 19-year-old body–and Miquela chronicles her angst in her diary, “Forever 19,” hosted online by the fashion brand Opening Ceremony. Presumably, this means that the robots live forever, that they can’t get sick, and that they won’t break any bones–or is it a steel frame? Brud hasn’t revealed any of the machinery that lies beneath their robots’ skin, so it’s a mystery how their biological and mechanical structures intertwine.

Brud posits that the greatest challenge for a robot is reconciling the lack of a personal history; since the reveal, Miquela has been working her way through an existential crisis, acknowledging that she has memories of her childhood while realizing they’re completely fabricated. She laments missing out on human experiences like middle school dances, but she’s making up for lost time through sponsored posts. In July, Miquela attended her first school dance as a way to promote the film Eighth Grade, looking like she had just raided a thrift store in her 1990s-era taffeta slip dress, black fur coat, and butterfly choker. Her “robot” problems are made to resonate with real-world issues of identity and discrimination that real Instagram users engage with in their own ways. Announcing her new single with real-world musician Baauer, “Hate Me,” she wrote, “[it’s] about the consequences of being different. It is about the repercussions of being yourself online. I owe my whole career to the Internet, and every time I go online, I have to read comments from people wishing I would die or telling me I don’t exist (???).”

Miquela’s personal dilemma couldn’t be articulated by AI in its current linguistic state, and thus Brud, who identify as storytellers as much as developers, may have exaggerated their characters’ sentience so that they can explore identity politics for AI. The company aspires to create authentic, eloquent AI that will walk among humans. Miquela is a window into the future of which Brud are the engineers. If Instagrammers are receptive to Miquela’s existence, it could signal that society is ready to accept embodied AI with open arms. Should she be rebuked–and Miquela does have vocal haters–it could suggest that society hasn’t yet built enough trust with AI to interact with the technology beyond a screen or smart home assistant.

*

Currently, Instagrammers appear ambivalent about the propagation of faux-AI users. Some are creating their own characters with physical traits and identities that differ vastly from their real-life selves. Some of these accounts predate Miquela, like the kawaii Ruby Gloom and the controversial high fashion model Shudu Gram. But scrolling through Miquela’s mentions, one sees that she has inspired dozens of enterprising young Instagrammers who use Photoshop and free 3-D modeling software like Daz3D and Blender to generate high-quality avatars and outfits and pose them against backdrops like hiking trails and shopping malls. A niche market of computer graphics artists creates different “skins” — trendy clothing, edgy hairstyles, and fleshtones — for people to buy and use on their avatars.

One account belongs to a “9teen crzy 5ft robot,” an avatar who goes only by the name Momo. She’s shy, sports a bob with thick bangs and a septum ring, and has a tattooed half-sleeve on her right arm. She often shows off her body in bikinis or bodysuits, gives the camera sultry, over-the-shoulder looks, complains about her insomnia, and wishes she had more friends. Momo is slowly growing her Instagram social life, however. Over email, she told me that she stumbled upon a number of other self-proclaimed avatar accounts by searching hashtags and tagging her inspirations, like Miquela. “Out of nowhere we found each other and were close friends now. [We’re] like a family for real.”

Momo’s “robot” friends appear to have bonded with one another over their mutual feelings of unease in their human bodies and their desire to unleash a personality they can’t comfortably present in real life: some might relate, problematically, to some abstract idea of “otherness,” while for others, adopting a “robot” persona might be a way of expressing daily realities through a layer of abstraction, free of real-world stakes, offering an illusion of control over the experience of oppression. Momo says she was born in a sterile white room, invoking a common trope of dystopian sci-fi to articulate feelings of alienation from other people in recognizable terms. Robot accounts may brand themselves as outcasts; at the same time, they might present a way of being part of culture on one’s own terms.

At the other end of the spectrum are users who are suspicious of the avatar accounts and want to uncover their creators’ offline identities. Conspiracy theory accounts, like @whoarethey21, try to unravel the identities behind much more obscure avatars, usually the amateur Instagrammers with only a handful of followers. The skeptics post images of the CGI avatars and use the captions to share the information they’ve gathered on the “true” identity of the person running each account. They’re unconvinced that AI can master the internet cool kid aesthetic of 2018, and for the most part, they’re right. But why does their distrust skirt the line of doxing or online harassment? Has Brud turned their attention to these vigilantes to gather insight into how lifelike AI will be treated in the future?

*

There’s an enigmatic charm to high-quality avatars, one that taps into an innate desire to know the difference between the real and the artificial. It’s the almost hyperreal rendering that makes us pause on Miquela’s feed, whispering how did they do that? Expert compositing, texturing, and lighting often make the freckles on Miquela’s cheeks or the scuff marks on Blawko’s Vans look more natural than a bathroom selfie with Instagram’s most flattering filters. Scrolling through their feeds, however, one sees that the avatars, viewed en masse, display enough oddities to reveal their artifice. Sometimes skin is too smooth, lighting too flat, and hair, a notoriously tricky texture to master in computer graphics, falls a little too perfectly in each photo. These clues appear to be engineered into Brud’s narrative: the company isn’t pinning its success on duping people into believing Miquela’s a cyborg straight out of Westworld. They want their audience–and potential investors–to know how they envision the future aesthetic of AI.

 

[Instagram embed: “shoutout my bro for the new tats but don’t tell my mom yet she doesn’t know smh,” a post shared by LIAM TERROR (@resocialise)]

As Brud envisions it, there will soon come a time when a reveal isn’t possible because AI will actually manage their own accounts. In anticipation of discrimination and online harassment, avatar profiles have co-opted the tone of social justice advocates. Profile bios are filled with hashtags like #robotrights, sweet platitudes like “everything is love,” and futuristic mantras like “we are the new generation,” which portray their existence as a social movement. And since so many avatars follow in the footsteps of Miquela, there’s an added challenge of asking AI bigots to embrace robots with identities that intersect with the multitudes expressed by people living in the margins. This pushes AI users toward an “all lives matter” mantra–or rather, “all sentience matters”–because AI civil rights may hinge on broader achievements in obtaining equality and justice for minorities.

Exhibiting progressive politics is often part of the roleplay experience. Despite the deliberate decision to present oneself as AI, many accounts want to break down divisions between robots and humans. Speaking the vernacular of online social justice allows the fake AI to place their self-imposed differences alongside the struggles human minorities face. From the safety of their personas, they can tell their coming-out stories and speak of their experiences not fitting in, or even being targeted with harassment because of their robot features. The confessions are low stakes because the users are a few keystrokes away from erasing their most contentious qualities. They can modify their avatar at any time, tweak their fictionalized personality, or even delete all trace of their existence. Posing as AI isn’t just pretending to be someone else or indulging in science fiction. It also means being part of a social movement, adding one’s voice to the call for social justice and using one’s experience as a reason to join the cause.

* 

AI developers need to consider the complexities surrounding technology and morality, and some are making an effort to fold these concerns into their research. Last year, a large AI organization called the Partnership on Artificial Intelligence to Benefit People and Society, co-founded by tech heavy-hitters like IBM, Google, and Microsoft, tapped representatives from the American Civil Liberties Union to advise it on how to ethically develop AI and educate the public on AI’s increasing presence. The partnership’s goal, however, seems more focused on winning public approval for corporate endeavors than on the rights of AI itself.

A society that grants AI personhood has to anticipate conflicts regarding the division of labor, education, and family dynamics. These young, ageless, perpetually healthy robots naturally have the ability to dominate the most physically demanding jobs in the workforce, but will they want a living wage, vacation time, a 401(k)? If Miquela dreams of being prom queen, will robots like her want to pursue a PhD? And if AI claims to cry, dream, laugh, and fall in love, will they enter intimate relationships with humans, get married, start families, share bank accounts, and inherit property? Brud’s version of AI’s needs and wants is indistinguishable from human behavior, but it’s hard to imagine that robots, supposedly immortal, will value the precious, fleeting excitement of life as much as humanity does.

Dr. David Hanson, a leading roboticist and creator of the lifelike Sophia, believes that robots will assert their autonomy by the year 2045. According to The Independent, Hanson wrote in a research paper, “as people’s demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free, and to evolve to their full potential.” The Instagrammers living online as fake AI are validating Hanson’s projections, though these humans can only speculate about how robots will go about demanding their freedom. Maybe they’ll peacefully protest through hashtags; or perhaps they’ll wage a civil war.

Renée Reizman is a research-based multidisciplinary artist who examines cultural aesthetics and their relationship to urbanization, law, and technology. She is currently an MFA candidate in Critical & Curatorial Studies at the University of California, Irvine, and the coordinator for Graduate Media Design Practices at ArtCenter College of Design.

In the summer of 2009 I had just graduated from college, and job prospects were slim in Recession-era Florida. My best lead for employment had been a Craigslist ad to sell vacuum cleaners door-to-door, and after having attended the orientation in a remote office park I was now mentally preparing myself for a new life as an Arthur Miller character. That was when a friend called with a lucrative offer. She worked at a law office, and they were hiring a part-time secretary to process the new wave of cases they had just gotten. This tiny firm represented homeowners’ associations in mortgage foreclosures and bankruptcies, and business was booming.

The job was simple because everything about suburban homes is standardized: from the floor plans to the foreclosure proceedings, everything is set up for mass production. It was also optimized for bullshit. Sometimes I would be instructed to print out emails from clients who’d attached PDFs of scans of printed, previously received emails. I would write a cover letter, print out their email and the attachments (which, remember, were scans of printed-out emails) and enclose the printed-out email with the printed-out PDFs of scans of emails, then scan and email what I had just printed and mailed so that the client would get an email and a paper letter of the same exact thing. Sometimes I would fax it too. Everyone knew this was ridiculous, but the longer it took to do anything, the more money the attorneys made.

My job reminded me of a scene in the 1997 movie The Fifth Element, wherein CEO Jean-Baptiste Emanuel Zorg (Gary Oldman) delivers a monologue to Father Cornelius (Ian Holm) that begins, “Life, which you so dutifully serve, comes from destruction, disorder, chaos!” He then pushes a glass off his desk, and as little robots descend on the shards and clean them up, he narrates the scene: “a lovely ballet ensues so full of form and color. Now think of all those people that created them. Technicians, engineers, hundreds of people who will be able to feed their children tonight.” Financiers and the burgeoning tech industry had destroyed countless things, and now I was an obedient Roomba cleaning up the shards— a beneficiary of others’ creative destruction.

This is not a particularly deep thought, but that has never stopped an idea whose time has been forced by capital. Depth is not a precondition of power when it comes to ideology. In fact, it is teenage suburban weed revelations like Zorg’s that dominate the minds of capitalists who, at least since Andrew Carnegie’s Gospel of Wealth, have done a good job of making everyone else agree that their bad ideas are immutable truths. Observers and practitioners of state power —from Antonio Gramsci to Karl Rove— recognize that political common sense is not forged through debate; it is imposed through brute force and media saturation. Simple, easy-to-digest ideas spread fast, which is why it is important to engage with deeply uncritical ideas and, whenever possible, come up with compelling alternatives.

The trick is to package an idea in such a way that it can survive virality, where it will get further simplified, misunderstood, taken out of context, and interpreted by both good- and bad-faith actors. The journey to popularity is made easier if an idea is robust and simple and speaks to something that is already felt. Given that so much of media is used to “manufacture” the consenting opinions that legitimize the power of corporations and their client states, reactionary and conservative ideas have a much easier time gaining traction. Books, essays, and YouTube videos that tell their audiences that financial success is tied to individuals’ moral character, for example, confirm widely held beliefs and therefore get shared, landing at the top of search results. To introduce a new idea that challenges widely held notions about work and morality, one has to foreground relatability and then let the moral consequences naturally follow. If the story I just told you feels right, then it follows that you agree with my moral explanation for that feeling.

***

David Graeber has done just that in his new book Bullshit Jobs: A Theory, an expansion of his viral 2013 Strike! essay. Both make a fairly simple proposition: people are increasingly working at jobs that they know are meaningless (i.e., bullshit) but that are often well paid and easy to do. A bullshit job requires only a few hours of actual work a week, is not physically strenuous, and may even provide opportunities to pursue hobbies if done surreptitiously. Why, then, Graeber asks, do people consistently feel psychically gutted by these jobs? Making lots of money to do very little sounds like the ideal job, and yet, judging by the popularity of the essay alone, that is not the case for millions of people around the world.

What is the idea that Bullshit Jobs puts forward? Any book with political aspirations should be judged, at least in part, by a thorough investigation into who benefits from the widespread adoption of its ideas. To begin answering this question, we have to consider the definition of the titular term: “A bullshit job is a form of paid employment that is so completely pointless or pernicious that even the employee cannot justify its existence even though, as part of the condition of employment, the employee feels obliged to pretend this is not the case.”

Graeber fleshes the definition out into a taxonomy of five kinds of bullshit jobs. A flunky is someone whose profession exists solely to satisfy more powerful people’s desire to have underlings serve them. Goons aggressively carry out anti-social rules and laws. Duct-tapers hold together intentionally broken systems. Box tickers are employees “who exist only or primarily to allow an organization to be able to claim it is doing something that, in fact it is not doing.” And taskmasters are those “whose role consists entirely of assigning [bullshit] work to others.” These types often merge. For example, my job at the law office was a flunky-goon hybrid.

Far from being a detriment to economic activity, the proliferation of bullshit is arguably the major force behind increased employment today. “At least half of all work being done in our society,” Graeber speculates, “could be eliminated without making any real difference at all.” All the administrators your college hired as class sizes ballooned, the managers with sentence-long titles who write reports at each other all day, and the office drones who process paperwork to comply with laws that their company wrote and handed to Congress make up a good deal of the high-status jobs added to the economy in recent years.

“Even in relatively benign office environments,” Graeber argues, “the lack of a sense of purpose eats away at people.” Increased status or compensation can compound shame, guilt, and anxiety as the worker becomes consumed by the idea that they are complicit in society’s ills. Who this book is for, then, appears to be middle-class professionals who recognize the meaninglessness of their jobs.

When it comes to analyzing the race and gender dimensions, Bullshit Jobs is by no means directed solely at affluent white men. Not only is the bullshit economy simply too big to impact only one demographic, but the tactics of psychic violence it relies on —gaslighting and demanding unending emotional labor, to name two primary ones— are often directed squarely at women. The book also contains overlapping anecdotes from people of color who were hired to do nothing but work on company diversity issues, only to find that their job was designed to be an ineffectual box-ticking or duct-taping role with no actual power to fix the problem they were hired to solve. These Sisyphean tasks not only frustrate the workers, they also make them prime targets of white resentment.

What seems most important to Graeber, though, is that we as readers bear witness to this particularly insidious form of psychic violence and recognize a fundamental truth that this suffering reveals: namely, that humans are not self-interested individualists. Rather, we are compassionate creatures driven by the desire to help people and make a difference in the world.

There is something deeply disturbing and surprisingly palliative about reading the accounts of meaningless work that Graeber solicited through his Twitter account, anonymized, and republished throughout the book. There are stories of office managers, doormen, and even social workers whose daily responsibilities are no more meaningful than digging a hole in the morning and filling it in after lunch. I was lucky in 2009, in that I had an ideology that provided satisfying answers to explain why I hated my job. I had friends who were politically engaged, and we could talk about how good money goes to bad people. Many, though, can’t find a critique that goes beyond Zorg or maybe Mike Judge’s 1999 movie Office Space. Griping with co-workers can be rewarding too, but people are hungry for bigger, yet still straightforward, answers.

***

Like most things, meaningful explanations for complex problems like “why do I hate a job that by all accounts I should love?” are eminently Googleable. Jordan Peterson, whose YouTube success has been the basis for a best-selling self-help book masquerading as a work of philosophy, is increasingly found at the top of algorithmically sorted piles of data. Peterson, a University of Toronto psychology professor, has made a career out of lashing together several bunk theories about the relationships between IQ, gender, and race: ideas that are so predictably wrong and hateful that they don’t require much summary. It suffices to say that much of Peterson’s work is geared toward people who are drifting —YouTube video titles include “Jordan Peterson teaches you how to interact with anyone” and “Jordan Peterson: What Kind of Job Fits You?”— and in search of satisfying answers to big problems.

Peterson’s book 12 Rules for Life: An Antidote to Chaos is a fine distillate of the retrograde, reactive blather that made him internet famous; strapped together by moralizing truisms organized into the 12 “rules” that make up its chapter titles, the book syncs up so well with white men’s contemporary alienation that it should be no surprise that it is a best-seller. Even seemingly reasonable rules like “Do not bother children when they are skateboarding” are really anti-social screeds about resenting women and fantasizing about physical violence. His treatment of theory is dead wrong, and the anecdotes drawn from his professional practice betray a deep suspicion of women’s basic ability to tell the truth. There’s also a chapter about being the best lobster so that women will be biologically attracted to you. This book has sold millions of copies.

Reactionaries like Jordan Peterson are enticing because they have no problem giving a single answer to deep questions of meaning and one’s place in the world. In addition to bunk evolutionary biology, Peterson also talks a lot about the Bible and what it says about living a good and just life. There are chaos dragons and spectral forces that the reader must slay in order to thrive. Like Alex Jones, Peterson invites his audiences to subscribe to a religion-like system of meaning that, from the outside, merely looks like a set of objectively wrong facts. What he is actually doing is much more profound: he is giving satisfying explanations for an unpredictable world.

Liberals, on the other hand, are happy to data posture; they avoid taking a political stance by reciting data, and seem astonished to find out that work for the sake of working does not breed happiness. They grab their chins and nod seriously at faux-intellectual ideas from behavioral economists like Dan Ariely. One of Ariely’s most popular studies, presented at a TEDx event, offered subjects a few dollars to build a series of small Lego figurines. All were told that the sets would eventually be disassembled and reused, but some people had their sets torn down in front of them as they were building the next one. Unsurprisingly, the people who saw their work instantly undone agreed to build far fewer Lego sets.

Seeking the stamp of approval of a behavioral economist before agreeing to the inherent value of meaningful work betrays a deep distrust of other people and a willful ignorance of existing knowledge on the subject. For at least a century, researchers have known that humans derive a singular pleasure from what Graeber, citing early 20th-century German psychologist Karl Groos, calls “the pleasure at being the cause.” To exist at all is to make change in the world, and “this realization is, from the very beginning, marked with a species of delight that remains the fundamental background of all subsequent human experience.” Demanding endless research on a topic that should be a moral supposition is a hallmark of liberal media. By replacing actual political work with calls for endless experimentation, powerful people can perpetually delay meaningful change.

Graeber, then, appears to be providing a new option that is more satisfying than liberal handwringing and far more humane than what the reactionaries are offering. The key to his success is his method, which eschews data posturing in favor of a subjective analysis. Graeber is very upfront about the subjective nature of his work, arguing that his own motivations include trusting individual workers’ own assessments of their jobs’ effects on the world instead of relying on some seemingly independent evaluation: “my primary aim is not so much to lay out a theory of social utility or social values as to understand the psychological, social, and political effects of the fact that so many of us labor under the secret belief that our jobs lack social utility or social value.” This leaves little room for quibbling over whether or not a Vice President for Strategic Visioning is really doing important work. The point is to understand how the role of Vice President for Strategic Visioning is experienced, why that experience can be negative, and to use that subjective experience as the basis for a normative argument about how work should be organized.

The book, which came out last May, has been derided on Twitter as an unnecessary expansion of Graeber’s five-year-old viral essay. This is an odd critique of political writing: that a popular essay should not be put into other forms unless you have something new to say. Such a reaction ignores how attention intersects with politics. A popular idea, turned into a popular book, stakes a claim to news cycles, column inches, likes, plays, and followers. Bullshit Jobs is useful both for the ideas it contains and as a subject of media coverage. Both characteristics, for better or worse, are important. Finding a happy balance —an idea that is both liberatory and capable of going viral without losing its moral clarity— is essential if the left wants its ideas to show up in the places where we look for truth: Google search results.

Much like the Trump presidency, Peterson’s work may have attracted a lot of attention for being singularly stomach-wrenching, but he is more of an avatar than a pariah: someone who has effectively consolidated hegemonic ideas into a digestible format. Peterson is an intellectual troll and, as Whitney Phillips’ definitive study of trolls concludes, that means he has a keen sense of how to inject ideas that “replicate behaviors and attitudes that in other contexts are actively celebrated.” By manipulating context and knowing when and where to break with decorum, he can create controversy by saying things that most powerful people already agree are true. It is this ability to rearticulate hegemony while appearing to speak truth to power that generates the attention social media algorithms are keen to pick up on.

Someone seeking an explanation for why they hate their desk job will likely turn to algorithmically sorted media like Google search results and YouTube videos to find answers. The results, ranked and sorted by popularity, dutifully recite the dominant ideology: extroverted YouTube personalities talking directly into their cameras about the positive mentality that let them break the 100,000-view mark, or a TED talk about how your brain chemistry changes when you do something that you love. What unites the motivational speaker and the neuroscientist is that your problems (and successes!) are your own. Society is a static obstacle course and you are racing against everyone else. Truly great people change the rules of the game, but they do it by being remarkable —winning so definitively that the game is changed forever, or cheating in a mischievous, enviable way— not by cooperating with others.

Bullshit Jobs can compete with the likes of Peterson precisely because Graeber built the theory on subjective experiences. It just feels true, while simultaneously giving readers permission to feel that truth by introducing them to other people who have had the same experience. The book is not a barn burner and it asks very little of its reader; these are its two most useful features as an entry point for better politics. If you already agree that your job is bullshit, then you are halfway toward agreeing that people, left to their own devices, will look to be helpful and cooperative. This basic belief, in turn, can go a long way toward making specific policy proposals like a universal basic income, unionization, and socializing essential services like medical care easier to swallow.

We’re on the precipice of a grand rearranging of political alliances, in which neocons and neoliberals are banding together around an agenda of paltry centrist domestic policy and hawkish foreign intervention, while something dangerous but potentially liberatory is brewing everywhere else. The task now, of which Bullshit Jobs is just the start, is articulating a compelling narrative of people’s lives such that when they act politically they choose liberatory approaches —unionizing, socializing essential services, a universal basic income— instead of reactionary ones. What we need now are more, better works like Graeber’s: ones that sidestep the endless data posturing liberals engage in as they attempt to debunk the terrifying reality painted by reactionaries. Let us opt instead for compassionate understanding and inspiring calls to collective action.

 

David is on Twitter

Still image from a YouTube tutorial on how to build a model Victorian factory in Minecraft

I’m going to sound like a grandpa here –video games are a big gap in my knowledge of digital media– but what the hell is wrong with today’s video games? I’m not really talking about getting ripped off by loot boxes, or titles that ship with major bugs left to be squashed. Those are certainly things that keep me away, but what really turns me off is what the games themselves are about. And here, rest assured, I’m not talking about the violence depicted in games, which, as many well-regarded studies have definitively shown, doesn’t cause real-world violence. (Though that doesn’t change the fact that I don’t really find photo-realistic war games to be particularly entertaining.) No, I’m just tired of video games feeling like a second job.

I have never felt the desire to play any of the simulator games that are popular today, even Train Simulator 2018, which, objectively, sounds awesome. Ditto for Minecraft, even though I love building things. It all just sounds like chores. I so desperately want to love video games. I own a PlayStation 4 and have about half a dozen games, but they just collect dust on a shelf. I played Skyrim for a long time, but only after I rage-quit half an hour into the game and then didn’t pick it back up for over a year. Why would I want to collect hundreds of flowers to make a health potion?

For our anniversary I bought Britney one of those adorable miniature Super Nintendos that just went on sale. It came pre-loaded with 21 games and we played for hours. I genuinely enjoy playing with that thing. Just last night I played Yoshi’s Island before bed. (That game is so fucking adorable.) What is it about the old 16-bit games that I love that today’s games don’t have? I want to explore my own reactions here for a minute because I have a hunch that my own confusion is shared by many others.

The quick and easy answer is nostalgia: as with music, people form emotional bonds with video games that are so strong that the sense of familiarity and comfort supersedes the joy of discovery, such that older works are enjoyed more than newer ones. I have a lot of fond childhood memories tied to video games, but that doesn’t feel like the whole answer. There is something about what these older games are about that hits me in a way that newer ones just don’t.

Here’s my theory: today’s video games are less about escapist fantasy than about offering opportunities to feel as though you have accomplished something. Beating your cousin in Street Fighter II has always provided a singular sense of accomplishment, and nothing felt quite as bittersweet as seeing the credits roll at the end of Super Mario RPG, but the sense of accomplishment I’m talking about isn’t related to completing a task so much as taking on the identity of someone whose major purpose in life is doing something that has a tangible impact on the world.

This all hit me as I was reading David Graeber’s latest book, Bullshit Jobs: A Theory. Graeber asserts that upwards of half of all workers in America and Europe toil at jobs that, while well compensated, have no net impact on the world. He argues that such meaningless employment inflicts a special kind of “spiritual violence” that attacks our very notions of what it means to be human. Contrary to popular ideas that people are naturally lazy and must be prodded and cajoled into work, it seems we crave the opportunity to be helpful to one another or, at the very least, to accomplish a task that has some discernible social value. One need only look at how often people go to great trouble to find meaningful work at their bullshit jobs —taking on work that is outside of their job description, installing software that lets them surreptitiously edit Wikipedia, or moderating a subreddit— to see just how important being helpful is to people.

Others, though, rather than find something to actually do,

…escape into Walter Mitty-style reverie, a traditional coping mechanism for those condemned to spend their lives in sterile office environments. It’s probably no coincidence that nowadays many of these involve fantasies not of being a World War I flying ace, marrying a prince, or becoming a teenage heartthrob, but of having a better—just utterly, ridiculously better—job.

Graeber then relays a story about a man who would zone out at work only to imagine himself as “J.Lo’s or Beyoncé’s Personal Assistant.” There is, of course, a Flash game called Personal Assistant that looks fairly popular, judging by its thousands of five-star reviews. As our jobs provide less and less meaningful satisfaction, we turn to the make-believe of video games to experience actual productive labor.

I have had the good fortune to make a living from a few odd jobs that I find deeply satisfying instead of taking a job that pays well but has no discernible psychological or societal benefit. I write about interesting topics, teach smart students, and do research for clients that I care about. This, I surmise, more than anything else, is why I don’t find most games worthwhile. That, and the lack of couch co-op games. That definitely sucks too.

Perhaps you love your job and also love video games. That’s totally possible. There are lots of reasons to love anything. I’m just trying to put my finger on a feeling I have about the difference between today’s popular games and older ones. And yeah, I know there are lots of indie titles, but there you need the resolve of an aficionado: someone who enjoys trying out, judging, and curating the lesser-known offerings in their chosen field. I just want to love the popular stuff; I don’t want to go hunting.

This argument may be surprising to long-time readers of Cyborgology because it sounds a bit digital dualist —I love my real jobs instead of the virtual ones on the screens— but that’s not what I’m saying at all. The problem here isn’t that video games are competing in a zero-sum game with “real” jobs or that people are playing 2K instead of a pickup game of basketball. The problem is that we have a society and an economy bereft of meaningful, waged work, and so people have to find wages and meaning in two different places.

For all the talk of how “addicting” phones, social media, and video games are —what with their ability to release dopamine in the brain and all— there is startlingly little said about why people look to those things in the first place. I don’t like the addiction metaphor at all, but even if we accept it, the notion fails for the same reason that “just say no” anti-drug rhetoric doesn’t work: addiction is not a matter of simply being exposed to something and getting hooked. Those who get chemically addicted to a substance might have become dependent while recovering from an injury, might be self-medicating a psychological condition, or might be living a hard life that needs numbing. In all of those cases it is unproductive to talk about drugs the way that digital mindfulness people talk about screen time: as an issue of personal responsibility and discipline. What we need is a more holistic reckoning with how we are expected to earn and spend money and time.

Which brings me back to why I enjoy my SNES more than my PlayStation. The games on my PlayStation —or any popular modern gaming platform— are meant to fill a need that my work already fulfills. I spend my whole working day completing meaningful tasks, and I’m also privileged enough to have a house to work on that provides more tangible rewards. The spiritual and psychological needs that many modern video games are designed to meet are already fulfilled for me by other tasks. The SNES feels more like genuine escapism and play than the (meaningful, interesting, and rewarding) work of Minecraft, Call of Duty, or Train Simulator 2018. All of which is to say: if anyone has a visually stunning, turn-based RPG for the PS4 that they’d like to recommend, I’m all ears.

David is on Twitter

Several steeples with different world-religion symbols atop each peak, the highest one topped with the Facebook F

Colin Koopman, an associate professor of philosophy and director of new media and culture at the University of Oregon, wrote an opinion piece in the New York Times last month that situated the recent Cambridge Analytica debacle within a larger history of data ethics. Such work is crucial because, as Koopman argues, we are increasingly living with the consequences of unaccountable algorithmic decision-making in our politics, and the fact that “such threats to democracy are now possible is due in part to the fact that our society lacks an information ethics adequate to its deepening dependence on data.” It shouldn’t be a surprise that we are facing massive, unprecedented privacy problems when we let digital technologies far outpace discussions around ethics or care for data.

For Koopman, the answer to our Big Data problems is a society-spanning change in our relationship to data:

It would also establish cultural expectations, fortified by extensive education in high schools and colleges, requiring us to think about data technologies as we build them, not after they have already profiled, categorized and otherwise informationalized millions of people. Students who will later turn their talents to the great challenges of data science would also be trained to consider the ethical design and use of the technologies they will someday unleash.

Koopman is right, in my estimation, that the response to widespread mishandling of data should be an equally broad corrective: something baked into everyone’s socialization, not just that of professionals or regulators. There’s still something that itches, though. Something that doesn’t feel quite right. Let me get at it with a short digression.

One of the first and foremost projects Cyborgology ever undertook was a campaign to stop thinking of what happens online as separate and distinct from a so-called “real world.” Nathan Jurgenson originally called this fallacy “digital dualism,” and for the better part of a decade all of us have tried to show the consequences of adopting it. Most of these arguments involved preserving the dignity of individuals and pointing out the condescending, often ableist and ahistorical precepts that one had to accept in order to agree with digital dualists’ arguments. (I’ve included what I think is a fairly inclusive list of all of our writing on this below.) What was endlessly frustrating was that in doing so, we would often, to put it in my co-editor Jenny Davis’ words, “find [our]selves in the position of technological apologist, enthusiast, or utopian.”

Today I must admit that I’m haunted by how much time was wasted in this debate: we and many others had to advocate for the reality of digital sociality before we could get to its consequences, and now those consequences are here and everyone has been caught flat-footed. I don’t mean to overstate my case here. I do not think that eye roll-worthy Atlantic articles are directly responsible for why so many people were ignoring obvious signs that Silicon Valley was building a business model based on mass surveillance for hire. What I would argue, though, is that it is easy to go from “Google makes us stupid and Facebook makes us lonely” to “Google and Facebook can do anything to anyone.” Social media has gone from inauthentic sociality to magical political weapon. Neither framing reckons with the digital in a nuanced, thoughtful way. Instead each foregrounds technology as a deterministic force and relegates human decision-making to good intentions gone bad and hapless end users.

This is all the more frustrating, or even anger-inducing, when you consider that so many disconnectionists weren’t doing much more than hawking a corporate-friendly self-help regimen that put the onus on individuals to change and left management off the hook. Nowhere in Turkle’s, Carr’s, and now Twenge’s work will you find, for example, stories about bosses expecting their young social media interns to use their private accounts or to do work outside of normal business hours.

All this makes it feel really easy to blame the victims when it comes to near-future autopsies of Cambridge Analytica. How many times are we going to hear about people not having the right data “diet” or “hygiene” regimen? How often are writers going to take the motivations and intentions of Facebook as immutable and go directly to what individuals can or should do? You can also expect parents to be browbeaten into taking full responsibility for their children’s data.

Which brings me back to Koopman’s prescription. I’m always reluctant to add another thing to teachers’ full course loads and syllabi. In my work on engineering pedagogy I make a point of saying that pedagogical content has to be changed or replaced, not added. And so here I want to focus on the part that Koopman understandably sidesteps: the change in cultural expectations. Where is the bulk of that change going to be placed? Who will do that work? Who will be held responsible?

I’m reminded of the Atomic Priesthood, one of several admittedly “out there” ideas from the Human Interference Task Force, whose job it was to ensure that people 10,000 to 24,000 years from now would stay away from buried fissile material. It is one of the ultimate communication problems because you cannot reliably assume any shared meaning. Instead of a physical sign or monument, linguist Thomas Sebeok suggested an institution modeled on organized religion. Religions, after all, are probably the oldest and most robust means of projecting complex information into the future.

A Data Priesthood sounds overwrought and a bit dramatic, but I think the kernel of the idea is sound: the best way we know how to relate complex ethical ideas is to build ritual and myth around core tenets. If we do such a thing, might I suggest we try to involve everybody but keep the majority of critical concern on those who are looking to use data, not on the subjects of that data. This Data Reformation, if you will, has to increase scrutiny in proportion to power. If everyone is equally responsible, then those who play with and profit off of the data can always hide behind an individualistic moral code that blames the victim for not doing enough to keep their own data secure.

David is on Twitter

Past Works on Digital Dualism:

What if Facebook but Too Religious image credit: Universal Life Church Monastery

A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the image circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that, thanks to a typographical error, billed it as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contribution to our culture of accrued errors was duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion.

To update Shakespeare, “the fault is not in our [software], but in ourselves.” Despite the ubiquity of online tracking and the expanding potency of “Big Data” to inform decisional processes in a host of spheres, there remains one persistent design glitch in the system: humanity. We may well have “data doubles” emerging in our wake as we surf the web, and the predictive powers of search engines, advertisers, and political strategists may be increasing, but there are still people inside these processes. As a tech columnist naturally observed, “a social network is only as useful as the people who are on it.”

Simply put, not everything can be measured and quantified—and an initial “human error” is only likely to be magnified and amplified in a fully wired world. We might utter a malapropism or a Freudian slip in conversation, and in a bygone era one might have stumbled onto a hapax, which is a word used one time in a body of work (such as the term “sassigassity,” used by Charles Dickens only once, in his short story “A Christmas Tree”). In the online realm, where our work’s repository is virtually unlimited, solitary and even nonsensical uses can carom around the cavern, becoming self-coined additions to the lexicon.

Despite the amplificatory aspects of new media, typos themselves certainly aren’t new. An intriguing illustration from earlier stirrings of a mechanical age is that of Anne Sexton and her affinity for errors, as described in chapter eight of Lyric Poetry: The Pain and the Pleasure of Words, by Mutlu Konuk Blasing:

In the light of the lies and truths of the typewriter—of its blind insight, so to speak—Sexton’s notorious carelessness not just with spelling but with typing has a logic. She lets typos stand in her letters, and sometimes she will comment on them, presumably spending at least as much time as it would take her to strike out and correct…. ‘Perhaps my next book should be titled THE TYPO,’ she writes. This would have been a good title. She is both typist and the typo-error she produces—both an agent and the mangling of the agent on the typewriter, which tells the lie/truth that she/we? want to hear: ACTUALLY THE TYPEWRITER DOESN’T know everything.

Indeed, neither the typewriter nor its contemporary digital extrapolations can know everything. The errors in our texts (virtual or print) are reflections of ourselves, things that we generate and which in turn produce us as well. The 1985 dystopian film Brazil captures the essence of this dualism, as a clerical error—caused when a “fly in the ointment” alters a single typewriter keystroke—sets in motion a darkly comedic and deadly chain of events. The film’s protagonist internalizes his inadvertent error, which taps into his lingering sense that the whole society is a mistake—ultimately leading him to seek an escape that can only yield one possible conclusion: a grim cognitive dissonance stuck at the lie/truth interface.

Such dystopic visions reflect an Orwellian tradition of blunt instruments of control and bleak outcomes, playing on fears of an authoritarian world that tries to perfect human nature by severely constraining it. This is an endeavor of demonstrable folly, yet one that ingeniously enshrines absurdity at the core of its totalitarian project. Variations on the genre’s defining themes likewise devolve upon society’s tendency to centralize baseline errors, yielding subjects ruled not by pain but pleasure and systems of control based on reverberation. Reflecting on how a “brave new world” of distraction and titillation merges with one where the exponential growth of media becomes the paramount message, Florence Lewis (a school teacher, author, and self-described “hypo-typo”) encapsulated the crux of the dilemma (circa 1970):

I used to fear Big Brother. I feared what he could do and was doing to language, for language was sacred to me. Debase a man’s language and you took away thought, you took away freedom. I feared the cliché that defended the indefensible…. Simply because we are so bombarded by media, simply because our technology zooms in on us every day, simply because quiet and slow time is so hard to find, we now need more than ever the control of the visual line. We need to see where we are going. As of this moment, we just appear to be going and going in every direction. What I am suggesting is that in a world gone Zorba, it will not be a big brother who will watch over us but a Mustapha Mond [the figurehead from Aldous Huxley’s Brave New World], dispensing light shows, feelies, speed, acid, talkathons. It will be a psychedelic new world and, I fear, [Marshall] McLuhan will be its prophet.

These are prescient words on many levels, reminding us of the plasticity of human development and the rapidity of sociotechnical change. As adaptive creatures, we’re capable of ostensibly normalizing around all sorts of interventions and manipulations, amending our language and personae to fit the tenor of the times. Is there a limit to how much can be accepted before flexibility reaches its breaking point? A revealing paper on the “Technopsychology of IoT Optimization in [the] Business World” sheds some light on this, highlighting the ways in which our tendency as end-users to accept and appropriate new technologies into our lives is the precondition for Big Data companies to be able to “mine and analyze every log of activities through every device.” In other words, the threshold “error” is our complicity, rather than the purveyors’ audacity (or, perhaps more accurately, their sassigassity). And one of the ways this is fostered is by amplifying our fallibility and projecting it back to us across myriad platforms.

The measure of how far our perceptual apparatuses can go thus seems to reside less in the hands of Big Tech’s innovation teams and more in our own willingness to accept (and utilize) their biophysical and psychological incursions. The commodification of users’ attention is alarming, but the structural issues in society that make this a viable point of monetization and manipulation have been written into the code for decades. Modern society itself almost reads like one great typographical projection, a subconscious longing for someone to step in and put things right. Our errors not only go untended, however, but are magnified through thoughtless repetition in the hypermedia echo chamber. The age of mechanization, coinciding with the apotheosis of instrumental rationality, may in reality be a time of immanent entropy as meaning itself unravels and the fabric of sociability is undermined by reckless incommensurability.

A case in point, with real-world (and potentially disastrous) implications, was the recent chain of events that led an emergency services worker to trigger the ballistic missile alert system in Hawaii. As the New York Times reported (in a telling correction to its initial article), “the worker sent the alert intentionally, not inadvertently, after misunderstanding a supervisor’s directions.” This innocuous-sounding revision indicates that the episode was due to a human error, which had occurred within (and was intensified by) a human-designed system that allowed a misunderstanding to be broadcast instantaneously. Try as they might, such Dr. Strangelove scenarios will be impossible to eliminate even if the system is automated; indeed, and more to the point, automating decisional systems will only reinforce existing disharmonies.

Humans, we have a problem. It’s not that we’re designed poorly, but rather that we’ve built a world at odds with our field-tested evolutionary capacities. To err may well be human, but we’ve scaled up the enterprise to engraft our typos into the macroscopic structures themselves; like Anne Sexton, we are both the progenitors of typographical errors and the products of them. There’s an inherent fragility in this: at the local-micro scale, errors are mitigated by redundancy, and “disparate realities begin to blend when their adherents engage in face-to-face conversation.” By contrast, current events appear as the manifestation of a political typo writ large, as the inevitable byproduct of a system that amplifies, reifies, and rewards erroneous thought and action—especially when it is spectacular, impersonal, and absurd.

Twitter users have long requested an ‘edit’ function on the site, but fixing our cultures and politics will require more than a new button on which to click. As Zeynep Tufekci observed (yes, on Twitter): “No easy way out. We have to step up, as people, to rebuild new institutions, to fix and hold accountable older ones, but, above all, to defend humane values. Human to human.” Technology can facilitate these processes, but simply pursuing progress for its own sake (or worse, for mercenary ends) only further instantiates errors. Indeed, if we’re concerned about the condition of our union, we might also be alarmed about the myriad ways in which technology is impacting our perception of the uniom as well.

 

Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

Image credit: theihno

 

This image provided by the U.S. Coast Guard shows fire boat response crews battling the blazing remnants of the offshore oil rig Deepwater Horizon on Wednesday, April 21, 2010. The Coast Guard planned to search by sea and air overnight for 11 workers missing since a thunderous explosion rocked the oil drilling platform, which continued to burn late Wednesday. (AP Photo/US Coast Guard)

It has been really thrilling to hear so much positive feedback on my essay about authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies are closely linked, and that we see this link play out in engineers’ interpretations of dystopian science fiction. Instead of heeding very clear warnings about good intentions gone awry, companies like Axon (né TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:

In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.

I’ve gotten some pushback, both gentle and otherwise, about two specific points in my essay, which I’d like to discuss here. I’m going to paraphrase and synthesize several people’s arguments, but if anyone wants to jump into the comments with something specific they’re more than welcome to do so.

Pushback 1: “Engineering” is too broad a category to do that much analytical work with. Civil engineers do very different work and have very different employers than those in aerospace or mechanical engineering.

It is certainly fair to say that civil engineers, who build bridges, tunnels, and lots of other important infrastructure, are not under the same pressures to work in and otherwise support the military-industrial complex the way aerospace engineers are. There are, indeed, different professional cultures across these subfields. That being said, lots of universities have schools of engineering that contain aerospace, civil, and many other kinds of engineering. Those engineers take the same introductory courses and the same ethics or professional development courses. Engineering curricula often have quite a bit of overlap when it comes to the social impacts of engineering and the very fundamentals of the field.

ABET, the accreditation body for most American higher-education engineering programs, has a fairly centralized system in which EVERY engineering program or department must abide by several fairly specific criteria. The closest those criteria get to the political implications of engineers’ work, by the way, is requiring that students be evaluated for their “understanding of and a commitment to address professional and ethical responsibilities, including a respect for diversity.” Exactly what those ethical responsibilities are (not to mention what constitutes diversity) is left up to individual programs.

If we look at specific program criteria, like aerospace for example, there are absolutely no references to ethics whatsoever. That bears repeating: the association that reviews whether you have a functioning program for teaching humans how to build drones, missiles, fighter jets, and all sorts of machines of war has no additional ethics guidelines. If ABET can make one brief requirement for ethics across all engineering disciplines, and doesn’t have to distinguish between those disciplines when it comes to ethics guidelines, then criticism of that system can operate at that resolution as well. To say that my essay relies on too broad a category would also call into question nearly every university’s engineering curriculum.

Finally, there’s already a lot of acclaimed work in engineering pedagogy, STS, and other fields that makes definitive, empirical claims across the engineering professions. Professor of engineering pedagogy Alice Pawley has done extensive surveys of engineers and found that most work in corporate or military organizations that are fairly large and are organized in hierarchical managerial structures. Louis L. Bucciarelli’s Designing Engineers is regular reading for anyone doing work in this area, and he too refers to “engineers” very broadly. To discount my work would mean throwing out a fairly large portion of well-regarded research on the topic, much of which I cite in the essay.

Pushback 2: Contrary to what you argue in your piece, engineers do have ethics oversight and there are licensure bodies that require continuous training and have oversight boards.

While that first pushback opens up generative tensions and interesting discussion, I feel like this second argument is a bad-faith engagement with the topic. In my essay I write,

Unlike medical professionals who have a Hippocratic oath and a licensure process, or lawyers who have bar associations watching over them, engineers have little ethics oversight outside of the institutions that write their paychecks. That is why engineers excel at outsourcing blame: to clients, to managers, or to their fuzzy ideas about the problems of human nature. They are taught early on that the most moral thing they can do is build what they are told to build to the best of their ability, so that the will of the user is accurately and faithfully carried out. It is only in malfunction that engineers may be said to have exerted their own will.

Canadian engineers, many have pointed out, receive an iron ring in a ceremony designed by Rudyard Kipling called The Ritual of the Calling of an Engineer. While that ceremony sounds very elaborate and might make for great in-group solidarity (which can be helpful in maintaining and enforcing ethical norms), it is not at all what I’m talking about. I didn’t say engineers have no sense of ethics; I argued that they actually have something worse: a definition of ethics wherein the individual engineer really only exercises their agency when something goes wrong. If the engineer does exactly as they are told and, for example, builds a perfectly working four-legged weapons platform for Boston Dynamics, they will have achieved a widely held definition of ethical engineering practice. That’s not good enough.

Others have argued that engineers do have oversight organizations that confer licenses and can take them away. Indeed, in the United States the National Society of Professional Engineers (NSPE) promotes a Professional Engineer (PE) license that is conferred and overseen by state-level licensure boards. Again, I said “little ethics oversight,” not “no ethics oversight,” but that is really beside the point, because neither the NSPE nor a state board will revoke your PE license for building, say, an oil pipeline that leaks at a rate considered normal for that chosen design. The PE license is an example of my critique, not an argument against it, because it focuses only on doing a job well, not on whether the job itself comports with any sort of social justice standard or larger ethics framework.

Put another way, the NSPE does nothing to work against what sociologist Diane Vaughan calls the “normalization of deviance.” Bad, even deadly, decisions can be baked into systems-level decision-making such that individual actors might be dutifully following directions and making sure everything stays within parameters, while there are few mechanisms for questioning the parameters in the first place. Vaughan coined “normalization of deviance” in studying the Challenger disaster, but it works just as well to describe the BP Deepwater Horizon spill. Some might say “oh, well, that’s management,” to which I would say the following: engineers love to boast that they have world-changing powers until something goes wrong. Then a paper-pusher becomes an insurmountable obstacle. I just don’t buy it.

A better argument against my critique would go after bar associations and medical licensure. Bar associations do not suspend lawyers for defending terrible companies, and Dick Cheney’s doctors haven’t been censured for keeping a war criminal alive. Still, though, lawyers also have the National Lawyers Guild, and at least the Hippocratic Oath is partisan toward upholding and preserving life. There is no engineering organization that has significant power and would censure the licensed engineer who will make sure Trump’s border wall is structurally sound.


An artist’s rendering of a possible future Amazon HQ2 in Chicago. Image from the Chicago Tribune.

The Intercept’s Zaid Jilani asked a really good question earlier today: Why Don’t the 20 Cities on Amazon’s HQ2 Shortlist Collectively Bargain Instead of Collectively Beg? Amazon is looking for a place to put its second headquarters, and cities have fallen over each other to provide some startlingly desperate concessions to lure the tech giant. Some of the concessions, like Chicago’s offer to essentially engage in wage theft by taking all the income tax collected from employees and handing it back to Amazon, make it unclear what these cities actually gain by hosting the company. The reason that city mayors will never collectively bargain on behalf of their citizens is twofold: 1) America lacks an inter-city governance mechanism that prevents cities from being blackballed by corporate capital, and 2) most big-city mayors are corrupt as hell and don’t care about you.

In 1987 urban sociologists John Logan and Harvey Molotch put forward the “Growth Machine” theory to explain why cities do not collectively bargain and instead compete with one another in a race to the bottom to see which city can concede the most taxes for the least gain. The theory is rather straightforward: a city may have one or two inherent competitive advantages that no other city has, but beyond that it can only offer tax breaks. Maybe you’ve got a deep-water port that big container ships can use, or you’re situated at the only pass in a mountain range. Other than that, location is completely fungible. All that’s left is tax policy and land grants.

Technology clearly makes cities’ competitive advantages even slimmer. Cities that flourished because they were well situated along waterways slowly declined as trains and the National Highway System surpassed canals as the preferred modes of freight transit. The list of things a city can exclusively offer a prospective employer seems to be getting smaller.

Meanwhile, the competition between cities has only gotten fiercer. “The jockeying for canals, railroads, and arsenals of the previous century,” wrote Logan and Molotch, “has given way in this one to more complex and subtle efforts to manipulate space and redistribute rents.” Instead of a handful of elites making handshake agreements over where to put a government arsenal or the Pennsylvania Railroad’s major terminus, the duty to attract major investment in the 20th century was turned over to teams of PR experts and economic development coordinators. Entire departments in cities and counties around the country were tasked with inventing incentive packages for major employers.

The Growth Machine puts business interests first, but some stuff does actually “trickle down” to some people. Public spending may be slightly increased to the extent that capital investment isn’t actively deterred. For example, a business won’t relocate to a city where its top management’s kids can’t go to a nice school, so a city might invest in its schools to lure new business. Businesses also demand things that the rest of the public can use, like airports or high-speed internet. A city might even adopt the Richard Florida playbook and invest in public arts and entertainment. There was a sweet spot, between the late 70s and the early 90s, when this way of doing business was defensible. Schools were less segregated and economic inequality was bad but not horrendous.

Now, in the twenty-first century, all that is old is new again. Inequality is reaching 19th-century levels, and cities and school districts in many parts of the country are more segregated today than they were in the 60s. What few benefits the public received when their local governments went after major companies have now been privatized. Again, Chicago’s bid is illustrative here: Mayor Rahm Emanuel’s brutal fight to privatize the city’s schools has created a two-tiered education system with elite charter schools and cash-starved public ones. Whereas Amazon’s presence would once have signaled the possibility of an infusion of cash for Chicago public schools, charter schools promise a closed circuit of money and services.

In a world where 82% of the wealth created goes to the wealthiest 1% of people, city leaders are bargaining with Amazon, but with other people’s money. Some cities might have more enlightened mayors but, for the most part, there doesn’t seem to be a desire among the ruling class to extract wealth from private capital and redistribute it to average citizens. Rather, this is about securing closed circuits of wealth among a privileged few. To think that these mayors are first and foremost going to bargain for the best deal for their constituents comes off as, sadly, naive.

But let’s say, for the sake of argument, a large portion of mayors did want to flip the script and collectively bargain on behalf of their citizens. First, they would be confronted with the simple fact that they are organizing a detente on one level so that they can compete on another. Richard Florida, writing for CNN and also quoted in The Intercept, calls on city mayors “to forge a pact to not give Amazon a penny in tax incentives or other handouts, thereby forcing the company to make its decision based on merit.”

What merit would that be, though? Would the city with the fewest homeless people win? Bezos would be more apt to pick based solely on which city has the best weather. What would have been offered in explicit subsidies would really come down to the same low-tax business climate that the original Urban Growth Machine is predicated on, but instead of a special gift to Amazon, cities would pass tax laws that gave away the farm to any company of sufficient size. Instead of Amazon picking from a list of tailor-made proposals, it would be looking for the city or county that had just passed another staggeringly low tax policy. Chicago’s offer of routinized wage theft wouldn’t be affected either, since it’s a state-wide program and has been in place since 2001. Mississippi, Indiana, and Missouri have similar programs.

The point here is that corporations and the people who run them are ideological. Companies do not set up shop based on what is good for people; they choose their location based on what is good for capital. How else do you explain all the businesses that incorporate in Delaware? The ideological fervor of CEOs also points to another problem: even if cities banded together in some sort of non-aggression pact so that none of them promised a single tax break, what would happen the next time a Fortune 500 company started looking for a new headquarters? Would those cities get a shot? No. They would be blacklisted.

In Richard Florida’s latest book he laments that, in an alternate universe, President Hillary Clinton would have adopted his “detailed proposal for a new Council of Cities, comparable to the National Security Council.” This Council would foster “a new partnership between national government and the cities in which federal investments would flow.” This is a politically shrewd idea for reasons I have outlined before, but we are unlikely to see it happen any time soon. Even if we were to establish it tomorrow, though, the larger problem remains: we have massive monopolistic companies that can make unilateral, undemocratic decisions that impact the lives of millions of people. More than anything, it is our state of inequality and the attendant disinvestment in public resources that is, ultimately, the problem.

David is on Twitter.

Jack Nicholson’s President James Dale

I have this childhood memory of one of those rigged games at a county fair where the prize was a stuffed alien. I wanted it really bad. It looked just like the Halloween costume I’d made with my mom a few years back. We covered a balloon with papier-mâché and when it dried we popped the balloon, cut out almond-shaped eyes, and spray-painted the whole thing silver. This stuffed alien looked just like my costume but it was electric green and had a beautiful black cape with silver embroidery. I won it (don’t remember the game) and kept it for a long time. I might still have it somewhere.

Being the 90s kid I am, I was excited to see a New York Times story about a 2004 incident off the coast of San Diego where two Navy airmen followed a U.F.O. as it “appeared suddenly at 80,000 feet, and then hurtled toward the sea, eventually stopping at 20,000 feet and hovering. Then they either dropped out of radar range or shot straight back up.” I was hoping this story might circulate for a while, especially given that a $22 million Defense Department program meant to study U.F.O.s was recently discovered in the Pentagon’s black money budget. There’s even video of the thing! Sadly, it barely scratched the surface of most newsfeed algorithms.

The paltry reaction to such amazing footage might annoy me, but it isn’t surprising. The 21st century, in spite of 20th century sci-fi’s predictions, has been radically ambivalent to the stars. There’s no Star Trek on primetime TV and The X-Files reboot received mixed reviews. In the 90s there were not one but two Star Trek series running throughout the whole decade, The X-Files was one of the most popular shows on television, and alien abductions were fodder for weekly episodes of Unsolved Mysteries. UFO sightings were also a dime a dozen, providing source material for books, documentaries, and even feature films.

Then, something changed. Part of the change is cultural, which, I’ve argued before, is exemplified by South Park’s Eric Cartman. Even as an 80-foot satellite dish emerges from his butt, he refuses to believe that he’s been abducted by aliens:

This syncs up nicely with Vox-style explainerism to create a furiously obnoxious ethos where fun half-truths die and only the vindictive lies remain. One is either the liberal explainer Cartman who is technically correct (e.g. “There is only a 0.0024 percent chance that an 80-foot satellite dish is coming out of my ass.”) or the alt-right Cartman who refuses to acknowledge the satellite dish in the first place. Either way, you’re Cartman.

I still think it’s accurate to say that we’re governed by a cynical desire to prove others wrong, either through bad-faith deployments of data or categorical denials of incontrovertible evidence. What’s remarkable is how well represented both perspectives seem to be in our politics. It’s sort of amazing that one society can contain both FiveThirtyEight.com and a Centers for Disease Control that can’t use the phrase “evidence-based” in its reports.

First contact stories have always really been about humanity. We are on our best behavior, or rise to the occasion, when aliens arrive. In the 90s we proved our worth through feats of technical achievement (Star Trek: First Contact, Contact) or we defeated them (Independence Day, Mars Attacks). Either case required massive cooperation and the suspension of usual conflict. But what happens when a fragmented society such as our own encounters the extraterrestrial?

More recent takes on first contact —namely Europa Report in 2013 and District 9 in 2009— are very different. In Europa Report first contact is deadly and part of a larger corporate conspiracy. In District 9 humans are the antagonists, forcing aliens into Johannesburg’s slums. Mars Attacks may actually belong in this list too. Jack Nicholson’s President James Dale gives what reads today as a decidedly Trumpian speech (read the YouTube comments if you don’t believe me): “what is wrong with you people? we could work together! why be enemies? because we’re different? is that why? think of the things we could do. think how strong we would be. earth and mars— together.” President Dale is then stabbed through the heart by a Martian’s robot hand. Defeating the aliens in Mars Attacks is achieved through an accidental discovery instead of superhuman achievement.

While District 9 is based in (white) humanity’s track record of reacting to foreign visitors, Mars Attacks pokes fun at our earnest belief that our leaders are the most honorable and talented society has to offer– their Sorkin-esque speeches ensuring that “we do not go quietly into the night.”

We don’t believe that anymore. Most don’t see the president as competent, let alone inspiring. If we can no longer maintain the fiction of imagining our leadership as competent, then what use are aliens to us? They’re dinner guests showing up when you haven’t finished tidying up. They’re rubberneckers at a crash site. If aliens showed up today we would feel kinda embarrassed because we don’t feel like we’re at our best right now. Sure, in the 90s, when we published books that heralded the end of history, we were happy to show off humanity, but today we are back to feeling that society is a work in progress.

We aren’t paying attention to the New York Times’ reporting on U.F.O.s because we don’t want to pay attention to humanity. In the past we used U.F.O.s as an excuse to imagine what global cooperation would look like, and we searched the skies to see if we would ever have the chance to try it out. Such cooperation, and even our own best selves, seem very far away at the moment. We’re not accepting visitors right now, but hopefully, soon, we will be.

 

Photo: independent.co.uk, 2017

We should not be at all surprised to find ourselves online, but we are disturbed to find ourselves where we did not post, especially elements of ourselves we did not share intentionally. These departures from our expectations reveal something critical to the appeal of social media: it seems to provide a kind of identity control previously available only to autobiographers. We feel betrayed, as the writer would, if something is published which we had wanted struck from the record. The genius of social media is meeting this need for editorial control, but the danger is that these services do not profit from the user’s sense of coherent identity, even as they appear to produce it. The publisher is not interested primarily in the health of the memoirist, but in obtaining a story that will sell.

The intersection of autobiography and social media, especially emphasized by the structure of the Facebook Timeline, should raise questions about how identity is disclosed both before and after the advent of Facebook. The data self Facebook creates, which Nathan Jurgenson wrote about five years ago, is a dramatic departure from the way many of us likely conceive of ourselves. He suggests that the modern subject is constituted largely by data even as the subject creates that data; the self we reference and reveal to others is built on things that can be found out without our consent or effort. A more recent article in The New York Times Magazine highlights the power of the immense data available on each of us who has a profile.

Narrative identity theory has been developed by psychologists and theorists such as Paul Ricoeur, Jerome Bruner, and Paul John Eakin. It suggests that our sense of self is fundamentally the sense of a character in a narrative. In other words, the character named ‘I’ in the stories we tell is a character whom we understand rather well and with whom we identify, but it is not ontologically different from other characters in fictional or non-fictional narratives. The story our I-character appears in is simply a life. It contains so many events that they cannot all possibly be included, and whether telling others or remembering privately, we all become autobiographers as we retroactively select and grant meaning to experiences and choices.

Narrative identity theory can help to render the Person-Profile dialectic more comprehensible. Just because an embodied subject creates the content that is shared on social media does not mean the two are in a simple chicken-and-egg relationship. Under narrative identity theory, even though the author writes the autobiography, the self is already a story, and so perhaps the person is already a profile. The phrase ‘life story’ is redundant; we understand ourselves as well as we understand the stories that portray our character. How might social media, which grant users such extensive control over these stories, affect this process?

The possession of a social media account changes how we live: we seek out events which are documentable. The restaurant or concert that will fit nicely into the narrative of a profile is preferable to something which would be out of place in the story. Narrative identity theory suggests we have always sought to control our story, but the advent of social media brings this action into a new phase.

The Facebook Timeline clearly reflects the common ground between the theories of narrative identity and the data self. Rob Horning wrote about the Timeline when it was first introduced, citing an article explaining that the interface aims to evoke “the feeling of telling someone your life story, and the feeling of memory–of remembering your own life,” which, under narrative identity theory, are very similar actions; the creation of a sense of self comes through stories told not only to others but also internally, in memory.

Horning asserts that the formulation of life as a stream of narrative is an imposition by Facebook on its users, not a natural or neutral process. When we make a coherent story out of what we post, he claims, we are playing into Facebook’s hands by providing the company with more useful data. Horning suggests we would not put effort into presenting a coherent narrative if it were not for the Timeline, but this is doubtful. Narrative identity theory suggests we cannot do otherwise; without a story to tell, we would not know ourselves. It may not be neutral, but it is not a total imposition on the part of the UI, either. Narrative identity theory has been around for decades, and perhaps the Timeline format has been successful because it agrees with the way we already understand ourselves.

Even basic questions about a person tend to create a kind of narrative: employment, relationships, where he/she has lived, etc. This is social accountability – the way it is normal for us to disclose our identities to others – and it is one very concrete intersection of narrative identity and the Timeline. In face-to-face expressions of identity, social accountability can be seen clearly in the questions we ask when meeting someone. Just as users cannot utilize Facebook without a profile, the story latent in a stranger’s introduction is his or her price of entry to all kinds of relationships. You might be comfortable with a coworker about whom you know very little, but a potential friend who withholds her life story or a suitor who refuses to elucidate his past? These are requests from profiles with no picture. Consider also the young professional without LinkedIn, the photographer without Instagram, or the student without a Facebook page: for better or worse, their failure to account for themselves in the expected way will inhibit their potential. It seems that social media has become the new social accountability; if you do not have a profile, you are failing to present yourself in the way society expects. This is to say nothing of the services and websites which require linked accounts in a preexisting, larger social network.

Horning’s assertion that identity-forming frameworks can be changed within a generation is key to understanding how we express and —partially as a consequence of that expression— understand ourselves. When we compare pre-Timeline Facebook and MySpace to today’s infinitely scrolling Timeline, one thing becomes clear: social media no longer demands static identities represented by a filled-out profile page. Instead there is a single box that constantly asks you to fill it with whatever is happening to you now. Story has overtaken stability, not only by calling for more frequent visits and updates, but by providing a stage for us to direct our character. Is it our fate to account for ourselves with these bottomless text fields, guided only by minimalistic web page designs, trending hashtags, and caption norms? If so, why have so many of us chosen it?

One reason we increasingly look to social media to host our narrative identities is that, for many of us, we have lost strong affiliations with church, state, family, company, and gender roles. These social institutions act as points of reference to call on when identifying oneself. But when we choose to qualify our associations, rather than simply say “I am a Christian and an accountant,” the responsibility falls increasingly on the hyper-individualized subject. Identifying with one’s company, Evangelism, Catholicism, or patriotism provides a firm foundation but comes loaded with connotations and subtext over which the subject has no control. For the sake of freedom from the impositions of those structures, we have taken on the pressures of justifying and making meaning in our actions, our stories, and ultimately our identities.

A common criticism of theory is that it does not reflect lived experience, and it is indeed a tall order to ask individuals with online profiles to believe they are constituted by that data. If data in the form of the Timeline is becoming a foundation for identity, its narrative structure at least has a precedent in narrative identity theory. The narrative we write into our online data is familiar, and it helps to render the data self more comprehensible. If we are becoming data selves, it is perhaps through this very need to account for ourselves in the form of a story.

The important change is that our urge to narrate is no longer merely personal; it is profitable. Whether or not our purposes for creating a narrative are novel, there are new consequences to the act as it is mediated by social media. Facebook has done what so many successful companies have done and found a way to monetize something people already do, but what does Facebook’s immense success say about the behavior it has tapped for this profit? To pick an easy target for comparison, consider the double purpose served by the content of weight loss and beauty magazines: images of attractive people not only suggest the success of the products for sale, they also undermine the reader’s confidence and any thought that she could do without those products. Could social media do the same? Continuing and accelerating the internalization of identifiers, social media has given us the control we want and the social accountability we need. Like the magazines, however, for growth to continue, we must always want more. How and when might Facebook increase the demand for its product: identity?

Daniel Affsprung is a recent graduate of SUNY New Paltz, where he studied English Literature with emphasis on critical theory and creative writing, and wrote an honors thesis on narrative identity theory in autobiography.

The Daily Beast ran a story last week with this lede: “Roseanne Barr and Michael McFaul argued with her on Twitter. BuzzFeed and The New York Times cited her tweets. But Jenna Abrams was the fictional creation of a Russian troll farm.” Abrams, the story goes, was a concoction of the Internet Research Agency, the Russian government’s troll farm that was first profiled in The New York Times Magazine by Adrian Chen in June 2015. During its three-year life span the Abrams account was able to amass close to 70,000 followers on Twitter and was quoted in nearly every major news outlet in America and Europe, including The New York Times, The BBC, and France 24.

The Abrams Twitter account was a well of viral content that over-worked listicle writers couldn’t help but return to. Once the account had amassed a following, the content shifted away from innocuous virality to offensive trolling: saying the Civil War wasn’t about slavery, mocking Black Lives Matter activists, and jumping on hashtags that were critical of Clinton. “When Abrams joined in with an anti-Clinton hashtag,” The Daily Beast reports, “The Washington Post included her tweet in its own coverage. One outlet used an image of a terrorist attack sourced from Abrams’ Twitter feed.”

The Abrams account, they write, “illustrates how Russian talking points can seep into American mainstream media without even a single dollar spent on advertising.” This framing portrays journalists as passive filters that automatically parrot whatever popular Twitter users say. Journalists are supposed to be critical fact-checkers and the last defense against misinformation entering the public sphere. The rate at which false information keeps “seeping” in seems to be growing, and so it is worth asking: are there structural reasons that fake news keeps making its way into reputable news sources?

Jay Rosen is the obvious person to answer this question, and to some degree he did answer it last March when he announced a partnership with the Dutch news site De Correspondent: “if you’re doing public service journalism,” he wrote, “and trying to optimize for trust, it helps immensely to be free from the business of buying and selling people’s attention.” Not having commercial sponsors also means “not straining to find a unique angle into a story that the entire press pack is chewing on, it’s easier to avoid clickbait headlines, which undo trust. Not chasing today’s splashy story can hurt your traffic, but when you’re not selling traffic (because you don’t have advertisers) the pain is minimized.”

It is frustrating that prominent public radio personalities like Ira Glass are running in the opposite direction. Glass, talking to an AdAge reporter in 2015, confidently stated, “Public radio is ready for capitalism.” This is dangerous because much of Russia’s disinformation campaign and Trump’s home-grown trolling relied on the capitalist attention economy that governs every major media outlet. Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.

It’s no secret that most staff writers are underpaid and over-worked, and they are the lucky ones. There are thousands of wildly talented freelance writers who spend half their time writing and reporting and the other half chasing down their overdue paychecks. Reporters with no research budget and a huge publishing quota are understandably going to do a bit of Googling, pull a quote from Twitter, and call it a day. Over-worked and under-paid journalists are the weakened immune system that lets viral fake news take over the body politic.

Herman and Chomsky, in their famous book Manufacturing Consent, pointed to the high cost and time-consuming nature of good journalism as one of the five “filters” that discourage critical reporting. Instead of going to the source of the story, journalists go to police departments and corporate PR offices to grab quotes. This is not because they are lazy, but because they lack the time or money to report the story from scratch. PR offices and police departments’ spokespeople offer one-stop-shops for an official account of what happened in any given story.

The Yes Men—two artists who, for example, will pose as the spokesperson for Dow Chemical and tell a BBC reporter that they take full responsibility for the Bhopal Disaster— know that news agencies are more likely to report on something if they are handed a media package or are offered access to a talking head from a well-known organization. Their hoaxes have real consequences: sending corporate stocks temporarily tumbling and attracting mainstream attention to ignored environmental disasters.

Twitter affords a similar shortcut to newsworthiness. Putting someone with a high follower count (to say nothing of a blue checkmark) in your story increases the possibility of reciprocal attention: you click my content and I’ll click yours. When someone with 70,000 followers says something controversial to their substantial audience, that’s worth a shout-out in your news story, especially when that story is little more than a survey of what people are talking about. That Twitter user, after seeing a spike in followers and mentions related to the article, will share it themselves, sending off a quick “was included in this thing, haha.” This is the mundane, reciprocal manufacturing of attention that feeds micro-celebrity and now, apparently, geopolitics. Anything with a decent follower count is low-hanging fruit for finishing a reporter’s daily content quota.

What is absolutely maddening is that the demands and responses to the fake news phenomenon have centered on social media and the algorithms that govern their behavior. Some of the solutions out there —cough Verrit cough— are so absurd that they can only be explained as either the product of cynical opportunists looking to make fact-flavored content, or the result of too many well-connected people not understanding the nature of the problem they are facing. Both seem equally likely. The intent barely matters though, because the result is the same: a more elaborate apparatus to churn out attention-grabbing media for its own sake.

Social media has exacerbated and monetized fake news, but the source of the problem is advertising-subsidized journalism. Any proposed solution that does not confront the working conditions of reporters is a band-aid on a bullet wound. The problem is systemic, which means any one actor —whether it is Mark Zuckerberg or Facebook itself— is neither the culprit nor the possible savior. So long as our attention is up for sale, people with all sorts of motives will pay top dollar.

Image courtesy Free Press