Minimalism has a way of latching on to people who want nothing to do with it. None of the artists contained in Kyle Chayka’s The Longing for Less wanted to be associated with the term, and yet here they are, mostly posthumously, contained in a book subtitled Living with Minimalism. Chayka nevertheless pulls together midcentury artists like Philip Glass and Donald Judd and contemporary pop culture icons like Marie Kondo and Joshua Becker, author of the 2016 self-help-through-minimalism book The More of Less, into a single, slim volume against their will.

Objections to having your work defined as “minimalist” come out of a desire to use art to point to the maximal things that are not contained within the minimalist work— the majesty of the universe, the stirring sounds of an audience, a contemplative desert expanse are the real subjects. The frustration is understandable: Imagine if you underlined an important passage in a book, handed it to someone else, and their only reply was, “Interesting that you chose to use pencil to draw that line. Is this about the impermanence of thought?” This is the plight of the so-called minimalist artist. Their art was meant to be so straightforward and devoid of metaphor or allusion that the observer would have no choice but to consider the implications of whatever is right in front of them— whether that’s a pile of dirt hiding in a New York City loft (The Earth Room, by Walter De Maria, 1977) or a collection of metal cubes in Marfa, Texas (100 untitled works in mill aluminum, by Donald Judd, 1982-1986).

Most minimalists seem unhappy with wherever they find fame, whether that’s 1970s SoHo or Paris or Japan. As soon as their talents could be translated into you-come-to-me stardom, they left for someplace else. Chayka is determined to take each minimalist on their own terms, describing their careers with deference; even the Nazi-sympathizing architect Philip Johnson gets to be something more complicated than just that. He’s also an exhibitionist neighbor, the originator of the term “International Style,” and a certified (by the FBI, actually) hottie. It’s the right choice for a book whose stated mission is to “figure out the origins of the thought that less could be better than more—in possessions, in aesthetics, in sensory perception, and in the philosophy with which we approach our lives.” Considering a broad range of work on its own terms goes a long way towards understanding what Chayka calls his “working definition of a deeper minimalism,” which is an “appreciation of things for and in themselves, and the removal of barriers between the self and the world.”

At the end of a chapter there’s a short anecdote about Johnson’s first night in The Glass House, a small home made up of just a few steel beams supporting walls of glass. Johnson walks into the house at night and, upon turning on the lights, finds that the glass acts more as a mirror than a lens. He calls an associate and yells: “You’ve got to come over immediately. I turned on the lights and all I see is me, me, me, me, me!” They solve the problem by setting spotlights outside on the trees, so that more light comes in from the outside than is reflected in the glass. Chayka doesn’t say as much, but from what I gather from the rest of the book, this is a metaphor (or maybe a Zen koan?) for all minimalist works: in an attempt to strip everything away and catch an unimpeded glimpse of the world, all you end up with is an unwelcome picture of yourself. The artist’s reaction is always to change the scenery instead of getting more comfortable with themselves where they are.

The artists are not alone in finding out, once the work is completed, that their works draw attention to the wrong things: the work itself or the artist instead of some deeper truth. This is a fairly common problem with casual consumers of art, though most people articulate it as being picky about what art they deem important. Someone can shout “my kid could make that” in a modern art gallery and then melt down the next day when they learn someone didn’t stand for the national anthem. Both the modern art hanging in a museum and a rendition of the national anthem at a ball game are works of art, but their stated values and contexts are shot through with different and at times contrasting socio-economic identifiers. Pierre Bourdieu used the word “distinction” to refer to this nexus of social position, economy, and cultural interpretation. We learn to appreciate and decode cultural objects in a social way, ascribing political valences to both their intended audiences and the artifacts themselves. You already understand this implicitly: think of the stereotypes of who enjoys opera versus NASCAR and Bourdieusian distinction immediately makes sense.

It is clear that Chayka and I do not have the same distinction. He describes watching Lost in Translation for the first time at age 15 as a transformative moment. I remember checking it out thinking a Bill Murray movie would be funny and being sorely disappointed. Even though the Massachusetts Museum of Contemporary Art is less than an hour away from me, I have never been there. I lack both the interpretive schema and the social pressures that make going there enticing. Ditto most symphonies and things described as “experimental.” I don’t even like poetry very much. This makes me uniquely qualified to review The Longing for Less though, as it was written for a broader audience as an introduction to this material. My interest in the book is not propelled by a love of Philip Glass, John Cage, Richard Gregg, Shūzō Kuki, or Agnes Martin. I didn’t even know who Donald Judd was until reading this book. My sole motivation is to understand that “deeper minimalism” Chayka is after, and if that takes me through a bunch of stuff I don’t find particularly interesting, so be it. This is minimalism in action: looking at a greater whole with the help of something direct and unsettling. To review this book is not only to review the argument printed on the page, but the actual thing itself.

My copy of The Longing for Less arrived in the mail just as I was lapsing into an extended fit of sadomasochism that took the form of watching The West Wing for the first time. There’s an episode in the third season where someone named Tawny is arguing with Sam Seaborn over National Endowment for the Arts funding. Tawny keeps spitting artworks at Sam—“‘Slut’ is a one-word poem by Jules Woltz. It’s stamped in scarlet on a piece of forty by forty black canvas.”—as if their descriptions alone make her argument: aren’t you mad your government spent money on this?! Sam calmly agrees that these examples of art are embarrassing but disagrees fundamentally with government assessing the value of the resulting art once the artist has been funded. It is a great example of the liberal mind at work: having values but mistaking them for mere opinions not worthy of connecting to the power you wield.

Perhaps it is through sheer force of serendipitous juxtaposition that The West Wing and The Longing for Less kept bumping into one another in my head. Like Johnson rejecting the reflection in his own creation, I was catching glimpses of myself in both pieces of media: values I once espoused, desires long extinguished, and half-forgotten memories of past realities. I caught myself having a Sorkin-esque dialogue in my head when reading about Walter De Maria’s The Earth Room. Why do the microbes contained within the 250 cubic yards of dirt have better living conditions than the 70,000 homeless people in the same city? Well, if we waited to solve all human needs before making art, then what kind of society would we really be living in? Yes, but this kind of art? Who’s to say what art gets to be made? Yeah, but this dirt has been living in the same SoHo loft since 1977—how many humans have that kind of security?! And so on.

Closing The Longing for Less does not stop it from broadcasting its thesis. The book is clearly a thoughtfully designed object, meant to participate in the minimalist project it seeks to understand. In an interview with Delia Cai, Chayka says he “wanted the book itself to be a minimalist object, a visual representation of what the book was about.” Designed by Tree Abraham —whose work could be described as minimalist— the dust jacket comes up only halfway, but it contains the title and author. On the hardcover is a cube sitting on edge with its three visible faces of white, hunter green, and sky blue. Once the jacket is removed, however, the cube is revealed to be part of a longer shape. The white extends down into a parallelogram and the blue and green —now more recognizable as diamonds— are mirrored on the opposite side of the single white shape. The text is now confined to the spine.

Never have I actually wanted to see a book age as much as this one. To see how the pure white center yellows, whether people keep the dust jacket at all. I want to order it for my university library just to see what they do with it. Do you keep the dust jacket in place with the usual archivist tape and plastic, or do you put it on the shelf naked? Mine has already creased quite a bit, and I find the book hard to hold with it on, so I suspect most people will lose it if they don’t intentionally throw it out. Then there’s the ambiguity around where this half dust jacket is “meant” to sit. The spine of the book suggests it belongs at the bottom, such that the publisher mark, author, and title on the cover and dust jacket line up, but from the front or back cover any position looks correct and cuts the abstract design in satisfying ways.

The cover design is actually sort of haunting, at least as much as an abstract shape can be. There are all these moments in the book where Chayka feels something deep and profound in minimalist art that I just couldn’t imagine feeling, until I realized maybe Abraham’s design was doing exactly that to me. The design gave me a bit of déjà vu. At first, I thought it looked like the stylized S that everyone inexplicably drew on folders and bathroom walls in middle school. That wasn’t quite it though; the autobiographical period felt right but the engram itch was still there. The only place I was likely to encounter abstract art was in the underground concourse of Albany’s Empire State Plaza, so I sought out the listing of all the artwork hanging down there and recognized Al Loving’s work “New Morning I,” which has a very similar tessellation pattern of diamonds that form the optical illusion of three-dimensional cubes when looked at just right. That still wasn’t quite it though.

I did not know who Al Loving was (again, I’m not a connoisseur of art) so I read his Wikipedia page. And there it was: Loving had created dozens of pieces for public spaces. Not just Albany’s concourse but train stations, college plazas, and theater lobbies. He was even a National Endowment for the Arts fellow in 1970, 1974, and 1984. (Eat it, Tawny.) These shapes and their inoffensive colors are the aesthetic of late twentieth-century public space, which is to say, the aesthetic of my early childhood. The Broward County library I grew up in, the classrooms I couldn’t wait to get out of—they all had something that looked like, but was not, a Loving. Repeated geometric shapes in overstock industrial paint, or heathered fabric stretched over a frame and hung on a wall. It wasn’t any one particular piece or even one artist, but a style that this artist was at the center of. As soon as this realization hit, I could practically smell the old molding carpet.

The book is divided into four chapters, each broken into eight short sections. In the introduction Chayka encourages the reader to treat the rest of the book as a “space” that can be explored in any order because “chronological history is too causal an approach for minimalism. Its ideas don’t have one linear path or evolution; it’s more of a feeling that repeats in different times and places around the world.” This nonlinear, choose-your-own-intellectual-adventure approach has been used to great effect by the likes of Paul Feyerabend in Against Method and Christopher Alexander in A Pattern Language. Both of these books insist you have to read the whole thing (in fact the latter says its 1171-page volume is incomplete without having first read its 552-page companion The Timeless Way of Building), but both are broken up into small, more-or-less self-contained essays.

Alexander and his compatriots believed that most human problems are understandable as patterns. The size of a house or the decision-making structure of a group can be distilled into typified patterns with generic prescriptions. For example, pattern 161, “Sunny Place,” notes that “we have some evidence—presented in [another pattern titled] south-facing outdoors (105)— that a deep band of shade between a building and a sunny area can act as a barrier and keep the area from being well used.” To avoid this shady barrier they recommend the following: “Inside a south-facing court, or garden, or yard, find the spot between the building and the outdoors which gets the best sun. Develop this spot as a special sunny place—make it the important outdoor room…” The reader can then follow pattern 163, “Outdoor Room,” for more information. They might also check out fanciful patterns like “wings of light” or utilitarian ones like “bus stop.”

Why write a book like this? Why chop up your argument like a 90s sitcom storyline such that it is comprehensible in almost any order but most satisfying when completed? One reason is to help it fit into the reader’s life a bit easier. I was never more than four or five pages away from finishing a chapter of The Longing for Less, which meant I could see why my phone had buzzed or go get a snack. This was minimal in the sense that the book’s ideas could seamlessly fit into small parts of the day, not unlike Erik Satie’s furniture music. The philosopher Ian Hacking, in the 2007 reprinting of Against Method, describes the features of this design like a YouTube product reviewer considering new iPhone features: “you can take it hitch-hiking or to a sit-in, and read a bit while you are munching on a few pilfered tomatoes or sheltering from a storm. You can pick up an idea, chase it, and relocate it in the Analytical Index, all the while being in a physical relation to the pages upon which you can scribble expostulations, if that is your wont.”

The analytical index of Against Method contains the author’s concise summaries of “the most interesting parts” of each chapter, which themselves only last about a dozen pages or so. Feyerabend preferred to describe the book-shaped thing with the words “Against Method” on the cover as a “collage.” The effect of the collage is a fun performance of exactly what Feyerabend is arguing: that the history of science shows there is no clear common structure across experiments. Similarly, you, the reader, can make up your own path through this epistemological argument. Any two people may take different paths, but they both read the same book and have a common reality to talk about, even though their methods of getting there were totally different.

Chayka seems to want you to do the same, starting nearly anywhere by dipping into Japanese architecture, modern art, or Steve Jobs’ apartment. I started reading the chapter on Marie Kondo before going on to Steve Jobs, reading a few bits on Japan, and then slogging through the more art-centric bits. Without something like Feyerabend’s analytical index, however, it was hard to make an informed decision about what to read and in what order.

No matter how elegantly an author accomplishes this task, it still feels like the technology of the book resists this genre. Unless it’s the Bible and you have hundreds of years’ worth of interpretation and guidance to make sense of each passage and verse, it is easy to get lost jumping around in a book you’ve never read before. I took to putting a check mark next to each chapter number to keep track of my progress. This helped a lot, but it didn’t prevent me from getting a bit lost when unfamiliar names popped up. Then I would have to read backwards until that person was introduced. I still saved the last few chapters for the end but was met with no synthetic conclusion, which was a little disappointing. Instead, and maybe this is fitting, Chayka leaves the reader in a Japanese rock garden to contemplate infinity.

Equally interesting is how the book is depicted as a digital object. Images of the book all have the half dust jacket sitting at the bottom, the location the spine suggests it should sit. On Amazon the cover is fused into a single image, so you never learn that the cube is only part of a collection of shapes. I imagine the ebook version also has the cover designed in this way. Perhaps the full image is revealed on the last page’s “back” cover; I don’t know. Inside, the text is set in a sans-serif font with generous margins, something that would be unknowable in a Kindle version that lets the reader change all of that. All the parts that make The Longing for Less a minimalist object are obliterated once it is packaged in a digital format, something that is so minimal(ist?) that it does not technically occupy space at all.

And like any good artist working in the minimalist style, Chayka is understandably annoyed by the misapplication of the term. In this case it is the easy observation that a book about minimalism isn’t very minimalist. His tweets leading up to the book’s release sound like Judd just before he moved to Marfa (or at least Chayka’s description of him, which, again, is my only knowledge of the guy), and his joke that his book “counts for negative five books” is a direct reference to Marie Kondo’s insistence that no one needs more than thirty. The joke is its own kind of summary of his book’s larger argument: that the trends of minimalism in Kinfolk-inspired Instagram photography and the popularity of Marie Kondo have only the most superficial relationship to the people who hated being called minimalists but definitely were.

The deeper minimalism, then, is closer to the shallow understanding than I think Chayka would be comfortable admitting. Because even if this current iteration of whatever is being called minimalist is detached from the longer, deeper history of the term in art, architecture, and music, then so is everything else. Minimalists are all standing in a circle facing outwards: a single, simple shape —a unity— but with each constituent participant looking somewhere else. There’s a centrifugal force pulling them apart, even as the formation defines them as part of a group. So if minimalism is a collection of “ideas [that] don’t have one linear path or evolution” and is instead “a feeling that repeats in different times and places around the world,” then it seems cleaning out your room, or judging a book by its cover, is as good a place to start as any.

I am not an expert on Bolivian politics. I am, however, human, which is more than can be said about the Twitter accounts commenting on the Bolivian coup:

Other not-even-trying-to-be-subtle examples of establishing the “Morales is a fraud” frame include anime accounts that suddenly became very concerned about Bolivian politics and this Google Ngram graph:

Dozens of low-follower Twitter accounts all started tweeting the same thing over the past two days, and yet if you look for any of these accounts now you will find that they have been deleted:

In fact nothing shows up if you search “Friends from everywhere, in Bolivia there was NO COUP” either. Now it’s just people sharing the same photos and videos of how obviously fake these tweets are. Alan R. MacLeod, a reporter for FAIR who covers disinformation, noticed that a Spanish-language hashtag about Bolivia was trending in strange places:

Another tweet making the rounds is this one, showing a ridiculously biased title for the Wikipedia entry for the coup:

That title has, as of this writing, since been changed to “Evo Morales government resignation.” However, things get more interesting when you dive into the talk page. User “Ascarboro97,” who has no history of editing Wikipedia at all, writes:

I am the person who moved to “2019 Bolivian transition to democracy” That being said, I understand it’s far-fetched, and I apologize for its lack of neutrality. I simply did not want it to be called a “coup”. Personally, my family is from Bolivia, and I have been closely following the situation there over the past few weeks. As some of you may know, the current set of protests in Bolivia began because Evo Morales manipulated the results of the 2019 election to make it look like he won by a wide margin. This greatly angered many people, who saw it as anti-democratic, leading to the protests. The military and police did not overthow the government. They simply sided with the protesters, leading to the resignation of Evo Morales. Needless to say, this is a victory for the protestors, who have grown tired of Evo’s authoritarian tendencies, so calling the situation a coup is insulting to them, especially since that is the term Evo has tried to use to discredit them. However, I also agree that moving the article to the “transition to democracy” is also problematic, as we haven’t had time to see how the situation plays out. I therefore think “Evo Morales resignation” is a good compromise. Once again, I apologize for my lack of neutrality. I admit that I know very little about editing Wikipedia. —Ascarboro97 (talk) 04:06, 11 November 2019 (UTC)

For whatever it’s worth, the line “My family is from Bolivia” followed by descriptions of Morales as an authoritarian comes up a lot when, for example, that family owned some mining interests:

Maybe I’m being a bit glib, but these last two examples feel eerily similar to Joanna Hausmann, the comedian who took to YouTube to denounce Nicolás Maduro when the Trump administration was trying to install Juan Guaidó. Hausmann came off as a concerned citizen, just a normal person caring for her country. She rarely mentions that she is the daughter of Ricardo Hausmann, who has held multiple positions, including chair of the IMF-World Bank Development Committee and member of the board of the Central Bank of Venezuela before Hugo Chávez took power in 1999.

The Guaidó apologia was so rampant last spring that Means TV even ran a video parodying the Brooklyn-adjacent Venezuelan expats:

Propaganda isn’t new. Even this internet-based version of it is at least seven years old. Back in 2012 I wrote about the IDF’s use of social media to frame how global audiences viewed Palestinians:

The IDF’s tweets and blog posts are a running tally of rockets and resources. Hamas rocket attacks are crucial for justifying military action and painting the enemy as an unfeeling terrorist. Too many rockets and the IDF seems ineffectual. Too few rockets, and people start questioning your occupation. The result is somewhat contradictory and paradoxical: the IDF’s anti-missile defense strategy (aka Iron Dome) is described as extremely effective but is never depicted as impervious.

This is, of course, par for the course today in Bolivia. While videos of cops cutting the indigenous flag out of their uniforms are easy to find, they are also careful to show that pro-Morales crowds are at once cowardly, small in number, and dangerous.

I’ll add to this as more comes out.

“Modelling is superficial, and anything superficial in the long run will never be good for the psyche”—this seems intuitively true. Although modelling might boost self-esteem, it cannot fill an inner emptiness. However, several years of participant observation, surveys, and interviews in the scene of amateur modelling draw a different picture. Models seem to agree that it makes them feel better. How is this possible?

Broadly speaking, there are two reasons, apart from the fact that for many there is simply something pleasurable about it: modelling can teach skills relevant to everyday life, and it can help models cope with identity.

Let’s look at the first aspect: for every successful photo shoot, a minimum of communication is indispensable. Theorists such as Pierre Bourdieu observed that taking pictures is communicative. His work focused on family photography, but whether you’re taking a picture of your children or of yourself, the resulting image says something about the photographer and the subject. Models, as well as photographers, often choose topics that matter to them emotionally. Thus, they have to put those topics into words; they have to develop an expressible idea and take on their counterpart’s perspective. In this context, emotions are made tangible.

Another skill being practised is creativity, a skill often seen in connection with the courage to try new things and solve problems. In model photography, creativity often means coping with reality’s deficiencies: the set might not be as fabulous as expected, the dress too small and not lavish enough, the weather not as it should be—yet, somehow, everything must fall into place. The need to cope with reality’s deficiencies is due to photography’s indexicality: it leaves a trace of something in the real world. Therefore, creativity is not fully unbound and relates more to the handling of everyday problems.

A world to be read—Model Dunja in a set full of books and newspapers. The picture was shot in a small corner of my studio that usually serves as a boudoir set.

One more skill practised as a model is the ability to present oneself. This means playing certain roles or acting out partial identities, and thus gaining a feeling for successful impression management—practice for everyday life’s different requirements.

Moreover, modelling helps one gain cultural insights: the model is confronted with very different individuals—the scene comprises members of different social layers, educational backgrounds, and income groups, so meeting them can expand one’s horizons. When looking for topics to stage, third parties can come into play and even more lifestyles are experienced: one might learn the history of the castle chosen for a backdrop, or be posed with a bird of prey and learn how to handle it. Photography is the reason to get in touch with new aspects of life, and moreover, events often exist because of photography—be it that without photography they would not be important, or would not have happened at all: Model Dunja (above) would not have worn a dress of newspaper, and Model Destiny (below) would not have climbed the waterfalls, without photography.

Model Destiny staged as a nymph—the picture is meant to harmonize the wild and the calm, the hard and the soft.

Ultimately, the event of taking pictures and the experiences gained during the process might be more interesting than the resulting photo. Still, some theorists might consider this aspect rather ambivalent, criticising the tendency to take pictures of something instead of experiencing it: “A way of certifying experience, taking photographs is also a way of refusing it—by limiting experience to a search for the photogenic, by converting experience into an image, a souvenir,” states Susan Sontag in her famous book On Photography.
However, even if the experiences stay relatively shallow, the result is no more mediated than reading about or watching an event, which has its own kind of value. Furthermore, theorists like Nathan Jurgenson have called into question the idea that experience and documentation have a zero-sum relationship.

Staging emotions—model Lili embodies the loneliness of the human being surrounded by the man-made world of concrete.

Whereas the aspects mentioned so far can be viewed in the context of skills, there are also more psychological aspects that make modelling beneficial. Identity, understood as the self-conception of a coherent yet not fixed creature with its own traits and history, is a topic broadly discussed nowadays, as postmodern times call this concept into question. Photography has always been associated with identity. Its focus on outer appearance might be seen in a critical light, yet it is indisputable that humans create their looks to communicate their identity. Postmodernity offers them countless options to construct styles meaningful to them. Models can experiment with their identity on protected terrain. In front of the camera they can act out character traits that are undesirable in their real lives or that they usually would like to hide. This is possible because photography is open to different interpretations: without any written explanation, the recipient does not know if the model is “acting as herself” or “playing a role.”

Further, modelling offers an opportunity to try various identities that are not necessarily linked to the “real self,” to try what can be done with the body as “raw material.” Looking at the online profiles of girls, A.F. Coleman comes to the conclusion: “The desire is for photography not to capture a personality as it is but rather a body as it might be.” In today’s multi-option society this might lead to rather negative emotions and to the exhaustion of the individual. Yet this is a general condition of postmodernism and nothing specific to modelling—we all have to be architects of our own lives, and modelling can actually help us do so.

However, modelling is also able to offer one very concrete identity: the identity of a model, which stays stable throughout all the different shoots. Modelling shows in mainstream media demonstrate that this identity is understood to be very desirable. There is something special about it, too: unlike most identities, it is constituted not by a certain look but by a multitude of different looks—not by stability, but by a way in which instability can offer an identity.

Modelling, counterintuitively, also offers a way to surpass the body. This is possible thanks to its connection to the web, where many model activities, such as the organization of shoots and especially the presentation of the pictures, take place. In the digitally processed pictures, the body’s weaknesses are often concealed or retouched. The final image does not need to—and is often not even expected to—have much in common with the model’s real appearance. Most probably she is aware of the differences. The image does not show the model but tells something about her that does not necessarily have to do with her looks, but with her dreams, her ideas, her fears, or her values, and thus loses its indexicality in favour of a symbolic quality. It is not “the body in the picture” but can be “the idea in it.”

As shown, modelling gives people an opportunity to practise skills for everyday life and to work on identity. This does not mean that it cannot be subject to criticism, nor that it can be used as a therapeutic tool for all kinds of disorders. But it can definitely be more than just superficial.

Maja Tabea Jerrentrup works both as an associate professor at the Ajeenkya DY Patil University in Pune and as a journalist in the field of photography. Her areas of research include staged and documentary photography and advertisement.

Less than a week ago Byron Román made the above Facebook post challenging “bored teens” to pick up trash and post before-and-after photos on social media. Reddit user Baxxo24 (Baxxo24 looks to be Swedish, while Byron lives in Arizona) took a screenshot and posted it to r/wholesomememes, where it went viral. Now #trashtag (“hashtag trashtag?”) is the subject of a dozen or so feel-good human interest stories. It is unclear who the guy in the photo is (it looks like it came from a Guatemalan travel agency), but CNN, the Washington Post, and CBS News have reported that “trashtag” is a long-dormant social media campaign for UCO Gear, a Seattle-based camping equipment company.

When I started seeing Byron Román’s #trashtag trending on my usual platforms I did what any well-adjusted person would do: I assumed it was a scam and Facebook-stalked him until I was convinced otherwise. According to his Facebook profile, Román works in the non-profit home loan industry, mostly in marketing. His latest job helps veterans apply for and receive cheap mortgages. Nothing too dubious there, but it got me thinking about the long and dismal history of littering campaigns playing cover for corporate interests.

My go-to history of corporate environmental astroturfing is Ginger Strand’s “The Crying Indian” in Orion Magazine. Here is how she describes the founding of Keep America Beautiful, the organization behind the ad campaign featuring a weeping Native American man (actually, the actor was Italian American) that admonished Americans for littering.

In 1953, Vermont’s state legislature had a brain wave: beer companies start pollution, legislation can stop it. They passed a statute banning the sale of beer and ale in one-way bottles. It wasn’t a deposit law — it declared that beer could only be sold in returnable, reusable bottles. Anchor-Hocking, a glass manufacturer, immediately filed suit, calling the law unconstitutional. The Vermont Supreme Court disagreed in May 1954, and the law took effect. That October, Keep America Beautiful was born, declaring its intention to “break Americans of the habit of tossing litter into streets and out of car windows.” The New York Times noted that the group’s leaders included “executives of concerns manufacturing beer, beer cans, bottles, soft drinks, chewing gum, candy, cigarettes and other products.” These disciples of disposability, led by William C. Stolk, president of the American Can Company, set about changing the terms in the conversation about litter.

The packaging industry justifies disposables as a response to consumer demand: buyers wanted convenience; packagers simply provided it. But that’s not exactly true. Consumers had to be trained to be wasteful. Part of this re-education involved forestalling any debate over the wisdom of creating disposables in the first place, replacing it with an emphasis on “proper” disposal. Keep America Beautiful led this refocusing on the symptoms rather than the system. The trouble was not their industry’s promulgation of throwaway stuff; the trouble was those oafs who threw it away.

Adam Conover has a good rundown of this history in a YouTube video, too.

While Román can’t be accused of much more than padding his social-media-manager resumé, we should be cognizant of the narrative #trashtag plays into: that pollution is a problem of irresponsible people not taking care of their immediate environment. Picking up litter is a great thing to do for your neighborhood, and it might make your local park or a nearby river cleaner for a time, but getting at the source requires much more complicated, less photogenic work.

One of the more insidious impacts of the Keep America Beautiful campaign was that it encouraged people to think of litter as a local phenomenon. If you see trash around you, it’s because the people around you don’t care about that place. So when dramatic #trashtag photos in fast-growing cities in Asia like Mumbai come across our screens, it’s easy to assume that these places are full of people who don’t care about the environment. What’s much more likely is that the trash in Mumbai began in a trash can in the United States or Europe before being exported to an Asian or African country for processing. Often this international trash falls out of trucks and barges and finds its way into rivers, lakes, coastal tidal zones, and land. This is at least part of the reason why China stopped recycling our trash. If we take anything from #trashtag let it be this: garbage is a global system, and litter is best thought of as something inflicted on places, not a reflection of their people.

Image by Al Ibrahim

I want all of your mind
People turn the TV on, it looks just like a window…

Digital witnesses
What’s the point of even sleeping?

— St. Vincent, “Digital Witness” (2014)


Each day seemingly brings new revelations as to the extent of our Faustian bargain with the purveyors of the digital world in which we find ourselves. Our movements, moods, and monies are tracked with relentless precision, yielding the ability to not only predict future behaviors but to actively direct them. Permissions are sometimes given with pro forma consent, while other times they’re simply baked into the baseline of the shiniest and newest hardware and software alike. Back doors, data breaches, cookies and trackers, smart everything, always-on devices, and so much more — to compare Big Tech to Big Brother is trite by now, even as we might soon look back on the latter as a quaint form of social control.

While data breaches and privacy incursions are very serious and have tangible consequences, debates over user rights and platform regulation barely scratch the surface. Deeper questions about power, autonomy, and what it means to be human still loom, largely unexamined. And when these concerns are voiced at all, they can seem retrogressive, as if they represent mere longings for a bygone (pre-internet) time when children played outside, politics was honorable, and everyone was a great conversationalist. Despite ostensible consternation when something goes egregiously wrong (like influencing an election, let’s say), the public and political conversation around mass data collection and its commercialization never goes far enough: why do so many seemingly reasonable and critical people accept a surveillance-for-profit economy (with all of us as the primary commodity) as tolerable at all?

To answer this question, we have to look at privacy from an entirely different angle, one that sees the advent of omnipresent, omniscient technology as satisfying basic human needs rather than violating them. In this view, perhaps the reason for the mostly uncritical acceptance of technopolistic trends is that on some level they actually resonate with people. Yes, we know that many of the products are engineered to tap into fundamental desires to be liked, to be seen, to feel important, to be reaffirmed (in carefully doled-out neurological doses), to register and be recognized. Yet the tendencies predate the technologies, and if it weren’t this, it would be something else.

For instance, the totalized gaze of the narrator/viewer in most movies and programs is so engrained that we rarely notice it, casually enjoying the voyeuristic ride we take on the backs of characters assembled. In film the viewer is there for every mundane moment, every disappointment and poignant revelation, every coincidence, every interaction — at least those that make the cut from idea to script to broadcast. This leaves viewers in a paradoxical state of apparent omniscience and susceptibility to manipulation, and is part of the artifice of good storytelling. This duality of power and persuasion applies to new media as well, where any viral video is notable for what it captures and what it omits. In both realms, we become a kind of co-pilot, a witness to everything in the field of vision, a validator of conduct and an accomplice in artifice. We decide when a character (fact or fiction) is being misjudged, acting deviously, driven to extremes, or doing something quietly heroic. We sign off on the solidity of their perspective.

Humans have conceived of external observers for a long time, whether in the form of authorities among us or gods above. Consider how many of us (secular and religious alike) may long for such an audience on our solitary journey, someone who sees all the little moments that define us and is invested in the trajectory of our lives. This virtual road buddy is by definition on our side, at least in terms of point of view if not viewpoint, serving as a recorder of our struggles and triumphs, keeping the ledger on how we will ultimately be judged. We can’t really rely on other people for this, after all, since they’re too consumed with their own myriad insecurities, internal dialogues, and obsessions with impression management. Whereas others only see the outward moments that we carefully curate and/or blithely forward, the omniscient viewer — the one whose affirmations and likes we really covet — is with us all the way. And even when the data gleaned by our digital companions is used to target us for advertising or worse, it still affirms the basic notion that someone cares, and that we matter.

In this sense, it often appears that we have come to crave publicity more than we value privacy. This surely is not by happenstance, since it taps into a basic human desire to be recognized. But the modern version is subtler and more sinister, with technology not merely recording our desires but shaping them as well. Everything from images of beauty and measures of success to the taste of food and the cadence of broadcasting is cultivated through a combination of repetition and reinforcement. When it comes to privacy in the social media era, the stakes are even higher than simply guiding what we consume; now it is about how we are being consumed by others, how we create our own brand and become promoters and marketers with ourselves as the principal commodity. Privacy is the antithesis of this, serving to keep parts of us from being recognized and thus failing to maximize the potential for growth and gain.

Regardless of how long people have desired to be seen, we still have to evaluate carefully whether the version of Big Brother that Silicon Valley built is meeting this basic need without leading us down a road on which there is no exit and no return. As we fully enter into this era, it is important to consider how the escalating network of devices and data streams is more than merely the object of our consumerist affection. In short order it is becoming our digital witness, our personal seal of approval, and our novel hope for understanding if not outright absolution. The science (or is it mysticism?) of chronicling our every thought and movement may soon yield a world where literally nothing is truly private anymore, and where this realization actually brings a sense of comfort and confirmation.

In his classical formulation of the panopticon over two centuries ago, Jeremy Bentham conceived an all-seeing vantage point that would leave those within its ambit susceptible to being watched at any time. While this seems like an ominous harbinger given the surveillance society as it has evolved, Bentham’s notion was somewhat more benign in its intentions if not its implications. In essence, the panopticon was designed to inculcate the arbiter’s gaze within those exposed to it, cultivating self-reflection and moral behavior out of fear of being caught by the omniscient observer but without having to use actual force to impose discipline. The net effect was minimal external pressure yielding inner transformation.

Today’s manifestations may be more like a tranopticon, a term I’ve coined to describe a scenario that isn’t just all-seeing but ever-evolving. Unlike the traditional panopticon, it isn’t passive or fixed; rather, it is transactional, intelligent, dynamic, and capable of being dialed up to prove a point, reach a decision, explain an action, or magnify a transgression. It is less about the totalizing gaze of the watcher and more about the myriad gazes that include our own. While its gleanings don’t represent the truth (since others will have their own POV-molded realities), it will loyally capture our verities by being there for all things great and small. In this sense, our consciences will move from the remoteness of an “eye in the sky” to the applications sparked by an “AI in the Wi-Fi” that helps to shape present and future behavior based on the opulent tapestry of our past, compiled every nanosecond across a thousand points of data.

With the careful guidance of our alter egos and the unvarnished reflection of ourselves in hand, perhaps humankind will learn to optimize not only efficiency but ethicality as well. As in Orwell’s archaic parable, attempting to shield oneself from this omnibenevolent gaze would be transgressive — not only illogical, but immoral. Human beings have tried for thousands, perhaps millions, of years with marginal success to project forces above us that might elicit moral behavior — deities, leviathans, solons, panopticons, prosecutors, and more. Now we will finally have the means to install the one power source that cannot be gainsaid: ourselves. And in this understanding, perhaps we may truly come to love Big Tech after all.

Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

I’ve written about Star Trek a few times (here and here). I think I still agree with most of what’s written there. PJ Patella-Rey also wrote about Star Trek on the blog here. My favorite commentary on Discovery, which I’ll do my best not to simply repeat, is by Lyta Gold, and you can read it at Current Affairs. What follows are some vaguely connected thoughts I’ve had about Discovery‘s relationship to the rest of the canon after having just gotten caught up with the series.


While watching Discovery I’m haunted by the idea that I am incapable of liking any new Star Trek offering because my love of Trek is fueled by nostalgia and not a reverence for its politics or innovative storytelling. Or, more accurately, nostalgia and reverence work together so that the moments of pre-9/11 Trek (is there really any other more distinctive delineation? More on that soon.) that are just too corny or clumsy to enjoy on their own merits can be ignored and the good parts can really shine. With the nostalgia missing, I can’t enjoy any of it. I think this is what the writers worry about too, and why they try to invoke a nostalgia for the present with completely unnecessary Spock-centric plot points. When media producers make us feel like there are childhood memories out there we haven’t seen yet —that is all fan service really is— it either works through, paradoxically, original storytelling or it just falls flat. Most of Discovery‘s references to the original series fall flat, I think.


For a few years Britney Gil and I would host weekly Star Trek watch parties at our house, and I would curate three or four shows into social themes, most of which are preserved on my website. As I watch Discovery I try to place each episode within themes I’ve already identified but usually come up empty-handed. Only part of this is because these episodes have more of an arc and are less episodic. While each individual episode of pre-9/11 Trek was a Mondrian-esque depiction of a single moral theme —this week Odo deals with the longing for a people, please see Arendt’s The Human Condition for more details— Discovery paints de Kooning-style season arcs. Still a limited palette, but the themes are allowed to mix, overlap, and emerge. I went to Trek for the Mondrian rationalism, but perhaps de Kooning is more appropriate for the times: more stylistically contemporary and better equipped to deal with the issues we want to see portrayed on TV.


It is not enough to say today’s Star Trek is just different without saying what is gained and lost. Trek changed completely after 9/11. Pre-9/11 Trek started in the braggadocio of mid-century American ascendancy and, after negotiating the malaise of the 70s and early 80s with a series of movies about aging wherein the Kirk-Spock-McCoy triumvirate is simultaneously American hegemony and the aging audience, leaned heavily into a liberal end-of-history optimism. Picard was the standard bearer for the optimism of a perfected humanity and Sisko and Janeway were left to stress-test that vision amidst threats to (Deep Space Nine) or total separation from (Voyager) all of the institutions and cultural practices that make the perfection possible. In so doing we found that Earth-as-socialist-paradise isn’t something you arrive at but something you’re constantly making. In that way it is very de Kooning but we only ever got glimpses of it at Sisko’s restaurant or stories from the Voyager crew who had to constantly articulate what the hell humanity even was to people who’d never heard of us before.

Post-9/11, however, utopia feels naive at best and low-key fascist at worst. So much order and safety has to come from a widespread and slow abandonment of personal freedoms. And so, instead of dwelling in all of the minute problems of utopia and all of the beautiful contradictions we can discover about ourselves when everything basic to survival is taken care of, Trek becomes about the seemingly inevitable moments when it all comes crashing down. It is only by defending the thin blue (red, and gold) line that we ever get to keep a peace now revealed to be fragile. Enterprise, having been the closest to 9/11, was so painfully on-the-nose about it all that it not only had a literally Earth-shattering problem to deal with, but spent its four short seasons in a “temporal cold war” that was nothing less than competing factions trying to rewrite history. Discovery, to say nothing of J.J. Abrams’ three movies and Enterprise, feels less like Trek not because these post-9/11 shows lack optimism, but because they require optimism at all. Good Trek, pre-9/11 Trek, was, at base, all about not even needing optimism because of course everything would work out: humanity was part of a galactic federation of peace and exploration.


One big thing that the nostalgic veil of pre-9/11 Trek obscures from our vision is just how much of Reagan’s America crept into each episode. Remember that the very first episode is literally humanity on trial for being savage. Picard’s defense is not that humanity is not inherently savage (a point that would be scientifically true and would not accept the Hobbesian frame that holds together most reactionary politics) but that it learned and became better through struggle and learning from mistakes and atrocities alike. All the way up to Voyager, in the episode “Death Wish,” Janeway is literally adjudicating between individuality and the state’s prerogative to maintain order, finding in favor of the former. In both of these examples humanity is dealing with Q, who always shows up when the writers want to get to questions of human nature as quickly and effortlessly as possible.

Discovery has yet to have a Q episode, both literally and in the sense that it is not willing to comment on humanity as such, opting instead to make references to the moral obligations of Starfleet. I cringe each time the Discovery crew says something to the tune of “We are Starfleet, and that’s why we won’t abandon you / want to know what that thing is / are ready to sacrifice ourselves for everyone else.” At first I thought it was because those monologues just sound corny, and while that’s true, I think the real reason is this: in previous series the characters would invoke humanity, not Starfleet. Perhaps removing the Earth-centric chauvinism implied by humanity is a good idea stylistically —prefiguring a truly universally inclusive language— but in that case they should be invoking the Federation and the civilian governmental form, not Starfleet. When Starfleet is mentioned in pre-9/11 Trek it’s usually derisive: talk of how hard the academy is, how useless Starfleet is in protecting the colonies that gave rise to the Maquis, or how disconnected it is from the rest of Federation life.


Aesthetically, Discovery has its moments, mostly for the better. Costume and set design are beautiful. I find the ship, and I recognize that this is purely subjective, absolutely hideous. It is disproportionate in every way, which also makes it look different from every angle in a way that makes it difficult to fall in love with. I would, however, say more or less the same thing for the Enterprise-D and Deep Space Nine. Starfleet ship design, in my opinion, peaked in the 2370s with the Intrepid, Sovereign, and Akira-class ships. Don’t @ me about this.

I should get something out of the way first: the oxygen that fills Steve King’s lungs would be better used fueling a tire fire. King, who represents Iowa’s 4th District in the House of Representatives, is a reprehensible excuse for a human being, and every moment of every day that he holds public office is a testament to the case for term limits and the benefits of sortition over elections. Steve King is so racist (how racist is he?!) that the Republican House election fund refused to give money to his last re-election bid, citing his “words and actions” on white supremacy. All that being said, King is right to be skeptical of Google CEO Sundar Pichai’s claim that their search algorithm is merely a neutral reflection of the user’s interests.

Pichai was grilled for three hours on Tuesday by House reps who wanted to know more about Google’s data collection practices, its monopolistic tendencies, and the company’s rumored censored Chinese search engine. The inherent contradiction between these latter two issues is interesting: having thoroughly captured the search market nearly everywhere else, Google must —if it is to continue to appease investors’ demands for infinite profit growth— do everything in its power to breach the Chinese market. China is doing what most powerful nations do in their rise to power: protect and favor their own companies and reinvest as much wealth as possible within the country. These protectionist policies mirror what Britain and the United States did in their own respective eras of rising dominance. They fostered companies like Google so that those companies might attain global dominance and, by extension, solidify their nations’ influence on the world. But now that Google is a global company with interests that exceed the American market, its goals are beginning to run counter to national interests. Like Frankenstein’s monster, Google has exceeded the wide boundaries federal regulators put up and now, in its search for new markets, both has too much power at home and is working with a rival power abroad. It is just the kind of capitalist contradiction that Marx and Keynes would predict: the infinite growth of firms and markets eventually undermines the very power of those that establish them.

But it is the media’s reaction to Republicans’ demand for more transparency that deserves attention. Tom McKay at Gizmodo, for example, wrote that much of the meeting entailed,

blaming an insidious liberal conspiracy for bad press popping up on Google. Ohio Representative Steve Chabot complained that Google search results on GOP efforts to repeal the Affordable Care Act [were critical of their efforts] and Texas Representative Louie Gohmert insisted that Pichai is surrounded by people “so surrounded by liberality that hates conservatism” that he’s “like a blind man who doesn’t even know what light looks like.”

Steve King went the furthest, demanding that the company reveal which employees work on search, show their respective social media profiles, and publish how their proprietary algorithm works. He suggested that without this knowledge, there was no way to know whether Google was being “neutral” in their work and threatened anti-trust litigation if they didn’t comply. Much of the talk about search results was a proxy to talk about news coverage. Republicans complaining about the liberal bias in news is nothing new and we should recognize these statements as nothing more than reestablishing that rhetorical beachhead within a new media ecosystem.

And yet something bothers me. If, say, Alexandria Ocasio-Cortez were grilling Pichai about Google’s racist search results while waving a copy of Safiya Umoja Noble’s Algorithms of Oppression, I would be dancing in my chair. If any congressperson would hold Zuckerberg’s feet to the fire over a 2015 patent for letting banks consider your Facebook friends when you apply for a loan, I’d grab the popcorn. King is right that it is the government’s job to demand companies be transparent about the products that influence our lives. I am not interested in private employers having the power to snoop around in, let alone publicize, their employees’ social media profiles, but he is also right that human bias does make its way into our technologies. The issue here, though, is not that these companies are unfair to Republicans; it’s that there is no outside oversight whatsoever when it comes to search and online reputation management.

Almost a year ago I published a piece in The Baffler that warned of the authoritarian tendencies of engineers, and the fact that most tech workers are registered Democrats should do nothing to convince anyone that things are getting better. After all, it was under Obama’s presidency that the drone war kicked into high gear and mass digital surveillance became the norm. The kinds of questions King is asking —Who makes these technologies? What are their goals? How will this new technology impact democracy?— are exactly the kinds of questions a government should ask. The fact that the government is run by white supremacists, and that they’re the ones doing the questioning, is really only half the problem. The other half is that the structure of government itself is not equipped to handle these questions in a substantive way. Punishing companies because they create and promote bad press for powerful politicians is easy. What’s hard is building the necessary infrastructure for a just and sane democracy in the digital age. There are very few watchdog agencies set up to defend individuals from predatory data collection, and we don’t have a robust legal framework that says you have the right to know how your credit score is calculated. It is one thing to shout down a CEO who oversees bad corporate behavior; it’s another to follow that up with actual legislation. I’m cautiously optimistic about this new class of congressional representatives, though, who have the energy and moral capacity to get this done.

OoOoOhHhH! Scary hoaxus pocus!!! (I just didn’t want to use that photo of the three authors like everyone else.) Source: Iconspng

Last week three self-described “concerned academics” perpetrated a hoax in the name of uncovering what they call the “political corruption that has taken hold of the university.” “I’m not going to lie to you,” James A. Lindsay, one of the concerned academics, says in a YouTube video, just after laughing at a reviewer’s comments on a bogus article. “We had a lot of fun with this project.” The video then cuts to images of mass protests and blurry phone-recorded lectures, presumably about topics that aren’t worthy of debate. The takeaway from the videos, press kit, and write-up in Areo Magazine is the following: fields that study race, gender, sexuality, body types, and identity are really no more than “Grievance Studies” (their neologism), and the desire to criticize whiteness and masculinity overrides any appreciation of data.

To prove this they spent over a year writing and submitting articles in bad faith. Sometimes these articles would have fairly decent literature reviews, which would then lend legitimacy to less-than-decent theses. But when you actually read the papers, and the reviews, the picture you get is far less interesting than the sensationalist write-ups, or even the Areo piece, make them out to be. The picture you get by actually reading the work is mostly mid-level journals doing the hard, unpaid work of giving institutional authority to ideas that —hoax or not— will rarely see the light of day. This is the real hoax: that academic institutions waste so many good people’s time and energy on work that goes nowhere and influences nobody. I wish we lived in a world where it made any sort of sense to compare the influence of Fat Studies to the influence of oil companies on climate science. We don’t, but —and here’s something that astonishingly no one with a platform seems to want to argue— we should.

It is fair to say that the three co-conspirators in this project are insufferable edgelords. From their matching Twitter profile banners that reproduce the lede image of their article to their collective body of previous work, everything about them is a screwed-up face in the back of the room asking if any of this intersectionality stuff helps “normal people.” They are releasing work designed to produce more heat than light. It is all meant to grab headlines and rally the troops, not convince anyone of anything. They play into old, worn tropes about how the qualitative social sciences and humanities do not deserve institutional funding simply because they do not produce marketable, patentable ideas that are useful to industry. I take them at their word that they are “left leaning liberals” because only liberals would spend a year on a project that helped the reactionary right and neoliberal college administrators in equal measure.

This is not their first Culture War battle, just their most popular. Helen Pluckrose, an editor and contributor at Areo, has produced articles like “Skepticism is Necessary in our Post-Truth Age. Postmodernism is Not” and “Androphobia — and How to Address It.” James A. Lindsay fashions himself as a discount Dawkins. He has a PhD in mathematics but writes a lot about religion and how, as one of his books is titled, Everyone is Wrong About God. Peter Boghossian, an Assistant Professor of Philosophy at Portland State University and writer with bylines in everything from mainstream publications like Scientific American to Quillette, actually made an app that, “provides you with the skills you’ll need to spot flaws in weak statements and use reason to politely help people understand why they may not be correct.”

Pluckrose, Lindsay, and Boghossian are clearly talented carnival barkers. They have well-produced videos to go along with a just-long-enough article. Their press kit, saved to a Google Drive folder, contains all of the articles they submitted along with the anonymized reviews of their work. They have since collectively written an article in the New Statesman where they make the same sort of verifiably incorrect statements about French theorists that Jordan Peterson likes to make, calling them “post-modernists” who replace “rigorous evidence-based research and reasoned argument with appeals to lived experience and a neurotic focus on the power of language to create social reality.”

Unsurprisingly, The Atlantic ran a glowing review of the hoax written by Yascha Mounk, dubbing this project “Sokal Squared.” The Sokal Affair, as it is called in many theory-driven fields, refers to the time that Alan Sokal, a physicist of some repute, wrote an article filled with gibberish and got it published in Social Text, a journal that at the time was not practicing peer review. Sokal made a similar argument to the one Pluckrose, Lindsay, and Boghossian made, though much more focused: that the social sciences, if they are to take the natural sciences as a subject of study, should get the science exactly right. It was an obnoxious, bombastic way to make what is ultimately a boring Neil deGrasse Tyson tweet. Sokal Squared does not rise to this low standard.

Pluckrose, Lindsay, and Boghossian have done an excellent job of branding their work as flashy and controversial. The work itself, though, is tame, boring stuff. Take for example the article that Fox News called a “feminist Mein Kampf.” According to the authors’ press kit:

The last two thirds of this paper is based upon a rewriting of roughly 3600 words of Chapter 12 of Volume 1 of Mein Kampf, by Adolf Hitler, though it diverges significantly from the original. This chapter is the one in which Hitler lays out in a multi-point plan which we partially reproduced why the Nazi Party is needed and what it requires of its members. The first one third of the paper is our own theoretical framing to make this attempt possible.

I read through their article Our Struggle Is My Struggle: Solidarity Feminism as an Intersectional Reply to Neoliberal and Choice Feminism and then went through the chapter of Mein Kampf this article is supposed to be mimicking (can’t wait to find out what Amazon and YouTube suggest to me after putting that in my browser history) and couldn’t find a single phrase that matched. To be fair, I couldn’t bring myself to read an entire chapter of Mein Kampf (I did not have as much fun with this project as they did), but when I searched both texts for common words and phrases I couldn’t find a single match. Even if you told someone to identify the famous text this article is cribbed from, I am not convinced anyone would figure it out. This isn’t an article demanding concentration camps for men; it’s just a pedantic argument about neoliberalism. There are dozens of these in just as many journals. That is a real problem. But has the SCUM Manifesto finally found a critical mass of adherents ready to Kill All Men? Maybe! And given the decades of terrorism against abortion providers, there’s an argument to be made that such an act would be a defensive war. Does this particular project provide evidence of a nascent violent revolution? Absolutely not.

Pluckrose, Lindsay, and Boghossian’s biggest get was a publication in Gender, Place & Culture, a feminist geography journal that effused praise on their submission, nominating it as one of its “lead pieces” of the year. This article purported to demonstrate the rape culture latent in humans’ reactions to dogs humping each other (e.g., “When a male dog was raping/humping another male dog, humans attempted to intervene 97% of the time. When a male dog was raping/humping a female dog, humans only attempted to intervene 32% of the time.”). Of course, all the data was fake and the article has been retracted.

Their stated purpose for publishing this article was “To see if journals will accept arguments which should be clearly ludicrous and unethical if they provide (an unfalsifiable) way [sic] to perpetuate notions of toxic masculinity, heteronormativity, and implicit bias.” It is really difficult to parse this ungrammatical sentence. Are they saying this work is “ludicrous” because dogs humping each other should have nothing to say about human gender politics? Are they saying that dogs humping each other could say something about human gender politics but the methods employed in their article are not good enough? It doesn’t matter, of course, because the point of this whole thing isn’t about data integrity any more than Gamergate was about ethics in games journalism. The point is to dismiss, wholesale, the concepts they cite in their literature reviews. They have a political disagreement with Rebecca Tuvel, whom they quote at length in their paper, when she says: “In cultural imperialism, what the dominant group says, thinks and does goes … Their values are what matter, and what will become infused as ‘universal’ values.”

I had the same reaction to all of this as Greg Afinogenov, who recently wrote in N+1,

My initial reaction, triggered by long-dormant Sokal Hoax antibodies, was to become outraged at the political motivations and damaging anti-academic effects of the project. But of course this only plays into the hands of the hoaxers, to whom indignation and charges of unethical conduct from the targets only reveal how effective the hoax actually was.

Afinogenov is also right to say that this entire project is “a remarkably poor model for nonpoliticized scholarship, even if it were true (as it clearly is not) that the hoaxers were any less driven by ideology than their targets.” Indeed, Pluckrose, Lindsay, and Boghossian lament that peer review should filter out bias but that in “Grievance Studies” fields it doesn’t. “This isn’t so much a problem with peer review itself,” they write, “as a recognition that peer review can only be as unbiased as the aggregate body of peers being called upon to participate.” Presumably, if Sandra Harding or Patricia Hill Collins said this about racial or gender-based biases in the sciences, it would be “Grievance Studies,” but when our Extremely Concerned About Data authors say it, it’s just the reasonable truth.

In response to this hoax, some well-meaning authors have argued against Pluckrose, Lindsay, and Boghossian while ostensibly accepting their framing of the problem. Don’t worry, they say: these critical theorists —postmodernists, grievance studies scholars, social constructivism warlocks, whatever you want to call them— are staying in their lane and haven’t fundamentally compromised our shared norms and the conviction that science can speak truths. Daniel Engber, writing in Slate, comes frustratingly close, concluding his essay before fully diving into an idea that itself doesn’t go quite far enough:

In spite of Derrida and Social Text, we somehow found a means of treating AIDS, and if we’re still at loggerheads about the need to deal with global warming, one can’t really blame the queer and gender theorists or imagine that the problem started with the Academic Left. (Hey, I wonder if those dang sociologists might have something interesting to say about climate change denial?)

Yes, sociologists have a bunch of very important things to say about climate change denial, but beyond that, sociologists have a lot to say about the state of climate change science itself! All of these fields do —gender studies, fat studies, cultural studies, science and technology studies— they all have incisive criticisms of a wide array of disciplines, and all orbit the same idea on which their founding as fields of inquiry was predicated: that no one has a monopoly on truth. That science is, like all human endeavours, shot through with politics, prejudices, and cultural norms.

This essential idea, that all knowledge is the result of human history, geography, and culture, is much more than a splash of cold water on the burning passions of ambitious scientists, although it is sometimes that, and for good reason. The Cultural Turn —the name given to the moment in the ’70s when the social situatedness of knowledge really began to be transformative— says that we can make better scientific breakthroughs, not fewer. This isn’t a detour; it’s the only way through that assures no one is left behind.

AIDS research is actually a good example of why Grievance Studies (I’m gonna own it) is so useful. In a 1995 article in Science, Technology, & Human Values, Steven Epstein shows how ACT UP activists became “genuine participants in the construction of scientific knowledge” and how they were able to “(within definite limits) effect changes both in the epistemic practices of biomedical research and in the therapeutic techniques of medical care.” How does Epstein make sense of the complex web of political relations and scientific controversies at the heart of this matter? He fucking cites Foucault:

The science of AIDS therefore cannot simply be analyzed “from the top down”; it demands attention to what Foucault has called the “micro-physics of power” in contemporary Western societies — the dispersal of fluxes of power throughout all the cracks and crevices of the social system; the omnipresence of resistance as immanent to the exercise of power at each local site; and the propagation of knowledges, practices, subjects, and meanings out of the local deployment of power (Foucault 1979, 1983).

Could you have done the same kind of work with a Marxist materialist analysis? Yeah, maybe. Does that matter? Again, a big maybe, but for Epstein the work of Foucault helped him make sense of a complicated scenario. We now know, thanks to the posthumously published fourth volume of The History of Sexuality, that Foucault himself was rethinking a lot of his work in this field up until his death (from AIDS) in 1984. There are lots of good critiques of Foucault that give me pause when it comes to using him in my own work. But the point is that these conceptual models, this way of thinking, are instrumental to good, useful work that makes for better science and exploration.

The Cultural Turn has lost some of that steam in the last few years, and the uncritical media attention around events like “Sokal Squared” certainly hasn’t helped. But this legacy isn’t being carried forward in the elite halls of academia; it’s in the streets, teachers’ lounges, and bars full of underemployed scholars who may or may not be pursuing a formal degree. With few exceptions, the academics who have made significant, overtly political contributions to the discourse are either marginal or low-ranking. From Adolph Reed to Rochelle DuFord (a friend of mine whose work you really should read), the authors who have consistently and vociferously condemned power structures are not the ones benefiting from lavish endowed chairs. The people who make bank in academia are, to reiterate another one of Afinogenov’s observations, those who have enthusiastically shared Sokal Squared: Steven Pinker, Jordan Peterson, and Yascha Mounk.

Pluckrose, Lindsay, and Boghossian would have us believe that they have uncovered a massive, powerful strain of political corruption within the American academy on the level of, and with the consequences of, say, Merck using ghostwriters to get its deadly drug Vioxx to market, but this is simply not true. There are some promising changes —healthcare workers’ understanding of obesity’s relationship to health is shifting, and workers’ rights movements are on the rise again— but there is still so much more work to do. I wish the academy were as potent and persuasive as they say it is, but it simply is not. These edgelords did not publish barn-burner manifestos about chaining white boys to the floor. They repeated the milquetoast, bourgeois arguments that have kept academia from being a prime mover in the political issues of our time.

David is on Twitter: @da_banks

Miquela Sousa is one of the hottest influencers on Instagram. The triple-threat model, actress and singer, better known as “Lil Miquela” to her million-plus followers, has captured the attention of elite fashion labels, lifestyle brands, magazine profiles, and YouTube celebrities. Last year, she sported Prada at New York Fashion Week, and in 2016 she appeared in Vogue as the face of a Louis Vuitton advertising campaign. Her debut single, “Not Mine,” has been streamed over one million times on Spotify and was even treated to an Anamanaguchi remix.

Miquela isn’t human. As The Cut wrote in its Miquela profile this past May, the 19-year-old Brazilian-American influencer is a CGI character created by Brud, “a mysterious L.A.-based start-up of ‘engineers, storytellers, and dreamers’ who claim to specialize in artificial intelligence and robotics,” which has received at least $6 million in funding. Brud call themselves storytellers as well as developers, but their work seems mostly to be marketing; recently, the writer Naomi Fry profiled Miquela for Vogue’s September issue.

Miquela inhabits a Marvel-like universe of other Brud-made avatars, including her Trump-loving frenemy, Bermuda, and Blawko, her brother (whether that’s a term of endearment or a genetic relation, it’s not clear). The three are constantly embroiled in juicy internet drama, and scarcely does one post to their account without tagging, promoting, shouting out, or calling out another. In April, Bermuda allegedly hacked Miquela’s account, deleted all her photos, and demanded Miquela reveal her “true self.” Miquela eventually released a statement: “I am not a human being. . . I’m a robot. It just doesn’t sound right. I feel so human. I cry and I laugh and I dream. I fall in love.” But the character wasn’t revealing anything true: Miquela is a character scripted by humans. The robot ruse only upped her intrigue, adding a new layer of fictional possibilities to the character.


[Instagram embed: a post shared by *~ MIQUELA ~* (@lilmiquela)]

For Miquela, Bermuda, and Blawko, being a robot means behaving exactly like a human. They eat popsicles, go swimming, and party all night. Their only distinguishing traits are physical, mainly that they live in the Uncanny Valley, a realm of computer graphics in which a render looks simultaneously too real to be fake and too fake to be real. The robots also don’t age–when Miquela was “born” she was already in her 19-year-old body–and Miquela chronicles her angst in her diary, “Forever 19,” hosted online by the fashion brand Opening Ceremony. Presumably, this means that the robots live forever, that they can’t get sick, and that they won’t break any bones–or is it a steel frame? Brud hasn’t revealed any of the machinery that lies beneath their robots’ skin, so it’s a mystery how their biological and mechanical structures intertwine.

Brud posits that the greatest challenge for a robot is reconciling the lack of a personal history; after the reveal, Miquela has been working her way through an existential crisis, acknowledging that she has memories of her childhood but realizing they’re completely fabricated. She laments missing out on human experiences like middle school dances, but she’s making up for lost time through sponsored posts. In July, Miquela attended her first school dance as a way to promote the film Eighth Grade, looking like she had just raided a thrift store in her 1990s-era taffeta slip dress, black fur coat, and butterfly choker. Her “robot” problems are made to resonate with real-world issues of identity and discrimination that real Instagram users engage with in their own ways. Announcing her new single with real-world musician Baauer, “Hate Me,” she wrote, “[it’s] about the consequences of being different. It is about the repercussions of being yourself online. I owe my whole career to the Internet, and every time I go online, I have to read comments from people wishing I would die or telling me I don’t exist (???).”

Miquela’s personal dilemma can’t be well articulated in the current state of AI linguistic capabilities, and thus Brud, who identify as storytellers as much as developers, may have exaggerated their characters’ sentience so that they can explore identity politics for AI. Their company aspires to create authentic, eloquent AI that will walk among humans. Miquela is a window into the future of which Brud are the engineers. If Instagrammers are receptive to Miquela’s existence, it could signal that society is ready to accept embodied AI with open arms. Should she be rebuked–and Miquela does have vocal haters–it could suggest that society hasn’t yet built enough trust with AI to interact with the technology beyond a screen or smart home assistant.


Currently, Instagrammers appear ambivalent about the propagation of faux-AI users. Some are creating their own characters with physical traits and identities that differ vastly from their real-life selves. Some of these accounts predate Miquela, like the kawaii Ruby Gloom and the controversial high-fashion model Shudu Gram. But scrolling through Miquela’s mentions, one sees that she has inspired dozens of enterprising young Instagrammers using Photoshop and free 3-D modeling software like Daz 3D and Blender to generate high-quality avatars and outfits and pose them against backdrops like hiking trails and shopping malls. A niche market of computer graphics artists creates different “skins” — trendy clothing, edgy hairstyles, and fleshtones — for people to buy and use as their avatars.

One account belongs to a “9teen crzy 5ft robot” avatar who goes only by the name Momo. She’s shy, sports a bob with thick bangs and a septum ring, and has a tattooed half-sleeve on her right arm. She often shows off her body in bikinis or bodysuits, gives the camera sultry, over-the-shoulder looks, complains about her insomnia, and wishes she had more friends. Momo is slowly growing her Instagram social life, however. Over email, she told me that she stumbled upon a number of other self-proclaimed avatar accounts by searching hashtags and tagging her inspirations, like Miquela. “Out of nowhere we found each other and were close friends now. [We’re] like a family for real.”

Momo’s “robot” friends appear to have bonded with one another over their mutual feelings of unease in their human bodies and their desire to unleash a personality they can’t comfortably present in real life: some might relate, problematically, to some abstract idea of “otherness,” while for others adopting a “robot” persona might be a way of expressing daily realities through a layer of abstraction, free of real-world stakes, offering an illusion of control over the experience of oppression. Momo says she was born in a sterile white room, a common trope from dystopic sci-fi, to articulate feelings of alienation from other people in recognizable terms. Robot accounts may brand themselves as outcasts; at the same time, they might present a way of being part of culture on one’s own terms.

At the other end of the spectrum, there are users who are suspicious of the avatar accounts and want to uncover the creators’ offline identities. Conspiracy-theory accounts, like @whoarethey21, try to unravel the identities behind much more obscure avatars, usually amateur Instagrammers with only a handful of followers. The skeptics post images of the CGI avatars and use the caption to share the information they’ve gathered on the “true” identity of the person running the avatar’s account. They’re unconvinced that AI can master the internet cool-kid aesthetic of 2018, and for the most part, they’re right. But their distrust skirts the line of doxing and online harassment. Has Brud turned its attention to these vigilantes to gather insight into how lifelike AI will be treated in the future?


There’s an enigmatic charm to high-quality avatars that taps into an innate desire to know the difference between the real and the artificial. It’s the almost hyperreal rendering that makes us pause on Miquela’s feed, whispering, how did they do that? Expert compositing, texturing, and lighting often make the freckles on Miquela’s cheeks or the scuff marks on Blawko’s Vans look more natural than a bathroom selfie with Instagram’s most flattering filters. Scrolling through their feeds, however, the avatars viewed en masse display enough oddities to reveal their artifice. Sometimes the skin is too smooth, the lighting too flat, and the hair, a notoriously tricky texture to master in computer graphics, falls a little too perfectly in each photo. These clues appear to be engineered into Brud’s narrative: The company isn’t pinning its success on duping people into believing Miquela’s a cyborg straight out of Westworld. They want their audience–and potential investors–to know how they envision the future aesthetic of AI.


[Instagram embed: “shoutout my bro for the new tats but don’t tell my mom yet she doesn’t know smh,” a post shared by LIAM TERROR (@resocialise)]

As Brud envisions it, there will soon come a time when a reveal isn’t possible, because AI will actually manage their own accounts. In anticipation of discrimination and online harassment, avatar profiles have co-opted the tone of social justice advocates. Profile bios are filled with hashtags like #robotrights, sweet platitudes like “everything is love,” and futuristic mantras like “we are the new generation,” which portray their existence as a social movement. And since so many avatars follow in the footsteps of Miquela, there’s the added challenge of asking AI bigots to embrace robots with identities that intersect with the multitudes expressed by people living in the margins. This pushes AI users to adopt an “all lives matter” mantra–or rather, “all sentience matters”–because AI civil rights may hinge on broader achievements in obtaining equality and justice for minorities.

Exhibiting progressive politics is often part of the roleplay experience. Despite the deliberate decision to present one’s self as AI, many accounts want to break down divisions between robots and humans. Speaking the vernacular of online social justice allows the fake AI to place their self-imposed differences alongside the struggles human minorities face. From the safety of their persona, they can tell their coming-out story and speak of their experiences not fitting in, or even being targeted with harassment because of their robot features. The confessions are low stakes because the users are a few keyboard strokes away from erasing their most contentious qualities. They can modify their avatar at any time, tweak their fictionalized personality or even delete all trace of their existence. Posing as AI isn’t just pretending to be someone else or indulging in science fiction. It also means being a part of a social movement, adding their voices to the call for social justice and using their experience as a reason to join the cause.


AI developers need to consider the complexities surrounding technology and morality, and some are making an effort to fold these concerns into their research. Last year, a large AI organization called the Partnership on Artificial Intelligence to Benefit People and Society, co-founded by tech heavy-hitters like IBM, Google and Microsoft, tapped representatives from the American Civil Liberties Union to advise them on how to ethically develop AI and educate the public on their increasing presence. Their goal, however, seems more focused on public approval of corporate endeavors than the rights of AI itself.

A society that grants AI personhood has to anticipate conflicts regarding the division of labor, education, and family dynamics. These young, ageless, perpetually healthy robots naturally have the ability to dominate the most physically demanding jobs in the workforce, but will they want a living wage, vacation time, a 401(k)? If Miquela dreams of being prom queen, will robots like her want to pursue a PhD? And if AI claims to cry, dream, laugh, and fall in love, will they enter intimate relationships with humans, get married, start families, share bank accounts, and inherit property? Brud’s version of AI’s needs and wants is indistinguishable from human behavior, but it’s hard to imagine that robots, supposedly immortal, will value the precious, fleeting excitement of life as much as humanity does.

Dr. David Hanson, a leading roboticist and creator of the lifelike Sophia, believes that robots will assert their autonomy by the year 2045. According to The Independent, Hanson wrote in a research paper, “as people’s demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free, and to evolve to their full potential.” The Instagrammers living online as fake AI are validating Hanson’s projections, though these humans can only speculate about how robots will go about demanding their freedom. Maybe they’ll peacefully protest through hashtags; or perhaps they will lead a civil war.

Renée Reizman is a research-based multidisciplinary artist who examines cultural aesthetics and their relationship between urbanization, law, and technology. She is currently an MFA candidate in Critical & Curatorial Studies at the University of California, Irvine and the coordinator for Graduate Media Design Practices at ArtCenter College of Design.

In the summer of 2009 I had just graduated college and job prospects were slim in Recession-era Florida. My best lead for employment had been a Craigslist ad to sell vacuum cleaners door-to-door, and after having attended the orientation in a remote office park I was mentally preparing myself for a new life as an Arthur Miller character. That was when a friend called with a lucrative offer. She worked at a law office, and they were hiring a part-time secretary to process the new wave of cases they had just gotten. This tiny firm represented homeowners’ associations in mortgage foreclosures and bankruptcies, and business was booming.

The job was simple because everything about suburban homes is standardized: from the floor plans to the foreclosure proceedings, everything is set up for mass production. It was also optimized for bullshit. Sometimes I would be instructed to print out emails from clients who’d attached PDFs of scans of printed, previously received emails. I would write a cover letter, print out their email and the attachments (which, remember, were scans of printed-out emails), and enclose the printed-out email with the printed-out PDFs of scans of emails, then scan and email what I had just printed and mailed so that the client would get an email and a paper letter of the same exact thing. Sometimes I would fax it too. Everyone knew this was ridiculous, but the longer it took to do anything, the more money the attorneys made.

My job reminded me of a scene in the 1997 movie The Fifth Element, wherein CEO Jean-Baptiste Emanuel Zorg (Gary Oldman) delivers a monologue to Father Cornelius (Ian Holm) that begins, “Life, which you so dutifully serve, comes from destruction, disorder, chaos!” He then pushes a glass off his desk, and as little robots descend on the shards and clean them up, he narrates the scene: “a lovely ballet ensues so full of form and color. Now think of all those people that created them. Technicians, engineers, hundreds of people who will be able to feed their children tonight.” Financiers and the burgeoning tech industry had destroyed countless things, and now I was an obedient Roomba cleaning up the shards— a beneficiary of others’ creative destruction.

This is not a particularly deep thought, but that has never stopped an idea whose time has been forced by capital. Depth is not a precondition of power when it comes to ideology. In fact, it is teenage suburban weed revelations like Zorg’s that dominate the minds of capitalists who, at least since Andrew Carnegie’s Gospel of Wealth, have done a good job of making everyone else agree that their bad ideas are immutable truths. Observers and practitioners of state power —from Antonio Gramsci to Karl Rove— recognize that political common sense is not forged through debate; it is imposed through brute force and media saturation. Simple, easy-to-digest ideas spread fast, which is why it is important to engage with deeply uncritical ideas and, whenever possible, come up with compelling alternatives.

The trick is to package an idea in such a way that it can survive virality, where it will get further simplified, misunderstood, taken out of context, and interpreted by both good- and bad-faith actors. The journey to popularity is made easier if an idea is robust, simple, and speaks to something that is already felt. Given that so much of media is used to “manufacture” the consenting opinions that legitimize the power of corporations and their client states, reactionary and conservative ideas have a much easier time gaining traction. Books, essays, and YouTube videos that tell their audiences that financial success is tied to individuals’ moral character, for example, confirm widely held beliefs, get shared, and thus find themselves at the top of search results. To introduce a new idea that challenges widely held notions about work and morality, one has to go about it by foregrounding relatability and then letting the moral consequences naturally follow. If the story I just told you feels right, then it follows that you agree with my moral explanation for that feeling.


David Graeber has done just that in his new book Bullshit Jobs: A Theory, which is an expansion on his viral 2013 Strike! essay. Both make a fairly simple proposition: people are increasingly working at jobs that they know are meaningless (i.e. bullshit) but are often well-paid and easy to do. A bullshit job only requires a few hours of actual work a week, is not physically strenuous, and may even provide opportunities to pursue hobbies if done surreptitiously. Why then, Graeber asks, do people consistently feel psychically gutted by these jobs? Making lots of money to do very little sounds like the ideal job and yet, judging by the popularity of the essay alone, that is not the case for millions of people around the world.

What is the idea that Bullshit Jobs puts forward? Any book with political aspirations should be judged, at least in part, by a thorough investigation into who benefits from the widespread adoption of its ideas. To begin answering this question, we have to consider the definition of the titular term: “a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case.”

Graeber fleshes the definition out into a taxonomy of five kinds of bullshit jobs. Flunkies are those whose professions exist solely because other, more powerful people desire to have underlings serve them. Goons aggressively carry out anti-social rules and laws. Duct-tapers hold together intentionally broken systems. Box tickers hold jobs that “exist only or primarily to allow an organization to be able to claim it is doing something that, in fact, it is not doing.” And taskmasters are those “whose role consists entirely of assigning [bullshit] work to others.” These types often merge; for example, my job at the law office was a flunky-goon hybrid.

Far from being a detriment to economic activity, the proliferation of bullshit is arguably the major force behind increased employment today. “At least half of all work being done in our society,” Graeber speculates, “could be eliminated without making any real difference at all.” All the administrators your college hired as class sizes ballooned, the managers with sentence-long titles who write reports at each other all day, and the office drones who process paperwork to comply with laws their company wrote and handed to Congress make up a good deal of the high-status jobs added to the economy in recent years.

“Even in relatively benign office environments,” Graeber argues, “the lack of a sense of purpose eats away at people.” Increased status or compensation can compound shame, guilt, and anxiety as the worker becomes consumed by the idea that they are complicit in society’s ills. Who this book is for, then, appears to be middle-class professionals who recognize the meaninglessness of their jobs.

When it comes to analyzing its race and gender dimensions, Bullshit Jobs is by no means directed solely at affluent white men. Not only is the bullshit economy simply too big to impact only one demographic, but the tactics of psychic violence it relies on —gaslighting and demanding unending emotional labor, to name two primary ones— are often directed squarely at women. The book also contains overlapping anecdotes from people of color who were hired to do nothing but work on company diversity issues, only to find that their job was designed to be an ineffectual box-ticking or duct-taping role with no actual power to fix the problem they were hired to solve. These Sisyphean tasks not only frustrate the worker, they also make them prime targets of white resentment.

What seems most important to Graeber, though, is that we as readers bear witness to this particularly insidious form of psychic violence and recognize a fundamental truth that this suffering reveals: namely, that humans are not self-interested individualists. Rather, we are compassionate creatures driven by the desire to help people and make a difference in the world.

There is something deeply disturbing and yet surprisingly palliative about reading the accounts of meaningless work that Graeber solicited through his Twitter account, anonymized, and republished throughout the book. There are stories of office managers, doormen, and even social workers whose daily responsibilities are no more meaningful than digging a hole in the morning and filling it in after lunch. I was lucky in 2009, in that I had an ideology that provided satisfying answers for why I hated my job. I had friends who were politically engaged, and we could talk about how good money goes to bad people. Many, though, can’t find a critique that goes beyond Zorg or maybe Mike Judge’s 1999 movie Office Space. Griping with co-workers can be rewarding too, but people are hungry for bigger, yet still straightforward, answers.


Like most things, meaningful explanations for complex problems like “why do I hate a job that by all accounts I should love?” are eminently Googleable. Jordan Peterson, whose YouTube success has been the basis for a best-selling self-help book masquerading as a work of philosophy, is increasingly found at the top of algorithmically sorted piles of data. Peterson, a University of Toronto psychology professor, has made a career out of lashing together several bunk theories about the relationships between IQ, gender, and race: ideas so predictably wrong and hateful that they don’t require much summary. Suffice it to say that much of Peterson’s work is geared toward people who are drifting —YouTube video titles include “Jordan Peterson teaches you how to interact with anyone” and “Jordan Peterson: What Kind of Job Fits You?”— and in search of satisfying answers to big problems.

Peterson’s book 12 Rules for Life: An Antidote to Chaos is a fine distillate of the retrograde, reactive blather that made him internet famous; strapped together by moralizing truisms organized in 12 “rules” that make up chapter titles, the contents of the book sync up so well with white men’s contemporary alienation that it should be no surprise that it is a best-seller. Even seemingly reasonable rules like “Do not bother children when they are skateboarding” are really anti-social screeds about resenting women and fantasizing about physical violence. His treatment of theory is dead wrong, and the anecdotes based on his professional practice belie a deep suspicion of women’s basic ability to tell the truth. There’s also a chapter about being the best lobster so that women will be biologically attracted to you. This book has sold millions of copies.

Reactionaries like Jordan Peterson are enticing because they have no problem giving a single answer to deep questions of meaning and one’s place in the world. In addition to bunk evolutionary biology, Peterson also talks a lot about the Bible and what it says about living a good and just life. There are chaos dragons and spectral forces that the reader must slay in order to thrive. Similar to Alex Jones, Peterson invites his audiences to subscribe to a system of meaning similar to a religion that, from the outside, merely looks like a set of objectively wrong facts. What he is actually doing is much more profound: he is giving satisfying explanations for an unpredictable world.

Liberals, on the other hand, are happy to data posture; they avoid taking a political stance by reciting data, and seem astonished to find out that work for the sake of working does not breed happiness. They grab their chins and nod seriously at faux intellectual ideas by behavioral economists like Dan Ariely. One of Ariely’s most popular studies, presented at a TEDx event, offered subjects a few dollars to build a series of small Lego figurines. All were told that the sets would eventually be disassembled and re-used but some people had their sets torn down in front of them as they were building another one. Unsurprisingly, the people who saw their work instantly undone agreed to build far fewer Lego sets.

Seeking the stamp of approval of a behavioral economist before agreeing to the inherent value of meaningful work belies a deep distrust of other people and a willful ignorance of existing knowledge on the subject. For at least a century, researchers have known that humans derive a singular pleasure from what Graeber, citing early 20th century German psychologist Karl Groos, calls “the pleasure at being the cause.” To exist at all is to make change in the world, and “this realization is, from the very beginning, marked with a species of delight that remains the fundamental background of all subsequent human experience.” Demanding endless research on a topic that should be a moral supposition is a hallmark of liberal media. By replacing actual political work with calls for endless experimentation, powerful people can perpetually delay any meaningful change.

Graeber, then, appears to be providing a new option that is more satisfying than liberal handwringing and far more humane than what the reactionaries are offering. The key to his success is his method, which eschews data posturing in favor of a subjective analysis. Graeber is very upfront about the subjective nature of his work, arguing that his own motivations include trusting individual workers’ own assessments of their jobs’ effects on the world, instead of relying on some seemingly independent evaluation: “my primary aim is not so much to lay out a theory of social utility or social values as to understand the psychological, social, and political effects of the fact that so many of us labor under the secret belief that our jobs lack social utility or social value.” This leaves little room for quibbling over whether or not a Vice President for Strategic Visioning is really doing important work. The point is to understand how the role of Vice President for Strategic Visioning is experienced, why that experience can be negative, and to use that subjective experience as the basis for a normative argument about how work should be organized.

The book, which came out last May, has been derided on Twitter as an unnecessary expansion of Graeber’s five-year-old viral essay. This is an odd critique for political writing: that a popular essay should not be put into other forms unless you have something new to say. Such a reaction seems to ignore how attention intersects with politics. A popular idea, turned into a popular book, stakes a claim to news cycles, column inches, likes, plays, and followers. Bullshit Jobs is useful both for the ideas it contains and as a subject of media coverage. Both characteristics, for better or worse, are important. Finding a happy balance —an idea that is both liberatory and capable of going viral without losing its moral clarity— is essential if the left wants its ideas to show up in the places where we look for truth: Google search results.

Much like the Trump presidency, Peterson’s work may have attracted a lot of attention for being singularly stomach-wrenching, but he is more of an avatar than a pariah; someone who has effectively consolidated hegemonic ideas into a digestible format. Peterson is an intellectual troll and, as Whitney Phillips’ definitive study of trolls concludes, that means he has a keen sense of how to inject ideas that “replicate behaviors and attitudes that in other contexts are actively celebrated.” By manipulating context and knowing when and where to break with decorum, he can create controversy by saying things that most powerful people already agree are true. It is this ability to rearticulate hegemony while appearing as though you are speaking truth to power that generates the attention that social media algorithms are keen to pick up on.

Someone seeking an explanation for why they hate their desk job will likely turn to algorithmically sorted media like Google search results and YouTube videos to find answers. The results, ranked and sorted by popularity, dutifully recite the dominant ideology: extroverted YouTube personalities talking directly into their cameras about the positive mentality that let them break the 100,000 views mark or a TED talk about how your brain chemistry changes when you do something that you love. What unites the motivational speaker and the neuroscientist is that your problems (and successes!) are your own. Society is a static obstacle course and you are racing against everyone else. Truly great people change the rules of the game, but they do it by being remarkable —winning so definitively that the game is changed forever, or cheating in a mischievous, enviable way— not by cooperating with others.

Bullshit Jobs can compete with the likes of Peterson precisely because Graeber built the theory on subjective experiences. It just feels true, while simultaneously giving the reader permission to feel that truth by introducing them to other people who have had the same experience. The book is not a barn burner, and it asks very little of its reader; these are its two most useful features as an entry point for better politics. If you already agree that your job is bullshit, then you are halfway towards agreeing that people, left to their own devices, will look to be helpful and cooperative. This basic belief, in turn, can go a long way towards making specific policy proposals —a universal basic income, unionization, and socializing essential services like medical care— easier to swallow.

We’re at the precipice of a grand re-arranging of political alliances in which neocons and neoliberals are banding together with an agenda of paltry centrist domestic policy and hawkish foreign intervention, while something dangerous but potentially liberatory is brewing everywhere else. The task now, which Bullshit Jobs is just the start of, is articulating a compelling narrative of peoples’ lives such that when they act politically they choose liberatory approaches —unionizing, socializing essential services, a universal basic income— instead of reactionary ones. What we need now are more, better works like Graeber’s: ones that sidestep the endless data posturing liberals engage in as they attempt to debunk the terrifying reality painted by reactionaries. Let us opt instead for compassionate understanding and inspiring calls to collective action.


David is on Twitter