This image provided by the U.S. Coast Guard shows fire boat response crews battling the blazing remnants of the offshore oil rig Deepwater Horizon on Wednesday, April 21, 2010. The Coast Guard planned to search by sea and air overnight for 11 workers missing since a thunderous explosion rocked the oil drilling platform, which continued to burn late Wednesday. (AP Photo/US Coast Guard)

It has been really thrilling to hear so much positive feedback about my essay on authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies track each other closely, and that we see this trend play out in engineers’ interpretations of dystopian science fiction. Instead of heeding very clear warnings about good intentions gone awry, companies like Axon (né TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:

In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.

I’ve gotten some pushback, both gentle and otherwise, about two specific points in my essay, which I’d like to discuss here. I’m going to paraphrase and synthesize several people’s arguments, but if anyone wants to jump into the comments with something specific they’re more than welcome to do so.

Pushback 1: “Engineering” is too broad a category to do that much analytical work with. Civil engineers do very different work and have very different employers than those in aerospace or mechanical engineering.

It is certainly fair to say that civil engineers, who build bridges, tunnels, and lots of other important infrastructure, are not under the same pressures to work in and otherwise support the military-industrial complex the way aerospace engineers are. There are, indeed, different professional cultures across these subfields. That being said, lots of universities have schools of engineering that contain aerospace, civil, and many other kinds of engineering. Those engineers take the same introductory courses and the same ethics or professional development courses. Engineering curricula, when it comes to the social impacts of engineering and the very fundamentals of the field, often have quite a bit of overlap.

ABET, the accreditation body for most American higher-education engineering programs, has a fairly centralized system in which EVERY engineering program or department must abide by several fairly specific criteria. The closest those criteria get to the political implications of engineers’ work, by the way, is requiring that students be evaluated for their “understanding of and a commitment to address professional and ethical responsibilities, including a respect for diversity.” Exactly what those ethical responsibilities are (not to mention what constitutes diversity) is left up to individual programs.

If we look at specific program criteria, like aerospace for example, there are absolutely no references to ethics whatsoever. That bears repeating: the association that reviews whether you have a functioning program for teaching humans how to build drones, missiles, fighter jets, and all sorts of machines of war has no additional ethics guidelines. If ABET can make one brief requirement for ethics across all engineering disciplines and doesn’t have to distinguish between those disciplines when it comes to ethics guidelines, then criticism of that system can operate at that resolution as well. To say that my essay relies on too broad a category would also call into question nearly every university’s engineering curriculum.

Finally, there’s already a lot of acclaimed work in engineering pedagogy, STS, and other fields that makes definitive, empirical claims across the engineering professions. Professor of engineering pedagogy Alice Pawley has done extensive surveys of engineers and found that most work in corporate or military organizations that are fairly large and organized in hierarchical managerial structures. Louis L. Bucciarelli’s Designing Engineers is regular reading for anyone working in this area, and he too refers to “engineers” very broadly. To discount my work would mean throwing out a fairly large portion of well-regarded research on the topic, much of which I cite in the essay.

Pushback 2: Contrary to what you argue in your piece, engineers do have ethics oversight and there are licensure bodies that require continuous training and have oversight boards.

While that first pushback has the opportunity for generative tensions and interesting discussion, I feel like this argument is a bad-faith engagement with the topic. In my essay I write:

Unlike medical professionals who have a Hippocratic oath and a licensure process, or lawyers who have bar associations watching over them, engineers have little ethics oversight outside of the institutions that write their paychecks. That is why engineers excel at outsourcing blame: to clients, to managers, or to their fuzzy ideas about the problems of human nature. They are taught early on that the most moral thing they can do is build what they are told to build to the best of their ability, so that the will of the user is accurately and faithfully carried out. It is only in malfunction that engineers may be said to have exerted their own will.

Canadian engineers, many have pointed out, receive an iron ring in a ceremony designed by Rudyard Kipling called The Ritual of the Calling of an Engineer. While that ceremony sounds very elaborate and might make for great in-group solidarity (which can be helpful in maintaining and enforcing ethical norms), it is not at all what I’m talking about. I didn’t say engineers have no sense of ethics; I argued that they have something worse: a definition of ethics wherein the individual engineer only exercises their agency when something goes wrong. If engineers do exactly as they are told and, for example, build a perfectly working four-legged weapons platform for Boston Dynamics, they will have achieved a widely held definition of ethical engineering practice. That’s not good enough.

Others have argued that engineers do have oversight organizations that confer licenses and can take them away. Indeed, in the United States the National Society of Professional Engineers does confer a Professional Engineer (PE) license that is overseen by state-level licensure boards. Again, I said “little ethics oversight,” not “no ethics oversight,” but that is really beside the point, because the NSPE does not revoke your PE license for building, say, an oil pipeline that leaks at a rate considered normal for its chosen design. The PE license is an example of my critique, not an argument against it, because it focuses only on doing a job well, not on whether the job itself comports with any sort of social justice standard or larger ethics framework.

Put another way, the NSPE does nothing to work against what sociologist of engineering Diane Vaughan calls the “normalization of deviance.” Bad, even deadly, decisions can be baked into systems-level decision-making such that individual actors might be dutifully following directions and making sure everything stays within parameters, but there are few mechanisms for questioning the parameters in the first place. Vaughan coined “normalization of deviance” while studying the Challenger disaster, but it works just as well to describe the BP Deepwater Horizon spill. Some might say, “oh, well, that’s management,” to which I would say the following: engineers love to boast that they have world-changing powers until something goes wrong. Then a paper-pusher becomes an insurmountable obstacle. I just don’t buy it.

A better argument against my critique would go after bar associations and medical licensure. Bar associations do not suspend lawyers for defending terrible companies, and Dick Cheney’s doctors haven’t been censured for keeping a war criminal alive. Still, though, lawyers also have the National Lawyers Guild, and at least the Hippocratic Oath is partisan toward upholding and preserving life. There is no engineering organization with significant power that would censure the NSPE-licensed engineer who will make sure Trump’s border wall is structurally sound.

An artist’s rendering of a possible future Amazon HQ2 in Chicago. Image from the Chicago Tribune.

The Intercept’s Zaid Jilani asked a really good question earlier today: Why Don’t the 20 Cities on Amazon’s HQ2 Shortlist Collectively Bargain Instead of Collectively Beg? Amazon is looking for a place to put its second headquarters, and cities have fallen over each other to provide some startlingly desperate concessions to lure the tech giant. Some of the concessions, like Chicago’s offer to essentially engage in wage theft by taking all the income tax collected from employees and handing it back to Amazon, make it unclear what these cities actually gain by hosting the company. The reason that city mayors will never collectively bargain on behalf of their citizens is twofold: 1) America lacks an inter-city governance mechanism that prevents cities from being blackballed by corporate capital, and 2) most big-city mayors are corrupt as hell and don’t care about you.

In 1987 urban sociologists John Logan and Harvey Molotch put forward the “Growth Machine” theory to explain why cities do not collectively bargain and instead compete with one another in a race to the bottom to see which city can concede the most taxes for the least gain. The theory is rather straightforward: a city may have one or two inherent competitive advantages that no other city has, but beyond that it can only offer tax breaks. Maybe you’ve got a deep-water port that big container ships can use, or you’re situated at the only pass in a mountain range. Other than that, location is completely fungible. All that’s left is tax policy and land grants.

Technology clearly makes cities’ competitive advantages even slimmer. Cities that flourished because they were well situated along waterways slowly declined as trains and the National Highway System surpassed canals as the preferred modes of freight transit. The list of things a city can exclusively offer a prospective employer seems to be getting smaller.

Meanwhile, the competition between cities has only gotten fiercer. “The jockeying for canals, railroads, and arsenals of the previous century,” wrote Logan and Molotch, “has given way in this one to more complex and subtle efforts to manipulate space and redistribute rents.” Instead of a handful of elites making handshake agreements over where to put a government arsenal or the Pennsylvania Railroad’s major terminus, the duty to attract major investment in the 20th century was turned over to teams of PR experts and economic development coordinators. Entire departments in cities and counties around the country were tasked with inventing incentive packages for major employers.

The Growth Machine puts business interests first, but some stuff does actually “trickle down” to some people. Public spending may be slightly increased to the extent that capital investment isn’t actively deterred. For example, a business won’t relocate to a city where its top management’s kids can’t go to a nice school, so a city might invest in its schools to lure new business. Businesses also demand things that the rest of the public can use, like airports or high-speed internet. A city might even adopt the Richard Florida playbook and invest in public arts and entertainment. There was a sweet spot, between the late 70s and the early 90s, where this way of doing business was defensible. Schools were less segregated and economic inequality was bad but not horrendous.

Now, in the twenty-first century, all that is old is new again. Inequality is reaching 19th-century levels, and cities and school districts in many parts of the country are more segregated today than they were in the 60s. What little benefit the public received when their local governments went after major companies has now been privatized. Again, Chicago’s bid is illustrative here: Mayor Rahm Emanuel’s brutal fight to privatize the city’s schools has created a two-tiered education system with elite charter schools and cash-starved public ones. Whereas Amazon’s presence would once have signaled the possibility that Chicago public schools would see an infusion of cash, charter schools promise a closed circuit of money and services.

In a world where 82% of the wealth created goes to the wealthiest 1% of people, city leaders are bargaining with Amazon using other people’s money. Some cities might have more enlightened mayors but, for the most part, there doesn’t seem to be a desire among the ruling class to extract wealth from private capital and redistribute it to average citizens. Rather, this is about securing closed circuits of wealth among a privileged few. To think that these mayors are first and foremost going to bargain for the best deal for their constituents comes off as, sadly, naive.

But let’s say, for the sake of argument, that a large portion of mayors did want to flip the script and collectively bargain on behalf of their citizens. First they would be confronted with the simple fact that they are organizing a detente on one level so that they can compete on another. Richard Florida, writing in CNN and also quoted in The Intercept, calls on city mayors “to forge a pact to not give Amazon a penny in tax incentives or other handouts, thereby forcing the company to make its decision based on merit.”

What merit would that be, though? Would the city with the fewest homeless people win? Bezos would be more apt to pick based solely on which city has the best weather. What would have been offered in explicit subsidies would really come down to the same low-tax business climate that the original Urban Growth Machine is predicated on, but instead of a special gift to Amazon, cities would pass tax laws that gave away the farm to any company of sufficient size. Instead of Amazon picking from a list of tailor-made proposals, it would be looking for the city or county that had just passed another staggeringly low tax policy. Chicago’s offer of routinized wage theft wouldn’t be affected either, since it’s a statewide program that has been in place since 2001. Mississippi, Indiana, and Missouri have similar programs.

The point here is that corporations and the people who run them are ideological. Companies do not set up shop based on what is good for people; they choose their location based on what is good for capital. How else do you explain all the businesses that incorporate in Delaware? The ideological fervor of CEOs also points to another problem: even if cities banded together in some sort of non-aggression pact so that none of them promised a single tax break, what would happen the next time a Fortune 500 company started looking for a new headquarters? Would those cities get a shot? No. They would be blacklisted.

In Richard Florida’s latest book he laments that in an alternate universe President Hillary Clinton would have adopted his “detailed proposal for a new Council of Cities, comparable to the National Security Council.” This Council would foster “a new partnership between national government and the cities in which federal investments would flow.” This is a politically shrewd idea for reasons I have outlined before, but we are unlikely to see it happen any time soon. Even if we were to establish it tomorrow, though, the larger problem remains: we have massive monopolistic companies that can make unilateral, undemocratic decisions that impact the lives of millions of people. More than anything, it is our state of inequality and the attendant disinvestment in public resources that is, ultimately, the problem.

David is on Twitter.

Jack Nicholson’s President James Dale

I have this childhood memory of one of those rigged games at a county fair where the prize was a stuffed alien. I wanted it really bad. It looked just like the Halloween costume I’d made with my mom a few years back. We covered a balloon with papier-mâché and when it dried we popped the balloon, cut out almond-shaped eyes, and spray-painted the whole thing silver. This stuffed alien looked just like my costume, but it was electric green and had a beautiful black cape with silver embroidery. I won it (I don’t remember the game) and kept it for a long time. I might still have it somewhere.

Being the 90s kid I am, I was excited to see a New York Times story about a 2004 incident off the coast of San Diego where two Navy airmen followed a U.F.O. as it “appeared suddenly at 80,000 feet, and then hurtled toward the sea, eventually stopping at 20,000 feet and hovering. Then they either dropped out of radar range or shot straight back up.” I was hoping this story might circulate for a while, especially given that a $22 million Defense Department program meant to study U.F.O.s was recently discovered in the Pentagon’s black money budget. There’s even video of the thing! Sadly, it barely scratched the surface of most newsfeed algorithms.

The paltry reaction to such amazing footage might annoy me, but it isn’t surprising. The 21st century, in spite of 20th-century sci-fi’s predictions, has been radically ambivalent about the stars. There’s no Star Trek on primetime TV and The X-Files reboot received mixed reviews. In the 90s there were not one but two Star Trek series running throughout the whole decade, The X-Files was one of the most popular shows on television, and alien abductions were fodder for weekly episodes of Unsolved Mysteries. U.F.O. sightings were also a dime a dozen, providing source material for books, documentaries, and even feature films.

Then, something changed. Part of the change is cultural, which, I’ve argued before, is exemplified by South Park’s Eric Cartman. Even as an 80-foot satellite dish emerges from his butt, he refuses to believe that he’s been abducted by aliens:

This syncs up nicely with Vox-style explainerism to create a furiously obnoxious ethos where fun half-truths die and only the vindictive lies remain. One is either the liberal explainer Cartman, who is technically correct (e.g. “There is only a 0.0024 percent chance that an 80-foot satellite dish is coming out of my ass.”), or the alt-right Cartman, who refuses to acknowledge the satellite dish in the first place. Either way you’re Cartman.

I still think it’s accurate to say that we’re governed by a cynical desire to prove others wrong, either through bad-faith deployments of data or categorical denials of incontrovertible evidence. What’s remarkable is how well represented both perspectives are in our politics. It’s sort of amazing that one society can contain both of them, along with a Centers for Disease Control that can’t use the phrase “evidence-based” in its reports.

First contact stories have always really been about humanity. We are on our best behavior, or rise to the occasion, when aliens arrive. In the 90s we proved our worth through feats of technical achievement (Star Trek: First Contact, Contact) or we defeated the invaders (Independence Day, Mars Attacks!). Either case required massive cooperation and the suspension of usual conflicts. But what happens when a fragmented society such as our own encounters the extraterrestrial?

More recent takes on first contact, namely Europa Report in 2013 and District 9 in 2009, are very different. In Europa Report first contact is deadly and part of a larger corporate conspiracy. In District 9 humans are the antagonists, forcing aliens into Johannesburg’s slums. Mars Attacks! may actually belong on this list too. Jack Nicholson’s President James Dale gives what reads today as a decidedly Trumpian speech (read the YouTube comments if you don’t believe me): “What is wrong with you people? We could work together! Why be enemies? Because we’re different? Is that why? Think of the things we could do. Think how strong we would be. Earth and Mars, together.” President Dale is then stabbed through the heart by a Martian’s robot hand. Defeating the aliens in Mars Attacks! is achieved through an accidental discovery instead of superhuman achievement.

While District 9 is grounded in (white) humanity’s track record of reacting to foreign visitors, Mars Attacks! pokes fun at our earnest belief that our leaders are the most honorable and talented people society has to offer, their Sorkin-esque speeches ensuring that “we will not go quietly into the night.”

We don’t believe that anymore. Most don’t see the president as competent, let alone inspiring. If we can no longer maintain the fiction of imagining our leadership as competent, then what use are aliens to us? They’re dinner guests showing up before you’ve finished tidying up. They’re rubberneckers at a crash site. If aliens showed up today we would feel kind of embarrassed, because we don’t feel like we’re at our best right now. Sure, in the 90s, when we published books that heralded the end of history, we were happy to show off humanity, but today we are back to feeling that society is a work in progress.

We aren’t paying attention to the New York Times’ reporting on U.F.O.s because we don’t want to pay attention to humanity. In the past we used U.F.O.s as an excuse to imagine what global cooperation would look like, and we searched the skies to see if we would ever have the chance to try it out. Such cooperation, and even our own best selves, seem very far away right now. We’re not accepting visitors at the moment, but hopefully, soon, we will be.



We should not be at all surprised to find ourselves online, but we are disturbed to find ourselves where we did not post, especially elements of ourselves we did not share intentionally. These departures from our expectations reveal something critical to the appeal of social media: it seems to provide a kind of identity control previously available only to autobiographers. We feel betrayed, as the writer would, if something is published which we had wanted struck from the record. The genius of social media is meeting this need for editorial control, but the danger is that these services do not profit from the user’s sense of coherent identity, which they appear to produce. The publisher is not interested primarily in the health of the memoirist, but in obtaining a story that will sell.

The intersection of autobiography and social media, especially emphasized by the structure of the Facebook Timeline, should raise questions about how identity is disclosed both before and after the advent of Facebook. The data self Facebook creates, which Nathan Jurgenson wrote about five years ago, is a dramatic departure from the way many of us likely conceive of ourselves. He suggests that the modern subject is constituted largely by data even as the subject creates that data; the self we reference and reveal to others is built on things that can be found out without our consent or effort. A more recent article in New York Times Magazine highlights the power of the immense data available on each of us with a profile.

Narrative identity theory has been developed by psychologists and theorists such as Paul Ricoeur, Jerome Bruner, and Paul John Eakin. It suggests that our sense of self is fundamentally the sense of a character in a narrative. In other words, the character named ‘I’ in the stories we tell is a character whom we understand rather well and with whom we identify, but it is not ontologically different from other characters in fictional or non-fictional narratives. The story our I-character appears in is simply a life. It contains so many events that they cannot all possibly be included, and whether we are telling others or remembering privately, we all become autobiographers as we retroactively select and grant meaning to experiences and choices.

Narrative identity theory can help to render the Person-Profile dialectic more comprehensible. Just because an embodied subject is creating the content which is shared on social media does not mean the two are in a chicken and egg relationship. Under narrative identity theory, even though the author writes the autobiography, the self is already a story, and so perhaps the person is already a profile. The phrase ‘life story’ is redundant; we understand ourselves as well as we understand the stories that portray our character. How might social media, which grant users such extensive control over these stories, affect this process?

The possession of a social media account changes how we act: we seek out events which are documentable. The restaurant or concert that will fit nicely into the narrative of a profile is preferable to something which would be out of place in the story. Narrative identity theory suggests we have always sought to control our story, but the advent of social media brings this action into a new phase.

The Facebook Timeline clearly reflects the common ground between the theories of narrative identity and the data self. Rob Horning wrote about the Timeline when it was first introduced, citing an article explaining that the interface aims to evoke “the feeling of telling someone your life story, and the feeling of memory–of remembering your own life” which, under narrative identity theory are very similar actions; the creation of a sense of self comes through stories told not only to others but also internally in memory.

Horning asserts that the formulation of life as a stream of narrative is an imposition by Facebook on its users, not a natural or neutral process. When we make a coherent story out of what we post, he claims we are playing into Facebook’s hands by providing them with more useful data. Horning suggests we would not put effort into presenting a coherent narrative if it were not for the Timeline, but this is doubtful. Narrative identity theory suggests we cannot do otherwise; without a story to tell, we would not know ourselves. It may not be neutral, but it is not a total imposition on the part of the UI, either. Narrative identity theory has been around for decades, and perhaps the Timeline format has been successful because it agrees with the way we already understand ourselves.

Even basic questions about a person tend to create a kind of narrative: employment, relationships, where he/she has lived, etc. This is social accountability – the way it is normal for us to disclose our identities to others – and it is one very concrete intersection of narrative identity and the Timeline. In face-to-face expressions of identity, social accountability can be seen clearly in the questions we ask when meeting someone. Just as users cannot utilize Facebook without a profile, the story latent in a stranger’s introduction is his or her price of entry to all kinds of relationships. You might be comfortable with a coworker about whom you know very little, but a potential friend who withholds her life story or a suitor who refuses to elucidate his past? These are requests from profiles with no picture. Consider also the young professional without LinkedIn, the photographer without Instagram, or the student without a Facebook page: for better or worse, their failure to account for themselves in the expected way will inhibit their potential. It seems that social media has become the new social accountability; if you do not have a profile, you are failing to present yourself in the way society expects. This is to say nothing of the services and websites which require linked accounts in a preexisting, larger social network.

Horning’s assertion that identity-forming frameworks can be changed within a generation is key to understanding how we express and —partially as a consequence of that expression— understand ourselves. When we compare pre-Timeline Facebook and MySpace to today’s infinitely scrolling Timeline, one thing becomes clear: social media no longer demands static identities represented by a filled-out profile page. Instead there is a single box that constantly asks you to fill it with whatever is happening to you now. Story has overtaken stability, not only by calling for more frequent visits and updates, but by providing a stage for us to direct our character. Is it our fate to account for ourselves with these bottomless text fields, guided only by minimalistic web page designs, trending hashtags, and caption norms? If so, why have so many of us chosen it?

One reason we increasingly look to social media to host our narrative identities is that many of us have lost strong affiliations with church, state, family, company, and gender roles. These social institutions act as points of reference to call on when identifying oneself. But when we choose to qualify our associations rather than simply say “I am a Christian and an accountant,” the responsibility falls increasingly on the hyper-individualized subject. Identifying with one’s company, with Evangelicalism, Catholicism, or patriotism provides a firm foundation but comes loaded with connotations and subtext over which the subject has no control. For the sake of freedom from the impositions of those structures, we have taken on the pressure of justifying and making meaning in our actions, our stories, and ultimately our identities.

A common criticism of theory is that it does not reflect lived experience, and it is indeed a tall order to ask individuals with online profiles to believe they are constituted by that data. If data in the form of the Timeline is becoming a foundation for identity, its narrative structure at least has a precedent in narrative identity theory. The narrative we write into our online data is familiar, and it helps to render the data self more comprehensible. If we are becoming data selves, it is perhaps through this very need to account for ourselves in the form of a story.

The important change is that our urge to narrate is no longer merely personal; it is profitable. Whether or not our purposes for creating a narrative are novel, there are new consequences to the act as it is mediated by social media. Facebook has done what so many successful companies have done and found a way to monetize something people already do, but what does Facebook’s immense success say about the behavior it has tapped for this profit? To pick an easy target for comparison, consider the double purpose served by the content of weight-loss and beauty magazines: images of attractive people not only suggest the success of the products for sale, they also undermine the reader’s confidence that she could do without those products. Could social media do the same? Continuing and accelerating the internalization of identifiers, social media has given us the control we want and the social accountability we need. Like the magazines, however, for growth to continue, we must always want more. How and when might Facebook increase the demand for its product: identity?

Daniel Affsprung is a recent graduate of SUNY New Paltz, where he studied English Literature with emphasis on critical theory and creative writing, and wrote an honors thesis on narrative identity theory in autobiography.

The Daily Beast ran a story last week with this lede: “Roseanne Barr and Michael McFaul argued with her on Twitter. BuzzFeed and The New York Times cited her tweets. But Jenna Abrams was the fictional creation of a Russian troll farm.” Abrams, the story goes, was a concoction of The Internet Research Agency, the Russian government’s troll farm that was first profiled in New York Times Magazine by Adrian Chen in June 2015. During its three-year life span the Abrams account was able to amass close to 70,000 followers on Twitter and was quoted in nearly every major news outlet in America and Europe including The New York Times, The BBC, and France 24.

The Abrams Twitter account was a well of viral content that over-worked listicle writers couldn’t help but return to. Once the account had amassed a following, the content shifted away from innocuous virality to offensive trolling: saying the Civil War wasn’t about slavery, mocking Black Lives Matter activists, and jumping on hashtags that were critical of Clinton. “When Abrams joined in with an anti-Clinton hashtag,” The Daily Beast reports, “The Washington Post included her tweet in its own coverage. One outlet used an image of a terrorist attack sourced from Abrams’ Twitter feed.”

The Abrams account, they write, “illustrates how Russian talking points can seep into American mainstream media without even a single dollar spent on advertising.” This framing portrays journalists as passive filters that automatically parrot whatever popular Twitter users say. Journalists are supposed to be critical fact-checkers and the last defense against misinformation entering the public sphere. The rate at which false information keeps “seeping” in seems to be growing, and so it is worth asking: are there structural reasons that fake news keeps making its way into reputable news sources?

Jay Rosen is the obvious person to answer this question, and to some degree he did answer it last March when he announced a partnership with the Dutch news site De Correspondent: “if you’re doing public service journalism” he wrote, “and trying to optimize for trust, it helps immensely to be free from the business of buying and selling people’s attention.” Not having commercial sponsors also means, “not straining to find a unique angle into a story that the entire press pack is chewing on, it’s easier to avoid clickbait headlines, which undo trust. Not chasing today’s splashy story can hurt your traffic, but when you’re not selling traffic (because you don’t have advertisers) the pain is minimized.”

It is frustrating that prominent public radio personalities like Ira Glass are running in the opposite direction. Glass, talking to an AdAge reporter in 2015, confidently stated, “Public radio is ready for capitalism.” This is dangerous because much of Russia’s disinformation campaign and Trump’s home-grown trolling relied on the capitalist attention economy that governs every major media outlet. Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.

It’s no secret that most staff writers are underpaid and over-worked, and they are the lucky ones. There are thousands of wildly talented freelance writers that spend half their time writing and reporting and the other half chasing down their overdue paychecks. Reporters with no research budget and a huge publishing quota are understandably going to do a bit of Googling, pull a quote from Twitter, and call it a day. Over-worked and under-paid journalists are the weakened immune system that lets viral fake news take over the body politic.

Herman and Chomsky, in their famous book Manufacturing Consent, pointed to the high cost and time-consuming nature of good journalism as one of the five “filters” that discourage critical reporting. Instead of going to the source of the story, journalists go to police departments and corporate PR offices to grab quotes. This is not because they are lazy, but because they lack the time or money to report the story from scratch. PR offices and police departments’ spokespeople offer one-stop-shops for an official account of what happened in any given story.

The Yes Men—two artists who, for example, will pose as the spokesperson for Dow Chemical and tell a BBC reporter that they take full responsibility for the Bhopal Disaster— know that news agencies are more likely to report on something if they are handed a media package or are offered access to a talking head from a well-known organization. Their hoaxes have real consequences: sending corporate stocks temporarily tumbling and attracting mainstream attention to ignored environmental disasters.

Twitter affords a similar shortcut to newsworthiness. Putting someone with a high follower count (to say nothing of a blue checkmark) in your story increases the possibility of reciprocal attention: you click my content and I’ll click yours. When someone with 70,000 followers says something controversial to their substantial audience, that’s worth a shout out in your news story, especially when that story is little more than a survey of what people are talking about. That Twitter user, after seeing a spike in followers and mentions related to the article, will share it themselves, sending off a quick, “was included in this thing, haha.” This is the mundane, reciprocal manufacturing of attention that feeds micro celebrity and now, apparently, geopolitics. Anything with a decent follower count is low-hanging fruit for finishing a reporter’s daily content quota.

What is absolutely maddening is that the demands and responses to the fake news phenomenon have centered on social media and the algorithms that govern their behavior. Some of the solutions out there —cough Verrit cough— are so absurd that they can only be explained as either the product of cynical opportunists looking to make fact-flavored content, or the result of too many well-connected people not understanding the nature of the problem they are facing. Both seem equally likely. The intent barely matters though, because the result is the same: a more elaborate apparatus to churn out attention-grabbing media for its own sake.

Social media has exacerbated and monetized fake news, but the source of the problem is advertising-subsidized journalism. Any proposed solution that does not confront the working conditions of reporters is a band-aid on a bullet wound. The problem is systemic, which means any one actor —whether it is Mark Zuckerberg or Facebook itself— is neither the culprit nor the possible savior. So long as our attention is up for sale, people with all sorts of motives will pay top dollar.

Image courtesy Free Press

In our very first post, founding editors Nathan Jurgenson and PJ Patella-Rey wrote:

Facebook has become the homepage of today’s cyborg. For its many users, the Facebook profile becomes intimately entangled with existence itself. We document our thoughts and opinions in status updates and our bodies in photographs. Our likes, dislikes, friends, and activities come to form a granular picture—an image never wholly complete or accurate—but always an artifact that wraps the message of who we are up with the technological medium of the digital profile.

Too few people were talking about the internet in this way in 2010. Many were still paying close attention to Second Life, more because it comported with prevailing theories of how identity worked online than because it was representative of most people’s identity online. It was a different time: no one paid for music on the internet, men were afraid to walk out of the house with their new iPads, there was talk of Twitter Revolutions, Occupy gave us tons of opportunities to think about embodiment, planking was a thing, tattoos were talking to Nintendo 3DS’s, and the conversations around digital privacy that we have today were just taking their present form. The persistent media-rich profiles we made just a few years ago had lost their novelty, and now we had to reckon with the context collapses, too-clean quantifications, algorithmic segregations, and liquid identities that they afforded.

Much has changed in the handful of years since Nathan and PJ started the blog. We say “cyborg” less and there are tons of new, wonderful people writing thoughtful essays and commentary about everything that is exciting, provocative, and downright frightening about our augmented society.

As always it is a pleasure to work alongside my co-editor Jenny and we couldn’t ask for a better crew of regular contributors: Crystal, Maya, Stephen, Gabi, Marley, Britney, and Sarah. And, of course, this site would be a 404 if it weren’t for Nathan and PJ.  To all of you and our guest contributors, Thank You!

It is hubris to predict the future, but anniversaries are as good a time to look forward as they are to look back, so here are a few topics and trends that seem worthy of research, debate, and clear-eyed thinking in the next year:

Geographic Thinking Will Take Prominence Alongside Historic, Anthropological, and Sociological Analysis

I study cities so maybe I am biased here, but as more and more of our online interactions happen through our devices, instead of less-portable computers, geographic context will become a key component of social media’s affordances and thus our analyses of the social action that takes place on those services. Pair Snapchat’s recent map features with the steady increase of ride-sharing services and the continual fascination with the possibilities that drones represent, and it makes sense that geographers will be more helpful in understanding our digital age than ever before. We’re overdue for it anyway. As the recently-departed Edward Soja once said in his Postmodern Geographies: “For the past century, time and history have occupied a privileged position in the practical and theoretical consciousness of Western Marxism and critical social science. … Today, however, it may be space more than time that hides consequences from us, the ‘making of geography’ more than the ‘making of history’ that provides the most revealing tactical and theoretical world.” Dromology (Paul Virilio’s term for the study of speed) also has a role to play here. As we seek out and interact with our friends across digital maps and subscribe to on-demand product delivery, the accounting for and overcoming of large amounts of terrain and topology become an issue for individuals, not just nations’ armies.

The Return of InfoGlut

In 2013 Mark Andrejevic published Infoglut: How Too Much Information Is Changing the Way We Think and Know, and that titular neologism was everywhere. Something similar is sorely needed again as “fake news” and its phenomenological antecedents pop up like mushrooms in the dark, damp swamp that is slowly engulfing our media landscape. The issue of too many people acting on and responding to information with questionable relationships to reality is serious, but framed badly. Yes, there is too much misleading information out there, but what is worse is that there is simply too much information being routed through algorithms that will mess up as surely as their human progenitors do. Perhaps we don’t need better information, just less.

Amazon is the New Facebook When It Comes to Privacy Norms

The recent headlines about Amazon Key, the service that lets couriers open your front door, are definitely having an outsized influence on my thoughts, but I still think it’s accurate to say that Amazon —in its attempts to find and conquer new markets— will start playing with our privacy norms. This year alone it has released a slew of Echo-branded devices that judge your outfits and let people automatically turn on video chats, to say nothing of its Alexa devices that are constantly listening. Amazon has every reason to feel like it can succeed where Facebook failed: while Facebook was pushing users to reveal more just as they were starting to share less, Amazon has actual products and services that it is offering consumers.

Acceptance and Mobilization Around Social Media Companies’ Authority

In 2014 Yo, Ello, and Emojli tried to shake us out of the social media duopoly of Twitter and Facebook, but fell short of establishing a beachhead. Let this next year be the time that we finish our grieving process and accept these imperfect companies as the major power-players for the foreseeable future. With this acceptance should come a determination to build organizations that we feel comfortable living with. Instead of falling for the Silicon Valley myth that everything is a meritocracy and the next billion-dollar social media company is just one round of VC funding away, we must start doing the arduous work of reining these companies in and learning to make demands of them. Not just regulation or transparency, but profit sharing and true, meaningful shared governance. If this doesn’t happen, we may stand to lose the cyborg selves we were just starting to understand.

Inverse has a short thing about the precipitous decline of reported close encounters with extra-terrestrials following the widespread adoption of smartphones. Author Ryan Britt asks, “How come there have been fewer reports of flying saucers and alien abductions in the age of the camera phone?” The answer is, essentially, UFOs and abduction stories don’t work at the high resolutions of our devices. Roswell and abductions are the products of eye witness accounts and fuzzy VHS video, not 4k videos captured on iPhones. The mundane enchantment of suburbia, as I’ve called it before, gets deleted as noise in an attempt to capture life in the photo-realistic.

This is certainly a compelling argument. After all, the timing works out: Britt notes that the 80s and 90s “were the peak of UFO interest in the United States. Proof? The vast majority of famous books published about UFOs and government cover-ups — most notably The Roswell Incident by Charles Berlitz and William L. Moore — were published in these two decades.” Add to that the popularity of The X-Files and Unsolved Mysteries and you have a pretty clear timeline for the birth and death of mundane enchantment. As cameras proliferate, the quest to capture the elusive and the strange falls off. It would be a paradox if it weren’t so pat.

The loss of modern American mysticism could easily be chalked up to our ability to capture everything, but when has irrefutable proof ever really stopped people from believing things? A world of poltergeists, little grey men, and Big Foot actually seems preferable and easier to digest than one where Donald Trump is president. Put simply: In a world of fake news, why not go on believing in alien abductions? Why, when everything is a conspiracy theory, have we lost the few entertaining half-truths?

The answer is less of a disconnectionist argument—put down your phone and revel in the unknown— and more of a push against the unrelenting positivism in media. It doesn’t seem like a coincidence that South Park, a TV series that taught a generation that caring earnestly about things is dumb, chose alien abductions as its first episode. In “Cartman Gets an Anal Probe” no one can convince Cartman that his abduction was real, and not a dream. Even as an 80-foot satellite dish emerges from his ass, Cartman only replies: “screw you guys, whatever.” It is, admittedly, a creative inversion of the common trope: the abductee is the one that must be convinced while everyone else believes the improbable.

In many ways South Park is really what replaced shows like The X-Files. The former did not literally replace the latter in a time slot (they weren’t on the same channel nor did they air on the same days), but what these shows rewarded was vastly different. South Park wanted you to be Cartman: the one that stubbornly refused what everyone else was saying, just because everyone else was saying it. This syncs up nicely with Vox-style explainerism to create a furiously obnoxious ethos where fun half-truths die and only the vindictive lies remain. One is either the liberal explainer Cartman who is technically correct (e.g. “There is only a 0.0024 percent chance that an 80-foot satellite dish is coming out of my ass.”) or the alt-right Cartman who refuses to acknowledge the satellite dish in the first place. Either way you’re Cartman.

Smartphones alone didn’t kill alien abductions; there had to be an attendant cynical desire to prove others wrong. Britt predicts the pendulum might soon swing in the opposite direction though, pointing to William Gibson and other writers who contend “that flying saucer theories are meme-like, insofar as they will experience a media bandwagon period, as well as a period of not being so interesting to the mainstream.” I hope the aliens do come back, and that they bring with them a playful desire to contemplate the universe without explaining it.

David is on Twitter: @da_banks


“It’s not about the money, it is about the principle.” I’ve heard this phrase so many times from friends, colleagues, and internet influencers who refuse to pay an extra charge for a service or product not deemed worthwhile. In an episode titled ‘No Change’, a famous influencer complained about what he felt was a growing phenomenon—that of waiters not giving back change when he pays the bill. He expressed annoyance at ‘being duped’ by a waiter and went on to share that it should be his decision to leave a tip. In the wake of the ubiquity of imposed minimum charges at cafés in Egypt, people started resorting to storytelling on social media platforms to expose certain companies and improve the standard of service. Instead of waiting on hold to make a complaint, one woman provided a detailed account on Facebook of her conversation with a waiter at a café, in which she explained to him that a minimum charge is an illegal practice and that he can’t really force her to pay it. She shared what she felt was a success story on a group titled ‘Don’t shop here-a list of untrustworthy shops in Egypt’, a public Facebook group where middle-class Egyptians share stories about bad consumer experiences. The group now serves as an eclectic archive for a wide range of stories recounting bad experiences (from raw chicken at a famous restaurant to slow internet to undelivered customer service promises).

I was first introduced to the phenomenon of consumer stories on Facebook last year. As someone from a middle-class background, I’d seen friends and co-workers discussing, sharing and parodying those stories, which later on became an anticipated series on my timeline. I would look forward to reading about the little anecdotes that unveil people’s feelings of anger, distress and disgust towards the things they’d bought. Making use of the material economy of the Facebook post, members would tag businesses, upload pictures and edit their stories to include updates that bring forth an element of resolution to the problems they were posing. Unlike reviews which are usually brief in nature, the stories told on the group include vividly detailed accounts that enable a reliving of encounters and are laced with emotional arcs. It is in these rich descriptions that a Facebook post goes from mere complaining to painting a portrait of  class identity. These online performances, while having much to do with the storytellers and the craft of sharing, are enacted through and vis-à-vis other actors and characters. The stories are brought to readers by the disposable objects that are presented as evidence and by embodied others that are being produced through narratives—the waiter, the shop employee, customer service representative, the voice on the phone.

Shared experiences, shared anxieties

“Had a bad shopping experience in Egypt? Feel completely lost and with no support? Share your experience with us here!”

Soon after its conception, the Facebook group had formed what Britney Summit-Gil refers to as a ‘textual community.’ This online community developed its own rules and aesthetics for crafting consumer stories, which include writing as much detail as possible, naming the shop, and updating the group with any new information about interactions with customer service. We see through this group a collective drafting of what it means to be a ‘woke consumer’, a context where people reflexively dwell on their status as consumers and refuse being duped in everyday purchases. While people on the group seek to engage their readers in different ways, the most prominent styles used to set the scene include a chronological timeline of events, descriptive narratives of sensory experiences, and a dialogue between the storyteller and a customer service representative. Some members bolster their narratives by taking screenshots of textual interactions as well as by presenting documents such as receipts and contracts.

Scrolling through the stories, we could see how this textual community exhibits what the sociologist Pierre Bourdieu called “taste formations”, where taste is subject to collective constructions rather than being inherent. In these group members’ hands “taste becomes a social weapon” in demarcating between the good and the bad, the legitimate and illegitimate when it comes to the wide range of stuff consumed. These demarcations shed light on shared anxieties about transgressive products and services and on the circulation of emotions such as disgust on timelines.

In her book The Cultural Politics of Emotion, feminist scholar Sara Ahmed asks how we can tell the story of feelings “in a way that works with the complicated relations between bodies, objects and others?” Bringing Ahmed in conversation with Bourdieu leads us to consider how distinctions of taste and constructions of classed identities and communities are enacted through the work of feelings. What is shared in this Facebook group and ones like it are stories of feelings; feelings that “do things”, as Ahmed suggests, as they are not just reactions. They demarcate between objects, separate bodies and create subjectivities online.

Embodied others and classist fears

Classed identities are shaped vis-à-vis other bodies that figure throughout the stories. In many narratives told on the group, the waiter, the customer service representative, the employee at the shop, all become abstractions that are produced by fears the storytellers have about ‘being cheated’ or ‘not being respected’—which in turn are reproduced through repetition and circulation. These fears are illustrated in a parallel ad [in Arabic] for a taxi service company titled ‘did someone take advantage of you before?’, which maps out a succession of different characters, often from a lower-class background, that try to extract money from the well-off middle-class Egyptian on a day-to-day basis—whether it is the waiter who doesn’t give back change or the man working at a kiosk who gives gum instead of change.

These stories strip away any human backstory of workers, reducing them to the role they play in unsatisfactory consumer experiences. They are as unidimensional as a faulty gas pump or a dirty tablecloth. Thus, underlying the cultural logics of consumer protection is a protection from imagined others that emphasizes the fragility of consumer selves. Going back to the stories, it is important that we also look at them as instruments of power. In a sense, they are not only affective productions of abstract subjects; the negotiations that a story sparks between a customer and a manager can result in someone losing their job. Often, so as not to compromise the reputation of their brand, restaurants have fired employees whose bodies have absorbed the complaints made by customers. After all, it is easier to advocate for someone getting fired if they are reduced to a faulty part in a consumer machine and not regarded as a full human who might be overworked and therefore impolite or error-prone.

Things that have gone bad

The complicated relationships between bodies, others, and objects that shape the stories could be taken further by examining the photographic display of stuff that was deemed disposable and distasteful. While literature on mediated consumption has mostly focused on the lavish and the glamorous (e.g. studies of teens flaunting garments and sport shoes on social media), little has been written on the mediation of trash. As Michael Thompson shows in his book Rubbish Theory, rubbish is undertheorized, as often “anthropologists interest themselves in what is noticed, treasured, and admired […] rather than with what is disregarded, discarded, and despised.” Online consumer stories are stories about ordinary stuff that went wrong. They reflect the social lives of trash, as they enable us to follow the trajectories of disposable stuff—which is taken back to businesses, exchanged, restored, or left residing in the chronicles of a Facebook group, waiting to resurface on people’s timelines. Moreover, the importance of documenting disposable stuff for narrative evidence grants trash an aesthetic functionality. While these things have failed to conform to commonly agreed upon standards of consumability, they play an important role in substantiating complaints. Consumer narratives are therefore assemblages of both human and nonhuman actors.

Class and dynamics of chill and care online

On another note, the debates surrounding those stories tell us something about the contested nature of the performativity of identity online. The sincerity of accounts, focus on details and the intensity of shared sentiments discussed above were subject to mockery by some people, who started turning these stories into a meme. Mimicking the descriptive styles adopted and the cataloging of actions in a comical fashion, these memes serve as intentionally imperfect repetitions that disturb the seriousness of the accounts.

The parody story typically follows a similar trajectory, as the storyteller imagines a fictional situation and proceeds to give as much detail as possible in a way that dramatizes the whole thing. The reader at first is led to think this is another story reflecting frustrations about stuff or services, but soon finds out that these anxieties are being called into question. What figures in the stories as dilemmas is reduced to micro-annoyances through clever distortions. In a way, these memes advance a sort of moral ‘chill’ that Alana Massey describes as “being far removed from anything that looks like intensity” when it comes to consumption habits. Documenting bad experiences is seen by these memesters as “too bougie”. Caring too much and not caring that much become intersecting discourses that shape the performativity of classed identities online. Consumer stories become sites that reflect the overlapping and intermeshing of different ways of being and becoming middle class.

Consumer online storytelling tells us about circulating affects, frustrations, community-making, othering processes, ordinary stuff and creative articulations. These stories are complex networks that put different people and objects in conversation. Taking these creative instances seriously is important if we want to understand everyday dynamics of class online. However, we also need to be aware of the politics of these stories and the kind of subjects that are being created through these narratives.

Eman Shahata is an MA student of Anthropology at Goldsmiths University of London. She is currently researching secondhand cultures in Cairo. 


As the school year ends we at Cyborgology thought it fitting to publish our first-ever anonymous contribution. We all have varying opinions about the views stated below but we did agree that these are ideas worth putting out there for discussion.

Excerpt from an infographic included in the IPS’s report on college president pay. Full graphic here.

To Whom It May Concern:

If it is your job to keep track of and rank institutions of higher education and publish that data in venues like U.S. News & World Report or the Princeton Review, I have a simple request for you. Please start keeping track of institutions’ administrator-to-faculty ratios and, in your proprietary ranking formulas, penalize institutions with a high ratio. The reasoning here is equally straightforward: putting more emphasis on administrative work than on actual teaching and research is detrimental to student outcomes.

I wish I could say there was lots of data to back this up but, sadly, researchers are reluctant to publish findings that are directly hostile to their bosses. Still, there are preliminary findings that are worth paying attention to. For starters, a 2014 report by the Institute for Policy Studies found that, within public universities, high president salaries and high administrative spending overall correlated positively with high student debt, high reliance on part-time adjunct hiring, and sharp declines in permanent tenure-track faculty. You already keep track of graduating students’ debt and the percentage of adjunct professors in the faculty pool, so why not track what seems to be a predictive variable for both of those things?

If you don’t trust the non-partisan IPS, then listen to former administrators themselves. Jon Wiener, in reporting on the IPS study, interviewed William R. Schonfeld, former dean of social sciences and emeritus professor of political science at the University of California, Irvine, who stated unequivocally: “The motor force behind these trends is the hiring of ‘professional administrators’ whose primary commitment is to their own careers and advancement.” Their value to the overall mission of their institutions, according to Schonfeld, is negligible if not deleterious.

Whether high administrative pay actually causes adjunctification of faculty or high student debt does not matter. Correlation should be enough cause to include an administrator-to-faculty ratio, because what that number truly represents is another data point in a larger, overdetermined trend of neoliberal education. It is a trend that, I say with respect, you are deeply implicated in. From lavish student centers to million-dollar sports stadiums, it has been your rankings that let universities compare one another in the first place. It is not too late to use your massive influence to reverse this trend of debt and frustration. Students should be comfortable, and sports are fun, but university administration should be a solemn duty, not a business opportunity.

Administration used to be a part-time task that was rotated among faculty. Now part-time faculty rotate in and out of employment under an ever-growing cast of full-time administrators. From my vantage point as a young scholar looking for my first full-time job, I cannot help but notice that many universities, even as the recession fades, are hiring dozens of administrators with sentence-long titles but very few entry-level, tenure-track professors. Even postdoctoral positions—what should be on-the-job training for emerging researchers and teachers—are including administrative duties as part of their job calls. Enough is enough.

At first I was ashamed to write this anonymously. I wanted to stand for what I believe in and for the community I love. Now though, I feel as though anonymity articulates something more fundamental to this problem: a bottoming out of a strong and independent community of free thinkers. Job security in the academy has gotten so bad that even stating basic facts about the nature of our work like I have done above, something that would be an obviously indispensable part of any social scientific investigation, is enough to put me at the bottom of ever-growing piles of qualified job applicants.

What you measure matters. It literally made a thousand flowers bloom when you started tracking campus aesthetics, and it determined the livelihoods of countless academics who have tried to navigate the perverse incentives your metrics produce. I am not asking you to stop ranking universities (although maybe that is what we, in the end, really need), but I am asking that you think of your rankings in a self-reflective manner. Include things that may mitigate the unintended consequences of your prior actions. An administrator-to-faculty ratio would give faculty a chance to govern themselves again. It was shared governance, between faculty and a handful of full-time administrators, that made American education the best in the world. Help us keep it that way.



Last Sunday French voters seemingly stemmed the tide of nationalist candidates winning major elections. I say seemingly because, as The Guardian reported: “Turnout was the lowest in more than 40 years. Almost one-third of voters chose neither Macron nor Le Pen, with 12 million abstaining and 4.2 million spoiling ballot papers.” The most disturbing statistic, though, is that nearly half of voters 18 to 24 voted for Le Pen. She may not have won this time, but the future in France looks pretty fascist. For now, though, France seems to have dodged a bullet of a familiar caliber.

Late last Friday night the Macron campaign announced it had been hacked and many internal documents had been leaked to the open internet through Pastebin, later spreading on /Pol/ and Twitter. The comparisons to the American election were easy and numerous, but unlike the United States, France has a media blackout period: elections are held on weekends and news reporting is severely limited. Emily Schultheis in The Atlantic explains:

Here, the pre-election ban on active campaigning, which begins at midnight the Friday night before an election, and ends only when the polls close Sunday night, is practically sacred. The pause is seen as a time when French voters can sit back, gather their information and reflect on their choice before heading to the voting booth on Sunday. It’s also the law: According to French election rules, the blackout includes not just candidate events but anything that could theoretically sway the course of the election: media commentary, interviews, and candidate postings on social media are not just illegal, but taboo.

It is up to future communication and media scholars to determine exactly how much influence the blackout had on these particular election results, but there is plenty of reason to believe it worked in Macron’s favor. He had won the first round of voting and led in runoff election polling. Any sort of major shift in public opinion could only hurt him. France and nearly a hundred other countries ban opinion polling leading up to an election precisely because last-minute developments can produce equally abrupt changes in public opinion. Such changes are not guaranteed to be wrong or misguided, but they are most likely not well thought out.

The 2017 French election may provide many lessons in the months and years to come, but right now one thing seems clear: in the torrent of opinions and prescriptions that came out about Fake News, not one of them (to my memory) suggested less media as the solution. In the rush to combat misinformation, too many people forgot the importance of reflection. Even the usual browbeating commentariat, which takes every opportunity it can to tell readers that they are mindless social media zombies, did not seize on the election of Donald Trump as a sign that something was deeply wrong in our media diets.

What we did hear a lot about is the danger of unverified reporting or outright lies making it into algorithmically isolated newsfeeds. It was this new and disturbing trend, the assumption went, that was the main instigator of nationalist sympathies and support for Trump. What this theory left out was the simple fact that a majority of Americans still primarily get their news from television, and a strong plurality get it from local TV (the specific percentages are 57% and 46%, respectively). The age ranges most likely to vote for Trump (45 and up) correlate with the demographics that get their news from television the most. Without getting too far into the weeds of correlation and causation, we can make a simple observation: a change in television news would have had the greatest impact on the demographic that was most swayed by Trump’s message.

Facebook and Twitter are capable of fanning the flames of suspicion and rumor, but traditional media gatekeepers are still the ones with gas cans. Leaks will happen, and it will be difficult to keep people from talking about them on social media, but stories don’t blow up or even reach older audiences without the amplification aided by traditional journalists. Television news’ tendency to amplify fear and exaggerate risks has been widely documented, and political scientists know that fear tends to make people vote more conservative. Trump’s campaign was, if anything, a testament to the success of a fear-based strategy.

On one hand, this is good news, because it means the problem of fake news might be a lot easier to solve than we thought. We don’t need Facebook to invent a truth-o-meter algorithm; we need calm election reporting. Fewer black boxes and more blackouts. For those who are immediately thinking about freedom-of-the-press concerns, I can only say this: the government has done far worse to the First Amendment than bar Nate Silver from barking poll numbers.

Still, media is a fast-changing enterprise, and I would rather not have a government trying to decide whether a platform is a place for talking about politics or an editorial outlet that should be shut down during a blackout. Perhaps the government need not get into the business of explicitly barring election reporting at all. The best of all possible scenarios would be a shift in culture, not policy. After all, recent research from the American Press Institute has shown that trust in reporting is now largely derived from who shared it, not who wrote it. What we need now is an acknowledgement on the part of influential people that what they share matters and that there is such a thing as careless or even reckless media habits. This is not the first time technological affordances have outpaced cultural norms, and it probably won’t be the last.