Still image from a YouTube tutorial on how to build a model Victorian factory in Minecraft

I’m going to sound like a grandpa here (video games are a big gap in my knowledge of digital media) but what the hell is wrong with today’s video games? I’m not really talking about getting ripped off by loot boxes, or titles that ship with major bugs left to be squashed. Those are certainly things that keep me away, but what really turns me off is what the games themselves are about. And here, rest assured, I’m not talking about the violence depicted in games which, as many well-regarded studies have definitively shown, doesn’t cause violence. (Though that doesn’t change the fact that I don’t really find photo-realistic war games to be particularly entertaining.) No, I’m just tired of video games feeling like a second job.

I have never felt the desire to play any of the simulator games that are popular today, not even Train Simulator 2018, which, objectively, sounds awesome. Ditto for Minecraft, even though I love building things. It all just sounds like chores. I so desperately want to love video games. I own a PlayStation 4 and have about half a dozen games, but they just collect dust on a shelf. I played Skyrim for a long time, but only after I rage-quit half an hour into the game and then didn’t pick it back up for over a year. Why would I want to collect hundreds of flowers to make a health potion?

For our anniversary I bought Britney one of those adorable miniature Super Nintendos that just went on sale. It came pre-loaded with 21 games and we played for hours. I genuinely enjoy playing with that thing. Just last night I played Yoshi’s Island before bed. (That game is so fucking adorable.) What is it about the old 16-bit games that I love that today’s games don’t have? I want to explore my own reactions here for a minute because I have a hunch that my own confusion is shared by many others.

The quick and easy answer is nostalgia: as with music, people form emotional bonds with video games that are so strong that one’s sense of familiarity and comfort supersedes the joy of discovery, such that older works are enjoyed more than newer ones. I have a lot of fond childhood memories tied to video games, but that doesn’t feel like the whole answer. There is something about what these older games are about that hits me in a way that newer ones just don’t.

Here’s my theory: today’s video games are less about escapist fantasy than offering opportunities to feel as though you have accomplished something. Beating your cousin in Street Fighter II has always provided a singular sense of accomplishment, and nothing felt quite as bittersweet as seeing the credits roll at the end of Super Mario RPG, but the sense of accomplishment I’m talking about isn’t related to completing a task so much as taking on the identity of someone whose major purpose in life is doing something that has a tangible impact on the world.

This all hit me as I was reading David Graeber’s latest book Bullshit Jobs: A Theory. Graeber asserts that upwards of half of all workers in America and Europe toil at jobs that, while well compensated, provide no net impact on the world. He argues that such meaningless employment inflicts a special kind of “spiritual violence” that attacks our very notions of what it means to be human. Contrary to popular ideas that people are naturally lazy and must be prodded and cajoled into work, it seems that we crave the opportunity to be helpful to one another or, at the very least, to accomplish a task that has some discernible social value. One need only look at how often people go to great trouble to find meaningful work at their bullshit job —taking on work that is outside of their job description, installing software that lets them surreptitiously edit Wikipedia, or moderating a subreddit— to see just how important being helpful is to people.

Others, though, rather than find something to actually do,

…escape into Walter Mitty-style reverie, a traditional coping mechanism for those condemned to spend their lives in sterile office environments. It’s probably no coincidence that nowadays many of these involve fantasies not of being a World War I flying ace, marrying a prince, or becoming a teenage heartthrob, but of having a better—just utterly, ridiculously better—job.

Graeber then relays a story about a man who would zone out at work only to imagine himself as “J.Lo’s or Beyoncé’s Personal Assistant.” There is, of course, a flash game called Personal Assistant that looks fairly popular based on its thousands of five-star reviews. As our jobs provide less and less meaningful satisfaction, we turn to the make-believe of video games to experience actual productive labor.

I have had the good fortune to make a living based on a few odd jobs that I find deeply satisfying instead of taking a job that pays well but has no discernible psychological or societal benefit. I write about interesting topics, teach smart students, and do research for clients that I care about. This, more than anything else I surmise, is why I don’t find most games worthwhile. That, and the lack of couch co-op games. That definitely sucks too.

Perhaps you love your job and also love video games. That’s totally possible. There are lots of reasons to love anything. I’m just trying to put my finger on a feeling that I have about the difference between today’s popular games and older ones. And yeah, I know there are lots of indie titles, but there you need the resolve of an aficionado: someone who enjoys trying out, judging, and curating the lesser-known offerings in their chosen field. I just want to love the popular stuff; I don’t want to go hunting.

This argument may be surprising to long-time readers of Cyborgology because it sounds a bit digital dualist —I love my real jobs instead of the virtual ones on the screens— but that’s not what I’m saying at all. The problem here isn’t that video games are competing in a zero-sum game with “real” jobs or that people are playing NBA 2K instead of a pickup game of basketball. The problem is that we have a society and an economy bereft of meaningful, waged work, and so people have to find wages and meaning in two different places.

For all the talk of how “addicting” phones, social media, and video games are —what with their ability to release dopamine in the brain and all— there is startlingly little said about why people look to those things in the first place. I don’t like the addiction metaphor at all, but even if we accept it, the notion fails for the same reason that “just say no” anti-drug rhetoric doesn’t work: addiction is not a matter of simply being exposed to something and getting hooked. Those who become chemically addicted to a substance might have grown dependent while recovering from an injury, might be self-medicating a psychological condition, or might be living a hard life that needs numbing. In all of those cases it is unproductive to talk about drugs the way that digital mindfulness people talk about screen time: as an issue of personal responsibility and discipline. What we need is a more holistic reckoning with how we are expected to earn and spend money and time.

Which brings me back to why I enjoy my SNES more than my PlayStation. The games on my PlayStation —or any popular modern gaming platform— are meant to fill a need that my work already fulfills. I spend my whole working day completing meaningful tasks, and I’m also privileged enough to have a house to work on that provides more tangible rewards. The spiritual and psychological needs that many modern video games are designed to meet are already fulfilled for me by other tasks. The SNES feels more like genuine escapism and play than the (meaningful, interesting, and rewarding) work of Minecraft, Call of Duty, or Train Simulator 2018. All of which is to say: if anyone has a visually stunning, turn-based RPG available for the PS4 that they’d like to recommend, I’m all ears.

David is on Twitter

Several steeples with different world religion symbols atop each peak, the highest one topped with the Facebook F

Colin Koopman, an associate professor of philosophy and director of new media and culture at the University of Oregon, wrote an opinion piece in the New York Times last month that situated the recent Cambridge Analytica debacle within a larger history of data ethics. Such work is crucial because, as Koopman argues, we are increasingly living with the consequences of unaccountable algorithmic decision making in our politics and the fact that “such threats to democracy are now possible is due in part to the fact that our society lacks an information ethics adequate to its deepening dependence on data.” It shouldn’t be a surprise that we are facing massive, unprecedented privacy problems when we let digital technologies far outpace discussions around ethics or care for data.

For Koopman the answer to our Big Data Problems is a society-spanning change in our relationship to data:

It would also establish cultural expectations, fortified by extensive education in high schools and colleges, requiring us to think about data technologies as we build them, not after they have already profiled, categorized and otherwise informationalized millions of people. Students who will later turn their talents to the great challenges of data science would also be trained to consider the ethical design and use of the technologies they will someday unleash.

Koopman is right, in my estimation, that the response to widespread mishandling of data should be an equally broad corrective. Something that is baked into everyone’s socialization, not left to just professionals or regulators. There’s still something that itches though. Something that doesn’t feel quite right. Let me get at it with a short digression.

One of the first and foremost projects Cyborgology ever undertook was a campaign to stop thinking of what happens online as separate and distinct from a so-called “real world.” Nathan Jurgenson originally called this fallacy “digital dualism” and for the better part of a decade all of us have tried to show the consequences of adopting it. Most of these arguments involved preserving the dignity of individuals and pointing out the condescending, often ableist and ahistorical precepts that one had to accept in order to agree with digital dualists’ arguments. (I’ve included what I think is a fairly inclusive list of all of our writing on this below.) What was endlessly frustrating was that in doing so, we would often, to put it in my co-editor Jenny Davis’ words, “find [our]selves in the position of technological apologist, enthusiast, or utopian.”

Today I must admit that I’m haunted by how much time was wasted in this debate. We and many others had to advocate for the reality of digital sociality before we could get to its consequences, and now that those consequences are here, everyone has been caught flat-footed. I don’t mean to overstate my case here. I do not think that eye-roll-worthy Atlantic articles are directly responsible for why so many people were ignoring obvious signs that Silicon Valley was building a business model based on mass surveillance for hire. What I would argue, though, is that it is easy to go from “Google makes us stupid and Facebook makes us lonely” to “Google and Facebook can do anything to anyone.” Social media has gone from inauthentic sociality to magical political weapon. Neither framing reckons with the digital in a nuanced, thoughtful way. Instead it foregrounds technology as a deterministic force and relegates human decision-making to good intentions gone bad and hapless end users.

This is all the more frustrating, or even anger-inducing, when you consider that so many disconnectionists weren’t doing much more than hawking a corporate-friendly self-help regimen that put the onus on individuals to change and left management off the hook. Nowhere in Turkle, Carr, and now Twenge’s work will you find, for example, stories about bosses expecting their young social media interns to use their private accounts or do work outside of normal business hours.

All this makes it feel really easy to blame the victims when it comes to near-future autopsies of Cambridge Analytica. How many times are we going to hear about people not having the right data “diet” or “hygiene” regimen? How often are writers going to take the motivations and intentions of Facebook as immutable and go directly to what individuals can or should do? You can also expect parents to be browbeaten into taking full responsibility for their children’s data too.

Which brings me back to Koopman’s prescription. I’m always hesitant to add another thing to teachers’ full course loads and syllabi. In my work on engineering pedagogy I make a point of saying that pedagogical content has to be changed or replaced, not added. And so here I want to focus on the part that Koopman understandably side-steps: the change in cultural expectations. Where is the bulk of that change going to be placed? Who will do that work? Who will be held responsible?

I’m reminded of the Atomic Priesthood, one of several admittedly “out there” ideas from the Human Interference Task Force, whose job it was to ensure that people 10 to 24 thousand years later would stay away from buried fissile material. It is one of the ultimate communication problems because you cannot reliably assume any shared meaning. Instead of a physical sign or monument, linguist Thomas Sebeok suggested an institution modeled on organized religion. Religions, after all, are probably the oldest and most robust means of projecting complex information into the future.

A Data Priesthood sounds overwrought and a bit dramatic, but I think the kernel of the idea is sound: the best way we know how to relate complex ethical ideas is to build ritual and myth around core tenets. If we do such a thing, might I suggest we try to involve everybody but keep the majority of critical concern on those who are looking to use data, not the subjects of that data. This Data Reformation, if you will, has to increase scrutiny in proportion to power. If everyone is equally responsible, then those who play with and profit off of the data can always hide behind an individualistic moral code that blames the victim for not doing enough to keep their own data secure.

David is on Twitter

Past Works on Digital Dualism:

What if Facebook but Too Religious. Image credit: Universal Life Church Monastery

A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the media image that was circulating ahead of the 2018 State of the Union address, depicting a ticket to the event that was billed under a typographical error as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we also might reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, my own contribution to our culture of accrued errors was duly noted. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion.

To update Shakespeare, “the fault is not in our [software], but in ourselves.” Despite the ubiquity of online tracking and the expanding potency of “Big Data” to inform decisional processes in a host of spheres, there remains one persistent design glitch in the system: humanity. We may well have “data doubles” emerging in our wake as we surf upon the web, and the predictive powers of search engines, advertisers, and political strategists may be increasing, but there are still people inside these processes. As a tech columnist naturally observed, “a social network is only as useful as the people who are on it.”

Simply put, not everything can be measured and quantified—and an initial “human error” is only likely to be magnified and amplified in a fully wired world. We might utter a malapropism or a Freudian slip in conversation, and in a bygone era one might have stumbled onto a hapax, which is a word used one time in a body of work (such as the term “sassigassity,” used by Charles Dickens only once, in his short story “A Christmas Tree”). In the online realm, where our work’s repository is virtually unlimited, solitary and even nonsensical uses can carom around the cavern, becoming self-coined additions to the lexicon.

Despite the amplificatory aspects of new media, typos themselves certainly aren’t new. An intriguing illustration from earlier stirrings of a mechanical age is that of Anne Sexton and her affinity for errors, as described in chapter eight of Lyric Poetry: The Pain and the Pleasure of Words, by Mutlu Konuk Blasing:

In the light of the lies and truths of the typewriter—of its blind insight, so to speak—Sexton’s notorious carelessness not just with spelling but with typing has a logic. She lets typos stand in her letters, and sometimes she will comment on them, presumably spending at least as much time as it would take her to strike out and correct…. ‘Perhaps my next book should be titled THE TYPO,’ she writes. This would have been a good title. She is both typist and the typo-error she produces—both an agent and the mangling of the agent on the typewriter, which tells the lie/truth that she/we? want to hear: ACTUALLY THE TYPEWRITER DOESN’T know everything.

Indeed, neither the typewriter nor its contemporary digital extrapolations can know everything. The errors in our texts (virtual or print) are reflections of ourselves, things that we generate and which in turn produce us as well. The 1985 dystopian film Brazil captures the essence of this dualism, as a clerical error—caused when a “fly in the ointment” alters a single typewriter keystroke—sets in motion a darkly comedic and deadly chain of events. The film’s protagonist internalizes his inadvertent error, which taps into his lingering sense that the whole society is a mistake—ultimately leading him to seek an escape that can only yield one possible conclusion: a grim cognitive dissonance stuck at the lie/truth interface.

Such dystopic visions reflect an Orwellian tradition of blunt instruments of control and bleak outcomes, playing on fears of an authoritarian world that tries to perfect human nature by severely constraining it. This is an endeavor of demonstrable folly, yet one that ingeniously enshrines absurdity at the core of its totalitarian project. Variations on the genre’s defining themes likewise devolve upon society’s tendency to centralize baseline errors, yielding subjects ruled not by pain but pleasure and systems of control based on reverberation. Reflecting on how a “brave new world” of distraction and titillation merges with one where the exponential growth of media becomes the paramount message, Florence Lewis (a school teacher, author, and self-described “hypo-typo”) encapsulated the crux of the dilemma (circa 1970):

I used to fear Big Brother. I feared what he could do and was doing to language, for language was sacred to me. Debase a man’s language and you took away thought, you took away freedom. I feared the cliché that defended the indefensible…. Simply because we are so bombarded by media, simply because our technology zooms in on us every day, simply because quiet and slow time is so hard to find, we now need more than ever the control of the visual line. We need to see where we are going. As of this moment, we just appear to be going and going in every direction. What I am suggesting is that in a world gone Zorba, it will not be a big brother who will watch over us but a Mustapha Mond [the figurehead from Aldous Huxley’s Brave New World], dispensing light shows, feelies, speed, acid, talkathons. It will be a psychedelic new world and, I fear, [Marshall] McLuhan will be its prophet.

These are prescient words on many levels, reminding us of the plasticity of human development and the rapidity of sociotechnical change. As adaptive creatures, we’re capable of ostensibly normalizing around all sorts of interventions and manipulations, amending our language and personae to fit the tenor of the times. Is there a limit to how much can be accepted before flexibility reaches its breaking point? A revealing paper on the “Technopsychology of IoT Optimization in [the] Business World” sheds some light on this, highlighting the ways in which our tendency as end-users to accept and appropriate new technologies into our lives is the precondition for Big Data companies to be able to “mine and analyze every log of activities through every device.” In other words, the threshold “error” is our complicity, rather than the purveyors’ audacity (or, perhaps more accurately, their sassigassity). And one of the ways this is fostered is by amplifying our fallibility and projecting it back to us across myriad platforms.

The measure of how far our perceptual apparatuses can go thus seems to reside less in the hands of Big Tech’s innovation teams, and more so in our own willingness to accept (and utilize) their biophysical and psychological incursions. The commodification of users’ attention is alarming, but the structural issues in society that make this a viable point of monetization and manipulation have been written into the code for decades. Modern society itself almost reads like one great typographical projection, a subconscious longing for someone to step in and put things right. Our errors not only go untended, however, but are magnified through thoughtless repetition in the hypermedia echo chamber. The age of mechanization, coinciding with the apotheosis of instrumental rationality, may in reality be a time of immanent entropy as meaning itself unravels and the fabric of sociability is undermined by reckless incommensurability.

An object lesson with real-world (and potentially disastrous) implications was the recent chain of events that led an emergency services worker to trigger the ballistic missile alert system in Hawaii. As the New York Times reported (in a telling correction to its initial article), “the worker sent the alert intentionally, not inadvertently, after misunderstanding a supervisor’s directions.” This innocuous-sounding revision indicates that the episode was due to a human error, which had occurred within (and was intensified by) a human-designed system that allowed a misunderstanding to be broadcast instantaneously. Try as they might, such Dr. Strangelove scenarios will be impossible to eliminate even if the system is automated; indeed, and more to the point, automating decisional systems will only reinforce existing disharmonies.

Humans, we have a problem. It’s not that we’re designed poorly, but rather that we’ve built a world at odds with our field-tested evolutionary capacities. To err may well be human, but we’ve scaled up the enterprise to engraft our typos into the macroscopic structures themselves; like Anne Sexton, we are both the progenitors of typographical errors and the products of them. There’s an inherent fragility in this: at the local-micro scale errors are mitigated by redundancy, and “disparate realities begin to blend when their adherents engage in face-to-face conversation.” By contrast, current events appear as the manifestation of a political typo writ large, as the inevitable byproduct of a system that amplifies, reifies, and rewards erroneous thought and action—especially when it is spectacular, impersonal, and absurd.

Twitter users have long requested an ‘edit’ function on the site, but fixing our cultures and politics will require more than a new button on which to click. As Zeynep Tufekci observed (yes, on Twitter): “No easy way out. We have to step up, as people, to rebuild new institutions, to fix and hold accountable older ones, but, above all, to defend humane values. Human to human.” Technology can facilitate these processes, but simply pursuing progress for its own sake (or worse, for mercenary ends) only further instantiates errors. Indeed, if we’re concerned about the condition of our union, we might also be alarmed about the myriad ways in which technology is impacting our perception of the uniom as well.


Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

Image credit: theihno


This image provided by the U.S. Coast Guard shows fire boat response crews battle the blazing remnants of the off shore oil rig Deepwater Horizon Wednesday April 21, 2010. The Coast Guard by sea and air planned to search overnight for 11 workers missing since a thunderous explosion rocked an oil drilling platform that continued to burn late Wednesday. (AP Photo/US Coast Guard)

It has been really thrilling to hear so much positive feedback about my essay on authoritarianism in engineering. In that essay, which you can read over at The Baffler, I argue that engineering education and authoritarian tendencies track closely together, and that we can see this in engineers’ interpretations of dystopian science fiction. Instead of heeding very clear warnings about the avarice of good intentions gone awry, companies like Axon (né TASER) use movies and books like Minority Report as product roadmaps. I conclude by saying:

In times like these it is important to remember that border walls, nuclear missiles, and surveillance systems do not work, and would not even exist, without the cooperation of engineers. We must begin teaching young engineers that their field is defined by care and humble assistance, not blind obedience to authority.

I’ve gotten some pushback, both gentle and otherwise, about two specific points in my essay, which I’d like to discuss here. I’m going to paraphrase and synthesize several people’s arguments, but if anyone wants to jump into the comments with something specific they’re more than welcome to do so.

Pushback 1: “Engineering” is too broad a category to carry that much analytical weight. Civil engineers do very different work and have very different employers than those in aerospace or mechanical engineering.

It is certainly fair to say that civil engineers, who build bridges, tunnels, and lots of other important infrastructure, are not under the same pressures to work in and otherwise support the military-industrial complex the way aerospace engineers are. There are, indeed, different professional cultures across these subfields. That being said, lots of universities have schools of engineering that contain aerospace, civil, and many other kinds of engineering. Those engineers take the same introductory courses and the same ethics or professional development courses. Engineering curricula, when it comes to the social impacts of engineering and the very fundamentals of the discipline, often have quite a bit of overlap.

ABET, the accreditation body for most American higher education engineering programs, has a fairly centralized system where EVERY engineering program or department must abide by several fairly specific criteria. The closest those criteria get to the political implications of engineers’ work, by the way, is requiring that students be evaluated for their “understanding of and a commitment to address professional and ethical responsibilities, including a respect for diversity.” Exactly what those ethical responsibilities are (not to mention what constitutes diversity) is left up to individual programs.

If we look at specific program criteria, like aerospace for example, there are absolutely no references to ethics whatsoever. That bears repeating: the association that reviews whether you have a functioning program for teaching humans how to build drones, missiles, fighter jets, and all sorts of machines of war has no additional ethics guidelines. If ABET can make one brief requirement for ethics across all engineering disciplines and doesn’t have to distinguish between those disciplines when it comes to ethics guidelines, then criticism of that system can operate at that resolution as well. To say that my essay relies on too broad a category would also call into question nearly every university’s engineering curriculum.

Finally, there is already a lot of acclaimed work in engineering pedagogy, STS, and other fields that makes definitive, empirical claims across the engineering professions. Professor of engineering pedagogy Alice Pawley has done extensive surveys of engineers and found that most work in corporate or military organizations that are fairly large and are organized in hierarchical managerial structures. Louis L. Bucciarelli’s Designing Engineers is regular reading for anyone doing work in this area, and he too refers to “engineers” very broadly. To discount my work would mean throwing out a fairly large portion of well-regarded research on the topic, much of which I cite in the essay.

Pushback 2: Contrary to what you argue in your piece, engineers do have ethics oversight and there are licensure bodies that require continuous training and have oversight boards.

While that first pushback offers the opportunity for generative tension and interesting discussion, this second argument feels like a bad faith engagement with the topic. In my essay I write,

Unlike medical professionals who have a Hippocratic oath and a licensure process, or lawyers who have bar associations watching over them, engineers have little ethics oversight outside of the institutions that write their paychecks. That is why engineers excel at outsourcing blame: to clients, to managers, or to their fuzzy ideas about the problems of human nature. They are taught early on that the most moral thing they can do is build what they are told to build to the best of their ability, so that the will of the user is accurately and faithfully carried out. It is only in malfunction that engineers may be said to have exerted their own will.

Canadian engineers, many have pointed out, receive an iron ring in a ceremony designed by Rudyard Kipling called The Ritual of the Calling of an Engineer. While that ceremony sounds very elaborate and might make for great in-group solidarity (which can be helpful in maintaining and enforcing ethical norms), it is not at all what I’m talking about. I didn’t say engineers have no sense of ethics; I argued that they actually have something worse: a definition of ethics wherein the individual engineer only really exercises their agency when something goes wrong. If the engineer does exactly as they are told and, for example, builds a perfectly working four-legged weapons platform for Boston Dynamics, they will have achieved a widely held definition of ethical engineering practice. That’s not good enough.

Others have argued that engineers do have oversight organizations that confer licenses and can take them away. Indeed, in the United States the National Society of Professional Engineers does confer a Professional Engineer (PE) license that is overseen by state-level licensure boards. Again, I said “little ethics oversight,” not “no ethics oversight,” but that is really beside the point, because the NSPE does not revoke your PE license for building, say, an oil pipeline that leaks at a rate that is considered normal for that chosen design. The PE license is an example of my critique, not an argument against it, because it only focuses on doing a job well, not on whether the job itself comports with any sort of social justice standard or larger ethics framework.

Put another way, the NSPE does nothing to work against what sociologist of engineering Diane Vaughan calls the “normalization of deviance.” Bad, even deadly, decisions can be baked into systems-level decision-making such that individual actors might be dutifully following directions and making sure everything stays within parameters, but there are few mechanisms for questioning the parameters in the first place. Vaughan coined the term while studying the Challenger disaster, but it works just as well to describe the BP Deepwater Horizon spill. Some might say “oh well, that’s management,” to which I would say the following: engineers love to boast that they have world-changing powers until something goes wrong. Then a paper-pusher becomes an insurmountable obstacle. I just don’t buy it.

A better argument against my critique would go after bar associations and medical licensure. Bar associations do not suspend lawyers for defending terrible companies, and Dick Cheney’s doctors haven’t been censured for keeping a war criminal alive. Still, though, lawyers also have the National Lawyers Guild, and at least the Hippocratic Oath is partisan towards upholding and preserving life. There is no engineering organization with significant power that would censure the NSPE-licensed engineer who will make sure Trump’s border wall is structurally sound.


An artist’s rendering of a possible future Amazon HQ2 in Chicago. Image from the Chicago Tribune.

The Intercept’s Zaid Jilani asked a really good question earlier today: Why Don’t the 20 Cities on Amazon’s HQ2 Shortlist Collectively Bargain Instead of Collectively Beg? Amazon is looking for a place to put its second headquarters, and cities have fallen over each other to offer some startlingly desperate concessions to lure the tech giant. Some of the concessions, like Chicago’s offer to essentially engage in wage theft by taking all the income tax collected from employees and handing it back to Amazon, make it unclear what these cities actually gain by hosting the company. The reason that city mayors will never collectively bargain on behalf of their citizens is twofold: 1) America lacks an inter-city governance mechanism that prevents cities from being blackballed by corporate capital, and 2) most big city mayors are corrupt as hell and don’t care about you.

In 1987 urban sociologists John Logan and Harvey Molotch put forward the “Growth Machine” theory to explain why cities do not collectively bargain and instead compete with one another in a race to the bottom to see which city can concede the most taxes for the least gain. The theory is rather straightforward: a city may have one or two inherent competitive advantages that no other city has, but beyond that it can only offer tax breaks. Maybe you’ve got a deep-water port that big container ships can use, or you’re situated at the only pass in a mountain range. Other than that, location is completely fungible. All that’s left is tax policy and land grants.

Technology clearly makes cities’ competitive advantages even slimmer. Cities that flourished because they were well situated along waterways slowly declined as trains and the National Highway System surpassed canals as the preferred mode of freight transit. The list of things a city can exclusively offer a prospective employer seems to be getting smaller.

Meanwhile, the competition between cities has only gotten fiercer. “The jockeying for canals, railroads, and arsenals of the previous century,” wrote Logan and Molotch, “has given way in this one to more complex and subtle efforts to manipulate space and redistribute rents.” Instead of a handful of elites making handshake agreements over where to put a government arsenal or the Pennsylvania Railroad’s major terminus, the duty to attract major investment in the 20th century was turned over to teams of PR experts and economic development coordinators. Entire departments in cities and counties around the country were tasked with inventing incentive packages for major employers.

The Growth Machine puts business interests first, but some stuff does actually “trickle down” to some people. Public spending may be slightly increased to the extent that it doesn’t actively deter capital investment. For example, a business won’t relocate to a city where its top management’s kids can’t go to a nice school, so a city might invest in its schools to lure new business. Businesses also demand things that the rest of the public can use, like airports or high-speed internet. A city might even adopt the Richard Florida playbook and invest in public arts and entertainment. There was a sweet spot, between the late 70s and the early 90s, where this way of doing business was defensible. Schools were less segregated and economic inequality was bad but not horrendous.

Now, in the twenty-first century, all that is old is new again. Inequality is reaching 19th-century levels, and cities and school districts in many parts of the country are more segregated today than they were in the 60s. What few benefits the public received when their local governments went after major companies have now been privatized. Again, Chicago’s bid is illustrative here: Mayor Rahm Emanuel’s brutal fight to privatize the city’s schools has created a two-tiered education system with elite charter schools and cash-starved public ones. Whereas Amazon’s presence once would have signaled the possibility of an infusion of cash for Chicago public schools, charter schools promise a closed circuit of money and services.

In a world where 82% of the wealth created goes to the wealthiest 1% of people, city leaders are bargaining with Amazon but with other people’s money. Some cities might have more enlightened mayors but, for the most part, there doesn’t seem to be a desire among the ruling class to extract wealth from private capital and redistribute it to average citizens. Rather, this is about securing closed circuits of wealth among a privileged few. To think that these mayors are first and foremost going to bargain for the best deal for their constituents comes off as, sadly, naive.

But let’s say, for the sake of argument, a large portion of mayors did want to flip the script and collectively bargain on behalf of their citizens. First they would be confronted with the simple fact that they are organizing a détente on one level so that they can compete on another. Richard Florida, writing for CNN and also quoted in The Intercept, calls on city mayors “to forge a pact to not give Amazon a penny in tax incentives or other handouts, thereby forcing the company to make its decision based on merit.”

What merit would that be, though? Would the city with the fewest homeless people win? Bezos would be more apt to pick based solely on which city has the best weather. What would have been offered in explicit subsidies would really come down to the same low-tax business climate that the original Urban Growth Machine is predicated on, but instead of a special gift to Amazon, cities would pass tax laws that gave away the farm to any company of sufficient size. Instead of Amazon picking from a list of tailor-made proposals, it would be looking for the city or county that just passed another staggeringly low tax policy. Chicago’s offer of routinized wage theft wouldn’t be affected either, since it’s a state-wide program that has been in place since 2001. Mississippi, Indiana, and Missouri have similar programs.

The point here is that corporations and the people that run them are ideological. Companies do not set up shop based on what is good for people; they choose their location based on what is good for capital. How else do you explain all the businesses that incorporate in Delaware? The ideological fervor of CEOs also points to another problem: even if cities banded together in some sort of non-aggression pact so that none of them promised a single tax break, what would happen the next time a Fortune 500 company starts looking for a new headquarters? Would those cities get a shot? No. They would be blacklisted.

In Richard Florida’s latest book he laments that, in an alternate universe, President Hillary Clinton would have adopted his “detailed proposal for a new Council of Cities, comparable to the National Security Council.” This Council would foster “a new partnership between national government and the cities in which federal investments would flow.” This is a politically shrewd idea for reasons I have outlined before, but we are unlikely to see it happen any time soon. Even if we were to establish it tomorrow, though, the larger problem remains: we have massive monopolistic companies that can make unilateral, undemocratic decisions that impact the lives of millions of people. More than anything, it is our state of inequality and the attendant disinvestment in public resources that is, ultimately, the problem.

David is on Twitter.

Jack Nicholson’s President James Dale

I have this childhood memory of one of those rigged games at a county fair where the prize was a stuffed alien. I wanted it really bad. It looked just like the Halloween costume I’d made with my mom a few years back. We covered a balloon with papier-mâché and when it dried we popped the balloon, cut out almond-shaped eyes, and spray painted the whole thing silver. This stuffed alien looked just like my costume but it was electric green and had a beautiful black cape with silver embroidery. I won it (I don’t remember the game) and kept it for a long time. I might still have it somewhere.

Being the 90s kid I am, I was excited to see a New York Times story about a 2004 incident off the coast of San Diego where two Navy airmen followed a U.F.O. as it “appeared suddenly at 80,000 feet, and then hurtled toward the sea, eventually stopping at 20,000 feet and hovering. Then they either dropped out of radar range or shot straight back up.” I was hoping this story might circulate for a while, especially given that a $22 million Defense Department program meant to study U.F.O.s was recently discovered in the Pentagon’s black money budget. There’s even video of the thing! Sadly, it barely scratched the surface of most newsfeed algorithms.

The paltry reaction to such amazing footage might annoy me, but it isn’t surprising. The 21st century, in spite of 20th-century sci-fi’s predictions, has been radically ambivalent to the stars. There’s no Star Trek on primetime TV and The X-Files reboot received mixed reviews. In the 90s there were not one but two Star Trek series running throughout the whole decade, The X-Files was one of the most popular shows on television, and alien abductions were fodder for weekly episodes of Unsolved Mysteries. U.F.O. sightings were also a dime a dozen, providing source material for books, documentaries, and even feature films.

Then, something changed. Part of the change is cultural, which, I’ve argued before, is exemplified by South Park’s Eric Cartman. Even as an 80-foot satellite dish emerges from his butt, he refuses to believe that he’s been abducted by aliens:

This syncs up nicely with Vox-style explainerism to create a furiously obnoxious ethos where fun half-truths die and only the vindictive lies remain. One is either the liberal explainer Cartman who is technically correct (e.g., “There is only a 0.0024 percent chance that an 80-foot satellite dish is coming out of my ass.”) or the alt-right Cartman who refuses to acknowledge the satellite dish in the first place. Either way, you’re Cartman.

I still think it’s accurate to say that we’re governed by a cynical desire to prove others wrong, either through bad faith deployments of data or categorical denials of incontrovertible evidence. What’s remarkable is how well represented both perspectives are in our politics. It’s sort of amazing that one society can contain both FiveThirtyEight.com and a Centers for Disease Control that can’t use the phrase “evidence-based” in its reports.

First contact stories have always really been about humanity. We are on our best behavior, or rise to the occasion, when aliens arrive. In the 90s we proved our worth through feats of technical achievement (Star Trek: First Contact, Contact) or we defeated them (Independence Day, Mars Attacks). Either case required massive cooperation and the suspension of usual conflict. But what happens when a fragmented society such as our own encounters the extraterrestrial?

More recent takes on first contact —namely Europa Report in 2013 and District 9 in 2009— are very different. In Europa Report first contact is deadly and part of a larger corporate conspiracy. In District 9 humans are the antagonists, forcing aliens into Johannesburg’s slums. Mars Attacks may actually belong in this list too. Jack Nicholson’s President James Dale gives what reads today as a decidedly Trumpian speech (read the YouTube comments if you don’t believe me): “what is wrong with you people? we could work together! why be enemies? because we’re different? is that why? think of the things we could do. think how strong we would be. earth and mars— together.” President Dale is then stabbed through the heart by a Martian’s robot hand. Defeating the aliens in Mars Attacks is achieved through an accidental discovery instead of super-human achievement.

While District 9 is based in (white) humanity’s track record of reacting to foreign visitors, Mars Attacks pokes fun at our earnest belief that our leaders are the most honorable and talented people society has to offer, their Sorkin-esque speeches ensuring that “we do not go quietly into the night.”

We don’t believe that anymore. Most don’t see the president as competent, let alone inspiring. If we can no longer maintain the fiction of imagining our leadership as competent, then what use are aliens to us? They’re dinner guests showing up when you haven’t finished tidying up. They’re rubberneckers at a crash site. If aliens showed up today we would feel kinda embarrassed because we don’t feel like we’re at our best right now. Sure, in the 90s, when we published books that heralded the end of history, we were happy to show off humanity, but today we are back to feeling that society is a work in progress.

We aren’t paying attention to the New York Times’ reporting on U.F.O.s because we don’t want to pay attention to humanity. In the past we used U.F.O.s as an excuse to imagine what global cooperation would look like, and we searched the skies to see if we would ever have the chance to try it out. Such cooperation, and even our own best selves, seems very far away at the moment. We’re not accepting visitors right now, but hopefully, soon, we will be.


Photo: independent.co.uk, 2017

We should not be at all surprised to find ourselves online, but we are disturbed to find ourselves where we did not post, especially elements of ourselves we did not share intentionally. These departures from our expectations reveal something critical to the appeal of social media: it seems to provide a kind of identity control previously available only to autobiographers. We feel betrayed, as the writer would, if something is published which we had wanted struck from the record. The genius of social media is meeting this need for editorial control, but the danger is that these services do not profit from the user’s sense of coherent identity, which they appear to produce. The publisher is not interested primarily in the health of the memoirist, but in obtaining a story that will sell.

The intersection of autobiography and social media, especially emphasized by the structure of the Facebook Timeline, should raise questions about how identity is disclosed both before and after the advent of Facebook. The data self Facebook creates, which Nathan Jurgenson wrote about five years ago, is a dramatic departure from the way many of us likely conceive of ourselves. He suggests that the modern subject is constituted largely by data even as the subject creates that data; the self we reference and reveal to others is built on things that can be found out without our consent or effort. A more recent article in New York Times Magazine highlights the power of the immense data available on each of us with a profile.

Narrative identity theory has been developed by psychologists and theorists such as Paul Ricoeur, Jerome Bruner, and Paul John Eakin. It suggests that our sense of self is fundamentally the sense of a character in a narrative. In other words, the character named ‘I’ in the stories we tell is a character whom we understand rather well and with whom we identify, but it is not ontologically different from other characters in fictional or non-fictional narratives. The story our I-character appears in is simply a life. It contains so many events that they cannot all possibly be included, and whether telling others or remembering privately, we all become autobiographers as we retroactively select and grant meaning to experiences and choices.

Narrative identity theory can help to render the Person-Profile dialectic more comprehensible. Just because an embodied subject is creating the content which is shared on social media does not mean the two are in a chicken and egg relationship. Under narrative identity theory, even though the author writes the autobiography, the self is already a story, and so perhaps the person is already a profile. The phrase ‘life story’ is redundant; we understand ourselves as well as we understand the stories that portray our character. How might social media, which grant users such extensive control over these stories, affect this process?

The possession of a social media account changes how we behave: we seek out events which are documentable. The restaurant or concert that will fit nicely into the narrative of a profile is preferable to something which would be out of place in the story. Narrative identity theory suggests we have always sought to control our story, but the advent of social media brings this action into a new phase.

The Facebook Timeline clearly reflects the common ground between the theories of narrative identity and the data self. Rob Horning wrote about the Timeline when it was first introduced, citing an article explaining that the interface aims to evoke “the feeling of telling someone your life story, and the feeling of memory–of remembering your own life,” which, under narrative identity theory, are very similar actions; the creation of a sense of self comes through stories told not only to others but also internally in memory.

Horning asserts that the formulation of life as a stream of narrative is an imposition by Facebook on its users, not a natural or neutral process. When we make a coherent story out of what we post, he claims we are playing into Facebook’s hands by providing them with more useful data. Horning suggests we would not put effort into presenting a coherent narrative if it were not for the Timeline, but this is doubtful. Narrative identity theory suggests we cannot do otherwise; without a story to tell, we would not know ourselves. It may not be neutral, but it is not a total imposition on the part of the UI, either. Narrative identity theory has been around for decades, and perhaps the Timeline format has been successful because it agrees with the way we already understand ourselves.

Even basic questions about a person tend to create a kind of narrative: employment, relationships, where he/she has lived, etc. This is social accountability – the way it is normal for us to disclose our identities to others – and it is one very concrete intersection of narrative identity and the Timeline. In face-to-face expressions of identity, social accountability can be seen clearly in the questions we ask when meeting someone. Just as users cannot utilize Facebook without a profile, the story latent in a stranger’s introduction is his or her price of entry to all kinds of relationships. You might be comfortable with a coworker about whom you know very little, but a potential friend who withholds her life story or a suitor who refuses to elucidate his past? These are requests from profiles with no picture. Consider also the young professional without LinkedIn, the photographer without Instagram, or the student without a Facebook page: for better or worse, their failure to account for themselves in the expected way will inhibit their potential. It seems that social media has become the new social accountability; if you do not have a profile, you are failing to present yourself in the way society expects. This is to say nothing of the services and websites which require linked accounts in a preexisting, larger social network.

Horning’s assertion that identity-forming frameworks can be changed within a generation is key to understanding how we express and —partially as a consequence of that expression— understand ourselves. When we compare pre-Timeline Facebook and MySpace to today’s infinitely scrolling Timeline, one thing becomes clear: social media no longer demands static identities represented by a filled-out profile page. Instead there is a single box that constantly asks you to fill it with whatever is happening to you now. Story has overtaken stability, not only by calling for more frequent visits and updates, but by providing a stage for us to direct our character. Is it our fate to account for ourselves with these bottomless text fields, guided only by minimalistic web page designs, trending hashtags, and caption norms? If so, why have so many of us chosen it?

One reason we increasingly look to social media to host our narrative identities is that, for many of us, we have lost strong affiliations with church, state, family, company, and gender roles. These social institutions act as points of reference to call on when identifying oneself. But when we choose to qualify our associations rather than simply say “I am a Christian and an accountant,” the responsibility falls increasingly on the hyper-individualized subject. Identifying with one’s company, Evangelicalism, Catholicism, or patriotism provides a firm foundation but comes loaded with connotations and subtext over which the subject has no control. For the sake of freedom from the impositions of those structures, we have taken on the pressures of justifying and making meaning in our actions, our stories, and ultimately our identities.

A common criticism of theory is that it does not reflect lived experience, and it is indeed a tall order to ask individuals with online profiles to believe they are constituted by that data. If data in the form of the Timeline is becoming a foundation for identity, its narrative structure at least has a precedent in narrative identity theory. The narrative we write into our online data is familiar, and it helps to render the data self more comprehensible. If we are becoming data selves, it is perhaps through this very need to account for ourselves in the form of a story.

The important change is that our urge to narrate is no longer merely personal; it is profitable. Whether or not our purposes for creating a narrative are novel, there are new consequences to the act as it is mediated by social media. Facebook has done what so many successful companies have done and found a way to monetize something people already do, but what does Facebook’s immense success say about the behavior it has tapped for this profit? To pick an easy target for comparison, consider the double purpose served by the content of weight-loss and beauty magazines: images of attractive people not only suggest the success of the products for sale, they also undermine the reader’s confidence that he or she could do without those products. Could social media do the same? Continuing and accelerating the internalization of identifiers, social media has given us the control we want and the social accountability we need. Like the magazines, however, for growth to continue, we must always want more. How and when might Facebook increase the demand for its product: identity?

Daniel Affsprung is a recent graduate of SUNY New Paltz, where he studied English Literature with emphasis on critical theory and creative writing, and wrote an honors thesis on narrative identity theory in autobiography.

The Daily Beast ran a story last week with this lede: “Roseanne Barr and Michael McFaul argued with her on Twitter. BuzzFeed and The New York Times cited her tweets. But Jenna Abrams was the fictional creation of a Russian troll farm.” Abrams, the story goes, was a concoction of The Internet Research Agency, the Russian government’s troll farm that was first profiled in New York Times Magazine by Adrian Chen in June 2015. During its three-year life span the Abrams account was able to amass close to 70,000 followers on Twitter and was quoted in nearly every major news outlet in America and Europe including The New York Times, The BBC, and France 24.

The Abrams Twitter account was a well of viral content that overworked listicle writers couldn’t help but return to. Once the account had amassed a following, the content shifted away from innocuous virality to offensive trolling: saying the Civil War wasn’t about slavery, mocking Black Lives Matter activists, and jumping on hashtags that were critical of Clinton. “When Abrams joined in with an anti-Clinton hashtag,” The Daily Beast reports, “The Washington Post included her tweet in its own coverage. One outlet used an image of a terrorist attack sourced from Abrams’ Twitter feed.”

The Abrams account, they write, “illustrates how Russian talking points can seep into American mainstream media without even a single dollar spent on advertising.” This framing portrays journalists as passive filters that automatically parrot whatever popular Twitter users say. Journalists are supposed to be critical fact-checkers and the last defense against misinformation entering the public sphere. The rate at which false information keeps “seeping” in seems to be growing, and so it is worth asking: are there structural reasons that fake news keeps making its way into reputable news sources?

Jay Rosen is the obvious person to answer this question, and to some degree he did answer it last March when he announced a partnership with the Dutch news site De Correspondent: “if you’re doing public service journalism,” he wrote, “and trying to optimize for trust, it helps immensely to be free from the business of buying and selling people’s attention.” Not having commercial sponsors also means “not straining to find a unique angle into a story that the entire press pack is chewing on, it’s easier to avoid clickbait headlines, which undo trust. Not chasing today’s splashy story can hurt your traffic, but when you’re not selling traffic (because you don’t have advertisers) the pain is minimized.”

It is frustrating that prominent public radio personalities like Ira Glass are running in the opposite direction. Glass, talking to an AdAge reporter in 2015, confidently stated, “Public radio is ready for capitalism.” This is dangerous because much of Russia’s disinformation campaign and Trump’s home-grown trolling relied on the capitalist attention economy that governs every major media outlet. Breitbart and InfoWars republished Abrams’ tweets, but so did The Washington Post and The Times of India. The only thing these news organizations have in common is their advertiser-centered business model.

It’s no secret that most staff writers are underpaid and overworked, and they are the lucky ones. There are thousands of wildly talented freelance writers who spend half their time writing and reporting and the other half chasing down their overdue paychecks. Reporters with no research budget and a huge publishing quota are understandably going to do a bit of Googling, pull a quote from Twitter, and call it a day. Overworked and underpaid journalists are the weakened immune system that lets viral fake news take over the body politic.

Herman and Chomsky, in their famous book Manufacturing Consent, pointed to the high cost and time-consuming nature of good journalism as one of the five “filters” that discourage critical reporting. Instead of going to the source of the story, journalists go to police departments and corporate PR offices to grab quotes. This is not because they are lazy, but because they lack the time or money to report the story from scratch. PR offices and police departments’ spokespeople offer one-stop shops for an official account of what happened in any given story.

The Yes Men—two artists who, for example, will pose as the spokesperson for Dow Chemical and tell a BBC reporter that they take full responsibility for the Bhopal Disaster— know that news agencies are more likely to report on something if they are handed a media package or are offered access to a talking head from a well-known organization. Their hoaxes have real consequences: sending corporate stocks temporarily tumbling and attracting mainstream attention to ignored environmental disasters.

Twitter affords a similar shortcut to newsworthiness. Putting someone with a high follower count (to say nothing of a blue checkmark) in your story increases the possibility of reciprocal attention: you click my content and I’ll click yours. When someone with 70,000 followers says something controversial to their substantial audience, that’s worth a shout-out in your news story, especially when that story is little more than a survey of what people are talking about. That Twitter user, after seeing a spike in followers and mentions related to the article, will share it themselves, sending off a quick “was included in this thing, haha.” This is the mundane, reciprocal manufacturing of attention that feeds microcelebrity and now, apparently, geopolitics. Anything with a decent follower count is low-hanging fruit for finishing a reporter’s daily content quota.

What is absolutely maddening is that the demands and responses to the fake news phenomenon have centered on social media and the algorithms that govern their behavior. Some of the solutions out there —cough Verrit cough— are so absurd that they can only be explained as either the product of cynical opportunists looking to make fact-flavored content or the result of too many well-connected people not understanding the nature of the problem they are facing. Both seem equally likely. The intent barely matters, though, because the result is the same: a more elaborate apparatus to churn out attention-grabbing media for its own sake.

Social media has exacerbated and monetized fake news, but the source of the problem is advertising-subsidized journalism. Any proposed solution that does not confront the working conditions of reporters is a band-aid on a bullet wound. The problem is systemic, which means any one actor —whether it is Mark Zuckerberg or Facebook itself— is neither the culprit nor the possible savior. So long as our attention is up for sale, people with all sorts of motives will pay top dollar.

Image courtesy Free Press

In our very first post, founding editors Nathan Jurgenson and PJ Patella-Rey wrote:

Facebook has become the homepage of today’s cyborg. For its many users, the Facebook profile becomes intimately entangled with existence itself. We document our thoughts and opinions in status updates and our bodies in photographs. Our likes, dislikes, friends, and activities come to form a granular picture—an image never wholly complete or accurate—but always an artifact that wraps the message of who we are up with the technological medium of the digital profile.

Too few people were talking about the internet in this way in 2010. Many were still paying close attention to Second Life, more because it comported with prevailing theories of how identity worked online than because it was representative of most people’s identity online. It was a different time: no one paid for music on the internet, men were afraid to walk out of the house with their new iPads, there was talk of Twitter Revolutions, Occupy gave us tons of opportunities to think about embodiment, planking was a thing, tattoos were talking to Nintendo 3DS’s, and the conversations around digital privacy that we have today were just taking their present form. The persistent, media-rich profiles we had made just a few years earlier had lost their novelty, and now we had to reckon with the context collapses, too-clean quantifications, algorithmic segregations, and liquid identities that they afforded.

Much has changed in the handful of years since Nathan and PJ started the blog. We say “cyborg” less and there are tons of new, wonderful people writing thoughtful essays and commentary about everything that is exciting, provocative, and downright frightening about our augmented society.

As always, it is a pleasure to work alongside my co-editor Jenny, and we couldn’t ask for a better crew of regular contributors: Crystal, Maya, Stephen, Gabi, Marley, Britney, and Sarah. And, of course, this site would be a 404 if it weren’t for Nathan and PJ. To all of you and our guest contributors: thank you!

It is hubris to predict the future, but anniversaries are as good a time to look forward as they are to look back, so here are a few topics and trends that seem worthy of research, debate, and clear-eyed thinking in the next year:

Geographic Thinking Will Take Prominence Alongside Historical, Anthropological, and Sociological Analysis

I study cities, so maybe I am biased here, but as more and more of our online interactions happen through mobile devices instead of less-portable computers, geographic context will become a key component of social media’s affordances and thus of our analyses of the social action that takes place on those services. Pair Snapchat’s recent map features with the steady increase of ride-sharing services and the continual fascination with the possibilities that drones represent, and it makes sense that geographers will be more helpful in understanding our digital age than ever before. We’re overdue for it anyway. As the recently departed Edward Soja once wrote in his Postmodern Geographies: “For the past century, time and history have occupied a privileged position in the practical and theoretical consciousness of Western Marxism and critical social science. … Today, however, it may be space more than time that hides consequences from us, the ‘making of geography’ more than the ‘making of history’ that provides the most revealing tactical and theoretical world.” Dromology (Paul Virilio’s term for the study of speed) also has a role to play here. As we seek out and interact with our friends across digital maps and subscribe to on-demand product delivery, the accounting for and overcoming of large amounts of terrain and topography become an issue for individuals, not just nations’ armies.

The Return of Infoglut

In 2013 Mark Andrejevic published Infoglut: How Too Much Information Is Changing the Way We Think and Know, and that titular neologism was everywhere. Something similar is sorely needed again as “fake news” and its phenomenological antecedents pop up like mushrooms in the dark, damp swamp that is slowly engulfing our media landscape. The issue of too many people acting on and responding to information with questionable relationships to reality is serious, but framed badly. Yes, there is too much misleading information out there, but what is worse is that there is simply too much information being routed through algorithms that will mess up as surely as their human progenitors do. Perhaps we don’t need better information, just less.

Amazon is the New Facebook When It Comes to Privacy Norms

The recent headlines about Amazon Key, the service that lets couriers open your front door, are definitely having an outsized influence on my thoughts, but I still think it’s accurate to say that Amazon —in its attempts to find and conquer new markets— will start playing with our privacy norms. This year alone it has released a slew of Echo-branded devices that judge your outfits and let people automatically turn on video chats, to say nothing of its Alexa devices that are constantly listening. Amazon has every reason to feel like it can succeed where Facebook failed: while Facebook was pushing users to reveal more just as they were starting to share less, Amazon has actual products and services that it is offering consumers.

Acceptance and Mobilization Around Social Media Companies’ Authority

In 2014 Yo, Ello, and Emojli tried to shake us out of the social media duopoly of Twitter and Facebook, but fell short of establishing a beachhead. Let this next year be the time that we finish our grieving process and accept these imperfect companies as the major power players for the foreseeable future. With this acceptance should come a determination to build organizations that we feel comfortable living with. Instead of falling for the Silicon Valley myth that everything is a meritocracy and the next billion-dollar social media company is just one round of VC funding away, we must start doing the arduous work of reining these companies in and learning to make demands of them. Not just regulation or transparency, but profit sharing and true, meaningful shared governance. If this doesn’t happen, we may stand to lose the cyborg selves we were just starting to understand.

Inverse has a short thing about the precipitous decline in reported close encounters with extraterrestrials following the widespread adoption of smartphones. Author Ryan Britt asks, “How come there have been fewer reports of flying saucers and alien abductions in the age of the camera phone?” The answer is, essentially, that UFOs and abduction stories don’t work at the high resolutions of our devices. Roswell and abductions are the products of eyewitness accounts and fuzzy VHS video, not 4K video captured on iPhones. The mundane enchantment of suburbia, as I’ve called it before, gets deleted as noise in an attempt to capture life in the photo-realistic.

This is certainly a compelling argument. After all, the timing works out: Britt notes that the ’80s and ’90s “were the peak of UFO interest in the United States. Proof? The vast majority of famous books published about UFOs and government cover-ups — most notably The Roswell Incident by Charles Berlitz and William L. Moore — were published in these two decades.” Add to that the popularity of The X-Files and Unsolved Mysteries and you have a pretty clear timeline for the birth and death of mundane enchantment. As cameras proliferate, the quest to capture the elusive and the strange falls off. It would be a paradox if it weren’t so pat.

The loss of modern American mysticism could easily be chalked up to our ability to capture everything, but when has irrefutable proof ever really stopped people from believing things? A world of poltergeists, little grey men, and Bigfoot actually seems preferable and easier to digest than one where Donald Trump is president. Put simply: in a world of fake news, why not go on believing in alien abductions? Why, when everything is a conspiracy theory, have we lost the few entertaining half-truths?

The answer is less of a disconnectionist argument —put down your phone and revel in the unknown— and more of a push against the unrelenting positivism in media. It doesn’t seem like a coincidence that South Park, a TV series that taught a generation that caring earnestly about things is dumb, chose alien abduction as the subject of its first episode. In “Cartman Gets an Anal Probe” no one can convince Cartman that his abduction was real and not a dream. Even as an 80-foot satellite dish emerges from his ass, Cartman only replies: “screw you guys, whatever.” It is, admittedly, a creative inversion of the common trope: the abductee is the one who must be convinced while everyone else believes the improbable.

In many ways South Park is really what replaced shows like The X-Files. The former did not literally replace the latter in a time slot (they weren’t on the same channel, nor did they air on the same days), but what these shows rewarded was vastly different. South Park wanted you to be Cartman: the one who stubbornly refused what everyone else was saying, just because everyone else was saying it. This syncs up nicely with Vox-style explainerism to create a furiously obnoxious ethos where fun half-truths die and only the vindictive lies remain. One is either the liberal explainer Cartman, who is technically correct (e.g. “There is only a 0.0024 percent chance that an 80-foot satellite dish is coming out of my ass.”), or the alt-right Cartman, who refuses to acknowledge the satellite dish in the first place. Either way you’re Cartman.

Smartphones alone didn’t kill alien abductions; there had to be an attendant cynical desire to prove others wrong. Britt predicts the pendulum might soon swing in the opposite direction, though, pointing to William Gibson and other writers who contend “that flying saucer theories are meme-like, insofar as they will experience a media bandwagon period, as well as a period of not being so interesting to the mainstream.” I hope the aliens do come back, and that they bring with them a playful desire to contemplate the universe without explaining it.

David is on Twitter: @da_banks