A dirty old chair with the words "My mistakes have a certain logic" stenciled onto the back.

You may have seen the media image circulating ahead of the 2018 State of the Union address, depicting a ticket that, thanks to a typographical error, billed the event as the “State of the Uniom.” This is funny on some level, yet as we mock the Trump Administration’s foibles, we might also reflect on our own complicity. As we eagerly surround ourselves with trackers, sensors, and manifold devices with internet-enabled connections, our thoughts, actions, and, yes, even our mistakes are fast becoming data points in an increasingly Byzantine web of digital information.

To wit, I recently noticed a ridiculous typo in an essay I wrote about the challenges of pervasive digital monitoring, lamenting the fact that “our personal lives our increasingly being laid bare.” Perhaps this is forgivable since the word “our” appeared earlier in the sentence, but nonetheless this is a piece I had re-read many times before posting it. Tellingly, in writing about a panoptic world of self-surveillance and compelled revelations, I duly noted my own contribution to our culture of accrued errors. How do such things occur in the age of spellcheck and autocorrect – or more to the point, how can they not occur? I have a notion.
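
The mechanics of the miss are simple enough: a conventional spellchecker validates each token against a dictionary, so a “real-word error” like “our” standing in for “are” sails through unflagged. A minimal sketch of the idea (the word list and checker here are illustrative stand-ins, not any particular product’s implementation):

```python
# Naive dictionary-based spellcheck: it flags only tokens absent from the
# word list, so real-word errors ("our" in place of "are") pass untouched.
DICTIONARY = {"our", "personal", "lives", "are", "increasingly",
              "being", "laid", "bare"}

def spellcheck(sentence: str) -> list[str]:
    """Return the tokens a naive checker would flag as misspelled."""
    return [word for word in sentence.lower().split()
            if word not in DICTIONARY]

sentence = "our personal lives our increasingly being laid bare"
print(spellcheck(sentence))  # [] -- every token is a valid word, typo and all
```

Catching the slip would require parsing the grammar of the sentence rather than matching strings against a list, which is precisely where such tools falter.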

To update Shakespeare, “the fault is not in our [software], but in ourselves.” Despite the ubiquity of online tracking and the expanding potency of “Big Data” to inform decisional processes in a host of spheres, there remains one persistent design glitch in the system: humanity. We may well have “data doubles” emerging in our wake as we surf the web, and the predictive powers of search engines, advertisers, and political strategists may be increasing, but there are still people inside these processes. As one tech columnist observed, “a social network is only as useful as the people who are on it.”

Simply put, not everything can be measured and quantified—and an initial “human error” is only likely to be magnified and amplified in a fully wired world. We might utter a malapropism or a Freudian slip in conversation, and in a bygone era one might have stumbled onto a hapax, which is a word used one time in a body of work (such as the term “sassigassity,” used by Charles Dickens only once, in his short story “A Christmas Tree”). In the online realm, where our work’s repository is virtually unlimited, solitary and even nonsensical uses can carom around the cavern, becoming self-coined additions to the lexicon.
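
Hapaxes, at least, are easy to hunt mechanically: tally every word in a corpus and keep those that occur exactly once. A minimal sketch of such a tally, using an invented sample rather than Dickens’s actual text:

```python
from collections import Counter
import re

def hapaxes(text: str) -> list[str]:
    """Return the words appearing exactly once in the text (hapax legomena)."""
    words = re.findall(r"[a-z']+", text.lower())
    return [word for word, count in Counter(words).items() if count == 1]

# Invented stand-in corpus; run against the full text of "A Christmas Tree"
# and "sassigassity" would surface in the resulting list.
sample = "the dog chased the cat and the cat ignored the dog sassigassity"
print(hapaxes(sample))  # ['chased', 'and', 'ignored', 'sassigassity']
```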

Despite the amplificatory aspects of new media, typos themselves certainly aren’t new. An intriguing illustration from earlier stirrings of a mechanical age is that of Anne Sexton and her affinity for errors, as described in chapter eight of Lyric Poetry: The Pain and the Pleasure of Words, by Mutlu Konuk Blasing:

In the light of the lies and truths of the typewriter—of its blind insight, so to speak—Sexton’s notorious carelessness not just with spelling but with typing has a logic. She lets typos stand in her letters, and sometimes she will comment on them, presumably spending at least as much time as it would take her to strike out and correct…. ‘Perhaps my next book should be titled THE TYPO,’ she writes. This would have been a good title. She is both typist and the typo-error she produces—both an agent and the mangling of the agent on the typewriter, which tells the lie/truth that she/we? want to hear: ACTUALLY THE TYPEWRITER DOESN’T know everything.

Indeed, neither the typewriter nor its contemporary digital extrapolations can know everything. The errors in our texts (virtual or print) are reflections of ourselves, things that we generate and which in turn produce us as well. The 1985 dystopian film Brazil captures the essence of this dualism, as a clerical error—caused when a “fly in the ointment” alters a single typewriter keystroke—sets in motion a darkly comedic and deadly chain of events. The film’s protagonist internalizes his inadvertent error, which taps into his lingering sense that the whole society is a mistake—ultimately leading him to seek an escape that can only yield one possible conclusion: a grim cognitive dissonance stuck at the lie/truth interface.

Such dystopic visions reflect an Orwellian tradition of blunt instruments of control and bleak outcomes, playing on fears of an authoritarian world that tries to perfect human nature by severely constraining it. This is an endeavor of demonstrable folly, yet one that ingeniously enshrines absurdity at the core of its totalitarian project. Variations on the genre’s defining themes likewise devolve upon society’s tendency to centralize baseline errors, yielding subjects ruled not by pain but by pleasure, and systems of control based on reverberation. Reflecting on how a “brave new world” of distraction and titillation merges with one where the exponential growth of media becomes the paramount message, Florence Lewis (a schoolteacher, author, and self-described “hypo-typo”) encapsulated the crux of the dilemma (circa 1970):

I used to fear Big Brother. I feared what he could do and was doing to language, for language was sacred to me. Debase a man’s language and you took away thought, you took away freedom. I feared the cliché that defended the indefensible…. Simply because we are so bombarded by media, simply because our technology zooms in on us every day, simply because quiet and slow time is so hard to find, we now need more than ever the control of the visual line. We need to see where we are going. As of this moment, we just appear to be going and going in every direction. What I am suggesting is that in a world gone Zorba, it will not be a big brother who will watch over us but a Mustapha Mond [the figurehead from Aldous Huxley’s Brave New World], dispensing light shows, feelies, speed, acid, talkathons. It will be a psychedelic new world and, I fear, [Marshall] McLuhan will be its prophet.

These are prescient words on many levels, reminding us of the plasticity of human development and the rapidity of sociotechnical change. As adaptive creatures, we’re capable of ostensibly normalizing around all sorts of interventions and manipulations, amending our language and personae to fit the tenor of the times. Is there a limit to how much can be accepted before flexibility reaches its breaking point? A revealing paper on the “Technopsychology of IoT Optimization in [the] Business World” sheds some light on this, highlighting the ways in which our tendency as end-users to accept and appropriate new technologies into our lives is the precondition for Big Data companies to be able to “mine and analyze every log of activities through every device.” In other words, the threshold “error” is our complicity, rather than the purveyors’ audacity (or, perhaps more accurately, their sassigassity). And one of the ways this is fostered is by amplifying our fallibility and projecting it back to us across myriad platforms.

The measure of how far our perceptual apparatuses can go thus seems to reside less in the hands of Big Tech’s innovation teams, and more in our own willingness to accept (and utilize) their biophysical and psychological incursions. The commodification of users’ attention is alarming, but the structural issues in society that make this a viable point of monetization and manipulation have been written into the code for decades. Modern society itself almost reads like one great typographical projection, a subconscious longing for someone to step in and put things right. Our errors not only go untended, however, but are magnified through thoughtless repetition in the hypermedia echo chamber. The age of mechanization, coinciding with the apotheosis of instrumental rationality, may in reality be a time of immanent entropy as meaning itself unravels and the fabric of sociability is undermined by reckless incommensurability.

An object lesson with real-world (and potentially disastrous) implications was the recent chain of events that led an emergency services worker to trigger the ballistic missile alert system in Hawaii. As the New York Times reported (in a telling correction to its initial article), “the worker sent the alert intentionally, not inadvertently, after misunderstanding a supervisor’s directions.” This innocuous-sounding revision indicates that the episode was due to a human error, which had occurred within (and was intensified by) a human-designed system that allowed a misunderstanding to be broadcast instantaneously. Try as designers might, such Dr. Strangelove scenarios will be impossible to eliminate even if the system is automated; indeed, and more to the point, automating decisional systems will only reinforce existing disharmonies.

Humans, we have a problem. It’s not that we’re designed poorly, but rather that we’ve built a world at odds with our field-tested evolutionary capacities. To err may well be human, but we’ve scaled up the enterprise to engraft our typos into the macroscopic structures themselves; like Anne Sexton, we are both the progenitors of typographical errors and the products of them. There’s an inherent fragility in this: at the local-micro scale, errors are mitigated by redundancy, and “disparate realities begin to blend when their adherents engage in face-to-face conversation.” By contrast, current events appear as the manifestation of a political typo writ large, as the inevitable byproduct of a system that amplifies, reifies, and rewards erroneous thought and action—especially when it is spectacular, impersonal, and absurd.

Twitter users have long requested an ‘edit’ function on the site, but fixing our cultures and politics will require more than a new button to click. As Zeynep Tufekci observed (yes, on Twitter): “No easy way out. We have to step up, as people, to rebuild new institutions, to fix and hold accountable older ones, but, above all, to defend humane values. Human to human.” Technology can facilitate these processes, but simply pursuing progress for its own sake (or worse, for mercenary ends) only further instantiates errors. Indeed, if we’re concerned about the condition of our union, we might also be alarmed about the myriad ways in which technology is impacting our perception of the uniom as well.


Randall Amster, Ph.D., is a teaching professor in justice and peace studies at Georgetown University in Washington, DC, and is the author of books including Peace Ecology. All typos and errata in his writings are obviously the product of intransigent tech issues. He cannot be reached on Twitter @randallamster.

Image credit: theihno