
Atrocities in Eritrea atop my Twitter feed. A few tweets below that, police violence against an innocent African American girl at a pool party. Below that, the story of a teen unfairly held at Rikers Island for three years, who eventually killed himself. Below that, news about the seemingly unending bombing campaign in Yemen. Below that, several tweets about the Iraq war and climate change—two longtime staples of my timeline. It reminds me of the writer Teju Cole exclaiming on Twitter last summer that “we’re not evolving emotional filters fast enough to deal with the efficiency with which bad news now reaches us….”

This torrent of news about war, injustice, and suffering is something many of us experience online today, be it on Facebook, Twitter, in our inboxes, or elsewhere. But I wonder about the ‘evolutionary’ framing of this problem—do we really need to develop some new kinds of emotional or social or technical filters for the bad news that engulfs us? Has it gotten that bad?

As it turns out, it has always already gotten that bad. Media critics like Neil Postman have been making arguments about the deleterious effects of having too much mass-mediated information for decades now. Although his classic Amusing Ourselves to Death (1985) was written primarily as a critique of television, contemporary critics often apply Postman’s theories to digital media. One recent piece in Salon labelled Postman “the man who predicted the Internet” and suggested that today “the people asking the important questions about where American society is going are taking a page from him.” Indeed, Postman identified the central problem of all post-typographic media, beginning with the telegraph, as one of information overload. According to Postman, “the situation created by telegraphy, and then exacerbated by later technologies, made the relationship between information and action both abstract and remote. For the first time in human history, people were faced with the problem of information glut.” For Postman, telegraphy’s alteration of the “information-action ratio” associated with older communication technologies created a “diminished social and political potency.” In oral and print cultures, “information derives its importance from the possibilities of action” but in the era of electronic media we live in a dystopian “peek-a-boo world” where information appears from across the globe without context or connection to our daily lives. It “gives us something to talk about but cannot lead to any meaningful action.”

Put in these terms, one can understand the appeal of Postman’s ideas for the digital era, in which a feeling of being overloaded with information surely persists. Even before the Internet, “all we had to do was go to the library to feel overwhelmed by more than we could possibly absorb.” But as Mark Andrejevic reminds us in his book, Infoglut, “Now this excess confronts us at every turn: in the devices we use to work, to communicate with one another, to entertain ourselves.” And as Cole’s tweets make plain, the tension caused by too much information can be particularly acute when it comes in the form of bad news. Is it safe to say, then, that the Internet has further ruptured the information-action ratio in the ways suggested by Postman?

I want to argue against such a view. For one thing, Postman’s information-action ratio appears to privilege media that provide less information, and information with easily actionable ramifications. As a criticism, this doesn’t mesh with the sensibilities of either media producers or consumers, who have sought out more information from more people and places since at least the advent of typography. Such an ideal also would seem to privilege simple news stories over complex ones, since the action one can take in response to a simple story is much clearer than for a complex one. Indeed, Postman mockingly asked his readers “what steps do you plan to take to reduce the conflict in the Middle East? Or the rates of inflation, crime and unemployment? What are your plans for preserving the environment or reducing the risk of nuclear war?” But do the scale and complexity of these issues mean one should not want to know about them? Of course not. And arguing against the public consumption of such complex, thought-provoking stories seems wildly inconsistent for a book that later bemoaned the fact that television news shows were merely “for entertainment, not education, reflection or catharsis.”

But perhaps there is simply a threshold quantity of information beyond which human consciousness can’t keep up. This concern has animated much contemporary criticism of the Internet’s epistemological effects, as in Nicholas Carr’s 2008 essay “Is Google Making Us Stupid?” The piece began with Carr worrying about his own reading habits: “my concentration often starts to drift after two or three pages….” Carr quickly blamed the Internet for his newfound distraction. “What the Net seems to be doing is chipping away my capacity for concentration and contemplation….”

Amazingly, though, Carr’s capacity for deep reading had somehow managed to persist throughout the age of television, in contrast to Postman’s predictions about that medium’s deleterious effects. So while media critics of every age tend to make this sort of technologically determinist criticism, the question really ought to be reframed as one concerning social norms. Critics like Carr may talk of the brain’s “plasticity,” such that it can be rewired based on repeated exposure to hyperlinks and algorithms, but they, like Postman, don’t address why that rewiring wouldn’t necessarily entail the synthesis of old and new epistemologies, rather than the destruction of one by the other. How else to explain the fact that Carr’s own mental capacities flourished in a televisual age that was once similarly bemoaned by its critics? What we’re left with, then, is a way of reading technological panics like his and Postman’s as evidence of the shifting norms concerning communication technologies.

Shifting to normative, rather than technologically determinist, understandings of information overload recasts the problem of bad news in sociological and historical terms. Luc Boltanski’s Distant Suffering is a social history of the moral problem posed by the mass media’s representation of the suffering of distant others. When one knows that others are suffering nearby, one’s moral obligation is clearly to help them. But as Boltanski explains, when one contemplates the suffering of others from afar, moral obligations become harder to discern. In this scenario, the “least unacceptable” option is to at least make one’s voice heard. As Boltanski put it:

It is by speaking up that the spectator can maintain his integrity when, brought face to face with suffering, he is called upon to act in a situation in which direct action is difficult or impossible. Now even if this speech is initially no more than an internal whisper to himself… none the less in principle it contains a requirement of publicity.

There are echoes here of the information-action ratio concept, but the problem posed by this information is not the amount but its specific content. Good news and lighthearted entertainment don’t really pose a moral or ethical problem for spectators, nor do fictional depictions of suffering. But information about real human suffering does pose the problem of action as a moral one for the spectator.

This moral dilemma certainly didn’t originate with the telegraph, much less the television or the Internet. Rather, knowledge of distant others’ suffering came to be seen as morally problematic with the growth of newspapers and the press. But the Internet does, I think, tend to shake up these norms, partly because it changes the nature of public speech. In Postman’s terms, the Internet alters the action side of the information-action ratio. Call it slacktivism or clicktivism or simply chalk it up to the affordances of communication in a networked world, but Boltanski’s “internal whisper to himself” is no longer internal for many of us. At the very least, when confronted with bad news, we can pass on the spectacle by tweeting, blogging, pinning, or posting it in ways that are immediately quite public and also immediately tailored for further sharing. Each time I read a tweet I am confronted with the question of whether I should retweet or reply to it. This becomes a miniature ethical and aesthetic referendum on every tweet about suffering and misfortune—Is the issue serious enough? Will my Twitter followers care? Do I trust the source of this info? Is there another angle that the author of the tweet has not considered?—although I do this mental work quite quickly and almost unthinkingly at times. The same is true for my email inbox, flooded with entreaties for donations to worthy causes or requests to add my name to a petition against some terrible injustice. Of course, humanitarianism and political activism thrived before the Internet, so the issue here is not that the Internet has suddenly overloaded us with information about bad news, but that it has increased the number of direct actions we might take as a result. Each one of these actions is easy to do, but they add up to new and slightly different expectations.

This culminates in what I’m calling infoguilt. This term has been used sparingly in popular parlance, and its only scholarly use is in a 1998 book called Avatars of the Word: From Papyrus to Cyberspace. Author James J. O’Donnell suggested that “what is perceived as infoglut is actually infoguilt—the sense that I should be seeking more.” In O’Donnell’s conception guilt comes from not reading all that one could on a subject, or seeking out all available information. This doesn’t seem to me as potent a force as the guilt that comes from the kinds of overwhelming exposure to bad news and distant suffering discussed here. Guilt ought to be reserved for situations in which one’s moral worth is called into question, and as Boltanski pointed out, the spectacle of distant suffering “may be… the only spectacle capable of posing a specifically moral dilemma to someone exposed to it.” A more relevant definition of infoguilt ought to refer, then, to the negative conception of self that comes from not responding to the moral or emotional demands of bad news.

This guilt certainly generates a kind of reflexivity about one’s position as a spectator—how can I prioritize my time, resources, and emotions?—and in this way it may surely feel like we have too much information and not enough action. But I, for one, don’t think that feeling is a bad thing. After all, it hasn’t translated to a retreat from humanitarianism and charity—quite the opposite. Rates of charitable giving online have skyrocketed, and despite a serious dip in charitable donations after the 2008 financial crisis, American giving as a whole has risen continuously over the past four years, and is projected to continue to rise in 2015 and 2016 as well. At the very least it does not appear that the problem of infoguilt contributes to what has been deemed “compassion fatigue.” Instead, the guilt we feel is precisely a marker of a continued belief in the value of compassion and an internalization of shame when we fail to act with enough compassion for the many distant others who are now only a click away.

Still, I don’t want to just dismiss infoguilt as merely a trivial first world problem. It is, of course, a symptom of a deeply unjust world where such a surplus of pain and suffering confronts the most comfortable of us every day across the globe. And I don’t have the answer for the appropriate ways we should respond to all of the misfortune that confronts us every day online. But I do think we need to fight back against the notion that this is a technological problem. Because big tech companies are quite willing to solve the problem of infoguilt for us with algorithmic curation of the news we receive. As more and more of our news comes to us filtered through Facebook and Twitter, algorithms could reduce the emotional strain of bad news by limiting our exposure to it without us even knowing. This raises the question, as Mark Andrejevic put it: “what happens when we offload our ‘knowing’ onto the database (or algorithm) because the new forms of knowledge available are ‘too big’ for us to comprehend?” Facebook has already shown that it can subtly improve or depress our moods by shifting the content of our news feeds. And Zeynep Tufekci has written about the ways that Facebook’s algorithm inadvertently suppressed information about the protests in Ferguson and the brutal police response to them in the first hours of that nascent social movement. If we solve infoguilt with technological fixes like algorithmic filtering, it will likely be at tremendous cost to what’s left of our democratic public sphere.

As Andrejevic again explains:

The dystopian version of information glut anticipates a world in which control over… information… is concentrated in the hands of the few who use it to sort, manage, and manipulate. Those without access to the database are left with the “poor person’s” strategies for cutting through the clutter: gut instinct, affective response, and “thin-slicing” (making a snap decision based on a tiny fraction of the evidence).

This is, to a great extent, how we struggle with infoguilt today. We feel the pain of being unable to respond and the guilt of living in comfort and safety while others suffer, and we make snap judgments and gut decisions about what information to let through our emotional filters, and what actions we can spare amidst the ever-growing demands of work, family, and social life in an always-connected present. But given the available alternatives, let’s continue to struggle through our infoguilt, keep talking it out, and not cede these moral, ethical, and normative questions—over which we do have agency—to opaque technologies promising the comforts of a bygone, mythologized era. In the same way that activists are working to change the norms about trolling and actively creating safer spaces online for women, people of color, and other oppressed peoples, we can work to develop a moral language to understand our online obligations to distant sufferers. If we don’t, then this language will be developed for us, in code, and in secret, in ways more dystopian than even Postman could envision.


The author would like to thank the students in his WRI 128/129 “Witnessing Disaster” seminar, who read and commented on an early draft of this essay.

Timothy Recuber is a sociologist who studies how American media and culture respond to crisis and distress. His work has been published in journals such as New Media and Society, The American Behavioral Scientist, Space and Culture, Contexts, and Research Ethics.

@timr100
timrecuber.com
