
The American Sociological Association (ASA) annual meeting last week featured a plenary panel with an unusual speaker: comedian Aziz Ansari. Ansari just released a book that he co-wrote with sociologist Eric Klinenberg titled “Modern Romance.” The panel, by the same name, featured a psychologist working within the academy, a biological anthropologist working in industry, Christian Rudder from OkCupid, and of course, Ansari and Klinenberg. This was truly an inter/nondisciplinary panel striving for public engagement. I was excited and intrigued. The panel is archived here.

This panel seemingly had all of the elements that make for great public scholarship. Yet somehow, it felt empty, cheap, and at times offensive. Or as I appreciatively retweeted:

[Screenshot of tweet]

My discomfort and disappointment with this panel got me thinking about how public scholarship should look. As a person who co-edits an academic(ish) blog, I hold this concern dear. It is also a key issue of contemporary intellectualism. It is increasingly easy to disseminate and find information. Publishing is no longer bound by slow and paywalled peer-reviewed journals. Finally, we have an opportunity to talk, listen, share, and reflect on the ideas about which we are so passionate. But how do we do this well? I suggest two guiding rules: rigor and accommodation.

Be rigorous. Social science is like a super power that lets you see what others take for granted and imagine alternate configurations of reality. Common sense comes under question and is often revealed as nonsensical. Public scholarship therefore maintains both the opportunity and responsibility to push boundaries and challenge predominant ways of thinking. The ASA panel missed this opportunity and in doing so, shirked their responsibility.

First of all, the panel, like Ansari and Klinenberg’s book, was titled “Modern Romance.” When drafting my Master’s thesis, the people supervising the work taught me that “modern” did not mean what I thought it meant. Modernism is a particular historical moment brought forth during the industrial revolution. Without going too far into it, scholars continue to debate whether we have moved past modernism, and if so, what characterizes this new era and, in turn, what we should call it. Labeling the contemporary era “modern” is therefore an argument in and of itself, one that reveals a set of underlying assumptions that differ from those of postmodernism, poststructuralism, liquid modernity, etc. My thesis advisors told me to use “contemporary” instead. It means “now” and is a far less value-laden way of representing the current time period. I got no indication that the ASA panelists held to the theoretical underpinnings of modernism vis-à-vis other historical designations. Modernism, therefore, was misused, just as I once misused it in the first draft of my Master’s thesis.

This seems like a nitpicky point, and admittedly it is, but it matters. Public engagement entails opening dialogue between those with different kinds and levels of intellectual capital. This means that discourse can operate at multiple levels. The public scholar can communicate something broad to the larger citizenry, while communicating a more nuanced point to insiders. Moreover, how scholars speak becomes a form of training. If we say modern, then the citizens engaged in discourse with us will also say modern, thereby cultivating imprecision and perhaps even generating confusion.

The second (and larger) issue was with the tenor of the panel as a whole. About halfway through, I shot off this tweet:

[Screenshot of tweet]

These panelists had an opportunity to offer new ways of thinking about love, romance, and family. Instead, they kept heterosexual, monogamous, procreative marriage relationships at the center of their discussion. Leaving aside a few cringe-worthy statements from Ansari, the panel as a whole presumed that marriage was the ultimate goal for those using dating apps, even if users wished to employ them for casual hookups in the meantime. The biological anthropologist made evolutionary arguments about procreation, and concluded that changes in romantic connections represented “slow love,” in which marriage was the “finale” rather than the beginning. In this vein, they all talked about increases in cohabitation through the lens of declining marriage rates, rather than a reconfiguration of kinship ties and life course trajectories. In an exciting historical moment of dynamic cultural change, the panelists’ take on romance was painfully linear.

Rather than rigorous, the panel was safe: lazy language choices and linear heteronormative logic kept it comfortably inside mainstream ways of thinking.

The flip side of rigor is accommodation. To engage the public is not to mansplain things to them, but to offer the fruits of academic training in an accessible way while taking seriously the counterpoints, hesitations, and misunderstandings this may entail. Tangibly, this means intellectuals should use language that is as simple as possible while remaining precise; it means exercising patience when lay-publics espouse ideas or use language that seems outdated or even offensive; it means remaining open to viewpoints rooted in lived experience rather than scientific study or high theory; it means remaining flexible while maintaining intellectual integrity. The audience at ASA failed to strike this balance. Instead, the exchange became a weird dichotomy between fanenbying[1] and hyper-pretentious pushback. As I noted earlier, the panelists were heteronormative to a fault. The panel itself was therefore something of an intellectual sacrifice, as were the wholesale endorsements coming from the crowd. Those who engaged the panel critically, however, often did so without accommodation. They censured panelists in the pretentious language of insiders, complete with conference tropes such as “troubled by,” “problematic,” and “this isn’t so much a question as it is a comment.”

This all came to a head when the first person to ask a question took about five minutes to use all of the conference tropes I just mentioned. Ansari replied: “It’s clear that you have some issues, and I also have an issue. You just said ‘this isn’t really a question it’s a comment,’ but you’re standing in the Q&A line!!” The crowd erupted. Ansari said something we have all wanted to say to long-winded commentators. He identified and called out a truly poor habit within the academy. However, the person whom Ansari shut down was making a valid point; she simply made it in an unaccommodating way. Because of this, the cheering felt uncomfortable to me. The cheers invalidated the commentator’s points and, in doing so, endorsed the panelists’ message, a message that really deserved a harsh critique.

I appreciate that ASA made the move towards public scholarship, and I appreciate that public scholarship is difficult. This is why I’m pushing them—pushing us—to think about how public scholarship can/should look in practice. A simple starting place is to engage with rigor and accommodation. Maintain intellectual standards while meeting publics where they are.

I’m interested to hear other people’s thoughts on the panel and/or public scholarship more generally.


Jenny Davis practices public scholarship regularly on Twitter @Jenny_L_Davis

[1] Google tells me fanenby is the gender neutral way to say “fangirl/fanboy” (enby for the NB of non-binary)


Disclaimer: Nothing I say in this post is new or theoretically novel. The story to which I’ll refer already peaked over the weekend, and what I have to say about it–that trolling is sometimes productive–is a point well made by many others (like on this blog last month by Nathan Jurgenson). But seriously, can we all please just take a moment and bask in appreciation of trolling at its best?

For those who missed it, Target recently announced that they would do away with gender designations for kids toys and bedding. The retailer’s move toward gender neutrality, unsurprisingly, drew ire from bigoted jerks who apparently fear that mixing dolls with trucks will hasten the unraveling of American society (if David Banks can give himself one more calls it as I sees it moment, I can too).

Sensing “comedy gold,” Mike Melgaard went to Target’s Facebook page. He quickly created a fake Facebook account under the name “Ask ForHelp” with a red bullseye as the profile picture. Using this account to pose as the voice of Target’s customer service, he then proceeded to respond with sarcastic mockery to customer complaints. And hit gold, Mike did!! For 16 hilarious hours, transphobic commenters provided a rich well of comedic fodder. Ultimately, Facebook stopped the fun by removing Melgaard’s Ask ForHelp account. Although Target never officially endorsed Melgaard, they made their support clear in a Facebook post on Thursday evening.

While you enjoy a short selection of my personal favorite Ask ForHelp moments, keep in mind a larger point: trolling can be a good thing, and trolls can do important work. The act of trolling refers to intentionally disrupting discourse. Often, this is destructive and shuts dialogue down. Sometimes, though, trolling is productive and brings a dialogue to new depths, heights, and/or in new directions. Melgaard’s Ask ForHelp account is a beautiful example of trolling gone wonderfully right. The troll managed to co-opt a corporate site (Facebook) for purposes of co-opting a corporate identity (Target) for purposes of discrediting those who espouse hate and endorse exclusionary policies/practices. And he was funny about it. THIS is how you troll.













Jenny is on Twitter @Jenny_L_Davis



I am an invisible man. No, I am not a spook like those who haunted Edgar Allan Poe; nor am I one of your Hollywood-movie ectoplasms. I am a man of substance, of flesh and bone, fiber and liquids — and I might even be said to possess a mind. I am invisible, understand, simply because people refuse to see me. Like the bodiless heads you see sometimes in circus sideshows, it is as though I have been surrounded by mirrors of hard, distorting glass. When they approach me they see only my surroundings, themselves, or figments of their imagination — indeed, everything and anything except me…It is sometimes advantageous to be unseen, although it is most often rather wearing on the nerves. Then too, you’re constantly being bumped against by those of poor vision…It’s when you feel like this that, out of resentment, you begin to bump people back. And, let me confess, you feel that way most of the time. You ache with the need to convince yourself that you do exist in the real world, that you’re a part of all the sound and anguish, and you strike out with your fists, you curse and you swear to make them recognize you. And, alas, it’s seldom successful… ~Ralph Ellison (1952), Invisible Man

In what follows, I argue that the Black Lives Matter movement is a hacker group, glitching the social program in ways that disrupt white supremacy with glimpses of race consciousness. It is a group that combats black Americans’ invisibility; that “bumps back” until finally, they are recognized.  As Ellison continues:  

Invisibility, let me explain, gives one a slightly different sense of time, you’re never quite on the beat. Sometimes you’re ahead and sometimes behind. Instead of the swift and imperceptible flowing of time, you are aware of its nodes, those points where time stands still or from which it leaps ahead. And you slip into the breaks and look around.

The Black Lives Matter movement brings us, forcefully, into the “breaks,” and invites us to look around, too.

To hack is to find and exploit the weaknesses in a system. Once found, hackers can gain access to what’s inside, and, if desired, change the programming. The Black Lives Matter movement is working to accomplish the latter. They expose racism among America’s most established institutions, and then disrupt the fabric of everyday life to bring these weaknesses to the attention of the masses. This disruption or “glitch” that activists—especially activists of color— present is in many cases, simply themselves. They are black bodies taking up space; black bodies making demands; black bodies resisting invisibility.

Earlier this year, Black Lives Matter activists took over Baltimore, sitting peacefully, marching the streets, and, at times, breaking windows and setting things on fire in protest of the deadly police brutality inflicted upon Freddie Gray. Police deployed tanks. Officials closed schools. Businesses were unable to operate. Glitch: Look at us.

In Ferguson earlier this week, Black Lives Matter activists blocked the entrance to the St. Louis Federal Courthouse and traffic on a major highway in protest and remembrance of Michael Brown, the unarmed black teen killed by a white police officer one year ago. The city declared a state of emergency and arrested close to 60 protesters, including high-profile activists like philosopher Cornel West. Glitch: We are still here.

The Invisible Man Himself, Ralph Ellison

In Seattle last week, two black women activists stormed the stage at the Social Security Works rally in Westlake Park, prohibiting white presidential candidate Bernie Sanders from speaking. Lamenting Sanders’ failure to address contemporary racial issues, the women were booed by the crowd but refused to give the microphone back. They invited Sanders to respond to their criticisms. He declined. Following the event, Black Lives Matter Seattle released a press statement in which they proclaim: “…we honor Black lives lost by doing the unthinkable, the unapologetic, and the unrespectable.”

The choice of Sanders as a target is of particular relevance. Sanders is a self-described ally with a strong record of civil rights activism. In fact, just hours after his failed attempt to speak at Westlake, Sanders addressed a crowd of 15,000 at the University of Washington, calling for an end to institutional racism and reform of the criminal justice system. In contrast, Donald Trump claims there will be no “black presidents for a while” following what he considers a botched job by Barack Obama, and Ben Carson believes we needn’t think of race because he knows deep down that brains, not skin, make us who we are.

Bernie isn’t perfect, but he’s far better than the rest. And that’s just it. His work, his almost anti-racist position, his good intentions and barely missed marks make him the lowest common denominator within the existing political system. This is a system that puts black lives alongside a suite of issues—environment, economy, tax policy, military funding. This is a system that hides race issues amongst the crowded tabs of candidates’ official web pages. The Black Lives Matter movement rejects this model. Instead, it insists that in this moment, Black Lives take center stage. Anywhere but the center is unacceptable. No more hiding in plain sight. Glitch: We are taking over the platform.

Because of this insistence upon centrality, Black Lives Matter refuses to be Anonymous. They do not disrupt the system quietly. The hack is their presence. The hack is their voices. The hack is their faces. It’s not about discourse or even policy, but an insistence upon visibility; a refusal to remain unseen.

Like any good systems maintenance crew, however, the U.S. social system has workers diligently laboring to quiet the glitches, to restore the program, to punish the hackers and reinstate their invisibility. In Ferguson last year, these workers made up the grand jury that chose not to indict police officer Darren Wilson, the man who killed Michael Brown. This week, the workers are the “Oath Keepers,” made up of five white men with weapons, patrolling the streets of Ferguson to maintain “order” and “peace.” In the media, these are the news stations that label protestors “rioters” and highlight the destruction of property while marginalizing the historical and systemic destruction of black lives. It is Bernie Sanders, who pouts at his lost stage time rather than step aside to graciously acknowledge that this moment is not for him.

But the Black Lives Matter hack is powerful in its persistence. The system has been weakened by cameras on cops, fires in the streets, citizens demanding answers, and feet stomping on the ground, day after day, month after month. And because of this persistence, it is a hack that the system can only fight for so long. Each protest-induced glimpse makes invisibility more difficult to restore. At some point, we will have all seen too much, even those who try to close their eyes. This war of glitches creates a tumultuous moment, but provides the code with which to write an alternative future.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic via: Source

African Burial Ground Monument with Office Building Reflection.

Towards the beginning of Italo Calvino’s novel Invisible Cities, Marco Polo sits with Kublai Khan and tries to describe the city of Zaira. To do this, Marco Polo could trace the city as it exists in that moment, noting its geometries and materials. But, such a description “would be the same as telling (Kublai) nothing.” Marco Polo explains, “The city does not consist of this, but of relationships between the measurements of its space and the events of its past: the height of a lamppost and the distance from the ground of a hanged usurper’s swaying feet.” This same city exists by a different name in Teju Cole’s novel, Open City. Its protagonist, Julius, wanders through New York City, mapping his world in terms reminiscent of Marco Polo’s. One day, Julius happens upon the African Burial Ground National Monument. Here, in the heart of downtown Manhattan, Julius measures the distance between his place and the events of its past: “It was here, on the outskirts of the city at the time, north of Wall Street and so outside civilization as it was then defined, that blacks were allowed to bury their dead. Then the dead returned when, in 1991, construction of a building on Broadway and Duane brought human remains to the surface.” The lamppost and the hanged usurper, the federal buildings and the buried enslaved: these are the relationships, obscured and rarely recoverable though they are, on which our cities stand.

One morning early this spring, I stood at the intersection of Broadway and Duane and faced the African Burial Ground National Monument. It is, as Julius describes it, little more than a “patch of grass” with a “curious shape… set into the middle of it.” Inside the neighboring office building, though, is a visitor center with its own federal security guards, gift shop, history of the burial site and its rediscovery, and narrative of Africans in America. The tower in which it sits, named the Ted Weiss Federal Building, was completed in 1994, three years after the intact burials were discovered. (The “echo across centuries” Julius hears at the site of course fell on deaf developers’ ears.) In the lobby between the visitor center and the monument, employees of the Internal Revenue Service shuffle over the sacred ground as Barbara Chase-Riboud’s sculpture Africa Rising looks on, one face to the West, another to the East.

Africa Rising, photo from Library of Congress

I visited the African Burial Ground National Monument during a spring break trip for a course called Exploring Ground Zeros. We spent much of our class time during the semester visiting sites of trauma near our university in St. Louis, trying to uncover the webs of historical and contemporary claims that determine their meaning. In East St. Louis, we tracked the (now invisible) paths of the 1917 race riot/massacre. In St. Louis, we walked through the urban forest of Pruitt-Igoe, stared aghast at the Confederate Memorial in Forest Park, and visited the Ferguson Quick Trip, the now (in)famous epicenter of the Mike Brown protests in which many of us continue to take part. In New York, we did the same, moving from the Triangle Shirtwaist Factory to 23 Wall Street to Ground Zero to the African Burial Ground National Monument. At each site, we analyzed architecture, commemorations, official literature, and wrote field notes, trying to measure the city by its relationships so that we might later recreate it in our assignments.

We also took pictures, and a few of us uploaded ours to Instagram. After visiting the African Burial Ground and the old Triangle Shirtwaist Factory (now an NYU building), I turned to Instagram, not only to preserve and catalogue my photographic evidence, but also to find more. Were these sites worthy of selfies and faux nostalgic filters? What hashtags blessed the posts? By exploring others’ photos, I hoped to learn more about how people engage with place, and how the sites in their photos exist in contemporary cultural memory. After all, though critics have focused on what Instagram and the selfie culture it enables say about our relationship to our social world and ourselves, Instagram reveals just as much about our relationships with place. In every selfie, there’s a background, a beautiful bathroom or sunset immortalized.

And so, I tagged my photos with the location at which they were taken, and looked through publicly posted photos under the same name. Quickly, though, I ran into a problem. As I mentioned, the African Burial Ground National Monument shares the same coordinates as the Ted Weiss Federal Building. Instagram, however, must recognize them as distinct locations. On its website, Instagram defines location as the “location name” added by the uploader. Simple enough. In the next paragraph, though, “place” replaces “location.” So, in the world of Instagram, “location” is “location name” is “place” is “place name”; it is also never plural. What’s more, these definitions form a curatorial practice. On Instagram, photos are organized by location name, and the newest update offers a “search places” option. This means the reality of social place (that of competing claims and layered meanings, physical or otherwise) cannot be found. Each place is neatly and securely tied to its single tag. This is no doubt an issue of convenience and loyalty to the wisdom “you can’t be in two places at once,” but the result is undeniable: on Instagram, the Ted Weiss Building has never heard of the Burial Ground.




What Instagram does offer is the possibility to “create a place,” a feature that most honestly reflects our everyday experience of place. Technically speaking, creating a place means attaching a manually entered location name to the coordinates where the photo was taken. Conceptually, though, this action has greater significance. Defining place, or place-making, anthropologist Keith Basso tells us, is “a common response to common curiosities… As roundly ubiquitous as it is seemingly unremarkable, place-making is a universal tool of the historical imagination.” Instagram does act as a tool for place-making, then, but its singular definitions prevent it from acting as a site from which to honestly make place. This is because it severely, unnecessarily limits the possibilities of the historical imagination. By equating “place” to “location name” and organizing photographs according to that name, Instagram creates a world in which places and locations are only as old as their names.

There are two more apparent obstacles in the way of Instagram’s historical imagination: oversaturation and amateurism. In the same essay, Keith Basso offers a compelling account of how historical imagination works to make place:

When accorded attention at all, places are perceived in terms of their outward aspects—as being, on their manifest surfaces, the familiar places they are—and unless something happens to dislodge these perceptions they are left, as it were, to their own enduring devices. But then something does happen. Perhaps one spots a freshly fallen tree, or a bit of flaking paint, or a house where none has stood before—any disturbance, large or small, that inscribes the passage of time—and a place presents itself as bearing on prior events. And at that precise moment, when ordinary perceptions begin to loosen their hold, a border has been crossed and the country starts to change. Awareness has shifted its footing, and the character of the place, now transfigured by thoughts of an earlier day, swiftly takes on a new and foreign look.

In photographic terms, this disturbance sounds exactly like Roland Barthes’ punctum, a term he coined in Camera Lucida: Reflections on Photography to describe a significant and hidden detail in emotionally moving photographs. There he describes the punctum as “that accident (in the photograph) which pricks me (but also bruises me, is poignant to me).” Like Basso’s fallen tree, it “rises from the scene, shoots out of it like an arrow, and pierces.” It also shifts his awareness, transfiguring reality across time and space. Noticing the dirt road in a photograph by Kertesz, Barthes “recognize(s), with my whole body, the straggling villages I passed through on my long-ago travels in Hungary and Rumania.” But are punctums possible on Instagram?

Of course, our contemporary relationship with photographs on Instagram is quite different from Barthes’.  In a post called “From Memory Scarcity to Memory Abundance,” Michael Sacasas asks, “What if Barthes had not a few, but hundreds or even thousands of images of his mother?” The answer, naturally, is a dramatic decrease in meaning. Sacasas writes, “Gigabytes and terabytes of digital memories will not make us care more about those memories, they will make us care less.” Surely, it seems unreasonable to expect Instagram users to view its photos with the same attention Barthes paid the photographs in Camera Lucida. And yet, all is not lost. Perhaps, Barthes’ punctum and Basso’s disturbance are merely displaced from an individual photograph to the application itself.

Imagine uploading a picture of your biology class at NYU and finding that hundreds have taken photos of the same room, but from the street, where garment workers jumped to their deaths when your lab room, formerly the Triangle Shirtwaist Factory, was on fire. Even if you haven’t visited the site, looking at photographs from both perspectives might inspire the same prick the dirt road caused in Barthes. Describing that prick, Barthes writes parenthetically, “Here, the photograph really transcends itself: is this not the sole proof of its art? To annihilate itself as medium, to be no longer a sign but the thing itself?” An Instagram grid showing photos of the Ted Weiss Federal Building and the African Burial Ground National Monument simultaneously could, for some, be the thing itself. “The thing”, in this case, is not the texture of the ground or a traveller’s weariness, but the unexpected terror and sadness of a city showing its true foundations.

This combined grid’s disturbance is not only destructive, destabilizing conceptions of place, but creative. Indeed, according to Basso, “Building and sharing place-worlds… is not only a means of reviving the former times but also of revising them, a means of exploring not merely how things might have been but also how, just possibly, they might have been different from what others have supposed.” This function is particularly important today, as communities worldwide question the names they call their spaces. In the past few months, UNC Chapel Hill changed the name of Saunders Hall to Carolina Hall (Saunders was apparently a KKK leader); Aldermen in St. Louis are trying to change the name of Confederate Drive, which cuts through Forest Park, a 1300-acre public park; and the student government at Rhodes University in South Africa voted in March to rename the university. How will Instagram incorporate contentious and changing place names? Today, there isn’t yet a geotag for Carolina Hall, but when there is, its photos will be kept separate from those taken at Saunders Hall. So, like the story of 290 Broadway, the UNC name change will be preserved and hidden. Students at UNC will create a new place, and before long it will have more photographs than the old. The archive of photos taken at Saunders Hall will continue to exist, static, and visible only to those who remember its previous name and implied owner.


Instagram makes room for these revisions, but not for places to present themselves “as bearing on prior events.” Allowing places to converse with their pasts—by allowing multiple geotags or associating geotags with time period(s) to display layers of names and meanings—would transform Instagram from a platform to share into a platform to converse and learn. Here, places would exist free of Instagram’s current architectural constraints, as palimpsests waiting to “inscribe the passage of time” and activate the historical imagination.
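To make the alternative concrete: a geotag need not be a single name bound to a coordinate; it could be a record that accumulates names over time. A minimal sketch of such a data model (the class names, dates, and coordinates here are illustrative assumptions, not Instagram's actual schema):

```python
# Hypothetical sketch of a place record that layers names over time.
# This is NOT Instagram's data model; names, dates, and coordinates
# below are illustrative.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaceName:
    name: str
    start_year: int                  # when the name came into use
    end_year: Optional[int] = None   # None = still the official name

@dataclass
class Place:
    lat: float
    lon: float
    names: List[PlaceName] = field(default_factory=list)

    def names_as_of(self, year: int) -> List[str]:
        """Every name this place has carried up to the given year.
        Lapsed names are kept: a layer remains queryable even after
        it loses official status."""
        return [n.name for n in self.names if n.start_year <= year]

# One coordinate near 290 Broadway, two layered place names.
site = Place(40.7146, -74.0051, [
    PlaceName("African Burial Ground", start_year=1697, end_year=1794),
    PlaceName("Ted Weiss Federal Building", start_year=1994),
])

print(site.names_as_of(2015))  # both layers surface
print(site.names_as_of(1800))  # only the older layer existed then
```

Under a model like this, a photo tagged at 290 Broadway could surface both layers at once, the combined grid described above.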



Jonathan Karp (@statusfro) is a recent graduate from Washington University in St. Louis, where he studied English literature and music.

Headline Pic via: Source

Autonomous Intelligence

The International Joint Conference on Artificial Intelligence (IJCAI-15) met last week in Buenos Aires. AI has long captured the public imagination, and researchers are making fast advances. Conferences like IJCAI-15 have high stakes, as “smart” machines become increasingly integrated into personal and public life. Because of these high stakes, it is important that we remain precise in our discourse and thought surrounding these technologies. In an effort towards precision, I offer a simple, corrective point: intelligence is never artificial.

Machine intelligence, like human intelligence, is quite real. That machine intelligence processes through hardware and code has no bearing on its authenticity. The machine does not pretend to learn, it really learns, it actually gets smarter, and it does so through interactions with the environment. Ya know, just like people.
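The claim that a machine “really learns” can be made concrete with a toy example. The perceptron below (a hypothetical illustration, not a system presented at the conference) begins with zeroed weights, gets answers wrong, and corrects itself through repeated interaction with its training data until it reproduces the logical AND function:

```python
# Illustrative toy perceptron: the weights genuinely change in response
# to experience, which is the (non-artificial) learning at issue.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a linear threshold unit from labeled examples."""
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction
            # Interaction with the environment: every mistake nudges
            # the weights toward the answer that would have been correct.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The logical AND function, learned from examples rather than hand-coded.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The point of the sketch is the update rule: nothing here is faked or pre-programmed; the competence emerges from feedback.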

In the case of AI, artificiality implicitly refers to that which is inorganic; an intelligence born of human design rather than within the human body. Machine intelligence is built, crafted, curated by human hands. But is human intelligence not? Like Ex Machina’s Ava and HUMANS’ Anita/Mia, we fleshy cyborgs would be lost, perhaps dead, certainly unintelligent, without the keen guidance of culture and community. And yet, human intelligence remains the unmarked category while machine intelligence is qualified as “fake.”

To distinguish human from machine intelligence through the criteria of artificiality is to misunderstand the very basis of the human mind.

Humans have poor instincts. Far from natural, even our earliest search for food (i.e., the breast) is often fraught with hardship, confusion, and many times, failure. Let alone learning to love, empathize, write, and calculate numbers. As anthropologists and neuroscientists have shown and continue to show, the mind is a process, and this process requires training. Like machines, humans have to learn how to learn. Like machines, the human brain is manufactured. Like machines, the human brain learns with the cultural logic within which it develops.

Variations in intelligence do not, therefore, hinge on artificiality. Instead, they hinge on vessel and autonomy.

The two ideal-type vessels are human and machine; one soft, wet, and vulnerable to disease, the other materially variable and vulnerable to malware. Think Jennings and Rutter vs. Watson on Jeopardy. In practice, these vessels can—and do—overlap. Ritalin affects human learning just as software upgrades affect machine learning; humans can plant chips in their bodies, learning with both organic matter and silicon; robots can feel soft, wet, and look convincingly fleshy. Both humans and machines can originate in labs. However, without going entirely down the philosophical and scientific rabbit hole of the human/machine Venn diagram, humans and machines certainly maintain distinct material makeups, existential meanings, and ontologies. One way to differentiate between human and machine intelligence is therefore quite simple: human intelligence resides in humans while machine intelligence resides in machines. Sometimes, humans and machines combine.

Both human and machine intelligence vary in their autonomy. While some humans are "free thinkers," others remain "closed-minded." Some socio-historical conditions afford broad lines of thought and action, while others relegate human potential to narrow and linear tracks. So too, when talking about machine intelligence and its advances, what people are really referring to is the extent to which the machine can think, learn, and do on its own. For example, IJCAI-15 opened with a now-viral letter from the Future of Life Institute in which AI and robotics researchers warn against the weaponization/militarization of artificial intelligence. In it, the authors explicitly differentiate between autonomous weapons and drones. The latter are under ultimate human control, while the former are not. Instead, an autonomous weapon acts of its own accord. Machine intelligence is therefore not more or less artificial, but more or less autonomous.

I should point out that ultimate autonomy is not possible for either humans or machines. Machines are coded with a logic that limits the horizon of possibilities, just as humans are socialized with implicit and explicit logical boundaries[1]. Autonomy therefore operates upon a continuum, with only machines able to achieve absolute zero autonomy, and neither human nor machine able to achieve total autonomy. In short, an autonomous machine approximates human levels of free thought and action, but neither human nor machine operates with boundless cognitive freedom.

Summarily: ‘Artificial Intelligence’ is not artificial. It is machine-based and autonomous.


Follow Jenny Davis on Twitter @Jenny_L_Davis


Pic via: Source

[1] I want to thank James Chouinard for this point.


The concept of affordances, which broadly refers to those functions an object or environment makes possible, maintains a sordid history in which overuse, misuse, and varied uses have led some to argue that researchers should abandon the term altogether. And yet, the concept persists as a central analytic tool within design, science and technology studies, media studies, and even popular parlance. This is for good reason. Affordances give us language to address the push and pull between technological objects and human users as simultaneously agentic and influential.

Previously on Cyborgology, I tried to save the term, and think about how to theorize it with greater precision. In recent weeks, I have immersed myself in the affordances literature in an attempt to develop the theoretical model in a tighter, expanded, and more formalized way. Today, I want to share a bit of this work: a timeline of affordances. This includes the influential works that theorize affordances as a concept and analytic tool, rather than the (quite hearty) body of work that employs affordances as part of their analyses.

The concept has an interesting history that expands across fields and continues to provoke debate and dialogue. Please feel free to fill in any gaps in the comments or on Twitter.

1966: J.J. Gibson, an ecological psychologist, first coins the term affordances in his book The Senses Considered as Perceptual Systems. He defines it as follows: “what things furnish, for good or ill.” He says little else about affordances in this work.

1979: Gibson gives fuller treatment to affordances in his book The Ecological Approach to Visual Perception. His interest in this work is understanding variations in flight skills among military pilots in WWII. He finds that depth perception is not an adequate predictor. As such, he puts forth an ecological approach in which animals (in this case pilots) and the environment (the airplane and airfield) together create the meaningful unit of analysis. In doing so, he places emphasis on the medium itself as an efficacious actor. He expands the definition:

The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill… These affordances have to be measured relative to the animal (emphasis in original)

Gibson further makes clear that affordances exist regardless of their use. That is, environments and artifacts have inherent properties that may or may not be usable for a given organism.

1984: William H. Warren quantifies affordances, using stair climbability as the exemplar case. Demonstrating the relationship between an organism and what the environment affords for that organism, Warren examines the optimal and critical points at which stairs afford climbing, based upon a riser-to-leg-length ratio. He shows that stair climbing requires the least amount of energy (optimally affords climbing) when the ratio is .26, and that stairs cease to be climbable beyond a .88 ratio. Respondents accurately perceive these ratios in evaluating their ability to climb a given set of stairs.
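Warren's two thresholds are easy to operationalize. Here is a purely illustrative sketch: the function name and the 0.05 tolerance band around the optimum are my own inventions for demonstration; only the .26 and .88 ratios come from the findings described above.

```python
# Illustrative sketch of Warren's (1984) stair-climbability thresholds.
# The riser-to-leg-length ratios (.26 optimal, .88 critical) come from the
# findings summarized above; the tolerance band is an invented convenience.

OPTIMAL_RATIO = 0.26   # riser height / leg length at minimum energy cost
CRITICAL_RATIO = 0.88  # beyond this ratio, stairs cease to afford climbing

def climbability(riser_height: float, leg_length: float) -> str:
    """Classify a stair riser relative to a particular climber's leg length."""
    ratio = riser_height / leg_length
    if ratio > CRITICAL_RATIO:
        return "does not afford climbing"
    if abs(ratio - OPTIMAL_RATIO) < 0.05:
        return "optimally affords climbing"
    return "affords climbing"

# A 7-inch riser for a climber with a 31-inch leg gives a ratio near .23:
# the same staircase affords different things to differently sized bodies.
print(climbability(7, 31))
```

Note that the affordance is a property of the organism-environment pair, not of the stairs alone: the same riser height yields a different classification for a different leg length, which is precisely Warren's point.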

1988: Donald A. Norman writes The Psychology of Everyday Things (POET) (now published as The Design of Everyday Things). Norman’s book brings the concept of affordances to the Human-Computer Interaction (HCI)/design community. Norman takes issue with Gibson’s assumption that affordances can exist as inherent properties of objects or environments. Rather, Norman argues that affordances are also products of human perception. That is, the environment affords what the organism perceives it to afford. He states:

The term affordance refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used. A chair affords (‘is for’) support and, therefore, affords sitting. A chair can also be carried.

Since Norman’s original formulation, researchers have either adhered to it, adhered to Gibson’s, tried to synthesize the two, or used the term without specifying which definition they pull from or how they try to reconcile inconsistencies. This is exemplified in an HCI literature review by McGrenere and Ho, in which roughly a third of articles employ Gibson, another third employ Norman, and the remaining third never define the term, or do so without clear reference to these two key figures.

1999: Norman writes an article distinguishing between real and perceived affordances. He laments not making this distinction in POET and argues that with this distinction in mind, perception should be the variable of interest for designers.

2003: Keith Jones organizes a symposium and special issue of Ecological Psychology in which scholars debate the continued relevance of affordances as a concept. This issue draws almost exclusively on Gibson and, without much reference to Norman, scholars retain Gibson’s relational/ecological approach, but diverge from the assumption that an object contains inherent properties. That is, affordances must be complemented by the potentialities of the organism.

2005: Martin Oliver tells us to give up on affordances as a concept. In an influential article, Oliver argues that the term has been employed under such a variety of definitions that it is now meaningless. He further analyzes and critiques Gibson (1979) and Norman (1999) in their use of the term. Gibson’s perspective is positivistic, with assumptions about essential properties of an object, while Norman is constructivist, but discounts any inherent properties. Oliver argues that we should take a literary analysis approach to technology, working to understand both the writing (design) and reading (use) of the technology.

Despite Oliver’s plea, the term persists.

2012: Tarleton Gillespie organizes an invited dialogue about affordances in the Journal of Broadcasting & Electronic Media. Here, Gina Neff, Tim Jordan, and Joshua McVeigh-Shultz grapple with the role of human and technological agency. They tenuously maintain affordances as useful, but call for a more nuanced theorizing of how affordances apply within technology and communication studies, and how this differs across users.

I, of course, think the concept is useful. But more importantly, people are using it. I therefore agree with Neff and colleagues. Let’s keep theorizing with affordances, but let’s make sure that we do it well.


Jenny Davis is on Twitter (which affords 140-character texts, images, short videos, and link sharing) @Jenny_L_Davis


The case of sociologist Zandria Robinson, formerly of the University of Memphis and now teaching at Rhodes College, has a lot to say about the affordances of Twitter. But more than this, it says a lot about the intersection of communication technologies and relations of power.

Following the Charleston shooting, Robinson tapped out several 140-character snippets rooted in critical race theory. Critical race theory recognizes racism as interwoven into the very fabric of social life, locates racism within culture and institutions, and insists upon naming the oppressor (white people).



Quickly, conservative bloggers and journalists reposted the content [i] with biting commentary on the partisan problem of higher education. This came in the wake of criticism for earlier social media posts in which Robinson discredited white students’ racist assumptions about black students’ (supposedly easier) acceptance into graduate school. On Tuesday, the University of Memphis, Robinson’s (now former) employer, tweeted that she was no longer affiliated with the University[ii].

Robinson’s case, and those like it, highlight the unique position held by Twitter as both an important platform for political speech and a precarious platform through which speakers can find themselves censured. Twitter grants users voice, but only a little, only sips at a time. These sips, so easy to produce, are highly susceptible to de-contextualization and misinterpretation, and yet, they remain largely public, easily sharable, and harbor serious consequences for those who author them. While this may have embarrassing outcomes for some public figures, for those speaking from the margins, the affordances of Twitter can produce devastating results.

Twitter is a tough place to navigate. It provides a big audience, lets users make concise points, and promotes sharing, but also, it provides a big audience, lets users make concise points, and promotes sharing. These features facilitate content distribution to an extent never before possible, and arguably unmatched on any other medium. However, these same features make it hard to convey complex arguments, easy to misinterpret, and likely that content lands in unexpected places.

Twitter’s abbreviated message structure, by limiting text to 140 characters, places a lot of the communicative onus upon readers. Readers get the one-liner, and have to know enough to interpret the content in context. The margins are, by definition, unfamiliar to the mainstream. Messages from the margins are therefore particularly susceptible to misinterpretation, while marginal voices are particularly vulnerable to formal and informal punishment.

As a sociologist who has read critical race theory and learned from critical race theorists, I read Robinson’s tweets as impassioned statements of well-established and well-founded lines of thought. For the uninitiated, however, they were jarring. The average nice white ladies of the world don’t understand that “whiteness is most certainly and inevitably terror” refers to a history of white-on-black interpersonal and institutional violence, degrading media portrayals, over-policing and under-protection of black communities, hypersexualization of black women, and fear mongering aimed at black men. And of course they don’t; that’s one of the key points of critical race theory: cultural logics render power-hierarchies invisible while perpetuating race-based opportunity structures that privilege whites. While my scholarly training let me fill in the substance behind Robinson’s tweets, this was not the case for all readers. Ultimately, Robinson has a new job.

Robinson’s message came from the margins. Readers were unable to do the work of interpretation and like so many marginal voices, Robinson’s required an account, an explanation, a (likely exhausting) conversation, in order for it to penetrate those who do not already understand. This is an unjust and unfair reality. People who experience oppression are burdened with the labor of teaching those who oppress. This labor, these conversations, they did not happen on Twitter. They could not have happened on Twitter. In short, even as Twitter gives voice, its affordances disproportionately distribute the efficacy and consequences of speech.

Robinson is not a radical, nor were her words unfounded. They were read, however, by eyes untrained, through a medium ill prepared to teach people what they’ve worked so hard to never learn.


Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic: Source


[i]No link. I don’t want to drive traffic their way. You’ll have to Google.

[ii] Robinson apparently says she was not fired, but neither she nor the University have released further details. Regardless, she has a new job and I’m pretty certain that the timing is not coincidental.

*Editor’s Note: Robinson has since posted a response in which she explains her decision to leave the University of Memphis*



Today seems like a good day to talk about political participation and how it can effect actual change.

Habermas’ public sphere has long been the model of ideal democracy, and the benchmark against which researchers evaluate past and current political participation. The public sphere refers to a space of open debate, through which all members of the community can express their opinions and access the opinions of others. It is in such a space that reasoned political discourse develops, and an informed citizenry prepares to enact their civic rights and duties (i.e., voting, petitioning, protesting, etc.). A successful public sphere relies upon a diversity of voices through which citizens not only express themselves, but also expose themselves to the full range of thought.

Internet researchers have long occupied themselves trying to understand how new technologies affect political processes. One key question is how the shift from broadcast media to peer-based media brings society closer to, or farther from, a public sphere. Increasing the diversity of voices indicates a closer approximation, while narrowing the range of voices indicates democratic demise.

By this metric, the research doesn’t look good for democracy. In general, people seek out opinion-confirming information. That is, we actively consume content that strengthens—rather than challenges—our views. For those of us who hide Facebook Friends, mute people/hashtags on Twitter, and read news from a select few sources, this may not come as a surprise.

This confirmation bias is algorithmically strengthened through what Eli Pariser calls a filter bubble. Many of the platforms and websites that feed us news and information are financially driven companies. These companies make money by selling space to advertisers, who pay according to the number of users and the depth of their engagement. That is, advertisers purchase eyeballs, so it behooves Internet companies to maximize eyeballs on their site, for as long as possible. This results in users receiving information that is already appealing to them. Pandora, for example, plays music similar to what you already listen to, while Google produces results that line up with the kinds of links you tend to click. In this way, our views and preferences are largely given back to us, creating a bubble that protects against, rather than invites, disagreement and debate.
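To make the filter-bubble logic concrete, here is a hypothetical sketch of such a recommender. The tag-based similarity scoring and all the names are invented for illustration; this bears no relation to Pandora's or Google's actual systems.

```python
# Hypothetical sketch of filter-bubble logic: rank candidate items by
# similarity to what the user has already consumed, so the feed returns
# opinion-confirming content. Invented for illustration only.

def similarity(item_tags: set, history_tags: set) -> float:
    """Jaccard similarity between an item's tags and the user's history."""
    if not item_tags or not history_tags:
        return 0.0
    return len(item_tags & history_tags) / len(item_tags | history_tags)

def personalized_feed(candidates, history_tags, top_n=2):
    """Return the top_n items most similar to the user's past consumption."""
    ranked = sorted(candidates,
                    key=lambda item: similarity(item["tags"], history_tags),
                    reverse=True)
    return [item["title"] for item in ranked[:top_n]]

history = {"indie", "folk"}
candidates = [
    {"title": "Indie Folk Mix", "tags": {"indie", "folk"}},
    {"title": "Folk Revival", "tags": {"folk", "acoustic"}},
    {"title": "Metal Hour", "tags": {"metal"}},
]
# Dissimilar content ("Metal Hour") never surfaces: that is the bubble.
print(personalized_feed(candidates, history))
```

The point of the sketch is that nothing malicious is required: a system that simply maximizes engagement by serving the closest matches will, as a side effect, never show the user anything that challenges them.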

Information in the digital age is plentiful, and the work of engaged citizens entails sorting through it to find what is relevant, meaningful, and useful. It seems that both individual practices of filtering and algorithmic filters work against a version of democracy in which political action stems from reasoned consideration of key issues from all possible sides. The Internet has not given us a public sphere.

But what if the public sphere is not the democratic ideal? What if, instead, the driving force of political participation is community and commiseration? This alternative democratic vision is the driving logic behind Brigade, a series of web and mobile tools that promise to help users become active political citizens.

CEO Matt Mahan explains that these tools allow people to “declare their beliefs and values, get organized with like-minded people, and take action together, directly influencing the policies and the representatives who have an impact on the issues they care about.” This is a model that embraces—rather than fights against—confirmation bias and algorithmic filter bubbles. And it does so in the service of political action.

Currently in a beta version, Brigade is still invite-only. After I requested and received an invite, Brigade prompted me with a series of political questions and encouraged me to answer more. After submitting my responses I could see how I compared with the general populace, and with those in my existing social media networks. I could also connect with others who share similar views, and learn about opportunities to get involved. It basically asks what I think, and then shows me my people. This is powerful. When a person states an opinion, as Brigade prompts users to do, it reflects a belief, but also actively forms it. We are what we do, and stating that we believe something makes us believe it a little more firmly. Having established this belief, the user connects with others who agree, with literature that supports the position, and with events in which to participate.

In a strange way, Brigade itself embodies the will of the people. We filter, we affirm, we look for like-minded others. Brigade, as a political tool, helps us do it better. It is unclear if this will translate into votes and policies, but regardless, Brigade’s mere existence challenges us to reconsider the metrics of an ideal democracy. Perhaps, the public sphere will be dethroned.


Jenny Davis is on Twitter @Jenny_L_Davis


A new duo of apps purports to curb sexual assault on college campuses. WE-CONSENT and WHAT-ABOUT-NO work together to address consensual sexual engagement (“yes means yes”) and unwanted sexual advances, respectively.

The CONSENT app works by recording consent proceedings, encrypting the video, and saving it on secure servers. The video is only accessible to law enforcement through a court order or as part of a university disciplinary hearing. The NO app gives a textual “NO” and shows an image of a stern looking police officer. If the user pays $5/year for the premium version, the app records and stores the recipient of the “no” message, documenting nonconsent. The apps are intended to facilitate mutually respectful sexual engagement, prevent unwanted sexual contact, and circumvent questions about false accusations. See below for quick tutorials provided by the developers.



The app suite is timely and its goals are laudable. These apps reflect a particular historical moment that intersects rape culture, emerging consent awareness, and norms of documentation coupled with widespread access to documentary devices (i.e., mobile phones with cameras and Internet capabilities). They address the issue of sexual assault on college campuses, which is a problem. A big one.

Although the problem of sexual assault has been around for a while, it has taken hold of public attention over the last year, prompting news stories, task forces, protests, controversies, and I’m sure, lots of dinner table fights. Good. But like any festering wound that starts receiving treatment, things get messy before they get better. (Non)consent is not always clear, accounts can be imperfect (or dishonest), and, as the infamous Rolling Stone/UVA case revealed, a simple “victim’s word as Truth” approach doesn’t always suffice. The consent/nonconsent apps are here to clear things up. The CONSENT app protects the accused by providing evidence that consent did occur; the NO app supports accusers by providing evidence that sexual contact was unwanted.

To the developers’ credit, the apps start an important conversation and use readily available technologies to implement consent as part of the sexual encounter. I actually think the “No” police officer image offers a funny way to tell someone that you aren’t interested in fulfilling their request (sexual or otherwise), circumventing the uncomfortable task of rejection. But like most technological objects, made by people immersed in an existing cultural logic, these apps do more to reify troubling patterns than subvert them.

First, they reinforce the focus on campus sexual assault, despite statistics that put 18- to 24-year-old women who do not attend college at greater risk. Of course, campus sexual assaults are a serious issue. But ALL sexual assaults are a serious issue. Nothing about the design of the apps makes them exclusive to college students, yet reflecting a tradition of concern-disparities along class and race lines, the apps’ discourse centers on those who attend institutions of higher education to the exclusion of those who do not.

Second, the apps demand recorded proof. Candace Lanius astutely points to the racism entailed in requiring people of color to quantitatively demonstrate their experiences of police mistreatment. So too, those who experience sexual assault are now asked to document their case—in real time. Record your “No” (for $5/year) or it didn’t happen. For those on the bottom end of a status disparity, personal accounts are not enough.

This is further reflected as the apps place the burden of proof disproportionately upon the person who experienced assault. One important difference between the CONSENT and NO apps is that the former records consent for free, but the latter charges to record nonconsent. In fact, the CONSENT app self-destructs if users say the word “no” repeatedly (the website does not indicate what number constitutes “repeated”). This means that the CONSENT app only records consent. Recording a “NO” comes, literally, at a higher cost. Keep in mind, CONSENT serves the accused, NO serves the assaulted.

Finally, the apps reify consent as temporally prior to, and separate from, the sexual encounter rather than part of an ongoing dialogue within the sexual encounter[1]. People change their minds. People come up with fun new ideas. Both of these are opportunities for further conversation. Consent is continuous, but the apps artificially bound it. This artificial binding is all the more significant, given the demand for documented proof. If people record consent, and then one party changes their mind or isn’t into a spontaneous suggestion, the record still shows consent. The person experiencing assault is therefore left with worse than an uncorroborated experiential account; they also have to contend with a document that discounts their story. And again, documents weigh more than words.

Solutions to social problems can never just be technological. To presume that they could is to guarantee social problems will persist.

Follow Jenny Davis on Twitter @Jenny_L_Davis


Headline Pic via Source

[1] Through communications with the development team I’ve learned that there is another app on the way that allows people who experience assault to record their story and then release it to authorities later if they so choose.


Atrocities in Eritrea atop my Twitter feed. A few tweets below that, police violence against an innocent African American girl at a pool party. Below that, the story of a teen unfairly held at Rikers Island for three years, who eventually killed himself. Below that, news about the seemingly unending bombing campaign in Yemen. Below that, several tweets about the Iraq war and climate change—two longtime staples of my timeline. It reminds me of the writer Teju Cole exclaiming on Twitter last summer that “we’re not evolving emotional filters fast enough to deal with the efficiency with which bad news now reaches us….”

This torrent of news about war, injustice, and suffering is something many of us experience online today, be it on Facebook, Twitter, in our inboxes, or elsewhere. But I wonder about the ‘evolutionary’ framing of this problem—do we really need to develop some new kinds of emotional or social or technical filters for the bad news that engulfs us? Has it gotten that bad?

As it turns out, it has always already gotten that bad. Media critics like Neil Postman have been making arguments about the deleterious effects of having too much mass-mediated information for decades now. Although his classic Amusing Ourselves to Death (1985) was written primarily as a critique of television, contemporary critics often apply Postman’s theories to digital media. One recent piece in Salon labelled Postman “the man who predicted the Internet” and suggested that today “the people asking the important questions about where American society is going are taking a page from him.” Indeed, Postman identified the central problem of all post-typographic media, beginning with the telegraph, as one of information overload. According to Postman, “the situation created by telegraphy, and then exacerbated by later technologies, made the relationship between information and action both abstract and remote. For the first time in human history, people were faced with the problem of information glut.” For Postman, telegraphy’s alteration of the “information-action ratio” associated with older communication technologies created a “diminished social and political potency.” In oral and print cultures, “information derives its importance from the possibilities of action” but in the era of electronic media we live in a dystopian “peek-a-boo world” where information appears from across the globe without context or connection to our daily lives. It “gives us something to talk about but cannot lead to any meaningful action.”

Put in these terms, one can understand the appeal of Postman’s ideas for the digital era, in which a feeling of being overloaded with information surely persists. Even before the Internet, “all we had to do was go to the library to feel overwhelmed by more than we could possibly absorb.” But as Mark Andrejevic reminds us in his book, Infoglut, “Now this excess confronts us at every turn: in the devices we use to work, to communicate with one another, to entertain ourselves.” And as Cole’s tweets make plain, the tension caused by too much information can be particularly acute when it comes in the form of bad news. Is it safe to say, then, that the Internet has further ruptured the information-action ratio in the ways suggested by Postman?

I want to argue against such a view. For one thing, Postman’s information-action ratio appears to privilege media that provide less information, and information with easily actionable ramifications. As a criticism, this doesn’t mesh with the sensibilities of either media producers or consumers, who have sought out more information from more people and places since at least the advent of typography. Such an ideal also would seem to privilege simple news stories over complex ones, since the action one can take in response to a simple story is much clearer than a complex one. Indeed, Postman mockingly asked his readers “what steps do you plan to take to reduce the conflict in the Middle East? Or the rates of inflation, crime and unemployment? What are your plans for preserving the environment or reducing the risk of nuclear war?” But do the scale and complexity of these issues mean one should not want to know about them? Of course not. And arguing against the public consumption of such complex, thought-provoking stories seems wildly inconsistent for a book that later bemoaned the fact that television news shows were merely “for entertainment, not education, reflection or catharsis.”

But perhaps there is simply a threshold quantity of information beyond which human consciousness can’t keep up. This concern has animated much contemporary criticism of the Internet’s epistemological effects, as in Nicholas Carr’s 2008 essay “Is Google Making Us Stupid?” The piece began with Carr worrying about his own reading habits: “my concentration often starts to drift after two or three pages….” Carr quickly blamed the Internet for his newfound distraction. “What the Net seems to be doing is chipping away my capacity for concentration and contemplation….”

Amazingly though, Carr’s capacity for deep reading had somehow managed to persist throughout the age of television, in contrast to Postman’s predictions about that medium’s deleterious effects. So while media critics of every age tend to make these sorts of technologically determinist criticisms, the question really ought to be reframed as one concerning social norms. Critics like Carr may talk of the brain’s “plasticity,” such that it can be rewired based on repeated exposure to hyperlinks and algorithms, but they, like Postman, don’t address why that rewiring wouldn’t necessarily entail the synthesis of old and new epistemologies, rather than the destruction of one by the other. How else to explain the fact that Carr’s own mental capacities flourished in a televisual age that was once similarly bemoaned by its critics? What we’re left with, then, is a way of reading technological panics like his and Postman’s as evidence of the shifting norms concerning communication technologies.

Shifting to normative, rather than technologically determinist, understandings of information overload recasts the problem of bad news in sociological and historical terms. Luc Boltanski’s Distant Suffering is a social history of the moral problem posed by the mass media’s representation of the suffering of distant others. When one knows that others are suffering nearby, one’s moral obligation is clearly to help them. But as Boltanski explains, when one contemplates the suffering of others from afar, moral obligations become harder to discern. In this scenario, the “least unacceptable” option is to at least make one’s voice heard. As Boltanski put it:

It is by speaking up that the spectator can maintain his integrity when, brought face to face with suffering, he is called upon to act in a situation in which direct action is difficult or impossible. Now even if this speech is initially no more than an internal whisper to himself… none the less in principle it contains a requirement of publicity.

There are echoes here of the information-action ratio concept, but the problem posed by this information is not the amount but its specific content. Good news and lighthearted entertainment don’t really pose a moral or ethical problem for spectators, nor do fictional depictions of suffering. But information about real human suffering does pose the problem of action as a moral one for the spectator.

This moral dilemma certainly didn’t originate with the telegraph, much less the television or the Internet. Rather, knowledge of distant others’ suffering came to be seen as morally problematic with the growth of newspapers and the press. But the Internet does, I think, tend to shake up these norms, partly because it changes the nature of public speech. In Postman’s terms, the Internet alters the action side of the information-action ratio. Call it slacktivism or clicktivism or simply chalk it up to the affordances of communication in a networked world, but Boltanski’s “internal whisper to himself” is no longer internal for many of us. At the very least, when confronted with bad news, we can pass on the spectacle by tweeting, blogging, pinning, or posting it in ways that are immediately quite public and also immediately tailored to further sharing. Each time I read a tweet I am confronted with the question of whether I should retweet or reply to it. This becomes a miniature ethical and aesthetic referendum on every tweet about suffering and misfortune—Is the issue serious enough? Will my Twitter followers care? Do I trust the source of this info? Is there another angle that the author of the tweet has not considered?—although I do this mental work quite quickly and almost unthinkingly at times. The same is true for my email inbox, flooded with entreaties for donations to worthy causes or requests to add my name to a petition against some terrible injustice. Of course, humanitarianism and political activism thrived before the Internet, so the issue here is not that the Internet has suddenly overloaded us with information about bad news, but that it has increased the number of direct actions we might take as a result. Each one of these actions is easy to do, but they add up to new and slightly different expectations.

This culminates in what I’m calling infoguilt. This term has been used sparingly in popular parlance, and its only scholarly use is in a 1998 book called Avatars of the Word: From Papyrus to Cyberspace. Author James J. O’Donnell suggested that “what is perceived as infoglut is actually infoguilt—the sense that I should be seeking more.” In O’Donnell’s conception, guilt comes from not reading all that one could on a subject, or not seeking out all available information. This doesn’t seem to me as potent a force as the guilt that comes from the kinds of overwhelming exposure to bad news and distant suffering discussed here. Guilt ought to be reserved for situations in which one’s moral worth is called into question, and as Boltanski pointed out, the spectacle of distant suffering “may be… the only spectacle capable of posing a specifically moral dilemma to someone exposed to it.” A more relevant definition of infoguilt ought to refer, then, to the negative conception of self that comes from not responding to the moral or emotional demands of bad news.

This guilt certainly generates a kind of reflexivity about one’s position as a spectator—how can I prioritize my time, resources, and emotions?—and in this way it may well feel like we have too much information and not enough action. But I, for one, don’t think that feeling is a bad thing. After all, it hasn’t translated into a retreat from humanitarianism and charity—quite the opposite. Rates of charitable giving online have skyrocketed, and despite a serious dip in charitable donations after the 2008 financial crisis, American giving as a whole has risen continuously over the past four years, and is projected to continue rising in 2015 and 2016 as well. At the very least, it does not appear that the problem of infoguilt contributes to what has been deemed “compassion fatigue.” Instead, the guilt we feel is precisely a marker of a continued belief in the value of compassion and an internalization of shame when we fail to act with enough compassion for the many distant others who are now only a click away.

Still, I don’t want to dismiss infoguilt as merely a trivial first-world problem. It is, of course, a symptom of a deeply unjust world in which a surplus of pain and suffering confronts the most comfortable of us every day, across the globe. And I don’t have the answer for the appropriate ways we should respond to all of the misfortune that confronts us online. But I do think we need to fight back against the notion that this is a technological problem, because big tech companies are quite willing to solve the problem of infoguilt for us with algorithmic curation of the news we receive. As more and more of our news comes to us filtered through Facebook and Twitter, algorithms could reduce the emotional strain of bad news by limiting our exposure to it without us even knowing. This raises the question, as Mark Andrejevic put it, “what happens when we offload our ‘knowing’ onto the database (or algorithm) because the new forms of knowledge available are ‘too big’ for us to comprehend?” Facebook has already shown that it can subtly improve or depress our moods by shifting the content of our news feeds. And Zeynep Tufekci has written about the ways that Facebook’s algorithm inadvertently suppressed information about the protests in Ferguson, and the brutal police response to them, in the first hours of that nascent social movement. If we solve infoguilt with technological fixes like algorithmic filtering, it will likely be at tremendous cost to what’s left of our democratic public sphere.

As Andrejevic again explains:

The dystopian version of information glut anticipates a world in which control over… information… is concentrated in the hands of the few who use it to sort, manage, and manipulate. Those without access to the database are left with the “poor person’s” strategies for cutting through the clutter: gut instinct, affective response, and “thin-slicing” (making a snap decision based on a tiny fraction of the evidence).

This is, to a great extent, how we struggle with infoguilt today. We feel the pain of being unable to respond and the guilt of living in comfort and safety while others suffer, and we make snap judgments and gut decisions about what information to let through our emotional filters, and what actions we can spare amidst the ever-growing demands of work, family, and social life in an always-connected present. But given the available alternatives, let’s continue to struggle through our infoguilt, keep talking it out, and not cede these moral, ethical, and normative questions—over which we do have agency—to opaque technologies promising the comforts of a bygone, mythologized era. In the same way that activists are working to change the norms around trolling and actively creating safer spaces online for women, people of color, and other oppressed peoples, we can work to develop a moral language for understanding our online obligations to distant sufferers. If we don’t, then this language will be developed for us, in code, and in secret, in ways more dystopian than even Postman could envision.


The author would like to thank the students in his WRI 128/129 “Witnessing Disaster” seminar, who read and commented on an early draft of this essay.

Timothy Recuber is a sociologist who studies how American media and culture respond to crisis and distress. His work has been published in journals such as New Media and Society, The American Behavioral Scientist, Space and Culture, Contexts, and Research Ethics.
