The case of sociologist Zandria Robinson, formerly of the University of Memphis and now teaching at Rhodes College, has a lot to say about the affordances of Twitter. But more than this, it says a lot about the intersection of communication technologies and relations of power.

Following the Charleston shooting, Robinson tapped out several 140-character snippets rooted in critical race theory. Critical race theory recognizes racism as interwoven into the very fabric of social life, locates racism within culture and institutions, and insists upon naming the oppressor (white people).



Quickly, conservative bloggers and journalists reposted the content [i] with biting commentary on the partisan problem of higher education. This came in the wake of criticism for earlier social media posts in which Robinson discredited white students’ racist assumptions about black students’ (supposedly easier) acceptance into graduate school. On Tuesday, the University of Memphis, Robinson’s (now former) employer, tweeted that she was no longer affiliated with the University[ii].

Robinson’s case, and those like it, highlight the unique position held by Twitter as both an important platform for political speech and a precarious platform through which speakers can find themselves censured. Twitter grants users voice, but only a little, only sips at a time. These sips, so easy to produce, are highly susceptible to de-contextualization and misinterpretation, and yet, they remain largely public, easily sharable, and carry serious consequences for those who author them. While this may have embarrassing outcomes for some public figures, for those speaking from the margins, the affordances of Twitter can produce devastating results.

Twitter is a tough place to navigate. It provides a big audience, lets users make concise points, and promotes sharing, but also, it provides a big audience, lets users make concise points, and promotes sharing. These features facilitate content distribution to an extent never before possible, and arguably unmatched on any other medium. However, these same features make complex arguments hard to convey, messages easy to misinterpret, and content likely to land in unexpected places.

Twitter’s abbreviated message structure, by limiting text to 140 characters, places a lot of the communicative onus upon readers. Readers get the one-liner, and have to know enough to interpret the content in context. The margins are, by definition, unfamiliar to the mainstream. Messages from the margins are therefore particularly susceptible to misinterpretation, while marginal voices are particularly vulnerable to formal and informal punishment.

As a sociologist who has read critical race theory and learned from critical race theorists, I read Robinson’s tweets as impassioned statements of well-established and well-founded lines of thought. For the uninitiated, however, they were jarring. The average nice white ladies of the world don’t understand that “whiteness is most certainly and inevitably terror” refers to a history of white-on-black interpersonal and institutional violence, degrading media portrayals, over-policing and under-protection of black communities, hypersexualization of black women, and fear mongering aimed at black men. And of course they don’t, that’s one of the key points of critical race theory: cultural logics render power-hierarchies invisible while perpetuating race-based opportunity structures that privilege whites. While my scholarly training let me fill in the substance behind Robinson’s tweets, this was not the case for all readers. Ultimately, Robinson has a new job.

Robinson’s message came from the margins. Readers were unable to do the work of interpretation, and like so many marginal voices, Robinson’s required an account, an explanation, a (likely exhausting) conversation, in order to reach those who do not already understand. This is an unjust and unfair reality. People who experience oppression are burdened with the labor of teaching those who oppress. This labor, these conversations, they did not happen on Twitter. They could not have happened on Twitter. In short, even as Twitter gives voice, its affordances disproportionately distribute the efficacy and consequences of speech.

Robinson is not a radical, nor were her words unfounded. They were read, however, by eyes untrained, through a medium ill prepared to teach people what they’ve worked so hard to never learn.


Jenny Davis is on Twitter @Jenny_L_Davis



[i]No link. I don’t want to drive traffic their way. You’ll have to Google.

[ii] Robinson apparently says she was not fired, but neither she nor the University have released further details. Regardless, she has a new job and I’m pretty certain that the timing is not coincidental.

*Editor’s Note: Robinson has since posted a response in which she explains her decision to leave the University of Memphis.*



Today seems like a good day to talk about political participation and how it can effect actual change.

Habermas’ public sphere has long been the model of ideal democracy, and the benchmark against which researchers evaluate past and current political participation. The public sphere refers to a space of open debate, through which all members of the community can express their opinions and access the opinions of others. It is in such a space that reasoned political discourse develops, and an informed citizenry prepares to enact their civic rights and duties (i.e., voting, petitioning, protesting, etc.). A successful public sphere relies upon a diversity of voices through which citizens not only express themselves, but also expose themselves to the full range of thought.

Internet researchers have long occupied themselves trying to understand how new technologies affect political processes. One key question is whether the shift from broadcast media to peer-based media brings society closer to, or farther from, a public sphere. Increasing the diversity of voices indicates a closer approximation, while narrowing the range of voices indicates democratic demise.

By this metric, the research doesn’t look good for democracy. In general, people seek out opinion-confirming information. That is, we actively consume content that strengthens—rather than challenges—our views. For those of us who hide Facebook Friends, mute people/hashtags on Twitter, and read news from a select few sources, this may not come as a surprise.

This confirmation bias is algorithmically strengthened through what Eli Pariser calls a filter bubble. Many of the platforms and websites that feed us news and information are financially driven companies. These companies make money by selling space to advertisers, who pay according to how many users they reach and for how long. That is, advertisers purchase eyeballs, so it behooves Internet companies to keep eyeballs on their sites for as long as possible. This results in users receiving information that is already appealing to them. Pandora, for example, plays you music that’s similar to what you already listen to, while Google produces results that line up with the kinds of links you tend to click. In this way, our views and preferences are largely given back to us, creating a bubble that protects against, rather than invites, disagreement and debate.
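The feedback loop Pariser describes can be sketched in a few lines of Python. Everything here (the article topics, the click-counting rule) is invented for illustration; no real platform’s recommendation system works this simply.

```python
# Toy sketch of a filter bubble: a feed that ranks stories by how often
# the user has clicked on their topic. Each round of clicks makes the
# feed more homogeneous, "giving our preferences back to us."
from collections import Counter

class ToyFeed:
    def __init__(self, articles):
        # articles: list of (title, topic) pairs
        self.articles = articles
        self.clicks = Counter()  # topic -> number of clicks so far

    def click(self, topic):
        """Record that the user engaged with a story on this topic."""
        self.clicks[topic] += 1

    def recommend(self, n=3):
        """Rank articles by the user's past clicks on each topic.

        Topics the user never engages with sink to the bottom, so
        disagreeable content quietly disappears from view."""
        return sorted(self.articles,
                      key=lambda article: self.clicks[article[1]],
                      reverse=True)[:n]

feed = ToyFeed([("Tax bill passes", "politics"),
                ("New indie album", "music"),
                ("Senate hearing", "politics"),
                ("Festival lineup", "music")])
feed.click("politics")
feed.click("politics")
# After two political clicks, the top of the feed is all politics.
print(feed.recommend(2))
```

The point of the sketch is that no one wrote “hide music news” anywhere; the narrowing emerges from optimizing for engagement alone.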

Information in the digital age is plentiful, and the work of engaged citizens entails sorting through it to find what is relevant, meaningful, and useful. It seems that both individual practices of filtering and algorithmic filters work against a version of democracy in which political action stems from reasoned consideration of key issues from all possible sides. The Internet has not given us a public sphere.

But what if the public sphere is not the democratic ideal? What if, instead, the driving force of political participation is community and commiseration? This alternative democratic vision is the driving logic behind Brigade, a series of web and mobile tools that promise to help users become active political citizens.

CEO Matt Mahan explains that these tools allow people to “declare their beliefs and values, get organized with like-minded people, and take action together, directly influencing the policies and the representatives who have an impact on the issues they care about.” This is a model that embraces—rather than fights against—confirmation bias and algorithmic filter bubbles. And it does so in the service of political action.

Currently in beta, Brigade is still invite-only. After I requested and received an invite, Brigade prompted me with a series of political questions, and encouraged me to answer more. After submitting my responses I could see how I compared with the general populace, and with those in my existing social media networks. I could also connect with others who share similar views, and learn about opportunities to get involved. It basically asks what I think, and then shows me my people. This is powerful. When a person states an opinion, as Brigade prompts users to do, it reflects a belief, but also, actively forms it. We are what we do, and stating that we believe something makes us believe it a little more firmly. Having established this belief, the user connects with others who agree, literature that supports, and events in which to participate.
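The “show me my people” step can be illustrated with a toy sketch: match users by how often their stated positions agree. The question texts and the matching rule below are my own assumptions for illustration, not Brigade’s actual algorithm.

```python
# Illustrative matching of like-minded users by agreement on a shared
# set of yes/no political questions. All names and questions invented.

def agreement(a, b):
    """Fraction of shared questions two users answered the same way."""
    shared = [q for q in a if q in b]
    if not shared:
        return 0.0
    return sum(a[q] == b[q] for q in shared) / len(shared)

me = {"carbon tax": True, "term limits": True, "open primaries": False}
others = {
    "alex":  {"carbon tax": True,  "term limits": True,  "open primaries": False},
    "blake": {"carbon tax": False, "term limits": False, "open primaries": True},
    "casey": {"carbon tax": True,  "term limits": False, "open primaries": False},
}

# Rank potential allies by agreement with my stated positions.
allies = sorted(others, key=lambda name: agreement(me, others[name]),
                reverse=True)
print(allies)
```

Note how the design choice baked into this sketch mirrors the essay’s point: the tool surfaces whoever already agrees, by construction, rather than exposing the user to the full range of thought.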

In a strange way, Brigade itself embodies the will of the people. We filter, we affirm, we look for like-minded others. Brigade, as a political tool, helps us do it better. It is unclear if this will translate into votes and policies, but regardless, Brigade’s mere existence challenges us to reconsider the metrics of an ideal democracy. Perhaps, the public sphere will be dethroned.


Jenny Davis is on Twitter @Jenny_L_Davis


A new duo of apps purports to curb sexual assault on college campuses. WE-CONSENT and WHAT-ABOUT-NO work together to account for both consensual sexual engagement (“yes means yes”) and unwanted sexual advances, respectively.

The CONSENT app works by recording consent proceedings, encrypting the video, and saving it on secure servers. The video is only accessible to law enforcement through a court order or as part of a university disciplinary hearing. The NO app gives a textual “NO” and shows an image of a stern-looking police officer. If the user pays $5/year for the premium version, the app records and stores the recipient of the “no” message, documenting nonconsent. The apps are intended to facilitate mutually respectful sexual engagement, prevent unwanted sexual contact, and circumvent questions about false accusations. See below for quick tutorials provided by the developers.



The app suite is timely and its goals are laudable. These apps reflect a particular historical moment at the intersection of rape culture, emerging consent awareness, and norms of documentation, coupled with widespread access to documentary devices (i.e., mobile phones with cameras and Internet capabilities). They address the issue of sexual assault on college campuses, which is a problem. A big one.

Although the problem of sexual assault has been around for a while, it has taken hold of public attention over the last year, prompting news stories, task forces, protests, controversies, and I’m sure, lots of dinner table fights. Good. But like any festering wound that starts receiving treatment, things get messy before they get better. (Non)consent is not always clear, accounts can be imperfect (or dishonest), and, as the infamous Rolling Stone/UVA case revealed, a simple “victim’s word as Truth” approach doesn’t always suffice. The consent/nonconsent apps are here to clear things up. The CONSENT app protects the accused by providing evidence that consent did occur; the NO app supports accusers by providing evidence that sexual contact was unwanted.

To the developers’ credit, the apps start an important conversation and use readily available technologies to implement consent as part of the sexual encounter. I actually think the “No” police officer image offers a funny way to tell someone that you aren’t interested in fulfilling their request (sexual or otherwise), circumventing the uncomfortable task of rejection. But like most technological objects, made by people immersed in an existing cultural logic, these apps do more to reify troubling patterns than subvert them.

First, they reinforce the focus on campus sexual assault, despite statistics that put 18-24 year old women who do not attend college at greater risk. Of course, campus sexual assaults are a serious issue. But ALL sexual assaults are a serious issue. Nothing about the design of the apps makes them exclusive to college students, yet, reflecting a tradition of concern-disparities along class and race lines, the apps’ discourse centers on those who attend institutions of higher education to the exclusion of those who do not.

Second, the apps demand recorded proof. Candace Lanius astutely points to the racism entailed in requiring people of color to quantitatively demonstrate their experiences of police mistreatment. So too, those who experience sexual assault are now asked to document their case—in real time. Record your “No” (for $5/year) or it didn’t happen. For those on the bottom end of a status disparity, personal accounts are not enough.

This is further reflected as the apps place the burden of proof disproportionately upon the person who experienced assault. One important difference between the CONSENT and NO apps is that the former records consent for free, but the latter charges to record nonconsent. In fact, the CONSENT app self-destructs if users say the word “no” repeatedly (the website does not indicate what number constitutes “repeated”). This means that the CONSENT app only records consent. Recording a “NO” comes, literally, at a higher cost. Keep in mind, CONSENT serves the accused, NO serves the assaulted.

Finally, the apps reify consent as temporally prior to, and separate from, the sexual encounter rather than part of an ongoing dialogue within the sexual encounter[1]. People change their minds. People come up with fun new ideas. Both of these are opportunities for further conversation. Consent is continuous, but the apps artificially bound it. This artificial binding is all the more significant, given the demand for documented proof. If people record consent, and then one party changes their mind or isn’t into a spontaneous suggestion, the record still shows consent. The person experiencing assault therefore has less than their experiential account; they also have to address a document that discounts their story. And again, documents weigh more than words.

Solutions to social problems can never just be technological. To presume that they could is to guarantee social problems will persist.

Follow Jenny Davis on Twitter @Jenny_L_Davis



[1] Through communications with the development team I’ve learned that there is another app on the way that allows people who experience assault to record their story and then release it to authorities later if they so choose.


Atrocities in Eritrea atop my Twitter feed. A few tweets below that, police violence against an innocent African American girl at a pool party. Below that, the story of a teen unfairly held at Rikers Island for three years, who eventually killed himself. Below that, news about the seemingly unending bombing campaign in Yemen. Below that, several tweets about the Iraq war and climate change—two longtime staples of my timeline. It reminds me of the writer Teju Cole exclaiming on Twitter last summer that “we’re not evolving emotional filters fast enough to deal with the efficiency with which bad news now reaches us….”

This torrent of news about war, injustice, and suffering is something many of us experience online today, be it on Facebook, Twitter, in our inboxes, or elsewhere. But I wonder about the ‘evolutionary’ framing of this problem—do we really need to develop some new kinds of emotional or social or technical filters for the bad news that engulfs us? Has it gotten that bad?

As it turns out, it has always already gotten that bad. Media critics like Neil Postman have been making arguments about the deleterious effects of having too much mass-mediated information for decades now. Although his classic Amusing Ourselves to Death (1985) was written primarily as a critique of television, contemporary critics often apply Postman’s theories to digital media. One recent piece in Salon labelled Postman “the man who predicted the Internet” and suggested that today “the people asking the important questions about where American society is going are taking a page from him.” Indeed, Postman identified the central problem of all post-typographic media, beginning with the telegraph, as one of information overload. According to Postman, “the situation created by telegraphy, and then exacerbated by later technologies, made the relationship between information and action both abstract and remote. For the first time in human history, people were faced with the problem of information glut.” For Postman, telegraphy’s alteration of the “information-action ratio” associated with older communication technologies created a “diminished social and political potency.” In oral and print cultures, “information derives its importance from the possibilities of action” but in the era of electronic media we live in a dystopian “peek-a-boo world” where information appears from across the globe without context or connection to our daily lives. It “gives us something to talk about but cannot lead to any meaningful action.”

Put in these terms, one can understand the appeal of Postman’s ideas for the digital era, in which a feeling of being overloaded with information surely persists. Even before the Internet, “all we had to do was go to the library to feel overwhelmed by more than we could possibly absorb.” But as Mark Andrejevic reminds us in his book, Infoglut, “Now this excess confronts us at every turn: in the devices we use to work, to communicate with one another, to entertain ourselves.” And as Cole’s tweets make plain, the tension caused by too much information can be particularly acute when it comes in the form of bad news. Is it safe to say, then, that the Internet has further ruptured the information-action ratio in the ways suggested by Postman?

I want to argue against such a view. For one thing, Postman’s information-action ratio appears to privilege media that provide less information, and information with easily actionable ramifications. As a criticism, this doesn’t mesh with the sensibilities of either media producers or consumers, who have sought out more information from more people and places since at least the advent of typography. Such an ideal also would seem to privilege simple news stories over complex ones, since the action one can take in response to a simple story is much clearer than a complex one. Indeed, Postman mockingly asked his readers “what steps do you plan to take to reduce the conflict in the Middle East? Or the rates of inflation, crime and unemployment? What are your plans for preserving the environment or reducing the risk of nuclear war?” But do the scale and complexity of these issues mean one should not want to know about them? Of course not. And arguing against the public consumption of such complex, thought-provoking stories seems wildly inconsistent for a book that later bemoaned the fact that television news shows were merely “for entertainment, not education, reflection or catharsis.”

But perhaps there is simply a threshold quantity of information beyond which human consciousness can’t keep up. This concern has animated much contemporary criticism of the Internet’s epistemological effects, as in Nicholas Carr’s 2008 essay “Is Google Making Us Stupid?” The piece began with Carr worrying about his own reading habits: “my concentration often starts to drift after two or three pages….” Carr quickly blamed the Internet for his newfound distraction. “What the Net seems to be doing is chipping away my capacity for concentration and contemplation….”

Amazingly though, Carr’s capacity for deep reading had somehow managed to persist throughout the age of television, in contrast to Postman’s predictions about that medium’s deleterious effects. So while media critics of every age tend to make this sort of technologically determinist criticism, the question really ought to be reframed as one concerning social norms. Critics like Carr may talk of the brain’s “plasticity,” such that it can be rewired based on repeated exposure to hyperlinks and algorithms, but they, like Postman, don’t address why that rewiring wouldn’t necessarily entail the synthesis of old and new epistemologies, rather than the destruction of one by the other. How else to explain the fact that Carr’s own mental capacities flourished in a televisual age that was once similarly bemoaned by its critics? What we’re left with, then, is a way of reading technological panics like his and Postman’s as evidence of the shifting norms concerning communication technologies.

Shifting to normative, rather than technologically determinist, understandings of information overload recasts the problem of bad news in sociological and historical terms. Luc Boltanski’s Distant Suffering is a social history of the moral problem posed by the mass media’s representation of the suffering of distant others. When one knows that others are suffering nearby, one’s moral obligation is clearly to help them. But as Boltanski explains, when one contemplates the suffering of others from afar, moral obligations become harder to discern. In this scenario, the “least unacceptable” option is to at least make one’s voice heard. As Boltanski put it:

It is by speaking up that the spectator can maintain his integrity when, brought face to face with suffering, he is called upon to act in a situation in which direct action is difficult or impossible. Now even if this speech is initially no more than an internal whisper to himself… none the less in principle it contains a requirement of publicity.

There are echoes here of the information-action ratio concept, but the problem posed by this information is not the amount but its specific content. Good news and lighthearted entertainment don’t really pose a moral or ethical problem for spectators, nor do fictional depictions of suffering. But information about real human suffering does pose the problem of action as a moral one for the spectator.

This moral dilemma certainly didn’t originate with the telegraph, much less the television or the Internet. Rather, knowledge of distant others’ suffering came to be seen as morally problematic with the growth of newspapers and the press. But the Internet does, I think, tend to shake up these norms, partly because it changes the nature of public speech. In Postman’s terms, the Internet alters the action side of the information-action ratio. Call it slacktivism or clicktivism or simply chalk it up to the affordances of communication in a networked world, but Boltanski’s “internal whisper to himself” is no longer internal for many of us. At the very least, when confronted with bad news, we can pass on the spectacle by tweeting, blogging, pinning, or posting it in ways that are immediately quite public and also immediately tailored to further sharing. Each time I read a tweet I am confronted with the question of whether I should retweet or reply to it. This becomes a miniature ethical and aesthetic referendum on every tweet about suffering and misfortune—Is the issue serious enough? Will my Twitter followers care? Do I trust the source of this info? Is there another angle that the author of the tweet has not considered?—although I do this mental work quite quickly and almost unthinkingly at times. The same is true for my email inbox, flooded with entreaties for donations to worthy causes or requests to add my name to a petition against some terrible injustice. Of course, humanitarianism and political activism thrived before the Internet, so the issue here is not that the Internet has suddenly overloaded us with information about bad news, but that it has increased the number of direct actions we might take as a result. Each one of these actions is easy to do, but they add up to new and slightly different expectations.

This culminates in what I’m calling infoguilt. This term has been used sparingly in popular parlance, and its only scholarly use is in a 1998 book called Avatars of the Word: From Papyrus to Cyberspace. Author James J. O’Donnell suggested that “what is perceived as infoglut is actually infoguilt—the sense that I should be seeking more.” In O’Donnell’s conception guilt comes from not reading all that one could on a subject, or seeking out all available information. This doesn’t seem to me as potent a force as the guilt that comes from the kinds of overwhelming exposure to bad news and distant suffering discussed here. Guilt ought to be reserved for situations in which one’s moral worth is called into question, and as Boltanski pointed out, the spectacle of distant suffering “may be… the only spectacle capable of posing a specifically moral dilemma to someone exposed to it.” A more relevant definition of infoguilt ought to refer, then, to the negative conception of self that comes from not responding to the moral or emotional demands of bad news.

This guilt certainly generates a kind of reflexivity about one’s position as a spectator—how can I prioritize my time, resources, and emotions—and in this way it may surely feel like we have too much information and not enough action. But I, for one, don’t think that feeling is a bad thing. After all, it hasn’t translated to a retreat from humanitarianism and charity—quite the opposite. Rates of charitable giving online have skyrocketed, and despite a serious dip in charitable donations after the 2008 financial crisis, American giving as a whole has risen continuously over the past four years, and is projected to continue to rise in 2015 and 2016 as well. At the very least it does not appear that the problem of infoguilt contributes to what has been deemed “compassion fatigue.” Instead, the guilt we feel is precisely a marker of a continued belief in the value of compassion and an internalization of shame when we fail to act with enough compassion for the many distant others who are now only a click away.

Still, I don’t want to just dismiss infoguilt as merely a trivial first world problem. It is, of course, a symptom of a deeply unjust world where such a surplus of pain and suffering confronts the most comfortable of us every day across the globe. And I don’t have the answer for the appropriate ways we should respond to all of the misfortune that confronts us every day online. But I do think we need to fight back against the notion that this is a technological problem. Because big tech companies are quite willing to solve the problem of infoguilt for us with algorithmic curation of the news we receive. As more and more of our news comes to us filtered through Facebook and Twitter, algorithms could reduce the emotional strain of bad news by limiting our exposure to it without us even knowing. This raises the question, as Mark Andrejevic put it: “what happens when we offload our ‘knowing’ onto the database (or algorithm) because the new forms of knowledge available are ‘too big’ for us to comprehend?” Facebook has already shown that it can subtly improve or depress our moods by shifting the content of our news feeds. And Zeynep Tufekci has written about the ways that Facebook’s algorithm inadvertently suppressed information about the protests in Ferguson and the brutal police response to them in the first hours of that nascent social movement. If we solve infoguilt with technological fixes like algorithmic filtering, it will likely be at tremendous cost to what’s left of our democratic public sphere.
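To see how little machinery invisible curation requires, consider a toy sketch: a filter that silently drops posts containing distressing keywords. The keyword list and threshold are invented for illustration; real ranking systems are proprietary and vastly more complex.

```python
# Toy sketch of mood-managed curation: posts scoring above a "grimness"
# threshold are quietly removed before the reader ever sees them.
# Keywords and threshold are invented, not any platform's real signals.

BAD_NEWS_WORDS = {"war", "violence", "bombing", "suffering", "crisis"}

def mood_score(post):
    """Count distressing keywords in a post; higher means grimmer."""
    return sum(word.strip(".,!?") in BAD_NEWS_WORDS
               for word in post.lower().split())

def curate(posts, max_grim=0):
    """Return only posts at or below the grimness threshold.

    Nothing tells the reader what was removed; the filtering
    itself is invisible, which is exactly the worry."""
    return [p for p in posts if mood_score(p) <= max_grim]

timeline = ["Bombing campaign continues in Yemen",
            "Local bakery wins award",
            "Protest met with police violence",
            "New puppy cafe opens downtown"]
print(curate(timeline))  # only the cheerful posts survive
```

A dozen lines suffice to lift the moral burden of bad news off the reader, and to lift the news itself out of view along with it.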

As Andrejevic again explains:

The dystopian version of information glut anticipates a world in which control over… information… is concentrated in the hands of the few who use it to sort, manage, and manipulate. Those without access to the database are left with the “poor person’s” strategies for cutting through the clutter: gut instinct, affective response, and “thin-slicing” (making a snap decision based on a tiny fraction of the evidence).

This is, to a great extent, how we struggle with infoguilt today. We feel the pain of being unable to respond and the guilt of living in comfort and safety while others suffer, and we make snap judgments and gut decisions about what information to let through our emotional filters, and what actions we can spare amidst the ever growing demands of work, family and social life in an always-connected present. But given the available alternatives, let’s continue to struggle through our infoguilt, keep talking it out, and not cede these moral, ethical, and normative questions—over which we do have agency—to opaque technologies promising the comforts of a bygone, mythologized era. In the same way that activists are working to change the norms about trolling and actively creating safer spaces online for women, people of color, and other oppressed peoples, we can work to develop a moral language to understand our online obligations to distant sufferers. If we don’t, then this language will be developed for us, in code, and in secret, in ways more dystopian than even Postman could envision.


The author would like to thank the students in his WRI 128/129 “Witnessing Disaster” seminar, who read and commented on an early draft of this essay.

Timothy Recuber is a sociologist who studies how American media and culture respond to crisis and distress. His work has been published in journals such as New Media and Society, The American Behavioral Scientist, Space and Culture, Contexts, and Research Ethics.




What counts as a threat via social media, and what are the legal implications? The Supreme Court just made a ruling on this subject, deciding 8-1 that content alone is not sufficient evidence. Most accounts of this ruling frame it as “raising the bar” for a legal conviction of threatening behavior via social media. I argue instead that the bar has not been raised, but made less fixed, and rightly so.

At issue was an earlier conviction and jail sentence for Anthony Elonis, whose Facebook posts projected harm onto his ex-wife, an FBI agent, and school children.

Elonis was originally convicted under the logic that a “reasonable person” could interpret his Facebook posts as true declarations of intent. The SCOTUS decision argues that the “reasonable person” criterion is not sufficient to convict someone. SCOTUS declares that the prosecution is burdened with showing evidence of the mental state of the accused, which suggests they do, in fact, intend harm upon the target of their message. SCOTUS does not, however, give guidelines on what that evidence includes.

Concretely, the SCOTUS decision sets a legal standard by which text itself is not enough. Juries must now debate about the meaning behind the text, and do so without set standards. While most media accounts interpret this ruling as raising the bar for conviction, it simply makes the bar more flexible and responsive.

Probably unintentionally, the SCOTUS decision is impressive in its implicit recognition of the structural features of Facebook that make a “reasonable person” criterion unfair, and empirical guidelines too restrictive. In particular, this decision seems to understand context collapse and relatedly, the nuanced and polysemic nature of public Facebook communications.

Context collapse refers to the blurring of network walls such that people from different parts of a person’s life come together to interact. Think of the uncomfortable task of seating guests at a wedding, or the experience of seeing your boss while out with friends at a bar. On Facebook, people from many different networks come together in a shared social space. A single communication can mean very different things to these different people.

One of the most widely cited examples of communication strategies in light of context collapse is social steganography: composing texts that communicate a specific message to some readers while obscuring that message from others. danah boyd writes about how teens post song lyrics to convey messages to friends, while concealing those messages from parents. That is, Facebook content cannot be taken on its own terms.

Of course, no communication can be taken entirely on its own terms. We often say things ironically, reference inside jokes, speak sincere but fleeting words in the heat of passion or anger, etc. But on Facebook, these acts of communication have such a nebulous audience that to decipher their meaning from the perspective of a “reasonable person” is particularly inappropriate.

So coming back to the SCOTUS decision, it recognizes that the same content can hold multiple meanings, depending upon the “reasonable person” one asks. It further recognizes that setting concrete standards would mean predicting how all of those in a person’s network might read the piece of content, and whether those readings converge with the writer’s intent. Meaning in this context is a moving target, and the open guidelines and dismissal of the “reasonable person” criterion do not lower the bar, but allow that bar to move.

This becomes clear when we think of content with innocuous words and images, but potentially nefarious meaning. To give a light example, someone might write on a friend’s wall “I’m going to give you a dutch oven.” Taken only as text, this indicates an intention to gift someone a cooking vessel. Read differently, this could be a threat to trap the friend under the covers with bodily gases. Though I can’t imagine a court case in which this particular threat would be at issue, it demonstrates that threats are not tied to configurations of words, but to who articulates those words, how they do so, and for whom those words are intended.

Communication is complicated, even more so in a multiply audienced environment. To recognize that readers cannot be approximated into a single “reasonable person,” and to maintain open standards, is to unfix—not raise or lower—the legal definition of threatening behavior.


Jenny Davis is on Twitter @Jenny_L_Davis

I love speculative fiction, especially when it includes a mystery. So imagine my excitement this past Saturday when I learned that Netflix released the new original series Between, premised on a mysterious illness that kills everyone over 21 years old. Blue skies could wait, this day was for binge watching. Or, as it turned out, for watching a single episode and then taking the dogs for a walk. Contrary to their usual season-dump format[i], Netflix is releasing Between on a weekly basis.

This got me thinking about how release schedules affect television for both producers and consumers, and wondering why Netflix would revert to the more traditional model.

Experientially, the season-release is truly indulgent. It’s like sitting there with a half-gallon of ice cream and a spoon (hence: binge watching). The goodness keeps on coming, usually followed by a slight disorientation and a tinge of self-disgust. Along with anticipation, the end of each episode brings immediate gratification as closing credits turn quickly into an opening montage. Even when watched in moderation (reasonable size bowl of ice cream style), the viewer can select when to tune in, out, and back in. For better or worse, the viewer loses that end-of-episode lack, the excited grief and desire for the story to continue.

Reasonable people could disagree about which is a “better” release model. Sidestepping that argument (which would first require me to define “better”), I’ll make the simpler point that full season releases are stickier. That is, they keep the viewer glued to the programming for more extended periods. For instance, I watch Walking Dead (and Talking Dead) each time a new episode airs, but have no idea what comes on next. The show ends and I turn off the television. But if AMC (or Netflix) released the entire next season in one fell swoop, I wouldn’t move until the season finale. More than that, a full season release would facilitate a more complex viewing experience for me, while giving the writers freer rein to tell the story in more complex ways.

Indeed, the full season release changes how the story is read and also, how the story can be written.

The binge watch affords attention to detail and nuance that simply gets lost when there’s a week between episodes (and then months between seasons). Viewers are people, people forget stuff. The viewer can better appreciate (or at least more closely follow) the complexity of narrative twists and character development when stories are continuous rather than fragmented and interrupted. The viewer can also better identify plot holes, character inconsistencies, lazy writing, poor editing, and other weaknesses of the content. In short, binge watching allows viewers to read the story more closely.

From a production perspective, the binge watch adds a new pressure (see sentence above) but also affords new kinds of storytelling. This struck me most clearly when Netflix released a final season of Arrested Development, a popular situational comedy that went off the air years prior. The writers wrote the show under the assumption that viewers would watch the season holistically. They told the story in reverse chronology, starting at the end and then revealing how the characters ended up in their respective predicaments. The entire first episode made no sense, nor did the writers intend for it to. The subsequent episodes were a combination of retrospective sense-making and confusing new content—which would be clarified in subsequent episodes. It was fun. More than that, it was a new way of writing a show. Full season releases give writers space to let threads remain open, playing out over a prolonged period; they can rely on the nuance and detail that viewers are more apt to pick up when episodes are watched together; characters can develop more slowly, plot shifts can be more subtle, and confusion becomes tolerable, due to the promise of clarification in a timely manner.

So if the full season release is stickier and affords more complex storytelling, why would Netflix revert to the traditional episodic release?

The boring answer is that the decision was about copyright. Canada’s City network holds the rights to the show, and no one else can air it until City does. Netflix therefore releases the episodes the same day they air in Canada.

The more interesting answer is that the decision was about speed and more specifically, the relative velocity of the medium vs. the message. The medium is the streaming platform. The message is the story. Netflix decided to get the story to viewers one episode at a time, rather than waiting and releasing all episodes together once the season concluded. They slowed down the medium, which, it seems, is far faster than the message.

Storytelling is an art. Like any art, it takes time to craft a quality product. The art of storytelling and the time it requires hasn’t changed, even as the distribution of stories has taken a radical shift. An unintended consequence of the full season release is that viewers can consume a lot of content very quickly. Traditional release schedules slowly distribute the 12-24 episodes of a season over several months. Alternatively, someone could binge watch the same content in a couple of days. This leaves long gaps in which viewers have little of interest in their queue. That is, the service loses some of its stickiness.

The medium is outrunning the message. Netflix, in the case of Between, is both responding to demands for content and pacing viewers to make that content last. They are giving viewers a taste of something new, something fresh, but not all at once, not so indulgently.

As a medium, Netflix and other streaming services have fostered a new way to watch and create stories. But they still are, and must be, beholden to the storytellers.


Jenny Davis is on Twitter @Jenny_L_Davis

[i] In the U.S. Outside the U.S., weekly episodic releases are more common.


At the beginning of this month, the ACLU in California released a free mobile app that monitors police violence. The app, called Mobile Justice CA, preserves users’ footage of police encounters. Available on both Apple and Android devices, the app lets users push a large “Record” button to document their own and others’ interactions with police. The content automatically transmits to the ACLU servers. The point is to preserve recorded content even if police destroy the recording device and/or delete the video. For instance, the ACLU would have maintained documentation of police detaining residents in an LA neighborhood, even after an officer smashed the cellphone of a witness recording the events.

The ACLU treats transmissions through the app as legal communications and protects the anonymity of the sender. Legal action is only taken upon the sender’s request, but the ACLU maintains the rights to the footage, meaning they can distribute it to media outlets as evidence of injustice. Branches of the ACLU in New York, New Jersey, Oregon, and Missouri have released similar apps.

These apps are significant in their reflection of an increasingly central mode of activism: sousveillance. They are also reflective of the structural embeddedness of the sousveilling citizen.

Sousveillance is watching from the ground up. It is the vigilant eyes of citizens upon figures of authority—individual and institutional. Sousveillance is facilitated by increasingly sophisticated and relatively inexpensive recording devices attached to the mobile phones that we carry around with us. Kari Andén-Papadopoulos names this form of protest citizen camera witnessing.  The citizen camera witness points hir phone towards the action, documenting citizen-authority interactions and holding the authorities accountable.

Accountability has taken center stage over the past year with case after case of documented police violence, culminating most recently in the Baltimore uprisings and investigations into systemic problems within regional justice systems. With their mobile devices, citizens have made their grievances more difficult to ignore.

The efficacy of citizen voices has long been debated within scholarship on social movements and the Internet. With digitally mediated platforms and always-with-you mobile devices, anyone can be a director, publisher, author, and curator. But attention is a finite resource and with so many individual directors, publishers, authors, and curators—competing with established institutions of content distribution (i.e., broadcast media companies)—it may be quite difficult for the average citizen to procure an audience. In other words, we can all talk, but who will listen? The ACLU apps address this issue.

The ACLU apps recognize that citizens have important things to say, but that current social arrangements are such that individual messages need institutional channeling. The uneasy reality is that mobile technologies do not free citizens from the confines of a system, but empower them within that system. Citizen camera witnesses can only be revolutionary when their revolution takes hold through social bodies that are already systemically legitimate.

The ACLU reinforces the thin and tenuous messages recorded by citizen camera witnesses. It collects, protects, and projects these messages. It makes these messages harder to ignore. In doing so, it also reestablishes the citizen as part of an institutionally based social infrastructure that is far bigger than themselves. “We’ll take it from here,” it says. “Thanks for your help.”

Jenny Davis is on Twitter @Jenny_L_Davis

screenshot from my phone

Like many people, I spent my morning entranced by the protests following Freddie Gray’s death. The latest in a string of highly publicized incidents of unarmed black men killed by police, Gray’s death has brought protestors to a boiling point. The streets of Baltimore are on fire. Schools are closed. The National Guard has been called. As I told my students, this is what social change looks like.

For a long while I stared intently at my Twitter feed. The content was unique to this protest, but the form of the Twitter feed looked entirely familiar: the calls for peace, calls for racial justice, racist slurs, police condemnation, images from the ground, and links to (ohmigosh so many) “think” pieces scrolled by. Then, I wondered, how does ‘rioting’ look through an anonymous platform driven by upvotes?  So I went to Yik Yak and peeked at Baltimore, MD. Here is what I found:

Political Commentary

People on Yik Yak are making political claims. Some condemn the police and the racist culture that fosters patterns of violence against black men. Others sling racial epithets. Some implore protestors to engage peacefully. Others validate protestors’ anger as a legitimate means of voice within a system that excludes them. For example:

 We are going to look back on these events and see them as the tipping point for the next revolution of an equitable society.

 Violence is not the answer.

 This is our city, we ain’t gonna let no thugs take it from us!

 Act like animals get treated like animals.

Interestingly, while cloaked racism (in the form of coded terms like “thugs”) is tolerated, explicit racism is censured through downvote. For example, at the time of this writing the yak that calls protestors “thugs” has 6 upvotes and the one that equates protestors with animals has 2, while those below were met with disapproval:

 Seriously though. You don’t see any white people looting and destroying our city…(3 downvotes)

 Maybe segregation wasn’t such a bad idea (5 downvote deletion)

 If I see a black walking, I go to the other side of the street (5 downvote deletion)

Information Seeking/Information Sharing

Along with political statements, users are employing Yik Yak to request and spread information:

 Does anyone have any recommendations for how to help? I don’t have a way to get to penn and north but I don’t want to just sit here.

 Where are people rallying?

 Loyola closed as of 2pm

 City clean up at Pennsylvania and North Ave today. Come on by!

While political commentary and information sharing largely reflect practices on non-anonymous social media, the anonymity and location-based component of Yik Yak make it unique. My peek at Baltimore therefore revealed a consistent stream of protest-related jokes, as well as commentary completely disconnected from the political unrest.

Making Light

The protests in Baltimore are serious in their own right, and reflect a deeply serious matter: the systematic violence against blacks by law enforcement. Many would therefore consider it in poor taste to make light of these goings-on, and yet this was a prevalent trend on the Baltimore Yik Yak feed. Although these kinds of jokes certainly have a presence on Facebook and Twitter, they take a more prominent role on the anonymous platform:

 I’m going to walk into the heat of the riots with a backpack full of blunts and personally end this bull shit.

 If the riot doesn’t kill us, these exams will. #fuckedeitherway

 The only thing I can’t get off my mind are these hotties from the National Guard.

What Riot?

Yik Yak is location based, rather than hashtag organized. Because of this, yaks span a variety of topics. This remains true during the protests, as occasional non-protest-related content sprinkles itself through the feed:

 Sometimes I wonder where you are and what you’re doing and whether you think about me.

 Give me a nickname. I don’t care if it’s based off my name or a thing I did, if you give me a nickname I will love you forever.

 How much eyeliner is too much?



Jenny Davis is an Assistant Professor of Sociology at James Madison University and Co-editor of the Cyborgology blog. Follow Jenny on Twitter @Jenny_L_Davis

Editors’ Note: This post is based on a presentation at the upcoming Theorizing the Web 2015 conference. It will be part of the Protocol Me Maybe panel.


I’ve been researching hacking for a little while, and it occurred to me that I was focusing on an as-yet-unnamed hacking subgenre. I’ve come to call this subgenre “interface hacks.” Interface hack refers to any use of a web interface that upends user expectations and challenges assumptions about the creative and structural limitations of the Internet. An interface hack must have a technical component; in other words, its creator must employ either a minimal amount of code or otherwise demonstrate working knowledge of web technologies. By virtue of the fact that they use the interface, each hack has aesthetic properties; hacks on web infrastructure do not fall in this category unless they have a component that impacts the page design.

One of the most notable interface hacks is the “loading” icon promoted by organizations including Demand Progress and Fight for the Future in September 2014. This work was created to call attention to the cause of net neutrality: it made it appear as though the website on which it was displayed was loading, even when that was obviously not the case. It would seem to visitors that the icon was there in error; this confusion encouraged clicks on the image, which linked to a separate web page featuring content on the importance of net neutrality. To display the icon, website administrators inserted a snippet of JavaScript — provided free online by Fight for the Future — into their site’s source code. A more lighthearted interface hack is the “Adult Cat Finder,” a work that satirizes pornographic advertising in the form of a pop-up window that lets users know they’re “never more than one click away from chatting with a hot, local cat;” the piece includes a looping image of a Persian cat in front of a computer and scrolling chatroom-style text simply reading “meow.” Links to these and other interface hacks are included at the end of this post.
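To make the small technical footprint of such a hack concrete, here is a minimal sketch, in the spirit of the loading-icon campaign but not the actual Fight for the Future snippet, of how a few lines of JavaScript can overlay a fake “loading” icon that links out to campaign content. The function name and URL below are hypothetical:

```javascript
// Hedged sketch of a "loading icon" interface hack (hypothetical code,
// not the real Fight for the Future widget). It pins a fake spinner to
// the corner of the page; clicking it opens the campaign page.
function injectLoadingIcon(doc, infoUrl) {
  const link = doc.createElement("a");
  link.href = infoUrl;          // the "spinner" is secretly a link
  link.textContent = "\u231B";  // hourglass character stands in for a spinner image
  link.setAttribute(
    "style",
    "position:fixed;top:1em;right:1em;font-size:2em;z-index:9999;"
  );
  doc.body.appendChild(link);
  return link;
}
```

In a browser, a site administrator would call `injectLoadingIcon(document, url)` from a script tag; nothing else about the page changes, which is precisely what makes the icon read as an error rather than a message.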

I maintain that interface hacks are powerful tools for online activists and hacktivists and that their potential has yet to be fully explored. The power of interface hacks resides in the fact that they take as their raw material the technical underpinnings of the Internet. Because their medium is infrastructure as opposed to content, they are functional on the level of context — in other words, their content is also their contextual frame, or the structure that gives the work meaning. Insofar as they exist to draw attention to their own rubric for interpretation, interface hacks leave the user at a disadvantage when it comes to making sense of their initial encounter with the work. One effect of this confusion is that user attention is seized during this time. The period before the user grasps that the “surprise” of the work is intentional, i.e., the time in which their awareness is given to making sense of what they are seeing rather than simply absorbing it, is a particularly potent one in terms of establishing messages and conveying meaning in a busy web environment. I believe that interface hacks are particularly suitable for activists and artists whose work confronts digital issues.

I am aware that my usage of the word “hack” in this context may be contentious. Many people, some of whom do not identify as “hackers” per se, have strong feelings about the word “hack.” The phrase has been defined and redefined frequently since its inception in the 1950s and encompasses a distinctly heterogeneous set of activities, individuals and groups across the world. A definition of “hacking” that suits all possible contexts and applications is therefore difficult to pin down, and it’s become fashionable in recent years to use the term in contexts entirely unrelated to computing (à la “life hack”). This has evoked ire from hackers and non-hackers alike who argue that its overuse has diminished the term to the point of meaninglessness. I use it here because the taxonomical status of these works is ambiguous: many of the pieces included could be classified in numerous categories, including “artwork,” “software,” “activist demonstration” and “toy.” I take “hack” as a noun version of Richard Stallman’s definition of hacking: “exploring the limits of what is possible, in a spirit of playful cleverness.” The categorical ambiguity of interface hacks demands a term of its own: “interface hack,” as a term, is an epistemological tool. Grouping all of these works together under one name has allowed me to refine my investigation into their likenesses. These similarities are the structural elements that allow for the development of the theory behind them.

Hacking, including interface hacking, deals in a great amount of mystification and surprise; developing new terms as figures of thought offers us the opportunity to reveal some of its internal machinations. Mapping the theoretical blueprints behind the cognitive “surprise” of interface hacks allows us to create other, similar works, which can be used to boost any number of causes. These, of course, can be either good causes or bad ones; my interest is in promoting hacking-for-good. I hope that these works are used for beneficial reasons above and beyond all other possible manifestations.

Some Interface Hacks

Ben Grosser’s “Facebook Demetricator”

Maddy Varner’s “Tab Police”

Net Neutrality Slowdown Icon

404: No Weapons of Mass Destruction

Richard Stallman, “On Hacking”

Emma Stamm is a writer, musician and web developer; her work can be found at and she tweets @turing_tests.


Apple rolled out a new line of racialized emojis last week through their iOS 8.3. Originating in Japan, emojis are popular symbols by which people emote via text. Previously, the people-shaped emojis appeared with either a bright yellow hue or peach (i.e., white) skin. The new emojis offer a more diverse color palette, and users can select the skin tones that best represent them. It’s all very United Colors of Benetton.

While many applaud the new emojis, such as Dean Obeidallah, who, writing for CNN, announced “Finally, Apple’s Emojis Reflect America,” this has been far from a win for racial equality.

First there were the (pretty egregious) technical glitches. It turns out that for those who have not yet updated to iOS 8.3, the diverse emojis appear as aliens. This means that non-white symbolically translates to non-human. Similarly, as Nathan Jurgenson discovered and then sent via email, using all of the skin tones together shows up as white…”Too much diversity!! Retreat!! Retreat!!”
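The alien glitch is easier to understand at the level of Unicode. The skin tone variants are not separate characters but modifier sequences: a base emoji followed by one of the five Fitzpatrick modifier codepoints, U+1F3FB through U+1F3FF. A renderer that does not recognize the pairing falls back to drawing the codepoints separately, which on un-updated devices surfaced as the alien placeholder. A short sketch of how the sequences are built:

```javascript
// Build the five skin tone variants of the waving hand emoji by appending
// the Fitzpatrick modifier codepoints U+1F3FB..U+1F3FF to the base emoji.
const base = "\u{1F44B}"; // waving hand
const tones = [];
for (let cp = 0x1f3fb; cp <= 0x1f3ff; cp++) {
  tones.push(String.fromCodePoint(cp));
}
const waves = tones.map((tone) => base + tone);
console.log(waves.join(" ")); // five waving hands, light to dark
```

Each variant is just two codepoints; whether they render as one glyph or two depends entirely on the local font and operating system, which is why the same message can look inclusive on one device and alien on another.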

When Nathan tried to use all of the skin tones…

Then there was the immediate racist bigotry. Writing for the Washington Post, Paige Tutt says it is unsurprising to find people using the racialized emojis in incredibly offensive ways, like this gem she shared in her article:

[Screenshot of one such tweet, shared in Tutt’s article]

And finally, there was the yellow-as-default. When selecting an emoji, bright yellow is the supposedly neutral default from which the emojis can racially diverge. The problem, however, is that yellow is not racially neutral. It is, I argue, definitively white. Let me explain.

Sociologists West and Fenstermaker show that race is a key characteristic by which we categorize bodies. Their thesis is that like gender, “doing race” is not optional. That is, we racialize each other. In this vein, we racialize representations of each other—such as emojis. In American culture, which privileges whiteness, white is the presumed racial category. Representations that fall outside of the human color spectrum are, by default, coded as white.

Humans are not bright yellow and yet, yellow is not racially neutral. Rather, yellow’s very neutrality, in the U.S., signifies white. Both readers and writers of yellow know this, if not explicitly. In fact, it is the implicitness of whiteness that makes it so powerful. For example, nothing about the Simpsons should read white. Marge has blue hair for goodness sake. And yet, the Simpsons are a white family. Hence, Apu and Carl are brown, both raced vis-à-vis the rest of the Springfield citizenry. Similarly, and to move away from yellow specifically, the hyenas in The Lion King speak in urban black vernacular and Sebastian from The Little Mermaid is Jamaican (and sings about the ocean being awesome because you don’t have to get a job under there). We live in a culture of white-unless-signified-otherwise.

Although it is easy (and correct) to read all of this as racially insensitive on the part of Apple, the issue is much broader and far deeper. The Apple emoji case is a microcosm of race relations in the United States. We want badly to be inclusive but are so invested, so inculcated, so primed with the white racial frame that good intentions are easily ensnared by the very logics we work to break out of. Let’s be clear about that process. Yellow signifies white, but so too would green, purple, or orange. The problem of emoji racial representation is a problem of cultural race relations, as hardware and software are always products of existing social arrangements.

Apple’s emojis can’t solve the race problem. They never could.

Follow Jenny Davis on Twitter @Jenny_L_Davis