Some may label this moment a crisis of democracy: a moment in which the voice of The People lies inert; a moment in which the promise of citizen-driven governance, shining so brightly in the glow of digitally connected screens, reveals itself as a farce.

I am talking, of course, about Sir David Attenborough, or more to the point, I am talking about the $300 million British research vessel not called Boaty McBoatface.

The British Natural Environment Research Council invited citizens to select the name for their new polar research vessel. It was an opportunity to bring science to the public and involve the public in scientific discovery. Anyone was allowed to submit a name, and everyone voted on their favorites. The name with the most votes would christen the craft. Radio personality James Hand proposed the name Boaty McBoatface. Hand’s suggestion was well received, and the citizenry irrefutably selected Boaty for the vessel’s name. Case closed, right? No, the vessel’s name is David… which sounds nothing like Boaty and includes zero McFaces.

Although many citizens are upset about the way government heads renegotiated the terms of agreement (see: the petition for Sir David Attenborough to change his name to Boaty McBoatface), the outcome is quite in line with what we might expect. The events surrounding the Research Council’s boat-naming poll are a microcosm of how the internet interplays with citizen publics. They are an example of the rule, not an exception.

Here’s the thing about democracy in practice—citizen voices get a platform, but always within limits. Citizen voices matter, but in a relegated way. Citizen voices push and shape, but do not precisely sculpt (see: a continued absence of real campaign finance reform). While web technologies have the capacity to include more voices in the democratic process, internet-enabled democratization is especially vulnerable to bureaucratic coups. This is because citizens don’t have voice on the internet; they are given voice, and those who give voice can, ultimately, do with that voice what they will (see: Facebook still has no Dislike button).

Common wisdom advises those who govern to include their subjects in decision-making processes. People are more invested when they have a say. This is the logic behind teachers who let students help design homework assignments, parents who ask children what they wish to eat for dinner, and bosses who query employees about ways to improve efficiency. It is in this vein that the Natural Environment Research Council invited the public to select a new name for their advanced research vessel.

Internet technologies grant expansive access to large groups of people, giving rise to the now common practice of crowdsourcing—or distributing some task among massive digitally connected networks. In the case of Sir David Attenborough, the job of naming a newly acquired ship was distributed among the British citizenry. This feels very democratic. That’s the point. But in the end, power rarely dethrones itself.

The People selected Boaty McBoatface for the vessel’s name by a margin of over 100,000 votes. But U.K. Science Minister Jo Johnson vetoed the decision, opting instead for something more “suitable.” Instead of Boaty McBoatface, Johnson named the ship Sir David Attenborough, the second-place name (with 11,000 votes) and the name of a famous British naturalist.

Essentially, the Research Council was willing to let citizens participate, so long as those citizens did so in a way that was agreeable to the Council’s ideals. It was a democratic gesture, rather than a true effort in democratization.

But then again, Sir David Attenborough was the citizens’ second choice, and, it turns out, a yellow submarine attached to the ship will be named Boaty McBoatface. So The People were heard…kind of.

And that is democracy on the internet. It is a democracy where The People are heard…kind of. The ability for citizens to express themselves by no means upends structures of power, but it does irritate those who hold power. Invited or not, collective voices rise in volume. Even if they are rejected, they cannot be entirely ignored. Those in power—i.e., those who have the resources to make and enact consequential decisions—have to operate in an environment with permeable boundaries. They are not, and cannot be in an age of digital social technologies, insulated from those over whom they govern.

So Boaty lives, just not in the prominent place the citizenry designated. The internet cannot help but democratize, but in turn, power cannot help but dominate. In this way, Facebookers can’t Dislike and public funding is not compulsory for political candidates, but Facebookers can “React,” and two U.S. presidential contenders maintain campaign finance reform as central to their platforms.


Jenny Davis is on Twitter @Jenny_L_Davis


Outrage over the Bob Marley Snapchat filter was swift following its brief appearance on the mobile application’s platform on April 20 (the “420” pot-smoking holiday). The filter, released in apparent celebration of a day dedicated to consuming marijuana, whether by smoking it or by eating it in the form of a gummy bear or brownie, enabled users to don the hat, dreads, and…blackface!? News outlets covered the issue quickly that day, and The Verge noted the negative reactions voiced on social media in regard to the filter. Tech publisher Wired released a brief article condemning it, calling it racially tone-deaf.

The racial implications of the Bob Marley filter are multifaceted, yet I would like to focus on the larger cultural logic occurring both above and behind the scenes at an organization like Snapchat. The creation of a filter that tapped into blackface iconography demonstrates the complexity of our relationship to various forms of technology – as well as how we choose to represent ourselves through those technologies. French sociologist Jacques Ellul wrote in The Technological Society of ‘technique’ as an encompassing train of thought or practice based on rationality that achieves its desired end. Ellul spoke of technique in relation to advances in technology and human affairs in the aftermath of World War II, yet his emphasis was not on the technology itself, but rather the social processes that informed the technology. This means that in relation to a mobile application like Snapchat we bring our social baggage with us when we use it, and so do developers when they decide to design a new filter. Jessie Daniels addresses racial technique in her current projects regarding colorblind racism and the internet – in which the default for tech insiders is a desire to not see race. This theoretically rich work pulls us out of the notion that technology is neutral within a society that has embedded racial meanings flowing through various actors and institutions, and where those who develop the technology we use on a daily basis are unprepared to acknowledge the racial disparities which persist, and the racial prejudice that can—and does—permeate their designs.

This understanding of technique, when combined with critical race theory, allows us to ask if the presence of blackface in technology is any big surprise in a presumably “post-racial” world. I am positive that any critical race scholar would, without hesitation, answer, “No, it’s not.” And that’s because we are definitively not post-racial. The intentions behind the filter might have been innocent or playful on the developers’ part, but the use of blackface within society has a long and complex history – particularly in regard to its use as a tool to perpetuate systemic racial inequalities in the dehumanizing and “othering” of African Americans in the United States. Hollywood has long been a perpetrator of blackface, and variations of it, utilizing stereotypes that adapt to a given historical moment in society. Yet the racial implications of blackface extend beyond the screens on which we view film. Over the past couple of years, tensions over racialized costumes at Halloween and college parties have demonstrated the reach and continuation of blackface. With such contemporary examples having generated conflict within the general public, it seems as if the tech innovators at Snapchat would have known better. I guess that is just wishful thinking. This movement of blackface from film, to parties, to the mobile app demonstrates what Ellul meant by technique. The continuation of blackface in our society today is not necessarily linked to the technologies that produce it, but to the ways in which individuals develop and utilize those technologies. The presumed innocence of using blackface to ‘celebrate’ an individual within a logic of providing ‘daily-new’ filters for consumer use reflects a gross oversight in what blackface means within the larger cultural sphere of public life.

The continued existence of racism in society is undertaken through multiple shifts and debates, in which no actor or institution stands in isolation. This case of the Bob Marley filter only highlights the ways that historical racist images are allowed to perpetuate themselves in the present – becoming not-so-historical in the process as they reincarnate through new mediums. I have no doubt that some cases might be found of individuals using the filter, or commenting on it, in overtly racist ways. Yet, as mentioned above, voices also sprang up to condemn the filter as racially insensitive across various social media and news sites. The technique of blackface is malleable in that it lingers on through practices that are uncritically carried out by tech developers, but those practices are also challenged through other means across various technologies. Unraveling this technique requires disrupting the structural racism that upholds it. Brushing off the filter as a misstep by Snapchat, or condemning the developers as socially out of touch, is antithetical to the critical race project, a project that is less interested in identifying those who fail at race relations and more interested in identifying, and subverting, the social conditions that allow racism to persist.


Jason A. Smith is a doctoral candidate in Public Sociology at George Mason University whose research centers on the areas of race and the media. His dissertation will look at the Federal Communications Commission and policy decisions regarding diversity in the media for minorities and women. Along with Bhoomi K. Thakore, he is a co-editor of the forthcoming volume Race and Contention in Twenty-first Century US Media (Routledge, May 2016). He is on twitter occasionally.


Headline pic: Source (CC licensed and edited by the author)


“Basic,” “painful,” “embarrassing,” and comparable to necrophilia: a small sampling from the reviews of Fuller House over the last couple of months. The Netflix original, a remake of the classic 1980s/90s sitcom Full House, may become a lasting icon of terrible, terrible, really quite bad moments in television history. The kindest sentiment I came across was expressed by Maureen Ryan in Variety, who generously conceded that “[t]hose who enjoyed the original…and don’t mind its patented blend of cloying sentiment, cutesy mugging and predictable humor might find enjoyment in this unspectacular retread.”

Naturally, I binge watched. Of course, it was as awful as expected. Maybe worse. The remake is identical to the original in both form and feel. The characters are unidimensional, the story is episodic and shallow, the catch-phrases are somehow even less catchy, and oh, the racism. Kimmy Gibbler’s ex-husband is a cringe-worthy Latino caricature whose lustful propensities can hardly be contained, and the 11th episode centers on an Indian-themed party that acts as the foil for copious jokes, features an almost entirely white cast dressed in saris and jamas, and culminates with the party attendees spontaneously breaking into a choreographed dance for which, mysteriously, they each know all of the moves. That last part may or may not be racist, but as a storytelling decision, it asks the audience to suspend an unfair amount of disbelief.

Fuller House could not have been worse if it tried. Which is why I reinterpreted the season as though it did try. And then, Fuller House was very good.

I watched the Fuller House season as though it was not just bad, but actively bad. From this angle, the decision to make a new show that is entirely unchanged from 30 years ago is a smart and creative vehicle for powerful social commentary. For the creators to leave this fact unmentioned is a piece of artistic genius.

That social commentary was the creators’ and actors’ intent is by no means a solid fact, nor even a well-supported one. On the contrary, there is little reason to believe that the show is anything more than it appears at face value. But this is irrelevant. Media consumption is always participatory, and audiences play a creative role. How a show is written matters, as does how the show is read. My reading of Fuller House—as a show out of time—transformed what was vapid, vacuous, and appallingly offensive into a compelling piece of television programming.

Watching a show retrospectively is distinct from remaking the show in a new historical moment. The failings of the latter are decidedly more jarring and less forgivable. New cultural products are accountable to the advances in technology, storytelling, and identity politics of their time. For instance, Archie Bunker’s racial epithets and Ralph Kramden’s continued threats to send his wife “straight to the moon” would never fly today. Similarly, Saved by the Bell would only ever get picked up if Jessie Spano’s caffeine-pill problem was a cocaine problem, intersected with storylines that wouldn’t pay off for several more seasons, and excluded any and all scenes that transitioned from dancing to crying.

Viewers may look back upon older media products with a combination of nostalgia and embarrassment, but also an implicit acceptance about the way things used to be. In contrast, an anachronistic production demonstrates how “the way things used to be” both reflected and informed normative logic. What was once popular was popular for a reason, and most certainly shaped how viewers understood themselves and others. That is, media products are formative, and the kind of culture that media products form becomes starkly clear when viewed from the future.

An anachronistic cultural product shows us to ourselves through our own nostalgia—and in the case of Fuller House, the reflection is unflattering. It not only reveals the viewers’ formative past, but also pushes viewers to face the ways in which contemporary media products are of this particular time. In doing so, it facilitates the uncomfortable question: will the media of today be acceptable in the near and distant future? This question applies to both broadcast and social media, and in many cases, the answer is no, this will not be acceptable. For instance, Facebook’s “real name” policy, Twilight’s implicit romanticization of abuse, and Snapchat’s Bob Marley filter will be recalled as emblems of antiquated values. Today, they are the subject of debate. Thirty years from now, if presented unchanged, they may well be shocking affronts–and yet these are all formative media. They reflect us, shape us, and are part of us. To re-present them out of time, in the manner of Fuller House, insists that the cultural milieu address what it made, what it enjoyed, and what that says about who we were and who we are now.

As a show out of time, Fuller House blares its inadequacy. It’s really just the worst. Luckily, media consumption is always active. So fuck it, I’m reading Fuller House as social commentary and basking in its brilliance. I highly recommend this tactic because next on deck:



Jenny Davis is on Twitter @Jenny_L_Davis

Panama Papers

Hacking is the new social justice activism, and the Panama Papers are the result of an epic hack. Consisting of 11.5 million files and 2.6TB of data, the body of content given to the German newspaper Süddeutsche Zeitung by an anonymous[1] source and then analyzed by the International Consortium of Investigative Journalists (ICIJ) is uniquely behemoth. It puts WikiLeaks’ 1.7GB to shame.

The documents were obtained from Mossack Fonseca. The company is among the largest offshore banking firms, and its emails and other electronic documents tell a compelling (if not entirely surprising) story about untraceable monetary exchanges and the ways that state leaders manage to grow their wealth while maintaining a façade of economic neutrality. By forming shell companies, people can move money without attaching that money to themselves. Doing so is not in itself illegal, but it certainly fosters illicit activity.

Much of the media content around the Panama Papers—both social and broadcast—focuses upon the obvious scandals: Putin’s offshore accounts and the ties between those accounts and members of his inner circle; Iceland’s Sigmundur Gunnlaugsson’s undeclared assets and, consequently, his decision to step down amid citizen protests; China’s Xi Jinping’s implication in The Papers and the subsequent block upon this information for those searching “Panama Papers” from within China.

Indeed, the Panama Papers contain a lot of stuff. And typically, it is the stuff of data leaks that we find so interesting. It’s the stuff that we pore over, highlight, copy, paste, and share. The stuff is the evidence we need to substantiate wrongdoing. In this case, the data concretize a nebulous concern that many already held. The data provide hard evidence on a large scale. We knew that leaders weren’t economically neutral, nor did they make clear the status of their wealth. That people in power act unscrupulously was an open secret that, thanks to the Panama Papers, is now an open fact.

But what is equally interesting, maybe more so, is what the data do not show.

Namely, the U.S. gets very little play. In all of the Panama Papers’ documents and images, only 200 U.S. names appear. Although it may be momentarily tempting for some Americans to wave their stars and stripes in moral superiority, Delaware quickly puts a damper on this impulse. Delaware incorporates more than 1,000,000 businesses. Or in the words of the State of Delaware website, “more than 1,000,000 business entities have made Delaware their legal home.” Thanks to Delaware’s loose tax and regulation policies, and similarly loose policies across the U.S. (especially in Texas and Florida), the United States is a far better tax haven than Panama. In fact, the U.S. is ranked 3rd in the Tax Justice Network’s Financial Secrecy Index, while Panama takes a more modest 13th place.

Data hacking can be, and is, effective in providing evidence of wrongdoing—whether illegal or just ill-mannered. The Panama Papers smacked us in the face with the reality of surreptitious practices among government and corporate elites. But the data do not speak for themselves, and sometimes the silences can speak the loudest.

This makes for an interesting paradox in data-based activism. Revelatory leaks give citizens and law enforcement agencies something to hang on to. They produce a tangible case against those who would prefer their practices remain obscured. At the same time, leaks create a standard by which guilt and innocence can be judged, and offer a workaround for those savvy enough to avoid data-capture.

The U.S. is mostly excluded from the Panama Papers because financial secrecy is built into our legal system. The availability of damning documentation against China, Russia, Iceland, and others trains focus upon those whose practices translate neatly into data points. That is, leaked data make the actions of those who are implicated hyper-visible, and hyper-visibility becomes the metric of blame. A latent effect of this may be that those whose actions cannot be datafied enjoy a buffer of protection.

Hacks are still important, as is a continued citizen-led insistence upon transparency. But this insistence should come with a recognition of everything activist-imposed transparency entails, including its unintended but potentially counterproductive effects.


Jenny L. Davis is on Twitter @Jenny_L_Davis


Headline Pic: Source

[1] Note: not Anonymous


The students in my Cultural Studies of New Media course are currently in the process of giving midterm presentations. The assignment was to keep a technology journal for a week, interview a peer, and interview an older adult. Students were to record their own and others’ experiences with new and social media. Students then collaborated in small groups to pull out themes from their interviews and journals and created presentations addressing the role of new and social media in everyday life.

Across presentations, I’m noticing a fascinating trend in the ways that students and their interviewees talk about the relationship between themselves and their digital stuff– especially mobile phones. They talk about technologies that are “there for you,” and alternatively, recount those moments when the technology “lets you down.” Students recount jubilation and exasperation as they and their interviewees connect, search, lurk, post, and click.

Listening to students, I am reminded that the contemporary human relationship to hardware and software is a decidedly affective one. The way we talk about our devices drips with emotion—lust, frustration, hatred, and love. This strong emotional tenor toward technological objects brings me back to a classic Louis C.K. bit, in which the comedian describes expressions of vitriol toward mobile devices in the wake of communication delays. For Louis, the comedic value is found in the absurdity of such visceral animosity toward a communication medium, coupled with a lack of appreciation for the highly advanced technology that the medium employs.

But I think the story goes deeper than this, and becomes not just funny, but also revelatory about the ways technological apparatuses are deeply embedded in the fabric of intimate life.

If the medium is the message, then the apparatuses of social media—hardware, software, and infrastructures—are how we connect, remember, and find our way when lost. These media, that live in our pockets and in our homes, hold the capacity not just to appease or frustrate, but to comfort, disappoint, and betray.


Jenny Davis emotes on Twitter @Jenny_L_Davis

Headline Pic Via: Source


Horse-race style political opinion polling is an integral part of Western democratic elections, with a history dating back to the 1800s. Political opinion polling originally took hold in the first quarter of the 19th century, when a Pennsylvania straw poll predicted Andrew Jackson’s victory over John Quincy Adams in the bid for President of the United States. The weekly magazine Literary Digest then began conducting national opinion polls in the early 1900s, followed finally by the representative sampling introduced by George Gallup in 1936. Gallup’s polling method is the foundation of political opinion polls to this day (even though the Gallup poll itself recently retired from presidential election predictions).

While polling has been around a long time, new technological developments let pollsters gather data more frequently, analyze and broadcast it more quickly, and project the data to wider audiences. Through these developments, polling data have moved to the center of election coverage. Major news outlets report on the polls as a compulsory part of political segments, candidates cite poll numbers in their speeches and interviews, and tickers scroll poll numbers across both social media feeds and the bottom of television screens. So central has polling become that in-the-moment polling data are superimposed over candidates as they participate in televised debates, creating media events in which performance and analysis converge in real time. So integral has polling become to the election process that it may be difficult to imagine what coverage would look like in the absence of these widely projected metrics.

But the poll-centrism ushered in by new technologies is neither natural nor inevitable, as evidenced by Amy Goodman at Democracy Now, who intentionally excludes polling data from her election coverage. For Goodman, the trouble with polls is that they focus attention in the wrong place. In a recent interview on CNN, Goodman asked (to paraphrase), “how does knowing what other people think of a candidate help me assess my own views on the candidate?” In other words, Goodman’s philosophy calls into question the usefulness of polls in helping voters make informed decisions about who will most effectively govern. Polling metrics ostensibly take away from other, more meaningful information, such as voting records and the substance of candidates’ messages.

Polls have become the pinnacle metric by which commentators discern a candidate’s performance on the ground and on the debate stage. Yet polls are a metric that measures performance through appeal, essentially constructing a popularity contest and reporting the results of that contest as the most meaningful information for voters. While Goodman argues that popularity status is not very useful information for those seeking to elect a leader, popularity status does have an effect on voter behavior nonetheless. The effects of polling are at the heart of old critiques, worth rehashing in light of an increasingly pollcentric media environment.

Just as political opinion polls have been around a long time, so too have concerns over their effects. As early as the late 1800s commentators suggested that projections may influence how voters view candidates and in turn, affect voting behavior. In short, polling data don’t just take the public temperature, they set it.

Puck, volume 16, number 395, October 1, 1884, pages 72-73 (the “bandwagon effect”)

Gallup apparently spent a great deal of time and energy attempting to produce empirical evidence that would dispute the argument that polls influence—rather than simply measure—public opinion. His efforts were largely unsuccessful. Instead, empirical research shows that polls do in fact influence public opinion and that the most prevalent effect is what is known as the “bandwagon effect.”

The bandwagon effect is such that when people learn about a candidate doing well, they decide to support that candidate. That is, voters “jump on the bandwagon” of a winning contender. This has been the linchpin of Trump’s 2016 presidential campaign. When questioned about his suitability to hold the office of President, Trump invariably touts how well he is doing in the polls. And so far, this strategy has worked. The better he does in the polls, the better he does in the polls, and ultimately, the better he does among primary voters. That is, the bandwagon effect represents a self-fulfilling prophecy. Projecting success onto someone actually helps that person achieve success. In this case, connecting a candidate to poll numbers that indicate high levels of support works to earn the candidate still more support.

The bandwagon effect paints a decidedly unflattering picture of voters, who apparently can be swayed (and quite effectively) by the documented opinions of others. However, drawing on a little social psychology, the self-fulfilling-prophecy of a bandwagon effect makes perfect sense and dovetails with status processes that permeate everyday life.

Empirical research in Status Characteristics and Expectation States Theory (SCET) shows that people develop expectations of competence about one another based upon personal characteristics (race, class, gender, physical attractiveness) and also specific skills. Those who enjoy greater presumptions of competence are given more opportunities to talk during interaction, receive deferential treatment from those with whom they interact, and tend to earn higher levels of rewards. Rewards themselves then become status indicators, such that those who have more rewards are granted greater expectations of competence and, relatedly, higher status. That is, rewards beget rewards. In the case of elections, the ultimate reward is votes, and through election polling, voter support begets voter support.

Knowing this, it is unsurprising that the bandwagon effect is both strong and persistent. When voters see a candidate receive support, they grant that candidate greater competence and are then more likely to support the candidate themselves. And this is what political opinion polls do: they activate status processes that allow successful candidates to snowball into victory and push those with less support to quickly wither away.

Of course, polls are not deterministic. This is clear in the comebacks, upsets, disappointments, and flawed trajectories that make the political spectacle so spectacular. Self-fulfilling prophecies and bandwagons push candidates towards victory or defeat rather than directly causing wins and losses. Candidates still have to contend with their voting records, debate flubs, and personal histories. Voters can and do research candidates of interest. However, the push towards victory and defeat shepherded in by political opinion polling is troubling given not only the prevalence of polls, but also the almost unquestioned significance of the metrics polls produce.

The science of polling has become increasingly precise. Pollsters can discern granular trends about which groups support which candidates, for what reasons, and under what conditions. Highly skilled in survey design and statistical analysis and armed with sophisticated distribution and analytic software, pollsters have the tools to learn a lot. But perhaps the most useful place for those polls is behind the proverbial closed doors of campaign headquarters, where candidates and their staff can use the numbers to assess their own performances and adjust accordingly. Polling data can also be useful for social scientists looking to better understand political processes. To be sure, however, publicly projected polling data do more than they record.

The effects of opinion polling matter not just for their influence, but for their distraction. The attention economy is not limitless, and when popularity data become the pinnacle metric, substance takes a back seat.


Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

Facebook Reactions don’t grant expressive freedom; they tighten the platform’s affective control.

The range of human emotion is both vast and deep. We are tortured, elated, and ambivalent; we are bored and antsy and enthralled; we project and introspect and seek solace and seek solitude. Emotions are heavy, except when they’re light. So complex is human affect that artists and poets make careers attempting to capture the elusive sentiments that drive us, incapacitate us, bring us together, and tear us apart. Popular communication media are charged with the overwhelming task of facilitating the expression of human emotion, by humans who are so often unsure how they should—or even do—feel. For a long time, Facebook handled this with a “Like” button.

Last week, the Facebook team finally expanded the emotional repertoire available to users. “Reactions,” as Facebook calls them, include not only “Like,” but also “Love,” “Haha,” “Wow,” “Sad,” and “Angry.” The “Like” option is still signified by a version of the iconic blue thumbs-up, while the other Reactions are signified by yellow emoji faces.

Ostensibly, Facebook’s Reactions give users the opportunity to more adequately respond to others, given the desire to do so with only the effort of a single click. The available Reaction categories are derived from the most common one-word comments people left on their friends’ posts, combined with sentiments users commonly expressed through “stickers.” At a glance, this looks like greater expressive capacity for users, rooted in the sentimental expressions of users themselves. And this is exactly how Facebook bills the change—it captures the range of users’ emotions and gives those emotions back to users as expressive tools.

However, the notion of greater expressive capacity through the Facebook platform is not only illusory, but masks the way that Reactions actually strengthen Facebook’s affective control.

Although Reactions offer an emotional lexicon that affords more granularity than the universal “Like,” they keep the platform squarely within a happiness paradigm. Facebook maintains a vested interest in keeping the site a generally cheerful place. Advertisers post there, and it wouldn’t do to have users who openly dissent against those who paid for ad space. Moreover, advertisers are willing to pay because users go there, and people feel (at least a little) bad when they read disproportionately negative content. Keeping things cheerful keeps users coming back, which keeps eyeballs for sale and ad space more valuable. That Facebook designed Reactions based on content produced by users themselves is of little meaning, as the site has always facilitated a positive affective bent. Engineers are therefore pulling sentiments from a user-base whose emotive expressions were already shaped by a precisely designed platform.

Of note, alternate expressive options are only available after toggling or long-holding over the still-compulsory “Like” option. This is reminiscent of the way Facebook added more granular gender identity options, but relegated 56 of the 58 options to an “Other” category, available only as an alternative to the Male-Female binary. Just as they reorganized gender expression to reinforce cis-normativity, Facebook has now reorganized affective expression in a way that normalizes cheer.

Moreover, all of the Reactions, including negative ones, are signified with adorable emoji faces. These emoji express sadness and anger as a little bit silly, not too threatening, not too real. “Like” might not be the appropriate response to the passing of a loved one, but bulbous tears streaming down a banana yellow face feel downright disrespectful. Imagine posting a brow-furrowed Angry emoji in response to a friend’s personal story of sexual assault. It’s the symbolic equivalent of “that rascal!!” and woefully inadequate for anything that provokes real anger.

The cartoonishness of Reactions is most certainly intentional. It lets people express a degree of anger or sadness, while easing the transition into the remaining lines of News Feed filled with cute memes, funny text message screen captures, and images of friends on vacation. H.L. Starnes once compared bad and sad news on Facebook to the chocolate river in the 1971 film Willy Wonka & the Chocolate Factory, in which those aboard float through “a bright garden of colorful sweets” and then into “an ever-quickening barrage of lights and awful imagery only to emerge on the other side to continue a fantastical tour of candy making delight.” Facebook Reactions let users quickly pause for something awful, but then seamlessly continue on their fantastical George Takei-filled tour.

In contrast to the seeming expansion of expressive capacity, Facebook Reactions strengthen Facebook’s hold on the overall sentiment of the site. Reactions don’t just offer more options, but give users a particular set of tools with which they can efficiently engage bad news. It is challenging to respond to bad news. The person who shares bad news is vulnerable, and the task of crafting a thoughtful reply requires effort and can be quite uncomfortable. Reactions give users an “out,” and give Facebook control over how negative sentiments manifest. By encouraging emotional expression through pre-fab Reactions, Facebook does not foster expressive autonomy, but instead tightens its hold on how sentiment takes shape.


Jenny Davis is on Twitter @Jenny_L_Davis


Drone

Nick Bilton’s neighbor flew a drone outside the window of Bilton’s home office. It skeeved him out for a minute, but he got over it. His wife was more skeeved out. She may or may not have gotten over it (but probably not). Bilton wrote about the incident for The New York Times, where he works as a columnist. Ultimately, Bilton’s story concludes that drone watching is no big deal, analogous to peeping-via-binoculars, and that the best response is to simply ignore drone-watchers until they fly their devices away. With all of this, I disagree.

Drone privacy is a fraught issue, one of the many in which slow legislative processes have been outpaced by technological developments. While there remains a paucity of personal-drone laws, the case precedent trends towards punishing those who damage other people’s drones, while protecting the drone owners who fly their devices into airspace around private homes. Through legal precedent, then, privacy takes a backseat to property.

Bilton spends the majority of his article parsing this legal landscape, and tying the extant legal battles to his own experience of being watched. He begins with an account of looking out his window to see a buzzing drone hovering outside. He is both amused and disturbed, as the drone intrusion took place while he was already writing an article about drones. He reports feeling first violated and intruded upon, but this feeling quickly fades, morphing into quite the opposite. He says:  

At first, I was upset and felt spied upon. But the more I thought about it, the more I came to the opposite conclusion. Maybe it’s because I’ve become inured to the reality of being monitored 24/7, whether it’s through surveillance cameras or Internet browsers. I see little difference between a drone hovering near my window, and someone standing across the street with a pair of binoculars. Both can peer into my office.

Bilton’s response is basically “well we’re already constantly surveilled and always have been, so who cares if the technology is now aerial and our neighbors join the viewership?” Apparently, his wife cares.

Bilton concedes that his wife is far more put off by the neighborly drone visit than he.  She considers getting a shotgun, he reports. Though Bilton gives cursory attention to his wife’s view and ponders the legal options for people who feel violated by drones, he eventually concludes with this dismissive advice:  “do what I did, which was to wait about 15 seconds until my neighbor got bored and flew the drone somewhere else.”

First, drones and binoculars are not the same. Not even close. Although Bilton acknowledges that drones are unique in their capacity to “…reach into crevices of your home… and see from more invasive vantage points,” he ignores their capacity for documentation. It’s not just that drone technology grants viewers access to more and more granular images, but that images are produced, rather than merely experienced in fleeting (albeit violative) moments of looking. Binoculars archive voyeuristic images only within the viewer’s memory. Drones externalize the archive, with the potential to distribute it.

But more importantly, neither drone nor binocular forms of spying are okay. Ever. It’s positively strange to me that Bilton’s defense of drones entails equating them with analog forms of peeping. I certainly wouldn’t consider it benign to find a person hiding in the bushes across the street watching me in my bedroom. I doubt Bilton’s wife would, either.

Dismissal is a luxury, one that Bilton apparently enjoys. Although he presents a counterargument embodied in his wife’s experience, Bilton treats her account as just another opinion—a balance point rounding out his less affected reaction. He did not, however, investigate why he and his wife had such dissimilar reactions. Had Bilton focused on the underlying cause of that fracture, the luxury of dismissal would likely have emerged. That cause, simply, is social position—in this case, gender.

The effects of surveillance are far from uniform. For many women, queer and trans* people, voyeurism has been and remains a reality to contend with and avoid. For people of color, surveillance is a key contributing factor in the disastrous rates of mass incarceration. Eschewing privacy has different consequences for different people. Those for whom the consequences are severe know this intrinsically. Those for whom the consequences are minimal can remain comfortably naïve.

Dismissing personal drones as technological objects that fly away after 15 seconds when their operators get bored ignores the staying power that those 15 seconds can entail. Such as the way those 15 seconds stay with watched subjects psychologically, imparting an omnipresent wariness even within the sacred confines of the home; or the way the 15 seconds stay with watched subjects as documented artifacts, distributed in ways over which the watched has no control.

For Bilton, surveillance is an inconvenient but inevitable reality. Pushback seems futile, so why bother? But intrusive surveillance is only inevitable as long as people acquiesce, and acquiescence can be most effectively disrupted by centering surveillance analyses on the perspectives of those at the margins—those for whom “inevitable” surveillance can be devastating.


Jenny Davis is on Twitter @Jenny_L_Davis



Today marks the beginning of the official presidential primaries, kicking off with voting in Iowa. While the political pundits and campaign camps scrutinize poll numbers, attendance trends, and even the weather, I find myself poring over this fascinating protocol document put out by NPR.

Admittedly, I’ve never voted in a primary election. I’m going to this year. Word has it, I’m not alone. With intense fractures both within and between parties, a lot is at stake in this election, and political analysts say that the primary season is likely to see participation from those who normally abstain altogether, or those like me, who have historically saved their participation for the national election.

So what happens during primary voting? The answer is that it varies drastically, but Iowa has a particularly raucous caucus (<< I know).

In learning about the Iowa primaries, I am most struck by their charm, and relatedly, the simplicity of their technological apparatuses. In Iowa today, the eminent technologies include pencils, paper, voices, and feet.

Here’s how it works: Voters register with a party, and meet with others in their party at a designated venue—church basements, gyms, the occasional grain elevator. Representatives of each presidential candidate make a case to sway voters. Here, the rules for Democrats and Republicans split off. Democrats physically move their bodies into areas of the room, which represent support for a particular candidate. They try to convince one another to come over to their candidate’s areas. Each candidate must attract at least 15% of the precinct’s caucus-goers to remain viable. Those voting for a candidate who receives less than 15% of the vote have to redistribute themselves among the more popular candidates. Votes are tallied by the final number of bodies in each area. Republicans do not require a 15% minimum and, instead of voting with their bodies, vote by paper ballot. The whole thing is glaringly low tech.
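The Democratic side’s viability rule can be sketched as a toy procedure. This is a simplified illustration, not official caucus math: in particular, sending stranded voters to the largest viable group is my own assumption, since real caucus-goers realign however they like.

```python
def caucus_round(counts, threshold=0.15):
    """Toy sketch of Democratic caucus viability.

    counts: dict mapping candidate -> number of supporters in the room.
    Groups below the threshold share of the room are dissolved; as a
    simplifying assumption, their supporters join the largest viable group.
    """
    total = sum(counts.values())
    cutoff = threshold * total
    # Keep only the candidates who cleared the viability cutoff.
    viable = {c: n for c, n in counts.items() if n >= cutoff}
    # Supporters of non-viable candidates must realign.
    stranded = total - sum(viable.values())
    if viable and stranded:
        top = max(viable, key=viable.get)
        viable[top] += stranded
    return viable

# 100 caucus-goers: the 8% group falls short of the 15% cutoff,
# so its supporters redistribute.
print(caucus_round({"Clinton": 49, "Sanders": 43, "O'Malley": 8}))
# -> {'Clinton': 57, 'Sanders': 43}
```

The final tally is simply the count of bodies in each remaining corner of the room.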

Okay, so votes are reported with a Microsoft-created app, but up until the reporting, it’s the kind of voting we might expect an elementary school class to engage in when deciding on toppings for a pizza party.

This really is a darling process. People of like-mind congregate to debate, celebrate, and enact democracy together. I imagine that the caucuses, in which bodies are the main metric, entail enthusiastic yelling, jumping, and warm hugs or playful jeers as voters shift from one corner of the room to the next. I have a tugging desire to become Iowan just to experience the redistribution process after O’Malley inevitably falls short of his 15%.

I can’t help but wonder, though, has it always been so darling? Sure, Iowans have long caucused in this manner, but is the charm retrospective? Experiences are always temporally embedded, and paper, pencils, voices, and feet were once normative technologies rather than retro throwbacks—much as cane sugar in soda was once the norm rather than a novelty.

Indeed, the caucuses exemplify the kind of community gathering that Robert Putnam mourned the loss of in his morosely titled work, Bowling Alone. While I disagree with Putnam’s thesis that we have lost community, it is certainly clear that community has changed in form. When traditional kinds of gatherings—like the Iowa caucuses—still remain, they are no longer just events, but relics of a time past, quaint and campy, and wonderfully out of place.


Jenny Davis is on Twitter @Jenny_L_Davis





Almost two years ago, Facebook waved the rainbow flag and metaphorically opened its doors to all of the folks who identify outside of the gender binary. Before Facebook announced this change in February of 2014, users were only able to select ‘male’ or ‘female.’ Suddenly, with this software modification, users could choose a ‘custom’ gender that offered 56 new options (including agender, gender non-conforming, genderqueer, non-binary, and transgender). Leaving aside the troubling, but predictable, transphobic reactions, many were quick to praise the company. These reactions could be summarized as: ‘Wow, Facebook, you are really in tune with the LGBTQ community and on the cutting edge of the trans rights movement. Bravo!’ Indeed, it is easy to acknowledge the progressive trajectory that this shift signifies, but we must also look beyond the optics to assess the specific programming decisions that led to this moment.

To be fair, many were also quick to point to the limitations of the custom gender solution. For example, why wasn’t a freeform text field used? Google+ also shifted to a custom solution 10 months after Facebook, but they did make use of a freeform text field, allowing users to enter any label they prefer. By February of 2015, Facebook followed suit (at least for those who select US-English).

There was also another set of responses with further critiques: more granular options for gender identification could entail increased vulnerability for groups who are already marginalized. Perfecting a company’s capacity to turn gender into data also heightens its capacity for documentation and surveillance of its users. Yet the beneficiaries of this data are not always visible. This is concerning, particularly when we recall that marginalization is closely associated with discriminatory treatment. Transgender women suffer disproportionate levels of hate violence from police, service providers, and members of the public, and trans women of color are increasingly the victims of murder.

Alongside these horrific realities, there is more to the story – hidden in a deeper layer of Facebook’s software. When Facebook’s software was programmed to accept 56 gender identities beyond the binary, it was also programmed to misgender users when it translated those identities into data to be stored in the database. In my recent article in New Media & Society, ‘The gender binary will not be deprogrammed: Ten years of coding gender on Facebook,’ I expose this finding in the midst of a much broader examination of a decade’s worth of programming decisions that have been geared towards creating a binary set of users.

To make sure we are all on the same page, perhaps the first issue to clarify is that Facebook is not just the blue and white screen filled with pictures of your friends, frenemies, and their children. That blue and white screen is the graphic user interface – it is made for you to see and use. Other layers are hidden and largely inaccessible to the average user. Those proprietary algorithms that filter what is populated in your news feed that you keep hearing about? As a user, you can see traces of algorithms on the user interface (the outcome of decisions about what post may interest you most) but you don’t see the code that they depend on to function. The same is true of the database – the central component of any social media software. The database stores and maintains information about every user and a host of software processes are constantly accessing the database in order to, for example, populate information on the user interface. This work goes on behind the scenes.

When Facebook was first launched back in 2004, gender was not a field that appeared on the sign-up page but it did find a home on profile pages. While it is possible that, back in 2004, Mark Zuckerberg had already dreamed that Facebook would become the financial success that it has today, what is more certain is that he did not consider gender to be a vital piece of data. This is because gender was programmed as a non-mandatory, binary field on profile pages in 2004, which meant it was possible for users to avoid selecting ‘male’ or ‘female’ by leaving the field blank, regardless of their reason for doing so. As I explain in detail in my article, this early design decision became a thorny issue for Facebook, leading to multiple attempts to remove users who had not provided a binary ID from the platform.

Yet there was always a placeholder for users who chose to exist outside of the binary deep in the software’s database. Since gender was programmed as non-mandatory, the database had to permit three values: 1 = female, 2 = male, and 0 = undefined. Over time, gender was granted space on the sign-up page as well – this time as a mandatory, binary field. In fact, despite the release of the custom gender project (the same one that offered 56 additional gender options), the sign-up page continues to be limited to a mandatory, binary field. As a result, anyone who joins Facebook as a new user must identify their gender as a binary before they can access the non-binary options on the profile page. According to Facebook’s Terms of Service, anyone who identifies outside of the binary ends up violating the terms – “You will not provide any false personal information on Facebook” – since the programmed field leaves them with no alternative if they wish to join the platform.
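The three-value field described above can be sketched in a few lines. This is a hypothetical reconstruction: the constant and function names are mine, and Facebook’s actual schema is proprietary; only the stored values (1 = female, 2 = male, 0 = undefined) come from the research.

```python
# Hypothetical sketch of a non-mandatory binary gender field.
# Because the field may be left blank, 0 doubles as "no answer,"
# creating a de facto third category deep in the database.
GENDER_UNDEFINED = 0
GENDER_FEMALE = 1
GENDER_MALE = 2

def store_profile_gender(selection):
    """Map a profile-form selection to the stored value.

    Anything other than an explicit 'female' or 'male' choice,
    including a blank field, falls through to 0 (undefined).
    """
    return {"female": GENDER_FEMALE, "male": GENDER_MALE}.get(
        selection, GENDER_UNDEFINED
    )

print(store_profile_gender("female"))  # -> 1
print(store_profile_gender(""))        # -> 0
```

The later sign-up page removed that fall-through: by making the field mandatory and binary, it guaranteed every new row a value of 1 or 2.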

Over time Facebook also began to define what makes a user ‘authentic’ and ‘real.’ In reaction to a recent open letter demanding an end to ‘culturally biased and technically flawed’ ‘authentic identity’ policies that endanger and disrespect users, the company publicly defended their ‘authentic’ strategy as the best way to make Facebook ‘safer.’ This defense conceals another motivation for embracing ‘authentic’ identities: Facebook’s lucrative advertising and marketing clients seek a data set made up of ‘real’ people and Facebook’s prospectus (released as part of their 2012 IPO) caters to this desire by highlighting ‘authentic identity’ as central to both ‘the Facebook experience’ and ‘the future of the web.’

In my article, I argue that this corporate logic was also an important motivator for Facebook to design their software in a way that misgenders users. Marketable and profitable data about gender comes in one format: binary. When I explored the implications of the February 2014 custom gender project for Facebook’s database – which involved using the Graph API Explorer tool to query the database – I discovered that the gender stored for each user is not based on the gender they selected, it is based on the pronoun they selected. To complete the selection of a ‘custom’ gender on Facebook, users are required to select a preferred pronoun (he, she, or they). Through my database queries, however, a user’s gender only registered as ‘male’ or ‘female.’ If a user selected ‘gender questioning’ and the pronoun ‘she,’ for instance, the database would store ‘female’ as that user’s gender despite their identification as ‘gender questioning.’ In the situation where the pronoun ‘they’ was selected, no information about gender appeared, as though these users have no gender at all. As a result, Facebook is able to offer advertising, marketing, and any other third party clients a data set that is regulated by a binary logic. The data set appears to be authentic, proves to be highly marketable, and yet contains inauthentic, misgendered users. This re-classification system is invisible to the trans and gender non-conforming users who now identify as ‘custom.’
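The misgendering logic described above can be sketched as a mapping from the selected pronoun, rather than the selected identity, to the stored value. This is a toy reconstruction of the observed behavior: the function and variable names are my own, not Facebook’s code.

```python
def stored_gender(custom_gender, pronoun):
    """Toy reconstruction of the behavior the database queries revealed:
    the stored gender derives from the chosen pronoun, and the chosen
    identity (custom_gender) is discarded entirely.
    """
    pronoun_to_stored = {"she": "female", "he": "male"}
    # 'they' maps to nothing: queries return no gender at all.
    return pronoun_to_stored.get(pronoun)

print(stored_gender("gender questioning", "she"))  # -> female
print(stored_gender("non-binary", "they"))         # -> None
```

Note that `custom_gender` never influences the return value, which is precisely the point: the identity a user selects is invisible to the stored data.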

When Facebook waved the rainbow flag, there was no indication that ad targeting capabilities would include non-binary genders. And, to be clear, my analysis is not geared towards improving database programming practices in order to remedy the fraught targeting capabilities on offer to advertisers and marketers. Instead, I seek to connect the practice of actively misgendering trans and gender non-conforming users to the hate crimes I mentioned earlier. The same hegemonic regimes of gender control that perpetuate the violence and discrimination disproportionately affecting this community are reinforced by Facebook’s programming practices. In the end, my principal concern here is that software has the capacity to enact this symbolic violence invisibly by burying it deep in the software’s core.

Rena Bivens (@renabivens) is an Assistant Professor in the School of Journalism and Communication at Carleton University in Ottawa, Canada. Her research interrogates how normative design practices become embedded in media technologies, including social media software, mobile phone apps, and technologies associated with television news production. Rena is the author of Digital Currents: How Technology and the Public are Shaping TV News (University of Toronto Press 2014) and her work has appeared in New Media & Society, Feminist Media Studies, the International Journal of Communication, and Journalism Practice.

This essay is cross-posted at Culture Digitally

Headline pic from Bivens’ recent article The gender binary will not be deprogrammed: Ten years of coding gender on Facebook