Today marks the beginning of the official presidential primaries, kicking off with the caucuses in Iowa. While the political pundits and campaign camps scrutinize poll numbers, attendance trends, and even the weather, I find myself poring over this fascinating protocol document put out by NPR.
Admittedly, I’ve never voted in a primary election. I’m going to this year. Word has it, I’m not alone. With intense fractures both within and between parties, a lot is at stake in this election, and political analysts say that the primary season is likely to see participation from those who normally abstain altogether, or those like me, who have historically saved their participation for the general election.
So what happens during primary voting? The answer is that it varies drastically, but Iowa has a particularly raucous caucus (<< I know).
In learning about the Iowa caucuses, I am most struck by their charm, and relatedly, the simplicity of their technological apparatuses. In Iowa today, the eminent technologies include pencils, paper, voices, and feet.
Here’s how it works: Voters register with a party and meet with others in their party at a designated venue—church basements, gyms, the occasional grain elevator. Representatives of each presidential candidate make a case to sway voters. Here, the rules for Democrats and Republicans split off. Democrats physically move into areas of the room that represent support for a particular candidate, and try to convince one another to come over to their candidate’s area. Each candidate must win at least 15% of the precinct’s caucus-goers to remain viable. Those backing a candidate who falls short of 15% have to redistribute themselves among the more popular candidates. Votes are tallied by the number of bodies in each final area. Republicans do not require a 15% minimum and, instead of voting with their bodies, vote by paper ballot. The whole thing is glaringly low tech.
Okay, so votes are reported with a Microsoft-created app, but up until the reporting, it’s the kind of voting we might expect an elementary school class to engage in when deciding on toppings for a pizza party.
This really is a darling process. People of like mind congregate to debate, celebrate, and enact democracy together. I imagine that the Democratic caucuses, in which bodies are the main metric, entail enthusiastic yelling, jumping, and warm hugs or playful jeers as voters shift from one corner of the room to the next. I have a tugging desire to become Iowan just to experience the redistribution process after O’Malley inevitably falls short of his 15%.
I can’t help but wonder, though, has it always been so darling? Sure, Iowans have long caucused in this manner, but is the charm retrospective? Experiences are always temporally embedded, and paper, pencils, voices, and feet were once normative technologies rather than retro throwbacks—much like cane sugar in soda used not to be a novelty.
Indeed, the caucuses exemplify the kind of community gathering that Robert Putnam mourned the loss of in his morosely titled work, Bowling Alone. While I disagree with Putnam’s thesis that we have lost community, it is certainly clear that community has changed in form. Where traditional kinds of gatherings—like the Iowa caucuses—still remain, they are no longer just events, but relics of a time past, quaint and campy, and wonderfully out of place.
Almost two years ago, Facebook waved the rainbow flag and metaphorically opened its doors to all of the folks who identify outside of the gender binary. Before Facebook announced this change in February of 2014, users were only able to select ‘male’ or ‘female.’ Suddenly, with this software modification, users could choose a ‘custom’ gender that offered 56 new options (including agender, gender non-conforming, genderqueer, non-binary, and transgender). Leaving aside the troubling, but predictable, transphobic reactions, many were quick to praise the company. These reactions could be summarized as: ‘Wow, Facebook, you are really in tune with the LGBTQ community and on the cutting edge of the trans rights movement. Bravo!’ Indeed, it is easy to acknowledge the progressive trajectory that this shift signifies, but we must also look beyond the optics to assess the specific programming decisions that led to this moment.
To be fair, many were also quick to point to the limitations of the custom gender solution. For example, why wasn’t a freeform text field used? Google+ also shifted to a custom solution 10 months after Facebook, but they did make use of a freeform text field, allowing users to enter any label they prefer. By February of 2015, Facebook followed suit (at least for those who select US-English).
There was also another set of responses with further critiques: more granular options for gender identification could entail increased vulnerability for groups who are already marginalized. Perfecting a company’s capacity to turn gender into data means a greater capacity to document and surveil its users. Yet the beneficiaries of this data are not always visible. This is concerning, particularly when we recall that marginalization is closely associated with discriminatory treatment. Transgender women suffer disproportionate levels of hate violence from police, service providers, and members of the public, and trans women of color, in particular, are increasingly the victims of murder.
Alongside these horrific realities, there is more to the story – hidden in a deeper layer of Facebook’s software. When Facebook’s software was programmed to accept 56 gender identities beyond the binary, it was also programmed to misgender users when it translated those identities into data to be stored in the database. In my recent article in New Media & Society, ‘The gender binary will not be deprogrammed: Ten years of coding gender on Facebook,’ I expose this finding in the midst of a much broader examination of a decade’s worth of programming decisions that have been geared towards creating a binary set of users.
To make sure we are all on the same page, perhaps the first issue to clarify is that Facebook is not just the blue and white screen filled with pictures of your friends, frenemies, and their children. That blue and white screen is the graphical user interface—it is made for you to see and use. Other layers are hidden and largely inaccessible to the average user. Those proprietary algorithms you keep hearing about, the ones that filter what is populated in your news feed? As a user, you can see traces of algorithms on the user interface (the outcome of decisions about which post may interest you most), but you don’t see the code that they depend on to function. The same is true of the database—the central component of any social media software. The database stores and maintains information about every user, and a host of software processes are constantly accessing the database in order to, for example, populate information on the user interface. This work goes on behind the scenes.
When Facebook was first launched back in 2004, gender was not a field that appeared on the sign-up page but it did find a home on profile pages. While it is possible that, back in 2004, Mark Zuckerberg had already dreamed that Facebook would become the financial success that it has today, what is more certain is that he did not consider gender to be a vital piece of data. This is because gender was programmed as a non-mandatory, binary field on profile pages in 2004, which meant it was possible for users to avoid selecting ‘male’ or ‘female’ by leaving the field blank, regardless of their reason for doing so. As I explain in detail in my article, this early design decision became a thorny issue for Facebook, leading to multiple attempts to remove users who had not provided a binary ID from the platform.
Yet there was always a placeholder for users who chose to exist outside of the binary deep in the software’s database. Since gender was programmed as non-mandatory, the database had to permit three values: 1 = female, 2 = male, and 0 = undefined. Over time, gender was granted space on the sign-up page as well – this time as a mandatory, binary field. In fact, despite the release of the custom gender project (the same one that offered 56 additional gender options), the sign-up page continues to be limited to a mandatory, binary field. As a result, anyone who joins Facebook as a new user must identify their gender as a binary before they can access the non-binary options on the profile page. According to Facebook’s Terms of Service, anyone who identifies outside of the binary ends up violating the terms – “You will not provide any false personal information on Facebook” – since the programmed field leaves them with no alternative if they wish to join the platform.
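The storage logic described above can be sketched in a few lines. This is a hypothetical reconstruction, not Facebook’s actual code; the only details taken from the reported findings are the three stored values (1 = female, 2 = male, 0 = undefined), while the function and variable names are illustrative.

```python
# Hypothetical sketch of a non-mandatory, binary gender field.
# The stored values (1 = female, 2 = male, 0 = undefined) are those
# reported above; all names here are illustrative only.
GENDER_CODES = {"female": 1, "male": 2}

def encode_gender(selection):
    """Map a user's (optional) binary selection to a stored value.

    Leaving the field blank (None) falls through to 0, the 'undefined'
    placeholder that the non-mandatory design required the database
    to permit.
    """
    return GENDER_CODES.get(selection, 0)

print(encode_gender("female"))  # 1
print(encode_gender("male"))    # 2
print(encode_gender(None))      # 0 (field left blank)
```

The point of the sketch is that the third value exists only as a side effect of the field being optional: ‘undefined’ is a default, not an identity the interface ever offered.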
Over time Facebook also began to define what makes a user ‘authentic’ and ‘real.’ In reaction to a recent open letter demanding an end to ‘culturally biased and technically flawed’ ‘authentic identity’ policies that endanger and disrespect users, the company publically defended their ‘authentic’ strategy as the best way to make Facebook ‘safer.’ This defense conceals another motivation for embracing ‘authentic’ identities: Facebook’s lucrative advertising and marketing clients seek a data set made up of ‘real’ people and Facebook’s prospectus (released as part of their 2012 IPO) caters to this desire by highlighting ‘authentic identity’ as central to both ‘the Facebook experience’ and ‘the future of the web.’
In my article, I argue that this corporate logic was also an important motivator for Facebook to design their software in a way that misgenders users. Marketable and profitable data about gender comes in one format: binary. When I explored the implications of the February 2014 custom gender project for Facebook’s database – which involved using the Graph API Explorer tool to query the database – I discovered that the gender stored for each user is not based on the gender they selected, it is based on the pronoun they selected. To complete the selection of a ‘custom’ gender on Facebook, users are required to select a preferred pronoun (he, she, or they). Through my database queries, however, a user’s gender only registered as ‘male’ or ‘female.’ If a user selected ‘gender questioning’ and the pronoun ‘she,’ for instance, the database would store ‘female’ as that user’s gender despite their identification as ‘gender questioning.’ In the situation where the pronoun ‘they’ was selected, no information about gender appeared, as though these users have no gender at all. As a result, Facebook is able to offer advertising, marketing, and any other third party clients a data set that is regulated by a binary logic. The data set appears to be authentic, proves to be highly marketable, and yet contains inauthentic, misgendered users. This re-classification system is invisible to the trans and gender non-conforming users who now identify as ‘custom.’
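The re-classification just described can be expressed as a short sketch. Again, this is a hypothetical reconstruction with made-up names; the only facts drawn from the findings above are that the stored gender follows the pronoun rather than the selected identity, and that the pronoun ‘they’ yields no stored gender at all.

```python
# Hypothetical reconstruction of the pronoun-based re-classification
# described above. A user's selected custom identity is discarded;
# only the binary value implied by their required pronoun is stored.
PRONOUN_TO_STORED = {"she": "female", "he": "male"}  # "they" stores nothing

def stored_gender(custom_identity, pronoun):
    """Return what a database query would report for this user."""
    # The custom identity (e.g. 'gender questioning') never reaches
    # the stored record, so it is unused here by design.
    return PRONOUN_TO_STORED.get(pronoun)  # None when the pronoun is "they"

print(stored_gender("gender questioning", "she"))  # female
print(stored_gender("non-binary", "they"))         # None
```

The sketch makes the asymmetry visible: the parameter carrying the user’s actual identity plays no role in what the database reports, which is exactly what makes the resulting data set binary, marketable, and misgendered at once.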
When Facebook waved the rainbow flag, there was no indication that ad targeting capabilities would include non-binary genders. And, to be clear, my analysis is not geared towards improving database programming practices in order to remedy the fraught targeting capabilities on offer to advertisers and marketers. Instead, I seek to connect the practice of actively misgendering trans and gender non-conforming users to the hate crimes I mentioned earlier. The same hegemonic regimes of gender control that perpetuate the violence and discrimination disproportionately affecting this community are reinforced by Facebook’s programming practices. In the end, my principal concern here is that software has the capacity to enact this symbolic violence invisibly by burying it deep in the software’s core.
Rena Bivens (@renabivens) is an Assistant Professor in the School of Journalism and Communication at Carleton University in Ottawa, Canada. Her research interrogates how normative design practices become embedded in media technologies, including social media software, mobile phone apps, and technologies associated with television news production. Rena is the author of Digital Currents: How Technology and the Public are Shaping TV News (University of Toronto Press 2014) and her work has appeared in New Media & Society, Feminist Media Studies, the International Journal of Communication, and Journalism Practice.
When Sarah Palin endorsed Donald Trump, it provided political commentators with a goldmine of analytic fodder. Working through the Palin-Trump team up, there is a lot to untangle.
For instance, how do we make sense of a political climate in which the 2008 vice presidential candidate, who so damaged the presidential campaign of her running mate that he could barely mask his contempt for her on election night, is now a desirable connection?
Or what dynamics were in play that pushed Palin to Trump rather than Cruz, especially given Palin’s support of Cruz in his senate bid?
Or could her endorsement backfire, finally impressing upon moderate Republicans the urgency of nominating Rubio or Bush? And relatedly, what’s up with Rubio falling into the mainstream/moderate category?
While commentators touched on a few of these things, largely, the conversation was dominated by another topic entirely: Sarah Palin’s sweater.
To be clear, I’m not Ewwwing her sweater. In fact, I refuse to comment on her sweater. Because I’m not a sexist asshole. Rather, I’m expressing my disgust that mainstream Republicans and self-proclaimed progressives alike coalesced on social media to discuss a politician’s wardrobe—its attractiveness and appropriateness—and that this story then became the content of broadcast news. I heard CNN news anchor and political correspondent Dana Bash spend about 3 minutes on the sweater, its other appearances (CBS Sunday in November), and how the sweater fits into Palin’s larger repertoire of wardrobe choices. Mashable ran a story on the price of the sweater. And the Washington Post gave the sweater an in-depth analysis. Ewww.
Here’s the thing. Social media are heralded as a democratizing force. The voices of the people flow into the voices of broadcasters, spreading into national and international conversations in ways previously unavailable. Social media are a mechanism for the people to speak in ways that project out into public life. This is powerful. This has been a major tool in important revolutionary movements from the Arab Spring, to #Occupy, to #BlackLivesMatter. It’s how the public insisted upon a serious discussion of interpersonal violence following the infamous Ray and Janay Rice incident. It’s where Hillary Clinton’s evocation of 9/11 to account for her ties to Wall Street was noticed, critiqued, and given back to her with striking immediacy.
But we don’t live in a media democracy. Broadcast media still have disproportionate control over what does and does not become news. Social media provide content pools, which broadcast outlets sift through to select what they will or will not address. This means the relationship between social and broadcast media is of a curatorial nature, with broadcasters maintaining more power than those from whom they pull stories. This makes broadcast outlets responsible parties. When they don’t pick up an important story, they are accountable. When they do pick up a story that should have remained in the ether, they are also accountable. Sarah Palin’s sweater was a story that should have withered and died.
Commenting on professional women’s clothing is sexist. It highlights fashion and beauty while backgrounding substance. It wasn’t okay with Hillary Clinton’s pantsuits and it’s not okay with Sarah Palin’s sweater. It is disappointing, if not surprising, that sexist commentary arises among the populace. It is unacceptable when that sexism translates into a headlining story.
Turn on your TV and I bet you can find a show about Alaska. A partial list of Alaska-themed reality shows airing between 2005 and today includes Deadliest Catch, Alaskan Bush People, Alaska the Last Frontier, Ice Road Truckers, Gold Rush, Edge of Alaska, Bering Sea Gold, The Last Alaskans, Mounting Alaska, Alaska State Troopers, Flying Wild Alaska, Alaskan Wing Men, and the latest, Alaska Proof, premiering last week on Animal Planet, a show that follows an Alaskan distillery team as they strive to “bottle the true Alaskan spirit.” And with Alaska Proof, I submit that we have saturated the Alaskan genre; we have reached Peak Alaska. We may see a few new Alaska shows, but it’s likely on the decline. I don’t imagine we have many Alaskan activities left yet unexplored.
Television programming remains a staple of American leisure, even as the practice of television watching continues to change (e.g., it’s often done through a device that’s not a TV). As a leisure activity, consumers expect their TV to entertain, compel, and also provide comfort. Which content and forms will entertain, compel, and comfort shifts with cultural and historical developments. Our media products are therefore useful barometers for measuring the zeitgeist of the time. Marshall McLuhan argues in The Medium is the Message that upon something’s peak, when it is on the way out, that thing becomes most clearly visible. And so, with Alaska peaking in clear view, I ask, what does our Alaskan obsession tell us about ourselves?
In the 1980s and early ‘90s, the family sitcom reigned. These years held the ideological pinnacle of neoliberal individualism. Materialism ruled, regulations waned, and shoulder pads helped each American take up a little more space. By my count, Time’s 2006 designation of “You” as the person of the year was about two decades late. Around this time, we also saw a quickly changing family structure. Divorce was on the rise, family size on the decline, and dual incomes becoming increasingly necessary and normative.
With anxieties surrounding shifting family life, it’s no surprise that the Full House/Family Matters genre rose to prominence. People wanted to sink into their couches to enjoy 30 open and closed minutes of close-knit characters who cared for each other and solved disputes with a quiet knock on the bedroom door, some short self-reflexive dialogue, and a warm hug, finally relieving the tension with a shucks-worthy joke. It was a dose of wholesome. Warm and safe like a diagonal cut grilled cheese and tall glass of milk.
Today, we’ve moved beyond the formulaic family comedy. We want complex characters and believability. We expect continuity and semiotic ambiguity, the kind of programming that spurs debate and post-show discussions with fans, creators, and actors. Or alternatively, we want voyeuristic satisfaction, long-form documentaries in the form of 50-minute segments spread across 12-16 episodes. But that doesn’t mean we’ve stopped seeking comfort. We still need our entertainment to offer reprieve from the troubles that worry us in everyday life, the concerns that accompany societal change. We still want fantasy and escape, even while demanding a realist lens.
The most obvious of contemporary shifts comes in the form of digitization and automation. We are the digital revolution, and people aren’t sure what that means, but they know things have changed and will never again be the same.
Our conversations needn’t require saliva. A day of work may elicit tears, but rarely blood or sweat. Dirt under the fingernails is more likely to originate in a community garden than on a factory floor or family farm. Our muscles may be sore, but mostly from yoga, and we can soothe ourselves with a scented bath and monthly massage membership. The gritty physicality of Alaska shines brightly against the sterile experience of everyday life here on the mainland. We are clean, and soft—at least those of us with disposable time to binge watch and disposable income to buy the commercial goods that drive programming decisions— and we are pretty sure this means something has been lost.
To be clear, our use of technology is far from actually clean. It’s actually incredibly dirty, requiring intense physical labor and causing extreme environmental strife. But we don’t want to know about the dirt and we don’t feel it in our everyday lives. And so we watch Alaskans. Or at least our fantasy of Alaskans. Those majestic creatures unsoiled by streaming News Feeds or celebrity gossip.
We watch them toil on the land. Hunt for their food and can jars of honey. We watch them barter instead of buy, and fashion Christmas gifts out of beaver parts. We watch them do things that most of us have not the skill, opportunity, nor need to practice.
Alaskans are a specimen of anthropological fascination. They give us back to ourselves as we never were, and know we never will be. They are a vestige of the rugged individualism that drives the American value system. They ease us with their living-off-the-land, even as we livetweet their experiences. We watch with reverence, our heads tilted in slight confusion. And we watch desperately, for fear that these relics, these strange exemplars of the simple life, where a hard day’s work is its own reward, will remain only as part of the historical record.
As a rule, parents tend to experience concern about their children’s wellbeing. With all of the decisions that parents have to make, I imagine it’s near impossible not to worry that you are making the wrong ones, with consequences that may not reveal themselves for years to come. I’m that way with my dogs, and I feel confident the anxiety is more pressing with tiny human people. This is why recommendations from professional organizations are so important. They offer the comfort of a guiding word, based presumably in expertise, to help parents make the best decisions possible.
The American Academy of Pediatrics (AAP) is one such organization, and they have some things to say about children and screen time. However, what they have to say about children and screen time will be revised in the next year, and no doubt, many parents will listen intently. NPR interviewed David Hill, chairman of the AAP Council on Communications and Media and a member of the AAP Children, Adolescents and Media Leadership Working Group. Although Hill did not reveal what the new recommendations would advise, the way he talked about the relationship between screens and kids did reveal a lot about the logic that drives recommendations. The logic that Hill’s interview revealed made clear, once again, the need for theory to inform practice. More specifically, those who make practical recommendations about technology should consult with technology theorists.
The sneaky thing about assumptions is that they inform our way of thinking without letting on that they are doing so. A theorist is trained to identify underlying assumptions and question how they shape proceeding action. From Hill’s interview, three assumptions stand out: screens are a singular category; “screen time” is inherently harmful; and digitally mediated play is distinct from analog forms of play, the latter more “real” than the former.
The most glaring (and easiest to problematize) assumption is that screens occupy a singular category. Recommendations don’t apply differentially to iPads, televisions, or phones, let alone to the immense diversity of media content that each piece of hardware hosts. Of course, condensing screens into a singular category is likely done for reasons of parsimony—busy parents don’t have time to read nuanced recommendations about the full variety of hardware, software, and content available. But the sheer volume of different kinds of screens, and ways of using them, should perhaps give pause to anyone attempting to give recommendations about them as a categorical unit. Maybe sweeping recommendations aren’t going to be particularly useful.
A second assumption is that screen time is inherently harmful. In the interview, Hill sets up a debate between control over screen time and wholesale screen abstinence, using food and tobacco as the competing metaphors:
The question before us is whether electronic media use in children is more akin to diet or to tobacco use. With diet, harm reduction measures seem to be turning the tide of the obesity epidemic. With tobacco, on the other hand, there really is no safe level of exposure at any age.
Although Hill takes the more moderate “food” approach to technology, this still presumes that technology is a dangerous thing to be managed. The assumption of compulsory harm constructs a debate between avoidance and minimization; it depicts technological advancement as an unfortunate juggernaut with which we are forced to contend. Such a depiction robs technology of the opportunity to be good, and forecloses research into, and recommendations about, the complex conditions under which various media are harmful or beneficial, how, and for whom. Further, it constricts the measure of harm and benefit to metrics rooted in analog styles of learning and development. Studying the effects of new media upon old ways of thinking makes little sense, but it does not appear that the AAP is considering the dynamism of the human mind as it adapts to changing cultural realities. A new kind of world requires different kinds of thinking and different kinds of skills. Spelling is less important, information sorting is more important. Intense focus on a single task is needed sometimes; other times it pays to spread attention among multiple and fast-moving targets.
Finally, the AAP seems to operate under the assumption that the digital is distinct from the physical, and that the digital has less substance. In hazarding a preliminary recommendation for parents, Hill advises:
If [children] color or read or play basketball or ride their bikes, take some time to ask them about what they’ve done and why they enjoyed it. These conversations will help them focus on the joys of the “real” world, and they will notice that their activity attracts your attention.
This advice is a clean and clear example of digital dualism, a fallacy we regularly point out and critique on this blog. Concisely, digitally mediated play is just as real as non-digitally mediated play, and digital and analog play can co-occur.
Hill and the AAP offer an important service. This is why I implore them, and others who help people navigate digitally infused terrains, to talk with people who think about these issues more broadly. A theorist of technology brings a critical eye that can be of use to those with particular empirical expertise, who wish to take action of practical concern. In this case, a technology theorist might offer some solace to parents…your kids are probably going to be okay. Don’t fret. Pay attention to them and pay attention to what they do. Screens aren’t the enemy.
Marvel’s Jessica Jones is a dark and reluctant hero. An alcoholic private detective, Jones largely underutilizes her super-human physical strength when we meet her in the opening episode of the Netflix series. As the story unfolds, we learn that Jessica self-medicates to deal with a traumatic past in which a man named Kilgrave, who controls people with the use of his voice, held Jessica captive as his lover while forcing her to engage in violence and even murder. Their relationship ended when Jessica was finally able to resist his control—a quality unique to her—and Kilgrave was hit by a bus, leaving him presumably dead. The storyline of the first season is premised on Jessica learning that Kilgrave is still alive, has captured another victim, and is coming to reclaim Jessica. In turn, Jones hunts for Kilgrave to ensure that he dies, once and for all.
About halfway through the season Jessica realizes that Kilgrave is tracking her whereabouts by controlling her friend and neighbor Malcolm Ducasse. To wrest Malcolm from Kilgrave’s control, Jessica strikes a deal. She agrees to send Kilgrave a selfie at precisely 10am each day. At his direction, Jones even includes a smile.
Jessica Jones’ selfie is a significant cultural artifact. With super-human physical brawn and impenetrable emotional toughness, Jessica Jones is an icon of strength. Jones’ image—how it looks, who it’s for, and how it’s produced— represents the potential of feminist self-documentation. It therefore shines light on what a selfie can do given the tangled relationship between feminism and patriarchy in self-documentation.
The selfie has become a key battleground for gender politics in a digital age. Although front-facing cameras are for everyone, cultural tropes most often place them in the hands of women. The selfie then becomes a vehicle for the critique of femininity. The selfie-posting woman is vapid, needy, and hungry for Likes. As Anne Burns explains:
Beyond a critique of photographic form or content, the online discussion of selfies reflects contemporary social norms and anxieties, particularly relating to the behavior of young women. The knowledge discursively produced in relation to selfie taking supports patriarchal authority and maintains gendered power relations by perpetuating negative feminine stereotypes that legitimize the discipline of women’s behaviors and identities.
Combating the haters, feminist media commentators and scholars (like Burns) offer alternative readings of the selfie as an expressive cultural form. Counter readings of selfies generally take two tracks: Concern (We’re Fucked) and Confidence (Fuck You).
Concerned feminists worry about the meaning of selfies. Selfies are not an indictment of those who post them, but of a culture in which the worth of women and girls continues to hinge on sexual desirability. The selfie then signifies complicity in patriarchal reproduction. That is, selfies are a consequence of women’s pervasive subordination. In this vein, Erin Gloria Ryan at Jezebel calls selfies a “cry for help” and rejects any notion that selfies are good for women:
Selfies aren’t empowering; they’re a high tech reflection of the fucked up way society teaches women that their most important quality is their physical attractiveness.
The Confidence crew, however, insist that turning the camera on the self is a way to take control of one’s own image, lest that image remain captured and configured by an amorphous, patriarchal, controlling, Other. The selfie is a source of Gurl-Power. For Amy McCarthy, selfies are a political act of both feminist strength and personal confidence, especially for those who don’t fit normative body ideals:
When you look in the media… there are very few examples of fat, trans, or dark-skinned women. As such, selfies present themselves as a way to make our bodies visible, and in a radical way. So you take your selfies, peeps — they’re one way to say “fuck you” to the body standards that have made us miserable for so long.
Jessica Jones’ selfie, at once an expression of agency and subservience, is a microcosm of the selfie phenomenon more generally. So tell us, Jessica, what does it mean to take a selfie? Is it empowering or is it self-inflicted oppression? The answer, of course, is “yes.” It is both.
By turning the camera front facing, Jones freed herself from external surveillance, freed Malcolm from Kilgrave’s service, and took power over her own image. When watching is ubiquitous, showing becomes the agentic option. While surveilled through Malcolm, Jessica could be photographed in any moment. The surveillance was potentially everywhere, all the time. As Foucault so clearly illustrates, potential surveillance is a powerful mechanism of control. When the surveilling eye remains hidden, all moments are documentable and therefore never entirely one’s own. With the selfie, Jessica purchased privacy. All of the non-selfie moments were once again, hers. When she did self-document, Jones selected the timing of this documentation and configured her body and face in a manner of her liking. She could then review the images and select which to send. In a word, the selfie enabled Jessica to document with intention. We see this intentionality manifest in Jones’ masterfully accomplished Fuck You smile.
Yet, despite its Fuck You quality, Jones still smiles, as per Kilgrave’s request. She still sends him pictures, at the time he instructs (10am). She still operates, ultimately, under Kilgrave’s gaze. Jones’ life, and the lives of those she cares about, depend on her compliance. The selfie buys Jones freedom, but within heavy confines.
It is not until the final episode of Season 1 that Jessica untangles herself from Kilgrave entirely. This disentanglement comes when she kills him. And killing Kilgrave is perhaps the perfect metaphor. The selfie is empowering, given a persistently oppressive arrangement. The selfie as a cultural artifact is both a product of and response to gender relations. As long as women are objectified, turning the camera on the self is a means of intentionality. It takes the power from the other and places it within the self. Artist and subject become one. But only by killing patriarchy—and the sexual confines, normative beauty standards, and persistent microaggressions patriarchy entails—are women and girls truly free. Within patriarchy, the selfie will always carry the weight of feminism upon its shoulders.
This is the year of #BlackLivesMatter. In response, it is also becoming the year of White Supremacy. It’s not that Black Lives didn’t matter before, nor that Whiteness didn’t reign supreme. Rather, dramatic and highly publicized incidents of violence against Black citizens by those charged with protecting them have created a cultural dynamic in which the value of Black Lives and the responding assertion of White Supremacy have reached a point of articulation.
When you clean house, the roaches emerge. As a nation, we are cleaning house, finding and scrubbing out the blaring and hidden spots of racism, many of which have seeped deep into the layers of our social fabric. A White Supremacist presence is therefore unsurprising. The Supremacists wriggle out in defense of their comfortable home that the elbow grease of mobilization threatens to upend. They are gross but expected. However, their pervasiveness and seeming capacity to garner sympathy are less expected.
The affordances and dynamics of social media tell an important part of the story…
Counterclaims about both Black Lives and White Supremacy are facilitated by social media platforms that afford the formation of issue-driven groups with the capacity to commiserate, strategize, and spread a unified message. Such was the process by which the successful movement at Mizzou, in which Black students and allies mobilized to achieve administrative resignations along with policy and curricular changes, translated with near immediacy into similar movements across college campuses in the U.S. #StandwithMizzou became a rallying cry for students who wished to effect real change in their own schools’ racial climates.
These very same processes are also those that currently facilitate the fast formation of White Student Unions, reactionary groups created by and for “White students and allies” who fear the loss of White voices and the decimation of White culture. Although administrations are quick to denounce the groups as unassociated with and unsanctioned by the universities to which they are connected, the groups are nonetheless collectivities of people, most likely students, who gather to assert their White Power. The group that popped up at my university describes itself as follows:
Rather than identifying as White Supremacists, the profiles operate under the gauze-thin cover of European identity. These are White European students, emboldened to speak their (racist and ethnocentric) truth.
The question, then, is from whence does such boldness arise? Given the widespread demonization of White identity groups in the U.S.—especially the KKK—how do a bunch of 20-year-olds come to think it’s viable and acceptable to form a group around White heritage? And while we’re asking, how do four adult men, in 2015, identify as White Supremacists, scream racial slurs, and shoot into a crowd of protestors? This is the part of the story that social media doesn’t fully capture.
Racist collectives, with their communities, identities, and calls for action, form and flourish with the symbolic aid of highly visible, highly powerful, and highly influential figures, given voice through America’s political institution.
White Supremacist messaging, though animated by grass roots social media groups, is undergirded by the rhetoric of those in the highest positions of power. For instance, FBI director James Comey, who blamed #BlackLivesMatter protestors for creating a hostile environment in which police officers are disinclined to intervene, lest their actions get filmed and critiqued. Or the list of governors who (are trying to) refuse Syrian refugees entry into their states. And of course, Donald Trump, who not only represents Whites who are concerned with the slippage of their power, but like Comey and the governors, legitimates the White Supremacy position.
Comey and the anti-refugee governors couch their racism in security concerns. This breeds hate and exclusion, but can ostensibly diminish with time and data, or at least migrate to new groups as new moral panics emerge. Trump’s hate is stickier. It plays more to the long game. It’s based on a feeling, an intuition of discomfort, and its expression is attractively packaged in authenticity.
Authenticity is an ingrained American value, all the more valuable among politicians from whom we are accustomed to, but sick of, seeing precisely calculated performances. Trump is the candidate who purportedly “tells it like it is,” who isn’t enchained by political correctness. He is the folk hero who promises to “make the country great again!”
With the valor of authenticity, Trump says things like “The wall will go up and Mexico will start behaving.” He insists that Muslim Americans in New Jersey were celebrating after 9/11—a claim he vehemently defended by mocking a reporter with a congenital joint condition. He also tweeted statistics about race and murder that are entirely fabricated, imply that Blacks are disproportionately violent, and rely on “thug” iconography. And he continues to do great in the polls.
By wrapping their hate in safety and “truthfulness,” these leaders gift racists their righteousness. Trump’s inflammatory rhetoric does more than collect support; it also provides a moral position and an accompanying narrative on which to carry messages that would otherwise be unpalatable. Messages like those from White Student Union organizers, or anti-refugee protestors, or Men’s Rights Activists, or assholes on Yik Yak.
Openly racist leaders make it acceptable for everyday citizens to hold racist views and viable for everyday citizens to engage in racist acts. When the FBI director worries about police safety and effectiveness and a presidential candidate shouts about Mexicans, racism becomes an expression of morality, one of safety and truth, an expression that spreads and takes hold on the digital platforms of everyday life.
Thanksgiving brings with it the compulsory advice and opinion pieces about how to manage uncomfortable conversations, audacious behavior, and embarrassing reminiscence that so often come with large family gatherings. However, these columns leave out a highly effective and likely widespread coping mechanism: the snarky text.
The snarky text is a surreptitiously crafted message in which the sender factually reports on the goings-on around them and accompanies the report with pithy commentary in written or pictorial form. Gifs are particularly effective. Snarky texts work best when the recipient is adequately primed about the dynamics in which the sender is ensconced. Recipients may be remote, or for an added layer of complexity, may be in the same location as the sender.
For example, when your grandmother’s “friend” makes erotically suggestive statements while deciding which cut of the turkey he wants, you can send your favorite cousin, sitting across the table, this:
Or when your family coalesces to comment on your weight and food choices, you can send this to your best friend across the country:
“Oh good. I hoped everyone would spend 45 minutes noticing my vegetarianism and its ill effects upon my body.”
The snarky text is a unique communicative tool. It not only provides an outlet for in-the-moment frustrations, thus neutralizing the experience of your great-aunt reminding you of your withering reproductive system, but also turns bad experiences into good ones. Those uncomfortable or painful moments become fodder for your account. The worse the better, really. When a relative proclaims we must “make our country great again!!” it is joyful for you in its potential for documentation. Who should I tell, and in what expressive form? you ask yourself.
Of course, families are not always terrible. For the record, my family is pretty unterrible. But in general, people are annoying and life can be trying. The snarky text puts a giggle into those difficult moments.
Today is a big day in Columbia, Missouri, where University of Missouri System President Tim Wolfe resigned amidst protests over his longstanding failure to address racial issues. Led by #ConcernedStudent1950, named for the first year Black students were accepted into the university, campus protests have been ongoing for several months. Early last week, graduate student Jonathan Butler went on a hunger strike, followed by football players boycotting their athletic labor and catapulting the story into public discourse. Things came to a head this morning with Wolfe’s announcement. Of course, I went immediately to the Columbia, Missouri Yik Yak, where I refreshed compulsively.
Major themes are represented below. They include claims of reverse racism, colorblind-inspired claims that racial protests create racial divides, allusions to South Park (a LOT of them, and I don’t think nostalgically…so hey there, 2001), attempts to discredit protestors as bullies or babies, attempts to discredit protests as unjust disruptions of the University’s academic function, and for good measure, panda facts and other tidbits from people for whom this is just another day.
In contrast, Twitter has been largely (though not exclusively) celebratory, containing messages of solidarity and momentum building:
We also see people on Twitter citing Yik Yak as proof of the racial problem:
The difference in feel and content between Twitter and Yik Yak likely hinges on platform affordances and, relatedly, the user base. The two platforms differ in geographic scope, level of anonymity, and algorithms of visibility. Twitter is international in scope, while Yik Yak is locally tied. Twitter users are often identifiable, whereas Yik Yak users are not. Twitter provides both time- and popularity-based feeds, as does Yik Yak. However, a score of -5 will delete a Yak, whereas there is no way to downvote a tweet and one needs to lodge a formal complaint to have a tweet removed. Twitter therefore represents a broader swath of the population, held accountable for their content. In contrast, Yik Yak represents the local community, who can speak without identification and the repercussions that identification entails. Where racism exists, Yik Yak affords its expression. This is what we see in Columbia, where a field of highly racist content and the affordance of up and down voting make it unlikely that protest supporters on Yik Yak will rise to “Hot” status, and exceedingly likely that their voices will dissipate from the feed altogether.
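The score-threshold dynamic described above (downvotes accumulate until a Yak vanishes at -5) can be sketched in a few lines. This is a hypothetical, simplified model for illustration only, not Yik Yak’s actual implementation; the class and names are invented.

```python
# Hypothetical sketch of a score-threshold feed: each vote adjusts a post's
# score, and any post that falls to -5 is removed from the feed entirely.
FEED_DELETE_THRESHOLD = -5

class Yak:
    def __init__(self, text):
        self.text = text
        self.score = 0
        self.visible = True

    def vote(self, delta):
        """Apply an upvote (+1) or downvote (-1)."""
        if not self.visible:
            return  # a deleted Yak can no longer be voted on
        self.score += delta
        if self.score <= FEED_DELETE_THRESHOLD:
            self.visible = False  # the Yak disappears from the feed

yak = Yak("Solidarity with the protestors")
for _ in range(5):
    yak.vote(-1)  # five downvotes in a hostile local feed

print(yak.visible)  # False: the dissenting voice dissipates from the feed
```

The point of the sketch is that a handful of local downvoters is enough to silence a minority position entirely, which is exactly the dynamic at work in the Columbia feed.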
I moved to rural Kansas over a year ago. I live beyond Lawrence city limits, on the outskirts of Stull (where local legend places one of the gateways to hell), and a 50-minute drive from the nearest Google Fiber connection. It’s a liminal space in terms of broadband connection: the fastest network in the country is being built in the neighboring metropolitan area, but when I talked to my neighbors about internet service providers in our area, they were confused by my quest for speeds higher than 1mbps. As this collection of essays on “small town internet” suggests, there’s an awareness that internet in rural, small town, and “remote” places exists, but we need to understand more about how digital connection is incorporated (or not) into small town and rural life: how it’s used, and what it feels like to use it.
One of my ongoing projects involves researching digital divides and digital inclusion efforts in Kansas City. The arrival of Google Fiber in Kansas City, KS and Kansas City, MO has provided increased momentum and renewed impetus for recognition of digital divides based on cost, access, education and computer literacy, relevance, mobility, and more discussion and visibility for organizations and activists hoping to alleviate some of these divides and emphasize internet access as a utility. I’ve argued that by reading digital media in relationship to experiences of “place,” we gain a more holistic and nuanced understanding of digital media use and non-use, processes and decisions around implementation and adoption, and our relationships to digital artifacts and infrastructures. In other words, one’s location and sense of place become important factors in shaping practices, decisions, and experiences of digital infrastructure and digital media.
The irony is not lost on me that while studying digital divides in a metropolitan area, I had chosen to live in a location with its own, unique series of inequities in terms of internet connection. These inequities have nothing to do with socio-economic instability or a lack of digital literacy, as I had the funds and willingness to pay a significant amount for internet service (comparable to the prices charged by urban-based, corporate ISPs), and everything to do with the fact that I lived in an area that felt as if it had been forgotten or intentionally bypassed by the internet service providers (ISPs) I had come to know living in other US cities and towns.
In this essay, I want to recount a few of the ways that my relationship to internet infrastructure and ISPs has changed since moving out to the country. (My relationship to social media and my social and economic dependence on internet connection has shifted as well, which I plan to write about elsewhere.) I’m speaking to my experience of digital connection and digital practices “after access,” from within a certain type of digital connectivity. I don’t claim that these interpretations or experiences are generalizable or representative, but they’re some of my initial observations, having been a ubiquitously connected, digitally literate, urban dweller for the majority of my life and now, for the past year and a half, a resident of a rural place.
After moving in, I realized that although our house was advertised as having “high speed internet,” this didn’t mean a wired, cable broadband connection or even DSL, as we weren’t in either of these coverage areas. An internet connection meant that we could connect via a few strictly data-capped options: satellite, 4G provided by a cell phone company, or a pay-as-you-go 4G connection. Various blogs and forums hosting threads on ISP options overflowed with warnings about the high prices, data caps, and unreliability of satellite internet connections, in rural environments and otherwise.
I posted on social media outlets and contacted friends about my frustrations with my internet access options. I received suggestions to contact the cable company and ask them to expand their service to our area, offers to come to friends’ houses to use the internet, and empathy from people who grew up in rural areas, sending condolences for the fact that I would never binge watch anything again. It might sound frivolous to some, but I admit that the thought of never being able to stream anything, Skype, or share photos with friends and family members, and of struggling to download large files, did make me panic. I’d rather not fall victim to varieties of information, participation, and culture gaps, and I regularly need to stream, upload, and download large files in order to do my job.
The local cable monopoly first offered us service over an old Motorola Canopy network at a maximum of 1mbps upload and download speeds. I had never consciously thought about the sheer number of emails I received that included or requested attachments until I was unable to send one consistently from my home computer. Before the end of the first week, the sound of an email arriving in my inbox while I was at home made me anxious. It meant that I would have to wait until the next time I was in town to respond with a comment other than, “I can’t send the attachment until tomorrow, I have limited access to the internet right now,” a euphemism that frustrated me and, I thought, made me sound like a slacker. I cancelled the service after the two-week trial.
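A back-of-the-envelope calculation makes the attachment problem concrete. The file sizes below are illustrative, not measurements from my inbox; the only real number is the 1mbps cap.

```python
# Why a 1mbps link makes email attachments painful: bandwidth is quoted in
# megabits per second, while file sizes are in megabytes (8 bits per byte).

def upload_seconds(size_mb, speed_mbps):
    """Ideal (best-case) seconds to upload size_mb megabytes at speed_mbps."""
    return size_mb * 8 / speed_mbps

print(upload_seconds(25, 1))    # 200.0 seconds (over 3 minutes) for a 25MB file
print(upload_seconds(25, 100))  # 2.0 seconds on a typical urban cable plan
```

And that is the best case: a connection that also drops or throttles makes those three minutes feel far longer, which is why the two-week trial was enough.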
Now, I love my internet service provider, which is something I never thought I’d say. I have feelings of gratitude for them. They’re a local company who, according to their mission statement, saw “a lack of adequate Internet service options available to rural Northeast Kansas communities” and decided to build their own point-to-multipoint, line-of-sight network to serve our area. In 2008, they acquired another local ISP owned and operated by an area high school and later migrated their network from Motorola Canopy to 4G. They retrofitted the Canopy network antenna that the previous owner of our house had left, installed a 6-foot pole antenna on the roof of our house, and located a direct line of sight to one of their towers. We now average around 5mbps upload and download speeds. Although we experience noticeable lag time compared to our workplace connections, and Skype, VoIP, and streaming often crash due to the poor connection, we have a generally reliable connection with no data caps, at less than half the cost of any service provider in town.
This type of internet connectivity looks and feels different as well. The equipment that powers my connection demands more conscious and haptic attention. The pole and antenna mounted to my roof are taller than the rest of the house and are the first things you see from the driveway. I can see part of the tower that powers my internet, as well as two others that use the canopy network, across the prairie. I have to tend to my equipment. I often have to touch the antenna and pole to adjust them after being blown by strong winds and I’m regularly unplugging and pushing buttons to reset the router. The “seamfulness” of the experience makes me think about the “wires” and wireless frequencies, how they work or don’t work and why, in a way I never did while living in cities. For me, the infrastructure is very tangible and visible, which makes me think of myself as a digitally connected person more than ever before. I feel more connected to my connection, and more responsible for making it work.
I’ve wondered about the potential for mesh networks in my rural area. Mesh networks are decentralized, redundant, often inexpensive networks powered by antennae that act as both access points and routers, repeating wireless signals in a mesh-like configuration. In conversations with digital inclusion activists and community network organizations in urban areas, mesh networks are often suggested as, or already serve as, a powerful alternative to more traditional ISPs and the networks they provide. However, the technical problem of distance persists, as houses, barns, silos, garages, and other structures where antennae might be mounted can be several miles apart. More complicated is the fact that the pre-existing social structures and norms around proximity and sharing are also very different from those of cities or more densely populated areas. People who live out here tend to live “alone together.” I live closer to, and more often encounter, my neighbors’ cows, dogs, goats, and chickens than the people who own them, and minimal (albeit friendly) interaction between people is the norm. There’s not much we share in terms of services and utilities: we pay for utilities individually, often from different service providers. The area is purely residential for miles, and the commercial and family farms and orchards don’t have direct sales on premises. In many ways each household feels like a self-sustaining unit with its individual tanks of propane, tornado shelters, livestock, and food crops. I often wonder how introducing an infrastructure built on shared internet connection would mesh with these pre-existing social networks. But at the same time, I wish someone would propose a network like that out here, or finally send up those balloons.
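The distance problem can be illustrated with a toy model. The coordinates and radio range below are invented for the sketch (real antenna ranges vary widely with hardware and terrain); the point is only that a mesh holds together when every node has a neighbor within range, and rural spacing breaks that assumption.

```python
import math

# Toy model of mesh connectivity: nodes relay for one another, so the mesh
# works only if each node can reach at least one neighbor within radio range.
RANGE_MILES = 1.5  # assumed line-of-sight range of an inexpensive antenna

def reachable(nodes, rng):
    """Return the set of node indices connected (via any hops) to nodes[0]."""
    seen, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for j in range(len(nodes)):
            if j not in seen and math.dist(nodes[i], nodes[j]) <= rng:
                seen.add(j)
                frontier.append(j)
    return seen

# Hypothetical homestead coordinates, in miles.
urban = [(0, 0), (0.2, 0.1), (0.4, 0.3), (0.5, 0.5)]   # dense city block
rural = [(0, 0), (3.0, 0.5), (6.5, 1.0), (9.0, 2.0)]   # several-mile gaps

print(len(reachable(urban, RANGE_MILES)))  # 4: every urban node joins the mesh
print(len(reachable(rural, RANGE_MILES)))  # 1: the rural mesh never forms
```

With urban spacing the signal hops node to node and the whole neighborhood connects; with rural spacing no link forms at all, before any of the social questions about sharing even arise.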
Germaine R. Halegoua is an Assistant Professor at the University of Kansas interested in the ways we experience place through digital media and vice versa. She tweets occasionally @grhalegoua and also contributes to Antenna and Social Media Collective.
We live in a cyborg society. Technology has infiltrated the most fundamental aspects of our lives: social organization, the body, even our self-concepts. This blog chronicles our new, augmented reality.