Editor’s Note: This post is based on a presentation at the upcoming Theorizing the Web 2015 conference. It will be part of the “Protocol Me Maybe” panel.


I’ve been researching hacking for a little while, and it occurred to me that I was focusing on an as-yet-unnamed hacking subgenre. I’ve come to call this subgenre “interface hacks.” An interface hack is any use of the web interface that upends user expectations and challenges assumptions about the creative and structural limitations of the Internet. An interface hack must have a technical component; in other words, its creator must either employ at least a minimal amount of code or otherwise demonstrate working knowledge of web technologies. Because they work through the interface, these hacks all have aesthetic properties; hacks on web infrastructure do not fall into this category unless they have a component that affects the page design.

One of the most notable interface hacks is the “loading” icon promoted by organizations including Demand Progress and Fight for the Future in September 2014. This work was created to call attention to the cause of net neutrality: it made it appear as though the website on which it was displayed was still loading, even when that was obviously not the case. Visitors might assume the icon was there in error; this confusion encouraged clicks on the image, which linked to a separate web page featuring content on the importance of net neutrality. To display the icon, website administrators inserted a snippet of JavaScript — provided free online by Fight for the Future — into their site’s source code. A more lighthearted interface hack is the “Adult Cat Finder,” a work that satirizes pornographic advertising in the form of a pop-up window that lets users know they’re “never more than one click away from chatting with a hot, local cat;” the piece includes a looping image of a Persian cat in front of a computer and scrolling chatroom-style text simply reading “meow.” Links to these and other interface hacks are included at the end of this post.
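To give a sense of how lightweight such a snippet can be, here is a hypothetical sketch in the same spirit. The function name, campaign URL, and image path are all invented for illustration; this is not Fight for the Future’s actual code.

```javascript
// Hypothetical sketch of a campaign "loading icon" snippet (not Fight for
// the Future's real code). buildOverlayHtml, the URL, and the image path
// are invented for illustration.
function buildOverlayHtml(campaignUrl, spinnerUrl) {
  return (
    '<a href="' + campaignUrl + '" style="position:fixed;top:10px;right:10px;">' +
      '<img src="' + spinnerUrl + '" alt="Loading...">' +
    '</a>'
  );
}

// In a browser, a site administrator's pasted snippet would inject it with
// something like:
//   document.body.insertAdjacentHTML(
//     'beforeend',
//     buildOverlayHtml('https://example.org/net-neutrality', '/spinner.gif'));
```

The point is less the markup than the distribution model: one copy-pasted script, many participating sites.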

I maintain that interface hacks are powerful tools for online activists and hacktivists and that their potential has yet to be fully explored. The power of interface hacks resides in the fact that they take as their raw material the technical underpinnings of the Internet. Because their medium is infrastructure as opposed to content, they are functional on the level of context — in other words, their content is also their contextual frame, or the structure that gives the work meaning. Insofar as they exist to draw attention to their own rubric for interpretation, interface hacks leave the user at a disadvantage when it comes to making sense of their initial encounter with the work. One effect of this confusion is that user attention is seized during this time. The period before the user grasps that the “surprise” of the work is intentional, i.e., the time in which their awareness is given to making sense of what they are seeing rather than simply absorbing it, is a particularly potent one in terms of establishing messages and conveying meaning in a busy web environment. I believe that interface hacks are particularly suitable for activists and artists whose work confronts digital issues.

I am aware that my usage of the word “hack” in this context may be contentious. Many people, some of whom do not identify as “hackers” per se, have strong feelings about the word “hack.” The term has been defined and redefined frequently since its inception in the 1950s and encompasses a distinctly heterogeneous set of activities, individuals and groups across the world. A definition of “hacking” that suits all possible contexts and applications is therefore difficult to pin down, and it’s become fashionable in recent years to use the term in contexts entirely unrelated to computing (à la “life hack”). This has drawn ire from hackers and non-hackers alike who argue that its overuse has diminished the term to the point of meaninglessness. I use it here because the taxonomical status of these works is ambiguous: many of the pieces included could be classified in numerous categories, including “artwork,” “software,” “activist demonstration” and “toy.” I take “hack” as a noun version of Richard Stallman’s definition of hacking: “exploring the limits of what is possible, in a spirit of playful cleverness.” The categorical ambiguity of these works demands a name of its own: “interface hack,” as a term, is an epistemological tool. Grouping all of these works together under one name has allowed me to refine my investigation into their likenesses. These similarities are the structural elements that allow for the development of the theory behind them.

Hacking, including interface hacking, deals in a great amount of mystification and surprise; developing new terms as figures of thought offers us the opportunity to reveal some of its internal machinations. Drawing up the theoretical blueprints behind the cognitive “surprise” of interface hacks allows us to create other, similar works, which can be used to boost any number of causes. These causes, of course, can be either good or bad; my interest is in promoting hacking-for-good, and I hope these works are put to beneficial uses above and beyond all other possible manifestations.

Some Interface Hacks

Ben Grosser’s “Facebook Demetricator”

Maddy Varner’s “Tab Police”

Net Neutrality Slowdown Icon

404: No Weapons of Mass Destruction

Richard Stallman, “On Hacking”

Emma Stamm is a writer, musician and web developer; her work can be found at https://www.silentinternet.com and she tweets @turing_tests.


Apple rolled out a new line of racialized emojis last week through its iOS 8.3 update. Originating in Japan, emojis are popular symbols by which people emote via text. Previously, the people-shaped emojis appeared with either a bright yellow hue or peach (i.e., white) skin. The new emojis offer a more diverse color palette, and users can select the skin tones that best represent them. It’s all very United Colors of Benetton.

While many applaud the new emojis—such as Dean Obeidallah, writing for CNN, who announced “Finally, Apple’s Emojis Reflect America”—this has been far from a win for racial equality.

First there were the (pretty egregious) technical glitches. It turns out that for those who have not yet updated to iOS 8.3, the diverse emojis appear as aliens. This means that non-white symbolically translates to non-human. Similarly, as Nathan Jurgenson discovered and then sent via email, using all of the skin tones together shows up as white…”Too much diversity!! Retreat!! Retreat!!”

When Nathan tried to use all of the skin tones…

Then there was the immediate racist bigotry. Writing for the Washington Post, Paige Tutt says it is unsurprising to find people using the racialized emojis in incredibly offensive ways, like this gem she shared in her article:

[Screenshot via Paige Tutt’s Washington Post article]

And finally, there was the yellow-as-default. When selecting an emoji, bright yellow is the supposedly neutral default from which the emojis can racially diverge. The problem, however, is that yellow is not racially neutral. It is, I argue, definitively white. Let me explain.

Sociologists West and Fenstermaker show that race is a key characteristic by which we categorize bodies. Their thesis is that like gender, “doing race” is not optional. That is, we racialize each other. In this vein, we racialize representations of each other—such as emojis. In American culture, which privileges whiteness, white is the presumed racial category. Representations that fall outside of the human color spectrum are, by default, coded as white.

Humans are not bright yellow, and yet yellow is not racially neutral. Rather, yellow’s very neutrality, in the U.S., signifies white. Both readers and writers of yellow know this, if not explicitly. In fact, it is the implicitness of whiteness that makes it so powerful. For example, nothing about The Simpsons should read white. Marge has blue hair, for goodness sake. And yet, the Simpsons are a white family. Hence, Apu and Carl are brown, both raced vis-à-vis the rest of the Springfield citizenry. Similarly, and to move away from yellow specifically, the hyenas in The Lion King speak in urban black vernacular and Sebastian from The Little Mermaid is Jamaican (and sings about the ocean being awesome because you don’t have to get a job under there). We live in a culture of white-unless-signified-otherwise.

Although it is easy (and correct) to read all of this as racially insensitive on the part of Apple, the issue is much broader and far deeper. The Apple emoji case is a microcosm of race relations in the United States. We want really badly to be inclusive but are so invested, so inculcated, so primed with the white racial frame that good intentions are easily ensnared by the very logics we work to break out of. Let’s be clear about that process. Yellow signifies white, but so too would green, purple, or orange. The problem of emoji racial representation is a problem of cultural race relations, as hardware and software are always products of existing social arrangements.

Apple’s emojis can’t solve the race problem. They never could.

Follow Jenny Davis on Twitter @Jenny_L_Davis

Editor’s Note: This post is based on a presentation at the upcoming Theorizing the Web 2015 conference. It will be part of “The Feel World” panel.

From the book ‘Still More Folklore from the Paperwork Empire’ by Alan Dundes and Carl Pagter. (Via http://www.spacestudios.org.uk/exhibition-programme/xeroxlore/)

Internet memes are arguably the most recognisable form of online vernacular: a proliferation of expressive pictorial and / or textual compositions, frequently characterised by running jokes expressed via informality and intended errors. The pervasiveness of humour within memes might make it easy to dismiss them as trivial, but this would be an oversight. In fact, understanding the function of humour within memes discloses much, illuminating the relationship between memes and their antecedents, as well as the ways in which web-enabled social dynamics and vernaculars develop. Memetic behaviour is not novel, but its current prevalence, as supported by networked culture, is remarkable. This post, a historicisation of meme usage as a communicative practice, attempts to put into relief their idiosyncratic characteristics and address the role memes may play in cultural analyses.

Memes demonstrate ideas in quick and effective ways, yet are themselves complicated articles. Aptly described by Limor Shifman as ‘conceptual troublemakers’, they are the product of a confluence of social, linguistic and technological circumstances. Sociolinguistic analyses of extant memes are telling, shedding light on a breadth of topics, from the development of localised dialects to the acquisition of online reputational capital. But as a communicative practice supported – and indeed afforded – by hardware and software, couching memes within a lineage of technologically driven precursors proves a revealing exercise.

The practice of Faxlore is particularly relevant to this end. Originating in the 1970s, Faxlore describes the circulation of humorous cartoons, texts, poetry, art, memoranda and urban legends via fax machine. This cultural phenomenon has been well documented by folklorists who aligned it to the folk tradition; notably Michael Preston, and Alan Dundes, who highlighted the ways in which participants formed networked communities and their own vernacular based primarily around humour and shared, often banal, experiences. Faxlore itself is part of a heritage; as explained by Michael J. Preston in his 1994 essay ‘Traditional Humor from the Fax Machine: All of a Kind’, by intervening in the transmission of folk articles, fax machines functioned for text and image based humour much in the way that telephones did for oral jokes.

Thinking in these terms makes apparent the ways in which contemporary meme cultures too have recognisable folk-like traits: the prevalence of humour and wordplay therein, the role humour plays in their transmission (as riff or remix), as well as the recurrent use of prepatterned language structures. Memes also produce corresponding effects: establishing communities, fostering etiquette, utilizing communicative play to create, constitute and inform a textured language, and legitimising discourse. It is worth being mindful of questions of access and possible exclusion in this regard. Consider how Faxlore circulated between office employees – arguably a similar demographic to what Buzzfeed founder Jonah Peretti describes as the ‘bored at work network’: a social group who share a range of socioeconomic markers, cultural interests and, importantly, hardware and software access.

Faxlore shows the success of a form limited by its platform, with jokes passed on from worker to worker, office to office, at a restricted pace, and within restricted groupings. Contemporary digital networks of course allow participants to broadcast messages, with missives potentially subject to uncontrolled spread, functioning variously as monologues, dialogues, or something in between. However, as a medium the Internet too is host to idiosyncratic affordances and constraints which lend memes their own distinct qualities. Theorists have begun to interpret how genre-shaping frameworks have informed the form and texture of memetic vernacular, which raises questions around the potential for forms of bias to be embedded as a result.

Jason Eppink recently analysed the portability of the GIF format in terms of persistence and consequent potential for expression and ‘identity making’, whilst Kate Brideau and Charles Berret categorized the Impact typeface as a recurrent, communicative memetic component, historicizing its inclusion within image macro meme series in terms of the affordances of software. Similarly, Patrick Davison has explored how the formal properties of MS Paint had consequences for meme production: acknowledging Paint’s technical limitations and their consequential influence on a set of web aesthetics, Davison contextualises the Rage Comic meme series – which persistently and purposefully utilises Paint’s crude aesthetic – in terms of remediation. Eppink et al. neatly encapsulate how a meme can reveal a lot about the history of its platform, a helpful thought exercise when unpacking the ways in which web culture is bound up in meme production.

Whether Paint, GIF or Impact, the point is that contemporary users have far more choice when it comes to programme, file format, or typeface. But these forms have developed a grammar of their own, becoming recursively meaningful. By understanding this meta-register, it is possible to tacitly claim authenticity or demonstrate social expertise. Language structures of the digital realm are recursive – self-informed and self-perpetuating. And here is where the added complexity in interpreting internet memes arises: whereas Faxlore could only transform as fast as an individual could remix and transmit it, the form and the content of the meme are inseparable, allowing them to transform as they transmit, thus establishing hyper-discrete meanings with startling rapidity. This poses serious – and exciting – challenges to analyses of memes, especially since covert forms of humour such as irony are algorithmically unreadable. When viewed in relief of Faxloric praxis, the scope of the challenges posed when tracing and translating memetic vernacular becomes apparent. This perspective is a necessary one, for when language as a whole can be viewed as commodifiable data, it highlights how power is not so much tied up in data transfer as in interpretation.




Hannah Barton is a PhD student at Birkbeck, University of London, researching memes as a form of vernacular online discourse. She presents on Friday, April 17th at Theorizing the Web 2015.



rape on a college campus


The past year has brought heightened awareness to problems of sexual assault in general, and sexual assault on college campuses in particular. Victims, allies, and activists are naming assailants and holding university administrations responsible for their treatment of both victims and the accused. Simultaneously, campuses are building task forces, holding meetings, staging rallies, and constructing policies to prevent and deal with incidents of sexual non-consent.

In the midst of this, Rolling Stone told one woman’s story. This woman, who identified herself as “Jackie,” shared the gut-wrenching if all-too-familiar tale of a fraternity party gone wrong. As they do in the age of social media, Jackie’s story went viral. In excruciating detail, we read of Jackie’s sexual violation at the hands of several men on UVA’s campus, followed by gross mishandling by those in charge. Links spread, along with petitions, open letters, and thought pieces ranging from 140 characters to several thousand words. We were up in arms. Until, that is, the story came under suspicion.

After significant digging both within and outside of Rolling Stone, the magazine commissioned a report on their failings, retracted the story, and apologized for a poor display of journalism and the consequences it entailed.

On Sunday, Rolling Stone published the report on their website. The report’s authors, Sheila Coronel, Steve Coll, and Derek Kravitz, are journalism scholars from Columbia. They identified a host of poor practices on the part of the reporter (Sabrina Rubin Erdely) and editorial staff, including failure to triangulate the story, failure to follow threads, and failure to give the accused adequate opportunity to respond. The report summarized the entire process as a ‘systemic’ and ‘journalistic’ failing. In turn, Erdely and Rolling Stone editor Will Dana responded with remorse and humility. Accompanying the report on Rolling Stone’s website:

This report was painful reading, to me personally and to all of us at Rolling Stone. It is also, in its own way, a fascinating document — a piece of journalism, as Coll describes it, about a failure of journalism. With its publication, we are officially retracting ‘A Rape on Campus.’ We are also committing ourselves to a series of recommendations about journalistic practices that are spelled out in the report. We would like to apologize to our readers and to all of those who were damaged by our story and the ensuing fallout, including members of the Phi Kappa Psi fraternity and UVA administrators and students. Sexual assault is a serious problem on college campuses, and it is important that rape victims feel comfortable stepping forward. It saddens us to think that their willingness to do so might be diminished by our failings.

Will Dana, Managing Editor

Framed as a systemic failure of journalism, Rolling Stone’s unverifiable article has potentially serious implications for both the profession of journalism and victims of sexual assault. While I agree that this is indeed a failure of the system, I don’t believe journalism is the system in question. The system that failed is one of sexual rights.

The story was an unfortunate case of misinformation, in which a woman is now accused of crying wolf. For the facts of a case like this to turn out inaccurate was particularly detrimental because so many women have been falsely accused of inaccuracy and outright dishonesty. It is this history of accusation that Erdely and the staff at Rolling Stone refused to be part of. Jackie asked that they not interview other witnesses or follow certain threads. Respecting the wishes of a woman whose story was one of deep violation, the team at Rolling Stone complied. Embedded within a history of victim blame, in which the burden of proof has fallen disproportionately upon the accuser, Erdely’s tactics were empathetic and humane. Jackie is not to blame, nor is Erdely or the Rolling Stone editorial staff. To blame is a history and culture in which rigorous journalism and humane treatment of a source became mutually exclusive options. Had Erdely pursued paths unauthorized by Jackie, or if the editorial staff had insisted on doing so, it would have—once again—been violative.

Erdely could not have done this “right,” and the solutions are not journalistic, but humanistic. This failure is a cultural failure. Placing the blame on poor journalistic practices safely focuses our attention upon professional standards, while avoiding the far simpler, yet more taxing, focus upon standards of consent, respect, and contending with a cultural history of violence.


Follow Jenny Davis on Twitter @Jenny_L_Davis

job search

A couple of years ago, I was enjoying dinner at a family gathering, loudly chiming in between bites of salad and veggie burger. In a quiet moment, my Nana’s significant other leaned in and looked at me closely. ‘I can’t pinpoint your accent,’ he said. Surprised, I wondered out loud if I had developed some strange hybrid of Virginia and Texas—my home state and the state where I was attending graduate school, respectively. ‘No,’ he said. ‘It’s more like California.’ He then faux flipped his hair, batted his eyelashes, and repeated something I’d said earlier using exaggerated uptalk. The table broke into laughter, jokes about nail chipping and mall shopping, and ironic air-quoted references to “Dr. Davis.” It was funny because this particular speech pattern—deeply classed and gendered—connotes  “ditziness,” which sits in (apparently comedic) contrast to my position as an adult in general, and an academic in particular. And indeed, my speech patterns growing up, like those of many girls I knew, followed the stereotypical “valley girl” inflection and cadence. I have learned to temper this over the years, but in relaxed moments, the excessive “likes” and statements-that-sound-like-questions slip back in.

I was not offended by this dinner table exchange. On the contrary, I put my gender activist hat away for a bit and joined in, asking people to pass various food items in my best Alicia Silverstone (from Clueless, obvi) voice. It was totally funny!! It would be less funny, however, if I found myself unable to get a job because of these speech patterns.

Jobaline is a company that helps hourly wage employers obtain and sort job candidates using mobile applications. It promises to both expand the applicant pool, and then produce the strongest candidates from that pool through prescreening. Here’s how it works: Employers give Jobaline the criteria that they’re looking for—education, skills, language fluency, etc.—and in some cases provide specific questions to ask applicants. Jobaline collects this information from applicants, and uses an algorithm to provide the employer with candidates of best fit. Although the employer has ultimate decision-making power, Jobaline’s algorithm significantly increases the odds of some applicants getting the position, while decreasing the odds for others.
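The general pattern can be sketched as follows. This is a guess at the shape of such a system, not Jobaline’s actual code; every field name, threshold, and applicant below is invented.

```javascript
// Toy prescreening sketch (invented; not Jobaline's implementation).
// Filter applicants against employer-supplied criteria, then rank the
// survivors by experience.
function prescreen(applicants, criteria) {
  return applicants
    .filter(function (a) {
      return a.educationYears >= criteria.minEducationYears &&
        criteria.languages.every(function (lang) {
          return a.languages.indexOf(lang) !== -1;
        });
    })
    .sort(function (a, b) { return b.experienceYears - a.experienceYears; });
}

var pool = [
  { name: 'A', educationYears: 12, languages: ['en'], experienceYears: 2 },
  { name: 'B', educationYears: 16, languages: ['en', 'es'], experienceYears: 5 },
  { name: 'C', educationYears: 16, languages: ['en'], experienceYears: 1 }
];

// Applicant A is filtered out before any human ever sees the pool.
var shortlist = prescreen(pool, { minEducationYears: 14, languages: ['en'] });
```

Even in this toy form, the algorithm’s “decision” is nothing more than the employer’s criteria made mechanical, which is exactly how it shifts some applicants’ odds before a human is involved.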

Jobaline’s latest innovation is Voice Analyzer, a program that identifies the emotional response a particular voice is likely to elicit. From the website:

Advanced algorithms will measure their voice attributes: inflection, pitch, wave amplitude and predict if the voice will have a particular emotional impact on the listener.

Identify candidates whose voices are calming or soothing; perfect for customer service jobs.

Identify candidates whose voices create a positive engagement with your customers; perfect for sales and frontline employees.

Jobaline touts this tool as a way around human prejudice. The website displays the following quote by CEO Luis Salazar:

There are so many sources of bias when you’re dealing with humans. The beauty of Voice Analyzer algorithm is that it’s blind.

Similarly, writing about Voice Analyzer on NPR, Aarti Shahani states:

The benefit of computer automation isn’t just efficiency or cutting costs. Humans evaluating job candidates can get tired by the time applicant No. 25 comes through the door. Those doing the hiring can discriminate. But algorithms have stamina, they do not factor in things like age, race, gender or sexual orientation (emphasis added).

I think we can all get behind hiring practices that seek to minimize prejudice and discrimination. Voice-based algorithms, however, inherently miss the mark. The framing of this technology as objective falsely assumes that 1) algorithms are not human-made and 2) what “calms,” “soothes,” and “engages” is biologically determined rather than socially produced.

Algorithms sort in the way that humans tell them to sort. They are necessarily imbued with human values. Hidden behind a veil of objectivity, algorithms have a powerful potential to reinforce existing cultural arrangements and render these arrangements natural, moral and inevitable.
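A toy example makes the point concrete. Everything below is invented (the features, the numbers, and the weights), but it shows how the same “objective” scoring function returns opposite rankings depending on which human-chosen weights it is handed.

```javascript
// Invented example: the scoring function is fixed, but the ranking it
// produces depends entirely on the human-chosen weights.
function score(candidate, weights) {
  return weights.pitch * candidate.pitch + weights.pace * candidate.pace;
}

var x = { pitch: 0.9, pace: 0.2 };
var y = { pitch: 0.3, pace: 0.8 };

var weightsA = { pitch: 1.0, pace: 0.1 }; // a designer who values pitch
var weightsB = { pitch: 0.1, pace: 1.0 }; // a designer who values pace

// Under weightsA, candidate x outranks y; under weightsB, y outranks x.
// The "blind" algorithm never chose anything; the designers did.
```

The code is trivially neutral in appearance, yet every ranking it emits is a direct expression of the values encoded in the weights.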

When applied to desirable voice, algorithms necessarily rely on normative values that intersect race, class, gender, sexuality, geographic locale, and neurotypicality. My uptalk, for example, emanates from white, heterosexual, American, middle-class girlhood. My male-raised counterparts would therefore likely have a built-in advantage, their voices algorithmically preferred for their relative professionalism. Imagine what this does to a person who speaks with black English vernacular, affected femininity, or the cadence of a deep-southern drawl. The odds are not in this person’s favor.

Technological processes are, always, human processes.
Follow Jenny Davis on Twitter @Jenny_L_Davis


Academia is in the midst of a labor crisis. With two-thirds of instructional faculty made up of contingent workers (i.e., adjuncts), a critical mass of dissatisfied—and often hungry—advocates are joining together to decry the unacceptable working conditions within historically sacred institutions of higher education. And with new adjunct unions forming regularly, the movement is taking on undeniable prominence.

But it is more than just a growing quantity of under-paid, over-burdened college educators that has fostered a national movement; it is also the availability of digitally mediated platforms through which these workers can connect, aggregate data, and share personal and collective stories with a larger public. That is, digital media has been instrumental in creating this particular counter-public.

Contemporary social movements are inevitably augmented, with digital and physical inextricably tied. In the case of adjuncts, however, digital media plays an especially crucial role. Of course I can only engage in informed speculation, but I don’t believe the adjunct movement would be a movement at all (or at least not much of one) without Internet technologies. This has to do with the material and social realities of contingent labor within higher-ed.

On a material level, contingent faculty, as a group, are pressed for time. This is related to the press for money. The average adjunct instructor makes less than $3,000 per 3-credit course, and benefits are rarely included. Making a livable wage therefore requires a highly intense teaching load, often spread between multiple institutions, and sometimes supplemented with non-academic jobs and government subsidies.

Under these conditions, the autonomy and flexibility of asynchronous communication affords participation for many who would otherwise be sidelined. It may be nearly impossible to set aside time for a march or sit-in (and even more difficult to accumulate a large group who can do so), but signing a petition, posting a tweet, or contributing data may be do-able. I’ve written previously about the meaningful role of “slacktivism” in causing real change. In the case of contingent faculty, slacktivism is not only effective, but necessary. It is the means by which exploitative realities come to light, and it holds tangible consequences. It’s embarrassing for colleges and universities to be named as the exploiters, and the documentation of existing labor practices, shared en masse, holds these institutions accountable.

And speaking of institutions, digital media are instrumental in giving voice to a largely voiceless population. Even when we all talk, some voices carry further and with greater volume. Namely, those with power and prestige are more likely to be heard, and to have their message acted upon. Concretely, full-time faculty and administrators speak louder than adjuncts, no matter how hard an adjunct physically projects. Because of this, it takes a collection of adjunct voices to achieve the necessary effect. Digital media affords such collection.

Moreover, without job security, speaking up can have serious repercussions. The relative capacity for anonymity through digitally mediated platforms affords the sharing of voice without revelations of self. The screen, in this case, is protective.

Through things like the Adjunct Project, adjunctaction.org, @facultyforward, and the New Faculty Majority, the labor situation within education becomes difficult to ignore. The projection of voices from those on the margins, whose material resources would otherwise preclude participation, is largely facilitated by digital technologies. Sit-ins, walk-outs, and meetings with legislators and administrators are still part of the equation, but the big story is happening online.

Follow Jenny Davis on Twitter @Jenny_L_Davis


Ecology of thought

My feeds this week kept popping up with a bothersome headline, stated in a variety of ways: Smartphones are Making Us Stupid, Technology is Making us Lazy, Does Smartphone use Decrease Intelligence?

The University of Waterloo released a paper this week that was originally published in Computers in Human Behavior back in July. In the paper, titled “The Brain in Your Pocket: Evidence that Smartphones Are Used to Supplant Thinking,” the authors find a negative correlation between cognitive functioning and the use of search engines via smartphones.

The authors refer to smartphones as an “extended mind,” and make causal claims about the effects of this technological extension upon the current and future state of human cognition. Namely, they predict that increased use of smartphones to gather information will indulge the human tendency towards lazy thinking, and that we will become increasingly reliant upon our devices. That is, the authors predict an outsourcing of critical thought. It’s kind of scary, really.

But before the phone stackers triumphantly proclaim that they told us so, let’s look more closely at the research and the underlying assumptions of the research question.

The authors base their findings on three studies, in which they ask participants to self-report about device usage and then give them a series of tests of both cognitive style and cognitive ability. Cognitive style refers to the tendency to engage in effortful thinking, while cognitive ability refers to the ‘computational power’ a person has available. Participants included Mechanical Turk workers (Studies 1 and 2) and college students (Study 3). The authors find that using search engines via smartphone is negatively correlated with both style and ability.
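For readers unfamiliar with the statistic, the finding is an ordinary correlation coefficient. A quick sketch with invented numbers (not the study’s data) shows the kind of negative relationship being reported, and why the number alone cannot establish causation.

```javascript
// Pearson correlation coefficient. The data below are invented for
// illustration and are not drawn from the Waterloo study.
function pearson(xs, ys) {
  var n = xs.length;
  var mx = xs.reduce(function (s, v) { return s + v; }, 0) / n;
  var my = ys.reduce(function (s, v) { return s + v; }, 0) / n;
  var cov = 0, vx = 0, vy = 0;
  for (var i = 0; i < n; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) * (xs[i] - mx);
    vy += (ys[i] - my) * (ys[i] - my);
  }
  return cov / Math.sqrt(vx * vy);
}

var searchHoursPerDay  = [0.5, 1, 2, 3, 4];     // hypothetical usage
var cognitiveTestScore = [80, 76, 70, 66, 60];  // hypothetical scores

var r = pearson(searchHoursPerDay, cognitiveTestScore);
// r comes out strongly negative for this made-up data, but nothing in
// the coefficient distinguishes "phones cause lazy thinking" from the
// reverse direction, or from a third factor driving both.
```

A correlation of this kind is exactly what the paper reports; the causal story layered on top of it is an interpretive choice.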

Here is their hypothesis and rationale:

 One potential consequence of the accessibility of Smartphone technology is that the general disinclination and/or inability to engage analytic thinking may now be applicable not only to reliance on intuitive and heuristic thinking, but also to no thinking at all. A straightforward prediction follows from this line of reasoning: There should be a relation between these two forms of cognitive miserliness, such that those more prone to rely on intuitive cognitive heuristics should be more prone to heavy Smartphones use. We tested this prediction in three studies.

Actually, they tested more than intuitiveness; they also tested ability. But I digress. This hypothesis implies (though does not state) a research question: how does smartphone usage affect cognitive processes? This is an important question, but one the research was never prepared to answer thoughtfully. Rather, the authors recast this question as a prediction, embedded in a host of assumptions which privilege unmediated thought.

This approach is inherently flawed. It defines cognitive functioning (incorrectly) as a raw internal process, one that in its purest state is untouched by technology. This approach pits the brain against the device, as though tools were foreign intruders upon the natural body. This is simply not the case. Humans’ defining characteristic is our need for tools. Our brains literally developed with and through technology. This continues to be true. Brains are highly plastic, and new technologies change how cognition works. Our thought processes are, and always have been, mediated.

With a changing technological landscape, this means that cognitive tests quickly become outdated and fail to make sense as ‘objective’ measures of skill and ability. In other words, definitions of high-functioning cognition are always in flux. Therefore, in reading cognitive research that makes evaluative claims, we should critically examine which forms of cognition the study privileges. In turn, authors should make their assumptions clear. In this case, we can discern that the authors define high cognitive functioning as digitally unmediated.

Certainly, it is useful to understand how cognition is changing, and traditional measures are good baselines to track that change. But change does not indicate laziness, stupidity, or, as the authors claim, no thinking at all.  It indicates, instead, the need for new measures.

A more interesting question, for me, is how intelligence and thoughtfulness are changing. Rather than understand the brain and the device as separate sources of thought, could we instead render them connected nodes within a thought ecology? Such a rendering, first, recognizes the increasing presence of digital devices in everyday life and, second, explicitly accounts for the relationship between structural inequalities and definitions of intelligence.

Definitions of intelligence have a long history of privileging the skills and logics of dominant groups. If cognitive function is tied to digital devices, then digital inequality—rather than human deficiency—becomes a key variable in understanding variations in cognitive performance. At some level, I think people already understand this. After all, is it not the underlying driver of digital literacy movements?

This was not the study I wanted it to be. It does, however, tell us something interesting. People are changing. Our thought processes are changing. This is a moment of cognitive flux, and mobile digital technologies are key players in the future of thinking.


Follow Jenny Davis on Twitter @Jenny_L_Davis


Headline pic: Source

Ferguson Protest, NYC 25th Nov 2014

Statistics are never objective. Rather, they use numeric language to tell a story, and entail all of the subjectivity of storytelling. Indeed, the skilled statistician, like the skilled orator, can bring an audience into the world of their creation, and get the audience to buy fully into the logic of this world. Numbers, like words, are tools of communication, persuasion, connection, and dissent.  Statistics are not objective. But my goodness, statistics can be powerful.

Check out this particularly compelling statistical story about Ferguson, Missouri:

[Image: statistical proof]

And yet somehow, this story concludes with no civil rights charges against the officer who killed a young unarmed black citizen.  Darren Wilson is these statistics. His story is the exemplary case. Darren Wilson and Michael Brown are what these statistics translate to, once we leave the language of numbers and averages. But this non-charge does not take away the power of the story—told numerically or through a single case.

The #FergusonReport hashtag on Twitter is largely populated by those who take the statistical story and use it to speak truth to power.

[Image: demand 1]

Activists can use these statistics as they organize and develop narratives. I will certainly use these statistics when talking with students in and outside of the classroom. Those friendship-ending racially fraught Facebook threads will, undoubtedly, draw from these numbers, and force color-blind advocates to confront the disturbing story that they tell.

Candace Lanius cogently argued that demanding statistical proof of racial oppression is, in itself, racist. Such demands discount the voices of the oppressed, and require a second—numerically based—version of the story. They are rooted in the very marginalization of the groups exploited and ignored by the powerful. This is such an important argument, and one with which I agree. However, I think we can push it further by making a distinction between objects and subjects of statistical demand. That is, we can ask: who is demanding statistical proof from whom? Lanius gives a nod to this distinction, but I want to make it more explicit.

Lanius’s point locates people of color, women, trans* people, people experiencing poverty, people of size, and all groups that require an intentional use of “people” before their categorical designation (lest we forget that they are, in fact, human) as those upon whom the demand for statistics is placed. As she argues, this holds the oppressed accountable for proving their oppression in a language not subject to the bias of personal experience. It renders personal experiences unviable, due to their articulation by already discredited voices.

But what about demanding statistical proof from the powerful? This is qualitatively different.  Insisting that powerful groups reflect on their own isms troubles the stories that come from privileged voices. It challenges those voices whose authority is, generally, taken for granted. This tactic assumes that the powerful are oppressive, unless they can prove otherwise.

Let’s call Lanius’s version demands from the center and the alternative version demands from the margins. The former reflects and reinforces existing power dynamics; the latter redistributes power among the masses.

This is similar to the difference between surveillance and sousveillance. These are both acts of disciplinary looking, but surveillance comes from the top down, while sousveillance comes from the bottom up. Surveillance works to maintain the hierarchy, while sousveillance threatens to topple it.

In this way, demands from the margins stare unflinchingly into the face of powerful institutions and the actors endorsed therein, and say please, tell us, why should we NOT take to the streets?




Twitter and Dove have teamed up in a new campaign to combat criticisms of women’s bodies on social media. The #SpeakBeautiful campaign, which kicked off with a short video (shown above) during the pre-show of this year’s Academy Awards, cites the staggering statistic that women produced over 5 million negative body image Tweets last year. The campaign implores women to stop this, to focus on what is beautiful about each of us, and to bring our collective beauty to the fore. Set to a musical crescendo and the image of falling dominoes, this message is both powerful and persuasive.

Sadly, it gets feminism and women’s empowerment fundamentally wrong. Women’s bodies have historically been sites of objectification and critique. They still are. From politicians to athletes, women are continually required to account for their bodies. I have yet to receive end-of-semester student evaluations that didn’t comment on my attire and appearance (did you know leggings aren’t pants!?). The work of feminism is to do away with such objectification; to reject the equation of beauty with human value. #SpeakBeautiful not only fails in this endeavor, but actively reaffirms women’s positions as—first and foremost—beautiful objects.

In its very name, #SpeakBeautiful centers physical attractiveness as the proper metric with which to measure women’s value. Rather than decenter or reject this metric, it asks women to give one another high scores. Broadening the standards of beauty does nothing to abolish the requirement that women be beautiful. I repeat: broadening the standards of beauty does nothing to abolish the requirement that women be beautiful (I’m talking to you, Strong is the New Skinny).

Of course, it is in Dove’s interest to maintain the requirement to be beautiful. They sell beauty products, after all. It is not, however, in women’s interests. Yet it is women who Dove recruits to give voice to their campaign. Indeed, this campaign of objectification only works if women—lots of women—actively participate.

I realize it may seem unfair to throw such strong critiques upon a well-meaning campaign, with well-meaning supporters. It’s true that most advertising campaigns offer no feminist agenda. It’s true that many advertising campaigns unapologetically render women mere tools of male sexual pleasure. But these campaigns don’t masquerade as progress.

Cultural products that claim social justice are the very objects we must examine most closely, and call out—loudly—when they get it wrong. #SpeakBeautiful is insidious in its feminist cloak. Its bold rejection of negative body-talk can easily lull us into not only compliance, but active participation in the very structures and logics that make negative body-talk such a painful and effective weapon against women.






There’s a tricky balancing act to perform when thinking about the relative influence of technological artifacts and the humans who create and use them. It’s all too easy to blame technologies or, alternatively, to discount their shaping effects.

Both Marshall McLuhan and Actor-Network Theory (ANT) scholars insist on the efficaciousness of technological objects. These objects do things, and as researchers, we should take those things seriously. In response to the popular adage that “guns don’t kill people, people kill people,” ANT scholar Bruno Latour famously retorts:

It is neither people nor guns that kill. Responsibility for action must be shared among the various actants.

From this perspective, failing to take seriously the active role of technological artifacts, assuming instead that everything hinges on human practice, is to risk entrapment by those artifacts that push us in ways we cannot understand or recognize. Speaking of media technologies, McLuhan warns:

Subliminal and docile acceptance of media impact has made them prisons without walls for their human users.   

This, they get right. Technology is not merely a tool of human agency; it pushes, guides, and sometimes traps users in significant ways. And yet both McLuhan and ANT have been justly criticized as deterministic. Technologies may shape those who use them, but humans created these artifacts, and humans can—and do—work around them.

In working through the balance of technological influence and human agency, the concept of “affordances” has come to the fore. Affordances are the specifications of a technology which guide—but do not determine—human users.  It is rare to read a social study of technology without reference to the affordances of the artifact(s) of interest. Although some argue the term is so widely used it no longer contains analytic value, I strongly believe its place remains essential. The power of “affordance” as an analytic tool is its recognition of technology as efficacious, without falling prey to the determinism of McLuhan and ANT.

We can, however, improve the nuance with which we employ the concept. Primarily, a delineation of affordances currently answers the question of “what?” That is, it tells us what component parts the artifact contains and what this implies for the user. For instance, the required gender designation of Facebook pushes users to identify their bodies with a single social category. The dropdown menu limits those options vis-à-vis a write-in, but expands them through multiple gender designations beyond male and female. These are some of the affordances of the Facebook platform, and they influence how users engage the platform in gendered ways. This is an important point, but I argue that we can better employ affordances by theorizing the “how?” in addition to the “what?” How, for example, does Facebook push users to identify with a gender category? Does it make the user do so, or simply make it difficult for the user not to? In other words, the how tells us the degree of force with which the whats are implemented.

This issue of how came to me while talking with my students about technological affordances. An astute student asked about the difference between a wood privacy fence and a perimeter rope. They both afford the same thing, he correctly noticed, but in different ways. We collectively decided that while the fence tells you to stay off the property, the rope politely (though often effectively) asks.

I propose a rudimentary typology for the question of how, in which a technological affordance can request, demand, allow, or encourage.  The first two refer to bids placed upon the user by the artifact. The latter two are the artifact’s response to (desired) user action. I welcome tweaks, suggestions, and of course applications.

Requests and demands move human users upon specific paths, with differing levels of insistence.

A technological artifact requests when it pushes a user in some direction, but without much force. This is an affordance a user can easily navigate around. For instance, Facebook requests that users include a profile image, but one can sign up and engage the service without doing so. Similarly, David Banks’ coffee maker requests that he live in a spacious home, but still agrees to make coffee in his modest residence.

A technological artifact demands when its use is conditioned on a particular set of circumstances. Facebook demands, for instance, that users select a gender category before signing up, and Keurig demands its users make coffee with Keurig brand K-cups. Although demand runs the risk of technological determinism, it is important to note that even demands can be rebuffed, though the obstacle may be significant. For instance, one might jailbreak an iPhone, subverting its demand that distribution rests solely with Apple. Or, with likely much greater difficulty, one might craft their own K-cup, subverting Keurig’s demand for brand loyalty. I think we could say/fight more about demands, but I’m kind of looking forward to hashing it out on Twitter and in the comments.

Thinking about the difference between request and demand, we can imagine signing up for some service through an online form. The form has several blank fields for the applicant to fill in. Those marked with red stars are required; those without red stars are not. The form therefore requests that the applicant fill in all of the information, but demands that they fill in particular parts.
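The form analogy can be sketched in a few lines of code. This is a minimal sketch, with hypothetical field names and a hypothetical `unmetDemands` helper: a required field is a demand (submission is blocked without it), while an optional field is a request (the user is invited, but free to decline).

```javascript
// Hypothetical sign-up form specification. Required fields are "demands":
// the form will not submit without them. Optional fields are "requests":
// the user is nudged, but not compelled, to fill them in.
const fields = [
  { name: "email",    required: true  }, // demand
  { name: "gender",   required: true  }, // demand
  { name: "photoUrl", required: false }, // request
];

// Returns the names of demanded fields the user left blank,
// i.e., the unmet demands that block submission.
function unmetDemands(fields, values) {
  return fields
    .filter(f => f.required && !values[f.name])
    .map(f => f.name);
}

// A user who supplies only an email has declined the photoUrl "request"
// without consequence, but the unmet "gender" demand blocks submission.
console.log(unmetDemands(fields, { email: "a@example.com" }));
```

The asymmetry is the point: ignoring a request costs the user nothing, while ignoring a demand halts the interaction entirely.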

The second two categories refer to an artifact’s response to those things a user may wish to do.

A technological artifact allows when its architecture permits some act, but does so with relative indifference or even disapproval. The Facebook status update allows users to post links, text, and images. One can post short quips or longer narratives. These narratives can potentially follow a variety of affective lines, such as joy, excitement, depression, or disappointment.  Keurig allows users to make a variety of coffee flavors and strengths, offering a host of these through the machine-compatible brand. The user can also potentially run water through the same K-cup twice, reducing the value of each individual pod, though the technological artifact does not invite the user to do so.

A technological artifact encourages when its architecture makes a particular line of action easy and appealing, especially vis-à-vis alternative lines of action. It fosters, breeds, nourishes something, while stifling, suppressing, discouraging something else. Facebook, for example, encourages users to produce content, providing a host of templates and a centrally located status update box complete with a prompt: “What’s on your mind?” It further encourages interaction by providing “notifications” of relevant activity at the top of the page in an eye-catching red font and sending these notifications to users’ mobile devices. Twitter, in turn, encourages short blips and link sharing, while Tumblr encourages longer form engagement. Both, like Facebook, encourage engaged interaction.

In examining the how, the what remains critically important. It is the what that the artifact requests, demands, allows, or encourages. Affordances enable and constrain, and they are always a product of, and subject to, human agency. However, facilitations and constraints operate at different levels. This typology is useful in understanding the degree to which each affordance is open to negotiation. It recognizes not only the mutual influence of human and machine, but the variable nature of this relationship.
