I am sick of talking about trigger warnings. I think a lot of people are. The last few months have seen heated debates and seemingly obligatory position statements streaming through RSS and social media feeds.  I even found a piece that I had completely forgotten I wrote until I tried to save this document under the same name (“trigger warning”). Some of these pieces are deeply compelling and the debates have been invaluable in bringing psychological vulnerability into public discourse and positioning mental health as a collective responsibility. But at this point, we’ve reached a critical mass. Nobody is going to change anyone else’s mind. Trigger warning has reached buzzword status. So let’s stop talking about trigger warnings. Seriously. However, let’s absolutely not stop talking about what trigger warnings are meant to address: the way that content can set off intense emotional and psychological responses and the importance of managing this in a context of constant data streams.

I’m going to creep out on a limb and assume we all agree that people who have experienced trauma should not have to endure further emotional hardship in the midst of a class session nor while scrolling through their friends’ status updates. Avoiding such situations is an important task, one that trigger warnings take as their goal.  Trigger warnings, however, are ill equipped for the job.

Why Trigger Warnings Fall Short

Warning people of potential triggers is a great idea. But trigger warnings do way more than this. They warn of sensitive material, as per their primary function, but they also pick a fight.

Language is living. The meaning of a term is subject to the contexts in which people use it. The trigger debates have charged trigger warnings with a keen divisiveness. Trigger warnings not only warn, but also state that delivering warnings in this specifically explicit way is something writers, teachers, and speakers should do. Clearly, this is not a position with which everyone agrees, and claimants on both sides shade their arguments with a strong moral tint. Posting a trigger warning is therefore a political decision, one that tells a contingent of consumers to go screw.

It is perhaps tempting to shrug off concern for those audiences so vehemently against trigger warnings that the inclusion of one is taken as a personal affront. I implore you to resist the urge to shrug. Alienating audiences with opposing worldviews is exclusionary, unproductive, rude, and ultimately, unfortunate. It takes a potential conversation and turns it into a self-congratulatory monologue. This may be okay on a personal Facebook page, but less so in widespread public media and especially, classrooms.

So, imbued with the contentions of the trigger debates, trigger warnings do too much.  Ironically, however, trigger warnings also don’t do enough.

The logic of trigger warnings is that trauma can be mitigated if content producers prepare consumers for the inclusion of sensitive material. This supposes that the producer can identify what’s sensitive and thereby determine what requires warning. That is, trigger warnings presume that we can predict each other’s trauma. Avoiding psychological harm then depends upon accurate prediction. It’s essentially a bet that hinges on mindreading. If we take seriously what trigger warnings are intended for, this is a pretty risky bet.

Of course there are topics that make their sensitivity known—sexual assault, intimate partner violence, images of war, etc. But lots of potentially sensitive topics aren’t so obvious. I have a student who tells me she feels anxious and angry whenever she steps into an ice cream parlor, as the smell of baking cones brings back terrible memories of a negative work experience. For her, it produces discomfort rather than real psychological trauma, but what if her bad work experience went beyond an annoying boss and ice cream really did evoke a more serious reaction? Extrapolate this example to the myriad contexts in which people encounter jarring life events. A granular approach would provide trigger warnings for ever more topics, but this is a slippery slope that quickly becomes a losing battle. The content would be lost among the warnings, and potentially harmful content would still slip by.

So How Do We Write for an Audience With Whom We Don’t Necessarily Agree, While Caring for an Audience Whom We Can Never Entirely Know?

 I suggest we do so with an orientation towards audience intentionality among content producers, content distributors, and platform designers. Let the audience intentionally decide what to consume and on what terms. Don’t make consumption compulsory.

Content producers and distributors include published authors, social media prosumers, and classroom teachers. These are the people who make the content and spread it around. For them, I offer a very simple suggestion: use informative titles, thoughtful subtitles, and precise keywords. David Banks mentions the title approach as one he’s taken in lieu of trigger warnings. It’s a simple and elegant response to the trauma problem. Rather than “trigger warned content,” the content is just accurately framed. The reader can prepare without being warned, per se. Clever titles are, well, clever, but leave the reader unprepared and vulnerable to surprise. I’ve used clever titles. I’m now going to stop. Check out the title of this piece. Nothing fancy, but you knew what you were in for when you clicked the link. Goal accomplished. Not to mention, using clear titles with precise keywords helps with search engine optimization, and in a flooded attention economy, that’s nothing to sneeze at.

To the platform designers: stop it already with the autoplay. Design platforms with the assumption that users do not want to consume everything that those in their networks produce. People are excellent curators. They will click the link if they want to consume. Give people the opportunity to click and the equal opportunity to scroll by. This is all the more effective if producers and distributors clearly label their content.

Trigger warnings are earnest in their purpose, but don’t hold up as a useful tool of social stewardship. People know themselves and given enough information, can make self-protective decisions quite effectively. Trigger warnings are a paternalistic and divisive alternative to handing over consumptive decisions in subtler and simpler ways. Perhaps the best way we can care for one another is by helping and trusting each person to care for hirself.

Jenny Davis is on Twitter @Jenny_L_Davis



I know this is a technology blog but today, let’s talk about science.

When I’m not theorizing digital media and technology, I moonlight as an experimental social psychologist. The Reproducibility Project, which ultimately finds that results from most psychological studies cannot be reproduced, has therefore weighed heavily on my mind (and figured prominently in over-excited conversations with my partner/at our dogs).

The Reproducibility Project is impressive in its size and scope. In collaboration with the authors of original studies and volunteer researchers numbering in the hundreds, project managers at the Open Science Framework replicated 100 psychological experiments from three prominent psychology journals. Employing “direct replications” in which protocols were recreated as closely as possible, the Reproducibility Project found that out of 100 studies, only 39 produced the same results. That means over 60% of published studies did not have their findings confirmed.

In a collegial manner, the researchers temper the implications of their findings by correctly explaining that each study is only one piece of evidence and that theories with strong predictive power require robust bodies of evidence. Therefore, failure to confirm is not necessarily a product of sloppy design, statistical manipulations, or dishonesty, but an example of science as an iterative process. The solution is more replication. Each study can act as its own data point in the larger scientific project of knowledge production. From the conclusion of the final study:

As much as we might wish it to be otherwise, a single study almost never provides definitive resolution for or against an effect and its explanation… Scientific progress is a cumulative process of uncertainty reduction that can only succeed if science itself remains the greatest skeptic of its explanatory claims.

This is an important point, and replication is certainly valuable for the reasons the authors state. The point is particularly pertinent given an incentive structure that rewards new and innovative research and statistically significant findings far more than research that confirms what we already know or reports null results.

However, in its meta-inventory of experimental psychology, the Reproducibility Project suffers from a fatal methodological flaw: its use of direct replications. This methodological decision, based upon accurate mimicry of the original experimental protocol, misunderstands what experiments do—test theories.  The Reproducibility Project replicated empirical conditions as closely as possible, while the original researchers treated empirical conditions as instances of theoretical variables.  Because it was incorrectly premised on empirical rather than theoretical conditions, the Reproducibility Project did not test what it set out to test.

Experiments are sometimes critiqued for their artificiality. This is a critique based in misunderstanding. Like you, experimentalists also don’t care how often college students agree with each other during a 30 minute debate, or how quickly they solve challenging puzzles. Instead, they care about things like how status affects cooperation and group dynamics, or how stereotypes affect performance on relevant tasks. That is, experimentalists care about theoretical relationships that pop up in all kinds of real life social situations. But studying these relationships is challenging.  The social world is highly complex and contains infinite contingencies, making theoretical variables difficult to isolate. The artificial environment of the lab helps researchers isolate their theoretical variables of interest. The empirical circumstances of a given experiment are created as instances of these theoretical variables. Those instances necessarily change across time, context, and population.

For example, diffuse status characteristics, a commonly used social psychological variable, are defined as observable personal attributes that have two or more states that are differentially evaluated, where each state is culturally associated with general and specific performance expectations. Examples in the contemporary United States include race, gender, and physical attractiveness. In this example, we know that any of these may eventually cease to be diffuse status markers, hence the goal of social justice activism. Similarly, we can be sure that definitions of “physical attractiveness” will vary by population.

Experimentalists are meticulous (hopefully) in designing circumstances that instantiate their variables of interest, be they status, stereotypes, or, as in the case below, decision making.

One of the “failed replications” was of a study that originated at Florida State University. This study asked students to choose between housing units: small but close to campus, or larger but further from campus. The purpose of the study was to test conditions that affect decision making processes (in this case, sugar consumption). For FSU students, the housing choice was a difficult decision. At the University of Virginia, where the study was replicated, the decision was easy: FSU is a commuter school and UVA is not, so living close to campus was the only reasonable choice for the replication population. Unsurprisingly, the findings from Florida didn’t translate to Virginia. This is not because the original study was poorly designed, statistically massaged, or a fluke, but because in Florida the housing choice was an instance of a “difficult choice,” while in Virginia it was not. The theoretical variable of interest did not translate, and the replication study failed to replicate the theoretical test.

Experimentalists would not expect their empirical findings to replicate in new situations. They would, however, expect new instances of the theoretical variables to produce the same results, even though those instances might look very different.

Therefore, the primary concern of a true replication study is not empirical research design, but how that design represents social processes that persist outside of the laboratory. Of course, because culture shifts slowly, empirical replication is both useful and common in recreating theoretical conditions. However, a true replication is one that captures the spirit of the original study, not one that necessarily copies it directly. In contrast, the Reproducibility Project is actively atheoretical. Footnote 5 of their proposal summary states:

Note that the Reproducibility Project will not evaluate whether the original interpretation of the finding is correct. For example, if an eligible study had an apparent confound in the design, that confound would be retained in the replication attempt. Confirmation of theoretical interpretations is an independent consideration.

It is unfortunate that the Reproducibility Project contains such a fundamental design error, despite its laudable intentions: not only because the project used a lot of resources, but also because it takes an important and valid point—we need more replication—and undermines it by arguing from poor evidence. The Reproducibility Project proposal concludes with a compelling statement:

Some may worry that discovering a low reproducibility rate will damage the image of psychology or science more generally.  It is certainly possible that opponents of science will use such a result to renew their calls to reduce funding for basic research.  However, we believe that there is a much worse alternative: having a low reproducibility rate, but failing to investigate and discover it.  If reproducibility is lower than acceptable, then we believe it is vitally important that we know about it in order to address it.  Self-critique, and the promise of self-correction, is why science is such an important part of humanity’s effort to understand nature and ourselves.

I wholeheartedly agree. We do need more replication, and with the move towards electronic publishing models, there is more space than ever for this kind of work. Let us be careful, however, to conduct replications with the same scientific rigor that we expect of the studies’ original designers. And in the name of scientific rigor, let us be sure to understand, always, the connection between theory and design.


Jenny L. Davis is on Twitter @Jenny_L_Davis



The American Sociological Association (ASA) annual meeting last week featured a plenary panel with an unusual speaker: comedian Aziz Ansari. Ansari just released a book that he co-wrote with sociologist Eric Klinenberg titled “Modern Romance.” The panel, by the same name, featured a psychologist working within the academy, a biological anthropologist, Christian Rudder from OkCupid, and of course, Ansari and Klinenberg. This was truly an inter/nondisciplinary panel striving for public engagement. I was excited and intrigued. The panel is archived here.

This panel seemingly had all of the elements that make for great public scholarship. Yet somehow, it felt empty, cheap, and at times offensive. Or as I appreciatively retweeted:

[Tweet screenshot]

My discomfort and disappointment with this panel got me thinking about how public scholarship should look. As a person who co-edits an academic(ish) blog, this concern is dear to me. It is also a key issue of contemporary intellectualism. It is increasingly easy to disseminate and find information. Publishing is no longer bound by slow and paywalled peer-review journals. Finally, we have an opportunity to talk, listen, share, and reflect on the ideas about which we are so passionate. But how do we do this well? I suggest two guiding rules: rigor and accommodation.

Be rigorous. Social science is like a super power that lets you see what others take for granted and imagine alternate configurations of reality. Common sense comes under question and is often revealed as nonsensical. Public scholarship therefore maintains both the opportunity and responsibility to push boundaries and challenge predominant ways of thinking. The ASA panel missed this opportunity and in doing so, shirked their responsibility.

First of all, the panel, like Ansari and Klinenberg’s book, was titled “Modern Romance.” When drafting my Master’s thesis, the people supervising the work taught me that “modern” did not mean what I thought it meant. Modernism is a particular historical moment brought forth during the industrial revolution. Without going too far into it, scholars continue to debate whether we have moved past modernism, and if so, what characterizes this new era, and in turn, what we should call it. Labeling the contemporary era “modern” is therefore an argument in and of itself, one that reveals a set of underlying assumptions that differ from those of postmodernism, poststructuralism, liquid modernity, etc. My thesis advisors told me to use “contemporary” instead. It means “now” and is a far less value-laden way of representing the current time period.

I got no indication that the ASA panelists held strong to the theoretical underpinnings of modernism vis-à-vis other historical designations. Modernism, therefore, was misused, just as I once misused it in the first draft of my Master’s thesis. This seems like a nitpicky point, and admittedly it is, but it matters. Public engagement entails opening dialogue between those with different kinds and levels of intellectual capital. This means that discourse can operate at multiple levels. The public scholar can communicate something broad to the larger citizenry, while communicating a more nuanced point to insiders. Moreover, how scholars speak becomes a form of training. If we say modern, then the citizens engaged in discourse with us will also say modern, thereby cultivating imprecision and perhaps even generating confusion.

The second (and larger) issue was with the tenor of the panel as a whole. About halfway through, I shot off this tweet:

[Tweet screenshot]

These panelists had an opportunity to offer new ways of thinking about love, romance, and family. Instead, they maintained heterosexual, monogamous, procreative, marriage relationships at the center of their discussion. Leaving aside a few cringe-worthy statements from Ansari, the panel as a whole presumed that marriage was the ultimate goal for those using dating apps, even if users wished to employ them for casual hookups in the meanwhile. The biological anthropologist made evolutionary arguments about procreation, and concluded that changes in romantic connections represented “slow love” in which marriage was the “finale” rather than the beginning. In this vein, they all talked about increases in cohabitation through the lens of declining marriage rates, rather than a reconfiguration of kinship ties and life course trajectories. In an exciting historical moment of dynamic cultural change, the panelists’ take on romance was painfully linear.

Rather than rigor, the panel offered lazy language choices and a linear heteronormative logic that kept it safely inside mainstream ways of thinking.

The flip side of rigor is accommodation. To engage the public is not to mansplain things to them, but to offer the fruits of academic training in an accessible way while taking seriously the counterpoints, hesitations, and misunderstandings this may entail. Tangibly, this means intellectuals should use language that is as simple as possible while remaining precise; it means exercising patience when lay-publics espouse ideas or use language that seems outdated or even offensive; it means remaining open to viewpoints rooted in lived experience rather than scientific study or high theory; it means remaining flexible while maintaining intellectual integrity. As an audience, the crowd at ASA failed to strike this balance. Instead, it became a weird dichotomy between fanenbying[1] and hyper-pretentious pushback. As I noted earlier, the panelists were heteronormative to a fault. The panel itself was therefore something of an intellectual sacrifice, as were the wholesale endorsements coming from the crowd. Those who engaged the panel critically, however, often did so without accommodation. They censured panelists in the pretentious language of insiders, complete with conference tropes such as “troubled by,” “problematic,” and “this isn’t so much a question as it is a comment.”

This all came to a head when the first person to ask a question took about five minutes to use all of the conference tropes I just mentioned. Ansari replied: “It’s clear that you have some issues, and I also have an issue. You just said ‘this isn’t really a question it’s a comment,’ but you’re standing in the Q&A line!!” The crowd erupted. Ansari said something we have all wanted to say to long-winded commentators. He identified and called out a truly poor habit within the academy. However, the person whom Ansari shut down was making a valid point; she just made it in a way that was unaccommodating. Because of this, the cheering felt uncomfortable to me. The cheers invalidated the commentator’s points and, in doing so, endorsed the panelists’ message, a message which really deserved a harsh critique.

I appreciate that ASA made the move towards public scholarship, and I appreciate that public scholarship is difficult. This is why I’m pushing them—pushing us—to think about how public scholarship can/should look in practice. A simple starting place is to engage with rigor and accommodation. Maintain intellectual standards while meeting publics where they are.

I’m interested to hear other people’s thoughts on the panel and/or public scholarship more generally.


Jenny Davis practices public scholarship regularly on Twitter @Jenny_L_Davis

[1] Google tells me fanenby is the gender neutral way to say “fangirl/fanboy” (enby for the NB of non-binary)


Disclaimer: Nothing I say in this post is new or theoretically novel. The story to which I’ll refer already peaked over the weekend, and what I have to say about it—that trolling is sometimes productive—is a point well made by many others (like on this blog last month by Nathan Jurgenson). But seriously, can we all please just take a moment and bask in appreciation of trolling at its best?

For those who missed it, Target recently announced that they would do away with gender designations for kids’ toys and bedding. The retailer’s move toward gender neutrality, unsurprisingly, drew ire from bigoted jerks who apparently fear that mixing dolls with trucks will hasten the unraveling of American society (if David Banks can give himself one more “calls it as I sees it” moment, I can too).

Sensing “comedy gold,” Mike Melgaard went to Target’s Facebook page. He quickly created a fake Facebook account under the name “Ask ForHelp” with a red bullseye as the profile picture. Using this account to pose as the voice of Target’s customer service, he then proceeded to respond with sarcastic mockery to customer complaints. And hit gold, Mike did!! For 16 hilarious hours, transphobic commenters provided a rich well of comedic fodder. Ultimately, Facebook stopped the fun by removing Melgaard’s Ask ForHelp account. Although Target never officially endorsed Melgaard, they made their support clear in this Facebook post on Thursday evening:

[Screenshot of Target’s Facebook post]

While you enjoy a short selection of my personal favorite Ask ForHelp moments, keep in mind a larger point: trolling can be a good thing, and trolls can do important work. The act of trolling refers to intentionally disrupting discourse. Often, this is destructive and shuts dialogue down. Sometimes, though, trolling is productive and brings a dialogue to new depths, heights, and/or in new directions. Melgaard’s Ask ForHelp account is a beautiful example of trolling gone wonderfully right. The troll managed to co-opt a corporate site (Facebook) for purposes of co-opting a corporate identity (Target) for purposes of discrediting those who espouse hate and endorse exclusionary policies/practices. And he was funny about it. THIS is how you troll.

[Screenshots of the Ask ForHelp exchanges]
Jenny is on Twitter @Jenny_L_Davis

African Burial Ground Monument with Office Building Reflection.

Towards the beginning of Italo Calvino’s novel Invisible Cities, Marco Polo sits with Kublai Khan and tries to describe the city of Zaira. To do this, Marco Polo could trace the city as it exists in that moment, noting its geometries and materials. But, such a description “would be the same as telling (Kublai) nothing.” Marco Polo explains, “The city does not consist of this, but of relationships between the measurements of its space and the events of its past: the height of a lamppost and the distance from the ground of a hanged usurper’s swaying feet.” This same city exists by a different name in Teju Cole’s novel, Open City. Its protagonist, Julius, wanders through New York City, mapping his world in terms reminiscent of Marco Polo’s. One day, Julius happens upon the African Burial Ground National Monument. Here, in the heart of downtown Manhattan, Julius measures the distance between his place and the events of its past: “It was here, on the outskirts of the city at the time, north of Wall Street and so outside civilization as it was then defined, that blacks were allowed to bury their dead. Then the dead returned when, in 1991, construction of a building on Broadway and Duane brought human remains to the surface.” The lamppost and the hanged usurper, the federal buildings and the buried enslaved: these are the relationships, obscured and rarely recoverable though they are, on which our cities stand.

One morning early this spring, I stood at the intersection of Broadway and Duane and faced the African Burial Ground National Monument. It is, as Julius describes it, little more than a “patch of grass” with a “curious shape… set into the middle of it.” Inside the neighboring office building, though, is a visitor center with its own federal security guards, gift shop, history of the burial site and its rediscovery, and narrative of Africans in America. The tower in which it sits, named the Ted Weiss Federal Building, was completed in 1994, three years after the intact burials were discovered. (The “echo across centuries” Julius hears at the site of course fell on deaf developers’ ears.) In the lobby between the visitor center and the monument, employees of the Internal Revenue Service shuffle over the sacred ground as Barbara Chase-Riboud’s sculpture Africa Rising looks on, one face to the West, another to the East.

Africa Rising, photo from Library of Congress

I visited the African Burial Ground National Monument during a spring break trip for a course called Exploring Ground Zeros. We spent much of our class time during the semester visiting sites of trauma near our university in St. Louis, trying to uncover the webs of historical and contemporary claims that determine their meaning. In East St. Louis, we tracked the (now invisible) paths of the 1917 race riot/massacre. In St. Louis, we walked through the urban forest of Pruitt-Igoe, stared aghast at the Confederate Memorial in Forest Park, and visited the Ferguson Quick Trip, the now (in)famous epicenter of the Mike Brown protests in which many of us continue to take part. In New York, we did the same, moving from the Triangle Shirtwaist Factory to 23 Wall Street to Ground Zero to the African Burial Ground National Monument. At each site, we analyzed architecture, commemorations, official literature, and wrote field notes, trying to measure the city by its relationships so that we might later recreate it in our assignments.

We also took pictures, and a few of us uploaded ours to Instagram. After visiting the African Burial Ground and the old Triangle Shirtwaist Factory (now an NYU building), I turned to Instagram, not only to preserve and catalogue my photographic evidence, but also to find more. Were these sites worthy of selfies and faux nostalgic filters? What hashtags blessed the posts? By exploring others’ photos, I hoped to learn more about how people engage with place, and how the sites in their photos exist in contemporary cultural memory. After all, though critics have focused on what Instagram and the selfie culture it enables say about our relationship to our social world and ourselves, Instagram reveals just as much about our relationships with place. In every selfie, there’s a background, a beautiful bathroom or sunset immortalized.

And so, I tagged my photos with the location at which they were taken, and looked through publicly posted photos under the same name. Quickly, though, I ran into a problem. As I mentioned, the African Burial Ground National Monument shares the same coordinates as the Ted Weiss Federal Building. Instagram, however, treats them as distinct locations. On its website, Instagram defines location as the “location name” added by the uploader. Simple enough. In the next paragraph, though, “place” replaces “location.” So, in the world of Instagram, “location” is “location name” is “place” is “place name”; it is also never plural. What’s more, these definitions form a curatorial practice. On Instagram, photos are organized by location name, and the newest update offers a “search places” option. This means the reality of social place, that of competing claims and layered meanings, physical or otherwise, cannot be found. Each place is neatly and securely tied to its single tag. This is no doubt an issue of convenience and loyalty to the wisdom “you can’t be in two places at once,” but the result is undeniable: on Instagram, the Ted Weiss Building has never heard of the Burial Ground.




What Instagram does offer is the possibility to “create a place,” a feature that most honestly reflects our everyday experience of place. Technically speaking, creating a place means attaching a manually entered location name to the coordinates where the photo was taken. Conceptually, though, this action has greater significance. Defining place, or place-making, anthropologist Keith Basso tells us, is “a common response to common curiosities… As roundly ubiquitous as it is seemingly unremarkable, place-making is a universal tool of the historical imagination.” Instagram does act as a tool for place-making, then, but its singular definitions prevent it from acting as a site from which to honestly make place. This is because it severely, unnecessarily limits the possibilities of the historical imagination. By equating “place” to “location name” and organizing photographs according to that name, Instagram creates a world in which places and locations are only as old as their names.

There are two more apparent obstacles in the way of Instagram’s historical imagination: oversaturation and amateurism. In the same essay, Keith Basso offers a compelling account of how historical imagination works to make place:

When accorded attention at all, places are perceived in terms of their outward aspects—as being, on their manifest surfaces, the familiar places they are—and unless something happens to dislodge these perceptions they are left, as it were, to their own enduring devices. But then something does happen. Perhaps one spots a freshly fallen tree, or a bit of flaking paint, or a house where none has stood before—any disturbance, large or small, that inscribes the passage of time—and a place presents itself as bearing on prior events. And at that precise moment, when ordinary perceptions begin to loosen their hold, a border has been crossed and the country starts to change. Awareness has shifted its footing, and the character of the place, now transfigured by thoughts of an earlier day, swiftly takes on a new and foreign look.

In photographic terms, this disturbance sounds exactly like Roland Barthes’ punctum, a term he coined in Camera Lucida: Reflections on Photography to describe a significant and hidden detail in emotionally moving photographs. There he describes the punctum as “that accident (in the photograph) which pricks me (but also bruises me, is poignant to me).” Like Basso’s fallen tree, it “rises from the scene, shoots out of it like an arrow, and pierces.” It also shifts his awareness, transfiguring reality across time and space. Noticing the dirt road in a photograph by Kertesz, Barthes “recognize(s), with my whole body, the straggling villages I passed through on my long-ago travels in Hungary and Rumania.” But are punctums possible on Instagram?

Of course, our contemporary relationship with photographs on Instagram is quite different from Barthes’.  In a post called “From Memory Scarcity to Memory Abundance,” Michael Sacasas asks, “What if Barthes had not a few, but hundreds or even thousands of images of his mother?” The answer, naturally, is a dramatic decrease in meaning. Sacasas writes, “Gigabytes and terabytes of digital memories will not make us care more about those memories, they will make us care less.” Surely, it seems unreasonable to expect Instagram users to view its photos with the same attention Barthes paid the photographs in Camera Lucida. And yet, all is not lost. Perhaps, Barthes’ punctum and Basso’s disturbance are merely displaced from an individual photograph to the application itself.

Imagine uploading a picture of your biology class at NYU and finding that hundreds have taken photos of the same room, but from the street, where garment workers jumped to their deaths when your lab room, formerly the Triangle Shirtwaist Factory, was on fire. Even if you haven’t visited the site, looking at photographs from both perspectives might inspire the same prick the dirt road caused in Barthes. Describing that prick, Barthes writes parenthetically, “Here, the photograph really transcends itself: is this not the sole proof of its art? To annihilate itself as medium, to be no longer a sign but the thing itself?” An Instagram grid showing photos of the Ted Weiss Federal Building and the African Burial Ground National Monument simultaneously could, for some, be the thing itself. “The thing”, in this case, is not the texture of the ground or a traveller’s weariness, but the unexpected terror and sadness of a city showing its true foundations.

This combined grid’s disturbance is not only destructive, destabilizing conceptions of place, but creative. Indeed, according to Basso, “Building and sharing place-worlds… is not only a means of reviving the former times but also of revising them, a means of exploring not merely how things might have been but also how, just possibly, they might have been different from what others have supposed.” This function is particularly important today, as communities worldwide question the names they call their spaces. In the past few months, UNC Chapel Hill changed the name of Saunders Hall to Carolina Hall (Saunders was apparently a KKK leader); aldermen in St. Louis are trying to change the name of Confederate Drive, which cuts through Forest Park, a 1300-acre public park; and the student government at Rhodes University in South Africa voted in March to rename the university. How will Instagram incorporate contentious and changing place names? Today, there isn’t yet a geotag for Carolina Hall, but when there is, its photos will be kept separate from those taken at Saunders Hall. So, like the story of 290 Broadway, the UNC name change will be preserved and hidden. Students at UNC will create a new place, and before long it will have more photographs than the old. The archive of photos taken at Saunders Hall will continue to exist, static, and visible only to those who remember its previous name and implied owner.


Instagram makes room for these revisions, but not for places to present themselves “as bearing on prior events.” Allowing places to converse with their pasts—by allowing multiple geotags or associating geotags with time period(s) to display layers of names and meanings—would transform Instagram from a platform to share into a platform to converse and learn. Here, places would exist free of Instagram’s current architectural constraints, as palimpsests waiting to “inscribe the passage of time” and activate the historical imagination.
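The layered-geotag idea can be sketched in code. This is purely illustrative: Instagram’s actual data model is not public, and the class names, coordinates, and years below are my own assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaceName:
    """One historical name for a place, valid over a span of years."""
    name: str
    start_year: int
    end_year: Optional[int] = None  # None means the name is still current

@dataclass
class Place:
    """A geotagged place that keeps all of its names, not just the newest."""
    lat: float
    lon: float
    names: List[PlaceName] = field(default_factory=list)

    def names_at(self, year: int) -> List[str]:
        """Return every name in effect during a given year."""
        return [n.name for n in self.names
                if n.start_year <= year
                and (n.end_year is None or year <= n.end_year)]

# Hypothetical entry for the UNC building discussed above
hall = Place(35.91, -79.05, [
    PlaceName("Saunders Hall", 1922, 2015),
    PlaceName("Carolina Hall", 2015),
])
```

Under a model like this, photos tagged at the same coordinates would surface together regardless of which name was current when they were taken, so a place could “bear on prior events” instead of splitting into disconnected archives.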



Jonathan Karp (@statusfro) is a recent graduate from Washington University in St. Louis, where he studied English literature and music.


Autonomous Intelligence

The International Joint Conference on Artificial Intelligence (IJCAI-15) met last week in Buenos Aires. AI has long captured the public imagination, and researchers are making fast advances. Conferences like IJCAI-15 have high stakes, as “smart” machines become increasingly integrated into personal and public life. Because of these high stakes, it is important that we remain precise in our discourse and thought surrounding these technologies. In an effort towards precision, I offer a simple corrective point: intelligence is never artificial.

Machine intelligence, like human intelligence, is quite real. That machine intelligence processes through hardware and code has no bearing on its authenticity. The machine does not pretend to learn, it really learns, it actually gets smarter, and it does so through interactions with the environment. Ya know, just like people.

In the case of AI, artificiality implicitly refers to that which is inorganic: an intelligence born of human design rather than within the human body. Machine intelligence is built, crafted, curated by human hands. But is human intelligence not? Like Ex Machina’s Ava and HUMANS’ Anita/Mia, we fleshy cyborgs would be lost, perhaps dead, certainly unintelligent, without the keen guidance of culture and community. And yet, human intelligence remains the unmarked category while machine intelligence is qualified as “fake.”

To distinguish human from machine intelligence through the criteria of artificiality is to misunderstand the very basis of the human mind.

Humans have poor instincts. Far from natural, even our earliest search for food (i.e., the breast) is often fraught with hardship, confusion, and many times, failure. Let alone learning to love, empathize, write, and calculate. As anthropologists and neuroscientists have shown and continue to show, the mind is a process, and this process requires training. Like machines, humans have to learn how to learn. Like machines, the human brain is manufactured. Like machines, the human brain learns with the cultural logic within which it develops.

Variations in intelligence do not, therefore, hinge on artificiality. Instead, they hinge on vessel and autonomy.

The two ideal-type vessels are human and machine; one soft, wet, and vulnerable to disease, the other materially variable and vulnerable to malware. Think Jennings and Rutter vs. Watson on Jeopardy. In practice, these vessels can—and do—overlap. Ritalin affects human learning just as software upgrades affect machine learning; humans can plant chips in their bodies, learning with both organic matter and silicon; robots can feel soft, wet, and look convincingly fleshy. Both humans and machines can originate in labs. However, without going entirely down the philosophical and scientific rabbit hole of the human/machine Venn diagram, humans and machines certainly maintain distinct material makeups, existential meanings, and ontologies. One way to differentiate between human and machine intelligence is therefore quite simple: human intelligence resides in humans while machine intelligence resides in machines. Sometimes, humans and machines combine.

Both human and machine intelligence vary in their autonomy. While some humans are “free thinkers,” others remain “close-minded.” Some socio-historical conditions afford broad lines of thought and action, while others relegate human potential to narrow and linear tracks. So too, when talking about machine intelligence and its advances, what people are really referring to is the extent to which the machine can think, learn, and do on its own. For example, IJCAI-15 opened with a now viral letter from the Future of Life Institute in which AI and robotics researchers warn against the weaponization/militarization of artificial intelligence. In it, the authors explicitly differentiate between autonomous weapons and drones. The latter are under ultimate human control, while the former are not. Instead, an autonomous weapon acts of its own accord. Machine intelligence is therefore not more or less artificial, but more or less autonomous.

I should point out that ultimate autonomy is not possible for either humans or machines. Machines are coded with a logic that limits the horizon of possibilities, just as humans are socialized with implicit and explicit logical boundaries[1]. Autonomy therefore operates upon a continuum, with only machines able to achieve absolute zero autonomy, and neither human nor machine able to achieve total autonomy. In short, an autonomous machine approximates human levels of free thought and action, but neither human nor machine operate with boundless cognitive freedom.

Summarily: ‘Artificial Intelligence’ is not artificial. It is machine-based and autonomous.


Follow Jenny Davis on Twitter @Jenny_L_Davis



[1] I want to thank James Chouinard for this point.


The concept of affordances, which broadly refers to those functions an object or environment makes possible, maintains a sordid history in which overuse, misuse, and varied uses have led some to argue that researchers should abandon the term altogether. And yet, the concept persists as a central analytic tool within design, science and technology studies, media studies, and even popular parlance. This is for good reason. Affordances give us language to address the push and pull between technological objects and human users as simultaneously agentic and influential.

Previously on Cyborgology, I tried to save the term, and think about how to theorize it with greater precision. In recent weeks, I have immersed myself in the affordances literature in an attempt to develop the theoretical model in a tighter, expanded, and more formalized way. Today, I want to share a bit of this work: a timeline of affordances. This includes the influential works that theorize affordances as a concept and analytic tool, rather than the (quite hearty) body of work that employs affordances as part of their analyses.

The concept has an interesting history that expands across fields and continues to provoke debate and dialogue. Please feel free to fill in any gaps in the comments or on Twitter.

1966: J.J. Gibson, an ecological psychologist, first coins the term affordances in his book The Senses Considered as Perceptual Systems. He defines it as follows: “what things furnish, for good or ill.” He says little else about affordances in this work.

1979: Gibson gives fuller treatment to affordances in his book The Ecological Approach to Visual Perception. His interest in this work is understanding variations in flight skills among military pilots in WWII. He finds that depth perception is not an adequate predictor. As such, he puts forth an ecological approach in which animals (in this case pilots) and the environment (the airplane and airfield) together create the meaningful unit of analysis. In doing so, he places emphasis on the medium itself as an efficacious actor. He expands the definition:

The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill… These affordances have to be measured relative to the animal (emphasis in original)

Gibson further makes clear that affordances exist regardless of their use. That is, environments and artifacts have inherent properties that may or may not be usable for a given organism.

1984: William H. Warren quantifies affordances, using stair climbability as the exemplar case. Demonstrating the relationship between an organism and what the environment affords for that organism, Warren examines the optimal and critical points at which stairs afford climbing, based upon a riser-to-leg-length ratio. He shows that stair climbing requires the least amount of energy (optimally affords climbing) when the ratio is .26, and stairs cease to be climbable beyond a .88 ratio. Respondents accurately perceive these ratios in evaluating their ability to climb a given set of stairs.
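Warren’s finding reduces to simple arithmetic on the ratio of riser height to leg length. A minimal sketch; the ±0.05 “optimal” band here is my own illustrative tolerance, while the 0.26 optimum and 0.88 critical boundary are Warren’s published values:

```python
def climbing_affordance(riser_height: float, leg_length: float) -> str:
    """Classify stair climbability from the riser-to-leg-length ratio
    (Warren 1984): ~0.26 is energy-optimal, above 0.88 stairs no longer
    afford climbing for that person."""
    ratio = riser_height / leg_length
    if ratio > 0.88:               # beyond the critical point
        return "not climbable"
    if abs(ratio - 0.26) <= 0.05:  # near the energy-optimal ratio
        return "optimal"
    return "climbable"
```

Note the relational character of the measure: the same staircase yields different classifications for people with different leg lengths, which is exactly Gibson’s point that affordances “have to be measured relative to the animal.”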

1988: Donald A. Norman writes The Psychology of Everyday Things (POET) (now published as The Design of Everyday Things). Norman’s book brings the concept of affordances to the Human-Computer-Interaction (HCI)/Design community. Norman takes issue with Gibson’s assumption that affordances can exist as inherent properties of objects or environments. Rather, Norman argues that affordances are also products of human perception. That is, the environment affords what the organism perceives it to afford. He states:

The term affordance refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used. A chair affords (‘is for’) support and, therefore, affords sitting. A chair can also be carried.

Since Norman’s original formulation, researchers have either adhered to it, adhered to Gibson’s, tried to synthesize the two, or used the term without specifying which definition they pull from or how they try to reconcile inconsistencies. This is exemplified in an HCI literature review by McGrenere and Ho, in which about a third of the articles employ Gibson’s definition, another third employ Norman’s, and the remaining third never define the term or do so without clear reference to these two key figures.

1999: Norman writes an article distinguishing between real and perceived affordances. He laments not making this distinction in POET and argues that with this distinction in mind, perception should be the variable of interest for designers.

2003: Keith Jones organizes a symposium and special issue of Ecological Psychology in which scholars debate the continued relevance of affordances as a concept. This issue draws almost exclusively on Gibson and, without much reference to Norman, scholars retain Gibson’s relational/ecological approach, but diverge from the assumption that an object contains inherent properties. That is, affordances must be complemented by the potentialities of the organism.

2005: Martin Oliver tells us to give up on affordances as a concept. In an influential article, Oliver argues that the term has been employed under such a variety of definitions that it is now meaningless. He further analyzes and critiques Gibson (1979) and Norman (1999) in their use of the term. Gibson’s perspective is positivistic, with assumptions about essential properties of an object, while Norman is constructivist, but discounts any inherent properties. Oliver argues that we should take a literary analysis approach to technology, working to understand both the writing (design) and reading (use) of the technology.

Despite Oliver’s plea, the term persists.

2012: Tarleton Gillespie organizes an invited dialogue about affordances in the Journal of Broadcasting & Electronic Media. Here, Gina Neff, Tim Jordan, and Joshua McVeigh-Schultz grapple with the role of human and technological agency. They tenuously maintain affordances as useful, but call for a more nuanced theorizing of how affordances apply within technology and communication studies, and how this differs across users.

I, of course, think the concept is useful. But more importantly, people are using it. I therefore agree with Neff and colleagues. Let’s keep theorizing with affordances, but let’s make sure that we do it well.


Jenny Davis is on Twitter (which affords 140 character texts, images, short videos, and link sharing) @Jenny_L_Davis


The case of sociologist Zandria Robinson, formerly of the University of Memphis and now teaching at Rhodes College, has a lot to say about the affordances of Twitter. But more than this, it says a lot about the intersection of communication technologies and relations of power.

Following the Charleston shooting, Robinson tapped out several 140 character snippets rooted in critical race theory. Critical race theory recognizes racism as interwoven into the very fabric of social life, locates racism within culture and institutions, and insists upon naming the oppressor (white people).



Quickly, conservative bloggers and journalists reposted the content [i] with biting commentary on the partisan problem of higher education. This came in the wake of criticism for earlier social media posts in which Robinson discredited white students’ racist assumptions about black students’ (supposedly easier) acceptance into graduate school. On Tuesday, the University of Memphis, Robinson’s (now former) employer, tweeted that she was no longer affiliated with the University[ii].

Robinson’s case, and those like it, highlight the unique position held by Twitter as both an important platform for political speech and a precarious platform through which speakers can find themselves censured. Twitter grants users voice, but only a little, only sips at a time. These sips, so easy to produce, are highly susceptible to de-contextualization and misinterpretation, and yet, they remain largely public, easily sharable, and harbor serious consequences for those who author them. While this may have embarrassing outcomes for some public figures, for those speaking from the margins, the affordances of Twitter can produce devastating results.

Twitter is a tough place to navigate. It provides a big audience, lets users make concise points, and promotes sharing, but also, it provides a big audience, lets users make concise points, and promotes sharing. These features facilitate content distribution to an extent never before possible, and arguably unmatched on any other medium. However, these same features make it hard to convey complex arguments, easy to misinterpret, and likely that content lands in unexpected places.

Twitter’s abbreviated message structure, by limiting text to 140 characters, places a lot of the communicative onus upon readers. Readers get the one-liner, and have to know enough to interpret the content in context. The margins are, by definition, unfamiliar to the mainstream. Messages from the margins are therefore particularly susceptible to misinterpretation, while marginal voices are particularly vulnerable to formal and informal punishment.

As a sociologist who has read critical race theory and learned from critical race theorists, I read Robinson’s tweets as impassioned statements of well-established and well-founded lines of thought. For the uninitiated, however, they were jarring. The average nice white ladies of the world don’t understand that “whiteness is most certainly and inevitably terror” refers to a history of white-on-black interpersonal and institutional violence, degrading media portrayals, over-policing and under-protection of black communities, hypersexualization of black women, and fear mongering aimed at black men. And of course they don’t, that’s one of the key points of critical race theory: cultural logics render power-hierarchies invisible while perpetuating race-based opportunity structures that privilege whites. While my scholarly training let me fill in the substance behind Robinson’s tweets, this was not the case for all readers. Ultimately, Robinson has a new job.

Robinson’s message came from the margins. Readers were unable to do the work of interpretation, and like so many marginal voices, Robinson’s required an account, an explanation, a (likely exhausting) conversation, in order to penetrate those who do not already understand. This is an unjust and unfair reality. People who experience oppression are burdened with the labor of teaching those who oppress. This labor, these conversations, they did not happen on Twitter. They could not have happened on Twitter. In short, even as Twitter gives voice, its affordances disproportionately distribute the efficacy and consequences of speech.

Robinson is not a radical, nor were her words unfounded. They were read, however, by eyes untrained, through a medium ill prepared to teach people what they’ve worked so hard to never learn.


Jenny Davis is on Twitter @Jenny_L_Davis



[i]No link. I don’t want to drive traffic their way. You’ll have to Google.

[ii] Robinson apparently says she was not fired, but neither she nor the University have released further details. Regardless, she has a new job and I’m pretty certain that the timing is not coincidental.

*Editor’s Note: Robinson has since posted a response in which she explains her decision to leave the University of Memphis*



Today seems like a good day to talk about political participation and how it can effect actual change.

Habermas’ public sphere has long been the model of ideal democracy, and the benchmark against which researchers evaluate past and current political participation. The public sphere refers to a space of open debate, through which all members of the community can express their opinions and access the opinions of others. It is in such a space that reasoned political discourse develops, and an informed citizenry prepares to enact their civic rights and duties (i.e., voting, petitioning, protesting, etc.). A successful public sphere relies upon a diversity of voices through which citizens not only express themselves, but also expose themselves to the full range of thought.

Internet researchers have long occupied themselves trying to understand how new technologies affect political processes. One key question is how the shift from broadcast media to peer-based media brings society closer to, or farther from, a public sphere. Increasing the diversity of voices indicates a closer approximation, while narrowing the range of voices indicates democratic demise.

By this metric, the research doesn’t look good for democracy. In general, people seek out opinion-confirming information. That is, we actively consume content that strengthens—rather than challenges—our views. For those of us who hide Facebook Friends, mute people/hashtags on Twitter, and read news from a select few sources, this may not come as a surprise.

This confirmation bias is algorithmically strengthened through what Eli Pariser calls a filter bubble. Many of the platforms and websites that feed us news and information are financially driven companies. These companies make money by selling space to advertisers, who pay according to the quantity and extent of users. That is, advertisers purchase eyeballs, so it behooves Internet companies to maximize eyeballs on their site, for as long as possible. This results in users receiving information that is already appealing to them. Pandora, for example, shows you music that’s similar to what you already listen to, while Google produces results that line up with the kinds of links you tend to click. In this way, our views and preferences are largely given back to us, creating a bubble that protects against, rather than invites, disagreement and debate.
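The mechanism can be caricatured in a few lines. This toy ranker is not any real platform’s algorithm, just a sketch of the opinion-confirming logic Pariser describes, with invented story and topic names:

```python
def filter_bubble_rank(stories, clicked_topics):
    """Rank stories by overlap with topics the user already clicks on,
    so confirming content rises and challenging content sinks."""
    def score(story):
        return len(set(story["topics"]) & set(clicked_topics))
    return sorted(stories, key=score, reverse=True)

feed = filter_bubble_rank(
    [{"title": "Op-ed you will disagree with", "topics": ["opposing-view"]},
     {"title": "Piece that confirms your priors", "topics": ["your-view", "politics"]}],
    clicked_topics=["your-view", "politics"],
)
```

Each new click would feed back into `clicked_topics`, so every pass through the loop tightens the bubble a little more.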

Information in the digital age is plentiful, and the work of engaged citizens entails sorting through it to find what is relevant, meaningful, and useful. It seems that both individual practices of filtering and algorithmic filters work against a version of democracy in which political action stems from reasoned consideration of key issues from all possible sides. The Internet has not given us a public sphere.

But what if the public sphere is not the democratic ideal? What if, instead, the driving force of political participation is community and commiseration? This alternative democratic vision is the driving logic behind Brigade, a series of web and mobile tools that promise to help users become active political citizens.

CEO Matt Mahan explains that these tools allow people to “declare their beliefs and values, get organized with like-minded people, and take action together, directly influencing the policies and the representatives who have an impact on the issues they care about.” This is a model that embraces—rather than fights against—confirmation bias and algorithmic filter bubbles. And it does so in the service of political action.

Currently in a beta version, Brigade is still invite-only. After I requested and received an invite, Brigade prompted me with a series of political questions and encouraged me to answer more. After submitting my responses I could see how I compared with the general populace, and with those in my existing social media networks. I could also connect with others who share similar views, and learn about opportunities to get involved. It basically asks what I think, and then shows me my people. This is powerful. When a person states an opinion, as Brigade prompts users to do, it reflects a belief, but also, actively forms it. We are what we do, and stating that we believe something makes us believe it a little more firmly. Having established this belief, the user connects with others who agree, literature that supports, and events in which to participate.

In a strange way, Brigade itself embodies the will of the people. We filter, we affirm, we look for like-minded others. Brigade, as a political tool, helps us do it better. It is unclear if this will translate into votes and policies, but regardless, Brigade’s mere existence challenges us to reconsider the metrics of an ideal democracy. Perhaps, the public sphere will be dethroned.


Jenny Davis is on twitter @Jenny_L_Davis


A new duo of apps purports to curb sexual assault on college campuses. WE-CONSENT and WHAT-ABOUT-NO work together to account for both consensual sexual engagement (“yes means yes”) and unwanted sexual advances, respectively.

The CONSENT app works by recording consent proceedings, encrypting the video, and saving it on secure servers. The video is only accessible to law enforcement through a court order or as part of a university disciplinary hearing. The NO app gives a textual “NO” and shows an image of a stern looking police officer. If the user pays $5/year for the premium version, the app records and stores the recipient of the “no” message, documenting nonconsent. The apps are intended to facilitate mutually respectful sexual engagement, prevent unwanted sexual contact, and circumvent questions about false accusations. See below for quick tutorials provided by the developers.



The app suite is timely and its goals are laudable. These apps reflect a particular historical moment at the intersection of rape culture, emerging consent awareness, and norms of documentation coupled with widespread access to documentary devices (i.e., mobile phones with cameras and Internet capabilities). They address the issue of sexual assault on college campuses, which is a problem. A big one.

Although the problem of sexual assault has been around for a while, it has taken hold of public attention over the last year, prompting news stories, task forces, protests, controversies, and I’m sure, lots of dinner table fights. Good. But like any festering wound that starts receiving treatment, things get messy before they get better. (Non)consent is not always clear, accounts can be imperfect (or dishonest), and, as the infamous Rolling Stone/UVA case revealed, a simple “victim’s word as Truth” approach doesn’t always suffice. The consent/nonconsent apps are here to clear things up. The CONSENT app protects the accused by providing evidence that consent did occur; the NO app supports accusers by providing evidence that sexual contact was unwanted.

To the developers’ credit, the apps start an important conversation and use readily available technologies to implement consent as part of the sexual encounter. I actually think the “No” police officer image offers a funny way to tell someone that you aren’t interested in fulfilling their request (sexual or otherwise), circumventing the uncomfortable task of rejection. But like most technological objects, made by people immersed in an existing cultural logic, these apps do more to reify troubling patterns than subvert them.

First, they reinforce the focus on campus sexual assault, despite statistics that put 18-24 year old women who do not attend college at greater risk. Of course, campus sexual assaults are a serious issue. But ALL sexual assaults are a serious issue. Nothing about the design of the apps makes them exclusive to college students, yet reflecting a tradition of concern-disparities along class and race lines, the apps’ discourse centers on those who attend institutions of higher education to the exclusion of those who do not.

Second, the apps demand recorded proof. Candace Lanius astutely points to the racism entailed in requiring people of color to quantitatively demonstrate their experiences of police mistreatment. So too, those who experience sexual assault are now asked to document their case—in real time. Record your “No” (for $5/year) or it didn’t happen. For those on the bottom end of a status disparity, personal accounts are not enough.

This is further reflected as the apps place the burden of proof disproportionately upon the person who experienced assault. One important difference between the CONSENT and NO apps is that the former records consent for free, but the latter charges to record nonconsent. In fact, the CONSENT app self-destructs if users say the word “no” repeatedly (the website does not indicate what number constitutes “repeated”). This means that the CONSENT app only records consent. Recording a “NO” comes, literally, at a higher cost. Keep in mind, CONSENT serves the accused, NO serves the assaulted.

Finally, the apps reify consent as temporally prior to, and separate from, the sexual encounter rather than part of an ongoing dialogue within the sexual encounter[1]. People change their minds. People come up with fun new ideas. Both of these are opportunities for further conversation. Consent is continuous, but the apps artificially bound it. This artificial binding is all the more significant, given the demand for documented proof. If people record consent, and then one party changes their mind or isn’t into a spontaneous suggestion, the record still shows consent. The person experiencing assault is therefore left with worse than an unsupported experiential account: they must also contend with a document that discounts their story. And again, documents weigh more than words.

Solutions to social problems can never just be technological. To presume that they could is to guarantee social problems will persist.

Follow Jenny Davis on Twitter @Jenny_L_Davis



[1] Through communications with the development team I’ve learned that there is another app on the way that allows people who experience assault to record their story and then release it to authorities later if they so choose.