In late September the social news site Reddit announced a revamp of its ‘quarantine’ function. A policy that has been in place for almost three years, quarantines were designed to stop casual Redditors from accessing offensive and gruesome subreddits (topic-based communities within the site) without banning these channels outright. Initially the function affected only a small number of small subreddits and received little attention. The revamp, however, has seen the policy applied to much larger subreddits, creating significant controversy. As an attempt to shape the affordances of the site, the revamped quarantine function highlights many of the political and architectural issues Reddit faces in today’s political climate.

As a platform, Reddit sits in a frequently uncomfortable position. Reddit was initially established as a haven for free speech, a place in which anything and everything could and should be discussed. When, for example, discussion about #gamergate, the 2014 controversy over the ethics of the gaming industry that resulted in a number of high-profile women game designers and journalists being publicly harassed, was banned on the often more insidious 4chan, it was Reddit where discussion continued to flourish. In recent years, however, Reddit has come under increasing pressure due to this free-for-all policy. Reddit has been blamed for fueling misogyny, facilitating online abuse, and even leading to the misidentification of suspects in the aftermath of the Boston Marathon bombings.

Reddit announced the revamp of its quarantining policy via a long post on the subreddit r/announcements. In doing so, one of Reddit’s administrators, u/landoflobsters, highlighted the bind that Reddit faces. They said:

“On a platform as open and diverse as Reddit, there will sometimes be communities that, while not prohibited by the Content Policy, average redditors may nevertheless find highly offensive or upsetting. In other cases, communities may be dedicated to promoting hoaxes (yes we used that word) that warrant additional scrutiny, as there are some things that are either verifiable or falsifiable and not seriously up for debate (eg, the Holocaust did happen and the number of people who died is well documented). In these circumstances, Reddit administrators may apply a quarantine.”

u/landoflobsters argued that the quarantine function was not designed to shut down discourse, but rather to ensure that those who did not wish to see such content would not view it, and to potentially encourage change:

“The purpose of quarantining a community is to prevent its content from being accidentally viewed by those who do not knowingly wish to do so, or viewed without appropriate context. We’ve also learned that quarantining a community may have a positive effect on the behavior of its subscribers by publicly signaling that there is a problem. This both forces subscribers to reconsider their behavior and incentivizes moderators to make changes.”

A quarantine has a number of impacts on a subreddit. A quarantined community is unable to generate revenue and does not appear on Reddit’s front page or on other non-subscription-based feeds. These subreddits cannot be found via the search or recommendation functions. To find a quarantined subreddit a user must search for its name specifically, normally through Google. When a user attempts to subscribe to a quarantined subreddit, a warning is displayed requiring the user to explicitly opt in to view the content. Information about these subreddits, for example the number of subscribers, is also scrubbed from their front page. In essence, quarantines are designed to halt the growth of subreddits without banning them outright.

Through the mechanism of affordances we can understand this attempt by Reddit more thoroughly. ‘Affordance’ is a term that has become a critical analytical tool within science and technology studies, and increasingly in the study of social media architectures. Davis and Chouinard argue that “affordance refers to the range of functions and constraints that an object provides for, and places upon, structurally situated subjects.”

In relation to studies of social media, we can understand affordances to be the functions of a site that shape what users can do within the space. Davis and Chouinard provide a theoretical scaffold for affordance analyses by modeling the mechanisms of affordance. These mechanisms capture variation in the strength with which artefacts shape human behavior and social dynamics. Davis and Chouinard propose that artefacts don’t simply afford or not afford but instead request, demand, encourage, discourage, refuse, and allow.

Requests and demands refer to bids that the artefact places upon the subject. Encouragement, discouragement, and refusal refer to how the artefact responds to a subject’s desired actions. Allow pertains both to bids placed upon the subject and to bids placed upon the artefact.

In relation to Reddit’s quarantine function, we can see a strong attempt at discouragement. Davis and Chouinard argue that “artifacts discourage when one line of action, though available should subjects wish to pursue it, is only accessible through concerted effort.” Reddit has attempted discouragement in a number of ways — through requiring users to specifically search for quarantined subreddits, through the warning messages provided when users access quarantined subreddits, and through the inability of these subreddits to earn income, in turn discouraging use and development of these subreddits.

Through the expansion of the quarantine function we can see the deep politics that exist behind artefact affordances. The expansion has seen significant pushback from affected subreddits. One of the largest, for example, r/TheRedPill, which was quarantined specifically for misogynistic rhetoric, has launched a concerted campaign against its quarantine. Moderators of r/TheRedPill have argued strongly against the politics of the quarantine, stating that Reddit has tacitly endorsed male abuse and denied its victims. In doing so they have worked to subvert the new affordance of the site, labeling it as an inherently political act.

In response to the quarantine, we see the politics of affordances play out. While affordances provide a framework for what users can and cannot do with a particular artefact, this framework can be, and frequently is, subverted by users in numerous ways. This mutability is a particular feature of Reddit. With its open-source philosophy, Reddit has seen its users change and shape the interface and functions of the site regularly across its history. Whether users will be able to successfully subvert the quarantine revamp remains to be seen. Yet what is interesting about this episode is how it highlights the politics that exist behind an artefact’s affordances, politics that are once again playing out on centre stage.

Reddit finds itself once again stuck in a bind. The site seems to be trying to please everybody, hoping to hold on to its status as a place of free speech, while at the same time responding to critics about the very speech that occurs on the site. In doing so it is likely that Reddit will end up pleasing no one.

 

Simon Copland is a PhD candidate in Sociology at the Australian National University (ANU), studying online men’s communities, frequently called the ‘manosphere’.

Simon is a freelance writer and has been published in The Guardian, BBC Online and Overland Journal, amongst others. He co-produces the podcast ‘Queers’, a fortnightly discussion on queer politics and culture, and is the co-editor of the opinion and analysis site ‘Green Agenda’.

Simon is a David Bowie and rugby union fanatic.

Headline pic via: Source

The following argument is an elaboration upon, and the second part of, “The Ineluctable Politics of Doctor Who: Part 1.” In that piece, I present the television series Doctor Who as an artefact with ineluctable social-material significance and political implications. In so doing, I illustrate that the ostensibly playful, inconsequential spaces that celebrate beloved objects of fan entertainment never actually enact neutral positions. The text and fan pronouncements about the text exist, incontrovertibly, as partisan acts—even when enacting an ostensibly innocuous posture that seeks to avoid or negate polemical effects.

Here, in Part 2, I address the ways in which the show may and should take responsibility for its social-material effects—which, while demonstrating relevance for a general viewing audience, hold particular import for a diverse fan community. It is on this point of fan diversity that the present discussion locates sociological significance. Surely Doctor Who fans, as a group, constitute a wide range of varying demographic orientations. Such a pronouncement seems rather evident considering the fanbase spans cross-cultural contexts.

While an analysis of the fans’ demographic differences may be revelatory and significant in its own right, I presently refrain from such an endeavour (which is beyond the scope of mere blog musings) and instead gauge the diverse opinions surrounding the induction of the show’s first woman Doctor, Jodie Whittaker. The overall response to Whittaker’s casting is largely positive. Most fans embrace the change, while a smaller subset of fans express derision for it. Though, as will become apparent in the remainder of the discussion, the whole of the fan community demonstrates substantive variations in how they either affirm or disparage the event.

While these differences of expression are by no means resolute and exclusive stances for all persons voicing them, they still illustrate a far more complex, nuanced assortment of postures than a bifurcated typology demarcating for or against can capture. Nowhere is this complexity and nuance clearer than in claims to political neutrality, which span discourses on both sides of the debate. Whovians espouse political neutrality by claiming that the Doctor Who programme should remain free from gender concerns and feminist sensibilities; however, said Whovians differ on whether or not they believe Whittaker’s casting exists as a feminist political act.

Such divergent interpretations of Whittaker-as-Doctor reveal the agency and generative potential of fan viewership as well as obfuscate the ineluctability of political implications emerging from a beloved-fan artefact. Only by acknowledging the untenability of authorial political neutrality can concerned subjects also recognize the futility of a politics-free fandom. Whereby, I hold that fans should hold the enterprises that create, curate and profit from beloved-fan artefacts accountable for their political instantiations. The appropriate concern, then, is not whether enterprises should be political, but rather what politic they should enact and how they should enact it.

Accordingly, one may pose the following queries. Should enterprises pander to existing fans’ nostalgic desires for maintaining narrative lineage and tradition? Or should enterprises interrogate and potentially amend the possible norms and values they engender? With that said we may now survey and consider various fan pronouncements that exist among and across diverse internet platforms.

The Opposition

We will begin by cataloguing “some” of the stances of disappointment—i.e., those who express dissatisfaction with the program and wish that the contents of the show were otherwise.

Antifeminism

One persistent accusation emerging from the opposition indicates that the Doctor Who enterprise panders to discourses and campaigns that cultivate and enact progressively partisan sentiments—particularly with regard to gender politics. Individual expressions of discontent may highlight one or many of the following discursive orientations: cultural Marxism, feminism, third-wave feminism, the #MeToo campaign, social justice initiatives, progressivism, political correctness, leftist or liberal politics, etc. Of course, they also employ pejorative expressions—e.g., superficial hipsters, wokeness, snowflakes, feminazis, man-haters, the easily triggered, etc. As they see it, gender politics demonstrate an inappropriate presence in and influence over the show’s narrative and casting decisions in a manner that fundamentally undermines the quality and legacy of the programme.

One central example of antifeminism is YouTuber Dave Cullen of The Dave Cullen Show, who criticizes the narrative unfolding of Doctor Who in at least two separate videos, “The Vandalization of Doctor Who” and “Feminist Misandry Infests Doctor Who.” Through these media artefacts, he provides analysis and commentary on how the show’s dialogue repeatedly gives voice to a progressivism that—among other rebukes—routinely derogates men in a thoroughly one-sided manner. As he articulates the problem, the show’s narrative choices are a consequence of the enterprise’s attempt to “virtue signal” in response to feminist ideology and critique.

Cullen, using phrases like “feminist garbage,” enacts an oppositional rhetoric that negates respect for (and thereby the legitimacy of) not only the creative decisions of Doctor Who but also progressive epistemologies. Consequently, the critique offered—though not necessarily unsophisticated for the context of YouTube—refrains from approximating the academic practice of measured interrogation and a scholarly disinclination toward determinism. Instead of probing a discursive logic in an effort to unearth its potential utility and apparent shortcomings, Cullen argues by assertion (at least with respect to the epistemological frames he indicts). “Feminism,” within the epistemic parameters of his own narrative enactments, is simply (and thereby essentially) absent of intellectual and political worth.

Other antifeminist commentaries centre specifically on Jodie’s casting as the source of (or rather a target for) fan discontent. These discursive acts contend that Jodie’s undertaking of the role represents a radical departure from the show fans grew to love and celebrate. As one Twitter user exclaims in response to Jodie’s casting, “Are you f**king kidding me you’ve ruined doctor who for me and my father in law. You might as well just cancelled the show.” Other expressions of disappointment, though, contend that Jodie’s presence represents the enterprise’s zenith instantiation of feminism—making the story and mythology secondary to political demonstrations and indoctrination. Likewise, there are commentaries that share this (or a similar) sentiment, but target some specific demographic or behaviour as a proxy for feminist and progressive sensibilities. One tweet plainly states, “This is just pandering to women” and another exclaims, “This is nothing but political correctness” (as quoted in Chastain 2017).

Narrative Consistency

Some discursive indictments emphasize an investment in The Doctor being an archetypal male. While such speech acts often demonstrate antifeminist sentiments, they focus attention on narrative integrity and respect for the canon. A petition to “stop making Doctor Who a SJW [social justice warrior]/PC [politically correct] show” circulating the internet (rather unsuccessfully, with fewer than a thousand signatures) explains:

The Doctor is a well established male character. I feel that if the BBC want[s] more female leads they should create new shows instead of breaking over 50 years of tradition in Doctor Who. People today don’t have a lot of male heroes to look up to and the ones we do have have been replaced by women…. What makes this instant with Doctor Who worse is that they are changing the gender of the protagonist. Not making another Female Timelord, just taking one of the only male ones left… I’m sorry, but that is just wrong. You wouldn’t turn Spongebob into a female to get girl viewers or turn Barbie [in]to a man to get Boy fans. This is just unnecessary for the show, marketing and the plot.

Cullen (the aforementioned YouTuber) demonstrates a similar logic when he gives reason—beyond his general distaste for feminism—for his disinclination to accept the casting of a woman Doctor. While Cullen acknowledges that the Doctor Who canon does not prohibit the story from moving in this direction, he purports that show creators made the change without concern for the canon or any other story-centred initiative. He believes the BBC’s pandering to a pervading culture of political correctness serves as the primary impetus for The Doctor’s gender change. He also notes, though, that the Doctor Who enterprise adopted and began disseminating a “progressive” agenda well before Whittaker’s casting. Such an agenda, Cullen purports, became “ramped up” during the years of Whittaker’s predecessor, Peter Capaldi.

 

Gender Neutral Opposition

Some oppositional discourse suggests that the casting of a woman Doctor is indeed entirely appropriate, but that gender should not be a criterion in determining such a casting. Proceeding from this ethic, they maintain that the show’s producers and writers should not permit or facilitate the salience of gender issues and feminist politics within the show’s narrative and advertising. Likewise, some fans contend that the show is still enjoyable—they give voice to their intentions to remain viewers and fans—but further suggest that to do so requires that they actively ignore what they perceive to be obvious political sentiments of the show. As one YouTube user—commenting on Cullen’s “The Vandalization of Doctor Who”—declares, “As an avid Doctor Who fan, I’m just going to try to enjoy the show and just ignore the ‘wokeness.’”

For the sake of clarity, we should acknowledge that sexist sentiments and hang-ups imbue all—and everything among and in-between—these discursive acts. While the comments offered seem to represent distinct positions of opposition, they coalesce around a general discontent with the presence of progressive gender politics within one or more aspects of the Doctor Who enterprise. Furthermore, while these ostensible subsets of opposition demonstrate distinct nuances, it would likely be a mistake for us to regard them as the definitive and precise stances of the persons who proffer them. If we examine the history of any given user’s media commentaries, after all, we may very well find that such comments, read individually, are not entirely consistent with respect to other discrete pronouncements made by said user. Perhaps the comments will demonstrate outright contradiction, or they may simply facilitate opportunities for ambiguous and ambivalent readings. In any event, discursive enactments demonstrate an agency that, in some respects, remains independent of their authorial progenitors.

The Advocates

Overdue Feminism

On the other end of the Whovian political spectrum (#MakeWhovianPoliticalSpectrumReal) are those Whovians who resolutely, largely or somewhat affirm the narrative and casting choices of the show. Some highlight that Doctor Who holds a sexist legacy, one that requires correction via the casting of a woman Doctor as well as the incorporation and dissemination of progressive (and/or feminist) values. In a series of tweets, sociologist-fan Carlos Beck (@CarlosGBeck) provides a thorough exegesis of the gender inequity perpetually underlying the Doctor-companion relationship. Beck iterates:

The Doctor’s relations with his companions is variously asymmetrical. While this is narratively established by the fact that the Doctor is a space/time traveling E[xtra]T[errestrial], it nonetheless serves as the foundation for a series of tropes effectively organized around gender. “doctor-splaining” is only 1 mind-numbingly repetitive example in a set of these narrative devices whereby the dr, as man, is positioned to explain generally in annoyed fashion the fantastic logic of the episode’s conflict & its resolution to his companion, as woman. Another ex.[ample] being the tendency of his companions, all the most important have up to s[eason]3 been women, to soothe the Dr’s more aggressive (read:masculine) tendencies. This… is attached to & justified by the dr’s initimate knowledge of the cosmos, one his comp[anion]’s can’t master.

For Beck, as with others, the casting of a woman Doctor was long overdue and indeed, may well not go far enough.

Gender Neutral Advocacy

Yet other discourses, which advocate for Whittaker as The Doctor, suggest that Doctor Who has not and/or does not pander to progressivism, feminism or any other political sensibility. Whereby, their understanding of fan politics is such that it is a mistake to read such an agenda into the programme. Responding to a denigrating tweet from a (now former) fan bemoaning Doctor Who’s gender progressivism and celebration of Jodie’s casting, many fans defended the premiere by denouncing any such political enactments within the episode. Such fans made statements to the effect of, “The episode doesn’t have an agenda! You should watch it; you may enjoy it!” Similarly, some fans suggest that while having a woman Doctor is not necessary or relevant to the show’s success or political stakes, the position of opposition to a woman Doctor lacks legitimacy. As one Twitter user, responding to another user bemoaning the fact that The Doctor is now a woman, stipulated: “The Doctor does not need to be a woman. But there is no reason she shouldn’t be. And she is.” One with this understanding, then, may simply neglect to engage with or consider the political implications of The Doctor’s gender.

The Untenability of Political Neutrality

The above examples defending the Doctor Who enterprise by distancing it from political intentions and enactments demonstrate particular import in juxtaposition to the oppositional examples that indict the show for having a politic. The implicit logic undergirding all these examples, from the side of affirmation and of opposition alike, is that Doctor Who is not an appropriate space for political concerns—with respect to gender inequality or otherwise. Yet, as I illustrate in the first part of this two-part discussion, an artefact’s political implications are ineluctable—both in the sense that an artefact demonstrates social-material consequences for users and in the (not entirely separate) sense that an artefact does not exist independently of political readings—even those readings positioned in opposition to partisan posturing.

What we are left with, then, is a circumstance in which Whovians miscategorise or neglect the relevant and appropriate debate. What, then, is an enterprise’s obligation to its diverse fanbase? Should an enterprise like Doctor Who uphold fan expectations—i.e., pander to lineage and tradition? With respect to disappointed Whovians, the circumstance is one in which a segment of fans believe that the Doctor Who programme holds a responsibility to meet their expectations. In the present moment, these fans believe that the enterprise should uphold the show’s tradition of casting men as The Doctor. With respect to how they understand the problem, the relevant ethics are rather simple. Because loyal Whovians are in significant ways responsible for the show’s enduring success, the programme thereby owes them—or rather the programme must remain sensitive to their desires. Is it not, then, disrespectful to said fans to amend such an enduring feature of the Doctor Who story—like The Doctor’s gender?

Without even exploring the narrative justification for why a woman Doctor does not disturb the integrity of the Doctor Who mythos (trust me, it doesn’t), we may readily problematize this logic. We must contrast this ostensible obligation to traditional expectations with another apparent responsibility. Because enterprises creating, curating and profiting from beloved (fan) artefacts have ineluctable consequences (political in scope), are these enterprises not accountable for the consequences (and possible harms) they engender? With respect to the Doctor Who enterprise, a history of representational gender inequity remains complicit in normalizing differential gender privileges and burdens (see Part 1 of the discussion).

So enterprises—recognizing an obligation to (some) fans—may very well maintain a narrative unfolding indicative of the show’s past. In so doing, they could very well enact normative on-goings that pander to audience nostalgia (and prejudice)—despite the social harms that such on-goings may engender (either deliberately or by means of silence and inaction). Or the enterprises in question can glean the relevant lessons of the present socio-historical context and take responsibility for the political implications they pose to a world of onlookers. For what my opinion is worth, I prefer my beloved programmes to take a position of responsibility that furthers social equality. My Doctor is a person who will recognize the needs of those denied equal opportunity and fair treatment over the desires of those who are simply content with the way things are or were. My Doctor does not refuse help to those in need.

 

James Chouinard (@Jamesbc81) is a Lecturer in the School of Sociology at The Australian National University.

 

Headline Image via: Source

An analysis of how human beings engage with a given artefact likely draws from a fundamental premise: human creations demonstrate social-material consequences. This observation does not purport to indicate a probable condition, but rather an ineluctable one—and it holds relevance, always and everywhere, for all types of artefacts. This is true of artefacts demonstrating utilitarian salience—like a spear, scythe, wrench, pencil, microwave, motor vehicle, computer, etc.—and those ostensibly centring on more aesthetic functions—like a painting, sonnet, concert, novel, play or even a television programme.

For the following argument, I discuss how a particular television series, Doctor Who, demonstrates social-material consequences for a community of fans, the Whovians. Following the recent premiere of Season Eleven, many excited Whovians took to Twitter in collective celebration of Jodie Whittaker, the first woman to play the show’s leading character, The Doctor. After 55 years of men in the role, Whittaker’s casting had clear symbolic importance. But it had social-material significance, too. One Twitter comment comes to my mind as an exemplary indication of such significance.

A father tweets, “My daughter (6) told me they were playing #DoctorWho… in the playground today and she was the Doctor – that’s why last night was brilliant.” Recognizing that the child’s pretending to be the Doctor is to envision herself as the hero, we may acknowledge that she not only enacted a role of social importance, but also felt it was appropriate and desirable to do so. In other words, we confront the affective (and thereby material) implications of her having a woman role model to serve as fodder for her imagined (and real life) ambitions. Pretending to be the Doctor, this child may envision herself as not only competent, but exceptional. While playing, she perhaps recited that now iconic line from The Woman Who Fell to Earth, “When people need help, I never refuse!”

This tweet, for many, is a heart-warming indication of how parents derive joy from their children’s glee. Though it also reveals how an ostensibly silly television show about aliens, time-travel and companionship holds the power to provoke social change—i.e., to empower young girls (as well as boys) to regard themselves as capable (as potential heroes) as well as to inform a normative order about who gets to be exceptional and deserves celebration. To acknowledge this, though, is to give credence to another ineluctable implication of engaging artefacts—i.e., that such engagement is not independent of the dynamics of power and thereby holds political relevance. Media representation, after all, is an operation and reflection of power—performing and indicating a politic with respect to who has power and for whom power is denied.

The Doctor Who Universe

Doctor Who is a BBC-produced science-fiction programme. It demonstrates prominent fandom the world over, but particularly so within British society. The show began in 1963 and aired until 1989. The BBC attempted to relaunch the programme in 1996 with a made-for-television Doctor Who film; however, a rebooted series did not begin airing until 2005. From that time onwards the show has remained on the air, and it is still in production.

The unfolding story of Doctor Who centres on “the Doctor,” a humanoid time-traveling alien from the fictional planet Gallifrey. The character typically travels with human companions to diverse worlds as well as into the past and future. These traveling feats occur by means of the iconic TARDIS: a sentient, time and space traversing vehicle that is paradoxically “bigger on the inside than the outside.” To all outside observers, the TARDIS appears as a police box—an edifice that was normative to Britain’s public infrastructure at the time of the show’s inception. TARDIS travel is often imprecise. Consequently, the Doctor and companions may find themselves in unanticipated times or places. Also, they typically find themselves amongst and aiding peoples or persons enduring a crisis from some despotic foe (e.g., Daleks, Cybermen, Weeping Angels, etc.).

Reading this description, one may note the lack of gender pronouns. Though I recognize the political significance of gender-neutral prose, I employ the practice here for another reason—one that pertains to the present and historical context of Doctor Who. The show has maintained a tradition of periodically recasting The Doctor. From 1963 to the present, thirteen separate lead actors have officially undertaken the role. The narrative unfolding of Doctor Who accounts for the change by enacting the Doctor’s ability to regenerate (and thereby endure a prolonged life).

With every regeneration the Doctor takes on a new humanoid form as well as a new persona. (Yes! I know! Except for that time David Tennant’s Doctor regenerated in a way that maintained his form and persona. Please stop quibbling!) With respect to the enduring mythology of the show, however, the Doctor (in substantial respects) remains the same person—i.e., the Doctor retains many, if not most, memories of past “lives” and the show intends for the audience to regard the present and “new” Doctor as “the same” Doctor(s) from past seasons. Prior to the eleventh season, I could simply stipulate, “The Doctor and his companions….” But the current Doctor (or rather the Doctor’s current form)—as we now know—is a woman, portrayed by Jodie Whittaker.

The Whovian Response

Many, including myself, regard the character’s gender change as profound and exciting. The enthusiasm for and celebration of the Season Eleven premiere has much to do with the fact that the first woman Doctor follows 55 consecutive years of Doctors who were men. Whereby, the weeks and months preceding Jodie’s debut enjoyed widespread anticipation from old and new fans alike. Yet the love and enthusiasm did not span the whole of the Whovian community.

Many fans expressed derision at what they perceived to be an interpolation of trendy political sentiments in the show’s narrative and casting decisions. Such fans regarded the revelation of Jodie’s casting as the tipping point upon which the show enacted a metamorphic shift from episodic interjections of feminist sentiments to a full-on critical-feminist project (e.g., see The Dave Cullen Show). For a contingent of fans, this perceived shift in the show’s narrative corresponded to a relational one in which said Whovians’ perpetual annoyance with the show gave way to a complete break in their fan-allegiance.

These fans then took to the internet to declare their intentions to abstain from further viewership of the programme. Subsequently, an enduring internet battle unfolded across an array of platforms, with some Whovians supporting the show’s narrative and casting choices, others wholly denouncing said choices and still others occupying a plethora of ambiguous, ambivalent and approximating positions between the two extremes. In Part 2 I illustrate the precise nature of these debates and offer a corresponding analysis of their differential logics and implications. Presently, though, I clarify why these debates matter.

What Is at Stake?

Debates among Whovians about Whittaker’s casting centre on a few broad questions:  Is the Doctor Who universe an appropriate space to address gender inequities?  Is it inappropriate to imbue those cultural spaces intended to be fun (or possibly inconsequential with regard to real-world commitments) with politicized sentiments? These are indictments masquerading as inquiries, and to many onlookers they are ostensibly reasonable. However, closer inspection reveals that such inquiries fail to grasp a fundamental premise about cultural artefacts. They fail to demonstrate what the present text stipulates at its outset—i.e., that all cultural artefacts hold politics. We have already addressed how a given artefact’s politics are fundamentally entangled with social-material consequences and relations of power. Yet we may further explore how artefacts are also ineluctably subject to political readings.

To understand how this occurs, we must acknowledge that this particular concern implicates both authors and audiences. Most readers will readily acknowledge that authors have incorporated, and will continue to incorporate, their political leanings into a text, and do so in a manifest way—resolutely undermining all opportunities for ambiguous and ambivalent readings. Those who indict as well as those who champion Doctor Who as a political project assume said project to be a consequence of author intentionality. Accordingly, they accept as apparent within the story’s unfolding a deliberate, unambiguous message about gender representation. I am not, here, arguing for or against the veracity of this claim; rather, I give recognition to the relevance of intentional design so that I may better frame a discussion of how political readings can and will demonstrate independence from authorial aims.

With that said, we may also confront the circumstance in which authors fancy themselves as being persons without ideological objectives. They might reflect, “I’m not political. I simply enjoy making up cool stories with interesting characters.” Such a disposition, perhaps, is one which many disappointed Whovians believe is appropriate and necessary for the creators of Doctor Who. These fans may reason, “Look, we just want to see time travel and aliens and all the fun stuff related to these things. I don’t want my favourite sci-fi stories to burden me with the ideological hang-ups of the real world.” However, even if the producers and writers heed such a call, the resulting television show will still demonstrate political relevance.

For example, a hypothetical author may attempt to resolutely position the narrative in opposition to partisan readings by crafting the text in a manner that is decisively banal, innocuous or unremarkable—i.e., in a manner that does not lay claim to political, divisive rhetoric. Yet to proffer a series of bromides and seemingly benign characterizations still demonstrates a politics in its default positioning. While such a posture refuses to state moral or ethical suppositions, it continues to imply them. An author who neglects to critically assess the world, after all, negates the importance of such evaluations (and thereby makes a claim about what should be or should remain so).

Our present conflation of neglect with negation is appropriate here, because such epistemic negligence holds real world consequences. By not questioning and challenging the everyday violence that occurs in one context or another, one permits as well as complies with said violence. The discourses articulating systemic racism and sexism illustrate this point. These prejudices are built into the fabric of everyday life. To ignore them as consequential for the life-chances of others is to remain complicit in the harms such forces engender. In short, political neutrality is an unviable position—even when engaging fun, fantastical stories (like Doctor Who) to experientially escape the real harms and concerns of lived experience.

Furthermore, authors who deliberately construct narratives to facilitate highly precise or exact readings (political, apolitical or otherwise) will witness the undermining of their efforts as an agentic audience demonstrates the generative power of construal—of actively observing meaning into an artefact. In other words, the audience in question “prosumes” cultural artefacts, which implies that the act of watching (and consuming) holds productive potential. For example, Whovians consume Doctor Who in a manner that produces a diverse array of experiences and political inclinations.

While some fans regard the eleventh Doctor’s relationship to long-time companion Amelia Pond as constituting an implied romance, other fans regard their mutual affections—though immensely loving—as decidedly platonic. The situation is not merely one of differential or mistaken interpretation. Fans, after all, do not passively view the unfolding events of a series arc; rather, they actively will the story to take a specific form. Furthermore, prosuming Doctor Who generates fodder for fandom unity and debate—e.g., fans who celebrate the romance and those who argue about the nature of the relationship. Thus, the fans’ prosumption of the beloved artefact occurs with and through their prosumption of the community itself.

We must now consider some clarifying remarks on the point of viewer agency. Some readers might assume that the practice of prosumption lends an absolute (or limitless) efficacy to viewer-consumers. One may, then, surmise that since viewers may create and curate their own meanings for a programme’s on-goings, the show’s actual decisions hold little political relevance for or influence over said viewers. Such a position, however, is fundamentally mistaken.

The extant arrangements of a story (as well as a surrounding social world) impose limits on the generative potential of fan viewing.  After all, if we accept that human perceptions of the world largely operate in and through shared social constructions, then we must recognize that social artefacts—like television shows—are informative of how persons (or fans) understand and engage the world. Furthermore, a fan’s generative-viewing potential likely suffers increasing enfeeblement the more said fan credulously acquiesces to the putative logics of various dialogues and scenes. Even if we accept this hypothesis as irrefutable, however, we still must recognize that the most discerning viewers and subversive viewings inexorably draw from and make sense of the world through the symbolic register of a larger, normative order.

With that said, we should acknowledge that a recurring male Doctor—typically with adoring female companions—communicates and inculcates cultural expectations about what types of persons can and should be protagonists, leaders, saviours, special and exceptional, as well as what types of persons should admire, follow, acquiesce to and accept the deliberations and pronouncements of others. Such expectations, then, are thoroughly shot through with the dynamics of systemic power. Fan opposition to Whittaker as Doctor is thus an inherently political act—even among those who claim political neutrality and wish for their pop-culture to remain free from political meddling.

Of course, we would be remiss not to recognize that the history of Doctor Who does offer us strong, agentic and admirable women companions time and again—such as Sarah Jane, Martha Jones, Donna Noble and Bill Potts (among others). But admiring these characters—especially in the sense that they often have and do reflect progressive-feminist ideals—does not negate the legitimacy of observations recognizing past inequities and calls for rectification.

Sexism and gender egalitarianism, as well as hegemonic standards and progressive ideals, are multifaceted variables—each of which exists on a continuum. Accordingly, the presence of a variable on one end of the continuum does not necessitate a mutually exclusive relationship with a variable on the other end. Consequently, one may observe both sexist and gender-egalitarian features within the same artefact. The history of Doctor Who, then, may have been “relatively” progressive in some respects, but not others. We must keep in mind, however, that all such observations demonstrate a contextual contingency with respect to the cultural moments in which they emerge.

With a concern for the present cultural moment, we should come to understand the multifaceted political readings among Whovians as a representative microcosm of larger debates highlighting the intersections of politics, power, and gender—e.g., the #MeToo campaign and other recent instantiations of progressivism. We must, therefore, readily acknowledge that political participation takes many forms. Disrupting hearings for a predatory SCOTUS nominee is one such example. So too, however, is a fan’s support for story arcs that position women as heroes—especially when those stories take shape through cultural artefacts as widely treasured as Doctor Who. Such an understanding becomes apparent when reflecting on the initial anecdote about a young girl pretending to be The Doctor. Re-enacting the Season Eleven premiere and the episodes to come, this little girl and others like her believe they just might save the universe.

 

James Chouinard (@jamesbc81) is a Lecturer in the School of Sociology at The Australian National University

Headline pic via: Source

 

I’ll start by stating the obvious: power manifests in myriad forms. In this piece I’ll be focusing on the normalizing power of discourse. Normalizing discourse refers to the way language – talk, text, and body – reinforces the status quo and crystallizes social structures, including our own place within those structures. I will draw on my own research about religion online to make the case that the internet fosters normalizing discourse while, at the same time, leaving room for subversion.

I suggest conceptualizing digital media as a Foucauldian discourse, or, for lack of a better analog: the street, the marketplace. What I mean by Foucauldian discourse is the systematic ways in which communication shapes our social norms. This happens online because, while we use digital media individually, we are taking part in a social space. Online media includes the multiplicity of opinions experienced through an individual’s lenses. We use digital media in personalized ways: to create a ‘personal’ profile, to do our own banking, travel, shopping, etc. But the experience is not fully individualized: the ‘street’ or ‘tribe’ is always in the background of online activities. Friends and family (‘the tribe’) react to personal profiles in social media; reviewers and commenters (‘the street’) “shout” their opinions about the latest gadget you just purchased, or the news you are reading; and always, the watchful eye of a big company – Google, Microsoft, Apple – is present. Therefore, online communication is never done in a vacuum. Even if I am watching cat videos by myself at 3 AM, I am surrounded by society. Online, the individual user is communicating with ‘the masses.’ They are out in the street, or at the marketplace, or at school, or at church, even if they are physically alone in bed. Online, you converse with “everyone.” And these online ‘conversations,’ I argue, are the essence of conceptualizing online media as Foucauldian discourse.

Understanding digital media as discourse means theorizing digital communication as a set of systematic statements and online practices that create, construct, and negotiate social norms: as spaces of power and resistance. And, while the internet allows for multiple voices and counter-spheres, there are policing and regulating processes that make online media a normalizing force. I’d like to share two examples from my own work on religion online that reflect how digital media can be conceptualized as a site for power and resistance.

I’ll begin with a juicy example. In my dissertation I examined Jewish religious negotiation of gender and sexuality. I was hoping to find that the internet – commonly conceived as a ‘democratizing’ technology – gives power to new voices – like women and LGBTQIA+ people – to reshape gender and sexual norms within this religion. I found the opposite: the ability to comment, share, like and participate reinforces dogmatic religious positions rather than, as I anticipated, helping bend them. For example, in a Q&A with a rabbi, a female lay person asks if it’s “ok for girl to… masturbate”. The rabbi says no. While Orthodox Judaism does not, in fact, explicitly prohibit female masturbation, it is not really surprising that the rabbi prohibits this. But what follows is the interesting stuff:

In the comment section, users, some of whom self-identify as women, shame the girl for asking, and thank the rabbi for prohibiting. “Thank god, I can’t masturbate, what a relief!” Just as the rabbi reinforces gendered norms of chastity, the community concurs and entrenches normative notions of shame about a sexualized body. Normalized discourse is thus reinforced and along with it, normative oppressive practice.

But then, a lone commenter contests the point:

No way?? [מה פתאום]. I’ve discovered this by accident at a young age and for years my conscience was killing me. I tormented myself long and hard to stop and I almost succeeded. I thought to myself, when I get married this will stop by itself. Today I am married and I am so not sorry for my experience with this because this is how I’ve learned what feels good for me and could reach pleasure also with my partner. I hear about married women that do not enjoy [intercourse] and don’t know how to have fun with their husbands. And both sides are then frustrated. I’m not saying you must but if you have this experience it is for sure not bad.

This spark of resistance is short lived, as other commenters collaboratively dissent: “Are you not ashamed?” “Who do you think you are?”, etc. This is just a small example of how the democratic features of the internet can be and are being used to reinforce traditional, patriarchal and even fundamentalist worldviews. However, the inkling of resistance shines through.

In other cases, the resistance shines brighter. Indeed, the internet is multiple and complex. Thus, not all internet-based discourses will look the same. As such, I find discourses in my research that subvert conservative religio-political positions.

In a current project, I am examining religious resistance online via Twitter. Specifically, I’m looking at the hashtag #EmptyThePews, which is part of a movement of ex- or liberal Evangelicals using social media to resist Trumpist churches. Here, Twitter and Facebook are used to openly share traumatic church experiences, and to slowly form a different conception of what a church can and should be – free of racism, sexism, and hate. The leaders of this movement are calling people to empty the pews of churches that support Trump or hurt their members in other ways. For example: “If your church supports @realDonaldTrump, ask your pastor to name one quality Christ and Trump have in common. Then walk out. #EmptyThePews” Or: “If your pastor doesn’t condemn racism tomorrow…walk out. #EmptyThePews”. This hashtag is used to shed light on these experiences, and perhaps, to slowly change the discourse around racism and its deep ties to religion.

To think of power as discourse and of online communication as a space for normalizing discourse does not mean we are doomed to be controlled by religious authorities and alt-right internet trolls. Discourse is created and maintained by us – and so we can and must participate in it. True, some have more power to influence the discourse than others, but we can try to contribute to the discourses we are part of, online and offline.

The internet has emerged as a technology with the potential to level the playing field in a number of discursive domains, but it is not the fated savior of liberal values it has been extolled to be. It is more appropriately recognized as a variant of other democratic institutions – the agora, the voting booth – spaces that can only be utilized to their full intended potential when we all engage and participate in them. In the same way that democracy is threatened when we don’t cast our ballots, so too the internet can serve as an echo-chamber for only the most fanatical and extreme of voices. While the internet does offer a greater sense of accessibility and allows for different voices to be heard, this ‘participatory culture’ does not always lead to change, or to an open and respectful dialogue. We should not be deceived by the seemingly democratic capabilities of the internet. The street can also be democratic, and many lynchings happened at the marketplace. The fleeting quality of online media, along with its blurring of private and public, creates a confusing space. One way to think about this confusing space is to consider it a discourse, in which ‘they’ or ‘everybody’ – the common sense, the norm – take the front stage. In this space, it is the work of normalizing social beliefs and behaviors that is taking place, unless counter-publics insist on alternatives. Digital media are about defining and negotiating definitions, about power and resistance. And just like in a democracy, change won’t happen if we don’t demand it.

 

Ruth Tsuria, Assistant Professor at the Seton Hall University College of Communication and the Arts, earned her Ph.D. from the Texas A&M University Department of Communication. Her research investigates the intersection of digital media, religion, and feminism. Named an Emerging Scholar in Religion in Society, she has had her work supported by various bodies, including the Women and Gender Studies Program at Texas A&M University. She is currently working on her first book, Holy Women, Pious Sex, Sanctified Internet: Exploring Jewish Online Discourse on Gender and Sexuality.

 

A version of this work was presented at ASA Media Sociology Preconference, 2018

 

Headline image via: Source

Defending the theoretician’s choice to employ a theoretical reductionism is in some respects a nonsensical exercise.  After all, theory of any kind operates as a manifestly reductionistic articulation of a given thing—even if that thing is another theory. This is the conclusion we must come to if we permit ourselves to define theory by the fundamental function it performs.  That is to say, we must accept that theory is (and seeks to be) a reduction of the busyness of the world’s observable on-goings—i.e., it omits detail in one form or another in an effort to make some specific facet of human experience more intelligible, approachable, operable, etc. To stipulate any theoretical premise (even one that indicts another theory as reductionistic), then, is to assert a reductionism.

Following such an understanding, we must take a moment to acknowledge that many who regularly engage with theory (particularly those who regard themselves as theorists) will rebuke the present characterization.  To justify their stance to the contrary, they could highlight the theoretical efforts to complicate and perhaps negate those perniciously simple and banal articulations of observable on-goings. They may offer rebuttals that quite closely resemble the following remarks:

  • I employ theory to challenge the taken for granted idea that merit alone determines life-chances.
  • It is by means of dense, sophisticated theoretical texts that I teach my students that gender is more than a binary.
  • Conventional frames of understanding, common sense and norms pervading everyday life portray race as a matter of genetic ancestry and, thereby, biologically determined. We draw upon theory to demonstrate how such an understanding obscures a far more nuanced and contentious story.

Those who make these and like claims are correct to do so.  Theory can enrich, and has enriched, our understanding of the world by indicating a need for discursive complexity.  Yet the theoretical endeavour that seeks to lend discursive nuance, complication and density to some artefact, dynamic, event, or trajectory does so in a manner that requires certain nuances and complexities to take precedence over others.  The act of analysis, then, demonstrates an intelligible differentiation—via conceptual demarcations and frames of reference—between precise analytical objects and a world of noise or irrelevant information (e.g., digressive observations, inchoate musings, palpably present though analytically extraneous effects and affected subjects).

So what, then, are we to make of the reductionism critique? Is the indictment of reductionism absent of practical application and epistemological utility?  If we regard theory as a demonstration of the truth or a closer approximation to some real reality, then yes—the indictment has no intellectual worth. After all, if the goal of theory is truth, then the observation of reductionism operates as a proxy for the charge of falsity—i.e., an analytical explanation demonstrates the mutually exclusive condition of being accurate or inaccurate.

When the reduction is accurate—when theory reduces observable on-goings in a manner that seems appropriate—the theoretician performs an analytical act analogous to the magician’s sleight of hand.  The performance is one in which the less than relevant (yet not wholly irrelevant) objects remain unacknowledged, or the performer asks the audience to disregard the reality or presence of said objects since they exist beyond the figurative curtain enacted by a claim to scope conditions.  Accordingly, claiming reductionism in an effort to demonstrate inaccuracy draws attention to the epistemological leniencies or precarious merit given to theoretical demonstrations of truth.

The observer may ask, “How does reductionism—an inexorable consequence of theorizing—demonstrate inaccuracy (the condition of falsity) in one circumstance but not another?”  Further probing reveals the theoretician’s implicit request that observers acquiesce to the condition of true enough or true until demonstrated otherwise.  Of course, this request is entirely appropriate—but giving recognition to such a request draws into question the very need for or desirability of employing theoretical stipulations directed at the discernment of truth.

Theory as technology

If we regard theory as a technology, however, we may accept that reductionism and the indictment of reductionism are both appropriate and necessary. The function of technology—with respect to the present epistemological concerns—is to accomplish a task, address a problem or manage a dynamic or operation. Theory is the (or rather “an”) analytical (often textual, but not necessarily so) manifestation of such management.  To indict a given theory-technology as reductive, then, is to suggest that said theory-technology forecloses the possible introduction of or plausible accounting for other theory-technologies.

We may, by way of example, look to the conditions under which one appropriately indicts psychoanalysis as reductionistic.  The most appropriate instances—in accordance with the logics of the present argument—are those where psychoanalytic accounts proffer “repressed wishes and instincts” as wholly responsible for a given set of circumstances or behaviors.  In such instances, the theory intends specific enactments (i.e., repression) or variables (i.e., repressed psychic contents) to account for all observable influence over a given object or dynamic—thereby, denying opportunity for additional analytical explanations and clarifications.

Providing an analytical space for alternative epistemologies emerges as a necessary condition for theorizing, since a single variable or set of variables can only account for a portion, and never the totality, of the explanatory influence.  Such is the burden of analysing empirical circumstances, which demonstrate a seemingly infinite complexity (at least far greater complexity than one may account for by means of a single explanation).  Of course, one may continually proffer additional explanatory variables in an attempt to approximate ever more closely (but never achieve) a totalizing theoretical account.  However, the resulting theory would likely read as ambivalent and digressive—failing to make any explanatory variable salient while attempting to lend salience to all such variables.  Efforts at theorizing that correspond to the far-reaching scope just described demonstrate a latent appeal to absolutist logics.  The cultivated explanation emerges as teleological in character and intends imperviousness to all counter-arguments.  The result operates as an absurdity in both aesthetics and utility.

To recognize theory as technology, however, is to demonstrate and validate a need for pragmatism—i.e., to recognize that a world of seemingly infinite complexity requires metrics able and willing to proffer useful rather than perfect measurements. So we may readily permit a psychoanalytic account of repressed wishes and instincts if such an account seems to demonstrate an explanatory efficacy (in correspondence with or relative to other theories).  The primary idea, then, is to evaluate a theory with regard to the utility it adds.  One would inquire, “Does psychoanalysis help me address the analytical problem at hand?” If the answer is no, the conclusion is that psychoanalysis as a theory-technology is inappropriate for the investigative context.  A claim to validity would simply have no relevance.

With this in mind, we may request psychoanalytic theoreticians to make qualifying remarks if and when they determine the application of psychoanalytic theory to be appropriate; wherein, we expect these theory purveyors to say, “Of course other influential variables are at stake, but it is beyond the scope of the present discussion to account for and make salient such influences.”  This enactment—the disclaimer or admission of analytical limitations—operates as the theoretician’s incantation for warding off the contaminating danger of the reductionistic indictment.

Highly skilled theorists have all manner of requisite incantations at the ready.  They deploy them when necessary to ward off any and all possible indictments seeking to shame ostensible charlatans (and thereby distinguish the more competent and sophisticated wizards… err… I mean theoreticians as better than and qualitatively different from said charlatans).  Such indictments are as follows: 1) you have denied the agency of this subject, thing or any and all things; 2) you have assumed or postulated an essence or ontology in a manner that disregards social constructionism; 3) likewise, you have permitted your conceptions to operate as reifications neglecting contextual contingency and a world of constructive-generative agencies; 4) you have assumed or permitted a dichotomous understanding where a continuum resides or should be; etc.

It would be a mistake to regard these criticisms as hackneyed demonstrations of academic posturing.  Their widespread and continual presence within analyses of all kinds is a necessary circumstance for theorizing a dynamic world of diffuse power and agentic operations. Yet all the offenses highlighted by said criticisms remain analytically unavoidable at some point or another while theorizing.  The most pervasive and inevitable offense is the reductionistic articulation—which constitutes the point (or modus operandi) of theory itself.  So the theoretical incantation to ward off reductionism has become a trite, though necessary, endeavour.  It signifies to onlookers, “Yes, of course this analytical problem is a concern; now please acknowledge that I have not denied this concern a presence within my analysis.”

Yet the incantation should not—in and of itself—be sufficient for warding off the criticism of reductionism.  Providing a textual acknowledgement and concern for an epistemological problem doesn’t negate that said problem may still trouble the utility and applicability of a working theory.  So if psychoanalytic scholars preface their analyses with the requisite incantation to ward off a criticism of reductionism and said scholars proceed to articulate any and all social enactments as somehow indicative of the “repressed made manifest,” we may regard such an analysis as one that pushes beyond the limits of a particular reductionism’s technological utility.

By way of crude analogy, we may recognize that it’s useful to have a pocketknife on one’s person at most times.  There are numerous circumstances where a pocketknife enables one to accomplish a given thing. Is there something stuck to the bottom of your shoe? Use the pocketknife to scrape it off! Come across a difficult-to-peel fruit? Slice it a few times with the pocketknife (but wash it first if you just scraped something off your shoe).  Have a package that isn’t designed to open readily? Poke it a few times with the pocketknife.

Though we readily accept that the extent of this technology’s utility is ostensibly interminable, we must also acknowledge that there will be times when other, perhaps similar, technologies are simply more appropriate.  Need to spread some preserves on toast? You may use a pocketknife and possibly cut your hand, or you could use the much safer and more effective butterknife. Such considerations about the apt and effective application of one permissible technology among others are equally relevant to an evaluation of our most trusted (and ostensibly useful) theories.  Psychoanalytic logics may provide some reason for how and why I use my laptop to go online shopping, write articles, grade papers, etc.; yet a theory of technological affordances will demonstrate more intelligible and relevant explanations.

Highlighting the use and utility of theory-technologies

Reiterating a previous point, then, we acknowledge that it is only appropriate to regard psychoanalytic theory as reductive when it seeks to ignore or negate the relevance of other explanations for why and how a person is in and of the world.  Such an acknowledgement requires us, those giving critiques of another’s theoretical application, to differentiate the intentions of users from the affordances of the technologies they employ.  Accordingly, the history of psychoanalysis tells us that while some have attempted to extend the theory to any and all things, as a paradigm, the psychoanalytic endeavour remains largely fixated on a particular and narrow range of empirical on-goings.

The indictment of user-error thus emerges as a far more appropriate criticism than that of a reductionistic theory.  Furthermore, we must overall applaud the efforts of those who push the application of theory-technologies to their absurd ends.  There is a pedagogical function in such boundary transgressions.  The pedagogy being, “Herein lie the discursive (or intellectually useful) limits of this theoretical endeavour.  Beyond this point or analytical threshold, the explanation appears unintelligible and perhaps offensive.”

So rather than enacting an intellectual culture that casts derision over and discourages those who apply theory in ways that don’t quite resonate with a larger intellectual community, perhaps we should permit spaces within the culture to celebrate such forerunners—giving recognition to epistemological failure as a worthwhile function of intellectual growth.  On this point it is worth differentiating the ill- or poorly conceived idea from that which simply does not work.  Such an ideal encourages bold applications of theory in an effort to explore the practical limitations of a given logic or method, despite the likelihood that a tenuous, intellectually unappealing explanation will result.

But this is an ideal we only wish to enact if, while doing so, we are able to maintain specific intellectual standards—i.e., the expectation that intellectual rigor will characterize the analysis despite an assumed expectation that employed logics will fail to correspond to a given empirical circumstance in any useful manner.  In other words, pushing theoretical logics to the absurd limits of applicability should not serve as an invitation for slipshod scholarship demonstrating a dearth of erudition.  Yet if intellectuals maintain the standards of earnest analysis in their attempts at demonstrating the full range of applicability for each and all of the prevailing reductionisms constituting the epistemological canon, then they may enculturate a general relationship to theory less inclined to disparaging one school or another in the name of epistemological loyalty and defence; accordingly, scholars would act less like ideologues and religious converts and accept epistemological diversity as a necessary requisite for the seemingly infinite complexity of the world.

 

James Chouinard (@Jamesbc81) is a sociologist at the Australian National University.

Headline Image Via: Source

Algorithms are something of a hot topic.  Interest in these computational directives has taken hold in public discourse and emerged as a subject of public concern. While computer scientists were the original algorithm experts, social scientists now equally stake a claim in this space. In the past 12 months, several excellent books on the social science of algorithms have hit the shelves. Three in particular stand out: Safiya Umoja Noble’s Algorithms of Oppression, Virginia Eubanks’ Automating Inequality, and Taina Bucher’s If…Then: Algorithmic Power and Politics. Rather than a full review of each text, I offer a quick summary of what they offer together, while drawing out what makes each distinct.

I selected these texts because of what they represent: a culmination of shorter and more hastily penned contentions about automation and algorithmic governance, and an exemplary standard for critical technology studies. I review them here as a state of the field and an analytical grounding for subsequent thought.

There is no shortage of social scientists commenting on algorithms in everyday life. Twitter threads, blog posts, op-eds, and peer-reviewed articles take on the topic with varying degrees of urgency and rigor. Algorithms of Oppression, Automating Inequality, and If…Then encapsulate these lines of thought and give them full expression in monograph form.

The books are tied together in their insistent imbrication of the social, the structural, and the technical. Each resists technological determinism while giving careful attention to the materiality of code and its animation at the hands of user-publics. In their socio-technical analyses, each book also centralizes politics and power. This is critical. The authors weave patterns of status, power, inequality, and resistance throughout their texts. They spend time at the social margins and show with stunning clarity how personal troubles and public issues are entwined with technical systems from design to implementation. Together, these works show us what algorithms are and how they are social, and remind us that algorithmic configurations are neither natural nor inevitable, but products of value systems and political dynamics.

Noble’s Algorithms of Oppression addresses how algorithms sort and curate information. Focusing primarily on the Google search engine, Noble draws on foundational works from library science to contextualize political and economic decisions about visibility, relevance, and legitimacy in information systems. Noble begins with a powerful and personal example of a Google search for “Black Girls.” Spoiler: the search returned images and links that had little to do with the interests or activities of children of color. Using this as a jumping-off point, Noble traces the ways race, class, and gender intersect with commercial interests and normalizing conventions to perpetuate stereotypes and maintain Whiteness as the default subjectivity. “The more we can make transparent the political dimensions of technology”, says Noble, “the more we might be able to intervene in the spaces where algorithms are becoming a substitute for public policy debates over resource distribution—from mortgages to insurance to educational opportunities.”

Eubanks’ Automating Inequality takes up the very real and tangible algorithmic governance that Noble’s work highlights as significant. Eubanks draws on three case studies in which algorithms and poor Americans come into contact, with disastrous results. The book tells the story of Indiana’s welfare system, homeless services in Los Angeles, and child protective services in Pennsylvania. The work shows how poverty becomes a liability, how service providers are alienated from their clientele, and how, like 19th-century poorhouses, digital poorhouses do more to entrench economic instability than ameliorate it. With heart-wrenching accounts of lost health care, invasive data collection, and unjustly taken children, Eubanks highlights the contentious class dynamics that inform, and are amplified through, automated systems. “Technologies of poverty management are not neutral,” says Eubanks. “They are shaped by our nation’s fear and hatred of the poor; they in turn shape the politics and experience of poverty”. It is worth noting that this is one of the most rigorously researched books I’ve ever read. Eubanks even hired a fact-checker, setting a standard for intellectual integrity that is beyond reproach.

Bucher’s If…Then: Algorithmic Power and Politics takes on the social side of automation. The theoretical work at the beginning of the text states in no uncertain terms that the banal platforms through which we socialize have deep and far-reaching implications for the organization of daily life. As Bucher explains, “platforms act as performative intermediaries that participate in shaping the worlds they only purport to represent.” That is, platforms like Facebook, Twitter and Instagram are not merely reflective mirrors, but powerfully efficacious in their own right. The text relies on case studies that capture social algorithms from diverse angles: a technical study of the Facebook news feed, an exploratory study of everyday social media users’ “encounters” with algorithms, and an account of how algorithms become institutionalized in the journalism and media landscape. Thus, Bucher tackles the engineering and materiality, everyday experiences, and institutionalization of algorithmic systems, maintaining all the while a critical eye towards politics and power.

The three works are thus unified in their larger project of social scientific inquiry into algorithms, guided by a critical lens. Each is distinct, however, in its focus. The distinct substance of each book is a crucial reminder that algorithms are pervasive and polymorphous. They are not ends or entities in themselves, but vehicles of, and tools for, social organization in its myriad forms.

 

Jenny Davis is on Twitter @Jenny_L_Davis

This post is based on the author’s article in the journal Science as Culture. Full text available here and here

In 2016, Lumos Labs – creators of the popular brain training service Lumosity – settled charges brought by the FTC, which concluded that the company unjustly ‘preyed on consumers’ fears … [but] simply did not have the science to back up its ads’. In addition to a substantial fine, the judgment stipulated that – except in light of any rigorously derived scientific findings – Lumos Labs

‘… are permanently restrained and enjoined from making any representation, expressly or by implication [that their product] … improves performance in school, at work … delays or protects against age-related decline in memory or other cognitive function, including mild cognitive impairment, dementia, or Alzheimer’s disease…. [or] reduces cognitive impairment caused by health conditions, including Turner syndrome, post-traumatic stress disorder (PTSD), attention deficit hyperactivity disorder (ADHD), traumatic brain injury (TBI), stroke, or side effects of chemotherapy.’

However, by the time of the settlement, Lumosity’s message was already out. Lumosity boasts ‘85 million registered users worldwide from 182 countries’ and its seductive advertisements were seen by many millions more. Over three billion mini-games have been played on the platform, which – combined with personal data gleaned from users – makes for an incredibly valuable data set. Lumosity kindled sparks of hope in those who suffered from, or feared suffering from, the above conditions, or who simply sought to better themselves for contemporary demands. In this way, the brain has become a site of both promise and peril. Today, ever more ethical injunctions are levied through calls for ‘participatory biocitizenship’, the supposed ‘empowerment of the individual, at any age, to self-monitor and self-manage health and wellness’.

However, this regime of self-care is not sold through oppressive demands, but the consumer-friendly promise of fun (especially when it can be displayed to others). These entanglements of hope, fear, duty, and pleasure coalesce into aspirations of ‘virtuous play’. Late capitalist modes of prosumption leverage our desires for realizing ideal selves through conspicuous consumption practices, proving ourselves as healthful, industrious, and always pleasure-seeking. Self-tracking technologies ably capture this turn to virtuous play, combining joyful game playing with diligent lifelogging. Brain training proves exemplary here, for through the potent combination of pop neuroscience, self-help rhetoric, normative role expectations, and haptic stimulation, we labour to enhance our cognitive capacities.

Of course, ‘brain training’ in the typical form of tablet and smartphone-based games constitutes a rather mild intervention, relative to other neurotechnologies adopted for personal enhancement. Consider, for example, EEG-based devices enticing consumers with neuro-mapping and (cognitive) arousal-based life-logging, or gamification and smart-home integration (see Melissa Littlefield’s new book for more on EEG devices). Some concept videos for such applications are saccharine sweet:

While others could have used a little less brotopia exuberance:

Elsewhere, we can find virtuous play in the uptake of transcranial direct current stimulation (tDCS), sometimes used in clinical settings, but increasingly also by amateur ‘neurohacking’ enthusiasts.

However, while the consumer-friendly brain training offered by companies like Lumosity pales in its relative intensity, its widespread appeal threatens to inscribe narrow ethical prescriptions of self-care (while also smoothing paths toward those more invasive measures). In other words, the actual efficacy of current brain training methods may matter far less than the discursive grooves they carve.

For example, ‘brain training’ rhetoric commonly leverages aspirations of virtuosity as relief from contemporary anxiety and vulnerability. Yet, by simultaneously stoking these very anxieties, it ratchets up expectations of being a dutifully productive and pleasure-seeking subject. Limited affordances also entail that the subject is disaggregated into only those functional capacities deemed value-bearing and measurable. The risk here is reinforcing hyper-reflexive but shallow practices of self-care.

Moreover, popular rhetoric around ‘neuroplasticity’ construes the brain as an untapped well of potential, infinitely open to targeted enhancement for ideal neoliberal subjects who are ‘dynamic, multi-polar and adaptive to circumstance’. This enhancement ethos has also emerged in response to the collective dread felt towards neurodegenerative diseases, where responsible, enterprising subjects seek ways to ensure cognitive longevity.

Our neuroplastic potentials are also regularly invoked, holding promise that we can truly realize our latent capacities to be more productive, fulfil role obligations, ward off neurodegeneration, and shore up our reserves of human capital. This is the contemporary burden of endless ‘virtuosity’, where subjects must constantly work upon their value-bearing capacities to be (temporarily) relieved of insecurity, risk, and vulnerability.

These hopes, fears, and obligations are soothed and stoked through the virtuous play of brain training. This market operates under the premise that through expertly designed activities – commonly packaged as short games – cognitive capacities may be enhanced in ways that generalize to everyday life. Proponents have sought to ground consumer-friendly brain training in scientific rigour, but efficacy remains hotly contested.

More broadly, brain training constitutes part of the growing ‘brain-industrial complex’, driven in part by ‘soft’ commercialization trends. These commercial claims encourage ‘endless projects of self-optimization in which individuals are responsible for continuously working on their own brains to produce themselves as better parents, workers, and citizens’.

The rhetoric of brain training reflects moral hazards that often accompany commercialization, with ‘inflated claims as to the translational potential of research findings’ resulting in tenuous practical applications. Brain training also reflects how smoothly self-tracking has been incorporated into obligations of healthfulness, leveraging a ‘notion of ethical incompleteness’. Hence, while most consumer-friendly ‘brain training’ products are of low intensity, even here we find an ethical appeal that ‘divides, imposes burdens, and thrives upon the anxieties and disappointments generated by its own promises’. Coupling self-tracking with gamification thus enables joyous pleasure and ethical measure. Care for oneself ‘is now shot-through with the promise of uninhibited amusement’ so that we can ‘amuse ourselves to life’. This judicious leisure keeps mortality at bay and morality upheld.

Using Lumosity as a peg upon which to hang the concept of virtuous play, we can unpack how popular brain training and related self-tracking practices lean on contemporary aspirations and anxieties. Firstly, Lumosity is designed to be routine yet fun, undertaken through short, aesthetically pleasing video games, usually played on personal computers, tablets, or smartphone devices. These games purport to target, assess, and – with training – enhance cognitive capacities. Many of these games draw upon classic experimental designs, and Lumosity has sought to further establish credibility through their ‘Lumos Labs’ – where ‘In-house scientists refine and improve the product’ – and their ‘Human Cognition Project’.

Admittedly, it may be tempting to dismiss products like Lumosity as pseudoscience packaged in exaggerative marketing, not worthy of our attention. But such dismissals neglect how we are typically constituted as subjects, for it is

‘… at this vulgar, pragmatic, quotidian and minor level that one can see the languages and techniques being invented that will reshape understandings of the subjects and objects of government, and hence reshape the very presuppositions upon which government rests.’

Therefore, with this need to better understand prevailing rationales of neuro-enhancement, observe here how Lumosity pitched itself to consumers in 2014:

Several appeals emerge here: equating brain training with other forms of ‘fitness’; the offer of focusing on what is ‘important to you’; scientific rigour; progress measured by comparison against the cohort; and the promise of fun. Finally, there is an earnest petition of potential, for with Lumosity you will ‘discover what your brain can do’.

The brain training industry has thrived within this context of egalitarian self-enterprise, offering aspiring virtuosos ‘the key to self-empowered aging’. Such seductive rationales are highlighted by Sharp Brains, ‘an inde­pen­dent market research firm track­ing health and per­for­mance appli­ca­tions of neu­ro­science’. They claim

‘When we con­ducted in-depth focus groups and inter­views with [lay subject] respon­dents, the main ques­tion many had was not what has per­fect sci­ence behind it, but what has bet­ter sci­ence than the other things peo­ple are doing – solving cross­word puz­zle num­ber one mil­lion and one, tak­ing ‘brain sup­ple­ments,’ or doing noth­ing at all until depres­sion or demen­tia hits home.’

The implication – conveniently endorsed by Sharp Brains – is that although efficacy remains unproven, this does not absolve individual responsibility. Rather, we must do something to care for our brains, lest we be seen as defeatist and indolent, sullenly waiting for depression or dementia to ‘hit home’. Such sentiments have certainly been fostered by slickly-packaged commercial appeals.

In 2012, Lumosity launched a highly successful ‘Why I Play’ campaign, designed to normalize brain training. The campaign was active for several years, reaching a massive global audience through an enticing emphasis on aspiration and emulation. Each ‘Why I Play’ commercial adhered to a shared template: an actor portraying a happy Lumosity user stresses the imperative need to enhance their cognition, while also noting the pleasures of brain training. All the actors are, of course, impossibly attractive, and the perfect embodiment of the late capitalist subject. They serve as avatars of virtuosity, with unending drives for both self-improvement and pleasure.

This simultaneously disciplined, pleasurable, intimate, and yet distant framing of ‘discovering what your brain can do’ creates a peculiar ethic-fetish of brainhood. Advocates proclaim that ‘I am happier with my brain’ or ‘my brain feels great’. The users also praise ‘the science behind the games’, and highlight hopes to maintain cognitive capacities as they age. These commercials lean directly on burdensome expectations placed upon labouring subjects today.

Another variant of the ‘Why I Play’ campaign, upping the ethical stakes, even implies that brain training may be obligatory for those who aspire to be the kindest persons they can be:

Similarly, a mother expresses relief that ‘it’s not just random games, it’s all based on neuroscience’, reassuring her that ‘every brain in the house gets a little better every day’. Training one’s brain – and the brains of dependents – is framed as an admirable practice for those who seek to be a source of joy, comfort, and care for others.

Upon commencing their ‘brain training journey’ members are asked probing questions around when they feel most productive, their sleeping patterns, general mood, exercise habits, age, and more. A competitive regimen is also stoked, with users asked ‘What percentage of Lumosity members do you outrank? … Come back weekly to see how many people you’ve surpassed.’ Such encouragement is then reflected in precise rankings of users in their various cognitive capacities. Lumosity also enables integration of data from Fitbit devices, further entrenching associations between brain fitness and aerobic fitness.

After completing a prescribed number of training sessions the user will receive a ‘Performance Report’. This report includes comparisons with other users according to occupation group, implying the line of work to which their particular brain may be best suited. Users can also consult their ‘Brain Profile’, divided into five functions of ‘Attention’, ‘Flexibility’, ‘Speed’, ‘Problem Solving’, and ‘Memory’. These five measures generate the user’s entire ‘Brain Profile’, while the ‘Performance Index’ ensures that ‘users know where they fall with respect to their own performance using a single number’. Nothing else can be accommodated, and everything must be reducible to a single figure. Our wondrous cognitive assembly collapses into a narrow ‘profile’ of functions, percentages, and indices, all framed through buzzwords and mantras of corporate-speak.

So, while it remains contentious whether such practices materially ‘train’ a brain, these regimes certainly contribute to entraining and championing a particular kind of subject. Yet the range of qualities measurable is clearly restricted by prevailing capabilities, including how these qualities are themselves refashioned to fit available affordances. Nevertheless, perhaps some comfort is found in giving in to the promise of fun and giving oneself over to expertise. In their capacious allowance for both pleasure and duty, these games serve as tolerable acts of confession. However, this fetish-ethic may, in time, become a burdensome labour, adding supposed precision around ‘brainhood’ that reflects only current idealisations.

The fetish-ethic of cognitive enhancement is particularly evident in the insistence on ‘active ageing’. Brain training products are often directly marketed to persons in the ‘Third Age’ (those who are perhaps retired, but not yet dependent upon others). The commercial exploitation of the Third Age has commonly been tied to strategies that bemoan passive acceptance of ‘natural’ ageing, and instead urge practices designed to lengthen this twilight of life.

Lumosity’s ‘Why I Play’ campaign, for instance, expressly endorses active ageing. One actor  states ‘There’s a lot going on in here [pointing to head], and I want to keep it that way’, while another actor speaks directly to Third Age virtuosity.

Here, the extended Third Age is embodied in a handsome and (improbably) young retiree; a privileged silver fox carrying a clearly aspirational message. In this manner Lumosity presents brain training as the rational consumer choice through avatars of success, worthy of emulation. Such rationales are persuasive means in shifting the burden of healthfulness onto the consuming subject. A new actuarialism is emergent, managing population-level risks through the pleasurable consumption of self-care.

However, virtuous play also requires justifying the use of time. For today’s perpetually harried subject, this is achieved by blurring distinctions between labour and leisure. In this way, recreation can be tied to self-perfection, equipping the user against neoliberal demands without sacrificing participation in the experiential economy. This strategy of ‘instrumentalizing pleasure as a means of legitimizing it’ is especially evident in the way another brain training product – Elevate – pitches itself to consumers, with emphasis placed on the judicious use of time. Advertisements feature actors discussing the product’s benefits: time well spent; productive pleasure; and enhanced work focus. Indeed, these Elevate ‘users’ suggest that the right kind of play is actually the most effective and rational means of enhancing productivity:

Elevate’s emphasis on personal productivity is part of a broader ‘corporate mind hack’ trend. Under this regime, the labouring subject is disaggregated into discrete functions pre-determined as valuable, and then incentivized to improve them.

This is sometimes put into practice by leveraging competitive drives in workplace settings, with some arguing that it can prove ‘socially connective with the self and co-workers in just the right lightweight competitive way’. Such ‘biohacking’ is also driven by simmering distrust of more intuitive and holistic assessments of one’s wellbeing. Instead, ‘hard’ data is sought through mediating, non-human authorities. Still, it remains noteworthy that brain training retains a form of embodied volition. Note, for instance, how brain training is typically offered through devices imbued with haptic feedback capabilities, enabling a pleasurable experience through the sensory bleed between mind, body, device, and the virtual world presented within it.

Still, the expectation is that we should circumvent our sensing, intuitive apparatuses, and instead seek data neatly cleaved from its source. These mediated outputs can then provide reassuring, purportedly objective markers of our accumulated human capital. Yet, human capital, of course, is determined only by what counts as worth counting in any particular social context. Hence a circular pedagogy emerges, for as Foucault noted, one cannot ‘know’ without transforming that which is observed, and to ‘know’ oneself requires first abiding what is deemed of value to know.

The result is that these narrowly derived brain ‘profiles’ and ‘indices’ ultimately prescribe far more than they reveal. Likewise, virtuous play is a discursive veil by which productive expectations are heaped upon dutiful biocitizens. This is further compounded by the hasty rush-to-market. Emerging products looking to cash in on contemporary hopes and anxieties are limited by available affordances, yet still exploit obligations of self-care. This generates constraining ontological frames, hardening precisely at the very moment in which personal neurotechnologies are touching upon extraordinary exploratory potential.

Given these trends, we should aspire to foster discursive spaces where ‘enhancement’ can be reimagined. Or, better yet, perhaps we can sidestep the insistence on ‘enhancement’ altogether, and cease hyper-reflexively categorizing ourselves into endlessly improvable higher cognitive functions. Alternatively, perhaps we can better exploit the flexible affordances of digital platforms. Could we turn our hopes, fears, anxieties, and desires for pleasure away from high scores and top rankings for sole virtuosos? Such habits accrue hard metrics that confer worth only on oneself. Instead, can we turn personal neurotechnologies towards discovering new avenues for our social capacities to soothe the fears and anxieties of others, and perhaps even be a source of pleasure for them?

This is not to advocate for metricizing intimacy through the ‘quantified relationship’. To precisely metricize good conduct – and give authority over these measures to mediators that cannot accommodate the creative ruptures of ‘play’ – is to wilfully foreclose the very same elusive potentials we are striving to attain. Instead, perhaps we can reimagine self-fashioning in ways less tethered to rigid and pre-determined instrumental ends, and instead embrace more experimental modes.

In any case, following their smackdown by the FTC, Lumosity are now far more cautious in their claims:

 

Matt Wade is a postdoctoral fellow in NTU’s Centre for Liberal Arts and Social Sciences, Singapore. His primary research interests are within the sociology of science, technology, and morality (particularly around obligations of virtuosity and assessing moral worthiness). These interests are pursued in various contexts, including: debates and applications of moral neuropsychology; consumer-friendly neurotechnologies; self-tracking practices; and appeals for aid through personal crisis crowdfunding. Matt also has an interest in cultural sociology, particularly spectacles of prosumption and emotional labour. Previously, this research focused on evangelical megachurches, and now is pursued through a project on contemporary wedding rituals.

Some of Matt’s work can be accessed here and here.

 

Facebook has had a rough few weeks. Just as the Cambridge Analytica scandal reached fever pitch, revelations about Zuckerberg’s use of self-destructing messages came to the surface. According to TechCrunch, three sources have confirmed that messages from Zuckerberg have been removed from their Facebook inboxes, despite the users’ own messages remaining visible. Facebook responded by explaining that the message-disappearing feature was a security measure put in place after the 2014 Sony hack. The company promptly disabled the feature for Zuckerberg and other executives and promised to integrate the disappearing message feature into the platform interface for all users in the near future.

This quick apology and immediate feature change exemplifies a pattern revealed by Zeynep Tufekci in a NYT opinion piece, in which she describes Facebook’s public relations strategy as a series of public missteps followed by “a few apologies from Mr. Zuckerberg, some earnest-sounding promises to do better, followed by a couple of superficial changes to Facebook that fail to address the underlying structural problems.”

In the case of disappearing messages, Facebook’s response was both fast and shallow. Not only did the company fail to address underlying structural issues; it responded to the wrong issue entirely. Its promise to offer message deletion to all Facebook users treated the problem as one of equity. It presumed that what was wrong with Zuckerberg deleting his own messages from the archive was that others couldn’t do the same. But equity is not what’s at issue. Of course users don’t have the same control over content—or anything else on the Facebook platform—as the CEO. I think most people assume that they are less Facebook-powerful than Mark Zuckerberg. Rather, what is at issue is a breach of accountability. More precisely, the problem with disappearing messages on Facebook is that they violated accountability expectations.

Helen Nissenbaum introduced a widely used framework for describing how and when privacy violations take place. The “contextual integrity” framework rejects universal evaluations of privacy and instead defines privacy by the expectations of a particular context. For example, it isn’t a privacy violation if you give your information to a researcher and they reproduce that information in published reports, but it is a privacy violation if you give your information to a researcher and they sell that information to third parties. The same idea can be applied to accountability.

Contexts and situations carry with them expectations about what will be maintained for the record. These expectations of accountability ostensibly guide behavior and interaction. If people assume that all communications are retrievable, they will comport themselves accordingly. Similarly, they will treat others’ communications as available for retrieval and evaluation. With his disappearing messages, Zuckerberg violated the contextual integrity of accountability.

Disappearing messages are not in and of themselves accountability violations. Snapchat, for instance, integrates ephemeral messaging as a core feature of its design. Recipients of Snapchat messages do not presume that senders can or will be held accountable for their content in the way that users of archival services—like Facebook—would. What’s so unsettling about Zuckerberg deleting his messages isn’t that we users can’t do it too, it’s that he violated the integrity of the context by presenting one set of accountability assumptions and enacting another.

Offering message deletion to all Facebook users would indeed change the contextual expectations of accountability, but it would fail to repair the contextual violation. Instead, a new feature roll-out is another quick pivot that leaves larger intersecting issues of power, design, and regulation unaddressed.

 

Jenny Davis is on Twitter @Jenny_L_Davis

 

Humor is central to internet culture. Through imagery, text, snark, and stickers, funny content holds strong cultural currency. In a competitive attention economy, LOLs are a hot commodity. But internet culture’s appetite for a laugh does not preclude serious forms of digitally mediated communication, nor consideration of consequential subject matter. On the contrary, the silly and the serious can, and do, imbricate in a single utterance.

The merging of serious and silly becomes abundantly evident in recent big data analyses of political communication on social media. Studies show that parody accounts, memes, gifs and other funny content garner disproportionate attention during political news events. John Hartley refers to this phenomenon as ‘silly citizenship’ while Tim Highfield evokes an ‘irreverent internet’. This silliness and irreverence in digitally mediated politics means that contemporary political discourse employs humor as a participatory norm. What remains unclear, however, is what people are doing with their political humor.  Is humor a vehicle for meaningful political communication, or are politics just raw material for funny content?  My co-authors and I (Tony Love (@tonyplove) and Gemma Killen (@gemkillen)) addressed this question in a paper published last week in New Media & Society.

The question of what people do with political humor is significant. Researchers and social commentators have expressed concern that humor detracts from substantive conversation and foments cynicism and apathy in the democratic system. At the same time, internet technologies present new platforms that give voice to marginalized groups while humor offers an accessible discursive style. A tension thus emerges in which silliness online may at once strengthen and undermine public participation in politics.

Our paper, titled ‘Seriously Funny: The Political Work of Humor on Social Media’ looks at how humor works, and the work humor does, in digitally mediated political communication.  Data for the paper is derived from two key moments during the 2016 U.S. presidential race in which humor and politics intersect: Donald Trump calling Hillary Clinton a ‘nasty woman’ and Clinton referring to Trump supporters as a ‘basket of deplorables.’ We scraped public Twitter data from the 24 hours following each event to create a big(ish) data set. We ended up with over 14,000 tweets. We coded these tweets for humor and political messaging. We then analyzed the humorous-political tweets to discern what people were doing with their political humor. Finally, we separated the two cases—deplorables and nasty woman—to see if we could find partisan differences in humor style.

Methodological interlude: this process of coding was as tedious as (or more tedious than) you would imagine. Existing research has used big data computational methods to show broad patterns. We were interested in the nuances that big data glosses over and/or obscures. Our questions required a small data approach. This was especially true because we were interested in humor, and a key feature of humor is that it often means something different than it says. Humor is deeply layered and culturally specific, relying on intertextual remixes and inside knowledge. This was a do-it-by-hand-the-old-fashioned-way kind of job. Practically, that meant hand-coding 14,000+ tweets, including following links and threads to gain context. It meant re-coding the subset of tweets that we deemed funny and political (~3,300) and then coding them again in search of partisan patterns. I tweeted this commentary on the process during the revision stage (while attempting to get through even more re-coding). All of this is to say that big data methods represent a massive advancement in social research, but sometimes research questions require sleeves-up qualitative deep-dives.

Our first pass of the data showed two main things. First, we found that, as expected, humor loomed large, featuring in about 5,000 tweets. Most of the other tweets were just informational (e.g., “Clinton called Trump supporters a ‘basket of deplorables’”) and/or links to articles and videos of the events, with the occasional angry, humorless rant. Second, we found that nearly 70% of humorous tweets carried some political agenda. That is, we found that the vast majority of funny content acted as a vehicle for serious political talk. This second finding answered one of our main research questions (is humor a tool for political speech or are politics fodder for apolitical jokes?). This finding, that humor does serious political work, eases concerns about humor as a force of apathy and cynicism and indicates that those who trade in humor can—and do—engage actively in the public political sphere.
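For readers who want to see how the headline proportions hang together, the arithmetic can be sketched as a toy tally. The record structure and counts below are purely illustrative stand-ins for our hand-coded data (roughly 5,000 of 14,000 tweets humorous, and about two-thirds of those political), not our actual codebook:

```python
# Toy tally mirroring the proportions reported above.
# Each record is a hypothetical (is_humorous, is_political) pair;
# the real coding was done by hand, tweet by tweet, with full context.

def summarize(coded):
    """Compute the shares reported in the first pass of the data."""
    total = len(coded)
    humorous = [t for t in coded if t[0]]
    political_humor = [t for t in humorous if t[1]]
    return {
        "total": total,
        "humorous_share": len(humorous) / total,
        "political_share_of_humor": len(political_humor) / len(humorous),
    }

# Illustrative counts only: ~5,000 of 14,000 tweets humorous,
# ~3,300 of those also carrying a political message.
coded = [(True, True)] * 3300 + [(True, False)] * 1700 + [(False, False)] * 9000
stats = summarize(coded)
```

With these stand-in counts, `humorous_share` lands near 36% and `political_share_of_humor` near two-thirds, matching the proportions described above.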

Our next step entailed delineating more specifically what Twitter users do with their humor. We categorized political humor into three thematics: discrediting the opposition, political identification, and civic support. We analyzed these as a whole and also looked at how the data distributed along partisan lines. We tied each thematic to a ‘humor function’ using John C. Meyer’s origins and effects framework. Meyer posits that humor takes three forms: relief—cutting through a heavy moment with levity; incongruity—making the familiar strange; and superiority—triumph through pointed deprecation of an ‘other.’ These humor origins serve two broad effects: unity and division. Meyer clarifies that humor always has multiple origins and serves multiple ends, but with different emphases.

We saw relief and incongruity throughout the tweets but were able to parse variations in superiority as an origin, and unity/division as an effect. Specifically, tweets that discredit the opposition were primarily divisive and heavily reliant on superiority; political identification was primarily unifying while pushing back on denigration; and civic support had elements of superiority with relatively equal parts unification and division as mobilization was both a collective action and an act of aggression. These humor schematics not only connected our findings to cultural studies of humor, but also allowed us to make sense of partisan humor style.

Examining humor style across partisan lines is meaningful theoretically, as humor studies have traditionally shown firm symbolic boundaries between ideological groups. At the same time, internet studies have celebrated a ‘convergence culture’ and general breakdown of symbolic boundaries as shared language, cadence and syntax take hold across contexts. Divergent humor style across the two data sets would lend credence to traditional humor studies, while shared humor style would indicate that social media have had profound boundary breaching effects on practices and preferences of humor.

Our first category, discrediting the opposition, was the most heavily populated. Here, tweets mocked the opposing candidates and their supporters, hotly contesting fitness for office and general value as human beings. For instance, anti-Trump tweets referenced his misogyny and (dull) intellect while anti-Clinton tweets referenced elitism and corruption. For example:

“Such a Nasty Woman,GRAB THEM BY P*SSY, Nobody has more respect for women than me”—donald trump

Mrs Deplorable will have to take a few days off from parties in Hollywood, she’s in the bed, deplorabley tired. #LockHerUp #TrumpTrain

About 2/3 of all tweets had elements of opposition and these distributed equally along partisan lines.

Our second theme was political identification. This referred to establishing the self as a political subject through reclaiming negative labels, connecting political preferences to other positive statuses, and establishing the self as part of a political bloc. For example:

I was going to be a nasty woman for Halloween, but I am already sexy, smart and generous

Folks I’m not a Major. ’Major Lee D Plorable’ read fast is Majorly Deplorable. I was only corporal in USMC #BasketOfDeplorables lol

About 1/3 of all tweets had elements of political identification. Again, these distributed about equally along partisan lines.

Note: analyses of our first two categories show no partisan differences in humor style, indicating a clear divergence from the strong cultural boundaries that humor studies would lead us to expect. But then, we come to civic support.

Our final category, civic support, is in many ways the most interesting. Civic support refers to active participation in the political process through mobilization, fundraising and voting. For example:

This nasty woman is taking my pussy to a voting booth to vote for @HillaryClinton Too bad we both can’t vote. #ImWithHer #NastyWomen

How’s Go “F” yourself, from a deplorable Independent who just changed her vote from Her to Him

Although it is our least populated category (only present in about 20% of all humorous political tweets), it is the only category that varies along partisan lines. While about a quarter of ‘nasty woman’/pro-Clinton tweets contain elements of civic support, this thematic is present in less than 10% of ‘deplorables’/pro-Trump tweets. This pattern is critical as the only example of partisan difference in humor style, showing that humor’s traditionally strong boundaries may partially resist the convergent pull of internet culture. The pattern also presents something of a puzzle: despite the relatively high prevalence of civic action among Clinton supporters on Twitter, the election ultimately went to Trump. This raises interesting questions about the predictive value of social media for actual voting behaviors.

In sum, our study shows four main things: 1) humor plays a big part in digitally mediated political communication; 2) humor is a vehicle for serious political commentary and participation; 3) humor is used largely for denigration and divisiveness, but there are substantial trends of political subjectification, civic participation, and collective action; and 4) political humor partially transcends partisan lines while leaving some boundaries intact. These findings ease concerns about the possible cynicism fomented through humor online while raising key questions about the relationship between social media practices and voting behavior. The findings also speak to humor studies—which show firm symbolic boundaries—and internet studies—which show boundaries broken down. The partial but incomplete breakdown of ideological boundaries in our analysis of humor style indicates that the meeting of humor and social media leaves neither unchanged.

 

Full text found here: Seriously Funny: The Political Work of Humor on Social Media

Jenny Davis is on Twitter (@Jenny_L_davis), where she sometimes tries to be funny with varying degrees of success

If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.

The internet—along with the complementary technologies we’ve developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form “rich mental connections,” and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr’s sentiment that the way we think has changed, I don’t agree that the fault falls on the internet. I believe we expect too much of Google and less of ourselves; therefore, the fault (if there is fault) is largely our own.

Here’s why: Carr’s argument hinges on the idea that technology definitively determines our society’s structural and cultural values—a theory known as technological determinism. However, he fails to recognize the theory of affordance in this argument. Affordances refer to the way in which the features of a technology interact with agentic users and diverse circumstances. While the technical and material elements of technology do have shaping effects, they are far from determined. Affordance theory suggests that the technologies we use and the internet infrastructures from which they draw contain multipotentiality: they afford the potential to indulge in curiosity and develop robust knowledge while simultaneously affording the potential to relinquish curiosity and develop complacency through the comforts of convenience and self-confirmation.

Considering the initial sentiment of Carr’s argument (the way we think has changed) together with affordance theory, we can derive two critical questions: have we embraced complacency and become too comfortable with the internet’s knowledge production capabilities? If so, by choosing to rest on our laurels and exploit this affordance, what happens to epistemic curiosity?

There’s a lot to unpack, but in order to address these questions, we need to examine the potential socio-technical circumstances that could lead us down a path of declining epistemic curiosity, starting with the binary ideas of convenience and complacency.

Complacency is characterized by the feeling of being satisfied with how things are and not wanting to try to make them better. Clearly, in terms of making life more efficient, we are nowhere near complacent, as we constantly strive to streamline our lives through innovation—from fire to the invention of (arguably) our greatest creation to date and the basis for our modernity: information and communication technology. This technology affords us the ability to live more convenient, effortless lives by providing access to the world’s knowledge with the tap of a finger and the ability to do more in a few moments than previous generations could do in hours.

For instance, education has become much more convenient. Thanks to the internet, you can take advantage of distance learning programs and earn a degree on your own terms, without physically attending class. The workforce has also become more flexible, as technology allows us to maximize time and stay on top of our work through complete mobility, and in some cases, complete task automation. Economically, the internet allows us to sell and consume goods and services without the physical limitations of brick and mortar. It also allows us to communicate with friends, family, and strangers over long distances, document our lives, access current events with ease, and answer a question within moments of it popping into our heads.

These conveniences must make life better, right?

Think of these conveniences like your bed on a cold morning: warm and comfortable, convincing you to hit snooze and stay a while longer. This warmth and comfort can be a source of sustenance and strength; however, if we stay too long, comfort can get the best of us. We might become lazy, hesitating to diverge from the path of least resistance.

Just as it is inadvisable to regularly snooze until noon, it is concerning when information and knowledge are accessed too easily, too quickly. With the increased accessibility and speed of information, it’s easy to become desensitized to curiosity—the very intuition that is responsible for our technological progress—in the same way that you are desensitized to your breathing pattern or heartbeat. By following the path of least resistance, we can create a dynamic in which we perceive the internet as a mere convenience instead of a tool to stimulate our thoughts about the world around us. This convenience dynamic allows us to settle into a state of complacency in which we are certain that everything we think and believe can be justified through a quick Google search—because, in fact, it can be. That feeling of certainty and comfort that stems from this technical ability to self-confirm is what I call informed complacency.

The idea of informed complacency is especially fraught because it signifies a turning point in our perception of contemporary knowledge. Ultimately, it can encourage us to develop an underlying sense of omniscient modernity, which Adam Kirsch discusses in his article for The New Yorker, “Are We Really So Modern?”:

“Modernity cannot be identified with any particular technological or social breakthrough. Rather, it is a subjective condition, a feeling or an intuition that we are in some profound sense different from the people who lived before us. Modern life, which we tend to think of as an accelerating series of gains in knowledge, wealth, and power over nature, is predicated on a loss: the loss of contact with the past.”

In the past, nothing was certain. The information our ancestors had on the world and universe was constantly being overturned and molded into something else entirely. Renowned thinkers from across the ages built and destroyed theories like they were children with LEGO bricks—especially during the Golden Age of Athens (fifth and fourth centuries B.C.) and the Enlightenment (seventeenth and eighteenth centuries A.D.). Each time they thought they had it figured out, the world as they knew it came crashing down with a new discovery:

“The discovery of America destroyed established geography, the Reformation destroyed the established Church, and astronomy destroyed the established cosmos. Everything that educated people believed about reality turned out to be an error or, worse, a lie. It’s impossible to imagine what, if anything, could produce a comparable effect on us today.”

Today, we still face uncertainty, albeit a different kind. With the glut of empirical evidence on the internet, multiple versions of objective reality flourish even as they conflict. These multiple truths create a dynamic information environment that makes it difficult to differentiate between fact, theory, and fiction, increasing the likelihood that whatever one thinks is true can easily be confirmed as such. With this sentiment in mind, by following the path of least resistance and developing a sense of informed complacency, we risk developing a sense of omniscient modernity and overestimating our ability to know, because we are certain that we know—or can know—everything, past, present, and future, with the click of a button or the tap of a finger.

Though a dynamic information environment has clear benefits for epistemic curiosity—better science, more informed debates, an engaged citizenry—the tilt of the affordance scale towards complacency always remains a lingering possibility. If we begin to lean in this direction, I contend that informed complacency is likely to take hold and lead us to ignorance and insularity amid a saturated information environment. This can create cognitive traps that, in the worst instance, diminish epistemic curiosity.

One of these traps is called the immediate gratification bias, which Tim Urban of Wait But Why has playfully dubbed the “Instant Gratification Monkey”. He describes this predisposition as “thinking only about the present, ignoring lessons from the past and disregarding the future altogether, and concerning yourself entirely with maximizing the ease and pleasure of the current moment.” As a result of this predisposition, there is an increasing demand for instant services like Uber, Amazon Prime, Netflix, and Tinder, which testifies that the notions of ease and immediacy have infiltrated our thought processes, compelling us to apply them to every other aspect of our lives. The increase in the speed at which we consume information has molded us to rely on and expect instant results for everything. Consequently, we are likely to base our information decisions on this principle and choose not to dig past surface level.

Another trap is found in gluttonous information habits—devouring as much of it as we can, as quickly as possible, solely for the sake of hoarding what we consider to be knowledge. In all our modernity, it seems that we misguidedly assume that consuming information at a faster pace is beneficial to the development of knowledge, when in fact, too much information (information overload) can have overwhelming, negative effects, such as the inability to make the “rich mental connections” Carr describes in his article. This trap is amplified by pressures to stay “in the know” as well as the market of apps and services that capitalize on a pervasive fear of missing out, transforming the pursuit of knowledge from an act of personal curiosity to a social requirement.

The complex algorithms deployed by search engine and social media conglomerates to manage our vast aggregates of information curate content in ways users are likely to experience not only as useful, but pleasurable. These algorithmic curations are purposefully designed to keep information platforms sticky; to keep users engaged, and ultimately sell data and attention. These are the conditions under which another cognitive trap arises: the filter bubble. By personally analyzing each individual user’s interests, the algorithms place them in a filtered environment in which only agreeable information makes its way to the top of their screens. Therefore, we are constantly able to confirm our own personal ideologies, rendering any news that disagrees with one’s established viewpoints as “fake news.” In this context, it’s easy to believe everything we read on the internet, even if it’s not true. This makes it difficult to accurately assess the truthfulness and credibility of news sources online, as truth value seems to be measured by virality rather than veracity.
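The filter-bubble dynamic described above can be caricatured in a few lines of code: a ranker that scores content purely by its overlap with what a user already engages with will, by construction, push agreeable material to the top and sink everything else. This is a deliberately naive sketch with made-up item and tag names; real curation systems weigh far more signals than topical overlap:

```python
# Naive personalization sketch: rank items by overlap with a user's
# existing interests. Agreeable content rises; challenging content sinks.

def rank_for_user(items, user_interests):
    def score(item):
        # Count how many of the item's topic tags the user already follows.
        return len(set(item["tags"]) & user_interests)
    # Highest-overlap items first; ties keep their original order.
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "Story challenging your view", "tags": {"other_side"}},
    {"title": "Neutral explainer", "tags": {"politics"}},
    {"title": "Story confirming your view", "tags": {"politics", "my_side"}},
]
feed = rank_for_user(items, user_interests={"politics", "my_side"})
# The confirming story now leads the feed; the challenging one trails.
```

The point of the sketch is that no malice is required: optimizing for engagement with past interests is enough to produce the self-confirming environment the paragraph describes.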

Ultimately, with his argument grounded in technological determinism, Carr overlooks the perspective that technology cannot define its own purpose. As its creators and users, we negotiate how technology integrates into our lives. The affordances of digital knowledge repositories create the capacity for unprecedented curiosity and the advancement of human thought. However, they also enable us to be complacent, misinformed, and superficially satisfied; that is to say, an abundance of easily accessed information does not always mean persistent curiosity and improved knowledge. To preserve epistemic curiosity and avoid informed complacency, we should keep reminding ourselves of this and practice conscious information consumption habits. This means recognizing how algorithms filter content; seeking diverse perspectives and content sources; questioning, critiquing, and evaluating news and information; and, perhaps most importantly, venturing past the first page of Google search results. Who knows, you might find something that challenges everything you believe.

Clayton d’Arnault is the Editor of The Disconnect, a new digital magazine that forces you to disconnect from the internet. He is also the Founding Editor of Digital Culturist. Find him on Twitter @cjdarnault.
