This post is based on the author’s article in the journal Science as Culture. Full text available here and here

In 2016, Lumos Labs – creators of the popular brain training service Lumosity – settled charges brought by the FTC, which concluded that the company unjustly ‘preyed on consumers’ fears … [but] simply did not have the science to back up its ads’. In addition to a substantial fine, the judgment stipulated that – except in light of any rigorously derived scientific findings – Lumos Labs

‘… are permanently restrained and enjoined from making any representation, expressly or by implication [that their product] … improves performance in school, at work … delays or protects against age-related decline in memory or other cognitive function, including mild cognitive impairment, dementia, or Alzheimer’s disease…. [or] reduces cognitive impairment caused by health conditions, including Turner syndrome, post-traumatic stress disorder (PTSD), attention deficit hyperactivity disorder (ADHD), traumatic brain injury (TBI), stroke, or side effects of chemotherapy.’

However, by the time of the settlement, Lumosity’s message was already out. Lumosity boasts ‘85 million registered users worldwide from 182 countries’ and their seductive advertisements were seen by many millions more. Over three billion mini-games have been played on their platform, which – combined with personal data gleaned from their users – makes for an incredibly valuable data set. Lumosity kindled sparks of hope within those who suffered from, or feared suffering from, the above conditions, or who simply sought to better themselves for contemporary demands. In this way, the brain has become a site of both promise and peril. Today, ever more ethical injunctions are levied through calls for ‘participatory biocitizenship’, the supposed ‘empowerment of the individual, at any age, to self-monitor and self-manage health and wellness’.

However, this regime of self-care is not sold through oppressive demands, but through the consumer-friendly promise of fun (especially when it can be displayed to others). These entanglements of hope, fear, duty, and pleasure coalesce into aspirations of ‘virtuous play’. Late capitalist modes of prosumption leverage our desires for realizing ideal selves through conspicuous consumption practices, proving ourselves healthful, industrious, and always pleasure-seeking. Self-tracking technologies ably capture this turn to virtuous play, combining joyful game playing with diligent lifelogging. Brain training proves exemplary here, for through the potent combination of pop neuroscience, self-help rhetoric, normative role expectations, and haptic stimulation, we labour to enhance our cognitive capacities.

Of course, ‘brain training’ in the typical form of tablet and smartphone-based games constitutes a rather mild intervention, relative to other neurotechnologies adopted for personal enhancement. Consider, for example, EEG-based devices enticing consumers with neuro-mapping and (cognitive) arousal-based life-logging, or gamification and smart-home integration (see Melissa Littlefield’s new book for more on EEG devices). Some concept videos for such applications are saccharine sweet:

While others could have used a little less brotopia exuberance:

Elsewhere, we can find virtuous play in the uptake of transcranial direct current stimulation (tDCS), sometimes used in clinical settings, but increasingly also by amateur ‘neurohacking’ enthusiasts.

However, while the consumer-friendly brain training offered by companies like Lumosity pales in its relative intensity, its widespread appeal threatens to inscribe narrow ethical prescriptions of self-care (while also smoothing paths toward those more invasive measures). In other words, the actual efficacy of current brain training methods may matter far less than the discursive grooves they carve.

For example, ‘brain training’ rhetoric commonly leverages aspirations of virtuosity as relief from contemporary anxiety and vulnerability. Yet by simultaneously stoking these very anxieties, it ratchets up expectations of being dutifully productive and pleasure-seeking subjects. Further, limited affordances mean that the subject is disaggregated into only those functional capacities deemed value-bearing and measurable. The risk here is reinforcing hyper-reflexive but shallow practices of self-care.

Moreover, popular rhetoric around ‘neuroplasticity’ construes the brain as an untapped well of potential, infinitely open to targeted enhancement for ideal neoliberal subjects who are ‘dynamic, multi-polar and adaptive to circumstance’. This enhancement ethos has also emerged in response to the collective dread felt towards neurodegenerative diseases, where responsible, enterprising subjects seek ways to ensure cognitive longevity.

Our neuroplastic potentials are also regularly invoked, holding promise that we can truly realize our latent capacities to be more productive, fulfil role obligations, ward off neurodegeneration, and shore up our reserves of human capital. This is the contemporary burden of endless ‘virtuosity’, where subjects must constantly work upon their value-bearing capacities to be (temporarily) relieved of insecurity, risk, and vulnerability.

These hopes, fears, and obligations are soothed and stoked through the virtuous play of brain training. This market operates under the premise that through expertly designed activities – commonly packaged as short games – cognitive capacities may be enhanced in ways that generalize to everyday life. Proponents have sought to ground consumer-friendly brain training in scientific rigour, but efficacy remains hotly contested.

More broadly, brain training constitutes part of the growing ‘brain-industrial complex’, driven in part by ‘soft’ commercialization trends. These commercial claims encourage ‘endless projects of self-optimization in which individuals are responsible for continuously working on their own brains to produce themselves as better parents, workers, and citizens’.

The rhetoric of brain training reflects moral hazards that often accompany commercialization, with ‘inflated claims as to the translational potential of research findings’ resulting in tenuous practical applications. Brain training also reflects how smoothly self-tracking has been incorporated into obligations of healthfulness, leveraging a ‘notion of ethical incompleteness’. Hence, while most consumer-friendly ‘brain training’ products are of low intensity, even here abounds an ethics that ‘divides, imposes burdens, and thrives upon the anxieties and disappointments generated by its own promises’. Coupling self-tracking with gamification thus enables joyous pleasure and ethical measure. Care for oneself ‘is now shot-through with the promise of uninhibited amusement’ so that we can ‘amuse ourselves to life’. This judicious leisure keeps mortality at bay and morality upheld.

Using Lumosity as a peg upon which to hang the concept of virtuous play, we can unpack how popular brain training and related self-tracking practices lean on contemporary aspirations and anxieties. Firstly, Lumosity is designed to be routine yet fun, undertaken through short, aesthetically pleasing video games, usually played on personal computers, tablets, or smartphones. These games purport to target, assess, and – with training – enhance cognitive capacities. Many of these games draw upon classic experimental designs, and Lumosity has sought to further establish credibility through their ‘Lumos Labs’ – where ‘In-house scientists refine and improve the product’ – and their ‘Human Cognition Project’.

Admittedly, it may be tempting to dismiss products like Lumosity as pseudoscience packaged in exaggerative marketing, not worthy of our attention. But such dismissals neglect how we are typically constituted as subjects, for it is

‘… at this vulgar, pragmatic, quotidian and minor level that one can see the languages and techniques being invented that will reshape understandings of the subjects and objects of government, and hence reshape the very presuppositions upon which government rests.’

Given this need to better understand prevailing rationales of neuro-enhancement, observe how Lumosity pitched itself to consumers in 2014:

Several appeals emerge here: equating brain training with other forms of ‘fitness’; the offer of focusing on what is ‘important to you’; scientific rigour; progress measured by comparison against the cohort; and the promise of fun. Finally, there is an earnest petition of potential, for with Lumosity you will ‘discover what your brain can do’.

The brain training industry has thrived within this context of egalitarian self-enterprise, offering aspiring virtuosos ‘the key to self-empowered aging’. Such seductive rationales are highlighted by Sharp Brains, ‘an independent market research firm tracking health and performance applications of neuroscience’. They claim:

‘When we conducted in-depth focus groups and interviews with [lay subject] respondents, the main question many had was not what has perfect science behind it, but what has better science than the other things people are doing – solving crossword puzzle number one million and one, taking ‘brain supplements,’ or doing nothing at all until depression or dementia hits home.’

The implication – conveniently endorsed by Sharp Brains – is that although efficacy remains unproven, this does not absolve individual responsibility. Rather, we must do something to care for our brains, lest we be seen as defeatist and indolent, sullenly waiting for depression or dementia to ‘hit home’. Such sentiments have certainly been fostered by slickly packaged commercial appeals.

In 2012, Lumosity launched a highly successful ‘Why I Play’ campaign, designed to normalize brain training. The campaign was active for several years, reaching a massive global audience through an enticing emphasis on aspiration and emulation. Each ‘Why I Play’ commercial adhered to a shared template: an actor portraying a happy Lumosity user stresses the imperative need to enhance their cognition, while also noting the pleasures of brain training. All the actors are, of course, impossibly attractive, and the perfect embodiment of the late capitalist subject. They serve as avatars of virtuosity, with unending drives for both self-improvement and pleasure.

This simultaneously disciplined, pleasurable, intimate, and yet distant framing of ‘discovering what your brain can do’ creates a peculiar fetish-ethic of brainhood. Advocates proclaim that ‘I am happier with my brain’ or ‘my brain feels great’. The users also praise ‘the science behind the games’, and highlight hopes to maintain cognitive capacities as they age. These commercials lean directly on burdensome expectations placed upon labouring subjects today.

Another variant of the ‘Why I Play’ campaign, upping the ethical stakes, even implies that brain training may be obligatory for those who aspire to be the kindest persons they can be:

Similarly, a mother expresses relief that ‘it’s not just random games, it’s all based on neuroscience’, reassuring her that ‘every brain in the house gets a little better every day’. Training one’s brain – and the brains of dependents – is framed as an admirable practice for those who seek to be a source of joy, comfort, and care for others.

Upon commencing their ‘brain training journey’, members are asked probing questions around when they feel most productive, their sleeping patterns, general mood, exercise habits, age, and more. A competitive spirit is also stoked, with users asked ‘What percentage of Lumosity members do you outrank? … Come back weekly to see how many people you’ve surpassed.’ Such encouragement is then reflected in precise rankings of users across their various cognitive capacities. Lumosity also enables integration of data from Fitbit devices, further entrenching associations between brain fitness and aerobic fitness.

After completing a prescribed number of training sessions, the user receives a ‘Performance Report’. This report includes comparisons with other users according to occupation group, implying which line of work their particular brain may be best suited to. Users can also consult their ‘Brain Profile’, divided into five functions of ‘Attention’, ‘Flexibility’, ‘Speed’, ‘Problem Solving’, and ‘Memory’. These five measures generate the user’s entire ‘Brain Profile’, while the ‘Performance Index’ ensures that ‘users know where they fall with respect to their own performance using a single number’. Nothing else can be accommodated, and everything must be reducible to a single figure. Our wondrous cognitive assembly collapses into a narrow ‘profile’ of functions, percentages, and indices, all framed through buzzwords and mantras of corporate-speak.
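To make the reduction concrete: Lumosity does not publish how its ‘Performance Index’ is computed, but the logic of ‘five functions in, one number out’ can be sketched in a few lines. Everything below, from the percentile scoring to the equal weighting and the field names, is an assumption for illustration, not the company’s actual formula.

```python
# Hypothetical sketch of a five-function 'Brain Profile' collapsing into a
# single 'Performance Index'. The percentile-then-average scheme is an
# assumption for illustration; the real formula is not public.
from statistics import mean

FUNCTIONS = ["Attention", "Flexibility", "Speed", "Problem Solving", "Memory"]

def percentile(score, cohort_scores):
    """Share of the cohort this score outranks, as a percentage."""
    below = sum(1 for s in cohort_scores if s < score)
    return 100 * below / len(cohort_scores)

def brain_profile(user_scores, cohort):
    """Map raw per-function game scores to cohort percentiles."""
    return {f: percentile(user_scores[f], cohort[f]) for f in FUNCTIONS}

def performance_index(profile):
    """Collapse the five percentiles into one figure."""
    return mean(profile.values())

cohort = {f: [80, 90, 100, 110, 120] for f in FUNCTIONS}  # invented scores
me = dict(zip(FUNCTIONS, [95, 105, 85, 115, 100]))        # invented scores
print(round(performance_index(brain_profile(me, cohort)), 1))
```

Whatever the real weighting, the critique stands either way: whatever the games cannot score is simply absent, and five scores become one.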

So, while it remains contentious whether such practices materially ‘train’ a brain, these regimes certainly contribute to entraining and championing a particular kind of subject. Yet the range of qualities measurable is clearly restricted by prevailing capabilities, including how these qualities are themselves refashioned to fit available affordances. Nevertheless, perhaps some comfort is found in giving in to the promise of fun and giving oneself over to expertise. In their capacious allowance for both pleasure and duty, these games serve as tolerable acts of confession. However, this fetish-ethic may, in time, become a burdensome labour, adding supposed precision around ‘brainhood’ that reflects only current idealisations.

The fetish-ethic of cognitive enhancement is particularly evident in the insistence on ‘active ageing’. Brain training products are often directly marketed to persons in the ‘Third Age’ (those who are perhaps retired, but not yet dependent upon others). The commercial exploitation of the Third Age has commonly been tied to strategies that bemoan passive acceptance of ‘natural’ ageing, and instead urge practices designed to lengthen this twilight of life.

Lumosity’s ‘Why I Play’ campaign, for instance, expressly endorses active ageing. One actor states ‘There’s a lot going on in here [pointing to head], and I want to keep it that way’, while another actor speaks directly to Third Age virtuosity.

Here, the extended Third Age is embodied in a handsome and (improbably) young retiree; a privileged silver fox carrying a clearly aspirational message. In this manner Lumosity presents brain training as the rational consumer choice through avatars of success, worthy of emulation. Such rationales are persuasive means of shifting the burden of healthfulness onto the consuming subject. A new actuarialism is emerging, managing population-level risks through the pleasurable consumption of self-care.

However, virtuous play also requires justifying the use of time. For today’s perpetually harried subject, this is achieved by blurring distinctions between labour and leisure. In this way, recreation can be tied to self-perfection, equipping the user against neoliberal demands without sacrificing participation in the experiential economy. This strategy of ‘instrumentalizing pleasure as a means of legitimizing it’ is especially evident in the way another brain training product – Elevate – pitches itself to consumers, with emphasis placed on the judicious use of time. Advertisements feature actors discussing the product’s benefits: time well spent; productive pleasure; and enhanced work focus. Indeed, these Elevate ‘users’ suggest that the right kind of play is actually the most effective and rational means of enhancing productivity:

Elevate’s emphasis on personal productivity is part of a broader ‘corporate mind hack’ trend. Under this regime, the labouring subject is disaggregated into discrete functions pre-determined as valuable, and then incentivized to improve them.

This is sometimes put into practice by leveraging competitive drives in workplace settings, with some arguing that it can prove ‘socially connective with the self and co-workers in just the right lightweight competitive way’. Such ‘biohacking’ is also driven by simmering distrust of more intuitive and holistic assessments of one’s wellbeing. Instead, ‘hard’ data is sought through mediating, non-human authorities. Still, it remains noteworthy that brain training retains a form of embodied volition. Note, for instance, how brain training is typically offered through devices imbued with haptic feedback capabilities, enabling a pleasurable experience through the sensory bleed between mind, body, device, and the virtual world presented within it.

Still, the expectation is that we should circumvent our sensing, intuitive apparatuses, and instead seek data neatly cleaved from its source. These mediated outputs can then provide reassuring, purportedly objective markers of our accumulated human capital. Yet human capital, of course, is determined only by what counts as worth counting in any particular social context. Hence a circular pedagogy emerges, for as Foucault noted, one cannot ‘know’ without transforming that which is observed, and to ‘know’ oneself requires first abiding by what is deemed of value to know.

The result is that these narrowly derived brain ‘profiles’ and ‘indices’ ultimately prescribe far more than they reveal. Likewise, virtuous play is a discursive veil by which productive expectations are heaped upon dutiful biocitizens. This is further compounded by a hasty rush to market. Emerging products looking to cash in on contemporary hopes and anxieties are limited by available affordances, yet still exploit obligations of self-care. This generates constraining ontological frames, hardening at the very moment in which personal neurotechnologies are touching upon extraordinary exploratory potential.

Given these trends, we should aspire to foster discursive spaces where ‘enhancement’ can be reimagined. Or, better yet, perhaps we can sidestep the insistence on ‘enhancement’ altogether, and cease hyper-reflexively categorizing ourselves into endlessly improvable higher cognitive functions. Alternatively, perhaps we may better take advantage of flexible affordances within digital platforms. Could we find ways of turning our hopes, fears, anxieties, and desires for pleasure away from high scores and top rankings for solo virtuosos? Such habits accrue hard metrics that confer worth only on oneself. Instead, can we turn personal neurotechnologies more towards discovering new avenues for our social capacities to soothe fears and anxieties – and, perhaps, even be a source of pleasure – for others?

This is not to advocate for metricizing intimacy through the ‘quantified relationship’. To precisely metricize good conduct – and give authority over these measures to mediators that cannot accommodate the creative ruptures of ‘play’ – is to wilfully foreclose the very same elusive potentials we are striving to attain. Instead, perhaps we can reimagine self-fashioning in ways less tethered to rigid and pre-determined instrumental ends, and instead embrace more experimental modes.

In any case, following their smackdown by the FTC, Lumosity are now far more cautious in their claims:


Matt Wade is a postdoctoral fellow in NTU’s Centre for Liberal Arts and Social Sciences, Singapore. His primary research interests are within the sociology of science, technology, and morality (particularly around obligations of virtuosity and assessing moral worthiness). These interests are pursued in various contexts, including: debates and applications of moral neuropsychology; consumer-friendly neurotechnologies; self-tracking practices; and appeals for aid through personal crisis crowdfunding. Matt also has an interest in cultural sociology, particularly spectacles of prosumption and emotional labour. Previously, this research focused on evangelical megachurches, and now is pursued through a project on contemporary wedding rituals.

Some of Matt’s work can be accessed here and here.


Facebook has had a rough few weeks. Just as the Cambridge Analytica scandal reached fever pitch, revelations about Zuckerberg’s use of self-destructing messages came to the surface. According to TechCrunch, three sources have confirmed that messages from Zuckerberg have been removed from their Facebook inboxes, despite the users’ own messages remaining visible. Facebook responded by explaining that the message-disappearing feature was a security measure put in place after the 2014 Sony hack. The company promptly disabled the feature for Zuckerberg and other executives and promised to integrate the disappearing message feature into the platform interface for all users in the near future.

This quick apology and immediate feature change exemplifies a pattern revealed by Zeynep Tufekci in a NYT opinion piece, in which she describes Facebook’s public relations strategy as a series of public missteps followed by “a few apologies from Mr. Zuckerberg, some earnest-sounding promises to do better, followed by a couple of superficial changes to Facebook that fail to address the underlying structural problems.”

In the case of disappearing messages, Facebook’s response was both fast and shallow. Not only did the company fail to address underlying structural issues, it responded to the wrong issue entirely. Their promise to offer message deletion to all Facebook users treated the problem as one of equity. It presumed that what was wrong with Zuckerberg deleting his own messages from the archive was that others couldn’t do the same. But equity is not what’s at issue. Of course users don’t have the same control over content—or anything else on the Facebook platform—as the CEO. I think most people assume that they are less Facebook-Powerful than Mark Zuckerberg. Rather, what is at issue is a breach of accountability. Or more precisely, the problem with disappearing messages on Facebook is that they violated accountability expectations.

Helen Nissenbaum introduced a widely used framework to describe how and when privacy violations take place. The “contextual integrity” framework rejects universal evaluations of privacy and instead defines privacy by the expectations of a particular context. For example, it isn’t a privacy violation if you give your information to a researcher and they reproduce that information in published reports, but it is a privacy violation if you give your information to a researcher and they sell that information to third parties. The same idea can be applied to accountability.

Contexts and situations carry with them expectations about what will be maintained for the record. These expectations of accountability ostensibly guide behavior and interaction. If people assume that all communications are retrievable, they will comport themselves accordingly. Similarly, they will treat others’ communications as available for retrieval and evaluation. With his disappearing messages, Zuckerberg violated the contextual integrity of accountability.

Disappearing messages are not in and of themselves accountability violations. Snapchat, for instance, integrates ephemeral messaging as a core feature of its design. Recipients of Snapchat messages do not presume that senders can or will be held accountable for their content in the way that users of archival services—like Facebook—would. What’s so unsettling about Zuckerberg deleting his messages isn’t that we users can’t do it too, it’s that he violated the integrity of the context by presenting one set of accountability assumptions and enacting another.
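The context-dependence of the violation can be made concrete with a toy model. This is purely illustrative: the contexts, the “delete_sent_messages” action, and the norm table below are invented for this sketch, not drawn from Nissenbaum’s formalism or any platform’s actual policy.

```python
# Toy model of contextual integrity: the same action is a violation in one
# context and unremarkable in another. All names and norms here are invented.
CONTEXT_NORMS = {
    "snapchat":       {"delete_sent_messages": True},   # ephemerality expected
    "facebook_inbox": {"delete_sent_messages": False},  # retention expected
}

def violates_integrity(context, action):
    """An action breaches contextual integrity when the context's
    expectations disallow it."""
    return not CONTEXT_NORMS[context].get(action, False)

print(violates_integrity("snapchat", "delete_sent_messages"))        # False
print(violates_integrity("facebook_inbox", "delete_sent_messages"))  # True
```

The point of the sketch is simply that the wrongness lives in the mismatch between expectation and action, not in the action itself.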

Offering message deletion to all Facebook users would indeed change the contextual expectations of accountability, but would fail to repair the contextual violation. Instead, a new feature roll-out is another quick pivot that leaves larger intersecting issues of power, design, and regulation unaddressed.


Jenny Davis is on Twitter @Jenny_L_Davis


Humor is central to internet culture. Through imagery, text, snark and stickers, funny content holds strong cultural currency. In a competitive attention economy, LOLs are a hot commodity. But the fact that internet culture values a laugh doesn’t preclude serious forms of digitally mediated communication, nor consideration of consequential subject matter. Rather, the silly and the serious can—and do—imbricate in a single utterance.

The merging of serious and silly becomes abundantly evident in recent big data analyses of political communication on social media. Studies show that parody accounts, memes, gifs and other funny content garner disproportionate attention during political news events. John Hartley refers to this phenomenon as ‘silly citizenship’ while Tim Highfield evokes an ‘irreverent internet’. This silliness and irreverence in digitally mediated politics means that contemporary political discourse employs humor as a participatory norm. What remains unclear, however, is what people are doing with their political humor. Is humor a vehicle for meaningful political communication, or are politics just raw material for funny content? My co-authors and I (Tony Love (@tonyplove) and Gemma Killen (@gemkillen)) addressed this question in a paper published last week in New Media & Society.

The question of what people do with political humor is significant. Researchers and social commentators have expressed concern that humor detracts from substantive conversation and foments cynicism and apathy in the democratic system. At the same time, internet technologies present new platforms that give voice to marginalized groups while humor offers an accessible discursive style. A tension thus emerges in which silliness online may at once strengthen and undermine public participation in politics.

Our paper, titled ‘Seriously Funny: The Political Work of Humor on Social Media’, looks at how humor works, and the work humor does, in digitally mediated political communication. Data for the paper are derived from two key moments during the 2016 U.S. presidential race in which humor and politics intersected: Donald Trump calling Hillary Clinton a ‘nasty woman’ and Clinton referring to Trump supporters as a ‘basket of deplorables.’ We scraped public Twitter data from the 24 hours following each event to create a big(ish) data set. We ended up with over 14,000 tweets. We coded these tweets for humor and political messaging. We then analyzed the humorous-political tweets to discern what people were doing with their political humor. Finally, we separated the two cases—deplorables and nasty woman—to see if we could find partisan differences in humor style.

Methodological interlude: this process of coding was as tedious as (or more tedious than) you would imagine. Existing research has used big data computational methods to show broad patterns. We were interested in the nuances that big data glosses over and/or obscures. Our questions required a small data approach. This was especially true because we were interested in humor, and a key feature of humor is that it often means something different than it says. Humor is deeply layered and culturally specific, relying on intertextual remixes and inside knowledge. This was a do-it-by-hand-the-old-fashioned-way kind of job. Practically, that meant hand-coding 14,000+ tweets, including following links and threads to gain context. It meant re-coding the subset of tweets that we deemed funny and political (~3,300) and then coding them again in search of partisan patterns. I tweeted this commentary on the process during the revision stage (while attempting to get through even more re-coding). All of this is to say that big data methods represent a massive advancement in social research, but sometimes research questions require sleeves-up qualitative deep-dives.
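For readers curious about the mechanics, the sketch below shows the kind of tally that such hand-coding feeds into. It is purely illustrative: the CSV layout and column names are assumptions, and the actual coding judgments were made by humans reading each tweet, not by a script.

```python
# Illustrative tally over hand-coded tweets. Assumes a CSV with hypothetical
# columns: tweet_id, humor (0/1), political (0/1). The codes themselves were
# assigned manually; this only counts them.
import csv
from collections import Counter

def tally(path):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts["total"] += 1
            if row["humor"] == "1":
                counts["humorous"] += 1
                if row["political"] == "1":
                    counts["humorous_political"] += 1
    return counts

counts = tally("coded_tweets.csv")  # hypothetical file of hand-coded tweets
share = counts["humorous_political"] / counts["humorous"]
print(f"{share:.0%} of humorous tweets carry a political message")
```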

Our first pass of the data showed two main things. First, we found that, as expected, humor loomed large, featuring in about 5,000 tweets. Most of the other tweets were simply informational (e.g., “Clinton called Trump supporters a ‘basket of deplorables’”) and/or links to articles and videos of the events, with the occasional angry, humorless rant. Second, we found that nearly 70% of humorous tweets carried some political agenda. That is, we found that the vast majority of funny content acted as a vehicle for serious political talk. This second finding answered one of our main research questions (is humor a tool for political speech or are politics fodder for apolitical jokes?). This finding, that humor does serious political work, eases concerns about humor as a force of apathy and cynicism and indicates that those who trade in humor can—and do—engage actively in the public political sphere.

Our next step entailed delineating more specifically what Twitter users do with their humor. We categorized political humor into three thematics: discrediting the opposition, political identification, and civic support. We analyzed these as a whole and also looked at how the data distributed along partisan lines. We tied each thematic to a ‘humor function’ using John C. Meyer’s origins and effects framework. Meyer posits that humor takes three forms: relief—cutting through a heavy moment with levity; incongruity—making the familiar strange; and superiority—triumph through pointed deprecation of an ‘other.’ These humor origins serve two broad effects: unity and division. Meyer clarifies that humor always has multiple origins and serves multiple ends, but with different emphases.

We saw relief and incongruity throughout the tweets but were able to parse variations in superiority as an origin, and unity/division as an effect. Specifically, tweets that discredit the opposition were primarily divisive and heavily reliant on superiority; political identification was primarily unifying while pushing back on denigration; and civic support had elements of superiority with relatively equal parts unification and division as mobilization was both a collective action and an act of aggression. These humor schematics not only connected our findings to cultural studies of humor, but also allowed us to make sense of partisan humor style.

Examining humor style across partisan lines is meaningful theoretically, as humor studies have traditionally shown firm symbolic boundaries between ideological groups. At the same time, internet studies have celebrated a ‘convergence culture’ and general breakdown of symbolic boundaries as shared language, cadence and syntax take hold across contexts. Divergent humor style across the two data sets would lend credence to traditional humor studies, while shared humor style would indicate that social media have had profound boundary breaching effects on practices and preferences of humor.

Our first category, discrediting the opposition, was the most heavily populated. Here, tweets mocked the opposing candidates and their supporters, hotly contesting fitness for office and general value as human beings. For instance, anti-Trump tweets referenced his misogyny and (dull) intellect while anti-Clinton tweets referenced elitism and corruption. For example:

“Such a Nasty Woman,GRAB THEM BY P*SSY, Nobody has more respect for women than me”—donald trump

Mrs Deplorable will have to take a few days off from parties in Hollywood, she’s in the bed, deplorabley tired. #LockHerUp #TrumpTrain

About 2/3 of all tweets had elements of opposition, and these were distributed equally along partisan lines.

Our second theme was political identification. This refers to establishing the self as a political subject through reclaiming negative labels, connecting political preferences to other positive statuses, and locating the self within a political bloc. For example:

I was going to be a nasty woman for Halloween, but I am already sexy, smart and generous

Folks I’m not a Major. ’Major Lee D Plorable’ read fast is Majorly Deplorable. I was only corporal in USMC #BasketOfDeplorables lol

About 1/3 of all tweets had elements of political identification. Again, these were distributed about equally along partisan lines.

Note: analyses of our first two categories show no partisan differences in humor style, indicating a clear divergence from the strong cultural boundaries that humor studies would lead us to expect. But then, we come to civic support.

Our final category, civic support, is in many ways the most interesting. Civic support refers to active participation in the political process through mobilization, fundraising and voting. For example:

This nasty woman is taking my pussy to a voting booth to vote for @HillaryClinton Too bad we both can’t vote. #ImWithHer #NastyWomen

How’s Go “F” yourself, from a deplorable Independent who just changed her vote from Her to Him

Although it is our least populated category (only present in about 20% of all humorous political tweets), it is the only category that varies along partisan lines. While about a quarter of ‘nasty woman’/pro-Clinton tweets contain elements of civic support, this thematic is present in less than 10% of ‘deplorables’/pro-Trump tweets. This pattern is critical as the only example of partisan difference in humor style, showing that humor’s traditionally strong boundaries may partially resist the convergent pull of internet culture. The pattern also presents something of a puzzle: despite the relatively high prevalence of civic action among Clinton supporters on Twitter, the election ultimately went to Trump. This raises interesting questions about the predictive value of social media for actual voting behaviors.

In sum, our study shows four main things: 1) humor plays a big part in digitally mediated political communication; 2) humor is a vehicle for serious political commentary and participation; 3) humor is used largely for denigration and divisiveness, but there are substantial trends of political subjectification, civic participation, and collective action; and 4) political humor partially transcends partisan lines while leaving some boundaries intact. These findings ease concerns about the possible cynicism fomented through humor online while raising key questions about the relationship between social media practices and voting behavior. The findings also speak to humor studies—which show firm symbolic boundaries—and internet studies—which show boundaries broken down. The partial but incomplete breakdown of ideological boundaries in our analysis of humor style indicates that the meeting of humor and social media leaves neither unchanged.


Full text found here: Seriously Funny: The Political Work of Humor on Social Media

Jenny Davis is on Twitter (@Jenny_L_Davis), where she sometimes tries to be funny with varying degrees of success

If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.

The internet—along with the complementary technologies we’ve developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form “rich mental connections,” and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr’s sentiment that the way we think has changed, I don’t agree that the fault falls on the internet. I believe we expect too much of Google and too little of ourselves; therefore, the fault (if there is fault) is largely our own.

Here’s why: Carr’s argument hinges on the idea that technology definitively determines our society’s structural and cultural values—a theory known as technological determinism. However, he fails to account for affordance theory. Affordances refer to the way in which the features of a technology interact with agentic users and diverse circumstances. While the technical and material elements of technology do have shaping effects, these effects are far from deterministic. Affordance theory suggests that the technologies we use, and the internet infrastructures from which they draw, contain multipotentiality: they afford the potential to indulge in curiosity and develop robust knowledge while simultaneously affording the potential to relinquish curiosity and develop complacency through the comforts of convenience and self-confirmation.

Considering the initial sentiment of Carr’s argument (the way we think has changed) together with affordance theory, we can derive two critical questions: have we embraced complacency and become too comfortable with the internet’s knowledge production capabilities? If so, by choosing to rest on our laurels and exploit this affordance, what happens to epistemic curiosity?

There’s a lot to unpack, but in order to address these questions, we need to examine the potential socio-technical circumstances that could lead us down a path of declining epistemic curiosity, starting with the twin ideas of convenience and complacency.

Complacency is characterized by the feeling of being satisfied with how things are and not wanting to try to make them better. Clearly, in terms of making life more efficient, we are nowhere near complacent, as we constantly strive to streamline our lives through innovation—from fire to the invention of (arguably) our greatest creation to date and the basis for our modernity: information and communication technology. This technology affords us the ability to live more convenient, effortless lives by providing access to the world’s knowledge with the tap of a finger and the ability to do more in a few moments than previous generations could do in hours.

For instance, education has become much more convenient. Thanks to the internet, you can take advantage of distance learning programs and earn a degree on your own terms, without physically attending class. The workforce has also become more flexible, as technology allows us to maximize time and stay on top of our work through complete mobility, and in some cases, complete task automation. Economically, the internet allows us to sell and consume goods and services without the physical limitations of brick and mortar. It also allows us to communicate with friends, family, and strangers over long distances, document our lives, access current events with ease, and answer a question within moments of it popping into our heads.

These conveniences must make life better, right?

Think of these conveniences like your bed on a cold morning: warm and comfortable, convincing you to hit snooze and stay a while longer. This warmth and comfort can be a source of sustenance and strength; however, if we stay too long, comfort can get the best of us. We might become lazy, hesitating to diverge from the path of least resistance.

Just as it is inadvisable to regularly snooze until noon, it is concerning when information and knowledge are accessed too easily, too quickly. With the increased accessibility and speed of information, it’s easy to become desensitized to curiosity—the very intuition that is responsible for our technological progress—in the same way that you are desensitized to your breathing pattern or heartbeat. By following the path of least resistance, we can create a dynamic in which we perceive the internet as a mere convenience instead of a tool to stimulate our thoughts about the world around us. This convenience dynamic allows us to settle into a state of complacency in which we are certain that everything we think and believe can be justified through a quick Google search—because, in fact, it can be. That feeling of certainty and comfort that stems from this technical ability to self-confirm is what I call informed complacency.

The idea of informed complacency is especially fraught because it signifies a turning point in our perception of contemporary knowledge. Ultimately, it can encourage us to develop an underlying sense of omniscient modernity, which Adam Kirsch discusses in his article for The New Yorker, “Are We Really So Modern?”:

“Modernity cannot be identified with any particular technological or social breakthrough. Rather, it is a subjective condition, a feeling or an intuition that we are in some profound sense different from the people who lived before us. Modern life, which we tend to think of as an accelerating series of gains in knowledge, wealth, and power over nature, is predicated on a loss: the loss of contact with the past.”

In the past, nothing was certain. The information our ancestors had on the world and universe was constantly being overturned and molded into something else entirely. Renowned thinkers from across the ages built and destroyed theories like they were children with LEGO bricks—especially during the Golden Age of Athens (fifth and fourth centuries B.C.) and the Enlightenment (seventeenth and eighteenth centuries A.D.). Each time they thought they had it figured out, the world as they knew it came crashing down with a new discovery:

“The discovery of America destroyed established geography, the Reformation destroyed the established Church, and astronomy destroyed the established cosmos. Everything that educated people believed about reality turned out to be an error or, worse, a lie. It’s impossible to imagine what, if anything, could produce a comparable effect on us today.”

Today, we still face uncertainty, albeit a different kind. With the glut of empirical evidence on the internet, multiple versions of objective reality flourish even as they conflict. These multiple truths create a dynamic information environment that makes it difficult to differentiate between fact, theory, and fiction, increasing the likelihood that whatever one thinks is true can easily be confirmed as such. With this sentiment in mind, by following the path of least resistance and developing a sense of informed complacency, we risk developing a sense of omniscient modernity and overestimating our ability to know, because we are certain that we know—or can know—everything, past, present, and future, with the click of a button or the tap of a finger.

Though a dynamic information environment has clear benefits for epistemic curiosity—better science, more informed debates, an engaged citizenry—the tilt of the affordance scale towards complacency always remains a lingering possibility. If we begin to lean in this direction, I contend that informed complacency is likely to take hold and lead us to ignorance and insularity amid a saturated information environment. This can create cognitive traps that, in the worst instance, diminish epistemic curiosity.

One of these traps is called the immediate gratification bias, which Tim Urban of Wait But Why has playfully dubbed the “Instant Gratification Monkey”. He describes this predisposition as “thinking only about the present, ignoring lessons from the past and disregarding the future altogether, and concerning yourself entirely with maximizing the ease and pleasure of the current moment.” As a result of this predisposition, there is an increasing demand for instant services like Uber, Amazon Prime, Netflix, and Tinder, which testifies to how the notions of ease and immediacy have infiltrated our thought processes, compelling us to apply them to every other aspect of our lives. The increase in the speed at which we consume information has molded us to rely on and expect instant results for everything. Consequently, we are likely to base our information decisions on this principle and choose not to dig past the surface.

Another trap is found in gluttonous information habits—devouring as much of it as we can, as quickly as possible, solely for the sake of hoarding what we consider to be knowledge. In all our modernity, it seems that we misguidedly assume that consuming information at a faster pace is beneficial to the development of knowledge, when in fact, too much information (information overload) can have overwhelming, negative effects, such as the inability to make the “rich mental connections” Carr describes in his article. This trap is amplified by pressures to stay “in the know” as well as the market of apps and services that capitalize on a pervasive fear of missing out, transforming the pursuit of knowledge from an act of personal curiosity to a social requirement.

The complex algorithms deployed by search engine and social media conglomerates to manage our vast aggregates of information curate content in ways users are likely to experience not only as useful, but as pleasurable. These algorithmic curations are purposefully designed to keep information platforms sticky: to keep users engaged and, ultimately, to sell data and attention. These are the conditions under which another cognitive trap arises: the filter bubble. By analyzing each individual user’s interests, the algorithms place them in a filtered environment in which only agreeable information makes its way to the top of their screens. We are thus constantly able to confirm our own personal ideologies, dismissing any news that disagrees with our established viewpoints as “fake news.” In this context, it’s easy to believe everything we read on the internet, even if it’s not true. This makes it difficult to accurately assess the truthfulness and credibility of news sources online, as truth value seems to be measured by virality rather than veracity.
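To see how little machinery a filter bubble needs, consider a deliberately naive ranking sketch. Real recommender systems are vastly more sophisticated; the affinity scoring, field names, and example topics below are invented for illustration only.

```python
# Deliberately naive interest-based ranking, to make the filter bubble
# mechanism concrete. All field names and topics here are invented.
def rank_feed(items, user_interests):
    """Order items so those matching the user's recorded interests rise."""
    def affinity(item):
        return sum(user_interests.get(topic, 0.0) for topic in item["topics"])
    return sorted(items, key=affinity, reverse=True)

items = [
    {"id": 1, "topics": ["politics_left"]},
    {"id": 2, "topics": ["politics_right"]},
    {"id": 3, "topics": ["cats"]},
]
# A user who engages mostly with one viewpoint sees more of it, engagement
# feeds back into the interest profile, and the loop closes.
print(rank_feed(items, {"politics_left": 0.9, "cats": 0.4}))
```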

Ultimately, with his argument grounded in technological determinism, Carr overlooks the perspective that technology cannot define its own purpose. As its creators and users, we negotiate how technology integrates into our lives. The affordances of digital knowledge repositories create the capacity for unprecedented curiosity and the advancement of human thought. However, they also enable us to be complacent, misinformed, and superficially satisfied; that is to say, an abundance of easily accessed information does not always mean persistent curiosity and improved knowledge. To preserve epistemic curiosity and avoid informed complacency, we should keep reminding ourselves of this and practice conscious information consumption habits. This means recognizing how algorithms filter content; seeking diverse perspectives and content sources; questioning, critiquing, and evaluating news and information; and, perhaps most importantly, always doing your best to venture past the first page of Google search results. Who knows, you might find something that challenges everything you believe.

Clayton d’Arnault is the Editor of The Disconnect, a new digital magazine that forces you to disconnect from the internet. He is also the Founding Editor of Digital Culturist. Find him on Twitter @cjdarnault.


Headline pic via: Source

Augmented reality makes odd bedfellows out of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.

The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rash of movement to weave them back together. This pattern of reality maintenance, potential breakdown and repair is central to the operation of self and society and it comes into clear view through public responses to digital augmentation.

A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question of “what’s going on here?”. It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that lie apprehensions about the social effects of AR.

When Pokémon Go hit the scene, digital augmentation was thrown into the spotlight. While several observers talked about infusions of fun and community into otherwise atomized locales, another brand of commentary arose in tandem. This second commentary, decidedly darker, portended the breakdown of shared meaning and a loosening grip on reality. As Nathan Jurgenson said at the time, “Augmented reality violates long-held collective assumptions about the nature of reality around us”. But, Jurgenson points out, reality has always been augmented and imperfectly shared. The whole purpose of symbol systems is that they represent what can’t be precisely captured. Symbols are imperfect proxies for experience and are thus necessary for communication and social organization.

What AR does is explicate experiential idiosyncrasies and clarify that the world is not, and needn’t be, what it seems. This explication disrupts smooth flows of interaction like a glitch in the social program. It reveals that reality is collaboratively made rather than a priori. It’s easy to think that augmented reality will be the end of Truth, but such a concern presumes that there was a singular Truth to begin with.

While Pokémon Go faded quickly into banality, underlying anxieties remained salient. Such anxieties about the imminent fall of shared meaning have resurfaced in response to Snapchat’s rollout of a new custom lens feature. “Lenses,” available for about $10 each, build on the company’s playful aesthetic and use of AR as an integral feature of digital image construction and sharing. The idea is that users can create unique lenses for events and special occasions as a fun way to add an element of personal flair. The use of augmented reality is not new for the company, nor is personalization, but this feature is the first to make AR customizable.

Customizable AR takes on a relatively benign form in its manifestation through Snap Lenses. The idea of customizable augmentation, however, creates space for critical consideration about what it means to filter, frame, and rearrange reality with digital social tools.

The capacity to alter an image and then save that image as a documented memory potentially distorts what is and what was and replaces it with what we wish it had been. The wish, the desire, made tangible through augmented alteration, ostensibly changes the facts until facts become irrelevant, truth becomes fuzzy, and representations are severed from their referents.

Anxieties about losing a firm grip on the world are thus amplified through customizability, as the distorting augmented lens adheres not even to a shared template, but is subject to the whims and fancies of each unique person. This is essentially the argument put forth by Curtis Silver at Forbes in his article “Snapchat Now Lets Users Further Disassociate From Reality With Custom Lens Creation”.

Silver contends that customizable Snap Lenses will be the straw that breaks the camel’s back as users escape objectivity and get lost in the swirls of personalized augmentation. “Lenses is a feature in Snapchat that allows users to create a view of a reality that simply does not exist,” he writes. “Now those users can create lenses to fit their actual reality, further blurring the already fragile and thin lines separating perception from real life.” He worries that customizable augmentation not only blurs reality but also indulges idealized images of the self that are inherently unattainable. With personalized augmentation, warns Silver, “[w]e begin to actually believe we are that glowing, perfect creature revealed through Snapchat lenses”.

Silver is not alone. His piece joins a flurry of commentators worrying over reality disintegration caused by “mixed reality” tools—a trend that began long before Snap made AR customizable.

Arguments about the loss of reality via augmentation—though tapping into critical contemporary questions—miss two crucial points. First, social scientists and philosophers have long rejected the idea of a single shared reality. Second, even if there were one shared reality, it’s far from clear that augmentation would muddy it.

As Jurgenson pointed out in his analysis of Pokémon Go, social thinkers have long understood reality as collaboratively constructed. The social world is a process of becoming rather than a stable fact. George Herbert Mead famously said people have multiple selves, a self for every role that they play, while W.I. Thomas declared that reality is that which is real in its consequences. We can even think of widespread truisms like “beauty is in the eye of the beholder” and it becomes clear that selves and realities are neither singular nor revealed but multiple and constructed. The very idea of distorting reality is built on a faulty premise—that reality is concrete and clear cut.

From the starting point of reality as process rather than fact, augmentation doesn’t so much distort the truth as underline and entrench a shared standard.

Augmentation is defined by its relation to a referential object. In the course of daily life, that referential object—agreed upon reality—persists largely unnoticed. Society and interaction work because their construction is ambient and shared meanings can go unsaid. Imposing augmentation makes a referential object obvious. What was unnoticed is now perceived. It’s not simply there, it’s marked as there first. By imagining and externalizing what could be, augmentation gives new meaning to what is.

Augmentation imbues referential objects with newfound authenticity, rendering them raw through juxtaposition. Just as the #nofilter selfie becomes a relevant category only in the face of myriad filters, the pre-augmented world only emerges as “natural” in comparison to that which is digitally adorned.

Snap’s customizable lens feature enables a playful relationship to the world. That playfulness doesn’t loosen our collective grip on reality but produces a reality that is retrospectively concretized as real. The fear of Lenses as a distorting force not only (incorrectly) assumes a singular true reality, it misses the flip side—Lenses reinforce the idea of shared reality by superimposing something “augmented” on top. Playing with imagery (through lenses, filters, etc.) casts the original image as pure and unfiltered. The augmented image gives new character to its original as an organic capture and the idea of shared meaning reasserts itself with fervor renewed.


Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: source

In last week’s much-anticipated conversation between Barack Obama and Prince Harry, the pair turned to the topic of social media. Here’s what Obama said:

“Social media is a really powerful tool for people of common interests to convene and get to know each other and connect. But then it’s important for them to get offline, meet in a pub, meet at a place of worship, meet in a neighbourhood and get to know each other.”

The former president’s statements about social media are agreeable and measured. They don’t evoke moral panic, but they do offer a clear warning about the rise of new technologies and potential fall of social relations.

These sentiments feel comfortable and familiar. Indeed, the sober cautioning that digital media ought not replace face-to-face interaction has emerged as a widespread truism, and for valid reasons. Shared corporality holds distinct qualities that make it valuable and indispensable for human social connection. With the ubiquity of digital devices and mediated social platforms, it is wise to think about how these new forms of community and communication affect social relations, including their impact on local venues where people have traditionally gathered. It is also reasonable to believe that social media pose a degree of threat to community social life, one that individuals in society should actively ward off.

However, just because something is reasonable to believe doesn’t mean it’s true. The relationship between social media and social relations is not a foregone conclusion but an empirical question: do social media make people less social? Luckily, scholars have spent a good deal of time collecting cross-disciplinary evidence from which to draw conclusions. Let’s look at the research:

In a germinal work from 2007, communication scholars Nicole Ellison and colleagues establish a clear link between Facebook use and college students’ social capital. Using survey data, the authors show that Facebook usage positively relates to forming new connections, deepening existing connections, and maintaining connections with dispersed networks (bridging, bonding, and maintaining social capital, respectively). Ellison and her team replicated these findings in 2011 and again in 2014. Burke, Marlow and Lento offered further support for a link between social media and social capital based on a combination of Facebook server logs and participant self-reports, demonstrating that direct interactions through social media help bridge social ties.

Out of sociology, network analyses show that social media use is associated with expanding social networks and increased social opportunities. Responding directly to Robert Putnam’s harrowing Bowling Alone thesis, Keith Hampton, Chul-Joo Lee and Eun Ja Her report on a range of information communication technologies (ICTs) including mobile phones, blogs, social network sites and photo sharing platforms. They find that these ICTs directly and indirectly increase network diversity, and do so by encouraging participation in “traditional” settings such as neighbourhood groups, voluntary organizations, religious institutions and public social venues—i.e., the pubs and places of worship Obama touted above. Among older adults, a 2017 study by Anabel Quan-Haase, Guang Ying Mo and Barry Wellman shows that seniors use ICTs to obtain social support and foster various forms of companionship, including arranging in-person visits, thus mitigating the social isolation that too often accompanies old age.

From psychology, researchers repeatedly show a relationship between “personality” and social media usage. For example, separate studies by Teresa Correa et al. and Samuel Gosling and colleagues show that those who are more social offline and define themselves as “extraverts” are also more active on social media. Summarizing this trend, Gosling et al. conclude that “[social media] users appear to extend their offline personalities into the domains of online social networks”. That is, people who are outgoing and have lots of friends continue to be outgoing and have lots of friends. They don’t replace one form of interaction with another, but continue interaction patterns across and between the digital and physical. This also means that people who are generally less social remain less social online. This is not an effect of the medium, however, but of their existing style of social interaction.

In short, the research shows that social media help build and maintain social relationships, supplement and support face-to-face interaction, and reflect existing socializing styles rather than eroding social skills. That is, ICTs supplement (and at times, enhance) interaction rather than displace it. These supplements and enhancements move between online and offline, as users reinforce relationships in between face-to-face engagements, coordinate plans to meet up, and connect online amidst physical and geographic barriers.

Of course, the picture isn’t entirely rosy. Social media open the doors to new levels and types of bullying, misinformation runs rampant, and the affordances of major platforms like Facebook may well make people feel bad about themselves. But, from the research, it doesn’t seem like social media are making anybody stay home.

Perhaps it is time to retire the sage warning that too many glowing screens will lead to empty bar stools and café counters. The common advice that social media are best used in moderation, and only so long as users keep engaging face-to-face, isn’t negated by the research but shown to be irrelevant—people are using social media to facilitate, augment, and supplement face-to-face interaction. There’s enough to worry over in this world; thanks to the research, we can take mass social isolation off the list.

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source

Let me begin with a prescriptive statement: major social media companies ought to consult with trained social researchers to design interfaces, implement policies, and understand the implications of their products. I embark unhesitatingly into prescription because major social media companies have extended beyond apps and platforms, taking on the status of infrastructures and institutions. Pervasive in personal and public life, social media are not just things people use, places they go to, or activities they do. Social media shape the flows of social life, structure civic engagement, and integrate with affect, identity and selfhood.

Public understanding of social media as infrastructural likely underpins mass concern about what social media are doing to society, and what individuals in society are doing with social media. Out of this concern has emerged a vibrant field of commentary on the relationship between social media use and psychological well-being. Spanning academic literature, op-ed pages, and dinner table conversation, the question has seemingly remained on the collective mind: do social media make people feel bad? Last week, Facebook addressed the issue directly.

In a blog post titled Hard Questions: Is Spending Time on Social Media Bad for Us?, Facebook researchers David Ginsberg and Moira Burke review the literature on Facebook use and psychological well-being. Their review yields a wholly unsurprising response to the titular query: sometimes, it depends. Facebook doesn’t make people feel good or bad, they say, but it depends on how people use the technology. Specifically, those who post and engage “actively” feel better, while those who “passively” consume feel worse[1].

I was delighted with the Facebook blog post up until this point. The company engaged social researchers and peer-reviewed content to address a pressing question derived from public concern. But then, out came “it’s how you use it”.

“It’s how you use it” is wholly unsatisfying, philosophically misguided, and a total corporate cop-out that places disproportionate responsibility on individual users while ignoring the politics and power of design. It’s also a strangely projective conclusion to what began as a reflexive internal examination of technological effects.

If the trendy onslaught of new materialism has taught us anything, it’s that things are not just objects of use, but have meaningful shaping capacities. That objects are efficacious isn’t a new idea, nor is it niche. Within media studies, we can look to Marshall McLuhan who, 50-plus years ago, established quite succinctly that the medium is the message. From STS, we can look to Actor Network Theory (ANT), through which Bruno Latour clarified that while guns don’t kill people on their own, the technology of the gun is integral to violence. We can look to Cyborgology co-editor David Banks’ recent article, addressing the need to articulate design politics as part of engineering education. And I would also direct readers to my own work, in which I keep blathering about “technological affordances.” I’ll come back to affordances in a bit.

Certainly, we theorists of design recognize users and usage as part of the equation. Technology does not determine social or individual outcomes. But, design matters, and when social ills emerge on infrastructural platforms, the onus falls on those platform facilitators to find out what’s the matter with their design.

To be fair, Ginsberg and Burke seem to know this implicitly. In fact, they have an entire section (So what are we doing about it?) in which they talk about current and forthcoming adjustments to the interface. This section is dedicated to prosocial design initiatives including recrafted news feed algorithms, “snooze” options that let users take a break from particular people and content, visibility options following relationship status change, and a suicide prevention tool that triages self-harm through social networks, professional organizations and AI that recognizes users who may be in trouble.

In short, the researchers ask how Facebook relates to psychological well-being, conclude that psychological outcomes are predicated on user behavior, and describe plans to design features that promote a happier user-base. What they don’t do, however, is make a clear connection between platform design and user behavior—a connection that, given the cited research, seems crucial to building a prosocial interface that provides users with an emotional boost. That is, the Facebook blog post doesn’t interrogate how existing and integral design features may afford social versus antisocial usage and for whom. If posting and interacting on Facebook feels good and consuming content feels bad, how do current design features affect the production-consumption balance, for which users, and under what circumstances? And relatedly, what is it about consumption of Facebook content that elicits The Sads? Might the platform be reworked such that consumption is more joyful than depleting?

A clear framework of technological affordances becomes useful here. Affordances refer to how technologies push and pull in varying directions with more or less force. Specifically, technologies can request, demand, encourage, discourage, refuse, and allow. How an object affords will vary across users and situations. Beginning with this conceptual schema—what I call the mechanisms and conditions framework—Facebook’s existing affordances emerge more clearly and design and policy solutions can be developed systematically.
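For readers who like their concepts concrete, the six mechanisms can be written down as a simple data structure. The sketch below is my own illustration of the vocabulary in Python, not part of the framework’s formal apparatus:

```python
from enum import Enum, auto

class Mechanism(Enum):
    """The six mechanisms of affordance: how strongly a technology
    pushes toward, or pulls away from, a given action."""
    REQUEST = auto()     # asks for an action but is easily declined
    DEMAND = auto()      # withholds function until the action is taken
    ENCOURAGE = auto()   # makes an action easy, visible, and rewarded
    DISCOURAGE = auto()  # makes an action effortful or costly
    REFUSE = auto()      # makes an action unavailable altogether
    ALLOW = auto()       # permits an action without pressing either way
```

Which mechanism a given feature enacts is not fixed; it varies with who the user is and the situation they are in, which is the “conditions” half of the framework.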

For instance, Facebook’s design features work in several ways to reinforce status quo ideas and popular people while maintaining an ancillary status for those on the margins. Given findings about the psychological effects of production versus consumption, these features then have behavioral consequences and in turn, emotional ones. I’ve picked two examples for illustration, but the list certainly extends.

First, Facebook converges multiple networks into a shared space with uncertain and oft-changing privacy settings. This combination of context collapse and default publicness makes sharing undesirable and even untenable for those whose identities, ideas, or relationships put them at risk. For LGBTQ persons, ex-criminals, political radicals, critical race-gender activists, refugees, undocumented persons and the like, Facebook affordances in their current configuration may be profoundly hazardous. When a platform is designed in a way that makes it dangerous for some users to produce, then it also makes it difficult for those users to obtain psycho-social benefits and more likely that they encounter psychological harm.

Second, news feed algorithms use a “rich get richer” strategy in which popular content increases in visibility. That is, users are encouraged to engage with content that has already accrued attention, and discouraged from engaging with content which has gained little steam. Facebook’s metric-driven algorithmic system not only promotes content with mass appeal, but also snowballs attention towards those who already take up significant symbolic space in the network. So, while everyone is allowed to post on Facebook, rewards distribute in a way that encourages the popular kids and keeps the shy ones quiet. By encouraging production from some users and consumption from others, Facebook’s very design allocates not just attention, but also emotion.
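To see how a metric-driven loop snowballs, consider a toy simulation. This is my stylized illustration of a “rich get richer” dynamic, not Facebook’s actual algorithm: each new viewer engages with a post in proportion to the engagement it has already accrued.

```python
import random

def simulate_feed(rounds: int = 10_000) -> list:
    """Toy 'rich get richer' feed: each new viewer engages with one of
    two posts with probability proportional to its current engagement."""
    engagement = [1, 1]  # both posts start on equal footing
    for _ in range(rounds):
        share = engagement[0] / sum(engagement)
        # More-engaged content is more visible, so it harvests still
        # more engagement; early random luck gets locked in.
        pick = 0 if random.random() < share else 1
        engagement[pick] += 1
    return engagement

print(simulate_feed())  # the final split is rarely anywhere near even
```

The specific math matters less than the feedback structure: visibility generates engagement, which generates more visibility, so attention concentrates on those who already have it.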

Of course, content consumption is not an essentially depressing practice. But on Facebook, it is. It’s worth examining why. Facebook is designed in a way that makes negative social comparison—and related negative self-feelings—a likely outcome of scrolling through a news feed. In particular, Facebook’s aggressive promotion of happy expression through silly emoji, the exclusion of a “dislike” button, the ready availability of gifs, and algorithms that grant visibility preference to images and exclamation points work together to encourage users to share the best of themselves while discouraging banal or unflattering content. By design, Facebook created the highlight reel phenomenon, and onlookers suffer for it. Might Facebook consumption feel different if there were more of those nothing-special dinner pics that everyone loves to complain about?

In response to Facebook’s blog post, a series of commentators accused the company of blaming users. I don’t think it was anything so nefarious. I just think Facebook didn’t have the conceptual tools to accomplish what they meant to accomplish—a critical internal examination and effective pathway towards correction. Hey Facebook, try affordance theory <3.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline Pic Via: Source

[1] For the sake of space, I’m tabling the issue of “active” and “passive.” However, any media studies scholar will tell you that media consumption is not passive, making the active-passive distinction in the blog post problematic.

Every now and again, as I stroll along through the rhythms of teaching and writing, my students stop and remind me of all the assumptions I quietly carry around. I find these moments helpful, if jarring. They usually entail me stuttering and looking confused and then rambling through some response that I was unprepared to give. Next there’s the rumination period during which I think about what I should have said, cringe at what I did (and did not) say, and engage in mildly self-deprecating wonder at my seeming complacency. I’m never upset when my positions are challenged (in fact, I quite like it) but I am usually disappointed and surprised that I somehow presumed my positions didn’t require justification.

Earlier this week, during my Public Sociology course, some very bright students took a critical stance against politics in the discipline. As a bit of background, much of the content I assign maintains a clear political angle and a distinct left-leaning bias. I also talk a lot about writing and editing for Cyborgology, and have on several occasions made note of our explicit orientation towards social justice. The students wanted to know why sociology and sociologists leaned so far left, and questioned the appropriateness of incorporating politics into scholarly work—public or professional.

I think these questions deserve clear answers. The value of integrating politics with scholarship is not self-evident and it is unfair (and a little lazy) to go about political engagement as though it’s a fact of scholarly life rather than a position or a choice. We academics owe these answers to our students and we public scholars would do well to articulate these answers to the publics with whom we hope to engage.

In an exercise that’s simultaneously for me, my students, and those who read this blog, I’ll talk through the questions of political leanings and their place in academic engagement, respectively.

Let’s begin with the liberal bias. First of all, I want to temper claims of radicalism in the academy. Survey data on academics’ political views show that overall, about 45% of professors maintain progressive ideals, compared with 45% who identify as moderate and 9% as conservative. Conservatives are admittedly underrepresented within the academy, but less than half of all academic faculty identify with the left and of those, only a tiny fraction (about 8%) hold radical leftist views. Still, political leanings vary by discipline, with social scientists in general and sociologists in particular maintaining higher-than-average left-leaning propensities compared with academics in other fields. So, sociologists are collectively progressive. Why?

One guess is that sociology has an inherent appeal to the progressive sensibility and so attracts people with a leftist political bent. However, this explanation falls short when we look to the origins of the field, which are largely conservative and date back to attempts by key figures at finding stability amidst the industrial revolution while equating society to the organic body. Another guess—and I think a partially reasonable one—is that progressive politics are informally rewarded while conservative politics may face censure within Sociology departments. However, I have met very few truly conservative trained sociologists (inside or outside of the academy), so the negative effects of conservative dissent likely play only a small role in the general tenor of the discipline.

I believe that a major reason sociologists lean left politically is because we are bombarded by inequality professionally. Our job is to scrutinize social life and in doing so, systemic oppressions become glaring. Sociologists are trained to enact the sociological imagination, a praxis introduced by C. Wright Mills by which personal troubles are understood in relation to public issues. The course of our study reveals clear patterns in which intersections of race, class, geography, and gender predict life chances with sickening precision. We teach about egregious disparities in health care, life expectancy, educational attainment, mental wellbeing, and incarceration rates. Through research and reading, we become intimately familiar with the voices of those on the wrong side of these rates—the individual persons whose troubles represent public issues. In my own collaborative research, I’ve dealt with issues of race and disability stigma, social responses to intimate partner violence, and the costs of being a woman during task-based social interaction. To know these patterns, connect them to real people’s lives, and understand how policy and culture perpetuate inequitable systems, tends to foster a progressive sensibility.

But even if this sensibility is both understandable and tightly rooted in empirical realities, is it appropriate as part of professional practice? For me, it is. I strongly support the inclusion of politics in pedagogy, public engagement, and scholarly production. The idea that scholars are only scholars—impartial vessels of knowledge—is disingenuous. Scholars are people, and as people, we have politics. Pretending those politics aren’t there obscures the discourses in which we engage across professional arenas. Our intellectual projects are inextricable from political agendas. From the research questions we ask, to the ways we frame our findings, to the decisions we make about how to disseminate our work and ideas, politics are ever present. From an intellectual standpoint, making those politics as transparent as possible increases the credibility and robustness of scholarly bodies of work. Scholarly argumentation goes much deeper when all parties lay bare their assumptions. From a human and ethical standpoint, I contend that there is an obligation to take what we know and do something useful with it. To willingly ignore patterns of injustice and oppression is a moral decision, just as is the choice to act politically against them. One’s position as a scholar/academic does not exempt that person from the dynamics of social life. We are all a part of society, and maintaining a position of passive objectivity is equivalent to active complicity in the way things are.

I appreciate that my students are critical in the classroom and that they push me to defend my pedagogy and scholarly practice. I’ll share this post with them and hope that they feel empowered to keep the conversation going.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: source

 

Findings from a recent study out of Stanford’s Graduate School of Business by Yilun Wang and Michal Kosinski indicate that AI can correctly identify sexual preference based on images of a person’s face. The study used 35,000 images from a popular U.S. dating site to test the accuracy of algorithms in determining self-identified sexual orientation. The sample images included cisgender, white people who identified as either heterosexual or homosexual. The researchers’ algorithm correctly assessed the sexual identity of men 81% of the time and of women 74% of the time. When the software had access to multiple images of each face, accuracy increased to 91% for images of men and 84% for images of women. In contrast, humans correctly discerned men’s sexual identity 61% of the time and women’s only 54% of the time.

The authors of the study note that algorithmic detection was based on “gender atypical” expressions and “grooming” practices along with fixed facial features, such as forehead size and nose length. Homosexual-identified men appeared more feminized than their heterosexual counterparts, while lesbian women appeared more masculine. Wang and Kosinski argue that their findings show “strong support” for the prenatal hormone theory, by which hormone exposure predisposes people to same-sex attraction and leaves clear markers in both physiology and behavior. According to the authors’ analysis and subsequent media coverage, people with same-sex attraction were “born that way” and the essential nature of sexuality was revealed through a sophisticated technological apparatus.
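For those curious about the mechanics, studies like this generally reduce each face image to a numeric feature vector and train a simple classifier on self-reported labels. The sketch below is illustrative only, assuming scikit-learn and pre-computed embeddings stored in hypothetical files; it is not a reproduction of the authors’ code.

```python
# Sketch of the generic pipeline: face photos become feature vectors
# (e.g., via a pretrained face-recognition network), and a simple
# classifier learns to predict self-reported labels from them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X = np.load("face_embeddings.npy")       # hypothetical: one vector per photo
y = np.load("self_reported_labels.npy")  # hypothetical: binary labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Everything such a classifier “knows” enters through X and y: the feature vectors encode grooming and self-presentation as much as fixed physiology, and the labels encode the binary the researchers chose. Both points matter for what follows.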

While the authors demonstrate an impressive show of programming, they employ bad science, faulty philosophy, and irresponsible politics. This is because the study and its surrounding commentary maintain two lines of essentialism, and both are wrong.

The first line of essentialism is biological and emerges from the “born this way” interpretation of the data. The idea that one’s body is a causal reflection of ingrained physiology disregards scores of social and biological science that demonstrate a clear interrelationship between culture and embodiment. The idea of ingrained sexual genetics has a long history in science, but it is now dated and carries a heavy ideological bent. As Greggor Mattson explains in his critique of the study:

Wang and Kosinski…are only the most recent example of a long history of discredited studies attempting to determine the truth of sexual orientation in the body. These ranged from 19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered “gay genes,” “gay brains,” “gay ring fingers,” “lesbian ears,” “gay scalp hair,” or other physical differences between homosexual and heterosexual bodies.

There is a lot of very recent and ongoing research that overturns biological determinism and instead recognizes the imbrication of culture with the body. For example, Elizabeth Wilson’s 2015 book Gut Feminism addresses interactions between the gut, pharmaceuticals, and depression; a host of studies demonstrate long and short term physiological responses to racism; and scientists show genetic mutations in children of Holocaust survivors, indicating a hereditary element to extreme distress. While these ideas continue to advance and gain steam, they are not new. Anne Fausto-Sterling wrote Sexing the Body more than 15 years ago, and 30 years before that, Clifford Geertz drew on existing science to make a clear and empirically grounded case that the most natural thing about humans is their physiological need for culture, through which human bodies and brains develop. The physiological indicators of sexual orientation therefore reflect how culture is written into the body, not the presence of “gay genes.”

Politically, the science of biological essentialism is troubling. Not only does it stem from the very logic that spurred eugenics projects in the late 19th and early 20th centuries, but it also reifies a clear hierarchy of gender and sexuality in which cis-heterosexuals enjoy a top spot. Although “born this way” has become a rallying cry for equality, it implies that non-normative sexual desire is a deficit. To defend non-normative sexual desire by claiming that the desire is in-born takes fault away from the individual while reinforcing that desire as inherently faulty. It excuses the non-normative sexuality by re-entrenching the norm. “Born this way” implies that non-normative sexuality would be overcome, if not for this blasted biology. It may be a path towards equal rights, but “born this way” ultimately leads back to wrong-headed science that assumes heteronormativity.

The second line of essentialism from the study is technological, and it’s rooted in the assumption that AI is autonomous and reveals objective truths about the social world. In comparing humans to machines, the study points to the disproportionately high accuracy of the latter. The algorithm ostensibly knows humans better than humans know themselves. But as I’ve written before, AI is neither artificial nor autonomous. AI comes directly from human coders and is thus always culturally embedded. AI does not choose what to learn, but learns from human-centered logics.

Distinguishing people based on sexual orientation—and depicting orientation as a stable binary—are not independent conclusions reached by smart technology. These are reified constructs that people have implicitly agreed upon and developed meaning structures and interaction practices around. Wang and Kosinski built those meaning structures into a piece of software and distilled sexual orientation from other cultural signals, thereby maximizing sexuality as a salient feature in the machine’s knowledge system. They also, by excluding PoC, trans* persons, and those with fluid sexual identities, re-entrenched another layer of normalization by which white, binary-identified people come to represent The Population and everyone else remains an afterthought, deviation, or extension.
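That building-in happens before any learning does, at the moment the label space is defined. Here is a toy illustration (mine, not the authors’ code) of how a binary schema refuses whatever falls outside it:

```python
# The label schema is decided by coders before training; whatever
# categories they write down become the machine's entire universe.
LABELS = {"heterosexual": 0, "homosexual": 1}

def encode(self_description: str) -> int:
    # Identities outside the schema are not mislabeled but simply
    # unrepresentable: refused by design, not discovered by the machine.
    return LABELS[self_description]

print(encode("homosexual"))  # 1
try:
    print(encode("bisexual"))
except KeyError:
    print("refused: the schema has no place for this identity")
```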

AI is not a sanitary machine apparatus, but a vessel of human values. AI is not extrinsic to humanity, but only, and always, of humanity. AI does not reveal humanity to itself from a safe and objective distance, but amplifies what humans have collectively constructed and does so from the inside out.

The capacity for AI to recognize sexual identity based upon facial cues has significant social implications—mostly that people can be identified, rooted out, and possibly censured formally and/or informally for their sex, sexuality, and gender presentation. This is an important takeaway from the study, and acts as a sober reminder that the same technological affordances that liberate, mobilize, and facilitate community can also become tools of oppression (this idea is not new, but always worth repeating). But technologies don’t just become tools of liberation or oppression because of the hands in which they end up. It’s not only about how you use it, but how you build it and what kinds of meaning you make from it. Discerning sexual orientation is a human-based skill that Wang and Kosinski taught a machine to be good at. Markers of orientation don’t reflect a biologically determined core, just as machine recognition doesn’t reflect an autonomous intelligence. Both bodily comportment and technological developments reflect, reinforce, work in, work through, work around, but are always enmeshed with, people, culture, power, and politics.

 

Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source

The High Court of Australia is currently hearing a case about whether or not Australia will move forward with a marriage equality plebiscite. The plebiscite is a non-binding survey in which Australians can indicate their position on same-sex marriage. The results of the plebiscite have no direct effect on the law, but will inform members of parliament who may or may not then proceed with legislation to extend marriage rights to non-heterosexual couples.

The marriage equality debates in Australia are mired in familiar political tensions—left-leaning liberals argue that marriage is a human right, critical progressives are wary about entrenching normative kinship structures, and conservatives oppose same-sex marriage because, what about the children? The plebiscite is contentious in its own right, as a high price tag ($122 million) and an open platform for “No” campaigners to espouse hate have been the subject of heated critique (and indeed, undergird the current court hearings). But the plebiscite is also marked by an additional controversy arising from a seemingly mundane component: the use of postal mail.

The plebiscite will operate through the Australian Post. Voters who want to have their say on marriage equality will receive paper surveys to fill in and send back. At issue is the barrier to participation this creates for an important demographic: young people.

When thinking about inequality in the technology space, common wisdom holds that young people have a distinct advantage over older people. This assumption is rooted in the presumption that “technology” refers only to smartphones and social media. In fact, technology is merely another word for tools coupled with knowledge, and includes a wide range of apparatuses that have been part of human interaction since long before the first Atari. When a technology was once common but is now less so, the age dynamics of power and access shift away from youth and towards the grownups. Such is the case with postal mail.

A brief affordance analysis of the postal vote reveals the social implications of this technological decision while underlining the situatedness of communication media.

Affordances refer to the opportunities and constraints of technological objects. Affordances are not absolute, but operate through an interrelated set of mechanisms and conditions. The mechanisms of affordance refer to the strength with which technological objects push, pull, open, and resist, while the conditions of affordance designate how the mechanisms vary across users and contexts. Mechanisms include the ways that technologies request, demand, encourage, discourage, refuse and/or allow some actions. The conditions of affordance include perception, dexterity, and cultural and institutional legitimacy. In short, technological objects push and pull in particular directions, but the direction of the push-pull and the strength of its insistence will depend on the knowledge and perception of the user, how adept the user is in deploying an object’s features, and how well supported that user is in utilizing the object in various ways. An affordance analysis means asking: how does this technological apparatus afford, for whom, and under what circumstances? (See a full explication of the affordances framework here.)

With regard to the marriage equality plebiscite, an affordance analysis asks for whom does a postal vote encourage participation? For whom is participation discouraged? Is anyone refused?

The medium itself does not refuse participation to anyone. Everyone legally included in the plebiscite may send their survey through postal mail. Those who are not legally included (such as non-citizens, like me) would be refused through any medium. However, the decision to use the Australian Post markedly discourages participation by young Australians. This is because the medium of postal mail does not uniformly request, demand, encourage, discourage, refuse, or allow political participation, but disproportionately serves a practice and skill set well-honed by older generations and unfamiliar to younger ones.

As reported across Australian news (using mostly anecdotal evidence), a substantial number of people under 25 have never posted a letter. Using affordance theory language, lack of practice significantly reduces young adults’ dexterity with the postal medium, thus erecting barriers to political participation among this population. The gap in dexterity between older and younger voters thus encourages (or at least allows) older generations to participate in the plebiscite while discouraging younger generations. Asking 20-somethings to mail a letter is like asking 30-somethings to send a fax—we may know what a fax machine is and generally what it does, but the process would be clumsy and bewildering at best. So too, young Australian voters understand that letters go from one postal box to another, and these voters have all of the material resources at their disposal to post a letter, but they have to overcome the discomfort of fumbling through a medium with which they are experientially unfamiliar.
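Formalized as a sketch (the function, threshold, and numbers below are my illustrative inventions, not measurements), the analysis in the preceding paragraphs looks something like this:

```python
def afford_participation(user: dict) -> str:
    """Toy affordance analysis of the postal plebiscite: the same medium
    pushes and pulls with different force depending on user conditions."""
    if not user["legally_eligible"]:
        return "refuse"      # non-citizens are shut out through any medium
    if user["postal_dexterity"] < 0.3:
        return "discourage"  # unfamiliarity with the medium erects barriers
    return "encourage"

older_voter = {"legally_eligible": True, "postal_dexterity": 0.9}
younger_voter = {"legally_eligible": True, "postal_dexterity": 0.1}

print(afford_participation(older_voter))    # encourage
print(afford_participation(younger_voter))  # discourage
```

Representing users as dictionaries of conditions keeps the framework’s question explicit: the medium affords, but for whom and under what circumstances?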

The conditions that create affordance disparities between younger and older voters can have serious political implications. Prime Minister Malcolm Turnbull has said that a solid “Yes” outcome from the plebiscite would mean marriage equality policy could be considered and debated in parliament. However, a clear “No” outcome would halt all amendments to the current Marriage Act from 1961, which defines marriage as “the union of a man and a woman to the exclusion of all others.” Data reveal, unsurprisingly, that young Australians support equal rights for same-sex couples at higher rates than older Australians. This means that conditions which discourage youth participation create a clear bias in the conservative direction. An affordance analysis thus indicates that results should be weighted for age, participation should be offered through multiple mediums, or alternatively, the government could stop giving voice to bigots and let go of policies that protect and ingrain heteronormative versions of love. But that last one is less about technology…

Jenny Davis is on Twitter @Jenny_L_Davis

Headline image via: source