Defending the theoretician’s choice to employ a theoretical reductionism is in some respects a nonsensical exercise.  After all, theory of any kind operates as a manifestly reductionistic articulation of a given thing—even if that thing is another theory. This is the conclusion we must come to if we permit ourselves to define theory by the fundamental function it performs.  That is to say, we must accept that theory is (and seeks to be) a reduction of the busyness of the world’s observable on-goings—i.e., it omits detail in one form or another in an effort to make some specific facet of human experience more intelligible, approachable, operable, etc. To stipulate any theoretical premise (even one that indicts another theory as reductionistic), then, is to assert a reductionism.

Following such an understanding, we must take a moment to acknowledge that many who regularly engage with theory (particularly those who regard themselves as theorists) will rebut the present characterization.  To justify their stance to the contrary, they could highlight theoretical efforts to complicate and perhaps negate those perniciously simple and banal articulations of observable on-goings. They may offer rebuttals that quite closely resemble the following remarks:

  • I employ theory to challenge the taken-for-granted idea that merit alone determines life-chances.
  • It is by means of dense, sophisticated theoretical texts that I teach my students that gender is more than a binary.
  • Conventional frames of understanding, common sense and norms pervading everyday life portray race as a matter of genetic ancestry and, thereby, biologically determined. We draw upon theory to demonstrate how such an understanding obscures a far more nuanced and contentious story.

Those who make these and like claims are correct to do so.  Theory can enrich, and has enriched, our understanding of the world by indicating a need for discursive complexity.  Yet the theoretical endeavour that seeks to lend discursive nuance, complication and density to some artefact, dynamic, event, or trajectory does so in a manner that requires certain nuances and complexities to take precedence over others.  The act of analysis, then, demonstrates an intelligible differentiation—via conceptual demarcations and frames of reference—between precise analytical objects and a world of noise or irrelevant information (e.g., digressive observations, inchoate musings, palpably present though analytically extraneous effects and affected subjects).

So what, then, are we to make of the reductionism critique? Is the indictment of reductionism absent of practical application and epistemological utility?  If we regard theory as a demonstration of the truth or a closer approximation to some real reality, then yes—the indictment has no intellectual worth. After all, if the goal of theory is truth, then the observation of reductionism operates as a proxy for the charge of falsity—i.e., an analytical explanation demonstrates the mutually exclusive condition of being accurate or inaccurate.

When the reduction is accurate—when theory reduces observable on-goings in a manner that seems appropriate—the theoretician performs an analytical act analogous to the magician’s sleight of hand.  The performance is one in which the less than relevant (yet not wholly irrelevant) objects remain unacknowledged, or the performer asks the audience to disregard the reality or presence of said objects since they exist beyond the figurative curtain enacted by a claim to scope conditions.  Claiming reductionism in an effort to demonstrate inaccuracy thereby draws attention to the epistemological leniencies, or precarious merit, given to theoretical demonstrations of truth.

The observer may ask, “How does reductionism—an inexorable consequence of theorizing—demonstrate inaccuracy (the condition of falsity) in one circumstance but not another?”  Further probing reveals the theoretician’s implicit request that observers acquiesce to the condition of true enough or true until demonstrated otherwise.  Of course, this request is entirely appropriate—but giving recognition to such a request draws into question the very need for or desirability of employing theoretical stipulations directed at the discernment of truth.

Theory as technology

If we regard theory as a technology, however, we may accept that reductionism and the indictment of reductionism are both appropriate and necessary. The function of technology—with respect to the present epistemological concerns—is to accomplish a task, address a problem or manage a dynamic or operation. Theory is the (or rather “an”) analytical (often textual, but not necessarily so) manifestation of such management.  To indict a given theory-technology as reductive, then, is to suggest that said theory-technology forecloses the possible introduction of or plausible accounting for other theory-technologies.

We may, by way of example, look to the conditions under which one appropriately indicts psychoanalysis as reductionistic.  The most appropriate instances—in accordance with the logics of the present argument—are those where psychoanalytic accounts proffer “repressed wishes and instincts” as wholly responsible for a given set of circumstances or behaviors.  In such instances, the theory intends specific enactments (i.e., repression) or variables (i.e., repressed psychic contents) to account for all observable influence over a given object or dynamic—thereby, denying opportunity for additional analytical explanations and clarifications.

Providing an analytical space for alternative epistemologies emerges as a necessary condition for theorizing since a single variable or set of variables can only account for a portion, and never the totality, of the explanatory influence.  Such is the burden of analysing empirical circumstances, which demonstrate a seemingly infinite complexity (at least far greater complexity than one may account for by means of a single explanation).  Of course, one may continually proffer additional explanatory variables in an attempt to approximate ever more closely (but never achieve) a totalizing theoretical account.  However, the resulting theory would likely read as ambivalent and digressive—failing to make any explanatory variable salient while attempting to lend salience to all such variables.  Efforts at theorizing that correspond to the far-reaching scope just described demonstrate a latent appeal to absolutist logics.  The cultivated explanation emerges as teleological in character and intends imperviousness to all counterarguments.  The result operates as an absurdity in both aesthetics and utility.

To recognize theory as technology, however, is to demonstrate and validate a need for pragmatism—i.e., to recognize that a world of seemingly infinite complexity requires metrics able and willing to proffer useful rather than perfect measurements. So we may readily permit a psychoanalytic account of repressed wishes and instincts if such an account seems to demonstrate an explanatory efficacy (in correspondence with or relative to other theories).  The primary idea, then, is to evaluate a theory with regard to the utility it adds.  One would inquire, “Does psychoanalysis help me address the analytical problem at hand?” If the answer is no, the conclusion is that psychoanalysis as a theory-technology is inappropriate for the investigative context.  A claim to validity would simply have no relevance.

With this in mind, we may request that psychoanalytic theoreticians make qualifying remarks if and when they determine the application of psychoanalytic theory to be appropriate, wherein we expect these theory purveyors to say, “Of course other influential variables are at stake, but it is beyond the scope of the present discussion to account for and make salient such influences.”  This enactment—the disclaimer or admission of analytical limitations—operates as the theoretician’s incantation for warding off the contaminating danger of the reductionistic indictment.

Highly skilled theorists have all manner of requisite incantations at the ready.  They deploy them when necessary to ward off any and all possible indictments seeking to shame ostensible charlatans (and thereby distinguish the more competent and sophisticated wizards… err… I mean theoreticians as better than and qualitatively different from said charlatans).  Such indictments are as follows: 1) you have denied the agency of this subject, thing or any and all things; 2) you have assumed or postulated an essence or ontology in a manner that disregards social constructionism; 3) likewise, you have permitted your conceptions to operate as reifications neglecting contextual contingency and a world of constructive-generative agencies; 4) you have assumed or permitted a dichotomous understanding where a continuum resides or should be; etc.

It would be a mistake to regard these criticisms as hackneyed demonstrations of academic posturing.  Their widespread and continual presence within analyses of all kinds is a necessary circumstance for theorizing a dynamic world of diffuse power and agentic operations. Yet all the offenses highlighted by said criticisms remain analytically unavoidable at some point or another while theorizing.  The most pervasive and inevitable offense is the reductionistic articulation—which constitutes the point (or modus operandi) of theory itself.  So the theoretical incantation to ward off reductionism has become a trite, though necessary, endeavour.  It signifies to onlookers, “Yes, of course this analytical problem is a concern; now please acknowledge that I have not denied this concern a presence within my analysis.”

Yet the incantation should not—in and of itself—be sufficient for warding off the criticism of reductionism.  Providing a textual acknowledgement and concern for an epistemological problem doesn’t negate that said problem may still trouble the utility and applicability of a working theory.  So if psychoanalytic scholars preface their analyses with the requisite incantation to ward off a criticism of reductionism and said scholars proceed to articulate any and all social enactments as somehow indicative of the “repressed made manifest,” we may regard such an analysis as one that pushes beyond the limits of a particular reductionism’s technological utility.

By way of crude analogy, we may recognize that it’s useful to have a pocketknife on one’s person at most times.  There are numerous circumstances where a pocketknife enables one to accomplish a given thing. Is there something stuck to the bottom of your shoe? Use the pocketknife to scrape it off! Come across a difficult-to-peel fruit? Slice it a few times with the pocketknife (but wash it first if you just scraped something off your shoe).  Have a package that isn’t designed to open readily? Poke it a few times with the pocketknife.

Though we readily accept that the extent of this technology’s utility is ostensibly interminable, we must also acknowledge that there will be times when other, perhaps similar, technologies are simply more appropriate.  Need to spread some preserves on toast? You may use a pocketknife and possibly cut your hand, or you could use the much safer and more effective butterknife. Such considerations about the apt and effective application of one permissible technology among others are equally relevant to an evaluation of our most trusted (and ostensibly useful) theories.  Psychoanalytic logics may provide some reason for how and why I use my laptop to go online shopping, write articles, grade papers, etc.; yet a theory of technological affordances will demonstrate more intelligible and relevant explanations.

Highlighting the use and utility of theory-technologies

Reiterating a previous point, then, we acknowledge that it is only appropriate to regard psychoanalytic theory as reductive when it seeks to ignore or negate the relevance of other explanations for why and how a person is in and of the world.  Such an acknowledgement requires us, those giving critiques to another’s theoretical application, to differentiate the intentions of users from the affordances of the technologies they employ.  Indeed, the history of psychoanalysis tells us that while some have attempted to extend the theory to any and all things, as a paradigm, the psychoanalytic endeavour remains largely fixated on a particular and narrow range of empirical on-goings.

The indictment of user-error thus emerges as a far more appropriate criticism than that of a reductionistic theory.  Furthermore, we must overall applaud the efforts of those who push the application of theory-technologies to their absurd ends.  There is a pedagogical function in such boundary transgressions.  The pedagogy being, “Herein lie the discursive (or intellectually useful) limits of this theoretical endeavour.  Beyond this point or analytical threshold, the explanation appears unintelligible and perhaps offensive.”

So rather than enacting an intellectual culture that casts derision over and discourages those who apply theory in ways that don’t quite resonate with a larger intellectual community, perhaps we should permit spaces within the culture to celebrate such forerunners—giving recognition to epistemological failure as a worthwhile function of intellectual growth.  On this point it is worth differentiating the ill- or poorly conceived idea from that which simply does not work.  Such an ideal encourages bold applications of theory in an effort to explore the practical limitations of a given logic or method, even though a tenuous, intellectually unappealing explanation is the likely result.

But this is an ideal we only wish to enact if, while doing so, we are able to maintain specific intellectual standards—i.e., the expectation that intellectual rigor will characterize the analysis despite an assumed expectation that employed logics will fail to correspond to a given empirical circumstance in any useful manner.  In other words, pushing theoretical logics to the absurd limits of applicability should not serve as an invitation for slipshod scholarship demonstrating a dearth of erudition.  Yet if intellectuals maintain the standards of earnest analysis in their attempts at demonstrating the full range of applicability for each and all the prevailing reductionisms constituting the epistemological canon, then they may enculturate a general relationship to theory less inclined to disparaging one school or another in the name of epistemological loyalty and defence; scholars would then act less like ideologues and religious converts and accept epistemological diversity as a necessary requisite for the seemingly infinite complexity of the world.


James Chouinard (@Jamesbc81) is a sociologist at the Australian National University.


Algorithms are something of a hot topic.  Interest in these computational directives has taken hold in public discourse, where they have emerged as a subject of widespread concern. While computer scientists were the original algorithm experts, social scientists now equally stake a claim in this space. In the past 12 months, several excellent books on the social science of algorithms have hit the shelves. Three in particular stand out: Safiya Umoja Noble’s Algorithms of Oppression, Virginia Eubanks’ Automating Inequality, and Taina Bucher’s If…Then: Algorithmic Power and Politics. Rather than a full review of each text, I offer a quick summary of what they offer together, while drawing out what makes each distinct.

I selected these texts because of what they represent: a culmination of shorter and more hastily penned contentions about automation and algorithmic governance, and an exemplary standard for critical technology studies. I review them here as a state of the field and an analytical grounding for subsequent thought.

There is no shortage of social scientists commenting on algorithms in everyday life. Twitter threads, blog posts, op-eds, and peer-review articles take on the topic with varying degrees of urgency and rigor. Algorithms of Oppression, Automating Inequality, and If…Then encapsulate these lines of thought and give them full expression in monograph form.

The books are tied together in their insistent imbrication of social, structural, and technical. Each resists technological determinism while giving careful attention to the materiality of code and its animation at the hands of user-publics. In their socio-technical analyses, each book also centralizes politics and power. This is critical. The authors weave patterns of status, power, inequality, and resistance throughout their texts. They spend time at the social margins and show with stunning clarity how personal troubles and public issues are entwined with technical systems from design to implementation. Together, these works show us what algorithms are, how they are social, and remind us that algorithmic configurations are neither natural nor inevitable, but products of value systems and political dynamics.

Noble’s Algorithms of Oppression addresses how algorithms sort and curate information. Focusing primarily on the Google search engine, Noble draws on traditional works from Library Sciences to contextualize political and economic decisions about visibility, relevance, and legitimacy in information systems. Noble begins with a powerful and personal example of a Google search for “Black Girls.” Spoiler: the search returned images and links that had little to do with the interests or activities of children of color. Using this as a jumping off point, Noble traces the ways race, class and gender intersect with commercial interests and normalizing conventions in ways that perpetuate stereotypes and maintain Whiteness as the default subjectivity. “The more we can make transparent the political dimensions of technology”, says Noble, “the more we might be able to intervene in the spaces where algorithms are becoming a substitute for public policy debates over resource distribution—from mortgages to insurance to educational opportunities.”

Eubanks’ Automating Inequality takes up the very real and tangible algorithmic governance that Noble’s work highlights as significant. Eubanks draws on three case studies in which algorithms and poor Americans come in contact, with disastrous results. The book tells the story of Indiana’s welfare system, homeless services in Los Angeles, and child protective services in Pennsylvania. The work shows how poverty becomes a liability, service providers are alienated from clientele, and, like 19th century poorhouses, digital poorhouses do more to entrench economic instability than ameliorate it. With heart-wrenching accounts of lost health care, invasive data collection, and unjustly taken children, Eubanks highlights the contentious class dynamics that inform and are amplified through automated systems. “Technologies of poverty management are not neutral,” says Eubanks. “They are shaped by our nation’s fear and hatred of the poor; they in turn shape the politics and experience of poverty”. It is worth noting that this is one of the most rigorously researched books I’ve ever read. Eubanks even hires a fact-checker for herself, which sets a standard for intellectual integrity that rises beyond reproach.

Bucher’s If…Then: Algorithmic Power and Politics takes on the social side of automation. The theoretical work in the beginning of the text states in no uncertain terms that the banal platforms through which we socialize maintain deep and far reaching implications for the organization of daily life. As Bucher explains, “platforms act as performative intermediaries that participate in shaping the worlds they only purport to represent.” That is, platforms like Facebook, Twitter and Instagram are not only reflective mirrors, but powerfully efficacious in their own right. The text relies on case studies that capture social-algorithms from diverse angles. Bucher provides a technical study of the Facebook news feed, an exploratory study of everyday social media users’ “encounters” with algorithms, and documents how algorithms become institutionalized in the journalism and media landscape. Thus, Bucher tackles engineering and materiality, everyday experiences, and institutionalization of algorithmic systems, maintaining all the while a critical eye towards politics and power.

The three works are thus unified in their larger project of social scientific inquiry into algorithm studies, guided by a critical lens. Each is distinct, however, in its focus. The distinct substance of each book is a crucial reminder that algorithms are pervasive and polymorphous. They are not ends or entities in themselves, but vehicles of, and tools for, social organization in its myriad forms.


Jenny Davis is on Twitter @Jenny_L_Davis

This post is based on the author’s article in the journal Science as Culture.

In 2016, Lumos Labs – creators of the popular brain training service Lumosity – settled charges laid by the FTC, which concluded that the company unjustly ‘preyed on consumers’ fears … [but] simply did not have the science to back up its ads’. In addition to a substantial fine, the judgment stipulated that – except in light of any rigorously derived scientific findings – Lumos Labs

‘… are permanently restrained and enjoined from making any representation, expressly or by implication [that their product] … improves performance in school, at work … delays or protects against age-related decline in memory or other cognitive function, including mild cognitive impairment, dementia, or Alzheimer’s disease…. [or] reduces cognitive impairment caused by health conditions, including Turner syndrome, post-traumatic stress disorder (PTSD), attention deficit hyperactivity disorder (ADHD), traumatic brain injury (TBI), stroke, or side effects of chemotherapy.’

However, by the time of the settlement, Lumosity’s message was already out. Lumosity boasts ‘85 million registered users worldwide from 182 countries’, and their seductive advertisements were seen by many millions more. Over three billion mini-games have been played on their platform, which – combined with personal data gleaned from their users – makes for an incredibly valuable data set. Lumosity kindled sparks of hope within those who suffered from, or feared suffering from, the above conditions, or who simply sought to better themselves for contemporary demands. In this way, the brain has become a site of both promise and peril. Today, ever more ethical injunctions are levied through calls for ‘participatory biocitizenship’, the supposed ‘empowerment of the individual, at any age, to self-monitor and self-manage health and wellness’.

However, this regime of self-care is not sold through oppressive demands, but the consumer-friendly promise of fun (especially when it can be displayed to others). These entanglements of hope, fear, duty, and pleasure coalesce into aspirations of ‘virtuous play’. Late capitalist modes of prosumption leverage our desires for realizing ideal selves through conspicuous consumption practices, proving ourselves as healthful, industrious, and always pleasure-seeking. Self-tracking technologies ably capture this turn to virtuous play, combining joyful game playing with diligent lifelogging. Brain training proves exemplary here, for through the potent combination of pop neuroscience, self-help rhetoric, normative role expectations, and haptic stimulation, we labour to enhance our cognitive capacities.

Of course, ‘brain training’ in the typical form of tablet and smartphone-based games constitutes a rather mild intervention, relative to other neurotechnologies adopted for personal enhancement. Consider, for example, EEG-based devices enticing consumers with neuro-mapping and (cognitive) arousal-based life-logging, or gamification and smart-home integration (see Melissa Littlefield’s new book for more on EEG devices). Some concept videos for such applications are saccharine sweet:

While others could have used a little less brotopia exuberance:

Elsewhere, we can find virtuous play in the uptake of transcranial direct current stimulation (tDCS), sometimes used in clinical settings, but increasingly also by amateur ‘neurohacking’ enthusiasts.

However, while the consumer-friendly brain training offered by companies like Lumosity pales in its relative intensity, its widespread appeal threatens to inscribe narrow ethical prescriptions of self-care (while also smoothing paths toward those more invasive measures). In other words, the actual efficacy of current brain training methods may matter far less than the discursive grooves they carve.

For example, ‘brain training’ rhetoric commonly leverages aspirations of virtuosity as relief from contemporary anxiety and vulnerability. Yet, by simultaneously stoking these very anxieties, they ratchet up expectations of being dutifully productive and pleasure-seeking subjects. Also, limited affordances entail that the subject is disaggregated into only those functional capacities deemed value-bearing and measurable. The risk here is reinforcing hyper-reflexive but shallow practices of self-care.

Moreover, popular rhetoric around ‘neuroplasticity’ construes the brain as an untapped well of potential, infinitely open to targeted enhancement for ideal neoliberal subjects who are ‘dynamic, multi-polar and adaptive to circumstance’. This enhancement ethos has also emerged in response to the collective dread felt towards neurodegenerative diseases, where responsible, enterprising subjects seek ways to ensure cognitive longevity.

Our neuroplastic potentials are also regularly invoked, holding promise that we can truly realize our latent capacities to be more productive, fulfil role obligations, ward off neurodegeneration, and shore up our reserves of human capital. This is the contemporary burden of endless ‘virtuosity’, where subjects must constantly work upon their value-bearing capacities to be (temporarily) relieved of insecurity, risk, and vulnerability.

These hopes, fears, and obligations are soothed and stoked through the virtuous play of brain training. This market operates under the premise that through expertly designed activities – commonly packaged as short games – cognitive capacities may be enhanced in ways that generalize to everyday life. Proponents have sought to ground consumer-friendly brain training in scientific rigour, but efficacy remains hotly contested.

More broadly, brain training constitutes part of the growing ‘brain-industrial complex’, driven in part by ‘soft’ commercialization trends. These commercial claims encourage ‘endless projects of self-optimization in which individuals are responsible for continuously working on their own brains to produce themselves as better parents, workers, and citizens’.

The rhetoric of brain training reflects moral hazards that often accompany commercialization, with ‘inflated claims as to the translational potential of research findings’ resulting in tenuous practical applications. Brain training also reflects how smoothly self-tracking has been incorporated into obligations of healthfulness, leveraging a ‘notion of ethical incompleteness’. Hence, while most consumer-friendly ‘brain training’ products are of low intensity, even here abounds an ethical appeal that ‘divides, imposes burdens, and thrives upon the anxieties and disappointments generated by its own promises’. Coupling self-tracking with gamification thus enables joyous pleasure and ethical measure. Care for oneself ‘is now shot-through with the promise of uninhibited amusement’ so that we can ‘amuse ourselves to life’. This judicious leisure keeps mortality at bay and morality upheld.

Using Lumosity as a peg upon which to hang the concept of virtuous play, we can unpack how popular brain training and related self-tracking practices lean on contemporary aspirations and anxieties. Firstly, Lumosity is designed to be routine yet fun, undertaken through short, aesthetically pleasing video games, usually played on personal computers, tablets, or smartphone devices. These games purport to target, assess, and – with training – enhance cognitive capacities. Many of these games draw upon classic experimental designs, and Lumosity has sought to further establish credibility through their ‘Lumos Labs’ – where ‘In-house scientists refine and improve the product’ – and their ‘Human Cognition Project’.

Admittedly, it may be tempting to dismiss products like Lumosity as pseudoscience packaged in exaggerative marketing, not worthy of our attention. But such dismissals neglect how we are typically constituted as subjects, for it is

‘… at this vulgar, pragmatic, quotidian and minor level that one can see the languages and techniques being invented that will reshape understandings of the subjects and objects of government, and hence reshape the very presuppositions upon which government rests.’

Therefore, with this need to better understand prevailing rationales of neuro-enhancement, observe here how Lumosity pitched itself to consumers in 2014:

Several appeals emerge here: equating brain training with other forms of ‘fitness’; the offer of focusing on what is ‘important to you’; scientific rigour; progress measured by comparison against the cohort; and the promise of fun. Finally, there is an earnest petition of potential, for with Lumosity you will ‘discover what your brain can do’.

The brain training industry has thrived within this context of egalitarian self-enterprise, offering aspiring virtuosos ‘the key to self-empowered aging’. Such seductive rationales are highlighted by Sharp Brains, ‘an independent market research firm tracking health and performance applications of neuroscience’. They claim

‘When we conducted in-depth focus groups and interviews with [lay subject] respondents, the main question many had was not what has perfect science behind it, but what has better science than the other things people are doing – solving crossword puzzle number one million and one, taking ‘brain supplements,’ or doing nothing at all until depression or dementia hits home.’

The implication – conveniently endorsed by Sharp Brains – is that although efficacy remains unproven, this does not absolve individual responsibility. Rather, we must do something to care for our brains, lest we be seen as defeatist and indolent, sullenly waiting for depression or dementia to ‘hit home’. Such sentiments have certainly been fostered by slickly-packaged commercial appeals.

In 2012, Lumosity launched a highly successful ‘Why I Play’ campaign, designed to normalize brain training. The campaign was active for several years, reaching a massive global audience through an enticing emphasis on aspiration and emulation. Each ‘Why I Play’ commercial adhered to a shared template: an actor portraying a happy Lumosity user stresses the imperative need to enhance their cognition, while also noting the pleasures of brain training. All the actors are, of course, impossibly attractive, and the perfect embodiment of the late capitalist subject. They serve as avatars of virtuosity, with unending drives for both self-improvement and pleasure.

This simultaneously disciplined, pleasurable, intimate, and yet distant framing of ‘discovering what your brain can do’ creates a peculiar ethic-fetish of brainhood. Advocates proclaim that ‘I am happier with my brain’ or ‘my brain feels great’. The users also praise ‘the science behind the games’, and highlight hopes to maintain cognitive capacities as they age. These commercials lean directly on burdensome expectations placed upon labouring subjects today.

Another variant of the ‘Why I Play’ campaign, upping the ethical stakes, even implies that brain training may be obligatory for those who aspire to be the kindest persons they can be:

Similarly, a mother expresses relief that ‘it’s not just random games, it’s all based on neuroscience’, reassuring her that ‘every brain in the house gets a little better every day’. Training one’s brain – and the brains of dependents – is framed as an admirable practice for those who seek to be a source of joy, comfort, and care for others.

Upon commencing their ‘brain training journey’, members are asked probing questions about when they feel most productive, their sleeping patterns, general mood, exercise habits, age, and more. A competitive ethos is also stoked, with users asked ‘What percentage of Lumosity members do you outrank? … Come back weekly to see how many people you’ve surpassed.’ Such encouragement is then reflected in precise rankings of users in their various cognitive capacities. Lumosity also enables integration of data from Fitbit devices, further entrenching associations between brain fitness and aerobic fitness.

After completing a prescribed number of training sessions, the user receives a ‘Performance Report’. This report includes comparisons with other users according to occupation group, suggesting the line of work to which their particular brain may be best suited. Users can also consult their ‘Brain Profile’, divided into the five functions of ‘Attention’, ‘Flexibility’, ‘Speed’, ‘Problem Solving’, and ‘Memory’. These five measures generate the user’s entire ‘Brain Profile’, while the ‘Performance Index’ ensures that ‘users know where they fall with respect to their own performance using a single number’. Nothing else can be accommodated, and everything must be reducible to a single figure. Our wondrous cognitive assembly collapses into a narrow ‘profile’ of functions, percentages, and indices, all framed through buzzwords and mantras of corporate-speak.
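The reduction at work here can be pictured in a few lines of code. This is a purely hypothetical sketch, not Lumosity’s actual calculation: the scores, the equal weighting, and the simple average are all assumptions made for illustration.

```python
# Hypothetical illustration: collapsing a five-function 'Brain Profile'
# into a single 'Performance Index'. The equal weighting is an invented
# assumption, not Lumosity's real formula.

def performance_index(profile: dict) -> float:
    """Reduce five sub-scores to one number, discarding everything else."""
    functions = ["Attention", "Flexibility", "Speed", "Problem Solving", "Memory"]
    return sum(profile[f] for f in functions) / len(functions)

# Invented example scores for a single user.
profile = {"Attention": 62.0, "Flexibility": 48.0, "Speed": 71.0,
           "Problem Solving": 55.0, "Memory": 66.0}
print(performance_index(profile))  # a single figure: 60.4
```

Whatever the real formula, the structural point stands: five numbers in, one number out, and any quality not captured by those five inputs simply does not exist for the index.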

So, while it remains contentious whether such practices materially ‘train’ a brain, these regimes certainly contribute to entraining and championing a particular kind of subject. Yet the range of qualities measurable is clearly restricted by prevailing capabilities, including how these qualities are themselves refashioned to fit available affordances. Nevertheless, perhaps some comfort is found in giving in to the promise of fun and giving oneself over to expertise. In their capacious allowance for both pleasure and duty, these games serve as tolerable acts of confession. However, this fetish-ethic may, in time, become a burdensome labour, adding supposed precision around ‘brainhood’ that reflects only current idealisations.

The fetish-ethic of cognitive enhancement is particularly evident in the insistence on ‘active ageing’. Brain training products are often directly marketed to persons in the ‘Third Age’ (those who are perhaps retired, but not yet dependent upon others). The commercial exploitation of the Third Age has commonly been tied to strategies that bemoan passive acceptance of ‘natural’ ageing, and instead urge practices designed to lengthen this twilight of life.

Lumosity’s ‘Why I Play’ campaign, for instance, expressly endorses active ageing. One actor states ‘There’s a lot going on in here [pointing to head], and I want to keep it that way’, while another actor speaks directly to Third Age virtuosity.

Here, the extended Third Age is embodied in a handsome and (improbably) young retiree; a privileged silver fox carrying a clearly aspirational message. In this manner Lumosity presents brain training as the rational consumer choice through avatars of success, worthy of emulation. Such rationales are persuasive means of shifting the burden of healthfulness onto the consuming subject. A new actuarialism is emerging, managing population-level risks through the pleasurable consumption of self-care.

However, virtuous play also requires justifying the use of time. For today’s perpetually harried subject, this is achieved by blurring distinctions between labour and leisure. In this way, recreation can be tied to self-perfection, equipping the user against neoliberal demands without sacrificing participation in the experiential economy. This strategy of ‘instrumentalizing pleasure as a means of legitimizing it’ is especially evident in the way another brain training product – Elevate – pitches itself to consumers, with emphasis placed on the judicious use of time. Advertisements feature actors discussing the product’s benefits: time well spent; productive pleasure; and enhanced work focus. Indeed, these Elevate ‘users’ suggest that the right kind of play is actually the most effective and rational means of enhancing productivity.

Elevate’s emphasis on personal productivity is part of a broader ‘corporate mind hack’ trend. Under this regime, the labouring subject is disaggregated into discrete functions pre-determined as valuable, and then incentivized to improve them.

This is sometimes put into practice by leveraging competitive drives in workplace settings, with some arguing that it can prove ‘socially connective with the self and co-workers in just the right lightweight competitive way’. Such ‘biohacking’ is also driven by simmering distrust of more intuitive and holistic assessments of one’s wellbeing. Instead, ‘hard’ data is sought through mediating, non-human authorities. Still, it remains noteworthy that brain training retains a form of embodied volition. Note, for instance, how brain training is typically offered through devices imbued with haptic feedback capabilities, enabling a pleasurable experience through the sensory bleed between mind, body, device, and the virtual world presented within it.

Still, the expectation is that we should circumvent our sensing, intuitive apparatuses, and instead seek data neatly cleaved from its source. These mediated outputs can then provide reassuring, purportedly objective markers of our accumulated human capital. Yet human capital, of course, is determined only by what counts as worth counting in any particular social context. Hence a circular pedagogy emerges, for as Foucault noted, one cannot ‘know’ without transforming that which is observed, and to ‘know’ oneself requires first abiding by what is deemed worth knowing.

The result is that these narrowly derived brain ‘profiles’ and ‘indices’ ultimately prescribe far more than they reveal. Likewise, virtuous play is a discursive veil by which productive expectations are heaped upon dutiful biocitizens. This is further compounded by the rush to market. Emerging products looking to cash in on contemporary hopes and anxieties are limited by available affordances, yet still exploit obligations of self-care. This generates constraining ontological frames, hardening at the very moment in which personal neurotechnologies are touching upon extraordinary exploratory potential.

Given these trends, we should aspire to foster discursive spaces where ‘enhancement’ can be reimagined. Or, better yet, perhaps we can sidestep the insistence on ‘enhancement’ altogether, and cease hyper-reflexively categorizing ourselves into endlessly improvable higher cognitive functions. Alternatively, perhaps we may better take advantage of flexible affordances within digital platforms. Could we turn our hopes, fears, anxieties, and desires for pleasure away from high scores and top rankings for sole virtuosos? Such habits accrue hard metrics that confer worth only on oneself. Can we instead turn personal neurotechnologies towards discovering new avenues for our social capacities to soothe fears and anxieties – and, perhaps, even be a source of pleasure – for others?

This is not to advocate for metricizing intimacy through the ‘quantified relationship’. To precisely metricize good conduct – and give authority over these measures to mediators that cannot accommodate the creative ruptures of ‘play’ – is to wilfully foreclose the very same elusive potentials we are striving to attain. Instead, perhaps we can reimagine self-fashioning in ways less tethered to rigid and pre-determined instrumental ends, and instead embrace more experimental modes.

In any case, following their smackdown by the FTC, Lumosity are now far more cautious in their claims.


Matt Wade is a postdoctoral fellow in NTU’s Centre for Liberal Arts and Social Sciences, Singapore. His primary research interests are within the sociology of science, technology, and morality (particularly around obligations of virtuosity and assessing moral worthiness). These interests are pursued in various contexts, including: debates and applications of moral neuropsychology; consumer-friendly neurotechnologies; self-tracking practices; and appeals for aid through personal crisis crowdfunding. Matt also has an interest in cultural sociology, particularly spectacles of prosumption and emotional labour. Previously, this research focused on evangelical megachurches, and now is pursued through a project on contemporary wedding rituals.

Some of Matt’s work can be accessed here and here.


Facebook has had a rough few weeks. Just as the Cambridge Analytica scandal reached fever pitch, revelations about Zuckerberg’s use of self-destructing messages came to the surface. According to TechCrunch, three sources have confirmed that messages from Zuckerberg have been removed from their Facebook inboxes, despite the users’ own messages remaining visible. Facebook responded by explaining that the message-disappearing feature was a security measure put in place after the 2014 Sony hack. The company promptly disabled the feature for Zuckerberg and other executives and promised to integrate the disappearing message feature into the platform interface for all users in the near future.

This quick apology and immediate feature change exemplifies a pattern revealed by Zeynep Tufekci in a NYT opinion piece, in which she describes Facebook’s public relations strategy as a series of public missteps followed by “a few apologies from Mr. Zuckerberg, some earnest-sounding promises to do better, followed by a couple of superficial changes to Facebook that fail to address the underlying structural problems.”

In the case of disappearing messages, Facebook’s response was both fast and shallow. Not only did the company fail to address underlying structural issues, but it responded to the wrong issue entirely. Their promise to offer message deletion to all Facebook users treated the problem as one of equity. It presumed that what was wrong with Zuckerberg deleting his own messages from the archive was that others couldn’t do the same. But equity is not what’s at issue. Of course users don’t have the same control over content—or anything else on the Facebook platform—as the CEO. I think most people assume that they are less Facebook-Powerful than Mark Zuckerberg. Rather, what is at issue is a breach of accountability. Or more precisely, the problem with disappearing messages on Facebook is that they violated accountability expectations.

Helen Nissenbaum introduced a widely used framework to describe how and when privacy violations take place. The “contextual integrity” framework rejects universal evaluations of privacy and instead defines privacy by the expectations of a particular context. For example, it isn’t a privacy violation if you give your information to a researcher and they reproduce that information in published reports, but it is a privacy violation if you give your information to a researcher and they sell that information to third parties. The same idea can be applied to accountability.

Contexts and situations carry with them expectations about what will be maintained for the record. These expectations of accountability ostensibly guide behavior and interaction. If people assume that all communications are retrievable, they will comport themselves accordingly. Similarly, they will treat others’ communications as available for retrieval and evaluation. With his disappearing messages, Zuckerberg violated the contextual integrity of accountability.

Disappearing messages are not in and of themselves accountability violations. Snapchat, for instance, integrates ephemeral messaging as a core feature of its design. Recipients of Snapchat messages do not presume that senders can or will be held accountable for their content in the way that users of archival services—like Facebook—would. What’s so unsettling about Zuckerberg deleting his messages isn’t that we users can’t do it too, it’s that he violated the integrity of the context by presenting one set of accountability assumptions and enacting another.

Offering message deletion to all Facebook users would indeed change the contextual expectations of accountability, but fail to repair the contextual violation. Instead, a new feature roll-out is another quick pivot that leaves larger intersecting issues of power, design, and regulation unaddressed.


Jenny Davis is on Twitter @Jenny_L_Davis


Humor is central to internet culture. Through imagery, text, snark and stickers, funny content holds strong cultural currency. In a competitive attention economy, LOLs are a hot commodity. But internet culture’s premium on laughs precludes neither serious forms of digitally mediated communication nor consideration of consequential subject matter. Rather, the silly and the serious can—and do—imbricate in a single utterance.

The merging of serious and silly becomes abundantly evident in recent big data analyses of political communication on social media. Studies show that parody accounts, memes, gifs and other funny content garner disproportionate attention during political news events. John Hartley refers to this phenomenon as ‘silly citizenship’ while Tim Highfield evokes an ‘irreverent internet’. This silliness and irreverence in digitally mediated politics means that contemporary political discourse employs humor as a participatory norm. What remains unclear, however, is what people are doing with their political humor.  Is humor a vehicle for meaningful political communication, or are politics just raw material for funny content?  My co-authors and I (Tony Love (@tonyplove) and Gemma Killen (@gemkillen)) addressed this question in a paper published last week in New Media & Society.

The question of what people do with political humor is significant. Researchers and social commentators have expressed concern that humor detracts from substantive conversation and foments cynicism and apathy in the democratic system. At the same time, internet technologies present new platforms that give voice to marginalized groups while humor offers an accessible discursive style. A tension thus emerges in which silliness online may at once strengthen and undermine public participation in politics.

Our paper, titled ‘Seriously Funny: The Political Work of Humor on Social Media’ looks at how humor works, and the work humor does, in digitally mediated political communication.  Data for the paper is derived from two key moments during the 2016 U.S. presidential race in which humor and politics intersect: Donald Trump calling Hillary Clinton a ‘nasty woman’ and Clinton referring to Trump supporters as a ‘basket of deplorables.’ We scraped public Twitter data from the 24 hours following each event to create a big(ish) data set. We ended up with over 14,000 tweets. We coded these tweets for humor and political messaging. We then analyzed the humorous-political tweets to discern what people were doing with their political humor. Finally, we separated the two cases—deplorables and nasty woman—to see if we could find partisan differences in humor style.

Methodological interlude: this process of coding was as tedious as (or more tedious than) you would imagine. Existing research has used big data computational methods to show broad patterns. We were interested in the nuances that big data glosses over and/or obscures. Our questions required a small data approach. This was especially true because we were interested in humor, and a key feature of humor is that it often means something different than it says. Humor is deeply layered and culturally specific, relying on intertextual remixes and inside knowledge. This was a do-it-by-hand-the-old-fashioned-way kind of job. Practically, that meant hand-coding 14,000+ tweets including following links and threads to gain context. It meant re-coding the subset of tweets that we deemed funny and political (~3,300) and then coding them again in search of partisan patterns. I tweeted this commentary on the process during the revision stage (while attempting to get through even more re-coding). All of this is to say that big data methods represent a massive advancement in social research, but sometimes research questions require sleeves-up qualitative deep-dives.

Our first pass of the data showed two main things. First, we found that, as expected, humor loomed large, featuring in about 5,000 tweets. Most of the other tweets were just informational (e.g., “Clinton called Trump supporters a ‘basket of deplorables’”) and/or links to articles and videos of the events, with the occasional angry, humorless rant. Second, we found that nearly 70% of humorous tweets carried some political agenda. That is, we found that the vast majority of funny content acted as a vehicle for serious political talk. This second finding answered one of our main research questions (is humor a tool for political speech or are politics fodder for apolitical jokes?). This finding, that humor does serious political work, eases concerns about humor as a force of apathy and cynicism and indicates that those who trade in humor can—and do—engage actively in the public political sphere.

Our next step entailed delineating more specifically what Twitter users do with their humor. We categorized political humor into three thematics: discrediting the opposition, political identification, and civic support. We analyzed these as a whole and also looked at how the data distributed along partisan lines. We tied each thematic to a ‘humor function’ using John C. Meyer’s origins and effects framework. Meyer posits that humor takes three forms: relief—cutting through a heavy moment with levity; incongruity—making the familiar strange; and superiority—triumph through pointed deprecation of an ‘other.’ These humor origins serve two broad effects: unity and division. Meyer clarifies that humor always has multiple origins and serves multiple ends, but with different emphases.

We saw relief and incongruity throughout the tweets but were able to parse variations in superiority as an origin, and unity/division as an effect. Specifically, tweets that discredit the opposition were primarily divisive and heavily reliant on superiority; political identification was primarily unifying while pushing back on denigration; and civic support had elements of superiority with relatively equal parts unification and division as mobilization was both a collective action and an act of aggression. These humor schematics not only connected our findings to cultural studies of humor, but also allowed us to make sense of partisan humor style.

Examining humor style across partisan lines is meaningful theoretically, as humor studies have traditionally shown firm symbolic boundaries between ideological groups. At the same time, internet studies have celebrated a ‘convergence culture’ and general breakdown of symbolic boundaries as shared language, cadence and syntax take hold across contexts. Divergent humor style across the two data sets would lend credence to traditional humor studies, while shared humor style would indicate that social media have had profound boundary breaching effects on practices and preferences of humor.

Our first category, discrediting the opposition, was the most heavily populated. Here, tweets mocked the opposing candidates and their supporters, hotly contesting fitness for office and general value as human beings. For instance, anti-Trump tweets referenced his misogyny and (dull) intellect while anti-Clinton tweets referenced elitism and corruption. For example:

“Such a Nasty Woman,GRAB THEM BY P*SSY, Nobody has more respect for women than me”—donald trump

Mrs Deplorable will have to take a few days off from parties in Hollywood, she’s in the bed, deplorabley tired. #LockHerUp #TrumpTrain

About 2/3 of all tweets had elements of opposition and these distributed equally along partisan lines.

Our second theme was political identification. This referred to establishing the self as a political subject through reclaiming negative labels, connecting political preferences to other positive statuses, and establishing the self as part of a political bloc. For example:

I was going to be a nasty woman for Halloween, but I am already sexy, smart and generous

Folks I’m not a Major. ’Major Lee D Plorable’ read fast is Majorly Deplorable. I was only corporal in USMC #BasketOfDeplorables lol

About 1/3 of all tweets had elements of political identification. Again, these distributed about equally along partisan lines.

Note: analyses of our first two categories show no partisan differences in humor style, indicating a clear divergence from the strong cultural boundaries that humor studies would lead us to expect. But then, we come to civic support.

Our final category, civic support, is in many ways the most interesting. Civic support refers to active participation in the political process through mobilization, fundraising and voting. For example:

This nasty woman is taking my pussy to a voting booth to vote for @HillaryClinton Too bad we both can’t vote. #ImWithHer #NastyWomen

How’s Go “F” yourself, from a deplorable Independent who just changed her vote from Her to Him

Although it is our least populated category (only present in about 20% of all humorous political tweets), it is the only category that varies along partisan lines. While about a quarter of ‘nasty woman’/pro-Clinton tweets contain elements of civic support, this thematic is present in less than 10% of ‘deplorables’/pro-Trump tweets. This pattern is critical as the only example of partisan difference in humor style, showing that humor’s traditionally strong boundaries may partially resist the convergent pull of internet culture. The pattern also presents something of a puzzle: despite the relatively high prevalence of civic action among Clinton supporters on Twitter, the election ultimately fell in support of Trump. This raises interesting questions about the predictive value of social media for actual voting behaviors.
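The partisan comparison at the end of the analysis reduces to simple proportions across the hand-coded tweets. Below is a minimal sketch of that tallying step; the records are invented stand-ins for the coded data, not figures from the paper:

```python
# Sketch of the final tallying step: computing what share of each partisan
# case carries a thematic code. These records are invented examples.
from collections import Counter

coded_tweets = [
    {"case": "nasty_woman", "civic_support": True},
    {"case": "nasty_woman", "civic_support": False},
    {"case": "deplorables", "civic_support": False},
    {"case": "deplorables", "civic_support": False},
]

totals = Counter(t["case"] for t in coded_tweets)  # tweets per case
hits = Counter(t["case"] for t in coded_tweets     # tweets coded positive
               if t["civic_support"])

for case in totals:
    print(f"{case}: {hits[case] / totals[case]:.0%} civic support")
```

Run over the full corpus, the same arithmetic yields the paper’s contrast: roughly a quarter of pro-Clinton tweets coded for civic support versus under a tenth of pro-Trump tweets.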

In sum, our study shows four main things: 1) humor plays a big part in digitally mediated political communication; 2) humor is a vehicle for serious political commentary and participation; 3) humor is used largely for denigration and divisiveness, but there are substantial trends of political subjectification, civic participation, and collective action; and 4) political humor partially transcends partisan lines while leaving some boundaries intact. These findings ease concerns about the possible cynicism fomented through humor online while raising key questions about the relationship between social media practices and voting behavior. The findings also speak to humor studies—which show firm symbolic boundaries—and internet studies—which show boundaries broken down. The partial but incomplete breakdown of ideological boundaries in our analysis of humor style indicates that the meeting of humor and social media leaves neither unchanged.


Full text found here: Seriously Funny: The Political Work of Humor on Social Media

Jenny Davis is on Twitter (@Jenny_L_Davis), where she sometimes tries to be funny with varying degrees of success

If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.

The internet—along with the complementary technologies we’ve developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech-critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form “rich mental connections,” and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr’s sentiment that the way we think has changed, I don’t agree that the fault falls on the internet. I believe we expect too much of Google and too little of ourselves; therefore, the fault (if there is fault) is largely our own.

Here’s why: Carr’s argument hinges on the idea that technology definitively determines our society’s structural and cultural values—a theory known as technological determinism. However, he fails to account for affordance theory. Affordances refer to the way in which the features of a technology interact with agentic users and diverse circumstances. While the technical and material elements of technology do have shaping effects, they are far from determining. Affordance theory suggests that the technologies we use, and the internet infrastructures from which they draw, contain multipotentiality: they afford the potential to indulge in curiosity and develop robust knowledge while simultaneously affording the potential to relinquish curiosity and develop complacency through the comforts of convenience and self-confirmation.

Considering the initial sentiment of Carr’s argument (the way we think has changed) together with affordance theory, we can derive two critical questions: have we embraced complacency and become too comfortable with the internet’s knowledge production capabilities? If so, by choosing to rest on our laurels and exploit this affordance, what happens to epistemic curiosity?

There’s a lot to unpack, but in order to address these questions, we need to examine the potential socio-technical circumstances that could lead us down a path of declining epistemic curiosity, starting with the paired ideas of convenience and complacency.

Complacency is characterized by the feeling of being satisfied with how things are and not wanting to try to make them better. Clearly, in terms of making life more efficient, we are nowhere near complacent, as we constantly strive to streamline our lives through innovation—from fire to (arguably) our greatest creation to date and the basis of our modernity: information and communication technology. This technology affords us the ability to live more convenient, effortless lives by providing access to the world’s knowledge with the tap of a finger and the ability to do more in a few moments than previous generations could do in hours.

For instance, education has become much more convenient. Thanks to the internet, you can take advantage of distance learning programs and earn a degree on your own terms, without physically attending class. The workforce has also become more flexible, as technology allows us to maximize time and stay on top of our work through complete mobility, and in some cases, complete task automation. Economically, the internet allows us to sell and consume goods and services without the physical limitations of brick and mortar. It also allows us to communicate with friends, family, and strangers over long distances, document our lives, access current events with ease, and answer a question within moments of it popping into our heads.

These conveniences must make life better, right?

Think of these conveniences like your bed on a cold morning: warm and comfortable, convincing you to hit snooze and stay a while longer. This warmth and comfort can be a source of sustenance and strength; however, if we stay too long, comfort can get the best of us. We might become lazy, hesitating to diverge from the path of least resistance.

Just as it is inadvisable to regularly snooze until noon, it is concerning when information and knowledge are accessed too easily, too quickly. With the increased accessibility and speed of information, it’s easy to become desensitized to curiosity—the very intuition that is responsible for our technological progress—in the same way that you are desensitized to your breathing pattern or heartbeat. By following the path of least resistance, we can create a dynamic in which we perceive the internet as a mere convenience instead of a tool to stimulate our thoughts about the world around us. This convenience dynamic allows us to settle into a state of complacency in which we are certain that everything we think and believe can be justified through a quick Google search—because, in fact, it can be. That feeling of certainty and comfort that stems from this technical ability to self-confirm is what I call informed complacency.

The idea of informed complacency is especially fraught because it signifies a turning point in our perception of contemporary knowledge. Ultimately, it can encourage us to develop an underlying sense of omniscient modernity, which Adam Kirsch discusses in his article for The New Yorker, “Are We Really So Modern?”:

“Modernity cannot be identified with any particular technological or social breakthrough. Rather, it is a subjective condition, a feeling or an intuition that we are in some profound sense different from the people who lived before us. Modern life, which we tend to think of as an accelerating series of gains in knowledge, wealth, and power over nature, is predicated on a loss: the loss of contact with the past.”

In the past, nothing was certain. The information our ancestors had on the world and universe was constantly being overturned and molded into something else entirely. Renowned thinkers from across the ages built and destroyed theories like they were children with LEGO bricks—especially during the Golden Age of Athens (fifth and fourth centuries B.C.) and the Enlightenment (seventeenth and eighteenth centuries A.D.). Each time they thought they had it figured out, the world as they knew it came crashing down with a new discovery:

“The discovery of America destroyed established geography, the Reformation destroyed the established Church, and astronomy destroyed the established cosmos. Everything that educated people believed about reality turned out to be an error or, worse, a lie. It’s impossible to imagine what, if anything, could produce a comparable effect on us today.”

Today, we still face uncertainty, albeit a different kind. With the glut of empirical evidence on the internet, multiple versions of objective reality flourish even as they conflict. These multiple truths create a dynamic information environment that makes it difficult to differentiate between fact, theory, and fiction, increasing the likelihood that whatever one thinks is true can easily be confirmed as such. With this sentiment in mind, by following the path of least resistance and developing a sense of informed complacency, we risk developing a sense of omniscient modernity and overestimating our ability to know, because we are certain that we know—or can know—everything, past, present, and future, with the click of a button or the tap of a finger.

Though a dynamic information environment has clear benefits for epistemic curiosity—better science, more informed debates, an engaged citizenry—the tilt of the affordance scale towards complacency always remains a lingering possibility. If we begin to lean in this direction, I contend that informed complacency is likely to take hold and lead us to ignorance and insularity amid a saturated information environment. This can create cognitive traps that, in the worst instance, diminish epistemic curiosity.

One of these traps is the immediate gratification bias, which Tim Urban of Wait But Why has playfully dubbed the “Instant Gratification Monkey”. He describes this predisposition as “thinking only about the present, ignoring lessons from the past and disregarding the future altogether, and concerning yourself entirely with maximizing the ease and pleasure of the current moment.” The growing demand for instant services like Uber, Amazon Prime, Netflix, and Tinder testifies that the notions of ease and immediacy have infiltrated our thinking, compelling us to apply them to every other aspect of our lives. The increasing speed at which we consume information has molded us to rely on and expect instant results for everything. Consequently, we are likely to base our information decisions on this principle and choose not to dig past the surface.

Another trap is found in gluttonous information habits—devouring as much of it as we can, as quickly as possible, solely for the sake of hoarding what we consider to be knowledge. In all our modernity, it seems that we misguidedly assume that consuming information at a faster pace is beneficial to the development of knowledge, when in fact, too much information (information overload) can have overwhelming, negative effects, such as the inability to make the “rich mental connections” Carr describes in his article. This trap is amplified by pressures to stay “in the know” as well as the market of apps and services that capitalize on a pervasive fear of missing out, transforming the pursuit of knowledge from an act of personal curiosity to a social requirement.

The complex algorithms deployed by search engine and social media conglomerates to manage our vast aggregates of information curate content in ways users are likely to experience not only as useful, but pleasurable. These algorithmic curations are purposefully designed to keep information platforms sticky—to keep users engaged and, ultimately, to sell data and attention. These are the conditions under which another cognitive trap arises: the filter bubble. By analyzing each individual user’s interests, the algorithms place them in a filtered environment in which only agreeable information makes its way to the top of their screens. We are thus constantly able to confirm our own personal ideologies, dismissing any news that disagrees with our established viewpoints as “fake news.” In this context, it’s easy to believe everything we read on the internet, even if it’s not true. This makes it difficult to accurately assess the truthfulness and credibility of news sources online, as truth value seems to be measured by virality rather than veracity.
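The feedback loop behind the filter bubble can be sketched in a few lines. This is a toy model with an invented scoring rule—`agreement_score` is not any platform’s actual ranking function—meant only to show how ranking by past engagement pushes dissenting content out of view:

```python
# Toy illustration of a filter bubble: items are ranked purely by how well
# they match a user's past engagement, so agreeable content rises to the top.
# The scoring rule is invented for illustration; real platforms are far more complex.

def agreement_score(item_topics, user_interests):
    """Fraction of an item's topics the user has previously engaged with."""
    if not item_topics:
        return 0.0
    return len(item_topics & user_interests) / len(item_topics)

def personalized_feed(items, user_interests):
    """Sort candidate items so the most agreeable appear first."""
    return sorted(
        items,
        key=lambda it: agreement_score(it["topics"], user_interests),
        reverse=True,
    )

user_interests = {"politics_left", "cycling"}
items = [
    {"title": "Op-ed you already agree with", "topics": {"politics_left"}},
    {"title": "Bike lane expansion", "topics": {"cycling", "urbanism"}},
    {"title": "Opposing viewpoint", "topics": {"politics_right"}},
]

feed = personalized_feed(items, user_interests)
```

Because the opposing item shares no topics with the user’s history, it is ranked last every time—and an item never seen is an item never engaged with, which is how the bubble seals itself.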

Ultimately, with his argument grounded in technological determinism, Carr overlooks the perspective that technology cannot define its own purpose. As its creators and users, we negotiate how technology integrates into our lives. The affordances of digital knowledge repositories create the capacity for unprecedented curiosity and the advancement of human thought. However, they also enable us to be complacent, misinformed, and superficially satisfied; that is to say, an abundance of easily accessed information does not always mean persistent curiosity and improved knowledge. To preserve epistemic curiosity and avoid informed complacency, we should keep reminding ourselves of this and practice conscious information consumption habits. This means recognizing how algorithms filter content; seeking diverse perspectives and content sources; questioning, critiquing, and evaluating news and information; and, perhaps most importantly, venturing past the first page of Google search results. Who knows, you might find something that challenges everything you believe.

Clayton d’Arnault is the Editor of The Disconnect, a new digital magazine that forces you to disconnect from the internet. He is also the Founding Editor of Digital Culturist. Find him on Twitter @cjdarnault.


Headline pic via: Source

Augmented reality makes odd bedfellows out of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.

The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rash of movement to weave them back together. This pattern of reality maintenance, potential breakdown and repair is central to the operation of self and society and it comes into clear view through public responses to digital augmentation.

A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question of “what’s going on here?”. It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that lie apprehensions about the social effects of AR.

When Pokémon Go hit the scene, digital augmentation was thrown into the spotlight. While several observers talked about infusions of fun and community into otherwise atomized locales, another brand of commentary arose in tandem. This second commentary, decidedly darker, portended the breakdown of shared meaning and a loosening grip on reality. As Nathan Jurgenson said at the time, “Augmented reality violates long-held collective assumptions about the nature of reality around us”. But, Jurgenson points out, reality has always been augmented and imperfectly shared. The whole purpose of symbol systems is that they represent what can’t be precisely captured. Symbols are imperfect proxies for experience and are thus necessary for communication and social organization.

What AR does is explicate experiential idiosyncrasies and clarify that the world is not, and needn’t be, what it seems. This explication disrupts smooth flows of interaction like a glitch in the social program. It reveals that reality is collaboratively made rather than a priori. It’s easy to think that augmented reality will be the end of Truth, but such a concern presumes that there was a singular Truth to begin with.

While Pokémon Go faded quickly into banality, underlying anxieties remained salient. Such anxieties about the imminent fall of shared meaning have resurfaced in response to Snapchat’s rollout of a new custom lens feature. “Lenses,” available for about $10 each, build on the company’s playful aesthetic and use of AR as an integral feature of digital image construction and sharing. The idea is that users can create unique lenses for events and special occasions as a fun way to add an element of personal flair. The use of augmented reality is not new for the company, nor is personalization, but this feature is the first to make AR customizable.

Customizable AR takes on a relatively benign form in its manifestation through Snap Lenses. The idea of customizable augmentation, however, creates space for critical consideration about what it means to filter, frame, and rearrange reality with digital social tools.

The capacity to alter an image and then save that image as a documented memory potentially distorts what is and what was and replaces it with what we wish it had been. The wish, the desire, made tangible through augmented alteration, ostensibly changes the facts until facts become irrelevant, truth becomes fuzzy, and representations are severed from their referents.

Anxieties about losing a firm grip on the world are thus amplified through customizability, as the distorting augmented lens adheres not even to a shared template, but is subject to the whims and fancies of each unique person. This is essentially the argument put forth by Curtis Silver at Forbes in his article “Snapchat Now Lets Users Further Disassociate From Reality With Custom Lens Creation”.

Silver contends that customizable Snap Lenses will be the straw that breaks the camel’s back as users escape objectivity and get lost in the swirls of personalized augmentation. “Lenses is a feature in Snapchat that allows users to create a view of a reality that simply does not exist,” he writes. “Now those users can create lenses to fit their actual reality, further blurring the already fragile and thin lines separating perception from real life.” He worries that customizable augmentation not only blurs reality but also indulges idealized images of the self that are inherently unattainable. With personalized augmentation, warns Silver, “[w]e begin to actually believe we are that glowing, perfect creature revealed through Snapchat lenses”.

Silver is not alone. His piece joins a flurry of commentators worrying over reality disintegration caused by “mixed reality” tools—a trend that began long before Snap made AR customizable.

Arguments about the loss of reality via augmentation, though tapping into critical contemporary questions, miss two crucial points. First, social scientists and philosophers have long rejected the idea of a single shared reality. Second, even if there were one shared reality, it’s far from clear that augmentation would muddy it.

As Jurgenson pointed out in his analysis of Pokémon Go, social thinkers have long understood reality as collaboratively constructed. The social world is a process of becoming rather than a stable fact. George Herbert Mead famously said people have multiple selves, a self for every role that they play, while W.I. Thomas declared that reality is that which is real in its consequences. We can even think of widespread truisms like “beauty is in the eye of the beholder” and it becomes clear that selves and realities are neither singular nor revealed but multiple and constructed. The very idea of distorting reality is built on a faulty premise—that reality is concrete and clear cut.

From the starting point of reality as process rather than fact, augmentation doesn’t so much distort the truth as underline and entrench a shared standard.

Augmentation is defined by its relation to a referential object. In the course of daily life, that referential object—agreed upon reality—persists largely unnoticed. Society and interaction work because their construction is ambient and shared meanings can go unsaid. Imposing augmentation makes a referential object obvious. What was unnoticed is now perceived. It’s not simply there, it’s marked as there first. By imagining and externalizing what could be, augmentation gives new meaning to what is.

Augmentation imbues referential objects with newfound authenticity, rendering them raw through juxtaposition. Just as the #nofilter selfie becomes a relevant category only in the face of myriad filters, the pre-augmented world only emerges as “natural” in comparison to that which is digitally adorned.

Snap’s customizable lens feature enables a playful relationship to the world. That playfulness doesn’t loosen our collective grip on reality but produces a reality that is retrospectively concretized as real. The fear of Lenses as a distorting force not only (incorrectly) assumes a singular true reality, it misses the flip side—Lenses reinforce the idea of shared reality by superimposing something “augmented” on top. Playing with imagery (through lenses, filters, etc.) casts the original image as pure and unfiltered. The augmented image gives new character to its original as an organic capture and the idea of shared meaning reasserts itself with fervor renewed.


Jenny Davis is on Twitter @Jenny_L_Davis


In last week’s much-anticipated conversation between Barack Obama and Prince Harry, the pair turned to the topic of social media. Here’s what Obama said:

“Social media is a really powerful tool for people of common interests to convene and get to know each other and connect. But then it’s important for them to get offline, meet in a pub, meet at a place of worship, meet in a neighbourhood and get to know each other.”

The former president’s statements about social media are agreeable and measured. They don’t evoke moral panic, but they do offer a clear warning about the rise of new technologies and potential fall of social relations.

These sentiments feel comfortable and familiar. Indeed, the sober cautioning that digital media ought not replace face-to-face interaction has emerged as a widespread truism, and for valid reasons. Shared corporeality holds distinct qualities that make it valuable and indispensable for human social connection. With the ubiquity of digital devices and mediated social platforms, it is wise to think about how these new forms of community and communication affect social relations, including their impact on local venues where people have traditionally gathered. It is also reasonable to believe that social media pose a degree of threat to community social life, one that individuals in society should actively ward off.

However, just because something is reasonable to believe doesn’t mean it’s true. The relationship between social media and social relations is not a foregone conclusion but an empirical question: does social media make people less social? Luckily, scholars have spent a good deal of time collecting cross-disciplinary evidence from which to draw conclusions. Let’s look at the research:

In a germinal work from 2007, communication scholars Nicole Ellison and colleagues establish a clear link between Facebook use and college students’ social capital. Using survey data, the authors show that Facebook usage positively relates to forming new connections, deepening existing connections, and maintaining connections with dispersed networks (bridging, bonding, and maintaining social capital, respectively). Ellison and her team replicated these findings in 2011 and again in 2014. Burke, Marlow and Lento showed further support for a link between social media and social capital based on a combination of Facebook server logs and participant self-reports, demonstrating that direct interactions through social media help bridge social ties.

Out of sociology, network analyses show that social media use is associated with expanding social networks and increased social opportunities. Responding directly to Robert Putnam’s harrowing Bowling Alone thesis, Keith Hampton, Chul-Joo Lee and Eun Ja Her report on a range of information communication technologies (ICTs) including mobile phones, blogs, social network sites and photo sharing platforms. They find that these ICTs directly and indirectly increase network diversity and do so by encouraging participation in “traditional” settings such as neighbourhood groups, voluntary organizations, religious institutions and public social venues—i.e., the pubs and places of worship Obama touted above. Among older adults, a 2017 study by Anabel Quan-Haase, Guang Ying Mo and Barry Wellman shows that seniors use ICTs to obtain social support and foster various forms of companionship, including arranging in-person visits, thus mitigating the social isolation that too often accompanies old age.

From psychology, researchers repeatedly show a relationship between “personality” and social media usage. For example, separate studies by Teresa Correa et al. and Samuel Gosling and colleagues show that those who are more social offline and define themselves as “extraverts” are also more active on social media. Summarizing this trend, Gosling et al. conclude that “[social media] users appear to extend their offline personalities into the domains of online social networks”. That is, people who are outgoing and have lots of friends continue to be outgoing and have lots of friends. They don’t replace one form of interaction with another, but continue interaction patterns across and between the digital and physical. This also means that people who are generally less social remain less social online. However, this is not an effect of the medium; it is an effect of their existing style of social interaction.

In short, the research shows that social media help build and maintain social relationships, supplement and support face-to-face interaction, and reflect existing socializing styles rather than eroding social skills. That is, ICTs supplement (and at times, enhance) interaction rather than displace it. These supplements and enhancements move between online and offline, as users reinforce relationships in between face-to-face engagements, coordinate plans to meet up, and connect online amidst physical and geographic barriers.

Of course, the picture isn’t entirely rosy. Social media open the doors to new levels and types of bullying, misinformation runs rampant, and the affordances of major platforms like Facebook may well make people feel bad about themselves. But, from the research, it doesn’t seem like social media is making anybody stay home.

Perhaps it is time to retire the sage warning that too many glowing screens will lead to empty bar stools and café counters. The common advice that social media is best used in moderation, and only so long as users keep engaging face-to-face, isn’t negated by the research but shown irrelevant—people are using social media to facilitate, augment, and supplement face-to-face interaction. There’s enough to worry over in this world; thanks to the research, we can take mass social isolation off the list.

Jenny Davis is on Twitter @Jenny_L_Davis


Let me begin with a prescriptive statement: major social media companies ought to consult with trained social researchers to design interfaces, implement policies, and understand the implications of their products. I embark unhesitatingly into prescription because major social media companies have extended beyond apps and platforms, taking on the status of infrastructures and institutions. Pervasive in personal and public life, social media are not just things people use, places they go to, or activities they do. Social media shape the flows of social life, structure civic engagement, and integrate with affect, identity and selfhood.

Public understanding of social media as infrastructural likely underpins mass concern about what social media are doing to society, and what individuals in society are doing with social media. Out of this concern has emerged a vibrant field of commentary on the relationship between social media use and psychological well-being. Spanning academic literature, op-ed pages, and dinner table conversation, the question has seemingly remained on the collective mind: does social media make people feel bad? Last week, Facebook addressed the issue directly.

In a blog post titled Hard Questions: Is Spending Time on Social Media Bad for Us?, Facebook researchers David Ginsberg and Moira Burke review the literature on Facebook use and psychological well-being. Their review yields a wholly unsurprising response to the titular query: sometimes, it depends. Facebook doesn’t make people feel good or bad, they say, but it depends on how people use the technology. Specifically, those who post and engage “actively” feel better, while those who “passively” consume feel worse[1].

I was delighted with the Facebook blog post up until this point. The company engaged social researchers and peer-reviewed content to address a pressing question derived from public concern. But then, out came “it’s how you use it”.

“It’s how you use it” is wholly unsatisfying, philosophically misguided, and a total corporate cop-out that places disproportionate responsibility on individual users while ignoring the politics and power of design.  It’s also a strangely projective conclusion to what began as a reflexive internal examination of technological effects.

If the trendy onslaught of new materialism has taught us anything, it’s that things are not just objects of use, but have meaningful shaping capacities. That objects are efficacious isn’t a new idea, nor is it niche. Within media studies, we can look to Marshall McLuhan who, 50-plus years ago, established quite succinctly that the medium is the message. From STS, we can look to Actor Network Theory (ANT), through which Bruno Latour clarified that while guns don’t kill people on their own, the technology of the gun is integral to violence. We can look to Cyborgology co-editor David Banks’ recent article, addressing the need to articulate design politics as part of engineering education. And I would also direct readers to my own work, in which I keep blathering about “technological affordances.” I’ll come back to affordances in a bit.

Certainly, we theorists of design recognize users and usage as part of the equation. Technology does not determine social or individual outcomes. But, design matters, and when social ills emerge on infrastructural platforms, the onus falls on those platform facilitators to find out what’s the matter with their design.

To be fair, Ginsberg and Burke seem to know this implicitly. In fact, they have an entire section (So what are we doing about it?) in which they talk about current and forthcoming adjustments to the interface. This section is dedicated to prosocial design initiatives including recrafted news feed algorithms, “snooze” options that let users take a break from particular people and content, visibility options following relationship status change, and a suicide prevention tool that triages self-harm through social networks, professional organizations and AI that recognizes users who may be in trouble.

In short, the researchers ask how Facebook relates to psychological well-being, conclude that psychological outcomes are predicated on user behavior, and describe plans to design features that promote a happier user-base. What they don’t do, however, is make a clear connection between platform design and user behavior—a connection that, given the cited research, seems crucial to building a prosocial interface that provides users with an emotional boost. That is, the Facebook blog post doesn’t interrogate how existing and integral design features may afford social versus antisocial usage and for whom. If posting and interacting on Facebook feels good and consuming content feels bad, how do current design features affect the production-consumption balance, for which users, and under what circumstances? And relatedly, what is it about consumption of Facebook content that elicits The Sads? Might the platform be reworked such that consumption is more joyful than depleting?

A clear framework of technological affordances becomes useful here. Affordances refer to how technologies push and pull in varying directions with more or less force. Specifically, technologies can request, demand, encourage, discourage, refuse, and allow. How an object affords will vary across users and situations. Beginning with this conceptual schema—what I call the mechanisms and conditions framework—Facebook’s existing affordances emerge more clearly and design and policy solutions can be developed systematically.

For instance, Facebook’s design features work in several ways to reinforce status quo ideas and popular people while maintaining an ancillary status for those on the margins. Given findings about the psychological effects of production versus consumption, these features then have behavioral consequences and in turn, emotional ones. I’ve picked two examples for illustration, but the list certainly extends.

First, Facebook converges multiple networks into a shared space with uncertain and oft-changing privacy settings. This combination of context collapse and default publicness make sharing undesirable and even untenable for those whose identities, ideas, or relationships put them at risk. For LGBTQ persons, ex-criminals, political radicals, critical race-gender activists, refugees, undocumented persons and the like, Facebook affordances in their current configuration may be profoundly hazardous. When a platform is designed in a way that makes it dangerous for some users to produce, then it also makes it difficult for those users to obtain psycho-social benefits and more likely that they encounter psychological harm.

Second, news feed algorithms use a “rich get richer” strategy in which popular content increases in visibility. That is, users are encouraged to engage with content that has already accrued attention, and discouraged from engaging with content which has gained little steam. Facebook’s metric-driven algorithmic system not only promotes content with mass appeal, but also snowballs attention towards those who already take up significant symbolic space in the network. So, while everyone is allowed to post on Facebook, rewards distribute in a way that encourages the popular kids and keeps the shy ones quiet. By encouraging production from some users and consumption from others, Facebook’s very design allocates not just attention, but also emotion.
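The “rich get richer” dynamic described above is, formally, preferential attachment, and a toy simulation makes the snowball visible. The starting counts and number of rounds here are arbitrary illustrations, not a claim about Facebook’s actual ranking code:

```python
import random

# Toy preferential-attachment feed: each round, one post is surfaced with
# probability proportional to the engagement it has already accrued, and the
# surfaced post gains one more engagement. Numbers are arbitrary.

def run_feed(engagements, rounds, rng):
    for _ in range(rounds):
        # random.choices picks an index weighted by current engagement counts,
        # so already-popular posts are the most likely to be surfaced again.
        pick = rng.choices(range(len(engagements)), weights=engagements)[0]
        engagements[pick] += 1
    return engagements

rng = random.Random(42)
# Post 0 starts with a modest head start; the other four start equal.
final = run_feed([5, 1, 1, 1, 1], 500, rng)
# In most runs the early leader ends up with the lion's share of attention,
# while late or unlucky posts stay near their starting counts.
```

Nothing about post 0 is better; its only advantage is arriving with attention already attached. That is the sense in which the algorithm encourages the popular kids and keeps the shy ones quiet.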

Of course, content consumption is not an essentially depressing practice. But on Facebook, it is. It’s worth examining why. Facebook is designed in a way that makes negative social comparison—and related negative self-feelings—a likely outcome of scrolling through a news feed. In particular, Facebook’s aggressive promotion of happy expression through silly emoji, exclusion of a “dislike” button, the ready availability of gifs, and algorithms that grant visibility preference to images and exclamation points work together to encourage users to share the best of themselves while discouraging banal or unflattering content. By design, Facebook created the highlight reel phenomenon, and onlookers suffer for it. Might Facebook consumption feel different if there were more of those nothing-special dinner pics that everyone loves to complain about?

In response to Facebook’s blog post, a series of commentators accused the company of blaming users. I don’t think it was anything so nefarious. I just think Facebook didn’t have the conceptual tools to accomplish what they meant to accomplish—a critical internal examination and effective pathway towards correction. Hey Facebook, try affordance theory <3.


Jenny Davis is on Twitter @Jenny_L_Davis


[1] For the sake of space, I’m tabling the issue of “active” and “passive.” However, any media studies scholar will tell you that media consumption is not passive, making the active-passive distinction in the blog post problematic.

Every now and again, as I stroll along through the rhythms of teaching and writing, my students stop and remind me of all the assumptions I quietly carry around. I find these moments helpful, if jarring. They usually entail me stuttering and looking confused and then rambling through some response that I was unprepared to give. Next there’s the rumination period during which I think about what I should have said, cringe at what I did (and did not) say, and engage in mildly self-deprecating wonder at my seeming complacency. I’m never upset when my positions are challenged (in fact, I quite like it) but I am usually disappointed and surprised that I somehow presumed my positions didn’t require justification.

Earlier this week, during my Public Sociology course, some very bright students took a critical stance against politics in the discipline.  As a bit of background, much of the content I assign maintains a clear political angle and a distinct left leaning bias. I also talk a lot about writing and editing for Cyborgology, and have on several occasions made note of our explicit orientation towards social justice.  The students wanted to know why sociology and sociologists leaned so far left, and questioned the appropriateness of incorporating politics into scholarly work—public or professional.

I think these questions deserve clear answers. The value of integrating politics with scholarship is not self-evident and it is unfair (and a little lazy) to go about political engagement as though it’s a fact of scholarly life rather than a position or a choice. We academics owe these answers to our students and we public scholars would do well to articulate these answers to the publics with whom we hope to engage.

In an exercise that’s simultaneously for me, my students, and those who read this blog, I’ll talk through the questions of political leanings and their place in academic engagement, respectively.

Let’s begin with the liberal bias. First of all, I want to temper claims of radicalism in the academy. Survey data on academics’ political views show that overall, about 45% of professors maintain progressive ideals, compared with 45% who identify as moderate and 9% as conservative. Conservatives are admittedly underrepresented within the academy, but less than half of all academic faculty identify with the left and of those, only a tiny fraction (about 8%) hold radical leftist views. Still, political leanings vary by discipline, with social scientists in general and sociologists in particular maintaining higher-than-average left-leaning propensities compared with academics in other fields. So, sociologists are collectively progressive. Why?

One guess is that sociology has an inherent appeal to the progressive sensibility and so attracts people with a leftist political bent. However, this explanation falls short when we look to the origins of the field, which are largely conservative and date back to attempts by key figures at finding stability amidst the industrial revolution while equating society to the organic body. Another guess—and I think a partially reasonable one—is that progressive politics are informally rewarded while conservative politics may face censure within Sociology departments. However, having met very few truly conservative trained sociologists (inside or outside of the academy), I suspect that such censure plays only a small role in the general tenor of the discipline.

I believe that a major reason sociologists lean left politically is because we are bombarded by inequality professionally. Our job is to scrutinize social life and in doing so, systemic oppressions become glaring. Sociologists are trained to enact the sociological imagination, a praxis introduced by C. Wright Mills by which personal troubles are understood in relation to public issues. The course of our study reveals clear patterns in which intersections of race, class, geography, and gender predict life chances with sickening precision. We teach about egregious disparities in health care, life expectancy, educational attainment, mental wellbeing, and incarceration rates. Through research and reading, we become intimately familiar with the voices of those on the wrong side of these rates—the individual persons whose troubles represent public issues. In my own collaborative research, I’ve dealt with issues of race and disability stigma, social responses to intimate partner violence, and the costs of being a woman during task-based social interaction. To know these patterns, connect them to real people’s lives, and understand how policy and culture perpetuate inequitable systems, tends to foster a progressive sensibility.

But even if this sensibility is both understandable and tightly rooted in empirical realities, is it appropriate as part of professional practice? For me, it is. I strongly support the inclusion of politics into pedagogy, public engagement, and scholarly production. The idea that scholars are only scholars—impartial vessels of knowledge—is disingenuous. Scholars are people, and as people, we have politics. Pretending those politics aren’t there obscures the discourses in which we engage across professional arenas. Our intellectual projects are inextricable from political agendas. From the research questions we ask, to the ways we frame our findings, to the decisions we make about how to disseminate our work and ideas, politics are ever present. From an intellectual standpoint, making those politics as transparent as possible increases the credibility and robustness of scholarly bodies of work. Scholarly argumentation goes much deeper when all parties lay bare their assumptions. From a human and ethical standpoint, I contend that there is an obligation to take what we know and do something useful with it. To willingly ignore patterns of injustice and oppression is a moral decision, just as is the choice to act politically against them. One’s position as a scholar/academic does not recuse that person from the dynamics of social life. We are all a part of society, and maintaining a position of passive objectivity is equivalent to active complicity in the way things are.

I appreciate that my students are critical in the classroom and that they push me to defend my pedagogy and scholarly practice. I’ll share this post with them and hope that they feel empowered to keep the conversation going.


Jenny Davis is on Twitter @Jenny_L_Davis
