If I were to ask you a question, and neither of us knew the answer, what would you do? You’d Google it, right? Me too. After you figure out the right wording and hit the search button, at what point would you be satisfied enough with Google’s answer to say that you’ve gained new knowledge? Judging from the current socio-technical circumstances, I’d be hard-pressed to say that many of us would make it past the featured snippet, let alone the first page of results.

The internet—along with the complementary technologies we’ve developed to increase its accessibility—enriches our lives by affording us access to the largest information repository ever conceived. Despite physical barriers, we can share, explore, and store facts, opinions, theories, and philosophies alike. As such, this vast repository contains many answers to many questions derived from many distinct perspectives. These socio-technical circumstances are undeniably promising for the distribution and development of knowledge. However, in 2008, tech critic Nicholas Carr posed a counterargument about the internet and its impact on our cognitive abilities by asking readers a simple question: is Google making us stupid? In his controversial article published by The Atlantic, Carr blames the internet for our diminishing ability to form “rich mental connections,” and supposes that technology and the internet are instruments of intentional distraction. While I agree with Carr’s sentiment that the way we think has changed, I don’t agree that the fault falls on the internet. I believe we expect too much of Google and too little of ourselves; therefore, the fault (if there is fault) is largely our own.

Here’s why: Carr’s argument hinges on the idea that technology definitively determines our society’s structural and cultural values—a theory known as technological determinism. However, he fails to recognize the theory of affordance in this argument. Affordances refer to the way in which the features of a technology interact with agentic users and diverse circumstances. While the technical and material elements of technology do have shaping effects, they are far from determining. Affordance theory suggests that the technologies we use and the internet infrastructures from which they draw contain multipotentiality: they afford the potential to indulge in curiosity and develop robust knowledge while simultaneously affording the potential to relinquish curiosity and develop complacency through the comforts of convenience and self-confirmation.

Considering the initial sentiment of Carr’s argument (the way we think has changed) together with affordance theory, we can derive two critical questions: have we embraced complacency and become too comfortable with the internet’s knowledge production capabilities? If so, by choosing to rest on our laurels and exploit this affordance, what happens to epistemic curiosity?

There’s a lot to unpack, but in order to address these questions, we need to examine the potential socio-technical circumstances that could lead us down a path of declining epistemic curiosity, starting with the twin ideas of convenience and complacency.

Complacency is characterized by the feeling of being satisfied with how things are and not wanting to try to make them better. Clearly, in terms of making life more efficient, we are nowhere near complacent, as we constantly strive to streamline our lives through innovation—from fire to the invention of (arguably) our greatest creation to date and the basis for our modernity: information and communication technology. This technology affords us the ability to live more convenient, effortless lives by providing access to the world’s knowledge with the tap of a finger and the ability to do more in a few moments than previous generations could do in hours.

For instance, education has become much more convenient. Thanks to the internet, you can take advantage of distance learning programs and earn a degree on your own terms, without physically attending class. The workforce has also become more flexible, as technology allows us to maximize time and stay on top of our work through complete mobility, and in some cases, complete task automation. Economically, the internet allows us to sell and consume goods and services without the physical limitations of brick and mortar. It also allows us to communicate with friends, family, and strangers over long distances, document our lives, access current events with ease, and answer a question within moments of it popping into our heads.

These conveniences must make life better, right?

Think of these conveniences like your bed on a cold morning: warm and comfortable, convincing you to hit snooze and stay a while longer. This warmth and comfort can be a source of sustenance and strength; however, if we stay too long, comfort can get the best of us. We might become lazy, hesitating to diverge from the path of least resistance.

Just as it is inadvisable to regularly snooze until noon, it is concerning when information and knowledge are accessed too easily, too quickly. With the increased accessibility and speed of information, it’s easy to become desensitized to curiosity—the very intuition that is responsible for our technological progress—in the same way that you are desensitized to your breathing pattern or heartbeat. By following the path of least resistance, we can create a dynamic in which we perceive the internet as a mere convenience instead of a tool to stimulate our thoughts about the world around us. This convenience dynamic allows us to settle into a state of complacency in which we are certain that everything we think and believe can be justified through a quick Google search—because, in fact, it can be. That feeling of certainty and comfort that stems from this technical ability to self-confirm is what I call informed complacency.

The idea of informed complacency is especially fraught because it signifies a turning point in our perception of contemporary knowledge. Ultimately, it can encourage us to develop an underlying sense of omniscient modernity, which Adam Kirsch discusses in his article for The New Yorker, “Are We Really So Modern?”:

“Modernity cannot be identified with any particular technological or social breakthrough. Rather, it is a subjective condition, a feeling or an intuition that we are in some profound sense different from the people who lived before us. Modern life, which we tend to think of as an accelerating series of gains in knowledge, wealth, and power over nature, is predicated on a loss: the loss of contact with the past.”

In the past, nothing was certain. The information our ancestors had on the world and universe was constantly being overturned and molded into something else entirely. Renowned thinkers from across the ages built and destroyed theories like they were children with LEGO bricks—especially during the Golden Age of Athens (fifth and fourth centuries B.C.) and the Enlightenment (seventeenth and eighteenth centuries A.D.). Each time they thought they had it figured out, the world as they knew it came crashing down with a new discovery:

“The discovery of America destroyed established geography, the Reformation destroyed the established Church, and astronomy destroyed the established cosmos. Everything that educated people believed about reality turned out to be an error or, worse, a lie. It’s impossible to imagine what, if anything, could produce a comparable effect on us today.”

Today, we still face uncertainty, albeit a different kind. With the glut of empirical evidence on the internet, multiple versions of objective reality flourish even as they conflict. These multiple truths create a dynamic information environment that makes it difficult to differentiate between fact, theory, and fiction, increasing the likelihood that whatever one thinks is true can easily be confirmed as such. With this sentiment in mind, by following the path of least resistance and settling into informed complacency, we risk developing a sense of omniscient modernity and overestimating our ability to know, because we are certain that we know—or can know—everything, past, present, and future, with the click of a button or the tap of a finger.

Though a dynamic information environment has clear benefits for epistemic curiosity—better science, more informed debates, an engaged citizenry—the tilt of the affordance scale towards complacency always remains a lingering possibility. If we begin to lean in this direction, I contend that informed complacency is likely to take hold and lead us to ignorance and insularity amid a saturated information environment. This can create cognitive traps that, in the worst instance, diminish epistemic curiosity.

One of these traps is called the immediate gratification bias, which Tim Urban of Wait But Why has playfully dubbed the “Instant Gratification Monkey”. He describes this predisposition as “thinking only about the present, ignoring lessons from the past and disregarding the future altogether, and concerning yourself entirely with maximizing the ease and pleasure of the current moment.” The growing demand for instant services like Uber, Amazon Prime, Netflix, and Tinder testifies that the notions of ease and immediacy have infiltrated our thought processes, compelling us to apply them to every other aspect of our lives. The increase in the speed at which we consume information has molded us to rely on and expect instant results for everything. Consequently, we are likely to base our information decisions on this principle and choose not to dig past the surface.

Another trap is found in gluttonous information habits—devouring as much information as we can, as quickly as possible, solely for the sake of hoarding what we consider to be knowledge. In all our modernity, it seems that we misguidedly assume that consuming information at a faster pace is beneficial to the development of knowledge, when in fact too much information (information overload) can have overwhelming, negative effects, such as the inability to make the “rich mental connections” Carr describes in his article. This trap is amplified by pressures to stay “in the know” as well as the market of apps and services that capitalize on a pervasive fear of missing out, transforming the pursuit of knowledge from an act of personal curiosity to a social requirement.

The complex algorithms deployed by search engine and social media conglomerates to manage our vast aggregates of information curate content in ways users are likely to experience not only as useful, but pleasurable. These algorithmic curations are purposefully designed to keep information platforms sticky: to keep users engaged and, ultimately, to sell data and attention. These are the conditions under which another cognitive trap arises: the filter bubble. By analyzing each individual user’s interests, the algorithms place them in a filtered environment in which only agreeable information makes its way to the top of their screens. Therefore, we are constantly able to confirm our own personal ideologies, rendering any news that disagrees with our established viewpoints “fake news.” In this context, it’s easy to believe everything we read on the internet, even if it’s not true. This makes it difficult to accurately assess the truthfulness and credibility of news sources online, as truth value seems to be measured by virality rather than veracity.
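
To see how such a loop can emerge, consider the toy sketch below. It is a minimal illustration built on invented assumptions (the scoring rule, the weights, and the field names are all mine) and does not represent any real platform’s ranking system. The point is structural: content matching a user’s inferred interests rises, engagement with that content reinforces those interests, and agreeable material compounds at the top of the feed.

```python
# Toy sketch of engagement-driven personalization producing a filter bubble.
# The scoring rule, weights, and field names are invented for illustration;
# no real platform's algorithm is represented here.

def rank_feed(items, user_interests):
    """Order content so that items matching inferred interests rise."""
    def score(item):
        agreeable = len(item["topics"] & user_interests)  # self-confirming match
        return agreeable * 2 + item["popularity"]         # arbitrary weighting
    return sorted(items, key=score, reverse=True)

def record_click(item, user_interests):
    """Engagement reinforces the very interests that ranked the item highly."""
    user_interests |= item["topics"]

items = [
    {"topics": {"view_a"}, "popularity": 5},
    {"topics": {"view_b"}, "popularity": 6},
]
interests = {"view_a"}

# The agreeable item outranks the more popular one, earns the click, and
# deepens the profile that will rank the next feed the same way.
feed = rank_feed(items, interests)
record_click(feed[0], interests)
print(feed[0]["topics"])  # {'view_a'}
```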

Ultimately, with his argument grounded in technological determinism, Carr overlooks the perspective that technology cannot define its own purpose. As its creators and users, we negotiate how technology integrates into our lives. The affordances of digital knowledge repositories create the capacity for unprecedented curiosity and the advancement of human thought. However, they also enable us to be complacent, misinformed, and superficially satisfied; that is to say, an abundance of easily accessed information does not always mean persistent curiosity and improved knowledge. To preserve epistemic curiosity and avoid informed complacency, we should keep reminding ourselves of this and practice conscious information consumption habits. This means recognizing how algorithms filter content; seeking diverse perspectives and content sources; questioning, critiquing, and evaluating news and information; and, perhaps most importantly, venturing past the first page of Google search results. Who knows, you might find something that challenges everything you believe.

Clayton d’Arnault is the Editor of The Disconnect, a new digital magazine that forces you to disconnect from the internet. He is also the Founding Editor of Digital Culturist. Find him on Twitter @cjdarnault.

 


Augmented reality makes odd bedfellows out of pleasure and discomfort. Overlaying physical objects with digital data can be fun and creative. It can generate layered histories of place, guide tourists through a city, and gamify ordinary landscapes. It can also raise weighty philosophical questions about the nature of reality.

The world is an orchestrated accomplishment, but as a general rule, humans treat it like a fact. When the threads of social construction begin to unravel, there is a rash of movement to weave them back together. This pattern of reality maintenance, potential breakdown and repair is central to the operation of self and society and it comes into clear view through public responses to digital augmentation.

A basic sociological tenet is that interaction and social organization are only possible through shared definitions of reality. For meaningful interaction to commence, interactants must first agree on the question of “what’s going on here?”. It is thus understandable that technological alteration, especially when applied in fractured and nonuniform ways, would evoke concern about disruptions to the smooth fabric of social life. It is here, in this disruptive potential, that apprehensions about the social effects of AR lie.

When Pokémon Go hit the scene, digital augmentation was thrown into the spotlight. While several observers talked about infusions of fun and community into otherwise atomized locales, another brand of commentary arose in tandem. This second commentary, decidedly darker, portended the breakdown of shared meaning and a loosening grip on reality. As Nathan Jurgenson said at the time, “Augmented reality violates long-held collective assumptions about the nature of reality around us”. But, Jurgenson points out, reality has always been augmented and imperfectly shared. The whole purpose of symbol systems is that they represent what can’t be precisely captured. Symbols are imperfect proxies for experience and are thus necessary for communication and social organization.

What AR does is explicate experiential idiosyncrasies and clarify that the world is not, and needn’t be, what it seems. This explication disrupts smooth flows of interaction like a glitch in the social program. It reveals that reality is collaboratively made rather than a priori. It’s easy to think that augmented reality will be the end of Truth, but such a concern presumes that there was a singular Truth to begin with.

While Pokémon Go faded quickly into banality, underlying anxieties remained salient. Such anxieties about the imminent fall of shared meaning have resurfaced in response to Snapchat’s rollout of a new custom lens feature. “Lenses,” available for about $10 each, build on the company’s playful aesthetic and use of AR as an integral feature of digital image construction and sharing. The idea is that users can create unique lenses for events and special occasions as a fun way to add an element of personal flair. The use of augmented reality is not new for the company, nor is personalization, but this feature is the first to make AR customizable.

Customizable AR takes on a relatively benign form in its manifestation through Snap Lenses. The idea of customizable augmentation, however, creates space for critical consideration about what it means to filter, frame, and rearrange reality with digital social tools.

The capacity to alter an image and then save that image as a documented memory potentially distorts what is and what was and replaces it with what we wish it had been. The wish, the desire, made tangible through augmented alteration, ostensibly changes the facts until facts become irrelevant, truth becomes fuzzy, and representations are severed from their referents.

Anxieties about losing a firm grip on the world are thus amplified through customizability, as the distorting augmented lens adheres not even to a shared template, but is subject to the whims and fancies of each unique person. This is essentially the argument put forth by Curtis Silver at Forbes in his article “Snapchat Now Lets Users Further Disassociate From Reality With Custom Lens Creation”.

Silver contends that customizable Snap Lenses will be the straw that breaks the camel’s back as users escape objectivity and get lost in the swirls of personalized augmentation. “Lenses is a feature in Snapchat that allows users to create a view of a reality that simply does not exist,” he writes. “Now those users can create lenses to fit their actual reality, further blurring the already fragile and thin lines separating perception from real life.” He worries that customizable augmentation not only blurs reality but also indulges idealized images of the self that are inherently unattainable. With personalized augmentation, warns Silver, “[w]e begin to actually believe we are that glowing, perfect creature revealed through Snapchat lenses”.

Silver is not alone. His piece joins a flurry of commentators worrying over reality disintegration caused by “mixed reality” tools—a trend that began long before Snap made AR customizable.

Arguments about the loss of reality via augmentation, though tapping into critical contemporary questions, miss two crucial points. First, social scientists and philosophers have long rejected the idea of a single shared reality. Second, even if there were one shared reality, it’s far from clear that augmentation would muddy it.

As Jurgenson pointed out in his analysis of Pokémon Go, social thinkers have long understood reality as collaboratively constructed. The social world is a process of becoming rather than a stable fact. George Herbert Mead famously said people have multiple selves, a self for every role that they play, while W.I. Thomas declared that reality is that which is real in its consequences. We can even think of widespread truisms like “beauty is in the eye of the beholder” and it becomes clear that selves and realities are neither singular nor revealed but multiple and constructed. The very idea of distorting reality is built on a faulty premise—that reality is concrete and clear cut.

From the starting point of reality as process rather than fact, augmentation doesn’t so much distort the truth as underline and entrench a shared standard.

Augmentation is defined by its relation to a referential object. In the course of daily life, that referential object—agreed upon reality—persists largely unnoticed. Society and interaction work because their construction is ambient and shared meanings can go unsaid. Imposing augmentation makes a referential object obvious. What was unnoticed is now perceived. It’s not simply there; it’s marked as there first. By imagining and externalizing what could be, augmentation gives new meaning to what is.

Augmentation imbues referential objects with newfound authenticity, rendering them raw through juxtaposition. Just as the #nofilter selfie becomes a relevant category only in the face of myriad filters, the pre-augmented world only emerges as “natural” in comparison to that which is digitally adorned.

Snap’s customizable lens feature enables a playful relationship to the world. That playfulness doesn’t loosen our collective grip on reality but produces a reality that is retrospectively concretized as real. The fear of Lenses as a distorting force not only (incorrectly) assumes a singular true reality, it misses the flip side—Lenses reinforce the idea of shared reality by superimposing something “augmented” on top. Playing with imagery (through lenses, filters, etc.) casts the original image as pure and unfiltered. The augmented image gives new character to its original as an organic capture and the idea of shared meaning reasserts itself with fervor renewed.

 

Jenny Davis is on Twitter @Jenny_L_Davis


In last week’s much-anticipated conversation between Barack Obama and Prince Harry, the pair turned to the topic of social media. Here’s what Obama said:

“Social media is a really powerful tool for people of common interests to convene and get to know each other and connect. But then it’s important for them to get offline, meet in a pub, meet at a place of worship, meet in a neighbourhood and get to know each other.”

The former president’s statements about social media are agreeable and measured. They don’t evoke moral panic, but they do offer a clear warning about the rise of new technologies and potential fall of social relations.

These sentiments feel comfortable and familiar. Indeed, the sober cautioning that digital media ought not replace face-to-face interaction has emerged as a widespread truism, and for valid reasons. Shared corporeality holds distinct qualities that make it valuable and indispensable for human social connection. With the ubiquity of digital devices and mediated social platforms, it is wise to think about how these new forms of community and communication affect social relations, including their impact on local venues where people have traditionally gathered. It is also reasonable to believe that social media pose a degree of threat to community social life, one that individuals in society should actively ward off.

However, just because something is reasonable to believe doesn’t mean it’s true. The relationship between social media and social relations is not a foregone conclusion but an empirical question: does social media make people less social? Luckily, scholars have spent a good deal of time collecting cross-disciplinary evidence from which to draw conclusions. Let’s look at the research:

In a germinal work from 2007, communication scholars Nicole Ellison and colleagues establish a clear link between Facebook use and college students’ social capital. Using survey data, the authors show that Facebook usage positively relates to forming new connections, deepening existing connections, and maintaining connections with dispersed networks (bridging, bonding, and maintaining social capital, respectively). Ellison and her team replicated similar findings in 2011 and again in 2014. Burke, Marlow, and Lento showed further support for a link between social media and social capital based on a combination of Facebook server logs and participant self-reports, demonstrating that direct interactions through social media help bridge social ties.

Out of sociology, network analyses show that social media use is associated with expanding social networks and increased social opportunities. Responding directly to Robert Putnam’s harrowing Bowling Alone thesis, Keith Hampton, Chul-Joo Lee and Eun Ja Her report on a range of information communication technologies (ICTs) including mobile phones, blogs, social network sites and photo sharing platforms. They find that these ICTs directly and indirectly increase network diversity and do so by encouraging participation in “traditional” settings such as neighbourhood groups, voluntary organizations, religious institutions and public social venues—i.e., the pubs and places of worship Obama touted above. Among older adults, a 2017 study by Anabel Quan-Haase, Guang Ying Mo and Barry Wellman shows that seniors use ICTs to obtain social support and foster various forms of companionship, including arranging in-person visits, thus mitigating the social isolation that too often accompanies old age.

From psychology, researchers repeatedly show a relationship between “personality” and social media usage. For example, separate studies by Teresa Correa et al. and Samuel Gosling and colleagues show that those who are more social offline and define themselves as “extraverts” are also more active on social media. Summarizing this trend, Gosling et al. conclude that “[social media] users appear to extend their offline personalities into the domains of online social networks”. That is, people who are outgoing and have lots of friends continue to be outgoing and have lots of friends. They don’t replace one form of interaction with another, but continue interaction patterns across and between the digital and physical. This also means that people who are generally less social remain less social online. However, this is not an effect of the medium; it is an effect of their existing style of social interaction.

In short, the research shows that social media help build and maintain social relationships, supplement and support face-to-face interaction, and reflect existing socializing styles rather than eroding social skills. That is, ICTs supplement (and at times, enhance) interaction rather than displace it. These supplements and enhancements move between online and offline, as users reinforce relationships in between face-to-face engagements, coordinate plans to meet up, and connect online amidst physical and geographic barriers.

Of course, the picture isn’t entirely rosy. Social media open the doors to new levels and types of bullying, misinformation runs rampant, and the affordances of major platforms like Facebook may well make people feel bad about themselves. But, from the research, it doesn’t seem like social media is making anybody stay home.

Perhaps it is time to retire the sage warning that too many glowing screens will lead to empty bar stools and café counters. The common advice that social media is best used in moderation, and only so long as users keep engaging face-to-face, isn’t negated by the research but shown irrelevant—people are using social media to facilitate, augment, and supplement face-to-face interaction. There’s enough to worry over in this world; thanks to the research, we can take mass social isolation off the list.

Jenny Davis is on Twitter @Jenny_L_Davis


Let me begin with a prescriptive statement: major social media companies ought to consult with trained social researchers to design interfaces, implement policies, and understand the implications of their products. I embark unhesitatingly into prescription because major social media companies have extended beyond apps and platforms, taking on the status of infrastructures and institutions. Pervasive in personal and public life, social media are not just things people use, places they go to, or activities they do. Social media shape the flows of social life, structure civic engagement, and integrate with affect, identity and selfhood.

Public understanding of social media as infrastructural likely underpins mass concern about what social media are doing to society, and what individuals in society are doing with social media. Out of this concern has emerged a vibrant field of commentary on the relationship between social media use and psychological well-being. Spanning academic literature, op-ed pages, and dinner table conversation, the question has seemingly remained on the collective mind: does social media make people feel bad? Last week, Facebook addressed the issue directly.

In a blog post titled “Hard Questions: Is Spending Time on Social Media Bad for Us?”, Facebook researchers David Ginsberg and Moira Burke review the literature on Facebook use and psychological well-being. Their review yields a wholly unsurprising response to the titular query: sometimes, it depends. Facebook doesn’t make people feel good or bad, they say, but it depends on how people use the technology. Specifically, those who post and engage “actively” feel better, while those who “passively” consume feel worse[1].

I was delighted with the Facebook blog post up until this point. The company engaged social researchers and peer-reviewed content to address a pressing question derived from public concern. But then, out came “it’s how you use it”.

“It’s how you use it” is wholly unsatisfying, philosophically misguided, and a total corporate cop-out that places disproportionate responsibility on individual users while ignoring the politics and power of design.  It’s also a strangely projective conclusion to what began as a reflexive internal examination of technological effects.

If the trendy onslaught of new materialism has taught us anything, it’s that things are not just objects of use, but have meaningful shaping capacities. That objects are efficacious isn’t a new idea, nor is it niche. Within media studies, we can look to Marshall McLuhan who, 50-plus years ago, established quite succinctly that the medium is the message. From STS, we can look to Actor Network Theory (ANT), through which Bruno Latour clarified that while guns don’t kill people on their own, the technology of the gun is integral to violence. We can look to Cyborgology co-editor David Banks’ recent article, addressing the need to articulate design politics as part of engineering education. And I would also direct readers to my own work, in which I keep blathering about “technological affordances.” I’ll come back to affordances in a bit.

Certainly, we theorists of design recognize users and usage as part of the equation. Technology does not determine social or individual outcomes. But, design matters, and when social ills emerge on infrastructural platforms, the onus falls on those platform facilitators to find out what’s the matter with their design.

To be fair, Ginsberg and Burke seem to know this implicitly. In fact, they have an entire section (So what are we doing about it?) in which they talk about current and forthcoming adjustments to the interface. This section is dedicated to prosocial design initiatives including recrafted news feed algorithms, “snooze” options that let users take a break from particular people and content, visibility options following relationship status change, and a suicide prevention tool that triages self-harm through social networks, professional organizations and AI that recognizes users who may be in trouble.

In short, the researchers ask how Facebook relates to psychological well-being, conclude that psychological outcomes are predicated on user behavior, and describe plans to design features that promote a happier user-base. What they don’t do, however, is make a clear connection between platform design and user behavior—a connection that, given the cited research, seems crucial to building a prosocial interface that provides users with an emotional boost. That is, the Facebook blog post doesn’t interrogate how existing and integral design features may afford social versus antisocial usage and for whom. If posting and interacting on Facebook feels good and consuming content feels bad, how do current design features affect the production-consumption balance, for which users, and under what circumstances? And relatedly, what is it about consumption of Facebook content that elicits The Sads? Might the platform be reworked such that consumption is more joyful than depleting?

A clear framework of technological affordances becomes useful here. Affordances refer to how technologies push and pull in varying directions with more or less force. Specifically, technologies can request, demand, encourage, discourage, refuse, and allow. How an object affords will vary across users and situations. Beginning with this conceptual schema—what I call the mechanisms and conditions framework—Facebook’s existing affordances emerge more clearly and design and policy solutions can be developed systematically.

For instance, Facebook’s design features work in several ways to reinforce status quo ideas and popular people while maintaining an ancillary status for those on the margins. Given findings about the psychological effects of production versus consumption, these features then have behavioral consequences and, in turn, emotional ones. I’ve picked two examples for illustration, but the list certainly extends.

First, Facebook converges multiple networks into a shared space with uncertain and oft-changing privacy settings. This combination of context collapse and default publicness makes sharing undesirable and even untenable for those whose identities, ideas, or relationships put them at risk. For LGBTQ persons, ex-criminals, political radicals, critical race-gender activists, refugees, undocumented persons and the like, Facebook affordances in their current configuration may be profoundly hazardous. When a platform is designed in a way that makes it dangerous for some users to produce, then it also makes it difficult for those users to obtain psycho-social benefits and more likely that they encounter psychological harm.

Second, news feed algorithms use a “rich get richer” strategy in which popular content increases in visibility. That is, users are encouraged to engage with content that has already accrued attention, and discouraged from engaging with content which has gained little steam. Facebook’s metric-driven algorithmic system not only promotes content with mass appeal, but also snowballs attention towards those who already take up significant symbolic space in the network. So, while everyone is allowed to post on Facebook, rewards distribute in a way that encourages the popular kids and keeps the shy ones quiet. By encouraging production from some users and consumption from others, Facebook’s very design allocates not just attention, but also emotion.
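
As a rough illustration of that snowball, here is a toy simulation of preferential attachment, the generic "rich get richer" mechanism described above. The proportional-visibility rule, starting counts, and round count are invented assumptions for demonstration; this is not Facebook’s actual ranking logic.

```python
import random

# Toy "rich get richer" simulation (preferential attachment). The rule that
# new visibility is proportional to existing engagement is an invented
# illustration, not Facebook's actual algorithm.

def simulate(engagements, rounds=1000, seed=0):
    """Each round, one new engagement lands on a post with probability
    proportional to the engagement that post already has."""
    rng = random.Random(seed)
    posts = list(engagements)
    for _ in range(rounds):
        r = rng.uniform(0, sum(posts))
        cumulative = 0
        for i, count in enumerate(posts):
            cumulative += count
            if r <= cumulative:
                posts[i] += 1
                break
    return posts

# Two posts start nearly equal; the small head start snowballs into a
# durable visibility gap.
print(simulate([10, 12]))
```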

Of course, content consumption is not an essentially depressing practice. But on Facebook, it is. It’s worth examining why. Facebook is designed in a way that makes negative social comparison, and related negative self-feelings, a likely outcome of scrolling through a news feed. In particular, Facebook’s aggressive promotion of happy expression through silly emoji, exclusion of a “dislike” button, the ready availability of gifs, and algorithms that grant visibility preference to images and exclamation points work together to encourage users to share the best of themselves while discouraging banal or unflattering content. By design, Facebook created the highlight reel phenomenon, and onlookers suffer for it. Might Facebook consumption feel different if there were more of those nothing-special dinner pics that everyone loves to complain about?

In response to Facebook’s blog post, a series of commentators accused the company of blaming users. I don’t think it was anything so nefarious. I just think Facebook didn’t have the conceptual tools to accomplish what they meant to accomplish—a critical internal examination and effective pathway towards correction. Hey Facebook, try affordance theory <3.

 

Jenny Davis is on Twitter @Jenny_L_Davis


[1] For the sake of space, I’m tabling the issue of “active” and “passive.” However, any media studies scholar will tell you that media consumption is not passive, making the active-passive distinction in the blog post problematic.

Every now and again, as I stroll along through the rhythms of teaching and writing, my students stop and remind me of all the assumptions I quietly carry around. I find these moments helpful, if jarring. They usually entail me stuttering and looking confused and then rambling through some response that I was unprepared to give. Next there’s the rumination period during which I think about what I should have said, cringe at what I did (and did not) say, and engage in mildly self-deprecating wonder at my seeming complacency. I’m never upset when my positions are challenged (in fact, I quite like it) but I am usually disappointed and surprised that I somehow presumed my positions didn’t require justification.

Earlier this week, during my Public Sociology course, some very bright students took a critical stance against politics in the discipline.  As a bit of background, much of the content I assign maintains a clear political angle and a distinct left leaning bias. I also talk a lot about writing and editing for Cyborgology, and have on several occasions made note of our explicit orientation towards social justice.  The students wanted to know why sociology and sociologists leaned so far left, and questioned the appropriateness of incorporating politics into scholarly work—public or professional.

I think these questions deserve clear answers. The value of integrating politics with scholarship is not self-evident and it is unfair (and a little lazy) to go about political engagement as though it’s a fact of scholarly life rather than a position or a choice. We academics owe these answers to our students and we public scholars would do well to articulate these answers to the publics with whom we hope to engage.

In an exercise that’s simultaneously for me, my students, and those who read this blog, I’ll talk through the questions of political leanings and their place in academic engagement, respectively.

Let’s begin with the liberal bias. First of all, I want to temper claims of radicalism in the academy. Survey data of academics’ political views show that overall, about 45% of professors maintain progressive ideals, compared with 45% who identify as moderate and 9% as conservative. Conservatives are admittedly underrepresented within the academy, but less than half of all academic faculty identify with the left and of those, only a tiny fraction (about 8%) hold radical leftist views. Still, political leanings vary by discipline, with social scientists in general and sociologists in particular maintaining higher-than-average left-leaning propensities compared with academics in other fields. So, sociologists are collectively progressive. Why?

One guess is that sociology has an inherent appeal to the progressive sensibility and so attracts people with a leftist political bent. However, this explanation falls short when we look to the origins of the field, which are largely conservative and date back to attempts by key figures at finding stability amidst the industrial revolution while equating society to the organic body. Another guess—and I think a partially reasonable one—is that progressive politics are informally rewarded while conservative politics may face censure within Sociology departments. However, having met very few truly conservative trained sociologists (inside or outside of the academy), I suspect that negative effects of conservative dissent play only a small role in the general tenor of the discipline.

I believe that a major reason sociologists lean left politically is because we are bombarded by inequality professionally. Our job is to scrutinize social life and in doing so, systemic oppressions become glaring. Sociologists are trained to enact the sociological imagination, a praxis introduced by C. Wright Mills by which personal troubles are understood in relation to public issues. The course of our study reveals clear patterns in which intersections of race, class, geography, and gender predict life chances with sickening precision. We teach about egregious disparities in health care, life expectancy, educational attainment, mental wellbeing, and incarceration rates. Through research and reading, we become intimately familiar with the voices of those on the wrong side of these rates—the individual persons whose troubles represent public issues. In my own collaborative research, I’ve dealt with issues of race and disability stigma, social responses to intimate partner violence, and the costs of being a woman during task-based social interaction. To know these patterns, connect them to real people’s lives, and understand how policy and culture perpetuate inequitable systems, tends to foster a progressive sensibility.

But even if this sensibility is both understandable and tightly rooted in empirical realities, is it appropriate as part of professional practice? For me, it is. I strongly support the inclusion of politics into pedagogy, public engagement, and scholarly production. The idea that scholars are only scholars—impartial vessels of knowledge—is disingenuous. Scholars are people, and as people, we have politics. Pretending those politics aren’t there obscures the discourses in which we engage across professional arenas. Our intellectual projects are inextricable from political agendas. From the research questions we ask, to the ways we frame our findings, to the decisions we make about how to disseminate our work and ideas, politics are ever present. From an intellectual standpoint, making those politics as transparent as possible increases the credibility and robustness of scholarly bodies of work. Scholarly argumentation goes much deeper when all parties lay bare their assumptions. From a human and ethical standpoint, I contend that there is an obligation to take what we know and do something useful with it. To willingly ignore patterns of injustice and oppression is a moral decision, just as is the choice to act politically against them. One’s position as a scholar/academic does not exempt that person from the dynamics of social life. We are all a part of society, and maintaining a position of passive objectivity is equivalent to active complicity in the way things are.

I appreciate that my students are critical in the classroom and that they push me to defend my pedagogy and scholarly practice. I’ll share this post with them and hope that they feel empowered to keep the conversation going.

 

Jenny Davis is on Twitter @Jenny_L_Davis


 

Findings from a recent study out of Stanford University Business School by Yilun Wang and Michal Kosinski indicate that AI can correctly identify sexual preference based on images of a person’s face. The study used 35,000 images from a popular U.S. dating site to test the accuracy of algorithms in determining self-identified sexual orientation. The sample images included cis-white people who identify as either heterosexual or homosexual. The researchers’ algorithm correctly assessed the sexual identity of men 81% of the time and of women 74% of the time. When the software had access to multiple images of each face, accuracy increased to 91% for images of men and 84% for images of women. In contrast, humans correctly discerned men’s sexual identity 61% of the time and women’s only 54%.

The authors of the study note that algorithmic detection was based on “gender atypical” expressions and “grooming” practices along with fixed facial features, such as forehead size and nose length. Homosexual-identified men appeared more feminized than their heterosexual counterparts, while lesbian women appeared more masculine. Wang and Kosinski argue that their findings show “strong support” for the theory that prenatal hormone exposure predisposes people to same-sex attraction and leaves clear markers in both physiology and behavior. According to the authors’ analysis and subsequent media coverage, people with same-sex attraction were “born that way” and the essential nature of sexuality was revealed through a sophisticated technological apparatus.

While the authors demonstrate an impressive show of programming, they employ bad science, faulty philosophy, and irresponsible politics. This is because the study and its surrounding commentary maintain two lines of essentialism, and both are wrong.

The first line of essentialism is biological and emerges from the “born this way” interpretation of the data. The idea that one’s body is a causal reflection of ingrained physiology disregards scores of social and biological science that demonstrate a clear interrelationship between culture and embodiment. The idea of ingrained sexual genetics has a long history in science, but it is now dated and maintains a heavy ideological bent. As Greggor Mattson explains in his critique of the study:

Wang and Kosinski…are only the most recent example of a long history of discredited studies attempting to determine the truth of sexual orientation in the body. These ranged from 19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered “gay genes,” “gay brains,” “gay ring fingers,” “lesbian ears,” “gay scalp hair,” or other physical differences between homosexual and heterosexual bodies.

There is a lot of very recent and ongoing research that overturns biological determinism and instead recognizes the imbrication of culture with the body. For example, Elizabeth Wilson’s 2015 book Gut Feminism addresses interactions between the gut, pharmaceuticals, and depression; a host of studies demonstrate long and short term physiological responses to racism; and scientists show genetic mutations in children of Holocaust survivors indicating a hereditary element to extreme distress. While these ideas continue to advance and gain steam, they are not new. Anne Fausto-Sterling wrote Sexing the Body more than 15 years ago, and 30 years before that Clifford Geertz drew on existing science to make a clear and empirically grounded case that the most natural thing about humans is their physiological need for culture, through which human bodies and brains develop. The physiological indicators of sexual orientation therefore reflect how culture is written into the body, not the presence of “gay genes.”

Politically, the science of biological essentialism is troubling. Not only does it stem from the very logic that spurred eugenics projects in the late 19th and early 20th centuries, but it also reifies a clear hierarchy of gender and sexuality in which cis-heterosexuals enjoy a top spot. Although “born this way” has become a rallying cry for equality, it implies that non-normative sexual desire is a deficit. To defend non-normative sexual desire by claiming that the desire is in-born takes fault away from the individual while reinforcing that desire as inherently faulty. It excuses the non-normative sexuality by re-entrenching the norm. “Born this way” implies that non-normative sexuality would be overcome, if not for this blasted biology. It may be a path towards equal rights, but “born this way” ultimately leads back to wrong-headed science that assumes heteronormativity.

The second line of essentialism from the study is technological and it’s rooted in the assumption that AI is autonomous and reveals objective truths about the social world. In comparing humans to machines, the study points to the disproportionately high accuracy of the latter. The algorithm ostensibly knows humans better than humans know themselves. But as I’ve written before, AI is not artificial, nor is it autonomous. AI comes directly from human coders and is thus always culturally embedded. AI does not choose what to learn, but learns from human-centered logics.

Distinguishing people based on sexual orientation—and depicting orientation as a stable binary—are not independent conclusions reached by smart technology. These are reified constructs that people have implicitly agreed upon and developed meaning structures and interaction practices around. Wang and Kosinski built those meaning structures into a piece of software and distilled sexual orientation from other cultural signals, thereby maximizing sexuality as a salient feature in the machine’s knowledge system. They also, by excluding PoC, trans* persons, and those with fluid sexual identities, re-entrenched another layer of normalization by which white, binary-identified people come to represent The Population and everyone else remains an afterthought, deviation, or extension.

AI is not a sanitary machine apparatus, but a vessel of human values. AI is not extrinsic to humanity, but only, and always, of humanity. AI does not reveal humanity to itself from a safe and objective distance, but amplifies what humans have collectively constructed and does so from the inside out.

The capacity for AI to recognize sexual identity based upon facial cues has significant social implications—mostly that people can be identified, rooted out, and possibly censured formally and/or informally for their sex, sexuality, and gender presentation. This is an important takeaway from the study, and acts as a sober reminder that the same technological affordances that liberate, mobilize, and facilitate community can also become tools of oppression (this idea is not new, but always worth repeating). But technologies don’t just become tools of liberation or oppression because of the hands in which they end up. It’s not only about how you use it, but how you build it and what kinds of meaning you make from it. Discerning sexual orientation is a human-based skill that Wang and Kosinski taught a machine to be good at. Markers of orientation don’t reflect a biologically determined core, just as machine recognition doesn’t reflect an autonomous intelligence. Both bodily comportment and technological developments reflect, reinforce, work in, work through, work around, but are always enmeshed with, people, culture, power, and politics.

 

Jenny Davis is on Twitter @Jenny_L_Davis


The High Court of Australia is currently hearing a case about whether or not Australia will move forward with a marriage equality plebiscite. The plebiscite is a non-binding survey in which Australians can indicate their position on same-sex marriage. The results of the plebiscite have no direct effect on the law, but will inform members of parliament who may or may not then proceed with legislation to extend marriage rights to non-heterosexual couples.

The marriage equality debates in Australia are mired in familiar political tensions—left-leaning liberals argue that marriage is a human right, critical progressives are wary about entrenching normative kinship structures, and conservatives oppose same-sex marriage because, what about the children? The plebiscite is contentious in its own right, as a high price tag ($122 million) and an open platform for “No” campaigners to espouse hate have been the subject of heated critique (and indeed, undergird the current court hearings). But the plebiscite is also marked by an additional controversy arising from a seemingly mundane component: the use of postal mail.

The plebiscite will operate through the Australian Post. Voters who want to have their say on marriage equality will receive paper surveys to fill in and send back. At issue is the barrier to participation this creates for an important demographic: young people.

When thinking about inequality in the technology space, common wisdom is that young people hold a distinct advantage over older people. This assumption is rooted in the presumption that “technology” refers only to smartphones and social media. In fact, technology is merely another word for tools coupled with knowledge and includes a wide range of apparatuses that have been part of human interaction since long before the first Atari. When a technology was once common, but is now less so, the age dynamics of power and access shift away from youth and towards the grownups. Such is the case with postal mail.

A brief affordance analysis of the postal vote reveals the social implications of this technological decision while underlining the situatedness of communication media.

Affordances refer to the opportunities and constraints of technological objects. Affordances are not absolute, but operate through an interrelated set of mechanisms and conditions. The mechanisms of affordance refer to the strength with which technological objects push, pull, open, and resist while the conditions of affordance designate how the mechanisms vary across users and contexts. Mechanisms include the ways that technologies request, demand, encourage, discourage, refuse and/or allow some actions. The conditions of affordance include perception, dexterity, and cultural and institutional legitimacy. In short, technological objects push and pull in particular directions, but the direction of the push-pull and the strength of its insistence will depend on the knowledge and perception of the user, how adept the user is in deploying an object’s features, and how well supported that user is in utilizing the object in various ways. An affordance analysis means asking how does this technological apparatus afford, for whom, and under what circumstances? (see full explication of the affordances framework here).

With regard to the marriage equality plebiscite, an affordance analysis asks for whom does a postal vote encourage participation? For whom is participation discouraged? Is anyone refused?

The medium itself does not refuse participation to anyone. Everyone legally included in the plebiscite may send their survey through postal mail. Those who are not legally included (such as non-citizens, like me) would be refused through any medium. However, the decision to use the Australian Post markedly discourages participation by young Australians. This is because the medium of postal mail does not uniformly request, demand, encourage, discourage, refuse, or allow political participation, but disproportionately serves a practice and skill set well-honed by older generations and unfamiliar to younger ones.

As reported across Australian news (using mostly anecdotal evidence), a substantial number of people under 25 have never posted a letter. Using affordance theory language, lack of practice significantly reduces young adults’ dexterity with the postal medium, thus erecting barriers to political participation among this population. The gap in dexterity between older and younger voters thus encourages (or at least allows) older generations to participate in the plebiscite while discouraging younger generations. Asking 20-somethings to mail a letter is like asking 30-somethings to send a fax—we may know what a fax machine is and generally what it does, but the process would be clumsy and bewildering at best. So too, young Australian voters understand that letters go from one postal box to another, and these voters have all of the material resources at their disposal to post a letter, but they have to overcome the discomfort of fumbling through a medium with which they are experientially unfamiliar.
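
For readers who like frameworks made explicit, the analysis above can be caricatured in a few lines of code. This is a speculative sketch of my own: the mechanism vocabulary (refuse, encourage, discourage) and the conditions (perception, dexterity, legitimacy) come from the framework described earlier, but the data structure and decision rule are illustrative assumptions, not a formal model.

```python
from dataclasses import dataclass

# Speculative sketch of an affordance analysis of the postal vote. The
# mechanism and condition terms come from the framework above; the structure
# and decision rule are illustrative assumptions, not a formal model.

@dataclass
class Voter:
    perception: bool  # does the voter understand what the postal survey is?
    dexterity: bool   # has the voter actually practiced posting letters?
    legitimacy: bool  # is the voter legally included in the plebiscite?

def postal_vote_affords(voter: Voter) -> str:
    """How does the postal medium afford participation, for whom,
    and under what circumstances?"""
    if not voter.legitimacy:
        return "refuse"     # e.g., non-citizens, through any medium
    if voter.perception and voter.dexterity:
        return "encourage"  # practiced letter-posters face little friction
    return "discourage"     # participation is allowed, but barriers loom

# The generational contrast drawn in the essay:
print(postal_vote_affords(Voter(perception=True, dexterity=True, legitimacy=True)))   # older voter: encourage
print(postal_vote_affords(Voter(perception=True, dexterity=False, legitimacy=True)))  # younger voter: discourage
```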

The conditions that create affordance disparities between younger and older voters can have serious political implications. Prime Minister Malcolm Turnbull has said that a solid “Yes” outcome from the plebiscite would mean marriage equality policy could be considered and debated in parliament. However, a clear “No” outcome would halt all amendments to the current Marriage Act from 1961, which defines marriage as “the union of a man and a woman to the exclusion of all others.” Data reveal, unsurprisingly, that young Australians support equal rights for same-sex couples at higher rates than older Australians. This means that conditions which discourage youth participation create a clear bias in the conservative direction. An affordance analysis thus indicates that results should be weighted for age, participation should be offered through multiple mediums, or alternatively, the government could stop giving voice to bigots and let go of policies that protect and ingrain heteronormative versions of love. But that last one is less about technology…

Jenny Davis is on Twitter @Jenny_L_Davis


 

 

Image used with permission from artist Nathan Anderson

 

It is no secret that we live in an era of vast and unprecedented technological advancement. We are inundated with computers of all sorts, smartphones, drones (both commercial and military), Juiceros, a growing and inescapable surveillance presence, robotic radiosurgery systems; the list goes on and on. Some of this technology is miraculous, some of it is frivolous, some of it is downright scary. At times, it seems as though the conditions of the world as we know it are less than half a step away from the teeming, circuit-board-studded ecosystems of Cyberpunk fiction. The comparison has been made before, in this excellent Washington Post editorial, for example.

The backdrops of my favorite Cyberpunk works are commercialized wastelands: the walls built and buttressed by corporate power, the floorboards laid by cybercrime and corporate espionage, furnished with wires, neon, and advertising. With every passing day our world more and more resembles this speculative and cautionary setting.

However, Cyberpunk is more than a warning to me… it’s a road map. Cyberpunk, in many ways, leads us through the boundaries and pitfalls that it seems to predict. That’s not to say that Cyberpunk is a monolith, by any means. However, by examining the common narrative strands shared by different Cyberpunk works, themes and trajectories become all the more apparent and applicable to our lived experience.

The catalyst to my writing this piece is the recent result of the Supreme Court case Impression Products, Inc. v. Lexmark International, Inc. The case is fairly complicated, but here is the quick and dirty rundown: Lexmark sold two kinds of printer cartridges, refillable cartridges and single-use cartridges. Impression Products, Inc. was sued by Lexmark for adapting the single-use cartridges into reusable cartridges (cutting down on waste and letting the consumer save some coin). The case made its way up to the Supreme Court, and the court ruled in favor of Impression over Lexmark.

Alright, so it’s ink, what’s the big deal? Well, Kyle Wiens at Wired hits the nail on the head: “Why all the fuss? Because this wasn’t really about printer toner. It was about your ownership rights, and whether a patent holder can dictate how you repair, modify, or reuse something you’ve purchased.” Over the years, tech giants like Sony, Lexmark, HP, and Microsoft have been pushing the idea that products purchased from them are, in fact, licensed and not owned by the consumer. Understandably, these licensing schemes are an attempt by these larger companies to consolidate and protect their intellectual property.

Apple and other large tech companies do everything they can to inhibit small-time repair shops – in the name of intellectual property, of course. Apple went so far as to disable iPhones remotely if they were detected at a third-party repair shop. I’m sure intellectual property was a factor in these policies, but it’s convenient that companies like Apple simultaneously make a tidy profit on the micro-monopolies they create by locking down the repair and expansion of the products they sell to us.

These restrictions represent a kind of technological prescriptivism. From the perspective of large tech companies like Apple, we have to use manufactured items for their standardized manufactured purpose. Innovation has been consigned to the boardroom, the R&D lab, or the Silicon Valley start-up. We no longer literally “own” what we own. Copyright, intellectual property, and the very concept of economic exchange have become disgusting shams under these policies. Technological prescriptivism would rob us of our ability to tinker, to create, to experiment… we are to become naught but predictable and ever-profitable consumers.

THIS is where we can learn from Cyberpunk. Those interested in Cyberpunk can quote William Gibson ad nauseam on this: “The Street finds its own uses for things – uses the manufacturers never imagined.” What Gibson is saying: characters in Cyberpunk overcome the assigned manufactured purpose of the things around them.

Cyberpunk fiction is filled with individuals who own what they own yet simultaneously do not “own” it. It’s filled with individuals who subvert prescribed use.

In the 1995 anime Ghost in the Shell, Motoko Kusanagi’s body is literally not hers. Her state-of-the-art cybernetic body is government property. During a conversation with another member of her unit, Batou, Kusanagi says: “If we ever quit or retire, we’d have to give back our augmented brains and cyborg bodies. There wouldn’t be much left after that.” Throughout the plot of Ghost in the Shell, Kusanagi’s search for answers forces her to push the limits of what her body is “allowed” to do. During the final scenes of the movie, Kusanagi literally tears her body apart through overexertion. Likewise, her search for truth eventually thrusts two Japanese governmental agencies into conflict with one another: her own unit, Section 9, is pitted against Section 6. This conflict, indicative of a split in the otherwise autonomous interests of the Japanese government, reflects the collapsing authority that had once outlined the limits of Kusanagi’s ownership over her body. Cyborgs claiming their rightful bodily autonomy is not unique to Ghost in the Shell. Other examples are easily found in Ex Machina and Blade Runner, in which rebellious bots shed their chains and refuse subservience. In every case, these cyborgs shift the terms of ownership to match the demands of their lived experience.

In the 1985 Terry Gilliam dystopian film Brazil, there is a short scene wherein the protagonist, Sam, phones Central Services to get his heating and air conditioning fixed. He finds his requests dispassionately and politely declined. Amusingly, renegade repairman Archibald Tuttle intercepts the request and infiltrates Sam’s apartment in order to repair his air conditioning. This, of course, is a dangerous and highly illegal endeavor – Central Services eventually seizes Sam’s apartment because of the unauthorized repairs. Apple would be proud. In Brazil, Gilliam frames Tuttle, the third-party repairman, as a literal subversive. To me, the third-party repairmen who fix cracked iPhone screens are probably not that far off from Gilliam’s Archibald Tuttle.

Finally, many Cyberpunk stories harbor a motif of necessary improvisation in the face of obsolescence. Two famous examples are Terminator 2 and Terminator 3. In both films, the T-800/T-850 (as portrayed by Arnold Schwarzenegger) is an outdated model of android forced to hold his own against a technologically superior foe. The T-8XX and his allies must make do with what they have. John Connor, Sarah Connor, Kate Brewster, and others have to be creative, they have to struggle, and they have to improvise. That improvisation is a crucial part of the Terminator movies, and it is an undeniable part of the Cyberpunk aesthetic generally speaking. In William Gibson’s Neuromancer, Ratz, the bartender, has to make do with his outdated (described as antique) mechanical arm. In Deus Ex, Gunther Hermann and Anna Navarre – military cyborgs – find themselves at risk of being displaced by newer cyborgs. Hermann and Navarre are especially resentful because their extensive cyberization left them permanently disfigured, an ordeal the newer cyborgs don’t have to deal with. Despite their struggle against obsolescence, Hermann and Navarre prove themselves to be exceptional soldiers via tactical prowess and ruthlessness. The need to improvise in the face of obsolescence is something that’s been felt by anyone who has had to make do with an aging computer or wait for a contract renewal before upgrading a dying mobile phone.

It is essential (or at least helpful) to pay attention to the way characters in Cyberpunk fiction navigate the technological worlds in which they live. It is rare to see Cyberpunk characters depicted as Luddites (although it is not unheard of: in Deus Ex, the player can blow up the internet). Generally speaking, however, Cyberpunks turn their constraints back on themselves. In the finale of the surrealist cyberpunk horror film Tetsuo: The Iron Man, when a man is faced with the loss of his humanity at the hands of a “Metal Fetishist,” this would-be victim subverts his transfiguration to corrupt the corruption he’s been forced to embrace.

Cyberpunks own what is theirs, even when it is not theirs. They repair and they tinker. They improvise and adapt. In Cyberpunk fiction, a spade is not a spade: a spade is whatever you can make it.

In our own world, we are quick to dismiss new technology. Many wish to escape the ubiquity of smartphones, social media, networks, and surveillance. Psychology Today even has a guide on how to escape and set boundaries. The impulse to toss it all aside makes sense: it’s clear that technology often isn’t so much presented to us as imposed on us. On this point, I turn to Hélène Cixous’ account of writing. In her 1975 essay “The Laugh of the Medusa,” Cixous (philosopher, playwright, and poet) highlights a certain anxiety the average person feels when called upon to write:

And why don’t you write? Write! Writing is for you, you are for you; your body is yours, take it. I know why you haven’t written. (And why I didn’t write before the age of twenty-seven.) Because writing is at once too high, too great for you, it’s reserved for the great – that is, for “great men”; and it’s “silly.”

Technology is just the same: generally speaking, it is manufactured for an imaginary “average” everyday consumer. But as Cyberpunk teaches us, we are not bound by the prescribed manufacture. As punk musician Amanda Palmer would say, “we can fix our own shit,” too.

Winding down, I am reminded of my older sister, who lives in New York City. In her spare time, she makes art from duct tape. She uses an X-Acto knife to cut out bits of different colored tape. From there, she arranges the bits into a reimagined sort of mosaic. The result is nothing less than stunning to me: Nikki is able to see past the standardized use of duct tape as a material with a set use and function. Artists, like Cyberpunks, have an innate ability to see past the given. Artists and Cyberpunks alike innovate from the bottom up rather than the top down. Such a mindset is needed if we are to escape the strange pre-Cyberpunk dysphoria we currently find ourselves in.

 

Alex Palma is a member of the Philadelphia Historical Community; he’s worked in several archives and historical sites across the city. His interests include technology, videogames, film, genre literature, historiography, historic preservation and continental philosophy.

 

 

A mere 2 minutes and 19 seconds in length, the video “Are Black British Youth Obsessed with Light Skin/Curly Hair. Or is it just Preference?” is a compilation of snippets from “person on the street” interviews, conducted in the environs of two shopping centers and a commuter railway station in east London (more on this later).

The interviewer is a roving internet reporter going by the handle of VanBanter, whose YouTube channel boasts over 85,000 subscribers. VanBanter is a tall, svelte, black Briton of around 16, himself light-skinned, whose voluminous hair in the clips is either styled in cornrows or pulled back in a low Afro puff, the black version of the “man bun.”

The interviewees are black boys, ostensibly between the ages of 12 and 17, of a wide spectrum of skin colors and hair textures. The single question VanBanter asks all of them is, “What kind of girls are you into?” On occasion, he phrases it as, “What type of girls do you slide into?” Two token girls are asked the same question about boys. All interviewed say they like “light skins.” Some add “curly hair,” clearly meant as a qualifier in opposition to “kinky,” not straight, hair texture. Most of the interviewees are filmed standing in pairs or small groups of friends who support their responses with interjections, gestures, or general glee.

The video was first uploaded on June 1st to the Facebook page of Black British Banter. Over that weekend, it received a million views; over 6k reactions (2.6k neutral thumbs-up expressing interest, 1.2k crying emojis, 1.1k angry ones, 546 laughing ones, 467 wows, and 62 loves); 5k comments; and 8k shares.

I myself could not stop viewing it. The comments far overstep the bounds of personal preference, to which we all have an indisputable right. Instead, they defend a centuries-old global regime of negating not only the beauty, but the very humanity, of people with dark skin, especially women. “No black t’ings, like my shoes n’ shit!” says one very dark-skinned boy, luminous in red track suit and fresh fade. “Light skins, always light skins, man,” says another boy hubristically, he himself light of complexion. Surprisingly, his mate, a much darker boy, steps forward into the frame of the shot to pat him approvingly on the shoulder, then retreats with a satisfied smirk on his face. “All o’ dem!” responds a third speaker, who looks to be around age twelve. Goaded on by his surrounding posse of friends, the boy continues. “Curly” – which he pronounces “queely” – “hair, light skin, all o’ dem. No dark skins, no dark skins!”

I must have replayed this entire video twenty times or more. Each time, something new shot forth from the mouths of these Black British youth – minors, all of them – to astonish, inform, infuriate, dismay, perplex, even amuse and impress, but, regrettably, raise little hope in me.

Impressive is the boy who is the companion of the one who says he keeps his shoes and his women separate.  Addressing the camera directly, he lyrically traverses two or more generations and thousands of geographic miles in just one comment.  He starts out delivering his reply in a vernacular London tongue, not quite Cockney, but close to it, dropping his t’s and adding the emphatic “yeah” at the end of his sentence: “Light skin, big back, big ti’i, yeah” (“Light skin, big behind, big titties.”).  He then deftly code switches to Jamaican-inflected patois, eliding the “nt” in the word “want” with a subtle exhalation, and substituting, as in most Caribbean creole languages, “to” with “for”: “Wah fi tek wood. Dem gyal deh.” (“Want [or, it might be one] to take wood. Those kind of girls.” Wood is penis).  All this he delivers with an emcee’s flow, synching his posture and hand gestures to his speech.  Beautiful in sound and adept in motion! His is the most performative delivery of a speech pattern that all the youth, the interviewer included, communicate in.

Multicultural London English, or MLE, as it has come to be termed, is a patchwork of vocabulary, syntax, and inflections woven together from a multitude of language families transported to London by regional and international migrants over the course of centuries. In years closest to ours, the dialect’s four decided grandparents are Cockney English, Caribbean creole or patois, languages from former British colonies in, chiefly, the Indian Subcontinent and West Africa, and “learner varieties,” the in-between states of fluency formed during any process of second-language acquisition. If you listen closely to this crowd of youngsters, you’ll hear from the Cockney grandparent a lot of “innit,” “know wha’ ah mean?” and elisions of the double “t” in words like “butter.” From the Caribbean forebear comes the pronunciation of “them” and “that” as “dem” and “dat,” the erasure of the final “g” in gerund words (“cleaning” becomes “cleanin”), and the practice of teeth sucking, a catch-all method of dismissing or objecting to a circumstance or expressed opinion, used throughout the African diaspora, as well as its onomatopoeic cousin, the short non-word “Chuh!”, which is more specific to the Anglophone Caribbean. There is also a cousin once-removed, Hip Hop Nation Language, hailing from African America.

When I watch and listen to these kids, I think to myself, thence came Grime music and a host of art forms, past and current, that have infused British popular culture. But, then, the horrifying thoughts crowd in. What are the stakes of social mobility and political inclusion for kids like these whose mother tongue is MLE? Is London, or all of Britain, hurtling towards the same trials and tribulations vis-à-vis the state and public education that the U.S.A. has faced with Ebonics? And, is that deft youth’s flow the spontaneous rehearsal of some other boy’s or man’s rhyme he’s heard elsewhere, maybe on somebody’s Grime track? Worse yet, given its noxious sentiments, is it part of a rhyme scheme he’s producing for mass consumption? Either way, the thoughts he poeticizes can and do travel through popular music, literally becoming soundtracks to these young people’s lives.

My generation inherited and in turn passed down a fair share of those soundtracks. Remember Buju Banton’s “Love Me Browning” from 1992? The refrain went, “Me love me car, me love me bike, me love me money and t’ing. But most of all, me love me browning” – his light-skinned girlfriend. The song topped the charts in Jamaica and was appreciated worldwide by enthusiasts of Jamaican dancehall music, which basically means the Jamaican and pan-Caribbean diaspora. I couldn’t help but get an echo of Banton when I heard one of the interviewees say, “I like my light skins.” This is an intergenerational playback loop. In 1994, the Notorious B.I.G., one of the most heroicized martyrs of the U.S. Hip Hop Nation, born Christopher Wallace in 1972 in Brooklyn to Jamaican parents, released the track “One More Chance,” whose opening rhyme goes as follows,

First things first: I, Poppa, freaks all the honeys

Dummies, Playboy bunnies, those wanting money

Those the ones I like ‘cause they don’t get Nathan, but penetration

Unless it smells like sanitation

Gar-bage, I turn like doorknawbs

Heartthrob never, Black and ugly as ever

The Notorious B.I.G. was describing himself. Or, more accurately, women’s reactions to him. For years, I have used this track in one of my classes in media studies to trace the history of sampling, configured as it has been by discographic nostalgia in the creative imaginations of the artists. “One More Chance” is one of a whopping twenty-six tracks produced by different artists between 1991 and 2011 that have sampled various elements of the same song, “Stay with Me,” by the 1980s R&B group DeBarge. Every time the part in my lecture comes where I play Biggie’s installment, I nervously hold my finger above the space bar to avoid playing that “black and ugly as ever” line to my students, who are predominantly young people of color. Yes, they, unlike the youngsters discussed here, are legal adults, and have not only heard the line countless times – Biggie is a music icon for the ages – but have had more life experience to process its message. But, how many times do they have to hear it, even in an institutional setting of higher learning, before it becomes taken-for-granted common sense, before we can all agree to delete it from our ideological playlist? The “Are Black British Youth Obsessed with Light Skin/Curly Hair. Or is it just a preference?” video could, it strikes me now, help suppress my trigger-finger anxiety. It’s a great tool to use to reflect, in any arena, on how not only rhymes and rhythms, but attitudes, get sampled through time and space. This, I believe, was its creator’s main intention, given his choice to make the final voice in his editing of the video that of a dark-skinned boy with a relaxed stance who says, “not ligh’ies, not ligh’ies.” This is not to praise the speaker for simply inverting the hierarchical order of skin-tone preference, for that would be equally unsatisfying, if indeed that’s what he does. We do not, in fact, hear him express an adoration for dark-skinned girls. It’s his self-possession in assuming a position contrary to the one solidly occupied by all the speakers who came before him that is significant.

I remember back when Bobby Brown cast a dark-skinned model as the lead actress in his video for the song “Every Little Step” – the one who leads the posse of much lighter-complected beauties and gets Bobby in a bathtub in the end. It was actually a point of discussion among my friends, because this was so uncommon in the casting of video vixens.

Further along in the video, we encounter a boy in his later adolescence, perhaps 17, with a medium-brown complexion and tight cornrows. “Obviously, it’s gonna be them light skins, they know how to…” he begins by saying. The video cuts to him attempting to clarify his statement by outlining a tension he has observed between pursuer and pursued in this color-coded game. His is an analysis missing from all the other speakers, and if there is a modicum of hopefulness to be found in this video that all these boys will in time mature into reflective men, here’s where it lingers. “Dhy-dhy-dhey’re stressful,” he stutters out about light-skinned girls. “But, they’re confident.” With this, he turns his palms to the camera, as if to say, in resignation, “that’s just the way it is.”

Is it any wonder this pre-adult perceives “them” as stressful and possessed of a confidence level that rises to threatening, given the cultural feedback loop that plays and replays, mixes, remixes, and mashes up the message his peers, all barely on either side of puberty’s threshold, are parroting? It’s not just the pervasive degradation of dark-skinned femininity that these comments reinscribe, although that is nuff damage. It is the way in which they mark out positions in a gender war where female agency gets reduced to skin color (the lead signifier in an armament of preferred traits), and male potency (laying “wood” the chief maneuver) to the measure of one’s ability to circumscribe that agency, make it work for you. The speaker’s halting pronouncement shows that he does not inhabit with ease the world of meaning he helps stabilize with his own words. It is a world that will not always yield to him, that may often be outright cruel, and that operates in accordance with the terms he himself has set. Anyone who has watched even one episode of the reality television series Basketball Wives or She’s Got Game, and taken note of how the female cast looks and what the male storylines consist of, will taste a kernel of the speaker’s despair over his prospects for a fulfilling relationship.

I said there are two female interviewees. One is a pretty, pretty, pretty girl, who is bubbly and holds one hand up to her face as she takes in VanBanter’s question, “What kind of boys are you into?” She doesn’t skip a beat in responding, smiling sweetly. However, her answer is so very troubling, given that this sweetheart, who cannot be older than 14, has a deep dark-brown complexion. “Light skins have long hair,” she says. Now, here’s what I hear: this girl is so conditioned to follow “light skin” with “long hair” as a possession that she doesn’t even edit the mantra to more accurately fit the question. It is Pavlovian. For her and so many others. The industry in hair weaves and extensions that reaps billions a year in pounds, dollars, euros, and multiple other currencies throughout the world owes a healthy portion of its fortunes to this self-generating discourse. It’s enough to make you bombaclot upset, in the words of the Brummie rapper Lady Leshurr, both of whose parents migrated to the UK from the island of St. Kitts. In the video for Leshurr’s track “Upset,” she is joined at the conclusion by her friend and sometime collaborator, Paigey Cakey, an emcee and actress from Hackney of Jamaican and white English parentage. The two are shown in an outdoor market, and both have donned wigs that have a rasta-colored tam on top with long, fake dreads sewn in, which cascade down past the wearers’ shoulders. “Bombaclot twelve years fuh deeze dreads, y’see, ee,” Leshurr says to the camera, flipping up one of the fake locks. Cakey chimes in beside her, “A-me mixed-race, y’knuh. De dreads grow five years, y’knuh. Bombaclot years!” They’re having a laugh, and it is funny. But what they are also doing is spelling out a pervasive anxiety over natural hair growth patterns and length in the wide variety of Afro hair, an anxiety that can find black girls just out of puberty covering up their healthy heads of hair with wigs.

Location: these interviews were conducted in the environs of commercial and commuter hubs in Stratford, a district of east London whose population, according to the 2011 UK census, is 21 percent black.  The census categorizes Stratford’s demographics as “Multicultural Metropolitan: Inner City.” The mixed-race population here is sizable – just scrutinize the collection of children in this video – because there has been considerable miscegenation between working-class Caribbean immigrants who began settling the area as early as the fifties and the resident English and Irish working class they met there.  All over England such has been the case.  In 2009, Samir Shah, former chairman of the Runnymede Trust, a think tank on issues of racial equality, wrote a controversial cover story for the Spectator, “Race is Not an Issue in the UK Anymore,” in which he stated, “Today, almost half of all children of Caribbean heritage have one white parent.  Earlier this year, a report by the Institute for Social & Economic Research at Essex University said that the Afro-Caribbean community will ‘virtually disappear’ — dissolving into the white mainstream.”

That is a stark forecast on many fronts. One front is the long vista through which the mixed-race woman, half-black and half-white, has been a constant figure in Britain’s music and pop culture scene, from as far back as the fifties, when Shirley Bassey debuted on the airwaves. A classic chanteuse in style and vocals, Dame Bassey was followed in the late seventies and early eighties by the intentionally grittier Pauline Black and Rhoda Dakar, the two female vocalists most readily associated with the British Two Tone Movement. Black’s and Dakar’s interracial heritage symbolized their musical subculture’s message of racial harmony and cultural syncretism inside Thatcherite Britain. Sade emerged later, in the early eighties, giving an international profile to British neo-soul. Later, when UK hip hop started to attract international recognition, its female emcees were led by Ms. Dynamite. Rolling into the 2000s and the televisualization of vocal performance, Leona Lewis shot to prominence when she won The X Factor in 2006. Other artists continue to make their mark, among them Corinne Bailey Rae and Emeli Sandé. All of these women are the daughters of Caribbean or African men and English or Scottish women. And, as most have expressed publicly at one point or another, being mixed-race in Britain has for them been a mixed bag of opportunities and setbacks. In 2014, the singer Tahliah Barnett, who goes by the name FKA Twigs, addressed her heritage in an interview with a journalist who brought up the media’s habit of classifying her as “alt-R&B,” overlooking the plethora of influences in her music:

“It’s just because I’m mixed race,” FKA Twigs said. “When I first released music and no one knew what I looked like, I would read comments like: ‘I’ve never heard anything like this before, it’s not in a genre.’ And then my picture came out six months later, now she’s an R&B singer. I share certain sonic threads with classical music; my song Preface is like a hymn. So, let’s talk about that. If I was white and blonde and said I went to church all the time, you’d be talking about the ‘choral aspect.’ But you’re not talking about that because I’m a mixed-race girl from south London.”

Returning to the youth in the video, it seems that for them mixed-race is a status beyond question. Viewed from a governmental perspective, this is ironic. The category of “mixed race” was made a box on the UK census in the year 2001, following, as political scientist Debra Thompson notes, near-unanimous support for the proposal from government departments. Ironic, then, that an act of government undertaken with futurist ideals about inclusion has interbred with a hierarchical conception of (feminine) attractiveness and desirability, one that is antiquated and racist. One boy, for example, distracted in an exchange with his mate as he absorbs the question being asked him, leads with the astonishing preface, “Obviously, mixed-race girls.” What’s obvious about it?

As for the only other girl interviewed, a sentimental smile crosses her face when she replies in a croon, “Chocolate ones and light skin ones.” From this girl’s appearance, it seems highly likely that she herself is mixed-race. So, what harm, then, in desiring one’s mirror image? None at all. Lisa Bonet and Lenny Kravitz, both the children of one black and one Jewish parent, were one couple that did. But then the girl’s face turns from placid sentiment to a hateful scowl when she concludes with a warning to all watching: “Don’t be dark, doah!” (a pronunciation of “though,” another MLE-ism).

What, I wonder, was the sequence of steps taken in and by British society as a whole, from the turn of the millennium, around the point that this girl’s mum and dad were drawn to each other, to now, when their daughter thinks nothing of going on social media to denounce the dark side of her provenance?

In July 2014, the Office for National Statistics issued a cross-analysis of its most recent demographic figures, “What Does the 2011 Census Tell Us About Inter-ethnic Relationships?” The report provides interesting findings on such topics as “patterns of inter-ethnic relationships,” “differences between men and women in inter-ethnic relationships,” and “dependent children in multi-ethnic households.” However, it does not offer any insight into attitudes towards racial background or racial appearance among inter-ethnic or mixed-race youth, or into the wider implications of such attitudes. I am confidently hopeful that this needed research is either available or currently underway at governmental agencies, universities, and think tanks.

This past holiday season was the tail end of a sabbatical year I took to complete a book on interracial attitudes and relationships in Britain between blacks and a more recent wave of newcomers: the now roughly 1 million Poles who began settling the country after 2004, when Poland joined the border-free European Union. My mother spent the holidays with me in London.  One afternoon, she and I visited the sprawling Westfield Stratford City shopping centre, one of London’s most ostentatious recent commercial developments, opened in 2011.  Some of the interviews in this video were conducted there, as well as around the less flashy 1970s-built Stratford Centre, and the Stratford railway station, both not far away.  I am always happy to get my mother to London.  She spent many formative years there, beginning in 1946 at the tender age of 19 as a student-nurse from what was then British Guiana, now Guyana.  My mother’s stories of post-war London recount a society coming to grips with the chromatic diversification of its citizenry.  She has, since the 1980s when she began making return trips to visit her many relatives and friends who settled permanently in the city, been describing the sea change she notices in the demographic makeup.  “London is black,” she would often say.  To her, it is a city far unlike the one she traversed in the late forties, fifties, and sixties, where, on one memorable occasion, a white Englishman in Holborn tube station, infuriated at the sight of a young West Indian man and English girl showing PDA on the up escalator, bellowed across the cavernous tunnel from his down escalator, “Bloody well go and find your own kind!”

Two days before Christmas, 2016, the Westfield Stratford City shopping centre was packed with last-minute shoppers. Members of every conceivable race and ethnicity were present, with a preponderance of Afro-Caribbean descendants. My mother and I were served lunch by an Eastern European waitress, given movie-going advice by a hijab-wearing Somali theatre attendant with a local accent, and, when we stopped for a rest in a seating area, a mischievous little South Asian baby dangled her arms over the top of the adjacent banquette as her mother and sisters debated, in an accent subtly distinguishable from what’s been described, whether to get their dad the new GPS or a different gift.

My mother took it all in, looking at faces, listening to voices and their accents, eavesdropping on conversations, watching the ceaseless parade of couples pass by, their pairings of races, or skin tones within races, utterly unpredictable.  We didn’t talk about it then and there, but I knew what she was thinking.

Now, five months later, I see this video. I recognize the backdrops, and I realize I was right there, self-satisfied at the time that my mother was able to witness the walking, talking evidence of progress.  Had we overheard the wrong conversations that day?  Should I have listened with a keener ear?  Would I have caught the slights and slander the youth in this video utter?

I don’t have straight answers to these questions.  My final thought, and I might be turning into a person of my parents’ generation in expressing it, is with VanBanter, the conscious interviewer, in mind.  Why are schoolboys, some barely 10, being questioned about picking up girls instead of about picking up their books?

Chuh!

Nicholas Boston, Ph.D., is associate professor of media studies and sociology at the City University of New York (CUNY), Lehman College.

 

The author in London in August, 2016.


 

Editor’s Note: We are re-posting this piece, which originally ran in June 2016. With the newest season of OITNB launching this Friday, the post’s original author (Apryl Williams) reports that she has found no evidence of increased racial diversity in the OITNB writers’ room. In light of this, the message of her essay bears repeating.

*****************************Mild Spoilers**************************************

Orange is the New Black’s newest season, with its notorious twists-at-every-episode style, demands to be binge-watched. When it came out on June 17th, I began my annual binge session and had completed it by Saturday, June 18th.

If you haven’t heard, the series delivered “the mother of all finales” at the end of this season. As I mourned the death of a major black character, I found myself simultaneously mourning the real deaths of Eric Garner, Sandra Bland, Freddie Gray – and the list unfortunately goes on. The stylized portrayal of a death in prison custody at the hands – or knee, rather – of a white correctional officer was unmistakably close to Garner’s “I can’t breathe.” Though those words were never uttered, anyone who has kept up with news in the last year would find haunting familiarity in the fictional inmate’s all-too-real gasps for air.

The imagery of her small frame and spine gradually being crushed by the full weight of the white correctional officer as she tried, and failed, to breathe was almost too painful to watch. But I had come this far; I had to continue. At the end of the season, instead of falling into my usual “showhole” syndrome, I was angry and emotionally distraught. This had a visceral, personal effect, and nobody warned me it was coming. As the other inmates grieved the death of their friend and urged those in charge to move her body, I wondered who was responsible for writing these scenes and this episode. Surely, a person of color would have cautioned against such tactics without ample viewer preparation. It appears as though the perspective of black viewers was not taken into consideration – a likely result of the limited representation we have in media production. Then I realized that to a white audience, a warning would not have the same meaning or importance.

Black presence in the writers’ room would not only have shaped the outcome of the episode (more on that later), it would also have pointed out the obvious misstep of writing a sympathetic, baby-faced, murdering correctional officer into a role befitting “#bluelivesmatter”. The end result, with the head warden supporting the actions of a “good kid” who simply made a mistake, does more to highlight the privileged space in which Netflix and the writers of OITNB exist. They are free to portray injustices such as transphobia or privatized prisons when it is convenient for them. And they do so in a manner that is comfortable and palatable for a mainstream audience.

Instead of drawing attention to the all-encompassing police state in which people of color live, the white writers of OITNB portrayed the death of a black prison inmate in a manner similar to the carnivalesque spectacle associated with lynchings of the past. Lynchings were a leisure-time activity that served a dual purpose: to assert superiority over the physical corpus of black people while simultaneously reinforcing the status quo, demonstrating to black Americans that they had little agency. Without influence from Black Lives Matter activists or black writers, the season 4 finale of Orange is the New Black operates in a similar fashion. Let me be clear: Netflix and the OITNB writers do not occupy the same space as a lynch mob. However, the effect of a white-dominated narrative coupled with the portrayal of black death on television has a similar result: black deaths and pain are harnessed for entertainment purposes. If Netflix is our town square, then we have all gathered to watch the spectacle.

As a black viewer, I watched and re-lived the shared pain that black people have experienced for centuries – and, in recent memory, over the past two years, with what seems like continuous news coverage of yet another death of an unarmed black person. To make matters worse, the theatrics that followed the death did nothing to ease the pain of remembrance.

The body was left on the floor of the prison cafeteria for days, drawing obvious parallels to Michael Brown’s death, his body left lying in the summer sun for hours after police had shot him. The public relations officials warned Caputo, the warden, not to call the victim’s parents, the police, or the coroner until they had the right angle. The crass humor with which these two men tried to dig up “thuggish” pictures and dirty laundry was intended to serve as comic relief. However, for me, and probably for a lot of other black viewers, this was just another reminder of the victim blaming that is typically spread by media coverage.

Netflix and the writers of Orange is the New Black are telling our stories but from a white perspective. In the scenes and in the writing room, white writers control the narrative.

Perhaps input from a black writer (or better yet, multiple black writers) would have resulted in a story line that honored the deaths of black people at the hands of police instead of one that reiterated and upheld the dominant framing. Black Lives Matter activists may have recommended that the writers highlight the complicated web of systematic and militaristic policing of black and brown bodies that lands them in prison, where they are rendered almost powerless. I recognize that Netflix and the writers of OITNB may have tried to reveal injustice by portraying it in a raw and brutal way, as is typical of the show, but as it stands, watching the narrative play out feels as though white writers are exploiting black pain for the intrigue of white viewers, without regard for those of us who actually live this experience.

This is not the first time the writers have betrayed the moral emptiness of their good intentions. A show that prides itself on shedding light on social issues like prison reform films at a prison where the actors can’t even drink the water because of a leaking sewage problem. The true conditions in which prisoners live at the actual prison where the show is filmed are too graphic for television. Former inmates talk about rivers of feces that flow into their rooms at night. Real people live in this prison that the actors and producers leave at the end of filming. Piper Kerman considers herself a prison reform activist and yet, as a producer of the show, continues to allow filming rather than demanding that the people living there receive better living conditions. My point here is that we watch the fictive stories of women living in similar conditions from the comfort of our homes, at times being lulled into a false sense of ease concerning the quality of life of the real people represented by the story lines. Similarly, the season 4 finale makes a spectacle of death at the hands of correctional officers without paying homage and respect to the many who have lost, and will continue to lose, their lives. For many black Americans, watching these narratives on screen serves to reinstate the fear that we live with on a daily basis: knowing that, at times, we cannot protect those we love.

Apryl Williams (@AprylW) is a doctoral candidate in the Sociology Department at Texas A&M University and series co-editor of Emerald Studies in Media and Communications. Her current research explores black resistance through social media.
