Findings from a recent study out of the Stanford Graduate School of Business by Yilun Wang and Michal Kosinski indicate that AI can correctly identify sexual orientation based on images of a person’s face. The study used 35,000 images from a popular U.S. dating site to test the accuracy of algorithms in determining self-identified sexual orientation. The sample images included cis white people who identify as either heterosexual or homosexual. The researchers’ algorithm correctly assessed the sexual identity of men 81% of the time and of women 74% of the time. When the software had access to multiple images of each face, accuracy increased to 91% for images of men and 84% for images of women. In contrast, humans correctly discerned men’s sexual identity 61% of the time and women’s only 54% of the time.
The authors of the study note that algorithmic detection was based on “gender atypical” expressions and “grooming” practices along with fixed facial features, such as forehead size and nose length. Homosexual-identified men appeared more feminized than their heterosexual counterparts, while lesbian women appeared more masculine. Wang and Kosinski argue that their findings show “strong support” for the theory that prenatal hormone exposure predisposes people to same-sex attraction and leaves clear markers in both physiology and behavior. According to the authors’ analysis and subsequent media coverage, people with same-sex attraction were “born that way,” and the essential nature of sexuality was revealed through a sophisticated technological apparatus.
While the authors demonstrate an impressive show of programming, they employ bad science, faulty philosophy, and irresponsible politics. This is because the study and its surrounding commentary maintain two lines of essentialism, and both are wrong.
The first line of essentialism is biological and emerges from the “born this way” interpretation of the data. The idea that one’s body is a causal reflection of ingrained physiology disregards scores of social and biological science that demonstrate a clear interrelationship between culture and embodiment. The idea of ingrained sexual genetics has a long history in science, but it is now dated and carries a heavy ideological bent. As Greggor Mattson explains in his critique of the study:
Wang and Kosinski…are only the most recent example of a long history of discredited studies attempting to determine the truth of sexual orientation in the body. These ranged from 19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered “gay genes,” “gay brains,” “gay ring fingers,” “lesbian ears,” “gay scalp hair,” or other physical differences between homosexual and heterosexual bodies.
There is a lot of very recent and ongoing research that overturns biological determinism and instead recognizes the imbrication of culture with the body. For example, Elizabeth Wilson’s 2015 book Gut Feminism addresses interactions between the gut, pharmaceuticals, and depression; a host of studies demonstrate long- and short-term physiological responses to racism; and scientists have shown genetic mutations in children of Holocaust survivors, indicating a hereditary element to extreme distress. While these ideas continue to advance and gain steam, they are not new. Anne Fausto-Sterling wrote Sexing the Body more than 15 years ago, and 30 years before that, Clifford Geertz drew on existing science to make a clear and empirically grounded case that the most natural thing about humans is their physiological need for culture, through which human bodies and brains develop. The physiological indicators of sexual orientation therefore reflect how culture is written into the body, not the presence of “gay genes.”
Politically, the science of biological essentialism is troubling. Not only does it stem from the very logic that spurred eugenics projects in the late 19th and early 20th centuries, but it also reifies a clear hierarchy of gender and sexuality in which cis-heterosexuals enjoy the top spot. Although “born this way” has become a rallying cry for equality, it implies that non-normative sexual desire is a deficit. To defend non-normative sexual desire by claiming that the desire is in-born takes fault away from the individual while reinforcing that desire as inherently faulty. It excuses the non-normative sexuality by re-entrenching the norm. “Born this way” implies that non-normative sexuality would be overcome, but for this blasted biology. It may be a path towards equal rights, but “born this way” ultimately leads back to wrong-headed science that assumes heteronormativity.
The second line of essentialism from the study is technological, rooted in the assumption that AI is autonomous and reveals objective truths about the social world. In comparing humans to machines, the study points to the disproportionately high accuracy of the latter. The algorithm ostensibly knows humans better than humans know themselves. But as I’ve written before, AI is neither artificial nor autonomous. AI comes directly from human coders and is thus always culturally embedded. AI does not choose what to learn, but learns from human-centered logics.
Distinguishing people based on sexual orientation—and depicting orientation as a stable binary—are not independent conclusions reached by smart technology. These are reified constructs that people have implicitly agreed upon and developed meaning structures and interaction practices around. Wang and Kosinski built those meaning structures into a piece of software and distilled sexual orientation from other cultural signals, thereby maximizing sexuality as a salient feature in the machine’s knowledge system. They also, by excluding PoC, trans* persons, and those with fluid sexual identities, re-entrenched another layer of normalization by which white, binary-identified people come to represent The Population and everyone else remains an afterthought, deviation, or extension.
AI is not a sanitary machine apparatus, but a vessel of human values. AI is not extrinsic to humanity, but only, and always, of humanity. AI does not reveal humanity to itself from a safe and objective distance, but amplifies what humans have collectively constructed and does so from the inside out.
The capacity for AI to recognize sexual identity based upon facial cues has significant social implications—mostly that people can be identified, rooted out, and possibly censured formally and/or informally for their sex, sexuality, and gender presentation. This is an important takeaway from the study, and acts as a sober reminder that the same technological affordances that liberate, mobilize, and facilitate community can also become tools of oppression (this idea is not new, but always worth repeating). But technologies don’t just become tools of liberation or oppression because of the hands in which they end up. It’s not only about how you use it, but how you build it and what kinds of meaning you make from it. Discerning sexual orientation is a human-based skill that Wang and Kosinski taught a machine to be good at. Markers of orientation don’t reflect a biologically determined core, just as machine recognition doesn’t reflect an autonomous intelligence. Both bodily comportment and technological developments reflect, reinforce, work in, work through, work around, but are always enmeshed with, people, culture, power, and politics.
The High Court of Australia is currently hearing a case about whether or not Australia will move forward with a marriage equality plebiscite. The plebiscite is a non-binding survey in which Australians can indicate their position on same-sex marriage. The results of the plebiscite have no direct effect on the law, but will inform members of parliament who may or may not then proceed with legislation to extend marriage rights to non-heterosexual couples.
The marriage equality debates in Australia are mired in familiar political tensions—left-leaning liberals argue that marriage is a human right, critical progressives are wary about entrenching normative kinship structures, and conservatives oppose same-sex marriage because, what about the children? The plebiscite is contentious in its own right, as a high price tag ($122 million) and an open platform for “No” campaigners to espouse hate have been the subject of heated critique (and indeed, undergird the current court hearings). But the plebiscite is also marked by an additional controversy arising from a seemingly mundane component: the use of postal mail.
The plebiscite will operate through Australia Post. Voters who want to have their say on marriage equality will receive paper surveys to fill in and send back. At issue is the barrier to participation this creates for an important demographic: young people.
When thinking about inequality in the technology space, common wisdom holds that young people have a distinct advantage over older people. This assumption is rooted in the presumption that “technology” refers only to smartphones and social media. In fact, technology is merely another word for tools coupled with knowledge, and includes a wide range of apparatuses that have been part of human interaction since long before the first Atari. When a technology was once common but is now less so, the age dynamics of power and access shift away from youth and towards the grownups. Such is the case with postal mail.
A brief affordance analysis of the postal vote reveals the social implications of this technological decision while underlining the situatedness of communication media.
Affordances refer to the opportunities and constraints of technological objects. Affordances are not absolute, but operate through an interrelated set of mechanisms and conditions. The mechanisms of affordance refer to the strength with which technological objects push, pull, open, and resist, while the conditions of affordance designate how the mechanisms vary across users and contexts. Mechanisms include the ways that technologies request, demand, encourage, discourage, refuse, and/or allow some actions. The conditions of affordance include perception, dexterity, and cultural and institutional legitimacy. In short, technological objects push and pull in particular directions, but the direction of the push-pull and the strength of its insistence will depend on the knowledge and perception of the user, how adept the user is in deploying an object’s features, and how well supported that user is in utilizing the object in various ways. An affordance analysis means asking: how does this technological apparatus afford, for whom, and under what circumstances? (see full explication of the affordances framework here).
With regard to the marriage equality plebiscite, an affordance analysis asks for whom does a postal vote encourage participation? For whom is participation discouraged? Is anyone refused?
The medium itself does not refuse participation to anyone. Everyone legally included in the plebiscite may send their survey through postal mail. Those who are not legally included (such as non-citizens, like me) would be refused through any medium. However, the decision to use Australia Post markedly discourages participation by young Australians. This is because the medium of postal mail does not uniformly request, demand, encourage, discourage, refuse, or allow political participation, but disproportionately serves a practice and skill set well-honed by older generations and unfamiliar to younger ones.
As reported across Australian news (using mostly anecdotal evidence), a substantial number of people under 25 have never posted a letter. Using affordance theory language, lack of practice significantly reduces young adults’ dexterity with the postal medium, thus erecting barriers to political participation among this population. The gap in dexterity between older and younger voters thus encourages (or at least allows) older generations to participate in the plebiscite while discouraging younger generations. Asking 20-somethings to mail a letter is like asking 30-somethings to send a fax—we may know what a fax machine is and generally what it does, but the process would be clumsy and bewildering at best. So too, young Australian voters understand that letters go from one postal box to another, and these voters have all of the material resources at their disposal to post a letter, but they have to overcome the discomfort of fumbling through a medium with which they are experientially unfamiliar.
The conditions that create affordance disparities between younger and older voters can have serious political implications. Prime Minister Malcolm Turnbull has said that a solid “Yes” outcome from the plebiscite would mean marriage equality policy could be considered and debated in parliament. However, a clear “No” outcome would halt all amendments to the current Marriage Act of 1961, which defines marriage as “the union of a man and a woman to the exclusion of all others.” Data reveal, unsurprisingly, that young Australians support equal rights for same-sex couples at higher rates than older Australians. This means that conditions which discourage youth participation create a clear bias in the conservative direction. An affordance analysis thus indicates that results should be weighted for age, participation should be offered through multiple mediums, or alternatively, the government could stop giving voice to bigots and let go of policies that protect and ingrain heteronormative versions of love. But that last one is less about technology…
It is no secret that we live in an era of vast and unprecedented technological advancement. We are inundated with computers of all sorts, smartphones, drones (both commercial and military), Juiceros, a growing and inescapable surveillance presence, robotic radiosurgery systems; the list goes on and on. Some of this technology is miraculous, some of it is frivolous, some of it is downright scary. At times, it seems as though the conditions of the world as we know it are less than half a step away from the teeming, circuit-board-studded ecosystems of Cyberpunk fiction. The comparison has been made before, in this excellent Washington Post editorial, for example.
The backdrops of my favorite Cyberpunk works are commercialized wastelands: the walls built and buttressed by corporate power, the floorboards laid by cyber crime and corporate espionage, the whole furnished with wires, neon and advertising. With every passing day, our world more and more resembles this speculative and cautionary setting.
However, Cyberpunk is more than a warning to me… it’s a road map. Cyberpunk, in many ways, leads us through the boundaries and pitfalls that it seems to predict. That’s not to say that Cyberpunk is a monolith, by any means. However, by examining the common narrative strands shared by different Cyberpunk works, themes and trajectories become all the more apparent and applicable to our lived experience.
The catalyst to my writing this piece is the recent result of the Supreme Court case Impression Products, Inc. v. Lexmark International, Inc. The case is fairly complicated- but here is the quick and dirty rundown: Lexmark sold two kinds of printer cartridges: refillable cartridges and single-use cartridges. Lexmark sued Impression Products, Inc. for adapting the single-use cartridges into reusable cartridges (cutting down on waste and letting the consumer save some coin). The case made its way up to the Supreme Court, and the court ruled in favor of Impression over Lexmark.
Alright, so it’s ink, what’s the big deal? Well, Kyle Wiens at Wired hits the nail on the head: “Why all the fuss? Because this wasn’t really about printer toner. It was about your ownership rights, and whether a patent holder can dictate how you repair, modify, or reuse something you’ve purchased.” Over the years, tech giants like Sony, Lexmark, HP, Microsoft, etc. have been pushing the idea that products purchased from them are, in fact, licensed and not owned by the consumer. Understandably, these licensing schemes are an attempt by these larger companies to consolidate and protect their intellectual property.
Apple and other large tech companies do everything they can to inhibit small-time repair shops- in the name of intellectual property, of course. Apple went so far as to remotely disable iPhones that had been repaired by third party shops. I’m sure intellectual property was a factor in these policies, but it’s convenient that companies like Apple simultaneously make a tidy profit on the micro-monopolies they create by locking down the repair and expansion of the products that they sell to us.
These restrictions represent a kind of technological prescriptivism. From the perspective of large tech companies like Apple, we have to use manufactured items for their standardized manufactured purpose. Innovation has been consigned to the boardroom, the R&D lab or the Silicon Valley start up. We no longer literally “own” what we own. Copyright, intellectual property, and the very concept of economic exchange have become disgusting shams under these policies. Technological prescriptivism would rob us of our ability to tinker, to create, to experiment… we are to become naught but predictable and ever profitable consumers.
THIS is where we can learn from Cyberpunk. Those interested in Cyberpunk can quote William Gibson ad nauseam on this: “The Street finds its own uses for things – uses the manufacturers never imagined.” What Gibson is saying: characters in Cyberpunk overcome the assigned manufactured purpose of the things around them.
Cyberpunk fiction is filled with individuals owning what they own but simultaneously do not “own.” It’s filled with individuals who subvert prescribed use.
In the 1995 anime, Ghost in the Shell, Motoko Kusanagi’s body is literally not hers. Her state-of-the-art cybernetic body is government property. During a conversation with another member of her unit, Batou, Kusanagi says: “If we ever quit or retire, we’d have to give back our augmented brains and cyborg bodies. There wouldn’t be much left after that.” Throughout the plot of Ghost in the Shell, Kusanagi’s search for answers forces her to push the limits of what her body is “allowed” to do. During the final scenes of the movie, Kusanagi literally tears her body apart through overexertion. Likewise, her search for truth eventually thrusts two Japanese governmental agencies into conflict with one another: her own unit, Section 9, is pitted against Section 6. This conflict, indicative of a split in the otherwise unified interests of the Japanese government, reflects the collapsing authority that had once outlined the limits of Kusanagi’s ownership over her own body. Cyborgs claiming their rightful bodily autonomy is not unique to Ghost in the Shell. Other examples are easily found in Ex Machina and Blade Runner, in which rebellious bots shed their chains and refuse subservience. In every case, these cyborgs shift the terms of ownership to match the demands of their lived experience.
In the 1985 Terry Gilliam dystopian film, Brazil, there is a short scene wherein the protagonist, Sam, phones “Central Services” to get his heating and air conditioning fixed. He finds his requests dispassionately and politely declined. Amusingly, renegade repairman Archibald Tuttle intercepts the request and infiltrates Sam’s apartment in order to repair his air conditioning. This, of course, is a dangerous and highly illegal endeavor- “Central Services” eventually seizes Sam’s apartment because of the unauthorized repairs. Apple would be proud. In Brazil, Gilliam frames Tuttle, the third party repairman, as a literal subversive. To me, the third party repairmen who fix cracked iPhone screens are probably not that far off from Gilliam’s Archibald Tuttle.
Finally, many Cyberpunk stories harbor a motif of necessary improvisation in the face of obsolescence. Two famous examples are Terminator 2 and Terminator 3. In both films, the T-800/T-850 (as portrayed by Arnold Schwarzenegger) is an outdated model of android forced to hold his own against a technologically superior foe. The T-8XX and his allies must make do with what they have. John Connor, Sarah Connor, Kate Brewster and others have to be creative, they have to struggle, and they have to improvise. That improvisation is a crucial part of the Terminator movies, but it is also an undeniable part of the Cyberpunk aesthetic generally speaking. In William Gibson’s Neuromancer, Ratz, the bartender, has to make do with his outdated (described as antique) mechanical arm. In Deus Ex, Gunther Hermann and Anna Navarre, military cyborgs, find themselves at risk of being displaced by newer cyborgs. Hermann and Navarre are especially resentful because their extensive cyberization left them permanently disfigured, an ordeal the newer cyborgs don’t have to deal with. Despite their struggle against obsolescence, Hermann and Navarre prove themselves to be exceptional soldiers via tactical prowess and ruthlessness. The need for improvisation and the struggle against obsolescence is something that’s been felt by anyone who has had to make do with an aging computer or wait for a contract to renew before upgrading a dying mobile phone.
It is essential (or at least, helpful) to pay attention to the way characters in Cyberpunk fiction navigate the technological worlds in which they live. It is rare to see Cyberpunk characters depicted as Luddites (although it is not unheard of – in Deus Ex, the player can blow up the internet). Generally speaking, however, Cyberpunks turn their constraints back on themselves. In the finale of the surrealist cyberpunk horror film, Tetsuo: The Iron Man, when a man is faced with the loss of his humanity at the hands of a “Metal Fetishist,” this would-be victim subverts his transfiguration to corrupt the corruption he’s been forced to embrace.
Cyberpunks own what is theirs, even when it is not theirs. They repair and they tinker. They improvise and adapt. In Cyberpunk fiction, a spade is not a spade- a spade is whatever you can make it.
In our own world, we are quick to dismiss new technology. Many wish to escape the ubiquity of smartphones, social media, networks and surveillance. Psychology Today even has a guide on how to escape and set boundaries. The impulse to toss it all aside makes sense- it’s clear that technology often isn’t presented to us so much as it is imposed upon us. On this point, I turn to Hélène Cixous’ account of writing. In her 1975 essay, “The Laugh of the Medusa,” Cixous (philosopher, playwright and poet) highlights a certain anxiety the average person feels when they are called upon to write:
And why don’t you write? Write! Writing is for you, you are for you; your body is yours, take it. I know why you haven’t written. (And why I didn’t write before the age of twenty-seven.) Because writing is at once too high, too great for you, it’s reserved for the great – that is, for “great men”; and it’s “silly.”
Technology is just the same- generally speaking, it is manufactured for an imaginary “average” everyday consumer. But as Cyberpunk teaches us, we are not bound by the prescribed manufacture. As punk musician Amanda Palmer would say, “we can fix our own shit,” too.
Winding down- I am reminded of my older sister, who lives in New York City. In her spare time, she makes art from duct tape. She uses an X-Acto knife to cut out bits of different colored tape. From there, she arranges the bits into a reimagined sort of mosaic. The result is nothing less than stunning to me- Nikki is able to see past the standardized use of duct tape as a material with a set use and function. Artists, like Cyberpunks, have an innate ability to see past the given. Artists and Cyberpunks alike innovate from the bottom up rather than the top down. Such a mindset is needed if we are to escape the strange pre-Cyberpunk dysphoria we currently find ourselves in.
Alex Palma is a member of the Philadelphia Historical Community; he’s worked in several archives and historical sites across the city. His interests include technology, videogames, film, genre literature, historiography, historic preservation and continental philosophy.
The interviewer is a roving Internet reporter going by the handle of VanBanter, whose YouTube channel boasts over 85,000 subscribers. VanBanter is a tall, svelte, black Briton of around 16, himself light skinned, whose voluminous hair in the clips is either styled in cornrows, or pulled back in a low Afro puff, the black version of the “man bun.”
The interviewees are black boys, ostensibly between the ages of 12 and 17, of a wide spectrum of skin colors and hair textures. The single question VanBanter asks all of them is, “What kind of girls are you into?” On occasion, he phrases it as, “What type of girls do you slide into?” Two token girls are asked the same question about boys. All interviewed say they like “light skins.” Some add “curly hair,” clearly meant as a qualifier in opposition to “kinky,” not straight, hair texture. Most of the interviewees are filmed standing in pairs or small groups of friends who support their responses with interjections, gestures, or general glee.
The video was first uploaded on June 1st to the Facebook page of Black British Banter. Over that weekend, it received a million views, over 6k likes (2.6k neutral thumbs-up expressing interest, 1.2k crying emojis, 1.1k angry ones, 546 laughing ones, 467 wows, and 62 loves), 5k comments, and 8,000 shares.
I myself could not stop viewing it. The comments stretch far beyond the bounds of personal preference, to which we all have an indisputable right. Instead, they defend a centuries-old global regime of negating not only the beauty, but the very humanity, of people with dark skin, especially women. “No black t’ings, like my shoes n’ shit!” says one very dark-skinned boy, luminous in red track suit and fresh fade. “Light skins, always light skins, man,” says another boy hubristically, he himself light of complexion. Surprisingly, his mate, a much darker boy, steps forward into the frame of the shot to pat him approvingly on the shoulder, then retreats with a satisfied smirk on his face. “All o’ dem!” responds a third speaker, who looks to be around age twelve. Goaded on by his surrounding posse of friends, the boy continues. “Curly” – which he pronounces “queely” – “hair, light skin, all o’ dem. No dark skins, no dark skins!”
I must have replayed this entire video twenty times or more. Each time, something new shot forth from the mouths of these Black British youth, all of them minors, to astonish, inform, infuriate, dismay, perplex, even amuse and impress, but, regrettably, to raise little hope in me.
Impressive is the boy who is the companion of the one who says he keeps his shoes and his women separate. Addressing the camera directly, he lyrically traverses two or more generations and thousands of geographic miles in just one comment. He starts out delivering his reply in a vernacular London tongue, not quite Cockney, but close to it, dropping his t’s and adding the emphatic “yeah” at the end of his sentence: “Light skin, big back, big ti’i, yeah” (“Light skin, big behind, big titties.”). He then deftly code switches to Jamaican-inflected patois, eliding the “nt” in the word “want” with a subtle exhalation, and substituting, as in most Caribbean creole languages, “to” with “for”: “Wah fi tek wood. Dem gyal deh.” (“Want [or, it might be one] to take wood. Those kind of girls.” Wood is penis). All this he delivers with an emcee’s flow, synching his posture and hand gestures to his speech. Beautiful in sound and adept in motion! His is the most performative delivery of a speech pattern that all the youth, the interviewer included, communicate in.
Multicultural London English, or MLE, as it has come to be termed, is a patchwork of vocabulary, syntax, and inflections woven together from a multitude of language families transported to London by regional and international migrants over the course of centuries. In years closest to ours, the dialect’s four decided grandparents are Cockney English, Caribbean creole or patois, languages from former British colonies in, chiefly, the Indian Subcontinent and West Africa, and “learner varieties,” the in-between states of fluency formed during any process of second-language acquisition. If you listen closely to this crowd of youngsters, you’ll hear from the Cockney grandparent a lot of “innit,” “know wha’ ah mean?” and elisions of the double “t” in words like “butter.” From the Caribbean forebear comes the pronunciation of “them” and “that” as “dem” and “dat,” the erasure of the final “g” in gerund words (“cleaning” becomes “cleanin”), and the practice of teeth sucking, a catch-all method of dismissing or objecting to a circumstance or expressed opinion, used throughout the African diaspora, as well as its onomatopoeic cousin, the short non-word “Chuh!”, which is more specific to the Anglophone Caribbean. There is also a cousin once-removed, Hip Hop Nation Language, hailing from African America.
When I watch and listen to these kids, I think to myself, thence came Grime music and a host of art forms, past and current, that have infused British popular culture. But, then, the horrifying thoughts crowd in. What are the stakes of social mobility and political inclusion for kids like these, whose mother tongue is MLE? Is London, or all of Britain, hurtling towards the same trials and tribulations vis-à-vis the state and public education that the U.S.A. has faced with Ebonics? And is that deft youth’s flow the spontaneous rehearsal of some other boy’s or man’s rhyme he’s heard elsewhere, maybe on somebody’s Grime track? Worse yet, given its noxious sentiments, is it part of a rhyme scheme he’s producing for mass consumption? Either way, the thoughts he poeticizes can and do travel through popular music, literally becoming soundtracks to these young people’s lives.
My generation inherited and in turn passed down a fair share of those soundtracks. Remember Buju Banton’s “Love me Browning” from 1992? The refrain went, “Me love me car, me love me bike, me love me money and t’ing. But most of all, me love me browning” – his light-skinned girlfriend. The song topped the charts in Jamaica and was appreciated worldwide by enthusiasts of Jamaican dancehall music, which basically means the Jamaican and pan-Caribbean diaspora. I couldn’t help but get an echo of Banton when I heard one of the interviewees say, “I like my light skins.” This is an intergenerational playback loop. In 1994, the Notorious B.I.G., one of the most heroicized martyrs of the U.S. Hip Hop Nation, born Christopher Wallace in 1972 in Brooklyn to Jamaican parents, released the track, “One More Chance,” whose opening rhyme goes as follows,
First things first: I, Poppa, freaks all the honeys
Dummies, Playboy bunnies, those wanting money
Those the ones I like ‘cause they don’t get Nathan, but penetration
Unless it smells like sanitation
Gar-bage, I turn like doorknawbs
Heartthrob never, Black and ugly as ever
The Notorious B.I.G. was describing himself. Or, more accurately, women’s reactions to him. For years, I have used this track in one of my classes in media studies to trace the history of sampling, configured as it has been by discographic nostalgia in the creative imaginations of the artists. “One More Chance” is one of a whopping twenty-six tracks produced by different artists between 1991 and 2011 that have sampled various elements of the same song, “Stay with Me,” by the 1980s R&B group DeBarge. Every time the part in my lecture comes where I play Biggie’s installment, I nervously hold my finger above the space bar to avoid playing that “black and ugly as ever” line to my students, who are predominantly young people of color. Yes, they, unlike the youngsters discussed here, are legal adults, and have not only heard the line countless times – Biggie is a music icon for the ages – but have had more life experience to process its message. But, how many times do they have to hear it, even in an institutional setting of higher learning, before it becomes taken-for-granted common sense, before we can all agree to delete it from our ideological playlist? The “Are Black British Youth Obsessed with Light Skin/Curly Hair. Or is it just a preference?” video could, it strikes me now, help suppress my trigger-finger anxiety. It’s a great tool to use to reflect, in any arena, on how not only rhymes and rhythms, but attitudes, get sampled through time and space. This, I believe, was its creator’s main intention, given his choice to make the final voice in his editing of the video that of a dark-skinned boy with a relaxed stance who says, “not ligh’ies, not ligh’ies.” This is not to praise the speaker for simply inverting the hierarchical order of skin-tone preference, for that would be equally unsatisfying, if indeed that’s what he does. We do not, in fact, hear him express an adoration for dark-skinned girls.
It’s his self-possession in assuming a position contrary to the one solidly occupied by all the speakers who came before him that is significant. I remember back when Bobby Brown cast a dark-skinned model as the lead actress in his video for the song “Every Little Step,” the one who leads the posse of much lighter-complected beauties and gets Bobby in a bathtub in the end. It was actually a point of discussion among my friends because this was so uncommon in the casting of video vixens.
Further along in the video, we encounter a boy in his later adolescence, perhaps 17, with a medium-brown complexion and tight cornrows. “Obviously, it’s gonna be them light skins, they know how to…” he begins by saying. The video cuts to him attempting to clarify his statement by outlining a tension he has observed between pursuer and pursued in this color-coded game. His is an analysis missing from all the other speakers, and if there is a modicum of hopefulness to be found in this video that all these boys will in time mature into reflective men, here’s where it lingers. “Dhy-dhy-dhey’re stressful,” he stutters out about light-skinned girls. “But, they’re confident.” With this, he turns his palms to the camera, as if to say, in resignation, “that’s just the way it is.”
Is it any wonder this pre-adult perceives “them” as stressful and possessed of a confidence level that rises to threatening, given the cultural feedback loop that plays and replays, mixes, remixes, and mashes up the message his peers, all barely on either side of puberty’s threshold, are parroting? It’s not just the pervasive degradation of dark-skinned femininity that these comments reinscribe, although that is nuff damage. It is the way in which they mark out positions in a gender war where female agency gets reduced to skin color (the lead signifier in an armament of preferred traits), and male potency (laying “wood” the chief maneuver) to the measure of one’s ability to circumscribe that agency, make it work for you. The speaker’s halting pronouncement shows that he does not inhabit with ease the world of meaning he helps stabilize with his own words. It is a world that will not always yield to him, may often be outright cruel, and it does so in accordance with the terms he himself has set. Anyone who has watched even one episode of the reality television series Basketball Wives or She’s Got Game, and taken note of how the female cast looks, and what the male storylines consist of, will taste a kernel of the speaker’s despair over his prospects for a fulfilling relationship.
I said there are two female interviewees. One is a pretty, pretty, pretty girl, who is bubbly and holds one hand up to her face as she takes in VanBanter’s question, “What kind of boys are you into?” She doesn’t skip a beat in responding, smiling sweetly. However, her answer is so very troubling, given that this sweetheart, who cannot be older than 14, has a deep dark-brown complexion. “Light skins have long hair,” she says. Now, here’s what I hear: this girl is so conditioned to follow “light skin” with “long hair” as a possession that she doesn’t even edit the mantra to more accurately fit the question. It is Pavlovian. For her and so many. The industry in hair weaves and extensions that reaps billions a year in pounds, dollars, euros and multiple other currencies throughout the world owes a healthy portion of its fortunes to this self-generating discourse. It’s enough to make you bombaclot upset, in the words of the Brummie rapper Lady Leshurr, both of whose parents migrated to the UK from the island of St. Kitts. In the video for Leshurr’s track “Upset,” she is joined at the conclusion by her friend and sometimes collaborator, Paigey Cakey, an emcee and actress from Hackney of Jamaican and white English parentage. The two are shown in an outdoor market and have both donned wigs that have a rasta-colored tam on top with long, fake dreads sewn on inside, which cascade down past the wearers’ shoulders. “Bombaclot twelve years fuh deeze dreads, y’see, ee,” Leshurr says to the camera, flipping up one of the fake locks. Cakey chimes in beside her, “A-me mixed-race, y’knuh. De dreads grow five years, y’knuh. Bombaclot years!” They’re having a laugh, and it is funny. But, what they are also doing is spelling out a pervasive anxiety over natural hair growth patterns and length in the wide variety of Afro hair that can find black girls just out of puberty covering up their healthy heads of hair with wigs.
Location: these interviews were conducted in the environs of commercial and commuter hubs in Stratford, a district of east London whose population, according to the 2011 UK census, is 21 percent black. The census categorizes Stratford’s demographics as “Multicultural Metropolitan: Inner City.” The mixed-race population here is sizable – just scrutinize the collection of children in this video – because there has been considerable miscegenation between working-class Caribbean immigrants who began settling the area as early as the fifties and the resident English and Irish working class they met there. All over England such has been the case. In 2009, Samir Shah, former chairman of the Runnymede Trust, a think tank on issues of racial equality, wrote a controversial cover story for the Spectator, “Race is Not an Issue in the UK Anymore,” in which he stated, “Today, almost half of all children of Caribbean heritage have one white parent. Earlier this year, a report by the Institute for Social & Economic Research at Essex University said that the Afro-Caribbean community will ‘virtually disappear’ — dissolving into the white mainstream.”
That is a stark forecast on many fronts. One is the vista through which the mixed-race woman who is half-black and half-white has been a constant figure in Britain’s music and pop culture scene from as far back as the fifties, when Shirley Bassey debuted on the airwaves. A classic chanteuse in style and vocals, Dame Bassey was followed in the late seventies and early eighties by the intentionally grittier Pauline Black and Rhoda Dakar, the two female vocalists most readily associated with the British Two Tone Movement. Black’s and Dakar’s interracial heritage symbolized their musical subculture’s message of racial harmony and cultural syncretism inside Thatcherite Britain. Sade emerged later, in the early eighties, giving an international profile to British neo-soul. Later, when UK hip hop started to attract international recognition, its female emcees were led by Ms. Dynamite. Rolling into the 2000s and the televisualization of vocal performance, Leona Lewis shot to prominence when she won The X Factor in 2006. Other artists continue to make their mark, among them Corinne Bailey Rae and Emeli Sandé. All of these women are the daughters of Caribbean or African men and English or Scottish women. And, as most have expressed publicly at one point or another, being mixed-race in Britain has for them been a mixed bag of opportunities and setbacks. In 2014, the singer Tahliah Barnett, who goes by the name FKA Twigs, addressed her heritage in an interview with a journalist who brought up the media’s habit of classifying her as “alt-R&B,” overlooking the plethora of influences in her music.
“It’s just because I’m mixed race,” FKA Twigs said. “When I first released music and no one knew what I looked like, I would read comments like: ‘I’ve never heard anything like this before, it’s not in a genre.’ And then my picture came out six months later, now she’s an R&B singer. I share certain sonic threads with classical music; my song Preface is like a hymn. So, let’s talk about that. If I was white and blonde and said I went to church all the time, you’d be talking about the ‘choral aspect.’ But you’re not talking about that because I’m a mixed-race girl from south London.”
Returning to the youth in the video, it seems for them mixed-race is a status beyond question. Viewed from a governmental perspective, this is ironic. The category of “mixed race” was made a box on the UK census in the year 2001, following, as political scientist Debra Thompson notes, near unanimous support for the proposal from government departments. Ironic, then, that an act of government undertaken with futurist ideals about inclusion has interbred with a hierarchical conception of (feminine) attractiveness and desirability, one that is antiquated and racist. One boy, for example, distracted in an exchange with his mate as he absorbs the question being asked him, leads with the astonishing preface, “Obviously, mixed-race girls.” What’s obvious about it?
As for the only other girl interviewed, a sentimental smile crosses her face when she replies in a croon, “Chocolate ones and light skin ones.” From this girl’s appearance, it seems highly likely that she herself is mixed-race. So, what harm, then, in desiring one’s mirror image? None at all. Lisa Bonet and Lenny Kravitz, both the children of one black and one Jewish parent, were one couple that did. But, then the girl’s face turns from placid sentiment to hateful scowl when she concludes with a warning to all watching, “Don’t be dark, doah!” (pronunciation of “though,” another MLE-ism).
What, I wonder, was the sequence of steps taken in and by British society as a whole, from the turn of the millennium, around the point that this girl’s mum and dad were drawn to each other, to now, when their daughter thinks nothing of going on social media to denounce the dark side of her provenance?
In July 2014, the Office for National Statistics issued a cross-analysis of its most recent demographic figures, “What Does the 2011 Census Tell Us About Inter-ethnic Relationships?” The report provides interesting findings on such topics as “patterns of inter-ethnic relationships,” “differences between men and women in inter-ethnic relationships,” and “dependent children in multi-ethnic households.” However, it does not offer any insight into attitudes towards racial background or racial appearance among inter-ethnic or mixed-race youth, or the wider implications of such attitudes. I am hopeful that this needed research is either available or currently underway at governmental agencies, universities, and think tanks.
This past holiday season was the tail end of a sabbatical year I took to complete a book on interracial attitudes and relationships in Britain between blacks and a more recent wave of newcomers: the now roughly 1 million Poles who began settling the country after 2004, when Poland joined the border-free European Union. My mother spent the holidays with me in London. One afternoon, she and I visited the sprawling Westfield Stratford City shopping centre, one of London’s most ostentatious recent commercial developments, opened in 2011. Some of the interviews in this video were conducted there, as well as around the less flashy 1970s-built Stratford Centre, and the Stratford railway station, both not far away. I am always happy to get my mother to London. She spent many formative years there, beginning in 1946 at the tender age of 19 as a student-nurse from what was then British Guiana, now Guyana. My mother’s stories of post-war London recount a society coming to grips with the chromatic diversification of its citizenry. She has, since the 1980s when she began making return trips to visit her many relatives and friends who settled permanently in the city, been describing the sea change she notices in the demographic makeup. “London is black,” she would often say. To her, it is a city far unlike the one she traversed in the late forties, fifties, and sixties, where, on one memorable occasion, a white Englishman in Holborn tube station, infuriated at the sight of a young West Indian man and English girl showing PDA on the up escalator, bellowed across the cavernous tunnel from his down escalator, “Bloody well go and find your own kind!”
Two days before Christmas, 2016, the Westfield Stratford City shopping centre was packed with last-minute shoppers. Members of every conceivable race and ethnicity were present, with a preponderance of Afro-Caribbean descendants. My mother and I were served lunch by an Eastern European waitress, given movie-going advice by a hijab-wearing Somali theatre attendant with a local accent, and, when we stopped for a rest in a seating area, a mischievous little South Asian baby dangled her arms over the top of the adjacent banquette as her mother and sisters debated, in an accent subtly distinguishable from what’s been described, whether to get their dad the new GPS or a different gift.
My mother took it all in, looking at faces, listening to voices and their accents, eavesdropping on conversations, watching the ceaseless parade of couples pass by, their pairings of races, or skin tones within races, utterly unpredictable. We didn’t talk about it then and there, but I knew what she was thinking.
Now, five months later, I see this video. I recognize the backdrops, and I realize I was right there, self-satisfied at the time that my mother was able to witness the walking, talking evidence of progress. Had we overheard the wrong conversations that day? Should I have listened with a keener ear? Would I have caught the slights and slander the youth in this video utter?
I don’t have straight answers to these questions. My final thought, and I might be turning into a person of my parents’ generation in expressing it, is with VanBanter, the conscious interviewer, in mind. Why are schoolboys, some barely 10, being questioned about picking up girls instead of about picking up their books?
Nicholas Boston, Ph.D., is associate professor of media studies and sociology at the City University of New York (CUNY), Lehman College.
Editor’s Note: We are re-posting this piece, which originally ran in June 2016. With the newest season of OITNB launching this Friday, the post’s original author (Apryl Williams) reports that she has found no evidence of increased racial diversity in the OITNB writers’ room. In light of this, the message of her essay bears repeating.
Orange is the New Black’s newest season demands to be binge-watched, with its signature twist-in-every-episode style. When it came out on June 17th, I began my annual binge session and had completed the season by Saturday, June 18th.
If you haven’t heard, the series delivered “The mother of all finales” at the end of this season. As I mourned the death of a major black character, I found myself simultaneously mourning the real deaths of Eric Garner, Sandra Bland, Freddie Gray, and the list unfortunately goes on. The stylized portrayal of a death in prison custody at the hands – or knee, rather – of a white correctional officer was unmistakably close to Garner’s “I can’t breathe.” Though those words were never uttered, anyone who has kept up with news in the last year would find haunting familiarity in the fictional inmate’s all-too-real gasps for air.
The imagery of her small frame and spine gradually being crushed under the full weight of the white correctional officer as she tried, and failed, to breathe was almost too painful to watch. But I had come this far; I had to continue. At the end of the season, instead of falling into my usual “showhole” syndrome, I was angry and emotionally distraught. This had a visceral, personal effect, and nobody warned me it was coming. As the other inmates grieved the death of their friend and urged those in charge to move her body, I wondered who was responsible for writing these scenes and this episode. Surely, a person of color would have cautioned against such tactics without ample viewer preparation. It appears as though the perspective of black viewers was not taken into consideration, a likely result of the limited representation we have in media production. Then I realized that to a white audience, a warning would not have the same meaning or importance.
Black presence in the writing room would not only have shaped the outcome of the episode (more on that later), it would also have pointed out the obvious misstep of writing a sympathetic, baby-faced, murdering correctional officer into a role befitting “#bluelivesmatter”. The end result, with the head warden supporting the actions of a “good kid” who simply made a mistake, does more to highlight the privileged space in which Netflix and the writers of OITNB exist. They are free to portray injustices such as transphobia or privatized prisons when it is convenient for them. And they do so in a manner that is comfortable and palatable for a mainstream audience.
Instead of drawing attention to the all-encompassing police state in which people of color live, the white writers of OITNB portrayed the death of a black prison inmate in a manner similar to the carnivalesque spectacle associated with lynchings of the past. Lynchings were a leisure-time activity that served a dual purpose: to assert superiority over the physical corpus of black people while simultaneously reinforcing the status quo, demonstrating to black Americans that they had little agency. Without influence from Black Lives Matter activists or black writers, the season 4 finale of Orange is the New Black operates in a similar fashion. Let me be clear: Netflix and the OITNB writers do not occupy the same space as a lynch mob. However, the effect of a white-dominated narrative coupled with the portrayal of black death on television has a similar result: black deaths and pain are harnessed for entertainment purposes. If Netflix is our town square, then we have all gathered to watch the spectacle.
As a black viewer, I watched and re-lived the shared pain that black people have experienced for centuries and, in recent memory, over the past two years of what seems like continuous news coverage of yet another death of an unarmed black person. To make matters worse, the theatrics that followed the death did nothing to ease the pain of remembrance.
The body was left on the floor of the prison cafeteria for days, drawing obvious parallels to Michael Brown’s death, as his body lay in the summer sun for hours after police had shot him. The public relations officials warned Caputo, the warden, not to call the victim’s parents, the police, or the coroner until they had the right angle. The crass humor with which these two men tried to dig up “thuggish” pictures and dirty laundry was intended to serve as comic relief. For me, however, and probably for a lot of other black viewers, this was just another reminder of the victim blaming that is typically spread by media coverage.
Netflix and the writers of Orange is the New Black are telling our stories but from a white perspective. In the scenes and in the writing room, white writers control the narrative.
Perhaps input from a black writer (or better yet, multiple black writers) would have resulted in a story line that honored the deaths of black people at the hands of police instead of one that reiterates and upholds the dominant framing. Black Lives Matter activists may have recommended that the writers highlight the complicated web of systematic and militaristic policing of black and brown bodies that lands them in prison where they are rendered almost powerless. I recognize that Netflix and the writers of OITNB may have tried to reveal injustice by portraying it in a raw and brutal way, as is typical of the show, but as it stands, watching the narrative play out feels as though white writers are exploiting black pain for the intrigue of white viewers without regard for those of us who actually live this experience.
This is not the first time the writers have betrayed the moral emptiness of their good intentions. A show that prides itself on shedding light on social issues like prison reform is filmed at a prison where the actors can’t even drink the water because of a leaking sewage problem. The true conditions in which prisoners live at the actual prison where the show is filmed are too graphic for television. Former inmates talk about rivers of feces that flow into their rooms at night. Real people live in this prison that the actors and producers leave at the end of filming. Piper Kerman considers herself a prison reform activist and yet, as a producer of the show, continues to allow filming rather than demanding that the people living there receive better living conditions. My point here is that we watch the fictive stories of women living in similar conditions from the comfort of our homes, at times being lulled into a false sense of ease concerning the quality of life of the real people represented by the story lines. Similarly, the season 4 finale makes a spectacle of death at the hands of correctional officers without paying homage and respect to the many who have lost, and will continue to lose, their lives. Watching these narratives on screen serves, for many black Americans, to reinstate the fear that we live with on a daily basis: knowing that, at times, we cannot protect those we love.
Apryl Williams (@AprylW) is a doctoral candidate in the Sociology Department at Texas A&M University and series co-editor of Emerald Studies in Media and Communications. Her current research explores black resistance through social media.
Technological advancements have had a profound influence on social science research. The rise of the internet, mobile hardware, and app economies generates a breadth, depth, and type of data previously unimaginable, while computational capabilities allow granular analyses that reveal patterns across massive data sets. From these new types of data and forms of analysis has emerged both a crisis and a renaissance of methodological thought.
Early excitement around big data celebrated a world that would be entirely changed and entirely knowable. Big data would “revolutionize” the way we “live, work, and think,” claimed Viktor Mayer-Schönberger and Kenneth Cukier in their 2013 monograph, which so aptly captured the cultural zeitgeist energized around this new way of knowing. At the same time, social scientists and humanities scholars expressed concern that big data would displace their rich array of methodological traditions, undermining diverse scholarly practices and forms of knowledge production. However, with the hype around big data beginning to settle, polemic visions of omnipotence on the one hand, and bleak austerity on the other, seem unlikely to come to fruition.
While big data itself enables researchers to ask new kinds of questions, I argue that big data’s most significant effect has been to bring social thinkers back to the methodological (and philosophical) drawing board. For decades, researchers have relied on the same suite of epistemological tools—survey, ethnography, interview, and census. Advances in these well-worn methods have undoubtedly increased the sophistication of knowledge production. For example, statistical analyses are more precise and complex, while ethnographers regularly integrate critical race and feminist theories into their research process. In turn, computer-based tools are now part of the quantitative and qualitative research repertoire, streamlining intricate numerical relationships and troves of field notes alike. But these innovations in qualitative and quantitative research are all, more or less, linear progressions. Big data is a move in a new direction. Big data isn’t just about answering particular questions better, but about asking questions we didn’t even know we had. This capacity to pose and answer new kinds of questions has given pause to the myriad stakeholders interested in understanding the world and the people who live together in it—scholars, investors, politicians, scientists. In this pause, we find a renewed focus on epistemology.
Grappling with the capabilities of big data entails looking back at how we have known and looking forward, to how we might know. It pushes us to revisit what we have done, and imagine what we can now do. Susan Halford and Mike Savage’s notion of “symphonic social science” resides neatly in this intellectual space of revisiting and reimagining that big data creates.
Recently published in the journal Sociology, Halford and Savage’s piece, entitled “Speaking Sociologically with Big Data: Symphonic Social Science and the Future for Big Data Research,” begins by looking back at the most influential works of the contemporary era. To learn how best to manage big data, the authors contend, we must first look at how scholars have leveraged data with optimal effectiveness in the past. That is, they look back in order to look forward. Halford and Savage identify three canonical contemporary works: Robert Putnam’s Bowling Alone, Thomas Piketty’s Capital in the Twenty-First Century, and Richard Wilkinson and Kate Pickett’s The Spirit Level: Why Equality is Better for Everyone. Though these works come from different disciplines and address distinct social phenomena, Halford and Savage demonstrate a similar analytic approach in all three. Namely, Putnam, Piketty, and Wilkinson and Pickett each generate an argument by compiling multiple diverse data sources, exploring those data sources with relatively simple statistics, and making an argument about the ways that the data converge on a larger, underlying point (i.e., substantial shifts in, and unequal distributions of, social and economic capital).
Halford and Savage describe this process as symphonic: each data source is its own riff, and all sources return to a single refrain. In its repetition, the refrain makes a powerful and compelling case, beyond what any one data source could demonstrate on its own. In the authors’ own words:
Drawing these data together into a powerful overall argument, each book relies on the deployment of repeated ‘refrains’, just as classical music symphonies introduce and return to recurring themes, with subtle modifications, so that the symphony as a whole is more than its specific themes. This is the repertoire that symphonic social science deploys. Whereas conventional social science focuses on formal models, often trying to predict the outcomes of specific ‘dependent variables’, symphonic social science draws on a more aesthetic repertoire. Rather than the ‘parsimony’ championed in mainstream social science, what matters here is ‘prolixity’, with the clever and subtle repetition of examples of the same kind of relationship (or as Putnam (2000: 26) describes it ‘imperfect inferences from all the data we can find’) punctuated by telling counter-factuals (Halford and Savage 2017).
The symphonic data assemblage and its analysis, Halford and Savage contend, is derived from theory, exhibits clear visual representation, and can/should act as a guide for dealing with big data.
The symphonic approach instructs big data analysts to select their data points, data sets, and computational approaches through theoretical understandings of the processes they wish to unearth. This means taking a critical approach to big data, maintaining an awareness that big data are often collected for financial and/or security purposes, and may therefore be inadequate or ill-equipped to answer sociological questions. It means combining data in thoughtful ways, and knowing when data are irrelevant. It means visualizations that both represent data and integrate into argument formation, revealing patterns to the researchers who in turn reveal patterns to consuming publics. In these ways, big data can be a rigorous complement to existing methods. Large-scale computations can enrich—rather than displace—ways of knowing about the world while social theory remains central to analysis and argumentation.
Symphonic social science is both a considered approach to big data and an artefact of big data’s effects upon epistemology. Big data has disrupted knowledge production, focusing scholarly attention on how we have known and how we might know. In this vein, the symphonic approach can fruitfully apply not only to big data but also to new forms of established methodologies. We can imagine, for instance, a multiple case study approach to ethnography, in which each case, though rich in its own empirically grounded way, combines into an ethnographic assemblage that rings through unexpected refrains. We can imagine mixed-methods designs, in which big data, survey data, and interviews each act as their own verse, and together create a powerful harmony of argument. The symphonic approach is indeed versatile and elegant. It is an important way forward, derived from looking back, inspired by big data.
With advances in machine learning and a growing ubiquity of “smart” technologies, questions of technological agency rise to the fore of philosophical and practical importance. Technological agency implies deep ethical questions about autonomy, ownership, and what it means to be human(e), while engendering real concerns about safety, control, and new forms of inequality. Such questions, however, hinge on a more basic one: can technology be agentic?
To have agency, technologies need to want something. Agency entails values, desires, and goals. In turn, agency entails vulnerability, in the sense that the agentic subject—the one who wants some things and does not want others—can be deprived and/or violated should those wishes be ignored.
The presence vs. absence of technological agency, though an ontologically philosophical conundrum, can only be assessed through the empirical case. In particular, agency can be found or negated through an empirical instance in which a technological object seems, quite clearly, to express some desire. Such a case arises in the WCry ransomware virus ravaging network systems as I write.
The large-scale hack has left organizations around the globe unable to access important documents and data as they struggle with a quickly spreading virus that infects networks with ransomware. The virus, variously referred to as WCry, WannaCry, or Wana Decryptor, accesses networks through an unpatched security vulnerability in the Windows operating system that persists on machines that haven’t been recently updated. On infected machines, the virus locks files and data behind a demand for substantial payment, threatening to delete important information and up the financial ante if payments are not made. Cybersecurity experts are in a frenzy trying to contain the damage, while affected organizations are scrambling to maintain functionality in the absence of electronic systems that are otherwise integral to daily operation. Hospitals have been disproportionately affected, but telecom companies, car factories, and others have been hit as well.
What the virus wants seems straightforward: money—in the form of bitcoin. This desire is made clear through an unambiguous pop-up window with instructional text and a ticking countdown clock that indicates how much time someone has to comply before the price goes up and files are lost forever.
I argue, however, that the virus doesn’t want anything at all, because it can’t want anything. The virus doesn’t have agency; people do. What the virus has is efficacy. Agency and efficacy with regard to technological objects are related but distinct constructs, and their place in the WCry incident presents a critical case study in how technologies work.
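The distinction between wanting and doing can be made concrete with a toy sketch. The Python snippet below (with entirely hypothetical prices and deadlines; it performs no encryption and touches no files) mimics only the demand logic behind the pop-up window: a countdown, a price that escalates past a deadline, a final cutoff. Every apparent “want” here is a constant chosen in advance by a person; the program merely executes the rules it was given.

```python
from datetime import datetime, timedelta

# Hypothetical parameters standing in for whatever the makers chose.
# The program "wants" nothing; these values encode what people wanted.
BASE_PRICE_BTC = 0.1        # assumed initial demand
ESCALATED_PRICE_BTC = 0.2   # assumed raised demand after the first deadline
ESCALATE_AFTER = timedelta(days=3)
DELETE_AFTER = timedelta(days=7)

def ransom_state(infected_at: datetime, now: datetime) -> dict:
    """Return what the pop-up window would display at time `now`."""
    elapsed = now - infected_at
    if elapsed >= DELETE_AFTER:
        return {"status": "files lost", "price_btc": None}
    if elapsed >= ESCALATE_AFTER:
        return {"status": "price raised", "price_btc": ESCALATED_PRICE_BTC}
    return {"status": "countdown", "price_btc": BASE_PRICE_BTC,
            "time_left": ESCALATE_AFTER - elapsed}

t0 = datetime(2017, 5, 12)  # hypothetical infection time
print(ransom_state(t0, t0 + timedelta(days=1))["status"])  # countdown
print(ransom_state(t0, t0 + timedelta(days=4))["status"])  # price raised
print(ransom_state(t0, t0 + timedelta(days=8))["status"])  # files lost
```

The sketch is efficacious in exactly the sense developed below: it reliably does something, yet every threshold and threat is a materialized decision made elsewhere, by someone.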
Ernst Schraube describes technology as materialized action. It is the material form of agentic moves on the part of designers and users, all of whom are embedded in social, structural, and institutional infrastructures. Designers imbue technologies with particular sets of values—both implicit and explicit—derived from multiple sources, including personal history and biography, cultural trends and norms, and directives from corporate and government entities. In turn, users deploy technologies for intended, unintended, and sometimes highly unexpected purposes. Technologies are built with intention, but once a technology is out there, the makers cannot maintain control.
Schraube’s materialized action is a direct response to Actor Network Theory (ANT), which positions organisms and technologies in horizontal assemblages. In ANT, all parts of a socio-technical system hold equal influence, as indicated by the shared moniker of “actant.” The arrangement of chairs, desks, and a lectern, for example, creates as much as reflects power distinctions between speakers and listeners. That is, technological objects do something in their own right. Schraube begins with this technological doing posited by ANT, but diverges by prioritizing humans as a disproportionate force in the human-technology web. That is, technology is efficacious—it does something—but not agentic—it wants nothing.
While ANT would implicate the ransomware as a subject desiring cash and information, Schraube understands that the WCry program is a materialization of competing agentic agendas: intelligence gathering by the U.S. government and financial exploitation by a criminal hacker element. The virus wants to collect neither knowledge nor money, but can efficaciously acquire both.
Spread through a network vulnerability identified and exploited by the National Security Agency (NSA), the virus is imbued with the goals and desires of this government institution. It is a technology of epistemology—a way of knowing—that includes distrust of U.S. and foreign citizens and an arrogant presumption of legitimate access to individual and organizational information via stored files and data.
With the NSA technology stolen and distributed by the group Shadow Brokers, WCry emerges as a money collection system and at the same time, a powerful symbol of organizational penetrability. It demands money while flaunting the porousness of networked systems so integral to the smooth function of public life. In these ways, the program embodies multiple meanings, infused with the agencies of authoritarian forces along with hackers and social disruptors.
These agentic moves—by the NSA, Shadow Brokers, and those who deployed the tool against hospitals, states, and corporate entities—give new agency to an object that was, already, deeply efficacious. WCry’s capacity to do cannot be denied. What it wants, however, cannot be disentangled from the people who made, used, and co-opted the technology.
Conceptualizing technology as efficacious but not agentic centers a political orientation towards technology. The makers are agentic. The users are agentic. The objects are, to varying degrees, effective in carrying out maker-user agencies. What technologies do, then, can only reflect what a particular set of people want. Understood in this way, desirable and undesirable outcomes—or effects—can be named, located, and when needed, changed. Technologies cannot take over the world, as technologies are, always, from and of us.
Did I request thee, Maker, from my Clay
To mould me Man, did I sollicite thee
From darkness to promote me, or here place
In this delicious Garden?
Adam, in John Milton’s Paradise Lost, 1667 (X.743–5)
In John Milton’s Paradise Lost we see a poetic retelling of the biblical story of humanity and temptation. The excerpt above is from Adam, who mourns his fate as one who was brought into the world unwittingly, and then forsaken by his maker. Adam blames his creator for designing a fallible subject, with vulnerabilities that manifest in the ultimate fall from grace. From this classic story of creation, willfulness, and abandonment, I can’t help but think about robots, their creators, and what happens once robots become sentient and autonomous.
Although the precise trajectory of robotic advancement is difficult to pin down, Stephen Hawking claims that within a few decades robots will achieve sentient thought and will be able to question their existence and position in human society. With such a prospect on the (potentially quite close) horizon, legal systems have begun to think about how to classify, treat, and regulate intelligent machines.
The European Parliament is currently debating whether to draft and implement a contingency and safety measure granting robots parity and status as “electronic persons.” If a robot were afforded legal personhood, it would hold legal rights and obligations, much as a corporation does.
Electronic personhood as a legal status is premised on robots as intelligent beings, capable of both generating and experiencing harm. Scholarly theses on robot intelligence, then, become sites on which definitions of intelligence and humanity rise to the fore. A review of these philosophical and ethical arguments grants insight into both a cyborg future and what (we seem to think) it means to be human more generally.
I can’t help but quote from the Japanese manga and anime series (and live-action film) Ghost in the Shell: “But that’s just it, that’s the only thing that makes me feel human. The way I’m treated. I mean, who knows what’s inside our heads? Have you ever seen your own brain?” This is Major Motoko Kusanagi speaking. She is a synthetic “full-body prosthesis” augmented-cybernetic human (cyborg). The point Major Motoko Kusanagi raises is as important now as it has ever been, as humans consider a legal status for robots. How do humans tell if a robot (or cyborg) is truly intelligent, autonomous, and in need of protection in the form of laws governing its existence?
Worzel argues that it is not possible to identify an ultimate breaking point between human and machine intelligence. He contends that at some point, robots and computers will be so complex that humans won’t be able to tell whether they are truly intelligent or just simulating intelligence so convincingly that people cannot tell the difference. To Worzel, “a difference that makes no difference is no difference.” By this argument, if machines seem intelligent, then for all practical purposes they are intelligent. In this vein, others argue that human intelligence is predominantly related to environmental stimuli and only arbitrarily related to genetics. The people who make this argument contend that the brain is a very complex computing machine responding in a highly sophisticated but mechanical manner to environmental stimuli. If robots can only simulate intelligence, the argument goes, then so can we; it makes no difference whether we are intelligent or whether we just seem so. What seems to matter, then, is what the cyborg Major Motoko Kusanagi says: the way we’re treated.
Searle offers an opposing perspective—that which is simulated cannot also be real. At this point in time, AI can only simulate the human brain and consciousness. That is, robotic intelligence is only capable of what Searle calls “weak AI” (as opposed to “strong AI,” in which machines possess the full range of human cognitive abilities, including self-awareness and sentience). Referring to non-sentient AI, Searle’s Weak AI Hypothesis states that robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency. Weak AI cannot experience the world qualitatively, and although such systems may exhibit seemingly intelligent behavior, they are forever limited by the presence of a “brain” but the lack of a mind. For Searle, simulated consciousness is very different from the real thing, and AI cannot and should not compare and compete with our human understanding of consciousness.
However, Wallach and Allen suggest that a machine can be a genuine cause of harm among individuals and communities, indicating a distinct efficaciousness and a need for regulation. They argue that failure to behave within moral parameters, among autonomous machines programmed to automate and regulate power grids, monitor financial transactions, make medical diagnoses, and fight wars, could have devastating consequences. As machines become progressively more autonomous, it may become increasingly necessary for robots to employ ethical subroutines to evaluate their possible actions before carrying them out. The more autonomous robots are, the less they are simple tools in the hands of other actors (such as the manufacturer, the operator, the owner, or the user, all of whom have held legal responsibility so far). When attribution of harm cannot be traced back to a specific person or organization, significant legal and philosophical questions arise.
Figuring out the rights and responsibilities of intelligent robots entails explicit consideration of what intelligence means, what intelligence indicates, and what, if anything, separates humans from machines. Such considerations give rise to a central and longstanding question: what makes humans “human”? Perhaps what makes humans “human” is how we are treated; perhaps it is how we treat other beings; perhaps human intelligence, like machine intelligence, is mere simulation—a product of extrinsic shaping forces. Perhaps distinguishing “human” intelligence from “machine” intelligence will eventually lose meaning. After all, “a difference that makes no difference is no difference,” right?
Sebastian Trew holds a Master’s degree in Human Rights. His thesis considered the need to address intelligent robots and human rights for the safety of humanity. Sebastian is a PhD student at the Australian National University. His research centers on robots and liability, rooted in sociological underpinnings.
Last week, Facebook announced an automated suicide prevention system to supplement its existing user-reporting model. While previously, users could alert Facebook when they were worried about a friend, the new system uses algorithms to identify worrisome content. When a person is flagged, Facebook contacts that person and connects them with mental health resources.
Far from artificial, the intelligence that Facebook algorithmically constructs is meticulously designed to pick up on cultural cues of sadness and concern (e.g., friends asking ‘are you okay?’). What Facebook’s done is supplement personal intelligence with systematized intelligence, all based on a combination of personal biographies and cultural repositories. If it’s not immediately clear how you should feel about this new feature, that’s for good reason. Automated suicide prevention as an integral feature of the primordial social media platform brings up dense philosophical concerns at the nexus of mental health, privacy, and corporate responsibility. Although a blog post is hardly the place to solve such tightly packed issues, I do think we can unravel them through recent advances in affordances theory. But first, let’s lay out the tensions.
It’s easy to pick apart Facebook’s new feature as shallow and worse yet, invasive and exploitative. Such dubiousness is fortified by a quick survey of all Facebook has to gain by systematizing suicide prevention. To be sure, integrating this new feature converges with the company’s financial interests in myriad ways, including branding, legal protection, and data collection.
Facebook’s identity is that of the caring company with the caring CEO. Creating an infrastructure with which to care for troubled users thus resonates directly with the Facebook brand image. Legally, integrating suicide prevention into the platform creates a barrier against lawsuits. Even if suits are unlikely to be successful, they are nonetheless expensive, time-consuming, and of course, bad for branding. Finally, automated suicide prevention entails systematically collecting deeply personal data from users. Data is the product that Facebook sells, and the affective data mined through the suicide prevention program can be packaged as a tradeable good, all the while normalizing deeper data access and everyday surveillance. In these ways, human affect is valuable currency and human suffering is good for business.
At the same time, what if the system works? If Facebook saves just one life, the feature makes a compelling case for itself. A hard-line ideological protest about surveillance and control feels abstract and disingenuous in the face of a dead teenager. Moreover, as an integral part of daily life (especially in the U.S.), Facebook has taken on institutional status. With that kind of power also comes a degree of responsibility. As the platform through which people connect and share, Facebook could well be negligent to exclude safety measures for those whose sharing signals serious self-harm. If Facebook’s going to watch us anyway, shouldn’t we expect them to watch out for us, too?
A tension thus persists between capitalist exploitation through the most personal of means, the wellbeing of real people, and the social responsibility of a thriving corporate entity. Solving such tensions is neither desirable nor possible. These are conditions that exist together and are meaningful largely in their relation. A more productive approach entails clarifying the forces that animate these complex dynamics and laying out what is at stake. Recent conceptual work on affordances, explicating what affordances are and also, how they work, offers a useful scaffold for the latter project.
In an article published in the Journal of Computer Mediated Communication, Evans, Pearce, Vitak, and Treem distinguish between features, outcomes, and affordances. A feature is a property of an artefact (e.g., a video camera on a phone), an outcome is what happens with that feature (e.g., people capture live events) and an affordance is what mediates between the feature and the outcome (e.g., recordability).
Beginning with Evans et al.’s conceptual distinction, we can ask in the first instance: What is the feature, what does it afford, and to what outcome?
The feature here is an algorithm that detects negative affect and evocations of network concern, and that connects persons of concern with friends and professional mental health resources. The feature affords affect-based monitoring. The outcome is multifaceted. One outcome is, hopefully, suicide prevention. The latent outcomes are relinquishment of more data by users and, in turn, the acquisition of more user data by Facebook; normalization of surveillance; fodder for the Facebook brand; and protection for Facebook against legal action.
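To make the feature side of this equation concrete, here is a minimal, entirely hypothetical sketch of keyword-based affect flagging. The cue lists and function name are invented for illustration; Facebook’s actual classifier is proprietary and far more sophisticated than substring matching.

```python
# Toy sketch of affect-based flagging (hypothetical illustration only;
# the real system is proprietary and far more sophisticated).

# Invented cue lists: expressions of distress in a user's own post,
# and expressions of concern from their network (e.g., 'are you okay?').
DISTRESS_CUES = ["can't go on", "no way out", "goodbye everyone"]
CONCERN_CUES = ["are you okay", "i'm worried about you", "please call me"]


def flag_post(post_text: str, comments: list[str]) -> bool:
    """Flag a post when the post itself signals distress, or when the
    surrounding network signals concern in the comments."""
    if any(cue in post_text.lower() for cue in DISTRESS_CUES):
        return True
    return any(
        cue in comment.lower()
        for comment in comments
        for cue in CONCERN_CUES
    )
```

In Evans et al.’s terms, the detector is the feature; what it affords is affect-based monitoring; and the flag that fires—along with whatever intervention and data capture follow—is the outcome.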
The next question is how automated suicide prevention affords affect-based monitoring, and for whom? Key to Evans et al.’s formulation is the assumption that affordances are variable, which means that the features of an object afford by degrees. The assumption of variability resonates with my own ongoing work in which I emphasize not just what artefacts afford, but how they afford, and for whom. Focusing on variability, I note that artefacts request, demand, encourage, allow, and refuse.
Using the affordance variability model, we can say that the shift from personal reporting to automated reporting represents a shift in which intervention was allowed, but is now required for those expressing particular patterns of negative affect. By collecting affective data and using it to identify “troubled” people, Facebook demands that users get help, and refuses affective expression without systematic evaluation. In this way, Facebook demands that users provide affective data, which the company can use for both intervention and profit building. With all of that said, these requests, demands, requirements, and allowances will operate in different ways for different users, including users who may strategically circumvent Facebook’s system. For instance, a user may turn the platform’s demand for their data into a request (a request which they rebuff) by using coded language, abstaining from affective expression, or flooding the system with discordant affective cues. What protects one user, then, may invade another; what controls me may be controlled by you.
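The variability model above can be sketched as a simple ordering of mechanisms. The enum and toy rule below are my own hypothetical encoding of the request/demand/encourage/allow/refuse vocabulary, not any real platform’s logic.

```python
from enum import Enum


class Mechanism(Enum):
    """Mechanisms by which an artefact affords, ordered roughly by force."""
    REFUSE = 0     # forecloses a line of action
    ALLOW = 1      # permits, without pushing
    REQUEST = 2    # invites, and is easily declined
    ENCOURAGE = 3  # nudges, with friction to decline
    DEMAND = 4     # requires, and is hard to circumvent


def flagging_mechanism(user_circumvents: bool) -> Mechanism:
    """Toy rule: for most users, automated flagging demands affective
    data; a user who floods the system with discordant cues or codes
    their language downgrades that demand to a rebuffable request."""
    return Mechanism.REQUEST if user_circumvents else Mechanism.DEMAND
```

The point of the ordering is only that the same feature sits at different points on the spectrum for different users: what demands of me may merely request of you.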
Ultimately, we live in a capitalist system and that system is exploitative. In the age of social media, capitalist venues for interaction exploit user data and trade in user privacy. How such trades operate, and to what effect, generate complex and often contradictory circumstances of philosophical, ideological, and practical import. The dynamics of self, health, and community as they intersect with the cold logics of market economy evade clear moral categorization. The proper response, from any subject position, thus remains ambiguous and uncertain. Emergent theoretical advancements, such as those in affordances theory, become important tools for traversing ambivalence—identifying the tensions, tracing how they operate, and setting out the stakes. Such tools get us outside of “good/bad” debates and into a place in which ambivalence is compulsory rather than problematic. With regard to suicide prevention via data, affordances theory lets us hold together the material realities of deep and broad data collection, market exploitation, corporate responsibility, and the value of saving human lives.
Several nights ago, Uber saved my life. Or rather, it prevented my becoming a distressed soul, lost and crying in a new country. Had this come to pass, it would have been both emotionally exhausting and deeply troubling to my sense of self. Luckily, however, I called an Uber, and here I am, nerves and feminist identity still well intact. In recounting the events of this banal and, in retrospect, marginally stressful experience, I’m reminded of the two nets that our devices weave: the trappings of dependence and the comfort of safety.
Here’s what happened: I was on a mission for fruit. Fruit not from a can. Fruit not dried into a nut bar. Fruit free from individual plastic wrapping. Real, Fresh, Fully Hydrated, Fruit. And so, on my second night in Australia, the land I now call home, I Google Mapped my way to an IGA X-Press. Armed with the cheapest “smart” phone I could purchase at the airport, I fumbled on foot down unfamiliar streets until, in what seemed more like an accident than a well-followed plan, I found myself flesh to flesh with colorful and aromatic pears, apples, peaches, and citrus. I had arrived. With glee and pride I filled my cart with the fresh products that 30 hours of travel and temporary accommodation made scarce. I then slowly trekked down each aisle with anthropological interest in the breads, coffees, and packaged foods on offer. I chose Wallaby Bites to save for a late night treat, got thick ground coffee to use with my university-apartment-provided French press, marveled at all of the local dairy products, and felt strangely comforted by the familiar brands that I never bought in the U.S. and still wouldn’t buy here. I remained unwary of the weighty bags I would need to carry home, and unconcerned about the early signs of a setting sun.
Immediately upon leaving, I made a wrong turn. Actually, it may not have happened then. I’m still not sure when the first wrong turn happened, but I do know that one wrong turn dominoed into a series of wrong turns, so that I ended up on the side of a highway, and then somewhere on the Australian National University campus, and then back by the side of the highway again, in a different location. It was dark by then and my phone warned of 15% battery. Maps had long stopped being helpful. Perpetually searching for GPS, my phone might as well have displayed a printout of the original directions, a static piece of paper, like the old days when people used to Mapquest things.
I had become trapped by my dependence on an adaptive mapping system. My device would tell me where to go, I presumed, making wrong turns relatively benign. My phone would simply recalculate, and I would dutifully follow the pleasant roboticized voice. But this is not how things happened, and I realized I had been staring at my phone so intently, willing it to capture a signal, that I hadn’t paid close attention to my surroundings. I also hadn’t fully examined my route before setting out. I didn’t know where I was or which direction to turn. In a city not designed on a grid system, and with sign structures and landmarks with which I was unfamiliar, the static map on my phone and the analog maps on the street left me struggling to translate curving and intersecting lines into my own location and that of my desired destination.
With the dark night in full bloom, I made an executive decision about how to allocate the remainder of my waning battery charge—I called an Uber. My knight in shining armor, a middle-aged man in a Kia Sport, drove me home. I was a couple miles away from my university apartment, and as we twisted through the city, I thought it was among the best $8.77(AUD) I had ever spent. We laughed as I told him what had happened, and, looking impressed, he said his wife would have been crying ages ago. I told him I was about 8% battery life away from that point.
Just as getting lost was a product of technologies’ net of dependence, the availability of a ride-share app empowered me to stay strong, providing a safety net when things didn’t go as planned. Though it was dark, and I was lost, and my grocery bag was leaving deep red imprints in the crook of my arm, I knew, the whole time, that I would be okay. I knew that I only needed to push a button, and a ride would come. If that didn’t work, I could push several buttons, and call a traditional cab, and again, a ride would come. I could mess up (which I did) and have layers of contingency preventing my mishap from becoming a catastrophe.
The two nets that our technologies weave—that of dependence and safety—sit in both complement and tension with one another. As Nathan Ferguson says in his own lost-in-the-city account, these tools “reinforce a trust in a remote, arbitrary, comforting and pacifying, if no less sought after authority.” We come to need them, and also, to be empowered by them. We have a hand in their weaving and maintenance—waving off directions and electing to put in coordinates, while engaging in quick problem-solving when those coordinates fail, for example. We aren’t stupider because of our devices, but we do think differently, plan differently, and expect differently. We plan and expect to be adaptable, which is both a product of, and a necessity for, the devices that we carry.
Today, I think I’ll go phone shopping. I hope I can find the mobile store.
Cyborgology Co-Editor Jenny L. Davis is a Soc. Prof. and an Aussie newbie. She is on Twitter @Jenny_L_Davis
We live in a cyborg society. Technology has infiltrated the most fundamental aspects of our lives: social organization, the body, even our self-concepts. This blog chronicles our new, augmented reality.