
One of Amazon’s many revenue streams is a virtual labor marketplace called MTurk. It’s a platform for businesses to hire inexpensive, on-demand labor for simple ‘microtasks’ that resist automation for one reason or another. If a company needs data double-checked, images labeled, or surveys filled out, they can use the marketplace to offer per-task work to anyone willing to accept it. MTurk is short for Mechanical Turk, a reference to a famous 18th-century hoax: an automaton that appeared to play chess but concealed a human making the moves.

The name is thus tongue-in-cheek, and in a telling way; MTurk is a much-celebrated innovation that relies on human work taking place out of sight and out of mind. Businesses taking advantage of its extremely low costs are perhaps encouraged to forget or ignore the fact that humans are doing these rote tasks, often for pennies.

Jeff Bezos has described the microtasks of MTurk workers as “artificial artificial intelligence”; the norm being imitated is therefore that of machinery: efficient, cheap, standing in reserve, silent and obedient. MTurk calls its job offerings “Human Intelligence Tasks,” a further indication that simple, repetitive tasks requiring human intelligence are unusual in today’s workflows. The suggestion is that machines should be able to do these things, that it is only a matter of time until they can. In some cases, MTurk workers are in fact labeling data for machine learning, and thus enabling the automation of their own work. more...

Drew Harwell (@DrewHarwell) wrote a balanced article in the Washington Post about the ways universities are using wifi, bluetooth, and mobile phones to enact systematic monitoring of student populations. The article offers multiple perspectives that variously support and critique the technologies at play and their institutional implementation. I’m here to lay out in clear terms why these systems should be categorically resisted.

The article focuses on the SpotterEDU app, which advertises itself as an “automated attendance monitoring and early alerting platform.” The idea is that students download the app and then universities can easily keep track of who’s coming to class and also identify students who may be in, or on the brink of, crisis (e.g., a student only leaves her room to eat and therefore may be experiencing mental health issues). As university faculty, I would find these data useful. They are not worth the social costs. more...

As technology expands its footprint across nearly every domain of contemporary life, some spheres raise particularly acute issues that illuminate larger trends at hand. The criminal justice system is one such area, with automated systems being adopted widely and rapidly—and with activists and advocates beginning to push back with alternate politics that seek to ameliorate existing inequalities rather than instantiate and exacerbate them. The criminal justice system (and its well-known subsidiary, the prison-industrial complex) is a space often cited for its dehumanizing tendencies and outcomes; technologizing this realm may feed into these patterns, despite proponents pitching this as an “alternative to incarceration” that will promote more humane treatment through rehabilitation and employment opportunities.

As such, calls to modernize and reform criminal justice often manifest as a rapid move toward automated processes throughout many penal systems. Numerous jurisdictions are adopting digital tools at all levels, from policing to parole, in order to promote efficiency and (it is claimed) fairness. However, critics argue that mechanized systems—driven by Big Data, artificial intelligence, and human-coded algorithms—are ushering in an era of expansive policing, digital profiling, and punitive methods that can intensify structural inequalities. In this view, the embedded biases in algorithms can serve to deepen inequities, via automated systems built on platforms that are opaque and unregulated; likewise, emerging policing and surveillance technologies are often deployed disproportionately toward vulnerable segments of the population. In an era of digital saturation and rapidly shifting societal norms, these contrasting views of efficiency and inequality are playing out in quintessential ways throughout the realm of criminal justice. more...

A series of studies was just published showing that White Liberals present themselves as less competent when interacting with Black people than when interacting with other White people. This pattern does not emerge among White Conservatives. The authors of the studies, Cydney H. Dupree (Yale University) and Susan T. Fiske (Princeton University), refer to this as the “competence downshift” and explain that reliance on racial stereotypes results in patronizing patterns of speech when Liberal Whites engage with a racial outgroup. The original article appears in the Journal of Personality and Social Psychology. I make the case that these human-based findings have something to tell us about AI and its continued struggle with bigotry. more...

In late September the social news networking site Reddit announced a revamp of its ‘quarantine’ function. A policy that has been in place for almost three years, quarantines were designed to stop casual Redditors from accessing offensive and gruesome subreddits (topic-based communities within the site) without banning those channels outright. As initially implemented, the function affected only a handful of small subreddits and received little attention. The revamped quarantine function, however, applies to much larger subreddits, and has created significant controversy. As an attempt to shape the affordances of the site, the revamped quarantine function highlights many of the political and architectural issues that Reddit faces in today’s political climate.

As a platform, Reddit sits in a frequently uncomfortable position. Reddit was initially established as a haven for free speech, a place in which anything and everything could and should be discussed. When, for example, discussion of #gamergate (the 2014 controversy over ethics in the gaming industry that resulted in a number of high-profile women game designers and journalists being publicly harassed) was banned on the often more insidious 4chan, it was on Reddit that discussion continued to flourish. However, in recent years, Reddit has come under increasing pressure due to this free-for-all policy. Reddit has been blamed for fueling misogyny, facilitating online abuse, and even leading to the misidentification of suspects in the aftermath of the Boston Marathon Bombings.

Reddit announced the revamp of its quarantine policy via a long post on the subreddit r/announcements. In doing so, the Reddit administrator u/landoflobsters highlighted the bind that Reddit faces. They wrote: more...

Miquela Sousa is one of the hottest influencers on Instagram. The triple-threat model, actress and singer, better known as “Lil Miquela” to her million-plus followers, has captured the attention of elite fashion labels, lifestyle brands, magazine profiles, and YouTube celebrities. Last year, she sported Prada at New York Fashion Week, and in 2016 she appeared in Vogue as the face of a Louis Vuitton advertising campaign. Her debut single, “Not Mine,” has been streamed over one million times on Spotify and was even treated to an Anamanaguchi remix.

Miquela isn’t human. As The Cut wrote in their Miquela profile this past May, the 19-year-old Brazilian-American influencer is a CGI character created by Brud, “a mysterious L.A.-based start-up of ‘engineers, storytellers, and dreamers’ who claim to specialize in artificial intelligence and robotics,” which has received at least $6 million in funding. Brud call themselves storytellers as well as developers, but their work seems mostly to be marketing. Recently, the writer Naomi Fry profiled Miquela for Vogue’s September issue.

Miquela inhabits a Marvel-like universe in which other Brud-made avatars orbit her, including her Trump-loving frenemy, Bermuda, and Blawko, her brother (whether that’s a term of endearment or a literal relation is unclear). The three are constantly embroiled in juicy internet drama, and scarcely does one post to their account without tagging, promoting, shouting out, or calling out another. In April, Bermuda allegedly hacked Miquela’s account, deleted all her photos, and demanded Miquela reveal her “true self.” Miquela eventually released a statement: “I am not a human being… I’m a robot. It just doesn’t sound right. I feel so human. I cry and I laugh and I dream. I fall in love.” But the character wasn’t revealing anything true: Miquela is a character scripted by humans. The robot ruse only upped her intrigue: not only has it added a new layer to the character’s fiction, it has opened up new fictional possibilities. more...

Algorithms are something of a hot topic. Interest in these computational directives has taken hold in public discourse, and algorithms have emerged as a subject of public concern. While computer scientists were the original algorithm experts, social scientists now stake an equal claim in this space. In the past 12 months, several excellent books on the social science of algorithms have hit the shelves. Three in particular stand out: Safiya Umoja Noble’s Algorithms of Oppression, Virginia Eubanks’ Automating Inequality, and Taina Bucher’s If…Then: Algorithmic Power and Politics. Rather than a full review of each text, I offer a quick summary of what they offer together, while drawing out what makes each distinct.

I selected these texts because of what they represent: a culmination of shorter and more hastily penned contentions about automation and algorithmic governance, and an exemplary standard for critical technology studies. I review them here as a state of the field and an analytical grounding for subsequent thought.

There is no shortage of social scientists commenting on algorithms in everyday life. Twitter threads, blog posts, op-eds, and peer-review articles take on the topic with varying degrees of urgency and rigor. Algorithms of Oppression, Automating Inequality, and If…Then encapsulate these lines of thought and give them full expression in monograph form. more...

Every now and again, as I stroll along through the rhythms of teaching and writing, my students stop and remind me of all the assumptions I quietly carry around. I find these moments helpful, if jarring. They usually entail me stuttering and looking confused and then rambling through some response that I was unprepared to give. Next there’s the rumination period during which I think about what I should have said, cringe at what I did (and did not) say, and engage in mildly self-deprecating wonder at my seeming complacency. I’m never upset when my positions are challenged (in fact, I quite like it) but I am usually disappointed and surprised that I somehow presumed my positions didn’t require justification.

Earlier this week, during my Public Sociology course, some very bright students took a critical stance against politics in the discipline. As a bit of background, much of the content I assign maintains a clear political angle and a distinct left-leaning bias. I also talk a lot about writing and editing for Cyborgology, and have on several occasions made note of our explicit orientation towards social justice. The students wanted to know why sociology and sociologists leaned so far left, and questioned the appropriateness of incorporating politics into scholarly work—public or professional.

I think these questions deserve clear answers. The value of integrating politics with scholarship is not self-evident and it is unfair (and a little lazy) to go about political engagement as though it’s a fact of scholarly life rather than a position or a choice. We academics owe these answers to our students and we public scholars would do well to articulate these answers to the publics with whom we hope to engage. more...

A mere 2 minutes and 19 seconds in length, the video “Are Black British Youth Obsessed with Light Skin/Curly Hair. Or is it just Preference?” is a compilation of snippets from “person on the street” interviews, conducted in the environs of two shopping centers and a commuter railway station in east London (more on this later).

The interviewer is a roving Internet reporter going by the handle of VanBanter, whose YouTube channel boasts over 85,000 subscribers.  VanBanter is a tall, svelte, black Briton of around 16, himself light skinned, whose voluminous hair in the clips is either styled in cornrows, or pulled back in a low Afro puff, the black version of the “man bun.”

The interviewees are black boys, ostensibly between the ages of 12 and 17, of a wide spectrum of skin colors and hair textures. The single question VanBanter asks all of them is, “What kind of girls are you into?” On occasion, he phrases it as, “What type of girls do you slide into?” Two token girls are asked the same question about boys. All interviewed say they like “light skins.” Some add “curly hair,” clearly meant as a qualifier in opposition to “kinky,” not straight, hair texture. The inference is palpable: among these youths, light skin is favoured over every other complexion. Most of the interviewees are filmed standing in pairs or small groups of friends who support their responses with interjections, gestures, or general glee.

The video was first uploaded on June 1st to the Facebook page of Black British Banter.  Over that weekend, it received a million views, over 6k likes (2.6k neutral thumbs-up expressing interest, 1.2k crying emojis, 1.1k angry ones, 546 laughing ones, 467 wows, and 62 loves), 5k comments, and 8,000 shares.

I myself could not stop viewing it. The comments far overstep the bounds of personal preference, to which we all have an indisputable right. Instead, they defend a centuries-old global regime of negating not only the beauty, but the very humanity, of people with dark skin, especially women. “No black t’ings, like my shoes n’ shit!” says one very more...


On May 28th, 2016, a three-year-old black boy fell into the gorilla enclosure at the Cincinnati Zoo. As a result, a 17-year-old gorilla inside the pen, Harambe, was shot, as the zoo argued, for the boy’s protection. Nearly three months later, on August 22nd, the director of the zoo, Thane Maynard, issued a plea for an end to the ‘memeification’ of Harambe, stating, “We are not amused by the memes, petitions and signs about Harambe…Our zoo family is still healing, and the constant mention of Harambe makes moving forward more difficult for us.” By the end of October, however, despite turgid proclamations to the contrary, use of the Harambe meme seemed to be waning.

The six-month interim marked a significant transition in the media presence of Harambe, from symbol of public uproar and cross-species sympathy to widely memed Internet joke. The death and affective trajectory of Harambe, therefore, represents a unique vector in analyzing intersections of animality, race, and the phenomenon of virality. Harambe, like Cecil the Lion before him, became a widely appropriated Internet cause, one with fraught ethical implications.

more...