A series of studies was just published showing that White Liberals present themselves as less competent when interacting with Black people than when interacting with other White people. This pattern does not emerge among White Conservatives. The authors of the studies, Cynthia H. Dupree (Yale University) and Susan T. Fiske (Princeton University), refer to this as the “competence downshift” and explain that reliance on racial stereotypes results in patronizing patterns of speech when Liberal Whites engage with a racial outgroup. The original article appears in the Journal of Personality and Social Psychology. I make the case that these human-based findings have something to tell us about AI and its continued struggle with bigotry.

Since the article’s publication, the Conservative response has been swift and expected. Holding the report up as evidence of White Liberal hypocrisy, a Washington Times story describes how the findings “fly in the face of a standard talking point of the political left,” and a Patriot Post story concludes that “Without even realizing it, ‘woke’ leftists are the ones most guilty of perpetrating the very racial stereotypes they so vehemently condemn.”

Conservative commentators aren’t wrong to call out the racism of White Liberals, nor would White Liberals be justified in coming to their own defense. The data do indeed tell a damning story. However, the data also reveal ingroup racial preference among White Conservatives and an actively unwelcoming interaction style when White Conservatives engage with people of color. In other words, White Conservatives aren’t wrong; they are just racist, too.

Overall, the studies show the insidiousness of racism across ideological bounds. Once racial status processes activate, they inform everyday encounters in ways that are often difficult to detect, and yet have lasting impacts. While White Liberals talk down to Black people in an effort to connect, White Conservatives look down on Black people and would prefer to remain within their own racial group. Neither of these outcomes is good for Black people, and that story is clear.

Racism is rampant across ideological lines. That is the story that the data tell. This story has implications beyond the laboratory settings in which the data were collected. I think one of those implications has to do with AI. Namely, the findings tell us something insightful about how and why AI keeps being accidentally racist (and sexist/homophobic/classist/generally bigoted), despite continued efforts and promises to rectify such issues.

Tales of problematic AI arrive regularly and in quick succession. Facial recognition software that misidentifies people of color; job advertisements that show women lower-paying gigs; welfare algorithms that punish poverty; and search platforms that rely on raced and gendered stereotypes. I could go on.

The AI bigotry problem is easy to identify and diagnose, but the findings of the above study show that it is especially tricky, though not impossible, to resolve. AI comes out prejudiced because society is prejudiced. AI is made by people who live in society, trained on data that come from society, and deployed through culturally entrenched social systems. AI hardware and software are thus bound to pick up and enact the status structures that govern human social relations. The problem isn’t so much with faulty technology, but with historically ingrained “isms” that have become so normative they disappear from conscious thought until, surprise!, Gmail autocomplete assumes investors are men. These #AIFails are like an embarrassing superpower that renders invisible inequalities hyper-visible.
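To make that concrete, here is a minimal, self-contained sketch of how stereotyped associations travel from training data into a model’s output. The four-sentence “corpus,” the occupation list, and the guess_pronoun helper are all invented for illustration; nothing here is Gmail’s actual system, only a toy stand-in for the statistics any language model absorbs from its training text.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for web-scale text. The stereotyped
# pairings live in the data itself; nothing below hard-codes them.
corpus = [
    "the investor said he would call back",
    "the investor hoped he could close the deal",
    "the nurse said she would call back",
    "the nurse hoped she could finish the shift",
]

# For each occupation, count which pronoun co-occurs with it -- a crude
# stand-in for the statistics a language model absorbs during training.
PRONOUNS = {"he", "she", "they"}
OCCUPATIONS = {"investor", "nurse"}
assoc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for occupation in OCCUPATIONS & set(tokens):
        for tok in tokens:
            if tok in PRONOUNS:
                assoc[occupation][tok] += 1

def guess_pronoun(occupation: str) -> str:
    """Complete with the pronoun most often seen alongside the occupation."""
    return assoc[occupation].most_common(1)[0][0]

print(guess_pronoun("investor"))  # -> "he"  (learned, never programmed)
print(guess_pronoun("nurse"))     # -> "she" (learned, never programmed)
```

Swap in a billion web sentences and a neural network, and the dynamic is the same: the system faithfully reproduces the associations of its source material, with no line of code anyone could point to as “the racist part.”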

The oft-proposed solution, besides technical fixes, has been a call for a more critical lens in the tech sector. This means collaboration between technologists and critical social thinkers such that technological design can better attend to and account for the complexities of social life, including issues of power, status, and intersecting oppressions.

The solution of a critical lens, however, is somewhat undermined by Dupree and Fiske’s findings. One of the main reasons the authors give for the competence downshift is White Liberals’ disproportionate desire to engage with racial minorities, paired with their concern that those minorities will find them racist. That is, Liberal Whites wanted to reach across race lines, and they were aware of how their Whiteness can trouble interracial interaction. This is a solid critical starting point, one I imagine most progressive thinkers would hope for among people who build AI. And yet, it was this exact critical position that created racist interaction patterns.

When White Liberals interacted with Black people in Dupree and Fiske’s studies, they activated stereotypes along with an understanding of their own positionality. This combination resulted in “talking down” to people in a racial outgroup. In short, White Liberals weren’t racist despite their best intentions and critical toolbox, but because of them. If racism is so insidious in humans, how can we expect machines, made by humans, to be better?

One pathway is simple: diversify the tech field and check all products rigorously and empirically against a critical standard, as sketched below. The standpoint of technologists matters. An overly White, male, hetero field promises a racist, sexist, heteronormative result. A race-gender diverse field is better. Social scientists can help, too. Social scientists are trained in detecting otherwise imperceptible patterns. We take a lot of methods classes just for this purpose, and pair those methods with years of theory training. A critical lens is not enough. It never will be. It can, however, be a tool that intersects with diverse standpoints and rigorous social science. AI will still surprise us in unflattering and perhaps devastating ways; critical awareness and a firm directive to “stop being racist” can’t be the whole solution.
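What might that empirical check look like? One common form is a disparity audit: run the finished model on a demographically labeled test set and compare error rates across groups, rather than trusting builders’ intentions. The sketch below is illustrative only; the records, group names, and numbers are invented placeholders, and a real audit would use held-out data and established fairness metrics.

```python
from collections import defaultdict

# Hypothetical audit records: (group, true_label, model_prediction).
# In a real audit these come from a held-out, demographically labeled
# test set; the values here are invented for illustration.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

# Tally false positives and false negatives per group.
tally = defaultdict(lambda: {"fp": 0, "fn": 0, "n": 0})
for group, truth, pred in results:
    tally[group]["n"] += 1
    tally[group]["fp"] += int(pred == 1 and truth == 0)
    tally[group]["fn"] += int(pred == 0 and truth == 1)

for group, t in sorted(tally.items()):
    print(f"{group}: FP rate {t['fp'] / t['n']:.2f}, FN rate {t['fn'] / t['n']:.2f}")
# If the rates diverge sharply across groups -- here group_b's false-negative
# rate is far higher -- the model fails the check, whatever its builders intended.
```

The point of the design is that the standard is behavioral and measurable: a model is judged by what it does to each group, not by the critical awareness of the people who shipped it.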


Jenny Davis is on Twitter @Jenny_L_Davis

Headline pic via: Source