We often assume that as long as a white person doesn’t fly the Confederate flag, use the n-word, or show up at a white supremacist rally, they aren’t racist. However, researchers at Harvard and the Ohio State University, among others, have shown that even whites who don’t endorse racist beliefs tend to be biased against non-whites. This bias, though, is implicit: it’s subconscious, activated in split-second decisions that are faster than our conscious mind can control.

You can test your own implicit biases by taking an Implicit Association Test online. Millions of people have.

But where do these negative subconscious attitudes come from? And when do they start?

The Kirwan Institute for the Study of Race and Ethnicity has found that we learn them early, often from the mass media. As an example, consider this seemingly harmless digital billboard for Hiperos, a company that works to protect clients against risk online. The ad implies that, as a business, you need to be leery of working with third parties. Of particular risk is exposure to bribery or corruption. Whom can you trust? Who are the people you should be afraid of? Who might be corrupt?

I took a photo of each of the ads as they cycled through. Turns out, the company portrays people you should be worried about as mostly non-white or not-quite-white.


Who is untrustworthy? Those who seem exotic: brown people, black people, Asian people, Latinos, Italian “mobsters,” foreigners.

There were comparatively few non-Hispanic whites represented:

Of course, this company’s advertising alone could not powerfully influence whom we consider suspicious, but stuff like this — combined with thousands of other images in the news, movies, and television shows — sinks into our subconscious, teaching us implicitly to fear some kinds of people and not others.

For more, see the original post on sociologytoolbox.com.

Todd Beer, PhD is an Assistant Professor at Lake Forest College, a liberal arts college north of Chicago. His blog, SOCIOLOGYtoolbox, is a collection of tools and resources to help instructors teach sociology and build an active sociological imagination. 

When it comes to rule-breakers and rule enforcers, which side you are on seems to depend on the rule-breaker and the rule. National Review had a predictable response to the video of a school resource officer throwing a seated girl to the floor. [Editor’s note: Video added to original. Watch with caution; disturbing imagery]

Most of the response when the video went viral was revulsion. But not at National Review. David French said it clearly:

I keep coming to the same conclusion: This is what happens when a person resists a lawful order from a police officer to move.

The arrested student at Spring Valley High School should have left her seat when her teacher demanded that she leave. She should have left when the administrator made the same demand. She should have left when Fields made his first, polite requests. She had no right to stay. She had no right to end classroom instruction with her defiance. Fields was right to move her, and he did so without hurting her. The fact that the incident didn’t look good on camera doesn’t make his actions wrong.

This has been the general response on the right to nearly all the recently publicized incidents of the police use of force. If law enforcement tells you to do something, and then you don’t do it, it’s OK for the officer to use force, and if you get hurt or killed, it’s your fault for not complying, even if you haven’t committed an offense.

That’s the general response. There are exceptions, notably Cliven Bundy. In case you’d forgotten, Bundy is the Nevada cattle rancher who was basically stealing – grazing his cattle on federal lands and refusing to pay the fees. He’d been stiffing the United States this way for many years. When the Federales finally moved in to round up his cattle, a group of his well-armed supporters challenged the feds. Rather than do what law enforcers in other publicized accounts do when challenged by someone with a gun – shoot to kill – the federal rangers negotiated.

Bundy was clearly breaking the law. Legally, as even his supporters acknowledged, he didn’t have a leg to stand on. So the view from the right must have been that he should do what law enforcement said. But no.

Here is National Review’s Kevin Williamson:

This is best understood not as a legal proceeding but as an act of civil disobedience… As a legal question Mr. Bundy is legless. But that is largely beside the point.

What happened to “This is what happens when a person resists a lawful order”? The law is now “beside the point.” To Williamson, Bundy is a “dissident,” one in the tradition of Gandhi, Thoreau, and fugitive slaves.

Not all dissidents are content to submit to what we, in the Age of Obama, still insist on quaintly calling “the rule of law.”

Every fugitive slave, and every one of the sainted men and women who harbored and enabled them, was a law-breaker, and who can blame them if none was content to submit to what passed for justice among the slavers?

(The equation with fugitive slaves became something of an embarrassment later when Bundy opined that those slaves were better off as slaves than are Black people today who get government subsidies. Needless to say, Bundy did not notice that the very thing he was demanding was a government handout – free grazing on government lands.)

The high school girl refused the teacher’s request that she give up her cell phone and then defied an order from the teacher and an administrator to leave the classroom. Cliven Bundy’s supporters “threatened government employees and officials, pointed firearms at law enforcement officers, harassed the press, called in bomb scares to local businesses, set up roadblocks on public roads, and formed lists (complete with photos and home addresses) of their perceived enemies” (Forbes).

A Black schoolgirl thrown to the floor by a weightlifting cop twice her size — cop right, rule-breaker wrong. A rural White man with White male supporters threatening Federal law enforcers — cops wrong, rule-breakers right.

Originally posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

In 1994, a US immigration judge lifted an order to deport a woman named Lydia Oluloro. Deportation would have forced her to either leave her five- and six-year-old children in America with an abusive father or take them with her to Nigeria. There, they would have been at risk of a genital cutting practice called infibulation, in which the labia majora and minora are trimmed and fused, leaving a small opening for urination and menstruation.

Many Americans will agree that the judge made a good decision, as children shouldn’t be separated from their mothers, left with dangerous family members, or subjected to an unnecessary and irreversible operation that they do not understand. I am among these Americans. However, I am also of the view that Americans who oppose unfamiliar genital cutting practices should think long and hard about how they articulate their opposition.

The judge in the Oluloro case, Kendall Warren, articulated his opposition like this:

This court attempts to respect traditional cultures … but [infibulation] is cruel and serves no known medical purpose. It’s obviously a deeply ingrained cultural tradition going back 1,000 years at least.

Let’s consider the judge’s logic carefully. First, by contrasting the “court” (by which he means America) with “traditional cultures,” the judge is contrasting an us (America) with a them (Nigeria). He’s implying that only places like Nigeria are “traditional” — a euphemism for states seen as backward, regressive, and uncivilized — while the US is “modern,” a state conflated with progressiveness and enlightenment.

When he says that the court “attempts to respect traditional cultures,” but cannot in this case, the judge is suggesting that the reason for the disrespect is the fault of the culture itself. In other words, he’s saying “we do our best to respect traditional cultures, but you have pushed us too far.” The reason for this, the judge implies, is that the practices in question have no redeeming value. It “serves no known medical purpose,” and societies which practice it are thus “up to no good” or are not engaging in “rational” action.

The only remaining explanation for the continuation of the practice, the judge concludes, is cruelty. And if the practice is cruel, the people who practice it must necessarily also be cruel: capriciously, pointlessly, even frivolously cruel.

To make matters worse, in the eyes of the judge, such cruelty can’t be helped because its perpetrators don’t have free will. The practice, he says, is “deeply ingrained” and has been so for at least 1,000 years. Such cultures cannot be expected to see reason. This is why the court — or America — can and should intervene.

In sum, the judge might well have said: “we are a modern, rational, free, good society, and you who practice female genital cutting—you are the opposite of this.”


I’ve published extensively on the ways in which Americans talk about the female genital cutting practices (FGCs) that are common in parts of Africa and elsewhere, focusing on the different ways opposition can be articulated and the consequences of those choices. There are many grounds upon which to oppose FGCs: the oppression of women, the repression of sexuality, human rights abuse, child abuse, violation of bodily integrity, harm to health, and psychological harm, to name just a few. Nevertheless, Judge Warren chose to use one of the most common and counterproductive frames available: cultural depravity.

The main source of this frame has been the mass media, which began covering FGCs in the early 1990s. At the time anti-FGC activists were largely using the child abuse frame in their campaigns, yet journalists decided to frame the issue in terms of cultural depravity. This narrative mixed with American ethnocentrism, an obsession with fragile female sexualities, a fear of black men, and a longstanding portrayal of Africa as dark, irrational, and barbaric to make a virulent cocktail of the “African Other.”

The more common word used to describe FGCs — mutilation — is a symbol of this discourse. It perfectly captures Judge Warren’s comment. Mutilation is, perhaps by definition, the opposite of healing and of what physicians are called to do. Defining FGCs this way allows, and even demands, that we wholly condemn the practices, take a zero tolerance stance, and refuse to entertain any other point of view.

Paradoxically, this has been devastating for efforts to reduce genital cutting. People who support genital cutting typically believe that a cut body is more aesthetically pleasing. They largely find the term “mutilation” confusing or offensive. They, like anyone, generally do not appreciate being told that they are barbaric, ignorant of their own bodies, or cruel to their children.

The zero tolerance demand to end the practices has also failed. Neither foreigners intervening in long-practicing communities, nor top-down laws instituted by local politicians under pressure from Western governments, nor even laws against FGCs in Western countries have successfully stopped genital cutting. They have, however, alienated the very women that activists have tried to help, made women dislike or fear the authorities who may help them, and even increased the rate of FGCs by inspiring backlashes.

In contrast, the provision of resources to communities to achieve whatever goals they desire, and then getting out of the way, has been proven to reduce the frequency of FGCs. The most effective interventions have been village development projects that have no agenda regarding cutting, yet empower women to make choices. When women in a community have the power to do so, they often autonomously decide to abandon FGCs. Who could know better, after all, the real costs of continuing the practice?

Likewise, abandonment of the practice may be typical among immigrants to non-practicing societies. This may be related to fear of prosecution under the law. However, it is more likely the result of a real desire among migrants to fit into their new societies, a lessening of the pressures and incentives to go through with cutting, and mothers’ deep and personal familiarity with the short- and long-term pain that accompanies the practices.

The American conversation about FGCs has been warped by our own biases. As a Hastings Center Report summarizes, those who adopt the cultural depravity frame misrepresent the practices, overstate the negative health consequences, misconstrue the reasons for the practice, silence the first-person accounts of women who have undergone cutting, and ignore indigenous anti-FGC organizing. And, while it has fed into American biases about “dark” Africa and its disempowered women, the discourse of cultural depravity has actually impaired efforts to reduce rates of FGCs and the harm that they can cause.

Originally posted at Open Democracy and Pacific Standard.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow up on her research about female genital cutting here.

Daniel Drezner once wrote about how international relations scholars would react to a zombie epidemic. Aside from the sheer fun of talking about something as silly as zombies, it had much the same illuminating satiric purpose as “how many X does it take to screw in a lightbulb” jokes. If you have even a cursory familiarity with the field, it is well worth reading.

Here’s my humble attempt to do the same for several schools within sociology.

Public Opinion. Consider the statement that “Zombies are a growing problem in society.” Would you:

  1. Strongly disagree
  2. Somewhat disagree
  3. Neither agree nor disagree
  4. Somewhat agree
  5. Strongly agree
  6. Um, how do I know you’re really with NORC and not just here to eat my brain?

Criminology. In some areas (e.g., Pittsburgh, Raccoon City), zombification is now more common than attending college or serving in the military and must be understood as a modal life course event. Furthermore, as seen in audit studies, employers are unwilling to hire zombies, and so the mark of zombification has persistent and reverberating effects throughout undeath (at least until complete decomposition and putrefaction). However, race trumps humanity, as most employers prefer to hire a white zombie over a black human.

Cultural toolkit. Being mindless, zombies have no cultural toolkit. Rather, the great interest is in understanding how the cultural toolkits of the living develop and are invoked during unsettled times of uncertainty, such as an onslaught of walking corpses. The human being besieged by zombies is not constrained by culture, but draws upon it. Actors can draw upon such culturally informed tools as boarding up the windows of a farmhouse, shotgunning the undead, or simply falling into panicked blubbering.

Categorization. There’s a kind of categorical legitimacy problem with zombies. Initially zombies were supernaturally animated dead: they were sluggish but relentless, and they sought to eat human brains. In contrast, more recent zombies tend to be infected with a virus that leaves them still living in a biological sense but alters their behavior so as to be savage, oblivious to pain, and nimble. Furthermore, even supernatural zombies are not a homogeneous set but encompass varying degrees of decomposition. Thus the first issue with zombies is defining what a zombie is and whether it is commensurable with similar categories (like an inferius in Harry Potter). This categorical uncertainty has real effects: insurance underwriters systematically undervalue life insurance policies against monsters that are ambiguous to categorize (zombies) as compared to those that fall into a clearly delineated category (vampires).

Neo-institutionalism. Saving humanity from the hordes of the undead is a broad goal that is easily decoupled from the means used to achieve it. Especially given that human survivors need legitimacy in order to command access to scarce resources (e.g., shotgun shells, gasoline), it is more important to use strategies that are perceived as legitimate by trading partners (i.e., other terrified humans you’re trying to recruit into your improvised human survival cooperative) than to develop technically efficient means of dispatching the living dead. Although early on, strategies for dealing with the undead (panic, “hole up here until help arrives,” “we have to get out of the city,” developing a vaccine, etc.) are practiced where they are most technically efficient, once a strategy achieves legitimacy it spreads via isomorphism to technically inappropriate contexts.

Population ecology. Improvised human survival cooperatives (IHSC) demonstrate the liability of newness in that many are overwhelmed and devoured immediately after formation. Furthermore, IHSC demonstrate the essentially fixed nature of organizations, as those IHSC that attempt to change core strategy (e.g., from “let’s hole up here until help arrives” to “we have to get out of the city”) show a greatly increased hazard of being overwhelmed and devoured.

Diffusion. Viral zombieism (e.g. Resident Evil, 28 Days Later) tends to start with a single patient zero whereas supernatural zombieism (e.g. Night of the Living Dead, the “Thriller” video) tends to start with all recently deceased bodies rising from the grave. By seeing whether the diffusion curve for zombieism more closely approximates a Bass mixed-influence model or a classic s-curve we can estimate whether zombieism is supernatural or viral, and therefore whether policy-makers should direct grants towards biomedical labs to develop a zombie vaccine or the Catholic Church to give priests a crash course in the neglected art of exorcism. Furthermore, marketers can plug plausible assumptions into the Bass model so as to make projections of the size of the zombie market over time, and thus how quickly to start manufacturing such products as brain-flavored Doritos.
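For the quantitatively inclined, here is a minimal sketch (mine, not anything from the original post) of the contrast being invoked. In the Bass model, the per-period hazard of adoption is p + qF(t), where p captures external influence (graves opening everywhere at once) and q captures imitation (infection through contact with existing zombies). All parameter values and variable names below are invented for illustration.

```python
import numpy as np

def bass_cumulative(p, q, m=1.0, steps=100, dt=1.0):
    """Discrete-time Bass diffusion: per-period conversion hazard is
    p (external influence) plus q * F (imitation of prior adopters)."""
    F = np.zeros(steps)  # cumulative fraction of the population zombified
    for t in range(1, steps):
        hazard = p + q * F[t - 1]
        F[t] = F[t - 1] + hazard * (m - F[t - 1]) * dt
    return F

# Supernatural outbreak: strong external shock, weak imitation.
supernatural = bass_cumulative(p=0.20, q=0.10)

# Viral outbreak: a patient zero and contact-driven spread, which
# yields the classic s-curve with a slow start.
viral = bass_cumulative(p=0.001, q=0.50)

# The early periods separate the two hypotheses: external influence
# produces immediate mass uptake; imitation-only growth starts slowly.
print(supernatural[:5].round(3))
print(viral[:5].round(3))
```

Fitting observed zombification counts against these two curves would, in principle, tell the policy-makers whether to fund the biomedical labs or the exorcists.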

Social movements. The dominant debate is the extent to which anti-zombie mobilization represents changes in the political opportunity structure brought on by complete societal collapse, as compared to an essentially expressive act related to cultural dislocation and contested space. Supporting the latter interpretation is that zombie hunting militias are especially likely to form in counties that have seen recent increases in immigration. (The finding holds even when controlling for such variables as gun registrations, log distance to the nearest army-administered “safe zone,” etc.)

Family. Zombieism doesn’t just affect individuals, but families. Having a zombie in the family involves an average of 25 hours of care work per week, including such tasks as going to the butcher to buy pig brains, repairing the boarding that keeps the zombie securely in the basement and away from the rest of the family, and washing a variety of stains out of the zombie’s tattered clothing. Almost all of this care work is performed by women and very little of it is done by paid care workers as no care worker in her right mind is willing to be in a house with a zombie.

Applied micro-economics. We combine two unique datasets, the first being military satellite imagery of zombie mobs and the second records salvaged from the wreckage of ExxonMobil headquarters showing which gas stations were due to be refueled just before the start of the zombie epidemic. Since humans can use salvaged gasoline either to set the undead on fire or to power vehicles, chainsaws, etc., we have a source of plausibly exogenous variation in which neighborhoods were more or less hospitable environments for zombies. We show that zombies tended to shuffle towards neighborhoods with low stocks of gasoline. Hence, we find that zombies respond to incentives (just like school teachers, and sumo wrestlers, and crack dealers, and realtors, and hookers, …).
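The identification strategy being spoofed here is instrumental variables. As a minimal sketch (again mine, with entirely simulated data and invented variable names), two-stage least squares recovers the causal effect of gasoline stocks on zombie density even when an unobserved confounder biases the naive regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Instrument: stations due for refueling just before the outbreak,
# assumed as-good-as-random with respect to everything else.
refuel_due = rng.binomial(1, 0.5, n).astype(float)

# Unobserved confounder: neighborhood chaos drains gasoline stocks
# AND attracts zombies, biasing the naive OLS estimate.
chaos = rng.normal(size=n)

gas_stock = 2.0 * refuel_due - chaos + rng.normal(size=n)
zombies = -0.5 * gas_stock + 1.5 * chaos + rng.normal(size=n)  # true effect: -0.5

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("naive OLS:", round(slope(gas_stock, zombies), 2))  # badly biased by chaos

# Two-stage least squares: first predict gas stocks from the
# instrument, then regress zombie density on the predicted values.
gas_hat = np.poly1d(np.polyfit(refuel_due, gas_stock, 1))(refuel_due)
print("2SLS:", round(slope(gas_hat, zombies), 2))  # close to -0.5
```

This is only the textbook mechanics, of course; in the satire, as in real papers, the phrase “plausibly exogenous” is doing the heavy lifting.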

Grounded theory. One cannot fully appreciate zombies by imposing a pre-existing theoretical framework on zombies. Only participant observation can allow one to provide a thick description of the mindless zombie perspective. Unfortunately scientistic institutions tend to be unsupportive of this kind of research. Major research funders reject as “too vague and insufficiently theory-driven” proposals that describe the intention to see what findings emerge from roaming about feasting on the living. Likewise IRB panels raise issues about whether a zombie can give informed consent and whether it is ethical to kill the living and eat their brains.

Ethnomethodology. Zombieism is not so much a state of being as a set of practices and cultural scripts. It is not that one is a zombie but that one does being a zombie such that zombieism is created and enacted through interaction. Even if one is “objectively” a mindless animated corpse, one cannot really be said to be fulfilling one’s cultural role as a zombie unless one shuffles across the landscape in search of brains.

Conversation Analysis. (2.1)

Cross-posted at Code and Culture.

Gabriel Rossman is a professor of sociology at UCLA. His research addresses culture and mass media, especially pop music radio and Hollywood films, with the aim of understanding diffusion processes. You can follow him at Code and Culture.

This video was making the rounds last spring. The video maker wants to make two points:

1. Cops are racist. They are respectful of the White guy carrying the AR-15. The Black guy gets less comfortable treatment.

2. The police treatment of the White guy is the proper way for police to deal with someone carrying an assault rifle.

I had two somewhat different reactions.

1. This video was made in Oregon. Under Oregon’s open-carry law, what both the White and Black guy are doing is perfectly legal. And when the White guy refuses to provide ID, that’s legal too. If this had happened in Roseburg, and the carrier had been strolling to Umpqua Community College, there was nothing the police could have legally done, other than what is shown in the video, until the guy walked onto campus, opened fire, and started killing people.

2. Guns are dangerous, and the police know it. In the second video, the cop assumes that the person carrying an AR-15 is potentially dangerous – very dangerous. The officer’s fear is palpable. He prefers to err on the side of caution – the false positive of thinking someone is dangerous when he is really OK. The false negative – assuming an armed person is harmless when he is in fact dangerous – could well be the last mistake a cop ever makes.

But the default setting for gun laws in the US is just the opposite – better a false negative. This is especially true in Oregon and states with similar gun laws. These laws assume that people with guns are harmless. In fact, they assume that all people, with a few exceptions, are harmless. Let them buy and carry as much weaponry and ammunition as they like.

Most of the time, that assumption is valid. Most gun owners, at least those who got their guns legitimately, are responsible people. The trouble is that the cost of the rare false negative is very, very high. Lawmakers in these states and in Congress are saying in effect that they are willing to pay that price. Or rather, they are willing to have other people – the students at Umpqua, or Newtown, or Santa Monica, or scores of other places, and their parents – pay that price.

UPDATE, October 6: You have to forgive the hyperbole in that last paragraph, written so shortly after the massacre at Umpqua. I mean, those politicians don’t really think that it’s better to have dead bodies than to pass regulations on guns, do they?

Or was it hyperbole? Today, Dr. Ben Carson, the surgeon who wants to be the next president of the US, stated this preference for guns, even at the price of death, more clearly still: “I never saw a body with bullet holes that was more devastating than taking the right to arm ourselves away.” (The story is in the New York Times and elsewhere.)

Originally posted at Montclair Socioblog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Health care providers who perform abortions routinely use ultrasound scans to confirm their patients’ pregnancies, check for multiple gestations, and determine the stage of the pregnancies. But it is far from standard – and not at all medically necessary – for women about to have abortions to view their ultrasounds. Ultrasound viewing by patients has no clinical purpose: it does not affect the woman’s condition or the decisions health providers make. Nevertheless, ultrasound viewing has become central to the hotly contested politics of abortion.

Believing that viewing ultrasounds will change minds, opponents of abortion – spearheaded by the advocacy group Americans United for Life – have pushed for state laws to require such viewing. So far, eighteen states require that women be offered the opportunity to view their pre-abortion ultrasound images, and five states actually go so far as to legally require women to view their ultrasound images before obtaining an abortion (although the women are permitted to avert their eyes). In two of the five states that have passed such mandatory viewing laws, courts have permanently enjoined the laws, keeping them from going into effect.

States that allow/require ultrasounds before abortion (Vocativ):

As the debates continue to rage, both sides assume that what matters for an abortion patient is the content of the ultrasound image. Abortion opponents believe the image will demonstrate to the woman that she is carrying a baby – a revelation they think will make her want to continue her pregnancy. Ironically, supporters of abortion rights also argue that seeing the image of the fetus will make a difference. They say this experience will be emotionally distressing and make abortions more difficult. Paradoxically, such arguments from rights advocates reinforce assumptions that fetuses are persons and perpetuate stigma about abortion procedures.

Does viewing change women’s minds – or cause trauma?

What is missing from all of this is research on a crucial question: How do women planning abortions actually react to voluntary or coerced viewing of ultrasounds? As it turns out, seeing the ultrasound images as such does little to change women’s minds about abortion. What matters is how women scheduled for abortions already feel. Viewing an ultrasound can matter for women who are not fully certain about their plans to have an abortion.

My colleagues and I analyzed medical records from over 15,000 abortion visits during 2011 to a large, urban abortion provider. This provider has a policy of offering every patient the voluntary opportunity to view her ultrasound image. In her intake paperwork, the patient can check a box saying she wants to view; then, when she’s in the ultrasound room, the technician provides her with the opportunity to see the image. Over 42% of incoming abortion patients chose to view their ultrasound images, and the overwhelming majority (99%) of all 15,000 pregnancies ended in abortion.

Our research team looked at whether viewing the ultrasound image was associated with deciding to continue with the pregnancy instead of proceeding with the abortion. We took into account factors such as the age, race, and poverty level of the women involved, as well as how far along their pregnancies were, the presence of multiple fetuses, and how certain women said they were about their abortion decision.

As it became clear that certainty mattered, we looked more closely. Among women who were highly certain, viewing their ultrasound did not change minds. However, among the small fraction (7.4%) of women who were not very certain or only moderately certain, viewing slightly increased the odds that they would forgo their planned abortion and continue with their pregnancy. Nonetheless, this effect was very small, and most of these women still proceeded with their abortions.

Our findings make sense, because some women who are unsure about their abortion decision may seek experiences such as ultrasound viewing to help them make a final choice. Nevertheless, many previous studies have documented that women’s reasons for abortion are complex and unlikely to be negated simply by viewing an ultrasound image. Our study analyzed a situation where viewing ultrasounds was voluntary, but there is no reason to think that mandatory viewing would change more minds. Forcing women to view their ultrasounds could, however, affect patient satisfaction and sense of autonomy.

Apart from whether minds are changed, many people imagine that viewing an ultrasound for an unwanted pregnancy is distressing; and in interviews with 26 staff members at an abortion facility that offers pre-abortion ultrasounds, my colleague and I discovered that many staffers believed viewing the image caused relief for women early in their pregnancies but was traumatic for those at later stages.

However, when my colleagues and I asked 212 women throughout the United States about their reactions to viewing pre-abortion ultrasounds, we found no evidence that viewing was broadly distressing or that emotions depended on the gestational stage. All interviewees said their minds were not changed about proceeding with abortions. Just over one in five reported that viewing provoked negative reactions of guilt, depression, or sadness; one in ten reported positive feelings such as happiness; and the largest group, just over a third, said they felt “fine,” “okay,” or even “nothing.” This common response that viewing did not matter was a surprise given the intensity surrounding political debates.

Our research questions the wisdom of state laws that force women scheduled to have abortions to view their ultrasounds prior to the procedure. Fewer than half of abortion patients want to view their ultrasounds, and there is no clinical benefit. More to the point, abortion providers already offer patients the opportunity to view their ultrasounds – and never turn down women’s requests to look at these images. When women already feel uncertain about proceeding with an abortion, viewing the image of the fetus may make a difference. But for the vast majority whose minds are made up, viewing does not matter – and trying to force this to happen in every case merely adds costs and indignities to the abortion process.

Originally posted at the Scholars Strategy Network.

Katrina Kimport, PhD is an assistant professor in the Department of Obstetrics, Gynecology and Reproductive Sciences and a research sociologist with the Advancing New Standards in Reproductive Health program at the University of California, San Francisco.

In the 6-minute video below, Stanford sociologist Aliya Saperstein discusses her research showing that the perception of other people’s race is shaped by what we know about them. She uses data collected through a series of in-person interviews in which interviewers sit down with respondents several times over many years, learn about what’s happened in their lives, and, among other things, make a judgment call as to their race. You may be surprised how often those racial designations change. In one of her samples, 20% of respondents were inconsistently identified, meaning that different interviewers gave them different racial classifications at least once.

Saperstein found that a person judged as white in an early interview was more likely to be marked as black in a later interview if they experienced a life event that is stereotypically associated with blackness, like imprisonment or unemployment.

She and some colleagues also did an experiment, asking subjects to indicate whether people with black, white, and ambiguous faces, dressed in either a suit or a blue work shirt, were white or black. Tracing subjects’ mouse paths made it clear that the same face in a suit was more easily categorized as white than in a work shirt.


Race is a social construction, not just in the sense that we made it up, but in that it’s flexible and dependent on status as well as phenotype.

She finishes with the observation that, while phenotype definitely impacts a person’s life chances, we also need to be aware that differences in education, income, and imprisonment reflect not only bias against phenotype, but the fact that success begets whiteness. And vice versa.

Watch the whole thing here:

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

Flashback Friday.

The term “fetal alcohol syndrome” (FAS) refers to a group of problems that include mental retardation, growth problems, abnormal facial features, and other birth defects. The disorder affects children whose mothers drank large amounts of alcohol during pregnancy.


Well, not exactly.

It turns out that only about 5% of alcoholic women give birth to babies who are later diagnosed with FAS. This means that many mothers drink excessively, and many more drink somewhat (at least 16 percent of mothers drink during pregnancy), and yet many, many children born to these women show no diagnosable signs of FAS. Twin studies, further, have shown that sometimes one fraternal twin is diagnosed with FAS, but the other twin, who shared the same uterine environment, is fine.

So, drinking during pregnancy does not appear to be a sufficient cause of FAS, even if it is a necessary one (by definition?). In her book, Conceiving Risk, Bearing Responsibility, sociologist and public health scholar Elizabeth M. Armstrong explains that FAS is not just related to alcohol intake, but is “highly correlated with smoking, poverty, malnutrition, high parity [i.e., having lots of children], and advanced maternal age” (p. 6). Further, there appears to be a genetic component. Some fetuses may be more vulnerable than others due to differences in how bodies break down ethanol, a characteristic that may be inherited. (This may also explain why one fraternal twin is affected, but not the other.)

In sum, drinking alcohol during pregnancy appears to contribute to FAS, but it is far from a sufficient cause.

And yet… almost all public health campaigns, whether sponsored by states, social movement organizations, public health institutes, or the associations of alcohol purveyors tell pregnant women not to drink alcohol during, before, or after pregnancy… at all… or else.

The Centers for Disease Control (U.S.):

The National Organization on Fetal Alcohol Syndrome:

Best Start, Ontario’s Maternal Newborn and Early Child Development Resource Centre:

Nova Scotia Liquor Commission:

These campaigns all target women and explain to them that they should not drink any alcohol at all if they are trying to conceive, during pregnancy, during the period in which they are breastfeeding and, in some cases, if they are not trying to conceive but are using only somewhat effective birth control.

So the strategy for reducing FAS comes down to targeting women’s behavior.

But “women” do not cause FAS. Neither does alcohol. This strategy replaces addressing all of the other problems that correlate with the appearance of FAS — poverty, stress, and other kinds of social deprivation — in favor of policing women. FAS, in fact, is partly the result of individual behavior, partly the result of social inequality, and partly genetic, but our entire eradication strategy focuses on individual behavior. It places the blame and responsibility solely on women.

And, since women’s choices are not highly correlated with the appearance of FAS, the strategy fails. Very few women actually drink at the levels correlated with FAS. If we did not have a no-drinking-during-pregnancy campaign and pregnant women continued drinking at the rates at which they drank before being pregnant, we would not see a massive rise in FAS. Only the heaviest-drinking women put their fetuses at risk, and they, unfortunately, are the least likely to respond to the no-drinking campaign (largely due to addiction).

Originally posted in 2010 and developed into a two-page essay for Contexts magazine.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.