The practice of pairing the word “men” (which refers to adults) with “girls” (which does not) reinforces a gender hierarchy by mapping it onto age.  Jason S. discovered an example of this tendency at Halloween Adventure (East Village, NYC) and snapped a picture to send in:


Sara P. found another example, this time from iparty.  The flyer puts a girl and a boy side-by-side in police officer costumes.  The boy’s is labeled “policeman” and the girl’s is labeled “police girl.”


This type of language often goes unnoticed, but it sends a ubiquitous gender message about how seriously we should take men and women.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

Daniel Drezner once wrote about how international relations scholars would react to a zombie epidemic. Aside from the sheer fun of talking about something as silly as zombies, it had much the same illuminating satiric purpose as “how many X does it take to screw in a lightbulb” jokes. If you have even a cursory familiarity with the field, it is well worth reading.

Here’s my humble attempt to do the same for several schools within sociology.

Public Opinion. Consider the statement that “Zombies are a growing problem in society.” Would you:

  1. Strongly disagree
  2. Somewhat disagree
  3. Neither agree nor disagree
  4. Somewhat agree
  5. Strongly agree
  6. Um, how do I know you’re really with NORC and not just here to eat my brain?

Criminology. In some areas (e.g., Pittsburgh, Raccoon City), zombification is now more common than attending college or serving in the military and must be understood as a modal life course event. Furthermore, as seen in audit studies, employers are unwilling to hire zombies, and so the mark of zombification has persistent and reverberating effects throughout undeath (at least until complete decomposition and putrefaction). However, race trumps humanity, as most employers prefer to hire a white zombie over a black human.

Cultural toolkit. Being mindless, zombies have no cultural toolkit. Rather, the great interest is in understanding how the cultural toolkits of the living develop and are invoked during unsettled times of uncertainty, such as an onslaught of walking corpses. The human being besieged by zombies is not constrained by culture, but draws upon it. Actors can draw upon such culturally informed tools as boarding up the windows of a farmhouse, shotgunning the undead, or simply falling into panicked blubbering.

Categorization. There’s a kind of categorical legitimacy problem with zombies. Initially, zombies were supernaturally animated dead: they were sluggish but relentless, and they sought to eat human brains. In contrast, more recent zombies tend to be infected with a virus that leaves them still living in a biological sense but alters their behavior so as to be savage, oblivious to pain, and nimble. Furthermore, even supernatural zombies are not a homogeneous set but encompass varying degrees of decomposition. Thus the first issue with zombies is defining what a zombie is and whether it is commensurable with similar categories (like an inferius in Harry Potter). This categorical uncertainty has effects in that insurance underwriters systematically undervalue life insurance policies against monsters that are ambiguous to categorize (zombies) as compared to those that fall into a clearly delineated category (vampires).

Neo-institutionalism. Saving humanity from the hordes of the undead is a broad goal that is easily decoupled from the means used to achieve it. Especially given that human survivors need legitimacy in order to command access to scarce resources (e.g., shotgun shells, gasoline), it is more important to use strategies that are perceived as legitimate by trading partners (i.e., other terrified humans you’re trying to recruit into your improvised human survival cooperative) than to develop technically efficient means of dispatching the living dead. Although early on strategies for dealing with the undead (panic, “hole up here until help arrives,” “we have to get out of the city,” developing a vaccine, etc.) are practiced where they are most technically efficient, once a strategy achieves legitimacy it spreads via isomorphism to technically inappropriate contexts.

Population ecology. Improvised human survival cooperatives (IHSCs) demonstrate the liability of newness in that many are overwhelmed and devoured immediately after formation. Furthermore, IHSCs demonstrate the essentially fixed nature of organizations, as those IHSCs that attempt to change core strategy (e.g., from “let’s hole up here until help arrives” to “we have to get out of the city”) show a greatly increased hazard of being overwhelmed and devoured.

Diffusion. Viral zombieism (e.g., Resident Evil, 28 Days Later) tends to start with a single patient zero, whereas supernatural zombieism (e.g., Night of the Living Dead, the “Thriller” video) tends to start with all recently deceased bodies rising from the grave. By seeing whether the diffusion curve for zombieism more closely approximates a Bass mixed-influence model or a classic s-curve, we can estimate whether zombieism is supernatural or viral, and therefore whether policy-makers should direct grants towards biomedical labs to develop a zombie vaccine or towards the Catholic Church to give priests a crash course in the neglected art of exorcism. Furthermore, marketers can plug plausible assumptions into the Bass model so as to make projections of the size of the zombie market over time, and thus how quickly to start manufacturing such products as brain-flavored Doritos.
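For readers who haven’t met these models, here is a minimal sketch of the contrast being invoked (illustrative only and not part of the original post; the function names and parameter values are arbitrary assumptions chosen just to show the shapes): the Bass mixed-influence curve adds an external source that keeps seeding new adopters, while pure internal, contact-only influence gives the classic logistic s-curve.

```python
# Illustrative sketch only: compares the two diffusion shapes named above.
# Parameter values (p, q, f0) are made-up assumptions, not estimates.
import numpy as np

def bass_cumulative(t, p, q):
    """Closed-form cumulative adoption F(t) for the Bass mixed-influence model."""
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

def logistic_cumulative(t, q, f0=0.01):
    """Pure internal-influence (logistic) s-curve, seeded with a small initial fraction f0."""
    return f0 / (f0 + (1 - f0) * np.exp(-q * t))

t = np.linspace(0, 30, 31)
bass = bass_cumulative(t, p=0.03, q=0.4)   # external seeding plus imitation
scurve = logistic_cumulative(t, q=0.4)     # contact-only spread

# The Bass curve climbs right away (external seeding operates from t=0),
# while the logistic curve shows the slow, symmetric takeoff of the classic
# s-curve; comparing observed zombification counts to the two shapes is the
# "test" the paragraph jokes about.
for ti, b, s in zip(t[:6], bass[:6], scurve[:6]):
    print(f"t={ti:4.1f}  Bass={b:.3f}  s-curve={s:.3f}")
```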

Social movements. The dominant debate is the extent to which anti-zombie mobilization represents changes in the political opportunity structure brought on by complete societal collapse as compared to an essentially expressive act related to cultural dislocation and contested space. Supporting the latter interpretation is that zombie-hunting militias are especially likely to form in counties that have seen recent increases in immigration. (The finding holds even when controlling for such variables as gun registrations, log distance to the nearest army-administered “safe zone,” etc.)

Family. Zombieism doesn’t just affect individuals, but families. Having a zombie in the family involves an average of 25 hours of care work per week, including such tasks as going to the butcher to buy pig brains, repairing the boarding that keeps the zombie securely in the basement and away from the rest of the family, and washing a variety of stains out of the zombie’s tattered clothing. Almost all of this care work is performed by women and very little of it is done by paid care workers as no care worker in her right mind is willing to be in a house with a zombie.

Applied micro-economics. We combine two unique datasets, the first being military satellite imagery of zombie mobs and the second being records salvaged from the wreckage of ExxonMobil headquarters showing which gas stations were due to be refueled just before the start of the zombie epidemic. Since humans can use salvaged gasoline either to set the undead on fire or to power vehicles, chainsaws, etc., we have a source of plausibly exogenous variation in which neighborhoods were more or less hospitable environments for zombies. We show that zombies tended to shuffle towards neighborhoods with low stocks of gasoline. Hence, we find that zombies respond to incentives (just like schoolteachers, and sumo wrestlers, and crack dealers, and realtors, and hookers, …).

Grounded theory. One cannot fully appreciate zombies by imposing a pre-existing theoretical framework on zombies. Only participant observation can allow one to provide a thick description of the mindless zombie perspective. Unfortunately scientistic institutions tend to be unsupportive of this kind of research. Major research funders reject as “too vague and insufficiently theory-driven” proposals that describe the intention to see what findings emerge from roaming about feasting on the living. Likewise IRB panels raise issues about whether a zombie can give informed consent and whether it is ethical to kill the living and eat their brains.

Ethnomethodology. Zombieism is not so much a state of being as a set of practices and cultural scripts. It is not that one is a zombie but that one does being a zombie such that zombieism is created and enacted through interaction. Even if one is “objectively” a mindless animated corpse, one cannot really be said to be fulfilling one’s cultural role as a zombie unless one shuffles across the landscape in search of brains.

Conversation Analysis.

Cross-posted at Code and Culture.

Gabriel Rossman is a professor of sociology at UCLA. His research addresses culture and mass media, especially pop music radio and Hollywood films, with the aim of understanding diffusion processes. You can follow him at Code and Culture.

Opponents of government aid to the poor often argue that the poor are not really poor. The evidence they are fond of is often an inappropriate comparison, usually with people in other countries: “Thus we can say that by global standards there are no poor people in the US at all: the entire country is at least middle class or better” (Tim Worstall in Forbes).  Sometimes the comparison is with earlier times, as in this quote from Heritage’s Robert Rector: “‘Poor’ Americans today are better housed, better fed, and own more property than did the average US citizen throughout much of the 20th Century.”

I parodied this approach in a post a few years ago by using the ridiculous argument that poor people in the US are not really poor and are in fact “better off than Louis XIV because the Sun King didn’t have indoor plumbing.” I mean, I thought the toilet argument was ridiculous. But sure enough, Richard Rahn of the Cato Institute used it in an article last year in the Washington Times, complete with a 17th century portrait of the king:

Common Folk Live Better Now than Royalty Did in Earlier Times

Louis XIV lived in constant fear of dying from smallpox and many other diseases that are now cured quickly by antibiotics. His palace at Versailles had 700 rooms but no bathrooms…

Barry Ritholtz at Bloomberg has an ingenious way of showing how meaningless this line of thinking is. He compares today not with centuries past but with centuries to come. Consider our hedge-fund billionaires, with private jets whisking them to their several mansions in different states and countries. Are they well off?  Not at all.  They are worse off than the poor of 2215.

Think about what the poor will enjoy a few centuries from now that even the 0.01 percent lack today. … “Imagine, they died of cancer and heart disease, had to birth their own babies, and even drove their own cars. How primitive can you get!”

Comparisons with times past or future tell us about progress. They can’t tell us who’s poor today. What makes people rich or poor is what they can buy compared with other people in their own society. To extrapolate a line from Mel Brooks’s Louis XVI, “It’s good to be the king . . . even if flush toilets haven’t been invented yet.”

And you needn’t sweep your gaze to distant centuries to find inappropriate comparisons. When Marty McFly in “Back to the Future” goes from the ’80s to the ’50s, he feels pretty cool, even though the only great advances he has over kids there seem to be skateboards, Stratocasters, and designer underpants. How would he have felt if in 1985 he could have looked forward thirty years to see the Internet, laptops, and smartphones?

People below the poverty line today do not feel well off  just because they have indoor plumbing or color TVs or Internet connections. In the same way,  our 1% do not feel poor even though they lack consumer goods that people a few decades from now will take for granted.

Originally posted at Montclair SocioBlog. Re-posted at Pacific Standard.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

This video was making the rounds last spring. The video maker wants to make two points:

1. Cops are racist. They are respectful of the White guy carrying the AR-15. The Black guy gets less comfortable treatment.

2. The police treatment of the White guy is the proper way for police to deal with someone carrying an assault rifle.

I had two somewhat different reactions.

1. This video was made in Oregon. Under Oregon’s open-carry law, what both the White and Black guy are doing is perfectly legal. And when the White guy refuses to provide ID, that’s legal too. If this had happened in Roseburg, and the carrier had been strolling to Umpqua Community College, there would have been nothing the police could have legally done, other than what is shown in the video, until the guy walked onto campus, opened fire, and started killing people.

2.  Guns are dangerous, and the police know it. In the second video, the cop assumes that the person carrying an AR-15 is potentially dangerous – very dangerous. The officer’s fear is palpable. He prefers to err on the side of caution – the false positive of thinking someone is dangerous when he is really OK.  The false negative – assuming an armed person is harmless when he is in fact dangerous – could well be the last mistake a cop ever makes.

But the default setting for gun laws in the US is just the opposite – better a false negative. This is especially true in Oregon and states with similar gun laws. These laws assume that people with guns are harmless. In fact, they assume that all people, with a few exceptions, are harmless. Let them buy and carry as much weaponry and ammunition as they like.

Most of the time, that assumption is valid. Most gun owners, at least those who got their guns legitimately, are responsible people. The trouble is that the cost of the rare false negative is very, very high. Lawmakers in these states and in Congress are saying in effect that they are willing to pay that price. Or rather, they are willing to have other people – the students at Umpqua, or Newtown, or Santa Monica, or scores of other places, and their parents – pay that price.

UPDATE, October 6: You have to forgive the hyperbole in that last paragraph, written so shortly after the massacre at Umpqua. I mean, those politicians don’t really think that it’s better to have dead bodies than to pass regulations on guns, do they?

Or was it hyperbole? Today, Dr. Ben Carson, the surgeon who wants to be the next president of the US, stated even more clearly this preference for guns, even at the price of death.  “I never saw a body with bullet holes that was more devastating than taking the right to arm ourselves away.” (The story is in the New York Times and elsewhere.)

Originally posted at Montclair Socioblog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Flashback Friday.

Social and biological scientists agree that race and ethnicity are social constructions, not biological categories.  The US government, nonetheless, has an official position on what categories are “real.”  You can find them on the Census (source):


These categories, however real they may seem, are actually the product of a long process. Over time, the official US racial categories have changed in response to politics, economics, conflict, and more. Here are some highlights.

In the year of the first Census, 1790, the race question looked very different than it does today:

Free white males
Free white females
All other free persons (including Native Americans who paid taxes and free blacks)
Slaves

By 1870 slavery was illegal and the government was newly concerned with keeping track of two new kinds of people: “mulattos” (or people with both black and white ancestors) and Indians:

Indian (Native Americans)

Between 1850 and 1870, 6.5 million Europeans and 60,000 Chinese had immigrated.  Chinese and Japanese were added for the 1880 Census.

By 1890, the U.S. government was obsessed with race-mixing.  The race question looked like this:

Black (3/4th or more “black blood”)
Mulatto (3/8th to 5/8th “black blood”)
Quadroon (1/4th “black blood”)
Octoroon (1/8th or any trace of “black blood”)

This year was the only year to include such fine-tuned mixed-race categories, however, because it turned out it wasn’t easy to figure out how to categorize people.

In the next 50 years, the government added and deleted racial categories. There were 10 in 1930 (including “Mexican” and “Hindu”) and 11 in 1940 (introducing “Hawaiian” and “Part Hawaiian”).  In 1970, they added the “origin of descent” question that we still see today.  So people are first asked whether they are “Hispanic, Latino, or Spanish” and then asked to choose a race.

You might immediately think, “But what do these words even mean?”  And you’d be right to ask.  “Spanish” refers to Spain; “Latino” refers to Latin America; and “Hispanic” is a totally made up word that was originally designed to mean “people who speak Spanish.”

Part of the reason we have the “Hispanic” ethnicity question is that Mexican Americans fought for it.  They thought it would be advantageous to be categorized as “white” and so they fought for an ethnicity category instead of a racial one.

Funny story:  The US once included “South American” as a category in the “origin of descent” question.  That year, over a million residents of southern U.S. states, like Alabama and Mississippi, checked that box.

2000 was the first year that respondents were allowed to choose more than one race. Officials considered a couple of other changes for that year, but decided against them. Native Hawaiians had been agitating to be considered Native Americans in order to get access to the rights and resources that the US government has promised Native Americans on the mainland. The government considered it for 2000, but decided “no.” Whether Arab Americans should be considered a unique race or an ethnicity was also discussed for that year; the decision was to continue to instruct such individuals to choose “white.”

The changing categories in the Census show us that racial and ethnic categories are political categories. They are chosen by government officials who are responding not to biological realities, but to immigration, war, prejudice, and social movements.

This post originally appeared in 2010.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

Flashback Friday.

In her fantastic book, Talk of Love (2001), Ann Swidler investigates how people use cultural narratives to make sense of their marriages.

She describes the “romantic” version of love with which we are all familiar.  In this model, two people fall deeply in love at first sight and live forever and ever in bliss.  We can see this model of love in movies, books, and advertisements:

She finds that, in describing their own marriages, most people reject a romantic model of love out-of-hand.

Instead, people tended to articulate a “practical” model of love.  Maintaining love in marriage, they said, requires trust, honesty, respect, self-discipline, and, above all, hard work.  This model manifests in the therapeutic and religious self-help industry and its celebrity manifestations:

But even though most people favored a practical model of love in Swidler’s interviews, even the most resolute realist would occasionally fall back on idealist versions of love. In that sense, most people would articulate contradictory beliefs. Why?

Swidler noticed that people would draw on the different models when asked different kinds of questions. When she would ask them “How do you keep love alive from day to day?” they would respond with a practical answer. When she asked them “Why do you stay married?” or “Why did you get married?” they would respond with a romantic answer.

So, even though most people said that they didn’t believe in the ideal model, they would invoke it. They did so when talking about the institution of marriage (the why), but not when talking about the relationship they nurtured inside of that institution (the how).

Swidler concludes that the ideal model of love persists as a cultural trope because marriage, as an institution, requires it. For example, while people may not believe that there is such a thing as “the one,” marriage laws are written such that you must marry “one.” She explains:

One is either married or not; one cannot be married to more than one person at a time; marrying someone is a fateful, sometimes life-transforming choice; and despite divorce, marriages are still meant to last (p. 117-118).

That “one,” over time, becomes “the one” you married. “The social organization of marriage makes the mythic image true experientially…” (p. 118, my emphasis).

If a person is going to get married at all, they must have some sort of cultural logic that allows them to choose one person. Swidler writes:

In order to marry, individuals must develop certain cultural, psychological, and even cognitive equipment. They must be prepared to feel, or at least convince others that they feel, that one other person is the unique right ‘one.’ They must be prepared to recognize the ‘right person’ when that person comes along.

The idea of romantic love does this for us. It is functional given the way that contemporary institutions structure love relationships. And, that, Swidler says, is why it persists:

The culture of [romantic] love flourishes in the gap between the expectation of enduring relationships and the free, individual choice upon which marriage depends… Only if there really is something like love can our relationships be both voluntary and enduring (p. 156-157).

Presumably if marriage laws didn’t exist, or were different, the romantic model of love would disappear because it would no longer be useful.

The culture of love would die out, lose its plausibility, not if marriages did not last (they don’t) but if people stopped trying to form and sustain lasting marriages (p. 158).

Even when individuals consciously disbelieve dominant myths [of romantic love], they find themselves engaged with the very myths whose truths they reject—because the institutional dilemmas those myths capture are their dilemmas as well (p. 176).

Cultural tropes, then, don’t persist because we (or some of us) are duped by movies and advertisements; they persist because we need them.

Originally posted in 2010.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

The “Oxford comma” is the one placed before an “and” in a list of three or more. It’s the subject of an embittered battle among grammar-lovers. You can make up your own mind. Sometimes it’s correct to use it; sometimes it’s more fun not to.


Posted at Mental Floss, made by Arika Okrent, who wrote a book about invented languages, and artist Mike Rogalski. Used with permission.

Lisa Wade is a professor at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. Find her on Twitter, Facebook, and Instagram.

From an angry tweet to an actual change.

On September 1st I objected to the description of the Disney movie Pocahontas at Netflix. It read:

An American Indian woman is supposed to marry the village’s best warrior, but she yearns for something more — and soon meets Capt. John Smith.


I argued that, among other very serious problems with the film itself, this description reads like a porn flick or a bad romance novel. It overly sexualizes the film, and only positions Pocahontas in relation to her romantic options, not as a human being, you know, doing things.

Other Disney lead characters are not at all described this way. Compare the Pocahontas description to the ones for a few other Disney films on Netflix:

The Hunchback of Notre Dame. “Inspired by Victor Hugo’s novel, this Disney film follows a gentle, crippled bell ringer as he faces prejudice and tries to save the city he loves.”

The Emperor’s New Groove. “In this animated Disney adventure, a South American emperor experiences a reversal of fortune when his power-hungry adviser turns him into a llama.”

Tarzan. “After being shipwrecked off the African coast, a lone child grows up in the wild and is destined to become lord of the jungle.”

Hercules. “The heavenly Hercules is stripped of his immortality and raised on earth instead of Olympus, where he’s forced to take on Hades and assorted monsters.”

I picked these four because they have male protagonists and, with the exception of Emperor’s New Groove, which has a “South American” lead, the rest are white males. I have problems with the “gentle, crippled” descriptor, but the point is, these movies all have well-developed romance plot lines, yet their (white, male) protagonists get to save things, fight people, have adventures, and be “lord of the jungle” – they are not defined by their romantic relationships in the film.

We cannot divorce the description of Pocahontas from its context. We live in a society that sexualizes Native women: it paints us as sexually available, free for the taking, and conquerable – an extension of the lands that we occupy. The statistics for violence against Native women are staggeringly high, and this is all connected.

NPR Codeswitch recently posted a piece about how watching positive representations of “others” (LGBT, POC) on TV leads to more positive associations with the group overall, and can reduce prejudice and racism. This is awesome, but what if the only representations are not positive? In the case of Native peoples, the reverse is true – seeing stereotypical imagery, or in the case of Native women, overly sexualized imagery, contributes to the racism and sexual violence we experience. The research shows that these seemingly benign, “funny” shows on TV deeply affect real-life outcomes, so I think we can safely say that a Disney movie (and its description) matters.

So, my point was not to criticize the film, which I can save for another time, but to draw attention to the importance of the words we use, and the ways that insidious stereotypes and harmful representations sneak into our everyday lives.

In any case, I expressed my objection to the description on Twitter and was joined by hundreds of people. And… one week later, I received an email from Netflix:

Dear Dr. Keene,

Thanks for bringing attention to this synopsis. We do our best to accurately portray the plot and tone of the content we’re presenting, and in this case you were right to point out that we could do better. The synopsis has been updated to better reflect Pocahontas’ active role and to remove the suggestion that John Smith was her ultimate goal.


<netflix employee>



The updated synopsis now reads: “A young American Indian girl tries to follow her heart and protect her tribe when settlers arrive and threaten the land she loves.”

Sometimes I’m still amazed by the power of the internet.

Adrienne Keene, EdD is a graduate of the Harvard Graduate School of Education and is now a postdoctoral fellow in Native American studies at Brown University. She blogs at Native Appropriations, where this post originally appeared. You can follow her on Twitter.

Cross-posted at Pacific Standard.