
Cross-posted at Montclair SocioBlog.

Isabella was the second most popular name for baby girls last year.  She had been number one for two years but was edged out by Sophia.  Twenty-five years ago Isabella was not in the top thousand.

How does popularity happen?  Gabriel Rossman’s new book Climbing the Charts: What Radio Airplay Tells Us about the Diffusion of Innovation offers two models.*  People’s decisions — what to name the baby, what songs to put on your station’s playlist (if you’re a programmer), what movie to go see, what style of pants to buy — can be affected by others in the same position.  Popularity can spread seemingly on its own, affected only by the consumers themselves communicating with one another person-to-person by word of mouth.  But our decisions can also be influenced by people outside those consumer networks — the corporations or people who produce and promote the stuff they want us to pay attention to.

These outside “exogenous” forces tend to exert themselves suddenly, as when a movie studio releases its big movie on a specified date, often after a big advertising campaign.  The film does huge business in its opening week or two but adds much smaller amounts to its total box office receipts in the following weeks.   The graph of this kind of popularity is a concave curve.  Here, for example, is the first  “Twilight” movie.

Most movies are like that, but not all.  A few build their popularity by word of mouth.  The studio may do some advertising, but only after the film shows signs of having legs (“The surprise hit of the year!”).  The flow of information about the film is mostly from viewer to viewer, not from the outside.

This diffusion path is “endogenous”; it branches out among the people who are making the choices.  The rise in popularity starts slowly – person #1 tells a few friends, then each of those people tells a few friends.  As a proportion of the entire population, each person has a relatively small number of friends.  But at some point, the growth can accelerate rapidly.  Suppose each person has five friends.  At the first stage, only six people are involved (1 + 5); stage two adds another 25, and stage three another 125, and so on.  The movie “catches on.”
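The branching arithmetic above can be sketched in a few lines of code (a toy illustration of word-of-mouth growth, not Rossman’s actual model; the five-friends figure is the post’s own example, and real diffusion eventually saturates):

```python
# Toy word-of-mouth diffusion: each new adopter tells `fanout` friends,
# all of whom adopt at the next stage (no saturation, for simplicity).
def endogenous_adopters(stages, fanout=5):
    """Return cumulative adopters after each stage, starting from one person."""
    new, total, totals = 1, 1, []
    for _ in range(stages):
        new *= fanout          # each fresh adopter recruits `fanout` friends
        total += new
        totals.append(total)
    return totals

# Stage one: 1 + 5 = 6; stage two adds another 25; stage three another 125.
print(endogenous_adopters(3))  # [6, 31, 156]
```

With a fanout of five, three stages already reach 156 people; this geometric compounding is the “catches on” moment the paragraph describes.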

The endogenous process is like contagion, which is why the term “viral” is so appropriate for what can happen on the Internet with videos or viruses.   The graph of endogenous popularity growth has a different shape, an S-curve, like this one for “My Big Fat Greek Wedding.”

By looking at the shape of a curve, tracing how rapidly an idea or behavior spreads, you can make a much better guess as to whether you’re seeing exogenous or endogenous forces.  (I’ve thought that the title of Gabriel’s book might equally be Charting the Climb: What Graphs of Diffusion Tell Us About Who’s Picking the Hits.)
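The two curve shapes can be compared with stylized toy models (the parameters below are made up for illustration, not fitted to any real box-office data): exogenous popularity looks like a big opening followed by geometrically shrinking weekly additions, while endogenous popularity follows a logistic S-curve.

```python
import math

def exogenous_cumulative(weeks, peak=100.0, decay=0.5):
    """Big opening week, then geometrically shrinking additions: a concave curve."""
    total, out = 0.0, []
    for t in range(weeks):
        total += peak * decay ** t
        out.append(total)
    return out

def endogenous_cumulative(weeks, carrying=100.0, rate=1.0, midpoint=5.0):
    """Logistic growth: slow start, rapid middle, leveling off -- an S-curve."""
    return [carrying / (1 + math.exp(-rate * (t - midpoint))) for t in range(weeks)]

exo = exogenous_cumulative(10)
endo = endogenous_cumulative(10)
# Weekly gains tell the shapes apart: exogenous gains shrink every single week,
# while endogenous gains peak in the middle weeks.
exo_gains = [b - a for a, b in zip(exo, exo[1:])]
endo_gains = [b - a for a, b in zip(endo, endo[1:])]
print(all(later < earlier for earlier, later in zip(exo_gains, exo_gains[1:])))  # True
print(max(endo_gains) > endo_gains[0] and max(endo_gains) > endo_gains[-1])      # True
```

Checking whether the weekly increments are monotonically shrinking or hump-shaped is exactly the “look at the shape of the curve” heuristic, made mechanical.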

But what about names, names like Isabella?  With consumer items  – movies, songs, clothing, etc. – the manufacturers and sellers, for reasons of self-interest, try hard to exert their exogenous influence on our decisions.  Nobody makes money from baby names, but even those can be subject to exogenous effects, though the outside influence is usually unintentional and brings no economic benefit.  For example, from 1931 to 1933, the first name Roosevelt jumped more than 100 places in rank.

When the Social Security Administration announced that the top names for 2011 were Jacob and Isabella, some people suspected the influence of an exogenous factor — “Twilight.”

I’ve made the same assumption in saying (here) that the popularity of Madison as a girl’s name — almost unknown till the mid-1980s but in the top ten for the last 15 years — has a similar cause: the movie “Splash” (an idea first suggested to me by my brother).  I speculated that the teenage girls who saw the film in 1985 remembered Madison a few years later when they started having babies.

Are these estimates of movie influence correct? We can make a better guess at the impact of the movies (and, in the case of Twilight, books) by looking at the shape of the graphs for the names.

Isabella was on the rise well before Twilight, and the gradual slope of the curve certainly suggests an endogenous contagion.  It’s possible that Isabella’s popularity was about to level off  but then got a boost in 2005 with the first book.  And it’s possible the same thing happened in 2008 with the first movie. I doubt it, but there is no way to tell.

The curve for Madison seems a bit steeper, and it does begin just after “Splash,” which opened in 1984.   Because of the scale of the graph, it’s hard to see the proportionately large changes in the early years.  There were zero Madisons in 1983, fewer than 50 the next year, but nearly 300 in 1985.  And more than double that the next year.  Still, the curve is not concave.  So it seems that while an exogenous force was responsible for Madison first emerging from the depths, her popularity then followed the endogenous pattern.  More and more people heard the name and thought it was cool.  Even so, her rise is slightly steeper than Isabella’s, as you can see in this graph with Madison moved by six years so as to match up with Isabella.

Maybe the droplets of “Splash” were touching new parents even years after the movie had left the theaters.

————————

* Gabriel posted a short version about these processes when he pinch-hit for Megan McArdle at the Atlantic (here).

Men and women in Western societies often look more different than they naturally are because of the incredible amounts of work we put into trying to look different.  Often this difference is framed as “natural” but, in fact, it takes a lot of time, energy, and money to produce.  The dozens of half-drag portraits from photographer Leland Bobbé illustrate just how powerful the illusion can be.  Drag, of course, makes a burlesque of the feminine; it is hyperfeminine.  But almost all of us are doing drag, at least a little bit, much of the time.

Here’s an example of one we have permission to use for the cover of our Gender textbook:


Many more at Leland Bobbé’s website.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Earlier this week I wrote a post asking Is the Sky Blue?, discussing the way that culture influences our perception of color.  In the comments thread Will Robertson linked to a fascinating 8-minute BBC Horizon clip.  The video features an expert explaining how language changes how children process color in the brain.

We also travel to Namibia to visit with the Himba tribe.  They have different color categories than we do in the West, making them “color blind” to certain distinctions we easily parse out, but revealing ways in which we, too, can be color blind.


When we categorize people into “races,” we do so using a number of physical characteristics, but especially skin color. Our racial system is based on the idea that skin color is a clearly distinguishing trait, especially when we use terms like “black” and “white,” which we generally conceive of as opposite colors.

Of course, because race is socially constructed, there’s actually enormous diversity within the categories we’ve created, and great overlap between them, as we’ve forced all humans on earth into just a few groupings.  And terms like “black” and “white” don’t really describe the shades of actual human skin.

Artist and photographer Angelica Dass has an art project, Humanae, that illustrates the tremendous diversity in skin color (via co.CREATE; sent in by Dolores R., Mike R., and YetAnotherGirl).  She uses an 11×11-pixel sample of each individual’s face to match them to a specific color in the Pantone color system, which catalogs thousands of hues and is used in many types of manufacturing to standardize and match colors.  She then takes a photo of each person in front of a background of their Pantone color.

Currently the project is very heavily focused on people we’d generally categorize as White — there are a few individuals from other groups, but not many, and in no way does it represent “every skin tone,” as I’ve seen it described in some places. So that’s a major caveat.

That said, I do think the project shows how reductive our system of classifying people by skin tone is, when you look at the range of colors even just among Whites — why does it make sense to throw most of these people into one category and say they’re all physically the same in a meaningful way that separates them from everyone else (and then connect those supposedly shared physical traits to non-physical ones)? And which part of the body do we use to do so, since many of us have various shades on our bodies? Or which time of year, since many of us change quite a bit between summer and winter?

Maru sent in a similar example; French artist Pierre David created “The Human Pantone,” using 40 models. We think racial categories make sense because we generally think of the extremes, but by showing individuals arranged according to hue, the project highlights the arbitrariness of racial boundaries. Where, exactly, should the dividing lines be?

Via TAXI.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Food shortages during World War II required citizens and governments to get creative, changing the gastronomical landscape in surprising ways.   Many ingredients that the British were accustomed to were unavailable.  Enter the carrot.

According to my new favorite museum, the Carrot Museum, carrots were plentiful, but the English weren’t very familiar with the root.  Wrote the New York Times in 1942: “England has a goodly store of carrots. But carrots are not the staple items of the average English diet. The problem…is to sell the carrots to the English public.”

So the British government embarked on a propaganda campaign designed to increase dependence on carrots.  It linked carrot consumption to patriotism, disseminated recipes, and made bold claims about the carrot’s ability to improve your eyesight (useful considering they were often in blackout conditions).

Here’s a recipe for Carrot Fudge:

You will need:

  • 4 tablespoons of finely grated carrot
  • 1 gelatine leaf
  • orange essence or orange squash
  • a saucepan and a flat dish

Put the carrots in a pan and cook them gently in just enough water to keep them covered, for ten minutes. Add a little orange essence, or orange squash to flavour the carrot. Melt a leaf of gelatine and add it to the mixture. Cook the mixture again for a few minutes, stirring all the time. Spoon it into a flat dish and leave it to set in a cool place for several hours. When the “fudge” feels firm, cut it into chunks and get eating!

Disney created characters in an effort to help:

The government even used carrots as part of an effort to misinform their enemies:

…Britain’s Air Ministry spread the word that a diet of carrots helped pilots see Nazi bombers attacking at night. That was a lie intended to cover the real matter of what was underpinning the Royal Air Force’s successes: the latest, highly efficient on-board Airborne Interception Radar, also known as AI.

When the Luftwaffe’s bombing assault switched to night raids after the unsuccessful daylight campaign, British Intelligence didn’t want the Germans to find out about the superior new technology helping protect the nation, so they created a rumour to afford a somewhat plausible-sounding explanation for the sudden increase in bombers being shot down… The Royal Air Force bragged that the great accuracy of British fighter pilots at night was a result of them being fed enormous quantities of carrots and the Germans bought it because their folk wisdom included the same myth.

But here’s the most fascinating part.

It turns out that, precisely because of the rationing, British people of all classes ate more healthily.

…many poor people had been too poor to feed themselves properly, but with virtually no unemployment and the introduction of rationing, with its fixed prices, they ate better than in the past.

Meanwhile, among the better off, rationing reduced the intake of unhealthy foods.  There were very few sweets available and people ate more vegetables and fewer fatty foods.  As a result “…infant mortality declined and life expectancy increased.”

I love carrots. I’m eating them right now.

To close, here are some kids eating carrots on a stick:

Via Retronaut.  For more on life during World War II, see our posts on staying off the phones and carpool propaganda (“When You Ride ALONE, You Ride With Hitler!”) and our coverage of life in Japanese Internment Camps, women in high-tech jobs, the demonization of prostitutes, and the German love/hate relationship with jazz.


A while back I was summoned for jury duty and found myself being considered for a case against a young Latina with a court translator.  She was accused of selling counterfeit Gucci and Chanel purses on the street in L.A.  After introducing the case, the judge asked: “Is there any reason why you could not objectively apply the law?” My hand shot up.

I said:

I have to admit, I’m kind of disgusted that state resources are being used to protect the corporate interests of Chanel and Gucci.

Then I gave a spiel about corruption in the criminal justice system and finished up with:

I think that society should be protecting its weakest members, not penalizing them for trivial infractions. There is no way in good conscience I could give that girl a criminal record; I don’t care if she’s guilty. Some things are more important than the rules.

I was summarily dismissed.

Criminal prosecutions are one way to decrease counterfeiting and, yes, protect corporate interests and Shaynah H. sent in another: shame.  This National Crime Prevention Council/Bureau of Justice Assistance ad, spotted in a mall in Portland, tells you that if you buy knock-offs, you are “a phony.”

Yikes.  I would have preferred “savvy” or “cost-conscious.”  But, no, the message is clear.  You are a fake person, a liar, a hypocrite.  You are insincere and pretentious.  You are an impostor.  (All language borrowed from the word’s definition.)  And those are not things that anyone wants to be.

But, honestly, why does anyone care?

I suspect that counterfeits don’t really cut into Chanel’s profits directly.  The people who buy bags that cost thousands of dollars are not going to try to save some pennies by buying a knock-off.  Or, to put it the other way around, the people who are buying the counterfeits wouldn’t suddenly buy the originals if their supply ran out.

Instead, policing the counterfeiters is a response to a much more intangible concern, something Pierre Bourdieu called “cultural capital.”  You see, a main reason why people spend that kind of money on handbags is to be seen as the kind of person who does.  The handbags are a signal to others that they are “that kind” of person, the kind that can afford a real Gucci.  The products, then, are ways that people put boundaries between themselves and lesser others.

But when lesser others can buy knock-offs on the street in L.A. and parade around as if they can buy Gucci too, the whole point of buying Gucci is lost!  If the phony masses can do it, it no longer serves to distinguish the elites from the rest of us.

In this sense, Chanel and Gucci are very interested in reducing counterfeiting; the rich people who buy their products will only do so if buying them proves that they’re special.


The term sexual dimorphism refers to differences between males and females of the same species.  Some animals are highly sexually dimorphic. Male elephant seals outweigh females by more than 2,500 pounds; peacocks put on a color show that peahens couldn’t mimic in their wildest dreams; and a male anglerfish’s whole life involves finding a female, latching on, and dissolving until there’s nothing left but his testicles (yes, really).

On the spectrum of very high to very low dimorphism, humans are on the low end.  We’re just not that kind of species.  Remove the gendered clothing styles, make up, and hair differences and we’d look more alike than we think we do.

Because we’re invested in men and women being different, however, we tend to be pleased by exaggerated portrayals of human sexual dimorphism (for example, in Tangled). Game designer-in-training Andrea Rubenstein has shown us that we extend this ideal to non-human fantasy as well.  She points to a striking dimorphism (mimicking Western ideals) in World of Warcraft creatures:

Annalee Newitz at Wired writes:

[Rubenstein] points out that these female bodies embody the “feminine ideal” of the supermodel, which seems a rather out-of-place aesthetic in a world of monsters. Supermodelly Taurens wouldn’t be so odd if gamers had the choice to make their girl creatures big and muscley, but they don’t. Even if you wanted to have a female troll with tusks, you couldn’t. Which seems especially bizarre given that this game is supposed to be all about fantasy, and turning yourself into whatever you want to be.

It appears that the supermodel-like females weren’t part of the original design of the game.  Instead, the Alpha version included a lot less dimorphism, among the Taurens and the Trolls for example:

Newitz says that the female figures were changed in response to player feedback:

Apparently there were many complaints about the women of both races being “ugly” and so the developers changed them into their current incarnations.

The dimorphism in WoW is a great example of how gender difference is, in part, an ideology.  It’s a desire that we impose onto the world, not reality in itself.  We make even our fantasy selves conform to it.  Interestingly, when people stray from affirming the ideology, they can face pressure to align themselves with its defenders.  It appears that this is exactly what happened in WoW.


Cross-posted at Jezebel.

I’ve been watching the response to Anne-Marie Slaughter’s Why Women Still Can’t Have It All roll out across the web.  Commentators are making excellent points, but E.J. Graff at The American Prospect sums it up nicely:

Being both a good parent and an all-out professional cannot be done the way we currently run our educational and work systems… Being a working parent in our society is structurally impossible. It can’t be done right… You’ll always be failing at something — as a spouse, as a parent, as a worker. Just get used to that feeling.

In other words, the cards are stacked against you and it’s gonna suck.

And it’s true.  Trust me: as someone who’s currently knee-deep in the literature on parenting and gender, I’m pleased to see the structural contradictions between work and parenting being discussed.

But I’m frustrated about an invisibility, an erasure, a taboo that goes unnamed.  It seems like it should at least get a nod in this discussion.  I’m talking about the one really excellent solution to the clusterf@ck that is parenting in America.

Don’t. Have. Kids.

No really — just don’t have them.

Think about it.  The idea that women will feel unfulfilled without children and die from regret is one of the most widely-endorsed beliefs in America.  It’s downright offensive to some that a woman would choose not to have children.  Accusations of “selfishness” abound.  It’s a given that women will have children, and many women will accept it as a given.

But we don’t have to.  The U.S. government fails to support our childrearing efforts with sufficient programs (framing it as a “choice” or “hobby”), the market is expensive (child care costs more than college in most states), and we’re crammed into nuclear family households (making it difficult to rely on extended kin, real or chosen).  And the results are clear: raising children changes the quality of your life.  In good ways, sure, but in bad ways too.

Here are findings from the epic data collection engine that is the World Values Survey, published in Population and Development Review. If you live in the U.S., look at the blue line representing “liberal” democracies (that’s what we are).  The top graph shows that, among 20-39 year olds, having one child is correlated with a decrease in happiness, having two with a larger decrease, and so on up to four or more.  If you’re 40 or older, having one child is correlated with a decrease in happiness and having more children with a smaller one.  But even the happiest people, with four or more children, are slightly less happy than those with none at all.

Don’t shoot the messenger.

Long before Slaughter wrote her article for The Atlantic, when she floated the idea of writing it to a female colleague, she was told that it would be a “terrible signal to younger generations of women.”  Presumably, this is because having children is compulsory, so it’s best not to demoralize them.  Well, I’ll take on that Black Badge of Dishonor.  I’m here to tell still-childless women (and men, too) that they can say NO if they want to.  They can reject a lifetime of feeling like they’re “always… failing at something.”

I wish it were different. I wish that men and women could choose children and know that the conditions under which they parent will be conducive to happiness.  But they’re not.  As individuals, there’s little we can do to change this, especially in the short term.  We can, however, try to wrest some autonomy from the relentless warnings that we’ll be pathetically-sad-forever-and-ever if we don’t have babies.  And, once we do that, we can make a more informed measurement of the costs and benefits.

Some of us will choose to spend our lives doing something else instead.  We’ll learn to play the guitar, dance the Flamenco (why not?), get more education, travel to far away places, write a book, or start a welcome tumblr.  We can help raise our nieces and nephews, easing the burden on our loved ones, or focus on nurturing our relationships with other adults.  We can live in the cool neighborhoods with bad school districts and pay less in rent because two bedrooms are plenty.  We can eat out, sleep in, and go running.  We can have extraordinary careers, beautiful relationships, healthy lives, and lovely homes.  My point is: there are lots of great things to do in life… having children is only one of them.

Just… think about it.  Maybe you can spend your extra time working to change the system for the better.  Goodness knows parents will be too tired to do it.
