Cross-posted at Montclair SocioBlog.

Isabella was the second most popular name for baby girls last year.  She had been number one for two years but was edged out by Sophia.  Twenty-five years ago, Isabella was not in the top thousand.

How does popularity happen?  Gabriel Rossman’s new book Climbing the Charts: What Radio Airplay Tells Us about the Diffusion of Innovation offers two models.*   People’s decisions — what to name the baby, what songs to put on your station’s playlist (if you’re a programmer), what movie to go see, what style of pants to buy —  can be affected by others in the same position.  Popularity can spread seemingly on its own, affected only by the consumers themselves communicating with one another, person to person, by word of mouth.  But our decisions can also be influenced by people outside those consumer networks — the corporations or people who produce and promote the stuff they want us to pay attention to.

These outside “exogenous” forces tend to exert themselves suddenly, as when a movie studio releases its big movie on a specified date, often after a big advertising campaign.  The film does huge business in its opening week or two but adds much smaller amounts to its total box office receipts in the following weeks.   The graph of this kind of popularity is a concave curve.  Here, for example, is the first  “Twilight” movie.

Most movies are like that, but not all.  A few build their popularity by word of mouth.  The studio may do some advertising, but only after the film shows signs of having legs (“The surprise hit of the year!”).  The flow of information about the film is mostly from viewer to viewer, not from the outside.

This diffusion path is “endogenous”; it branches out among the people who are making the choices.  The rise in popularity starts slowly – person #1 tells a few friends, then each of those people tells a few friends.  As a proportion of the entire population, each person has a relatively small number of friends.  But at some point, the growth can accelerate rapidly.  Suppose each person has five friends.  At the first stage, only six people are involved (1 + 5); stage two adds another 25, and stage three another 125, and so on.  The movie “catches on.”
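To make the arithmetic concrete, here is a minimal Python sketch of that branching process (my illustration, not anything from Rossman’s book; the five-friends fanout is the hypothetical from the paragraph above):

```python
# Word-of-mouth branching: each new adopter tells five friends.
# Illustrative only; real networks overlap, so growth slows sooner.
fanout = 5
new, total = 1, 1              # stage 0: one person has seen the movie
for stage in range(1, 4):
    new *= fanout              # each new adopter recruits five more
    total += new
    print(f"stage {stage}: {new:4d} new, {total:4d} total")
# stage 1:    5 new,    6 total
# stage 2:   25 new,   31 total
# stage 3:  125 new,  156 total
```

In a real, finite population, the supply of untold friends eventually runs out, and the growth slows back down.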

The endogenous process is like contagion, which is why the term “viral” is so appropriate for what can happen on the Internet with videos or viruses.   The graph of endogenous popularity growth has a different shape, an S-curve, like this one for “My Big Fat Greek Wedding.”

By looking at the shape of a curve, tracing how rapidly an idea or behavior spreads, you can make a much better guess as to whether you’re seeing exogenous or endogenous forces.  (I’ve thought that the title of Gabriel’s book might equally be Charting the Climb: What Graphs of Diffusion Tell Us About Who’s Picking the Hits.)
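The two shapes can come from a single underlying model.  Here is a minimal sketch of the standard Bass diffusion model (my own toy version, not code or parameters from the book): p is the exogenous push on people who have not yet adopted, q is the endogenous word-of-mouth pull, and F is the fraction who have adopted so far.

```python
# Bass diffusion, discrete-time sketch: each period, the non-adopters (1 - F)
# adopt at rate p (outside push) plus q * F (contagion from earlier adopters).
def bass_curve(p, q, periods=60):
    F, curve = 0.0, []
    for _ in range(periods):
        F += (p + q * F) * (1 - F)
        curve.append(F)
    return curve

blockbuster = bass_curve(p=0.15, q=0.0)     # pure outside push: concave curve
sleeper = bass_curve(p=0.001, q=0.40)       # nearly pure contagion: S-curve
```

Eyeballing a graph of popularity is, in effect, guessing whether p or q dominates.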

But what about names, names like Isabella?  With consumer items  – movies, songs, clothing, etc. – the manufacturers and sellers, for reasons of self-interest, try hard to exert their exogenous influence on our decisions.  Nobody makes money from baby names, but even those can be subject to exogenous effects, though the outside influence is usually unintentional and brings no economic benefit.  For example, from 1931 to 1933, the first name Roosevelt jumped more than 100 places in rank.

When the Social Security Administration announced that the top names for 2010 were Jacob and Isabella, some people suspected the influence of an exogenous factor — “Twilight.”

I’ve made the same assumption in saying (here) that the popularity of Madison as a girl’s name — almost unknown till the mid-1980s but in the top ten for the last 15 years — has a similar cause: the movie “Splash” (an idea first suggested to me by my brother).  I speculated that the teenage girls who saw the film in 1985 remembered Madison a few years later when they started having babies.

Are these estimates of movie influence correct? We can make a better guess at the impact of the movies (and, in the case of Twilight, books) by looking at the shape of the graphs for the names.

Isabella was on the rise well before Twilight, and the gradual slope of the curve certainly suggests an endogenous contagion.  It’s possible that Isabella’s popularity was about to level off  but then got a boost in 2005 with the first book.  And it’s possible the same thing happened in 2008 with the first movie. I doubt it, but there is no way to tell.

The curve for Madison seems a bit steeper, and it does begin just after “Splash,” which opened in 1984.   Because of the scale of the graph, it’s hard to see the proportionately large changes in the early years.  There were zero Madisons in 1983, fewer than 50 the next year, but nearly 300 in 1985.  And more than double that the next year.  Still, the curve is not concave.  So it seems that while an exogenous force was responsible for Madison first emerging from the depths, her popularity then followed the endogenous pattern.  More and more people heard the name and thought it was cool.  Even so, her rise is slightly steeper than Isabella’s, as you can see in this graph with Madison moved by six years so as to match up with Isabella.

Maybe the droplets of “Splash” were touching new parents even years after the movie had left the theaters.

————————

* Gabriel posted a short account of these processes when he pinch-hit for Megan McArdle at the Atlantic (here).

Cross-posted at Montclair SocioBlog.

What’s familiar isn’t so bad, even if it’s bad.

One of the things I remember from my days in the crim biz is that people’s perceptions of crime don’t have a lot to do with actual crime rates.  This was back in the high-crime decades, and people were more afraid of crime than they are now.  But people felt safer in their own neighborhoods than in other neighborhoods, even when their own neighborhoods had a higher crime rate.

These were the days when I would give someone directions to my building — “Get off the IRT* at 72nd St…” — and they would often ask, “Is it safe?”

“Of course it’s safe.  It’s my neighborhood,” I would say, “I live here. I ought to know.”   Yet when I would go to a party in the East 20s or, God forbid, Brooklyn, I would emerge from the subway and follow the directions with a certain sense of apprehension and caution.

Apparently, the same link between far and fear holds true for people’s perceptions of economic well-being.  A recent Gallup poll asked people how the economy was in places ranging from their own city or area to the world generally.  The closer to home, the better the economy.  The farther from home, the lower the percent of people rating economic conditions as excellent or good.

And the farther from home, the higher the percent of people rating economic conditions as “only fair” or poor.

Republicans were the most pessimistic about the economy, regardless of location.  Democrats were the most sanguine, with Independents in between.  The graph shows the percent who rated the economy positively minus the percent who rated it Poor.

This obviously has nothing to do with familiarity but with contempt.  Apparently, for Republicans, a Democrat – especially a Kenyan socialist Democrat – in the White House means that the economy must be bad everywhere.

* These old subway line designations – IRT, BMT, IND – are no longer in official use.  But when did the MTA jettison them?  If you know the answer, please tell me.

———————

UPDATE, June 22: Andrew Gelman has formatted the data as line graphs, making the comparisons and trends clearer.  He has also added his own observations – things I wish I had known or thought of.

Cross-posted at Montclair SocioBlog.

Air pollution is what economists call an “externality.”  It is not an intrinsic part of the economic bargaining between producers and consumers.  The usual market forces — buyers and sellers pursuing their own individual interests — won’t help.  The market may bring us more goods at lower prices, for example, but it can harm the air that everyone, in or out of that market, has to breathe. To create or protect a public good, the free market has to be a little less free.  That’s where government steps in.  Or not.

Case in point: My son and his girlfriend arrived in Beijing ten days ago.  The got-here-safely e-mail ended with this:

…was blown away by the pollution! I know people talk about it all the time, but it really is crazy.

And it is.  Here’s a photo I grabbed from the Internet:

Flickr creative commons by nasus89.

At about the same time, I came upon this link to photos of my home town Pittsburgh in 1940.  Here are two of them:
Today in downtown Pittsburgh, the streetcars and overhead trolleys are gone.  So are the fedoras.  And so is the smoke.

The air became cleaner in the years following the end of the War.  It didn’t become clean all by itself, and it didn’t become clean because of free-market forces.  It got clean because of government — legislation and regulation, including an individual mandate.

The smoke was caused by the burning of coal, and while the steel mills accounted for some of the smoke, much of it came from coal-burning furnaces in Pittsburghers’ houses.  If the city was to have cleaner air, the government would have to force people to change the way they heated their homes.  And that is exactly what the law did.  To create a public good — clean air — the law required individuals to purchase something — either non-polluting fuel (oil, gas, or smokeless coal) or smokeless equipment.*

Initially, not everyone favored smoke control, but as Pittsburgh became cleaner and lost its “Smoky City” label, approval of the regulations increased, and there was a fairly rapid transition to gas heating.  By the 1950s, nobody longed for the unregulated air of 1940.  Smoke control was a great success.**  Of course, it may have helped that Pittsburgh did not have a major opposition party railing against this government takeover of home heating or claiming that smoke control was a jobs-killing assault on freedom.

————————

* Enforcement focused not on individuals but on distributors.  Truckers were forbidden from delivering the wrong kind of coal.

** For a fuller account of smoke control in Pittsburgh, see Joel A. Tarr and Bill C. Lamperes, “Changing Fuel Use Behavior and Energy Transitions: The Pittsburgh Smoke Control Movement, 1940-1950: A Case Study in Historical Analogy,” Journal of Social History, Vol. 14, No. 4, Special Issue on Applied History (Summer 1981), pp. 561-588.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Cross-posted at Montclair SocioBlog.

If a person thinks that the media are infiltrating his mind and controlling his thoughts and behavior, we consider him a nutjob, and we recommend professional help and serious meds.  But if a person thinks that the media are infiltrating other people’s minds and affecting their behavior, we call him or her an astute social observer, one eminently qualified to give speeches or write op-eds.

The previous post dwelt on economist Isabel Sawhill’s Washington Post op-ed channeling Dan Quayle, particularly Quayle’s speech asserting that a TV sitcom was wielding a strong effect on people’s decisions — not just decisions like Pepsi vs. Coke, but decisions like whether to have a baby.

That was Quayle, this is now.  Still, our current vice-president can sometimes resemble his counterpart of two decades ago.  Just last month, Joe Biden echoed the Quayle idea on the power of sitcoms.  On “Meet the Press,” in response to David Gregory’s question about gay marriage, Biden said that “this is evolving” and added:

And by the way, my measure, David, and I take a look at when things really begin to change, is when the social culture changes.  I think “Will and Grace” probably did more to educate the American public than almost anything anybody’s ever done so far.

“Will and Grace” ran for eight seasons, 1998-2006.  Its strongest years were 2001-2005, when it was the top-rated show among the 18-49 crowd.  Biden could point to General Social Survey (GSS) data on the gay marriage question.  In 1988, ten years before “Will and Grace,” the GSS asked about gay marriage.  Only 12% supported it, 73% opposed it.  The question was asked again in 2004, six years into the W+G era.  Support had more than doubled, and it continued to rise in subsequent years.

We don’t know just when in that 18-year period, 1988-2004, things “really began to change.”  Fortunately, the GSS more regularly asked the respondent’s view on sexual relations between same-sex partners.  Here too, tolerance grows in the “Will and Grace” period (gray on the graph):

The graph is misleading, though.  To see the error, all we need do is extend our sampling back a few years.  Here is the same graph starting in 1973:

The GSS shows attitudes about homosexuality starting to change in 1990.  By the time of the first episode of “Will and Grace,” the proportion seeing nothing wrong with homosexuality had already doubled.  Like Quayle’s “Murphy Brown” effect, the “Will and Grace” effect is hard to see.

The flaw in the Quayle-Biden method is not in mistaking TV for reality.  It’s in assuming that the public’s awareness arrived at the same time as their own.

Why do our vice-presidents (and many other people) give so much credit (or blame) to a popular TV show for a change in public opinion?  The error is partly a simplistic post hoc logic.   “Will and Grace” gave us TV’s first gay principal character; homosexuality became more acceptable.  Murphy Brown was TV’s first happily unwed mother, and in the following years, single motherhood increased.   Besides, we know that these shows are watched by millions of people each week.  So it must be the show that is causing the change.

Our vice-presidents (and many other people) may also have been projecting their own experiences onto the general public.  Maybe Murphy Brown was the first or only unwed mother that Dan Quayle really knew – or at least she was the one he knew best.  It’s possible that Joe Biden wasn’t familiar with any gay men, not in the way we feel we know TV characters.  A straight guy might have some gay acquaintances or co-workers, but it’s the fictional Will Truman whose private life he could see, if only for a half hour every week.

Does TV matter?  When we think about our own decisions, we are much more likely to focus on our experiences and on the pulls and pushes of family, work, and friends.  We generally don’t attribute much causal weight to the sitcoms we watch.  Why then are we so quick to see these shows as having a profound influence on other people’s behavior, especially behavior we don’t like?   Maybe because it’s such an easy game to play.  Is there more unwed motherhood?  Must be “Murphy Brown.”  Did obesity increase in the 1990s?  “Roseanne.”  Are twentysomethings and older delaying marriage?  “Seinfeld” and “Friends.” And of course “The Simpsons,” or at least Bart and Homer, who can be held responsible for a variety of social ills.

Cross-posted at Montclair SocioBlog.

I’m not sure what effect prime-time sitcoms have on the general public.  Very little, I suspect, but I don’t know the literature on the topic.  Still, it’s surprising how many people with a similar lack of knowledge assume that the effect is large and usually for the worse.

Isabel Sawhill is a serious researcher at Brookings; her areas are poverty and inequality.  Now, in a Washington Post article, she says that Dan Quayle was right about Murphy Brown.

Some quick history for those who were out of the room — or hadn’t yet entered the room: In 1992, Dan Quayle was vice-president under Bush I.  Murphy Brown was the title character on a popular sitcom then in its fourth season — a divorced TV news anchor played by Candice Bergen.  On the show, she got pregnant.  When the father, her ex, refused to remarry her, she decided to have the baby and raise it on her own.

Dan Quayle, in his second most famous moment,* gave a campaign speech about family values that included this:

Bearing babies irresponsibly is simply wrong… Failing to support children one has fathered is wrong… It doesn’t help matters when prime-time TV has Murphy Brown, a character who supposedly epitomizes today’s intelligent, highly paid professional woman, mocking the importance of fathers by bearing a child alone and calling it just another lifestyle choice.

Sawhill, citing her own research and that of others, argues that Quayle was right about families:  children raised by married parents are better off in many ways — health, education, income, and other measures of well-being — than are children raised by unmarried parents, whether single or together.**

But Sawhill also says that Quayle was right about the more famous part of the statement – that “Murphy Brown” was partly to blame for the rise in nonmarried parenthood.

Dan Quayle was right. Unless the media, parents and other influential leaders celebrate marriage as the best environment for raising children, the new trend — bringing up baby alone — may be irreversible.

Sawhill, following Quayle, gives pride of place to the media.  But unfortunately, she cites no evidence on the effects of sitcoms or the media in general on unwed parenthood.  I did, however, find a graph of trends in unwed motherhood. It shows the percent of all babies that were born to unmarried mothers.  I have added a vertical line to indicate the Murphy Brown moment.

The “Murphy Brown” effect is, at the very least, hard to detect. The rise is general across all racial groups, including those who were probably not watching a sitcom whose characters were all white and well-off.  Also, the trend begins well before “Murphy Brown” ever saw the light of prime time.  So 1992, with Murphy Brown’s fateful decision, was no more a turning point than was 1986, for example, a year when the two top TV shows were “The Cosby Show” and “Family Ties,” sitcoms with a very low rate of single parenthood and, at least for “Cosby,” a more inclusive demographic.

————————

  * Quayle’s most remembered moment: when a schoolboy wrote “potato” on the blackboard, Quayle “corrected” him by getting him to add a final “e” – “potatoe.”  “There you go,” said the vice-president of the United States approvingly. (A 15-second video is here.)

** These results are not surprising.  Compared with other wealthy countries, the US does less to support poor children and families or to ease the deleterious effects on children who have been so foolhardy as to choose poor, unmarried parents.

Cross-posted at Montclair SocioBlog.

In recent Democratic primaries in Appalachian states, Obama lost 40% of the vote.  The anti-Obama Democrats voted for candidates like “uncommitted” (Kentucky), an unknown lawyer (Arkansas), and a man who is incarcerated in Texas (West Virginia).

Could it be that there’s racism at work in Appalachia?  Or is the anti-Obama vote based entirely on opposition to his policies?

The 2008 Presidential election — Obama v. McCain — offers some hints.  For those with short memories, the Bush legacy — an unpopular war and an economic catastrophe — may have hurt the GOP.  In that election, the country went Democratic.  The Democrats did better than they had in 2004, the Republicans worse.  But not everywhere.  The Times provides this map:

Still, it’s possible that those voters in Appalachia preferred the policies of candidate Kerry to those of candidate Obama.  As Chris Cillizza says in a Washington Post blog (here), the idea that race had anything to do with this shift is…

…almost entirely unprovable because it relies on assuming knowledge about voter motivations that — without being a mindreader — no one can know.

Cillizza quotes Cornell Belcher, the head of a polling firm with the Monkish name Brilliant Corners:

One man’s racial differences is another man’s cultural differences.

Right.  The folks in Appalachia preferred John Kerry’s culture.

I’m generally cautious about attributing mental characteristics to people based on a single bit of behavior.  But David Weigel, in Slate, goes back to the 2008 Democratic primaries – Obama versus Hillary Clinton.  A CNN exit poll asked voters if race was an important factor in their vote. In West Virginia and Kentucky, about 20% of the voters in the Democratic primary said yes.  Were those admittedly race-conscious voters more anti-Obama than other Democrats?

As Weigel points out, this was before Obama took office, before voters really knew what policies he would propose.  Besides, there wasn’t all that much difference between his policies and those of Hillary Clinton.

Cillizza is right that we can’t read voters’ minds.  But to argue that there was no racial motivation, you have to discount what the voters said and what they did.

Cross-posted at Montclair SocioBlog.

The politics of motherhood reared its head again last month when Hilary Rosen, whom the news identified as a “Democratic strategist,” said that Ann Romney (Mrs. Mitt) had “never worked a day in her life.” (A NY Times article is here.)

“Worked” was a bad choice of words.  Raising kids and taking care of a home are work, maybe even if you can hire the kind of help that Mrs. Romney could afford.  Rosen’s comment implied that family work is not as worthwhile as work in the paid labor force.  That’s not such an unreasonable conclusion if you assume that we put our money where our values are and reward work in proportion to what we think it’s worth.  Mitt’s supporters use this value-to-society assumption to justify the huge payoffs Romney derived from those leveraged buyouts at Bain Capital.*

Even Mrs. Romney apparently felt that there must be some truth to the enviability of a career.   Why else would she refer to stay-at-home motherhood as a career?  “My career choice was to be a mother.”

Still, regardless of the truth of Rosen’s remark, it was insulting.**  Stay-at-home motherhood is work – a job.

But is it a good job?

A recent Gallup poll provides some more evidence as to why stay-at-home moms might be envious or resentful of their employed counterparts.  Gallup asked women about the emotions, positive and negative, that they had felt “a lot” in the previous day.  Gallup then compared the stay-at-home moms, employed moms, and employed women who had no children at home.

The stay-at-home moms came in first on every negative emotion.  Some of the differences are small, but the Gallup sample was more than 60,000, so these differences are statistically significant.   The smallest difference was for Stress – no surprise there, since paid work can be stressful.  Worry and Anger, too, can be part of the workplace.  The largest differences were for Sadness and Depression.  Stay-at-home moms were 60% more likely to have been sad or depressed.
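A back-of-the-envelope calculation shows why even modest gaps are significant at that sample size.  The numbers below are illustrative stand-ins consistent with the 60% figure, not Gallup’s published counts, and the even split between groups is my assumption:

```python
# Two-proportion z-test with hypothetical numbers (not Gallup's actual cells).
from math import sqrt

n1, p1 = 20_000, 0.26   # hypothetical: stay-at-home moms reporting sadness
n2, p2 = 20_000, 0.16   # hypothetical: employed moms reporting sadness
pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(f"z = {(p1 - p2) / se:.1f}")   # z = 24.6, far past the 1.96 cutoff
```

At samples this large, even a one-percentage-point gap would clear the conventional 1.96 threshold.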

Gallup also asked about positive feelings (Thriving, Smiling or Laughing, Learning, Happiness, Enjoyment), and while the differences were smaller, they went the same way, with stay-at-home moms on the shorter end.  Still, it’s encouraging that 86% of them had experienced Happiness; so had 91% of the employed moms.

Money matters.  As Rosen said,

This isn’t about whether Ann Romney or I or other women of some means can afford to make a choice to stay home and raise kids. Most women in America, let’s face it, don’t have that choice.

Gallup found a small interaction effect: the difference between stay-at-home and employed moms was greater among low-income women.

The Gallup poll does not offer much speculation about why stay-at-home moms have more sadness and less happiness.  One in four experienced “a lot” of depression the previous day.  That number should be cause for concern.

Maybe women feel more uncertain and less able to control their lives when they depend on a man, especially one whose income is inadequate.  Maybe stay-at-home moms find themselves more isolated from other adults.  Maybe they are at home not by choice but because they cannot find a decent-paying job.  Or maybe money talks, and what it says to unpaid stay-at-home moms is: society does not value your work.  Nor, in comparison with other wealthy countries, does US society or government provide much non-financial support to make motherhood easier.

The late Donna Summer sang,

She works hard for the money
So you better treat her right

But how right are we treating women who work hard for no money?

——————————-

* For example, Edward Conard is a former partner of Romney’s.  In a recent article in the Times Magazine, Adam Davidson writes, “If a Wall Street trader or a corporate chief executive is filthy rich, Conard says that the merciless process of economic selection has assured that they have somehow benefitted society.”

** Hillary Clinton committed a similar gaffe twenty years ago in response to a reporter’s question about work and family: “I suppose I could have stayed home and baked cookies and had teas, but what I decided to do was to fulfill my profession, which I entered before my husband was in public life.”

Cross-posted at Montclair SocioBlog.

Jacob and Isabella were the most popular baby names last year.  Some observers, even some sociologists, see this as the influence of the Twilight series.  (See here for example.)

But Jacob, Isabella, and even Bella were on the rise well before Stephenie Meyer sent her similarly named characters out to capture the hearts, minds, and naming preferences of romantic adolescents:

The forecasters predict a bumper crop soon in Rue, Cato, and perhaps other names from the Hunger Games series.  Still, since the YA (Young Adult) audience for these books and movies is more Y than A, I’m hoping for a lag time of at least a few years before they start naming babies.  As I blogged earlier, Splash, the film with Daryl Hannah as Madison the mermaid, came out in 1984, but it was not until nine years later that Madison surfaced in the top 100 names.  And if there’s a Hogwarts effect, we’re still waiting to see it.  The trend in Harry and Harold is downward on both sides of the Atlantic, and Hermione has yet to break into the top 1000.

Don’t look for any Katnisses to be showing up in your classes for quite a while.