methods/use of data

Cross-posted at Montclair SocioBlog.

Does “the abortion culture” cause infanticide?  That is, does legalizing the aborting of a fetus in the womb create a cultural, moral climate where people feel free to kill newborn babies?

It’s not a new argument.  I recall a 1998 Peggy Noonan op-ed in the Times, “Abortion’s Children,” arguing that kids who grew up in the abortion culture are “confused and morally dulled.”*  Earlier this week, USA Today ran an op-ed by Mark Rienzi repeating this argument in connection with the Gosnell murder conviction.

Rienzi argues that the problem is not one depraved doctor.  As the subhead says:

The killers are not who you think. They’re moms.

Worse, he warns, infanticide has skyrocketed.

While murder rates for almost every group in society have plummeted in recent decades, there’s one group where murder rates have doubled, according to CDC and National Center for Health Statistics data — babies less than a year old.

Really? The FBI’s Uniform Crime Reports give a different picture.

[Figure: homicide rates for victims under age one, FBI Uniform Crime Reports, declining in recent decades]

Many of these victims were not newborns, but Rienzi is talking about day-of-birth homicides — the type of killing Dr. Gosnell was convicted of, a substitute for abortion.  Most of these, as Rienzi says, are committed not by doctors but by mothers.  (I assume that in most of these cases the method is smothering.)  These deaths show an even steeper decline since 1998.

[Figure: day-of-birth homicides, showing an even steeper decline since 1998]

Where did Rienzi get his data that rates had doubled?  By going back to 1950.

[Figure: infanticide rates going back to 1950]

The data on infanticide fit with his idea that legalizing abortion increased rates of infanticide.  The rate rises after Roe v. Wade (1973) and continues upward till 2000.
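Notice how much work the choice of baseline year does here. A minimal sketch with made-up numbers (not the CDC figures) shows how the same series can be read as a doubling or as a decline, depending on where you start counting:

```python
# Hypothetical infanticide rates per 100,000 -- illustration only, not CDC data.
rates = {1950: 3.0, 1973: 4.5, 2000: 8.5, 2010: 7.0}

def pct_change(start, end):
    """Percent change in the rate between two years."""
    return 100 * (rates[end] - rates[start]) / rates[start]

print(pct_change(1950, 2010))  # +133%: "the rate has more than doubled"
print(pct_change(2000, 2010))  # about -18%: the same series shows a decline
```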

But that hardly settles the issue. Yes, as Rienzi says, “The law can be a potent moral teacher.”  But many other factors could have been affecting the increase in infanticide, factors much closer to the actual event — the mother’s age, education, economic and family circumstances, blood lead levels, etc.

If Roe changed the culture, then that change should be reflected not just in the very small number of infanticides but in attitudes in the general population.  Unfortunately, the GSS did not ask about abortion till 1977, but since that year, attitudes on abortion have changed very little.   Nor does this measure of “abortion culture” have any relation to rates of infanticide.

[Figure: GSS attitudes toward abortion, 1977 onward]

Moreover, if there is a relation between infanticide and general attitudes about abortion, then we would expect to see higher rates of infanticide in areas where attitudes on abortion are more tolerant.

[Figure: attitudes toward abortion by region]

The South and Midwest are most strongly anti-abortion, the West Coast and Northeast the most liberal.  So, do these cultural differences affect rates of infanticide?

[Figure: rates of infanticide by region]

Well, yes, but it turns out the actual rates of infanticide are precisely the opposite of what the cultural explanation would predict.  The data instead support a different explanation of infanticide: Some state laws make it harder for a woman to terminate an unwanted pregnancy.  Under those conditions, more women will resort to infanticide.  By contrast, where abortion is safe, legal, and available, women will terminate unwanted pregnancies well before parturition.

The absolutist pro-lifers will dismiss the data by insisting that there is really no difference between abortion and infanticide and that infanticide is just a very late-term abortion. As Rienzi puts it:

As a society, we could agree that there really is little difference between killing a being inside and outside the womb.

In fact, very few Americans agree with this proposition. Instead, they do distinguish between a cluster of a few fertilized cells and a newborn baby. I know of no polls that ask about infanticide, but I would guess that a large majority would say that it is wrong under all circumstances.  But only perhaps 20% of the population thinks that abortion is wrong under all circumstances.

Whether the acceptance of abortion in a society makes people “confused and morally dulled” depends on how you define and measure those concepts.  But the data do strongly suggest that whatever “the abortion culture” might be, it lowers the rate of infanticide rather than increasing it.

* I had trouble finding Noonan’s op-ed at the Times Website.  Fortunately, then-Rep. Talent (R-MO) entered it into the Congressional Record.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Originally posted in 2010. Re-posted in honor of Women’s History Month.

The New York Public Library posted a page from the first issue (September 1941) of Design for Living: The Magazine for Young Moderns that I thought was sorta neat for bringing some perspective to the increase in the amount and variety of clothing we take as normal today, but also, to my relief, to the acceptance of a more casual style of dress. The magazine conducted a poll of women at a number of colleges throughout the U.S. about how many of various articles of clothing they owned. Here’s a visual showing the schools where women reported the highest and lowest averages (the top item is a dickey, not a shirt):

[Figure: colleges reporting the highest and lowest average number of each clothing item owned]

Overall the women reported spending an average of $240.33 per year on clothing.

Hats for women were apparently well on their way out of fashion.

Can you imagine a magazine aimed at college women today implying that you might be able to get away with only three or four pairs of shoes, even if that’s what women reported?

At the end of the article, they draw readers’ attention to the fact that they used a sample.

I can’t help but find it rather charming that a popular magazine would even bother to clarify anything about their polling methods. So…earnest!

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

In 2009, 470,000 15-year-olds in 65 developed nations took a science test.  Boys in the U.S. outperformed girls by 14 points: 509 to 495.  How does the U.S. compare to other countries?

The figure below — from the New York Times — features Western and Northern Europe and the Americas (in turquoise), Asia and the Pacific Islands (in pink), and the Middle East and Eastern and Southern Europe (in yellow).  The line down the middle separates societies in which boys scored higher than girls (left) and vice versa (right).

Notice that the countries in which boys outscore girls are overwhelmingly Western and Northern Europe and the Americas.

This data tells a similar story to the data on gender and math aptitude.  Boys used to outperform girls in math in the U.S., but no longer.  And if you look transnationally, cultural variation swamps gender differences.  Analyses have shown that boys outperforming girls in math is strongly correlated with the degree of inequality in any given society.

One lesson to take is this: any given society is just one data point and can’t be counted on to tell the whole story.

Via The Global Sociology Blog.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

An expanded version of this post is cross-posted at Montclair SocioBlog.

Six years ago, I wrote that the Pittsburgh Steelers had become “America’s Team,” a title once claimed, perhaps legitimately, by the Dallas Cowboys.

Now Ben Blatt at The Harvard College Sports Analysis Collective concludes that it’s still the Cowboys:

…based on their huge fan base and ability to remain the most popular team coast-to-coast, I think the Dallas Cowboys have earned the right to use the nickname  ‘America’s Team’.

To get data, Blatt posed as an advertiser and euchred Facebook into giving him information on 155 million of its users, about half of the US population.  He counted the “likes” for each NFL team.

It’s Super Bowls X, XIII, and XXX all over again — Steelers vs. Cowboys.  And the Cowboys have a slight edge.  But does that make them “America’s Team”? It should be easy to get more likes when you play in a metro area like Dallas, which has twice as many people as Pittsburgh.  If the question is about “America’s Team,” we’re not interested in local support.  Just the opposite: if you want to know who America’s team is, you should find out how many fans it has outside its local area.

Unfortunately, Blatt doesn’t provide that information. So for a rough estimate, I took the number of Facebook likes and subtracted the metro area population.  Most teams came out on the negative side. The Patriots, for example, had 2.5 million likes, but they are in a media market of over 4 million people.  The Cowboys too wound up in the red: 3.7 million likes in a metro area of 5.4 million people.
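Here is a minimal sketch of that back-of-the-envelope adjustment, using the two teams’ approximate figures quoted above (the rest of the league is omitted):

```python
# Facebook likes minus home metro population -- the rough adjustment
# described above. Figures are the approximate ones quoted in the post.
teams = {
    "Patriots": {"likes": 2_500_000, "metro_pop": 4_000_000},
    "Cowboys":  {"likes": 3_700_000, "metro_pop": 5_400_000},
}

for name, t in teams.items():
    surplus = t["likes"] - t["metro_pop"]
    print(f"{name}: {surplus:+,} likes beyond the home market")
# Both come out negative: their likes don't even cover their metro areas.
```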

Likes outnumbered population for only five teams.  The clear winner was the Steelers.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Advanced quantitative analysis often controls for variables that aren’t of central interest. But what does it mean to “control for” a variable?  XKCD offers a fun example.

So, do subscribers to Martha Stewart Living live alongside furries?  Probably not. In any case, these maps don’t offer any evidence in favor of this conclusion.  This is because of a variable that hasn’t been controlled for: population density.

To control for population, one would have to divide the number of subscribers/furries by the total population.  This would give us the percentage of the population described by each proclivity, instead of the sheer number of devotees.  Then the maps would actually show variance in the proportion of the population instead of variance in the population itself.
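As a quick sketch with made-up counts, dividing by population turns raw counts into rates, and the ranking can flip once you do:

```python
# "Controlling for" population in the simplest sense: divide raw counts
# by population to get a rate. The counts here are made up for illustration.
regions = {"Big City": (9_000, 3_000_000), "Small Town": (60, 10_000)}

for name, (furries, population) in regions.items():
    rate = furries / population
    print(f"{name}: {furries:,} furries, {rate:.2%} of residents")
# Big City wins on raw counts (9,000 vs. 60), but Small Town has twice
# the rate (0.60% vs. 0.30%): raw-count maps mostly show where people live.
```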

In other words, we would have controlled for population in order to get a closer look at what we’re really interested in: furries, of course.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

Cross-posted at Reports from the Economic Front.

Many expected that the severity of the Great Recession, the recognition that the prior expansion was largely based on unsustainable “bubbles,” and the anemic post-crisis recovery would lead to serious discussion about the need to transform our economy.  Yet it hasn’t happened.

One important reason is that not everyone has experienced the Great Recession and its aftermath the same way. Jordan Weissmann, writing in the Atlantic, published a figure from the work of Edward Wolff. The chart shows the rise and fall of median and mean net worth among Americans: how much one owns (e.g., savings, investments, and property) minus how much one owes (e.g., credit card debt and outstanding loans).

Both the mean and the median are interesting because, while they’re both measures of central tendency, one is more sensitive to extremes than the other. The mean is the statistical average (literally, all the numbers added up and divided by the number of numbers), so it is influenced by very low and very high numbers.  The median, in contrast, is literally the number in the middle of the sample: if there are very high or low numbers, their status as outliers doesn’t shape the measure.
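A toy example, with five made-up households rather than Wolff’s data, shows how far apart the two measures can sit:

```python
import statistics

# Five hypothetical household net worths; the last is a very rich outlier.
net_worth = [20_000, 60_000, 100_000, 150_000, 10_000_000]

print(statistics.mean(net_worth))    # 2066000.0 -- dragged up by the outlier
print(statistics.median(net_worth))  # 100000 -- the middle household
```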

Back to the figure: as of 2010, median household net worth (dark purple) had fallen back to levels last seen in the early 1960s.  In contrast, mean household net worth (light purple) had only retreated to levels of the 2000s.  This shows that a small number of outliers — the very, very rich — have weathered the Great Recession much better than the rest of us.

[Figure: mean and median household net worth, from Edward Wolff’s data]

The great disparity between median and mean wealth declines is a reflection of the ability of those at the top of the wealth distribution to maintain most of their past gains.  And the lack of discussion about the need for change in our economic system is largely a reflection of the ability of those very same people to influence our political leaders and shape our policy choices.

—————————

Martin Hart-Landsberg is a professor of Economics and Director of the Political Economy Program at Lewis and Clark College.  You can follow him at Reports from the Economic Front.

For the last week of December, we’re re-posting some of our favorite posts from 2012. Cross-posted at Global Policy TV and Pacific Standard.

Publicizing the release of the 1940 U.S. Census data, LIFE magazine released photographs of Census enumerators collecting data from household members.  Yep, Census enumerators. For almost 200 years, the U.S. counted people and recorded information about them in person, by sending out a representative of the U.S. government to evaluate them directly.

By 1970, the government was collecting Census data by mail-in survey. The shift to a survey had dramatic effects on at least one Census category: race.

Before the shift, Census enumerators categorized people into racial groups based on their appearance.  They did not ask respondents how they characterized themselves.  Instead, they made a judgment call, drawing on explicit instructions given to the Census takers.

On a mail-in survey, however, the individual self-identified.  They got to tell the government what race they were instead of letting the government decide.  There were at least two striking shifts as a result of this change:

  • First, it resulted in a dramatic increase in the Native American population.  Between 1980 and 2000, the U.S. Native American population magically grew 110%.  People who had identified as American Indian had apparently been somewhat invisible to the government.
  • Second, to the chagrin of the Census Bureau, 80% of Puerto Ricans chose white (only 40% of them had been identified as white in the previous Census).  The government wanted to categorize Puerto Ricans as predominantly black, but the Puerto Rican population saw things differently.

I like this story.  Switching from enumerators to surveys meant literally shifting our definition of what race is from a matter of appearance to a matter of identity.  And it wasn’t a strategic or philosophical decision. Instead, the very demographics of the population underwent a fundamental unsettling because of the logistical difficulties in collecting information from a large number of people.  Nevertheless, this change would have a profound impact on who we think Americans are, what research about race finds, and how we think about race today.

See also “The U.S. Census and the Social Construction of Race” and “Race and Censuses from Around the World.” To look at the questionnaires and their instructions for any decade, visit the Minnesota Population Center.  Thanks to Philip Cohen for sending the link.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012. Originally cross-posted at Family Inequality.

The other day the New York Times had a Gray Matter science piece by the authors of a study in PLoS One that showed some people could identify gays and lesbians based only on quick flashes of their unadorned faces. They wrote:

We conducted experiments in which participants viewed facial photographs of men and women and then categorized each face as gay or straight. The photographs were seen very briefly, for 50 milliseconds, which was long enough for participants to know they’d seen a face, but probably not long enough to feel they knew much more. In addition, the photos were mostly devoid of cultural cues: hairstyles were digitally removed, and no faces had makeup, piercings, eyeglasses or tattoos.

…participants demonstrated an ability to identify sexual orientation: overall, gaydar judgments were about 60 percent accurate.

Since chance guessing would yield 50 percent accuracy, 60 percent might not seem impressive. But the effect is statistically significant — several times above the margin of error. Furthermore, the effect has been highly replicable: we ourselves have consistently discovered such effects in more than a dozen experiments.

This may be seen as confirmation of the inborn nature of sexual orientation, if it can be detected by a quick glance at facial features.

[Figure: sample images flashed during the “gaydar” experiment]

There is a statistical issue here that I leave to others to consider: the sample of Facebook pictures the researchers used was 48% gay/lesbian (111/233 men, 87/180 women). So if, as they say, it is 64% accurate at detecting lesbians, and 57% accurate at detecting gay men, how useful is gaydar in real life (when about 3.5% of people are gay or lesbian, when people aren’t reduced to just their naked, hairless facial features, and you know a lot of people’s sexual orientations from other sources)? I don’t know, but I’m guessing not much.
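A back-of-the-envelope Bayes calculation makes the base-rate worry concrete. This sketch leans on a simplifying assumption the study doesn’t state: that the reported 60 percent accuracy applies symmetrically, as both the hit rate and the correct-rejection rate.

```python
# Bayes' rule with a realistic base rate. Assumes (simplistically) that the
# reported 60% accuracy is both the hit rate and the correct-rejection rate;
# 3.5% is the rough real-world share of gay/lesbian people cited above.
base_rate = 0.035
accuracy = 0.60

true_positives = base_rate * accuracy                # gay and judged gay
false_positives = (1 - base_rate) * (1 - accuracy)   # straight but judged gay

ppv = true_positives / (true_positives + false_positives)
print(f"P(gay | judged gay) = {ppv:.1%}")  # about 5.2%, barely above the 3.5% base rate
```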

Anyway, I have a serious basic reservation about studies like this — like those that look for finger-length, hair-whorl, twin patterns, and other biological signs of sexual orientation. To do it, the researchers have to decide who has what sexual orientation in the first place — and that’s half the puzzle. This is unremarked on in the gaydar study or the op-ed, and appears to cause no angst among the researchers. They got their pictures from Facebook profiles of people who self-identified as gay/lesbian or straight (I don’t know if that was from the “interested in” Facebook option, or something else on their profiles).

Sexual orientation is multidimensional and determined by many different things — some combination of (presumably many) genes, hormonal exposures, lived experiences. And for some people at least, it changes over the course of their lives. That’s why it’s hard to measure.

Consider, for example, a scenario in which someone who felt gay at a young age married heterogamously anyway — not too uncommon. Would such a person self-identify as gay on Facebook? Probably not. But if someone in that same situation got divorced and then came out of the closet they probably would self-identify as gay then.

Consider another new study, in the Archives of Sexual Behavior, which used a large sample of people interviewed 10 years apart. They found changes in sexual orientation were not that rare: overall, 2% of people changed their response to the sexual orientation identity question. That’s not that many — but then only 2.5% reported homosexual or bisexual identities in the first place.

In short, self identification may be the best standard we have for sexual orientation identity (which isn’t the same as sexual behavior), but it’s not a good fit for studies trying to get at deep-down gay/straight-ness, like the gaydar study or the biological studies.

And we need to keep in mind that this is all complicated by social stigma around sexual orientation. So who identifies as what, and to whom, is never free from political or power issues.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.