Tag Archives: methods/use of data

Majority of “Stay-at-Home Dads” Aren’t There to Care for Family

At Pew Social Trends, Gretchen Livingston has a new report on fathers staying at home with their kids. The report defines a stay-at-home father as any father ages 18-69 living with his children who did not work for pay in the previous year (regardless of marital status or the employment status of others in the household). That produces this trend:


At least for the 1990s and early-2000s recessions, the figure very nicely shows spikes upward in stay-at-home dads during recessions, followed by declines that don't wipe out the whole gain. We don't yet know whether the current spike will recede as men's employment rates recover.

In Pew's numbers, 21% of stay-at-home fathers said the reason they were out of the labor force was caring for their home and family; 23% couldn't find work; 35% couldn't work because of health problems; and 22% were in school or retired.

It is reasonable to call a father staying at home with his kids a stay at home father, regardless of his reason. We never needed stay at home mothers to pass some motive-based criteria before we defined them as staying at home. And yet there is a tendency (not evidenced in this report) to read into this a bigger change in gender dynamics than there is. The Census Bureau has for years calculated a much more rigid definition that only applied to married parents of kids under 15: those out of the labor force all year, whose spouse was in the labor force all year, and who specified their reason as taking care of home and family. You can think of this as the hardcore stay at home parents, the ones who do it long term, and have a carework motivation for doing it. When you do it that way, stay at home mothers outnumber stay at home fathers 100-to-1.
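The Census definition can be read as a simple record filter. Here is a minimal pandas sketch of that logic; the column names are invented for illustration and are not the actual CPS variable names.

```python
# Hypothetical sketch of the Census-style "hardcore" stay-at-home
# parent filter. Column names are invented; real CPS variables differ.
import pandas as pd

def hardcore_stay_at_home(df):
    """Keep married parents of kids under 15 who were out of the labor
    force all year to care for home and family, with a spouse in the
    labor force all year."""
    return df[
        df["married"]
        & (df["youngest_child_age"] < 15)
        & (df["weeks_in_labor_force"] == 0)
        & (df["spouse_weeks_in_labor_force"] == 52)
        & (df["reason_out_of_lf"] == "caring for home and family")
    ]

# Toy data: only the second parent meets every criterion.
parents = pd.DataFrame({
    "married": [True, True, False],
    "youngest_child_age": [3, 7, 2],
    "weeks_in_labor_force": [10, 0, 0],
    "spouse_weeks_in_labor_force": [52, 52, 0],
    "reason_out_of_lf": ["could not find work",
                         "caring for home and family",
                         "caring for home and family"],
})
print(len(hardcore_stay_at_home(parents)))  # → 1
```

The point of the sketch is how many conditions have to hold at once, which is why this "hardcore" count is so much smaller than Pew's.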

I updated a figure from an earlier post for Bryce Covert at Think Progress, who wrote a nice piece with a lot of links on the gender division of labor. This shows the percentage of all married-couple families with kids under 15 who have one of the hardcore stay at home parents:

SHP-1. PARENTS AND CHILDREN IN STAY-AT-HOME PARENT FAMILY GROUPS

That is a real upward trend for stay-at-home fathers, but the pattern remains very rare.

See the Census spreadsheet for yourself here.  Cross-posted at Pacific Standard.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Sunday Fun: Are Newly Minted PhDs Being Launched Into Space?

Hmmmm…

See more at Spurious Correlations. Thanks to John McCormack for the tip!

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

How Well Do Teen Test Scores Predict Adult Income?

The short answer is, pretty well. But that’s not really the point.

In a previous post I complained about various ways of collapsing data before plotting it. Although this is useful at times, and inevitable to varying degrees, the main danger is the risk of inflating how strong an effect seems. So that’s the point about teen test scores and adult income.

If someone told you that the test scores people get in their late teens were highly correlated with their incomes later in life, you probably wouldn’t be surprised. If I said the correlation was .35, on a scale of 0 to 1, that would seem like a strong relationship. And it is. That’s what I got using the National Longitudinal Survey of Youth. I compared Armed Forces Qualification Test (AFQT) scores, taken in 1999, when the respondents were ages 15-19, with their household income in 2011, when they were 27-31.

Here is the linear fit between these two measures, with the 95% confidence interval shaded, showing just how confident we can be in this incredibly strong relationship:


That’s definitely enough for a screaming headline, “How your kids’ test scores tell you whether they will be rich or poor.” And it is a very strong relationship – that correlation of .35 means AFQT explains 12% of the variation in household income.
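The arithmetic behind that claim is just squaring the correlation. Here is a toy simulation (not the NLSY data) that generates two variables with a built-in correlation of .35 and checks how much variance that explains:

```python
# Toy simulation, not the NLSY data: build two variables with a target
# correlation of .35 and confirm that r-squared is about 12%.
import numpy as np

rng = np.random.default_rng(0)
n = 5248          # same n as the post, for flavor
r = 0.35          # target correlation

score = rng.standard_normal(n)
# Mix in independent noise so that corr(score, income) is about r.
income = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)

corr = np.corrcoef(score, income)[0, 1]
print(f"correlation: {corr:.2f}")            # close to .35
print(f"variance explained: {corr**2:.0%}")  # about 12%
```

Squaring is why a correlation that sounds substantial still leaves the great majority of the variation unexplained.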

But take heart, ye parents in the age of uncertainty: 12% of the variation leaves a lot left over. This variable can’t account for how creative your children are, how sociable, how attractive, how driven, how entitled, how connected, or how White they may be. To get a sense of all the other things that matter, here is the same data, with the same regression line, but now with all 5,248 individual points plotted as well (which means we have to rescale the y-axis):


Each dot is a person’s life — or two aspects of it, anyway — with the virtually infinite sources of variability that make up the wonder of social existence. All of a sudden that strong relationship doesn’t feel like something you can bank on with any given individual. Yes, there are very few people from the bottom of the test-score distribution who are now in the richest households (those clipped by the survey’s topcode and pegged at 3 on my scale), and hardly anyone from the top of the test-score distribution who is now completely broke.

But I would guess that for most kids a better predictor of future income would be spending an hour interviewing their parents and high school teachers, or spending a day getting to know them as a teenager. But that’s just a guess (and that’s an inefficient way to capture large-scale patterns).

I’m not here to argue about how much various measures matter for future income, or whether there is such a thing as general intelligence, or how heritable it is (my opinion is that a test such as this, at this age, measures what people have learned much more than a disposition toward learning inherent at birth). I just want to give a visual example of how even a very strong relationship in social science usually represents a very messy reality.

Cross-posted at Family Inequality and Pacific Standard.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Saturday Stat: Is Fox News Trying to Fool Us?

Post redacted. It turns out the graph was faked. Thanks to all of our savvy twitter friends for the heads up.

Last month I posted a misleading graph released by Reuters that made quite the impression, so I thought I would share another. This one is from Fox News and it reveals the number of people who have signed up for Obamacare.

A glance at the graph makes it seem as if people are disenrolling: the line goes down as it moves to the right. But, of course, the fact is that people are increasingly enrolling, Fox just decided to invert the vertical axis.  It goes from 8 million at the bottom to 4 million at the top.


Over at Montclair SocioBlog, Jay Livingston seems convinced that the Reuters graph was a mistake, but the Fox graph was purposefully deceptive. Both have inverted axes. The designer's rationale for the former — which illustrated a rise in gun deaths in response to Stand Your Ground laws — was to make the graph look like blood being shed. If Fox has released a rationale for this inverted axis, I haven't seen it.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

How to Lie with Statistics: Stand Your Ground and Gun Deaths

At Junk Charts, Kaiser Fung drew my attention to a graph released by Reuters.  It is so deeply misleading that I am loath to expose your eyeballs to it.  So, I offer you this:

The original figure is on the left.  It counts the number of gun deaths in Florida.  A line rises, bounces a little, reaches its second-highest peak at a point labeled “2005, Florida enacted its ‘Stand Your Ground’ law,” and falls precipitously.

What do you see?

Most people see a huge fall-off in the number of gun deaths after Stand Your Ground was passed.  But that’s not what the graph shows.  A quick look at the vertical axis reveals that the gun deaths are counted from top (0) to bottom (800).  The highest peaks are the fewest gun deaths and the lowest ones are the most.  A rise in the line, in other words, reveals a reduction in gun deaths.  The graph on the right — flipped both horizontally and vertically — is more intuitive to most: a rising line reflects a rise in the number of gun deaths and a dropping line, a drop.
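The flip takes one line of plotting code. Here is a matplotlib sketch with invented numbers (not Florida's actual counts): the same series drawn twice, once conventionally and once with the y-axis inverted.

```python
# Sketch of the inversion trick: identical made-up data, two axes.
# The numbers are illustrative, not Florida's actual death counts.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

years = [2003, 2004, 2005, 2006, 2007]
deaths = [500, 480, 520, 700, 720]  # jump after the 2005 law

fig, (ax_std, ax_inv) = plt.subplots(1, 2, figsize=(8, 3), sharex=True)
for ax, title in ((ax_std, "Conventional axis"), (ax_inv, "Inverted axis")):
    ax.plot(years, deaths)
    ax.set_title(title)
    ax.set_ylabel("Gun deaths")
ax_inv.invert_yaxis()  # 0 now at the top, so the rise draws as a fall
fig.tight_layout()
fig.savefig("axis_flip.png")
```

Same data, opposite first impression: the inverted panel's line slopes down even though deaths rise.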

The proper conclusion, then, is that gun deaths skyrocketed after Stand Your Ground was enacted.

This example is a great reminder that we bring our own assumptions to our reading of any illustration of data.  The original graph may have broken convention, making the intuitive read of the image incorrect, but the data is, presumably, sound.  It’s our responsibility, then, to always do our due diligence in absorbing information.  The alternative is to be duped.

Cross-posted at Pacific Standard.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

Umpires and Expectation Bias

Here’s Matt Holliday. It’s strike three and it was three bad calls.


Holliday’s body language speaks clearly, and his reaction is understandable. The pitch was wide, even wider than the first two pitches, both of which the umpire miscalled as strikes.   Here’s the data:


The PITCHf/x technology that makes this graphic possible, whatever its value or threat to umpires, has been a boon for sabermetricians and social scientists.  The big data it provides can tell us not just the number of bad calls but the factors that make a bad call more or less likely.

In the New York Times, Brayden King and Jerry Kim report on their study of roughly 780,000 pitches in the 2008-09 season. Umpires erred on about 1 in every 7 pitches – 47,000 pitches over the plate that were called balls, and nearly 69,000 like those three to Matt Holliday.

Here are some of the other findings that King and Kim  report in today’s article.

  • Umpires gave a slight edge to home-team pitchers, calling 13.3% of their pitches outside the zone as strikes.  Visitors got 12.6%.
  • The count mattered: at 0-0, 14.7% of pitches outside the zone were called strikes; at 3-0, 18.6%; at 0-2, only 7.3%.
  • All-star pitchers were more likely than others to get favorable calls…
  • …especially if the pitcher had a reputation as a location pitcher.
  • The importance of the situation (tie game, bottom of the ninth) made no difference in bad calls.
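The first two findings are simple conditional proportions: among pitches outside the zone, the share called strikes within each group. A toy sketch of that calculation, with invented data and invented column names (not the actual PITCHf/x schema):

```python
# Toy version of the King & Kim calculation: among pitches outside
# the strike zone, what share were wrongly called strikes, by count?
# Data and column names are invented, not real PITCHf/x records.
import pandas as pd

pitches = pd.DataFrame({
    "count":        ["0-0", "0-0", "3-0", "3-0", "0-2", "0-2"],
    "outside_zone": [True,  True,  True,  True,  True,  True],
    "called":       ["strike", "ball", "strike", "strike", "ball", "ball"],
})

outside = pitches[pitches["outside_zone"]]
error_rate = (outside["called"] == "strike").groupby(outside["count"]).mean()
print(error_rate)  # 0-0: 0.5, 0-2: 0.0, 3-0: 1.0
```

Scaled up to 780,000 real pitches, the same groupby logic yields the percentages in the bullets above.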

It seems that expectation accounts for a lot of these findings. It’s not that what you see is what you get. It’s that what you expect is what you see. We expect good All-star pitchers to throw more accurately.  We also expect that a pitcher who is way ahead in the count will throw a waste pitch and that on the 3-0, he’ll put it over the plate.  My guess is that umpires share these expectations. The difference is that the umps can turn their expectations into self-fulfilling prophecies.

Cross-posted at Business Insider.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Why Survey Questions Matter: Blasphemy Edition

“How could we get evidence for this?” I often ask students. And the answer, almost always, is “Do a survey.” The word survey has magical power; anything designated by that name wears a cloak of infallibility.

“Survey just means asking a bunch of people a bunch of questions,” I’ll say. “Whether it has any value depends on how good the bunch of people is and how good the questions are.”  My hope is that a few examples of bad sampling and bad questions will demystify it.

For example, Variety:


Here’s the lede:

Despite its Biblical inspiration, Paramount’s upcoming “Noah” may face some rough seas with religious audiences, according to a new survey by Faith Driven Consumers.

The data to confirm that idea:

The religious organization found in a survey that 98% of its supporters were not “satisfied” with Hollywood’s take on religious stories such as “Noah,” which focuses on Biblical figure Noah.

The sample:

Faith Driven Consumers surveyed its supporters over several days and based the results on a collected 5,000+ responses.

And (I’m saving the best till last) here’s the crucial survey question:

As a Faith Driven Consumer, are you satisfied with a Biblically themed movie — designed to appeal to you — which replaces the Bible’s core message with one created by Hollywood?

As if the part about “replacing the Bible’s core message” weren’t enough, the item reminds the respondent of her or his identity as a Faith Driven Consumer. It does make you wonder about that 2% who either were fine with the Hollywood* message or didn’t know.
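The sampling problem alone is enough to sink the headline. Here is a toy simulation (every rate in it is invented) of why a supporter-only poll says almost nothing about the wider audience: when the same outlook drives both supporting the outlet and disliking the film, the surveyed group is nothing like the population.

```python
# Toy illustration of selection bias. All rates are invented.
import random

random.seed(1)

population = []
for _ in range(100_000):
    supporter = random.random() < 0.05   # say 5% are the outlet's supporters
    # Satisfaction depends on the same outlook that makes one a supporter.
    p_satisfied = 0.02 if supporter else 0.63
    population.append((supporter, random.random() < p_satisfied))

overall = sum(sat for _, sat in population) / len(population)
surveyed = [sat for sup, sat in population if sup]
survey_rate = sum(surveyed) / len(surveyed)

print(f"whole audience satisfied: {overall:.0%}")     # around 60%
print(f"surveyed supporters satisfied: {survey_rate:.0%}")  # around 2%
```

A near-unanimous supporter poll is perfectly compatible with a mostly satisfied audience; the sample decides the answer before the question is even asked.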

You can’t really fault Faith Driven Consumer too much for this shoddy “research.” They’re not in business to find the sociological facts. What’s appalling is that Variety accepts it at face value and without comment.

Cross-posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Does Sleeping with a Guy on the First Date Make Him Less Likely to Call Back?

Let’s imagine that a woman — we’ll call her “you,” like they do in relationship advice land — is trying to calculate the odds that a man will call back after sex. Everyone tells you that if you sleep with a guy on the first date he is less likely to call back. The theory is that giving sex away at such a low “price” lowers the man’s opinion of you, because everyone thinks sluts are disgusting.* Also, shame on you.

So, you ask, does the chance he will call back improve if you wait for more dates before having sex with him? You ask around and find that this is actually true: when you or your friends waited until the seventh date, two-thirds of the guys called back, but when you slept with him on the first date, only one in five called back. From the data, it sure looks like sleeping with a guy on the first date reduces the odds he’ll call back.


So, does this mean that women make men disrespect them by having sex right away? If that’s true, then the historical trend toward sex earlier in relationships could be really bad for women, and maybe feminism really is ruining society.

Like all theories, this one assumes a lot. It assumes you (women) decide when couples will have sex, because it assumes men always want to, and it assumes men’s opinion of you is based on your sexual behavior. With these assumptions in place, the data appear to confirm the theory.

But what if those assumptions aren’t true? What if couples just have more dates when they enjoy each other’s company, and men actually just call back when they like you? If this is the case, then what really determines whether the guy calls back is how well-matched the couple is and how the relationship is going, which also determines how many dates they have.

What was missing in the study design was relationship survival odds. Here is a closer look at the same data (not real data), with couple survival added:


(Graph corrected from an earlier version.)


By this interpretation, the decision about when to have sex is arbitrary and doesn’t affect anything. All that matters is how much the couple like and are attracted to each other, which determines how many dates they have, and whether the guy calls back. Every couple has a first date, but only a few make it to the seventh date. It appears that the first-date-sex couples usually don’t last because people don’t know each other very well on first dates and they have a high rate of failure regardless of sex. The seventh-date-sex couples, on the other hand, usually like each other more and they’re very likely to have more dates. And: there are many more first-date couples than seventh-date couples.

So the original study design was wrong. It should have compared call-back rates after first dates, not after first sex. But when you assume sex runs everything, you don’t design the study that way. And by “design the study” I mean “decide how to judge people.”
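The benign story is easy to simulate. In this sketch (all parameters invented), sex timing is random and causally irrelevant; latent compatibility alone drives both surviving to a later date and the callback. Selection on survival reproduces the direction of the observed gap, though not the exact numbers:

```python
# Simulation of the benign story: sex timing is arbitrary and has no
# causal effect; compatibility drives everything. Parameters invented.
import random

random.seed(42)

def simulate(n=100_000):
    # sex_date -> [callbacks, observed couples]
    results = {1: [0, 0], 7: [0, 0]}
    for _ in range(n):
        compat = random.random()          # latent compatibility, 0 to 1
        sex_date = random.choice([1, 7])  # arbitrary, independent of compat
        # The couple must survive to the chosen date to be observed:
        # each additional date happens with probability = compat.
        date = 1
        while date < sex_date and random.random() < compat:
            date += 1
        if date < sex_date:
            continue  # broke up before the chosen date; never in the data
        # "Callback" depends only on compatibility, never on sex timing.
        results[sex_date][1] += 1
        results[sex_date][0] += random.random() < compat
    return {d: calls / total for d, (calls, total) in results.items()}

rates = simulate()
print(f"callback after 1st-date sex: {rates[1]:.0%}")
print(f"callback after 7th-date sex: {rates[7]:.0%}")
```

Couples who make it to a seventh date are a compatibility-selected group, so their callback rate is far higher even though the coin deciding sex timing never touches the outcome. That is the confound the sex-first-equals-disrespect interpretation ignores.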

I have no idea why men call women back after dates. It is possible that when you have sex affects the curves in the figure, of course. (And I know even talking about relationships this way isn’t helping.) But even if sex doesn’t affect the curves, I would expect higher callback rates after more dates.

Anyway, if you want to go on blaming everything bad on women’s sexual behavior, you have a lot of company. I just thought I’d mention the possibility of a more benign explanation for the observed pattern that men are less likely to call back after sex if the sex takes place on the first date.

* This is not my theory.

Cross-posted at Family Inequality and Pacific Standard.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.