We don’t prohibit all dangerous behavior, or even behavior that endangers others, including people’s own children.

Question: Is the limit of acceptable risks to which we may subject our own children determined by absolute risks or relative risks?

Case for consideration: Home birth.

Let’s say planning to have your birth at home doubles the risk of some serious complications. Does that mean no one should do it, or be allowed to do it? Other policy options: do nothing, discourage home birth, promote it, regulate it, or educate people about the risks and let them do what they want.

Here is the most recent result, from a large study published in the American Journal of Obstetrics & Gynecology and reported on the New York Times Well blog, which looks to me like it was done properly. Researchers analyzed about 2 million birth records of live, term (37-43 weeks), singleton, head-first births, including 12,000 planned home births.

The planned-home-birth mothers were generally relatively privileged: more likely to be White and non-Hispanic, college-educated, married, and not having their first child. However, they were also more likely to be older than 34 and to have waited until their second trimester to see a doctor.

On three measures of birth outcomes (Apgar scores below 4, Apgar scores below 7, and neonatal seizures), the home-birth infants were more likely to have bad results. Apgar is the standard for measuring an infant's wellbeing within 5 minutes of birth, assessing breathing, heart rate, muscle tone, reflex irritability, and circulation (blue skin). With up to 2 points on each indicator, the maximum score is 10; 7 or more is considered normal, and under 4 is serious trouble. Low scores are usually caused by some difficulty in the birth process, and babies with low scores usually require medical attention. The score is a good indicator of risk for infant mortality.

These are the unadjusted rates of middle- and low-Apgar scores and seizure rates:

[Chart: unadjusted rates of Apgar scores below 7, Apgar scores below 4, and neonatal seizures, planned home vs. hospital births]

These are big differences, considering that the home-birth mothers are generally healthier. In the subsequent analysis, the researchers controlled for parity, maternal age, race/ethnicity, education, gestational age at delivery, number of prenatal care visits, cigarette smoking during pregnancy, and medical/obstetric conditions. With those controls, the odds ratios were 1.9 for Apgar<4, 2.4 for Apgar<7, and 3.1 for seizures. Pretty big effects.
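For readers who want to see the mechanics, here is a minimal sketch of how an unadjusted odds ratio is computed from a 2x2 table. The counts are made up for illustration (they are not the study's data), and the adjusted ratios reported above come from models with the listed controls, which this sketch does not attempt.

```python
# Minimal sketch of an unadjusted odds ratio from a 2x2 table.
# The counts below are hypothetical, NOT the study's actual data.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds of the outcome among the exposed divided by the odds among the unexposed."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Hypothetical example: 24 low-Apgar infants among 12,000 planned home births,
# vs. 2,000 among 2,000,000 planned hospital births.
or_unadjusted = odds_ratio(24, 12_000 - 24, 2_000, 2_000_000 - 2_000)
print(f"Unadjusted odds ratio: {or_unadjusted:.2f}")  # about 2.0 with these made-up counts
```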

Two years ago I wrote about a British study that found much higher rates of birth complications among home births when the mother was delivering her first child. This is my chart of their findings:

[Chart: birth complication rates, planned home vs. hospital births, first-time mothers, from the British study]

Again, those were the unadjusted rates, but the disparities held with a variety of important controls.

These birth complication rates are low by world-historical standards. In New Delhi, India, in the 1980s, 10% of 5-minute-olds had Apgar scores of 3 or less. So that's many times worse than American home births. On the other hand, a number of big European countries (Germany, France, Italy) have Apgar<7 rates of 1% or less, which is much better.

A large proportional increase on a low risk for a high-consequence event (like nuclear meltdown) can be very serious. A large absolute risk of a common low-consequence event (like having a hangover) can be completely acceptable. Birth complications are somewhere in between. But where?
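To make the absolute-versus-relative distinction concrete, here is a toy calculation (the rates are hypothetical, not the study's): doubling a rare risk adds relatively few cases in absolute terms, while doubling a common one adds many.

```python
# Toy comparison of relative vs. absolute risk; the rates are hypothetical.

def extra_cases(baseline_rate, relative_risk, population):
    """Additional cases when the baseline rate is multiplied by relative_risk."""
    return (baseline_rate * relative_risk - baseline_rate) * population

# Rare, serious outcome: a 0.2% risk, doubled.
print(round(extra_cases(0.002, 2.0, 100_000)))  # 200 extra cases per 100,000

# Common, low-consequence outcome: a 20% risk, doubled.
print(round(extra_cases(0.20, 2.0, 100_000)))   # 20,000 extra cases per 100,000
```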

Seems like a good topic for discussion, and having some real numbers helps. Let me know what you decide.

Cross-posted at Family Inequality.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Last week Kay Hymowitz (who sometimes works out of a PO Box rented by Brad Wilcox) wrote the following in the LA Times:

As far back as the 1970s, family researchers began noticing that… [b]oys from broken homes were more likely than their peers to get suspended and arrested… And justice experts have long known that juvenile facilities and adult jails overflow with sons from broken families. Liberals often assume that these kinds of social problems result from our stingy support system for single mothers and their children. But the link between criminality and fatherlessness holds even in countries with lavish social welfare systems.

Ah, the link between criminality and fatherlessness again. So ingrained is the assumption that crime rates always go up that conservatives making this argument do not even see the need to account for the incredible, world-historical drop in violence that has accompanied the collapse of the nuclear family. I know Kay Hymowitz knows this, because we’ve argued about it before. But if her editors and readers don’t, why should she make a big deal out of it?

In this graph I show the scales down to zero so you can see the proportional change in each trend: father-not-present boys ages 10-14 and male juvenile violent-crime arrest rates.

[Chart: boys ages 10-14 living without fathers and male juvenile violent-crime arrest rates, both axes scaled to zero]

I’m not arguing about whether boys living without fathers are more likely to commit crimes. I’m just saying that this is very unlikely to be the major cause of male juvenile violent crime if the trends can move so drastically in opposite directions at the same time. These aren’t little fluctuations. Even if you leave out the late-80s-early-90s spike in crime, arrests fell about 40% from 1980 to 2010 while father-absent boys increased almost 50%.
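The arithmetic behind that comparison is just percent change between two endpoints. A quick sketch, using illustrative index values chosen to match the stated percentages rather than the actual series:

```python
# Percent change between two endpoints. The index values are illustrative,
# chosen to match the stated percentages, not the actual series.

def pct_change(start, end):
    return 100 * (end - start) / start

arrests_1980, arrests_2010 = 100, 60               # roughly a 40% decline
father_absent_1980, father_absent_2010 = 100, 148  # roughly a 50% increase

print(pct_change(arrests_1980, arrests_2010))              # -40.0
print(pct_change(father_absent_1980, father_absent_2010))  # 48.0
```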

If you are going to argue for a strong association — which Hymowitz does — and use words like “tide,” you should at least acknowledge that the problem you are trumpeting is getting better while the cause you are bemoaning is getting worse.

Cross-posted at Family Inequality.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

In 1990 I was still an American Culture major in college, but I was getting ready to jump ship for sociology. That’s when Madonna’s “Justify My Love” video was banned by MTV, which was a thing people used to use to watch videos. And network TV used to be a major source of exposure.

I was watching when Madonna went on Nightline for an interview. The correspondent intoned:

…nudity, suggestions of bisexuality, sadomasochism, multiple partners. Finally, MTV decided Madonna has gone too far.

They showed the video, preceded by a dire parental warning (it was 11:30 p.m., and there was no way to watch it at any other time). In the interview, Forrest Sawyer eventually realized he was being played:

Sawyer: This was a win-win for you. If they put the video on, you would get that kind of play. And if they didn’t you would still make some money. It was all, in a sense, a kind of publicity stunt. … But in the end you’re going to wind up making even more money than you would have.

Madonna: Yeah. So, lucky me.

The flap over Miley Cyrus completely baffles me. This is a business model (as artistic as any other commercial product), and it hasn’t changed much, just skinnier, with more nudity and (even) less feminism. I don’t understand why this is any more or less controversial than any other woman dancing naked. Everyone does realize that there is literally an infinite amount of free hardcore porn available to every child in America, right? There is no “banning” a video. (Wrecking Ball is pushing 250 million views on YouTube.)
[Images: Miley Cyrus and Madonna]
No one is censoring Miley Cyrus — is there some message I’m missing? When she talked to Matt Lauer he asked, “Are you surprised by the attention you’re getting right now?” And she said, “Not really. I mean, it’s kind of what I want.”

I think the conversation has slid backward. In Lisa Wade’s excellent comment, she draws on a 1988 article, “Bargaining With Patriarchy,” which concluded:

Women strategize within a set of concrete constraints, which I identify as patriarchal bargains. Different forms of patriarchy present women with distinct “rules of the game” and call for different strategies to maximize security and optimize life options with varying potential for active or passive resistance in the face of oppression.

I think it applies perfectly to Miley Cyrus, if you replace “security” and “life options” with “celebrity” and “future island-buying potential.” Lisa is 1,000 times more plugged in to kids these days than I am, and the strategies-within-constraints model is well placed. But that article is from 1988, and it applies just as well to Madonna. So where’s the progress here?

[Images: Miley Cyrus and Madonna]

Interviewed by Yahoo!, Gloria Steinem said, “I wish we didn’t have to be nude to be noticed … But given the game as it exists, women make decisions.” That is literally something she could have said in 1990.

The person people are arguing about has (so far) a lot less to say even than Madonna did. When Madonna was censored by MTV, Camille Paglia called her “the true feminist.”

She exposes the puritanism and suffocating ideology of American feminism, which is stuck in an adolescent whining mode. Madonna has taught young women to be fully female and sexual while still exercising total control over their lives. She shows girls how to be attractive, sensual, energetic, ambitious, aggressive and funny — all at the same time.

When Miley Cyrus caused a scandal on TV, Paglia could only muster, “the real scandal was how atrocious Cyrus’ performance was in artistic terms.”

Madonna was a bona fide challenge to feminists, for the reasons Paglia said, but also because of the religious subversiveness and homoerotic stuff. Madonna went on, staking her claim to the “choice” strand of feminism:

I may be dressing like the typical bimbo, whatever, but I’m in charge. You know. I’m in charge of my fantasies. I put myself in these situations with men, you know, and… people don’t think of me as a person who’s not in charge of my career or my life, okay. And isn’t that what feminism is all about, you know, equality for men and women? And aren’t I in charge of my life, doing the things I want to do? Making my own decisions?

And she embraced some other feminist themes. When Madonna was asked on Nightline, “Where do you draw the line?” she answered, “I draw the line with violence, and humiliation and degradation.”

I’m not saying there hasn’t been any progress since 1990. It’s more complicated than that. On matters of economics and politics, gender progress has pretty well stalled. The porn industry has made a lot of progress. Reported rape has become less common, along with other forms of violence.

But — and please correct me if I’m wrong — I don’t see the progress in this conversation about whether it’s feminist or anti-feminist for a woman to use sex or nudity to sell her pop music. As Lisa Wade says, “Because that’s what the system rewards. That’s not freedom, that’s a strategy.” So I would skip that debate and ask whether the multi-millionaire in question is adding anything critical to her product, or using her sex-plated platform for some good end. Madonna might have. So far Miley Cyrus isn’t.

Cross-posted at Family Inequality and Pacific Standard.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.
[Image: Columbia Pictures/Sony Pictures Animation]

The Smurfs, originating as they did in mid-century Europe, exhibit the quaint sexism in which boys or men are generic people – with their unique qualities and abilities – while girls and women are primarily identified by their femininity. The sequel doesn’t upend the premise of Smurfette.

In the original graphic novels, Smurfette (or La Schtroumpfette in French) was the creation of the evil Gargamel, who made her to sow chaos among the all-male Smurf society. His recipe for femininity included coquetry, crocodile tears, lies, gluttony, pride, envy, sentimentality, and cunning.

In the Smurfs 2, there are a lot of Smurfs. And they all have names based on their unique qualities. According to the cast list, the male ones are Papa, Grouchy, Clumsy, Vanity, Narrator, Brainy, Handy, Gutsy, Hefty, Panicky, Farmer, Greedy, Party Planner, Jokey, Smooth, Baker, Passive-Aggressive, Clueless, Social, and Crazy. And the female one is Smurfette–because being female is enough for her. There is no boy Smurf whose identifying quality is his gender, of course, because that would seem hopelessly limited and boring as a character.

Here are the Smurf characters McDonald’s is using for their Happy Meals:

[Image: McDonald’s Happy Meal Smurfs toys]

When you buy a Happy Meal at McDonald’s, the cashier asks if it’s for a boy or a girl. In my experience, which is admittedly limited to my daughters, girls get Smurfette. I guess boys get any of the others.

The Way It’s Never Been

Identifying male characters by their non-gender qualities and females by their femininity is just one part of the broader pattern of gender differentiation, or what you might call gendering.

There are two common misconceptions about gendering children. One is that it has always been this way – with boys and girls so different naturally that all products and parenting practices have always differentiated them. This is easily disproved in the history of clothing, which shows that American parents mostly dressed their boys and girls the same a century ago. In fact, boys and girls were often indistinguishable, as evident in this 1905 Ladies’ Home Journal contest in which readers were asked to guess the sex of the babies (no one got them all right):

[Image: babies from the 1905 Ladies’ Home Journal guess-the-sex contest]
Source: Jo Paoletti, Pink and Blue: Telling the Boys from the Girls in America

The other common misconception is that our culture is actually eliminating gender distinctions, as feminism tears down the natural differences that make gender work. In the anti-feminist dystopian mind, this amounts to feminizing boys and men. This perspective gained momentum during the three decades after 1960, when women entered previously male-dominated occupations in large numbers (a movement that has largely stalled).

However, despite some barrier-crossing, we do more to gender-differentiate now than we did during the heyday of the 1970s unisex fashion craze (the subject of Jo Paoletti’s forthcoming book, Sex and Unisex). On her Tumblr, Paoletti has a great collection of unisex advertising, such as this 1975 Garanimals clothing ad, which would be unthinkable for a major clothier today:

[Image: 1975 Garanimals clothing advertisement]

And these clothing catalog images from 1972 (left) and 1974 (right):

[Images: clothing catalog pages from 1972 (left) and 1974 (right)]

Today, the genders are not so easily interchangeable. Quick check: Google image search for “girls clothes” (left) vs. “boys clothes” (right):

[Image: Google image search results for “girls clothes” (left) vs. “boys clothes” (right)]

Today, a blockbuster children’s movie can invoke 50-year-old gender stereotypes with little fear of a powerful feminist backlash. In fact, even the words “sexism” and “sexist,” which rose to prominence in the 1970s and peaked in the 1990s, have once again become less common than, say, the word “bacon”:

[Chart: frequency of the words “sexism,” “sexist,” and “bacon” over time]

And the gender differentiation of childhood is perhaps stronger than it has ever been. Not all differences are bad, of course. But what Katha Pollitt called “the Smurfette principle” — in which “boys are the norm, girls the variation” — is not a difference between equals.

Cross-posted at The Atlantic and Family Inequality

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.
[Photo: Trayvon Martin (AP Images)]

In conversation, I keep accidentally referring to Zimmerman’s defense lawyers as “the prosecution.” Not surprising, because the defense of George Zimmerman was only a defense in the technical sense of the law. Substantively, it was a prosecution of Trayvon Martin. And in making the case that Martin was guilty in his own murder, Zimmerman’s lawyers had the burden of proof working in their favor: the state had to prove beyond a reasonable doubt that Martin wasn’t a violent criminal.

This raises the question, who’s afraid of young black men? Zimmerman’s lawyers took the not-too-risky approach of assuming that white women are (the jury was six women, described by the New York Times as five white and one Latina).

“This is the person who … attacked George Zimmerman,” defense attorney Mark O’Mara said in his closing argument, holding up two pictures of Trayvon Martin, one of which showed him shirtless and looking down at the camera with a deadpan expression. He held that shirtless one up right in front of the jury for almost three minutes. “Nice kid, actually,” he said, with feigned sincerity.

[Photo: defense attorney Mark O’Mara (Joe Burbank/AP Images)]

Going into the trial, according to one analysis, the female jurors were supposed to have more negative views about Zimmerman’s vigilante behavior, and be more sympathetic over the loss of the child Trayvon. As a former prosecutor put it:

With the jury being all women, the defense may have a difficult time having the jurors truly understand their defense, that George Zimmerman was truly in fear for his life. Women are gentler than men by nature and don’t have the instinct to confront trouble head-on.

But was the jury’s race, or their gender, the issue? O’Mara’s approach suggests he thought it was the intersection of the two: White women could be convinced that a young black man was dangerous.

Race and Gender

Racial biases are well documented. With regard to crime, for example, one recent controlled experiment using a video game simulation found that white college students were most likely to accidentally fire at an unarmed suspect who was a black male — and most likely to mistakenly hold fire against armed white females. More abstractly, people generally overestimate the risk of criminal victimization they face, but whites are more likely to do so when they live in areas with more black residents.

Differences in racial attitudes between white men and white women are limited. One analysis by prominent experts in racial attitudes concluded that “gender differences in racial attitudes are small, inconsistent, and limited mostly to attitudes on racial policy.” However, some researchers have found white men more prone than women to accept racist stereotypes about blacks, and the General Social Survey in 2002 found that white women were much more likely than men to describe their feelings toward African Americans positively. (In 2012, a minority of both white men and white women voted for Obama, although white men were more overwhelmingly in the Romney camp.)

What about juries? The evidence for racial bias over many studies is quite strong. For example, one 2012 study found that in two Florida counties, having an all-white jury pool – that is, the people from which the jury will be chosen – increased the chance that a black defendant would be convicted. Since the jury pool is randomly selected from eligible citizens, unaltered by lawyers’ selections or disqualifications, the study has a clean test of the race effect. But I can’t find any studies on the combined influence of race and gender.

The classical way of framing the question is whether white women’s group identity as whites is strong enough to overcome their gender-socialized overall “niceness” when it comes to attitudes toward minority groups. But Zimmerman’s lawyers appeared to be invoking a very specific American story: white women’s fear of black male aggression. Of course the “victim” in their story was Zimmerman, but as he lingered over the shirtless photo, O’Mara was tempting the women on the jury to put themselves in Zimmerman’s fearful shoes.

Group Threat

But do white women really feel threatened by black men? That’s an old, blood-stained debate. In the 20th century there were 455 American men (legally) executed for rape, and 89 percent of them were black — most were accused of raping white women. That was just the legal tip of Jim Crow’s lynching iceberg, partly driven by white men asserting ownership over white women in the name of protection. But the image of course lives on.

In the specific realm of U.S. racial psychology, one of the less optimistic, but most reliable, findings is that whites who live in places with larger black populations on average express more racism (here’s a recent confirmation). Most analysts attribute that to some sense of group threat – economic, political, or violent – experienced by the dominant majority.

Because people inflate things they are afraid of, you can get a ballpark idea of how threatened white people feel by asking them how big they think the black population is. And since they don’t realize their racial attitudes are being measured, they aren’t as likely to shade their answers to appear reasonable.

The 2000 General Social Survey asked about 1,000 white adults to estimate the size of the black population. Both men and women were way off, of course: 95 percent of white women and 85 percent of white men overestimated. But the skew was stronger for women than men: 69 percent of women and 49 percent of men guessed that blacks are more than 20 percent of the population (the correct answer at the time was 12 percent).

Here are those results, showing the cumulative percentage of white men and women who thought the black population was at or below each level:

[Chart: cumulative percentage of white men and women who estimated the black population at or below each level]
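A cumulative distribution like the one charted above is easy to build from raw survey answers. Here is a sketch using invented estimates in place of the GSS responses:

```python
# Cumulative share of respondents whose estimate falls at or below each threshold.
# The estimates below are invented stand-ins, not the GSS responses.

def cumulative_share(estimates, thresholds):
    n = len(estimates)
    return {t: 100 * sum(e <= t for e in estimates) / n for t in thresholds}

hypothetical_estimates = [10, 12, 15, 20, 25, 30, 30, 40, 50, 60]  # guessed % of U.S. pop.
for level, share in cumulative_share(hypothetical_estimates, [10, 20, 30, 40, 50]).items():
    print(f"at or below {level}%: {share:.0f}% of respondents")
```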

Maybe white women’s greater overestimation of the black population is not an indicator of perceived threat. In the same survey white women were no more likely than white men to describe blacks as “prone to violence” (then again, there’s social pressure to say “no”).  Anyway, whether women feel more threatened than men do isn’t the issue, since the jury was all women. The question is whether the perceived threat was salient enough that the defense could manipulate it.

I don’t know what was in the hearts and minds of the jurors in this case, of course. Being on a jury is not like filling out a survey or playing a video game. But however much we elevate the rational elements in the system, emotion also plays a role. Whether they were right or not, Zimmerman’s lawyers clearly thought there was a vein of fear of black men inside the jurors’ psyches, waiting to be mined.

Originally posted at The Atlantic and Family Inequality.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at Family Inequality.

The other day I was surprised that a group of reporters failed to call out what seemed to be an obvious exaggeration by Republican Congresspeople in a press conference. Did the reporters not realize that a 25% unemployment rate among college graduates in 2013 is implausible, were they not paying attention, or do they just assume they’re being fed lies all the time so they don’t bother?

Last semester I launched an aggressive campaign to teach the undergraduate students in my class the size of the US population. If you don’t know that – and some large portion of them didn’t – how can you interpret statements such as, “On average, 24 people per minute are victims of rape, physical violence, or stalking by an intimate partner in the United States”? In this case the source followed up with, “Over the course of a year, that equals more than 12 million women and men.” But is that a lot? Relative to the size of the population, it’s a lot more in the United States than it would be in China. (Unless you go with, “any rape is too many,” in which case why use a number at all?)
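The source’s follow-up figure is easy to check, and dividing by a population denominator shows why “is that a lot?” depends on the country. A quick sketch, using round population figures for illustration:

```python
# Check the "24 victims per minute" figure and compare it to population denominators.
# The population totals are round, approximate figures used only for illustration.

victims_per_minute = 24
per_year = victims_per_minute * 60 * 24 * 365
print(f"{per_year:,} per year")  # 12,614,400 -- "more than 12 million"

us_pop, china_pop = 316_000_000, 1_350_000_000
print(f"{100 * per_year / us_pop:.1f}% of the U.S. population")   # about 4.0%
print(f"{100 * per_year / china_pop:.1f}% of China's population") # about 0.9%
```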


Anyway, just the US population isn’t enough. I decided to start a list of current demographic facts you need to know just to get through the day without being grossly misled or misinformed – or, in the case of journalists or teachers or social scientists, not to allow your audience to be grossly misled or misinformed. Not trivia that makes a point or statistics that are shocking, but the non-sensational information you need to know to make sense of those things when other people use them. And it’s really a ballpark requirement; when I tested the undergraduates, I gave them credit if they were within 20% of the US population – that’s anywhere between 250 million and 380 million!

I only got as far as 22 facts, but they should probably be somewhere in any top-100. And the silent reporters the other day made me realize I can’t let the perfect be the enemy of the good here. I’m open to suggestions for others (or other lists if they’re out there).

They refer to the US unless otherwise noted:

Description | Number | Source
World Population | 7 billion | 1
US Population | 316 million | 1
Children under 18 as share of pop. | 24% | 2
Adults 65+ as share of pop. | 13% | 2
Unemployment rate | 7.6% | 3
Unemployment rate range, 1970-2013 | 4% – 11% | 4
Non-Hispanic Whites as share of pop. | 63% | 2
Blacks as share of pop. | 13% | 2
Hispanics as share of pop. | 17% | 2
Asians as share of pop. | 5% | 2
American Indians as share of pop. | 1% | 2
Immigrants as share of pop. | 13% | 2
Adults with BA or higher | 28% | 2
Median household income | $53,000 | 2
Most populous country, China | 1.3 billion | 5
2nd most populous country, India | 1.2 billion | 5
3rd most populous country, USA | 315 million | 5
4th most populous country, Indonesia | 250 million | 5
5th most populous country, Brazil | 200 million | 5
Male life expectancy at birth | 76 | 6
Female life expectancy at birth | 81 | 6
National life expectancy range | 49 – 84 | 7

Sources:
1. http://www.census.gov/main/www/popclock.html
2. http://quickfacts.census.gov/qfd/states/00000.html
3. http://www.bls.gov/
4. Google public data: http://bit.ly/UVmeS3
5. https://www.cia.gov/library/publications/the-world-factbook/rankorder/2119rank.html
6. http://www.cdc.gov/nchs/hus/contents2011.htm#021
7. https://www.cia.gov/library/publications/the-world-factbook/rankorder/2102rank.html

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at Family Inequality.

The other day when the Pew report on mothers who are breadwinners came out, I complained about calling wives “breadwinners” if they earn $1 more than their husbands:

A wife who earns $1 more than her husband for one year is not the “breadwinner” of the family. That’s not what made “traditional” men the breadwinners of their families — that image is of a long-term pattern in which the husband/father earns all or almost all of the money, which implies a more entrenched economic domination.

To elaborate a little, there are two issues here. The first is empirical: today’s female breadwinners are much less economically dominant than the classical male breadwinner — and even than the contemporary male breadwinner, as I will show. The second is conceptual: breadwinner is not a majority-share designation determined by a fixed percentage of income, but an ideologically specific construction of family provision.

Let’s go back to the Pew data setup: heterogamously married couples with children under age 18 in the year 2011 (from Census data provided by IPUMS). In 23% of those couples the wife’s personal income is greater than her husband’s — that’s the big news, since it’s an increase from 4% half a century ago. This, to the Pew authors and media everywhere, makes her the “primary breadwinner,” or, in shortened form (as in their title), “breadwinner moms.” (That’s completely reasonable with single mothers, by the way; I’m just working on the married-couple side of the issue — just a short chasm away.)

The 50%+1 standard conceals that these male “breadwinners” are winning a greater share of the bread than are their female counterparts. Specifically, the average father-earning-more-than-his-wife earns 81% of the couple’s income; the average mother-earning-more-than-her-husband earns 69% of the couple’s income. Here is the distribution in more detail:

[Chart: distribution of the breadwinner’s share of couple income, for mother-breadwinners and father-breadwinners]

This shows that by far the most common situation for a female “breadwinner” is to be earning between 50% and 60% of the couple’s income — the case for 38% of such women. For the father “breadwinners,” though, the most common situation — for 28% of them — is to be earning all of the income, a situation that is three times more common than the reverse.
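For the curious, here is a sketch of how a breadwinner’s-share distribution like that gets tabulated. The couples below are invented, not the Census/IPUMS sample:

```python
# Tabulate the breadwinner's share of couple income. The (wife, husband) income
# pairs are invented for illustration, not the Census/IPUMS sample.

couples = [(60_000, 40_000), (52_000, 48_000), (90_000, 0),
           (30_000, 70_000), (45_000, 44_000), (0, 80_000)]

def breadwinner_shares(couples, wife_is_breadwinner=True):
    """Share of couple income earned by the higher-earning spouse of the given type."""
    shares = []
    for wife, husband in couples:
        earner, other = (wife, husband) if wife_is_breadwinner else (husband, wife)
        if earner > other:  # keep only couples where this spouse out-earns the other
            shares.append(100 * earner / (earner + other))
    return shares

print([round(s) for s in breadwinner_shares(couples, wife_is_breadwinner=True)])   # [60, 52, 100, 51]
print([round(s) for s in breadwinner_shares(couples, wife_is_breadwinner=False)])  # [70, 100]
```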

Collapsing data into categories is essential for understanding the world. But putting these two groups into the same category and speaking as if they are equal is misleading.

This is especially problematic, I think, because of the historical connotation of the term breadwinner. The term dates back to 1821, says the Oxford English Dictionary. That’s from the heyday of America’s separate spheres ideology, which elevated to reverential status the woman-home/man-work ideal. Breadwinners in that Industrial Revolution era were not defined by earning 1% more than their wives. They earned all of the money, ideally (meaning, if their earnings were sufficient), but just as importantly, they were the only ones permanently working for pay outside the home. (JSTOR has references going back to the 1860s which confirm this usage.)

Modifying “breadwinner” with “primary” is better than not, but that subtlety has been completely lost in the media coverage. Consider these headlines from a Google news search just now:

[Image: news headlines calling mothers “breadwinners”]

Further down there are some references to “primary breadwinners,” but that’s rare.

Maybe we should call those 100%ers breadwinners, and call the ones closer to 50% breadsharers.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at The Atlantic and Family Inequality.

In 1996 the Hoover Institution published a symposium titled “Can Government Save the Family?” A who’s-who list of culture warriors — including Dan Quayle, James Dobson, John Engler, John Ashcroft, and David Blankenhorn — was asked, “What can government do, if anything, to make sure that the overwhelming majority of American children grow up with a mother and father?”

There wasn’t much disagreement on the panel.  Their suggestions were (1) end welfare payments for single mothers, (2) stop no-fault divorce, (3) remove tax penalties for marriage, and (4) fix “the culture.” From this list their only victory was ending welfare as we knew it, which increased the suffering of single mothers and their children but didn’t affect the trajectory of marriage and single motherhood.

So the collapse of marriage continues apace. Since 1980, for every state in every decade, the percentage of women who are married has fallen (except Utah in the 1990s):

[Chart: change in the percentage of women married, by state and decade, 1980-2010]

Whether a state is red (the last four presidential elections Republican), blue (the last four Democratic), or in between (light blue, purple, light red) makes no difference:

[Chart: the same state trends, colored by presidential voting pattern]

But the “marriage movement” lives on. In fact, their message has changed remarkably little. In that 1996 symposium, Dan Quayle wrote:

We also desperately need help from nongovernment institutions like the media and the entertainment community. They have a tremendous influence on our culture and they should join in when it comes to strengthening families.

Sixteen years later, in the 2012 “State of Our Unions” report, the National Marriage Project included a 10-point list of familiar demands, including this point #8:

Our nation’s leaders, including the president, must engage Hollywood in a conversation about popular culture ideas about marriage and family formation, including constructive critiques and positive ideas for changes in media depictions of marriage and fatherhood.

So little reflection on such a bad track record — it’s enough to make you think that increasing marriage isn’t the main goal of the movement.

Plan for the Future

So what is the future of marriage? Advocates like to talk about turning it around, bringing back a “marriage culture.” But is there a precedent for this, or a reason to expect it to happen? Not that I can see. In fact, the decline of marriage is nearly universal. A check of United Nations statistics on marriage trends shows that 87 percent of the world’s population lives in countries with marriage rates that have fallen since the 1980s.
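That 87 percent is a population-weighted share: each country counts by its number of residents, not one-country-one-vote. A sketch with invented countries and populations:

```python
# Population-weighted share of people living where the marriage rate has fallen.
# The countries and numbers are invented for illustration.

countries = [  # (population in millions, marriage rate fell since the 1980s?)
    (1_350, True), (1_250, True), (320, True),
    (250, False), (200, True), (140, False),
]

total = sum(pop for pop, _ in countries)
declining = sum(pop for pop, fell in countries if fell)
print(f"{100 * declining / total:.0f}% of this (invented) population lives where marriage rates fell")
```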

Here is the trend in the marriage rate since 1940, with some possible scenarios to 2040 (sources: 1940-1960 and 1970-2011):

[Chart: U.S. marriage rate, 1940-2011, with projection scenarios to 2040]

Notice the decline has actually accelerated since 1990. Something has to give. The marriage movement folks say they want a rebound. With even the most optimistic twist imaginable (and a Kanye wedding), could it get back to 2000 levels by 2040? That would make headlines, but the institution would still be less popular than it was during that dire 1996 symposium.

If we just keep going on the same path (the red line), marriage will hit zero at around 2042. Some trends are easy to predict by extrapolation (like next year’s decline in the name Mary), but major demographic trends usually don’t just smash into 0 or 100 percent, so I don’t expect that.
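The “hits zero around 2042” point is just a straight-line extrapolation. Here is the arithmetic as a sketch, using two illustrative (year, rate) points rather than the exact values behind the chart:

```python
# Straight-line extrapolation of a falling marriage rate to its zero-crossing year.
# The two (year, rate) points are illustrative, not the exact values behind the chart.

year1, rate1 = 1990, 55.0  # hypothetical: marriages per 1,000 unmarried women
year2, rate2 = 2011, 33.0

slope = (rate2 - rate1) / (year2 - year1)  # change in the rate per year (negative)
zero_year = year2 - rate2 / slope          # where the straight line crosses zero
print(f"falling {-slope:.2f} points per year; hits zero around {zero_year:.0f}")
```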

The more realistic future is some kind of taper. We know, for example, that the decline of marriage has slowed considerably for college graduates, so they’re helping keep it alive — but that’s still only 35 percent of women in their 30s, not enough to turn the whole ship around.

So Live With It

So rather than try to redirect the ship of marriage, we have to do what we already know we have to do: reduce the disadvantages accruing to those who aren’t married — or whose parents aren’t married. If we take the longer view we know this is the right approach: in the past two centuries we’ve largely replaced such family functions as food production, healthcare, education, and elder care with a combination of state and market interventions. As a result — even though the results are, to put it mildly, uneven — our collective well-being has improved rather than diminished as families have lost much of their hold on modern life.

If the new book by sociologist Kathryn Edin and Timothy Nelson is to be believed, there is good news for the floundering marriage movement in this approach: Policies to improve the security of poor people and their children also tend to improve the stability of their relationships.  In other words, supporting single people supports marriage.

To any clear-eyed observer it’s obvious that we can’t count on marriage anymore — we can’t build our social welfare system around the assumption that everyone does or should get married if they or their children want to be cared for. That’s what it means when pensions are based on a spouse’s earnings, when employers don’t provide sick leave or family leave, and when high-quality preschool is unaffordable for most people. So let marriage be truly voluntary, and maybe more people will even end up married. Not that there’s anything wrong with that.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.