New interest in the virgjinesha inspires us to re-post our coverage from 2012.
Rigid gender roles often inspire creative solutions. In Afghanistan, for example, families with all daughters often pick one to live as a boy until puberty. The child can then run errands, get a job, and chaperone “his” sisters in public (all things girls aren’t allowed to do). The transition is sudden and doesn’t involve relocation, so the entire community knows the child is a girl; everyone simply pretends that nothing strange is going on. In fact, it isn’t strange. It happens quite routinely.
A similar phenomenon emerged in Albania in the 1400s. Inter-group warfare had left a dearth of men in many communities. Since rights and responsibilities were strongly sex-typed, some families needed a “man” to accomplish certain things like buy land and pass down wealth.
In response, some girls became “virgjinesha,” or sworn virgins. A sworn virgin was a socially-recognized man for the rest of “his” life (so long as the oath was kept). Many girls would take the oath after their father died.
There are only about forty sworn virgins left; as women were granted more and more rights, fewer and fewer girls felt the need to adopt a male identity for themselves or their families.
Some of the remaining virgjinesha were featured in a New York Times slideshow. Two of the images, by photographer Johan Spanner, are reproduced here.
After becoming a man, Qamile Stema [below] said she could leave the house and chop wood with other men. She also carried a gun. At wedding parties, she sat with men. When she talked to women, she recalled, they recoiled in shyness.
Qamile Stema said she would die a virgin. Had she married, she joked, it would have been to a traditional Albanian woman. “I guess you could say I was partly a woman and partly a man, but of course I never did everything a man does,” she said. “I liked my life as a man. I have no regrets.”
The other day I was surprised that a group of reporters failed to call out what seemed to be an obvious exaggeration by Republican Congresspeople in a press conference. Did the reporters not realize that a 25% unemployment rate among college graduates in 2013 is implausible, were they not paying attention, or do they just assume they’re being fed lies all the time so they don’t bother?
Last semester I launched an aggressive campaign to teach the undergraduate students in my class the size of the US population. If you don’t know that – and some large portion of them didn’t – how can you interpret statements such as, “On average, 24 people per minute are victims of rape, physical violence, or stalking by an intimate partner in the United States.” In this case the source followed up with, “Over the course of a year, that equals more than 12 million women and men.” But, is that a lot? It’s a lot more in the United States than it would be in China. (Unless you go with, “any rape is too many,” in which case why use a number at all?)
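The per-minute figure and the annual total are just two forms of the same number, and the conversion is worth checking. A quick back-of-the-envelope sketch (in Python, purely as an arithmetic check):

```python
# Sanity check: convert "24 victims per minute" to an annual total.
per_minute = 24
minutes_per_year = 60 * 24 * 365  # 525,600 minutes in a (non-leap) year
victims_per_year = per_minute * minutes_per_year
print(f"{victims_per_year:,}")  # 12,614,400 -- consistent with "more than 12 million"
```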
Anyway, just the US population isn’t enough. I decided to start a list of current demographic facts you need to know just to get through the day without being grossly misled or misinformed – or, in the case of journalists or teachers or social scientists, not to allow your audience to be grossly misled or misinformed. Not trivia that makes a point or statistics that are shocking, but the non-sensational information you need to know to make sense of those things when other people use them. And it’s really a ballpark requirement; when I tested the undergraduates, I gave them credit if they were within 20% of the US population – that’s anywhere between 250 million and 380 million!
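The ±20% grading band is easy to verify. A minimal sketch, assuming a 2013 US population of roughly 315 million (that exact figure is my assumption, not stated in the post):

```python
# Hypothetical grading check: was a student's guess within 20% of the
# US population? The 315 million baseline is an assumed 2013 figure.
population = 315_000_000
low, high = 0.8 * population, 1.2 * population
print(f"{int(low):,} to {int(high):,}")  # 252,000,000 to 378,000,000

def within_ballpark(guess, truth=population, tolerance=0.2):
    """Credit any guess within the given relative tolerance of the truth."""
    return abs(guess - truth) / truth <= tolerance
```

So the quoted "anywhere between 250 million and 380 million" is just this band, rounded to the nearest 10 million.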
I only got as far as 22 facts, but they should probably be somewhere in any top-100. And the silent reporters the other day made me realize I can’t let the perfect be the enemy of the good here. I’m open to suggestions for others (or other lists if they’re out there).
The ’60s is often held up as a time of dramatic upheaval in American life. It brought us civil rights victories, the sexual revolution, the women’s movement, the gay liberation movement, and anti-war activism. It was, in short, antiestablishmentarian.
What were the concrete impacts of these changes? One shows up in the birth rate, as illustrated in a post by Made in America‘s Claude S. Fischer. Far from introducing a new normal, the ’60s reversed a relatively recent rise in both the ideal number of children and the actual fertility rate.
While data not shown suggest that the ideal number of children in the ’30s was under three, the ideal had risen to 3.6 by 1962. This dropped quickly across the rest of the decade.
Likewise, the actual number of children born to the average woman in the 1930s was about two, but this started shooting up in the late ’30s and ’40s. Then, just as quickly as it had risen, it plummeted again:
These data remind us of how unusual the ’50s really were: an especially pro-natal, family-centered time. As historian Stephanie Coontz puts it:
At the end of the 1940s, all the trends characterizing the rest of the twentieth century suddenly reversed themselves. For the first time in more than one hundred years, the age for marriage and motherhood fell, fertility increased, divorce rates declined, and women’s degree of educational parity with men dropped sharply. In a period of less than ten years, the proportion of never-married persons declined by as much as it had during the entire previous half century.
So, while in some ways the 1960s dramatically changed American culture, in other ways it simply put us back on track.
CollegeHumor posted a set of fake Puritan-themed Valentine’s Day cards. They’re a humorous way of reminding us that our intensive focus on romantic love as a driving force for sex and marriage is, in fact, quite new.
When the Puritans landed on the rocky east coast of America in the 1600s, they brought with them the belief that sex should be restricted to intercourse in marriage, hence the sentiment on the left. All non-marital and non-reproductive sexual activities were forbidden, including pre- and extra-marital sex, homosexual sex, masturbation, and oral or anal sex (even if married). Violations of the rules were punished by fines, whipping, public shaming (yes, with “scarlet letters”), ostracism, or even death.
Alongside religion, there were practical reasons why the Puritans were so darn puritanical. Colonizing the U.S. was a dangerous job; lots of people were dying from exposure, starvation, illness, and war. Babies replenished the labor supply, motivating the Puritans to channel the sex drive towards the one sexual activity that made babies: intercourse. Accordingly, having intercourse with your spouse wasn’t only allowed, it was essential; women could divorce men who had proven impotent.
The Puritans also married primarily to form practical partnerships for bearing children and mutual survival, hence the sentiment in the card on the right.
The idea that love should be the basis for marriage didn’t take hold until the Victorian era, when industrialization was changing the value of children. Useful on the farm, children suddenly became a burden in expensive and overcrowded lodgings. This gave couples a new reason to limit the number of children they had and, because industrial production had made condoms increasingly cheap and effective, they could. Marital fertility rates dropped precipitously between 1800 and 1900: from more than six children per woman to about three and a half in the U.S., England, and Wales.
In this context, a Puritan sexual ethic that restricted sex to efforts to make babies just didn’t make sense. People needed a new logic to guide sexual activity: the answer was love. Over the course of the 1800s, Victorians slowly abandoned the Puritan idea that sex was only for reproduction, embracing instead the now familiar idea that sex could be an expression of love and a source of pleasure, an idea that still resonates strongly today.
That’s at least part of the story anyway.
Bremer, Francis J., and Tom Webster. 2006. Puritans and Puritanism in Europe and America: A Comprehensive Encyclopedia. Santa Barbara: ABC-CLIO.
D’Emilio, John, and Estelle Freedman. 1997. Intimate Matters: A History of Sexuality in America. Chicago: University of Chicago Press.
Freedman, Estelle. 1982. “Sexuality in Nineteenth Century America: Behavior, Ideology, and Politics.” Reviews in American History 10(4): 196-215.
The Institute of Medicine and the National Research Council released some damaging numbers this month: Americans rank startlingly low in life expectancy compared to 16 other similarly developed countries. This is especially true for younger Americans; among people 55 and under, we rank dead last. Among those 50-80 years old, our life expectancy is second or third to last.
Sabrina Tavernise at the New York Times reports that the “major contributors” to low life expectancy among younger Americans are high rates of death from guns, car accidents, and drug overdoses. We also have the highest rate of diabetes and the second-highest death rate from lung and heart disease.
Americans had “the lowest probability over all of surviving to the age of 50.” The numbers for American men were slightly worse than those for women. Overall, life expectancy for men was 17 out of 17; women came in 16th. Education and poverty made a difference too, as did the more generous social services provided by the other countries in the study.
The year during which the U.S. will become a “majority minority” is well discussed. It looks like it’s going to happen sometime around 2050 or earlier. This statistic, however, elides an interesting subplot: the year various age groups will be majority minority.
Over at The Society Pages Editors’ Desk, sociologist Doug Hartmann offered the following table. It shows that children under the age of 18 will be majority minority 32 years earlier, by 2018. Young people ages 18-29 will join them by 2027. By 2035, people aged 35-64 will be majority minority. People 65 and older are quick to follow.
This data reminds us that demographic change is gradual. The year 2018 is just five years away. If young people continue to vote in numbers similar to those in the last two elections, their changing demographics could push forward a change that looks all but inevitable in the long run.
In the meantime, we need to be vigilant about how younger people are portrayed. Today poverty is racialized so as to demonize social programs designed to help the less fortunate. Can we imagine a future in which public education and other youth-oriented programming is similarly framed, as white people helping supposedly undeserving people of color? It’s a framing we should guard against in the coming years.
The problem of income inequality often gets forgotten in conversations about biological clocks.
The dilemma that couples face as they consider having children at older ages is worth dwelling on, and I wouldn’t take that away from Judith Shulevitz’s essay in the New Republic, “How Older Parenthood Will Upend American Society,” which has sparked commentary from Katie Roiphe, Hanna Rosin, Ross Douthat, and Parade, among many others.
The story is an old one — about the health risks of older parenting and the implications of falling fertility rates for an aging population — even though some of the facts are new. But two points need more attention. First, the overall consequences of the trend toward older parenting are on balance positive, both for women’s equality and for children’s health. And second, social-class inequality is a pressing — and growing — problem in children’s health, and one that is too easily lost in the biological-clock debate.
First, we need to distinguish between the average age of birth parents on the one hand versus the number born at advanced parental ages on the other. As Shulevitz notes, the average age of a first-time mother in the U.S. is now 25. Health-wise, assuming she births the rest of her (small) brood before about age 35, that’s perfect.
Consider two measures of child well-being according to their mothers’ age at birth. First, infant mortality:
Health prospects for children improve as women (and their partners) increase their education and incomes, and improve their health behaviors, into their 30s. Beyond that, the health risks start accumulating, weighing against the socioeconomic factors, and the danger increases.
Second, here is the rate of cognitive disability among children according to the age of their mothers at birth, showing a very similar pattern:
Again, the lowest risks are to those born when their parents are in their early 30s, a pattern that holds when I control for education, income, race/ethnicity, gender, and child’s age.
When mothers older than age 40 give birth, which accounted for 3 percent of births in 2011, the risks clearly are increased, and Shulevitz’s story is highly relevant. But, at least in terms of mortality and cognitive disability, an average parental age in the late 20s and early 30s is not only not a problem, it’s ideal.
But the second figure above hints at another problem — inequality in the health of parents and children. On that purple chart, a college graduate in her early 40s has the same risk as a non-graduate in her late 20s. And the social-class gap increases with age. Why is the rate of cognitive disabilities so much higher for the children of older mothers who did not finish college? It’s not because of their biological clocks or genetic mutations, but because of the health of the women giving birth.
For healthy, wealthy older women, the issues of aging eggs and genetic mutations from fathers’ run-down sperm factories are more pressing than they are for the majority of parents, who have not graduated college.
If you look at the distribution of women having babies by age and education, it’s clear that the older-parent phenomenon is disproportionately about more-educated women. (I calculated these from the American Community Survey, because age-by-education is not available in the CDC numbers, so they are a little different.)
Most of the less-educated mothers are giving birth in their 20s, and a bigger share of the high-age births are to women who’ve graduated college — most of them married and financially better off. But women without college degrees still make up more than half of those having babies after age 35, and the risks their children face have more to do with high blood pressure, obesity, diabetes, and other health conditions than with genetic or epigenetic mutations. Preterm births, low birth-weight, and birth complications are major causes of developmental disabilities, and they occur most often among mothers with their own health problems.
Most distressing, the effects of educational (and income) inequality on children’s health have been increasing. Here are the relative odds of infant mortality by maternal education, from 1986 to 2001, from a study in Pediatrics. (This compares the odds to college graduates within each year, so anything over 1.0 means the group has a higher risk than college graduates.)
This inequality is absent from Shulevitz’s essay and most of the commentary about it. She writes, of the social pressure mothers like her feel as they age, “Once again, technology has given us the chance to lead our lives in the proper sequence: education, then work, then financial stability, then children” — with no consideration of the 66 percent of people who have reached their early 30s with less than a four-year college degree. For the vast majority of that group, the sequence Shulevitz describes is not relevant.
In fact, if Shulevitz had considered economic inequality, she might not have been quite as worried about advancing parental age. When she worries that a 35-year-old mother has a life expectancy of just 46 more years — years to be a mother to her child — the table she consulted applies to the whole population. She should breathe a little easier: among 40-year-old white women, college graduates are expected to live an average of five extra years compared with those who have only a high school education.
When it comes to parents’ age versus social class, the challenges are not either/or. We should be concerned about both. But addressing the health problems of parents — especially mothers — with less than a college degree and below-average incomes is the more pressing issue — both for potential lives saved or improved and for social equality.
Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.
In 2011 the U.S. birth rate dropped to the lowest ever recorded, according to preliminary data released by the National Center for Health Statistics and reported by Pew Social Trends:
The decline was led by foreign-born women, whose birthrate dropped 14% between 2007 and 2010, compared to a 6% drop for U.S.-born women.
Considering the last two decades, birthrates for all racial/ethnic groups, and for both U.S.- and foreign-born women, have been dropping, but the percent change is much larger among the foreign-born and among non-white groups. The drop in the birthrate of foreign-born women is double that of U.S.-born women, and the drop in the birthrate of white women is often a fraction of that of women of color.
It’s easy to forget that effective, reversible birth control was invented only about 50 years ago. Birth control remained illegal even for married couples in some states until 1965; legalization for single people followed a few years later. In the meantime, the second wave of feminism opened well-paying, highly regarded jobs to women, giving them something rewarding to do other than, or in addition to, raising children. The massive drop in the birthrate during the ’60s likely reflects these changes.
Many European countries are facing below-replacement fertility and scrambling to figure out what to do about it (the health of most economies in the developed world is predicated on population growth); the U.S. is likely not far behind.