Atty. General Holder visits Potomac Job Corps. Photo by the US Department of Labor via flickr.com

Ask Mitt Romney and Barack Obama about the lingering high unemployment rate and they’ll likely cite a “skills mismatch” between American workers and available jobs as at least one part of the problem. Despite this striking point of agreement across the political spectrum, Barbara Kiviat argues in The Atlantic that social science data tell a different, much murkier, story.

Consensus over whether U.S. workers have the skills to meet employer demand has see-sawed over time.

That public discourse in the 1980s landed on the idea of a vastly under-skilled labor force is curious, considering that less than a decade earlier, policymakers believed that over-qualification was the main threat as technology “deskilled” work. In the 1976 book The Overeducated American, economist Richard Freeman held that the then-falling wage difference between high-school and college graduates was the result of a college-graduate glut. Sociologists studying “credentialism” agreed, arguing that inflated hiring requirements had led U.S. workers to obtain more education than was necessary. A 1973 report by the Department of Health, Education and Welfare ruminated about how to keep employees happy when job complexity increasingly lagged workers’ abilities and expectations for challenging jobs.

This simple story of a “skills mismatch,” whether workers are cast as over- or under-qualified, is not consistently supported by the data; much depends on how you slice it. Looking at individual-level data, Kiviat notes:

The findings here are decidedly more nuanced. While certain pockets of workers, such as high-school drop-outs, clearly lack necessary skill, no nation-wide mismatch emerges. In fact, some work, such as an analysis by Duke University’s Stephen Vaisey, finds that over-qualification is much more common than under-qualification, particularly among younger workers and those with college degrees.

It seems that the heart of the matter comes down to which story you want to tell, about which workers and which skills, a much less neat and tidy task than painting a broad-stroked mismatch picture.

As sociologist Michael Handel points out in his book Worker Skills and Job Requirements, in the skills mismatch debate, it is often not clear who is missing what skill. The term is used to talk about technical manufacturing know-how, doctoral-grade engineering talent, high-school level knowledge of reading and math, interpersonal smoothness, facility with personal computers, college credentials, problem-solving ability, and more. Depending on the conversation, the problem lies with high-school graduates, high-school drop-outs, college graduates without the right majors, college graduates without the right experience, new entrants to the labor force, older workers, or younger workers. Since the problem is hazily defined, people with vastly different agendas are able to get in on the conversation–and the solution.

Statue of Joe Paterno

This past week, the N.C.A.A. announced its sanctions against Penn State University in response to the Jerry Sandusky scandal. Among the sanctions was the decision that all of Penn State’s football victories from 1998 to 2011 would be vacated. Reactions varied; sociologist Gary Alan Fine, in a New York Times op-ed, suggested that George Orwell would be amused.

In his magnificent dystopia, “1984,” Orwell understood well the dangers of “history clerks.” Those given authority to write history can change the past. Those sweat-and-mud victories of the Nittany Lions — more points on the scoreboard — no longer exist. The winners are now the losers.

While Fine agrees that Penn State deserved sanctions and that the N.C.A.A. had an obligation to respond forcefully, he asks whether rewriting history was the proper answer.

We learn bad things about people all the time, but should we change our history? Should we, like Orwell’s totalitarian Oceania, have a Ministry of Truth that has the authority to scrub the past? Should our newspapers have to change their back files? And how far should we go? Should we review Babe Ruth’s records? Or O. J. Simpson’s? Should a disgraced senator have her votes vacated? Perhaps we should claim that Joe McCarthy actually lost his elections. Or give victory to John Edwards’s opponent?

It is understandable that an organization wants its official history to reflect its hopes, but Fine argues that histories must reflect what actually happened at the time. Discomfort and shame at honoring flawed people are natural. Yet, “building a false history is the wrong way to recall the past. True and detailed histories always work better.”

It’s no surprise that the Great Recession has brought economic inequality front and center in the United States. The focus has mostly been on problems in the labor market, but Jason DeParle at the New York Times points out that other demographic changes have also had a sizable impact on growing inequality.

Estimates vary widely, but scholars have said that changes in marriage patterns — as opposed to changes in individual earnings — may account for as much as 40 percent of the growth in certain measures of inequality.

To illustrate how changes in family structure contribute to increasing inequality, DeParle turns to the research of several sociologists. One issue is that those who are well off are more likely to get married.

Long a nation of economic extremes, the United States is also becoming a society of family haves and family have-nots, with marriage and its rewards evermore confined to the fortunate classes.

“It is the privileged Americans who are marrying, and marrying helps them stay privileged,” said Andrew Cherlin, a sociologist at Johns Hopkins University.

A related trend is the educational gap between women who have children in or out of wedlock.

Less than 10 percent of the births to college-educated women occur outside marriage, while for women with high school degrees or less the figure is nearly 60 percent.

This difference contributes to significant inequalities in long-term outcomes for children.

While many children of single mothers flourish (two of the last three presidents had mothers who were single during part of their childhood), a large body of research shows that they are more likely than similar children with married parents to experience childhood poverty, act up in class, become teenage parents and drop out of school.

Sara McLanahan, a Princeton sociologist, warns that family structure increasingly consigns children to “diverging destinies.”

Married couples are having children later than they used to, divorcing less and investing heavily in parenting time. By contrast, a growing share of single mothers have never married, and many have children with more than one man.

“The people with more education tend to have stable family structures with committed, involved fathers,” Ms. McLanahan said. “The people with less education are more likely to have complex, unstable situations involving men who come and go.”

She said, “I think this process is creating greater gaps in these children’s life chances.”

As sociologists and others have shown, the income gap between those at the top and bottom has changed dramatically over time.

Four decades ago, households with children at the 90th percentile of incomes received five times as much as those at the 10th percentile, according to Bruce Western and Tracey Shollenberger of the Harvard sociology department. Now they have 10 times as much. The gaps have widened even more higher up the income scale.

But again, DeParle notes that marriage, rather than just individual incomes, makes a big difference:

Economic woes speed marital decline, as women see fewer “marriageable men.” The opposite also holds true: marital decline compounds economic woes, since it leaves the needy to struggle alone.

“The people who need to stick together for economic reasons don’t,” said Christopher Jencks, a Harvard sociologist. “And the people who least need to stick together do.”

For more on the Great Recession and inequality, check out our podcast with David Grusky.

A provocative, sociologically minded piece on the mainstream media ignoring mental illness among African Americans appeared late last week in the Milwaukee Community Journal: http://www.communityjournal.net/mainstream-media-tend-to-ignore-blacks-mental-health-problems/. It includes key commentary from Dr. David J. Leonard, a sociologist in the Department of Critical Culture, Gender, and Race Studies at Washington State University and author of After Artest: The NBA and the Assault on Blackness (featured on our Reading List a bit back).

Even when the topic is more about black celebrity than race, mental illness, particularly in famous athletes, is viewed as “evidence of a criminal character.” “Media go immediately to focusing on the purported pathologies of the players themselves and don’t want to see what the broader context is,” Leonard says. “The history of race and mental health is a history of racism and the white medical establishment demonizing and criminalizing the black community through writing about their ‘abnormal personalities’ and being ‘crazy.’

“That history plays out in mainstream media coverage, but it also affects public discussions about mental health because it has so often been used to justify exclusion, segregation and inequality” in mental health treatment for African-Americans.

Tower Bridge

We’ve had two sightings regarding the participation of women in the upcoming Summer Olympics. To bring you up to speed, Saudi Arabia had announced that it would be sending female athletes for the first time, but this spring, many doubted whether Saudi Arabian women would actually be allowed to participate.

This past Thursday, Saudi Arabia agreed to send two women to compete, making this the first time in Olympic history that every country will be represented by female athletes.  In 1996, 26 teams had no women.  However, that figure dropped to three in Beijing four years ago, where women represented 42% of the athletes.  That percentage is expected to increase in London.


As people approach midlife, the days of youthful exploration, when life felt like one big blind date, are fading. Schedules compress, priorities change and people often become pickier in what they want in their friends… [later] people realize how much they have neglected to restock their pool of friends only when they encounter a big life event, like a move, say, or a divorce.

More fish, sure, but are there always more friends in the sea? In its Sunday edition, The New York Times considers the expansive but shallow pools of friends, associates, and colleagues (the slackening social networks) that so many notice with a start in middle age.

As external conditions change, it becomes tougher to meet the three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

The article goes on to cite, beyond graduation, increasing couple-dom, divergent careers (even best friends can grow apart when one has mortgage troubles while the other can’t decide whether to spend one month or two in St. Bart’s), parenthood, and the pickiness engendered by self-discovery as reasons adults find themselves with fewer friends–and fewer avenues to find new ones–once they’re out of college and early career stages.

The good news, though, is that social scientists like psychologist Laura L. Carstensen have found that, as friend numbers dwindle (though perhaps not on Facebook), those remaining friendships grow closer.  In fact, Marla Paul, author of The Friendship Crisis, tells the Times, “The bar is higher than when we were younger and were willing to meet almost anyone for a margarita,” but that’s not necessarily a bad thing. People may find that they have just enough time to invest in real, lasting, fruitful friendships with this culled group.

Or, they might follow advice given by others in the Times: go on a search to fill specific “friend niches” or even launch back into the incredibly social, unattached behavior of their early 20s. Exhaustive, to be sure, but quite possibly exhausting.


Today, people are opting out of parenthood at unprecedented rates.  In 1976, 10% of U.S. women ages 40-44 had never had a child; by 2006, the percentage had doubled.  While some people desire children but are unable to have them, increasing numbers of adults are deciding to form families without children.

In a recent opinion piece, sociologist Amy Blackstone explained that families without children can still play an important role in children’s lives.

 According to the people I’ve interviewed, child-free adults serve as mentors, role models, back-up parents, playmates, fun aunties, big brothers, partners-in-crime, advisers and buddies to the children in their lives. And, as research conducted for Big Brothers Big Sisters shows, having caring adults who are not their parents involved in their lives improves kids’ confidence, grades and social skills.

Though stereotypes often portray adults without children as self-involved or baby-haters, Blackstone notes that most child-free adults enjoy children. And, at a time when parents are busier than ever, these child-free individuals often have more money and resources available to take on additional responsibilities. Apparently, it still takes a village to raise a child.

 Families have changed a lot, but children will always need love and guidance. Whether those raising children are single-parents, heterosexual couples, or gay or lesbian parents, other adults make a positive difference in a child’s life.

Family dinners are often thought of as a sort of magical hour each night, when parents and children connect, laughing and talking about their day over steaming dishes of mashed potatoes and green beans. So, where does that leave (perhaps the majority of) families for which this elusive ideal doesn’t quite become daily reality? Past research has suggested that regular family dinners do have many positive outcomes in kids’ lives, but new work by Ann Meier and Kelly Musick suggests the relationship may not simply be a straightforward case of cause and effect. Writing in The New York Times, Meier and Musick wonder:

[D]oes eating together really make for better-adjusted kids? Or is it just that families that can pull off a regular dinner also tend to have other things (perhaps more money, or more time) that themselves improve child well-being?

Our research, published last month in the peer-reviewed Journal of Marriage and Family, shows that the benefits of family dinners aren’t as strong or as lasting as previous studies suggest.

They did find that kids who had regular family dinners exhibited fewer depressive symptoms, less drug and alcohol use, and less delinquency. However, the relationship weakened significantly after accounting for factors like the quality of their family relationships, other activities they do with their parents, how their parents monitor them, and their family’s income. Additionally, Meier and Musick didn’t find lasting effects of family dinners when they analyzed data collected years later, when the kids were young adults.
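To see why controlling for family circumstances can shrink an apparent benefit, here is a toy simulation (ours, not Meier and Musick’s data or model; every number is an invented assumption) in which family income drives both dinner frequency and child well-being:

```python
# Toy simulation: an apparent "dinner effect" that vanishes under controls.
# All names and numbers here are illustrative assumptions, not the authors' model.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(size=n)                    # confounder: family income
dinners = 0.6 * income + rng.normal(size=n)    # higher income -> more family dinners
wellbeing = 0.8 * income + rng.normal(size=n)  # higher income -> better outcomes
# Note: dinners have NO direct effect on well-being in this toy world.

def focal_slope(y, predictors):
    """OLS coefficient on the first predictor, with an intercept included."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print(focal_slope(wellbeing, [dinners]))          # naive estimate: ~0.35
print(focal_slope(wellbeing, [dinners, income]))  # controlled estimate: ~0.00
```

The naive regression attributes to dinners a benefit that, in this toy world, belongs entirely to income; adding the control recovers the truth, which is the kind of attenuation the authors describe.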

What, then, should you think about dinnertime? Though we are more cautious than other researchers about the unique benefits of family dinners, we don’t dismiss the possibility that they can matter for child well-being. Given that eating is universal and routine, family meals offer a natural opportunity for parental influence: there are few other contexts in family life that provide a regular window of focused time together…

But our findings suggest that the effects of family dinners on children depend on the extent to which parents use the time to engage with their children and learn about their day-to-day lives. So if you aren’t able to make the family meal happen on a regular basis, don’t beat yourself up: just find another way to connect with your kids.


Photo from Persephone's Birth by eyeliam via flickr.com

The so-called “mommy wars” have apparently made it all the way to the delivery room, according to Jennifer Block, writing for Slate:

For a long time home birth was too fringe to get caught in this parenting no-fly zone, but lately it’s been fitting quite nicely into the mommy war media narrative: There are the stories about women giving birth at home because it’s fashionable, the idea that women are happy sacrificing their newborns for some “hedonistic” spa-like experience, or that moms-to-be (and their partners) are just dumb and gullible when it comes to risk management…

For many parents, home birth is a transcendent experience. …Yet as the number of such births grows, so does the number of tragedies—and those stories tend to be left out of soft-focus lifestyle features.

Debates about home birth have erupted in the media and the blogosphere in recent months, largely focused on the relative risks of home birth versus hospital birth. But at the heart of the issue is whom, and what evidence, to trust.

I could list several recent large prospective studies… all comparing where and with whom healthy women gave birth, which found similar rates of baby loss—around 2 per 1,000—no matter the place or attendant. We could pick through those studies’ respective strengths and weaknesses, talk about why we’ll never have a “gold-standard” randomized controlled trial (because women will never participate in a study that makes birth choices for them), and I could quote a real epidemiologist on why determining the precise risk of home birth in the United States is nearly impossible. Actually, I will: “It’s all but impossible, certainly in the United States,” says Eugene Declercq, an epidemiologist and professor of public health at Boston University, and coauthor of the CDC study that found the number of U.S. home births has risen slightly, to still less than 1 percent of all births. One of the challenges is that “the outcomes tend to be pretty good,” Declercq says…But to really nail it down here in the U.S., he says, we’d need to study tens of thousands of home births, “to be able to find a difference in those rare outcomes.” With a mere 30,000 planned home births happening each year nationwide, “We don’t have enough cases.”
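Declercq’s “tens of thousands” intuition can be checked with a standard two-proportion sample-size formula. Below is a back-of-the-envelope sketch (the 2-per-1,000 baseline comes from the piece; the comparison rates, significance level, and power are our assumptions):

```python
# Back-of-the-envelope sample size for comparing two rare outcome rates.
# The 2-per-1,000 baseline is from the article; everything else is assumed.
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Approximate births needed PER GROUP for a two-sided two-proportion z-test."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_b = z.inv_cdf(power)          # quantile for the desired power
    p_bar = (p1 + p2) / 2
    top = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return top / (p1 - p2) ** 2

print(round(two_proportion_n(0.002, 0.003)))  # ~39,000 per group for 2 vs. 3 per 1,000
print(round(two_proportion_n(0.002, 0.004)))  # ~12,000 per group even for a doubling
```

Even detecting a full doubling of a 2-per-1,000 outcome takes on the order of 12,000 births per group, and a smaller rise takes far more, which is why a year’s worth of roughly 30,000 planned home births cannot settle the question.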

And, as sociologist Barbara Katz-Rothman notes, decisions about where to give birth are likely made more on the basis of perceived, rather than real, risk.

“What we’re talking about is felt risk rather than actual risk,” explains Barbara Katz-Rothman, professor of sociology at the City University of New York and author of much scholarship on birth, motherhood, and risk. Take our fear of flying. “Most people understand intellectually that on your standard vacation trip or business trip, the ride to and from the airport is more likely to result in your injury or death than the plane ride itself, but you never see anybody applaud when they reach the airport safely in the car.” The flight feels more risky. Similarly, we can look at data showing our risk of infection skyrockets the second we step in a hospital, “but there’s something about the sight of all those gloves and masks that makes you feel safe.”

Photo by Brian D. Hawkins via flickr.com

For the first time in about a century, new Census data reveal that population growth in big U.S. cities is exceeding that of the suburbs. According to the Associated Press (via Huffington Post):

Primary cities in large metropolitan areas with populations of more than 1 million grew by 1.1 percent last year, compared with 0.9 percent in surrounding suburbs. While the definitions of city and suburb have changed over the decades, it’s the first time that growth of large core cities outpaced that of suburbs since the early 1900s.

In all, city growth in 2011 surpassed or equaled that of suburbs in roughly 33 of the nation’s 51 large metro areas, compared to just five in the last decade.

Young adults forgoing homeownership and embracing the conveniences of urban life appear to be a driving force behind this trend.

Burdened with college debt or toiling in temporary, lower-wage positions, they are spurning homeownership in the suburbs for shorter-term, no-strings-attached apartment living, public transit and proximity to potential jobs in larger cities…They make up roughly 1 in 6 Americans, and some sociologists are calling them “generation rent.”

A related report from NPR further cites tougher mortgage rules since the housing bubble burst as an important factor.

Even with big drops in housing prices and interest rates, getting a mortgage has become a lot harder since the heady days of “no income, no assets” loans that fueled the housing boom of the early 2000s. Most lenders now require a rock-steady source of income and a substantial down payment before they will even look at potential borrowers. And many millennials won’t be able to reach that steep threshold.

The combination of stricter mortgage requirements, college loan debt, and a tough economy leaves sociologist Katherine Newman skeptical of young adults’ prospects for home ownership for the foreseeable future. From Huffington Post:

“Young adults simply can’t amass the down payments needed and don’t have the earnings,” she said. “They will be renting for a very long time.”