Cross-posted at Family Inequality.

The other day when the Pew report on mothers who are breadwinners came out, I complained about calling wives “breadwinners” if they earn $1 more than their husbands:

A wife who earns $1 more than her husband for one year is not the “breadwinner” of the family. That’s not what made “traditional” men the breadwinners of their families — that image is of a long-term pattern in which the husband/father earns all or almost all of the money, which implies a more entrenched economic domination.

To elaborate a little, there are two issues here. One is empirical: today's female breadwinners are much less economically dominant than the classical male breadwinner — and even than the contemporary male breadwinner, as I will show. And second, conceptually, breadwinner is not a majority-share concept determined by a fixed percentage of income, but an ideologically specific construction of family provision.

Let’s go back to the Pew data setup: heterogamously married couples with children under age 18 in the year 2011 (from Census data provided by IPUMS). In 23% of those couples the wife’s personal income is greater than her husband’s — that’s the big news, since it’s an increase from 4% half a century ago. This, to the Pew authors and media everywhere, makes her the “primary breadwinner,” or, in shortened form (as in their title), “breadwinner moms.” (That’s completely reasonable with single mothers, by the way; I’m just working on the married-couple side of the issue — just a short chasm away.)

The 50%+1 standard conceals that these male “breadwinners” are winning a greater share of the bread than are their female counterparts. Specifically, the average father-earning-more-than-his-wife earns 81% of the couple’s income; the average mother-earning-more-than-her-husband earns 69% of the couple’s income. Here is the distribution in more detail:

[Chart: the distribution of breadwinners' share of couple income]

This shows that by far the most common situation for a female “breadwinner” is to be earning between 50% and 60% of the couple's income — the case for 38% of such women. For the father “breadwinners,” though, the most common situation — for 28% of them — is to be earning all of the income, a situation that is three times more common than the reverse.
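For readers who want the mechanics, here is a minimal sketch of that share calculation, using toy data and made-up column names (the actual IPUMS variables differ):

import pandas as pd

# Each row is a married couple with children under 18 (toy data).
couples = pd.DataFrame({
    "wife_income":    [60_000, 30_000, 80_000, 0,      52_000],
    "husband_income": [40_000, 70_000, 0,      90_000, 48_000],
})

total = couples["wife_income"] + couples["husband_income"]
wife_share = couples["wife_income"] / total

# "Breadwinner moms" under the Pew definition: wife earns more than husband.
breadwinner_moms = couples[wife_share > 0.5]

# Distribution of the breadwinner's share of couple income, in 10-point bins.
bins = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
share_dist = pd.cut(wife_share[wife_share > 0.5], bins=bins).value_counts(sort=False)
print(share_dist / len(breadwinner_moms))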

Collapsing data into categories is essential for understanding the world. But putting these two groups into the same category and speaking as if they are equal is misleading.

This is especially problematic, I think, because of the historical connotation of the term breadwinner. The term dates back to 1821, says the Oxford English Dictionary. That's from the heyday of America's separate spheres ideology, which elevated to reverential status the woman-home/man-work ideal. Breadwinners in that Industrial Revolution era were not defined by earning 1% more than their wives. They earned all of the money, ideally (meaning, if their earnings were sufficient), but, just as importantly, they were the only ones permanently working for pay outside the home. (JSTOR has references going back to the 1860s which confirm this usage.)

Modifying “breadwinner” with “primary” is better than not, but that subtlety has been completely lost in the media coverage. Consider these headlines from a Google news search just now:

Further down there are some references to “primary breadwinners,” but that’s rare.

Maybe we should call those 100%ers breadwinners, and call the ones closer to 50% breadsharers.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at The Wild Magazine.

Poverty in the United States is stereotypically associated with racial minorities in urban centers. However, a closer look at social geography reveals a more complex situation: a majority of poor people are white and live in the suburbs. This makes sense when you consider that whites are the largest racial group in the U.S., making up 75% of the population, and that there are three times as many suburbanites as urbanites.

A majority of Americans are losing wealth, and we know it's going straight to the top. This is not a conspiracy theory, but the economic arrangement of the last 40 years. The New Deal, which created the middle class and the American Dream, was systematically dismantled by elite interests. The revolving door, the shuffling of elites in top positions of power between the public and private sectors, made this possible. The New Deal was abandoned for neoliberal policy. As a result, the comfortable middle class lifestyle was replaced by unemployment and working class struggle.

Suburban poverty normally reflects the spread of metropolitan poverty, but in recent years, suburban poverty has been growing at a faster rate. From 2010 to 2011, poverty in America's 100 largest metro areas increased by 5.9% overall. Suburban poverty grew at a rate of 6.8%, while urban poverty grew only 4.7%. In general, the poverty rates in urban areas are still higher (21%) than those in the suburbs (11%). Most notable is the rate of change in the suburbs, which can be attributed to increasing inequality, the housing market crash, gentrification, efforts to make low-income people more mobile, and public housing vouchers.

For the past decade, suburbanites have commuted between suburbs rather than into cities for work. More affluent, nearby suburbs provide low-wage service jobs in food and retail. Poverty rates in suburbia are rising due to a crumbling middle class, but the poor are still mainly concentrated in inner-ring suburbs close to cities, and on the fringe — former rural areas consumed by suburban sprawl.

Poverty’s expansion to the suburbs is a symptom of an increasingly unequal society. The geographic isolation of the suburban poor in the inner and outer rings of suburbia troubles the validity of the claim that poverty moved to the suburbs. More accurately, people are getting poorer and more people live in the suburbs — or areas now designated as such. It’s plausible that economic inequality and leapfrog developments have changed the sociogeographic landscape. Low-income earners are displaced to the outskirts of the city (the inner ring of the suburbs) due to gentrification, and the rural poor are now more easily counted among the suburban poor due to suburban sprawl. Whatever the case, suburban poverty presents unique challenges to policymakers because federal antipoverty resources are tailored for densely populated urban areas. The stereotypical images of inner-city poverty and suburban affluence are the ultimate fiction.

Kara McGhee is a PhD student in sociology at the University of Missouri specializing in culture, identity, and inequalities. She is a regular contributor to The WILD Magazine.

While the stereotype of the college professor might still be an elbow-patched intellectual cozied up in an office, it might be more accurate to place him in his car.  A new report from the American Association of University Professors finds that more than 40% of college instructors are part-time, often driving from campus to campus to cobble together enough classes to enable them to pay rent.  These types of employees far outnumber tenured and tenure-track faculty, who make up less than a quarter.

[Chart: instructional staff by employment status, from the AAUP report]

These data suggest that the term “precariat” applies well to a significant proportion of college and university professors. Coined by economist Guy Standing, the term is meant to draw attention to the economic fragility of many lower-wage workers in today's labor market. It's a blend of “precarious” and “proletariat,” the word used to refer to the working class under capitalism.

Part-time faculty count as part of the precariat because their jobs are contingent (renewed semester to semester), low paid, and come with few or no benefits. Let me put it this way. I just finished my first year as a tenured professor after six years on the tenure track. I teach five classes. An adjunct at a public research university would have to teach more than twenty-three classes to earn my salary (average pay is $3,200/class); someone teaching at community colleges would have to teach more than thirty-three (at $2,250/class). Of course, my salary also reflects research and institutional service, but my hourly wage is obviously far out of proportion to that of part-time faculty. Plus I get a wide range of benefits; adjuncts usually get nothing.
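The arithmetic is easy to reproduce. The per-class pay rates below come from the paragraph above; the $75,000 salary is a hypothetical stand-in of mine, since the post doesn't state the actual figure:

# Back-of-the-envelope version of the comparison above.
salary = 75_000  # hypothetical tenured salary; not from the post

for setting, pay_per_class in [("public research university", 3_200),
                               ("community college", 2_250)]:
    classes_needed = salary / pay_per_class
    print(f"{setting}: {classes_needed:.1f} classes to match a ${salary:,} salary")

# Prints roughly 23.4 and 33.3 classes -- consistent with the
# "more than twenty-three" and "more than thirty-three" in the text.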

When government funding of higher education shrinks, colleges and universities respond by cutting corners where they can.  Hiring adjuncts is one way to do that.  It’s important to remember, then, that funding cuts hurt not only students; they also hurt jobs.

See also How Many PhDs are Professors?

Via Jordan Weissman at The Atlantic. Cross-posted at Pacific Standard.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

While the flight attendant might be a quintessentially feminized occupation today, the first “stewardess” was, in fact, a “steward.” Pan American had an all-male steward workforce — and a ban on hiring women — for 16 years. It was forced to integrate during the male labor shortage of World War II, when female flight attendants were considered as revolutionary as “Rosie” riveters and welders. By 1958, the ban on hiring women had been reversed; now there was a ban on hiring men. This is just some of the fascinating history in Phil Tiemeyer‘s new book, Plane Queer, a history of the male flight attendant.

By the 1950s women dominated the aisles in the sky. Airlines accepted this. Women (1) were cheaper to employ, (2) domesticated the cabin, making commercial travel seem suitable for women and children, and (3) sexualized the experience for the businessmen who still made up the bulk of travelers.

By the time Celio Diaz Jr. invoked the 1964 Civil Rights Act and sued Pan Am on the basis of gender discrimination, white male flight attendants were seen as downright queer. Servile behavior — the cooking, serving, nurturing, and aiding behavior characteristic of the job at the time — was both gendered and racialized. When black men or white women performed domestic duties, it was seen as natural. (The gender dimension might seem obvious but, from slavery to the early 1900s, black men were also concentrated in domestic occupations: coachmen, waiters, footmen, butlers, valets, etc.)

So, when white men served others — but not black men or white women — it challenged the supposedly natural order on which both hierarchies were founded.  This is why male flight attendants caused such a stir. The airlines wouldn’t hire black men or women, so they hired white men and women. The men, as a result, were suspected of being not-quite-heterosexual from the get-go and have suffered the ups and downs of homophobia ever since.

The double definition of servile behavior as simultaneously racialized and gendered absolutely leapt out at me when I saw a commercial for Virgin Atlantic, sent in by Grace P. It captures both the race and gender dimensions of a segregated workforce. The two women and the single black man play service workers, while the two white men are a pilot and an engineer. Each is framed as being literally born to do these jobs — hence the insistent and troubling naturalization of these hierarchical roles.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

As our society becomes increasingly technological, I love stories that remind us of the value of simpler ways to solve problems, like a faux bus stop to catch escapee nursing home residents or dogs that are trained to sniff out cancer (both stories here).

This weekend we were treated to another such story, this time by Google. The company has announced a plan to bring internet to the whole world… with balloons.  The very first launch of a gas balloon was in 1783.  Two hundred and thirty years later, the company aims to deliver what is arguably the defining feature of our age — the internet — with helium-filled balloons.  That technology will then bring almost countless other technologies, such as medical advances and agricultural information, to people who are largely excluded from them now.  A fantastical plan.

Here’s how it’ll work:

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Earlier on SocImages, Lisa Wade drew attention to the tourism industry’s commodification of Polynesian women and their dancing. She mentioned, briefly, how the hula was made more tourist-friendly (what most tourists see when they attend one of the many hotel-based luaus throughout the islands is not traditional hula). In this post, I want to offer more details on the history and the differences between the tourist and the traditional hula.

First, Wade states that, while female dancers take center stage for tourists, the traditional hula was “mostly” a men’s dance. It has not been determined for certain whether women were ever proscribed from performing the hula during the time of the Ali’i (chiefs), but it seems unlikely, given that the deity associated with the hula is Pele, a goddess. Furthermore, there is evidence that women were performing the dance at the time of Captain James Cook’s arrival in Hawai’i.

Second, while the traditional dances were not necessarily sexualized, they were very sensual.  The movement of hips and legs that are seen as sexual by some visitors, and showcased as such by the tourism industry, certainly existed in early practices.

In fact, the supposedly lascivious and blasphemous nature of the hula prompted missionaries to censure the public practice of hula, and in 1830 Queen Ka’ahumanu enacted a law prohibiting the public performance of the hula. This law was highly ineffective, however, and when King Kalakaua ascended the throne he actively encouraged public hula performances and other expressions of Native Hawaiian culture, earning him the moniker “Merrie Monarch.”

Eventually, a modernized dance emerged that did not incorporate much religiosity and employed modern music rather than chants. This is closer to what you would find at a hotel luau, but differs drastically in costuming and lacks the uncomfortable cloud of objectification associated with hotel-style hula (that is, the focus is on the dance rather than the dancers).  Below are some examples of the evolution.

Hula (men’s dance, traditional):

Hula (contemporary):

These examples of hula, and other Polynesian dances, are vastly different from what one finds in a hotel’s “Polynesian Revue” luau.

Hula (hotel):

In conclusion, it is true that the hula dances, and other dances of Polynesia, have been usurped by the tourism industry and commodified.  The culturally authentic forms, however, still thrive. Native dances are impressive enough without the ridiculous costuming and disrespectful bending of the islands’ histories seen at hotel luaus; unfortunately, it is difficult to find any culturally sensitive displays of Polynesian culture due to the huge influence of tourism over these locations.

*The information in this post was gleaned from various courses I’ve taken at the University of Hawai’i at Manoa. For more information on hula and the commodification of the Hawaiian culture, see Haunani-Kay Trask’s From A Native Daughter.

Sarah Neal is currently working on obtaining her M.A. in English at North Carolina State University.

Cross-posted at The Atlantic and Family Inequality.

In 1996 the Hoover Institution published a symposium titled “Can Government Save the Family?”  A who’s-who list of culture warriors — including Dan Quayle, James Dobson, John Engler, John Ashcroft, and David Blankenhorn — were asked, “What can government do, if anything, to make sure that the overwhelming majority of American children grow up with a mother and father?”

There wasn’t much disagreement on the panel.  Their suggestions were (1) end welfare payments for single mothers, (2) stop no-fault divorce, (3) remove tax penalties for marriage, and (4) fix “the culture.” From this list their only victory was ending welfare as we knew it, which increased the suffering of single mothers and their children but didn’t affect the trajectory of marriage and single motherhood.

So the collapse of marriage continues apace. Since 1980, for every state in every decade, the percentage of women who are married has fallen (except Utah in the 1990s):

[Chart: percentage of women married, by state and decade, 1980–2010]

Whether states are red (last four presidential elections Republican), blue (last four Democratic), or in between (light blue, purple, light red) makes no difference:

[Chart: the same trend, with states grouped by presidential voting pattern]

But the “marriage movement” lives on. In fact, their message has changed remarkably little. In that 1996 symposium, Dan Quayle wrote:

We also desperately need help from nongovernment institutions like the media and the entertainment community. They have a tremendous influence on our culture and they should join in when it comes to strengthening families.

Sixteen years later, in the 2012 “State of Our Unions” report, the National Marriage Project included a 10-point list of familiar demands, including this point #8:

Our nation’s leaders, including the president, must engage Hollywood in a conversation about popular culture ideas about marriage and family formation, including constructive critiques and positive ideas for changes in media depictions of marriage and fatherhood.

So little reflection on such a bad track record — it’s enough to make you think that increasing marriage isn’t the main goal of the movement.

Plan for the Future

So what is the future of marriage? Advocates like to talk about turning it around, bringing back a “marriage culture.” But is there a precedent for this, or a reason to expect it to happen? Not that I can see. In fact, the decline of marriage is nearly universal. A check of United Nations statistics on marriage trends shows that 87 percent of the world’s population lives in countries with marriage rates that have fallen since the 1980s.

Here is the trend in the marriage rate since 1940, with some possible scenarios to 2040 (sources: 1940-1960, 1970-2011):

[Chart: U.S. marriage rate, 1940–2011, with projected scenarios to 2040]

Notice the decline has actually accelerated since 1990. Something has to give. The marriage movement folks say they want a rebound. With even the most optimistic twist imaginable (and a Kanye wedding), could it get back to 2000 levels by 2040? That would make headlines, but the institution would still be less popular than it was during that dire 1996 symposium.

If we just keep going on the same path (the red line), marriage will hit zero around 2042. Some trends are easy to predict by extrapolation (like next year’s decline in the name Mary), but major demographic trends usually don’t just smash into 0 or 100 percent, so I don’t expect that.
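If you want to see where a straight-line projection like that comes from, here is a minimal sketch. The data points are illustrative stand-ins mimicking the decline, not the actual series from the chart:

import numpy as np

years = np.array([1990, 2000, 2011])
rates = np.array([54.0, 45.0, 31.0])   # made-up marriage rates, for illustration

slope, intercept = np.polyfit(years, rates, 1)   # least-squares straight line
zero_year = -intercept / slope                   # where the line crosses zero

print(f"rate falls ~{-slope:.2f} points/year; hits zero around {zero_year:.0f}")
# For these toy numbers the crossing lands around 2040, in the
# neighborhood of the ~2042 the post describes.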

The more realistic future is some kind of taper. We know, for example, that decline of marriage has slowed considerably for college graduates, so they’re helping keep it alive — but that’s still only 35 percent of women in their 30s, not enough to turn the whole ship around.

So Live With It

So rather than try to redirect the ship of marriage, we have to do what we already know we have to do: reduce the disadvantages accruing to those who aren’t married — or whose parents aren’t married. If we take the longer view we know this is the right approach: in the past two centuries we’ve largely replaced such family functions as food production, healthcare, education, and elder care with a combination of state and market interventions. As a result — though the results are, to put it mildly, uneven — our collective well-being has improved rather than diminished, even as families have lost much of their hold on modern life.

If the new book by sociologist Kathryn Edin and Timothy Nelson is to be believed, there is good news for the floundering marriage movement in this approach: Policies to improve the security of poor people and their children also tend to improve the stability of their relationships.  In other words, supporting single people supports marriage.

To any clear-eyed observer it’s obvious that we can’t count on marriage anymore — we can’t build our social welfare system around the assumption that everyone does or should get married if they or their children want to be cared for. That’s what it means when pensions are based on spouses’ earnings, when employers don’t provide sick leave or family leave, and when high-quality preschool is unaffordable for most people. So let marriage be truly voluntary, and maybe more people will even end up married. Not that there’s anything wrong with that.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at Speech Events.

Earlier this year President Obama described California attorney general Kamala Harris as “the best-looking attorney general in the country.” Even though the crowd reportedly laughed at the comment, Obama was criticized for making sexist remarks and quickly apologized to Harris.

But some people claimed to be confused: why was Obama wrong to compliment a woman on her looks? From the Washington Times:

Please, give us a chance to learn the rules. Give us a minute to catch our breath

We were taught (most of us were) that girls and women were to be given flowers for their beauty of character and good looks.

Exactly what is wrong with this?

But one morning we were told that it is okay, even required, to tell a woman that she looks marvelous. Next morning, hey, we can go to jail for this!

This is not the first time a president has run into this sort of trouble. This picture of reporter Helen Thomas ran in the Philadelphia Evening Bulletin on August 7, 1973.*

[Photo: Helen Thomas standing in front of the White House with a note pad]

The accompanying story was titled “Nixon Turns Fashion Critic, ‘Turn Around…’”  It included the following:

President Nixon, a gentleman of the old school, teased a newspaper woman yesterday about wearing slacks to the White House and made it clear that he prefers dresses on women.

After a bill-signing ceremony in the Oval Office, the President stood up from his desk and in a teasing voice said to UPI’s Helen Thomas: “Helen, are you still wearing slacks? Do you prefer them actually? Every time I see girls in slacks it reminds me of China.”

Nixon went on, asking Thomas to present her rear:

“This is not said in an uncomplimentary way, but slacks can do something for some people and some it can’t.” He hastened to add, “but I think you do very well. Turn around.”

As Nixon, Attorney General Elliott L. Richardson, FBI Director Clarence Kelley and other high-ranking law enforcement officials smiling [sic], Miss Thomas did a pirouette for the President. She was wearing white pants, a navy blue jersey shirt, long white beads and navy blue patent leather shoes with red trim.

There are several parallels between this incident and the Obama one: both took place at the tail end of an official event, when the president apparently thought he could take some time for harmless jokes, and the women involved were highly acclaimed professionals. In both events, we see a powerful man verbally change a woman from a respected professional into an attractive female.

We know how the public responded to Obama’s comment. What about reception in 1973?

First of all, Helen Thomas herself wrote the article about this incident; according to anthropologist Michael Silverstein, “this is what we call ‘payback’ time.” At first glance, it seems like a neutral report of a conversation, but take a closer look. From the very beginning, Nixon is set up as the bad guy – a “gentleman of the old school” who “teased a newspaper woman.”

The mocking, faux fashion report tone continues from the headline into the description of Thomas’s outfit. What seems like a harmless personal interest story tacked onto a news article was actually a protest against this treatment – and it required damage control by the president. Within the next week, Thomas’s fellow reporters went on the record as saying that they were on her side, and wished she had not played along with the president. Even the First Lady weighed in, saying that there was no rule against women wearing pants in the White House.

The rules haven’t changed: there’s nothing new about presidents talking about professional women’s appearance, and even in 1973 it was recognized as inappropriate.

* The image is from here; the article on Google is text only.

Miranda Weinberg is a graduate student in Educational Linguistics and Anthropology at the University of Pennsylvania studying multilingualism in schooling.