For the last week of December, we’re re-posting some of our favorite posts from 2012.

This morning NPR aired a segment on media stories about the “boomerang generation,” college-educated children who return to live with their parents after graduation. A widely-repeated figure is that currently 85% of recent college grads are moving back in with their parents, taken as a sign of the ongoing, and potentially long-term, consequences of the economic crisis.

Except for the part where it’s not true.

You may have heard this figure. CNN Money seems to be the first to cite it, in 2010; Time and the New York Post, among others, repeated the number:

It continued to spread, most recently ending up in a political ad from American Crossroads that attacks President Obama.

But PolitiFact recently looked into the claim and declared it false. It supposedly came from a survey conducted by a marketing and research firm in Philadelphia. Yet as they dug further into the story, PolitiFact found many things that might make you suspicious. For instance, some people listed as employees claimed never to have worked for the firm, while others seemed to be fictional, their photos taken from stock photo archives. One employee they did find turned out to be the company president’s dad. When they reached the president, David Morrison, he said the survey was conducted “many years ago” but refused to release any information about the methodology, saying he had a non-disclosure agreement with the (unnamed) client.

But as the story of this shocking trend was reproduced, it appears reporters did not try to access the original survey to fact-check it; had they, surely they would have discovered at least some of these discrepancies, or the lack of any available data to back up the claim.

In contrast to the 85% figure, a Pew Research Center report (based on a sample of 2,048) found that among young adults aged 18-34, 39% were either currently living with their parents or had temporarily moved in with them at some point because of the economic downturn:

And importantly, of those currently living with their parents, the vast majority of 18-24 year-olds said the economy wasn’t the reason they were doing so. The study found no significant differences by education for those under 30 (42% of graduates were living at home, compared to 49% of those who never attended college), but for those 30-34, only 10% of college graduates were living at home (compared to 22% of non-college graduates).

But once the more shocking 85% figure had been cited by a mainstream news source, it was quickly reproduced in many other outlets with little fact-checking. As PolitiFact sums up,

…once a claim enters the mainstream media, it’s hard to put the genie back in the bottle. “The dynamic of trust is built with each link,” Wemple said. “It barely occurs to anybody that all those links may be built on a straw foundation.”

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.


Today in the U.S., one of the major rules of masculinity is that men must avoid physical intimacy with each other unless they want to have their sexuality called into question. The guy horrified by the potential implications of a casual physical touch is a common trope in our pop culture.

But this wasn’t always the case. For physical closeness and even casual expressions of intimacy to become threats to masculinity, homosexuality had to enter the public consciousness as a stigmatized identity. That is, a man being gay had to be a possibility in observers’ minds when interpreting their behavior, and men had to be eager to avoid any such assumptions.

Over at the Art of Manliness, Brett and Kate McKay have posted a fantastic collection of old photos showing men posing in ways that show a high level of comfort with physical contact between men. Many of them show men posed in ways that would be unacceptable among straight men today. Here are just a few; I highly recommend looking at their entire post:

The McKays point out that sitting for a portrait required men to go to public businesses and openly pose for a photographer. These poses were quite common for men at the time and wouldn’t have been read through the lens of potential gayness that viewers today would likely apply.

Once personal cameras became popular, formal studio photos waned, but early snapshots showed similar poses. Though snapshots eliminated the need to go to a public place of business and pose, film still had to be developed by a professional, who would look at each image (even when I was a kid, developers would occasionally refuse to develop photos because of their content, and occasionally you heard of a developer calling the police about photos they believed revealed illegal activities). The fact that physical touching is so common among men in early snapshots indicates that there was nothing scandalous or threatening about such poses. Only as the performance of masculinity became increasingly focused on an obsessive avoidance of any perception of gayness or femininity did such touching become taboo.

Seriously, though: check out their entire post. It’s awesome!

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

Cross-posted at Global Policy TV and Pacific Standard.

To publicize the release of the 1940 U.S. Census data, LIFE magazine published photographs of Census enumerators collecting data from household members.  Yep, Census enumerators. For almost 200 years, the U.S. counted people and recorded information about them in person, by sending out a representative of the U.S. government to evaluate them directly (source).

By 1970, the government was collecting Census data by mail-in survey. The shift to a survey had dramatic effects on at least one Census category: race.

Before the shift, Census enumerators categorized people into racial groups based on their appearance.  They did not ask respondents how they characterized themselves.  Instead, they made a judgment call, drawing on explicit instructions given to the Census takers.

On a mail-in survey, however, the individual self-identified.  They got to tell the government what race they were instead of letting the government decide.  There were at least two striking shifts as a result of this change:

  • First, it resulted in a dramatic increase in the Native American population.  Between 1980 and 2000, the U.S. Native American population magically grew 110%.  People who identified as American Indian had apparently been somewhat invisible to the government.
  • Second, to the chagrin of the Census Bureau, 80% of Puerto Ricans chose white (only 40% of them had been identified as white in the previous Census).  The government wanted to categorize Puerto Ricans as predominantly black, but the Puerto Rican population saw things differently.

I like this story.  Switching from enumerators to surveys meant literally shifting our definition of what race is from a matter of appearance to a matter of identity.  And it wasn’t a strategic or philosophical decision. Instead, the very demographics of the population underwent a fundamental unsettling because of the logistical difficulties in collecting information from a large number of people.  Nevertheless, this change would have a profound impact on who we think Americans are, what research about race finds, and how we think about race today.

See also the U.S. Census and the Social Construction of Race and Race and Censuses from Around the World. To look at the questionnaires and their instructions for any decade, visit the Minnesota Population Center.  Thanks to Philip Cohen for sending the link.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Originally cross-posted at Ms.

Mojca P., Jason H., Larry H., and Cindy S. sent us a link to a story about a Saudi Arabian version of an IKEA catalog in which all of the women were erased.  Here is a single page of the American and Saudi Arabian magazines side-by-side:

After the outcry in response to this revelation began, IKEA responded by calling the removal of women a “mistake” that was “in conflict with the IKEA Group values.”  IKEA seems to have agreed with its critics: erasing women capitulates to a sexist society, and that is wrong.

But, there is a competing progressive value at play: cultural sensitivity.  Isn’t removing the women from the catalog the respectful and non-ethnocentric thing to do?

Susan Moller Okin wrote a paper that famously asked, “Is Multiculturalism Bad for Women?”  The question led to two decades of debate and an interrogation of the relationship between culture and power.  Who gets to decide what’s cultural?  Whose interests does cultural sensitivity serve?

The IKEA catalog suggests that (privileged) men get to decide what Saudi Arabian culture looks like (though many women likely endorse the cultural mandate to keep women out of view as well).  So, respecting culture entails endorsing sexism because men are in charge of the culture?

Well, it depends.  It certainly can go that way, and often does.  But there’s a feminist (and anti-colonialist) way to do this too.  Respecting culture entails endorsing sexism only if we demonize certain cultures as irredeemably sexist and unable to change.  In fact, most cultures have sexist traditions.  Since all of those cultures are internally contested and changing, no culture is hopelessly sexist.  Ultimately, we can bridge our inclinations to be both culturally sensitive and feminist by seeking out the feminist strains in every culture and hoping to see them manifested as that culture evolves.

None of this is going to solve IKEA’s problem today, but it does illustrate one of the difficult-to-solve paradoxes in contemporary progressive politics.

—————————

Lisa Wade has published extensively on the relationship between feminism and multiculturalism, using female genital cutting as a case.  You can follow her on Twitter and Facebook (where she keeps discussion of “mutilation” to a minimum).


In an effort to map the shape of the dual career challenge, the Clayman Institute for Research on Gender at Stanford University did a survey of 30,000 faculty at 13 universities. The study was headed by Londa Schiebinger, Andrea Henderson, and Shannon Gilmartin.

When academics use the phrase “dual career,” they’re referring to the tendency of academics to marry other academics, making the job hunt fraught with trouble.  Most institutions are not keen to hire someone’s partner just because they exist.  Meanwhile, the academic job market is tough; it’s difficult to get just one job, let alone two within a reasonable commute of one another.

So, what did the researchers find?

More than a third of professors are partnered with another professor:

When we break these data down by gender, we see some interesting trends.  Female professors are somewhat more likely to be married to an academic partner (40% of women versus 34% of men), twice as likely to be single (21% versus 10% of men; racial-minority women even more so), and only one-fourth as likely to have a stay-at-home partner:

On the one hand, since women are more likely to have an academic partner, the problem of finding jobs for a pair of academics hits women harder.  On the other hand, the fact that they are more often single makes choosing a job simpler for a larger proportion of women than men.  (On another note, if you’ve ever wondered why fewer female than male academics have children, there are several answers in the pie charts above.)

For women who are partnered with another academic, the data are starker than the six-point difference above would suggest.  The researchers asked members of dual-career academic couples whose job comes first.  Half of the men said that theirs did, compared to only 20% of the women.  When it comes to balancing competing career demands, then, women may be more willing to compromise than men.

There is a lot more detailed information on academic couples and what institutions think of them in the report. Or, listen to Londa Schiebinger and the other researchers describe their findings:

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Cross-posted at Jezebel, the Huffington Post, and Pacific Standard.

You might be surprised to learn that at its inception in the mid-1800s cheerleading was an all-male sport.  Characterized by gymnastics, stunts, and crowd leadership, cheerleading was considered equivalent in prestige to that American flagship of masculinity, football.  As the editors of The Nation saw it in 1911:

…the reputation of having been a valiant “cheer-leader” is one of the most valuable things a boy can take away from college.  As a title to promotion in professional or public life, it ranks hardly second to that of having been a quarterback.*

Indeed, cheerleading helped launch the political careers of three U.S. Presidents.  Dwight D. Eisenhower, Franklin Roosevelt, and Ronald Reagan were cheerleaders. Actor Jimmy Stewart was head cheerleader at Princeton. Republican leader Tom DeLay was a noted cheerleader at the University of Mississippi.

Women were mostly excluded from cheerleading until the 1930s. An early opportunity to join squads appeared when large numbers of men were deployed to fight World War I, leaving open spots that women were happy to fill.


When the men returned from war there was an effort to push women back out of cheerleading (some schools even banned female cheerleaders).  The battle over whether women should be cheerleaders would go on for several decades.  Argued one opponent in 1938:

[Women cheerleaders] frequently became too masculine for their own good… we find the development of loud, raucous voices… and the consequent development of slang and profanity by their necessary association with [male] squad members…**

Cheerleading was too masculine for women!  Ultimately the effort to preserve cheer as a male-only activity was unsuccessful.  With a second mass deployment of men during World War II, women cheerleaders were here to stay.

The presence of women changed how people thought about cheering.  Because women were stereotyped as cute instead of “valiant,” the reputation of cheerleaders changed.  Instead of a pursuit that “ranks hardly second” to quarterbacking, cheerleading’s association with women led to its trivialization.  By the 1950s, the ideal cheerleader was no longer a strong athlete with leadership skills, but someone with “manners, cheerfulness, and good disposition.”  In response, boys pretty much bowed out of cheerleading altogether. By the 1960s, men and megaphones had been mostly replaced by perky co-eds and pom-poms:

Cheerleading in the sixties consisted of cutesy chants, big smiles and revealing uniforms.  There were no gymnastic tumbling runs.  No complicated stunting.  Never any injuries.  About the most athletic thing sixties cheerleaders did was a cartwheel followed by the splits.***

Cheerleading was transformed.

Of course, it’s not this way anymore.  Cultural changes in gender norms continued to affect cheerleading. Cheerleaders, still mostly women, now pride themselves on being both athletic and spirited, a blending of masculine and feminine traits that is currently considered ideal for women.

See also race and the changing shape of cheerleading and the amazing disappearing cheerleading outfit.



A recent episode of Radiolab centered on questions about colors.  It profiled a British man who, in the 1800s, noticed that neither The Odyssey nor The Iliad included any references to the color blue.  In fact, it turns out that, as languages evolve words for color, blue is always last.  Red is always first.  This is the case in every language ever studied.

Scholars theorize that this is because red is very common in nature, while blue is extremely rare.  The flowers we think of as blue, for example, are usually more violet than blue, and very few foods are blue.  Most of the blue we see today comes from artificial colors produced by humans through manufacturing processes.  So, blue is the last color to be noticed and named.

An exception to the rarity of blue in nature, of course — one that might undermine this theory — is the sky.  The sky is blue, right?

Well, it turns out that seeing blue when we look up is dependent on already knowing that the sky is blue.  To illustrate, the hosts of Radiolab interviewed a linguist named Guy Deutscher who did a little experiment on his daughter, Alma.  Deutscher taught her all the colors, including blue, in the typical way: pointing to objects and asking what color they were.  In the typical way, Alma mastered her colors quite easily.

But Deutscher and his wife avoided ever telling Alma that the sky was blue.  Then, one day, he pointed to a clear sky and asked her, “What color is that?”

Alma, at first, was puzzled.  To Alma, the sky was a void, not an object with properties like color.  It was nothing. There simply wasn’t a “that” there at all.  She had no answer.  The idea that the sky is a thing at all, then, is not immediately obvious.

Deutscher kept asking on “sky blue” days and one day she answered: the sky was white.  White was her answer for some time and she only later suggested that maybe it was blue.  Then blue and white took turns for a while, and she finally settled on blue.

The story is a wonderful example of the role of culture in shaping perception.  Even things that seem objectively true may only seem so if we’ve been given a framework with which to see them; even the idea that a thing is a thing at all, in fact, is partly a cultural construction.  There are other examples of this phenomenon.  What we call “red onions” in the U.S., for example, are seen as blue in parts of Germany.  Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

So, next time you look into the sky, ask yourself what you might see if you didn’t see blue.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.


In “Rock in a Hard Place: Grassroots Cultural Production in the Post-Elvis Era,” William Bielby discusses the emergence of the amateur teen rock band. The experience of teens getting together with their friends to form a band and practice in their parents’ garage is iconic in our culture now; recalling their first band or their first live show is a standard element of interviews with successful rock musicians. Bielby traces the history of this cultural form, which appeared in the 1950s. In particular, he argues that social structures largely excluded young women from full participation in the teen band phenomenon.

Though young women were involved in many other types of musical performance (the pop charts featured many successful female artists in the 1950s, and girls listened to music more than boys did), rock bands emerged as a male-dominated (and predominantly White) musical form. One important reason was parents’ concern about the rock subculture and its lack of supervision. Parents might be willing to let their sons get together with friends to play loud music and travel around town, or even to other cities, to play in front of a crowd, but they were much less likely to let their daughters do so. Gendered parenting, and the closer regulation of girls than boys, meant that girls were less likely to be given the chance to join a band. So while boys were learning to take on the role of active producers of rock music, girls didn’t have the same opportunities.

Yunnan C. sent us photos she took of two shirts at an H&M store in Toronto that made me think about Bielby’s argument:

As Yunnan points out,

This, as fashion, enforces this idea that being in a band and playing music are for guys, limiting women to being the passive consumers and supporters of it, rather than the producers.

The shirts don’t just cast women in the role of fans; they specifically frame them as potential groupies, whose fandom is filtered through a romantic/sexual attraction to individual members of a band. Communications scholar Melissa Click argues that female fans are often dismissed because there is a “persistent cultural assumption that male-targeted texts are authentic and interesting, while female-targeted texts are schlocky and mindless—and further that men and boys are active users of media while girls are passive consumers.” While the image of the groupie is as well-known as that of the band, the groupie is usually viewed skeptically, seen as someone with a superficial, inauthentic appreciation of the music, “a particular kind of female fan assumed to be more interested in sex with rock stars than in their music.”

So the H&M shirts reflect gendered notions about who makes music (there were no shirts saying “I am the drummer”) as well as the idea that women’s appreciation for music and other forms of pop culture should be expressed through affection for a specific person, a form of fanhood that ultimately stigmatizes those who express it as superficial and inauthentic.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.