Archive: 2012

For the last week of December, we’re re-posting some of our favorite posts from 2012. Cross-posted at Global Policy TV and Pacific Standard.

Publicizing the release of the 1940 U.S. Census data, LIFE magazine released photographs of Census enumerators collecting data from household members.  Yep, Census enumerators. For almost 200 years, the U.S. counted people and recorded information about them in person, by sending out a representative of the U.S. government to evaluate them directly (source).

By 1970, the government was collecting Census data by mail-in survey. The shift to a survey had dramatic effects on at least one Census category: race.

Before the shift, Census enumerators categorized people into racial groups based on their appearance.  They did not ask respondents how they characterized themselves.  Instead, they made a judgment call, drawing on explicit instructions given to the Census takers.

On a mail-in survey, however, the individual self-identified.  They got to tell the government what race they were instead of letting the government decide.  There were at least two striking shifts as a result of this change:

  • First, it resulted in a dramatic increase in the Native American population.  Between 1980 and 2000, the U.S. Native American population magically grew 110%.  People who had identified as American Indian had apparently been somewhat invisible to the government.
  • Second, to the chagrin of the Census Bureau, 80% of Puerto Ricans chose white (only 40% of them had been identified as white in the previous Census).  The government wanted to categorize Puerto Ricans as predominantly black, but the Puerto Rican population saw things differently.

I like this story.  Switching from enumerators to surveys meant literally shifting our definition of what race is from a matter of appearance to a matter of identity.  And it wasn’t a strategic or philosophical decision. Instead, the very demographics of the population underwent a fundamental unsettling because of the logistical difficulties in collecting information from a large number of people.  Nevertheless, this change would have a profound impact on who we think Americans are, what research about race finds, and how we think about race today.

See also the U.S. Census and the Social Construction of Race and Race and Censuses from Around the World. To look at the questionnaires and their instructions for any decade, visit the Minnesota Population Center.  Thanks to Philip Cohen for sending the link.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012. Originally cross-posted at Ms.

Mojca P., Jason H., Larry H., and Cindy S. sent us a link to a story about a Saudi Arabian version of an IKEA catalog in which all of the women were erased.  Here is a single page of the American and Saudi Arabian magazines side-by-side:

After the outcry in response to this revelation began, IKEA responded by calling the removal of women a “mistake” that was “in conflict with the IKEA Group values.”   IKEA seems to have agreed with its critics: erasing women capitulates to a sexist society, and that is wrong.

But, there is a competing progressive value at play: cultural sensitivity.  Isn’t removing the women from the catalog the respectful and non-ethnocentric thing to do?

Susan Moller Okin wrote a paper that famously asked, “Is Multiculturalism Bad for Women?”  The question led to two decades of debate and an interrogation of the relationship between culture and power.  Who gets to decide what’s cultural?  Whose interests does cultural sensitivity serve?

The IKEA catalog suggests that (privileged) men get to decide what Saudi Arabian culture looks like (though many women likely endorse the cultural mandate to keep women out of view as well).  So, respecting culture entails endorsing sexism because men are in charge of the culture?

Well, it depends.  It certainly can go that way, and often does.  But there’s a feminist (and anti-colonialist) way to do this too.  Respecting culture entails endorsing sexism only if we demonize certain cultures as irredeemably sexist and unable to change.  In fact, most cultures have sexist traditions.  Since all of those cultures are internally contested and changing, no culture is hopelessly sexist.  Ultimately, we can bridge our inclinations to be both culturally sensitive and feminist by seeking out the feminist strains in every culture and hoping to see those manifested as each culture evolves.

None of this is going to solve IKEA’s problem today, but it does illustrate one of the difficult-to-solve paradoxes in contemporary progressive politics.

—————————

Lisa Wade has published extensively on the relationship between feminism and multiculturalism, using female genital cutting as a case.  You can follow her on Twitter and Facebook (where she keeps discussion of “mutilation” to a minimum).

For the last week of December, we’re re-posting some of our favorite posts from 2012.

In an effort to map the shape of the dual career challenge, the Clayman Institute for Research on Gender at Stanford University did a survey of 30,000 faculty at 13 universities. The study was headed by Londa Schiebinger, Andrea Henderson, and Shannon Gilmartin.

When academics use the phrase “dual career,” they’re referring to the tendency of academics to marry other academics, making the job hunt fraught with trouble.  Most institutions are not keen to hire someone’s partner just because they exist.  Meanwhile, the academic job market is tough; it’s difficult to get just one job, let alone two within a reasonable commute of one another.

So, what did the researchers find?

More than a third of professors are partnered with another professor:

When we break this data down by gender, we see some interesting trends.  Female professors are somewhat more likely to be married to an academic partner (40% of women versus 34% of men), twice as likely to be single (21% versus 10% of men; racial minority women are even more likely), and only one-fourth as likely to have a stay-at-home partner:

On the one hand, since women are more likely to have an academic partner, the problem of finding a job for a pair of academics hits women harder.  On the other hand, the fact that they are more often single makes choosing a job simpler for a larger proportion of women than men.  (On another note, if you’ve ever wondered why fewer female than male academics have children, there are several answers in the pie charts above.)

For women who are partnered with another academic, the data is starker than the 6 point difference above would suggest.  The researchers asked members of dual-career academic couples, whose job comes first?  Half of men said that theirs did, compared to only 20% of women.  When it comes to balancing competing career demands, then, women may be more willing to compromise than men.

There is a lot more detailed information on academic couples and what institutions think of them in the report. Or, listen to Londa Schiebinger and the other researchers describe their findings:

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012.  Cross-posted at Jezebel, the Huffington Post, and Pacific Standard.

You might be surprised to learn that at its inception in the mid-1800s cheerleading was an all-male sport.  Characterized by gymnastics, stunts, and crowd leadership, cheerleading was considered equivalent in prestige to an American flagship of masculinity, football.  As the editors of The Nation saw it in 1911:

…the reputation of having been a valiant “cheer-leader” is one of the most valuable things a boy can take away from college.  As a title to promotion in professional or public life, it ranks hardly second to that of having been a quarterback.*

Indeed, cheerleading helped launch the political careers of three U.S. Presidents.  Dwight D. Eisenhower, Franklin Roosevelt, and Ronald Reagan were cheerleaders. Actor Jimmy Stewart was head cheerleader at Princeton. Republican leader Tom DeLay was a noted cheerleader at the University of Mississippi.

Women were mostly excluded from cheerleading until the 1930s. An early opportunity to join squads appeared when large numbers of men were deployed to fight World War I, leaving open spots that women were happy to fill.


When the men returned from war there was an effort to push women back out of cheerleading (some schools even banned female cheerleaders).  The battle over whether women should be cheerleaders would go on for several decades.  Argued one opponent in 1938:

[Women cheerleaders] frequently became too masculine for their own good… we find the development of loud, raucous voices… and the consequent development of slang and profanity by their necessary association with [male] squad members…**

Cheerleading was too masculine for women!  Ultimately the effort to preserve cheer as a male-only activity was unsuccessful.  With a second mass deployment of men during World War II, women cheerleaders were here to stay.

The presence of women changed how people thought about cheering.  Because women were stereotyped as cute instead of “valiant,” the reputation of cheerleaders changed.  Instead of a pursuit that “ranks hardly second” to quarterbacking, cheerleading’s association with women led to its trivialization.  By the 1950s, the ideal cheerleader was no longer a strong athlete with leadership skills but someone with “manners, cheerfulness, and good disposition.”  In response, boys pretty much bowed out of cheerleading altogether. By the 1960s, men and megaphones had been mostly replaced by perky co-eds and pom-poms:

Cheerleading in the sixties consisted of cutesy chants, big smiles and revealing uniforms.  There were no gymnastic tumbling runs.  No complicated stunting.  Never any injuries.  About the most athletic thing sixties cheerleaders did was a cartwheel followed by the splits.***

Cheerleading was transformed.

Of course, it’s not this way anymore.  Cultural changes in gender norms continued to affect cheerleading. Now cheerleaders, still mostly women, pride themselves on being both athletic and spirited, a blending of masculine and feminine traits that is now considered ideal for women.

See also race and the changing shape of cheerleading and the amazing disappearing cheerleading outfit.

Citations after the jump:


For the last week of December, we’re re-posting some of our favorite posts from 2012.

A recent episode of Radiolab centered on questions about colors.  It profiled a British man who, in the 1800s, noticed that neither The Odyssey nor The Iliad included any references to the color blue.  In fact, it turns out that, as languages evolve words for color, blue is always last.  Red is always first.  This is the case in every language ever studied.

Scholars theorize that this is because red is very common in nature, but blue is extremely rare.  The flowers we think of as blue, for example, are usually more violet than blue; very few foods are blue.  Most of the blue we see today is part of artificial colors produced by humans through manufacturing processes.  So, blue is the last color to be noticed and named.

An exception to the rarity of blue in nature, of course — one that might undermine this theory — is the sky.  The sky is blue, right?

Well, it turns out that seeing blue when we look up is dependent on already knowing that the sky is blue.  To illustrate, the hosts of Radiolab interviewed a linguist named Guy Deutscher who did a little experiment on his daughter, Alma.  Deutscher taught her all the colors, including blue, in the typical way: pointing to objects and asking what color they were.  In the typical way, Alma mastered her colors quite easily.

But Deutscher and his wife avoided ever telling Alma that the sky was blue.  Then, one day, he pointed to a clear sky and asked her, “What color is that?”

Alma, at first, was puzzled.  To Alma, the sky was a void, not an object with properties like color.  It was nothing. There simply wasn’t a “that” there at all.  She had no answer.  The idea that the sky is a thing at all, then, is not immediately obvious.

Deutscher kept asking on “sky blue” days and one day she answered: the sky was white.  White was her answer for some time and she only later suggested that maybe it was blue.  Then blue and white took turns for a while, and she finally settled on blue.

The story is a wonderful example of the role of culture in shaping perception.  Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction.  There are other examples of this phenomenon.  What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany.  Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

So, next time you look into the sky, ask yourself what you might see if you didn’t see blue.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

In “Rock in a Hard Place: Grassroots Cultural Production in the Post-Elvis Era,” William Bielby discusses the emergence of the amateur teen rock band. The experience of teens getting together with their friends to form a band and practice in their parents’ garage is iconic in our culture now; recalling their first band or their first live show is a standard element of interviews with successful rock musicians. Bielby traces the history of this cultural form, which appeared in the 1950s. In particular, he argues that social structures largely excluded young women from full participation in the teen band phenomenon.

Young women were involved in many other types of musical performance, the pop charts featured many successful female artists in the 1950s, and girls listened to music more than boys did. Nevertheless, rock bands emerged as a male-dominated (and predominantly White) musical form. One important reason was parents’ concern about the rock subculture and the lack of supervision. Parents might be willing to let their sons get together with friends, play loud music, and travel around town or even to other cities to play in front of a crowd, but they were much less likely to let their daughters do so. Gendered parenting, and the closer regulation of girls than boys, meant that girls were less likely to be given the chance to join a band. So while boys were learning to take on the role of active producers of rock music, girls didn’t have the same opportunities.

Yunnan C. sent us photos she took of two shirts at an H&M store in Toronto that made me think about Bielby’s argument:

As Yunnan points out,

This, as fashion, enforces this idea that being in a band and playing music are for guys, limiting women to being the passive consumers and supporters of it, rather than the producers.

The shirts don’t just cast women in the role of fans; they specifically frame them as potential groupies, whose fandom is filtered through a romantic/sexual attraction to individual members of a band. Communications scholar Melissa Click argues that female fans are often dismissed because there is a “persistent cultural assumption that male-targeted texts are authentic and interesting, while female-targeted texts are schlocky and mindless—and further that men and boys are active users of media while girls are passive consumers.” While the image of the groupie is as well-known as that of the band, the groupie is usually viewed skeptically, seen as someone with a superficial, inauthentic appreciation of the music, “a particular kind of female fan assumed to be more interested in sex with rock stars than in their music.”

So the H&M shirts reflect gendered notions about who makes music (there were no shirts saying “I am the drummer”) as well as the idea that women’s appreciation for music and other forms of pop culture should be expressed through affection for a specific person, a form of fanhood that ultimately stigmatizes those who express it as superficial and inauthentic.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

Today, most people in the U.S. see childhood as a stage distinct from adulthood, and even from adolescence. We think children are more vulnerable and innocent than adults and should be protected from many of the burdens and responsibilities that adult life requires. But as Steven Mintz explains in Huck’s Raft: A History of American Childhood, “…childhood is not an unchanging biological stage of life but is, rather, a social and cultural construct…Nor is childhood an uncontested concept” (p. viii). Indeed,

We cling to a fantasy that once upon a time childhood and youth were years of carefree adventure…The notion of a long childhood, devoted to education and free from adult responsibilities, is a very recent invention, and one that became a reality for a majority of children only after World War II. (p. 2)

Our ideas about what is appropriate for children to do have changed radically over time, often as a result of political and cultural battles between groups with different ideas about the best way to treat children. Most of us would be shocked by the level of adult responsibilities children were routinely expected to shoulder a century ago.

Reader RunTraveler let us know that the Library of Congress has posted a collection of photos by Lewis Hine, all depicting child labor in the early 1900s in the U.S. The photos are a great illustration of our changing ideas about childhood, showing the range of jobs, many requiring very long hours in often dangerous or extremely unpleasant conditions, that children did. I picked out a few (with some of Hine’s comments below each photo), but I suggest looking through the full Flickr set or the full collection of over 5,000 photos of child laborers from the National Child Labor Committee:

“John Howell, an Indianapolis newsboy, makes $.75 some days. Begins at 6 a.m., Sundays.” 1908. Source.

“Interior of tobacco shed, Hawthorn Farm. Girls in foreground are 8, 9, and 10 years old. The 10 yr. old makes 50 cents a day. 12 workers on this farm are 8 to 14 years old, and about 15 are over 15 yrs.” 1917. Source.

“Eagle and Phoenix Mill. ‘Dinner-toters’ waiting for the gate to open. This is carried on more in Columbus than in any other city I know, and by smaller children...” 1913. Source.

“Vance, a Trapper Boy, 15 years old. Has trapped for several years in a West Va. Coal mine. $.75 a day for 10 hours work. All he does is to open and shut this door: most of the time he sits here idle, waiting for the cars to come. On account of the intense darkness in the mine, the hieroglyphics on the door were not visible until plate was developed.” 1908. Source.

“Rose Biodo…10 years old. Working 3 summers. Minds baby and carries berries, two pecks at a time. Whites Bog, Brown Mills, N.J. This is the fourth week of school and the people here expect to remain two weeks more.” 1910. Source.

Hine’s photos make it clear how common child labor was, but their very existence also documents the cultural battle over the meaning of childhood taking place in the 1900s. Hine worked for the National Child Labor Committee, and his photos and especially his accompanying commentary express concern that children were doing work that was dangerous, difficult, poorly-paid, and that interfered with their school attendance.

In fact, the NCLC’s efforts contributed to the passage of the Keating-Owen Child Labor Act in 1916, the first law to regulate the use of child workers (limiting hours and forbidding interstate commerce in items produced by children under various ages, depending on the product). The law was ruled unconstitutional by the Supreme Court in 1918. This resulted in an extended battle between supporters and opponents of child labor laws, as another law was passed and then struck down by the courts, followed by successful efforts to stall any more legislation in the 1920s based on states-rights and anti-Communist arguments. Only in 1938, with the passage of the Fair Labor Standards Act as part of the New Deal, did child workers receive specific protections.

Even then, we had loopholes. While children working in factories or mines was redefined as inappropriate and even exploitative and cruel, a child babysitting or delivering newspapers for money was often interpreted as character-building. Today, the cultural battle over the use of children as workers continues. This year, the Labor Department retracted suggested changes that would restrict the type of farmwork children could be hired to do after it received significant push-back from farmers and legislators afraid it would apply to kids working on their own family’s farms.

As Mintz said, childhood is a contested concept, and the struggle to decide what kind of work, if any, is appropriate for any child to do continues.

For more examples, see Lisa’s 2009 post about child labor.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

Paul M. sent along the image below, from an NPR story, commenting on the way skin color is used in the portrayal of evolution.  There’s one obvious way to read this graphic: lighter-skinned people are more evolved (dare we say, “civilized”) than darker-skinned people. Paul’s observation seemed worth highlighting, because this racialized presentation of evolution is really common.  A search for the term on Google Images quickly turns up several more.  In fact, almost every single illustration of evolution of this type, unless it’s in black and white, follows this pattern.  (See also our post on representations of modern man.) Here’s what a Google image search returns, for example:


This is important stuff.  It reinforces the idea that darker-skinned people are more animalistic than the lighter-skinned.  It also normalizes light-skinned people as people and darker-skinned peoples as Black or Brown people, in the same way that we use the word “American” to mean White-American, but various hyphenated phrases (African-American, Asian-American, etc) to refer to everyone else.  So, though this may seem like a trivial matter, the patterns add up to a consistent centering and applauding of Whiteness.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.