
For the last week of December, we’re re-posting some of our favorite posts from 2012. Originally cross-posted at Inequality by Interior Design.

There is not actually a great deal of literature on “man caves,” “man dens,” and the like, save for some anthropological and archeological work that uses the term a bit differently.  There is, however, a substantial body of literature dealing with bachelor pads.  The “bachelor pad” is a term that emerged in the 1960s.  It was a style of masculinizing domestic spaces heavily influenced by “gentlemen’s” magazines like Esquire and Playboy.  Originally referred to as “bachelor apartments,” “bachelor pad” was coined in an article in the Chicago Tribune, and by 1964 it appeared in the New York Times and Playboy as well.

It’s somewhat ironic that the “bachelor pad” came into the American cultural consciousness at a time when the median age at first marriage was at a historic low (20.3 for women and 22.8 for men).  So, the term came into usage at a time when heterosexual marriage was in vogue.  Why then?  Another ironic twist is that while the term has only become more popular since it was introduced, “bachelorette pad” never took off–despite the interesting finding that women live alone in larger numbers than do men.  I think these two paradoxes substantiate a fundamental truth about the bachelor pad–it has always been more myth than reality (see here, here, here, here, and here).

The gendering of domestic space had been a persistent dilemma since the spheres were separated in the first place.  Few men were ever able to afford the lavish, futuristic and hedonistic “pads” advertised in Esquire and Playboy.  But they did want to look at them in magazines.

A small body of literature on bachelor pads finds that they played a significant role in producing a new masculinity over the course of the 20th century.  As Bill Osgerby puts it, “A place where men could luxuriate in a milieu of hedonistic pleasure, the bachelor pad was the spatial manifestation of a consuming masculine subject that became increasingly pervasive amid the consumer boom of the 1950s and 1960s” (here).  The really interesting thing is that few men were actually able to luxuriate in these environments.  Yet Playboy — along with a host of copycat magazines — spent a great deal of money, time, and effort perpetuating a lifestyle in which few men engaged.  Indeed, outside of James Bond movies and the Playboy Mansion, I wonder how many actual bachelor pads exist or ever existed.

In the 1950s — despite a transition into consumer culture — consumption was regarded as a feminine practice and pursuit.  Bachelor pads — and the magazines that sold the images of these domestic spaces to men around the country — helped men bridge this gap.  More than a few have noted the importance of Playboy’s (hetero)sexual content in helping to sell consumption to American men.  Barbara Ehrenreich said it this way: “The breasts and bottoms were necessary not just to sell the magazine, but to protect it” (here).  Additionally, the masculinization of domestic space took many forms in early depictions of bachelor pads with ostentatious gadgetry of all types, beds with enough compartments and features to be comparable to Swiss Army knives, and each room designed in anticipation of heterosexual conquest at a moment’s notice.

Paradoxically, bachelor pads seem to have been produced to sell men the historically “feminized” activity of consumption.

I’m guessing that many of the “man caves” I’ll see in my research won’t necessarily fit the image most of us conjure in our minds.  But the ways men with caves talk about them are replete with these images, which remain largely unrealized: most of these men simply lack the economic means to architecturally articulate the domestic spaces without which they may never feel “at home.”

———————

Tristan Bridges is a sociologist of gender and sexuality.  He starts as an Assistant Professor of Sociology at the College at Brockport (SUNY) in the fall of 2012.  He is currently studying heterosexual couples with “man caves” in their homes.  Tristan blogs about some of this research and more at Inequality by (Interior) Design.  You can follow him on Twitter @tristanbphd.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012. Cross-posted at The Huffington Post.

All that rot they teach to children about the little raindrop fairies with their buckets washing down the window panes must go.  We need less sentimentality and more spanking.

Or so said Granville Stanley Hall, founder of child psychology, in 1899.  Hall was one of many child experts of the 1800s who believed that children needed little emotional connection with their parents.

Luther Emmett Holt, who pioneered the science of pediatrics, wrote a child-rearing advice book in which he called infant screaming “the baby’s exercise.”   “Babies under six months old should never be played with,” he wrote, “and the less of it at any time the better for the infant.”

Holt and Hall’s contemporary, John B. Watson, wrote a child advice book that sold into the second half of the 1900s.  In a chapter titled “Too Much Mother Love,” he wrote:

Never hug and kiss them, never let them sit in your lap. If you must, kiss them once on the forehead when they say goodnight. Shake hands with them in the morning.

When you are tempted to pet your child remember that mother love is a dangerous instrument. An instrument which may inflict a never-healing wound, a wound which may make infancy unhappy, adolescence a nightmare, an instrument which may wreck your adult son or daughter’s vocational future and their chances for marital happiness.

With these quotes in mind, it seems less surprising that we put adolescents to work in factories and coal mines.

In any case, it was in this context — one in which loving one’s child was viewed suspiciously, at best, and nurturing care was considered both psychologically and physically dangerous — that psychologist Harry Harlow did some of his most famous experiments.  In the 1960s, using Rhesus monkeys, he set out to prove that babies needed more than just food, water, and shelter.  They needed comfort and even love.  While this may seem stunningly obvious today, Harlow was up against widespread beliefs in psychology.

This video shows one of the more basic experiments (warning, these videos can be hard to watch):

The need for these experiments reveals just how dramatically conventional wisdom can change.  The psychologists of the time needed experimental proof that physical contact between a baby and its parent mattered.   Harlow’s experiments were part of a revolution in thinking about child development.  It’s quite fascinating to realize that such a revolution was ever needed.

Special thanks to Shayna Asher-Shapiro for finding Holt, Hall, and Watson for me.


For the last week of December, we’re re-posting some of our favorite posts from 2012.

This morning NPR aired a segment on media stories about the “boomerang generation,” college-educated children who return to live with their parents after graduation. A widely-repeated figure is that currently 85% of recent college grads are moving back in with their parents, taken as a sign of the ongoing, and potentially long-term, consequences of the economic crisis.

Except for the part where it’s not true.

You may have heard this figure. CNN Money seems to be the first to cite it, in 2010; Time and the New York Post, among others, repeated the number:

It continued to spread, most recently ending up in a political ad from American Crossroads that attacks President Obama.

But PolitiFact recently looked into the claim and declared it false. It supposedly came from a survey conducted by a marketing and research firm from Philadelphia. Yet as they dug further into the story, PolitiFact found many things that might make you suspicious. For instance, some people listed as employees claimed never to have worked for the firm, while others seem to be fictional, their photos taken from stock photo archives. One employee they did find turned out to be the company president’s dad. When PolitiFact reached the president, David Morrison, he said the survey was conducted “many years ago” but refused to release any information about the methodology, saying he had a non-disclosure agreement with the (unnamed) client.

But as the story of this shocking trend was reproduced, it appears reporters did not try to access the original survey to fact-check it; had they done so, they would surely have discovered at least some of these discrepancies, or the lack of any available data to back up the claim.

In contrast to the 85% figure, a Pew Center report (based on a sample of 2,048) found that for young adults aged 18-34, 39% were either currently living with their parents or had temporarily moved in with them at some point because of the economic downturn:

And importantly, of those currently living with their parents, the vast majority of 18-24 year-olds said the economy wasn’t the reason they were doing so. The study found no significant differences by education for those under 30 (42% of graduates were living at home, compared to 49% of those who never attended college), but for those 30-34, only 10% of college graduates were living at home (compared to 22% of non-college graduates).

But once the more shocking 85% figure had been cited by a mainstream news source, it was quickly reproduced in many other outlets with little fact-checking. As PolitiFact sums up,

…once a claim enters the mainstream media, it’s hard to put the genie back in the bottle. “The dynamic of trust is built with each link,” Wemple said. “It barely occurs to anybody that all those links may be built on a straw foundation.”

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

In an effort to map the shape of the dual career challenge, the Clayman Institute for Research on Gender at Stanford University did a survey of 30,000 faculty at 13 universities. The study was headed by Londa Schiebinger, Andrea Henderson, and Shannon Gilmartin.

When academics use the phrase “dual career,” they’re referring to the tendency of academics to marry other academics, making the job hunt fraught with trouble.  Most institutions are not keen to hire someone’s partner just because they exist.  Meanwhile, the academic job market is tough; it’s difficult to get just one job, let alone two within a reasonable commute of one another.

So, what did the researchers find?

More than a third of professors are partnered with another professor:

When we break this data down by gender, we see some interesting trends.  Female professors are somewhat more likely to be married to an academic partner (40% of women versus 34% of men), they are twice as likely to be single (21% versus 10% of men; racial minority women are even more likely), and they are only one-fourth as likely to have a stay-at-home partner:

On the one hand, since women are more likely to have an academic partner, the problem of finding a job for a pair of academics hits women harder.  On the other hand, the fact that they are more often single makes choosing a job simpler for a larger proportion of women than men.  (On another note, if you’ve ever wondered why fewer female than male academics have children, there are several answers in the pie charts above.)

For women who are partnered with another academic, the data is starker than the 6 point difference above would suggest.  The researchers asked members of dual-career academic couples, whose job comes first?  Half of men said that theirs did, compared to only 20% of women.  When it comes to balancing competing career demands, then, women may be more willing to compromise than men.

There is a lot more detailed information on academic couples and what institutions think of them in the report. Or, listen to Londa Schiebinger and the other researchers describe their findings:


For the last week of December, we’re re-posting some of our favorite posts from 2012.  Cross-posted at Jezebel, the Huffington Post, and Pacific Standard.

You might be surprised to learn that at its inception in the mid-1800s cheerleading was an all-male sport.  Characterized by gymnastics, stunts, and crowd leadership, cheerleading was considered equivalent in prestige to an American flagship of masculinity, football.  As the editors of Nation saw it in 1911:

…the reputation of having been a valiant “cheer-leader” is one of the most valuable things a boy can take away from college.  As a title to promotion in professional or public life, it ranks hardly second to that of having been a quarterback.*

Indeed, cheerleading helped launch the political careers of three U.S. Presidents.  Dwight D. Eisenhower, Franklin Roosevelt, and Ronald Reagan were cheerleaders. Actor Jimmy Stewart was head cheerleader at Princeton. Republican leader Tom DeLay was a noted cheerleader at the University of Mississippi.

Women were mostly excluded from cheerleading until the 1930s. An early opportunity to join squads appeared when large numbers of men were deployed to fight World War I, leaving open spots that women were happy to fill.


When the men returned from war there was an effort to push women back out of cheerleading (some schools even banned female cheerleaders).  The battle over whether women should be cheerleaders would go on for several decades.  Argued one opponent in 1938:

[Women cheerleaders] frequently became too masculine for their own good… we find the development of loud, raucous voices… and the consequent development of slang and profanity by their necessary association with [male] squad members…**

Cheerleading was too masculine for women!  Ultimately the effort to preserve cheer as a men-only activity was unsuccessful.  With a second mass deployment of men during World War II, women cheerleaders were here to stay.

The presence of women changed how people thought about cheering.  Because women were stereotyped as cute instead of “valiant,” the reputation of cheerleaders changed.  Instead of a pursuit that “ranks hardly second” to quarterbacking, cheerleading’s association with women led to its trivialization.  By the 1950s, the ideal cheerleader was no longer a strong athlete with leadership skills, but someone with “manners, cheerfulness, and good disposition.”  In response, boys pretty much bowed out of cheerleading altogether. By the 1960s, men and megaphones had been mostly replaced by perky co-eds and pom-poms:

Cheerleading in the sixties consisted of cutesy chants, big smiles and revealing uniforms.  There were no gymnastic tumbling runs.  No complicated stunting.  Never any injuries.  About the most athletic thing sixties cheerleaders did was a cartwheel followed by the splits.***

Cheerleading was transformed.

Of course, it’s not this way anymore.  Cultural changes in gender norms continued to affect cheerleading. Now cheerleaders, still mostly women, pride themselves on being both athletic and spirited, a blending of masculine and feminine traits that is now considered ideal for women.

See also race and the changing shape of cheerleading and the amazing disappearing cheerleading outfit.

Citations after the jump:


For the last week of December, we’re re-posting some of our favorite posts from 2012.

A recent episode of Radiolab centered on questions about colors.  It profiled a British man who, in the 1800s, noticed that neither The Odyssey nor The Iliad included any references to the color blue.  In fact, it turns out that, as languages evolve words for color, blue is always last.  Red is always first.  This is the case in every language ever studied.

Scholars theorize that this is because red is very common in nature, but blue is extremely rare.  The flowers we think of as blue, for example, are usually more violet than blue; very few foods are blue.  Most of the blue we see today is part of artificial colors produced by humans through manufacturing processes.  So, blue is the last color to be noticed and named.

An exception to the rarity of blue in nature, of course — one that might undermine this theory — is the sky.  The sky is blue, right?

Well, it turns out that seeing blue when we look up is dependent on already knowing that the sky is blue.  To illustrate, the hosts of Radiolab interviewed a linguist named Guy Deutscher who did a little experiment on his daughter, Alma.  Deutscher taught her all the colors, including blue, in the typical way: pointing to objects and asking what color they were.  In the typical way, Alma mastered her colors quite easily.

But Deutscher and his wife avoided ever telling Alma that the sky was blue.  Then, one day, he pointed to a clear sky and asked her, “What color is that?”

Alma, at first, was puzzled.  To Alma, the sky was a void, not an object with properties like color.  It was nothing. There simply wasn’t a “that” there at all.  She had no answer.  The idea that the sky is a thing at all, then, is not immediately obvious.

Deutscher kept asking on “sky blue” days and one day she answered: the sky was white.  White was her answer for some time and she only later suggested that maybe it was blue.  Then blue and white took turns for a while, and she finally settled on blue.

The story is a wonderful example of the role of culture in shaping perception.  Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction.  There are other examples of this phenomenon.  What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany.  Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

So, next time you look into the sky, ask yourself what you might see if you didn’t see blue.


For the last week of December, we’re re-posting some of our favorite posts from 2012.

In “Rock in a Hard Place: Grassroots Cultural Production in the Post-Elvis Era,” William Bielby discusses the emergence of the amateur teen rock band. The experience of teens getting together with their friends to form a band and practice in their parents’ garage is iconic in our culture now; recalling their first band or their first live show is a standard element of interviews with successful rock musicians. Bielby traces the history of this cultural form, which appeared in the 1950s. In particular, he argues that social structures largely excluded young women from full participation in the teen band phenomenon.

Though young women were involved in many other types of musical performance, the pop charts featured many successful female artists in the 1950s, and girls listened to music more than boys, rock bands emerged as a male-dominated (and predominantly White) musical form. One important reason was parents’ concern about the rock subculture and the lack of supervision. Parents might be willing to let their sons get together with friends and play loud music and travel around town or even to other cities to play in front of a crowd, but they were much less likely to let their daughters do so. Gendered parenting, and the closer regulation of girls than boys, meant that girls were less likely to be given the chance to join a band. So while boys were learning to take on the role of active producers of rock music, girls didn’t have the same opportunities.

Yunnan C. sent us photos she took of two shirts at an H&M store in Toronto that made me think about Bielby’s argument:

As Yunnan points out,

This, as fashion, enforces this idea that being in a band and playing music are for guys, limiting women to being the passive consumers and supporters of it, rather than the producers.

The shirts don’t just cast women in the role of fans; they specifically frame them as potential groupies, whose fandom is filtered through a romantic/sexual attraction to individual members of a band. Communications scholar Melissa Click argues that female fans are often dismissed because there is a “persistent cultural assumption that male-targeted texts are authentic and interesting, while female-targeted texts are schlocky and mindless—and further that men and boys are active users of media while girls are passive consumers.” While the image of the groupie is as well-known as that of the band, the groupie is usually viewed skeptically, seen as someone with a superficial, inauthentic appreciation of the music, “a particular kind of female fan assumed to be more interested in sex with rock stars than in their music.”

So the H&M shirts reflect gendered notions about who makes music (there were no shirts saying “I am the drummer”) as well as the idea that women’s appreciation for music and other forms of pop culture should be expressed through affection for a specific person, a form of fanhood that ultimately stigmatizes those who express it as superficial and inauthentic.


For the last week of December, we’re re-posting some of our favorite posts from 2012.

Today, most people in the U.S. see childhood as a stage distinct from adulthood, and even from adolescence. We think children are more vulnerable and innocent than adults and should be protected from many of the burdens and responsibilities that adult life requires. But as Sidney Mintz explains in Huck’s Raft: A History of American Childhood, “…childhood is not an unchanging biological stage of life but is, rather, a social and cultural construct…Nor is childhood an uncontested concept” (p. viii). Indeed,

We cling to a fantasy that once upon a time childhood and youth were years of carefree adventure…The notion of a long childhood, devoted to education and free from adult responsibilities, is a very recent invention, and one that became a reality for a majority of children only after World War II. (p. 2)

Our ideas about what is appropriate for children to do have changed radically over time, often as a result of political and cultural battles between groups with different ideas about the best way to treat children. Most of us would be shocked by the level of adult responsibilities children were routinely expected to shoulder a century ago.

Reader RunTraveler let us know that the Library of Congress has posted a collection of photos by Lewis Hine, all depicting child labor in the early 1900s in the U.S. The photos are a great illustration of our changing ideas about childhood, showing the range of jobs, many requiring very long hours in often dangerous or extremely unpleasant conditions, that children did. I picked out a few (with some of Hine’s comments below each photo), but I suggest looking through the full Flickr set or the full collection of over 5,000 photos of child laborers from the National Child Labor Committee:

“John Howell, an Indianapolis newsboy, makes $.75 some days. Begins at 6 a.m., Sundays.” 1908. Source.

“Interior of tobacco shed, Hawthorn Farm. Girls in foreground are 8, 9, and 10 years old. The 10 yr. old makes 50 cents a day. 12 workers on this farm are 8 to 14 years old, and about 15 are over 15 yrs.” 1917. Source.

“Eagle and Phoenix Mill. ‘Dinner-toters’ waiting for the gate to open. This is carried on more in Columbus than in any other city I know, and by smaller children.” 1913. Source.

“Vance, a Trapper Boy, 15 years old. Has trapped for several years in a West Va. Coal mine. $.75 a day for 10 hours work. All he does is to open and shut this door: most of the time he sits here idle, waiting for the cars to come. On account of the intense darkness in the mine, the hieroglyphics on the door were not visible until plate was developed.” 1908. Source.

“Rose Biodo…10 years old. Working 3 summers. Minds baby and carries berries, two pecks at a time. Whites Bog, Brown Mills, N.J. This is the fourth week of school and the people here expect to remain two weeks more.” 1910. Source.

Hine’s photos make it clear how common child labor was, but their very existence also documents the cultural battle over the meaning of childhood taking place in the 1900s. Hine worked for the National Child Labor Committee, and his photos and especially his accompanying commentary express concern that children were doing work that was dangerous, difficult, poorly-paid, and that interfered with their school attendance.

In fact, the NCLC’s efforts contributed to the passage of the Keating-Owen Child Labor Act in 1916, the first law to regulate the use of child workers (limiting hours and forbidding interstate commerce in items produced by children under various ages, depending on the product). The law was ruled unconstitutional by the Supreme Court in 1918. This resulted in an extended battle between supporters and opponents of child labor laws, as another law was passed and then struck down by the courts, followed by successful efforts to stall any more legislation in the 1920s based on states-rights and anti-Communist arguments. Only in 1938, with the passage of the Fair Labor Standards Act as part of the New Deal, did child workers receive specific protections.

Even then, we had loopholes. While children working in factories or mines was redefined as inappropriate and even exploitative and cruel, a child babysitting or delivering newspapers for money was often interpreted as character-building. Today, the cultural battle over the use of children as workers continues. This year, the Labor Department retracted proposed changes that would have restricted the types of farmwork children could be hired to do, after significant push-back from farmers and legislators afraid the rules would apply to kids working on their own families’ farms.

As Mintz said, childhood is a contested concept, and the struggle to decide what kind of work, if any, is appropriate for any child to do continues.

For more examples, see Lisa’s 2009 post about child labor.
