discourse/language

Cross-posted at Montclair SocioBlog.

The Wall Street Journal had an op-ed this week by Donald Boudreaux and Mark Perry claiming that things are great for the middle class.  Here’s why:

No single measure of well-being is more informative or important than life expectancy. Happily, an American born today can expect to live approximately 79 years — a full five years longer than in 1980 and more than a decade longer than in 1950.

Yes, but.  If life expectancy is the all-important measure of well-being, then we Americans are less well off than people in many other countries, including Cuba.

The authors also claim that we’re better off because things are cheaper:

…spending by households on many of modern life’s “basics” — food at home, automobiles, clothing and footwear, household furnishings and equipment, and housing and utilities — fell from 53% of disposable income in 1950 to 44% in 1970 to 32% today.

Globalization probably has much to do with these lower costs.  But when I reread the list of “basics,” I noticed that at least one item was missing, an item less likely to be imported or outsourced: health care.  And housing, though it appears on the list, has hardly gotten cheaper.  So, we’re spending less on food and clothes, but more on health care and houses. Take housing.  The median home value for childless couples increased by 26% between 1984 and 2001 alone (inflation-adjusted); for married couples with children, who are competing to get into good school districts, median home value ballooned by 78% (source).

The authors also argue that technology narrows the consumption gap between the rich and the middle class.  There’s not much difference between the iPhone that I can buy and the one that Mitt Romney has.  True, but that says only that products filter down through the economic strata just as they always have.  The first ball-point pens cost as much as dinner for two in a fine restaurant.  But if we look forward, not back, we know that tomorrow the wealthy will be playing with some new toy most of us cannot afford. Then, in a few years, prices will come down, everyone will have one, and by that time the wealthy will have moved on to something else for us to envy.

The readers and editors of the Wall Street Journal may find comfort in hearing Boudreaux and Perry’s good news about the middle class.  Middle-class people themselves, however, may be a bit skeptical on being told that they’ve never had it so good (source).

Some of the people in the Gallup sample are not middle class, and they may contribute disproportionately to the pessimistic side.  Boudreaux and Perry, for their part, never specify whom they count as middle class.  In any case, it’s the trend in the lines that is important.  Despite the iPhones, airline tickets, laptops and other consumer goods the authors mention, fewer people feel that they have enough money to live comfortably.

Boudreaux and Perry insist that the middle-class stagnation is a myth, though they also say that

The average hourly wage in real dollars has remained largely unchanged from at least 1964—when the Bureau of Labor Statistics (BLS) started reporting it.

Apparently, “largely unchanged” is completely different from “stagnation.”  But, as even the mainstream media have reported, some incomes have changed quite a bit (source).

The top 10%, and especially the top 1%, have done well in this century.  The other 90%, not so much. You don’t have to be much of a Marxist to think that maybe the Wall Street Journal crowd has some ulterior motive in telling the middle class that all is well and getting better all the time.

—————————

Jay Livingston is the chair of the Sociology Department at Montclair State University.  You can follow him at Montclair SocioBlog or on Twitter.

Cross-posted at PolicyMic.

At the end of my sociology of gender class, I suggest that the fact that feminists are associated with negative stereotypes — ugly, bitter, man-haters, for example — is not a reflection of who feminists really are, but a sign that the anti-feminists have power over how we think about the movement.  The very idea of a feminist, in other words, is politicized… and the opposition might be winning.

A clip forwarded by Dmitriy T.C. is a great example.  In the 1:38 Fox News clip below, two pundits discuss a North Carolina teacher, Leah Gayle, who was accused of having sex with her 15-year-old student.  One of the show’s hosts suggests that feminism is to blame for Gayle’s actions. She says:

There’s something about feminism that lets them know, I can do everything a man does. I can even go after that young boy. I deserve it… It’s turning women into sexualized freaks.

This clip reveals a discursive act.  She is defining who feminists are and what they believe.  And this idea is being broadcast across the airwaves.

This happens all day every day.  Some of the messages are friendly to feminists, and some are not.  These messages compete in our collective imagination.  Most have little to do with what feminists (who are a diverse group anyway) actually believe and many are outrageous lies and distortions, like this one.

So, next time you hear someone describing a feminist, know that what you’re hearing is almost never a strict definition of the movement. Instead, it’s a battle cry, with one side competing with the other to shape what we think of people who care about women’s equality with men.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Cross-posted at Family Inequality.

It’s not that “working families” don’t exist; it’s just that, the way most people use the term, it doesn’t mean anything.  Search Google images for “working families,” and you’ll find images like this:

And that’s pretty much the way the term is used: every family is a working family.

To hear the White House talk, you have to wonder whether there are people who aren’t in families. I’ve complained before about Obama’s tendency to say things like, “This reform is good for families; it’s good for businesses; it’s good for the entire economy.” As if “families” covers all people.

Specifically, if you Google search the White House website’s press office directory, which is where the speeches live, like this, you get 457 results, such as this transcript of remarks by Michelle Obama at a “Corporate Voices for Working Families” event. The equivalent search for “working people” yields a paltry 108 hits (many of them Obama speeches at campaign events, and some false positives, like him making the ridiculous claim that Americans are the “hardest working people on Earth”). If you search the entire Googleverse for “working families” you get about 318 million hits, versus just 7 million for “working people” (less than the 10 million that turn up for “Kardashians,” whatever that means).

You would never know that 33 million Americans live alone, comprising 27% of all households. And 50 million people, one out of every six, live in what the Census Bureau defines as a “non-family household,” a household in which the householder has no relatives (some of those people may be cohabitors, however). The rise of this phenomenon was ably described by Eric Klinenberg in Going Solo: The Extraordinary Rise and Surprising Appeal of Living Alone.

This is partly a complaint about cheap rhetoric, but it’s also about the assumption that families are primary social units when it comes to things like policy and economics, and about the false universality of “middle class” (which is made up of “working families”) in reference to anyone (in a family with anyone) with a job.

Here’s one visualization, from a Google ngrams search of millions of books. The blue line is use of the phrase “working people” as a fraction of references to “people,” while the red line is use of the phrase “working families” as a fraction of references to “families.” It shows, I think, that “working” is coming to define families, not people.

This isn’t all bad. Families matter, and part of the attention to “working families” (or Families That Work) is driven by important problems of work-family conflict, unequal care work burdens, and so on. But ultimately these are problems because they affect people (some of whom are in families). When we treat families as the primary unit of analysis, we mask the divisions within families — the conflicts of interest and exploitation, the violence and abuse, and the ephemeral nature of many family relationships and commitments — and we contribute to the marginalization of people who aren’t in, or don’t have, families.  And those members of the No Family community need our attention, too.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

Cross-posted at Family Inequality.

As I wrote about the older-birth-mothers issue recently (first, and then), I didn’t comment on the photo illustrations people are using with the stories. But when an alert reader sent this one to me, from Katie Roiphe’s post in Slate, I couldn’t help it:

Something about that picture and “women in their late 30s or 40s” rubbed my correspondent the wrong way, or rather, led her to write, “Late 30s or early 40s?!?”

Since this was from a legit website that credits its stock agency, I was able to visit Thinkstock and search for the photo. Sure enough:

Of course, it’s not news, so the title “Middle-aged woman holding her newborn grandson” doesn’t make it a less true illustration of the older-mother phenomenon than one captioned “Desperate aging woman clings to feminist myth that it’s OK to delay childbearing.” But it gives you an idea of what the Slate editor was looking for in the stock photo.

I looked around a little and found one other funny one. Another Slate essay, this one by Allison Benedikt, was reprinted in Canada’s National Post, and they laid it out like this:

When I visited the Getty Images site, I discovered this picture was taken in China. Here’s how it’s presented:

This one, which is a picture of real people, looks like it could be a grandmother, or maybe more likely a caretaker. Regardless, it’s sold as an illustration of a story about China’s elderly having too few grandchildren to take care of them, which is vaguely related to the content of the story, but that’s not what the Post’s caption points to:

It’s true that older parents are more established and experienced but many of those experiences are, from a genetic point of view, negative, says Allison Benedikt.

Anyway, there were others where the women looked pretty old for the story, but I couldn’t find them in the catalogs, so I stopped.

This is all relevant to one of my critiques of these stories, which is that they make it seem like having children at older ages has become more common than it was in the past. That’s true compared with 1980, but not 1960. The difference is that nowadays it’s more likely to be the mother’s first child. So Benedikt is way off when she writes,

Remember how there was that one kid in your high school class whose parents were sooooo old that it was weird and creepy? That’s all of us now. Oops.

As I showed, 40-year-old women are less likely to have children now than they were when she was a kid. And when Roiphe writes of the “50-year-old mother in the kindergarten class [who] attracts a certain amount of catty interest and disapproval,” she should be aware that the disapproval — which I don’t doubt exists — is not about the increased frequency of older mothers, but about how people think about them.

Philip N. Cohen is a professor of sociology at the University of Maryland, College Park, and writes the blog Family Inequality. You can follow him on Twitter or Facebook.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

We often hear that Christopher Columbus “discovered” America, a word that erases the 50 million-plus inhabitants of the continent who were already here when his boat arrived. A person can’t discover something that another person already knows about.  In the American telling of the story, however, the indigenous population doesn’t count as people. Their knowledge isn’t real.

This dismissal of knowledge-of-a-thing until the “right” people know about it is a common tendency, and another example was sent in by Jordan G. last week.  CNN, ABC, CBS, and the Los Angeles Times, among other news outlets, reported that a new species of monkey was “discovered.”

So where did they find this monkey?  Tied to a post in a Congolese village; it was a pet.  

Cercopithecus lomamiensis (Lesula)
By John A. Hart et al. Wikimedia Commons.

So someone knew about these monkeys.  It just wasn’t the right kind of person.  In this case, the right kind of person was a (bona fide) scientist (with credentials and institutional privileges not unrelated to living in the West).

Now I’m not saying that it doesn’t matter that a trained scientist encountered the monkey and established it as a unique and previously undocumented species.  The team did a lot of work to establish this.  As the Times, which otherwise does a fine job on the story, explains:

Convinced the species was novel, team leader John Hart began an exhaustive three-year study to describe the monkey, and to differentiate it from its nearest neighbor, the owl face monkey. The study included geneticists and biological anthropologists, who helped confirm that the monkey was different from the owl face, though the two share a common genetic ancestor.

In other words, something significant happened because those scientists happened upon this monkey.  But to say that they “discovered” it is to mischaracterize what occurred. The scientists write that it was “previously undescribed,” which is far more accurate. Their language also doesn’t erase the consciousness of the people of the Congo, where this monkey is “endemic.”   In fact, they recommend the short-hand name “lesula,” “as it is the vernacular name used [by people who’ve known about it, probably for generations] over most of its known range.”  In doing so, they acknowledge the species’ relationship to a population of human beings, making them visible and significant.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012. Cross-posted at The Huffington Post.

If you live in the U.S., you are absolutely bombarded with the idea that being overweight is bad for your health.  This repetition leaves one with the idea that being overweight is the same thing as being unhealthy, something that is simply not true.  In fact, people of all weights can be either healthy or unhealthy; overweight people (defined by BMI) may actually have a lower risk of premature death than “normal” weight people.  Being fat is simply not the same thing as being unhealthy.

The Health At Every Size (HAES) movement attempts to interrupt the conflation of health and thinness by arguing that, instead of using one’s girth as an indicator of one’s health, we should be focusing on eating/exercising habits and more direct health measures (like blood pressure and cholesterol).

A recent study offered the HAES movement some interesting ammunition in this battle. The study recruited almost 12,000 people of varying BMIs and followed them for 170 months as they adopted healthier habits.  Their conclusion? “Healthy lifestyle habits are associated with a significant decrease in mortality regardless of baseline body mass index.”

Take a look.  The “hazard ratio” refers to the risk of dying early, with 1 being the baseline.  The “habits” along the bottom count how many healthy habits a person reported.  The shaded bars represent people of different BMIs from “healthy weight” (18.5-24.9) to “overweight” (25-29.9), to “obese” (over 30).
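For readers who want the statistic spelled out, a hazard ratio is just one group’s instantaneous risk of death divided by a reference group’s risk (the notation below is mine, not the study’s):

```latex
% Hazard ratio of group g relative to the reference group
\mathrm{HR}_g(t) = \frac{h_g(t)}{h_{\mathrm{ref}}(t)},
\qquad
h(t) = \lim_{\Delta t \to 0}
\frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
```

So a hazard ratio of 1 means the same risk of dying early as the baseline group, and a ratio of 2 means roughly double that risk.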

The three bars on the far left show the relative risk of premature death for people with zero healthy habits. They suggest that being overweight increases that risk, and being obese much more so.  The three bars on the far right show the relative risk for people with four healthy habits; the differential risk among them is essentially zero; for people with healthy habits, then, being fatter is not correlated with an increased relative risk of premature death.  For everyone else in between, we more-or-less see the expected reduction in mortality risk given those two poles.

This data doesn’t refute the idea that fat matters.  In fact, it shows clearly that thinness is protective if people are doing absolutely nothing to enhance their health.  It also suggests, though, that healthy habits can make all the difference.  Overweight and obese people can have the same mortality risk as “normal” weight people; therefore, we should reject the idea that fat people are “killing themselves” with their extra pounds.  It’s simply not true.

h/t to BigFatBlog.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

A recent episode of Radiolab centered on questions about colors.  It profiled a British man who, in the 1800s, noticed that neither The Odyssey nor The Iliad included any references to the color blue.  In fact, it turns out that, as languages evolve words for color, blue is always last.  Red is always first.  This is the case in every language ever studied.

Scholars theorize that this is because red is very common in nature, but blue is extremely rare.  The flowers we think of as blue, for example, are usually more violet than blue; very few foods are blue.  Most of the blue we see today is part of artificial colors produced by humans through manufacturing processes.  So, blue is the last color to be noticed and named.

An exception to the rarity of blue in nature, of course — one that might undermine this theory — is the sky.  The sky is blue, right?

Well, it turns out that seeing blue when we look up is dependent on already knowing that the sky is blue.  To illustrate, the hosts of Radiolab interviewed a linguist named Guy Deutscher who did a little experiment on his daughter, Alma.  Deutscher taught her all the colors, including blue, in the typical way: pointing to objects and asking what color they were.  In the typical way, Alma mastered her colors quite easily.

But Deutscher and his wife avoided ever telling Alma that the sky was blue.  Then, one day, he pointed to a clear sky and asked her, “What color is that?”

Alma, at first, was puzzled.  To Alma, the sky was a void, not an object with properties like color.  It was nothing. There simply wasn’t a “that” there at all.  She had no answer.  The idea that the sky is a thing at all, then, is not immediately obvious.

Deutscher kept asking on “sky blue” days and one day she answered: the sky was white.  White was her answer for some time and she only later suggested that maybe it was blue.  Then blue and white took turns for a while, and she finally settled on blue.

The story is a wonderful example of the role of culture in shaping perception.  Even things that seem objectively true may only seem so if we’ve been given a framework with which to see it; even the idea that a thing is a thing at all, in fact, is partly a cultural construction.  There are other examples of this phenomenon.  What we call “red onions” in the U.S., for another example, are seen as blue in parts of Germany.  Likewise, optical illusions that consistently trick people in some cultures — such as the Müller-Lyer illusion — don’t often trick people in others.

So, next time you look into the sky, ask yourself what you might see if you didn’t see blue.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

For the last week of December, we’re re-posting some of our favorite posts from 2012.

Today, most people in the U.S. see childhood as a stage distinct from adulthood, and even from adolescence. We think children are more vulnerable and innocent than adults and should be protected from many of the burdens and responsibilities that adult life requires. But as Steven Mintz explains in Huck’s Raft: A History of American Childhood, “…childhood is not an unchanging biological stage of life but is, rather, a social and cultural construct…Nor is childhood an uncontested concept” (p. viii). Indeed,

We cling to a fantasy that once upon a time childhood and youth were years of carefree adventure…The notion of a long childhood, devoted to education and free from adult responsibilities, is a very recent invention, and one that became a reality for a majority of children only after World War II. (p. 2)

Our ideas about what is appropriate for children to do have changed radically over time, often as a result of political and cultural battles between groups with different ideas about the best way to treat children. Most of us would be shocked by the level of adult responsibility children were routinely expected to shoulder a century ago.

Reader RunTraveler let us know that the Library of Congress has posted a collection of photos by Lewis Hine, all depicting child labor in the early 1900s in the U.S. The photos are a great illustration of our changing ideas about childhood, showing the range of jobs, many requiring very long hours in often dangerous or extremely unpleasant conditions, that children did. I picked out a few (with some of Hine’s comments below each photo), but I suggest looking through the full Flickr set or the full collection of over 5,000 photos of child laborers from the National Child Labor Committee:

“John Howell, an Indianapolis newsboy, makes $.75 some days. Begins at 6 a.m., Sundays.” 1908. Source.

“Interior of tobacco shed, Hawthorn Farm. Girls in foreground are 8, 9, and 10 years old. The 10 yr. old makes 50 cents a day.” 1917. Source.

“Eagle and Phoenix Mill. ‘Dinner-toters’ waiting for the gate to open.” 1913. Source.

“Vance, a Trapper Boy, 15 years old. Has trapped for several years in a West Va. Coal mine. $.75 a day for 10 hours work. All he does is to open and shut this door: most of the time he sits here idle, waiting for the cars to come. On account of the intense darkness in the mine, the hieroglyphics on the door were not visible until plate was developed.” 1908. Source.

“Rose Biodo…10 years old. Working 3 summers. Minds baby and carries berries, two pecks at a time. Whites Bog, Brown Mills, N.J. This is the fourth week of school and the people here expect to remain two weeks more.” 1910. Source.

Hine’s photos make it clear how common child labor was, but their very existence also documents the cultural battle over the meaning of childhood taking place in the 1900s. Hine worked for the National Child Labor Committee, and his photos and especially his accompanying commentary express concern that children were doing work that was dangerous, difficult, poorly-paid, and that interfered with their school attendance.

In fact, the NCLC’s efforts contributed to the passage of the Keating-Owen Child Labor Act in 1916, the first law to regulate the use of child workers (limiting hours and forbidding interstate commerce in items produced by children under various ages, depending on the product). The law was ruled unconstitutional by the Supreme Court in 1918. This resulted in an extended battle between supporters and opponents of child labor laws, as another law was passed and then struck down by the courts, followed by successful efforts to stall any more legislation in the 1920s based on states’-rights and anti-Communist arguments. Only in 1938, with the passage of the Fair Labor Standards Act as part of the New Deal, did child workers receive specific protections.

Even then, we had loopholes. While child labor in factories or mines was redefined as inappropriate, even exploitative and cruel, a child babysitting or delivering newspapers for money was often interpreted as character-building. Today, the cultural battle over the use of children as workers continues. This year, the Labor Department retracted proposed changes that would have restricted the types of farmwork children could be hired to do, after significant push-back from farmers and legislators afraid the rules would apply to kids working on their own families’ farms.

As Mintz said, childhood is a contested concept, and the struggle to decide what kind of work, if any, is appropriate for any child to do continues.

For more examples, see Lisa’s 2009 post about child labor.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.