
Way back in 1996 sociologist Susan Walzer published a research article pointing to one of the more insidious gender gaps in household labor: thinking. It was called “Thinking about the Baby.”

In it, Walzer argued that women do more of the intellectual and emotional work of childcare and household maintenance. They do more of the learning and information processing (like buying and reading “how-to” books about parenting or researching pediatricians). They do more worrying (like wondering if their child is hitting his developmental milestones or has enough friends at school). And they do more organizing and delegating (like deciding when towels need washing or what needs to be purchased at the grocery store), even when their partner “helps out” by accepting assigned chores.

For Mother’s Day, a parenting blogger named Ellen Seidman powerfully describes this exhausting and almost entirely invisible job. I am compelled to share. Her essay centers on the phrase “I am the person who notices…” It starts with the toilet paper running out and it goes on… and on… and on… and on. Read it.

She doesn’t politicize what she calls an “uncanny ability to see things… [that enable] our family to basically exist.” She defends her husband (which is fine) and instead relies on a “reduction to personality,” that technique of dismissing unequal workloads first described in the canonical book The Second Shift: somehow it just so happens that it’s her and not her husband that notices all these things.

But I’ll politicize it. The data suggest that it is not an accident that it is she and not her husband who does this vital and brain-engrossing job. Nor is it an accident that it is a job that gets almost no recognition and no pay at all. It’s work women disproportionately do all over America. So, read it. Read it and remember to be thankful for whoever it is in your life who does these things. Or, if it is you, feel righteous and demand a little more recognition and burden sharing. Not on Mother’s Day. That’s just one day. Every day.

Cross-posted and in print at Money.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Police brutality is a problem in US criminal justice. Police-worn body cameras are one potential “remedy” for these violent encounters, but they have both benefits and drawbacks. The cameras may increase transparency and improve police legitimacy; promote legally compliant behavior among both police officers and citizens; enhance the quality of evidence, improving resulting legal proceedings; and deter officers’ use of force. Conversely, body-worn cameras could create privacy concerns for officers and citizens alike and place a large logistical and financial burden on already cash-strapped law enforcement agencies.


This issue is so timely that research is only now starting to see publication, but we do have some early insights. The first observational studies examining the use of police-worn body cameras were carried out in England and Scotland. They found rates of citizen complaints dropped after body cameras were introduced. Preliminary results from an experimental study in Phoenix, Arizona also suggest that the use of body cameras reduces both self-reported and official records of citizen complaints.

The first experimental evidence concerning use of force comes from a large study in the Rialto, California Police Department, and the results should encourage advocates of body cameras. The study randomly assigned particular police shifts to wear body cameras (the “treatment”). Shifts in the treatment condition saw significantly reduced use of force and significantly fewer citizen complaints against the police. Shifts in the control condition, in contrast, saw roughly twice as much use of force as the treatment condition.

The research so far suggests that body cameras are a promising way to reduce unnecessary use of force.

Ryan Larson is a graduate student studying the sociology of crime at the University of Minnesota. His research interests extend to statistics, sport, and media. He writes for and is on the Graduate Editorial Board of The Society Pages. This post originally appeared at There’s Research on That! 

The Wall Street Journal’s Real Time Economics recently looked at wealth inequality. The first chart, taken from the post, shows wealth differences by race and age of the head of the family.

[Chart: average and median net worth by race and age of family head]

Racial differences (white versus black and Hispanic) dominate whether looking at average or median net worth, and the gap grows as the head of the family ages.  Median figures are especially sobering, showing the limited wealth generation of representative black and Hispanic heads of families regardless of age.

So, do these advantages and disadvantages transfer to the next generation? Yes, and not just at the margins. This second chart looks at the relationship between inheritance and wealth generation.

[Chart: average net worth by size of inheritance received]

Inheritance was divided into ten groups.  WARNING: THE TENTH GROUP, WHICH RECEIVED THE LARGEST INHERITANCE, IS NOT SHOWN.

As Josh Zumbrun, the author of the blog, explains:

The bottom 10% of inheritors received an inheritance averaging only about $2,000. Families receiving this much inheritance aren’t that wealthy.

But among families that received a $35,000 inheritance, their net worth is over half a million. Families that received a $125,000 inheritance are worth $780,000 on average and those that receive a $200,000 inheritance are, on average, millionaires. (The top 10% of inheritors, not pictured in this chart, inherit $1.6 million on average and have a net worth of $4.2 million.)

The take-away is pretty simple: Wealth inequality is real, with strong racial determinants, and is also, to a significant degree, self-reinforcing.

Originally posted at Reports from the Economic Front.

———————————

Martin Hart-Landsberg is a professor of economics at Lewis and Clark College. You can follow him at Reports from the Economic Front.

NPR recently aired a story about female lawmakers’ representation state by state. According to the story, Colorado has the most women, with female lawmakers making up 42% of its legislature. Wyoming has the fewest, with women representing only 13% of state lawmakers.

NPR’s experts suggested that term limits in Colorado and a female-friendly party leadership were behind its high number of female legislators, whereas a change in Wyoming from multi-member to single-member districts in the 1990s was unfavorable to women (because voters have to pick only one candidate and tend to lean toward men when they have to make hard choices). The story also mentioned voting rules and the difficulty of balancing home, work, and lawmaking responsibilities.

In fact, sociologists have been studying this issue in depth for some time and a few years ago Deborah Carr summarized the reigning wisdom on why women are less likely to be politicians. She highlighted six factors to explain the gender gap in the US Congress:

  1. Women have to face sexism (e.g., glass ceiling – Nancy Pelosi used the term marble ceiling in her inaugural speech as Speaker in 2007), especially voters’ sex role stereotyping “what women can and should be.”
  2. Women are not in the “pipeline,” suggesting that not enough women are in careers that have historically led to political office.
  3. Because of gendered wealth and income inequality, women don’t as often have enough money to run multi-million-dollar campaigns, nor access to social networks full of big donors.
  4. Women have different interests, focusing on “issues related to family and social welfare, rather than national defense and international relations.”
  5. Women are less likely to be risk-takers than their male counterparts, perhaps explaining why women must be asked several times before they seriously consider launching campaigns.
  6. Women opt out of politics because of family responsibilities.


To improve female participation in politics, we should promote more gender-neutral political environments. Political parties should take further steps to recruit and support female candidates, as Colorado seems to be doing. We should repeatedly encourage women to run for office, since they need a lot of encouragement before they seriously consider launching candidacies. More importantly, we need to seed the pipeline by encouraging young girls to get involved in student government and to see governing as compatible with their interests and abilities.

Sangyoub Park, PhD is a professor of sociology at Washburn University. His research interests include social capital, demographic trends, and post-Generation Y.  

One word in the headlines last week seemed like a throwback to an earlier era:

As Trump moves to soften his image, Democrats seek to harden it

— The Washington Post

Donald Trump to reshape image, new campaign chief tells G.O.P.

— The New York Times

Trump surrogates say GOP front-runner “projecting an image” during primaries

— Fox News

It was in the 1960s that politicians, their handlers, and the people who write about them discovered image. The word carries the cynical implication that voters, like shoppers, respond to the surface image rather than the substance – the picture on the box rather than what’s inside. A presidential campaign was based on the same thing as an advertising campaign – image. You sold a candidate the same way you sold cigarettes, at least according to the title and book jacket of Joe McGinniss’s The Selling of the President.

Then, sometime around 1980, image began to fade. In its place we now have brand. I went to Google N-grams and looked at the ratio of image to brand in both the corporate and the political realm. The pattern is nearly identical.


The ratio rises steeply from 1960 to 1980 – lots more talk about image, no increase in brand. Then the trend reverses. Sightings of image were still rising, but nowhere near as rapidly as brand, which doubled from 1980 to 2000 in politics and quadrupled in the corporate world.
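For readers who want to replicate this kind of comparison, the ratio is simple to compute once you have per-year frequencies for the two words. The sketch below is a minimal illustration, not the post’s actual method: the frequency numbers are invented placeholders, and in practice you would pull real counts from the Google Books Ngram data.

```python
# Sketch: computing a per-year "image"/"brand" ratio from ngram-style
# frequency data. The numbers below are invented placeholders, not real
# Google Books figures.

# year -> (frequency of "image", frequency of "brand"), per million words
freqs = {
    1960: (4.0, 2.0),
    1980: (12.0, 2.1),
    2000: (14.0, 8.4),
}

def image_brand_ratio(freqs):
    """Return {year: image/brand}, skipping years with a zero 'brand' count."""
    return {year: img / brd for year, (img, brd) in freqs.items() if brd}

ratios = image_brand_ratio(freqs)
```

Plotting `ratios` over time would reproduce the shape described above: a rising ratio while image surges, then a falling one as brand takes over.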

Image sounds too deceptive and manipulative; you can change it quickly according to the needs of the moment. Brand implies permanence and substance (not to mention Marlboro-man-like rugged independence and integrity). No wonder people in the biz prefer brand.

Decades ago, when my son was in grade school, I met another parent who worked in the general area of public relations. On seeing him at the next school function a few weeks later, I said, “Oh right, you work in corporate image-mongering.” I thought I said it jokingly, but he seemed offended. He was, I quickly learned, a brand consultant. Image bad; brand good.

In later communications, he also said that a company’s attempt to brand itself as something it’s not will inevitably fail.  The same thing supposedly goes for politics:

“One thing you learn very quickly in political consulting is the fruitlessness of trying to get a candidate to change who he or she fundamentally is at their core,” said Republican strategist Whit Ayres, who did polling for Rubio’s presidential campaign before he dropped out of the race. “So, is the snide, insulting, misogynistic guy we’ve seen really who Donald Trump is? Or is it the disciplined, respectful, unifying Trump we saw for seven minutes after the New York primary?”

These consultants are saying what another Republican said a century and a half ago: “You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

This seems to argue that political image-mongers have to be honest about who their candidate really is. But there’s another way of reading Lincoln’s famous line: You only need to fool half the people every four years.

Originally posted at Montclair SocioBlog.

———————

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Flashback Friday.

Americans tend to conflate the law and morality. We believe, that is, that we make things illegal because they’re immoral. While we might admit that there are exceptions, we tend to think that our laws generally reflect what is right and wrong, not a simple or arbitrary effort to control the population in ways that people who influence policy want.

This is why changing laws can sometimes be so hard. If it isn’t just about policy, but ethics, then changing a law means allowing something immoral to be legal.

In some other countries, people don’t think like this. They see law as simple public policy, not ethics, which leads to a different attitude toward enforcement.

In Amsterdam, for example, possession and cultivation of marijuana is a misdemeanor. Despite the city’s famous and deserved reputation for the open use of marijuana and the “coffee shops” that sell it, it’s illegal. The city, though, decided that policing it was more trouble than it was worth, so it has a policy of non-enforcement.

An even more fascinating example is the city’s approach to street-level sex work. While prostitution is legal in Amsterdam, “streetwalking” is not. Still, there will always be sex workers who can’t afford to rent a work space. These women, some of the most economically deprived, will be on the streets whether the city likes it or not.

Instead of adding to their problems by throwing them all in jails or constantly fining them, the city built a circular drive just outside of town equipped with semi-private stalls. In other words, the city decided against enforcing the law on “streetwalking” and instead spent tax money to build a location in which individuals could engage in behavior that was against the law… and they considered it a win-win.

I thought of this when Julieta R. sent in this picture, shot by her friend at the Aberdeen Pub in Edinburgh, Scotland. Sex in the bathroom, it appears, had begun to inconvenience customers. But, instead of trying to eradicate the behavior, the Pub just said: “Ok, fine, but just keep it to cubicle no. 4.”

Americans would never go for this. Because we think it’s immoral to break the law, not just illegal, we would consider this to be hypocrisy. It doesn’t matter if enforcing the law is impractical (marijuana), if doing so does more harm than good (sex work), or if it’d be easier and cheaper not to do it (cubicle no. 4), in America we believe that the person breaking the law is bad and letting them get away with it is letting a bad person go unpunished.

If we had a practical orientation toward the law, though, instead of a moral one, we might be quicker to change laws, be more willing to weigh the benefits of enforcement with its costs, be able to consider whether enforcement is ethical, feel more comfortable with just letting people break the law, and even helping them do so, if we decided that it was the “right” thing to do.

This post originally appeared in 2010.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

“The poor fellow died of Nostalgia,” said a war surgeon in 1861. “Deaths from this cause are very frequent in the army.”

During the Civil War, physicians believed that acute homesickness was a genuine disease, and a sometimes fatal one. Symptoms included heart palpitations, fever, lesions, lack of appetite, incontinence, bowel irregularities and, ultimately, dementia. A veteran of the war described homesickness as “vampyre-like,” sucking the life out of soldiers.

“The soldier’s dream of home” (Library of Congress):

Writing in the New York Times, historian Susan Matt notes that “between 1861 and 1866, 5,537 Union soldiers suffered homesickness acutely enough to come to a doctor’s attention, and 74 died of it.” Some believed that homesickness was the single most deadly threat to soldiers, above and beyond the war itself.

Physicians debated how best to avert nostalgia. Some said not enough letters from home caused it; others said too many could do so. Some units prohibited music that reminded men of home or sang its praises. They wondered whether young men — barely more than boys — were most susceptible. Or whether it was grown men, like the man in the image above — accustomed to the comforts of domestic life — who would miss home the most. If homesickness proved untreatable, soldiers could be granted a furlough as a last resort, and a few were honorably discharged, simply unable to function away from home.

Susan Matt, who has written a book about the history of homesickness, points out that Americans don’t think of themselves as homebodies anymore. They’ve re-cast themselves as natural adventurers who seek novelty and new experiences. When Europeans arrived on the East Coast, they didn’t sit there, they went West! Today, people get the “travel bug.” We are now a nation of tourists.

And when people do express homesickness, Matt observes, writing for the Council on Contemporary Families, we see it as a different kind of pathology: weakness or immaturity. When young adults don’t want to leave home, we call it “failure to launch,” “boomerang kids,” or “the Peter Pan syndrome.” Colleges now shoo away “helicopter parents” and have “parting ceremonies” symbolizing a “cutting of the cord” between parent and child.

But the word “homesick” reminds us that it wasn’t always that way, nor was it always so easy to dismiss feelings of nostalgia and isolation. The notion that we should be ruggedly independent and eager to set out on our own is only about 90 years old. So, the homebodies out there who first heard the word “staycation” and said YES! are holding up a true American tradition.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Last week PBS hosted a powerful essay by law professor Ekow Yankah. He points to how the new opioid addiction crisis is being talked about very differently than addiction crises of the past. Today, he points out, addiction is being described and increasingly treated as a health crisis with a human toll. “Our nation has linked arms,” he says, “to save souls.”

Even just a decade ago, though, addicts weren’t victims; they were criminals.

What’s changed? Well, race. “Back then, when addiction was a black problem,” Yankah says about 30 years ago, “there was no wave of national compassion.” Instead, we were introduced to suffering “crack babies” and their inhuman, incorrigible mothers. We were told that crack and crime went hand-in-hand because the people involved were simply bad. We were told to fear addicts, not care for them. It was a “war on drugs” that was fought against the people who had succumbed to them.

Yankah is clear that this is a welcome change. But, he says, for African Americans, who would have welcomed such compassion for the drugs that devastated their neighborhoods and families, it is bittersweet.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.