social structure

Flashback Friday.

The percentage of carless households in any given city correlates strongly with the percentage of homes built before 1940. So what happened in the 1940s?

According to Left for LeDroit, it was suburbs:

The suburban housing model was — and, for the most part, still is — based on several main principles, most significantly, the uniformity of housing sizes (usually large) and the separation of residential and commercial uses. Both larger lots and the separation of uses create longer distances between any two points, requiring a greater effort to go between home, work, and the grocery store.

These longer distances between daily destinations made walking impractical and the lower population densities made public transit financially unsustainable. The only solution was the private automobile, which, coincidentally, benefited from massive government subsidies in the form of highway building and a subsidized oil infrastructure and industry.

Neighborhoods built after World War II are designed for cars, not pedestrians; the opposite is true of neighborhoods built before 1940. Whether one owns a car, and how far one drives if one does, then, depends on the type of city, not on personal characteristics like environmental friendliness. Ezra Klein puts it nicely:

In practice, this doesn’t feel like a decision imposed by the cold realities of infrastructure. We get attached to our cars. We get attached to our bikes. We name our subway systems. We brag about our short walks to work. People attach stories to their lives. But at the end of the day, they orient their lives around pretty practical judgments about how best to live. If you need a car to get where you’re going, you’re likely to own one. If you rarely use your car, have to move it a couple of times a week to avoid street cleaning, can barely find parking and have trouble avoiding tickets, you’re going to think hard about giving it up. It’s not about good or bad or red or blue. It’s about infrastructure.

Word.

Neither Ezra nor Left for LeDroit, however, points out that every city, whether it was built for pedestrians or cars, is full of people without cars. In car-dependent cities, these are mostly people who can’t afford to buy or keep a car. And these people, in these cities, are royally screwed. Los Angeles, for example, is the most expensive place in the U.S. to own a car, and its residents are highly car-dependent; lower-income people who can’t afford a car must spend extraordinary amounts of time using our mediocre public transportation system, such that carlessness contributes significantly to unemployment.

Originally posted in 2010.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Last month one media behemoth, AT&T, announced it would purchase another, Time Warner, for $85.4 billion. AT&T provides telecommunications services, while Time Warner provides content. The merger represents just one more step in decades of media consolidation, the placing of control over media and media provision into fewer and fewer hands. This graphic, from the Wall Street Journal, illustrates the merger histories of the two companies now proposing to combine:


The purchase raises several issues regarding consumer protections – particularly over privacy, competition, price hikes, and monopoly power in certain markets – and one of these is related to race.

A third of the American population identifies as Latino, African American, Asian American, or Native American, yet members of these groups own only 5% of television stations and 7% of radio stations. Large-scale mergers like the proposed one between AT&T and Time Warner exacerbate this exclusion. Minority-owned media companies tend to be smaller, and mergers make it even harder for them to compete with ever-larger media conglomerates. As a result, minority-owned companies close or are sold, and the barriers to entry rise as well. The research is clear: media consolidation is bad for media diversity.

After the #OscarsSoWhite controversy, the Academy of Motion Picture Arts and Sciences committed to increasing diversity on screen, and technology companies have vowed to increase their workforce diversity, but such commitments have done relatively little to improve representation. Such “gentlemen’s agreements” are largely voluntary and are mostly false promises for communities of color.

Advocacy groups and federal authorities should not rely on Memoranda of Understanding to advance inclusion goals. When the AT&T/Time Warner deal gets to the Federal Communications Commission, scrutiny in the name of “public interest” should include the issue of minorities’ inclusion in both the media and technology industries. In a diverse nation struggling with ongoing racial injustices, leaving underrepresented communities out of media merger debates is a disservice not only to those communities, but to us all.

Jason A. Smith is a PhD candidate in the Public Sociology program at George Mason University. His research focuses on race and the media. He recently co-edited the book Race and Contention in Twenty-first Century U.S. Media (Routledge, 2016). He tweets occasionally.

Despite popular notions that the U.S. is now “post-racial,” numerous recent events (such as the Rachel Dolezal kerfuffle and the Emmanuel AME Church shooting) have clearly showcased how race and racism continue to play a central role in the functioning of contemporary American society. But why is it that public rhetoric is at such odds with social reality?

A qualitative study by sociologists Natasha Warikoo and Janine de Novais provides insights. By conducting interviews with 47 white students at two elite universities, they explore the “lenses through which individuals understand the role of race in society.” Warikoo and de Novais call these lenses race frames, and they articulate two that their respondents rely on in making sense of race and race relations:

  • The color-blind frame: the U.S. is now a “post-racial” society where race has little social meaning or consequence.
  • The diversity frame: race is a “positive cultural identity” and the incorporation of a multitude of perspectives (also referred to as multiculturalism) is beneficial to all those involved.

Integral to Warikoo and de Novais’ study is the finding that about half of their student respondents simultaneously hold both the color-blind and diversity frames. Of the 24 students who held a color-blind frame, 23 also promoted a diversity frame. Warikoo and de Novais explain this discursive discordance as a product of the environments in which respondents have resided: a pre-college environment where race is typically de-emphasized, and a college environment that amplifies the importance of diversity and multiculturalism.

Importantly, Warikoo and de Novais argue that the salience of these two co-occurring race frames is significant not only because of their seeming contradictions, but because they share conceptions of race that largely ignore a structural frame: the idea that social structures are an important source of racism and racial inequality in the U.S. Ultimately, Warikoo and de Novais’ findings illustrate the general ambivalence that their white respondents share about race and race-based issues — undoubtedly reflective of the discrepancies concerning race in broader society.

Cross-posted at Discoveries.

Stephen Suh is a PhD candidate in Sociology at the University of Minnesota and a graduate board member at The Society Pages. His dissertation research examines the growing global trend of ethnic return migration through the perspectives of Korean Americans.

I was on jury duty this week, and the greatest challenge for me was the “David Brooks temptation” to use the experience to expound on the differences in generations and the great changes in culture and character that technology and history have brought.

I did my first tour of duty in the 1970s. Back then you were called for two weeks. Even if you served on a jury, after that trial ended, you went back to the main jury room. If you were lucky, you might be released after a week and a half. Now it’s two days.

What struck me most this time was the atmosphere in the main room. Now, nobody talks. You’re in a large room with maybe two hundred people, and it’s quieter than a library. Some are reading newspapers or books, but most are on their laptops, tablets, and phones. In the 1970s, it wasn’t just that there was no wi-fi; there was no air conditioning. Remember “12 Angry Men”? We’re in the same building. Then, you tried to find others to talk to. Now you try to find a seat near an electric outlet to plug in your charger.


I started to feel nostalgic for the old system. People nowadays – all in their own narrow, solipsistic worlds, nearly incapable of ordinary face-to-face sociability. And so on.

But the explanation was much simpler. It was the two-day hitch. In the old system, social ties didn’t grow from strangers seeking out others in the main jury room. They grew when you went to a courtroom for voir dire. You were called down in groups of forty. The judge sketched out the case, and the lawyers interviewed the prospective jurors. From their questions, you learned more about the case, and you learned about your fellow jurors – neighborhood, occupation, family, education, hobbies. You heard what crimes they’d been victims of. When the judge called a break for bathroom or lunch or some legal matter, you could find the people you had something in common with. And you could talk with anyone about the case, trying to guess what the trial would bring. If you weren’t selected for the jury, you went back to the main jury room, and you continued the conversations there. You formed a social circle that others could join.

This time, on my first day, there were only two calls for voir dire, the clerk as bingo-master spinning the drum with the name cards and calling out the names one by one. My second day, there were no calls. And that was it. I went home having had no conversations at all with any of my fellow jurors. (A woman seated behind me did say, “Can you watch my laptop for a second?” when she went to the bathroom, but I don’t count that as a conversation.)

I would love to have written 800 words here on how New York character had changed since the 1970s. No more schmoozing. Instead we have iPads and iPhones and MacBooks destroying New York jury room culture – Apple taking over the Big Apple. People unable or afraid to talk to one another because of some subtle shift in our morals and manners. Maybe I’d even go for the full Brooks and add a few paragraphs telling you what’s really important in life.

But it was really a change in the structure. New York expanded the jury pool by eliminating most exemptions. Doctors, lawyers, politicians, judges – they all have to show up. As a result, jury service is two days instead of two weeks, and if you actually are called to a trial, once you are rejected for the jury or after the trial is over, you go home.

The old system was sort of like the pre-all-volunteer army. You get called up, and you’re thrown together with many kinds of people you’d never otherwise meet. It takes a chunk of time out of your life, but you wind up with some good stories to tell. Maybe we’ve lost something. But if we have lost valuable experiences, it’s because of a change in the rules, in the structure of how the institution is run, not because of a change in our culture and character.

Cross-posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

“Stay-at-home mother” evokes black-and-white images of well-coiffed women in starched aprons. Yet far from being a vestige of a bygone era, stay-at-home moms are on the rise, according to the findings of a new Pew Research study. In 2012, 29% of women with children under the age of 18 stayed home, a number that has been rising since 1999 and is 3 percentage points higher than in 2008.

However, while more women are staying home with their children, the face of the stay-at-home mom has changed dramatically since the 1950s “Leave It to Beaver” days. Stay-at-home moms today are less educated and more likely to live in poverty than working moms. Younger mothers and immigrant mothers also make up a good portion of stay-at-home moms.

The story of why mothers are staying home is more complex than you might imagine, and it has more to do with the poor labor market, the exorbitant price of child care, and the contemporary structure of work. In a recent interview with Wisconsin Public Radio, Barbara Risman, a sociologist at the University of Illinois at Chicago, spoke about how this report has been picked up by the mainstream media:

What’s surprising to me is the headlines and how it’s portrayed in the news. Although the numbers are going up, when you look at what mothers say, 6% of the mothers in this study say they are home because they can’t find a job. When you take those 6% of mothers out, the results are rather flat. Part of the real story here then is that it’s hard to find a job that allows you to work and covers your child care, particularly if you have less education and your earning potential isn’t very high.

These days, stay-at-home moms, who are more likely to be less educated, often can’t earn enough to make working worthwhile; many times, their pay wouldn’t even cover the cost of child care. Beyond these important financial considerations, lower-wage shift work makes it extremely difficult to coordinate child care around work schedules that change on a weekly basis.

Erin Hoekstra is pursuing a PhD in Sociology at the University of Minnesota. This post originally appeared on Citings and Sightings and you can read all of Erin’s contributions to The Society Pages here.  Cross-posted at Pacific Standard.

Ed, at Gin & Tacos, made a fantastic observation about this photo of a 1960 lunch counter sit-in at a Woolworth’s in Greensboro, NC, protesting the exclusion of black customers.


“The most interesting thing about it,” he writes:

…is that the employee behind the Whites Only lunch counter is also black. That’s curious, since on the scale of intimate social contact one would think that having someone handle your food ranks above sitting next to a fully clothed stranger on adjacent stools.

This, he observes, tells us something important about prejudice.

When I first saw this picture and learned about this period in our history… I thought that racism was about believing that another race is inferior. Like most people I got (slightly) wiser with age and eventually figured out that racism is about keeping someone else beneath you on the social ladder… If you actually thought black people were dirty savages you wouldn’t eat anything they handed you. But of course it has nothing to do with that. You’re fine being served food because servility implies social inferiority. And you don’t want to sit next to them simply because it implies equality.

When we observe efforts to uphold unequal social conditions, it’s smart to think past notions of hatred and fear (which the term homophobia unfortunately implies) and to think instead about how the privileged benefit and what they would lose along with their superordinate status. Hate may be useful for justifying inequality, but at its root inequality is about power and resources, not emotions.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

This Duck Dynasty thing seems to have everyone’s undies in a culture-war bunch, with lots of hand-wringing about free speech (find out why this is ridiculous here), the persecution of Christians, and the racism, sexism, and homophobia of poor, rural, Southern whites.

There is, however, an underlying class story here that is going unsaid.


Phil Robertson is under fire for making heterosexist comments and trivializing racism in the South in an interview with GQ.  While I wholeheartedly and vociferously disagree with Robertson, I am also uncomfortable with how he is made to embody the “redneck.”  He represents the rural, poor, white redneck from the South who is racist, sexist, and homophobic.

This isn’t just who he is; we’re getting a narrative told by the producers of Duck Dynasty and editors at GQ—extremely privileged people in key positions of power making decisions about what images are proliferated in the mainstream media.  When we watch the show or read the interview, we are not viewing the everyday lives of Phil Robertson or the other characters.  We are getting a carefully crafted representation of the rural, white, Southern, manly man, regardless of whether or not the man, Phil Robertson, is a bigot (which, it seems, he is).

The stars of Duck Dynasty eight years ago (left) and today (right):


This representation has traction with the American viewing audience.  Duck Dynasty is the most popular show on A&E.  Folks love their Duck Dynasty.

There are probably many reasons why the show is so popular.  Might I suggest that one could be that the “redneck” as stereotyped culture-war icon is pleasurable because he simultaneously makes us feel superior, while saying what many of us kinda think but don’t dare say?

Jackson Katz talks about how suburban white boys love violent and misogynistic Gangsta Rap in particular (not all rap music is sexist and violent, but the most popular among white audiences tends to be this kind). Katz suggests that “slumming” in the music of urban, African American men allows white men to feel their privilege as white and as men.  They can symbolically exercise and express sexism and a sense of masculine power when other forms of sexism are no longer tolerated.  Meanwhile, everybody points to the rapper as the problem; no one questions the white kid with purchasing power.

Might some of the audience of Duck Dynasty be “slumming” with the bigot to feel their difference and superiority while also getting their own bigot on?  The popularity of the show clearly has something to do with the characters’ religiosity and rural life, but I’m guessing it also has something to do with the “redneck” spectacle, allowing others to see their own “backwoods” attitudes reinforced (I’m talking about racism, sexism, and homophobia, not Christianity).

He is a representation of a particular masculinity that makes him compelling to some and abhorrent to others, which also makes him the perfect pawn in the culture wars.  Meanwhile, we are all distracted from social structure and those who benefit from media representations of the rural, white, southern bigot. 

Sociologists Pierrette Hondagneu-Sotelo and Michael Messner suggest that pointing the finger at the racist and homophobic attitudes of rural, poor whites — or the sexist and homophobic beliefs of brown and black men, as in criticism of rap and hip hop — draws our attention away from structures of inequality that systematically serve the interests of wealthy, white, straight, urban men, who are ultimately the main beneficiaries.  As long as we keep our concerns on the ideological bigotry expressed by one type of loser in the system, no one notices the corporate or government policies and practices that are the real problem.

While all eyes are on the poor, rural, white, Southern bigot, we fail to see the owners of media corporations sitting comfortably in their mansions making decisions about which hilarious down-trodden stereotype to trot out next.  Sexist, homophobic, and racist ideology gets a voice, while those who really benefit laugh all the way to the bank.

Mimi Schippers, PhD is an Associate Professor of Sociology at Tulane University.  She is working on a book on the radical gender potential of polyamory.  Her first book was Rockin’ Out of the Box: Gender Maneuvering in Alternative Hard Rock.  You can follow her at Marx in Drag.

Cross-posted at The Huffington Post and Marx in Drag.  Photos from the Internet Movie Database and Today.

Many critics are praising 12 Years a Slave for its uncompromising honesty about slavery. It offers not one breath of romanticism about the antebellum South.  No Southern gentlemen getting all noble about honor, no Southern belles and their mammies affectionately reminiscing, or any of that other Gone With the Wind crap, just an inhuman system. 12 Years depicts the sadism not only as personal (though the film does have its individual sadists) but as inherent in the system – essential, inescapable, and constant.

Now, Noah Berlatsky at The Atlantic points out something else about 12 Years as a movie, something most critics missed – its refusal to follow the usual feel-good plot cliché of American film:

If we were working with the logic of Glory or Django, Northup would have to regain his manhood by standing up to his attackers and besting them in combat.

Django Unchained is a revenge fantasy. In the typical version, our peaceful hero is just minding his own business when the bad guy or guys deliberately commit some terrible insult or offense, which then justifies the hero unleashing violence – often at cataclysmic levels – upon the baddies. One glance at the poster for Django, and you can pretty much guess most of the story.


It’s the comic-book adolescent fantasy – the nebbish whom the other kids insult when they’re not just ignoring him, but who then ducks into a phone booth or says his magic word and transforms into the avenging superhero who puts the bad guys in their place.

This scenario sometimes seems to be the basis of U.S. foreign policy. An insult or slight, real or imaginary, becomes the justification for “retaliation” in the form of destroying a government or an entire country along with tens of thousands or hundreds of thousands of its people. It seems pretty easy to sell that idea to us Americans – maybe because the revenge-fantasy scenario is woven deeply into American culture –  and it’s only in retrospect that we wonder how Iraq or Vietnam ever happened.

Django Unchained and the rest are a special example of a more general story line much cherished in American movies: the notion that all problems – psychological, interpersonal, political, moral – can be resolved by a final competition, whether it’s a quick-draw shootout or a dance contest.  (I’ve sung this song before in this blog, most recently here after I saw Silver Linings Playbook.)

Berlatsky’s piece on 12 Years points out something else I hadn’t noticed but that the Charles Atlas ad makes obvious: it’s all about masculinity. Revenge is a dish served almost exclusively at the Y-chromosome table.  The women in the story play a peripheral role as observers of the main event – an audience the hero is aware of – or as prizes to be won or, infrequently, as the hero’s chief source of encouragement, though that role usually goes to a male buddy or coach.

But when a story jettisons the manly revenge theme, women can enter more freely and fully.

12 Years a Slave, though, doesn’t present masculinity as a solution to slavery, and as a result it’s able to think about and care about women as people rather than as accessories or MacGuffins.

Scrapping the revenge theme can also broaden the story’s perspective from the personal to the political (i.e., the sociological):

12 Years a Slave doesn’t see slavery as a trial that men must overcome on their way to being men, but as a systemic evil that leaves those in its grasp with no good choices.

From that perspective, the solution lies not merely in avenging evil acts and people but in changing the system and the assumptions underlying it, a much lengthier and more difficult task. After all, revenge is just as much an aspect of that system as are the insults and injustices it is meant to punish. When men start talking about their manhood or their honor, there’s going to be blood, death, and destruction – sometimes a little, more likely lots of it.

One other difference between the revenge fantasy and political reality: in real life, the results of revenge are often short-lived. Killing off an evildoer or two doesn’t do much to end the evil. In the movies, we don’t have to worry about that. After the climactic revenge scene and peaceful coda, the credits roll, and the house lights come up. The End. In real life, though, we rarely see such clear endings, and we should know better than to believe a sign that declares “Mission Accomplished.”

Cross-posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.