methods/use of data

Last month, Green Book won Best Picture at the 91st Academy Awards. The movie tells the based-on-a-true-story tale of Tony Lip, a white working-class bouncer from the Bronx, who is hired to drive the world-class classical pianist Dr. Don Shirley on a tour of performances in the early-1960s Deep South. Shirley and Lip butt heads over their differences, encounter Jim Crow-era racism, and, ultimately, form an unlikely friendship. With period-perfect art direction and top-notch actors in Mahershala Ali and Viggo Mortensen, the movie is competently crafted and performed fairly well at the box office.

Still, the movie has also been controversial for at least two reasons. First, many critics have pointed out that the movie paints an overly simple account of racism and racial inequality and positions them as a problem of a long-ago past. New York Times movie critic Wesley Morris has called Green Book the latest in a long line of “racial reconciliation fantasy” films that have gone on to be honored at the Oscars.

But Green Book stands out for another reason. It’s an unlikely movie to win the Best Picture because, well, it’s just not very good.


Sociologists have long been interested in how Hollywood movies represent society and which types of movies the Academy does and doesn’t reward. Matthew Hughey, for example, has noted the overwhelming whiteness of Oscar winners, despite the Academy’s A2020 initiative aimed at improving the diversity of its membership by 2020. But, as Maryann Erigha shows, the limited number of people of color winning at the Oscars reflects, in part, the broader under-representation and exclusion of people of color in Hollywood.

Apart from race, past research by Gabriel Rossman and Oliver Schilke has found that the Oscars tend to favor certain genres like dramas, period pieces, and movies about media workers (e.g., artists, journalists, musicians). Most winners are released in the final few months of the year and have actors or directors with multiple prior nominations. By these criteria, Green Book had a lot going for it. Released during the holiday season, it is a historical movie about a musician, co-starring a prior Oscar winner and a multiple-time prior Oscar nominee. Sounds like perfect Oscar bait.

And, yet, quality matters, too. It’s supposed to be the Best Picture after all. The problem is that what makes a movie “good” is both socially constructed and a matter of opinion. Most studies that examine questions related to movies measure quality using the average of film critics’ reviews. Sites like Metacritic compile these reviews and produce composite scores on a scale from 0 (the worst reviewed movie) to 100 (the best reviewed movie). Of course, critics’ preferences sometimes diverge from popular tastes (see: the ongoing box office success of the Transformers movies, despite being vigorously panned by critics). Still, movies with higher Metacritic scores tend to do better at the box office, holding all else constant.

If more critically acclaimed movies do better at the box office, how does quality (or at least the average of critical opinion) translate into Academy Awards? It is certainly true that Oscar nominees tend to have higher Metacritic scores than the wider population of award-eligible movies. But the nominees are clearly not just a list of the most critically acclaimed movies of the year. Among the films eligible for this year’s awards, movies like The Rider, Cold War, Eighth Grade, The Death of Stalin, and even Paddington 2 all had higher Metacritic scores than most of the Best Picture nominees. So, while nominated movies tend to be better than most movies, they are not necessarily the “best” in the eyes of the critics.

Even among the nominees, it is not the case that the most critically acclaimed movie always wins. In the plot below, I chart the range of Metacritic scores of the Oscar nominees since the Academy Awards reinvented the category in 2009 (by expanding the number of nominees and changing the voting method). The top of the golden area represents the highest-rated movie in the pool of nominees and the bottom represents the lowest-rated film. The line captures the average of the pool of nominees and the dots mark each year’s winner.
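For readers who want to build a chart like this themselves, here is a minimal sketch of the ribbon-and-dots approach in Python with matplotlib. The years and scores below are placeholder values, not the actual Metacritic data.

```python
# Minimal sketch of the nominee-range chart; all numbers here are
# placeholders, not the real Metacritic scores.
import matplotlib.pyplot as plt

years  = [2009, 2010, 2011, 2012]
high   = [95, 94, 96, 93]   # best-reviewed nominee each year
low    = [55, 60, 58, 62]   # worst-reviewed nominee each year
mean   = [78, 76, 80, 75]   # average score of the nominee pool
winner = [84, 88, 92, 70]   # the Best Picture winner's score

# The golden ribbon spans the pool of nominees; the line is the
# pool average; the dots mark each year's winner.
plt.fill_between(years, low, high, color="gold", alpha=0.5, label="nominee range")
plt.plot(years, mean, color="black", label="nominee average")
plt.scatter(years, winner, color="firebrick", zorder=3, label="winner")
plt.ylabel("Metacritic score")
plt.legend()
plt.show()
```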


As we can see, the most critically acclaimed movie doesn’t always win, but the Best Picture is usually above the average of the pool of nominees. What makes Green Book really unusual as a Best Picture winner is that it’s well below the average of this year’s pool and the lowest-rated winner since 2009. Moreover, according to Metacritic (and the LA Times’ film critic Justin Chang), Green Book is the worst winner since Crash in 2005.

Green Book’s Best Picture win has led to some renewed calls to reconsider the Academy’s ranked choice voting system, in which voters indicate their order of preferences rather than voting for a single movie. The irony is that when Moonlight, a highly critically acclaimed movie with an all-black cast, won for 2016, that win was seen as a victory made possible by ranked choice voting. Now, in 2019, we have a racially controversial and unusually weak Best Picture winner that took home the award because it appears to have been the “least disliked” movie in the pool.
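To see how a “least disliked” movie can win under this system, here is a minimal sketch of instant-runoff counting, the kind of preferential tally the Academy uses for Best Picture. The ballots and film titles are hypothetical; each ballot ranks the films from most to least preferred.

```python
# Minimal sketch of instant-runoff counting with hypothetical ballots.
from collections import Counter

def instant_runoff(ballots):
    """Drop the last-place film each round until one has a majority."""
    remaining = {film for ballot in ballots for film in ballot}
    while True:
        # Each ballot counts toward its highest-ranked film still in the race.
        tally = Counter(
            next(film for film in ballot if film in remaining)
            for ballot in ballots
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):  # strict majority of all ballots
            return leader
        remaining.discard(min(tally, key=tally.get))

ballots = (
    4 * [["Film A", "Film B", "Film C"]]
    + 3 * [["Film C", "Film A", "Film B"]]
    + 2 * [["Film B", "Film C", "Film A"]]
)
# Film A leads on first choices (4 of 9), but once last-place Film B
# is eliminated, its ballots transfer to Film C, which wins 5-4:
# the "least disliked" film beats the plurality favorite.
print(instant_runoff(ballots))  # -> Film C
```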

The debate over ranked choice voting for the Academy Awards may ultimately end in further voting rule changes. Until then, we should regard a relatively weak movie like Green Book winning Best Picture as the exception to the rule.   

Andrew M. Lindner is an Associate Professor at Skidmore College. His research interests include media sociology, political sociology, and sociology of sport.

When I teach social statistics, I often show students how small changes in measurement or analysis can make a big difference in the way we understand the world. Recently, I have been surprised by some anger and cynicism that comes up when we talk about this. Often at least one student will ask, “Does it even matter if you can just rig the results to say whatever you want them to say?”

I can’t blame them. Controversy about manufactured disagreement on climate change, hoax studies, or the rise of fake news and “both side-ism” in our politics can make it seem like everyone is cooking the books to get results that make them happy. The social world is complicated, but it is our job to work through that complexity and map it out clearly, not to throw up our hands and say we can’t do anything about it. It’s like this optical illusion:

The shape isn’t just a circle or a square. We can’t even really say that it is both, because the real shape itself is complicated. But we can describe the way it is built to explain why it looks like a circle and a square from different angles. The same thing can happen when we talk about debates in social science.

A fun example of this popped up recently in the sociology of religion. In 2016, David Voas and Mark Chaves published an article in the American Journal of Sociology about how rates of religious commitment in the United States are slowly declining. In 2017, Landon Schnabel and Sean Bock published an article in Sociological Science responding to this conclusion, arguing that most of the religious decline was among moderate religious respondents—people with very strong religious commitments seemed to be holding steady. Just recently, both teams of authors have published additional comments about this debate (here and here), analyzing the same data from the General Social Survey.

So, who is right?

Unlike some recent headlines about this debate, the answer about religious decline isn’t just “maybe, maybe not.” Just like the circle/square illusion, we can show why these teams get different results with the same data.

Parallel Figures from Voas & Chaves (2018) and Schnabel & Bock (2018)

When we put the charts together, you can see how Voas and Chaves fit straight and smoothly curved lines to trends across waves in the GSS. This creates the downward-sloping pattern that fits their conclusions about slow religious decline over time. Schnabel and Bock don’t think a single straight line can accurately capture these trends, because the U.S. saw a unique peak in religious commitment during the Reagan years that may have receded more quickly. Their smoothing technique (LOESS smoothing) captures this peak and a quick decline afterwards, and doing so flattens out the rest of the trends after that period.
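To make the difference concrete, here is a minimal sketch that fits both kinds of lines to the same simulated trend, loosely in the spirit of each team’s approach. The data are invented to mimic a gentle decline with a mid-1980s peak, not the actual GSS estimates.

```python
# Minimal sketch: one linear fit and one LOESS fit to the same
# simulated trend. The data are invented, not the GSS estimates.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)
years = np.arange(1972, 2019, 2)
# A gentle decline, plus a bump peaking in the mid-1980s, plus noise.
share = (
    0.38
    - 0.001 * (years - 1972)
    + 0.04 * np.exp(-((years - 1985) / 6) ** 2)
    + rng.normal(0, 0.01, len(years))
)

# One straight line across all waves reads as steady decline.
slope, intercept = np.polyfit(years, share, 1)

# LOESS lets the peak and a post-peak plateau show up instead of
# averaging them into a single slope.
smoothed = lowess(share, years, frac=0.4)

plt.scatter(years, share, color="gray", label="survey waves")
plt.plot(years, intercept + slope * years, label="linear fit")
plt.plot(smoothed[:, 0], smoothed[:, 1], label="LOESS fit")
plt.ylabel("share with strong commitment")
plt.legend()
plt.show()
```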

The most important lesson from these charts is that they don’t totally get rid of the ambiguity about religious change. Rather than just ending the debate or rehashing it endlessly, this work helps us see how it might be more helpful to ask different questions about the historical background of the case. I like this example because it shows us how disagreement among experts can be an invitation to dig into the details, rather than a sign we should just agree to disagree. Research methods matter, and sometimes they can help us more clearly explain why we see the world so differently.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

Social institutions are powerful on their own, but they still need buy-in to work. When people don’t feel like they can trust institutions, they are more likely to find ways to opt out of participating in them. Low voting rates, religious disaffiliation, and other kinds of civic disengagement make it harder for people to have a voice in the organizations that influence their lives.

And, wow, have we seen some good reasons not to trust institutions over the past few decades. The latest political news only tops a list running from Watergate to Whitewater, Bush v. Gore, the 2008 financial crisis, clergy abuse scandals, and more.

Using data from the General Social Survey, we can track how confidence in these institutions has changed over time. For example, recent controversy over the Kavanaugh confirmation is a blow to the Supreme Court’s image, but strong confidence in the Supreme Court has been on the decline since 2000. Now, attitudes about the Court are starting to look similar to the way Americans see the other branches of government.

Source: General Social Survey Cumulative File
LOESS-smoothed trend lines follow weighted proportion estimates for each response option.

Over time, you can see trust in the executive and legislative branches drop as the proportion of respondents who say they have a great deal of confidence in each declines. The Supreme Court has enjoyed higher confidence than the other two branches, but even this has started to look more uncertain.
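For the curious, the “weighted proportion estimates” in the figure note amount to a weighted share per survey year. Here is a minimal sketch in Python assuming a GSS-style extract; the file name is hypothetical, while CONFED (confidence in the executive branch) and WTSSALL (the survey weight) follow the usual GSS variable naming.

```python
# Minimal sketch of a weighted proportion by survey year, assuming a
# GSS-style extract. "gss_extract.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("gss_extract.csv")

def weighted_share(group: pd.DataFrame, answer: str) -> float:
    """Weighted share of respondents giving `answer` in one survey year."""
    weights = group["WTSSALL"]
    return weights[group["CONFED"] == answer].sum() / weights.sum()

# These yearly points are what the LOESS trend lines are smoothed over:
# the weighted proportion answering "a great deal" in each wave.
trend = df.groupby("YEAR").apply(weighted_share, answer="A GREAT DEAL")
print(trend)
```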

For context, we can also compare these trends to other social institutions like the market, the media, and organized religion. Confidence in these groups has been changing as well.

Source: General Social Survey Cumulative File

It is interesting to watch the high and low trend lines switch over time, but we should also pay attention to who sits on the fence by choosing “only some” confidence on these items. More people are taking a side on the press, for example, but the middle is holding steady for organized religion and the Supreme Court.

These charts raise an important question about the nature of social change: are the people who lose trust in institutions moderate supporters who are driven away by extreme changes, or “true believers” who feel betrayed by scandals? When political parties argue about capturing the middle or motivating the base, or the church worries about recruiting new members, these kinds of trends are central to the conversation.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

Everyone has been talking about last week’s Senate testimony from Christine Blasey Ford and Supreme Court nominee Brett Kavanaugh. Amid the social media chatter, I was struck by this infographic from an article at Vox:

Commentators have noted the emotional contrast between Ford and Kavanaugh’s testimony and observed that Kavanaugh’s anger is a strategic move in a culture that is used to discouraging emotional expression from men and judging it harshly from women. Alongside the anger, this chart also shows us a gendered pattern in who gets to change the topic of conversation—or disregard it altogether.

Sociologists use conversation analysis to study how social forces shape our small, everyday interactions. One example is “uptalk,” a gendered pattern of pitched-up speech that conveys different meanings when men and women use it. Are men more likely to change the subject or ignore the topic of conversation? Two experimental conversation studies from American Sociological Review shed light on what could be happening here and show a way forward.

In a 1994 study that put men and women into different leadership roles, Cathryn Johnson found that participants’ status had a stronger effect on their speech patterns, while gender was more closely associated with nonverbal interactions. In a second study from 2001, Dina G. Okamoto and Lynn Smith-Lovin looked directly at changing the topic of conversation and did not find strong differences across the gender of participants. However, they did find an effect where men following male speakers were less likely to change the topic, concluding “men, as high-status actors, can more legitimately evaluate the contributions of others and, in particular, can more readily dismiss the contributions of women” (p. 867).

Photo Credit: Sharon Mollerus, Flickr CC

The important takeaway here is not that gender “doesn’t matter” in everyday conversation. It is that gender can have indirect influences on who carries social status into a conversation, and we can balance that influence by paying attention to who has the authority to speak and when. By consciously changing status dynamics—possibly by changing who is in the room or by calling out rule-breaking behavior—we can work to fix imbalances in who has to have the tough conversations.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

This week Hurricane Florence is making landfall in the southeastern United States. Sociologists know that the impact of natural disasters isn’t equally distributed and often follows other patterns of inequality. Some people cannot flee, and those who do often don’t go very far from their homes in the evacuation area, but moving back after a storm hits is often a multi-step process while people wait out repairs.

We often hear that climate change is making these problems worse, but it can be hard for people to grasp the size of the threat. When we study social change, it is useful to think about alternatives to the world that is—to view a different future and ask what social forces can make that future possible. Simulation studies are especially helpful for this, because they can give us a glimpse of how things may turn out under different conditions and make that thinking a little easier.

This is why I was struck by a map created by researchers in the Climate Extremes Modeling Group at Stony Brook University. In their report, the first of its kind, Kevin Reed, Alyssa Stansfield, Michael Wehner, and Colin Zarzycki mapped the forecast for Hurricane Florence and placed it side-by-side with a second forecast that adjusted air temperature, specific humidity, and sea surface temperature to conditions without the effects of human-induced climate change. It’s still a hurricane, but the difference in size and severity is striking:

Reports like this are an important reminder that the effects of climate change are here, not off in the future. It is also interesting to think about how reports like these could change the way we talk about all kinds of social issues. Sociologists know that narratives are powerful tools that can change minds, and projects like this show us where simulation can make for more powerful storytelling for advocacy and social change.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

Want to help fight fake news and manage political panics? We have to learn to talk about numbers.

While teaching basic statistics to sociology undergraduates, I noticed that students who thought they hated math would experience a brain shutdown when it was time to interpret their results. I felt the same way when I started in this field, and so I am a big advocate for working hard to bridge the gap between numeracy and literacy. You don’t have to be a statistical wizard to make your reporting clear to readers.

Sociology is a great field to do this, because we are used to going out into the world and finding all kinds of cultural tropes (like pointlessly gendered products!). My new favorite trope is the Half-Dozen Headline. You can spot them in the wild, or through Google News with a search for “half dozen.” Every time I read one of these headlines, my brain echoes with “half of a dozen is six.”

Sometimes, six is a lot:

Sometimes, six is not:

(at least, not relative to past administrations)

Sometimes, well, we just don’t know:

Is this five deaths (nearly six)? Is a rate of about two deaths a year in a Walmart parking lot high? If people already struggle to interpret raw numbers, wrapping your findings in fuzzy language only makes the problem worse.

Spotting Half-Dozen Headlines is a great introductory exercise for classes in social statistics, public policy, journalism, or other fields that use applied data analysis. If you find a favorite Half-Dozen Headline, be sure to send it our way!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

The Star Tribune recently ran an article about a new study from George Washington University tracking cases of Americans who traveled to join jihadist groups in Syria and Iraq since 2011. The print version of the article was accompanied by a graph showing that Minnesota has the highest rate of cases in the study. TSP editor Chris Uggen tweeted the graph, noting that this rate represented a whopping seven cases in the last six years.

Here is the original data from the study next to the graph that the paper published:


Social scientists often focus on rates when reporting events, because it makes cases easier to compare. If one county has 300 cases of the flu, and another has 30,000, you wouldn’t panic about an epidemic in the second county if it had a city with many more people. But relying on rates to describe extremely rare cases can be misleading.

For example, the data show that this graph misses some key information. California and Texas had more individual cases than Minnesota, but their large populations hide this difference in the rates. Sorting by rates here makes Minnesota look a lot worse than other states, while the number of cases is not dramatically different.
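Here is a minimal sketch of that comparison in Python. Minnesota’s seven cases come from the article; the other counts and the population figures are rough illustrative values, not the study’s exact numbers.

```python
# Minimal sketch: raw counts vs. per-capita rates for rare events.
# Only Minnesota's count (7) comes from the article; the rest are
# rough illustrative values.
cases = {"Minnesota": 7, "California": 10, "Texas": 9}
population_millions = {"Minnesota": 5.6, "California": 39.5, "Texas": 28.3}

for state, count in cases.items():
    rate = count / population_millions[state]  # cases per million residents
    print(f"{state}: {count} cases, {rate:.2f} per million")

# Minnesota tops the per-million ranking even though its raw count is
# within a few cases of the much larger states. Sorting rare events
# by rate makes a tiny difference look dramatic.
```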

As far as I can tell, this chart only appeared in the print newspaper photographed above and not in the online story. If so, it reached only print audiences. Today we hear a lot of concern about the impact of “filter bubbles,” especially online, and the spread of misleading information. What concerns me most about this graph is how it shows the potential impact of offline filter bubbles in local communities, too.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

That large (and largely trademarked) sporting event is this weekend. In honor of its reputation for massive advertising, Lisa Wade tipped me off about this interesting content analysis of last year’s event by the Media Education Foundation.

MEF watched last year’s big game and tallied just how much time was devoted to playing and how much was devoted to ads and other branded content during the game. According to their data, the ball was only in play “for a mere 18 minutes and 43 seconds, or roughly 8% of the entire broadcast.”

MEF used a pie chart to illustrate their findings, but readers can get better information from comparing different heights instead of different angles. Using their data, I quickly made this chart to more easily compare branded and non-branded content.

Data Source: Media Education Foundation, 2018
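For anyone who wants to rebuild a chart like this, here is a minimal sketch with matplotlib. Only the ball-in-play figure (18 minutes and 43 seconds, roughly 8% of the broadcast) comes from MEF’s tally; the other category minutes are placeholders to swap for the real values.

```python
# Minimal sketch of the bar chart. Only the ball-in-play value comes
# from MEF (18m43s, ~8% of the broadcast); the rest are placeholders.
import matplotlib.pyplot as plt

categories = ["Ball in play", "Commercials", "Replays/crowd/field", "Other"]
minutes = [18.7, 50.0, 120.0, 45.0]  # placeholder values except the first

# Bar heights are easier to compare by eye than pie-slice angles.
plt.bar(categories, minutes)
plt.ylabel("minutes of broadcast")
plt.xticks(rotation=20)
plt.tight_layout()
plt.show()
```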

One surprising thing that jumps out of this data is that, for all the hubbub about commercials, far and away the most time is devoted to replays, shots of the crowd, and shots of the field without the ball in play. We know “the big game” is a big sell, but it is interesting to see how the thing it sells the most is the spectacle of the event itself.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.