Shepard Fairey’s work on the streets of San Francisco. Photo by Michael Pittman, Flickr CC

Political spectators anxiously await a final decision from the Supreme Court on the Wisconsin gerrymandering case, Gill v. Whitford. Gerrymandering occurs when legislators redraw voting districts to entrench their party's electoral advantage. This highly anticipated judicial decision could curb gerrymandering practices and require courts around the country to scrutinize district maps for partisan bias. While voting is the cornerstone of democracy, social science research on gerrymandering suggests that democratic ideals may not match how voting works in practice.

Wisconsin redistricting plans that were ratified in 2011 gave Republicans an advantage over Democrats in translating votes into seats in the legislature. Computer simulations can diminish partisanship in district drawing, but it remains unclear how effective this would be in reducing political polarization in Congress. One study suggests that redistricting diminishes electoral competition but does not exacerbate polarization along party lines.
Political sociologists have shown that full voting rights are not as guaranteed in the United States as in many other major democracies, and gerrymandering is just one example of practices that lead to the under-representation of low-income voters and communities of color in the electoral process. For example, partisan gerrymandering reduced access to communication between ward residents, local nonprofits, and their political representatives in Chicago. There is also evidence it changed voters’ choices in Georgia. In short, gerrymandering has real consequences for racial inequalities and representation in the United States.
Photo by Harold Navarro, Flickr CC

Immigration is a hot-button issue in American politics today. President Trump’s proposed border wall, rescinding of DACA, travel bans for multiple majority-Muslim countries, and increased detention and deportation have meant that the debate has focused almost exclusively on Hispanics and Muslims. This is the latest episode in a long history of misgivings toward immigrants, a history with obvious racial dimensions. It’s easy to forget that much anti-immigrant rhetoric is based on American attitudes about who is white, or who has the potential to become white. Social science research reminds us how certain groups who were once cast as racial outsiders eventually came to be seen as “white,” while others have been consistently denied white status and the full citizenship that comes with it.

The meaning of “white” has changed through the course of American history. From the 19th century into the early 20th century, “white” only incorporated Anglo-Saxon, Protestant Americans. American voters and policymakers were concerned that “non-white” immigrant groups such as the Irish, Poles, Jews, and Italians lacked the ability to assimilate into American society. Gradually, however, these immigrants became incorporated into the dominant racial category and were thus no longer considered outsiders.
This did not apply to all immigrant groups, however. Despite the historical flexibility of the category, whiteness never encompassed everybody. Courts, laws, and pseudoscience defined whiteness in ways that excluded some groups from full citizenship in America. Many immigrant communities—such as West Indians, Hispanics, and the Chinese—found themselves in racial categories that shaped their access to various socioeconomic opportunities, belonging, and citizenship.
Photo by stephalicious, Flickr CC

From sexual harassment to salary gaps, stories about gender inequality at work are all over the news. How does this happen? Social science research finds that people are often sorted into different jobs by gender, race, and class, and this sorting has consequences for inequality in earnings and career prestige. Just like a middle school dance where students congregate on opposite sides of the floor because of both self-sorting and social norms, gendered occupational segregation comes from a combination of choice and implicit discrimination based on workplace “fit.” Women often choose less prestigious occupations based on how they perceive their personalities and competence, and employers and colleagues tend to favor people like themselves when hiring, promoting, and collaborating.

When people choose their jobs, they often look for careers to match their personalities. Gender socialization and stereotypes about competence, personality traits, and innate abilities influence how women and men consider which jobs are right for them. Many women learn to perceive themselves as emotional, empathetic, or people-oriented. They also tend to think they possess the right traits to work in female-dominated jobs like teaching and nursing. Women are more likely to think they will perform poorly at careers in science, technology, engineering, and math (STEM) because they have learned to think they are not “naturally” as good at these subjects as men are.
Outright gender-based discrimination in hiring and workplace practice is illegal, but it still occurs through implicit biases to the detriment of women. Employers often look for people who will blend well with their workplace’s culture, and this results in hiring candidates similar to themselves, in terms of both gender and social class. Once hired, colleagues tend to collaborate and share resources with those they think are like them as well, often isolating women in male-dominated workplaces. As a result, many women leave highly paid, highly skilled positions in favor of less prestigious jobs with more women and friendlier environments.
Photo by Gage Skidmore, Flickr CC

Donald Trump was recently the first sitting president to address the Values Voter Summit in Washington, D.C., where he referenced “attacks” on Judeo-Christian values. But what does this “Judeo-Christian” buzzword really mean? Social science research shows us that national identity is a style of political engagement that can change over time, but also that these cultural changes have real stakes for the way Americans think about their fellow citizens. While the U.S. is becoming an increasingly racially and religiously diverse nation, this demographic change comes up against the persistent cultural assumption that Americans share a distinct Christian identity and heritage.

The meaning of “Judeo-Christian” has changed over time. Once referring to progressive political coalitions, it became a rallying cry that designated socially conservative positions in the “culture wars” of the 1980s and beyond. This case shows us how nationalism is a cultural style composed of different beliefs and identities. This means that political leaders and everyday citizens can draw on different styles of nationalism.
And these styles of nationalism have real stakes. An emerging trend in the public opinion literature shows that Christian nationalism in particular is a strong predictor of negative attitudes toward minority groups. For example, respondents who score high on measures of this kind of nationalism are also less likely to support interracial and same-sex marriage.
Photo by GotCredit, Flickr CC

In oral arguments during the Supreme Court’s recent case about partisan gerrymandering in Wisconsin, political science research was presented to demonstrate the effects of redistricting plans on voting outcomes. In response, Chief Justice John Roberts commented that he was wary of “sociological gobbledygook,” questioning the data presented. As public figures like Roberts question expert knowledge, social scientists are increasingly concerned about public perceptions of social science research and maintaining trust between the academy, the government, and the public. Examining the relationship between experts and the public helps us understand the role of social science in the public sphere.

Some scholars have suggested that distrust of experts might be rooted in the American value of an open society that treats everyone equally. According to this explanation, people distrust social scientists (and experts in general) because they believe these experts belong to a privileged and disconnected “intellectual class.”
Negative views of this intellectual class matter because they lead people to think experts have hidden political biases and that they use scientific knowledge to obtain self-interested political and economic advantages. These views also affect the way people evaluate political movements and politicians.
Social scientists are looking for strategies that could help them bridge the gap between their research and the public, and some recommend social scientists get involved in the public sphere. A study of academic credibility among college students found that students often view faculty who work in the public sphere as more credible because of their perceived personal commitment to the broader community.
A candlelight vigil outside Virginia Tech’s Burruss Hall after the 2007 mass shooting. Photo by Kate Wellington, Flickr CC

The nation remains in mourning as we struggle to make sense of this week’s tragedy in Las Vegas, where 59 people were killed and over 500 wounded. Many are referring to the attack as the “deadliest shooting in modern US history.” Through their grief and shock, some now question how local law enforcement, politicians, and news media outlets will characterize the shooter, a middle-aged white man, who, according to family members and the early stages of the investigation, had no known ties to religious or political groups. Investigative authorities link terrorism to violent acts, the motives behind those acts, and affiliation with known terrorist organizations. Yet, several activists have argued that the media’s characterization of mass shooters depends upon their race, ethnicity, and religious beliefs, noting that “Whiteness, somehow, protects men from being labeled terrorists.” Examining the role of media discourse regarding mass killings might help us make sense of these acts of violence.

Mass shootings have been covered extensively by the U.S. media since the 1999 Columbine shooting. What began as a focus on the two perpetrators and 13 victims developed into a moral panic regarding youth delinquency, mental illness, discipline, and even terrorism. Yet the media does not treat all mass shootings equally — several factors come into play, including the availability of iconic images, media access, and the race and socioeconomic status of the perpetrator. Shootings carried out by white youth in seemingly quiet suburbs are more shocking because the perpetrators and victims are considered to be “people like us.” In contrast, shootings where the perpetrators are persons of color or reside in working-class neighborhoods produce less shock, as news producers and consumers presume that violence is somehow normal or inherent to those communities.
One comparative study defined mass shootings as “homicide offenses that require firearms as the weapon of attack, and they often end in the offender’s suicide or orchestration of ‘suicide by cop’.” By this definition, the U.S. has likely had more public mass shootings than other comparable nations over the past 50 years. Mass shootings are more likely to take place in countries with higher levels of gun ownership and, in the case of school shootings, have been linked to aggressive performances of masculinity by predominantly young, white, suburban students. While mass shootings frequently involve multiple casualties, authorities rarely refer to such acts as terrorism — the designation of “terrorist” is generally reserved for “foreign-based terrorist organizations.” 
One concern about the coverage of such events is that the publicity and sensationalization surrounding mass killings may inspire other “copycat” crimes. Potential mass killers may use media reporting as a way to create a fictive bond with other mass murderers as a “comradery-focused fantasy.” Seung-Hui Cho, for example, idolized Dylan Klebold and Eric Harris (the Columbine shooters) for several years before carrying out his own deadly attack at Virginia Tech. Other potential mass murderers intensely scour news clippings of prior mass killings to find the perpetrators’ weaknesses and compete with them. The correspondence of Adam Lanza, who killed 20 children and 6 adults at Sandy Hook Elementary School, shows that before his attack he critiqued James Holmes, the Aurora movie theater shooter, for what he saw as a weak effort to murder multiple people.
Photo by Herry Lawford, Flickr CC

Worries about rapid technological change negatively affecting society abound — the advent of the internet, increased availability of smartphones, and ubiquity of social media have many concerned that people are constantly “plugged in” and, as a result, tuning out the world around them. These concerns were revitalized with the recent publication of psychologist Jean Twenge’s new research, which finds that heavy social media use is associated with depression and social isolation among teens. However, Twenge explains, “The aim of generational study is not to succumb to nostalgia for the way things used to be; it’s to understand how they are now. Some generational changes are positive, some are negative, and many are both.” Social science research on nostalgia warns against idealizing the past, but also points to varied uses and meanings of nostalgia over time.

Seen as a sickness when it first entered circulation centuries ago, nostalgia became a common trope in the late 20th century, moving from the medical field to everyday life. Nostalgia is typically defined as a “sentimental longing for the past,” and is often associated with an idealized remembering of “how things used to be.” In this way, nostalgia can be viewed as reactionary and regressive — calls for returns to “traditional families” or “tight-knit communities” are often cast in a language that selectively highlights the positives of previous social forms and ignores the problems associated with them. For example, Stephanie Coontz finds that there has never been a “traditional family” that protects people from poverty or social disruption.
Nostalgia can also be exploited by those in power to further ideological ends. For example, think Trump’s electoral campaign slogan “Make America Great Again,” or Brexit with its “Take back control” discourse — both imply a better past. This type of nostalgia is usually vague in terms of the era and place of longing, yet has an exclusionary vision of society that has strict rules about who belongs.
However, recent research complicates these negative connotations of nostalgia by exploring some of the different affective, sentimental, and ideational roles that various nostalgia practices perform. Research finds that nostalgia can be both a comfort and a catalyst for change, and some argue that nostalgia can be an important basis for thinking into the future. Sociologist Fred Davis recognizes nostalgia as a tool for identity construction and a lens through which people construct, maintain, and reconstruct their identities. He finds that nostalgia reduces insecurities and self-threat by keeping fears of insignificance at bay and reassuring us that our self “is as it was then.” Similarly, Katharina Niemeyer argues that the process of “nostalgizing” provides a sense of belonging that can increase solidarity and lessen loneliness.
Photo by Frank de Kleine, Flickr CC

Several abortion providers have come under intense criticism for offering free abortions to women affected by Hurricane Harvey. While this criticism echoes decades of social and political debates regarding women’s reproductive rights, the control over women’s bodies extends far beyond the second-wave feminist movement of the mid-20th century. For example, recent calls for the removal of a statue honoring J. Marion Sims, a doctor known for medical contributions to the field of gynecology who performed experimental surgeries on non-consenting enslaved black women without anesthesia, illustrate the historical links between reproductive control, gender, and race. Sociological research helps us trace the long history of controlling black women’s reproduction.

While historical accounts of reproductive rights rhetoric in the 19th century point to the gendered issue of men’s control over women’s bodies and the valorization of traditional motherhood, they neglect how political rhetoric also drew on ideas of white superiority. As immigration to the U.S. increased, Anglo-Saxon political elites worried that greater migrant representation would quickly dismantle their political power, and so American physicians encouraged Anglo-Saxon women to bear children for the sake of continued political power among whites.
Even though white women were subjected to political rhetoric that sought to control their reproduction, their capacity to reproduce the white race meant they were privileged relative to black women. This privilege was shaken when white women gave birth to mixed-race children, however, and these women were sometimes forced into indentured servitude. On the other hand, racially mixed children born to black women during slavery were not threatening to a white racial order. Instead, they were viewed as symbols of white men’s social and economic control over black women.
During and after slavery, black women were commonly depicted as sexually deviant, hypersexual, and promiscuous. State-sanctioned practices to control black women’s reproduction, like coercive birth control and mass sterilization campaigns in which doctors performed medically unnecessary hysterectomies on black women, reflected these cultural images. When black women did have children, restrictive welfare policies limited the state support they could receive, further drawing on racialized constructions of black women as lazy, ignorant “welfare queens.” Both sets of state practices reflect the attempt to control black women’s sexuality, reproduction, and families.

For more on the ways mothers are controlled and policed, check out this TROT on morality and maybe-moms.

Photo by Ted Eytan, Flickr CC

Despite increased awareness surrounding transgender identities and experiences, the National Coalition of Anti-Violence Programs (NCAVP) has tracked 36 anti-LGBTQ homicides in 2017 alone; 16 of the victims were transgender women of color. Along with President Trump’s recently proposed ban on transgender persons serving in the military, there is evidence that personal prejudices and institutional discrimination continue to affect the lives of those in the trans community.

One way institutions like the military discriminate against transgender individuals is by advancing the assumption that trans bodies are a problem and should conform to “normal” standards. For example, medical professionals who work with transgender patients often discourage them from undergoing transitional surgery too soon. This discouragement reinforces traditional ideas that treat gender and sex as the same thing, and recognize both as binary. This discrimination shapes psychological and physical well-being, as transgender people who appear gender nonconforming are at greater risk of engaging in acts of self-harm, including suicide.
Transgender people are also at a greater risk of becoming victims of violence. Surveys indicate that roughly 50% of transgender people report experiences of sexual violence or assault. The continuous threat of violence influences everyday decision making and quality of life across communities. For example, transgender women are more likely to perceive an association between physical violence and sexual assault. Moreover, individuals transitioning from male to female are more likely to experience violent victimization than those transitioning from female to male, with a heightened risk during periods of transition and gender ambiguity.
Photo by Mathias Eick, EU/ECHO, Rakhine State, Myanmar/Burma, September 2013. Flickr CC

The Rohingya, a Muslim minority group, have been the target of violence for years in Myanmar, also known as Burma. But in recent weeks, international media coverage has surged following a spike in violence that has led to over 120,000 Rohingya fleeing their homes. The increased media attention, however, has also provided a platform for an anti-Rohingya propaganda campaign that argues the Rohingya are “terrorists” and deserve the violence that befalls them. Sociologists have brought new insight into how propaganda enables the acceptance of atrocities and how it can directly impact rates of violence.

Propaganda campaigns often demonize a group by characterizing its members as less than human. Refugee communities, for example, are often treated with fear and suspicion by members of their host nation. This mistrust can also poison individual-level interactions with the targeted group, producing higher rates of expressed aggression and contempt. Studies show that when a group is dehumanized, those outside of the group find it easier to exclude its members and assume that they are more deserving of the problems in their lives.
Scholars have also examined patterns of violence and how propaganda shapes perpetrators’ decisions. Radio propaganda played a key role in the Rwandan genocide; on hills where radio reception was better, the rate of killing was higher than in areas where reception was limited. Groups such as ISIS use social media to motivate and recruit individuals. With the increasing prominence of social media, understanding how these mechanisms enable the acceptance and perpetration of violence is essential. These studies also suggest that positive social media campaigns could help to counter propaganda.