1894 newspaper illustration by Frederick Burr Opper, Library of Congress via Wikimedia Commons

The election of President Donald Trump in the United States in 2016 ushered in an era of attacks on the media and accusations that outlets such as The New York Times and The Washington Post are publishing “fake news.” But what exactly is “fake news”? And why are claims about information, misinformation, and disinformation in American journalism so troubling?

TSP has previously published articles summarizing scholarly concerns about fake news, in particular its role in political polarization. Media scholars also now see these trends as part of a larger, longer-term crisis of democracy itself, beginning sometime in the final decades of the 20th century.

In spite of all of these questions and controversies, one thing is clear: there is no consensus on what exactly fake news is. According to a 2018 study from The Media Insight Project, Americans nationwide hold several distinct understandings of what “fake news” really means:

  • 71% of Americans think fake news is “made-up stories from news outlets that don’t exist”
  • 63% think fake news refers to “media outlets that pass on conspiracy theories and unsubstantiated rumors”
  • 62% think it means “journalists from real news organizations making stuff up” 
  • 43% think fake news refers to news organizations making sloppy mistakes
  • 25% call satire or comedy about current events fake news
Audiences play a key role in interpreting the news and acting on it — or not. Pew Research Center data shows 68 percent of American adults say that they get their news on social media even though 57 percent of them expect the news they see on social media to be “largely inaccurate.” Academic studies also find that “fake news” is often used by social media users to insult information shared by members of opposing political parties.  
Harvard vs. Bucknell football game. Photo by Yzukerman, Flickr CC

There is no shortage of writing on the history of college sports, especially its history of scandal. There is also plenty of writing on how big-time college sports harm both the colleges and their athletes. Books as varied as Pay for Play: A History of Big-Time College Athletic Reform, College Sports Inc.: The Athletic Department vs. The University and College Athletes for Hire: The Evolution and Legacy of the NCAA’s Amateur Myth highlight the rise of the NCAA through and because of scandal, the enormous amounts of money flowing through college athletic departments (but not to players), and the contortions of universities to fit big-time athletics.

But athletics matter even in schools defined by their academics rather than their sports. Documents from the recent Harvard affirmative action legal case confirm prior research: even at the Ivies and at coed liberal arts colleges, athletes receive a substantial admissions bump. Articles from The Atlantic and Slate detail this bump and how it especially benefits upper-class white students. At Ivies and elite liberal arts colleges, the potential financial gain from athletics (as suspect as that might be at other schools) doesn’t make sense as the primary reason to keep sports in these schools. So what are some other reasons that American higher education institutions prioritize athletics? Here are three that sociological thinking and research can help us understand.

1. Status Networks and Peer Institutions

First, athletics helps schools signal who their peers are, both academically and athletically. Higher education in the United States didn’t develop from a master plan. It is, instead, a network and market of schools that jockey for position, carving out niches and constantly battling for status. Athletic conferences are one way that institutions establish networks, and research has found that schools within conferences come to share similar status, both athletically and academically. The Ivy League is the prime example of this phenomenon. Although “the Ivies” have come to mean a set of elite schools, the league began as simply a commitment to compete against each other on the athletic playing field. 

2. Competing for Students

Another way that colleges signal prestige is through established ranking systems, and a key part of those rankings comes from measures of selectivity and the quality of undergraduate students. So all colleges are competing for students — either to solidify rankings or, for small, tuition-dependent institutions, simply to matriculate enough students to pay the bills. In Creating a Class, Mitchell Stevens points out how important athletics are to recruiting students within the competitive, small liberal arts space. He writes,

“Students choose schools for multiple reasons, and the ability to participate in a particular sport at a competitive level of play is often an important one. Because so many talented students also are serious athletes, colleges eager to admit students with top academic credentials are obliged to maintain at least passable teams and to support them with competitive facilities.”

3. Non-academic Signals in Admissions

Histories of Ivy League admissions have revealed how including athletic markers was part of establishing who belonged at the school. On the most obvious level, prominent alumni who were athletes or the parents of prospective students publicly pushed for admissions policies that would be beneficial to others like them. But more subtly, and more insidiously, having an affinity for athletics was viewed as a mark of the “Yale man,” the upper-class, Christian, future leader of the world who had the presence of mind and body to pick up new ideas and manage others. 

Athletics in colleges isn’t just a money-maker or something to keep students happy. It’s a way for colleges to recruit students, fight for status, and signal what types of students they value.

Photo by the euskadi 11, Flickr CC

Originally posted April 2017. We’re reposting this in light of California’s recent decision to prevent the renewal of contracts with for-profit prison companies.

Last month, Attorney General Jeff Sessions reinstated the use of private prisons in the federal system. This move is welcome news to top corrections corporations such as CoreCivic, but human rights activists are concerned about this shift. Opponents claim that these corporations bring in large profits while their prisons remain rife with safety and healthcare deficiencies, as well as underpaid employees. While these concerns are important to consider, the private prison industry represents a small segment of the American correctional system. According to the Bureau of Justice Statistics, only 17% of inmates in federal prisons and 7% in state prisons were held in private facilities in 2015.

At their inception, private prisons were believed to be a cost-effective option that could provide better services than government facilities. Despite these goals, much of the current evaluative research suggests that private facilities are no more cost-effective than public facilities. Likewise, private prisons appear to perform worse than public correctional facilities in reducing recidivism and have similar (and sometimes worse) conditions than public facilities. In contrast, some evidence suggests that private prisons may be less overcrowded. Due to these ambiguities, scholars of the privatization debate are calling for more research into the qualitative differences between the private and public prison sectors.
Regardless of their effectiveness, research suggests that the demographic composition of private prisons is racially disparate. In an analysis of adult correctional facilities in 2005, private prisons held significantly fewer white and more Hispanic inmates when compared to their public counterparts. As to why racial and ethnic disparities exist, research points to the role of private prisons in immigrant detention, which has led some scholars to argue that the private prison industry is just a small segment of a massive immigrant industrial complex. This line of research posits that this complex perpetuates the criminalization and stigmatization of immigrants, especially Latinos, and as a result comes at a significant cost to immigrant families and communities.
Illustration of Game of Thrones characters who are unimpressed while watching the show. By Silueta Production House via Vimeo.

Let’s face it: lots of fans despised the final season of Game of Thrones. Earlier this year, Scientific American suggested that’s because the storytelling style changed from sociological to psychological. When the storytelling was sociological, the characters evolved, often in dramatic, unpredictable ways, in response to the broader institutional settings and the countervailing incentives and norms that surrounded them. When the style switched to psychological, viewers had to identify with the characters on a personal level and become invested in them for the story to work. Within this individualistic framing, characters’ unexpectedly evil actions and untimely deaths stopped making sense. As it happens, not only is sociological storytelling an important driver in keeping audiences devoted, it can also be a powerful tool in crafting a persuasive research article.

Research suggests that storytelling is powerful precisely because it gives human faces to abstract social forces, emplotting them as combatants over the very problems which social theory endeavors to understand — conflict, inequality, and modernization, to name but a few. Andrew Abbott thus argues for a lyrical sociology that recreates the experience of social discovery in the reader. Sara Lawrence-Lightfoot and Jessica Hoffmann Davis similarly suggest that to capture the complexity, dynamics, and subtlety of human experience and organizational life, researchers must document the voices and visions of the people they study.
Is it enough to fashion stories that enthrall readers with captivating narrative arcs, or must scholarship also advance theoretical arguments? In a recent Twitter thread, Jeff Guhin invites discussion of the tension in qualitative work between telling stories about social problems and making arguments. Some sociologists argue that description alone makes a valuable contribution. Scholars doing qualitative work should strive to publish descriptively rich, findings-driven papers that are so grounded and concrete that the reader intuitively grasps the “so what.” Though ethnography may share some characteristics with imaginative writing, Hammersley points out that it is more than that. Ethnographers must grapple with a number of issues as they analyze data and write up their work, just as they do when they choose where and how to collect it.
Using our sociological imagination in storytelling doesn’t mean discounting characters’ personal or psychological motivations. Instead, it means showing characters in ongoing and complex interaction with the economic and political forces of broader society and illustrating the consequences that emerge. This can be a powerful tool for learning social theory.

The Dishchii’ Bikoh’ Apache Group from Cibecue, Arizona, demonstrates the Apache Crown Dance. Photo by Grand Canyon National Park, Flickr CC

Originally posted October 9, 2017

In recent years, a growing number of Americans have been celebrating Indigenous Peoples’ Day to honor those who suffered at the hands of explorers like Christopher Columbus. Social science research helps us understand the underlying gender and racial components of colonial settlement in the United States.

In what is now the United States, Andrea Smith argues that sexual conquest — the rape of native women — was closely tied to the conquest of land. Europeans perceived the indigenous people that inhabited the Americas as uncivilized. Ideas of white civility deemed native women as hypersexual and uncontrollable, unlike white women, whose perceived purity they could not match. These ideas of native women’s sexuality allowed for European males to rape native women without consequence.
Ideas about native men’s and women’s inferiority were also important for white men’s identities. In the U.S., white settlers believed themselves to be superior to indigenous peoples, bringing enlightenment to an empty wilderness. White, male identity was thus closely tied to the control of land and ownership of property.
Colonizers viewed land as a metaphor for women’s subjugation. Land – similar to women – was something to be taken and possessed by European men. For example, Europeans who colonized parts of Africa referred to the continent as “virgin land.” Just as virginity was used to describe young women who were perceived as pure and untainted by sex, referring to unconquered land as “virgin” reflects Europeans’ beliefs that it was also pure, untainted, and ripe for European colonization.

Candidate for Virginia Delegate (elected November 7) Danica Roem, at Protest Trans Military Ban. Photo by Ted Eytan, Flickr CC

Originally posted November 28, 2017.

American attitudes towards transgender and gender nonconforming persons might be changing. Earlier this month, voters elected six transgender officials to public office in the United States, and poll data from earlier this year suggests the majority of Americans oppose transgender bathroom restrictions and support LGBT nondiscrimination laws. Yet, data on attitudes toward transgender folks is extremely limited, and with the Trump administration’s assault on transgender protections in the military and workplace, the future for the trans community is unclear. Despite this uncertainty, a close examination of the social science research on past shifts in attitudes towards same-sex relationships can provide us insight into what the future may hold for the LGBTQ community in the coming decades.

Attitudes about homosexuality vary globally. While gay marriage is currently legal in more than twenty countries, many nations still criminalize same-sex relationships. Differences in attitudes about homosexuality between countries can be explained by a variety of factors, including religious context, the strength of democratic institutions, and the country’s level of economic development.
In the United States, the late 1980s witnessed little acceptance of same-sex marriage, except for small groups of people who tended to be highly educated, from urban backgrounds, or non-religious. By 2010, support for same-sex marriage increased dramatically, though older Americans, Republicans, and evangelicals were significantly more likely to remain opposed to same-sex marriage. Such a dramatic shift in a relatively short period of time indicates changing attitudes rather than generational differences.
Americans have also become more inclusive in their definition of family. In 2003, nearly half of Americans emphasized heterosexual marriage in their definition of family, while only about a quarter adopted a definition that included same-sex couples. By 2010, nearly one third of Americans ascribed to a more inclusive understanding of family structures. Evidence suggests that these shifts in attitudes were partially the result of broader societal shifts in the United States, including increased educational attainment and changing cultural norms.
Despite this progress for same-sex couples, many challenges remain. Members of the LGBTQ community still experience prejudice, discrimination, and hate crimes, with trans women of color especially targeted. Even with support for formal rights for same-sex couples from the majority of Americans, the same people are often uncomfortable with informal privileges, like showing physical affection in public. Past debates within LGBTQ communities about the importance of fighting for marriage rights indicate that the future for LGBTQ folks in the United States is uncertain. While the future can seem harrowing, the recent victories in the United States and Australia for same-sex couples and transgender individuals would have been unheard of only a few decades ago, which offers a beacon of hope to LGBTQ communities.

Want to read more?

Check out these posts on TSP:

Review historical trends in public opinion on gay and lesbian rights (Gallup)

Check out research showing that bisexual adults are less likely to be “out” (Pew Research Center)

Mural Showing Child Soldier from Iran-Iraq War, Photo by Adam Jones, Flickr CC

In 2014, Boko Haram made global headlines when militants kidnapped 276 girls from school in Nigeria. Policy makers, activists, and celebrities across the globe mobilized, calling for action to #BringBackOurGirls. But in the case of child soldiers, the moral lines are often less clear because they are simultaneously victims and perpetrators. Ex-Boko Haram fighters, including at least 8,000 children, currently face a new battle as they seek to reintegrate into civilian life despite stigma. Research on the social construction of victimhood and childhood can help us better understand child soldiers.

Ideas around morality, righteousness, and innocence of victimhood differ across time and place. In World War I, for example, soldiers who suffered from trauma were treated as weak or unpatriotic by superiors and medical professionals. In the aftermath of WWII and the Holocaust, humanitarian actors and mental health professionals led movements to redefine victims of violence as worthy of respectability and reverence. Characterizations of victimhood are also contrasted with perpetrators — the innocent, passive victim is defined in opposition to the active, wrong-doing perpetrator. Sociologists examine how such labels are constructed, and in practice, moral lines are rarely so clear.
Media outlets often depict child soldiers as helpless victims who are abducted and indoctrinated by militia groups. This is due to media representations of children as innocent and naive. However, many children volunteer to enlist as a survival strategy. Thus, scholars have sought to depict child soldiers as “agentic,” rather than passive victims. While the media emphasizes the binary between childhood and adulthood, child soldiers occupy an ambiguous space between these categories. This ambiguity stems from child soldiers being capable of extraordinary violence while simultaneously symbolizing the innocence of childhood. Scholars argue that challenges to reintegration stem from how children have been socialized into militias, as well as their young ages.

While child soldiers occupy the muddy moral grounds of victimhood, these categories remain important, particularly for issues of restorative justice and reintegration in their communities.


For more on victimhood across different contexts, see these TROTs:

Ben Ostrowsky//Flickr CC

Originally posted October 13, 2015.

October brings cozy sweaters, Pumpkin Spice Lattes, and lots of pink for Breast Cancer Awareness Month. It’s a worthy campaign: approximately 1 in 8 women will receive a breast cancer diagnosis in her lifetime. But how has breast cancer gained such visibility when others—even other forms of cancer—plague the population at even higher numbers?

Breast cancer awareness campaigns have branded breast cancer through pink ribbons and other merchandise, making the disease not only highly visible, but also a commodity. The signature pink color connects breast cancer to traditional ideas of femininity, beauty, and morality, and allows family and friends to show support.  Color aside, merchandise and freebies like cosmetics and small home appliances reinforce breast cancer’s symbolic ties to beautiful, domestic, heterosexual women as the primary sufferers. This is breast cancer’s “disease regime,” a system of institutional practices and styles of speech that shapes how patients experience it (Klawiter 2004, 851).  
Large-scale organizations like the Susan G. Komen foundation raise awareness, but often leave out marginalized identities that don’t fit a traditional feminine image. Groups like the Women & Cancer Walk provide spaces for those who don’t fit the mainstream definition of a “breast cancer patient.”
The specific image of the breast cancer patient affects who participates in activism and how they view their work for the cause. Many women volunteer for organizations like Komen as a way to connect with other survivors. Often this means that much of their work goes unnoticed, in part because they downplay their activism as trivial volunteering or “just being fair,” further reinforcing the gendered construction of the disease.

For more on breast cancer awareness, check out posts at Feminist Reflections, Sociological Images, and two of our recent Discoveries.

Photo by Sasha Kimel, Flickr CC

We at The Society Pages have written about the study of “white supremacy” in social science. This term can describe overarching patterns of privilege and power that favor whites, or it can describe bigotry, prejudice, and the belief that whites are a superior race. It may be easy to think that this latter meaning has become less relevant in the contemporary, “post-racial” world, but this is not the case.

In recent years, beliefs about the superiority of whites have actually re-emerged within the political mobilization of populist attitudes, anti-immigrant sentiment, and right-wing political beliefs in Western democracies. To capture these distinctive and troubling realities, scholars, reporters, and cultural commentators have increasingly begun to use the term “white nationalism.” White nationalism is not just a remnant of outdated, obsolete prejudice; rather, it has been reconfigured and revitalized for the new global world.

Modern white nationalist rhetoric constructs the image of a historically white country and populace under attack amidst a world of 21st-century immigration, globalization, and shifting racial landscapes. By advancing nativist rhetoric and mobilizing such sentiments in the political arena, white nationalist organizations forward understandings of “white” that draw on the idea that the Western world is meant for white people. This has had important political consequences in the United States and Europe; politicians and parties who advance anti-immigration platforms have been bolstered by these dynamics.
Even though relatively few politicians and political parties have openly endorsed white nationalist statements, research shows that white nationalist rhetoric and nativist messages can impact political discourse even among moderate groups. In essence, the presence of white nationalist rhetoric can shape the contours of political discourse more generally. Research has studied such dynamics with an eye to common digital media of the 21st century; the discursive impacts of white nationalist rhetoric are particularly visible in studies of the Internet, social media, and other such platforms. In the 21st century, prominence in the digital sphere is important to how contemporary white nationalist groups make their presence felt. 
It is important to remember that white nationalism and right-wing beliefs are not simply empty rhetoric without material consequence. Authors have described how white nationalist rhetoric and organization can affect electoral results — the “Brexit” vote being one of the most obvious current examples. In addition, upticks in white nationalism and nativist sentiment have been paralleled by increased hostility and violence against minority and immigrant populations, as well as the institutionalization of laws that restrict such groups’ rights by targeting their cultural and religious practices. For example, the push for “burqa bans” in several European countries reflects mobilization by nativist groups that has cast the burqa as a symbolic challenge to national identity. This example and ones like it highlight the white nationalist belief that the nation should be defined by whiteness and designed for whites.

An elementary school student shows her younger friend how to sign using American Sign Language. Photo by daveynin, Flickr CC.

Since the passage of the Education for All Handicapped Children Act (EHA) in 1975 and the more comprehensive Individuals with Disabilities Education Act (IDEA) in 1990, the number of children receiving special education services has increased dramatically. Today, seven million children in the United States receive special education to meet their individual needs, with more than ever attending their neighborhood schools as opposed to separate schools or institutions.

Because special education has become so institutionalized in schools over the past three decades, we often forget that the categories we use to classify people with special needs are socially constructed. For instance, Minnesota has thirteen categorical disability areas, ranging from autism spectrum disorders to blind-visual impairment to traumatic brain injury. But these categories differ from state to state, as do states’ definitions for each category and their protocols for determining when a child meets the diagnostic criteria in a given area. A more sociological take suggests that the “special ed” label does more than just entitle children to receipt of services. For better or worse, it also helps to establish their position within the structure of the mass education system, and to define their relationships with other students, administrators, and professionals.
Research suggests that children of color are overdiagnosed and underserved. They are more likely to be referred for special education testing and to receive special education services than others. This disproportionality occurs more often in categories for which diagnosis relies on the “art” of professional judgment, like emotionally disturbed (ED) or learning disabled (LD). It occurs less often in categories that require little diagnostic inference like deafness or blindness. The attribution of labels can be particularly concerning for children of color, as these labels can be associated with lower teacher and peer expectations and reduced curricular coverage. Even when appropriately placed in special education classes, children of color often receive poorer services than disabled white children. Some research suggests that this happens because the culture and organization of schools encourages teachers to view students of color as academically and behaviorally deficient.
Given the disproportionate representation of students of color in special education, sociologists have investigated whether a child’s race or ethnicity elevates their likelihood of special education placement. By controlling for individual-, school-, and district-level factors, researchers have found that race and social class are not significant predictors of placement. However, school characteristics — like the overall level of student ability — play a role in determining who gets diagnosed. And, because children of color tend to be concentrated in majority-minority schools, they are less likely to be diagnosed than their white peers.

You may also be interested in a previous article: “Autism Across Cultures.”

For more information on children and youth with disabilities, check out the National Center for Education Statistics.