
An elementary school student shows her younger friend how to sign using American Sign Language. Photo by daveynin, Flickr CC.

Since the passage of the Education for All Handicapped Children Act (EHA) in 1975 and the more comprehensive Individuals with Disabilities Education Act (IDEA) in 1990, the number of children receiving special education services has increased dramatically. Today, seven million children in the United States receive special education to meet their individual needs, with more than ever attending their neighborhood schools as opposed to separate schools or institutions.

Because special education has become so institutionalized in schools over the past three decades, we often forget that the categories we use to classify people with special needs are socially constructed. For instance, Minnesota has thirteen categorical disability areas, ranging from autism spectrum disorders to blind-visual impairment to traumatic brain injury. But these categories differ from state to state, as do states’ definitions for each category and their protocols for determining when a child meets the diagnostic criteria in a given area. A more sociological take suggests that the “special ed” label does more than just entitle children to receipt of services. For better or worse, it also helps to establish their position within the structure of the mass education system, and to define their relationships with other students, administrators, and professionals.
Research suggests that children of color are overdiagnosed and underserved. They are more likely to be referred for special education testing and to receive special education services than others. This disproportionality occurs more often in categories for which diagnosis relies on the “art” of professional judgment, like emotionally disturbed (ED) or learning disabled (LD). It occurs less often in categories that require little diagnostic inference like deafness or blindness. The attribution of labels can be particularly concerning for children of color, as these labels can be associated with lower teacher and peer expectations and reduced curricular coverage. Even when appropriately placed in special education classes, children of color often receive poorer services than disabled white children. Some research suggests that this happens because the culture and organization of schools encourages teachers to view students of color as academically and behaviorally deficient.
Given the disproportionate representation of students of color in special education, sociologists have investigated whether a child’s race or ethnicity elevates their likelihood of special education placement. By controlling for individual-, school-, and district-level factors, researchers have found that race and social class are not significant predictors of placement. However, school characteristics — like the overall level of student ability — play a role in determining who gets diagnosed. And, because children of color tend to be concentrated in majority-minority schools, they are less likely to be diagnosed than their white peers.

You may also be interested in a previous article: “Autism Across Cultures.”

For more information on children and youth with disabilities, check out the National Center for Education Statistics.

Photo of Elizabeth Warren speaking at a podium. There is a large sign next to her about how students afford college.
Photo by Senate Democrats, Flickr CC

Elizabeth Warren released an ambitious plan for free college and student loan relief on April 22.  Among a Democratic primary field that is increasingly embracing free college as the standard, Warren’s plan stood out for including $50,000 of debt relief for all individuals with current student debt, expanding what we mean by the cost of attendance, creating a fund for HBCUs, and (eventually) banning for-profit colleges from receiving federal funds. The plan also stood out in another way: centering sociological, justice-oriented research. Inequality and education are topics with a lot of good work from sociologists, but it is worth highlighting three sociologists who influenced Warren’s proposal: Louise Seamster, Tressie McMillan Cottom, and Sara Goldrick-Rab.

Warren notes that student loan debt is a racial equality issue. She specifically cites analysis done by a team at Brandeis University, including sociologist Louise Seamster, that finds that households with lower levels of education and families of color benefit more from Warren’s plan. Dr. Seamster’s recent article in Contexts, “Black Debt, White Debt,” demonstrates how debt often functions differently for black and white families. White Americans can take advantage of forms of debt like home mortgages, student loans, and business loans that later result in increased wealth and can be used to establish creditworthiness for future financial interactions. In contrast, municipal fines and fees or predatory student loans are more likely to be carried by black Americans. These forms of debt have high interest rates, poor terms, and hurt future wealth and creditworthiness more than they help.

Tressie McMillan Cottom’s Lower Ed also highlighted disparate impacts of student loan debt on black Americans, as well as the centrality of inequality for the American economy and the effects of for-profit colleges. Her work demonstrates how for-profit colleges target low-income students and students of color. Dr. Cottom has also testified in front of Congress on for-profit colleges and the reauthorization of the Higher Education Act.

Warren’s free-college-for-all position leans heavily on researchers such as Sara Goldrick-Rab, one of the most active scholars and advocates for low-income college students. Dr. Goldrick-Rab advocates for meeting the basic needs of students as they pursue their education, especially in recognizing the costs beyond tuition that students face. Paying the Price demonstrates how it is money, not will or desire, that gets in the way of students on financial aid trying to finish a degree.

Louise Seamster, Tressie McMillan Cottom, and Sara Goldrick-Rab are exemplars of how sociological research can shape public policy and of how research and activism can push for a more equitable world.

A woman helps an elderly man get up from his chair
Photo by Brian Walker, Flickr CC

When we talk about work, we often miss a type of work that is crucial to keeping the economy going and arguably more challenging and difficult than ever under conditions of quarantine and social distancing: care work. Care work includes both paid and unpaid services caring for children, the elderly, and those who are sick and disabled, including bathing, cooking, getting groceries, and cleaning.

Sociologists have found that caregiving that happens within families is not always viewed as work, yet it is a critical part of keeping the paid work sector running. Children need to eat and be bathed and clothed. Families need groceries. Houses need to be cleaned. As many schools in the United States are closed and employees are working from home, parents are having to navigate extended caring duties. Globally, women do most of this caring labor, even when they also work outside of the home. 
Photo of a woman cooking
Photo by spablab, Flickr CC
Historically, wealthy white women were able to escape these caring duties by employing women of color to care for their children and households, from enslaved African Americans to domestic servants. Today people of color, immigrants, and those with little education are overrepresented in care work with the worst job conditions.
In the past decade, the care work sector has grown substantially in the United States. However, care workers are still paid low wages and receive little to no benefits. In fact, care work wages are stagnant or declining, despite an overall rise in education levels for workers. Thus, many care workers — women especially — find themselves living in poverty.  

Caring is important for a society to function, yet care work — paid or unpaid — is still undervalued. In this time of COVID-19, when people are renegotiating how to live and work, attention to caring and appreciation for care work are more necessary than ever.

A student takes notes by hand. Photo via Wikimedia Commons.

If you believe that taking notes longhand is better than typing (especially that typing leads to verbatim transcription but writing helps with processing ideas), you have probably seen a reference to Mueller and Oppenheimer (2014). We even featured it in a TROT last fall. The Pen Is Mightier Than the Keyboard has over 900 citations on Google Scholar, but is also a staple on Twitter, blogs, and news articles. According to Altmetric, it has been cited in 224 stories from 137 news outlets in the two years since it was published, and linked to on Twitter almost 3,000 times.

But new research suggests that its fame was premature. How Much Mightier Is the Pen than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer argues that research has not yet determined whether writing or typing is categorically better for class achievement. The new study (a direct replication of the original study) did find a slight advantage for those taking notes by hand, but unlike in the original study, the differences were not statistically significant. 
The new study also expanded the original by including a group with eWriters, an electronic form of notetaking that allows students to write on a screen. As our original blog noted, much of the research on laptops in the classroom revolves around their potential to be distractions, and research on notetaking needs to take into account advances in technology that could lead to better notetaking environments as well as assisting students with disabilities. Morehead, Dunlosky, and Rawson find that eWriters, although not in common use, could be a bridge between paper and the distraction-laden environment of laptops. 
Photo of a plaque commemorating Ida B. Wells. Photo by Adam Jones, Flickr CC

As Black History Month draws to a close, it’s important to celebrate the work of Black scholars who contributed to social science research. Although the discipline has begun to recognize the foundational work of scholars like W.E.B. Du Bois, academia largely excluded Black women from public intellectual space until the mid-20th century. Yet, as Patricia Hill Collins reminds us, they leave contemporary sociologists with a long and rich intellectual legacy. This week we celebrate the (often forgotten) Black women who continue to inspire sociological studies regarding Black feminist thought, critical race theory, and methodology.

Ida B. Wells (1862-1931) was a pioneering social analyst and activist who wrote and protested against many forms of racism and sexism during the late 19th and early 20th centuries. She protested Jim Crow segregation laws, founded a Black women’s suffrage movement, and became one of the founding members of the NAACP. But Wells is best known for her work on lynchings and her international anti-lynching campaign. While Wells is most commonly envisioned as a journalist by trade, much of her work has inspired sociological research. This is especially true for her most famous works on lynchings, Southern Horrors (1892) and The Red Record (1895).
In Southern Horrors (1892), Wells challenged the common justification for lynchings of Black men for rape and other crimes involving white women. She adamantly criticized white newspaper coverage of lynchings that induced fear-mongering around interracial sex and framed Black men as criminals deserving of this form of mob violence. Using reports and media coverage of lynchings – including a lynching of three of her close friends – she demonstrated that lynchings were not responses to crime, but rather tools of political and economic control by white elites to maintain their dominance. In The Red Record (1895), she used lynching statistics from the Chicago Tribune to debunk rape myths, and demonstrated how the pillars of democratic society, such as right to a fair trial and equality before the law, did not extend to African American men and women.
Anna Julia Cooper (1858-1964) was an avid educator and public speaker. Her first book, A Voice from the South: By A Black Woman of the South, was published in 1892. It was one of the first texts to highlight the race- and gender-specific conditions Black women encountered in the aftermath of Reconstruction. Cooper argued that Black women’s and girls’ educational attainment was vital for the overall progress of Black Americans. In doing so, she challenged notions that Black Americans’ plight was synonymous with Black men’s struggle. While Cooper’s work has been criticized for its emphasis on racial uplift and respectability politics, several Black feminists credit her work as crucial for understanding intersectionality, a fundamentally important idea in sociological scholarship today.
As one of the first Black editors for an American Sociological Association journal, Jacquelyn Mary Johnson Jackson (1932-2004) made significant advances in medical sociology. Her work focused on the process of aging in Black communities. Jackson dismantled assumptions that aging occurs in a vacuum. Instead, her scholarship linked Black aging to broader social conditions of inequality such as housing and transportation. But beyond scholarly research, Jackson sought to develop socially relevant research that could reach the populations of interest. As such, she identified as both a scholar and activist and sought to use her work as a tool for liberation.

Together, these Black women scholars challenged leading assumptions regarding biological and cultural inferiority, Black criminality, and patriarchy from both white and Black men. Their work and commitment to scholarship demonstrates how sociology may be used as a tool for social justice. Recent developments such as the #CiteBlackWomen campaign draw long-overdue attention to their work, encouraging the scholarly community to cite Wells, Cooper, Jackson, and other Black women scholars in our research and syllabi.

A man reads a newspaper by the wall, by Garry Knight, via Flickr CC.

This post was created in collaboration with the Minnesota Journalism Center

According to Gallup, 45% of Americans polled trusted the mass media in 2018. Reuters Institute’s 2019 Digital News Report found similar trends among citizens in 37 countries around the globe: the average level of trust in the news is at 42%, and only 23% say they trust news they find on social media. Further, Edelman’s 2019 Trust Barometer found that, globally, people trust their employers, NGOs, and businesses before the media “to do what is right.”

Media literacy goes hand in hand with trust in the media, especially for younger generations. But studies show that news media have been neglected in media literacy education worldwide. To help young people in school better understand how to cultivate a sense of literacy about news consumption, educators could provide examples of what positive engagement with social media and news looks like. Studies show that recommending what young people shouldn’t do on social media — something scholars call “protectionist discourse” — isn’t very helpful.
Scholars argue that it is also useful to distinguish news literacy from concepts such as media literacy and digital literacy. According to Melissa Tully and colleagues, news literacy is defined as: “Knowledge of the personal and social processes by which news is produced, distributed, and consumed, and skills that allow users some control over these processes.” In this model, news literacy includes: “Context,” “Consumption,” “Circulation,” “Creation,” and “Content.”
These 5 “C’s” contribute to how media literacy is part of a healthy, functioning democracy: in a polarized era of partisanship and distrust (learn more about political polarization here), literacy can help consumers embrace differences and facilitate connections for the common good. However, some citizens avoid the news altogether. These “news avoiders” contribute to a culture that evades the need for literacy altogether in a post-fact and post-truth society.
Harvard vs. Bucknell football game. Photo by Yzukerman, Flickr CC

There is no shortage of writing on the history of college sports, especially its history of scandal. There is also plenty of writing on how big-time college sports harm both the colleges and their athletes. Books as varied as Pay for Play: A History of Big-Time College Athletic Reform, College Sports Inc.: The Athletic Department vs. The University, and College Athletes for Hire: The Evolution and Legacy of the NCAA’s Amateur Myth highlight the rise of the NCAA through and because of scandal, the enormous amounts of money flowing through college athletic departments (but not to players), and the contortions of universities to fit big-time athletics.

But athletics matter even in schools defined by their academics rather than their sports. Documents from the recent Harvard affirmative action legal case confirm prior research: even at the Ivies and at coed liberal arts colleges, athletes receive a substantial admissions bump. Articles from The Atlantic and Slate detail this bump and how it especially benefits upper-class white students. At Ivies and elite liberal arts colleges, the potential financial gain from athletics (as suspect as that might be at other schools) doesn’t make sense as the primary reason to keep sports in these schools. So what are some other reasons that American higher education institutions prioritize athletics? Here are three that sociological thinking and research can help us understand.

1. Status Networks and Peer Institutions

First, athletics helps schools signal who their peers are, both academically and athletically. Higher education in the United States didn’t develop from a master plan. It is, instead, a network and market of schools that jockey for position, carving out niches and constantly battling for status. Athletic conferences are one way that institutions establish networks, and research has found that schools within conferences come to share similar status, both athletically and academically. The Ivy League is the prime example of this phenomenon. Although “the Ivies” have come to mean a set of elite schools, the league began as simply a commitment to compete against each other on the athletic playing field. 

2. Competing for Students

Another way that colleges signal prestige is through established ranking systems, and a key part of those rankings comes from measures of selectivity and the quality of undergraduate students. So all colleges are competing for students — either to solidify rankings or simply to matriculate enough students for small, tuition-dependent institutions to be able to pay the bills. In Creating a Class, Mitchell Stevens points out how important athletics are to recruiting students within the competitive, small liberal arts space. He writes,

“Students choose schools for multiple reasons, and the ability to participate in a particular sport at a competitive level of play is often an important one. Because so many talented students also are serious athletes, colleges eager to admit students with top academic credentials are obliged to maintain at least passable teams and to support them with competitive facilities.”

3. Non-academic Signals in Admissions

Histories of Ivy League admissions have revealed how including athletic markers was part of establishing who belonged at the school. On the most obvious level, prominent alumni who were athletes or the parents of prospective students publicly pushed for admissions policies that would be beneficial to others like them. But more subtly, and more insidiously, having an affinity for athletics was viewed as a mark of the “Yale man,” the upper-class, Christian, future leader of the world who had the presence of mind and body to pick up new ideas and manage others. 

Athletics in colleges isn’t just a money-maker or something to keep students happy. It’s a way for colleges to recruit students, fight for status, and signal what types of students they value.

Candidate for Virginia Delegate (elected November 7) Danica Roem, at Protest Trans Military Ban. Photo by Ted Eytan, Flickr CC

Originally posted November 28, 2017.

American attitudes towards transgender and gender nonconforming persons might be changing. Earlier this month, voters elected six transgender officials to public office in the United States, and poll data from earlier this year suggests the majority of Americans oppose transgender bathroom restrictions and support LGBT nondiscrimination laws. Yet, data on attitudes toward transgender folks is extremely limited, and with the Trump administration’s assault on transgender protections in the military and workplace, the future for the trans community is unclear. Despite this uncertainty, a close examination of the social science research on past shifts in attitudes towards same-sex relationships can provide us insight for what the future may hold for the LGBTQ community in the coming decades.

Attitudes about homosexuality vary globally. While gay marriage is currently legal in more than twenty countries, many nations still criminalize same-sex relationships. Differences in attitudes about homosexuality between countries can be explained by a variety of factors, including religious context, the strength of democratic institutions, and the country’s level of economic development.
In the United States, the late 1980s witnessed little acceptance of same-sex marriage, except for small groups of people who tended to be highly educated, from urban backgrounds, or non-religious. By 2010, support for same-sex marriage increased dramatically, though older Americans, Republicans, and evangelicals were significantly more likely to remain opposed to same-sex marriage. Such a dramatic shift in a relatively short period of time indicates changing attitudes rather than generational differences.
Americans have also become more inclusive in their definition of family. In 2003, nearly half of Americans emphasized heterosexual marriage in their definition of family, while only about a quarter adopted a definition that included same-sex couples. By 2010, nearly one third of Americans subscribed to a more inclusive understanding of family structures. Evidence suggests that these shifts in attitudes were partially the result of broader societal shifts in the United States, including increased educational attainment and changing cultural norms.
Despite this progress for same-sex couples, many challenges remain. Members of the LGBTQ community still experience prejudice, discrimination, and hate crimes — trans women of color especially. Even with support for formal rights for same-sex couples from the majority of Americans, the same people are often uncomfortable with informal privileges, like showing physical affection in public. Past debates within LGBTQ communities about the importance of fighting for marriage rights indicate that the future for LGBTQ folks in the United States is uncertain. While the future can seem harrowing, the recent victories in the United States and Australia for same-sex couples and transgender individuals would have been unheard of only a few decades ago, which offers a beacon of hope to LGBTQ communities.

Want to read more?

Check out these posts on TSP:

Review historical trends in public opinion on gay and lesbian rights (Gallup)

Check out research showing that bisexual adults are less likely to be “out” (Pew Research Center)

Photo of two hands holding a paper that says "I Like Being Autistic Because"
Photo by Walk InRed, Flickr CC

In 2007, the United Nations General Assembly designated April 2nd as World Autism Awareness Day. This community-wide event promotes recognition of and raises awareness about Autism Spectrum Disorder (ASD). The celebration brings individuals with autism and grassroots organizations together to connect and to promote appreciation for people with autism. Despite increasing awareness, the causes of ASD remain a puzzle. While scientific approaches consider it to be a developmental disorder associated with genetic or environmental factors, recent studies in social science illustrate how cultures themselves vary in their perception of both autism and other neurological differences.

The prevalence of autism has grown in past decades in North America. Explanations of this trend point to an actual rise in the incidence of ASD, a broadening spectrum of autism diagnoses, and declining stigma that promotes recognition and acceptance of the condition. Sociologists have also suggested that this may be because parents, psychologists, and therapists have created alliances, using their expertise to develop a new system of institutions for approaching autism.
Regarding the causes of ASD per se, early scientific theories indicated that the condition was associated with genetic alterations, but social science studies have emphasized the role of environmental factors. Further, cultural factors across the world can also shape how people understand autism in the first place.
Both the description and diagnosis of ASD depend on historical factors and vary across nations. In Korea, children with autism and their families experience profound stigma, especially the mother — who is considered to be responsible for her child’s condition. Since in Korea parents gain social respect based on the behavior of their children, having a child with autism constitutes a signal of defective parenting. On the other hand, in Nicaragua, there is an emergent culture around autism that encourages teachers and communities to create a supportive environment. However, both cultures still see autistic children as suffering from a disability. Both stances contrast with new ideas about neurodiversity that strive to create a new place for autism in larger socio-cultural contexts.
Somali immigrants call autism the “Western disease” because there is no word for autism in the Somali language and because many believe it does not exist in Somalia. Somali parents blame the Western diet and medical environment in North America for their children’s condition. Their testimonies have not only opened possibilities to explore new scientific hypotheses regarding the environmental causes of autism, but also to reveal the power dynamics and struggles involved in validating different perspectives and narratives about the condition.

Contemporary educational programs in the United States are now more aware of the importance of highlighting the strengths rather than the deficits of students with autism. They also recognize that accommodation and acceptance of autism are as important as finding its genetic and neurological causes.

Image of a sign that reads, "honk for your kid's future"
Photo by Kyla Duhamel, Flickr CC

The FBI recently announced charges in a widespread college admissions scandal involving fake test scores and fabricated athletic resumes. In the wake of the scandal, sociologists are weighing in and reminding us that college admissions is as much about legitimating privilege as improving life prospects. Sociologists have long been skeptical of the term meritocracy, which was in fact first coined as satire by Michael Young. The research below shows how constructed measures of merit in college admissions play a key part in reproducing inequality.

Mitchell Stevens spent a year in the admissions department at a selective liberal arts school. His book describes “the aristocracy of merit” — especially how the review process rewards the activities and presentation styles most common for privileged students. And while admissions officers are mostly the ones judging merit, the book also highlights how staff from other departments, such as coaches or fundraising officers, can advocate for a student’s admission. Shamus Khan’s ethnographic work similarly notes how elite prep schools set up their students to be competitive in elite college admission through skills, activities, and awards. Prep school staff even occasionally call admissions offices of Ivy League schools for students.
Other work on college admissions highlights that the idea of “merit” has always been socially constructed because those with race and class privilege can set the rules. For instance, colleges instituted “holistic admission” in the early twentieth century because contemporary elites worried that their children would be shut out of attending their alma mater because of high-achieving Jewish students. They rewrote admissions criteria to devalue standardized test scores in favor of a review process that gave students credit for the experiences, skills, and habits that students from the upper-class were more likely to have.
So, if upper-class kids already have advantages, why is there a college cheating scandal? Jessica Calarco points out in NPR that the students affected by this scandal would likely do well no matter what school they attended, but parents are anxious about rising inequality and a bifurcating labor market, and afraid that their children will have a harder time than they did. From teaching children how to advocate for themselves in school to paying thousands of dollars for out-of-school activities, middle and upper-class parents do whatever they can to help their children get ahead. American higher education is a decentralized marketplace that runs on prestige, which makes credentials from a big-name school potentially even more important in today’s changing labor market — both for students looking for social mobility and those looking to legitimate their privilege.

As Anthony Jack told CNN, the admission scandal flips the usual script — usually when we are discussing merit in college admissions it is around insinuations that minority students don’t deserve to get in. For more on race-based affirmative action, check out other TSP work below!

Affirmative Action, College Admissions, and the Debunked “Mismatch” Hypothesis

The Supreme Court’s Impacts on Race and Admissions in America

Merit and the Admissions Debates at Harvard University and Stuyvesant High School