Photo of a child sitting on a sidewalk. Photo by Chris Beckerman, Flickr CC

In 2016 there were more than 400,000 children in foster care in the United States. Kids are placed in foster care because of parental neglect, abuse, incarceration, and other reasons that make it unsafe for them to live at home. The majority of these kids are successfully reunited with their parents after their parents complete a case plan. However, a sizable minority of these reunited children will re-enter foster care. New research by Sarah Font, Kierra Sattler, and Elizabeth Gershoff identifies the policy and family conditions that make foster care re-entry more likely.

Foster care is meant to be a temporary status, and the federal government pushes states to achieve “permanency” for kids in care as quickly as possible. Federal funds can even be withheld from states if too many children remain in foster care for longer than a year. A “permanent” home has two main forms: reunification with parents, or terminating parents’ rights and matching kids with an adoptive home. Terminating parents’ rights is considered an extreme step. Doing so requires detailed evidence that parents are not making timely progress toward their goals. By contrast, the standards for reunification are less clear. This means that if parents’ progress is not good, but also not bad enough to terminate their rights, the state has an incentive to reunite them with their kids as they approach federal deadlines. Returning children to parents who have made sub-par progress makes it more likely that they will be taken from their home again in the future.

The researchers analyzed data on children in Texas to find out which family conditions predict foster care re-entry. The children most at risk of re-entry were those who initially entered foster care because of parental substance abuse and neglect (substance abuse is rarely the only reason children are removed from a home). In these cases, parental substance abuse and neglect were also typically the reasons for re-entry, showing that these problems within the home persist over time.

These findings are especially important at a time when opioid use (combined with neglect) is increasing the number of children being removed from their homes. The researchers do not suggest that states should lower their standards for terminating parents’ rights. Rather, they advocate relaxing permanency timelines and offering more post-reunification services to parents with histories of substance abuse, to reduce the risk of returning a child to a home that is still unsafe.

Photo by Metropolitan Transportation Authority of the State of New York, Flickr CC

Originally posted August 14, 2018.

Helping former inmates return to their communities after release from prison is a serious challenge. Poor and unskilled people with criminal records face significant barriers to entering the labor market, especially when trying to access formal and stable jobs. New research by Naomi Sugie describes the day-to-day experiences of job searching and survival for 133 men recently released from prison in Newark, New Jersey.

Since former prisoners are a highly mobile and hard-to-reach population, Sugie distributed smartphones to participants and asked them to report their daily job search and employment experiences. This kind of real-time self-reporting also avoided the recall problems that come with asking people to remember past work experiences. The more than 8,000 real-time daily measures showed that respondents experienced extreme job instability. Only about half of the participants ever worked at least two consecutive days, and only one-quarter of the sample ever worked at least four consecutive days. Most men stopped looking for jobs after the first month, and for those who continued to search, the chances of finding a regular job were still fairly low. To survive and meet their immediate needs, men relied on various low-skill and irregular jobs, from warehouse keepers to carnival maintenance workers.

To explain the experience of working at the margins of the labor market, Sugie develops the concept of “work as foraging,” which refers to the intermittent, short-term, and precarious work people take on to make ends meet. The irregular and sometimes exploitative experience of foraging for work may even exacerbate strain or criminal activity among marginalized job seekers. Work as foraging thus challenges the idea that work always leads to social integration and desistance from crime. On the contrary, working purely as a form of survival is more likely to deepen social inequality and weaken social integration for foraging workers.

Photo of an open math textbook. Photo by Alan Levine, Flickr CC

High school math teachers may have a new answer to the perpetual student question, “Why do we have to learn this?” Researchers who study educational stratification know that math serves as a gatekeeper to advanced high school diplomas, selective colleges, and sought-after majors, since all of these require advanced math courses or math test scores. In new research, Daniel Douglas and Paul Attewell test whether math achievement still matters for inequality when other demographic factors are taken into account, and whether the emphasis the education system places on math really reflects the needs of the workforce.

First, Douglas and Attewell find that math achievement does not simply reflect mastery of skills students will need in prestigious jobs. An analysis of data from the O*NET program and the Bureau of Labor Statistics’ Occupational Employment Statistics found that 81% of all U.S. workers, and 62% of U.S. workers in jobs requiring a bachelor’s degree, never use advanced math or statistics beyond simple algebra or formulas. Less than 3% of Americans said that their jobs require more math than knowing how to calculate the square footage of a house. In fact, individuals with master’s degrees or doctorates often used less math, or less advanced math, than individuals with bachelor’s degrees. Importantly, though, math achievement does still predict college attendance and degree attainment, even for students of the same race or class. This effect of math achievement was strongest for students with higher socioeconomic status.

The fact that knowing advanced math mattered for college attainment, but not for the actual workplace, indicates that the math education system is doing more than simply giving individuals the skills they will need in advanced jobs or sorting which students are most qualified for the most prestigious jobs. Instead, math achievement allows students to collect certain credentials, such as a degree from a selective college or an advanced high school diploma, and the gatekeeping function of math achievement keeps those credentials rare (and therefore valuable). So students may not need to be able to do the math that they learn in high school, but not achieving in high school math can limit opportunities for the rest of their lives.

Photo of a portable structure labeled, “drug testing office.” Photo by Phil! Gold, Flickr CC

Originally published November 1, 2018.

For a long time, individuals and organizations have drawn stark lines between the “deserving” and the “undeserving” poor. Over the past 40 years, these distinctions have been used to justify cutting or limiting the social safety net, leading to a decline in cash welfare and other forms of social assistance available to working-age, able-bodied, poor adults. Furthermore, researchers have shown that welfare recipients are subject to a growing list of limits, conditions, and expectations. In a recent study, Eric Bjorklund, Andrew P. Davis, and Jessica Pfaffendorf extend this research by examining states’ efforts to implement drug testing for applicants to Temporary Assistance for Needy Families (TANF), a flagship national welfare program.

In the tense racial and economic climate following Obama’s 2008 election, Arizona became the first state to introduce a policy restricting access to cash welfare based on applicants’ drug test results. Since then, 15 states have followed by passing drug testing policies for TANF recipients. To understand how the states that passed welfare drug testing policies differ from those that did not, Bjorklund and colleagues looked for patterns in the years leading up to each policy’s implementation. They examined factors such as a state’s government ideology, whether a Republican governor had ousted a Democrat, the proportion of nonwhites in the population, and the white employment rate.

Both decreases in white labor force participation and having a Republican governor were associated with a state’s implementation of a drug testing policy. The authors rely on social context to explain this finding — specifically, these policies were implemented during the economic recession following Obama’s 2008 election as the first African American President of the United States. Given the importance of the white employment rate, the authors speculate that whites may have held a zero-sum belief that economic gains by people of color would entail losses for whites. Whites’ racialized economic fears may have led them to support restrictive policies framed as “correcting” the behavior of certain “morally compromised” groups, thus prompting politicians and legislators to tighten access to welfare programs by excluding those who failed a drug test. In short, this research highlights the ways social assistance programs can be shaped by public perceptions about who deserves assistance and who doesn’t.

Photo by oddharmonic, Flickr CC

Originally posted January 3, 2018.

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn rules and beliefs associated with romantic relationships and sexuality much earlier. Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them. 

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite-gender children was romantically motivated, and they talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” In contrast, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children suggest that girls play the role of “dad” or boys play the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate, like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Photo of a white van with the word, “cash” written on it in graffiti. Photo by Dustin Ground, Flickr CC

They say opportunity makes the thief, and cash provides opportunity for crime. Cash is untraceable, provides anonymity, constitutes a universal and efficient method of exchange, and, unlike credit cards, has durable value and cannot be ‘cancelled,’ all of which makes it an ideal target for street crime. Because of this, countries around the world have begun to promote digital payment systems, such as debit or credit cards, to reduce opportunities for robbery. But how strong is the connection between cash use and crime? William Pridemore, Sean Roche, and Meghan L. Rogers compared rates of ‘cashlessness’ across countries and found that societies that rely less on cash also have lower levels of street crime.

The research team used the Global Financial Inclusion Database to compare countries’ cashlessness, measured as the percentage of adults in a country who received a direct deposit or payment from the government into a bank account. Unlike commercial digital transactions, government-based deposits directly benefit poor people, who are at greater risk of street crime, and these public payments also signal state-level policy efforts to reduce cash use among the poor. The United Nations Office on Drugs and Crime provided the data on robberies.

Even accounting for other factors that matter for crime rates, like poverty, education, and unemployment, cashlessness was significantly associated with lower robbery rates. These findings suggest that the medium of our financial transactions shapes the forms that criminal activity typically takes. And as social and digital forms of monetary exchange evolve, a new generation of digital crimes has emerged as well, creating challenging questions for those concerned with crime and justice.

Minnehaha Falls, Minnesota. Photo by Brooke Chambers

Originally posted May 16, 2018. 

After a particularly long winter, spring has finally sprung in our snowy corner of the United States. As the weather improves, people are emerging from their winter sanctuaries to enjoy the warmth and sunshine outdoors. But there may be more to these everyday adventures than just taking a stroll. In fact, going out in public — whether riding transit, taking a walk, or gathering in large groups — is an act influenced by social factors, like identity and bias. In a recent article, Michael DeLand and David Trouille examine these daily explorations through their new theoretical lens.  

DeLand and Trouille describe different styles of “going out” along several spectrums. The first is the level of interaction with others: while some go out to be alone, others go out to seek social engagement. Outings also differ by commitment. Some people head outside to wander and explore, while others venture out with a specific task in mind, like joining a public sporting event or going for a run. Each outing is dynamic — an individual may intend to stroll alone through a park, then stumble across a game of soccer and change plans.

The authors also discuss how social structures and identities influence these styles of going out. Structural inequalities and individual identities shape which public spaces a person may feel safe inhabiting. For example, in Trouille’s ethnographic work he describes tension between the styles of “going out” of Latino immigrant men and their neighbors. The men he observed drank in a Los Angeles park because bars were too expensive and unsafe. While some neighbors found this maddening, the Latino men were making plans within structural constraints — and this vantage point on “going out” allows for deeper insight into the ways that inequality shapes day-to-day life. In short, the social world influences decision processes like these every time we step outside.

Photo of an ancestry DNA kit. Photo by Lisa Zins, Flickr CC

Earlier this year, Donald Trump pledged to contribute $1 million to a charity of Elizabeth Warren’s choice if she “proved” that she had Native American ancestry. Warren then released results from a DNA test indicating she may have had a Native ancestor six to ten generations ago, bringing controversy about ancestry testing to the forefront once again. The science behind DNA testing is often misunderstood or overstated, and a recent article by Wendy D. Roth and Biörn Ivemark seeks to understand how people internalize their results (or don’t). Ancestry tests use historical migration routes to “geneticize” race and ethnicity, promoting a link between biology and identity while underemphasizing the social factors that shape identity categories. Roth and Ivemark examine how personal and social expectations impact consumers’ evaluations of their ancestry test results, complicating the common assumption that genes determine race.

The researchers interviewed 100 American ancestry-test consumers after they had received their results. They asked participants how they identified throughout their lives and whether the DNA test altered their identities. Most participants said their identities remained consistent even after the test, but not all. Privately, biases and aspirations shaped how participants responded to their results. For instance, one white woman had previously embraced her family’s supposed Native ancestry, and when the test did not reflect this story, she rejected the test. On the other hand, participants were more likely to incorporate a new identity if they felt positively about a racial or ethnic group reflected in their test results. Publicly, participants used social cues to evaluate whether a new identity would be accepted. The results of one Black participant, for example, indicated that she may have Native ancestry. When she tried to embrace this identity by volunteering at a Native community center, she felt unwelcome and decided to dismiss the identity.

However, such internal and social influences aren’t constant — they differ by race. Black respondents were the least likely to incorporate new identities, while white respondents were the most likely to do so. Black participants often assumed a multi-ethnic identity before taking the test, and they generally thought cultural and political differences mattered more for shaping their racial or ethnic identities than their DNA. Roth and Ivemark theorize that a desire for uniqueness made whites more eager to embrace trace levels of non-white ancestry, and white respondents were also able to embrace new identities while still retaining the social benefits of their whiteness. Overall, Roth and Ivemark’s work reemphasizes the social factors that shape identity, which extend far beyond the capacity of ancestry tests to unveil historical genetic trends.

Photo of sign with handwritten messages of “why I didn’t report.” Photo by The All-Nite Images, Flickr CC

Emotions ran high across the nation as many of us tuned in to watch Dr. Christine Blasey Ford’s testimony before the Senate Judiciary Committee alleging sexual assault by Judge Brett Kavanaugh. Dr. Ford’s public testimony has produced important dialogues about why most victims do not report, as seen in the social media hashtag #WhyIDidntReport. New research by Shamus Khan, Jennifer Hirsch, Alexander Wamboldt, and Claude A. Mellins contributes to this growing conversation by exploring the social risks students face in reporting.

Khan and colleagues draw upon the Sexual Health Initiative to Foster Transformation (SHIFT) study, which includes 151 interviews, 17 focus groups, 18 months of participant observation, and a random-sample survey of 1,671 college students at Columbia University and Barnard College (a women’s college). This particular analysis draws mostly on the interviews with college students. The researchers identified incidents of sexual victimization based on legal definitions of sexual assault, rather than counting only incidents that students explicitly labeled as such. By that definition, the interviews revealed that 66 students recounted 89 incidents of sexual victimization.

For many victims, naming their experiences as sexual assault and telling authorities came with a variety of social risks. First, students feared association with a stigmatized identity such as “victim” or “survivor.” They often viewed these labels as disempowering and told interviewers they didn’t want to be seen as “that girl” or “that guy.” By not reporting, some students attempted to claim an alternative identity as compassionate people willing to give their perpetrators a second chance.

Second, reporting poses risks to students’ social networks. For example, students considered how labeling and reporting might hinder their ability to develop or preserve interpersonal relationships, which, for some, included a relationship with their perpetrator. Lastly, students worried they might lose access to college activities like sports, sororities, fraternities, and other student organizations, which would affect their long-term career goals. Several students discussed how reporting would add stress and take time away from these activities. Men who were intoxicated, and Black men in particular, expressed concern that they would be the ones accused of assault because of their alcohol intake or their lack of racial privilege.

As we continue to address sexual violence in the Me Too era, this work encourages us to look beyond formal reporting policies and work to transform the social and cultural conditions that shape perceptions of risk among sexual assault victims.


Photo of the Assembly room in Independence Hall where the U.S. Constitution was signed. Photo by Ken Lund, Flickr CC

Research on racial attitudes finds that a more modern form of racism has emerged since the civil rights movement — people are less likely to assert biological differences between racial groups, but often utter statements that covertly reinforce racial inequality (“I’m not racist, but…”). A recent study by Kasey Henricks suggests that such covert forms of racism were present even in the speeches and debates about slavery during the framing of the U.S. Constitution.

Henricks and his research assistants examined over 1,000 pages of an archival collection at the Library of Congress called A Century of Lawmaking for a New Nation, one of the largest and oldest collections of congressional records. They focused on the debate between Northern and Southern framers over the three-fifths clause, the clause that codified slavery into law, which concerned how to “properly tax” the human bondage that the United States was built upon, as well as how to count slaves for state representation in Congress.

In these discussions, Henricks finds parallels to many of the same expressions and contradictions we see today. For instance, while many framers lamented that slavery continued to exist, they simultaneously refused to extend to slaves the same humanity and rights they granted “freepersons.” Many framers stated their opposition to slavery, but nonetheless provided colorblind justifications for its continued existence. One of these justifications was the idea that slavery should be left to local governance rather than federal intervention. Slaveholders also tried to distance themselves from culpability by portraying themselves as “victims of circumstance,” saddled with a costly form of labor that made them more deserving of tax breaks. According to Henricks, these findings highlight how core American values were used to justify the continuation of slavery without ever explicitly discussing race.

These historical underpinnings of colorblindness illustrate that both our present and our past are marked by forms of racism, overt and covert. In our current era, characterized by the reemergence of white supremacist groups and the violence in Charlottesville, research must continue to demonstrate the many ways racism manifests itself and thereby supports persistent inequalities, injustices, and violence.