Photo of a sign that says, “workers united can never be defeated!” Photo by rochelle hartman, Flickr CC

In a perfect world, the Internet would bring people together and give everyone a voice in the public sphere. When it comes to organizational activism for collective bargaining rights in North Carolina, however, Jen Schradie argues that digital technology creates a “treadmill that reproduces inequality,” and that focusing on online data alone may miss social movement organizing that takes place offline. In her study “The Digital Activism Gap: How Class and Costs Shape Online Collective Action,” Schradie sets out to discover how the digital divide — defined as the economic, educational, and social inequalities between those who have computers and online access and those who do not — influences activism.

In 1959 the North Carolina General Assembly banned collective bargaining for the public sector, making North Carolina one of only three U.S. states that do not allow public workers to have collective bargaining rights. Schradie focuses on 34 organizations comprising individuals from various socioeconomic classes to better understand whether or not the digital divide influences activism. She combined online and offline data collection methods from 2011 to 2014, including in-depth interviews with 65 informants from the organizations; ethnographic observations of meetings, protests, and individuals’ personal internet use; and content analysis of each group’s website, Facebook, and Twitter posts. A research team also gathered data from Tweets, Facebook posts, and website metrics of the organizations under study.

Schradie found that digital engagement varies along class lines in a way that produces a digital activism gap. This gap is defined by two key takeaways: first, organizations with predominantly working-class members were much less likely to use the internet for organizing than those with members from middle and upper classes; second, two of the most active groups for collective bargaining rights lacked a web presence. While the digital divide affects social movements, organizations are still effective offline. To this end, Schradie notes the solution for activists and researchers is not simply to provide digital resources to the working class: it’s to understand that “theories — and policies — that are built only on those who have an empowered digital presence are limited.”

Photo of a student using a math workbook. Photo by Bindaas Madhavi, Flickr CC

There are two understandings of how schools affect inequality. On the one hand, evidence suggests that schools increase inequality by providing more advantages to students who are better off to begin with. On the other, schools are hailed as society’s “great equalizer” and believed to provide opportunities for all children to get ahead. New work from von Hippel, Workman, and Downey revisits whether schools can compensate for family inequality.

The researchers replicated an earlier study that compared kindergarteners’ reading and math progress during the school year to their progress during summer vacation. Looking at summer vacation allows researchers to focus on inequality due to differences in the home. And by comparing summer vacation progress to school year progress, researchers can determine whether schools are making those differences larger or smaller. The earlier study found that, while inequality grew substantially between the start of kindergarten and the end of first grade, it grew much faster during summer vacations than it did during the school year. The researchers concluded that schools, in fact, were slowing down the growth of gaps due to inequalities in the home. 

In the new study, von Hippel, Workman, and Downey tested children who began kindergarten in 2010 with an updated measurement of achievement to see if school affected inequality differently in this younger cohort. For most students in both the original and the younger cohorts, gaps grew more quickly over summer vacations. For the 2010 cohort, however, the variation in scores upon entering kindergarten was reduced by the time they finished second grade. This means that not only are schools slowing down the growth of achievement gaps, they’re actually shrinking them. Yet, this finding was not the same for all children — for African American students, the pattern was reversed, with gaps growing wider when school was in session.

While it is heartening to know that schools have the ability to close achievement gaps, these early childhood gaps — especially those in basic reading and math — are still an issue. Since these gaps emerge before students begin kindergarten, later school or summer interventions are palliative, rather than preventative. It may therefore be wiser to develop policies that reduce inequalities among parents and their children before they’re old enough to enroll.

Photo of a child sitting on a sidewalk. Photo by Chris Beckerman, Flickr CC

In 2016 there were more than 400,000 children in foster care in the United States. Kids are placed in foster care because of parental neglect, abuse, incarceration, and other reasons that make it unsafe for them to live at home. The majority of these kids are successfully reunited with their parents after their parents complete a case plan. However, a sizable minority of these reunited children will re-enter foster care. New research by Sarah Font, Kierra Sattler, and Elizabeth Gershoff identifies the policy and family conditions that make foster care re-entry more likely.

Foster care is meant to be a temporary status, and the federal government pushes states to achieve “permanency” for kids in care as quickly as possible. Federal funds can even be withheld from states if too many children remain in foster care for longer than a year. A “permanent” home has two main forms: reunification with parents, or terminating parents’ rights and matching kids with an adoptive home. Terminating parents’ rights is considered an extreme step. Doing so requires detailed evidence that parents are not making timely progress toward their goals. By contrast, the standards for reunification are less clear. This means that if parents’ progress is not good, but also not bad enough to terminate their rights, the state has an incentive to reunite them with their kids as they approach federal deadlines. Returning children to parents who have made sub-par progress makes it more likely that they will be taken from their home again in the future.

The researchers analyzed data on children in Texas to find out what family conditions predict foster care re-entry. The children most at risk of re-entry were those who initially entered foster care because of parental substance abuse and neglect (substance abuse is rarely the only reason children are removed from a home). In these cases, parental substance abuse and neglect were also typically the reasons for re-entry, showing that these issues within the home persist over time.

These findings are especially important at a time when opioid use (combined with neglect) is increasing the number of children being removed from their homes. The researchers do not suggest that states should lower their standards to terminate parents’ rights. Rather, they advocate that timelines toward permanency should be relaxed and more post-reunification services should be offered to formerly substance-abusing parents to reduce the risk of returning a child to a home that is still unsafe.

Photo by Metropolitan Transportation Authority of the State of New York, Flickr CC

Originally posted August 14, 2018.

Helping former inmates return to communities after being released from prison is a serious challenge. Poor and unskilled populations with criminal records face significant barriers to enter the labor market, especially when trying to access formal and stable jobs. New research by Naomi Sugie describes the day-to-day experiences of job search and survival for 133 men recently released from prison in Newark, New Jersey.

Since former prisoners are a highly mobile and hard-to-reach population, Sugie distributed smartphones among participants and asked them to report their daily job search and employment experiences. This real-time self-reporting also avoided the recall problems that arise when participants try to remember past work experiences. The more than 8,000 real-time daily measures showed that respondents experience extreme job instability. Only about half of the participants ever worked at least two consecutive days, and only one-quarter of the sample ever worked at least four consecutive days. Most men ceased looking for jobs after the first month, and for those who continued to search, the chances of finding a regular job were still fairly low. To survive and fulfill their immediate needs, men relied on various low-skill and irregular jobs — from warehouse keepers to carnival maintenance workers.

To explain the experience of working at the margins of the labor market, Sugie created the concept of work as foraging, which refers to engagement in intermittent, short-term, and precarious work needed to make ends meet. The irregular and sometimes exploitative experience of work as foraging may even exacerbate strain or criminal activities among marginalized job seekers. This particular type of work — work as foraging — challenges the idea that work always leads to social integration and desistance. On the contrary, working as a form of survival is more likely to lead to higher social inequality and lower social integration for foraging workers.

Photo of an open math textbook. Photo by Alan Levine, Flickr CC

High school math teachers may have a new answer to the perpetual student question, “Why do we have to learn this?” Researchers who study education stratification know that math serves as a gatekeeper to advanced high school degrees, selective colleges, and sought-after majors, as all of these require advanced math courses or math test scores. In new research, Daniel Douglas and Paul Attewell test whether math achievement still matters for inequality when other demographic factors are taken into account, as well as whether the emphasis the education system places on math really reflects the needs of the workforce.

First, Douglas and Attewell find that math achievement does not simply reflect mastery of skills students will need in prestigious jobs. An analysis of data from the Bureau of Labor Statistics O*net program and the Occupational Employment Statistics found that 81% of all U.S. workers and 62% of all U.S. workers in jobs requiring a bachelor’s degree never use advanced math or statistics beyond simple algebra or formulas. Less than 3% of Americans said that their jobs require more math than knowing how to calculate the square footage of a house. In fact, individuals with master’s degrees or doctorates often used less math or less advanced math than individuals with bachelor’s degrees. But importantly, math achievement does still predict college attendance and degree attainment, even for students of the same race or class. This effect of math achievement was most significant for students with higher socioeconomic status.

The fact that knowing advanced math mattered for college attainment, but not for the actual workplace, indicates that the math education system is doing more than simply giving individuals the skills they will need in advanced jobs or sorting which students are most qualified for the most prestigious jobs. Instead, math achievement allows students to collect certain credentials, such as a degree from a selective college or an advanced high school diploma, and the gatekeeping function of math achievement keeps those credentials rare (and therefore valuable). So students may not need to be able to do the math that they learn in high school, but not achieving in high school math can limit opportunities for the rest of their lives.

Photo of a portable structure labeled, “drug testing office.” Photo by Phil! Gold, Flickr CC

Originally published November 1, 2018.

For a long time, individuals and organizations have drawn stark lines between the “deserving” and the “undeserving” poor. Over the past 40 years, these distinctions have been used to justify cutting or limiting social safety net programs, leading to a decline in cash welfare programs and other parts of social assistance programs that working-age, able-bodied, poor adults are eligible for. Furthermore, researchers have shown that welfare recipients are subject to a growing list of limits, conditions, and expectations. In a recent study, Eric Bjorklund, Andrew P. Davis, and Jessica Pfaffendorf continue such research by examining states’ efforts to implement drug testing for applicants to Temporary Assistance for Needy Families (TANF), a flagship national welfare program.

In the tense racial and economic climate following Obama’s 2008 election, Arizona became the first state to introduce a policy restricting access to cash welfare for applicants based on drug test results. Since then, 15 states have followed by passing drug-testing policies for recipients of TANF. To understand how the states that passed welfare drug-testing policies potentially differ from states that did not, Bjorklund and colleagues looked for patterns in the years leading up to the implementation of the policy. They examined factors such as states’ government ideology, whether a Republican governor ousted a Democrat, the proportion of nonwhites in the population, and the white employment rate.

Both decreases in white labor force participation and having a Republican governor were associated with a state’s implementation of a drug testing policy. The authors rely on social context to explain this finding — specifically, these policies were implemented during the economic recession following Obama’s 2008 election as the first African American President of the United States. Given the importance of the white employment rate, the authors speculate that whites may have held a zero-sum belief that economic gains by people of color would entail losses for whites. Whites’ racialized economic fears may have led them to support restrictive policies framed as “correcting” the behavior of certain “morally compromised” groups, thus prompting politicians and legislators to tighten access to welfare programs by excluding those who failed a drug test. In short, this research highlights the ways social assistance programs can be shaped by public perceptions about who deserves assistance and who doesn’t.

Photo by oddharmonic, Flickr CC

Originally posted January 3, 2018.

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn rules and beliefs associated with romantic relationships and sexuality much earlier. Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them. 

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between children of the opposite gender was romantically motivated, and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls play the role of “dad” or boys play the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate, like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Photo of a white van with the word, “cash” written on it in graffiti. Photo by Dustin Ground, Flickr CC

They say opportunity makes the thief, and cash provides opportunity for crime. Cash is untraceable, provides anonymity, constitutes a universal and efficient method of exchange, and, unlike credit cards, has durable value and cannot be ‘cancelled,’ which makes it the ideal target for street crime. Because of this, countries around the world have begun to promote the use of digital payment systems, such as debit or credit cards, to reduce opportunities for robbery. But how strong is the connection between cash use and crime? William Pridemore, Sean Roche, and Meghan L. Rogers compared rates of ‘cashlessness’ across countries and found that societies that use less cash than others also have lower levels of street crime.

The research team used the Global Financial Inclusion Database to compare countries’ cashlessness by looking at the percentage of adults in a country that received a direct deposit or payment from the government into a bank account. Unlike commercial digital transactions, government-based deposits directly benefit poor people who are at a greater risk of street crime. These public payments also signal the effort of state-level policies to reduce cash among the poor. The United Nations Office on Drugs and Crime provided the data on robberies.

Despite the importance of other factors like poverty, education, and unemployment in crime rates, cashlessness is significantly associated with lower robbery rates. These findings suggest that the medium of our financial transactions contributes to the forms of typical criminal activity. As social and digital forms of monetary exchange evolve, a new generation of digital crimes has emerged as well, creating challenging questions for those concerned with crime and justice.

Minnehaha Falls, Minnesota. Photo by Brooke Chambers

Originally posted May 16, 2018. 

After a particularly long winter, spring has finally sprung in our snowy corner of the United States. As the weather improves, people are emerging from their winter sanctuaries to enjoy the warmth and sunshine outdoors. But there may be more to these everyday adventures than just taking a stroll. In fact, going out in public — whether riding transit, taking a walk, or gathering in large groups — is an act influenced by social factors, like identity and bias. In a recent article, Michael DeLand and David Trouille examine these daily explorations through their new theoretical lens.  

DeLand and Trouille describe different styles of “going out” along several spectrums. The first is the level of interaction with others. While some go out to be alone, others go out to seek social engagement. Outings also differ by commitment. Some may head outside to wander and explore, and others may venture out with a specific task in mind, like joining a public sporting event or going for a run. Each outing is dynamic — an individual may intend to stroll alone through a park, then stumble across a game of soccer and change plans.

The authors also discuss how social structures and identities influence the styles of going out. For example, structural inequalities and individual identities influence which public spaces an individual may feel safe inhabiting. In his ethnographic work, Trouille describes tension between styles of “going out” for Latino immigrant men and their neighbors. The men he observed drank in a Los Angeles park since bars were too expensive and unsafe. While some neighbors found this maddening, the Latino men made plans based on structural restraints — and this vantage point of “going out” allows for deeper insight into the ways that inequality impacts day-to-day life. In short, the social world influences decision processes like these every time we step outside.

Photo of an ancestry dna kit. Photo by Lisa Zins, Flickr CC

Earlier this year, Donald Trump pledged to contribute $1 million to a charity of Elizabeth Warren’s choice if she “proved” that she had Native American ancestry. Warren then released results from a DNA test indicating she may have had a Native ancestor six to ten generations ago, bringing controversy about ancestry testing to the forefront once again. The science behind DNA testing is often misunderstood or overstated, and a recent article by Wendy D. Roth and Biörn Ivemark seeks to understand how people internalize their results (or don’t). Ancestry tests use historical migration routes to “geneticize” race and ethnicity, promoting a link between biology and identity while underemphasizing social factors that shape identity categories. Roth and Ivemark examine how personal and social expectations impact consumers’ evaluations of their ancestry test results, complicating the common assumption that genes determine race.

The researchers interviewed 100 American ancestry-test consumers after they had received their results. They asked participants how they identified throughout their lives and if the DNA test altered their identities. Most participants said their identities remained consistent even after the test, but not all. Privately, bias and aspirations shaped how participants responded to their results. For instance, one white woman had previously embraced her family’s supposed Native ancestry, and when the test did not reflect this story, she rejected the test. On the other hand, participants were more likely to incorporate a new identity if they felt positively about a racial or ethnic group reflected in their test results. Publicly, participants used social cues to evaluate whether a new identity would be accepted. The results of one Black participant, for example, indicated that she may have Native ancestry. When she tried to embrace this identity by volunteering at a Native community center, she felt unwelcome, and thus decided to dismiss the identity.

However, such internal and social influences aren’t constant — they differ by race. Black respondents were the least likely to incorporate new identities, while white respondents were the most likely to do so. Black participants often assumed a multi-ethnic identity before taking the test, and they generally thought cultural and political differences were more important for shaping their racial or ethnic identities than their DNA. Roth and Ivemark theorize that a desire for uniqueness made whites more eager to embrace trace levels of non-white ancestry, and white respondents were also able to embrace new identities while still retaining the social benefits of their whiteness. Overall, Roth and Ivemark’s work reemphasizes the social factors that shape identity, far beyond the capacity of ancestry tests to unveil historical genetic trends.