A man sits in front of a document, cup of coffee, and laptop, his head resting in his hands. Sunlight streams through a window to the left. Image used under CC0

Originally published March 30, 2022.


Today “help wanted” signs are commonplace; restaurants, shops, and cafes have temporarily closed or have cut back on hours due to staffing shortages. “Nobody wants to work,” the message goes. Some businesses now offer higher wages, benefits, and other incentives to draw in low-wage workers. All the same, “the great resignation” has been met with alarm across the country, from the halls of Congress to the ivory tower.

In America, where work is seen as virtuous, widespread resignations are certainly surprising. How does this wave of workers walking away from their jobs differ from what we’ve observed in the past, particularly in terms of frustrations about labor instability, declining benefits, and job insecurity? Sociological research on work, precarity, expectations, and emotions provides cultural context on the specificity and significance of “the great resignation.”

Individualism and Work

The importance of individualism in American culture is clear in the workplace. Unlike the period after World War II, when strong labor unions and a broad safety net ensured reliable work and ample benefits (mostly for white workers), today’s workplace is marked by instability and precarity. A pro-work, individualist ethos values workers’ flexibility, adaptability, and “hustle.” When workers are laid off due to shifting market forces and the profit motives of corporate executives, they internalize the blame. Instead of blaming executives for prioritizing stock prices over workers, or organizing to demand more job security, the cultural emphasis on individual responsibility encourages workers to devote their energy to improving themselves and making themselves more attractive for the jobs that are available.

Expectations and Experiences

For many, the pandemic offered a brief glimpse into a different world of work with healthier work-life balance and temporary (if meaningful) government assistance. Why and how have American workers come to expect unpredictable work conditions and meager benefits? The bipartisan, neoliberal consensus that took hold in the latter part of the twentieth century saw a reduction in government intervention into the social sphere. At the same time, a bipartisan pro-business political agenda reshaped how workers thought of themselves and their employers. Workers became individualistic actors or “companies of one” who looked out for themselves and their own interests instead of fighting for improved conditions. Today’s “precariat” – the broad class of workers facing unstable and precarious work – weathers instability by expecting little from employers or the government while demanding more of itself.

Generational Changes

Researchers have identified generational differences in expectations of work. Survey data shows that Baby Boomers experience greater difficulty with workplace instability and the emerging individualist ethos. On the other hand, younger generations – more accustomed to this precarity – manage the tumult with greater skill. These generational disparities in how insecurity is perceived have real implications for worker well-being and family dynamics.

Emotions

Scholars have also examined the central role emotions play in setting expectations of work and employers, as well as the broad realm of “emotional management” industries that help make uncertainty bearable for workers. Instead of improving workplace conditions for workers, these “emotional management” industries provide “self-care” resources that put the burden of managing the despair and anxiety of employment uncertainty on employees themselves, rather than companies.

Image: A little white girl sits on an adult’s lap in front of a laptop, her small hands covering the adult’s as they use the computer. Image courtesy of Nenad Stojkovic CC BY 2.0

Democrats in Congress continue to work toward passing sweeping infrastructure legislation. Part of the infrastructure package would provide funding for childcare, including universal pre-K for three- and four-year-olds, aid for working families to pay the costs of daycare, and paid family leave. Social science research helps place this current debate in perspective, connecting it to larger conversations about who is responsible for paying the costs of raising kids, the consequences for families of the private responsibility for childcare, and what international comparison can show us about alternatives.

Part of the question concerns whether we should think of raising children as a social, rather than individual, responsibility. Public investments in childcare, whether through public assistance to cover the cost of childcare or a public system of universal childcare, are one way that countries communicate who is responsible for reproductive labor: the work of having and caring for children. In the United States, this is often thought of as the responsibility of individual families and, historically, mothers. Feminist scholars, in particular, have critiqued the individualization of responsibility for raising children, emphasizing that the work of having and raising children benefits society across the board. Having kids creates the next generation of workers and taxpayers, carrying on both practical and cultural legacies. Scholars argue that because we all benefit from the work of reproducing the population, we should all share its costs and responsibilities.

Other wealthy Western nations handle childcare differently. In Sweden, for instance, subsidized childcare is available for all children, is considered high quality, and is widely utilized. In Germany, greater availability of well-paying part-time jobs enables two-parent households to better balance the responsibilities of work with the demands of raising kids. In the United States, there is virtually no public support for childcare. Parents are left to their own devices to cover the years before public school starts at age five, as well as childcare before and after school and during school vacations. The U.S. is not alone in expecting families to provide childcare; Italy, for instance, has a culture of “familialism” that expects extended family – grandparents in particular – to provide free childcare for working families. However, as Caitlyn Collins writes, the combination of little support for families and cultural expectations that workers be fully devoted to their jobs makes raising a child particularly challenging in America.

There are two important consequences of the lack of public support for childcare in the United States. The first is economic. Mothers experience a “motherhood penalty” in overall lifetime wages when they exit the labor force to provide childcare, or they may be placed on “mommy tracks” in their professions, with lower-paying and less prestigious jobs that can better accommodate their caring responsibilities. Scholarship shows much smaller motherhood penalties in countries with more cultural and institutional support for childcare.

A second consequence of little support for caring responsibilities is emotional. As Caitlyn Collins writes, mothers in many nations feel guilt and struggle to balance the responsibility to care for their children and their jobs. However, in the United States this guilt and emotional burden is particularly acute because mothers are left almost totally on their own to bear both the practical and moral responsibility for raising children. The guilt parents feel, as well as the stress of balancing childcare responsibilities and full-time work, may be one reason that there is a larger “happiness gap” between parents and non-parents in the United States when compared to other wealthy nations that provide better public support for raising children.

The pandemic has brought a number of social realities into stark relief, including the fact that individual families have to navigate childcare on their own, never clearer than when school closings kept kids at home. As we imagine a post-pandemic future and the potential to “build back better,” we should consider what social research tells us about who should be responsible for caring for kids, the weight of that responsibility, and how public policy changes might provide better care for the nation’s youngest citizens.

A woman helps an elderly man get up from his chair
Photo by Brian Walker, Flickr CC

Originally published May 4, 2020

When we talk about work, we often miss a type of work that is crucial to keeping the economy going and arguably more challenging and difficult than ever under conditions of quarantine and social distancing: care work. Care work includes both paid and unpaid services caring for children, the elderly, and those who are sick and disabled, including bathing, cooking, getting groceries, and cleaning.

Sociologists have found that caregiving that happens within families is not always viewed as work, yet it is a critical part of keeping the paid work sector running. Children need to eat and be bathed and clothed. Families need groceries. Houses need to be cleaned. As many schools in the United States are closed and employees are working from home, parents are having to navigate extended caring duties.
Photo of a woman cooking
Photo by spablab, Flickr CC
Globally, women do most of this caring labor, even when they also work outside of the home. Historically, wealthy white women were able to escape these caring duties by employing women of color to care for their children and households, from enslaved African Americans to domestic servants. Today people of color, immigrants, and those with little education are overrepresented in care work with the worst job conditions. 
In the past decade, the care work sector has grown substantially in the United States. However, care workers are still paid low wages and receive little to no benefits. In fact, care work wages are stagnant or declining, despite an overall rise in education levels for workers. Thus, many care workers — women especially — find themselves living in poverty.  

Caring is important for a society to function, yet care work — paid or unpaid — is still undervalued. In this time of COVID-19 where people are renegotiating how to live and work, attention to caring and appreciation for care work is more necessary than ever.

Covid-19 may be bringing long-term changes to workplaces and leisure activities as people become more attuned to potential infectious disease. But our shock, surprise, and general inability to deal with the virus also tells us something about how much our relationship with disease has changed. 

Graph showing the birth rates, death rate, and total population during each of the 5 stages of epidemiological transition. Image via Wikimedia Commons.

What scientists call the “epidemiological transition” has drastically increased the typical age at death. In the first two phases of the transition, many people died young, often of infection. Advancements in medicine and public health pushed mortality later in life, and in later phases the biggest killers became degenerative diseases like heart disease and cancer. In phase four, our current phase, we have the technology to delay those degenerative diseases, and we occasionally fight emerging infections like AIDS or covid-19. Of course, local context matters, and although the general model above seems to fit the experience of many societies over a long period of time, it’s not deterministic.

Inequality

Even before the epidemiological transition, not everyone had the same risk of contracting a deadly infection. Data from the urban U.S. shows that the level of mortality experienced by white Americans during the 1918 flu (a historic level that demographers consider a once-in-a-lifetime event) was the same level of mortality experienced by nonwhite Americans in every county in every year prior to 1918.

Rise of new infectious diseases

Clearly, as we are seeing today, the epidemiological transition isn’t a smooth line. There is also considerable year-to-year and place-to-place variability, and new diseases can cause a sharp uptick in infectious disease deaths. For instance, the emergence of AIDS in the 1980s was responsible for a rise in infectious mortality and demonstrated the need to be prepared for new diseases. 

In just a few short weeks, covid-19 became a leading cause of death in the United States. The pandemic is a reminder that despite all of our health advances we aren’t beyond the disruptions of infectious disease, despite the broader long-term shift from high rates of childhood mortality to high rates of degenerative disease among elders.

Photo by Kandukuru Nagarjun, Flickr CC

This post was created in collaboration with the Minnesota Journalism Center.

Technology has its share of perks and benefits. Past articles on The Society Pages demonstrate how artificial intelligence and technology can help enhance journalism and curb trafficking and trolling online — but scholars have also found technology has a dark side. Meredith Broussard calls it “technochauvinism,” a belief that tech is always the solution to the world’s problems. It is a red flag, she says, because “there has never been, nor will there ever be, a technological innovation that moves us away from the essential problems of human nature.”
One of these problems is unequal access to the internet. On The Society Pages, we highlighted how access to the internet influences activism. Other research shows how the internet shapes societal practices including predictive policing, real estate markets, affordable housing, social services, and medical care. Predictive policing, for example, is a developing area of inquiry that has come under scrutiny for its lack of transparency and its potential to assign inaccurate risk scores to individuals who may become victims or offenders in violent crime, which can lead to the overpolicing of already marginalized areas.
Scholars have also discovered that blue-chip companies, including Google, produce search results that marginalize underrepresented populations. Further, there is fear that algorithms are writing people out of jobs. While algorithms do have the potential to write people out of jobs, different fields may experience this to different degrees. This may be true for professions such as paralegals: up to 69 percent of paralegals’ time could be automated. In the journalistic profession, reporters and editors are in better shape because they can animate algorithms to their advantage: as part of a human-centered process, algorithms have the potential to increase reporting output with less human effort. But algorithms aren’t neutral — they are produced by people, and they have the potential to reproduce marginalization.
Photo by Office of Congresswoman Katherine Harris, Wikimedia Commons

This post was created in collaboration with the Minnesota Journalism Center.

Recent estimates from the International Labour Organization (ILO) and Walk Free Foundation found that more than 40 million people are in modern slavery. The ILO has valued human trafficking as a $150 billion industry, with $99 billion coming from commercial sexual exploitation. Prostitution and trafficking are both illegal in America (except for several counties in the state of Nevada, where prostitution is legal), but the two terms are often conflated. With regard to terminology: when someone is coerced or forced into selling sex, it is a form of trafficking; those who enter the regulated sex industry voluntarily are deemed sex workers.

The “normalization” of sex work worldwide is still in flux. Scholars divide the international community into two camps on this issue: abolitionist feminists, who believe both voluntary and involuntary prostitution and sex work are exploitative; and human rights feminists, who de-link prostitution/sex work from trafficking, arguing that some adult women and men engage in prostitution/sex work voluntarily and should not be considered victims, and that only those who are forced or coerced should be considered trafficking victims.
Scholars demonstrate that NGO coverage of trafficking often portrays “ideal victim” and “ideal perpetrator” stereotypes that don’t always reflect the truth about who is subject to trafficking worldwide. Further, journalistic coverage of trafficking is often written through the lens of “episodic” frames that provide personal narratives but lack trend statistics, quotes from experts, or social forces at play in perpetuating demand for trafficking worldwide.
As anti-trafficking campaigns evolve in the Digital Age, technology also plays an integral role in efforts to curb demand and address supply that flows through social media networks and the Internet. Initiatives — including research about online demand for sex and working partnerships between social scientists, law enforcement, and anti-trafficking NGOs — are shaping the future of anti-trafficking efforts worldwide.
Photo of a drone flying in the air near a statue of Joan of Arc.
Photo by Ted Eytan, Flickr CC

This post was created in collaboration with the Minnesota Journalism Center.

The landscape of journalism is changing every day. The Pew Research Center reported that newspaper newsroom employees declined by 45% between 2008 and 2017, and Nieman Lab argues that newsrooms are in the midst of a “do-or-die moment.” As traditional newsrooms lose hundreds of reporters and editors annually, content creators including WikiLeaks and Deadspin are coming alongside legacy media outlets including CNN, the BBC, and The New York Times to provide information to the public. All of these players publish content online in a journalistic fashion, raising the question of what journalism is as a profession.

In the midst of a shrinking workforce, scholars are starting to pay attention to “interlopers” and “intralopers:” Interlopers are actors or institutions who may consider the work they do to be part of news media, though they do not always define themselves as journalists; web analytics companies are one current example. Intralopers are similar to interlopers, but instead work from within news organizations as specialists in digital and social media and often produce emerging technology meant to complement journalists’ work. Both play increasingly key roles in journalistic spaces.
Machines and software packages are beginning to play a more central role in news gathering, news selection, news writing, news editing, and news distribution in newsrooms worldwide. Drones are one example of machines occupying space traditionally held by journalistic actors. 2016 was a turning point for the institutionalization of drones in newsrooms in the United States, when the Federal Aviation Administration (FAA) amended aviation regulations to allow for widespread experimentation with drones in American journalism. Since then, journalists from outlets including The New York Times and The Washington Post have produced compelling stories, photos, and videos, but have also gone through a comprehensive federal certification process (Columbia Journalism Review recently wrote about this phenomenon).
Analytics and metrics also play a key role in newsrooms nationwide. However, journalists have varying opinions of how influential their role is in their daily routines, with some arguing that analytics challenge journalists’ authority to decide which stories are newsworthy.
Beyond analytics and metrics, journalists and technologists often collaborate with each other on a regular basis to create open-source software programs. One example is “hackathons” — events where coders and journalists come together to find solutions to journalistic problems in the interest of creating a brighter future for news outlets worldwide.

Photo of a closed sign outside Saguaro National Park during the 2013 U.S. federal shutdown. Photo by NPCA Photos, Flickr CC

Originally posted October 15, 2013.

Government shutdowns are (thankfully) rare and tend to lead to a lot of calls to economists: what happens to the dollar on the international market? How do military towns and towns that rely on National Park tourism survive? Will companies screech to a halt while they wait for the FDA to get back to business? In the meantime, we might take this opportunity to remember the myriad ways in which all Americans are dependent upon the government.

Most people don’t realize they benefit from government programs.

In 2012, Mettler asserted that 96% of Americans benefit from at least one of 21 specific government programs (not including those that affect all people equally, like road maintenance). These include “submerged” benefits (like tax breaks for mortgage interest) and direct benefits (like Medicaid). In Table 3 of the second citation, she shows that some 44.1% of those receiving Social Security benefits answer “no” when asked if they “have used a government program.”

The government is instrumental in innovation.

Fred Block and Matthew Keller sum up some of their research in a Scholars Strategy Network brief on government as the main driver of innovation. Using data from R&D Magazine’s annual list of the top 100 innovations, they identified 88 winners in 2006 with some government support, 77 of which relied on federal dollars and 42 of which came directly out of federally sponsored labs. They also focus on a program started by Ronald Reagan’s administration that, today, provides up to 6,000 loans ($2 billion or so) annually to small businesses trying to commercialize new tech.