Arielle Kuperberg, Assistant Professor of Sociology at The University of North Carolina at Greensboro, is the author of this week’s briefing report at CCF@TSP. Here she answers the questions that keep coming up when people talk about cohabitation these days.

Apples via Creative Commons by Nina Matthews

Lots of people keep asking: does living together before marriage increase your chance of getting a divorce? In my recently published study, I finally answer this question with a definitive “no!”

For decades, researchers have found a connection between premarital cohabitation and divorce that no one could fully explain. Despite these studies and warnings from well-meaning relatives about the dangers of “living in sin,” rates of living together before marriage have skyrocketed over the past 50 years, increasing by almost 900% since the 1960s. I found in my study that almost two-thirds of women who married between 2000 and 2009 lived with their husbands before they tied the knot.

With the majority of couples now living together before marriage, if cohabitation somehow caused couples to divorce, you would think that divorce would be more common in recent generations of young adults, who were much more likely to live together before marriage compared to earlier generations. But recent research has found that for young adults born in 1980 or later, divorce rates have been steady or even declining compared to earlier generations.

What explains the connection between cohabitation and divorce?

We already knew that some of the connection between cohabitation and divorce is a result of the type of people who live together before marriage. In my study I find that compared to those who married without living together first, premarital cohabitors have lower levels of education, are more likely to have a previous birth, and are more likely to be black and have divorced parents, all factors that numerous studies (including mine) have found are related to higher divorce rates.

My study found that the rest of the connection between divorce and cohabitation can be explained by one thing that previous researchers never took into account: the age at which couples moved in together. Cohabitors moved in together at earlier ages (on average) than couples that didn’t live together before marriage, and since living together at younger ages is associated with higher divorce rates, cohabitors are more likely to divorce.

Why does moving in when you are younger increase your divorce rate?

Younger couples are (on average) less prepared to run a joint household together, and may be less prepared to pick a suitable partner than couples that settle down when they are older (on average; remember, this doesn’t apply to every couple!). People change a lot in their early 20s, and those changes sometimes cause incompatibilities with a partner who was selected at a younger age. These incompatibilities lead to higher average divorce rates among those who moved in with their eventual spouse at a young age, even if they waited to marry until they were older.

My research shows that waiting until you are 23-24 or older to settle down with a partner is associated with around a 30% risk of divorce if you marry that partner. Moving in earlier carries a much higher divorce risk: couples that move in at ages 18-19, for instance, face almost a 60% risk of divorce if they eventually marry, and couples who settle down at ages 21-22 have a 40% risk of divorce after marriage.

Are cohabitation and marriage different types of relationships? Why get married at all?

A frequent question I’ve been asked is: does this mean cohabitation and marriage are basically the same? One reporter asked me about a recent study in which couples were threatened with a mild electrical shock while holding the hand of their married or cohabiting partner, and the married participants stayed calmer than the cohabiting ones. Doesn’t this show that cohabitation is a different type of relationship from marriage?

Absolutely! My own research has shown that although cohabitation and marriage aren’t drastically different types of relationships for couples that lived together before marriage, some differences in behavior are pronounced, and the longer a couple has been married, the greater these differences grow. The public commitment, legal ties, and social expectations that come with marriage affect behavior in numerous ways which can’t be discounted.

So should you live together before marriage? Should you get married at all? That’s up to you! But living together won’t increase your chances of getting a divorce if you choose to go that route.

Susan J. Matt is author of Homesickness: An American History (Oxford University Press, 2011). She is Presidential Distinguished Professor and Chair of the History Department at Weber State University, in Ogden, Utah. She tweets at @alongingforhome.

Not long ago, The Onion ran an article with the headline “Unambitious Loser with Happy, Fulfilling Life Still Lives in Hometown.” The piece quoted a friend of the “loser,” who said, “I’ve known Mike my whole life and he’s a good guy, but it’s pretty pathetic that he’s still living on the same street he grew up on and experiencing a deep sense of personal satisfaction . . . .[H]e’s nearly 30 years old, living in the exact same town he was born in, working at the same small-time job, and is extremely contented in all aspects of his home and professional lives. It’s really sad.”

While the article was fiction, the attitudes it encapsulated were not. Americans disparage those overly attached to home. The homesick, boomerang kids, and tightly bonded families seem antithetical to American individualism. We are supposed to be a nation of restless movers who break ties to home with ease. When individuals stay in place, it contradicts our mythology. What is wrong with these people?

That’s a question being asked with increasing frequency about the rising generation, for nearly 22 percent of all adults in their 20s and 30s are living with their parents, the highest rate since the 1950s. And the media have not been kind to them: CBS News observed “… for many boomerang kids, living in a parent’s home becomes a crutch, enabling them to put off making grown-up decisions . . . .” Others have termed them the “Go-Nowhere Generation.”

The message is that staying home shows immaturity and a fatal lack of ambition. It is a sign of emotional neediness and dependence, traits widely stigmatized in American society. However, the expectation that individuals should leave home in their early 20s, and do so easily, is of recent vintage. Only in the last century did Americans come to see young adults who were emotionally close to kin and geographically rooted as psychologically immature and destined for economic failure.

In the nineteenth century, Americans believed that love for home was an ennobling emotion, evidence of a tender heart and a strong family life. Writers and preachers lavished praise on those who loved home, while physicians suggested that wandering too far from it could be fatal, for they believed people could die of acute homesickness.

In contrast, during the 20th century, as corporations and the military began to deploy people across the nation and the globe, strong attachments to family and place became a problem, obstacles to the smooth flow of capital and personnel. The love of home became an archaic emotion in a modern society dependent on a fungible, mobile workforce.

By mid-century, experts were arguing that tightly bonded families were out of place in America. Sociologist W. Lloyd Warner explained that because the economy required individuals to move frequently, “families cannot be too closely attached to their kindred. . . or they will be held to one location, socially and economically maladapted.” Those who tried to maintain strong kin ties were criticized. In 1951, psychiatrist Edward Strecker, preoccupied with the Cold War and the need for a mobile fighting force, accused American mothers of keeping their “children enwombed psychologically,” failing to “untie the emotional apron string . . . which binds her children to her.” He dubbed these women the nation’s “gravest menace.”

Today, we continue to believe young adults should leave home. When they don’t, their living choices are chalked up to poor employment prospects. While economic realities surely play a part in their residential choices, the media give short shrift to other motives. The idea that families might be drawn together by feelings of affection is left out of the equation, as is the possibility that this generation wants to become something other than mobile individualists. Yet there’s considerable evidence that millennials hold values that center more on family and less on high powered careers. A recent poll found them far less concerned with financial success than the population at large. They also are closer to their parents, with whom they fight less and talk more than earlier generations did.

For decades we’ve assumed that leaving home in one’s early twenties is natural, a sign of healthy psychological adjustment; but we should remember such expectations are historically contingent. Today’s millennials remind us there are other ways of organizing family life than the model we’ve grown accustomed to, and prompt us to recognize that values other than individualistic, market-driven ones frequently motivate human behavior. We can learn from them that staying close to home does not make one an “unambitious loser.”

 

My world of parenting involves sifting through countless listicles of advice, online images of children in trauma who are forced to grow up too fast, apps to manage kids’ crazy schedules, Vine videos of tiny tots singing “Let it Go” off key in the back of a minivan, and clever kidroom decorating tips on Pinterest. This is overwhelming, even for parents like me who have plenty of resources and time and education and other things that likely will enhance the life chances of my son. Parenting is hard for everyone, especially those who struggle to find work, make meals, or know where to look if they have questions about kids. Navigating the words and images and sheer volume of information on parenting out there makes it hard even for the people who have work, food, and people to turn to for help.

From Pixabay.

Many of us American parents who have the luxury of a laptop or a bookshelf may have catchy titles such as the following in our libraries and social media feeds:

More or Less: How to Raise Overscheduled Kids and Then Feel Guilty About It and Then Schedule Them in Fewer Activities but Then Add to Their Schedule to Keep Up with Other Parents Whose Kids Will Get Into a Good College

Quality Assurance: How to Use Your Professional Career Skills in Parenting, but Never Show Too Much of Your Family Self at Work for Fear of Being Labeled “Not a Committed Team Player”

Independence Days: How Not to Get Arrested for Letting Kids Do Things by Themselves That You Did When You Were a Kid

Americans are the Worst: How to Raise Your Kids Like French/Italian/Chinese/Swedish Parents Do, and Also How to Eat Like Them with Your Kids in Restaurants and Not Gain Weight

Sometimes I think parents, despite our valiant efforts to be the grown-ups in situations with our children, are more like toddlers with flailing appendages trying to learn what we should and should not fear. Trying to control a world that seems filled with tall and vocal experts and parenting peers whom we’re not sure we should trust. And tripping and hitting our heads on coffee tables every so often. While parents since the dawn of time have probably felt insecure about their abilities, we now swim in an especially large and public typhoon of confusing messages.

Does this typhoon of information make us better parents? Does it make us more assured that we are, in fact, the parents, and our children are, in fact, in need of parenting? More is not better, after all, and not just with regard to chocolate cake. Does the overload actually make us less sure of ourselves, more in need of comforting, less mature, and therefore more similar to the little creatures we are trying our hardest to raise? While our tendency to read a list of the latest habits of highly effective parents would place us squarely in the demographic category of “parent” (because who else would read that stuff?), could it also be that reading all of this actually makes us feel less parental?

Many smart people, from folks at the CDC to a long list of wonderful experts, have talked about this topic already in a myriad of other online and paper-type sources, and have even said that there are too many pieces of advice out there so we should be careful not to get overwhelmed (whoa, that’s very meta), but sometimes when it hits home it bears pondering again. My husband and I, when our son was a baby a decade ago, found ourselves amidst a circle of people who had the time and resources to read and recommend all sorts of books on babies. We, being people with time and resources and commitment to the use of big words whenever possible, read excerpts from the fluffy baby whisperer book and from the technical medical book, threw both out the window and improvised, and then returned to them three months later to realize we had done it pretty much the way the fluffy and medical experts had told us to do it in a perfect combination of both. Sometimes I think experts are just good at telling us what our guts would tell us to do anyway, but far more eloquently and for $12.95. Evidently my husband and I would rather buy advice than trust ourselves not to hit our heads on coffee tables.

I recently asked my mom, now in her 70s and an expert on parenting who has read every book out there since Dr. Spock, whether she thought the difference between the parent and child roles seemed wider between her and me than they are between me and my son. I asked her because she always seemed far more grown-up to me than I am currently acting with my kid. She never laughed when I farted at the dinner table, for example.

In this discussion, Mom and I figured that the answer to that question lies not in my penchant for scatological humor, nor in the amount of information available for parents today, but squarely in the fact that kids are often better than their parents at navigating the latest technology. Kids have long figured they knew more than their parents, and parents have long figured they need to ask for advice on what to do with these tiny creatures who appear in our lives, but now we parents have a hard time knowing which screen corners to swipe and in-app purchases to avoid to retrieve the good information. Ten years after my husband and I threw actual books out an actual window, the typhoon of advice can be read in every social media feed, app, and link on Buzzfeed. Not to mention in the 2nd editions of the fluffy and medical books, now available electronically if you can remember your Kindle password.

Kids are teaching us more than ever, at least about the means to get to the messages. I never taught my mom the steps on how to open a calendar without ripping the pages to mark down when I had piano lessons. She never needed to rely on my brothers to find out how to unfold the medical brochure on tetanus shots. There was no swiping involved in parenting then, at least not on a screen. She was the grown-up. I was the kid. But when our tiny tech expert offspring know more than we do about technology, we feel like the kids.

But despite our agreement that today’s generation gap seems narrower because of our technology-induced role reversals, I felt that my mom gave me more independence than I am giving my son. And isn’t independence part of being a grown-up? Wouldn’t that criterion be evidence of a narrower generation gap then versus now? What does it mean that my son has more skills on a smartphone than I do, but I could ride farther away on my purple banana-seat bike when I was his age? Who is more grown-up – the one who can navigate Map My Ride without accidentally buying porn or the one who can ride her bike alone to the swimming pool two miles away?

As for myself, I am considering two options for my next step as a parent. I could read all of the titles I mentioned earlier, once I find them online with my son’s help. Maybe the most apropos book we could find would be titled

Parenting in an Age of Irony: How My Kid Helped Me Responsibly Purchase Online Resources about How I Should Protect His Innocent and Developing Brain.

Or, rather than actually reading the myriad parenting columns, books, and online diatribes, I will ask my son to digitally catalog them in order from “Most Useful for How to Raise Me” to “Meh, You Can Delete This from Your Cache,” and then make the catalog into a smartphone app that will not accidentally make me buy porn.

Surely his technological prowess will prepare him well for deciphering what is and is not useful information.

But only if he does his deciphering within a one-block radius of our house, so I can keep an eye on him.

Ever since winning third place in a rural Minnesota district high school speech contest with her rendering of an excerpt from Scandinavian Humor and Other Myths, Michelle Janning has attempted to add humor to all academic pursuits, including the sociological discovery of everyday life patterns. She is a sociology professor at Whitman College, and a Senior Scholar with the Council on Contemporary Families. Her website and blog, with a humorous focus on the “between-ness” of social life, is at michellejanning.com.

This post draws from a longer CCF Brief originally published December 10, 2013. Rachel A. Gordon is a professor of sociology at the University of Illinois at Chicago.

By Irangilaneh (Own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
It is “back to school” time – we can see this all around us, in stores, online, and in the media. As students shop for school supplies and clothing, many are thinking about the image they will portray when they first walk the halls of school. A recent Google ad encapsulated these concerns as it opened with a youth searching “How to not look like a freshman.” Technology amplifies – or at least makes more visible – teens’ concern with social image. A recent survey conducted by the We Heart It social networking site and published exclusively by TIME documents the ways in which youth thirst for attaching “likes,” “hearts,” and comments to shared photos – the latest incarnation of the original Facebook “hot or not” ratings of student photos that make many people cringe, but live on.

The We Heart It study reinforced a finding in my own recent work about the impact not just of comments that are openly hurtful or admiring, but of being lost in the shuffle. One teen in the We Heart It survey reported “Sometimes I just feel like I don’t exist, like I’m invisible to everyone, I pretend it’s okay, but it hurts.” In our study, we considered how others’ ratings of adolescents’ looks were associated with their achievement — in grades as well as the social scene. Our most consistent finding was that being above average in looks – what we call standing out from the crowd – was correlated with nearly every social and academic domain that we examined in high school. These advantages continued into young adulthood, including through higher college completion and, as a consequence, higher earnings for the attractive than for those average in looks.

Not surprisingly, being at one pole or the other of looks was important, too, but more selectively. What we called the “fairest of the fair” – being rated very attractive rather than attractive – revealed itself more in young adulthood than in high school: the best-looking youth in our study rated themselves as more extroverted, reported more friends, and were more likely to attain a college degree. Importantly, we also documented that youth rated by others as being on the ugly side of looks were more depressed and had fewer friends than those who were average in looks, both in high school and young adulthood.

We were struck, however, by the extent to which these advantages and disadvantages of being at either end of the perceived beauty continuum can overshadow the importance of being “invisible” in the middle of the continuum. In fact, the largest fraction of youth were in this “average” category of others’ ratings of their looks – over 4 out of every 10 youth were rated “average” – whereas about 3 in 10 were rated “attractive,” just 1 or 2 out of 10 as “very attractive,” and less than 1 in 10 as “unattractive” or “very unattractive.” The advantages of standing out – being on the attractive end rather than “just” average – were also meaningful. For instance, differentials in high school grades and college graduation between youth rated by others as attractive versus average in looks were similar in size to differentials between youth living in two-parent versus single-parent families.

Social scientists have not yet developed programs to address lookism. Taking a page from interventions aimed at reducing other prejudices, however, we anticipate that a wide array of strategies might help youth — and the adults and teachers that they interact with — circumvent assumptions based on looks. The large size and many different classes in high schools mean that teachers and students usually get to know one another less well than in elementary schools, yet a person’s looks are likely especially salient in these large, impersonal settings. High school cliques also restrict interactions across groups, but one of the most successful strategies for reducing prejudice is cross-group contact. One strategy educators might try is bringing students and teachers together for meaningful interactions that cut across social cliques, and then assessing the extent to which such efforts help level the playing field for youth who are more and less attractive.

More broadly, we believe the issue is not just about how others perceive adolescents’ looks, but instead reflects a larger concern about American high schools, where some children are socially marginalized (as my co-author Rob Crosnoe showed in his earlier work) and as a consequence do not get the most out of school.  When this happens, children’s potential is not fully achieved – they lose out individually, and we lose out as a society. In this way, a major challenge for the school system is to keep all students engaged and feeling a part of school.  Initiatives like the Collaborative for Academic, Social and Emotional Learning run by my colleagues here in Chicago are trying to do just that, and we hope schools and scholars will continue such work in the future.

Amy Blackstone is a sociology professor at the University of Maine.

“We got a puppy, and that’s my idea of starting a family. People say, ‘Oh, that’s practice for parenting,’ but if it’s practice for anything it’s to be a mom to another puppy.” –Christina Hendricks

Image via Flickr Creative Commons

Mad Men’s Christina Hendricks is the most recent among a host of celebrities to be asked when she’ll be adding kids to her family. Though the media have only recently taken notice of the childfree, the fact is that rates of childbearing in the U.S. have been on the decline for the past 40 years. It seems celebrities aren’t the only ones choosing to create families that don’t include kids.

The notion that family is something we choose rather than something based solely on ties of “blood or marriage” isn’t new. Kath Weston explored this idea over two decades ago in her 1991 book on gay and lesbian kinship, Families We Choose. Yet Google “start a family” and you’ll quickly discover that for many people, even today, families don’t begin until children enter the picture.

In 1976, just 10 percent of women had not given birth by the time they reached their forties. Today, that number has nearly doubled, reaching 19 percent in 2012. While a fifth of women may be without children, they are not without families. Research shows that people without children form bonds, create households, and help rear the next generation in many of the same ways that those with kids do.

For the 45 childfree women and men I have interviewed in the course of my research on the choice not to parent, family is about belonging, social support, responsibility, and love. For my interview participants, family can and does include blood relations such as siblings and parents, and it also includes partners with whom they may have legal ties. But, on the whole, their definitions of family emphasize the needs that families meet and the functions they fulfill rather than who their families do or do not include. As Sara, a partnered childfree woman in her mid-30s, put it, family is those who are “united despite any kind of differences; it’s a togetherness.”

Image courtesy of we’re {not} having a baby!

Perhaps many of the definitions of family my research participants shared emphasize meanings rather than members because of childfree people’s own experiences of exclusion. A number of my interview participants shared stories about not being invited to events at friends’ and relatives’ houses because it was assumed, without asking, that they wouldn’t want to participate if kids were present. Others described how “family friendly” events in the community exclude their adults-only families.

Annette, a 40-year-old childfree woman who defines family as “anyone who cares for and loves each other” shared her frustration: “Our town has lots of great activities and most of them are called some variation of, ‘Family Fun Day.’ So does that exclude me? It usually does because it’s geared for children, not for my family.” It seems that family fun days and family friendly environments really mean fun and friendly for just one kind of family: those that include children.

Americans of course aren’t the only ones whose perceptions of family seem to be limited to household units that include children. In Ireland, couples without children are defined by the census as “pre-family.” In some ways, this makes sense; having children is an important milestone and children are an essential part of family for many. But when one fifth of women end their childbearing years without having had children, perhaps it is time to consider that not all families do, nor must they, include children.

Stacy Torres is a PhD candidate in sociology at New York University.

The American value of individualism affects us all, but what happens when you are not able to express that value? This is a dilemma for older people subject to stereotypes of dependency. They face special challenges in striving for this ideal and feeling comfortable enough to accept help so that they can remain self-sufficient. In my last post, I explored some reasons why older people may not want to move in with their families. Given these cultural pressures, how do elders living on their own negotiate their need for care and autonomy?

Programs like Meals on Wheels help older people remain independent in their homes. (Image via Wikimedia Commons.)

Polls consistently show that older adults and aging baby boomers want to “age in place”—or remain in their homes independently for as long as possible. This arrangement, desired by ordinary people as a means of preserving autonomy and by policy makers who view this as a cost effective alternative to nursing homes, requires that seniors—often in conjunction with their families—patch together creative ways to support their independence.

The day-to-day managing of routine tasks like grocery shopping, doctor’s appointments, and household chores usually necessitates a little help from a supportive web that includes family, friends, neighbors, and social service agencies. Family may help older relatives with chores, coordinate medical appointments, and pay for supplemental help when possible. Network studies have found that friends are especially good at providing emotional support and a sympathetic ear when life’s travails require someone to bear witness. And neighbors can pitch in with practical help, such as picking up a few things from the store when an older person has trouble leaving the house. For years I have observed how eighty-year-old Joe’s next-door neighbor has served as his link to the outside world whenever his swollen ankles and knees leave him homebound. She brings him a copy of The Daily News and groceries whenever he needs a few days to mend.

Beyond kin and friendship networks, senior centers provide a range of services to community-dwelling elders, though they are also usually the first candidates for budget cuts. A few older people I’ve met over the years regularly took advantage of the cheap but nutritious meals offered daily by a local senior center for a dollar, which saved them the hassle of cooking for one and the cost of eating out but also provided a little companionship. Nonprofit organizations that serve older adults, such as the Jewish Association Serving the Aging (JASA), offer comprehensive access to services that help older people deal with the challenges of living alone in an expensive, gentrified city like New York, including benefits screening for programs such as food stamps and Medicaid. As I walked past a Midtown Manhattan food pantry the other day and saw the line stretching a half-block long, filled with mostly Asian and Latino elders and their shopping carts waiting for donated potatoes, rice, and canned vegetables, I was reminded again of how crucial these stopgaps are for those struggling to remain independent in old age.

But in some cases, elders may go too far in keeping their family at bay due to fears of losing their independence if they reveal their physical or financial challenges. In my own research I’ve found that some people feel so threatened by the prospect of moving in with family (or worse, a nursing home) and ashamed of asking for help that they sometimes go to great lengths to cover up health issues and other difficulties. It’s often only after a crisis that families learn of mounting problems. For example, after 83-year-old Dottie ended up hospitalized for a heart attack her daughter discovered that she had not seen a doctor besides her podiatrist for several years. In the absence of regular medical care, Dottie had improvised her own self-care measures such as weighing down a shopping cart with telephone books for support when she walked, rather than using a cane or walker. When Theresa, in her mid-70s, fell and twisted her ankle, her family discovered the severity of her dementia, which had eroded her ability to tell time and remember dates. Afterwards she moved closer to where her brother lived.

How can we support elders so that reaching out for help doesn’t pose a threat to independence but rather ensures that a bad situation doesn’t get worse or become an unnecessary crisis? Perhaps the first step is recognizing that none of us can do it alone and that at every age we achieve self-reliance by drawing on a mix of social resources and supports.

“It’s my home,” Sylvia, 85, offers as a simple but profound explanation for why she’s not ready to give up her Manhattan apartment and move in with relatives. Though she lived with her own parents as they aged, Sylvia has lived alone for almost twenty years, since her husband passed away, as do her 92-year-old sister-in-law and many of her contemporaries in old age.

People increasingly prefer to “age in place”–near rather than with family.

The advent of Social Security gave older people—and more often than not, older women—the financial resources to live on their own. Economists Kathleen McGarry and Robert Schoeni found that 59 percent of widows over the age of 65 lived with adult children in 1940, compared with 20 percent fifty years later. Today nearly a third of all older adults live alone. These rates rise with age and follow distinct gendered patterns, with women much more likely to live alone than men at all ages. By age 85, 47 percent of women and 27 percent of men lived alone in 2010.

Many older people struggle to make ends meet on Social Security as their sole source of income, or in combination with modest savings and pensions. For immigrant elders in cities like New York, living alone is often not an option due to a lack of affordable housing, linguistic hurdles, and cultural traditions of multi-generational living arrangements. While poverty rates rise with age and hit women hardest in late life, as my analysis of Census data has found, those who can afford to live alone usually do. Researchers expect that these trends will only increase with the aging of baby boomers, who have experienced higher rates of divorce, cohabitation, lifelong singlehood, and childlessness during their lifetimes.

Despite the financial, physical, and psycho-social challenges of living independently, many older people I’ve spoken to prefer to “age in place” and remain on their own for as long as possible, rebuffing numerous offers to move in with family. Why?

Classic sociological studies of community and family life found that a half century ago elders eschewed intergenerational living arrangements in favor of living independently. When British sociologist Peter Townsend interviewed older people in East London about their families in the 1950s, he discovered that most desired familial intimacy at a distance. They felt a deep attachment to their homes and didn’t want to invite conflict with adult children by moving in. They preferred to live near family instead of with family, and the great majority had a child living within a mile of them.

Despite the conventional wisdom and handwringing over the potential for isolation, sometimes living alone can be less isolating for older people than living with younger family members. As sociologist Arlie Hochschild found in her classic study The Unexpected Community, living independently among “age peers” can often provide greater opportunities for social interaction than living within a family where an older person feels like a burden or “in the way,” as some of the older people I’ve spoken to have expressed. My own research on older adults aging in place has found that while many have loving, close relationships with their families and keep in touch by phone and email, co-residence poses a number of drawbacks. Eugene, a 90-year-old man living in New York whom I first met ten years ago, has received repeated offers to move in with his sister. They care deeply for each other. But even in the face of financial struggles and limited mobility that makes walking more than a city block physically draining, Eugene prefers to stay put rather than risk a loss of independence in the suburbs of Dallas: “I don’t drive, and they would have to take me everywhere.” Dispatches from siblings who have moved in with family provide him with little incentive. He reports that one younger sister has had a difficult time living with her son and wants to move into an assisted living facility where she can make friends and socialize. “Her son and daughter-in-law ignore her, and she’s miserable,” he cautions.

Older adults may also hesitate to move in with family due to expectations that they will provide unpaid caregiving. Grandparents already provide significant help raising grandchildren, and low-income grandparent caregivers can experience health declines and neglect their own preventive care when care work demands outstrip resources. In countries with a shortage of affordable child care, such as Japan, having a grandparent nearby can make it more difficult for families to obtain subsidized day care, due to assumptions that elder family members will provide care. Sociologist Jennifer Utrata’s study of Russian single mothers and grandmothers found that older women faced pressure to be good, self-sacrificing “babushkas” and to provide the lion’s share of unpaid caregiving for their grandchildren and household help to their single daughters in the paid workforce.

Given demographic shifts toward an older, grayer society, it’s in everyone’s interest to invest in supporting elders so that they can age with dignity in their communities, whether living alone or with family. We should also understand that living alone does not guarantee isolation; in many cases it can promote less stressful relationships between family members and create the space to develop stronger bonds across generations.

Stacy Torres is a PhD candidate in sociology at New York University.

 

Here’s a reprint of my column on family medical leave, originally posted at GirlwPen; today’s post includes updates on additional studies that further highlight the need for paid leave that extends to all workers.

From Flickr Creative Commons, by Lenneke Veerbeek.

Over 20 years ago Congress passed the Family and Medical Leave Act, and President Bill Clinton signed it into law two weeks after his inauguration in 1993. Remember the optimism? Under the FMLA a qualified employee can take up to 12 weeks of unpaid leave to care for a sick family member, or for pregnancy, a newborn, a newly adopted child, or a new foster child. In a good-news bad-news sense, one of the notable features of the FMLA was that it was gender-neutral: men and women equally had no funding for their job-protected leave of up to 12 weeks per year. Otherwise, this family policy has been the weakest among rich countries. At the time, the FMLA was the “foot in the door” for improving the situation of working families. A hint of how FMLA is doing today was offered by Girlwpen’s Susan Bailey earlier this year.

So…how’s that foot in the door now? Several recent studies offer new tools for analysis. In “Expanding Federal Family and Medical Leave Coverage,” economists Helene Jorgensen and Eileen Appelbaum investigated who benefits from FMLA using the 2012 FMLA Employee Survey conducted by the Department of Labor. About one in five qualified employees has used FMLA leave within the past 18 months, according to this Center for Economic and Policy Research (CEPR) report. The authors found an extensive amount of unmet need for family and medical leave. UPDATE: In June 2014, Jorgensen and Appelbaum published “Documenting the Need for a National Paid Family and Medical Leave Program: Evidence from the 2012 FMLA survey.” They reported that “One-in-four employees needing leave had their leave needs unmet in the past 12 months. Not being able to afford unpaid leave (49.4 percent) and the risk of loss of job (18.3 percent) were the two most common reasons given for not taking needed leave. Employees with children living at home and female employees had the greatest need for leave, but also had the highest rates of unmet leave.”

Several key limitations of the FMLA mean that, in practice, the law doesn’t apply to a large share of the workforce. The FMLA does not cover workers in firms with fewer than 50 employees. As a result, 44.1 percent of workers in the private sector (49.3 million workers) are excluded from protected leave for caring for their sick or vulnerable relations. The FMLA also excludes employees who have been at their current job for less than a year or have worked fewer than 1250 hours in the past year.

Not everyone with needy kin works in a mid-to-large-size firm or has regular employment. So, those limits on access to FMLA do not affect everyone equally. Young workers and Hispanic workers had lower rates of eligibility than other groups. Education level was the strongest predictor of eligibility. People with less than a high school degree were 13.6 percentage points less likely than those with “some college” to have access to unpaid leave for family and medical concerns. Meanwhile, those with a college degree were 10.7 percentage points more likely than those with some college to have access to FMLA leave.

From CEPR’s “Expanding Federal Family and Medical Leave Coverage” (Feb 5, 2014) by Helene Jorgensen and Eileen Appelbaum.

 

Have there been improvements in the past two decades? Another recent study from CEPR and the Center for American Progress, “Job Protection isn’t Enough: Why America Needs Paid Parental Leave,” by Heather Boushey, Jane Farrell, and John Schmitt, points to no. Analysis of data from the Current Population Survey over the past 20 years revealed two key things: First, women take leave way more than men despite the gender neutrality of the policy. Men have increased from a very low rate, but the ratio in the last five years studied is about nine to one. In addition, over the past two decades there has been essentially no change in women’s rates of leave-taking.

Also, per Boushey and colleagues, guess who is most likely to benefit from leave? Women with college degrees and those in full-time jobs. Commenting on their statistical analysis, the authors state, “Better-educated, full-time, union women are more likely than their otherwise identical counterparts to take parental leave” (p. 12). Not everyone can be in a job that qualifies them for FMLA leave; and even among those who do qualify, not everyone has the financial security to use that leave.

These authors, like Jorgensen and Appelbaum, applaud the FMLA and the opportunities it has provided to qualified workers, but their data show that the 1993 Act did not generate a cascade of family-progressive policies for men, women, and families. But one can hope. Jorgensen and Appelbaum demonstrate that if the firm-size threshold were reduced from 50 to 30 employees and the hours-worked requirement from 1,250 to 750 hours in the past year, an additional 8.3 million private-sector workers would be eligible for family leave under FMLA. UPDATE: In an April 2014 study, Jorgensen and Appelbaum zeroed in on FMLA’s exemptions, such as small firms. They calculated that “If the FMLA were amended to cover all firms and worksites regardless of size, an estimated 34.1 million private-sector employees would gain access to job-protected family and medical leave, if they otherwise meet the eligibility requirements relating to length of tenure and hours of work.”

There are some pretty great examples of places in the United States where better family leave policies have been put in place and have worked well. California passed a paid family leave act in 2002, and after twelve years, the program has been highly successful. Appelbaum and Ruth Milkman reported in 2011 on the social, family, and economic benefits of the program. Washington State passed similar legislation in 2007, but it has been held up since then. New Jersey did so in 2008, and Rhode Island’s law was implemented in January 2014. Another review of the California paid leave program demonstrates the growth in uptake since its initiation, but reports that uptake continues to be low because, while the leave is paid, one’s job is not protected.

We just celebrated 50 years of the Civil Rights Act. Last year we celebrated 50 years since the Equal Pay Act. Retrospectives on such landmark legislation include successes as well as persisting shortfalls. We are at 21 and counting with FMLA. These studies remind us that, with the FMLA, we still need to do more before the successes outweigh the shortfalls.

Ashton Applewhite hosts This Chair Rocks. Image by Clipper (Own work) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
June 30th saw the release of 65+ in the United States: 2010, the U.S. Census Bureau’s latest overview of how older Americans are faring socially and economically. Brace yourself: “the U.S. population is poised to experience a population aging boom over the next two decades.” Uh oh, right? Despite the fact that longer lives reflect a remarkable public health achievement—the redistribution of death from the young to the old—there’s more hand-wringing than back-patting going on.

Much of the apprehension centers on the “dependency ratio”: the fact that the number of people over 65 is growing while the number of people in the workforce is shrinking. Fiscal crisis! Social collapse! In fact, that ratio has been falling pretty steadily for over a century. Over the same period national GDPs, along with lifespans, have rapidly increased.

People don’t turn into economic dead weights when they hit 65. As the Census Report documents, they’re participating in the labor force in ever-greater numbers. It also notes that “the dependency ratio does not account for older or younger people who work or have financial resources, nor does it capture those in their ‘working ages’ who are not working,” and that many caregivers are over age 65. Because it’s unpaid, this work is omitted from our national accounting. Millions more older Americans would like to continue to contribute, but are prevented by age discrimination in the workplace, which relegates them to jobs that don’t take advantage of their skills and experience—if they land one at all.

The “approaching crisis in caregiving” that the Census Report calls out is real and growing more acute. But people are healthier as well as longer-lived, and are not an inevitable sink for healthcare dollars. According to the ten-year MacArthur Foundation Study of Aging in America, once people reach 65, their added years don’t have a major impact on Medicare costs. As the Census Report details, the number of Americans aged 65+ in nursing homes declined by 20 percent in the last decade, “from 4.6 percent in 2000 to 3.1 percent in 2010.” That’s three percent of Americans over 65. Chronic conditions pile up, but they don’t keep most older Americans from functioning in the world, helping their neighbors, and enjoying their lives.

The Census Report includes an oft-cited statistic: “An unprecedented shift will occur between 2015 and 2020, when the percentage of people aged 65 and over in the global population will surpass the percentage of the very young (aged 0-4) for the first time.” This means that by 2020 there’ll be one older adult for every child under five—far better for children’s welfare than the inverse, as well as for the women who once had to produce enough of them to survive famines, wars, and epidemics.

It’s also helpful to keep in mind that the projections that have Americans so worked up are largely the result of a specific historical phenomenon: the cohort effect of the baby boom growing old—the proverbial bulge in the python. This effect will peak by midcentury, although, tellingly, few graphs extend far enough out to show the downturn. Much was made of the first boomers turning 65 in 2011, but a 2013 milestone went largely unremarked. That’s when millennials first outnumbered baby boomers. The number of boomers will continue to decline.

Even countries that are rapidly aging can produce “youth bulges”, as demographer Philip Longman pointed out in 2010, describing them as looming disasters “with all the attendant social consequences, from more violence to economic dislocation.” Can’t win for losing. In that same Foreign Policy article Longman warned of a “’gray tsunami’ sweeping the planet.” Journalists jumped on this frankly terrifying metaphor, and “gray tsunami” has since become widely adopted shorthand for the socioeconomic threat posed by an aging population.

What we’re facing is no tsunami. It’s a demographic wave that scientists have been tracking for decades, and it’s washing over a flood plain, not crashing without warning on a defenseless shore. This ageist and alarmist rhetoric justifies prejudice against older people, legitimates their abandonment, and fans the flames of intergenerational conflict. If left unchallenged, ageism will pit us against each other like racism and sexism; it will rob us of an immense accrual of knowledge and experience; and it will poison our response to the remarkable achievement of longer, healthier lives.

Ashton Applewhite blogs at This Chair Rocks, where she also frequently updates her page Yo, Is This Ageist? Reach her on Twitter at @thischairrocks.

"They did not know the laws of nature" Advertisement (1926).  Source: Wikimedia Commons

Nearly 50 years ago, in the 1965 Griswold v Connecticut case, the Supreme Court declared birth control legal for married persons, and shortly afterwards in another case legalized birth control for single people. In a famous study published in 2002, “The power of the pill,” two Harvard economists reported on the dramatic rise in women’s entrance into the professions and attributed this development to the availability of oral contraception beginning in the 1960s. Several years ago, the CDC reported that 99% of U.S. women who have ever had sexual intercourse had used contraception at some point. So the recent controversial Hobby Lobby case no doubt appears somewhat surreal to many Americans who understandably have assumed that contraception, unlike abortion, is a settled, non-contentious issue in the U.S.

To be sure, some conservatives, fearful of a female voter backlash in November, have tried to claim the case is about the religious freedom of certain corporations, and not contraception. But the case is about contraception. The Majority in Hobby Lobby made this clear, claiming the decision only applies to contraception and not to other things that some religious groups might oppose, such as vaccinations and blood transfusions.

So why are Americans still fighting about something that elsewhere in the industrialized world is a taken for granted part of reproductive health care? As Jennifer Reich and I discuss in our forthcoming volume, Reproduction and Society, contraception has always had a volatile career in the U.S., sometimes being used coercively by those in power, and at other times, like the present, being withheld from those who desperately need it.

The contraceptive wars started with the notorious late-19th-century campaign of the anti-vice crusader and U.S. postal inspector Anthony Comstock, who successfully banned the spread of information about contraception under an obscenity statute. Margaret Sanger, who, starting in the early 20th century, sought to bring birth control information and services to American women, was repeatedly arrested before her eventual success in starting Planned Parenthood.

Gradually, with the Supreme Court cases mentioned above, the discovery and dissemination of the pill, and steady increases in premarital sexuality, contraception became far more mainstream. Indeed, among its severest critics were feminist health activists of the 1970s, concerned about the safety of early versions of the pill and IUDs, as well as the use of Third World women as “guinea pigs” for testing methods. Federal and state governments became actively involved in the promotion of birth control: Title X of the Public Health Service Act, enacted in 1970, became the first federal program specifically designed to deliver family planning services to the poor and to teens. This legislation in turn drew angry protests from some activists within the African American community who, pointing to the disproportionate location of newly funded clinics in their neighborhoods, raised accusations of “black genocide.” (Title X exists to this day, albeit chronically underfunded and always threatened with being defunded entirely.)

For a fairly short period after the Roe v Wade decision in 1973, contraception was seen as “common ground” between politicians who were proponents and opponents of that decision. But as the Religious Right grew more prominent in American politics, contraception came under increasing attack for enabling non-procreative sexual activity, as epitomized by presidential candidate Rick Santorum, who promised to eliminate all public funding for contraception if elected: “It’s not okay. It’s a license to do things in a sexual realm that is counter to how things are supposed to be.”

Moreover, many anti-abortionists have come to reframe some forms of contraception as “abortifacients.” Indeed, much of the Hobby Lobby case can be understood as a profound disagreement between abortion opponents and the medical community as to what constitutes an actual pregnancy and how particular contraceptives work. For the former, pregnancy begins the moment that sperm meets egg and fertilization takes place; for the latter it is the implantation of the fertilized egg in a woman’s uterus (the first point at which a pregnancy can actually be ascertained). The four contraceptive methods at issue in the Hobby Lobby case (two brands of emergency contraception and two models of IUDs) are deemed by many conservatives, including the plaintiffs in Hobby Lobby, to cause abortions, while the medical community has gone on record as saying these methods cannot be considered in this light, as they cannot interfere with an established pregnancy. According to medical researchers, these methods work by inhibiting ovulation, while one of the IUDs in question may prevent implantation in some circumstances.

Numerous other challenges to contraceptive coverage in Obamacare are expected to come before the Court, and some will seek employers’ right to deny all forms of contraception. What the outcomes of these cases will be and what success President Obama and Democrats will have in finding the “work-arounds” that they have pledged to pursue are not entirely clear at this moment. What is clear, however, is who suffers most from Hobby Lobby — not only the huge pool of women directly affected, but their families as well. Though we typically think of contraception as a “women’s issue,” in fact it plays a huge role in family well-being. A massive literature review by the Guttmacher Institute reveals the negative impacts on adult relationships, including depression and heightened conflict, when births are unplanned, and also shows the health benefits to children when births are spaced.

But the most effective contraceptive methods are the most expensive ones. As Justice Ruth Bader Ginsburg noted in her scathing dissent, the upfront cost of an IUD can be a thousand dollars, nearly a month’s wages for a low income worker. And many women who can’t afford an IUD apparently want one. One study has shown that when cost-sharing for contraceptive methods was eliminated for a population of California patients, IUD use increased by 137%. In light of this, my depressing conclusion about the Hobby Lobby case is that it follows a familiar pattern of American policies about contraception, and indeed of this country’s social policies more generally: the poorest Americans always seem to get the short end of the stick.

Carole Joffe is a professor at the Bixby Center for Global Reproductive Health at the University of California, San Francisco, professor emerita of sociology at U.C. Davis, and the author of Dispatches from the Abortion Wars: The Costs of Fanaticism to Doctors, Patients and the Rest of Us, and co-editor, with Jennifer Reich, of Reproduction and Society: Interdisciplinary Readings.