Media have tended to depict childfree people negatively, likening the decision not to have children to “whether to have pizza or Indian for dinner.” Misperceptions about those who do not have children carry serious weight, given that between 2006 and 2010, 15% of women and 24% of men had not had children by age 40, and that nearly half of women aged 40-44 in 2002 were what Amy Blackstone and Mahala Dyer Stewart refer to as “childfree,” or purposefully not intending to have children.

[Figure: Trends in childlessness/childfreeness, from the Pew Research Center.]

Blackstone and Stewart’s forthcoming 2016 article in The Family Journal, “‘There’s More Thinking to Decide’: How the Childfree Decide Not to Parent,” engages the topic and extends the scholarly and public work Blackstone has done, including her shared blog, We’re Not Having a Baby.

When researchers explore why people do not have children, they find that the reasons are strikingly similar to the reasons why people do have children. For example, “motivation to develop or maintain meaningful relationships” is a reason that some people have children – and a reason that others do not. Scholars are less certain about how people come to the decision to be childfree. In their new article, Blackstone and Stewart find that, as is often the case with media portrayals of contemporary families, descriptions of how people come to the decision to be childfree have been oversimplified. As their respondents report, people who are childfree put a significant amount of thought into the formation of their families.

Blackstone and Stewart conducted semi-structured interviews with 21 women and 10 men, with an average age of 34, who are intentionally childfree. After several coding sessions, Blackstone and Stewart identified 18 distinct themes that described some aspect of decision-making with regard to living childfree. Ultimately, the authors concluded that being childfree was a conscious decision that arose through a process. These patterns were reported by both men and women respondents, but in slightly different ways.

Childfree as a conscious decision

All but two of the participants emphasized that their decision to be childfree was made consciously. One respondent captured the overarching message:

People who have decided not to have kids arguably have been more thoughtful than those who decided to have kids. It’s deliberate, it’s respectful, ethical, and it’s a real honest, good, fair, and, for many people, right decision.

There were gender differences in the motives for these decisions. Women were more likely to make the decision based on concern for others: some thought that the world was a tough place for children today, and some did not want to contribute to overpopulation and environmental degradation. In contrast, men more often made the decision to live childfree “after giving careful and deliberate thought to the potential consequences of parenting for their own, everyday lives, habits, and activities and what they would be giving up were they to become parents.”

Childfree as a process

Contrary to misconceptions that the decision to be childfree is a “snap” decision, Blackstone and Stewart note that respondents conceptualized their childfree lifestyle as “a working decision” that developed over time. Many respondents had desired to live childfree since they were young; others began the process of deciding to be childfree when they witnessed their siblings and peers raising children. Despite some concrete milestones in the process of deciding to be childfree, respondents emphasized that it was not one experience alone that sustained the decision. One respondent said, “I did sort of take my temperature every five, six years to make sure I didn’t want them.” Though both women and men described their childfree lifestyle as a “working decision,” women were more likely to include their partners in that decision-making process by talking about the decision, while men were more likely to make the decision independently.

Blackstone and Stewart conclude by asking, “What might childfree families teach us about alternative approaches to ‘doing’ marriage and family?” The present research suggests that childfree people challenge what is often an unquestioned life sequence by consistently considering the impact that children would have on their own lives as well as the lives of their family, friends, and communities. One respondent reflected positively on childfree people’s thought process: “I wish more people thought about thinking about it… I mean I wish it were normal to decide whether or not you were going to have children.”

Braxton Jones is a graduate student in sociology at the University of New Hampshire, and serves as a Graduate Research and Public Affairs Scholar for the Council on Contemporary Families, where this post originally appeared.

We often think that religion helps to build a strong society, in part because it gives people a shared set of beliefs that fosters trust. When you know what your neighbors think about right and wrong, it is easier to assume they are trustworthy people. The problem is that this logic focuses on trustworthy individuals, while social scientists often think about the relationship between religion and trust in terms of social structure and context.

New research from David Olson and Miao Li (using data from the World Values Survey) examines the trust levels of 77,405 individuals from 69 countries, collected between 1999 and 2010. The authors’ analysis focuses on a simple survey question about whether respondents felt they could, in general, trust other people. The authors were especially interested in how religiosity at the national level affected this trust, measuring national religiosity in two ways: the percentage of the population that regularly attended religious services and the level of religious diversity in the nation.

These two measures of religious strength and diversity in the social context brought out a surprising pattern. Nations with high religious diversity and high religious attendance had respondents who were significantly less likely to say they could generally trust other people. Conversely, nations with high religious diversity, but relatively low levels of participation, had respondents who were more likely to say they could generally trust other people.


One possible explanation for these two findings is that it is harder to navigate competing claims about truth and moral authority in a society when the stakes are high and everyone cares a lot about the answers, but also much easier to learn to trust others when living in a diverse society where the stakes for that difference are low. The most important lesson from this work, however, may be that the positive effects we usually attribute to cultural systems like religion are not guaranteed; things can turn out quite differently depending on the way religion is embedded in social context.

Evan Stewart is a PhD candidate at the University of Minnesota studying political culture. He is also a member of The Society Pages’ graduate student board. There, he writes for the blog Discoveries, where this post originally appeared. You can follow him on Twitter.

Will Davies, a politics professor and economic sociologist at Goldsmiths, University of London, summarized his thoughts on Brexit for the Political Economy Research Centre, arguing that the split wasn’t one of left and right, young and old, or racist and not racist, but of the center and the periphery. You can read it in full there, or scroll down for my summary.

——————————–

Many of the strongest advocates for Leave, as many have noted, were actually among the beneficiaries of the UK’s relationship with the EU. Small towns and rural areas receive quite a bit of financial support. The regions that voted for Leave in the greatest numbers, then, will also suffer some of the worst consequences of leaving. What motivated them to vote for a change that will in all likelihood make their lives worse?

Davies argues that the economic support they received from their relationship with the EU was paired with cultural invisibility or active denigration by those in the center. Those in the periphery lived in a “shadow welfare state” alongside “a political culture which heaped scorn on dependency.”

Davies uses philosopher Nancy Fraser’s complementary ideas of recognition and redistribution: people need economic security (redistribution), but they need dignity, too (recognition). Misrecognition can be so psychically painful that even those who knew they would suffer economically may have been motivated to vote Leave. “Knowing that your business, farm, family or region is dependent on the beneficence of wealthy liberals,” writes Davies, “is unlikely to be a recipe for satisfaction.”

It was in this context that the political campaign for Leave penned the slogan: “Take back control.” In sociology we call this framing, a way of directing people to think about a situation not just as a problem, but as a particular kind of problem. “Take back control” invokes the indignity of oppression. Davies explains:

It worked on every level between the macroeconomic and the psychoanalytic. Think of what it means on an individual level to rediscover control. To be a person without control (for instance to suffer incontinence or a facial tic) is to be the butt of cruel jokes, to be potentially embarrassed in public. It potentially reduces one’s independence. What was so clever about the language of the Leave campaign was that it spoke directly to this feeling of inadequacy and embarrassment, then promised to eradicate it. The promise had nothing to do with economics or policy, but everything to do with the psychological allure of autonomy and self-respect.

Consider the cover of the Daily Mail praising the decision and calling politicians “out-of-touch” and the EU “elite” and “contemptuous.”

From this point of view, Davies thinks that the reward wasn’t the Leave itself, but the vote, a veritable middle finger to the UK center and the EU “eurocrats.” They know their lives won’t get better after a Brexit, but they don’t see their lives getting any better under any circumstances, so they’ll take the opportunity to pop a symbolic middle finger. That’s all they think they have.

And that’s where Davies thinks the victory of the Leave vote strongly parallels Donald Trump’s rise in the US:

Amongst people who have utterly given up on the future, political movements don’t need to promise any desirable and realistic change. If anything, they are more comforting and trustworthy if predicated on the notion that the future is beyond rescue, for that chimes more closely with people’s private experiences.

Some people believe that voting for Trump might in fact make things worse, but the pleasure of doing so — of popping a middle finger to the Republican party and political elites more generally — would be satisfaction enough. In this sense, they may be quite a lot like the Leavers. For the disenfranchised, a vote against pragmatism and solidarity may be the only satisfaction that this election, or others, is likely to get them.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Flashback Friday.

Russ Ruggles, who blogs for Online Dating Matchmaker, makes an argument for lying in your online dating profile. He notes, first, that lying is common and, second, that people lie in the direction that we would expect, given social desirability. Men, for example, tend to exaggerate their height; women tend to exaggerate their thinness.

Since people also tend to restrict their searches according to social desirability (looking for taller men and thinner women), these lies will result in your being included in a greater proportion of searches. So, if you lie, you are more likely to actually go on a date.

Provided your lie was small — small enough, that is, to not be too obvious upon first meeting — Ruggles explains that things are unlikely to fall to pieces on the first date. It turns out that people’s stated preferences have a weak relationship to who they actually like. Stated preferences, one study found, “seemed to vanish when it came time to choose a partner in physical space.”

“It turns out,” Ruggles writes, that “we have pretty much no clue what we actually want in a partner.”

So lie! A little! Lie away! And, also, don’t be so picky. You never know!

Originally posted in 2010. Crossposted at Jezebel.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Historian Molly Worthen is fighting tyranny, specifically the “tyranny of feelings” and the muddle it creates. We don’t realize that our thinking has been enslaved by this tyranny, but alas, we now speak its language. Case in point:

“Personally, I feel like Bernie Sanders is too idealistic,” a Yale student explained to a reporter in Florida.

Why the “linguistic hedging,” as Worthen calls it? Why couldn’t the kid just say, “Sanders is too idealistic”? You might think the difference is minor, or perhaps the speaker is reluctant to assert an opinion as though it were fact. Worthen disagrees.

“I feel like” is not a harmless tic. . . . The phrase says a great deal about our muddled ideas about reason, emotion and argument — a muddle that has political consequences.

The phrase “I feel like” is part of a more general evolution in American culture. We think less in terms of morality – society’s standards of right and wrong – and more in terms of individual psychological well-being. The shift from “I think” to “I feel like” echoes an earlier linguistic trend when we gave up terms like “should” or “ought to” in favor of “needs to.” To say, “Kayden, you should be quiet and settle down,” invokes external social rules of morality. But, “Kayden, you need to settle down,” refers to his internal, psychological needs. Be quiet not because it’s good for others but because it’s good for you.


Both “needs to” and “I feel like” began their rise in the late 1970s, but Worthen finds the latter more insidious. “I feel like” defeats rational discussion. You can argue with what someone says about the facts. You can’t argue with what they say about how they feel. Worthen is asserting a clear cause and effect. She quotes Orwell: “If thought corrupts language, language can also corrupt thought.” She has no evidence of this causal relationship, but she cites some linguists who agree. She also quotes Mark Liberman, who is calmer about the whole thing. People know what you mean despite the hedging, just as they know that when you say, “I feel,” it means “I think,” and that you are not speaking about your actual emotions.

The more common “I feel like” becomes, the less importance we may attach to its literal meaning. “I feel like the emotions have long since been mostly bleached out of ‘feel that,’ ” …

Worthen disagrees.  “When new verbal vices become old habits, their power to shape our thought does not diminish.”

“Vices” indeed. Her entire op-ed piece is a good example of the style of moral discourse that she says we have lost. Her stylistic preferences may have something to do with her scholarly ones – she studies conservative Christianity. No “needs to” for her. She closes her sermon with shoulds:

We should not “feel like.” We should argue rationally, feel deeply and take full responsibility for our interaction with the world.

——————————-

Originally posted at Montclair SocioBlog. Graph updated 5/11/16.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Way back in 1996 sociologist Susan Walzer published a research article pointing to one of the more insidious gender gaps in household labor: thinking. It was called “Thinking about the Baby.”

In it, Walzer argued that women do more of the intellectual and emotional work of childcare and household maintenance. They do more of the learning and information processing (like buying and reading “how-to” books about parenting or researching pediatricians). They do more worrying (like wondering if their child is hitting his developmental milestones or has enough friends at school). And they do more organizing and delegating (like deciding when towels need washing or what needs to be purchased at the grocery store), even when their partner “helps out” by accepting assigned chores.

For Mother’s Day, a parenting blogger named Ellen Seidman powerfully describes this exhausting and almost entirely invisible job. I am compelled to share. Her essay centers on the phrase “I am the person who notices…” It starts with the toilet paper running out and it goes on… and on… and on… and on. Read it.

She doesn’t politicize what she calls an “uncanny ability to see things… [that enable] our family to basically exist.” She defends her husband (which is fine) and instead relies on a “reduction to personality,” that technique of dismissing unequal workloads first described in the canonical book The Second Shift: somehow it just so happens that it’s she and not her husband who notices all these things.

But I’ll politicize it. The data suggest that it is not an accident that it is she and not her husband who does this vital and brain-engrossing job. Nor is it an accident that it is a job that gets almost no recognition and no pay at all. It’s work women disproportionately do all over America. So, read it. Read it and remember to be thankful for whoever it is in your life that does these things. Or, if it is you, feel righteous and demand a little more recognition and burden sharing. Not just on Mother’s Day. That’s just one day. Every day.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

To PostSecret, a project that collects personal secrets written artistically onto postcards, someone recently sent in the following bombshell: “Ever since we started getting married and buying houses,” she writes, “my girlfriends and I don’t laugh much anymore.”


Her personal secret is, in fact, a national one.  It’s part of what has been called the “paradox of declining female happiness.” Women have more rights and opportunities than they have had in decades and yet they are less happy than ever in both absolute terms and relative to men.

Marriage is part of why. Heterosexual marriage is an unequal institution. Women on average do more of the unpaid and undervalued work of households, they work more each day, and they are more aware of this inequality than their husbands. They are more likely to sacrifice their individual leisure and career goals for marriage. Marriage is a moment of subordination and women, more so than men, subordinate themselves and their careers to their relationship, their children, and the careers of their husbands.

Compared to being single, marriage is a bum deal for many women. Accordingly, married women are less happy than single women and less happy than their husbands; they are less eager than men to marry; they’re more likely to file for divorce and, when they do, they are happier as divorcees than they were when married (the opposite is true for men); and they are more likely than men to prefer never to remarry.

The only reason this is surprising is because of the torrent of propaganda we get that tells us otherwise. We are told by books, sitcoms, reality shows, and romantic comedies that single women are wetting their pants to get hitched. Men are metaphorically or literally dragged to the altar in television commercials and wedding comedies, an idea invented by Hugh Hefner in the 1950s (before the “playboy,” men who resisted marriage were suspected of being gay). Not to mention the wedding-themed toys aimed at girls and the ubiquitous wedding magazines aimed solely at women. Why, it’s almost as if they were trying very hard to convince us of something that isn’t true.

But if women didn’t get married to men, what would happen? Marriage reduces men’s violence and conflict in a society by giving men something to lose. It increases men’s efforts at work, which is good for capitalists and the economy. It often leads to children, which exacerbate cycles of earning and spending, make workers more reliable and dependent on employers, reduce mobility, and create a next generation of workers and social security investors. Marriage inserts us into the machine. And if it benefits women substantially less than men, then it’s no surprise that so many of our marriage promotion messages are aimed squarely at them.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Despite the maxim about familiarity breeding contempt, we usually like what’s familiar. With music, for example, familiarity breeds hits in the short run and nostalgia in the long run. The trouble is that it’s tempting to attribute our liking to the inherent quality of the thing rather than its familiarity. With movies, film buffs may make this same conflation between what they like and what they easily recognize.

That’s one of the points of Scott Lemieux’s takedown of Peter Suderman’s Vox article about Michael Bay.

Suderman hails Bay as “an auteur — the author of a film — whose movies reflect a distinctive, personal sensibility. Few filmmakers are as stylistically consistent as Bay, who recycles many of the same shots, editing patterns, and color schemes in nearly all of his films.”

But what’s so great about being an auteur with a recognizable style? For Lemieux, Michael Bay is a hack. His movies aren’t good, they’re just familiar. Bay’s supporters like them because of that familiarity but then attribute their liking to some imagined cinematic quality of the films.

My students, I discovered last week,  harbor no such delusions about themselves and the songs they like. As a prologue to my summary of the Salganik-Watts MusicLab studies, I asked them to discuss what it is about a song that makes it a hit. “Think about hit songs you like and about hit songs that make you wonder, ‘How did that song get to be #1?’” The most frequent answers were all about familiarity and social influence. “You hear the song a lot, and everyone you know likes it, and you sort of just go along, and then you like it too.” I had to probe in order to come up with anything about the songs themselves – the beat, the rhymes, even the performer.

Lemieux cites Pauline Kael’s famous essay “Circles and Squares” (1963), a response to auteur-loving critics like Andrew Sarris. She makes the same point – that these critics conflate quality with familiarity, or as she terms it “distinguishability.”

That the distinguishability of personality should in itself be a criterion of value completely confuses normal judgment. The smell of a skunk is more distinguishable than the perfume of a rose; does that make it better?

Often the works in which we are most aware of the personality of the director are his worst films – when he falls back on the devices he has already done to death. When a famous director makes a good movie, we look at the movie, we don’t think about the director’s personality; when he makes a stinker we notice his familiar touches because there’s not much else to watch.

Assessing quality in art is difficult if not impossible. Maybe it’s a hopeless task, one that my students, in their wisdom, refused to be drawn into. They said nothing about why one song was better than another. They readily acknowledged that they liked songs because they were familiar and popular, criteria that producers, promoters, and payola-people have long been well aware of.

“In the summer of 1957,” an older friend once told me, “My family was on vacation at Lake Erie. There was this recreation hall – a big open room where teenagers hung out. You could get ice cream and snacks, and there was music, and some of the kids danced. One afternoon, they played the same song – ‘Honeycomb’ by Jimmie Rodgers – about twenty times in a row, maybe more. They just kept playing that song over and over again. Maybe it was the only song they played the whole afternoon.”

It wasn’t just that one rec hall. The people at Roulette Records must have been doing similar promotions all around the country and doing whatever they had to do to get air play for the record. By the end of September, “Honeycomb” was at the top of the Billboard charts. Was it a great song? Assessment of quality was irrelevant, or it was limited to the stereotypical critique offered by the kids on American Bandstand: “It’s got a good beat. You can dance to it.” Of course, this was before the 1960s and the rise of the auteur, a.k.a. the singer-songwriter.

Hollywood uses the same principle when it churns out sequels and prequels – Rocky, Saw, Batman. They call it a “franchise,” acknowledging the films had the similarity of Burger Kings. The audience fills the theaters not because the movie is good but because it’s Star Wars. Kael and the other anti-auteurists argue that auteur exponents are no different in their admiration for all Hitchcock. Or Michael Bay. It’s just that their cinema sophistication allows them to fool themselves.

Originally posted at Montclair SocioBlog. Big hat tip to Mark at West Coast Stat Views.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.