social psychology

Why do so many Americans continue to support Donald Trump with such fervor?

Hillary Clinton now leads Donald Trump in presidential polls by double digits, but Trump’s hardiest supporters have not only stood by him, many have actually increased their commitment. It seems clear that in a little less than a month’s time, tens of millions of Americans will cast a vote for a man who overtly seeks to overthrow basic institutions that preserve the American ideal, such as a free press, freedom of religion, universal suffrage, the right of the accused to legal counsel, and the right of habeas corpus. This is over and above his loudly proclaimed bigotry, sexism, boasts of sexual assault, ableism, history of racial and anti-Muslim bias, and other execrable personal characteristics that would have completely destroyed the electoral prospects of past presidential candidates.

Trump is a uniquely odious candidate who is quite likely going to lose, but more than 40% of Americans plan to vote for him. The science of group conflict might help us understand why.

Photograph by Gage Skidmore via Flickr.

In a powerful 2003 article in the journal American Psychologist, Roy Eidelson and Judy Eidelson foreshadowed Trump’s popularity. Drawing on a close reading of both history and social science literature, they identified five beliefs that — if successfully inculcated in people by a leader — motivate people to initiate group conflict. Trump’s campaign rhetoric deftly mobilizes all five.

  • Confidence in one’s superiority: Trump constantly broadcasts a message that he and his followers are superior to other Americans, whereas those who oppose him are “stupid” and deserve to be punched in the face. His own followers’ violent acts are excused as emanating from “tremendous love and passion for the country.”
  • Claims of unjust treatment: Trump is obsessed with the concept of fairness, but only when it goes his way. Given his presumed superiority, it naturally follows that the only way he and his supporters could fail is if injustice occurs.
  • Fears of vulnerability: Accordingly, Trump has overtly stated that he believes the presidential election will be rigged. His supporters believe him. In one recent poll, only 16 percent of North Carolina Trump supporters agreed that if Clinton wins it would be because she got more votes.
  • Distrust of the other: Trump and his supporters routinely claim that the media, government, educational institutions, and other established entities are overtly undermining Trump, his supporters, and their values. To many Trump supporters, merely being published or broadcast by a major news outlet is evidence that a fact is not credible, given the certainty they have that media professionals are conspiring against Trump.
  • A sense of helplessness: When Trump allows that it’s possible that he might lose the election because of fraud, conspiracy, or disloyalty, he taps into his followers’ sense of helplessness. No matter how superior he and his followers truly are, no matter how unjustly they are treated, there is little that they can do in the face of a nation-wide plot against him. Accordingly, many of Trump’s most ardent supporters will see the impending rejection of their candidate not as a corrective experience to lead them to reconsider their beliefs, but as further evidence that they are helpless in the face of a larger, untrustworthy outgroup.

By ably nurturing these five beliefs, Trump has gained power far beyond the level most could have dreamed prior to the present election cycle.

It seems clear that, if and when Trump loses, he won’t be going anywhere. He has a constituency, stoked by the kind of rhetoric shown to propel people toward group conflict, a conflict some of his supporters are already preparing for. And, since he has convinced so many of his supporters that he alone can bring the changes they desire, it is all but certain that he will use their mandate for his own future purposes.

Sean Ransom, PhD is an assistant clinical professor in the Department of Psychiatry and Behavioral Sciences at Tulane University School of Medicine and founder of the Cognitive Behavioral Therapy Center of New Orleans. He received his PhD in clinical psychology at the University of South Florida.

Yesterday Donald Trump appeared to suggest that defenders of the 2nd Amendment should assassinate Hillary Clinton if she is elected. Or maybe any judges she appoints to the Supreme Court. It wasn’t very clear.

Supporters rushed to his defense, suggesting he was joking. Here’s what a humor scholar, Jason P. Steed, had to say about that via Twitter:


You can follow Jason P. Steed on Twitter here.

FBI Director James Comey didn’t call it the “Ferguson Effect.” Instead, he called the recent rise in homicide rates a “viral video effect” – a more accurately descriptive term for the same idea: that murder rates increased because the police were withdrawing from proactive policing. The full sequence goes something like this: Police kill unarmed Black person. Video goes viral. Groups like Black Lives Matter organize protests. Politicians fail to defend the police. Police decrease their presence in high-crime areas. More people in those areas commit murder.

Baltimore is a good example, as Peter Moskos has strongly argued on his blog Cop in the Hood. But many cities, even those with all the Ferguson elements, have not seen large increases in homicide. New York, for example, the city where I live, had all of the Ferguson-effect elements. Yet the number of murders in New York did not rise, nor did rates of other crimes. Other factors – gang conflict, drugs, and the availability of guns – make a big difference, and these vary among cities. Chicago is not New York. Las Vegas is not Houston. All homicide is local.

There is another flaw in the viral-video theory: It assumes that crime is a game of cops and robbers (or cops and murderers), where the only important players are the bad guys and the cops. If the cops ease up, the bad guys start pulling the trigger more often. Or as Director Comey put it,

There’s a perception that police are less likely to do the marginal additional policing that suppresses crime — the getting out of your car at 2 in the morning and saying to a group of guys, “Hey, what are you doing here?”

This model of crime leaves out the other people in those high-crime neighborhoods. It sees them as spectators or bystanders or occasionally victims. But those people, the ones who are neither cops nor shooters, can play a crucial role in crime control. In some places, it is the residents of the neighborhood who can get the troublesome kids to move off the corner. But even when residents cannot exert any direct force on the bad guys, they can provide information or in other ways help the police. Or not.

This suggests a different kind of Ferguson Effect. In the standard version, the community vents its anger at the cops, the cops then withdraw, and crime goes up. But the arrows of cause and effect can point in both directions. Those viral videos of police killing unarmed Black people reduce the general level of trust. More important, those killings are often the unusually lethal tip of an iceberg of daily unpleasant interactions between police and civilians. That was certainly the case with the Ferguson police department with its massive use of traffic citations and other fines as a major source of revenue. Little wonder that a possibly justifiable shooting by a cop elicited a huge protest.

It’s not clear exactly how the Full Ferguson works. Criminologist Rich Rosenfeld speculates that where people don’t trust the police, they are more likely to settle scores themselves. That may be true, but I wonder if it accounts for increases in killings between gang members or drug dealers. They weren’t going to call the cops anyway. Nor were people who had been drinking, got into an argument, and happened to have a gun at hand.

But maybe where that trust is absent, people don’t do what most of us would do when there’s trouble we cannot handle ourselves – dial 911. As in Director Comey’s version, the police are less of a presence in those neighborhoods, not because they are afraid of being prosecuted for being too aggressive, and not because they are being petulant about what some politician said, but because people there are not calling the cops.

Originally posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

Media have tended to depict childfree people negatively, likening the decision not to have children to “whether to have pizza or Indian for dinner.” Misperceptions about those who do not have children carry serious weight, given that between 2006 and 2010, 15% of women and 24% of men had not had children by age 40, and that nearly half of women aged 40-44 in 2002 were what Amy Blackstone and Mahala Dyer Stewart refer to as “childfree,” or purposefully not intending to have children.

Trends in childlessness/childfreeness from the Pew Research Center:


Blackstone and Stewart’s forthcoming 2016 article in The Family Journal, “There’s More Thinking to Decide”: How the Childfree Decide Not to Parent, engages the topic and extends the scholarly and public work Blackstone has done, including her shared blog, We’re Not Having a Baby.

When researchers explore why people do not have children, they find that the reasons are strikingly similar to the reasons why people do have children. For example, “motivation to develop or maintain meaningful relationships” is a reason that some people have children – and a reason that others do not. Scholars are less certain about how people come to the decision to be childfree. In their new article, Blackstone and Stewart find that, as is often the case with media portrayals of contemporary families, descriptions of how people come to the decision to be childfree have been oversimplified. As they report, people who are childfree put a significant amount of thought into the formation of their families.

Blackstone and Stewart conducted semi-structured interviews with 21 women and 10 men, with an average age of 34, who are intentionally childfree. After several coding sessions, Blackstone and Stewart identified 18 distinct themes that described some aspect of decision-making with regard to living childfree. Ultimately, the authors concluded that being childfree was a conscious decision that arose through a process. These patterns were reported by both men and women respondents, but in slightly different ways.

Childfree as a conscious decision

All but two of the participants emphasized that their decision to be childfree was made consciously. One respondent captured the overarching message:

People who have decided not to have kids arguably have been more thoughtful than those who decided to have kids. It’s deliberate, it’s respectful, ethical, and it’s a real honest, good, fair, and, for many people, right decision.

There were gender differences in the motives for these decisions. Women were more likely to make the decision based on concern for others: some thought that the world was a tough place for children today, and some did not want to contribute to overpopulation and environmental degradation. In contrast, men more often made the decision to live childfree “after giving careful and deliberate thought to the potential consequences of parenting for their own, everyday lives, habits, and activities and what they would be giving up were they to become parents.”

Childfree as a process

Contrary to misconceptions that the decision to be childfree is a “snap” decision, Blackstone and Stewart note that respondents conceptualized their childfree lifestyle as “a working decision” that developed over time. Many respondents had desired to live childfree since they were young; others began the process of deciding to be childfree when they witnessed their siblings and peers raising children. Despite some concrete milestones in the process of deciding to be childfree, respondents emphasized that it was not one experience alone that sustained the decision. One respondent said, “I did sort of take my temperature every five, six, years to make sure I didn’t want them.” Though both women and men described their childfree lifestyle as a “working decision,” women were more likely to include their partners in that decision-making process by talking about the decision, while men were more likely to make the decision independently.

Blackstone and Stewart conclude by asking, “What might childfree families teach us about alternative approaches to ‘doing’ marriage and family?” The present research suggests that childfree people challenge what is often an unquestioned life sequence by consistently considering the impact that children would have on their own lives as well as the lives of their family, friends, and communities. One respondent reflected positively on childfree people’s thought process: “I wish more people thought about thinking about it… I mean I wish it were normal to decide whether or not you were going to have children.”

Braxton Jones is a graduate student in sociology at the University of New Hampshire, and serves as a Graduate Research and Public Affairs Scholar for the Council on Contemporary Families, where this post originally appeared.

We often think that religion helps to build a strong society, in part because it gives people a shared set of beliefs that fosters trust. When you know what your neighbors think about right and wrong, it is easier to assume they are trustworthy people. The problem is that this logic focuses on trustworthy individuals, while social scientists often think about the relationship between religion and trust in terms of social structure and context.

New research from David Olson and Miao Li (using data from the World Values Survey) examines the trust levels of 77,405 individuals from 69 countries, collected between 1999 and 2010. The authors’ analysis focuses on a simple survey question about whether respondents felt they could, in general, trust other people. The authors were especially interested in how religiosity at the national level affected this trust, measuring it in two ways: the percentage of the population that regularly attended religious services and the level of religious diversity in the nation.

These two measures of religious strength and diversity in the social context brought out a surprising pattern. Nations with high religious diversity and high religious attendance had respondents who were significantly less likely to say they could generally trust other people. Conversely, nations with high religious diversity, but relatively low levels of participation, had respondents who were more likely to say they could generally trust other people.


One possible explanation for these two findings is that it is harder to navigate competing claims about truth and moral authority in a society when the stakes are high and everyone cares a lot about the answers, but also much easier to learn to trust others when living in a diverse society where the stakes for that difference are low. The most important lesson from this work, however, may be that the positive effects we usually attribute to cultural systems like religion are not guaranteed; things can turn out quite differently depending on the way religion is embedded in social context.

Evan Stewart is a PhD candidate at the University of Minnesota studying political culture. He is also a member of The Society Pages’ graduate student board. There, he writes for the blog Discoveries, where this post originally appeared. You can follow him on Twitter.

Flashback Friday.

Russ Ruggles, who blogs for Online Dating Matchmaker, makes an argument for lying in your online dating profile. He notes, first, that lying is common and, second, that people lie in the direction that we would expect, given social desirability. Men, for example, tend to exaggerate their height; women tend to exaggerate their thinness.

Since people also tend to restrict their searches according to social desirability (looking for taller men and thinner women), these lies will result in your being included in a greater proportion of searches. So, if you lie, you are more likely to actually go on a date.

Provided your lie was small — small enough, that is, to not be too obvious upon first meeting — Ruggles explains that things are unlikely to fall to pieces on the first date. It turns out that people’s stated preferences have a weak relationship to who they actually like. Stated preferences, one study found, “seemed to vanish when it came time to choose a partner in physical space.”

“It turns out,” Ruggles writes, that “we have pretty much no clue what we actually want in a partner.”

So lie! A little! Lie away! And, also, don’t be so picky. You never know!

Originally posted in 2010. Crossposted at Jezebel.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

At Vox, Evan Soltas discusses new research from Nextions showing racial bias in the legal profession. They put together a hypothetical lawyer’s research memo that had 22 errors of various kinds and distributed it to 60 partners in law firms, who were asked to evaluate it as an example of the “writing competencies of young attorneys.” Some were told that the writer was black, others white.

Fifty-three sent back evaluations. They were on alert for mistakes, but those who believed the research memo was written by a white lawyer found fewer errors than those who thought they were reading a black lawyer’s writing. And they gave the white writer an overall higher grade on the report. (The partners’ race and gender didn’t affect the results, though women on average found more errors and gave more feedback.)

Illustration via Vox:


At Nextions, they collected typical comments:


This is just one more piece of evidence that the deck is stacked against black professionals. The old saying is that minorities and women have to work twice as hard for half the credit. This data suggests that there’s something to it.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Historian Molly Worthen is fighting tyranny, specifically the “tyranny of feelings” and the muddle it creates. We don’t realize that our thinking has been enslaved by this tyranny, but alas, we now speak its language. Case in point:

“Personally, I feel like Bernie Sanders is too idealistic,” a Yale student explained to a reporter in Florida.

Why the “linguistic hedging” as Worthen calls it? Why couldn’t the kid just say, “Sanders is too idealistic”? You might think the difference is minor, or perhaps the speaker is reluctant to assert an opinion as though it were fact. Worthen disagrees.

“I feel like” is not a harmless tic. . . . The phrase says a great deal about our muddled ideas about reason, emotion and argument — a muddle that has political consequences.

The phrase “I feel like” is part of a more general evolution in American culture. We think less in terms of morality – society’s standards of right and wrong – and more in terms of individual psychological well-being. The shift from “I think” to “I feel like” echoes an earlier linguistic trend when we gave up terms like “should” or “ought to” in favor of “needs to.” To say, “Kayden, you should be quiet and settle down,” invokes external social rules of morality. But, “Kayden, you need to settle down,” refers to his internal, psychological needs. Be quiet not because it’s good for others but because it’s good for you.


Both “needs to” and “I feel like” began their rise in the late 1970s, but Worthen finds the latter more insidious. “I feel like” defeats rational discussion. You can argue with what someone says about the facts. You can’t argue with what they say about how they feel. Worthen is asserting a clear cause and effect. She quotes Orwell: “If thought corrupts language, language can also corrupt thought.” She has no evidence of this causal relationship, but she cites some linguists who agree. She also quotes Mark Liberman, who is calmer about the whole thing. People know what you mean despite the hedging, just as they know that when you say, “I feel,” it means “I think,” and that you are not speaking about your actual emotions.

The more common “I feel like” becomes, the less importance we may attach to its literal meaning. “I feel like the emotions have long since been mostly bleached out of ‘feel that,’ ” …

Worthen disagrees.  “When new verbal vices become old habits, their power to shape our thought does not diminish.”

“Vices” indeed. Her entire op-ed piece is a good example of the style of moral discourse that she says we have lost. Her stylistic preferences may have something to do with her scholarly ones – she studies conservative Christianity. No “needs to” for her. She closes her sermon with shoulds:

We should not “feel like.” We should argue rationally, feel deeply and take full responsibility for our interaction with the world.

——————————-

Originally posted at Montclair SocioBlog. Graph updated 5/11/16.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.