Presidential hopeful and U.S. Congressman Ron Paul (R-TX) made the news over the weekend arguing, among other things, that the Federal Emergency Management Agency (FEMA) is unnecessary or, even worse, creates a kind of moral hazard in populations who come to depend on Federal relief efforts. In remarks reported Friday, Rep. Paul said that Hurricane Irene should be handled “like 1900,” the year that a large storm killed approximately 8,000 individuals in Galveston and a few thousand more onshore, when it struck the low-lying island and nearby small communities on the Texas coast.

It is certainly true that the Federal response to the destruction of Galveston was relatively minor. In 1900, it was limited for the most part to President McKinley sending surplus Army tents to house the newly homeless residents of Galveston and loaning some ships to transport relief goods. Systematic Federal management and provision of aid to disaster victims did not crystallize until the Mississippi River’s catastrophic flooding in 1927.

The nation as a whole, on the other hand, quickly mobilized relief donation efforts through newspapers, state and city governments, and the dense network of fraternal organizations that characterized American civil society in 1900. The nation’s response was along the lines of the civic and political institutions of the time, with all that entailed.

[Credit: Rosenberg Library’s Galveston and Texas History Center archives]

So, for instance, some of the citizens of Galveston who survived the storm were given liquor for their nerves and pressed into service at gunpoint by local authorities to clear dead and putrefying bodies from the wreckage; some were later partially compensated for their time with a small sum of money. Property owners, however, were exempted from mandatory clearing of debris and corpses.

Voluntary associations – often segregated by gender, race, ethnicity, and class – took care of their own members as best they could, but the broader distribution of relief supplies arriving from other areas was handled by committees of Galveston’s social and economic elites, based on their knowledge of their city’s political districts. Distribution efforts throughout the Texas coast were controversial enough that the Texas State Senate held hearings to investigate reports of improper relief distribution, some of which were borne out by testimony but none of which were pursued. Survivors’ letters suggest that in some cases the nicer relief goods went to the wealthier victims’ districts, when they weren’t re-routed by less wealthy and somewhat disgruntled Galvestonians tasked with actually lugging the supplies around the city. And Galveston’s African-American community was wholly shut out of the rebuilding process and denied a seat on the Central Relief Committee, despite its efforts to help shape the collective destiny of the city. This is hardly surprising: poorer Americans tend to suffer disproportionately in most disasters, and are often left out of planning and rebuilding efforts.

There is much to be said for the response of Galveston’s Central Relief Committee. Under its leadership the city built the seawall that helps protect it to this day and initiated a series of successful municipal reforms that became widespread during the Progressive era. But we should not let unexamined nostalgia blind us to the realities of the situation in Galveston in the months after the 1900 storm.

Nor should we forget that the techniques that might have been more or less appropriate in 1900 were attuned to a society that has since changed quite a bit. It would be hard to imagine contemporary Americans pressed into service to clear bodies, barring a truly exceptional event. And despite its shortcomings, American culture is on the whole more egalitarian today than it was in 1900.

But the dense network of associations through which much assistance flowed to the city simply does not exist in the contemporary U.S. for a variety of reasons, none of which are reducible to the growth of the Federal government.  Instead, Americans support each other in crises by way of donations to highly professionalized and technically adept disaster relief organizations like the Red Cross, and by maintaining government organizations charged with preparing for the worst disasters and catastrophes with their tax dollars.

This makes sense in part because contemporary cities and the economic arrangements that undergird them are much more complex beasts than they were in 1900. The following chart shows property damage and deaths caused by major disasters over the 20th century:

[Source: The Federal Response to Hurricane Katrina: Lessons Learned, p. 6.]

The overall trend is toward less lethal but much costlier disasters, which in turn cause significant disruptions to the ordinary functioning of local businesses and the municipal governments that depend on tax revenues from those businesses. This necessitates more Federal involvement, as cities and state governments struggle to get their own houses in order and to pay for the resources and technical know-how needed to rebuild infrastructure, modern dwellings, and businesses. As Lawrence Powell, a historian at Tulane University in New Orleans, asked of the influx of well-meaning volunteers in response to Katrina, “Can the methods of a nineteenth-century barn raising drag a twenty-first-century disaster area from the mud and the muck?”

The 20th century history of Federal disaster policy can be described as a cycle of expansion and contraction. Increasingly complex disasters draw forth ad hoc solutions, which are then formalized and later institutionalized until they grow unwieldy and are periodically consolidated in efforts to provide more efficient, systematic, and effective services that are less prone to fraud or waste.

Small and big business, social movement organizations, academics, professionals, voluntary associations and NGOs have all helped shape the trajectory of that cycle, as when civil rights organizations successfully lobbied Congress and the Red Cross after Hurricane Camille in 1969 to provide a baseline of minimum assistance to hurricane victims, rather than the older policy that granted aid on the basis of pre-disaster assets (and which thus tended to favor wealthier victims on the basis that they had lost more than had the poor).

In recent decades, this has tended toward deregulation of coastal development in deference to free market ideals and a Congressional movement in the mid-1990s that sought to pay for disaster relief by, in large part, cutting social service programs that serve the poor. (See Ted Steinberg’s Acts of God for one good historical and political-economic critique of U.S. disaster policy.)

How Federal disaster mitigation efforts can be more efficient, just, or effective is certainly a worthy conversation to hold. How best to arrange – and pay for – social relationships around economic, ecological, and technological risk is also an excellent topic for deliberation and debate. But to seriously argue that we should strive to make our disaster response regime more like that enjoyed by Americans in the early half of the twentieth century is, for lack of a better word, silly.

(For that matter, it’s hard to understand what Rep. Paul means by his call for more control by the States; the decision to request the involvement of the Federal government and FEMA already rests with the State governors, as per the Stafford Act.)

Former generations of Americans saw a patchwork of state government solutions as inadequate to managing modern disasters, particularly those that overwhelm municipal or State governments. They built Civil Defense agencies, the Office of Emergency Preparedness, and later FEMA in an effort to combine accountability with economies of scale and expertise, and to ensure that in times of disaster Americans could count on their Federal government to marshal tools and talent when local and State governments are overwhelmed and help is needed.

And as my own research shows, the efforts of these state organizations have long been understood by victims and outside observers alike as expressing and relying on bonds of fellow citizenship and civil solidarity. That in recent decades this legacy has been tarnished with cronyism and mismanagement from above says more about those political actors and the institutions of American electoral politics than it does about the inherent worth of Federal disaster management organizations.

——————————

Brady Potts is a lecturer in the Department of Sociology at the University of Southern California. His current research focuses on the history of public discourse and narratives around risk and hurricane disasters, and the role of civic culture in American disaster response.

If you would like to write a post for Sociological Images, please see our Guidelines for Guest Bloggers.

The Demographics

During disasters, poor people, people of color, and the elderly die in disproportionate numbers (source), and Katrina was no exception. Many decisions made in the days leading up to and shortly after Katrina amplified the loss of life among these groups. New Orleans is both a poor city (23% poverty rate pre-Katrina – twice the national average) and a segregated one, and both factors contributed to the death toll. First, no effective evacuation plan was in place that accounted for the 112,000 poor, mostly black New Orleanians without cars. Additionally, because the storm struck at the end of the month, those receiving public assistance were unusually cash-strapped. To make matters worse for poor people with children, school had just started, so expenses for the month were higher than usual.

The immobile poor were disproportionately left behind and lost their lives. A comprehensive study of evacuees in Houston (all of whom had stayed behind during the storm) found that 22% were physically unable to evacuate, 14% were physically disabled, 23% had stayed in New Orleans to care for a physically disabled person, and 25% were suffering from a chronic disease (source). Also,

• 55% did not have a car or a way to evacuate
• 68% had neither money in the bank nor a useable credit card
• 57% had total household incomes of less than $20,000 in the prior year
• 76% had children under 18 with them in the shelter
• 77% had a high school education or less
• 93% were black
• 67% were employed full or part-time before the hurricane

Age was also a factor in fatalities. Nearly 40% of those who died in Katrina were elderly, and many more elderly individuals died from the stress of evacuation and home loss.

Government Response

Mayor Nagin received nearly $20 million to establish a workable evacuation plan in plenty of time for Katrina, but it’s questionable whether one was ever developed, and none was ever disseminated. Two months before Katrina, Nagin spent money to produce and distribute DVDs in poorer neighborhoods to inform residents that they would be on their own if a storm hit because the city could not afford to evacuate them. In the days before the storm, Nagin sent empty Amtrak trains out of the city, failed to mobilize available school and other buses, and waited an entire day to call for a mandatory evacuation while he determined whether the City would face lawsuits from local businesses (source). All of these decisions were deadly.

The federal response was no better. The city was quiet after the storm whipped through late Sunday night/early Monday morning, when President Bush announced that New Orleans had “dodged a bullet.” Within hours, three major levee breaches and over fifty minor breaches flooded the city. Despite Governor Blanco’s request for federal assistance on Saturday (two days before the storm made landfall) and concern from local media on Sunday (one day before the storm) that the levees wouldn’t hold, the levees breached on Monday morning with only two Federal Emergency Management Agency (FEMA) workers on the ground (see the timeline). It would take two days for 1,000 additional officials to arrive.

Once on the ground, FEMA slowed the evacuation with unworkable paperwork and certification requirements. Marc Cresswell, a medic from a private ambulance company, reported that “At one point I had 10 helicopters on the ground waiting to go, but FEMA kept stonewalling us with paperwork. Meanwhile, every 30 or 40 minutes someone was dying.” FEMA was also criticized for turning away personnel, vehicles, medical equipment, food and other supplies, and diesel fuel.

The 30,000 people who evacuated to the Superdome (per Nagin’s instructions) were stranded there for a week in deplorable conditions – unbearable heat, darkness, the stench of sewage, and a lack of food and water. They were not allowed to leave, and, according to several evacuees I interviewed in Texas shortly after the storm, this led one man to take his life by jumping from a balcony. His was one of only six deaths at the Superdome: one person overdosed and four others died of natural causes. Another 20,000 people gathered at the Convention Center for assistance, an evacuation site the federal government was unaware of until three days after the storm.

President Bush was otherwise occupied during this time. The day Katrina hit, he traveled to Arizona and California to promote his prescription drug plan, had birthday cake with John McCain, and attended a Padres game.

Panicked at the slow federal response, Governor Blanco sent an urgent request: “Mr. President, we need your help. We need everything you’ve got.” The president retired to bed that night without responding to Blanco. The next day, he sang songs with country singer Mark Wills and returned to Texas for the final night of his vacation. The President was so oblivious to the suffering in New Orleans that his staff made a video of news coverage four days after the storm to sensitize him. And, in response, President Bush’s team assembled a carefully crafted PR plan to blame local officials seven days into the ordeal, while thousands of people were still stranded. Later that same day, President Bush made the infamous statement, “Brownie, you’re doing a heckuva job.”

Cross-posted at Caroline Heldman’s blog.

A couple of years ago we posted a series of weight gain ads from the 1940s, ’50s, and ’60s.  Yes, weight gain ads.  Say it a few times, see how it rolls unfamiliarly around your tongue.  If you consume popular culture, it’s rare to come across anyone suggesting that there’s such a thing as women who are too skinny. Quite the opposite. Yet, during the middle decades of the 1900s, being too skinny was a problem that women worried about.  And Wate-On was there to help them achieve the “glamorous curves” of “popular” girls.

Jeremiah gave us a great excuse to re-post this already-posted material.  He sent in an ad for Wate-On featuring Raquel Welch:

There are interesting conversations to be had here.  Is pressure to be full-figured any different than pressure to be thin? It’s just another kind of pressure to conform to a particular kind of body.  Is the mid-century ideal different than the contemporary ideal of “curvy” women? In other words, are these women any less thin, or any less hourglass-figured, than the supposedly curvy icons of today: Beyonce, JLo, etc?  Are there any products for women who think they are too skinny today?  Can we make an interesting comparison between the capitalist and the medical solution to “too skinny”?  Other thoughts?

—————————

Julie C. found this ad in a newspaper from the 1960s:

The text:

“If skinny, thin and underweight take improved WATE-ON to help put on pounds and inches of firm, healthy looking flesh. WATE-ON supplies weight gaining calories plus vitamins, minerals, protein and other beneficial nutrients. Clinically tested. Fast weight gains 4, 6, 10… as much as 20 and 30 pounds have been reported. No over-eating. Helps make bustline, cheeks, arms, legs fill out, helps put firm solid flesh on skinny figures all over body. Helps fight fatigue, low resistance, sleeplessness and nervousness that so often accompany underweight. Underweight children and convalescents can take WATE-ON. It’s a clinically tested, pleasant formula sold around the world. Buy some today and start putting on weight FAST. Satisfaction from 1st bottle or price refunded. At drug stores everywhere.”

Another (year unknown, found here):

Taylor D. sent in this ad for Wate-On (found here), which targets African American women:


Here’s another brand for a similar product from 1943:

Text:

Girls with “Naturally Skinny” Figures …AMAZED AT THIS ENTIRELY NEW WAY TO ADD 5 LBS. OF SOLID FLESH IN 1 WEEK…OR NO COST!

New Natural Mineral Concentrate From the Sea, Rich in FOOD IODINE, Building Up Weak, Rundown Men and Women Everywhere.

THOUSANDS of thin, pale, rundown folks–and even “Naturally Skinny” men and women–are amazed at this new, easy way to put on healthy needed pounds quickly. Gains of 15 to 20 lbs. in one month–5 lbs. in one week–are reported regularly.

Kelp-a-Malt, the new mineral concentrate from the sea–gets right down to the cause of thin, underweight conditions and adds weight through a “3 ways in one” natural process.

First, its rich supply of easily assimilable minerals nourish the digestive glands, which produce the juices that alone enable you to digest the fats and starches, the weight-making elements in your daily diet. Second, Kelp-a-Malt provides an amazingly effective digestive substance which actually digests 4 times its own weight of the flesh-building foods you eat. Third, Kelp-a-Malt’s natural FOOD IODINE stimulates and nourishes the internal glands which control assimilation–the process of converting digested food into firm flesh, new strength and energy. Three Kelp-a-Malt tablets contain more iron and copper than a pound of spinach or 7-1/2 lbs. of fresh tomatoes; more calcium than 6 eggs; more phosphorous than 1-1/2 lbs. carrots; more FOOD IODINE than 1600 lbs. of beef.

Try Kelp-a-Malt for a single week and notice the difference–how much better you sleep, how firm flesh appears in place of “scrawny hollows” and the new energy and strength it brings you! Prescribed and used by physicians, Kelp-a-Malt is fine for children, too–improves their appetites. Remember the name, Kelp-a-Malt, the original and genuine kelp and malt tablets. There is nothing else like them, so don’t accept imitations and substitutes. Try Kelp-a-Malt today, and if you don’t gain at least 5 lbs. of good, firm flesh in 1 week, the trial is free. 100 jumbo size tablets, 4 to 5 times the size of ordinary tablets, cost but little. Sold at all good drug stores. If your dealer has not yet received his supply, send $1.00 for special introductory size bottle of 65 tablets to address below.

Vintage Ads posted another example:

Text:

If you are a normal healthy, underweight person and are ashamed of your skinny, scrawny figure, NUMAL may help you add pounds and pounds of firm, attractive flesh to your figure.

For NUMAL, a doctor-approved formula, contains essential minerals and vitamins that may aid your appetite. Then you eat more and enjoy what you eat. But that isn’t all. NUMAL contains a food element which is also a great help in putting on weight. So don’t let them snicker at your skinny, scrawny figure. A skinny, scarecrow figure is neither fashionable nor glamorous. Remember, the girls with the glamorous curves get the dates.
So start NUMAL today…

Lauren McGuire spotted this ad (at Vintage Ads, via Jezebel):

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.

Cross-posted at Scientopia, Ms., and Jezebel.

Dolores R. and Andrew S. let us know about the report “The College Payoff: Education, Occupations, Lifetime Earnings,” by researchers at Georgetown University’s Center on Education and the Workforce, based on 2007-2009 American Community Survey data (via Feministing and Kay Steiger). Not surprisingly, higher education significantly increases lifetime earnings of U.S. workers:

But education doesn’t pay off equally for all groups. Women, not surprisingly, make less at every level of education than men do; in fact, their median lifetime earnings are generally on par with men a couple of rungs down the educational ladder:

Ah, but, you might think, women are more likely to take time out of the workforce than men, so perhaps that accounts for the difference. But the gaps calculated here are only for full-time, year-round workers and do not include periods out of the workforce — that is, this is the “best-case scenario” in terms of comparing gender earnings, and yet women still make about 25% less than men at the same educational level. When workers who take time out of the workforce are included, the pay gap is significantly larger. The far right column in this table shows how much less women make compared to men based on the “typical” work pattern for workers in each educational category:

The benefits of education also vary by race and ethnicity, with non-Hispanic Whites generally making more at each educational level than all other groups, though Asians outearn them at the highest levels:

Though the authors don’t include a table showing the gap if you include workers who do not work full-time year-round throughout their careers, they state that as with gender, the gap widens significantly, since non-Whites are more likely to experience periods without work.

So does education pay? Undoubtedly, for all groups. But due to factors such as occupational segregation (especially by gender) and discrimination in the workplace, the return on an educational investment is clearly a lot higher for some than others.

Also see our recent posts on the gender gap in science and tech jobs, racial differences in job loss during the recession, unemployment among Black and White college grads, and trends in job segregation by sex.

Cross-posted at Reports from the Economic Front.

Congress has finally agreed on a deficit reduction plan that President Obama supports. As a result, the debt ceiling is being lifted, which means that the Treasury can once again borrow to meet its financial obligations.

Avoiding a debt default is a good thing. However, the agreement is a bad one, and, even more importantly, the debate itself has reinforced understandings of our economy that are destructive of majority interests.

The media presented the deficit reduction negotiations as a battle between two opposing sides. President Obama, who wanted to achieve deficit reduction through a combination of public spending cuts and tax increases, anchored one side. The House Republicans, who would only accept spending cuts, anchored the other. We were encouraged to cheer for the side that we thought best represented our interests.

Unfortunately, there was actually little difference between the two sides in terms of the way they engaged and debated the relevant issues. Both sides agreed that we face a major debt crisis. Both sides agreed that out-of-control social programs are the main driver of our deficit and debt problems. And both sides agreed that the less government involvement in the economy the better.

The unanimity is especially striking since all three positions are wrong. We do not face a major debt crisis, social spending is not driving our deficits and debt, and we need more active government intervention in the economy, not less, to solve our economic problems.

So what was the deal?

Before discussing these issues it is important to highlight the broad terms of the deficit reduction agreement. The first step is limited to spending cuts; discretionary spending is to be reduced by $900 billion over the next ten years. Approximately 35% of the reduction will come from security-related budgets (military and homeland security), with the rest coming from non-security discretionary budgets (infrastructure, energy, research, education, and social welfare). In exchange for these budget cuts the Congress has agreed to raise the debt ceiling by $1 trillion.

The agreement also established a 12-person committee (6 Democrats and 6 Republicans) to recommend ways to reduce future deficits by another $1.2–1.5 trillion. Its recommendations must be made by November 23, 2011, and they can include cuts to every social program (including Social Security, Medicare, and Medicaid), as well as tax increases.

Congress has to vote on the committee’s package of recommendations by December 23, 2011, up or down. If Congress approves them they will be implemented. If Congress does not approve them, automatic cuts of $1.2 trillion will be made; 50% of the cuts must come from security budgets and the other 50% must come from non-security discretionary budgets. Regardless of how Congress votes on the recommendations, it must also vote on whether to approve a Balanced Budget Amendment to the Constitution. Once this vote is taken, the debt ceiling will be raised again by an amount slightly smaller than the deficit reduction.

Check out this flowchart from the New York Times if you want a more complete picture of the process.

Why is this a problem?

Those who favor reducing spending on government programs generally argue that we have no choice because our public spending and national debt are out of control, threatening our economic future. But the data say otherwise.

The chart below, from the economist Menzie Chinn at Econbrowser, shows the movement in the ratio of publicly held debt to GDP over the period 1970 to 2011; the area in yellow marks the Obama administration. While this ratio has indeed grown rapidly, it remains well below the 100% level that most economists take to be the warning threshold. In fact, according to Congressional Budget Office projections, we are unlikely to reach that level for decades even if we maintain our current spending and revenue patterns.

The sharp growth in the ratio over the last few years strongly suggests that our current high deficits are largely due to recent developments, in particular the 2001 and 2003 Bush tax cuts, the wars in Iraq and Afghanistan, and the Great Recession. Their contribution can be seen in this chart from the New York Times.

The effects of the tax cuts and economic crisis on our deficits (and, by extension, our debt) are especially visible in the following chart (again from Menzie Chinn), which plots yearly changes in federal spending and federal revenue as a percentage of GDP (the shaded areas mark periods of recession). As we can see, the recent deficit explosion was initially driven more by declining revenues than by out-of-control spending. Attempts to close the budget gap solely or even primarily through spending cuts, especially of social programs, are bound to fail.


To summarize:

Tragically, the debate over how best to reduce the deficit has encouraged people to blame social spending for our large deficits and those large deficits for our current economic problems.  As a result, demands for real structural change in the way our economy operates are largely dismissed as irrelevant.

Recent economic data should be focusing our attention on the dangers of a new recession. According to the Commerce Department, our economy grew at an annual rate of just 1.3% in the second quarter of this year, following a first quarter in which the economy grew by only 0.3%. These are incredibly slow rates of growth for an economy recovering from a major recession. To put these numbers in perspective, Dean Baker notes that we need growth of over 2.5% to keep our already high unemployment rate from growing.

Cutting spending during a period of economic stagnation, especially on infrastructure, research, and social programs, is a recipe for greater hardship. In fact, such a policy will likely further weaken our economy, leading to greater deficits. This is what happened in the UK, Ireland, and Greece – countries with weak economies that tried to solve their deficit problems by slashing public spending.

We need more active government intervention, which means more spending to redirect and restructure the economy; a new, more progressive tax structure; and a major change in our foreign policy, if we are going to solve our economic problems. Unfortunately, for now we don’t have a movement powerful enough to ensure that our side has a player in the struggles that set our political agenda.


Anders Behring Breivik has now joined the pantheon of homegrown domestic terrorists who have unleashed horror on their own countrymen. Sixteen years ago, Timothy McVeigh and other members of the Aryan Republican Army blew up the Murrah Office Building in Oklahoma City, killing 168 of their own countrymen and women. It was the worst act of domestic terrorism in our history, and, indeed, until 9-11, the worst terrorist attack of any kind in our history. We know what Norwegians are going through; as Bill Clinton said, we “feel your pain.”

As pundits and policymakers search for clues that will help us understand that which cannot be understood, it may be useful to compare a few common elements between McVeigh and Breivik.

Both men saw themselves as motivated by what they viewed as the disastrous consequences of globalization and immigration on their own countries. Breivik’s massive tome, 2083: A European Declaration of Independence, paints a bleak picture of intolerant Islamic immigrants engaged in a well-planned takeover of European countries in the fulfillment of their divine mission. His well-planned and coldly executed massacre of 94 of his countrymen was, as he saw it, a blow against the policies promoting social inclusion and a recognition of a diverse multicultural society promoted by the labor-leaning government.

McVeigh also inveighed against both multinational corporate greed and a society that had become too mired in multiculturalism to provide for its entitled native-born “true” Americans. In a letter to the editor of his hometown newspaper, McVeigh, then a returning veteran of the first Gulf War, complained that the birthright of the American middle class had been stolen, handed over by an indifferent government to a bunch of ungrateful immigrants and welfare cheats. “The American dream,” he wrote, “has all but disappeared, substituted with people struggling just to buy next week’s groceries.”

McVeigh and Breivik both sought to inspire their fellow Aryan countrymen to action. After blowing up the federal building – home of the oppressive and unrepresentative government that had capitulated to the rapacious corporations and banks — McVeigh hoped that others would soon follow suit and return the government to the people. Breivik cared less about government and more about the ruination of the pure Norwegian culture, deliberately diluted in a brackish multiculti sea.

For the past five years, I’ve been researching and writing about the extreme right in both the United States and Scandinavia. I’ve interviewed 45 contemporary American neo-Nazis, White Supremacists, Aryan youth, Patriots, Minutemen, and members of rural militias. I also read documentary materials in the major archival collections at various libraries on the extreme right. I then interviewed 25 ex-neo-Nazis in Sweden. All were participants in a government-funded program called EXIT, which provides support and training for people seeking to leave the movement. (This included twice interviewing “the most hated man in Sweden,” Jackie Arklof, who murdered two police officers during a botched bank robbery. Arklof is currently serving a life sentence at Kumla High Security prison in Orebro. To my knowledge, I’m the only researcher to date to have interviewed him as well as members of EXIT.)

I’ve learned a lot about how the extreme right understands what is happening to their countries, and why they feel called to try to stop it. And one of the key things I’ve found is that the way they believe global economic changes and immigration patterns have affected them can be understood by looking at gender, especially masculinity. (Don’t misunderstand: it’s not that understanding masculinity and gender replaces the political economy of globalization, the financial crisis, or the perceived corruption of a previously pristine national culture. Not at all. But I do believe that you can’t understand the extreme right without also understanding gender.)

First, they feel that current political and economic conditions have emasculated them, taken away the masculinity to which they feel they are entitled by birth. In the U.S., they feel they’ve been emasculated by the “Nanny State” through taxation, economic policies and political initiatives that demand civil rights and legal protection for everyone. They feel deprived of their entitlement (their ability to make a living, free and independent) by a government that now doles it out to everyone else – non-whites, women, and immigrants. The emasculation of the native-born white man has turned a nation of warriors into a nation of lemmings, or “sheeple” as they often call other white men. In The Turner Diaries, the movement’s most celebrated text, author William Pierce sneers at “the whimpering collapse of the blond male,” as if White men have surrendered, and have thus lost the right to be free. As one of their magazines puts it:

As Northern males have continued to become more wimpish, the result of the media-created image of the ‘new male’ – more pacifist, less authoritarian, more ‘sensitive’, less competitive, more androgynous, less possessive – the controlled media, the homosexual lobby and the feminist movement have cheered… the number of effeminate males has increased greatly…legions of sissies and weaklings, of flabby, limp-wristed, non-aggressive, non-physical, indecisive, slack-jawed, fearful males who, while still heterosexual in theory and practice, have not even a vestige of the old macho spirit, so deprecated today, left in them.

Second, they use gender to problematize the “other” against whom they are fighting. Consistently, the masculinity of native-born white Protestants is set off against the problematized masculinity of various “others” – blacks, Jews, gay men, other non-white immigrants – who are variously depicted as either “too” masculine (rapacious beasts, avariciously cunning, voracious) or not masculine “enough” (feminine, dependent, effeminate). Racism, anti-Semitism, nativism, and homophobia all are expressed through denunciations of the others’ masculinity.

Third, they use it as a recruiting device, promising the restoration of manhood through joining their groups. Real men who join up will simultaneously protect white women from these marauding rapacious beasts, earn those women’s admiration and love, and reclaim their manhood.

American White Supremacists thus offer American men the restoration of their masculinity – a manhood in which individual white men control the fruits of their own labor and are not subject to the emasculation of Jewish-owned finance capital, a black- and feminist-controlled welfare state.

At present, I am working my way through 2083: A European Declaration of Independence, the 1,518-page manifesto written by Anders Behring Breivik (under the Anglicized pseudonym “Andrew Berwick, London”) in the months leading up to his attack. These same themes are immediately evident. (Quotes are from the document.)

(1) Breivik associates feminism with liberal, multicultural societies. He claims that feminism has been responsible for a gender inversion in which, whether in the media or the military, we see the “inferiority of the male and the superiority of the female.” As a result of this widespread inversion, the “man of today” is “expected to be a touchy-feely subspecies who bows to the radical feminist agenda.”

(2) Breivik spends the bulk of the document playing off two gendered stereotypes of Muslim immigrants in Europe. On the one hand, they are hyper-rational, methodically taking over European societies; on the other hand, they are rapacious religious fanatics, who, with wide-eyed fervor, are utterly out of control. In one moment in the video, he shows a little boy (blond hair indicating his Nordic origins), poised between a thin, bearded hippie, who is dancing with flowers all around him, and a bearded, Muslim terrorist fanatic – two utterly problematized images of masculinity. 3:58 in the video:

(3) In his final “call to arms” and the accompanying video, he offers photos of big-breasted women, in very tight T-shirts, holding assault weapons emblazoned with the word “infidel” and some Arabic writing, a declaration that his Crusader army members are the infidels to the Muslim invaders. 9:02 in the video:

This initial, if sketchy, report from Oslo, and Breivik’s own documents, indicate that in this case, also, it will be impossible to fully understand this horrific act without understanding how gender operates as a rhetorical and political device for domestic terrorists.

These members of the far right consider themselves Christian Crusaders for Aryan Manhood, vowing its rescue from a feminizing welfare state. Theirs is the militarized manhood of the heroic John Rambo – a manhood that celebrates their God-sanctioned right to band together in armed militias if anyone, or any governmental agency, tries to take it away from them. If the state and capital emasculate them, and if the masculinity of the “others” is problematic, then only “real” white men can rescue the American Eden or the bucolic Norwegian countryside from a feminized, multicultural, androgynous immigrant-inspired melting pot.

————————

Michael Kimmel is a professor of sociology at the State University of New York at Stony Brook.  He has written or edited over twenty volumes, including Manhood in America: A Cultural History and Guyland: The Perilous World Where Boys Become Men.  You can visit his website here.

Cross-posted at Montclair SocioBlog.

Changes in language seem to just happen. Nobody sets out to introduce a change, but suddenly people are saying “groovy” or “my bad.” And then they’re not. Even written language changes, though the evolution is slower.

Last weekend, I saw this sign at a goat farm on Long Island.


WER’E ??

I used to care about the apostrophe, but after years of reading student papers about “different society’s,” I have long accepted that the tide is against me. The apostrophe today is where spelling was a few hundred years ago – you can pretty much make up your own rules.

Sometimes the rule is fairly clear: use an apostrophe in plurals when leaving it out makes the word look like a different word rather than a plural form of the original. Change the “y” in “society” to “ies” and it looks too different. “Of all the cafe’s, I like the one with lime martini’s.” The “correct” versions are cafes and martinis, but I think they take a nanosecond or two longer to mentally process.

Or these:

Technically, it should be “ON DVDS.” But DVDS looks like it’s some government agency (I gotta go down to the DVDS tomorrow) or maybe a disease.

It’s not always easy to figure out what rule or logic the writer is following. The little apostrophe seems to be plunked in almost at random. Not random, really. It’s usually before an “s.” But why does Old Navy say, “Nobody get’s hurt”?

There’s a prescriptivist website, ApostropheAbuse.com, that collects these (that’s where I found the DVDS and Old Navy pictures). They’re fighting a losing battle.

Technology matters – I guess that’s the sociological point here. The invention of print and then the widespread dissemination of identical texts herded us towards standardization. Printers became a separate professional group (not part of the church or state), and most of them were in the same place (London). They had a stranglehold on published spelling.

For the last few decades, anyone has been able to be a printer. The page you are now reading might harbor countless errors in punctuation and spelling (though spell-checkers greatly reduce misspellings), but it looks just as good as an online article in the Times, and it’s published the same way, to potentially as many readers. And now there’s texting. It’s already pushing upper-case letters off the screen, and the apostrophe forecast doesn’t look so good either. But what will still be interesting is not the missing apostrophe but the apostrophe added where, by traditional rules, it doesn’t belong.

I still can’t figure out WER’E.

If you’re not writing a dissertation or taking care of twins, you might have heard that News of the World, a tabloid newspaper in the U.K., has been gathering news by illegally listening to people’s voicemail messages. News of the World is owned by Rupert Murdoch’s firm News Corporation, the second largest media company in the world. News Corporation also owns Fox. This is a great natural experiment testing the potential problems with media consolidation, the fact that more and more media outlets are owned by fewer and fewer companies.

So how does Fox report on this scandal? Rob Beschizza, writing for BoingBoing, highlighted a segment on Fox News in which the host and guest agree that “hacking scandals” are a “serious… problem” and imply that, in this instance, News of the World was the victim, not the perpetrator.  What’s more, the guest “expert” is not a politician, scholar, or even a pundit; he’s actually a public relations professional who specializes in spinning scandals to obviate the negative consequences for corporations. Says James Fallows at The Atlantic:

He is Robert Dilenschneider, former head of Hill and Knowlton and now head of the Dilenschneider Group, who recently was featured in an interview, “How to Manage a PR Disaster.”

So Fox has an expert on spin as a guest, who just so happens to spin the scandal about their parent corporation:

Partial transcript:

The NOTW is a hacking scandal, it can’t be denied. But the real issue is, why are so many people piling on at this point? We know it’s a hacking scandal, shouldn’t we get beyond it and deal with the issue of hacking? Citicorp has been hacked into, Bank of America has been hacked into, American Express has been hacked into, insurance companies have been hacked into, we’ve got a serious hacking problem in this country, and the government’s obviously been hacked into, 24,000 files.

The bigger issue is really hacking and how we as the public going to protect our privacy and deal with it. I would also say, by the way, Citigroup, great bank. Bank of America, great bank. Are they getting the same attention for hacking that took place less than a year ago, that News Corp is getting today?

Of course, as Beschizza at BoingBoing points out, Citigroup and Bank of America were hacked into, whereas News of the World did the hacking.  It’s also an interesting use of the word “hacking.”  Beschizza continues:

Though we all use the term “hacking” broadly, punching in a default PIN number isn’t quite the same thing as the skills required to hack into banks and governments. You can’t pretend these are the same class of problem, unless you’re happy being ignorant of the crisis management issues on which you are being presented as an expert.

Use of the term, then, makes the illegal activity seem more like the mischief of a techy teenager or the nefarious work of anti-establishmentarians, not the plain ol’ straightforwardly criminal behavior it is.

See also: Shameless promotion of the movie Tinkerbell at Good Morning America.

Lisa Wade, PhD is an Associate Professor at Tulane University. She is the author of American Hookup, a book about college sexual culture; a textbook about gender; and a forthcoming introductory text: Terrible Magnificent Sociology. You can follow her on Twitter and Instagram.