In 2002, California became the first U.S. state to pass legislation establishing paid family leave for workers who need to take time off from their jobs to bond with new children or care for a seriously ill family member. Other states are moving in the same direction, including New Jersey, which began operating a similar program in 2009. Yet only California has amassed years of experience with this important new social benefit.

Six years after the law went into effect, we conducted detailed surveys of California employers and employees in 2009 and 2010. The employer survey reached 253 for-profit and nonprofit worksites of a range of sizes sampled from Dun & Bradstreet; public worksites were excluded. The employee survey reached 500 individuals who had a family event (a new child, a seriously ill family member) covered by the paid leave law.

Our findings offer a rich picture of how effectively paid leave has operated for California employers and employees. We were especially interested in learning whether paid family leave helps to reduce inequalities at work. Before California passed paid leave, many low-wage workers had no access to income support when they took time off to attend to family needs – not even paid sick or vacation days. Most employers who offered paid leave for caregiving restricted it to professionals and managers. Has the new leave law narrowed the gap?

Benefits Offered by the California Law

In California, these two could still be paid while taking off work to be with their newborn. Photo by OakleyOriginals via Flickr.com

California’s pathbreaking program offers nearly all employees in the state – fathers as well as mothers – partial wage replacement if they go on leave to bond with a new biological, adopted or foster child during the first year after the child is born or placed with the family. The program also offers wage replacement to workers who care for a seriously ill parent, child, spouse, or domestic partner.

  • To be eligible, workers need only to have earned $300 or more in a covered job during a three-month period in the previous year. This means that most part-time workers are covered, along with full-time employees.
  • Workers can receive up to six weeks of wage replacement at 55 percent of their usual weekly earnings, up to a maximum of $1,011 per week in 2012; this maximum rises with inflation (see the illustrative calculation after this list). Benefits are low by international standards, but a good start.
  • Self-employed workers can opt into the program, and unionized public sector workers can also join through the collective bargaining process.
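
To make the benefit formula concrete, consider a hypothetical worker with usual weekly earnings of $800: she would receive 0.55 × $800 = $440 per week while on leave. Under the 2012 cap, anyone earning more than about $1,838 per week ($1,011 ÷ 0.55) would receive the $1,011 maximum. These figures are illustrative examples, not cases from our survey.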

Does California’s Program Work?

Our study shows that paid family leave has worked well overall for both employers and employees, but here we focus specifically on the results for low-wage workers. Among respondents to our survey who took leave from jobs that pay $20 per hour or less (or do not offer health insurance), we compared the experiences of leave-takers who used the California law to those who took time off without claiming the legal benefits.

  • 84% of those who used the California program received at least half of their usual weekly pay during leave, compared to only 49% of those who took time off outside the program.
  • Low-wage workers who used the paid leave program were able to take longer leaves and were more satisfied with the length of their leaves.
  • Users of the paid leave program were better able to care for new children and seriously ill family members than low-wage workers who did not take advantage of the law. Babies were more often breastfed, and sick family members did better.

Our findings show that California’s program does make a positive difference – including for workers in the lowest paid jobs who have not previously enjoyed workplace benefits.

The Need to Improve Benefits and Spread Awareness

Over a third of the respondents to our survey who were aware of the paid leave program but did not use its benefits said they were afraid they might lose their jobs or face other negative consequences at work. Others felt the money was insufficient. In the future, California should consider increasing the benefit and adding provisions to protect the job rights of leave-takers.

Meanwhile, six years into the implementation of paid family leave, many Californians remain in the dark about the benefits to which they are entitled. The Field Poll of registered voters taken in September 2011 found that 59% of those with annual family incomes between $60,000 and $100,000 were aware of the state paid leave program, compared to only 25% of those with family incomes ranging from $20,000 to $40,000. Just over half of workers aged 40-49 knew of the benefits, compared to only 27% of workers aged 18-29. Not only is public awareness limited; the low-wage workers, Latinos, blacks, and younger employees who stand to gain the most are the ones who know the least about the program.

In our 2009-2010 survey, respondents who were aware of the program most often learned about it from their employers. Unfortunately, depending on employers to spread the word won’t get the job fully done. Employers have traditionally offered leave benefits to many of their highly paid professionals and managers, and telling those employees about the state program saves money, since public benefits can offset the cost of employer-paid leave. But employers have no comparable incentive to inform their low-wage workers.

Citizen associations as well as advocates in California are helping to spread the word and improve the terms of paid family leave. Meanwhile, leaders in other states can learn many valuable lessons from California’s pioneering example.

Ruth Milkman is in the sociology department at the City University of New York. She studies labor and labor movements, past and present.

Eileen Appelbaum is an economist at the Center for Economic and Policy Research. She researches various aspects of labor and employment.

Providing financial support is one of the many important things that fathers do for children. Even with more mothers working in the United States today, fathers’ earnings remain the primary source of income for most couples with children. The chances of children growing up in poverty are much greater when fathers earn too little, or do not contribute adequate child support to children not living with them.

Low wages make it hard for fathers to support their families, but so do the problems of unemployment, insufficient hours of work, and inability to get year-round work or hold a steady job. Our research on the impact of these factors helps policymakers and citizens better understand how patterns of employment differ across fathers in various family situations – and what the various patterns of work can mean for children’s wellbeing.

What is Family Life Like for New Fathers?

In the United States today, four in ten babies are born to unmarried parents. But the lack of a marriage certificate does not mean that the fathers are out of the picture.

  • About half of babies born to unmarried parents will go from the hospital to a home where both the mother and the biological father reside.
  • Two in ten babies – about half of all those born to unmarried mothers – will live apart from their fathers. But even when fathers are not married to babies’ mothers or living with their offspring, they are often involved in the lives of their children.

The Hours New Fathers Work

  • Working full-time throughout the year is far from universal among men who become fathers. Fathers without a college education – and especially those without a high school degree or GED – work fewer hours weekly than college-educated fathers.
  • Not surprisingly, fathers who are in poor health, or who are addicted to drugs or have criminal records, have trouble getting and maintaining stable jobs with full-time hours.

How Does Parenthood Change Fathers’ Work?

Married men put in up to 20 more hours a week than non-married men before their child is born. Photo by Phil and Pam via Flickr.com

In a study of the work hours of 1,084 fathers, we looked at how much men worked in the year before they had their first child. We also tracked how much their work hours had changed by their child’s fifth birthday.

  • In the years just before the men in our study became parents, married fathers worked more hours each week and were employed for more weeks of the year than the unmarried fathers. The differences were large. For example, married men worked 20 hours more per week than unmarried men who were living without a cohabiting partner.
  • Married men worked much more than unmarried men in the year before their child’s birth in part because married men tend to be older and better educated. Married men are also less likely to use drugs, be in poor health, or have a criminal record. But marriage also matters in its own right. Comparisons among men of the same age, education, and other social characteristics show that married men still work more than unmarried men, at least before they have a child.
  • Having a baby changes work life for all men – but differently, depending on whether they are married. For married men, having a child does not lead to working additional hours, on average. But for unmarried men, having a baby goes hand-in-hand with working more hours each week and working more consistently throughout the year. These changes in unmarried men’s employment are big enough that by the time their first child turns five years old, unmarried men and married men with similar backgrounds are working almost the same number of hours per week.

How Public Policy Can Help Fathers Support Children

In various ways, public policies can help fathers in different situations do a more adequate job of supporting their children.

  • By the time the first child celebrates her fifth birthday, fathers are working an average of 46 hours per week. Most mothers are working too, many of them full-time. Policies such as paid family and medical leave that help parents balance work and parental obligations can improve child and family wellbeing in the United States.
  • Fathers who earn low wages often cannot adequately support families, even when they work more than 45 hours per week. Increases in the federal Earned Income Tax Credit and hikes in federal and state minimum-wage levels can help remedy this situation.
  • Fathers with low levels of education or criminal records may need extra assistance to secure full-time, stable employment. Health problems or drug addictions may preclude stable employment. Income assistance, medical care, and treatment programs can help such struggling fathers – and also benefit their children.

Christine Percheski is in the sociology department at Northwestern University. She studies parenthood and changes in the American family.

Christopher Wildeman is in the sociology department at Yale University. He researches the impacts of prison on family life, including inequalities in health, mortality, and life expectancy.

Why are so many Washington officials obsessed with budget deficits? And why are they so willing to entertain big cuts to social programs such as Social Security, Medicare, and education, while being reluctant or outright unwilling to increase taxes on the highest income earners? The answer cannot be that most Americans want these choices. Survey after survey shows that large majorities support asking the wealthiest to pay more in taxes and want to maintain or increase spending on Social Security and federal health and education programs.

A possible answer to where budget hawks get energy and inspiration comes from the first systematic survey social scientists have managed to do of the political attitudes of the wealthiest one percent of Americans. Working with a team of scholars from several disciplines, I am conducting a study called the “Survey of Economically Successful Americans and the Common Good.” Most national surveys include only a tiny number of very wealthy citizens, but we used additional data sources to identify a larger sample of wealthy individuals living in the greater Chicago metropolitan area. Further research would be needed to explore attitudes among the very wealthy living everywhere in the United States. But our findings are highly suggestive of what would be found in a nationwide study. For the first time, we are able to pinpoint issues on which the very wealthiest agree or disagree with other Americans.

On Key Budget Questions, the Wealthy Have Distinctive Priorities

The wealthy respondents to our survey expressed great concern about budget deficits:

  • Fully 87% called deficits a “very important problem” for the United States, more than chose unemployment, education or anything else on a list of eleven national challenges.
  • On an open-ended question asking respondents to name the most important problem facing the country, a hefty 32% of the wealthy mentioned budget deficits or excessive government spending, far more than cited any other problem.
  • Only 11% of the wealthy listed unemployment or education as America’s top problem.
  • Wealthy respondents tilted toward cutting back – rather than expanding – federal government spending on Social Security and health care.

By contrast, in a national survey taken about the same time as our survey, only seven percent of all Americans mentioned deficits or the national debt as the most important problem, while 53% cited jobs and the economy as the top problem. Average Americans also leaned toward expanding rather than cutting back on major federal outlays for Social Security and health care.

Disagreements on Jobs and Income Supports

Most wealthy respondents to our survey opposed a wide range of job and income policies that majorities of ordinary Americans favor. Our respondents were against setting the minimum wage above the poverty line; providing a decent standard of living for the unemployed; increasing the earned income tax credit; and having government provide jobs for everyone able and willing to work who cannot find private employment.

Likewise, the wealthy opposed – while most Americans favor – providing health insurance financed by tax money; spending “whatever is necessary” to ensure that all children can attend good public schools; making sure that everyone who wants to can go to college; and investing more in worker retraining and education to help workers adapt to changes in the economy.

The general American public favors more regulation of big corporations, but our wealthy respondents tend not to favor this idea. Most Americans favor using corporate income taxes “a lot” to get revenue for government programs, but most of the wealthy do not. On the more contentiously worded question of whether governments should “redistribute” wealth with heavy taxes on the rich, 52% of all Americans are in favor, but unsurprisingly a large majority of the wealthy are opposed.

Are Policymakers Especially Responsive to the Wealthy?

Our data do not prove that the wealthy actually cause unpopular policy actions. But our study points in the same direction as recent research by Martin Gilens and his associates, showing that the actions of government align more closely with the preferences of affluent Americans than with the preferences of middle and lower-income citizens. Like previous studies, we also find that the wealthy are unusually politically active.

Donald Trump. Photo by Gage Skidmore via Flickr.com
  • Two thirds of the wealthy respondents to our survey had contributed money – an average of $4,633 – in the most recent presidential election. Remarkably, more than one of every five of our respondents had helped to solicit, or “bundle,” contributions from other affluent political donors.
  • Within the past six months, about half had initiated contact with at least one Senator or House member; and many had contacted members of the White House staff, other executive branch officials, or officials at regulatory agencies.
  • Judging by respondents’ accounts of what they talked about with officials, some 44% of these contacts concerned matters of rather narrow economic self-interest.

Of course, the mere fact that the wealthy and other citizens disagree does not prove that the general public is right. To make serious judgments, we would need to consider who has better information and whether different classes are more alert to tax costs or spending benefits. At minimum, however, our findings suggest that there are reasons to worry that U.S. policy may not respond fully and democratically to the needs and values of the majority of Americans. The very wealthy have different priorities, and they may be the ones who set the agenda in ongoing debates about taxes, social programs, and federal budget shortfalls.

Benjamin I. Page is in the political science department at Northwestern University. He studies American politics through public opinion, policy making, and the media.

Now in his second term, President Obama intends to visit Israel, where he hopes to restart stalled peace talks with the Palestinians. To prepare, I hope he will go beyond perusing the usual briefing books supplied to traveling U.S. presidents. He should immerse himself in history, too – and not only in books about the decades of sporadic violence between Arabs and Israelis with which most of us are familiar. That would be a good start, but for a deeper appreciation of what moves the principal actors in the Middle East, Obama – like the rest of us – must go back well before the birth of the modern state of Israel in 1948. Much earlier, Jews and Muslims had indelible experiences with Western powers maneuvering in their pivotal region.

Shimon Peres, current President of the State of Israel. Photo by jurvetson via Flickr.com

Their wariness about Western promises dates especially to the First World War when Britain issued the Balfour Declaration pledging to support the establishment of a homeland for the Jewish people in Palestine. Today, we consider that promise to be the foundation stone of modern Israel. But originally it was only one aspect of a larger strategy whose reverberations are still felt in Middle Easterners’ distrust of Western commitments.

The Back Story to Balfour

In November 1917, the First World War was raging and which side would win remained very much in doubt. The British government decided to issue the Balfour Declaration as part of a complicated set of maneuvers to build support for its war efforts.

British leaders at that time had stereotypical views about Jews and their purported influence in the West and Russia. American Jews were thought to control U.S. high finance in ways that could help bring America into the war on Britain’s side; Russian Jews were thought to have sufficient influence with pacifists to be able to keep their country from dropping out of the war. Britons also supposed, mistakenly, that the vast majority of Jews everywhere were Zionists who desired to return to their ancient homeland. Given these mistaken suppositions, British leaders thought that the backing of “international Jewry” would give them a better chance of beating Germany. So they offered a great bribe to Jews in the form of the famous Balfour Declaration.

At the same time, British leaders worked to bribe the Arabs. They feared that the Ottoman Sultan, who was also the Caliph of Islam, could declare jihad against them, prompting Muslims in South Asia, Egypt and Sudan to rise up against their Imperial masters. The British knew, however, that if the second-ranking figure in Islam, Grand Sharif Hussein of Mecca, supported Britain against the Ottomans, any call to holy war would be weakened. Consequently, in a series of famous letters, the British Consul General in Cairo, Sir Henry McMahon, promised to support the establishment of an independent Arab kingdom in Syria, Lebanon, Arabia, and Mesopotamia (modern-day Iraq). It remains unclear whether McMahon promised that the kingdom would include Palestine. But it is indisputable that he expressed himself so vaguely in the letters that Grand Sharif Hussein finally concluded that he had made this commitment.

Bribes and Double Dealing

Meanwhile, back in London, even though they were not yet victorious, Britain and France were secretly redrawing the Ottoman map. France, they projected, would obtain direct and indirect control over Syria and Lebanon, while Britain would hold sway over Mesopotamia. Because Palestine contained Jerusalem, Holy City to three great religions, it would be governed by an international “condominium” of the western allied powers.

When Arabs and British Zionists learned about this, they responded with outrage. The Zionists concluded they must obtain a written promise about Palestine because spoken pledges were “weak as water.” Eventually, they got the formally issued Balfour Declaration. Analogously, when the Arab leader Hussein recalled his correspondence with McMahon, he told his son: “I have in my pocket a letter which promises all I wish.” With this letter in hand, Hussein trusted Britain to keep McMahon’s pledges and rein in the French.

But that was before Hussein learned about the Balfour Declaration. When he did learn of it, he thought he had been betrayed.

Neither the Zionists nor the Arabs ever knew, however, that – in what amounts to a triple-cross maneuver – British Prime Minister David Lloyd George had also opened a back channel to the Ottomans! After all, a separate peace with Turkey would do more to win the war than anything involving either Jews or Arabs, or even both groups. So Lloyd George offered the Ottomans bribes, too. If they signed a separate peace treaty, then, in addition to receiving a huge sum of money, they could continue flying their flag across the Middle East, Palestine included. Lloyd George’s emissary to the Turks repeated this offer in January 1918 – two months after the Balfour Declaration had been made public.

Even this does not cap this remarkable story of intrigue and deceit. When Lloyd George made the offer to the Turks, not only did he keep it secret from the British Zionists and the Arabs, he also kept it secret from his own Foreign Secretary, Arthur Balfour. Lloyd George withheld the news just as Balfour signed the famous declaration that bears his name.

Overcoming a Heritage of Mistrust

With so many secretive steps and contradictory promises, British policies in the Middle East during the First World War engendered recrimination, suspicion and resentment. Such sentiments have been compounded over the years by subsequent Western dealings. We must not be surprised if today Arabs and Israelis do not take Western leaders at face value, or if they look first to their own interests, as they understand them. History has taught them that no one else will. Why should they expect better from American presidents bearing peace plans?

To make headway in this pivotal region, in short, President Obama must overcome pervasive mistrust nearly a century in the making. He would do well to understand its origins.

Jonathan Schneer is in the School of History, Technology, and Society at the Georgia Institute of Technology. He studies current issues in British politics and their historical context.

The Voting Rights Act was a monumental achievement of the modern struggle for racial equality in the United States. After legislators from both parties passed the law in 1965, sustained implementation was enabled by broad bipartisan support. Congress has renewed and strengthened the act several times, sometimes pushing into territory the Supreme Court was reluctant to sanction. The most recent reauthorization in 2006 was strongly supported by President George W. Bush, and by many Republicans as well as Democrats in Congress.

But the long stretch of broad support is at an end. During arguments in a 2009 case before the Supreme Court, both Chief Justice John Roberts and Justice Anthony Kennedy expressed concern that the act’s enforcement authority may have outlived its utility. Their skepticism was directed at Section 5, which authorizes the Department of Justice to block changes in election rules in states designated for special scrutiny because of their history of legalized racial discrimination. Since 2009, state Republican leaders have swelled the chorus of doubters.

Could modern America’s historic Voting Rights Act actually be eviscerated? Many people presume that racial progress is inevitable and irreversible. But a review of the nation’s troubled racial past reveals that the long fight for equal citizenship has been subject to shocking reversals.

It Took Two Reconstructions

The acquisition of full democratic political rights for African Americans advanced through two historic Reconstructions of law, elections, and the capacities of the federal government to enforce equal treatment. The First Reconstruction remade southern politics in the decades following the Union victory in the Civil War; and the Second Reconstruction re-enfranchised African Americans and transformed all aspects of race relations in the mid-twentieth century.

Both Reconstructions furthered and depended upon bi-racial democratic alliances; and both opened new leadership posts and legislative careers to African-Americans (and more recently to Latinos as well). The election of Barack Obama as the first African-American president could not have happened without the Second Reconstruction.

The First Reconstruction and Its Undoing

The First Reconstruction began during the Civil War itself, as President Abraham Lincoln and members of Congress laid plans for the restoration of the Union and the implementation of partial black suffrage rights. Changes pushed forward through the end of the 19th century, longer than is commonly known. As late as the mid-1890s, black political movements and their Republican allies were able to win at least sporadic victories in southern state politics. In North Carolina, for example, a bi-racial coalition of Populists and Republicans briefly gained control of the governorship and legislature and managed to elect two U.S. Senators and an African-American member of the U.S. House of Representatives.

But white supremacists in the South undid the electoral accomplishments of the First Reconstruction. Legal disenfranchisement of African-Americans began in Florida in 1889 and was completed by Georgia in 1907. At that juncture, the United States marked an unhappy “first” in the world-historical march of democratic political rights. A major, previously enfranchised group of Americans lost the right to vote and was pushed entirely out of party and electoral politics – all by quasi-democratic means. State-level referenda, statutes, and constitutional amendments were used to create burdensome prerequisites to voting by African Americans. Although extra-legal violence played a role, the success of “legal” disenfranchising maneuvers depended on acquiescence by the Supreme Court and the national political parties. Federal judges and Republican politicians looked the other way as southern Democrats did their dirty deeds, restricting U.S. democracy in an extraordinary way. No other democracy has ever legally unwound democratic voting rights on such a scale.

Take Two in the Mid-20th Century

The Second Reconstruction again rebuilt U.S. and southern politics to include African American voters. Starting in the mid-1940s, the NAACP mounted the first great voter registration drives in the wake of an important Supreme Court case outlawing whites-only primaries run by the Democratic Party in the South. Black registration increased rapidly from the 1940s, until a strong white backlash against the NAACP drives set the stage for the well-known struggles of the 1950s and 1960s – dramatic efforts to secure a full range of equal rights. One peak accomplishment was the Voting Rights Act of 1965. Its implementation was supported by the modern Supreme Court, which helped to ensure minority office-holding as well as voting rights – though conservative justices began to back off about 20 years ago.

Who Will Defend the Voting Rights Act?

Voting signs in New York are multi-lingual. Photo by John Morton via Flickr.com

Much of the Second Reconstruction is beyond undoing, but the future of the Voting Rights Act – including its use to block new attempts to hinder minority voting – is again at issue. Not only is the law’s fate in the hands of the Supreme Court; it will also depend on who wins the 2012 presidential election. Republican nominee Mitt Romney’s views are not clear, but several pivotal GOP states are mounting fervent challenges. The Obama administration has used Section 5 to block new voter ID laws in Texas and South Carolina, arguing that these measures hinder the voting rights of minorities. Challenges to the Voting Rights Act and its Section 5 powers will soon be argued before the Supreme Court, for decision by June 2013.

As this pivotal drama plays out, it matters greatly how forcefully the Department of Justice defends the Voting Rights Act. A second Obama administration would mount a more vigorous defense than a Romney administration beholden to Republican opponents of the law. The 2012 election will, therefore, not only decide whether America’s first black president wins reelection; it will also help decide whether the historic Voting Rights Act that helped make his political ascendancy possible survives – to be used with continuing vigor to ensure full democratic rights for future generations of minority voters and potential officeholders.

Richard M. Valelly is in the political science department at Swarthmore College. His research focuses on African American, Latino, Asian American, and Native American voting rights.

The 2010 elections were a high mark for Tea Party funders and voters determined to reshape the Republican Party and block President Obama’s agenda. With low voter turnout and high public frustration during a slow economic recovery, Tea Party Republicans triumphed in Congress and many states. But the 2012 contests proved much more treacherous. In contests where younger and minority voters turned out in force, many GOP candidates could not manage simultaneously to propitiate Tea Party sympathizers and appeal to other voters.

Republicans lost the 2012 presidential contest and gave ground in Congress, but no one should imagine that Tea Party forces have left the field. They remain determined to block Obama initiatives and make new electoral and policy gains in the years to come.

The GOP Challenge in 2012

By mid-2011, conservative politicians and media talking heads knew that the “Tea Party” was unpopular with most Americans, and downplayed the once-ubiquitous label. There were fewer than 900 references to the “Tea Party” in Fox News transcripts during the six months prior to the 2012 general election, compared to over 3,000 references in the same period in 2011.

Downplaying the label was possible, but discarding controversial positions pushed by grassroots Tea Partiers was much more difficult, as became evident in states like Montana, Missouri, and Indiana. Despite the general conservatism of such states, Republicans who were too frank in pushing extreme positions risked alienating other supporters they needed to win in November 2012. In addition, publicity for extreme remarks such as those made by Missourian Todd Akin about rape hurt GOP chances in many states.

Nowhere was the detrimental leverage of the Tea Party clearer than in the campaign of eventual GOP presidential nominee Mitt Romney. Romney’s persona didn’t fit the Tea Party ideal, but to win GOP primaries in which Tea Party voters were active, Romney took hardline positions. For example, hurting his chances to win Latino votes in November, Romney opposed college tuition breaks for students innocently brought to the U.S. as children, and argued that life should be made so tough for all undocumented immigrants that they would “self deport.”

Representative Paul Ryan of Wisconsin. Photo by Gage Skidmore via Flickr.com

Romney also propitiated big-money Tea Party ideologues by selecting as his running mate their Congressional champion, Representative Paul Ryan of Wisconsin. Ryan has authored national budget plans that would abruptly shrink the U.S. federal government and remake social programs from the New Deal and Great Society eras. His ideas thrill billionaire Tea Party funders who have nurtured his national political career, but they are out of sync with majority voter preferences. Romney’s careful positioning sewed up the GOP presidential nomination but put him in a general-election bind. In November 2012, Romney lost by twenty points or more in most demographic categories not solidly represented in the Tea Party base. He carried older whites, but lost big among Latinos, African Americans, Asian Americans, and young people.

A Turn to Voter Suppression

Aware that high turnout would not help the Republicans’ electoral chances, local Tea Parties engaged in concerted campaigns against imaginary “voter fraud.” Thousands of legitimate voters, mostly low-income people and minorities in traditional Democratic strongholds, saw their rights challenged by groups such as “True the Vote,” an organization founded by a Texas Tea Party activist. In the words of former President Bill Clinton, these efforts were intended to “make the 2012 electorate look more like the 2010 electorate than the 2008 electorate.” Widespread publicity about voter suppression efforts seems to have aroused enough anger to keep youth and minority voting high in 2012. But Tea Party forces remain active in many Republican-governed states, looking for ways to reduce turnout in the 2014 midterm elections and change voting procedures or ways of counting votes for 2016. Republicans pushed by Tea Party activists know that they need to shrink the electorate if they are to win or hold office in the future, and there is no sign that voter suppression efforts are going away.

Where are Republicans Headed?

As Republican elites debate the future of the national Republican Party, grassroots Tea Partiers are still hard at work. About two thirds of the groups active in 2010 were still active during 2012. An impressive 350 groups were still meeting as frequently as – or more frequently than – at the high tide of Tea Party effervescence. National election losses mean little for well-organized activists focused on local victories in conservative strongholds. Dozens of GOP members of the House of Representatives, for example, are more worried about ultra-conservative challenges in primaries than they are about majority public preferences in entire states or across the country.

The leverage and activism of the conservative base will make it difficult for the Republican Party to shift its national image. On the crucial issue of immigration, for instance, grassroots Tea Party activists will strongly oppose any kind of legalization that establishes a “path to citizenship” for low-income newcomers, especially Latinos. This may make it impossible for the “Republican establishment” to reposition their party on this pivotal issue. Similarly, Tea Party-prodded Republicans are likely to remain firm against tax increases and in favor of massive cuts to Medicare, Medicaid, and Social Security. And they are sure to oppose legislative and regulatory steps to fight global warming. Grassroots Tea Partiers aren’t going anywhere anytime soon.

For the immediate future, many conservative Republicans will refuse to compromise, stick to their priorities, and look to recoup losses in 2014 – a mid-term election that could very well see reduced voter turnout. A single electoral setback, in short, does not mean certain defeat for the priorities pushed by the Tea Party’s ideologues, billionaires, and grassroots activists. Nor will other Republicans soon gather the nerve to resist Tea Party pressures. In 2012, the national Republican Party may have suffered more than gained from Tea Party activism. But the after-effects of the original mobilizations in 2009 and 2010 live on, and the Republican Party remains extreme in style and policy substance. The impact of the Tea Party will remain evident in American politics for years to come.

Theda Skocpol is the Victor S. Thomas Professor of Government and Sociology at Harvard University and is the Director of the Scholars Strategy Network. Her research focuses on health reform, social policy, and civic engagement.

Vanessa Williamson studies Government and Taxation at Harvard University.

In his 2011 State of the Union Address, President Obama invoked “our Sputnik moment.” Recalling U.S. investments in research and education after Russia launched the first space satellite half a century ago, the President called for renewed efforts to meet international competition with investments in education and research, renewable energy, biomedical science and information technologies. Obama’s call to action still matters.

The Original Sputnik Moment

The Soviet Union’s launch of an unmanned space satellite on October 4, 1957 – followed by the launch of Sputnik II a month later – marked a watershed in the Cold War. That era of high super-power tensions brought an arms race and repeatedly pushed the world to the brink of all-out nuclear war. But there were also some positive side-effects, which we should not forget.

With the Sputnik launches, the Soviet Union showed that it had mastered long-distance rocket technology and might be able to use space as a weapons platform. The threat to America was obvious, but so was the challenge – and our nation responded immediately and vigorously under both Republican and Democratic presidents.

Photo by Mikel Vidal via Flickr.com

Although he was a Republican and the former commander in chief of U.S. forces in Europe during World War II, President Eisenhower did not respond to Sputnik by calling for a huge boost in military spending. Instead, he said the American people should meet the Soviet competition by improving education and investing in science. He created a White House office of science and technology led by the president of the Massachusetts Institute of Technology; quintupled funding for the National Science Foundation; launched the National Aeronautics and Space Administration; and proposed increased federal, state, and local spending on education.

Both Eisenhower and his Democratic successor, John F. Kennedy, were concerned about the race for the hearts and minds of people in the developing countries of Africa, Asia, and the Middle East. Both called on America and the rest of the “Free World” to counter communism by helping developing nations escape from poverty. Both presidents also worked toward equal civil rights at home. In fact, right before Sputnik, in September 1957, when Little Rock resisted the Supreme Court’s 1954 ruling striking down school segregation, Eisenhower – with the international audience in mind – deployed U.S. Army troops to force admission for nine black children who were being barred from school by the Arkansas National Guard. The Soviet Union was reminding people around the world, especially in developing nations, that the United States talked a lot about freedom but denied it to African Americans. It was the Cold War, as much as anything, that persuaded Eisenhower to press desegregation with federal power.

President Kennedy Calls for Social Reforms

President Kennedy was even more vigorous about invoking Cold War competition to urge America to realize as well as proclaim high democratic ideals. When he accepted his party’s nomination in July 1960, Kennedy pointed to “three worlds – the free nations, the repressed countries of the Communist world, and the impoverished nations.” The United States, he declared, must “awaken” the new nations and strengthen the free world by improving the lives of our own people and showing that democracy is more inclusive and successful than Communism.

How, exactly? In his January 1961 Inaugural Address, President Kennedy highlighted the moral imperative of tackling the problem of poverty in the United States, even as we protect liberty around the world. Calling on Americans to “pay any price, bear any burden, meet any hardship, support any friend, oppose any foe to assure the survival and the success of liberty,” Kennedy set forth an ambitious agenda to reform our system here at home and defend our ideals around the world. “Ask not what your country can do for you, ask what you can do for your country,” Kennedy said, and he followed with a less remembered call to “my fellow citizens of the world”: “Ask not what America will do for you, but what together we can do for the freedom of man.”

Throughout his time in office, President Kennedy repeatedly linked domestic reforms to a global freedom agenda. America’s unemployment rate was too high and economic growth too slow, he argued. To help less privileged citizens, Kennedy pushed for improved unemployment compensation and an increase in the minimum wage. He tackled lack of access to health care and the blight of substandard housing for 25 million Americans. Kennedy, in short, coupled the challenge of the Cold War to making progress at home on extending equal civil rights, fighting poverty, improving education, and extending health care to the poorest Americans.

From Then to Now

The Cold War inspired presidents to defend U.S. ideals in contests with the Soviet power by improving the actual lives of all Americans through expensive and sustained public efforts to overcome poverty, inequality, and racial injustice. Today, domestic challenges are no less serious, and questions about U.S. standing and influence no less worrisome.

In 2011, the U.S. poverty rate stood at a decades-long high point of 15%. Family incomes are falling, and U.S. income and wealth gaps are the highest in the advanced world and have reached levels not seen since the end of the 1920s. The richest one percent claim nearly a fifth of all income and pay low taxes, even as the United States has declined to 12th place in the world in reading literacy; 17th in science literacy; and 25th in math literacy. U.S. life expectancy at birth languishes in 25th place among several dozen advanced industrial democracies.

America’s moral and political influence in the world remains strong, but for how long? Decades ago, the Sputnik moment challenged and mobilized Americans to work together to strengthen our economy, society, and democracy. The 9/11 crisis in 2001 sparked no similar effort; today, U.S. leaders are divided, and federal spending as a share of the economy is lower than at the time of Sputnik and headed lower still. Yet the challenges America faces now are, if anything, even greater than then. It remains to be seen whether President Obama and his successors can invoke another Sputnik moment to mobilize resources to fight social injustices and boost U.S. competitiveness in a world still marked by clashing models and ideologies.

Thomas F. Remington is in the political science department at Emory University. He studies transitions in politics, primarily in Russia and China.

As female roles and rights change quickly across the globe, women’s organizations push for gender equality in developed and developing countries alike. But how do such organizations make a difference beyond economic trends and government policy? Part of the answer lies in international relationships and leverage. Nonprofit groups are involved, and so is the United Nations, which sponsors many initiatives and has regularly convened world conferences. Most recently, the Fourth World Conference on Women was held in Beijing, China in 1995.

United Nations world conferences are like years-long political campaigns. Some 5,000 government delegates and 30,000 women activists gathered in Beijing for two weeks in late August and early September in 1995. But years of mobilization preceded and followed.

  • From 1993 to 1995, 189 governments participated in regional and global preparatory meetings to negotiate the Platform for Action, and women activists also met by the hundreds and thousands in regional and global forums exchanging experiences and formulating inputs.
  • The book-length Platform finalized in Beijing set the stage for further actions to advance gender equality in participating nations, and in 2000 “Beijing+5” was held in New York to review implementation steps and renew momentum into the 21st century.

My research tracks the mobilization of women’s organizations to participate in the Fourth World Conference and analyzes the after-effects. In particular, I used interviews and field work to look closely at women’s organizations and the impact of the world conference in India and China, two populous, rising powers with entrenched patriarchal traditions. India is the world’s largest democracy, and China is the largest authoritarian polity. How did world conference participation play out in two such different political systems?

Impacts in China and India

Conventional wisdom might lead us to expect that world conference participation would have a big impact in a democracy but create only a ripple in an authoritarian polity. In actuality, outcomes in India and China showed no neat correspondence to regime characteristics. The Fourth World Conference had major effects in both countries, yet in some realms the effects were more important in China.

  • A much wider range of women participated in India than in China. Indian participants included women from the lower castes, religious minorities, and tribal groups, while in China participants came mainly from a small and homogenous segment of urban professional women.
  • Like their Chinese counterparts, activists in India leveraged the Fourth World Conference in their policy debates with government authorities; they built organizations in the name of the Fourth World Conference; and they hammered out movement identities in intense dialogue with labels endorsed by the Conference for participating non-governmental organizations.
  • The Fourth World Conference facilitated the formation of nineteen new women’s organizations in China, but helped only four additional ones take wing in India.
  • Both the Indian and Chinese governments endorsed the Fourth World Conference’s most anticipated achievement, the Platform for Action, which spells out what governments, women’s movements, international organizations, and even commercial banks should do to further gender equality.
  • But women activists in the two countries diverged. The Chinese activists argued that the World Conference agenda correctly identified causes of and solutions to gender discrimination, while the Indian activists argued that the agenda too closely resembled their government’s neo-liberal policies, and thus was insufficient. Leaders of the Chinese women’s organizations were more positive than their Indian counterparts about the global identity and strategies endorsed by the Fourth World Conference.

What Can be Learned?

My analysis suggests lessons about the ways in which world conferences and similar transnational campaigns may energize and connect with domestic social movements – such as the women’s rights efforts I studied in India and China.

  • To understand the possible effects, both authoritarian and democratic contexts need to be broken down into more nuanced relationships between governments and social movements and among social groups themselves. It is a mistake to expect world conferences or other transnational efforts to have a uniform, predictable effect depending only on the type of political systems from which participants come.
  • Authoritarian regimes certainly do place serious constraints on women’s rights activism, and the scope for action is likely to remain greater for government officials than for groups operating on their own. But in authoritarian systems as in all others, we find various relations between governments and citizens. Even in non-democratic regimes, there will be arenas in which participation in a world conference can allow existing or newly formed groups to set agendas and encourage officials to take new kinds of actions.
  • In democratic regimes, citizens groups enjoy more room to maneuver, but much depends on whether civil society actors are well-established and what they aim to do. Established and independent-minded groups may be able to use world conferences to challenge and move well ahead of existing domestic government policies.

In sum, domestic rights movements cannot change the type of regime they live under, but they can achieve more than their national regime type leads us to expect. Prior changes in women’s organizational capacities and their ability to set agendas were the key to understanding the sometimes paradoxical impacts of the Fourth World Conference in India and China. Sponsors of transnational campaigns should pay attention to ongoing developments within and among domestic organizations, and look for ways to boost their capacities to pursue specific reforms. Domestic groups, in turn, should recognize that world conferences are sources of long-term leverage much more than one-time events.

Dongxiao Liu is in the sociology department at Texas A&M University. She studies women, social movements, and international development in India and China.

Ratified in 1951, the 22nd Amendment to the Constitution of the United States limits the number of terms a president can serve to two – and it is a lifetime restriction. Living two-term presidents such as Bill Clinton and George W. Bush are thus excluded from ever serving again. The irony of the limit is that even the most politically successful contemporary presidents – those who achieve reelection – reach the peak of their careers and the beginning of their decline at the same moment, when they raise their hands to be sworn in at the second Inauguration.

From that moment, second termers are known to be leaving office on a date certain. Inexorably, their influence is on the wane. So how much can re-elected presidents accomplish in their last four years? Although the options are limited, they are not all bad, because over more than two centuries the office of the U.S. presidency has accumulated significant powers.

Limits for Lame Ducks Walking

Photo by Tony Fischer Photography via Flickr.com

Second-term presidents regularly face problems with appointments, bargaining, and relations with Congress:

  • In relations with Congress, the first two years of the second term hold the best prospects for reelected presidents to get Congress to enact their priorities. The normal presidential success rate during that time is on a par with the first term. In the last two years, however, the president usually has little success, especially because the reelected president’s party regularly does badly in the next midterm election.
  • A reelected president is likely to have to fill many high level positions. Because heading an agency or working in the White House is a high-stress position that pays relatively little compared to the private sector, high-level officials often depart and need to be replaced at the beginning of the second term. At the same time, judges of the president’s party often step down in hope that their replacements will be ideologically similar. Yet the now lame-duck president may have trouble persuading highly qualified administrators to serve. And if nominees are found, the Senate still has to confirm them. It may do so in the first years of a second term, but in the last two years the Senate, if controlled by the opposition, may stall in order to wait the president out.
  • In international affairs (and sometimes in domestic dealings, too), actors outside the government may try to take advantage of a second-term president. Because lame duck presidents are limited in the promises they can make, foreign leaders or domestic actors may go so far as to deal with major party candidates or an incoming president-elect before the sitting president even leaves office to see if they can get a better deal.

Possibilities for Continuing Influence

Reelected presidents need to move quickly on proposed legislation. Fortunately for them, the experience gained from the first four years can help lay the groundwork even before the second term starts. Reelected presidents should be realistic and proactive: they should fill key posts quickly and encourage judges who are planning to retire to leave as early as possible. Key strategic positions in the bureaucracy are the priority, since lesser positions can be left vacant to be filled by acting agency heads, who are generally reliable professional civil servants.

Second-term presidents can also deploy administrative and executive powers that do not require legislative approval. Executive orders, rulemaking, and the exercise of delegated authority are all important tools at their disposal, even though there are limits to their effectiveness.

  • Constitutional authority: All presidents can use the veto and the threat of a veto to shape policy. But a veto serves only to negate, so it cannot much help the president drive legislation in a preferred direction. The president also has the authority to issue pardons and take the nation to war.
  • Delegated authority: Congress has often delegated significant powers to the president to ensure that laws on the books are carried through. According to the courts, delegated authority can only be reversed by another act of Congress. As long as a given president remains in office (and even afterwards), legislative reversals are unlikely because of the presidential veto that can only be overridden by a two-thirds vote of both houses of Congress.
  • Executive orders: As the chief executive officer of the vast federal government, the president does not need Congressional permission to issue orders that apply exclusively to executive branch activities. The U.S. federal government taxes and spends roughly one-fifth of the gross national product, and the United States is the only world superpower. Executive orders can therefore have very wide-ranging effects. True, the next president may cancel or amend executive orders, but it often happens that any given order has built up a supportive constituency, making it impracticable to reverse. And to the extent that executive orders enhance presidential power, succeeding presidents may want the authority regardless of party.
  • Rulemaking: Regulations issued by agencies to implement federal statutes are a little known but critical part of policymaking. Laws passed by Congress often leave great latitude for interpretation, and proposed regulations go through a rigorous and difficult review process. Once in place, they are hard to change, because modifications or reversals must go through the same difficult process. Even a president known to be on the way out the door can thus use rulemaking to place a firm imprint on governmental policy.

Last but not least is the “bully pulpit” available to all presidents. This was President Theodore Roosevelt’s phrase for the chief executive’s capacity to highlight an issue or ideas through statements that capture broad public attention. Second-term departing presidents may have special capacities in this regard even at the very end. In the last State of the Union address, or in a thoughtful Farewell Address, an outgoing president can call upon eight years of experience to make cogent and influential observations – such as President George Washington’s advice against “permanent alliances” with foreign powers, or President Dwight Eisenhower’s warning about the modern “military industrial complex.” Their ideas can become a permanent part of national debates, living on long after those who gave expression to them move out of the White House and lay down the awesome powers of the U.S. presidency.

Daniel Paul Franklin is in the Political Science department at Georgia State University. He studies the institutions of American politics, including both state and federal governments.

For decades, social scientists have been looking for damaging effects from particular media forms, especially those enabled by new communication technologies. Does television make us stupid? Is the Internet undermining social ties? Despite many research efforts, there is very little uncontested empirical evidence of generally damaging media effects.

If social scientific methods have not shown that the media are doing us harm, why do so many of us still believe that one sort of communications media or another causes serious damage to individuals or society? My research on media criticism and its history has convinced me that our mistrust of communications media tells us more about our ambivalence toward modern life than about any actual ill effects of new modes of communication.

Each New Technology Brings More of the Same Worries

Photo by woodleywonderworks via Flickr.com

Media criticism proceeds as if we already have (or will soon find) proof of damaging media influences. Many critics make sweeping but unsubstantiated claims about what must be happening to this generation, thanks to the latest media form. The Internet is our most recent source of worry. But we should not forget that critics once bemoaned the irreversible damage allegedly being wrought by predecessors like the radio, comic books, and the Sony Walkman.

With the arrival of each new media technology, commentators replay surprisingly similar themes of hope and dread. People hope that the new form will offer more democratic access to knowledge, education, and uplift. But then critics are dismayed to discover that the new technology is instead often deployed for entertainment, escape, and pleasure. This recurrence of themes and arguments is telling. Just like the Internet, earlier new forms of mass communication – magazines, radio, films, and television – were first hailed as having unprecedented potential to educate the masses by spreading enlightenment and higher forms of culture, only to soon be decried as entertaining diversions spreading trash instead of art, classics, or other forms of alleged cultural uplift.

Media criticism tells us a lot about our recurrent hopes and fears for modern life – that is, new media technologies provide a mirror for social anxieties. My review of previous generations of media prognostications and commentary reveals several recurring claims:

  • When a new communications technology makes its debut, we worry most about children, seeing them as particularly vulnerable and in need of protection. Don’t let them watch too much television or play too many videogames.
  • Better education is supposed to set up a protective shield for children and other vulnerable groups, such as the less educated or immigrants, preventing bad effects from exposure to the new media.
  • Each new communications modality brings another round of agonizing about “popular taste,” with social critics baffled and dismayed by the choices of their fellow citizens. How could they like that junk? The specter of an imaginary “lowest common denominator” makes its appearance, as we blame the media for dumbing everyone down.
  • Prior golden days are invoked – such as an imagined pre-media world of logical arguments and children playing for hours out of doors. If only the latest media breakthrough were not polluting society, we’d still be “back then,” leading more wholesome and meaningful lives.

A Fresh Perspective on Media in Our Lives

Instead of once again replaying the same fears and fantasies, we could reexamine our premises. Innovations in mass media, including the Internet, do not really drop from the sky to pollute a once-pure world. The Internet is only the most recent version of a communication technology that was humanly created to further deeply human purposes – including, above all, storytelling. Communications media are technologies for creating and spreading narratives, the very sorts of stories we have been telling among ourselves for a long time, using the spoken word, the written word, printed newspapers, magazines, and books, and now a succession of electronic forms for conveying sounds and pictures.

Yes, to some the Internet’s stories may seem to be far too much “trash” and not enough “information” or “art.” That is no different from how many reacted to radio, movies, and television. But every era has plenty of trash, and often one era’s trash becomes another era’s treasure. One generation’s inexpensive, dreadful, supposedly degenerate music or cheap thrill can evolve – and often has evolved – into another generation’s classic genre.

People and their varied choices and tastes cannot and should not be expected to mirror a unified world of highly educated aesthetes. Maybe the contrasts we repeatedly draw between “art” and “trash,” between “information” and “entertainment,” or between “emotion” and “reason” are really shifting interactions between dimensions that all of us experience and practice. Perhaps the media simply amplify who we already are, giving us additional ways to define ourselves, connect with one another, and share and experience a range of human emotions and perspectives.

Some of today’s most insightful media scholars study how people, individually and in groups, actually use various kinds of communication media in everyday life. These ethnographic studies are not obsessed with finding damaging media effects; instead, they report what happens as people deploy various new ways to communicate. The increased access, range, and creativity of what people are making, choosing, sharing, and enjoying are heartening. This research on how people use new media to make sense of their lives shows how fruitful it can be to let go of unsubstantiated fears about media damage. Instead, we need to learn more about what real people are doing, often very creatively, with actual media forms as those forms change over time.

Joli Jensen is in the Communication department at the University of Tulsa. She studies the relationship between media and contemporary culture.