Every news-consuming American knows there’s always a ballot or election on the horizon. As stats are shot at us from left and right, it is difficult to go a day without hearing the most recent reports on which candidate has taken the lead, who has gained momentum, and who or what is no longer viable. This leads to two rather important, but rarely asked, questions: who cares, and what do these numbers really mean? In this roundtable, we hope to provide a basic understanding of polling, its different forms, how polls are used by candidates, campaigns, and the media, and what insight sociologists can provide.

What is polling and what does it measure?

Howard Schuman: A poll is almost always a series of questions intended to provide information about a large population (one too big to talk to one on one), such as the total American population, now over 300 million people, or perhaps only American adults, say, 18 and over. A poll does this by drawing a relatively small sample from the population, then using probability theory to generalize the sample results to the entire population.
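To make that generalization step concrete, here is a minimal sketch (not from the roundtable itself) of the standard 95% margin-of-error calculation for a sample proportion; the sample size of 1,000 and the 50% result are invented for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 people finding 50% support carries a margin of about
# plus-or-minus 3 percentage points:
moe = margin_of_error(0.50, 1000)
print(f"{moe * 100:.1f} percentage points")  # about 3.1
```

This is why a well-drawn sample of only a thousand or so people can stand in for a population of hundreds of millions: the uncertainty depends on the sample size, not the population size.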

Surveys or polls can be used for practically any issue you wish to ask about, whether it’s factual, attitudes, beliefs, values, or anything else you can think to ask people. Most people are familiar with polls that measure opinions about political candidates, including who will win an election, but there’s really no limit to the types of topics that a poll or survey can ask about. For example, the federal government uses surveys each month to measure unemployment. So when you hear a figure like “There’s 8.2% unemployment over the past month,” it’s based on a sample. It’s a fairly large sample, but still, it’s a very small part of the total population of the U.S. labor force, so the government uses a survey to determine and report on unemployment every month. And much else that appears in government reports is based on samples of either the total population or some part of the population.

The questions themselves matter. And it turns out that writing questions is a lot more complex than most people realize. Answers can be affected by the form of the question—that is, whether it is open-ended or closed. The words also matter. For example, there’s no real difference between “forbidding” smoking and “not allowing” smoking in, say, a classroom, but many Americans give different answers to the question depending on whether you say “Should smoking be forbidden?” or “Should it not be allowed?” So, wording can make a huge difference. A third factor that affects answers is the context of the question. This includes the order of the questions—what questions came before—and also if there’s an interviewer, the race and sex of the interviewer often affect the answers, particularly if the questions deal with race or gender.

Members of the organization 38 degrees analyze poll data. Photo by 38Degrees via flickr.com.

Let me add that anyone who watches television, reads newspapers, or looks at the Internet will see lots of polls. They’ve increased enormously since first developed (usually traced to the mid-1930s), so today polls proliferate on all subjects and very much on any political issue, and of course on the Republican nominating process and so forth. They’ve really become overwhelming, and it’s important to try to distinguish good from bad surveys.

What are the uses and limitations of polling?

Paul Goren: Well, one big limitation is that a slight change or slight alteration of a particular phrase, or the inclusion or exclusion of a particular adjective, can change poll responses a lot. For instance, if you ask the question “Should we spend more, spend less, or spend about the same on Social Security?” you might find 53% of the public says, “Let’s spend more on Social Security.” And then if we run the survey using the following wording, “Should we spend more, spend less, or spend about the same on protecting Social Security?” just by adding the one word “protecting,” support for spending on Social Security might move 15 points in the liberal direction. And so if you have a poll that’s run by the National Rifle Association or the Sierra Club—any group with an obvious stake in the outcome of the polls—you can probably discount it. Even with legitimate polling organizations like Gallup and NBC/Wall Street Journal, you have to look at that question wording very carefully because just a slight change, a slight tweak, can lead to substantially different results. If 53% say we should spend more, that’s a majority; if it’s 68%, that’s a supermajority, and that suggests you’ve got more of the public with you, when it may just be the reaction to that one word, the idiosyncrasy of that one question. So the question wording is one thing you should pay close attention to.

The trick there is to consult multiple polls. Here’s an example I always use with my students to show the problem with relying on results from a single survey question: “On your final exam, how many of you would like to have one multiple-choice question?” Show of hands? Nobody puts their hand up. “How about a hundred multiple-choice questions?” All the hands go up. Because that one question could be the one that they don’t know, and they could get wiped out to zero. Same with public opinion: Why would you try to measure the public’s perceptions on an issue using one question? You wouldn’t use a hundred, but maybe two, three, four, or five polls that ask about the same thing.

So, if politicians are smart, they’ll look at poll results, but not just from one survey, particularly one survey or one question that seems to confirm their preexisting bias; they should look at several polls with several questions. But, you know, they’re politicians, they were elected to do x, y, and z, they want to see the public is with them, and so they may resist that temptation to try to be open-minded and inclusive in their assessment of polling data!

Schuman: One problem is that polls tend to construct what’s happening. That is, people hear that some issue is very prominent at the moment, and that then influences what they think is the important issue of the day.

There is certainly a tendency to dismiss polls, but usually they get dismissed by people who don’t like the results. And the same person who says, “Oh, I don’t believe polls; how can you do that with so few people?” and so forth is apt to defend a poll if it goes in a direction they favor! So, I don’t take critics too seriously—of course many polls are trivial and many polls are badly done, but the criticisms are usually based on whether the results agree or disagree with what you think.

There are many, many ways to conduct a survey—and some are better than others. Photo by Wilderdom via flickr.com.

That being said, there are many problems that come up when doing a survey, and these apply both to sampling and to question-asking. In the case of sampling, not everyone is willing to answer questions or can even be located. The proportion of people who take part in the survey is called the “response rate” for the survey, and response rates have been dropping in recent years. Rates were around 80% in the 1950s and are now under 10%, though much higher for government surveys, especially those where participation is legally required. There are ways to compensate for the bias that nonresponse introduces, but the compensation cannot be perfect, and it adds more uncertainty to almost all poll reports these days.
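One common way pollsters compensate for nonresponse, though Schuman doesn’t name it here, is weighting: respondents in underrepresented groups count for more so that the weighted sample matches known population shares. A minimal sketch, with all numbers invented for illustration:

```python
# Post-stratification weighting: each group's weight is its population
# share divided by its share of the sample, so underrepresented groups
# (here, younger respondents) get weights greater than 1.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}
sample_share = {"18-34": 0.15, "35-64": 0.50, "65+": 0.35}

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for some policy within each age group:
support = {"18-34": 0.70, "35-64": 0.50, "65+": 0.30}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

Here the raw sample skews old, so the unweighted estimate understates support; weighting restores the population’s age mix. As Schuman notes, though, this only corrects for characteristics the pollster can observe and measure, which is why the compensation cannot be perfect.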

Also, to the extent that the questions asked are not good measures of what you intend to measure, results are less valid. A big problem in constructing questions is that every big issue has many different aspects. So, although some people are against all abortions and some people would leave all abortions up to a woman’s choice, most respondents to a survey will give different answers depending on how you specify the reasons for the abortion and the time it occurs, so really, it doesn’t make sense to ask a general question about abortion (“Do you favor or oppose?” or whatever). Questions are almost always better if they are more specific. And that’s true of almost any other issue, whether it’s gun control, Iran’s development of nuclear weapons, Obama’s health reform legislation—in all these cases, the question has to deal with the specifics in order to be useful.

Can you provide a brief contextualization of the role polling plays in the political process?

Tom Smith: Polling is used for many different purposes in the political process, including, but not limited to, so-called horse-race questions about the candidate one intends to vote for, the likelihood of voting, familiarity with candidates and issues, message testing, and assessment of campaign ads. The polls may be directed to the whole electorate, likely voters, members of one political party only, or some special target group (e.g., Hispanics or first-time voters). They range from high-quality, well-designed surveys down to virtual junk based on tiny samples, biased questions, and other shoddy practices.

When done well, polls provide valuable information to a campaign and can greatly improve a candidate’s chances of success in an election. But polls are often poorly done. First, campaigns and their consultants may lack the technical competency to design and carry out surveys properly or lack the resources to do scientifically credible work. Second, campaigns often need very up-to-date information (e.g., after a debate or the emergence of some damaging news) and there may be neither the time nor resources to measure the impact of the breaking development.

Goren: Polls have been part of the political system of campaigns and elections for a very long time. There’s some good archival research that shows that presidents Kennedy, Johnson, and Nixon paid a lot of attention to internal polls to get an idea of what kind of policies they could pursue, how far they could go, and things of that nature. Polling has informed, or at least served as a guideline, for presidential decision making for a long time. But over the past couple of decades, it’s really taken off. There is now polling data all over the place.

One way it matters is that politicians can get a very good sense of where the public stands on any particular issue at any particular point in time. Whether politicians choose to pay much attention or much heed to those polling data is a different story. Sometimes the polls suggest politicians should not pursue a given policy, and politicians, by virtue of, say, a big win on Election Day, might think they have a mandate from the voters and try to move in a direction even though the public opinion polls suggest it might get them in some trouble. One example of politicians taking such a risk would be when the Republicans had their big win in the midterm election of 2010. Historically large, but not unprecedented, it was a very, very big win for the Republicans. And a lot of Republicans in the House took that as evidence that the voters wanted them to move in a very conservative direction on entitlement programs—take Medicare and change it from an entitlement program to a voucher program. But when the polling results started coming back, they started getting a lot of heat for that.

The voting process is often placed under the domain of political scientists. How does a sociological approach differ from, or supplement, that found in political science? 

Smith: While elections and campaigns may formally be under the domain of political science rather than sociology, sociologists can play an important role in the use of political polls. The topic of public opinion falls as much under sociology as under political science. Sociologists are particularly adept at understanding social change and how short- and long-term trends may be reshaping both society as a whole and the body politic in particular. Sociologists in particular have a good grasp of cohort differences and how these may be changing the political climate. They are also well tuned to understanding subgroup dynamics and the role of different groups such as ethnicities, genders, and classes in the political process. Political scientists sometimes have a narrower political focus, while sociologists are more likely to have a more holistic understanding of voters.

Schuman: I can give you an example from my own research, which is quite different from political polling. I’ve been doing, for the last few years, studies of what is called “collective memory”—that is, how people think about important national and world events from the past, such as the 9/11 terrorist attack, the invasion of Iraq, the assassination back in the ‘60s of President Kennedy, and even going back to how they think about World War II. I’ve had a guiding hypothesis—shared by someone working with me, Amy Corning—that most people remember best those events that occurred when they were growing up (roughly ages 10 to 30). That is, if you ask them “What are the important events over, say, the last 100 years?” most people will give an event that occurred when they were, themselves, adolescents or in very early adulthood, no matter what their present age is. We’ve investigated this not only with American data but with data from half a dozen other countries (Germany, Japan, Russia, Israel, Lithuania, and Pakistan), and we did this because we believed the hypothesis to be quite general, not just about people in the United States. So, that’s an example of something that has nothing to do with predicting who’s going to win an election, but it’s cross-national and it deals with a kind of fundamental hypothesis about human beings—it’s just one example of what sociologists can do with polling and survey data.

Goren: By definition, political scientists care about the subject, right? Otherwise, why would you get a PhD in political science and teach about it? So, people in my particular discipline are deeply informed, deeply knowledgeable about all aspects of politics (what liberalism and conservatism mean, what’s in the Affordable Care Act, things of that nature). And then when you come across poll results that find that what the public knows is just shockingly low or abysmally low, people are just blown away! And so political scientists—not all of us, but a lot of people in my discipline—when they see that only 20% of the public knows that John Roberts is the Chief Justice of the Supreme Court, they say, “Oh my gosh! That’s why democracy is in such a sorry state!” …But despite my lack of knowledge, I still might be a very reasonable voter. John Roberts might be the utility infielder who played for the Minnesota Twins seven years ago for all I know, but even though I know very little about politics, it still might be a very reasonable choice that I would make on Election Day. And so political scientists [might be] so single-mindedly focused on political matters they miss the big picture! What I’ve noticed, at least among the folks in sociology, they tend to look at polls not simply in the context of politics but some aspects of politics and society more broadly.

Kyle Green is in the sociology program at the University of Minnesota. He is the author, with Doug Hartmann, of “Politics and Sport: Strange, Secret Bedfellows.”

Paul Goren is a political scientist at the University of Minnesota. He studies public opinion, voting behavior, and applied statistics and econometrics.

Howard Schuman is a professor emeritus of sociology at the University of Michigan. He is the author of Method and Meanings in Polls and Surveys.

Tom Smith is a senior fellow and the director of the Center for the Study of Politics and Society at the University of Chicago’s NORC. He is the incoming editor of Public Opinion Quarterly.