A major point of discussion after the 2016 U.S. election has been the fact that while many polls predicted a win for Hillary Clinton, she ultimately lost the Electoral College. Numerous estimates showed a commanding lead for the Democratic candidate, but they were decidedly off when it came to calling the results on election night. People were quick to blame the pollsters for misinterpreting the results or collecting bad data, but social scientists point to methodological issues that plague almost every poll and survey and that can help explain some of what happened in November.
Issues with sampling are often part of the problem. A common method within polling circles is “Random Digit Dialing” (RDD), in which researchers build a list of potential phone numbers, draw a random sample from that set, call those numbers, and ask people to take the survey by phone. This method can create coverage bias, in which some groups or people are excluded from the respondent pool: not everyone has a phone or is able to stop and take a survey by phone in the middle of the day. Any conclusions drawn from such a sample cannot be generalized to the broader population because the sample is not truly representative. To correct for this, some researchers use address-based sampling (ABS), building a respondent pool by sending mailed invitations to randomly selected home addresses. ABS can sometimes elicit a higher response rate than RDD, but research shows that both methods tend to over-represent non-Hispanic whites and people with college educations. In short, it is difficult to get a representative sample. (A rough sketch of the RDD sampling step appears after the references below.)
- Andy Peytchev, Lisa R. Carley-Baxter, and Michele C. Black. 2011. “Multiple Sources of Nonobservation Error in Telephone Surveys: Coverage and Nonresponse.” Sociological Methods & Research 40(1): 138-168.
- Michael W. Link, Michael P. Battaglia, Martin R. Frankel, Larry Osborn and Ali H. Mokdad. 2008. “A Comparison of Address-Based Sampling (ABS) Versus Random-Digit Dialing (RDD) for General Population Surveys.” Public Opinion Quarterly 72(1): 6-27.
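To make the RDD step described above a bit more concrete, here is a minimal Python sketch. It assumes a toy frame built from a handful of made-up area-code/prefix combinations and an arbitrary sample size; it only illustrates the “build a list of candidate numbers and draw a random sample” idea, not how polling firms actually construct their sampling frames.

```python
import random

# Hypothetical area-code/prefix combinations; a real RDD frame is built
# from actual telephone exchange data, not this made-up list.
KNOWN_PREFIXES = ["212-555", "919-555", "415-555"]

# Build the frame of candidate numbers: every possible 4-digit suffix
# appended to each known prefix.
candidate_numbers = [
    f"{prefix}-{suffix:04d}"
    for prefix in KNOWN_PREFIXES
    for suffix in range(10_000)
]

# Draw a simple random sample of numbers to dial. The sample size here
# is arbitrary, chosen only for illustration.
random.seed(0)  # fixed seed so the example is reproducible
dialing_sample = random.sample(candidate_numbers, k=1000)

print(dialing_sample[:5])
```

Even in this toy version, the coverage problem is visible: anyone whose number falls outside the prefixes in the frame, or who has no phone at all, can never appear in the sample, no matter how carefully the random draw is made.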
Another common set of problems lies with the respondents themselves. Social desirability bias occurs when participants provide answers they feel are more socially acceptable, even if those answers do not reflect their true beliefs. Krumpal provides an overview of the forces that drive social desirability bias and the effects it can have on both survey results and the ways researchers interpret the data. Another prominent issue is panel conditioning, which happens when a survey respondent is asked the same questions repeatedly over time. Respondents often change their answers from one wave to the next, revealing how unstable survey responses can be.
- Ivar Krumpal. 2013. “Determinants of Social Desirability Bias in Sensitive Surveys: A Literature Review.” Quality & Quantity 47(4): 2025-2047.
- John Robert Warren and Andrew Halpern-Manners. 2012. “Panel Conditioning in Longitudinal Social Science Surveys.” Sociological Methods & Research 41(4): 491-534.