Photo by oatsy40, Flickr CC

A major point of discussion after the 2016 U.S. election has been the fact that while many polls predicted a win for Hillary Clinton, she ultimately lost the Electoral College. Numerous estimates showed a commanding lead for the Democratic candidate, but they proved decidedly off on election night. People were quick to blame pollsters for misinterpreting the results or collecting bad data, but social scientists point to methodological issues that plague almost every poll and survey, and that can help explain some of what happened in November.

Often, issues with sample distribution are part of the problem. A common method within polling circles is “Random Digit Dialing” (RDD), where researchers generate a list of potential phone numbers and take a random sample from that set. They then call those numbers and ask people to take the survey by phone. This method can create coverage bias, excluding some groups or people from the respondent pool: not everyone has a phone or is able to stop and take a survey in the middle of the day. Conclusions drawn from such a sample cannot be generalized to the broader population because the sample is not truly representative. To correct for this, some researchers use address-based sampling (ABS), building a respondent pool by mailing invitations to randomly selected home addresses. ABS can sometimes elicit a higher response rate than RDD, but research shows that both methods tend to over-represent non-Hispanic whites and people with college educations. In short, it is difficult to get a representative sample.
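To make the mechanics concrete, here is a minimal sketch of RDD-style sampling in Python. The fixed area code and exchange, the frame size, and the sample size are all hypothetical choices for illustration; real RDD frames are typically built from banks of known working exchanges rather than purely random digits.

```python
import random

# Hypothetical area code and exchange, for illustration only.
AREA_CODE = "612"
EXCHANGE = "555"

def generate_frame(size):
    """Build a list of candidate phone numbers by appending random
    four-digit suffixes to a fixed area code and exchange."""
    return [
        f"({AREA_CODE}) {EXCHANGE}-{random.randint(0, 9999):04d}"
        for _ in range(size)
    ]

def draw_sample(frame, n):
    """Take a simple random sample of n numbers from the frame;
    these are the numbers interviewers would actually dial."""
    return random.sample(frame, n)

if __name__ == "__main__":
    frame = generate_frame(10_000)    # the list of potential numbers
    sample = draw_sample(frame, 100)  # the numbers to call
    print(sample[:5])
```

Even with a perfectly random draw like this, the coverage problem remains: anyone not reachable at a number in the frame simply cannot appear in the sample.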
Another common set of problems lies with respondents themselves. Social desirability bias occurs when participants give answers they feel are more socially acceptable, even if those answers do not reflect their true beliefs. Krumpal provides an overview of the various forces that drive social desirability bias and the impact it can have on both survey results and the ways researchers interpret the data. Another prominent issue is panel conditioning, which occurs when a survey respondent is asked the same questions repeatedly over time. Respondents will often change their answers from one round to the next, revealing how fleeting survey responses can be.