Monday, October 10, 2016

Can You Really Trust Election Polls?

Around here at Everyday Einstein, when we talk about science experiments, we are usually discussing biology, chemistry, or physics experiments. But today I want to take a look at a different kind of science experiment, one that is receiving a lot of attention lately in light of the upcoming presidential election in the United States: the social science and statistical experiment of political polls.

How do you design a reliable poll? How many people do you need to question to predict the results of a national election? Can you trust poll results?

How Do Polls Work?

Large public opinion polls like those we turn to for predictions during an election year are typically conducted by specialized companies that often provide their results for a fee. Polling methods have not changed much in the past several decades: the preferred method of surveying participants is still by phone.

What has changed, however, is the response rate. According to the Pew Research Center, response rates in 1997, even after Caller ID had become quite popular in the US, were still as high as 36%. Thus, if 1,000 respondents were required for a poll, a typical number for state polls, at least 2,778 people had to be called. By 2012, however, the response rate had dropped to only ~9%, so the same poll would require more than 11,000 calls to be made. Lower response rates are due in part to the fact that calls from unknown numbers are easier to ignore, but they are also likely connected to growing concerns about privacy and a resulting hesitance to share personal information.
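To put numbers on that, here is a quick back-of-the-envelope sketch in Python using the Pew figures above (real polling operations would also have to account for bad numbers, busy signals, and call-backs):

```python
import math

def calls_needed(target_respondents: int, response_rate: float) -> int:
    """Calls a pollster must place to expect a target number of completions.
    Round up, since you can't place a fraction of a call."""
    return math.ceil(target_respondents / response_rate)

print(calls_needed(1000, 0.36))  # 1997-era response rate -> 2,778 calls
print(calls_needed(1000, 0.09))  # 2012-era response rate -> 11,112 calls
```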

In the end, the number of people polled is less important than the range of demographics covered by the people included in the poll. Participants are typically selected through a process called Random Digit Dialing: pollsters pick the first six digits of the phone numbers they will call, often targeting people who live within the same area, and then randomly generate the last four digits. The randomization allows for the inclusion of unlisted numbers and attempts to sample a cross section of the population that is representative of the whole.
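A minimal sketch of the idea (the prefix, exchange, and function name here are purely illustrative, not any pollster’s actual procedure):

```python
import random

def random_digit_dial(prefix: str, n: int) -> list[str]:
    """Generate n phone numbers sharing a fixed six-digit prefix
    (area code + exchange) with random last four digits, so that
    unlisted numbers are just as likely to be selected as listed ones."""
    return [f"{prefix}-{random.randint(0, 9999):04d}" for _ in range(n)]

# Sample five numbers from the fictional 555 exchange in area code 310
print(random_digit_dial("310-555", 5))
```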

So how well do polls typically do at covering their representation bases? According to the Harvard Business Review, the groups that are most commonly under-represented in polling are young people, Spanish speakers, Evangelicals, and African Americans—particularly African American men.

For example, more and more people, myself included, do not use a land line at all. While land lines can be dialed by computer, federal regulations require that cell phones be dialed by hand, making polls that reach cell phone users more expensive. Missing cell-phone-only users would not be a problem if they covered the same range of demographics as land line users. Unfortunately, that is not the case: people who have given up their land lines tend to skew young, toward the 18-30-year-old age group.

Another group entirely missed by phone polls: the ~7 million Americans who live overseas, including soldiers, teachers, missionaries, and students.

Online polls have started to gain traction but do not have the benefit of decades of trial and error to hone their effectiveness. Additionally, an estimated 16% of Americans don’t use the internet (while almost everyone has a phone either in their pocket or at least in their house). Online polls also tend to overrepresent men, particularly those who are unemployed.

See Also: How to Use Statistics to Understand Poll Results

Can You Trust Poll Results?

So creating a reliable poll first requires the sampling of a truly representative subset of the group whose opinion you are trying to measure. Like any science experiment, you must consider the potential bias in your chosen methods.

A well-known case study of sample bias is the Literary Digest poll leading up to the 1936 presidential election between Alf Landon and Franklin D. Roosevelt. Previously one of the most trusted and accurate polls, the Digest poll failed to consider the sample bias inherent in drawing its sample largely from telephone directories during the Great Depression. At the time, only the upper middle and upper classes could afford telephones, and those classes tended to skew Republican. Thus, while the statistical conclusions accurately represented the people the Digest polled, the people polled did not accurately represent the voting public. The Digest predicted Landon would receive 57% of the vote to Roosevelt’s 43% when, in fact, Roosevelt won with 62% of the vote.

When a certain bias cannot be avoided (for example, too few Spanish speakers responding to a poll conducted in English), the members of the missing group who do respond can be given more weight in the final tally.

This method of weighting, however, is controversial and can be used to push the results toward a particular outcome if not done carefully. If weighted too heavily, the opinions of just a few respondents could play a disproportionately important role in the poll’s outcome even though they may not actually represent the majority views of the demographic they were weighted to stand in for.

Pollsters also disagree about which demographics should be weighted. Weighting by gender or race can make sense (although, of course, neither all women nor all Latino people vote as a monolith), but what about political affiliation? Weighting by political party could make the derived poll results misrepresentative if, in a particular election, voters cross party lines more than usual.
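To make the mechanics concrete, here is a minimal post-stratification sketch in Python: each respondent is weighted by the ratio of their group’s share of the population to its share of the sample. All of the numbers are invented for illustration; real pollsters use census figures.

```python
# Minimal post-stratification sketch; group shares are invented.
population_share = {"18-30": 0.25, "31-64": 0.55, "65+": 0.20}
sample_share     = {"18-30": 0.10, "31-64": 0.60, "65+": 0.30}  # young under-sampled

weights = {g: population_share[g] / sample_share[g] for g in population_share}
# -> {'18-30': 2.5, '31-64': ~0.92, '65+': ~0.67}

# Each respondent: (age group, supports candidate A?)
respondents = [("18-30", True), ("18-30", True), ("31-64", True),
               ("31-64", False), ("65+", False)]

weighted_support = sum(weights[g] for g, supports in respondents if supports)
total_weight = sum(weights[g] for g, _ in respondents)
print(f"Weighted support for A: {weighted_support / total_weight:.1%}")  # ~78.9%
```

Note that each of the two 18-30 respondents carries a weight of 2.5, versus roughly 0.67 for each 65+ respondent: a handful of heavily weighted responses can swing the result, which is exactly the danger described above.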

Sometimes, however, even the most carefully chosen methods cannot account for biases in the respondents themselves. For example, after the first debate between Obama and Romney before the 2012 election, polls shifted to favor Romney, a shift that partly reflected the fact that Democrats weren’t interested in providing responses after what was perceived as a huge loss for Obama in the debate. Republicans, feeling victorious, were naturally more generous with their time.

Another important factor in designing a reliable polling experiment is methodological error. For example, are the questions clear? Even the most accurately representative sample of the population will not do any good if particular questions can be easily misinterpreted. Pollsters will also often ask questions to help predict whether the respondent will actually vote, questions like “how interested are you in the election?” or “did you vote in the last election?”

When considering which polls to trust, one should also look at the margin of error, which represents the statistical uncertainty that comes from sampling only part of the population. For example, even if the electorate is split exactly evenly between two candidates, a 1,000-person poll may by chance reach 490 supporters of one candidate and 510 of the other, suggesting a slight lead for one candidate when there really is none. Thus, if a particular poll suggests that a candidate is in the lead, but only by a percentage smaller than the margin of error, that lead cannot be claimed with any certainty.
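For a rough sense of the numbers, the textbook 95% margin of error for a polled proportion p from n respondents is 1.96 × sqrt(p(1−p)/n). A minimal sketch, assuming simple random sampling (real polls apply corrections for their survey design):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n respondents,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing a 51%-49% split:
print(f"+/- {margin_of_error(0.51, 1000):.1%}")  # about +/- 3.1%
```

Here the apparent 2-point lead is smaller than the roughly 3.1-point margin of error, so the poll cannot distinguish it from a tie.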

The time at which the poll is conducted also affects its reliability. Back in May, the New York Times examined how closely polls predicted, on average, the winner of a presidential election (in percentage points), going back to 1980, as a function of the number of days before the election at which the poll was conducted. Polls did not come within 5 percentage points of the actual outcome until roughly 80 days before the election. By roughly four weeks out (our current distance from the 2016 election in the US), poll predictions were within 3-4 points of the actual result, which is as close as they ever got.

Evidence from the same report also suggests that polls do not shift by more than a few percentage points in response to events like debates; thus, the debates matter only when the election is close. Party conventions typically give bigger boosts in the polls to the nominating party, and occasionally big news items will cause observable dips (like Mitt Romney’s 47% remark in the 2012 election).

Luckily, we can look to studies of the polls that covered the last presidential election to see which had the most reliable methods. The National Council on Public Polls, for example, provides a report on the 2012 pre-election polls to give you an idea of who got it right last time. Pew Research highlights its own record on predicting the 2012 popular vote from that report on its website.

Polls are important not solely for predicting election outcomes but also because they reflect which issues matter to the voting public. Candidates decide how much time and money to spend on particular issues or proposed policy changes based on how much voters say they care in their polled responses. Hopefully, polling technology can continue to keep up with our lifestyles and improve response rates so that polls remain a valuable resource.

Until next time, this is Sabrina Stierwalt with Everyday Einstein’s Quick and Dirty Tips for helping you make sense of science. You can become a fan of Everyday Einstein on Facebook or follow me on Twitter, where I’m @QDTeinstein. If you have a question that you’d like to see on a future episode, send me an email at everydayeinstein@quickanddirtytips.com.


