
Public Opinion in the United States

What Is Polling?

How Polling Works

Formal polls require carefully sampling a representative segment of the desired population to produce a statistically meaningful measure of public opinion on issues and public individuals. Each poll has a margin of error that suggests how accurately its results reflect the population as a whole.

Polls are the most common medium for collecting public opinion. In a poll, a specific group of people is surveyed to determine their feelings and beliefs about issues or public individuals. Polls produce detailed, correlated data that is much more useful to government officials and other interested parties than generalizations or the anecdotal information of constituent messages.

Until the 1930s, political polls were informal. A nonrandom segment of the population was surveyed, such as all of the people exiting a restaurant in the same hour or all subscribers to a popular magazine. Pollsters were not concerned with whether these groups represented a cross section of all society or just a narrow group. They simply wanted a sense of what people thought, and no controls were placed on the group sampled. The most famous example of a nonrandom poll was a presidential preference poll conducted by the magazine Literary Digest in 1936. The poll predicted that Republican Alf Landon would defeat incumbent Franklin Roosevelt by 57 percent to 43 percent. In fact, Roosevelt won a landslide victory, capturing just over 60 percent of the popular vote. Poor design that overrepresented middle-class and upper-class voters, who were more likely to oppose Roosevelt's policies, accounted for the poll's spectacular inaccuracy.

There are still informal polls like this. Quick-click Facebook polls are an example, as are e-mail surveys from newspapers or professional organizations. Often, TV networks will conduct an "instant poll," where online or phone results are reported immediately with little concern for how representative of the population the respondents are or even how many times they respond. The flaws of these unscientific methods may not matter for questions of popular interest that have no policy implications, but such polls are inadequate for measuring public opinion on policy issues. They are misleading and cannot be relied on as any sort of accurate measure of opinion.

Formal public-opinion polling began in the 1930s. Since then, polling has become a major industry with far-reaching importance. Polling companies employ statisticians, data analysts, and methodologists who design and conduct polls. These scientific polls usually have three elements in common: a population, a random sample of that population, and steps to make that sample representative of the population. The population is the people whose opinion is being measured. For example, a population might be defined not simply as registered voters but, more narrowly, as registered voters who live in North Dakota and have voted in the past two presidential elections. From that population, the polling organization takes a random sample, a group of people selected from a statistical population in which all members have an equal probability of being chosen. The random sample is then refined into a representative sample, a randomly chosen group of people, all with an equal probability of being chosen, who reflect the demographics of the population. Demographics refers to the proportions of various groupings in a population based on gender, race, ethnicity, age, education, and income. In the earlier example, a representative sample of North Dakota would be about 49 percent female, reflecting the demographics of the state, where women make up about 49 percent of the population.
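To make these ideas concrete, the short sketch below simulates the sampling step (the voter list, its size, and the single gender attribute are invented for illustration; real polling frames are far more detailed). Drawing a random sample in which every member of the population has an equal chance of selection tends to produce a sample whose demographics roughly mirror the population's.

```python
import random
from collections import Counter

# Hypothetical population of registered voters, each tagged with one
# demographic attribute (gender) purely for illustration.
population = [
    {"id": i, "gender": random.choice(["female", "male"])}
    for i in range(100_000)
]

# Random sample: every member has an equal probability of being chosen.
sample = random.sample(population, k=1000)

def gender_share(people):
    """Return each gender's share of the given group."""
    counts = Counter(person["gender"] for person in people)
    return {gender: round(counts[gender] / len(people), 3) for gender in counts}

# A representative sample should roughly mirror the population's demographics.
print("population:", gender_share(population))
print("sample:    ", gender_share(sample))
```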

If constructed scientifically, a random sample of several hundred people can fairly accurately reflect the larger population in its views on candidates and issues. A well-designed sample of just a thousand people can be representative of the whole U.S. population with about a 3 percent margin of error. Margin of error is a statistical measurement of the potential inaccuracy of a poll's findings. The smaller the margin of error, the more accurate the poll. Margins of error tend to be smaller when the random sample is larger, since more of the targeted population has had the opportunity to express an opinion. However, the larger the sample, the more expensive the poll is to conduct, a cost that polling companies keep in mind. A margin of error greater than 5 percent is too large and throws the entire poll into question. Differences larger than the margin of error of a well-constructed poll can be considered reliable. Differences that fall within the margin of error should be viewed more skeptically. That is, if voters favor Candidate A by 7 points over Candidate B in a poll with a 3 percent margin of error, Candidate A should be seen as ahead in the race. If the lead is only 2 points and the poll has a 3 percent margin of error, the race should be seen as a toss-up.
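One standard way to approximate this relationship for a simple random sample is the formula z * sqrt(p(1 - p) / n), using z = 1.96 for a 95 percent confidence level and the worst-case proportion p = 0.5. The sketch below (the sample sizes are arbitrary examples, not figures from the text) shows why a sample of about 1,000 respondents yields roughly a 3 percent margin of error and why much smaller samples push the margin past 5 percent.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95 percent margin of error for a simple random sample."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Arbitrary sample sizes chosen to show how the margin shrinks as samples grow.
for n in (100, 400, 1000, 2000):
    print(f"n = {n:>5}: margin of error ~ {margin_of_error(n) * 100:.1f}%")
# n =   100: margin of error ~ 9.8%
# n =   400: margin of error ~ 4.9%
# n =  1000: margin of error ~ 3.1%
# n =  2000: margin of error ~ 2.2%
```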

Issues with Polling

Polls must be carefully designed to be useful. The method used to contact respondents, the timing of the poll, the content and structure of questions, and the problem of inaccurate responses can all cloud the validity of poll results.

Most polling is now conducted by phone. To reach their desired population and achieve a random sample, pollsters find phone numbers using published phone directories, customer lists sold by companies, or contact data for members of a specific political party. They may also use software to randomly generate phone numbers with the same area codes. Interviewers ask specific questions early in the phone call to weed out any respondents who do not fall within the desired population. Most phone polls conducted by reputable polling organizations have a human interviewer making the calls and asking the questions. In some cases, the poll is a "robocall," with recorded questions that the respondent answers by pressing numbers on the keypad. Studies comparing the two approaches show that both can produce meaningful results, but polling experts consider live calls to be more reliable than robocalls.
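A minimal sketch of that random-generation step is shown below (the area code and number format are placeholders chosen for illustration; real random-digit-dialing systems also screen out unassigned exchanges and business lines).

```python
import random

def random_phone_number(area_codes):
    """Generate one random U.S.-style phone number within the given area codes."""
    area = random.choice(area_codes)
    exchange = random.randint(200, 999)  # three-digit exchanges do not start with 0 or 1
    line = random.randint(0, 9999)
    return f"({area}) {exchange}-{line:04d}"

# Illustrative draw of five numbers within a single area code.
print([random_phone_number(["701"]) for _ in range(5)])
```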

New technologies have made phone polling problematic, however. Since the early 21st century, the number of Americans with landlines has decreased. Cell phone numbers remain mostly unpublished, making it difficult to include people who rely on cell phones in the sample. In addition, software may block or end calls originating from phone numbers associated with polling companies, producing a nonresponse when those numbers are called. With the prevalence of cell phones, the response rate for polls has fallen to less than 10 percent of those contacted, far lower than the 30 percent or more that pollsters experienced in the past. Because callers must make more calls to get the same number of responses, phone surveys that include cell phone numbers are more expensive. This can force some organizations to abandon polling, leading to fewer polls. Fewer polls means fewer opportunities to identify outliers, results that differ significantly from the averages of all polls and may be misleading. Many polling companies have turned their attention to Internet polling, but the challenges of obtaining and vetting respondents online still need to be overcome. In addition, Internet polling faces issues such as the overrepresentation of young people and the inclusion of children below voting age, whose views are of less use to pollsters since they are not yet eligible to vote.

Pollsters regularly encounter other obstacles to accuracy. Timing is one of these issues. When a person is contacted can make a difference in responses. For example, voters exposed to a constant media barrage of campaign coverage in an election year may be less willing to respond to a poll on the candidates simply because of lack of patience.

Nonresponse can also cause problems for poll results. Sample members who are not available to respond to the survey questions, or who are available but refuse to take part, can affect the results. If a significant share of members from one demographic group falls into the nonresponse category, the results no longer reflect the representative sample that the pollsters had constructed. Pollsters attempt to remedy this problem by weighting the results, using proven statistical techniques to give greater weight to the members of the underrepresented group who did respond.
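A stripped-down sketch of that weighting step appears below (the demographic groups, shares, and responses are invented for the example; real weighting schemes adjust for many variables at once). Each respondent from the underrepresented group is counted slightly more heavily so that the weighted sample matches the population's known shares.

```python
# Hypothetical shares: the population is 49 percent female, but only 40 percent
# of respondents were female because of nonresponse.
population_share = {"female": 0.49, "male": 0.51}
respondent_share = {"female": 0.40, "male": 0.60}

# Weight for each group = population share / respondent share.
weights = {g: population_share[g] / respondent_share[g] for g in population_share}
print(weights)  # female responses count ~1.23 times, male responses ~0.85 times

# Applying the weights to invented candidate-preference responses.
responses = [("female", "A"), ("male", "B"), ("female", "A"), ("male", "A"), ("male", "B")]
support_a = sum(weights[group] for group, choice in responses if choice == "A")
total_weight = sum(weights[group] for group, _ in responses)
print(f"Weighted support for Candidate A: {support_a / total_weight:.0%}")
```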

The content of questions is another issue for accuracy. For a poll to be unbiased, the poll questions cannot be leading, or designed to produce a desired answer. The order in which questions are asked (and choices listed), the manner in which an interviewer replies to a response, and the follow-up questions to a response are all carefully planned beforehand to avoid introducing bias. The possible responses offered to the respondent can also make a difference to the results of a poll. Closed-ended responses provide limited answers such as yes or no, agree or disagree, or this candidate or that one. Open-ended responses allow respondents to answer in their own words. These are typically used for questions such as which issue most concerns the respondent.

False answers provide pollsters with a major challenge. When people agree to respond to a poll, there is no guarantee that their responses will be truthful. Studies show polls can be influenced by respondents providing false answers in an attempt to disguise their unfamiliarity with a topic. Results can also be skewed when a person answers falsely because he or she is too embarrassed to express what is perceived to be an unpopular opinion. These misleading responses—called non-attitudes—can be eliminated if the pollster asks screening questions beforehand. For example, before asking for the respondent's opinion, the interviewer might ask "Have you thought much about" an issue or candidate? If the response is no, the interviewer skips the opinion question.

Well-funded pollsters sometimes conduct what are called deliberative opinion polls in an attempt to minimize the possibility of false responses. In this type of polling, a random, representative sample of people is surveyed on an issue and may be paid a stipend. The respondents then spend a day or more together to learn about the issue in depth, engaging in small-group and moderated discussions. At the end of this time, they are polled again on the same issue. The difference between their initial and later responses tells the pollster how new information can shape the public's response. This can be valuable information for campaigns and policy makers.