2016 Postmortem
In reply to the discussion: Why Dem Primary Anomalies Must Be Thoroughly Investigated Before Choosing a Nominee
Fresh_Start (11,355 posts)

Exit polls are two-stage cluster surveys, in which polling places are sampled
within states in the first stage and voters in those precincts are systematically sampled in
the second stage. As to the first stage, the National Election Pool (NEP) sampled from 14
to 55 precincts per state, with the most in the battleground states of Florida (55),
Michigan, Missouri, and Pennsylvania (50 each), Ohio (49), and Iowa, Minnesota, and
Wisconsin (45 each). The accuracy of exit polls depends in part on whether the sampled
precincts turn out to be representative of the state as a whole. While sampling 50
precincts may sound good, it may not be enough to provide an accurate sample.
Sampling more precincts would diminish the sampling error, though staffing more
precincts with interviewers would add to the expense of exit polls. As to the second
stage, the within-precinct error rate in 2004 was higher in precincts with wider sampling
intervals (e.g., interviewing every tenth voter instead of every second voter),
suggesting that interviewers have more difficulty working in larger precincts.
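The two-stage design described above can be sketched in a few lines. This is a minimal illustration, not the NEP's actual procedure; the precinct data and sampling interval are hypothetical.

```python
import random

def two_stage_sample(precincts, n_precincts, interval):
    """Sketch of a two-stage cluster sample: sample whole precincts first,
    then systematically sample every `interval`-th voter within each."""
    random.seed(0)  # fixed seed just to make the illustration reproducible
    # Stage 1: sample of precincts (the NEP sampled 14 to 55 per state).
    chosen = random.sample(precincts, n_precincts)
    interviews = []
    for precinct in chosen:
        # Stage 2: systematic sample with a random starting voter,
        # then every `interval`-th voter after that.
        start = random.randrange(interval)
        interviews.extend(precinct[start::interval])
    return interviews

# Hypothetical data: 10 precincts of 500 voters each, labeled by index.
precincts = [[f"p{i}-v{j}" for j in range(500)] for i in range(10)]
sample = two_stage_sample(precincts, n_precincts=4, interval=10)
```

Note how the within-precinct workload scales with precinct size: at a fixed interval, a larger precinct means more interviews per interviewer, which is one reading of the within-precinct error pattern above.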
Coverage error occurs in exit polls when interviewers are kept too far away from
a polling place to interview voters effectively. Some states (such as Texas) require
pollsters to stay at least 100 feet from the polling place, far enough that many voters
park within that distance and never pass in front of the interviewer. As an
example of the legal issues involved, five days before the election the Ohio Secretary of
State ordered the enforcement of a 100-foot electioneering distance on interviewers. A
court overturned that order at 10:30 PM the night before the election, but many
interviewers and election officials did not know about this when the polls opened the next
morning. As a result, interviewing did not start successfully in some Ohio precincts until
mid-morning, and no interviewing occurred in one Ohio precinct. In the end, 4% (62) of
the 1480 sampled polling places did not provide exit data on Election Day (Edison
Media Research 2005).
Another serious coverage issue in exit polls is that interviewers sometimes do not
show up on Election Day because they have found better work. For example, only 84%
of sampled precincts were staffed with interviewers in 2000 (Konner 2003). In 2004 the
exit polls also trained replacement interviewers, and 62 replacements (out of 1480 total
polling places) had to be sent out on Election Day. Only 7 sampled precincts ended up
with no interviewer at all, 4 of them because of legal distance restrictions.
There are also time-of-day issues with exit polls: voters early in the day may differ
from those voting later. That seems to be one explanation of the mid-day Kerry leads in
Florida and Ohio released on the Internet, which would have given him the election but
which had vanished by the time the polls closed.
Absentee voting has always created a potential problem for exit polls, in that
some voters are not in their sampling frame. This is again coverage error in that the
sampling frame of people voting at polling places excludes some people in the voting
population of interest. This potential source of error became even more important
with the advent of early voting in some states and mail voting in others. "Convenience
voting" could be a serious biasing factor if one party mobilized its supporters better than
the other party for absentee, early, and mail voting. The NEP exit polls try to handle this
by conducting phone interviews before the election in the states with the most
convenience voting, though this raises tricky questions of how to weight the phone
interviews versus those at polling places.
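The weighting question at the end of that paragraph reduces, in its simplest form, to a mixture of two estimates. The sketch below is a hypothetical illustration of that idea, not the NEP's actual model; all shares and fractions are made up.

```python
def blended_estimate(exit_share, phone_share, convenience_fraction):
    """Weight the pre-election phone estimate (covering absentee, early,
    and mail voters) against the Election Day exit-poll estimate by the
    expected fraction of ballots cast by convenience methods."""
    return (convenience_fraction * phone_share
            + (1 - convenience_fraction) * exit_share)

# Hypothetical: 52% support among Election Day voters, 47% among
# convenience voters, with 30% of ballots cast early/absentee/by mail.
est = blended_estimate(exit_share=0.52, phone_share=0.47,
                       convenience_fraction=0.30)  # -> 0.505
```

The tricky part the text alludes to is that `convenience_fraction` must itself be estimated before the votes are counted, so any error in it propagates directly into the blended result.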
Unit Nonresponse Error
Unit nonresponse occurs when some people in the sample are not interviewed,
either because of noncontact or refusal. This becomes problematic to the extent that
nonresponse is correlated with vote intention, so that survey response becomes biased.
Exit polls in the U.S. experience considerable nonresponse: 33% refusals plus
10% misses in 1996 (Merkle & Edelman 2000), only a 51% response rate in 2000
(Konner 2003), and a completion rate of 53% in 2004.
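To make the arithmetic behind those figures concrete, here is one common way a completion rate is computed, with the 1996 figures cited above (33% refusals plus 10% misses implies 57 completes per 100 sampled voters). Definitions of response rates vary; this is just one simple version.

```python
def response_rate(completes, refusals, misses):
    """Completion rate: completed interviews over all sampled voters
    (completes + refusals + misses)."""
    return completes / (completes + refusals + misses)

# 1996 figures cited above, per 100 sampled voters:
rate_1996 = response_rate(completes=57, refusals=33, misses=10)  # -> 0.57
```

Nonresponse at these levels is not fatal by itself; as the text notes, it biases the poll only to the extent that refusals and misses correlate with vote choice.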
The 2000 Florida election demonstrated that exit polls encounter one further
problem: they measure how respondents believe they have voted and not whether or how
their votes were actually counted. Similarly, in 2004 respondents who cast provisional
ballots could answer how they voted but there was no way to tell whether their ballots
would be counted. In these instances, the survey question cannot precisely measure the
behavior that is of actual interest. Election officials make the decisions that really count.
Interviewer-Related Measurement Error
Some interviewer-related error is inevitable in interviewer-administered polls.
The main potential interviewer-related issues involve their selection and training. For
example, interviewer age is related to success and accuracy. Exit polling operations often
hire college students as interviewers based on recommendations from college faculty, but
older interviewers are more successful in obtaining interviews. Merkle and Edelman
(2000) show that older voters are less willing to participate in exit polls, but the
difference is less when older interviewers are used. Indeed, the within-precinct error in
2004 was greater in precincts with younger interviewers.
The important post-survey decision in exit polls involves how to predict the
election result based on the precinct data. As exit poll data come in on Election Day,
mathematical models based on how those precincts voted in previous years are used to
weight the data. (As a simple example of why this is necessary, say that the precincts
that report early are ones that traditionally vote more Democratic than the state as a
whole, so it is essential to weight the data to see if they are voting more or less
Democratic than in previous elections.) A persistent problem is that these mathematical
models are not very good, and the problems with the 2004 exit polls suggest that they
have not yet been improved enough.
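The parenthetical example above — projecting from early-reporting precincts by comparing them to their own past results — can be sketched as a simple swing model. This is a hypothetical toy version for illustration; the actual NEP models are far more elaborate, and all numbers below are invented.

```python
def project_statewide(reporting, prior_statewide_dem):
    """Project the statewide Democratic share from early-reporting
    precincts by averaging each precinct's swing (current exit-poll share
    minus its share in the previous election) and applying that swing to
    the previous statewide result.
    `reporting` maps precinct -> (current_dem_share, prior_dem_share)."""
    swings = [cur - prior for cur, prior in reporting.values()]
    avg_swing = sum(swings) / len(swings)
    return prior_statewide_dem + avg_swing

# Hypothetical early-reporting precincts that historically lean Democratic:
reporting = {"A": (0.62, 0.60), "B": (0.55, 0.58), "C": (0.70, 0.66)}
est = project_statewide(reporting, prior_statewide_dem=0.48)  # -> 0.49
```

The point of benchmarking against the prior election is exactly the one in the text: raw shares from Democratic-leaning early precincts would overstate the statewide Democratic vote, whereas the swing relative to past performance is (ideally) more transferable.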
Sources of Error in Exit Polls:
Sampling Error: Faulty choice of sample precincts
Coverage Error: Interviewers kept away from the polls
Unit Nonresponse: Systematic refusals
Measurement Error: Ballot is not counted as voter intended
Measurement Error: Selection of interviewers & poor training
Post-Survey Error: Weighting precincts wrong