Anybody watching MSNBC's wall-to-wall coverage of the midterm elections will have heard constant references to "statistical dead heats" and "within the margin of error."
Margins of error seem to be widely misunderstood by the public and the media. I am by no means an expert, so I did a bit of searching. Some of these items are a bit of a read, but they're worth it.
http://en.wikipedia.org/wiki/Margin_of_error

"The margin of error grew out of a well-intentioned need to compare the accuracy of different polls. However, its widespread use in high-stakes polling has degraded from comparing polls to comparing reported percentages, a use that is not supported by theory. A web search of news articles using the terms "statistical tie" or "statistical dead heat" returns many articles that use these terms to describe reported percentages that differ by less than a margin of error. These terms are misleading; if one observed percentage is greater than another, the true percentages in the entire population are more likely ordered in the same way than not. In addition, the margin of error as generally calculated is applicable to an *individual percentage* and not the difference between percentages. (The margin of error applicable directly to the "lead" is very approximately equal to twice the generally stated margin of error, but this is exactly the case only for a two-choice poll with a result of 50% for each choice). The margin of error is often interpreted as if the poll gives either no information (a difference within a margin of error) or perfect information (a difference larger than a margin of error) about the ranking of two percentages in the population, but this is a gross oversimplification. As the margin of error continues to be inappropriately applied, simpler alternatives (sample size) or more complex alternatives (standard error, probability of leading) may be warranted."
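The Wikipedia passage's point about the "lead" can be checked with a little arithmetic. Here's a sketch in Python (the sample size of 600 is my own assumption; it happens to produce roughly the ±4% figure the other sources discuss) computing the 95% margin of error for an individual percentage and for the lead in a two-choice, 50/50 race:

```python
from math import sqrt

n = 600   # assumed sample size; ~600 respondents gives the familiar ±4%
p = 0.5   # worst case, and the two-choice 50/50 scenario from the quote
z = 1.96  # z-score for a 95% confidence level

# Margin of error for one reported percentage
moe_individual = z * sqrt(p * (1 - p) / n)

# In a two-choice poll the lead is p1 - p2 = 2*p1 - 1, so its variance
# is four times the variance of p1 and its margin of error is doubled
moe_lead = 2 * moe_individual

print(f"individual: ±{moe_individual:.1%}, lead: ±{moe_lead:.1%}")
```

So a 3-point lead in a single poll with a ±4% margin of error sits well inside the roughly ±8% margin that applies to the lead itself.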
http://www.robertniles.com/stats/margin.shtml

"The margin of error. In this case, the CNN et al. poll had a four percent margin of error. That means that if you asked a question from this poll 100 times, 95 of those times the percentage of people giving a particular answer would be within 4 points of the percentage who gave that same answer in this poll."
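That "95 times out of 100" claim is easy to simulate. A minimal sketch (the true support level, sample size, and number of repeats are all assumptions picked for illustration):

```python
import random

random.seed(42)   # fixed seed so the run is reproducible
true_p = 0.50     # assumed true share of the answer in the population
n = 600           # assumed respondents per poll (roughly a ±4% MoE)
moe = 0.04
repeats = 1000    # simulated repetitions of the same poll

within = 0
for _ in range(repeats):
    # Draw n respondents and compute the sample percentage
    sample_p = sum(random.random() < true_p for _ in range(n)) / n
    if abs(sample_p - true_p) <= moe:
        within += 1

print(f"{within / repeats:.1%} of simulated polls fell within ±4 points")
```

Run it and the share of simulated polls landing within 4 points of the truth comes out close to 95%, as the quote describes.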
http://www.isixsigma.com/library/content/c040607a.asp

"A 95 percent level of confidence means that 5 percent of the surveys will be off the wall with numbers that don't make much sense. Therefore, if 100 surveys are conducted using the same customer service question, five of them will provide results that are somewhat wacky. Normally researchers do not worry about this 5 percent because they are not repeating the same question over and over so the odds are that they will obtain results among the 95 percent. However, if the same question is asked repeatedly such as a tracking study, then researchers should beware that unexpected numbers that seem way out of line may come up. For example, customers are asked the same question about customer service every week over a period of months, and 'very good' is selected each time by 50 percent, then 54 percent, 52 percent, 49 percent, 50 percent, and so on. If 20 percent surfaces in another period and a 48 percent follows in the next period, it is probably safe to assume the 20 percent is part of the "wacky" 5 percent, assuming proper methodology is followed."
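Using the weekly numbers from that iSixSigma example, one quick-and-dirty way to spot the "wacky" reading is to flag anything more than a margin of error away from the median of the series (a simple sketch; the ±4-point margin is borrowed from the CNN example above, not stated in the iSixSigma piece):

```python
from statistics import median

# Weekly "very good" percentages from the iSixSigma example
readings = [50, 54, 52, 49, 50, 20, 48]
moe = 4  # assumed ±4-point margin of error

center = median(readings)
wacky = [r for r in readings if abs(r - center) > moe]
print(f"median = {center}, flagged as wacky: {wacky}")
```

The 20-percent reading is the only value flagged; every other week sits within the margin of error of the median, which matches the article's conclusion.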
Here's how I read all of this: if nineteen polls show the Democrat with a three-point lead, each with a margin of error of +/- 4%, the odds are good that at least one further poll will be an outlier, showing the race either much closer (a tie) or the Democrat with a much bigger lead. That single outlier is likely inaccurate, not evidence of a shift. Only if a series of subsequent polls also shows a tie can we start talking about the race "tightening."
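The chance of at least one outlier among that many polls follows directly from the 95% confidence level, assuming (as a simplification) that the polls are independent:

```python
# Each poll has a 5% chance of landing outside its stated margin of
# error, so with 19 independent polls the chance that at least one
# of them misses is the complement of all 19 hitting:
n_polls = 19
p_at_least_one_outlier = 1 - 0.95 ** n_polls
print(f"{p_at_least_one_outlier:.0%}")  # roughly 62%
```

So with nineteen polls in the field, an outlier or two is expected even when nothing about the race has changed.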
What MSNBC and other organizations do is take one poll and treat it as representative of the whole race. That is the wrong approach. A better approach is to collect many polls and compare their results: the median or average of several recent polls gives a far better read on how the race is moving than any single poll does.
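Averaging also shrinks the effective margin of error, since the polls together behave roughly like one much larger sample. A sketch with made-up numbers (the five poll results and the per-poll sample size are hypothetical, and pooling this way assumes the polls used similar methods):

```python
from math import sqrt
from statistics import mean, median

# Hypothetical Democrat shares in five recent polls of ~600 voters each
polls = [0.48, 0.51, 0.50, 0.47, 0.49]
n_per_poll = 600

avg = mean(polls)
med = median(polls)

# 95% margin of error for one poll vs. a rough figure for the pooled
# sample, using the worst-case p = 0.5
moe_single = 1.96 * sqrt(0.25 / n_per_poll)
moe_pooled = 1.96 * sqrt(0.25 / (n_per_poll * len(polls)))

print(f"mean = {avg:.3f}, median = {med:.3f}")
print(f"single-poll MoE ≈ ±{moe_single:.1%}, pooled MoE ≈ ±{moe_pooled:.1%}")
```

Five 600-person polls pooled together cut the margin of error from about ±4% to under ±2%, which is why a poll average moves far less erratically than any individual poll.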
I hope that some of the statisticians we have here will correct me if I'm at all wrong.