The Current Electoral Vote Predictor 2004 is a classy web-based chart that maps the latest poll results onto an electoral map. It even has an RSS feed.
And then there's the interesting commentary, e.g.,
The featured poll today is a special poll of New Mexico commissioned by Libertarian party candidate Michael Badnarik and conducted by Scott Rasmussen on Aug. 4. The result is Kerry 50%, Bush 43%, Badnarik 5%, a surprisingly strong showing by him. If pollsters regularly asked Kerry/Bush/Badnarik instead of Kerry/Bush/Nader, Badnarik might do better. That might actually affect the election since Nader sucks votes almost entirely from Kerry whereas Badnarik is much more of an equal-opportunity gadfly drawing from both sides. Could the questions asked by the pollsters actually change the election results? Time for a Ph.D. thesis on Heisenberg's principle (“observing the system changes the system”) as applied to politics. No change in the electoral college as New Mexico was already leaning to Kerry, only now the lead is a bit more solid.
First, a great site, thanks for an excellent blawg.
However, I’d have to say it’s a bit too generous to pretend that the pollster’s influence on the result of the poll is as innocuous as the scientific observer’s. Heisenberg conceived of an observer attempting to gain a clear picture of the orbit of a subatomic particle and failing because of the necessary interaction between the observer and the observed. Maybe so, maybe not; maybe it throws a monkey wrench into any data you can gather on such a subject.
But how could data ever be trusted when it is created by an intentionally driven search for a particular result, arrived at through a system of questions carefully designed to produce that result, asked of a group of people carefully selected toward that same end, and sponsored from the outset by an observer with a stake in the outcome? Even Gallup, the mainstay of the industry, doesn’t poll Democrats alone, for example, but Democrats and “Democrat leaners,” and rarely reaches a sample of more than 500 people. At any given time you can find a swing between competing polls of 6% or more over and above the stated margin of error. And that discounts the choice of subject matter for the questions to begin with.
To me, believing in the polls is a lot like going to a talented cold-reading psychic. There’s a great degree of self-delusion involved…
Poll design is as much an art as a science. The biggest variable in the whole thing is who is going out to vote. It doesn’t matter that you have 55% of the likely voters if it rains and 10% of your supporters stay home. The “statistical margins of error” are supposed to capture the potential variation in asking this group of 500 random individuals versus asking that group of 500 random individuals. They say nothing about the design error of assuming that 80% of the white voters you poll will show up to vote if that figure is really 75% or 85%.
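To make that distinction concrete, here is a quick back-of-the-envelope sketch (the function name and the 95% confidence level are my choices, not anything from the polls discussed) of the pure sampling error for a 500-person poll. It covers only the random-sampling variation described above, not the turnout-model or question-design errors:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a confidence interval for a sample proportion.

    p: observed proportion (e.g. 0.50 for a candidate at 50%)
    n: sample size
    z: z-score for the confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 50% in a sample of 500 respondents:
moe = margin_of_error(0.50, 500)
print(f"sampling error: ±{moe * 100:.1f} percentage points")
```

With n = 500 this comes out to roughly ±4.4 points, which is why two honest polls of the same race can easily sit 6 points apart even before any design differences enter the picture.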
The most accurate polls are exit polls, since the variable of who is going to vote has been eliminated. Even there, Florida in ’00 is the counterexample, where widespread voting problems caused the counted vote to differ from the exit polls in a crucial way.
GO W!