Here is the link to the report, if you haven't seen it:
http://exit-poll.net/election-night/EvaluationJan192005.pdf

Below are my comments on their report. I am posting this in an effort to counter what I feel is a highly misleading report. I would appreciate constructive comments on this from anyone who wants to take the time to look at it. But please don't provide comments to the effect that they know more than I do about polling, or that they know more about their own research than I do. I know that. And I also think that they know that their dismissal of election fraud is unwarranted -- it is so transparent. So if you have any criticisms, please base them on the content of my comments rather than on who I am or am not.
Context of the Edison/Mitofsky exit polls
Before specifically commenting on the report, I will first attempt to provide a broad perspective on what I see as the significance of exit poll analysis to the controversy surrounding the 2004 election. I believe that this is appropriate because it is the discrepancy between the exit polls and the “official” 2004 vote count that is the main reason why so many thousands of U.S. citizens believe today that the “official” vote count is wrong, and that John Kerry would have won the election had the votes cast by voters been counted accurately.
There are three possible reasons for a discrepancy between the official vote count and exit polls:
1. Random error – or chance
2. Biased exit polls
3. Inaccurate election – i.e., the votes as intended by the voters were not counted accurately
The first reason, random error, can be assessed by statistical tests to determine the probability that a discrepancy of that magnitude (or greater) could have occurred by chance. These probabilities are known as “p values”. I have seen analyses of this type performed on available exit poll data by four different persons: myself, Steven Freeman, Jonathon Simon, and a person who frequently posts on the Democratic Underground under the screen name “Truth Is All”. In each case, somewhat different aspects of the data were considered and somewhat different data were utilized (because of the difficulties in obtaining “official” exit poll data), and consequently somewhat different results were obtained. However, the general methodology was similar in each case, and three of the four investigators obtained p values of one in tens of millions or billions. The fourth investigator, Jonathon Simon, obtained a p value of one in a little under a million. The main difference between his analysis and the others was that he considered only the national sample, which was much smaller than the combination of state samples, and thus resulted in a less dramatic p value.
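For readers unfamiliar with how such combined p values arise, here is a minimal sketch in Python of one standard pooling technique, Stouffer's method. The per-state z-scores below are hypothetical placeholders, not the actual exit poll data, and the analysts named above may each have used different methods:

```python
import math

def two_tailed_p(z):
    # Two-tailed p value for a standard-normal z-score, via the
    # complementary error function: p = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2.0))

def stouffer_combined_z(z_scores):
    # Stouffer's method: the sum of k independent standard-normal
    # z-scores, divided by sqrt(k), is again standard normal.
    return sum(z_scores) / math.sqrt(len(z_scores))

# Hypothetical per-state z-scores (NOT the real exit poll figures).
z_scores = [2.0, 2.0, 2.0, 2.0]
combined = stouffer_combined_z(z_scores)  # 4.0
p = two_tailed_p(combined)                # well under one in ten thousand
```

Even modest per-state deviations, pooled across 50 states, drive the combined p value down very quickly, which is why figures as extreme as one in tens of millions are possible.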
Relevance of the Edison/Mitofsky report to the p values (a consideration of reason # 1)
The reason why all four of the above noted investigators focused on p values is that p values are a reflection of the probability that reason # 1, random error, explains the discrepancy. In order to thoroughly consider the reason for the discrepancy, all three of the possible reasons need to be thoroughly considered (My report very briefly considered reason #s 2 and 3, and Freeman’s reports considered reason #s 2 and 3 in great detail).
The Edison/Mitofsky report did not state any p values. However, it did state t scores for all of the individual states, and t scores can be used to calculate p values using standard tables. I used the table on pages 21 and 22 of their report, under the “composite estimator” column, to determine whether or not the p values for the individual states were outside of the margin of error. The term “margin of error” is typically used in scientific studies to denote a p value of less than 0.05, which translates to a probability of one in twenty. Since the probability is one in twenty that the p value for any single comparison will fall outside of the margin of error on the basis of chance alone, in a sample of 50 states one would expect two or three to fall outside of the margin of error by chance.
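The t-score-to-p-value conversion described above can be sketched as follows. This is a minimal illustration, assuming (reasonably, given the large exit poll samples) that a t score can be treated as a standard-normal z-score:

```python
import math

def two_tailed_p_from_t(t):
    # Normal approximation: with large samples a t score behaves like
    # a z-score, and the two-tailed p value is erfc(|t| / sqrt(2)).
    return math.erfc(abs(t) / math.sqrt(2.0))

# A state falls "outside the margin of error" when p < 0.05,
# which corresponds to |t| greater than about 1.96.
p_at_cutoff = two_tailed_p_from_t(1.96)  # approximately 0.05

# Under chance alone, about 5% of states should cross that cutoff:
expected_by_chance = 50 * 0.05  # 2.5, i.e. two or three states
```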
The results reported in the Edison/Mitofsky report were similar to the results that I (and Truth Is All) obtained in our analyses, but not identical (Freeman and Simon did not, as far as I am aware, analyze all 50 states). My analysis showed 18 states outside of the margin of error, all of them more favorable to Kerry in the exit polls than in the “official” count (these included the swing states of OH, FL, PA, MN, and NH). The Edison/Mitofsky report showed 13 states outside of the margin of error, all of them deviating in the same direction as in my analysis. Their report did not flag six states that mine did (AL, AK, AZ, MA, NE, WI), and it included one state that mine didn't (MS). I believe that the differences between my analysis and theirs arise because the numbers I used were obtained prior to final “weighting” by Edison/Mitofsky. If I understand their report correctly, this weighting was done to correct for unequal sampling by gender.
I cannot calculate the exact probability (as I did in my analysis) that the combination of discrepancies in the state-by-state exit polls in the Edison/Mitofsky report could have reached that magnitude (or greater) by chance alone. I can say that the probability that all 13 states that were outside of the margin of error deviated in the same direction is one in 4096. And I will also say that I am certain that the total p value for the combination of all 50 states has to be one in millions, if not billions. But I cannot calculate it since I do not have the raw data.
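The 1-in-4096 figure is a simple sign-test calculation that can be verified directly. It assumes only that, absent fraud or bias, each out-of-margin state would be equally likely to deviate toward either candidate:

```python
# Probability that 13 independent fair-coin outcomes all land the same
# way (in either of the two directions): 2 * (1/2)**13 = 1/4096.
p_same_direction = 2 * 0.5 ** 13
# About 0.000244 -- far below the conventional 0.05 cutoff.
```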
Therefore, the Edison/Mitofsky report essentially rules out any reasonable likelihood that reason # 1 (random error) explains the exit poll discrepancy. Their report does not specifically say this, but in one sense they tacitly acknowledge it, in that the bulk of their report concentrates on reason # 2 (exit poll bias).
What does the Edison/Mitofsky report say about the likelihood of reason # 3 – an inaccurate election?
They say nothing about this, except to note briefly in the executive summary that “Exit polls do not support allegations of fraud …”, and then back this up by noting that there were no systematic differences in the exit poll discrepancies between precincts using optical scan vs. touch screen voting machines.
Their statement that the exit poll discrepancies do not support allegations of fraud is almost meaningless. In the first place, the very fact that there are large exit poll discrepancies itself points to the possibility of fraud, since an inaccurate count is one of the three possible reasons for a discrepancy between an exit poll and an official vote. The fact that their data rule out any reasonable possibility of chance as the explanation further supports the possibility of fraud (or some other reason for an inaccurate election), since that leaves only reasons # 2 and 3 as reasonable possibilities. The only evidence in support of their statement (that their poll does not support the possibility of fraud) is the finding that there is no significant difference in the exit poll discrepancies between precincts using optical scan vs. touch screen voting machines. Of course, if fraud occurred to a similar degree with both types of machines, one wouldn't expect to see a difference.
In fact, if one looks at their data, rather than at their seemingly off-the-cuff opinion on this issue, I submit that there is some support for fraud (or an inaccurate election for some other reason). First, with regard to the type of voting equipment, five types are presented in the report, and only one of them (paper ballots) is not susceptible to the type of fraud that would most likely be involved in machine counting of votes. Here is the within-precinct error (WPE) noted in the report:
Paper ballot: -2.2
Mechanical voting machine: -10.6
Touch screen: -7.1
Punch cards: -6.6
Optical scan: -6.1
WPE is what the report claims is the reason for the exit poll discrepancy (more about that later). Thus, the type of voting equipment that demonstrates the smallest amount of exit poll discrepancy is the only type of equipment that is not susceptible to machine manipulation of the vote count. The report does not specify a p value for this discrepancy, and this cannot be calculated without the raw data.
Also, another part of the report, which looks at the WPE in swing states vs. non-swing states, provides some support for the fraud hypothesis. It shows a WPE in the 11 most important swing states of -7.9, compared to -6.1 in the non-swing states. This is consistent with the fact that the exit poll discrepancy is outside of the margin of error in 5 of the 11 swing states, but in only 8 of the 49 other states (p<.05 by my calculation). If fraud were involved, one would expect the discrepancies to be larger in the swing states. Again, the Edison/Mitofsky report does not provide a p value for the WPE difference between the swing states and the non-swing states.
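The swing-state calculation above can be reproduced, at least approximately, with a one-sided hypergeometric tail, taking the counts in that paragraph at face value: 13 out-of-margin states in all, 5 of them falling among the 11 swing states out of the 60 state groups tallied there. This is a sketch of one plausible way to arrive at a p value just under .05; the author's exact method is not stated:

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    # One-sided tail P(X >= k) for a hypergeometric variable:
    # n groups drawn at random from N total, of which K are "successes".
    total = comb(N, n)
    return sum(
        comb(K, x) * comb(N - K, n - x)
        for x in range(k, min(K, n) + 1)
    ) / total

# 13 out-of-margin states overall; 5 of them among the 11 swing states.
p = hypergeom_tail(k=5, K=13, n=11, N=60)
# p comes out just under 0.05, consistent with the claim above.
```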
Reason # 2 – Exit poll bias
Most of the report deals with the “reasons for the exit poll bias”. I put this in quotes because I don’t believe that the report presented any evidence whatsoever to indicate that exit poll bias is a more likely alternative than reason # 3. Rather, this is merely assumed to be the case, apparently because a faulty election is never seriously considered as a possibility by the authors.
The report notes that there are two basic types of exit poll bias: faulty sampling of precincts and within-precinct error (WPE). The authors rule out faulty sampling of precincts and thus conclude that the poll bias was due entirely to WPE. Consequently, they say that the bias was due to over-sampling of Kerry voters; more specifically, they say that the response rate for Kerry voters was 56%, whereas the response rate for Bush voters was 50%. This calculation, and the more general claim that the WPE represents an over-sampling of Kerry voters, is not based on any data. Rather, it rests entirely on the assumption that the election results were accurate. I agree that if one assumes the election results were accurate, then an over-sampling of Kerry voters is the most likely explanation. But I strenuously disagree that it is valid to make that assumption. And I also object to the practice of presenting this information as fact, when it is based not merely on an assumption, but on an assumption that is at the very heart of the whole argument.
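To make the report's reasoning concrete, here is a sketch of the arithmetic behind the 56%/50% claim, under the report's own assumption that the official count is accurate. The official share figures are rough two-candidate numbers consistent with the 2004 national result; they are illustrative assumptions, not taken from the report itself:

```python
# Assumed official vote shares (roughly the 2004 national result) and
# the differential response rates asserted in the Edison/Mitofsky report.
bush_share, kerry_share = 0.507, 0.483
bush_response, kerry_response = 0.50, 0.56

# Exit poll responders are proportional to (vote share * response rate).
kerry_resp = kerry_share * kerry_response
bush_resp = bush_share * bush_response

# Two-candidate margins (Kerry minus Bush), in the poll and officially.
poll_margin = (kerry_resp - bush_resp) / (kerry_resp + bush_resp)
official_margin = (kerry_share - bush_share) / (kerry_share + bush_share)

discrepancy = poll_margin - official_margin
# Roughly 5.7 points: differential response of this size would account
# for the observed discrepancy -- IF the official count is assumed correct.
```

This shows why the 56%/50% figures are not independent evidence: they are the response rates one must back out in order to reconcile the poll with the official count, given the assumption that the count is right.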
Much of the subsequent discussion looks at various categories of precincts and interviewer characteristics, breaking down the WPE by those characteristics. The report then characterizes the groups with the most negative WPE as the ones that demonstrated the most bias towards Kerry in the exit polls. What is important to note is that for almost all of these comparisons, even the group with the least negative WPE has a WPE that is well over half of the average WPE of -6.5. Thus, for example, the report notes that greater distance of the interviewers from the voting place is associated with a more negative WPE (hence more bias). But even precincts where interviewers were inside the voting place demonstrated a WPE of -5.3. Therefore, no matter which variable is analyzed, all groups within the analysis retain the “bias”. The closest by far that any group came to avoiding this “bias” was in precincts that used paper ballots (WPE = -2.2).
Another interesting example of this type of analysis is that the most negative WPEs occurred with the most educated interviewers. For those with only a high school education or less, the WPE was -3.9, whereas for those with advanced degrees it was -7.9. The report does not reflect on the meaning of this, beyond the implication that the more educated the interviewer, the greater the “bias” towards Kerry. The authors don't stop to consider the alternate explanation – that those with less education might not have followed procedures as well, and that those interviews were therefore biased towards Bush. Again, as with all of these comparisons, no p values are given to assess the possibility that the various WPE differences could have occurred by chance.
Summary
In summary, the Edison/Mitofsky report indicates that there were large differences between their exit polls and the official results of the 2004 presidential election. The national exit poll indicated a three point victory for Kerry, whereas the official results indicated that he lost by 2.5% -- a difference of 5.5%.
The discrepancy was outside the margin of error nationally, as well as in 13 states, including the crucial large swing states of Ohio, Florida, and Pennsylvania.
There are three possible reasons for the discrepancy between the official results and the exit polls, in this election or any other: 1) Chance; 2) Exit poll bias; 3) Inaccurate election. The Edison/Mitofsky report rules out reason # 1, and it fails to consider reason # 3. It therefore concludes, erroneously, that reason # 2, and more specifically a within precinct over-sampling of Kerry voters, is the reason for the discrepancy.