Edited on Tue Jan-25-05 01:14 AM by rerdavies
The first principle of designing secure systems is that you need to identify the value of assets being protected. High-value assets require high-quality protection. In the case of an election, the assets are priceless, but as a starting point, there are clearly people to whom cracking an election is worth many billions of dollars. As these things go, the stakes don't get any higher.
The second principle of designing secure systems is that a system is only as good as its weakest link. If any one of the links in a security system is subvertible, then the system isn't secure. State-of-the-art security on a front door is useless if the back door is wide open.
Each and every step of an election must be auditable and verifiable. It's not good enough to just count votes; you have to be able to provide evidence that the count was conducted correctly, and that evidence must be publicly verifiable after an election takes place. Evidence that is only privately verifiable is only as good as the credentials of those who verify it. The only people I can think of whose credentials are good enough to weigh against the assets in question are all of the participating candidates in the election. Having gone this far, it seems pointless not to make that evidence freely available to the public at large as well.
If someone who conducts an election cannot prove that a particular phase of an election was conducted correctly, then you've found a weakest link.
The particular problem that needs to be addressed with electronic voting systems is how to establish that the votes actually cast correspond to the votes actually recorded. How can you prove that every button pushed on a voting machine was correctly recorded? Once votes have been recorded, there are a number of relatively straightforward ways to ensure that no records have been modified or misplaced, and that all recorded votes have been properly added up, even in computer-based systems.
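One straightforward way to protect records after they have been recorded is a hash chain: each entry's hash covers the previous entry's hash, so any later modification, deletion, or reordering is detectable by replaying the log. This is an illustrative sketch, not any vendor's actual scheme; the vote strings and genesis value are made up for the example.

```python
import hashlib

def chain_hash(prev_hash: str, record: str) -> str:
    # Each entry's hash covers the previous hash, so altering, deleting,
    # or reordering any earlier record changes every hash after it.
    return hashlib.sha256(f"{prev_hash}|{record}".encode()).hexdigest()

votes = ["Candidate A", "Candidate B", "Candidate A"]

h = "0" * 64                  # genesis value, published before polls open
for v in votes:
    h = chain_hash(h, v)
final_digest = h              # published at close of polls

# An auditor replays the log; a single tampered record breaks the chain.
tampered = ["Candidate A", "Candidate A", "Candidate A"]
h2 = "0" * 64
for v in tampered:
    h2 = chain_hash(h2, v)

print(final_digest == h2)     # False: the tampering is detectable
```

Note that this only protects votes *after* they are recorded; it says nothing about whether the recorded vote matches what the voter entered, which is exactly the gap discussed below.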
In a manual system, the issue of verifying that ballots cast correspond to ballots recorded is addressed by a system of observers and physical security on ballot boxes. Observers from all parties can watch the votes going into the ballot box and coming out of the ballot box. Tamper seals and secure storage more-or-less ensure that what goes in comes out. The proof in this case is provided by a chain of evidence consisting of witnesses, tamper seals, and procedures that ensure the physical security of sealed ballot boxes. Once the integrity of physical ballots can be established, auditing can be done by performing random or targeted recounts of physical ballots in selected precincts.
In computer based systems, you need an auditable mechanism that ensures that what voters selected on the screen is the same as what gets recorded.
Paper printouts do solve this problem. Voters can verify that what got printed corresponds to the buttons they pushed. Whether voters verify every printed ballot, or only some, visual verification of printed ballots by voters constitutes an audit system that ensures that votes entered correspond to votes printed. The physical printed ballots allow digitally recorded counts to be audited by re-counting the printed ballots and ensuring they match the machine counts.
In reality, this means that a number of precincts will be randomly selected, and counts will be verified for those precincts. The emphasis is on random selection. If anyone knows with certainty that any precinct is not subject to random selection, then the random audit is as good as useless. Attackers can fearlessly hack those precincts that aren't subject to random audit.
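The random selection itself can be made publicly verifiable by deriving it from a random value that is announced only after machine counts are published. The sketch below assumes a hypothetical precinct roster and a lottery draw as the public randomness source; the names and numbers are illustrative.

```python
import hashlib
import random

# Hypothetical precinct roster; in practice this would be the official
# list published before the election.
precincts = [f"precinct-{n:04d}" for n in range(1, 501)]

# Seed the selection from a publicly verifiable random value announced
# only AFTER machine counts are published (e.g. a lottery draw), so no
# one can know in advance which precincts will be audited.
public_random_value = "2005-01-25 state lottery draw: 04 17 23 31 42"
seed = int.from_bytes(hashlib.sha256(public_random_value.encode()).digest(), "big")

rng = random.Random(seed)
audit_sample = rng.sample(precincts, k=25)   # hand-count 5% of precincts

print(sorted(audit_sample))
```

Because the seed is derived from a published value, any observer can re-run the same computation and confirm that the precincts chosen for hand-counting were in fact the ones the public randomness dictated.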
It should be strongly noted that the Ohio manual recount procedure failed to achieve its purpose because precincts selected for manual recounting were not selected randomly. For all practical purposes, no manual check of machine counts was performed in Ohio.
Open source isn't required for auditability if printed ballots are produced and if precincts are randomly audited. You also need an observer/tamper seal/physical security procedure to ensure the integrity of printed ballots. If the physical integrity of the printed ballots isn't ensured, then the system becomes as weak as the purely digital path through which counted votes travel.
Some people have proposed cryptographic solutions as an alternative to printed ballots. While cryptographic solutions can protect votes from the point at which they are recorded to the point at which they are counted, they cannot provide protection for votes between the time that they were entered and the time they were recorded.
Note that the VoteHere solution, for example, provides a digital code that can be used to verify that the vote that was recorded was actually counted. The VoteHere verification code does not verify that the vote entered corresponds to the vote recorded. This is a significant shortcoming of the VoteHere solution.
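The gap can be made concrete with a toy receipt scheme. This is not VoteHere's actual protocol; it is a minimal sketch, assuming a receipt that commits to whatever the machine recorded. The receipt check passes even when a compromised machine flipped the vote before recording it.

```python
import hashlib

def make_receipt(vote: str, nonce: str) -> str:
    # The receipt commits to whatever the machine RECORDED,
    # not to what the voter actually entered.
    return hashlib.sha256(f"{vote}|{nonce}".encode()).hexdigest()[:12]

# A voter enters A; a compromised machine silently records B.
vote_entered = "Candidate A"
vote_recorded = "Candidate B"            # silent flip before recording
nonce = "f3a91c"                         # per-ballot nonce printed on the receipt
receipt = make_receipt(vote_recorded, nonce)

# Later, the published tally lists (vote, nonce) pairs; the voter checks:
vote_counted, counted_nonce = "Candidate B", "f3a91c"   # from published tally
receipt_checks_out = make_receipt(vote_counted, counted_nonce) == receipt

print(receipt_checks_out)                # True: recorded vote was counted...
print(vote_entered == vote_recorded)     # False: ...but the flip goes undetected
```

The receipt genuinely proves recorded-equals-counted, which is why such schemes are valuable for the back half of the pipeline. But no amount of cryptography applied after recording can prove entered-equals-recorded.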
In pure crypto solutions, open source is not a luxury; it is a necessity. Any attempt to verify that votes entered correspond to votes recorded requires that source code and object code be available for review, along with a chain of evidence that guarantees that the published source and object code correspond to the software that was actually running on the machines that translated votes entered into votes recorded.
Ask anyone who advocates a crypto solution this question: how do you prove that votes entered correspond to votes recorded? If they cannot provide a bulletproof answer then you have just identified the weakest link. I don't see any way to make such a claim without publishing source for public inspection.
Note that just publishing the source is not nearly enough. Either publicly available build systems or inspectable object code are also required, since compilers can be (and have been) hacked to inject evil code. If only source code is provided, then a proof must be provided that the compiled source produces the correct object code, byte-for-byte. Being able to build object code from the source code of record that matches published object code signatures, registered at NIST, would be acceptable; but fully replicable builds are difficult to achieve even in relatively controlled software development environments. For example, a number of common compilers and linkers will store random bytes in small fragments of uninitialized data areas in object files. These random bytes would make it impossible to match up source code with publicly registered object code signatures.
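The byte-for-byte requirement is unforgiving, which is exactly the point. The sketch below simulates two builds of the "same" source, where the second differs by a single stray padding byte; its signature no longer matches the one on file. The build bytes and the idea of a registered signature are illustrative stand-ins.

```python
import hashlib

# Simulate two builds of the "same" source.  A single differing byte --
# e.g. a compiler writing random padding into an uninitialized gap in
# the object file -- breaks the match entirely.
build_a = bytes(100)              # deterministic build output
build_b = bytes(99) + b"\x01"     # same build, with one stray padding byte

# The signature filed with the public registry (e.g. NIST) at certification.
registered_signature = hashlib.sha256(build_a).hexdigest()

print(hashlib.sha256(build_a).hexdigest() == registered_signature)  # True
print(hashlib.sha256(build_b).hexdigest() == registered_signature)  # False
```

A cryptographic hash offers no notion of "close enough": either the rebuilt binary matches the registered signature exactly, or the chain of evidence is broken and nothing downstream can be trusted.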
Even this isn't enough. Evil code could be injected by system components, or by other unrelated software, while the recording software is running. Beyond the DRE software, a chain of evidence is also required for all other software on the machine (in object code at a minimum, but ideally in source form as well).
These questions need to be asked, and there must be answers to them. How can you prove that a technician for Diebold hasn't swapped a system component, like USER32.DLL? How can you prove that some underpaid but clever junior programmer at Microsoft hasn't injected a hidden source-code hack into -- for example -- a browser DLL, or a COM interop DLL? Given the value of the assets being protected, these are distinct and real possibilities. And nothing less than proof is acceptable.
If you cannot provide provable answers to these questions, then these things will happen.
It has to be said that current electronic voting systems don't come remotely close to providing the kinds of evidence chains and audit mechanisms that are required to provide even rudimentary security in the election process.
Open source does provide key evidence in certain important situations; but it provides evidence for only small portions of the overall evidence chain. If the full chain of evidence isn't in place, then open source isn't much good.