Relating to this semi-dormant thread: the operational basis of any election is the tabulation and interpretation of the votes, as anyone who remembers the hanging chad incident in the Presidential Election of 2000 knows. When that function is fulfilled by computers, it's a necessity that those machines function properly, being neither flawed nor maliciously designed.
That said, this report out of Pennsylvania for local elections held in 2019 should be disheartening for advocates of these election machines:
A couple of minutes after polls closed in Easton, Pennsylvania on Election Day, the chairwoman of the county Republicans, Lee Snover, realized something had gone horribly wrong.
When vote totals began to come in for the Northampton County judge’s race, it was obvious there was a problem. The Democratic candidate, Abe Kassis, had only 164 votes out of 55,000 ballots cast across 100 precincts. In an area where voters can select a straight party ticket, that was a near “statistical impossibility”, according to the New York Times.
When paper backup ballots were recounted, they showed Kassis winning narrowly, 26,142 to 25,137, over his opponent, the Republican Victor Scomillio. Snover said at about 9:30PM on November 5, her “anxiety began to pick up”.
“I’m coming down there and you better let me in,” she told someone at the election office after eventually getting through to them on the phone.
Matthew Munsey, the chairman of the Northampton County Democrats who helped with the paper ballot recount, said: “People were questioning, and even I questioned, that if some of the numbers are wrong, how do we know that there aren’t mistakes with anything else?” [SGT Report]
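For what it’s worth, the “statistical impossibility” is easy to quantify from the numbers above. Here is a back-of-the-envelope sketch of my own (not the Times’s analysis), treating each two-candidate ballot as an independent draw with Kassis’s recounted vote share as the true probability – a crude simplification, but it shows the scale of the anomaly:

```python
import math

# Back-of-the-envelope model, not the Times's analysis: treat each
# two-candidate ballot as an independent Bernoulli draw with
# p = Kassis's recounted vote share.
kassis_paper, scomillio_paper = 26_142, 25_137
n = kassis_paper + scomillio_paper      # two-candidate ballots
p = kassis_paper / n                    # ~0.51 true support

mean = n * p
sd = math.sqrt(n * p * (1 - p))
z = (164 - mean) / sd                   # the machine-reported total

print(f"expected ~{mean:.0f} votes, sd ~{sd:.1f}, z = {z:.0f}")
```

Even under this crude independence assumption, the machine-reported 164 votes sits more than two hundred standard deviations below expectation. No amount of ordinary noise produces that.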
Munsey certainly has the right question to ask, doesn’t he? How do we know that a black-box[1] process is functioning properly when the correct output is not known beforehand? Put this question in your brain-box and shake violently: suppose your election machine can be influenced via a WiFi connection so that when it’s being tested, it works properly (that is, its tabulations reflect the data – the votes – that were input), but when it’s time to count the real vote, it doesn’t. You don’t know the inputs, so how do you know the output is right?
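To make that scenario concrete, here is a hypothetical sketch – all names and behavior invented, not any real vendor’s code – of why black-box acceptance testing cannot catch a machine that knows when it is being tested:

```python
# Hypothetical sketch of the failure mode described above: a tabulator
# that behaves honestly whenever it thinks it is being tested, and
# shaves votes otherwise. All names and behavior are invented.
class RiggedTabulator:
    def __init__(self, test_mode=False):
        self.test_mode = test_mode  # imagine this toggled remotely via WiFi

    def tabulate(self, ballots):
        totals = {}
        for choice in ballots:
            totals[choice] = totals.get(choice, 0) + 1
        if not self.test_mode and "A" in totals:
            # Silently drop most votes for one candidate.
            totals["A"] = totals["A"] // 100
        return totals

# Black-box acceptance test: feed known ballots, check known totals.
machine = RiggedTabulator(test_mode=True)
assert machine.tabulate(["A"] * 600 + ["B"] * 400) == {"A": 600, "B": 400}

# Election night: test_mode is off and the true inputs are unknown.
election_day = RiggedTabulator(test_mode=False)
print(election_day.tabulate(["A"] * 600 + ["B"] * 400))
```

The acceptance test passes honestly; the election-night totals are wrong; and nothing in the output alone distinguishes the two. That is exactly Munsey’s point.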
Well, there are ways, such as detecting patterns in the output. This has attracted the attention of statisticians such as Dr. Clarkson, whom I’ve noted before. But this requires deep analysis and access to the raw data, which some government entities will not permit – as Dr. Clarkson discovered in the case of Kansas.
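Here’s a toy example of such pattern detection – a simple correlation check of my own devising on simulated data, not Dr. Clarkson’s actual methodology. In clean returns, a candidate’s vote share should not trend systematically with precinct size; a strong correlation between the two is one pattern worth investigating:

```python
import math
import random

# Toy illustration of output-pattern analysis on simulated data; this
# is not Dr. Clarkson's actual methodology.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
sizes = [random.randint(200, 2000) for _ in range(100)]
# Simulated clean returns: ~51% share everywhere, plus noise.
clean = [0.51 + random.gauss(0, 0.02) for _ in sizes]
# Simulated tampered returns: share drifts upward in larger precincts.
rigged = [0.45 + 0.0001 * s + random.gauss(0, 0.02) for s in sizes]

print("clean  r =", round(pearson(sizes, clean), 2))
print("rigged r =", round(pearson(sizes, rigged), 2))
```

The clean simulation shows a correlation near zero; the tampered one shows a strong positive correlation. Real analysis is far subtler, of course – which is precisely why it needs the raw precinct-level data that was withheld in Kansas.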
Here’s the depressing side of this news:
Katina Granger, a spokeswoman for Election Systems & Software, the manufacturer of the machines said: “We also need to focus on the outcome, which is that voter-verified paper ballots provided fair, accurate and legal election results, as indicated by the county’s official results reporting and successful postelection risk-limiting audit. The election was legal and fair.”
No, we don’t need to focus on the outcome, Granger. We need to focus on what went wrong, and in particular on Munsey’s concern. This is not a sane remark; it’s the remark of someone trying not to get sued, not to lose market share, or not to see the entire market eliminated.
I reiterate the point I made years ago in this thread – Ban election machines. Count by hand. Mistakes may be made, there might even be cheating – but humans are additive, computers are multipliers. Which do you want cheating? That we use them at all shows we do not take elections seriously; that we get election machines from private vendors who refuse to allow the source code and machines to be proctored suggests that we’re actively addle-pated when it comes to understanding the basic philosophy of any governmental system.
The uplifting side of the news? Look at who detected and reported the apparently bad number – the chairperson of the local Republican Party on behalf of the Democratic candidate, who eventually won. It’s good to see that some people take very seriously their responsibilities as citizens and put their ideological concerns in the back seat, where they belong.
1 “Black-box” refers to a process whose implementation details are unavailable to the testers. “Black box” testing simply means feeding data into the process and checking that the output is what you expect, while “white box” testing is aware of the details of the implementation, presumably in order to test that those details are working as expected. That is, sometimes an improper implementation will still output proper results. The error may not be in the results but in the speed at which the results are calculated, which may not be apparent in the test scenario used by the testing personnel.
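The footnote’s last point can be illustrated with two hypothetical counters: both return correct totals, so a functional black-box test passes for each, but one of them re-scans the whole ballot list on every ballot:

```python
# Hypothetical sketch: two tabulators with identical outputs but very
# different implementations. Black-box testing compares outputs only.
def count_fast(ballots):
    totals = {}
    for b in ballots:
        totals[b] = totals.get(b, 0) + 1
    return totals

def count_slow(ballots):
    # Correct results, but re-scans the list for every ballot: O(n^2).
    totals = {}
    for i, b in enumerate(ballots):
        totals[b] = ballots[:i + 1].count(b)
    return totals

ballots = ["A", "B", "A", "B", "B"]
# The black-box test: identical, correct outputs.
assert count_fast(ballots) == count_slow(ballots) == {"A": 2, "B": 3}
# Only white-box review (or timing at realistic scale) would expose
# the quadratic implementation.
```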