
> Any programmer worth their salt knows that it's practically impossible to vet that what is executing is 1:1 the code that someone at some point in time audited somewhere, or that the code is worthy of trust from the commons in the first place.

What?

There are entire systems built around doing exactly that. Embedded, military, high-trust.

It's never state-of-the-art performance or mass deployed, because most people would rather optimize for performance and cost than for assurance, but it exists and is in production use.

You verify the hardware and the chain of custody from production to delivery, track every deployed unit, then lock the firmware and enforce restrictions on anything that executes after that.
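
A toy sketch of that last step, assuming Ed25519 signatures via Python's cryptography package; the key handling and firmware image here are invented for illustration:

    # Locked-firmware check: refuse to run anything that isn't signed
    # by the vendor key provisioned at manufacture.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.exceptions import InvalidSignature

    # At the factory: provision a key pair and sign the firmware image.
    vendor_key = Ed25519PrivateKey.generate()
    firmware = b"\x7fELF...payload..."          # stand-in for the real image
    signature = vendor_key.sign(firmware)
    burned_in_pubkey = vendor_key.public_key()  # stored in write-once ROM

    # At every boot: verify before handing over control.
    def boot(image: bytes, sig: bytes, pubkey: Ed25519PublicKey) -> None:
        try:
            pubkey.verify(sig, image)  # raises if image or signature changed
        except InvalidSignature:
            raise SystemExit("firmware rejected: signature check failed")
        print("signature OK, handing off to firmware")

    boot(firmware, signature, burned_in_pubkey)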

It's not easy or cheap (or foolproof, as anything can be exploited), but it's also not impossible. And it substantially hardens security.

And for simpler systems with lower performance requirements, it's completely achievable.

For example, voting machines don't need 16-core, hyperthreaded CPUs running multi-process operating systems.




> There are entire systems built around doing exactly that. Embedded, military, high-trust.

This is a completely different thing. In those systems, the organization doing the vetting is the one that protects itself through those systems; the good of the organization is presumed to be aligned with the good of the end-users by the threat model. That is, the threat model is purely external to the organization: we are protecting the army's computers from an enemy army or a rogue soldier. An end-user of such a system (say, a low-ranking soldier sitting in a tank that includes remote-controlled components) can't really trust that those things are used in their best interest. For all they know, the devices are listening to every conversation looking for signs of treason/incompetence; this is still perfectly allowed by an embedded, military, high-trust system. It's the generals that trust the system, as it were, not the individual soldiers.

In contrast, in an election, what we care about is not that the sitting president trusts the results; we care that every individual voter trusts them. And the individual voters are not the ones who have the power to control the way procurement, hiring, vetting, verification, and everything else is done. In fact, the relationship between the electorate and the election organizers is normally modeled as partly adversarial. The true test of a democracy is whether the populace can easily vote down the people currently in power, the ones organizing the election, when those people would like to maintain their power.

So yes, I agree that if I am building a system that I want to trust with voting, and I have enough money, I can build an electronic system that I can trust. And you can build one that you can trust. But I can't build one that you can trust, unless you already trust me.


> What?

There is no way to demonstrate that what is executing is the source code unless you're compiling at execution time from a local, vetted copy of that source. Is the guy who vetted the source code himself vetted? Who vets the vetter? Is the compiler actually compiling the source code? Is the compiler compiling as generally expected? What about bugs in the compiler? Is the source code even what it claims to be (binary blobs!)?
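
Even the obvious check, rebuilding from the vetted source and comparing hashes, assumes a bit-for-bit reproducible build and only pushes the trust down a layer. A sketch, with invented paths and compiler invocation:

    # Naive "did the deployed binary come from the vetted source?" check.
    # Assumes a reproducible build, which is a hard project in itself.
    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Rebuild locally from the audited tree (hypothetical paths).
    subprocess.run(["gcc", "-O2", "-o", "rebuilt", "vetted/tally.c"],
                   check=True)

    if sha256("rebuilt") != sha256("/opt/voting/tally"):
        raise SystemExit("deployed binary does not match vetted source")
    # Even a match only shows the binaries agree, not that gcc itself is
    # honest (Thompson's "Reflections on Trusting Trust").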

What about the hardware? Are there any black box enclaves? Bugs? Does it actually crunch as would be generally expected of a number cruncher? Does it even have the vetted software?

All this complexity, and anyone would be fully within their rights to say "I don't and won't trust this."

Meanwhile, someone counting paper ballots by hand can be immediately understood by anyone and everyone. It's simple and it's brutally effective. So what if the process takes time? Good stuff usually takes time, what's the rush? So what if the human counter(s) screw up? Human errors are inevitable, that's why you count multiple times to confirm the results can be repeated.

The most secure, most hardened, most certified ballot counting machine cannot compare to a simple human counting paper ballots in witness of anyone and everyone.


The questions you're asking make it seem like (a) you're not thinking about this very hard, (b) you're trying to reach the answer you've already decided on, or (c) you're not familiar with high-trust systems.

Still, in the interest of a conversation, some brief answers. Please ask in detail about any you're interested in (but realize I'm going to balance the time I spend answering with the time you spend researching and asking).

"Is the guy who vetted the source code vetted?" Yes, because he or she was assigned a key and signed the code with it.

"Who vets the vetter?" Whatever level of diligence you want, up to and including TS+SCI level.

"Is the compiler actually compiling the source code? Is the compiler compiling as generally expected? What about bugs in the compiler?" This is why you test. And it's pathological to believe that well-tested compilers, that have built trillions of lines of code, are going to only fail to successfully compile election code.

"Is the source code even what it claims (binary blobs!)?" See test and also dependency review and qualification.

"What about the hardware? Are there any black box enclaves?" Yes, by design, because that's how secure systems are built. And no, the enclaves aren't black boxes.

"Bugs? Does it actually crunch as would be generally expected of a number cruncher?" Testing and validation.

"Does it even have the vetted software?" Signed executables, enforced by trusted hardware.

> Meanwhile, someone counting paper ballots by hand can be immediately understood by anyone and everyone. It's simple and it's brutally effective

No, it's not. Because people are messy, error-prone entities, especially when it comes to doing a boring process 100+ times in a row.

You're not comparing against perfection: you're comparing against at best bored/distracted and at worst possibly-partisan humans.

Human counts rarely match exactly, because humans make mistakes. And then they make mistakes in the recounts intended to validate counts.

If you can't envision all the ways humans can fail, then I'd reflect on why nothing at your work ever fails because of people, and why everything always runs smoothly.


The point is that humans counting paper ballots by hand, in the witness of anyone and everyone, is and always will be more credible than any voting machine. You can certify the digital chain of trust as much as you want; it will not beat human hands counting paper ballots as anyone and everyone watches.

> you're not thinking about this very hard

Yes, because the commons will not think very hard about a complicated "solution" when a much simpler solution already exists.

> If you can't envision all the ways humans can fail,

Yes, humans fail. It's also not important. Any election worth its salt should count the ballots multiple times, using a variety of counters and witnesses, to demonstrate that the tally is repeatable.
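
(For what it's worth, the repeatability rule is simple enough to state in a few lines of code; the tallies here are invented:)

    # The procedure in miniature: independent teams count the same
    # ballots, and the result stands only if every tally agrees.
    from collections import Counter

    tallies = [
        Counter({"A": 412, "B": 388}),  # team 1
        Counter({"A": 412, "B": 388}),  # team 2
        Counter({"A": 413, "B": 387}),  # team 3: a human slipped
    ]
    if all(t == tallies[0] for t in tallies):
        print("result confirmed:", dict(tallies[0]))
    else:
        print("tallies disagree; recount with fresh counters")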

Again: Humans failing is not important.

What is important is the ability to verify immediately and simply how the vote is being tallied. Machines can and will fail (or more likely be corrupted) like humans, but we can immediately see when the human screws up whereas it's impossible to see when the machine screws up.

It's baffling that I'm having to argue this with FOSS people of all people; you should know better than anyone that vetting source code, binaries, and hardware is a fool's errand for something as important as counting votes.

Nothing beats the brutal simplicity of hand counting paper ballots while everyone watches.



