I spent a few years in Ireland and the Netherlands lobbying against insecure voting machines. I had so many conversations with politicians and civil servants where these effects were on full display. In each country I was representing a group that included the most eminent and experienced computer scientists in the country. Whenever we briefed someone for the first time, it would only take a minute or two to cover how, without a voter-verified paper audit trail, no-one knew how to build a system that provided anonymity, verifiability, and resistance to voter coercion and vote selling. But it just never clicked: so many of them refused to believe that we couldn't simply "nerd harder". And of course, there was no shortage of charlatans who would tell them that they could solve it.
In Ireland, we put enough pressure on the politicians for them to create a cross-body commission to investigate. Because anyone could make detailed submissions, and because the commission treated these submissions like the clerks of the Supreme Court treat amicus briefs, it was pretty effective. The commission ended up pausing, and ultimately abandoning, the rollout. Ever since I've learned to appreciate any avenues to "de-politicize" a controversy and get it to that kind of body.
I gotta admit, perhaps a bit naive of me, that the concept of "tech charlatans" didn't click for me until now. It's true, they are out there, and I see it now. I mean, I know there are phone scammers, but they've probably been around since before tech. I know there are hackers writing viruses and exploits, but those are oftentimes talented people doing bad things. However, this comment, and a couple of recent experiences, really drove "tech charlatans" home.
One experience was a trip to a crypto conference. Many booths were making unsubstantiated, impossible claims. If you tried to ask how, they couldn't answer. Like, someone would say they made transactions instantaneous, and when I asked how they solved the problems of unpredictable networks, they'd have no answer. It got pretty frustrating. I only had to dig through a few claims like these to be convinced that they were selling snake oil.
Another (less recent) experience was looking through a spreadsheet of government-approved innovation/research grants. I couldn't believe what I was reading. Many of the entries seemed to make absolutely no sense, purposefully using buzzwords to sound smart, but having no meaning when unpacked. Buzzword salads. These are funded projects. And to get more money in round 2, all the "innovators" had to do was show any activity, which is very easy to fake.
So yes, tech charlatans. I'm a bit on the old-school side of tech, and this gives me cognitive dissonance. I'm used to thinking of software devs/engineers as honest/creative/driven, but I guess this was always inevitable. Our field is very exploitable, because many people stake their livelihoods on tech while knowing very little about it.
Well, nobody likes being called a charlatan and you have to be careful with that accusation. Case in point: this article. I hate it because the argument he makes is correct and useful until the end, when he tries to claim that giving certain government agencies access to encrypted messengers is impossible without giving it to all of them "because maths". This is a good example of tech charlatanism and it's the sort of thing that will hurt our industry a lot in the long run. It's why lawmakers often end up not listening to us.
There is nothing that stops tech firms doing exactly what they're being asked to do. Every claim otherwise is obfuscation because tech firms don't want to do it, mostly because of their internal internationalist politics where they don't want to be forced to pick sides and tell some governments "sorry, we're Anglos who choose to give the US/UK governments special access that you don't get because we're better than you". It's an example of the first kind of no, not the third kind.
End-to-end encryption is a viper's nest of false claims like this. There are lots of ways to implement such policies, like this: for each message being encrypted, you encrypt it under a per-message key, which is then itself encrypted under the recipient's key and also a police (public) key. The servers forward the messages to the police so they can decrypt them. If the decryption fails too often (hacked client), then that user is denied access to the network. Yes, yes, I know that WhatsApp/Signal and friends use a more complex protocol; that description is a simplified textbook example, but the argument doesn't change.
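To make that textbook scheme concrete, here's a minimal sketch in Python with the `cryptography` package. Everything in it (the key names, the single well-known police key) is an illustrative assumption, not a description of any real deployment:

```python
# Minimal sketch of "textbook" escrow encryption: each message gets a fresh
# AES key, which is then wrapped under BOTH the recipient's and the police's
# public keys. Illustration only; real messengers use far richer protocols.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Hypothetical keys; in practice the police private key would sit in an HSM.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
police_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def encrypt_message(plaintext: bytes) -> dict:
    msg_key = AESGCM.generate_key(bit_length=256)      # per-message key
    nonce = os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(msg_key).encrypt(nonce, plaintext, None),
        # The same per-message key, wrapped twice:
        "for_recipient": recipient_key.public_key().encrypt(msg_key, OAEP),
        "for_police": police_key.public_key().encrypt(msg_key, OAEP),
    }

def decrypt(envelope: dict, private_key, field: str) -> bytes:
    msg_key = private_key.decrypt(envelope[field], OAEP)
    return AESGCM(msg_key).decrypt(envelope["nonce"], envelope["ciphertext"], None)

env = encrypt_message(b"meet at noon")
assert decrypt(env, recipient_key, "for_recipient") == b"meet at noon"
assert decrypt(env, police_key, "for_police") == b"meet at noon"
```

The server-side enforcement (dropping users whose police copy fails to decrypt) is a policy layered on top; none of it changes the underlying maths.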
Cryptography is a very flexible set of tools that can easily be used to achieve complex security goals, like empowering some parties whilst disempowering others. The resistance to doing this is legitimate and I even agree with it, but it's also political and not technological. When politicians push back and insist that their police should have access to WhatsApp, and get told it's impossible, well, they are not all stupid, and they correctly conclude they're being bullshitted. Indeed some of the MPs in the UK have computer science degrees.
Fact is, buzzword salads can be used to baffle people and get them to agree with you even if you're wrong. Technologists are especially tempted to abuse them when they want to say "no": to make a Type 1 No seem like a Type 3 No. Researchers do the same thing all the time; your complaints about grant funding are as old as the hills. Honest specialists speak clearly even when they might benefit from speaking unclearly.
What you suggest is technically possible but it misses the point: Whose job is it to guard the police's key? The chance that that key will be stolen and a breach will happen to all the police's messages is 100%, either because the police aren't good at IT security or because an insider will be bribed or (worst of all) a dictator takes over and decides to read all the messages without due process, and everybody who wrote a message critical of the dictator gets "disappeared."
By not sending a copy to the police, you can guarantee that those things won't happen.
It says it's from 2015, but the Catalyst 6500 product line is much, much older. And quite a few companies sold both mediation devices and encryption software.
All these problems exist with access to metadata, so police and service providers already need to address them today. I'm not saying that this isn't challenging (maybe it is, I don't work in this field), it's just that handling content (especially text messages) within the existing framework wouldn't be such a massive change, because of the existing infrastructure and procedures.
> All these problems exist with access to metadata, so police and service providers already need to address them today.
In countries with higher corruption levels than the US, data/metadata from "lawful" access is sold on a black market. I don't think the problem of keeping data safe is solved in a future-proof way even in first-world countries, and if encryption is compromised, sooner or later data or even keys will be sold/stolen/abused.
This is a Type 1 argument and not even a good one because the effort required isn't even placed on those being regulated.
Police can store keys inside HSMs. Tech firms can give training and develop software stacks to ensure secure workflows if they want to.
You can also use a targeted approach. Allow the police to request intercepts of specific phone numbers. The tech firm rotates the public key being served to the clients, replacing it with a police key. They add a special flag that says "don't put any sign this happened in the UI". Now police can intercept just that specific phone number.
There's lots of ways to do this at different points in the security/usability spectrum, it's all just an engineering problem.
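As a sketch of what that targeted approach could look like on the key-directory side (every name here is hypothetical, and a real directory is far more involved than a dict):

```python
# Hypothetical key-directory lookup implementing a targeted intercept: for
# flagged numbers, the directory serves a police-controlled public key and
# tells the client to suppress the key-change notice. Sketch only.
from dataclasses import dataclass

@dataclass
class DirectoryEntry:
    public_key: bytes                    # key the client should encrypt to
    suppress_change_notice: bool = False

user_keys: dict[str, bytes] = {}         # phone number -> real public key
intercept_orders: dict[str, bytes] = {}  # phone number -> police public key

def register(number: str, public_key: bytes) -> None:
    user_keys[number] = public_key

def add_intercept(number: str, police_public_key: bytes) -> None:
    """Invoked only under a (hypothetical) lawful-intercept order."""
    intercept_orders[number] = police_public_key

def lookup(number: str) -> DirectoryEntry:
    if number in intercept_orders:
        # Serve the police key and hide the "safety number changed" warning.
        return DirectoryEntry(intercept_orders[number],
                              suppress_change_notice=True)
    return DirectoryEntry(user_keys[number])
```

Whether clients *should* honour a suppress flag like that is exactly the political question; the mechanics themselves are trivial.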
Also, bear in mind that police and governments won't find this argument convincing because they can already access every other form of electronic communication and in reality mass leaks of private data have never been an issue.
> in reality mass leaks of private data have never been an issue.
This is just wildly untrue. The 2015 OPM hack (in which an APT stole security clearance records) was a mass spill of private data. The Shadow Brokers leak put the NSA's most sensitive tools on the public internet for anyone to download. And of course we might never hear about collected intelligence that had been purloined by an adversary in the course of an offensive cyber operation.
> And I know about the Shadow Brokers. Not personal information.
"Tools and methods" are in fact considered more sensitive than any personal information in the IC. The system you're proposing presumes the existence of at least one unhackable organization, and I regret to inform you that there is no such thing.
I think this sub-thread is getting a bit confused.
Firstly, nothing I've proposed demands unhackable organizations. That appears to be a requirement you invented. No security system presumes that! The so-called E2E encryption systems are hackable via several different organizations today: you could hack Meta client teams and insert code into the next releases, you could hack Google/Apple and tamper with the code as it gets shipped via the store, you could hack a phone OEM and insert a backdoor into the devices themselves.
Secondly, if you think government agencies would get hacked more often than tech firms then you may be right, but that's also irrelevant to any points being made here. The goal here isn't to design a perfect system, it's just to point out that the claim that no system can exist at all isn't true. Responding to that with "but your comment doesn't contain a full design doc for a system I personally judge as perfect" isn't going to get us far. Governments don't care if the system is perfect, right? They're OK with some leaks from hacked police departments.
Thirdly, I haven't even been making concrete proposals! Just pointing out how cryptography works, as examples. If I was hired to implement these requirements tomorrow I wouldn't do things directly in those ways; they're oversimplified, hence the references to textbooks.
Finally, the point about mass leaks was about "every other form of electronic communication" so responding with a staff database that was stolen by China and never leaked onto the internet isn't a great counter-example (not a leak from a police department, not private citizen communications). Police investigations are targeted anyway, so there's not much to leak. NSA isn't targeted but they apparently can keep their metadata databases secure enough, even if they've sometimes lost control of PowerPoints or malware caught in the wild.
It's not an engineering problem at all, which is why your solution ideas miss the mark completely.
It is fundamentally a "humans can be corrupted" problem.
Everything you describe is a backdoor of various forms. Backdoors have nothing that prevents abuse other than a triple pinky promise to only spy on the bad guys (as defined by whoever gets to decide who the bad guys are, which changes over time).
Whenever you have a system where some people have access to bypass it, you'll soon enough have some of those people corrupted into abusing it, ordered by higher-ups to abuse it, or threatened in various ways into abusing it. Since people are involved, there is a 100% chance this will happen.
As long as corruptible humans are in the loop (i.e. as long as humanity exists) the one and only way to avoid corrupted abuse of backdoors is to not have any backdoors.
Governments know that the police sometimes go rogue and don't care. It can be cleaned up when they do.
Tech firms also know that programmers can make mistakes whilst implementing complex cryptography, or even be corrupted, yet this is not itself an argument against implementing cryptography!
To repeat once again, we're not debating the ethics of E2E encryption here. Please don't waste time trying to convince me that E2E encryption is a good idea, because (if it was real and worked) I'd agree with you! But your argument is a Type 1 No by the scheme presented in the article. It is a "we'd really rather not" social argument.
The problem our industry has is typified by the article. Too many tech people argue that giving police access to WhatsApp encryption is actually a Type 3 "it can't be done for fundamental physical reasons so it doesn't matter who demands it" problem, but that isn't true. Remember that governments don't care about E2E encryption at all. They would much rather ban it as a source of unnecessary problems. If tech firms claim they can't turn it off completely, they're obviously lying, and that will just enrage governments. If tech firms claim they can't keep it whilst providing targeted access, governments don't care about that either. After all, email isn't end-to-end encrypted, nor is SMS, nor are normal phone calls, nor are letters. Why should WhatsApp be any different?
In reality it actually is possible to design a system that stops people with only server-side access to WhatsApp reading messages, whilst still breaking if the clients are compromised, and which allows police to have targeted levels of access without any risk of universal master key leaks. There are lots of ways to do that. You can use secure enclaves, zero knowledge proofs, or more exotic algorithms. But it's also not really relevant to the point I'm making, which is about the No Type being presented to governments. There was surely a better example that could have been chosen for a Type 3 No.
> Too many tech people argue that giving police access to WhatsApp encryption is actually a Type 3 "it can't be done for fundamental physical reasons so it doesn't matter who demands it" problem, but that isn't true.
This is moving the goalposts.
Giving police access to WhatsApp chats is trivially easy. But that's not the question.
The pro-surveillance people say "Give the good-guy police (whoever they are) access to everything and keep it secure from the Bad Guys Whom We Oppose (whoever they might be this week)". That one is indeed impossible due to the laws of information theory.
> without any risk of universal master key leaks
You're treating this as a technical problem when it is not; it is a humanity problem.
It's relatively easy to avoid e.g. master key leaks. That's an irrelevant implementation detail. What matters is that if some set of people have unfettered access to bypass all protections, then all the Bad Guys will also have that access soon enough because you can't keep people from getting corrupted/threatened. No matter how hard you wish, you can't. Humans are like that.
> That one is indeed impossible due to the laws of information theory ... it is a humanity problem
So, is this impossible due to human nature or the "laws of information theory"? Which is it? And if the latter what "law" are you thinking of, exactly? Can you name these laws?
Here's the problem: it's neither impossible mathematically nor practically. Remember the kerfuffle over the NSA's backdoored Dual_EC_DRBG algorithm? That was a very pure textbook example of what's possible: it would have allowed the NSA, and only the NSA, to decrypt TLS streams that used it. According to you that would have been physically impossible due to violating some sort of law, but it's not. Cryptographers know how to construct such systems; there are many ways.
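For intuition, here is a toy discrete-log analogue of that kind of trapdoor. To be clear, this is not the real Dual_EC_DRBG construction (which uses elliptic-curve points), just the same idea in simpler maths: anyone can run the generator, but only the holder of the secret d can recover the internal state from one output and predict everything that follows.

```python
# Toy kleptographic PRNG in the spirit of Dual_EC_DRBG, with modular
# exponentiation standing in for elliptic-curve points. Demo only.
import math
import secrets

p = 2**127 - 1        # a Mersenne prime, used as the group modulus
g = 3                 # public generator

# The designer picks a secret trapdoor d and publishes h = g^d mod p
# as an innocent-looking "nothing up my sleeve" constant.
while True:
    d = secrets.randbelow(p - 2) + 2
    if math.gcd(d, p - 1) == 1:   # ensure d is invertible mod p-1
        break
h = pow(g, d, p)

def prng(state: int):
    """Public generator: output h^state, then step the state to g^state."""
    while True:
        yield pow(h, state, p)
        state = pow(g, state, p)

# Trapdoor holder: output = h^s = g^(d*s), so output^(d^-1) = g^s,
# which is exactly the generator's NEXT internal state.
d_inv = pow(d, -1, p - 1)
def recover_next_state(output: int) -> int:
    return pow(output, d_inv, p)

stream = prng(secrets.randbelow(p))
s_next = recover_next_state(next(stream))
assert next(stream) == pow(h, s_next, p)   # all future output is predicted
```

As far as anyone knows, recovering the state without d is as hard as the discrete log, so to everyone else the stream looks like an ordinary PRNG.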
But the existence of such solutions doesn't even matter. Lawful intercept abilities have existed for ages, governments will happily accept a solution of just turning off E2E encryption entirely, and the possibility that governments or tech firms will get hacked doesn't bother them, because that's a transient problem that can be mitigated by throwing money at it.
They also don't care about corruption because the countries demanding this have low levels of it. Governments are the original experts in corruption, you might say, and have evolved a lot of different mechanisms to fight it. Finally, remember that all these systems are already hackable or corruptible. Pay off someone who works on the WhatsApp mobile app team and unless Meta's internal controls detect it, it's game over.
> So, is this impossible due to human nature or the "laws of information theory"? Which is it? And if the latter what "law" are you thinking of, exactly? Can you name these laws?
Yes. It's simple: if you send information in a recoverable way to another entity, they can recover it. If that entity involves humans, they can and will, with 100% certainty, be corrupted or threatened into giving up the information improperly.
> Remember the kerfuffle over the NSA's backdoored Dual_EC_DRBG algorithm?
Amusing that your example is a counterexample to your thesis. Exactly: backdoors never serve only their would-be masters. That's the impossible part.
If this is not blindingly obvious by now, I fail at being able to explain it better.
The backdoor in the Dual_EC_DRBG algorithm was detected, but it was never exploited by anyone else, because actually opening that back door required a key only the NSA had, and that key never leaked. Only the NSA could decrypt streams that used this PRNG. To everyone else they remained undecryptable.
So it's not a counter-example and your amusement is misplaced.
It’s naive to think that social engineering is so successful due to a lack of training, and that training will thwart it.
My old boss and I were recently laid off because my company (Livingston International) has been doing massive layoffs, because the company has been losing clients and money over repeated phishing attacks. We always shat on the assumed tech-illiterate person or persons responsible for us now having to sign in multiple times per DAY with MFA. Right before we left he mentioned this suspicious email he got the other day… and he clicked on it. It was one of those internal “we got you! Be careful next time!” emails. Come on man.
The tech companies themselves aren't able to read the messages, and no process exists to do so -- that's a necessary part of the security design and a core tenet of e2ee.
Sure, iCloud backups and photos are technically readable (they're not under e2ee) but policies and procedures exist to allow law enforcement to access them as well.
What you're asking for is that e2ee be eliminated in favor of a process-heavy solution. But at the end of the day, any human-dependent process can be broken by social engineering and a lack of vigilance.
Yes, an e2ee session can be broken by shipping a hacked client update to an endpoint. But that's much harder to accomplish, human- and process-wise, than obtaining the right key from the right person.
Having the key just isn't enough: you also need a copy of the traffic, and you need to break the transport encryption. The latter likely requires rewriting the data stream, and some cooperation from a vendor (maybe not the vendor whose service you are targeting, though). A one-time key leak without access to interception infrastructure is probably not that useful.
Key leaks are also relatively easy to prevent, because secret keys are not relevant to investigations, so they never have to leave the interception framework or be handed to a human operator. Sure, the infrastructure could be compromised, but that's already an extremely severe issue if you just have metadata in it.
I'm not asking for anything. I'm pointing out what's possible. As for the last sentence, we don't know what their internal procedures are so we don't know what's harder.
My personal definition of end-to-end encryption, which I believe is widely shared, limits decryption to the recipient of the message.
The protocol you have described is not end-to-end encryption by that definition. Sure, it's possible to do this kind of escrow encryption, but that's not what Meta and Signal are selling, and it's definitely not what I'm buying.
It's perfectly viable to create a messaging application with encryption that isn't e2e. Sure, it's a ridiculous thing that no democracy should demand, and will currently destroy any brand that attempts it, for good reason. It also won't be effective against any mildly persistent threat (not much of a change over our current implementation, really). But it's something perfectly viable to create.
If the message is being decrypted before it reaches the target recipient -- i.e. the second "end" in "end-to-end encryption" -- then it is by definition not end-to-end encrypted.
The idea is that the client is doubly encrypting the message for both the target recipient and the government, and that if the client doesn't do that it's banned from the system. So it is still end-to-end encrypted on the server and is not being decrypted by anyone other than the recipients; they're just forcing you to send every message to the government as well.
To be clear, I think this is a horrible idea, but it’s not technically impossible.
That definition is insufficient to capture a secure system. It's not enough for a system to do this. You have to actually know / be able to prove it's doing this.
It's worth repeating this because tech firms have made the definition so confused, but encryption was developed to let you use a trusted device to communicate over an untrusted medium (radio). If your trust in the communications medium is the same as your level of trust in the device, which for so-called "E2E" messengers it is, then the whole system doesn't make any sense.
What Meta/Signal sell is kind of a smokescreen because they control both the clients and the medium and the key directory too, so nothing is really limited. They can update the logic at any moment to disable the encryption for you, the person you're talking to, or everyone, and nobody would ever know. They can also update the client to upload your private key if you're being specifically targeted, or use a weak RNG or suppress a key rotation notification or any one of a million other things. In fact, they might have already done that without anyone noticing. I pointed out in other posts that they already undermined one of the most basic properties of a modern cryptographic system (that the adversary can't tell if you're sending the same message twice) and they did so for typical government-type reasons of controlling rumors and misinformation, as they see it.
For E2E messengers to work conceptually they'd need to allow arbitrary third party clients, so you could choose to trust that client and then use the WhatsApp/Signal networks even though you don't trust them. Or at the very least, they'd need a very sophisticated and transparent auditing programme. They won't do either of those things.
If a company has the means to decrypt a particular user's data, they have the ability to decrypt all users' data. But the argument is not about that, it's about privacy, and how we have seen that exceptions to privacy have always led to a slippery slope where they get used for more purposes than originally intended.
Btw, end-to-end encryption, by its very definition, means that only the sender and receiver can decrypt it. Your scheme is basically saying that the police should also be a receiver of all messages...
This is cryptography 101. Asymmetric crypto lets you encrypt a message using a public key without having the private key to decrypt it.
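For anyone following along, a minimal illustration of that asymmetry (Python's `cryptography` package, throwaway keys):

```python
# Encrypt with a public key; only the private-key holder can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()    # safe to embed in a client app

ct = public_key.encrypt(b"hello", oaep)  # anyone holding the public key
pt = private_key.decrypt(ct, oaep)       # only the private-key holder
assert pt == b"hello"
```

So embedding a police *public* key in an app hands out no decryption capability at all.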
Remember that the encryption is being done client side by apps these networks control. E2E is therefore sort of fake to begin with because WhatsApp is not only the servers but also the client. You can't mix and match, so you have to encrypt messages using software provided to you by the "adversary". E2E encryption is therefore more of a tool to control bad insiders and negotiate with governments than encryption as conventionally understood.
Also remember that tech firms run the public key directory. Almost nobody verifies public keys, and even if they did, they're doing so with apps controlled by the tech firms, so you can't know the verification is done properly anyway. And the keys can change at any moment, with your only way to know it's happened being UI controlled by the tech firms.
Still, even if clients and servers were separate, nothing stops clients from encrypting messages using a well-known government public key and attaching that along with the e2e encrypted version.
The point the parent was making is that if Apple decided tomorrow they wanted to implement a backdoor for law enforcement there's no way we'd know until the evidence starts to show up in court cases (or someone at Apple/LEO leaks the knowledge). The system is a closed loop and proprietary. We're taking their word for it.
That does not change anything; asymmetric or symmetric, the same principles apply. E2E means that only the recipients have the ability to decrypt the messages, so far we seem to agree. As I said, and you verified, for your scheme to work, messages would have to be encrypted with a known government PK, but that is not privacy. Now the government has the ability to decrypt everything, and can store messages indefinitely and use them for whatever they decide now and in the future. That is a surveillance state by definition.
Not the government's problem to solve, and tech firms can easily solve that by changing how their encryption works (e.g. using secure enclaves, or remote attestation).
> If a company has the means to decrypt a particular user's data, they have the ability to decrypt all users' data.
The messaging company can embed the police's encryption key in the app but not have possession of the corresponding decryption key.
> exceptions to privacy have always led to a slippery slope
That's a reasonable argument. But to the GP's point, that's not a technical argument. It's just another argument that the policy is bad for normal "bad policy" reasons. It has nothing to do with the math.
> The messaging company can embed the police's encryption key in the app but not have possession of the corresponding decryption key.
Once three people have access to a secret, it isn’t secret anymore. Once hundreds of thousands of police officers have access to the private key, it will leak, and everyone will be able to read these messages.
> it will leak, and everyone will be able to read these messages.
Again, do you see how this is not a technical argument? It might be a good argument. But it's not an argument about the math or the computer science. "We can't trust the police" is a social argument, not a technical argument. A math or CS degree will not help you understand this argument.
Anyway, why would the decryption key be in the hands of "hundreds of thousands of police officers"? Especially when the decryption key itself is useless without access to the encrypted messages themselves. If this were implemented, it's much more likely that the police would build themselves a web portal or something through which they could access people's WhatsApp messaging logs. The crypto could all be handled on the data portal backend.
A much stronger argument against this sort of thing is the governmental slippery slope argument. If the UK police gains capabilities like this, you bet every other country will (reasonably) demand similar access. Apple / Meta would have to decide which police / security departments to work with, and that's a very complex problem. Who do you trust? Hungary? Bulgaria? Russia? Iran? Egypt? China? Brazil? Where, exactly, is the line? And should access be revoked after a coup, like in Niger?
It's much easier to just refuse all governmental cooperation. It protects your brand. And it makes it much simpler to justify refusing access to police departments you don't trust.
"This security scheme wouldn't work because of these social factors" _is_ a technical argument. Security is very specifically about making sure the right people have access to a resource and the wrong people don't. Social aspects are inherent in this. Therefore, in the context of security, social arguments are technical arguments.
Arguing that the myriad local police departments of the United States in particular do not have the security posture required to keep access to a data portal secure is a technical argument against government-backdoor encryption.
> For the last 25 years, engineers have said ‘we can make it secure, or we can let law enforcement have access, but that means the Chinese can get in too” and politicians reply “no, make secure but not for people we like”.
Insecure police departments will inevitably leak the backdoor keys. Under our current understanding of how encryption works, it's not possible to limit who can use a decryption key based on who they are rather than mere possession of the key. If you assume that the police will never leak keys then sure, it's easy. But arguing about whether or not social factors like police department computer security are good enough to safely store keys is a technical argument about this technical problem.
It's quite depressing how many people here don't seem able to separate "I don't believe police can be trusted with this power" from "it is mathematically impossible to give it to them".
This is a nice clear example of how experts talk themselves into lying to the public for the greater good, as we saw so often in the past :(
> Btw, end-to-end encryption, by its very definition, means that only the sender and receiver can decrypt it. Your scheme is basically saying that the police should also be a receiver of all messages...
And serverless literally means "without servers", and yet...
Point being, scope is a free variable. "E2EE" that's managed by a central server is already stretching it, yet people accept it. They'll mostly accept excluding law enforcement from the scope of eavesdroppers "E2EE" protects you from, too.
Similar story with people claiming early victory on online age verification. A common claim is "web age verification can't work unless you're happy giving every porn site your name and credit card". Clearly not true. Federated authentication is very old tech, and the same techniques that allow you to protect your identity with an Apple sign-in can also be used to allow sites to verify that the user is an adult with some government account, but nothing more. I agree that it would likely end up expensive and marred by bureaucracy like most government IT projects, but at a technical level it's sound.
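A sketch of how that could work with bog-standard token-based federation, here using the PyJWT library (the issuer, audience, and claim names are all made up for illustration): the government identity provider signs a short-lived token asserting only "over 18", and the site verifies the signature without ever learning who the user is.

```python
# Hypothetical age-attestation token: the government IdP signs a claim
# that carries no identity, and the site just verifies the signature.
import time
import jwt  # PyJWT
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

idp_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
idp_public_pem = idp_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo)

def issue_age_token(audience: str) -> str:
    """Issued after the IdP authenticates the citizen; no name included."""
    claims = {"iss": "gov-id.example", "aud": audience,
              "age_over_18": True, "exp": int(time.time()) + 300}
    return jwt.encode(claims, idp_key, algorithm="RS256")

def site_accepts(token: str) -> bool:
    claims = jwt.decode(token, idp_public_pem, algorithms=["RS256"],
                        audience="somesite.example", issuer="gov-id.example")
    return bool(claims.get("age_over_18"))

assert site_accepts(issue_age_token("somesite.example"))
```

The audience binding stops one site replaying your token to another; what the IdP itself learns about which sites request tokens is the remaining policy question, not a technical blocker.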
I agree that some claims about e2e are misleading. That said, you could interpret "mathematically impossible" very charitably as risk assessment math. It's mathematically impossible to improve investigability while also improving resistance to spying by adversaries. Their relationship must be inverse. I agree that most people would interpret "math" as cryptography, and that it's better to make a clearer distinction between cryptography and risk assessment maths.
However this goes both ways. People demanding a solution think that you could make something investigable while keeping it completely airtight to adversaries or abuse. There is no escaping the fact that this is mathematically impossible in terms of how risk works. You can compromise security in favor of investigability, or you can improve security at the cost of investigability. And it's also important for the lawmakers to understand that each compromise is not gradual. It's drastic. If you went from 1 party having a key, to 2 parties, you've probably doubled vulnerability surface. If it's 3 parties, and one of those parties is an organization with lots of employees, you've probably exponentially increased vulnerability surface by orders of magnitude. This math does apply here.
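To put toy numbers on that, here's the standard back-of-envelope model, assuming (purely for illustration) that each key-holding party is independently compromised with probability p per year:

```python
# Toy risk model: the chance that at least one of n independent key
# holders is compromised is P = 1 - (1 - p)^n. Numbers illustrative only.
def p_any_compromise(p_each: float, n_parties: int) -> float:
    return 1 - (1 - p_each) ** n_parties

for n in (1, 2, 3, 50, 1000):   # 1000 ~ one org with many employees
    print(n, round(p_any_compromise(0.01, n), 3))
# prints: 1 0.01 / 2 0.02 / 3 0.03 / 50 0.395 / 1000 1.0 (to 3 d.p.)
```

At small n the risk grows roughly linearly, but hand access to an organization of thousands and "virtually certain" is the only honest summary.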
Your proposed solution is an insecure system that can and almost certainly will be hacked. That’s the point. You could make insecure encryption pretty easily. What you can’t do is make something that is secure and yet also has a key that gets handed out all over the place. In the last decade alone, there have been all sorts of examples of exactly these kind of security keys being leaked.
There is no key that gets handed out all over the place. Numerous people are trying to explain that on this thread. Even a purely textbook cryptography setup would involve the private keys being generated inside an HSM and never leaving it. Only the public key would be widely distributed. A public key lets you encrypt a message but not decrypt it.
The hard part here isn't key leaks. System-critical keys are commonplace in our society and virtually never leak, exactly because they don't tend to leave dedicated hardware. For example, e-Passports are signed with long-term government keys, and they don't leak all the time. The US DoD runs a large-scale private PKI, no problem. Our society has even got pretty good at physically giving people private keys in such a way that they still can't access them: credit cards, SIM cards, games consoles. The hard part is the workflows around them: ensuring the HSM only decrypts messages for authorized users, and so on.
Even if you assume HSMs are constantly getting ransacked, governments don't care. They don't even necessarily want to have their own key management to deal with at all. A web portal that employees log in to, type in a phone number and then see the logs is perfect for them. Make it be dedicated hardware supplied by Facebook itself if you want, with login systems as secure as they use for their own employees. Governments just do not care about these details. Type 1 Nos, to use your lingo.
The hard part of such a system is defining your precise security goals and then implementing it in ways such that all the goals are met simultaneously. So-called "E2E encryption" isn't really end-to-end, we all know that, so there's lots of flex to define systems that meet the same goals in different ways, especially if you're willing to roll with good-enough solutions, e.g. assuming a trusted client (which e2e messengers already do for things like their forwarding counters).
If you introduce a key that can decrypt all messages, what you have is not end-to-end encryption. Then you might as well just not do end-to-end, since the service provider can read all messages using the key they gave the police anyway.
That "nerd harder" thing is something I keep coming across in the professional world and it is something of a paradox. It comes from someone who knows you are intelligent and more knowledgeable in a given area than they are, and who wants you to solve their problem, but they are unwilling to accept that your intelligence/knowledge extends to knowing whether or not something can reasonably be done ("reasonably" here excluding things which are technically possible, such as lifting Rhode Island into some kind of Earth orbit, but aren't really feasible or practical).
One guy I worked for had a bad habit of starting unicorn hunts and a lot of "this looks easy from fifty thousand feet" foolishness with the phrase, "I know you're real smart but ..." and whatever followed was more a statement of how convenient it would be if something were true, rather than if it were true, possible, and so on.
I had a boss who wanted to learn how to decrypt another app's data on a phone in order to access sensitive information. I told him it wouldn't work, since it would defeat the purpose of encrypting that data if apps we don't own could read it. What did he do?
He installed Android Studio on an i3 laptop and tried to use the emulator with some random tutorial for some MITM app that claimed it could do the job. Then he linked me some other tools that explicitly said they couldn't do this outside of device emulation or rooting the device.
The thing about voting machines is that they can work, as long as they also provide a paper trail that determines the actual result of an election or referendum.
This means initial results will be available quickly, followed by actual results a day or two later. This takes the pressure off the counting process, which should help prevent miscounts. They solve the real problem of unintentionally spoiled ballots. So I don't think the concept of voting machines is entirely unworkable, as long as we keep the paper trail authoritative.
I have had to explain the obvious flaws in electronic voting to various family members ("why can't we just vote online with our government login" being the most common one). When blockchain bullshit started appearing in public media, people started pretending like they had finally found a problem that blockchains solve, only to be quickly shut down again.
I don't think people are aware of how many layers of protections the voting system has and how well thought-out it is. Every year naive politicians try to call for modernisation, and every year they find out that paper voting is actually the best option we have been able to come up with.
The ones I saw seemed fine? You fill out a paper ballot, and you submit it to the worker at the end who scans it and drops the ballot in the box in one motion.
You have a paper trail, it's anonymous, and it's pretty easy to understand.
I guess if you modified the firmware on the scanner to disclose the vote counters continuously, and the worker at the end knew me by sight (the worker I showed my ID to is on the other end of the room), you could find my vote. But you could also hide a camera in the booth, which is easier.
You can see it this way: if you can time people voting, you can hide the camera anywhere in the voting room. Checking the booth is always going to be easier than checking the room.
There is also the issue of voters selling their vote. It's pretty easy for them to wear a tracker that tells you when they were in the booth. On the other hand, with paper ballots, the buyer has no way to check the vote, since marked ballots are void.
The first problem is a non-issue. You don't need the voting machine to do the authentication if you let a human control the ballots that get stamped/marked/whatever.
The second problem also isn't a problem, because the machine doesn't need to be right. The paper ballot is authoritative; the machine is just an indication. The ballots are still counted by hand afterwards using the normal process. That means that as long as citizens are able to review the manual counting, they don't need to know or care how the voting machine works.
And yes, that does probably negate 90% of the advantages of voting machines.
> The first problem is a non-issue. You don't need the voting machine to do the authentication if you let a human control the ballots that get stamped/marked/whatever.
The machine can register what vote was cast at 13:42. From there, the whole idea of anonymity disappears.
> The second problem also isn't a problem
The impossibility of satisfying the second criterion is a problem because it prevents the first criterion from being verifiable by anyone.
> This means initial results will be available quickly, followed by actual results a day or two later
Actual results from paper ballots in French elections are in around an hour or two after voting closes. Yes, it relies on volunteers counting and on people watching, but it's much more effective than anything that would involve machines. You can literally watch your ballot box all day if you're suspicious.
France's voting system is bulletproof when it comes to vote integrity. However, it has one significant drawback: it is hard to ask more complex questions than voting for x/y/z or yes/no questions.
One could say that it is a good thing (since it is hard to have a good public debate on many topics at once, voting on many topics at once means you collect people's preconceived opinions rather than people's informed judgement), but many US states ask many questions at once when people vote, so they would have to significantly reduce the share of direct democracy in their system.
> And hand recounts are only for verification, you shouldn't rely on them because they're far less accurate.
Voting machines can be altered. Go to any big hacker conference and you can learn how to hack the common models in thirty seconds. Humans may be fallible, but I'll always trust those fallible humans over anything automated.
And hence - use random hand counted samples for verification.
But nobody has proven that the dominion machines changed a single thing, in fact, the hand recount in Georgia showed as much. Which is why the "election fraud" claims then moved on to more outrageous claims.
There is obviously no such thing as an unhackable system, but when people mention Brazilian elections and its voting systems, it's incredible how everyone assumes it's a system developed/reviewed/audited by inexperienced people.
Thanks for mentioning that issue, it's something that I don't hear mentioned enough in online/distance voting debates (maybe it just means that I'm not involved enough, but anyway good to hear this mentioned). It's so critical and at the same time fairly orthogonal to all the encryption / zero-knowledge proofs / quantum resistance and other cool math that nerds love to nerd about.
There are downsides to being too strongly against those things, because that would ban at-home mail-in voting, and having that is worth a lot of downsides because it lets everyone actually look up what they're voting on.
Not to mention, if everyone has to travel to vote, you're mostly going to get retirees and people with a lot of free time voting.
I do consider at-home mail-in voting a bad risk - locally we have a solution that involves separate 'voting stations' (just as any other voting station, officials + observers from opposing parties monitoring) visiting the people who for various reasons are unable to come to vote, and collecting a secret ballot on-site from e.g. bed-ridden sick people; and special voting stations for people who can't leave - e.g. hospitals, prisons, army bases. Of course, that won't help if large numbers of people aren't coming to vote because e.g. the lines are too long or they have to work and can't get to vote, but then you should fix these problems directly.
It is important that people are able to vote without being controlled by their family members or employers, so any unconstrained remote voting should be an exception that's minimized as much as possible.
> Of course, that won't help if large numbers of people aren't coming to vote because e.g. the lines are too long or they have to work and can't get to vote, but then you should fix these problems directly.
The issues are:
1. young people don't vote because they don't care, so it should help to make it easier for them.
2. once you're in the voting booth, it's too hard to remember who you decided to vote for when there are tens of choices and ballot props, like in California.
Mail-in voting also helps prevent the issue of local governments trying to sneak stuff past the voters by having offseason elections for it. Machine politics has died off in most places, but it's still strong in New York for instance. If everyone gets a mail ballot they'll notice.
Indeed. I supported the absentee voting push during the pandemic, but one thing a lot of my friends don't understand is my opposition to absentee voting being normalized going forward.
At-home mail-in voting is also where the biggest fraud risk is. One of the very rare UK electoral commission cases involved illegal registration and fraudulent postal ballots: https://en.wikipedia.org/wiki/Erlam_v_Rahman
While I wouldn't object, it's not necessary. Voting in the UK is always on Thursdays IIRC, but there's lots of polling stations and they open early and close late. Queues are usually only a few minutes. However postal and proxy votes are still available
note: there's e-voting where you go vote at a voting booth and enter your ballot in a machine (with the advantage that the votes can be tallied faster) and there's e-voting where you use your browser or an app on your phone to vote remotely.
People tend to conflate these two things but they are quite different. Each one has their own set of problems/challenges. On top of the voting infrastructure itself, you also have to think about how to prevent people from being denied the right to vote, how to prevent issuing two votes, etc., etc. Voting infrastructure and logistics (whether electronic or not) can be very complex.
I remember that time, it was the only time in my life I was ever concerned enough to call into Ireland's most popular radio show "Joe Duffy", to raise my concerns about the rollout... They described me on air as a "computer expert against e-voting" ;-)
I think I was a CompSci student at the time, but close enough...
As I recall a lot of ordinary folks were for it originally, until the pitfalls were raised.
The Joe Duffy show is quite strange, because I've heard people of all ages and all demographics calling into it. It really is the way the nation talks to itself. Or, at least, it was, when I was listening ... fifteen years or so ago.
The verifiability is the biggest one. One could conceivably develop a voting system that had perfect accountability, secrecy, and integrity, but would be far far more difficult to work out by hand. A zero knowledge roll up is not something your average poll worker could work out on scratch paper. Paper ballots counted by hand can and do achieve all the things we desire in election systems.
First off, where I've voted they switched from punched cards to scanning paper ballots. It's all typically run by three middle-aged women with a printed book of who's registered to vote. You just tell the first one your name and they find your entry and you sign next to it. The next one hands you a scantron sheet. You mark it up. Then the third one feeds your sheets into the machine as you watch. The machine records your votes and drops the sheets into a bin in the machine. That's a system I trust, and it has the positive of civic engagement. And fundamentally it's not broken.
You now also have the option of mailing in a ballot. That's easier to corrupt I'm sure. But at least it's still paper.
All the fully computerized systems seem really really sketchy to me. Especially in low trust cultures where underhanded stuff is normalized.
The big thing is that you can audit with original documents created by the voters themselves. That's the problem with all the 100% electronic systems: you can't, because the voter doesn't create a master copy of his vote.
> A zero knowledge roll up is not something your average poll worker could work out on scratch paper.
There's no need to do calculations by hand. An electronic voting system only needs to be verifiable by the use of public data. Everybody walks around with a computer.
If you manage to create a system that does that, well, the entire world is quite interested in it.
> Paper ballots counted by hand can and do achieve all the things we desire in election systems.
Sometimes, it's about efficiency as well. I've been a ballot worker here in Germany for the 14 years since I was eligible.
There are simple elections: electing a mayor, for example. You have a ton of DIN A4 paper sheets, but you simply sort them into heaps, count the heaps, and you're done. These don't really benefit from computerizing them, other than speed, but even for a larger polling station, you're done in an hour or two.
Then there are more complex elections, like the German Bundestag, where you have two votes on one DIN A4 sheet: the "Erststimme", where you vote for the directly elected representative of your district, is just as easy to count as the mayoral elections. But then you have to go through all the ballot papers again to count the "Zweitstimme", where you vote for the party lists that make up the other half of the Bundestag. Where people vote the same in both, you can short-circuit during the first phase of sorting, but IME around half the people vote differently (say, they vote for the SPD candidate with the first vote because he's the one most likely to beat the CDU conservative, but with the second vote they vote Greens or Left), so it's still a sizable amount of paper you have to touch twice.
And then there are the horrors: the European Parliament election [1] or regional/city elections [2], where each ballot paper can reach almost 1 m² in size, making them very difficult to handle, and you have a ton of ways to distribute your votes across parties ("Panachage" [3] and others), so for the (again) roughly half of people actually using the complex distribution you have to painstakingly check and count votes. Having these computerized would eliminate so much trouble, particularly as there are always 5-10% of voters who mess up their maths, rendering sections or their entire vote void. And counting these can last for days of very mentally intensive work.
On top of that come specialized voting schemes such as ranked choice and its countless variants, where some are very difficult to execute without having a computer that can run through vote combinations as a batch job.
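For instance, instant-runoff counting is mechanical for a computer but genuinely tedious by hand. A minimal sketch (deliberately simplified: real election law specifies tie-breaking and exhausted-ballot rules that this omits):

```python
# Minimal instant-runoff (IRV) tally: repeatedly eliminate the candidate
# with the fewest first preferences until someone holds a majority.
from collections import Counter

def irv_winner(ballots: list[list[str]]) -> str:
    ballots = [list(b) for b in ballots]   # each ballot: ranked choices
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:              # strict majority of live ballots
            return leader
        loser = min(tally, key=tally.get)  # simplified: arbitrary tie-break
        ballots = [[c for c in b if c != loser] for b in ballots]

votes = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(irv_winner(votes))  # "C": B is eliminated, B's ballot transfers to C
```

Doing those elimination-and-transfer rounds across millions of paper ballots by hand is exactly the kind of work being described here.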
Paper ballots do not mean absolutely zero computers are involved. I do not know about Germany, but in the United States we have voting machines that print paper ballots, and those are submitted to a vote-counting machine, but you can verify your vote before and after printing. Voting machines are important for accessibility reasons. Not everyone can see or even read English, after all.
The vote counting machines are then double-checked by hand. A blockchain-based solution would take significantly longer to verify by hand without a computer, and the number of people who could do the cryptographic calculations in a reasonable time frame on scratch paper would mean we would essentially be trusting the results of the entire election to a few people. The average person can count and easily understand that the winner is whoever gets the most votes.
I would like to see these machines open sourced and confirmed to never be running on the internet though.
I agree that computerized voting can make some voting schemes viable and I feel sorry for all the hard work needed to handle a big city worth of those 164x60 cm ballots, if I got the German right.
However, about that 1 m² ballot: maybe the problem is in the rules, especially if they end up with 5-10% of the voters making mistakes and voiding their vote. Something different would result in fewer mistakes and lower costs (printing, distribution, labor, storage, etc.)
I'd say generally the German vote counting system is efficient enough but perhaps sometimes understaffed (I guess more people could be drafted if that really helps). At the same time, because it is quite manual and involves a fair bit of people with disseminated knowledge of local results, it would be quite tough to (wholesale) manipulate without people noticing.
Maybe some level of inefficiency is actually not a bad thing.
Is this inefficiency just another incarnation of proof of work?
Anyway, I think that the inefficiency caused by distribution across many unconnected agents (regions, committees, people) is key to achieving security in a voting system, so yes, it's not only not bad but inevitable.
I've come to the conclusion that the only way to convince these people that voting machines are a bad idea is to actually engage in election manipulation and then uncover it after the election has already been accepted. Consequences are the only thing they understand.
Could one not still do voter coercion and vote selling using the existing postal vote system?
You get people to do a postal vote, and they show you what they are voting for and you pay them, take away the ballot and post it yourself (or threaten them and do the same thing)