
The point is that today, the key isn't on Google's or Amazon's or Meta's servers, but on people's phones. That means you literally don't have the key if you don't have the phone. And governments don't want that: they want the keys in order to eavesdrop without being noticed (and stealing the phone would get you noticed).

So your only option to comply with this is to remove the phone-only key storage option and move all of the keys onto your servers, which is what we mean when we talk about "breaking end-to-end encryption".

The issue is that to comply with the rules, you have to secure that server so only the good guys can get in, and only if the warrant is legit, while also allowing fast access for time-sensitive cases such as terrorism and for secret cases such as NSA investigations. You also have to make sure that there's absolutely no way for people to access that server if they don't have the approval.

Oh, and also that server / these servers contain the keys to read every message from every citizen of your country (including politicians), which is probably worth about as much as your GDP.

So you need to build the equivalent of a safe containing one trillion dollars that can't be accessed for any reason except all of the reasons mentioned above. Except that these theoretical trillion dollars are special dollars: if you mess up and let people in without anyone noticing they got in, they can "steal" the trillion dollars and start spending them, and nobody would notice that they're being spent. And just about every country on earth would love to "borrow" your trillion dollars, especially if you can't ever realistically prove they did it.

Easy, right?



Has there ever been a public key sign-countersign encrypted tap method?

I.e. Authorized tap requestors have keys (law enforcement, intelligence) and sign a request (including timestamp), storing a copy for audit.

The approval system (courts, FISA) validates that request, countersigns if they approve (including timestamp), storing a copy for audit.

The system owners (messaging services, etc.) then validate both signatures and provide the requested tap information, creating a tap record (including content scope and timestamp), storing a copy for audit.

Ideally, then all audit logs get publicly published, albeit redacted as needed for case purposes.
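The sign-countersign flow above can be sketched in a few lines. This is a toy model, not a real design: HMAC stands in for real public-key signatures (a deployed system would use something like Ed25519), and the party names, keys, and record fields are all illustrative assumptions.

```python
import hashlib
import hmac
import json
import time

def sign(key: bytes, payload: dict) -> str:
    """Toy 'signature': HMAC over a canonical JSON encoding."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def verify(key: bytes, payload: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(key, payload), sig)

# Illustrative keys for the two authorizing parties.
LE_KEY, COURT_KEY = b"law-enforcement-key", b"court-key"

# 1. The requestor signs a tap request (a copy is kept for audit).
request = {"target": "acct-123", "scope": "messages", "ts": time.time()}
request_sig = sign(LE_KEY, request)

# 2. The approval system validates, then countersigns (kept for audit).
assert verify(LE_KEY, request, request_sig)
approval = {"request_sig": request_sig, "approved_ts": time.time()}
approval_sig = sign(COURT_KEY, approval)

# 3. The system owner validates BOTH signatures before producing
#    the tap record (also kept for audit, later published redacted).
assert verify(LE_KEY, request, request_sig)
assert verify(COURT_KEY, approval, approval_sig)
tap_record = {"request": request, "approval_sig": approval_sig,
              "content_scope": request["scope"], "tap_ts": time.time()}
```

The point of the structure is that no single party's key suffices: the provider refuses to act unless both the request signature and the countersignature check out, and every step leaves an auditable artifact.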

Part of the central issue is deciding "Who should be responsible for security?" Imho, if governments want to mandate a scheme like this, it sure as shit shouldn't be the tech companies. The government should have to manage its own keys, or deal with consequences of leaking them (while allowing the tech companies to retain independent records of individual requests).

As much as it pains me to say this... this wouldn't be the worst use case for a blockchain...



Yes! Exactly like what you've apparently thought about and worked on for a long time. Neat!

>> To decrypt it, multiple parties need to come together and combine their keys, all the while creating an audit log of why they are accessing this or that portion.

To me, this is the technical solution that best mirrors the ideals of the pre-technical reality.

And I consider myself an encryption absolutist! But I think the powers arrayed against it are too strong (and in some areas, too morally correct) to fully resist.

Which devolves to creating a compromise, and hopefully one better than "Government has no keys, any of the time" or "Government has all keys, all the time."
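The "multiple parties combine their keys" idea quoted above is usually realized with threshold secret sharing. Below is a minimal Shamir's-secret-sharing sketch in pure Python; the prime, share counts, and party assignments are illustrative, and a real deployment would use an audited library rather than this toy.

```python
import secrets

PRIME = 2**127 - 1  # field for the polynomial arithmetic (demo-sized)

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = 123456789
shares = split(key, n=5, k=3)      # e.g. court, agency, provider, auditors
assert combine(shares[:3]) == key  # any 3 of 5 shares recover the key
assert combine(shares[2:5]) == key
```

Fewer than k shares reveal nothing about the secret, which is what lets each share-holder's participation be a natural audit-log event.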


So instead of stealing a single key, the FSB has to steal three?


The client side devices / cameras / whatever would send the encrypted copies off-prem, to be decrypted in the case of proper due process and authorization. But it would require interactively querying a distributed database that is managed by agencies or networks representing civilian interests, and these agencies would rate-limit the querying and disclose every query, who did it and why.

We need more transparency in our governments and security agencies (including FSB, CIA). Start with transparency on why they need certain data. More here:

https://community.qbix.com/t/transparency-in-government/234/...


Yes. In addition to two of those keys being attributable to the federal government.

Which, at least in the US DoD's case, already manages the world's largest PKI system.

The key difference with the UK scheme would be (1) the tech company would retain the final decryption key & (2) any use of that decryption key would be required (technically and legally) to generate a public audit record (albeit optionally obfuscated if the court order so requires it).


And what happens when the NSA or the FSB or some other equivalent just breaks into where the keys are stored, or beats it out of an employee, and bypasses the entire logging mechanism?

Your security guard having a clipboard where everyone signs in at the gate doesn't matter if someone dug a hole under the fence.


You mean when the {other nation's foreign intelligence agency} penetrates {nation's intelligence agency} and {nation's court system}?

And still creates a logging trail because the log system is intrinsically linked to fulfilling a request?


"Intrinsically linked" doesn't exist. Encryption is math, math you can do on a piece of paper (in theory). Anything you set up to log the fact that people did that math is always going to be meaningless if people take the numbers and do the math away from your logging system.

Now, you can say "but you can't ever access the numbers, just order the computer to do the operation". And also "To order the operation, you need 2FA and a signature from a judge and the president". And, of course, "The numbers needed for decrypting are split between three different servers, all with their own security systems, and they can't be forced to talk to each other without the president's signature being added to a public log". And that's all well and good, but consider this: I install a listener on the RAM of each of the three servers. I wait until it does a totally legit, totally approved thing that gets logged. I now have the numbers copied somewhere. I do the decrypting for everything else away from the servers.
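The weakness described above can be made concrete. In this sketch a key is split across three servers with a simple XOR split (an illustrative stand-in for whatever real splitting scheme is used): each share alone is useless, yet any legitimate, fully approved decryption must recombine them, and at that instant the complete key exists in memory, ready to be copied.

```python
import secrets

key = secrets.token_bytes(32)  # the "trillion-dollar" number

# Split: two random shares, third share = key XOR share1 XOR share2.
s1 = secrets.token_bytes(32)   # held by server 1
s2 = secrets.token_bytes(32)   # held by server 2
s3 = bytes(a ^ b ^ c for a, b, c in zip(key, s1, s2))  # held by server 3

# Each share alone is statistically independent of the key...
# ...but any approved operation must recombine them, and at that
# moment the full key materializes in RAM, where a listener can copy it.
recombined = bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))
assert recombined == key
```

No logging layer around this changes the fact that `recombined` is just bytes once it exists, and bytes can be exfiltrated and used forever, away from the logs.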

Sounds like a difficult operation? You're talking about three numbers worth a trillion dollars if they ever get out. Spy missions have been done that were harder to pull off for less benefit.

You just thought of [technical solution] to prevent listening through the RAM? Great, you just solved one _very obvious_ part of the attack surface. Now to address the ten thousand other parts identified by your threat model, and I really hope that you did a perfect job while designing that threat model, because one blind spot = all of the keys are out forever. Also, no pressure, but your team of 10 or 100 or even 1000 people working on that threat model is immediately going to be pitted against teams of the same size from every government ever, so I hope your team has the best and most amazing engineers we'll ever see in the world. And that's not considering the human aspect of all of that, because, well, one mole during the deployment, one developer paid enough by an adversary to make an "accidental" typo that leaves a security hole, one piece of open-source software getting supply chain attacked during deployment, and your threat model is moot.


So many arguments against this boil down to 'Anything less than perfection isn't perfect.'

That's true.

But it's also missing the benefits of a less-than-perfect but better-than-worst-case system.

By your argument, TLS shouldn't exist.

And yet, it does, is widely deployed, and has generally improved the wire-security of the internet as a whole. Even while having organizational and threat surface flaws.

I agree with you that no government entity should have decryption keys in their possession.

However, I disagree that there should be no way for them to force decryption.

There's technical space between those two statements that preserves user privacy while also allowing the legal systems of our society to function in a post-widespread personal encryption age.


That's completely missing the point. This is not about perfection, this is about the threat level.

Decryption is always going to be technically possible. A government can always get possession of a phone, invest a lot of time and skill to get the key out of it, and then use that. This is what happened in that one famous Apple case, and this is what is always going to happen when people use E2E encryption. The point I made in my other posts was that once you get the key, you have the key, and that doesn't change just because the key is on the phone. That's your threat model when you use E2E encryption.

TLS works the same way. The encryption keys are ephemeral, but they're temporarily stored on your computer and on the server you're communicating with. If you want to attack a TLS connection (and you can!) you need to obtain the key from either the server or the client, and that's your threat model when you use TLS.

This is a completely fine and acceptable threat model as long as the keys are stored in a disparate sea of targets, either on hundreds of millions of possible client/server machines for TLS, or on each person's phone (each one with a different model, from a different maker, and using different apps) for E2E. The thing is, in such a distributed model, nobody can realistically get every key out of every phone at once. This limits attacks to a couple of high-profile targets at a time, and therefore makes the impact of successful attacks way, wayyyy lower.

The issue arises when you decide to forbid end-to-end encryption, and instead mandate a global way to decrypt everything without needing access to the phone itself. This changes the threat model in a way that makes it unsustainable.

Again, and I know I repeated that vault analogy, but it's a great way to explain attack surfaces and threat models: It's fine if everyone has a vault at home with their life savings in gold inside, because nobody can realistically rob every vault from everyone at once. It's still fine if every city has a vault where people store their gold, because while a few robberies might happen, it's possible to have high enough security to make the vault not worth robbing. It starts being a bad idea to ask everyone to put their gold into a large, unique central vault that "only the government" has access to, because the money you need to spend to protect that vault is going to be prohibitive (and no way the government isn't going to skimp on that at some point). And finally, it's an awful idea to do that with magical gold that you can steal by touching it with a finger and teleporting out with it, because all of that gold is going to disappear so fast you'd better not blink, and losing that combined pile of gold is going to impact every citizen ever.

It's a matter of threat modeling: the moment there's a way to access absolutely everything from a single entry point with possibly avoidable consequences for the attacker, then that entry point becomes so enticing that you can't protect it. You just can't. No amount of effort, money, and technical know-how is going to protect that target.


> TLS works the same way.

TLS does not use ephemeral keys, from a practical live connection perspective, because the root of trust is established via chaining up to a trusted root key.

Ergo, there are a set of root keys that, if compromised, topple the entire house of cards by enabling masquerading as the endpoint and proxying requests to it.

And that's exactly the problem you're griping about with regards to a tap system. One key to rule them all.


Hacking the root certificates of TLS doesn't allow you to read every TLS-encrypted conversation ever, thankfully. It just allows you to set up a MITM attack that looks legit. And sure, that is bad, but it's not "immediately makes everything readable" bad.

That's why I call TLS keys "ephemeral" under this threat model.

The goal of anti-E2E legislation isn't to be able to MITM a conversation - again, government agencies can already set that up with the current protocols fairly easily. The goal of the legislation is to make it so that, "with the correct keys that only the good guys have", you can decrypt any past message you want that was already sent using the messaging system, without needing access to either device.

If governments would settle for an "active tap system" that works like a MITM for E2E-encrypted channels, we wouldn't be having this discussion, and we wouldn't be talking about new regulations. Because again, that is already possible, and governments are already doing it.


That's why I put the live caveat. Granted, decryption of previously recorded conversations and decryption of new conversations are two different threat models.

Out of curiosity, can MITM of new connections be set up fairly easily with current protocols? (let's say TLS / web cert PKI and Telegram)

For the TLS case, they'd need to forge a cert for the other end and serve it to a targeted user. Anything broader would risk being picked up by certificate transparency logs. That limits the attack to targeted, small-scale operations and requires control of key internet routing infrastructure. Not ideal, but at least we're limiting mass continuous surveillance.

For Telegram, the initiation is via DH [0] and rekeyed every 100 messages or calendar week, whichever comes first, with interactive key visualization on the initial key exchange [1]. That seems a lot harder to break.

[0] https://core.telegram.org/api/end-to-end

[1] https://core.telegram.org/api/end-to-end/pfs#key-visualizati...
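The DH initiation mentioned above boils down to the standard exchange below. This is a toy sketch: the textbook parameters (p=23, g=5) are for readability only and are NOT secure; Telegram uses a server-supplied 2048-bit safe prime and layers its own validation, rekeying, and key visualization on top.

```python
import hashlib
import secrets

p, g = 23, 5                       # toy group parameters (NOT secure)

a = secrets.randbelow(p - 2) + 1   # one side's private exponent
b = secrets.randbelow(p - 2) + 1   # other side's private exponent
A, B = pow(g, a, p), pow(g, b, p)  # public values sent over the wire

shared_a = pow(B, a, p)            # g^(a*b) mod p, computed by each side
shared_b = pow(A, b, p)
assert shared_a == shared_b        # both ends derive the same secret

# A fingerprint of the shared key is the kind of thing the interactive
# "key visualization" lets both users compare out-of-band.
fingerprint = hashlib.sha256(str(shared_a).encode()).hexdigest()[:16]
```

Because only the public values A and B cross the wire, a passive eavesdropper learns nothing; an active MITM has to substitute its own values, which is exactly what comparing the key fingerprints is meant to catch.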


And not just TLS and certificate authorities but also DNSSEC. Still, it is pretty worrying to have one CA like Let's Encrypt behind so many sites, or seven people behind DNSSEC:

https://www.icann.org/en/blogs/details/the-problem-with-the-...

But here is how they protect it:

https://www.iana.org/dnssec/ceremonies

On the other hand, data is routinely stored in centralized databases and they are constantly hacked:

https://qbix.com/blog/2023/06/12/no-way-to-prevent-this-says...


The issue is that whatever "audit" or "protection" method you create, whatever technology you use to ensure only the "good guys" get the information and the "bad guys" can't, it's only layers added on top of the real issue:

The final key is always going to be a single number. Once the key is out, it's out. There's nothing you can do about it being out, and no way to know it's out unless your audit system somehow caught it beforehand.

And that key (or these keys, which doesn't change much between "one number" and "two billion numbers" in terms of difficulty of stealing or storing them) is going to be worth trillions of dollars.

Again, the bank vault thing is an apt analogy (up to a point): You can add all of the security "around" the vault, guard rounds, advanced infrared sensors, reinforced concrete with weaved kevlar in it, etc... But if someone ever gets the dollar bills in their hands, then they got the bills. And if they somehow manage to bypass the security systems and not get noticed as they go in for the steal, you have no way to know who they are or that they did it.

Now, that is completely fine for a standard bank vault: after all, you need to physically send someone in, it's pretty rare for people to actually want in the vault so security can be pretty slow and involved, it doesn't have that much "money" inside (I'm pretty sure no bank vault in the world contains more than a handful of millions at any given time), and above all it's "physical" stuff inside: you'd immediately see if it's gone, it's not like someone who got in the vault can "magically" copy the bank notes and leave with the money while leaving the vault seemingly intact.

It's less fine for a "server" vault, where not only do you store everything so it's worth trillions, but people need to access it all the time because "investigations" and "warrants", and in a fast way because "terrorism", and if there's a breach or a mole or anything like that then people can copy all of the data inside and leave the server seemingly intact.

I think that believing there's a technical solution is misunderstanding the problem, and that anyone claiming they "solved" it is always going to minimize one risk or the other. The governments and regulators don't get that yet, because it looks like it's just a technological issue of building "the vault". But the real issue, the fact that "the vault" doesn't matter when the consequences of stealing its contents are risk-free for the bad guys but so immensely impactful for citizens, is the reason why technical solutions won't ever be enough.


I understand the analogies.

What I don't understand is, in the absence of some sort of scheme, how a justice system functions.

How would you compel production of evidence when duly authorized?



