A Schism in the OpenPGP World (lwn.net)
138 points by todsacerdoti on Dec 7, 2023 | 271 comments


The politics here as I perceived them...

Way back, there was a rambling and expansive process to update the OpenPGP standard. Years. So eventually it was decided to do a more focused "crypto refresh" that would be restricted to just important cryptography concerns. The GnuPG program was part of this. During the process the head of the GnuPG project (Werner Koch) spent a lot of time pushing back against what he considered pointless or wrongheaded changes. Eventually things seemed to stabilize and there seemed to be a consensus and people started paying less attention. Koch was one of those people.

The process then wound up again and a bunch of stuff that Koch opposed ended up in the standards draft. Koch eventually noticed and ended up taking the position that the GnuPG project would not follow the new changes and would instead stay where they were.

I think the very best outcome here would be if the various entities would accept that the process is working to the extent of clearly showing that consensus has not been reached and can't be reached from this point. The process will have to start yet again. The root cause here seems to be that there is no real serious issue with the existing standard, so attempts to update it lack a focus. As a result there is a tendency to try to change everything.


This is actually pretty accurate.

Worth noting is that the choice to move on with the RFC process was made very deliberately: there was a large thread on the mailing list to try and find a compromise. In this discussion, Werner Koch's negotiating position basically required things to go 100% his way. The choices for the working group chairs ultimately boiled down to:

1. Closing down the WG with no result.

2. Moving on as a mostly functioning working group, at the cost of leaving Werner/GnuPG in the rough.

3. Reverting everything to the way Werner preferred.

None of those were great options, obviously.

Note that for option 3, the draft that is now LibrePGP, which Werner edited until circa 2020, had spectacularly failed to gather support and achieve consensus back then, effectively stalling the working group for years. The LibrePGP project now claims that this draft was "the last consensus" of the working group, but at that time the reality was that Werner had gotten pretty used to just committing whatever he felt like to the master branch of the spec draft, instead of seeking group consensus.

At the moment there doesn't seem to be a chance for the parties to come to an agreement. I'm curious to see how this develops.


People should agree that if no consensus is reachable within the group of experts, an external advisor (e.g. Bruce Schneier, Ross Anderson) should be consulted. They may not be automatically "right", but such an agreement could re-establish focus and avoid division or, worse, the end of the worthwhile group's activities.

Nothing is worse than open source people failing to reach consensus and splitting apart, because nothing would make the enemies of privacy happier.


The IETF aims for rough consensus [1][2], rather than full consensus. The OpenPGP Working Group chairs have deemed that GnuPG is "in the rough" of the rough consensus [3], and that there is sufficient consensus on the crypto refresh to publish it as an RFC. If GnuPG disagrees with that outcome, they can appeal it. That would be the normal "process" to follow, which - to my knowledge - hasn't happened yet.

As for lacking focus, I'm obviously biased as one of the authors of the document, but the charter of the crypto refresh [4] was rather narrow, and aimed at modernizing the cryptographic primitives in OpenPGP. That's what the crypto refresh focused on; e.g. see [5].

[1]: https://www.rfc-editor.org/rfc/rfc8789.html

[2]: https://www.rfc-editor.org/rfc/rfc7282.html

[3]: https://mailarchive.ietf.org/arch/msg/openpgp/yz6EnZilyk_90j...

[4]: https://datatracker.ietf.org/doc/charter-ietf-openpgp/03/

[5]: https://proton.me/blog/openpgp-crypto-refresh


Rough consensus doesn't have to include the people actually running the project? That sounds like a great way to publish standards that will be ignored by everyone...


Multiple OpenPGP implementations were involved in the crypto refresh, including Sequoia-PGP, OpenPGP.js, GopenPGP, and - at one point - RNP and even GnuPG itself, though those two stopped being involved at some point. GnuPG is now unhappy with some of the changes that were made after that point, which is their right. However, GnuPG is not the only implementation of OpenPGP, and not the only voice that matters. Various implementations have already implemented the crypto refresh (and personally I still hold out hope that eventually, GnuPG will do so as well, if this drama dies down and everyone can reconcile).


The thing is that I've never even heard of any of those. When people talk about PGP they mean GnuPG. All of those other projects exist to be compatible with GnuPG. If compatibility wasn't a concern you'd just use something new like Age.


If I added to my litany of problems with PGP the fact that its advocates and user base believe that GnuPG is the PGP standard, I'd get dunked on. But I agree with you completely: GnuPG is the standard.

Further: I don't think this is intrinsically a bad thing. Other, more successful projects work this way, most notably WireGuard. You have to judge these things on the merits.


I sort of assume protonmail is a major issue here. A reasonable scale service that offers pgp and wants/needs some improvements.


Even if you don't know them (and pardon my arrogance), OpenPGP.js and GopenPGP probably have many more end users than GnuPG, due to them being used by Proton Mail (among other products). However, that shouldn't actually matter, because...

> All of those other projects exist to be compatible with GnuPG.

That's not how the IETF process works. Technical arguments are more important than who has the most users.

> If compatibility wasn't a concern you'd just use something new like Age.

That being said, OpenPGP.js and GopenPGP do actually implement the old version of the draft (now dubbed "LibrePGP") as well (and everybody obviously still supports RFC4880), so there's no real risk of incompatibility. We just also implement the crypto refresh, and hope that GnuPG will do so as well, so that everyone can benefit from the improvements made there.


> Even if you don't know them, (and pardon my arrogance,) OpenPGP.js and GopenPGP probably have many more end users than GnuPG, due to them being used by Proton Mail (among other products).

Right, but I assume the point of Proton Mail using OpenPGP is so that mail sent by Proton Mail can actually be verified/decrypted by other systems that aren't Proton Mail? So compatibility with the widely known GnuPG is a goal. Otherwise the question remains, why PGP?

> That's not how the IETF process works

Sure, but it is how the real world works.

> That being said, OpenPGP.js and GopenPGP do actually implement the old version of the draft

So are you doing both? If you implement the crypto refresh and GnuPG doesn't then you lose compatibility, right? If you don't lose compatibility and are somehow doing both, then what's the point?


> So are you doing both? If you implement the crypto refresh and GnuPG doesn't then you lose compatibility, right? If you don't lose compatibility and are somehow doing both, then what's the point?

OpenPGP keys can signal support for various features and algorithms - such as AEAD modes. As one example, the crypto refresh specifies GCM (as optional to implement).

This means that, when sending email between Proton users and OpenPGP implementations that support that, we can use GCM - which has certain security and performance benefits.

If we send email between Proton users and GnuPG, we won't use GCM, so we won't lose compatibility, but users also won't benefit from those (and other) security and performance improvements.
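To illustrate the idea, here is a hypothetical sketch of preference-based negotiation in Python. The cipher names and the negotiation shape are illustrative only, not the actual crypto-refresh wire format or real algorithm IDs:

```python
# Hypothetical sketch: the sender walks its own ordered cipher list and
# picks the first entry the recipient's key also advertises, falling back
# to something every legacy implementation understands.
SENDER_SUPPORTED = ["AES-256+GCM", "AES-256+OCB", "AES-256-CFB"]

def pick_cipher(recipient_prefs):
    """Return the first sender-supported cipher the recipient advertises."""
    for cipher in SENDER_SUPPORTED:
        if cipher in recipient_prefs:
            return cipher
    # Conservative fallback understood by RFC 4880 implementations.
    return "AES-128-CFB"

# A crypto-refresh-capable peer negotiates an AEAD mode:
print(pick_cipher(["AES-256+GCM", "AES-256-CFB"]))  # AES-256+GCM
# A peer that doesn't advertise GCM or OCB falls back to CFB:
print(pick_cipher(["AES-256-CFB"]))                 # AES-256-CFB
```

Either way a message gets through; the negotiated mode just determines which security and performance properties you get.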


Thank you for clarifying. I'd still be concerned about not being able to use the same key for proton mail and my desktop application, but that is more reasonable to deal with.


If you use the same key for proton mail and your desktop, anyone who is privacy conscious will ban that key as "leaked".

There was recently such a discussion on Debian: keys used by Proton Mail and similar services are not allowed to be used for uploads.


This is something I would want a subkey system to handle. Like it would be neat if you could have a masterkey that signs two separate keys and says "this is for proton" and "this separate key is for debian", in such a way that it's clear that they both belong to me.
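For what it's worth, OpenPGP's subkey binding signatures are already close to this idea. A toy sketch of the concept (Python, with HMAC standing in for a real asymmetric certification signature; the labels and key material are hypothetical):

```python
import hashlib
import hmac

# Toy model: a master key "certifies" purpose-labeled subkeys. HMAC stands
# in for a real signature here; OpenPGP does the real thing with binding
# signatures over subkey packets.
master_secret = b"master-key-material"  # placeholder, not a real key

def certify(subkey_fingerprint, purpose):
    """Produce a certification tying a subkey to a stated purpose."""
    message = subkey_fingerprint + b"|" + purpose
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()

proton_cert = certify(b"SUBKEY-1", b"proton")
debian_cert = certify(b"SUBKEY-2", b"debian")

# Anyone trusting the master key can check both certifications and see
# that both purpose-bound subkeys belong to the same identity.
print(proton_cert != debian_cert)  # True
```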


> I'd still be concerned about not being able to use the same key for proton mail and my desktop application

Would it be possible to use different subkeys for each (a good idea in any case, since it would allow revoking them separately in case one of them is leaked)? I don't recall whether the packet with the features and algorithms support information is attached to the key or to the subkey.


It's possible to do both, however, the crypto refresh recommends setting the preferences on the entire key, and anything else isn't widely supported, I believe. Theoretically, you could have two versions of the key with different preferences (and different subkeys), but at that point you're probably better off having two entirely separate keys.


> I assume the point of Proton Mail using OpenPGP is so that mail sent by Proton Mail can actually be verified/decrypted by other systems that aren't Proton Mail?

My experience with Proton is that they really don't care anything about encryption or signatures except as they apply to emails between Proton users: http://jfloren.net/b/2023/7/7/0


We do care about interoperability, indeed that's the point of using OpenPGP.

I've addressed the points in the blog post in https://news.ycombinator.com/item?id=36643124.


Proton mail has fewer users than "apt-get" though. So your accounting is probably off by a few million users the other way.


Not sure: https://proton.me/blog/proton-100-million-accounts

But yeah, I was more thinking of email than code signing; since it's hard to get statistics on the latter, I should've qualified that :)


Ah... So the standard/implementation equivalent of Pokémon's "Your trainer level isn't high enough".


More like... First there was GnuPG. Then people wanted interoperability with other implementations, so OpenPGP happened, and then OpenPGP changed things to be incompatible with GnuPG.


Not to be pedantic but that's not how the history went. First there was PGP (the product), then that was turned into a standard (RFC2440, "OpenPGP Message Format"), and GnuPG implemented that.

Then, some (backwards-compatible) changes to the OpenPGP standard were proposed, and every implementation implemented them. (These were specified in RFC4880.)

Then, some (backwards-compatible) changes to the OpenPGP standard were proposed, and many implementations implemented them. (They never ended up in an RFC, but are now dubbed "LibrePGP".)

Then, some more (again backwards-compatible) changes were proposed, and many implementations implemented them, but not GnuPG, for now. (This is the crypto refresh. It should become an RFC soon.)

It's a shame GnuPG doesn't want to implement the crypto refresh at this time, but that shouldn't cause incompatibilities (if all implementations are conservative in what they generate). Messages sent between GnuPG and other implementations can still use RFC4880 (the old OpenPGP standard), they just won't benefit from the improvements in the crypto refresh, unfortunately.


GnuPG is the only real implementation that is used by anyone though.


It is a similar position to Chromium and web standards. Doing stuff without their agreement is kinda pointless in the same way, but if they aren't gonna cooperate, what are you gonna do, give up and just do whatever they want?


But what's the point of sending an encrypted email to someone who can't read it?


See also 'A Critique on “A Critique on the OpenPGP Updates”':

* https://blog.pgpkeys.eu/critique-critique

Which goes over the concerns/critiques that Koch posted at:

* https://librepgp.org/


[flagged]


>...old, potentially vulnerable cyphers...

Examples? Specifically, old potentially vulnerable ciphers that would actually allow the NSA to crack encrypted emails? IDEA and 3DES for example are perfectly secure for that usage. ... and just having something in the standard is not going to force anyone to use it. These days it is all AES and RSA or Curve25519.


> IDEA and 3DES for example are perfectly secure for that usage.

I wouldn’t use the phrase “perfectly secure”. They are both 64-bit block ciphers, so they are vulnerable to generic collision attacks like https://sweet32.info/. This is why NIST deprecated 3DES and reduced the allowed limit of data encrypted with a single key to 2^20 blocks = 8 MB. Many emails with attachments exceed that size.

Now, you may say that such attacks are largely theoretical and the actual amount of (known plaintext) that needs to be captured is much larger in practice, but this is quite a step from “perfectly secure”, especially when you are considering the NSA as your adversary.
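The numbers are easy to check: with 64-bit blocks, the generic birthday bound sits at 2^32 blocks, and the NIST per-key limit is far below that. Quick arithmetic in Python:

```python
BLOCK_BITS = 64                # 3DES / IDEA block size
block_bytes = BLOCK_BITS // 8  # 8 bytes per block

# Birthday bound: with n-bit blocks, colliding ciphertext blocks (which in
# CBC mode leak the XOR of two plaintext blocks) become likely around
# 2^(n/2) encrypted blocks.
birthday_bytes = 2 ** (BLOCK_BITS // 2) * block_bytes
print(birthday_bytes // 2**30, "GiB to the birthday bound")  # 32 GiB

# NIST's far more conservative per-key data limit for 3DES:
nist_bytes = 2**20 * block_bytes
print(nist_bytes // 2**20, "MiB under the NIST limit")       # 8 MiB
```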



>...reduced the allowed limit of data encrypted with a single key to 2^20 blocks = 8MB.

Is that from here?

* https://csrc.nist.gov/News/2017/Update-to-Current-Use-and-De...

I can't find any indication that this has transitioned from the proposal stage to an actual recommendation. But at any rate, this proposal is based on sweet32 [1], which was an oracle attack that required 785 GB of traffic to demonstrate. Off the top of my head, recommendations for things like email (and file encryption), which are not vulnerable to that sort of oracle attack, suggest a maximum for 64-bit block length ciphers of something like 4 GB. Email tends to be a maximum of 50 MB. That would be after base64 encoding.

[1] https://sweet32.info/


Yes, it became part of the standard in rev 2. 3DES will be completely forbidden for federal use after the end of the year. Sweet32 was a demo, attacks get better. And there are other generic attacks against overuse of 64-bit blockciphers. Outside of a few usecases in constrained environments there’s no good reason to use a 64-bit blockcipher anymore (and there are better choices than DES/IDEA for those cases).

https://csrc.nist.gov/news/2023/nist-to-withdraw-sp-800-67-r...


OK, but how does any of this refute my contention that 3DES is secure for PGP over email?


Just to be clear, you are asking how all this evidence refutes your totally unsupported assertion that 3DES is “perfectly secure” against the NSA? When even the NSA, who co-designed DES in the first place, forbid its continued use?


Is anyone else kinda hoping that GPG/PGP loses enough respect in the tech community that something fresh comes along that really solves a lot of the UX and security issues they have? (Acquiring keys, rotating keys, identifying compromised keys, and most importantly either reaches a large enough percentage of emails sent that usage of it is not in itself an immediate flag to monitor or can be implemented as a side channel not directly including the signature in the email payload itself.)


> something fresh

It exists, it's called age..

Some random links

https://github.com/FiloSottile/age

https://www.reddit.com/r/crypto/comments/hr64hr/state_of_age...

https://github.com/FiloSottile/age/discussions/432

> (Acquiring keys, rotating keys, identifying compromised keys, and most importantly either reaches a large enough percentage of emails..

Oh nevermind, age doesn't do any of that. Indeed, it doesn't even do email https://github.com/FiloSottile/age/issues/93


Heh.

Now send a file encrypted by age to two addressees, who don't need to know each other's decryption keys.

After that, sign an email without encrypting it; do the same with a git commit, or a Debian package.

GPG has features well beyond simple encryption (which is all age does), because there are real uses for them.


'age' is really impressive, it's helped me get a better grasp on GPG.

I'm not joking, it's genuinely user-friendly, and I certainly prefer it over GPG.

Interestingly, its simplicity makes everything about GPG suddenly make more sense.


Has this project ever received a security audit?


Age is excellent. Fast and easy for anyone to grasp


I think it already has. Maybe not for the underlying tech, but certainly for encrypted email.

But I think the outcome of this is not "something fresh" but rather "giving up on the idea of encrypted email altogether". We have far superior communication channels that are secure, easy and private today (Signal, Matrix, WhatsApp, iMessage); that problem is solved. Storing email securely and encrypted is solved too, by providers such as Proton Mail, Fastmail, Tutanota and many others.

So what does GPG/PGP really solve still?

The one use-case where I strongly see it shine is signed mail notifications. I've only ever seen Kraken do this: upload my GPG/PGP pub key in their web interface, and have them sign and encrypt mails to me with that. It very much solves phishing and MITM on email notifications (and password-reset mails and such). It comes with two serious problems though: the UX on my client end still sucks; e.g. I never managed to set up PGP on my Android client, so I cannot read their mails from my phone ("Subject: Login attempt from X", body: garbled). And generation and rotation of the keys is as frustrating as ever. It certainly is not a feature for "my mom".


> (Signal, Matrix, WhatsApp, iMessage);

None of which are anywhere close to as ubiquitous as email. None of which are well suited to long messages. Only one of those (matrix) is federated. Only one (matrix) wasn't designed specifically for use on mobile devices. Yes, the others can be used on a desktop, but the experience isn't great.


Hold on, there's a sleight of hand here. You're trying to compare all of email against WhatsApp. Now, it's possible that if you took transactional email out of the picture, and just stuck with interpersonal communication, WhatsApp could beat email. But that doesn't matter, because in this discussion, the figure of merit is encrypted email, and on that metric, every single one of those platforms on their own roflstomps email for daily usage.


Only Matrix is federated, like email. I really enjoy sending emails without opening an account for each recipient or being subservient to 1 stack provider for 50 years, with all the inbreeding that entails.


That wasn't the point being made, was it?


> Now, it's possible that if you took transactional email out of the picture, and just stuck with interpersonal communication, WhatsApp could beat email

Everyone I know uses email. No one I know uses whatsapp. A couple of people I know use Signal. A handful use iMessage.

> But that doesn't matter, because in this discussion, the figure of merit is encrypted email,

Ok, let's consider one case where encrypted email is commonly used: reporting security vulnerabilities. Do you really think any of these would be a good medium for that? Do you see companies or other organizations putting a whatsapp username as the contact in their security.txt?

I do want there to be a more secure replacement for email. But most of the newer e2ee messaging systems can't really fully replace email.


Is encrypted email commonly used for reporting security vulnerabilities? It seems like increasingly, more reports occur via bug bounty programs, or are disclosed publicly by the researchers, or are just sent as plaintext emails to security@ or whatever is publicly listed. When I've found security vulnerabilities in somebody's code, I can't think of a time I ever thought about GPG-signing my notice to them.


>When I've found security vulnerabilities in somebody's code, I can't think of a time I ever thought about GPG-signing my notice to them.

It's not authenticity that matters here, it's confidentiality.


Basically nobody cares. Vulnerability researchers don't use GPG either.


Yes: I think Signal is drastically better for reporting security vulnerabilities than email. I think if you're actually worried about operational security for accepting vulnerability reports, using email is practically malpractice. The fact is, most security teams, even the very large ones, are not especially concerned about operational security for inbound vulnerability reports.


From a security point of view, absolutely. But there are logistical problems. Currently, a signal account has to be tied to a cell phone number. How does that work when you want it sent to a team instead of an individual? There isn't a sanctioned API, so it is difficult (and unsupported) to set up an integration with bug tracking software. Not to mention that the reporter may not have Signal set up yet.


Most reporters don't have PGP set up, either --- far fewer than have Signal set up. But this is all kind of a moot point: the industry norm is to use plaintext email, and to make ad hoc arrangements (including voice calls) for the very rare cases where things are too scary to email.


Honestly these seem like pretty minor issues compared to the task of properly managing a GPG install.

How do you manage the keys? If you've shared them with a team, how do you ensure someone hasn't taken a copy? What if the key is lost? What if someone ends up replying to the thread without doing the encryption song and dance? It's just such a pain. I'd rather copy and paste something out of Signal and into my bug tracker a thousand times than have to deal with all the footguns of email encrypted with GPG.


>The fact is, most security teams, even the very large ones, are not especially concerned about operational security for inbound vulnerability reports.

This never made sense to me, can anyone explain?


“A handful use iMessage”.

Right…


The fact that they know no one who uses WhatsApp is a giveaway of their demographics as well. In many countries, "not having WhatsApp" equals "not participating in anything". In my country everything, from my insurance help desk to the coordination of a friend's birthday gift, happens on WA.

Despite my reluctance to use Meta projects, I read and write far, far more WA messages per day than emails.


I mean, only a handful of my friends use iMessage, but that's because I don't have that many friends.


I think we agree[0] on this. Email encryption isn't ever going to be a thing because of the way email itself works. But email signing would help a lot. I still don't think GPG does this very well, though, because of issues with key rotation/invalidation/etc.

[0]: https://news.ycombinator.com/item?id=38557771


I wish it would just use TOFU ("trust on first use") by default. It's not 100% fool-proof, but actually does cover a large number of use-cases, and is certainly better than nothing.

UI:

  "billing@paypal.com: we never seen this sender before, be careful"

  "billing@paypal.com: this is verified to be the same sender"

  "billing@paypal.com: ACHTUNG! THIS IS SOMEONE ELSE"
You can of course still manually add keys, and you can even do automatic or semi-automatic key rotation with some new header (e.g. "X-New-Key: [...]" that's signed with the old).
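A minimal sketch of that TOFU flow (illustrative Python; a real client would persist the pinned keys and actually verify the message signature against them):

```python
pinned = {}  # sender address -> key fingerprint seen on first use

def check_sender(address, fingerprint):
    """Classify an incoming message per trust-on-first-use."""
    if address not in pinned:
        pinned[address] = fingerprint  # first contact: pin the key
        return "new sender, be careful"
    if pinned[address] == fingerprint:
        return "verified: same sender as before"
    return "WARNING: key changed, this may be someone else"

print(check_sender("billing@paypal.com", "AB12"))  # new sender, be careful
print(check_sender("billing@paypal.com", "AB12"))  # verified: same sender as before
print(check_sender("billing@paypal.com", "FF99"))  # WARNING: key changed, ...
```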


> You can of course still manually add keys, and you can even do automatic or semi-automatic key rotation with some new header (e.g. "X-New-Key: [...]" that's signed with the old).

Headers aren't part of an encrypted or authenticated body, so this is trivial to perform a key replacement attack against.


Or MIME part, or change the spec to include some headers (I thought it did?) – that's not an important detail here.


> Headers aren't part of an encrypted or authenticated body, so this is trivial to perform a key replacement attack against.

DKIM can be leveraged for that, although DKIM is one hell of a gun to give someone to shoot themselves.


DNS records aren't part of the encrypted or authenticated channel, so SSH is trivial to perform a key replacement attack against?


Sorry, is this a rhetorical question? I thought the fact that SSH does TOFU was (somewhat) common knowledge, which is why it spits out all kinds of scary MITM warnings when a host fingerprint changes.

If you're connecting to an SSH server for the first time and don't already have a pre-established host fingerprint, then yes: someone who controls your server's DNS records can redirect you to another SSH host, which you'll then (presumably) enter your password into.


> which you'll then (presumably) enter your password into.

One of the many arguments for using pubkeys, so that's all they'll get. Nevertheless, the rest of the session could still be anything, and agent forwarding should never be used for untrusted hosts.


Is it a coincidence that the tech you mentioned includes CIA honeypots e.g., protonmail?

gnupg looks like something that can actually keep your secrets (at rest; otherwise, a less-likely-to-be-backdoored convenient option is Telegram).


I'm not sure what you were trying to say here about Telegram, but they are completely unencrypted; for nearly all intents and purposes, messages are stored in plaintext on the server. They just succeeded impressively in twisting that fact away via marketing.


It has been discussed to death already. There is e2e encryption if you need it. "completely unencrypted" is just false.


Are you sure? Please share your evidence. https://www.reddit.com/r/ProtonMail/comments/14demhj/debunki...


A random Reddit post is hardly evidence. My personal opinion is that they sell snake oil (a.k.a. secure email).


When you are dealing with the trillion-dollar war propaganda machine, any link one might provide will be drowned out by noise "debunking" it.

Do your own research. I found the initial links on hacker news.


What links?


>We have far superior communication channels that are secure, easy and private today (Signal, Matrix, WhatsApp, iMessage)

These are valuable tools, but they aren't usable on the command line the way gpg is. Signal doesn't help me sign an update to my software package.

Also I think the web of trust is a pretty cool concept, in theory at least. I don't know if anything outside GPG supports it.


I'm using FairEmail [0] as a client on Android, which supports PGP, and I am quite happy with it.

[0] https://email.faircode.eu/


There have been many attempts, usually formed by ignoring the inherent difficulty of creating secure communications while fundamentally being the exact same UX as GPG. It's genuinely incredibly tedious to see there is a new "alternative" and find out once again folks are over-promising and under-delivering.


> Is anyone else kinda hoping that GPG/PGP loses enough respect in the tech community that something fresh comes along that really solves a lot of the UX and security issues they have?

This already exists. It's just not a single, all-in-one tool - the answer is to use a tool that's fit for the specific purpose you're trying to accomplish (secure messaging, encrypted backups, encrypting application data, etc.)

There is not, and never will be, a modern one-size-fits-all approach because the entire industry has moved on from that model - over the last 30 years, we (collectively) have learned that it's inherently insecure to design a tool to do so many disparate things and expect people to use it correctly each way.


All of these use cases are encrypting files, some of them with a few extra steps as sauce. Stuff like WhatsApp / Signal is the exact UX that GPG has, fixed by instead ignoring everything that's hard (trust). The asymmetric cryptography is not fundamentally novel or interesting, and the end result of it could be applied to literally anything if they allowed you to touch your own things (which they don't). These modern solutions are built on the infantilisation of their own users, nothing else.


That's exactly the wrong way to look at it. Everything is potentially a file, but not all cryptosystems have the same use cases. The needs of message encryption (forward and future secrecy, for instance) are not at all like the needs of backup encryption (deduplication, for instance). This is one of the biggest things wrong with the PGP model of retrofitting cryptography onto problems, and why it has virtually never been successful at any of them.


I think if there was interest in it there would be multiple OpenPGP implementations.

Most people don't even care about security. You can't protect people who won't care to protect themselves.


They exist, albeit not 1:1 compatible in all aspects. That one in Thunderbird, Sequoia.


I'm just saying, I've used and tried to get people to practice WoT with OpenPGP on the order of a decade or more. There simply isn't a demand for protected communications because normies haven't started suffering at the hands of government for online shenanigans yet.

After a while, you sort of want bad things to happen so society will move forward... Humans are so reticent to act to protect themselves from a threat they don't see or understand the capabilities of.


Normies use encrypted communication way more than the cumulative historical usage of GPG or PGP: Signal, WhatsApp, iMessage, and now Facebook Messenger all offer better privacy for the average person than GPG or PGP ever did, and are used by orders of magnitude more people.


Tall claims require tall proof. Where's the source code proving this?

It doesn't pass the smell test. FB has been caught numerous times mishandling data. Apple is a walled garden you can't trust, same with WhatsApp. Unless Signal is liberally licensed, we can't verify privacy there.

Also, using a phone number as an ID is a de-anonymizing technique. Industry absolutely does not do this correctly, or they wouldn't be a juicy source of analytics data. Governments court these companies to exfiltrate data they have on people. If the data were adequately protected, they wouldn't be able to do this.

Sending something over HTTPS is encrypted but it's not private to the other end. Nice sleight of hand but I actually understand what I'm talking about.


Anyone technically literate considering iMessage for secure comms should probably read this.

https://news.ycombinator.com/item?id=38537444

In any case, it's better than using plaintext. It's probably safe enough against non-nation-state actors.


I read about the WoT and I found the idea fascinating, but I've never used it myself. Would love to hear anything you have to share about using it in practice.


> Acquiring keys, rotating keys, identifying compromised keys, and most importantly either reaches a large enough percentage of emails sent that usage of it is not in itself an immediate flag to monitor or can be implemented as a side channel not directly including the signature in the email payload itself.

You are describing S/MIME?


Kinda. But S/MIME has its own problems[0], mostly related to you as a recipient being unable to choose who is authorized to send you encrypted email (and so spam and malware filters don't work).

On top of that, GPG and S/MIME's support of encrypted-at-rest email is, imo, a fool's errand. Handing a payload of data to a third party that the recipient can eventually query to retrieve makes it much easier to grab a hold of and try to decrypt in the future. The same is true of SSL to an extent, but SSL traffic is much more voluminous, such that saving all of it to eventually crack and decide if there's anything worthwhile in it is unlikely.

The only real way to transfer private data between two users is to do it live with an ephemeral channel, whether that's in-person or via SSL or etc. The only value I see in GPG and friends is in verifying authenticity of the contents - signing the email - not encrypting those contents. Email has, and always will be, an open protocol, for better or worse.

[0]: https://en.wikipedia.org/wiki/S/MIME#Obstacles_to_deploying_...


> mostly related to you as a recipient being unable to choose who is authorized to send you encrypted email (and so spam and malware filters don't work).

That's a problem with all encryption anyways. Inspection has to be done at the end-user's device. So I don't think it's fair to hold that against S/MIME.

> On top of that, GPG and S/MIME's support of encrypted-at-rest email is, imo, a fool's errand.

If it can be done with E2EE messaging apps, sure it can be done with email. Long-term storage is a really difficult problem anyways.

> The only value I see in GPG and friends is in verifying authenticity of the contents - signing the email - not encrypting those contents. Email has, and always will be, an open protocol, for better or worse.

To some extent I agree. A ubiquitous deployment of digital signatures would already solve a bunch of problems, and most of the rest are handled by transport encryption.


> That's a problem with all encryption anyways. Inspection has to be done at the end-user's device. So I don't think it's fair to hold that against S/MIME.

I don't think that has to be the case, though. Protocol negotiation is a thing; SSL negotiating the version of the protocol to use, HTTP 1.1 -> 2.0 negotiation, etc.

You could imagine a mail protocol that starts at an encryption level, then during the negotiation process when the mail-to-be-delivered provides a public key, the recipient server can check that key against a listing of keys accepted by the user, and if not included, attempt to negotiate down to an unencrypted version of the email.

The sender of the email could choose to not allow that downgrade and get an undeliverable mail error, or they could choose to allow the downgrade to plain text/html email. This could then be run through the standard spam/malware filtering as usual and the spam eliminated, while email that came from already trusted email can skip those filters because the user has already judged them worthy of accepting and keeping the communication private.
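As a sketch of that hypothetical negotiation (everything here is invented for illustration; no such protocol exists), the recipient-side decision might look like:

```python
def negotiate_delivery(sender_key_fpr: str,
                       accepted_fprs: set[str],
                       sender_allows_downgrade: bool) -> str:
    """Hypothetical recipient-side policy, names invented for this sketch.

    Returns how the message should be delivered:
      - "encrypted":     sender's key is on the user's accept list,
                         so it may skip spam/malware filters
      - "plaintext":     downgraded so standard filtering can run
      - "undeliverable": sender refused the downgrade
    """
    if sender_key_fpr in accepted_fprs:
        return "encrypted"
    if sender_allows_downgrade:
        return "plaintext"
    return "undeliverable"
```

The interesting property is that the spam/malware tradeoff becomes an explicit, per-sender choice rather than an all-or-nothing property of the protocol.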

So I don't think that's an intrinsic difficulty of all encryption schemes for email, but...

> If it can be done with E2EE messaging apps, sure it can be done with email. Long-term storage is a really difficult problem anyways.

So first I'll state that I don't think all E2EE messaging apps reach the following bar, either, but the difference between an ephemeral SSL-encrypted communication channel and an email, fundamentally, is that the ephemeral channel won't be written to a disk somewhere, while the email will.

The window in which it is possible to get a copy, and the difficulty in obtaining it, is much more in favor of secrets staying secret in the ephemeral channel than in encrypted email. The data payload persists longer, and is likely decryptable by the same private key across many emails, so getting the emails and getting the keys is much easier than with an ephemeral channel that generates a temporary set of keys on each connection and never persists any of it to disk. (Storing the communication in the hope of eventually grabbing the keys from the user's machine, by virus or social engineering or just plain ol' physical theft, doesn't even make sense there the way it does with GPG-encrypted email.)


When some MUAs announced Autocrypt support a few years ago I got excited again, but unfortunately nothing came of it. I haven't been able to auto-update my key on my partner's MUA; it seemed 'support' meant different things to different projects, but none focused on the goal: making PGP-encrypted email usable (and more secure).


Yeah we really had a shot there.

There was a year or two where I actually began to receive autocrypt headers from random people and started encrypting with them.

Unfortunately, this momentum was completely killed by Thunderbird, which decided to remove support and proceeded to reimplement traditional manual email encryption :(


> something fresh comes along that really solves a lot of the UX and security issues they have?

I'm working on this! It's called Stamp (https://stamp-protocol.github.io/); it takes a lot of the issues I've had with PGP/GPG and addresses them in a more modern refresh. It's definitely not simple, but my hope is that having sane defaults and writing good interfaces will help with this.

Unfortunately it just went through a rearchitecting and the docs are horribly out of date, but the basic concept persists. In the current version, instead of having different hardcoded key types (alpha, publish, etc), there's now the concept of "admin keys" and "policies." Policies decide what keys can do what as far as managing the identity, so it's possible for instance to have a policy that gives a key god powers, or a policy that says "if three of these four signatures match, the entire key and policy set can be replaced" (aka, multisig recovery mechanisms). Also, in the current version, "forwards" have been entirely removed and replaced by claims.

The goal is to use this as a means to act as an identity in p2p systems. My issue with p2p systems is that they always punt on identity, making it a function of your device and some randomly generated keypair. That said, Stamp can definitely be used more generally.

Right now I'm focusing on the underlying network that syncs identities between devices and also stores identities publicly, circumventing the need for keyservers and all that stuff.


I'd love to see something displace GPG, particularly the "pipe to an external application" usage paradigm that's just awful in so many ways. At the same time I'm not ready to give up the only cryptosystem that we know to have actually thwarted the NSA in real-world conditions, particularly when so many of the proposed alternatives require absolute non-starters like publishing your phone number or using an unverifiable auto-updating client application.

But fundamentally there's no money in it. GPG is Like That largely because it's maintained by a grand total of one (1) guy, which is all the OSS community can afford to fund. Who's going to pay for a crypto platform, except when it's the CIA (or equivalent) sponsoring useful idiots?


Nice writeup.

It is a serious problem that the ecosystem is held back by wasting resources on personal disputes with immediate consequences for end users.

Hate on OpenPGP all you want, it still is an important technology with unrealized potential and growth.


It’s not actually clear from reading through this document or others, or the linked email threads, etc. that there is a personal dispute at play, or what that might be. I’m also not sure who the target of this document is, or who wrote it. It’s also not clear what the forcing function behind the strong recommendations at the end is — will the author fork GnuPG in the event a resolution can’t be reached?


It doesn't sound like a personal dispute to me, it sounds technical. One camp (Open) wants to move faster and break backward compatibility, the other (Libre) wants to move slower and maintain backwards compatibility


> One camp (Open) wants to move faster and break backward compatibility, the other (Libre) wants to move slower and maintain backwards compatibility

There is no breaking of backward compatibility. The crypto-refresh draft and the LibrePGP draft are equally backward-compatible.

See 'A Critique on “A Critique on the OpenPGP Updates”':

* https://blog.pgpkeys.eu/critique-critique

Both groups would create a new format (Libre = v5; crypto-refresh = v6). v4-only software wouldn't be able to handle either new format, and newer software could presumably be told to create files in the older format.
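For a concrete sense of what the version split means at the byte level: the version is just an octet at the start of a key packet's body. A minimal sketch, handling only the two simplest header encodings from RFC 4880 (real keys can use multi-octet lengths, which this deliberately rejects):

```python
def pubkey_version(data: bytes) -> int:
    """Return the version octet (4, 5, or 6) of a leading public-key packet."""
    first = data[0]
    if not first & 0x80:
        raise ValueError("not an OpenPGP packet")
    if first & 0x40:                    # new-format header: tag in bits 0-5
        tag = first & 0x3F
        if data[1] >= 192:
            raise NotImplementedError("multi-octet length not handled here")
        body_start = 2                  # one-octet body length
    else:                               # old-format header: tag in bits 2-5
        tag = (first >> 2) & 0x0F
        length_type = first & 0x03      # 0/1/2 -> 1/2/4 length octets
        if length_type == 3:
            raise NotImplementedError("indeterminate length not handled here")
        body_start = 1 + (1, 2, 4)[length_type]
    if tag != 6:
        raise ValueError(f"expected a public-key packet (tag 6), got tag {tag}")
    return data[body_start]
```

So "supporting both v5 and v6", as Proton and RNP do, mostly means branching on that version octet and applying the right fingerprint and signature rules for each.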

The Proton folks are choosing to support both v5 and v6:

* https://github.com/ProtonMail/go-crypto/pull/182

As is the Thunderbird/RNP team:

* https://github.com/rnpgp/rnp/commit/fdfc1f5bb11d439e35f3c855...


It seems like maintaining backwards compatibility would be important for something that otherwise irreversibly encrypts your data.


The correct way to maintain backwards compatibility in those contexts is to decrypt and re-encrypt, not support broken ciphers or weak modes of encryption indefinitely. The latter is security theater.


A read-only operation should not cause an insane amount of writes. This is perilous for a great many reasons, one of which is the risk of data corruption should something go wrong.


You're thinking about this the wrong way: if your data needs to be secure, then it's already perilous to keep it around with weak or broken encryption. Security models where data is too important to risk encryption upgrades but not important enough to encrypt correctly are internally incoherent.

(This is sidestepping the other parts of the comment that don't make sense, like why a single read implies multiple writes or why performing cryptographic upgrades is somehow uniquely, unacceptably risky from a data corruption perspective.)


No, I think you're thinking about it the wrong way: write failures are common. The failure mode for a bad disk is often that reads will succeed and writes will lose data. Something that silently writes like this is increasing the risk of data loss.

It probably depends a lot on the application, but I think it's often much better to have something that will warn the user about security risks and let them decide what to do with that risk. If you do design something with these silent writes, you absolutely need to think hard about failure cases and test them, and not handwave them away. Having the most "secure" data be corrupted is ultimately an unacceptable outcome.

That's not even getting into the other problems, such as ... is it ok for the user to take a performance hit of writing X GB when all they want to do is read a file?


Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem. They are black boxes to each other; to confound their responsibilities is to ensure doom in your designs.

Put another way: your cryptosystem isn't responsible for saving your ass from not making backups. If your data is valuable, treat it that way.


> Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem

This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking. I couldn't have said it better myself.

What you are advocating is crypto intruding on the storage mechanism inappropriately. It's a layer violation.

I think if it's important to the end user, you could write fairly decent code at the app layer that asynchronously re-encrypts old data in a way that doesn't harm the user. That code would need to have a strategy for write failures. A basic cryptography tool should probably not have this as a built-in feature however, for a few reasons including those I've stated.


> This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking.

Again: nobody has said this.

Whether or not the tool does this in bulk, or asynchronously, or whatever else is not particularly important to me. The only concern I have in this conversation is whether it's contradictory to simultaneously assert the value of some data and refuse to encrypt it correctly. Which it is.


> Again: nobody has said this

You said it. You said that's what the cryptosystem should do. It's a bad design.


I don’t see anywhere in my comments that I either state or imply that you have to upgrade everything at once.

From the original comment:

> This is sidestepping the other parts of the comment that don't make sense, like why a single read implies multiple writes


No.

There is 100% value in being able to decrypt a 20 year old email. It doesn't matter if it is a broken cypher.

A 20 year old email has very little actionable in it, but can have value in other ways.

It is absolutely vital that a project like this, supports some means for a modern machine/stack to access older data.


This is silly. Nothing that happens with the standard or its implementation is going to prevent you from decrypting a 20 year old email. It shouldn't need saying, but one reason for that is that PGP's cryptography is schoolbook cryptography.


> This is silly

Upstream spoke of deprecated support for older emails. My response is aimed there.

And there is loads of software that will not compile on modern hardware. End users don't often have that ability to re-write, or even to easily validate a random bit of code on github.

A project like this, needs to maintain backwards operability. For decades.


Once again: this is silly, because whatever conversation we are having about the standard, your ability to decrypt old messages would not have been impacted. Standard revisions don't turn the previous standard into secret forbidden knowledge.

What's really being asked for here is the capability to seamlessly continue sending messages with the previous, weak constructions, into the indefinite future, and have the installed base of the system continue seamlessly reading them. I think that is in fact a goal of PGP, and one of its great weaknesses.


When standards remove the requirements for something after a period of obsolescence, that tends to send a message to the implementors to remove that from the software.

Users who still rely on that have to use the old software, against which there can be barriers:

- old executables don't run on newer OS (particularly in the Unix world).

- old source code won't build.

- old code won't retrieve the old data from the newer server it has been migrated to.

Things like that.

The barriers could be so significant that even someone skilled and motivated such as myself would be discouraged.


> Users who still rely on that have to use the old software, against which there can be barriers

Not all reliance is reasonable though.

Some legacy software can only do SSLv3 or lower, does that mean the rest of the internet has to carry that support around? Abso-f-lutely not.

The same applies here. If you really need that ancient stuff that loses support, repackage them in newer encryption or remove the obsolete layer. It's highly probable that information no longer needs to stay encrypted at rest anyways.


In my opinion, the Internet should not be removing support for older SSL. The highest SSL version that is common to server and client should always be used.


> The highest SSL version that is common to server and client should always be used.

That is how it works. What you're missing is that everyone, both servers and clients, agrees that supporting old SSL versions is a bad idea. And they're right.


Since I don't agree, it cannot be everyone.

More precisely, I don't agree with web clients not connecting to old servers.


Security done properly requires some sacrifices. Keeping old insecure versions working means exposing users to trivial ways of breaking encryption.


If that were the actual principle being accurately followed, the first feature to have been removed from browsers would have been plain HTTP before any version of SSL.

Plain HTTP is what people resort to when their browser refuses to connect to an old device or server using HTTPS, which is worse than old SSL.


No, because clear lack of security is better than faux security. With older SSL versions, the "security" itself creates extra risk for all clients (by leaking server secrets and allowing cipher suites without PFS).


What hogwash. You alert the user, problem solved.

The absurd idea that a user will have a 20 year old encrypted mail, because software still supports it, is ridiculous. What really happens is someone has a 20 year old mail no matter what, it will always exist, and the choice is, support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.

And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server? You literally are not thinking sensibly about any of this; your examples are paper tigers.

This stance is absurd.


> You alert the user, problem solved.

How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.

> What really happens is someone has a 20 year old mail no matter what, it will always exist, and the choice is, support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.

Well, yeah, and the choice should be to not support it. If the user needs those letters they can either decrypt or just re-encrypt them. It's silly to claim that a message can somehow be so vital that it must be protected by encryption, yet not worth upgrading to something more modern.

> And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server.

I'm not envisioning it. Nobody should be running such old useless garbage. What was suggested earlier in this thread does not work and must not happen in practice.


> How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.

Where did you get the weird idea the software isn't updated? This entire discussion is about deprecation of older encryption methods in new versions of software.

You're not giving this thought. Good day.


Yes, but that is only needed to connect with old software that has not updated. Two pieces of new software will not negotiate on using old crypto even if they both support it.

In the troublesome situations, the upgrade of one of the two pieces of software is thrust upon the user.


> lack of security is better than faux security

Only when we are doing theatre. In actual fact, lack of security is worse than inferior security.


I read the article and couldn't find any hint that previously encrypted data would be undecryptable by either branch of the fork.

Is there a real risk of this that I missed, or are you just free-associating off the term "backwards compatibility"?


The article specifically refers to backward compatibility with RFC4880, RFC5581, and RFC6687. These specifications include encryption techniques. So no. I was not just "free associating". Please do not assume the worst, because it could be that you just haven't understood the implications of the article.


On the other hand, as long as earlier versions or their sources remain available, it doesn't sound like a major problem to me. The sources would effectively be documentation for the previous format and can be reimplemented if needed.


I think the common refrain against PGP is that it shouldn't be important, because it suffers from a myriad of technical and sociological shortcomings.

The whole situation regarding key servers, key rotation, and the web of trust is a complete dumpster fire.


>The whole situation regarding key servers, key rotation, and the web of trust is a complete dumpster fire.

Can you explain why?

People elsewhere in this thread are saying that PGP sucks because it tries to do too many things at once, but it seems to me that the one big advantage of a tool which does everything at once is that you only need to solve authenticity one time for everything you do.

For example, if I'm communicating with an open source dev, having their known-authentic PGP key allows me to simultaneously verify the authenticity of their software updates, verify the authenticity of the email they send me, and encrypt my emails to them. Is there anything outside of PGP that accomplishes this?


>Can you explain why?

Well, the key servers are useless because they are susceptible to that poisoning attack from a few years ago, and they happily send you fraudulent or revoked keys.

And the web of trust doesn't scale. The trust ratings mean different things to different people, the propagation of revocation certs and signatures is slow, and rotating keys is onerous.

>For example, if I'm communicating with an open source dev, having their known-authentic PGP key allows me to simultaneously verify the authenticity of their software updates, verify the authenticity of the email they send me, and encrypt my emails to them. Is there anything outside of PGP that accomplishes this?

How often do you check the fingerprints of that key? Do you verify out of band when the developer rotates their key? (Haha just kidding, PGP users essentially never rotate keys)

If you care enough to encrypt your emails, then what is the virtue of verifying less frequently that you're talking to the correct persons?

Why wouldn't you want separate keys for all those things?

Why would you want an adversary to be able to compromise a single key and have the ability to forge commits, emails, and whatever else?


>How often do you check the fingerprints of that key? Do you verify out of band when the developer rotates their key?

I'm almost certain PGP best practice is to have a single master key, kept on an airgapped device, that's used to sign subkeys for various purposes like email, commit signing, etc. So I only have to verify out of band once, unless the airgapped device gets compromised or the master key encryption is broken.


What percentage of PGP users actually have an airgapped device and actually go through all that rigamarole?

More importantly, how do you know that your counterparty is one of that (extremely small) minority?


PGP users are a minority to begin with. I wouldn't be surprised if a lot of them do this. I think I got that rec from a PGP beginner guide I found the other month.

Don't forget about PGP smart cards either. You could keep the master key you use to sign subkeys on a smart card. A smart card should be harder to hack than your phone.

Qubes has built-in "split GPG" support that allows you to e.g. sign something using your private key while keeping it in a different VM. See https://www.qubes-os.org/doc/split-gpg/

I know PGP isn't for everyone, I just like the idea of keeping high-security options available for those who want them.

>More importantly, how do you know that your counterparty is one of that (extremely small) minority?

You could ask them :-)


>I know PGP isn't for everyone, I just like the idea of keeping high-security options available for those who want them.

But PGP doesn't provide a high-security anything.

- In order to achieve some reasonable level of protection from MITM attacks you can't just get someone's key from a keyserver. You have to go hunting for it and you're never really sure if there's a revocation cert out there that you just missed.

- Some people publish PGP keys on their websites, and you could use that to contact them over encrypted email. You are still vulnerable to metadata analysis and unless you manually re-key on every message (which you don't), you don't enjoy forward secrecy. Additionally, all it takes is one oopsie moment for someone to Reply-All and forget to encrypt first and now the entire conversation went out unencrypted. This has happened to me.

- You claim there's some unspecified benefit to signing commits with the same key you encrypt your emails with, though I don't see why that's superior to signify/minisign

- Best practices demand that you keep an airgapped machine with a long lived master key on it. No mention is made of how to prevent BadUSB-type attacks from jumping the air gap. If you really want to be sure nobody mints their own key from your airgapped machine to impersonate you, you now need to monitor your machine. That raspi in a drawer is still vulnerable to Evil Maid attacks, and the worst part is you won't know someone's impersonating you until it's too late.

All this attack surface just for the purported convenience of having some kind of unified "crypto identity" wherein you only need to verify someone once?

These are not the characteristics of a high-security system. This is why people think PGP is for security LARPers. It's objectively just not a very good tool.


>- In order to achieve some reasonable level of protection from MITM attacks you can't just get someone's key from a keyserver. You have to go hunting for it and you're never really sure if there's a revocation cert out there that you just missed.

This is a convenience consideration, not a security one.

>- Some people publish PGP keys on their websites, and you could use that to contact them over encrypted email. You are still vulnerable to metadata analysis and unless you manually re-key on every message (which you don't), you don't enjoy forward secrecy. Additionally, all it takes is one oopsie moment for someone to Reply-All and forget to encrypt first and now the entire conversation went out unencrypted. This has happened to me.

These are problems with encrypted email, not PGP. Encrypted email is not the only way to use PGP. See e.g. https://news.ycombinator.com/item?id=38575764

>No mention is made of how to prevent BadUSB-type attacks from jumping the air gap.

You could write newly minted keys to single-use CD-Rs.

>If you really want to be sure nobody mints their own key from your airgapped machine to impersonate you, you now need to monitor your machine. That raspi in a drawer is still vulnerable to Evil Maid attacks, and the worst part is you won't know someone's impersonating you until it's too late.

If physical access is part of your threat model, you'll want to monitor access to your stuff anyways.


Wake me up when they finally offer a scalable key management system. PGP always punted on key management, and as a result has been a perpetual market flop. It always seemed to cater to people willing to give up all convenience for perfect security: the kind of paranoid who can't trust a key unless it was handwritten on paper and they verified your photo ID in person before accepting it. It's really frustrating for people who want a system that's good enough to always be there and providing passive benefits, even if there is a theoretical case where a nation state might MITM your communications if they got there before you knew the person.


And yet, without it, the Snowden leaks wouldn’t have happened. I mostly agree with you, but it’s worth pointing out that they achieved their goals; rare for a security product.


Yeah, I dislike the idea of prioritizing a crypto standard that doesn't work against nation-states. Some of us like our privacy.

That said, I agree with jandrese's point about key management -- low-security key management solutions shouldn't prevent people from taking a high-security approach. Seems like the ideal situation would be if the really paranoid people end up naturally testing the low-security infrastructure as a side effect of doing their paranoia stuff. E.g. if the gpg client was set up to automatically report discrepancies between a user's personal web of trust and a presumed-authoritative keyserver.

That said, it's unclear to me why Signal wouldn't have worked for Snowden -- interested to explore details here.


> Wake me up when they finally offer a scalable key management system. PGP always punted on key management, and as a result has been a perpetual market flop.

What is the alternative KM architecture in a distributed system? Remember that for e-mail there is no central authority to handle assigning keys to individuals, unless perhaps you want Gmail and Outlook.com to handle that.

The other e-mail security system is S/MIME (which uses X.509).


It's not hard to imagine a system where you can contact the server in the MX record for a domain over HTTPS (really just TLS) and query it for a specific email address to get the public key for that user.

Sure if someone knocks over TLS then your email encryption will be in trouble, but you will also have plenty of other problems at that point.


> query it for a specific email address to get the public key for that

This is actually possible for OpenPGP, with WKD ("Web Key Directory"): https://datatracker.ietf.org/doc/html/draft-koch-openpgp-web...

It's an expired draft, but is relatively widely supported: https://wiki.gnupg.org/WKD#Implementations


In a draft published in May of this year. This should have been a draft in the 90s.

It should have been in mail systems forever. Whenever you create an account the first email should have been the server sending you your private key in some format that every email client understands and would then prompt you to automatically install it in your client for that server.

    Welcome to your new email account!  
    
    Your private key is attached in the standard format.  

    When your client asks you to install the key say "Yes", and allow it to automatically sign your email using that key for this account.

    Don't forget to check the box to automatically encrypt mail when possible.
If email worked like this I bet secure POP and IMAP would have been implemented much faster than they were in real life.


The first draft was published in 2016: https://datatracker.ietf.org/doc/html/draft-koch-openpgp-web...

Even before that, there were other mechanisms proposed for this, such as https://datatracker.ietf.org/doc/html/draft-shaw-openpgp-hkp... (from 2003). Unfortunately it didn't catch on; I agree it would've been nice to have such a mechanism earlier.


Openpgp keys in DNS, 2016: https://www.rfc-editor.org/info/rfc7929

Certificates in DNS, 2006: https://www.rfc-editor.org/info/rfc4398
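For reference, RFC 7929 derives the DNS owner name by hashing the local part with SHA-256 and truncating to 28 octets. This is a hedged sketch; the RFC also specifies how the local part is canonicalized, which is skipped here:

```python
import hashlib

def openpgpkey_owner(address: str) -> str:
    """OPENPGPKEY owner name per RFC 7929 (simplified sketch)."""
    local, domain = address.split("@", 1)
    # SHA-256 of the local part, truncated to 28 octets, hex-encoded
    label = hashlib.sha256(local.encode()).hexdigest()[:56]
    return f"{label}._openpgpkey.{domain}"
```

You could then query the record with something like `dig OPENPGPKEY <name>`, assuming the domain publishes it (and ideally signs it with DNSSEC).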


Why not use the SRV record, which was created for exactly this purpose (advertising services), and use port 11371/tcp, which is registered with IANA for the OpenPGP HTTP Keyserver? Why create new non-standard mechanisms when these already exist? One could even use a different port and protocol with SRV records.


What if your mail server decides to advertise their public key instead of your public key, allowing them to read all of your email with you being none the wiser?


One way to solve this problem is Key Transparency, which aims to provide a mechanism to verify that you're receiving a legitimate key, somewhat analogous to Certificate Transparency.

We've implemented this at Proton: https://proton.me/support/key-transparency (although it's still in beta, and opt-in for now - but obviously the aim is to enable it by default).

There's also a (relatively new) working group at the IETF, to work on standardizing (a version of) this: https://datatracker.ietf.org/wg/keytrans/about/.


Then you might want to change email providers.

Really paranoid folks could set up services online that check for this, but I kind of doubt it would happen very often because it would be a major stink for that email service if they were caught, and catching them isn't that hard.


Did you just invent keyservers??? WOW!!!! I wonder if next you will invent the ftp protocol or some other thing that has existed for decades.


Discoverable keyservers are novel, I think.

The problem with existing keyservers is that there are several of them and you never know which one someone's public key might be living in. There may even be multiple potential keys for a single email address across the different servers. They are effectively useless for email encryption in their current form. It's a very rare email client that will even query the most popular ones looking for someone's key. In fact I don't know of a single email client that does.


I've been using GPG to symmetrically encrypt most of my secrets, like OPT seeds and stuff like that, and then save it in some accessible places.

Just yesterday I was frustrated that gpg didn't work very well in my terminal for some reason (the barrier between entering the data and the password was weirdly permeable, i.e. sometimes some of my password ended up in the data, and more often a lot of what I pasted ended up in the password prompt).

So I found openpgp.js and made a little website where I can encrypt and decrypt my secrets.

I used that particular library because it feels safe that the things I have encrypted can be decrypted with a tool that is accessible everywhere... as long as I can remember the passwords :)

But I wonder, are the defaults for symmetric encryption in openpgp not considered safe anymore, or how should I interpret some of the other comments here?


...are the defaults for symmetric encryption in openpgp not considered safe anymore, ...

That would be AES. The traditional OpenPGP authenticated encryption (OCFB-MDC) is secure. There has been some widespread misunderstanding, I wrote a rambling editorial against the idea of superseding the block cipher mode:

* https://articles.59.ca/doku.php?id=pgpfan:no_new_ae


>OPT seeds

What's that?


Sorry, meant OTP


The most valuable feature of PGP is establishment of a long term online identity bound to a set of keys.

That is perhaps what all of these replacement schemes fail to realize.

I really wish "Login with PGP" were a thing, using a subkey for each website, with the optional ability to hide the identity of a particular website in my own keychain. You know, sorta like passkeys, but I don't have to have Google/Apple/etc involved and I can actually inspect the magic behind the scenes.


This is a perfect encapsulation of the gulf between PGP enthusiasts and cryptography engineers, because this "long term online identity" attribute is not only one of PGP's biggest misfeatures just in an implementation and design sense, but also a devastating weakness of the system for its most important intended application (exchanging messages between humans). PGP's key management system is literally the last thing you want in a messaging system.


> also a devastating weakness of the system for its most important intended application (exchanging messages between humans). PGP's key management system is literally the last thing you want in a messaging system.

Strong words, but why?


It's a devastating weakness only if you use it in a very certain manner.


> The most valuable feature of PGP is establishment of a long term online identity bound to a set of keys.

PGP doesn't do this: a PGP key carries a claimed identity, but actually verifying that claim is left to the end user. That's why the WoT and strong set were important (before they violently collapsed, revealing that they weren't load bearing after all).

Other schemes do realize this, and it's why they make tradeoffs around trusted parties (CAs in the Web PKI performing domain validation, EV for code-signing, etc.).


> That's why the WoT and strong set were important (before they violently collapsed, revealing that they weren't load bearing after all).

I hadn't heard they violently collapsed; do you have any links where I can learn more about this?


https://inversegravity.net/2019/web-of-trust-dead/

https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d695...

Also, as someone who went to a few signing parties: the strong set was probably never as strong as some people thought.


> "Login with PGP" was a thing and use a subkey for each website

How well would such sites handle PGP subkey revocation? What about PGP key revocation?

Revocation is very important if your key is compromised.

I haven't seen any really maintained PGP keyserver, or service in general, that didn't directly or indirectly (by user/agent mistake) fail spectacularly since https://evil32.com/ was released and contaminated the 32-bit key ID space.
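The evil32 attack worked because a "short" key ID is just the low 32 bits of the key fingerprint, which is cheap to brute-force a collision for. A sketch of the truncation (the fingerprint below is just an example value):

```python
# A v4 OpenPGP fingerprint is 160 bits (40 hex chars). The "long" key ID
# is its low 64 bits, and the "short" key ID only its low 32 bits --
# evil32.com brute-forced keys whose fingerprints share those last 8 hex
# chars with well-known keys, so short IDs identify nothing.
def long_key_id(fingerprint: str) -> str:
    return fingerprint.replace(" ", "")[-16:].upper()

def short_key_id(fingerprint: str) -> str:
    return fingerprint.replace(" ", "")[-8:].upper()

fpr = "EB85 BB5F A33A 75E1 5E94 4E63 F231 550C 4F47 E38E"  # example fingerprint
print(long_key_id(fpr))   # F231550C4F47E38E
print(short_key_id(fpr))  # 4F47E38E
```

This is why modern tooling insists on comparing full fingerprints, never short IDs.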


If they want to jump the shark by crashing GnuPG into a compatibility reef, I won't be upgrading. "Don't break shit" is the first rule.

PS: Proton hosting private keys is, to me, a nonstarter. It makes their entire platform and investment in GPG pointless.


> "Don't break shit" is the first rule.

It's not an important rule if security is your primary concern. Because if you "don't break shit", the first thing any adversary will do is try to downgrade things to an old, insecure state.

The way to counteract that is to refuse to downgrade, which obviously breaks shit if one of the sides can't speak the new protocol.


They want to change the protocol, for no clear advantage, not add bits to the encryption or change algorithms.


OpenPGP is in general an awful format, it needs a huge refresh.

The whole format for encapsulating PGP data is extremely complex and in some parts insecure. That's an excellent reason to rethink the problem.

In my understanding, this is not about "adding bits" or changing algorithms, it's about the level above. The packet format that says "This is the algorithm, this is the hash being used, this is the data, etc, etc".


Destroying compatibility for a new protocol that isn't really proven to be more secure…


Exactly. Small changes and a transition plan, not throwing everything that's established away.


I still don't think you understand what the conversation is about. It's not about things like AES, where there might be some hidden flaw in the crypto scheme. It's about basic, glaringly visible issues. Eg, from https://www.latacora.com/blog/2019/07/16/the-pgp-problem/

"The PGP MDC can be stripped off messages – it was encoded in such a way that you can simply chop off the last 22 bytes of the ciphertext to do that. To retain backwards compatibility with insecure older messages, PGP introduced a new packet type to signal that the MDC needs to be validated; if you use the wrong type, the MDC doesn’t get checked. Even if you do, the new SEIP packet format is close enough to the insecure SE format that you can potentially trick readers into downgrading; Trevor Perrin worked the SEIP out to 16 whole bits of security.

And, finally, even if everything goes right, the reference PGP implementation will (wait for it) release unauthenticated plaintext to callers, even if the MDC doesn’t match."

TL;DR, the packet format has "${DATA}${SIGNATURE}", in such a way that you can strip $SIGNATURE, do whatever you like with $DATA, and it'll go through because this is a backwards compatibility mechanism. Former versions didn't have $SIGNATURE, so any attacker can just get rid of it, and problem solved.

This is I repeat not a case of "maybe there's some PHD level math we're unaware of", but absolutely glaring issues.

You don't need a genius to write a formal proof of why allowing people to strip signatures is a bad thing.

And it's a problem solvable by throwing out the compatibility scheme and adding a hard requirement.
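A toy model of the downgrade (not the real OpenPGP packet grammar, just the shape of the problem): append an integrity tag to messages, but let a backward-compatible parser accept tag-less "legacy" messages, and the tag becomes strippable:

```python
import hashlib
import hmac

KEY = b"shared-key"   # stand-in for the message key
TAG_LEN = 32

def protect(msg: bytes) -> bytes:
    # "New" format: body followed by an integrity tag (HMAC as a stand-in).
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

def parse(data: bytes, allow_legacy: bool) -> bytes:
    body, tag = data[:-TAG_LEN], data[-TAG_LEN:]
    if hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        return body                      # tag checks out
    if allow_legacy:
        return data                      # "old" format has no tag: accept as-is
    raise ValueError("integrity check failed")

blob = protect(b"pay alice 10")
stripped = blob[:-TAG_LEN] + b"0"        # attacker drops the tag and tampers: 10 -> 100

print(parse(blob, allow_legacy=True))     # b'pay alice 10'
print(parse(stripped, allow_legacy=True)) # b'pay alice 100'  <- tampering accepted
try:
    parse(stripped, allow_legacy=False)   # strict parser refuses the downgrade
except ValueError as e:
    print("strict parser:", e)
```

The fix is exactly what the parent says: make the tag a hard requirement and drop the legacy path, which is what a breaking format revision buys you.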


This sucks. At the same time, only experts use OpenPGP so I guess maybe it's not a big deal?

I've been using PGP for decades at this point and I'm still somewhat surprised when I encounter security professionals that either can't figure out how to use PGP or just ignore it because "its too complicated, and I already have Signal". I have no beef with encrypted messenger services but it's not the same use case as PGP for email and other needs.

This also hints at a separate cultural divide. Young people only know email as that thing you need in order to setup other accounts. They use IM for the bulk of their real communication and at best consider email something they only use for work. Only old farts think of email as something they use for personal correspondence. And frankly, most of the people I interact with are younger than me so most of my communication has shifted to where the people are that use it, which is IM services.


Experts avoid PGP like the stomach flu. It'd be interesting if you could find a single cryptography engineer of note that has spoken up for it in the last 10 years.


This is all amusing. Not that it matters, but when I said "expert" I really meant any user that understands how to use PGP. Those of us born during the construction of the pyramids know how to use it because it was the only option that existed back then, and we still have that knowledge despite other options existing. Signal and other options are great, but you can't encrypt arbitrary data with it. PGP is really a different use case these days. And the ancients that still write emails.


Experts like certificate authorities, which have proven themselves to be completely unreliable several times.


I'm not aware of any experts who like individual certificate authorities. Are there some that I'm missing?

I think perhaps what you mean to say is that experts like the x509 PKI. Which again, isn't quite true. You'll find plenty of experts pointing out that major parts of x509 and the PKI ecosystem, and of TLS and HTTPS, are garbage: ASN.1 parsing is a trashfire, protocol/cipher negotiation has had numerous critical flaws, things like compression have allowed traffic to be decrypted.

What I think is more accurate to say is that experts have invested heavily in finding ways to augment web infrastructure to remove broad categories of these flaws, and the result is a generally recommendable system. This includes things like moving to TLS 1.3, enforcing that CAs participate in Cert Transparency logs, delisting CAs that misbehave, and adjusting browser behavior to avoid security pitfalls like mixed content that compromise users.

The problem with doing that for GPG is partly that its fundamental nature is not well aligned with making those kinds of changes, and partly that (as we see in the original post) GnuPG is resistant to making changes that would leave behind users of insecure codepaths and cryptography.


I would like you to name a cryptography engineer of any note that advocates for the underlying technical details of the X.509 CA system. For that matter, I'd be interested in whether you could name one of any note that works for any CA, LetsEncrypt possibly excluded.


Why do you, and your supposed cryptography engineers, talk shit about everything that is actually used instead of doing something useful like making those things better or providing alternatives?

It's getting really hard to take you seriously. You hate gpg, you hate pki and you always claim to know better than industry standards without providing any details whatsoever.


What would you say https://www.latacora.com/blog/2018/04/03/cryptographic-right... and https://www.latacora.com/blog/2020/03/12/the-soc-starting/ are, if not "doing something useful like making those things better or providing alternatives"


Grandstanding blog posts? Where is the superior pgp or x509 replacement?


I think we've bottomed out this thread, if somebody saying something negative about something you like is "talking shit" and saying something positive about things you don't like is "grandstanding".


I don't know where "you hate PKI" comes from. Certainly I dislike X.509! It's a terrible protocol/format, and I doubt even its own designers would repeat the mistake. But I use the WebPKI, and have spent most of my time on HN talking about it defending it.

I'm pretty comfortable with who does and doesn't take me seriously, for what it's worth. You don't have to if you don't want to.


Okay you hate X509. Should we just ditch https then?


It's as if you stopped reading 1/3 of the way into the comment you're replying to.


What? You want me to acknowledge this?

> I'm pretty comfortable with who does and doesn't take me seriously, for what it's worth. You don't have to if you don't want to.

Cool. Happy for you bro.


Serious question: does this matter? The PGP key ecosystem is microscopic in 2023, and is actively atrophying. Consensus among cryptographers is that PGP hasn’t reflected anything close to acceptable cryptographic design in decades, and its lack of adoption in serious tools and products reflects that.

Without a serious use case hanging in the balance, this is just OSS drama.


Many Linux distros rely on PGP.

Most of these distros are using PGP to distribute software, often binaries that are distributed to end users.

Gentoo as an example, requires all commits to the gentoo repo to be signed by a key in the "gentoo-developers" keyring. Gentoo also provides "stage3" images which are signed by the "gentoo-release" key. Since this is a source based distro there are no binary packages, but the main ebuild repo is signed and verified upon syncing.

I know some other distros like Arch also do similar things with PGP.

Maybe you will say that this isn't actually using PGP because it's not using the "web of trust", but I don't think this is relevant, because there is much more to PGP than the WoT and those features are being used.


This is really two different worlds. PGP for signing repos is alive and works well enough. In particular the key management is a lot easier since there's really only a small handful of entities signing things and they also control the platform. If you need to add keys (for third party repos) the process is just one more step in adding the repo in the first place so very little friction.

PGP for signing email however has been a perpetual failure. Decades later they still have not figured out a reasonable key management system, which left the community fractured and unable to scale. As you note, the Web of Trust has been a disaster for everybody except the most exceptionally paranoid who can't accept anything less. And those people talk to so few other people that they don't have the scaling issues that regular people face.


> Consensus among cryptographers is that PGP hasn’t reflected anything close to acceptable cryptographic design

Because these cryptographers haven't delivered anything else that's actually usable by people who just want to encrypt the bodies of their emails?


Cryptographers and cryptographic experts overwhelmingly agree that encrypting email is a chump's game[1]. If you want secure E2EE transit, Signal is widely recommended.

[1]: https://www.latacora.com/blog/2020/02/19/stop-using-encrypte...


Signal is useless for many use cases where email is widely used. For example, it requires a phone number which just makes it useless for anything where people want to use a (possibly short-lived) pseudonym.

And note that I specifically said "people who just want to encrypt the bodies of their emails". That is still incredibly useful, even if the metadata is (obviously) not encrypted.


My understanding is that Matrix is widely considered to have acceptable design tradeoffs for pseudonymous E2EE. It isn't perfect[1], but it's miles better than the theatre of PGP encrypted emails.

> That is still incredibly useful, even if the metadata is (obviously) not encrypted.

To whom? What is the threat model in which a user who is serious about the security of their messages is better served by PGP-over-email?


PGP over email is used in the real world and used successfully.

Research about online black markets (which serves as a nice "case study") will show that the main form of communication between people there has been pasting a PGP encrypted message into a text box and sending it, and sometimes literally using email.

In a situation like this, where anonymity and privacy are critical, signal is not even remotely an option because it requires a phone number.

Matrix may be possible to use, but it would require that someone run a Matrix server that allows people to make accounts with no information required for sign-up, and allows signup and use of the server over Tor, all of which is unlikely.

PGP over email or pasted into a web form is simple, it doesn't require signing up with a phone number or any PI, it can be used over tor, and it can be done with basic utils installed on almost any Linux distro.

I suspect a lot of cryptographers have not done research about what goes on "in the real world" or "in the wild". It would be interesting for them to set up a mock situation where two people attempt to send messages to each other without doing anything that could identify them in any way, completely anonymously, to emulate what using these tools in the real world would be like for a whistleblower or journalist or whatever.


It's worth reading through the criminal complaints against various Silk Road people[1]. You'll notice two things: (1) the government nails these people despite encrypted messages, because their email metadata is more than sufficient, and (2) people whose literal lives depend on using PGP correctly fail to do so (e.g. by sending some messages without encryption, or forwarding previously encrypted messages as unencrypted).

PGP is not a serious answer here.

[1]: https://www.justice.gov/sites/default/files/opa/press-releas...


There's a selection effect here -- the criminal complaints are the people who got caught. And drug dealers aren't exactly known for technical literacy.

I think the usability complaints here are very valid, but chlorion has a good point that PGP still wins for particular use cases. Heck, if you're really paranoid [and metadata is not your primary concern], use PGP over Signal. I don't think any of the alternatives proposed in this thread are chainable in this way like PGP.


There's a problem here: we can't make falsifiable claims about the criminals who aren't caught. I could just as easily say the same thing about Signal!

There's one critical difference, however: governments have identified Signal and its ilk specifically as a threat to their intelligence gathering capabilities. They don't talk this way about PGP; the public signals overwhelmingly indicate that (1) virtually nobody uses PGP for anything worth surveilling, and (2) anybody who does use it for things worth surveilling bungles it (see above). That's the government's dream!

> Heck, if you're really paranoid [and metadata is not your primary concern], use PGP over Signal. I don't think any of the alternatives proposed in this thread are chainable in this way like PGP.

This doesn't make sense. What is the "paranoid" model in which PGP provides (1) better cryptographic guarantees and (2) metadata isn't your primary concern? PGP cannot provide forward secrecy, provides all-around weaker cryptographic primitives, and is significantly harder to use correctly. It isn't a rational choice for a paranoid actor to make.


>This doesn't make sense. What is the "paranoid" model in which PGP provides (1) better cryptographic guarantees and (2) metadata isn't your primary concern? PGP cannot provide forward secrecy, provides all-around weaker cryptographic primitives, and is significantly harder to use correctly. It isn't a rational choice for a paranoid actor to make.

I think you must have misunderstood me. By "PGP over Signal" I meant PGP-encrypting messages and pasting the ASCII-armored ciphertext into the Signal client. The idea being that even if the NSA can break Signal's crypto, they might fail to also break whatever crypto you select with PGP. I should have said "both PGP and Signal", sorry for the poor communication.

I acknowledge PGP's flaws, but I like it as a ubiquitous DIY tool. I'm hoping that niche gets filled with something better. Though to be honest, I think for the "ubiquitous DIY tool" niche, forward secrecy might just be impractical.


No problem, apologies for my response based on a misunderstanding.

> The idea being that even if the NSA can break Signal's crypto, they might fail to also break whatever crypto you select with PGP.

This is an intuitive idea, but I’ll also hazard that it’s probably security theater: at a “building blocks” level, a theoretical NSA that breaks Signal’s crypto has broken the finite subgroup problem that underpins all of PGP’s cryptography as well.

(The reality is that the NSA doesn’t crack this kind of cryptography, at least not when it’s done correctly. They’re much bigger fans of exploits and implants, which they are absolutely not wasting on “ordinary” criminals.)


Hm, interesting. I don't know much about crypto math. I just typed 'gpg --version' on the command line, and it looks like my gpg has support for various public key schemes including elliptic curves. Are they all based on the same variant of the hidden subgroup problem?

Even if the math itself is bulletproof -- as you stated, there could be an implementation flaw in either the Signal code or the GPG code that effectively bypasses the math, right? See e.g. https://en.wikipedia.org/wiki/GNU_Privacy_Guard#Vulnerabilit...

>They’re much bigger fans of exploits and implants, which they are absolutely not wasting on “ordinary” criminals.

The ASCII-armor scheme I described could be helpful here too. Run Signal in a VM (e.g. with Qubes -- endorsed by Snowden). Copy/paste ciphertext in and out of the VM to GPG. Should be fairly idiotproof because ciphertext doesn't look like plaintext. Now even if the NSA sends you a Signal message that owns the VM, they still need some sort of VM escape/CPU sidechannel, or else knowledge of a vulnerability in GPG's encryption.

>The Rule of Two is a data security principle from the NSA's Commercial Solutions for Classified Program (CSfC).[3] It specifies two completely independent layers of cryptography to protect data. For example, data could be protected by both hardware encryption at its lowest level and software encryption at the application layer. It could mean using two FIPS-validated software cryptomodules from different vendors to en/decrypt data.

>The importance of vendor and/or model diversity between the layers of components centers around removing the possibility that the manufacturers or models will share a vulnerability. This way if one component is compromised there is still an entire layer of encryption protecting the information at rest or in transit. The CSfC Program offers solutions to achieve diversity in two ways. "The first is to implement each layer using components produced by different manufacturers. The second is to use components from the same manufacturer, where that manufacturer has provided NSA with sufficient evidence that the implementations of the two components are independent of one another."[4]

https://en.wikipedia.org/wiki/Multiple_encryption

As for implants, that's going to require physical or root access as a prerequisite, no?
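The layering idea can be illustrated with a toy model. The keystream below is NOT a real cipher (SHA-256 in counter mode, stdlib only, purely for illustration; a real Rule-of-Two deployment would use two independent vetted implementations). The point is just that an attacker must peel both independent layers to recover the plaintext:

```python
import hashlib
from itertools import count

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256-in-counter-mode keystream: XOR is its own inverse, so the
    # same call both encrypts and decrypts. Illustration only, not secure.
    out = bytearray()
    for block in count():
        if len(out) >= len(data):
            break
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, out))

msg = b"meet at dawn"
layer1 = keystream_xor(b"inner-key", msg)     # inner layer (the PGP stand-in)
layer2 = keystream_xor(b"outer-key", layer1)  # outer layer (the Signal stand-in)

# Breaking only the outer layer yields layer1, which is still ciphertext.
assert keystream_xor(b"outer-key", layer2) == layer1
# Only peeling both independent layers recovers the message.
print(keystream_xor(b"inner-key", keystream_xor(b"outer-key", layer2)))  # b'meet at dawn'
```

The caveat raised upthread still applies: if both layers rest on the same hard problem (or the same buggy library), the independence is illusory.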


I agree that it's error prone and far from ideal, and that most people should be using signal or matrix or something, but I'm not sure what other practical answers there are for more intense cases where people don't have access to those things.


That's the thing: Signal and Matrix are still the right answer for those cases!

Consider the ANOM[1] case: the FBI found it easier to FUD criminals into using a backdoored chat app than to break actual chat apps.

[1]: https://slate.com/technology/2021/12/fbi-fake-encrypted-mess...


> To whom? What is the threat model in which a user who is serious about the security of their messages is better served by PGP-over-email?

Sending credentials to a coworker, for example. I don't care who knows that I emailed them. I don't even care if they know what it's about (database credentials), as long as they can't get the actual credentials simply by accessing the mail server. I don't have to set up any new infrastructure (not realistic within most orgs), all that is needed is for both parties to use gpg.


What kind of fakakte organization is encouraging you to email credentials? Even the most haphazard, incompetent companies I've worked with have managed to configure an off-prem 1Password group.


Small businesses. But sure, let's hand over our credentials (and a ton of money) to the americans instead of using an open source solution that has worked fine for over a decade.


ERROR: Failed to resolve citation [1].


Yes, it does matter. PGP is by far the most common standard for signatures used in software distribution (Git tags, RPM packages, Yum repos, Deb repos, ad-hoc signature schemes, etc.), and nothing else is particularly close in terms of number of signatures verified per day. While the absolute number of verifications is a lot lower than it could be, there is still substantial inertia behind these current standards, including US government standards.

I am also aware that you are actively involved in moving ecosystems away from PGP and towards solutions in the Sigstore ecosystem, so this is definitely not news to you. I understand you want the situation to change, but for now it's disingenuous to pretend like GPG in particular doesn't have the super majority of "market" share around signatures and that drama like this wouldn't have potential impact to a lot of folks.


How, exactly, would this (waves around at current drama) impact a lot of folks? Can you be specific about it?


> Yes, it does matter. PGP is by far the most common standard for signatures used in software distribution

No, that would be bespoke PKIs on various proprietary OSes. PGP is minuscule compared to any of these; it's not even a rounding error. This is true even when you juice the numbers with things like git signing (which is not only ignored by 99.9% of clients but doesn't even sign the relevant software artifact for end users).

> I understand you want the situation to change, but for now it's disingenuous to pretend like GPG in particular doesn't have the super majority of "market" share around signatures and that drama like this wouldn't have potential impact to a lot of folks.

If you know me, then you know I've run the numbers here[1]: even ecosystems that encouraged PGP use for decades have all the hallmarks of the signatures being completely unused. End users do not verify PGP signatures, signers do not maintain their keys; PGP's tooling encourages this behavior.

I won't deny that workflows will be broken by moving away from PGP. But those workflows are (1) a tiny, tiny minority, and (2) overwhelmingly security theater, especially now that PGP's principal feature (the WoT) is dead.

[1]: https://blog.yossarian.net/2023/05/21/PGP-signatures-on-PyPI...


OK, fair point regarding bespoke PKI on Windows/Mac being more common. I do agree with that. I'll amend my statement to: PGP is by far the most common standard for server-side software distribution. I'll also quickly add that I agree that git signing is worthless as currently implemented.

I have read your blogpost and I agree that PyPI had unused and useless PGP support. I also agree that signatures are a waste of time when the client tooling does not verify by default. However, this is not true for the Linux distro packaging ecosystem where approximately all downloads have their signatures verified. Until you convince RedHat and Debian to move their packaging and repository formats to another standard, PGP will remain very relevant for production server software.

Don't mistake me as a defender of PGP-the-standard. I have absolutely no opinion on its cryptography. I would happily use minisign or any other tool that used only modern cryptography to produce signatures. As a distributor of software though, I don't really get to decide how to sign, I am at the mercy of government and ecosystem standards, and as a result no other signature scheme is close to the level of use that PGP has for enterprises that run server-side software.


> PGP is by far the most common standard for server-side software distribution.

This is probably true, for Linux. Windows does its bespoke PKI thing with Authenticode; OpenBSD uses signify[1].

> Until you convince RedHat and Debian to move their packaging and repository formats to another standard, PGP will remain very relevant for production server software.

This is a fair point! But I'll point something important out: these platforms are essentially using PGP as a dumb key encapsulation format; they benefit (correctly!) from being able to pre-bake rings of trusted distribution keys. The fact that they use PGP for this is an implementation quirk, one that has materially negative consequences[2].

It's not easy for them to switch, but all of these platforms would be well (better) served by switching from PGP to signify, minisign, or similar.

> I am at the mercy of government and ecosystem standards, and as a result no other signature scheme is close to the level of use that PGP has for enterprises that run server-side software.

Out of curiosity: which government standards? I'm not aware of a US Gov't standard that mandates PGP; this would be very valuable to know.

[1]: https://www.openbsd.org/papers/bsdcan-signify.html

[2]: https://bugs.launchpad.net/ubuntu/+source/apt/+bug/1461834


I agree that RedHat and Debian use OpenPGP mostly for its signature only and do not rely on any Web of Trust or other infrastructure for key distribution in the basic case. As a side-note just for fun:

It's interesting to consider third parties who maintain Linux package repos, like Postgres https://ftp.postgresql.org/pub/repos/yum/

They distribute their public keys on their own web infrastructure, so it's effectively TOFU. In your criticism of PyPI, you rightly pointed out that keys are useless unless pushed to a publicly-known key server because they aren't discoverable for users, which leads to ~no one verifying installs. I agree with this for PyPI, but in Postgres' case, if you do not have their key pulled locally, installs will fail by default. This means that users do generally install keys locally (actually they typically install the repo RPMs, which configure local repositories as well as install keys), even if the discovery mechanism isn't enforced by yum.

My point here is that ecosystems without the ability to pre-bake rings of trust can still result in ~all downloads being verified as long as the client tooling defaults to the correct thing. pip not doing so is the main reason why OpenPGP signing never caught on in the Python ecosystem.
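The TOFU pattern described here boils down to fingerprint pinning: trust whatever key you see first, then fail hard if it ever changes. A minimal sketch with hypothetical names (not how rpm/yum actually store keys):

```python
import hashlib
import json
import pathlib

PIN_FILE = pathlib.Path("pinned_keys.json")  # hypothetical pin store
PIN_FILE.unlink(missing_ok=True)             # start fresh for the demo

def fingerprint(key_bytes: bytes) -> str:
    return hashlib.sha256(key_bytes).hexdigest()

def tofu_check(repo: str, key_bytes: bytes) -> bool:
    """Trust the key on first use; afterwards require an exact match."""
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    fpr = fingerprint(key_bytes)
    if repo not in pins:
        pins[repo] = fpr                     # first use: pin it
        PIN_FILE.write_text(json.dumps(pins))
        return True
    return pins[repo] == fpr                 # later: must match the pin

print(tofu_check("pgdg", b"postgres-signing-key-v1"))  # True (pinned on first use)
print(tofu_check("pgdg", b"postgres-signing-key-v1"))  # True (matches the pin)
print(tofu_check("pgdg", b"evil-replacement-key"))     # False (pin mismatch)
```

The weakness is the same one the web-infrastructure TOFU has: the very first fetch is unauthenticated, so it relies on HTTPS (and the vendor's site) being honest at that moment.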

But yes, I agree: A modern day dpkg and rpm could swap to something with safer cryptography rather easily without changing anything about distribution. OpenPGP is a lot of bloat for their needs.

> Out of curiosity: which government standards? I'm not aware of a US Gov't standard that mandates PGP; this would be very valuable to know.

I don't believe any mandate OpenPGP specifically. I was more trying to get across that many institutions mandate FIPS 140-2/3, and current implementations (like on RHEL) complain if signatures are not present in their expected format. Because OpenPGP is my only choice for signing yum infrastructure and RPMs, it's a de-facto government standard today.


To become a debian member you need to get other debian members to sign your key, after meeting in person and checking id.

So there are certain assurances that keys in the keyring got verified.


That's for developers. But if I'm a user of Debian and I download and install the ISO, I am not part of the web of trust. I'm trusting Debian as a central authority to ship me a valid keyring.

Which is fine; the centralization here makes sense, for the same reason I trust Microsoft to give me Windows updates or McDonalds to give me fries and burgers. But it's no more "web of trust" than either of those examples.


> PGP is by far the most common standard for server-side software distribution.

I have a strong suspicion Authenticode is similarly common, plus a certain winner outside the server space.


My guess is that there are a lot more "signature verification events" on Linux that pull from a deb or yum repository than there are on Windows servers, both because the mean server in terms of global usage seems to be Linux (though not the median, I'd think?) as well as because it seems like Linux folk rely an awful lot more on distribution repositories than Windows server users download things. Happy to be corrected if anyone has real numbers, though. My feelings are informed by my company's download statistics which may not be representative.



I have a friend whose observation about many FLOSS projects is "Drama == Fork." I'm trying not to be uncharitable about Libre vs Open here, but it seems to have the aroma of drama. I chartered an IETF WG in the 2000s and there was quite a bit of drama there, though it was sort of important to allow people to be dramatic to express what they wanted the WG to work on for a while. Eventually when they realized there was consensus for something other than what they wanted us to work on, the drama started to recede.

It might be useful to wait three months and revisit LibrePGP and the OpenPGP WG and see what progress has been made.


It looks like a no-brainer to me. If you have some program based on a standard, any non-incremental changes to the standard spell hard work: you have to read the updated and old specs side by side just to determine whether something is now broken in your program, never mind what needs to be done. Of course that guy is opposed to that.

Assurances that, don't worry, it's all gonna be backward compatible, are neither here nor there, from that perspective.


I love the concept of PGP. It's too complex for non-tech people to put in place; but it saddens me to think there's an alternate reality where a Google or a Microsoft put it in place and activated it by default, solving a solid chunk of spam and scams and making email secure.

This kind of governance issue makes it pretty clear that won't ever happen.


Really, we're still only hearing two sides of the story. I for one will wait for the Critique Critique Critique. Though really, ...


“Merge and desist” - these are some strong words. Particularly when the previous paragraphs called for a de-escalative approach and a meeting of minds. I don’t see how public admonishment could possibly be productive towards the stated goals.


Did you read through the linked issue in total?


It's obviously hard to judge from the outside, but the linked issue conversation definitely casts the reviewer in a bad light.


From what I can tell there are some new changes that may break backwards compat with stuff. One group wants that resolved; the other does not think it is that big of a deal. For something like this, which goes back to the early 90s with an unknown number of installed clients/servers, backwards compat is probably something to look into and properly deal with?


Sure, but I'm not talking about that. I mean the one sentence responses to carefully thought out messages, the feet-dragging, etc etc, and each response being Brazil[0]-like nothing to do with the previous ones which were responded to.

[0] https://www.imdb.com/title/tt0088846


There is no crisis here. Everybody can just keep using the existing standard and everything will work. It turns out that the existing stuff is actually cryptographically secure.

So far the only practical incompatibility I have seen seems to be associated with the GnuPG OCB cipher mode. Newer versions of GnuPG generate keys that advertise OCB compatibility, so encrypting messages/files to that particular PGP identity will result in the use of the non-backwards-compatible OCB mode. That will prevent a non-OCB-compatible implementation from decrypting the message/file. The GnuPG project should document the issue and make it clear how one could disable the OCB mode.


You've been saying for years on Hacker News that (a) authenticated encryption is overrated, and (b) that the PGP MDC is as secure as an authenticated cipher mode.


Specifically, I have been saying that in normal PGP usage, the PGP MDC is not relevant. Since each message is self contained (no ongoing connection), it is better to authenticate the plaintext directly with a signature. For an unsigned message an attacker can replace the whole thing. For the case of symmetrical encryption, the PGP MDC is relevant. So it depends...

More: https://articles.59.ca/doku.php?id=pgpfan:authenticated (my article)

>the PGP MDC is as secure as an authenticated cipher mode.

It is. It turns out there is a class of authenticated encryption that involves first hashing the plaintext and then encrypting the plaintext and the hash. OCFB-MDC seems to be an instance of that class. That seems to defy conventional wisdom and as a result is interesting.

More: https://articles.59.ca/doku.php?id=pgpfan:mdc (my article)
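The "hash the plaintext, then encrypt plaintext plus hash" class described above can be sketched as a toy in a few lines. This is an illustration of the construction only, not the real OCFB-MDC wire format: the counter-mode keystream here is a hypothetical stand-in for the actual cipher, though PGP's MDC really does use SHA-1 over the plaintext.

```python
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Stands in for the
    # real OCFB cipher purely for illustration.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]


def encrypt_with_mdc(key: bytes, plaintext: bytes) -> bytes:
    # Hash-then-encrypt: append a SHA-1 digest of the plaintext,
    # then encrypt plaintext || digest as one unit.
    body = plaintext + hashlib.sha1(plaintext).digest()
    return bytes(a ^ b for a, b in zip(body, keystream(key, len(body))))


def decrypt_with_mdc(key: bytes, ciphertext: bytes) -> bytes:
    body = bytes(a ^ b for a, b in
                 zip(ciphertext, keystream(key, len(ciphertext))))
    plaintext, digest = body[:-20], body[-20:]
    # Because the digest was encrypted along with the plaintext,
    # any ciphertext tampering breaks this check.
    if hashlib.sha1(plaintext).digest() != digest:
        raise ValueError("MDC check failed: message was modified")
    return plaintext
```

The interesting property is that the integrity check rides inside the encryption: flip any ciphertext bit and the decrypted plaintext no longer matches the decrypted digest.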



Regrettably, I'm too ignorant to know any better, so this is a very wild guess:

I wonder to what extent this has to do with the Mochizuki v. Scholze and Stix disagreement?

I say this because as far as I know: elliptic curves are the state of the art in crypto,

and the IUT theory (mochizuki's) has a lot to do with elliptic curves


I really appreciate the ending. I think that is good leadership.


It better be quantum safe at this point. Otherwise, another update would be needed soon.


On a somewhat related note: GnuPG gets plenty of hate from the crypto crowd for being "obsolete" and insecure.

I would humbly propose that before you go telling everyone to stop using gpg, implement something that works, builds, and that doesn't break after 6 months or so. I've been using gpg for what, 20 years now? And recently (as in, three or four years ago) started using it with private keys stored on YubiKeys (using drduh's excellent guide). I also use it for symmetric encryption. It works. It doesn't break. It gets things done, and while I'm not secure from state-sponsored hackers, all this is infinitely better than no encryption.

Rage, for example, is not in that category — I added it to my ansible automation as a test, and it broke after several months.


When the crypto crowd complains about PGP being obsolete, they don’t just knock PGP, they very often mention alternatives. One of the most frequent recommendations is instead of using ordinary email and trying to bolt PGP onto it with individual contacts, use an encrypted messenger. I have been using Signal for four years now with the same install moved from phone to phone, and it has worked just fine. I can chat with my friends and relatives with E2E encryption, while I could have never got them to use PGP at all.

[0] https://moxie.org/2015/02/24/gpg-and-me.html


That solution won't let me encrypt my backups, I think.

Incidentally, Signal is another example of an impractical solution being pushed without consideration for practical requirements. Signal will LOSE ALL YOUR DATA if your phone (iOS) dies. There is no way to backup your chat history and images. It's been like that for years and we get silly features instead of this fundamental thing.

Yes, I know many people like it this way — you are in the minority, and there should be a configuration switch saying "do not backup my data". Most people do not expect that their entire history will be gone one day. Phones die, phones get stolen, and you expect you'll get your history and photos back when restoring from an (encrypted!) backup. Not so with Signal. So the next time you tell people to use Signal over WhatsApp, don't forget to tell them that Signal will lose everything one day, while WhatsApp will not.

Back to my original point: before anyone knocks "old obsolete" apps, consider carefully if the new shiny thing really does everything that the "old obsolete" app did, is it reliable, maintained, and will it stay around for 10+ years.


Signal does have offline backup support though?

https://support.signal.org/hc/en-us/articles/360007059752-Ba...


It does not on iOS, and the webpage you linked to specifically states that.


> Signal will LOSE ALL YOUR DATA if your phone (iOS) dies

I'm seeing this often on HN.

While it's a pain, it is not impractical. Most people don't care about this; they don't go back in time in their conversations.

If you really care about this. Like, you care that much. Just pair Signal Desktop and make a regular backup of your profile. All your messages and attachments are there. If your Signal ever gets borked, you can always put your machine offline and open Signal Desktop on this backed-up folder. You can also open the db with sqlcipher and access all your data with it; the schema is not too complicated to grasp.

Ask me how I know.

And since Signal is open source, you can always read its code in case of doubt. Someone could build a convenient UI to display or export the messages and attachments in a user-friendly format. This can happen after the backup is done and Signal breaks.

All this said, people may expect privacy when using Signal, so make sure your backups are well protected. This is the hard part, actually.


I think you are confirming my point. "without consideration for practical requirements"

I can tell you that my daughter cared about this. She cried for a long time when her phone just suddenly died (while charging, no reason, total loss) and her Signal history wasn't restored. I couldn't explain why Signal was the only app that didn't get backed up.

I can tell you that my friend's parents cared about this. The photos of their grandkids were suddenly gone. My friend couldn't explain the reason for this; all the other data did get backed up.

And sure, between us geeks we will keep saying "well ACTUALLY they SHOULD HAVE kept the photos elsewhere", and we'll say "this is good because it makes us more secure" and we'll say "most people don't care about this". We'll all "well actually" ourselves, while patting ourselves on the backs.

And they'll just switch to WhatsApp.


> I think you are confirming my point. "without consideration for practical requirements"

I don't think I am. A practical answer is: "Install Signal Desktop".

And I still think we are few who care about chat history. I'm sure there are people who care. I'm such a person.

It's also only a missing feature for iOS users, so that makes it: "of the few people who care about chat history, the minority who use iOS are only covered by installing Signal Desktop". A pain for these users, but they have at least two workable solutions: using Android or installing Signal Desktop somewhere. (Yes, caveat: you need to do that before losing your phone.)

But yes, I agree that it would be better to have the backup feature.


> I don't think I am. A practical answer is: "Install Signal Desktop".

IMHO this is not always practical. Signal Desktop can easily get out of sync if you change versions and the only proper way to fix it is to clear all the local data, including your chat/contact database. AFAIK, the only source of truth is your phone.


> A pain for these users, but they have at least two workable solutions. Using Android or installing Signal Desktop somewhere. (yes, caveat, you need to do that before losing your phone)

"These users" don't even have a desktop computer :-)

"These users" will just use WhatsApp. It works it doesn't lose data, and everyone around them uses it anyway.


Got it:

"There's a technical issue with email encryption, but we have a solution: don't use email! instead, use this different protocol, that doesn't work with email clients or email addresses, but instead uses a telephone number as an identifier!"

I've never tried Signal, because I don't want my telephone number used as my identifier.

A bug in email encryption can be fixed by fixing the bug; proposing a completely different protocol/application isn't fixing that bug, it's just saying that this other protocol/application doesn't have the same bug. It's not a solution; at least, not for me.


It's a pity, but email never was designed for security, and you can't graft it on.

GPG doesn't really do much for security, because a lot can be told from simply who communicates with who and when, and GPG does absolutely nothing there.


The biggest bug in email encryption is that important message data & metadata can't be encrypted for SMTP to work. It's a bug in email, and there's no backwards compatible fix.
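This can be made concrete with a small stdlib sketch (the addresses and subject are made up for illustration): even when the body carries nothing but ciphertext, the headers that SMTP relays need for routing and spam filtering remain readable to every hop.

```python
from email.message import EmailMessage


def build_encrypted_mail(sender: str, recipient: str,
                         subject: str, ciphertext: bytes) -> str:
    # Only the body is protected. From/To must stay in cleartext for
    # SMTP servers to route the message, and in practice Subject does
    # too (clients rarely encrypt it).
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(ciphertext.hex())  # stand-in for PGP ciphertext
    return msg.as_string()


wire = build_encrypted_mail("alice@example.org", "bob@example.org",
                            "quarterly layoffs", b"\x8fopaque-bytes")
# Every relay on the path sees who is talking to whom, and about what,
# no matter how strong the body encryption is.
assert "quarterly layoffs" in wire and "bob@example.org" in wire
```

That visible envelope is exactly the metadata problem: the protocol cannot deliver the message without it, so no amount of body-level encryption fixes it.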


This is like asking how to drive across the ocean and getting mad when someone tells you that you need to take a boat.

Email just fundamentally isn't encryptable; the protocol and the way it actually works in practice (hi, antispam!) requires that important parts of the email not be encrypted, and things like asynchronous communication make it difficult to do encryption to the gold standard of quality. Turning on encrypted email also disables several email features from the perspective of the user (hope you didn't want to search your emails!). The end result is that email encryption is, as someone else put it, LARPing rather than security.

There are a few very narrow use cases for which encrypted email may make sense (largely in cases where you're not concerned about hiding the existence of communication channels, just message contents, and you can do out-of-band public key communication). But notice that those use cases don't include "I want to message someone else securely," and it's definitely not someone that would work if you tried to let regular users do it.


> things like asynchronous communication make it difficult to do encryption to the gold standard of quality.

I agree. Thing is, asynchronous communication is a killer feature. My primary communications channel is email, but I don't use encrypted email.

I do use GPG, but not for email.


None of this explains why I need to give you my telephone number to get on your boat.

Identity is the core of the matter and that is invariant of the modes of transport. On top of that comes key management. On top of that you can build your secure application on whatever platform.

Total end to end encryption can only be built on top of identity and never (ever) on a specific channel. And TE2E should be the social goal.


Even to the extent that's true (and a lot of it is not), none of that explains why there are absolutely zero email replacements, and indeed "security" people seem to promptly display brain damage whenever the idea is brought up. "Email-like" doesn't mean it has to be actual current standard email; there could be an "xmail" that has a UX near exactly like email but is more modern. But instant messengers (let alone centralized ones) are not a replacement for email and never will be, and the stubborn insistence that they are is as surprising as it is infuriating. If you insist it's security or email, the answer is email, and that's how very important information will continue to be sent.

Although that said, and while not disagreeing about its flaws, I still can't let entirely go by:

>the protocol and the way it actually works in practice (hi, antispam!)

But anti-spam works fine with encrypted email (putting aside practicalities of no forging making spam harder anyway).

>asynchronous communication make it difficult to do encryption to the gold standard of quality

Nobody gives a shit. Asynchronous communication is well worth it.

>hope you didn't want to search your emails!

lol wut? If I go into Mail and searching something it can include encrypted emails the same as whatever else, why wouldn't it?


There are, obviously, replacements for email. Most of us get a tiny fraction of the email we did just 10 years ago because so much has moved out of email and into other messaging systems. The faulty logic being used here is that there is nothing shaped precisely like email to replace email, and, of course, there never will be.


[flagged]


At the point where you’re joking about other people having brain worms, I think maybe we need to accept that this thread is not in keeping with the guidelines around comments.


It's a 6+ year old conversation now if you're curious:

https://news.ycombinator.com/item?id=17066986

You can read through that if you want, frankly it kind of surprises me how nothing at all has changed since then. It's impossible not to respect Thomas immensely in this field overall, or most of his commentary. Which is why I was and remain genuinely befuddled by his replies on this specific topic. So it goes?


How often do you use encrypted email vs encrypted messaging?

The idea that there isn’t an obvious replacement for email in the encrypted text communications space is a wild assertion. Brain wormy even.


Wire uses the Signal protocol and your e-mail address or a username as your identifier. But you still depend on somebody else's infrastructure. We probably want XMPP with OTR or OMEMO.


> When the crypto crowd complains about PGP being obsolete, they don’t just knock PGP, they very often mention alternatives

Those alternatives have different use cases compared to (open)PGP. So, Moxie said GPG is too complicated? If Signal goes down, how long will it take you to spin up your own instance of the server and clients, ready for delivery? How much would it cost to run that infra reliably? For email and GPG, I can get everything in 15-20 minutes, including registering the DNS name, DKIM, and SPF. I can also sign documents, images, and invoices with GPG; in short, I can do business. What can I do with Signal: chat, send nice emojis, and make calls?

Proper security and encryption isn't a toy, and let's not kid ourselves by skinning apps with shiny buttons and cool claims that everything people have done before sucks.


The alternatives are generally far better than email, which is why normal people (and most of us) get so little email these days. My email accounts have more life as a transaction recording system for online shopping than they do for interpersonal communication. Email has largely been superseded. That alarms nerds like us, the way it alarmed people when HTTP superseded purpose-built network protocols, but there is a losing side of this argument, and it's the side that is still using Mutt.


If you are concerned about digital document signing, then that is a different use case from the OP who speaks of encryption and being “[not] safe from state-sponsored hackers”. For digital signing, business sectors like e.g. banking rely on PDF e-signatures[0], not PGP.

[0] https://www.adobe.com/sign/hub/how-to/how-banks-use-electron...


"digital signing" in banking sector means something completely different than PGP-like "digital signing". PGP-like digital signatures can be applied on any document and file and you can easily validate the author by their public key and can't be easily forged.


Your post said “documents, images, and invoices” and “business”. In the real world, documents and invoices in business are overwhelmingly sent in PDF format. Even images get embedded in PDFs a lot of the time. PDF e-signatures can be validated, the technology is based on very standard crypto with FOSS implementations. Fine if you have a different workflow that requires PGP, but most people clearly don’t feel it is impossible to “do business” otherwise.


> In the real world, documents and invoices in business are overwhelmingly sent in PDF format

This is evasive and dismissive. When your parent is talking about the issue, there's a good chance he is already using it. Telling him what "the real world" does is irrelevant. He has a use case already, and needs a solution for his use case, not for the rest of the world.


Almost every time on HN that there is a discussion of changes in technology that experts would argue are overall good, there is always someone who says “But that would break my use case!” This even got lampooned at xkcd[0]. OK, I understand that he finds this trend vexing. But no party to this discussion is obliged to spend their time and effort on suggesting a solution, especially when we don’t know his specifics and our suggestion might simply be rejected out of hand.

[0] https://xkcd.com/1172/


Except when it comes to GnuPG, you have to realize:

1. Many, many people used it, and continue to do so.

2. The anti GnuPG crowd keeps coming and touting alternatives, and then complain when the people in the prior bullet point out the impracticality of the alternatives for their use case.

We get that GnuPG has issues. We get that sometimes the alternatives are better. We can coexist with them. People simply don't understand that for a number of tasks, GnuPG is totally appropriate, and solves the problem with little pain.


> In the real world, documents and invoices in business are overwhelmingly sent in PDF format

In your real world, probably. In the actual real world, people use MS Word/Excel (or LibreOffice) and images a lot, besides PDF. Good luck using e-signatures with that :)


Are you aware of any other widely deployed, stable tools that meet the other uses cases from the parent comment?


I'm not sure I want to route all my email through my mobile phone.

Though yes... it's more secure than SMS.


There's plenty of competition in the IM space for E2E realtime chat; none are positioning themselves as an alternative to email.

> they don’t just knock PGP, they very often mention alternatives

What alternatives do they mention?


I've been using GPG for about as long, and I've had problems with it.

They used some cipher (was it IDEA?) that then got removed. So I had to hunt down years-old versions just to be able to extract my old backups.

Or was that more of a PGP than GPG problem, and I misremember?


I don't understand cryptography, and I don't understand the particulars of the concerns that people have with GnuPG. I do know I've used it maybe once, and it wasn't straightforward.

You wrote: "I would humbly propose that before you go telling everyone to stop using gpg, implement something that works, builds, and that doesn't break after 6 months or so."

Keybase did some stuff and got tons of users. It even had a command-line client. It was all open source, too. Is there any reason that the crypto old guard hasn't just shamelessly copied Keybase? The most important being: its CLI (the design, not necessarily the implementation) and the overall "shape" of the project organization/structure. Why hasn't gpg been "uplifted" into a Keybase-shaped "shell" for people to interact with? I.e. at both the tool level and at the collaborative project level?

RMS came from the AI lab working on Lisp machines. Despite this, he recognized that a UNIX-shaped project was the correct vehicle for his vision, given its growing popularity. Why haven't the GPG folks given up on trying to force people to do things their traditional way and attached their cart to a horse that has legs?


Why rage and not the canonical age?


[flagged]


Idealists?


> Is this a - the NSA made a offer to backdoor nobody can refuse and the idealists abandon the "standard"- situation?

Why would the NSA bother to target PGP, when the biggest threat to their surveillance is the use of other encryption mechanisms with better usability (and therefore lower chance of misuse)?

If anything, the NSA would want to encourage people using PGP, because it's far easier to break PGP using the default settings than any modern tool, even before you account for the extremely high likelihood of "user error".


> Why would the NSA bother to target PGP, when the biggest threat to their surveillance is the use of other encryption mechanisms with better usability (and therefore lower chance of misuse)?

Could you give an example of such encryption mechanisms please?


WhatsApp, Signal, Messenger, and other much more commonly used communication methods with E2EE.



