The correct way to maintain backwards compatibility in those contexts is to decrypt and re-encrypt, not support broken ciphers or weak modes of encryption indefinitely. The latter is security theater.
A read-only operation should not cause an insane amount of writes. This is perilous for a great many reasons, one of which is the risk of data corruption should something go wrong.
You're thinking about this the wrong way: if your data needs to be secure, then it's already perilous to keep it around with weak or broken encryption. Security models where data is too important to risk encryption upgrades but not important enough to encrypt correctly are internally incoherent.
(This is sidestepping the other parts of the comment that don't make sense, like why a single read implies multiple writes or why performing cryptographic upgrades is somehow uniquely, unacceptably risky from a data corruption perspective.)
No, I think you're thinking about it the wrong way: write failures are common. The failure mode for a bad disk is often that reads will succeed and writes will lose data. Something that silently writes like this is increasing the risk of data loss.
It probably depends a lot on the application, but I think it's often much better to have something that will warn the user about security risks and let them decide what to do with that risk. If you do design something with these silent writes, you absolutely need to think hard about failure cases and test them, and not handwave them away. Having the most "secure" data be corrupted is ultimately an unacceptable outcome.
That's not even getting into the other problems, such as ... is it ok for the user to take a performance hit of writing X GB when all they want to do is read a file?
Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem. They are black boxes to each other; to confound their responsibilities is to ensure doom in your designs.
Put another way: your cryptosystem isn't responsible for saving your ass from not making backups. If your data is valuable, treat it that way.
> Your cryptosystem is not responsible for the stability of your storage medium, and your storage medium is not responsible for the security of your cryptosystem
This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking. I couldn't have said it better myself.
What you are advocating is crypto intruding on the storage mechanism inappropriately. It's a layer violation.
I think if it's important to the end user, you could write fairly decent code at the app layer that asynchronously re-encrypts old data in a way that doesn't harm the user. That code would need a strategy for handling write failures. A basic cryptography tool should probably not have this as a built-in feature, however, for a few reasons, including those I've stated.
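One possible shape for that write-failure strategy is sketched below. This is only an illustration, not any particular tool's behavior: `decrypt_old`, `encrypt_new`, and `decrypt_new` are hypothetical callables standing in for whatever cipher routines the app actually uses. The point is the ordering: write the new ciphertext to a temp file, fsync so write errors surface immediately, verify the on-disk bytes round-trip, and only then atomically replace the original. On any failure, the old (still decryptable) file is untouched.

```python
import os
import tempfile

def reencrypt_file(path, decrypt_old, encrypt_new, decrypt_new):
    """Re-encrypt `path` in place, never destroying the original
    until the new ciphertext has been written, synced, and verified.
    The three callables are placeholders for real cipher routines."""
    with open(path, "rb") as f:
        old_ciphertext = f.read()
    plaintext = decrypt_old(old_ciphertext)
    new_ciphertext = encrypt_new(plaintext)

    # Temp file in the same directory, so os.replace() is an atomic rename.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(new_ciphertext)
            tmp.flush()
            os.fsync(tmp.fileno())  # surface write failures now, not later
        # Verify the bytes actually on disk before touching the original.
        with open(tmp_path, "rb") as tmp:
            if decrypt_new(tmp.read()) != plaintext:
                raise IOError("verification failed; original left untouched")
        os.replace(tmp_path, path)  # atomic swap
    except BaseException:
        os.unlink(tmp_path)  # on any failure, keep the old file as-is
        raise
```

A failed disk in the "reads succeed, writes lose data" mode trips either the fsync or the verification read here, and the worst case is an undeleted temp file, not a corrupted original.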
> This is exactly why your crypto system should not rely on spontaneously writing many gigabytes on a read operation, without asking.
Again: nobody has said this.
Whether or not the tool does this in bulk, or asynchronously, or whatever else is not particularly important to me. The only concern I have in this conversation is whether it's contradictory to simultaneously assert the value of some data and refuse to encrypt it correctly. Which it is.
This is silly. Nothing that happens with the standard or its implementation is going to prevent you from decrypting a 20 year old email. It shouldn't need saying, but one reason for that is that PGP's cryptography is schoolbook cryptography.
Upstream spoke of deprecated support for older emails. My response is aimed there.
And there is loads of software that will not compile on modern hardware. End users often don't have the ability to rewrite it, or even to easily validate a random bit of code on GitHub.
A project like this needs to maintain backwards operability. For decades.
Once again: this is silly, because whatever conversation we are having about the standard, your ability to decrypt old messages would not have been impacted. Standard revisions don't turn the previous standard into secret forbidden knowledge.
What's really being asked for here is the capability to seamlessly continue sending messages with the previous, weak constructions, into the indefinite future, and have the installed base of the system continue seamlessly reading them. I think that is in fact a goal of PGP, and one of its great weaknesses.
When standards remove the requirements for something after a period of obsolescence, that tends to send a message to the implementors to remove that from the software.
Users who still rely on that have to use the old software, against which there can be barriers:
- old executables don't run on newer OS (particularly in the Unix world).
- old source code won't build.
- old code won't retrieve the old data from the newer server it has been migrated to.
Things like that.
The barriers could be so significant that even someone as skilled and motivated as myself would be discouraged.
> Users who still rely on that have to use the old software, against which there can be barriers
Not all reliance is reasonable though.
Some legacy software can only do SSLv3 or lower, does that mean the rest of the internet has to carry that support around? Abso-f-lutely not.
The same applies here. If you really need that ancient stuff that loses support, repackage them in newer encryption or remove the obsolete layer. It's highly probable that information no longer needs to stay encrypted at rest anyways.
In my opinion, the Internet should not be removing support for older SSL. The highest SSL version that is common to server and client should always be used.
> The highest SSL version that is common to server and client should always be used.
That is how it works. What you're missing is that everyone, both servers and clients, agrees that supporting old SSL versions is a bad idea. And they're right.
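The negotiation being described can be modeled as picking the maximum of the two sides' offered sets; this is a toy model of version selection, not the real TLS handshake, and the version constants are illustrative ordinals, not wire values. It shows why "highest common version" and "drop old versions" are not in tension: disabling SSLv3 on either side simply removes it from that side's offer.

```python
def negotiate(client_versions, server_versions):
    """Pick the highest protocol version both sides still enable.
    Returns None when there is no overlap (connection refused)."""
    common = set(client_versions) & set(server_versions)
    return max(common) if common else None

# Illustrative ordinals only -- not real TLS wire values.
SSL3, TLS1_0, TLS1_2, TLS1_3 = 3, 10, 12, 13
```

Two up-to-date peers that both still ship SSLv3 code would never select it, because both offer something higher; the old version only gets used when one side offers nothing else.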
If that were the actual principle being accurately followed, the first feature to have been removed from browsers would have been plain HTTP before any version of SSL.
Plain HTTP is what people resort to when their browser refuses to connect to an old device or server using HTTPS, which is worse than old SSL.
No, because clear lack of security is better than faux security. With older SSL versions, it's security that even creates extra risk for all clients (by leaking server secrets and allowing ciphersuites that don't have PFS).
The idea that a user will have a 20 year old encrypted mail only because software still supports it is absurd. What really happens is someone has a 20 year old mail no matter what, it will always exist, and the choice is, support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.
And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server? You literally are not thinking sensibly about any of this; your examples are paper tigers.
How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.
> What really happens is someone has a 20 year old mail no matter what, it will always exist, and the choice is, support it or not. Support it to be read, support it to be converted, warn the user, suggest fixes.
Well yeah, and the choice should be to not support it. If the user needs those letters they can either decrypt them or just re-encrypt them. It's silly to claim that a message can somehow be vital enough to be protected by encryption, but not vital enough to be upgraded to something more modern.
> And your SSL example is senseless! In what world do you envision super secure stuff alongside weaker legacy, on the same damned server.
I'm not envisioning it. Nobody should be running such old useless garbage. What was suggested earlier in this thread does not work and must not happen in practice.
> How do you alert the users that are running the problematic software and haven't yet updated it? The very premise is ridiculous.
Where did you get the weird idea the software isn't updated? This entire discussion is about deprecation of older encryption methods in new versions of software.
Yes, but that is only needed to connect with old software that has not updated. Two pieces of new software will not negotiate on using old crypto even if they both support it.
In the troublesome situations, the upgrade of one of the two pieces of software is thrust upon the user.