
Could someone help me understand why we're not dramatically ramping up key sizes across the board on all encryption? Not as a solution, but as a buy-some-time measure.

Compute is rapidly increasing, there's continuous chatter about quantum, and yet everyone seems to be just staring at their belly buttons. Obviously bigger keys are more expensive in compute, but we've got more of it too... why only use it on the cracking side, and not on defense?

Even simple things like forcing TLS 1.3 instead of 1.2 from client side breaks things...including hn site.



Old but still relevant: https://www.schneier.com/blog/archives/2009/09/the_doghouse_...

  These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
Long story short, brute forcing AES256 or RSA4096 is physically impossible
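To put a rough number on "physically impossible", here's a back-of-the-envelope sketch (my own, in Python) of the same thermodynamic argument, using nothing but the Landauer bound of kT·ln 2 per bit operation and ignoring every engineering detail, so treat it as an illustration rather than a rigorous bound:

    import math

    k = 1.380649e-23                  # Boltzmann constant, J/K
    T = 300                           # room temperature, K
    landauer = k * T * math.log(2)    # ~2.9e-21 J per irreversible bit flip

    states = 2 ** 256                 # size of a 256-bit keyspace
    energy = states * landauer        # lower bound just to *count* through it, J

    sun_year = 3.8e26 * 3.15e7        # ~one year of the Sun's entire output, J
    print(f"energy to enumerate 2^256 states: {energy:.2e} J")
    print(f"years of total solar output:      {energy / sun_year:.2e}")

Even this absurdly optimistic lower bound works out to something like 10^22 years of the Sun's entire output just to count through the keyspace.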


That’s true for AES-256. But brute force attacks are not the most efficient way to attack RSA, so it’s not true in that case. (Eg quantum computers would break RSA-4096 but not AES-256).


Grover’s algorithm could be described as quantum brute force that is able to break AES-128 but not AES-256


Yes (although practically speaking it’s very unlikely that Grover will ever break AES-128), but that’s still a brute force attack and still subject to the physical limits mentioned in the Schneier quote. Whereas attacks on RSA like the number field sieve or Shor’s algorithm are much more efficient than brute force. (Which is why you need such big keys for RSA in the first place).
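For concreteness (my framing, not from the quote): Grover only square-roots the search, so a k-bit key still costs on the order of 2^(k/2) serial quantum iterations.

    # Grover ~square-roots brute force: a k-bit keyspace still needs ~2^(k/2)
    # serial, error-corrected quantum iterations -- halved bits, not a free win.
    for bits in (128, 192, 256):
        print(f"AES-{bits}: classical ~2^{bits}, Grover ~2^{bits // 2} "
              f"(~{2 ** (bits // 2):.1e} iterations)")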


We are. 1024-bit keys are being retired across cryptosystems everywhere, and have been for over a decade (don't get me started on the one laggard). Nothing threatens 2048 bit keys other than QC, which threatens RSA altogether. Progress isn't linear; it's not like 2048 falls mechanically some time after 1024 (which itself is not practical to attack today).


People might be assuming that 2048-bits is only twice as strong as 1024-bits, but it's in fact a billion times better. (corrected, thanks!)


That would be true if RSA's strength scaled proportionally with the number of bits, but it doesn't: 1024 -> 2048 gives you around the same difficulty increase as adding 30 bits to a symmetric key.
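For the curious, the ~30-bit figure falls out of the heuristic GNFS running time. A rough sketch, ignoring the o(1) term and all constant factors, so only the differences between sizes mean much:

    # Heuristic GNFS cost in "bits of work" for an n-bit RSA modulus, ignoring
    # the o(1) term and constants -- only differences between sizes are meaningful.
    import math

    def gnfs_bits(modulus_bits):
        ln_n = modulus_bits * math.log(2)
        c = (64 / 9) ** (1 / 3)
        return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    for size in (1024, 2048, 3072, 4096):
        print(f"RSA-{size}: ~{gnfs_bits(size):.0f} bits of work")
    print(f"1024 -> 2048 adds ~{gnfs_bits(2048) - gnfs_bits(1024):.0f} bits")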


I stand corrected, thanks! 2^30 still means a billion times better.


It's also only true so long as we don't discover more efficient ways of factoring large numbers. We haven't come up with any dramatic improvements lately, but it's always possible that something will come up. Symmetric crypto systems like AES are on much firmer ground, as they don't depend as heavily on the difficulty of any single mathematical problem.


By "lately" you mean...


I'm hedging a little because I'm not an expert. :) As far as I'm aware, the last major algorithmic development was GNFS in 1993.


We are definitely not.

Most countries' registrars won't support the DNS hacks required for larger DKIM keys.

We still use the minimum key size in most countries.


What? Just use normal name servers. The registrar doesn't matter one bit, they delegate the zone to whatever name servers you specify. Those can serve whatever records properly.


Probably because RSA 2048 is not yet broken, and once we get there we still have RSA 4096 to fall back on, which has for quite some time been the most common key size for most things using RSA (DKIM being one of the exceptions).

In the context of DKIM we're waiting for Ed25519 to reach major adoption, which will solve a lot of annoyances for everyone.


> Probably because RSA 2048 is not yet broken […]

3072 has been recommended by various parties for a few years now:

* https://www.keylength.com


Is there a compelling reason to use 3072 instead of 4096? If you're going to kick the can down the road you might as well put some effort into it. The difference in memory use/compute time has to be marginal at this point. It's not like the old days when jumping from 512 to 4096 made the encryption unusably slow.


There's no good reason at all, which is why RSA-3072 is the rarely seen "oddball".


> There's no good reason at all

Operations per second?

* https://wiki.strongswan.org/projects/strongswan/wiki/PublicK...

Running MacPorts-installed `openssl speed rsa` on an Apple M4 (non-Pro):

    version: 3.4.0
    built on: Tue Dec  3 14:33:57 2024 UTC
    options: bn(64,64)
    compiler: /usr/bin/clang -fPIC -arch arm64 -pipe -Os -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk -DL_ENDIAN -DOPENSSL_PIC -D_REENTRANT -DOPENSSL_BUILDING_OPENSSL -DZLIB -DNDEBUG -I/opt/local/include -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk
    CPUINFO: OPENSSL_armcap=0x87d
                       sign    verify    encrypt   decrypt   sign/s verify/s  encr./s  decr./s
    rsa   512 bits 0.000012s 0.000001s 0.000001s 0.000016s  80317.8 973378.4 842915.2  64470.9
    rsa  1024 bits 0.000056s 0.000003s 0.000003s 0.000060s  17752.4 381404.1 352224.8  16594.4
    rsa  2048 bits 0.000334s 0.000008s 0.000009s 0.000343s   2994.9 117811.8 113258.1   2915.6
    rsa  3072 bits 0.000982s 0.000018s 0.000019s 0.000989s   1018.4  54451.6  53334.8   1011.3
    rsa  4096 bits 0.002122s 0.000031s 0.000032s 0.002129s    471.3  31800.6  31598.7    469.8
    rsa  7680 bits 0.016932s 0.000104s 0.000107s 0.017048s     59.1   9585.7   9368.4     58.7
    rsa 15360 bits 0.089821s 0.000424s 0.000425s 0.090631s     11.1   2357.4   2355.5     11.0
(Assuming you have to stick with RSA and not go over to EC.)


These are contrived benchmarks at the extreme end of things. In real-world usage the difference is drowned out by the delays of everything else that has to happen to complete a handshake and key exchange. The mildly higher performance of RSA 3072 versus RSA 4096 wasn't a big bonus even with the CPU performance we had 15 years ago.


RSA-4096 is roughly half as fast as 3072, which sounds bad until you realize that 3072 is already only 20% as fast as 2048, 3% as fast as 1024, and 1% as fast as 512. In terms of performance tradeoff it's downright mild compared to the other steps up.


If I could wave a magic wand and get a 40-100% performance boost on a service by changing 3-4 characters (s/4096/3072/), why wouldn't I take it? (Assuming I need security to go beyond RSA 2048.)


It's not a 40-100% performance boost overall, it's just during one specific step that is a very small part of the overall system.


Well, in typical use cases RSA usage is very limited (eg some operations during TLS handshake), so the 40-100% boost wouldn’t be across the board, but likely shave some milliseconds per connection.


RSA 2048 isn't broken, but experts consider it a matter of time. How long I don't know, but since the attacks are known (factoring into primes), someone (read: not me) can make an estimate with error bars that are concerning enough to consider it as good as broken.


AFAIK even RSA 1024 isn't broken yet.


RSA-1024 is "only" 80 symmetric equivalent bits. It's a space requiring a tremendous amount of energy to explore, though I personally consider it very likely that the NSA and/or the MSS et al. have poured immense funds into accelerators specifically targeting RSA, and for them there'd be no obstacles at all to be granted an allocation for such energy consumption.


What expert considers it a matter of time before 2048 is broken? 2048 is 112-bit-equivalent security.


RSA2048 is 112-bit-equivalent symmetric security under currently known methods. Number theory advances may change that. It is hard to imagine any significant advances in the breaking of symmetric cryptography (mathematically-unstructured permutations).

Cryptographically-relevant quantum computers (CRQCs) will also break smaller RSA keys long before (years?) the bigger ones. CRQCs can theoretically halve the effective strength of symmetric keys against brute force (a 256-bit key becomes 128-bit against a CRQC cracker).


https://www.keylength.com/en/4/ NIST says 2048-bit RSA is good until 2030. I'm not sure what that means: perhaps that it will be broken by then considering advances, perhaps just that someone (read: governments) who cares to spend 5 years on the problem will break your key.


No, we are not in fact 5 years from breaking RSA-2048.


I have no idea what NIST means when they give 5 years for 2048-bit keys, but in general I trust them more than some random poster on the internet.


My inclination would be that it has less to do with the keys getting broken in that timeframe and more to do with moving to larger key sizes as soon as possible. As pointed out by others, RSA depends on the asymmetric difficulty of multiplication vs factorization of integers, but the degree of that asymmetry has no hard bounds. Advances in mathematics could reduce it, and NSA may already know of or at least suspect the existence of techniques which are not public. Larger key sizes mitigate against these developments, and pushing the software and hardware stacks to their limits sooner rather than later allows both the vendors and standards bodies to adapt before cryptographic breaks actually occur.


Maybe check who you're responding to then ;).

He's not djb but definitely not a “random poster” either.


But perhaps with not a very solid justification to do so:

* https://articles.59.ca/doku.php?id=em:20482030


Nobody is recommending RSA-3072 per se. The recommendation if wanting to stick with RSA is to move beyond RSA-2048, and the world at large jumped all the way to RSA-4096 long ago.


Ed25519 has seen broad adoption in TLS and other stacks that are used pervasively where DKIM is also used. What’s blocking it for DKIM uniquely?

(This isn’t intended as a leading question.)


X25519 has seen broad adoption (in the key exchange). Ed25519 has not, you can't actually use an Ed25519 certificate on the web. It's in a deadlock between CAs, browsers and TPM manufacturers (and to some extent NIST, because Ed25519 has not been approved by them).

It's not being blocked per se, you can use it mostly (98%) without any issues. Though things like Amazon SES incorrectly reject messages with multiple signatures. Google and Microsoft can't validate them when receiving. It's more that a few common implementations lack support for them, so you can't use _just_ Ed25519.


> (and to some extent NIST, because Ed25519 has not been approved by them).

Ed25519 (and Ed448) have been approved for use in FIPS 186-5 as of February 2023:

* https://en.wikipedia.org/wiki/EdDSA#Standardization_and_impl...


Oh, great to know. That gives me hope that we'll see Ed25519 certificates at some point then.


The CABForum just updated its guidelines (in December), and elliptic-curve-wise only NIST P-256, NIST P-384 and NIST P-521 are accepted. (See https://cabforum.org/working-groups/server/baseline-requirements/requirements/#615-key-sizes)

So on the general web it seems remote at best.


> The CABForum just updated its guidelines (in December), and elliptic-curve-wise only NIST P-256, NIST P-384 and NIST P-521 are accepted.

NIST P-curve certs were acceptable per the Baseline Requirements all the way back in 2012:

* https://cabforum.org/uploads/Baseline_Requirements_V1_1.pdf

See "Appendix A - Cryptographic Algorithm and Key Requirements (Normative)", (3) Subscriber Certificates.


I'm well aware; I should have added a "still" in the sentence somewhere. All efforts to get Ed25519 onto the general web seem to run out of steam: on the IETF side there's https://www.ietf.org/archive/id/draft-moskowitz-eddsa-pki-06..., and https://lists.cabforum.org/pipermail/servercert-wg/2024-June... is the last discussion on the CABForum side.

Ed25519 certs do work with TLS (OpenSSL supports them at least), but without browser adoption it's machine-to-machine with a private CA only.


Everyone has to be onboard before the switch can be made, and not everyone is happy about the somewhat messy solution of running dual-stack RSA+Ed25519 setups in the interim - it's a bit different than e.g. supporting multiple methods for key exchange or multiple ciphers for symmetric crypto. It's just one of those things that take time to happen.


If the big players (Gmail, Outlook) jump onto it the rest will be forced to follow. Outlook would probably jump in with a checkbox, and perhaps Gmail will make it an option for the paid tier while everyone on the free tier gets no choice - but that is still enough. SmallCompany.com cannot do it alone, probably not even a fortune100.com (if any of them even care - their sales staff will probably overrule the few who do), but there are enough players if they all agree on something.

Getting the big players to agree and execute though is a lot like herding cats. I'm sure some in the big players are trying.


> What’s blocking it for DKIM uniquely?

Mail server administrators.


Because the only way to force their use is to break things. Mostly this means transferring the pain directly to the user instead of the service operators, in the hope that they will bitch loudly enough for the service operator to care. And this has a good chance of instead causing the user to move to your competitors, who will be more than willing to not let a little thing like security get between them and revenue.


512-bit DKIM was exceedingly rare even 8 years ago when I worked in email.

You're essentially asking "why aren't we doing what we're doing"


"dramatically ramping up key sizes" is not done, because it's overly expensive, and not needed.

What people don't realize: key size recommendations are surprisingly stable over long timeframes, and have not changed for a very long time. In the early 2000s, some cryptographers started warning that 1024 bit RSA is no longer secure enough, and in the following years, recommendations have been updated to 2048 bit minimum. That's now been a stable recommendation for over 20 years, and there's not the slightest sign that 2048 bit can be broken any time soon.

The only real danger for RSA-2048 is quantum computers. But with quantum computers, increasing your RSA key sizes does not buy you much, you have to move to entirely different algorithms.


> That's now been a stable recommendation for over 20 years, and there's not the slightest sign that 2048 bit can be broken any time soon.

Except that the recommendation, by NIST at least, is now to move beyond 2048-bit by 2030 and then deprecate RSA altogether by 2035.

But yeah, not being on at least 1024-bit RSA is weird and careless.


The key sizes we use today are expected to hold against a Dyson sphere focused on breaking them with the best exploit we know today.

What size do you suggest?


It's not quantum-safe though.


Larger keys won't make the algorithms quantum-safe either.


If I'm not mistaken, larger keys require more qubits in a machine to all be coherent together to be able to break it.

So it would be a slight increase in complexity, but if we are able to build a machine with enough qubits to crack 1024-bit keys, I don't think the engineering is all that far off from slightly scaling things up 2x-10x.


Which is why post-quantum algos were invented.


> Which is why post-quantum algos were invented.

Yup. And I don't even think quantum resistance was the goal of some of the algos that, yet, happen to be believed to be quantum resistant. Take "Lamport signatures" for example: that's from the late seventies. Did anyone even talk about quantum computers back then? I just checked and the word "quantum" doesn't even appear in Lamport's paper.


> Did anyone even talk about quantum computers back then?

Not unless they have a time machine. Shor's algorithm was discovered in the 90s (sure, the concept of a quantum computer predates that, but I don't think anyone really realized they had applications to cryptography).


It's not quantum-broken though, it might just make it a bit faster. Just half a Dyson sphere.


> Could someone help me understand why we're not dramatically ramping up key sizes across the board on all encryption? Not as a solution, but as a buy-some-time measure.

We've been doing it for decades now… (DES used 56-bit keys back then, AES started at 128).

Also, keep in mind that increasing the key length by 1 bit means that you need twice as much compute to crack it through brute force (that is, unless cryptanalysis shows an attack that reduces the difficulty of the scheme, like for instance the number field sieve on RSA), so you don't need to increase key size very often: following Moore's law, you need to add one bit every two years, or 5 bits every decade. Additionally, key sizes generally account for many years of compute progress and theoretical advances, so you really don't need to worry about it over a short period (for the record, the highest RSA factorization to date is 829 bits, yet people started recommending migration away from 1024-bit RSA a decade or so ago, and the industry is in the process of deprecating it entirely even though it's probably going to take years before an attack on it becomes realistic).
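As a toy illustration of that pacing (a crude model of my own, obviously ignoring algorithmic advances): if attacker compute doubles every two years, each doubling only erodes one bit of brute-force margin.

    # Crude model: attacker compute doubles every 2 years, and each doubling
    # erodes exactly one bit of brute-force margin. Algorithmic advances ignored.
    def bits_eroded(years, doubling_period_years=2.0):
        return years / doubling_period_years

    for years in (10, 20, 50, 100):
        print(f"after {years:3d} years: ~{bits_eroded(years):.0f} bits of margin lost")
    # A century of doublings costs ~50 bits -- nowhere near the ~128-bit margin
    # that AES-128 or a ~3072-bit RSA key starts out with.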


> Even simple things like forcing TLS 1.3 instead of 1.2 from client side breaks things...including hn site.

That’s the reason, it breaks things, and some of them are important and can’t simply be updated.


> That’s the reason, it breaks things, and some of them are important and can’t simply be updated.

IMO this is not a valid excuse.

If it's exposed to the internet it needs to be able to be updated with relative ease to respond to a changing threat landscape. Especially if it's "important". If it cannot be then it is already broken and needs to be fixed. Whether that fix is doing a hard upgrade to get to the point that future upgrades can be easier, entirely replacing the component, or taking the thing offline to a private non-Internet network depends on the situation, but "we aren't going to change, the rest of the internet should conform to us" has never been a reasonable response.

This is particularly true in the contexts of public mail servers where DKIM matters and anything involving public use of TLS. The rest of the internet should not care if your company refuses to update their mail servers or replace their garbage TLS interception middleboxes. We should be happy to cause problems for such organizations.


> IMO this is not a valid excuse.

The world is full of things that aren't "valid excuses". Explaining why something is the way it is is not the same as justifying it.


We are doing that, just not everyone is as concerned by safety and make different tradeoffs against things like ease of use or accessibility. Different applications have different tolerances and that’s fine.

If and when anything quantum is able to yield results (I wouldn’t worry much about this), increasing key size is pretty much meaningless, you need to move to other encryption schemes (there’s lots of options already).


In the case of RSA it's not meaningless to increase key size to fend off quantum computers. Quantum computing vs RSA is a contest of who can field the larger machine, because quantum computing in itself doesn't definitively unravel the integer factorization problem.


That seems suspect to me.

Getting a working qc to reasonable scale is the hard part. Once you have done that most of the hard engineering problems are solved. I doubt doubling its size at that point would be very much of a problem.

If we are making (uninformed) predictions, I bet we won't see a QC solving 1024-bit RSA in the next 15 years (at least), but once it does it will only take a year or two more to solve 4096.


I meant “meaningless” in the sense that your encryption is then on heavy diminishing returns territory when it comes to defending against a qc.

It will likely work for a while, but it's a fundamentally wrong approach and you're going to be exposed to store-now, decrypt-later attacks: instead of breaking your encryption today, I just store all your communications, wait for the next QC to come online, then fish for stuff that is still useful.

It’s a silly approach if the timeframe is 50 years because most secret information goes stale quicker, but if you’re just waiting for say a year…


Compare the time it takes to generate or decrypt/encrypt 4096 bit RSA versus 16384 bit RSA (it's not four times slower).
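If you want to feel this yourself, here's a quick (and admittedly unscientific) timing sketch using the Python `cryptography` package; private-key operations grow roughly with the cube of the modulus size, so expect far worse than a 4x slowdown, and note that generating a 16384-bit key alone can take minutes.

    # Rough timing of RSA private-key (sign) operations at a few modulus sizes.
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    message = b"benchmark me"
    for bits in (2048, 4096, 16384):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        start = time.perf_counter()
        n_ops = 20
        for _ in range(n_ops):
            key.sign(message, padding.PKCS1v15(), hashes.SHA256())
        per_op = (time.perf_counter() - start) / n_ops
        print(f"RSA-{bits:5d}: ~{per_op * 1000:.1f} ms per signature")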


Indeed. There has got to be some middle ground there though that is both an incremental improvement and still within reason on cost


> why we're not dramatically ramping up key sizes across the board on all encryption?

Because no one thinks there is a reason to: no one has any fear that classical computers will catch up with RSA-2048/AES-128 before their grandchildren are dead.

Post-quantum crypto stuff is happening and people are planning how to migrate to it.


Well, even MD4 hasn't been cracked yet.


What is your definition of cracked? Collisions are easy to produce; there's one right on the Wikipedia page.


Collisions are not interesting. Millions of leaked passwords hashed with MD4/MD5 are of very practical interest.


Ok, preimage resistance is still pretty strong, but it has been reduced enough that I wouldn't trust it remaining above practical attacks beyond the next decade.


If you use the same password on different sites despite password managers and now passkeys you are asking for it.


Linear bit size increases require exponential compute increases to break. RSA with 1024 bits is still beyond any practical capability to break. The current practical limit is considered to be around 800-something bits. Still the recommendation is to use at least 3000 bits nowadays, to defend against possible mathematical advances.


This is incorrect. Factoring algorithms like GNFS are super-polynomial but sub-exponential. RSA-1024 is likely breakable at practical-but-very-expensive costs.


Key rotation tooling has never been given adequate attention since you only do it every few years. Something ends up breaking, even if it's just the name of the key.

Keys are stateful content like DB schemas, but they don't receive daily attention, so the tooling to maintain them is usually ad-hoc scripts and manual steps.


In the case of DKIM, Ed25519.


Because everyone recommending those things works on both sides.

They recommend 2048 and use 4096 themselves, because if they ever need to break your 2048 it's less bad than if you had been recommended 4096. Wink wink.

Same with everyone recommending Ed25519 when Ed448 is as good and as fast to encode. But all the arguments point to encoding speed from a Bernstein paper which used a Pentium III!

https://cr.yp.to/ecdh/curve25519-20060209.pdf


> Could someone help me understand why we're not dramatically ramping up key sizes across the board on all encryption? Not as a solution, but as a buy-some-time measure.

I am acutely aware that there are SOME places where software only supports RSA and only supports up to 1024-bit or 2048-bit keys, and that is a legal requirement. Ramping up key sizes would be great but even 2048-bit keys aren't quite secure against certain kinds of actors (even disregarding hammer-to-head style of attacks)

> Even simple things like forcing TLS 1.3 instead of 1.2 from client side breaks things

... kind of a case in point about the pace of required improvements.



