Unfortunately that isn't the case. Java's JCE provider for its SecureRandom reads from /dev/random; if you happen to generate a lot of keys in quick succession (such as in unit tests for a product) you will run out of entropy really fast, and the tests will seem to take hours...
Which really sucks, because then you have to run something like haveged on the server running the unit tests; you can't have a build take hours just because Linux takes forever to gather new entropy on headless servers :-(
You are wrong. Developers should use /dev/random if it's for something security-sensitive like generating keys. /dev/random will block when entropy is too low; /dev/urandom will just continue recycling entropy, which is cryptographically dangerous.
/dev/urandom gives the exact same quality randomness as /dev/random (let's ignore the issue of boot-time and VM cloning for now).
There is a slight twist to it: for information-theoretically secure algorithms /dev/random would be preferable. But you don't need that, because you don't use those algorithms (the only one really worth mentioning is Shamir's secret sharing scheme).
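To see why secret sharing is the odd one out: its security is information-theoretic, so the quality of the raw randomness matters in a way it doesn't for computational schemes. Here's a toy sketch (in Python, with hypothetical `split`/`combine` helpers) of a 2-of-2 XOR split, the simplest cousin of Shamir's scheme:

```python
import os

def split(secret: bytes):
    """2-of-2 split: one share is pure randomness, the other is secret XOR share.
    Either share alone is statistically independent of the secret."""
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    # XOR the shares back together to recover the secret.
    return bytes(a ^ b for a, b in zip(share1, share2))

secret = b"attack at dawn"
s1, s2 = split(secret)
assert combine(s1, s2) == secret
```

If `share1` were biased or predictable, an attacker holding only `share2` could learn something about the secret; that's the (theoretical) case where "true" entropy, not just computational pseudorandomness, is what the security proof rests on.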
I'm just amazed how people don't trust the cryptographic building blocks inside a modern CSPRNG, but then use the very same building blocks to use the randomness and encrypt their secrets.
A PRNG must be seeded. CSPRNGs are no different, right?
In a normal PRNG, if you want X different possible output sequences, you must be able to seed it with at least X different seeds. Since each seed corresponds to one output sequence, you need at least as many possible seed values as you want output sequences. Of course, this seed should be random, and you can't really use a PRNG to seed a PRNG.
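The "one seed, one sequence" point is easy to demonstrate with any deterministic PRNG (Python's Mersenne Twister here, just as an illustration):

```python
import random

# Two PRNG instances with the same seed produce the identical sequence:
# the seed fully determines the output, so distinct output streams
# can only come from distinct seeds.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.getrandbits(32) for _ in range(5)]
seq_b = [b.getrandbits(32) for _ in range(5)]
assert seq_a == seq_b
```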
How do CSPRNGs get around this? I assume that if I have a CSPRNG, I must seed it, and that I must draw that seed from a pool of seeds at least as big as the desired set of output streams. (See above.) If my intent is to generate 4096 random bits (say, for an encryption seed), to me it seems I must input a random seed at least that long. Thus, I need a good RNG.
Take a look at Wikipedia's definition[1], for example, of what a CSPRNG must do (as opposed to just any old PRNG):
• Every CSPRNG should satisfy the next-bit test.
• Every CSPRNG should withstand "state compromise extensions". In the event that part or all of its state has been revealed (or guessed correctly), it should be impossible to reconstruct the stream of random numbers prior to the revelation. Additionally, if there is an entropy input while running, it should be infeasible to use knowledge of the input's state to predict the future state of the CSPRNG.
Let's assume our CSPRNG of choice satisfies that. The problem is that the second requirement only covers preceding bits: if I know the state of the CSPRNG, I can predict future output. If Linux is low on entropy, or runs out, does this not diminish the number of possible inputs or seeds to the CSPRNG, allowing me to guess, or at least narrow down, the state/seed of the CSPRNG, perhaps prior to it generating output?
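Both halves of that property can be shown with a toy hash ratchet (illustration only, not a real DRBG; the `ToyDRBG` name and construction are mine): a leaked state predicts all future output, while the one-way hash protects past output.

```python
import hashlib

class ToyDRBG:
    """Toy hash ratchet, for illustration only.

    Each step outputs H(state || b"out") and then advances the state to
    H(state || b"next"). Whoever learns the state can predict all future
    outputs; recovering *past* outputs requires inverting SHA-256.
    """
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def next_block(self) -> bytes:
        out = hashlib.sha256(self.state + b"out").digest()
        self.state = hashlib.sha256(self.state + b"next").digest()
        return out

gen = ToyDRBG(b"some seed")
gen.next_block()                 # some output is consumed...
leaked = gen.state               # ...then the internal state leaks

attacker = ToyDRBG.__new__(ToyDRBG)
attacker.state = leaked
assert attacker.next_block() == gen.next_block()  # future output predicted
```

This is exactly why the worry in the comment above is about guessing the seed/state, not about breaking the output function itself.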
First, I'm not saying that cryptographic randomness can be created out of thin air, without entropy, I just argue that you don't need n bits of real entropy to get n bits of high-quality randomness.
I mean, if you really needed 4096 bits of random seed to generate 4096 bits of randomness, why not just use the 4096 bits you'd waste on the seed as the randomness?
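The expansion being argued for can be sketched in a couple of lines, using SHAKE-256 as a stand-in for the CSPRNG's output function (my choice of expander, not anything the kernel actually uses). The security of the stretch rests on computational assumptions about the hash, not on information theory:

```python
import hashlib
import os

seed = os.urandom(32)                          # 256 bits of real entropy
stream = hashlib.shake_256(seed).digest(512)   # expanded to 4096 output bits

assert len(seed) * 8 == 256
assert len(stream) * 8 == 4096
```

To a computationally bounded attacker, `stream` is indistinguishable from 4096 truly random bits, even though only 256 bits of "real" entropy went in.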
Of course you need a random seed. That's what I was alluding to with the boot-time and VM remark.
But you're not really interested in lots and lots of potential output sequences; one of them is enough. Remember, the first requirement of a block cipher is that it is indistinguishable (to a computationally bounded, i.e. polynomial-time, adversary) from a random distribution.
The real counter-argument is the state attack. And that's mitigated by a modern RNG's design. Fortuna, for example, constantly mixes incoming entropy into outputs that occur far in the future (technically, it reseeds every now and then, but without estimating entropy). This does not protect you from a total state compromise; a computer is deterministic, after all. But it's quite hard to argue with a straight face that such a total compromise matters, because everything you might want to use the randomness for would most certainly be just as compromised as well.
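Fortuna's reseeding idea, stripped down to its essence, looks something like this (a sketch only; `MiniFortuna` is my name, and the real design has 32 entropy pools, AES in counter mode, and careful reseed scheduling):

```python
import hashlib
import os

class MiniFortuna:
    """Bare-bones sketch of Fortuna-style reseeding, not the full design."""
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def reseed(self, entropy: bytes):
        # Fresh entropy is hashed into the key. After a state compromise,
        # a single reseed with entropy unknown to the attacker makes all
        # future output unpredictable again -- no entropy estimation needed.
        self.key = hashlib.sha256(self.key + entropy).digest()

    def random_block(self) -> bytes:
        # Counter-mode output: hash of key plus an ever-increasing counter.
        self.counter += 1
        return hashlib.sha256(self.key + self.counter.to_bytes(16, "big")).digest()

rng = MiniFortuna(os.urandom(32))
block1 = rng.random_block()
rng.reseed(os.urandom(32))   # attacker who knew the old key is locked out
block2 = rng.random_block()
assert block1 != block2
```

Note what's absent: any attempt to measure how much entropy came in. The design just keeps folding it in and relies on the mixing.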
So why take that (probably insignificant) risk?
Because the alternative is worse. If you want to have Linux's current /dev/random behavior, you have two things:
First, it blocks when there's not enough entropy in the pool. That's bad. Just google for "sshd hangs". Either your system doesn't work anymore, or people find creative ways to subvert all the crypto stuff to make it work again. All for the far-fetched fear of this total state compromise?
Second, how much entropy do you have? A lot has been written about it, but despite all the technobabble ("'entropy', there must be hard physics behind it"), estimating entropy is not really an exact science; it's mostly guesswork. So you never know how much entropy you really have. That's why Fortuna got rid of all the estimating that its predecessor Yarrow still did.
Yeah, I see it the same way. But at least you can put forward a highly theoretical argument there that just doesn't work with the crypto algorithms we actually use.
Especially since: what are you doing with ssss? Usually splitting a private key for a cryptosystem that is not information-theoretically secure itself, I guess. So you're back to square one.
I think you're right. I probably channeled him inadvertently.
I've collected lots of links to articles and man pages and so on about the issue, because I have been planning to write a coherent article about urandom vs. random.
And I'm pretty sure I remember what posting you mean.
Unfortunately, writing this article gets postponed again and again, just like finishing and sending in the second set of your crypto challenges... maybe before Christmas I'll find a few hours to do that.