Those examples are all relatively low-level. Many high-level languages provide a floating-point interface on top of such an underlying integer pseudo-randomness algorithm. This is why much high-level code uses multiplication to get a ranged value.
This post goes into some detail about how the V8 JavaScript engine creates a [0, 1) floating point pseudo random value from its underlying integer pseudo-random algorithm: https://v8.dev/blog/math-random
> This post goes into some detail about how the V8 JavaScript engine creates a [0, 1) floating point pseudo random value from its underlying integer pseudo-random algorithm: https://v8.dev/blog/math-random
You'd think so, but it doesn't really say. It's almost entirely about how they generate the integer.
Following the commit link shows that they use the method of filling the mantissa of 1.0 with random bits and then subtracting 1.0.
>> The number of random values it can generate is limited to 2^32 as opposed to the 2^52 numbers between 0 and 1 that double precision floating point can represent.
Not really true. That's how many numbers their algorithm returns, which is almost as many evenly spaced doubles as there are between 0 and 1. But because the method starts with a number between 1 and 2, it actually wastes a bit, so that number really should be 2^53.
But floating point itself has roughly 1/4 of its values between 0 and 1, so for double precision that's roughly 2^62!