> The BASIC language eliminated much of the complexity of FORTRAN by having a single number type. This simplified the programming model and avoided a class of errors caused by selection of the wrong type. The efficiencies that could have been gained from having numerous number types proved to be insignificant.
DEC64 was specifically designed to be the only number type a language uses (not saying I agree, just explaining the rationale).
> Languages for scientific computing like FORTRAN provided multiple floating point types such as REAL and DOUBLE PRECISION as well as INTEGER, often also in multiple sizes. This was to allow programmers to reduce program size and running time. This convention was adopted by later languages like C and Java. In modern systems, this sort of memory saving is pointless.
More than that, the idea that anyone would be confused about whether to use integer or floating-point types absolutely baffles me. Is this something anyone routinely has trouble with?
Ambiguity around type sizes I can understand. Make int expand as needed to contain its value with no truncation, as long as you keep i32 when size and wrapping do matter.
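For what it's worth, Python already behaves roughly this way: plain ints grow as needed, while a fixed-width type (NumPy's int32 here, assuming NumPy is installed) wraps on overflow:

>>> 2**40 * 3                                    # plain int just grows, no truncation
3298534883328
>>> import numpy as np
>>> np.array([2**31 - 1], dtype=np.int32) + 1    # fixed-width i32 wraps around
array([-2147483648], dtype=int32)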
Ambiguity in precision I can understand. I'm not sure this admits of a clean solution beyond making decimal a built-in type that's as convenient (operator support is a must) and fast as possible.
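Python's decimal module is most of the way there, minus the built-in literals and the speed: operators work, but you pay an import and have to construct values from strings, or the binary float leaks through:

>>> from decimal import Decimal
>>> Decimal("0.1") + Decimal("0.2")   # operator support works as expected
Decimal('0.3')
>>> Decimal(0.1)                      # construct from a float literal and the binary value leaks in
Decimal('0.1000000000000000055511151231257827021181583404541015625')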
But removing the int/float distinction seems crazy. Feel free to argue about the meaning of `[1,2,3][0.5]` in your language spec - defining that and defending the choice is a much bigger drag on everyone than either throwing an exception or disallowing it via the type system.
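Python, for what it's worth, takes the "throw an exception" route here:

>>> [1, 2, 3][0.5]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list indices must be integers or slices, not float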
There's something to be said for languages like Python and Clojure, where plain ordinary math might involve ordinary integers, arbitrary-precision integers, floats or even rationals.
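A quick sketch of that numeric tower in Python:

>>> 2**100                            # ints silently become arbitrary precision
1267650600228229401496703205376
>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic
Fraction(1, 2)
>>> Fraction(1, 2) + 1                # ints mix in exactly
Fraction(3, 2)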
In grad school it was drilled into me to use floats instead of doubles wherever I could, which cuts the memory consumption of big arrays in half. (It was odd that Intel chips in the 1990s were about the same speed for floats and doubles, while all the RISC competitors ran floats about twice as fast as doubles; Intel caught up in the 2000s.)
Old books on numerical analysis, particularly Foreman Acton's, teach the art of formulating calculations to minimize the effect of rounding errors, which reduces some of the need for deep precision. For that matter, modern neural networks use specialized formats like FP4 because they save memory and are effectively faster in SIMD.
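The memory half of that claim is easy to see with NumPy (again assuming it's installed); the speed half depends on the hardware and SIMD width:

>>> import numpy as np
>>> np.zeros(1_000_000, dtype=np.float64).nbytes   # a million doubles
8000000
>>> np.zeros(1_000_000, dtype=np.float32).nbytes   # the same array in single precision
4000000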
---
Personally, when it comes to general-purpose programming languages, I've watched a lot of people have experiences that lead them to conclude that "programming is not for them", and I think
>>> 0.1+0.2
0.30000000000000004
is one of them. Accountants, for instance, expect certain invariants to be true, and if they see some nonsense like
>>> 0.1+0.2==0.3
False
it is not unusual for them to refuse to work, leave the room, or stage a sit-down strike until you can present them numbers that respect the invariants. There are a lot of people who could be productive lay programmers and put their skills on wheels, and if you are using the trash floats we usually use instead of DEC64, you are hitting them in the face with pepper spray as soon as they start.
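The invariant they expect does hold once you switch representations; in Python the standard decimal module gives it to you, just not by default:

>>> from decimal import Decimal
>>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
True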
JavaScript engines do optimize integers. They usually represent values up to ±2^30 as actual integers and apply integer operations to them. But of course that's not observable from the language.
You are half correct about 2^53-1 being used (around 9 quadrillion). It is the largest integer that a 64-bit float can represent exactly and unambiguously (past that, consecutive integers start to collide). JS even includes it as `Number.MAX_SAFE_INTEGER`.
That said, the float representation only comes into play in the fairly rare case where a value exceeds around a billion (roughly 2^30).
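Python floats are the same IEEE 754 doubles as JS numbers, so the 2^53 boundary mentioned above is easy to poke at from a Python prompt:

>>> 2**53 - 1                            # Number.MAX_SAFE_INTEGER in JS
9007199254740991
>>> float(2**53) == float(2**53 + 1)     # beyond it, distinct integers collide
True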
JS engines use floats only when they cannot prove/speculate that a number fits in an i32. They only use 31 of the 32 bits for the number itself, with the last bit used for tagging. i32 takes fewer cycles to do calculations with (even with the need to deal with the tag bit) compared to f64. You fit twice as many i32 values in a cache line (which affects prefetching). i32 uses half the RAM (and using half the cache increases the hit rate). Finally, it takes way more energy to load two numbers into the ALU/FPU than it does to perform the calculation, so cutting the size in half also reduces power consumption. The max allowable length of a JS array is also 2^32 - 1.
JS also has BigInt for arbitrary-precision integers, and that is probably what you should be using if you expect to go over the 2^31-1 limit, because hitting a number that big generally means the value is unbounded and might go over the 2^53-1 limit as well.