This is not a huge problem: if you are outputting the floating-point number, you would probably round it anyway. The biggest 'gotcha' is when doing equality comparisons between these numbers.
Consider the following:
.1 + .2 == .3 // false
The way to 'get around' this is to use a small tolerance value (usually called an epsilon) chosen relative to the magnitude of the numbers being compared. In this example, a value like .00001 works fine as the epsilon.
Then all you have to do is check whether the absolute difference of the numbers is less than the epsilon:
var a = .1 + .2, b = .3, epsilon = .00001;
console.log(Math.abs(a-b)<epsilon); // true
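If the numbers being compared can vary widely in magnitude, the same idea works with a tolerance scaled from Number.EPSILON (available in ES2015+). A minimal sketch, where nearlyEqual is just an illustrative name, not a built-in:

function nearlyEqual(x, y) {
    // scale the machine epsilon by the magnitude of the inputs
    var epsilon = Number.EPSILON * Math.max(Math.abs(x), Math.abs(y), 1.0);
    return Math.abs(x - y) < epsilon;
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(nearlyEqual(0.1, 0.3));       // false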
In short, try not to put yourself in a situation where you have to compare doubles for equality.
Floating-point numbers are inexact in any language. If you're performing, say, financial calculations, you can use fixed-point arithmetic instead by working with scaled integers. For example:
var SCALE = 100; // smallest unit is 0.01
var a = Math.round(0.1 * SCALE); // 10  (e.g. 2.5 would be Math.round(2.5 * SCALE))
var b = Math.round(0.2 * SCALE); // 20
if (a + b == Math.round(0.3 * SCALE)) {
    // this will work...
}
console.log(a / SCALE); // 0.1, divide by SCALE only for display
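A common concrete case is money: keep amounts as an integer number of cents, do all arithmetic on those integers, and divide by the scale only when formatting output. A small sketch along the same lines (toCents is an illustrative helper, not a built-in):

function toCents(dollars) {
    // round once when converting in, so the stored value is an exact integer
    return Math.round(dollars * 100);
}
var price = toCents(0.10); // 10
var tax = toCents(0.20);   // 20
var total = price + tax;   // 30, exact integer arithmetic
console.log(total === toCents(0.30));  // true
console.log((total / 100).toFixed(2)); // "0.30", formatted only for display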
Not really. Most languages don't have a built-in decimal type; it's usually just a library feature. Higher-precision floats won't help you either: adding more digits of precision won't make 0.1 + 0.2 equal 0.3, it will just make the difference between them slightly smaller.
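For example, printing the values with extra digits in JavaScript only exposes the gap that is already there; it never closes it:

console.log((0.1 + 0.2).toFixed(20)); // "0.30000000000000004441"
console.log((0.3).toFixed(20));       // "0.29999999999999998890"
console.log(0.1 + 0.2 - 0.3);         // 5.551115123125783e-17, tiny but not zero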
Groovy is a heavily marketed JVM language that uses BigDecimal, as its sister JVM languages Clojure, Rhino/Nashorn, and Xtend do. But only Java actually ships BigDecimal.
Yeah, I get that the base JRE is what actually provides the BigDecimal class in this case; the other languages just use it (as the default decimal numeric type in Groovy's case, rather than double/Double).