I think they mean splitting the "Number" [0] type into an integer type and a decimal type.
Currently the "Number" [0] type is used for every number: it's stored as an IEEE 754 double, so it can only represent integers safely in the range ±(2^53 - 1).
Though there is a "BigInt" [1] type.
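To make that concrete, here's a quick sketch of the precision cliff (plain JS, nothing assumed beyond the standard built-ins):

```js
// Number is an IEEE 754 double, so integers past 2^53 - 1 lose precision
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991
console.log(9007199254740992 === 9007199254740993);   // true (!) - both round to the same double

// BigInt keeps arbitrary-precision integers exact
console.log(9007199254740992n === 9007199254740993n); // false, as expected
```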
* Integer types of various sizes, for efficiency, so you don't have to constantly do floating point comparisons (i.e. "equal within a small delta"), and for better compatibility with other languages.
* Decimal floating point type so you can work in base-10 and don't get rounding issues like 0.1 + 0.2 !== 0.3 (see the snippet below).
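Both points in that list come down to binary floating point not representing most base-10 fractions exactly. A small illustrative sketch (the approxEqual helper is just a hypothetical example, not anything from a spec or library):

```js
// Base-2 floating point can't represent 0.1 exactly, so:
console.log(0.1 + 0.2 === 0.3);           // false
console.log(0.1 + 0.2);                   // 0.30000000000000004

// ...which is why code ends up comparing "equal within a small delta":
const approxEqual = (a, b, eps = Number.EPSILON) => Math.abs(a - b) < eps;
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```

A dedicated decimal type would make the first comparison just work, and sized integer types would make the delta dance unnecessary for whole numbers.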
Plus all of the error checking you get for free if you're using TypeScript instead of raw-dogging it with plain JavaScript or CoffeeScript :-)