As someone who has done a lot of work with floating point, I disagree. My wariness (caution, really) about floating point comes from tricks like this: if you are going to add up an array of floating-point numbers, you need to sort them first. The experienced floating-point programmer knows what order to do this in.
The inability to represent 0.1 exactly in binary floating point is not a problem solved by increasing the precision of your floating-point library.
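A quick illustration (Python here, but the same holds for any binary floating-point format):

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value the double actually stores.
print(Decimal(0.1))       # 0.1000000000000000055511151231257827...
print(0.1 + 0.2 == 0.3)   # False: the representation errors don't cancel
```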
Also, there can be surprising cases where single precision might not be enough. For example, if you are going to represent exchange prices for, say, futures, you need to take into account that some price increments are in pennies and some in fractional dollars. You need the calculations to be reversible, and to have enough precision and accuracy to cover the whole range of price increments and expected volume.
One example of a lack of appropriate paranoia about floating-point numbers came from PHP's I'm-not-very-good-at-writing-parsers author: the result was http://www.exploringbinary.com/php-hangs-on-numeric-value-2-... a denial of service against all unpatched PHP sites. It also bit Java.
Note that interesting prime-number work depends on integers and won't work with floating point, including factoring the large numbers used in cryptography.
>tricks like if you are going to add up an array of floating point numbers, you need to sort them first.
...I think you'll want to use a priority queue with the smallest numbers on top: add the top two, then push the result back onto the priority queue. Repeat until only one item is left.
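A sketch of that scheme in Python (assuming non-negative inputs, so "smallest value" and "smallest magnitude" coincide; `heap_sum` is a name I've made up):

```python
import heapq

def heap_sum(xs):
    """Sum floats by repeatedly combining the two smallest remaining
    values, so partial sums stay as small as possible."""
    heap = list(xs)
    heapq.heapify(heap)              # min-heap: smallest numbers on top
    while len(heap) > 1:
        a = heapq.heappop(heap)      # pop the two smallest items
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)  # push the partial sum back
    return heap[0] if heap else 0.0
```

For most workloads a plain ascending sort (or a correctly rounded sum like `math.fsum`) is simpler and nearly as good.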
In the context of this paper, we are talking about floating point arithmetic used by mathematicians working in Mathematica. I disagree with you on these points.
> you need to sort them first
Yes, but only if there are quite a lot of them and the partial sums are large relative to the result. It's not usually an issue.
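A minimal Python demonstration of both halves of that claim: ordering only matters once the running sum dwarfs the remaining terms, and a correctly rounded sum (`math.fsum`) sidesteps ordering entirely:

```python
import math

small = [1e-16] * 100

# Large-to-small: each tiny term is below half an ulp of 1.0 and rounds away.
assert sum([1.0] + small) == 1.0

# Small-to-large: the tiny terms accumulate before meeting 1.0.
assert sum(small + [1.0]) > 1.0

# math.fsum is correctly rounded whatever the order.
assert math.fsum([1.0] + small) == math.fsum(small + [1.0])
```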
> The inability to represent 0.1
Many mathematical conjectures apply for arbitrary real numbers. The nearest floating point number to 0.1 is a real number, and it's not really an issue for mathematics that it's not exactly 0.1.
> single-precision might not be enough
These are mathematicians working in Mathematica, which has an arbitrary-precision floating-point type. They don't have to come anywhere near single precision floats.
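In Python terms, the `decimal` module offers a similar knob; a rough sketch (Mathematica's arbitrary-precision numbers work differently under the hood):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50     # 50 significant digits, vs ~16 for a double
print(Decimal(2).sqrt())   # 1.4142135623730950488016887242...
```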
> if you are going to represent exchange prices
You can use integers. Floating-point arithmetic isn't a great fit here because it makes the wrong kinds of accuracy guarantees.
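A hypothetical sketch of the integer approach, using exact tick counts (the tick size and helper names are my own; some contracts really do tick in quarter-cents):

```python
from fractions import Fraction

TICK = Fraction(1, 400)  # assumed minimum increment: $0.0025 (a quarter-cent)

def to_ticks(price: Fraction) -> int:
    """Convert an exact price to an integer tick count."""
    ticks = price / TICK
    assert ticks.denominator == 1, "price is not a whole number of ticks"
    return int(ticks)

def to_price(ticks: int) -> Fraction:
    """Convert a tick count back to an exact price."""
    return ticks * TICK

# The round trip is exact, so the calculation is reversible.
price = Fraction(12375, 10000)   # $1.2375
assert to_price(to_ticks(price)) == price
```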
> prime number work
Prime numbers are all integers. It would be a bit silly to represent integers with floating-point numbers.