I dislike this article, as it repeatedly tries to imply that the use of NaN is somehow a restriction caused by floating point.
No IEEE 754 operation ever produces a NaN result unless the operation has no valid result in the set of real values.
Similarly for the behaviour in comparisons: if you want NaN to equal NaN, you have to come up with a definition of equality that is also consistent with all of the following being false for any X:
NaN < X
NaN > X
NaN == X
The logical result of this is that NaN does not equal itself, and I believe mathematicians agree on that definition. Again, not a result of the representation, but a result of the mathematical rules of real values.
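A quick sketch of what that looks like in practice (Python here purely as an illustration; any language with IEEE 754 semantics behaves the same way):

    import math

    nan = float("nan")
    x = 1.0

    # Every ordered comparison involving NaN is false, including equality:
    print(nan < x)     # False
    print(nan > x)     # False
    print(nan == x)    # False
    print(nan == nan)  # False -- the only self-consistent choice
    print(nan != nan)  # True
    print(math.isnan(nan))  # the intended way to test for NaN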
I want to be very clear here: floating point math always produces the correct value, rounded (according to the rounding mode) to the appropriate value in the represented space, unless that is fundamentally not possible. The only place where floating point (or indeed any finite representation) produces an incorrectly rounded result is the transcendental functions, where some values can only be correctly rounded if you compute the exact value, but the exact value is irrational.
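You can check the "correctly rounded" claim for the basic operations yourself. A small Python sketch (using fractions.Fraction for the exact arithmetic) showing that a + b returns the double at least as close to the exact sum of the two inputs as either neighbouring double:

    from fractions import Fraction
    import math

    a, b = 0.1, 0.2
    computed = a + b                    # one IEEE 754 addition, round-to-nearest
    exact = Fraction(a) + Fraction(b)   # exact sum of the two doubles (not of 1/10 and 2/10)

    # No other representable double is closer to the exact sum:
    below = math.nextafter(computed, -math.inf)
    above = math.nextafter(computed, math.inf)
    assert abs(Fraction(computed) - exact) <= abs(Fraction(below) - exact)
    assert abs(Fraction(computed) - exact) <= abs(Fraction(above) - exact)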
People seem hell-bent on complaining about floating point behavior, but it is fundamentally mathematically sound. IEEE 754 also specifies some functions like e^x - 1 explicitly to ensure that you get the best possible accuracy, on top of the guarantees for the core arithmetic operations.
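To illustrate the e^x - 1 point (Python again; math.expm1 is the function IEEE 754 and C99 call expm1):

    import math

    x = 1e-12
    naive = math.exp(x) - 1.0    # exp(x) rounds to a double near 1.0, then the
                                 # subtraction cancels away most significant digits
    better = math.expm1(x)       # computed directly, stays accurate for small x

    print(naive)   # only a few correct digits for tiny x
    print(better)  # close to x + x*x/2, correct to full precision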
Greater-than and less-than already make no sense around NaN, so you won't get much worse; I don't get what you're trying to point out with them. This is less a question of mathematical correctness (of which there isn't much around NaN anyway) and more a practical one. Having this annoying NaN that breaks everything if it's in an array to be sorted, or in a set, or a key in a map, is just plain awful.
Correct, they don't make sense, but given that < and > return a Boolean in the IEEE 754 environment, they need to produce a deterministic value.
As you say, relations with NaN don't make sense, but given the requirement of a single value, NaN != NaN makes the most "sense" mathematically, and a core principle of IEEE 754 was ensuring the most accurate rendition of true maths with a finite representation (see a bunch of papers by Kahan).
Of course x87's IEEE 754 implementation does actually have multiple NaNs, infinities, and representations of the same value. For all its quirks, remember x87 was what demonstrated that the IEEE 754 specification could be made fast and affordable, which non-Intel manufacturers were all claiming was impossible. The only real "flaw"* in x87 was the explicit leading 1, which was an artifact of Intel being sufficiently ahead of the curve to predate dropping it.
* the x87 transcendentals are known to be hopelessly inaccurate, but that in theory could have been fixed, whereas the format could not be.
Mathematically, yes. In practice, NaN != NaN just kills any hope of having any amount of sanity for operations that don't care about floating point and just want to generally compare things. It's not very nice to say "sorting, hashmaps & hashsets containing NaNs cause the entire operation/structure to be completely undefined", especially given that NaNs kind of exist to allow noticing errors, not to cause even more of them.
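For what it's worth, this is easy to reproduce. A small Python sketch (the details are CPython-specific, but other languages hit the same class of problem):

    nums = [3.0, float("nan"), 1.0, 2.0]
    print(sorted(nums))
    # The sort relies on `<` being a consistent ordering; every comparison with NaN
    # is False, so the sort's invariants break and the result is typically not
    # actually sorted.

    key = float("nan")
    d = {key: "value"}
    print(d[key])             # works only because CPython checks identity before ==
    print(float("nan") in d)  # False: a fresh NaN compares unequal to every key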