
The problem isn't rounding the final result. The problem is that the source data itself can't be accurately represented.

There is no floating point value equal to 0.3.




That’s not a problem by itself.

You can represent 0.3 as 0.300000…0004, which rounds to 0.3 again in the end.

But you need to reason about the number and nature of intermediate operations, which is tricky, since errors usually accumulate and don’t always cancel out.
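A minimal sketch of that in Python (whose floats are IEEE 754 doubles); the values are just for illustration:

    # The intermediate result drifts away from the intended 0.3,
    # but rounding back to the input's precision recovers it.
    total = 0.1 + 0.2
    print(total)            # 0.30000000000000004
    print(round(total, 1))  # 0.3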


> That’s not a problem by itself.

No, it really is the original sin here.

> since errors usually accumulate and don’t always cancel out.

The problem is that from the system's perspective, these aren't "errors". 0.300000…0004 is a perfectly valid value. It's just not the value that you want. But the computer doesn't know what you want.


> The problem is that from the system's perspective, these aren't "errors".

When I say "error" here I mean the mathematical term, i.e. numerical error, from error analysis, not "error" as in "an erroneous result".

There is a formalism for measuring this type of error and making sure it does not exceed your desired precision.
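For example, a single correctly rounded operation on doubles has a relative error of at most the unit roundoff u = 2^-53, and n roundings give roughly n·u in the worst case. A rough Python sketch of checking such a bound (the operation count here is an assumption for this specific expression):

    import math

    u = 2.0 ** -53          # unit roundoff for IEEE 754 doubles
    x = 0.1 + 0.2           # three roundings: storing 0.1, storing 0.2, the add
    print(math.ulp(x))      # spacing of doubles near x, about 5.6e-17
    print(abs(x - 0.3) <= 3 * u * 0.3)  # True: within the worst-case bound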

> It's just not the value that you want.

My point is exactly that if you're looking at 0.300000…0004, you aren't done with your calculation yet. If you stop there and show that value to a user somewhere (or blindly cast it to a decimal or arbitrary-precision type), you are using IEEE 754 wrong.

You know that your input values are only precise to one or two decimal places in this example, so reporting ten or more digits of precision in your output is wrong. You have to round!
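In code, that just means formatting to the precision the inputs justify instead of echoing all 17 digits of the double; a hypothetical example:

    prices = [0.1, 0.2, 0.3]   # inputs known to one decimal place
    total = sum(prices)
    print(total)               # 0.6000000000000001
    print(f"{total:.2f}")      # 0.60 -- rounded to a justified precision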

It's the same type of error that newspapers sometimes make when they say "the damage is estimated to be on the order of $100 million (€93.819 million)".

Yes, this is often more complicated and more error-prone (the human kind of error this time) than just using decimals or integers, and sometimes it outright won't work because floats aren't precise enough – which your error analysis should tell you! But that doesn't mean that IEEE 754 is somehow inherently unsuitable for this type of task.

As a practical example, Bitcoin was (according to at least one source) designed with floating point precision and error analysis in mind: the domain of possible values was limited so that every amount fits losslessly into a double-length IEEE 754 floating point value – not because it's necessarily a good idea to do Bitcoin arithmetic using floating point numbers, but to put bounds on the resulting errors if somebody does it anyway. That's applied error analysis :)
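The arithmetic behind that bound (the constants are Bitcoin's well-known supply parameters, not something from the source above): the total supply in satoshis stays below 2^53, the range in which a double represents every integer exactly.

    MAX_SUPPLY_SATOSHIS = 21_000_000 * 100_000_000  # 2.1e15 satoshis
    print(MAX_SUPPLY_SATOSHIS < 2 ** 53)            # True: 2^53 is about 9.0e15
    print(int(float(MAX_SUPPLY_SATOSHIS)) == MAX_SUPPLY_SATOSHIS)  # round-trips exactly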



