> A lot of code relies on FP operations being deterministic, often within confines of specific ISA

How much of this is true in practice? I vaguely recall reading about a hard-to-diagnose bug where a seemingly unrelated code change meant that an intermediate value was no longer stored in the x87 extended-precision register but in memory instead, leading to a different result. Wouldn't you run into stuff like that all the time?
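That x87 failure mode (wider intermediates in registers, narrower values once spilled to memory) can be mimicked in pure Python with NumPy, using float32 as the "narrow" type and float64 standing in for the 80-bit registers. This is an analogy, not the original bug, but the mechanism is the same: round at every step versus round once at the end.

```python
import numpy as np

a, b, c = np.float32(1.0), np.float32(1e-8), np.float32(-1.0)

# Every intermediate rounded to float32: b is far below half an ulp
# of a (~6e-8 at 1.0), so it vanishes in the first addition.
narrow = (a + b) + c  # -> 0.0

# Intermediates kept wide (like x87 80-bit temporaries), one final rounding:
# the tiny b survives the additions and comes back out.
wide = np.float32((np.float64(a) + np.float64(b)) + np.float64(c))  # -> ~1e-8

print(narrow, wide)  # same source expression, different results
```

Whether a given intermediate stayed "wide" on x87 depended on register allocation, which is why an unrelated code change could flip the result.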



> A lot of code relies on FP operations being deterministic

A lot, but still quite the minority. People run into situations all the time where the same code with the same data produces slightly different results because of changes in compiler, compiler settings, CPU architecture, etc. They just don't notice.

But, then they run into a situation where it matters and it’s a huge PITA to nail down.


Yes, this is exactly it. The gold standard is bit-for-bit matching when moving code between systems. If you move your code to a new system and get different results, is it due to a different summing order, or is something else wrong? By ruling out the former you're more likely to get the bit-for-bit match, and thus either avoid a lot of debugging effort or spend it solving a real bug.
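The summing-order point is easy to demonstrate: IEEE-754 addition is not associative, so the same inputs in a different order can change bits. A small sketch, with `math.fsum` shown as one order-independent alternative:

```python
import math

xs = [1.0, 1e16, -1e16]

s1 = sum(xs)               # (1.0 + 1e16) absorbs the 1.0 first -> 0.0
s2 = sum(xs[1:] + xs[:1])  # cancel the big terms first         -> 1.0

# math.fsum tracks exact partial sums, so it is correctly rounded
# and gives the same answer regardless of input order.
print(s1, s2, math.fsum(xs))  # 0.0 1.0 1.0
```

This is exactly why a parallel reduction (which sums in whatever order the hardware schedules) rarely matches a serial loop bit-for-bit.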


This is one of many reasons why we generally don't use the x87 registers anymore.



