
This can't be correct. What would the point of different compilers be then? There's no way that every compiler would produce the exact same instructions for each respective input. There would be no point in using an optimizing compiler or one with better intrinsic support.



Perhaps you're comparing a multithreaded version with a single threaded version, or a single-host build vs distributed build.

When adding new features to a compiler, you might want to verify that the old and new versions have the same output given the same input. If your compilers produced non-deterministic output, this exercise would not be possible.
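For example, here's a minimal sketch of that check, assuming two compiler binaries are available under the placeholder names "old-cc" and "new-cc":

    # Minimal sketch: compile the same source with an old and a new compiler
    # version and compare the resulting object files byte for byte.
    # The compiler names "old-cc" and "new-cc" are placeholders.
    import hashlib
    import subprocess

    def object_digest(compiler: str, source: str, out: str) -> str:
        # Invoke the compiler, then hash the object file it produced.
        subprocess.run([compiler, "-c", source, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    old = object_digest("old-cc", "main.c", "old.o")
    new = object_digest("new-cc", "main.c", "new.o")
    print("identical output" if old == new else "outputs differ")

If either compiler were non-deterministic, the digests could differ even between two runs of the same binary, and the comparison would tell you nothing.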


I took the statement "different compilers" to mean completely different projects and codebases. Of course reproducible builds make sense in that situation. But why in the world would you ever expect gcc to 100% always match the output of llvm? That doesn't make sense.


> What would the point of different compilers be then?

When given the same input and expecting the same output, there remains only one thing: compile speed.


Well, and memory usage, etc. But what's the point then? Why would Intel have their own optimizing compiler? Why GCC vs llvm?


Obtaining evidence a compiler probably isn't backdoored is the point.

>There's no way that every compiler would produce the exact same instructions for each respective input.

You don't know this.


I do, because I can make a compiler that chooses non-standard instructions.


Then nobody will use it for reproducible builds. What is your point? Performant compilers for an arch are likely to produce similar code.



