Hacker News

To be frank, I have always found such dogmatism deeply questionable. I can understand caveats like "for long-term maintainability," but taken literally it suggests asinine ideas like using bubble sort instead of merge sort on a large data set, not because of any memory-footprint constraint but because it is easier to read. Insisting on not referring to reality is not a good pattern.

More charitably, the better argument is that it is easier to teach a compiler to optimize a pattern once than to hand-tweak an entire code base, quite apart from weighing the costs against the gains.



>... but taken literally it suggests asinine ideas like using bubble sort instead of merge sort for a large set not because of any memory footprint constraints but because it is easier to read.

No, it doesn't suggest that at all. It means: use merge sort, but implement it in a clean, understandable way.


> It means use the merge sort but implement it in a clean, understandable way.

https://rosettacode.org/wiki/Sorting_algorithms/Merge_sort - which of these do you consider objectively "clean and understandable"?

Why should any of them (Prolog, Mercury, J, Common Lisp, etc.) be considered "not understandable" rather than merely "not familiar to the reader"?


We have been waiting for the "sufficiently smart compiler" for around four decades now. If it's so easy to write one, where is it?


It's a spectrum, but to some extent it is already here. Look at once-standard performance practices that have since been deprecated, like manually unrolling fixed-length for loops: compilers handle that sort of thing themselves now, and hand-written assembly has become rare for the same reason.



