Great advice. One nit to pick:

> Don't ignore constant factors. Sometimes an algorithm with asymptotically worse performance will perform better in practice because it has much better cache locality.

Forget the cache; sometimes they're just plain faster (edit in response to comment: I mean faster for your use case). For example, I've found that convolution can be much faster with the naive algorithm than with an FFT in a pretty decent range of cases. (Edit: to be specific, these cases necessarily only occur for "sufficiently small" vectors, but that turned out to be a larger size than I expected.) I don't think caching necessarily explains it; it can simply be extra computation that doesn't end up paying off.
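
A minimal timing sketch of that comparison (mine, not the parent's measurements; it assumes NumPy and SciPy are available, and the crossover size will vary by machine and library):

    # Compare direct (O(n^2)) convolution against FFT-based convolution.
    # The sizes and the crossover point are illustrative only.
    import timeit
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)

    for n in (16, 64, 256, 1024, 4096):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        t_direct = timeit.timeit(lambda: np.convolve(a, b), number=200)
        t_fft = timeit.timeit(lambda: fftconvolve(a, b), number=200)
        print(f"n={n:5d}  direct={t_direct:.4f}s  fft={t_fft:.4f}s")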




Good correction, but a small second nit.

> sometimes they're just plain faster

Not faster for sufficiently large N (by definition).

But your general point is correct.

I've seen this best expressed as Rule 3 of Rob Pike's 5 Rules of Programming [0]:

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy.

[0] http://users.ece.utexas.edu/~adnan/pike.html
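
A toy illustration of Rule 3 (my sketch, not Pike's): for small n, a plain linear scan is competitive with, or beats, a binary search, because the constants dominate.

    # Time a linear membership scan against binary search over a sorted list.
    # For small n the "dumb" scan often wins; exact numbers vary by machine.
    import bisect
    import timeit

    for n in (4, 16, 64, 1024, 65536):
        data = list(range(n))
        target = n - 1  # worst case for the linear scan
        t_scan = timeit.timeit(lambda: target in data, number=20000)
        t_search = timeit.timeit(lambda: bisect.bisect_left(data, target), number=20000)
        print(f"n={n:6d}  linear scan={t_scan:.4f}s  binary search={t_search:.4f}s")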


> Not faster for sufficiently large N (by definition).

True, but supposedly researchers keep publishing algorithms with lower asymptotic complexity that are only faster once N is something like 10^30.

Or so Sedgewick keeps telling us.
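
A back-of-the-envelope sketch of why that happens, with constants invented purely for illustration: a big enough constant factor pushes the crossover point out to astronomical sizes.

    # Find roughly where a hypothetical c*n^2.37 algorithm overtakes plain n^3.
    # Both the constant and the exponent here are made up for illustration.
    c_simple, c_fancy = 1.0, 1e20

    def simple(n):   # e.g. a schoolbook O(n^3) method
        return c_simple * n ** 3

    def fancy(n):    # lower exponent, enormous constant
        return c_fancy * n ** 2.37

    n = 2
    while simple(n) <= fancy(n):
        n *= 2
    print(f"the fancy algorithm wins only past roughly n = {n:.0e}")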



