Numbers Everyone Should Know (everythingisdata.wordpress.com)
22 points by neilc on Oct 17, 2009 | 10 comments



IMHO the single most useful number here is 10 ms for a disk seek. Next is network time, but that really depends on where the computers you're talking to are located.


That entirely depends on your application. If your software does a lot of calculations, the cache-miss cost may be much more important than any hard disk seek. These are the times per event, but the number of these events differs vastly per application.


> Mutex lock/unlock 25ns

That looks suspiciously cheap to me: a quarter of the cost of accessing main memory? I tend to think of mutex ops as quite expensive, considering they involve things like memory barriers.


Locking a mutex should cost about the same as accessing a (potentially cached) memory location. If you own the mutex and are unlocking it, then 25 ns sounds fine. If you are grabbing the lock from another CPU or core, then it will definitely be way more than that.

There would be some cache snooping to invalidate the other CPU's copy of the line and mark your local version dirty.
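
If you want a rough feel for the uncontended number on your own machine, here is a minimal sketch (my own, assuming Linux/glibc, pthreads and clock_gettime; it only measures the single-threaded lock-then-unlock case):

    /* Sketch only: time n uncontended lock/unlock pairs on one thread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        const long n = 10000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < n; i++) {
            pthread_mutex_lock(&m);     /* never contended here */
            pthread_mutex_unlock(&m);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per lock/unlock pair\n", ns / n);
        return 0;
    }

Build with something like gcc -O2 bench.c -lpthread. The result includes loop overhead and says nothing about the contended case.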


http://www.feyrer.de/NetBSD/gmcgarry/ suggests that the cost to acquire an uncontested mutex on NetBSD in 2005 was ~37 nsec, so ~25 nsec doesn't seem too unreasonable (although strangely it was more expensive on FreeBSD).


> although strangely it was more expensive on FreeBSD

I think mutex ops in FreeBSD 5.3 went through syscalls or something like that. I distinctly remember a commit reducing mutex overhead by moving more of it into userspace, somewhat like futex I guess.
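
For anyone wondering what "moving it into userspace" looks like in practice, here is a stripped-down futex-style lock (my own Linux-specific sketch, not the FreeBSD code in question): the uncontended acquire is a single atomic op, and the kernel is only entered on contention.

    /* Simplified futex-style lock: 0 = free, 1 = held. Linux only. */
    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    typedef struct { atomic_int state; } ulock_t;

    static void ulock_lock(ulock_t *l) {
        int expected = 0;
        /* Fast path: one compare-and-swap in userspace, no syscall. */
        while (!atomic_compare_exchange_strong(&l->state, &expected, 1)) {
            /* Contended: sleep in the kernel until the holder wakes us.
               Returns straight away if the lock was freed in the meantime. */
            syscall(SYS_futex, &l->state, FUTEX_WAIT, 1, NULL, NULL, 0);
            expected = 0;
        }
    }

    static void ulock_unlock(ulock_t *l) {
        atomic_store(&l->state, 0);
        /* Simplification: always wake one possible waiter. A real futex
           lock tracks a separate "contended" state so the uncontended
           unlock skips this syscall too (see Drepper, "Futexes Are Tricky"). */
        syscall(SYS_futex, &l->state, FUTEX_WAKE, 1, NULL, NULL, 0);
    }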


There are some programmers I have known who should have had this list: people who spend days optimising at levels that have little impact globally. You need to think about the big picture and not just line by line.


One interesting thing that can be inferred from this table is that by increasing your cache hit rate you can approach a 100x speedup.
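
Back-of-the-envelope version, using round numbers rather than the article's exact figures (assume ~1 ns for a cache hit and ~100 ns for a miss):

    /* Effective latency = p*hit + (1-p)*miss; at a 100% hit rate the
       assumed numbers give 100 ns / 1 ns, i.e. a 100x speedup over all-miss. */
    #include <stdio.h>

    int main(void) {
        const double hit_ns = 1.0, miss_ns = 100.0;
        for (int i = 0; i <= 4; i++) {
            double p = i / 4.0;                    /* hit rate: 0%..100% */
            double avg = p * hit_ns + (1.0 - p) * miss_ns;
            printf("hit rate %3.0f%% -> %6.2f ns avg (%.0fx vs all-miss)\n",
                   p * 100, avg, miss_ns / avg);
        }
        return 0;
    }

Worth noting the curve is heavily back-loaded: with these assumed numbers, 99% hit rate is only about a 50x speedup, and the last percentage point buys the rest.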


Which is why the L1, L2 and L3 caches are there in the first place ;)


One of my favorite modules in systems class was weeks of C programming competitions/exercises where we'd try to get the largest L1/L2-based speedup in several challenges. Too bad I don't get to work at this level professionally that often.
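
Not one of the actual class exercises, but the canonical toy version of that kind of challenge is just changing traversal order (a sketch, assuming the matrix is bigger than your last-level cache):

    /* Same arithmetic, very different cache behaviour: the row-major loop
       walks memory sequentially, the column-major one strides 32 KB per
       access and misses constantly once the array outgrows the caches. */
    #include <stdio.h>

    #define N 4096
    static double a[N][N];              /* 128 MB, lives in .bss */

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++)     /* cache-friendly */
            for (int j = 0; j < N; j++)
                sum += a[i][j];

        for (int j = 0; j < N; j++)     /* cache-hostile */
            for (int i = 0; i < N; i++)
                sum += a[i][j];

        printf("%f\n", sum);
        return 0;
    }

Time each loop separately and the difference is usually several-fold, which is basically the whole game in those exercises.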



