Just a note: you cannot usefully compare load averages between different Unix OSes, because each kernel computes them differently. Linux, for instance, includes tasks in uninterruptible (disk) sleep in the count, while Solaris counts only runnable threads. So running the same workload on a Solaris box and a Linux box, then comparing the two load averages, tells you nothing useful about their relative performance.
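The three numbers themselves are easy to get at programmatically; what differs between kernels is what gets counted. Here's a minimal sketch in Python (Linux/BSD, where getloadavg(3) is available); the per-CPU normalization is my own illustration, and even that normalized figure is only comparable between machines running the same kernel:

```python
import os

# The 1-, 5-, and 15-minute load averages, as reported by the kernel.
one, five, fifteen = os.getloadavg()
ncpu = os.cpu_count() or 1

print(f"load averages: {one:.2f} {five:.2f} {fifteen:.2f}")
# A per-CPU figure is more meaningful than the raw number on a given
# box, but still not comparable across kernels: Linux counts tasks in
# uninterruptible disk sleep, most other Unixes count only runnable ones.
print(f"per-CPU 1-minute load: {one / ncpu:.2f}")
```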
To expand on this: you also cannot usefully compare load between two applications of the same type, e.g. the load averages of Apache vs. lighttpd vs. nginx. Depending on the application's architecture (threaded, forked, select/poll-based, or event-driven) and its configuration, you may see significantly different load numbers even when all of them are performing identical tasks with identical "performance".
For example, Squid in its aufs configuration (which uses threads for disk I/O) can exhibit extremely high load averages (>5) on even a moderately loaded system. It'll still be performing fine, and you may still have significant headroom for handling requests, but the number looks alarmingly high on Linux systems. Even the filesystem a service runs on can make a difference: load looks higher on a ReiserFS-based system running Squid than on an ext3-based one, and yet the ReiserFS system would handle more load (I guess it just doesn't hide the evidence of its work very well).
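To make the Squid/aufs point concrete, here's a rough, Linux-only sketch (mine, not anything from Squid) of the quantity the kernel is actually averaging: at any instant, the Linux load figure tracks the number of tasks that are runnable (state R) plus those in uninterruptible disk sleep (state D). A service that parks a pool of I/O threads in D state inflates that count without burning much CPU:

```python
import glob

def count_tasks():
    """Count tasks in R (runnable) and D (uninterruptible sleep) state
    by scanning every thread's stat file under /proc."""
    running = blocked = 0
    for stat in glob.glob("/proc/[0-9]*/task/[0-9]*/stat"):
        try:
            with open(stat) as f:
                # The state field follows the parenthesized command name,
                # which may itself contain spaces or parentheses.
                state = f.read().rpartition(")")[2].split()[0]
        except OSError:
            continue  # task exited while we were scanning
        if state == "R":
            running += 1
        elif state == "D":
            blocked += 1
    return running, blocked

r, d = count_tasks()
print(f"runnable: {r}, uninterruptible (disk wait): {d}")
print(f"instantaneous 'load' contribution: {r + d}")
```

This is also one plausible reading of the ReiserFS vs. ext3 observation: a filesystem whose code paths hold threads in D state longer will report higher load for the same actual throughput.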
http://www.teamquest.com/images/gunther/ldavg1/LAFull.gif
It's hard to understand because the lines represent the 1-, 5-, and 15-minute load averages, but the x-axis is in seconds. That means I have to divide by 60 in my head, which is a lot of unnecessary work.