Hacker News
The Web Server Benchmarking We Need (ianbicking.org)
46 points by blasdel on March 17, 2010 | hide | past | favorite | 9 comments


Having been subject to benchmark criticisms myself, I say this to all the complainers: walk the walk instead of simply talking the talk.

It is practically impossible for a benchmark to cover all the special cases and what-ifs. If a benchmark doesn't cover the corner case that you want, fix it.

Every benchmark that is too small will be criticized as not doing anything useful, and every benchmark that is too big will be criticized as too polluted by external factors.


While we're thinking outside the bun, let's consider not just the web server but the whole stack. That includes load balancers, message queues, data stores, etc., and of course all the network gear that we often forget about. A fancy web server benchmark isn't going to tell you squat about how all this stuff will perform together for you, because there are too many variables.

The closest we can get to useful metrics is looking at large-scale sites that handle high load in the so-called "real world". These usually take the form of experiential stories, which, incidentally, is how most knowledge is shared.

The solution is more stories, not a fancier benchmark.


Is there anything to test here? If your process uses unbounded memory, you have to kill it. If your process uses blocking calls in the event loop, it's going to block the whole process until it's done.
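
The blocking-calls point is easy to demonstrate. Here's a minimal sketch (using Python's asyncio, which the thread doesn't name; any single-process event loop behaves the same way) showing that one synchronous sleep stalls every other task on the loop:

```python
import asyncio
import time

async def bad_handler():
    time.sleep(0.5)             # blocking call: freezes the whole event loop
    return "done"

async def good_handler():
    await asyncio.sleep(0.5)    # cooperative: other tasks keep running
    return "done"

async def main():
    start = time.monotonic()
    await asyncio.gather(good_handler(), good_handler())
    concurrent = time.monotonic() - start   # sleeps overlap: ~0.5s total

    start = time.monotonic()
    await asyncio.gather(bad_handler(), bad_handler())
    serialized = time.monotonic() - start   # each sleep blocks the loop: ~1.0s
    return concurrent, serialized

concurrent, serialized = asyncio.run(main())
```

No server or benchmark will hide this; the loop simply cannot run anything else while your code holds it.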

No benchmark or special server is going to prevent your code from being bad. So make your code be not bad.


PHP successfully recovers from memory leaks, runaway processes, and any kind of deadlock. App Engine does too. And so does mod_wsgi. It can be done.


Yeah, by spawning a new process for every request. Fork is fast, but not forking is faster.


All of these keep a pool of worker processes; they don't fork for each request. With a non-trivial application the speed difference is negligible, though the memory difference is not.
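
The worker-pool model is simple enough to sketch. This is a toy prefork server, not how mod_wsgi or PHP's process managers are actually implemented: the workers are forked once at startup and then each accepts connections in a loop, so nothing forks per request (Unix-only):

```python
import os
import socket

NUM_WORKERS = 4

def worker(server_sock):
    # Each worker blocks in accept(); the kernel distributes connections.
    while True:
        conn, _ = server_sock.accept()
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\n")
        conn.close()

def serve(port=0):
    sock = socket.socket()
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("127.0.0.1", port))
    port = sock.getsockname()[1]
    sock.listen(64)
    pids = []
    for _ in range(NUM_WORKERS):
        pid = os.fork()
        if pid == 0:            # child: becomes a worker, never returns
            worker(sock)
            os._exit(0)
        pids.append(pid)
    # A real server would supervise and respawn the children here.
    return port, pids
```

The fork cost is paid NUM_WORKERS times at boot instead of once per request, which is the whole point.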

PHP also has language-level isolation from request to request, but extensions are still long-lived and in some cases require the process to be killed when they misbehave.


Recovers? Last time I checked, it just kills itself if memory usage goes too high; I'm not sure that can really be called recovering.


A process monitor or separate thread can check if your request has been stuck for too long and kill it.
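
The in-process version of that watchdog can be sketched with an alarm signal (this is my own minimal illustration, not any particular server's mechanism, and it only works from the main thread on Unix):

```python
import signal

class RequestTimeout(Exception):
    pass

def _timeout(signum, frame):
    raise RequestTimeout("request exceeded deadline")

def run_with_deadline(handler, seconds):
    # Arm the watchdog: SIGALRM fires if the handler runs too long.
    signal.signal(signal.SIGALRM, _timeout)
    signal.alarm(seconds)
    try:
        return handler()
    finally:
        signal.alarm(0)    # disarm on the way out

def stuck_handler():
    while True:            # simulates a hung request
        pass

try:
    run_with_deadline(stuck_handler, 1)
    result = "finished"
except RequestTimeout:
    result = "killed"
```

A separate monitor process is sturdier (it survives even if the worker is wedged inside C code), but the idea is the same: a deadline plus a kill.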


Right, but at this point, you've degenerated to CGI scripts with ulimits. If you want to do better than that, then you are going to have to fix your code.
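
For what it's worth, the "CGI scripts with ulimits" safety net is a few lines: cap a forked child's address space with setrlimit before running the request, so a leak kills that one process instead of the box. A rough Unix-only sketch (my own illustration, not anyone's production setup):

```python
import os
import resource

def run_capped(func, max_bytes):
    pid = os.fork()
    if pid == 0:
        # Child: cap its address space, then run the request.
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
        try:
            func()
            os._exit(0)
        except MemoryError:
            os._exit(1)    # the leaky request dies alone
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

def leaky():
    blob = []
    while True:            # simulates unbounded memory growth
        blob.append(b"x" * 1_000_000)

code = run_capped(leaky, 256 * 1024 * 1024)   # nonzero: child was capped
```

Which is exactly the point: the runtime can contain the damage, but only your code can avoid causing it.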



