Which wasn't our point; the OP here misrepresented us. We wanted to explain how popular we were, and we realised we had as much traffic as Stack Overflow, so we used that as a traffic comparison. Our point was not to compare servers. As you can see if you read through the linked topic, at no point did we compare ourselves to SO beyond traffic; the OP here is at fault :)
Then why was the title you posted on reddit "Minecraftwiki.net and minecraftforum.net now serve more traffic than Slashdot and Stackoverflow!"? I only added the server count and the PHP part, both of which were in the thread you linked, to point out the false dichotomy. You shouldn't have posted a title like that; you compared yourselves to SO and Slashdot, not me.
Yes, we compared our traffic levels, not our server count. The title of this post implies we're saying "we have as much traffic as SO and they use more servers!", which wasn't the point. The point was that Stack Overflow is a tangible point of comparison for traffic, not hardware. Anyway, I replied to you above; I misunderstood your intentions.
Your title made our servers and setup the focus, which wasn't the point at all.
I agree. Over here we do 70M pages per month (with a high write ratio) on one server handling all of Apache/PHP/MySQL. Hardware is really fast these days if you tune it to any degree.
That was a very interesting article, thanks. One question, if I may: When using a reverse proxy, it makes no sense to have keepalives on for Apache, correct? The proxy takes care of the keepalive and leaves Apache free for other requests?
Correct. The reverse proxy pulls from Apache over the fast local network and then passes the data to the slow clients, so Apache is connected for a shorter time. Basically you're trying to minimize the time a "memory-expensive" process like Apache is tied up per client.
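A minimal sketch of that setup, assuming nginx as the reverse proxy in front of Apache on the same box; the port numbers and timeout are just illustrative, not anyone's actual config:

    # nginx (front): cheap workers hold the client keepalives
    server {
        listen 80;
        keepalive_timeout 65;                 # slow clients tie up nginx, not Apache
        location / {
            proxy_pass http://127.0.0.1:8080; # fetch from Apache over loopback, release it fast
        }
    }

    # Apache (back), httpd.conf: no reason to hold keepalives here
    Listen 8080
    KeepAlive Off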
No, I'm not referring to this parameter in the write-up.
net.ipv4.tcp_slow_start_after_idle, which is on by default on most distros, applies to keepalive connections.
This causes your keepalive connection to return to slow start after the connection has been idle for TCP_TIMEOUT_INIT, which is 3 seconds. Probably not what you want or expect. For example, with a keepalive timeout of say 10s, you'd expect a request arriving after say 5s of idle to have its congestion window fully open from the previous requests, but the default behaviour is to go back to slow start and close the congestion window back down. So you want to turn this off on your image servers and other keepalive-heavy systems.
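Concretely, on a stock Linux box that looks like:

    # check the current value (1 = fall back to slow start after idle)
    sysctl net.ipv4.tcp_slow_start_after_idle

    # turn it off at runtime
    sysctl -w net.ipv4.tcp_slow_start_after_idle=0

    # and persist it across reboots
    echo "net.ipv4.tcp_slow_start_after_idle = 0" >> /etc/sysctl.conf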
The tuning I'm talking about is actually increasing the default initial congestion window size, which benefits both non-keepalive and keepalive connections. There is no sysctl parameter that allows this; the behaviour is hardcoded into the TCP stack, and hence requires modifying the source and recompiling the kernel.
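To illustrate the kind of change meant: in 2.6-era kernels the initial window is computed in tcp_init_cwnd() in net/ipv4/tcp_input.c. This is a sketch only, the exact code varies by kernel version, and the value 10 is just an example, not a recommendation from the write-up:

    /* net/ipv4/tcp_input.c -- illustrative, not a drop-in patch */
    __u32 tcp_init_cwnd(const struct tcp_sock *tp, const struct dst_entry *dst)
    {
            /* a per-route override, if one was set */
            __u32 cwnd = (dst ? dst_metric(dst, RTAX_INITCWND) : 0);

            if (!cwnd)
                    cwnd = 10; /* stock kernels compute 2-4 segments per RFC 3390 */

            return min_t(__u32, cwnd, tp->snd_cwnd_clamp);
    }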
I've averaged 1M pages/day for more than a year on a single server, a dual Opteron 250 at 2.4GHz, with a load average of 0.3...
We mostly serve static content, and most PHP content is cached. I just think this comparison to SO and /. is flawed.