Doesn't this overlook the issue of Heroku's random load balancing system?
This article seems to suggest that the number of dynos required scales linearly with load, but, as Heroku now admits, the routing is random and therefore scales much less efficiently.
I found a great simulator that someone put together to demonstrate the issue: http://ukautz.github.com/pages/routing-simulator.html
If I'm using it correctly, the 82 dynos in his first example would actually result in requests taking over 9,000ms. It takes about 150 nodes to get the average wait time down to around 600ms.
Parameters used:
1. Nodes: 82
2. Process/Node: 1
3. Requests/second: 338 (20300/60)
4. Duration request min/max: 243
5. Timeout: 100000
6. Time multiplier: 2.00 (not exactly sure what this does)
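To make the effect a bit more concrete, here's a rough back-of-the-envelope sketch I put together, roughly matching the parameters above. It's my own toy model, not the simulator's actual code, so don't expect it to reproduce those exact numbers, but it shows why random routing queues up so badly near saturation:

```python
import random

def simulate(nodes=82, threads_per_node=1, req_per_sec=338,
             duration_ms=243, sim_seconds=60, seed=1):
    """Crude sketch of Heroku-style random routing: each request is sent to a
    uniformly random dyno and queues behind whatever that dyno is already doing."""
    random.seed(seed)
    # next_free[i][j] = time (ms) at which worker j of dyno i becomes idle
    next_free = [[0.0] * threads_per_node for _ in range(nodes)]
    waits = []
    t = 0.0
    mean_gap_ms = 1000.0 / req_per_sec              # average gap between arrivals
    while t < sim_seconds * 1000.0:
        t += random.expovariate(1.0 / mean_gap_ms)  # Poisson-ish arrivals
        dyno = random.randrange(nodes)              # the "random" in random routing
        worker = min(range(threads_per_node), key=lambda w: next_free[dyno][w])
        start = max(t, next_free[dyno][worker])
        waits.append(start - t)                     # time spent queued in the dyno
        next_free[dyno][worker] = start + duration_ms
    return sum(waits) / len(waits)                  # mean queue time in ms

print(simulate(nodes=82))    # ~82 requests in flight vs 82 dynos: queues explode
print(simulate(nodes=150))   # plenty of headroom, but random routing still queues some
```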
Using multiple threads per dyno does help considerably. I guess it's effectively equivalent to multiplying the size of your whole dyno pool while also getting intelligent routing within each dyno, but problems still crop up if your request times are even slightly inconsistent.
Also, one problem that seems to be overlooked is that multithreading within a single dyno may be less efficient than spreading the same concurrency across many dynos, since the threads have to compete for that dyno's resources. I can imagine performance dropping across Heroku's platform as a whole as customers switch to threaded application servers and reduce their dyno counts.
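Using the same toy model as above, bumping the threads-per-dyno knob (while ignoring the in-dyno resource contention I just mentioned, so it's on the optimistic side) shows how much in-dyno concurrency helps at a fixed dyno count:

```python
# Same sketch as above; the only change is how many workers each dyno has.
print(simulate(nodes=82, threads_per_node=1))   # single-threaded dynos
print(simulate(nodes=82, threads_per_node=4))   # 4 workers each: waits drop sharply in this model
```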