There is no substitute for profiling and testing your code thoroughly under real-world conditions (and, worse still, under stress conditions).
With this in mind you can have your rules of thumb, but trying to sell a problem like this as solved with a simple, universal silver bullet is not a good idea IMO.
Sure, but when writing a generic container implementation -- such as nsTArray -- you have to pick a single strategy, and generic containers generally use exponential growth. And in this case it made a clear improvement in memory usage in the general case, as the AWSY result showed.
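For illustration, here is a minimal sketch of the kind of doubling policy a generic container typically applies when it needs more room. This is not the actual nsTArray code; the function name and the starting capacity of 8 are made up.

```cpp
// A minimal sketch of exponential (doubling) capacity growth, of the kind a
// generic container typically uses. NOT the real nsTArray implementation.
#include <cstddef>

std::size_t GrowCapacity(std::size_t currentCapacity, std::size_t requiredElements) {
  std::size_t newCapacity = currentCapacity ? currentCapacity : 8;  // arbitrary starting size
  while (newCapacity < requiredElements) {
    newCapacity *= 2;  // double until the request fits
  }
  return newCapacity;
}
```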
Someone will probably now ask why Firefox doesn't use standard containers like std::vector. Several reasons: (a) having our own container lets us know that the implementation is the same on every platform; (b) having our own container gives us more control and allows us to add non-standard features like functions that measure memory usage; (c) Firefox doesn't use C++ exceptions (for the most part) but the standard containers do.
Yep, but your post is about "fixing" all possible allocation strategies to exponential growth: "please grow your buffers exponentially", "if you know of some more code in Firefox that uses a non-exponential growth strategy for a buffer, please fix it, or let me know so I can look at it".
That sounds as if very little effort was put into those allocation strategies in the first place, since you're willing to override whatever thought went into choosing them and replace it with exponential growth (which, admittedly, is often a good heuristic), or to have other people do it.
It's perfectly plausible that in other circumstances a different approach is better. That said, in both patches mentioned it makes sense (I think the minimum for XDR encoding buffers should be at least quadrupled from its current 8KiB if it's true that on startup it already grows beyond 500KiB). One thing about exponential growth with rates as high as x2 each time is that picking a reasonably big (even slightly overshooting) estimate of the expected buffer size is the conservative thing to do, because if you let the allocator do the growing it's often going to overshoot a lot more. If your buffers follow, say, a normal distribution of maximum sizes over their lifetime, it's wise to preallocate as much as the 90th-percentile expected size and then, instead of growing x2, perhaps grow x1.5. Something worth testing and tweaking, because it makes a real difference.
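To make that concrete, here is a hypothetical sketch of the strategy described above: preallocate to an estimated 90th-percentile size, then grow by x1.5 rather than x2. The class, the constant, and the use of std::vector are illustrative only, not taken from any real Firefox code.

```cpp
#include <cstddef>
#include <vector>

class GrowableBuffer {
 public:
  // In practice this estimate would come from profiling real workloads.
  static constexpr std::size_t kExpected90thPercentileBytes = 64 * 1024;

  GrowableBuffer() { mStorage.reserve(kExpected90thPercentileBytes); }

  void Append(const char* aData, std::size_t aLength) {
    std::size_t needed = mStorage.size() + aLength;
    if (needed > mStorage.capacity()) {
      // Grow by x1.5 to limit overshoot, at the cost of a few more
      // reallocations than doubling would need.
      std::size_t grown = mStorage.capacity() + mStorage.capacity() / 2;
      mStorage.reserve(grown > needed ? grown : needed);
    }
    mStorage.insert(mStorage.end(), aData, aData + aLength);
  }

  std::size_t Length() const { return mStorage.size(); }

 private:
  std::vector<char> mStorage;
};
```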
Sorry if this sounded negative; it wasn't meant to be.
Chances are that this scheme will be used to implement your language's dynamic array (or its standard-library equivalent). So unless many people tend to use different dynamic array implementations, or fiddle with the growth factor, it is indeed being used as a "silver bullet".
That is one thing; recommending that people go through all their allocations and just make them all grow exponentially at a x2 rate, without much more consideration of what was expected in each case, is quite another.
If someone is not critically reasoning about what is needed in their specific case, exponential growth is going to give far better results on average than constant growth.
If someone is critically reasoning about their allocations, then presumably they'll be able to pick up the (fairly rare) cases where exponential growth is sufficiently detrimental to matter.
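As a rough illustration of that average-case difference (made-up numbers, not a real benchmark), here is a small program that counts the bytes copied while growing a buffer to 1 MiB with a constant 4 KiB step versus doubling:

```cpp
#include <cstddef>
#include <iostream>

static std::size_t BytesCopied(std::size_t target, bool exponential) {
  std::size_t capacity = 4096;
  std::size_t copied = 0;
  while (capacity < target) {
    copied += capacity;  // each reallocation copies the current contents
    capacity = exponential ? capacity * 2 : capacity + 4096;
  }
  return copied;
}

int main() {
  const std::size_t target = 1 << 20;  // grow to 1 MiB
  std::cout << "constant 4 KiB steps: " << BytesCopied(target, false)
            << " bytes copied\n";
  std::cout << "doubling:             " << BytesCopied(target, true)
            << " bytes copied\n";
}
```

With these numbers the constant-step policy copies roughly 127 MiB in total (quadratic in the final size), while doubling copies just under 1 MiB (a geometric series, so O(n) overall).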