
I've personally tuned a lot of software that ran better with a constant growth increment than an exponential (multiplicative) growth factor.

The problem with exponential growth is that, if you're storing (kn)+1 bytes, where k is the growth factor and n is a decently large number, you end up allocating k(kn) bytes, which leaves k^2 * n - kn - 1 bytes unused. Depending on the value of k, that can be a big chunk.
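
To make the arithmetic concrete (the numbers below are mine, not the parent's): with k = 2 and n = 512 KiB, writing one byte past a 1 MiB buffer bumps the capacity to 2 MiB, leaving just under 1 MiB allocated but unused.

    #include <stdio.h>

    /* Waste after one geometric grow: the data is k*n + 1 bytes, the new
       capacity is k*(k*n) bytes, so k^2*n - k*n - 1 bytes sit unused. */
    int main(void) {
        size_t k = 2;                  /* growth factor (example value) */
        size_t n = 512 * 1024;         /* "decently large": 512 KiB here */
        size_t used    = k * n + 1;    /* one byte past the old capacity */
        size_t new_cap = k * (k * n);  /* capacity after a geometric grow */
        printf("used:    %zu bytes\n", used);
        printf("new cap: %zu bytes\n", new_cap);
        printf("wasted:  %zu bytes\n", new_cap - used);
        return 0;
    }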

Particularly when you're memory constrained and running long-lived processes, carrying some large percentage of your buffer as empty overhead just wastes resources. It's better to spend extra on allocations in the first minute and have the job run for a day, than to make the first minute faster and have it take 36 hours because the working set no longer fits in memory.
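
Here's a minimal sketch of what a constant-increment growth policy can look like in C; the chunk size, names, and structure are my own choices for illustration, not from any particular codebase. A zero-initialized buf_t starts out as an empty buffer.

    #include <stdlib.h>
    #include <string.h>

    /* Grow by a fixed chunk instead of a multiplicative factor: the slack
       is bounded by GROW_CHUNK no matter how large the buffer gets, at the
       cost of more realloc calls (and copying) as it grows. */
    #define GROW_CHUNK (64 * 1024)   /* example increment; tune per workload */

    typedef struct {
        char  *data;
        size_t len;
        size_t cap;
    } buf_t;

    int buf_reserve(buf_t *b, size_t need) {
        if (need <= b->cap)
            return 0;
        /* round the request up to the next multiple of GROW_CHUNK */
        size_t new_cap = (need + GROW_CHUNK - 1) / GROW_CHUNK * GROW_CHUNK;
        char *p = realloc(b->data, new_cap);
        if (!p)
            return -1;
        b->data = p;
        b->cap  = new_cap;
        return 0;
    }

    int buf_append(buf_t *b, const void *src, size_t n) {
        if (buf_reserve(b, b->len + n) != 0)
            return -1;
        memcpy(b->data + b->len, src, n);
        b->len += n;
        return 0;
    }

The tradeoff is the usual one: total copying grows roughly quadratically with the final size, which is why amortized analysis favors geometric growth; the point above is that for memory-constrained, long-running processes the bounded slack can matter more.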




In most cases the empty overhead won't be part of the working set, though.


But the system still has to commit the full allocated amount, which may deny memory allocations to the rest of the system.
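
A Linux-specific way to see both points at once (my illustration; the field names come from /proc/self/status): allocate a big buffer, touch only one page, and compare VmSize (allocated) with VmRSS (resident, roughly the working set). Whether the untouched pages also count against commit depends on the overcommit policy (vm.overcommit_memory).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Print the VmSize and VmRSS lines from /proc/self/status. */
    void show(const char *tag) {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (!f) return;
        printf("-- %s --\n", tag);
        while (fgets(line, sizeof line, f))
            if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
                fputs(line, stdout);
        fclose(f);
    }

    int main(void) {
        show("before");
        char *buf = malloc(256u * 1024 * 1024);   /* 256 MiB, mostly untouched */
        if (!buf) return 1;
        memset(buf, 1, 4096);                     /* touch only the first page */
        show("after touching one page");
        printf("byte: %d\n", buf[100]);           /* keep the write observable */
        free(buf);
        return 0;
    }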


If you have arrays/buffers that take up a significant chunk of a 64 bit address space, you might make a 2nd pass to tune those. Otherwise, "just" make plenty of swap space for the idle memory chunks.

And yes, "swap space" is going to be a bit more constrained on a 32 bit mobile app. CPU cache is the real memory, and DRAM is the new swap :-)



