Hacker News

I can speak from experience on this: a home-made cache can be much better, depending on your average object size. Letting the OS make caching decisions is usually the wrong choice for a database. The OS caches data at page granularity (4 KiB), so if you have many small objects randomly spread across your working set, it won't be as efficient as caching whole objects. Generally, the more control you take, the better your database performs. One technique, if you do want to let the OS handle caching and paging of data, is to keep a dedicated commit log that is written with fsync on every update, or periodically, depending on your durability requirements.
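The commit-log idea above can be sketched roughly like this (a minimal illustration, not any particular database's implementation; the class and policy names are made up):

```python
import os

class CommitLog:
    """Append-only commit log with a configurable fsync policy.

    fsync_policy="always"   -> fsync after every append (durable, slower)
    fsync_policy="periodic" -> caller invokes flush() on a timer
                               (faster, but recent writes may be lost
                               on a crash)
    """
    def __init__(self, path, fsync_policy="always"):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.fsync_policy = fsync_policy

    def append(self, record: bytes):
        # Length-prefix each record so the log can be replayed on recovery.
        os.write(self.fd, len(record).to_bytes(4, "big") + record)
        if self.fsync_policy == "always":
            os.fsync(self.fd)  # force the record to stable storage now

    def flush(self):
        # For the "periodic" policy: sync everything written so far.
        os.fsync(self.fd)

    def close(self):
        os.fsync(self.fd)
        os.close(self.fd)
```

On restart you replay the log into the in-memory state, then let the OS page whatever it likes; durability comes from the log, not from the page cache.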

The OS usually does a pretty good job, but remember it's designed to be generic, not tuned for your specific use case. When you really care about database performance, you tend to bypass what the OS does and take control of things like scheduling and caching yourself.
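To make the page-vs-object point concrete: an application-level cache can account for memory per object rather than per 4 KiB page. A toy LRU sketch (class name hypothetical, capacity counted in bytes of cached values):

```python
from collections import OrderedDict

class ObjectCache:
    """Tiny LRU cache keyed by object, not by 4 KiB page.

    Capacity is counted in bytes of the cached values, so many small
    objects don't carry whole-page overhead the way the OS page cache
    does when hot objects are scattered across pages.
    """
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()  # key -> bytes value, in LRU order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def put(self, key, value: bytes):
        if key in self.items:
            self.used -= len(self.items.pop(key))
        self.items[key] = value
        self.used += len(value)
        while self.used > self.capacity:
            _, evicted = self.items.popitem(last=False)  # drop LRU entry
            self.used -= len(evicted)
```

With 100-byte objects scattered one-per-page, the page cache spends ~4 KiB per hot object; a cache like this spends ~100 bytes, which is the efficiency gap the parent comment is describing.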





