It is pretty trivial stuff, though. The first half describes a typical "slab allocator". The other half covers the use of free lists, in the context of speeding up their own clumsy bitmap-based free-block tracking code. All of this is already provided by dlmalloc (?), which IIRC is the basis of the default GNU libc allocator.
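For concreteness, here is a minimal sketch (my own, not the article's code) of the free-list idea: a fixed-size slab carved into equal blocks, with freed blocks threaded into a singly linked list, so both alloc and free are O(1) pointer swaps instead of a bitmap scan.

```cpp
#include <cstddef>

class SlabAllocator {
    // Each free block doubles as a list node; the payload and the
    // next-pointer share storage, so bookkeeping costs nothing extra.
    union Block { Block* next; unsigned char payload[64]; };
    Block* slab_;        // one contiguous chunk of blocks
    Block* free_head_;   // head of the free list
public:
    explicit SlabAllocator(std::size_t count)
        : slab_(new Block[count]), free_head_(slab_) {
        // Thread every block onto the free list up front.
        for (std::size_t i = 0; i + 1 < count; ++i)
            slab_[i].next = &slab_[i + 1];
        slab_[count - 1].next = nullptr;
    }
    ~SlabAllocator() { delete[] slab_; }

    void* allocate() {                  // O(1): pop the list head
        if (!free_head_) return nullptr;
        Block* b = free_head_;
        free_head_ = b->next;
        return b->payload;
    }
    void deallocate(void* p) {          // O(1): push onto the head
        Block* b = reinterpret_cast<Block*>(p);
        b->next = free_head_;
        free_head_ = b;
    }
};
```

Note the LIFO behavior: the most recently freed block is handed out first, which is also good for cache locality.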
The discussion of multithreading issues is very basic too: just a simple single-lock wrapper around all API calls. That's just dumb, if you'll pardon my French. There's no mention of granular locking or lock-free options.
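By granular locking I mean something along these lines (a hypothetical sketch, not any particular allocator's design): shard the allocator state into independent arenas, each with its own mutex, and hash threads to arenas so they rarely contend on the same lock.

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <mutex>
#include <new>
#include <thread>
#include <vector>

struct Arena {
    std::mutex lock;
    std::vector<void*> free_list;  // recycled fixed-size blocks
};

class ShardedPool {
    static constexpr std::size_t kShards = 8;
    static constexpr std::size_t kBlockSize = 64;
    std::array<Arena, kShards> arenas_;

    Arena& arena_for_this_thread() {
        // Cheap thread-to-shard mapping; a real allocator would cache
        // the shard index in thread-local storage.
        std::size_t h = std::hash<std::thread::id>{}(std::this_thread::get_id());
        return arenas_[h % kShards];
    }

public:
    void* allocate() {
        Arena& a = arena_for_this_thread();
        std::lock_guard<std::mutex> g(a.lock);  // contends only within one shard
        if (!a.free_list.empty()) {
            void* p = a.free_list.back();
            a.free_list.pop_back();
            return p;
        }
        return ::operator new(kBlockSize);
    }

    void deallocate(void* p) {
        Arena& a = arena_for_this_thread();
        std::lock_guard<std::mutex> g(a.lock);
        a.free_list.push_back(p);  // recycle into this thread's shard
    }
};
```

With one lock per arena instead of one global lock, two threads in different shards never block each other; lock-free designs go further and replace the mutex with atomic compare-and-swap on the free-list head.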
Also, on a more general note: writing a custom memory manager is an option of last resort, for when you are optimizing existing code that cannot be refactored. A custom manager exists to negate the effects of a bad application design; if the design itself can be fixed, there's really no need for a manager.
Consider their own example of allocating 5,000,000 objects, each with its own call to new. This is exactly what std::vector is for: it coalesces all those allocations into one and then constructs the instances in place. If, in the real world, these 5 million objects are not allocated all at once, a vector can still serve as an application-level pool of objects. In the end, this is an application-specific allocation quirk, and it needs to be dealt with at the application level, not in libc.
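That is, instead of 5 million trips through the allocator, you make one (the `Particle` type here is just an illustrative stand-in for their objects):

```cpp
#include <cstddef>
#include <vector>

struct Particle { double x = 0, y = 0; };

// Naive version: for (int i = 0; i < 5000000; ++i) objs.push_back(new Particle);
// Coalesced version: one allocation up front, then in-place construction.
std::vector<Particle> make_particles(std::size_t n) {
    std::vector<Particle> particles;
    particles.reserve(n);               // single allocation for all n objects
    for (std::size_t i = 0; i < n; ++i)
        particles.emplace_back();       // constructs inside the reserved block
    return particles;
}
```

Because `reserve` runs first, no reallocation happens during the loop, and the objects end up contiguous in memory, which the 5-million-`new` version cannot guarantee.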
If you can afford three people working on it full-time, you can do beautiful things. For example, you can wall off vast areas of code with auto-release pools (not Objective-C pools, which are scoped to a stack frame, but custom ones whose lifetime you choose), letting people in those areas concentrate on complex algorithms rather than memory micromanagement. Other areas of the code can then deal with memory directly for the highest resource utilization.
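A pool like that might look roughly like this (a hypothetical sketch of the idea, not production code): everything created through the pool is destroyed in one sweep, whenever the owning subsystem decides, not when some stack frame unwinds.

```cpp
#include <functional>
#include <utility>
#include <vector>

// A release pool whose lifetime the caller controls. Code inside the
// "walled-off" area allocates through create<>() and never frees;
// the owner calls drain() at a moment of its choosing.
class ReleasePool {
    std::vector<std::function<void()>> destructors_;
public:
    template <class T, class... Args>
    T* create(Args&&... args) {
        T* p = new T(std::forward<Args>(args)...);
        destructors_.push_back([p] { delete p; });  // remember how to free it
        return p;
    }
    void drain() {
        // Destroy in reverse creation order, mirroring stack semantics.
        for (auto it = destructors_.rbegin(); it != destructors_.rend(); ++it)
            (*it)();
        destructors_.clear();
    }
    ~ReleasePool() { drain(); }  // safety net if the owner forgets
};
```

A real implementation would likely back this with an arena so the objects are also contiguous, but even this naive version removes per-object lifetime bookkeeping from the algorithm code.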
Another thing you can do is make sure to decommit all deallocated pages and allocate every object at a page boundary. That way, all buffer overruns and dangling references result in a page fault instead of hard-to-debug corruption.
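The mechanism, sketched with POSIX mmap/mprotect (this is the "electric fence" style of debugging allocator; the function names here are my own): each object gets its own pages, placed flush against an inaccessible guard page, and freeing revokes all access so any later use faults immediately.

```cpp
#include <cstddef>
#include <sys/mman.h>
#include <unistd.h>

// Allocate size bytes in dedicated pages, followed by one PROT_NONE
// guard page. The object is placed at the END of its pages, so an
// overrun of even one byte lands on the guard page and faults.
void* guarded_alloc(std::size_t size) {
    std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    std::size_t pages = (size + page - 1) / page;
    void* base = mmap(nullptr, (pages + 1) * page,
                      PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) return nullptr;
    char* p = static_cast<char*>(base);
    mprotect(p + pages * page, page, PROT_NONE);  // the guard page
    return p + pages * page - size;
}

// "Decommit" on free: make every page inaccessible, so any dangling
// pointer into this object now page-faults instead of reading junk.
void guarded_free(void* ptr, std::size_t size) {
    std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    std::size_t pages = (size + page - 1) / page;
    char* base = static_cast<char*>(ptr) + size - pages * page;
    mprotect(base, (pages + 1) * page, PROT_NONE);
    // A real implementation might munmap or recycle the range later.
}
```

The cost is severe (a full page per object plus a TLB entry), which is why you run this in debug builds, not in the shipping allocator.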