Good luck getting programs to process SIGNOMEM without allocating any more memory!
Besides, if programs are using memory that could be discarded, it would be better if the kernel knew this in the first place and could therefore do the MM on their behalf.
You could have the system reserve a little tiny bit of extra memory (somewhat similar to the "disk space reserved for root" in ext2) that it would only ever hand out in response to malloc() attempts inside a SIGNOMEM handler.
You're right, though; metadata on the allocations themselves marking them as release-on-memory-pressure would be much better.
No it wouldn't. If a program has memory that it doesn't need and can live without, it ought to just release it. Otherwise that is called a memory leak. I believe this was tried in early Android releases and was disastrous. Basically, during an out-of-memory condition the kernel would ask every process "hey, can you release some memory?" And each process would reply with "nope, I need it all!"
I don't see how the metadata is any different. What happens when the kernel yanks a page of memory from under a running process?
Programs often make time vs memory tradeoffs. In some cases, it is even possible for them to adjust these tradeoffs during runtime.
The most common example is file-system caches. Pretending (for the sake of argument) that the kernel did not automatically cache the file-system, a program may reasonably make this optimization [1] (a rough sketch follows at the end of this comment). In this case, the program can easily release some memory by clearing its cache.
You could also be running a program with garbage collection that normally wouldn't bother doing a full sweep until it hit some memory usage threshold; again, it can do this on request.
I'm sure that people can come up with other examples.
[1] In fact, a program can request that the kernel not cache its file-system requests, in which case it would have actual reason to cache what it thinks it might need again.
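To make that concrete, here is a rough sketch of such an application-level cache in C. The single global cache and the drop_cache() entry point are made up for illustration; posix_fadvise(POSIX_FADV_DONTNEED) is the standard call for asking the kernel to drop its own page-cache copy of a file.

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* One application-managed cache of a file's contents.  Dropping it on
     * request is just a free(); the data can always be re-read later. */
    static char *cached_data = NULL;
    static off_t cached_len  = 0;

    static int load_file(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        cached_len = lseek(fd, 0, SEEK_END);
        lseek(fd, 0, SEEK_SET);

        cached_data = malloc((size_t)cached_len);
        if (cached_data == NULL ||
            read(fd, cached_data, (size_t)cached_len) != cached_len) {
            free(cached_data);
            cached_data = NULL;
        }

        /* Ask the kernel to drop its own page-cache copy of this file,
         * so the application copy above is the only cache. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

        close(fd);
        return cached_data ? 0 : -1;
    }

    /* Hypothetical hook, called when the program decides (or is asked)
     * to shed memory: losing the cache costs a re-read, not correctness. */
    static void drop_cache(void)
    {
        free(cached_data);
        cached_data = NULL;
        cached_len  = 0;
    }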
Of course. But in practice, here's what's going to happen. The good apps, say Postgres, will implement the ability to do this. They will create caches, and release them upon request from the OS. This will significantly hinder the performance of Postgres because it will keep losing its caches. Note that this is worse than just emptying the cache: you actually lose the allocation and have to start over. In some cases you'll effectively disable the cache, which is there for a reason!
Now, here comes, say, MongoDB, which says "Yeah, I have caches, and I need them all! I won't release any allocations because I have to have them." Let's say, for whatever reason, you run both Postgres and MongoDB on the same box and it's running out of RAM. Now you are punishing Postgres, the good citizen, and rewarding MongoDB, the bad citizen, and only because Postgres bothered to implement the ability to give up its caches.
I cannot find the reference to this, but I believe this was tried in early Android, and the consequences were that it became unusable as soon as you installed at least one memory hog app that never gave up any memory.
> No it wouldn't. If a program has memory that it doesn't need and can live without it ought to just release it. Otherwise that is called a memory leak.
I disagree. A process could usefully cache the results of computation in RAM (e.g. a large lookup table). This RAM is useful to the process (will increase performance) but if it is discarded due to a spike in memory pressure it could be rebuilt.
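Something like the following minimal sketch, where sin() stands in for the expensive computation and on_memory_pressure() is a hypothetical hook a low-memory notification would call:

    #include <math.h>
    #include <stdlib.h>

    /* A large precomputed table: pure speed-up while resident, rebuildable
     * at any time, so it can be thrown away under memory pressure. */
    #define TABLE_ENTRIES (1u << 22)

    static double *table = NULL;

    static void build_table(void)
    {
        table = malloc(TABLE_ENTRIES * sizeof *table);
        if (table == NULL)
            return;                        /* fall back to computing directly */
        for (size_t i = 0; i < TABLE_ENTRIES; i++)
            table[i] = sin((double)i / TABLE_ENTRIES);
    }

    double lookup(size_t i)
    {
        i %= TABLE_ENTRIES;
        if (table == NULL)
            build_table();                 /* rebuilt on demand after a drop */
        if (table == NULL)
            return sin((double)i / TABLE_ENTRIES);
        return table[i];
    }

    /* Hypothetical low-memory hook: discarding the table costs time later,
     * never correctness. */
    void on_memory_pressure(void)
    {
        free(table);
        table = NULL;
    }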
> What happens when the kernel yanks a page of memory from under a running process?
> [...] that it would only ever hand out in response to malloc() attempts inside a SIGNOMEM handler.
POSIX doesn't allow malloc() to be called inside a signal handler at all (it is not async-signal-safe). Also, when free()ing memory that was allocated with malloc(), there is absolutely no guarantee that the memory is ever returned to the OS (i.e. via munmap() or sbrk()), due to possible memory fragmentation.
In other words, it just wouldn't be practically possible to create an application that uses the standard memory allocation functions and can reliably free some memory back to the kernel.
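A small illustration of the fragmentation point, assuming glibc's allocator on Linux (small allocations come from the brk heap): after freeing every other block, the program break typically stays where it was, because the free space is scattered in holes rather than sitting at the top of the heap.

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define N  4096
    #define SZ 1024        /* small enough to be served from the brk heap */

    int main(void)
    {
        void *blocks[N];
        void *before = sbrk(0);

        for (int i = 0; i < N; i++)
            blocks[i] = malloc(SZ);
        void *peak = sbrk(0);

        /* Free every other block: half the memory is unused again, but the
         * holes are scattered, so the allocator cannot shrink the heap. */
        for (int i = 0; i < N; i += 2)
            free(blocks[i]);
        void *after = sbrk(0);

        printf("break before allocating: %p\n", before);
        printf("break at peak usage:     %p\n", peak);
        printf("break after freeing 50%%: %p\n", after);  /* typically == peak */
        return 0;
    }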
malloc() and free() are what I meant by "standard memory allocation functions". I'd say a program that is in a position to use madvise() effectively has implemented its own heap allocator.
Also, MADV_DONTNEED is only usable in some specific situations, like caches. I don't see how it could be used to implement things like "on low memory, trigger garbage collection and trim the heap to the smallest possible size with munmap()".
You said: "In other words, it's just wouldn't be practically possible to create an application that uses standard memory allocation functions and reliably can free some memory back to the kernel."
So I thought you'd be interested to know that you can do just this with the standard functions mmap() and madvise().
No, it's not a replacement for malloc/free, but it does have value to some applications for some use cases.
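For reference, a minimal sketch of that pattern with an anonymous, application-managed cache region; the size and the fill are arbitrary, but mmap(), madvise(MADV_DONTNEED) and munmap() are used as documented on Linux.

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define CACHE_SIZE (64u * 1024 * 1024)   /* 64 MiB application-managed cache */

    int main(void)
    {
        /* Take the cache from mmap() rather than malloc(), so its pages can
         * be handed back to the kernel independently of the heap. */
        char *cache = mmap(NULL, CACHE_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (cache == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Fill the cache: physical pages are committed from here on. */
        memset(cache, 0xAB, CACHE_SIZE);

        /* On memory pressure: return the physical pages to the kernel while
         * keeping the mapping.  The region reads back as zeroes afterwards,
         * so the cached contents must be recomputed before reuse. */
        if (madvise(cache, CACHE_SIZE, MADV_DONTNEED) != 0)
            perror("madvise");

        munmap(cache, CACHE_SIZE);
        return 0;
    }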