I've stopped having as many issues with this since I stopped using swap, which must be one of the biggest traps for desktop users in common Linux guides. On my desktop I'd much rather have something crash than have my system grind to a halt.
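For anyone who wants to try the same, it's a two-step job (the fstab entry varies per install, so treat this as a sketch):

    sudo swapoff -a    # turn off all active swap immediately
    # then comment out the swap entry in /etc/fstab so it stays off after reboot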
This is true, and I don't understand why Linux doesn't have a smarter approach here. When a process suddenly starts using a lot of memory, and memory usage is approaching the limits of physical RAM, that process should be killed rather than swapping out things like GNOME to make room for it.
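Userspace OOM killers exist for exactly this, for what it's worth. A minimal sketch using earlyoom, assuming it's installed (the thresholds are illustrative):

    # kill the process with the highest oom_score once both
    # available RAM and free swap drop below 10%
    earlyoom -m 10 -s 10

systemd-oomd takes a similar pressure-based approach on distros that ship it.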
Of course, that assumes a desktop environment. On a server, if a process starts using up a lot of memory, it's probably vitally important to keeping the (database|webserver) up, and should not be constrained in any way.
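And for that case you can tell the kernel OOM killer to leave the important process alone. A sketch, using postgres as a stand-in for whatever the box exists to run:

    # -1000 means "never pick this process as an OOM victim"
    echo -1000 | sudo tee /proc/$(pidof -s postgres)/oom_score_adj

Under systemd the same thing is OOMScoreAdjust=-1000 in the service unit.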
Which probably explains the situation that we're in.
Maybe because NVMe is more common now? The last time I had swap was a few years ago, and every time I'd dip into it, the system would be so slow it was hard to even fix the problem. Since going without swap I've had a couple of occasions where Docker or whatever just crashes instead, and that's it. But possibly with my current NVMe drive, swap wouldn't be too awful if it got used.
Nah, I have a swap partition on an NVMe drive, and I still get several minutes of unusability when I run out of RAM and run low on (or out of) swap, while the kernel takes its sweet time deciding which process to kill. It can get pretty frustrating, and sometimes I just don't feel like waiting and reboot. It doesn't happen often (possibly because I've learned to kill things like IntelliJ when I'm not actively using them), but when it does, it's super annoying.
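One trick that saves me the reboot: you can invoke the kernel OOM killer by hand instead of waiting for it, assuming you have root and the magic SysRq interface:

    # OOM-kill the most expendable process right now
    echo f | sudo tee /proc/sysrq-trigger
    # or enable the keyboard route; afterwards Alt+SysRq+F
    # works even when the GUI is completely frozen
    echo 1 | sudo tee /proc/sys/kernel/sysrq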
Yeah, NVMe can be the difference, because then the system stays at least somewhat responsive. I ran into an OOM situation on a newer computer of mine that has an NVMe drive, and I could close the offending Electron application easily.
I want to identify the memory hog(s) first so I can determine the next course of action.
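For that identification step, a quick one-liner with plain procps is usually enough:

    # header plus the top ten processes by resident memory (RSS, in KiB)
    ps -eo pid,rss,comm --sort=-rss | head -n 11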
Suddenly seeing a black screen with an Apple logo while I'm in the middle of something important is irritating at best, especially when it happens repeatedly and I haven't pinned down the root cause.
Yeah, this seems increasingly common on Macs lately. It's been happening a lot to me too, and I regularly reimage my systems because I manage a fleet of them, so it's not just some weird driver.
For me it was a constant battle with Chromium. 16GB of RAM simply wasn't enough; every other day I was hitting the OOM killer or system freezes.
I was so angry I went and upgraded to 64GB of RAM, and now it's stable. I found out that I don't use any more tabs than I did in the past, even with gobs of RAM available. So I believe it's just the web getting more bloated over time, along with Chrome getting more bloated. One day I suspect I'll be running the same number of tabs and 64GB won't even save me.
I've rarely actually experienced this (even on HDD-based machines). Sometimes the system will chug if a runaway JavaScript somewhere leaks all available memory, but I can usually switch (slowly) to a terminal, kill the offending process, and after a minute or two things will sort themselves out and be fast again. I suspect Ubuntu and the like may ship pathological kernel VMM and swap settings out of the box.
Lately I've been running with "swappiness" turned way down, but not with swap completely disabled. Eh, I've got 32 GiB of RAM now; my system can handle keeping a few programs in memory, even a couple of Electron ones.
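For anyone curious, the knob is a sysctl (60 is the usual distro default; the drop-in file name below is just my choice):

    sysctl vm.swappiness                    # check the current value
    sudo sysctl vm.swappiness=10            # turn it way down for this boot
    # persist the setting across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf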
Try zram (edit: if it is not installed by the distro already, e.g. DietPi ships it). It creates a swap device per thread inside RAM; unused pages get compressed and stored there, which increases SSD lifetime by reducing writes to a swap partition/file on disk.
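Setting it up by hand is short, assuming a kernel with the zram module and util-linux's zramctl (the size and algorithm are just examples; packages like zram-tools or systemd's zram-generator automate this):

    sudo modprobe zram
    # carve a compressed, RAM-backed block device; prints e.g. /dev/zram0
    sudo zramctl --find --size 8G --algorithm zstd
    sudo mkswap /dev/zram0
    # higher priority than any disk swap, so the kernel fills zram first
    sudo swapon --priority 100 /dev/zram0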