Defaults have all sorts of assumptions built into them. So if you compare different programs with their respective defaults, you are actually comparing the assumptions that the developers of those programs have in mind.
For example, if you keep adding data to a Redis server under default config, it will eat up all of your RAM and suddenly stop working. Postgres won't do the same, because its default buffer size is quite small by modern standards. It will happily accept INSERTs until you run out of disk, albeit more slowly as your index size grows.
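For reference, the behavior above comes from two Redis defaults. A sketch of the relevant redis.conf knobs, assuming a standalone instance (the 2gb cap is an arbitrary example):

```conf
# Defaults that produce the "eat all RAM" behavior:
maxmemory 0                     # 0 = no limit; Redis grows until the OS runs out of RAM
maxmemory-policy noeviction     # never evict; writes fail (or the OOM killer strikes)

# A hedged alternative: cap memory and evict least-recently-used keys
maxmemory 2gb
maxmemory-policy allkeys-lru
```

Whether eviction is acceptable depends on whether you're using Redis as a cache or as a primary store, which is exactly the assumption the defaults bake in.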
The two programs behave differently because Redis was conceived as an in-memory database with optional persistence, whereas Postgres puts persistence first. When you use either of them with their default config, you are trusting that the developers' assumptions will match your expectations. If not, you're in for a nasty surprise.
There are radiotrophic fungi that thrive in Chernobyl, so I wouldn't hold out much hope for UV either. It probably won't be able to penetrate a decent biofilm.
Graham seems to be closer to the truth these days. Ordinary middle-class people worry about their mortgage, their retirement fund, their children's college tuition. Meanwhile, the managers are obsessed with quarterly returns.
There is an 80% probability that an earthquake of magnitude 8-9 will occur in the Nankai Trough (a massive subduction zone along the Pacific coast of Japan) within the next 30 years. Yes, you read that correctly. Eighty percent. It's almost a certainty.
San Andreas sounds like nothing by comparison, especially since it doesn't pose as much of a tsunami risk.
It's also worth noting that a mag 8 is about the maximum expected from the San Andreas fault, a strike-slip fault, and most quakes come in well under that. The two largest quakes I'm aware of, the 1906 San Francisco and 1857 Fort Tejon quakes, were mag 7.8 and 7.9 respectively.
Significant damage can be experienced starting at about mag 6, though that tends to be pretty specific (individual structures, often pre-dating earthquake codes, and locations on poorly-suited terrain such as riverbottoms, reclaimed wetlands, or sand). Widespread general damage would only be experienced with larger quakes (mag 7--8).
Japan has a significantly higher risk of mag 8--9 quakes. The 2011 Tōhoku quake was a magnitude 9, which means shaking roughly 100 times larger in amplitude than a mag 7 (and roughly 1,000 times the energy released), and over 100,000 times larger than this morning's temblor in Berkeley. Japanese faults include subduction zones and considerable tsunami risk.
Similar risks exist from the California-Oregon border up through British Columbia along the Cascadia Subduction Zone, which could similarly produce a mag 9 event.
The Cascadia earthquake in January 1700 produced a tsunami that traveled all the way across the ocean and hit Japan with 16-foot waves. That's what mag 9 looks like.
For modern Linux servers with large amounts of RAM, my rule of thumb is between 1/8 and 1/32 of RAM, depending on what the machine is for.
For example, one of my database servers has 128GB of RAM and 8GB of swap. It tends to stabilize around 108GB of RAM and 5GB of swap usage under normal load, so I know that a 4GB swap would have been less than optimal. A larger swap would have been a waste as well.
Yeh. I haven't yet figured out how to get zram to apply transparently to containers, though; anything in another memory cgroup will never get compressed unless swap is explicitly exposed to it.
The backing disk or file only gets written to if LRU-based cache eviction comes into play, which is fine because that's probably worth the write hit. The likelihood of thrashing, the biggest complaint about disk-based swap, is far reduced.
zram based swap isn't free. Its efficiency depends on the compression ratio (and cost).
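For anyone who wants to try it, a minimal sketch using the zram-generator package (the size and algorithm below are assumptions; tune to taste):

```conf
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```

After a reboot (or `systemctl start systemd-zram-setup@zram0`), `swapon --show` should list the zram device. The effective gain is roughly zram-size times your compression ratio, which is workload-dependent.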
The proper rule of thumb is to make the swap large enough to hold all inactive anonymous pages after the workload has stabilized, but not so large that it causes swap thrashing and a delayed OOM kill if a fast memory leak happens.
Another rule of thumb: performance degradation from the active working set spilling into swap is steep. 0.1% excess causes 2x degradation, 1% causes 10x, and 10% causes 100x (assuming a 10^3 difference in latency between RAM and SSD).
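A back-of-the-envelope model of that scaling, assuming the same 10^3 latency gap between RAM and SSD as the parent comment:

```python
def slowdown(excess, gap=1000):
    """Average access cost relative to all-RAM, when a fraction `excess`
    of the active working set's accesses spill into swap.
    Assumes RAM access = 1 unit, swap access = `gap` units."""
    return (1 - excess) * 1 + excess * gap

print(round(slowdown(0.001)))  # 2   -> ~2x for 0.1% spill
print(round(slowdown(0.01)))   # 11  -> ~10x for 1% spill
print(round(slowdown(0.10)))   # 101 -> ~100x for 10% spill
```

Note the relationship is linear in the spill fraction; it only feels exponential because the latency gap is three orders of magnitude.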
I would approach the issue from the other direction. Start by buying enough RAM to contain the active working set for the foreseeable future. Afterward, you can start experimenting with different swap sizes (swapfiles are easier to resize, and they perform exactly as well as swap partitions!) to see how many inactive anonymous pages you can safely swap out. If you can swap out several gigabytes, that's a bonus! But don't take that for granted. Always be prepared to move everything back into RAM when needed.
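The swapfile dance is just a handful of commands, all run as root (sizes below are arbitrary examples; on btrfs, swapfiles need extra setup):

```conf
# Create an 8G swapfile
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Resizing later is just as easy: swap off, regrow, swap back on
swapoff /swapfile
fallocate -l 16G /swapfile
mkswap /swapfile
swapon /swapfile
```

Add a `/swapfile none swap defaults 0 0` line to /etc/fstab if you want it to survive reboots.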
> 6. Disabling swap doesn't prevent pathological behaviour at near-OOM, although it's true that having swap may prolong it. Whether the global OOM killer is invoked with or without swap, or was invoked sooner or later, the result is the same: you are left with a system in an unpredictable state. Having no swap doesn't avoid this.
This is the most important reason I try to avoid having a large swap. The duration of pathological behavior at near-OOM is proportional to the amount of swap you have. The sooner your program is killed, the sooner your monitoring system can detect it ("Connection refused" is much more clear-cut than random latency spikes) and reboot/reprovision the faulty server. We no longer live in a world where we need to keep a particular server online at all costs. When you have an army of servers, a dead server is preferable to a misbehaving server.
OP tries to argue that a long period of thrashing will give you an opportunity for more visibility and controlled intervention. This does not match my experience. It takes ages even to log in to a machine that is thrashing hard, let alone run any serious commands on it. The sooner you just let it crash, the sooner you can restore the system to a working state and inspect the logs in a more comfortable environment.
That assumes the OOM killer kills the right thing. It may well choose to kill something ancillary, which causes the program that's actually OOMing to just hang or misbehave wildly.
The real danger in all of this, swap or no, is the shitty OOMKiller in Linux.
The OOM killer will be just as shitty whether you have swap or not. But the more swap you have, the longer your program will be allowed to misbehave. I prefer a quick and painless death.
> OP tries to argue that a long period of thrashing will give you an opportunity for more visibility and controlled intervention.
I didn't get that impression. My read was that OP was arguing for user-space process killers so the system never gets to the point of becoming unresponsive due to thrashing.
> With swap: ... We have more visibility into the instigators of memory pressure and can act on them more reasonably, and can perform a controlled intervention.
But of course if you're doing this kind of monitoring, you can probably just check your processes' memory usage and curb them long before they touch swap.
Maybe I'm just insane, but if I'm on a machine with ample memory, and a process for some reason can't allocate resources, I want that process to fail ASAP. Same thing with high memory pressure situations, just kill greedy/hungry processes, please.
Like something is going very wrong if the system is in that state, so I want everything to die immediately.
sysctl vm.overcommit_memory=2. However, programs for *nix-based systems usually expect overcommit to be on, for example to support fork(). This is a stark contrast with the Windows NT model, where an allocation fails if it doesn't fit in the remaining memory+swap.
People disable memory overcommit expecting to fix OOMs, and then get surprised when their programs start failing mallocs while there's still a ton of discardable page cache in the system.
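For completeness, the strict mode mentioned above as a sysctl fragment (the ratio value is an arbitrary example; the commit limit works out to swap + overcommit_ratio% of RAM):

```conf
# /etc/sysctl.d/99-overcommit.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
```

Check /proc/meminfo's CommitLimit and Committed_AS before flipping this on a running box; note that reclaimable page cache does not count against the limit, which is exactly the surprise described above.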
I don't see what would be so awkward about saying "Go to My Cases" even if it was spoken over the phone. The user is already looking at a screen that contains a menu that says "My Cases". You are reading out the name of that menu. That's enough context for most people IRL.
If you are genuinely worried that the user might try to look up your cases instead of their own, you can just add a few words to clarify: "Click the menu that says My Cases."
And you, my friend, are demonstrating why this keeps being used. It's so common that generations of devs and designers are now so used to it that they don't see anything wrong. And when you're on the phone with grandma, telling her to go to "my files", and she asks where to find *my* files (instead of hers), that gets shrugged off as a stupid user rather than a UX fail.
If you're talking to someone who is mostly computer illiterate, you'd say something like "do you see a folder icon on the screen that says My Cases? Double click on that." and not "go to My Cases"
Yeah, if somebody is really that computer-illiterate, you'll also need to tell them where on the screen to look since they're likely overwhelmed by all the other things. These tend to be the same people who, unfortunately, haven't installed ad blockers, and are constantly tempted to click on an ad, thinking it's the "right" place to click.
We've been having a similar fraud issue here in South Korea as well. These criminals call vulnerable people, impersonate bankers or government officials, and social-engineer them into transferring money. Some even pretend to have kidnapped your child, play an AI-generated voice of a sobbing kid, and demand ransom.
Fraud has always been around, but I think a few recent developments have exacerbated the problem. KYC has been relaxed a lot since Covid, so you can open a whole bunch of accounts with just an image of an ID card. Lots of elderly people now have access to mobile banking, so they don't have to visit a physical branch where a clerk can flag suspicious transfer requests.
Bank accounts in South Korea now start with a daily transfer limit of 1 million won (about $700), even lower than the 50k baht limit that the Thai government has instituted.
KK Park – a vast, heavily guarded complex stretching for 210 hectares (520 acres) along the churning Moei River that forms Myanmar's border with Thailand... with its on-site hospital, restaurants, bank and neat lines of villas with manicured lawns, looks more like the campus of a Silicon Valley tech company than what it really is: the frontline of a multibillion-dollar criminal fraud industry fuelled by human trafficking and brutal violence... Myanmar, Cambodia and Laos have in recent years become havens for transnational crime syndicates running scam centres such as KK Park, which use enslaved workers to run complex online fraud and scamming schemes that generate huge profits.
China's Silk Road ends in Sihanoukville, Cambodia. If you've ever been there, you've seen the eco-devastation and utter disregard for human life along the entire road.