I will say that a lot of that RAM is going to creature comforts that aren't about apps getting worse per se. For example, everything is now rendering into double-buffered windows at HiDPI resolution. In the era you're talking about, applications were in charge of redrawing their window whenever you exposed its contents, tabbed back to it, etc. If they genuinely needed double buffering, they had to do it themselves, so apps rarely did. Plus side: less RAM. Downside: you'd get gray, nondescript windows and redraw errors when moving and resizing windows.

Nowadays, Windows/macOS/Linux instead keep double- (or even triple-) buffered copies of all of that. Throw on all the HiDPI images and whatnot, and you've already used up more RAM on that one thing than the old apps took in total. But you can tab between apps with full previews, and you don't get gray blobs and tearing when an app is overloaded. Other things, like 64-bit pointers, or static linking becoming a common way to deal with DLL hell (sigh), also add RAM, but they're also solving real problems.
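To make the scale concrete, here's a rough back-of-the-envelope sketch. The display size, pixel format, and window count are my own assumptions (not from the comment above), but the point holds: a single full-screen, double-buffered window at 4K already costs tens of megabytes, which is more than many entire applications used to take.

```python
# Back-of-the-envelope arithmetic for compositor window buffers.
# Assumptions (illustrative, not from the comment above): a 3840x2160 HiDPI
# framebuffer, 4 bytes per pixel (32-bit BGRA), and full-screen-ish windows.

WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4  # 32-bit BGRA; HDR/wide-gamut formats can be larger

def window_surface_bytes(w, h, buffers=2, bpp=BYTES_PER_PIXEL):
    """Memory for one window's retained surfaces.

    buffers=2 models double buffering; triple buffering would be 3.
    """
    return w * h * bpp * buffers

one_window = window_surface_bytes(WIDTH, HEIGHT)
print(f"one double-buffered 4K window: {one_window / 2**20:.0f} MiB")  # ~63 MiB

# Ten such windows, before the apps have drawn a single widget of their own:
print(f"ten windows: {10 * one_window / 2**20:.0f} MiB")               # ~633 MiB
```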
I'm not really defending all those decisions or anything, beyond that it's not simply a case of lazy devs or whatnot. We made trade-offs as a community that genuinely improved the user experience. I may not agree with all of them, but I get why they happened, and don't spend a lot of time wondering why we used to need fewer resources.
I read comments like this and I wonder “how do you know all this in enough detail to explain it so simply? What was your path from DSA1 to this? Or did it maybe happen earlier? Was it a thing of the times?” Sometimes I wish I was not a kid in the 90s or before, so I could have absorbed the rawness of computing that must be so useful and unique to have today.
I say this, and at the same time I feel like it must be rare in today’s ultra-competitive, optimized software world to admit this sort of thing. Am I alone? Am I the only one who’s this ignorant?
I was always under the assumption that application windows were all rendered as separate layers on the GPU and the artifacts on older Windowsen were related to pre-GPU UI rendering.
So rather than occupying main RAM, they’re in GPU RAM, and main RAM just orchestrates.
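Wherever the surfaces end up physically, the accounting point is similar: a compositor retains a full-resolution surface per window so it can draw previews and handle move/resize without asking the app to repaint. Here's a toy sketch of that model; the class names and sizes are mine, not any real compositor API, and on integrated-GPU or unified-memory machines those surfaces come out of the same physical RAM anyway.

```python
# Toy model of a compositor: each window gets its own full-size retained
# surface, and the screen is produced by blending those surfaces together.
# Real compositors keep the surfaces as GPU textures, but each window still
# needs a full-resolution copy somewhere. Names/sizes here are illustrative.

class WindowSurface:
    def __init__(self, width, height, bytes_per_pixel=4):
        # One retained pixel buffer per window; this is what lets the
        # compositor show previews and redraw without the app's help.
        self.pixels = bytearray(width * height * bytes_per_pixel)

class Compositor:
    def __init__(self, width, height):
        self.windows = []                        # surfaces kept even when occluded
        self.frame = bytearray(width * height * 4)

    def composite(self):
        # Stand-in for the GPU blend pass: walk the window stack back-to-front.
        for surface in self.windows:
            pass  # real code would blend surface.pixels into self.frame
        return self.frame

comp = Compositor(3840, 2160)
comp.windows = [WindowSurface(3840, 2160) for _ in range(10)]
retained = sum(len(w.pixels) for w in comp.windows)
print(f"retained window surfaces: {retained / 2**20:.0f} MiB")  # ~316 MiB
```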