50-100MB seems like a minuscule amount of space to warrant something like this.
My WinSxS folder alone is almost 10GB. If they wanted to save space, even a modest improvement in managing updates would yield savings orders of magnitude greater than this.
Note that part of the size of WinSxS is an illusion because basically everything there is hard linked from elsewhere on the system (yes, NTFS supports hard links). There are some ways to do limited cleanup on it, but it's not a place to poke around in manually.
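For what it's worth, the supported way to look at it (rather than poking around manually) is DISM. A rough sketch, from an elevated prompt on 8.1/10:

    :: report the actual vs. apparent size of the component store (accounts for the hard links)
    Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore

That same report also tells you whether a cleanup is recommended.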
I have found via experimentation and free-space monitoring that most of the size Explorer shows for the WinSxS folder when you look at its properties is in fact eaten off the drive.
It's a royal pain in the ass for 120GB or smaller SSD installs.
Yes, but isolated registry backups make about as much sense as a full backup of a modern Linux desktop's /etc folder. Probably not much. The config's content depends a lot on the things installed and vice versa, even more so on Windows (unless they changed something fundamental since 7). At least on Linux, and in particular on Ubuntu (Desktop) installations, I aim for as few manual configuration changes as possible anyway, so updates don't break things...
If you back up /etc and /home, you can pretty much restore a Linux machine to working order. Yes, a brand new install might be missing packages you had before, but then the extra config files are harmless.
There are more reasons than backups to use etckeeper; I use it in case a package upgrade mangles my config files (happened once).
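For the curious, the nice part is that etckeeper keeps a plain git repo in /etc, so recovery is just ordinary git. A rough sketch (the apache2 path is only an example):

    # see what the package upgrade actually changed under /etc
    sudo git -C /etc log --stat -3

    # put one mangled file back the way it was before the upgrade
    sudo git -C /etc checkout HEAD~1 -- apache2/apache2.conf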
Imagine the use case in reality. You accidentally rm -rf your /etc folder but you still have the git tree. /etc itself is gone, entirely. Most of your tools will cease working almost immediately; the only chance you have is a terminal already open with a root shell. Your virtual TTYs on other screens won't work (login uses /etc/passwd, which is gone), and neither will any sudo command.
Recovery from the live system is likely impossible unless you have a root shell open and git doesn't refuse to work (it might, since it checks for /etc/gitconfig; if you made any critical settings there, git stops working). Live recovery also has other issues: some apps might still be writing to or reading from /etc. They might have an open handle. Recovery will result in some processes of the same binary running with different configurations. That will be a joy to discover once it inevitably starts to corrupt the system's stability.
So your only way of meaningful recovery is a live system on a USB stick. Do you have one ready to use right now? Like, not prepared on a shelf somewhere, but right now, is there a live Linux system within your reach? If not, it's rather likely it will take a while to find one.
Once you've done that, you can boot up your computer, unlock any encrypted disks you use and install git into the live system. Then you hopefully remember how to recover a git repo using its tree only; if not, hopefully the live disk's network works.
If recovery over git succeeds, you can reboot and pray the last checkpoint in git isn't too old. Have you changed the LUKS keys in the meantime, and did you forget to check in crypttab? Or did you change your Xorg config because X stopped working?
The short answer is: it might be easier, and no more complex, to simply reinstall the system while backing up your home folder.
> If you back up /etc and /home, you can pretty much restore a Linux machine to working order.
Hmm, I've never done this. In terms of config files, are there never any breaking changes between versions? Like, is a v1.0 config file always guaranteed to work with v2.0 and vice versa? I guess a related question is: what is the best way to determine whether an /etc/ config file isn't required?
Usually breaking changes are highlighted in your update cycle and if something is added or slightly modified you might find a *.conf.rpmnew or similar. rpmconf [1] helps with those sorts of updates.
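If I remember the usual invocation correctly, it's something like this on RPM-based systems:

    # walk through all pending .rpmnew/.rpmsave files and merge or discard them interactively
    sudo rpmconf -a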
Stuff can break - there's not especially strong guarantees across versions, but if I, say, took /etc and /home from a Debian stretch system and put them on another Debian system, it would likely mostly Just Work.
It gets more exciting the further you drift - e.g. if you're going to Debian buster, or something more significantly different like Fedora/RH, you'd probably want to cherrypick config changes from /etc, though /home should be fine(tm).
etckeeper records the exact package version changes too. It's extremely handy for small shops (so unless you do 100% immutable infrastructure, it's worth it).
You start culling packages with apt purge (that should remove their config files too) and see what breaks; if nothing, well, try to push your luck further.
Also, with dpkg -S /etc/some/file you can see which package a file belongs to, and target that package.
Usually on a server you know what you don't need. It's a bit more trial and error on a desktop (because those can break in a lot more fun, unexpected ways, and it's harder to know what is needed).
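Roughly something like this, with cups standing in for whatever candidate you suspect you don't need:

    # which package owns this config file?
    dpkg -S /etc/cups/cupsd.conf

    # remove the package together with its config files
    sudo apt purge cups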
Isolated registry backups are handy if you do a lot of poking around in the registry by hand or if you're about to install a .reg file. Regardless, the dialog box actively lying to the user is what's completely unacceptable.
But config not being read by things not installed does not hurt either. And once I install these missing things, the config is read again. And I usually spend a lot more time with configuration than with installing...
There's a key difference though. In Linux and other UNIX-inspired OSes, /etc is only writable by root, but the Windows registry holds config data for users too. Any userspace program in Windows could potentially bork any registry key that the user has access to.
Agree on the 50-100MB being minuscule. I don't even provision a Windows VM without 60G worth of space due to how bad it is on cleaning itself up/bloat.
I wasn't even able to install Windows 7 a few months ago due to the WinSxS folder continuously bloating to fill the disk (stock on a T440p - super weird issues). It's almost as if they broke new-install update cycles on Win7 on purpose =/
My question is why the hell would they turn off a feature like this in the first place?! What is the actual "money making" motivation behind the decision?
That sounds like something was seriously wrong with your install. I am on a desktop that has been running Windows 10 since its release, which I use daily and install a lot of software on - and the Windows folder is sitting at 12 GB.
I don't think the Windows folder is a particularly good indication of Windows space usage.
A while ago Microsoft put a tablet-killer design out there: 11" screen, 4GB RAM, touch screen and 32GB SSD. Man+dog shipped them. Then man+dog found the machines died because Windows ran out of disk space on Patch Tuesday. So regardless of what is in the Windows directory, 32GB is not enough to run Windows.
We "fixed" the problem by replacing Windows with Linux. Total size of Linux install, which included Libre Office and a selection of browsers: less than 4GB.
Microsoft attempting to fix the size issue by removing 100MB of backups, when the scale of the problem they are trying to solve is 4GB vs 32GB, is almost laughable.
As an owner of one of these early tablets (quad-core Atom, 1GB RAM, 32GB SSD) I can confirm that you cannot install updates anymore after a while; even when there are a few GB left, the system just refuses to proceed.
My mother's Intel NUC runs into this problem. Next time I go to visit I think I'm going to have to figure out if we can move the Windows install off the 32GB eMMC onto something else...
I wonder if it's related to the trend in recent years of manufacturers marketing ultra low-budget Windows 10 laptops (and to a lesser extent tablets) with critically small amounts of non-upgradeable storage? An install of Windows 10 typically takes up ~26GB and there's already a problem with consumers discovering that their 32GB soldered eMMC machines are unable to perform any OS updates without jumping through extra hoops with external storage [0].
Those machines are already mostly unusable under Windows because MS only requires 2GiB minimum so it's an instant swapfest to eMMC. Put Linux on them and they're quite nice.
I disagree - I have one with a quad-core Atom and 1GB of RAM (!!!), and the latest Windows 10 1903 edition runs very nicely on it if you stick to the built-in apps. I usually have it next to my main PC as an additional monitor, using the wireless projector functionality.
Windows 10 runs very nicely... If you only want to use the machine as an external monitor? Respectfully, this doesn't seem like a very realistic use case for most people.
That's not what I said. I said it runs really nicely if you stick to the built-in apps, and that I use mine as an external monitor. Those two are not mutually exclusive.
I'll hazard a guess that your Win10 machine on 1GB RAM is swapping plenty. A lightweight Linux distribution can run a reasonable desktop workload (including some web browsing!) on 1GB with zilch swap use, or on as little as 512MB with tolerable use of swap. It's a bit disappointing in a way, but Linux (and not even every Linux distro, for that matter) is the only OS where I can visibly tell that I'm running something reasonably bloat-free, and it makes the UX incredibly snappy and comfortable all-around.
No, Windows did not eat your RAM. Your "guess" would be wrong. I've installed Win 10 on 1GB VMs many times. The OS itself takes a little over 512MB of RAM in this scenario. No swapping at all if you're not using any apps that take more than 512MB.
The reason you probably assume Win 10 is unusable is that on your 8 or 16GB machine Windows uses free RAM to speed things up. It is smart enough not to do this on memory-constrained setups.
Funny thing is Linux users often made this same mistake with "free -m", hence the need for this site: https://www.linuxatemyram.com/
I've had to install and run Windows Server 2016 on an XP-era desktop with 256MB RAM... oh it was painful, agonisingly painful, with constant swapping. But it worked!
Well, I'm here to say that if you use a normal Linux distro with KDE or GNOME, you're going to be into multiple GBs of RAM pretty quickly. Sure, you can have a stripped-down Linux distro with X/LXDE or whatever in under 1GB, but you can also use Windows IoT or Core for much the same effect.
The problem is that MS (and the mainstream Linux DEs) aren't willing to admit that some of the "modernization" they have done over the last decade has done nothing but increase the visual lag while massively increasing the resource consumption, for what is frequently little more than transparent windows (because high DPI is still not quite there in both cases...).
One of the earliest Win10 updates was "bricking" machines if a device was in the SD card slot during the update process. Guess where my update media were?
The machine became a Debian machine. It was "bricked" only to the extent that there was no way forward from the failed update to getting Windows back.
Reminds me of my Nokia 3, which is stuck in a non-upgradable loop. What I think is happening is that the new image is downloaded to the SD card, then when you restart to upgrade, the SD card isn't mounted so the upgrade fails, then you reboot again and you've lost all your shortcuts to apps on the SD card. I'm not sure if it's the image itself on the SD card or something else that's required, but it's obvious Nokia don't spend much on QA.
Pretty much this. I once tried adding a new ~3 MB binary, and it was rejected on effectively these grounds. Now consider that the team making these in/out decisions has a backlog of requests just like it for larger items...
Is there a team that can exterminate, in very medieval ways, other teams that fail to make stuff installable to a non-system drive?
Because there should be.
It's crazy that everything still wants to put stuff on C:\ ... I'm looking at you Visual Studio and Win SDK.
Plus Users\Public shared goodies. And the %userprofile%\AppData. I get that every application thinks it needs to put its cache/files there, but then why can't folks relocate users' homes? :/
That was a design decision made because install to SD card causes apps to load slowly (to my understanding). It used to be possible but they killed it.
Doesn't hurt that it helps speed up the upgrade cycle since users run out of on-board space sooner.
3 in this case ... registry, registry backup and system restore points.
Makes sense to get rid of one of those, especially as restoring a registry backup can lead to unintended issues if you've installed something since, whereas a system restore point takes care of all of that for you.
Between the traditional Disk Cleanup and a quick DISM cleanup (which should already be running as a scheduled task) you can get pretty good results on any Windows 8.1/2012R2 and above:
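If anyone wants the concrete commands, this is roughly what I mean (elevated prompt; the DISM step is the same thing the StartComponentCleanup scheduled task does, and /ResetBase additionally gives up the ability to uninstall already-installed updates):

    :: classic Disk Cleanup: pick the cleanup categories once, then run them
    cleanmgr.exe /sageset:1
    cleanmgr.exe /sagerun:1

    :: trim superseded components out of WinSxS
    Dism.exe /Online /Cleanup-Image /StartComponentCleanup /ResetBase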
I don't really see the need to have the ability to uninstall service packs... Especially not if it's wasting 25% of the space on my tiny SSD!
The original windows installation media could be used to restore the system if something went horribly wrong anyway. Why the need to keep every previously used version of every dll?
Because Microsoft. We will lie to you about making registry backups successfully to save a few megs of space, but god forbid we touch the WinSxS elephant in the room.
Because Windows doesn't always have access to the installation media, even when it is a partition on the same drive?
There was an Insider blog post somewhat recently about Windows 10 trying to repack things into installation media over time, when it's a recovery image on the local machine. It sounds like they are working to better the situation.
These are some pretty privileged operations you're trusting the tool to perform. I'm not sure a couple of commands warrant using a GUI tool, especially one that doesn't seem to inspire too much confidence upon cursory examination.
Only the parts that they took from other reverse-engineered header projects are open source (because they have to be). They answered on one of their GitHub issues that the actual core code is not open source. I wouldn't touch this.
Looking into it further, apparently windows keeps a secret cache even after you uninstall those apps, so powershell is necessary not just to remove the install trigger but to remove that extra copy. Thanks, windows.
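For reference, this is roughly the PowerShell involved (candycrush is just an example pattern; package names vary by build). The first command removes the installed copy, the second removes the provisioned copy that would otherwise come back for new user accounts:

    # remove the installed app for the current user
    Get-AppxPackage *candycrush* | Remove-AppxPackage

    # remove the provisioned package so new user profiles don't get it again
    Get-AppxProvisionedPackage -Online |
        Where-Object DisplayName -like '*candycrush*' |
        Remove-AppxProvisionedPackage -Online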
It's not related to the actual article, but the footnote about advertising is what struck me the most:
> Advertising revenue is falling fast across the Internet, and independently-run sites like Ghacks are hit hardest by it. The advertising model in its current form is coming to an end, and we have to find other ways to continue operating this site.
It's no wonder Google is pushing to remove (partially) the ability to block ads in Chrome. If the business model based on ads is seriously at stake, a big chunk of the revenues of Alphabet is compromised.
To me the absurd thing is that this site could probably just embed ads as if they were normal website elements and not hosted by some ad CDN with lots of Javascript. As long as the ads weren't terrible, I doubt visitors would go to the effort of manually blocking the elements.
But that fundamentally breaks the absolute fraud and house of cards of the entire web ad industry and makes layers of ad people obsolete, so it won't happen.
It feels a lot like cable companies: "Our model is absolute shit but we don't need to change, you do." It's why I'm completely unsympathetic to the pending death of the cable and ad industries. They both were in a Sears position to dominate whatever paradigms come next.
Not to defend the current model, but it's a lot more complicated than "just embed ads". Where do they get the ads from? How do they handle charging for them? How does the advertiser know they're not being ripped off, reaching their target demo, etc.
The whole model with ads on the internet and the related tracking and profiling of everyone's activity is just terrible. But it's also going to take a lot to replace it with something less bad.
I do kind of agree. But I think that can be a process and tooling problem that can be solved in numerous ways that don't involve handing over the keys to those blocks to the ad industry.
I do kind of disagree. I don't think it needs to be a lot more complicated. I just think the industry kept growing to a point that it needed to justify complexity. There's entire categories of jobs that don't need to exist.
Consider how advertising works for YouTube sponsorship. It's literally just some person emailing another person saying, "hey I've seen your site and based on a bit of research (the thing that justifies my jobs existence) I think your personality and demographic and other factors are all a match for advertising this product. Want to talk about it more?" And then the advertising agency hands over a package for the YouTuber to fit into their channel in an organic, appropriate way. Why oh why is that so hard for the web ad industry to do? Let's hash out the details, and then our guys will send your guys some html and other assets and you can plug it in to run for a month, using whatever technical architecture you use to run ads.
300 hours of YouTube video are uploaded every minute. Assuming the average video is 2x the minimum required to maximize ad revenue (10 min, so 20 min per video), that is about 15 videos per second. Assuming one person can close a deal every 10 minutes, you would need roughly 90 people working around the clock just to get ads on 1% of the videos uploaded.
The entire point of that complicated web is to eliminate the cost of all those people and move that 0.1% that you could actually do to closer to 80%.
While it is true that we have given too much power to advertisers, solving the privacy problems in a way that still meets the advertisers' requirements is hard (remember, they are the only ones paying into the system, so dropping their requirements won't happen). There have been some technical papers put forward, but they are as complex as Bitcoin in order to actually allow for a small subset of the features that the terribly leaky privacy of the modern web gives them for no effort.
Print ads are not targeted--they can only be targeted to the text they accompany. This means that the vast majority of their viewers are completely uninterested in the ad, while a viewer who might be interested in an ad that runs in another print item does not see that ad, because they do not subscribe to that print item. Digitization allows ads to be switched on the fly, so no matter what media you consume, you see ads relevant to you. This vastly increases the success rate of the ads. It doesn't seem like much, because even targeted ads have an incredibly low success rate, but there's a huge difference between 0.0001% and 0.01%.
> Print ads are not targeted--they can only be targeted to the text they accompany. This means that the vast majority of their viewers are completely uninterested in the ad
So if I'm reading a review of a new car for example, and there's a full page ad for the new Ford Focus alongside it, I'm not going to be interested? Seems more relevant than constantly being bombarded with ads for mattresses after buying one 6 months ago.
No, not really. All they need to know is the source of their traffic and how well it's converting. That source isn't paying for itself? Cut it and divert the budget to one that is.
Does advertising need to exist at all? Do businesses need to exist at all? Does the economy need to exist?
There are issues with the current state of online advertising, but when was the last time humanity said "improving this would be a great business, but you know what - we don't want it to be better"?
> There are issues with the current state of online advertising, but when was the last time humanity said "improving this would be a great business, but you know what - we don't want it to be better"?
When you externalize costs like this, anything is easy to justify. I will ask a different way: should we be reconsidering forcing the cost of adtech's growth into an engaged panopticon onto society? Do ads need to be 1000 times more efficient (and profitable, if Alphabet's and Facebook's quarterly reports are to be believed) if they have so many external costs which the ecosystem is simply unfit to solve itself?
Personally, I only ever go to Harbor Freight when they send out mailers- not because I can't spend money other times, but because their mailers are such a constant thing that it doesn't make sense to buy it not on sale.
As others said the whole functionality can be implemented server side, and you can serve it as part of the HTML.
Or simply reverse proxy it and put that in an IFRAME (or use JS and lazy load it into a DIV).
The current problem is that every ad engine tracks everything they touch. Instead of tracking how well a particular ad performs on a given site. (Of course ad engines try to show ads that are relevant to the profile they built up, but since the hit rate is extremely low, most of it is just noise. Which is very annoying.)
To solve the self-reporting trust issues make the sites report each impression separately with some proxied meta data. (Which is equivalent to always reverse proxy-ing.)
Which would be a bit slower than today (because most sites are not served from CDNs around the world, but most ad engines are), but who cares - the megabytes of shit downloaded and running on each site slow down the UX more.
I just booted up a windows 10 machine earlier today for the first time in many months.
It absolutely bewilders me how terribad the UI is. You have this fancy UI but if you ever click one or two options deep you end up getting these alternately styled legacy interfaces. It's so incoherent.
It really makes windows 10 feel like a skin on top of windows xp.
So when the article says Microsoft wants to push users into windows 10 all I can think of is that it's for the good of Microsoft. But they had to add a little window dressing to make the users happy.
It’s a skin on a skin on a skin on a skin. Its always been this way. By clicking on “advanced” buttons you can easily end up with four different eras of UI on the screen, including different system fonts and different font rendering algorithms!
I can tell you first-hand from trying to get some of that fixed years ago, there’s just so much STUFF in Windows that is hardly ever seen, or should only be seen by “IT professionals” who supposedly don’t care, that there will never be enough resources or time to fix it all.
Yes, it's monumental. Third parties could add their own pages and hook into existing pages. Read the OldNewThing blog for the insanity that is the Windows ISV ecosystem.
Windows is space age tech compared to the garbage these companies pull off.
Oh, I can understand leaving a Control Panel framework in place for opening legacy menus made by third parties. What I find problematic is the Microsoft-supplied functionality. Duplicate menus for the same thing adds an extra mental burden.
> Duplicate menus for the same thing adds an extra mental burden.
I think you'd see that Microsoft gets this. Win8 was all dupes. Initial Win10 release removed most of the dupes and they've removed more with each Win10 release. The strategy is pretty clear - incremental rewrite based on what people use.
It's slow work. Microsoft has gotten more aggressive about change compared to ten years ago, but they are still very conservative. Users don't like it when the UX changes.
This means the old one is deprecated and will be eventually removed.
The problem is, for some reason the presentation is tightly coupled/tied to the data model of the settings. Otherwise there would be no need to have 2 versions of it.
I'm a weirdo but I kinda like the archaeological deep dive-type feelings I get, seeing two decades of OS history rewind as I'm reaching further into the guts of the beast.
My only gripe with the old school is accessibility (scaling in particular, ugh), but even that's not so bad.
A lot of the windows UI code is ancient by software standards, some of it written before desktop GUI frameworks even existed. And much of that code is still being used across versions.
Rewriting in a way that is 100% backwards compatible (any less compatible is unacceptable) would be a massive endeavor and very hard to justify given the cost.
No they're not. They're looking at control panels that were mostly written between 1992 and 2000, and asking why the decade since Win7's release hasn't been enough time to get them all on the same UI framework.
"that deep" should be read in terms of the resources of a large company. It's a small fraction of the OS.
I know relatively very little about win32 but I know enough to say that "getting them on the same UI framework" sounds incredibly complicated to me, and I think would entail basically a complete redoing of the control panel, which is similar to what Microsoft actually did.
1. You already know exactly what functions it needs up front, unlike the first time you coded it.
2. You have much more advanced libraries and development environments available.
3. You have more years for the redo than the original took.
I would not call that "incredibly complicated". I would call it "doing a significantly easier version of what you already did".
And "The Control Panel isn't that deep. It's not as though Microsoft needs to create a UI for every advanced Group Policy setting." is talking about scope, not complexity. So I'll say again that it's a small fraction of the OS.
(It doesn't matter how complicated it is in a vacuum, anyway. As long as the complication is on par with the rest of the OS, which it probably is, then we already know that Microsoft can handle it.)
> similar to what Microsoft actually did
What they actually did was redo parts of it in a giant mishmash of overlapping and conflicting UIs.
Even if they need to keep around parts of the old UI for third parties to hook into, there's no good reason that the new UI is so far from complete. (Except for the obvious reason of "it wasn't a priority, they don't care as an institution that it's a horrible mess")
Win32 is a disgusting nightmare, and I pity the devs who have to re-engineer its features. For backwards compatibility reasons, I am unsure whether they would be able to use new frameworks or tools. Apple still uses ANSI C (as do so many people) because anything newer is a backwards compatibility nightmare.
Dial up isn’t dead yet. I’m okay with old functionality, but it should be more consistently designed so that everything looks at least somewhat cohesive.
> it should be more consistently designed so that everything looks at least somewhat cohesive.
Honestly, I think that having Device Manager and the like function the same way [it does have new icons, but that's about it] is good enough for me- I know how to use the tool, and it would be rather frustrating to techs if they replaced it and dropped even one little thing.
Personally, I'm okay with old functionalities- but replacing the stock Windows 98 icons with the new design scheme and making the background window color white [personally, I prefer black, but I'll take what I can get] is enough fresh paint for me.
EDIT: I also want to say that for "old people" that specifically use "old functionalities" and replace their machines with new ones but want to use those same tools, it would be somewhat frustrating to them to have to re-learn a new UI, and we really don't want them running Windows XP anymore.
That's okay. But why isn't there a big, strongly typed settings data model with automatic presentation? Or is it just reg settings all the way down, with triggers on them?
That sounds like it was the intent. But I got into old menus simply by trying to change the clock time. Definitely not an "IT professionals who don't care" kind of action.
Sure it is. What home user even realizes they can change the clock time any more?
Consider: can you change the clock time on your phone? I've never even considered doing so, personally. Seems like it'd screw up the GPS, if it was possible.
Anyone whose first computer was a smartphone isn't going to think of changing their computer's clock time. To them, the time is a network feature, not a local feature.
> What home user even realizes they can change the clock time any more?
Indeed. Even if you have the temerity to go and change the clock, the iPhone is secretly maintaining the true time internally and can use it or revert to it at will. I'm making a bold claim, so here are the steps to reproduce:
- Go to Settings -> General -> Date & Time -> Set Automatically -> disable
- set an incorrect time
- Go to Settings -> {Wi-Fi, Bluetooth, Cellular} -> Off
- Go to Settings -> Airplane Mode -> enable
- go someplace without any wifi, or change the wifi password on your access point, or disconnect/power-off your access point
- remove the SIM card from the iPhone
- Now go back to Settings -> General -> Date & Time -> Set Automatically -> enable
- Surprise surprise! The date and time revert back to the correct value even though you have no possible network connection. How did it know the correct time?
Implementation thoughts: If the true time is 6:00pm and you change it to 9:00pm, the iPhone could calculate the offset as 3 hours and 0 minutes, and show you the time as 6:00pm + 3h = 9:00pm, but knows that the true time is really 6:00pm. The same effect could also be achieved by having two internal real time clocks (aka time of day clocks), one for the true time and one for the time you set.
My theory is that they'd prefer users not messing with the clock, but they provide a UI for setting your own time (such as for people who like setting their clocks 5 minutes ahead to never be late for meetings). However, they keep the true time for other purposes (perhaps for logging, syncing, or something else?).
I had it in airplane mode, but I could see an argument that the GPS is receive-only (it is not transmitting, so not affecting the airplane), so it is not disabled in airplane mode. (I saw comments on the Apple discussion site that they used to turn off GPS in airplane mode but no longer do so on recent iPhone models or versions of iOS.)
Though I didn't mention it earlier, I also had location services disabled:
- Go to Settings -> Privacy -> Location Services -> Off
The above supposedly disables the GPS or use of the GPS data.
I actually like the newer interface... when it isn't trying to take over my entire 32" screen to show me information that would have fit in a 2" square window, or doesn't bother exposing more than a quarter of the functionality you get with the older UIs.
I just booted up my Surface Book for the first time in months. It attempted to download and install Windows 10 update 1903. The update failed with a large dialog telling me my system isn't ready to install update 1903.
I'm going to sound like the naive end user now, but isn't it the job of the update manager to only download and install updates that are ready to be installed?
It did eventually install 1903 after about 2 restarts. But failed updates are always scary to an end user.
In my case there is a glitch in the 1903 install where having external drives attached forces a fail as the installer can't determine which is the Windows drive or something (per an error/forum I read at the time). I had copied the ISO from a USB so removing the USB drive resolved the error.
I think it's picky to prevent issues similar to the 1809 install. That was a mess and I'm sure MSFT would rather have an error before upgrading than a user missing all their data or something worse.
The data loss issue in 1809 was the result of Microsoft intentionally writing and deploying code with the specific purpose of deleting people's documents directories and everything contained within them. The way to avoid this was not to program Windows updates to delete people's documents.
(For anyone who doesn't know the context: Windows has the ability to redirect the various documents, music, etc directories away from their default locations. Some genius at Microsoft decided that any data left in the original location after such a move was junk and got a Windows 10 update to delete it. Unfortunately, Microsoft's own OneDrive software, which was bundled with and heavily promoted by Windows 10, intentionally redirected some of these folders without moving the contents in older versions and also often failed part-way through leaving some files behind in newer versions which promised to move the existing documents too. So a whole bunch of people who'd clicked Yes when Windows 10 pestered them to put their documents in OneDrive found that not only did it fail to do that, they got all their non-cloud-backed-up documents deleted a few Windows updates down the line.
Making matters worse, some of the members of their crowdsourced beta-testing program had apparently hit this problem and warned them that the update was deleting people's documents, and Microsoft just ignored it and went ahead with the update anyway.)
> In my case there is a glitch in the 1903 install where having external drives attached forces a fail as the installer can't determine which is the Windows drive or something (per an error/forum I read at the time).
All things old become new again. In the Vista and 7 era there were a bunch of updates that you just could not install if you booted Windows by chainloading its bootloader. You had to boot Windows straight from BIOS, otherwise they'd fail to install, because some obscure component in Windows couldn't figure out what C:\ was otherwise.
You're right. I think the two completely separate "settings" programs are the most user-facing of the problems with these interfaces. It probably doesn't help incentivize updating that the new one is so terrible. [1] It looks like a really bad version of KDE's Plasma theme (Breeze).
The settings screens dating from the iconic control panel era can't easily be dumped because third party drivers add tabs and extend the dialogs. That's a big part of the problem.
How would you automatically translate hand-laid-out dialogs (originally even manually adjusted for localizations where required) into a completely different layout and design language? Never mind the few dialogs that do custom controls and custom rendering (joystick test and joystick calibration, for example). In plain Win32, that stuff is often part of the window repaint event handler and not encapsulated in a way that allows it to be repositioned reliably. It is a huge mess.
A translator would search for all non-draw calls (like a JVM coverage tester) and place them accordingly in the translated control flow. Layout would be translated by capturing draw calls using off-screen rendering.
John Siracusa used to write about this in his old OS X reviews… Apple would do weird things like make a dark title bar for Garage Band, then a title bar with vertical "traffic" lights in iTunes that wasn't used anywhere else, then a normal-looking title bar that was just a few pixels off in margin from the stock title bar…
It's all kind of weird and silly, but nothing at all like Windows where you can suddenly jump into the Windows 2000 UI at any turn.
Unlike Windows, though, these were conscious decisions where applications decided to ditch platform controls and roll their own. Windows just has a dozen different UI frameworks that each do their own thing.
One super useful hidden feature with terrible UI is that you can start typing as soon as the Win 10 Start menu is open... for example 'notepad'... I used to install Classic Shell just to get the [Run] input box.
Except if you start typing just after booting, the search service (or something) isn't running yet and it doesn't give you any results even if you wait...
I hate using Windows 10 because I am constantly hitting new unique bugs in the UI, and regularly enough I run into complete show-stoppers. Windows UI is just so broken.
I do run into bugs on Ubuntu, but usually there is a workaround. And I have lower expectations when it costs $0 (and without the trillion dollar market cap of Microsoft).
I also regularly find bugs in macOS, which annoys me since we pay such a premium for Apple software.
I would prefer if it would just give me no results. Half the time when I hit the windows key, type what I want and hit enter it will open an Edge window with Bing search results for what I typed.
You can turn that off. It's not easy and they keep changing how you do it, but with enough registry keys you can kill Cortana and all the associated bollocks and return to an actually useful system search.
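For what it's worth, around the 1809/1903 era the per-user values looked roughly like this (they keep moving between builds, so treat it as a starting point rather than gospel):

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Search" /v BingSearchEnabled /t REG_DWORD /d 0 /f
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Search" /v CortanaConsent /t REG_DWORD /d 0 /f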
Windows 10 is back to system search focused by default in 1903. Cortana was refocused to voice-centric and the interfaces (and buttons) are now distinct between system search and Cortana. Rumors are Cortana may even be an optional install in 19H2 or 20H1.
There is a bug where, if you disable blinking text cursors, notifications in the taskbar never get cleared. It's been that way since Windows 7. How are those two things related?
This search feature was actually good in Win7. On its way into Windows 10 it has regressed into something awful. The way it inconsistently comes up with different results in different orders when you repeat the same search multiple times in quick succession is a good example of how maddening it can be to use this thing. I want to start a program from the start menu, type its name and I often get a Bing link as first result or a document which is replaced by a Bing link in first place just in time for everything to shift under my cursor so that I pick the wrong thing. The actual menu entry that I'm looking for can be 20 seconds late at times (when it shows up in the program list without delay and probably popped up in an earlier search in first place after only 2 seconds or so).
I don't get why this thing is messed up the way it is.
How is that terrible UI? It's basically what Spotlight does for Macs. Cmd+space and it overlays a text box from which you can find and open any file, application, event etc. or even do stuff like calculations and unit conversions.
Because there is no indication to the user that they can or should enter text. One of my favorite things to type in the old "run" box was "." and that doesn't work anymore in 10. I can still open a command prompt and type "start ." at least.
> One of my favorite things to type in the old "run" box was "." and that doesn't work anymore in 10
It absolutely does still work in the Run app.
The Run app is no longer conveniently docked at the bottom of the old Start menu near where the search box now opens when you start typing in the new Windows menu, though.
Oh, wow, thanks for the tip. I always wondered why they don't include %USERPROFILE% in the quick links :/
Btw, as others mentioned - Win+R gives you the Run dialog. Or Win+X gives you admin tools and a Run button approximately where it used to be. Or just right-click on the Start button...
A nice thing about Win+X (but not Win+R) is you get the pop-up on your current monitor on multimonitor systems. (Unfortunately, if you do “Run” from the Win+X menu, the Run app does not show up on your current monitor.)
Unfortunately, apart from the search being awful, it also does some internet searching, and if your network drops out while it's searching for something you'll be left with the Start menu frozen until it hits its 1-2 minute timeout.
> It absolutely bewilders me how terribad the UI is. You have this fancy UI but if you ever click one or two options deep you end up getting these alternately styled legacy interfaces. It's so incoherent.
It's definitely inconsistent, but I'd disagree with it being incoherent. IMO, Windows has done a fantastic job at making complex functionality accessible through reasonable GUIs.
My biggest issue is not that the interface is incoherent (and the consistency is a minor gripe to me). I struggle much more with _navigating_ the interface than interacting with any given dialog.
That said, I've never used a Linux distro or macOS iteration that came anywhere close to making features accessible to users via graphical interfaces. GUIs may not always be the _best_ interface (that's another discussion), but you gotta hand it to Microsoft for how much work they've put in to get where they are when it comes to GUIs. I do not envy anyone tasked with "reskinning" it all.
Can someone explain why a monolithic Registry is a good idea, compared to something like macOS's .plists?
I can usually intuitively infer where the plist for a particular app or subsystem may be stored, or just search for it using regular file system tools, and I can use many tools to read, edit and selectively backup/restore plists as well as compress them (something I make use of often, to restore settings only for specific apps on a fresh machine/installation, something that was very cumbersome to do with the Registry.)
I also have yet to experience plist corruptions even once for even one file on macOS, but several times with the Registry during my time on Windows, with multiple unrelated parts of the monolith crumbling at the same time.
The Registry has been around since Windows 3.1. While it's not required to use the Registry, I can imagine there's a large installed base of applications that do use it, and migrating off of something like this can take concerted effort if they decide it's worth it.
When there's something I initially find hard to believe, I try to take a step back and recognize that very likely the people involved in making those decisions have a better perspective on the situation than I do. I've found Chesterton's Fence to be a useful metaphor.
Taking that into account, even my explanation is likely wrong. And even if everything you've said is true, there are any number of additional issues known to those involved beyond the ones you've put forth.
Windows used to have lots of .INI files, but almost all apps kept them all in C:\Windows or System or some such place.
If only they had discovered the concept of subfolders, instead of shuttling them off into another monolithic container. I suspect having many subdirectories degraded filesystem performance at the time.
The Windows 3.1 version really was just a registry though - it recorded COM object registrations and nothing else. The idea of dumping a whole bunch of unrelated system and app configuration in there was a Windows 95 thing. As for why they did it, that probably seemed like a good idea at the time and there's no way to get rid of it now even if we wanted to.
> Can someone explain why a monolithic Registry is a good idea, compared to something like macOS's .plists?
Why monolithic may be good: because it's global and baked into the system. An application doesn't have to search for its config files. Registry is always there. It's a database.
And it's better than INI or XML files because updates are transactional, and you can even have transactional updates over multiple keys. It also supports fine-grained access control. Manually implementing transactional updates in text files is a hassle and you can forget per-value ACLs... unless all of this is hidden behind a registry-like API.
What does being "global" mean here, and why is it good?
macOS apps don't have to search for their config files either; support for them is baked into their system APIs as well.
The macOS/iOS UserDefaults system is actually very nice, allowing multiple tiers/domains for precedence of settings. For example a specific setting may be optionally overridden for a single session via any app's launch arguments [0]. How do you achieve something like that with the Registry?
The Registry architecture may make things easier for Microsoft, but the plist architecture seems to make things easier for users and developers.
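A rough illustration of that tiering (MyApp and ShowDebugPanel are made-up names; the argument domain passed at launch takes precedence over the persisted per-user value):

    # persisted, per-user value in the app's defaults domain
    defaults write com.example.MyApp ShowDebugPanel -bool YES

    # one-off override for a single launch via the argument domain
    /Applications/MyApp.app/Contents/MacOS/MyApp -ShowDebugPanel NO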
Storing configuration as plaintext files is definitely a big positive, as it allows you to use standard and battle-tested file apis to manipulate them instead of an opaque blob that then requires an additional platform specific API.
The only benefit I can see of a monolithic registry is that it's technically all organized in one central place, compared to say the Linux practice of scattering app settings in home directories (although XDG does help standardize this). But in practice the Windows approach seems to provide an unnecessarily tight coupling.
Also on Linux you have dconf, which is literally a centralized, binary configuration storage backend. I believe it is being replaced with a more decentralized mechanism to ease container use cases.
That link discusses the archaic .INI format versus Registry, not macOS .plist versus Registry, and already the very first point he states is outdated and irrelevant:
> INI files don't support Unicode.
His complaints about XML as an improvement over .INI don't make much sense either.
The security of individual files and their folders can be more easily reasoned about using existing filesystem paradigms, than the nodes of some abstract tree inside the Registry. And you can use existing filesystem tools to monitor which apps are accessing which plists, meanwhile you have to rely on an entirely different set of tools custom-built just for interacting with the Registry.
In practice, .plists feel much better as a user than the Registry does.
Meanwhile, after the latest 1809 update, there were over 2GiB of update clutter not cleaned up automatically. It took two rounds of Disk Cleanup run manually to get rid of it, which took over an hour with the laptop fans running on high the whole time. This was on a non-OEM, clean installation of Windows 10 Pro 1709 downloaded directly from microsoft.com. Windows updates are an abomination.
Update clutter is cleaned up a few weeks later, they keep it so you can 'roll back' the update. I've had to use the roll back feature before, it's useful. I guess it's an abomination, though?
> The scheduled task to create the backups was still running and the run result indicated that the operation completed successfully, but Registry backups were not created anymore.
> This change is by design, and is intended to help reduce the overall disk footprint size of Windows. To recover a system with a corrupt registry hive, Microsoft recommends that you use a system restore point.
I suppose I can see removing the automatic backup feature to save disk space, but what is the argument for silently pretending that you're backing up?
Presumably because they needed to maintain compatibility with other tools which require those backups to exist or interact with that scheduled task or the logs it creates.
Windows really is a recursive backwards-compatibility hack...
I'm still on Windows 7 on a 2012 desktop. There's no reason for me to change. I've yet to run into any situation that has required me to upgrade. Since the world is generally web-first, the only thing that matters is Chrome and Firefox. Even running 4K video just required a codec upgrade as opposed to CPU upgrade.
Yes there are security concerns but I'm pretty vigilant. I haven't run a virus scanner in 15 years, I'm just extremely paranoid over what I click on and which sites I visit. I run an ad-blocker and pi-hole to decrease any attack vectors. If attacks like spear-phishing become a lot more prevalent and much harder to detect then maybe I'll have to upgrade to keep up with security patches but until then I'm keeping status quo with a lot of backups.
> I'm still on Windows 7 on a 2012 desktop... I haven't run a virus scanner in 15 years, I'm just extremely paranoid over what I click on and which sites I visit. I run an ad-blocker and pi-hole to decrease any attack vectors.
To be blunt, this is a bad idea.
I'm sympathetic to the general motivation. I think the value and effectiveness of antivirus is generally overstated; it's often a replacement for vigilant browsing, and usually not an adequate replacement. If you keep to trusted sites, don't click email links, and don't ever unblock ads, you can do remarkably well for yourself. (Of course, if you've ever clicked some link and regretted it, it's time to start scanning or wiping.)
But you don't have to fall for spear-phishing or click shady ads to get into trouble. Since 2012, much less 2004, we've seen zero-days compromising servers and numerous other threats that could affect you indirectly. If your machine was Lenovo, it quite possibly came with malware. If you've used Sennheiser's setup app, your root certificate store may be poisoned. Further back, if you put Sony BMG music CDs into your machine, they may have given you a rootkit.
I don't worry about antivirus much for on-the-web protection; the threats most likely to reach me are the ones it doesn't protect against. But I think it's valuable to have something that keeps up with the history of other known screwups to help protect me against other, stranger threat vectors.
>we've seen zero-days compromising servers and numerous other threats that could affect you indirectly. If your machine was Lenovo, it quite possibly came with malware. If you've used Sennheiser's setup app, your root certificate store may be poisoned. Further back, if you put Sony BMG music CDs into your machine, they may have given you a rootkit.
>But I think it's valuable to have something that keeps up with the history of other known screwups to help protect me against other, stranger threat vectors.
I don't think antiviruses protect you against 0-days. If the attacker is going through the trouble of using a 0-day, they're probably going to make their payload FUD as well. As for the rest, I don't think they'll get picked up either, or at least they'll only get picked up after a massive media shitstorm.
The MO a long time ago was to test it on multiple AV's until it passed on enough that were widely-deployed or used by the target. Those AV's were disconnected from the network where they couldn't send the samples back. There was a risk they'd keep something to send when networking came back on.
Although not a malware developer, my recommendation was installing the software in a VM or something, updating to current sigs, cutting off networking, testing sample till it passes, and delete the VM. Then, restart the process with clean VM or (more likely) from snapshot. I'm not up-to-date on what they're doing these days, though.
Yes, I forgot to mention I don't visit any questionable sites whatsoever. I'm extremely vigilant, more vigilant than 99% of the population I would think, and I take it very seriously. If I need to test a link, I'll use my phone. I built my own desktop so no worries there, I don't have a CD drive, and I only install trusted apps these days; the only ones in the last few years have been Chrome and Firefox.
It's not the most secure method, obviously, but it works for me. And if the environment gets more dangerous than the status quo, I'm flexible enough to know I'll probably be forced to upgrade.
I'm not security-ignorant btw. I used to subscribe to both BugTraq and NTBugtraq back in the day and kept up over the decades. I just haven't seen anything in the last several years that can't be solved via not clicking on random links or installing new software. As said, spear-phishing/breaking out of the sandbox or serving malware via ads is my biggest concern these days and if it gets more prevalent then it will warrant a change in my views.
> I forgot to mention I don't visit any questionable sites whatsoever.
The problem isn't questionable sites (including ad networks), it's 0 days and hacked sites. uMatrix won't protect you from malicious js fed from the same site, or perhaps a malicious malformed font download[1].
The bad news is that most antivirus software also won't protect you from a hacked NYT server delivering 0-days, because it's generally reactive. For this reason, I mostly don't worry too much about while-browsing protection - if you're cautious, the remaining threats will also evade antivirus. In that sense, uMatrix and NoScript are more proactive. (uMatrix, for example, saved people from Magecart before it was ever discovered.)
But if you do pick something up, there's no guarantee it'll be nice obvious ransomware, rather than something quietly sniffing your keystrokes. That's where I advocate antivirus even for careful users - it's a repository of known threats to tip you off that something is wrong. DoublePulsar was first discovered infecting Windows machines in the wild, so cautious browsing didn't save the people it hit.
Apologies if I implied you were ignorant, or at more risk than most people. Hearing that description, I'm willing to bet you're safer than most people who do run antivirus and the newest OS versions. (And safer than me, for that matter.) As far as principle, though, I think there have been things which can't be solved by avoiding sketchy links and emails?
Bemstour was a mix of two NSA-discovered 0-days, which spent a full year in the wild before it was patched or disclosed. (Admittedly, the only known deployments were highly targeted, so it's not exactly grounds for personal paranoia.) EternalBlue was spun into the widespread WannaCry after the Windows 7 patch was released, but before Windows 8. (Having spent a while on 8 instead of 8.1 while 8.1 was still broken, that hits close to home.) I don't follow Windows that closely anymore, but haven't there been a few other botnets initiated with unpatched vulnerabilities that didn't require browsing somewhere ugly?
Patches might happen, but don't hold your breath. It's at Microsoft's discretion. If they are asked why don't they do something about a new vulnerability on an old OS, they'll say they don't support it anymore, users should upgrade, and that will be that. (Did XP ever get Spectre or Meltdown patches?)
Maybe more importantly, third parties will notice that MS isn't supporting it and is telling people to upgrade, so third parties won't support old OSes anymore.
Companies will continue to pay for support for Windows 7 for at least another 5 years, so I don't see why Microsoft wouldn't push things like 0-day fixes to all existing computers when they fix exploits for paying businesses.
Metrics are that enough companies have moved to Windows 10 that it is increasingly unlikely that Microsoft will over-extend security support for 7 like they did with XP, and the market costs are likely going to be a lot higher for support contracts.
> Yes there are security concerns but I'm pretty vigilant. I haven't run a virus scanner in 15 years, I'm just extremely paranoid over what I click on and which sites I visit.
Why not simply switch to Linux? That's what I did, that way you don't have to install Windows 10 and don't have to worry about visiting web pages.
I do run them occasionally, but haven't found anything on any Linux install I've ever had (except files I've actively quarantined from email to a) test virus checking and b) analyse for activity [deobfuscating js from Wordpress hacks is fun!]).
This sort of thinking might have been understandable back in the '90s. However, today people have plenty of free space on their hard disks. The track record of Windows 10 has been so poor lately that it's surprising MS got so overconfident that they decided they didn't need safeguards like this any longer.
Meanwhile Microsoft steals 7GB for "reserved storage" just in case they happen to have a giant update to install without your consent. "we disabled registry backups and made the dialog box lie to the user to save storage space" is a nauseatingly transparent lie.
Actually no, why is disabling backups in the name of saving 100MB of disk space ever OK?
Did we go back to 1995 when disk space was expensive?
I'd like to look the manager/developer who approved that change in the eye and ask them what the hell were they thinking.
As far as lying to the user goes... hasn't that been MS's motto at least since W10 came out? That's the least surprising part. A poor technical decision is much more worrying.
The registry is still backed up via system restore points and versioned file history. It is still possible (albeit hard) to restore the registry to any point in time via those.
These backups were really a bad plan - they were making a copy of the registry, and then a system restore point or 3rd party full system backup would backup all the backups.
>The Registry backup option has been disabled but not removed according to Microsoft. Administrators who would like to restore the functionality may do so by changing the value of a Registry key: [...Steps 1-6...]
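If I remember the support article correctly (going from memory, not from the quoted steps), the value is EnablePeriodicBackup:

    reg add "HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager" /v EnablePeriodicBackup /t REG_DWORD /d 1 /f

After a reboot, the RegIdleBackup scheduled task should start writing the hives to %windir%\System32\config\RegBack again.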
The registry is a fantastic idea - it's a great pity software developers misused and screwed it up so badly, and a shame other OSs haven't copied it and built on it.
Instead of every tool making its own incompatible, undocumented, non-standard config file/database/format/tools.
>> it's a great pity software developers misused and screwed it up so badly
So the Registry is a great idea and great software design, but developers are "holding the phone wrong"?
I think applications should be standalone and not tied into some giant central spaghetti shared knot of chewing gum.
The beauty of Linux/Unix is that stuff is configured by text files. Windows was on that path in the early days, when everything was configured with ini files, before someone had the idea of introducing the registry.
> The beauty of Linux/Unix is that stuff is configured by text files. Windows was on that path in the early days, when everything was configured with ini files, before someone had the idea of introducing the registry.
"Holding the phone wrong" referred to people using something as they'd expect, but it not working at all. The registry turned to trash by people using it wrongly, because people don't write software very well. When ordinary users needed admin rights because developers stored per-user settings in the system-wide config, that's developers misusing the registry. When tools register themselves as file-format-handlers in the registry directly instead of going through Windows' API and getting user permission, so they can steal file format registrations, that's developers being malicious or bad at their jobs. When developers persist things in the registry without taking into account the documentation that warns things like "the user hive might be loaded on a desktop then on a terminal server with a different version of the installed application", that's developers not using it properly.
A central configuration database with standard ways of accessing it, doubling as an early 'service-discovery' place (e.g. registration of COM components), with system-wide config and per-user hives loaded when needed, is a great idea, and the registry is tolerably good software design as a way to do that, considering it came from the early 1990s, but developers have carelessly misused it for years, and Microsoft take the blame for that.
Imagine you logged onto a server by SSH and your /home/.program/program.conf file is from a newer version and the program doesn't run. Nobody would attack "the concept of storing configuration in a text file" for that - but people would attack "the registry" when that equivalent happens on Windows.
How many times have people written scripts to "parse" and add/remove lines of config from text files, without ever saying "we're reinventing the wheel for the hundred thousandth time, what a miserable waste of human life, if only there was a standardised key/value store we could put data in with a standard tool"?
And I put "parse" in scare-quotes because how many actually implement a full and correct Apache/Bind/iptables/etc config file parser, instead of a quick regex/sed/AWK/Python hack? JSON changed this a little, but now you'd easily expect to configure a server/service using a REST interface, and the config would be persisted in some unknown not-user-facing database behind the scenes, and that's just "fine".
The minute you make third-party developers potentially responsible for the overall stability of your system, the fault is yours, sorry. Microsoft simply did not grok security principles when building Windows 95, and they have been paying the price ever since.
> your /home/.program/program.conf file is from a newer version and the program doesn't run.
But a problem in program.conf would sabotage the program, not the entire system. Whereas a bad registry can stop you from booting. That is the main problem.
A real unix equivalent could maybe be a problem with something in /etc, but that stuff is supposed to be 1) set to basic defaults that will work in any circumstance, and 2) kept under lock and key. Whereas the Registry was born as a free-for-all, and only much later got a permission system retrofitted on top.
Was the Registry a good idea? Maybe; but it certainly had a terrible execution, and now we're stuck with the results.
> if only there was a standardised key/value store
Yes, if only - but there isn't, the Registry is Windows-only. That's why developers end up managing text files: because they work everywhere, in 1995 as in 2019; whereas APIs come and go.
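And here is a hedged sketch of the "text files work everywhere" side of the argument: an INI-style file handled by Python's standard-library configparser behaves the same on Windows, Linux, and macOS. The ~/.program/program.conf path just echoes the example used earlier in the thread; it isn't a real program's config.

```python
# Sketch: a cross-platform text config file via the standard library.
import configparser
from pathlib import Path

CONF = Path.home() / ".program" / "program.conf"  # hypothetical path

# Write a small config.
config = configparser.ConfigParser()
config["ui"] = {"theme": "dark", "font_size": "12"}
CONF.parent.mkdir(parents=True, exist_ok=True)
with CONF.open("w") as f:
    config.write(f)

# Read it back the same way on any OS.
config2 = configparser.ConfigParser()
config2.read(CONF)
print(config2["ui"]["theme"])
```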
Windows 95 wasn't multi-user and came from a time when the only software most people would install was something they bought from a company they trusted. I'd hazard a guess that with NT4 and Windows 2000, the user hive was unlikely to be able to crash your system - sure, something probably could, but by that point there was a separation between what a user could do and what an admin could do. At that point, giving an installer root access to affect the system areas and then complaining that it could screw up a computer would happen on any OS from any company. If your argument is really that "it was a free-for-all initially, therefore I discount anything they learned or changed since" - that isn't very convincing.
> But a problem in program.conf would sabotage the program, not the entire system. Whereas a bad registry can stop you from booting. That is the main problem.
A bad user hive stored on a network drive, can stop a terminal server from booting? Can it? That's the comparison with /home/.program/program.conf that you're replying to.
> A real unix equivalent could maybe be a problem with something in /etc, but that stuff is supposed to be 1) set to basic defaults that will work in any circumstance
Like a kind of "safe mode" that can boot after something has screwed up the registry?
> That's why developers end up managing text files: because they work everywhere, in 1995 as in 2019; whereas APIs come and go.
Yeah, I've never had to fix character encoding issues in text files, or line endings, or a missing newline at end of file, or escaping or quoting issues, because text files are all the same and never cause any problems. Microsoft APIs from years ago still work.
Indeed, and that was a major mistake. Multi-user systems had existed for 20 years; Windows 95 was a huge step back.
> I hazard a guess that with NT4 and Windows 2000, the user hive was unlikely to be able to crash your system
... you wish.
> A bad user hive stored on a network drive, can stop a terminal server from booting?
Maybe, I honestly have no idea - I've only seen it happening locally. I wouldn't put anything past registry corruption anyway.
> Like a kind of "safe mode" that can boot after something has screwed up the registry?
Access to Safe mode requires hardware action at boot. The worst you can get from a userland program in unix failing on logon is that you have to log on via shell.
> Yeah, I've never had to fix character encoding issues in text files
The difference is that you can fix all that yourself, with a shell and an editor, whereas if Windows says the registry is borked, the registry is borked and you have no recourse.
> Indeed, and that was a major mistake. Multi-user systems had existed for 20 years; Windows 95 was a huge step back.
Single-user systems had existed for longer. They were dominant at the time, especially on small systems - many variants of DOS, Windows 3.1, OS/2 up to version 3, Apple MacOS up to version 8, Amiga, Acorn, Atari, every console, every 8-bit micro, Symbolics Genera[1]; BeOS pretended to be single-user. The days of a central computer with tons of connected terminals requiring multi-user auditing, separation, and billing were fading. Individual computers were becoming cheaper, and the internet hadn't risen yet. Looking back with the hindsight of how things turned out and calling it a "huge step back" seems weird; at the time it was a step forward from Win 3.1 in many ways and a step sideways in that respect.
We still have non-multi-user systems now, for embedded devices, mobiles, and similar. If it weren't for network effects, it ought to be possible to make a single-user OS that was simpler and therefore faster and cheaper, and I bet it would be good enough for much of the computing done on the planet today - compare the number of personal computers where multiple people need to log on with the number where you just want random people not to be able to poke at it without permission. Although it's totally not worth doing these days.
You started this bit by saying "once you give third parties control of the stability of your system, you lose". That happens if you give root access to anyone, on any platform. That happens in Windows 95, but the registry had system/user separation since Windows 2000. So that's 5 years without, vs 20 years with.
Yes, I wish it were free of problems and had a more standard way to edit it offline, but I don't think "userland can crash the system" was a design plan for the registry, so I don't judge it as bad design because of that.
I think Apple solved this problem well with the .app application files. All application files and resources are self-contained. Drag the .app file to the Trash to completely remove it.
Some apps are non-conforming, and store data elsewhere, but as a general model I think it works very well.
.apps aren't really analogous to the registry, they are more like the contents of the Program Files directory.
On macOS, the registry equivalent is all the plist files stored in ~/Library/Preferences and ~/Library/Application Support. These just get left behind when the .app is deleted.
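As a rough illustration of that model: each app's preferences are just a plist file you can inspect (and delete) yourself, for example with Python's standard plistlib. The com.example.MyApp bundle identifier below is a placeholder, not a real app.

```python
# Sketch: reading a per-app macOS preferences plist with the standard library.
import plistlib
from pathlib import Path

# Hypothetical bundle identifier; real apps use reverse-DNS names like this.
prefs = Path.home() / "Library" / "Preferences" / "com.example.MyApp.plist"

if prefs.exists():
    with prefs.open("rb") as f:
        settings = plistlib.load(f)  # handles both XML and binary plists
    print(settings)
else:
    print("No such preferences file - and deleting the .app would not remove it anyway.")
```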
macOS doesn't get it right. Removing .app files doesn't delete attendant files under ~/Library/Application Support. You need a utility like App Cleaner [1] (which is great) for that.
Early macOS used the installer system, which still exists but is increasingly rare. You'll notice it because the installer has the .pkg extension. The installer writes "receipts" to a special folder, listing all the files it created, which in principle should help you uninstall. In practice, this doesn't work well either, because apps litter the file system with files after they're installed.
Apple encourages the use of sandboxing for new apps. You'll find that apps each get a root under ~/Library/Containers. This means you can wipe all of an app's data in one go; as I understand it, sandboxed apps have to explicitly request access to any files they need to access outside of their container.
With APFS they could go even further and give each app their own volume, I suspect.
But most apps aren't yet sandboxed, so we still have file systems polluted with stuff.
GNOME used to have GConf, which is basically the same thing as Windows Registry. GNOME 3 replaced that with dconf/gsettings which serves very similar purposes.
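For example, on a GNOME system you can query that store with the gsettings CLI; the sketch below just shells out to it from Python. The schema and key are common GNOME ones, but treat the exact names as illustrative rather than guaranteed on every install.

```python
# Sketch: querying the dconf/gsettings store via the gsettings CLI.
import subprocess

result = subprocess.run(
    ["gsettings", "get", "org.gnome.desktop.interface", "gtk-theme"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip() or result.stderr.strip())
```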
Have you ever had chewing gum stuck in your hair? Same thing.
The problem with the registry is that it's a complex mess that can't be pulled apart and it's not modular.
A clean operating system would separate applications cleanly from each other so they can be configured in an isolated manner, installed and uninstalled cleanly without leaving remnants of themselves in the operating system gumming up the works.
If you use Windows then you are probably used to Windows slowing down over time, and you've probably just come to accept that you have to reinstall every so often because it has all become so slow. IMO that is at least partly, if not substantially, attributable to the registry. By way of comparison, I'm still using the first Mac OS install that I built 12 years ago, moved from machine to machine. It does not slow down over time.
Software should be loosely coupled - the registry tightly couples a whole stack of unrelated stuff together.
The Windows registry is a sticky mess of spaghetti mixed with chewing gum and glue and it can't be unwound.
Operating systems should be modular, able to be pulled apart and put together in a component like manner - the registry is the opposite concept, it's a central dumphouse that links and binds things and makes it hard to impossible to have a clean operating system.
Go have a look into your registry and tell me what's in there - does it make sense to you at any level at all? There you go - that's what's wrong with it.
> it's a complex mess
> sticky mess of spaghetti mixed with chewing gum and glue and it can't be unwound.
This is not how you convince someone of a position. If I said your comment here was spaghetti mixed with chewing gum, would you find that informative, let alone compelling?
> If you use Windows then you are probably used to Windows slowing down over time
No, this is just your stereotype from over a decade ago.
> and you've probably just come to accept that you have to reinstall every so often
Again, false...
> IMO that is at least partly, if not substantially attributable to the registry.
All these problems just to blame the registry? With absolutely no substantiation for your claims?
As a Windows developer who's pretty familiar with its ins and outs (and who's run his fair share of registry defraggers/cleaners/etc. over a decade ago) and who actually pays attention to his OS performance I have never been able to measure a performance impact.
> A clean operating system would separate applications cleanly from each other so they can be configured in an isolated manner, installed and uninstalled cleanly without leaving remnants of themselves in the operating system gumming up the works.
I don't know what this means, but on any system programs will leave remnants on uninstall. Just diff your ~/ and /etc folders for a clean Linux system vs. one you've used for a while and then uninstalled packages from.
> Operating systems should be modular, able to be pulled apart and put together in a component like manner - the registry is the opposite concept, it's a central dumphouse that links and binds things and makes it hard to impossible to have a clean operating system.
"Gumming", "dumphouse", "sticky mess", "spaghetti"... these don't ground your position in anything, except maybe chewing gum.
> Go have a look into your registry and tell me what's in there - does it make sense to you at any level at all? There you go - that's what's wrong with it.
In fact yes it does, and in fact it shouldn't even have to make sense to a user, and only parts of it are relevant for developers, just like so many parts of any system.
In my experience people who describe the registry like this last used Windows as their primary system over a decade ago, read a lot of rants about it, switched to Linux, and just kept repeating myths ever since.
Let's be clear - you're saying there's nothing wrong with the registry, that it's a great aspect of the architecture of Windows, that it doesn't result in applications being tightly bound, interconnected, and difficult to uninstall, that bits and pieces are never left behind in it, that it does not contain orphaned configuration elements, that it does not slow down Windows, and that in the past 10 years it has never needed "cleaning/defragging" as you call it? And you advocate for modern applications to use the registry for their configuration?
Why not? I think they could deprecate the interface and transparently replace the implementation with whatever the new thing is (config files?). Make Visual Studio emit warnings for a version or two, then errors (and LTS the last Visual Studio version that supports it until official EOL).
They're not going to do that because it's not important to them (and the registry is a bit of an icon at this point).
And here's how to do it instead of using the registry (for all uses - COM garbage included). And here's how to migrate, and here are some tools to help you...
They've indirectly stated support for it so long as VB6 (and COM) continue to be supported.
I suspect the 32-bit runtime is probably slated for deprecation sooner than the registry.
How the heck are you supposed to use the backup without the registry? You might as well just handpick which document folders you want to back up. Without the registry you can't even load a DLL; interfaces are loaded via UUID, which is looked up in the registry to find the DLL and its threading model.
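To illustrate that lookup: creating a COM object by CLSID resolves to an InprocServer32 entry in the registry that names the implementing DLL and its threading model. Below is a hedged sketch using Python's standard winreg module; the GUID is a placeholder, not a real component.

```python
# Sketch: how a CLSID maps to a DLL path and threading model in the registry.
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"  # placeholder GUID

def com_server_info(clsid: str):
    path = rf"CLSID\{clsid}\InprocServer32"
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, path) as key:
        dll, _ = winreg.QueryValueEx(key, "")                # default value: DLL path
        model, _ = winreg.QueryValueEx(key, "ThreadingModel")
        return dll, model

try:
    print(com_server_info(CLSID))
except FileNotFoundError:
    print("No such CLSID registered (expected for the placeholder).")
```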
Getting the last known good state of the registry can often tell you what install caused issues.
Reinstalling is fine, but IMO should be a last option in many cases. Especially if you are trying to quickly recover from an incident - reinstalls take time!
Actually, if you maintain a batch file with all your settings plus use Chocolatey, it can be pretty quick (under 40 minutes to reinstall everything from scratch in my case).
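A minimal sketch of that approach, assuming Chocolatey is already installed and the script runs from an elevated prompt; the package list is just an example:

```python
# Sketch: reinstall a known package list via Chocolatey.
import subprocess

PACKAGES = ["git", "7zip", "vlc"]  # example list, adjust to taste

for pkg in PACKAGES:
    # "choco install <pkg> -y" installs without interactive prompts.
    subprocess.run(["choco", "install", pkg, "-y"], check=False)
```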
> The system registry is no longer backed up to the RegBack folder starting in Windows 10 version 1803
> This change is by design, and is intended to help reduce the overall disk footprint size of Windows. To recover a system with a corrupt registry hive, Microsoft recommends that you use a system restore point.
> If you have to use the legacy backup behavior, you can re-enable it by configuring the following registry entry, and then restarting the computer: HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager\EnablePeriodicBackup
> Type: REG_DWORD
> Value: 1
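For anyone who prefers scripting it, here's a small sketch that sets exactly the value quoted above using Python's standard winreg module. It has to run elevated, and a reboot is needed afterwards, per the documentation.

```python
# Sketch: re-enable the legacy RegBack backups (value from the quoted docs).
# Must run from an elevated (administrator) process.
import winreg

KEY_PATH = r"System\CurrentControlSet\Control\Session Manager\Configuration Manager"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnablePeriodicBackup", 0, winreg.REG_DWORD, 1)

print("EnablePeriodicBackup set to 1; reboot for the change to take effect.")
```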
The issue isn't that they stopped doing it, but that they failed to properly communicate it to users.
Coincidentally, I got bit by this on a family computer over this weekend (had to reinstall). I went around enabling the backup via the registry on all of them.
System restore points are the recommended and supported way to backup/restore critical system information including the registry, and they don't depend on the "secret" RegBack folder backups that this Forbes article is spreading panic about.
Yeah, considering the May 2019 ISO takes 26.6 gigabytes post-install and updates, I'm having a hard time believing this excuse. That's a system with no additional tools, by the way - no Python, no Office, no nothing. Just a straight-up Windows install -> check updates & wait.
Omitting automatic registry backups is a defensible policy. Telling the user that a backup was completed while a 0 byte file was saved is lying. Seems like a criminal act under the circumstances.
What are you talking about? I don't think this was ever even an announced feature. It's redundant with another type of backup which is the one they've been publicly promoting for a long time. (Restore points.) They finally got around to eliminating the unnecessary one. I don't remember there ever being a UI around this or a publicized best practice about using it.
I don't think it qualifies as "telling the user" if it involves the user poking around undocumented system tasks.
It's very minor fraud: if a product you purchase says it will do backups but doesn't do them, then you were defrauded. I think you could sue for actual damages, if you had any, and be successful if it's in the documentation anywhere, or if the system actively informed you that a backup was made. I'm in the UK, fwiw; I'd expect most English-law systems to be broadly similar, but IANAL.
Fraud usually requires that the entity doing it gains something from it, especially if we are talking about a "criminal act" as the parent did. (IANAL)
The civil side might be different, but that is not what the parent insinuated. (And yes, my question to the parent was obviously meant to show that I disagree.)
>Fraud usually requires that the entity doing it gains something from it //
They got your money; so it's obtaining benefit by deception. If it says in the product specification that it does something that it doesn't do then it's fraud, which is criminal.
It's a de minimis form of selling you a product that they know to not include the features they sold.
The police wouldn't pursue it (they don't even bother with burglary under O(£1000s) in the UK), but you could sue for damages and should, in theory, be successful.
A quick search for "RegBack" doesn't turn up any MSFT documentation on it. But it does turn up an article titled "How to restore Registry from its secret backup on Windows 10". This makes me think the feature is undocumented.
If I spent all day tweaking people's registries, only to find that they are not backed up to be restored when you want a sane configuration back... um, "pissed off" would not even begin to approach describing the emotional carnage evoked.
I get the hunch MS is set on controlling the user's configuration at all times.
On the flip side, I remember the hell of restore-point trojans; perhaps this is MS's best go at mitigating a threat they currently guesstimate is coming?
> If I spent all day tweaking people's registries, only to find that they are not backed up to be restored when you want a sane configuration back... um, "pissed off" would not even begin to approach describing the emotional carnage evoked.
Step one to editing the Registry is _stop editing the Registry._
Hell, registry tweaking can be so dangerous that Windows keeps a copy of the critical keys from the last successful boot. That's what "Last Known Good Configuration" is: booting with that saved control set instead of the current one.
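As a small, read-only illustration of that mechanism: the SYSTEM hive's Select key records which control set is in use and which one was saved from the last successful boot.

```python
# Sketch: inspect which ControlSet is current vs. the last known good one.
import winreg

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\Select") as key:
    current, _ = winreg.QueryValueEx(key, "Current")
    lkg, _ = winreg.QueryValueEx(key, "LastKnownGood")

print(f"Booting from ControlSet{current:03d}; last known good is ControlSet{lkg:03d}")
```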
I suspect the simplest explanation is "Microsoft is tired of people being able to use really old versions so they are breaking that ability any way they can."
See the current situation of Windows 10. There are specific "checkpoint" versions and they force you really aggressively to come up to them when they get released.