I see two reasons why it has such a bad reputation.
1. It exposed how terrible device manufacturers are at writing drivers. nVidia alone (which only makes relatively niche hardware) was responsible for the majority of Vista BSODs. And most printer or scanner companies probably hadn't written a fresh device driver in a decade, so it took another 5 years for them to catch up (and they have just kept piling on bloat ever since).
Yes, there were major architectural changes, but this was the perfect opportunity for them since Vista was Microsoft's first mainstream consumer 64-bit OS (I don't count the 64-bit edition of XP). Unfortunately the driver situation made the 32-bit and 64-bit versions of Windows behave rather differently, which did not help.
2. Prefetch/SuperFetch, or whatever they called it, was WAY too aggressive. If you had a decent amount of RAM on launch day, or just a regular new computer 6 months after launch, the prefetching algorithms were so aggressive that they completely overloaded the hard drives, which perform terribly under that kind of random-access load. It meant the first 10 minutes after boot were spent trying to speed up things you might want to do later, at the extreme cost of slowing down things you actually wanted to do right then. Yes, the prefetch reads were supposed to run at low priority, but it really exposed how bad spinning hard drives are at multitasking: if one task takes 1 second on its own, running two such tasks in parallel might take 9 seconds rather than 2, because the head spends its time seeking back and forth.
After enough uptime this wasn't a problem, as all your free RAM had been filled by prefetch or actual programs. If you seldom rebooted you never had to worry about it. But the regular user wants to use the computer right away after boot, and will only remember the agonizing slowness of trying to start the browser and office applications right then.
Compared to Vista, Windows 7 was just a new (much better) taskbar and better-tuned prefetch, with the very important difference that by the time Windows 7 arrived the drivers had matured and many of them even supported 64-bit systems... But that was all it took for Vista to be seen as a disaster and Windows 7 as an unparalleled success.
I built a new PC in 2007, bought Vista Ultimate OEM, installed it and added all patches up to that point. Two things became immediately obvious:
- As others have pointed out, UAC was way too active. Just about any application you cared to launch required permission dialogs to be clicked through - irritating to everybody, scary to most users, and quickly ineffective as everybody stopped reading and just reflexively went for the OK button.
- Lots of legacy applications broke for the most trivial of reasons: they were written to store configuration and other data in their installation directory, which defaulted to "C:\Program Files". This worked fine on Windows 9x, which by default allowed user-owned processes to do just about anything, but not on NT, where writing to Program Files requires elevation.
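(For reference, the per-user locations a well-behaved app is supposed to use instead; a quick PowerShell sketch, paths assume a default install and the comments are mine:)

    $env:ProgramFiles                                      # C:\Program Files - read-only for standard users
    [Environment]::GetFolderPath('ApplicationData')        # roaming per-user settings (%APPDATA%)
    [Environment]::GetFolderPath('LocalApplicationData')   # local per-user data/cache (%LOCALAPPDATA%)
    [Environment]::GetFolderPath('CommonApplicationData')  # machine-wide writable data (%PROGRAMDATA%)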
So new Vista owners would click through a bunch of obnoxious UAC popups to install their favorite Windows applications, click through more UAC popups to launch them, and then watch them crash or mysteriously lose all their data.
You got extra loser points if you went for the shiny new 64 bit version, in which case your legacy 32 bit application installer was more than likely to try its luck with "Program Files" instead of "Program Files (x86)". Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?
None of this was terribly hard to fix if the application was still supported, but it did require the user to upgrade, often at a cost. Worse, if you were a small indie developer, releasing an upgrade now pretty much required buying an expensive certificate to sign it, lest UAC keep warning your users that they were launching an untrusted file from the scary internet. So lots of small free- and shareware apps which people loved were abandoned, undoing part of the Windows platform's greatest advantage: its large library of existing applications.
> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?
Or alternatively, why was breaking them out required in the first place? To this day I frequently end up having to look in two places to find something, because it's never obvious which of the two Program Files it should be in. Before 64-bit Windows there was only ever the one place. This is a permanent usability regression.
And of course, I wouldn't even need to be digging through there in the first place if the Start menu launcher just worked, but no, they had to junk it up with Cortana, which is so incompetent it can't even find installed applications by name. More details on my Cortana rant here: https://news.ycombinator.com/item?id=15758641
I absolutely agree with this. There should have just been a 'Program Files' folder and if a conflicting program was already installed, the architecture could be appended to the name of the newer install's directory (C:\Program Files\Foo, C:\Program Files\Foo (x64)).
>applications broke for … they were written to store configuration and other data in their installation directory, which defaulted to "C:\Program Files"
Vista tried to take care of that by transparently redirecting to %LOCALAPPDATA%\VirtualStore, writable with user privileges. The feature is called Virtual File Store, and comes together with an analogous Virtual Registry Store.
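To make it concrete (hypothetical app and file names; default paths assumed): a legacy program that thinks it wrote to its install directory will actually find its data under the per-user VirtualStore, e.g.

    # What the legacy app thinks it wrote:
    #   C:\Program Files\LegacyApp\settings.ini
    # Where file virtualization actually put it (PowerShell):
    dir "$env:LOCALAPPDATA\VirtualStore\Program Files\LegacyApp"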
> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?
"Program files" is localized so it's not even "Program files" in all languages. Installers that looked to that folder were doing it wrong anyway and wouldn't work on non-English machines.
"Program Files" isn't localized, at least not on the French version of Windows. Only user content is ("My Documents", "Desktop", ...), and even then, some of them are just links to non-localized directories.
You could, however, change the path of "Program Files" so your point still holds.
You are correct now but Program Files was fully localized in XP and below. In Vista and beyond, it uses junction points to the non-localized names (something I just learned).
> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?
The same thing happened with the System32 folder. On 64-bit systems, System32 actually contains the 64-bit(!) versions, and the equally confusingly named SysWOW64 contains the 32-bit versions.
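A quick way to convince yourself, from a 64-bit PowerShell on a default install (Sysnative is the documented alias for 32-bit processes):

    Test-Path C:\Windows\System32\cmd.exe    # the 64-bit binaries live here -> True
    Test-Path C:\Windows\SysWOW64\cmd.exe    # the 32-bit binaries live here -> True
    # A 32-bit process that opens C:\Windows\System32\... is silently redirected
    # to SysWOW64; it can reach the real 64-bit System32 via C:\Windows\Sysnative\.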
I'm a small fish, but all of the customers who purchased a Windows Vista computer from me found Vista a joy to use. Their drivers worked, their programs worked and the machines were fast. I think a much stronger reason for the bad rep would be that so many big-name labels sold Vista machines that were woefully under-powered and then loaded them with bloatware. I recall visiting a few customers who had new Vista computers from big brands and paid me to upgrade them within the first week of owning them!
You are 100% correct. I remember friends buying both desktops and laptops at the time that essentially contained XP-targeted processor/memory configurations, but instead now were shipped with 32-bit Vista. It was a perfect recipe for bad performance out of the gate, and that's before you even get into the pre-loaded McAfee, etc.
This issue is being discussed a lot and nothing is being done to fight it.
Microsoft, and Google with Android (where the same issue can be seen), should step in with a more proactive approach to solve this problem.
I don't have a solution, but they should probably use some business incentives and possibly something built directly into the OS, like built-in benchmark scores/graphs, ideally compared to a bare-bones configuration of the same device, so everybody can see how much performance you lose to all those preinstalled "features".
A lot of the bloat on Android phones I see is from Google, apps you can't uninstall without hacking. My Huawei had a few apps from the manufacturer, but they were removable - there are just under 20 Google apps that came preinstalled, many [most?] of which can't be removed.
I use all of the heavyweight Google apps that my phone shipped with, though, so I don't consider them "bloat". Maps, Gmail, Chrome, Play store, Drive, Hangouts, Photos, Wallet, and YouTube all get plenty of use. I will grant you it's weird that I can't uninstall most of them though.
I use none of the Google apps, yet they occupy disk space, run at startup, and can't be disabled or uninstalled, all while taking up screen space and cluttering my list of installed apps with stuff I don't need, don't want and don't use but cannot remove.
They are not running; only Play Services is, and you need that anyway. It is event-based: an app is only launched and notified when a push notification comes in.
And even that may be delayed, because push notifications can be held back to give the radio a chance to sleep and save battery.
They are running on my Android phones (4.2, 4.4 and 6.0).
Android tries to scare me off when I stop them and will not allow me to disable most of them.
I do not need Play Services; what would I need it for? I'm not even sure what it does apart from instantly gobbling my 50 MB monthly data plan downloading updates I don't want and cannot cancel, for applications I do not use.
Play Services is a framework used by other apps; it handles Google accounts and push notifications, among other things (and that is only the tip of the iceberg). Without it, your device would become Kindle-like, and all the apps that do not run on a Kindle (or other Google-less Android builds) would not run on yours either.
On my personal phone (Sony), I have only Play Services, Play Store, Gmail, Hangouts, Maps and Youtube enabled. All the others are disabled, including the 'Google' app.
> Without it, your device would become Kindle-like, and all the apps that do not run on Kindle (or other, Google-less Android versions) would not run on yours too.
I take issue with this statement. Many apps which "depend" on play services work fine on my Google-free android. Some examples are tutanota, duo lingo, and some games. Though it's not an easy path, I wouldn't consider my cellphone experience "kindle-like".
If you do not want to use them, disable them. If you disable them, no updates will be done (and existing ones will be deleted).
You cannot physically delete them, because they are on the /system partition, which is read-only. That means even if you deleted them as root, you would not get more space for other apps or your data. The read-only /system also serves purposes you would lose: it has a known file layout (so your phone can be image-updated, if you ever get an update), it is signed (so you can know your phone has not been tampered with, since it is not going to re-sign itself once modified), and it is used for factory-reset/software-recovery purposes, so once you wipe /data, your phone is back in factory-mint condition (software-wise, of course).
Most of the time the disable button is greyed out and cannot be clicked.
I'm not familiar with the /system partition but it seems logical that if I can delete them as root, it means I can also install something else in its place or put some of my data there, which would help me a lot as my phone does not allow for an additional SD card.
Whether they can be disabled depends on the phone vendor. In the phones I currently have available (Google, Sony, Samsung), all the Google applications can be disabled. Samsung usually prevents disabling its own applications, but still allows disabling the Google ones.
If your vendor prevents disabling the apps, you can still try the route using adb and pm (google for adb pm disable).
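A minimal sketch of that route (assumes USB debugging is enabled, a reasonably recent Android, and that the package name matches what is actually on your ROM; YouTube is just an example):

    adb shell pm list packages google                         # find the exact package name
    adb shell pm disable-user --user 0 com.google.android.youtube
    adb shell pm enable com.google.android.youtube            # undo it if something breaks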
The point I was making about /system is that you don't want to mess with it, even if you have root. You can break more than you think, including dm-verity, and then you are not going to boot anymore. Also, apps installed in /system get their updates installed into /data, so it is not going to solve your space problems anyway. You would have to repartition your phone, which on ARM platforms opens a new can of worms (partitions are defined in the secondary boot loader, which is signed too; moreover, if you do this wrong you get a brick, and you are not going to boot again without reflashing the original SPL with an external programmer).
Microsoft did actually introduce a certification programme, as well as start to sell their own systems (other than Surface). I can't recall the brand they used but it seemed like a great idea.
Seemed like. Whenever I quoted one to a customer they always turned their nose up at the price, then paid me for several hours to debloat whatever they bought instead, fix a driver that shipped faulty, and a year later paid me again to upgrade it! Oh, and replace the useless battery. The list goes on!
Case in point: Sony. The amount of CRAP that comes with the Xperia is insane. There was a non-removable "What's New" app that would notify incessantly whenever it wanted to push some new app that Sony probably made money shilling.
And never mind the Google crap.
The day I dumped it and installed LineageOS, my phone became usable again.
The bad rep for Vista comes from it getting in the way of using the computer, among other things such as being buggy.
I remember an update downloading itself and applying itself at shutdown then restarting to apply itself some more and looping like this indefinitely. Best update ever \o/
> Compared to Vista, Windows 7 was just a new (much better) taskbar and better-tuned prefetch, with the very important difference that by the time Windows 7 arrived the drivers had matured and many of them even supported 64-bit systems... But that was all it took for Vista to be seen as a disaster and Windows 7 as an unparalleled success.
Still, Vista was a disaster.
I remember a conversation I had with a MS engineer at that time:
- Vista is like, the foundations for the good things to come. If you want a solid house, you dig solid foundations.
- I am buying a house, not just pillars in the ground.
(It was the same with the W3C specs. Him: "maybe Mozilla and Opera are the ones misreading the box model spec and IE has it right"; me: "MS is on the board...").
I'm not quite sure how to interpret your last comment. You prefer the IE box model? Or you're saying they did it right from the get go? Because quirks mode was abandoned and everyone uses the Moz/W3C model now.
A better example would be file line endings, where Microsoft did get it right (\r\n) and all the other OSes screwed up, using just \r or \n.
Interesting that you find \r\n to be "right" and the others "screwed up". I'm curious about the reasons for that.
The biggest problem is that each OS went its own way (Mac started with \r but of course uses \n now). If they all had the same line ending all along, whatever it was, no one would think much about it.
\r\n has the obvious disadvantage of being twice the size, along with making it possible to land in the middle of a line ending instead of before or after one.
Of course one advantage would be if you're controlling physical equipment where carriage return and line feed are independent of each other. I learned to program in 1968 on a Teletype ASR33 where CR and LF were literal commands to return the carriage to column 1 and advance the paper. You had to use both because they did two different things. Or on occasion you might use CR by itself to overprint a line. LF by itself was pretty rare, but would do what you expect if you used it: advance the paper without moving the print carriage.
CR LF was fine if you were typing interactively - in fact you just had to hit the CR key and the remote system would provide the LF. But usually we would punch our programs on paper tape, dial in, run the tape through and get the printout, and hang up right away. At $30/hour in 1968 dollars, this saved a lot of money. And of course you would run your tape through locally to print out and proofread your program before testing it online.
To be able to print a tape locally, you needed both CR and LF, but even that wasn't quite adequate. You really wanted to allow a little extra time for the machinery to settle, so the standard line ending we punched on a tape was CR LF RUBOUT.
RUBOUT was a character that punched out all the holes in a row of the paper tape. It was ignored by convention, so you could erase a typing error when punching a tape by pushing the backspace button on the tape punch and hitting the RUBOUT key.
Because it was ignored, RUBOUT was also useful as a "delay" character in the newline sequence. So I guess I'll never get over the feeling that the One True Line Ending is: \r\n\x7F
(Nah, I'm happy with \n, but it makes a good story.)
NUL was also a common delay character, and one can find some of the delay mechanisms, enshrined in the POSIX standard as part of the General Terminal Interface, in Linux even today. (They are largely gone from OpenBSD and FreeBSD.)
Certainly not from DEC. DEC had a powerful "RMS" (Record Management System) layer between a program and the disk. That layer would take files in a variety of formats and convert them as needed.
For example, you could define that your file was fixed-length records (like the old punch cards); at that point each line doesn't have a line separator at all; the \n or \r is not stored on disk. But when you read a line using the C routines, one will be added.
The W3C box model became standard and IE was criticized for its quirky noncompliant behavior [1], but then many years later 'box-sizing: border-box' was introduced and widely praised and adopted by frameworks [2][3]; it's funny how things change.
> then many years later 'box-sizing: border-box' was introduced and widely praised and adopted by frameworks
Well, IE implemented their version of the box model in '97 (coincidentally, NN4 did the same); the box-sizing property was first proposed in 1999[1], first appeared in a draft in 1999[2], and was implemented in Gecko in 1999[3] and in IE5/Mac in 2000[4].
That's two years from IE/NN shipping the non-standard box model (the standard one was defined before IE4 and NN4 shipped) to having a property to toggle between them. To me, that isn't "many years".
Really what makes it seem like many years is the fact that IE didn't implement box-sizing until IE8 which shipped in 2009.
If you send it in raw format to a Teletype, it'll print correctly...
Other than that, I don't see it being any more right. But it is a convention that is far older than Windows or MS-DOS. I saw it myself first on CP/M, but it was there on VAX/VMS, and I expect the Teletypes had it from the 1960s.
It's not that simple. In the 1960s, operating systems such as Multics existed, which even then had the idea of device independence. So the end-of-line sequence, as far as Multics applications were concerned, was a single LF, whatever the terminal type. The operating system was in charge of converting that to whatever the terminal actually needed. Multics was following the standards of the time, moreover: the ASCII standard of the time explicitly allowed a single LF to denote both a Line Feed and a Carriage Return (and indeed any padding delay characters too) if the system decided to employ single-character line endings.
* H. McGregor Ross (1964-01-01). "The I.S.O. character code". DOI 10.1093/comjnl/7.3.197. The Computer Journal. Volume 7, Issue 3. pp. 197–202.
* Jerome H. Saltzer and J. F. Ossanna (1970). "Remote terminal character stream processing in Multics." DOI 10.1145/1476936.1477030. Proceedings of the AFIPS Conference 36. pp. 621-627.
I tried to say that it was not so simple. Yes, obviously Multics had the LF line-end convention. Some systems saved files internally in a record format, so the line end was whatever the software decided to print out there.
But mostly I was just reacting to the silly idea that CRLF would be right and lone LF not: Microsoft didn't come up with that idea. It was already there in the 1960s, and perhaps useful when you didn't want to write any line drivers to convert strings at output. But that's not really any more "right" than other conventions.
It has downsides if you're actually implementing software, but \r\n is the semantically correct way to represent a newline, because \r is Carriage Return (x=0) and \n is Line Feed (y++). In many scenarios \r is a useful primitive to have by itself, but if you want to support unix-style line endings you can't implement it, and you can't implement \n as a line feed - it has to be a newline. So in practice some expressiveness from the original character set was thrown out to save one byte per newline.
> \r\n is the semantically correct way to represent a newline, because \r is Carriage Return (x=0) and \n is Line Feed (y++)
You left out a key qualifier: CR/LF is the semantically correct way to represent a newline on a physical device that has a physical carriage that has to move back to x=0 and advance one line in the y direction in order to start a new line. What devices connected to any computer today have that property? Answer: none.
In fact, even on computers that have such devices connected, the semantic meaning of "newline" in any file the user actually edits is most likely not to actually cause a CR/LF on the device. Word processing programs for decades now have separated the in-memory and on-disk file data from the data that actually gets sent to a printer. So the file you are editing might not even have any hard "newlines" in it at all, except at paragraph breaks--and that's assuming the file format uses "newline" to mark paragraph breaks, instead of something else.
I find it funny that CRLF as a vestige of devices long gone is ridiculed, yet the same people don't bat an eye at emulating an in-band sort-of API for controlling cursor movement and display characteristics for similar devices of the past. Heck, *roff and man continue to format text by using overstriking to create underlined and bold text and rely on the terminal emulator to understand that this happened to create a particular effect on physical printers and emulate the result.
Neither world is clean, pure, and free of weirdness that's only properly understood when looking decades in the past.
Actually, the GNU tools (in particular grotty) advanced forward to 1976, and are capable of ECMA-48 control sequences that render actual italics and boldface like the source markup describes. It is just that the people who make operating systems have gone out of their way to disable this.
Indeed, I'm looking at my copy of "The C Programming Language", second edition (ANSI C). Page 241: "A text stream is a sequence of lines; each line has zero or more characters and is terminated by a \n. [\n was defined earlier as ASCII 10 and designated either as NL or LF.] An environment may need to convert a text stream to or from some other representation (such as mapping '\n' to a carriage return and linefeed)."
> I'm not quite sure how to interpret your last comment. You prefer the IE box model? Or you're saying they did it right from the get go? Because quirks mode was abandoned and everyone uses the Moz/W3C model now.
It's worse than that. His position was: MS's understanding and implementation of the box model is the correct interpretation of the W3C specs (yeah, I know).
In the old days, we knew to either open a file in text mode (where whatever the OS had would be converted to a single \n) or in binary mode (where it wouldn't, and you had to deal with the conversion yourself).
IMHO, what SHOULD happen is that if you have a device with special timing requirements (like the old-fashioned printers with no memory), then the driver is responsible for handling the timing. Adding weird bits to everyone's files is a bad idea.
And yes, I know the difference between carriage-return and new-line. And I know that in the old C specs, "\n" didn't have a guaranteed mapping to either.
Apart from the general slowness and crashes caused by the issues you mention, I remember UAC was way too intrusive (sometimes you would see 4 or 5 alerts in a row that would monopolize your screen), copying files was slow as a snail and tended to outright crash for large sets of files, and suspend never really worked on my laptop.
Anyway, how is a consumer product that provides a bad experience to the user (regardless of the reasons) "amazing"?
Copying files is an interesting case study. Vista was actually faster but perceived to be slower. XP's copy dialog would go away before the data had actually been fully written out; if you turned off the power the second the dialog disappeared, you'd lose data.
Vista actually sped up file copy operations, but it also fixed that bug, leading to a false perception of slowness. Worse, the progress bar behavior reinforced that perception. A progress bar that speeds up at the end is perceived as faster than one that is perfectly even, which in turn is perceived as faster than one that slows down at the end. And Vista's progress bar usually slowed down at the end because it didn't properly account for those disk-sync et al. operations ahead of time. The result was an experience that felt worse despite what was happening under the hood.
Did this combine with the prefetch situation? It always seemed to me that some sort of inspection of state was happening as the dialog box progressed. Even things like Ultracopier seemed to run into this problem.
>As Larry Osterman noted, UAC is not a security feature. It's a convenience feature that acts as a forcing function to get software developers to get their act together.
But noisy UAC ends up being ignored completely by users - there's a very fine line to walk in order to ensure that it protects truly sensitive actions and is recognised by users as such.
I don't know how it looks nowadays, but I remember Apple forums used to have questions about how to run as root to avoid having to deal with such dialogs.
All UAC ever did was further reinforce the already deeply ingrained MS convention of 'just click OK' without reading the dialog. It never told you why the program needed elevated access, only that it needed it to do what you wanted. And of course, the average user just wants their computer to do what they asked. I would see some users run a program and automatically move their mouse to the place the UAC prompt would appear, seconds before the prompt ever came up. That's a whole new level of programming people.
The only time I can think it ever came in handy was if the user had an unprivileged account and would need an admin to type in the password - hopefully the admin would ask the why question and dig into it before dismissing the UAC prompt.
Loving how the tone of the article is more about "you dumb users" and not "our architecture is so shit we couldn't implement a feature so we just lied to our users instead".
Usually it's the system manufacturer's fault. A recent example where I work: Lenovo's recommended Intel wireless driver (the one on the Lenovo site) was over six months old and had known issues that caused machines to fail to wake from suspend. Installing the drivers directly from Intel resolved the problem.
That's pretty much the best-case scenario. A lot of chipmakers (Conexant, JMicron, Intel in many cases) don't allow you to download drivers directly from them, so you are stuck with whatever the OEM provides. In some cases I've found that newer-model laptops by the same OEM use the same audio/media controllers under the hood, and I can use the newer driver from the updated model.
My next computer will be something from Microsoft's Surface line. They seem to be the only manufacturer who can make proper devices (everything working and power bricks which last more than 6 months - thanks Apple).
I wouldn't ascribe anything that generous to a Surface device as long as they still use the Marvell wireless cards, which have a storied history of causing connectivity problems[0]. I largely enjoyed my Surface Pro 3 but frequently had the same issues. The Surface Book seems to have issues with sensing connectivity between the keyboard and screen as well. Anecdotally, at work we even had a developer HoloLens turn into a paperweight because the WiFi stopped signalling. MS told us to junk it; unrepairable.
I've had a few Dells over the years and have never had a problem with them. The trick at first was to buy from their business line - no bloatware and better support. Now I just buy from the Microsoft Store and they come with no non-MS bloatware. My 2-in-1 Dell is a pretty good computer. I just wish it had a 3:2 display instead of 16:9.
> everything working and power bricks which last more than 6 months - thanks Apple
Face it: with sample sizes this big, you're not going to find anything that always works for everybody. Anecdotally, having worked for a Dell-only place and typing this on a 7-year-old XPS, I never encountered severe issues. Only standard hardware problems caused by wear (HD/memory/keyboard keys failing after 5+ years of usage).
Lol, yep. My Win 10 desktop will suspend just fine, but then it wakes itself up for no apparent reason and just stays running indefinitely after that. I can't figure out what the cause is, but I've taken to just shutting it down in between uses.
I use sleep frequently and I depend on the feature. Next time your machine wakes up by itself, go to a command prompt and type "powercfg -lastwake". That will tell you how it happened.
For me, part of the solution was to go into the device manager and edit the properties for my mouse and my network controller. On the "power management" tab I disabled the "allow this device to wake up the computer" option. I only use the keyboard to wake the PC.
Additionally, when I left the machine sleeping overnight, there was some scheduled task that would occasionally wake the machine. There is a way to disable that, but I forget the specifics.
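A few powercfg queries help narrow a mystery wake down (run them from an elevated prompt):

    powercfg -lastwake                   # what woke the machine last time
    powercfg -devicequery wake_armed     # devices currently allowed to wake it
    powercfg -waketimers                 # scheduled timers/tasks that can wake it
    # Scheduled wakes can also be turned off under Power Options -> advanced
    # settings -> Sleep -> "Allow wake timers" (wording from memory).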
My computer does it, and typing in powercfg -lastwake just says "unknown source". I've disabled every wake event, update service, wake-on-lan, disabled the ability of my mouse and keyboard to wake my computer up, and it still does it.
This. I was playing Starcraft before bed, and just shut my laptop to suspend it... it was suspended until about 4 hours later, when, in the middle of the night my wife and I were awoken by sounds of zerglings dying. Very unpleasant, and quite surprising.
I've been fighting with this literally since Windows 10 came out. My desktop PC wakes up every night, around midnight/1am, and will not go back to sleep. I've disabled every single wake event in Windows, and there is no wake-on-LAN (as a matter of fact it does it with the LAN cable unplugged and ethernet disabled). Looking in the power events just says the computer woke up due to "unknown source". It does not do it with Windows 7.
I had the same issue. I think your keyboard/mouse are allowed to wake the machine from sleep, and some glitch gives your desktop the impression that a key was hit. If you disable this in the device manager, the issue should be resolved.
The problem with Vista was mismanagement of expectations.
The ad campaign featured people spotting deer at dawn from their home office, and the message was, this will completely change your life and bring about your inner sense of wonder, as if you were born again, a new person in a brave new world.
In reality, it was an OS upgrade that didn't work too well and that was more-or-less forced on you if you bought a new device, while at the same time XP continued to work just fine on all your older PCs.
People were disappointed, upset and angry. All other things being equal, a little more humility and a lower profile would have helped.
"It exposed how terrible device manufacturers are at writing drivers."
In their defense, the Windows Driver Model (https://en.wikipedia.org/wiki/Windows_Driver_Model) may make it possible to wring the last bit of performance out of a system, but it doesn't make it easy to write a driver. Its documentation also was somewhat of the "once you know what this page is trying to tell you, you will be able to understand it" variety.
It also didn’t help that new hardware frequently introduced new sleep state levels at the time.
Another reason Windows Vista did so badly was that it really needed a clean install to make it work properly.
This is true of all Windows versions, but was particularly true of Vista, and of course the reality was:
* Very few people carry out a clean install when they get a new computer. This is as true today as it was ten years ago
* Hardware manufacturers loaded the PC's up with terribly written adware before shipping (this situation has improved slightly)
The requirements for Windows 10 aren't that much higher than Vista's. So the average person would get their new Vista PC running on a Core 2 Duo/Pentium D and 1-4GB of DDR2 RAM, loaded up with crapware and without a clean install, and it would run horribly.
By the time Windows 7 came out, the PC manufacturers were writing slightly more efficient crapware, hardware was generally a bit more powerful, and they had fixed a tonne of bugs in the OS itself.
Windows is the second most widely used server operating system in the world, second only to Linux. It's pervasive in companies outside of the tech industry, to give one example of typical usage.
Linux and Windows together dominate this market so thoroughly that everything else (UNIXes, BSDs, macOS) is practically a rounding error.
Not what I was originally referring to, but yes at work our entire system is 30+ Windows Servers running C# programs and services using Consul, Nomad, Mongo, Sql Server, and Memcached.
Continuous integration and deployment is all done with Microsoft agents orchestrated by VSTS (Microsoft's hosted version of TFS). Yes, we use git.
I came into a midsized company with no real development shop as the dev lead, with free rein, a decent budget, and management support to build the department the way I saw fit. I had never used VSTS and had heard nothing but bad things about TFS. They already had it and I decided to play around with it. I was amazed how easy it was to create a CI/CD environment that followed generally accepted best DevOps practices.
If you want to serve files to Windows machines and you don't mind the license and painful remote administration, it's a reasonable choice. Guaranteed SMB compatibility.
How is remote administration painful? All 30+ servers I run have VSTS agents. I can do most administration by running Powershell scripts and choosing the deployment groups based on the purpose of the server. I have Consul agents for health checks, Consul watches for alerting and/or automatic recovery, Nomad for running executables across the app servers, HashiUi for monitoring and controlling the Nomad pool.
I can approve and deploy a release from my iPad (the website is painful on my phone) by logging into Microsoft's Visual Studio Team Services website.
I won't even start to gush about how easy setting up a build and release pipeline is in VSTS compared to the other tools I've used.
It is painful in the mindset of the typical Unix guy, who is used to being able to "just ssh somewhere and vi /etc/something", which is not the way you want to manage large deployments but works even for large-ish ones. On Windows there is no real middle ground between using the GUI for everything and automating everything.
Also, with Unix servers, the various bastion hosts and similar "security measures" are a minor inconvenience and usually even supported by automation tools, while on Windows this usually ends up being a major PITA.
It's been years since I last worked with Windows, but:
1) You can install an SSH server on Windows boxes just fine, then use Putty to SSH directly into PowerShell. PowerShell is not a classical shell, but rather a REPL for a procedural, imperative and object-oriented DSL for system configuration and administration based on .NET, with much saner syntax than my beloved zsh. In short, it works quite well.
2) With PowerShell remoting - I'm a bit fuzzy on the details here, it was a long time ago - you don't even need the SSH server; you can issue remote commands from your local PS instance. It required a bit of configuration up front, IIRC, but then you could replace your local session with a remote one with a single command.
So, in my experience - and note that it was probably nearly a decade ago! - Unix-style remote management was absolutely possible and not that much less convenient. And PowerShell is really a solid tool, with easy access to all of .NET and all of the system; the only annoyance I remember was certificate/signature management, dunno if it got any better.
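From what I remember, the single-command bit in point 2 was PowerShell remoting over WinRM; a minimal sketch, assuming the target is domain-joined or already in your TrustedHosts (the server name is made up):

    Enable-PSRemoting -Force                     # run once on the server, elevated
    Invoke-Command -ComputerName web01 -ScriptBlock { Get-Service W3SVC }
    Enter-PSSession -ComputerName web01          # swaps your local prompt for a remote one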
The biggest pain for Windows remote automation is security around accessing servers remotely, I'll grant you that. I gave up. That's why I have VSTS agents on every box. I can easily write a script and tell each agent to pull the script down, do X locally, and insert the results if needed into a Mongo collection. But for the few times I do need to treat my servers like "pets" instead of "cattle", I do everything from the GUI.
There was a time when our net ops team did something and I couldn't Remote Desktop into a server to do something urgent, and of course I couldn't just SSH into it, so I had to write a quick PowerShell script and deploy it via VSTS to make the change. It was ugly.
On the other hand, the PITA-ness probably isn't caused so much by the OS itself as by its historical security track record and the mentality of ops and security teams that resulted from it.
We have three different AD domains - one on-prem (well, at a colo center) and two in separate AWS environments. Getting them to be friendly with each other isn't possible. Since the local VSTS agents poll and only need outbound connections, I don't have to deal with firewall or domain issues. Also, I can run a script in parallel across as many agents as I want to. You can have as many concurrent agents running on a VSTS account as you have MSDN licenses.
Besides, I already have sane deployment groups and tags defined by server environment and function. I might as well leverage them.
You're completely right. UAC was too aggressive (they admitted that was their mistake) and a few other bits needed tuning but it was actually quite an improvement over XP. 7 was what it should have been.
All those glitches, though, came from MS's far too aggressive and unrealistic plans for Vista. A couple of years before it launched, 2003 I think, I was heavily involved in the Mozilla world and had a high-ish profile in the community (I ran MozillaNews.org and was a long-time triager). Robert Scoble tried to hire me to be a bridge between MS and Mozilla, a tech evangelist for features of Longhorn (as Vista was known then) that could help Mozilla, or really features that Mozilla could showcase and be a tech ad for MS. I set aside my suspicions and gave it a try, learning about the technical side of Vista. I learned a lot, and wound up not taking the gig. I told him I didn't think these things had any real benefit for a cross-platform application like Mozilla, and that I had real doubts they'd have much impact on the market even if they were delivered, which itself I strongly doubted.
The three tentpoles MS wanted Mozilla to use were:
1. Avalon/WPF
2. Palladium
3. WinFS
1. I told Scoble that I saw no benefit in Avalon yet, as in 2003/4 Mozilla wasn't really about to dedicate lots of time and attention to coding for some new graphics API that wouldn't launch for years. He said it would be out much sooner. I said I had my doubts, given its rather early stage of development.
2. "Imagine users knowing their online banking and purchases are 100% secure thanks to the hardware and their OS!" I said I thought the idea was rubbish, a nonstarter, and I hoped it failed.
3. WinFS. My arguments were simple, "apps like this don't care about the FS. Plus, it'll never launch. I have zero faith this feature will be out before 2010. Filesystems are hard, and MS has a long history of cutting features to get products out the door. This is a prime target to be cut."
He argued it was solid and amazing, etc., as a good tech evangelist should, but in the end I said no to the whole deal. I couldn't in good conscience try to push tech that I didn't believe in and didn't even think would ever ship. They were hell-bent on shoving all this and more into Longhorn, rather than doing a smaller release in 2004 and finishing the other features later. And thus we got Vista: lots of great tech, rushed out the door, and poorly configured.
> I see two reasons why it has such a bad reputation.
> 1. It exposed how terrible device manufacturers are at writing drivers. nVidia alone (which only makes relatively niche hardware) was responsible for the majority of Vista BSODs.
One of the things Fathi (OP author) writes is
> [. . .] ecosystem partners hated [Vista] because they felt they didn’t have enough time to update and certify their drivers and applications as Vista was rushed out the door to compete with a resurgent Apple.
This goes some way to mitigating the characterization of device manufacturers as terrible at writing drivers. When considered in the context of "a resurgent Apple", it also provides a counterpoint in the specific example of nVidia as a niche hardware manufacturer.
Vista was released in 2006; at that time, Apple was shipping the quad-core Xeon Mac Pro with macOS Leopard (later Snow Leopard), which came with an NVIDIA GeForce 7300 GT video card. [0]
I used this particular computer all the way through Mac OS 10.7 (Lion) and if memory serves, I had a handful of kernel panics over the course of 6 years. From all I could tell, nVidia's video card device drivers on Mac OS never interfered with daily operation (up 24/7 as it was also an authoritative DNS server for my personal domains).
So, device manufacturers may be bad at writing drivers, but those drivers also depend on stable and reliable APIs in the target OS, and such details have to be communicated between the two teams. Device drivers are an interface between host operating systems and embedded hardware systems. As such, the reliability of any device driver depends on the sharing of information between the OS and driver teams just as much as, if not more than, on the competence of the driver engineers.
I went with Vista in 2008 (having 4 GiB RAM) and never had a reason to complain; in my case, everything from drivers to games just worked. I liked the Aero and UAC. After a few years I permanently switched to Linux, but kept using Vista for compiling some projects for Windows under MSYS2 (until MSYS2 stopped supporting it).
Microsoft has always (as far as I can see) alternated between focus on core tech and focus on user experience. Vista was a heavy tech push and so the user experience and polish suffered a bit. 7 was entirely focused on user experience and so it was awesome to use, but it wouldn't have been possible if not for Vista.
I also remember it was too chatty (very much like Windows 10 now). You wouldn't spend a minute before the OS would ask you to authorise an outgoing connection or something else.
You can do this on Windows 7 and Windows 10 too. Actually, I think it's the default, because it's one of the first things I disable whenever I get a new machine (I'm not a fan)!
I'm on mobile now, but from memory you right click the start button and go into taskbar settings.
Sorry, I meant a taskbar similar to e.g. WinXP's - where running apps are on one side and quick-launch icons on the other side of the taskbar. It is possible to show the quick launch on Win7 as well, but the icon/button sizes are different and somewhat ugly. As for "grouping" (into one button) - yes, that's the first thing I turn off as well.
As I recall, the only way to group taskbar icons in 7 requires you to also display window titles in the taskbar, which is endlessly annoying; the way to work around this is to enable grouped icons/window titles and then edit your registry to impose a max pixel width on taskbar buttons so that the titles are unseen.
You forgot to mention how Vista's CPU/GPU/RAM requirements skyrocketed compared to XP (whose requirements had likewise skyrocketed compared to Win98).
I'm inclined to believe this was on purpose, and not just in a "we just want to make the OS prettier for users" way. I think there used to be a theory that Microsoft did this to help PC manufacturers and chip makers sell more hardware, too.
Microsoft's mistake was that the resource requirements were too high compared to XP, so like 90% of the PCs running XP were useless when running Vista.
Win 7 is basically Vista Service Pack 1. Several minor things, like Vista's slow-as-hell copy routine, got reverted back to almost XP-level speed in Win7. Unfortunately the Advanced Search dialog of Vista was removed in Win7. Most Vista problems were third-party device drivers (blue screens), it being the first (still new) mainstream 64-bit OS with the related issues around 32-bit and the lack of 16-bit support, and the vastly increased memory usage due to a wrong vision (consuming all memory while idle is okay).
To this day Win7 is arguably the best OS (supported till 2020), followed by the aging XP.
Windows 10 is still terrible in that regard. Like, laughably bad.
1. I dare you to try it on a computer with a 5400 RPM hard drive. A fresh installation will spend the majority of its time sitting around at 100% disk usage as telemetry, Superfetch, Defender and more all eat up the entire ~2 MB/s of hard drive bandwidth for more than 30 minutes after boot. And then halfway through your day, it'll decide to start recompiling .NET or some other package with zero user notification, and your computer will come to a halt. But hey, you're resourceful, right? Just disable those services! Nope, too bad. Every version of Windows (including the Creators Updates) has made it harder and harder for a user to disable the features that break. Services get moved to TRUSTEDINSTALLER, an account you can't override. Those services get restarted without asking you, some after as little as a couple of hours. And Windows Defender will restart itself; the Creators Update before last KNEW people were disabling it, so they moved the link in Metro/Settings somewhere more obscure.
2. I just spent Friday for a client trying to "fix" Windows for Microsoft, and failed. He bought a "Windows 10" laptop with a 30 GB SSD. Too bad: Windows alone took 97% of the entire drive. I removed literally every application (including the 1 GB Avast) except for Chrome and Windows 10 itself. Every time I freed up space, removed the hibernation file, cleared any and all disk-cleanup stuff... Windows would then fill it with patches.
It also had a 2.6 GB "C:\recovery" folder. I checked online and they said "Feel free to delete it, it's from an old OS." I tried deleting it: no permissions--even as an admin. I went in, changed the owner from the glorious TRUSTEDINSTALLER, and made myself the owner of all the files. I deleted some of them, but one file refused to delete. The file? 2.6 GB. It said the file was open in "Windows Provisioning". I checked Windows 10 backups, restore points, file history, all that jazz. Zero.
I check online. Maybe I'm insane. What are the Windows 10 requirements for hard drive space? Oh yeah, 15 GB. So there are, "Lies, damned lies, and Windows hardware requirements."
Meanwhile, Windows 10 keeps spamming that "You need to free up space to continue downloading windows updates!!!"
Really? REALLY? Thanks for the update.
I download Process Explorer. They say to use the handle search to find which process has the file open. I do it. ZERO RESULTS.
I download a tool that lets you delete a file on reboot before a program will acquire the file lock. It queues it up. It runs. It fails. Still NOTHING in services.msc that has Provisioning in the name.
Okay, change of gears. ALL they want is to freakin' install Office 365 on their craptop. They've got a 32 GB SD card.
By now, after clearing at least 4 GB, C: is down to 100 MB free.
I download the 5 MB Office auto-installer. It fails with a pop UNDER error that you don't notice at first under the loading screen. Okay, instead of giving you a description, it gives you an obscure error code. Clicking it at least gives you the KB for "out of disk space." Lovely.
I load up the Microsoft website, I find an alternative downloads link. I find the offline installer.
But back to the task at hand! I load up the same link on this slug of a Windows laptop and go, "Fine, I'll download it to the SD."
But wait, sorry! Thanks to the ultra-progressive, consumer-friendly Microsoft, they're too forward-thinking to let you have a plain download link. No, you get a Javascript button. It downloads right onto the full drive and fails. Okay, control-click? Nope, Javascript. Okay, load up Chrome's settings, change the default save location, and point it to the SD card.
I download it for ~40 minutes. Why? Because it's 4.6 FREAKING GIG for the offline Office suite. What basically boils down to an e-mail client, and word processor, is larger than an entire Linux distro with apps. (<-Yeah yeah, there's more apps, but I'm pissed at this point so I'm taking comedic liberty here.)
So I wait, and it finally downloads to the SD card, and at 99%, it stops and goes "download failed." There must be some Chrome bug with temporary space or something.
Well! I'm not defeated yet--this is my job and I'm paid for results. I've got a USB flash drive and my Linux laptop (read: running an OS that actually works and can be configured and fixed by the end user).
I go to the same website as before with my Linux netbook. But wait, the page... it's... different?
Everything is the same, except that wonderful offline installer link? Removed from the page. That's right: go to the same Microsoft download links with Windows and then with Linux, and they will intentionally hide the ISO links and only give you the auto-installer link, to ensure you're only going to run it on a Windows system. So customer friendly! (They do the same thing with Windows 10 ISOs, try it out.)
At that point, the client's laptop owner had to drive back 3+ hours to his office location so he had to take his laptop back.
I spent at least half a work day... trying to (fight Microsoft and) free some space... on a machine that 100% meets the Windows system requirements.
Thanks, Microsoft. I wonder why I do all my game and app dev on a Linux box these days. It's almost like I like feeling like I own the machine I paid for. Can you imagine having to go through all of this anti-consumer, anti-solution nonsense when doing hardware upgrades? What if you couldn't open the case on your machine without getting a "power user" license key from HP first? After all, they're just trying to protect you, and they know how to run their hardware better than you do. The more you look at that analogy, the more insane it becomes how much we let Microsoft get away with bricking our own machines. The answer to getting a working machine should never be "throw it out and buy a new one" when simply changing a config setting (if you were allowed to modify those registry values--sorry!) would suffice.
"I see you're trying to turn your SSD onto ACPI mode. Have you purchased an Enterprise SSD license yet?"
Who buys a laptop with only 30 GB of storage? I didn’t even know that was possible these days. You should honestly tell your client to return it and buy something else.
For reference, absolute minimum requirements are 16 GB for 32 bit and 20 GB for 64 bit [1]. So in theory your client’s laptop should work, but it’d probably be a poor experience. (Likely also a bad experience with modern Linux on 30 GB.) Given that your client’s Windows 10 laptop has an “old OS” on it, I think there’s some info missing in this story. A fresh laptop shouldn’t have an old OS install on it. (Or maybe this is OEM recovery gunk?)
I just checked my laptop and the Windows folder is 18.7 GB. Did your client's laptop have a Windows.old folder taking up a bunch of space? Large updates to Windows will create these. You can whack this if you need. [2] (It should also get deleted automatically after 10 days.)
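If it is a Windows.old situation, Disk Cleanup can usually reclaim it without fighting permissions by hand (menu wording from memory):

    cleanmgr /d C:    # run elevated, click "Clean up system files",
                      # then tick "Previous Windows installation(s)"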
> (Likely also a bad experience with modern Linux on 30 GB.)
Literally just typed "raspbian minimum card size" in Google and Google dug up this as the top result:
"/Pi Hardware /SD Cards.
The minimum size SD card you can use for Rasbian is 2GB, but it is recommended to get a 4GB SD card or above. Card Speed. A Class 4 card, which is the minimum recommended has an average read/write speed of 4 MB/sec."
The default packages include things like webkit and libre office, so it looks to be a fully functional Linux install on a popular piece of hardware.
Now, 4GB still seems dangerously small. But if all a client wanted was office plus web, I bet someone like OP could make a workable system within that size limit without Raspbian filling the emptied space with updates.
I have a 30 GB OS partition on my ubuntu box. That works nicely. Obviously you won’t be doing big data analyses, but everything runs fine, and with lots of apps installed.
I don't understand the thought process behind cheaping out as much as possible on a terrible PC, then paying for many hours of work from a tech to try to get a pathetic machine to be usable. The correct course of action is to return the faulty machine and buy a better one, rather than throwing away the money on a tech who can really only do so much with such inferior hardware.
It also boggles my mind how, still to this day, it's so hard to get a lower cost desktop or laptop that ships with an SSD, despite the fact that SSDs offer up such a performance improvement that many people consider them mandatory. The average consumer will have a much better experience with a computer that ships with a 128 GB SSD than a 1 TB HDD, yet every manufacturer is offering plenty of the latter (at 5400 rpm no less) and none of the former at sane price points. The two components even have similar costs now. In this era of streaming everything, the average person really isn't using much hard drive space. I know that my non-technical family members certainly aren't.
I just got my mom a $450 refurbished 2012 Dell workstation for common desktop use (mostly email and word processing). She loves it. It's night-and-day faster than the machine it replaced. And the single biggest performance improvement in it comes from, you guessed it, the SSD. A $450 five-year-old used workstation is trouncing any modern desktop in the sub-$1,000 range in practical performance. I would've gotten her a new one, but couldn't find anything in the price range that has an SSD, and the kinds of computers that do ship with SSDs also tend to have unnecessarily upgraded (and costly) processors and graphics cards, which are only useful for gaming.
(Oh, and the used workstation has a Core i7 in it too, so it's not exactly a slouch along any dimension except for 3D graphics performance.)
I don't think that people understand what they are buying. There is an expectation that Walmart wouldn't sell something that cannot work at all, but they do.
Don't buy 5-year-old hardware second-hand; it's a poor investment, and I speak from experience. Hardware has a limited lifespan and then it just dies. The hard drive, the motherboard or the screen fails without notice and you're screwed.
You're right that people don't understand what they're buying. A $200 new, modern Windows laptop is a market segment that cannot exist -- it's like a $5K new automobile in the US. Except there are standards in the automotive market in the US, so no one is allowed to sell the kind of trash a $5K car would have to be. You can buy such a thing in, e.g., India, but it's exactly as bad as you'd think it would be, with terrible emissions and crash performance.
As for hardware endurance, I don't think you're giving quality hardware enough credit. I've owned a lot of computing hardware in my lifetime, and the only failures I've ever experienced have been fans going bad (which is easy to fix) and spinning hard drives crapping out. Oh, and I dropped a laptop really badly one time and broke it that way, but that's not really the hardware's fault. Solid state components last quite a long time.
It's funny you say that, because I was already writing that there are cars in India selling for much less than $5k before I finished reading your first sentence.
Entry-level cars in Europe are in the $5k to $10k range; not sure which end they're closer to. They are certified for regulations and safety.
I've certainly had my share of hardware, and I've seen everything die sooner or later. My ordering would be: rotating hard drive, then gaming GPU, then display, then motherboard.
Never seen any computer reach 10 years without any replacement. You're significantly past half life when buying 5 years old.
The cheapest car in Europe appears to be the Dacia Sandero, which works out to around USD 8,500. There are several problems with the base trim level that would render it unacceptable on the US market: no A/C, no radio, no automatic transmission, and a truly anemic engine that takes 13 seconds to go from 0-100 km/h. That engine might be acceptable on a city car in Europe, but most US drivers are going farther (and faster). But hey, at least it's certified for collisions and emissions; you can't say the same for the Indian cars we're referring to.
I've seen plenty of computers last >10 years. So, we'll see how this one goes. Even if one component does need replacing at some point, it'll likely still have been the best choice. Nothing else offers that kind of performance at a remotely comparable price point unless you're willing to build a PC from scratch.
The low-end Dacias are a demonstration of making affordable cars by dropping options like power windows, A/C and radios. They are very successful. I think you can pay a bit more to get all the options, which is still a good deal for a brand-new car.
Yes, Europeans generally speaking have smaller cars than Americans. Nearly all of them have manual transmissions.
> (Likely also a bad experience with modern Linux on 30gb.)
I just did an install of modern Linux (the latest CentOS 7, with the Gnome desktop), so I can check. The root partition is using 4.2G at the moment, plus a 2.0G swap partition and a 1.0G boot partition. So if this were a 30G disk, I'd have more than 20G left, even after installing a few applications.
There's plenty of serious Linux distros that still ship on a single CD. Arch Linux, for example, comes on a 522 MB ISO. That gets you a basic functional desktop environment, and anything else you might need can be installed from the Net.
There is no desktop on the 522 MB Arch Linux ISO, or if there is, I've never seen it. It boots into a root shell on tty1, and is only supposed to be used for installation. I would be very surprised to even find an X server in there.
EDIT: For a more realistic number, I just checked my Arch-Linux-based home server, which has a fairly small installation (including some multimedia and X11 stuff for PulseAudio, mpd and youtube-dl, though). Needs just over 2 GiB for the entire system and applications:
    df / -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdc2       110G  2,2G  103G   3% /
Of course, you should have some breathing room, but usually not more than 8 GiB on a server, and maybe 16 GiB on a desktop.
So consumer-oriented distros don't seem to be all that much lighter than Windows. With the caveat that Ubuntu probably comes preloaded with more apps, like LibreOffice.
I just installed Ubuntu 16.04 a month ago (new laptop), and with /home/ on its own partition, after installing everything I use regularly, root is only using 9.3G
A decent amount of that space ends up being on-disk swap for the RAM, for what it's worth. And note that, on that same page, Xubuntu and Lubuntu are offered up as alternatives for less performant computers. They only require 5 GB of space. Windows doesn't have a light version like that.
When the entire laptop is $300, the difference between those two is 10% of the consumer facing cost. And somebody buying a $300 computer clearly doesn't think (or doesn't know) they need 1 TB of storage.
I fought this battle with a 64GB SSD, and gave up and cloned the drive onto a hybrid SSD/7200RPM drive (using EaseUS). I can't imagine what you went through to get it to fit on 30GB!
Also, I'd never heard of WIMBoot. Being a Microsoft employee doesn't make me a Windows expert. I don't work on Windows and don't own any systems that WIMBoot targets.
Looking at WIMBoot, it doesn't seem relevant for the case discussed here, either, since this client clearly didn't have the small space usage WIMBoot enables.
> The reason Windows 8.1 devices using WIMBOOT are not yet able to upgrade to Windows 10 is because many of the WIMBOOT devices have very limited system storage. That presents a challenge when we need to have the Windows 8.1 OS, the downloaded install image, and the Windows 10 OS available during the upgrade process. We do this because we need to be able to restore the machine back to Windows 8.1 if anything unexpected happens during the upgrade, such as power loss. In sum, WIMBOOT devices present a capacity challenge to the upgrade process and we are evaluating a couple of options for a safe and reliable upgrade path for those devices.
There's a complex workaround of "delete everything, and use two USB sticks" which isn't great for the target user.
The new file compression stuff is much much better than WIMBoot.
And I purchased one last year because it was outrageously inexpensive and had an i5-7500U, a nice 1080p touchscreen, a miserable spinning hard drive, and a miserable amount of socketed RAM.
I consider replaceable hard drives and socketed RAM to be a feature, one I promptly made use of, and now I have a machine which is quite competitive with machines costing >$1k more than what the machine + drive + RAM cost me.
Windows always strikes me as a mishmash of ideas with very little overarching vision. A somewhat superfluous example of this is the branding of basic system tools; you have things like Task Manager, Device Manager, Disk Management etc. On Vista and 7 you can access them by mousing about in the start menu a lot or by trying to search (or using run, or right clicking on Computer and selecting manage). But typing ‘Dev’ into search doesn’t return anything related to Device Manager, and only when you type the full name does search return some long-winded response like ‘Manage Devices and Drivers on this system’ in the middle of a wall of result text. It shows no connected vision between the ability of the search feature to surface succinct information and the design of the OS to differentiate its core tools. Attempting to be helpful by returning a list of tasks in long-winded descriptions betrays the lack of coherence of each tool and its purpose.
MacOS largely succeeds in this regard. I can access Disk Utility simply by typing ‘du’ (or even just ‘di’ or just ‘d’, since it learns your preference) into Spotlight and hitting enter. I can launch any app this way without the need to clutter my desktop, dock or taskbar with shortcuts. Disk Utility and the various other utilities generally get large, well-branded icons and names that set them apart. Many of the Windows tools just use pretty bland and similar-looking tiny icons, making the visual aspect of search useless.
Worse still is that you can have several Device Manager programs installed. I’ve a lab system that requires Win7. The frame grabber is accessed through its own Device Manager and my flow control system has something similarly named.
Reading the article, it is easy to see why this behaviour is the case. There is clearly very little time for polish and for a singular vision on how everything should look and feel (taste?). It’s not surprising to see new Mac users delight in the attention to detail of MacOS. Windows is often obtuse and gets in the way.
Windows 10, for all its sins, makes some big improvements. Right clicking the start button or pressing Super-X instantly returns a list of all these system tools.
A major problem with Windows 10 is that its search is basically retarded. It’s easy to get into a situation where the query ‘printer’ will find the Devices & Printers applet, but the query ‘printers’ will not.
Or Search just doesn't return local results at all. Or the Start Menu often won't open to even get to Search :-(
Even on fresh, completely updated systems I've found the Start menu so flaky that I'm glad I didn't sell them. The quality of Microsoft's work is called into question when the feature you're meant to "Start" with doesn't work.
They nevertheless shipped their core product with a completely broken core feature. I am not sold on this software as a service/the OS is in permanent beta approach.
In general I have an even bigger problem with the philosophy of what an OS should do. When I switch on my computer, it's not to admire the OS, its beautiful UI, etc. It is because I want to do something, code something, watch something, read something. I live in the applications not in the OS. The OS should be effortless, surprise-less, it should hide behind and do its basic tasks well. Windows 10 is the exact opposite. It's an OS that is in your face, which interrupts you with full screen banners all the time, which reboots you all the time, which shows you ads, shows you modal pop ups to tell you to do things differently, etc. This is not what we need an OS to do.
On my work laptop, typing "Mouse" into the search box only lists the mouse settings about 50% of the time. Usually closing the search box and then repeating the search returns the correct result.
As to why I have to open mouse settings so often, Windows inexplicably forgets my scroll speed setting every time I plug my mouse in.
This is also possible with spotlight. The most clear example for me was the day I typed "polyc<enter>" for Polycom instead of my usual "poly<enter>" and ended up on the wikipedia page for polycystic ovarian syndrome instead of a video call application.
An example from Windows Server 2016: When I search for "SQL" or for "Studio", it'll find SQL Server Management Studio. But when I search for "SQL Studio", it won't.
Why? Asking as a non-native English speaker, please excuse me if it's obvious. Is "retarded" somehow worse than "fucked up" or simply "shit", which seem to be accepted in casual conversations around here?
Yes. It is considered by the mentally disabled community to be similar to the n word. It is a word that was used to marginalize an entire population, now used to describe something bad.
Consider, as a non native speaker, if English speakers used your language of origin to mean "terrible". I don't know your language, but imagine it was Latin, and kids would taunt each other saying, "ha ha you're so latin" on the playground, and "windows search is simply latin". This is a constructed example, of course, but the issue is not that the word is offensive (fucked, or shit would have been fine), but that the word derives its power from being linked to marginalizing and othering people.
Thanks, I didn't know that. I'm Polish, and, of course, we have such words; not even swear-words per se, just words consistently used over time in a derogatory manner, while also being associated with a group of people. We also have a very insulting word for mentally disabled people, equivalent to the English word "retard", but it functions only as a noun (so no direct equivalent of the "retarded" word) and applies only to people. It's interesting because both noun and adjective versions of a word for homosexual people exist, just like in English, and the same is true for many other insults, especially ethnicity and sex-based ones.
Anyway, I agree that such words should be avoided and not used. (Although it's hard to avoid it completely when talking about the word, without using it.)
Please don't take pfooti's comment too seriously, though; most people would not react to the way the original poster used the word, and would disagree with the purported disrespect that pfooti says the usage implies.
The world is a big place and everyone does not (thankfully) live in the liberal US coastal areas :-)
Speaking objectively, I linked to an appeal from a broad coalition of disability rights activists to stop using a particular term in the vernacular that has a demonstrable link to the othering of that group. This is not just me, nor is it "liberal coastal areas".
A little more concretely: it's not up to the person saying anything to decide whether or not the thing they say conveys disrespect or harm. It is up to the people harmed by those words to decide. You can, of course, choose to ignore the harm done and continue on as before.
That said, I'm sure the OP's intent was innocent enough - this is the kind of thoughtless use of the word that I (and many other people) work to problematize. I had intended to do so publicly but without much fanfare (one quick sentence), to draw attention to this fact in hopes that we could all work to shift the language.
Speaking subjectively, I find it troubling that basic decency toward other humans who have a relatively simple ask (use "terrible" or "garbagefire" or "shitty" instead of "r_") is viewed as a "liberal US coastal" practice that should be avoided out of sheer tribalism. Not sure there's much else to say other than "troubling".
This whole thread is OT at this point, but I'm simply not willing to let this go. The lexicon has plenty of synonyms for "is terrible and unintelligently designed and frustrating to use"; we can afford to drop one.
Windows 10 has its own UI problems. It is very chatty, possibly even more than Vista; there is an almost permanent flow of notifications in the bottom right corner of the screen. When searching for something in the start menu, the result is completely unpredictable: searching for "control panel", "mmc" or "adapters" usually yields nothing, but sometimes it works. The two permanently evolving control panels mean that even when you learn where a setting is, it can move to the other panel within 6 months. There are many rough edges. For example, you launch a remote desktop connection and get a grey window asking for credentials; the login box is not selected by default and no keyboard shortcut lets you select it, so you must use the mouse. I really question whether Microsoft engineers even use their own OS.
My biggest problem with Windows 10 right now is that with creators update, they basically decided to break scrolling on touchpads. Instead of smooth scrolling, you now move your fingers a bunch, then the whole screen shifts suddenly.
This completely breaks scrolling. My eyes can't properly follow that and the touch component is off: you move your fingers, nothing happens, then suddenly a lot. It's not continuous, which makes no sense for finger interaction.
Apparently it's because the app has to support the specific touch input events for scrolling and before creators update, they would simulate it. Now they don't. I really don't give a fuck for their reason if it means even scrolling in Explorer doesn't work anymore.
Your point about searching is really interesting, because I feel like it's a good example of what happens when you care about user experience. I think that Spotlight, at the time, was really heralded as a good search engine. Why? It found what you wanted.
On Windows Vista (and later versions) search, sure, it would find the things you were looking for. But no-one gave a thought to how it should be displayed. Does the search work? Check. Feature done.
Apple designed Spotlight in a way that it would actually return the results you expect when you type a search query.
I bet it took a lot of effort at Apple, too, because making a good search is really difficult. If you just depend on the tools given to you by your database, you'll probably end up with something that works, but not necessarily something that does what you want.
I think a lot of it is helped by the way MacOS works. You’ve got an Applications directory in which each app is listed as a monolithic package (or in a sub directory for built-in things like the terminal).
Windows has various locations where apps are typically installed or dumped (Program Files, Program Files (x86), ProgramData, AppData...), where the particular executable is buried somewhere amongst everything else. Shortcuts to these and various other applications are scattered all over the place.
If you build such a simple approach to dropping applications into a single directory with big obvious icons, not only does it look clean to a user but it prevents naming duplication and allows your search tool to match against this directory first, so you can easily use Spotlight as a launcher.
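To make that concrete, here is a minimal sketch in Python of how little machinery such a launcher needs once every app lives in one flat directory: prefix-match the bundle names, then rank by how often the user has picked each one. The flat /Applications layout and the usage counts are assumptions for illustration, not how Spotlight is actually implemented.

    import os

    APPLICATIONS_DIR = "/Applications"  # assumed flat directory of .app bundles

    def launch_candidates(query, usage_counts=None):
        """Return app bundles matching a prefix query, most-used first."""
        usage_counts = usage_counts or {}
        apps = [name for name in os.listdir(APPLICATIONS_DIR) if name.endswith(".app")]
        q = query.lower()
        # Match the query against the bundle name, or against any word inside it.
        matches = [
            app for app in apps
            if app.lower().startswith(q)
            or any(word.startswith(q) for word in app.lower().replace(".app", "").split())
        ]
        # One directory to scan plus a usage counter is enough for a bare 'd'
        # to resolve to Disk Utility for someone who launches it often.
        return sorted(matches, key=lambda app: usage_counts.get(app, 0), reverse=True)

    if __name__ == "__main__":
        print(launch_candidates("di", usage_counts={"Disk Utility.app": 12}))

That is roughly the behaviour described above: after a few launches, a single letter resolves to the app you actually meant.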
In my opinion, the whole Application Bundle/AppDir/PortableApp/whatever concept is just so clearly the right way to do application management that I have difficulty understanding how anyone could think otherwise. It's simple, intuitive, conflict free, fits perfectly into the file management metaphor, etc.
What's interesting is that various types of users find themselves with different problems using the App Bundle way of distributing applications.
For Windows users switching to Mac OS X, it's not really something they're used to doing. On Windows, downloading an installer, clicking "Open" in the browser, and just following a wizard leads to fewer issues with putting the application in the correct place (Program Files); on the Mac you instead get either a Zip file (so the bundle ends up in Downloads), an actual installer (typically a .pkg), or a disk image which is often never unmounted. (I once saw a family member with 7 Flash Update disk images mounted.)
As such, a lot of OS X users find themselves with the Skype app on the Desktop (inside a disk image), their Mail on the dock, and some other app in Documents or Downloads. It requires that you organise it yourself.
Another way of saying it is that it lets you organize it yourself.
There will always be people who don't understand, or care to understand, how the system works. You can put in all the hand-holdy nonsense you want to cater to them and they'll still exist, but you'll be adding tedious and unnecessary hurdles in the workflow of people who do know what's going on.
> Windows has various locations where apps are typically installed or dumped (Program files, Program files x86 , program data, app data...)
To be fair, neither ProgramData (which is effectively the All Users AppData) nor AppData is endorsed as an application installation location by Microsoft, for a variety of reasons. Chrome started quite a trend, though, of installing to AppData for convenience when the user doesn't have admin privileges.
I'm not sure what not being endorsed means in this context, but I distinctly remember Microsoft going to considerable work back in the days of Windows 2000 to support per-user installations.
The big change was in the registry: HKEY_CLASSES_ROOT was no longer a direct shortcut to HKEY_LOCAL_MACHINE\SOFTWARE\Classes, but instead was a dynamically merged view of that key and HKEY_CURRENT_USER\Software\Classes.
This allowed you to register COM objects for the current user without requiring administrative privileges.
I and many other people were doing per-user installs long before Chrome existed.
This is especially useful for apps that auto-update. If you install in Program Files [(x86)] then you will pop a UAC prompt every time the app updates. If you install per-user it can be a silent update.
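As a rough illustration of what that merged view enables, here is a sketch using Python's winreg module that registers a hypothetical document type entirely under HKEY_CURRENT_USER. Nothing in it needs elevation, which is exactly why per-user installs can update silently without a UAC prompt. The ProgID, extension and executable path are invented for the example.

    # Windows-only sketch: per-user file-type registration under HKCU\Software\Classes.
    # Because HKEY_CLASSES_ROOT is a merged view of HKLM\Software\Classes and
    # HKCU\Software\Classes, these entries take effect for the current user
    # without admin rights. The ProgID, extension and path below are hypothetical.
    import winreg

    PROG_ID = r"Software\Classes\ExampleApp.Document"
    COMMAND = r'"C:\Users\me\AppData\Local\ExampleApp\ExampleApp.exe" "%1"'

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, PROG_ID) as key:
        winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "ExampleApp Document")

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, PROG_ID + r"\shell\open\command") as key:
        winreg.SetValueEx(key, "", 0, winreg.REG_SZ, COMMAND)

    # Associate a hypothetical extension with the ProgID, again per-user only.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\Classes\.exdoc") as key:
        winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "ExampleApp.Document")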
Spotlight will find applications in any location on your disk.
Also, Safari.app (e.g.) is not an executable -- it's essentially just a folder with special display characteristics in the Finder. Many app bundles contain several executables.
But Spotlight doesn’t search inside those bundles; that’s the main difference. Even MacOS’s ability to list applications in the Open With context menu is so much faster compared to Windows, which grinds for several seconds to populate the list.
Heck, even using the New... context menu in Explorer is a joke.
MacOS not only has a bunch of preferred locations for applications (besides /Applications and ~/Applications, also /System/Library/CoreServices, for example), but also allows the user to place apps anywhere. As others mentioned, users will place them on the desktop, in the Downloads folder, etc.
On top of that, getting info about the application involves parsing .plist files, followed by parsing localization.
The first is for system-wide apps, the second for apps that the user doesn’t share with others, the third for some parts of the OS itself. It would be insane to throw all that in one place.
Similarly, yes, getting app info involves parsing files in a known location with a known format, how could that situation be better?
I’m not sure how often people fail to put applications where they go. Especially nowadays, where most applications ship as .dmg files that open to a giant arrow telling you to drag the app to Applications, and many nag you if they’re run while not there, and App Store solves this entirely by just putting them there.
The decision to replace some but only about half of the system configuration dialogs with "Metro" ones (which are worse on desktop) is baffling. It makes it completely inconsistent.
I'd recommend Elementary OS for that. Great small team behind it doing excellent open source work, and the design team is also far superior to those of the competition in the Linux space.
> you have things like Task Manager, Device Manager, Disk Management etc. On Vista and 7 you can access them by mousing about in the start menu a lot or by trying to search (or using run, or right clicking on Computer and selecting manage).
Windows+X is your friend. It's the power user menu with all of the things you mentioned on it.
There are a ton of useful Windows+key shortcuts. Try all the letters and the arrow keys too.
Any system that lives long enough will be a mishmash of ideas that doesn't fully integrate. OSs face this particularly heavily since they have to deal with 3rd-party code, a.k.a. backwards compatibility.
To be honest, these days I think if we want a proper OS and if we want the incentives of the vendor to align with ours, we need subscription OSes.
Right now we have:
* Android - basically ad supported, therefore misaligned incentives
* iOS/macOS - locked into Apple hardware
* Windows - old school licensing until 10, with the problems they mentioned for Vista; new licensing which seems to be more like Android adware
A cheap yearly subscription would realign the incentives: I am the client, I want frequent feature updates, I want fast security updates, I want to be the client and not the product, I don't want the company to go under or force development teams into stupid feature development or release trains cause they need to make sure there's a revenue stream.
Basically I pay them for providing timely security and regular feature updates, preferably in an incremental fashion.
Everyone who's not happy with this can either lock themselves into hardware (Apple) or go DIY, basically, with Linux/BSD, or just accept that being tracked isn't that bad (Android).
I'd be really sad if the anti-tracking forces within the Windows division lose. There has to be a better revenue model than tracking users and providing ads, somewhere.
There's nothing DIY about a ready made Linux distribution such as the Fedora Desktop.
As long as you don't fiddle around too much under the hood, and don't insert random commands in places because someone on the Internet told you to, things will Just Work like nothing else. There's something to be said to having the developers actually caring about making a useful computer without allowing the marketing department to take charge.
Don't try to run it on unsupported hardware (anything needing out-of-tree "third party drivers"); anything bog-standard Intel-based will do, and you'll be absolutely fine.
> There's nothing DIY about a ready made Linux distribution such as the Fedora Desktop.
Until you want to change something. It is trivial to target a single use case and make it easy with Linux (most of the time) or almost any other OS, but that isn't what we want from a desktop OS; we want flexibility so we can conform it to our workflow, and Linux Desktop is absolutely terrible at that because it's become a giant Rube Goldberg machine of schizophrenic and often half-thought out components written with the idea that they're in a multiuser server environment but pretending they were designed for personal desktops.
If someone built a new, joy to work with and develop for, userland on top of the Linux kernel that wasn't oriented towards the idea that the system is a multi-user network server in 1983, I would subscribe the hell out of it (though in my opinion it had better be open source regardless). As a side project, I'm actually playing with concepts in this vein, but I'm really not that good a developer and don't have a lot of time to devote to it, it's just that no one else is doing it.
It's not clear to me what the userland has to do with "the idea that the system is a multi-user network server in 1983." Pretty much any PC GUI out there today would be at least recognizable to a Windows 3 user OR a Unix workstation user--one of which was based on a multi-user network server but the other was not.
There are alternative models out there. There are Chromebooks where you basically live in the browser (which many people do anyway for most of their computing) and there's the app store model we see on phones and tablets.
I guess there's a potential for new paradigms that blend the flexibility of current GUIs with simplicity but I'm not sure a multiuser server foundation has much to do with it.
It's mostly about the way permissions are handled. User-oriented permissions are about protecting the system from the user. This is a useful notion when people connect to a multiuser server via a terminal, or for shared network resources, but is nigh useless and generally just gets in the way for personal computing devices.
The mobile OSs orient the permissions system around the applications, protecting the user's data and resources.
That's fair enough although one of the reasons that works is that applications are sandboxed and limited in terms of what they are allowed to do. Certainly one could imagine a tablet/phone OS that was more oriented to desktop use cases (mouse support, etc.). But now we're back to Windows 8 and possibly Surface Pros and that just hasn't been a particularly popular model.
But that's exactly what we want: applications that are only allowed to access what the user has permitted them to access. And I believe it can be done better than popping up a dialog all the time like UAC or just once at first launch like Android/iOS.
You don't need to tabletize your interface to implement this concept. Bubblewrap, FireJail, and some similar software already exists that implements this in Linux (poorly in my opinion, but it works).
The self-contained application bundles idea that you espouse can be found as Daniel J. Bernstein's "slashpackage" mechanism, from the turn of the century. It gives side-by-side installation of multiple versions of packages and simple packaged files installation, upgrade, and removal procedures.
The ability for users to isolate their own applications from one another is one of the concepts (removing the superuser) that underpins GNU Hurd, where one can sandbox one's own programs using translators and servers that substitute for the root ones.
I am aware that people have tried it, ROX Filer knew about AppDirs, I think WindowMaker/GNUStep supported Application Bundles, there are currently (in keeping with Linux Desktop tradition) at least 3 implementations of the concept (FlatPak, Snap, AppImage), all of which are overengineered (but AppImage is the closest to correct by a wide margin).
But there is no system I can install today that makes it the primary way of doing things. That's what I'm trying to build, a new OS (or perhaps "operating environment" would be a more precise term), not a "distribution", on top of the Linux kernel, that takes the good ideas that already exists and makes them the primary way of doing things.
It's likely that even if I did finish the project people would ignore it. Maybe only I think these are good ideas, maybe I'm missing something, but I don't care. When I look at the future of the desktop operating system I see Windows 10 continuing down its path of becoming increasingly uselessly stupid, and the Linux Desktop, and those two futures make me want to become Amish. I'm building this because if I don't build something like this I'll probably just leave computing altogether.
> it's become a giant Rube Goldberg machine of schizophrenic and often half-thought out components written with the idea that they're in a multiuser server environment but pretending they were designed for personal desktops
Wayland is an attempt to replace X11. PulseAudio is a (now decent) attempt to clean up the Linux audio space.
There are attempts being made. Your comment is too vague to be falsifiable, though. Would you mind being more precise as to what you'd like to see changed?
Package management and spreading files all over the hierarchy is a terribly inflexible way to manage applications. Applications should just be folders that contain any resources the application requires that aren't in the base system (this requires a defined and stable base system). This requires no special management software whatsoever and fits intuitively into the file management metaphor that already exists. See: Application Bundles, AppDirs, AppImage, and just about any desktop OS before Windows 95.
Permissions should not be oriented around protecting the system from the user. That's a concept that makes sense in multiuser servers and organization networks, not personal devices. Permissions should be oriented around protecting the user's data from misbehaving applications.
Base system libraries should be stable on at least decade-long timelines, barring a severe security issue that cannot be mitigated in any other way. I generally agree with Linus's opinion regarding breaking compatibility. Unfortunately, a lot of the developers of the userland stuff aren't Linus.
In general, the OS should not presume it knows what my workflow is. This is the problem with the difference between "easy", which requires understanding what I want, and "simple" which is making systems understandable enough that I can conform them to what I want. It is my opinion that Linux Desktop has put so much effort into being easy that it has destroyed any simplicity it once had (see common Linux Desktop evangelist argument: "normal users don't need that").
The commandline environment and toolset are ass. PowerShell implemented the same concept significantly better.
Explaining my ideas for how a GUI should work would take a long time, and they're not really relevant because they could exist as just another compositor in the current ecosystem.
>Applications should just be folders that contain any resources the application requires that aren't in the base system (this requires a defined and stable base system)
No. Well, yes, to a certain extent, but they don't need all the abstraction of containers, they can just be regular old directories. DOS applications were managed this way, MacOS applications were managed this way, RiscOS, NextStep, and OSX (now confusingly renamed to MacOS again) worked this way.
None of them use any containerization hackery to make it work, they just don't hardcode paths at compile time in their applications like UNIX developers do.
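A toy sketch of that "no hardcoded paths" discipline, with an invented layout: the program resolves every resource relative to its own directory, so the whole folder can sit in /Applications, in ~/Downloads, or on a USB stick and keep working.

    #!/usr/bin/env python3
    # Toy AppDir-style program: everything it needs lives next to it, so the
    # directory can be moved anywhere and nothing breaks. The layout is made up:
    #
    #   ExampleApp/
    #     run.py          <- this file
    #     resources/
    #       greeting.txt
    import os
    import sys

    APP_DIR = os.path.dirname(os.path.abspath(__file__))  # wherever the bundle lives right now

    def resource_path(*parts):
        """Resolve a resource relative to the app directory, not to a
        compile-time constant like /usr/share/exampleapp."""
        return os.path.join(APP_DIR, "resources", *parts)

    def main():
        try:
            with open(resource_path("greeting.txt")) as f:
                print(f.read().strip())
        except FileNotFoundError:
            sys.exit("resources/greeting.txt missing; is the bundle intact?")

    if __name__ == "__main__":
        main()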
No, they do not hardcode paths; they just have to use some other mechanism (for example, the registry) to find each other's location. MacOS doesn't really work this way either; half the apps available use an installer and have their files somewhere in /Library.
The time when applications were universes by themselves ended in 1990 with OLE.
Flatpak in linux uses bubblewrap (namespaces, cgroups, bindmounts, seccomp, basically the same tech behind containers), and constructs /usr based on the frameworks the application requires in its manifest.
On the disk, it boils down to a single copy of each framework required. Additionally, the flatpak repository is content addressable, the constructed directory trees are hardlinks to the repository, so multiple versions or even separate frameworks will share identical files.
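The general idea is easy to sketch. This is not Flatpak's actual on-disk format, just the content-addressing-plus-hardlinks concept it builds on, with hypothetical paths:

    # Minimal content-addressable store with hardlinked checkouts (a sketch of
    # the concept, not Flatpak's real layout). STORE is a hypothetical path.
    import hashlib
    import os
    import shutil

    STORE = "/tmp/cas-store"

    def add_to_store(path):
        """Copy a file into the store under its SHA-256; identical content is stored once."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        os.makedirs(STORE, exist_ok=True)
        stored = os.path.join(STORE, digest)
        if not os.path.exists(stored):
            shutil.copyfile(path, stored)
        return stored

    def checkout(manifest, target_dir):
        """Materialize a tree as hardlinks into the store; no file data is copied.
        `manifest` maps a relative path to a stored object path."""
        for rel_path, stored in manifest.items():
            dest = os.path.join(target_dir, rel_path)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            os.link(stored, dest)  # same inode as the store, so shared files cost nothing extra

So two runtime versions that differ in a handful of files share everything else at the inode level, which is the "multiple versions or even separate frameworks will share identical files" property.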
No, snaps are a completely ridiculous overengineering of the concept. They use a package manager and repositories and a package format and, from what I can tell, some overlayfs; they don't fit into the normal file management paradigm... Frankly, they're exactly what you'd expect from the Linux community.
AppImage did it much better and even that's overengineered, but its goal is to work on all Linux distributions (without a package manager or a new infrastructure or anything) and really doesn't do much more than the bare minimum to make that happen. Naturally, the Linux Desktop community largely ignores it for making too much sense.
ROX Desktop attempted to do app bundles a long time ago on linux as well.
Anyway you are missing one of the core problems of linux distros- they are not split between "base" or "core" OS and extra packages; they are a single monolithic thing! That is why debs and rpms install right into /usr/bin along with the core OS. There is no difference.
I know, I was there when ROX Desktop and ROX Filer hit the scene. Sadly, yet predictably, a concept as simple and flexible as AppDirs was completely ignored by the rest of the desktop linux community. I guess it didn't make them feel leet enough or something.
I may be wrong, but WindowMaker, being based on GNUStep, which was derived from NextSTEP, might have also supported application bundles (as in the actual NextSTEP Application Bundle specification).
“These days, the problem isn't how to innovate; it's how to get society to adopt the good ideas that already exist.” - Douglas Engelbart
> Anyway you are missing one of the core problems of linux distros- they are not split between "base" or "core" OS and extra packages
I didn't miss it, I mention right in there that having a defined base system is a requirement. Something I didn't mention is that I will not call the end product a "Linux Distribution", because it is not intended to play nice with existing Linux Desktop paradigms. It'll be like Android, a completely different OS underpinned by the linux kernel.
I agree completely. Linux as it exists today seems to be dialing up the complexity to 11. I am expecting (hoping, almost) that the whole thing implodes sooner rather than later so we can start getting simple things again.
The driver ABI is not a part of the compatibility promise, it is an internal API.
However, the folks who break internal APIs also fix the users of these APIs -- if they are available to them. If they are not, tough luck, if you want to keep the driver to yourself, you get to keep all the broken pieces too.
Yeah, I also don't care for that, but I can't argue that that policy hasn't achieved its goals. The Linux kernel has more and better driver support than any other open source OS and also most closed source ones.
No they don't? Alpine uses the BusyBox daemons (initd, evdev etc.) and Gentoo uses OpenRC. Both of these are much easier to tweak than systemd. Same for the network manager replacements and so on.
Different distros (that aren't just modified Debian or Red Hat) can differ by quite a lot.
> Both of these are much easier to tweak than systemd.
Disagree. The documentation for systemd is very detailed, and while it is in fact different from sysvinit (which I believe is 90% of the criticism), it is remarkably consistent with itself (compared to a feature-equivalent set of 100 separate tools).
The big achievement of systemd is that it steamrolls over a ton of idiosyncrasies that distributions built over the decades. These days, I can just SSH into a random distribution (Arch, Debian, Ubuntu, Redhat, Suse) and everything works as I expect (querying the status of services, finding logs, issuing administrative commands, etc.).
The OS itself is not that DIY, even though I don't like being constrained in hardware terms. Let's say that I get over that. Then the DIY part begins, at least for me.
I like AAA games, almost none of the games I play are available on Linux (primarily Blizzard games). A lot of desktop apps I like or need are not on Linux, etc.
In any case, let's just say I used to have a Linux site and forum back in 2007 or so; I've been through the Linux evangelism cycle a few times.
For many people, such as myself, Linux desktop is acceptable but not the best option. And for a reasonable price I'd rather have the best option for me, personally.
> I like AAA games, almost none of the games I play are available on Linux (primarily Blizzard games). A lot of desktop apps I like or need are not on Linux, etc.
I don't have any first hand knowledge about the AAA gaming industry but from what I understand, the studios offload a lot of the work to the GPU manufacturer (nVidia most likely) who then "fixes" the bugs in what they call a graphics driver.
I think this is the main problem. The so called graphics drivers do too much work and games don't pull their own weight.
That's certainly a problem, but that's not why games aren't being ported to Linux, in my opinion. It's that A) the user base isn't large enough to bother in many cases (these same companies often release terrible Windows ports for more or less the same reason) and B) it's a pain in the ass for a multitude of reasons, not the least of which is that the Linux community thinks it has a great development environment but it doesn't (at least for Desktop stuff).
> the Linux community thinks it has a great development environment but it doesn't
This is an incredibly important point. The community is full of true believers and college students (and many are both) who just can't fathom that almost all of their dev tools and workflows suck compared to the professional-grade tooling available outside the FOSS echo chamber. I have nothing against FOSS in principle, but in practical use I have become convinced that there is no reason to develop on Linux unless you develop specifically FOR Linux. And even then it's such a PITA to do anything complex that it's often just not worth it to target Linux if you don't absolutely have to. Microsoft and Apple have realized how important it is to make it easy for developers to create good software.
>that's not why games aren't being ported to Linux //
What's weird to me is that a lot of games will run on Linux, e.g. via WINE (PlayOnLinux), say, but the companies then appear to be hostile to users doing that. I used to play a load of the Tom Clancy games on WINE [single player], but then Origin messed with something, maybe their cheat system, making it no longer work on Linux. So now I don't buy games from Origin any more and stick with Steam.
It's not just "not ported to", game companies seem actively hostile to [paying!] Linux users, I don't know if MS pay them to be like that or what their motivation is?
> I am the client, I want frequent feature updates, I want fast security updates
Oh God no. I am the client, I DO NOT want frequent feature updates, or any update at all, except important (life-threatening) basic security patches. I want things to NOT CHANGE ever, esp. the UI and the way to do basic tasks.
To be fair, most subscription-model software that I know of changes its UI less frequently than pay-per-release software, since with every big release the marketing department steps in to fully redesign everything.
What you want is a Rolling Release Distro, like Arch or Gentoo, then you can add whatever GUI you want on top of it, I like XFCE as a sibling comment mentioned.
Another good example of a rolling release distro is OpenSUSE Tumbleweed. It has less of a DIY culture than Arch or Gentoo and the installer will install a graphical desktop by default so arguably it is closer to what the parent poster was asking about.
Regarding UI changes, minimalist window managers and desktop environments for *nix tend to work like that. They stick to their base paradigms and rarely, if ever, change once considered stable.
XFCE has been already been mentioned. But Openbox, dwm, awesome, etc... all could fit.
Note that it's expected, and assumed, that the user can switch to these more conservative UIs if they prefer, even if the base distro defaults to only one.
Then find yourself a Long Term Servicing Branch iso. The last Windows 10 Creators update trashed all the developer workstations in my company and made them unusable. The only knock on LTSB is the Linux-on-Windows stuff isn't there. Otherwise, it's Windows 10 without the bullshit and with added stability.
It’s full of ads for Edge, OneDrive, Office and some more of their own services, and then it’s automatically installing and wasting the space you paid for on Candy Crush and other games, even if you tune the Start Menu so you don’t see the tiles.
I understand that you don’t see it if you don’t look but you certainly are paying for it.
For my home PCs, I purchased the MS Action Pack subscription so that I could install Windows 10 Enterprise LTSB and not get ads and Cortana. With it and ClassicShell, it serves me nicely.
I prefer my FreeBSD box running Motif as my daily driver though.
I come across various Windows 10 machines and see Candy Crush adverts, Office 365 adverts, OneDrive adverts, Lock Screen adverts. I believe some have even documented adverts showing on File Explorer (!)
Well, the apps do. And Google has unfortunately forgotten to give you the tools to block those from doing that by not giving you ways to block internet access.
No point in leaving the goal posts where they are when they don't reflect the goal: Not having ads in your face all the time.
Hosts-file-based blocking requires Root, which Google has also practically cut off from end-users. Yes, the manufacturers can give you root, so clearly it's their fault, but Google doesn't implement user access control, making Root extremely insecure on Android.
I'm not "having ads in your face all the time" in apps shipped with android distributions. 3rd party apps, anything goes but that's true for all platforms.
Well, I just pointed out that Google actively encourages and enables 3rd party apps to ship ads and track users, and your only rebuttal is that theoretically you can pull off the same on other platforms.
So, if you really need more examples of what Google does, to convince you that it is the platform which causes Android's 3rd-party apps to have so much more ads and tracking than other platforms' 3rd-party apps:
- Android has an "AdvertisingID", uniquely identifying you to advertisers or anyone else who has a use for it, with essentially no effort required.
- Google's ad display component is a native UI element on Android, so when developing an Android app, you pretty much just need to drag and drop it onto your UI to be 90% of the way to displaying ads. Really, you don't even need to plan on making revenue with the app, it's so easy to just include ads.
- The Play Store doesn't even attempt to quality control apps in terms of advertisements. No other major software distribution platform is as laissez-faire with that.
I was talking about the combo of tracking + ads, especially as they lead to misaligned incentives for me and Google. Google ultimately wants to sell more ads. It doesn't care about me as a software user. A company that sells (subscription) software does.
That's cause Android pushes Chrome and Google Search and Gmail and... you get it, the whole data collection and ad serving mechanism provided by Google.
I bundle telemetry/data collection with ads for my post since in the end they're just different ends of the same continuum.
Don't move the goalposts from "ads" to bundled apps. Furthermore, the "data collection" would be the same whether I accessed those services from my GNU/Linux desktop or Android device so I don't understand where you're going with that claim.
Subscription incentives only work in the favor of users with viable competition and low switching cost. Neither condition exists right now for businesses or the overwhelming majority of home users.
"I am the client, I want frequent feature updates, I want fast security updates, I want to be the client and not the product, I don't want the company to go under or force development teams into stupid feature development or release trains cause they need to make sure there's a revenue stream."
You just described at least a dozen Linux and BSD distributions. They fit every single one of your points and are free.
How much do you think such an OS could be sold for?
For me, Windows wastes about 2% of my time per year (mostly due to Windows update). From a business perspective, that might justify a $10k expense to recover that lost productivity. From a personal perspective, I would pay $500 to recover that time for my own use. No idea if there's enough like-minded people to make a new OS profitable, though.
The sticking point is that to recoup time lost to Windows the new OS would have to completely replace Windows, which means being able to run key Windows-only business software (SolidWorks, etc.). Maybe it makes more sense to just support Wine instead of reinventing the OS.
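For what it's worth, the back-of-the-envelope behind those figures looks roughly like this; the 2,000-hour working year and both hourly rates are my assumptions, only the 2% comes from the estimate above.

    # Rough arithmetic behind the ~$10k / ~$500 figures. The working year and
    # the hourly rates are assumptions for illustration; only the 2% is given.
    WORK_HOURS_PER_YEAR = 2000           # assumed full-time year
    FRACTION_WASTED = 0.02               # "about 2% of my time per year"

    hours_lost = WORK_HOURS_PER_YEAR * FRACTION_WASTED   # 40 hours

    LOADED_BUSINESS_RATE = 250           # assumed $/hour fully loaded cost of an employee
    PERSONAL_RATE = 12.50                # assumed $/hour value of personal time

    print(hours_lost * LOADED_BUSINESS_RATE)  # 10000.0 -> the ~$10k business figure
    print(hours_lost * PERSONAL_RATE)         # 500.0   -> the ~$500 personal figure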
I am optimistic that it's possible to do better because:
(a) I'm only counting time as "wasted" if I'm forced to fix something the OS broke with no action on my part, or if I'm forced to wait for it to do something that I did not ask for. E.g., watching Windows apply updates on boot is wasted. Adjusting keybindings to suit my preferences is not wasted. This isn't a high bar to clear.
(b) Design choices, not physical or logical restrictions on what's possible, cause most of this waste. E.g., the choice to have Windows update do substantial work during reboot and to reset user settings (such as HKLM and HKCU keys) is an unforced error. It's demonstrably possible to do better: Linux Mint updates apply in the background, don't lengthen boot time, and leave existing settings unchanged.
(c) If the new OS is worse than the current choices, I just don't use it. The time a new OS wastes is in all cases ≤ 2%. Building a new thing that fails hurts a lot, of course, but being a customer is easy.
I don't think Windows or Android are adware. They are "carrier" products: free, but MS/Google can decide what services to bundle, and this is where they get the money (= app store, having Office preinstalled). So it's much less obtrusive than adware, which lives off constantly showing you ads, like websites do.
Ok, maybe not direct adware, but more like light spyware and "carriers", as you said. They bundle services, including data collection, so that they can turn you into the product. Microsoft less so, but I hope they stop this trend they started.
> that we would no longer support their legacy apps with deep hooks in the Windows kernel — the same ones that hackers were using to attack consumer systems. Our “friends”, the antivirus vendors, turned around and sued us, claiming we were blocking their livelihood and abusing our monopoly power!
"Back then?" We haven't quite come full circle with the Meltdown registry compatibility key being set, because of anti-virus vendors hacking their way around, but it's close. All it needs is one of them to throw a sueball and the circle will have been completed!
Man, even a guy on the team felt like WinFS wasn't a great idea... I still think it's potentially revolutionary and that the POSIX filesystem is just old and clunky
I think most people would expect
/home/geokon/program1/src/
and
/src/program1/home/geokon
to have pretty much the same content
A tag-based file system that made the two equivalent would eliminate all sorts of annoyances where you can't decide how to structure your file hierarchy (should you have bin/program1 bin/program2 src/program1 src/program2 or program1/src program1/bin program2/src program2/bin? both layouts have their advantages).
Something like a "path/path/bin/path/path/bin" wouldn't work.. but it's hard to find a case where you really need it. Currently I do from time to time come across 'include' folders inside of 'include' folders, but not for any good logical reason.. more b/c an in-source build vomited something up
You could also have tag-based permissions so sharing files between users becomes a lot more sane (file1 has tag 'bob' and tag 'bill' so that they both can access it, instead of having it in .. /home/bob/ and then symlinked to /home/bill/ or whatever other workaround you'd do in POSIX)
Yeah, the filesystem wouldn't work so great if you have thousands of tiny files... but I'd probably argue that you shouldn't have thousands of tiny files in the first place. If your program needs tens of thousands of tiny files then it should probably use something other than the filesystem
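Here is a toy sketch of the lookup semantics being described, where a path is treated as an unordered set of tags so the two orderings above name the same files; the index and file names are invented for illustration.

    # Toy tag-based "filesystem" index: a path is just an unordered set of tags,
    # so /home/geokon/program1/src and /src/program1/home/geokon are equivalent.
    index = {
        "main.c":    {"home", "geokon", "program1", "src"},
        "util.c":    {"home", "geokon", "program1", "src"},
        "program1":  {"home", "geokon", "program1", "bin"},
        "notes.txt": {"home", "geokon", "program2"},
    }

    def resolve(path):
        """Return every file whose tag set contains all the tags in the path."""
        tags = {part for part in path.split("/") if part}
        return sorted(name for name, file_tags in index.items() if tags <= file_tags)

    assert resolve("/home/geokon/program1/src") == resolve("/src/program1/home/geokon")
    print(resolve("/home/geokon/program1/src"))   # ['main.c', 'util.c']

    # Tag-based permissions could work the same way: a file tagged {'bob', 'bill'}
    # is visible to both users without any symlink workaround.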
Purely tag based filesystems are not better than hierarchical filesystems because hierarchy is meaningful. The 'pritam' in `/home/pritam` has a different meaning than the one in `~/Documents/taxes/pritam`.
> Yeah, the filesystem wouldn't work so great if you have thousands of tiny files... but I'd probably argue that you shouldn't have thousands of tiny files in the first place. If your program needs tens of thousands of tiny files then it should probably use something other than the filesystem
That's dismissing a core feature of many production file systems today, some of which are used precisely because they support this feature.
That's a valid counterexample! :) The first implies ownership and the other just designates folders with information about 'pritam'. `~/Documents/taxes/pritam` could be a folder on your accountant's computer and then it'd have a different meaning than on your machine.
You're not wrong, but I think that that situation is rather rare and rather harmless. In the vast majority of cases the context doesn't change the meaning of the name much. The converse situation, where you have many folders with the same meaning, is a lot more common. You probably have dozens of 'src', 'include', 'build' folders on your machine; and you could have dozens of folders called "June", "August", "September" - all of which carry the exact same meaning in their respective contexts. But we've been forced to recreate these redundant folders b/c the system forces us to use a tree structure for concepts that don't necessarily conform to it.
>In the vast majority of cases the context doesn't change the meaning of the name much.
It absolutely does, and in a vast number of cases I have duplicate folder names.
I probably have a hundred different folders on my computer called "src". Knowing that it's ~/dev/projectName/src/ helps me know which src folder it is.
For a non-software example, I always organize my client folders somewhat like: ~/clients/A/contracts, or ~/clients/A/designs, ~/clients/A/notes
I don’t know that he thought it was a bad idea technically, just that it wasn’t necessary for the user-facing features they needed and would have been much too disruptive for the remaining benefits, such as the ones you describe.
Tag-based permission system: AKA SELinux. Tried, tested, determined to be superfluous bullshit whose only protection is the jobs it preserves at the NSA, which are required to maintain any sort of SELinux ruleset.
SELinux does great at preventing privilege escalation exploits... Tried to root my phone, got a uid 0 prompt, but SELinux kept me from doing anything useful with my "root".
> I worked at Microsoft for about 7 years total, from 1994 to 1998, and from 2002 to 2006.
> The most frustrating year of those seven was the year I spent working on Windows Vista, which was called Longhorn at the time. I spent a full year working on a feature which should've been designed, implemented and tested in a week.
Man, people love to rewrite history. I ran Vista. It was ok. But the constant asking for permission dialogs, while a security improvement, annoyed most users a lot. They were introduced suddenly, without fully explaining why they were necessary. Additionally, requiring device manufacturers to write new drivers meant lots of people’s printers went obsolete without warning. That was a PR nightmare.
Also the fancy Aero UI looked cool but was taxing for the average user’s computer. So a lot of people when they installed Vista didn’t get the experience they had been shown in commercials and screenshots.
On top of that literally everything in the OS had moved around from where people had been expecting it to be as far as settings and configuration stuff. Keep in mind people had a pretty consistent experience from 95 to 98 to XP. Vista flipped the script and users no longer knew how to do things. A mistake Microsoft repeated and was similarly punished for in Windows 8.
The whole world was not delusional to say they didn’t like Vista. When all your users say something sucks then you should listen.
Windows 7 was mostly PR but the big thing is that it had device support on rollout.
> But the constant asking for permission dialogs, while a security improvement
It wasn't a security improvement. For one, it mostly just got in the way and trained users to ignore it, negating the purpose of its existence, but it also never protected the user from anything. Case in point: ransomware needs no elevated permissions.
> Case in point: ransomware needs no elevated permissions.
That's the result of how we drew up the security boundaries on desktop operating systems. You don't need admin permissions to rewrite your most important documents, so ransomware running under your user account doesn't either. See also: https://xkcd.com/1200/
Linux and macOS have this problem too. The solution isn't to remove the security boundary between the user and admin accounts and to go back to the Windows 9x-era of having zero OS-level security. The solution is to create more security boundaries, so that not every program running on your behalf has the permission to rewrite all your documents. UAC was an unpopular, but not totally ineffective step in the right direction.
I disagree, because it's the user's system and they should have permission to change anything they want in the system, that security boundary serves only to annoy the user for no gain whatsoever.
What our real goal is, is to prevent misbehaving applications from compromising the user's work. Obvious conclusion: permissions boundaries should be on the applications. On this we agree.
My two cents: I've used Linux for like ten years, and never touched Windows during that time. Recently, I wanted to use some Windows-only software, so I took a NUC (Intel small-form-factor computer) I own and installed Windows 10 on it. While Ubuntu, Xubuntu, Alpine and so on work out of the box with any NUC, with Windows there was no wireless after install. I had to download an Intel driver installer on another computer, copy the .exe to a USB key, and install it with a GUI setup program (click yes, yes, yes, agree, yes, yes).
I had the clear impression that right now, Ubuntu is the friendly way while MS-Windows is the diy/hacker way.
My desktop PC is Skylake with all fairly high-end, popular-brand parts that people use in gaming rigs specifically made for running Windows, and I have to keep a folder around with about 20-25 things (drivers, chipset drivers, NIC drivers, yadda yadda) for every time I reinstall.
Reinstalled a few months ago and forgot to install Samsung Magician for my NVME drives and so they were capped at something like half their speed.
I don't expect Microsoft to keep up with every vendor's drivers, but it'd be really nice if they let them tie into Windows Update/Device Manager to tell you "Hey, swozey, I see hardwareID XYXYASUhF12 is running a generic driver, do you want to update to your vendor's proprietary driver?"
Maybe they do have that and hardware manufacturers just don't utilize it.
After dealing with that 3-5 times over the last year (migrating to NVMEs, realizing I couldn't encrypt mine, migrating back, yadda yadda) I finally made a Veeam restore ISO which hopefully next time I do this will handle all of it. But I feel like it shouldn't take that to get my system up and running from a fresh install.
I've had the same experience installing Ubuntu on a computer with a broadcom wireless chip. It all just comes down to chance unless you buy a hardware + OS combo, which generally actually means mac or Windows. If you install your own OS you have to expect there may be driver incompatibilities, that's just how it is.
Ubuntu comes with drivers for Broadcom chips. You just have to select "yes" when the installer asks if you want to use proprietary drivers. Otherwise you have to open the Software & Updates app, go to the Additional Drivers section, and manually check it.
>[anti-virus vendors] just wanted their old solutions to keep working even if that meant reducing the security of our mutual customer — the very thing they were supposed to be improving.
Wow. Is there a backstory here in regards to the development of Windows Defender? I think that was introduced with Vista?
I don't know, but honestly, most antivirus companies are such clowns that it wouldn't surprise me if Microsoft created their own antivirus just to force everyone to be up to par. Which they still seem to be failing at...
I always found it interesting that Defender has been touted as well-behaved when installed alongside other anti-virus. I've preferred to disable it (IME, Defender's detection rate is poor, and the trade-off in possible stability problems isn't worth it), but I haven't actually come across Defender misbehaving, other than false positives. This suggests it's probably using the operating system correctly, unlike its peers.
The sentence after that ("Open source doesn’t have that problem.") is absolutely wrong in my experience. Team communication and dynamics aren't somehow magically different with open source, and pretty much every piece of open source software you use reflects the organization that produced it.
I helped a non-tech friend migrate from a crumbling XP laptop to a newly bought Vista laptop.
Took us the best part of an afternoon, because I was really careful and she was paranoid about losing her stuff.
Two things did not work for her:
1) No way to migrate groups (address lists, not sure of the name) from Outlook Express to Vista's standard mail app. I also tried Thunderbird and another free email client, and in the end had to cobble together a small utility to pull the addresses out of sent-mail headers (something along the lines of the sketch below).
2) Word had to be updated (paid) - I don’t remember exactly what the problem was but I think it just wouldn’t run.
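For the curious, a minimal sketch of that kind of throwaway utility (not the one I actually cobbled together back then; it assumes the sent messages were exported as individual .eml files into a sent/ folder):

    # Collect every recipient address seen in the headers of exported sent mail.
    import glob
    from email import policy
    from email.parser import BytesParser
    from email.utils import getaddresses

    addresses = set()
    for path in glob.glob("sent/*.eml"):
        with open(path, "rb") as f:
            msg = BytesParser(policy=policy.default).parse(f)
        # The To/Cc/Bcc headers of mail you sent are a decent proxy for an address book.
        fields = msg.get_all("To", []) + msg.get_all("Cc", []) + msg.get_all("Bcc", [])
        for _name, addr in getaddresses(fields):
            if addr:
                addresses.add(addr.lower())

    for addr in sorted(addresses):
        print(addr)

This only recovers a flat list of addresses, not the groups themselves.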
On top of that the UI felt alien and non-intuitive to us both.
So while technically it might have been a jewel, in terms of user experience (including the fact that third-party stuff worked, while MS apps were the only ones giving us problems) it really looked like great advertising. For OS X...
It's always been a massive pain to get email + metadata into or out of Microsoft products. It's so frustrating because it can't be that hard to implement half-decent import/export, but my heart drops every time I know I have to do it for a customer...
OOI, what parts of the UI did you find non-intuitive? I found the new Start Menu odd, but it wasn't long before I was more used to pressing Windows, then typing, to launch a program, rather than find it manually. I can't believe I used to spend so much time manually organising my Start Menu! (And I'm usually the type that doesn't like this sort of change)
It has been ages, and I probably didn't touch Vista again after that (I used a Mac at home after years of Windows and Linux, while at work it was mostly WinNT, Linux, Solaris...), so I cannot recall what specifically felt odd, apart from the UAC prompts that, at least for the first hour or so, were cropping up with alarming frequency. The person I was helping lives far away from my hometown, so the idea of leaving her with mystery pop-ups wasn't really encouraging.
In any case, the UI was only a minor concern: I was confident that anyone can learn and adjust after a few hours of working with it.
But I could not help noticing that I still remembered clearly what happened when I started my first MacBook (three years before the Vista episode): the UI felt very different from what I was accustomed to... but it also felt much better. Intuitive, empowering. When something unexpected happened it was, more often than not, a nice surprise, like "wow, the global search can find strings even in a just-closed Word for Mac document!" (compare and contrast with stuff like Magellan on Windows), or how the progress bar on file copies worked (compare and contrast with https://xkcd.com/612/ ).
I believe the main issue was that Vista needed decent hardware for the time. Many people were buying lower-end PCs, which seemed slower than their older XP-based machines. However, it seemed to me that Vista SP1 fixed a lot of the issues with the original release and worked very well.
Pretty much. If you had a desktop that was worth a damn at the time you sounded like a crazy person when you said that nothing was really wrong with Vista.
I was that person in high school who was running a test version of Vista and also the RTM on a pretty powerful Alienware laptop.
> Windows XP had shown that we were victims of our own success. A system that was designed for usability fell far short in terms of security when confronted with the realities of the internet age.
Interesting point I'd never considered. Talk about a requirements shift.
Vista was odd for our community college. None of our educational or testing software vendors would offer support until Windows 7 was introduced. It was very much like the whole Windows 8 release, where we waited until Windows 10 came out. Heck, we still have one vendor who only supports IE and not Edge. The people who did get Vista weren't really impressed, but at least it wasn't the "get this crap off my computer" reaction that Windows 8 received.
It was the first time we ever held off while a new operating system was out. Windows NT to 2000 to XP was just so automatic.
> The chaotic nature of development often resulted in teams playing schedule chicken, convincing themselves and others that their code was in better shape than other projects, that they could “polish” the few remaining pieces of work just in time, so they would be allowed to checkin their component in a half-finished state.
I'd love to get clarification on what "checkin" [sic] means in this context.
Were they waiting for permission for their equivalent of a feature branch to be merged, or were they actually waiting for their code to be added to the repository in the first place?
No. The problem described is that the unstable code is being pulled upstream. Merging into the root branch is equivalent to merging into Linus’s repo. Nothing prevents a Windows developer from checking unstable code into a feature branch without merging up, just as nothing prevents a Linux developer from checking unstable code into a private clone without Linus pulling it in. The difference here is simply whether the feature branch has a central host. Decentralization doesn’t fix this issue. Proper code health practices do.
File search has been broken since Vista. It constantly fails to produce results even when you know the search string exists in an indexed directory tree. Thanks MS.
I worked in security around the Vista timeframe, and remember the “think of the children” logic for the UAC prompts.
It’s insecure to not warn people constantly. Full stop. (Never mind that desensitizing people or any of a dozen predictable side effects are ignored). It’s oversimplified black-and-white thinking endemic of many management mistakes.
My guess as an outsider: the general consensus among techies (since, I think we can say, disproven) was that the tablet would replace the laptop. I would guess MS also bought into the hype and wanted to provide some continuity to its customers in the hope of retaining them in this new tablet world, and was willing to throw desktop use under the bus to make it happen. It doesn't matter if your UI sucks with a physical keyboard and mouse if users have rejected those devices.
Instead, tablets peaked, and then bigger smartphones ate into their share rather than tablets taking over the remaining use cases for traditional form factors.
As an insider: Windows developers hated the full-screen start menu and were vocal about it. We were the first ones to experience it and generally thought, "They're not going to ship this... right?" At some point, we were told that the design was set and no amount of complaining would change it.
Windows 8 was designed before the iPad was announced, but the 3-year ship cycle this article discusses came back to bite Microsoft. Their vision was a hybrid system, but then Apple came along and established the iPad as the standard tablet. Microsoft lost its ability to define the market, and the market liked Apple's definition better.
At the very least, there was some consensus that we were moving toward some sort of laptop/tablet convergence, and Microsoft probably saw an opportunity to one-up Apple, which ran (and still runs) tablets and laptops on two completely different operating systems.
Instead it seems that, in the main, people have (once again) decided they don't really care to switch between a PC modality and a tablet modality [ADDED: on a single device], however appealing the concept seems on paper. And, as you say, the smartphones that everyone (to a first approximation) owns grew to roughly their maximum size, and those are good-enough tablets for a lot of people.
The fun thing is that there was never such a consensus outside of, seemingly, Redmond and wherever Unity was developed. Everybody else didn't even expect the general-purpose tablet to be a thing, and was frustrated by OSes tuned for hardware that wasn't even available.
Gnome 3 is where I lost faith in the Linux community's ability to cohesively come up with a solution to user experience. Every time I log onto a system with Gnome 3 it gives me a feeling of dread and anger. Gnome 2 had become one of the more popular and faster interfaces, and they threw it all away. MATE seems to hold the torch, but I want progress, not to be frozen in time. Gnome 3 was just a complete FU to all the existing Gnome users. How arrogant.
KDE seems to keep it together. KDE Plasma 5 feels like an incremental upgrade from what came before, getting rid of some of the weirdness and converging on something sane. Progress, at least. I can respect what they're doing.
Microsoft, for all its bullshit, seems to give users what they want every few years. Licensing is a head-spinner and privacy is questionable, but otherwise they still make an operating system that feels good to use.
I just can't agree with this. I used Gnome 3 with excitement when it came out. I found that it was unpolished back then, and probably not ready for general use. I switched to XFCE for a while, then openbox, fluxbox, and a number of tiling window managers, then to MATE and finally back to Gnome.
I've been using Gnome now for about three years. I find the hot corner UI to be intuitive (and I find myself trying to use that and the meta-key expose function on other OSes). The favorites dock and the search are things that I use frequently, and the title bar menus make sense to me.
I was a Gnome 2 user for years as well, and yes it was polished and predictable. But it wasn't attractive. I think Gnome 3 was the next logical step for the project as far as contemporary UI patterns go, and it feels very "ergonomic" to me.
> "Gnome 2 had become one of the more popular, and fast interfaces and they threw it all away. Mate seems to hold the torch, but I want progress, not to be frozen in time."
I would suggest that Pantheon (the DE for Elementary OS) is more polished compared to Gnome 2. Cinnamon (the DE for Linux Mint) is another solid choice if you wanted incremental improvements over Gnome 2.
Way too ambitious goals for the available time? And no possibility of re-evaluating them, because the whole hardware side depended on some of the new stuff being ready.
For example, the new ideas for the UI were quite interesting, but it was obvious they had to ship it way before it was ready. They were interesting because they addressed one (IMHO) big issue: ordinary users have a hard time making full use of their screen real estate. Windows are hard to manage, and people end up switching between full-screen apps when it would often be more effective to use them side by side. A tiling window manager would help here.
They didn't solve anything, because there were no apps. Not then, not now. There was this new, annoying, confusing default that you would fall into from time to time, but no apps.
> Would we make different decisions today? Yup. Hindsight is 20/20. We didn’t know then what we know now.
From very far away, WinFS looked like a turkey even a dozen years ago. I don't know how a smart cookie like Gates went for it. Probably was standing too close.
Using uBlock with Firefox, you can right click anywhere on the page and open the "block element" thingy, which allows you to block anything on the page that you don't like :)
It's fun reading these retrospectives, especially having lived somewhat close to them. I was as much a full-time developer back in the NT-to-Vista days as I am today. Most of my development career centered around Windows technologies. I got my start, in the professional sense, writing apps in C++/(cough)VB and quickly moved on to C# -- which I still really love. It was both interesting and sad to watch Microsoft during those days. I'll never forget the SCO(/IBM/Microsoft) vs. Linux(/TheGPL/etc) days. Such a strange time to think back on: Microsoft hating on Open Source ... or trying to create a competitor in the MSPL/CodePlex mess. It was almost comical to see a company that thrived in a world of "general purpose tools to solve many computing problems" (a.k.a. Office)[0] live in its own NIH world.
It was a hard time as a developer to feel proud of being part of that world. I saw a company that I admired behaving idiotically and under poor management[1]. I loved developing in C# (especially after .NET 2.0 with generics), but I was generally unsatisfied with my operating system (Windows XP and later Windows 7 ... my company never went Vista). Because of the massive number of competing factions this author points out, Windows tended to get everything but the kitchen sink, and managed to do so in a sub-par manner. Oh, how I loved killing the file/URI associations to Internet Explorer and Windows Media Player on my own PC in favor of Firefox and VLC, only to have to make sure that all of the crap I wrote played well with both of those if they applied to the project[2]. The classic Bill Gates rant about Windows Movie Maker summed up that time pretty well[3].
The last few years under Nadella have been something special to me. Windows 10, despite its flaws/spyware bordering on malware, is enjoyable to use. I have bash (well, zsh, in my case) that functions far better than cygwin/msys ever did; PowerShell -- despite my general hate for its syntax and performance -- can basically handle any scripting task I need to throw at it and is far superior to cmd. Heck, in Windows 10, I can even have a True Color experience in conhost. Visual Studio Code, unlike Visual Studio, is an editor I actually go out of my way to use (as do all of my Linux-only coworkers). Microsoft is developing .NET out in the open and has embraced open-source in a way that a decade ago would have been met with suspicion related to "Embrace and Extend". And when I mention that I write code in C#, I don't have to listen to Java/other folks yelling at me about how evil Microsoft is.
And then there's my home-computing life. Despite my previous statements about Windows 10, some of my love for Ubuntu on Windows actually turned me away from Windows. I'd always run Linux at home. All of my servers are bare-metal Linux with one Windows host in a KVM virtual. I started loathing any time I had to do something that took me out of the tooling I use in the Linux world. Just try fighting with gpg-agent on Windows to get SSH key-auth working using the vanilla command-line SSH. It sucks and PuTTY/Pageant is far from a suitable alternative.
Jumping and moving things among hosts via SSH, shell and just generally interacting with Linux is far less painful than Windows. So when I got my new laptop this Christmas, I had no intention of ever running Windows 10 on it. I actually booted it and it hung on "set up Windows Hello". I'm not even sure why I bothered to boot it to the hard drive in the first place. This is the first time in a decade of running Linux that I'm actually running it full-time on the machine I spend the (second) most time on.
And I discovered, much to my surprise, Windows 10 flies in a KVM virtual with virtio drivers set up in the relevant places. Any time I'd tried to run any Linux variant in Hyper-V and actually use KDE, Gnome or any graphical interface, I gave up after swearing a bit. No clipboard support, and performance was unacceptably laggy. The other way around, I can barely tell I'm in a virtual (I toss the console off to Desktop 2 and I can flip between Windows and Linux) all without doing fancy things like GPU pass-through. And I really only have it there so I can compile a program with MSVC and test the Windows side of it. At this point I don't ever see going back (or switching to fruit). I love endless configurability (even if it comes at the cost of things not being close to "quite right" at the start) and having a mountain of good, free, choices at my disposal. And that work computer is getting reloaded next month when I'm done with the project I'm working on and have a free moment to do so.
[0] The bane of my existence was Excel and Access -- which employees at the company I worked for at the time used to solve incredibly complicated problems, incorrectly, and then panicked when a drive failed or the Access database became corrupted beyond recovery and we discovered how many hundreds of thousands of dollars were about to be lost to worker creativity.
[1] Ballmer less so than Gates. That's my opinion, which I would normally be happy to back up, but I'm leaving it alone, today.
[2] Don't use that particular CSS, or worse, mangle it in this manner for IE 5/6. And why do we have this massive extra video file? Oh yeah, that's the best format we can encode it in to guarantee playback on stock Windows without additional codecs installed -- because expecting a codec installation to work across all PCs always ends in tears.
[3] I'm not actually sure whether that's urban legend and don't care to dig into finding out -- even if it is, and was never spoken by The Bill, every statement in it describes the mess that Windows had become at the time pretty well.
>By far the biggest problem with Windows releases, in my humble opinion, was the duration of each release. On average, a release took about three years from inception to completion but only about six to nine months of that time was spent developing “new” code. The rest of the time was spent in integration, testing, alpha and beta periods — each lasting a few months.
Funny he considers testing a problem. So this is the new Microsoft: testing is not important.
He's not complaining about testing; he's complaining about the length of the iteration cycle. Taking 2 or 3 years to ship 6 months of effort is incredible inefficiency.
I've read those two sentences three times; they come nowhere close to suggesting that "testing is not important" or a problem. The problem is the duration of each release and the lack of time spent developing new code.