So he splashed some new colours onto things, added back in a new Start Menu that is less usable than both the full-screen Metro one and the old Windows 7 one, and then made the Metro interface even more convoluted? Meh.
As they say, it is very easy to criticise someone else for at least trying something and very hard to solve it yourself. I will fully admit that I have no better ideas for how to both keep the classic desktop but also integrate the new touch screen "metro" elements into it.
I really do have to say that I think Apple has the "best" solution right now. iOS for touch devices (iPad, iPhone, iPod) and OS X for desktop. Then they get to sit back and make an absolutely wonderful user experience for both scenarios without worrying about the potential crossover.
If I was in charge of Windows that is likely what I would do. Make Windows for Desktop and Windows for Touch, with starkly different UIs and designs, but with some emulation layer so people can run a touch app on the desktop (in like a sandbox window).
Frankly Windows Desktop has a LOT of crust on it. Go look at Control Panel. They have UIs unchanged since Windows 9x (Mouse, Keyboard, etc), then they have UIs added in XP (Firewall), then different ones added in Vista (Action Centre, etc), and now in 8 we have "Change PC Settings" which is another UI concept again.
If you've ever used OS X for any amount of time, the thing is just polished and consistent from top to bottom. I mean go compare OS X "Preferences" to Control Panel on Windows 8, and Apple manage to get that kind of consistent polish all over the place.
Microsoft cannot kill the Windows Desktop as much as they might wish, so for the love of god go through from top to bottom and modernise it. Control Panel = Gone. Folder Options = Gone. Device Manager = Gone.
"If I was in charge of Windows that is likely what I would do. Make Windows for Desktop and Windows for Touch, with starkly different UIs and designs, but with some emulation layer so people can run a touch app on the desktop (in like a sandbox window)."
I agree that this would be much better than current Windows 8; Microsoft isn't full of idiots, so I imagine there are people working for MS who are aware that the attempt to combine tablet and desktop interfaces in Windows 8 is producing a pretty shit product. The question is, what is it that is compelling Microsoft to pursue this approach, even though the immediate product it produces isn't very good? I assume the top people at Microsoft think there is commercial pressure to use their dominance on the desktop to strengthen their position on tablets, and they think the mediocrity of Windows 8 is a price worth paying to achieve that.
>>If I was in charge of Windows that is likely what I would do. Make Windows for Desktop and Windows for Touch, with starkly different UIs and designs, but with some emulation layer so people can run a touch app on the desktop (in like a sandbox window)
Emulation layers don't work; if your operating system needs to emulate the applications people need, no one is going to use it.
Microsoft already has a touch-optimized OS: Windows Phone. Just port it to tablets. Easy. Done. Of course then it has to compete with iOS and Android on its merits and not because it has Office and can interface with the Windows ecosystem and can kinda run desktop applications.
The entire reason why WP is lacking traction is because of iOS's and Android's ecosystems, not because of any special merits in their OS code. Indeed, that is also why Windows proper is still dominant on desktop. Throwing away compatibility with the ecosystem would be the height of folly. (Also, there are Office clients for all these platforms: they don't work as well as they do on Windows, but they do exist.)
> " 1) Microsoft wants to create a coherent store experience and ecosystem for Windows with the Windows Store.
[...] when Apple showed the world how awesome a central managed application store is, everyone had to do it. [...] it’s just an obvious thing that nobody got before. [...] It doesn’t make sense anymore to scout the web through bad website and installers that want to fill your computer with crap by default when you can have a one button purchase/install/update/manage for your apps. "
Actually, Linux, especially Debian and Ubuntu, has had that for years. Configuring a Windows machine with the software you need used to mean going to 10 different websites, downloading zip and exe files each with their own installer, and so on, and it would take forever - versus Debian or Ubuntu, where you can install most anything from apt, quickly and easily, just like an App Store... and that has been true since 1999.
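A minimal sketch of what that looks like in practice (the package names here are just examples):

    # refresh the package index, then install several applications in one step;
    # apt resolves and installs their dependencies automatically
    sudo apt-get update
    sudo apt-get install vlc gimp git

    # later, a single command upgrades everything installed this way
    sudo apt-get upgrade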
I don't know why you picked Linux. It's a particularly poor example. Linux, unlike a modern OS, lacks any kind of sane dependency management. Thus, it throws this responsibility on the repository maintainer to maintain an MxN matrix of packages and dependencies. God help you if you add another repo with a different set of dependencies which conflict with the existing repos. Dependency hell is a still-present problem on Linux. Anyway, this is nothing new; as users of Linux we all know about this, and the developers know it even better.
The Apple App Store distributes _self contained_ binaries+dependencies (excluding OS libs) which is one sane way of doing things.
Windows and Mac don't have dependency management at all. They just pack one big blob with all the dependencies. Mac tries to imitate the Linux packaging with homebrew, but it's far from it yet.
The problem of different repos shipping the same deps has been solved with 'priorities' since its inception.
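For example, a rough sketch of what expressing those priorities looks like in an apt preferences file (the third-party origin here is made up for illustration):

    Explanation: prefer packages from the main Debian archive
    Package: *
    Pin: release o=Debian
    Pin-Priority: 900

    Explanation: fall back to the hypothetical third-party repo only when no other source provides a package
    Package: *
    Pin: origin "repo.example.com"
    Pin-Priority: 100

Running 'apt-cache policy somepackage' afterwards shows which candidate version wins.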
>Windows and Mac don't have dependency management at all.
Not sure why you feel so compelled to talk about OSs I didn't mention. I was talking about the iOS App Store. In any case - apt-get, yum etc are third party tools that just copy, extract files from some server and run some pre-made configuration scripts. That's all they do. All the hard work of figuring out dependencies, avoiding cycles, etc is done OUTSIDE the OS by the repository maintainers. You can do that on ANY OS.
>The thing about different repos with the same deps is solved with 'priorities', since its inception.
Different repositories which the user adds can and do have dependency conflicts. One package from repo-1 can require a lib with version >= 1.0 while another package from repo-2 will only work with version 0.9, etc. I have often had to add other repos to get software unavailable in current repos. I have run into these issues quite often. And I am not even getting into how brittle apt-get in general is.
>Not sure why you feel so compelled to talk about OSs I didn't mention.
Because you said: "Linux, unlike a modern OS, lacks any kind of sane dependency management", which is of course nonsensical, as it is the only OS with dependency management. Also, in the case of RPM the dependencies are listed inside the package; YUM then just does the dependency resolution and fetches the required packages. I really don't understand what point you're trying to reach, since automatic dependency management/resolving is one of the most important reasons Linux is chosen for server environments and deployments.
Since I've been doing this for a living for more than 10 years, I really don't know how you have found so many problems and issues with it. Maybe you're using some kind of rolling-release distro?
And of course nope; you can't do that on any OS, because.... we would have done it! That's why homebrew for Mac feels like a poor man's apt/yum system.
> I really don't understand what point you're trying to reach, since automatic dependency management/resolving is one of the most important reasons Linux is chosen for server environments and deployments.
>Since I've been doing this for a living for more than 10 years, I really don't know how you have found so many problems and issues with it. Maybe you're using some kind of rolling-release distro?
For average desktop users, things like third-party repos, PPAs, etc. induce failure modes in apt-get that are quite common. One of the most common ways that users get screwed is by adding too many third-party repositories, or by stepping outside the repository and building/installing from source or installing an rpm/deb file manually. The main reason is the OS does not handle dependencies, only the package manager does. So if you step outside the package manager, you can get burnt fairly easily doing benign things. A few examples I found in under a minute...
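To give a flavour of that failure mode, a rough sketch (the package name is made up):

    # installing a downloaded package directly with dpkg bypasses apt's dependency resolution;
    # dpkg will complain about unmet dependencies and leave the package unconfigured
    sudo dpkg -i somepackage_1.2.3_amd64.deb

    # apt then refuses to do anything else until you ask it to repair the broken state
    sudo apt-get -f install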
>because.... we would have done it! That's why homebrew for Mac feels like a poor man's apt/yum system.
That's rather poor reasoning. There are several reasons why one would not want to maintain a giant repository of packages and associated dependencies. For one, at least on newer OSX installs, the presence of the Mac App Store negates any large-scale interest in such a project.
Also, the fact that tech like MacPorts, Chocolatey, NuGet, etc. exists demonstrates the proof of concept. And I'd like to stick my neck out a bit for MS tech like App-V, which IIRC is the underlying tech for MS Office 2013 and allows centralized deployment for enterprises. It's better than apt-get IMHO.
Across platforms I'd say Steam is a much better platform than apt-get/yum etc for digital distribution of software. On Windows, the only dependencies would be third-party libs like PhysX, OpenAL, etc. Windows, since XP, has a great OS-level dependency solution (unlike apt-get and the rest, which the Linux kernel/executable loader/etc do not know about explicitly) for installing multiple versions of libraries side by side and letting the developer specify, via simple application manifests, whether they want the latest version or a specific one.
Look, no offence, but by the look of the links you gave, you have no leg to stand on, and you seem to conflate package management and resolution with software distribution platforms (which have more to do with marketing than technology).
All the links you provided have one of the following traits:
1) User decided to build from source - That's outside the package management and FYI "make && make install" on Linux is exactly the same on Mac.
2) User decided to include Testing repositories on Debian which, since you don't know, it's like having a different Debian distro.
3) User decided to install broken proprietary software (in that case Oracle) that for arbitrary reasons forced the user to install x86 packages while the user's system is 64-bit.
In any case, you don't install Debian if you want bleeding edge packages - wrong tool for the job. Choose Fedora, Ubuntu or OpenSUSE.
If you work in the CS industry and you say that no one wants something like Linux package management, well you simply... don't work in the industry.
AppStore, Steam and whatnot have absolutely nothing to do with package management and dep resolution.
Also try something, make a script or program that installs and deploys automatically a development environment with either of the aforementioned technologies. Oh you can't? Yeah that's because you conflate technology issues with marketing issues.
>1) User decided to build from source - That's outside the package management and FYI "make && make install" on Linux is exactly the same on Mac.
So Linux as an OS does not do dependency management - you have to stay within a third party tool/package manager. Thanks for proving my point.
>2) User decided to include Testing repositories on Debian which, since you don't know, it's like having a different Debian distro.
Adding more repos leads to breakage - Exactly what I said. You are adding nothing new.
>3) User decided to install broken proprietary software (in that case Oracle) that for arbitrary reasons forced the user to install x86 packages while the user's system is 64-bit.
It would have made no difference if it wasn't proprietary. Also I have seen such breaks multiple times with F/OSS software.
>If you work in the CS industry and you say that no one wants something like Linux package management, well you simply... don't work in the industry.
Stop putting words in other people's mouths. That's dishonest.
>AppStore, Steam and whatnot have absolutely nothing to do with package management and dep resolution.
Good job replying to something I never said. My mention of Steam and the App Store was specifically for digital distribution. Heck, I even said so myself...
Let me quote myself.. (rather sad that I have to do this)
"The Apple App Store distributes _self contained_ binaries+dependencies (excluding OS libs) which is one sane way of doing things."
"Across platforms I'd say Steam is a much more better platform than apt-get/yum etc for digital distribution of software."
> So Linux as an OS does not do dependency management - you have to stay within a third party tool/package manager. Thanks for proving my point.
Debian is an OS. Linux is a kernel. Within a Debian (or Ubuntu) install, dpkg/apt-get are first-party tools, supplied as part of the operating system.
So Debian, as an OS, does do dependency management. I'd include a mirror "thanks for proving" comment here but I don't want to lower myself to your level of supercilious condescension.
The multiple-upstream-repository problems you describe are very real, though. The nix approach avoids a lot of the upgrade-related problems but still relies on a curated channel of compatible versions; that said, given the capacity to have multiple versions of things installed, relying on multiple such channels will be safe for the user, and the disadvantages at that point largely accrue as extra effort required of the curator.
I don't consider the package manager tool-set to be part of the OS. Certainly it is part of a distribution, much like other tools are. There is some amount of pedantry involved when it comes to differentiating the common use of Linux as an OS, Linux as a kernel, Linux distributions, etc. I was using "Linux OS" as a catch-all for the kernel plus whatever it takes to make it boot to a console.
FYI condescension is reserved for intentionally dishonest replies - which I did detect earlier from the other poster.
I think the 'distribution' concept mostly exists because Linux is just a kernel - I consider the whole thing the OS, especially given apt-get is how I mostly get my kernels.
To my mind, Slackware is an OS that chooses not to ship something of the order of apt-get, rather than separating 'OS' from 'distribution'.
But given you're (by default) booting Debian's choice of Linux kernel, and Debian's choice of userland, all tied together by the dependency resolution of Debian's shipped package manager, which is what provided you with said kernel ... I still think calling it a third-party tool is silly.
Or at least, I see Debian as the first party rather than Linux here, and given that your way of looking at it is ... well, "not how I commonly see things regarded" ... the problem wasn't so much dishonesty as a complete failure to realise you were using a different communication protocol.
Never attribute to dishonesty that which can be adequately explained by terminology mismatches :)
You're a riot. I was not dishonest in the slightest. The part which you "detected" (more like you needed something to evade answering) as dishonest was me saying "If you work in the CS industry..." which everyone understood as "If someone works in the industry" and not you specifically!
But yeah, of course, you preferred to think of it as if I was putting words in your mouth. Pfft. Never mind that you twisted every answer I gave you with your denial and ignorance. At least everyone else sees who's dishonest and votes with their 'downvote' button.
So basically it boils down to "I am not a professional and thus I am not qualified to have strong opinions in such matters". You can close your ears and do "la la la la... I can't hear you" and be as much in denial as you want. Unfortunately it won't change the reality.
Instead of writing "This is my last reply. Goodbye.", I suggest that you avoid commenting on issues you have just a passing familiarity with and leave that to the professionals. It would save you from future embarrassments.
>In any case - apt-get, yum etc are third party tools that just copy, extract files from some server and run some pre-made configuration scripts. That's all they do.
Third party to whom, exactly? Each major distribution maintains its own repositories and packaging system.
Okay, so that's all they do... it works great. They also provide a central place for your system to download a huge variety of software that has been tested and approved for your specific OS version.
>All the hard work of figuring out dependencies, avoiding cycles, etc is done OUTSIDE the OS by the repository maintainers. You can do that on ANY OS.
You could, but they don't. The fact is that for years prior to the Apple App Store for iPhone or OSX, or the Windows Store, there existed a way to install a wide variety of Linux software automatically and easily from a central source.
>Different repositories which the user adds can and do have dependency conflicts. One package from repo-1 can require a lib with version >= 1.0 while another package from repo-2 will only work with version 0.9, etc. I have often had to add other repos to get software unavailable in current repos. I have run into these issues quite often. And I am not even getting into how brittle apt-get in general is.
Sure, that's true. Personally I've always found the software I need on the main repositories for Ubuntu or Debian, and not needed to add external sources. When there is a package I need that's not in a repository, installing a binary or compiling from source has worked fine. So I haven't experienced the problems you mention or hint at in 10-12 years of desktop and server Linux use.
> All the hard work of figuring out dependencies, avoiding cycles, etc is done OUTSIDE the OS by the repository maintainers
Your typical package manager will build a recursive and reverse dependency database, resolve version conflicts, detect cycles, and suggest alternative dependencies and recommended or optional packages.
Here's the official PKGBUILD used by Arch for ffmpeg as an example. There's not a single version qualifier in sight, and no other files are needed to build and install the package... from source.
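For anyone who hasn't seen one, here's a stripped-down sketch of the format (a made-up package, not the actual ffmpeg file):

    # minimal PKGBUILD sketch; note the dependency is named but not versioned
    pkgname=hello-demo
    pkgver=1.0
    pkgrel=1
    pkgdesc="Toy package used purely as an illustration"
    arch=('x86_64')
    license=('MIT')
    depends=('glibc')
    source=("https://example.com/$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }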
It's not exactly trivial, but FOSS OSs have been doing this for decades and have hundreds of thousands of packages. It's a vast improvement over having 28 copies of the openssl dll on your average Windows machine, all out of date.
>Your typical package manager will build a recursive and reverse dependency database, resolve version conflicts, detect cycles, and suggest alternative dependencies and recommended or optional packages.
All that is done outside the OS by third-party code. Also, you have to stay within that particular package manager to get those benefits. With different distros you have to stay within whatever package manager they ship with. There is no common "Linux" code that allows this to happen (indeed you can implement a package manager on any OS). If you go around installing rpms manually or installing/building from source, it is fairly trivial to bork your package manager's state - for example, if you manually build/install a dependency at a different version from the one a particular package in the repo expects.
But yes, in general all mainstream Linux distros do ship with many tools/libs - X.org, GTK, Gnome, whatnot. Are all of them part of the OS? I see the package manager as a third party tool much like the rest since it does not interact with the core OS in any deep sense. But maybe my point of view comes from being a Linux user since 1996 with Slackware..
LFS is not a distro. It's documentation to build your own distro.
Xorg, GTK and all the irrelevant stuff you mention have nothing to do with it.
If you remove yum/rpm or dpkg/apt from the system it simply stops working. Also, you can't replace apt with yum and vice versa, as you can with KDE vs GNOME.
Package management is a very integral part of a system; that's why we separate distros by their packaging system (apt-based, rpm-based, emerge, etc...).
You keep spouting nonsense and you keep being voted down, yet your insistence on showing your ignorance is astonishing.
I use apt to install a wide variety of software on my home system as well as a couple of servers, and I rarely have problems with dependencies. So I chose it as an example because it has worked well for me for years.
But the iOS store is a tool for discovery and management of software, whereas APT is just a tool for management. I mean, you can do "apt-cache search tetris", but you can't view screenshots and reviews of each tetris game.
There's Ubuntu Software Center, but that postdates the iOS store.
That's true, it fills the need for consumer app discovery much better. On the more technical or server-admin-oriented side, though, apt and its ilk fill a need that Apple's App Store does not try to... namely, I can go download something like memcached, PHP or Postgres and all sorts of related libraries and modules and have them installed immediately, in a version configured for my system, integrated with central configuration and init scripts, and updated and tested for security regularly.
I was working on setting up my Mac for Python/Flask development last week, and installing MySQL resembled installing it on Windows... time-consuming, and it required reading documentation and searching just to get it to work. You can ameliorate that by installing MacPorts, Homebrew or Fink, but those truly are third party and ridden with dependency problems. They feel like a second-rate apt.
You have a description, which is normally more informative than what passes as a description at least on the Android market, and you can actually search by description. Reviews are not that important when the software distributors are honest.
Apt-cache (in fact, aptitude, but there's no difference) is my primary tool for searching for new software. When it fails, I go to Google, but get much less useful results.
I didn't do this often enough to automate or organize the collection of programs to install. When I did store programs on CDs (for instance, Netscape 4.x so I wouldn't have to download the 30mb package) the version would be outdated by the time I'd need it again.
"Linux is a usability nightmare the second you get out of the fake easy-to-use illusion layer they added with the new GUIs. Unless you’re a coder, don’t even think about it."
Do you think he really believes this? Maybe I'm too far in a bubble or something, but that sounds completely dishonest.
I do not just believe it, I know it. When something does not work in Linuxland the answer is still "open the terminal, enter a bunch of cryptic commands" and/or manually edit text-based config files. I can do that easily, I grew up with DOS. We had to tweak the parameters of the memory manager and make custom boot disks with just the right set of drivers to get games working as kids. It is just that I am not willing to do that stuff anymore. This is the 21st century and my time has value.
Recent examples of my Linux "user experience":
- (Fedora) Internet went down while the package manager (OMG I hate those!) was busy updating the system. This left the package manager in a corrupted state. I could no longer use the GUI to install/update anything; it only gave me a "please fix me somehow" type error message. I could easily google the solution. Four cryptic terminal commands later it was good to go again. But as I said: unacceptable for a modern desktop OS.
- (Ubuntu) Getting my wifi to work. Do I even have to start? I ended up having to download a kernel module .tarball from somewhere, had to integrate that into the kernel, and edit some config files by hand. Again, no problem, Google is magic. However it wasted one hour of my life: unacceptable. On Windows getting the wifi to work required.. clicking on setup.exe.
The Linux community doesn't help either. Some of them want to be inclusive, but the majority of the community think that Linux should be an exclusive club where only those with a high level of knowledge should get treated with any kind of respect or help. Just go Google any Linux topic, and I assure you the first page will contain several forum posts where the only response is "RTFM," "if you cannot figure this out you shouldn't be touching it (audio issues)" (paraphrasing), or even "maybe you'd be better off using Windows teehee."
I think Ubuntu has done much to make Linux more consumer friendly and I congratulate them for that. But realistically Android or Chrome OS (Chromebook, etc) are the only two Linux OSs I'd likely use on a daily basis, because in both cases they abstract you away from the ugly Linux innards better than any standard distro is able to do.
The fact that Linux depends so heavily on the terminal/console is just pathetic. It reminds me of Windows 9x. Which is kind of depressing when you consider that the Linux kernel is the most advanced kernel currently in existence, but they get weighed down by the UNIX legacy stuff, users, and GNU side of things.
That's why Android is so wonderful. Instead of it being Linux/GNU, it is Linux/Android. When we get a full desktop OS without the GNU gunk and the associated bad-attitude ("terminal is the bestest, I am so 1337!!!") I'd happily switch to it. Hell I'd switch to a Linux/Android OS if Google and friends made one for the PC desktop (with real windows/multi-tasking).
OS X is also a great UNIX OS because there is rarely any need to fall back to the terminal. You can do 90%+ of the things you'd ever need to do on OS X via UIs and tools. Plus the community on OS X is better than the toxic Linux community, even if they're a little defensive when people criticize Apple/OS X.
Not disputing the problem you discuss exists - it certainly does - but be careful of assuming it's the majority of the community. It often happens that the bulk of the visible noise is made by a minority of obnoxious wankers.
That's a very interesting take on the situation. I forget that android is linux, when not browsing files. Come to think of it, my android phone has been more useful and friendly than any desktop distro. Regarding legacy problems, I recently read that even the directory system is a legacy issue.
> The Linux community doesn't help either. Some of them want to be inclusive, but the majority of the community think that Linux should be an exclusive club where only those with a high level of knowledge should get treated with any kind of respect or help. Just go Google any Linux topic, and I assure you the first page will contain several forum posts where the only response is "RTFM," "if you cannot figure this out you shouldn't be touching it (audio issues)" (paraphrasing), or even "maybe you'd be better off using Windows teehee."
Just in case anyone thinks you're making this up... For several years, the top result for a search on "ubuntu add user" was this page:
It's an interesting page, because it recommends the use of the low-level "useradd" instead of the easier "adduser". It mentions "adduser" but only way down the page.
Now the Debian/Ubuntu man page for "useradd" says "useradd is a low level utility for adding users. On Debian, administrators should usually use adduser(8) instead."
Of course I made the outrageous mistake of thinking that a "how to geek" site might have accurate information, so I followed along and used "useradd".
When that command left me in a confusing state, I posted a comment on the how to geek page suggesting that they should recommend "adduser" instead of "useradd" just like the Ubuntu man page recommends.
(Click the "show archived reader comments" link on the page to see the discussion.)
This did not go over well. I got these replies:
> Michael Greary you are a D&%K H#$&D. You’re the 1 that messed it up no1 else, and anyone else for that matter. Hopefully you’ve learnt by now that when blindly running linux commands you should read the whole article to make sure it’s what you want first. Blaming it on other ppl isn’t gonna help either. You should’ve said it by admitting your mistake, asking for a way to rectify it. (sigh) I bet you have no friends
And the somewhat more polite:
> Michael Geary, you shouldn’t blame the tutorial because you didn’t read it properly.
And this was after following a "how to" that recommended using the wrong command.
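For anyone wondering what the practical difference is, roughly (the username is just an example):

    # low-level tool: creates the account entry only; no home directory, no password,
    # no skeleton files unless you pass the right flags yourself
    sudo useradd alice

    # Debian/Ubuntu helper: prompts for a password, creates /home/alice from /etc/skel,
    # and picks sensible defaults for the group and shell
    sudo adduser alice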
To counter your anecdotes, here are some more anecdotes:
- Windows XP - Power went out at a critical stage of updates. Completely hosed the system, and wasted hours of my life having to rebuild (including rebooting multiple times to get updates installed again after the recovery).
- Windows 7 - Cable modem stopped giving my router IP addresses. Called ISP tech support... they had me manually plug the cable in and type cryptic commands at a DOS prompt. Is that unacceptable too?
Also, I'm really baffled as to how the network going down could kill a Fedora update. The downloads happen before any significant filesystem modification in every version that I have ever used (certainly any recent one). I guess maybe some third party package does something weird during a POSTINSTALL script? I dunno.
The problem here is that the poster above is criticizing the experience under Linux when you install any of these OSes on non-vetted hardware, whereas 99% of Windows computers come preinstalled with all the drivers properly configured.
If your Windows system breaks, and that means anything beyond "oh I'm missing a driver", you have no recourse; you have to reformat. You end up googling and doing the exact same procedures you would under Linux, except that because the OS internals are obscured and proprietary, you don't have a whole-stack, detailed analysis of whatever problem you have - you have a black box, and you have to hope the tools MS gives you are good enough.
It doesn't change that for 99% of users, regardless of OS, when you encounter a problem you don't try to fix it yourself. Most people aren't going to even try, they are either going to take the system to an expert who would use a terminal no problem or just get another one.
I experienced this myself recently. The Netflix metro app decided that something was wrong with the DRM path on my two-day old install of Windows 8.1. After much googling, I ended up having to re-install the OS. It was a very similar experience to when something breaks on a Linux install, except that I had less ability to go digging into the internals (not that that would be helpful for someone who doesn't know a lot about computers).
I think the primary difference with Linux is that developers are fairly content mucking about with things, so breakage that would be a showstopper for someone who is not intimately familiar with computers goes somewhat unnoticed. It's hard to step back and say "what would I do if I didn't know about <x,y,z>?" in daily life.
I've been intentionally writing KCM control modules for various traditional command line functionality recently, with the intent to get them in some early series 5 KDE release. Just proof of concepts, but so far I've gotten working ones for xrandr, wlrandr, and saned (ie, network scanners). They aren't usually very complicated, just a bunch of widgets. I intend to finish them with a qml rewrite, most of the work was before you could embed qml in widgets.
The point is, yeah, you shouldn't have to go to a terminal, but that requires the development of GUIs to overlay the complexity. I use my relatives as test beds for whatever breaks that requires a "control panel entry" to fix.
> When something does not work in Linuxland the answer is still "open the terminal, enter a bunch of cryptic commands" and/or manually edit text-based config files.
That's true in OSX too, though. After upgrading to 10.9 ("Mavericks") I had a 'systemstats' process taking up 100% CPU and >3GB of RAM at random points. Did some googling and reading through forum threads, and the solution was to muck with some BerkeleyDB files in a terminal. Is OSX an acceptable modern desktop OS?
Windows in my experience has been even worse, in that for OSX there at least usually exists some arcane way of fixing your problem, while in Windows it's just "wipe and reinstall", because the problems are totally opaque. (I don't have great memories of the registry editor, either.)
I haven't had wifi issues with Ubuntu for over 5 years. What computer did you have these problems with, where you needed to install a new kernel module?
You hate package managers? I don't have experience with Fedora's and I don't know why it entered an inconsistent state, but OSX/iOS use a package manager. Android uses a package manager. What problem could you possibly have with package managers? Do you enjoy manually updating software one app at a time?
The answer in "Linuxland" may still be to open the terminal and enter commands but the times a regular user needs to do that are becoming few and very far between. I have the majority of our (albeit small) office running Ubuntu 13.10 and I've yet to encounter anything like what you've described.
I think he is not angry at the package manager, he is angry that the package manager failed, and he had to use the terminal to fix it.
And it IS not acceptable. I use Fedora myself, and every time this happened I got REALLY, REALLY upset. If I am using a GUI package manager, why the hell should I need to use the console to fix it when it breaks? Why is there no option in the GUI to do that?
Also I had a machine where I had to install a kernel module to get sound working... Linux still has some serious issues with some hardware (mostly audio and radio-based communications; I've heard printers are problematic too, but it's been years since I owned a printer).
>I haven't had wifi issues with Ubuntu for over 5 years.
No surprise there. Some wifi hardware is supported out of the box and will just work. That is not the problem. The pain starts when something does not just work.
>What computer did you have these problems with, where you needed to install a new kernel module?
Given that I am a major nerd I have multiple ones and admin even more ones. I think that particular episode happened with an Asus Eee Box B202 and an older version of Ubuntu.
> but OSX/iOS use a package manager.
I do not use anything Apple but if I recall correctly OS X uses self-contained .app bundles i.e. the apps usually include all their dependencies except those present on a particular OS X version by default. It is the "dependency management" done by Linux package managers which I hate. Namely that every app becomes a tangled web of dependencies with unlimited potential for pain. Apps should be self-contained, the OS should be a stable, well-defined platform to run them. I.e. "requires Window XP or newer" or "requires Mac OS X 10.6 or newer" is okay, "requires [long list of packages]" is not. That model will cause problems sooner or later. "I updated X and suddenly Y no longer works" etc.
As a programmer I find the concept of simply automatically updating libfoo 2.1.x to libfoo 2.1.y because "the developers say it does not break compatibility" scary. Software is written by people, developers accidentally break compatibility all the time, or write software which depends on buggy behavior, behavior which later gets fixed by other developers, thus breaking said software. Thus all common Linux package managers are themselves broken by design. They are only acceptable if you accept that occasionally some things will just stop working after a system update.
>Do you enjoy manually updating software one app at a time?
I do not often have to update any software and when I do it I can usually try the new version without uninstalling the old one easily and I know that its installation will not affect any other apps on my system. Yes, I enjoy that.
>the times a regular user needs to do that are becoming few and very far between.
No disagreement there, but as I said, personally I am not willing to put up with an OS where this happens, no matter how rarely it happens - unless I have to.
Main issue seems to me that Linux distros use the same mechanisms for installing/updating userland apps/code as for installing/updating system-level libraries and kernel stuff. It's never quite felt right to me, but has allowed and seemed to encourage the weird dependency trees that end up wanting replace entire GUI toolkits because a desktop app was compiled against a certain version of libfoo, which then blithely tries to update 900 packages on your system when you just wanted one little app.
This fear/anger/resistance towards static bundling in the linux world is the main culprit, it seems, and I don't think anything short of a cultural revolution in the linux world will change that. The last cultural revolution seemed to be a large mass exodus of desktop linux folks migrating to osx a decade ago.
I understand your concern for "dependency hell", but to be frank, your arguments come off uninformed to me.
> Thus all common Linux package managers are themselves broken by design.
I can't really see how you'd debate your way out of that. If you mean package management is inherently broken because different software may depend on different versions of libraries then that sounds like an issue you'd have with the software developer. I know that in itself could be a gripe with Linux software, but show me one regular computer user who has ever experienced broken package dependencies. The vast, VAST majority of regular Ubuntu user software is installed through the Software Center and the only times I've ever gotten broken dependencies were when I was specifically trying to install an older version of something.
> I do not use anything Apple but if I recall correctly OS X uses self-contained .app bundles i.e. the apps usually include all their dependencies except those present on a particular OS X version by default.
OSX has .dmg files for installing applications. Debian's .deb packages are the exact same thing. Bundling all dependencies can be done in Linux the same way -- again the app developer can statically link to libraries.
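To make the static-linking point concrete, a trivial sketch (the file names are made up):

    # build a private copy of the library and link it straight into the binary,
    # so the shipped app has no runtime dependency on the system's libfoo
    gcc -c foo.c -o foo.o
    ar rcs libfoo.a foo.o
    gcc main.c libfoo.a -o myapp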
The fact of the matter is that Linux is NOT a usability nightmare and it's dishonest to say that. Experiencing some arcane error with an old version of Ubuntu on a six year old computer doesn't afford anyone the right to claim that the modern linux OS is a usability nightmare.
I'd be careful accusing people of being uninformed and dishonest, especially when you make errors in your own post.
.dmg files are disk images. In the case you are discussing, the disk image contains an .app package that is copied to the user's (or, more generally, the system's) Applications folder. A closer equivalent of the .deb file would be the .pkg file, which works in a similar manner to what you describe.
As to whether 'Linux' is a usability nightmare or not I'd argue is purely subjective. It is disingenuous to suggest that there isn't a steeper learning curve, however slight and for whatever reason it exists.
We're talking about dishonesty in saying that "Linux is a usability nightmare" from an article about improving Windows' usability. I'm not talking about learning curve here! He cited 2 examples regarding Linux's usability that I don't think are relevant at all.
>The fact of the matter is that Linux is NOT a usability nightmare and it's dishonest to say that.
This is not a fact at all. It's a matter of opinion. The author of TFA is equally wrong in asserting that it is in the way that he did, but dishonest? No more than the dishonesty of claiming that it factually isn't.
>As a programmer I find the concept of simply automatically updating libfoo 2.1.x to libfoo 2.1.y because "the developers say it does not break compatibility" scary. Software is written by people, developers accidentally break compatibility all the time, or write software which depends on buggy behavior, behavior which later gets fixed by other developers, thus breaking said software. Thus all common Linux package managers are themselves broken by design. They are only acceptable if you accept that occasionally some things will just stop working after a system update.
In my experience, Google is only mostly magic. This is probably the fault of the Linux community. Too often the process for how to fix something in Linux is listed on a website as a few cryptic commands, then "and you've got it from there, right?" when you're completely not done yet and there are still several steps to go that they don't explain. It's infuriating.
Not to mention that lots of the top google answers on things date back to forums posts in freaking 2005 or so with answers that are worse than useless.
"When something does not work in Linuxland the answer is still "open the terminal, enter a bunch of cryptic commands" and/or manually edit text-based config files. I can do that easily, I grew up with DOS."
I can install CentOS on a laptop, add repositories and install multimedia codecs/applications, change the language and set periodic updates all without any use of the terminal at all.
I suspect that part of the reason that terminal commands are given on support fora is simply that it is easier to provide the text command than it is to describe a series of steps in words that you have to execute on a GUI.
I also suspect that another part of the reason is that people who really know about Unix like OSes tend to be from a sys admin or programmer background and are more comfortable with the terminal.
Agreed that this is going to have to change soon!
"- (Ubuntu) Getting my wifi to work. Do I even have to start? I ended up having to download a kernel module .tarball from somewhere, had to integrate that into the kernel, and edit some config files by hand. Again, no problem, Google is magic. However it wasted one hour of my life: unacceptable. On Windows getting the wifi to work required.. clicking on setup.exe."
Well of course Windows is easier to find drivers for, it is the operating system for which the cheap commodity hardware you can buy is designed. Having said that, I'm surprised that Ubuntu's restricted driver identification was unable to find the WiFi kernel module you needed. What was the module you installed and which version of Ubuntu did you install it on? My reason for asking is that having 'manually' installed a kernel module, you may have to repeat the exercise for each kernel update.
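(If it does come to manually installing a module again, one way to avoid redoing it on every kernel update is DKMS; a rough sketch, assuming the vendor's source tree ships a dkms.conf and has been unpacked to /usr/src/somevendor-wifi-1.0 - the module name is made up:)

    # register, build and install the out-of-tree module through DKMS, so it is
    # rebuilt automatically whenever a new kernel is installed
    sudo dkms add -m somevendor-wifi -v 1.0
    sudo dkms build -m somevendor-wifi -v 1.0
    sudo dkms install -m somevendor-wifi -v 1.0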
Have you ever done any troubleshooting in Windows? I spent over a year as tech support for Dell supporting Windows, and I honestly can't say that Windows is any easier to fix. Most of the time (outside of PEBKAC errors), I ended up a) entering obscure commands in cmd, b) making an obscure edit to the registry, or c) reinstalling Windows. No operating system is easy for end users to fix when something isn't working right, and at least in Linux you very rarely have to reinstall.
Your example of double clicking on setup.exe to fix the wifi is pretty wrong. First of all, you have to find the .exe file, which is pretty hard for Joe Average, and you have to make sure it's the right one, etc.
I would say the "unless you're a coder" part is wrong, but otherwise it has the ring of truth based on my experience.
Configuration when not using UIs is hell because each application has its own format, settings file location (if it uses a settings file at all -- you might be required to run a curses console program) and generally its own idiosyncrasies. For a single logical concept (e.g. sound, ditto for display), a plethora of layers are involved and they all clash heavily, both in terms of settings and abstractions.
I'm sure someone still hates Unity, but I use it every day. It's a conservative design that evolves slowly. I wish the Ubuntu Touch UI had already been integrated with mainstream desktop Ubuntu so I could get a PC tablet as my day to day coding machine.
It is very difficult to evolve a non-touch UI into a back-compatible touch UI. But the conservative approach Ubuntu Touch is taking might turn out to work better than what Microsoft did to Windows.
I have a Windows 8 tablet/laptop hybrid as my hobby coding machine (Asus T100), and I have to say that the bad rep Windows 8 got does not match my experience. I've found the touch UI easy to use, and I even configured it to launch my IDE in full screen mode, so I can go back and forwards between that and my browser without seeing a desktop in between. I've also noticed that I'm touching the screen quite often even while coding, and that's with an IDE completely not optimized for touch (WebStorm). The whole thing has convinced me that touch is an added value for any use case, even for programming.
As far as I can tell, the "bad rap" is mostly from people who haven't used it at all, or haven't used it for very long. (Typically XP users who are extremely averse to change.) The Windows 8 users I know personally like it a lot, though that doesn't stop them from complaining about the parts they don't like....
I've been using linux for literally 20 years now. The first ten I did all my own sysadminning, the next ten I used it exclusively in organizations with professional administrators to complain to, and now in my current job I'm back to having to do things myself, and I can assure you it's just as much of a pain to do things in 2014 as it was back in 2002.
They gave me a new monitor at work and it took me literally six hours to figure out the right magic to get the resolution in a mode that wouldn't cause instant headaches, and I still can't run KDE applications without the X server segfaulting.
What distribution are you using? And what graphics card? I remember some issues with the first version of KDE 4, but since then I have run it on Intel, AMD and NVIDIA cards, with open and proprietary drivers, without big problems (AMD sometimes has glitches...).
Ubuntu (13.04 and 13.10) and a recent nvidia, don't have the model number here.
My best guess is that it's some kind of font issue, plotting in R also segfaults the server but I can see the non-text part of the plot come up. I try to leave well enough alone because (a) I get paid to do work not to troubleshoot linux and (b) there's no guarantee that anything I do will result in a usable system.
Yep, that's pretty much my experience with Linux. By the way, I really like Linux (I used to use Ubuntu before they added Unity); however, the constant fiddling with drivers and all kinds of stuff is just annoying.
The biggest improvement here is subtle, yet important. He made the desktop experience separate from the touch experience, and the touch experience separate from the desktop. Similar to what one would think of with Ubuntu on the desktop versus the touch version that would go on phones. Not providing the unoptimized experience would go a long way.
And the comment on Windows RT being renamed Windows Lite... good idea.
He says "Apple showed the world how awesome a central managed application store is". As a developer I feel like the worst thing about iOS and Android are their app stores.
Could Bitcoin, Bittorrent, Napster, Chatroulette, or any new and fun applications or websites exist if they were forced to appear in an app store? No way! Do we want some giant company deciding what new technology developers can come up with in the future? I hope we don't do that to ourselves.
App stores are garbage.
I think he was advocating optional app stores, not a sole-provider app store like on iOS. Even though OS X and Android have app stores, they also let you install any “unapproved” apps you want and open whatever web apps you want, after changing a one-time setting. The proposed Windows 8.2 would probably be the same.
As long as app stores remain optional, I think they are a good thing. It is a lot simpler for users to browse one place for apps and install all apps in the same simple way. An app store is a reassuringly simple way to install software for those who aren’t very confident with modifying their computers.
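(On OS X of that era, that one-time switch is Gatekeeper; a quick sketch of how it looks from the terminal - the GUI path is System Preferences > Security & Privacy:)

    # report whether Gatekeeper assessments are currently enabled
    spctl --status

    # corresponds to "Allow apps downloaded from: Anywhere"
    sudo spctl --master-disable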
This entirely misses one key part of how Windows has evolved: Microsoft is a collection of many competing individuals, not a single entity with unified goals.
Windows development is shaped by a small team of developers and designers being influenced by a large battery of project managers and stakeholders. Their business is compromise, not uniformity or singular vision.
For instance, Windows has to support both touch and keyboard/mouse because the tablet PC group wants equal marketing and development focus so as to not devalue the tablet brand and ecosystem. Windows doesn't have a coherent design language because a single coherent design wouldn't help everyone, or any one group, particularly well. The Windows 8.1 upgrade required a Store login because the app store was lacking traction.
While Jay's UI ideas have great intentions, they're not solving any of the immediate problems Microsoft's groups have.
The first section seems like it's mostly just him repeating himself quite a lot without really saying anything new at all. Very difficult to get through it.
He also warns his readers: "Yes there will be a lot of spelling mistakes, unfortunately you will have to leave with it. The goal was to put out the information of my research, not to write a perfectly checked novel."
That's like giving the recycling plant a bag mixed with recyclables and rotten garbage, and saying "you sort it out, it's your problem now" when it really isn't their job.
So it's hard to take this article seriously. Someone who takes his own idea seriously wouldn't settle for presenting it so poorly. I'm disappoint.
Years ago, I was drawn by Apple's attention to detail, UI, and UX, so I migrated from Windows to Mac OS X. Last week, I was drawn by Microsoft's attention to detail, UI, and UX, and have already started migrating from OS X 10.9 to Windows 8.1. (I'm typing this in IE11 right now on my rMBP actually, and I think I prefer it to Chrome.) The ideas and presentations in his article don't settle well with me. They feel like a huge step backwards in usability and design. I admit Windows 8.1 isn't perfect, but I don't think his changes fix any real problems that I have with it, and they certainly create some new ones.
That said, I am excited to see how MS is going to make the desktop environment more unified and consistent in the future, which I'm sure they're working on.
I couldn't have said it better. I think Win 8.1 is quite nice actually, I know it's cool to hate on it, but frankly speaking it's pretty good (IMHO of course).
I really don't get all the fuss about the start menu; honestly, I'm never in it even on Windows 7 - I press the Win key -> search -> run, and guess what, I do exactly the same in Windows 8, except now the start menu is a bit more useful. I mean, sure, I don't use all the Metro apps (some I do, for example News), but the rest acts as a very nice shortcut "wall", and it helps keep my desktop pristine. Even if Microsoft decides to ditch the whole Metro concept, I hope they keep the Start screen the same as it is. I know I'm perhaps in the minority here, but the previous start menu was completely useless to me, honestly.
I've often thought about getting a Windows 8 tablet (probably a Surface 2) and I'll admit there are some pretty good things about the Surface - being able to snap two apps side by side is a big plus over iOS and Android, and (horror of horrors) being able to run Flash in IE 11 is a plus. Not to mention the kickstand, and not having to prop my iPad up against a bookcase. I guess I think that Windows 8 is a pretty good choice for tablets - maybe it's just me, who doesn't use a lot of apps, but I don't think there's a lot missing on the tablet side. Or take the Surface Pro 2 - you have Metro IE 11 which runs ABP, which is in and of itself a bazillion times better than mobile Safari.
But yes, it's a royal clusterfuck on the desktop, and I don't want Windows 8, much less a touch screen laptop that I'll have to keep wiping down to keep fingerprints off of.
My opinion is that he's fixing the tablet experience for power Windows users. I want my desktop because I know it works to get my tasks done, but I also want a tablet experience - in a single, unified device. I believe Microsoft needs this, as they will only lose the battle of negative reviews if they keep ignoring a very vocal user base, and they'll lose market share if they can't improve over iOS and Android.
"Metro" apps on the desktop suck at the moment in my opinion. That doesn't mean they should be thrown out, they should be fixed.
Kudos to Jay Machalani for a great demonstration of the concept, but I have a couple of big problems with it, or in general any design that splits the current unified tablet/PC design we see in Windows 8 into two strictly separated modes (of course, having two totally separate SKUs or OSs can be thought of as an even more extreme version of this):
1. It breaks useful cross-mode windowing scenarios.
Having a strict modal separation between the two windowing models breaks a bunch of IMO not terribly uncommon scenarios where mixing them in some way is useful.
* Can't have a different mode on each monitor - if one is touch/a tablet and the other isn't, for example.
* Can't snap an app beside the desktop and have the system automatically manage the use of the remaining space. There are actually some desktop apps that have hacked a custom implementation of this - OneNote for example - so I don't think it's a contrived scenario.
* Sometimes I like to use the availability of desktop and immersive windows to express a "work versus play" (or, more precisely, "continuing part of ongoing persistent task versus transient digression") distinction. That way I can use the "transient/play" apps without worrying about them cluttering the taskbar, slowing down the PC, interfering with what I'm doing, etc. Modal separation breaks that.
* Some apps just work better with one or the other windowing model - e.g., part of what I really like about Tweetium is how clicking on a link will automatically shrink the Tweetium window down to a narrow strip and open a browser window in the remaining space. That of course depends on the immersive windowing APIs. Other apps such as calculators or "sticky notes" work better in desktop windows. So it would be nice to have apps like this automatically open in the right kind of window, or at least allow the user to set this per-app, rather than requiring an obnoxious global mode switch that potentially messes with everything else.
2. Desktop apps inherently clash with the immersive app model and UX.
While running immersive apps in desktop windows seems like it could probably work well, I'm really leery of the reverse. There are a few potential problems with running desktop apps in immersive windows that I see:
* Desktop apps can open multiple windows and draw outside of their window. Some apps (ab)use this quite a bit for dialogs, palette windows, etc. This could get pretty awkward to map to immersive windowing - do we put each in its own "strip"? Do we make each "strip" a little virtual desktop where the app can put additional windows? Do we try some mix of the two approaches, and if so, how does the OS decide which is which?
* A goal of the immersive app model was that the user wouldn't have to worry about closing apps or which apps were/weren't running - indeed that the concept of "running apps" wouldn't exist for most users. The system UI was designed around that - no "X" to close, no taskbar to show what's open. But desktop apps can do anything in the background, so closing them and knowing what's running is important. And when you switch modes back into the desktop, which apps should even be kept open? Since "metro" mode blurs the lines between running and suspended apps, it's not clear.
And if you can't run desktop apps in immersive windows, and in general have a 1:1 mapping of running system/app state between modes, you can't really have a pure modal separation - either the mode switch is made destructive and essentially becomes a reboot, or you're left with a bunch of hidden stuff "running in the other mode" which doesn't make any sense. Once you really start thinking through the ramifications of alternative models the current desktop-as-an-app model starts to seem pretty elegant in some ways IMO.
Great points you're bringing! To be honest I'm always up for more options. Why not offer separation or the current solution for all users, although I think that it would be hell to manage two very different ideas.
As for the different modes per screen, why not? It would be nice to have a clean shortcut to say switch environment, but just for the active screen. Basically it would transfer the apps on that current screen from Desktop mode to Metro mode, so it is a workable solution.
As for the app immersion problem, that one is tricky since... well, Metro apps are not specifically designed for a mouse and keyboard - they work, but it's not the best. Neither is the Metro environment. So my point of view is: you're currently using an app not optimized for mouse and keyboard in an environment not optimized for mouse and keyboard, so by bringing Metro apps to a windowed environment, you're slashing half the problem.
The core idea is: you have all your apps, files and stuff on your computer; now, how do you want to interact with them? It may not be the most graceful idea, but it's better than trying to get the charms bar with your mouse or being stuck trying to resize windows with a touchscreen on the Desktop.
>Why not offer separation or the current solution for all users, although I think that it would be hell to manage two very different ideas.
Don't be an apologist. This is Microsoft. They have the resources out their ass to commit to this and it still wouldn't even come close to registering the tiniest blip in their budgets.
MS needs to git good and do what the post suggests, in that they need to separate touch and desktop interfaces for Win9, or else Windows is dead to me and many power users alike.
> Desktop apps inherently clash with the immersive app model and UX.
Desktop apps will almost certainly be subpar in Metro mode. The key isn't to define optimal behavior but present a usable default mode. Presumably, in time, those desktop apps would release updated versions that would adjust their UIs when used in Modern mode (or a competitor would).
For apps with multiple windows, dialogs, etc., all windows belonging to the same desktop app should be drawn in the same "strip". It's basically the MDI interface present in, e.g., older versions of Excel and Powerpoint. Not ideal, but usable.
The issue of desktop apps running in the background is trickier. Two possibilities: (1) figure out a way to suspend desktop apps, which might be technically difficult, or (2) recreate the taskbar in Metro so the user knows what's running and what to close. (Jay's demo includes some possible ways of doing this -- I would also add some way of conveying to the user what Windows will automatically suspend and what it won't.)
Microsoft actually employs a very Ubuntu-esque icon as their sharing symbol, baked into Windows 8. This guy isn't responsible for it, didn't try to invent anything, re-invent it, or borrow it, or co-opt it.
That's the very icon that Microsoft actually uses within current versions of Windows.
> Yes there will be a lot of spelling mistakes, unfortunately you will have to leave with it. The goal was to put out the information of my research, not to write a perfectly checked novel.
Ahhhhhh I was wondering why the sudden traffic! Finally found it after tracking a Twitter referring link with a lot of traffic, to a YC Twitter account mentioning I was in the major news! Thanks all of you!
Despite the Linux comment in the author's post, can someone please please please take some variation of the "desktop" part of this design and turn it into a Linux Mint theme?
I took a look at this. Excellent work on his part!
I still use Windows, and Metro is just forced full-screen apps that don't even work with the classic desktop. One thing Apple got right was not mixing iOS with Mac OS (or whatever it is you guys call it) - Microsoft once again tried to stuff the kitchen sink into one OS.
I'd actually like to - corny as it sounds - literally just have a smart Windows logo floating around. It pops up when I hit the Windows button on the keyboard, and it always finds an empty blank spot on the desktop to appear in, ready to go.
When I click on it, this floating Windows logo has literally four flyouts from it that contain a way to access apps, documents, email, and - yuck - social feeds (maybe it's configurable so one is for settings). It might work well for touch screens too.
Of course these flyouts would be configurable to be a palette of your choice.
Of course I'm not a designer and have terrible ideas - so into the ether with this rant and Prepare for downvote!